* [PATCH v6 00/17] drm/xe: Multi Queue feature support
@ 2025-12-11 1:02 Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 01/17] drm/xe/multi_queue: Add multi_queue_enable_mask to gt information Niranjana Vishwanathapura
` (20 more replies)
0 siblings, 21 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Multi Queue is a new mode of execution supported by the compute and
blitter copy command streamers (CCS and BCS, respectively). It is an
enhancement of the existing hardware architecture and leverages the
same submission model. It enables support for efficient, parallel
execution of multiple queues within a single context.
Add support for multi-queue feature and enable it on xe3p_xpc.
The associated IGT patch series is,
https://patchwork.freedesktop.org/series/156866/
The Compute UMD usecase is,
https://github.com/intel/compute-runtime/pull/862
Changes in v2:
- Rename multi_queue_enable_mask to multi_queue_engine_class_mask.
- Remove group->lock, fix function semantics, add additional comments,
add asserts, update uapi kernel-doc, move group->list_lock to right
patch, add XE_BO_FLAG_GGTT_INVALIDATE for cgp bo.
- Fix G2H_LEN_DW_MULTI_QUEUE_CONTEXT value.
- Add fs_reclaim lock dependency, add bspec ref, other minor cleanups.
Changes in v3:
- Add patch "drm/xe/xe3p: Disable GuC Dynamic ICS for Xe3p"
- Add patch "drm/xe/multi_queue: Reset GT upon CGP_SYNC failure"
- Code refactoring and kernel-doc update
- Assert CGP_SYNC message length is valid
- For CGP bo use PINNED_LATE_RESTORE/USER_VRAM/GGTT_INVALIDATE flags
- Properly handle cleanup of multi-queue group
Changes in v4:
- uapi change due to rebase
- Remove unwanted patch 'drm/xe/xe3p: Disable GuC Dynamic ICS for Xe3p'
- Use xe_guc_ct_wake_waiters(), remove vf recovery support
- Fix IS_ENABLED(CONFIG_LOCKDEP) check
- Revert stop/restart of all submission queues in the group in TDR
- Assert !multi_queue where applicable
- Fix CGP_CONTEXT_ERROR message
Changes in v5:
- Add FIXME in TDR (Matt Brost)
- Keep multi-queue disabled (until FIXMEs are addressed) by removing
patch 'drm/xe/multi_queue: Enable multi_queue on xe3p_xpc'
- Ban group for guc reported errors
Changes in v6:
- In TDR, trigger cleanup of group only for multi-queue case
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Acked-by: Mateusz Hoppe <mateusz.hoppe@intel.com>
Niranjana Vishwanathapura (17):
drm/xe/multi_queue: Add multi_queue_enable_mask to gt information
drm/xe/multi_queue: Add user interface for multi queue support
drm/xe/multi_queue: Add GuC interface for multi queue support
drm/xe/multi_queue: Add multi queue priority property
drm/xe/multi_queue: Handle invalid exec queue property setting
drm/xe/multi_queue: Add exec_queue set_property ioctl support
drm/xe/multi_queue: Add support for multi queue dynamic priority
change
drm/xe/multi_queue: Add multi queue information to guc_info dump
drm/xe/multi_queue: Handle tearing down of a multi queue
drm/xe/multi_queue: Set QUEUE_DRAIN_MODE for Multi Queue batches
drm/xe/multi_queue: Handle CGP context error
drm/xe/multi_queue: Reset GT upon CGP_SYNC failure
drm/xe/multi_queue: Teardown group upon job timeout
drm/xe/multi_queue: Tracepoint support
drm/xe/multi_queue: Support active group after primary is destroyed
drm/xe/doc: Add documentation for Multi Queue Group
drm/xe/doc: Add documentation for Multi Queue Group GuC interface
Documentation/gpu/xe/xe_exec_queue.rst | 14 +
drivers/gpu/drm/xe/abi/guc_actions_abi.h | 4 +
.../gpu/drm/xe/instructions/xe_gpu_commands.h | 1 +
drivers/gpu/drm/xe/xe_debugfs.c | 2 +
drivers/gpu/drm/xe/xe_device.c | 9 +-
drivers/gpu/drm/xe/xe_exec_queue.c | 431 ++++++++++++-
drivers/gpu/drm/xe/xe_exec_queue.h | 51 ++
drivers/gpu/drm/xe/xe_exec_queue_types.h | 59 ++
drivers/gpu/drm/xe/xe_gt_types.h | 5 +
drivers/gpu/drm/xe/xe_guc_ct.c | 8 +
drivers/gpu/drm/xe/xe_guc_fwif.h | 3 +
drivers/gpu/drm/xe/xe_guc_submit.c | 584 ++++++++++++++++--
drivers/gpu/drm/xe/xe_guc_submit.h | 3 +
drivers/gpu/drm/xe/xe_guc_submit_types.h | 13 +
drivers/gpu/drm/xe/xe_lrc.c | 29 +
drivers/gpu/drm/xe/xe_lrc.h | 3 +
drivers/gpu/drm/xe/xe_pci.c | 1 +
drivers/gpu/drm/xe/xe_pci_types.h | 1 +
drivers/gpu/drm/xe/xe_ring_ops.c | 64 +-
drivers/gpu/drm/xe/xe_trace.h | 46 ++
include/uapi/drm/xe_drm.h | 44 ++
21 files changed, 1292 insertions(+), 83 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v6 01/17] drm/xe/multi_queue: Add multi_queue_enable_mask to gt information
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 02/17] drm/xe/multi_queue: Add user interface for multi queue support Niranjana Vishwanathapura
` (19 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Add a multi_queue_enable_mask field to the gt information structure,
which is a bitmask of all engine classes with multi queue support
enabled.
v2: Rename multi_queue_enable_mask to multi_queue_engine_class_mask
(Matt Brost)
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_debugfs.c | 2 ++
drivers/gpu/drm/xe/xe_gt_types.h | 5 +++++
drivers/gpu/drm/xe/xe_pci.c | 1 +
drivers/gpu/drm/xe/xe_pci_types.h | 1 +
4 files changed, 9 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
index 0f8a96a05a8e..4fa423a82bea 100644
--- a/drivers/gpu/drm/xe/xe_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_debugfs.c
@@ -93,6 +93,8 @@ static int info(struct seq_file *m, void *data)
xe_force_wake_ref(gt_to_fw(gt), XE_FW_GT));
drm_printf(&p, "gt%d engine_mask 0x%llx\n", id,
gt->info.engine_mask);
+ drm_printf(&p, "gt%d multi_queue_engine_class_mask 0x%x\n", id,
+ gt->info.multi_queue_engine_class_mask);
}
return 0;
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index 0a728180b6fe..5318d92fd473 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -140,6 +140,11 @@ struct xe_gt {
u64 engine_mask;
/** @info.gmdid: raw GMD_ID value from hardware */
u32 gmdid;
+ /**
+ * @info.multi_queue_engine_class_mask: Bitmask of engine classes with
+ * multi queue support enabled.
+ */
+ u16 multi_queue_engine_class_mask;
/** @info.id: Unique ID of this GT within the PCI Device */
u8 id;
/** @info.has_indirect_ring_state: GT has indirect ring state support */
diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
index c8188a5b0f76..16b3eb247439 100644
--- a/drivers/gpu/drm/xe/xe_pci.c
+++ b/drivers/gpu/drm/xe/xe_pci.c
@@ -764,6 +764,7 @@ static struct xe_gt *alloc_primary_gt(struct xe_tile *tile,
gt->info.type = XE_GT_TYPE_MAIN;
gt->info.id = tile->id * xe->info.max_gt_per_tile;
gt->info.has_indirect_ring_state = graphics_desc->has_indirect_ring_state;
+ gt->info.multi_queue_engine_class_mask = graphics_desc->multi_queue_engine_class_mask;
gt->info.engine_mask = graphics_desc->hw_engine_mask;
/*
diff --git a/drivers/gpu/drm/xe/xe_pci_types.h b/drivers/gpu/drm/xe/xe_pci_types.h
index f19f35359696..b06c108e25e6 100644
--- a/drivers/gpu/drm/xe/xe_pci_types.h
+++ b/drivers/gpu/drm/xe/xe_pci_types.h
@@ -60,6 +60,7 @@ struct xe_device_desc {
struct xe_graphics_desc {
u64 hw_engine_mask; /* hardware engines provided by graphics IP */
+ u16 multi_queue_engine_class_mask; /* bitmask of engine classes which support multi queue */
u8 has_asid:1;
u8 has_atomic_enable_pte_bit:1;
--
2.43.0
* [PATCH v6 02/17] drm/xe/multi_queue: Add user interface for multi queue support
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 01/17] drm/xe/multi_queue: Add multi_queue_enable_mask to gt information Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 03/17] drm/xe/multi_queue: Add GuC " Niranjana Vishwanathapura
` (18 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Multi Queue is a new mode of execution supported by the compute and
blitter copy command streamers (CCS and BCS, respectively). It is an
enhancement of the existing hardware architecture and leverages the
same submission model. It enables support for efficient, parallel
execution of multiple queues within a single context. All the queues
of a group must use the same address space (VM).
The new DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP exec queue
property supports creating a multi queue group and adding queues to
a queue group. All queues of a multi queue group share the same
context.
An exec queue create ioctl call with the above property set to the value
DRM_XE_MULTI_GROUP_CREATE will create a new multi queue group with the
queue being created as the primary queue (aka q0) of the group. To add
secondary queues to the group, create them with the above property set
to the id of the primary queue. The properties of the primary queue
(like priority, timeslice) apply to the whole group, so these properties
can't be set on the secondary queues of a group.
Once destroyed, the secondary queues of a multi queue group can't be
replaced. However, queues can be dynamically added to the group up to a
total of 64 queues per group. Once the primary queue is destroyed,
secondary queues can't be added to the queue group.
v2: Remove group->lock, fix xe_exec_queue_group_add()/delete()
function semantics, add additional comments, remove unused
group->list_lock, add XE_BO_FLAG_GGTT_INVALIDATE for cgp bo,
Assert LRC is valid, update uapi kernel doc.
(Matt Brost)
v3: Use XE_BO_FLAG_PINNED_LATE_RESTORE/USER_VRAM/GGTT_INVALIDATE
flags for cgp bo (Matt)
v4: Ensure queue is not a vm_bind queue
uapi change due to rebase
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 197 ++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_exec_queue.h | 47 ++++++
drivers/gpu/drm/xe/xe_exec_queue_types.h | 26 +++
include/uapi/drm/xe_drm.h | 10 ++
4 files changed, 278 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index 02b75652d497..f76ec277c5af 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -13,6 +13,7 @@
#include <drm/drm_syncobj.h>
#include <uapi/drm/xe_drm.h>
+#include "xe_bo.h"
#include "xe_dep_scheduler.h"
#include "xe_device.h"
#include "xe_gt.h"
@@ -63,6 +64,33 @@ enum xe_exec_queue_sched_prop {
static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue *q,
u64 extensions, int ext_number);
+static void xe_exec_queue_group_cleanup(struct xe_exec_queue *q)
+{
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+ struct xe_lrc *lrc;
+ unsigned long idx;
+
+ if (xe_exec_queue_is_multi_queue_secondary(q)) {
+ /*
+ * Put pairs with get from xe_exec_queue_lookup() call
+ * in xe_exec_queue_group_validate().
+ */
+ xe_exec_queue_put(xe_exec_queue_multi_queue_primary(q));
+ return;
+ }
+
+ if (!group)
+ return;
+
+ /* Primary queue cleanup */
+ xa_for_each(&group->xa, idx, lrc)
+ xe_lrc_put(lrc);
+
+ xa_destroy(&group->xa);
+ xe_bo_unpin_map_no_vm(group->cgp_bo);
+ kfree(group);
+}
+
static void __xe_exec_queue_free(struct xe_exec_queue *q)
{
int i;
@@ -73,6 +101,10 @@ static void __xe_exec_queue_free(struct xe_exec_queue *q)
if (xe_exec_queue_uses_pxp(q))
xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
+
+ if (xe_exec_queue_is_multi_queue(q))
+ xe_exec_queue_group_cleanup(q);
+
if (q->vm)
xe_vm_put(q->vm);
@@ -588,6 +620,150 @@ static int exec_queue_set_hang_replay_state(struct xe_device *xe,
return 0;
}
+static int xe_exec_queue_group_init(struct xe_device *xe, struct xe_exec_queue *q)
+{
+ struct xe_tile *tile = gt_to_tile(q->gt);
+ struct xe_exec_queue_group *group;
+ struct xe_bo *bo;
+
+ group = kzalloc(sizeof(*group), GFP_KERNEL);
+ if (!group)
+ return -ENOMEM;
+
+ bo = xe_bo_create_pin_map_novm(xe, tile, SZ_4K, ttm_bo_type_kernel,
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+ XE_BO_FLAG_PINNED_LATE_RESTORE |
+ XE_BO_FLAG_FORCE_USER_VRAM |
+ XE_BO_FLAG_GGTT_INVALIDATE |
+ XE_BO_FLAG_GGTT, false);
+ if (IS_ERR(bo)) {
+ drm_err(&xe->drm, "CGP bo allocation for queue group failed: %ld\n",
+ PTR_ERR(bo));
+ kfree(group);
+ return PTR_ERR(bo);
+ }
+
+ xe_map_memset(xe, &bo->vmap, 0, 0, SZ_4K);
+
+ group->primary = q;
+ group->cgp_bo = bo;
+ xa_init_flags(&group->xa, XA_FLAGS_ALLOC1);
+ q->multi_queue.group = group;
+
+ return 0;
+}
+
+static inline bool xe_exec_queue_supports_multi_queue(struct xe_exec_queue *q)
+{
+ return q->gt->info.multi_queue_engine_class_mask & BIT(q->class);
+}
+
+static int xe_exec_queue_group_validate(struct xe_device *xe, struct xe_exec_queue *q,
+ u32 primary_id)
+{
+ struct xe_exec_queue_group *group;
+ struct xe_exec_queue *primary;
+ int ret;
+
+ /*
+ * Get from below xe_exec_queue_lookup() pairs with put
+ * in xe_exec_queue_group_cleanup().
+ */
+ primary = xe_exec_queue_lookup(q->vm->xef, primary_id);
+ if (XE_IOCTL_DBG(xe, !primary))
+ return -ENOENT;
+
+ if (XE_IOCTL_DBG(xe, !xe_exec_queue_is_multi_queue_primary(primary)) ||
+ XE_IOCTL_DBG(xe, q->vm != primary->vm) ||
+ XE_IOCTL_DBG(xe, q->logical_mask != primary->logical_mask)) {
+ ret = -EINVAL;
+ goto put_primary;
+ }
+
+ group = primary->multi_queue.group;
+ q->multi_queue.valid = true;
+ q->multi_queue.group = group;
+
+ return 0;
+put_primary:
+ xe_exec_queue_put(primary);
+ return ret;
+}
+
+#define XE_MAX_GROUP_SIZE 64
+static int xe_exec_queue_group_add(struct xe_device *xe, struct xe_exec_queue *q)
+{
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+ u32 pos;
+ int err;
+
+ xe_assert(xe, xe_exec_queue_is_multi_queue_secondary(q));
+
+ /* Primary queue holds a reference to LRCs of all secondary queues */
+ err = xa_alloc(&group->xa, &pos, xe_lrc_get(q->lrc[0]),
+ XA_LIMIT(1, XE_MAX_GROUP_SIZE - 1), GFP_KERNEL);
+ if (XE_IOCTL_DBG(xe, err)) {
+ xe_lrc_put(q->lrc[0]);
+
+ /* It is invalid if queue group limit is exceeded */
+ if (err == -EBUSY)
+ err = -EINVAL;
+
+ return err;
+ }
+
+ q->multi_queue.pos = pos;
+
+ return 0;
+}
+
+static void xe_exec_queue_group_delete(struct xe_device *xe, struct xe_exec_queue *q)
+{
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+ struct xe_lrc *lrc;
+
+ xe_assert(xe, xe_exec_queue_is_multi_queue_secondary(q));
+
+ lrc = xa_erase(&group->xa, q->multi_queue.pos);
+ xe_assert(xe, lrc);
+ xe_lrc_put(lrc);
+}
+
+static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue *q,
+ u64 value)
+{
+ if (XE_IOCTL_DBG(xe, !xe_exec_queue_supports_multi_queue(q)))
+ return -ENODEV;
+
+ if (XE_IOCTL_DBG(xe, !xe_device_uc_enabled(xe)))
+ return -EOPNOTSUPP;
+
+ if (XE_IOCTL_DBG(xe, !q->vm->xef))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, xe_exec_queue_is_parallel(q)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, xe_exec_queue_is_multi_queue(q)))
+ return -EINVAL;
+
+ if (value & DRM_XE_MULTI_GROUP_CREATE) {
+ if (XE_IOCTL_DBG(xe, value & ~DRM_XE_MULTI_GROUP_CREATE))
+ return -EINVAL;
+
+ q->multi_queue.valid = true;
+ q->multi_queue.is_primary = true;
+ q->multi_queue.pos = 0;
+ return 0;
+ }
+
+ /* While adding secondary queues, the upper 32 bits must be 0 */
+ if (XE_IOCTL_DBG(xe, value & (~0ull << 32)))
+ return -EINVAL;
+
+ return xe_exec_queue_group_validate(xe, q, value);
+}
+
typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
struct xe_exec_queue *q,
u64 value);
@@ -597,6 +773,7 @@ static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
[DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE] = exec_queue_set_timeslice,
[DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE] = exec_queue_set_pxp_type,
[DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE] = exec_queue_set_hang_replay_state,
+ [DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP] = exec_queue_set_multi_group,
};
static int exec_queue_user_ext_set_property(struct xe_device *xe,
@@ -618,7 +795,8 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
XE_IOCTL_DBG(xe, ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY &&
ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE &&
ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE &&
- ext.property != DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE))
+ ext.property != DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE &&
+ ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP))
return -EINVAL;
idx = array_index_nospec(ext.property, ARRAY_SIZE(exec_queue_set_property_funcs));
@@ -667,6 +845,12 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue
return exec_queue_user_extensions(xe, q, ext.next_extension,
++ext_number);
+ if (xe_exec_queue_is_multi_queue_primary(q)) {
+ err = xe_exec_queue_group_init(xe, q);
+ if (XE_IOCTL_DBG(xe, err))
+ return err;
+ }
+
return 0;
}
@@ -821,12 +1005,18 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
if (IS_ERR(q))
return PTR_ERR(q);
+ if (xe_exec_queue_is_multi_queue_secondary(q)) {
+ err = xe_exec_queue_group_add(xe, q);
+ if (XE_IOCTL_DBG(xe, err))
+ goto put_exec_queue;
+ }
+
if (xe_vm_in_preempt_fence_mode(vm)) {
q->lr.context = dma_fence_context_alloc(1);
err = xe_vm_add_compute_exec_queue(vm, q);
if (XE_IOCTL_DBG(xe, err))
- goto put_exec_queue;
+ goto delete_queue_group;
}
if (q->vm && q->hwe->hw_engine_group) {
@@ -849,6 +1039,9 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
kill_exec_queue:
xe_exec_queue_kill(q);
+delete_queue_group:
+ if (xe_exec_queue_is_multi_queue_secondary(q))
+ xe_exec_queue_group_delete(xe, q);
put_exec_queue:
xe_exec_queue_put(q);
return err;
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
index fda4d4f9bda8..e6daa40003f2 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue.h
@@ -66,6 +66,53 @@ static inline bool xe_exec_queue_uses_pxp(struct xe_exec_queue *q)
return q->pxp.type;
}
+/**
+ * xe_exec_queue_is_multi_queue() - Whether an exec_queue is part of a queue group.
+ * @q: The exec_queue
+ *
+ * Return: True if the exec_queue is part of a queue group, false otherwise.
+ */
+static inline bool xe_exec_queue_is_multi_queue(struct xe_exec_queue *q)
+{
+ return q->multi_queue.valid;
+}
+
+/**
+ * xe_exec_queue_is_multi_queue_primary() - Whether an exec_queue is primary queue
+ * of a multi queue group.
+ * @q: The exec_queue
+ *
+ * Return: True if @q is primary queue of a queue group, false otherwise.
+ */
+static inline bool xe_exec_queue_is_multi_queue_primary(struct xe_exec_queue *q)
+{
+ return q->multi_queue.is_primary;
+}
+
+/**
+ * xe_exec_queue_is_multi_queue_secondary() - Whether an exec_queue is secondary queue
+ * of a multi queue group.
+ * @q: The exec_queue
+ *
+ * Return: True if @q is secondary queue of a queue group, false otherwise.
+ */
+static inline bool xe_exec_queue_is_multi_queue_secondary(struct xe_exec_queue *q)
+{
+ return xe_exec_queue_is_multi_queue(q) && !xe_exec_queue_is_multi_queue_primary(q);
+}
+
+/**
+ * xe_exec_queue_multi_queue_primary() - Get multi queue group's primary queue
+ * @q: The exec_queue
+ *
+ * If @q belongs to a multi queue group, then the primary queue of the group will
+ * be returned. Otherwise, @q will be returned.
+ */
+static inline struct xe_exec_queue *xe_exec_queue_multi_queue_primary(struct xe_exec_queue *q)
+{
+ return xe_exec_queue_is_multi_queue(q) ? q->multi_queue.group->primary : q;
+}
+
bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
bool xe_exec_queue_is_idle(struct xe_exec_queue *q);
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 3ba10632dcd6..29feafb42e0a 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -32,6 +32,20 @@ enum xe_exec_queue_priority {
XE_EXEC_QUEUE_PRIORITY_COUNT
};
+/**
+ * struct xe_exec_queue_group - Execution multi queue group
+ *
+ * Contains multi queue group information.
+ */
+struct xe_exec_queue_group {
+ /** @primary: Primary queue of this group */
+ struct xe_exec_queue *primary;
+ /** @cgp_bo: BO for the Context Group Page */
+ struct xe_bo *cgp_bo;
+ /** @xa: xarray to store LRCs */
+ struct xarray xa;
+};
+
/**
* struct xe_exec_queue - Execution queue
*
@@ -111,6 +125,18 @@ struct xe_exec_queue {
struct xe_guc_exec_queue *guc;
};
+ /** @multi_queue: Multi queue information */
+ struct {
+ /** @multi_queue.group: Queue group information */
+ struct xe_exec_queue_group *group;
+ /** @multi_queue.pos: Position of queue within the multi-queue group */
+ u8 pos;
+ /** @multi_queue.valid: Queue belongs to a multi queue group */
+ u8 valid:1;
+ /** @multi_queue.is_primary: Is primary queue (Q0) of the group */
+ u8 is_primary:1;
+ } multi_queue;
+
/** @sched_props: scheduling properties */
struct {
/** @sched_props.timeslice_us: timeslice period in micro-seconds */
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 876a076fa6c0..19a8ae856a17 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1272,6 +1272,14 @@ struct drm_xe_vm_bind {
* Given that going into a power-saving state kills PXP HWDRM sessions,
* runtime PM will be blocked while queues of this type are alive.
* All PXP queues will be killed if a PXP invalidation event occurs.
+ * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP - Create a multi-queue group
+ * or add secondary queues to a multi-queue group.
+ * If the extension's 'value' field has %DRM_XE_MULTI_GROUP_CREATE flag set,
+ * then a new multi-queue group is created with this queue as the primary queue
+ * (Q0). Otherwise, the queue gets added to the multi-queue group whose primary
+ * queue's exec_queue_id is specified in the lower 32 bits of the 'value' field.
+ * All other bits of the extension's 'value' field must be set to 0 when
+ * creating a group or adding secondary queues to it.
*
* The example below shows how to use @drm_xe_exec_queue_create to create
* a simple exec_queue (no parallel submission) of class
@@ -1313,6 +1321,8 @@ struct drm_xe_exec_queue_create {
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE 2
#define DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE 3
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP 4
+#define DRM_XE_MULTI_GROUP_CREATE (1ull << 63)
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
--
2.43.0
* [PATCH v6 03/17] drm/xe/multi_queue: Add GuC interface for multi queue support
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 01/17] drm/xe/multi_queue: Add multi_queue_enable_mask to gt information Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 02/17] drm/xe/multi_queue: Add user interface for multi queue support Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 04/17] drm/xe/multi_queue: Add multi queue priority property Niranjana Vishwanathapura
` (17 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Implement the GuC commands and responses along with the Context
Group Page (CGP) interface for multi queue support.
Ensure that only the primary queue (q0) of a multi queue group
communicates with the GuC. The secondary queues of the group only
need to maintain an LRCA and interface with the drm scheduler.
Use the primary queue's submit_wq for all secondary queues of a multi
queue group. This serialization avoids any locking around CGP
synchronization with the GuC.
v2: Fix G2H_LEN_DW_MULTI_QUEUE_CONTEXT value, add more comments
(Matt Brost)
v3: Minor code refactor, use xe_gt_assert
v4: Use xe_guc_ct_wake_waiters(), remove vf recovery support
(Matt Brost)
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/abi/guc_actions_abi.h | 3 +
drivers/gpu/drm/xe/xe_exec_queue_types.h | 2 +
drivers/gpu/drm/xe/xe_guc_ct.c | 4 +
drivers/gpu/drm/xe/xe_guc_fwif.h | 3 +
drivers/gpu/drm/xe/xe_guc_submit.c | 278 +++++++++++++++++++++--
drivers/gpu/drm/xe/xe_guc_submit.h | 1 +
6 files changed, 269 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/xe/abi/guc_actions_abi.h b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
index 47756e4674a1..3e9fbed9cda6 100644
--- a/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+++ b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
@@ -139,6 +139,9 @@ enum xe_guc_action {
XE_GUC_ACTION_DEREGISTER_G2G = 0x4508,
XE_GUC_ACTION_DEREGISTER_CONTEXT_DONE = 0x4600,
XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC = 0x4601,
+ XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_QUEUE = 0x4602,
+ XE_GUC_ACTION_MULTI_QUEUE_CONTEXT_CGP_SYNC = 0x4603,
+ XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CONTEXT_CGP_SYNC_DONE = 0x4604,
XE_GUC_ACTION_CLIENT_SOFT_RESET = 0x5507,
XE_GUC_ACTION_SET_ENG_UTIL_BUFF = 0x550A,
XE_GUC_ACTION_SET_DEVICE_ENGINE_ACTIVITY_BUFFER = 0x550C,
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 29feafb42e0a..06fb518b8533 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -44,6 +44,8 @@ struct xe_exec_queue_group {
struct xe_bo *cgp_bo;
/** @xa: xarray to store LRCs */
struct xarray xa;
+ /** @sync_pending: CGP_SYNC_DONE g2h response pending */
+ bool sync_pending;
};
/**
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 648f0f523abb..4d5b4ed357cc 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -1401,6 +1401,7 @@ static int parse_g2h_event(struct xe_guc_ct *ct, u32 *msg, u32 len)
lockdep_assert_held(&ct->lock);
switch (action) {
+ case XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CONTEXT_CGP_SYNC_DONE:
case XE_GUC_ACTION_SCHED_CONTEXT_MODE_DONE:
case XE_GUC_ACTION_DEREGISTER_CONTEXT_DONE:
case XE_GUC_ACTION_SCHED_ENGINE_MODE_DONE:
@@ -1614,6 +1615,9 @@ static int process_g2h_msg(struct xe_guc_ct *ct, u32 *msg, u32 len)
ret = xe_guc_g2g_test_notification(guc, payload, adj_len);
break;
#endif
+ case XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CONTEXT_CGP_SYNC_DONE:
+ ret = xe_guc_exec_queue_cgp_sync_done_handler(guc, payload, adj_len);
+ break;
default:
xe_gt_err(gt, "unexpected G2H action 0x%04x\n", action);
}
diff --git a/drivers/gpu/drm/xe/xe_guc_fwif.h b/drivers/gpu/drm/xe/xe_guc_fwif.h
index 7d93c2749485..e27f0088f24f 100644
--- a/drivers/gpu/drm/xe/xe_guc_fwif.h
+++ b/drivers/gpu/drm/xe/xe_guc_fwif.h
@@ -16,6 +16,7 @@
#define G2H_LEN_DW_DEREGISTER_CONTEXT 3
#define G2H_LEN_DW_TLB_INVALIDATE 3
#define G2H_LEN_DW_G2G_NOTIFY_MIN 3
+#define G2H_LEN_DW_MULTI_QUEUE_CONTEXT 3
#define GUC_ID_MAX 65535
#define GUC_ID_UNKNOWN 0xffffffff
@@ -62,6 +63,8 @@ struct guc_ctxt_registration_info {
u32 wq_base_lo;
u32 wq_base_hi;
u32 wq_size;
+ u32 cgp_lo;
+ u32 cgp_hi;
u32 hwlrca_lo;
u32 hwlrca_hi;
};
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index ff6fda84bf0f..bafe42393d22 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -19,6 +19,7 @@
#include "abi/guc_klvs_abi.h"
#include "regs/xe_lrc_layout.h"
#include "xe_assert.h"
+#include "xe_bo.h"
#include "xe_devcoredump.h"
#include "xe_device.h"
#include "xe_exec_queue.h"
@@ -541,7 +542,8 @@ static void init_policies(struct xe_guc *guc, struct xe_exec_queue *q)
u32 slpc_exec_queue_freq_req = 0;
u32 preempt_timeout_us = q->sched_props.preempt_timeout_us;
- xe_gt_assert(guc_to_gt(guc), exec_queue_registered(q));
+ xe_gt_assert(guc_to_gt(guc), exec_queue_registered(q) &&
+ !xe_exec_queue_is_multi_queue_secondary(q));
if (q->flags & EXEC_QUEUE_FLAG_LOW_LATENCY)
slpc_exec_queue_freq_req |= SLPC_CTX_FREQ_REQ_IS_COMPUTE;
@@ -561,6 +563,8 @@ static void set_min_preemption_timeout(struct xe_guc *guc, struct xe_exec_queue
{
struct exec_queue_policy policy;
+ xe_assert(guc_to_xe(guc), !xe_exec_queue_is_multi_queue_secondary(q));
+
__guc_exec_queue_policy_start_klv(&policy, q->guc->id);
__guc_exec_queue_policy_add_preemption_timeout(&policy, 1);
@@ -568,6 +572,11 @@ static void set_min_preemption_timeout(struct xe_guc *guc, struct xe_exec_queue
__guc_exec_queue_policy_action_size(&policy), 0, 0);
}
+static bool vf_recovery(struct xe_guc *guc)
+{
+ return xe_gt_recovery_pending(guc_to_gt(guc));
+}
+
#define parallel_read(xe_, map_, field_) \
xe_map_rd_field(xe_, &map_, 0, struct guc_submit_parallel_scratch, \
field_)
@@ -575,6 +584,119 @@ static void set_min_preemption_timeout(struct xe_guc *guc, struct xe_exec_queue
xe_map_wr_field(xe_, &map_, 0, struct guc_submit_parallel_scratch, \
field_, val_)
+#define CGP_VERSION_MAJOR_SHIFT 8
+
+static void xe_guc_exec_queue_group_cgp_update(struct xe_device *xe,
+ struct xe_exec_queue *q)
+{
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+ u32 guc_id = group->primary->guc->id;
+
+ /* Currently implementing CGP version 1.0 */
+ xe_map_wr(xe, &group->cgp_bo->vmap, 0, u32,
+ 1 << CGP_VERSION_MAJOR_SHIFT);
+
+ xe_map_wr(xe, &group->cgp_bo->vmap,
+ (32 + q->multi_queue.pos * 2) * sizeof(u32),
+ u32, lower_32_bits(xe_lrc_descriptor(q->lrc[0])));
+
+ xe_map_wr(xe, &group->cgp_bo->vmap,
+ (33 + q->multi_queue.pos * 2) * sizeof(u32),
+ u32, guc_id);
+
+ if (q->multi_queue.pos / 32) {
+ xe_map_wr(xe, &group->cgp_bo->vmap, 17 * sizeof(u32),
+ u32, BIT(q->multi_queue.pos % 32));
+ xe_map_wr(xe, &group->cgp_bo->vmap, 16 * sizeof(u32), u32, 0);
+ } else {
+ xe_map_wr(xe, &group->cgp_bo->vmap, 16 * sizeof(u32),
+ u32, BIT(q->multi_queue.pos));
+ xe_map_wr(xe, &group->cgp_bo->vmap, 17 * sizeof(u32), u32, 0);
+ }
+}
+
+static void xe_guc_exec_queue_group_cgp_sync(struct xe_guc *guc,
+ struct xe_exec_queue *q,
+ const u32 *action, u32 len)
+{
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+ struct xe_device *xe = guc_to_xe(guc);
+ long ret;
+
+ /*
+ * As all queues of a multi queue group use single drm scheduler
+ * submit workqueue, CGP synchronization with GuC are serialized.
+ * Hence, no locking is required here.
+ * Wait for any pending CGP_SYNC_DONE response before updating the
+ * CGP page and sending CGP_SYNC message.
+ *
+ * FIXME: Support VF migration
+ */
+ ret = wait_event_timeout(guc->ct.wq,
+ !READ_ONCE(group->sync_pending) ||
+ xe_guc_read_stopped(guc), HZ);
+ if (!ret || xe_guc_read_stopped(guc)) {
+ xe_gt_warn(guc_to_gt(guc), "Wait for CGP_SYNC_DONE response failed!\n");
+ return;
+ }
+
+ xe_guc_exec_queue_group_cgp_update(xe, q);
+
+ WRITE_ONCE(group->sync_pending, true);
+ xe_guc_ct_send(&guc->ct, action, len, G2H_LEN_DW_MULTI_QUEUE_CONTEXT, 1);
+}
+
+static void __register_exec_queue_group(struct xe_guc *guc,
+ struct xe_exec_queue *q,
+ struct guc_ctxt_registration_info *info)
+{
+#define MAX_MULTI_QUEUE_REG_SIZE (8)
+ u32 action[MAX_MULTI_QUEUE_REG_SIZE];
+ int len = 0;
+
+ action[len++] = XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_QUEUE;
+ action[len++] = info->flags;
+ action[len++] = info->context_idx;
+ action[len++] = info->engine_class;
+ action[len++] = info->engine_submit_mask;
+ action[len++] = 0; /* Reserved */
+ action[len++] = info->cgp_lo;
+ action[len++] = info->cgp_hi;
+
+ xe_gt_assert(guc_to_gt(guc), len <= MAX_MULTI_QUEUE_REG_SIZE);
+#undef MAX_MULTI_QUEUE_REG_SIZE
+
+ /*
+ * The above XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_QUEUE does expect a
+ * XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CONTEXT_CGP_SYNC_DONE response
+ * from the GuC.
+ */
+ xe_guc_exec_queue_group_cgp_sync(guc, q, action, len);
+}
+
+static void xe_guc_exec_queue_group_add(struct xe_guc *guc,
+ struct xe_exec_queue *q)
+{
+#define MAX_MULTI_QUEUE_CGP_SYNC_SIZE (2)
+ u32 action[MAX_MULTI_QUEUE_CGP_SYNC_SIZE];
+ int len = 0;
+
+ xe_gt_assert(guc_to_gt(guc), xe_exec_queue_is_multi_queue_secondary(q));
+
+ action[len++] = XE_GUC_ACTION_MULTI_QUEUE_CONTEXT_CGP_SYNC;
+ action[len++] = q->multi_queue.group->primary->guc->id;
+
+ xe_gt_assert(guc_to_gt(guc), len <= MAX_MULTI_QUEUE_CGP_SYNC_SIZE);
+#undef MAX_MULTI_QUEUE_CGP_SYNC_SIZE
+
+ /*
+ * The above XE_GUC_ACTION_MULTI_QUEUE_CONTEXT_CGP_SYNC does expect a
+ * XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CONTEXT_CGP_SYNC_DONE response
+ * from the GuC.
+ */
+ xe_guc_exec_queue_group_cgp_sync(guc, q, action, len);
+}
+
static void __register_mlrc_exec_queue(struct xe_guc *guc,
struct xe_exec_queue *q,
struct guc_ctxt_registration_info *info)
@@ -670,6 +792,13 @@ static void register_exec_queue(struct xe_exec_queue *q, int ctx_type)
info.flags = CONTEXT_REGISTRATION_FLAG_KMD |
FIELD_PREP(CONTEXT_REGISTRATION_FLAG_TYPE, ctx_type);
+ if (xe_exec_queue_is_multi_queue(q)) {
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+
+ info.cgp_lo = xe_bo_ggtt_addr(group->cgp_bo);
+ info.cgp_hi = 0;
+ }
+
if (xe_exec_queue_is_parallel(q)) {
u64 ggtt_addr = xe_lrc_parallel_ggtt_addr(lrc);
struct iosys_map map = xe_lrc_parallel_map(lrc);
@@ -700,11 +829,18 @@ static void register_exec_queue(struct xe_exec_queue *q, int ctx_type)
set_exec_queue_registered(q);
trace_xe_exec_queue_register(q);
- if (xe_exec_queue_is_parallel(q))
+ if (xe_exec_queue_is_multi_queue_primary(q))
+ __register_exec_queue_group(guc, q, &info);
+ else if (xe_exec_queue_is_parallel(q))
__register_mlrc_exec_queue(guc, q, &info);
- else
+ else if (!xe_exec_queue_is_multi_queue_secondary(q))
__register_exec_queue(guc, &info);
- init_policies(guc, q);
+
+ if (!xe_exec_queue_is_multi_queue_secondary(q))
+ init_policies(guc, q);
+
+ if (xe_exec_queue_is_multi_queue_secondary(q))
+ xe_guc_exec_queue_group_add(guc, q);
}
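The branchy registration flow above reduces to a small decision table. The sketch below is a stand-alone model of it (enum and struct names are illustrative, not the driver's):

```c
#include <stdbool.h>

enum reg_path { REG_GROUP, REG_MLRC, REG_SINGLE, REG_NONE };

struct queue_kind { bool mq_primary; bool mq_secondary; bool parallel; };

/*
 * Which H2G registration register_exec_queue() issues for a queue;
 * REG_NONE means a multi queue secondary, which skips registration and
 * policies and instead joins the group via a CGP_SYNC.
 */
static enum reg_path pick_registration(struct queue_kind q)
{
	if (q.mq_primary)
		return REG_GROUP;	/* __register_exec_queue_group() */
	if (q.parallel)
		return REG_MLRC;	/* __register_mlrc_exec_queue() */
	if (!q.mq_secondary)
		return REG_SINGLE;	/* __register_exec_queue() */
	return REG_NONE;		/* xe_guc_exec_queue_group_add() */
}
```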
static u32 wq_space_until_wrap(struct xe_exec_queue *q)
@@ -712,11 +848,6 @@ static u32 wq_space_until_wrap(struct xe_exec_queue *q)
return (WQ_SIZE - q->guc->wqi_tail);
}
-static bool vf_recovery(struct xe_guc *guc)
-{
- return xe_gt_recovery_pending(guc_to_gt(guc));
-}
-
static int wq_wait_for_space(struct xe_exec_queue *q, u32 wqi_size)
{
struct xe_guc *guc = exec_queue_to_guc(q);
@@ -835,6 +966,12 @@ static void submit_exec_queue(struct xe_exec_queue *q, struct xe_sched_job *job)
if (exec_queue_suspended(q) && !xe_exec_queue_is_parallel(q))
return;
+ /*
+ * All queues in a multi-queue group will use the primary queue
+ * of the group to interface with the GuC.
+ */
+ q = xe_exec_queue_multi_queue_primary(q);
+
if (!exec_queue_enabled(q) && !exec_queue_suspended(q)) {
action[len++] = XE_GUC_ACTION_SCHED_CONTEXT_MODE_SET;
action[len++] = q->guc->id;
@@ -881,6 +1018,18 @@ guc_exec_queue_run_job(struct drm_sched_job *drm_job)
trace_xe_sched_job_run(job);
if (!killed_or_banned_or_wedged && !xe_sched_job_is_error(job)) {
+ if (xe_exec_queue_is_multi_queue_secondary(q)) {
+ struct xe_exec_queue *primary = xe_exec_queue_multi_queue_primary(q);
+
+ if (exec_queue_killed_or_banned_or_wedged(primary)) {
+ killed_or_banned_or_wedged = true;
+ goto run_job_out;
+ }
+
+ if (!exec_queue_registered(primary))
+ register_exec_queue(primary, GUC_CONTEXT_NORMAL);
+ }
+
if (!exec_queue_registered(q))
register_exec_queue(q, GUC_CONTEXT_NORMAL);
if (!job->restore_replay)
@@ -889,6 +1038,7 @@ guc_exec_queue_run_job(struct drm_sched_job *drm_job)
job->restore_replay = false;
}
+run_job_out:
/*
* We don't care about job-fence ordering in LR VMs because these fences
* are never exported; they are used solely to keep jobs on the pending
@@ -914,6 +1064,11 @@ int xe_guc_read_stopped(struct xe_guc *guc)
return atomic_read(&guc->submission_state.stopped);
}
+static void handle_multi_queue_secondary_sched_done(struct xe_guc *guc,
+ struct xe_exec_queue *q,
+ u32 runnable_state);
+static void handle_deregister_done(struct xe_guc *guc, struct xe_exec_queue *q);
+
#define MAKE_SCHED_CONTEXT_ACTION(q, enable_disable) \
u32 action[] = { \
XE_GUC_ACTION_SCHED_CONTEXT_MODE_SET, \
@@ -927,7 +1082,9 @@ static void disable_scheduling_deregister(struct xe_guc *guc,
MAKE_SCHED_CONTEXT_ACTION(q, DISABLE);
int ret;
- set_min_preemption_timeout(guc, q);
+ if (!xe_exec_queue_is_multi_queue_secondary(q))
+ set_min_preemption_timeout(guc, q);
+
smp_rmb();
ret = wait_event_timeout(guc->ct.wq,
(!exec_queue_pending_enable(q) &&
@@ -955,9 +1112,12 @@ static void disable_scheduling_deregister(struct xe_guc *guc,
* Reserve space for both G2H here as the 2nd G2H is sent from a G2H
* handler and we are not allowed to reserved G2H space in handlers.
*/
- xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
- G2H_LEN_DW_SCHED_CONTEXT_MODE_SET +
- G2H_LEN_DW_DEREGISTER_CONTEXT, 2);
+ if (xe_exec_queue_is_multi_queue_secondary(q))
+ handle_multi_queue_secondary_sched_done(guc, q, 0);
+ else
+ xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
+ G2H_LEN_DW_SCHED_CONTEXT_MODE_SET +
+ G2H_LEN_DW_DEREGISTER_CONTEXT, 2);
}
static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue *q)
@@ -1163,8 +1323,11 @@ static void enable_scheduling(struct xe_exec_queue *q)
set_exec_queue_enabled(q);
trace_xe_exec_queue_scheduling_enable(q);
- xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
- G2H_LEN_DW_SCHED_CONTEXT_MODE_SET, 1);
+ if (xe_exec_queue_is_multi_queue_secondary(q))
+ handle_multi_queue_secondary_sched_done(guc, q, 1);
+ else
+ xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
+ G2H_LEN_DW_SCHED_CONTEXT_MODE_SET, 1);
ret = wait_event_timeout(guc->ct.wq,
!exec_queue_pending_enable(q) ||
@@ -1188,14 +1351,17 @@ static void disable_scheduling(struct xe_exec_queue *q, bool immediate)
xe_gt_assert(guc_to_gt(guc), exec_queue_registered(q));
xe_gt_assert(guc_to_gt(guc), !exec_queue_pending_disable(q));
- if (immediate)
+ if (immediate && !xe_exec_queue_is_multi_queue_secondary(q))
set_min_preemption_timeout(guc, q);
clear_exec_queue_enabled(q);
set_exec_queue_pending_disable(q);
trace_xe_exec_queue_scheduling_disable(q);
- xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
- G2H_LEN_DW_SCHED_CONTEXT_MODE_SET, 1);
+ if (xe_exec_queue_is_multi_queue_secondary(q))
+ handle_multi_queue_secondary_sched_done(guc, q, 0);
+ else
+ xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
+ G2H_LEN_DW_SCHED_CONTEXT_MODE_SET, 1);
}
static void __deregister_exec_queue(struct xe_guc *guc, struct xe_exec_queue *q)
@@ -1213,8 +1379,11 @@ static void __deregister_exec_queue(struct xe_guc *guc, struct xe_exec_queue *q)
set_exec_queue_destroyed(q);
trace_xe_exec_queue_deregister(q);
- xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
- G2H_LEN_DW_DEREGISTER_CONTEXT, 1);
+ if (xe_exec_queue_is_multi_queue_secondary(q))
+ handle_deregister_done(guc, q);
+ else
+ xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
+ G2H_LEN_DW_DEREGISTER_CONTEXT, 1);
}
static enum drm_gpu_sched_stat
@@ -1657,6 +1826,7 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
{
struct xe_gpu_scheduler *sched;
struct xe_guc *guc = exec_queue_to_guc(q);
+ struct workqueue_struct *submit_wq = NULL;
struct xe_guc_exec_queue *ge;
long timeout;
int err, i;
@@ -1677,8 +1847,20 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
timeout = (q->vm && xe_vm_in_lr_mode(q->vm)) ? MAX_SCHEDULE_TIMEOUT :
msecs_to_jiffies(q->sched_props.job_timeout_ms);
+
+ /*
+ * Use the primary queue's submit_wq for all secondary queues of a
+ * multi queue group. This serialization avoids any locking around
+ * CGP synchronization with the GuC.
+ */
+ if (xe_exec_queue_is_multi_queue_secondary(q)) {
+ struct xe_exec_queue *primary = xe_exec_queue_multi_queue_primary(q);
+
+ submit_wq = primary->guc->sched.base.submit_wq;
+ }
+
err = xe_sched_init(&ge->sched, &drm_sched_ops, &xe_sched_ops,
- NULL, xe_lrc_ring_size() / MAX_JOB_SIZE_BYTES, 64,
+ submit_wq, xe_lrc_ring_size() / MAX_JOB_SIZE_BYTES, 64,
timeout, guc_to_gt(guc)->ordered_wq, NULL,
q->name, gt_to_xe(q->gt)->drm.dev);
if (err)
@@ -2463,7 +2645,11 @@ static void deregister_exec_queue(struct xe_guc *guc, struct xe_exec_queue *q)
trace_xe_exec_queue_deregister(q);
- xe_guc_ct_send_g2h_handler(&guc->ct, action, ARRAY_SIZE(action));
+ if (xe_exec_queue_is_multi_queue_secondary(q))
+ handle_deregister_done(guc, q);
+ else
+ xe_guc_ct_send_g2h_handler(&guc->ct, action,
+ ARRAY_SIZE(action));
}
static void handle_sched_done(struct xe_guc *guc, struct xe_exec_queue *q,
@@ -2513,6 +2699,16 @@ static void handle_sched_done(struct xe_guc *guc, struct xe_exec_queue *q,
}
}
+static void handle_multi_queue_secondary_sched_done(struct xe_guc *guc,
+ struct xe_exec_queue *q,
+ u32 runnable_state)
+{
+ /* Take the CT lock here as handle_sched_done() does send an H2G message */
+ mutex_lock(&guc->ct.lock);
+ handle_sched_done(guc, q, runnable_state);
+ mutex_unlock(&guc->ct.lock);
+}
+
int xe_guc_sched_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
{
struct xe_exec_queue *q;
@@ -2717,6 +2913,44 @@ int xe_guc_exec_queue_reset_failure_handler(struct xe_guc *guc, u32 *msg, u32 le
return 0;
}
+/**
+ * xe_guc_exec_queue_cgp_sync_done_handler - CGP synchronization done handler
+ * @guc: guc
+ * @msg: message indicating CGP sync done
+ * @len: length of message
+ *
+ * Set multi queue group's sync_pending flag to false and wakeup anyone waiting
+ * for CGP synchronization to complete.
+ *
+ * Return: 0 on success, -EPROTO for malformed messages.
+ */
+int xe_guc_exec_queue_cgp_sync_done_handler(struct xe_guc *guc, u32 *msg, u32 len)
+{
+ struct xe_device *xe = guc_to_xe(guc);
+ struct xe_exec_queue *q;
+ u32 guc_id = msg[0];
+
+ if (unlikely(len < 1)) {
+ drm_err(&xe->drm, "Invalid CGP_SYNC_DONE length %u", len);
+ return -EPROTO;
+ }
+
+ q = g2h_exec_queue_lookup(guc, guc_id);
+ if (unlikely(!q))
+ return -EPROTO;
+
+ if (!xe_exec_queue_is_multi_queue_primary(q)) {
+ drm_err(&xe->drm, "Unexpected CGP_SYNC_DONE response");
+ return -EPROTO;
+ }
+
+ /* Wake up the serialized CGP update wait */
+ WRITE_ONCE(q->multi_queue.group->sync_pending, false);
+ xe_guc_ct_wake_waiters(&guc->ct);
+
+ return 0;
+}
+
static void
guc_exec_queue_wq_snapshot_capture(struct xe_exec_queue *q,
struct xe_guc_submit_exec_queue_snapshot *snapshot)
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
index 100a7891b918..ad8c0e8e0415 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.h
+++ b/drivers/gpu/drm/xe/xe_guc_submit.h
@@ -36,6 +36,7 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
u32 len);
int xe_guc_exec_queue_reset_failure_handler(struct xe_guc *guc, u32 *msg, u32 len);
int xe_guc_error_capture_handler(struct xe_guc *guc, u32 *msg, u32 len);
+int xe_guc_exec_queue_cgp_sync_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
struct xe_guc_submit_exec_queue_snapshot *
xe_guc_exec_queue_snapshot_capture(struct xe_exec_queue *q);
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 04/17] drm/xe/multi_queue: Add multi queue priority property
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (2 preceding siblings ...)
2025-12-11 1:02 ` [PATCH v6 03/17] drm/xe/multi_queue: Add GuC " Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 05/17] drm/xe/multi_queue: Handle invalid exec queue property setting Niranjana Vishwanathapura
` (16 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Add support for the queues of a multi queue group to set their
priority within the group via the new
DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY property.
This is the only other property supported by the secondary queues
of a multi queue group, besides
DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP.
v2: Add kernel doc for enum xe_multi_queue_priority,
Add assert for priority values, fix includes and
declarations (Matt Brost)
v3: update uapi kernel-doc (Matt Brost)
v4: uapi change due to rebase
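For UMD reference, chaining the group and priority properties at queue creation could be sketched as below. The structs are redeclared locally as stand-ins for the uapi (field layout follows struct drm_xe_user_extension / struct drm_xe_ext_set_property; the property numbers mirror this series, everything else is illustrative):

```c
#include <stdint.h>

/* Local stand-ins for the uapi structs/defines touched by this series. */
struct xe_user_ext { uint64_t next_extension; uint32_t name; uint32_t pad; };
struct xe_ext_set_property {
	struct xe_user_ext base;
	uint32_t property;
	uint32_t pad;
	uint64_t value;
};

#define EXT_SET_PROPERTY          0
#define PROP_MULTI_GROUP          4
#define PROP_MULTI_QUEUE_PRIORITY 5
#define MULTI_QUEUE_PRIORITY_HIGH 2

/*
 * Chain a MULTI_GROUP property (joining the group of @group_id) with a
 * MULTI_QUEUE_PRIORITY property; returns the head of the chain, which
 * would go into drm_xe_exec_queue_create.extensions.
 */
static uint64_t build_chain(struct xe_ext_set_property *grp,
			    struct xe_ext_set_property *prio,
			    uint32_t group_id)
{
	prio->base = (struct xe_user_ext){ 0, EXT_SET_PROPERTY, 0 };
	prio->property = PROP_MULTI_QUEUE_PRIORITY;
	prio->value = MULTI_QUEUE_PRIORITY_HIGH;

	grp->base = (struct xe_user_ext){ (uint64_t)(uintptr_t)prio,
					  EXT_SET_PROPERTY, 0 };
	grp->property = PROP_MULTI_GROUP;
	grp->value = group_id;	/* primary queue's exec_queue_id */

	return (uint64_t)(uintptr_t)grp;
}
```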
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 17 +++++++++++++-
drivers/gpu/drm/xe/xe_exec_queue_types.h | 16 +++++++++++++
drivers/gpu/drm/xe/xe_guc_submit.c | 1 +
drivers/gpu/drm/xe/xe_lrc.c | 29 ++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_lrc.h | 3 +++
include/uapi/drm/xe_drm.h | 4 ++++
6 files changed, 69 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index f76ec277c5af..aa46d154d04a 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -180,6 +180,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
INIT_LIST_HEAD(&q->multi_gt_link);
INIT_LIST_HEAD(&q->hw_engine_group_link);
INIT_LIST_HEAD(&q->pxp.link);
+ q->multi_queue.priority = XE_MULTI_QUEUE_PRIORITY_NORMAL;
q->sched_props.timeslice_us = hwe->eclass->sched_props.timeslice_us;
q->sched_props.preempt_timeout_us =
@@ -764,6 +765,17 @@ static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue
return xe_exec_queue_group_validate(xe, q, value);
}
+static int exec_queue_set_multi_queue_priority(struct xe_device *xe, struct xe_exec_queue *q,
+ u64 value)
+{
+ if (XE_IOCTL_DBG(xe, value > XE_MULTI_QUEUE_PRIORITY_HIGH))
+ return -EINVAL;
+
+ q->multi_queue.priority = value;
+
+ return 0;
+}
+
typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
struct xe_exec_queue *q,
u64 value);
@@ -774,6 +786,8 @@ static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
[DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE] = exec_queue_set_pxp_type,
[DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE] = exec_queue_set_hang_replay_state,
[DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP] = exec_queue_set_multi_group,
+ [DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY] =
+ exec_queue_set_multi_queue_priority,
};
static int exec_queue_user_ext_set_property(struct xe_device *xe,
@@ -796,7 +810,8 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE &&
ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE &&
ext.property != DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE &&
- ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP))
+ ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP &&
+ ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY))
return -EINVAL;
idx = array_index_nospec(ext.property, ARRAY_SIZE(exec_queue_set_property_funcs));
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 06fb518b8533..46e5f4715a0d 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -32,6 +32,20 @@ enum xe_exec_queue_priority {
XE_EXEC_QUEUE_PRIORITY_COUNT
};
+/**
+ * enum xe_multi_queue_priority - Multi Queue priority values
+ *
+ * The priority values of the queues within the multi queue group.
+ */
+enum xe_multi_queue_priority {
+ /** @XE_MULTI_QUEUE_PRIORITY_LOW: Priority low */
+ XE_MULTI_QUEUE_PRIORITY_LOW = 0,
+ /** @XE_MULTI_QUEUE_PRIORITY_NORMAL: Priority normal */
+ XE_MULTI_QUEUE_PRIORITY_NORMAL,
+ /** @XE_MULTI_QUEUE_PRIORITY_HIGH: Priority high */
+ XE_MULTI_QUEUE_PRIORITY_HIGH,
+};
+
/**
* struct xe_exec_queue_group - Execution multi queue group
*
@@ -131,6 +145,8 @@ struct xe_exec_queue {
struct {
/** @multi_queue.group: Queue group information */
struct xe_exec_queue_group *group;
+ /** @multi_queue.priority: Queue priority within the multi-queue group */
+ enum xe_multi_queue_priority priority;
/** @multi_queue.pos: Position of queue within the multi-queue group */
u8 pos;
/** @multi_queue.valid: Queue belongs to a multi queue group */
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index bafe42393d22..7cca03d4296c 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -640,6 +640,7 @@ static void xe_guc_exec_queue_group_cgp_sync(struct xe_guc *guc,
return;
}
+ xe_lrc_set_multi_queue_priority(q->lrc[0], q->multi_queue.priority);
xe_guc_exec_queue_group_cgp_update(xe, q);
WRITE_ONCE(group->sync_pending, true);
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index a05060f75e7e..70eae7d03a27 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -44,6 +44,11 @@
#define LRC_INDIRECT_CTX_BO_SIZE SZ_4K
#define LRC_INDIRECT_RING_STATE_SIZE SZ_4K
+#define LRC_PRIORITY GENMASK_ULL(10, 9)
+#define LRC_PRIORITY_LOW 0
+#define LRC_PRIORITY_NORMAL 1
+#define LRC_PRIORITY_HIGH 2
+
/*
* Layout of the LRC and associated data allocated as
* lrc->bo:
@@ -1399,6 +1404,30 @@ setup_indirect_ctx(struct xe_lrc *lrc, struct xe_hw_engine *hwe)
return 0;
}
+static u8 xe_multi_queue_prio_to_lrc(struct xe_lrc *lrc, enum xe_multi_queue_priority priority)
+{
+ struct xe_device *xe = gt_to_xe(lrc->gt);
+
+ xe_assert(xe, (priority >= XE_MULTI_QUEUE_PRIORITY_LOW &&
+ priority <= XE_MULTI_QUEUE_PRIORITY_HIGH));
+
+ /* xe_multi_queue_priority is directly mapped to LRC priority values */
+ return priority;
+}
+
+/**
+ * xe_lrc_set_multi_queue_priority() - Set multi queue priority in LRC
+ * @lrc: Logical Ring Context
+ * @priority: Multi queue priority of the exec queue
+ *
+ * Convert @priority to LRC multi queue priority and update the @lrc descriptor
+ */
+void xe_lrc_set_multi_queue_priority(struct xe_lrc *lrc, enum xe_multi_queue_priority priority)
+{
+ lrc->desc &= ~LRC_PRIORITY;
+ lrc->desc |= FIELD_PREP(LRC_PRIORITY, xe_multi_queue_prio_to_lrc(lrc, priority));
+}
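A minimal user-space model of the GENMASK_ULL()/FIELD_PREP() update that xe_lrc_set_multi_queue_priority() performs on lrc->desc, for reference (the bit positions and values mirror the defines above; the helper name is illustrative):

```c
#include <stdint.h>

#define LRC_PRIORITY_MASK (3ull << 9)	/* GENMASK_ULL(10, 9) */

enum { LRC_PRIO_LOW = 0, LRC_PRIO_NORMAL = 1, LRC_PRIO_HIGH = 2 };

/* Clear bits 10:9 of @desc and insert the new priority value. */
static inline uint64_t lrc_desc_set_priority(uint64_t desc, uint64_t prio)
{
	desc &= ~LRC_PRIORITY_MASK;
	desc |= (prio << 9) & LRC_PRIORITY_MASK;
	return desc;
}
```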
+
static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
struct xe_vm *vm, void *replay_state, u32 ring_size,
u16 msix_vec,
diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
index a32472b92242..8acf85273c1a 100644
--- a/drivers/gpu/drm/xe/xe_lrc.h
+++ b/drivers/gpu/drm/xe/xe_lrc.h
@@ -13,6 +13,7 @@ struct drm_printer;
struct xe_bb;
struct xe_device;
struct xe_exec_queue;
+enum xe_multi_queue_priority;
enum xe_engine_class;
struct xe_gt;
struct xe_hw_engine;
@@ -135,6 +136,8 @@ void xe_lrc_dump_default(struct drm_printer *p,
u32 *xe_lrc_emit_hwe_state_instructions(struct xe_exec_queue *q, u32 *cs);
+void xe_lrc_set_multi_queue_priority(struct xe_lrc *lrc, enum xe_multi_queue_priority priority);
+
struct xe_lrc_snapshot *xe_lrc_snapshot_capture(struct xe_lrc *lrc);
void xe_lrc_snapshot_capture_delayed(struct xe_lrc_snapshot *snapshot);
void xe_lrc_snapshot_print(struct xe_lrc_snapshot *snapshot, struct drm_printer *p);
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 19a8ae856a17..fd79d78de2e9 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1280,6 +1280,9 @@ struct drm_xe_vm_bind {
* queue's exec_queue_id is specified in the lower 32 bits of the 'value' field.
* All the other non-relevant bits of extension's 'value' field while adding the
* primary or the secondary queues of the group must be set to 0.
+ * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY - Set the queue
+ * priority within the multi-queue group. Current valid priority values are 0–2
+ * (default is 1), with higher values indicating higher priority.
*
* The example below shows how to use @drm_xe_exec_queue_create to create
* a simple exec_queue (no parallel submission) of class
@@ -1323,6 +1326,7 @@ struct drm_xe_exec_queue_create {
#define DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE 3
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP 4
#define DRM_XE_MULTI_GROUP_CREATE (1ull << 63)
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY 5
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
--
2.43.0
* [PATCH v6 05/17] drm/xe/multi_queue: Handle invalid exec queue property setting
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (3 preceding siblings ...)
2025-12-11 1:02 ` [PATCH v6 04/17] drm/xe/multi_queue: Add multi queue priority property Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support Niranjana Vishwanathapura
` (15 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Other than MULTI_GROUP, only the MULTI_QUEUE_PRIORITY property is valid
for the secondary queues of a multi queue group, and MULTI_QUEUE_PRIORITY
itself only applies to queues of a multi queue group. Detect invalid user
queue property settings and return an error.
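The two validity rules can be modeled stand-alone as below (a sketch; the property bit numbers mirror the uapi defines in this series, the helper names are illustrative):

```c
#include <stdint.h>
#include <stdbool.h>

#define PROP_MULTI_GROUP          4
#define PROP_MULTI_QUEUE_PRIORITY 5

/* Rule 1: a secondary queue may only carry these two properties. */
static bool secondary_props_valid(uint64_t props)
{
	uint64_t allowed = (1ull << PROP_MULTI_GROUP) |
			   (1ull << PROP_MULTI_QUEUE_PRIORITY);

	return !(props & ~allowed);
}

/* Rule 2: MULTI_QUEUE_PRIORITY requires MULTI_GROUP to also be set. */
static bool final_props_valid(uint64_t props)
{
	if ((props & (1ull << PROP_MULTI_QUEUE_PRIORITY)) &&
	    !(props & (1ull << PROP_MULTI_GROUP)))
		return false;
	return true;
}
```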
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 66 ++++++++++++++++++++++++++----
1 file changed, 57 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index aa46d154d04a..d0082eb45a4a 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -62,7 +62,7 @@ enum xe_exec_queue_sched_prop {
};
static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue *q,
- u64 extensions, int ext_number);
+ u64 extensions);
static void xe_exec_queue_group_cleanup(struct xe_exec_queue *q)
{
@@ -209,7 +209,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
* may set q->usm, must come before xe_lrc_create(),
* may overwrite q->sched_props, must come before q->ops->init()
*/
- err = exec_queue_user_extensions(xe, q, extensions, 0);
+ err = exec_queue_user_extensions(xe, q, extensions);
if (err) {
__xe_exec_queue_free(q);
return ERR_PTR(err);
@@ -790,9 +790,35 @@ static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
exec_queue_set_multi_queue_priority,
};
+static int exec_queue_user_ext_check(struct xe_exec_queue *q, u64 properties)
+{
+ u64 secondary_queue_valid_props = BIT_ULL(DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP) |
+ BIT_ULL(DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY);
+
+ /*
+ * Only MULTI_QUEUE_PRIORITY property is valid for secondary queues of a
+ * multi-queue group.
+ */
+ if (xe_exec_queue_is_multi_queue_secondary(q) &&
+ properties & ~secondary_queue_valid_props)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int exec_queue_user_ext_check_final(struct xe_exec_queue *q, u64 properties)
+{
+ /* MULTI_QUEUE_PRIORITY only applies to multi-queue group queues */
+ if ((properties & BIT_ULL(DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY)) &&
+ !(properties & BIT_ULL(DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP)))
+ return -EINVAL;
+
+ return 0;
+}
+
static int exec_queue_user_ext_set_property(struct xe_device *xe,
struct xe_exec_queue *q,
- u64 extension)
+ u64 extension, u64 *properties)
{
u64 __user *address = u64_to_user_ptr(extension);
struct drm_xe_ext_set_property ext;
@@ -818,20 +844,25 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
if (!exec_queue_set_property_funcs[idx])
return -EINVAL;
+ *properties |= BIT_ULL(idx);
+ err = exec_queue_user_ext_check(q, *properties);
+ if (XE_IOCTL_DBG(xe, err))
+ return err;
+
return exec_queue_set_property_funcs[idx](xe, q, ext.value);
}
typedef int (*xe_exec_queue_user_extension_fn)(struct xe_device *xe,
struct xe_exec_queue *q,
- u64 extension);
+ u64 extension, u64 *properties);
static const xe_exec_queue_user_extension_fn exec_queue_user_extension_funcs[] = {
[DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY] = exec_queue_user_ext_set_property,
};
#define MAX_USER_EXTENSIONS 16
-static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue *q,
- u64 extensions, int ext_number)
+static int __exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue *q,
+ u64 extensions, int ext_number, u64 *properties)
{
u64 __user *address = u64_to_user_ptr(extensions);
struct drm_xe_user_extension ext;
@@ -852,13 +883,30 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue
idx = array_index_nospec(ext.name,
ARRAY_SIZE(exec_queue_user_extension_funcs));
- err = exec_queue_user_extension_funcs[idx](xe, q, extensions);
+ err = exec_queue_user_extension_funcs[idx](xe, q, extensions, properties);
if (XE_IOCTL_DBG(xe, err))
return err;
if (ext.next_extension)
- return exec_queue_user_extensions(xe, q, ext.next_extension,
- ++ext_number);
+ return __exec_queue_user_extensions(xe, q, ext.next_extension,
+ ++ext_number, properties);
+
+ return 0;
+}
+
+static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue *q,
+ u64 extensions)
+{
+ u64 properties = 0;
+ int err;
+
+ err = __exec_queue_user_extensions(xe, q, extensions, 0, &properties);
+ if (XE_IOCTL_DBG(xe, err))
+ return err;
+
+ err = exec_queue_user_ext_check_final(q, properties);
+ if (XE_IOCTL_DBG(xe, err))
+ return err;
if (xe_exec_queue_is_multi_queue_primary(q)) {
err = xe_exec_queue_group_init(xe, q);
--
2.43.0
* [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (4 preceding siblings ...)
2025-12-11 1:02 ` [PATCH v6 05/17] drm/xe/multi_queue: Handle invalid exec queue property setting Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2026-01-19 16:57 ` Thomas Hellström
2025-12-11 1:02 ` [PATCH v6 07/17] drm/xe/multi_queue: Add support for multi queue dynamic priority change Niranjana Vishwanathapura
` (14 subsequent siblings)
20 siblings, 1 reply; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Add support for the exec_queue set_property ioctl.
It is derived from the original work that is part of
https://patchwork.freedesktop.org/series/112188/
Currently, only the DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
property can be set dynamically.
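A user-space call into the new ioctl could look like the sketch below. The struct is redeclared locally to mirror struct drm_xe_exec_queue_set_property from this patch, the DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY ioctl(2) plumbing is elided, and the validation helper just mirrors the kernel-side argument checks here:

```c
#include <stdint.h>
#include <string.h>

/* Local mirror of struct drm_xe_exec_queue_set_property. */
struct xe_eq_set_property {
	uint64_t extensions;
	uint32_t exec_queue_id;
	uint32_t property;
	uint64_t value;
	uint64_t reserved[2];
};

#define PROP_MULTI_QUEUE_PRIORITY 5
#define MULTI_QUEUE_PRIORITY_HIGH 2

/*
 * Mirrors the kernel-side checks: reserved fields must be zero, only
 * the MULTI_QUEUE_PRIORITY property is accepted, value must be 0..2.
 */
static int check_args(const struct xe_eq_set_property *args)
{
	if (args->reserved[0] || args->reserved[1])
		return -22;	/* -EINVAL */
	if (args->property != PROP_MULTI_QUEUE_PRIORITY)
		return -22;
	if (args->value > MULTI_QUEUE_PRIORITY_HIGH)
		return -22;
	return 0;
}

/* Build the ioctl argument for raising @queue_id to @prio. */
static struct xe_eq_set_property mk_args(uint32_t queue_id, uint64_t prio)
{
	struct xe_eq_set_property args;

	memset(&args, 0, sizeof(args));
	args.exec_queue_id = queue_id;
	args.property = PROP_MULTI_QUEUE_PRIORITY;
	args.value = prio;
	return args;
}
```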
v2: Check for and update kernel-doc which property this ioctl
supports (Matt Brost)
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 2 ++
drivers/gpu/drm/xe/xe_exec_queue.c | 35 ++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_exec_queue.h | 2 ++
include/uapi/drm/xe_drm.h | 26 ++++++++++++++++++++++
4 files changed, 65 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 1197f914ef77..7a498c8db7b1 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -207,6 +207,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS, xe_vm_query_vmas_attrs_ioctl,
DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_EXEC_QUEUE_SET_PROPERTY, xe_exec_queue_set_property_ioctl,
+ DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index d0082eb45a4a..d738a9fea1e1 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -790,6 +790,41 @@ static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
exec_queue_set_multi_queue_priority,
};
+int xe_exec_queue_set_property_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_exec_queue_set_property *args = data;
+ struct xe_exec_queue *q;
+ int ret;
+ u32 idx;
+
+ if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, args->property !=
+ DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY))
+ return -EINVAL;
+
+ q = xe_exec_queue_lookup(xef, args->exec_queue_id);
+ if (XE_IOCTL_DBG(xe, !q))
+ return -ENOENT;
+
+ idx = array_index_nospec(args->property,
+ ARRAY_SIZE(exec_queue_set_property_funcs));
+ ret = exec_queue_set_property_funcs[idx](xe, q, args->value);
+ if (XE_IOCTL_DBG(xe, ret))
+ goto err_post_lookup;
+
+ xe_exec_queue_put(q);
+ return 0;
+
+ err_post_lookup:
+ xe_exec_queue_put(q);
+ return ret;
+}
+
static int exec_queue_user_ext_check(struct xe_exec_queue *q, u64 properties)
{
u64 secondary_queue_valid_props = BIT_ULL(DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP) |
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
index e6daa40003f2..ffcc1feb879e 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue.h
@@ -125,6 +125,8 @@ int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int xe_exec_queue_get_property_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
+int xe_exec_queue_set_property_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file);
enum xe_exec_queue_priority xe_exec_queue_device_get_max_priority(struct xe_device *xe);
void xe_exec_queue_last_fence_put(struct xe_exec_queue *e, struct xe_vm *vm);
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index fd79d78de2e9..705081bf0d81 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -106,6 +106,7 @@ extern "C" {
#define DRM_XE_OBSERVATION 0x0b
#define DRM_XE_MADVISE 0x0c
#define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x0e
/* Must be kept compact -- no holes */
@@ -123,6 +124,7 @@ extern "C" {
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
+#define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
/**
* DOC: Xe IOCTL Extensions
@@ -2315,6 +2317,30 @@ struct drm_xe_vm_query_mem_range_attr {
};
+/**
+ * struct drm_xe_exec_queue_set_property - exec queue set property
+ *
+ * Sets execution queue properties dynamically.
+ * Currently only %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
+ * property can be dynamically set.
+ */
+struct drm_xe_exec_queue_set_property {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @exec_queue_id: Exec queue ID */
+ __u32 exec_queue_id;
+
+ /** @property: property to set */
+ __u32 property;
+
+ /** @value: property value */
+ __u64 value;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
#if defined(__cplusplus)
}
#endif
--
2.43.0
* [PATCH v6 07/17] drm/xe/multi_queue: Add support for multi queue dynamic priority change
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (5 preceding siblings ...)
2025-12-11 1:02 ` [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 08/17] drm/xe/multi_queue: Add multi queue information to guc_info dump Niranjana Vishwanathapura
` (13 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Support dynamic priority change for multi queue group queues via the
exec queue set_property ioctl. Issue a CGP_SYNC command to the GuC
through the DRM scheduler message interface for the priority change
to take effect.
v2: Move is_multi_queue check to exec_queue layer and assert
is_multi_queue being set in guc submission layer (Matt Brost)
v3: Assert CGP_SYNC message length is valid (Matt Brost)
Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 11 ++++-
drivers/gpu/drm/xe/xe_exec_queue_types.h | 3 ++
drivers/gpu/drm/xe/xe_guc_submit.c | 57 ++++++++++++++++++++++--
3 files changed, 65 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index d738a9fea1e1..256e2ce1fe69 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -771,9 +771,16 @@ static int exec_queue_set_multi_queue_priority(struct xe_device *xe, struct xe_e
if (XE_IOCTL_DBG(xe, value > XE_MULTI_QUEUE_PRIORITY_HIGH))
return -EINVAL;
- q->multi_queue.priority = value;
+ /* For queue creation time (!q->xef) setting, just store the priority value */
+ if (!q->xef) {
+ q->multi_queue.priority = value;
+ return 0;
+ }
- return 0;
+ if (!xe_exec_queue_is_multi_queue(q))
+ return -EINVAL;
+
+ return q->ops->set_multi_queue_priority(q, value);
}
typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 46e5f4715a0d..1c285ac12868 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -260,6 +260,9 @@ struct xe_exec_queue_ops {
int (*set_timeslice)(struct xe_exec_queue *q, u32 timeslice_us);
/** @set_preempt_timeout: Set preemption timeout for exec queue */
int (*set_preempt_timeout)(struct xe_exec_queue *q, u32 preempt_timeout_us);
+ /** @set_multi_queue_priority: Set multi queue priority */
+ int (*set_multi_queue_priority)(struct xe_exec_queue *q,
+ enum xe_multi_queue_priority priority);
/**
* @suspend: Suspend exec queue from executing, allowed to be called
* multiple times in a row before resume with the caveat that
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 7cca03d4296c..2f467cc1929f 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1779,10 +1779,34 @@ static void __guc_exec_queue_process_msg_resume(struct xe_sched_msg *msg)
}
}
-#define CLEANUP 1 /* Non-zero values to catch uninitialized msg */
-#define SET_SCHED_PROPS 2
-#define SUSPEND 3
-#define RESUME 4
+static void __guc_exec_queue_process_msg_set_multi_queue_priority(struct xe_sched_msg *msg)
+{
+ struct xe_exec_queue *q = msg->private_data;
+
+ if (guc_exec_queue_allowed_to_change_state(q)) {
+#define MAX_MULTI_QUEUE_CGP_SYNC_SIZE (2)
+ struct xe_guc *guc = exec_queue_to_guc(q);
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+ u32 action[MAX_MULTI_QUEUE_CGP_SYNC_SIZE];
+ int len = 0;
+
+ action[len++] = XE_GUC_ACTION_MULTI_QUEUE_CONTEXT_CGP_SYNC;
+ action[len++] = group->primary->guc->id;
+
+ xe_gt_assert(guc_to_gt(guc), len <= MAX_MULTI_QUEUE_CGP_SYNC_SIZE);
+#undef MAX_MULTI_QUEUE_CGP_SYNC_SIZE
+
+ xe_guc_exec_queue_group_cgp_sync(guc, q, action, len);
+ }
+
+ kfree(msg);
+}
+
+#define CLEANUP 1 /* Non-zero values to catch uninitialized msg */
+#define SET_SCHED_PROPS 2
+#define SUSPEND 3
+#define RESUME 4
+#define SET_MULTI_QUEUE_PRIORITY 5
#define OPCODE_MASK 0xf
#define MSG_LOCKED BIT(8)
#define MSG_HEAD BIT(9)
@@ -1806,6 +1830,9 @@ static void guc_exec_queue_process_msg(struct xe_sched_msg *msg)
case RESUME:
__guc_exec_queue_process_msg_resume(msg);
break;
+ case SET_MULTI_QUEUE_PRIORITY:
+ __guc_exec_queue_process_msg_set_multi_queue_priority(msg);
+ break;
default:
XE_WARN_ON("Unknown message type");
}
@@ -2022,6 +2049,27 @@ static int guc_exec_queue_set_preempt_timeout(struct xe_exec_queue *q,
return 0;
}
+static int guc_exec_queue_set_multi_queue_priority(struct xe_exec_queue *q,
+ enum xe_multi_queue_priority priority)
+{
+ struct xe_sched_msg *msg;
+
+ xe_gt_assert(guc_to_gt(exec_queue_to_guc(q)), xe_exec_queue_is_multi_queue(q));
+
+ if (q->multi_queue.priority == priority ||
+ exec_queue_killed_or_banned_or_wedged(q))
+ return 0;
+
+ msg = kmalloc(sizeof(*msg), GFP_KERNEL);
+ if (!msg)
+ return -ENOMEM;
+
+ q->multi_queue.priority = priority;
+ guc_exec_queue_add_msg(q, msg, SET_MULTI_QUEUE_PRIORITY);
+
+ return 0;
+}
+
static int guc_exec_queue_suspend(struct xe_exec_queue *q)
{
struct xe_gpu_scheduler *sched = &q->guc->sched;
@@ -2113,6 +2161,7 @@ static const struct xe_exec_queue_ops guc_exec_queue_ops = {
.set_priority = guc_exec_queue_set_priority,
.set_timeslice = guc_exec_queue_set_timeslice,
.set_preempt_timeout = guc_exec_queue_set_preempt_timeout,
+ .set_multi_queue_priority = guc_exec_queue_set_multi_queue_priority,
.suspend = guc_exec_queue_suspend,
.suspend_wait = guc_exec_queue_suspend_wait,
.resume = guc_exec_queue_resume,
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 08/17] drm/xe/multi_queue: Add multi queue information to guc_info dump
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (6 preceding siblings ...)
2025-12-11 1:02 ` [PATCH v6 07/17] drm/xe/multi_queue: Add support for multi queue dynamic priority change Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 09/17] drm/xe/multi_queue: Handle tearing down of a multi queue Niranjana Vishwanathapura
` (12 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Dump multi queue specific information in the GuC exec queue
dump.
v2: Move multi queue related fields inside the multi_queue
sub-structure (Matt Brost)
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_guc_submit.c | 10 ++++++++++
drivers/gpu/drm/xe/xe_guc_submit_types.h | 13 +++++++++++++
2 files changed, 23 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 2f467cc1929f..d52b7b9bcedf 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -3100,6 +3100,11 @@ xe_guc_exec_queue_snapshot_capture(struct xe_exec_queue *q)
if (snapshot->parallel_execution)
guc_exec_queue_wq_snapshot_capture(q, snapshot);
+ if (xe_exec_queue_is_multi_queue(q)) {
+ snapshot->multi_queue.valid = true;
+ snapshot->multi_queue.primary = xe_exec_queue_multi_queue_primary(q)->guc->id;
+ snapshot->multi_queue.pos = q->multi_queue.pos;
+ }
spin_lock(&sched->base.job_list_lock);
snapshot->pending_list_size = list_count_nodes(&sched->base.pending_list);
snapshot->pending_list = kmalloc_array(snapshot->pending_list_size,
@@ -3182,6 +3187,11 @@ xe_guc_exec_queue_snapshot_print(struct xe_guc_submit_exec_queue_snapshot *snaps
if (snapshot->parallel_execution)
guc_exec_queue_wq_snapshot_print(snapshot, p);
+ if (snapshot->multi_queue.valid) {
+ drm_printf(p, "\tMulti queue primary GuC ID: %d\n", snapshot->multi_queue.primary);
+ drm_printf(p, "\tMulti queue position: %d\n", snapshot->multi_queue.pos);
+ }
+
for (i = 0; snapshot->pending_list && i < snapshot->pending_list_size;
i++)
drm_printf(p, "\tJob: seqno=%d, fence=%d, finished=%d\n",
diff --git a/drivers/gpu/drm/xe/xe_guc_submit_types.h b/drivers/gpu/drm/xe/xe_guc_submit_types.h
index dc7456c34583..25e29e85502c 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit_types.h
+++ b/drivers/gpu/drm/xe/xe_guc_submit_types.h
@@ -135,6 +135,19 @@ struct xe_guc_submit_exec_queue_snapshot {
u32 wq[WQ_SIZE / sizeof(u32)];
} parallel;
+ /** @multi_queue: snapshot of the multi queue information */
+ struct {
+ /**
+ * @multi_queue.primary: GuC id of the primary exec queue
+ * of the multi queue group.
+ */
+ u32 primary;
+ /** @multi_queue.pos: Position of the exec queue within the multi queue group */
+ u8 pos;
+ /** @multi_queue.valid: The exec queue is part of a multi queue group */
+ bool valid;
+ } multi_queue;
+
/** @pending_list_size: Size of the pending list snapshot array */
int pending_list_size;
/** @pending_list: snapshot of the pending list info */
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 09/17] drm/xe/multi_queue: Handle tearing down of a multi queue
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (7 preceding siblings ...)
2025-12-11 1:02 ` [PATCH v6 08/17] drm/xe/multi_queue: Add multi queue information to guc_info dump Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 10/17] drm/xe/multi_queue: Set QUEUE_DRAIN_MODE for Multi Queue batches Niranjana Vishwanathapura
` (11 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
All queues of a multi queue group use the primary queue of the group
to interface with GuC, so there is a dependency between the queues of
the group. Hence, when the primary queue of a multi queue group is
cleaned up, also trigger a cleanup of the secondary queues. During
cleanup, stop and restart submission for all queues of the multi
queue group to avoid any submission happening in parallel while a
queue is being cleaned up.
v2: Initialize group->list_lock, add fs_reclaim dependency, remove
unwanted secondary queues cleanup (Matt Brost)
v3: Properly handle cleanup of multi-queue group (Matt Brost)
v4: Fix IS_ENABLED(CONFIG_LOCKDEP) check (Matt Brost)
Revert stopping/restarting of submissions on queues of the
group in TDR as it is not needed.
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 10 +++
drivers/gpu/drm/xe/xe_exec_queue_types.h | 6 ++
drivers/gpu/drm/xe/xe_guc_submit.c | 86 ++++++++++++++++++------
3 files changed, 82 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index 256e2ce1fe69..d337b7bc2b80 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -87,6 +87,7 @@ static void xe_exec_queue_group_cleanup(struct xe_exec_queue *q)
xe_lrc_put(lrc);
xa_destroy(&group->xa);
+ mutex_destroy(&group->list_lock);
xe_bo_unpin_map_no_vm(group->cgp_bo);
kfree(group);
}
@@ -648,9 +649,18 @@ static int xe_exec_queue_group_init(struct xe_device *xe, struct xe_exec_queue *
group->primary = q;
group->cgp_bo = bo;
+ INIT_LIST_HEAD(&group->list);
xa_init_flags(&group->xa, XA_FLAGS_ALLOC1);
+ mutex_init(&group->list_lock);
q->multi_queue.group = group;
+ /* group->list_lock is used in submission backend */
+ if (IS_ENABLED(CONFIG_LOCKDEP)) {
+ fs_reclaim_acquire(GFP_KERNEL);
+ might_lock(&group->list_lock);
+ fs_reclaim_release(GFP_KERNEL);
+ }
+
return 0;
}
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 1c285ac12868..8a954ee62505 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -58,6 +58,10 @@ struct xe_exec_queue_group {
struct xe_bo *cgp_bo;
/** @xa: xarray to store LRCs */
struct xarray xa;
+ /** @list: List of all secondary queues in the group */
+ struct list_head list;
+ /** @list_lock: Secondary queue list lock */
+ struct mutex list_lock;
/** @sync_pending: CGP_SYNC_DONE g2h response pending */
bool sync_pending;
};
@@ -145,6 +149,8 @@ struct xe_exec_queue {
struct {
/** @multi_queue.group: Queue group information */
struct xe_exec_queue_group *group;
+ /** @multi_queue.link: Link into group's secondary queues list */
+ struct list_head link;
/** @multi_queue.priority: Queue priority within the multi-queue group */
enum xe_multi_queue_priority priority;
/** @multi_queue.pos: Position of queue within the multi-queue group */
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index d52b7b9bcedf..d38f5aab0a99 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -577,6 +577,45 @@ static bool vf_recovery(struct xe_guc *guc)
return xe_gt_recovery_pending(guc_to_gt(guc));
}
+static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue *q)
+{
+ struct xe_guc *guc = exec_queue_to_guc(q);
+ struct xe_device *xe = guc_to_xe(guc);
+
+ /** to wakeup xe_wait_user_fence ioctl if exec queue is reset */
+ wake_up_all(&xe->ufence_wq);
+
+ if (xe_exec_queue_is_lr(q))
+ queue_work(guc_to_gt(guc)->ordered_wq, &q->guc->lr_tdr);
+ else
+ xe_sched_tdr_queue_imm(&q->guc->sched);
+}
+
+static void xe_guc_exec_queue_reset_trigger_cleanup(struct xe_exec_queue *q)
+{
+ if (xe_exec_queue_is_multi_queue(q)) {
+ struct xe_exec_queue *primary = xe_exec_queue_multi_queue_primary(q);
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+ struct xe_exec_queue *eq;
+
+ set_exec_queue_reset(primary);
+ if (!exec_queue_banned(primary) && !exec_queue_check_timeout(primary))
+ xe_guc_exec_queue_trigger_cleanup(primary);
+
+ mutex_lock(&group->list_lock);
+ list_for_each_entry(eq, &group->list, multi_queue.link) {
+ set_exec_queue_reset(eq);
+ if (!exec_queue_banned(eq) && !exec_queue_check_timeout(eq))
+ xe_guc_exec_queue_trigger_cleanup(eq);
+ }
+ mutex_unlock(&group->list_lock);
+ } else {
+ set_exec_queue_reset(q);
+ if (!exec_queue_banned(q) && !exec_queue_check_timeout(q))
+ xe_guc_exec_queue_trigger_cleanup(q);
+ }
+}
+
#define parallel_read(xe_, map_, field_) \
xe_map_rd_field(xe_, &map_, 0, struct guc_submit_parallel_scratch, \
field_)
@@ -1121,20 +1160,6 @@ static void disable_scheduling_deregister(struct xe_guc *guc,
G2H_LEN_DW_DEREGISTER_CONTEXT, 2);
}
-static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue *q)
-{
- struct xe_guc *guc = exec_queue_to_guc(q);
- struct xe_device *xe = guc_to_xe(guc);
-
- /** to wakeup xe_wait_user_fence ioctl if exec queue is reset */
- wake_up_all(&xe->ufence_wq);
-
- if (xe_exec_queue_is_lr(q))
- queue_work(guc_to_gt(guc)->ordered_wq, &q->guc->lr_tdr);
- else
- xe_sched_tdr_queue_imm(&q->guc->sched);
-}
-
/**
* xe_guc_submit_wedge() - Wedge GuC submission
* @guc: the GuC object
@@ -1627,6 +1652,14 @@ static void __guc_exec_queue_destroy_async(struct work_struct *w)
guard(xe_pm_runtime)(guc_to_xe(guc));
trace_xe_exec_queue_destroy(q);
+ if (xe_exec_queue_is_multi_queue_secondary(q)) {
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+
+ mutex_lock(&group->list_lock);
+ list_del(&q->multi_queue.link);
+ mutex_unlock(&group->list_lock);
+ }
+
if (xe_exec_queue_is_lr(q))
cancel_work_sync(&ge->lr_tdr);
/* Confirm no work left behind accessing device structures */
@@ -1917,6 +1950,19 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
xe_exec_queue_assign_name(q, q->guc->id);
+ /*
+ * Maintain secondary queues of the multi queue group in a list
+ * for handling dependencies across the queues in the group.
+ */
+ if (xe_exec_queue_is_multi_queue_secondary(q)) {
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+
+ INIT_LIST_HEAD(&q->multi_queue.link);
+ mutex_lock(&group->list_lock);
+ list_add_tail(&q->multi_queue.link, &group->list);
+ mutex_unlock(&group->list_lock);
+ }
+
trace_xe_exec_queue_create(q);
return 0;
@@ -2144,6 +2190,10 @@ static void guc_exec_queue_resume(struct xe_exec_queue *q)
static bool guc_exec_queue_reset_status(struct xe_exec_queue *q)
{
+ if (xe_exec_queue_is_multi_queue_secondary(q) &&
+ guc_exec_queue_reset_status(xe_exec_queue_multi_queue_primary(q)))
+ return true;
+
return exec_queue_reset(q) || exec_queue_killed_or_banned_or_wedged(q);
}
@@ -2853,9 +2903,7 @@ int xe_guc_exec_queue_reset_handler(struct xe_guc *guc, u32 *msg, u32 len)
* jobs by setting timeout of the job to the minimum value kicking
* guc_exec_queue_timedout_job.
*/
- set_exec_queue_reset(q);
- if (!exec_queue_banned(q) && !exec_queue_check_timeout(q))
- xe_guc_exec_queue_trigger_cleanup(q);
+ xe_guc_exec_queue_reset_trigger_cleanup(q);
return 0;
}
@@ -2934,9 +2982,7 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
trace_xe_exec_queue_memory_cat_error(q);
/* Treat the same as engine reset */
- set_exec_queue_reset(q);
- if (!exec_queue_banned(q) && !exec_queue_check_timeout(q))
- xe_guc_exec_queue_trigger_cleanup(q);
+ xe_guc_exec_queue_reset_trigger_cleanup(q);
return 0;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 10/17] drm/xe/multi_queue: Set QUEUE_DRAIN_MODE for Multi Queue batches
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (8 preceding siblings ...)
2025-12-11 1:02 ` [PATCH v6 09/17] drm/xe/multi_queue: Handle tearing down of a multi queue Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 11/17] drm/xe/multi_queue: Handle CGP context error Niranjana Vishwanathapura
` (10 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
To properly support soft light restore between batches
being arbitrated at the CFEG, PIPE_CONTROL instructions
have a new bit in the first DW, QUEUE_DRAIN_MODE. When
set, this indicates to the CFEG that it should only
drain the current queue.
Additionally, we no longer want to set the CS_STALL bit
for these multi queue exec queues, as it causes the entire
pipeline to stall waiting for completion of the prior
batch, preventing this soft light restore from occurring
between queues in a queue group.
v4: Assert !multi_queue where applicable (Matt Roper)
Bspec: 56551
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
---
.../gpu/drm/xe/instructions/xe_gpu_commands.h | 1 +
drivers/gpu/drm/xe/xe_ring_ops.c | 64 ++++++++++++-------
2 files changed, 42 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/xe/instructions/xe_gpu_commands.h b/drivers/gpu/drm/xe/instructions/xe_gpu_commands.h
index 5d41ca297447..885fcf211e6d 100644
--- a/drivers/gpu/drm/xe/instructions/xe_gpu_commands.h
+++ b/drivers/gpu/drm/xe/instructions/xe_gpu_commands.h
@@ -47,6 +47,7 @@
#define GFX_OP_PIPE_CONTROL(len) ((0x3<<29)|(0x3<<27)|(0x2<<24)|((len)-2))
+#define PIPE_CONTROL0_QUEUE_DRAIN_MODE BIT(12)
#define PIPE_CONTROL0_L3_READ_ONLY_CACHE_INVALIDATE BIT(10) /* gen12 */
#define PIPE_CONTROL0_HDC_PIPELINE_FLUSH BIT(9) /* gen12 */
diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
index ac0c6dcffe15..96a14fb74507 100644
--- a/drivers/gpu/drm/xe/xe_ring_ops.c
+++ b/drivers/gpu/drm/xe/xe_ring_ops.c
@@ -12,7 +12,7 @@
#include "regs/xe_engine_regs.h"
#include "regs/xe_gt_regs.h"
#include "regs/xe_lrc_layout.h"
-#include "xe_exec_queue_types.h"
+#include "xe_exec_queue.h"
#include "xe_gt.h"
#include "xe_lrc.h"
#include "xe_macros.h"
@@ -135,12 +135,11 @@ emit_pipe_control(u32 *dw, int i, u32 bit_group_0, u32 bit_group_1, u32 offset,
return i;
}
-static int emit_pipe_invalidate(u32 mask_flags, bool invalidate_tlb, u32 *dw,
- int i)
+static int emit_pipe_invalidate(struct xe_exec_queue *q, u32 mask_flags,
+ bool invalidate_tlb, u32 *dw, int i)
{
u32 flags0 = 0;
- u32 flags1 = PIPE_CONTROL_CS_STALL |
- PIPE_CONTROL_COMMAND_CACHE_INVALIDATE |
+ u32 flags1 = PIPE_CONTROL_COMMAND_CACHE_INVALIDATE |
PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE |
PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE |
PIPE_CONTROL_VF_CACHE_INVALIDATE |
@@ -152,6 +151,11 @@ static int emit_pipe_invalidate(u32 mask_flags, bool invalidate_tlb, u32 *dw,
if (invalidate_tlb)
flags1 |= PIPE_CONTROL_TLB_INVALIDATE;
+ if (xe_exec_queue_is_multi_queue(q))
+ flags0 |= PIPE_CONTROL0_QUEUE_DRAIN_MODE;
+ else
+ flags1 |= PIPE_CONTROL_CS_STALL;
+
flags1 &= ~mask_flags;
if (flags1 & PIPE_CONTROL_VF_CACHE_INVALIDATE)
@@ -175,37 +179,47 @@ static int emit_store_imm_ppgtt_posted(u64 addr, u64 value,
static int emit_render_cache_flush(struct xe_sched_job *job, u32 *dw, int i)
{
- struct xe_gt *gt = job->q->gt;
+ struct xe_exec_queue *q = job->q;
+ struct xe_gt *gt = q->gt;
bool lacks_render = !(gt->info.engine_mask & XE_HW_ENGINE_RCS_MASK);
- u32 flags;
+ u32 flags0, flags1;
if (XE_GT_WA(gt, 14016712196))
i = emit_pipe_control(dw, i, 0, PIPE_CONTROL_DEPTH_CACHE_FLUSH,
LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR, 0);
- flags = (PIPE_CONTROL_CS_STALL |
- PIPE_CONTROL_TILE_CACHE_FLUSH |
+ flags0 = PIPE_CONTROL0_HDC_PIPELINE_FLUSH;
+ flags1 = (PIPE_CONTROL_TILE_CACHE_FLUSH |
PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH |
PIPE_CONTROL_DEPTH_CACHE_FLUSH |
PIPE_CONTROL_DC_FLUSH_ENABLE |
PIPE_CONTROL_FLUSH_ENABLE);
if (XE_GT_WA(gt, 1409600907))
- flags |= PIPE_CONTROL_DEPTH_STALL;
+ flags1 |= PIPE_CONTROL_DEPTH_STALL;
if (lacks_render)
- flags &= ~PIPE_CONTROL_3D_ARCH_FLAGS;
+ flags1 &= ~PIPE_CONTROL_3D_ARCH_FLAGS;
else if (job->q->class == XE_ENGINE_CLASS_COMPUTE)
- flags &= ~PIPE_CONTROL_3D_ENGINE_FLAGS;
+ flags1 &= ~PIPE_CONTROL_3D_ENGINE_FLAGS;
+
+ if (xe_exec_queue_is_multi_queue(q))
+ flags0 |= PIPE_CONTROL0_QUEUE_DRAIN_MODE;
+ else
+ flags1 |= PIPE_CONTROL_CS_STALL;
- return emit_pipe_control(dw, i, PIPE_CONTROL0_HDC_PIPELINE_FLUSH, flags, 0, 0);
+ return emit_pipe_control(dw, i, flags0, flags1, 0, 0);
}
-static int emit_pipe_control_to_ring_end(struct xe_hw_engine *hwe, u32 *dw, int i)
+static int emit_pipe_control_to_ring_end(struct xe_exec_queue *q, u32 *dw, int i)
{
+ struct xe_hw_engine *hwe = q->hwe;
+
if (hwe->class != XE_ENGINE_CLASS_RENDER)
return i;
+ xe_gt_assert(q->gt, !xe_exec_queue_is_multi_queue(q));
+
if (XE_GT_WA(hwe->gt, 16020292621))
i = emit_pipe_control(dw, i, 0, PIPE_CONTROL_LRI_POST_SYNC,
RING_NOPID(hwe->mmio_base).addr, 0);
@@ -213,16 +227,20 @@ static int emit_pipe_control_to_ring_end(struct xe_hw_engine *hwe, u32 *dw, int
return i;
}
-static int emit_pipe_imm_ggtt(u32 addr, u32 value, bool stall_only, u32 *dw,
- int i)
+static int emit_pipe_imm_ggtt(struct xe_exec_queue *q, u32 addr, u32 value,
+ bool stall_only, u32 *dw, int i)
{
- u32 flags = PIPE_CONTROL_CS_STALL | PIPE_CONTROL_GLOBAL_GTT_IVB |
- PIPE_CONTROL_QW_WRITE;
+ u32 flags0 = 0, flags1 = PIPE_CONTROL_GLOBAL_GTT_IVB | PIPE_CONTROL_QW_WRITE;
if (!stall_only)
- flags |= PIPE_CONTROL_FLUSH_ENABLE;
+ flags1 |= PIPE_CONTROL_FLUSH_ENABLE;
+
+ if (xe_exec_queue_is_multi_queue(q))
+ flags0 |= PIPE_CONTROL0_QUEUE_DRAIN_MODE;
+ else
+ flags1 |= PIPE_CONTROL_CS_STALL;
- return emit_pipe_control(dw, i, 0, flags, addr, value);
+ return emit_pipe_control(dw, i, flags0, flags1, addr, value);
}
static u32 get_ppgtt_flag(struct xe_sched_job *job)
@@ -371,7 +389,7 @@ static void __emit_job_gen12_render_compute(struct xe_sched_job *job,
mask_flags = PIPE_CONTROL_3D_ENGINE_FLAGS;
/* See __xe_pt_bind_vma() for a discussion on TLB invalidations. */
- i = emit_pipe_invalidate(mask_flags, job->ring_ops_flush_tlb, dw, i);
+ i = emit_pipe_invalidate(job->q, mask_flags, job->ring_ops_flush_tlb, dw, i);
/* hsdes: 1809175790 */
if (has_aux_ccs(xe))
@@ -391,11 +409,11 @@ static void __emit_job_gen12_render_compute(struct xe_sched_job *job,
job->user_fence.value,
dw, i);
- i = emit_pipe_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, lacks_render, dw, i);
+ i = emit_pipe_imm_ggtt(job->q, xe_lrc_seqno_ggtt_addr(lrc), seqno, lacks_render, dw, i);
i = emit_user_interrupt(dw, i);
- i = emit_pipe_control_to_ring_end(job->q->hwe, dw, i);
+ i = emit_pipe_control_to_ring_end(job->q, dw, i);
xe_gt_assert(gt, i <= MAX_JOB_SIZE_DW);
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 11/17] drm/xe/multi_queue: Handle CGP context error
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (9 preceding siblings ...)
2025-12-11 1:02 ` [PATCH v6 10/17] drm/xe/multi_queue: Set QUEUE_DRAIN_MODE for Multi Queue batches Niranjana Vishwanathapura
@ 2025-12-11 1:02 ` Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 12/17] drm/xe/multi_queue: Reset GT upon CGP_SYNC failure Niranjana Vishwanathapura
` (9 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:02 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Trigger multi-queue context cleanup upon CGP context error
notification from GuC.
v4: Fix error message
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/abi/guc_actions_abi.h | 1 +
drivers/gpu/drm/xe/xe_guc_ct.c | 4 +++
drivers/gpu/drm/xe/xe_guc_submit.c | 31 ++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_guc_submit.h | 2 ++
drivers/gpu/drm/xe/xe_trace.h | 5 ++++
5 files changed, 43 insertions(+)
diff --git a/drivers/gpu/drm/xe/abi/guc_actions_abi.h b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
index 3e9fbed9cda6..8af3691626bf 100644
--- a/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+++ b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
@@ -142,6 +142,7 @@ enum xe_guc_action {
XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_QUEUE = 0x4602,
XE_GUC_ACTION_MULTI_QUEUE_CONTEXT_CGP_SYNC = 0x4603,
XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CONTEXT_CGP_SYNC_DONE = 0x4604,
+ XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CGP_CONTEXT_ERROR = 0x4605,
XE_GUC_ACTION_CLIENT_SOFT_RESET = 0x5507,
XE_GUC_ACTION_SET_ENG_UTIL_BUFF = 0x550A,
XE_GUC_ACTION_SET_DEVICE_ENGINE_ACTIVITY_BUFFER = 0x550C,
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 4d5b4ed357cc..3e49e7fd0031 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -1618,6 +1618,10 @@ static int process_g2h_msg(struct xe_guc_ct *ct, u32 *msg, u32 len)
case XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CONTEXT_CGP_SYNC_DONE:
ret = xe_guc_exec_queue_cgp_sync_done_handler(guc, payload, adj_len);
break;
+ case XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CGP_CONTEXT_ERROR:
+ ret = xe_guc_exec_queue_cgp_context_error_handler(guc, payload,
+ adj_len);
+ break;
default:
xe_gt_err(gt, "unexpected G2H action 0x%04x\n", action);
}
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index d38f5aab0a99..3be5e78485c7 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -48,6 +48,8 @@
#include "xe_uc_fw.h"
#include "xe_vm.h"
+#define XE_GUC_EXEC_QUEUE_CGP_CONTEXT_ERROR_LEN 6
+
static struct xe_guc *
exec_queue_to_guc(struct xe_exec_queue *q)
{
@@ -3009,6 +3011,35 @@ int xe_guc_exec_queue_reset_failure_handler(struct xe_guc *guc, u32 *msg, u32 le
return 0;
}
+int xe_guc_exec_queue_cgp_context_error_handler(struct xe_guc *guc, u32 *msg,
+ u32 len)
+{
+ struct xe_gt *gt = guc_to_gt(guc);
+ struct xe_device *xe = guc_to_xe(guc);
+ struct xe_exec_queue *q;
+ u32 guc_id = msg[2];
+
+ if (unlikely(len != XE_GUC_EXEC_QUEUE_CGP_CONTEXT_ERROR_LEN)) {
+ drm_err(&xe->drm, "Invalid length %u", len);
+ return -EPROTO;
+ }
+
+ q = g2h_exec_queue_lookup(guc, guc_id);
+ if (unlikely(!q))
+ return -EPROTO;
+
+ xe_gt_dbg(gt,
+ "CGP context error: [%s] err=0x%x, q0_id=0x%x LRCA=0x%x guc_id=0x%x",
+ msg[0] & 1 ? "uc" : "kmd", msg[1], msg[2], msg[3], msg[4]);
+
+ trace_xe_exec_queue_cgp_context_error(q);
+
+ /* Treat the same as engine reset */
+ xe_guc_exec_queue_reset_trigger_cleanup(q);
+
+ return 0;
+}
+
/**
* xe_guc_exec_queue_cgp_sync_done_handler - CGP synchronization done handler
* @guc: guc
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
index ad8c0e8e0415..4d89b2975fe9 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.h
+++ b/drivers/gpu/drm/xe/xe_guc_submit.h
@@ -37,6 +37,8 @@ int xe_guc_exec_queue_memory_cat_error_handler(struct xe_guc *guc, u32 *msg,
int xe_guc_exec_queue_reset_failure_handler(struct xe_guc *guc, u32 *msg, u32 len);
int xe_guc_error_capture_handler(struct xe_guc *guc, u32 *msg, u32 len);
int xe_guc_exec_queue_cgp_sync_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
+int xe_guc_exec_queue_cgp_context_error_handler(struct xe_guc *guc, u32 *msg,
+ u32 len);
struct xe_guc_submit_exec_queue_snapshot *
xe_guc_exec_queue_snapshot_capture(struct xe_exec_queue *q);
diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index 79a97b086cb2..c9d0748dae9d 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -172,6 +172,11 @@ DEFINE_EVENT(xe_exec_queue, xe_exec_queue_memory_cat_error,
TP_ARGS(q)
);
+DEFINE_EVENT(xe_exec_queue, xe_exec_queue_cgp_context_error,
+ TP_PROTO(struct xe_exec_queue *q),
+ TP_ARGS(q)
+);
+
DEFINE_EVENT(xe_exec_queue, xe_exec_queue_stop,
TP_PROTO(struct xe_exec_queue *q),
TP_ARGS(q)
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 12/17] drm/xe/multi_queue: Reset GT upon CGP_SYNC failure
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (10 preceding siblings ...)
2025-12-11 1:02 ` [PATCH v6 11/17] drm/xe/multi_queue: Handle CGP context error Niranjana Vishwanathapura
@ 2025-12-11 1:03 ` Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 13/17] drm/xe/multi_queue: Teardown group upon job timeout Niranjana Vishwanathapura
` (8 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:03 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
If the GuC doesn't respond to the CGP_SYNC message, trigger
a GT reset and a cleanup of all the queues of the multi
queue group.
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_guc_submit.c | 38 ++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 3be5e78485c7..e8bde976e4c8 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -593,6 +593,23 @@ static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue *q)
xe_sched_tdr_queue_imm(&q->guc->sched);
}
+static void xe_guc_exec_queue_group_trigger_cleanup(struct xe_exec_queue *q)
+{
+ struct xe_exec_queue *primary = xe_exec_queue_multi_queue_primary(q);
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+ struct xe_exec_queue *eq;
+
+ xe_gt_assert(guc_to_gt(exec_queue_to_guc(q)),
+ xe_exec_queue_is_multi_queue(q));
+
+ xe_guc_exec_queue_trigger_cleanup(primary);
+
+ mutex_lock(&group->list_lock);
+ list_for_each_entry(eq, &group->list, multi_queue.link)
+ xe_guc_exec_queue_trigger_cleanup(eq);
+ mutex_unlock(&group->list_lock);
+}
+
static void xe_guc_exec_queue_reset_trigger_cleanup(struct xe_exec_queue *q)
{
if (xe_exec_queue_is_multi_queue(q)) {
@@ -618,6 +635,23 @@ static void xe_guc_exec_queue_reset_trigger_cleanup(struct xe_exec_queue *q)
}
}
+static void set_exec_queue_group_banned(struct xe_exec_queue *q)
+{
+ struct xe_exec_queue *primary = xe_exec_queue_multi_queue_primary(q);
+ struct xe_exec_queue_group *group = q->multi_queue.group;
+ struct xe_exec_queue *eq;
+
+ /* Ban all queues of the multi-queue group */
+ xe_gt_assert(guc_to_gt(exec_queue_to_guc(q)),
+ xe_exec_queue_is_multi_queue(q));
+ set_exec_queue_banned(primary);
+
+ mutex_lock(&group->list_lock);
+ list_for_each_entry(eq, &group->list, multi_queue.link)
+ set_exec_queue_banned(eq);
+ mutex_unlock(&group->list_lock);
+}
+
#define parallel_read(xe_, map_, field_) \
xe_map_rd_field(xe_, &map_, 0, struct guc_submit_parallel_scratch, \
field_)
@@ -677,7 +711,11 @@ static void xe_guc_exec_queue_group_cgp_sync(struct xe_guc *guc,
!READ_ONCE(group->sync_pending) ||
xe_guc_read_stopped(guc), HZ);
if (!ret || xe_guc_read_stopped(guc)) {
+ /* CGP_SYNC failed. Reset gt, cleanup the group */
xe_gt_warn(guc_to_gt(guc), "Wait for CGP_SYNC_DONE response failed!\n");
+ set_exec_queue_group_banned(q);
+ xe_gt_reset_async(q->gt);
+ xe_guc_exec_queue_group_trigger_cleanup(q);
return;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 13/17] drm/xe/multi_queue: Teardown group upon job timeout
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (11 preceding siblings ...)
2025-12-11 1:03 ` [PATCH v6 12/17] drm/xe/multi_queue: Reset GT upon CGP_SYNC failure Niranjana Vishwanathapura
@ 2025-12-11 1:03 ` Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 14/17] drm/xe/multi_queue: Tracepoint support Niranjana Vishwanathapura
` (7 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:03 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Upon a job timeout, tear down the multi-queue group by
triggering TDR on all queues of the group and by
skipping the timeout checks in them.
v5: Ban the group while triggering TDR for GuC-reported
errors
Add FIXME in TDR to take the multi-queue group off the HW
(Matt Brost)
v6: Trigger cleanup of the group only for the multi-queue case
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue_types.h | 2 ++
drivers/gpu/drm/xe/xe_guc_submit.c | 23 ++++++++++++++++++++++-
2 files changed, 24 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 8a954ee62505..5fc516b0bb77 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -64,6 +64,8 @@ struct xe_exec_queue_group {
struct mutex list_lock;
/** @sync_pending: CGP_SYNC_DONE g2h response pending */
bool sync_pending;
+ /** @banned: Group banned */
+ bool banned;
};
/**
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index e8bde976e4c8..f678b806acaa 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -602,6 +602,8 @@ static void xe_guc_exec_queue_group_trigger_cleanup(struct xe_exec_queue *q)
xe_gt_assert(guc_to_gt(exec_queue_to_guc(q)),
xe_exec_queue_is_multi_queue(q));
+ /* Group banned, skip timeout check in TDR */
+ WRITE_ONCE(group->banned, true);
xe_guc_exec_queue_trigger_cleanup(primary);
mutex_lock(&group->list_lock);
@@ -617,6 +619,9 @@ static void xe_guc_exec_queue_reset_trigger_cleanup(struct xe_exec_queue *q)
struct xe_exec_queue_group *group = q->multi_queue.group;
struct xe_exec_queue *eq;
+ /* Group banned, skip timeout check in TDR */
+ WRITE_ONCE(group->banned, true);
+
set_exec_queue_reset(primary);
if (!exec_queue_banned(primary) && !exec_queue_check_timeout(primary))
xe_guc_exec_queue_trigger_cleanup(primary);
@@ -1487,6 +1492,19 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
exec_queue_killed_or_banned_or_wedged(q) ||
exec_queue_destroyed(q);
+ /* Skip timeout check if multi-queue group is banned */
+ if (xe_exec_queue_is_multi_queue(q) &&
+ READ_ONCE(q->multi_queue.group->banned))
+ skip_timeout_check = true;
+
+ /*
+ * FIXME: In multi-queue scenario, the TDR must ensure that the whole
+ * multi-queue group is off the HW before signaling the fences to avoid
+ * possible memory corruptions. This means disabling scheduling on the
+ * primary queue before or during the secondary queue's TDR. Need to
+ * implement this in least obtrusive way.
+ */
+
/*
* If devcoredump not captured and GuC capture for the job is not ready
* do manual capture first and decide later if we need to use it
@@ -1639,7 +1657,10 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
xe_sched_add_pending_job(sched, job);
xe_sched_submission_start(sched);
- xe_guc_exec_queue_trigger_cleanup(q);
+ if (xe_exec_queue_is_multi_queue(q))
+ xe_guc_exec_queue_group_trigger_cleanup(q);
+ else
+ xe_guc_exec_queue_trigger_cleanup(q);
/* Mark all outstanding jobs as bad, thus completing them */
spin_lock(&sched->base.job_list_lock);
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 14/17] drm/xe/multi_queue: Tracepoint support
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (12 preceding siblings ...)
2025-12-11 1:03 ` [PATCH v6 13/17] drm/xe/multi_queue: Teardown group upon job timeout Niranjana Vishwanathapura
@ 2025-12-11 1:03 ` Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 15/17] drm/xe/multi_queue: Support active group after primary is destroyed Niranjana Vishwanathapura
` (6 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:03 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Add xe_exec_queue_create_multi_queue event with
multi-queue information.
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_guc_submit.c | 5 +++-
drivers/gpu/drm/xe/xe_trace.h | 41 ++++++++++++++++++++++++++++++
2 files changed, 45 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index f678b806acaa..778cab377f84 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -2024,7 +2024,10 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
mutex_unlock(&group->list_lock);
}
- trace_xe_exec_queue_create(q);
+ if (xe_exec_queue_is_multi_queue(q))
+ trace_xe_exec_queue_create_multi_queue(q);
+ else
+ trace_xe_exec_queue_create(q);
return 0;
diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index c9d0748dae9d..6d12fcc13f43 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -13,6 +13,7 @@
#include <linux/types.h>
#include "xe_exec_queue_types.h"
+#include "xe_exec_queue.h"
#include "xe_gpu_scheduler_types.h"
#include "xe_gt_types.h"
#include "xe_guc_exec_queue_types.h"
@@ -97,11 +98,51 @@ DECLARE_EVENT_CLASS(xe_exec_queue,
__entry->guc_state, __entry->flags)
);
+DECLARE_EVENT_CLASS(xe_exec_queue_multi_queue,
+ TP_PROTO(struct xe_exec_queue *q),
+ TP_ARGS(q),
+
+ TP_STRUCT__entry(
+ __string(dev, __dev_name_eq(q))
+ __field(enum xe_engine_class, class)
+ __field(u32, logical_mask)
+ __field(u8, gt_id)
+ __field(u16, width)
+ __field(u32, guc_id)
+ __field(u32, guc_state)
+ __field(u32, flags)
+ __field(u32, primary)
+ ),
+
+ TP_fast_assign(
+ __assign_str(dev);
+ __entry->class = q->class;
+ __entry->logical_mask = q->logical_mask;
+ __entry->gt_id = q->gt->info.id;
+ __entry->width = q->width;
+ __entry->guc_id = q->guc->id;
+ __entry->guc_state = atomic_read(&q->guc->state);
+ __entry->flags = q->flags;
+ __entry->primary = xe_exec_queue_multi_queue_primary(q)->guc->id;
+ ),
+
+ TP_printk("dev=%s, %d:0x%x, gt=%d, width=%d guc_id=%d, guc_state=0x%x, flags=0x%x, primary=%d",
+ __get_str(dev), __entry->class, __entry->logical_mask,
+ __entry->gt_id, __entry->width, __entry->guc_id,
+ __entry->guc_state, __entry->flags,
+ __entry->primary)
+);
+
DEFINE_EVENT(xe_exec_queue, xe_exec_queue_create,
TP_PROTO(struct xe_exec_queue *q),
TP_ARGS(q)
);
+DEFINE_EVENT(xe_exec_queue_multi_queue, xe_exec_queue_create_multi_queue,
+ TP_PROTO(struct xe_exec_queue *q),
+ TP_ARGS(q)
+);
+
DEFINE_EVENT(xe_exec_queue, xe_exec_queue_supress_resume,
TP_PROTO(struct xe_exec_queue *q),
TP_ARGS(q)
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 15/17] drm/xe/multi_queue: Support active group after primary is destroyed
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (13 preceding siblings ...)
2025-12-11 1:03 ` [PATCH v6 14/17] drm/xe/multi_queue: Tracepoint support Niranjana Vishwanathapura
@ 2025-12-11 1:03 ` Niranjana Vishwanathapura
2025-12-19 21:06 ` Rodrigo Vivi
2025-12-11 1:03 ` [PATCH v6 16/17] drm/xe/doc: Add documentation for Multi Queue Group Niranjana Vishwanathapura
` (5 subsequent siblings)
20 siblings, 1 reply; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:03 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Add support to keep the group active after the primary queue is
destroyed. Instead of killing the primary queue during the
exec_queue destroy ioctl, kill it only when all the secondary
queues of the group have been killed.
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 7 ++-
drivers/gpu/drm/xe/xe_exec_queue.c | 55 +++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_exec_queue.h | 2 +
drivers/gpu/drm/xe/xe_exec_queue_types.h | 4 ++
include/uapi/drm/xe_drm.h | 4 ++
5 files changed, 69 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 7a498c8db7b1..24efb6a3e0ea 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -177,7 +177,12 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
xa_for_each(&xef->exec_queue.xa, idx, q) {
if (q->vm && q->hwe->hw_engine_group)
xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
- xe_exec_queue_kill(q);
+
+ if (xe_exec_queue_is_multi_queue_primary(q))
+ xe_exec_queue_group_kill_put(q->multi_queue.group);
+ else
+ xe_exec_queue_kill(q);
+
xe_exec_queue_put(q);
}
xa_for_each(&xef->vm.xa, idx, vm)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index d337b7bc2b80..3f4840d135a0 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -418,6 +418,26 @@ struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
}
ALLOW_ERROR_INJECTION(xe_exec_queue_create_bind, ERRNO);
+static void xe_exec_queue_group_kill(struct kref *ref)
+{
+ struct xe_exec_queue_group *group = container_of(ref, struct xe_exec_queue_group,
+ kill_refcount);
+ xe_exec_queue_kill(group->primary);
+}
+
+static inline void xe_exec_queue_group_kill_get(struct xe_exec_queue_group *group)
+{
+ kref_get(&group->kill_refcount);
+}
+
+void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group)
+{
+ if (!group)
+ return;
+
+ kref_put(&group->kill_refcount, xe_exec_queue_group_kill);
+}
+
void xe_exec_queue_destroy(struct kref *ref)
{
struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount);
@@ -650,6 +670,7 @@ static int xe_exec_queue_group_init(struct xe_device *xe, struct xe_exec_queue *
group->primary = q;
group->cgp_bo = bo;
INIT_LIST_HEAD(&group->list);
+ kref_init(&group->kill_refcount);
xa_init_flags(&group->xa, XA_FLAGS_ALLOC1);
mutex_init(&group->list_lock);
q->multi_queue.group = group;
@@ -725,6 +746,11 @@ static int xe_exec_queue_group_add(struct xe_device *xe, struct xe_exec_queue *q
q->multi_queue.pos = pos;
+ if (group->primary->multi_queue.keep_active) {
+ xe_exec_queue_group_kill_get(group);
+ q->multi_queue.keep_active = true;
+ }
+
return 0;
}
@@ -738,6 +764,11 @@ static void xe_exec_queue_group_delete(struct xe_device *xe, struct xe_exec_queu
lrc = xa_erase(&group->xa, q->multi_queue.pos);
xe_assert(xe, lrc);
xe_lrc_put(lrc);
+
+ if (q->multi_queue.keep_active) {
+ xe_exec_queue_group_kill_put(group);
+ q->multi_queue.keep_active = false;
+ }
}
static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue *q,
@@ -759,12 +790,24 @@ static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue
return -EINVAL;
if (value & DRM_XE_MULTI_GROUP_CREATE) {
- if (XE_IOCTL_DBG(xe, value & ~DRM_XE_MULTI_GROUP_CREATE))
+ if (XE_IOCTL_DBG(xe, value & ~(DRM_XE_MULTI_GROUP_CREATE |
+ DRM_XE_MULTI_GROUP_KEEP_ACTIVE)))
+ return -EINVAL;
+
+ /*
+ * KEEP_ACTIVE is not supported in preempt fence mode as in that mode,
+ * VM_DESTROY ioctl expects all exec queues of that VM are already killed.
+ */
+ if (XE_IOCTL_DBG(xe, (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE) &&
+ xe_vm_in_preempt_fence_mode(q->vm)))
return -EINVAL;
q->multi_queue.valid = true;
q->multi_queue.is_primary = true;
q->multi_queue.pos = 0;
+ if (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE)
+ q->multi_queue.keep_active = true;
+
return 0;
}
@@ -1312,6 +1355,11 @@ void xe_exec_queue_kill(struct xe_exec_queue *q)
q->ops->kill(q);
xe_vm_remove_compute_exec_queue(q->vm, q);
+
+ if (!xe_exec_queue_is_multi_queue_primary(q) && q->multi_queue.keep_active) {
+ xe_exec_queue_group_kill_put(q->multi_queue.group);
+ q->multi_queue.keep_active = false;
+ }
}
int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
@@ -1338,7 +1386,10 @@ int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
if (q->vm && q->hwe->hw_engine_group)
xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
- xe_exec_queue_kill(q);
+ if (xe_exec_queue_is_multi_queue_primary(q))
+ xe_exec_queue_group_kill_put(q->multi_queue.group);
+ else
+ xe_exec_queue_kill(q);
trace_xe_exec_queue_close(q);
xe_exec_queue_put(q);
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
index ffcc1feb879e..10abed98fb6b 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue.h
@@ -113,6 +113,8 @@ static inline struct xe_exec_queue *xe_exec_queue_multi_queue_primary(struct xe_
return xe_exec_queue_is_multi_queue(q) ? q->multi_queue.group->primary : q;
}
+void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group);
+
bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
bool xe_exec_queue_is_idle(struct xe_exec_queue *q);
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 5fc516b0bb77..67ea5eebf70b 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -62,6 +62,8 @@ struct xe_exec_queue_group {
struct list_head list;
/** @list_lock: Secondary queue list lock */
struct mutex list_lock;
+ /** @kill_refcount: ref count to kill primary queue */
+ struct kref kill_refcount;
/** @sync_pending: CGP_SYNC_DONE g2h response pending */
bool sync_pending;
/** @banned: Group banned */
@@ -161,6 +163,8 @@ struct xe_exec_queue {
u8 valid:1;
/** @multi_queue.is_primary: Is primary queue (Q0) of the group */
u8 is_primary:1;
+ /** @multi_queue.keep_active: Keep the group active after primary is destroyed */
+ u8 keep_active:1;
} multi_queue;
/** @sched_props: scheduling properties */
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 705081bf0d81..bd6154e3b728 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1280,6 +1280,9 @@ struct drm_xe_vm_bind {
* then a new multi-queue group is created with this queue as the primary queue
* (Q0). Otherwise, the queue gets added to the multi-queue group whose primary
* queue's exec_queue_id is specified in the lower 32 bits of the 'value' field.
+ * If the extension's 'value' field has %DRM_XE_MULTI_GROUP_KEEP_ACTIVE flag
+ * set, then the multi-queue group is kept active after the primary queue is
+ * destroyed.
* All the other non-relevant bits of extension's 'value' field while adding the
* primary or the secondary queues of the group must be set to 0.
* - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY - Set the queue
@@ -1328,6 +1331,7 @@ struct drm_xe_exec_queue_create {
#define DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE 3
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP 4
#define DRM_XE_MULTI_GROUP_CREATE (1ull << 63)
+#define DRM_XE_MULTI_GROUP_KEEP_ACTIVE (1ull << 62)
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY 5
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 16/17] drm/xe/doc: Add documentation for Multi Queue Group
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (14 preceding siblings ...)
2025-12-11 1:03 ` [PATCH v6 15/17] drm/xe/multi_queue: Support active group after primary is destroyed Niranjana Vishwanathapura
@ 2025-12-11 1:03 ` Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 17/17] drm/xe/doc: Add documentation for Multi Queue Group GuC interface Niranjana Vishwanathapura
` (4 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:03 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Add kernel documentation for Multi Queue group and update
the corresponding rst.
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
Documentation/gpu/xe/xe_exec_queue.rst | 6 ++++
drivers/gpu/drm/xe/xe_exec_queue.c | 45 ++++++++++++++++++++++++++
2 files changed, 51 insertions(+)
diff --git a/Documentation/gpu/xe/xe_exec_queue.rst b/Documentation/gpu/xe/xe_exec_queue.rst
index 6076569e311c..732af4741df4 100644
--- a/Documentation/gpu/xe/xe_exec_queue.rst
+++ b/Documentation/gpu/xe/xe_exec_queue.rst
@@ -7,6 +7,12 @@ Execution Queue
.. kernel-doc:: drivers/gpu/drm/xe/xe_exec_queue.c
:doc: Execution Queue
+Multi Queue Group
+=================
+
+.. kernel-doc:: drivers/gpu/drm/xe/xe_exec_queue.c
+ :doc: Multi Queue Group
+
Internal API
============
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index 3f4840d135a0..e16e4d2d4053 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -54,6 +54,51 @@
* the ring operations the different engine classes support.
*/
+/**
+ * DOC: Multi Queue Group
+ *
+ * Multi Queue Group is another mode of execution supported by the compute
+ * and blitter copy command streamers (CCS and BCS, respectively). It is
+ * an enhancement of the existing hardware architecture and leverages the
+ * same submission model. It enables support for efficient, parallel
+ * execution of multiple queues within a single shared context. The multi
+ * queue group functionality is only supported with GuC submission backend.
+ * All the queues of a group must use the same address space (VM).
+ *
+ * The DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE execution queue property
+ * supports creating a multi queue group and adding queues to a queue group.
+ *
+ * The XE_EXEC_QUEUE_CREATE ioctl call with above property with value field
+ * set to DRM_XE_MULTI_GROUP_CREATE, will create a new multi queue group with
+ * the queue being created as the primary queue (aka q0) of the group. To add
+ * secondary queues to the group, they need to be created with the above
+ * property with id of the primary queue as the value. The properties of
+ * the primary queue (like priority, time slice) applies to the whole group.
+ * So, these properties can't be set for secondary queues of a group.
+ *
+ * The hardware does not support removing a queue from a multi-queue group.
+ * However, queues can be dynamically added to the group. A group can have
+ * up to 64 queues. To support this, XeKMD holds references to LRCs of the
+ * queues even after the queues are destroyed by the user until the whole
+ * group is destroyed. The secondary queues hold a reference to the primary
+ * queue thus preventing the group from being destroyed when user destroys
+ * the primary queue. Once the primary queue is destroyed, secondary queues
+ * can't be added to the queue group, but they can continue to submit the
+ * jobs if the DRM_XE_MULTI_GROUP_KEEP_ACTIVE flag is set during the multi
+ * queue group creation.
+ *
+ * The queues of a multi queue group can set their priority within the group
+ * through the DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY property.
+ * This multi queue priority can also be set dynamically through the
+ * XE_EXEC_QUEUE_SET_PROPERTY ioctl. This is the only other property
+ * supported by the secondary queues of a multi queue group, other than
+ * DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE.
+ *
+ * When GuC reports an error on any of the queues of a multi queue group,
+ * the queue cleanup mechanism is invoked for all the queues of the group
+ * as hardware cannot make progress on the multi queue context.
+ */
+
enum xe_exec_queue_sched_prop {
XE_EXEC_QUEUE_JOB_TIMEOUT = 0,
XE_EXEC_QUEUE_TIMESLICE = 1,
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v6 17/17] drm/xe/doc: Add documentation for Multi Queue Group GuC interface
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (15 preceding siblings ...)
2025-12-11 1:03 ` [PATCH v6 16/17] drm/xe/doc: Add documentation for Multi Queue Group Niranjana Vishwanathapura
@ 2025-12-11 1:03 ` Niranjana Vishwanathapura
2025-12-11 1:11 ` ✗ CI.checkpatch: warning for drm/xe: Multi Queue feature support (rev6) Patchwork
` (3 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-11 1:03 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, matthew.d.roper
Add kernel documentation for the Multi Queue Group GuC interface.
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
Documentation/gpu/xe/xe_exec_queue.rst | 8 ++++
drivers/gpu/drm/xe/xe_exec_queue.c | 3 ++
drivers/gpu/drm/xe/xe_guc_submit.c | 57 ++++++++++++++++++++++++++
3 files changed, 68 insertions(+)
diff --git a/Documentation/gpu/xe/xe_exec_queue.rst b/Documentation/gpu/xe/xe_exec_queue.rst
index 732af4741df4..8707806211c9 100644
--- a/Documentation/gpu/xe/xe_exec_queue.rst
+++ b/Documentation/gpu/xe/xe_exec_queue.rst
@@ -13,6 +13,14 @@ Multi Queue Group
.. kernel-doc:: drivers/gpu/drm/xe/xe_exec_queue.c
:doc: Multi Queue Group
+.. _multi-queue-group-guc-interface:
+
+Multi Queue Group GuC interface
+===============================
+
+.. kernel-doc:: drivers/gpu/drm/xe/xe_guc_submit.c
+ :doc: Multi Queue Group GuC interface
+
Internal API
============
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index e16e4d2d4053..cb45962be14c 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -97,6 +97,9 @@
* When GuC reports an error on any of the queues of a multi queue group,
* the queue cleanup mechanism is invoked for all the queues of the group
* as hardware cannot make progress on the multi queue context.
+ *
+ * Refer :ref:`multi-queue-group-guc-interface` for multi queue group GuC
+ * interface.
*/
enum xe_exec_queue_sched_prop {
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 778cab377f84..21a8bd2ec672 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -664,6 +664,63 @@ static void set_exec_queue_group_banned(struct xe_exec_queue *q)
xe_map_wr_field(xe_, &map_, 0, struct guc_submit_parallel_scratch, \
field_, val_)
+/**
+ * DOC: Multi Queue Group GuC interface
+ *
+ * The multi queue group coordination between KMD and GuC is through a software
+ * construct called Context Group Page (CGP). The CGP is a KMD managed 4KB page
+ * allocated in the global GTT.
+ *
+ * CGP format:
+ *
+ * +-----------+---------------------------+---------------------------------------------+
+ * | DWORD | Name | Description |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 0 | Version | Bits [15:8]=Major ver, [7:0]=Minor ver |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 1..15 | RESERVED | MBZ |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 16 | KMD_QUEUE_UPDATE_MASK_DW0 | KMD queue mask for queues 31..0 |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 17 | KMD_QUEUE_UPDATE_MASK_DW1 | KMD queue mask for queues 63..32 |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 18..31 | RESERVED | MBZ |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 32 | Q0CD_DW0 | Queue 0 context LRC descriptor lower DWORD |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 33 | Q0ContextIndex | Context ID for Queue 0 |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 34 | Q1CD_DW0 | Queue 1 context LRC descriptor lower DWORD |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 35 | Q1ContextIndex | Context ID for Queue 1 |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | ... |... | ... |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 158 | Q63CD_DW0 | Queue 63 context LRC descriptor lower DWORD |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 159 | Q63ContextIndex | Context ID for Queue 63 |
+ * +-----------+---------------------------+---------------------------------------------+
+ * | 160..1024 | RESERVED | MBZ |
+ * +-----------+---------------------------+---------------------------------------------+
+ *
+ * While registering Q0 with GuC, CGP is updated with Q0 entry and GuC is notified
+ * through XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_QUEUE H2G message which specifies
+ * the CGP address. When the secondary queues are added to the group, the CGP is
+ * updated with entry for that queue and GuC is notified through the H2G interface
+ * XE_GUC_ACTION_MULTI_QUEUE_CONTEXT_CGP_SYNC. GuC responds to these H2G messages
+ * with a XE_GUC_ACTION_NOTIFY_MULTIQ_CONTEXT_CGP_SYNC_DONE G2H message. GuC also
+ * sends a XE_GUC_ACTION_NOTIFY_MULTI_QUEUE_CGP_CONTEXT_ERROR notification for any
+ * error in the CGP. Only one of these CGP update messages can be outstanding
+ * (waiting for GuC response) at any time. The bits in KMD_QUEUE_UPDATE_MASK_DW*
+ * fields indicate which queue entry is being updated in the CGP.
+ *
+ * The primary queue (Q0) represents the multi queue group context in GuC and
+ * submission on any queue of the group must be through Q0 GuC interface only.
+ *
+ * As it is not required to register secondary queues with GuC, the secondary queue
+ * context ids in the CGP are populated with Q0 context id.
+ */
+
#define CGP_VERSION_MAJOR_SHIFT 8
static void xe_guc_exec_queue_group_cgp_update(struct xe_device *xe,
--
2.43.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* ✗ CI.checkpatch: warning for drm/xe: Multi Queue feature support (rev6)
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (16 preceding siblings ...)
2025-12-11 1:03 ` [PATCH v6 17/17] drm/xe/doc: Add documentation for Multi Queue Group GuC interface Niranjana Vishwanathapura
@ 2025-12-11 1:11 ` Patchwork
2025-12-11 1:12 ` ✓ CI.KUnit: success " Patchwork
` (2 subsequent siblings)
20 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2025-12-11 1:11 UTC (permalink / raw)
To: Niranjana Vishwanathapura; +Cc: intel-xe
== Series Details ==
Series: drm/xe: Multi Queue feature support (rev6)
URL : https://patchwork.freedesktop.org/series/156865/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
8f50e69d0ce3656564bbdf8b3e213d61470d463f
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit d8b436d34ddb4892ba86b178fc1ab96876f1d8a2
Author: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Date: Wed Dec 10 17:03:05 2025 -0800
drm/xe/doc: Add documentation for Multi Queue Group GuC interface
Add kernel documentation for Multi Queue group GuC interface.
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch 3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b drm-intel
2f87fda5ee05 drm/xe/multi_queue: Add multi_queue_enable_mask to gt information
444552b1cfd0 drm/xe/multi_queue: Add user interface for multi queue support
d52a89aba46c drm/xe/multi_queue: Add GuC interface for multi queue support
54409c414623 drm/xe/multi_queue: Add multi queue priority property
7dcf5201f460 drm/xe/multi_queue: Handle invalid exec queue property setting
af628674e34e drm/xe/multi_queue: Add exec_queue set_property ioctl support
-:109: WARNING:LONG_LINE: line length of 145 exceeds 100 columns
#109: FILE: include/uapi/drm/xe_drm.h:127:
+#define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
total: 0 errors, 1 warnings, 0 checks, 101 lines checked
d81ad991045e drm/xe/multi_queue: Add support for multi queue dynamic priority change
305a1652c487 drm/xe/multi_queue: Add multi queue information to guc_info dump
24dc18392be3 drm/xe/multi_queue: Handle tearing down of a multi queue
6ac5d9836685 drm/xe/multi_queue: Set QUEUE_DRAIN_MODE for Multi Queue batches
9487d6d7c8d8 drm/xe/multi_queue: Handle CGP context error
ba9b6090795f drm/xe/multi_queue: Reset GT upon CGP_SYNC failure
4151a82f5c1d drm/xe/multi_queue: Teardown group upon job timeout
16784b35a7da drm/xe/multi_queue: Tracepoint support
-:48: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#48: FILE: drivers/gpu/drm/xe/xe_trace.h:105:
+ TP_STRUCT__entry(
-:60: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#60: FILE: drivers/gpu/drm/xe/xe_trace.h:117:
+ TP_fast_assign(
total: 0 errors, 0 warnings, 2 checks, 69 lines checked
8946fa220a03 drm/xe/multi_queue: Support active group after primary is destroyed
c1b04380947d drm/xe/doc: Add documentation for Multi Queue Group
d8b436d34ddb drm/xe/doc: Add documentation for Multi Queue Group GuC interface
^ permalink raw reply [flat|nested] 32+ messages in thread
* ✓ CI.KUnit: success for drm/xe: Multi Queue feature support (rev6)
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (17 preceding siblings ...)
2025-12-11 1:11 ` ✗ CI.checkpatch: warning for drm/xe: Multi Queue feature support (rev6) Patchwork
@ 2025-12-11 1:12 ` Patchwork
2025-12-11 2:25 ` ✓ Xe.CI.BAT: " Patchwork
2025-12-11 10:04 ` ✗ Xe.CI.Full: failure " Patchwork
20 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2025-12-11 1:12 UTC (permalink / raw)
To: Niranjana Vishwanathapura; +Cc: intel-xe
== Series Details ==
Series: drm/xe: Multi Queue feature support (rev6)
URL : https://patchwork.freedesktop.org/series/156865/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[01:11:26] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[01:11:31] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[01:12:01] Starting KUnit Kernel (1/1)...
[01:12:01] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[01:12:01] ================== guc_buf (11 subtests) ===================
[01:12:01] [PASSED] test_smallest
[01:12:01] [PASSED] test_largest
[01:12:01] [PASSED] test_granular
[01:12:01] [PASSED] test_unique
[01:12:01] [PASSED] test_overlap
[01:12:01] [PASSED] test_reusable
[01:12:01] [PASSED] test_too_big
[01:12:01] [PASSED] test_flush
[01:12:01] [PASSED] test_lookup
[01:12:01] [PASSED] test_data
[01:12:01] [PASSED] test_class
[01:12:01] ===================== [PASSED] guc_buf =====================
[01:12:01] =================== guc_dbm (7 subtests) ===================
[01:12:01] [PASSED] test_empty
[01:12:01] [PASSED] test_default
[01:12:01] ======================== test_size ========================
[01:12:01] [PASSED] 4
[01:12:01] [PASSED] 8
[01:12:01] [PASSED] 32
[01:12:01] [PASSED] 256
[01:12:01] ==================== [PASSED] test_size ====================
[01:12:01] ======================= test_reuse ========================
[01:12:01] [PASSED] 4
[01:12:01] [PASSED] 8
[01:12:01] [PASSED] 32
[01:12:01] [PASSED] 256
[01:12:01] =================== [PASSED] test_reuse ====================
[01:12:01] =================== test_range_overlap ====================
[01:12:01] [PASSED] 4
[01:12:01] [PASSED] 8
[01:12:01] [PASSED] 32
[01:12:01] [PASSED] 256
[01:12:01] =============== [PASSED] test_range_overlap ================
[01:12:01] =================== test_range_compact ====================
[01:12:01] [PASSED] 4
[01:12:01] [PASSED] 8
[01:12:01] [PASSED] 32
[01:12:01] [PASSED] 256
[01:12:01] =============== [PASSED] test_range_compact ================
[01:12:01] ==================== test_range_spare =====================
[01:12:01] [PASSED] 4
[01:12:01] [PASSED] 8
[01:12:01] [PASSED] 32
[01:12:01] [PASSED] 256
[01:12:01] ================ [PASSED] test_range_spare =================
[01:12:01] ===================== [PASSED] guc_dbm =====================
[01:12:01] =================== guc_idm (6 subtests) ===================
[01:12:01] [PASSED] bad_init
[01:12:01] [PASSED] no_init
[01:12:01] [PASSED] init_fini
[01:12:01] [PASSED] check_used
[01:12:01] [PASSED] check_quota
[01:12:01] [PASSED] check_all
[01:12:01] ===================== [PASSED] guc_idm =====================
[01:12:01] ================== no_relay (3 subtests) ===================
[01:12:01] [PASSED] xe_drops_guc2pf_if_not_ready
[01:12:01] [PASSED] xe_drops_guc2vf_if_not_ready
[01:12:01] [PASSED] xe_rejects_send_if_not_ready
[01:12:01] ==================== [PASSED] no_relay =====================
[01:12:01] ================== pf_relay (14 subtests) ==================
[01:12:01] [PASSED] pf_rejects_guc2pf_too_short
[01:12:01] [PASSED] pf_rejects_guc2pf_too_long
[01:12:01] [PASSED] pf_rejects_guc2pf_no_payload
[01:12:01] [PASSED] pf_fails_no_payload
[01:12:01] [PASSED] pf_fails_bad_origin
[01:12:01] [PASSED] pf_fails_bad_type
[01:12:01] [PASSED] pf_txn_reports_error
[01:12:01] [PASSED] pf_txn_sends_pf2guc
[01:12:01] [PASSED] pf_sends_pf2guc
[01:12:01] [SKIPPED] pf_loopback_nop
[01:12:01] [SKIPPED] pf_loopback_echo
[01:12:01] [SKIPPED] pf_loopback_fail
[01:12:01] [SKIPPED] pf_loopback_busy
[01:12:01] [SKIPPED] pf_loopback_retry
[01:12:01] ==================== [PASSED] pf_relay =====================
[01:12:01] ================== vf_relay (3 subtests) ===================
[01:12:01] [PASSED] vf_rejects_guc2vf_too_short
[01:12:01] [PASSED] vf_rejects_guc2vf_too_long
[01:12:01] [PASSED] vf_rejects_guc2vf_no_payload
[01:12:01] ==================== [PASSED] vf_relay =====================
[01:12:01] ================ pf_gt_config (6 subtests) =================
[01:12:01] [PASSED] fair_contexts_1vf
[01:12:01] [PASSED] fair_doorbells_1vf
[01:12:01] [PASSED] fair_ggtt_1vf
[01:12:01] ====================== fair_contexts ======================
[01:12:01] [PASSED] 1 VF
[01:12:01] [PASSED] 2 VFs
[01:12:01] [PASSED] 3 VFs
[01:12:01] [PASSED] 4 VFs
[01:12:01] [PASSED] 5 VFs
[01:12:01] [PASSED] 6 VFs
[01:12:01] [PASSED] 7 VFs
[01:12:01] [PASSED] 8 VFs
[01:12:01] [PASSED] 9 VFs
[01:12:01] [PASSED] 10 VFs
[01:12:01] [PASSED] 11 VFs
[01:12:01] [PASSED] 12 VFs
[01:12:01] [PASSED] 13 VFs
[01:12:01] [PASSED] 14 VFs
[01:12:01] [PASSED] 15 VFs
[01:12:01] [PASSED] 16 VFs
[01:12:01] [PASSED] 17 VFs
[01:12:01] [PASSED] 18 VFs
[01:12:01] [PASSED] 19 VFs
[01:12:01] [PASSED] 20 VFs
[01:12:01] [PASSED] 21 VFs
[01:12:01] [PASSED] 22 VFs
[01:12:01] [PASSED] 23 VFs
[01:12:01] [PASSED] 24 VFs
[01:12:01] [PASSED] 25 VFs
[01:12:01] [PASSED] 26 VFs
[01:12:01] [PASSED] 27 VFs
[01:12:01] [PASSED] 28 VFs
[01:12:01] [PASSED] 29 VFs
[01:12:01] [PASSED] 30 VFs
[01:12:01] [PASSED] 31 VFs
[01:12:01] [PASSED] 32 VFs
[01:12:01] [PASSED] 33 VFs
[01:12:01] [PASSED] 34 VFs
[01:12:01] [PASSED] 35 VFs
[01:12:01] [PASSED] 36 VFs
[01:12:01] [PASSED] 37 VFs
[01:12:01] [PASSED] 38 VFs
[01:12:01] [PASSED] 39 VFs
[01:12:01] [PASSED] 40 VFs
[01:12:01] [PASSED] 41 VFs
[01:12:01] [PASSED] 42 VFs
[01:12:01] [PASSED] 43 VFs
[01:12:01] [PASSED] 44 VFs
[01:12:01] [PASSED] 45 VFs
[01:12:01] [PASSED] 46 VFs
[01:12:01] [PASSED] 47 VFs
[01:12:01] [PASSED] 48 VFs
[01:12:01] [PASSED] 49 VFs
[01:12:01] [PASSED] 50 VFs
[01:12:01] [PASSED] 51 VFs
[01:12:01] [PASSED] 52 VFs
[01:12:01] [PASSED] 53 VFs
[01:12:01] [PASSED] 54 VFs
[01:12:01] [PASSED] 55 VFs
[01:12:01] [PASSED] 56 VFs
[01:12:01] [PASSED] 57 VFs
[01:12:01] [PASSED] 58 VFs
[01:12:01] [PASSED] 59 VFs
[01:12:01] [PASSED] 60 VFs
[01:12:01] [PASSED] 61 VFs
[01:12:01] [PASSED] 62 VFs
[01:12:01] [PASSED] 63 VFs
[01:12:01] ================== [PASSED] fair_contexts ==================
[01:12:01] ===================== fair_doorbells ======================
[01:12:01] [PASSED] 1 VF
[01:12:01] [PASSED] 2 VFs
[01:12:01] [PASSED] 3 VFs
[01:12:01] [PASSED] 4 VFs
[01:12:01] [PASSED] 5 VFs
[01:12:01] [PASSED] 6 VFs
[01:12:01] [PASSED] 7 VFs
[01:12:01] [PASSED] 8 VFs
[01:12:01] [PASSED] 9 VFs
[01:12:01] [PASSED] 10 VFs
[01:12:01] [PASSED] 11 VFs
[01:12:01] [PASSED] 12 VFs
[01:12:01] [PASSED] 13 VFs
[01:12:01] [PASSED] 14 VFs
[01:12:01] [PASSED] 15 VFs
[01:12:01] [PASSED] 16 VFs
[01:12:01] [PASSED] 17 VFs
[01:12:01] [PASSED] 18 VFs
[01:12:01] [PASSED] 19 VFs
[01:12:01] [PASSED] 20 VFs
[01:12:01] [PASSED] 21 VFs
[01:12:01] [PASSED] 22 VFs
[01:12:01] [PASSED] 23 VFs
[01:12:01] [PASSED] 24 VFs
[01:12:01] [PASSED] 25 VFs
[01:12:01] [PASSED] 26 VFs
[01:12:01] [PASSED] 27 VFs
[01:12:01] [PASSED] 28 VFs
[01:12:01] [PASSED] 29 VFs
[01:12:01] [PASSED] 30 VFs
[01:12:01] [PASSED] 31 VFs
[01:12:01] [PASSED] 32 VFs
[01:12:01] [PASSED] 33 VFs
[01:12:01] [PASSED] 34 VFs
[01:12:01] [PASSED] 35 VFs
[01:12:01] [PASSED] 36 VFs
[01:12:01] [PASSED] 37 VFs
[01:12:01] [PASSED] 38 VFs
[01:12:01] [PASSED] 39 VFs
[01:12:01] [PASSED] 40 VFs
[01:12:01] [PASSED] 41 VFs
[01:12:01] [PASSED] 42 VFs
[01:12:01] [PASSED] 43 VFs
[01:12:01] [PASSED] 44 VFs
[01:12:01] [PASSED] 45 VFs
[01:12:01] [PASSED] 46 VFs
[01:12:01] [PASSED] 47 VFs
[01:12:02] [PASSED] 48 VFs
[01:12:02] [PASSED] 49 VFs
[01:12:02] [PASSED] 50 VFs
[01:12:02] [PASSED] 51 VFs
[01:12:02] [PASSED] 52 VFs
[01:12:02] [PASSED] 53 VFs
[01:12:02] [PASSED] 54 VFs
[01:12:02] [PASSED] 55 VFs
[01:12:02] [PASSED] 56 VFs
[01:12:02] [PASSED] 57 VFs
[01:12:02] [PASSED] 58 VFs
[01:12:02] [PASSED] 59 VFs
[01:12:02] [PASSED] 60 VFs
[01:12:02] [PASSED] 61 VFs
[01:12:02] [PASSED] 62 VFs
[01:12:02] [PASSED] 63 VFs
[01:12:02] ================= [PASSED] fair_doorbells ==================
[01:12:02] ======================== fair_ggtt ========================
[01:12:02] [PASSED] 1 VF
[01:12:02] [PASSED] 2 VFs
[01:12:02] [PASSED] 3 VFs
[01:12:02] [PASSED] 4 VFs
[01:12:02] [PASSED] 5 VFs
[01:12:02] [PASSED] 6 VFs
[01:12:02] [PASSED] 7 VFs
[01:12:02] [PASSED] 8 VFs
[01:12:02] [PASSED] 9 VFs
[01:12:02] [PASSED] 10 VFs
[01:12:02] [PASSED] 11 VFs
[01:12:02] [PASSED] 12 VFs
[01:12:02] [PASSED] 13 VFs
[01:12:02] [PASSED] 14 VFs
[01:12:02] [PASSED] 15 VFs
[01:12:02] [PASSED] 16 VFs
[01:12:02] [PASSED] 17 VFs
[01:12:02] [PASSED] 18 VFs
[01:12:02] [PASSED] 19 VFs
[01:12:02] [PASSED] 20 VFs
[01:12:02] [PASSED] 21 VFs
[01:12:02] [PASSED] 22 VFs
[01:12:02] [PASSED] 23 VFs
[01:12:02] [PASSED] 24 VFs
[01:12:02] [PASSED] 25 VFs
[01:12:02] [PASSED] 26 VFs
[01:12:02] [PASSED] 27 VFs
[01:12:02] [PASSED] 28 VFs
[01:12:02] [PASSED] 29 VFs
[01:12:02] [PASSED] 30 VFs
[01:12:02] [PASSED] 31 VFs
[01:12:02] [PASSED] 32 VFs
[01:12:02] [PASSED] 33 VFs
[01:12:02] [PASSED] 34 VFs
[01:12:02] [PASSED] 35 VFs
[01:12:02] [PASSED] 36 VFs
[01:12:02] [PASSED] 37 VFs
[01:12:02] [PASSED] 38 VFs
[01:12:02] [PASSED] 39 VFs
[01:12:02] [PASSED] 40 VFs
[01:12:02] [PASSED] 41 VFs
[01:12:02] [PASSED] 42 VFs
[01:12:02] [PASSED] 43 VFs
[01:12:02] [PASSED] 44 VFs
[01:12:02] [PASSED] 45 VFs
[01:12:02] [PASSED] 46 VFs
[01:12:02] [PASSED] 47 VFs
[01:12:02] [PASSED] 48 VFs
[01:12:02] [PASSED] 49 VFs
[01:12:02] [PASSED] 50 VFs
[01:12:02] [PASSED] 51 VFs
[01:12:02] [PASSED] 52 VFs
[01:12:02] [PASSED] 53 VFs
[01:12:02] [PASSED] 54 VFs
[01:12:02] [PASSED] 55 VFs
[01:12:02] [PASSED] 56 VFs
[01:12:02] [PASSED] 57 VFs
[01:12:02] [PASSED] 58 VFs
[01:12:02] [PASSED] 59 VFs
[01:12:02] [PASSED] 60 VFs
[01:12:02] [PASSED] 61 VFs
[01:12:02] [PASSED] 62 VFs
[01:12:02] [PASSED] 63 VFs
[01:12:02] ==================== [PASSED] fair_ggtt ====================
[01:12:02] ================== [PASSED] pf_gt_config ===================
[01:12:02] ===================== lmtt (1 subtest) =====================
[01:12:02] ======================== test_ops =========================
[01:12:02] [PASSED] 2-level
[01:12:02] [PASSED] multi-level
[01:12:02] ==================== [PASSED] test_ops =====================
[01:12:02] ====================== [PASSED] lmtt =======================
[01:12:02] ================= pf_service (11 subtests) =================
[01:12:02] [PASSED] pf_negotiate_any
[01:12:02] [PASSED] pf_negotiate_base_match
[01:12:02] [PASSED] pf_negotiate_base_newer
[01:12:02] [PASSED] pf_negotiate_base_next
[01:12:02] [SKIPPED] pf_negotiate_base_older
[01:12:02] [PASSED] pf_negotiate_base_prev
[01:12:02] [PASSED] pf_negotiate_latest_match
[01:12:02] [PASSED] pf_negotiate_latest_newer
[01:12:02] [PASSED] pf_negotiate_latest_next
[01:12:02] [SKIPPED] pf_negotiate_latest_older
[01:12:02] [SKIPPED] pf_negotiate_latest_prev
[01:12:02] =================== [PASSED] pf_service ====================
[01:12:02] ================= xe_guc_g2g (2 subtests) ==================
[01:12:02] ============== xe_live_guc_g2g_kunit_default ==============
[01:12:02] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[01:12:02] ============== xe_live_guc_g2g_kunit_allmem ===============
[01:12:02] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[01:12:02] =================== [SKIPPED] xe_guc_g2g ===================
[01:12:02] =================== xe_mocs (2 subtests) ===================
[01:12:02] ================ xe_live_mocs_kernel_kunit ================
[01:12:02] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[01:12:02] ================ xe_live_mocs_reset_kunit =================
[01:12:02] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[01:12:02] ==================== [SKIPPED] xe_mocs =====================
[01:12:02] ================= xe_migrate (2 subtests) ==================
[01:12:02] ================= xe_migrate_sanity_kunit =================
[01:12:02] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[01:12:02] ================== xe_validate_ccs_kunit ==================
[01:12:02] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[01:12:02] =================== [SKIPPED] xe_migrate ===================
[01:12:02] ================== xe_dma_buf (1 subtest) ==================
[01:12:02] ==================== xe_dma_buf_kunit =====================
[01:12:02] ================ [SKIPPED] xe_dma_buf_kunit ================
[01:12:02] =================== [SKIPPED] xe_dma_buf ===================
[01:12:02] ================= xe_bo_shrink (1 subtest) =================
[01:12:02] =================== xe_bo_shrink_kunit ====================
[01:12:02] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[01:12:02] ================== [SKIPPED] xe_bo_shrink ==================
[01:12:02] ==================== xe_bo (2 subtests) ====================
[01:12:02] ================== xe_ccs_migrate_kunit ===================
[01:12:02] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[01:12:02] ==================== xe_bo_evict_kunit ====================
[01:12:02] =============== [SKIPPED] xe_bo_evict_kunit ================
[01:12:02] ===================== [SKIPPED] xe_bo ======================
[01:12:02] ==================== args (11 subtests) ====================
[01:12:02] [PASSED] count_args_test
[01:12:02] [PASSED] call_args_example
[01:12:02] [PASSED] call_args_test
[01:12:02] [PASSED] drop_first_arg_example
[01:12:02] [PASSED] drop_first_arg_test
[01:12:02] [PASSED] first_arg_example
[01:12:02] [PASSED] first_arg_test
[01:12:02] [PASSED] last_arg_example
[01:12:02] [PASSED] last_arg_test
[01:12:02] [PASSED] pick_arg_example
[01:12:02] [PASSED] sep_comma_example
[01:12:02] ====================== [PASSED] args =======================
[01:12:02] =================== xe_pci (3 subtests) ====================
[01:12:02] ==================== check_graphics_ip ====================
[01:12:02] [PASSED] 12.00 Xe_LP
[01:12:02] [PASSED] 12.10 Xe_LP+
[01:12:02] [PASSED] 12.55 Xe_HPG
[01:12:02] [PASSED] 12.60 Xe_HPC
[01:12:02] [PASSED] 12.70 Xe_LPG
[01:12:02] [PASSED] 12.71 Xe_LPG
[01:12:02] [PASSED] 12.74 Xe_LPG+
[01:12:02] [PASSED] 20.01 Xe2_HPG
[01:12:02] [PASSED] 20.02 Xe2_HPG
[01:12:02] [PASSED] 20.04 Xe2_LPG
[01:12:02] [PASSED] 30.00 Xe3_LPG
[01:12:02] [PASSED] 30.01 Xe3_LPG
[01:12:02] [PASSED] 30.03 Xe3_LPG
[01:12:02] [PASSED] 30.04 Xe3_LPG
[01:12:02] [PASSED] 30.05 Xe3_LPG
[01:12:02] [PASSED] 35.11 Xe3p_XPC
[01:12:02] ================ [PASSED] check_graphics_ip ================
[01:12:02] ===================== check_media_ip ======================
[01:12:02] [PASSED] 12.00 Xe_M
[01:12:02] [PASSED] 12.55 Xe_HPM
[01:12:02] [PASSED] 13.00 Xe_LPM+
[01:12:02] [PASSED] 13.01 Xe2_HPM
[01:12:02] [PASSED] 20.00 Xe2_LPM
[01:12:02] [PASSED] 30.00 Xe3_LPM
[01:12:02] [PASSED] 30.02 Xe3_LPM
[01:12:02] [PASSED] 35.00 Xe3p_LPM
[01:12:02] [PASSED] 35.03 Xe3p_HPM
[01:12:02] ================= [PASSED] check_media_ip ==================
[01:12:02] =================== check_platform_desc ===================
[01:12:02] [PASSED] 0x9A60 (TIGERLAKE)
[01:12:02] [PASSED] 0x9A68 (TIGERLAKE)
[01:12:02] [PASSED] 0x9A70 (TIGERLAKE)
[01:12:02] [PASSED] 0x9A40 (TIGERLAKE)
[01:12:02] [PASSED] 0x9A49 (TIGERLAKE)
[01:12:02] [PASSED] 0x9A59 (TIGERLAKE)
[01:12:02] [PASSED] 0x9A78 (TIGERLAKE)
[01:12:02] [PASSED] 0x9AC0 (TIGERLAKE)
[01:12:02] [PASSED] 0x9AC9 (TIGERLAKE)
[01:12:02] [PASSED] 0x9AD9 (TIGERLAKE)
[01:12:02] [PASSED] 0x9AF8 (TIGERLAKE)
[01:12:02] [PASSED] 0x4C80 (ROCKETLAKE)
[01:12:02] [PASSED] 0x4C8A (ROCKETLAKE)
[01:12:02] [PASSED] 0x4C8B (ROCKETLAKE)
[01:12:02] [PASSED] 0x4C8C (ROCKETLAKE)
[01:12:02] [PASSED] 0x4C90 (ROCKETLAKE)
[01:12:02] [PASSED] 0x4C9A (ROCKETLAKE)
[01:12:02] [PASSED] 0x4680 (ALDERLAKE_S)
[01:12:02] [PASSED] 0x4682 (ALDERLAKE_S)
[01:12:02] [PASSED] 0x4688 (ALDERLAKE_S)
[01:12:02] [PASSED] 0x468A (ALDERLAKE_S)
[01:12:02] [PASSED] 0x468B (ALDERLAKE_S)
[01:12:02] [PASSED] 0x4690 (ALDERLAKE_S)
[01:12:02] [PASSED] 0x4692 (ALDERLAKE_S)
[01:12:02] [PASSED] 0x4693 (ALDERLAKE_S)
[01:12:02] [PASSED] 0x46A0 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46A1 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46A2 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46A3 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46A6 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46A8 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46AA (ALDERLAKE_P)
[01:12:02] [PASSED] 0x462A (ALDERLAKE_P)
[01:12:02] [PASSED] 0x4626 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x4628 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46B0 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46B1 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46B2 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46B3 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46C0 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46C1 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46C2 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46C3 (ALDERLAKE_P)
[01:12:02] [PASSED] 0x46D0 (ALDERLAKE_N)
[01:12:02] [PASSED] 0x46D1 (ALDERLAKE_N)
[01:12:02] [PASSED] 0x46D2 (ALDERLAKE_N)
[01:12:02] [PASSED] 0x46D3 (ALDERLAKE_N)
[01:12:02] [PASSED] 0x46D4 (ALDERLAKE_N)
[01:12:02] [PASSED] 0xA721 (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA7A1 (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA7A9 (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA7AC (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA7AD (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA720 (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA7A0 (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA7A8 (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA7AA (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA7AB (ALDERLAKE_P)
[01:12:02] [PASSED] 0xA780 (ALDERLAKE_S)
[01:12:02] [PASSED] 0xA781 (ALDERLAKE_S)
[01:12:02] [PASSED] 0xA782 (ALDERLAKE_S)
[01:12:02] [PASSED] 0xA783 (ALDERLAKE_S)
[01:12:02] [PASSED] 0xA788 (ALDERLAKE_S)
[01:12:02] [PASSED] 0xA789 (ALDERLAKE_S)
[01:12:02] [PASSED] 0xA78A (ALDERLAKE_S)
[01:12:02] [PASSED] 0xA78B (ALDERLAKE_S)
[01:12:02] [PASSED] 0x4905 (DG1)
[01:12:02] [PASSED] 0x4906 (DG1)
[01:12:02] [PASSED] 0x4907 (DG1)
[01:12:02] [PASSED] 0x4908 (DG1)
[01:12:02] [PASSED] 0x4909 (DG1)
[01:12:02] [PASSED] 0x56C0 (DG2)
[01:12:02] [PASSED] 0x56C2 (DG2)
[01:12:02] [PASSED] 0x56C1 (DG2)
[01:12:02] [PASSED] 0x7D51 (METEORLAKE)
[01:12:02] [PASSED] 0x7DD1 (METEORLAKE)
[01:12:02] [PASSED] 0x7D41 (METEORLAKE)
[01:12:02] [PASSED] 0x7D67 (METEORLAKE)
[01:12:02] [PASSED] 0xB640 (METEORLAKE)
[01:12:02] [PASSED] 0x56A0 (DG2)
[01:12:02] [PASSED] 0x56A1 (DG2)
[01:12:02] [PASSED] 0x56A2 (DG2)
[01:12:02] [PASSED] 0x56BE (DG2)
[01:12:02] [PASSED] 0x56BF (DG2)
[01:12:02] [PASSED] 0x5690 (DG2)
[01:12:02] [PASSED] 0x5691 (DG2)
[01:12:02] [PASSED] 0x5692 (DG2)
[01:12:02] [PASSED] 0x56A5 (DG2)
[01:12:02] [PASSED] 0x56A6 (DG2)
[01:12:02] [PASSED] 0x56B0 (DG2)
[01:12:02] [PASSED] 0x56B1 (DG2)
[01:12:02] [PASSED] 0x56BA (DG2)
[01:12:02] [PASSED] 0x56BB (DG2)
[01:12:02] [PASSED] 0x56BC (DG2)
[01:12:02] [PASSED] 0x56BD (DG2)
[01:12:02] [PASSED] 0x5693 (DG2)
[01:12:02] [PASSED] 0x5694 (DG2)
[01:12:02] [PASSED] 0x5695 (DG2)
[01:12:02] [PASSED] 0x56A3 (DG2)
[01:12:02] [PASSED] 0x56A4 (DG2)
[01:12:02] [PASSED] 0x56B2 (DG2)
[01:12:02] [PASSED] 0x56B3 (DG2)
[01:12:02] [PASSED] 0x5696 (DG2)
[01:12:02] [PASSED] 0x5697 (DG2)
[01:12:02] [PASSED] 0xB69 (PVC)
[01:12:02] [PASSED] 0xB6E (PVC)
[01:12:02] [PASSED] 0xBD4 (PVC)
[01:12:02] [PASSED] 0xBD5 (PVC)
[01:12:02] [PASSED] 0xBD6 (PVC)
[01:12:02] [PASSED] 0xBD7 (PVC)
[01:12:02] [PASSED] 0xBD8 (PVC)
[01:12:02] [PASSED] 0xBD9 (PVC)
[01:12:02] [PASSED] 0xBDA (PVC)
[01:12:02] [PASSED] 0xBDB (PVC)
[01:12:02] [PASSED] 0xBE0 (PVC)
[01:12:02] [PASSED] 0xBE1 (PVC)
[01:12:02] [PASSED] 0xBE5 (PVC)
[01:12:02] [PASSED] 0x7D40 (METEORLAKE)
[01:12:02] [PASSED] 0x7D45 (METEORLAKE)
[01:12:02] [PASSED] 0x7D55 (METEORLAKE)
[01:12:02] [PASSED] 0x7D60 (METEORLAKE)
[01:12:02] [PASSED] 0x7DD5 (METEORLAKE)
[01:12:02] [PASSED] 0x6420 (LUNARLAKE)
[01:12:02] [PASSED] 0x64A0 (LUNARLAKE)
[01:12:02] [PASSED] 0x64B0 (LUNARLAKE)
[01:12:02] [PASSED] 0xE202 (BATTLEMAGE)
[01:12:02] [PASSED] 0xE209 (BATTLEMAGE)
[01:12:02] [PASSED] 0xE20B (BATTLEMAGE)
[01:12:02] [PASSED] 0xE20C (BATTLEMAGE)
[01:12:02] [PASSED] 0xE20D (BATTLEMAGE)
[01:12:02] [PASSED] 0xE210 (BATTLEMAGE)
[01:12:02] [PASSED] 0xE211 (BATTLEMAGE)
[01:12:02] [PASSED] 0xE212 (BATTLEMAGE)
[01:12:02] [PASSED] 0xE216 (BATTLEMAGE)
[01:12:02] [PASSED] 0xE220 (BATTLEMAGE)
[01:12:02] [PASSED] 0xE221 (BATTLEMAGE)
[01:12:02] [PASSED] 0xE222 (BATTLEMAGE)
[01:12:02] [PASSED] 0xE223 (BATTLEMAGE)
[01:12:02] [PASSED] 0xB080 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB081 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB082 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB083 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB084 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB085 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB086 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB087 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB08F (PANTHERLAKE)
[01:12:02] [PASSED] 0xB090 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB0A0 (PANTHERLAKE)
[01:12:02] [PASSED] 0xB0B0 (PANTHERLAKE)
[01:12:02] [PASSED] 0xD740 (NOVALAKE_S)
[01:12:02] [PASSED] 0xD741 (NOVALAKE_S)
[01:12:02] [PASSED] 0xD742 (NOVALAKE_S)
[01:12:02] [PASSED] 0xD743 (NOVALAKE_S)
[01:12:02] [PASSED] 0xD744 (NOVALAKE_S)
[01:12:02] [PASSED] 0xD745 (NOVALAKE_S)
[01:12:02] [PASSED] 0x674C (CRESCENTISLAND)
[01:12:02] [PASSED] 0xFD80 (PANTHERLAKE)
[01:12:02] [PASSED] 0xFD81 (PANTHERLAKE)
[01:12:02] =============== [PASSED] check_platform_desc ===============
[01:12:02] ===================== [PASSED] xe_pci ======================
[01:12:02] =================== xe_rtp (2 subtests) ====================
[01:12:02] =============== xe_rtp_process_to_sr_tests ================
[01:12:02] [PASSED] coalesce-same-reg
[01:12:02] [PASSED] no-match-no-add
[01:12:02] [PASSED] match-or
[01:12:02] [PASSED] match-or-xfail
[01:12:02] [PASSED] no-match-no-add-multiple-rules
[01:12:02] [PASSED] two-regs-two-entries
[01:12:02] [PASSED] clr-one-set-other
[01:12:02] [PASSED] set-field
[01:12:02] [PASSED] conflict-duplicate
[01:12:02] [PASSED] conflict-not-disjoint
[01:12:02] [PASSED] conflict-reg-type
[01:12:02] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[01:12:02] ================== xe_rtp_process_tests ===================
[01:12:02] [PASSED] active1
[01:12:02] [PASSED] active2
[01:12:02] [PASSED] active-inactive
[01:12:02] [PASSED] inactive-active
[01:12:02] [PASSED] inactive-1st_or_active-inactive
[01:12:02] [PASSED] inactive-2nd_or_active-inactive
[01:12:02] [PASSED] inactive-last_or_active-inactive
[01:12:02] [PASSED] inactive-no_or_active-inactive
[01:12:02] ============== [PASSED] xe_rtp_process_tests ===============
[01:12:02] ===================== [PASSED] xe_rtp ======================
[01:12:02] ==================== xe_wa (1 subtest) =====================
[01:12:02] ======================== xe_wa_gt =========================
[01:12:02] [PASSED] TIGERLAKE B0
[01:12:02] [PASSED] DG1 A0
[01:12:02] [PASSED] DG1 B0
[01:12:02] [PASSED] ALDERLAKE_S A0
[01:12:02] [PASSED] ALDERLAKE_S B0
[01:12:02] [PASSED] ALDERLAKE_S C0
[01:12:02] [PASSED] ALDERLAKE_S D0
[01:12:02] [PASSED] ALDERLAKE_P A0
[01:12:02] [PASSED] ALDERLAKE_P B0
[01:12:02] [PASSED] ALDERLAKE_P C0
[01:12:02] [PASSED] ALDERLAKE_S RPLS D0
[01:12:02] [PASSED] ALDERLAKE_P RPLU E0
[01:12:02] [PASSED] DG2 G10 C0
[01:12:02] [PASSED] DG2 G11 B1
[01:12:02] [PASSED] DG2 G12 A1
[01:12:02] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[01:12:02] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[01:12:02] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[01:12:02] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[01:12:02] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[01:12:02] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[01:12:02] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[01:12:02] ==================== [PASSED] xe_wa_gt =====================
[01:12:02] ====================== [PASSED] xe_wa ======================
[01:12:02] ============================================================
[01:12:02] Testing complete. Ran 510 tests: passed: 492, skipped: 18
[01:12:02] Elapsed time: 35.269s total, 4.214s configuring, 30.539s building, 0.463s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[01:12:02] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[01:12:03] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[01:12:28] Starting KUnit Kernel (1/1)...
[01:12:28] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[01:12:28] ============ drm_test_pick_cmdline (2 subtests) ============
[01:12:28] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[01:12:28] =============== drm_test_pick_cmdline_named ===============
[01:12:28] [PASSED] NTSC
[01:12:28] [PASSED] NTSC-J
[01:12:28] [PASSED] PAL
[01:12:28] [PASSED] PAL-M
[01:12:28] =========== [PASSED] drm_test_pick_cmdline_named ===========
[01:12:28] ============== [PASSED] drm_test_pick_cmdline ==============
[01:12:28] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[01:12:28] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[01:12:28] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[01:12:28] =========== drm_validate_clone_mode (2 subtests) ===========
[01:12:28] ============== drm_test_check_in_clone_mode ===============
[01:12:28] [PASSED] in_clone_mode
[01:12:28] [PASSED] not_in_clone_mode
[01:12:28] ========== [PASSED] drm_test_check_in_clone_mode ===========
[01:12:28] =============== drm_test_check_valid_clones ===============
[01:12:28] [PASSED] not_in_clone_mode
[01:12:28] [PASSED] valid_clone
[01:12:28] [PASSED] invalid_clone
[01:12:28] =========== [PASSED] drm_test_check_valid_clones ===========
[01:12:28] ============= [PASSED] drm_validate_clone_mode =============
[01:12:28] ============= drm_validate_modeset (1 subtest) =============
[01:12:28] [PASSED] drm_test_check_connector_changed_modeset
[01:12:28] ============== [PASSED] drm_validate_modeset ===============
[01:12:28] ====== drm_test_bridge_get_current_state (2 subtests) ======
[01:12:28] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[01:12:28] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[01:12:28] ======== [PASSED] drm_test_bridge_get_current_state ========
[01:12:28] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[01:12:28] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[01:12:28] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[01:12:28] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[01:12:28] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[01:12:28] ============== drm_bridge_alloc (2 subtests) ===============
[01:12:28] [PASSED] drm_test_drm_bridge_alloc_basic
[01:12:28] [PASSED] drm_test_drm_bridge_alloc_get_put
[01:12:28] ================ [PASSED] drm_bridge_alloc =================
[01:12:28] ================== drm_buddy (8 subtests) ==================
[01:12:28] [PASSED] drm_test_buddy_alloc_limit
[01:12:28] [PASSED] drm_test_buddy_alloc_optimistic
[01:12:28] [PASSED] drm_test_buddy_alloc_pessimistic
[01:12:28] [PASSED] drm_test_buddy_alloc_pathological
[01:12:28] [PASSED] drm_test_buddy_alloc_contiguous
[01:12:28] [PASSED] drm_test_buddy_alloc_clear
[01:12:29] [PASSED] drm_test_buddy_alloc_range_bias
[01:12:29] [PASSED] drm_test_buddy_fragmentation_performance
[01:12:29] ==================== [PASSED] drm_buddy ====================
[01:12:29] ============= drm_cmdline_parser (40 subtests) =============
[01:12:29] [PASSED] drm_test_cmdline_force_d_only
[01:12:29] [PASSED] drm_test_cmdline_force_D_only_dvi
[01:12:29] [PASSED] drm_test_cmdline_force_D_only_hdmi
[01:12:29] [PASSED] drm_test_cmdline_force_D_only_not_digital
[01:12:29] [PASSED] drm_test_cmdline_force_e_only
[01:12:29] [PASSED] drm_test_cmdline_res
[01:12:29] [PASSED] drm_test_cmdline_res_vesa
[01:12:29] [PASSED] drm_test_cmdline_res_vesa_rblank
[01:12:29] [PASSED] drm_test_cmdline_res_rblank
[01:12:29] [PASSED] drm_test_cmdline_res_bpp
[01:12:29] [PASSED] drm_test_cmdline_res_refresh
[01:12:29] [PASSED] drm_test_cmdline_res_bpp_refresh
[01:12:29] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[01:12:29] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[01:12:29] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[01:12:29] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[01:12:29] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[01:12:29] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[01:12:29] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[01:12:29] [PASSED] drm_test_cmdline_res_margins_force_on
[01:12:29] [PASSED] drm_test_cmdline_res_vesa_margins
[01:12:29] [PASSED] drm_test_cmdline_name
[01:12:29] [PASSED] drm_test_cmdline_name_bpp
[01:12:29] [PASSED] drm_test_cmdline_name_option
[01:12:29] [PASSED] drm_test_cmdline_name_bpp_option
[01:12:29] [PASSED] drm_test_cmdline_rotate_0
[01:12:29] [PASSED] drm_test_cmdline_rotate_90
[01:12:29] [PASSED] drm_test_cmdline_rotate_180
[01:12:29] [PASSED] drm_test_cmdline_rotate_270
[01:12:29] [PASSED] drm_test_cmdline_hmirror
[01:12:29] [PASSED] drm_test_cmdline_vmirror
[01:12:29] [PASSED] drm_test_cmdline_margin_options
[01:12:29] [PASSED] drm_test_cmdline_multiple_options
[01:12:29] [PASSED] drm_test_cmdline_bpp_extra_and_option
[01:12:29] [PASSED] drm_test_cmdline_extra_and_option
[01:12:29] [PASSED] drm_test_cmdline_freestanding_options
[01:12:29] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[01:12:29] [PASSED] drm_test_cmdline_panel_orientation
[01:12:29] ================ drm_test_cmdline_invalid =================
[01:12:29] [PASSED] margin_only
[01:12:29] [PASSED] interlace_only
[01:12:29] [PASSED] res_missing_x
[01:12:29] [PASSED] res_missing_y
[01:12:29] [PASSED] res_bad_y
[01:12:29] [PASSED] res_missing_y_bpp
[01:12:29] [PASSED] res_bad_bpp
[01:12:29] [PASSED] res_bad_refresh
[01:12:29] [PASSED] res_bpp_refresh_force_on_off
[01:12:29] [PASSED] res_invalid_mode
[01:12:29] [PASSED] res_bpp_wrong_place_mode
[01:12:29] [PASSED] name_bpp_refresh
[01:12:29] [PASSED] name_refresh
[01:12:29] [PASSED] name_refresh_wrong_mode
[01:12:29] [PASSED] name_refresh_invalid_mode
[01:12:29] [PASSED] rotate_multiple
[01:12:29] [PASSED] rotate_invalid_val
[01:12:29] [PASSED] rotate_truncated
[01:12:29] [PASSED] invalid_option
[01:12:29] [PASSED] invalid_tv_option
[01:12:29] [PASSED] truncated_tv_option
[01:12:29] ============ [PASSED] drm_test_cmdline_invalid =============
[01:12:29] =============== drm_test_cmdline_tv_options ===============
[01:12:29] [PASSED] NTSC
[01:12:29] [PASSED] NTSC_443
[01:12:29] [PASSED] NTSC_J
[01:12:29] [PASSED] PAL
[01:12:29] [PASSED] PAL_M
[01:12:29] [PASSED] PAL_N
[01:12:29] [PASSED] SECAM
[01:12:29] [PASSED] MONO_525
[01:12:29] [PASSED] MONO_625
[01:12:29] =========== [PASSED] drm_test_cmdline_tv_options ===========
[01:12:29] =============== [PASSED] drm_cmdline_parser ================
[01:12:29] ========== drmm_connector_hdmi_init (20 subtests) ==========
[01:12:29] [PASSED] drm_test_connector_hdmi_init_valid
[01:12:29] [PASSED] drm_test_connector_hdmi_init_bpc_8
[01:12:29] [PASSED] drm_test_connector_hdmi_init_bpc_10
[01:12:29] [PASSED] drm_test_connector_hdmi_init_bpc_12
[01:12:29] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[01:12:29] [PASSED] drm_test_connector_hdmi_init_bpc_null
[01:12:29] [PASSED] drm_test_connector_hdmi_init_formats_empty
[01:12:29] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[01:12:29] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[01:12:29] [PASSED] supported_formats=0x9 yuv420_allowed=1
[01:12:29] [PASSED] supported_formats=0x9 yuv420_allowed=0
[01:12:29] [PASSED] supported_formats=0x3 yuv420_allowed=1
[01:12:29] [PASSED] supported_formats=0x3 yuv420_allowed=0
[01:12:29] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[01:12:29] [PASSED] drm_test_connector_hdmi_init_null_ddc
[01:12:29] [PASSED] drm_test_connector_hdmi_init_null_product
[01:12:29] [PASSED] drm_test_connector_hdmi_init_null_vendor
[01:12:29] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[01:12:29] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[01:12:29] [PASSED] drm_test_connector_hdmi_init_product_valid
[01:12:29] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[01:12:29] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[01:12:29] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[01:12:29] ========= drm_test_connector_hdmi_init_type_valid =========
[01:12:29] [PASSED] HDMI-A
[01:12:29] [PASSED] HDMI-B
[01:12:29] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[01:12:29] ======== drm_test_connector_hdmi_init_type_invalid ========
[01:12:29] [PASSED] Unknown
[01:12:29] [PASSED] VGA
[01:12:29] [PASSED] DVI-I
[01:12:29] [PASSED] DVI-D
[01:12:29] [PASSED] DVI-A
[01:12:29] [PASSED] Composite
[01:12:29] [PASSED] SVIDEO
[01:12:29] [PASSED] LVDS
[01:12:29] [PASSED] Component
[01:12:29] [PASSED] DIN
[01:12:29] [PASSED] DP
[01:12:29] [PASSED] TV
[01:12:29] [PASSED] eDP
[01:12:29] [PASSED] Virtual
[01:12:29] [PASSED] DSI
[01:12:29] [PASSED] DPI
[01:12:29] [PASSED] Writeback
[01:12:29] [PASSED] SPI
[01:12:29] [PASSED] USB
[01:12:29] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[01:12:29] ============ [PASSED] drmm_connector_hdmi_init =============
[01:12:29] ============= drmm_connector_init (3 subtests) =============
[01:12:29] [PASSED] drm_test_drmm_connector_init
[01:12:29] [PASSED] drm_test_drmm_connector_init_null_ddc
[01:12:29] ========= drm_test_drmm_connector_init_type_valid =========
[01:12:29] [PASSED] Unknown
[01:12:29] [PASSED] VGA
[01:12:29] [PASSED] DVI-I
[01:12:29] [PASSED] DVI-D
[01:12:29] [PASSED] DVI-A
[01:12:29] [PASSED] Composite
[01:12:29] [PASSED] SVIDEO
[01:12:29] [PASSED] LVDS
[01:12:29] [PASSED] Component
[01:12:29] [PASSED] DIN
[01:12:29] [PASSED] DP
[01:12:29] [PASSED] HDMI-A
[01:12:29] [PASSED] HDMI-B
[01:12:29] [PASSED] TV
[01:12:29] [PASSED] eDP
[01:12:29] [PASSED] Virtual
[01:12:29] [PASSED] DSI
[01:12:29] [PASSED] DPI
[01:12:29] [PASSED] Writeback
[01:12:29] [PASSED] SPI
[01:12:29] [PASSED] USB
[01:12:29] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[01:12:29] =============== [PASSED] drmm_connector_init ===============
[01:12:29] ========= drm_connector_dynamic_init (6 subtests) ==========
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_init
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_init_properties
[01:12:29] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[01:12:29] [PASSED] Unknown
[01:12:29] [PASSED] VGA
[01:12:29] [PASSED] DVI-I
[01:12:29] [PASSED] DVI-D
[01:12:29] [PASSED] DVI-A
[01:12:29] [PASSED] Composite
[01:12:29] [PASSED] SVIDEO
[01:12:29] [PASSED] LVDS
[01:12:29] [PASSED] Component
[01:12:29] [PASSED] DIN
[01:12:29] [PASSED] DP
[01:12:29] [PASSED] HDMI-A
[01:12:29] [PASSED] HDMI-B
[01:12:29] [PASSED] TV
[01:12:29] [PASSED] eDP
[01:12:29] [PASSED] Virtual
[01:12:29] [PASSED] DSI
[01:12:29] [PASSED] DPI
[01:12:29] [PASSED] Writeback
[01:12:29] [PASSED] SPI
[01:12:29] [PASSED] USB
[01:12:29] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[01:12:29] ======== drm_test_drm_connector_dynamic_init_name =========
[01:12:29] [PASSED] Unknown
[01:12:29] [PASSED] VGA
[01:12:29] [PASSED] DVI-I
[01:12:29] [PASSED] DVI-D
[01:12:29] [PASSED] DVI-A
[01:12:29] [PASSED] Composite
[01:12:29] [PASSED] SVIDEO
[01:12:29] [PASSED] LVDS
[01:12:29] [PASSED] Component
[01:12:29] [PASSED] DIN
[01:12:29] [PASSED] DP
[01:12:29] [PASSED] HDMI-A
[01:12:29] [PASSED] HDMI-B
[01:12:29] [PASSED] TV
[01:12:29] [PASSED] eDP
[01:12:29] [PASSED] Virtual
[01:12:29] [PASSED] DSI
[01:12:29] [PASSED] DPI
[01:12:29] [PASSED] Writeback
[01:12:29] [PASSED] SPI
[01:12:29] [PASSED] USB
[01:12:29] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[01:12:29] =========== [PASSED] drm_connector_dynamic_init ============
[01:12:29] ==== drm_connector_dynamic_register_early (4 subtests) =====
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[01:12:29] ====== [PASSED] drm_connector_dynamic_register_early =======
[01:12:29] ======= drm_connector_dynamic_register (7 subtests) ========
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[01:12:29] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[01:12:29] ========= [PASSED] drm_connector_dynamic_register ==========
[01:12:29] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[01:12:29] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[01:12:29] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[01:12:29] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[01:12:29] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[01:12:29] ========== drm_test_get_tv_mode_from_name_valid ===========
[01:12:29] [PASSED] NTSC
[01:12:29] [PASSED] NTSC-443
[01:12:29] [PASSED] NTSC-J
[01:12:29] [PASSED] PAL
[01:12:29] [PASSED] PAL-M
[01:12:29] [PASSED] PAL-N
[01:12:29] [PASSED] SECAM
[01:12:29] [PASSED] Mono
[01:12:29] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[01:12:29] [PASSED] drm_test_get_tv_mode_from_name_truncated
[01:12:29] ============ [PASSED] drm_get_tv_mode_from_name ============
[01:12:29] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[01:12:29] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[01:12:29] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[01:12:29] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[01:12:29] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[01:12:29] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[01:12:29] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[01:12:29] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[01:12:29] [PASSED] VIC 96
[01:12:29] [PASSED] VIC 97
[01:12:29] [PASSED] VIC 101
[01:12:29] [PASSED] VIC 102
[01:12:29] [PASSED] VIC 106
[01:12:29] [PASSED] VIC 107
[01:12:29] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[01:12:29] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[01:12:29] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[01:12:29] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[01:12:29] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[01:12:29] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[01:12:29] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[01:12:29] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[01:12:29] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[01:12:29] [PASSED] Automatic
[01:12:29] [PASSED] Full
[01:12:29] [PASSED] Limited 16:235
[01:12:29] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[01:12:29] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[01:12:29] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[01:12:29] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[01:12:29] === drm_test_drm_hdmi_connector_get_output_format_name ====
[01:12:29] [PASSED] RGB
[01:12:29] [PASSED] YUV 4:2:0
[01:12:29] [PASSED] YUV 4:2:2
[01:12:29] [PASSED] YUV 4:4:4
[01:12:29] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[01:12:29] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[01:12:29] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[01:12:29] ============= drm_damage_helper (21 subtests) ==============
[01:12:29] [PASSED] drm_test_damage_iter_no_damage
[01:12:29] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[01:12:29] [PASSED] drm_test_damage_iter_no_damage_src_moved
[01:12:29] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[01:12:29] [PASSED] drm_test_damage_iter_no_damage_not_visible
[01:12:29] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[01:12:29] [PASSED] drm_test_damage_iter_no_damage_no_fb
[01:12:29] [PASSED] drm_test_damage_iter_simple_damage
[01:12:29] [PASSED] drm_test_damage_iter_single_damage
[01:12:29] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[01:12:29] [PASSED] drm_test_damage_iter_single_damage_outside_src
[01:12:29] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[01:12:29] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[01:12:29] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[01:12:29] [PASSED] drm_test_damage_iter_single_damage_src_moved
[01:12:29] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[01:12:29] [PASSED] drm_test_damage_iter_damage
[01:12:29] [PASSED] drm_test_damage_iter_damage_one_intersect
[01:12:29] [PASSED] drm_test_damage_iter_damage_one_outside
[01:12:29] [PASSED] drm_test_damage_iter_damage_src_moved
[01:12:29] [PASSED] drm_test_damage_iter_damage_not_visible
[01:12:29] ================ [PASSED] drm_damage_helper ================
[01:12:29] ============== drm_dp_mst_helper (3 subtests) ==============
[01:12:29] ============== drm_test_dp_mst_calc_pbn_mode ==============
[01:12:29] [PASSED] Clock 154000 BPP 30 DSC disabled
[01:12:29] [PASSED] Clock 234000 BPP 30 DSC disabled
[01:12:29] [PASSED] Clock 297000 BPP 24 DSC disabled
[01:12:29] [PASSED] Clock 332880 BPP 24 DSC enabled
[01:12:29] [PASSED] Clock 324540 BPP 24 DSC enabled
[01:12:29] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[01:12:29] ============== drm_test_dp_mst_calc_pbn_div ===============
[01:12:29] [PASSED] Link rate 2000000 lane count 4
[01:12:29] [PASSED] Link rate 2000000 lane count 2
[01:12:29] [PASSED] Link rate 2000000 lane count 1
[01:12:29] [PASSED] Link rate 1350000 lane count 4
[01:12:29] [PASSED] Link rate 1350000 lane count 2
[01:12:29] [PASSED] Link rate 1350000 lane count 1
[01:12:29] [PASSED] Link rate 1000000 lane count 4
[01:12:29] [PASSED] Link rate 1000000 lane count 2
[01:12:29] [PASSED] Link rate 1000000 lane count 1
[01:12:29] [PASSED] Link rate 810000 lane count 4
[01:12:29] [PASSED] Link rate 810000 lane count 2
[01:12:29] [PASSED] Link rate 810000 lane count 1
[01:12:29] [PASSED] Link rate 540000 lane count 4
[01:12:29] [PASSED] Link rate 540000 lane count 2
[01:12:29] [PASSED] Link rate 540000 lane count 1
[01:12:29] [PASSED] Link rate 270000 lane count 4
[01:12:29] [PASSED] Link rate 270000 lane count 2
[01:12:29] [PASSED] Link rate 270000 lane count 1
[01:12:29] [PASSED] Link rate 162000 lane count 4
[01:12:29] [PASSED] Link rate 162000 lane count 2
[01:12:29] [PASSED] Link rate 162000 lane count 1
[01:12:29] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[01:12:29] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[01:12:29] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[01:12:29] [PASSED] DP_POWER_UP_PHY with port number
[01:12:29] [PASSED] DP_POWER_DOWN_PHY with port number
[01:12:29] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[01:12:29] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[01:12:29] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[01:12:29] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[01:12:29] [PASSED] DP_QUERY_PAYLOAD with port number
[01:12:29] [PASSED] DP_QUERY_PAYLOAD with VCPI
[01:12:29] [PASSED] DP_REMOTE_DPCD_READ with port number
[01:12:29] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[01:12:29] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[01:12:29] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[01:12:29] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[01:12:29] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[01:12:29] [PASSED] DP_REMOTE_I2C_READ with port number
[01:12:29] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[01:12:29] [PASSED] DP_REMOTE_I2C_READ with transactions array
[01:12:29] [PASSED] DP_REMOTE_I2C_WRITE with port number
[01:12:29] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[01:12:29] [PASSED] DP_REMOTE_I2C_WRITE with data array
[01:12:29] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[01:12:29] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[01:12:29] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[01:12:29] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[01:12:29] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[01:12:29] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[01:12:29] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[01:12:29] ================ [PASSED] drm_dp_mst_helper ================
[01:12:29] ================== drm_exec (7 subtests) ===================
[01:12:29] [PASSED] sanitycheck
[01:12:29] [PASSED] test_lock
[01:12:29] [PASSED] test_lock_unlock
[01:12:29] [PASSED] test_duplicates
[01:12:29] [PASSED] test_prepare
[01:12:29] [PASSED] test_prepare_array
[01:12:29] [PASSED] test_multiple_loops
[01:12:29] ==================== [PASSED] drm_exec =====================
[01:12:29] =========== drm_format_helper_test (17 subtests) ===========
[01:12:29] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[01:12:29] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[01:12:29] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[01:12:29] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[01:12:29] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[01:12:29] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[01:12:29] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[01:12:29] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[01:12:29] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[01:12:29] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[01:12:29] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[01:12:29] ============== drm_test_fb_xrgb8888_to_mono ===============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[01:12:29] ==================== drm_test_fb_swab =====================
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ================ [PASSED] drm_test_fb_swab =================
[01:12:29] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[01:12:29] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[01:12:29] [PASSED] single_pixel_source_buffer
[01:12:29] [PASSED] single_pixel_clip_rectangle
[01:12:29] [PASSED] well_known_colors
[01:12:29] [PASSED] destination_pitch
[01:12:29] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[01:12:29] ================= drm_test_fb_clip_offset =================
[01:12:29] [PASSED] pass through
[01:12:29] [PASSED] horizontal offset
[01:12:29] [PASSED] vertical offset
[01:12:29] [PASSED] horizontal and vertical offset
[01:12:29] [PASSED] horizontal offset (custom pitch)
[01:12:29] [PASSED] vertical offset (custom pitch)
[01:12:29] [PASSED] horizontal and vertical offset (custom pitch)
[01:12:29] ============= [PASSED] drm_test_fb_clip_offset =============
[01:12:29] =================== drm_test_fb_memcpy ====================
[01:12:29] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[01:12:29] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[01:12:29] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[01:12:29] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[01:12:29] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[01:12:29] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[01:12:29] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[01:12:29] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[01:12:29] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[01:12:29] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[01:12:29] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[01:12:29] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[01:12:29] =============== [PASSED] drm_test_fb_memcpy ================
[01:12:29] ============= [PASSED] drm_format_helper_test ==============
[01:12:29] ================= drm_format (18 subtests) =================
[01:12:29] [PASSED] drm_test_format_block_width_invalid
[01:12:29] [PASSED] drm_test_format_block_width_one_plane
[01:12:29] [PASSED] drm_test_format_block_width_two_plane
[01:12:29] [PASSED] drm_test_format_block_width_three_plane
[01:12:29] [PASSED] drm_test_format_block_width_tiled
[01:12:29] [PASSED] drm_test_format_block_height_invalid
[01:12:29] [PASSED] drm_test_format_block_height_one_plane
[01:12:29] [PASSED] drm_test_format_block_height_two_plane
[01:12:29] [PASSED] drm_test_format_block_height_three_plane
[01:12:29] [PASSED] drm_test_format_block_height_tiled
[01:12:29] [PASSED] drm_test_format_min_pitch_invalid
[01:12:29] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[01:12:29] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[01:12:29] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[01:12:29] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[01:12:29] [PASSED] drm_test_format_min_pitch_two_plane
[01:12:29] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[01:12:29] [PASSED] drm_test_format_min_pitch_tiled
[01:12:29] =================== [PASSED] drm_format ====================
[01:12:29] ============== drm_framebuffer (10 subtests) ===============
[01:12:29] ========== drm_test_framebuffer_check_src_coords ==========
[01:12:29] [PASSED] Success: source fits into fb
[01:12:29] [PASSED] Fail: overflowing fb with x-axis coordinate
[01:12:29] [PASSED] Fail: overflowing fb with y-axis coordinate
[01:12:29] [PASSED] Fail: overflowing fb with source width
[01:12:29] [PASSED] Fail: overflowing fb with source height
[01:12:29] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[01:12:29] [PASSED] drm_test_framebuffer_cleanup
[01:12:29] =============== drm_test_framebuffer_create ===============
[01:12:29] [PASSED] ABGR8888 normal sizes
[01:12:29] [PASSED] ABGR8888 max sizes
[01:12:29] [PASSED] ABGR8888 pitch greater than min required
[01:12:29] [PASSED] ABGR8888 pitch less than min required
[01:12:29] [PASSED] ABGR8888 Invalid width
[01:12:29] [PASSED] ABGR8888 Invalid buffer handle
[01:12:29] [PASSED] No pixel format
[01:12:29] [PASSED] ABGR8888 Width 0
[01:12:29] [PASSED] ABGR8888 Height 0
[01:12:29] [PASSED] ABGR8888 Out of bound height * pitch combination
[01:12:29] [PASSED] ABGR8888 Large buffer offset
[01:12:29] [PASSED] ABGR8888 Buffer offset for inexistent plane
[01:12:29] [PASSED] ABGR8888 Invalid flag
[01:12:29] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[01:12:29] [PASSED] ABGR8888 Valid buffer modifier
[01:12:29] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[01:12:29] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[01:12:29] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[01:12:29] [PASSED] NV12 Normal sizes
[01:12:29] [PASSED] NV12 Max sizes
[01:12:29] [PASSED] NV12 Invalid pitch
[01:12:29] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[01:12:29] [PASSED] NV12 different modifier per-plane
[01:12:29] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[01:12:29] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[01:12:29] [PASSED] NV12 Modifier for inexistent plane
[01:12:29] [PASSED] NV12 Handle for inexistent plane
[01:12:29] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[01:12:29] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[01:12:29] [PASSED] YVU420 Normal sizes
[01:12:29] [PASSED] YVU420 Max sizes
[01:12:29] [PASSED] YVU420 Invalid pitch
[01:12:29] [PASSED] YVU420 Different pitches
[01:12:29] [PASSED] YVU420 Different buffer offsets/pitches
[01:12:29] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[01:12:29] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[01:12:29] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[01:12:29] [PASSED] YVU420 Valid modifier
[01:12:29] [PASSED] YVU420 Different modifiers per plane
[01:12:29] [PASSED] YVU420 Modifier for inexistent plane
[01:12:29] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[01:12:29] [PASSED] X0L2 Normal sizes
[01:12:29] [PASSED] X0L2 Max sizes
[01:12:29] [PASSED] X0L2 Invalid pitch
[01:12:29] [PASSED] X0L2 Pitch greater than minimum required
[01:12:29] [PASSED] X0L2 Handle for inexistent plane
[01:12:29] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[01:12:29] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[01:12:29] [PASSED] X0L2 Valid modifier
[01:12:29] [PASSED] X0L2 Modifier for inexistent plane
[01:12:29] =========== [PASSED] drm_test_framebuffer_create ===========
[01:12:29] [PASSED] drm_test_framebuffer_free
[01:12:29] [PASSED] drm_test_framebuffer_init
[01:12:29] [PASSED] drm_test_framebuffer_init_bad_format
[01:12:29] [PASSED] drm_test_framebuffer_init_dev_mismatch
[01:12:29] [PASSED] drm_test_framebuffer_lookup
[01:12:29] [PASSED] drm_test_framebuffer_lookup_inexistent
[01:12:29] [PASSED] drm_test_framebuffer_modifiers_not_supported
[01:12:29] ================= [PASSED] drm_framebuffer =================
[01:12:29] ================ drm_gem_shmem (8 subtests) ================
[01:12:29] [PASSED] drm_gem_shmem_test_obj_create
[01:12:29] [PASSED] drm_gem_shmem_test_obj_create_private
[01:12:29] [PASSED] drm_gem_shmem_test_pin_pages
[01:12:29] [PASSED] drm_gem_shmem_test_vmap
[01:12:29] [PASSED] drm_gem_shmem_test_get_pages_sgt
[01:12:29] [PASSED] drm_gem_shmem_test_get_sg_table
[01:12:29] [PASSED] drm_gem_shmem_test_madvise
[01:12:29] [PASSED] drm_gem_shmem_test_purge
[01:12:29] ================== [PASSED] drm_gem_shmem ==================
[01:12:29] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[01:12:29] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[01:12:29] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[01:12:29] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[01:12:29] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[01:12:29] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[01:12:29] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[01:12:29] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[01:12:29] [PASSED] Automatic
[01:12:29] [PASSED] Full
[01:12:29] [PASSED] Limited 16:235
[01:12:29] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[01:12:29] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[01:12:29] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[01:12:29] [PASSED] drm_test_check_disable_connector
[01:12:29] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[01:12:29] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[01:12:29] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[01:12:29] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[01:12:29] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[01:12:29] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[01:12:29] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[01:12:29] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[01:12:29] [PASSED] drm_test_check_output_bpc_dvi
[01:12:29] [PASSED] drm_test_check_output_bpc_format_vic_1
[01:12:29] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[01:12:29] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[01:12:29] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[01:12:29] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[01:12:29] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[01:12:29] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[01:12:29] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[01:12:29] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[01:12:29] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[01:12:29] [PASSED] drm_test_check_broadcast_rgb_value
[01:12:29] [PASSED] drm_test_check_bpc_8_value
[01:12:29] [PASSED] drm_test_check_bpc_10_value
[01:12:29] [PASSED] drm_test_check_bpc_12_value
[01:12:29] [PASSED] drm_test_check_format_value
[01:12:29] [PASSED] drm_test_check_tmds_char_value
[01:12:29] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[01:12:29] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[01:12:29] [PASSED] drm_test_check_mode_valid
[01:12:29] [PASSED] drm_test_check_mode_valid_reject
[01:12:29] [PASSED] drm_test_check_mode_valid_reject_rate
[01:12:29] [PASSED] drm_test_check_mode_valid_reject_max_clock
[01:12:29] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[01:12:29] ================= drm_managed (2 subtests) =================
[01:12:29] [PASSED] drm_test_managed_release_action
[01:12:29] [PASSED] drm_test_managed_run_action
[01:12:29] =================== [PASSED] drm_managed ===================
[01:12:29] =================== drm_mm (6 subtests) ====================
[01:12:29] [PASSED] drm_test_mm_init
[01:12:29] [PASSED] drm_test_mm_debug
[01:12:29] [PASSED] drm_test_mm_align32
[01:12:29] [PASSED] drm_test_mm_align64
[01:12:29] [PASSED] drm_test_mm_lowest
[01:12:29] [PASSED] drm_test_mm_highest
[01:12:29] ===================== [PASSED] drm_mm ======================
[01:12:29] ============= drm_modes_analog_tv (5 subtests) =============
[01:12:29] [PASSED] drm_test_modes_analog_tv_mono_576i
[01:12:29] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[01:12:29] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[01:12:29] [PASSED] drm_test_modes_analog_tv_pal_576i
[01:12:29] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[01:12:29] =============== [PASSED] drm_modes_analog_tv ===============
[01:12:29] ============== drm_plane_helper (2 subtests) ===============
[01:12:29] =============== drm_test_check_plane_state ================
[01:12:29] [PASSED] clipping_simple
[01:12:29] [PASSED] clipping_rotate_reflect
[01:12:29] [PASSED] positioning_simple
[01:12:29] [PASSED] upscaling
[01:12:29] [PASSED] downscaling
[01:12:29] [PASSED] rounding1
[01:12:29] [PASSED] rounding2
[01:12:29] [PASSED] rounding3
[01:12:29] [PASSED] rounding4
[01:12:29] =========== [PASSED] drm_test_check_plane_state ============
[01:12:29] =========== drm_test_check_invalid_plane_state ============
[01:12:29] [PASSED] positioning_invalid
[01:12:29] [PASSED] upscaling_invalid
[01:12:29] [PASSED] downscaling_invalid
[01:12:29] ======= [PASSED] drm_test_check_invalid_plane_state ========
[01:12:29] ================ [PASSED] drm_plane_helper =================
[01:12:29] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[01:12:29] ====== drm_test_connector_helper_tv_get_modes_check =======
[01:12:29] [PASSED] None
[01:12:29] [PASSED] PAL
[01:12:29] [PASSED] NTSC
[01:12:29] [PASSED] Both, NTSC Default
[01:12:29] [PASSED] Both, PAL Default
[01:12:29] [PASSED] Both, NTSC Default, with PAL on command-line
[01:12:29] [PASSED] Both, PAL Default, with NTSC on command-line
[01:12:29] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[01:12:29] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[01:12:29] ================== drm_rect (9 subtests) ===================
[01:12:29] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[01:12:29] [PASSED] drm_test_rect_clip_scaled_not_clipped
[01:12:29] [PASSED] drm_test_rect_clip_scaled_clipped
[01:12:29] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[01:12:29] ================= drm_test_rect_intersect =================
[01:12:29] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[01:12:29] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[01:12:29] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[01:12:29] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[01:12:29] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[01:12:29] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[01:12:29] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[01:12:29] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[01:12:29] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[01:12:29] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[01:12:29] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[01:12:29] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[01:12:29] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[01:12:29] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[01:12:29] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[01:12:29] ============= [PASSED] drm_test_rect_intersect =============
[01:12:29] ================ drm_test_rect_calc_hscale ================
[01:12:29] [PASSED] normal use
[01:12:29] [PASSED] out of max range
[01:12:29] [PASSED] out of min range
[01:12:29] [PASSED] zero dst
[01:12:29] [PASSED] negative src
[01:12:29] [PASSED] negative dst
[01:12:29] ============ [PASSED] drm_test_rect_calc_hscale ============
[01:12:29] ================ drm_test_rect_calc_vscale ================
[01:12:29] [PASSED] normal use
[01:12:29] [PASSED] out of max range
[01:12:29] [PASSED] out of min range
[01:12:29] [PASSED] zero dst
[01:12:29] [PASSED] negative src
[01:12:29] [PASSED] negative dst
[01:12:29] ============ [PASSED] drm_test_rect_calc_vscale ============
[01:12:29] ================== drm_test_rect_rotate ===================
[01:12:29] [PASSED] reflect-x
[01:12:29] [PASSED] reflect-y
[01:12:29] [PASSED] rotate-0
[01:12:29] [PASSED] rotate-90
[01:12:29] [PASSED] rotate-180
[01:12:29] [PASSED] rotate-270
[01:12:29] ============== [PASSED] drm_test_rect_rotate ===============
[01:12:29] ================ drm_test_rect_rotate_inv =================
[01:12:29] [PASSED] reflect-x
[01:12:29] [PASSED] reflect-y
[01:12:29] [PASSED] rotate-0
[01:12:29] [PASSED] rotate-90
[01:12:29] [PASSED] rotate-180
[01:12:29] [PASSED] rotate-270
[01:12:29] ============ [PASSED] drm_test_rect_rotate_inv =============
[01:12:29] ==================== [PASSED] drm_rect =====================
[01:12:29] ============ drm_sysfb_modeset_test (1 subtest) ============
[01:12:29] ============ drm_test_sysfb_build_fourcc_list =============
[01:12:29] [PASSED] no native formats
[01:12:29] [PASSED] XRGB8888 as native format
[01:12:29] [PASSED] remove duplicates
[01:12:29] [PASSED] convert alpha formats
[01:12:29] [PASSED] random formats
[01:12:29] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[01:12:29] ============= [PASSED] drm_sysfb_modeset_test ==============
[01:12:29] ================== drm_fixp (2 subtests) ===================
[01:12:29] [PASSED] drm_test_int2fixp
[01:12:29] [PASSED] drm_test_sm2fixp
[01:12:29] ==================== [PASSED] drm_fixp =====================
[01:12:29] ============================================================
[01:12:29] Testing complete. Ran 624 tests: passed: 624
[01:12:29] Elapsed time: 26.945s total, 1.702s configuring, 24.821s building, 0.394s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[01:12:29] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[01:12:31] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[01:12:40] Starting KUnit Kernel (1/1)...
[01:12:40] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[01:12:40] ================= ttm_device (5 subtests) ==================
[01:12:40] [PASSED] ttm_device_init_basic
[01:12:40] [PASSED] ttm_device_init_multiple
[01:12:40] [PASSED] ttm_device_fini_basic
[01:12:40] [PASSED] ttm_device_init_no_vma_man
[01:12:40] ================== ttm_device_init_pools ==================
[01:12:40] [PASSED] No DMA allocations, no DMA32 required
[01:12:40] [PASSED] DMA allocations, DMA32 required
[01:12:40] [PASSED] No DMA allocations, DMA32 required
[01:12:40] [PASSED] DMA allocations, no DMA32 required
[01:12:40] ============== [PASSED] ttm_device_init_pools ==============
[01:12:40] =================== [PASSED] ttm_device ====================
[01:12:40] ================== ttm_pool (8 subtests) ===================
[01:12:40] ================== ttm_pool_alloc_basic ===================
[01:12:40] [PASSED] One page
[01:12:40] [PASSED] More than one page
[01:12:40] [PASSED] Above the allocation limit
[01:12:40] [PASSED] One page, with coherent DMA mappings enabled
[01:12:40] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[01:12:40] ============== [PASSED] ttm_pool_alloc_basic ===============
[01:12:40] ============== ttm_pool_alloc_basic_dma_addr ==============
[01:12:40] [PASSED] One page
[01:12:40] [PASSED] More than one page
[01:12:40] [PASSED] Above the allocation limit
[01:12:40] [PASSED] One page, with coherent DMA mappings enabled
[01:12:40] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[01:12:40] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[01:12:40] [PASSED] ttm_pool_alloc_order_caching_match
[01:12:40] [PASSED] ttm_pool_alloc_caching_mismatch
[01:12:40] [PASSED] ttm_pool_alloc_order_mismatch
[01:12:40] [PASSED] ttm_pool_free_dma_alloc
[01:12:40] [PASSED] ttm_pool_free_no_dma_alloc
[01:12:40] [PASSED] ttm_pool_fini_basic
[01:12:40] ==================== [PASSED] ttm_pool =====================
[01:12:40] ================ ttm_resource (8 subtests) =================
[01:12:40] ================= ttm_resource_init_basic =================
[01:12:40] [PASSED] Init resource in TTM_PL_SYSTEM
[01:12:40] [PASSED] Init resource in TTM_PL_VRAM
[01:12:40] [PASSED] Init resource in a private placement
[01:12:40] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[01:12:40] ============= [PASSED] ttm_resource_init_basic =============
[01:12:40] [PASSED] ttm_resource_init_pinned
[01:12:40] [PASSED] ttm_resource_fini_basic
[01:12:40] [PASSED] ttm_resource_manager_init_basic
[01:12:40] [PASSED] ttm_resource_manager_usage_basic
[01:12:40] [PASSED] ttm_resource_manager_set_used_basic
[01:12:40] [PASSED] ttm_sys_man_alloc_basic
[01:12:40] [PASSED] ttm_sys_man_free_basic
[01:12:40] ================== [PASSED] ttm_resource ===================
[01:12:40] =================== ttm_tt (15 subtests) ===================
[01:12:40] ==================== ttm_tt_init_basic ====================
[01:12:40] [PASSED] Page-aligned size
[01:12:40] [PASSED] Extra pages requested
[01:12:40] ================ [PASSED] ttm_tt_init_basic ================
[01:12:40] [PASSED] ttm_tt_init_misaligned
[01:12:40] [PASSED] ttm_tt_fini_basic
[01:12:40] [PASSED] ttm_tt_fini_sg
[01:12:40] [PASSED] ttm_tt_fini_shmem
[01:12:40] [PASSED] ttm_tt_create_basic
[01:12:40] [PASSED] ttm_tt_create_invalid_bo_type
[01:12:40] [PASSED] ttm_tt_create_ttm_exists
[01:12:40] [PASSED] ttm_tt_create_failed
[01:12:40] [PASSED] ttm_tt_destroy_basic
[01:12:40] [PASSED] ttm_tt_populate_null_ttm
[01:12:40] [PASSED] ttm_tt_populate_populated_ttm
[01:12:40] [PASSED] ttm_tt_unpopulate_basic
[01:12:40] [PASSED] ttm_tt_unpopulate_empty_ttm
[01:12:40] [PASSED] ttm_tt_swapin_basic
[01:12:40] ===================== [PASSED] ttm_tt ======================
[01:12:40] =================== ttm_bo (14 subtests) ===================
[01:12:40] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[01:12:40] [PASSED] Cannot be interrupted and sleeps
[01:12:40] [PASSED] Cannot be interrupted, locks straight away
[01:12:40] [PASSED] Can be interrupted, sleeps
[01:12:40] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[01:12:40] [PASSED] ttm_bo_reserve_locked_no_sleep
[01:12:40] [PASSED] ttm_bo_reserve_no_wait_ticket
[01:12:40] [PASSED] ttm_bo_reserve_double_resv
[01:12:40] [PASSED] ttm_bo_reserve_interrupted
[01:12:40] [PASSED] ttm_bo_reserve_deadlock
[01:12:40] [PASSED] ttm_bo_unreserve_basic
[01:12:40] [PASSED] ttm_bo_unreserve_pinned
[01:12:40] [PASSED] ttm_bo_unreserve_bulk
[01:12:40] [PASSED] ttm_bo_fini_basic
[01:12:40] [PASSED] ttm_bo_fini_shared_resv
[01:12:40] [PASSED] ttm_bo_pin_basic
[01:12:40] [PASSED] ttm_bo_pin_unpin_resource
[01:12:40] [PASSED] ttm_bo_multiple_pin_one_unpin
[01:12:40] ===================== [PASSED] ttm_bo ======================
[01:12:40] ============== ttm_bo_validate (21 subtests) ===============
[01:12:40] ============== ttm_bo_init_reserved_sys_man ===============
[01:12:40] [PASSED] Buffer object for userspace
[01:12:40] [PASSED] Kernel buffer object
[01:12:40] [PASSED] Shared buffer object
[01:12:40] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[01:12:40] ============== ttm_bo_init_reserved_mock_man ==============
[01:12:40] [PASSED] Buffer object for userspace
[01:12:40] [PASSED] Kernel buffer object
[01:12:40] [PASSED] Shared buffer object
[01:12:40] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[01:12:40] [PASSED] ttm_bo_init_reserved_resv
[01:12:40] ================== ttm_bo_validate_basic ==================
[01:12:40] [PASSED] Buffer object for userspace
[01:12:40] [PASSED] Kernel buffer object
[01:12:40] [PASSED] Shared buffer object
[01:12:40] ============== [PASSED] ttm_bo_validate_basic ==============
[01:12:40] [PASSED] ttm_bo_validate_invalid_placement
[01:12:40] ============= ttm_bo_validate_same_placement ==============
[01:12:40] [PASSED] System manager
[01:12:40] [PASSED] VRAM manager
[01:12:40] ========= [PASSED] ttm_bo_validate_same_placement ==========
[01:12:40] [PASSED] ttm_bo_validate_failed_alloc
[01:12:40] [PASSED] ttm_bo_validate_pinned
[01:12:40] [PASSED] ttm_bo_validate_busy_placement
[01:12:40] ================ ttm_bo_validate_multihop =================
[01:12:40] [PASSED] Buffer object for userspace
[01:12:40] [PASSED] Kernel buffer object
[01:12:40] [PASSED] Shared buffer object
[01:12:40] ============ [PASSED] ttm_bo_validate_multihop =============
[01:12:40] ========== ttm_bo_validate_no_placement_signaled ==========
[01:12:40] [PASSED] Buffer object in system domain, no page vector
[01:12:40] [PASSED] Buffer object in system domain with an existing page vector
[01:12:40] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[01:12:40] ======== ttm_bo_validate_no_placement_not_signaled ========
[01:12:40] [PASSED] Buffer object for userspace
[01:12:40] [PASSED] Kernel buffer object
[01:12:40] [PASSED] Shared buffer object
[01:12:40] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[01:12:40] [PASSED] ttm_bo_validate_move_fence_signaled
[01:12:40] ========= ttm_bo_validate_move_fence_not_signaled =========
[01:12:40] [PASSED] Waits for GPU
[01:12:40] [PASSED] Tries to lock straight away
[01:12:40] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[01:12:40] [PASSED] ttm_bo_validate_happy_evict
[01:12:40] [PASSED] ttm_bo_validate_all_pinned_evict
[01:12:40] [PASSED] ttm_bo_validate_allowed_only_evict
[01:12:40] [PASSED] ttm_bo_validate_deleted_evict
[01:12:40] [PASSED] ttm_bo_validate_busy_domain_evict
[01:12:40] [PASSED] ttm_bo_validate_evict_gutting
[01:12:40] [PASSED] ttm_bo_validate_recrusive_evict
[01:12:40] ================= [PASSED] ttm_bo_validate =================
[01:12:40] ============================================================
[01:12:40] Testing complete. Ran 101 tests: passed: 101
[01:12:40] Elapsed time: 11.419s total, 1.717s configuring, 9.485s building, 0.176s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 32+ messages in thread
* ✓ Xe.CI.BAT: success for drm/xe: Multi Queue feature support (rev6)
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (18 preceding siblings ...)
2025-12-11 1:12 ` ✓ CI.KUnit: success " Patchwork
@ 2025-12-11 2:25 ` Patchwork
2025-12-11 10:04 ` ✗ Xe.CI.Full: failure " Patchwork
20 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2025-12-11 2:25 UTC (permalink / raw)
To: Niranjana Vishwanathapura; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 1497 bytes --]
== Series Details ==
Series: drm/xe: Multi Queue feature support (rev6)
URL : https://patchwork.freedesktop.org/series/156865/
State : success
== Summary ==
CI Bug Log - changes from xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b_BAT -> xe-pw-156865v6_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (12 -> 12)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-156865v6_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@xe_waitfence@reltime:
- bat-dg2-oem2: [PASS][1] -> [FAIL][2] ([Intel XE#6520])
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/bat-dg2-oem2/igt@xe_waitfence@reltime.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/bat-dg2-oem2/igt@xe_waitfence@reltime.html
[Intel XE#6520]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6520
Build changes
-------------
* Linux: xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b -> xe-pw-156865v6
IGT_8663: aca02d1cc9804e5f1868b0ebfba6426e2d1244fc @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b: 3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b
xe-pw-156865v6: 156865v6
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/index.html
* ✗ Xe.CI.Full: failure for drm/xe: Multi Queue feature support (rev6)
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
` (19 preceding siblings ...)
2025-12-11 2:25 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-12-11 10:04 ` Patchwork
20 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2025-12-11 10:04 UTC (permalink / raw)
To: Niranjana Vishwanathapura; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 26324 bytes --]
== Series Details ==
Series: drm/xe: Multi Queue feature support (rev6)
URL : https://patchwork.freedesktop.org/series/156865/
State : failure
== Summary ==
CI Bug Log - changes from xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b_FULL -> xe-pw-156865v6_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-156865v6_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-156865v6_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (2 -> 2)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-156865v6_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@kms_cursor_legacy@cursora-vs-flipa-varying-size:
- shard-bmg: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_cursor_legacy@cursora-vs-flipa-varying-size.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-5/igt@kms_cursor_legacy@cursora-vs-flipa-varying-size.html
* igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset-interruptible@ac-dp2-hdmi-a3:
- shard-bmg: [PASS][3] -> [FAIL][4]
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset-interruptible@ac-dp2-hdmi-a3.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-6/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset-interruptible@ac-dp2-hdmi-a3.html
* igt@kms_pm_rpm@i2c:
- shard-bmg: [PASS][5] -> [SKIP][6] +1 other test skip
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_pm_rpm@i2c.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_pm_rpm@i2c.html
* igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-userptr-imm:
- shard-bmg: NOTRUN -> [FAIL][7]
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-3/igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-userptr-imm.html
* igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate:
- shard-bmg: NOTRUN -> [DMESG-FAIL][8]
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-3/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr-invalidate.html
Known issues
------------
Here are the changes found in xe-pw-156865v6_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip:
- shard-bmg: NOTRUN -> [SKIP][9] ([Intel XE#1124])
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
* igt@kms_big_fb@yf-tiled-addfb-size-overflow:
- shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#610])
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc:
- shard-bmg: NOTRUN -> [SKIP][11] ([Intel XE#2887]) +1 other test skip
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc.html
* igt@kms_chamelium_edid@dp-edid-change-during-suspend:
- shard-bmg: NOTRUN -> [SKIP][12] ([Intel XE#2252])
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_chamelium_edid@dp-edid-change-during-suspend.html
* igt@kms_content_protection@dp-mst-suspend-resume:
- shard-bmg: NOTRUN -> [SKIP][13] ([Intel XE#6743])
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_content_protection@dp-mst-suspend-resume.html
* igt@kms_cursor_crc@cursor-suspend:
- shard-bmg: [PASS][14] -> [FAIL][15] ([Intel XE#6747]) +2 other tests fail
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-4/igt@kms_cursor_crc@cursor-suspend.html
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-6/igt@kms_cursor_crc@cursor-suspend.html
* igt@kms_cursor_legacy@flip-vs-cursor-atomic:
- shard-bmg: NOTRUN -> [FAIL][16] ([Intel XE#6715])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
* igt@kms_flip@2x-plain-flip-fb-recreate-interruptible:
- shard-bmg: [PASS][17] -> [DMESG-FAIL][18] ([Intel XE#5545]) +1 other test dmesg-fail
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_flip@2x-plain-flip-fb-recreate-interruptible.html
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_flip@2x-plain-flip-fb-recreate-interruptible.html
* igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset-interruptible:
- shard-bmg: [PASS][19] -> [FAIL][20] ([Intel XE#3149] / [Intel XE#3650]) +1 other test fail
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset-interruptible.html
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-6/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset-interruptible.html
* igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset-interruptible@ad-dp2-hdmi-a3:
- shard-bmg: [PASS][21] -> [FAIL][22] ([Intel XE#3650]) +2 other tests fail
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset-interruptible@ad-dp2-hdmi-a3.html
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-6/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset-interruptible@ad-dp2-hdmi-a3.html
* igt@kms_flip@flip-vs-suspend@d-hdmi-a3:
- shard-bmg: NOTRUN -> [DMESG-WARN][23] ([Intel XE#6766])
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-3/igt@kms_flip@flip-vs-suspend@d-hdmi-a3.html
* igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling@pipe-a-valid-mode:
- shard-bmg: NOTRUN -> [SKIP][24] ([Intel XE#2293]) +1 other test skip
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling@pipe-a-valid-mode.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling:
- shard-bmg: NOTRUN -> [SKIP][25] ([Intel XE#2293] / [Intel XE#2380]) +1 other test skip
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#4141])
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-onoff:
- shard-bmg: NOTRUN -> [SKIP][27] ([Intel XE#2311]) +2 other tests skip
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][28] ([Intel XE#2313]) +4 other tests skip
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_hdr@bpc-switch@pipe-a-dp-2:
- shard-bmg: [PASS][29] -> [ABORT][30] ([Intel XE#6740]) +3 other tests abort
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-2/igt@kms_hdr@bpc-switch@pipe-a-dp-2.html
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-5/igt@kms_hdr@bpc-switch@pipe-a-dp-2.html
* igt@kms_pipe_stress@stress-xrgb8888-yftiled:
- shard-bmg: NOTRUN -> [SKIP][31] ([Intel XE#5624])
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_pipe_stress@stress-xrgb8888-yftiled.html
* igt@kms_plane_multiple@2x-tiling-yf:
- shard-bmg: NOTRUN -> [SKIP][32] ([Intel XE#5021])
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_plane_multiple@2x-tiling-yf.html
* igt@kms_psr2_sf@fbc-pr-plane-move-sf-dmg-area:
- shard-bmg: NOTRUN -> [SKIP][33] ([Intel XE#1406] / [Intel XE#6703])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-3/igt@kms_psr2_sf@fbc-pr-plane-move-sf-dmg-area.html
* igt@kms_psr2_sf@psr2-primary-plane-update-sf-dmg-area-big-fb:
- shard-bmg: NOTRUN -> [SKIP][34] ([Intel XE#1406] / [Intel XE#1489]) +1 other test skip
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_psr2_sf@psr2-primary-plane-update-sf-dmg-area-big-fb.html
* igt@kms_psr@fbc-psr-sprite-render:
- shard-bmg: NOTRUN -> [SKIP][35] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850])
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_psr@fbc-psr-sprite-render.html
* igt@kms_rotation_crc@bad-pixel-format:
- shard-bmg: NOTRUN -> [SKIP][36] ([Intel XE#3414] / [Intel XE#3904])
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_rotation_crc@bad-pixel-format.html
* igt@xe_eudebug@vma-ufence:
- shard-bmg: NOTRUN -> [SKIP][37] ([Intel XE#4837])
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_eudebug@vma-ufence.html
* igt@xe_eudebug_online@single-step-one:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#4837] / [Intel XE#6665])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_eudebug_online@single-step-one.html
* igt@xe_exec_basic@multigpu-no-exec-null-defer-bind:
- shard-bmg: NOTRUN -> [SKIP][39] ([Intel XE#2322])
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_exec_basic@multigpu-no-exec-null-defer-bind.html
* igt@xe_exec_system_allocator@many-stride-malloc-prefetch:
- shard-bmg: [PASS][40] -> [WARN][41] ([Intel XE#5786])
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@xe_exec_system_allocator@many-stride-malloc-prefetch.html
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-5/igt@xe_exec_system_allocator@many-stride-malloc-prefetch.html
* igt@xe_exec_system_allocator@process-many-large-execqueues-mmap-new-huge:
- shard-bmg: NOTRUN -> [SKIP][42] ([Intel XE#4943]) +1 other test skip
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_exec_system_allocator@process-many-large-execqueues-mmap-new-huge.html
* igt@xe_exec_system_allocator@threads-many-large-mmap-remap-ro-dontunmap-eocheck:
- shard-bmg: [PASS][43] -> [SKIP][44] ([Intel XE#6557] / [Intel XE#6703]) +2 other tests skip
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@xe_exec_system_allocator@threads-many-large-mmap-remap-ro-dontunmap-eocheck.html
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_exec_system_allocator@threads-many-large-mmap-remap-ro-dontunmap-eocheck.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-malloc-race:
- shard-bmg: [PASS][45] -> [SKIP][46] ([Intel XE#6703]) +52 other tests skip
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@xe_exec_system_allocator@threads-shared-vm-many-malloc-race.html
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_exec_system_allocator@threads-shared-vm-many-malloc-race.html
* igt@xe_exec_threads@threads-bal-shared-vm-userptr-invalidate:
- shard-bmg: NOTRUN -> [SKIP][47] ([Intel XE#6703]) +61 other tests skip
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-3/igt@xe_exec_threads@threads-bal-shared-vm-userptr-invalidate.html
* igt@xe_pxp@pxp-stale-queue-post-termination-irq:
- shard-bmg: NOTRUN -> [SKIP][48] ([Intel XE#4733])
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_pxp@pxp-stale-queue-post-termination-irq.html
#### Possible fixes ####
* igt@kms_cursor_edge_walk@256x256-top-edge@pipe-d-dp-2:
- shard-bmg: [FAIL][49] -> [PASS][50] +2 other tests pass
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-2/igt@kms_cursor_edge_walk@256x256-top-edge@pipe-d-dp-2.html
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-5/igt@kms_cursor_edge_walk@256x256-top-edge@pipe-d-dp-2.html
* igt@kms_flip@flip-vs-suspend@c-hdmi-a3:
- shard-bmg: [INCOMPLETE][51] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][52]
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-2/igt@kms_flip@flip-vs-suspend@c-hdmi-a3.html
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-3/igt@kms_flip@flip-vs-suspend@c-hdmi-a3.html
* igt@kms_flip@modeset-vs-vblank-race@d-dp2:
- shard-bmg: [FAIL][53] ([Intel XE#3650]) -> [PASS][54] +7 other tests pass
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-6/igt@kms_flip@modeset-vs-vblank-race@d-dp2.html
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-4/igt@kms_flip@modeset-vs-vblank-race@d-dp2.html
* igt@testdisplay:
- shard-bmg: [ABORT][55] ([Intel XE#6740]) -> [PASS][56]
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-5/igt@testdisplay.html
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@testdisplay.html
#### Warnings ####
* igt@kms_big_fb@linear-64bpp-rotate-90:
- shard-bmg: [SKIP][57] ([Intel XE#2327]) -> [SKIP][58] ([Intel XE#6703])
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_big_fb@linear-64bpp-rotate-90.html
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_big_fb@linear-64bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180:
- shard-bmg: [SKIP][59] ([Intel XE#1124]) -> [SKIP][60] ([Intel XE#6703])
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180.html
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180.html
* igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p:
- shard-bmg: [SKIP][61] ([Intel XE#2314] / [Intel XE#2894]) -> [SKIP][62] ([Intel XE#6703])
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html
* igt@kms_ccs@bad-pixel-format-4-tiled-dg2-rc-ccs:
- shard-bmg: [SKIP][63] ([Intel XE#2887]) -> [SKIP][64] ([Intel XE#6703])
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_ccs@bad-pixel-format-4-tiled-dg2-rc-ccs.html
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_ccs@bad-pixel-format-4-tiled-dg2-rc-ccs.html
* igt@kms_chamelium_hpd@dp-hpd-after-suspend:
- shard-bmg: [SKIP][65] ([Intel XE#2252]) -> [SKIP][66] ([Intel XE#6703])
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_chamelium_hpd@dp-hpd-after-suspend.html
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_chamelium_hpd@dp-hpd-after-suspend.html
* igt@kms_dp_link_training@uhbr-sst:
- shard-bmg: [SKIP][67] ([Intel XE#4354]) -> [FAIL][68] ([Intel XE#6793])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-4/igt@kms_dp_link_training@uhbr-sst.html
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-6/igt@kms_dp_link_training@uhbr-sst.html
* igt@kms_flip@flip-vs-suspend:
- shard-bmg: [INCOMPLETE][69] ([Intel XE#2049] / [Intel XE#2597]) -> [DMESG-WARN][70] ([Intel XE#5208])
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-2/igt@kms_flip@flip-vs-suspend.html
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-3/igt@kms_flip@flip-vs-suspend.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling:
- shard-bmg: [SKIP][71] ([Intel XE#2293] / [Intel XE#2380]) -> [SKIP][72] ([Intel XE#6703])
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling.html
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-upscaling.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][73] ([Intel XE#2311]) -> [SKIP][74] ([Intel XE#6703]) +2 other tests skip
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-mmap-wc.html
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-move:
- shard-bmg: [SKIP][75] ([Intel XE#2313]) -> [SKIP][76] ([Intel XE#6703])
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-move.html
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-move.html
* igt@kms_hdr@brightness-with-hdr:
- shard-bmg: [SKIP][77] ([Intel XE#3374] / [Intel XE#3544]) -> [SKIP][78] ([Intel XE#3544])
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-5/igt@kms_hdr@brightness-with-hdr.html
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_hdr@brightness-with-hdr.html
* igt@kms_hdr@invalid-hdr:
- shard-bmg: [ABORT][79] ([Intel XE#6740]) -> [SKIP][80] ([Intel XE#1503])
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-5/igt@kms_hdr@invalid-hdr.html
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-4/igt@kms_hdr@invalid-hdr.html
* igt@kms_psr@fbc-pr-suspend:
- shard-bmg: [SKIP][81] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) -> [SKIP][82] ([Intel XE#1406] / [Intel XE#6703])
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@kms_psr@fbc-pr-suspend.html
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@kms_psr@fbc-pr-suspend.html
* igt@xe_eudebug@basic-vm-access-userptr-faultable:
- shard-bmg: [SKIP][83] ([Intel XE#4837]) -> [SKIP][84] ([Intel XE#6703])
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@xe_eudebug@basic-vm-access-userptr-faultable.html
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_eudebug@basic-vm-access-userptr-faultable.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-rebind:
- shard-bmg: [SKIP][85] ([Intel XE#2322]) -> [SKIP][86] ([Intel XE#6703])
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-rebind.html
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-rebind.html
* igt@xe_exec_system_allocator@twice-mmap-huge:
- shard-bmg: [SKIP][87] ([Intel XE#4943]) -> [SKIP][88] ([Intel XE#6703])
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@xe_exec_system_allocator@twice-mmap-huge.html
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_exec_system_allocator@twice-mmap-huge.html
* igt@xe_query@multigpu-query-invalid-size:
- shard-bmg: [SKIP][89] ([Intel XE#944]) -> [SKIP][90] ([Intel XE#6703])
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b/shard-bmg-8/igt@xe_query@multigpu-query-invalid-size.html
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/shard-bmg-2/igt@xe_query@multigpu-query-invalid-size.html
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
[Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#3149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3149
[Intel XE#3374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3374
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3544]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3544
[Intel XE#3650]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3650
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
[Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
[Intel XE#5021]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5021
[Intel XE#5208]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5208
[Intel XE#5545]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5545
[Intel XE#5624]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5624
[Intel XE#5786]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5786
[Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
[Intel XE#6557]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6557
[Intel XE#6665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6665
[Intel XE#6703]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6703
[Intel XE#6715]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6715
[Intel XE#6740]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6740
[Intel XE#6743]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6743
[Intel XE#6747]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6747
[Intel XE#6766]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6766
[Intel XE#6793]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6793
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
Build changes
-------------
* Linux: xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b -> xe-pw-156865v6
IGT_8663: aca02d1cc9804e5f1868b0ebfba6426e2d1244fc @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4220-3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b: 3adb3f4aa5a8563ee9c1f6e137827b740e3ab40b
xe-pw-156865v6: 156865v6
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156865v6/index.html
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v6 15/17] drm/xe/multi_queue: Support active group after primary is destroyed
2025-12-11 1:03 ` [PATCH v6 15/17] drm/xe/multi_queue: Support active group after primary is destroyed Niranjana Vishwanathapura
@ 2025-12-19 21:06 ` Rodrigo Vivi
2025-12-19 22:35 ` Niranjana Vishwanathapura
0 siblings, 1 reply; 32+ messages in thread
From: Rodrigo Vivi @ 2025-12-19 21:06 UTC (permalink / raw)
To: Niranjana Vishwanathapura; +Cc: intel-xe, matthew.brost, matthew.d.roper
On Wed, Dec 10, 2025 at 05:03:03PM -0800, Niranjana Vishwanathapura wrote:
> Add support to keep the group active after the primary queue is
> destroyed. Instead of killing the primary queue during exec_queue
> destroy ioctl, kill it when all the secondary queues of the group
> are killed.
>
> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_device.c | 7 ++-
> drivers/gpu/drm/xe/xe_exec_queue.c | 55 +++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_exec_queue.h | 2 +
> drivers/gpu/drm/xe/xe_exec_queue_types.h | 4 ++
> include/uapi/drm/xe_drm.h | 4 ++
Hi Niranjana,
Where is the UMD ack for this? Who is using this uAPI?
Thanks,
Rodrigo.
> 5 files changed, 69 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 7a498c8db7b1..24efb6a3e0ea 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -177,7 +177,12 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
> xa_for_each(&xef->exec_queue.xa, idx, q) {
> if (q->vm && q->hwe->hw_engine_group)
> xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
> - xe_exec_queue_kill(q);
> +
> + if (xe_exec_queue_is_multi_queue_primary(q))
> + xe_exec_queue_group_kill_put(q->multi_queue.group);
> + else
> + xe_exec_queue_kill(q);
> +
> xe_exec_queue_put(q);
> }
> xa_for_each(&xef->vm.xa, idx, vm)
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index d337b7bc2b80..3f4840d135a0 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -418,6 +418,26 @@ struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
> }
> ALLOW_ERROR_INJECTION(xe_exec_queue_create_bind, ERRNO);
>
> +static void xe_exec_queue_group_kill(struct kref *ref)
> +{
> + struct xe_exec_queue_group *group = container_of(ref, struct xe_exec_queue_group,
> + kill_refcount);
> + xe_exec_queue_kill(group->primary);
> +}
> +
> +static inline void xe_exec_queue_group_kill_get(struct xe_exec_queue_group *group)
> +{
> + kref_get(&group->kill_refcount);
> +}
> +
> +void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group)
> +{
> + if (!group)
> + return;
> +
> + kref_put(&group->kill_refcount, xe_exec_queue_group_kill);
> +}
> +
> void xe_exec_queue_destroy(struct kref *ref)
> {
> struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount);
> @@ -650,6 +670,7 @@ static int xe_exec_queue_group_init(struct xe_device *xe, struct xe_exec_queue *
> group->primary = q;
> group->cgp_bo = bo;
> INIT_LIST_HEAD(&group->list);
> + kref_init(&group->kill_refcount);
> xa_init_flags(&group->xa, XA_FLAGS_ALLOC1);
> mutex_init(&group->list_lock);
> q->multi_queue.group = group;
> @@ -725,6 +746,11 @@ static int xe_exec_queue_group_add(struct xe_device *xe, struct xe_exec_queue *q
>
> q->multi_queue.pos = pos;
>
> + if (group->primary->multi_queue.keep_active) {
> + xe_exec_queue_group_kill_get(group);
> + q->multi_queue.keep_active = true;
> + }
> +
> return 0;
> }
>
> @@ -738,6 +764,11 @@ static void xe_exec_queue_group_delete(struct xe_device *xe, struct xe_exec_queu
> lrc = xa_erase(&group->xa, q->multi_queue.pos);
> xe_assert(xe, lrc);
> xe_lrc_put(lrc);
> +
> + if (q->multi_queue.keep_active) {
> + xe_exec_queue_group_kill_put(group);
> + q->multi_queue.keep_active = false;
> + }
> }
>
> static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue *q,
> @@ -759,12 +790,24 @@ static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue
> return -EINVAL;
>
> if (value & DRM_XE_MULTI_GROUP_CREATE) {
> - if (XE_IOCTL_DBG(xe, value & ~DRM_XE_MULTI_GROUP_CREATE))
> + if (XE_IOCTL_DBG(xe, value & ~(DRM_XE_MULTI_GROUP_CREATE |
> + DRM_XE_MULTI_GROUP_KEEP_ACTIVE)))
> + return -EINVAL;
> +
> + /*
> + * KEEP_ACTIVE is not supported in preempt fence mode as in that mode,
> + * VM_DESTROY ioctl expects all exec queues of that VM are already killed.
> + */
> + if (XE_IOCTL_DBG(xe, (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE) &&
> + xe_vm_in_preempt_fence_mode(q->vm)))
> return -EINVAL;
>
> q->multi_queue.valid = true;
> q->multi_queue.is_primary = true;
> q->multi_queue.pos = 0;
> + if (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE)
> + q->multi_queue.keep_active = true;
> +
> return 0;
> }
>
> @@ -1312,6 +1355,11 @@ void xe_exec_queue_kill(struct xe_exec_queue *q)
>
> q->ops->kill(q);
> xe_vm_remove_compute_exec_queue(q->vm, q);
> +
> + if (!xe_exec_queue_is_multi_queue_primary(q) && q->multi_queue.keep_active) {
> + xe_exec_queue_group_kill_put(q->multi_queue.group);
> + q->multi_queue.keep_active = false;
> + }
> }
>
> int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
> @@ -1338,7 +1386,10 @@ int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
> if (q->vm && q->hwe->hw_engine_group)
> xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
>
> - xe_exec_queue_kill(q);
> + if (xe_exec_queue_is_multi_queue_primary(q))
> + xe_exec_queue_group_kill_put(q->multi_queue.group);
> + else
> + xe_exec_queue_kill(q);
>
> trace_xe_exec_queue_close(q);
> xe_exec_queue_put(q);
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
> index ffcc1feb879e..10abed98fb6b 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> @@ -113,6 +113,8 @@ static inline struct xe_exec_queue *xe_exec_queue_multi_queue_primary(struct xe_
> return xe_exec_queue_is_multi_queue(q) ? q->multi_queue.group->primary : q;
> }
>
> +void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group);
> +
> bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
>
> bool xe_exec_queue_is_idle(struct xe_exec_queue *q);
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index 5fc516b0bb77..67ea5eebf70b 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -62,6 +62,8 @@ struct xe_exec_queue_group {
> struct list_head list;
> /** @list_lock: Secondary queue list lock */
> struct mutex list_lock;
> + /** @kill_refcount: ref count to kill primary queue */
> + struct kref kill_refcount;
> /** @sync_pending: CGP_SYNC_DONE g2h response pending */
> bool sync_pending;
> /** @banned: Group banned */
> @@ -161,6 +163,8 @@ struct xe_exec_queue {
> u8 valid:1;
> /** @multi_queue.is_primary: Is primary queue (Q0) of the group */
> u8 is_primary:1;
> + /** @multi_queue.keep_active: Keep the group active after primary is destroyed */
> + u8 keep_active:1;
> } multi_queue;
>
> /** @sched_props: scheduling properties */
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 705081bf0d81..bd6154e3b728 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -1280,6 +1280,9 @@ struct drm_xe_vm_bind {
> * then a new multi-queue group is created with this queue as the primary queue
> * (Q0). Otherwise, the queue gets added to the multi-queue group whose primary
> * queue's exec_queue_id is specified in the lower 32 bits of the 'value' field.
> + * If the extension's 'value' field has %DRM_XE_MULTI_GROUP_KEEP_ACTIVE flag
> + * set, then the multi-queue group is kept active after the primary queue is
> + * destroyed.
> * All the other non-relevant bits of extension's 'value' field while adding the
> * primary or the secondary queues of the group must be set to 0.
> * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY - Set the queue
> @@ -1328,6 +1331,7 @@ struct drm_xe_exec_queue_create {
> #define DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE 3
> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP 4
> #define DRM_XE_MULTI_GROUP_CREATE (1ull << 63)
> +#define DRM_XE_MULTI_GROUP_KEEP_ACTIVE (1ull << 62)
> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY 5
> /** @extensions: Pointer to the first extension struct, if any */
> __u64 extensions;
> --
> 2.43.0
>
* Re: [PATCH v6 15/17] drm/xe/multi_queue: Support active group after primary is destroyed
2025-12-19 21:06 ` Rodrigo Vivi
@ 2025-12-19 22:35 ` Niranjana Vishwanathapura
2025-12-19 22:53 ` Matt Roper
0 siblings, 1 reply; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-19 22:35 UTC (permalink / raw)
To: Rodrigo Vivi; +Cc: intel-xe, matthew.brost, matthew.d.roper
On Fri, Dec 19, 2025 at 04:06:17PM -0500, Rodrigo Vivi wrote:
>On Wed, Dec 10, 2025 at 05:03:03PM -0800, Niranjana Vishwanathapura wrote:
>> Add support to keep the group active after the primary queue is
>> destroyed. Instead of killing the primary queue during exec_queue
>> destroy ioctl, kill it when all the secondary queues of the group
>> are killed.
>>
>> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
>> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_device.c | 7 ++-
>> drivers/gpu/drm/xe/xe_exec_queue.c | 55 +++++++++++++++++++++++-
>> drivers/gpu/drm/xe/xe_exec_queue.h | 2 +
>> drivers/gpu/drm/xe/xe_exec_queue_types.h | 4 ++
>> include/uapi/drm/xe_drm.h | 4 ++
>
>Hi Niranjana,
>
>Where is the UMD ack for this? Who is using this uAPI?
https://lists.freedesktop.org/archives/intel-xe/2025-November/105779.html
Niranjana
>
>Thanks,
>Rodrigo.
>
>> 5 files changed, 69 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
>> index 7a498c8db7b1..24efb6a3e0ea 100644
>> --- a/drivers/gpu/drm/xe/xe_device.c
>> +++ b/drivers/gpu/drm/xe/xe_device.c
>> @@ -177,7 +177,12 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
>> xa_for_each(&xef->exec_queue.xa, idx, q) {
>> if (q->vm && q->hwe->hw_engine_group)
>> xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
>> - xe_exec_queue_kill(q);
>> +
>> + if (xe_exec_queue_is_multi_queue_primary(q))
>> + xe_exec_queue_group_kill_put(q->multi_queue.group);
>> + else
>> + xe_exec_queue_kill(q);
>> +
>> xe_exec_queue_put(q);
>> }
>> xa_for_each(&xef->vm.xa, idx, vm)
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
>> index d337b7bc2b80..3f4840d135a0 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> @@ -418,6 +418,26 @@ struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
>> }
>> ALLOW_ERROR_INJECTION(xe_exec_queue_create_bind, ERRNO);
>>
>> +static void xe_exec_queue_group_kill(struct kref *ref)
>> +{
>> + struct xe_exec_queue_group *group = container_of(ref, struct xe_exec_queue_group,
>> + kill_refcount);
>> + xe_exec_queue_kill(group->primary);
>> +}
>> +
>> +static inline void xe_exec_queue_group_kill_get(struct xe_exec_queue_group *group)
>> +{
>> + kref_get(&group->kill_refcount);
>> +}
>> +
>> +void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group)
>> +{
>> + if (!group)
>> + return;
>> +
>> + kref_put(&group->kill_refcount, xe_exec_queue_group_kill);
>> +}
>> +
>> void xe_exec_queue_destroy(struct kref *ref)
>> {
>> struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount);
>> @@ -650,6 +670,7 @@ static int xe_exec_queue_group_init(struct xe_device *xe, struct xe_exec_queue *
>> group->primary = q;
>> group->cgp_bo = bo;
>> INIT_LIST_HEAD(&group->list);
>> + kref_init(&group->kill_refcount);
>> xa_init_flags(&group->xa, XA_FLAGS_ALLOC1);
>> mutex_init(&group->list_lock);
>> q->multi_queue.group = group;
>> @@ -725,6 +746,11 @@ static int xe_exec_queue_group_add(struct xe_device *xe, struct xe_exec_queue *q
>>
>> q->multi_queue.pos = pos;
>>
>> + if (group->primary->multi_queue.keep_active) {
>> + xe_exec_queue_group_kill_get(group);
>> + q->multi_queue.keep_active = true;
>> + }
>> +
>> return 0;
>> }
>>
>> @@ -738,6 +764,11 @@ static void xe_exec_queue_group_delete(struct xe_device *xe, struct xe_exec_queu
>> lrc = xa_erase(&group->xa, q->multi_queue.pos);
>> xe_assert(xe, lrc);
>> xe_lrc_put(lrc);
>> +
>> + if (q->multi_queue.keep_active) {
>> + xe_exec_queue_group_kill_put(group);
>> + q->multi_queue.keep_active = false;
>> + }
>> }
>>
>> static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue *q,
>> @@ -759,12 +790,24 @@ static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue
>> return -EINVAL;
>>
>> if (value & DRM_XE_MULTI_GROUP_CREATE) {
>> - if (XE_IOCTL_DBG(xe, value & ~DRM_XE_MULTI_GROUP_CREATE))
>> + if (XE_IOCTL_DBG(xe, value & ~(DRM_XE_MULTI_GROUP_CREATE |
>> + DRM_XE_MULTI_GROUP_KEEP_ACTIVE)))
>> + return -EINVAL;
>> +
>> + /*
>> + * KEEP_ACTIVE is not supported in preempt fence mode as in that mode,
>> + * VM_DESTROY ioctl expects all exec queues of that VM are already killed.
>> + */
>> + if (XE_IOCTL_DBG(xe, (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE) &&
>> + xe_vm_in_preempt_fence_mode(q->vm)))
>> return -EINVAL;
>>
>> q->multi_queue.valid = true;
>> q->multi_queue.is_primary = true;
>> q->multi_queue.pos = 0;
>> + if (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE)
>> + q->multi_queue.keep_active = true;
>> +
>> return 0;
>> }
>>
>> @@ -1312,6 +1355,11 @@ void xe_exec_queue_kill(struct xe_exec_queue *q)
>>
>> q->ops->kill(q);
>> xe_vm_remove_compute_exec_queue(q->vm, q);
>> +
>> + if (!xe_exec_queue_is_multi_queue_primary(q) && q->multi_queue.keep_active) {
>> + xe_exec_queue_group_kill_put(q->multi_queue.group);
>> + q->multi_queue.keep_active = false;
>> + }
>> }
>>
>> int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
>> @@ -1338,7 +1386,10 @@ int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
>> if (q->vm && q->hwe->hw_engine_group)
>> xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
>>
>> - xe_exec_queue_kill(q);
>> + if (xe_exec_queue_is_multi_queue_primary(q))
>> + xe_exec_queue_group_kill_put(q->multi_queue.group);
>> + else
>> + xe_exec_queue_kill(q);
>>
>> trace_xe_exec_queue_close(q);
>> xe_exec_queue_put(q);
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
>> index ffcc1feb879e..10abed98fb6b 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
>> @@ -113,6 +113,8 @@ static inline struct xe_exec_queue *xe_exec_queue_multi_queue_primary(struct xe_
>> return xe_exec_queue_is_multi_queue(q) ? q->multi_queue.group->primary : q;
>> }
>>
>> +void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group);
>> +
>> bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
>>
>> bool xe_exec_queue_is_idle(struct xe_exec_queue *q);
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> index 5fc516b0bb77..67ea5eebf70b 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> @@ -62,6 +62,8 @@ struct xe_exec_queue_group {
>> struct list_head list;
>> /** @list_lock: Secondary queue list lock */
>> struct mutex list_lock;
>> + /** @kill_refcount: ref count to kill primary queue */
>> + struct kref kill_refcount;
>> /** @sync_pending: CGP_SYNC_DONE g2h response pending */
>> bool sync_pending;
>> /** @banned: Group banned */
>> @@ -161,6 +163,8 @@ struct xe_exec_queue {
>> u8 valid:1;
>> /** @multi_queue.is_primary: Is primary queue (Q0) of the group */
>> u8 is_primary:1;
>> + /** @multi_queue.keep_active: Keep the group active after primary is destroyed */
>> + u8 keep_active:1;
>> } multi_queue;
>>
>> /** @sched_props: scheduling properties */
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index 705081bf0d81..bd6154e3b728 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -1280,6 +1280,9 @@ struct drm_xe_vm_bind {
>> * then a new multi-queue group is created with this queue as the primary queue
>> * (Q0). Otherwise, the queue gets added to the multi-queue group whose primary
>> * queue's exec_queue_id is specified in the lower 32 bits of the 'value' field.
>> + * If the extension's 'value' field has %DRM_XE_MULTI_GROUP_KEEP_ACTIVE flag
>> + * set, then the multi-queue group is kept active after the primary queue is
>> + * destroyed.
>> * All the other non-relevant bits of extension's 'value' field while adding the
>> * primary or the secondary queues of the group must be set to 0.
>> * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY - Set the queue
>> @@ -1328,6 +1331,7 @@ struct drm_xe_exec_queue_create {
>> #define DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE 3
>> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP 4
>> #define DRM_XE_MULTI_GROUP_CREATE (1ull << 63)
>> +#define DRM_XE_MULTI_GROUP_KEEP_ACTIVE (1ull << 62)
>> #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY 5
>> /** @extensions: Pointer to the first extension struct, if any */
>> __u64 extensions;
>> --
>> 2.43.0
>>
* Re: [PATCH v6 15/17] drm/xe/multi_queue: Support active group after primary is destroyed
2025-12-19 22:35 ` Niranjana Vishwanathapura
@ 2025-12-19 22:53 ` Matt Roper
2025-12-31 1:51 ` Niranjana Vishwanathapura
0 siblings, 1 reply; 32+ messages in thread
From: Matt Roper @ 2025-12-19 22:53 UTC (permalink / raw)
To: Niranjana Vishwanathapura; +Cc: Rodrigo Vivi, intel-xe, matthew.brost
On Fri, Dec 19, 2025 at 02:35:53PM -0800, Niranjana Vishwanathapura wrote:
> On Fri, Dec 19, 2025 at 04:06:17PM -0500, Rodrigo Vivi wrote:
> > On Wed, Dec 10, 2025 at 05:03:03PM -0800, Niranjana Vishwanathapura wrote:
> > > Add support to keep the group active after the primary queue is
> > > destroyed. Instead of killing the primary queue during exec_queue
> > > destroy ioctl, kill it when all the secondary queues of the group
> > > are killed.
> > >
> > > Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
> > > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_device.c | 7 ++-
> > > drivers/gpu/drm/xe/xe_exec_queue.c | 55 +++++++++++++++++++++++-
> > > drivers/gpu/drm/xe/xe_exec_queue.h | 2 +
> > > drivers/gpu/drm/xe/xe_exec_queue_types.h | 4 ++
> > > include/uapi/drm/xe_drm.h | 4 ++
> >
> > Hi Niranjana,
> >
> > Where is the UMD ack for this? Who is using this uAPI?
>
> https://lists.freedesktop.org/archives/intel-xe/2025-November/105779.html
I think Rodrigo's question was more about the specific
DRM_XE_MULTI_GROUP_KEEP_ACTIVE flag added in this patch. It doesn't
appear that the compute-runtime PR linked in the cover letter makes use
of that flag at all. Is there a second PR that adds usage of that?
Matt
>
> Niranjana
>
> >
> > Thanks,
> > Rodrigo.
> >
> > > 5 files changed, 69 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> > > index 7a498c8db7b1..24efb6a3e0ea 100644
> > > --- a/drivers/gpu/drm/xe/xe_device.c
> > > +++ b/drivers/gpu/drm/xe/xe_device.c
> > > @@ -177,7 +177,12 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
> > > xa_for_each(&xef->exec_queue.xa, idx, q) {
> > > if (q->vm && q->hwe->hw_engine_group)
> > > xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
> > > - xe_exec_queue_kill(q);
> > > +
> > > + if (xe_exec_queue_is_multi_queue_primary(q))
> > > + xe_exec_queue_group_kill_put(q->multi_queue.group);
> > > + else
> > > + xe_exec_queue_kill(q);
> > > +
> > > xe_exec_queue_put(q);
> > > }
> > > xa_for_each(&xef->vm.xa, idx, vm)
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > index d337b7bc2b80..3f4840d135a0 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > @@ -418,6 +418,26 @@ struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
> > > }
> > > ALLOW_ERROR_INJECTION(xe_exec_queue_create_bind, ERRNO);
> > >
> > > +static void xe_exec_queue_group_kill(struct kref *ref)
> > > +{
> > > + struct xe_exec_queue_group *group = container_of(ref, struct xe_exec_queue_group,
> > > + kill_refcount);
> > > + xe_exec_queue_kill(group->primary);
> > > +}
> > > +
> > > +static inline void xe_exec_queue_group_kill_get(struct xe_exec_queue_group *group)
> > > +{
> > > + kref_get(&group->kill_refcount);
> > > +}
> > > +
> > > +void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group)
> > > +{
> > > + if (!group)
> > > + return;
> > > +
> > > + kref_put(&group->kill_refcount, xe_exec_queue_group_kill);
> > > +}
> > > +
> > > void xe_exec_queue_destroy(struct kref *ref)
> > > {
> > > struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount);
> > > @@ -650,6 +670,7 @@ static int xe_exec_queue_group_init(struct xe_device *xe, struct xe_exec_queue *
> > > group->primary = q;
> > > group->cgp_bo = bo;
> > > INIT_LIST_HEAD(&group->list);
> > > + kref_init(&group->kill_refcount);
> > > xa_init_flags(&group->xa, XA_FLAGS_ALLOC1);
> > > mutex_init(&group->list_lock);
> > > q->multi_queue.group = group;
> > > @@ -725,6 +746,11 @@ static int xe_exec_queue_group_add(struct xe_device *xe, struct xe_exec_queue *q
> > >
> > > q->multi_queue.pos = pos;
> > >
> > > + if (group->primary->multi_queue.keep_active) {
> > > + xe_exec_queue_group_kill_get(group);
> > > + q->multi_queue.keep_active = true;
> > > + }
> > > +
> > > return 0;
> > > }
> > >
> > > @@ -738,6 +764,11 @@ static void xe_exec_queue_group_delete(struct xe_device *xe, struct xe_exec_queu
> > > lrc = xa_erase(&group->xa, q->multi_queue.pos);
> > > xe_assert(xe, lrc);
> > > xe_lrc_put(lrc);
> > > +
> > > + if (q->multi_queue.keep_active) {
> > > + xe_exec_queue_group_kill_put(group);
> > > + q->multi_queue.keep_active = false;
> > > + }
> > > }
> > >
> > > static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue *q,
> > > @@ -759,12 +790,24 @@ static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue
> > > return -EINVAL;
> > >
> > > if (value & DRM_XE_MULTI_GROUP_CREATE) {
> > > - if (XE_IOCTL_DBG(xe, value & ~DRM_XE_MULTI_GROUP_CREATE))
> > > + if (XE_IOCTL_DBG(xe, value & ~(DRM_XE_MULTI_GROUP_CREATE |
> > > + DRM_XE_MULTI_GROUP_KEEP_ACTIVE)))
> > > + return -EINVAL;
> > > +
> > > + /*
> > > + * KEEP_ACTIVE is not supported in preempt fence mode as in that mode,
> > > + * VM_DESTROY ioctl expects all exec queues of that VM are already killed.
> > > + */
> > > + if (XE_IOCTL_DBG(xe, (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE) &&
> > > + xe_vm_in_preempt_fence_mode(q->vm)))
> > > return -EINVAL;
> > >
> > > q->multi_queue.valid = true;
> > > q->multi_queue.is_primary = true;
> > > q->multi_queue.pos = 0;
> > > + if (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE)
> > > + q->multi_queue.keep_active = true;
> > > +
> > > return 0;
> > > }
> > >
> > > @@ -1312,6 +1355,11 @@ void xe_exec_queue_kill(struct xe_exec_queue *q)
> > >
> > > q->ops->kill(q);
> > > xe_vm_remove_compute_exec_queue(q->vm, q);
> > > +
> > > + if (!xe_exec_queue_is_multi_queue_primary(q) && q->multi_queue.keep_active) {
> > > + xe_exec_queue_group_kill_put(q->multi_queue.group);
> > > + q->multi_queue.keep_active = false;
> > > + }
> > > }
> > >
> > > int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
> > > @@ -1338,7 +1386,10 @@ int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
> > > if (q->vm && q->hwe->hw_engine_group)
> > > xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
> > >
> > > - xe_exec_queue_kill(q);
> > > + if (xe_exec_queue_is_multi_queue_primary(q))
> > > + xe_exec_queue_group_kill_put(q->multi_queue.group);
> > > + else
> > > + xe_exec_queue_kill(q);
> > >
> > > trace_xe_exec_queue_close(q);
> > > xe_exec_queue_put(q);
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
> > > index ffcc1feb879e..10abed98fb6b 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> > > @@ -113,6 +113,8 @@ static inline struct xe_exec_queue *xe_exec_queue_multi_queue_primary(struct xe_
> > > return xe_exec_queue_is_multi_queue(q) ? q->multi_queue.group->primary : q;
> > > }
> > >
> > > +void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group);
> > > +
> > > bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
> > >
> > > bool xe_exec_queue_is_idle(struct xe_exec_queue *q);
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > index 5fc516b0bb77..67ea5eebf70b 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > @@ -62,6 +62,8 @@ struct xe_exec_queue_group {
> > > struct list_head list;
> > > /** @list_lock: Secondary queue list lock */
> > > struct mutex list_lock;
> > > + /** @kill_refcount: ref count to kill primary queue */
> > > + struct kref kill_refcount;
> > > /** @sync_pending: CGP_SYNC_DONE g2h response pending */
> > > bool sync_pending;
> > > /** @banned: Group banned */
> > > @@ -161,6 +163,8 @@ struct xe_exec_queue {
> > > u8 valid:1;
> > > /** @multi_queue.is_primary: Is primary queue (Q0) of the group */
> > > u8 is_primary:1;
> > > + /** @multi_queue.keep_active: Keep the group active after primary is destroyed */
> > > + u8 keep_active:1;
> > > } multi_queue;
> > >
> > > /** @sched_props: scheduling properties */
> > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > index 705081bf0d81..bd6154e3b728 100644
> > > --- a/include/uapi/drm/xe_drm.h
> > > +++ b/include/uapi/drm/xe_drm.h
> > > @@ -1280,6 +1280,9 @@ struct drm_xe_vm_bind {
> > > * then a new multi-queue group is created with this queue as the primary queue
> > > * (Q0). Otherwise, the queue gets added to the multi-queue group whose primary
> > > * queue's exec_queue_id is specified in the lower 32 bits of the 'value' field.
> > > + * If the extension's 'value' field has %DRM_XE_MULTI_GROUP_KEEP_ACTIVE flag
> > > + * set, then the multi-queue group is kept active after the primary queue is
> > > + * destroyed.
> > > * All the other non-relevant bits of extension's 'value' field while adding the
> > > * primary or the secondary queues of the group must be set to 0.
> > > * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY - Set the queue
> > > @@ -1328,6 +1331,7 @@ struct drm_xe_exec_queue_create {
> > > #define DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE 3
> > > #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP 4
> > > #define DRM_XE_MULTI_GROUP_CREATE (1ull << 63)
> > > +#define DRM_XE_MULTI_GROUP_KEEP_ACTIVE (1ull << 62)
> > > #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY 5
> > > /** @extensions: Pointer to the first extension struct, if any */
> > > __u64 extensions;
> > > --
> > > 2.43.0
> > >
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v6 15/17] drm/xe/multi_queue: Support active group after primary is destroyed
2025-12-19 22:53 ` Matt Roper
@ 2025-12-31 1:51 ` Niranjana Vishwanathapura
0 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2025-12-31 1:51 UTC (permalink / raw)
To: Matt Roper; +Cc: Rodrigo Vivi, intel-xe, matthew.brost
On Fri, Dec 19, 2025 at 02:53:33PM -0800, Matt Roper wrote:
>On Fri, Dec 19, 2025 at 02:35:53PM -0800, Niranjana Vishwanathapura wrote:
>> On Fri, Dec 19, 2025 at 04:06:17PM -0500, Rodrigo Vivi wrote:
>> > On Wed, Dec 10, 2025 at 05:03:03PM -0800, Niranjana Vishwanathapura wrote:
>> > > Add support to keep the group active after the primary queue is
>> > > destroyed. Instead of killing the primary queue during exec_queue
>> > > destroy ioctl, kill it when all the secondary queues of the group
>> > > are killed.
>> > >
>> > > Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
>> > > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
>> > > ---
>> > > drivers/gpu/drm/xe/xe_device.c | 7 ++-
>> > > drivers/gpu/drm/xe/xe_exec_queue.c | 55 +++++++++++++++++++++++-
>> > > drivers/gpu/drm/xe/xe_exec_queue.h | 2 +
>> > > drivers/gpu/drm/xe/xe_exec_queue_types.h | 4 ++
>> > > include/uapi/drm/xe_drm.h | 4 ++
>> >
>> > Hi Niranjana,
>> >
>> > Where is the UMD ack for this? Who is using this uAPI?
>>
>> https://lists.freedesktop.org/archives/intel-xe/2025-November/105779.html
>
>I think Rodrigo's question was more about the specific
>DRM_XE_MULTI_GROUP_KEEP_ACTIVE flag added in this patch. It doesn't
>appear that the compute-runtime PR linked in the cover letter makes use
>of that flag at all. Is there a second PR that adds usage of that?
>
Ok, after speaking to the compute team, it appears that this is not
a must-have feature. Let me send out a patch to revert it.
We can always add it back later if the requirement arises.
Thanks,
Niranjana
>
>Matt
>
>>
>> Niranjana
>>
>> >
>> > Thanks,
>> > Rodrigo.
>> >
>> > > 5 files changed, 69 insertions(+), 3 deletions(-)
>> > >
>> > > diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
>> > > index 7a498c8db7b1..24efb6a3e0ea 100644
>> > > --- a/drivers/gpu/drm/xe/xe_device.c
>> > > +++ b/drivers/gpu/drm/xe/xe_device.c
>> > > @@ -177,7 +177,12 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
>> > > xa_for_each(&xef->exec_queue.xa, idx, q) {
>> > > if (q->vm && q->hwe->hw_engine_group)
>> > > xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
>> > > - xe_exec_queue_kill(q);
>> > > +
>> > > + if (xe_exec_queue_is_multi_queue_primary(q))
>> > > + xe_exec_queue_group_kill_put(q->multi_queue.group);
>> > > + else
>> > > + xe_exec_queue_kill(q);
>> > > +
>> > > xe_exec_queue_put(q);
>> > > }
>> > > xa_for_each(&xef->vm.xa, idx, vm)
>> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
>> > > index d337b7bc2b80..3f4840d135a0 100644
>> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> > > @@ -418,6 +418,26 @@ struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe,
>> > > }
>> > > ALLOW_ERROR_INJECTION(xe_exec_queue_create_bind, ERRNO);
>> > >
>> > > +static void xe_exec_queue_group_kill(struct kref *ref)
>> > > +{
>> > > + struct xe_exec_queue_group *group = container_of(ref, struct xe_exec_queue_group,
>> > > + kill_refcount);
>> > > + xe_exec_queue_kill(group->primary);
>> > > +}
>> > > +
>> > > +static inline void xe_exec_queue_group_kill_get(struct xe_exec_queue_group *group)
>> > > +{
>> > > + kref_get(&group->kill_refcount);
>> > > +}
>> > > +
>> > > +void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group)
>> > > +{
>> > > + if (!group)
>> > > + return;
>> > > +
>> > > + kref_put(&group->kill_refcount, xe_exec_queue_group_kill);
>> > > +}
>> > > +
>> > > void xe_exec_queue_destroy(struct kref *ref)
>> > > {
>> > > struct xe_exec_queue *q = container_of(ref, struct xe_exec_queue, refcount);
>> > > @@ -650,6 +670,7 @@ static int xe_exec_queue_group_init(struct xe_device *xe, struct xe_exec_queue *
>> > > group->primary = q;
>> > > group->cgp_bo = bo;
>> > > INIT_LIST_HEAD(&group->list);
>> > > + kref_init(&group->kill_refcount);
>> > > xa_init_flags(&group->xa, XA_FLAGS_ALLOC1);
>> > > mutex_init(&group->list_lock);
>> > > q->multi_queue.group = group;
>> > > @@ -725,6 +746,11 @@ static int xe_exec_queue_group_add(struct xe_device *xe, struct xe_exec_queue *q
>> > >
>> > > q->multi_queue.pos = pos;
>> > >
>> > > + if (group->primary->multi_queue.keep_active) {
>> > > + xe_exec_queue_group_kill_get(group);
>> > > + q->multi_queue.keep_active = true;
>> > > + }
>> > > +
>> > > return 0;
>> > > }
>> > >
>> > > @@ -738,6 +764,11 @@ static void xe_exec_queue_group_delete(struct xe_device *xe, struct xe_exec_queu
>> > > lrc = xa_erase(&group->xa, q->multi_queue.pos);
>> > > xe_assert(xe, lrc);
>> > > xe_lrc_put(lrc);
>> > > +
>> > > + if (q->multi_queue.keep_active) {
>> > > + xe_exec_queue_group_kill_put(group);
>> > > + q->multi_queue.keep_active = false;
>> > > + }
>> > > }
>> > >
>> > > static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue *q,
>> > > @@ -759,12 +790,24 @@ static int exec_queue_set_multi_group(struct xe_device *xe, struct xe_exec_queue
>> > > return -EINVAL;
>> > >
>> > > if (value & DRM_XE_MULTI_GROUP_CREATE) {
>> > > - if (XE_IOCTL_DBG(xe, value & ~DRM_XE_MULTI_GROUP_CREATE))
>> > > + if (XE_IOCTL_DBG(xe, value & ~(DRM_XE_MULTI_GROUP_CREATE |
>> > > + DRM_XE_MULTI_GROUP_KEEP_ACTIVE)))
>> > > + return -EINVAL;
>> > > +
>> > > + /*
>> > > + * KEEP_ACTIVE is not supported in preempt fence mode as in that mode,
>> > > + * VM_DESTROY ioctl expects all exec queues of that VM are already killed.
>> > > + */
>> > > + if (XE_IOCTL_DBG(xe, (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE) &&
>> > > + xe_vm_in_preempt_fence_mode(q->vm)))
>> > > return -EINVAL;
>> > >
>> > > q->multi_queue.valid = true;
>> > > q->multi_queue.is_primary = true;
>> > > q->multi_queue.pos = 0;
>> > > + if (value & DRM_XE_MULTI_GROUP_KEEP_ACTIVE)
>> > > + q->multi_queue.keep_active = true;
>> > > +
>> > > return 0;
>> > > }
>> > >
>> > > @@ -1312,6 +1355,11 @@ void xe_exec_queue_kill(struct xe_exec_queue *q)
>> > >
>> > > q->ops->kill(q);
>> > > xe_vm_remove_compute_exec_queue(q->vm, q);
>> > > +
>> > > + if (!xe_exec_queue_is_multi_queue_primary(q) && q->multi_queue.keep_active) {
>> > > + xe_exec_queue_group_kill_put(q->multi_queue.group);
>> > > + q->multi_queue.keep_active = false;
>> > > + }
>> > > }
>> > >
>> > > int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
>> > > @@ -1338,7 +1386,10 @@ int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
>> > > if (q->vm && q->hwe->hw_engine_group)
>> > > xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
>> > >
>> > > - xe_exec_queue_kill(q);
>> > > + if (xe_exec_queue_is_multi_queue_primary(q))
>> > > + xe_exec_queue_group_kill_put(q->multi_queue.group);
>> > > + else
>> > > + xe_exec_queue_kill(q);
>> > >
>> > > trace_xe_exec_queue_close(q);
>> > > xe_exec_queue_put(q);
>> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
>> > > index ffcc1feb879e..10abed98fb6b 100644
>> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.h
>> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
>> > > @@ -113,6 +113,8 @@ static inline struct xe_exec_queue *xe_exec_queue_multi_queue_primary(struct xe_
>> > > return xe_exec_queue_is_multi_queue(q) ? q->multi_queue.group->primary : q;
>> > > }
>> > >
>> > > +void xe_exec_queue_group_kill_put(struct xe_exec_queue_group *group);
>> > > +
>> > > bool xe_exec_queue_is_lr(struct xe_exec_queue *q);
>> > >
>> > > bool xe_exec_queue_is_idle(struct xe_exec_queue *q);
>> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> > > index 5fc516b0bb77..67ea5eebf70b 100644
>> > > --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
>> > > @@ -62,6 +62,8 @@ struct xe_exec_queue_group {
>> > > struct list_head list;
>> > > /** @list_lock: Secondary queue list lock */
>> > > struct mutex list_lock;
>> > > + /** @kill_refcount: ref count to kill primary queue */
>> > > + struct kref kill_refcount;
>> > > /** @sync_pending: CGP_SYNC_DONE g2h response pending */
>> > > bool sync_pending;
>> > > /** @banned: Group banned */
>> > > @@ -161,6 +163,8 @@ struct xe_exec_queue {
>> > > u8 valid:1;
>> > > /** @multi_queue.is_primary: Is primary queue (Q0) of the group */
>> > > u8 is_primary:1;
>> > > + /** @multi_queue.keep_active: Keep the group active after primary is destroyed */
>> > > + u8 keep_active:1;
>> > > } multi_queue;
>> > >
>> > > /** @sched_props: scheduling properties */
>> > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> > > index 705081bf0d81..bd6154e3b728 100644
>> > > --- a/include/uapi/drm/xe_drm.h
>> > > +++ b/include/uapi/drm/xe_drm.h
>> > > @@ -1280,6 +1280,9 @@ struct drm_xe_vm_bind {
>> > > * then a new multi-queue group is created with this queue as the primary queue
>> > > * (Q0). Otherwise, the queue gets added to the multi-queue group whose primary
>> > > * queue's exec_queue_id is specified in the lower 32 bits of the 'value' field.
>> > > + * If the extension's 'value' field has %DRM_XE_MULTI_GROUP_KEEP_ACTIVE flag
>> > > + * set, then the multi-queue group is kept active after the primary queue is
>> > > + * destroyed.
>> > > * All the other non-relevant bits of extension's 'value' field while adding the
>> > > * primary or the secondary queues of the group must be set to 0.
>> > > * - %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY - Set the queue
>> > > @@ -1328,6 +1331,7 @@ struct drm_xe_exec_queue_create {
>> > > #define DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE 3
>> > > #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP 4
>> > > #define DRM_XE_MULTI_GROUP_CREATE (1ull << 63)
>> > > +#define DRM_XE_MULTI_GROUP_KEEP_ACTIVE (1ull << 62)
>> > > #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY 5
>> > > /** @extensions: Pointer to the first extension struct, if any */
>> > > __u64 extensions;
>> > > --
>> > > 2.43.0
>> > >
>
>--
>Matt Roper
>Graphics Software Engineer
>Linux GPU Platform Enablement
>Intel Corporation
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support
2025-12-11 1:02 ` [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support Niranjana Vishwanathapura
@ 2026-01-19 16:57 ` Thomas Hellström
2026-01-20 21:06 ` Niranjana Vishwanathapura
0 siblings, 1 reply; 32+ messages in thread
From: Thomas Hellström @ 2026-01-19 16:57 UTC (permalink / raw)
To: Niranjana Vishwanathapura, intel-xe; +Cc: matthew.brost, matthew.d.roper
Hi,
On Wed, 2025-12-10 at 17:02 -0800, Niranjana Vishwanathapura wrote:
> This patch adds support for exec_queue set_property ioctl.
> It is derived from the original work which is part of
> https://patchwork.freedesktop.org/series/112188/
>
> Currently only DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
> property can be dynamically set.
>
> v2: Check for and update kernel-doc which property this ioctl
> supports (Matt Brost)
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com>
> Signed-off-by: Niranjana Vishwanathapura
> <niranjana.vishwanathapura@intel.com>
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Two questions on this patch:
1) I don't see any locking to protect the exec-queue / lrc state.
Perhaps I missed it, but couldn't this blow up if someone runs two
set_property ioctls in parallel on the same exec-queue? Looks like at
least the set_priority is doing non-atomic RMW assignments?
2) Do we really need this to be mutable? In i915, assuming that lrc /
context configuration could change created horrendous locking
constructs that were later fixed by making it immutable at first
command submission. I don't exactly remember the details, but do we need
to change properties after first submission? If not, I suggest blocking
that.
Thanks,
Thomas
> ---
> drivers/gpu/drm/xe/xe_device.c | 2 ++
> drivers/gpu/drm/xe/xe_exec_queue.c | 35
> ++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_exec_queue.h | 2 ++
> include/uapi/drm/xe_drm.h | 26 ++++++++++++++++++++++
> 4 files changed, 65 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_device.c
> b/drivers/gpu/drm/xe/xe_device.c
> index 1197f914ef77..7a498c8db7b1 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -207,6 +207,8 @@ static const struct drm_ioctl_desc xe_ioctls[] =
> {
> DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl,
> DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS,
> xe_vm_query_vmas_attrs_ioctl,
> DRM_RENDER_ALLOW),
> + DRM_IOCTL_DEF_DRV(XE_EXEC_QUEUE_SET_PROPERTY,
> xe_exec_queue_set_property_ioctl,
> + DRM_RENDER_ALLOW),
> };
>
> static long xe_drm_ioctl(struct file *file, unsigned int cmd,
> unsigned long arg)
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c
> b/drivers/gpu/drm/xe/xe_exec_queue.c
> index d0082eb45a4a..d738a9fea1e1 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -790,6 +790,41 @@ static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
> 	exec_queue_set_multi_queue_priority,
> };
>
> +int xe_exec_queue_set_property_ioctl(struct drm_device *dev, void
> *data,
> + struct drm_file *file)
> +{
> + struct xe_device *xe = to_xe_device(dev);
> + struct xe_file *xef = to_xe_file(file);
> + struct drm_xe_exec_queue_set_property *args = data;
> + struct xe_exec_queue *q;
> + int ret;
> + u32 idx;
> +
> +	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> + return -EINVAL;
> +
> + if (XE_IOCTL_DBG(xe, args->property !=
> +
> DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY))
> + return -EINVAL;
> +
> + q = xe_exec_queue_lookup(xef, args->exec_queue_id);
> + if (XE_IOCTL_DBG(xe, !q))
> + return -ENOENT;
> +
> + idx = array_index_nospec(args->property,
> +
> ARRAY_SIZE(exec_queue_set_property_funcs));
> +	ret = exec_queue_set_property_funcs[idx](xe, q, args->value);
> + if (XE_IOCTL_DBG(xe, ret))
> + goto err_post_lookup;
> +
> + xe_exec_queue_put(q);
> + return 0;
> +
> + err_post_lookup:
> + xe_exec_queue_put(q);
> + return ret;
> +}
> +
> static int exec_queue_user_ext_check(struct xe_exec_queue *q, u64
> properties)
> {
> u64 secondary_queue_valid_props =
> BIT_ULL(DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP) |
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h
> b/drivers/gpu/drm/xe/xe_exec_queue.h
> index e6daa40003f2..ffcc1feb879e 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> @@ -125,6 +125,8 @@ int xe_exec_queue_destroy_ioctl(struct drm_device
> *dev, void *data,
> struct drm_file *file);
> int xe_exec_queue_get_property_ioctl(struct drm_device *dev, void
> *data,
> struct drm_file *file);
> +int xe_exec_queue_set_property_ioctl(struct drm_device *dev, void
> *data,
> + struct drm_file *file);
> enum xe_exec_queue_priority
> xe_exec_queue_device_get_max_priority(struct xe_device *xe);
>
> void xe_exec_queue_last_fence_put(struct xe_exec_queue *e, struct
> xe_vm *vm);
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index fd79d78de2e9..705081bf0d81 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -106,6 +106,7 @@ extern "C" {
> #define DRM_XE_OBSERVATION 0x0b
> #define DRM_XE_MADVISE 0x0c
> #define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x0e
>
> /* Must be kept compact -- no holes */
>
> @@ -123,6 +124,7 @@ extern "C" {
> #define DRM_IOCTL_XE_OBSERVATION		DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> #define DRM_IOCTL_XE_MADVISE			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> #define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
> +#define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY	DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
>
> /**
> * DOC: Xe IOCTL Extensions
> @@ -2315,6 +2317,30 @@ struct drm_xe_vm_query_mem_range_attr {
>
> };
>
> +/**
> + * struct drm_xe_exec_queue_set_property - exec queue set property
> + *
> + * Sets execution queue properties dynamically.
> + * Currently only
> %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
> + * property can be dynamically set.
> + */
> +struct drm_xe_exec_queue_set_property {
> + /** @extensions: Pointer to the first extension struct, if
> any */
> + __u64 extensions;
> +
> + /** @exec_queue_id: Exec queue ID */
> + __u32 exec_queue_id;
> +
> + /** @property: property to set */
> + __u32 property;
> +
> + /** @value: property value */
> + __u64 value;
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +};
> +
> #if defined(__cplusplus)
> }
> #endif
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support
2026-01-19 16:57 ` Thomas Hellström
@ 2026-01-20 21:06 ` Niranjana Vishwanathapura
2026-01-20 22:20 ` Matthew Brost
0 siblings, 1 reply; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2026-01-20 21:06 UTC (permalink / raw)
To: Thomas Hellström; +Cc: intel-xe, matthew.brost, matthew.d.roper
On Mon, Jan 19, 2026 at 05:57:29PM +0100, Thomas Hellström wrote:
>Hi,
>
>On Wed, 2025-12-10 at 17:02 -0800, Niranjana Vishwanathapura wrote:
>> This patch adds support for exec_queue set_property ioctl.
>> It is derived from the original work which is part of
>> https://patchwork.freedesktop.org/series/112188/
>>
>> Currently only DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
>> property can be dynamically set.
>>
>> v2: Check for and update kernel-doc which property this ioctl
>> supports (Matt Brost)
>>
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com>
>> Signed-off-by: Niranjana Vishwanathapura
>> <niranjana.vishwanathapura@intel.com>
>> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
>
>Two questions on this patch:
>
>1) I don't see any locking to protect the exec-queue / lrc state.
>Perhaps I missed it, but couldn't this blow up if someone runs two
>set_property ioctls in parallel on the same exec-queue? Looks like at
>least the set_priority is doing non-atomic RMW assignments?
>
Hmm...I think only the checking and setting of 'q->multi_queue.priority'
in guc_exec_queue_set_multi_queue_priority() is a potential issue
(the rest is handled by the message handling queue and the GuC CT backend).
I think we can fix it with a simple spinlock in this function.
>2) Do we really need this to be mutable? In i915 assuming that lrc /
>context configuration could change created horrendous locking
>constructs that were later fixed by making it immutable at first
>command submission. I don't exactly remember the details but do we need
>to change properties after first submission? If not I suggest blocking
>that.
Yeah, the ability to dynamically change multi-queue priority is a requirement
from the Compute UMD team.
Niranjana
>
>Thanks,
>Thomas
>
>
>
>
>> ---
>> drivers/gpu/drm/xe/xe_device.c | 2 ++
>> drivers/gpu/drm/xe/xe_exec_queue.c | 35
>> ++++++++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_exec_queue.h | 2 ++
>> include/uapi/drm/xe_drm.h | 26 ++++++++++++++++++++++
>> 4 files changed, 65 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_device.c
>> b/drivers/gpu/drm/xe/xe_device.c
>> index 1197f914ef77..7a498c8db7b1 100644
>> --- a/drivers/gpu/drm/xe/xe_device.c
>> +++ b/drivers/gpu/drm/xe/xe_device.c
>> @@ -207,6 +207,8 @@ static const struct drm_ioctl_desc xe_ioctls[] =
>> {
>> DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl,
>> DRM_RENDER_ALLOW),
>> DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS,
>> xe_vm_query_vmas_attrs_ioctl,
>> DRM_RENDER_ALLOW),
>> + DRM_IOCTL_DEF_DRV(XE_EXEC_QUEUE_SET_PROPERTY,
>> xe_exec_queue_set_property_ioctl,
>> + DRM_RENDER_ALLOW),
>> };
>>
>> static long xe_drm_ioctl(struct file *file, unsigned int cmd,
>> unsigned long arg)
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c
>> b/drivers/gpu/drm/xe/xe_exec_queue.c
>> index d0082eb45a4a..d738a9fea1e1 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> @@ -790,6 +790,41 @@ static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
>> 	exec_queue_set_multi_queue_priority,
>> };
>>
>> +int xe_exec_queue_set_property_ioctl(struct drm_device *dev, void
>> *data,
>> + struct drm_file *file)
>> +{
>> + struct xe_device *xe = to_xe_device(dev);
>> + struct xe_file *xef = to_xe_file(file);
>> + struct drm_xe_exec_queue_set_property *args = data;
>> + struct xe_exec_queue *q;
>> + int ret;
>> + u32 idx;
>> +
>> +	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
>> + return -EINVAL;
>> +
>> + if (XE_IOCTL_DBG(xe, args->property !=
>> +
>> DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY))
>> + return -EINVAL;
>> +
>> + q = xe_exec_queue_lookup(xef, args->exec_queue_id);
>> + if (XE_IOCTL_DBG(xe, !q))
>> + return -ENOENT;
>> +
>> + idx = array_index_nospec(args->property,
>> +
>> ARRAY_SIZE(exec_queue_set_property_funcs));
>> +	ret = exec_queue_set_property_funcs[idx](xe, q, args->value);
>> + if (XE_IOCTL_DBG(xe, ret))
>> + goto err_post_lookup;
>> +
>> + xe_exec_queue_put(q);
>> + return 0;
>> +
>> + err_post_lookup:
>> + xe_exec_queue_put(q);
>> + return ret;
>> +}
>> +
>> static int exec_queue_user_ext_check(struct xe_exec_queue *q, u64
>> properties)
>> {
>> u64 secondary_queue_valid_props =
>> BIT_ULL(DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP) |
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h
>> b/drivers/gpu/drm/xe/xe_exec_queue.h
>> index e6daa40003f2..ffcc1feb879e 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
>> @@ -125,6 +125,8 @@ int xe_exec_queue_destroy_ioctl(struct drm_device
>> *dev, void *data,
>> struct drm_file *file);
>> int xe_exec_queue_get_property_ioctl(struct drm_device *dev, void
>> *data,
>> struct drm_file *file);
>> +int xe_exec_queue_set_property_ioctl(struct drm_device *dev, void
>> *data,
>> + struct drm_file *file);
>> enum xe_exec_queue_priority
>> xe_exec_queue_device_get_max_priority(struct xe_device *xe);
>>
>> void xe_exec_queue_last_fence_put(struct xe_exec_queue *e, struct
>> xe_vm *vm);
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index fd79d78de2e9..705081bf0d81 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -106,6 +106,7 @@ extern "C" {
>> #define DRM_XE_OBSERVATION 0x0b
>> #define DRM_XE_MADVISE 0x0c
>> #define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
>> +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x0e
>>
>> /* Must be kept compact -- no holes */
>>
>> @@ -123,6 +124,7 @@ extern "C" {
>> #define DRM_IOCTL_XE_OBSERVATION		DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
>> #define DRM_IOCTL_XE_MADVISE			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
>> #define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
>> +#define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY	DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
>>
>> /**
>> * DOC: Xe IOCTL Extensions
>> @@ -2315,6 +2317,30 @@ struct drm_xe_vm_query_mem_range_attr {
>>
>> };
>>
>> +/**
>> + * struct drm_xe_exec_queue_set_property - exec queue set property
>> + *
>> + * Sets execution queue properties dynamically.
>> + * Currently only
>> %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
>> + * property can be dynamically set.
>> + */
>> +struct drm_xe_exec_queue_set_property {
>> + /** @extensions: Pointer to the first extension struct, if
>> any */
>> + __u64 extensions;
>> +
>> + /** @exec_queue_id: Exec queue ID */
>> + __u32 exec_queue_id;
>> +
>> + /** @property: property to set */
>> + __u32 property;
>> +
>> + /** @value: property value */
>> + __u64 value;
>> +
>> + /** @reserved: Reserved */
>> + __u64 reserved[2];
>> +};
>> +
>> #if defined(__cplusplus)
>> }
>> #endif
>
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support
2026-01-20 21:06 ` Niranjana Vishwanathapura
@ 2026-01-20 22:20 ` Matthew Brost
2026-01-20 22:30 ` Matthew Brost
2026-01-21 15:29 ` Thomas Hellström
0 siblings, 2 replies; 32+ messages in thread
From: Matthew Brost @ 2026-01-20 22:20 UTC (permalink / raw)
To: Niranjana Vishwanathapura
Cc: Thomas Hellström, intel-xe, matthew.d.roper
On Tue, Jan 20, 2026 at 01:06:07PM -0800, Niranjana Vishwanathapura wrote:
> On Mon, Jan 19, 2026 at 05:57:29PM +0100, Thomas Hellström wrote:
> > Hi,
> >
> > On Wed, 2025-12-10 at 17:02 -0800, Niranjana Vishwanathapura wrote:
> > > This patch adds support for exec_queue set_property ioctl.
> > > It is derived from the original work which is part of
> > > https://patchwork.freedesktop.org/series/112188/
> > >
> > > Currently only DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
> > > property can be dynamically set.
> > >
> > > v2: Check for and update kernel-doc which property this ioctl
> > > supports (Matt Brost)
> > >
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com>
> > > Signed-off-by: Niranjana Vishwanathapura
> > > <niranjana.vishwanathapura@intel.com>
> > > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> >
> > Two questions on this patch:
> >
> > 1) I don't see any locking to protect the exec-queue / lrc state.
> > Perhaps I missed it, but couldn't this blow up if someone runs two
> > set_property ioctls in parallel on the same exec-queue? Looks like at
> > least the set_priority is doing non-atomic RMW assignments?
> >
I don't think we'd blow up. At worst q->multi_queue.priority would race
between a set in guc_exec_queue_set_priority (parallel across queues)
and a read of that value in __guc_exec_queue_process_msg_set_sched_props
(serialized across queues). Eventually we'd stabilize on the most recently
set value in __guc_exec_queue_process_msg_set_sched_props.
>
> Hmm...I think only checking and setting of 'q->multi_queue.priority'
> in guc_exec_queue_set_multi_queue_priority() is a potential issue
> (as the rest is handled by message handling queue and guc ct backend).
> I think we can fix it with a simple spicklock in this function.
>
Hmm, I'm not sure what a lock here buys us, as if two user threads try to
set q->multi_queue.priority we never really know which one should win the
race.
> > 2) Do we really need this to be mutable? In i915 assuming that lrc /
> > context configuration could change created horrendous locking
> > constructs that were later fixed by making it immutable at first
> > command submission. I don't exactly remember the details but do we need
> > to change properties after first submission? If not I suggest blocking
> > that.
I don't think this is overly complicated here given the actual queue
interaction with the GuC is serialized by the DRM scheduler.
My thinking is this patch is probably fine as is, albeit a little
harmlessly racy.
Matt
>
> Yah, ability to dynamically change multi-queue priority is a requirement
> from the Compute UMD team.
>
> Niranjana
>
> >
> > Thanks,
> > Thomas
> >
> >
> >
> >
> > > ---
> > > drivers/gpu/drm/xe/xe_device.c | 2 ++
> > > drivers/gpu/drm/xe/xe_exec_queue.c | 35
> > > ++++++++++++++++++++++++++++++
> > > drivers/gpu/drm/xe/xe_exec_queue.h | 2 ++
> > > include/uapi/drm/xe_drm.h | 26 ++++++++++++++++++++++
> > > 4 files changed, 65 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_device.c
> > > b/drivers/gpu/drm/xe/xe_device.c
> > > index 1197f914ef77..7a498c8db7b1 100644
> > > --- a/drivers/gpu/drm/xe/xe_device.c
> > > +++ b/drivers/gpu/drm/xe/xe_device.c
> > > @@ -207,6 +207,8 @@ static const struct drm_ioctl_desc xe_ioctls[] =
> > > {
> > > DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl,
> > > DRM_RENDER_ALLOW),
> > > DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS,
> > > xe_vm_query_vmas_attrs_ioctl,
> > > DRM_RENDER_ALLOW),
> > > + DRM_IOCTL_DEF_DRV(XE_EXEC_QUEUE_SET_PROPERTY,
> > > xe_exec_queue_set_property_ioctl,
> > > + DRM_RENDER_ALLOW),
> > > };
> > >
> > > static long xe_drm_ioctl(struct file *file, unsigned int cmd,
> > > unsigned long arg)
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c
> > > b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > index d0082eb45a4a..d738a9fea1e1 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > @@ -790,6 +790,41 @@ static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
> > > 	exec_queue_set_multi_queue_priority,
> > > };
> > >
> > > +int xe_exec_queue_set_property_ioctl(struct drm_device *dev, void
> > > *data,
> > > + struct drm_file *file)
> > > +{
> > > + struct xe_device *xe = to_xe_device(dev);
> > > + struct xe_file *xef = to_xe_file(file);
> > > + struct drm_xe_exec_queue_set_property *args = data;
> > > + struct xe_exec_queue *q;
> > > + int ret;
> > > + u32 idx;
> > > +
> > > +	if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> > > + return -EINVAL;
> > > +
> > > + if (XE_IOCTL_DBG(xe, args->property !=
> > > +
> > > DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY))
> > > + return -EINVAL;
> > > +
> > > + q = xe_exec_queue_lookup(xef, args->exec_queue_id);
> > > + if (XE_IOCTL_DBG(xe, !q))
> > > + return -ENOENT;
> > > +
> > > + idx = array_index_nospec(args->property,
> > > +
> > > ARRAY_SIZE(exec_queue_set_property_funcs));
> > > +	ret = exec_queue_set_property_funcs[idx](xe, q, args->value);
> > > + if (XE_IOCTL_DBG(xe, ret))
> > > + goto err_post_lookup;
> > > +
> > > + xe_exec_queue_put(q);
> > > + return 0;
> > > +
> > > + err_post_lookup:
> > > + xe_exec_queue_put(q);
> > > + return ret;
> > > +}
> > > +
> > > static int exec_queue_user_ext_check(struct xe_exec_queue *q, u64
> > > properties)
> > > {
> > > u64 secondary_queue_valid_props =
> > > BIT_ULL(DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP) |
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h
> > > b/drivers/gpu/drm/xe/xe_exec_queue.h
> > > index e6daa40003f2..ffcc1feb879e 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> > > @@ -125,6 +125,8 @@ int xe_exec_queue_destroy_ioctl(struct drm_device
> > > *dev, void *data,
> > > struct drm_file *file);
> > > int xe_exec_queue_get_property_ioctl(struct drm_device *dev, void
> > > *data,
> > > struct drm_file *file);
> > > +int xe_exec_queue_set_property_ioctl(struct drm_device *dev, void *data,
> > > + struct drm_file *file);
> > > enum xe_exec_queue_priority
> > > xe_exec_queue_device_get_max_priority(struct xe_device *xe);
> > >
> > > void xe_exec_queue_last_fence_put(struct xe_exec_queue *e, struct
> > > xe_vm *vm);
> > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > index fd79d78de2e9..705081bf0d81 100644
> > > --- a/include/uapi/drm/xe_drm.h
> > > +++ b/include/uapi/drm/xe_drm.h
> > > @@ -106,6 +106,7 @@ extern "C" {
> > > #define DRM_XE_OBSERVATION 0x0b
> > > #define DRM_XE_MADVISE 0x0c
> > > #define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
> > > +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x0e
> > >
> > > /* Must be kept compact -- no holes */
> > >
> > > @@ -123,6 +124,7 @@ extern "C" {
> > > #define DRM_IOCTL_XE_OBSERVATION	DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> > > #define DRM_IOCTL_XE_MADVISE	DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> > > #define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
> > > +#define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY	DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
> > >
> > > /**
> > > * DOC: Xe IOCTL Extensions
> > > @@ -2315,6 +2317,30 @@ struct drm_xe_vm_query_mem_range_attr {
> > >
> > > };
> > >
> > > +/**
> > > + * struct drm_xe_exec_queue_set_property - exec queue set property
> > > + *
> > > + * Sets execution queue properties dynamically.
> > > + * Currently only %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
> > > + * property can be dynamically set.
> > > + */
> > > +struct drm_xe_exec_queue_set_property {
> > > + /** @extensions: Pointer to the first extension struct, if any */
> > > + __u64 extensions;
> > > +
> > > + /** @exec_queue_id: Exec queue ID */
> > > + __u32 exec_queue_id;
> > > +
> > > + /** @property: property to set */
> > > + __u32 property;
> > > +
> > > + /** @value: property value */
> > > + __u64 value;
> > > +
> > > + /** @reserved: Reserved */
> > > + __u64 reserved[2];
> > > +};
> > > +
> > > #if defined(__cplusplus)
> > > }
> > > #endif
> >
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support
2026-01-20 22:20 ` Matthew Brost
@ 2026-01-20 22:30 ` Matthew Brost
2026-01-21 15:29 ` Thomas Hellström
1 sibling, 0 replies; 32+ messages in thread
From: Matthew Brost @ 2026-01-20 22:30 UTC (permalink / raw)
To: Niranjana Vishwanathapura
Cc: Thomas Hellström, intel-xe, matthew.d.roper
On Tue, Jan 20, 2026 at 02:20:22PM -0800, Matthew Brost wrote:
> On Tue, Jan 20, 2026 at 01:06:07PM -0800, Niranjana Vishwanathapura wrote:
> > On Mon, Jan 19, 2026 at 05:57:29PM +0100, Thomas Hellström wrote:
> > > Hi,
> > >
> > > On Wed, 2025-12-10 at 17:02 -0800, Niranjana Vishwanathapura wrote:
> > > > This patch adds support for exec_queue set_property ioctl.
> > > > It is derived from the original work which is part of
> > > > https://patchwork.freedesktop.org/series/112188/
> > > >
> > > > Currently only DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
> > > > property can be dynamically set.
> > > >
> > > > v2: Check for and update kernel-doc which property this ioctl
> > > > supports (Matt Brost)
> > > >
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com>
> > > > Signed-off-by: Niranjana Vishwanathapura
> > > > <niranjana.vishwanathapura@intel.com>
> > > > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> > >
> > > Two questions on this patch:
> > >
> > > 1) I don't see any locking to protect the exec-queue / lrc state.
> > > Perhaps I missed it, but couldn't this blow up if someone runs two
> > > set_property ioctls in parallel on the same exec-queue? Looks like at
> > > least the set_priority is doing non-atomic RMW assignments?
> > >
>
> I don't think we'd blow up. At worst q->multi_queue.priority would race
> between a set in guc_exec_queue_set_priority (parallel across queues)
> and a read of that value in __guc_exec_queue_process_msg_set_sched_props
> (serialized across queues). Eventually we'd stabilize on the most recently
> set value in __guc_exec_queue_process_msg_set_sched_props.
>
> >
> > Hmm...I think only checking and setting of 'q->multi_queue.priority'
> > in guc_exec_queue_set_multi_queue_priority() is a potential issue
> > (as the rest is handled by message handling queue and guc ct backend).
> > I think we can fix it with a simple spinlock in this function.
> >
>
> Hmm, I'm not sure what a lock here buys us: if two user threads try to
> set q->multi_queue.priority, we'd never really know who should win the
> race anyway.
>
> > > 2) Do we really need this to be mutable? In i915 assuming that lrc /
> > > context configuration could change created horrendous locking
> > > constructs that were later fixed by making it immutable at first
> > > command submission. I don't exactly remember the details but do we need
> > > to change properties after first submission? If not I suggest blocking
> > > that.
>
> I don't think this is overly complicated here given the actual queue
> interaction with the GuC is serialized by the DRM scheduler.
>
> My thinking is this patch is probably fine as is, albeit a little
> harmlessly racy.
Actually maybe we need WRITE_ONCE / READ_ONCE semantics here to prevent
data tearing.
Matt
>
> Matt
>
> >
> > Yah, ability to dynamically change multi-queue priority is a requirement
> > from the Compute UMD team.
> >
> > Niranjana
> >
> > >
> > > Thanks,
> > > Thomas
> > >
> > >
> > >
> > >
> > > > [snip]
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support
2026-01-20 22:20 ` Matthew Brost
2026-01-20 22:30 ` Matthew Brost
@ 2026-01-21 15:29 ` Thomas Hellström
2026-01-21 21:23 ` Niranjana Vishwanathapura
1 sibling, 1 reply; 32+ messages in thread
From: Thomas Hellström @ 2026-01-21 15:29 UTC (permalink / raw)
To: Matthew Brost, Niranjana Vishwanathapura; +Cc: intel-xe, matthew.d.roper
On Tue, 2026-01-20 at 14:20 -0800, Matthew Brost wrote:
> On Tue, Jan 20, 2026 at 01:06:07PM -0800, Niranjana Vishwanathapura
> wrote:
> > On Mon, Jan 19, 2026 at 05:57:29PM +0100, Thomas Hellström wrote:
> > > Hi,
> > >
> > > On Wed, 2025-12-10 at 17:02 -0800, Niranjana Vishwanathapura
> > > wrote:
> > > > This patch adds support for exec_queue set_property ioctl.
> > > > It is derived from the original work which is part of
> > > > https://patchwork.freedesktop.org/series/112188/
> > > >
> > > > Currently only
> > > > DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
> > > > property can be dynamically set.
> > > >
> > > > v2: Check for and update kernel-doc which property this ioctl
> > > > supports (Matt Brost)
> > > >
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com>
> > > > Signed-off-by: Niranjana Vishwanathapura
> > > > <niranjana.vishwanathapura@intel.com>
> > > > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> > >
> > > Two questions on this patch:
> > >
> > > 1) I don't see any locking to protect the exec-queue / lrc state.
> > > Perhaps I missed it, but couldn't this blow up if someone runs
> > > two
> > > set_property ioctls in parallel on the same exec-queue? Looks
> > > like at
> > > least the set_priority is doing non-atomic RMW assignments?
> > >
>
> I don't think we'd blow up. At worst q->multi_queue.priority would
> race
> between a set in guc_exec_queue_set_priority (parallel across queues)
> and a read of that value in
> __guc_exec_queue_process_msg_set_sched_props
> (serialized across queues). Eventually we'd stabilize on the most
> recently
> set value in __guc_exec_queue_process_msg_set_sched_props.
>
> >
> > Hmm...I think only checking and setting of 'q-
> > >multi_queue.priority'
> > in guc_exec_queue_set_multi_queue_priority() is a potential issue
> > (as the rest is handled by message handling queue and guc ct
> > backend).
> > I think we can fix it with a simple spinlock in this function.
> >
>
> Hmm, I'm not sure what a lock here buys us as if two user threads try
> to
> set q->multi_queue.priority, we'd never really know who should win
> the
> race anyway.
>
> > > 2) Do we really need this to be mutable? In i915 assuming that
> > > lrc /
> > > context configuration could change created horrendous locking
> > > constructs that were later fixed by making it immutable at first
> > > command submission. I don't exactly remember the details but do
> > > we need
> > > to change properties after first submission? If not I suggest
> > > blocking
> > > that.
>
> I don't think this is overly complicated here given the actual queue
> interaction with the GuC is serialized by the DRM scheduler.
>
> My thinking is this patch is probably fine as is, albeit a little
> harmlessly racy.
I think even if it's possible to reason that this is harmless in its
current form, state that can be modified by multiple threads
simultaneously must really be protected by locks.
Otherwise the code will be fragile: Any future updates where the author
doesn't realize the state is not properly protected might break it.
Also any person trying to familiarize himself with the code and noting
that the data is not protected from racing will have to dig down and
verify that each and every concurrent access is actually ok taking the
lack of barriers and hardware reordering into account.
So please, let's adhere to "data with concurrent accesses must be
protected unless immutable".
Thanks,
Thomas
>
> Matt
>
> >
> > Yah, ability to dynamically change multi-queue priority is a
> > requirement
> > from the Compute UMD team.
> >
> > Niranjana
> >
> > >
> > > Thanks,
> > > Thomas
> > >
> > >
> > >
> > >
> > > > [snip]
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support
2026-01-21 15:29 ` Thomas Hellström
@ 2026-01-21 21:23 ` Niranjana Vishwanathapura
0 siblings, 0 replies; 32+ messages in thread
From: Niranjana Vishwanathapura @ 2026-01-21 21:23 UTC (permalink / raw)
To: Thomas Hellström; +Cc: Matthew Brost, intel-xe, matthew.d.roper
On Wed, Jan 21, 2026 at 04:29:34PM +0100, Thomas Hellström wrote:
>On Tue, 2026-01-20 at 14:20 -0800, Matthew Brost wrote:
>> On Tue, Jan 20, 2026 at 01:06:07PM -0800, Niranjana Vishwanathapura
>> wrote:
>> > On Mon, Jan 19, 2026 at 05:57:29PM +0100, Thomas Hellström wrote:
>> > > Hi,
>> > >
>> > > On Wed, 2025-12-10 at 17:02 -0800, Niranjana Vishwanathapura
>> > > wrote:
>> > > > This patch adds support for exec_queue set_property ioctl.
>> > > > It is derived from the original work which is part of
>> > > > https://patchwork.freedesktop.org/series/112188/
>> > > >
>> > > > Currently only
>> > > > DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
>> > > > property can be dynamically set.
>> > > >
>> > > > v2: Check for and update kernel-doc which property this ioctl
>> > > > supports (Matt Brost)
>> > > >
>> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>> > > > Signed-off-by: Pallavi Mishra <pallavi.mishra@intel.com>
>> > > > Signed-off-by: Niranjana Vishwanathapura
>> > > > <niranjana.vishwanathapura@intel.com>
>> > > > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
>> > >
>> > > Two questions on this patch:
>> > >
>> > > 1) I don't see any locking to protect the exec-queue / lrc state.
>> > > Perhaps I missed it, but couldn't this blow up if someone runs
>> > > two
>> > > set_property ioctls in parallel on the same exec-queue? Looks
>> > > like at
>> > > least the set_priority is doing non-atomic RMW assignments?
>> > >
>>
>> I don't think we'd blow up. At worst q->multi_queue.priority would
>> race
>> between a set in guc_exec_queue_set_priority (parallel across queues)
>> and a read of that value in
>> __guc_exec_queue_process_msg_set_sched_props
>> (serialized across queues). Eventually we'd stabilize on the most
>> recently
>> set value in __guc_exec_queue_process_msg_set_sched_props.
>>
>> >
>> > Hmm...I think only checking and setting of 'q-
>> > >multi_queue.priority'
>> > in guc_exec_queue_set_multi_queue_priority() is a potential issue
>> > (as the rest is handled by message handling queue and guc ct
>> > backend).
>> > I think we can fix it with a simple spinlock in this function.
>> >
>>
>> Hmm, I'm not sure what a lock here buys us as if two user threads try
>> to
>> set q->multi_queue.priority, we'd never really know who should win
>> the
>> race anyway.
>>
>> > > 2) Do we really need this to be mutable? In i915 assuming that
>> > > lrc /
>> > > context configuration could change created horrendous locking
>> > > constructs that were later fixed by making it immutable at first
>> > > command submission. I don't exactly remember the details but do
>> > > we need
>> > > to change properties after first submission? If not I suggest
>> > > blocking
>> > > that.
>>
>> I don't think this is overly complicated here given the actual queue
>> interaction with the GuC is serialized by the DRM scheduler.
>>
>> My thinking is this patch is probably fine as is, albeit a little
>> harmlessly racy.
>
>I think even if it's possible to reason that this is harmless in its
>current form, state that can be modified by multiple threads
>simultaneously must really be protected by locks.
>
>Otherwise the code will be fragile: Any future updates where the author
>doesn't realize the state is not properly protected might break it.
>
>Also any person trying to familiarize himself with the code and noting
>that the data is not protected from racing will have to dig down and
>verify that each and every concurrent access is actually ok taking the
>lack of barriers and hardware reordering into account.
>
>So please, let's adhere to "data with concurrent accesses must be
>protected unless immutable".
>
Hi Thomas, Matt,
I have posted a patch to fix the issue.
https://patchwork.freedesktop.org/series/160447/
Can you take a look?
Thanks,
Niranjana
>Thanks,
>Thomas
>
>
>
>>
>> Matt
>>
>> >
>> > Yah, ability to dynamically change multi-queue priority is a
>> > requirement
>> > from the Compute UMD team.
>> >
>> > Niranjana
>> >
>> > >
>> > > Thanks,
>> > > Thomas
>> > >
>> > >
>> > >
>> > >
>> > > > ---
>> > > > drivers/gpu/drm/xe/xe_device.c | 2 ++
>> > > > drivers/gpu/drm/xe/xe_exec_queue.c | 35
>> > > > ++++++++++++++++++++++++++++++
>> > > > drivers/gpu/drm/xe/xe_exec_queue.h | 2 ++
>> > > > include/uapi/drm/xe_drm.h | 26 ++++++++++++++++++++++
>> > > > 4 files changed, 65 insertions(+)
>> > > >
>> > > > diff --git a/drivers/gpu/drm/xe/xe_device.c
>> > > > b/drivers/gpu/drm/xe/xe_device.c
>> > > > index 1197f914ef77..7a498c8db7b1 100644
>> > > > --- a/drivers/gpu/drm/xe/xe_device.c
>> > > > +++ b/drivers/gpu/drm/xe/xe_device.c
>> > > > @@ -207,6 +207,8 @@ static const struct drm_ioctl_desc
>> > > > xe_ioctls[] =
>> > > > {
>> > > > DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl,
>> > > > DRM_RENDER_ALLOW),
>> > > > DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS,
>> > > > xe_vm_query_vmas_attrs_ioctl,
>> > > > DRM_RENDER_ALLOW),
>> > > > + DRM_IOCTL_DEF_DRV(XE_EXEC_QUEUE_SET_PROPERTY,
>> > > > xe_exec_queue_set_property_ioctl,
>> > > > + DRM_RENDER_ALLOW),
>> > > > };
>> > > >
>> > > > static long xe_drm_ioctl(struct file *file, unsigned int cmd,
>> > > > unsigned long arg)
>> > > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c
>> > > > b/drivers/gpu/drm/xe/xe_exec_queue.c
>> > > > index d0082eb45a4a..d738a9fea1e1 100644
>> > > > --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> > > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> > > > @@ -790,6 +790,41 @@ static const xe_exec_queue_set_property_fn
>> > > > exec_queue_set_property_funcs[] = {
>> > > > exec_q
>> > > > ueue_s
>> > > > et_multi_queue_priority,
>> > > > };
>> > > >
>> > > > +int xe_exec_queue_set_property_ioctl(struct drm_device *dev,
>> > > > void
>> > > > *data,
>> > > > + struct drm_file *file)
>> > > > +{
>> > > > + struct xe_device *xe = to_xe_device(dev);
>> > > > + struct xe_file *xef = to_xe_file(file);
>> > > > + struct drm_xe_exec_queue_set_property *args = data;
>> > > > + struct xe_exec_queue *q;
>> > > > + int ret;
>> > > > + u32 idx;
>> > > > +
>> > > > + if (XE_IOCTL_DBG(xe, args->reserved[0] || args-
>> > > > > reserved[1]))
>> > > > + return -EINVAL;
>> > > > +
>> > > > + if (XE_IOCTL_DBG(xe, args->property !=
>> > > > +
>> > > > DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY))
>> > > > + return -EINVAL;
>> > > > +
>> > > > + q = xe_exec_queue_lookup(xef, args->exec_queue_id);
>> > > > + if (XE_IOCTL_DBG(xe, !q))
>> > > > + return -ENOENT;
>> > > > +
>> > > > + idx = array_index_nospec(args->property,
>> > > > +
>> > > > ARRAY_SIZE(exec_queue_set_property_funcs));
>> > > > + ret = exec_queue_set_property_funcs[idx](xe, q, args-
>> > > > > value);
>> > > > + if (XE_IOCTL_DBG(xe, ret))
>> > > > + goto err_post_lookup;
>> > > > +
>> > > > + xe_exec_queue_put(q);
>> > > > + return 0;
>> > > > +
>> > > > + err_post_lookup:
>> > > > + xe_exec_queue_put(q);
>> > > > + return ret;
>> > > > +}
>> > > > +
>> > > > static int exec_queue_user_ext_check(struct xe_exec_queue *q,
>> > > > u64
>> > > > properties)
>> > > > {
>> > > > u64 secondary_queue_valid_props =
>> > > > BIT_ULL(DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_GROUP) |
>> > > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h
>> > > > b/drivers/gpu/drm/xe/xe_exec_queue.h
>> > > > index e6daa40003f2..ffcc1feb879e 100644
>> > > > --- a/drivers/gpu/drm/xe/xe_exec_queue.h
>> > > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
>> > > > @@ -125,6 +125,8 @@ int xe_exec_queue_destroy_ioctl(struct
>> > > > drm_device
>> > > > *dev, void *data,
>> > > > struct drm_file *file);
>> > > > int xe_exec_queue_get_property_ioctl(struct drm_device *dev,
>> > > > void
>> > > > *data,
>> > > > struct drm_file *file);
>> > > > +int xe_exec_queue_set_property_ioctl(struct drm_device *dev,
>> > > > void
>> > > > *data,
>> > > > + struct drm_file *file);
>> > > > enum xe_exec_queue_priority
>> > > > xe_exec_queue_device_get_max_priority(struct xe_device *xe);
>> > > >
>> > > > void xe_exec_queue_last_fence_put(struct xe_exec_queue *e,
>> > > > struct
>> > > > xe_vm *vm);
>> > > > diff --git a/include/uapi/drm/xe_drm.h
>> > > > b/include/uapi/drm/xe_drm.h
>> > > > index fd79d78de2e9..705081bf0d81 100644
>> > > > --- a/include/uapi/drm/xe_drm.h
>> > > > +++ b/include/uapi/drm/xe_drm.h
>> > > > @@ -106,6 +106,7 @@ extern "C" {
>> > > > #define DRM_XE_OBSERVATION 0x0b
>> > > > #define DRM_XE_MADVISE 0x0c
>> > > > #define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
>> > > > +#define DRM_XE_EXEC_QUEUE_SET_PROPERTY 0x0e
>> > > >
>> > > > /* Must be kept compact -- no holes */
>> > > >
>> > > > @@ -123,6 +124,7 @@ extern "C" {
>> > > >  #define DRM_IOCTL_XE_OBSERVATION		DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
>> > > >  #define DRM_IOCTL_XE_MADVISE			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
>> > > >  #define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS	DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
>> > > > +#define DRM_IOCTL_XE_EXEC_QUEUE_SET_PROPERTY	DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC_QUEUE_SET_PROPERTY, struct drm_xe_exec_queue_set_property)
>> > > >
>> > > > /**
>> > > > * DOC: Xe IOCTL Extensions
>> > > > @@ -2315,6 +2317,30 @@ struct drm_xe_vm_query_mem_range_attr {
>> > > >
>> > > > };
>> > > >
>> > > > +/**
>> > > > + * struct drm_xe_exec_queue_set_property - exec queue set property
>> > > > + *
>> > > > + * Sets execution queue properties dynamically.
>> > > > + * Currently only %DRM_XE_EXEC_QUEUE_SET_PROPERTY_MULTI_QUEUE_PRIORITY
>> > > > + * property can be dynamically set.
>> > > > + */
>> > > > +struct drm_xe_exec_queue_set_property {
>> > > > +	/** @extensions: Pointer to the first extension struct, if any */
>> > > > + __u64 extensions;
>> > > > +
>> > > > + /** @exec_queue_id: Exec queue ID */
>> > > > + __u32 exec_queue_id;
>> > > > +
>> > > > + /** @property: property to set */
>> > > > + __u32 property;
>> > > > +
>> > > > + /** @value: property value */
>> > > > + __u64 value;
>> > > > +
>> > > > + /** @reserved: Reserved */
>> > > > + __u64 reserved[2];
>> > > > +};
>> > > > +
>> > > > #if defined(__cplusplus)
>> > > > }
>> > > > #endif
>> > >
Thread overview: 32+ messages
2025-12-11 1:02 [PATCH v6 00/17] drm/xe: Multi Queue feature support Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 01/17] drm/xe/multi_queue: Add multi_queue_enable_mask to gt information Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 02/17] drm/xe/multi_queue: Add user interface for multi queue support Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 03/17] drm/xe/multi_queue: Add GuC " Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 04/17] drm/xe/multi_queue: Add multi queue priority property Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 05/17] drm/xe/multi_queue: Handle invalid exec queue property setting Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 06/17] drm/xe/multi_queue: Add exec_queue set_property ioctl support Niranjana Vishwanathapura
2026-01-19 16:57 ` Thomas Hellström
2026-01-20 21:06 ` Niranjana Vishwanathapura
2026-01-20 22:20 ` Matthew Brost
2026-01-20 22:30 ` Matthew Brost
2026-01-21 15:29 ` Thomas Hellström
2026-01-21 21:23 ` Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 07/17] drm/xe/multi_queue: Add support for multi queue dynamic priority change Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 08/17] drm/xe/multi_queue: Add multi queue information to guc_info dump Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 09/17] drm/xe/multi_queue: Handle tearing down of a multi queue Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 10/17] drm/xe/multi_queue: Set QUEUE_DRAIN_MODE for Multi Queue batches Niranjana Vishwanathapura
2025-12-11 1:02 ` [PATCH v6 11/17] drm/xe/multi_queue: Handle CGP context error Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 12/17] drm/xe/multi_queue: Reset GT upon CGP_SYNC failure Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 13/17] drm/xe/multi_queue: Teardown group upon job timeout Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 14/17] drm/xe/multi_queue: Tracepoint support Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 15/17] drm/xe/multi_queue: Support active group after primary is destroyed Niranjana Vishwanathapura
2025-12-19 21:06 ` Rodrigo Vivi
2025-12-19 22:35 ` Niranjana Vishwanathapura
2025-12-19 22:53 ` Matt Roper
2025-12-31 1:51 ` Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 16/17] drm/xe/doc: Add documentation for Multi Queue Group Niranjana Vishwanathapura
2025-12-11 1:03 ` [PATCH v6 17/17] drm/xe/doc: Add documentation for Multi Queue Group GuC interface Niranjana Vishwanathapura
2025-12-11 1:11 ` ✗ CI.checkpatch: warning for drm/xe: Multi Queue feature support (rev6) Patchwork
2025-12-11 1:12 ` ✓ CI.KUnit: success " Patchwork
2025-12-11 2:25 ` ✓ Xe.CI.BAT: " Patchwork
2025-12-11 10:04 ` ✗ Xe.CI.Full: failure " Patchwork