* [PATCH 01/20] drm/xe/eudebug: Introduce eudebug interface
From: Mika Kuoppala @ 2025-12-02 13:52 UTC
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala, Maarten Lankhorst, Lucas De Marchi,
Dominik Grzegorzek, Andi Shyti, Matt Roper,
Zbigniew Kempczyński, Jonathan Cavitt
This patch adds the eudebug interface to the Xe driver, enabling
user-space debuggers (e.g., GDB) to track and interact with GPU resources
of a DRM client. Debuggers can inspect or modify these resources,
for example, to locate ISA/ELF sections and install breakpoints in a
shader's instruction stream.
A debugger opens a connection to the Xe driver via a DRM ioctl, specifying
the target DRM client's file descriptor. This returns an anonymous file
descriptor for the connection, which can be used to listen for resource
creation/destruction events. The same file descriptor can also be used to
receive hardware state change events and control execution flow by
interrupting EU threads on the GPU (in follow-up patches).
This patch introduces the eudebug connection and event queuing,
adding client create/destroy and VM create/destroy events as a baseline.
Additional events and hardware control for full debugger operation are
needed and will be introduced in follow-up patches.
The resource tracking components are inspired by Maciej Patelczyk's work on
resource handling for i915. Chris Wilson suggested a two-way mapping
approach, which simplifies using the resource map as definitive
bookkeeping for resources relayed to the debugger during the discovery
phase (in a follow-up patch).
v2: - Kconfig support (Matthew)
- ptraced access control (Lucas)
- pass expected event length to user (Zbigniew)
- only track long running VMs
- checkpatch (Tilak)
- include order (Andrzej)
- 32bit fixes (Andrzej)
- cleaner get_task_struct
- remove xa_array and use clients.list for tracking (Mika)
v3: - adapt to removal of clients.lock (Mika)
- create_event cleanup (Christoph)
v4: - add proper header guards (Christoph)
- better read_event fault handling (Christoph, Mika)
- simplify attach (Mika)
- connect using target file descriptors
- avoid event->seqno after queue as it can UAF (Mika)
- use drmm for eudebug_fini (Maciej)
- squash dynamic enable
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Andi Shyti <andi.shyti@linux.intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Andrzej Hajda <andrzej.hajda@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
drivers/gpu/drm/xe/Kconfig | 10 +
drivers/gpu/drm/xe/Makefile | 3 +
drivers/gpu/drm/xe/xe_device.c | 14 +
drivers/gpu/drm/xe/xe_device_types.h | 31 +
drivers/gpu/drm/xe/xe_eudebug.c | 1041 +++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_eudebug.h | 65 ++
drivers/gpu/drm/xe/xe_eudebug_types.h | 128 +++
drivers/gpu/drm/xe/xe_vm.c | 7 +-
include/uapi/drm/xe_drm.h | 21 +
include/uapi/drm/xe_drm_eudebug.h | 77 ++
10 files changed, 1396 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/xe/xe_eudebug.c
create mode 100644 drivers/gpu/drm/xe/xe_eudebug.h
create mode 100644 drivers/gpu/drm/xe/xe_eudebug_types.h
create mode 100644 include/uapi/drm/xe_drm_eudebug.h
diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
index 4b288eb3f5b0..05ee0dd60f3b 100644
--- a/drivers/gpu/drm/xe/Kconfig
+++ b/drivers/gpu/drm/xe/Kconfig
@@ -128,6 +128,16 @@ config DRM_XE_FORCE_PROBE
Use "!*" to block the probe of the driver for all known devices.
+config DRM_XE_EUDEBUG
+ bool "Enable gdb debugger support (eudebug)"
+ depends on DRM_XE
+ default y
+ help
+ Choose this option if you want to add support for debugger (gdb) to
+ attach into process using Xe and debug the gpu/gpgpu programs.
+ With debugger support, Xe will provide interface for a debugger to
+ process to track, inspect and modify resources.
+
menu "drm/Xe Debugging"
depends on DRM_XE
depends on EXPERT
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index a7e13a676f7d..d81981b6a297 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -146,6 +146,9 @@ xe-$(CONFIG_I2C) += xe_i2c.o
xe-$(CONFIG_DRM_XE_GPUSVM) += xe_svm.o
xe-$(CONFIG_DRM_GPUSVM) += xe_userptr.o
+# debugging shaders with gdb (eudebug) support
+xe-$(CONFIG_DRM_XE_EUDEBUG) += xe_eudebug.o
+
# graphics hardware monitoring (HWMON) support
xe-$(CONFIG_HWMON) += xe_hwmon.o
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 1197f914ef77..1c7f98dd42be 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -31,6 +31,7 @@
#include "xe_dma_buf.h"
#include "xe_drm_client.h"
#include "xe_drv.h"
+#include "xe_eudebug.h"
#include "xe_exec.h"
#include "xe_exec_queue.h"
#include "xe_force_wake.h"
@@ -105,6 +106,11 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
mutex_init(&xef->exec_queue.lock);
xa_init_flags(&xef->exec_queue.xa, XA_FLAGS_ALLOC1);
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ mutex_init(&xef->eudebug.lock);
+ INIT_LIST_HEAD(&xef->eudebug.target_link);
+#endif
+
file->driver_priv = xef;
kref_init(&xef->refcount);
@@ -127,6 +133,9 @@ static void xe_file_destroy(struct kref *ref)
xa_destroy(&xef->vm.xa);
mutex_destroy(&xef->vm.lock);
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ mutex_destroy(&xef->eudebug.lock);
+#endif
xe_drm_client_put(xef->client);
kfree(xef->process_name);
kfree(xef);
@@ -168,6 +177,8 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
guard(xe_pm_runtime)(xe);
+ xe_eudebug_file_close(xef);
+
/*
* No need for exec_queue.lock here as there is no contention for it
* when FD is closing as IOCTLs presumably can't be modifying the
@@ -207,6 +218,7 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS, xe_vm_query_vmas_attrs_ioctl,
DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_EUDEBUG_CONNECT, xe_eudebug_connect_ioctl, DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
@@ -967,6 +979,8 @@ int xe_device_probe(struct xe_device *xe)
if (err)
goto err_unregister_display;
+ xe_eudebug_init(xe);
+
return devm_add_action_or_reset(xe->drm.dev, xe_device_sanitize, xe);
err_unregister_display:
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 9de73353223f..2b4ae7aedd12 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -13,6 +13,7 @@
#include <drm/ttm/ttm_device.h>
#include "xe_devcoredump_types.h"
+#include "xe_eudebug_types.h"
#include "xe_heci_gsc.h"
#include "xe_late_bind_fw_types.h"
#include "xe_lmtt_types.h"
@@ -660,6 +661,23 @@ struct xe_device {
spinlock_t lock;
} uncore;
#endif
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ /** @eudebug: debugger connection list and globals for the device */
+ struct {
+ /** @eudebug.session_count: session counter to track connections */
+ u64 session_count;
+
+ /** @eudebug.state: state of the debugging functionality */
+ enum xe_eudebug_state state;
+
+ /** @eudebug.targets: list of xe_files that are debug targets */
+ struct list_head targets;
+
+ /** @eudebug.lock: protects state and targets */
+ struct mutex lock;
+ } eudebug;
+#endif
};
/**
@@ -721,6 +739,19 @@ struct xe_file {
/** @refcount: ref count of this xe file */
struct kref refcount;
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ struct {
+ /** @eudebug.debugger: the debugger connection into this xe_file */
+ struct xe_eudebug *debugger;
+
+ /** @eudebug.lock: protecting debugger */
+ struct mutex lock;
+
+ /** @eudebug.target_link: link into xe_device.eudebug.targets */
+ struct list_head target_link;
+ } eudebug;
+#endif
};
#endif
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
new file mode 100644
index 000000000000..df7ad93d032c
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -0,0 +1,1041 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#include <linux/anon_inodes.h>
+#include <linux/delay.h>
+#include <linux/poll.h>
+#include <linux/uaccess.h>
+
+#include <drm/drm_managed.h>
+#include <uapi/drm/xe_drm.h>
+
+#include "xe_assert.h"
+#include "xe_device.h"
+#include "xe_eudebug.h"
+#include "xe_eudebug_types.h"
+#include "xe_macros.h"
+#include "xe_vm.h"
+
+/*
+ * If userspace does not read any queued event within this period, assume a
+ * userspace problem and disconnect the debugger to allow forward progress.
+ */
+#define XE_EUDEBUG_NO_READ_DETECTED_TIMEOUT_MS (25 * 1000)
+
+#define cast_event(T, event) container_of((event), typeof(*(T)), base)
+
+static struct drm_xe_eudebug_event *
+event_fifo_pending(struct xe_eudebug *d)
+{
+ struct drm_xe_eudebug_event *event;
+
+ if (kfifo_peek(&d->events.fifo, &event))
+ return event;
+
+ return NULL;
+}
+
+/*
+ * This is racy as we don't take the lock for the read, but all the
+ * callsites can handle the race, so we can live without the lock.
+ */
+__no_kcsan
+static unsigned int
+event_fifo_num_events_peek(const struct xe_eudebug * const d)
+{
+ return kfifo_len(&d->events.fifo);
+}
+
+static bool
+xe_eudebug_detached(struct xe_eudebug *d)
+{
+ bool connected;
+
+ spin_lock(&d->target.lock);
+ connected = !!d->target.xef;
+ spin_unlock(&d->target.lock);
+
+ return !connected;
+}
+
+static unsigned int
+event_fifo_has_events(struct xe_eudebug *d)
+{
+ /* Allow all waiters to proceed to check their state */
+ if (xe_eudebug_detached(d))
+ return 1;
+
+ return event_fifo_num_events_peek(d);
+}
+
+static const struct rhashtable_params rhash_res = {
+ .head_offset = offsetof(struct xe_eudebug_handle, rh_head),
+ .key_len = sizeof_field(struct xe_eudebug_handle, key),
+ .key_offset = offsetof(struct xe_eudebug_handle, key),
+ .automatic_shrinking = true,
+};
+
+static struct xe_eudebug_resource *
+resource_from_type(struct xe_eudebug_resources * const res, const int t)
+{
+ return &res->rt[t];
+}
+
+static struct xe_eudebug_resources *
+xe_eudebug_resources_alloc(void)
+{
+ struct xe_eudebug_resources *res;
+ int err;
+ int i;
+
+ res = kzalloc(sizeof(*res), GFP_ATOMIC);
+ if (!res)
+ return ERR_PTR(-ENOMEM);
+
+ mutex_init(&res->lock);
+
+ for (i = 0; i < XE_EUDEBUG_RES_TYPE_COUNT; i++) {
+ xa_init_flags(&res->rt[i].xa, XA_FLAGS_ALLOC1);
+ err = rhashtable_init(&res->rt[i].rh, &rhash_res);
+
+ if (err)
+ break;
+ }
+
+ if (err) {
+ while (i--) {
+ xa_destroy(&res->rt[i].xa);
+ rhashtable_destroy(&res->rt[i].rh);
+ }
+
+ kfree(res);
+ return ERR_PTR(err);
+ }
+
+ return res;
+}
+
+static void res_free_fn(void *ptr, void *arg)
+{
+ XE_WARN_ON(ptr);
+ kfree(ptr);
+}
+
+static void
+xe_eudebug_destroy_resources(struct xe_eudebug *d)
+{
+ struct xe_eudebug_resources *res = d->res;
+ struct xe_eudebug_handle *h;
+ unsigned long j;
+ int i;
+ int err;
+
+ mutex_lock(&res->lock);
+ for (i = 0; i < XE_EUDEBUG_RES_TYPE_COUNT; i++) {
+ struct xe_eudebug_resource *r = &res->rt[i];
+
+ xa_for_each(&r->xa, j, h) {
+ struct xe_eudebug_handle *t;
+
+ err = rhashtable_remove_fast(&r->rh,
+ &h->rh_head,
+ rhash_res);
+ xe_eudebug_assert(d, !err);
+ t = xa_erase(&r->xa, h->id);
+ xe_eudebug_assert(d, t == h);
+ kfree(t);
+ }
+ }
+ mutex_unlock(&res->lock);
+
+ for (i = 0; i < XE_EUDEBUG_RES_TYPE_COUNT; i++) {
+ struct xe_eudebug_resource *r = &res->rt[i];
+
+ rhashtable_free_and_destroy(&r->rh, res_free_fn, NULL);
+ xe_eudebug_assert(d, xa_empty(&r->xa));
+ xa_destroy(&r->xa);
+ }
+
+ mutex_destroy(&res->lock);
+
+ kfree(res);
+}
+
+static void xe_eudebug_free(struct kref *ref)
+{
+ struct xe_eudebug *d = container_of(ref, typeof(*d), ref);
+ struct drm_xe_eudebug_event *event;
+
+ xe_assert(d->xe, xe_eudebug_detached(d));
+
+ while (kfifo_get(&d->events.fifo, &event))
+ kfree(event);
+
+ xe_eudebug_destroy_resources(d);
+ XE_WARN_ON(d->target.xef);
+
+ xe_eudebug_assert(d, !kfifo_len(&d->events.fifo));
+
+ kfree(d);
+}
+
+static void xe_eudebug_put(struct xe_eudebug *d)
+{
+ kref_put(&d->ref, xe_eudebug_free);
+}
+
+static void remove_debugger(struct xe_file *xef)
+{
+ struct xe_eudebug *d;
+
+ if (XE_WARN_ON(!xef))
+ return;
+
+ mutex_lock(&xef->eudebug.lock);
+ d = xef->eudebug.debugger;
+ if (d)
+ xef->eudebug.debugger = NULL;
+ mutex_unlock(&xef->eudebug.lock);
+
+ if (d) {
+ struct xe_device *xe = d->xe;
+
+ mutex_lock(&xe->eudebug.lock);
+ list_del_init(&xef->eudebug.target_link);
+ mutex_unlock(&xe->eudebug.lock);
+
+ eu_dbg(d, "debugger removed");
+
+ xe_eudebug_put(d);
+ }
+}
+
+static bool xe_eudebug_detach(struct xe_device *xe,
+ struct xe_eudebug *d,
+ const int err)
+{
+ struct xe_file *target = NULL;
+
+ XE_WARN_ON(err > 0);
+
+ spin_lock(&d->target.lock);
+ if (d->target.xef) {
+ target = d->target.xef;
+ d->target.xef = NULL;
+ d->target.err = err;
+ }
+ spin_unlock(&d->target.lock);
+
+ if (!target)
+ return false;
+
+ eu_dbg(d, "session %lld detached with %d", d->session, err);
+
+ remove_debugger(target);
+ xe_file_put(target);
+
+ return true;
+}
+
+static int _xe_eudebug_disconnect(struct xe_eudebug *d,
+ const int err)
+{
+ wake_up_all(&d->events.write_done);
+ wake_up_all(&d->events.read_done);
+
+ return xe_eudebug_detach(d->xe, d, err);
+}
+
+#define xe_eudebug_disconnect(_d, _err) ({ \
+ if (_xe_eudebug_disconnect((_d), (_err))) { \
+ if ((_err) == 0 || (_err) == -ETIMEDOUT) \
+ eu_dbg((_d), "Session closed (%d)", (_err)); \
+ else \
+ eu_err((_d), "Session disconnected, err = %d (%s:%d)", \
+ (_err), __func__, __LINE__); \
+ } \
+})
+
+static struct xe_eudebug *
+xe_eudebug_get(struct xe_file *xef)
+{
+ struct xe_eudebug *d;
+
+ mutex_lock(&xef->eudebug.lock);
+ d = xef->eudebug.debugger;
+ if (d && !kref_get_unless_zero(&d->ref))
+ d = NULL;
+ mutex_unlock(&xef->eudebug.lock);
+
+ if (!d)
+ return NULL;
+
+ if (xe_eudebug_detached(d)) {
+ xe_eudebug_put(d);
+ return NULL;
+ }
+
+ return d;
+}
+
+static int xe_eudebug_queue_event(struct xe_eudebug *d,
+ struct drm_xe_eudebug_event *event)
+{
+ const u64 wait_jiffies = msecs_to_jiffies(1000);
+ u64 last_read_detected_ts, last_head_seqno, start_ts;
+ const u64 event_seqno = event->seqno;
+
+ xe_eudebug_assert(d, event->len > sizeof(struct drm_xe_eudebug_event));
+ xe_eudebug_assert(d, event->type);
+ xe_eudebug_assert(d, event->type != DRM_XE_EUDEBUG_EVENT_READ);
+
+ start_ts = ktime_get();
+ last_read_detected_ts = start_ts;
+ last_head_seqno = 0;
+
+ do {
+ struct drm_xe_eudebug_event *head;
+ u64 head_seqno;
+ bool was_queued;
+
+ if (xe_eudebug_detached(d))
+ break;
+
+ spin_lock(&d->events.lock);
+ head = event_fifo_pending(d);
+ if (head)
+ head_seqno = head->seqno;
+ else
+ head_seqno = 0;
+
+ was_queued = kfifo_in(&d->events.fifo, &event, 1);
+ spin_unlock(&d->events.lock);
+
+ wake_up_all(&d->events.write_done);
+
+ if (was_queued) {
+ eu_dbg(d, "queued event with seqno %lld (head %lld)\n",
+ event_seqno, head_seqno);
+ event = NULL;
+ break;
+ }
+
+ XE_WARN_ON(!head_seqno);
+
+ /* If we detect progress, restart timeout */
+ if (last_head_seqno != head_seqno)
+ last_read_detected_ts = ktime_get();
+
+ last_head_seqno = head_seqno;
+
+ wait_event_interruptible_timeout(d->events.read_done,
+ !kfifo_is_full(&d->events.fifo),
+ wait_jiffies);
+
+ } while (ktime_ms_delta(ktime_get(), last_read_detected_ts) <
+ XE_EUDEBUG_NO_READ_DETECTED_TIMEOUT_MS);
+
+ if (event) {
+ eu_dbg(d,
+ "event %llu queue failed (blocked %lld ms, avail %d)",
+ event->seqno,
+ ktime_ms_delta(ktime_get(), start_ts),
+ kfifo_avail(&d->events.fifo));
+
+ kfree(event);
+
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static struct xe_eudebug_handle *
+alloc_handle(const int type, const u64 key)
+{
+ struct xe_eudebug_handle *h;
+
+ h = kzalloc(sizeof(*h), GFP_ATOMIC);
+ if (!h)
+ return NULL;
+
+ h->key = key;
+
+ return h;
+}
+
+static struct xe_eudebug_handle *
+__find_handle(struct xe_eudebug_resource *r,
+ const u64 key)
+{
+ struct xe_eudebug_handle *h;
+
+ h = rhashtable_lookup_fast(&r->rh,
+ &key,
+ rhash_res);
+ return h;
+}
+
+static int _xe_eudebug_add_handle(struct xe_eudebug *d,
+ int type,
+ void *p,
+ u64 *seqno,
+ int *handle)
+{
+ const u64 key = (uintptr_t)p;
+ struct xe_eudebug_resource *r;
+ struct xe_eudebug_handle *h, *o;
+ int err;
+
+ if (XE_WARN_ON(!p))
+ return -EINVAL;
+
+ if (xe_eudebug_detached(d))
+ return -ENOTCONN;
+
+ h = alloc_handle(type, key);
+ if (!h)
+ return -ENOMEM;
+
+ r = resource_from_type(d->res, type);
+
+ mutex_lock(&d->res->lock);
+ o = __find_handle(r, key);
+ if (!o) {
+ err = xa_alloc(&r->xa, &h->id, h, xa_limit_31b, GFP_KERNEL);
+
+ if (h->id >= INT_MAX) {
+ xa_erase(&r->xa, h->id);
+ err = -ENOSPC;
+ }
+
+ if (!err)
+ err = rhashtable_insert_fast(&r->rh,
+ &h->rh_head,
+ rhash_res);
+
+ if (err) {
+ xa_erase(&r->xa, h->id);
+ } else {
+ if (seqno)
+ *seqno = atomic_long_inc_return(&d->events.seqno);
+ }
+ } else {
+ xe_eudebug_assert(d, o->id);
+ err = -EEXIST;
+ }
+ mutex_unlock(&d->res->lock);
+
+ if (handle)
+ *handle = o ? o->id : h->id;
+
+ if (err) {
+ kfree(h);
+ XE_WARN_ON(err > 0);
+ return err;
+ }
+
+ xe_eudebug_assert(d, h->id);
+
+ return h->id;
+}
+
+static int xe_eudebug_add_handle(struct xe_eudebug *d,
+ int type,
+ void *p,
+ u64 *seqno)
+{
+ int ret;
+
+ ret = _xe_eudebug_add_handle(d, type, p, seqno, NULL);
+
+ eu_dbg(d, "handle type %d handle %p added: %d\n", type, p, ret);
+
+ return ret;
+}
+
+static int _xe_eudebug_remove_handle(struct xe_eudebug *d, int type, void *p,
+ u64 *seqno)
+{
+ const u64 key = (uintptr_t)p;
+ struct xe_eudebug_resource *r;
+ struct xe_eudebug_handle *h, *xa_h;
+ int ret;
+
+ if (XE_WARN_ON(!key))
+ return -EINVAL;
+
+ if (xe_eudebug_detached(d))
+ return -ENOTCONN;
+
+ r = resource_from_type(d->res, type);
+
+ mutex_lock(&d->res->lock);
+ h = __find_handle(r, key);
+ if (h) {
+ ret = rhashtable_remove_fast(&r->rh,
+ &h->rh_head,
+ rhash_res);
+ xe_eudebug_assert(d, !ret);
+ xa_h = xa_erase(&r->xa, h->id);
+ xe_eudebug_assert(d, xa_h == h);
+ if (!ret) {
+ ret = h->id;
+ if (seqno)
+ *seqno = atomic_long_inc_return(&d->events.seqno);
+ }
+ } else {
+ ret = -ENOENT;
+ }
+ mutex_unlock(&d->res->lock);
+
+ kfree(h);
+
+ xe_eudebug_assert(d, ret);
+
+ return ret;
+}
+
+static int xe_eudebug_remove_handle(struct xe_eudebug *d, int type, void *p,
+ u64 *seqno)
+{
+ int ret;
+
+ ret = _xe_eudebug_remove_handle(d, type, p, seqno);
+
+ eu_dbg(d, "handle type %d handle %p removed: %d\n", type, p, ret);
+
+ return ret;
+}
+
+static struct drm_xe_eudebug_event *
+xe_eudebug_create_event(struct xe_eudebug *d, u16 type, u64 seqno, u16 flags,
+ u32 len)
+{
+ const u16 known_flags =
+ DRM_XE_EUDEBUG_EVENT_CREATE |
+ DRM_XE_EUDEBUG_EVENT_DESTROY |
+ DRM_XE_EUDEBUG_EVENT_STATE_CHANGE |
+ DRM_XE_EUDEBUG_EVENT_NEED_ACK;
+ struct drm_xe_eudebug_event *event;
+
+ BUILD_BUG_ON(type > XE_EUDEBUG_MAX_EVENT_TYPE);
+
+ xe_eudebug_assert(d, type <= XE_EUDEBUG_MAX_EVENT_TYPE);
+ xe_eudebug_assert(d, !(~known_flags & flags));
+ xe_eudebug_assert(d, len > sizeof(*event));
+
+ event = kzalloc(len, GFP_KERNEL);
+ if (!event)
+ return NULL;
+
+ event->len = len;
+ event->type = type;
+ event->flags = flags;
+ event->seqno = seqno;
+
+ return event;
+}
+
+static int send_vm_event(struct xe_eudebug *d, u32 flags,
+ const u64 vm_handle,
+ const u64 seqno)
+{
+ struct drm_xe_eudebug_event *event;
+ struct drm_xe_eudebug_event_vm *e;
+
+ event = xe_eudebug_create_event(d, DRM_XE_EUDEBUG_EVENT_VM,
+ seqno, flags, sizeof(*e));
+ if (!event)
+ return -ENOMEM;
+
+ e = cast_event(e, event);
+
+ e->vm_handle = vm_handle;
+
+ return xe_eudebug_queue_event(d, event);
+}
+
+static int vm_create_event(struct xe_eudebug *d,
+ struct xe_file *xef, struct xe_vm *vm)
+{
+ int vm_id;
+ u64 seqno;
+ int ret;
+
+ if (!xe_vm_in_lr_mode(vm))
+ return 0;
+
+ vm_id = xe_eudebug_add_handle(d, XE_EUDEBUG_RES_TYPE_VM, vm, &seqno);
+ if (vm_id < 0)
+ return vm_id;
+
+ ret = send_vm_event(d, DRM_XE_EUDEBUG_EVENT_CREATE, vm_id, seqno);
+ if (ret)
+ eu_dbg(d, "send_vm_event create error %d\n", ret);
+
+ return ret;
+}
+
+static int vm_destroy_event(struct xe_eudebug *d,
+ struct xe_file *xef, struct xe_vm *vm)
+{
+ int vm_id;
+ u64 seqno;
+ int ret;
+
+ if (!xe_vm_in_lr_mode(vm))
+ return 0;
+
+ vm_id = xe_eudebug_remove_handle(d, XE_EUDEBUG_RES_TYPE_VM, vm, &seqno);
+ if (vm_id < 0)
+ return vm_id;
+
+ ret = send_vm_event(d, DRM_XE_EUDEBUG_EVENT_DESTROY, vm_id, seqno);
+ if (ret)
+ eu_dbg(d, "send_vm_event destroy error %d\n", ret);
+
+ return ret;
+}
+
+#define xe_eudebug_event_put(_d, _err) ({ \
+ if ((_err)) \
+ xe_eudebug_disconnect((_d), (_err)); \
+ xe_eudebug_put((_d)); \
+ })
+
+void xe_eudebug_vm_create(struct xe_file *xef, struct xe_vm *vm)
+{
+ struct xe_eudebug *d;
+
+ if (!xe_vm_in_lr_mode(vm))
+ return;
+
+ d = xe_eudebug_get(xef);
+ if (!d)
+ return;
+
+ xe_eudebug_event_put(d, vm_create_event(d, xef, vm));
+}
+
+void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm)
+{
+ struct xe_eudebug *d;
+
+ if (!xe_vm_in_lr_mode(vm))
+ return;
+
+ d = xe_eudebug_get(xef);
+ if (!d)
+ return;
+
+ xe_eudebug_event_put(d, vm_destroy_event(d, xef, vm));
+}
+
+static int add_debugger(struct xe_device *xe, struct xe_eudebug *d,
+ struct drm_file *target)
+{
+ struct xe_file *xef = target->driver_priv;
+ int ret = -EBUSY;
+
+ mutex_lock(&xef->eudebug.lock);
+ if (!xef->eudebug.debugger) {
+ d->target.xef = xe_file_get(xef);
+ d->target.pid = xef->pid;
+ kref_get(&d->ref);
+ xef->eudebug.debugger = d;
+ ret = 0;
+ }
+ mutex_unlock(&xef->eudebug.lock);
+
+ if (ret)
+ return ret;
+
+ mutex_lock(&xe->eudebug.lock);
+ XE_WARN_ON(!list_empty(&xef->eudebug.target_link));
+ list_add_tail(&xef->eudebug.target_link, &xef->xe->eudebug.targets);
+ mutex_unlock(&xe->eudebug.lock);
+
+ return 0;
+}
+
+static int
+xe_eudebug_attach(struct xe_device *xe, struct drm_file *parent_file,
+ struct xe_eudebug *d, u64 target_fd)
+{
+ struct file *file __free(fput) = NULL;
+ struct drm_file *drm_file;
+ int ret;
+
+ file = fget(target_fd);
+ if (XE_IOCTL_DBG(xe, !file))
+ return -EBADFD;
+
+ drm_file = file->private_data;
+ if (XE_IOCTL_DBG(xe, !drm_file))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, parent_file->filp->f_op != file->f_op))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, !drm_file->authenticated))
+ return -EACCES;
+
+ ret = add_debugger(xe, d, drm_file);
+ if (XE_IOCTL_DBG(xe, ret))
+ return ret;
+
+ d->xe = xe;
+ d->session = ++xe->eudebug.session_count ?: 1;
+
+ eu_dbg(d, "session %lld attached to %s", d->session,
+ parent_file == drm_file ? "self" : "remote");
+
+ return 0;
+}
+
+static int xe_eudebug_release(struct inode *inode, struct file *file)
+{
+ struct xe_eudebug *d = file->private_data;
+
+ xe_eudebug_disconnect(d, 0);
+ xe_eudebug_put(d);
+
+ return 0;
+}
+
+static __poll_t xe_eudebug_poll(struct file *file, poll_table *wait)
+{
+ struct xe_eudebug * const d = file->private_data;
+ __poll_t ret = 0;
+
+ poll_wait(file, &d->events.write_done, wait);
+
+ if (xe_eudebug_detached(d)) {
+ ret |= EPOLLHUP;
+ if (d->target.err)
+ ret |= EPOLLERR;
+ }
+
+ if (event_fifo_num_events_peek(d))
+ ret |= EPOLLIN;
+
+ return ret;
+}
+
+static ssize_t xe_eudebug_read(struct file *file,
+ char __user *buf,
+ size_t count,
+ loff_t *ppos)
+{
+ return -EINVAL;
+}
+
+static long xe_eudebug_read_event(struct xe_eudebug *d,
+ const u64 arg,
+ const bool wait)
+{
+ struct xe_device *xe = d->xe;
+ struct drm_xe_eudebug_event __user * const user_orig =
+ u64_to_user_ptr(arg);
+ struct drm_xe_eudebug_event user_event;
+ struct drm_xe_eudebug_event *pending, *event_out;
+ long ret = 0;
+
+ if (XE_IOCTL_DBG(xe, copy_from_user(&user_event, user_orig, sizeof(user_event))))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, !user_event.type))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.type > XE_EUDEBUG_MAX_EVENT_TYPE))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.type != DRM_XE_EUDEBUG_EVENT_READ))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.len < sizeof(*user_orig)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.flags))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.reserved))
+ return -EINVAL;
+
+ /* XXX: define wait time in connect arguments ? */
+ if (wait) {
+ ret = wait_event_interruptible_timeout(d->events.write_done,
+ event_fifo_has_events(d),
+ msecs_to_jiffies(5 * 1000));
+
+ if (XE_IOCTL_DBG(xe, ret < 0))
+ return ret;
+ }
+
+ if (XE_IOCTL_DBG(xe, xe_eudebug_detached(d)))
+ return -ENOTCONN;
+
+ event_out = NULL;
+ spin_lock(&d->events.lock);
+ pending = event_fifo_pending(d);
+ if (!pending)
+ ret = wait ? -ETIMEDOUT : -EAGAIN;
+ else if (user_event.len < pending->len)
+ ret = -EMSGSIZE;
+ else if (access_ok(user_orig, pending->len))
+ ret = kfifo_out(&d->events.fifo, &event_out, 1) == 1 ? 0 : -EIO;
+ else
+ ret = -EFAULT;
+
+ wake_up_all(&d->events.read_done);
+ spin_unlock(&d->events.lock);
+
+ if (!pending)
+ return ret;
+
+ if (ret == -EMSGSIZE) {
+ if (XE_IOCTL_DBG(xe, put_user(pending->len, &user_orig->len)))
+ return -EFAULT;
+
+ return -EMSGSIZE;
+ }
+
+ if (XE_IOCTL_DBG(xe, ret)) {
+ xe_eudebug_disconnect(d, (int)ret);
+ return ret;
+ }
+
+ XE_WARN_ON(pending != event_out);
+
+ if (__copy_to_user(user_orig, event_out, event_out->len)) {
+ ret = -EFAULT;
+ /* We can't rollback anymore, disconnect */
+ xe_eudebug_disconnect(d, -EFAULT);
+ }
+
+ eu_dbg(d, "event read=%ld: type=%u, flags=0x%x, seqno=%llu", ret,
+ event_out->type, event_out->flags, event_out->seqno);
+
+ kfree(event_out);
+
+ return ret;
+}
+
+static long xe_eudebug_ioctl(struct file *file,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ struct xe_eudebug * const d = file->private_data;
+ long ret;
+
+ switch (cmd) {
+ case DRM_XE_EUDEBUG_IOCTL_READ_EVENT:
+ ret = xe_eudebug_read_event(d, arg,
+ !(file->f_flags & O_NONBLOCK));
+ break;
+
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static const struct file_operations fops = {
+ .owner = THIS_MODULE,
+ .release = xe_eudebug_release,
+ .poll = xe_eudebug_poll,
+ .read = xe_eudebug_read,
+ .unlocked_ioctl = xe_eudebug_ioctl,
+};
+
+static int
+xe_eudebug_connect(struct xe_device *xe,
+ struct drm_file *file,
+ struct drm_xe_eudebug_connect *param)
+{
+ const u64 known_open_flags = 0;
+ unsigned long f_flags = 0;
+ struct xe_eudebug *d;
+ int fd, err;
+
+ if (XE_IOCTL_DBG(xe, param->extensions))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, !param->fd))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, param->flags & ~known_open_flags))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, param->version &&
+ param->version != DRM_XE_EUDEBUG_VERSION))
+ return -EINVAL;
+
+ param->version = DRM_XE_EUDEBUG_VERSION;
+
+ mutex_lock(&xe->eudebug.lock);
+ err = xe_eudebug_is_enabled(xe) ? 0 : -EOPNOTSUPP;
+ mutex_unlock(&xe->eudebug.lock);
+
+ if (XE_IOCTL_DBG(xe, err))
+ return err;
+
+ d = kzalloc(sizeof(*d), GFP_KERNEL);
+ if (XE_IOCTL_DBG(xe, !d))
+ return -ENOMEM;
+
+ kref_init(&d->ref);
+ spin_lock_init(&d->target.lock);
+ init_waitqueue_head(&d->events.write_done);
+ init_waitqueue_head(&d->events.read_done);
+
+ spin_lock_init(&d->events.lock);
+ INIT_KFIFO(d->events.fifo);
+
+ d->res = xe_eudebug_resources_alloc();
+ if (XE_IOCTL_DBG(xe, IS_ERR(d->res))) {
+ err = PTR_ERR(d->res);
+ goto err_free;
+ }
+
+ err = xe_eudebug_attach(xe, file, d, param->fd);
+ if (XE_IOCTL_DBG(xe, err))
+ goto err_free_res;
+
+ fd = anon_inode_getfd("[xe_eudebug]", &fops, d, f_flags);
+ if (XE_IOCTL_DBG(xe, fd < 0)) {
+ err = fd;
+ goto err_detach;
+ }
+
+ eu_dbg(d, "connected session %lld", d->session);
+
+ return fd;
+
+err_detach:
+ xe_eudebug_detach(xe, d, err);
+err_free_res:
+ xe_eudebug_destroy_resources(d);
+err_free:
+ kfree(d);
+
+ return err;
+}
+
+void xe_eudebug_file_close(struct xe_file *xef)
+{
+ remove_debugger(xef);
+}
+
+bool xe_eudebug_is_enabled(struct xe_device *xe)
+{
+ return READ_ONCE(xe->eudebug.state) == XE_EUDEBUG_ENABLED;
+}
+
+int xe_eudebug_enable(struct xe_device *xe, bool enable)
+{
+ mutex_lock(&xe->eudebug.lock);
+
+ if (xe->eudebug.state == XE_EUDEBUG_NOT_SUPPORTED) {
+ mutex_unlock(&xe->eudebug.lock);
+ return -EPERM;
+ }
+
+ if (!enable && !list_empty(&xe->eudebug.targets)) {
+ mutex_unlock(&xe->eudebug.lock);
+ return -EBUSY;
+ }
+
+ if (enable == xe_eudebug_is_enabled(xe)) {
+ mutex_unlock(&xe->eudebug.lock);
+ return 0;
+ }
+
+ xe->eudebug.state = enable ?
+ XE_EUDEBUG_ENABLED : XE_EUDEBUG_DISABLED;
+ mutex_unlock(&xe->eudebug.lock);
+
+ return 0;
+}
+
+static ssize_t enable_eudebug_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct xe_device *xe = pdev_to_xe_device(to_pci_dev(dev));
+
+ return sysfs_emit(buf, "%u\n", xe_eudebug_is_enabled(xe));
+}
+
+static ssize_t enable_eudebug_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct xe_device *xe = pdev_to_xe_device(to_pci_dev(dev));
+ bool enable;
+ int ret;
+
+ ret = kstrtobool(buf, &enable);
+ if (ret)
+ return ret;
+
+ ret = xe_eudebug_enable(xe, enable);
+ if (ret)
+ return ret;
+
+ return count;
+}
+
+static DEVICE_ATTR_RW(enable_eudebug);
+
+static void xe_eudebug_sysfs_fini(void *arg)
+{
+ struct xe_device *xe = arg;
+ struct drm_device *dev = &xe->drm;
+
+ sysfs_remove_file(&dev->dev->kobj,
+ &dev_attr_enable_eudebug.attr);
+}
+
+void xe_eudebug_init(struct xe_device *xe)
+{
+ struct drm_device *dev = &xe->drm;
+ int err;
+
+ INIT_LIST_HEAD(&xe->eudebug.targets);
+
+ xe->eudebug.state = XE_EUDEBUG_NOT_SUPPORTED;
+
+ err = drmm_mutex_init(dev, &xe->eudebug.lock);
+ if (err)
+ goto out_err;
+
+ err = sysfs_create_file(&dev->dev->kobj,
+ &dev_attr_enable_eudebug.attr);
+ if (err)
+ goto out_err;
+
+ err = devm_add_action_or_reset(dev->dev, xe_eudebug_sysfs_fini, xe);
+ if (err)
+ goto out_err;
+
+ xe->eudebug.state = XE_EUDEBUG_DISABLED;
+
+ return;
+
+out_err:
+ drm_warn(&xe->drm, "eudebug disabled, init fail: %d\n", err);
+}
+
+int xe_eudebug_connect_ioctl(struct drm_device *dev,
+ void *data,
+ struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct drm_xe_eudebug_connect * const param = data;
+
+ return xe_eudebug_connect(xe, file, param);
+}
diff --git a/drivers/gpu/drm/xe/xe_eudebug.h b/drivers/gpu/drm/xe/xe_eudebug.h
new file mode 100644
index 000000000000..22fbb2ff24da
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#ifndef _XE_EUDEBUG_H_
+#define _XE_EUDEBUG_H_
+
+#include <linux/types.h>
+
+struct drm_device;
+struct drm_file;
+struct xe_device;
+struct xe_file;
+struct xe_vm;
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+
+#define XE_EUDEBUG_DBG_STR "eudbg: %lld:%lu:%s (%d/%d) -> (%d): "
+#define XE_EUDEBUG_DBG_ARGS(d) (d)->session, \
+ atomic_long_read(&(d)->events.seqno), \
+ !READ_ONCE((d)->target.xef) ? "disconnected" : "", \
+ current->pid, \
+ task_tgid_nr(current), \
+ READ_ONCE((d)->target.xef) ? (d)->target.xef->pid : -1
+
+#define eu_err(d, fmt, ...) drm_err(&(d)->xe->drm, XE_EUDEBUG_DBG_STR fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
+#define eu_warn(d, fmt, ...) drm_warn(&(d)->xe->drm, XE_EUDEBUG_DBG_STR fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
+#define eu_dbg(d, fmt, ...) drm_dbg(&(d)->xe->drm, XE_EUDEBUG_DBG_STR fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
+
+#define xe_eudebug_assert(d, ...) xe_assert((d)->xe, ##__VA_ARGS__)
+
+int xe_eudebug_connect_ioctl(struct drm_device *dev,
+ void *data,
+ struct drm_file *file);
+
+void xe_eudebug_init(struct xe_device *xe);
+bool xe_eudebug_is_enabled(struct xe_device *xe);
+
+void xe_eudebug_file_close(struct xe_file *xef);
+
+void xe_eudebug_vm_create(struct xe_file *xef, struct xe_vm *vm);
+void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm);
+int xe_eudebug_enable(struct xe_device *xe, bool enable);
+
+#else
+
+static inline int xe_eudebug_connect_ioctl(struct drm_device *dev,
+ void *data,
+ struct drm_file *file) { return 0; }
+
+static inline void xe_eudebug_init(struct xe_device *xe) { }
+static inline bool xe_eudebug_is_enabled(struct xe_device *xe) { return false; }
+
+static inline void xe_eudebug_file_close(struct xe_file *xef) { }
+
+static inline void xe_eudebug_vm_create(struct xe_file *xef, struct xe_vm *vm) { }
+static inline void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm) { }
+
+#endif /* CONFIG_DRM_XE_EUDEBUG */
+
+#endif /* _XE_EUDEBUG_H_ */
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
new file mode 100644
index 000000000000..1e673c934169
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#ifndef _XE_EUDEBUG_TYPES_H_
+#define _XE_EUDEBUG_TYPES_H_
+
+#include <linux/completion.h>
+#include <linux/kfifo.h>
+#include <linux/kref.h>
+#include <linux/mutex.h>
+#include <linux/rbtree.h>
+#include <linux/rhashtable.h>
+#include <linux/wait.h>
+#include <linux/xarray.h>
+
+struct xe_device;
+struct task_struct;
+
+/**
+ * enum xe_eudebug_state - eudebug capability state
+ *
+ * @XE_EUDEBUG_NOT_SUPPORTED: eudebug feature support off
+ * @XE_EUDEBUG_DISABLED: eudebug feature supported but disabled
+ * @XE_EUDEBUG_ENABLED: eudebug enabled
+ */
+enum xe_eudebug_state {
+ XE_EUDEBUG_NOT_SUPPORTED = 0,
+ XE_EUDEBUG_DISABLED,
+ XE_EUDEBUG_ENABLED,
+};
+
+#define CONFIG_DRM_XE_DEBUGGER_EVENT_QUEUE_SIZE 64
+#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_VM
+
+/**
+ * struct xe_eudebug_handle - eudebug resource handle
+ */
+struct xe_eudebug_handle {
+ /** @key: key value in rhashtable <key:id> */
+ u64 key;
+
+ /** @id: opaque handle id for xarray <id:key> */
+ int id;
+
+ /** @rh_head: rhashtable head */
+ struct rhash_head rh_head;
+};
+
+/**
+ * struct xe_eudebug_resource - Resource map for one resource type
+ */
+struct xe_eudebug_resource {
+ /** @xa: xarrays for <id->key> */
+ struct xarray xa;
+
+ /** @rh: rhashtable for <key->id> */
+ struct rhashtable rh;
+};
+
+#define XE_EUDEBUG_RES_TYPE_VM 0
+#define XE_EUDEBUG_RES_TYPE_COUNT (XE_EUDEBUG_RES_TYPE_VM + 1)
+
+/**
+ * struct xe_eudebug_resources - eudebug resources for all types
+ */
+struct xe_eudebug_resources {
+ /** @lock: guards access into rt */
+ struct mutex lock;
+
+ /** @rt: resource maps for all types */
+ struct xe_eudebug_resource rt[XE_EUDEBUG_RES_TYPE_COUNT];
+};
+
+/**
+ * struct xe_eudebug - Top level struct for eudebug: the connection
+ */
+struct xe_eudebug {
+ /** @ref: kref counter for this struct */
+ struct kref ref;
+
+ struct {
+ /** @xef: the target xe_file that we are debugging */
+ struct xe_file *xef;
+
+ /** @pid: pid of target */
+ pid_t pid;
+
+ /** @err: error code on disconnect */
+ int err;
+
+ /** @lock: guards access to xef and err */
+ spinlock_t lock;
+ } target;
+
+ /** @xe: the parent device we are serving */
+ struct xe_device *xe;
+
+ /** @res: the resource maps we track for the target */
+ struct xe_eudebug_resources *res;
+
+ /** @session: session number for this connection (for logs) */
+ u64 session;
+
+ /** @events: kfifo queue of to-be-delivered events */
+ struct {
+ /** @lock: guards access to fifo */
+ spinlock_t lock;
+
+ /** @fifo: queue of events pending */
+ DECLARE_KFIFO(fifo,
+ struct drm_xe_eudebug_event *,
+ CONFIG_DRM_XE_DEBUGGER_EVENT_QUEUE_SIZE);
+
+ /** @write_done: waitqueue for signalling write to fifo */
+ wait_queue_head_t write_done;
+
+ /** @read_done: waitqueue for signalling read from fifo */
+ wait_queue_head_t read_done;
+
+ /** @event_seqno: seqno counter to stamp events for fifo */
+ atomic_long_t seqno;
+ } events;
+
+};
+
+#endif /* _XE_EUDEBUG_TYPES_H_ */
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 00ffd3f03983..903f478ff1cc 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -26,6 +26,7 @@
#include "xe_bo.h"
#include "xe_device.h"
#include "xe_drm_client.h"
+#include "xe_eudebug.h"
#include "xe_exec_queue.h"
#include "xe_migrate.h"
#include "xe_pat.h"
@@ -1957,6 +1958,8 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
args->vm_id = id;
+ xe_eudebug_vm_create(xef, vm);
+
return 0;
err_close_and_put:
@@ -1988,8 +1991,10 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
xa_erase(&xef->vm.xa, args->vm_id);
mutex_unlock(&xef->vm.lock);
- if (!err)
+ if (!err) {
+ xe_eudebug_vm_destroy(xef, vm);
xe_vm_close_and_put(vm);
+ }
return err;
}
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 37881b1eb6ba..0ce485ce2948 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -106,6 +106,7 @@ extern "C" {
#define DRM_XE_OBSERVATION 0x0b
#define DRM_XE_MADVISE 0x0c
#define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
+#define DRM_XE_EUDEBUG_CONNECT 0x0e
/* Must be kept compact -- no holes */
@@ -123,6 +124,7 @@ extern "C" {
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
+#define DRM_IOCTL_XE_EUDEBUG_CONNECT DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EUDEBUG_CONNECT, struct drm_xe_eudebug_connect)
/**
* DOC: Xe IOCTL Extensions
@@ -2278,6 +2280,25 @@ struct drm_xe_vm_query_mem_range_attr {
};
+/*
+ * Debugger ABI (ioctl and events) Version History:
+ * 0 - No debugger available
+ * 1 - Initial version
+ */
+#define DRM_XE_EUDEBUG_VERSION 1
+
+struct drm_xe_eudebug_connect {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ __u64 fd; /* Target drm client fd */
+ __u32 flags; /* MBZ */
+
+ __u32 version; /* output: current ABI (ioctl / events) version */
+};
+
+#include "xe_drm_eudebug.h"
+
#if defined(__cplusplus)
}
#endif
diff --git a/include/uapi/drm/xe_drm_eudebug.h b/include/uapi/drm/xe_drm_eudebug.h
new file mode 100644
index 000000000000..fd2a0c911d02
--- /dev/null
+++ b/include/uapi/drm/xe_drm_eudebug.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#ifndef _UAPI_XE_DRM_EUDEBUG_H_
+#define _UAPI_XE_DRM_EUDEBUG_H_
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+/**
+ * Do a eudebug event read for a debugger connection.
+ *
+ * This ioctl is available in debug version 1.
+ */
+#define DRM_XE_EUDEBUG_IOCTL_READ_EVENT _IO('j', 0x0)
+
+/**
+ * struct drm_xe_eudebug_event - Base type of event delivered by xe_eudebug.
+ * @len: Total length of the event, including this base struct
+ * @type: Event type
+ * @flags: Flags for the event
+ * @seqno: Sequence number
+ * @reserved: MBZ
+ *
+ * Base event for the xe_eudebug interface. To initiate a read, userspace
+ * needs to set type to DRM_XE_EUDEBUG_EVENT_READ and len to the
+ * maximum size it has allocated for the event.
+ * On successful return, len holds the length of the delivered event,
+ * or -EMSGSIZE is returned if it does not fit. Seqno can be used to
+ * form a timeline, as event delivery order does not guarantee event
+ * creation order.
+ *
+ * flags indicate whether the resource was created, destroyed,
+ * or whether its state changed.
+ *
+ * If DRM_XE_EUDEBUG_EVENT_NEED_ACK is set, xe_eudebug will hold
+ * the resource until it is acked by userspace using another acking
+ * ioctl with the seqno of the event.
+ *
+ */
+struct drm_xe_eudebug_event {
+ __u32 len;
+
+ __u16 type;
+#define DRM_XE_EUDEBUG_EVENT_NONE 0
+#define DRM_XE_EUDEBUG_EVENT_READ 1
+#define DRM_XE_EUDEBUG_EVENT_VM 2
+
+ __u16 flags;
+#define DRM_XE_EUDEBUG_EVENT_CREATE (1 << 0)
+#define DRM_XE_EUDEBUG_EVENT_DESTROY (1 << 1)
+#define DRM_XE_EUDEBUG_EVENT_STATE_CHANGE (1 << 2)
+#define DRM_XE_EUDEBUG_EVENT_NEED_ACK (1 << 3)
+ __u64 seqno;
+ __u64 reserved;
+};
+
+/**
+ * struct drm_xe_eudebug_event_vm - VM resource event
+ * @vm_handle: Handle of a vm that was created/destroyed
+ *
+ * Resource creation/destruction event for a VM.
+ */
+struct drm_xe_eudebug_event_vm {
+ struct drm_xe_eudebug_event base;
+
+ __u64 vm_handle;
+};
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* _UAPI_XE_DRM_EUDEBUG_H_ */
--
2.43.0
* [PATCH 01/21] drm/xe/eudebug: Introduce eudebug interface
2025-12-02 13:52 ` [PATCH 01/20] drm/xe/eudebug: Introduce eudebug interface Mika Kuoppala
@ 2025-12-10 16:48 ` Mika Kuoppala
0 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-10 16:48 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala, Maarten Lankhorst, Lucas De Marchi,
Dominik Grzegorzek, Andi Shyti, Matt Roper,
Zbigniew Kempczyński, Jonathan Cavitt
This patch adds the eudebug interface to the Xe driver, enabling
user-space debuggers (e.g., GDB) to track and interact with GPU resources
of a DRM client. Debuggers can inspect or modify these resources,
for example, to locate ISA/ELF sections and install breakpoints in a
shader's instruction stream.
A debugger opens a connection to the Xe driver via a DRM ioctl, specifying
the target DRM client's file descriptor. This returns an anonymous file
descriptor for the connection, which can be used to listen for resource
creation/destruction events. The same file descriptor can also be used to
receive hardware state change events and control execution flow by
interrupting EU threads on the GPU (in follow-up patches).
This patch introduces the eudebug connection and event queuing,
adding client create/destroy and VM create/destroy events as a baseline.
Additional events and hardware control for full debugger operation are
needed and will be introduced in follow-up patches.
The resource tracking components are inspired by Maciej Patelczyk's work on
resource handling for i915. Chris Wilson suggested a two-way mapping
approach, which simplifies using the resource map as definitive
bookkeeping for resources relayed to the debugger during the discovery
phase (in a follow-up patch).
v2: - Kconfig support (Matthew)
- ptraced access control (Lucas)
- pass expected event length to user (Zbigniew)
- only track long running VMs
- checkpatch (Tilak)
- include order (Andrzej)
- 32bit fixes (Andrzej)
- cleaner get_task_struct
- remove xa_array and use clients.list for tracking (Mika)
v3: - adapt to removal of clients.lock (Mika)
- create_event cleanup (Christoph)
v4: - add proper header guards (Christoph)
- better read_event fault handling (Christoph, Mika)
- simplify attach (Mika)
- connect using target file descriptors
- avoid event->seqno after queue as it can UAF (Mika)
- use drmm for eudebug_fini (Maciej)
- squash dynamic enable
v6: - drm->authenticated is overzealous for render (Mika)
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Andi Shyti <andi.shyti@linux.intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Zbigniew Kempczyński <zbigniew.kempczynski@intel.com>
Cc: Andrzej Hajda <andrzej.hajda@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
drivers/gpu/drm/xe/Kconfig | 10 +
drivers/gpu/drm/xe/Makefile | 3 +
drivers/gpu/drm/xe/xe_device.c | 14 +
drivers/gpu/drm/xe/xe_device_types.h | 31 +
drivers/gpu/drm/xe/xe_eudebug.c | 1046 +++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_eudebug.h | 65 ++
drivers/gpu/drm/xe/xe_eudebug_types.h | 128 +++
drivers/gpu/drm/xe/xe_vm.c | 7 +-
include/uapi/drm/xe_drm.h | 21 +
include/uapi/drm/xe_drm_eudebug.h | 77 ++
10 files changed, 1401 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/xe/xe_eudebug.c
create mode 100644 drivers/gpu/drm/xe/xe_eudebug.h
create mode 100644 drivers/gpu/drm/xe/xe_eudebug_types.h
create mode 100644 include/uapi/drm/xe_drm_eudebug.h
diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
index 4b288eb3f5b0e..05ee0dd60f3b2 100644
--- a/drivers/gpu/drm/xe/Kconfig
+++ b/drivers/gpu/drm/xe/Kconfig
@@ -128,6 +128,16 @@ config DRM_XE_FORCE_PROBE
Use "!*" to block the probe of the driver for all known devices.
+config DRM_XE_EUDEBUG
+ bool "Enable gdb debugger support (eudebug)"
+ depends on DRM_XE
+ default y
+ help
+ Choose this option if you want to add support for a debugger (gdb) to
+ attach to a process using Xe and debug its GPU/GPGPU programs.
+ With debugger support, Xe provides an interface for a debugger
+ process to track, inspect and modify resources.
+
menu "drm/Xe Debugging"
depends on DRM_XE
depends on EXPERT
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 6ecba27d85f7c..e8e41a4dd38e3 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -146,6 +146,9 @@ xe-$(CONFIG_I2C) += xe_i2c.o
xe-$(CONFIG_DRM_XE_GPUSVM) += xe_svm.o
xe-$(CONFIG_DRM_GPUSVM) += xe_userptr.o
+# debugging shaders with gdb (eudebug) support
+xe-$(CONFIG_DRM_XE_EUDEBUG) += xe_eudebug.o
+
# graphics hardware monitoring (HWMON) support
xe-$(CONFIG_HWMON) += xe_hwmon.o
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 1197f914ef777..1c7f98dd42bee 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -31,6 +31,7 @@
#include "xe_dma_buf.h"
#include "xe_drm_client.h"
#include "xe_drv.h"
+#include "xe_eudebug.h"
#include "xe_exec.h"
#include "xe_exec_queue.h"
#include "xe_force_wake.h"
@@ -105,6 +106,11 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
mutex_init(&xef->exec_queue.lock);
xa_init_flags(&xef->exec_queue.xa, XA_FLAGS_ALLOC1);
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ mutex_init(&xef->eudebug.lock);
+ INIT_LIST_HEAD(&xef->eudebug.target_link);
+#endif
+
file->driver_priv = xef;
kref_init(&xef->refcount);
@@ -127,6 +133,9 @@ static void xe_file_destroy(struct kref *ref)
xa_destroy(&xef->vm.xa);
mutex_destroy(&xef->vm.lock);
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ mutex_destroy(&xef->eudebug.lock);
+#endif
xe_drm_client_put(xef->client);
kfree(xef->process_name);
kfree(xef);
@@ -168,6 +177,8 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
guard(xe_pm_runtime)(xe);
+ xe_eudebug_file_close(xef);
+
/*
* No need for exec_queue.lock here as there is no contention for it
* when FD is closing as IOCTLs presumably can't be modifying the
@@ -207,6 +218,7 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS, xe_vm_query_vmas_attrs_ioctl,
DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_EUDEBUG_CONNECT, xe_eudebug_connect_ioctl, DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
@@ -967,6 +979,8 @@ int xe_device_probe(struct xe_device *xe)
if (err)
goto err_unregister_display;
+ xe_eudebug_init(xe);
+
return devm_add_action_or_reset(xe->drm.dev, xe_device_sanitize, xe);
err_unregister_display:
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 9de73353223f1..2b4ae7aedd12b 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -13,6 +13,7 @@
#include <drm/ttm/ttm_device.h>
#include "xe_devcoredump_types.h"
+#include "xe_eudebug_types.h"
#include "xe_heci_gsc.h"
#include "xe_late_bind_fw_types.h"
#include "xe_lmtt_types.h"
@@ -660,6 +661,23 @@ struct xe_device {
spinlock_t lock;
} uncore;
#endif
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ /** @eudebug: debugger connection list and globals for device */
+ struct {
+ /** @eudebug.session_count: session counter to track connections */
+ u64 session_count;
+
+ /** @eudebug.state: eudebug capability state of the device */
+ enum xe_eudebug_state state;
+
+ /** @eudebug.targets: list of xe_files that are debug targets */
+ struct list_head targets;
+
+ /** @eudebug.lock: protects state and targets */
+ struct mutex lock;
+ } eudebug;
+#endif
};
/**
@@ -721,6 +739,19 @@ struct xe_file {
/** @refcount: ref count of this xe file */
struct kref refcount;
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ struct {
+ /** @eudebug.debugger: the debugger connection into this xe_file */
+ struct xe_eudebug *debugger;
+
+ /** @eudebug.lock: protecting debugger */
+ struct mutex lock;
+
+ /** @eudebug.target_link: link into xe_device.eudebug.targets */
+ struct list_head target_link;
+ } eudebug;
+#endif
};
#endif
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
new file mode 100644
index 0000000000000..a38eab54336ff
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -0,0 +1,1046 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#include <linux/anon_inodes.h>
+#include <linux/delay.h>
+#include <linux/poll.h>
+#include <linux/uaccess.h>
+
+#include <drm/drm_managed.h>
+#include <uapi/drm/xe_drm.h>
+
+#include "xe_assert.h"
+#include "xe_device.h"
+#include "xe_eudebug.h"
+#include "xe_eudebug_types.h"
+#include "xe_macros.h"
+#include "xe_vm.h"
+
+/*
+ * If no event read by userspace is detected within this period, assume
+ * a userspace problem and disconnect the debugger to allow forward progress.
+ */
+#define XE_EUDEBUG_NO_READ_DETECTED_TIMEOUT_MS (25 * 1000)
+
+#define cast_event(T, event) container_of((event), typeof(*(T)), base)
+
+static struct drm_xe_eudebug_event *
+event_fifo_pending(struct xe_eudebug *d)
+{
+ struct drm_xe_eudebug_event *event;
+
+ if (kfifo_peek(&d->events.fifo, &event))
+ return event;
+
+ return NULL;
+}
+
+/*
+ * This is racy, as we don't take the lock for reads, but all the
+ * callsites can handle the race, so we can live without the lock.
+ */
+__no_kcsan
+static unsigned int
+event_fifo_num_events_peek(const struct xe_eudebug * const d)
+{
+ return kfifo_len(&d->events.fifo);
+}
+
+static bool
+xe_eudebug_detached(struct xe_eudebug *d)
+{
+ bool connected;
+
+ spin_lock(&d->target.lock);
+ connected = !!d->target.xef;
+ spin_unlock(&d->target.lock);
+
+ return !connected;
+}
+
+static unsigned int
+event_fifo_has_events(struct xe_eudebug *d)
+{
+ /* Allow all waiters to proceed to check their state */
+ if (xe_eudebug_detached(d))
+ return 1;
+
+ return event_fifo_num_events_peek(d);
+}
+
+static const struct rhashtable_params rhash_res = {
+ .head_offset = offsetof(struct xe_eudebug_handle, rh_head),
+ .key_len = sizeof_field(struct xe_eudebug_handle, key),
+ .key_offset = offsetof(struct xe_eudebug_handle, key),
+ .automatic_shrinking = true,
+};
+
+static struct xe_eudebug_resource *
+resource_from_type(struct xe_eudebug_resources * const res, const int t)
+{
+ return &res->rt[t];
+}
+
+static struct xe_eudebug_resources *
+xe_eudebug_resources_alloc(void)
+{
+ struct xe_eudebug_resources *res;
+ int err;
+ int i;
+
+ res = kzalloc(sizeof(*res), GFP_ATOMIC);
+ if (!res)
+ return ERR_PTR(-ENOMEM);
+
+ mutex_init(&res->lock);
+
+ for (i = 0; i < XE_EUDEBUG_RES_TYPE_COUNT; i++) {
+ xa_init_flags(&res->rt[i].xa, XA_FLAGS_ALLOC1);
+ err = rhashtable_init(&res->rt[i].rh, &rhash_res);
+
+ if (err)
+ break;
+ }
+
+ if (err) {
+ while (i--) {
+ xa_destroy(&res->rt[i].xa);
+ rhashtable_destroy(&res->rt[i].rh);
+ }
+
+ kfree(res);
+ return ERR_PTR(err);
+ }
+
+ return res;
+}
+
+static void res_free_fn(void *ptr, void *arg)
+{
+ XE_WARN_ON(ptr);
+ kfree(ptr);
+}
+
+static void
+xe_eudebug_destroy_resources(struct xe_eudebug *d)
+{
+ struct xe_eudebug_resources *res = d->res;
+ struct xe_eudebug_handle *h;
+ unsigned long j;
+ int i;
+ int err;
+
+ mutex_lock(&res->lock);
+ for (i = 0; i < XE_EUDEBUG_RES_TYPE_COUNT; i++) {
+ struct xe_eudebug_resource *r = &res->rt[i];
+
+ xa_for_each(&r->xa, j, h) {
+ struct xe_eudebug_handle *t;
+
+ err = rhashtable_remove_fast(&r->rh,
+ &h->rh_head,
+ rhash_res);
+ xe_eudebug_assert(d, !err);
+ t = xa_erase(&r->xa, h->id);
+ xe_eudebug_assert(d, t == h);
+ kfree(t);
+ }
+ }
+ mutex_unlock(&res->lock);
+
+ for (i = 0; i < XE_EUDEBUG_RES_TYPE_COUNT; i++) {
+ struct xe_eudebug_resource *r = &res->rt[i];
+
+ rhashtable_free_and_destroy(&r->rh, res_free_fn, NULL);
+ xe_eudebug_assert(d, xa_empty(&r->xa));
+ xa_destroy(&r->xa);
+ }
+
+ mutex_destroy(&res->lock);
+
+ kfree(res);
+}
+
+static void xe_eudebug_free(struct kref *ref)
+{
+ struct xe_eudebug *d = container_of(ref, typeof(*d), ref);
+ struct drm_xe_eudebug_event *event;
+
+ xe_assert(d->xe, xe_eudebug_detached(d));
+
+ while (kfifo_get(&d->events.fifo, &event))
+ kfree(event);
+
+ xe_eudebug_destroy_resources(d);
+ XE_WARN_ON(d->target.xef);
+
+ xe_eudebug_assert(d, !kfifo_len(&d->events.fifo));
+
+ kfree(d);
+}
+
+static void xe_eudebug_put(struct xe_eudebug *d)
+{
+ kref_put(&d->ref, xe_eudebug_free);
+}
+
+static void remove_debugger(struct xe_file *xef)
+{
+ struct xe_eudebug *d;
+
+ if (XE_WARN_ON(!xef))
+ return;
+
+ mutex_lock(&xef->eudebug.lock);
+ d = xef->eudebug.debugger;
+ if (d)
+ xef->eudebug.debugger = NULL;
+ mutex_unlock(&xef->eudebug.lock);
+
+ if (d) {
+ struct xe_device *xe = d->xe;
+
+ mutex_lock(&xe->eudebug.lock);
+ list_del_init(&xef->eudebug.target_link);
+ mutex_unlock(&xe->eudebug.lock);
+
+ eu_dbg(d, "debugger removed");
+
+ xe_eudebug_put(d);
+ }
+}
+
+static bool xe_eudebug_detach(struct xe_device *xe,
+ struct xe_eudebug *d,
+ const int err)
+{
+ struct xe_file *target = NULL;
+
+ XE_WARN_ON(err > 0);
+
+ spin_lock(&d->target.lock);
+ if (d->target.xef) {
+ target = d->target.xef;
+ d->target.xef = NULL;
+ d->target.err = err;
+ }
+ spin_unlock(&d->target.lock);
+
+ if (!target)
+ return false;
+
+ eu_dbg(d, "session %lld detached with %d", d->session, err);
+
+ remove_debugger(target);
+ xe_file_put(target);
+
+ return true;
+}
+
+static int _xe_eudebug_disconnect(struct xe_eudebug *d,
+ const int err)
+{
+ wake_up_all(&d->events.write_done);
+ wake_up_all(&d->events.read_done);
+
+ return xe_eudebug_detach(d->xe, d, err);
+}
+
+#define xe_eudebug_disconnect(_d, _err) ({ \
+ if (_xe_eudebug_disconnect((_d), (_err))) { \
+ if ((_err) == 0 || (_err) == -ETIMEDOUT) \
+ eu_dbg((_d), "Session closed (%d)", (_err)); \
+ else \
+ eu_err((_d), "Session disconnected, err = %d (%s:%d)", \
+ (_err), __func__, __LINE__); \
+ } \
+})
+
+static struct xe_eudebug *
+xe_eudebug_get(struct xe_file *xef)
+{
+ struct xe_eudebug *d;
+
+ mutex_lock(&xef->eudebug.lock);
+ d = xef->eudebug.debugger;
+ if (d && !kref_get_unless_zero(&d->ref))
+ d = NULL;
+ mutex_unlock(&xef->eudebug.lock);
+
+ if (!d)
+ return NULL;
+
+ if (xe_eudebug_detached(d)) {
+ xe_eudebug_put(d);
+ return NULL;
+ }
+
+ return d;
+}
+
+static int xe_eudebug_queue_event(struct xe_eudebug *d,
+ struct drm_xe_eudebug_event *event)
+{
+ const u64 wait_jiffies = msecs_to_jiffies(1000);
+ u64 last_read_detected_ts, last_head_seqno, start_ts;
+ const u64 event_seqno = event->seqno;
+
+ xe_eudebug_assert(d, event->len > sizeof(struct drm_xe_eudebug_event));
+ xe_eudebug_assert(d, event->type);
+ xe_eudebug_assert(d, event->type != DRM_XE_EUDEBUG_EVENT_READ);
+
+ start_ts = ktime_get();
+ last_read_detected_ts = start_ts;
+ last_head_seqno = 0;
+
+ do {
+ struct drm_xe_eudebug_event *head;
+ u64 head_seqno;
+ bool was_queued;
+
+ if (xe_eudebug_detached(d))
+ break;
+
+ spin_lock(&d->events.lock);
+ head = event_fifo_pending(d);
+ if (head)
+ head_seqno = head->seqno;
+ else
+ head_seqno = 0;
+
+ was_queued = kfifo_in(&d->events.fifo, &event, 1);
+ spin_unlock(&d->events.lock);
+
+ wake_up_all(&d->events.write_done);
+
+ if (was_queued) {
+ eu_dbg(d, "queued event with seqno %lld (head %lld)\n",
+ event_seqno, head_seqno);
+ event = NULL;
+ break;
+ }
+
+ XE_WARN_ON(!head_seqno);
+
+ /* If we detect progress, restart timeout */
+ if (last_head_seqno != head_seqno)
+ last_read_detected_ts = ktime_get();
+
+ last_head_seqno = head_seqno;
+
+ wait_event_interruptible_timeout(d->events.read_done,
+ !kfifo_is_full(&d->events.fifo),
+ wait_jiffies);
+
+ } while (ktime_ms_delta(ktime_get(), last_read_detected_ts) <
+ XE_EUDEBUG_NO_READ_DETECTED_TIMEOUT_MS);
+
+ if (event) {
+ eu_dbg(d,
+ "event %llu queue failed (blocked %lld ms, avail %d)",
+ event->seqno,
+ ktime_ms_delta(ktime_get(), start_ts),
+ kfifo_avail(&d->events.fifo));
+
+ kfree(event);
+
+ return -ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static struct xe_eudebug_handle *
+alloc_handle(const int type, const u64 key)
+{
+ struct xe_eudebug_handle *h;
+
+ h = kzalloc(sizeof(*h), GFP_ATOMIC);
+ if (!h)
+ return NULL;
+
+ h->key = key;
+
+ return h;
+}
+
+static struct xe_eudebug_handle *
+__find_handle(struct xe_eudebug_resource *r,
+ const u64 key)
+{
+ struct xe_eudebug_handle *h;
+
+ h = rhashtable_lookup_fast(&r->rh,
+ &key,
+ rhash_res);
+ return h;
+}
+
+static int _xe_eudebug_add_handle(struct xe_eudebug *d,
+ int type,
+ void *p,
+ u64 *seqno,
+ int *handle)
+{
+ const u64 key = (uintptr_t)p;
+ struct xe_eudebug_resource *r;
+ struct xe_eudebug_handle *h, *o;
+ int err;
+
+ if (XE_WARN_ON(!p))
+ return -EINVAL;
+
+ if (xe_eudebug_detached(d))
+ return -ENOTCONN;
+
+ h = alloc_handle(type, key);
+ if (!h)
+ return -ENOMEM;
+
+ r = resource_from_type(d->res, type);
+
+ mutex_lock(&d->res->lock);
+ o = __find_handle(r, key);
+ if (!o) {
+ err = xa_alloc(&r->xa, &h->id, h, xa_limit_31b, GFP_KERNEL);
+
+ if (h->id >= INT_MAX) {
+ xa_erase(&r->xa, h->id);
+ err = -ENOSPC;
+ }
+
+ if (!err)
+ err = rhashtable_insert_fast(&r->rh,
+ &h->rh_head,
+ rhash_res);
+
+ if (err) {
+ xa_erase(&r->xa, h->id);
+ } else {
+ if (seqno)
+ *seqno = atomic_long_inc_return(&d->events.seqno);
+ }
+ } else {
+ xe_eudebug_assert(d, o->id);
+ err = -EEXIST;
+ }
+ mutex_unlock(&d->res->lock);
+
+ if (handle)
+ *handle = o ? o->id : h->id;
+
+ if (err) {
+ kfree(h);
+ XE_WARN_ON(err > 0);
+ return err;
+ }
+
+ xe_eudebug_assert(d, h->id);
+
+ return h->id;
+}
+
+static int xe_eudebug_add_handle(struct xe_eudebug *d,
+ int type,
+ void *p,
+ u64 *seqno)
+{
+ int ret;
+
+ ret = _xe_eudebug_add_handle(d, type, p, seqno, NULL);
+
+ eu_dbg(d, "handle type %d handle %p added: %d\n", type, p, ret);
+
+ return ret;
+}
+
+static int _xe_eudebug_remove_handle(struct xe_eudebug *d, int type, void *p,
+ u64 *seqno)
+{
+ const u64 key = (uintptr_t)p;
+ struct xe_eudebug_resource *r;
+ struct xe_eudebug_handle *h, *xa_h;
+ int ret;
+
+ if (XE_WARN_ON(!key))
+ return -EINVAL;
+
+ if (xe_eudebug_detached(d))
+ return -ENOTCONN;
+
+ r = resource_from_type(d->res, type);
+
+ mutex_lock(&d->res->lock);
+ h = __find_handle(r, key);
+ if (h) {
+ ret = rhashtable_remove_fast(&r->rh,
+ &h->rh_head,
+ rhash_res);
+ xe_eudebug_assert(d, !ret);
+ xa_h = xa_erase(&r->xa, h->id);
+ xe_eudebug_assert(d, xa_h == h);
+ if (!ret) {
+ ret = h->id;
+ if (seqno)
+ *seqno = atomic_long_inc_return(&d->events.seqno);
+ }
+ } else {
+ ret = -ENOENT;
+ }
+ mutex_unlock(&d->res->lock);
+
+ kfree(h);
+
+ xe_eudebug_assert(d, ret);
+
+ return ret;
+}
+
+static int xe_eudebug_remove_handle(struct xe_eudebug *d, int type, void *p,
+ u64 *seqno)
+{
+ int ret;
+
+ ret = _xe_eudebug_remove_handle(d, type, p, seqno);
+
+ eu_dbg(d, "handle type %d handle %p removed: %d\n", type, p, ret);
+
+ return ret;
+}
+
+static struct drm_xe_eudebug_event *
+xe_eudebug_create_event(struct xe_eudebug *d, u16 type, u64 seqno, u16 flags,
+ u32 len)
+{
+ const u16 known_flags =
+ DRM_XE_EUDEBUG_EVENT_CREATE |
+ DRM_XE_EUDEBUG_EVENT_DESTROY |
+ DRM_XE_EUDEBUG_EVENT_STATE_CHANGE |
+ DRM_XE_EUDEBUG_EVENT_NEED_ACK;
+ struct drm_xe_eudebug_event *event;
+
+ BUILD_BUG_ON(type > XE_EUDEBUG_MAX_EVENT_TYPE);
+
+ xe_eudebug_assert(d, type <= XE_EUDEBUG_MAX_EVENT_TYPE);
+ xe_eudebug_assert(d, !(~known_flags & flags));
+ xe_eudebug_assert(d, len > sizeof(*event));
+
+ event = kzalloc(len, GFP_KERNEL);
+ if (!event)
+ return NULL;
+
+ event->len = len;
+ event->type = type;
+ event->flags = flags;
+ event->seqno = seqno;
+
+ return event;
+}
+
+static int send_vm_event(struct xe_eudebug *d, u32 flags,
+ const u64 vm_handle,
+ const u64 seqno)
+{
+ struct drm_xe_eudebug_event *event;
+ struct drm_xe_eudebug_event_vm *e;
+
+ event = xe_eudebug_create_event(d, DRM_XE_EUDEBUG_EVENT_VM,
+ seqno, flags, sizeof(*e));
+ if (!event)
+ return -ENOMEM;
+
+ e = cast_event(e, event);
+
+ e->vm_handle = vm_handle;
+
+ return xe_eudebug_queue_event(d, event);
+}
+
+static int vm_create_event(struct xe_eudebug *d,
+ struct xe_file *xef, struct xe_vm *vm)
+{
+ int vm_id;
+ u64 seqno;
+ int ret;
+
+ if (!xe_vm_in_lr_mode(vm))
+ return 0;
+
+ vm_id = xe_eudebug_add_handle(d, XE_EUDEBUG_RES_TYPE_VM, vm, &seqno);
+ if (vm_id < 0)
+ return vm_id;
+
+ ret = send_vm_event(d, DRM_XE_EUDEBUG_EVENT_CREATE, vm_id, seqno);
+ if (ret)
+ eu_dbg(d, "send_vm_event create error %d\n", ret);
+
+ return ret;
+}
+
+static int vm_destroy_event(struct xe_eudebug *d,
+ struct xe_file *xef, struct xe_vm *vm)
+{
+ int vm_id;
+ u64 seqno;
+ int ret;
+
+ if (!xe_vm_in_lr_mode(vm))
+ return 0;
+
+ vm_id = xe_eudebug_remove_handle(d, XE_EUDEBUG_RES_TYPE_VM, vm, &seqno);
+ if (vm_id < 0)
+ return vm_id;
+
+ ret = send_vm_event(d, DRM_XE_EUDEBUG_EVENT_DESTROY, vm_id, seqno);
+ if (ret)
+ eu_dbg(d, "send_vm_event destroy error %d\n", ret);
+
+ return ret;
+}
+
+#define xe_eudebug_event_put(_d, _err) ({ \
+ if ((_err)) \
+ xe_eudebug_disconnect((_d), (_err)); \
+ xe_eudebug_put((_d)); \
+ })
+
+void xe_eudebug_vm_create(struct xe_file *xef, struct xe_vm *vm)
+{
+ struct xe_eudebug *d;
+
+ if (!xe_vm_in_lr_mode(vm))
+ return;
+
+ d = xe_eudebug_get(xef);
+ if (!d)
+ return;
+
+ xe_eudebug_event_put(d, vm_create_event(d, xef, vm));
+}
+
+void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm)
+{
+ struct xe_eudebug *d;
+
+ if (!xe_vm_in_lr_mode(vm))
+ return;
+
+ d = xe_eudebug_get(xef);
+ if (!d)
+ return;
+
+ xe_eudebug_event_put(d, vm_destroy_event(d, xef, vm));
+}
+
+static int add_debugger(struct xe_device *xe, struct xe_eudebug *d,
+ struct drm_file *target)
+{
+ struct xe_file *xef = target->driver_priv;
+ int ret = -EBUSY;
+
+ mutex_lock(&xef->eudebug.lock);
+ if (!xef->eudebug.debugger) {
+ d->target.xef = xe_file_get(xef);
+ d->target.pid = xef->pid;
+ kref_get(&d->ref);
+ xef->eudebug.debugger = d;
+ ret = 0;
+ }
+ mutex_unlock(&xef->eudebug.lock);
+
+ if (ret)
+ return ret;
+
+ mutex_lock(&xe->eudebug.lock);
+ XE_WARN_ON(!list_empty(&xef->eudebug.target_link));
+ list_add_tail(&xef->eudebug.target_link, &xef->xe->eudebug.targets);
+ mutex_unlock(&xe->eudebug.lock);
+
+ return 0;
+}
+
+static int
+xe_eudebug_attach(struct xe_device *xe, struct drm_file *parent_file,
+ struct xe_eudebug *d, u64 target_pidfd)
+{
+ struct file *file __free(fput) = NULL;
+ struct drm_file *drm_file;
+ struct xe_file *target_xef;
+ int ret;
+
+ file = fget(target_pidfd);
+ if (XE_IOCTL_DBG(xe, !file))
+ return -EBADFD;
+
+ drm_file = file->private_data;
+ if (XE_IOCTL_DBG(xe, !drm_file))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, parent_file->filp->f_op != file->f_op))
+ return -EINVAL;
+
+ target_xef = drm_file->driver_priv;
+ if (XE_IOCTL_DBG(xe, !target_xef))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, xe != target_xef->xe))
+ return -EINVAL;
+
+ ret = add_debugger(xe, d, drm_file);
+ if (XE_IOCTL_DBG(xe, ret))
+ return ret;
+
+ d->xe = xe;
+ d->session = ++xe->eudebug.session_count ?: 1;
+
+ eu_dbg(d, "session %lld attached to %s", d->session,
+ parent_file == drm_file ? "self" : "remote");
+
+ return 0;
+}
+
+static int xe_eudebug_release(struct inode *inode, struct file *file)
+{
+ struct xe_eudebug *d = file->private_data;
+
+ xe_eudebug_disconnect(d, 0);
+ xe_eudebug_put(d);
+
+ return 0;
+}
+
+static __poll_t xe_eudebug_poll(struct file *file, poll_table *wait)
+{
+ struct xe_eudebug * const d = file->private_data;
+ __poll_t ret = 0;
+
+ poll_wait(file, &d->events.write_done, wait);
+
+ if (xe_eudebug_detached(d)) {
+ ret |= EPOLLHUP;
+ if (d->target.err)
+ ret |= EPOLLERR;
+ }
+
+ if (event_fifo_num_events_peek(d))
+ ret |= EPOLLIN;
+
+ return ret;
+}
+
+static ssize_t xe_eudebug_read(struct file *file,
+ char __user *buf,
+ size_t count,
+ loff_t *ppos)
+{
+ return -EINVAL;
+}
+
+static long xe_eudebug_read_event(struct xe_eudebug *d,
+ const u64 arg,
+ const bool wait)
+{
+ struct xe_device *xe = d->xe;
+ struct drm_xe_eudebug_event __user * const user_orig =
+ u64_to_user_ptr(arg);
+ struct drm_xe_eudebug_event user_event;
+ struct drm_xe_eudebug_event *pending, *event_out;
+ long ret = 0;
+
+ if (XE_IOCTL_DBG(xe, copy_from_user(&user_event, user_orig, sizeof(user_event))))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, !user_event.type))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.type > XE_EUDEBUG_MAX_EVENT_TYPE))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.type != DRM_XE_EUDEBUG_EVENT_READ))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.len < sizeof(*user_orig)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.flags))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, user_event.reserved))
+ return -EINVAL;
+
+ /* XXX: define wait time in connect arguments ? */
+ if (wait) {
+ ret = wait_event_interruptible_timeout(d->events.write_done,
+ event_fifo_has_events(d),
+ msecs_to_jiffies(5 * 1000));
+
+ if (XE_IOCTL_DBG(xe, ret < 0))
+ return ret;
+ }
+
+ if (XE_IOCTL_DBG(xe, xe_eudebug_detached(d)))
+ return -ENOTCONN;
+
+ event_out = NULL;
+ spin_lock(&d->events.lock);
+ pending = event_fifo_pending(d);
+ if (!pending)
+ ret = wait ? -ETIMEDOUT : -EAGAIN;
+ else if (user_event.len < pending->len)
+ ret = -EMSGSIZE;
+ else if (access_ok(user_orig, pending->len))
+ ret = kfifo_out(&d->events.fifo, &event_out, 1) == 1 ? 0 : -EIO;
+ else
+ ret = -EFAULT;
+
+ wake_up_all(&d->events.read_done);
+ spin_unlock(&d->events.lock);
+
+ if (!pending)
+ return ret;
+
+ if (ret == -EMSGSIZE) {
+ if (XE_IOCTL_DBG(xe, put_user(pending->len, &user_orig->len)))
+ return -EFAULT;
+
+ return -EMSGSIZE;
+ }
+
+ if (XE_IOCTL_DBG(xe, ret)) {
+ xe_eudebug_disconnect(d, (int)ret);
+ return ret;
+ }
+
+ XE_WARN_ON(pending != event_out);
+
+ if (__copy_to_user(user_orig, event_out, event_out->len)) {
+ ret = -EFAULT;
+ /* We can't rollback anymore, disconnect */
+ xe_eudebug_disconnect(d, -EFAULT);
+ }
+
+ eu_dbg(d, "event read=%ld: type=%u, flags=0x%x, seqno=%llu", ret,
+ event_out->type, event_out->flags, event_out->seqno);
+
+ kfree(event_out);
+
+ return ret;
+}
+
+static long xe_eudebug_ioctl(struct file *file,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ struct xe_eudebug * const d = file->private_data;
+ long ret;
+
+ switch (cmd) {
+ case DRM_XE_EUDEBUG_IOCTL_READ_EVENT:
+ ret = xe_eudebug_read_event(d, arg,
+ !(file->f_flags & O_NONBLOCK));
+ break;
+
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+static const struct file_operations fops = {
+ .owner = THIS_MODULE,
+ .release = xe_eudebug_release,
+ .poll = xe_eudebug_poll,
+ .read = xe_eudebug_read,
+ .unlocked_ioctl = xe_eudebug_ioctl,
+};
+
+static int
+xe_eudebug_connect(struct xe_device *xe,
+ struct drm_file *file,
+ struct drm_xe_eudebug_connect *param)
+{
+ const u64 known_open_flags = 0;
+ unsigned long f_flags = 0;
+ struct xe_eudebug *d;
+ int fd, err;
+
+ if (XE_IOCTL_DBG(xe, param->extensions))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, !param->fd))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, param->flags & ~known_open_flags))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, param->version &&
+ param->version != DRM_XE_EUDEBUG_VERSION))
+ return -EINVAL;
+
+ param->version = DRM_XE_EUDEBUG_VERSION;
+
+ mutex_lock(&xe->eudebug.lock);
+ err = xe_eudebug_is_enabled(xe) ? 0 : -EOPNOTSUPP;
+ mutex_unlock(&xe->eudebug.lock);
+
+ if (XE_IOCTL_DBG(xe, err))
+ return err;
+
+ d = kzalloc(sizeof(*d), GFP_KERNEL);
+ if (XE_IOCTL_DBG(xe, !d))
+ return -ENOMEM;
+
+ kref_init(&d->ref);
+ spin_lock_init(&d->target.lock);
+ init_waitqueue_head(&d->events.write_done);
+ init_waitqueue_head(&d->events.read_done);
+
+ spin_lock_init(&d->events.lock);
+ INIT_KFIFO(d->events.fifo);
+
+ d->res = xe_eudebug_resources_alloc();
+ if (XE_IOCTL_DBG(xe, IS_ERR(d->res))) {
+ err = PTR_ERR(d->res);
+ goto err_free;
+ }
+
+ err = xe_eudebug_attach(xe, file, d, param->fd);
+ if (XE_IOCTL_DBG(xe, err))
+ goto err_free_res;
+
+ fd = anon_inode_getfd("[xe_eudebug]", &fops, d, f_flags);
+ if (XE_IOCTL_DBG(xe, fd < 0)) {
+ err = fd;
+ goto err_detach;
+ }
+
+ eu_dbg(d, "connected session %lld", d->session);
+
+ return fd;
+
+err_detach:
+ xe_eudebug_detach(xe, d, err);
+err_free_res:
+ xe_eudebug_destroy_resources(d);
+err_free:
+ kfree(d);
+
+ return err;
+}
+
+void xe_eudebug_file_close(struct xe_file *xef)
+{
+ remove_debugger(xef);
+}
+
+bool xe_eudebug_is_enabled(struct xe_device *xe)
+{
+ return READ_ONCE(xe->eudebug.state) == XE_EUDEBUG_ENABLED;
+}
+
+int xe_eudebug_enable(struct xe_device *xe, bool enable)
+{
+ mutex_lock(&xe->eudebug.lock);
+
+ if (xe->eudebug.state == XE_EUDEBUG_NOT_SUPPORTED) {
+ mutex_unlock(&xe->eudebug.lock);
+ return -EPERM;
+ }
+
+ if (!enable && !list_empty(&xe->eudebug.targets)) {
+ mutex_unlock(&xe->eudebug.lock);
+ return -EBUSY;
+ }
+
+ if (enable == xe_eudebug_is_enabled(xe)) {
+ mutex_unlock(&xe->eudebug.lock);
+ return 0;
+ }
+
+ xe->eudebug.state = enable ?
+ XE_EUDEBUG_ENABLED : XE_EUDEBUG_DISABLED;
+ mutex_unlock(&xe->eudebug.lock);
+
+ return 0;
+}
+
+static ssize_t enable_eudebug_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct xe_device *xe = pdev_to_xe_device(to_pci_dev(dev));
+
+ return sysfs_emit(buf, "%u\n", xe_eudebug_is_enabled(xe));
+}
+
+static ssize_t enable_eudebug_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct xe_device *xe = pdev_to_xe_device(to_pci_dev(dev));
+ bool enable;
+ int ret;
+
+ ret = kstrtobool(buf, &enable);
+ if (ret)
+ return ret;
+
+ ret = xe_eudebug_enable(xe, enable);
+ if (ret)
+ return ret;
+
+ return count;
+}
+
+static DEVICE_ATTR_RW(enable_eudebug);
+
+static void xe_eudebug_sysfs_fini(void *arg)
+{
+ struct xe_device *xe = arg;
+ struct drm_device *dev = &xe->drm;
+
+ sysfs_remove_file(&dev->dev->kobj,
+ &dev_attr_enable_eudebug.attr);
+}
+
+void xe_eudebug_init(struct xe_device *xe)
+{
+ struct drm_device *dev = &xe->drm;
+ int err;
+
+ INIT_LIST_HEAD(&xe->eudebug.targets);
+
+ xe->eudebug.state = XE_EUDEBUG_NOT_SUPPORTED;
+
+ err = drmm_mutex_init(dev, &xe->eudebug.lock);
+ if (err)
+ goto out_err;
+
+ err = sysfs_create_file(&dev->dev->kobj,
+ &dev_attr_enable_eudebug.attr);
+ if (err)
+ goto out_err;
+
+ err = devm_add_action_or_reset(dev->dev, xe_eudebug_sysfs_fini, xe);
+ if (err)
+ goto out_err;
+
+ xe->eudebug.state = XE_EUDEBUG_DISABLED;
+
+ return;
+
+out_err:
+ drm_warn(&xe->drm, "eudebug disabled, init fail: %d\n", err);
+}
+
+int xe_eudebug_connect_ioctl(struct drm_device *dev,
+ void *data,
+ struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct drm_xe_eudebug_connect * const param = data;
+
+ return xe_eudebug_connect(xe, file, param);
+}
diff --git a/drivers/gpu/drm/xe/xe_eudebug.h b/drivers/gpu/drm/xe/xe_eudebug.h
new file mode 100644
index 0000000000000..22fbb2ff24da6
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#ifndef _XE_EUDEBUG_H_
+#define _XE_EUDEBUG_H_
+
+#include <linux/types.h>
+
+struct drm_device;
+struct drm_file;
+struct xe_device;
+struct xe_file;
+struct xe_vm;
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+
+#define XE_EUDEBUG_DBG_STR "eudbg: %lld:%lu:%s (%d/%d) -> (%d): "
+#define XE_EUDEBUG_DBG_ARGS(d) (d)->session, \
+ atomic_long_read(&(d)->events.seqno), \
+ !READ_ONCE((d)->target.xef) ? "disconnected" : "", \
+ current->pid, \
+ task_tgid_nr(current), \
+ READ_ONCE((d)->target.xef) ? (d)->target.xef->pid : -1
+
+#define eu_err(d, fmt, ...) drm_err(&(d)->xe->drm, XE_EUDEBUG_DBG_STR # fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
+#define eu_warn(d, fmt, ...) drm_warn(&(d)->xe->drm, XE_EUDEBUG_DBG_STR # fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
+#define eu_dbg(d, fmt, ...) drm_dbg(&(d)->xe->drm, XE_EUDEBUG_DBG_STR # fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
+
+#define xe_eudebug_assert(d, ...) xe_assert((d)->xe, ##__VA_ARGS__)
+
+int xe_eudebug_connect_ioctl(struct drm_device *dev,
+ void *data,
+ struct drm_file *file);
+
+void xe_eudebug_init(struct xe_device *xe);
+bool xe_eudebug_is_enabled(struct xe_device *xe);
+
+void xe_eudebug_file_close(struct xe_file *xef);
+
+void xe_eudebug_vm_create(struct xe_file *xef, struct xe_vm *vm);
+void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm);
+int xe_eudebug_enable(struct xe_device *xe, bool enable);
+
+#else
+
+static inline int xe_eudebug_connect_ioctl(struct drm_device *dev,
+ void *data,
+ struct drm_file *file) { return 0; }
+
+static inline void xe_eudebug_init(struct xe_device *xe) { }
+static inline bool xe_eudebug_is_enabled(struct xe_device *xe) { return false; }
+
+static inline void xe_eudebug_file_close(struct xe_file *xef) { }
+
+static inline void xe_eudebug_vm_create(struct xe_file *xef, struct xe_vm *vm) { }
+static inline void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm) { }
+
+#endif /* CONFIG_DRM_XE_EUDEBUG */
+
+#endif /* _XE_EUDEBUG_H_ */
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
new file mode 100644
index 0000000000000..1e673c934169c
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#ifndef _XE_EUDEBUG_TYPES_H_
+#define _XE_EUDEBUG_TYPES_H_
+
+#include <linux/completion.h>
+#include <linux/kfifo.h>
+#include <linux/kref.h>
+#include <linux/mutex.h>
+#include <linux/rbtree.h>
+#include <linux/rhashtable.h>
+#include <linux/wait.h>
+#include <linux/xarray.h>
+
+struct xe_device;
+struct task_struct;
+
+/**
+ * enum xe_eudebug_state - eudebug capability state
+ *
+ * @XE_EUDEBUG_NOT_SUPPORTED: eudebug feature support off
+ * @XE_EUDEBUG_DISABLED: eudebug feature supported but disabled
+ * @XE_EUDEBUG_ENABLED: eudebug enabled
+ */
+enum xe_eudebug_state {
+ XE_EUDEBUG_NOT_SUPPORTED = 0,
+ XE_EUDEBUG_DISABLED,
+ XE_EUDEBUG_ENABLED,
+};
+
+#define CONFIG_DRM_XE_DEBUGGER_EVENT_QUEUE_SIZE 64
+#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_VM
+
+/**
+ * struct xe_eudebug_handle - eudebug resource handle
+ */
+struct xe_eudebug_handle {
+ /** @key: key value in rhashtable <key:id> */
+ u64 key;
+
+ /** @id: opaque handle id for xarray <id:key> */
+ int id;
+
+ /** @rh_head: rhashtable head */
+ struct rhash_head rh_head;
+};
+
+/**
+ * struct xe_eudebug_resource - Resource map for one resource
+ */
+struct xe_eudebug_resource {
+ /** @xa: xarrays for <id->key> */
+ struct xarray xa;
+
+ /** @rh: rhashtable for <key->id> */
+ struct rhashtable rh;
+};
+
+#define XE_EUDEBUG_RES_TYPE_VM 0
+#define XE_EUDEBUG_RES_TYPE_COUNT (XE_EUDEBUG_RES_TYPE_VM + 1)
+
+/**
+ * struct xe_eudebug_resources - eudebug resources for all types
+ */
+struct xe_eudebug_resources {
+ /** @lock: guards access into rt */
+ struct mutex lock;
+
+ /** @rt: resource maps for all types */
+ struct xe_eudebug_resource rt[XE_EUDEBUG_RES_TYPE_COUNT];
+};
+
+/**
+ * struct xe_eudebug - Top level struct for eudebug: the connection
+ */
+struct xe_eudebug {
+ /** @ref: kref counter for this struct */
+ struct kref ref;
+
+ struct {
+ /** @xef: the target xe_file that we are debugging */
+ struct xe_file *xef;
+
+ /** @pid: pid of target */
+ pid_t pid;
+
+ /** @err: error code on disconnect */
+ int err;
+
+ /** @lock: guards access to xef and err */
+ spinlock_t lock;
+ } target;
+
+ /** @xe: the parent device we are serving */
+ struct xe_device *xe;
+
+ /** @res: the resource maps we track for target_task */
+ struct xe_eudebug_resources *res;
+
+ /** @session: session number for this connection (for logs) */
+ u64 session;
+
+ /** @events: kfifo queue of to-be-delivered events */
+ struct {
+ /** @lock: guards access to fifo */
+ spinlock_t lock;
+
+ /** @fifo: queue of events pending */
+ DECLARE_KFIFO(fifo,
+ struct drm_xe_eudebug_event *,
+ CONFIG_DRM_XE_DEBUGGER_EVENT_QUEUE_SIZE);
+
+ /** @write_done: waitqueue for signalling write to fifo */
+ wait_queue_head_t write_done;
+
+ /** @read_done: waitqueue for signalling read from fifo */
+ wait_queue_head_t read_done;
+
+ /** @seqno: seqno counter to stamp events for fifo */
+ atomic_long_t seqno;
+ } events;
+
+};
+
+#endif /* _XE_EUDEBUG_TYPES_H_ */
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c2012d20faa62..165f6369d5cb6 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -26,6 +26,7 @@
#include "xe_bo.h"
#include "xe_device.h"
#include "xe_drm_client.h"
+#include "xe_eudebug.h"
#include "xe_exec_queue.h"
#include "xe_migrate.h"
#include "xe_pat.h"
@@ -1957,6 +1958,8 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
args->vm_id = id;
+ xe_eudebug_vm_create(xef, vm);
+
return 0;
err_close_and_put:
@@ -1988,8 +1991,10 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
xa_erase(&xef->vm.xa, args->vm_id);
mutex_unlock(&xef->vm.lock);
- if (!err)
+ if (!err) {
+ xe_eudebug_vm_destroy(xef, vm);
xe_vm_close_and_put(vm);
+ }
return err;
}
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 876a076fa6c0c..46b9d2db1a892 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -106,6 +106,7 @@ extern "C" {
#define DRM_XE_OBSERVATION 0x0b
#define DRM_XE_MADVISE 0x0c
#define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
+#define DRM_XE_EUDEBUG_CONNECT 0x0e
/* Must be kept compact -- no holes */
@@ -123,6 +124,7 @@ extern "C" {
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
+#define DRM_IOCTL_XE_EUDEBUG_CONNECT DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EUDEBUG_CONNECT, struct drm_xe_eudebug_connect)
/**
* DOC: Xe IOCTL Extensions
@@ -2301,6 +2303,25 @@ struct drm_xe_vm_query_mem_range_attr {
};
+/*
+ * Debugger ABI (ioctl and events) Version History:
+ * 0 - No debugger available
+ * 1 - Initial version
+ */
+#define DRM_XE_EUDEBUG_VERSION 1
+
+struct drm_xe_eudebug_connect {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ __u64 fd; /* Target drm client fd */
+ __u32 flags; /* MBZ */
+
+ __u32 version; /* output: current ABI (ioctl / events) version */
+};
+
+#include "xe_drm_eudebug.h"
+
#if defined(__cplusplus)
}
#endif
diff --git a/include/uapi/drm/xe_drm_eudebug.h b/include/uapi/drm/xe_drm_eudebug.h
new file mode 100644
index 0000000000000..fd2a0c911d022
--- /dev/null
+++ b/include/uapi/drm/xe_drm_eudebug.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#ifndef _UAPI_XE_DRM_EUDEBUG_H_
+#define _UAPI_XE_DRM_EUDEBUG_H_
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+/**
+ * Do a eudebug event read for a debugger connection.
+ *
+ * This ioctl is available in debug version 1.
+ */
+#define DRM_XE_EUDEBUG_IOCTL_READ_EVENT _IO('j', 0x0)
+
+/**
+ * struct drm_xe_eudebug_event - Base type of event delivered by xe_eudebug.
+ * @len: Length of the event, including the base.
+ * @type: Event type
+ * @flags: Flags for the event
+ * @seqno: Sequence number
+ * @reserved: MBZ
+ *
+ * Base event for the xe_eudebug interface. To initiate a read,
+ * userspace sets type to DRM_XE_EUDEBUG_EVENT_READ and len to the
+ * maximum size of the buffer it has allocated. On successful return,
+ * len holds the length of the delivered event, or -EMSGSIZE is
+ * returned if the event does not fit. Seqno can be used to form a
+ * timeline, as event delivery order does not guarantee event
+ * creation order.
+ *
+ * flags indicate whether the resource was created, destroyed
+ * or its state changed.
+ *
+ * If DRM_XE_EUDEBUG_EVENT_NEED_ACK is set, xe_eudebug will hold
+ * the resource until it is acked by userspace using a separate
+ * ack ioctl with the seqno of said event.
+ *
+ */
+struct drm_xe_eudebug_event {
+ __u32 len;
+
+ __u16 type;
+#define DRM_XE_EUDEBUG_EVENT_NONE 0
+#define DRM_XE_EUDEBUG_EVENT_READ 1
+#define DRM_XE_EUDEBUG_EVENT_VM 2
+
+ __u16 flags;
+#define DRM_XE_EUDEBUG_EVENT_CREATE (1 << 0)
+#define DRM_XE_EUDEBUG_EVENT_DESTROY (1 << 1)
+#define DRM_XE_EUDEBUG_EVENT_STATE_CHANGE (1 << 2)
+#define DRM_XE_EUDEBUG_EVENT_NEED_ACK (1 << 3)
+ __u64 seqno;
+ __u64 reserved;
+};
+
+/**
+ * struct drm_xe_eudebug_event_vm - VM resource event
+ * @vm_handle: Handle of a vm that was created/destroyed
+ *
+ * Resource creation/destruction event for a VM.
+ */
+struct drm_xe_eudebug_event_vm {
+ struct drm_xe_eudebug_event base;
+
+ __u64 vm_handle;
+};
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* _UAPI_XE_DRM_EUDEBUG_H_ */
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [PATCH 02/20] drm/xe/eudebug: Introduce discovery for resources
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
2025-12-02 13:52 ` [PATCH 01/20] drm/xe/eudebug: Introduce eudebug interface Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 03/20] drm/xe/eudebug: Introduce exec_queue events Mika Kuoppala
` (23 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala, Dominik Grzegorzek
A debugger connection can occur after a client has created
and destroyed an arbitrary number of resources. To support
this, we need to relay all currently existing resources to
the debugger. The client is held on selected ioctls until
this discovery process, executed by a workqueue, is complete.
This patch is based on discovery work by Maciej Patelczyk
for the i915 driver.
v2: - use rw_semaphore to block DRM ioctls during discovery (Matthew)
- only lock according to ioctl at play (Dominik)
v4: - s/discovery_lock/ioctl_lock
- change lock to be per xe_file as is connections
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Co-developed-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Acked-by: Matthew Brost <matthew.brost@intel.com> #locking
---
drivers/gpu/drm/xe/xe_device.c | 13 +++-
drivers/gpu/drm/xe/xe_device.h | 42 ++++++++++++
drivers/gpu/drm/xe/xe_device_types.h | 6 ++
drivers/gpu/drm/xe/xe_eudebug.c | 96 ++++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_eudebug_types.h | 7 ++
5 files changed, 160 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 1c7f98dd42be..a60e5265ae59 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -109,6 +109,7 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
mutex_init(&xef->eudebug.lock);
INIT_LIST_HEAD(&xef->eudebug.target_link);
+ init_rwsem(&xef->eudebug.ioctl_lock);
#endif
file->driver_priv = xef;
@@ -232,8 +233,12 @@ static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm);
- if (ret >= 0)
+ if (ret >= 0) {
+ bool lock = xe_eudebug_discovery_lock(file, cmd);
ret = drm_ioctl(file, cmd, arg);
+ if (lock)
+ xe_eudebug_discovery_unlock(file, cmd);
+ }
return ret;
}
@@ -250,8 +255,12 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo
ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm);
- if (ret >= 0)
+ if (ret >= 0) {
+ bool lock = xe_eudebug_discovery_lock(file, cmd);
ret = drm_compat_ioctl(file, cmd, arg);
+ if (lock)
+ xe_eudebug_discovery_unlock(file, cmd);
+ }
return ret;
}
diff --git a/drivers/gpu/drm/xe/xe_device.h b/drivers/gpu/drm/xe/xe_device.h
index 6604b89330d5..b3b4cd72e658 100644
--- a/drivers/gpu/drm/xe/xe_device.h
+++ b/drivers/gpu/drm/xe/xe_device.h
@@ -7,6 +7,7 @@
#define _XE_DEVICE_H_
#include <drm/drm_util.h>
+#include <drm/drm_ioctl.h>
#include "xe_device_types.h"
#include "xe_gt_types.h"
@@ -214,4 +215,45 @@ int xe_is_injection_active(void);
#define LNL_FLUSH_WORK(wrk__) \
flush_work(wrk__)
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+static inline int xe_eudebug_needs_ioctl_lock(const unsigned int cmd)
+{
+ const unsigned int xe_cmd = DRM_IOCTL_NR(cmd) - DRM_COMMAND_BASE;
+
+ switch (xe_cmd) {
+ case DRM_XE_VM_CREATE:
+ case DRM_XE_VM_DESTROY:
+ case DRM_XE_VM_BIND:
+ case DRM_XE_EXEC_QUEUE_CREATE:
+ case DRM_XE_EXEC_QUEUE_DESTROY:
+ return 1;
+ }
+
+ return 0;
+}
+
+static inline bool xe_eudebug_discovery_lock(struct file *file, unsigned int cmd)
+{
+ struct drm_file *file_priv = file->private_data;
+ struct xe_file *xef = file_priv->driver_priv;
+
+ if (!xe_eudebug_needs_ioctl_lock(cmd))
+ return false;
+
+ down_read(&xef->eudebug.ioctl_lock);
+ return true;
+}
+
+static inline void xe_eudebug_discovery_unlock(struct file *file, unsigned int cmd)
+{
+ struct drm_file *file_priv = file->private_data;
+ struct xe_file *xef = file_priv->driver_priv;
+
+ up_read(&xef->eudebug.ioctl_lock);
+}
+#else
+static inline bool xe_eudebug_discovery_lock(struct file *file, unsigned int cmd) { return false; }
+static inline void xe_eudebug_discovery_unlock(struct file *file, unsigned int cmd) { }
+#endif /* CONFIG_DRM_XE_EUDEBUG */
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 2b4ae7aedd12..6fe6e200fe9f 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -676,6 +676,9 @@ struct xe_device {
/** @eudebug.lock: protects state and targets */
struct mutex lock;
+
+ /** @eudebug.wq: used for client discovery */
+ struct workqueue_struct *wq;
} eudebug;
#endif
};
@@ -750,6 +753,9 @@ struct xe_file {
/** @target_link: link into xe_device.eudebug.targets */
struct list_head target_link;
+
+ /** @eudebug.ioctl_lock: syncing ioctl access */
+ struct rw_semaphore ioctl_lock;
} eudebug;
#endif
};
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index df7ad93d032c..8b43e0384b57 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -168,6 +168,8 @@ static void xe_eudebug_free(struct kref *ref)
struct xe_eudebug *d = container_of(ref, typeof(*d), ref);
struct drm_xe_eudebug_event *event;
+ WARN_ON(work_pending(&d->discovery_work));
+
xe_assert(d->xe, xe_eudebug_detached(d));
while (kfifo_get(&d->events.fifo, &event))
@@ -228,6 +230,8 @@ static bool xe_eudebug_detach(struct xe_device *xe,
}
spin_unlock(&d->target.lock);
+ flush_work(&d->discovery_work);
+
if (!target)
return false;
@@ -259,7 +263,7 @@ static int _xe_eudebug_disconnect(struct xe_eudebug *d,
})
static struct xe_eudebug *
-xe_eudebug_get(struct xe_file *xef)
+xe_eudebug_get_nolock(struct xe_file *xef)
{
struct xe_eudebug *d;
@@ -272,7 +276,8 @@ xe_eudebug_get(struct xe_file *xef)
if (!d)
return NULL;
- if (xe_eudebug_detached(d)) {
+ if (xe_eudebug_detached(d) ||
+ !completion_done(&d->discovery)) {
xe_eudebug_put(d);
return NULL;
}
@@ -280,6 +285,14 @@ xe_eudebug_get(struct xe_file *xef)
return d;
}
+static struct xe_eudebug *
+xe_eudebug_get(struct xe_file *xef)
+{
+ lockdep_assert_held(&xef->eudebug.ioctl_lock);
+
+ return xe_eudebug_get_nolock(xef);
+}
+
static int xe_eudebug_queue_event(struct xe_eudebug *d,
struct drm_xe_eudebug_event *event)
{
@@ -503,6 +516,8 @@ static int xe_eudebug_remove_handle(struct xe_eudebug *d, int type, void *p,
{
int ret;
+ XE_WARN_ON(!completion_done(&d->discovery));
+
ret = _xe_eudebug_remove_handle(d, type, p, seqno);
eu_dbg(d, "handle type %d handle %p removed: %d\n", type, p, ret);
@@ -634,6 +649,66 @@ void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm)
xe_eudebug_event_put(d, vm_destroy_event(d, xef, vm));
}
+static struct xe_file *xe_eudebug_target_get(struct xe_eudebug *d)
+{
+ struct xe_file *xef = NULL;
+
+ spin_lock(&d->target.lock);
+ if (d->target.xef)
+ xef = xe_file_get(d->target.xef);
+ spin_unlock(&d->target.lock);
+
+ return xef;
+}
+
+static void discover_client(struct xe_eudebug *d)
+{
+ struct xe_file *xef;
+ struct xe_vm *vm;
+ unsigned long i;
+ unsigned int vm_count = 0;
+ int err = 0;
+
+ xef = xe_eudebug_target_get(d);
+ if (!xef)
+ return;
+
+ down_write(&xef->eudebug.ioctl_lock);
+
+ eu_dbg(d, "Discovery start for %lld", d->session);
+
+ xa_for_each(&xef->vm.xa, i, vm) {
+ err = vm_create_event(d, xef, vm);
+ if (err)
+ break;
+ vm_count++;
+ }
+
+ complete_all(&d->discovery);
+
+ eu_dbg(d, "Discovery end for %lld: %d", d->session, err);
+
+ up_write(&xef->eudebug.ioctl_lock);
+
+ if (vm_count)
+ eu_dbg(d, "Discovery found %u vms", vm_count);
+
+ xe_file_put(xef);
+}
+
+static void discovery_work_fn(struct work_struct *work)
+{
+ struct xe_eudebug *d = container_of(work, typeof(*d),
+ discovery_work);
+
+ if (xe_eudebug_detached(d))
+ complete_all(&d->discovery);
+ else
+ discover_client(d);
+
+ xe_eudebug_put(d);
+}
+
static int add_debugger(struct xe_device *xe, struct xe_eudebug *d,
struct drm_file *target)
{
@@ -831,6 +906,10 @@ static long xe_eudebug_ioctl(struct file *file,
struct xe_eudebug * const d = file->private_data;
long ret;
+ if (cmd != DRM_XE_EUDEBUG_IOCTL_READ_EVENT &&
+ !completion_done(&d->discovery))
+ return -EBUSY;
+
switch (cmd) {
case DRM_XE_EUDEBUG_IOCTL_READ_EVENT:
ret = xe_eudebug_read_event(d, arg,
@@ -892,9 +971,11 @@ xe_eudebug_connect(struct xe_device *xe,
spin_lock_init(&d->target.lock);
init_waitqueue_head(&d->events.write_done);
init_waitqueue_head(&d->events.read_done);
+ init_completion(&d->discovery);
spin_lock_init(&d->events.lock);
INIT_KFIFO(d->events.fifo);
+ INIT_WORK(&d->discovery_work, discovery_work_fn);
d->res = xe_eudebug_resources_alloc();
if (XE_IOCTL_DBG(xe, IS_ERR(d->res))) {
@@ -912,6 +993,9 @@ xe_eudebug_connect(struct xe_device *xe,
goto err_detach;
}
+ kref_get(&d->ref);
+ queue_work(xe->eudebug.wq, &d->discovery_work);
+
eu_dbg(d, "connected session %lld", d->session);
return fd;
@@ -1003,6 +1087,7 @@ static void xe_eudebug_sysfs_fini(void *arg)
void xe_eudebug_init(struct xe_device *xe)
{
struct drm_device *dev = &xe->drm;
+ struct workqueue_struct *wq;
int err;
INIT_LIST_HEAD(&xe->eudebug.targets);
@@ -1013,6 +1098,13 @@ void xe_eudebug_init(struct xe_device *xe)
if (err)
goto out_err;
+ wq = drmm_alloc_ordered_workqueue(dev, "xe-eudebug", 0);
+ if (IS_ERR(wq)) {
+ err = PTR_ERR(wq);
+ goto out_err;
+ }
+ xe->eudebug.wq = wq;
+
err = sysfs_create_file(&dev->dev->kobj,
&dev_attr_enable_eudebug.attr);
if (err)
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
index 1e673c934169..55b71ddd92b6 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_types.h
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -17,6 +17,7 @@
struct xe_device;
struct task_struct;
+struct workqueue_struct;
/**
* enum xe_eudebug_state - eudebug capability state
@@ -103,6 +104,12 @@ struct xe_eudebug {
/** @session: session number for this connection (for logs) */
u64 session;
+ /** @discovery: completion to wait for discovery */
+ struct completion discovery;
+
+ /** @discovery_work: worker to discover resources for target_task */
+ struct work_struct discovery_work;
+
/** @events: kfifo queue of to-be-delivered events */
struct {
/** @lock: guards access to fifo */
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 03/20] drm/xe/eudebug: Introduce exec_queue events
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
2025-12-02 13:52 ` [PATCH 01/20] drm/xe/eudebug: Introduce eudebug interface Mika Kuoppala
2025-12-02 13:52 ` [PATCH 02/20] drm/xe/eudebug: Introduce discovery for resources Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 04/20] drm/xe: Add EUDEBUG_ENABLE exec queue property Mika Kuoppala
` (22 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Dominik Grzegorzek, Mika Kuoppala
From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Add events to inform the debugger about the creation and destruction of
exec_queues. Use user engine class types instead of the internal
xe_engine_class enum in exec_queue events. During discovery, only advertise
exec_queues with render or compute class, excluding others.
v2: - Only track long running queues
- Checkpatch (Tilak)
v3: __counted_by added
v4: - use helpers for filtering engines (Mika)
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/xe_eudebug.c | 209 +++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_eudebug.h | 7 +
drivers/gpu/drm/xe/xe_eudebug_types.h | 7 +-
drivers/gpu/drm/xe/xe_exec_queue.c | 5 +
drivers/gpu/drm/xe/xe_hw_engine.h | 14 ++
include/uapi/drm/xe_drm_eudebug.h | 11 ++
6 files changed, 248 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index 8b43e0384b57..4fee035765df 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -15,6 +15,8 @@
#include "xe_device.h"
#include "xe_eudebug.h"
#include "xe_eudebug_types.h"
+#include "xe_exec_queue.h"
+#include "xe_hw_engine.h"
#include "xe_macros.h"
#include "xe_vm.h"
@@ -391,6 +393,28 @@ __find_handle(struct xe_eudebug_resource *r,
return h;
}
+static int find_handle(struct xe_eudebug_resources *res,
+ const int type,
+ const void *p)
+{
+ const u64 key = (uintptr_t)p;
+ struct xe_eudebug_resource *r;
+ struct xe_eudebug_handle *h;
+ int id;
+
+ if (XE_WARN_ON(!key))
+ return -EINVAL;
+
+ r = resource_from_type(res, type);
+
+ mutex_lock(&res->lock);
+ h = __find_handle(r, key);
+ id = h ? h->id : -ENOENT;
+ mutex_unlock(&res->lock);
+
+ return id;
+}
+
static int _xe_eudebug_add_handle(struct xe_eudebug *d,
int type,
void *p,
@@ -649,6 +673,174 @@ void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm)
xe_eudebug_event_put(d, vm_destroy_event(d, xef, vm));
}
+static const u16 xe_to_user_engine_class[] = {
+ [XE_ENGINE_CLASS_RENDER] = DRM_XE_ENGINE_CLASS_RENDER,
+ [XE_ENGINE_CLASS_COPY] = DRM_XE_ENGINE_CLASS_COPY,
+ [XE_ENGINE_CLASS_VIDEO_DECODE] = DRM_XE_ENGINE_CLASS_VIDEO_DECODE,
+ [XE_ENGINE_CLASS_VIDEO_ENHANCE] = DRM_XE_ENGINE_CLASS_VIDEO_ENHANCE,
+ [XE_ENGINE_CLASS_COMPUTE] = DRM_XE_ENGINE_CLASS_COMPUTE,
+};
+
+static int send_exec_queue_event(struct xe_eudebug *d, u32 flags,
+ u64 vm_handle, u64 exec_queue_handle,
+ enum xe_engine_class class,
+ u32 width, u64 *lrc_handles, u64 seqno)
+{
+ struct drm_xe_eudebug_event *event;
+ struct drm_xe_eudebug_event_exec_queue *e;
+ const u32 sz = struct_size(e, lrc_handle, width);
+ const u32 xe_engine_class = xe_to_user_engine_class[class];
+
+ if (!xe_engine_supports_eudebug(class))
+ return -EINVAL;
+
+ event = xe_eudebug_create_event(d, DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE,
+ seqno, flags, sz);
+ if (!event)
+ return -ENOMEM;
+
+ e = cast_event(e, event);
+
+ e->vm_handle = vm_handle;
+ e->exec_queue_handle = exec_queue_handle;
+ e->engine_class = xe_engine_class;
+ e->width = width;
+
+ memcpy(e->lrc_handle, lrc_handles, width * sizeof(*lrc_handles));
+
+ return xe_eudebug_queue_event(d, event);
+}
+
+static int exec_queue_create_event(struct xe_eudebug *d,
+ struct xe_file *xef, struct xe_exec_queue *q)
+{
+ int h_vm, h_queue;
+ u64 h_lrc[XE_HW_ENGINE_MAX_INSTANCE], seqno;
+ int i;
+ int ret;
+
+ if (!xe_exec_queue_is_lr(q))
+ return 0;
+
+ h_vm = find_handle(d->res, XE_EUDEBUG_RES_TYPE_VM, q->vm);
+ if (h_vm < 0)
+ return h_vm;
+
+ if (XE_WARN_ON(q->width >= XE_HW_ENGINE_MAX_INSTANCE))
+ return -EINVAL;
+
+ for (i = 0; i < q->width; i++) {
+ int h;
+
+ ret = _xe_eudebug_add_handle(d,
+ XE_EUDEBUG_RES_TYPE_LRC,
+ q->lrc[i],
+ NULL,
+ &h);
+
+ if (ret < 0 && ret != -EEXIST)
+ return ret;
+
+ XE_WARN_ON(!h);
+
+ h_lrc[i] = h;
+ }
+
+ h_queue = xe_eudebug_add_handle(d, XE_EUDEBUG_RES_TYPE_EXEC_QUEUE, q, &seqno);
+ if (h_queue <= 0)
+ return h_queue;
+
+ /* No need to clean up added handles on error, since on
+ * failure we disconnect.
+ */
+
+ ret = send_exec_queue_event(d, DRM_XE_EUDEBUG_EVENT_CREATE,
+ h_vm, h_queue, q->class,
+ q->width, h_lrc, seqno);
+
+ if (ret)
+ eu_dbg(d, "send_exec_queue_event create error %d\n", ret);
+
+ return ret;
+}
+
+static int exec_queue_destroy_event(struct xe_eudebug *d,
+ struct xe_file *xef,
+ struct xe_exec_queue *q)
+{
+ int h_vm, h_queue;
+ u64 h_lrc[XE_HW_ENGINE_MAX_INSTANCE], seqno;
+ int i;
+ int ret;
+
+ if (!xe_exec_queue_is_lr(q))
+ return 0;
+
+ h_vm = find_handle(d->res, XE_EUDEBUG_RES_TYPE_VM, q->vm);
+ if (h_vm < 0)
+ return h_vm;
+
+ if (XE_WARN_ON(q->width >= XE_HW_ENGINE_MAX_INSTANCE))
+ return -EINVAL;
+
+ h_queue = xe_eudebug_remove_handle(d,
+ XE_EUDEBUG_RES_TYPE_EXEC_QUEUE,
+ q,
+ &seqno);
+ if (h_queue <= 0)
+ return h_queue;
+
+ for (i = 0; i < q->width; i++) {
+ ret = _xe_eudebug_remove_handle(d,
+ XE_EUDEBUG_RES_TYPE_LRC,
+ q->lrc[i],
+ NULL);
+ if (ret < 0 && ret != -ENOENT)
+ return ret;
+
+ XE_WARN_ON(!ret);
+
+ h_lrc[i] = ret;
+ }
+
+ ret = send_exec_queue_event(d, DRM_XE_EUDEBUG_EVENT_DESTROY,
+ h_vm, h_queue, q->class,
+ q->width, h_lrc, seqno);
+
+ if (ret)
+ eu_dbg(d, "send_exec_queue_event destroy error %d\n", ret);
+
+ return ret;
+}
+
+void xe_eudebug_exec_queue_create(struct xe_file *xef, struct xe_exec_queue *q)
+{
+ struct xe_eudebug *d;
+
+ if (!xe_engine_supports_eudebug(q->class))
+ return;
+
+ d = xe_eudebug_get(xef);
+ if (!d)
+ return;
+
+ xe_eudebug_event_put(d, exec_queue_create_event(d, xef, q));
+}
+
+void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q)
+{
+ struct xe_eudebug *d;
+
+ if (!xe_engine_supports_eudebug(q->class))
+ return;
+
+ d = xe_eudebug_get(xef);
+ if (!d)
+ return;
+
+ xe_eudebug_event_put(d, exec_queue_destroy_event(d, xef, q));
+}
+
static struct xe_file *xe_eudebug_target_get(struct xe_eudebug *d)
{
struct xe_file *xef = NULL;
@@ -664,9 +856,10 @@ static struct xe_file *xe_eudebug_target_get(struct xe_eudebug *d)
static void discover_client(struct xe_eudebug *d)
{
struct xe_file *xef;
+ struct xe_exec_queue *q;
struct xe_vm *vm;
unsigned long i;
- unsigned int vm_count = 0;
+ unsigned int vm_count = 0, eq_count = 0;
int err = 0;
xef = xe_eudebug_target_get(d);
@@ -684,14 +877,26 @@ static void discover_client(struct xe_eudebug *d)
vm_count++;
}
+ xa_for_each(&xef->exec_queue.xa, i, q) {
+ if (!xe_engine_supports_eudebug(q->class))
+ continue;
+
+ err = exec_queue_create_event(d, xef, q);
+ if (err)
+ break;
+
+ eq_count++;
+ }
+
complete_all(&d->discovery);
eu_dbg(d, "Discovery end for %lld: %d", d->session, err);
up_write(&xef->eudebug.ioctl_lock);
- if (vm_count)
- eu_dbg(d, "Discovery found %u vms", vm_count);
+ if (vm_count || eq_count)
+ eu_dbg(d, "Discovery found %u vms, %u exec_queues",
+ vm_count, eq_count);
xe_file_put(xef);
}
diff --git a/drivers/gpu/drm/xe/xe_eudebug.h b/drivers/gpu/drm/xe/xe_eudebug.h
index 22fbb2ff24da..10480a226fac 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.h
+++ b/drivers/gpu/drm/xe/xe_eudebug.h
@@ -13,6 +13,7 @@ struct drm_file;
struct xe_device;
struct xe_file;
struct xe_vm;
+struct xe_exec_queue;
#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
@@ -46,6 +47,9 @@ void xe_eudebug_vm_create(struct xe_file *xef, struct xe_vm *vm);
void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm);
int xe_eudebug_enable(struct xe_device *xe, bool enable);
+void xe_eudebug_exec_queue_create(struct xe_file *xef, struct xe_exec_queue *q);
+void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q);
+
#else
static inline int xe_eudebug_connect_ioctl(struct drm_device *dev,
@@ -60,6 +64,9 @@ static inline void xe_eudebug_file_close(struct xe_file *xef) { }
static inline void xe_eudebug_vm_create(struct xe_file *xef, struct xe_vm *vm) { }
static inline void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm) { }
+static inline void xe_eudebug_exec_queue_create(struct xe_file *xef, struct xe_exec_queue *q) { }
+static inline void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q) { }
+
#endif /* CONFIG_DRM_XE_EUDEBUG */
#endif /* _XE_EUDEBUG_H_ */
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
index 55b71ddd92b6..57bff7482163 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_types.h
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -33,7 +33,7 @@ enum xe_eudebug_state {
};
#define CONFIG_DRM_XE_DEBUGGER_EVENT_QUEUE_SIZE 64
-#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_VM
+#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE
/**
* struct xe_eudebug_handle - eudebug resource handle
@@ -61,7 +61,9 @@ struct xe_eudebug_resource {
};
#define XE_EUDEBUG_RES_TYPE_VM 0
-#define XE_EUDEBUG_RES_TYPE_COUNT (XE_EUDEBUG_RES_TYPE_VM + 1)
+#define XE_EUDEBUG_RES_TYPE_EXEC_QUEUE 1
+#define XE_EUDEBUG_RES_TYPE_LRC 2
+#define XE_EUDEBUG_RES_TYPE_COUNT (XE_EUDEBUG_RES_TYPE_LRC + 1)
/**
* struct xe_eudebug_resources - eudebug resources for all types
@@ -133,3 +135,4 @@ struct xe_eudebug {
};
#endif /* _XE_EUDEBUG_TYPES_H_ */
+
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index 226d07a3d852..a3bbc776f99d 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -29,6 +29,7 @@
#include "xe_trace.h"
#include "xe_vm.h"
#include "xe_pxp.h"
+#include "xe_eudebug.h"
/**
* DOC: Execution Queue
@@ -842,6 +843,8 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
args->exec_queue_id = id;
+ xe_eudebug_exec_queue_create(xef, q);
+
return 0;
kill_exec_queue:
@@ -1027,6 +1030,8 @@ int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
if (q->vm && q->hwe->hw_engine_group)
xe_hw_engine_group_del_exec_queue(q->hwe->hw_engine_group, q);
+ xe_eudebug_exec_queue_destroy(xef, q);
+
xe_exec_queue_kill(q);
trace_xe_exec_queue_close(q);
diff --git a/drivers/gpu/drm/xe/xe_hw_engine.h b/drivers/gpu/drm/xe/xe_hw_engine.h
index 6b5f9fa2a594..d8781bf79547 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine.h
+++ b/drivers/gpu/drm/xe/xe_hw_engine.h
@@ -79,4 +79,18 @@ enum xe_force_wake_domains xe_hw_engine_to_fw_domain(struct xe_hw_engine *hwe);
void xe_hw_engine_mmio_write32(struct xe_hw_engine *hwe, struct xe_reg reg, u32 val);
u32 xe_hw_engine_mmio_read32(struct xe_hw_engine *hwe, struct xe_reg reg);
+static inline bool xe_engine_supports_eudebug(const enum xe_engine_class ec)
+{
+ if (ec == XE_ENGINE_CLASS_COMPUTE ||
+ ec == XE_ENGINE_CLASS_RENDER)
+ return true;
+
+ return false;
+}
+
+static inline bool xe_hw_engine_has_eudebug(const struct xe_hw_engine *hwe)
+{
+ return xe_engine_supports_eudebug(hwe->class);
+}
+
#endif
diff --git a/include/uapi/drm/xe_drm_eudebug.h b/include/uapi/drm/xe_drm_eudebug.h
index fd2a0c911d02..360d7a7ecb67 100644
--- a/include/uapi/drm/xe_drm_eudebug.h
+++ b/include/uapi/drm/xe_drm_eudebug.h
@@ -48,6 +48,7 @@ struct drm_xe_eudebug_event {
#define DRM_XE_EUDEBUG_EVENT_NONE 0
#define DRM_XE_EUDEBUG_EVENT_READ 1
#define DRM_XE_EUDEBUG_EVENT_VM 2
+#define DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE 3
__u16 flags;
#define DRM_XE_EUDEBUG_EVENT_CREATE (1 << 0)
@@ -70,6 +71,16 @@ struct drm_xe_eudebug_event_vm {
__u64 vm_handle;
};
+struct drm_xe_eudebug_event_exec_queue {
+ struct drm_xe_eudebug_event base;
+
+ __u64 vm_handle;
+ __u64 exec_queue_handle;
+ __u32 engine_class;
+ __u32 width;
+ __u64 lrc_handle[];
+};
+
#if defined(__cplusplus)
}
#endif
--
2.43.0
* [PATCH 04/20] drm/xe: Add EUDEBUG_ENABLE exec queue property
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (2 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 03/20] drm/xe/eudebug: Introduce exec_queue events Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 05/20] drm/xe/eudebug: Mark guc contexts as debuggable Mika Kuoppala
` (21 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Dominik Grzegorzek, Mika Kuoppala
From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
This patch introduces an immutable eudebug property for exec_queues,
using a flags value to enable eudebug-specific features. For now, the
engine LRC uses this flag to enable the runalone hardware feature.
Runalone ensures that only one hardware engine in a group
[rcs0, ccs0-3] is active on a tile.
v2: - check CONFIG_DRM_XE_EUDEBUG and LR mode (Matthew)
- disable preempt (Dominik)
- lrc_create remove from engine init
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/xe_eudebug.c | 4 +--
drivers/gpu/drm/xe/xe_exec_queue.c | 46 +++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_exec_queue.h | 2 ++
drivers/gpu/drm/xe/xe_exec_queue_types.h | 7 ++++
drivers/gpu/drm/xe/xe_lrc.c | 10 ++++++
include/uapi/drm/xe_drm.h | 2 ++
6 files changed, 68 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index 4fee035765df..b8a9462eed17 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -719,7 +719,7 @@ static int exec_queue_create_event(struct xe_eudebug *d,
int i;
int ret;
- if (!xe_exec_queue_is_lr(q))
+ if (!xe_exec_queue_is_debuggable(q))
return 0;
h_vm = find_handle(d->res, XE_EUDEBUG_RES_TYPE_VM, q->vm);
@@ -773,7 +773,7 @@ static int exec_queue_destroy_event(struct xe_eudebug *d,
int i;
int ret;
- if (!xe_exec_queue_is_lr(q))
+ if (!xe_exec_queue_is_debuggable(q))
return 0;
h_vm = find_handle(d->res, XE_EUDEBUG_RES_TYPE_VM, q->vm);
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index a3bbc776f99d..ddaef00b56ff 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -209,6 +209,9 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q, u32 exec_queue_flags)
if (!(exec_queue_flags & EXEC_QUEUE_FLAG_KERNEL))
flags |= XE_LRC_CREATE_USER_CTX;
+ if (q->eudebug_flags & EXEC_QUEUE_EUDEBUG_FLAG_ENABLE)
+ flags |= XE_LRC_CREATE_RUNALONE;
+
err = q->ops->init(q);
if (err)
return err;
@@ -586,6 +589,45 @@ static int exec_queue_set_hang_replay_state(struct xe_device *xe,
return 0;
}
+static int exec_queue_set_eudebug(struct xe_device *xe, struct xe_exec_queue *q,
+ u64 value)
+{
+ const u64 known_flags = DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE;
+
+ if (XE_IOCTL_DBG(xe, (q->class != XE_ENGINE_CLASS_RENDER &&
+ q->class != XE_ENGINE_CLASS_COMPUTE)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, (value & ~known_flags)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, !IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)))
+ return -EOPNOTSUPP;
+
+ if (XE_IOCTL_DBG(xe, !xe_exec_queue_is_lr(q)))
+ return -EINVAL;
+ /*
+ * We want to explicitly set the global feature if
+ * property is set.
+ */
+ if (XE_IOCTL_DBG(xe,
+ !(value & DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, !xe_eudebug_is_enabled(xe)))
+ return -EPERM;
+
+ q->eudebug_flags = EXEC_QUEUE_EUDEBUG_FLAG_ENABLE;
+ q->sched_props.preempt_timeout_us = 0;
+
+ return 0;
+}
+
+int xe_exec_queue_is_debuggable(struct xe_exec_queue *q)
+{
+ return q->eudebug_flags & EXEC_QUEUE_EUDEBUG_FLAG_ENABLE;
+}
+
typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
struct xe_exec_queue *q,
u64 value);
@@ -595,6 +637,7 @@ static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
[DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE] = exec_queue_set_timeslice,
[DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE] = exec_queue_set_pxp_type,
[DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE] = exec_queue_set_hang_replay_state,
+ [DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG] = exec_queue_set_eudebug,
};
static int exec_queue_user_ext_set_property(struct xe_device *xe,
@@ -616,7 +659,8 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
XE_IOCTL_DBG(xe, ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY &&
ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE &&
ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE &&
- ext.property != DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE))
+ ext.property != DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE &&
+ ext.property != DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG))
return -EINVAL;
idx = array_index_nospec(ext.property, ARRAY_SIZE(exec_queue_set_property_funcs));
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
index fda4d4f9bda8..34415042249c 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue.h
@@ -111,4 +111,6 @@ int xe_exec_queue_contexts_hwsp_rebase(struct xe_exec_queue *q, void *scratch);
struct xe_lrc *xe_exec_queue_lrc(struct xe_exec_queue *q);
+int xe_exec_queue_is_debuggable(struct xe_exec_queue *q);
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
index 3ba10632dcd6..6e784afe3373 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
+++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
@@ -97,6 +97,13 @@ struct xe_exec_queue {
*/
unsigned long flags;
+ /**
+ * @eudebug_flags: immutable eudebug flags for this exec queue.
+ * Set up with DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG.
+ */
+#define EXEC_QUEUE_EUDEBUG_FLAG_ENABLE BIT(0)
+ unsigned long eudebug_flags;
+
union {
/** @multi_gt_list: list head for VM bind engines if multi-GT */
struct list_head multi_gt_list;
diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
index a05060f75e7e..35bfbe5e8b91 100644
--- a/drivers/gpu/drm/xe/xe_lrc.c
+++ b/drivers/gpu/drm/xe/xe_lrc.c
@@ -1557,6 +1557,16 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
if (err)
goto err_lrc_finish;
+ if (init_flags & XE_LRC_CREATE_RUNALONE) {
+ u32 ctx_control = xe_lrc_read_ctx_reg(lrc, CTX_CONTEXT_CONTROL);
+
+ drm_dbg(&xe->drm, "read CTX_CONTEXT_CONTROL: 0x%x\n", ctx_control);
+ ctx_control |= _MASKED_BIT_ENABLE(CTX_CTRL_RUN_ALONE);
+ drm_dbg(&xe->drm, "written CTX_CONTEXT_CONTROL: 0x%x\n", ctx_control);
+
+ xe_lrc_write_ctx_reg(lrc, CTX_CONTEXT_CONTROL, ctx_control);
+ }
+
return 0;
err_lrc_finish:
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 0ce485ce2948..7349b832837d 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1299,6 +1299,8 @@ struct drm_xe_exec_queue_create {
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PXP_TYPE 2
#define DRM_XE_EXEC_QUEUE_SET_HANG_REPLAY_STATE 3
+#define DRM_XE_EXEC_QUEUE_SET_PROPERTY_EUDEBUG 4
+#define DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE (1 << 0)
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
--
2.43.0
* [PATCH 05/20] drm/xe/eudebug: Mark guc contexts as debuggable
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (3 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 04/20] drm/xe: Add EUDEBUG_ENABLE exec queue property Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-06 2:03 ` Daniele Ceraolo Spurio
2025-12-02 13:52 ` [PATCH 06/20] drm/xe: Introduce ADD_DEBUG_DATA and REMOVE_DEBUG_DATA vm bind ops Mika Kuoppala
` (20 subsequent siblings)
25 siblings, 1 reply; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala, Lucas De Marchi, Daniele Ceraolo Spurio,
Jan Sokolowski, Dominik Grzegorzek
We need to inform the GuC which contexts are debuggable,
as their handling differs from ordinary contexts.
v2: void return, use xe_gt_dbg, no need for lrc (Matt)
v3: add the workaround enabling (Daniele)
v4: version needed to 70.49.4
v5: bail out early before registering eq (Daniele)
v6: export the guc action for future (Mika)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Jan Sokolowski <jan.sokolowski@intel.com>
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/abi/guc_actions_abi.h | 5 ++++
drivers/gpu/drm/xe/abi/guc_klvs_abi.h | 1 +
drivers/gpu/drm/xe/xe_exec_queue.c | 5 ++++
drivers/gpu/drm/xe/xe_guc.c | 17 ++++++++++++
drivers/gpu/drm/xe/xe_guc.h | 3 +++
drivers/gpu/drm/xe/xe_guc_ads.c | 17 ++++++++++++
drivers/gpu/drm/xe/xe_guc_submit.c | 34 ++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_guc_submit.h | 1 +
drivers/gpu/drm/xe/xe_wa_oob.rules | 2 ++
9 files changed, 85 insertions(+)
diff --git a/drivers/gpu/drm/xe/abi/guc_actions_abi.h b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
index 47756e4674a1..32a5f680a6d2 100644
--- a/drivers/gpu/drm/xe/abi/guc_actions_abi.h
+++ b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
@@ -155,6 +155,7 @@ enum xe_guc_action {
XE_GUC_ACTION_NOTIFY_FLUSH_LOG_BUFFER_TO_FILE = 0x8003,
XE_GUC_ACTION_NOTIFY_CRASH_DUMP_POSTED = 0x8004,
XE_GUC_ACTION_NOTIFY_EXCEPTION = 0x8005,
+ XE_GUC_ACTION_EU_KERNEL_DEBUG = 0x8006,
XE_GUC_ACTION_TEST_G2G_SEND = 0xF001,
XE_GUC_ACTION_TEST_G2G_RECV = 0xF002,
XE_GUC_ACTION_LIMIT
@@ -278,4 +279,8 @@ enum xe_guc_g2g_type {
/* invalid type for XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR */
#define XE_GUC_CAT_ERR_TYPE_INVALID 0xdeadbeef
+enum xe_guc_eu_kernel_debug_request_type {
+ XE_GUC_EU_KERNEL_DEBUG_ENABLE = 0x3,
+};
+
#endif
diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
index 265a135e7061..fba190d4f84b 100644
--- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
+++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
@@ -423,6 +423,7 @@ enum xe_guc_klv_ids {
GUC_WA_KLV_WAKE_POWER_DOMAINS_FOR_OUTBOUND_MMIO = 0x900a,
GUC_WA_KLV_RESET_BB_STACK_PTR_ON_VF_SWITCH = 0x900b,
GUC_WA_KLV_RESTORE_UNSAVED_MEDIA_CONTROL_REG = 0x900c,
+ GUC_WA_KLV_RESET_DEP_ENGINES_ON_DEBUG_CTX_SWITCH = 0x900d,
};
#endif
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index ddaef00b56ff..e5590c6e3148 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -17,6 +17,7 @@
#include "xe_device.h"
#include "xe_gt.h"
#include "xe_gt_sriov_vf.h"
+#include "xe_guc.h"
#include "xe_hw_engine_class_sysfs.h"
#include "xe_hw_engine_group.h"
#include "xe_hw_fence.h"
@@ -593,6 +594,7 @@ static int exec_queue_set_eudebug(struct xe_device *xe, struct xe_exec_queue *q,
u64 value)
{
const u64 known_flags = DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE;
+ struct xe_guc *guc = &q->gt->uc.guc;
if (XE_IOCTL_DBG(xe, (q->class != XE_ENGINE_CLASS_RENDER &&
q->class != XE_ENGINE_CLASS_COMPUTE)))
@@ -604,6 +606,9 @@ static int exec_queue_set_eudebug(struct xe_device *xe, struct xe_exec_queue *q,
if (XE_IOCTL_DBG(xe, !IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)))
return -EOPNOTSUPP;
+ if (XE_IOCTL_DBG(xe, !xe_guc_has_debug_contexts(guc)))
+ return -EOPNOTSUPP;
+
if (XE_IOCTL_DBG(xe, !xe_exec_queue_is_lr(q)))
return -EINVAL;
/*
diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
index 88376bc2a483..ec0d6b5e0693 100644
--- a/drivers/gpu/drm/xe/xe_guc.c
+++ b/drivers/gpu/drm/xe/xe_guc.c
@@ -1744,6 +1744,23 @@ bool xe_guc_using_main_gamctrl_queues(struct xe_guc *guc)
return GT_VER(gt) >= 35;
}
+bool xe_guc_has_debug_contexts(struct xe_guc *guc)
+{
+ const struct xe_uc_fw_version required = XE_UC_FW_VERSION_DEBUG_CONTEXTS;
+ struct xe_uc_fw_version *version = &guc->fw.versions.found[XE_UC_FW_VER_RELEASE];
+ struct xe_gt *gt = guc_to_gt(guc);
+
+ if (MAKE_GUC_VER_STRUCT(*version) < MAKE_GUC_VER_STRUCT(required)) {
+ xe_gt_info(gt,
+ "debug context unsupported in GuC interface v%u.%u.%u, need v%u.%u.%u or higher\n",
+ version->major, version->minor, version->patch, required.major,
+ required.minor, required.patch);
+ return false;
+ }
+
+ return true;
+}
+
#if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
#include "tests/xe_guc_g2g_test.c"
#endif
diff --git a/drivers/gpu/drm/xe/xe_guc.h b/drivers/gpu/drm/xe/xe_guc.h
index fdb08658d05a..10e387c72861 100644
--- a/drivers/gpu/drm/xe/xe_guc.h
+++ b/drivers/gpu/drm/xe/xe_guc.h
@@ -23,6 +23,8 @@
#define GUC_FIRMWARE_VER(guc) \
MAKE_GUC_VER_STRUCT((guc)->fw.versions.found[XE_UC_FW_VER_RELEASE])
+#define XE_UC_FW_VERSION_DEBUG_CONTEXTS { .major = 70, .minor = 49, .patch = 4 }
+
struct drm_printer;
void xe_guc_comm_init_early(struct xe_guc *guc);
@@ -55,6 +57,7 @@ void xe_guc_stop(struct xe_guc *guc);
int xe_guc_start(struct xe_guc *guc);
void xe_guc_declare_wedged(struct xe_guc *guc);
bool xe_guc_using_main_gamctrl_queues(struct xe_guc *guc);
+bool xe_guc_has_debug_contexts(struct xe_guc *guc);
#if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
int xe_guc_g2g_test_notification(struct xe_guc *guc, u32 *payload, u32 len);
diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
index bcb85a1bf26d..f4d1470229f1 100644
--- a/drivers/gpu/drm/xe/xe_guc_ads.c
+++ b/drivers/gpu/drm/xe/xe_guc_ads.c
@@ -363,6 +363,23 @@ static void guc_waklv_init(struct xe_guc_ads *ads)
guc_waklv_enable(ads, NULL, 0, &offset, &remain,
GUC_WORKAROUND_KLV_DISABLE_PSMI_INTERRUPTS_AT_C6_ENTRY_RESTORE_AT_EXIT);
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ if (XE_GT_WA(gt, 14022766366)) {
+ if (xe_guc_has_debug_contexts(>->uc.guc)) {
+ guc_waklv_enable(ads, NULL, 0, &offset, &remain,
+ GUC_WA_KLV_RESET_DEP_ENGINES_ON_DEBUG_CTX_SWITCH);
+ } else {
+ const struct xe_uc_fw_version required =
+ XE_UC_FW_VERSION_DEBUG_CONTEXTS;
+
+ xe_gt_info(gt, "eudebug needs GuC version %u.%u.%u or greater\n",
+ required.major,
+ required.minor,
+ required.patch);
+ }
+ }
+#endif
+
size = guc_ads_waklv_size(ads) - remain;
if (!size)
return;
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 3ca2558c8c96..dd9d567f0a7b 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -651,6 +651,37 @@ static void __register_exec_queue(struct xe_guc *guc,
xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action), 0, 0);
}
+int xe_guc_action_eu_kernel_debug(struct xe_guc *guc, u32 id, u32 cmd)
+{
+ const u32 action[] = {
+ XE_GUC_ACTION_EU_KERNEL_DEBUG,
+ id,
+ cmd,
+ 0, /* reserved */
+ };
+
+ return xe_guc_ct_send(&guc->ct, action,
+ ARRAY_SIZE(action), 0, 0);
+}
+
+static void set_eu_kernel_debug(struct xe_exec_queue *q)
+{
+ struct xe_guc *guc = exec_queue_to_guc(q);
+ struct xe_gt *gt = guc_to_gt(guc);
+ int ret;
+
+ ret = xe_guc_action_eu_kernel_debug(guc, q->guc->id,
+ XE_GUC_EU_KERNEL_DEBUG_ENABLE);
+
+ if (ret)
+ xe_gt_warn(gt,
+ "GuC ctx=%d debug enabling failed with %d",
+ q->guc->id, ret);
+ else
+ xe_gt_dbg(gt,
+ "GuC ctx=%d enabled for debug", q->guc->id);
+}
+
static void register_exec_queue(struct xe_exec_queue *q, int ctx_type)
{
struct xe_guc *guc = exec_queue_to_guc(q);
@@ -705,6 +736,9 @@ static void register_exec_queue(struct xe_exec_queue *q, int ctx_type)
else
__register_exec_queue(guc, &info);
init_policies(guc, q);
+
+ if (xe_exec_queue_is_debuggable(q))
+ set_eu_kernel_debug(q);
}
static u32 wq_space_until_wrap(struct xe_exec_queue *q)
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
index 100a7891b918..b25bd8f32abf 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.h
+++ b/drivers/gpu/drm/xe/xe_guc_submit.h
@@ -50,5 +50,6 @@ void xe_guc_submit_print(struct xe_guc *guc, struct drm_printer *p);
void xe_guc_register_vf_exec_queue(struct xe_exec_queue *q, int ctx_type);
int xe_guc_contexts_hwsp_rebase(struct xe_guc *guc, void *scratch);
+int xe_guc_action_eu_kernel_debug(struct xe_guc *guc, u32 id, u32 cmd);
#endif
diff --git a/drivers/gpu/drm/xe/xe_wa_oob.rules b/drivers/gpu/drm/xe/xe_wa_oob.rules
index 7ca7258eb5d8..ae6daa50eaf1 100644
--- a/drivers/gpu/drm/xe/xe_wa_oob.rules
+++ b/drivers/gpu/drm/xe/xe_wa_oob.rules
@@ -77,3 +77,5 @@
15015404425_disable PLATFORM(PANTHERLAKE), MEDIA_STEP(B0, FOREVER)
16026007364 MEDIA_VERSION(3000)
14020316580 MEDIA_VERSION(1301)
+14022766366 GRAPHICS_VERSION_RANGE(2001, 2004)
+ GRAPHICS_VERSION_RANGE(3000, 3005)
--
2.43.0
* Re: [PATCH 05/20] drm/xe/eudebug: Mark guc contexts as debuggable
2025-12-02 13:52 ` [PATCH 05/20] drm/xe/eudebug: Mark guc contexts as debuggable Mika Kuoppala
@ 2025-12-06 2:03 ` Daniele Ceraolo Spurio
0 siblings, 0 replies; 30+ messages in thread
From: Daniele Ceraolo Spurio @ 2025-12-06 2:03 UTC (permalink / raw)
To: Mika Kuoppala, intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Lucas De Marchi, Jan Sokolowski, Dominik Grzegorzek
On 12/2/2025 5:52 AM, Mika Kuoppala wrote:
> We need to inform to guc which contexts are debuggable
> as their handling is different from ordinary contexts.
>
> v2: void return, use xe_gt_dbg, no need for lrc (Matt)
> v3: add the workaround enabling (Daniele)
> v4: version needed to 70.49.4
> v5: bail out early before registering eq (Daniele)
> v6: export the guc action for future (Mika)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Cc: Jan Sokolowski <jan.sokolowski@intel.com>
> Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
> Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
> Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
> ---
> drivers/gpu/drm/xe/abi/guc_actions_abi.h | 5 ++++
> drivers/gpu/drm/xe/abi/guc_klvs_abi.h | 1 +
> drivers/gpu/drm/xe/xe_exec_queue.c | 5 ++++
> drivers/gpu/drm/xe/xe_guc.c | 17 ++++++++++++
> drivers/gpu/drm/xe/xe_guc.h | 3 +++
> drivers/gpu/drm/xe/xe_guc_ads.c | 17 ++++++++++++
> drivers/gpu/drm/xe/xe_guc_submit.c | 34 ++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_guc_submit.h | 1 +
> drivers/gpu/drm/xe/xe_wa_oob.rules | 2 ++
> 9 files changed, 85 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/abi/guc_actions_abi.h b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
> index 47756e4674a1..32a5f680a6d2 100644
> --- a/drivers/gpu/drm/xe/abi/guc_actions_abi.h
> +++ b/drivers/gpu/drm/xe/abi/guc_actions_abi.h
> @@ -155,6 +155,7 @@ enum xe_guc_action {
> XE_GUC_ACTION_NOTIFY_FLUSH_LOG_BUFFER_TO_FILE = 0x8003,
> XE_GUC_ACTION_NOTIFY_CRASH_DUMP_POSTED = 0x8004,
> XE_GUC_ACTION_NOTIFY_EXCEPTION = 0x8005,
> + XE_GUC_ACTION_EU_KERNEL_DEBUG = 0x8006,
> XE_GUC_ACTION_TEST_G2G_SEND = 0xF001,
> XE_GUC_ACTION_TEST_G2G_RECV = 0xF002,
> XE_GUC_ACTION_LIMIT
> @@ -278,4 +279,8 @@ enum xe_guc_g2g_type {
> /* invalid type for XE_GUC_ACTION_NOTIFY_MEMORY_CAT_ERROR */
> #define XE_GUC_CAT_ERR_TYPE_INVALID 0xdeadbeef
>
> +enum xe_guc_eu_kernel_debug_request_type {
> + XE_GUC_EU_KERNEL_DEBUG_ENABLE = 0x3,
> +};
> +
> #endif
> diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
> index 265a135e7061..fba190d4f84b 100644
> --- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
> +++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
> @@ -423,6 +423,7 @@ enum xe_guc_klv_ids {
> GUC_WA_KLV_WAKE_POWER_DOMAINS_FOR_OUTBOUND_MMIO = 0x900a,
> GUC_WA_KLV_RESET_BB_STACK_PTR_ON_VF_SWITCH = 0x900b,
> GUC_WA_KLV_RESTORE_UNSAVED_MEDIA_CONTROL_REG = 0x900c,
> + GUC_WA_KLV_RESET_DEP_ENGINES_ON_DEBUG_CTX_SWITCH = 0x900d,
> };
>
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index ddaef00b56ff..e5590c6e3148 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -17,6 +17,7 @@
> #include "xe_device.h"
> #include "xe_gt.h"
> #include "xe_gt_sriov_vf.h"
> +#include "xe_guc.h"
> #include "xe_hw_engine_class_sysfs.h"
> #include "xe_hw_engine_group.h"
> #include "xe_hw_fence.h"
> @@ -593,6 +594,7 @@ static int exec_queue_set_eudebug(struct xe_device *xe, struct xe_exec_queue *q,
> u64 value)
> {
> const u64 known_flags = DRM_XE_EXEC_QUEUE_EUDEBUG_FLAG_ENABLE;
> + struct xe_guc *guc = &q->gt->uc.guc;
>
> if (XE_IOCTL_DBG(xe, (q->class != XE_ENGINE_CLASS_RENDER &&
> q->class != XE_ENGINE_CLASS_COMPUTE)))
> @@ -604,6 +606,9 @@ static int exec_queue_set_eudebug(struct xe_device *xe, struct xe_exec_queue *q,
> if (XE_IOCTL_DBG(xe, !IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)))
> return -EOPNOTSUPP;
>
> + if (XE_IOCTL_DBG(xe, !xe_guc_has_debug_contexts(guc)))
> + return -EOPNOTSUPP;
> +
> if (XE_IOCTL_DBG(xe, !xe_exec_queue_is_lr(q)))
> return -EINVAL;
> /*
> diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
> index 88376bc2a483..ec0d6b5e0693 100644
> --- a/drivers/gpu/drm/xe/xe_guc.c
> +++ b/drivers/gpu/drm/xe/xe_guc.c
> @@ -1744,6 +1744,23 @@ bool xe_guc_using_main_gamctrl_queues(struct xe_guc *guc)
> return GT_VER(gt) >= 35;
> }
>
> +bool xe_guc_has_debug_contexts(struct xe_guc *guc)
> +{
> + const struct xe_uc_fw_version required = XE_UC_FW_VERSION_DEBUG_CONTEXTS;
> + struct xe_uc_fw_version *version = &guc->fw.versions.found[XE_UC_FW_VER_RELEASE];
> + struct xe_gt *gt = guc_to_gt(guc);
> +
> + if (MAKE_GUC_VER_STRUCT(*version) < MAKE_GUC_VER_STRUCT(required)) {
> + xe_gt_info(gt,
> + "debug context unsupported in GuC interface v%u.%u.%u, need v%u.%u.%u or higher\n",
> + version->major, version->minor, version->patch, required.major,
> + required.minor, required.patch);
> + return false;
> + }
> +
> + return true;
> +}
> +
> #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
> #include "tests/xe_guc_g2g_test.c"
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_guc.h b/drivers/gpu/drm/xe/xe_guc.h
> index fdb08658d05a..10e387c72861 100644
> --- a/drivers/gpu/drm/xe/xe_guc.h
> +++ b/drivers/gpu/drm/xe/xe_guc.h
> @@ -23,6 +23,8 @@
> #define GUC_FIRMWARE_VER(guc) \
> MAKE_GUC_VER_STRUCT((guc)->fw.versions.found[XE_UC_FW_VER_RELEASE])
>
> +#define XE_UC_FW_VERSION_DEBUG_CONTEXTS { .major = 70, .minor = 49, .patch = 4 }
> +
> struct drm_printer;
>
> void xe_guc_comm_init_early(struct xe_guc *guc);
> @@ -55,6 +57,7 @@ void xe_guc_stop(struct xe_guc *guc);
> int xe_guc_start(struct xe_guc *guc);
> void xe_guc_declare_wedged(struct xe_guc *guc);
> bool xe_guc_using_main_gamctrl_queues(struct xe_guc *guc);
> +bool xe_guc_has_debug_contexts(struct xe_guc *guc);
>
> #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
> int xe_guc_g2g_test_notification(struct xe_guc *guc, u32 *payload, u32 len);
> diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
> index bcb85a1bf26d..f4d1470229f1 100644
> --- a/drivers/gpu/drm/xe/xe_guc_ads.c
> +++ b/drivers/gpu/drm/xe/xe_guc_ads.c
> @@ -363,6 +363,23 @@ static void guc_waklv_init(struct xe_guc_ads *ads)
> guc_waklv_enable(ads, NULL, 0, &offset, &remain,
> GUC_WORKAROUND_KLV_DISABLE_PSMI_INTERRUPTS_AT_C6_ENTRY_RESTORE_AT_EXIT);
>
> +#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
> + if (XE_GT_WA(gt, 14022766366)) {
> + if (xe_guc_has_debug_contexts(&gt->uc.guc)) {
> + guc_waklv_enable(ads, NULL, 0, &offset, &remain,
> + GUC_WA_KLV_RESET_DEP_ENGINES_ON_DEBUG_CTX_SWITCH);
> + } else {
> + const struct xe_uc_fw_version required =
> + XE_UC_FW_VERSION_DEBUG_CONTEXTS;
> +
> + xe_gt_info(gt, "eudebug needs GuC version %u.%u.%u or greater\n",
> + required.major,
> + required.minor,
> + required.patch);
> + }
> + }
> +#endif
> +
> size = guc_ads_waklv_size(ads) - remain;
> if (!size)
> return;
> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> index 3ca2558c8c96..dd9d567f0a7b 100644
> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> @@ -651,6 +651,37 @@ static void __register_exec_queue(struct xe_guc *guc,
> xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action), 0, 0);
> }
>
> +int xe_guc_action_eu_kernel_debug(struct xe_guc *guc, u32 id, u32 cmd)
> +{
> + const u32 action[] = {
> + XE_GUC_ACTION_EU_KERNEL_DEBUG,
> + id,
> + cmd,
> + 0, /* reserved */
> + };
> +
> + return xe_guc_ct_send(&guc->ct, action,
> + ARRAY_SIZE(action), 0, 0);
> +}
> +
> +static void set_eu_kernel_debug(struct xe_exec_queue *q)
> +{
> + struct xe_guc *guc = exec_queue_to_guc(q);
> + struct xe_gt *gt = guc_to_gt(guc);
> + int ret;
> +
> + ret = xe_guc_action_eu_kernel_debug(guc, q->guc->id,
> + XE_GUC_EU_KERNEL_DEBUG_ENABLE);
> +
> + if (ret)
> + xe_gt_warn(gt,
> + "GuC ctx=%d debug enabling failed with %d",
> + q->guc->id, ret);
> + else
> + xe_gt_dbg(gt,
> + "GuC ctx=%d enabled for debug", q->guc->id);
nit: q->guc->id is unsigned, so maybe use %u?
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Daniele
> +}
> +
> static void register_exec_queue(struct xe_exec_queue *q, int ctx_type)
> {
> struct xe_guc *guc = exec_queue_to_guc(q);
> @@ -705,6 +736,9 @@ static void register_exec_queue(struct xe_exec_queue *q, int ctx_type)
> else
> __register_exec_queue(guc, &info);
> init_policies(guc, q);
> +
> + if (xe_exec_queue_is_debuggable(q))
> + set_eu_kernel_debug(q);
> }
>
> static u32 wq_space_until_wrap(struct xe_exec_queue *q)
> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
> index 100a7891b918..b25bd8f32abf 100644
> --- a/drivers/gpu/drm/xe/xe_guc_submit.h
> +++ b/drivers/gpu/drm/xe/xe_guc_submit.h
> @@ -50,5 +50,6 @@ void xe_guc_submit_print(struct xe_guc *guc, struct drm_printer *p);
> void xe_guc_register_vf_exec_queue(struct xe_exec_queue *q, int ctx_type);
>
> int xe_guc_contexts_hwsp_rebase(struct xe_guc *guc, void *scratch);
> +int xe_guc_action_eu_kernel_debug(struct xe_guc *guc, u32 id, u32 cmd);
>
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_wa_oob.rules b/drivers/gpu/drm/xe/xe_wa_oob.rules
> index 7ca7258eb5d8..ae6daa50eaf1 100644
> --- a/drivers/gpu/drm/xe/xe_wa_oob.rules
> +++ b/drivers/gpu/drm/xe/xe_wa_oob.rules
> @@ -77,3 +77,5 @@
> 15015404425_disable PLATFORM(PANTHERLAKE), MEDIA_STEP(B0, FOREVER)
> 16026007364 MEDIA_VERSION(3000)
> 14020316580 MEDIA_VERSION(1301)
> +14022766366 GRAPHICS_VERSION_RANGE(2001, 2004)
> + GRAPHICS_VERSION_RANGE(3000, 3005)
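The GuC version gate in xe_guc_has_debug_contexts() above reduces to packing
the (major, minor, patch) triplet into one integer so that ordinary `<`
gives version ordering. A standalone sketch of that comparison follows; the
8-bit field layout is an assumption mirroring the intent of
MAKE_GUC_VER_STRUCT, not the exact macro:

```c
#include <stdint.h>

/*
 * Pack a version triplet into one comparable integer.
 * Field widths here are an illustrative assumption.
 */
static uint32_t make_ver(uint32_t major, uint32_t minor, uint32_t patch)
{
	return (major << 16) | (minor << 8) | patch;
}

static int has_debug_contexts(uint32_t major, uint32_t minor, uint32_t patch)
{
	/* XE_UC_FW_VERSION_DEBUG_CONTEXTS = v70.49.4 in this series */
	return make_ver(major, minor, patch) >= make_ver(70, 49, 4);
}
```

With this encoding, any newer release (higher major, or same major with
higher minor/patch) passes the gate, matching the "need v70.49.4 or higher"
message in the patch.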
* [PATCH 06/20] drm/xe: Introduce ADD_DEBUG_DATA and REMOVE_DEBUG_DATA vm bind ops
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
2025-12-02 13:52 ` [PATCH 05/20] drm/xe/eudebug: Mark guc contexts as debuggable Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 07/20] drm/xe/eudebug: Introduce vm bind and vm bind debug data events Mika Kuoppala
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala
From: Christoph Manszewski <christoph.manszewski@intel.com>
Make it possible to add and remove per-vm debug data, which can be used
to annotate vm ranges (using pseudopaths) or to associate them with
a file which can carry arbitrary debug data (e.g. a binary instruction
to code line mapping). The debug data is kept separate from the vmas.
Each address can be associated with only one debug data entry, i.e.
debug data entries cannot overlap. Each entry is atomic: to remove one,
the exact address and range used at creation must be passed.
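The non-overlap and exact-match removal rules can be sketched standalone (a
simplified stand-in for the entry struct, not the kernel code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the patch's xe_debug_data entry. */
struct dbg_entry {
	uint64_t addr;
	uint64_t range;
};

/* Two half-open ranges [addr, addr + range) overlap. */
static bool dbg_overlaps(const struct dbg_entry *a, const struct dbg_entry *b)
{
	return a->addr < b->addr + b->range && b->addr < a->addr + a->range;
}

/* Removal must name the exact address and range used at creation. */
static bool dbg_matches(const struct dbg_entry *a, const struct dbg_entry *b)
{
	return a->addr == b->addr && a->range == b->range;
}
```

Note that adjacent entries (one ending exactly where the next begins) do not
overlap under the half-open convention, so they can coexist.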
For debug data manipulation only the 'op' and 'extensions' fields from
'struct drm_xe_vm_bind_op' are used. All required parameters are passed
through 'struct drm_xe_vm_bind_op_ext_debug_data', and a valid instance
must be present in the extension chain pointed to by the 'extensions'
field.
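From userspace, such an op carries no GEM object or VA change; everything
travels in the extension. A hedged sketch of preparing one follows — the
struct layouts are trimmed local mirrors of the uapi added in this patch
(real code includes xe_drm.h and submits via DRM_IOCTL_XE_VM_BIND):

```c
#include <stdint.h>
#include <string.h>

#define XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA 0
#define DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA 0x5

struct user_ext {		/* mirrors struct drm_xe_user_extension */
	uint64_t next_extension;
	uint32_t name;
	uint32_t pad;
};

struct ext_debug_data {		/* trimmed mirror of drm_xe_vm_bind_op_ext_debug_data */
	struct user_ext base;
	uint64_t addr;
	uint64_t range;
	uint64_t flags;
	uint32_t offset;
	uint32_t reserved;
	char pathname[256];	/* PATH_MAX in the real struct */
};

/* Fill the extension and the two bind-op fields the op actually uses. */
static void prep_add_debug_data(struct ext_debug_data *ext, uint32_t *op,
				uint64_t *extensions,
				uint64_t addr, uint64_t range, const char *path)
{
	memset(ext, 0, sizeof(*ext));
	ext->base.name = XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA;
	ext->addr = addr;
	ext->range = range;
	strncpy(ext->pathname, path, sizeof(ext->pathname) - 1);

	*op = DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA;
	*extensions = (uint64_t)(uintptr_t)ext;	/* head of the extension chain */
}
```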
Debug data will be accessible through the eudebug event interface,
introduced in the following patch. An alternative way to access debug data
using debugfs, without relying on eudebug, will be proposed as a follow-up
to the eudebug series.
v2: enforce empty path on unmap (Joonas, Mika)
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_debug_data.c | 314 +++++++++++++++++++++++
drivers/gpu/drm/xe/xe_debug_data.h | 22 ++
drivers/gpu/drm/xe/xe_debug_data_types.h | 25 ++
drivers/gpu/drm/xe/xe_vm.c | 157 +++++++++++-
drivers/gpu/drm/xe/xe_vm_types.h | 19 ++
include/uapi/drm/xe_drm.h | 36 +++
7 files changed, 568 insertions(+), 6 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_debug_data.c
create mode 100644 drivers/gpu/drm/xe/xe_debug_data.h
create mode 100644 drivers/gpu/drm/xe/xe_debug_data_types.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index d81981b6a297..caf2b9e518ea 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -86,6 +86,7 @@ xe-y += xe_bb.o \
xe_irq.o \
xe_late_bind_fw.o \
xe_lrc.o \
+ xe_debug_data.o \
xe_migrate.o \
xe_mmio.o \
xe_mmio_gem.o \
diff --git a/drivers/gpu/drm/xe/xe_debug_data.c b/drivers/gpu/drm/xe/xe_debug_data.c
new file mode 100644
index 000000000000..1cc89396d8b2
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_debug_data.c
@@ -0,0 +1,314 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include "xe_debug_data.h"
+#include "xe_debug_data_types.h"
+#include "xe_vm.h"
+
+const char *xe_debug_data_pseudo_path_to_string(u64 pseudopath)
+{
+ switch (pseudopath) {
+ case DRM_XE_VM_BIND_DEBUG_DATA_PSEUDO_MODULE_AREA:
+ return "[module_area]";
+ case DRM_XE_VM_BIND_DEBUG_DATA_PSEUDO_SBA_AREA:
+ return "[sba_area]";
+ case DRM_XE_VM_BIND_DEBUG_DATA_PSEUDO_SIP_AREA:
+ return "[sip_area]";
+ default:
+ return "[unknown]";
+ }
+}
+
+static bool
+debug_data_overlaps(const struct drm_xe_vm_bind_op_ext_debug_data *a,
+ const struct xe_debug_data *b)
+{
+ const u64 s1 = a->addr;
+ const u64 e1 = a->addr + a->range;
+ const u64 s2 = b->addr;
+ const u64 e2 = b->addr + b->range;
+
+ return (s1 < e2) && (s2 < e1);
+}
+
+static bool
+debug_data_matches(const struct drm_xe_vm_bind_op_ext_debug_data *a,
+ const struct xe_debug_data *b)
+{
+ return (a->addr == b->addr) && (a->range == b->range);
+}
+
+static bool
+debug_data_is_empty(const struct drm_xe_vm_bind_op_ext_debug_data *dd)
+{
+ int i;
+
+ if (dd->flags)
+ return false;
+
+ if (dd->offset)
+ return false;
+
+ for (i = 0; i < PATH_MAX; i++)
+ if (dd->pathname[i])
+ return false;
+
+ return true;
+}
+
+static int xe_debug_data_check_add(struct xe_vm *vm,
+ const struct drm_xe_vm_bind_op_ext_debug_data *ext)
+{
+ struct xe_device *xe = vm->xe;
+ struct xe_debug_data *i;
+
+ mutex_lock(&vm->debug_data.lock);
+ list_for_each_entry(i, &vm->debug_data.list, link) {
+ if (XE_IOCTL_DBG(xe, debug_data_overlaps(ext, i))) {
+ mutex_unlock(&vm->debug_data.lock);
+ return -EINVAL;
+ }
+ }
+ mutex_unlock(&vm->debug_data.lock);
+
+ return 0;
+}
+
+static int xe_debug_data_check_remove(struct xe_vm *vm,
+ const struct drm_xe_vm_bind_op_ext_debug_data *ext)
+{
+ struct xe_device *xe = vm->xe;
+ struct xe_debug_data *i;
+ bool found = false;
+
+ if (XE_IOCTL_DBG(xe, !debug_data_is_empty(ext)))
+ return -EINVAL;
+
+ mutex_lock(&vm->debug_data.lock);
+ list_for_each_entry(i, &vm->debug_data.list, link) {
+ found = debug_data_matches(ext, i);
+ if (found)
+ break;
+ }
+ mutex_unlock(&vm->debug_data.lock);
+
+ if (XE_IOCTL_DBG(xe, !found)) {
+ drm_dbg(&xe->drm, "Debug data to remove not found for addr 0x%llx, range 0x%llx\n",
+ ext->addr, ext->range);
+ return -ENOENT;
+ }
+
+ return 0;
+}
+
+int xe_debug_data_check_extension(struct xe_vm *vm, u32 operation, u64 extension)
+{
+ const u64 __user * const address = u64_to_user_ptr(extension);
+ struct drm_xe_vm_bind_op_ext_debug_data *ext;
+ struct xe_device *xe = vm->xe;
+ int ret;
+
+ if (XE_IOCTL_DBG(xe, operation != DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA &&
+ operation != DRM_XE_VM_BIND_OP_REMOVE_DEBUG_DATA))
+ return -EINVAL;
+
+ ext = kzalloc(sizeof(*ext), GFP_KERNEL);
+ if (!ext)
+ return -ENOMEM;
+
+ if (copy_from_user(ext, address, sizeof(*ext))) {
+ kfree(ext);
+ return -EFAULT;
+ }
+
+ if (XE_IOCTL_DBG(xe, ext->flags & ~DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO) ||
+ XE_IOCTL_DBG(xe, ext->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO &&
+ ext->offset != 0) ||
+ XE_IOCTL_DBG(xe, ext->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO &&
+ (ext->pseudopath < DRM_XE_VM_BIND_DEBUG_DATA_PSEUDO_MODULE_AREA ||
+ ext->pseudopath > DRM_XE_VM_BIND_DEBUG_DATA_PSEUDO_SIP_AREA)) ||
+ XE_IOCTL_DBG(xe, !(ext->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO) &&
+ strnlen(ext->pathname, PATH_MAX) >= PATH_MAX)) {
+ kfree(ext);
+ return -EINVAL;
+ }
+
+ ret = operation == DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA ?
+ xe_debug_data_check_add(vm, ext) :
+ xe_debug_data_check_remove(vm, ext);
+
+ kfree(ext);
+ return ret;
+}
+
+static int xe_debug_data_add(struct xe_vm *vm, struct xe_vma_op *vma_op,
+ struct drm_xe_vm_bind_op_ext_debug_data *ext)
+{
+ struct xe_debug_data *dd;
+
+ vm_dbg(&vm->xe->drm,
+ "ADD_DEBUG_DATA: addr=0x%016llx, range=0x%016llx, offset=0x%08x, flags=0x%016llx, path=%s\n",
+ ext->addr, ext->range, ext->offset, ext->flags,
+ (ext->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO) ?
+ xe_debug_data_pseudo_path_to_string(ext->pseudopath) : ext->pathname);
+
+ dd = kzalloc(sizeof(*dd), GFP_KERNEL);
+ if (!dd)
+ return -ENOMEM;
+
+ dd->addr = ext->addr;
+ dd->range = ext->range;
+ dd->flags = ext->flags;
+ dd->offset = ext->offset;
+
+ if (ext->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO) {
+ dd->pseudopath = ext->pseudopath;
+ } else if (strscpy(dd->pathname, ext->pathname, PATH_MAX) < 0) {
+ kfree(dd);
+ return -EINVAL;
+ }
+
+ mutex_lock(&vm->debug_data.lock);
+ list_add_tail(&dd->link, &vm->debug_data.list);
+ mutex_unlock(&vm->debug_data.lock);
+
+ memcpy(&vma_op->modify_debug_data.debug_data, dd, sizeof(*dd));
+
+ return 0;
+}
+
+static int xe_debug_data_remove(struct xe_vm *vm, struct xe_vma_op *vma_op,
+ struct drm_xe_vm_bind_op_ext_debug_data *ext)
+{
+ struct xe_debug_data *dd;
+
+ vm_dbg(&vm->xe->drm,
+ "REMOVE_DEBUG_DATA: addr=0x%016llx, range=0x%016llx, offset=0x%08x, flags=0x%016llx, path=%s\n",
+ ext->addr, ext->range, ext->offset, ext->flags,
+ (ext->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO) ?
+ xe_debug_data_pseudo_path_to_string(ext->pseudopath) : ext->pathname);
+
+ mutex_lock(&vm->debug_data.lock);
+ list_for_each_entry(dd, &vm->debug_data.list, link) {
+ if (dd->addr == ext->addr && dd->range == ext->range) {
+ list_del(&dd->link);
+ memcpy(&vma_op->modify_debug_data.debug_data, dd, sizeof(*dd));
+ kfree(dd);
+ break;
+ }
+ }
+ mutex_unlock(&vm->debug_data.lock);
+
+ return 0;
+}
+
+int xe_debug_data_process_extension(struct xe_vm *vm, struct drm_gpuva_ops *ops, u32 operation,
+ u64 extension)
+{
+ const u64 __user * const address = u64_to_user_ptr(extension);
+ struct drm_xe_vm_bind_op_ext_debug_data *ext;
+ struct xe_vma_op *vma_op;
+ struct drm_gpuva_op *op;
+ int ret;
+
+ ext = kzalloc(sizeof(*ext), GFP_KERNEL);
+ if (!ext)
+ return -ENOMEM;
+
+ if (copy_from_user(ext, address, sizeof(*ext))) {
+ kfree(ext);
+ return -EFAULT;
+ }
+
+ /* We expect only a single op for debug data */
+ op = drm_gpuva_first_op(ops);
+ if (op != drm_gpuva_last_op(ops))
+ drm_warn(&vm->xe->drm, "NOT POSSIBLE");
+
+ vma_op = gpuva_op_to_vma_op(op);
+
+ if (vma_op->subop == XE_VMA_SUBOP_ADD_DEBUG_DATA)
+ ret = xe_debug_data_add(vm, vma_op, ext);
+ else
+ ret = xe_debug_data_remove(vm, vma_op, ext);
+
+ kfree(ext);
+ return ret;
+}
+
+static int xe_debug_data_op_unwind_add(struct xe_vm *vm, struct xe_vma_op *vma_op)
+{
+ const struct xe_debug_data *op_data = &vma_op->modify_debug_data.debug_data;
+ struct xe_debug_data *dd;
+
+ vm_dbg(&vm->xe->drm,
+ "Reverting debug data add: addr=0x%016llx, range=0x%016llx, offset=0x%08x, flags=0x%016llx, path=%s\n",
+ op_data->addr, op_data->range, op_data->offset, op_data->flags,
+ (op_data->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO) ?
+ xe_debug_data_pseudo_path_to_string(op_data->pseudopath) : op_data->pathname);
+
+ mutex_lock(&vm->debug_data.lock);
+ list_for_each_entry(dd, &vm->debug_data.list, link) {
+ if (dd->addr == op_data->addr && dd->range == op_data->range) {
+ list_del(&dd->link);
+ kfree(dd);
+ break;
+ }
+ }
+ mutex_unlock(&vm->debug_data.lock);
+
+ return 0;
+}
+
+static int xe_debug_data_op_unwind_remove(struct xe_vm *vm, struct xe_vma_op *vma_op)
+{
+ const struct xe_debug_data *op_data = &vma_op->modify_debug_data.debug_data;
+ struct xe_debug_data *dd;
+
+ vm_dbg(&vm->xe->drm,
+ "Reverting debug data remove: addr=0x%016llx, range=0x%016llx, offset=0x%08x, flags=0x%016llx, path=%s\n",
+ op_data->addr, op_data->range, op_data->offset, op_data->flags,
+ (op_data->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO) ?
+ xe_debug_data_pseudo_path_to_string(op_data->pseudopath) : op_data->pathname);
+
+ dd = kzalloc(sizeof(*dd), GFP_KERNEL);
+ if (!dd)
+ return -ENOMEM;
+
+ memcpy(dd, op_data, sizeof(*dd));
+
+ mutex_lock(&vm->debug_data.lock);
+ list_add_tail(&dd->link, &vm->debug_data.list);
+ mutex_unlock(&vm->debug_data.lock);
+
+ return 0;
+}
+
+int xe_debug_data_op_unwind(struct xe_vm *vm, struct xe_vma_op *vma_op)
+{
+ switch (vma_op->subop) {
+ case XE_VMA_SUBOP_ADD_DEBUG_DATA:
+ return xe_debug_data_op_unwind_add(vm, vma_op);
+ case XE_VMA_SUBOP_REMOVE_DEBUG_DATA:
+ return xe_debug_data_op_unwind_remove(vm, vma_op);
+ default:
+ drm_err(&vm->xe->drm, "Invalid debug data subop %d\n", vma_op->subop);
+ return -EINVAL;
+ }
+}
+
+int xe_debug_data_destroy(struct xe_vm *vm)
+{
+ struct xe_debug_data *dd, *tmp;
+
+ mutex_lock(&vm->debug_data.lock);
+ list_for_each_entry_safe(dd, tmp, &vm->debug_data.list, link) {
+ list_del(&dd->link);
+ kfree(dd);
+ }
+ mutex_unlock(&vm->debug_data.lock);
+
+ return 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_debug_data.h b/drivers/gpu/drm/xe/xe_debug_data.h
new file mode 100644
index 000000000000..3436a7023920
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_debug_data.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_DEBUG_DATA_H_
+#define _XE_DEBUG_DATA_H_
+
+#include <linux/types.h>
+
+struct drm_gpuva_ops;
+struct xe_vm;
+struct xe_vma_op;
+
+const char *xe_debug_data_pseudo_path_to_string(u64 pseudopath);
+int xe_debug_data_check_extension(struct xe_vm *vm, u32 operation, u64 extension);
+int xe_debug_data_process_extension(struct xe_vm *vm, struct drm_gpuva_ops *ops, u32 operation,
+ u64 extension);
+int xe_debug_data_op_unwind(struct xe_vm *vm, struct xe_vma_op *vma_op);
+int xe_debug_data_destroy(struct xe_vm *vm);
+
+#endif /* _XE_DEBUG_DATA_H_ */
diff --git a/drivers/gpu/drm/xe/xe_debug_data_types.h b/drivers/gpu/drm/xe/xe_debug_data_types.h
new file mode 100644
index 000000000000..a8b430af2275
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_debug_data_types.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_DEBUG_DATA_TYPES_H_
+#define _XE_DEBUG_DATA_TYPES_H_
+
+#include <linux/limits.h>
+#include <linux/list.h>
+#include <linux/types.h>
+
+struct xe_debug_data {
+ struct list_head link;
+ u64 addr;
+ u64 range;
+ u64 flags;
+ u32 offset;
+ union {
+ u64 pseudopath;
+ char pathname[PATH_MAX];
+ };
+};
+
+#endif /* _XE_DEBUG_DATA_TYPES_H_ */
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 903f478ff1cc..4bc23d384134 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -24,6 +24,7 @@
#include "regs/xe_gtt_defs.h"
#include "xe_assert.h"
#include "xe_bo.h"
+#include "xe_debug_data.h"
#include "xe_device.h"
#include "xe_drm_client.h"
#include "xe_eudebug.h"
@@ -1514,6 +1515,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
for_each_tile(tile, xe, id)
xe_range_fence_tree_init(&vm->rftree[id]);
+ INIT_LIST_HEAD(&vm->debug_data.list);
+ mutex_init(&vm->debug_data.lock);
+
vm->pt_ops = &xelp_pt_ops;
/*
@@ -1812,6 +1816,8 @@ void xe_vm_close_and_put(struct xe_vm *vm)
for_each_tile(tile, xe, id)
xe_range_fence_tree_fini(&vm->rftree[id]);
+ xe_debug_data_destroy(vm);
+
xe_vm_put(vm);
}
@@ -2154,6 +2160,7 @@ static void prep_vma_destroy(struct xe_vm *vm, struct xe_vma *vma,
#if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)
static void print_op(struct xe_device *xe, struct drm_gpuva_op *op)
{
+ struct xe_vma_op *vma_op;
struct xe_vma *vma;
switch (op->op) {
@@ -2188,6 +2195,12 @@ static void print_op(struct xe_device *xe, struct drm_gpuva_op *op)
vm_dbg(&xe->drm, "PREFETCH: addr=0x%016llx, range=0x%016llx",
(ULL)xe_vma_start(vma), (ULL)xe_vma_size(vma));
break;
+ case DRM_GPUVA_OP_DRIVER:
+ vma_op = gpuva_op_to_vma_op(op);
+ if (vma_op->subop != XE_VMA_SUBOP_ADD_DEBUG_DATA &&
+ vma_op->subop != XE_VMA_SUBOP_REMOVE_DEBUG_DATA)
+ drm_warn(&xe->drm, "Unexpected vma sub op: %d", vma_op->subop);
+ break;
default:
drm_warn(&xe->drm, "NOT POSSIBLE");
}
@@ -2232,12 +2245,13 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
struct xe_bo *bo, u64 bo_offset_or_userptr,
u64 addr, u64 range,
u32 operation, u32 flags,
- u32 prefetch_region, u16 pat_index)
+ u32 prefetch_region, u16 pat_index, u64 extensions)
{
struct drm_gem_object *obj = bo ? &bo->ttm.base : NULL;
struct drm_gpuva_ops *ops;
struct drm_gpuva_op *__op;
struct drm_gpuvm_bo *vm_bo;
+ struct xe_vma_op *vma_op;
u64 range_start = addr;
u64 range_end = addr + range;
int err;
@@ -2291,6 +2305,24 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
drm_gpuvm_bo_put(vm_bo);
xe_bo_unlock(bo);
break;
+ case DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA:
+ case DRM_XE_VM_BIND_OP_REMOVE_DEBUG_DATA:
+ ops = kzalloc(sizeof(*ops), GFP_KERNEL);
+ if (!ops)
+ return ERR_PTR(-ENOMEM);
+
+ INIT_LIST_HEAD(&ops->list);
+ vma_op = kzalloc(sizeof(*vma_op), GFP_KERNEL);
+ if (!vma_op) {
+ kfree(ops);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ vma_op->base.op = DRM_GPUVA_OP_DRIVER;
+ vma_op->subop = operation == DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA ?
+ XE_VMA_SUBOP_ADD_DEBUG_DATA : XE_VMA_SUBOP_REMOVE_DEBUG_DATA;
+ list_add_tail(&vma_op->base.entry, &ops->list);
+ break;
default:
drm_warn(&vm->xe->drm, "NOT POSSIBLE");
ops = ERR_PTR(-EINVAL);
@@ -2569,6 +2601,11 @@ static int xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
case DRM_GPUVA_OP_PREFETCH:
op->flags |= XE_VMA_OP_COMMITTED;
break;
+ case DRM_GPUVA_OP_DRIVER:
+ if (op->subop != XE_VMA_SUBOP_ADD_DEBUG_DATA &&
+ op->subop != XE_VMA_SUBOP_REMOVE_DEBUG_DATA)
+ drm_warn(&vm->xe->drm, "Unexpected vma sub op: %d", op->subop);
+ break;
default:
drm_warn(&vm->xe->drm, "NOT POSSIBLE");
}
@@ -2768,6 +2805,11 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
break;
+ case DRM_GPUVA_OP_DRIVER:
+ if (op->subop != XE_VMA_SUBOP_ADD_DEBUG_DATA &&
+ op->subop != XE_VMA_SUBOP_REMOVE_DEBUG_DATA)
+ drm_warn(&vm->xe->drm, "Unexpected vma sub op: %d", op->subop);
+ break;
default:
drm_warn(&vm->xe->drm, "NOT POSSIBLE");
}
@@ -2830,6 +2872,13 @@ static void xe_vma_op_unwind(struct xe_vm *vm, struct xe_vma_op *op,
case DRM_GPUVA_OP_PREFETCH:
/* Nothing to do */
break;
+ case DRM_GPUVA_OP_DRIVER:
+ if (op->subop == XE_VMA_SUBOP_ADD_DEBUG_DATA ||
+ op->subop == XE_VMA_SUBOP_REMOVE_DEBUG_DATA)
+ xe_debug_data_op_unwind(vm, op);
+ else
+ drm_warn(&vm->xe->drm, "Unexpected vma sub op: %d", op->subop);
+ break;
default:
drm_warn(&vm->xe->drm, "NOT POSSIBLE");
}
@@ -3007,6 +3056,11 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
exec);
break;
}
+ case DRM_GPUVA_OP_DRIVER:
+ if (op->subop != XE_VMA_SUBOP_ADD_DEBUG_DATA &&
+ op->subop != XE_VMA_SUBOP_REMOVE_DEBUG_DATA)
+ drm_warn(&vm->xe->drm, "Unexpected vma sub op: %d", op->subop);
+ break;
default:
drm_warn(&vm->xe->drm, "NOT POSSIBLE");
}
@@ -3246,6 +3300,11 @@ static void op_add_ufence(struct xe_vm *vm, struct xe_vma_op *op,
case DRM_GPUVA_OP_PREFETCH:
vma_add_ufence(gpuva_to_vma(op->base.prefetch.va), ufence);
break;
+ case DRM_GPUVA_OP_DRIVER:
+ if (op->subop != XE_VMA_SUBOP_ADD_DEBUG_DATA &&
+ op->subop != XE_VMA_SUBOP_REMOVE_DEBUG_DATA)
+ drm_warn(&vm->xe->drm, "Unexpected vma sub op: %d", op->subop);
+ break;
default:
drm_warn(&vm->xe->drm, "NOT POSSIBLE");
}
@@ -3332,6 +3391,79 @@ ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_execute, ERRNO);
#define XE_64K_PAGE_MASK 0xffffull
#define ALL_DRM_XE_SYNCS_FLAGS (DRM_XE_SYNCS_FLAG_WAIT_FOR_OP)
+#define MAX_USER_EXTENSIONS 16
+
+typedef int (*xe_vm_bind_user_extension_check_fn)(struct xe_vm *vm, u32 operation, u64 extension);
+
+typedef int (*xe_vm_bind_user_extension_process_fn)(struct xe_vm *vm, struct drm_gpuva_ops *ops,
+ u32 operation, u64 extension);
+
+static const xe_vm_bind_user_extension_check_fn vm_bind_extension_check_funcs[] = {
+ [XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA] = xe_debug_data_check_extension,
+};
+
+static const xe_vm_bind_user_extension_process_fn vm_bind_extension_process_funcs[] = {
+ [XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA] = xe_debug_data_process_extension,
+};
+
+static int __vm_bind_op_user_extensions(struct xe_vm *vm, struct drm_gpuva_ops *ops,
+ u32 operation, u64 extensions)
+{
+ struct xe_device *xe = vm->xe;
+ int debug_data_count = 0;
+ int ext_count = 0;
+ int err = -1;
+
+ struct drm_xe_user_extension ext;
+
+ while (extensions) {
+ u64 __user *address = u64_to_user_ptr(extensions);
+
+ if (XE_IOCTL_DBG(xe, ++ext_count >= MAX_USER_EXTENSIONS))
+ return -E2BIG;
+
+ err = copy_from_user(&ext, address, sizeof(ext));
+ if (XE_IOCTL_DBG(xe, err))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, operation != DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA &&
+ operation != DRM_XE_VM_BIND_OP_REMOVE_DEBUG_DATA &&
+ ext.name == XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA) ||
+ XE_IOCTL_DBG(xe, ext.name == XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA &&
+ ++debug_data_count > 1))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, ext.pad) ||
+ XE_IOCTL_DBG(xe, ext.name > XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA))
+ return -EINVAL;
+
+ if (!ops)
+ err = vm_bind_extension_check_funcs[ext.name](vm, operation, extensions);
+ else
+ err = vm_bind_extension_process_funcs[ext.name](vm, ops, operation,
+ extensions);
+
+ if (XE_IOCTL_DBG(xe, err))
+ return err;
+
+ extensions = ext.next_extension;
+ }
+
+ return 0;
+}
+
+static int vm_bind_ioctl_check_user_extensions(struct xe_vm *vm, u32 operation, u64 extensions)
+{
+ return __vm_bind_op_user_extensions(vm, NULL, operation, extensions);
+}
+
+static int vm_bind_ioctl_process_user_extensions(struct xe_vm *vm, struct drm_gpuva_ops *ops,
+ u32 operation, u64 extensions)
+{
+ return __vm_bind_op_user_extensions(vm, ops, operation, extensions);
+}
+
static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
struct drm_xe_vm_bind *args,
struct drm_xe_vm_bind_op **bind_ops)
@@ -3380,6 +3512,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
bool is_cpu_addr_mirror = flags &
DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR;
u16 pat_index = (*bind_ops)[i].pat_index;
+ u64 extensions = (*bind_ops)[i].extensions;
u16 coh_mode;
if (XE_IOCTL_DBG(xe, is_cpu_addr_mirror &&
@@ -3407,7 +3540,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
goto free_bind_ops;
}
- if (XE_IOCTL_DBG(xe, op > DRM_XE_VM_BIND_OP_PREFETCH) ||
+ if (XE_IOCTL_DBG(xe, op > DRM_XE_VM_BIND_OP_REMOVE_DEBUG_DATA) ||
XE_IOCTL_DBG(xe, flags & ~SUPPORTED_FLAGS) ||
XE_IOCTL_DBG(xe, obj && (is_null || is_cpu_addr_mirror)) ||
XE_IOCTL_DBG(xe, obj_offset && (is_null ||
@@ -3449,10 +3582,16 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
XE_IOCTL_DBG(xe, addr & ~PAGE_MASK) ||
XE_IOCTL_DBG(xe, range & ~PAGE_MASK) ||
XE_IOCTL_DBG(xe, !range &&
- op != DRM_XE_VM_BIND_OP_UNMAP_ALL)) {
+ op != DRM_XE_VM_BIND_OP_UNMAP_ALL &&
+ op != DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA &&
+ op != DRM_XE_VM_BIND_OP_REMOVE_DEBUG_DATA)) {
err = -EINVAL;
goto free_bind_ops;
}
+
+ err = vm_bind_ioctl_check_user_extensions(vm, op, extensions);
+ if (err)
+ goto free_bind_ops;
}
return 0;
@@ -3710,11 +3849,17 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
u64 obj_offset = bind_ops[i].obj_offset;
u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
u16 pat_index = bind_ops[i].pat_index;
+ u64 extensions = bind_ops[i].extensions;
ops[i] = vm_bind_ioctl_ops_create(vm, &vops, bos[i], obj_offset,
addr, range, op, flags,
- prefetch_region, pat_index);
- if (IS_ERR(ops[i])) {
+ prefetch_region, pat_index, extensions);
+
+ if (!IS_ERR(ops[i]) && extensions) {
+ err = vm_bind_ioctl_process_user_extensions(vm, ops[i], op, extensions);
+ if (err)
+ goto unwind_ops;
+ } else if (IS_ERR(ops[i])) {
err = PTR_ERR(ops[i]);
ops[i] = NULL;
goto unwind_ops;
@@ -3822,7 +3967,7 @@ struct dma_fence *xe_vm_bind_kernel_bo(struct xe_vm *vm, struct xe_bo *bo,
ops = vm_bind_ioctl_ops_create(vm, &vops, bo, 0, addr, xe_bo_size(bo),
DRM_XE_VM_BIND_OP_MAP, 0, 0,
- vm->xe->pat.idx[cache_lvl]);
+ vm->xe->pat.idx[cache_lvl], 0);
if (IS_ERR(ops)) {
err = PTR_ERR(ops);
goto release_vm_lock;
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 3bf912bfbdcc..1aa1412f0a2c 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -14,6 +14,7 @@
#include <linux/mmu_notifier.h>
#include <linux/scatterlist.h>
+#include "xe_debug_data_types.h"
#include "xe_device_types.h"
#include "xe_pt_types.h"
#include "xe_range_fence.h"
@@ -335,6 +336,12 @@ struct xe_vm {
bool batch_invalidate_tlb;
/** @xef: Xe file handle for tracking this VM's drm client */
struct xe_file *xef;
+
+ /** @debug_data: track debug_data mapped to vm */
+ struct {
+ struct list_head list;
+ struct mutex lock;
+ } debug_data;
};
/** struct xe_vma_op_map - VMA map operation */
@@ -401,6 +408,12 @@ struct xe_vma_op_prefetch_range {
struct xe_tile *tile;
};
+/** struct xe_vma_op_debug_data - debug data altering operation */
+struct xe_vma_op_modify_debug_data {
+ /** @debug_data: debug data associated with that operation */
+ struct xe_debug_data debug_data;
+};
+
/** enum xe_vma_op_flags - flags for VMA operation */
enum xe_vma_op_flags {
/** @XE_VMA_OP_COMMITTED: VMA operation committed */
@@ -417,6 +430,10 @@ enum xe_vma_subop {
XE_VMA_SUBOP_MAP_RANGE,
/** @XE_VMA_SUBOP_UNMAP_RANGE: Unmap range */
XE_VMA_SUBOP_UNMAP_RANGE,
+ /** @XE_VMA_SUBOP_ADD_DEBUG_DATA: Add debug data to vm */
+ XE_VMA_SUBOP_ADD_DEBUG_DATA,
+ /** @XE_VMA_SUBOP_REMOVE_DEBUG_DATA: Remove debug data from vm */
+ XE_VMA_SUBOP_REMOVE_DEBUG_DATA,
};
/** struct xe_vma_op - VMA operation */
@@ -445,6 +462,8 @@ struct xe_vma_op {
struct xe_vma_op_unmap_range unmap_range;
/** @prefetch_range: VMA prefetch range operation specific data */
struct xe_vma_op_prefetch_range prefetch_range;
+ /** @debug_data: debug_data operation specific data */
+ struct xe_vma_op_modify_debug_data modify_debug_data;
};
};
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 7349b832837d..8217e60700a6 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -6,6 +6,8 @@
#ifndef _UAPI_XE_DRM_H_
#define _UAPI_XE_DRM_H_
+#include <linux/limits.h>
+
#include "drm.h"
#if defined(__cplusplus)
@@ -991,6 +993,35 @@ struct drm_xe_vm_destroy {
__u64 reserved[2];
};
+struct drm_xe_vm_bind_op_ext_debug_data {
+ /** @base: base user extension */
+ struct drm_xe_user_extension base;
+
+ /** @addr: Address of the metadata mapping */
+ __u64 addr;
+
+ /** @range: Range of the metadata mapping */
+ __u64 range;
+
+#define DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO (1 << 0)
+ /** @flags: Debug metadata flags */
+ __u64 flags;
+
+ /** @offset: Offset into the debug data file, MBZ for DEBUG_PSEUDO */
+ __u32 offset;
+
+ /** @reserved: Reserved */
+ __u32 reserved;
+
+ union {
+#define DRM_XE_VM_BIND_DEBUG_DATA_PSEUDO_MODULE_AREA 0x1
+#define DRM_XE_VM_BIND_DEBUG_DATA_PSEUDO_SBA_AREA 0x2
+#define DRM_XE_VM_BIND_DEBUG_DATA_PSEUDO_SIP_AREA 0x3
+ __u64 pseudopath;
+ char pathname[PATH_MAX];
+ };
+};
+
/**
* struct drm_xe_vm_bind_op - run bind operations
*
@@ -1000,6 +1031,8 @@ struct drm_xe_vm_destroy {
* - %DRM_XE_VM_BIND_OP_MAP_USERPTR
* - %DRM_XE_VM_BIND_OP_UNMAP_ALL
* - %DRM_XE_VM_BIND_OP_PREFETCH
+ * - %DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA
+ * - %DRM_XE_VM_BIND_OP_REMOVE_DEBUG_DATA
*
* and the @flags can be:
* - %DRM_XE_VM_BIND_FLAG_READONLY - Setup the page tables as read-only
@@ -1043,6 +1076,7 @@ struct drm_xe_vm_destroy {
* the memory region advised by madvise.
*/
struct drm_xe_vm_bind_op {
+#define XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA 0
/** @extensions: Pointer to the first extension struct, if any */
__u64 extensions;
@@ -1134,6 +1168,8 @@ struct drm_xe_vm_bind_op {
#define DRM_XE_VM_BIND_OP_MAP_USERPTR 0x2
#define DRM_XE_VM_BIND_OP_UNMAP_ALL 0x3
#define DRM_XE_VM_BIND_OP_PREFETCH 0x4
+#define DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA 0x5
+#define DRM_XE_VM_BIND_OP_REMOVE_DEBUG_DATA 0x6
/** @op: Bind operation to perform */
__u32 op;
--
2.43.0
* [PATCH 07/20] drm/xe/eudebug: Introduce vm bind and vm bind debug data events
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
2025-12-02 13:52 ` [PATCH 06/20] drm/xe: Introduce ADD_DEBUG_DATA and REMOVE_DEBUG_DATA vm bind ops Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 08/20] drm/xe/eudebug: Add UFENCE events with acks Mika Kuoppala
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala
From: Christoph Manszewski <christoph.manszewski@intel.com>
This patch adds events to track the bind ioctl and its associated
debug data add and remove operations. A single bind can involve
multiple operations and may fail mid-sequence, so the debugger needs
to see the whole chain.
Add a bind event to signal the debugger when a bind operation is
executed. Further, add debug data add and remove operation events so
the debugger can keep track of the regions where the debug data
resides. The bind event also matters because the ufence event,
introduced later in the series, will be tied to it.
Only deliver the bind and its operations to the debugger if the vm
bind op execution chain succeeds.
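For reference, the seqno-based grouping this sequence produces can be
sketched from the debugger's side. The structs below are simplified
stand-ins, not the uapi definitions: real events carry a full
drm_xe_eudebug_event base and are read via
DRM_XE_EUDEBUG_IOCTL_READ_EVENT; only the fields needed to show the
vm_bind_ref_seqno linkage are kept.

```c
#include <stddef.h>
#include <stdint.h>

/* Abbreviated, illustrative event records (not the uapi structs). */
struct ev_vm_bind {
	uint64_t seqno;        /* base.seqno of the VM_BIND event */
	uint32_t num_bind_ops; /* how many op events to expect */
};

struct ev_bind_op {
	uint64_t seqno;             /* own base.seqno */
	uint64_t vm_bind_ref_seqno; /* points back at the VM_BIND event */
};

/* Count op events in the stream that belong to the given bind. */
static size_t ops_seen_for_bind(const struct ev_vm_bind *bind,
				const struct ev_bind_op *ops, size_t n)
{
	size_t i, seen = 0;

	for (i = 0; i < n; i++)
		if (ops[i].vm_bind_ref_seqno == bind->seqno)
			seen++;

	return seen;
}

/*
 * A debugger knows a bind sequence is complete once it has seen all
 * num_bind_ops op events referencing the bind's seqno.
 */
static int bind_sequence_complete(const struct ev_vm_bind *bind,
				  const struct ev_bind_op *ops, size_t n)
{
	return ops_seen_for_bind(bind, ops, n) == bind->num_bind_ops;
}
```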
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Co-developed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/xe_eudebug.c | 221 +++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_eudebug.h | 7 +
drivers/gpu/drm/xe/xe_eudebug_types.h | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 4 +
include/uapi/drm/xe_drm_eudebug.h | 50 ++++++
5 files changed, 279 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index b8a9462eed17..3f3654f4a700 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -12,6 +12,7 @@
#include <uapi/drm/xe_drm.h>
#include "xe_assert.h"
+#include "xe_debug_data_types.h"
#include "xe_device.h"
#include "xe_eudebug.h"
#include "xe_eudebug_types.h"
@@ -841,6 +842,162 @@ void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q)
xe_eudebug_event_put(d, exec_queue_destroy_event(d, xef, q));
}
+static int send_vm_bind_event(struct xe_eudebug *d,
+ struct xe_vm *vm,
+ u64 vm_handle,
+ u32 bind_flags,
+ u32 num_ops, u64 *seqno)
+{
+ struct drm_xe_eudebug_event_vm_bind *e;
+ struct drm_xe_eudebug_event *event;
+ const u32 sz = sizeof(*e);
+ const u32 base_flags = DRM_XE_EUDEBUG_EVENT_STATE_CHANGE;
+
+ *seqno = atomic_long_inc_return(&d->events.seqno);
+
+ event = xe_eudebug_create_event(d, DRM_XE_EUDEBUG_EVENT_VM_BIND,
+ *seqno, base_flags, sz);
+ if (!event)
+ return -ENOMEM;
+
+ e = cast_event(e, event);
+
+ e->vm_handle = vm_handle;
+ e->flags = bind_flags;
+ e->num_bind_ops = num_ops;
+
+ return xe_eudebug_queue_event(d, event);
+}
+
+static int vm_bind_event(struct xe_eudebug *d,
+ struct xe_vm *vm,
+ u32 flags,
+ u32 num_ops,
+ u64 *seqno)
+{
+ int h_vm;
+
+ h_vm = find_handle(d->res, XE_EUDEBUG_RES_TYPE_VM, vm);
+ if (h_vm < 0)
+ return h_vm;
+
+ return send_vm_bind_event(d, vm, h_vm, flags,
+ num_ops, seqno);
+}
+
+static int vm_bind_op_event(struct xe_eudebug *d,
+ struct xe_vm *vm,
+ const u32 flags,
+ const u64 bind_ref_seqno,
+ const u64 num_extensions,
+ struct xe_debug_data *debug_data,
+ u64 *op_seqno)
+{
+ struct drm_xe_eudebug_event_vm_bind_op_debug_data *e;
+ struct drm_xe_eudebug_event *event;
+ const u32 sz = sizeof(*e);
+
+ *op_seqno = atomic_long_inc_return(&d->events.seqno);
+
+ event = xe_eudebug_create_event(d, DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_DEBUG_DATA,
+ *op_seqno, flags, sz);
+ if (!event)
+ return -ENOMEM;
+
+ e = cast_event(e, event);
+
+ e->vm_bind_ref_seqno = bind_ref_seqno;
+ e->num_extensions = num_extensions;
+ e->addr = debug_data->addr;
+ e->range = debug_data->range;
+ e->flags = debug_data->flags;
+ e->offset = debug_data->offset;
+
+ if (debug_data->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO)
+ e->pseudopath = debug_data->pseudopath;
+ else
+ strscpy(e->pathname, debug_data->pathname, PATH_MAX);
+
+ return xe_eudebug_queue_event(d, event);
+}
+
+static int vm_bind_op(struct xe_eudebug *d, struct xe_vm *vm,
+ const u32 flags, const u64 bind_ref_seqno,
+ struct xe_debug_data *debug_data)
+{
+ u64 op_seqno = 0;
+ u64 num_extensions = 0;
+ int ret;
+
+ ret = vm_bind_op_event(d, vm, flags, bind_ref_seqno, num_extensions,
+ debug_data, &op_seqno);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+void xe_eudebug_vm_bind_execute(struct xe_vm *vm,
+ struct xe_vma_ops *ops)
+{
+ struct xe_eudebug *d;
+ struct xe_vma_op *op;
+ u64 bind_seqno = 0;
+ u32 num_ops;
+ int err;
+
+ if (!xe_vm_in_lr_mode(vm))
+ return;
+
+ d = xe_eudebug_get(vm->xef);
+ if (!d)
+ return;
+
+ num_ops = 0;
+ list_for_each_entry(op, &ops->list, link) {
+ if (op->base.op != DRM_GPUVA_OP_DRIVER)
+ continue;
+
+ if (op->subop == XE_VMA_SUBOP_ADD_DEBUG_DATA ||
+ op->subop == XE_VMA_SUBOP_REMOVE_DEBUG_DATA)
+ num_ops++;
+ }
+
+ lockdep_assert_held_write(&vm->lock);
+
+ err = vm_bind_event(d, vm, 0,
+ num_ops, &bind_seqno);
+ if (err)
+ goto out_err;
+
+ list_for_each_entry(op, &ops->list, link) {
+ u32 flags = 0;
+
+ if (op->base.op != DRM_GPUVA_OP_DRIVER)
+ continue;
+
+ if (op->subop == XE_VMA_SUBOP_ADD_DEBUG_DATA)
+ flags = DRM_XE_EUDEBUG_EVENT_CREATE;
+
+ if (op->subop == XE_VMA_SUBOP_REMOVE_DEBUG_DATA)
+ flags = DRM_XE_EUDEBUG_EVENT_DESTROY;
+
+ if (!flags)
+ continue;
+
+ err = vm_bind_op(d, vm, flags, bind_seqno,
+ &op->modify_debug_data.debug_data);
+ if (err)
+ goto out_err;
+ }
+
+out_err:
+ if (err)
+ xe_eudebug_disconnect(d, err);
+
+ xe_eudebug_put(d);
+}
+
static struct xe_file *xe_eudebug_target_get(struct xe_eudebug *d)
{
struct xe_file *xef = NULL;
@@ -853,19 +1010,67 @@ static struct xe_file *xe_eudebug_target_get(struct xe_eudebug *d)
return xef;
}
+static int vm_discover_binds(struct xe_eudebug *d, struct xe_vm *vm)
+{
+ struct xe_debug_data *dd;
+ struct list_head *pos;
+ unsigned int ops, count;
+ u64 ref_seqno;
+ int err;
+
+ if (list_empty(&vm->debug_data.list))
+ return 0;
+
+ count = 0;
+ list_for_each(pos, &vm->debug_data.list)
+ count++;
+
+ ops = count;
+ ref_seqno = 0;
+ err = vm_bind_event(d, vm, 0, ops, &ref_seqno);
+ if (err) {
+ eu_dbg(d, "vm_bind_event error %d\n", err);
+ return err;
+ }
+
+ list_for_each_entry(dd, &vm->debug_data.list, link) {
+ err = vm_bind_op(d, vm, DRM_XE_EUDEBUG_EVENT_CREATE, ref_seqno, dd);
+ if (err) {
+ eu_dbg(d, "vm_bind_op error %d\n", err);
+ return err;
+ }
+
+ ops--;
+ }
+
+ XE_WARN_ON(ops);
+
+ return ops ? -EIO : count;
+}
+
static void discover_client(struct xe_eudebug *d)
{
struct xe_file *xef;
struct xe_exec_queue *q;
struct xe_vm *vm;
unsigned long i;
- unsigned int vm_count = 0, eq_count = 0;
+ unsigned int vm_count = 0, eq_count = 0, ops_count = 0;
int err = 0;
xef = xe_eudebug_target_get(d);
if (!xef)
return;
+ /*
+ * An xe_eudebug ref is taken for the discovery worker. It
+ * holds a target xe_file ref, and the xe_file holds vm and
+ * exec_queue refs.
+ *
+ * The relevant ioctls going through xe_file take
+ * down_read(&xef->eudebug.ioctl_lock). That means we can peek
+ * inside the resources without taking their respective locks
+ * by taking the write lock here.
+ */
down_write(&xef->eudebug.ioctl_lock);
eu_dbg(d, "Discovery start for %lld", d->session);
@@ -875,6 +1080,12 @@ static void discover_client(struct xe_eudebug *d)
if (err)
break;
vm_count++;
+
+ err = vm_discover_binds(d, vm);
+ if (err < 0)
+ break;
+
+ ops_count += err;
}
xa_for_each(&xef->exec_queue.xa, i, q) {
@@ -884,6 +1095,8 @@ static void discover_client(struct xe_eudebug *d)
err = exec_queue_create_event(d, xef, q);
if (err)
break;
+
+ eq_count++;
}
complete_all(&d->discovery);
@@ -892,9 +1105,9 @@ static void discover_client(struct xe_eudebug *d)
up_write(&xef->eudebug.ioctl_lock);
- if (vm_count || eq_count)
- eu_dbg(d, "Discovery found %u vms, %u exec_queues",
- vm_count, eq_count);
+ if (vm_count || eq_count || ops_count)
+ eu_dbg(d, "Discovery found %u vms, %u exec_queues, %u bind_ops",
+ vm_count, eq_count, ops_count);
xe_file_put(xef);
}
diff --git a/drivers/gpu/drm/xe/xe_eudebug.h b/drivers/gpu/drm/xe/xe_eudebug.h
index 10480a226fac..9c622362c0f7 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.h
+++ b/drivers/gpu/drm/xe/xe_eudebug.h
@@ -10,10 +10,14 @@
struct drm_device;
struct drm_file;
+struct xe_debug_data;
struct xe_device;
struct xe_file;
struct xe_vm;
+struct xe_vma;
+struct xe_vma_ops;
struct xe_exec_queue;
+struct xe_user_fence;
#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
@@ -50,6 +54,8 @@ int xe_eudebug_enable(struct xe_device *xe, bool enable);
void xe_eudebug_exec_queue_create(struct xe_file *xef, struct xe_exec_queue *q);
void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q);
+void xe_eudebug_vm_bind_execute(struct xe_vm *vm, struct xe_vma_ops *ops);
+
#else
static inline int xe_eudebug_connect_ioctl(struct drm_device *dev,
@@ -67,6 +73,7 @@ static inline void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm)
static inline void xe_eudebug_exec_queue_create(struct xe_file *xef, struct xe_exec_queue *q) { }
static inline void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q) { }
+static inline void xe_eudebug_vm_bind_execute(struct xe_vm *vm, struct xe_vma_ops *ops) { }
#endif /* CONFIG_DRM_XE_EUDEBUG */
#endif /* _XE_EUDEBUG_H_ */
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
index 57bff7482163..502b121114df 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_types.h
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -33,7 +33,7 @@ enum xe_eudebug_state {
};
#define CONFIG_DRM_XE_DEBUGGER_EVENT_QUEUE_SIZE 64
-#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE
+#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_DEBUG_DATA
/**
* struct xe_eudebug_handle - eudebug resource handle
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 4bc23d384134..6052bb81a827 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -3360,6 +3360,10 @@ static struct dma_fence *vm_bind_ioctl_ops_execute(struct xe_vm *vm,
xe_vm_set_validation_exec(vm, &exec);
fence = ops_execute(vm, vops);
xe_vm_set_validation_exec(vm, NULL);
+
+ if (!IS_ERR(fence) || PTR_ERR(fence) == -ENODATA)
+ xe_eudebug_vm_bind_execute(vm, vops);
+
if (IS_ERR(fence)) {
if (PTR_ERR(fence) == -ENODATA)
vm_bind_ioctl_ops_fini(vm, vops, NULL);
diff --git a/include/uapi/drm/xe_drm_eudebug.h b/include/uapi/drm/xe_drm_eudebug.h
index 360d7a7ecb67..5891f4d91358 100644
--- a/include/uapi/drm/xe_drm_eudebug.h
+++ b/include/uapi/drm/xe_drm_eudebug.h
@@ -49,6 +49,8 @@ struct drm_xe_eudebug_event {
#define DRM_XE_EUDEBUG_EVENT_READ 1
#define DRM_XE_EUDEBUG_EVENT_VM 2
#define DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE 3
+#define DRM_XE_EUDEBUG_EVENT_VM_BIND 4
+#define DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_DEBUG_DATA 5
__u16 flags;
#define DRM_XE_EUDEBUG_EVENT_CREATE (1 << 0)
@@ -81,6 +83,54 @@ struct drm_xe_eudebug_event_exec_queue {
__u64 lrc_handle[];
};
+/*
+ * When the client (debuggee) calls the vm_bind_ioctl with the
+ * DRM_XE_VM_BIND_OP_[ADD|REMOVE]_DEBUG_DATA operation, the following event
+ * sequence will be created (for the debugger):
+ *
+ * ┌───────────────────────┐
+ * │ EVENT_VM_BIND ├──────────────────┬─┬┄┐
+ * └───────────────────────┘ │ │ ┊
+ * ┌──────────────────────────────────┐ │ │ ┊
+ * │ EVENT_VM_BIND_OP_DEBUG_DATA #1 ├───┘ │ ┊
+ * └──────────────────────────────────┘ │ ┊
+ * ... │ ┊
+ * ┌──────────────────────────────────┐ │ ┊
+ * │ EVENT_VM_BIND_OP_DEBUG_DATA #n ├─────┘ ┊
+ * └──────────────────────────────────┘ ┊
+ * ┊
+ * ┌┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┐ ┊
+ * ┊ EVENT_UFENCE ├┄┄┄┄┄┄┄┘
+ * └┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┘
+ *
+ * All the events below VM_BIND will reference the VM_BIND
+ * they associate with, by field .vm_bind_ref_seqno.
+ */
+
+struct drm_xe_eudebug_event_vm_bind {
+ struct drm_xe_eudebug_event base;
+
+ __u64 vm_handle;
+ __u32 flags;
+ __u32 num_bind_ops;
+};
+
+struct drm_xe_eudebug_event_vm_bind_op_debug_data {
+ struct drm_xe_eudebug_event base;
+ __u64 vm_bind_ref_seqno; /* *_event_vm_bind.base.seqno */
+ __u64 num_extensions;
+
+ __u64 addr;
+ __u64 range;
+ __u64 flags;
+ __u32 offset;
+ __u32 reserved;
+ union {
+ __u64 pseudopath;
+ char pathname[PATH_MAX];
+ };
+};
+
#if defined(__cplusplus)
}
#endif
--
2.43.0
* [PATCH 08/20] drm/xe/eudebug: Add UFENCE events with acks
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (6 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 07/20] drm/xe/eudebug: Introduce vm bind and vm bind debug data events Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 09/20] drm/xe/eudebug: vm open/pread/pwrite Mika Kuoppala
` (17 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala
When the vma is in place, the debugger needs to intercept before
userspace proceeds with the workload, for example to install a
breakpoint in an EU shader.
If a ufence is part of the bind sequence, attach the debugger to the
xe_user_fence. When the ufence signal is about to be delivered, check
whether this ufence needs to be tracked by the debugger. If so, stall
delivery of the ufence signal until the debugger has acked the ufence
event with the ack ioctl.
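The debugger-side half of this handshake can be sketched as below.
The structs are minimal mirrors of the uapi additions in this patch
(the real ones live in include/uapi/drm/xe_drm_eudebug.h and carry a
full event base); the NEED_ACK bit value is a placeholder, as its
numeric definition is not part of this hunk.

```c
#include <stdint.h>

/* Abbreviated mirrors of the uapi structs; illustrative only. */
struct eu_event {
	uint32_t type;
	uint16_t flags;
	uint64_t seqno;
};

struct eu_ack {
	uint32_t type;
	uint32_t flags; /* MBZ per the patch */
	uint64_t seqno;
};

#define EV_TYPE_VM_BIND_UFENCE	6	  /* matches this patch */
#define EV_FLAG_NEED_ACK	(1u << 3) /* placeholder bit, assumed */

/*
 * Build the argument the debugger would pass to
 * DRM_XE_EUDEBUG_IOCTL_ACK_EVENT once it is done poking the vm
 * (e.g. breakpoints installed). Returns 0 and fills *ack when the
 * event needs acking, -1 otherwise.
 */
static int build_ack(const struct eu_event *ev, struct eu_ack *ack)
{
	if (!(ev->flags & EV_FLAG_NEED_ACK))
		return -1;

	ack->type = ev->type;
	ack->flags = 0; /* must be zero */
	ack->seqno = ev->seqno;
	return 0;
}
```

Until that ack ioctl is issued, the client's ufence wait stays
blocked, which is exactly the stall the commit message describes.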
v2: - return err instead of 0 to guarantee signalling (Dominik)
- checkpatch (Tilak)
- Kconfig (Mika, Andrzej)
- use lock instead of cmpxchg (Mika)
v4: - improve ref handling and no ufences nodebug binds
v5: - remove overzealous warn_on on bind_ref_seqno (Christoph)
- remove superfluous signalled (Mika)
- fix double free on bind sequence (Mika)
- Dont fill op fields if no debugger (Maciej)
v6: - rework to align with xe_eudebug_bind_execute()
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/xe_eudebug.c | 301 +++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_eudebug.h | 9 +
drivers/gpu/drm/xe/xe_eudebug_types.h | 9 +-
drivers/gpu/drm/xe/xe_sync.c | 39 ++--
drivers/gpu/drm/xe/xe_sync.h | 7 +-
drivers/gpu/drm/xe/xe_sync_types.h | 28 ++-
include/uapi/drm/xe_drm_eudebug.h | 35 ++-
7 files changed, 402 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index 3f3654f4a700..d3a8ef2ea9e5 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -19,6 +19,7 @@
#include "xe_exec_queue.h"
#include "xe_hw_engine.h"
#include "xe_macros.h"
+#include "xe_sync.h"
#include "xe_vm.h"
/*
@@ -217,6 +218,115 @@ static void remove_debugger(struct xe_file *xef)
}
}
+struct xe_eudebug_ack {
+ struct rb_node rb_node;
+ u64 seqno;
+ u64 ts_insert;
+ struct xe_user_fence *ufence;
+};
+
+#define fetch_ack(x) rb_entry(x, struct xe_eudebug_ack, rb_node)
+
+static int compare_ack(const u64 a, const u64 b)
+{
+ if (a < b)
+ return -1;
+ else if (a > b)
+ return 1;
+
+ return 0;
+}
+
+static int ack_insert_cmp(struct rb_node * const node,
+ const struct rb_node * const p)
+{
+ return compare_ack(fetch_ack(node)->seqno,
+ fetch_ack(p)->seqno);
+}
+
+static int ack_lookup_cmp(const void * const key,
+ const struct rb_node * const node)
+{
+ return compare_ack(*(const u64 *)key,
+ fetch_ack(node)->seqno);
+}
+
+static struct xe_eudebug_ack *remove_ack(struct xe_eudebug *d, u64 seqno)
+{
+ struct rb_root * const root = &d->acks.tree;
+ struct rb_node *node;
+
+ spin_lock(&d->acks.lock);
+ node = rb_find(&seqno, root, ack_lookup_cmp);
+ if (node)
+ rb_erase(node, root);
+ spin_unlock(&d->acks.lock);
+
+ if (!node)
+ return NULL;
+
+ return rb_entry_safe(node, struct xe_eudebug_ack, rb_node);
+}
+
+static void ufence_signal_worker(struct work_struct *w)
+{
+ struct xe_user_fence * const ufence =
+ container_of(w, struct xe_user_fence, eudebug.worker);
+
+ if (READ_ONCE(ufence->signalled))
+ xe_sync_ufence_signal(ufence);
+
+ xe_sync_ufence_put(ufence);
+}
+
+static void kick_ufence_worker(struct xe_user_fence *f)
+{
+ queue_work(f->xe->eudebug.wq, &f->eudebug.worker);
+}
+
+static void handle_ack(struct xe_eudebug *d, struct xe_eudebug_ack *ack,
+ bool on_disconnect)
+{
+ struct xe_user_fence *f = ack->ufence;
+ u64 signalled_by;
+ bool signal = false;
+
+ spin_lock(&f->eudebug.lock);
+ if (!f->eudebug.signalled_seqno) {
+ f->eudebug.signalled_seqno = ack->seqno;
+ f->eudebug.bind_ref_seqno = 0;
+ signal = true;
+ }
+ signalled_by = f->eudebug.signalled_seqno;
+ spin_unlock(&f->eudebug.lock);
+
+ if (signal)
+ kick_ufence_worker(f);
+ else
+ xe_sync_ufence_put(f);
+
+ eu_dbg(d, "ACK: seqno=%llu: signalled by %llu (%s) (held %lluus)",
+ ack->seqno, signalled_by,
+ on_disconnect ? "disconnect" : "debugger",
+ ktime_us_delta(ktime_get(), ack->ts_insert));
+
+ kfree(ack);
+}
+
+static void release_acks(struct xe_eudebug *d)
+{
+ struct xe_eudebug_ack *ack, *n;
+ struct rb_root root;
+
+ spin_lock(&d->acks.lock);
+ root = d->acks.tree;
+ d->acks.tree = RB_ROOT;
+ spin_unlock(&d->acks.lock);
+
+ rbtree_postorder_for_each_entry_safe(ack, n, &root, rb_node)
+ handle_ack(d, ack, true);
+}
+
static bool xe_eudebug_detach(struct xe_device *xe,
struct xe_eudebug *d,
const int err)
@@ -240,6 +350,8 @@ static bool xe_eudebug_detach(struct xe_device *xe,
eu_dbg(d, "session %lld detached with %d", d->session, err);
+ release_acks(d);
+
remove_debugger(target);
xe_file_put(target);
@@ -937,11 +1049,134 @@ static int vm_bind_op(struct xe_eudebug *d, struct xe_vm *vm,
return 0;
}
+void xe_eudebug_ufence_init(struct xe_user_fence *ufence)
+{
+ spin_lock_init(&ufence->eudebug.lock);
+ INIT_WORK(&ufence->eudebug.worker, ufence_signal_worker);
+ ufence->eudebug.bind_ref_seqno = 0;
+ ufence->eudebug.signalled_seqno = 0;
+}
+
+void xe_eudebug_ufence_fini(struct xe_user_fence *ufence)
+{
+ XE_WARN_ON(READ_ONCE(ufence->eudebug.bind_ref_seqno));
+
+ if (!ufence->eudebug.debugger)
+ return;
+
+ xe_eudebug_put(ufence->eudebug.debugger);
+}
+
+static int xe_eudebug_track_ufence(struct xe_eudebug *d,
+ struct xe_user_fence *f,
+ u64 seqno)
+{
+ struct xe_eudebug_ack *ack;
+ struct rb_node *old;
+
+ ack = kzalloc(sizeof(*ack), GFP_KERNEL);
+ if (!ack)
+ return -ENOMEM;
+
+ ack->seqno = seqno;
+ ack->ts_insert = ktime_get();
+
+ __xe_sync_ufence_get(f);
+
+ spin_lock(&d->acks.lock);
+ old = rb_find_add(&ack->rb_node,
+ &d->acks.tree, ack_insert_cmp);
+ if (!old)
+ ack->ufence = f;
+ spin_unlock(&d->acks.lock);
+
+ if (ack->ufence)
+ return 0;
+
+ xe_sync_ufence_put(f);
+ kfree(ack);
+
+ return -EEXIST;
+}
+
+static int track_ufence(struct xe_eudebug *d,
+ struct xe_user_fence *ufence)
+{
+ struct drm_xe_eudebug_event *event;
+ struct drm_xe_eudebug_event_vm_bind_ufence *e;
+ const u32 sz = sizeof(*e);
+ const u32 flags = DRM_XE_EUDEBUG_EVENT_CREATE |
+ DRM_XE_EUDEBUG_EVENT_NEED_ACK;
+ u64 seqno;
+ int ret;
+
+ if (XE_WARN_ON(!ufence->eudebug.bind_ref_seqno))
+ return -EINVAL;
+
+ seqno = atomic_long_inc_return(&d->events.seqno);
+
+ event = xe_eudebug_create_event(d, DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE,
+ seqno, flags, sz);
+ if (!event)
+ return -ENOMEM;
+
+ e = cast_event(e, event);
+ e->vm_bind_ref_seqno = ufence->eudebug.bind_ref_seqno;
+
+ ret = xe_eudebug_track_ufence(d, ufence, seqno);
+ if (ret) {
+ kfree(event);
+
+ eu_dbg(d, "tracking of ufence %llu failed with %d\n", seqno, ret);
+
+ return ret;
+ }
+
+ return xe_eudebug_queue_event(d, event);
+}
+
+/**
+ * xe_eudebug_ufence_track - Track the ufence for eudebug
+ * @ufence: user fence that might be applicable for tracking
+ *
+ * If this user fence was part of a bind sequence, we need to
+ * track it so that we can hold the client's signalling on behalf
+ * of the debugger and thus deliver the event to the debugger.
+ *
+ * Return: true if the debugger will track the ufence, false if
+ * the debugger is not interested
+ */
+bool xe_eudebug_ufence_track(struct xe_user_fence *ufence)
+{
+ struct xe_eudebug *d;
+ int ret;
+
+ spin_lock(&ufence->eudebug.lock);
+ d = ufence->eudebug.debugger;
+ spin_unlock(&ufence->eudebug.lock);
+
+ if (!d)
+ return false;
+
+ if (xe_eudebug_detached(d))
+ return false;
+
+ ret = track_ufence(d, ufence);
+ if (ret) {
+ xe_eudebug_disconnect(d, ret);
+ return false;
+ }
+
+ return true;
+}
+
void xe_eudebug_vm_bind_execute(struct xe_vm *vm,
struct xe_vma_ops *ops)
{
+ struct xe_user_fence *ufence = NULL;
struct xe_eudebug *d;
struct xe_vma_op *op;
+ unsigned int i;
u64 bind_seqno = 0;
u32 num_ops;
int err;
@@ -953,6 +1188,15 @@ void xe_eudebug_vm_bind_execute(struct xe_vm *vm,
if (!d)
return;
+ for (i = 0; i < ops->num_syncs; i++) {
+ struct xe_sync_entry *se = &ops->syncs[i];
+
+ if (xe_sync_is_ufence(se)) {
+ xe_assert(vm->xe, ufence == NULL);
+ ufence = se->ufence;
+ }
+ }
+
num_ops = 0;
list_for_each_entry(op, &ops->list, link) {
if (op->base.op != DRM_GPUVA_OP_DRIVER)
@@ -965,7 +1209,8 @@ void xe_eudebug_vm_bind_execute(struct xe_vm *vm,
lockdep_assert_held_write(&vm->lock);
- err = vm_bind_event(d, vm, 0,
+ err = vm_bind_event(d, vm,
+ ufence ? DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE : 0,
num_ops, &bind_seqno);
if (err)
goto out_err;
@@ -991,6 +1236,14 @@ void xe_eudebug_vm_bind_execute(struct xe_vm *vm,
goto out_err;
}
+ if (ufence) {
+ spin_lock(&ufence->eudebug.lock);
+ kref_get(&d->ref);
+ ufence->eudebug.debugger = d;
+ ufence->eudebug.bind_ref_seqno = bind_seqno;
+ spin_unlock(&ufence->eudebug.lock);
+ }
+
out_err:
if (err)
xe_eudebug_disconnect(d, err);
@@ -1315,6 +1568,44 @@ static long xe_eudebug_read_event(struct xe_eudebug *d,
return ret;
}
+static long
+xe_eudebug_ack_event_ioctl(struct xe_eudebug *d,
+ const unsigned int cmd,
+ const u64 arg)
+{
+ struct drm_xe_eudebug_ack_event __user * const user_ptr =
+ u64_to_user_ptr(arg);
+ struct drm_xe_eudebug_ack_event user_arg;
+ struct xe_eudebug_ack *ack;
+ struct xe_device *xe = d->xe;
+
+ if (XE_IOCTL_DBG(xe, _IOC_SIZE(cmd) < sizeof(user_arg)))
+ return -EINVAL;
+
+ /* Userland write */
+ if (XE_IOCTL_DBG(xe, !(_IOC_DIR(cmd) & _IOC_WRITE)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, copy_from_user(&user_arg,
+ user_ptr,
+ sizeof(user_arg))))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, user_arg.flags))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, xe_eudebug_detached(d)))
+ return -ENOTCONN;
+
+ ack = remove_ack(d, user_arg.seqno);
+ if (XE_IOCTL_DBG(xe, !ack))
+ return -EINVAL;
+
+ handle_ack(d, ack, false);
+
+ return 0;
+}
+
static long xe_eudebug_ioctl(struct file *file,
unsigned int cmd,
unsigned long arg)
@@ -1331,7 +1622,10 @@ static long xe_eudebug_ioctl(struct file *file,
ret = xe_eudebug_read_event(d, arg,
!(file->f_flags & O_NONBLOCK));
break;
-
+ case DRM_XE_EUDEBUG_IOCTL_ACK_EVENT:
+ ret = xe_eudebug_ack_event_ioctl(d, cmd, arg);
+ eu_dbg(d, "ioctl cmd=EVENT_ACK ret=%ld\n", ret);
+ break;
default:
ret = -EINVAL;
}
@@ -1393,6 +1687,9 @@ xe_eudebug_connect(struct xe_device *xe,
INIT_KFIFO(d->events.fifo);
INIT_WORK(&d->discovery_work, discovery_work_fn);
+ spin_lock_init(&d->acks.lock);
+ d->acks.tree = RB_ROOT;
+
d->res = xe_eudebug_resources_alloc();
if (XE_IOCTL_DBG(xe, IS_ERR(d->res))) {
err = PTR_ERR(d->res);
diff --git a/drivers/gpu/drm/xe/xe_eudebug.h b/drivers/gpu/drm/xe/xe_eudebug.h
index 9c622362c0f7..d0f1b51564dc 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.h
+++ b/drivers/gpu/drm/xe/xe_eudebug.h
@@ -56,6 +56,10 @@ void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q)
void xe_eudebug_vm_bind_execute(struct xe_vm *vm, struct xe_vma_ops *ops);
+void xe_eudebug_ufence_init(struct xe_user_fence *ufence);
+void xe_eudebug_ufence_fini(struct xe_user_fence *ufence);
+bool xe_eudebug_ufence_track(struct xe_user_fence *ufence);
+
#else
static inline int xe_eudebug_connect_ioctl(struct drm_device *dev,
@@ -74,6 +78,11 @@ static inline void xe_eudebug_exec_queue_create(struct xe_file *xef, struct xe_e
static inline void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q) { }
static inline void xe_eudebug_vm_bind_execute(struct xe_vm *vm, struct xe_vma_ops *ops) { }
+
+static inline void xe_eudebug_ufence_init(struct xe_user_fence *ufence) { }
+static inline void xe_eudebug_ufence_fini(struct xe_user_fence *ufence) { }
+static inline bool xe_eudebug_ufence_track(struct xe_user_fence *ufence) { return false; }
+
#endif /* CONFIG_DRM_XE_EUDEBUG */
#endif /* _XE_EUDEBUG_H_ */
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
index 502b121114df..a294e2f4e7df 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_types.h
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -33,7 +33,7 @@ enum xe_eudebug_state {
};
#define CONFIG_DRM_XE_DEBUGGER_EVENT_QUEUE_SIZE 64
-#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_DEBUG_DATA
+#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE
/**
* struct xe_eudebug_handle - eudebug resource handle
@@ -132,6 +132,13 @@ struct xe_eudebug {
atomic_long_t seqno;
} events;
+ /* user fences tracked by this debugger */
+ struct {
+ /** @lock: guards access to tree */
+ spinlock_t lock;
+
+ struct rb_root tree;
+ } acks;
};
#endif /* _XE_EUDEBUG_TYPES_H_ */
diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
index ff74528ca0c6..fd38be30fa67 100644
--- a/drivers/gpu/drm/xe/xe_sync.c
+++ b/drivers/gpu/drm/xe/xe_sync.c
@@ -15,27 +15,20 @@
#include <uapi/drm/xe_drm.h>
#include "xe_device.h"
+#include "xe_eudebug.h"
#include "xe_exec_queue.h"
#include "xe_macros.h"
#include "xe_sched_job_types.h"
-struct xe_user_fence {
- struct xe_device *xe;
- struct kref refcount;
- struct dma_fence_cb cb;
- struct work_struct worker;
- struct mm_struct *mm;
- u64 __user *addr;
- u64 value;
- int signalled;
-};
-
static void user_fence_destroy(struct kref *kref)
{
struct xe_user_fence *ufence = container_of(kref, struct xe_user_fence,
refcount);
mmdrop(ufence->mm);
+
+ xe_eudebug_ufence_fini(ufence);
+
kfree(ufence);
}
@@ -49,7 +42,8 @@ static void user_fence_put(struct xe_user_fence *ufence)
kref_put(&ufence->refcount, user_fence_destroy);
}
-static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr,
+static struct xe_user_fence *user_fence_create(struct xe_device *xe,
+ u64 addr,
u64 value)
{
struct xe_user_fence *ufence;
@@ -70,14 +64,15 @@ static struct xe_user_fence *user_fence_create(struct xe_device *xe, u64 addr,
ufence->mm = current->mm;
mmgrab(ufence->mm);
+ xe_eudebug_ufence_init(ufence);
+
return ufence;
}
-static void user_fence_worker(struct work_struct *w)
+void xe_sync_ufence_signal(struct xe_user_fence *ufence)
{
- struct xe_user_fence *ufence = container_of(w, struct xe_user_fence, worker);
+ XE_WARN_ON(!ufence->signalled);
- WRITE_ONCE(ufence->signalled, 1);
if (mmget_not_zero(ufence->mm)) {
kthread_use_mm(ufence->mm);
if (copy_to_user(ufence->addr, &ufence->value, sizeof(ufence->value)))
@@ -88,11 +83,23 @@ static void user_fence_worker(struct work_struct *w)
drm_dbg(&ufence->xe->drm, "mmget_not_zero() failed, ufence wasn't signaled\n");
}
+ wake_up_all(&ufence->xe->ufence_wq);
+}
+
+static void user_fence_worker(struct work_struct *w)
+{
+ struct xe_user_fence *ufence = container_of(w, struct xe_user_fence, worker);
+
/*
* Wake up waiters only after updating the ufence state, allowing the UMD
* to safely reuse the same ufence without encountering -EBUSY errors.
*/
- wake_up_all(&ufence->xe->ufence_wq);
+ WRITE_ONCE(ufence->signalled, 1);
+
+	/* Let's see if the debugger wants to track this */
+ if (!xe_eudebug_ufence_track(ufence))
+ xe_sync_ufence_signal(ufence);
+
user_fence_put(ufence);
}
diff --git a/drivers/gpu/drm/xe/xe_sync.h b/drivers/gpu/drm/xe/xe_sync.h
index 51f2d803e977..62caaa6470af 100644
--- a/drivers/gpu/drm/xe/xe_sync.h
+++ b/drivers/gpu/drm/xe/xe_sync.h
@@ -10,8 +10,12 @@
struct drm_syncobj;
struct xe_device;
-struct xe_exec_queue;
struct xe_file;
+struct xe_exec_queue;
+struct drm_syncobj;
+struct dma_fence;
+struct dma_fence_chain;
+struct drm_xe_sync;
struct xe_sched_job;
struct xe_vm;
@@ -43,5 +47,6 @@ struct xe_user_fence *__xe_sync_ufence_get(struct xe_user_fence *ufence);
struct xe_user_fence *xe_sync_ufence_get(struct xe_sync_entry *sync);
void xe_sync_ufence_put(struct xe_user_fence *ufence);
int xe_sync_ufence_get_status(struct xe_user_fence *ufence);
+void xe_sync_ufence_signal(struct xe_user_fence *ufence);
#endif
diff --git a/drivers/gpu/drm/xe/xe_sync_types.h b/drivers/gpu/drm/xe/xe_sync_types.h
index b88f1833e28c..33a93a0faa72 100644
--- a/drivers/gpu/drm/xe/xe_sync_types.h
+++ b/drivers/gpu/drm/xe/xe_sync_types.h
@@ -6,13 +6,31 @@
#ifndef _XE_SYNC_TYPES_H_
#define _XE_SYNC_TYPES_H_
+#include <linux/dma-fence-array.h>
+#include <linux/kref.h>
+#include <linux/spinlock.h>
#include <linux/types.h>
-struct drm_syncobj;
-struct dma_fence;
-struct dma_fence_chain;
-struct drm_xe_sync;
-struct user_fence;
+struct xe_user_fence {
+ struct xe_device *xe;
+ struct kref refcount;
+ struct dma_fence_cb cb;
+ struct work_struct worker;
+ struct mm_struct *mm;
+ u64 __user *addr;
+ u64 value;
+ int signalled;
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ struct {
+ spinlock_t lock;
+ struct xe_eudebug *debugger;
+ u64 bind_ref_seqno;
+ u64 signalled_seqno;
+ struct work_struct worker;
+ } eudebug;
+#endif
+};
struct xe_sync_entry {
struct drm_syncobj *syncobj;
diff --git a/include/uapi/drm/xe_drm_eudebug.h b/include/uapi/drm/xe_drm_eudebug.h
index 5891f4d91358..b363583cb1d6 100644
--- a/include/uapi/drm/xe_drm_eudebug.h
+++ b/include/uapi/drm/xe_drm_eudebug.h
@@ -15,7 +15,8 @@ extern "C" {
*
* This ioctl is available in debug version 1.
*/
-#define DRM_XE_EUDEBUG_IOCTL_READ_EVENT _IO('j', 0x0)
+#define DRM_XE_EUDEBUG_IOCTL_READ_EVENT _IO('j', 0x0)
+#define DRM_XE_EUDEBUG_IOCTL_ACK_EVENT _IOW('j', 0x1, struct drm_xe_eudebug_ack_event)
/**
* struct drm_xe_eudebug_event - Base type of event delivered by xe_eudebug.
@@ -51,6 +52,7 @@ struct drm_xe_eudebug_event {
#define DRM_XE_EUDEBUG_EVENT_EXEC_QUEUE 3
#define DRM_XE_EUDEBUG_EVENT_VM_BIND 4
#define DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_DEBUG_DATA 5
+#define DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE 6
__u16 flags;
#define DRM_XE_EUDEBUG_EVENT_CREATE (1 << 0)
@@ -105,6 +107,24 @@ struct drm_xe_eudebug_event_exec_queue {
*
* All the events below VM_BIND will reference the VM_BIND
* they associate with, by field .vm_bind_ref_seqno.
+ * EVENT_UFENCE is only included if the client attached a
+ * sync of type UFENCE to its vm_bind_ioctl().
+ *
+ * When EVENT_UFENCE is sent by the driver, all the OPs of
+ * the original VM_BIND are completed and the [addr, range]
+ * contained in them are present and modifiable through the
+ * vm accessors. Accessing [addr, range] before the related
+ * ufence event leads to undefined results, as the actual bind
+ * operations are async and the backing storage might not be
+ * there at the moment the event is received.
+ *
+ * The client's UFENCE sync is held by the driver: the client's
+ * drm_xe_wait_ufence will not complete and the value of the
+ * ufence won't appear until the ufence is acked by the debugger
+ * process calling DRM_XE_EUDEBUG_IOCTL_ACK_EVENT with the
+ * event_ufence.base.seqno. This signals the fence, .value
+ * updates, and the wait completes, allowing the client to continue.
+ *
*/
struct drm_xe_eudebug_event_vm_bind {
@@ -112,6 +132,8 @@ struct drm_xe_eudebug_event_vm_bind {
__u64 vm_handle;
__u32 flags;
+#define DRM_XE_EUDEBUG_EVENT_VM_BIND_FLAG_UFENCE (1 << 0)
+
__u32 num_bind_ops;
};
@@ -131,6 +153,17 @@ struct drm_xe_eudebug_event_vm_bind_op_debug_data {
};
};
+struct drm_xe_eudebug_event_vm_bind_ufence {
+ struct drm_xe_eudebug_event base;
+ __u64 vm_bind_ref_seqno; /* *_event_vm_bind.base.seqno */
+};
+
+struct drm_xe_eudebug_ack_event {
+ __u32 type;
+ __u32 flags; /* MBZ */
+ __u64 seqno;
+};
+
#if defined(__cplusplus)
}
#endif
--
2.43.0
* [PATCH 09/20] drm/xe/eudebug: vm open/pread/pwrite
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (7 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 08/20] drm/xe/eudebug: Add UFENCE events with acks Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 10/20] drm/xe/eudebug: userptr vm pread/pwrite Mika Kuoppala
` (16 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala
Debugger needs read and write access to the client's vm, for
example to inspect ISA/ELF and to set up breakpoints.
Add an ioctl to open a target vm, given the debugger connection
and a vm_handle, and hook up pread/pwrite support.
Open takes a timeout argument so that standard fsync
can be used for explicit flushing between cpu/gpu for
the target vm.
Implement this for bo-backed storage. userptr will
be done in a following patch.
v2: - checkpatch (Maciej)
- 32bit fixes (Andrzej)
- bo_vmap (Mika)
- fix vm leak if can't allocate k_buffer (Mika)
- assert vm write held for vma (Matthew)
v3: - fw ref, ttm_bo_access
- timeout boundary check (Dominik)
- dont try to copy to user on zero bytes (Mika)
v4: - offset as unsigned long (Thomas)
- check XE_VMA_DESTROYED
v5: drm_dev_put before releasing debugger (Mika)
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/Makefile | 2 +-
drivers/gpu/drm/xe/regs/xe_gt_regs.h | 24 ++
drivers/gpu/drm/xe/xe_eudebug.c | 41 ++-
drivers/gpu/drm/xe/xe_eudebug.h | 12 +
drivers/gpu/drm/xe/xe_eudebug_types.h | 5 +
drivers/gpu/drm/xe/xe_eudebug_vm.c | 418 ++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_eudebug_vm.h | 8 +
include/uapi/drm/xe_drm_eudebug.h | 15 +
8 files changed, 521 insertions(+), 4 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_eudebug_vm.c
create mode 100644 drivers/gpu/drm/xe/xe_eudebug_vm.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index caf2b9e518ea..ccf051e65408 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -148,7 +148,7 @@ xe-$(CONFIG_DRM_XE_GPUSVM) += xe_svm.o
xe-$(CONFIG_DRM_GPUSVM) += xe_userptr.o
# debugging shaders with gdb (eudebug) support
-xe-$(CONFIG_DRM_XE_EUDEBUG) += xe_eudebug.o
+xe-$(CONFIG_DRM_XE_EUDEBUG) += xe_eudebug.o xe_eudebug_vm.o
# graphics hardware monitoring (HWMON) support
xe-$(CONFIG_HWMON) += xe_hwmon.o
diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
index 917a088c28f2..70e9a32c69a6 100644
--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
@@ -570,6 +570,30 @@
#define CCS_MODE_CSLICE(cslice, ccs) \
((ccs) << ((cslice) * CCS_MODE_CSLICE_WIDTH))
+#define RCU_ASYNC_FLUSH XE_REG(0x149fc)
+#define RCU_ASYNC_FLUSH_IN_PROGRESS REG_BIT(31)
+#define RCU_ASYNC_FLUSH_ENGINE_ID_SHIFT 28
+#define RCU_ASYNC_FLUSH_ENGINE_ID_DECODE1 REG_BIT(26)
+#define RCU_ASYNC_FLUSH_AMFS REG_BIT(8)
+#define RCU_ASYNC_FLUSH_PREFETCH REG_BIT(7)
+#define RCU_ASYNC_FLUSH_DATA_PORT REG_BIT(6)
+#define RCU_ASYNC_FLUSH_DATA_CACHE REG_BIT(5)
+#define RCU_ASYNC_FLUSH_HDC_PIPELINE REG_BIT(4)
+#define RCU_ASYNC_INVALIDATE_HDC_PIPELINE REG_BIT(3)
+#define RCU_ASYNC_INVALIDATE_CONSTANT_CACHE REG_BIT(2)
+#define RCU_ASYNC_INVALIDATE_TEXTURE_CACHE REG_BIT(1)
+#define RCU_ASYNC_INVALIDATE_INSTRUCTION_CACHE REG_BIT(0)
+#define RCU_ASYNC_FLUSH_AND_INVALIDATE_ALL ( \
+ RCU_ASYNC_FLUSH_AMFS | \
+ RCU_ASYNC_FLUSH_PREFETCH | \
+ RCU_ASYNC_FLUSH_DATA_PORT | \
+ RCU_ASYNC_FLUSH_DATA_CACHE | \
+ RCU_ASYNC_FLUSH_HDC_PIPELINE | \
+ RCU_ASYNC_INVALIDATE_HDC_PIPELINE | \
+ RCU_ASYNC_INVALIDATE_CONSTANT_CACHE | \
+ RCU_ASYNC_INVALIDATE_TEXTURE_CACHE | \
+ RCU_ASYNC_INVALIDATE_INSTRUCTION_CACHE)
+
#define FORCEWAKE_ACK_GT XE_REG(0x130044)
/* Applicable for all FORCEWAKE_DOMAIN and FORCEWAKE_ACK_DOMAIN regs */
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index d3a8ef2ea9e5..41a9cdfd6142 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -16,6 +16,7 @@
#include "xe_device.h"
#include "xe_eudebug.h"
#include "xe_eudebug_types.h"
+#include "xe_eudebug_vm.h"
#include "xe_exec_queue.h"
#include "xe_hw_engine.h"
#include "xe_macros.h"
@@ -52,8 +53,7 @@ event_fifo_num_events_peek(const struct xe_eudebug * const d)
return kfifo_len(&d->events.fifo);
}
-static bool
-xe_eudebug_detached(struct xe_eudebug *d)
+bool xe_eudebug_detached(struct xe_eudebug *d)
{
bool connected;
@@ -187,7 +187,7 @@ static void xe_eudebug_free(struct kref *ref)
kfree(d);
}
-static void xe_eudebug_put(struct xe_eudebug *d)
+void xe_eudebug_put(struct xe_eudebug *d)
{
kref_put(&d->ref, xe_eudebug_free);
}
@@ -662,6 +662,35 @@ static int xe_eudebug_remove_handle(struct xe_eudebug *d, int type, void *p,
return ret;
}
+static void *find_resource__unlocked(struct xe_eudebug_resources *res,
+ int type,
+ u32 id)
+{
+ struct xe_eudebug_resource *r;
+ struct xe_eudebug_handle *h;
+
+ r = resource_from_type(res, type);
+ h = xa_load(&r->xa, id);
+
+ return h ? (void *)(uintptr_t)h->key : NULL;
+}
+
+struct xe_vm *xe_eudebug_vm_get(struct xe_eudebug *d, u32 id)
+{
+ struct xe_vm *vm;
+
+ mutex_lock(&d->res->lock);
+ vm = find_resource__unlocked(d->res, XE_EUDEBUG_RES_TYPE_VM, id);
+ if (vm)
+ xe_vm_get(vm);
+
+ mutex_unlock(&d->res->lock);
+
+ return vm;
+}
+
static struct drm_xe_eudebug_event *
xe_eudebug_create_event(struct xe_eudebug *d, u16 type, u64 seqno, u16 flags,
u32 len)
@@ -1626,6 +1655,10 @@ static long xe_eudebug_ioctl(struct file *file,
ret = xe_eudebug_ack_event_ioctl(d, cmd, arg);
eu_dbg(d, "ioctl cmd=EVENT_ACK ret=%ld\n", ret);
break;
+ case DRM_XE_EUDEBUG_IOCTL_VM_OPEN:
+ ret = xe_eudebug_vm_open_ioctl(d, arg);
+ eu_dbg(d, "ioctl cmd=VM_OPEN ret=%ld\n", ret);
+ break;
default:
ret = -EINVAL;
}
@@ -1690,6 +1723,8 @@ xe_eudebug_connect(struct xe_device *xe,
spin_lock_init(&d->acks.lock);
d->acks.tree = RB_ROOT;
+ mutex_init(&d->hw.lock);
+
d->res = xe_eudebug_resources_alloc();
if (XE_IOCTL_DBG(xe, IS_ERR(d->res))) {
err = PTR_ERR(d->res);
diff --git a/drivers/gpu/drm/xe/xe_eudebug.h b/drivers/gpu/drm/xe/xe_eudebug.h
index d0f1b51564dc..74171cc81fe1 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.h
+++ b/drivers/gpu/drm/xe/xe_eudebug.h
@@ -18,6 +18,7 @@ struct xe_vma;
struct xe_vma_ops;
struct xe_exec_queue;
struct xe_user_fence;
+struct xe_eudebug;
#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
@@ -38,6 +39,10 @@ struct xe_user_fence;
#define xe_eudebug_assert(d, ...) xe_assert((d)->xe, ##__VA_ARGS__)
+#define xe_eudebug_for_each_hw_engine(__hwe, __gt, __id) \
+ for_each_hw_engine(__hwe, __gt, __id) \
+ if (xe_hw_engine_has_eudebug(__hwe))
+
int xe_eudebug_connect_ioctl(struct drm_device *dev,
void *data,
struct drm_file *file);
@@ -47,10 +52,15 @@ bool xe_eudebug_is_enabled(struct xe_device *xe);
void xe_eudebug_file_close(struct xe_file *xef);
+bool xe_eudebug_detached(struct xe_eudebug *d);
+
void xe_eudebug_vm_create(struct xe_file *xef, struct xe_vm *vm);
void xe_eudebug_vm_destroy(struct xe_file *xef, struct xe_vm *vm);
+
int xe_eudebug_enable(struct xe_device *xe, bool enable);
+struct xe_vm *xe_eudebug_vm_get(struct xe_eudebug *d, u32 vm_id);
+
void xe_eudebug_exec_queue_create(struct xe_file *xef, struct xe_exec_queue *q);
void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q);
@@ -60,6 +70,8 @@ void xe_eudebug_ufence_init(struct xe_user_fence *ufence);
void xe_eudebug_ufence_fini(struct xe_user_fence *ufence);
bool xe_eudebug_ufence_track(struct xe_user_fence *ufence);
+void xe_eudebug_put(struct xe_eudebug *d);
+
#else
static inline int xe_eudebug_connect_ioctl(struct drm_device *dev,
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
index a294e2f4e7df..292e93c72a64 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_types.h
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -139,6 +139,11 @@ struct xe_eudebug {
struct rb_root tree;
} acks;
+
+ struct {
+ /** @lock: guards access to hw state */
+ struct mutex lock;
+ } hw;
};
#endif /* _XE_EUDEBUG_TYPES_H_ */
diff --git a/drivers/gpu/drm/xe/xe_eudebug_vm.c b/drivers/gpu/drm/xe/xe_eudebug_vm.c
new file mode 100644
index 000000000000..4dd747680a9c
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug_vm.c
@@ -0,0 +1,418 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#include "xe_eudebug_vm.h"
+
+#include <linux/anon_inodes.h>
+#include <linux/fs.h>
+#include <linux/vmalloc.h>
+
+#include <drm/drm_drv.h>
+
+#include "xe_bo.h"
+#include "xe_device.h"
+#include "xe_eudebug.h"
+#include "xe_eudebug_types.h"
+#include "xe_force_wake.h"
+#include "xe_gt.h"
+#include "xe_mmio.h"
+#include "xe_vm.h"
+
+#include "regs/xe_gt_regs.h"
+#include "regs/xe_engine_regs.h"
+
+static int xe_eudebug_vma_access(struct xe_vma *vma,
+ unsigned long offset_in_vma,
+ void *buf, unsigned long len, bool write)
+{
+ struct xe_bo *bo;
+ u64 bytes;
+
+ lockdep_assert_held_write(&xe_vma_vm(vma)->lock);
+
+ if (XE_WARN_ON(offset_in_vma >= xe_vma_size(vma)))
+ return -EINVAL;
+
+ if (vma->gpuva.flags & XE_VMA_DESTROYED)
+ return -EINVAL;
+
+ bytes = min_t(u64, len, xe_vma_size(vma) - offset_in_vma);
+ if (!bytes)
+ return 0;
+
+ bo = xe_bo_get(xe_vma_bo(vma));
+ if (bo) {
+ int ret;
+
+ ret = ttm_bo_access(&bo->ttm, offset_in_vma, buf, bytes, write);
+
+ xe_bo_put(bo);
+
+ return ret;
+ }
+
+ return -EINVAL;
+}
+
+static int xe_eudebug_vm_access(struct xe_vm *vm, unsigned long offset,
+ void *buf, unsigned long len, bool write)
+{
+ struct xe_vma *vma;
+ int ret;
+
+ down_write(&vm->lock);
+
+ vma = xe_vm_find_overlapping_vma(vm, offset, len);
+ if (vma) {
+ /* XXX: why find overlapping returns below start? */
+ if (offset < xe_vma_start(vma) ||
+ offset >= (xe_vma_start(vma) + xe_vma_size(vma))) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* Offset into vma */
+ offset -= xe_vma_start(vma);
+ ret = xe_eudebug_vma_access(vma, offset, buf, len, write);
+ } else {
+ ret = -EINVAL;
+ }
+
+out:
+ up_write(&vm->lock);
+
+ return ret;
+}
+
+struct vm_file {
+ struct xe_eudebug *debugger;
+ struct xe_vm *vm;
+ u64 flags;
+ u64 vm_handle;
+ unsigned int timeout_us;
+};
+
+static ssize_t __vm_read_write(struct xe_vm *vm,
+ void *bb,
+ char __user *r_buffer,
+ const char __user *w_buffer,
+ unsigned long offset,
+ unsigned long len,
+ const bool write)
+{
+ ssize_t ret;
+
+ if (!len)
+ return 0;
+
+ if (write) {
+ ret = copy_from_user(bb, w_buffer, len);
+ if (ret)
+ return -EFAULT;
+
+ ret = xe_eudebug_vm_access(vm, offset, bb, len, true);
+ if (ret <= 0)
+ return ret;
+
+ len = ret;
+ } else {
+ ret = xe_eudebug_vm_access(vm, offset, bb, len, false);
+ if (ret <= 0)
+ return ret;
+
+ len = ret;
+
+ ret = copy_to_user(r_buffer, bb, len);
+ if (ret)
+ return -EFAULT;
+ }
+
+ return len;
+}
+
+static ssize_t __xe_eudebug_vm_access(struct file *file,
+ char __user *r_buffer,
+ const char __user *w_buffer,
+ size_t count, loff_t *__pos)
+{
+ struct vm_file *vmf = file->private_data;
+ struct xe_eudebug * const d = vmf->debugger;
+ struct xe_device * const xe = d->xe;
+ const bool write = !!w_buffer;
+ struct xe_vm *vm;
+ ssize_t copied = 0;
+ ssize_t bytes_left = count;
+ ssize_t ret;
+ unsigned long alloc_len;
+ loff_t pos = *__pos;
+ void *k_buffer;
+
+ if (XE_IOCTL_DBG(xe, write && r_buffer))
+ return -EINVAL;
+
+ vm = xe_eudebug_vm_get(d, vmf->vm_handle);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, vm != vmf->vm)) {
+ eu_warn(d, "vm_access(%s): vm handle mismatch vm_handle=%llu, flags=0x%llx, pos=%llu, count=%zu\n",
+ write ? "write" : "read",
+ vmf->vm_handle, vmf->flags, pos, count);
+ xe_vm_put(vm);
+ return -EINVAL;
+ }
+
+ if (!count) {
+ xe_vm_put(vm);
+ return 0;
+ }
+
+ alloc_len = min_t(unsigned long, ALIGN(count, PAGE_SIZE), 64 * SZ_1M);
+ do {
+ k_buffer = vmalloc(alloc_len);
+ if (k_buffer)
+ break;
+
+ alloc_len >>= 1;
+ } while (alloc_len > PAGE_SIZE);
+
+ if (XE_IOCTL_DBG(xe, !k_buffer)) {
+ xe_vm_put(vm);
+ return -ENOMEM;
+ }
+
+ do {
+ const ssize_t len = min_t(ssize_t, bytes_left, alloc_len);
+
+ ret = __vm_read_write(vm, k_buffer,
+ write ? NULL : r_buffer + copied,
+ write ? w_buffer + copied : NULL,
+ pos + copied,
+ len,
+ write);
+ if (ret <= 0)
+ break;
+
+ bytes_left -= ret;
+ copied += ret;
+ } while (bytes_left > 0);
+
+ vfree(k_buffer);
+ xe_vm_put(vm);
+
+ if (XE_WARN_ON(copied < 0))
+ copied = 0;
+
+ *__pos += copied;
+
+ return copied ?: ret;
+}
+
+static ssize_t xe_eudebug_vm_read(struct file *file,
+ char __user *buffer,
+ size_t count, loff_t *pos)
+{
+ return __xe_eudebug_vm_access(file, buffer, NULL, count, pos);
+}
+
+static ssize_t xe_eudebug_vm_write(struct file *file,
+ const char __user *buffer,
+ size_t count, loff_t *pos)
+{
+ return __xe_eudebug_vm_access(file, NULL, buffer, count, pos);
+}
+
+static int engine_rcu_flush(struct xe_eudebug *d,
+ struct xe_hw_engine *hwe,
+ unsigned int timeout_us)
+{
+ const struct xe_reg psmi_addr = RING_PSMI_CTL(hwe->mmio_base);
+ struct xe_gt *gt = hwe->gt;
+ unsigned int fw_ref;
+ u32 mask = RCU_ASYNC_FLUSH_AND_INVALIDATE_ALL;
+ u32 psmi_ctrl;
+ u32 id;
+ int ret;
+
+ if (hwe->class == XE_ENGINE_CLASS_RENDER)
+ id = 0;
+ else if (hwe->class == XE_ENGINE_CLASS_COMPUTE)
+ id = hwe->instance + 1;
+ else
+ return -EINVAL;
+
+ if (id < 8)
+ mask |= id << RCU_ASYNC_FLUSH_ENGINE_ID_SHIFT;
+ else
+ mask |= (id - 8) << RCU_ASYNC_FLUSH_ENGINE_ID_SHIFT |
+ RCU_ASYNC_FLUSH_ENGINE_ID_DECODE1;
+
+ fw_ref = xe_force_wake_get(gt_to_fw(gt), hwe->domain);
+ if (!fw_ref)
+ return -ETIMEDOUT;
+
+ /* Prevent concurrent flushes */
+ mutex_lock(&d->hw.lock);
+ psmi_ctrl = xe_mmio_read32(&gt->mmio, psmi_addr);
+ if (!(psmi_ctrl & IDLE_MSG_DISABLE))
+ xe_mmio_write32(&gt->mmio, psmi_addr, _MASKED_BIT_ENABLE(IDLE_MSG_DISABLE));
+
+ /* XXX: Timeout is per operation but in here we flush previous */
+ ret = xe_mmio_wait32(&gt->mmio, RCU_ASYNC_FLUSH,
+ RCU_ASYNC_FLUSH_IN_PROGRESS, 0,
+ timeout_us, NULL, false);
+ if (ret)
+ goto out;
+
+ xe_mmio_write32(&gt->mmio, RCU_ASYNC_FLUSH, mask);
+
+ ret = xe_mmio_wait32(&gt->mmio, RCU_ASYNC_FLUSH,
+ RCU_ASYNC_FLUSH_IN_PROGRESS, 0,
+ timeout_us, NULL, false);
+out:
+ if (!(psmi_ctrl & IDLE_MSG_DISABLE))
+ xe_mmio_write32(&gt->mmio, psmi_addr, _MASKED_BIT_DISABLE(IDLE_MSG_DISABLE));
+
+ mutex_unlock(&d->hw.lock);
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+
+ return ret;
+}
+
+static int xe_eudebug_vm_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+{
+ struct vm_file *vmf = file->private_data;
+ struct xe_eudebug *d = vmf->debugger;
+ struct xe_gt *gt;
+ int gt_id;
+ int ret = -EINVAL;
+
+ eu_dbg(d, "vm_fsync: vm_handle=%llu, flags=0x%llx, start=%llu, end=%llu datasync=%d\n",
+ vmf->vm_handle, vmf->flags, start, end, datasync);
+
+ for_each_gt(gt, d->xe, gt_id) {
+ struct xe_hw_engine *hwe;
+ enum xe_hw_engine_id id;
+
+ /* XXX: vm open per engine? */
+ xe_eudebug_for_each_hw_engine(hwe, gt, id) {
+ ret = engine_rcu_flush(d, hwe, vmf->timeout_us);
+ if (ret)
+ break;
+ }
+ }
+
+ return ret;
+}
+
+static int xe_eudebug_vm_release(struct inode *inode, struct file *file)
+{
+ struct vm_file *vmf = file->private_data;
+ struct xe_eudebug *d = vmf->debugger;
+
+ eu_dbg(d, "vm_release: vm_handle=%llu, flags=0x%llx",
+ vmf->vm_handle, vmf->flags);
+
+ xe_vm_put(vmf->vm);
+ drm_dev_put(&d->xe->drm);
+ xe_eudebug_put(d);
+
+ kfree(vmf);
+
+ return 0;
+}
+
+static const struct file_operations vm_fops = {
+ .owner = THIS_MODULE,
+ .llseek = generic_file_llseek,
+ .read = xe_eudebug_vm_read,
+ .write = xe_eudebug_vm_write,
+ .fsync = xe_eudebug_vm_fsync,
+ .mmap = NULL,
+ .release = xe_eudebug_vm_release,
+};
+
+long xe_eudebug_vm_open_ioctl(struct xe_eudebug *d, unsigned long arg)
+{
+ struct drm_xe_eudebug_vm_open param;
+ struct xe_device * const xe = d->xe;
+ struct vm_file *vmf = NULL;
+ struct xe_vm *vm;
+ struct file *file;
+ long ret = 0;
+ int fd;
+
+ if (XE_IOCTL_DBG(xe, _IOC_SIZE(DRM_XE_EUDEBUG_IOCTL_VM_OPEN) != sizeof(param)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, !(_IOC_DIR(DRM_XE_EUDEBUG_IOCTL_VM_OPEN) & _IOC_WRITE)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, copy_from_user(&param, (void __user *)arg, sizeof(param))))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, param.flags))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, xe_eudebug_detached(d)))
+ return -ENOTCONN;
+
+ vm = xe_eudebug_vm_get(d, param.vm_handle);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ vmf = kzalloc(sizeof(*vmf), GFP_KERNEL);
+ if (XE_IOCTL_DBG(xe, !vmf)) {
+ ret = -ENOMEM;
+ goto out_vm_put;
+ }
+
+ fd = get_unused_fd_flags(O_CLOEXEC);
+ if (XE_IOCTL_DBG(xe, fd < 0)) {
+ ret = fd;
+ goto out_free;
+ }
+
+ kref_get(&d->ref);
+ vmf->debugger = d;
+ vmf->vm = vm;
+ vmf->flags = param.flags;
+ vmf->vm_handle = param.vm_handle;
+ vmf->timeout_us = div64_u64(param.timeout_ns, 1000ull);
+
+ file = anon_inode_getfile("[xe_eudebug.vm]", &vm_fops, vmf, O_RDWR);
+ if (IS_ERR(file)) {
+ ret = PTR_ERR(file);
+ XE_IOCTL_DBG(xe, ret);
+ file = NULL;
+ goto out_fd_put;
+ }
+
+ drm_dev_get(&xe->drm);
+
+ file->f_mode |= FMODE_PREAD | FMODE_PWRITE |
+ FMODE_READ | FMODE_WRITE | FMODE_LSEEK;
+
+ fd_install(fd, file);
+
+ eu_dbg(d, "vm_open: handle=%llu, flags=0x%llx, fd=%d",
+ vmf->vm_handle, vmf->flags, fd);
+
+ XE_WARN_ON(ret);
+
+ return fd;
+
+out_fd_put:
+ put_unused_fd(fd);
+ xe_eudebug_put(d);
+out_free:
+ kfree(vmf);
+out_vm_put:
+ xe_vm_put(vm);
+
+ XE_WARN_ON(ret >= 0);
+
+ return ret;
+}
diff --git a/drivers/gpu/drm/xe/xe_eudebug_vm.h b/drivers/gpu/drm/xe/xe_eudebug_vm.h
new file mode 100644
index 000000000000..b3dc5618a5e6
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug_vm.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+struct xe_eudebug;
+
+long xe_eudebug_vm_open_ioctl(struct xe_eudebug *d, unsigned long arg);
diff --git a/include/uapi/drm/xe_drm_eudebug.h b/include/uapi/drm/xe_drm_eudebug.h
index b363583cb1d6..139926a0f38c 100644
--- a/include/uapi/drm/xe_drm_eudebug.h
+++ b/include/uapi/drm/xe_drm_eudebug.h
@@ -17,6 +17,7 @@ extern "C" {
*/
#define DRM_XE_EUDEBUG_IOCTL_READ_EVENT _IO('j', 0x0)
#define DRM_XE_EUDEBUG_IOCTL_ACK_EVENT _IOW('j', 0x1, struct drm_xe_eudebug_ack_event)
+#define DRM_XE_EUDEBUG_IOCTL_VM_OPEN _IOW('j', 0x2, struct drm_xe_eudebug_vm_open)
/**
* struct drm_xe_eudebug_event - Base type of event delivered by xe_eudebug.
@@ -164,6 +165,20 @@ struct drm_xe_eudebug_ack_event {
__u64 seqno;
};
+struct drm_xe_eudebug_vm_open {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @vm_handle: id of vm */
+ __u64 vm_handle;
+
+ /** @flags: flags */
+ __u64 flags;
+
+ /** @timeout_ns: Timeout value in nanoseconds for operations (fsync) */
+ __u64 timeout_ns;
+};
+
#if defined(__cplusplus)
}
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 10/20] drm/xe/eudebug: userptr vm pread/pwrite
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (8 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 09/20] drm/xe/eudebug: vm open/pread/pwrite Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 11/20] drm/xe/eudebug: hw enablement for eudebug Mika Kuoppala
` (15 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala, Simona Vetter, Dominik Grzegorzek
Implement debugger vm access for userptrs.
When the bind is done, take a reference to the current task so that
we know from which task the address was bound. Then, during
debugger pread/pwrite, we use this target task as a
parameter to access the debuggee vm with access_process_vm().
This is based on suggestions from Simona, Thomas and Joonas.
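The access_process_vm() approach has a direct userspace analog in
process_vm_readv(); the standalone sketch below (not part of the patch)
demonstrates the same idea of reaching into a target task's address
space given a task reference:

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/*
 * Userspace analog of the kernel path above: read 'len' bytes at
 * 'addr' inside task 'pid', the way a debugger reaches into the
 * debuggee. Reading our own pid keeps the demo self-contained.
 */
static ssize_t read_remote(pid_t pid, void *addr, void *buf, size_t len)
{
	struct iovec local = { .iov_base = buf, .iov_len = len };
	struct iovec remote = { .iov_base = addr, .iov_len = len };

	return process_vm_readv(pid, &local, 1, &remote, 1, 0);
}
```

As in the kernel case, the read goes through the target's page tables
rather than any GPU mapping, which is exactly what userptr vmas need.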
v2: need to add offset into vma (Dominik)
v3: move code into xe_userptr.c (Mika)
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Andrzej Hajda <andrzej.hajda@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/xe_eudebug_vm.c | 16 +++++++++++++++
drivers/gpu/drm/xe/xe_userptr.c | 4 ++++
drivers/gpu/drm/xe/xe_userptr.h | 32 ++++++++++++++++++++++++++++++
3 files changed, 52 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_eudebug_vm.c b/drivers/gpu/drm/xe/xe_eudebug_vm.c
index 4dd747680a9c..6d341bae4ffc 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_vm.c
+++ b/drivers/gpu/drm/xe/xe_eudebug_vm.c
@@ -51,6 +51,22 @@ static int xe_eudebug_vma_access(struct xe_vma *vma,
xe_bo_put(bo);
return ret;
+ } else if (xe_vma_is_userptr(vma)) {
+ struct xe_userptr *userptr = &to_userptr_vma(vma)->userptr;
+
+ if (XE_WARN_ON(!userptr->eudebug.task))
+ return -EINVAL;
+
+ /*
+ * access_remote_vm() would fit, as the userptr notifier holds an
+ * mm ref so we would not need to carry a task ref at all. But
+ * access_remote_vm() is not exported while access_process_vm()
+ * is, so use the latter instead.
+ */
+ return access_process_vm(userptr->eudebug.task,
+ xe_vma_userptr(vma) + offset_in_vma,
+ buf, bytes,
+ write ? FOLL_WRITE : 0);
}
return -EINVAL;
diff --git a/drivers/gpu/drm/xe/xe_userptr.c b/drivers/gpu/drm/xe/xe_userptr.c
index 0d9130b1958a..96f8ae352c18 100644
--- a/drivers/gpu/drm/xe/xe_userptr.c
+++ b/drivers/gpu/drm/xe/xe_userptr.c
@@ -292,6 +292,8 @@ int xe_userptr_setup(struct xe_userptr_vma *uvma, unsigned long start,
userptr->pages.notifier_seq = LONG_MAX;
+ xe_eudebug_track_userptr_task(userptr);
+
return 0;
}
@@ -300,6 +302,8 @@ void xe_userptr_remove(struct xe_userptr_vma *uvma)
struct xe_vm *vm = xe_vma_vm(&uvma->vma);
struct xe_userptr *userptr = &uvma->userptr;
+ xe_eudebug_untrack_userptr_task(userptr);
+
drm_gpusvm_free_pages(&vm->svm.gpusvm, &uvma->userptr.pages,
xe_vma_size(&uvma->vma) >> PAGE_SHIFT);
diff --git a/drivers/gpu/drm/xe/xe_userptr.h b/drivers/gpu/drm/xe/xe_userptr.h
index ef801234991e..4af2569f1fa1 100644
--- a/drivers/gpu/drm/xe/xe_userptr.h
+++ b/drivers/gpu/drm/xe/xe_userptr.h
@@ -66,6 +66,12 @@ struct xe_userptr {
#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
u32 divisor;
#endif
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+ struct {
+ struct task_struct *task;
+ } eudebug;
+#endif
};
#if IS_ENABLED(CONFIG_DRM_GPUSVM)
@@ -104,4 +110,30 @@ static inline void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
{
}
#endif
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+static inline void xe_eudebug_track_userptr_task(struct xe_userptr *userptr)
+{
+ /*
+ * We could use the mm that is on the notifier, but
+ * access_remote_vm() is not exported. Thus we take a
+ * reference to the task for access_process_vm().
+ */
+ userptr->eudebug.task = get_task_struct(current);
+}
+
+static inline void xe_eudebug_untrack_userptr_task(struct xe_userptr *userptr)
+{
+ put_task_struct(userptr->eudebug.task);
+}
+#else
+static inline void xe_eudebug_track_userptr_task(struct xe_userptr *userptr)
+{
+}
+
+static inline void xe_eudebug_untrack_userptr_task(struct xe_userptr *userptr)
+{
+}
+#endif /* CONFIG_DRM_XE_EUDEBUG */
+
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 11/20] drm/xe/eudebug: hw enablement for eudebug
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (9 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 10/20] drm/xe/eudebug: userptr vm pread/pwrite Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 12/20] drm/xe/eudebug: Introduce EU control interface Mika Kuoppala
` (14 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Dominik Grzegorzek, Mika Kuoppala
From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
In order to turn on debug capabilities (i.e. breakpoints), TD_CTL
and some other registers need to be programmed. Implement eudebug
mode enabling, including eudebug-related workarounds.
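As a side computation, the version-dependent TD_CTL bit selection done by
xe_eudebug_init_hw_engine() below can be sketched like this (bit values
copied from the patch; td_ctl_eudebug_mask is a hypothetical helper, not
in the patch):

```c
#include <stdint.h>

/* TD_CTL bits as defined in xe_gt_regs.h by this patch. */
#define TD_CTL_FEH_AND_FEE_ENABLE		(1u << 7)
#define TD_CTL_FORCE_THREAD_BREAKPOINT_ENABLE	(1u << 4)
#define TD_CTL_BREAKPOINT_ENABLE		(1u << 2)
#define TD_CTL_GLOBAL_DEBUG_ENABLE		(1u << 0)	/* XeHP */

/*
 * Hypothetical helper mirroring xe_eudebug_init_hw_engine(): which
 * TD_CTL bits get set for a given GRAPHICS_VERx100 when eudebug is
 * enabled.
 */
static uint32_t td_ctl_eudebug_mask(unsigned int verx100)
{
	uint32_t mask = 0;

	if (verx100 >= 1200)
		mask |= TD_CTL_BREAKPOINT_ENABLE |
			TD_CTL_FORCE_THREAD_BREAKPOINT_ENABLE |
			TD_CTL_FEH_AND_FEE_ENABLE;

	if (verx100 >= 1250)	/* XeHP and later also need global enable */
		mask |= TD_CTL_GLOBAL_DEBUG_ENABLE;

	return mask;
}
```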
v2: Move workarounds to xe_wa_oob. Use reg_sr directly instead of
xe_rtp, as it is better suited for the dynamic manipulation of those
registers that we do later in the series.
v3: get rid of undefining XE_MCR_REG (Mika)
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/Makefile | 2 +-
drivers/gpu/drm/xe/regs/xe_engine_regs.h | 4 +
drivers/gpu/drm/xe/regs/xe_gt_regs.h | 19 +++
drivers/gpu/drm/xe/xe_eudebug_hw.c | 72 +++++++++
drivers/gpu/drm/xe/xe_eudebug_hw.h | 25 ++++
drivers/gpu/drm/xe/xe_gt_debug.c | 179 +++++++++++++++++++++++
drivers/gpu/drm/xe/xe_gt_debug.h | 32 ++++
drivers/gpu/drm/xe/xe_reg_sr.c | 21 ++-
drivers/gpu/drm/xe/xe_reg_sr.h | 4 +-
drivers/gpu/drm/xe/xe_reg_whitelist.c | 2 +-
drivers/gpu/drm/xe/xe_rtp.c | 2 +-
drivers/gpu/drm/xe/xe_wa_oob.rules | 2 +
12 files changed, 354 insertions(+), 10 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_eudebug_hw.c
create mode 100644 drivers/gpu/drm/xe/xe_eudebug_hw.h
create mode 100644 drivers/gpu/drm/xe/xe_gt_debug.c
create mode 100644 drivers/gpu/drm/xe/xe_gt_debug.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index ccf051e65408..05c74032ed63 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -148,7 +148,7 @@ xe-$(CONFIG_DRM_XE_GPUSVM) += xe_svm.o
xe-$(CONFIG_DRM_GPUSVM) += xe_userptr.o
# debugging shaders with gdb (eudebug) support
-xe-$(CONFIG_DRM_XE_EUDEBUG) += xe_eudebug.o xe_eudebug_vm.o
+xe-$(CONFIG_DRM_XE_EUDEBUG) += xe_eudebug.o xe_eudebug_vm.o xe_eudebug_hw.o xe_gt_debug.o
# graphics hardware monitoring (HWMON) support
xe-$(CONFIG_HWMON) += xe_hwmon.o
diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
index 68172b0248a6..d588975e5be9 100644
--- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
@@ -123,6 +123,10 @@
#define INDIRECT_RING_STATE(base) XE_REG((base) + 0x108)
+#define CS_DEBUG_MODE2(base) XE_REG((base) + 0xd8, XE_REG_OPTION_MASKED)
+#define INST_STATE_CACHE_INVALIDATE REG_BIT(6)
+#define GLOBAL_DEBUG_ENABLE REG_BIT(5)
+
#define RING_BBADDR(base) XE_REG((base) + 0x140)
#define RING_BBADDR_UDW(base) XE_REG((base) + 0x168)
diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
index 70e9a32c69a6..7920e687f6e7 100644
--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
@@ -480,10 +480,20 @@
#define DG2_DISABLE_ROUND_ENABLE_ALLOW_FOR_SSLA REG_BIT(15)
#define CLEAR_OPTIMIZATION_DISABLE REG_BIT(6)
+#define TD_CTL XE_REG_MCR(0xe400)
+#define TD_CTL_FEH_AND_FEE_ENABLE REG_BIT(7) /* forced halt and exception */
+#define TD_CTL_FORCE_EXTERNAL_HALT REG_BIT(6)
+#define TD_CTL_FORCE_THREAD_BREAKPOINT_ENABLE REG_BIT(4)
+#define TD_CTL_FORCE_EXCEPTION REG_BIT(3)
+#define TD_CTL_BREAKPOINT_ENABLE REG_BIT(2)
+#define TD_CTL_GLOBAL_DEBUG_ENABLE REG_BIT(0) /* XeHP */
+
#define CACHE_MODE_SS XE_REG_MCR(0xe420, XE_REG_OPTION_MASKED)
#define DISABLE_ECC REG_BIT(5)
#define ENABLE_PREFETCH_INTO_IC REG_BIT(3)
+#define EU_ATT(reg, row) XE_REG_MCR(((reg) ? 0xe478 : 0xe470) + (row) * 4)
+
#define ROW_CHICKEN4 XE_REG_MCR(0xe48c, XE_REG_OPTION_MASKED)
#define DISABLE_GRF_CLEAR REG_BIT(13)
#define XEHP_DIS_BBL_SYSPIPE REG_BIT(11)
@@ -493,6 +503,8 @@
#define THREAD_EX_ARB_MODE REG_GENMASK(3, 2)
#define THREAD_EX_ARB_MODE_RR_AFTER_DEP REG_FIELD_PREP(THREAD_EX_ARB_MODE, 0x2)
+#define EU_ATT_CLR(reg, row) XE_REG_MCR(((reg) ? 0xe698 : 0xe490) + (row) * 4)
+
#define ROW_CHICKEN3 XE_REG_MCR(0xe49c, XE_REG_OPTION_MASKED)
#define XE2_EUPEND_CHK_FLUSH_DIS REG_BIT(14)
#define DIS_FIX_EOT1_FLUSH REG_BIT(9)
@@ -507,11 +519,13 @@
#define MDQ_ARBITRATION_MODE REG_BIT(12)
#define STALL_DOP_GATING_DISABLE REG_BIT(5)
#define EARLY_EOT_DIS REG_BIT(1)
#define ROW_CHICKEN2 XE_REG_MCR(0xe4f4, XE_REG_OPTION_MASKED)
#define DISABLE_READ_SUPPRESSION REG_BIT(15)
#define DISABLE_EARLY_READ REG_BIT(14)
#define ENABLE_LARGE_GRF_MODE REG_BIT(12)
+#define XEHPC_DISABLE_BTB REG_BIT(11)
#define PUSH_CONST_DEREF_HOLD_DIS REG_BIT(8)
#define DISABLE_TDL_SVHS_GATING REG_BIT(1)
#define DISABLE_DOP_GATING REG_BIT(0)
@@ -570,6 +584,11 @@
#define CCS_MODE_CSLICE(cslice, ccs) \
((ccs) << ((cslice) * CCS_MODE_CSLICE_WIDTH))
+#define RCU_DEBUG_1 XE_REG(0x14a00)
+#define RCU_DEBUG_1_ENGINE_STATUS REG_GENMASK(2, 0)
+#define RCU_DEBUG_1_RUNALONE_ACTIVE REG_BIT(2)
+#define RCU_DEBUG_1_CONTEXT_ACTIVE REG_BIT(0)
+
#define RCU_ASYNC_FLUSH XE_REG(0x149fc)
#define RCU_ASYNC_FLUSH_IN_PROGRESS REG_BIT(31)
#define RCU_ASYNC_FLUSH_ENGINE_ID_SHIFT 28
diff --git a/drivers/gpu/drm/xe/xe_eudebug_hw.c b/drivers/gpu/drm/xe/xe_eudebug_hw.c
new file mode 100644
index 000000000000..aa31b4c91713
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug_hw.c
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#include "xe_eudebug_hw.h"
+
+#include <linux/delay.h>
+#include <linux/pm_runtime.h>
+#include <generated/xe_wa_oob.h>
+
+#include "regs/xe_gt_regs.h"
+#include "regs/xe_engine_regs.h"
+
+#include "xe_eudebug.h"
+#include "xe_eudebug_types.h"
+#include "xe_exec_queue.h"
+#include "xe_exec_queue_types.h"
+#include "xe_force_wake.h"
+#include "xe_gt.h"
+#include "xe_gt_debug.h"
+#include "xe_gt_mcr.h"
+#include "xe_hw_engine.h"
+#include "xe_lrc.h"
+#include "xe_macros.h"
+#include "xe_mmio.h"
+#include "xe_reg_sr.h"
+#include "xe_rtp.h"
+#include "xe_wa.h"
+
+static void add_sr_entry(struct xe_hw_engine *hwe,
+ struct xe_reg_mcr mcr_reg,
+ u32 mask, bool enable)
+{
+ const struct xe_reg_sr_entry sr_entry = {
+ .reg = mcr_reg.__reg,
+ .clr_bits = mask,
+ .set_bits = enable ? mask : 0,
+ .read_mask = mask,
+ };
+
+ xe_reg_sr_add(&hwe->reg_sr, &sr_entry, hwe->gt, true);
+}
+
+void xe_eudebug_init_hw_engine(struct xe_hw_engine *hwe, bool enable)
+{
+ struct xe_gt *gt = hwe->gt;
+ struct xe_device *xe = gt_to_xe(gt);
+
+ if (!xe_rtp_match_first_render_or_compute(xe, gt, hwe))
+ return;
+
+ if (XE_GT_WA(gt, 18022722726))
+ add_sr_entry(hwe, ROW_CHICKEN,
+ STALL_DOP_GATING_DISABLE, enable);
+
+ if (XE_GT_WA(gt, 14015474168))
+ add_sr_entry(hwe, ROW_CHICKEN2,
+ XEHPC_DISABLE_BTB,
+ enable);
+
+ if (xe->info.graphics_verx100 >= 1200)
+ add_sr_entry(hwe, TD_CTL,
+ TD_CTL_BREAKPOINT_ENABLE |
+ TD_CTL_FORCE_THREAD_BREAKPOINT_ENABLE |
+ TD_CTL_FEH_AND_FEE_ENABLE,
+ enable);
+
+ if (xe->info.graphics_verx100 >= 1250)
+ add_sr_entry(hwe, TD_CTL,
+ TD_CTL_GLOBAL_DEBUG_ENABLE, enable);
+}
diff --git a/drivers/gpu/drm/xe/xe_eudebug_hw.h b/drivers/gpu/drm/xe/xe_eudebug_hw.h
new file mode 100644
index 000000000000..7362ed9bde68
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug_hw.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#ifndef _XE_EUDEBUG_HW_H_
+#define _XE_EUDEBUG_HW_H_
+
+#include <linux/types.h>
+
+struct xe_eudebug;
+struct xe_hw_engine;
+struct xe_gt;
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+
+void xe_eudebug_init_hw_engine(struct xe_hw_engine *hwe, bool enable);
+
+#else /* CONFIG_DRM_XE_EUDEBUG */
+
+#endif /* CONFIG_DRM_XE_EUDEBUG */
+
+#endif /* _XE_EUDEBUG_HW_H_ */
diff --git a/drivers/gpu/drm/xe/xe_gt_debug.c b/drivers/gpu/drm/xe/xe_gt_debug.c
new file mode 100644
index 000000000000..314eef6734c3
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_gt_debug.c
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#include "regs/xe_gt_regs.h"
+#include "xe_device.h"
+#include "xe_force_wake.h"
+#include "xe_gt.h"
+#include "xe_gt_topology.h"
+#include "xe_gt_debug.h"
+#include "xe_gt_mcr.h"
+#include "xe_pm.h"
+#include "xe_macros.h"
+
+unsigned int xe_gt_eu_att_regs(struct xe_gt *gt)
+{
+ return (GRAPHICS_VERx100(gt_to_xe(gt)) >= 3000) ? 2u : 1u;
+}
+
+int xe_gt_foreach_dss_group_instance(struct xe_gt *gt,
+ int (*fn)(struct xe_gt *gt,
+ void *data,
+ u16 group,
+ u16 instance,
+ bool present),
+ void *data)
+{
+ const enum xe_force_wake_domains fw_domains = XE_FW_GT;
+ xe_dss_mask_t dss_mask;
+ unsigned int dss, fw_ref;
+ u16 group, instance;
+ int ret = 0;
+
+ fw_ref = xe_force_wake_get(gt_to_fw(gt), fw_domains);
+ if (!fw_ref)
+ return -ETIMEDOUT;
+
+ bitmap_or(dss_mask, gt->fuse_topo.g_dss_mask, gt->fuse_topo.c_dss_mask,
+ XE_MAX_DSS_FUSE_BITS);
+
+ /*
+ * Note: This removes terminating zeros when the last dss is fused out!
+ * In order for the bitmask to be exactly the same as with i915, we
+ * would need to figure out the max dss for a given platform, most
+ * probably by querying hwconfig.
+ */
+
+ for (dss = 0;
+ dss <= find_last_bit(dss_mask, XE_MAX_DSS_FUSE_BITS);
+ dss++) {
+ xe_gt_mcr_get_dss_steering(gt, dss, &group, &instance);
+
+ ret = fn(gt, data, group, instance, test_bit(dss, dss_mask));
+ if (ret)
+ break;
+ }
+
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+
+ return ret;
+}
+
+static int read_first_attention_mcr(struct xe_gt *gt, void *data,
+ u16 group, u16 instance, bool present)
+{
+ unsigned int reg, row;
+
+ if (!present)
+ return 0;
+
+ for (reg = 0; reg < xe_gt_eu_att_regs(gt); reg++) {
+ for (row = 0; row < XE_GT_EU_ATT_ROWS; row++) {
+ u32 val;
+
+ val = xe_gt_mcr_unicast_read(gt, EU_ATT(reg, row), group, instance);
+
+ if (val)
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+#define MAX_EUS_PER_ROW 4u
+#define MAX_THREADS 8u
+
+/**
+ * xe_gt_eu_attention_bitmap_size - query size of the attention bitmask
+ *
+ * @gt: pointer to struct xe_gt
+ *
+ * Return: size in bytes.
+ */
+int xe_gt_eu_attention_bitmap_size(struct xe_gt *gt)
+{
+ xe_dss_mask_t dss_mask;
+
+ bitmap_or(dss_mask, gt->fuse_topo.c_dss_mask,
+ gt->fuse_topo.g_dss_mask, XE_MAX_DSS_FUSE_BITS);
+
+ return (find_last_bit(dss_mask, XE_MAX_DSS_FUSE_BITS) + 1) *
+ XE_GT_EU_ATT_ROWS * xe_gt_eu_att_regs(gt) * MAX_THREADS *
+ MAX_EUS_PER_ROW / 8;
+}
+
+struct attn_read_iter {
+ struct xe_gt *gt;
+ unsigned int i;
+ unsigned int size;
+ u8 *bits;
+};
+
+static int read_eu_attentions_mcr(struct xe_gt *gt, void *data,
+ u16 group, u16 instance, bool present)
+{
+ struct attn_read_iter * const iter = data;
+ unsigned int reg, row;
+
+ for (reg = 0; reg < xe_gt_eu_att_regs(gt); reg++) {
+ for (row = 0; row < XE_GT_EU_ATT_ROWS; row++) {
+ u32 val;
+
+ if (iter->i >= iter->size)
+ return 0;
+
+ XE_WARN_ON(iter->i + sizeof(val) > xe_gt_eu_attention_bitmap_size(gt));
+
+ if (present)
+ val = xe_gt_mcr_unicast_read(gt, EU_ATT(reg, row), group, instance);
+ else
+ val = 0;
+
+ memcpy(&iter->bits[iter->i], &val, sizeof(val));
+ iter->i += sizeof(val);
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * xe_gt_eu_attention_bitmap - query host attention
+ *
+ * @gt: pointer to struct xe_gt
+ * @bits: buffer in which to return the attention bitmap
+ * @bitmap_size: size of @bits, in bytes
+ *
+ * Return: 0 on success, negative otherwise.
+ */
+int xe_gt_eu_attention_bitmap(struct xe_gt *gt, u8 *bits,
+ unsigned int bitmap_size)
+{
+ struct attn_read_iter iter = {
+ .gt = gt,
+ .i = 0,
+ .size = bitmap_size,
+ .bits = bits
+ };
+
+ return xe_gt_foreach_dss_group_instance(gt, read_eu_attentions_mcr, &iter);
+}
+
+/**
+ * xe_gt_eu_threads_needing_attention - check for EU threads signalling attention
+ *
+ * @gt: pointer to struct xe_gt
+ *
+ * Return: 1 if threads are waiting for host attention, 0 otherwise.
+ */
+int xe_gt_eu_threads_needing_attention(struct xe_gt *gt)
+{
+ int err;
+
+ err = xe_gt_foreach_dss_group_instance(gt, read_first_attention_mcr, NULL);
+
+ XE_WARN_ON(err < 0);
+
+ return err < 0 ? 0 : err;
+}
diff --git a/drivers/gpu/drm/xe/xe_gt_debug.h b/drivers/gpu/drm/xe/xe_gt_debug.h
new file mode 100644
index 000000000000..9dabe9cc1d25
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_gt_debug.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+
+#ifndef __XE_GT_DEBUG_
+#define __XE_GT_DEBUG_
+
+#include <linux/bits.h>
+#include <linux/math.h>
+
+struct xe_gt;
+
+#define XE_GT_ATTENTION_TIMEOUT_MS 100
+#define XE_GT_EU_ATT_ROWS 2u
+
+unsigned int xe_gt_eu_att_regs(struct xe_gt *gt);
+
+int xe_gt_eu_threads_needing_attention(struct xe_gt *gt);
+int xe_gt_foreach_dss_group_instance(struct xe_gt *gt,
+ int (*fn)(struct xe_gt *gt,
+ void *data,
+ u16 group,
+ u16 instance,
+ bool present),
+ void *data);
+
+int xe_gt_eu_attention_bitmap_size(struct xe_gt *gt);
+int xe_gt_eu_attention_bitmap(struct xe_gt *gt, u8 *bits,
+ unsigned int bitmap_size);
+
+#endif
diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
index 1a465385f909..a720acb506e5 100644
--- a/drivers/gpu/drm/xe/xe_reg_sr.c
+++ b/drivers/gpu/drm/xe/xe_reg_sr.c
@@ -73,22 +73,31 @@ static void reg_sr_inc_error(struct xe_reg_sr *sr)
int xe_reg_sr_add(struct xe_reg_sr *sr,
const struct xe_reg_sr_entry *e,
- struct xe_gt *gt)
+ struct xe_gt *gt,
+ bool overwrite)
{
unsigned long idx = e->reg.addr;
struct xe_reg_sr_entry *pentry = xa_load(&sr->xa, idx);
int ret;
if (pentry) {
- if (!compatible_entries(pentry, e)) {
+ if (overwrite && e->set_bits) {
+ pentry->clr_bits |= e->clr_bits;
+ pentry->set_bits |= e->set_bits;
+ pentry->read_mask |= e->read_mask;
+ } else if (overwrite && !e->set_bits) {
+ pentry->clr_bits |= e->clr_bits;
+ pentry->set_bits &= ~e->clr_bits;
+ pentry->read_mask |= e->read_mask;
+ } else if (!compatible_entries(pentry, e)) {
ret = -EINVAL;
goto fail;
+ } else {
+ pentry->clr_bits |= e->clr_bits;
+ pentry->set_bits |= e->set_bits;
+ pentry->read_mask |= e->read_mask;
}
- pentry->clr_bits |= e->clr_bits;
- pentry->set_bits |= e->set_bits;
- pentry->read_mask |= e->read_mask;
-
return 0;
}
diff --git a/drivers/gpu/drm/xe/xe_reg_sr.h b/drivers/gpu/drm/xe/xe_reg_sr.h
index 51fbba423e27..d67fafdcd847 100644
--- a/drivers/gpu/drm/xe/xe_reg_sr.h
+++ b/drivers/gpu/drm/xe/xe_reg_sr.h
@@ -6,6 +6,8 @@
#ifndef _XE_REG_SR_
#define _XE_REG_SR_
+#include <linux/types.h>
+
/*
* Reg save/restore bookkeeping
*/
@@ -21,7 +23,7 @@ int xe_reg_sr_init(struct xe_reg_sr *sr, const char *name, struct xe_device *xe)
void xe_reg_sr_dump(struct xe_reg_sr *sr, struct drm_printer *p);
int xe_reg_sr_add(struct xe_reg_sr *sr, const struct xe_reg_sr_entry *e,
- struct xe_gt *gt);
+ struct xe_gt *gt, bool overwrite);
void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt);
void xe_reg_sr_apply_whitelist(struct xe_hw_engine *hwe);
diff --git a/drivers/gpu/drm/xe/xe_reg_whitelist.c b/drivers/gpu/drm/xe/xe_reg_whitelist.c
index 7ca360b2c20d..59e3998e4aaf 100644
--- a/drivers/gpu/drm/xe/xe_reg_whitelist.c
+++ b/drivers/gpu/drm/xe/xe_reg_whitelist.c
@@ -126,7 +126,7 @@ static void whitelist_apply_to_hwe(struct xe_hw_engine *hwe)
}
xe_reg_whitelist_print_entry(&p, 0, reg, entry);
- xe_reg_sr_add(&hwe->reg_sr, &hwe_entry, hwe->gt);
+ xe_reg_sr_add(&hwe->reg_sr, &hwe_entry, hwe->gt, false);
slot++;
}
diff --git a/drivers/gpu/drm/xe/xe_rtp.c b/drivers/gpu/drm/xe/xe_rtp.c
index ed509b1c8cfc..c6f24df5556d 100644
--- a/drivers/gpu/drm/xe/xe_rtp.c
+++ b/drivers/gpu/drm/xe/xe_rtp.c
@@ -178,7 +178,7 @@ static void rtp_add_sr_entry(const struct xe_rtp_action *action,
};
sr_entry.reg.addr += mmio_base;
- xe_reg_sr_add(sr, &sr_entry, gt);
+ xe_reg_sr_add(sr, &sr_entry, gt, false);
}
static bool rtp_process_one_sr(const struct xe_rtp_entry_sr *entry,
diff --git a/drivers/gpu/drm/xe/xe_wa_oob.rules b/drivers/gpu/drm/xe/xe_wa_oob.rules
index ae6daa50eaf1..7300eb728d9b 100644
--- a/drivers/gpu/drm/xe/xe_wa_oob.rules
+++ b/drivers/gpu/drm/xe/xe_wa_oob.rules
@@ -79,3 +79,5 @@
14020316580 MEDIA_VERSION(1301)
14022766366 GRAPHICS_VERSION_RANGE(2001, 2004)
GRAPHICS_VERSION_RANGE(3000, 3005)
+18022722726 GRAPHICS_VERSION_RANGE(1250, 1274)
+14015474168 PLATFORM(PVC)
--
2.43.0
* [PATCH 12/20] drm/xe/eudebug: Introduce EU control interface
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (10 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 11/20] drm/xe/eudebug: hw enablement for eudebug Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 13/20] drm/xe/eudebug: Introduce per device attention scan worker Mika Kuoppala
` (13 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Dominik Grzegorzek, Mika Kuoppala
From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Introduce EU control functionality, which allows the EU debugger
to interrupt and resume EU threads and to report their current
state during execution. Provide an abstraction layer so that in
the future the GuC backend will only need to provide the
appropriate callbacks.
Based on an implementation created by the authors and other folks
within the i915 driver.
v2: - checkpatch (Maciej)
- lrc index off by one fix (Mika)
- checkpatch (Tilak)
- 32bit fixes (Andrzej, Mika)
- find_resource_get for client (Mika)
v3: - fw ref (Mika)
- attention register naming
v4: - fused off handling (Dominik)
- squash xe3 parts and ptl attentions (Mika)
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
---
drivers/gpu/drm/xe/regs/xe_engine_regs.h | 1 +
drivers/gpu/drm/xe/xe_eudebug.c | 52 ++
drivers/gpu/drm/xe/xe_eudebug.h | 2 +
drivers/gpu/drm/xe/xe_eudebug_hw.c | 658 +++++++++++++++++++++++
drivers/gpu/drm/xe/xe_eudebug_hw.h | 7 +
drivers/gpu/drm/xe/xe_eudebug_types.h | 25 +
include/uapi/drm/xe_drm_eudebug.h | 18 +
7 files changed, 763 insertions(+)
diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
index d588975e5be9..09d335927eb8 100644
--- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
@@ -146,6 +146,7 @@
#define IDLE_DELAY REG_GENMASK(20, 0)
#define RING_CURRENT_LRCA(base) XE_REG((base) + 0x240)
+#define CURRENT_LRCA_VALID REG_BIT(0)
#define RING_CONTEXT_CONTROL(base) XE_REG((base) + 0x244, XE_REG_OPTION_MASKED)
#define CTX_CTRL_PXP_ENABLE REG_BIT(10)
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index 41a9cdfd6142..5faa06ba44db 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -15,11 +15,14 @@
#include "xe_debug_data_types.h"
#include "xe_device.h"
#include "xe_eudebug.h"
+#include "xe_eudebug_hw.h"
#include "xe_eudebug_types.h"
#include "xe_eudebug_vm.h"
#include "xe_exec_queue.h"
+#include "xe_gt.h"
#include "xe_hw_engine.h"
#include "xe_macros.h"
+#include "xe_pm.h"
#include "xe_sync.h"
#include "xe_vm.h"
@@ -690,6 +693,29 @@ struct xe_vm *xe_eudebug_vm_get(struct xe_eudebug *d, u32 id)
return vm;
}
+struct xe_exec_queue *xe_eudebug_exec_queue_get(struct xe_eudebug *d, u32 id)
+{
+ struct xe_exec_queue *q;
+
+ mutex_lock(&d->res->lock);
+ q = find_resource__unlocked(d->res, XE_EUDEBUG_RES_TYPE_EXEC_QUEUE, id);
+ if (q)
+ xe_exec_queue_get(q);
+ mutex_unlock(&d->res->lock);
+
+ return q;
+}
+
+struct xe_lrc *xe_eudebug_find_lrc(struct xe_eudebug *d, u32 id)
+{
+ struct xe_lrc *lrc;
+
+ mutex_lock(&d->res->lock);
+ lrc = find_resource__unlocked(d->res, XE_EUDEBUG_RES_TYPE_LRC, id);
+ mutex_unlock(&d->res->lock);
+
+ return lrc;
+}
static struct drm_xe_eudebug_event *
xe_eudebug_create_event(struct xe_eudebug *d, u16 type, u64 seqno, u16 flags,
@@ -1659,6 +1685,10 @@ static long xe_eudebug_ioctl(struct file *file,
ret = xe_eudebug_vm_open_ioctl(d, arg);
eu_dbg(d, "ioctl cmd=VM_OPEN ret=%ld\n", ret);
break;
+ case DRM_XE_EUDEBUG_IOCTL_EU_CONTROL:
+ ret = xe_eudebug_eu_control(d, arg);
+ eu_dbg(d, "ioctl cmd=EU_CONTROL ret=%ld\n", ret);
+ break;
default:
ret = -EINVAL;
}
@@ -1741,6 +1771,8 @@ xe_eudebug_connect(struct xe_device *xe,
goto err_detach;
}
+ xe_eudebug_hw_init(d);
+
kref_get(&d->ref);
queue_work(xe->eudebug.wq, &d->discovery_work);
@@ -1770,6 +1802,10 @@ bool xe_eudebug_is_enabled(struct xe_device *xe)
int xe_eudebug_enable(struct xe_device *xe, bool enable)
{
+ struct xe_gt *gt;
+ int i;
+ u8 id;
+
mutex_lock(&xe->eudebug.lock);
if (xe->eudebug.state == XE_EUDEBUG_NOT_SUPPORTED) {
@@ -1787,6 +1823,22 @@ int xe_eudebug_enable(struct xe_device *xe, bool enable)
return 0;
}
+ xe_pm_runtime_get(xe);
+
+ for_each_gt(gt, xe, id) {
+ for (i = 0; i < ARRAY_SIZE(gt->hw_engines); i++) {
+ if (!(gt->info.engine_mask & BIT(i)))
+ continue;
+
+ xe_eudebug_init_hw_engine(&gt->hw_engines[i], enable);
+ }
+
+ xe_gt_reset_async(gt);
+ flush_work(&gt->reset.worker);
+ }
+
+ xe_pm_runtime_put(xe);
+
xe->eudebug.state = enable ?
XE_EUDEBUG_ENABLED : XE_EUDEBUG_DISABLED;
mutex_unlock(&xe->eudebug.lock);
diff --git a/drivers/gpu/drm/xe/xe_eudebug.h b/drivers/gpu/drm/xe/xe_eudebug.h
index 74171cc81fe1..bd9fd7bf454f 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.h
+++ b/drivers/gpu/drm/xe/xe_eudebug.h
@@ -63,6 +63,8 @@ struct xe_vm *xe_eudebug_vm_get(struct xe_eudebug *d, u32 vm_id);
void xe_eudebug_exec_queue_create(struct xe_file *xef, struct xe_exec_queue *q);
void xe_eudebug_exec_queue_destroy(struct xe_file *xef, struct xe_exec_queue *q);
+struct xe_exec_queue *xe_eudebug_exec_queue_get(struct xe_eudebug *d, u32 id);
+struct xe_lrc *xe_eudebug_find_lrc(struct xe_eudebug *d, u32 id);
void xe_eudebug_vm_bind_execute(struct xe_vm *vm, struct xe_vma_ops *ops);
diff --git a/drivers/gpu/drm/xe/xe_eudebug_hw.c b/drivers/gpu/drm/xe/xe_eudebug_hw.c
index aa31b4c91713..7ac0dd03ebf0 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_hw.c
+++ b/drivers/gpu/drm/xe/xe_eudebug_hw.c
@@ -70,3 +70,661 @@ void xe_eudebug_init_hw_engine(struct xe_hw_engine *hwe, bool enable)
add_sr_entry(hwe, TD_CTL,
TD_CTL_GLOBAL_DEBUG_ENABLE, enable);
}
+
+static int __current_lrca(struct xe_hw_engine *hwe, u32 *lrc_hw)
+{
+ u32 lrc_reg;
+
+ lrc_reg = xe_hw_engine_mmio_read32(hwe, RING_CURRENT_LRCA(0));
+
+ if (!(lrc_reg & CURRENT_LRCA_VALID))
+ return -ENOENT;
+
+ *lrc_hw = lrc_reg & GENMASK(31, 12);
+
+ return 0;
+}
+
+static int current_lrca(struct xe_hw_engine *hwe, u32 *lrc_hw)
+{
+ unsigned int fw_ref;
+ int ret;
+
+ fw_ref = xe_force_wake_get(gt_to_fw(hwe->gt), hwe->domain);
+ if (!fw_ref)
+ return -ETIMEDOUT;
+
+ ret = __current_lrca(hwe, lrc_hw);
+
+ xe_force_wake_put(gt_to_fw(hwe->gt), fw_ref);
+
+ return ret;
+}
+
+static bool lrca_equals(u32 a, u32 b)
+{
+ return (a & GENMASK(31, 12)) == (b & GENMASK(31, 12));
+}
+
+static int match_exec_queue_lrca(struct xe_exec_queue *q, u32 lrc_hw)
+{
+ int i;
+
+ for (i = 0; i < q->width; i++)
+ if (lrca_equals(lower_32_bits(xe_lrc_descriptor(q->lrc[i])), lrc_hw))
+ return i;
+
+ return -1;
+}
+
+static int rcu_debug1_engine_index(const struct xe_hw_engine * const hwe)
+{
+ if (hwe->class == XE_ENGINE_CLASS_RENDER) {
+ XE_WARN_ON(hwe->instance);
+ return 0;
+ }
+
+ XE_WARN_ON(hwe->instance > 3);
+
+ return hwe->instance + 1;
+}
+
+static u32 engine_status_xe1(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
+{
+ const unsigned int first = 7;
+ const unsigned int incr = 3;
+ const unsigned int i = rcu_debug1_engine_index(hwe);
+ const unsigned int shift = first + (i * incr);
+
+ return (rcu_debug1 >> shift) & RCU_DEBUG_1_ENGINE_STATUS;
+}
+
+static u32 engine_status_xe2(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
+{
+ const unsigned int first = 7;
+ const unsigned int incr = 4;
+ const unsigned int i = rcu_debug1_engine_index(hwe);
+ const unsigned int shift = first + (i * incr);
+
+ return (rcu_debug1 >> shift) & RCU_DEBUG_1_ENGINE_STATUS;
+}
+
+static u32 engine_status_xe3(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
+{
+ const unsigned int first = 6;
+ const unsigned int incr = 4;
+ const unsigned int i = rcu_debug1_engine_index(hwe);
+ const unsigned int shift = first + (i * incr);
+
+ return (rcu_debug1 >> shift) & RCU_DEBUG_1_ENGINE_STATUS;
+}
+
+static u32 engine_status(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
+{
+ u32 status = 0;
+
+ if (GRAPHICS_VER(gt_to_xe(hwe->gt)) < 20)
+ status = engine_status_xe1(hwe, rcu_debug1);
+ else if (GRAPHICS_VER(gt_to_xe(hwe->gt)) < 30)
+ status = engine_status_xe2(hwe, rcu_debug1);
+ else if (GRAPHICS_VER(gt_to_xe(hwe->gt)) < 35)
+ status = engine_status_xe3(hwe, rcu_debug1);
+ else
+ XE_WARN_ON(GRAPHICS_VER(gt_to_xe(hwe->gt)));
+
+ return status;
+}
+
+static bool engine_has_runalone_set(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
+{
+ return engine_status(hwe, rcu_debug1) & RCU_DEBUG_1_RUNALONE_ACTIVE;
+}
+
+static bool engine_has_context_set(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
+{
+ return engine_status(hwe, rcu_debug1) & RCU_DEBUG_1_CONTEXT_ACTIVE;
+}
+
+static struct xe_hw_engine *get_runalone_active_hw_engine(struct xe_gt *gt)
+{
+ struct xe_hw_engine *hwe, *first = NULL;
+ unsigned int num_active, id, fw_ref;
+ u32 val;
+
+ fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref) {
+ drm_dbg(&gt_to_xe(gt)->drm, "eudbg: runalone failed to get force wake\n");
+ return NULL;
+ }
+
+ val = xe_mmio_read32(&gt->mmio, RCU_DEBUG_1);
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+
+ drm_dbg(&gt_to_xe(gt)->drm, "eudbg: runalone RCU_DEBUG_1 = 0x%08x\n", val);
+
+ num_active = 0;
+ xe_eudebug_for_each_hw_engine(hwe, gt, id) {
+ bool runalone, ctx;
+
+ runalone = engine_has_runalone_set(hwe, val);
+ ctx = engine_has_context_set(hwe, val);
+
+ drm_dbg(&gt_to_xe(gt)->drm, "eudbg: engine %s: runalone=%s, context=%s",
+ hwe->name, runalone ? "active" : "inactive",
+ ctx ? "active" : "inactive");
+
+ /*
+ * On earlier gen12 the context status seems to be idle when it
+ * has raised attention, so we have to ignore the active bit.
+ */
+ if (IS_DGFX(gt_to_xe(gt)))
+ ctx = true;
+
+ if (runalone && ctx) {
+ num_active++;
+
+ drm_dbg(&gt_to_xe(gt)->drm, "eudbg: runalone engine %s %s",
+ hwe->name, first ? "selected" : "found");
+ if (!first)
+ first = hwe;
+ }
+ }
+
+ if (num_active > 1)
+ drm_err(&gt_to_xe(gt)->drm, "eudbg: %d runalone engines active!",
+ num_active);
+
+ return first;
+}
+
+static struct xe_exec_queue *active_hwe_to_exec_queue(struct xe_hw_engine *hwe,
+ int *lrc_idx)
+{
+ struct xe_device *xe = gt_to_xe(hwe->gt);
+ struct xe_gt *gt = hwe->gt;
+ struct xe_exec_queue *q, *found = NULL;
+ struct xe_file *xef;
+ unsigned long i;
+ int idx, err;
+ u32 lrc_hw;
+
+ err = current_lrca(hwe, &lrc_hw);
+ if (err)
+ return ERR_PTR(err);
+
+ mutex_lock(&xe->eudebug.lock);
+ list_for_each_entry(xef, &xe->eudebug.targets, eudebug.target_link) {
+ down_write(&xef->eudebug.ioctl_lock);
+ xa_for_each(&xef->exec_queue.xa, i, q) {
+ if (q->gt != gt)
+ continue;
+
+ if (q->class != hwe->class)
+ continue;
+
+ if (xe_exec_queue_is_idle(q))
+ continue;
+
+ idx = match_exec_queue_lrca(q, lrc_hw);
+ if (idx < 0)
+ continue;
+
+ found = xe_exec_queue_get(q);
+
+ if (lrc_idx)
+ *lrc_idx = idx;
+
+ break;
+ }
+ up_write(&xef->eudebug.ioctl_lock);
+
+ if (found)
+ break;
+ }
+ mutex_unlock(&xe->eudebug.lock);
+
+ if (!found)
+ return ERR_PTR(-ENOENT);
+
+ if (XE_WARN_ON(current_lrca(hwe, &lrc_hw)) &&
+ XE_WARN_ON(match_exec_queue_lrca(found, lrc_hw) < 0)) {
+ xe_exec_queue_put(found);
+ return ERR_PTR(-ENOENT);
+ }
+
+ return found;
+}
+
+static struct xe_exec_queue *runalone_active_queue_get(struct xe_gt *gt, int *lrc_idx)
+{
+ struct xe_hw_engine *active;
+
+ active = get_runalone_active_hw_engine(gt);
+ if (!active) {
+ drm_dbg(&gt_to_xe(gt)->drm, "Runalone engine not found!");
+ return ERR_PTR(-ENOENT);
+ }
+
+ return active_hwe_to_exec_queue(active, lrc_idx);
+}
+
+static int do_eu_control(struct xe_eudebug *d,
+ const struct drm_xe_eudebug_eu_control * const arg,
+ struct drm_xe_eudebug_eu_control __user * const user_ptr)
+{
+ void __user * const bitmask_ptr = u64_to_user_ptr(arg->bitmask_ptr);
+ struct xe_device *xe = d->xe;
+ u8 *bits = NULL;
+ unsigned int hw_attn_size, attn_size;
+ struct xe_exec_queue *q;
+ struct xe_lrc *lrc;
+ u64 seqno;
+ int ret;
+
+ if (xe_eudebug_detached(d))
+ return -ENOTCONN;
+
+ /* Accept only hardware reg granularity mask */
+ if (XE_IOCTL_DBG(xe, !IS_ALIGNED(arg->bitmask_size, sizeof(u32))))
+ return -EINVAL;
+
+ q = xe_eudebug_exec_queue_get(d, arg->exec_queue_handle);
+ if (XE_IOCTL_DBG(xe, !q))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, !xe_exec_queue_is_debuggable(q))) {
+ ret = -EINVAL;
+ goto queue_put;
+ }
+
+ lrc = xe_eudebug_find_lrc(d, arg->lrc_handle);
+ if (XE_IOCTL_DBG(xe, !lrc)) {
+ ret = -EINVAL;
+ goto queue_put;
+ }
+
+ hw_attn_size = xe_gt_eu_attention_bitmap_size(q->gt);
+ attn_size = arg->bitmask_size;
+
+ if (attn_size > hw_attn_size)
+ attn_size = hw_attn_size;
+
+ if (attn_size > 0) {
+ bits = kmalloc(attn_size, GFP_KERNEL);
+ if (!bits) {
+ ret = -ENOMEM;
+ goto queue_put;
+ }
+
+ if (copy_from_user(bits, bitmask_ptr, attn_size)) {
+ ret = -EFAULT;
+ goto out_free;
+ }
+ }
+
+ if (!pm_runtime_active(xe->drm.dev)) {
+ ret = -EIO;
+ goto out_free;
+ }
+
+ ret = -EINVAL;
+ mutex_lock(&d->hw.lock);
+
+ switch (arg->cmd) {
+ case DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL:
+ /* Make sure we don't promise anything but interrupting all */
+ if (!attn_size)
+ ret = d->ops->interrupt_all(d, q, lrc);
+ break;
+ case DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED:
+ ret = d->ops->stopped(d, q, lrc, bits, attn_size);
+ break;
+ case DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME:
+ ret = d->ops->resume(d, q, lrc, bits, attn_size);
+ break;
+ default:
+ break;
+ }
+
+ if (ret == 0)
+ seqno = atomic_long_inc_return(&d->events.seqno);
+
+ mutex_unlock(&d->hw.lock);
+
+ if (ret)
+ goto out_free;
+
+ if (put_user(seqno, &user_ptr->seqno)) {
+ ret = -EFAULT;
+ goto out_free;
+ }
+
+ if (copy_to_user(bitmask_ptr, bits, attn_size)) {
+ ret = -EFAULT;
+ goto out_free;
+ }
+
+ if (hw_attn_size != arg->bitmask_size)
+ if (put_user(hw_attn_size, &user_ptr->bitmask_size))
+ ret = -EFAULT;
+
+out_free:
+ kfree(bits);
+queue_put:
+ xe_exec_queue_put(q);
+
+ return ret;
+}
+
+static int xe_eu_control_interrupt_all(struct xe_eudebug *d,
+ struct xe_exec_queue *q,
+ struct xe_lrc *lrc)
+{
+ struct xe_gt *gt = q->hwe->gt;
+ struct xe_device *xe = d->xe;
+ struct xe_exec_queue *active;
+ struct xe_hw_engine *hwe;
+ unsigned int fw_ref;
+ int lrc_idx, ret;
+ u32 lrc_hw;
+ u32 td_ctl;
+
+ hwe = get_runalone_active_hw_engine(gt);
+ if (XE_IOCTL_DBG(xe, !hwe)) {
+ drm_dbg(&gt_to_xe(gt)->drm, "Runalone engine not found!");
+ return -EINVAL;
+ }
+
+ active = active_hwe_to_exec_queue(hwe, &lrc_idx);
+ if (XE_IOCTL_DBG(xe, IS_ERR(active)))
+ return PTR_ERR(active);
+
+ if (XE_IOCTL_DBG(xe, q != active)) {
+ xe_exec_queue_put(active);
+ return -EINVAL;
+ }
+ xe_exec_queue_put(active);
+
+ if (XE_IOCTL_DBG(xe, lrc_idx >= q->width || q->lrc[lrc_idx] != lrc))
+ return -EINVAL;
+
+ fw_ref = xe_force_wake_get(gt_to_fw(gt), hwe->domain);
+ if (!fw_ref)
+ return -ETIMEDOUT;
+
+ /* Additional check just before issuing MMIO writes */
+ ret = __current_lrca(hwe, &lrc_hw);
+ if (ret)
+ goto put_fw;
+
+ if (!lrca_equals(lower_32_bits(xe_lrc_descriptor(lrc)), lrc_hw)) {
+ ret = -EBUSY;
+ goto put_fw;
+ }
+
+ td_ctl = xe_gt_mcr_unicast_read_any(gt, TD_CTL);
+
+ /* Halt on next thread dispatch */
+ if (!(td_ctl & TD_CTL_FORCE_EXTERNAL_HALT))
+ xe_gt_mcr_multicast_write(gt, TD_CTL,
+ td_ctl | TD_CTL_FORCE_EXTERNAL_HALT);
+ else
+ eu_warn(d, "TD_CTL force external halt bit already set!\n");
+
+ /*
+ * The sleep is needed because some interrupts are ignored
+ * by the HW, hence we allow the HW some time to acknowledge
+ * that.
+ */
+ usleep_range(100, 110);
+
+ /* Halt regardless of thread dependencies */
+ if (!(td_ctl & TD_CTL_FORCE_EXCEPTION))
+ xe_gt_mcr_multicast_write(gt, TD_CTL,
+ td_ctl | TD_CTL_FORCE_EXCEPTION);
+ else
+ eu_warn(d, "TD_CTL force exception bit already set!\n");
+
+ usleep_range(100, 110);
+
+ xe_gt_mcr_multicast_write(gt, TD_CTL, td_ctl &
+ ~(TD_CTL_FORCE_EXTERNAL_HALT | TD_CTL_FORCE_EXCEPTION));
+
+ /*
+ * In case of stopping wrong ctx emit warning.
+ * Nothing else we can do for now.
+ */
+ ret = __current_lrca(hwe, &lrc_hw);
+ if (ret || !lrca_equals(lower_32_bits(xe_lrc_descriptor(lrc)), lrc_hw))
+ eu_warn(d, "xe_eudebug: interrupted wrong context.");
+
+put_fw:
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+
+ return ret;
+}
+
+struct ss_iter {
+ struct xe_eudebug *debugger;
+ unsigned int i;
+
+ unsigned int size;
+ u8 *bits;
+};
+
+static int check_attn_mcr(struct xe_gt *gt, void *data,
+ u16 group, u16 instance, bool present)
+{
+ struct ss_iter *iter = data;
+ struct xe_eudebug *d = iter->debugger;
+ unsigned int reg, row;
+
+ for (reg = 0; reg < xe_gt_eu_att_regs(gt); reg++) {
+ for (row = 0; row < XE_GT_EU_ATT_ROWS; row++) {
+ u32 val, cur = 0;
+
+ if (iter->i >= iter->size)
+ return 0;
+
+ if (XE_WARN_ON((iter->i + sizeof(val)) >
+ (xe_gt_eu_attention_bitmap_size(gt))))
+ return -EIO;
+
+ memcpy(&val, &iter->bits[iter->i], sizeof(val));
+ iter->i += sizeof(val);
+
+ if (present)
+ cur = xe_gt_mcr_unicast_read(gt, EU_ATT(reg, row), group, instance);
+
+ if ((val | cur) != cur) {
+ eu_dbg(d,
+ "WRONG CLEAR (%u:%u:%u:%u) EU_ATT_CLR: 0x%08x; EU_ATT: 0x%08x\n",
+ group, instance, reg, row, val, cur);
+ return -EINVAL;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int clear_attn_mcr(struct xe_gt *gt, void *data,
+ u16 group, u16 instance, bool present)
+{
+ struct ss_iter *iter = data;
+ struct xe_eudebug *d = iter->debugger;
+ unsigned int reg, row;
+
+ for (reg = 0; reg < xe_gt_eu_att_regs(gt); reg++) {
+ for (row = 0; row < XE_GT_EU_ATT_ROWS; row++) {
+ u32 val;
+
+ if (iter->i >= iter->size)
+ return 0;
+
+ if (XE_WARN_ON((iter->i + sizeof(val)) >
+ (xe_gt_eu_attention_bitmap_size(gt))))
+ return -EIO;
+
+ memcpy(&val, &iter->bits[iter->i], sizeof(val));
+ iter->i += sizeof(val);
+
+ if (!val)
+ continue;
+
+ if (present) {
+ xe_gt_mcr_unicast_write(gt, EU_ATT_CLR(reg, row), val,
+ group, instance);
+
+ eu_dbg(d,
+ "EU_ATT_CLR: (%u:%u:%u:%u): 0x%08x\n",
+ group, instance, reg, row, val);
+ } else {
+ eu_warn(d,
+ "EU_ATT_CLR: (%u:%u:%u:%u): 0x%08x to fused off dss\n",
+ group, instance, reg, row, val);
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int xe_eu_control_resume(struct xe_eudebug *d,
+ struct xe_exec_queue *q,
+ struct xe_lrc *lrc,
+ u8 *bits, unsigned int bitmask_size)
+{
+ struct xe_device *xe = d->xe;
+ struct ss_iter iter = {
+ .debugger = d,
+ .i = 0,
+ .size = bitmask_size,
+ .bits = bits
+ };
+ int ret = 0;
+ struct xe_exec_queue *active;
+ int lrc_idx;
+
+ active = runalone_active_queue_get(q->gt, &lrc_idx);
+ if (IS_ERR(active))
+ return PTR_ERR(active);
+
+ if (XE_IOCTL_DBG(xe, q != active)) {
+ xe_exec_queue_put(active);
+ return -EBUSY;
+ }
+ xe_exec_queue_put(active);
+
+ if (XE_IOCTL_DBG(xe, lrc_idx >= q->width || q->lrc[lrc_idx] != lrc))
+ return -EBUSY;
+
+ /*
+ * hsdes: 18021122357
+ * We need to avoid clearing attention bits that are not set
+ * in order to avoid the EOT hang on PVC.
+ */
+ if (GRAPHICS_VERx100(d->xe) == 1260) {
+ ret = xe_gt_foreach_dss_group_instance(q->gt, check_attn_mcr, &iter);
+ if (ret)
+ return ret;
+
+ iter.i = 0;
+ }
+
+ xe_gt_foreach_dss_group_instance(q->gt, clear_attn_mcr, &iter);
+ return 0;
+}
+
+static int xe_eu_control_stopped(struct xe_eudebug *d,
+ struct xe_exec_queue *q,
+ struct xe_lrc *lrc,
+ u8 *bits, unsigned int bitmask_size)
+{
+ struct xe_device *xe = d->xe;
+ struct xe_exec_queue *active;
+ int lrc_idx;
+
+ if (XE_WARN_ON(!q) || XE_WARN_ON(!q->gt))
+ return -EINVAL;
+
+ active = runalone_active_queue_get(q->gt, &lrc_idx);
+ if (IS_ERR(active))
+ return PTR_ERR(active);
+
+ if (active) {
+ if (XE_IOCTL_DBG(xe, q != active)) {
+ xe_exec_queue_put(active);
+ return -EBUSY;
+ }
+
+ if (XE_IOCTL_DBG(xe, lrc_idx >= q->width || q->lrc[lrc_idx] != lrc)) {
+ xe_exec_queue_put(active);
+ return -EBUSY;
+ }
+ }
+
+ xe_exec_queue_put(active);
+
+ return xe_gt_eu_attention_bitmap(q->gt, bits, bitmask_size);
+}
+
+static struct xe_eudebug_eu_control_ops eu_control = {
+ .interrupt_all = xe_eu_control_interrupt_all,
+ .stopped = xe_eu_control_stopped,
+ .resume = xe_eu_control_resume,
+};
+
+void xe_eudebug_hw_init(struct xe_eudebug *d)
+{
+ d->ops = &eu_control;
+}
+
+long xe_eudebug_eu_control(struct xe_eudebug *d, const u64 arg)
+{
+ struct drm_xe_eudebug_eu_control __user * const user_ptr =
+ u64_to_user_ptr(arg);
+ struct drm_xe_eudebug_eu_control user_arg;
+ struct xe_device *xe = d->xe;
+ int ret;
+
+ if (XE_IOCTL_DBG(xe, !(_IOC_DIR(DRM_XE_EUDEBUG_IOCTL_EU_CONTROL) & _IOC_WRITE)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, !(_IOC_DIR(DRM_XE_EUDEBUG_IOCTL_EU_CONTROL) & _IOC_READ)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, _IOC_SIZE(DRM_XE_EUDEBUG_IOCTL_EU_CONTROL) != sizeof(user_arg)))
+ return -EINVAL;
+
+ if (copy_from_user(&user_arg,
+ user_ptr,
+ sizeof(user_arg)))
+ return -EFAULT;
+
+ if (XE_IOCTL_DBG(xe, user_arg.flags))
+ return -EINVAL;
+
+ if (!access_ok(u64_to_user_ptr(user_arg.bitmask_ptr), user_arg.bitmask_size))
+ return -EFAULT;
+
+ eu_dbg(d,
+ "eu_control: cmd=%u, flags=0x%x, exec_queue_handle=%llu, bitmask_size=%u\n",
+ user_arg.cmd, user_arg.flags, user_arg.exec_queue_handle,
+ user_arg.bitmask_size);
+
+ ret = do_eu_control(d, &user_arg, user_ptr);
+
+ eu_dbg(d,
+ "eu_control: cmd=%u, flags=0x%x, exec_queue_handle=%llu, bitmask_size=%u ret=%d\n",
+ user_arg.cmd, user_arg.flags, user_arg.exec_queue_handle,
+ user_arg.bitmask_size, ret);
+
+ return ret;
+}
diff --git a/drivers/gpu/drm/xe/xe_eudebug_hw.h b/drivers/gpu/drm/xe/xe_eudebug_hw.h
index 7362ed9bde68..8f59ec574e4e 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_hw.h
+++ b/drivers/gpu/drm/xe/xe_eudebug_hw.h
@@ -16,10 +16,17 @@ struct xe_gt;
#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+void xe_eudebug_hw_init(struct xe_eudebug *d);
void xe_eudebug_init_hw_engine(struct xe_hw_engine *hwe, bool enable);
+long xe_eudebug_eu_control(struct xe_eudebug *d, const u64 arg);
+
+struct xe_exec_queue *xe_gt_runalone_active_queue_get(struct xe_gt *gt, int *lrc_idx);
+
#else /* CONFIG_DRM_XE_EUDEBUG */
+static inline void xe_eudebug_init_hw_engine(struct xe_hw_engine *hwe, bool enable) { }
+
#endif /* CONFIG_DRM_XE_EUDEBUG */
#endif /* _XE_EUDEBUG_HW_H_ */
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
index 292e93c72a64..205777a851a3 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_types.h
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -17,7 +17,11 @@
struct xe_device;
struct task_struct;
+struct xe_eudebug;
+struct xe_hw_engine;
struct workqueue_struct;
+struct xe_exec_queue;
+struct xe_lrc;
/**
* enum xe_eudebug_state - eudebug capability state
@@ -76,6 +80,24 @@ struct xe_eudebug_resources {
struct xe_eudebug_resource rt[XE_EUDEBUG_RES_TYPE_COUNT];
};
+/**
+ * struct xe_eudebug_eu_control_ops - interface for eu thread
+ * state control backend
+ */
+struct xe_eudebug_eu_control_ops {
+ /** @interrupt_all: interrupts workload active on given hwe */
+ int (*interrupt_all)(struct xe_eudebug *e, struct xe_exec_queue *q,
+ struct xe_lrc *lrc);
+
+ /** @resume: resumes threads reflected by bitmask active on given hwe */
+ int (*resume)(struct xe_eudebug *e, struct xe_exec_queue *q,
+ struct xe_lrc *lrc, u8 *bitmap, unsigned int bitmap_size);
+
+ /** @stopped: returns bitmap reflecting threads which signal attention */
+ int (*stopped)(struct xe_eudebug *e, struct xe_exec_queue *q,
+ struct xe_lrc *lrc, u8 *bitmap, unsigned int bitmap_size);
+};
+
/**
* struct xe_eudebug - Top level struct for eudebug: the connection
*/
@@ -144,6 +166,9 @@ struct xe_eudebug {
/** @lock: guards access to hw state */
struct mutex lock;
} hw;
+
+ /** @ops: operations for eu_control */
+ struct xe_eudebug_eu_control_ops *ops;
};
#endif /* _XE_EUDEBUG_TYPES_H_ */
diff --git a/include/uapi/drm/xe_drm_eudebug.h b/include/uapi/drm/xe_drm_eudebug.h
index 139926a0f38c..df6e028bcd9c 100644
--- a/include/uapi/drm/xe_drm_eudebug.h
+++ b/include/uapi/drm/xe_drm_eudebug.h
@@ -18,6 +18,7 @@ extern "C" {
#define DRM_XE_EUDEBUG_IOCTL_READ_EVENT _IO('j', 0x0)
#define DRM_XE_EUDEBUG_IOCTL_ACK_EVENT _IOW('j', 0x1, struct drm_xe_eudebug_ack_event)
#define DRM_XE_EUDEBUG_IOCTL_VM_OPEN _IOW('j', 0x2, struct drm_xe_eudebug_vm_open)
+#define DRM_XE_EUDEBUG_IOCTL_EU_CONTROL _IOWR('j', 0x3, struct drm_xe_eudebug_eu_control)
/**
* struct drm_xe_eudebug_event - Base type of event delivered by xe_eudebug.
@@ -179,6 +180,23 @@ struct drm_xe_eudebug_vm_open {
__u64 timeout_ns;
};
+struct drm_xe_eudebug_eu_control {
+
+#define DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL 0
+#define DRM_XE_EUDEBUG_EU_CONTROL_CMD_STOPPED 1
+#define DRM_XE_EUDEBUG_EU_CONTROL_CMD_RESUME 2
+ __u32 cmd;
+ __u32 flags;
+
+ __u64 seqno;
+
+ __u64 exec_queue_handle;
+ __u64 lrc_handle;
+ __u32 reserved;
+ __u32 bitmask_size;
+ __u64 bitmask_ptr;
+};
+
#if defined(__cplusplus)
}
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 13/20] drm/xe/eudebug: Introduce per device attention scan worker
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (11 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 12/20] drm/xe/eudebug: Introduce EU control interface Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 14/20] drm/xe/eudebug_test: Introduce xe_eudebug wa kunit test Mika Kuoppala
` (12 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Dominik Grzegorzek, Mika Kuoppala
From: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Periodically scan the EU debugging attention bits to detect whether an EU
thread has entered the system routine (SIP) due to an exception.
Make the scanning interval roughly ten times longer when no debugger
connection is open. Send an attention event whenever attention is seen
while a debugger is present; if no debugger connection is active, reset
the GT.
Based on work by the authors and others who worked on attention handling
in i915.
v2: - use xa_array for files
- null ptr deref fix for non-debugged context (Dominik)
- checkpatch (Tilak)
- use discovery_lock during list traversal
v3: - engine status per gen improvements, force_wake ref
- __counted_by (Mika)
v4: - attention register naming (Dominik)
Signed-off-by: Dominik Grzegorzek <dominik.grzegorzek@intel.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/xe_device_types.h | 3 +
drivers/gpu/drm/xe/xe_eudebug.c | 170 ++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_eudebug_hw.c | 6 +-
drivers/gpu/drm/xe/xe_eudebug_types.h | 3 +-
include/uapi/drm/xe_drm_eudebug.h | 11 ++
5 files changed, 188 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 6fe6e200fe9f..15deeb8488da 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -679,6 +679,9 @@ struct xe_device {
/** @wq: used for client discovery */
struct workqueue_struct *wq;
+
+
+ /** @attention_dwork: attention poll work */
+ struct delayed_work attention_dwork;
} eudebug;
#endif
};
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index 5faa06ba44db..97e8a7ccef55 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -21,6 +21,7 @@
#include "xe_exec_queue.h"
#include "xe_gt.h"
+#include "xe_gt_debug.h"
#include "xe_hw_engine.h"
#include "xe_macros.h"
#include "xe_pm.h"
#include "xe_sync.h"
@@ -1704,6 +1706,154 @@ static const struct file_operations fops = {
.unlocked_ioctl = xe_eudebug_ioctl,
};
+static int send_attention_event(struct xe_eudebug *d, struct xe_exec_queue *q, int lrc_idx)
+{
+ struct drm_xe_eudebug_event_eu_attention *e;
+ struct drm_xe_eudebug_event *event;
+ const u32 size = xe_gt_eu_attention_bitmap_size(q->gt);
+ const u32 sz = struct_size(e, bitmask, size);
+ int h_queue, h_lrc;
+ int ret;
+
+ XE_WARN_ON(lrc_idx < 0 || lrc_idx >= q->width);
+
+ XE_WARN_ON(!xe_exec_queue_is_debuggable(q));
+
+ h_queue = find_handle(d->res, XE_EUDEBUG_RES_TYPE_EXEC_QUEUE, q);
+ if (h_queue < 0)
+ return h_queue;
+
+ h_lrc = find_handle(d->res, XE_EUDEBUG_RES_TYPE_LRC, q->lrc[lrc_idx]);
+ if (h_lrc < 0)
+ return h_lrc;
+
+ event = xe_eudebug_create_event(d, DRM_XE_EUDEBUG_EVENT_EU_ATTENTION, 0,
+ DRM_XE_EUDEBUG_EVENT_STATE_CHANGE, sz);
+
+ if (!event)
+ return -ENOSPC;
+
+ e = cast_event(e, event);
+ e->exec_queue_handle = h_queue;
+ e->lrc_handle = h_lrc;
+ e->bitmask_size = size;
+
+ mutex_lock(&d->hw.lock);
+ event->seqno = atomic_long_inc_return(&d->events.seqno);
+ ret = xe_gt_eu_attention_bitmap(q->gt, &e->bitmask[0], e->bitmask_size);
+ mutex_unlock(&d->hw.lock);
+
+ if (ret)
+ return ret;
+
+ return xe_eudebug_queue_event(d, event);
+}
+
+static int xe_send_gt_attention(struct xe_gt *gt)
+{
+ struct xe_eudebug *d;
+ struct xe_exec_queue *q;
+ int ret, lrc_idx;
+
+ q = xe_gt_runalone_active_queue_get(gt, &lrc_idx);
+ if (IS_ERR(q))
+ return PTR_ERR(q);
+
+ if (!xe_exec_queue_is_debuggable(q)) {
+ ret = -EPERM;
+ goto err_exec_queue_put;
+ }
+
+ d = xe_eudebug_get_nolock(q->vm->xef);
+ if (!d) {
+ ret = -ENOTCONN;
+ goto err_exec_queue_put;
+ }
+
+ if (!completion_done(&d->discovery)) {
+ eu_dbg(d, "discovery not yet done\n");
+ ret = -EBUSY;
+ goto err_eudebug_put;
+ }
+
+ ret = send_attention_event(d, q, lrc_idx);
+ if (ret)
+ xe_eudebug_disconnect(d, ret);
+
+err_eudebug_put:
+ xe_eudebug_put(d);
+err_exec_queue_put:
+ xe_exec_queue_put(q);
+
+ return ret;
+}
+
+static int xe_eudebug_handle_gt_attention(struct xe_gt *gt)
+{
+ int ret;
+
+ ret = xe_gt_eu_threads_needing_attention(gt);
+ if (ret <= 0)
+ return ret;
+
+ ret = xe_send_gt_attention(gt);
+
+ /* Discovery still in progress, pretend the attention was handled */
+ if (ret == -EBUSY)
+ return 0;
+
+ return ret;
+}
+
+static void attention_poll_work(struct work_struct *work)
+{
+ struct xe_device *xe = container_of(work, typeof(*xe),
+ eudebug.attention_dwork.work);
+ const unsigned int poll_interval_ms = 100;
+ long delay = msecs_to_jiffies(poll_interval_ms);
+ struct xe_gt *gt;
+ u8 gt_id;
+
+ if (list_empty(&xe->eudebug.targets))
+ delay *= 11;
+
+ if (delay >= HZ)
+ delay = round_jiffies_up_relative(delay);
+
+ if (xe_pm_runtime_get_if_active(xe)) {
+ for_each_gt(gt, xe, gt_id) {
+ int ret;
+
+ if (gt->info.type != XE_GT_TYPE_MAIN)
+ continue;
+
+ ret = xe_eudebug_handle_gt_attention(gt);
+ if (ret) {
+ /* TODO: error capture */
+ drm_info(&gt_to_xe(gt)->drm,
+ "gt:%d unable to handle eu attention ret=%d\n",
+ gt_id, ret);
+
+ xe_gt_reset_async(gt);
+ }
+ }
+
+ xe_pm_runtime_put(xe);
+ }
+
+ schedule_delayed_work(&xe->eudebug.attention_dwork, delay);
+}
+
+static void attention_poll_stop(struct xe_device *xe)
+{
+ cancel_delayed_work_sync(&xe->eudebug.attention_dwork);
+}
+
+static void attention_poll_start(struct xe_device *xe)
+{
+ mod_delayed_work(system_wq, &xe->eudebug.attention_dwork, 0);
+}
+
static int
xe_eudebug_connect(struct xe_device *xe,
struct drm_file *file,
@@ -1775,6 +1925,7 @@ xe_eudebug_connect(struct xe_device *xe,
kref_get(&d->ref);
queue_work(xe->eudebug.wq, &d->discovery_work);
+ attention_poll_start(xe);
eu_dbg(d, "connected session %lld", d->session);
@@ -1843,6 +1994,11 @@ int xe_eudebug_enable(struct xe_device *xe, bool enable)
XE_EUDEBUG_ENABLED : XE_EUDEBUG_DISABLED;
mutex_unlock(&xe->eudebug.lock);
+ if (enable)
+ attention_poll_start(xe);
+ else
+ attention_poll_stop(xe);
+
return 0;
}
@@ -1884,6 +2040,15 @@ static void xe_eudebug_sysfs_fini(void *arg)
&dev_attr_enable_eudebug.attr);
}
+static void xe_eudebug_fini(struct drm_device *dev, void *__unused)
+{
+ struct xe_device *xe = to_xe_device(dev);
+
+ xe_assert(xe, list_empty(&xe->eudebug.targets));
+
+ attention_poll_stop(xe);
+}
+
void xe_eudebug_init(struct xe_device *xe)
{
struct drm_device *dev = &xe->drm;
@@ -1891,6 +2056,7 @@ void xe_eudebug_init(struct xe_device *xe)
int err;
INIT_LIST_HEAD(&xe->eudebug.targets);
+ INIT_DELAYED_WORK(&xe->eudebug.attention_dwork, attention_poll_work);
xe->eudebug.state = XE_EUDEBUG_NOT_SUPPORTED;
@@ -1905,6 +2071,10 @@ void xe_eudebug_init(struct xe_device *xe)
}
xe->eudebug.wq = wq;
+ err = drmm_add_action_or_reset(&xe->drm, xe_eudebug_fini, NULL);
+ if (err)
+ goto out_err;
+
err = sysfs_create_file(&dev->dev->kobj,
&dev_attr_enable_eudebug.attr);
if (err)
diff --git a/drivers/gpu/drm/xe/xe_eudebug_hw.c b/drivers/gpu/drm/xe/xe_eudebug_hw.c
index 7ac0dd03ebf0..236740ef10ba 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_hw.c
+++ b/drivers/gpu/drm/xe/xe_eudebug_hw.c
@@ -301,7 +301,7 @@ static struct xe_exec_queue *active_hwe_to_exec_queue(struct xe_hw_engine *hwe,
return found;
}
-static struct xe_exec_queue *runalone_active_queue_get(struct xe_gt *gt, int *lrc_idx)
+struct xe_exec_queue *xe_gt_runalone_active_queue_get(struct xe_gt *gt, int *lrc_idx)
{
struct xe_hw_engine *active;
@@ -612,7 +612,7 @@ static int xe_eu_control_resume(struct xe_eudebug *d,
struct xe_exec_queue *active;
int lrc_idx;
- active = runalone_active_queue_get(q->gt, &lrc_idx);
+ active = xe_gt_runalone_active_queue_get(q->gt, &lrc_idx);
if (IS_ERR(active))
return PTR_ERR(active);
@@ -654,7 +654,7 @@ static int xe_eu_control_stopped(struct xe_eudebug *d,
if (XE_WARN_ON(!q) || XE_WARN_ON(!q->gt))
return -EINVAL;
- active = runalone_active_queue_get(q->gt, &lrc_idx);
+ active = xe_gt_runalone_active_queue_get(q->gt, &lrc_idx);
if (IS_ERR(active))
return PTR_ERR(active);
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
index 205777a851a3..85fc321f8b0e 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_types.h
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -37,7 +37,7 @@ enum xe_eudebug_state {
};
#define CONFIG_DRM_XE_DEBUGGER_EVENT_QUEUE_SIZE 64
-#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE
+#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_EU_ATTENTION
/**
* struct xe_eudebug_handle - eudebug resource handle
@@ -172,4 +172,3 @@ struct xe_eudebug {
};
#endif /* _XE_EUDEBUG_TYPES_H_ */
-
diff --git a/include/uapi/drm/xe_drm_eudebug.h b/include/uapi/drm/xe_drm_eudebug.h
index df6e028bcd9c..9f558e56b577 100644
--- a/include/uapi/drm/xe_drm_eudebug.h
+++ b/include/uapi/drm/xe_drm_eudebug.h
@@ -55,6 +55,7 @@ struct drm_xe_eudebug_event {
#define DRM_XE_EUDEBUG_EVENT_VM_BIND 4
#define DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_DEBUG_DATA 5
#define DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE 6
+#define DRM_XE_EUDEBUG_EVENT_EU_ATTENTION 7
__u16 flags;
#define DRM_XE_EUDEBUG_EVENT_CREATE (1 << 0)
@@ -197,6 +198,16 @@ struct drm_xe_eudebug_eu_control {
__u64 bitmask_ptr;
};
+struct drm_xe_eudebug_event_eu_attention {
+ struct drm_xe_eudebug_event base;
+
+ __u64 exec_queue_handle;
+ __u64 lrc_handle;
+ __u32 flags;
+ __u32 bitmask_size;
+ __u8 bitmask[];
+};
+
#if defined(__cplusplus)
}
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 14/20] drm/xe/eudebug_test: Introduce xe_eudebug wa kunit test
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (12 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 13/20] drm/xe/eudebug: Introduce per device attention scan worker Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 15/20] drm/xe: Implement SR-IOV and eudebug exclusivity Mika Kuoppala
` (11 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala
From: Christoph Manszewski <christoph.manszewski@intel.com>
Introduce a kunit test for eudebug. For now, it checks the dynamic
application of workarounds (WAs).
v2: adapt to removal of call_for_each_device (Mika)
v3: s/FW_RENDER/FORCEWAKE_ALL (Mika)
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/tests/xe_eudebug.c | 183 ++++++++++++++++++++
drivers/gpu/drm/xe/tests/xe_live_test_mod.c | 5 +
drivers/gpu/drm/xe/xe_eudebug.c | 4 +
3 files changed, 192 insertions(+)
create mode 100644 drivers/gpu/drm/xe/tests/xe_eudebug.c
diff --git a/drivers/gpu/drm/xe/tests/xe_eudebug.c b/drivers/gpu/drm/xe/tests/xe_eudebug.c
new file mode 100644
index 000000000000..f839fb292b9b
--- /dev/null
+++ b/drivers/gpu/drm/xe/tests/xe_eudebug.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0 AND MIT
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#include <kunit/visibility.h>
+
+#include "regs/xe_gt_regs.h"
+#include "regs/xe_engine_regs.h"
+
+#include "xe_force_wake.h"
+#include "xe_gt_mcr.h"
+#include "xe_mmio.h"
+
+#include "tests/xe_kunit_helpers.h"
+#include "tests/xe_pci_test.h"
+#include "tests/xe_test.h"
+
+#undef XE_REG_MCR
+#define XE_REG_MCR(r_, ...) ((const struct xe_reg_mcr){ \
+ .__reg = XE_REG_INITIALIZER(r_, ##__VA_ARGS__, .mcr = 1) \
+ })
+
+static const char *reg_to_str(struct xe_reg reg)
+{
+ if (reg.raw == TD_CTL.__reg.raw)
+ return "TD_CTL";
+ else if (reg.raw == CS_DEBUG_MODE2(RENDER_RING_BASE).raw)
+ return "CS_DEBUG_MODE2";
+ else if (reg.raw == ROW_CHICKEN.__reg.raw)
+ return "ROW_CHICKEN";
+ else if (reg.raw == ROW_CHICKEN2.__reg.raw)
+ return "ROW_CHICKEN2";
+ else if (reg.raw == ROW_CHICKEN3.__reg.raw)
+ return "ROW_CHICKEN3";
+ else
+ return "UNKNOWN REG";
+}
+
+static u32 get_reg_mask(struct xe_device *xe, struct xe_reg reg)
+{
+ struct kunit *test = kunit_get_current_test();
+ u32 val = 0;
+
+ if (reg.raw == TD_CTL.__reg.raw) {
+ val = TD_CTL_BREAKPOINT_ENABLE |
+ TD_CTL_FORCE_THREAD_BREAKPOINT_ENABLE |
+ TD_CTL_FEH_AND_FEE_ENABLE;
+
+ if (GRAPHICS_VERx100(xe) >= 1250)
+ val |= TD_CTL_GLOBAL_DEBUG_ENABLE;
+
+ } else if (reg.raw == CS_DEBUG_MODE2(RENDER_RING_BASE).raw) {
+ val = GLOBAL_DEBUG_ENABLE;
+ } else if (reg.raw == ROW_CHICKEN.__reg.raw) {
+ val = STALL_DOP_GATING_DISABLE;
+ } else if (reg.raw == ROW_CHICKEN2.__reg.raw) {
+ val = XEHPC_DISABLE_BTB;
+ } else if (reg.raw == ROW_CHICKEN3.__reg.raw) {
+ val = XE2_EUPEND_CHK_FLUSH_DIS;
+ } else {
+ kunit_warn(test, "Invalid register selection: %u\n", reg.raw);
+ }
+
+ return val;
+}
+
+static u32 get_reg_expected(struct xe_device *xe, struct xe_reg reg, bool enable_eudebug)
+{
+ u32 reg_mask = get_reg_mask(xe, reg);
+ u32 reg_bits = 0;
+
+ if (enable_eudebug || reg.raw == ROW_CHICKEN3.__reg.raw)
+ reg_bits = reg_mask;
+ else
+ reg_bits = 0;
+
+ return reg_bits;
+}
+
+static void check_reg(struct xe_gt *gt, bool enable_eudebug, struct xe_reg reg)
+{
+ struct kunit *test = kunit_get_current_test();
+ struct xe_device *xe = gt_to_xe(gt);
+ u32 reg_bits_expected = get_reg_expected(xe, reg, enable_eudebug);
+ u32 reg_mask = get_reg_mask(xe, reg);
+ u32 reg_bits = 0;
+
+ if (reg.mcr)
+ reg_bits = xe_gt_mcr_unicast_read_any(gt, (struct xe_reg_mcr){.__reg = reg});
+ else
+ reg_bits = xe_mmio_read32(&gt->mmio, reg);
+
+ reg_bits &= reg_mask;
+
+ kunit_printk(KERN_DEBUG, test, "%s bits: expected == 0x%x; actual == 0x%x\n",
+ reg_to_str(reg), reg_bits_expected, reg_bits);
+ KUNIT_EXPECT_EQ_MSG(test, reg_bits_expected, reg_bits,
+ "Invalid bits set for %s\n", reg_to_str(reg));
+}
+
+static void __check_regs(struct xe_gt *gt, bool enable_eudebug)
+{
+ struct xe_device *xe = gt_to_xe(gt);
+
+ if (GRAPHICS_VERx100(xe) >= 1200)
+ check_reg(gt, enable_eudebug, TD_CTL.__reg);
+
+ if (GRAPHICS_VERx100(xe) >= 1250 && GRAPHICS_VERx100(xe) <= 1274)
+ check_reg(gt, enable_eudebug, ROW_CHICKEN.__reg);
+
+ if (xe->info.platform == XE_PVC)
+ check_reg(gt, enable_eudebug, ROW_CHICKEN2.__reg);
+
+ if (GRAPHICS_VERx100(xe) >= 2000 && GRAPHICS_VERx100(xe) <= 2004)
+ check_reg(gt, enable_eudebug, ROW_CHICKEN3.__reg);
+}
+
+static void check_regs(struct xe_device *xe, bool enable_eudebug)
+{
+ struct kunit *test = kunit_get_current_test();
+ struct xe_gt *gt;
+ unsigned int fw_ref;
+ u8 id;
+
+ kunit_printk(KERN_DEBUG, test, "Check regs for eudebug %s\n",
+ enable_eudebug ? "enabled" : "disabled");
+
+ xe_pm_runtime_get(xe);
+ for_each_gt(gt, xe, id) {
+ if (xe_gt_is_media_type(gt))
+ continue;
+
+ /* XXX: Figure out the proper per-platform forcewake domain */
+ fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ KUNIT_ASSERT_TRUE_MSG(test, fw_ref, "Forcewake failed.\n");
+
+ __check_regs(gt, enable_eudebug);
+
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ }
+ xe_pm_runtime_put(xe);
+}
+
+static int toggle_reg_value(struct xe_device *xe)
+{
+ struct kunit *test = kunit_get_current_test();
+ bool enable_eudebug = xe_eudebug_is_enabled(xe);
+
+ kunit_printk(KERN_DEBUG, test, "Test eudebug WAs for graphics version: %u\n",
+ GRAPHICS_VERx100(xe));
+
+ check_regs(xe, enable_eudebug);
+
+ xe_eudebug_enable(xe, !enable_eudebug);
+ check_regs(xe, !enable_eudebug);
+
+ xe_eudebug_enable(xe, enable_eudebug);
+ check_regs(xe, enable_eudebug);
+
+ return 0;
+}
+
+static void xe_eudebug_toggle_reg_kunit(struct kunit *test)
+{
+ struct xe_device *xe = test->priv;
+
+ toggle_reg_value(xe);
+}
+
+static struct kunit_case xe_eudebug_tests[] = {
+ KUNIT_CASE_PARAM(xe_eudebug_toggle_reg_kunit,
+ xe_pci_live_device_gen_param),
+ {}
+};
+
+VISIBLE_IF_KUNIT
+struct kunit_suite xe_eudebug_test_suite = {
+ .name = "xe_eudebug",
+ .test_cases = xe_eudebug_tests,
+ .init = xe_kunit_helper_xe_device_live_test_init,
+};
+EXPORT_SYMBOL_IF_KUNIT(xe_eudebug_test_suite);
diff --git a/drivers/gpu/drm/xe/tests/xe_live_test_mod.c b/drivers/gpu/drm/xe/tests/xe_live_test_mod.c
index c55e46f1ae92..dc83bb6a892d 100644
--- a/drivers/gpu/drm/xe/tests/xe_live_test_mod.c
+++ b/drivers/gpu/drm/xe/tests/xe_live_test_mod.c
@@ -19,6 +19,11 @@ kunit_test_suite(xe_migrate_test_suite);
kunit_test_suite(xe_mocs_test_suite);
kunit_test_suite(xe_guc_g2g_test_suite);
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+extern struct kunit_suite xe_eudebug_test_suite;
+kunit_test_suite(xe_eudebug_test_suite);
+#endif
+
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("xe live kunit tests");
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index 97e8a7ccef55..1c1fa02d1bd7 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -2101,3 +2101,7 @@ int xe_eudebug_connect_ioctl(struct drm_device *dev,
return xe_eudebug_connect(xe, file, param);
}
+
+#if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
+#include "tests/xe_eudebug.c"
+#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 15/20] drm/xe: Implement SR-IOV and eudebug exclusivity
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (13 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 14/20] drm/xe/eudebug_test: Introduce xe_eudebug wa kunit test Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 16/20] drm/xe: Add xe_client_debugfs and introduce debug_data file Mika Kuoppala
` (10 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala
From: Christoph Manszewski <christoph.manszewski@intel.com>
EU debug functionality relies on access to specific mmio registers.
Since VFs have no access to those registers, and to avoid interference
with VFs, make SR-IOV and eudebug mutually exclusive: do not allow
enabling eudebug in VF mode or while any VFs are provisioned, and
likewise do not allow provisioning VFs while eudebug is enabled.
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Maciej Patelczyk <maciej.patelczyk@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/tests/xe_eudebug.c | 10 ++++++++++
drivers/gpu/drm/xe/xe_eudebug.c | 23 +++++++++++++++++++++--
2 files changed, 31 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/tests/xe_eudebug.c b/drivers/gpu/drm/xe/tests/xe_eudebug.c
index f839fb292b9b..601725c642bd 100644
--- a/drivers/gpu/drm/xe/tests/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/tests/xe_eudebug.c
@@ -8,9 +8,13 @@
#include "regs/xe_gt_regs.h"
#include "regs/xe_engine_regs.h"
+#include "xe_device.h"
+#include "xe_eudebug.h"
#include "xe_force_wake.h"
+#include "xe_gt.h"
#include "xe_gt_mcr.h"
#include "xe_mmio.h"
+#include "xe_pm.h"
#include "tests/xe_kunit_helpers.h"
#include "tests/xe_pci_test.h"
@@ -147,6 +151,12 @@ static int toggle_reg_value(struct xe_device *xe)
struct kunit *test = kunit_get_current_test();
bool enable_eudebug = xe_eudebug_is_enabled(xe);
+ if (IS_SRIOV_VF(xe))
+ kunit_skip(test, "eudebug not available in SR-IOV VF mode\n");
+
+ if (xe->eudebug.state == XE_EUDEBUG_NOT_SUPPORTED)
+ kunit_skip(test, "eudebug not supported\n");
+
kunit_printk(KERN_DEBUG, test, "Test eudebug WAs for graphics version: %u\n",
GRAPHICS_VERx100(xe));
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index 1c1fa02d1bd7..44381084ee96 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -25,6 +25,7 @@
#include "xe_gt_debug.h"
#include "xe_macros.h"
#include "xe_pm.h"
+#include "xe_sriov_pf.h"
#include "xe_sync.h"
#include "xe_vm.h"
@@ -1954,6 +1955,7 @@ bool xe_eudebug_is_enabled(struct xe_device *xe)
int xe_eudebug_enable(struct xe_device *xe, bool enable)
{
struct xe_gt *gt;
+ int ret;
int i;
u8 id;
@@ -1974,6 +1976,14 @@ int xe_eudebug_enable(struct xe_device *xe, bool enable)
return 0;
}
+ if (enable && IS_SRIOV_PF(xe)) {
+ ret = xe_sriov_pf_lockdown(xe);
+ if (ret) {
+ mutex_unlock(&xe->eudebug.lock);
+ return ret;
+ }
+ }
+
xe_pm_runtime_get(xe);
for_each_gt(gt, xe, id) {
@@ -1994,11 +2004,15 @@ int xe_eudebug_enable(struct xe_device *xe, bool enable)
XE_EUDEBUG_ENABLED : XE_EUDEBUG_DISABLED;
mutex_unlock(&xe->eudebug.lock);
- if (enable)
+ if (enable) {
attention_poll_start(xe);
- else
+ } else {
attention_poll_stop(xe);
+ if (IS_SRIOV_PF(xe))
+ xe_sriov_pf_end_lockdown(xe);
+ }
+
return 0;
}
@@ -2060,6 +2074,11 @@ void xe_eudebug_init(struct xe_device *xe)
xe->eudebug.state = XE_EUDEBUG_NOT_SUPPORTED;
+ if (IS_SRIOV_VF(xe)) {
+ drm_info(&xe->drm, "eudebug not available in SR-IOV VF mode\n");
+ return;
+ }
+
err = drmm_mutex_init(dev, &xe->eudebug.lock);
if (err)
goto out_err;
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 16/20] drm/xe: Add xe_client_debugfs and introduce debug_data file
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (14 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 15/20] drm/xe: Implement SR-IOV and eudebug exclusivity Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-03 9:07 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 17/20] drm/xe/eudebug: Add read/count/compare helper for eu attention Mika Kuoppala
` (9 subsequent siblings)
25 siblings, 1 reply; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala
From: Christoph Manszewski <christoph.manszewski@intel.com>
Create a debug_data file for each xe file/client that lists all mapped
debug data, mimicking the format of '/proc/pid/maps'.
Each line represents a single mapping and has the following format:
<vm id> <begin>-<end> <flags> <offset> <pathname>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/Makefile | 3 +-
drivers/gpu/drm/xe/xe_client_debugfs.c | 118 +++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_client_debugfs.h | 19 ++++
drivers/gpu/drm/xe/xe_device.c | 3 +
4 files changed, 142 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/xe/xe_client_debugfs.c
create mode 100644 drivers/gpu/drm/xe/xe_client_debugfs.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 05c74032ed63..e19736227dfa 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -336,7 +336,8 @@ ifeq ($(CONFIG_DRM_FBDEV_EMULATION),y)
endif
ifeq ($(CONFIG_DEBUG_FS),y)
- xe-y += xe_debugfs.o \
+ xe-y += xe_client_debugfs.o \
+ xe_debugfs.o \
xe_gt_debugfs.o \
xe_gt_sriov_vf_debugfs.o \
xe_gt_stats.o \
diff --git a/drivers/gpu/drm/xe/xe_client_debugfs.c b/drivers/gpu/drm/xe/xe_client_debugfs.c
new file mode 100644
index 000000000000..0b952038e698
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_client_debugfs.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include "xe_client_debugfs.h"
+
+#include <linux/debugfs.h>
+
+#include "xe_debug_data.h"
+#include "xe_debug_data_types.h"
+#include "xe_device_types.h"
+#include "xe_vm_types.h"
+
+#define MAX_LINE_LEN (64 + PATH_MAX)
+
+static ssize_t debug_data_read(struct file *file, char __user *buf, size_t count,
+ loff_t *ppos)
+{
+ struct xe_debug_data *dd;
+ unsigned long vm_index;
+ const char *path;
+ char *kbuf;
+ struct xe_vm *vm;
+
+ struct xe_file *xef = file->private_data;
+ ssize_t total = 0;
+ loff_t pos = 0;
+
+ if (!xef || !buf)
+ return -EINVAL;
+
+ kbuf = kmalloc(MAX_LINE_LEN, GFP_KERNEL);
+ if (!kbuf)
+ return -ENOMEM;
+
+ mutex_lock(&xef->vm.lock);
+
+ xa_for_each(&xef->vm.xa, vm_index, vm) {
+ mutex_lock(&vm->debug_data.lock);
+ list_for_each_entry(dd, &vm->debug_data.list, link) {
+ int len;
+
+ path = dd->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO ?
+ xe_debug_data_pseudo_path_to_string(dd->pseudopath) :
+ dd->pathname;
+
+ /* Format: <vm id> <begin>-<end> <flags> <offset> <pathname> */
+ len = snprintf(kbuf, MAX_LINE_LEN, "%lu 0x%llx-0x%llx 0x%llx 0x%x\t%s\n",
+ vm_index,
+ dd->addr,
+ dd->addr + dd->range,
+ dd->flags,
+ dd->offset,
+ path);
+
+ if (pos + len <= *ppos) {
+ pos += len;
+ continue;
+ }
+
+ if (pos < *ppos) {
+ const int skip = *ppos - pos;
+
+ len -= skip;
+ memmove(kbuf, kbuf + skip, len);
+ pos = *ppos;
+ }
+
+ if (total + len > count)
+ len = count - total;
+
+ if (copy_to_user(buf + total, kbuf, len)) {
+ mutex_unlock(&vm->debug_data.lock);
+ mutex_unlock(&xef->vm.lock);
+ kfree(kbuf);
+ return -EFAULT;
+ }
+
+ total += len;
+ pos += len;
+
+ if (total >= count) {
+ mutex_unlock(&vm->debug_data.lock);
+ mutex_unlock(&xef->vm.lock);
+ kfree(kbuf);
+ *ppos = pos;
+ return total;
+ }
+ }
+ mutex_unlock(&vm->debug_data.lock);
+ }
+
+ mutex_unlock(&xef->vm.lock);
+ kfree(kbuf);
+ *ppos = pos;
+ return total;
+}
+
+static int debug_data_open(struct inode *inode, struct file *file)
+{
+ struct xe_file *xef = inode->i_private;
+
+ file->private_data = xef;
+ return 0;
+}
+
+static const struct file_operations maps_fops = {
+ .owner = THIS_MODULE,
+ .open = debug_data_open,
+ .read = debug_data_read,
+ .llseek = default_llseek,
+};
+
+void xe_client_debugfs_register(struct xe_file *xef)
+{
+ debugfs_create_file("debug_data", 0444, xef->drm->debugfs_client, xef, &maps_fops);
+}
diff --git a/drivers/gpu/drm/xe/xe_client_debugfs.h b/drivers/gpu/drm/xe/xe_client_debugfs.h
new file mode 100644
index 000000000000..9eace15c0a49
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_client_debugfs.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_CLIENT_DEBUGFS_H_
+#define _XE_CLIENT_DEBUGFS_H_
+
+#include <linux/debugfs.h>
+
+struct xe_file;
+
+#ifdef CONFIG_DEBUG_FS
+void xe_client_debugfs_register(struct xe_file *xef);
+#else
+static inline void xe_client_debugfs_register(struct xe_file *xef) { }
+#endif
+
+#endif // _XE_CLIENT_DEBUGFS_H_
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index a60e5265ae59..cf3c20b11f83 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -25,6 +25,7 @@
#include "regs/xe_regs.h"
#include "xe_bo.h"
#include "xe_bo_evict.h"
+#include "xe_client_debugfs.h"
#include "xe_debugfs.h"
#include "xe_devcoredump.h"
#include "xe_device_sysfs.h"
@@ -122,6 +123,8 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
put_task_struct(task);
}
+ xe_client_debugfs_register(xef);
+
return 0;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread* [PATCH 16/20] drm/xe: Add xe_client_debugfs and introduce debug_data file
2025-12-02 13:52 ` [PATCH 16/20] drm/xe: Add xe_client_debugfs and introduce debug_data file Mika Kuoppala
@ 2025-12-03 9:07 ` Mika Kuoppala
0 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-03 9:07 UTC (permalink / raw)
To: intel-xe; +Cc: Christoph Manszewski, Sunil Khatri, Mika Kuoppala
From: Christoph Manszewski <christoph.manszewski@intel.com>
Create a debug_data file for each xe file/client that lists all mapped
debug data, mimicking the format of '/proc/pid/maps'.
Each line represents a single mapping and has the following format:
<vm id> <begin>-<end> <flags> <offset> <pathname>
Cc: Sunil Khatri <sunil.khatri@amd.com>
Signed-off-by: Christoph Manszewski <christoph.manszewski@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/Makefile | 3 +-
drivers/gpu/drm/xe/xe_client_debugfs.c | 118 +++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_client_debugfs.h | 19 ++++
drivers/gpu/drm/xe/xe_device.c | 3 +
4 files changed, 142 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/xe/xe_client_debugfs.c
create mode 100644 drivers/gpu/drm/xe/xe_client_debugfs.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 05c74032ed63..e19736227dfa 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -336,7 +336,8 @@ ifeq ($(CONFIG_DRM_FBDEV_EMULATION),y)
endif
ifeq ($(CONFIG_DEBUG_FS),y)
- xe-y += xe_debugfs.o \
+ xe-y += xe_client_debugfs.o \
+ xe_debugfs.o \
xe_gt_debugfs.o \
xe_gt_sriov_vf_debugfs.o \
xe_gt_stats.o \
diff --git a/drivers/gpu/drm/xe/xe_client_debugfs.c b/drivers/gpu/drm/xe/xe_client_debugfs.c
new file mode 100644
index 000000000000..0b952038e698
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_client_debugfs.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include "xe_client_debugfs.h"
+
+#include <linux/debugfs.h>
+
+#include "xe_debug_data.h"
+#include "xe_debug_data_types.h"
+#include "xe_device_types.h"
+#include "xe_vm_types.h"
+
+#define MAX_LINE_LEN (64 + PATH_MAX)
+
+static ssize_t debug_data_read(struct file *file, char __user *buf, size_t count,
+ loff_t *ppos)
+{
+ struct xe_debug_data *dd;
+ unsigned long vm_index;
+ const char *path;
+ char *kbuf;
+ struct xe_vm *vm;
+
+ struct xe_file *xef = file->private_data;
+ ssize_t total = 0;
+ loff_t pos = 0;
+
+ if (!xef || !buf)
+ return -EINVAL;
+
+ kbuf = kmalloc(MAX_LINE_LEN, GFP_KERNEL);
+ if (!kbuf)
+ return -ENOMEM;
+
+ mutex_lock(&xef->vm.lock);
+
+ xa_for_each(&xef->vm.xa, vm_index, vm) {
+ mutex_lock(&vm->debug_data.lock);
+ list_for_each_entry(dd, &vm->debug_data.list, link) {
+ int len;
+
+ path = dd->flags & DRM_XE_VM_BIND_DEBUG_DATA_FLAG_PSEUDO ?
+ xe_debug_data_pseudo_path_to_string(dd->pseudopath) :
+ dd->pathname;
+
+ /* Format: <vm id> <begin>-<end> <flags> <offset> <pathname> */
+ len = snprintf(kbuf, MAX_LINE_LEN, "%lu 0x%llx-0x%llx 0x%llx 0x%x\t%s\n",
+ vm_index,
+ dd->addr,
+ dd->addr + dd->range,
+ dd->flags,
+ dd->offset,
+ path);
+
+ if (pos + len <= *ppos) {
+ pos += len;
+ continue;
+ }
+
+ if (pos < *ppos) {
+ const int skip = *ppos - pos;
+
+ len -= skip;
+ memmove(kbuf, kbuf + skip, len);
+ pos = *ppos;
+ }
+
+ if (total + len > count)
+ len = count - total;
+
+ if (copy_to_user(buf + total, kbuf, len)) {
+ mutex_unlock(&vm->debug_data.lock);
+ mutex_unlock(&xef->vm.lock);
+ kfree(kbuf);
+ return -EFAULT;
+ }
+
+ total += len;
+ pos += len;
+
+ if (total >= count) {
+ mutex_unlock(&vm->debug_data.lock);
+ mutex_unlock(&xef->vm.lock);
+ kfree(kbuf);
+ *ppos = pos;
+ return total;
+ }
+ }
+ mutex_unlock(&vm->debug_data.lock);
+ }
+
+ mutex_unlock(&xef->vm.lock);
+ kfree(kbuf);
+ *ppos = pos;
+ return total;
+}
+
+static int debug_data_open(struct inode *inode, struct file *file)
+{
+ struct xe_file *xef = inode->i_private;
+
+ file->private_data = xef;
+ return 0;
+}
+
+static const struct file_operations maps_fops = {
+ .owner = THIS_MODULE,
+ .open = debug_data_open,
+ .read = debug_data_read,
+ .llseek = default_llseek,
+};
+
+void xe_client_debugfs_register(struct xe_file *xef)
+{
+ debugfs_create_file("debug_data", 0444, xef->drm->debugfs_client, xef, &maps_fops);
+}
diff --git a/drivers/gpu/drm/xe/xe_client_debugfs.h b/drivers/gpu/drm/xe/xe_client_debugfs.h
new file mode 100644
index 000000000000..9eace15c0a49
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_client_debugfs.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_CLIENT_DEBUGFS_H_
+#define _XE_CLIENT_DEBUGFS_H_
+
+#include <linux/debugfs.h>
+
+struct xe_file;
+
+#ifdef CONFIG_DEBUG_FS
+void xe_client_debugfs_register(struct xe_file *xef);
+#else
+static inline void xe_client_debugfs_register(struct xe_file *xef) { }
+#endif
+
+#endif // _XE_CLIENT_DEBUGFS_H_
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index a60e5265ae59..cf3c20b11f83 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -25,6 +25,7 @@
#include "regs/xe_regs.h"
#include "xe_bo.h"
#include "xe_bo_evict.h"
+#include "xe_client_debugfs.h"
#include "xe_debugfs.h"
#include "xe_devcoredump.h"
#include "xe_device_sysfs.h"
@@ -122,6 +123,8 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
put_task_struct(task);
}
+ xe_client_debugfs_register(xef);
+
return 0;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 30+ messages in thread
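For reference, each line the debug_data file emits follows the format noted in the code ("<vm id> <begin>-<end> <flags> <offset> <pathname>") and can be consumed from userspace with a plain sscanf(). This is an illustrative sketch only; the struct and field names below are not part of any UAPI:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Parsed representation of one debug_data line; field names are
 * illustrative, not part of the kernel ABI. */
struct debug_data_line {
	unsigned long vm_id;
	unsigned long long begin, end, flags;
	unsigned int offset;
	char path[4096];
};

/* Parse "<vm id> 0x<begin>-0x<end> 0x<flags> 0x<offset>\t<path>".
 * Returns 0 on success, -1 on malformed input. */
static int parse_debug_data_line(const char *line, struct debug_data_line *out)
{
	int n = sscanf(line, "%lu 0x%llx-0x%llx 0x%llx 0x%x\t%4095[^\n]",
		       &out->vm_id, &out->begin, &out->end,
		       &out->flags, &out->offset, out->path);
	return n == 6 ? 0 : -1;
}
```

A debugger-side consumer would read the whole file and feed it through this line by line; the kernel side paginates via *ppos, so short reads simply resume where they left off.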
* [PATCH 17/20] drm/xe/eudebug: Add read/count/compare helper for eu attention
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (15 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 16/20] drm/xe: Add xe_client_debugfs and introduce debug_data file Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 18/20] drm/xe/vm: Support for adding null page VMA to VM on request Mika Kuoppala
` (8 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala
From: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Add an xe_eu_attentions structure to capture and store EU attention bits.
Add a helper to count the number of EU threads that have raised attention
in a snapshot, and another helper to count the number of EU threads whose
attention state has changed between two snapshots.
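The two helpers amount to a popcount over the attention bitmap plus a byte-wise comparison of two snapshots. A minimal userspace sketch, with illustrative names and GCC's __builtin_popcount standing in for the kernel's bitmap_weight():

```c
#include <assert.h>
#include <stddef.h>

/* Count set bits (EU threads with the attention bit raised) in a raw
 * attention bitmap, mirroring what bitmap_weight() does in the kernel
 * helper. */
static unsigned int att_count(const unsigned char *att, size_t size)
{
	unsigned int count = 0;
	size_t i;

	for (i = 0; i < size; i++)
		count += __builtin_popcount(att[i]);
	return count;
}

/* Count bytes that differ between two snapshots: a cheap proxy for
 * "did any thread change attention state between reads". Note that,
 * like the kernel helper, this counts differing *bytes*, not bits. */
static unsigned int att_xor_count(const unsigned char *a,
				  const unsigned char *b, size_t size)
{
	unsigned int count = 0;
	size_t i;

	for (i = 0; i < size; i++)
		if (a[i] ^ b[i])
			count++;
	return count;
}
```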
v2: fix array size calculation (Christoph)
Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/xe_gt_debug.c | 65 ++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_gt_debug.h | 7 +++
drivers/gpu/drm/xe/xe_gt_debug_types.h | 23 +++++++++
3 files changed, 95 insertions(+)
create mode 100644 drivers/gpu/drm/xe/xe_gt_debug_types.h
diff --git a/drivers/gpu/drm/xe/xe_gt_debug.c b/drivers/gpu/drm/xe/xe_gt_debug.c
index 314eef6734c3..bf2ca95c7389 100644
--- a/drivers/gpu/drm/xe/xe_gt_debug.c
+++ b/drivers/gpu/drm/xe/xe_gt_debug.c
@@ -3,12 +3,14 @@
* Copyright © 2023 Intel Corporation
*/
+#include <linux/delay.h>
#include "regs/xe_gt_regs.h"
#include "xe_device.h"
#include "xe_force_wake.h"
#include "xe_gt.h"
#include "xe_gt_topology.h"
#include "xe_gt_debug.h"
+#include "xe_gt_debug_types.h"
#include "xe_gt_mcr.h"
#include "xe_pm.h"
#include "xe_macros.h"
@@ -177,3 +179,66 @@ int xe_gt_eu_threads_needing_attention(struct xe_gt *gt)
return err < 0 ? 0 : err;
}
+
+static inline unsigned int
+xe_eu_attentions_count(const struct xe_eu_attentions *a)
+{
+ return bitmap_weight((void *)a->att, a->size * BITS_PER_BYTE);
+}
+
+void xe_gt_eu_attentions_read(struct xe_gt *gt,
+ struct xe_eu_attentions *a,
+ const unsigned int settle_time_ms)
+{
+ unsigned int prev = 0;
+ ktime_t end, now;
+
+ now = ktime_get_raw();
+ end = ktime_add_ms(now, settle_time_ms);
+
+ a->ts = 0;
+ a->size = min_t(int,
+ xe_gt_eu_attention_bitmap_size(gt),
+ sizeof(a->att));
+
+ do {
+ unsigned int attn;
+
+ xe_gt_eu_attention_bitmap(gt, a->att, a->size);
+ attn = xe_eu_attentions_count(a);
+
+ now = ktime_get_raw();
+
+ if (a->ts == 0)
+ a->ts = now;
+ else if (attn && attn != prev)
+ a->ts = now;
+
+ prev = attn;
+
+ if (settle_time_ms)
+ udelay(5);
+
+ /*
+ * XXX We are gathering data for production SIP to find
+ * the upper limit of settle time. For now, we wait full
+ * timeout value regardless.
+ */
+ } while (ktime_before(now, end));
+}
+
+unsigned int xe_eu_attentions_xor_count(const struct xe_eu_attentions *a,
+ const struct xe_eu_attentions *b)
+{
+ unsigned int count = 0;
+ unsigned int i;
+
+ if (XE_WARN_ON(a->size != b->size))
+ return 0;
+
+ for (i = 0; i < a->size; i++)
+ if (a->att[i] ^ b->att[i])
+ count++;
+
+ return count;
+}
diff --git a/drivers/gpu/drm/xe/xe_gt_debug.h b/drivers/gpu/drm/xe/xe_gt_debug.h
index 9dabe9cc1d25..0d03565195b4 100644
--- a/drivers/gpu/drm/xe/xe_gt_debug.h
+++ b/drivers/gpu/drm/xe/xe_gt_debug.h
@@ -9,6 +9,7 @@
#include <linux/bits.h>
#include <linux/math.h>
+struct xe_eu_attentions;
struct xe_gt;
#define XE_GT_ATTENTION_TIMEOUT_MS 100
@@ -29,4 +30,10 @@ int xe_gt_eu_attention_bitmap_size(struct xe_gt *gt);
int xe_gt_eu_attention_bitmap(struct xe_gt *gt, u8 *bits,
unsigned int bitmap_size);
+void xe_gt_eu_attentions_read(struct xe_gt *gt,
+ struct xe_eu_attentions *a,
+ const unsigned int settle_time_ms);
+
+unsigned int xe_eu_attentions_xor_count(const struct xe_eu_attentions *a,
+ const struct xe_eu_attentions *b);
#endif
diff --git a/drivers/gpu/drm/xe/xe_gt_debug_types.h b/drivers/gpu/drm/xe/xe_gt_debug_types.h
new file mode 100644
index 000000000000..35a0e822f20a
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_gt_debug_types.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef __XE_GT_DEBUG_TYPES_
+#define __XE_GT_DEBUG_TYPES_
+
+#include <linux/types.h>
+
+#define XE_GT_EU_ATT_ROWS 2u
+#define XE_GT_EU_ATT_MAX_THREADS 16
+#define XE_GT_EU_MAX_NUM 1024
+
+struct xe_eu_attentions {
+ u8 att[XE_GT_EU_MAX_NUM *
+ XE_GT_EU_ATT_ROWS *
+ XE_GT_EU_ATT_MAX_THREADS/8];
+ unsigned int size;
+ ktime_t ts;
+};
+
+#endif
--
2.43.0
* [PATCH 18/20] drm/xe/vm: Support for adding null page VMA to VM on request
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (16 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 17/20] drm/xe/eudebug: Add read/count/compare helper for eu attention Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 19/20] drm/xe/eudebug: Introduce EU pagefault handling interface Mika Kuoppala
` (7 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Oak Zeng, Niranjana Vishwanathapura, Stuart Summers, Bruce Chang,
Mika Kuoppala
From: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
XE2 (and PVC) HW has a limitation that a pagefault due to an invalid
access halts the corresponding EUs. So, in order to keep the debugger
operational, the KMD needs to install a temporary page to unhalt the EUs.
This is planned to be used for pagefault handling while the EU debugger is
running. The idea is to install a null page VMA if the pagefault comes
from an invalid access. After the null page PTE is installed, the user
debugger can continue to run/inspect without causing a fatal failure or a
reset and stop.
Based on Bruce's implementation [1].
[1] https://lore.kernel.org/intel-xe/20230829231648.4438-1-yu.bruce.chang@intel.com/
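The policy above reduces to a small address-range check: if the faulting address hits no existing mapping, install a single scratch ("null") page at the page-aligned address. The types and lookup below are purely illustrative stand-ins; the driver itself uses xe_vm_find_vma_by_addr() and xe_vm_create_null_vma():

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for a VMA: an inclusive address range. */
struct range { uint64_t start, end; };

static bool addr_mapped(const struct range *vmas, size_t n, uint64_t addr)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (addr >= vmas[i].start && addr <= vmas[i].end)
			return true;
	return false;
}

/* Return the page-aligned start of the null VMA to install, or 0 if
 * the address is already mapped and no null page is needed. */
static uint64_t null_vma_start(const struct range *vmas, size_t n,
			       uint64_t addr, uint64_t page_size)
{
	if (addr_mapped(vmas, n, addr))
		return 0;
	return addr & ~(page_size - 1); /* align down to page boundary */
}
```

The real code also picks the page size from the VM flags (SZ_64K for XE_VM_FLAG_64K VMs, SZ_4K otherwise) before creating the sparse VMA.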
v2: s/NULL_VMA/DRM_GPUVA_SPARSE (Mika)
v3: use ERR_CAST as we don't return null (Mika)
Cc: Oak Zeng <oak.zeng@intel.com>
Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Stuart Summers <stuart.summers@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Co-developed-by: Bruce Chang <yu.bruce.chang@intel.com>
Signed-off-by: Bruce Chang <yu.bruce.chang@intel.com>
Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 33 +++++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 ++
2 files changed, 35 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 6052bb81a827..a7df015af3b2 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4676,3 +4676,36 @@ int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t r
return xe_vm_alloc_vma(vm, &map_req, false);
}
+
+struct xe_vma *xe_vm_create_null_vma(struct xe_vm *vm, u64 addr)
+{
+ struct xe_vma_mem_attr default_attr = {
+ .preferred_loc = {
+ .devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
+ .migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
+ },
+ .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+ .default_pat_index = vm->xe->pat.idx[XE_CACHE_NONE],
+ .pat_index = vm->xe->pat.idx[XE_CACHE_NONE],
+ };
+ struct xe_vma *vma;
+ u32 page_size;
+ int err;
+
+ if (xe_vm_is_closed_or_banned(vm))
+ return ERR_PTR(-ENOENT);
+
+ page_size = vm->flags & XE_VM_FLAG_64K ? SZ_64K : SZ_4K;
+ vma = xe_vma_create(vm, NULL, 0, addr, addr + page_size - 1,
+ &default_attr, DRM_GPUVA_SPARSE);
+ if (IS_ERR(vma))
+ return ERR_CAST(vma);
+
+ err = xe_vm_insert_vma(vm, vma);
+ if (err) {
+ xe_vma_destroy_late(vma);
+ return ERR_PTR(err);
+ }
+
+ return vma;
+}
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 361f10b3c453..37da7d1d6ec1 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -414,4 +414,6 @@ static inline struct drm_exec *xe_vm_validation_exec(struct xe_vm *vm)
#define xe_vm_has_valid_gpu_mapping(tile, tile_present, tile_invalidated) \
((READ_ONCE(tile_present) & ~READ_ONCE(tile_invalidated)) & BIT((tile)->id))
+struct xe_vma *xe_vm_create_null_vma(struct xe_vm *vm, u64 addr);
+
#endif
--
2.43.0
* [PATCH 19/20] drm/xe/eudebug: Introduce EU pagefault handling interface
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (17 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 18/20] drm/xe/vm: Support for adding null page VMA to VM on request Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 13:52 ` [PATCH 20/20] drm/xe/eudebug: Enable EU pagefault handling Mika Kuoppala
` (6 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Jan Maślak, Mika Kuoppala
From: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
XE2 (and PVC) HW has a limitation that a pagefault due to an invalid
access halts the corresponding EUs. To solve this problem, introduce
EU pagefault handling functionality, which allows unhalting the
pagefaulted EU threads and informs the EU debugger about the attention
state of the EU threads during execution.
When a pagefault occurs, send a DRM_XE_EUDEBUG_EVENT_PAGEFAULT event
after the pagefault has been handled. The pagefault eudebug event uses
the newly added drm_xe_eudebug_event_pagefault type.
While a pagefault is being handled, sending the
DRM_XE_EUDEBUG_EVENT_EU_ATTENTION event to the client is suppressed.
Pagefault event delivery follows the policy below.
(1) If EU debugger discovery has completed and the pagefaulted EU threads
turn on the attention bit, the pagefault handler delivers the
pagefault event directly.
(2) If a pagefault occurs during the EU debugger discovery process, the
pagefault handler queues a pagefault event and sends the queued event
once discovery has completed and the pagefaulted EU threads turn on
the attention bit.
(3) If the pagefaulted EU thread fails to turn on the attention bit
within the specified time, the attention scan worker sends the
pagefault event when it detects that the attention bit is turned on.
If multiple EU threads are running and pagefault on the same invalid
address, send a single pagefault event (DRM_XE_EUDEBUG_EVENT_PAGEFAULT
type) to the user debugger instead of one pagefault event per EU thread.
If EU threads (other than the one that already caused a pagefault) access
new invalid addresses, send a new pagefault event.
As the attention scan worker sends the EU attention event whenever the
attention bit is turned on, the user debugger receives the attention
event immediately after the pagefault event; the pagefault event always
precedes the attention event. When the user debugger receives an
attention event after a pagefault event, it can detect whether additional
breakpoints or interrupts occurred on top of the existing pagefault by
comparing the EU threads where the pagefault occurred with the EU threads
where the attention bit is newly enabled.
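The three delivery cases reduce to a small decision function. This is an illustrative model of the policy, not driver code; the enum and names are made up for the sketch:

```c
#include <assert.h>
#include <stdbool.h>

/* Outcomes of the pagefault event delivery policy; illustrative names. */
enum pf_action {
	PF_SEND_NOW,   /* (1) discovery done, attention bit raised */
	PF_QUEUE,      /* (2) discovery still in progress: queue the event */
	PF_DEFER_SCAN, /* (3) no attention yet: attention scan worker sends */
};

static enum pf_action pf_delivery(bool discovery_done, bool attention_raised)
{
	if (!discovery_done)
		return PF_QUEUE;
	return attention_raised ? PF_SEND_NOW : PF_DEFER_SCAN;
}
```

In the driver, case (2) corresponds to queue_pagefault() and case (3) to the -EBUSY path in send_pagefault() that the attention scan worker later retries via send_queued_pagefaults().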
v2: use only force exception (Joonas, Mika)
v3: rebased on v4 (Mika)
v4: streamline uapi, cleanups (Mika)
Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Jan Maślak <jan.maslak@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/Makefile | 2 +-
drivers/gpu/drm/xe/xe_eudebug.c | 101 ++++-
drivers/gpu/drm/xe/xe_eudebug.h | 9 +
drivers/gpu/drm/xe/xe_eudebug_hw.c | 15 +-
drivers/gpu/drm/xe/xe_eudebug_pagefault.c | 441 ++++++++++++++++++++++
drivers/gpu/drm/xe/xe_eudebug_pagefault.h | 47 +++
drivers/gpu/drm/xe/xe_eudebug_types.h | 69 +++-
drivers/gpu/drm/xe/xe_pagefault_types.h | 4 +
include/uapi/drm/xe_drm_eudebug.h | 12 +
9 files changed, 678 insertions(+), 22 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_eudebug_pagefault.c
create mode 100644 drivers/gpu/drm/xe/xe_eudebug_pagefault.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index e19736227dfa..64d3d324b7aa 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -148,7 +148,7 @@ xe-$(CONFIG_DRM_XE_GPUSVM) += xe_svm.o
xe-$(CONFIG_DRM_GPUSVM) += xe_userptr.o
# debugging shaders with gdb (eudebug) support
-xe-$(CONFIG_DRM_XE_EUDEBUG) += xe_eudebug.o xe_eudebug_vm.o xe_eudebug_hw.o xe_gt_debug.o
+xe-$(CONFIG_DRM_XE_EUDEBUG) += xe_eudebug.o xe_eudebug_vm.o xe_eudebug_hw.o xe_eudebug_pagefault.o xe_gt_debug.o
# graphics hardware monitoring (HWMON) support
xe-$(CONFIG_HWMON) += xe_hwmon.o
diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
index 44381084ee96..5e585e0006af 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.c
+++ b/drivers/gpu/drm/xe/xe_eudebug.c
@@ -17,12 +17,16 @@
#include "xe_eudebug.h"
#include "xe_eudebug_hw.h"
#include "xe_eudebug_types.h"
+#include "xe_eudebug_pagefault.h"
#include "xe_eudebug_vm.h"
#include "xe_exec_queue.h"
+#include "xe_force_wake.h"
#include "xe_gt.h"
#include "xe_hw_engine.h"
#include "xe_gt.h"
#include "xe_gt_debug.h"
+#include "xe_gt_mcr.h"
+#include "regs/xe_gt_regs.h"
#include "xe_macros.h"
#include "xe_pm.h"
#include "xe_sriov_pf.h"
@@ -185,6 +189,8 @@ static void xe_eudebug_free(struct kref *ref)
while (kfifo_get(&d->events.fifo, &event))
kfree(event);
+ xe_eudebug_pagefault_fini(d);
+
xe_eudebug_destroy_resources(d);
XE_WARN_ON(d->target.xef);
@@ -383,7 +389,7 @@ static int _xe_eudebug_disconnect(struct xe_eudebug *d,
} \
})
-static struct xe_eudebug *
+struct xe_eudebug *
xe_eudebug_get_nolock(struct xe_file *xef)
{
struct xe_eudebug *d;
@@ -1793,10 +1799,6 @@ static int xe_eudebug_handle_gt_attention(struct xe_gt *gt)
{
int ret;
- ret = xe_gt_eu_threads_needing_attention(gt);
- if (ret <= 0)
- return ret;
-
ret = xe_send_gt_attention(gt);
/* Discovery in progress, fake it */
@@ -1806,6 +1808,65 @@ static int xe_eudebug_handle_gt_attention(struct xe_gt *gt)
return ret;
}
+int xe_eudebug_send_pagefault_event(struct xe_eudebug *d,
+ struct xe_eudebug_pagefault *pf)
+{
+ struct drm_xe_eudebug_event_pagefault *ep;
+ struct drm_xe_eudebug_event *event;
+ int h_queue, h_lrc;
+ u32 size = xe_gt_eu_attention_bitmap_size(pf->q->gt) * 3;
+ u32 sz = struct_size(ep, bitmask, size);
+ int ret;
+
+ XE_WARN_ON(pf->lrc_idx < 0 || pf->lrc_idx >= pf->q->width);
+
+ XE_WARN_ON(!xe_exec_queue_is_debuggable(pf->q));
+
+ h_queue = find_handle(d->res, XE_EUDEBUG_RES_TYPE_EXEC_QUEUE, pf->q);
+ if (h_queue < 0)
+ return h_queue;
+
+ h_lrc = find_handle(d->res, XE_EUDEBUG_RES_TYPE_LRC, pf->q->lrc[pf->lrc_idx]);
+ if (h_lrc < 0)
+ return h_lrc;
+
+ event = xe_eudebug_create_event(d, DRM_XE_EUDEBUG_EVENT_PAGEFAULT, 0,
+ DRM_XE_EUDEBUG_EVENT_STATE_CHANGE, sz);
+
+ if (!event)
+ return -ENOSPC;
+
+ ep = cast_event(ep, event);
+ ep->exec_queue_handle = h_queue;
+ ep->lrc_handle = h_lrc;
+ ep->bitmask_size = size;
+ ep->pagefault_address = pf->fault.addr;
+
+ memcpy(ep->bitmask, pf->attentions.before.att, pf->attentions.before.size);
+ memcpy(ep->bitmask + pf->attentions.before.size,
+ pf->attentions.after.att, pf->attentions.after.size);
+ memcpy(ep->bitmask + pf->attentions.before.size + pf->attentions.after.size,
+ pf->attentions.resolved.att, pf->attentions.resolved.size);
+
+ event->seqno = atomic_long_inc_return(&d->events.seqno);
+
+ ret = xe_eudebug_queue_event(d, event);
+ if (ret)
+ xe_eudebug_disconnect(d, ret);
+
+ return ret;
+}
+
+static void handle_attention_fail(struct xe_gt *gt, int gt_id, int ret)
+{
+ /* TODO: error capture */
+ drm_info(>_to_xe(gt)->drm,
+ "gt:%d unable to handle eu attention ret = %d\n",
+ gt_id, ret);
+
+ xe_gt_reset_async(gt);
+}
+
static void attention_poll_work(struct work_struct *work)
{
struct xe_device *xe = container_of(work, typeof(*xe),
@@ -1828,15 +1889,15 @@ static void attention_poll_work(struct work_struct *work)
if (gt->info.type != XE_GT_TYPE_MAIN)
continue;
- ret = xe_eudebug_handle_gt_attention(gt);
- if (ret) {
- /* TODO: error capture */
- drm_info(>_to_xe(gt)->drm,
- "gt:%d unable to handle eu attention ret=%d\n",
- gt_id, ret);
+ if (!xe_gt_eu_threads_needing_attention(gt))
+ continue;
+
+ ret = xe_eudebug_handle_pagefaults(gt);
+ if (!ret)
+ ret = xe_eudebug_handle_gt_attention(gt);
- xe_gt_reset_async(gt);
- }
+ if (ret)
+ handle_attention_fail(gt, gt_id, ret);
}
xe_pm_runtime_put(xe);
@@ -1845,12 +1906,12 @@ static void attention_poll_work(struct work_struct *work)
schedule_delayed_work(&xe->eudebug.attention_dwork, delay);
}
-static void attention_poll_stop(struct xe_device *xe)
+void xe_eudebug_attention_poll_stop(struct xe_device *xe)
{
cancel_delayed_work_sync(&xe->eudebug.attention_dwork);
}
-static void attention_poll_start(struct xe_device *xe)
+void xe_eudebug_attention_poll_start(struct xe_device *xe)
{
mod_delayed_work(system_wq, &xe->eudebug.attention_dwork, 0);
}
@@ -1893,6 +1954,8 @@ xe_eudebug_connect(struct xe_device *xe,
kref_init(&d->ref);
spin_lock_init(&d->target.lock);
+ mutex_init(&d->pf_lock);
+ INIT_LIST_HEAD(&d->pagefaults);
init_waitqueue_head(&d->events.write_done);
init_waitqueue_head(&d->events.read_done);
init_completion(&d->discovery);
@@ -1926,7 +1989,7 @@ xe_eudebug_connect(struct xe_device *xe,
kref_get(&d->ref);
queue_work(xe->eudebug.wq, &d->discovery_work);
- attention_poll_start(xe);
+ xe_eudebug_attention_poll_start(xe);
eu_dbg(d, "connected session %lld", d->session);
@@ -2005,9 +2068,9 @@ int xe_eudebug_enable(struct xe_device *xe, bool enable)
mutex_unlock(&xe->eudebug.lock);
if (enable) {
- attention_poll_start(xe);
+ xe_eudebug_attention_poll_start(xe);
} else {
- attention_poll_stop(xe);
+ xe_eudebug_attention_poll_stop(xe);
if (IS_SRIOV_PF(xe))
xe_sriov_pf_end_lockdown(xe);
@@ -2060,7 +2123,7 @@ static void xe_eudebug_fini(struct drm_device *dev, void *__unused)
xe_assert(xe, list_empty(&xe->eudebug.targets));
- attention_poll_stop(xe);
+ xe_eudebug_attention_poll_stop(xe);
}
void xe_eudebug_init(struct xe_device *xe)
diff --git a/drivers/gpu/drm/xe/xe_eudebug.h b/drivers/gpu/drm/xe/xe_eudebug.h
index bd9fd7bf454f..34938e87be13 100644
--- a/drivers/gpu/drm/xe/xe_eudebug.h
+++ b/drivers/gpu/drm/xe/xe_eudebug.h
@@ -13,12 +13,14 @@ struct drm_file;
struct xe_debug_data;
struct xe_device;
struct xe_file;
+struct xe_gt;
struct xe_vm;
struct xe_vma;
struct xe_vma_ops;
struct xe_exec_queue;
struct xe_user_fence;
struct xe_eudebug;
+struct xe_eudebug_pagefault;
#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
@@ -72,8 +74,15 @@ void xe_eudebug_ufence_init(struct xe_user_fence *ufence);
void xe_eudebug_ufence_fini(struct xe_user_fence *ufence);
bool xe_eudebug_ufence_track(struct xe_user_fence *ufence);
+struct xe_eudebug *xe_eudebug_get_nolock(struct xe_file *xef);
void xe_eudebug_put(struct xe_eudebug *d);
+int xe_eudebug_send_pagefault_event(struct xe_eudebug *d,
+ struct xe_eudebug_pagefault *pf);
+
+void xe_eudebug_attention_poll_stop(struct xe_device *xe);
+void xe_eudebug_attention_poll_start(struct xe_device *xe);
+
#else
static inline int xe_eudebug_connect_ioctl(struct drm_device *dev,
diff --git a/drivers/gpu/drm/xe/xe_eudebug_hw.c b/drivers/gpu/drm/xe/xe_eudebug_hw.c
index 236740ef10ba..e8e47987c9f9 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_hw.c
+++ b/drivers/gpu/drm/xe/xe_eudebug_hw.c
@@ -322,6 +322,7 @@ static int do_eu_control(struct xe_eudebug *d,
struct xe_device *xe = d->xe;
u8 *bits = NULL;
unsigned int hw_attn_size, attn_size;
+ struct dma_fence *pf_fence;
struct xe_exec_queue *q;
struct xe_lrc *lrc;
u64 seqno;
@@ -373,8 +374,20 @@ static int do_eu_control(struct xe_eudebug *d,
goto out_free;
}
- ret = -EINVAL;
mutex_lock(&d->hw.lock);
+ do {
+ pf_fence = dma_fence_get(d->pf_fence);
+ if (pf_fence) {
+ mutex_unlock(&d->hw.lock);
+ ret = dma_fence_wait(pf_fence, true);
+ dma_fence_put(pf_fence);
+ if (ret)
+ goto out_free;
+ mutex_lock(&d->hw.lock);
+ }
+ } while (pf_fence);
+
+ ret = -EINVAL;
switch (arg->cmd) {
case DRM_XE_EUDEBUG_EU_CONTROL_CMD_INTERRUPT_ALL:
diff --git a/drivers/gpu/drm/xe/xe_eudebug_pagefault.c b/drivers/gpu/drm/xe/xe_eudebug_pagefault.c
new file mode 100644
index 000000000000..f139435cad33
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug_pagefault.c
@@ -0,0 +1,441 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#include "xe_eudebug_pagefault.h"
+
+#include <linux/delay.h>
+
+#include "xe_exec_queue.h"
+#include "xe_eudebug.h"
+#include "xe_eudebug_hw.h"
+#include "xe_force_wake.h"
+#include "xe_gt_debug.h"
+#include "xe_gt_mcr.h"
+#include "regs/xe_gt_regs.h"
+#include "xe_vm.h"
+
+static struct xe_gt *
+pf_to_gt(struct xe_eudebug_pagefault *pf)
+{
+ return pf->q->gt;
+}
+
+static void destroy_pagefault(struct xe_eudebug_pagefault *pf)
+{
+ xe_exec_queue_put(pf->q);
+ kfree(pf);
+}
+
+static int queue_pagefault(struct xe_eudebug_pagefault *pf)
+{
+ struct xe_eudebug *d;
+
+ d = xe_eudebug_get_nolock(pf->q->vm->xef);
+ if (!d)
+ return -EINVAL;
+
+ mutex_lock(&d->pf_lock);
+ list_add_tail(&pf->link, &d->pagefaults);
+ mutex_unlock(&d->pf_lock);
+
+ xe_eudebug_put(d);
+
+ return 0;
+}
+
+static int send_pagefault(struct xe_eudebug_pagefault *pf,
+ bool from_attention_scan)
+{
+ struct xe_gt *gt = pf_to_gt(pf);
+ struct xe_eudebug *d;
+ struct xe_exec_queue *q;
+ int ret, lrc_idx;
+
+ q = xe_gt_runalone_active_queue_get(gt, &lrc_idx);
+ if (IS_ERR(q))
+ return PTR_ERR(q);
+
+ if (!xe_exec_queue_is_debuggable(q)) {
+ ret = -EPERM;
+ goto out_exec_queue_put;
+ }
+
+ d = xe_eudebug_get_nolock(q->vm->xef);
+ if (!d) {
+ ret = -ENOTCONN;
+ goto out_exec_queue_put;
+ }
+
+ if (pf->deferred_resolved) {
+ xe_gt_eu_attentions_read(gt, &pf->attentions.resolved,
+ XE_GT_ATTENTION_TIMEOUT_MS);
+
+ if (!xe_eu_attentions_xor_count(&pf->attentions.after,
+ &pf->attentions.resolved) &&
+ !from_attention_scan) {
+ eu_dbg(d, "xe attentions not yet updated\n");
+ ret = -EBUSY;
+ goto out_eudebug_put;
+ }
+ }
+
+ ret = xe_eudebug_send_pagefault_event(d, pf);
+
+out_eudebug_put:
+ xe_eudebug_put(d);
+out_exec_queue_put:
+ xe_exec_queue_put(q);
+
+ return ret;
+}
+
+static const char *
+pagefault_get_driver_name(struct dma_fence *dma_fence)
+{
+ return "xe";
+}
+
+static const char *
+pagefault_fence_get_timeline_name(struct dma_fence *dma_fence)
+{
+ return "eudebug_pagefault_fence";
+}
+
+static const struct dma_fence_ops pagefault_fence_ops = {
+ .get_driver_name = pagefault_get_driver_name,
+ .get_timeline_name = pagefault_fence_get_timeline_name,
+};
+
+struct pagefault_fence {
+ struct dma_fence base;
+ spinlock_t lock;
+};
+
+static struct pagefault_fence *pagefault_fence_create(void)
+{
+ struct pagefault_fence *fence;
+
+ fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+ if (fence == NULL)
+ return NULL;
+
+ spin_lock_init(&fence->lock);
+ dma_fence_init(&fence->base, &pagefault_fence_ops, &fence->lock,
+ dma_fence_context_alloc(1), 1);
+
+ return fence;
+}
+
+void
+xe_eudebug_pagefault_create(struct xe_vm *vm, struct xe_pagefault *pf)
+{
+ struct pagefault_fence *pf_fence;
+ struct xe_eudebug_pagefault *epf;
+ struct xe_vma *vma;
+ struct xe_gt *gt = pf->gt;
+ struct xe_exec_queue *q;
+ struct dma_fence *fence;
+ struct xe_eudebug *d;
+ unsigned int fw_ref;
+ int lrc_idx;
+ u32 td_ctl;
+
+ pf->consumer.epf = NULL;
+
+ down_read(&vm->lock);
+ vma = xe_vm_find_vma_by_addr(vm, pf->consumer.page_addr);
+ up_read(&vm->lock);
+
+ if (vma)
+ return;
+
+ d = xe_eudebug_get_nolock(vm->xef);
+ if (!d)
+ return;
+
+ q = xe_gt_runalone_active_queue_get(gt, &lrc_idx);
+ if (IS_ERR(q))
+ goto err_put_eudebug;
+
+ if (XE_WARN_ON(q->vm != vm))
+ goto err_put_exec_queue;
+
+ if (!xe_exec_queue_is_debuggable(q))
+ goto err_put_exec_queue;
+
+ fw_ref = xe_force_wake_get(gt_to_fw(gt), q->hwe->domain);
+ if (!fw_ref)
+ goto err_put_exec_queue;
+
+ /*
+ * If there is no debug functionality (TD_CTL_GLOBAL_DEBUG_ENABLE, etc.),
+ * don't proceed pagefault routine for eu debugger.
+ */
+ td_ctl = xe_gt_mcr_unicast_read_any(gt, TD_CTL);
+ if (!td_ctl)
+ goto err_put_fw;
+
+ epf = kzalloc(sizeof(*epf), GFP_KERNEL);
+ if (!epf)
+ goto err_put_fw;
+
+ xe_eudebug_attention_poll_stop(gt_to_xe(gt));
+
+ mutex_lock(&d->hw.lock);
+ fence = dma_fence_get(d->pf_fence);
+
+ if (fence) {
+ /*
+ * TODO: If the new incoming pagefaulted address is different
+ * from the pagefaulted address it is currently handling on the
+ * same ASID, it needs a routine to wait here and then do the
+ * following pagefault.
+ */
+ dma_fence_put(fence);
+ goto err_unlock_hw_lock;
+ }
+
+ pf_fence = pagefault_fence_create();
+ if (!pf_fence)
+ goto err_unlock_hw_lock;
+
+ d->pf_fence = &pf_fence->base;
+
+ INIT_LIST_HEAD(&epf->link);
+
+ xe_gt_eu_attentions_read(gt, &epf->attentions.before, 0);
+
+ if (td_ctl & TD_CTL_FORCE_EXCEPTION)
+ eu_warn(d, "force exception already set!");
+
+ /* Halt regardless of thread dependencies */
+ while (!(td_ctl & TD_CTL_FORCE_EXCEPTION)) {
+ xe_gt_mcr_multicast_write(gt, TD_CTL,
+ td_ctl | TD_CTL_FORCE_EXCEPTION);
+ udelay(200);
+ td_ctl = xe_gt_mcr_unicast_read_any(gt, TD_CTL);
+ }
+
+ xe_gt_eu_attentions_read(gt, &epf->attentions.after,
+ XE_GT_ATTENTION_TIMEOUT_MS);
+
+ mutex_unlock(&d->hw.lock);
+
+ /*
+ * xe_exec_queue_put() will be called from xe_eudebug_pagefault_destroy()
+ * or handle_pagefault()
+ */
+ epf->q = q;
+ epf->lrc_idx = lrc_idx;
+ epf->fault.addr = pf->consumer.page_addr;
+ epf->fault.type = pf->consumer.fault_type;
+ epf->fault.level = pf->consumer.fault_level;
+ epf->fault.access = pf->consumer.access_type;
+
+ pf->consumer.epf = epf;
+
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_eudebug_put(d);
+
+ return;
+
+err_unlock_hw_lock:
+ mutex_unlock(&d->hw.lock);
+ xe_eudebug_attention_poll_start(gt_to_xe(gt));
+ kfree(epf);
+err_put_fw:
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+err_put_exec_queue:
+ xe_exec_queue_put(q);
+err_put_eudebug:
+ xe_eudebug_put(d);
+}
+
+struct xe_vma *xe_eudebug_create_vma(struct xe_vm *vm, struct xe_pagefault *pf)
+{
+ struct xe_vma *vma = NULL;
+
+ if (!pf->consumer.epf)
+ return NULL;
+
+ vma = xe_vm_create_null_vma(vm, pf->consumer.page_addr);
+ if (IS_ERR(vma))
+ return vma;
+
+ pf->consumer.epf->is_null = true;
+
+ return vma;
+}
+
+static void
+xe_eudebug_pagefault_process(struct xe_eudebug_pagefault *pf)
+{
+ struct xe_gt *gt = pf->q->gt;
+
+ xe_gt_eu_attentions_read(gt, &pf->attentions.resolved,
+ XE_GT_ATTENTION_TIMEOUT_MS);
+
+ if (!xe_eu_attentions_xor_count(&pf->attentions.after,
+ &pf->attentions.resolved))
+ pf->deferred_resolved = true;
+}
+
+static void
+_xe_eudebug_pagefault_destroy(struct xe_eudebug_pagefault *pf)
+{
+ struct xe_gt *gt = pf->q->gt;
+ struct xe_vm *vm = pf->q->vm;
+ struct xe_eudebug *d;
+ unsigned int fw_ref;
+ u32 td_ctl;
+ bool queued, try_send;
+ int ret;
+
+ fw_ref = xe_force_wake_get(gt_to_fw(gt), pf->q->hwe->domain);
+ if (!fw_ref) {
+ struct xe_device *xe = gt_to_xe(gt);
+
+ drm_warn(&xe->drm, "Forcewake fail: Can not recover TD_CTL");
+ } else {
+ td_ctl = xe_gt_mcr_unicast_read_any(gt, TD_CTL);
+ xe_gt_mcr_multicast_write(gt, TD_CTL, td_ctl &
+ ~(TD_CTL_FORCE_EXCEPTION));
+ xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ }
+
+ queued = false;
+ try_send = pf->is_null;
+ if (try_send) {
+ ret = send_pagefault(pf, false);
+
+ /*
+ * if debugger discovery is not completed or resolved attentions are not
+ * updated, then queue pagefault
+ */
+ if (ret == -EBUSY) {
+ ret = queue_pagefault(pf);
+ if (!ret)
+ queued = true;
+ }
+ }
+
+ d = xe_eudebug_get_nolock(vm->xef);
+ if (d) {
+ struct dma_fence *f;
+
+ mutex_lock(&d->hw.lock);
+ f = d->pf_fence;
+ d->pf_fence = NULL;
+ mutex_unlock(&d->hw.lock);
+
+ if (f) {
+ if (!queued)
+ dma_fence_signal(f);
+
+ dma_fence_put(f);
+ }
+
+ xe_eudebug_put(d);
+ }
+
+ if (!queued)
+ destroy_pagefault(pf);
+
+ xe_eudebug_attention_poll_start(gt_to_xe(gt));
+}
+
+static int send_queued_pagefaults(struct xe_eudebug *d)
+{
+ struct xe_eudebug_pagefault *pf, *pf_temp;
+ int ret = 0;
+
+ mutex_lock(&d->pf_lock);
+ list_for_each_entry_safe(pf, pf_temp, &d->pagefaults, link) {
+ ret = send_pagefault(pf, true);
+
+ /* if resolved attentions are not updated */
+ if (ret == -EBUSY)
+ break;
+
+ list_del(&pf->link);
+
+ destroy_pagefault(pf);
+
+ if (ret)
+ break;
+ }
+ mutex_unlock(&d->pf_lock);
+
+ return ret;
+}
+
+int xe_eudebug_handle_pagefaults(struct xe_gt *gt)
+{
+ struct xe_exec_queue *q;
+ struct xe_eudebug *d;
+ int ret, lrc_idx;
+
+ q = xe_gt_runalone_active_queue_get(gt, &lrc_idx);
+ if (IS_ERR(q))
+ return PTR_ERR(q);
+
+ if (!xe_exec_queue_is_debuggable(q)) {
+ ret = -EPERM;
+ goto out_exec_queue_put;
+ }
+
+ d = xe_eudebug_get_nolock(q->vm->xef);
+ if (!d) {
+ ret = -ENOTCONN;
+ goto out_exec_queue_put;
+ }
+
+ ret = send_queued_pagefaults(d);
+
+ xe_eudebug_put(d);
+
+out_exec_queue_put:
+ xe_exec_queue_put(q);
+
+ return ret;
+}
+
+void xe_eudebug_pagefault_service(struct xe_pagefault *pf)
+{
+ struct xe_eudebug_pagefault *f = pf->consumer.epf;
+
+ if (!f)
+ return;
+
+ if (f->is_null)
+ xe_eudebug_pagefault_process(f);
+}
+
+void xe_eudebug_pagefault_destroy(struct xe_pagefault *pf, int err)
+{
+ struct xe_eudebug_pagefault *f = pf->consumer.epf;
+
+ if (!f)
+ return;
+
+ if (err)
+ f->is_null = false;
+
+ _xe_eudebug_pagefault_destroy(f);
+}
+
+void xe_eudebug_pagefault_fini(struct xe_eudebug *d)
+{
+ struct xe_eudebug_pagefault *pf, *pf_temp;
+
+	/* Since this is the last reference, there is no race here. */
+
+ list_for_each_entry_safe(pf, pf_temp, &d->pagefaults, link) {
+ list_del(&pf->link);
+ destroy_pagefault(pf);
+ }
+
+ XE_WARN_ON(d->pf_fence);
+}
diff --git a/drivers/gpu/drm/xe/xe_eudebug_pagefault.h b/drivers/gpu/drm/xe/xe_eudebug_pagefault.h
new file mode 100644
index 000000000000..1ba20beac3cf
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_eudebug_pagefault.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2023-2025 Intel Corporation
+ */
+
+#ifndef _XE_EUDEBUG_PAGEFAULT_H_
+#define _XE_EUDEBUG_PAGEFAULT_H_
+
+#include <linux/types.h>
+
+struct xe_eudebug;
+struct xe_gt;
+struct xe_pagefault;
+struct xe_eudebug_pagefault;
+struct xe_vm;
+
+void xe_eudebug_pagefault_fini(struct xe_eudebug *d);
+int xe_eudebug_handle_pagefaults(struct xe_gt *gt);
+
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+void xe_eudebug_pagefault_create(struct xe_vm *vm, struct xe_pagefault *pf);
+struct xe_vma *xe_eudebug_create_vma(struct xe_vm *vm, struct xe_pagefault *pf);
+void xe_eudebug_pagefault_service(struct xe_pagefault *pf);
+void xe_eudebug_pagefault_destroy(struct xe_pagefault *pf, int err);
+#else
+
+static inline void
+xe_eudebug_pagefault_create(struct xe_vm *vm, struct xe_pagefault *pf)
+{
+}
+
+static inline struct xe_vma *xe_eudebug_create_vma(struct xe_vm *vm, struct xe_pagefault *pf)
+{
+ return NULL;
+}
+
+static inline void xe_eudebug_pagefault_service(struct xe_pagefault *pf)
+{
+}
+
+static inline void xe_eudebug_pagefault_destroy(struct xe_pagefault *pf, int err)
+{
+}
+
+#endif
+
+#endif /* _XE_EUDEBUG_PAGEFAULT_H_ */
diff --git a/drivers/gpu/drm/xe/xe_eudebug_types.h b/drivers/gpu/drm/xe/xe_eudebug_types.h
index 85fc321f8b0e..39cb70058994 100644
--- a/drivers/gpu/drm/xe/xe_eudebug_types.h
+++ b/drivers/gpu/drm/xe/xe_eudebug_types.h
@@ -15,6 +15,8 @@
#include <linux/wait.h>
#include <linux/xarray.h>
+#include "xe_gt_debug_types.h"
+
struct xe_device;
struct task_struct;
struct xe_eudebug;
@@ -37,7 +39,7 @@ enum xe_eudebug_state {
};
#define CONFIG_DRM_XE_DEBUGGER_EVENT_QUEUE_SIZE 64
-#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_EU_ATTENTION
+#define XE_EUDEBUG_MAX_EVENT_TYPE DRM_XE_EUDEBUG_EVENT_PAGEFAULT
/**
* struct xe_eudebug_handle - eudebug resource handle
@@ -169,6 +171,71 @@ struct xe_eudebug {
/** @ops operations for eu_control */
struct xe_eudebug_eu_control_ops *ops;
+
+	/** @pf_lock: guards access to the @pagefaults list */
+ struct mutex pf_lock;
+ /** @pagefaults: xe_eudebug_pagefault list for pagefault event queuing */
+ struct list_head pagefaults;
+ /**
+	 * @pf_fence: fence blocking EU operations (EU thread control and
+	 * attention) while page faults are being handled; protected by hw.lock.
+ */
+ struct dma_fence *pf_fence;
+};
+
+/**
+ * struct xe_eudebug_pagefault - eudebug structure for queuing a pagefault event
+ */
+struct xe_eudebug_pagefault {
+	/** @link: link into the xe_eudebug.pagefaults list */
+ struct list_head link;
+	/** @q: exec_queue which raised the pagefault */
+	struct xe_exec_queue *q;
+	/** @lrc_idx: lrc index of the workload which raised the pagefault */
+	int lrc_idx;
+
+	/** @fault: raw partial pagefault data passed from the GuC */
+	struct {
+		/** @addr: ppgtt address where the pagefault occurred */
+		u64 addr;
+		/** @type: fault type */
+		int type;
+		/** @level: page-table level at which the fault occurred */
+		int level;
+		/** @access: access type that caused the fault */
+		int access;
+	} fault;
+
+	/** @attentions: snapshots of the attention bits around page fault WA processing */
+	struct {
+		/** @before: state of attention bits before page fault WA processing */
+ struct xe_eu_attentions before;
+ /**
+ * @after: status of attention bits during page fault WA processing.
+ * It includes eu threads where attention bits are turned on for
+ * reasons other than page fault WA (breakpoint, interrupt, etc.).
+ */
+ struct xe_eu_attentions after;
+ /**
+		 * @resolved: state of the attention bits after page fault WA
+		 * processing. It includes the EU thread that caused the page
+		 * fault. To determine the EU thread that caused the page
+		 * fault, XOR attentions.after with attentions.resolved.
+ */
+ struct xe_eu_attentions resolved;
+ } attentions;
+
+ /**
+	 * @deferred_resolved: set when the EU thread fails to raise its
+	 * attention bits within a certain time after page fault WA
+	 * processing, so that attentions.resolved is updated again once
+	 * the attention bits are ready.
+ */
+ bool deferred_resolved;
+
+ /**
+	 * @is_null: marks whether this fault was resolved with a null vma.
+	 * The vma lookup is done in two phases, and the eudebug pagefault
+	 * struct needs to be allocated a priori, before it is known whether
+	 * a null vma is needed. The state is kept here so that processing
+	 * and teardown know which type of fault created this eudebug
+	 * pagefault.
+ */
+ bool is_null;
};
#endif /* _XE_EUDEBUG_TYPES_H_ */
diff --git a/drivers/gpu/drm/xe/xe_pagefault_types.h b/drivers/gpu/drm/xe/xe_pagefault_types.h
index d3b516407d60..c89d7fb698e0 100644
--- a/drivers/gpu/drm/xe/xe_pagefault_types.h
+++ b/drivers/gpu/drm/xe/xe_pagefault_types.h
@@ -10,6 +10,7 @@
struct xe_gt;
struct xe_pagefault;
+struct xe_eudebug_pagefault;
/** enum xe_pagefault_access_type - Xe page fault access type */
enum xe_pagefault_access_type {
@@ -84,6 +85,9 @@ struct xe_pagefault {
u8 engine_class;
/** @consumer.engine_instance: engine instance */
u8 engine_instance;
+#if IS_ENABLED(CONFIG_DRM_XE_EUDEBUG)
+		/** @consumer.epf: eudebug pagefault state, if any */
+		struct xe_eudebug_pagefault *epf;
+#endif
/** consumer.reserved: reserved bits for future expansion */
u8 reserved[7];
} consumer;
diff --git a/include/uapi/drm/xe_drm_eudebug.h b/include/uapi/drm/xe_drm_eudebug.h
index 9f558e56b577..a90077eac23d 100644
--- a/include/uapi/drm/xe_drm_eudebug.h
+++ b/include/uapi/drm/xe_drm_eudebug.h
@@ -56,6 +56,7 @@ struct drm_xe_eudebug_event {
#define DRM_XE_EUDEBUG_EVENT_VM_BIND_OP_DEBUG_DATA 5
#define DRM_XE_EUDEBUG_EVENT_VM_BIND_UFENCE 6
#define DRM_XE_EUDEBUG_EVENT_EU_ATTENTION 7
+#define DRM_XE_EUDEBUG_EVENT_PAGEFAULT 8
__u16 flags;
#define DRM_XE_EUDEBUG_EVENT_CREATE (1 << 0)
@@ -208,6 +209,17 @@ struct drm_xe_eudebug_event_eu_attention {
__u8 bitmask[];
};
+struct drm_xe_eudebug_event_pagefault {
+ struct drm_xe_eudebug_event base;
+
+ __u64 exec_queue_handle;
+ __u64 lrc_handle;
+ __u32 flags;
+ __u32 bitmask_size;
+ __u64 pagefault_address;
+ __u8 bitmask[];
+};
+
#if defined(__cplusplus)
}
#endif
--
2.43.0
* [PATCH 20/20] drm/xe/eudebug: Enable EU pagefault handling
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (18 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 19/20] drm/xe/eudebug: Introduce EU pagefault handling interface Mika Kuoppala
@ 2025-12-02 13:52 ` Mika Kuoppala
2025-12-02 14:02 ` ✗ CI.checkpatch: warning for Intel Xe GPU Debug Support (eudebug) v6 Patchwork
` (5 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Mika Kuoppala @ 2025-12-02 13:52 UTC (permalink / raw)
To: intel-xe
Cc: simona.vetter, matthew.brost, christian.koenig, thomas.hellstrom,
joonas.lahtinen, christoph.manszewski, rodrigo.vivi,
andrzej.hajda, matthew.auld, maciej.patelczyk, gwan-gyeong.mun,
Mika Kuoppala
From: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Xe2 (and PVC) hardware has a limitation whereby a pagefault caused by an
invalid access halts the corresponding EUs. To work around this, enable
the EU pagefault handling functionality, which allows pagefaulted EU
threads to be unhalted and lets the EU debugger be informed about the
attention state of EU threads during execution.
If a pagefault occurs, send the DRM_XE_EUDEBUG_EVENT_PAGEFAULT event
after handling the pagefault.
The pagefault handling is a mechanism that allows a stalled EU thread to
enter SIP mode by installing a temporary null page in the page table entry
where the pagefault happened.
A brief description of the page fault handling flow between the KMD and
the EU thread is as follows:
(1) an EU thread accesses an unallocated address
(2) a pagefault happens and the EU thread stalls
(3) Xe KMD sets a force EU thread exception to allow the running EU
    threads to enter SIP mode (KMD sets the ForceException /
    ForceExternalHalt bits of the TD_CTL register);
    non-stalled (non-pagefaulted) EU threads enter SIP mode
(4) Xe KMD installs a temporary null page in the page table entry of the
    address where the pagefault happened
(5) Xe KMD replies with a pagefault-successful message to the GuC
(6) the stalled EU thread resumes as the pagefault condition has been
    resolved
(7) the resumed EU thread enters SIP mode due to the force exception set
    in (3)
(8) adapted to consumer/producer pagefaults
As this feature is designed to work only when eudebug is enabled, it
should have no impact on the regular recoverable pagefault code path.
v2: - pf->q holds the vm ref so drop it (Mika)
- streamline uapi (Mika)
- cleanup the pagefault through producer if (Mika)
Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
---
drivers/gpu/drm/xe/xe_guc_pagefault.c | 8 +++++++
drivers/gpu/drm/xe/xe_pagefault.c | 31 ++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_pagefault_types.h | 9 +++++++
3 files changed, 47 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_pagefault.c b/drivers/gpu/drm/xe/xe_guc_pagefault.c
index 719a18187a31..cd41023ebef9 100644
--- a/drivers/gpu/drm/xe/xe_guc_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_guc_pagefault.c
@@ -8,6 +8,7 @@
#include "xe_guc_ct.h"
#include "xe_guc_pagefault.h"
#include "xe_pagefault.h"
+#include "xe_eudebug_pagefault.h"
static void guc_ack_fault(struct xe_pagefault *pf, int err)
{
@@ -36,8 +37,15 @@ static void guc_ack_fault(struct xe_pagefault *pf, int err)
xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action), 0, 0);
}
+static void guc_cleanup_fault(struct xe_pagefault *pf, int err)
+{
+ xe_eudebug_pagefault_service(pf);
+ xe_eudebug_pagefault_destroy(pf, 0);
+}
+
static const struct xe_pagefault_ops guc_pagefault_ops = {
.ack_fault = guc_ack_fault,
+ .cleanup_fault = guc_cleanup_fault,
};
/**
diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index afb06598b6e1..369749641f37 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -10,6 +10,7 @@
#include "xe_bo.h"
#include "xe_device.h"
+#include "xe_eudebug_pagefault.h"
#include "xe_gt_printk.h"
#include "xe_gt_types.h"
#include "xe_gt_stats.h"
@@ -171,6 +172,8 @@ static int xe_pagefault_service(struct xe_pagefault *pf)
if (IS_ERR(vm))
return PTR_ERR(vm);
+ xe_eudebug_pagefault_create(vm, pf);
+
/*
* TODO: Change to read lock? Using write lock for simplicity.
*/
@@ -184,9 +187,28 @@ static int xe_pagefault_service(struct xe_pagefault *pf)
vma = xe_vm_find_vma_by_addr(vm, pf->consumer.page_addr);
if (!vma) {
err = -EINVAL;
- goto unlock_vm;
+ vma = xe_eudebug_create_vma(vm, pf);
+ if (IS_ERR(vma)) {
+ err = PTR_ERR(vma);
+ vma = NULL;
+ }
}
+ if (vma) {
+		/*
+		 * When the eudebug_pagefault instance was created there was no
+		 * vma containing the ppgtt address where the pagefault
+		 * occurred, but now that vm->lock has been reacquired there
+		 * is: while this context was not holding vm->lock, another
+		 * context allocated a vma covering the faulting address.
+		 */
+ err = 0;
+ }
+
+ if (err)
+ goto unlock_vm;
+
atomic = xe_pagefault_access_is_atomic(pf->consumer.access_type);
if (xe_vma_is_cpu_addr_mirror(vma))
@@ -198,6 +220,10 @@ static int xe_pagefault_service(struct xe_pagefault *pf)
unlock_vm:
if (!err)
vm->usm.last_fault_vma = vma;
+
+ if (err)
+ xe_eudebug_pagefault_destroy(pf, err);
+
up_write(&vm->lock);
xe_vm_put(vm);
@@ -266,6 +292,9 @@ static void xe_pagefault_queue_work(struct work_struct *w)
pf.producer.ops->ack_fault(&pf, err);
+ if (pf.producer.ops->cleanup_fault)
+ pf.producer.ops->cleanup_fault(&pf, err);
+
if (time_after(jiffies, threshold)) {
queue_work(gt_to_xe(pf.gt)->usm.pf_wq, w);
break;
diff --git a/drivers/gpu/drm/xe/xe_pagefault_types.h b/drivers/gpu/drm/xe/xe_pagefault_types.h
index c89d7fb698e0..ce82e39015ae 100644
--- a/drivers/gpu/drm/xe/xe_pagefault_types.h
+++ b/drivers/gpu/drm/xe/xe_pagefault_types.h
@@ -43,6 +43,15 @@ struct xe_pagefault_ops {
* sends the result to the HW/FW interface.
*/
void (*ack_fault)(struct xe_pagefault *pf, int err);
+
+ /**
+	 * @cleanup_fault: Optional cleanup hook for the producer
+	 * @pf: Page fault
+	 * @err: Error state of the fault
+	 *
+	 * Called by the page fault consumer to let the producer clean up
+	 * after the fault has been handled.
+ */
+ void (*cleanup_fault)(struct xe_pagefault *pf, int err);
};
/**
--
2.43.0
* ✗ CI.checkpatch: warning for Intel Xe GPU Debug Support (eudebug) v6
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (19 preceding siblings ...)
2025-12-02 13:52 ` [PATCH 20/20] drm/xe/eudebug: Enable EU pagefault handling Mika Kuoppala
@ 2025-12-02 14:02 ` Patchwork
2025-12-02 14:04 ` ✓ CI.KUnit: success " Patchwork
` (4 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2025-12-02 14:02 UTC (permalink / raw)
To: Mika Kuoppala; +Cc: intel-xe
== Series Details ==
Series: Intel Xe GPU Debug Support (eudebug) v6
URL : https://patchwork.freedesktop.org/series/158380/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
2de9a3901bc28757c7906b454717b64e2a214021
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 5ff32ed53ff251561457eca42f20fa0970378a67
Author: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Date: Tue Dec 2 15:52:39 2025 +0200
drm/xe/eudebug: Enable EU pagefault handling
The XE2 (and PVC) HW has a limitation that the pagefault due to invalid
access will halt the corresponding EUs. To solve this problem, enable
EU pagefault handling functionality, which allows to unhalt pagefaulted
eu threads and to EU debugger to get inform about the eu attentions state
of EU threads during execution.
If a pagefault occurs, send the DRM_XE_EUDEBUG_EVENT_PAGEFAULT event
after handling the pagefault.
The pagefault handling is a mechanism that allows a stalled EU thread to
enter SIP mode by installing a temporal null page to the page table entry
where the pagefault happened.
A brief description of the page fault handling mechanism flow between KMD
and the eu thread is as follows
(1) eu thread accesses unallocated address
(2) pagefault happens and eu thread stalls
(3) XE kmd set an force eu thread exception to allow the running eu thread
to enter SIP mode (kmd set ForceException / ForceExternalHalt bit of
TD_CTL register)
Not stalled (none-pagefaulted) eu threads enter SIP mode
(4) XE kmd installs temporal null page to the pagetable entry of the
address where pagefault happened.
(5) XE kmd replies pagefault successful message to GUC
(6) stalled eu thread resumes as per pagefault condition has resolved
(7) resumed eu thread enters SIP mode due to force exception set by (3)
(8) adapted to consumer/produced pagefaults
As designed this feature to only work when eudbug is enabled, it should
have no impact to regular recoverable pagefault code path.
v2: - pf->q holds the vm ref so drop it (Mika)
- streamline uapi (Mika)
- cleanup the pagefault through producer if (Mika)
Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
+ /mt/dim checkpatch 4ffeb1fd1362e2148a7ada498cbaef7b1de27867 drm-intel
b31fe0db8812 drm/xe/eudebug: Introduce eudebug interface
-:220: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#220:
new file mode 100644
-:475: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_d' - possible side-effects?
#475: FILE: drivers/gpu/drm/xe/xe_eudebug.c:251:
+#define xe_eudebug_disconnect(_d, _err) ({ \
+ if (_xe_eudebug_disconnect((_d), (_err))) { \
+ if ((_err) == 0 || (_err) == -ETIMEDOUT) \
+ eu_dbg((_d), "Session closed (%d)", (_err)); \
+ else \
+ eu_err((_d), "Session disconnected, err = %d (%s:%d)", \
+ (_err), __func__, __LINE__); \
+ } \
+})
-:475: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_err' - possible side-effects?
#475: FILE: drivers/gpu/drm/xe/xe_eudebug.c:251:
+#define xe_eudebug_disconnect(_d, _err) ({ \
+ if (_xe_eudebug_disconnect((_d), (_err))) { \
+ if ((_err) == 0 || (_err) == -ETIMEDOUT) \
+ eu_dbg((_d), "Session closed (%d)", (_err)); \
+ else \
+ eu_err((_d), "Session disconnected, err = %d (%s:%d)", \
+ (_err), __func__, __LINE__); \
+ } \
+})
-:827: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_d' - possible side-effects?
#827: FILE: drivers/gpu/drm/xe/xe_eudebug.c:603:
+#define xe_eudebug_event_put(_d, _err) ({ \
+ if ((_err)) \
+ xe_eudebug_disconnect((_d), (_err)); \
+ xe_eudebug_put((_d)); \
+ })
-:827: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_err' - possible side-effects?
#827: FILE: drivers/gpu/drm/xe/xe_eudebug.c:603:
+#define xe_eudebug_event_put(_d, _err) ({ \
+ if ((_err)) \
+ xe_eudebug_disconnect((_d), (_err)); \
+ xe_eudebug_put((_d)); \
+ })
-:1291: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#1291: FILE: drivers/gpu/drm/xe/xe_eudebug.h:20:
+#define XE_EUDEBUG_DBG_ARGS(d) (d)->session, \
+ atomic_long_read(&(d)->events.seqno), \
+ !READ_ONCE(d->target.xef) ? "disconnected" : "", \
+ current->pid, \
+ task_tgid_nr(current), \
+ READ_ONCE(d->target.xef) ? d->target.xef->pid : -1
BUT SEE:
do {} while (0) advice is over-stated in a few situations:
The more obvious case is macros, like MODULE_PARM_DESC, invoked at
file-scope, where C disallows code (it must be in functions). See
$exceptions if you have one to add by name.
More troublesome is declarative macros used at top of new scope,
like DECLARE_PER_CPU. These might just compile with a do-while-0
wrapper, but would be incorrect. Most of these are handled by
detecting struct,union,etc declaration primitives in $exceptions.
Theres also macros called inside an if (block), which "return" an
expression. These cannot do-while, and need a ({}) wrapper.
Enjoy this qualification while we work to improve our heuristics.
-:1291: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'd' - possible side-effects?
#1291: FILE: drivers/gpu/drm/xe/xe_eudebug.h:20:
+#define XE_EUDEBUG_DBG_ARGS(d) (d)->session, \
+ atomic_long_read(&(d)->events.seqno), \
+ !READ_ONCE(d->target.xef) ? "disconnected" : "", \
+ current->pid, \
+ task_tgid_nr(current), \
+ READ_ONCE(d->target.xef) ? d->target.xef->pid : -1
-:1298: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'd' - possible side-effects?
#1298: FILE: drivers/gpu/drm/xe/xe_eudebug.h:27:
+#define eu_err(d, fmt, ...) drm_err(&(d)->xe->drm, XE_EUDEBUG_DBG_STR # fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
-:1300: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'd' - possible side-effects?
#1300: FILE: drivers/gpu/drm/xe/xe_eudebug.h:29:
+#define eu_warn(d, fmt, ...) drm_warn(&(d)->xe->drm, XE_EUDEBUG_DBG_STR # fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
-:1302: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'd' - possible side-effects?
#1302: FILE: drivers/gpu/drm/xe/xe_eudebug.h:31:
+#define eu_dbg(d, fmt, ...) drm_dbg(&(d)->xe->drm, XE_EUDEBUG_DBG_STR # fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
-:1520: WARNING:LONG_LINE: line length of 130 exceeds 100 columns
#1520: FILE: include/uapi/drm/xe_drm.h:127:
+#define DRM_IOCTL_XE_EUDEBUG_CONNECT DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EUDEBUG_CONNECT, struct drm_xe_eudebug_connect)
total: 1 errors, 2 warnings, 8 checks, 1500 lines checked
7097827dbf33 drm/xe/eudebug: Introduce discovery for resources
1004518be923 drm/xe/eudebug: Introduce exec_queue events
697639d7f64b drm/xe: Add EUDEBUG_ENABLE exec queue property
362a0e16febf drm/xe/eudebug: Mark guc contexts as debuggable
59ec2b757842 drm/xe: Introduce ADD_DEBUG_DATA and REMOVE_DEBUG_DATA vm bind ops
-:44: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#44:
new file mode 100644
-:617: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#617: FILE: drivers/gpu/drm/xe/xe_vm.c:3431:
+ if (XE_IOCTL_DBG(xe, operation != DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA &&
+ operation != DRM_XE_VM_BIND_OP_REMOVE_DEBUG_DATA &&
-:620: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#620: FILE: drivers/gpu/drm/xe/xe_vm.c:3434:
+ XE_IOCTL_DBG(xe, ext.name == XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA &&
+ ++debug_data_count > 1))
-:740: CHECK:UNCOMMENTED_DEFINITION: struct mutex definition without comment
#740: FILE: drivers/gpu/drm/xe/xe_vm_types.h:343:
+ struct mutex lock;
total: 0 errors, 1 warnings, 3 checks, 759 lines checked
56e23cab2dd5 drm/xe/eudebug: Introduce vm bind and vm bind debug data events
-:7: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#7:
This patch adds events to track the bind ioctl and associated debug data add
-:390: WARNING:LONG_LINE_COMMENT: line length of 102 exceeds 100 columns
#390: FILE: include/uapi/drm/xe_drm_eudebug.h:92:
+ * │ EVENT_VM_BIND ├──────────────────┬─┬┄┐
-:391: WARNING:LONG_LINE_COMMENT: line length of 108 exceeds 100 columns
#391: FILE: include/uapi/drm/xe_drm_eudebug.h:93:
+ * └───────────────────────┘ │ │ ┊
-:392: WARNING:LONG_LINE_COMMENT: line length of 130 exceeds 100 columns
#392: FILE: include/uapi/drm/xe_drm_eudebug.h:94:
+ * ┌──────────────────────────────────┐ │ │ ┊
-:394: WARNING:LONG_LINE_COMMENT: line length of 128 exceeds 100 columns
#394: FILE: include/uapi/drm/xe_drm_eudebug.h:96:
+ * └──────────────────────────────────┘ │ ┊
-:396: WARNING:LONG_LINE_COMMENT: line length of 128 exceeds 100 columns
#396: FILE: include/uapi/drm/xe_drm_eudebug.h:98:
+ * ┌──────────────────────────────────┐ │ ┊
-:398: WARNING:LONG_LINE_COMMENT: line length of 126 exceeds 100 columns
#398: FILE: include/uapi/drm/xe_drm_eudebug.h:100:
+ * └──────────────────────────────────┘ ┊
-:400: WARNING:LONG_LINE_COMMENT: line length of 126 exceeds 100 columns
#400: FILE: include/uapi/drm/xe_drm_eudebug.h:102:
+ * ┌┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┐ ┊
-:402: WARNING:LONG_LINE_COMMENT: line length of 116 exceeds 100 columns
#402: FILE: include/uapi/drm/xe_drm_eudebug.h:104:
+ * └┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┘
total: 0 errors, 9 warnings, 0 checks, 378 lines checked
5e6a4dc9be81 drm/xe/eudebug: Add UFENCE events with acks
-:309: CHECK:COMPARISON_TO_NULL: Comparison to NULL could be written "!ufence"
#309: FILE: drivers/gpu/drm/xe/xe_eudebug.c:1195:
+ xe_assert(vm->xe, ufence == NULL);
-:608: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#608: FILE: drivers/gpu/drm/xe/xe_sync_types.h:26:
+ spinlock_t lock;
total: 0 errors, 0 warnings, 2 checks, 610 lines checked
7ef278734e8f drm/xe/eudebug: vm open/pread/pwrite
-:124: CHECK:BRACES: Blank lines aren't necessary after an open brace '{'
#124: FILE: drivers/gpu/drm/xe/xe_eudebug.c:669:
+{
+
-:148: CHECK:LINE_SPACING: Please don't use multiple blank lines
#148: FILE: drivers/gpu/drm/xe/xe_eudebug.c:693:
+
+
-:188: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#188: FILE: drivers/gpu/drm/xe/xe_eudebug.h:42:
+#define xe_eudebug_for_each_hw_engine(__hwe, __gt, __id) \
+ for_each_hw_engine(__hwe, __gt, __id) \
+ if (xe_hw_engine_has_eudebug(__hwe))
BUT SEE:
do {} while (0) advice is over-stated in a few situations:
The more obvious case is macros, like MODULE_PARM_DESC, invoked at
file-scope, where C disallows code (it must be in functions). See
$exceptions if you have one to add by name.
More troublesome is declarative macros used at top of new scope,
like DECLARE_PER_CPU. These might just compile with a do-while-0
wrapper, but would be incorrect. Most of these are handled by
detecting struct,union,etc declaration primitives in $exceptions.
Theres also macros called inside an if (block), which "return" an
expression. These cannot do-while, and need a ({}) wrapper.
Enjoy this qualification while we work to improve our heuristics.
-:188: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__hwe' - possible side-effects?
#188: FILE: drivers/gpu/drm/xe/xe_eudebug.h:42:
+#define xe_eudebug_for_each_hw_engine(__hwe, __gt, __id) \
+ for_each_hw_engine(__hwe, __gt, __id) \
+ if (xe_hw_engine_has_eudebug(__hwe))
-:237: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#237:
new file mode 100644
total: 1 errors, 1 warnings, 3 checks, 619 lines checked
bf86b2a116c1 drm/xe/eudebug: userptr vm pread/pwrite
e731d8538205 drm/xe/eudebug: hw enablement for eudebug
-:107: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#107:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 452 lines checked
94c2b0c6f708 drm/xe/eudebug: Introduce EU control interface
-:277: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#277: FILE: drivers/gpu/drm/xe/xe_eudebug_hw.c:183:
+static bool engine_has_runalone_set(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
-:283: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#283: FILE: drivers/gpu/drm/xe/xe_eudebug_hw.c:189:
+static bool engine_has_context_set(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
-:915: CHECK:BRACES: Blank lines aren't necessary after an open brace '{'
#915: FILE: include/uapi/drm/xe_drm_eudebug.h:184:
+struct drm_xe_eudebug_eu_control {
+
total: 0 errors, 0 warnings, 3 checks, 860 lines checked
de9ee0f02267 drm/xe/eudebug: Introduce per device attention scan worker
46daf4b60372 drm/xe/eudebug_test: Introduce xe_eudebug wa kunit test
-:16: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#16:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 201 lines checked
878c1f2fbfa1 drm/xe: Implement SR-IOV and eudebug exclusivity
eaeca378838e drm/xe: Add xe_client_debugfs and introduce debug_data file
-:30: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#30:
new file mode 100644
-:84: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#84: FILE: drivers/gpu/drm/xe/xe_client_debugfs.c:50:
+ len = snprintf(kbuf, MAX_LINE_LEN, "%lu 0x%llx-0x%llx 0x%llx 0x%x\t%s\n",
+ vm_index,
total: 0 errors, 1 warnings, 1 checks, 161 lines checked
a8b37e7d9b56 drm/xe/eudebug: Add read/count/compare helper for eu attention
-:127: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#127:
new file mode 100644
-:149: CHECK:SPACING: spaces preferred around that '/' (ctx:VxV)
#149: FILE: drivers/gpu/drm/xe/xe_gt_debug_types.h:18:
+ XE_GT_EU_ATT_MAX_THREADS/8];
^
total: 0 errors, 1 warnings, 1 checks, 120 lines checked
f6d68798ae3e drm/xe/vm: Support for adding null page VMA to VM on request
-:15: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#15:
[1] https://lore.kernel.org/intel-xe/20230829231648.4438-1-yu.bruce.chang@intel.com/
total: 0 errors, 1 warnings, 0 checks, 42 lines checked
0f5dbb4a16eb drm/xe/eudebug: Introduce EU pagefault handling interface
-:337: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#337:
new file mode 100644
-:454: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#454: FILE: drivers/gpu/drm/xe/xe_eudebug_pagefault.c:113:
+ spinlock_t lock;
-:462: CHECK:COMPARISON_TO_NULL: Comparison to NULL could be written "!fence"
#462: FILE: drivers/gpu/drm/xe/xe_eudebug_pagefault.c:121:
+ if (fence == NULL)
-:558: CHECK:USLEEP_RANGE: usleep_range is preferred over udelay; see function description of usleep_range() and udelay().
#558: FILE: drivers/gpu/drm/xe/xe_eudebug_pagefault.c:217:
+ udelay(200);
total: 0 errors, 1 warnings, 3 checks, 859 lines checked
5ff32ed53ff2 drm/xe/eudebug: Enable EU pagefault handling
* ✓ CI.KUnit: success for Intel Xe GPU Debug Support (eudebug) v6
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (20 preceding siblings ...)
2025-12-02 14:02 ` ✗ CI.checkpatch: warning for Intel Xe GPU Debug Support (eudebug) v6 Patchwork
@ 2025-12-02 14:04 ` Patchwork
2025-12-02 15:34 ` ✓ Xe.CI.BAT: " Patchwork
` (3 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2025-12-02 14:04 UTC (permalink / raw)
To: Mika Kuoppala; +Cc: intel-xe
== Series Details ==
Series: Intel Xe GPU Debug Support (eudebug) v6
URL : https://patchwork.freedesktop.org/series/158380/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[14:02:14] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:02:21] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[14:03:24] Starting KUnit Kernel (1/1)...
[14:03:24] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:03:24] ================== guc_buf (11 subtests) ===================
[14:03:24] [PASSED] test_smallest
[14:03:24] [PASSED] test_largest
[14:03:24] [PASSED] test_granular
[14:03:24] [PASSED] test_unique
[14:03:24] [PASSED] test_overlap
[14:03:24] [PASSED] test_reusable
[14:03:24] [PASSED] test_too_big
[14:03:24] [PASSED] test_flush
[14:03:24] [PASSED] test_lookup
[14:03:24] [PASSED] test_data
[14:03:24] [PASSED] test_class
[14:03:24] ===================== [PASSED] guc_buf =====================
[14:03:24] =================== guc_dbm (7 subtests) ===================
[14:03:24] [PASSED] test_empty
[14:03:24] [PASSED] test_default
[14:03:24] ======================== test_size ========================
[14:03:24] [PASSED] 4
[14:03:24] [PASSED] 8
[14:03:24] [PASSED] 32
[14:03:24] [PASSED] 256
[14:03:24] ==================== [PASSED] test_size ====================
[14:03:24] ======================= test_reuse ========================
[14:03:24] [PASSED] 4
[14:03:24] [PASSED] 8
[14:03:24] [PASSED] 32
[14:03:24] [PASSED] 256
[14:03:24] =================== [PASSED] test_reuse ====================
[14:03:24] =================== test_range_overlap ====================
[14:03:24] [PASSED] 4
[14:03:24] [PASSED] 8
[14:03:24] [PASSED] 32
[14:03:24] [PASSED] 256
[14:03:24] =============== [PASSED] test_range_overlap ================
[14:03:24] =================== test_range_compact ====================
[14:03:24] [PASSED] 4
[14:03:24] [PASSED] 8
[14:03:24] [PASSED] 32
[14:03:24] [PASSED] 256
[14:03:24] =============== [PASSED] test_range_compact ================
[14:03:24] ==================== test_range_spare =====================
[14:03:24] [PASSED] 4
[14:03:24] [PASSED] 8
[14:03:24] [PASSED] 32
[14:03:24] [PASSED] 256
[14:03:24] ================ [PASSED] test_range_spare =================
[14:03:24] ===================== [PASSED] guc_dbm =====================
[14:03:24] =================== guc_idm (6 subtests) ===================
[14:03:24] [PASSED] bad_init
[14:03:24] [PASSED] no_init
[14:03:24] [PASSED] init_fini
[14:03:24] [PASSED] check_used
[14:03:24] [PASSED] check_quota
[14:03:24] [PASSED] check_all
[14:03:24] ===================== [PASSED] guc_idm =====================
[14:03:24] ================== no_relay (3 subtests) ===================
[14:03:24] [PASSED] xe_drops_guc2pf_if_not_ready
[14:03:24] [PASSED] xe_drops_guc2vf_if_not_ready
[14:03:24] [PASSED] xe_rejects_send_if_not_ready
[14:03:24] ==================== [PASSED] no_relay =====================
[14:03:24] ================== pf_relay (14 subtests) ==================
[14:03:24] [PASSED] pf_rejects_guc2pf_too_short
[14:03:24] [PASSED] pf_rejects_guc2pf_too_long
[14:03:24] [PASSED] pf_rejects_guc2pf_no_payload
[14:03:24] [PASSED] pf_fails_no_payload
[14:03:24] [PASSED] pf_fails_bad_origin
[14:03:24] [PASSED] pf_fails_bad_type
[14:03:24] [PASSED] pf_txn_reports_error
[14:03:24] [PASSED] pf_txn_sends_pf2guc
[14:03:24] [PASSED] pf_sends_pf2guc
[14:03:24] [SKIPPED] pf_loopback_nop
[14:03:24] [SKIPPED] pf_loopback_echo
[14:03:24] [SKIPPED] pf_loopback_fail
[14:03:24] [SKIPPED] pf_loopback_busy
[14:03:24] [SKIPPED] pf_loopback_retry
[14:03:24] ==================== [PASSED] pf_relay =====================
[14:03:24] ================== vf_relay (3 subtests) ===================
[14:03:24] [PASSED] vf_rejects_guc2vf_too_short
[14:03:24] [PASSED] vf_rejects_guc2vf_too_long
[14:03:24] [PASSED] vf_rejects_guc2vf_no_payload
[14:03:24] ==================== [PASSED] vf_relay =====================
[14:03:24] ================ pf_gt_config (6 subtests) =================
[14:03:24] [PASSED] fair_contexts_1vf
[14:03:24] [PASSED] fair_doorbells_1vf
[14:03:24] [PASSED] fair_ggtt_1vf
[14:03:24] ====================== fair_contexts ======================
[14:03:24] [PASSED] 1 VF
[14:03:24] [PASSED] 2 VFs
[14:03:24] [PASSED] 3 VFs
[14:03:24] [PASSED] 4 VFs
[14:03:24] [PASSED] 5 VFs
[14:03:24] [PASSED] 6 VFs
[14:03:24] [PASSED] 7 VFs
[14:03:24] [PASSED] 8 VFs
[14:03:24] [PASSED] 9 VFs
[14:03:24] [PASSED] 10 VFs
[14:03:24] [PASSED] 11 VFs
[14:03:24] [PASSED] 12 VFs
[14:03:24] [PASSED] 13 VFs
[14:03:24] [PASSED] 14 VFs
[14:03:24] [PASSED] 15 VFs
[14:03:24] [PASSED] 16 VFs
[14:03:24] [PASSED] 17 VFs
[14:03:24] [PASSED] 18 VFs
[14:03:24] [PASSED] 19 VFs
[14:03:24] [PASSED] 20 VFs
[14:03:24] [PASSED] 21 VFs
[14:03:24] [PASSED] 22 VFs
[14:03:24] [PASSED] 23 VFs
[14:03:24] [PASSED] 24 VFs
[14:03:24] [PASSED] 25 VFs
[14:03:24] [PASSED] 26 VFs
[14:03:24] [PASSED] 27 VFs
[14:03:24] [PASSED] 28 VFs
[14:03:24] [PASSED] 29 VFs
[14:03:24] [PASSED] 30 VFs
[14:03:24] [PASSED] 31 VFs
[14:03:24] [PASSED] 32 VFs
[14:03:24] [PASSED] 33 VFs
[14:03:24] [PASSED] 34 VFs
[14:03:24] [PASSED] 35 VFs
[14:03:24] [PASSED] 36 VFs
[14:03:24] [PASSED] 37 VFs
[14:03:24] [PASSED] 38 VFs
[14:03:24] [PASSED] 39 VFs
[14:03:24] [PASSED] 40 VFs
[14:03:24] [PASSED] 41 VFs
[14:03:24] [PASSED] 42 VFs
[14:03:24] [PASSED] 43 VFs
[14:03:24] [PASSED] 44 VFs
[14:03:24] [PASSED] 45 VFs
[14:03:24] [PASSED] 46 VFs
[14:03:24] [PASSED] 47 VFs
[14:03:24] [PASSED] 48 VFs
[14:03:24] [PASSED] 49 VFs
[14:03:24] [PASSED] 50 VFs
[14:03:24] [PASSED] 51 VFs
[14:03:24] [PASSED] 52 VFs
[14:03:24] [PASSED] 53 VFs
[14:03:24] [PASSED] 54 VFs
[14:03:24] [PASSED] 55 VFs
[14:03:24] [PASSED] 56 VFs
[14:03:24] [PASSED] 57 VFs
[14:03:24] [PASSED] 58 VFs
[14:03:24] [PASSED] 59 VFs
[14:03:24] [PASSED] 60 VFs
[14:03:24] [PASSED] 61 VFs
[14:03:24] [PASSED] 62 VFs
[14:03:24] [PASSED] 63 VFs
[14:03:24] ================== [PASSED] fair_contexts ==================
[14:03:24] ===================== fair_doorbells ======================
[14:03:24] [PASSED] 1 VF
[14:03:24] [PASSED] 2 VFs
[14:03:24] [PASSED] 3 VFs
[14:03:24] [PASSED] 4 VFs
[14:03:24] [PASSED] 5 VFs
[14:03:24] [PASSED] 6 VFs
[14:03:24] [PASSED] 7 VFs
[14:03:24] [PASSED] 8 VFs
[14:03:24] [PASSED] 9 VFs
[14:03:24] [PASSED] 10 VFs
[14:03:24] [PASSED] 11 VFs
[14:03:24] [PASSED] 12 VFs
[14:03:24] [PASSED] 13 VFs
[14:03:24] [PASSED] 14 VFs
[14:03:24] [PASSED] 15 VFs
[14:03:24] [PASSED] 16 VFs
[14:03:24] [PASSED] 17 VFs
[14:03:24] [PASSED] 18 VFs
[14:03:24] [PASSED] 19 VFs
[14:03:24] [PASSED] 20 VFs
[14:03:24] [PASSED] 21 VFs
[14:03:24] [PASSED] 22 VFs
[14:03:24] [PASSED] 23 VFs
[14:03:24] [PASSED] 24 VFs
[14:03:24] [PASSED] 25 VFs
[14:03:24] [PASSED] 26 VFs
[14:03:24] [PASSED] 27 VFs
[14:03:24] [PASSED] 28 VFs
[14:03:24] [PASSED] 29 VFs
[14:03:24] [PASSED] 30 VFs
[14:03:24] [PASSED] 31 VFs
[14:03:24] [PASSED] 32 VFs
[14:03:24] [PASSED] 33 VFs
[14:03:24] [PASSED] 34 VFs
[14:03:24] [PASSED] 35 VFs
[14:03:24] [PASSED] 36 VFs
[14:03:24] [PASSED] 37 VFs
[14:03:24] [PASSED] 38 VFs
[14:03:24] [PASSED] 39 VFs
[14:03:24] [PASSED] 40 VFs
[14:03:24] [PASSED] 41 VFs
[14:03:24] [PASSED] 42 VFs
[14:03:24] [PASSED] 43 VFs
[14:03:24] [PASSED] 44 VFs
[14:03:24] [PASSED] 45 VFs
[14:03:24] [PASSED] 46 VFs
[14:03:24] [PASSED] 47 VFs
[14:03:24] [PASSED] 48 VFs
[14:03:24] [PASSED] 49 VFs
[14:03:24] [PASSED] 50 VFs
[14:03:24] [PASSED] 51 VFs
[14:03:24] [PASSED] 52 VFs
[14:03:24] [PASSED] 53 VFs
[14:03:24] [PASSED] 54 VFs
[14:03:24] [PASSED] 55 VFs
[14:03:24] [PASSED] 56 VFs
[14:03:24] [PASSED] 57 VFs
[14:03:24] [PASSED] 58 VFs
[14:03:24] [PASSED] 59 VFs
[14:03:24] [PASSED] 60 VFs
[14:03:24] [PASSED] 61 VFs
[14:03:24] [PASSED] 62 VFs
[14:03:24] [PASSED] 63 VFs
[14:03:24] ================= [PASSED] fair_doorbells ==================
[14:03:24] ======================== fair_ggtt ========================
[14:03:24] [PASSED] 1 VF
[14:03:24] [PASSED] 2 VFs
[14:03:24] [PASSED] 3 VFs
[14:03:24] [PASSED] 4 VFs
[14:03:24] [PASSED] 5 VFs
[14:03:24] [PASSED] 6 VFs
[14:03:24] [PASSED] 7 VFs
[14:03:24] [PASSED] 8 VFs
[14:03:24] [PASSED] 9 VFs
[14:03:24] [PASSED] 10 VFs
[14:03:24] [PASSED] 11 VFs
[14:03:24] [PASSED] 12 VFs
[14:03:24] [PASSED] 13 VFs
[14:03:24] [PASSED] 14 VFs
[14:03:24] [PASSED] 15 VFs
[14:03:24] [PASSED] 16 VFs
[14:03:24] [PASSED] 17 VFs
[14:03:24] [PASSED] 18 VFs
[14:03:24] [PASSED] 19 VFs
[14:03:24] [PASSED] 20 VFs
[14:03:24] [PASSED] 21 VFs
[14:03:24] [PASSED] 22 VFs
[14:03:24] [PASSED] 23 VFs
[14:03:24] [PASSED] 24 VFs
[14:03:24] [PASSED] 25 VFs
[14:03:24] [PASSED] 26 VFs
[14:03:24] [PASSED] 27 VFs
[14:03:24] [PASSED] 28 VFs
[14:03:24] [PASSED] 29 VFs
[14:03:24] [PASSED] 30 VFs
[14:03:24] [PASSED] 31 VFs
[14:03:24] [PASSED] 32 VFs
[14:03:24] [PASSED] 33 VFs
[14:03:24] [PASSED] 34 VFs
[14:03:24] [PASSED] 35 VFs
[14:03:24] [PASSED] 36 VFs
[14:03:24] [PASSED] 37 VFs
[14:03:24] [PASSED] 38 VFs
[14:03:24] [PASSED] 39 VFs
[14:03:24] [PASSED] 40 VFs
[14:03:24] [PASSED] 41 VFs
[14:03:24] [PASSED] 42 VFs
[14:03:24] [PASSED] 43 VFs
[14:03:24] [PASSED] 44 VFs
[14:03:24] [PASSED] 45 VFs
[14:03:24] [PASSED] 46 VFs
[14:03:24] [PASSED] 47 VFs
[14:03:24] [PASSED] 48 VFs
[14:03:24] [PASSED] 49 VFs
[14:03:24] [PASSED] 50 VFs
[14:03:24] [PASSED] 51 VFs
[14:03:24] [PASSED] 52 VFs
[14:03:24] [PASSED] 53 VFs
[14:03:24] [PASSED] 54 VFs
[14:03:24] [PASSED] 55 VFs
[14:03:24] [PASSED] 56 VFs
[14:03:24] [PASSED] 57 VFs
[14:03:24] [PASSED] 58 VFs
[14:03:24] [PASSED] 59 VFs
[14:03:24] [PASSED] 60 VFs
[14:03:24] [PASSED] 61 VFs
[14:03:24] [PASSED] 62 VFs
[14:03:24] [PASSED] 63 VFs
[14:03:24] ==================== [PASSED] fair_ggtt ====================
[14:03:24] ================== [PASSED] pf_gt_config ===================
[14:03:24] ===================== lmtt (1 subtest) =====================
[14:03:24] ======================== test_ops =========================
[14:03:24] [PASSED] 2-level
[14:03:24] [PASSED] multi-level
[14:03:24] ==================== [PASSED] test_ops =====================
[14:03:24] ====================== [PASSED] lmtt =======================
[14:03:24] ================= pf_service (11 subtests) =================
[14:03:24] [PASSED] pf_negotiate_any
[14:03:24] [PASSED] pf_negotiate_base_match
[14:03:24] [PASSED] pf_negotiate_base_newer
[14:03:24] [PASSED] pf_negotiate_base_next
[14:03:24] [SKIPPED] pf_negotiate_base_older
[14:03:24] [PASSED] pf_negotiate_base_prev
[14:03:24] [PASSED] pf_negotiate_latest_match
[14:03:24] [PASSED] pf_negotiate_latest_newer
[14:03:24] [PASSED] pf_negotiate_latest_next
[14:03:24] [SKIPPED] pf_negotiate_latest_older
[14:03:24] [SKIPPED] pf_negotiate_latest_prev
[14:03:24] =================== [PASSED] pf_service ====================
[14:03:24] ================== xe_eudebug (1 subtest) ==================
[14:03:24] =============== xe_eudebug_toggle_reg_kunit ===============
[14:03:24] ========== [SKIPPED] xe_eudebug_toggle_reg_kunit ===========
[14:03:24] =================== [SKIPPED] xe_eudebug ===================
[14:03:24] ================= xe_guc_g2g (2 subtests) ==================
[14:03:24] ============== xe_live_guc_g2g_kunit_default ==============
[14:03:24] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[14:03:24] ============== xe_live_guc_g2g_kunit_allmem ===============
[14:03:24] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[14:03:24] =================== [SKIPPED] xe_guc_g2g ===================
[14:03:24] =================== xe_mocs (2 subtests) ===================
[14:03:24] ================ xe_live_mocs_kernel_kunit ================
[14:03:24] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[14:03:24] ================ xe_live_mocs_reset_kunit =================
[14:03:24] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[14:03:24] ==================== [SKIPPED] xe_mocs =====================
[14:03:24] ================= xe_migrate (2 subtests) ==================
[14:03:24] ================= xe_migrate_sanity_kunit =================
[14:03:24] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[14:03:24] ================== xe_validate_ccs_kunit ==================
[14:03:24] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[14:03:24] =================== [SKIPPED] xe_migrate ===================
[14:03:24] ================== xe_dma_buf (1 subtest) ==================
[14:03:24] ==================== xe_dma_buf_kunit =====================
[14:03:24] ================ [SKIPPED] xe_dma_buf_kunit ================
[14:03:24] =================== [SKIPPED] xe_dma_buf ===================
[14:03:24] ================= xe_bo_shrink (1 subtest) =================
[14:03:24] =================== xe_bo_shrink_kunit ====================
[14:03:24] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[14:03:24] ================== [SKIPPED] xe_bo_shrink ==================
[14:03:24] ==================== xe_bo (2 subtests) ====================
[14:03:24] ================== xe_ccs_migrate_kunit ===================
[14:03:24] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[14:03:24] ==================== xe_bo_evict_kunit ====================
[14:03:24] =============== [SKIPPED] xe_bo_evict_kunit ================
[14:03:24] ===================== [SKIPPED] xe_bo ======================
[14:03:24] ==================== args (11 subtests) ====================
[14:03:24] [PASSED] count_args_test
[14:03:24] [PASSED] call_args_example
[14:03:24] [PASSED] call_args_test
[14:03:24] [PASSED] drop_first_arg_example
[14:03:24] [PASSED] drop_first_arg_test
[14:03:24] [PASSED] first_arg_example
[14:03:24] [PASSED] first_arg_test
[14:03:24] [PASSED] last_arg_example
[14:03:24] [PASSED] last_arg_test
[14:03:24] [PASSED] pick_arg_example
[14:03:24] [PASSED] sep_comma_example
[14:03:24] ====================== [PASSED] args =======================
[14:03:24] =================== xe_pci (3 subtests) ====================
[14:03:24] ==================== check_graphics_ip ====================
[14:03:24] [PASSED] 12.00 Xe_LP
[14:03:24] [PASSED] 12.10 Xe_LP+
[14:03:24] [PASSED] 12.55 Xe_HPG
[14:03:24] [PASSED] 12.60 Xe_HPC
[14:03:24] [PASSED] 12.70 Xe_LPG
[14:03:24] [PASSED] 12.71 Xe_LPG
[14:03:24] [PASSED] 12.74 Xe_LPG+
[14:03:24] [PASSED] 20.01 Xe2_HPG
[14:03:24] [PASSED] 20.02 Xe2_HPG
[14:03:24] [PASSED] 20.04 Xe2_LPG
[14:03:24] [PASSED] 30.00 Xe3_LPG
[14:03:24] [PASSED] 30.01 Xe3_LPG
[14:03:24] [PASSED] 30.03 Xe3_LPG
[14:03:24] [PASSED] 30.04 Xe3_LPG
[14:03:24] [PASSED] 30.05 Xe3_LPG
[14:03:24] [PASSED] 35.11 Xe3p_XPC
[14:03:24] ================ [PASSED] check_graphics_ip ================
[14:03:24] ===================== check_media_ip ======================
[14:03:24] [PASSED] 12.00 Xe_M
[14:03:24] [PASSED] 12.55 Xe_HPM
[14:03:24] [PASSED] 13.00 Xe_LPM+
[14:03:24] [PASSED] 13.01 Xe2_HPM
[14:03:24] [PASSED] 20.00 Xe2_LPM
[14:03:24] [PASSED] 30.00 Xe3_LPM
[14:03:24] [PASSED] 30.02 Xe3_LPM
[14:03:24] [PASSED] 35.00 Xe3p_LPM
[14:03:24] [PASSED] 35.03 Xe3p_HPM
[14:03:24] ================= [PASSED] check_media_ip ==================
[14:03:24] =================== check_platform_desc ===================
[14:03:24] [PASSED] 0x9A60 (TIGERLAKE)
[14:03:24] [PASSED] 0x9A68 (TIGERLAKE)
[14:03:24] [PASSED] 0x9A70 (TIGERLAKE)
[14:03:24] [PASSED] 0x9A40 (TIGERLAKE)
[14:03:24] [PASSED] 0x9A49 (TIGERLAKE)
[14:03:24] [PASSED] 0x9A59 (TIGERLAKE)
[14:03:24] [PASSED] 0x9A78 (TIGERLAKE)
[14:03:24] [PASSED] 0x9AC0 (TIGERLAKE)
[14:03:24] [PASSED] 0x9AC9 (TIGERLAKE)
[14:03:24] [PASSED] 0x9AD9 (TIGERLAKE)
[14:03:24] [PASSED] 0x9AF8 (TIGERLAKE)
[14:03:24] [PASSED] 0x4C80 (ROCKETLAKE)
[14:03:24] [PASSED] 0x4C8A (ROCKETLAKE)
[14:03:24] [PASSED] 0x4C8B (ROCKETLAKE)
[14:03:24] [PASSED] 0x4C8C (ROCKETLAKE)
[14:03:24] [PASSED] 0x4C90 (ROCKETLAKE)
[14:03:24] [PASSED] 0x4C9A (ROCKETLAKE)
[14:03:24] [PASSED] 0x4680 (ALDERLAKE_S)
[14:03:24] [PASSED] 0x4682 (ALDERLAKE_S)
[14:03:24] [PASSED] 0x4688 (ALDERLAKE_S)
[14:03:24] [PASSED] 0x468A (ALDERLAKE_S)
[14:03:24] [PASSED] 0x468B (ALDERLAKE_S)
[14:03:24] [PASSED] 0x4690 (ALDERLAKE_S)
[14:03:24] [PASSED] 0x4692 (ALDERLAKE_S)
[14:03:24] [PASSED] 0x4693 (ALDERLAKE_S)
[14:03:24] [PASSED] 0x46A0 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46A1 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46A2 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46A3 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46A6 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46A8 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46AA (ALDERLAKE_P)
[14:03:24] [PASSED] 0x462A (ALDERLAKE_P)
[14:03:24] [PASSED] 0x4626 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x4628 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46B0 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46B1 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46B2 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46B3 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46C0 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46C1 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46C2 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46C3 (ALDERLAKE_P)
[14:03:24] [PASSED] 0x46D0 (ALDERLAKE_N)
[14:03:24] [PASSED] 0x46D1 (ALDERLAKE_N)
[14:03:24] [PASSED] 0x46D2 (ALDERLAKE_N)
[14:03:24] [PASSED] 0x46D3 (ALDERLAKE_N)
[14:03:24] [PASSED] 0x46D4 (ALDERLAKE_N)
[14:03:24] [PASSED] 0xA721 (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA7A1 (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA7A9 (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA7AC (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA7AD (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA720 (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA7A0 (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA7A8 (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA7AA (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA7AB (ALDERLAKE_P)
[14:03:24] [PASSED] 0xA780 (ALDERLAKE_S)
[14:03:24] [PASSED] 0xA781 (ALDERLAKE_S)
[14:03:24] [PASSED] 0xA782 (ALDERLAKE_S)
[14:03:24] [PASSED] 0xA783 (ALDERLAKE_S)
[14:03:24] [PASSED] 0xA788 (ALDERLAKE_S)
[14:03:24] [PASSED] 0xA789 (ALDERLAKE_S)
[14:03:24] [PASSED] 0xA78A (ALDERLAKE_S)
[14:03:24] [PASSED] 0xA78B (ALDERLAKE_S)
[14:03:24] [PASSED] 0x4905 (DG1)
[14:03:24] [PASSED] 0x4906 (DG1)
[14:03:24] [PASSED] 0x4907 (DG1)
[14:03:24] [PASSED] 0x4908 (DG1)
[14:03:24] [PASSED] 0x4909 (DG1)
[14:03:24] [PASSED] 0x56C0 (DG2)
[14:03:24] [PASSED] 0x56C2 (DG2)
[14:03:24] [PASSED] 0x56C1 (DG2)
[14:03:24] [PASSED] 0x7D51 (METEORLAKE)
[14:03:24] [PASSED] 0x7DD1 (METEORLAKE)
[14:03:24] [PASSED] 0x7D41 (METEORLAKE)
[14:03:24] [PASSED] 0x7D67 (METEORLAKE)
[14:03:24] [PASSED] 0xB640 (METEORLAKE)
[14:03:24] [PASSED] 0x56A0 (DG2)
[14:03:24] [PASSED] 0x56A1 (DG2)
[14:03:24] [PASSED] 0x56A2 (DG2)
[14:03:24] [PASSED] 0x56BE (DG2)
[14:03:24] [PASSED] 0x56BF (DG2)
[14:03:24] [PASSED] 0x5690 (DG2)
[14:03:24] [PASSED] 0x5691 (DG2)
[14:03:24] [PASSED] 0x5692 (DG2)
[14:03:24] [PASSED] 0x56A5 (DG2)
[14:03:24] [PASSED] 0x56A6 (DG2)
[14:03:24] [PASSED] 0x56B0 (DG2)
[14:03:24] [PASSED] 0x56B1 (DG2)
[14:03:24] [PASSED] 0x56BA (DG2)
[14:03:24] [PASSED] 0x56BB (DG2)
[14:03:24] [PASSED] 0x56BC (DG2)
[14:03:24] [PASSED] 0x56BD (DG2)
[14:03:24] [PASSED] 0x5693 (DG2)
[14:03:24] [PASSED] 0x5694 (DG2)
[14:03:24] [PASSED] 0x5695 (DG2)
[14:03:24] [PASSED] 0x56A3 (DG2)
[14:03:24] [PASSED] 0x56A4 (DG2)
[14:03:24] [PASSED] 0x56B2 (DG2)
[14:03:24] [PASSED] 0x56B3 (DG2)
[14:03:24] [PASSED] 0x5696 (DG2)
[14:03:24] [PASSED] 0x5697 (DG2)
[14:03:24] [PASSED] 0xB69 (PVC)
[14:03:24] [PASSED] 0xB6E (PVC)
[14:03:24] [PASSED] 0xBD4 (PVC)
[14:03:24] [PASSED] 0xBD5 (PVC)
[14:03:24] [PASSED] 0xBD6 (PVC)
[14:03:24] [PASSED] 0xBD7 (PVC)
[14:03:24] [PASSED] 0xBD8 (PVC)
[14:03:24] [PASSED] 0xBD9 (PVC)
[14:03:24] [PASSED] 0xBDA (PVC)
[14:03:24] [PASSED] 0xBDB (PVC)
[14:03:24] [PASSED] 0xBE0 (PVC)
[14:03:24] [PASSED] 0xBE1 (PVC)
[14:03:24] [PASSED] 0xBE5 (PVC)
[14:03:24] [PASSED] 0x7D40 (METEORLAKE)
[14:03:24] [PASSED] 0x7D45 (METEORLAKE)
[14:03:24] [PASSED] 0x7D55 (METEORLAKE)
[14:03:24] [PASSED] 0x7D60 (METEORLAKE)
[14:03:24] [PASSED] 0x7DD5 (METEORLAKE)
[14:03:24] [PASSED] 0x6420 (LUNARLAKE)
[14:03:24] [PASSED] 0x64A0 (LUNARLAKE)
[14:03:24] [PASSED] 0x64B0 (LUNARLAKE)
[14:03:24] [PASSED] 0xE202 (BATTLEMAGE)
[14:03:24] [PASSED] 0xE209 (BATTLEMAGE)
[14:03:24] [PASSED] 0xE20B (BATTLEMAGE)
[14:03:24] [PASSED] 0xE20C (BATTLEMAGE)
[14:03:24] [PASSED] 0xE20D (BATTLEMAGE)
[14:03:24] [PASSED] 0xE210 (BATTLEMAGE)
[14:03:24] [PASSED] 0xE211 (BATTLEMAGE)
[14:03:24] [PASSED] 0xE212 (BATTLEMAGE)
[14:03:24] [PASSED] 0xE216 (BATTLEMAGE)
[14:03:24] [PASSED] 0xE220 (BATTLEMAGE)
[14:03:24] [PASSED] 0xE221 (BATTLEMAGE)
[14:03:24] [PASSED] 0xE222 (BATTLEMAGE)
[14:03:24] [PASSED] 0xE223 (BATTLEMAGE)
[14:03:24] [PASSED] 0xB080 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB081 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB082 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB083 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB084 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB085 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB086 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB087 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB08F (PANTHERLAKE)
[14:03:24] [PASSED] 0xB090 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB0A0 (PANTHERLAKE)
[14:03:24] [PASSED] 0xB0B0 (PANTHERLAKE)
[14:03:24] [PASSED] 0xD740 (NOVALAKE_S)
[14:03:24] [PASSED] 0xD741 (NOVALAKE_S)
[14:03:24] [PASSED] 0xD742 (NOVALAKE_S)
[14:03:24] [PASSED] 0xD743 (NOVALAKE_S)
[14:03:24] [PASSED] 0xD744 (NOVALAKE_S)
[14:03:24] [PASSED] 0xD745 (NOVALAKE_S)
[14:03:24] [PASSED] 0x674C (CRESCENTISLAND)
[14:03:24] [PASSED] 0xFD80 (PANTHERLAKE)
[14:03:24] [PASSED] 0xFD81 (PANTHERLAKE)
[14:03:24] =============== [PASSED] check_platform_desc ===============
[14:03:24] ===================== [PASSED] xe_pci ======================
[14:03:24] =================== xe_rtp (2 subtests) ====================
[14:03:24] =============== xe_rtp_process_to_sr_tests ================
[14:03:24] [PASSED] coalesce-same-reg
[14:03:24] [PASSED] no-match-no-add
[14:03:24] [PASSED] match-or
[14:03:24] [PASSED] match-or-xfail
[14:03:24] [PASSED] no-match-no-add-multiple-rules
[14:03:24] [PASSED] two-regs-two-entries
[14:03:24] [PASSED] clr-one-set-other
[14:03:24] [PASSED] set-field
[14:03:24] [PASSED] conflict-duplicate
[14:03:24] [PASSED] conflict-not-disjoint
[14:03:24] [PASSED] conflict-reg-type
[14:03:24] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[14:03:24] ================== xe_rtp_process_tests ===================
[14:03:24] [PASSED] active1
[14:03:24] [PASSED] active2
[14:03:24] [PASSED] active-inactive
[14:03:24] [PASSED] inactive-active
[14:03:24] [PASSED] inactive-1st_or_active-inactive
[14:03:24] [PASSED] inactive-2nd_or_active-inactive
[14:03:24] [PASSED] inactive-last_or_active-inactive
[14:03:24] [PASSED] inactive-no_or_active-inactive
[14:03:24] ============== [PASSED] xe_rtp_process_tests ===============
[14:03:24] ===================== [PASSED] xe_rtp ======================
[14:03:24] ==================== xe_wa (1 subtest) =====================
[14:03:24] ======================== xe_wa_gt =========================
[14:03:24] [PASSED] TIGERLAKE B0
[14:03:24] [PASSED] DG1 A0
[14:03:24] [PASSED] DG1 B0
[14:03:24] [PASSED] ALDERLAKE_S A0
[14:03:24] [PASSED] ALDERLAKE_S B0
[14:03:24] [PASSED] ALDERLAKE_S C0
[14:03:24] [PASSED] ALDERLAKE_S D0
[14:03:24] [PASSED] ALDERLAKE_P A0
[14:03:24] [PASSED] ALDERLAKE_P B0
[14:03:24] [PASSED] ALDERLAKE_P C0
[14:03:24] [PASSED] ALDERLAKE_S RPLS D0
[14:03:24] [PASSED] ALDERLAKE_P RPLU E0
[14:03:24] [PASSED] DG2 G10 C0
[14:03:24] [PASSED] DG2 G11 B1
[14:03:24] [PASSED] DG2 G12 A1
[14:03:24] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[14:03:24] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[14:03:24] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[14:03:24] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[14:03:24] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[14:03:24] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[14:03:24] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[14:03:24] ==================== [PASSED] xe_wa_gt =====================
[14:03:24] ====================== [PASSED] xe_wa ======================
[14:03:24] ============================================================
[14:03:24] Testing complete. Ran 511 tests: passed: 492, skipped: 19
[14:03:24] Elapsed time: 70.847s total, 7.543s configuring, 62.526s building, 0.752s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[14:03:25] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:03:27] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[14:04:16] Starting KUnit Kernel (1/1)...
[14:04:16] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:04:16] ============ drm_test_pick_cmdline (2 subtests) ============
[14:04:16] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[14:04:16] =============== drm_test_pick_cmdline_named ===============
[14:04:16] [PASSED] NTSC
[14:04:16] [PASSED] NTSC-J
[14:04:16] [PASSED] PAL
[14:04:16] [PASSED] PAL-M
[14:04:16] =========== [PASSED] drm_test_pick_cmdline_named ===========
[14:04:16] ============== [PASSED] drm_test_pick_cmdline ==============
[14:04:16] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[14:04:16] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[14:04:16] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[14:04:16] =========== drm_validate_clone_mode (2 subtests) ===========
[14:04:16] ============== drm_test_check_in_clone_mode ===============
[14:04:16] [PASSED] in_clone_mode
[14:04:16] [PASSED] not_in_clone_mode
[14:04:16] ========== [PASSED] drm_test_check_in_clone_mode ===========
[14:04:16] =============== drm_test_check_valid_clones ===============
[14:04:16] [PASSED] not_in_clone_mode
[14:04:16] [PASSED] valid_clone
[14:04:16] [PASSED] invalid_clone
[14:04:16] =========== [PASSED] drm_test_check_valid_clones ===========
[14:04:16] ============= [PASSED] drm_validate_clone_mode =============
[14:04:16] ============= drm_validate_modeset (1 subtest) =============
[14:04:16] [PASSED] drm_test_check_connector_changed_modeset
[14:04:16] ============== [PASSED] drm_validate_modeset ===============
[14:04:16] ====== drm_test_bridge_get_current_state (2 subtests) ======
[14:04:16] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[14:04:16] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[14:04:16] ======== [PASSED] drm_test_bridge_get_current_state ========
[14:04:16] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[14:04:16] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[14:04:16] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[14:04:16] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[14:04:16] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[14:04:16] ============== drm_bridge_alloc (2 subtests) ===============
[14:04:16] [PASSED] drm_test_drm_bridge_alloc_basic
[14:04:16] [PASSED] drm_test_drm_bridge_alloc_get_put
[14:04:16] ================ [PASSED] drm_bridge_alloc =================
[14:04:16] ================== drm_buddy (8 subtests) ==================
[14:04:16] [PASSED] drm_test_buddy_alloc_limit
[14:04:16] [PASSED] drm_test_buddy_alloc_optimistic
[14:04:16] [PASSED] drm_test_buddy_alloc_pessimistic
[14:04:16] [PASSED] drm_test_buddy_alloc_pathological
[14:04:16] [PASSED] drm_test_buddy_alloc_contiguous
[14:04:16] [PASSED] drm_test_buddy_alloc_clear
[14:04:17] [PASSED] drm_test_buddy_alloc_range_bias
[14:04:17] [PASSED] drm_test_buddy_fragmentation_performance
[14:04:17] ==================== [PASSED] drm_buddy ====================
[14:04:17] ============= drm_cmdline_parser (40 subtests) =============
[14:04:17] [PASSED] drm_test_cmdline_force_d_only
[14:04:17] [PASSED] drm_test_cmdline_force_D_only_dvi
[14:04:17] [PASSED] drm_test_cmdline_force_D_only_hdmi
[14:04:17] [PASSED] drm_test_cmdline_force_D_only_not_digital
[14:04:17] [PASSED] drm_test_cmdline_force_e_only
[14:04:17] [PASSED] drm_test_cmdline_res
[14:04:17] [PASSED] drm_test_cmdline_res_vesa
[14:04:17] [PASSED] drm_test_cmdline_res_vesa_rblank
[14:04:17] [PASSED] drm_test_cmdline_res_rblank
[14:04:17] [PASSED] drm_test_cmdline_res_bpp
[14:04:17] [PASSED] drm_test_cmdline_res_refresh
[14:04:17] [PASSED] drm_test_cmdline_res_bpp_refresh
[14:04:17] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[14:04:17] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[14:04:17] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[14:04:17] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[14:04:17] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[14:04:17] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[14:04:17] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[14:04:17] [PASSED] drm_test_cmdline_res_margins_force_on
[14:04:17] [PASSED] drm_test_cmdline_res_vesa_margins
[14:04:17] [PASSED] drm_test_cmdline_name
[14:04:17] [PASSED] drm_test_cmdline_name_bpp
[14:04:17] [PASSED] drm_test_cmdline_name_option
[14:04:17] [PASSED] drm_test_cmdline_name_bpp_option
[14:04:17] [PASSED] drm_test_cmdline_rotate_0
[14:04:17] [PASSED] drm_test_cmdline_rotate_90
[14:04:17] [PASSED] drm_test_cmdline_rotate_180
[14:04:17] [PASSED] drm_test_cmdline_rotate_270
[14:04:17] [PASSED] drm_test_cmdline_hmirror
[14:04:17] [PASSED] drm_test_cmdline_vmirror
[14:04:17] [PASSED] drm_test_cmdline_margin_options
[14:04:17] [PASSED] drm_test_cmdline_multiple_options
[14:04:17] [PASSED] drm_test_cmdline_bpp_extra_and_option
[14:04:17] [PASSED] drm_test_cmdline_extra_and_option
[14:04:17] [PASSED] drm_test_cmdline_freestanding_options
[14:04:17] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[14:04:17] [PASSED] drm_test_cmdline_panel_orientation
[14:04:17] ================ drm_test_cmdline_invalid =================
[14:04:17] [PASSED] margin_only
[14:04:17] [PASSED] interlace_only
[14:04:17] [PASSED] res_missing_x
[14:04:17] [PASSED] res_missing_y
[14:04:17] [PASSED] res_bad_y
[14:04:17] [PASSED] res_missing_y_bpp
[14:04:17] [PASSED] res_bad_bpp
[14:04:17] [PASSED] res_bad_refresh
[14:04:17] [PASSED] res_bpp_refresh_force_on_off
[14:04:17] [PASSED] res_invalid_mode
[14:04:17] [PASSED] res_bpp_wrong_place_mode
[14:04:17] [PASSED] name_bpp_refresh
[14:04:17] [PASSED] name_refresh
[14:04:17] [PASSED] name_refresh_wrong_mode
[14:04:17] [PASSED] name_refresh_invalid_mode
[14:04:17] [PASSED] rotate_multiple
[14:04:17] [PASSED] rotate_invalid_val
[14:04:17] [PASSED] rotate_truncated
[14:04:17] [PASSED] invalid_option
[14:04:17] [PASSED] invalid_tv_option
[14:04:17] [PASSED] truncated_tv_option
[14:04:17] ============ [PASSED] drm_test_cmdline_invalid =============
[14:04:17] =============== drm_test_cmdline_tv_options ===============
[14:04:17] [PASSED] NTSC
[14:04:17] [PASSED] NTSC_443
[14:04:17] [PASSED] NTSC_J
[14:04:17] [PASSED] PAL
[14:04:17] [PASSED] PAL_M
[14:04:17] [PASSED] PAL_N
[14:04:17] [PASSED] SECAM
[14:04:17] [PASSED] MONO_525
[14:04:17] [PASSED] MONO_625
[14:04:17] =========== [PASSED] drm_test_cmdline_tv_options ===========
[14:04:17] =============== [PASSED] drm_cmdline_parser ================
[14:04:17] ========== drmm_connector_hdmi_init (20 subtests) ==========
[14:04:17] [PASSED] drm_test_connector_hdmi_init_valid
[14:04:17] [PASSED] drm_test_connector_hdmi_init_bpc_8
[14:04:17] [PASSED] drm_test_connector_hdmi_init_bpc_10
[14:04:17] [PASSED] drm_test_connector_hdmi_init_bpc_12
[14:04:17] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[14:04:17] [PASSED] drm_test_connector_hdmi_init_bpc_null
[14:04:17] [PASSED] drm_test_connector_hdmi_init_formats_empty
[14:04:17] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[14:04:17] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[14:04:17] [PASSED] supported_formats=0x9 yuv420_allowed=1
[14:04:17] [PASSED] supported_formats=0x9 yuv420_allowed=0
[14:04:17] [PASSED] supported_formats=0x3 yuv420_allowed=1
[14:04:17] [PASSED] supported_formats=0x3 yuv420_allowed=0
[14:04:17] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[14:04:17] [PASSED] drm_test_connector_hdmi_init_null_ddc
[14:04:17] [PASSED] drm_test_connector_hdmi_init_null_product
[14:04:17] [PASSED] drm_test_connector_hdmi_init_null_vendor
[14:04:17] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[14:04:17] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[14:04:17] [PASSED] drm_test_connector_hdmi_init_product_valid
[14:04:17] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[14:04:17] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[14:04:17] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[14:04:17] ========= drm_test_connector_hdmi_init_type_valid =========
[14:04:17] [PASSED] HDMI-A
[14:04:17] [PASSED] HDMI-B
[14:04:17] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[14:04:17] ======== drm_test_connector_hdmi_init_type_invalid ========
[14:04:17] [PASSED] Unknown
[14:04:17] [PASSED] VGA
[14:04:17] [PASSED] DVI-I
[14:04:17] [PASSED] DVI-D
[14:04:17] [PASSED] DVI-A
[14:04:17] [PASSED] Composite
[14:04:17] [PASSED] SVIDEO
[14:04:17] [PASSED] LVDS
[14:04:17] [PASSED] Component
[14:04:17] [PASSED] DIN
[14:04:17] [PASSED] DP
[14:04:17] [PASSED] TV
[14:04:17] [PASSED] eDP
[14:04:17] [PASSED] Virtual
[14:04:17] [PASSED] DSI
[14:04:17] [PASSED] DPI
[14:04:17] [PASSED] Writeback
[14:04:17] [PASSED] SPI
[14:04:17] [PASSED] USB
[14:04:17] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[14:04:17] ============ [PASSED] drmm_connector_hdmi_init =============
[14:04:17] ============= drmm_connector_init (3 subtests) =============
[14:04:17] [PASSED] drm_test_drmm_connector_init
[14:04:17] [PASSED] drm_test_drmm_connector_init_null_ddc
[14:04:17] ========= drm_test_drmm_connector_init_type_valid =========
[14:04:17] [PASSED] Unknown
[14:04:17] [PASSED] VGA
[14:04:17] [PASSED] DVI-I
[14:04:17] [PASSED] DVI-D
[14:04:17] [PASSED] DVI-A
[14:04:17] [PASSED] Composite
[14:04:17] [PASSED] SVIDEO
[14:04:17] [PASSED] LVDS
[14:04:17] [PASSED] Component
[14:04:17] [PASSED] DIN
[14:04:17] [PASSED] DP
[14:04:17] [PASSED] HDMI-A
[14:04:17] [PASSED] HDMI-B
[14:04:17] [PASSED] TV
[14:04:17] [PASSED] eDP
[14:04:17] [PASSED] Virtual
[14:04:17] [PASSED] DSI
[14:04:17] [PASSED] DPI
[14:04:17] [PASSED] Writeback
[14:04:17] [PASSED] SPI
[14:04:17] [PASSED] USB
[14:04:17] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[14:04:17] =============== [PASSED] drmm_connector_init ===============
[14:04:17] ========= drm_connector_dynamic_init (6 subtests) ==========
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_init
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_init_properties
[14:04:17] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[14:04:17] [PASSED] Unknown
[14:04:17] [PASSED] VGA
[14:04:17] [PASSED] DVI-I
[14:04:17] [PASSED] DVI-D
[14:04:17] [PASSED] DVI-A
[14:04:17] [PASSED] Composite
[14:04:17] [PASSED] SVIDEO
[14:04:17] [PASSED] LVDS
[14:04:17] [PASSED] Component
[14:04:17] [PASSED] DIN
[14:04:17] [PASSED] DP
[14:04:17] [PASSED] HDMI-A
[14:04:17] [PASSED] HDMI-B
[14:04:17] [PASSED] TV
[14:04:17] [PASSED] eDP
[14:04:17] [PASSED] Virtual
[14:04:17] [PASSED] DSI
[14:04:17] [PASSED] DPI
[14:04:17] [PASSED] Writeback
[14:04:17] [PASSED] SPI
[14:04:17] [PASSED] USB
[14:04:17] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[14:04:17] ======== drm_test_drm_connector_dynamic_init_name =========
[14:04:17] [PASSED] Unknown
[14:04:17] [PASSED] VGA
[14:04:17] [PASSED] DVI-I
[14:04:17] [PASSED] DVI-D
[14:04:17] [PASSED] DVI-A
[14:04:17] [PASSED] Composite
[14:04:17] [PASSED] SVIDEO
[14:04:17] [PASSED] LVDS
[14:04:17] [PASSED] Component
[14:04:17] [PASSED] DIN
[14:04:17] [PASSED] DP
[14:04:17] [PASSED] HDMI-A
[14:04:17] [PASSED] HDMI-B
[14:04:17] [PASSED] TV
[14:04:17] [PASSED] eDP
[14:04:17] [PASSED] Virtual
[14:04:17] [PASSED] DSI
[14:04:17] [PASSED] DPI
[14:04:17] [PASSED] Writeback
[14:04:17] [PASSED] SPI
[14:04:17] [PASSED] USB
[14:04:17] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[14:04:17] =========== [PASSED] drm_connector_dynamic_init ============
[14:04:17] ==== drm_connector_dynamic_register_early (4 subtests) =====
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[14:04:17] ====== [PASSED] drm_connector_dynamic_register_early =======
[14:04:17] ======= drm_connector_dynamic_register (7 subtests) ========
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[14:04:17] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[14:04:17] ========= [PASSED] drm_connector_dynamic_register ==========
[14:04:17] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[14:04:17] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[14:04:17] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[14:04:17] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[14:04:17] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[14:04:17] ========== drm_test_get_tv_mode_from_name_valid ===========
[14:04:17] [PASSED] NTSC
[14:04:17] [PASSED] NTSC-443
[14:04:17] [PASSED] NTSC-J
[14:04:17] [PASSED] PAL
[14:04:17] [PASSED] PAL-M
[14:04:17] [PASSED] PAL-N
[14:04:17] [PASSED] SECAM
[14:04:17] [PASSED] Mono
[14:04:17] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[14:04:17] [PASSED] drm_test_get_tv_mode_from_name_truncated
[14:04:17] ============ [PASSED] drm_get_tv_mode_from_name ============
[14:04:17] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[14:04:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[14:04:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[14:04:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[14:04:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[14:04:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[14:04:17] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[14:04:17] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[14:04:17] [PASSED] VIC 96
[14:04:17] [PASSED] VIC 97
[14:04:17] [PASSED] VIC 101
[14:04:17] [PASSED] VIC 102
[14:04:17] [PASSED] VIC 106
[14:04:17] [PASSED] VIC 107
[14:04:17] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[14:04:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[14:04:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[14:04:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[14:04:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[14:04:17] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[14:04:17] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[14:04:17] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[14:04:17] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[14:04:17] [PASSED] Automatic
[14:04:17] [PASSED] Full
[14:04:17] [PASSED] Limited 16:235
[14:04:17] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[14:04:17] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[14:04:17] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[14:04:17] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[14:04:17] === drm_test_drm_hdmi_connector_get_output_format_name ====
[14:04:17] [PASSED] RGB
[14:04:17] [PASSED] YUV 4:2:0
[14:04:17] [PASSED] YUV 4:2:2
[14:04:17] [PASSED] YUV 4:4:4
[14:04:17] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[14:04:17] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[14:04:17] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[14:04:17] ============= drm_damage_helper (21 subtests) ==============
[14:04:17] [PASSED] drm_test_damage_iter_no_damage
[14:04:17] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[14:04:17] [PASSED] drm_test_damage_iter_no_damage_src_moved
[14:04:17] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[14:04:17] [PASSED] drm_test_damage_iter_no_damage_not_visible
[14:04:17] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[14:04:17] [PASSED] drm_test_damage_iter_no_damage_no_fb
[14:04:17] [PASSED] drm_test_damage_iter_simple_damage
[14:04:17] [PASSED] drm_test_damage_iter_single_damage
[14:04:17] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[14:04:17] [PASSED] drm_test_damage_iter_single_damage_outside_src
[14:04:17] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[14:04:17] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[14:04:17] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[14:04:17] [PASSED] drm_test_damage_iter_single_damage_src_moved
[14:04:17] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[14:04:17] [PASSED] drm_test_damage_iter_damage
[14:04:17] [PASSED] drm_test_damage_iter_damage_one_intersect
[14:04:17] [PASSED] drm_test_damage_iter_damage_one_outside
[14:04:17] [PASSED] drm_test_damage_iter_damage_src_moved
[14:04:17] [PASSED] drm_test_damage_iter_damage_not_visible
[14:04:17] ================ [PASSED] drm_damage_helper ================
[14:04:17] ============== drm_dp_mst_helper (3 subtests) ==============
[14:04:17] ============== drm_test_dp_mst_calc_pbn_mode ==============
[14:04:17] [PASSED] Clock 154000 BPP 30 DSC disabled
[14:04:17] [PASSED] Clock 234000 BPP 30 DSC disabled
[14:04:17] [PASSED] Clock 297000 BPP 24 DSC disabled
[14:04:17] [PASSED] Clock 332880 BPP 24 DSC enabled
[14:04:17] [PASSED] Clock 324540 BPP 24 DSC enabled
[14:04:17] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[14:04:17] ============== drm_test_dp_mst_calc_pbn_div ===============
[14:04:17] [PASSED] Link rate 2000000 lane count 4
[14:04:17] [PASSED] Link rate 2000000 lane count 2
[14:04:17] [PASSED] Link rate 2000000 lane count 1
[14:04:17] [PASSED] Link rate 1350000 lane count 4
[14:04:17] [PASSED] Link rate 1350000 lane count 2
[14:04:17] [PASSED] Link rate 1350000 lane count 1
[14:04:17] [PASSED] Link rate 1000000 lane count 4
[14:04:17] [PASSED] Link rate 1000000 lane count 2
[14:04:17] [PASSED] Link rate 1000000 lane count 1
[14:04:17] [PASSED] Link rate 810000 lane count 4
[14:04:17] [PASSED] Link rate 810000 lane count 2
[14:04:17] [PASSED] Link rate 810000 lane count 1
[14:04:17] [PASSED] Link rate 540000 lane count 4
[14:04:17] [PASSED] Link rate 540000 lane count 2
[14:04:17] [PASSED] Link rate 540000 lane count 1
[14:04:17] [PASSED] Link rate 270000 lane count 4
[14:04:17] [PASSED] Link rate 270000 lane count 2
[14:04:17] [PASSED] Link rate 270000 lane count 1
[14:04:17] [PASSED] Link rate 162000 lane count 4
[14:04:17] [PASSED] Link rate 162000 lane count 2
[14:04:17] [PASSED] Link rate 162000 lane count 1
[14:04:17] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[14:04:17] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[14:04:17] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[14:04:17] [PASSED] DP_POWER_UP_PHY with port number
[14:04:17] [PASSED] DP_POWER_DOWN_PHY with port number
[14:04:17] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[14:04:17] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[14:04:17] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[14:04:17] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[14:04:17] [PASSED] DP_QUERY_PAYLOAD with port number
[14:04:17] [PASSED] DP_QUERY_PAYLOAD with VCPI
[14:04:17] [PASSED] DP_REMOTE_DPCD_READ with port number
[14:04:17] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[14:04:17] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[14:04:17] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[14:04:17] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[14:04:17] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[14:04:17] [PASSED] DP_REMOTE_I2C_READ with port number
[14:04:17] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[14:04:17] [PASSED] DP_REMOTE_I2C_READ with transactions array
[14:04:17] [PASSED] DP_REMOTE_I2C_WRITE with port number
[14:04:17] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[14:04:17] [PASSED] DP_REMOTE_I2C_WRITE with data array
[14:04:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[14:04:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[14:04:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[14:04:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[14:04:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[14:04:17] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[14:04:17] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[14:04:17] ================ [PASSED] drm_dp_mst_helper ================
[14:04:17] ================== drm_exec (7 subtests) ===================
[14:04:17] [PASSED] sanitycheck
[14:04:17] [PASSED] test_lock
[14:04:17] [PASSED] test_lock_unlock
[14:04:17] [PASSED] test_duplicates
[14:04:17] [PASSED] test_prepare
[14:04:17] [PASSED] test_prepare_array
[14:04:17] [PASSED] test_multiple_loops
[14:04:17] ==================== [PASSED] drm_exec =====================
[14:04:17] =========== drm_format_helper_test (17 subtests) ===========
[14:04:17] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[14:04:17] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[14:04:17] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[14:04:17] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[14:04:17] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[14:04:17] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[14:04:17] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[14:04:17] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[14:04:17] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[14:04:17] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[14:04:17] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[14:04:17] ============== drm_test_fb_xrgb8888_to_mono ===============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[14:04:17] ==================== drm_test_fb_swab =====================
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ================ [PASSED] drm_test_fb_swab =================
[14:04:17] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[14:04:17] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[14:04:17] [PASSED] single_pixel_source_buffer
[14:04:17] [PASSED] single_pixel_clip_rectangle
[14:04:17] [PASSED] well_known_colors
[14:04:17] [PASSED] destination_pitch
[14:04:17] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[14:04:17] ================= drm_test_fb_clip_offset =================
[14:04:17] [PASSED] pass through
[14:04:17] [PASSED] horizontal offset
[14:04:17] [PASSED] vertical offset
[14:04:17] [PASSED] horizontal and vertical offset
[14:04:17] [PASSED] horizontal offset (custom pitch)
[14:04:17] [PASSED] vertical offset (custom pitch)
[14:04:17] [PASSED] horizontal and vertical offset (custom pitch)
[14:04:17] ============= [PASSED] drm_test_fb_clip_offset =============
[14:04:17] =================== drm_test_fb_memcpy ====================
[14:04:17] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[14:04:17] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[14:04:17] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[14:04:17] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[14:04:17] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[14:04:17] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[14:04:17] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[14:04:17] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[14:04:17] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[14:04:17] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[14:04:17] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[14:04:17] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[14:04:17] =============== [PASSED] drm_test_fb_memcpy ================
[14:04:17] ============= [PASSED] drm_format_helper_test ==============
[14:04:17] ================= drm_format (18 subtests) =================
[14:04:17] [PASSED] drm_test_format_block_width_invalid
[14:04:17] [PASSED] drm_test_format_block_width_one_plane
[14:04:17] [PASSED] drm_test_format_block_width_two_plane
[14:04:17] [PASSED] drm_test_format_block_width_three_plane
[14:04:17] [PASSED] drm_test_format_block_width_tiled
[14:04:17] [PASSED] drm_test_format_block_height_invalid
[14:04:17] [PASSED] drm_test_format_block_height_one_plane
[14:04:17] [PASSED] drm_test_format_block_height_two_plane
[14:04:17] [PASSED] drm_test_format_block_height_three_plane
[14:04:17] [PASSED] drm_test_format_block_height_tiled
[14:04:17] [PASSED] drm_test_format_min_pitch_invalid
[14:04:17] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[14:04:17] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[14:04:17] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[14:04:17] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[14:04:17] [PASSED] drm_test_format_min_pitch_two_plane
[14:04:17] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[14:04:17] [PASSED] drm_test_format_min_pitch_tiled
[14:04:17] =================== [PASSED] drm_format ====================
[14:04:17] ============== drm_framebuffer (10 subtests) ===============
[14:04:17] ========== drm_test_framebuffer_check_src_coords ==========
[14:04:17] [PASSED] Success: source fits into fb
[14:04:17] [PASSED] Fail: overflowing fb with x-axis coordinate
[14:04:17] [PASSED] Fail: overflowing fb with y-axis coordinate
[14:04:17] [PASSED] Fail: overflowing fb with source width
[14:04:17] [PASSED] Fail: overflowing fb with source height
[14:04:17] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[14:04:17] [PASSED] drm_test_framebuffer_cleanup
[14:04:17] =============== drm_test_framebuffer_create ===============
[14:04:17] [PASSED] ABGR8888 normal sizes
[14:04:17] [PASSED] ABGR8888 max sizes
[14:04:17] [PASSED] ABGR8888 pitch greater than min required
[14:04:17] [PASSED] ABGR8888 pitch less than min required
[14:04:17] [PASSED] ABGR8888 Invalid width
[14:04:17] [PASSED] ABGR8888 Invalid buffer handle
[14:04:17] [PASSED] No pixel format
[14:04:17] [PASSED] ABGR8888 Width 0
[14:04:17] [PASSED] ABGR8888 Height 0
[14:04:17] [PASSED] ABGR8888 Out of bound height * pitch combination
[14:04:17] [PASSED] ABGR8888 Large buffer offset
[14:04:17] [PASSED] ABGR8888 Buffer offset for inexistent plane
[14:04:17] [PASSED] ABGR8888 Invalid flag
[14:04:17] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[14:04:17] [PASSED] ABGR8888 Valid buffer modifier
[14:04:17] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[14:04:17] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[14:04:17] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[14:04:17] [PASSED] NV12 Normal sizes
[14:04:17] [PASSED] NV12 Max sizes
[14:04:17] [PASSED] NV12 Invalid pitch
[14:04:17] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[14:04:17] [PASSED] NV12 different modifier per-plane
[14:04:17] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[14:04:17] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[14:04:17] [PASSED] NV12 Modifier for inexistent plane
[14:04:17] [PASSED] NV12 Handle for inexistent plane
[14:04:17] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[14:04:17] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[14:04:17] [PASSED] YVU420 Normal sizes
[14:04:17] [PASSED] YVU420 Max sizes
[14:04:17] [PASSED] YVU420 Invalid pitch
[14:04:17] [PASSED] YVU420 Different pitches
[14:04:17] [PASSED] YVU420 Different buffer offsets/pitches
[14:04:17] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[14:04:17] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[14:04:17] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[14:04:17] [PASSED] YVU420 Valid modifier
[14:04:17] [PASSED] YVU420 Different modifiers per plane
[14:04:17] [PASSED] YVU420 Modifier for inexistent plane
[14:04:17] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[14:04:17] [PASSED] X0L2 Normal sizes
[14:04:17] [PASSED] X0L2 Max sizes
[14:04:17] [PASSED] X0L2 Invalid pitch
[14:04:17] [PASSED] X0L2 Pitch greater than minimum required
[14:04:17] [PASSED] X0L2 Handle for inexistent plane
[14:04:17] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[14:04:17] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[14:04:17] [PASSED] X0L2 Valid modifier
[14:04:17] [PASSED] X0L2 Modifier for inexistent plane
[14:04:17] =========== [PASSED] drm_test_framebuffer_create ===========
[14:04:17] [PASSED] drm_test_framebuffer_free
[14:04:17] [PASSED] drm_test_framebuffer_init
[14:04:17] [PASSED] drm_test_framebuffer_init_bad_format
[14:04:17] [PASSED] drm_test_framebuffer_init_dev_mismatch
[14:04:17] [PASSED] drm_test_framebuffer_lookup
[14:04:17] [PASSED] drm_test_framebuffer_lookup_inexistent
[14:04:17] [PASSED] drm_test_framebuffer_modifiers_not_supported
[14:04:17] ================= [PASSED] drm_framebuffer =================
[14:04:17] ================ drm_gem_shmem (8 subtests) ================
[14:04:17] [PASSED] drm_gem_shmem_test_obj_create
[14:04:17] [PASSED] drm_gem_shmem_test_obj_create_private
[14:04:17] [PASSED] drm_gem_shmem_test_pin_pages
[14:04:17] [PASSED] drm_gem_shmem_test_vmap
[14:04:17] [PASSED] drm_gem_shmem_test_get_pages_sgt
[14:04:17] [PASSED] drm_gem_shmem_test_get_sg_table
[14:04:17] [PASSED] drm_gem_shmem_test_madvise
[14:04:17] [PASSED] drm_gem_shmem_test_purge
[14:04:17] ================== [PASSED] drm_gem_shmem ==================
[14:04:17] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[14:04:17] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[14:04:17] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[14:04:17] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[14:04:17] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[14:04:17] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[14:04:17] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[14:04:17] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[14:04:17] [PASSED] Automatic
[14:04:17] [PASSED] Full
[14:04:17] [PASSED] Limited 16:235
[14:04:17] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[14:04:17] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[14:04:17] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[14:04:17] [PASSED] drm_test_check_disable_connector
[14:04:17] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[14:04:17] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[14:04:17] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[14:04:17] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[14:04:17] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[14:04:17] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[14:04:17] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[14:04:17] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[14:04:17] [PASSED] drm_test_check_output_bpc_dvi
[14:04:17] [PASSED] drm_test_check_output_bpc_format_vic_1
[14:04:17] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[14:04:17] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[14:04:17] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[14:04:17] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[14:04:17] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[14:04:17] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[14:04:17] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[14:04:17] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[14:04:17] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[14:04:17] [PASSED] drm_test_check_broadcast_rgb_value
[14:04:17] [PASSED] drm_test_check_bpc_8_value
[14:04:17] [PASSED] drm_test_check_bpc_10_value
[14:04:17] [PASSED] drm_test_check_bpc_12_value
[14:04:17] [PASSED] drm_test_check_format_value
[14:04:17] [PASSED] drm_test_check_tmds_char_value
[14:04:17] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[14:04:17] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[14:04:17] [PASSED] drm_test_check_mode_valid
[14:04:17] [PASSED] drm_test_check_mode_valid_reject
[14:04:17] [PASSED] drm_test_check_mode_valid_reject_rate
[14:04:17] [PASSED] drm_test_check_mode_valid_reject_max_clock
[14:04:17] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[14:04:17] ================= drm_managed (2 subtests) =================
[14:04:17] [PASSED] drm_test_managed_release_action
[14:04:17] [PASSED] drm_test_managed_run_action
[14:04:17] =================== [PASSED] drm_managed ===================
[14:04:17] =================== drm_mm (6 subtests) ====================
[14:04:17] [PASSED] drm_test_mm_init
[14:04:17] [PASSED] drm_test_mm_debug
[14:04:17] [PASSED] drm_test_mm_align32
[14:04:17] [PASSED] drm_test_mm_align64
[14:04:17] [PASSED] drm_test_mm_lowest
[14:04:17] [PASSED] drm_test_mm_highest
[14:04:17] ===================== [PASSED] drm_mm ======================
[14:04:17] ============= drm_modes_analog_tv (5 subtests) =============
[14:04:17] [PASSED] drm_test_modes_analog_tv_mono_576i
[14:04:17] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[14:04:17] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[14:04:17] [PASSED] drm_test_modes_analog_tv_pal_576i
[14:04:17] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[14:04:17] =============== [PASSED] drm_modes_analog_tv ===============
[14:04:17] ============== drm_plane_helper (2 subtests) ===============
[14:04:17] =============== drm_test_check_plane_state ================
[14:04:17] [PASSED] clipping_simple
[14:04:17] [PASSED] clipping_rotate_reflect
[14:04:17] [PASSED] positioning_simple
[14:04:17] [PASSED] upscaling
[14:04:17] [PASSED] downscaling
[14:04:17] [PASSED] rounding1
[14:04:17] [PASSED] rounding2
[14:04:17] [PASSED] rounding3
[14:04:17] [PASSED] rounding4
[14:04:17] =========== [PASSED] drm_test_check_plane_state ============
[14:04:17] =========== drm_test_check_invalid_plane_state ============
[14:04:17] [PASSED] positioning_invalid
[14:04:17] [PASSED] upscaling_invalid
[14:04:17] [PASSED] downscaling_invalid
[14:04:17] ======= [PASSED] drm_test_check_invalid_plane_state ========
[14:04:17] ================ [PASSED] drm_plane_helper =================
[14:04:17] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[14:04:17] ====== drm_test_connector_helper_tv_get_modes_check =======
[14:04:17] [PASSED] None
[14:04:17] [PASSED] PAL
[14:04:17] [PASSED] NTSC
[14:04:17] [PASSED] Both, NTSC Default
[14:04:17] [PASSED] Both, PAL Default
[14:04:17] [PASSED] Both, NTSC Default, with PAL on command-line
[14:04:17] [PASSED] Both, PAL Default, with NTSC on command-line
[14:04:17] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[14:04:17] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[14:04:17] ================== drm_rect (9 subtests) ===================
[14:04:17] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[14:04:17] [PASSED] drm_test_rect_clip_scaled_not_clipped
[14:04:17] [PASSED] drm_test_rect_clip_scaled_clipped
[14:04:17] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[14:04:17] ================= drm_test_rect_intersect =================
[14:04:17] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[14:04:17] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[14:04:17] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[14:04:17] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[14:04:17] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[14:04:17] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[14:04:17] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[14:04:17] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[14:04:17] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[14:04:17] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[14:04:17] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[14:04:17] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[14:04:17] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[14:04:17] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[14:04:17] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[14:04:17] ============= [PASSED] drm_test_rect_intersect =============
[14:04:17] ================ drm_test_rect_calc_hscale ================
[14:04:17] [PASSED] normal use
[14:04:17] [PASSED] out of max range
[14:04:17] [PASSED] out of min range
[14:04:17] [PASSED] zero dst
[14:04:17] [PASSED] negative src
[14:04:17] [PASSED] negative dst
[14:04:17] ============ [PASSED] drm_test_rect_calc_hscale ============
[14:04:17] ================ drm_test_rect_calc_vscale ================
[14:04:17] [PASSED] normal use
[14:04:17] [PASSED] out of max range
[14:04:17] [PASSED] out of min range
[14:04:17] [PASSED] zero dst
[14:04:17] [PASSED] negative src
[14:04:17] [PASSED] negative dst
[14:04:17] ============ [PASSED] drm_test_rect_calc_vscale ============
[14:04:17] ================== drm_test_rect_rotate ===================
[14:04:17] [PASSED] reflect-x
[14:04:17] [PASSED] reflect-y
[14:04:17] [PASSED] rotate-0
[14:04:17] [PASSED] rotate-90
[14:04:17] [PASSED] rotate-180
[14:04:17] [PASSED] rotate-270
[14:04:17] ============== [PASSED] drm_test_rect_rotate ===============
[14:04:17] ================ drm_test_rect_rotate_inv =================
[14:04:17] [PASSED] reflect-x
[14:04:17] [PASSED] reflect-y
[14:04:17] [PASSED] rotate-0
[14:04:17] [PASSED] rotate-90
[14:04:17] [PASSED] rotate-180
[14:04:17] [PASSED] rotate-270
[14:04:17] ============ [PASSED] drm_test_rect_rotate_inv =============
[14:04:17] ==================== [PASSED] drm_rect =====================
[14:04:17] ============ drm_sysfb_modeset_test (1 subtest) ============
[14:04:17] ============ drm_test_sysfb_build_fourcc_list =============
[14:04:17] [PASSED] no native formats
[14:04:17] [PASSED] XRGB8888 as native format
[14:04:17] [PASSED] remove duplicates
[14:04:17] [PASSED] convert alpha formats
[14:04:17] [PASSED] random formats
[14:04:17] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[14:04:17] ============= [PASSED] drm_sysfb_modeset_test ==============
[14:04:17] ================== drm_fixp (2 subtests) ===================
[14:04:17] [PASSED] drm_test_int2fixp
[14:04:17] [PASSED] drm_test_sm2fixp
[14:04:17] ==================== [PASSED] drm_fixp =====================
[14:04:17] ============================================================
[14:04:17] Testing complete. Ran 624 tests: passed: 624
[14:04:17] Elapsed time: 52.286s total, 2.555s configuring, 49.005s building, 0.663s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[14:04:17] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:04:20] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[14:04:36] Starting KUnit Kernel (1/1)...
[14:04:36] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:04:37] ================= ttm_device (5 subtests) ==================
[14:04:37] [PASSED] ttm_device_init_basic
[14:04:37] [PASSED] ttm_device_init_multiple
[14:04:37] [PASSED] ttm_device_fini_basic
[14:04:37] [PASSED] ttm_device_init_no_vma_man
[14:04:37] ================== ttm_device_init_pools ==================
[14:04:37] [PASSED] No DMA allocations, no DMA32 required
[14:04:37] [PASSED] DMA allocations, DMA32 required
[14:04:37] [PASSED] No DMA allocations, DMA32 required
[14:04:37] [PASSED] DMA allocations, no DMA32 required
[14:04:37] ============== [PASSED] ttm_device_init_pools ==============
[14:04:37] =================== [PASSED] ttm_device ====================
[14:04:37] ================== ttm_pool (8 subtests) ===================
[14:04:37] ================== ttm_pool_alloc_basic ===================
[14:04:37] [PASSED] One page
[14:04:37] [PASSED] More than one page
[14:04:37] [PASSED] Above the allocation limit
[14:04:37] [PASSED] One page, with coherent DMA mappings enabled
[14:04:37] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[14:04:37] ============== [PASSED] ttm_pool_alloc_basic ===============
[14:04:37] ============== ttm_pool_alloc_basic_dma_addr ==============
[14:04:37] [PASSED] One page
[14:04:37] [PASSED] More than one page
[14:04:37] [PASSED] Above the allocation limit
[14:04:37] [PASSED] One page, with coherent DMA mappings enabled
[14:04:37] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[14:04:37] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[14:04:37] [PASSED] ttm_pool_alloc_order_caching_match
[14:04:37] [PASSED] ttm_pool_alloc_caching_mismatch
[14:04:37] [PASSED] ttm_pool_alloc_order_mismatch
[14:04:37] [PASSED] ttm_pool_free_dma_alloc
[14:04:37] [PASSED] ttm_pool_free_no_dma_alloc
[14:04:37] [PASSED] ttm_pool_fini_basic
[14:04:37] ==================== [PASSED] ttm_pool =====================
[14:04:37] ================ ttm_resource (8 subtests) =================
[14:04:37] ================= ttm_resource_init_basic =================
[14:04:37] [PASSED] Init resource in TTM_PL_SYSTEM
[14:04:37] [PASSED] Init resource in TTM_PL_VRAM
[14:04:37] [PASSED] Init resource in a private placement
[14:04:37] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[14:04:37] ============= [PASSED] ttm_resource_init_basic =============
[14:04:37] [PASSED] ttm_resource_init_pinned
[14:04:37] [PASSED] ttm_resource_fini_basic
[14:04:37] [PASSED] ttm_resource_manager_init_basic
[14:04:37] [PASSED] ttm_resource_manager_usage_basic
[14:04:37] [PASSED] ttm_resource_manager_set_used_basic
[14:04:37] [PASSED] ttm_sys_man_alloc_basic
[14:04:37] [PASSED] ttm_sys_man_free_basic
[14:04:37] ================== [PASSED] ttm_resource ===================
[14:04:37] =================== ttm_tt (15 subtests) ===================
[14:04:37] ==================== ttm_tt_init_basic ====================
[14:04:37] [PASSED] Page-aligned size
[14:04:37] [PASSED] Extra pages requested
[14:04:37] ================ [PASSED] ttm_tt_init_basic ================
[14:04:37] [PASSED] ttm_tt_init_misaligned
[14:04:37] [PASSED] ttm_tt_fini_basic
[14:04:37] [PASSED] ttm_tt_fini_sg
[14:04:37] [PASSED] ttm_tt_fini_shmem
[14:04:37] [PASSED] ttm_tt_create_basic
[14:04:37] [PASSED] ttm_tt_create_invalid_bo_type
[14:04:37] [PASSED] ttm_tt_create_ttm_exists
[14:04:37] [PASSED] ttm_tt_create_failed
[14:04:37] [PASSED] ttm_tt_destroy_basic
[14:04:37] [PASSED] ttm_tt_populate_null_ttm
[14:04:37] [PASSED] ttm_tt_populate_populated_ttm
[14:04:37] [PASSED] ttm_tt_unpopulate_basic
[14:04:37] [PASSED] ttm_tt_unpopulate_empty_ttm
[14:04:37] [PASSED] ttm_tt_swapin_basic
[14:04:37] ===================== [PASSED] ttm_tt ======================
[14:04:37] =================== ttm_bo (14 subtests) ===================
[14:04:37] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[14:04:37] [PASSED] Cannot be interrupted and sleeps
[14:04:37] [PASSED] Cannot be interrupted, locks straight away
[14:04:37] [PASSED] Can be interrupted, sleeps
[14:04:37] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[14:04:37] [PASSED] ttm_bo_reserve_locked_no_sleep
[14:04:37] [PASSED] ttm_bo_reserve_no_wait_ticket
[14:04:37] [PASSED] ttm_bo_reserve_double_resv
[14:04:37] [PASSED] ttm_bo_reserve_interrupted
[14:04:37] [PASSED] ttm_bo_reserve_deadlock
[14:04:37] [PASSED] ttm_bo_unreserve_basic
[14:04:37] [PASSED] ttm_bo_unreserve_pinned
[14:04:37] [PASSED] ttm_bo_unreserve_bulk
[14:04:37] [PASSED] ttm_bo_fini_basic
[14:04:37] [PASSED] ttm_bo_fini_shared_resv
[14:04:37] [PASSED] ttm_bo_pin_basic
[14:04:37] [PASSED] ttm_bo_pin_unpin_resource
[14:04:37] [PASSED] ttm_bo_multiple_pin_one_unpin
[14:04:37] ===================== [PASSED] ttm_bo ======================
[14:04:37] ============== ttm_bo_validate (21 subtests) ===============
[14:04:37] ============== ttm_bo_init_reserved_sys_man ===============
[14:04:37] [PASSED] Buffer object for userspace
[14:04:37] [PASSED] Kernel buffer object
[14:04:37] [PASSED] Shared buffer object
[14:04:37] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[14:04:37] ============== ttm_bo_init_reserved_mock_man ==============
[14:04:37] [PASSED] Buffer object for userspace
[14:04:37] [PASSED] Kernel buffer object
[14:04:37] [PASSED] Shared buffer object
[14:04:37] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[14:04:37] [PASSED] ttm_bo_init_reserved_resv
[14:04:37] ================== ttm_bo_validate_basic ==================
[14:04:37] [PASSED] Buffer object for userspace
[14:04:37] [PASSED] Kernel buffer object
[14:04:37] [PASSED] Shared buffer object
[14:04:37] ============== [PASSED] ttm_bo_validate_basic ==============
[14:04:37] [PASSED] ttm_bo_validate_invalid_placement
[14:04:37] ============= ttm_bo_validate_same_placement ==============
[14:04:37] [PASSED] System manager
[14:04:37] [PASSED] VRAM manager
[14:04:37] ========= [PASSED] ttm_bo_validate_same_placement ==========
[14:04:37] [PASSED] ttm_bo_validate_failed_alloc
[14:04:37] [PASSED] ttm_bo_validate_pinned
[14:04:37] [PASSED] ttm_bo_validate_busy_placement
[14:04:37] ================ ttm_bo_validate_multihop =================
[14:04:37] [PASSED] Buffer object for userspace
[14:04:37] [PASSED] Kernel buffer object
[14:04:37] [PASSED] Shared buffer object
[14:04:37] ============ [PASSED] ttm_bo_validate_multihop =============
[14:04:37] ========== ttm_bo_validate_no_placement_signaled ==========
[14:04:37] [PASSED] Buffer object in system domain, no page vector
[14:04:37] [PASSED] Buffer object in system domain with an existing page vector
[14:04:37] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[14:04:37] ======== ttm_bo_validate_no_placement_not_signaled ========
[14:04:37] [PASSED] Buffer object for userspace
[14:04:37] [PASSED] Kernel buffer object
[14:04:37] [PASSED] Shared buffer object
[14:04:37] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[14:04:37] [PASSED] ttm_bo_validate_move_fence_signaled
[14:04:37] ========= ttm_bo_validate_move_fence_not_signaled =========
[14:04:37] [PASSED] Waits for GPU
[14:04:37] [PASSED] Tries to lock straight away
[14:04:37] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[14:04:37] [PASSED] ttm_bo_validate_happy_evict
[14:04:37] [PASSED] ttm_bo_validate_all_pinned_evict
[14:04:37] [PASSED] ttm_bo_validate_allowed_only_evict
[14:04:37] [PASSED] ttm_bo_validate_deleted_evict
[14:04:37] [PASSED] ttm_bo_validate_busy_domain_evict
[14:04:37] [PASSED] ttm_bo_validate_evict_gutting
[14:04:37] [PASSED] ttm_bo_validate_recrusive_evict
[14:04:37] ================= [PASSED] ttm_bo_validate =================
[14:04:37] ============================================================
[14:04:37] Testing complete. Ran 101 tests: passed: 101
[14:04:37] Elapsed time: 19.680s total, 2.939s configuring, 16.471s building, 0.223s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply	[flat|nested] 30+ messages in thread

* ✓ Xe.CI.BAT: success for Intel Xe GPU Debug Support (eudebug) v6
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (21 preceding siblings ...)
2025-12-02 14:04 ` ✓ CI.KUnit: success " Patchwork
@ 2025-12-02 15:34 ` Patchwork
2025-12-02 18:30 ` ✗ Xe.CI.Full: failure " Patchwork
` (2 subsequent siblings)
25 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2025-12-02 15:34 UTC (permalink / raw)
To: Mika Kuoppala; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 6393 bytes --]
== Series Details ==
Series: Intel Xe GPU Debug Support (eudebug) v6
URL : https://patchwork.freedesktop.org/series/158380/
State : success
== Summary ==
CI Bug Log - changes from xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867_BAT -> xe-pw-158380v1_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (11 -> 11)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-158380v1_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
- bat-dg2-oem2: NOTRUN -> [SKIP][1] ([Intel XE#623])
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
* igt@kms_dsc@dsc-basic:
- bat-dg2-oem2: NOTRUN -> [SKIP][2] ([Intel XE#455])
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@kms_dsc@dsc-basic.html
* igt@kms_psr@psr-cursor-plane-move:
- bat-dg2-oem2: NOTRUN -> [SKIP][3] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +2 other tests skip
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@kms_psr@psr-cursor-plane-move.html
* igt@sriov_basic@enable-vfs-autoprobe-off:
- bat-dg2-oem2: NOTRUN -> [SKIP][4] ([Intel XE#1091] / [Intel XE#2849]) +1 other test skip
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@sriov_basic@enable-vfs-autoprobe-off.html
* igt@xe_exec_fault_mode@twice-bindexecqueue-userptr:
- bat-dg2-oem2: NOTRUN -> [SKIP][5] ([Intel XE#288]) +32 other tests skip
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_exec_fault_mode@twice-bindexecqueue-userptr.html
* igt@xe_exec_queue_property@invalid-property:
- bat-dg2-oem2: NOTRUN -> [FAIL][6] ([Intel XE#6741])
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_exec_queue_property@invalid-property.html
* igt@xe_huc_copy@huc_copy:
- bat-dg2-oem2: NOTRUN -> [SKIP][7] ([Intel XE#255])
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_huc_copy@huc_copy.html
* igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit:
- bat-dg2-oem2: NOTRUN -> [SKIP][8] ([Intel XE#2229])
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html
* igt@xe_pat@pat-index-xe2:
- bat-dg2-oem2: NOTRUN -> [SKIP][9] ([Intel XE#977])
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_pat@pat-index-xe2.html
* igt@xe_pat@pat-index-xehpc:
- bat-dg2-oem2: NOTRUN -> [SKIP][10] ([Intel XE#2838] / [Intel XE#979])
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_pat@pat-index-xehpc.html
* igt@xe_pat@pat-index-xelpg:
- bat-dg2-oem2: NOTRUN -> [SKIP][11] ([Intel XE#979])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_pat@pat-index-xelpg.html
* igt@xe_sriov_flr@flr-vf1-clear:
- bat-dg2-oem2: NOTRUN -> [SKIP][12] ([Intel XE#3342])
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_sriov_flr@flr-vf1-clear.html
* igt@xe_waitfence@reltime:
- bat-dg2-oem2: NOTRUN -> [FAIL][13] ([Intel XE#6520])
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_waitfence@reltime.html
#### Possible fixes ####
* igt@xe_module_load@load:
- bat-dg2-oem2: [ABORT][14] ([Intel XE#6610]) -> [PASS][15]
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/bat-dg2-oem2/igt@xe_module_load@load.html
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-dg2-oem2/igt@xe_module_load@load.html
#### Warnings ####
* igt@xe_eudebug@multigpu-basic-client:
- bat-bmg-3: [SKIP][16] ([Intel XE#4837]) -> [FAIL][17] ([Intel XE#6560]) +1 other test fail
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/bat-bmg-3/igt@xe_eudebug@multigpu-basic-client.html
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/bat-bmg-3/igt@xe_eudebug@multigpu-basic-client.html
[Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
[Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
[Intel XE#2838]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2838
[Intel XE#2849]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2849
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
[Intel XE#6520]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6520
[Intel XE#6560]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6560
[Intel XE#6610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6610
[Intel XE#6741]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6741
[Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
[Intel XE#977]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/977
[Intel XE#979]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/979
Build changes
-------------
* IGT: IGT_8647 -> IGT_8648
* Linux: xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867 -> xe-pw-158380v1
IGT_8647: 8647
IGT_8648: 5f69b55422b228beba08cf87b66680d249c865c9 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867: 4ffeb1fd1362e2148a7ada498cbaef7b1de27867
xe-pw-158380v1: 158380v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/index.html
[-- Attachment #2: Type: text/html, Size: 7343 bytes --]
^ permalink raw reply	[flat|nested] 30+ messages in thread

* ✗ Xe.CI.Full: failure for Intel Xe GPU Debug Support (eudebug) v6
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (22 preceding siblings ...)
2025-12-02 15:34 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-12-02 18:30 ` Patchwork
2025-12-03 9:13 ` ✗ CI.checkpatch: warning for Intel Xe GPU Debug Support (eudebug) v6 (rev2) Patchwork
2025-12-03 9:15 ` ✓ CI.KUnit: success " Patchwork
25 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2025-12-02 18:30 UTC (permalink / raw)
To: Mika Kuoppala; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 102848 bytes --]
== Series Details ==
Series: Intel Xe GPU Debug Support (eudebug) v6
URL : https://patchwork.freedesktop.org/series/158380/
State : failure
== Summary ==
CI Bug Log - changes from xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867_FULL -> xe-pw-158380v1_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-158380v1_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-158380v1_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-158380v1_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@xe_eudebug@basic-close:
- shard-dg2-set2: NOTRUN -> [FAIL][1] +34 other tests fail
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@xe_eudebug@basic-close.html
* igt@xe_eudebug@basic-connect:
- shard-lnl: NOTRUN -> [FAIL][2] +33 other tests fail
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@xe_eudebug@basic-connect.html
* igt@xe_eudebug@multigpu-basic-client:
- shard-dg2-set2: NOTRUN -> [SKIP][3]
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@xe_eudebug@multigpu-basic-client.html
* igt@xe_eudebug_online@breakpoint-many-sessions-single-tile:
- shard-adlp: NOTRUN -> [SKIP][4] +3 other tests skip
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@xe_eudebug_online@breakpoint-many-sessions-single-tile.html
* igt@xe_eudebug_online@interrupt-reconnect@drm_xe_engine_class_render0:
- shard-bmg: NOTRUN -> [FAIL][5] +35 other tests fail
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@xe_eudebug_online@interrupt-reconnect@drm_xe_engine_class_render0.html
* igt@xe_eudebug_online@single-step:
- shard-adlp: NOTRUN -> [FAIL][6] +19 other tests fail
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@xe_eudebug_online@single-step.html
* igt@xe_exec_reset@virtual-cat-error:
- shard-bmg: [PASS][7] -> [ABORT][8]
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-4/igt@xe_exec_reset@virtual-cat-error.html
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-4/igt@xe_exec_reset@virtual-cat-error.html
* igt@xe_wedged@wedged-at-any-timeout:
- shard-bmg: NOTRUN -> [ABORT][9] +2 other tests abort
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-5/igt@xe_wedged@wedged-at-any-timeout.html
* igt@xe_wedged@wedged-mode-toggle:
- shard-lnl: NOTRUN -> [ABORT][10] +2 other tests abort
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@xe_wedged@wedged-mode-toggle.html
#### Warnings ####
* igt@xe_eudebug@basic-vm-access-parameters-userptr-faultable:
- shard-adlp: [SKIP][11] ([Intel XE#4837] / [Intel XE#5565]) -> [SKIP][12]
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-3/igt@xe_eudebug@basic-vm-access-parameters-userptr-faultable.html
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_eudebug@basic-vm-access-parameters-userptr-faultable.html
- shard-dg2-set2: [SKIP][13] ([Intel XE#4837]) -> [SKIP][14]
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-dg2-435/igt@xe_eudebug@basic-vm-access-parameters-userptr-faultable.html
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@xe_eudebug@basic-vm-access-parameters-userptr-faultable.html
* igt@xe_eudebug@basic-vm-bind-metadata-discovery:
- shard-adlp: [SKIP][15] ([Intel XE#4837] / [Intel XE#5565]) -> [FAIL][16] +11 other tests fail
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-1/igt@xe_eudebug@basic-vm-bind-metadata-discovery.html
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@xe_eudebug@basic-vm-bind-metadata-discovery.html
* igt@xe_eudebug@basic-vm-bind-ufence-delay-ack:
- shard-dg2-set2: [SKIP][17] ([Intel XE#4837]) -> [FAIL][18] +6 other tests fail
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-dg2-434/igt@xe_eudebug@basic-vm-bind-ufence-delay-ack.html
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-434/igt@xe_eudebug@basic-vm-bind-ufence-delay-ack.html
- shard-bmg: [SKIP][19] ([Intel XE#6703]) -> [FAIL][20]
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@xe_eudebug@basic-vm-bind-ufence-delay-ack.html
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_eudebug@basic-vm-bind-ufence-delay-ack.html
* igt@xe_eudebug@discovery-empty-clients:
- shard-lnl: [SKIP][21] ([Intel XE#4837]) -> [FAIL][22] +5 other tests fail
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-lnl-2/igt@xe_eudebug@discovery-empty-clients.html
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@xe_eudebug@discovery-empty-clients.html
* igt@xe_eudebug_online@set-breakpoint:
- shard-bmg: [SKIP][23] ([Intel XE#4837]) -> [FAIL][24] +8 other tests fail
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-3/igt@xe_eudebug_online@set-breakpoint.html
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@xe_eudebug_online@set-breakpoint.html
Known issues
------------
Here are the changes found in xe-pw-158380v1_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@intel_hwmon@hwmon-write:
- shard-lnl: NOTRUN -> [SKIP][25] ([Intel XE#1125])
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@intel_hwmon@hwmon-write.html
* igt@kms_addfb_basic@addfb25-y-tiled-small-legacy:
- shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#2233])
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
- shard-dg2-set2: NOTRUN -> [SKIP][27] ([Intel XE#623])
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
- shard-lnl: NOTRUN -> [SKIP][28] ([Intel XE#1466])
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_addfb_basic@addfb25-y-tiled-small-legacy.html
* igt@kms_async_flips@async-flip-with-page-flip-events-tiled-atomic@pipe-a-edp-1-x:
- shard-lnl: NOTRUN -> [FAIL][29] ([Intel XE#6676]) +9 other tests fail
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_async_flips@async-flip-with-page-flip-events-tiled-atomic@pipe-a-edp-1-x.html
* igt@kms_async_flips@test-cursor-atomic:
- shard-lnl: NOTRUN -> [SKIP][30] ([Intel XE#664])
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_async_flips@test-cursor-atomic.html
* igt@kms_async_flips@test-time-stamp-atomic:
- shard-lnl: NOTRUN -> [FAIL][31] ([Intel XE#6677]) +2 other tests fail
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_async_flips@test-time-stamp-atomic.html
* igt@kms_big_fb@4-tiled-8bpp-rotate-270:
- shard-dg2-set2: NOTRUN -> [SKIP][32] ([Intel XE#316]) +5 other tests skip
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-463/igt@kms_big_fb@4-tiled-8bpp-rotate-270.html
- shard-lnl: NOTRUN -> [SKIP][33] ([Intel XE#1407]) +10 other tests skip
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@kms_big_fb@4-tiled-8bpp-rotate-270.html
* igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0:
- shard-adlp: NOTRUN -> [SKIP][34] ([Intel XE#1124]) +12 other tests skip
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html
* igt@kms_big_fb@linear-64bpp-rotate-90:
- shard-bmg: NOTRUN -> [SKIP][35] ([Intel XE#2327]) +5 other tests skip
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-5/igt@kms_big_fb@linear-64bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-32bpp-rotate-0:
- shard-lnl: NOTRUN -> [SKIP][36] ([Intel XE#1124]) +15 other tests skip
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_big_fb@y-tiled-32bpp-rotate-0.html
* igt@kms_big_fb@y-tiled-64bpp-rotate-0:
- shard-adlp: NOTRUN -> [FAIL][37] ([Intel XE#1874])
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_big_fb@y-tiled-64bpp-rotate-0.html
* igt@kms_big_fb@y-tiled-64bpp-rotate-90:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#1124]) +11 other tests skip
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html
- shard-adlp: NOTRUN -> [SKIP][39] ([Intel XE#316]) +3 other tests skip
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-addfb:
- shard-lnl: NOTRUN -> [SKIP][40] ([Intel XE#1467])
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@kms_big_fb@y-tiled-addfb.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-async-flip:
- shard-adlp: NOTRUN -> [FAIL][41] ([Intel XE#1231] / [Intel XE#6699])
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-async-flip.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip:
- shard-dg2-set2: NOTRUN -> [SKIP][42] ([Intel XE#1124]) +14 other tests skip
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-434/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
* igt@kms_big_fb@yf-tiled-addfb-size-overflow:
- shard-lnl: NOTRUN -> [SKIP][43] ([Intel XE#1428])
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html
* igt@kms_bw@connected-linear-tiling-1-displays-3840x2160p:
- shard-adlp: NOTRUN -> [SKIP][44] ([Intel XE#367]) +2 other tests skip
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_bw@connected-linear-tiling-1-displays-3840x2160p.html
* igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p:
- shard-lnl: NOTRUN -> [SKIP][45] ([Intel XE#2191]) +2 other tests skip
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
* igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p:
- shard-adlp: NOTRUN -> [SKIP][46] ([Intel XE#2191]) +3 other tests skip
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
* igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p:
- shard-bmg: NOTRUN -> [SKIP][47] ([Intel XE#2314] / [Intel XE#2894]) +1 other test skip
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p.html
* igt@kms_bw@connected-linear-tiling-4-displays-1920x1080p:
- shard-dg2-set2: NOTRUN -> [SKIP][48] ([Intel XE#2191]) +2 other tests skip
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@kms_bw@connected-linear-tiling-4-displays-1920x1080p.html
* igt@kms_bw@linear-tiling-1-displays-1920x1080p:
- shard-dg2-set2: NOTRUN -> [SKIP][49] ([Intel XE#367]) +1 other test skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
* igt@kms_bw@linear-tiling-1-displays-3840x2160p:
- shard-bmg: NOTRUN -> [SKIP][50] ([Intel XE#367]) +1 other test skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_bw@linear-tiling-1-displays-3840x2160p.html
* igt@kms_bw@linear-tiling-4-displays-3840x2160p:
- shard-lnl: NOTRUN -> [SKIP][51] ([Intel XE#1512])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_bw@linear-tiling-4-displays-3840x2160p.html
* igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs@pipe-d-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][52] ([Intel XE#455] / [Intel XE#787]) +55 other tests skip
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs@pipe-d-hdmi-a-1.html
* igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs@pipe-a-edp-1:
- shard-lnl: NOTRUN -> [SKIP][53] ([Intel XE#2669]) +3 other tests skip
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs@pipe-a-edp-1.html
* igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][54] ([Intel XE#2907]) +1 other test skip
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs.html
* igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs@pipe-c-dp-2:
- shard-bmg: NOTRUN -> [SKIP][55] ([Intel XE#2652] / [Intel XE#787]) +17 other tests skip
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-4/igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs@pipe-c-dp-2.html
* igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs-cc:
- shard-lnl: NOTRUN -> [SKIP][56] ([Intel XE#2887]) +20 other tests skip
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_ccs@bad-rotation-90-4-tiled-mtl-rc-ccs-cc.html
* igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs:
- shard-adlp: NOTRUN -> [SKIP][57] ([Intel XE#2907])
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs:
- shard-bmg: NOTRUN -> [SKIP][58] ([Intel XE#2887]) +19 other tests skip
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][59] ([Intel XE#787]) +83 other tests skip
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-1.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [SKIP][60] ([Intel XE#787]) +153 other tests skip
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-6.html
* igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs-cc@pipe-d-dp-4:
- shard-dg2-set2: NOTRUN -> [SKIP][61] ([Intel XE#455] / [Intel XE#787]) +43 other tests skip
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs-cc@pipe-d-dp-4.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs:
- shard-adlp: NOTRUN -> [SKIP][62] ([Intel XE#3442])
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#3432]) +2 other tests skip
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-4/igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs.html
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs:
- shard-lnl: NOTRUN -> [SKIP][64] ([Intel XE#3432]) +2 other tests skip
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html
* igt@kms_cdclk@plane-scaling:
- shard-lnl: NOTRUN -> [SKIP][65] ([Intel XE#4416]) +3 other tests skip
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_cdclk@plane-scaling.html
* igt@kms_cdclk@plane-scaling@pipe-b-dp-4:
- shard-dg2-set2: NOTRUN -> [SKIP][66] ([Intel XE#4416]) +3 other tests skip
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@kms_cdclk@plane-scaling@pipe-b-dp-4.html
* igt@kms_chamelium_color@ctm-blue-to-red:
- shard-adlp: NOTRUN -> [SKIP][67] ([Intel XE#306])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_chamelium_color@ctm-blue-to-red.html
* igt@kms_chamelium_edid@hdmi-edid-change-during-hibernate:
- shard-adlp: NOTRUN -> [SKIP][68] ([Intel XE#373]) +11 other tests skip
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@kms_chamelium_edid@hdmi-edid-change-during-hibernate.html
* igt@kms_chamelium_hpd@common-hpd-after-suspend:
- shard-bmg: NOTRUN -> [SKIP][69] ([Intel XE#2252]) +11 other tests skip
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_chamelium_hpd@common-hpd-after-suspend.html
* igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe:
- shard-lnl: NOTRUN -> [SKIP][70] ([Intel XE#373]) +12 other tests skip
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe.html
* igt@kms_chamelium_hpd@vga-hpd:
- shard-dg2-set2: NOTRUN -> [SKIP][71] ([Intel XE#373]) +13 other tests skip
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-434/igt@kms_chamelium_hpd@vga-hpd.html
* igt@kms_colorop@plane-xr24-xr24-multiply_125:
- shard-adlp: NOTRUN -> [SKIP][72] ([Intel XE#6704]) +8 other tests skip
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@kms_colorop@plane-xr24-xr24-multiply_125.html
* igt@kms_colorop@plane-xr24-xr24-pq_125_eotf:
- shard-dg2-set2: NOTRUN -> [SKIP][73] ([Intel XE#6704]) +10 other tests skip
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@kms_colorop@plane-xr24-xr24-pq_125_eotf.html
* igt@kms_colorop@plane-xr24-xr24-pq_eotf-pq_inv_eotf:
- shard-bmg: NOTRUN -> [SKIP][74] ([Intel XE#6704]) +9 other tests skip
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_colorop@plane-xr24-xr24-pq_eotf-pq_inv_eotf.html
* igt@kms_colorop@plane-xr30-xr30-pq_125_inv_eotf:
- shard-lnl: NOTRUN -> [SKIP][75] ([Intel XE#6704]) +11 other tests skip
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_colorop@plane-xr30-xr30-pq_125_inv_eotf.html
* igt@kms_content_protection@atomic:
- shard-bmg: NOTRUN -> [FAIL][76] ([Intel XE#1178]) +1 other test fail
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_content_protection@atomic.html
- shard-dg2-set2: NOTRUN -> [FAIL][77] ([Intel XE#1178]) +1 other test fail
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-434/igt@kms_content_protection@atomic.html
- shard-lnl: NOTRUN -> [SKIP][78] ([Intel XE#3278])
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-3/igt@kms_content_protection@atomic.html
* igt@kms_content_protection@dp-mst-lic-type-0:
- shard-lnl: NOTRUN -> [SKIP][79] ([Intel XE#307])
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@kms_content_protection@dp-mst-lic-type-0.html
* igt@kms_content_protection@dp-mst-type-0:
- shard-dg2-set2: NOTRUN -> [SKIP][80] ([Intel XE#307])
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@kms_content_protection@dp-mst-type-0.html
* igt@kms_content_protection@dp-mst-type-1:
- shard-bmg: NOTRUN -> [SKIP][81] ([Intel XE#2390]) +1 other test skip
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_content_protection@dp-mst-type-1.html
* igt@kms_content_protection@mei-interface:
- shard-bmg: NOTRUN -> [SKIP][82] ([Intel XE#2341])
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_content_protection@mei-interface.html
* igt@kms_cursor_crc@cursor-offscreen-32x32:
- shard-bmg: NOTRUN -> [SKIP][83] ([Intel XE#2320]) +5 other tests skip
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_cursor_crc@cursor-offscreen-32x32.html
* igt@kms_cursor_crc@cursor-offscreen-512x170:
- shard-dg2-set2: NOTRUN -> [SKIP][84] ([Intel XE#308]) +1 other test skip
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@kms_cursor_crc@cursor-offscreen-512x170.html
* igt@kms_cursor_crc@cursor-onscreen-512x512:
- shard-lnl: NOTRUN -> [SKIP][85] ([Intel XE#2321])
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_cursor_crc@cursor-onscreen-512x512.html
* igt@kms_cursor_crc@cursor-random-512x170:
- shard-bmg: NOTRUN -> [SKIP][86] ([Intel XE#2321]) +1 other test skip
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_cursor_crc@cursor-random-512x170.html
- shard-adlp: NOTRUN -> [SKIP][87] ([Intel XE#308])
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_cursor_crc@cursor-random-512x170.html
* igt@kms_cursor_crc@cursor-rapid-movement-64x21:
- shard-lnl: NOTRUN -> [SKIP][88] ([Intel XE#1424]) +7 other tests skip
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_cursor_crc@cursor-rapid-movement-64x21.html
* igt@kms_cursor_legacy@2x-nonblocking-modeset-vs-cursor-atomic:
- shard-lnl: NOTRUN -> [SKIP][89] ([Intel XE#309]) +11 other tests skip
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_cursor_legacy@2x-nonblocking-modeset-vs-cursor-atomic.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-adlp: NOTRUN -> [SKIP][90] ([Intel XE#309]) +10 other tests skip
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@flip-vs-cursor-atomic:
- shard-bmg: [PASS][91] -> [FAIL][92] ([Intel XE#4633])
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-1/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
* igt@kms_cursor_legacy@flip-vs-cursor-legacy:
- shard-bmg: NOTRUN -> [FAIL][93] ([Intel XE#5299])
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
- shard-dg2-set2: NOTRUN -> [SKIP][94] ([Intel XE#323])
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-434/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
- shard-lnl: NOTRUN -> [SKIP][95] ([Intel XE#323]) +1 other test skip
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-3/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
- shard-adlp: NOTRUN -> [SKIP][96] ([Intel XE#323])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
- shard-bmg: NOTRUN -> [SKIP][97] ([Intel XE#2286]) +1 other test skip
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
* igt@kms_dirtyfb@drrs-dirtyfb-ioctl:
- shard-bmg: NOTRUN -> [SKIP][98] ([Intel XE#1508])
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_dirtyfb@drrs-dirtyfb-ioctl.html
* igt@kms_dp_aux_dev:
- shard-adlp: NOTRUN -> [SKIP][99] ([Intel XE#3009])
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@kms_dp_aux_dev.html
* igt@kms_dp_link_training@non-uhbr-sst:
- shard-adlp: NOTRUN -> [SKIP][100] ([Intel XE#4354])
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@kms_dp_link_training@non-uhbr-sst.html
* igt@kms_dp_link_training@uhbr-mst:
- shard-lnl: NOTRUN -> [SKIP][101] ([Intel XE#4354])
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-3/igt@kms_dp_link_training@uhbr-mst.html
- shard-adlp: NOTRUN -> [SKIP][102] ([Intel XE#4356])
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@kms_dp_link_training@uhbr-mst.html
- shard-bmg: NOTRUN -> [SKIP][103] ([Intel XE#4354])
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_dp_link_training@uhbr-mst.html
- shard-dg2-set2: NOTRUN -> [SKIP][104] ([Intel XE#4356])
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-434/igt@kms_dp_link_training@uhbr-mst.html
* igt@kms_dp_linktrain_fallback@dp-fallback:
- shard-lnl: NOTRUN -> [SKIP][105] ([Intel XE#4294])
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_dp_linktrain_fallback@dp-fallback.html
* igt@kms_dp_linktrain_fallback@dsc-fallback:
- shard-bmg: NOTRUN -> [SKIP][106] ([Intel XE#4331])
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-4/igt@kms_dp_linktrain_fallback@dsc-fallback.html
- shard-adlp: NOTRUN -> [SKIP][107] ([Intel XE#4331])
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_dp_linktrain_fallback@dsc-fallback.html
- shard-dg2-set2: NOTRUN -> [SKIP][108] ([Intel XE#4331])
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@kms_dp_linktrain_fallback@dsc-fallback.html
- shard-lnl: NOTRUN -> [SKIP][109] ([Intel XE#4331])
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-2/igt@kms_dp_linktrain_fallback@dsc-fallback.html
* igt@kms_dsc@dsc-with-bpc-formats:
- shard-lnl: NOTRUN -> [SKIP][110] ([Intel XE#2244])
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@kms_dsc@dsc-with-bpc-formats.html
* igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats:
- shard-dg2-set2: NOTRUN -> [SKIP][111] ([Intel XE#4422])
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-433/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats.html
- shard-adlp: NOTRUN -> [SKIP][112] ([Intel XE#4422])
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats.html
* igt@kms_fbcon_fbt@fbc:
- shard-bmg: NOTRUN -> [SKIP][113] ([Intel XE#4156])
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@kms_fbcon_fbt@fbc.html
* igt@kms_feature_discovery@display-2x:
- shard-lnl: NOTRUN -> [SKIP][114] ([Intel XE#702])
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_feature_discovery@display-2x.html
- shard-adlp: NOTRUN -> [SKIP][115] ([Intel XE#702])
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_feature_discovery@display-2x.html
* igt@kms_flip@2x-flip-vs-dpms-on-nop:
- shard-adlp: NOTRUN -> [SKIP][116] ([Intel XE#310]) +9 other tests skip
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@kms_flip@2x-flip-vs-dpms-on-nop.html
* igt@kms_flip@2x-plain-flip-ts-check:
- shard-lnl: NOTRUN -> [SKIP][117] ([Intel XE#1421]) +10 other tests skip
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_flip@2x-plain-flip-ts-check.html
* igt@kms_flip@dpms-vs-vblank-race:
- shard-adlp: NOTRUN -> [DMESG-WARN][118] ([Intel XE#2953] / [Intel XE#4173] / [Intel XE#5208])
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_flip@dpms-vs-vblank-race.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-bmg: NOTRUN -> [ABORT][119] ([Intel XE#6675]) +13 other tests abort
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-5/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1:
- shard-adlp: NOTRUN -> [ABORT][120] ([Intel XE#6675]) +10 other tests abort
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1.html
* igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling:
- shard-lnl: NOTRUN -> [FAIL][121] ([Intel XE#4683]) +1 other test fail
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-upscaling:
- shard-adlp: NOTRUN -> [SKIP][122] ([Intel XE#455]) +35 other tests skip
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-upscaling.html
- shard-bmg: NOTRUN -> [SKIP][123] ([Intel XE#2293] / [Intel XE#2380]) +7 other tests skip
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-upscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-default-mode:
- shard-lnl: NOTRUN -> [SKIP][124] ([Intel XE#1401]) +8 other tests skip
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode:
- shard-bmg: NOTRUN -> [SKIP][125] ([Intel XE#2293]) +7 other tests skip
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling:
- shard-dg2-set2: NOTRUN -> [SKIP][126] ([Intel XE#455]) +28 other tests skip
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html
- shard-lnl: NOTRUN -> [SKIP][127] ([Intel XE#1401] / [Intel XE#1745]) +8 other tests skip
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html
* igt@kms_force_connector_basic@force-connector-state:
- shard-lnl: NOTRUN -> [SKIP][128] ([Intel XE#352]) +1 other test skip
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_force_connector_basic@force-connector-state.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-fullscreen:
- shard-adlp: NOTRUN -> [SKIP][129] ([Intel XE#651]) +17 other tests skip
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-fullscreen.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-render:
- shard-adlp: NOTRUN -> [SKIP][130] ([Intel XE#656]) +46 other tests skip
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff:
- shard-dg2-set2: NOTRUN -> [SKIP][131] ([Intel XE#651]) +41 other tests skip
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@drrs-rgb565-draw-render:
- shard-bmg: NOTRUN -> [SKIP][132] ([Intel XE#2311]) +39 other tests skip
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@kms_frontbuffer_tracking@drrs-rgb565-draw-render.html
* igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-shrfb-draw-mmap-wc:
- shard-adlp: [PASS][133] -> [FAIL][134] ([Intel XE#5671])
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-2/igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-shrfb-draw-mmap-wc.html
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-msflip-blt:
- shard-bmg: NOTRUN -> [SKIP][135] ([Intel XE#4141]) +13 other tests skip
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-indfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-shrfb-draw-blt:
- shard-lnl: NOTRUN -> [SKIP][136] ([Intel XE#656]) +57 other tests skip
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-shrfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-indfb-draw-blt:
- shard-lnl: NOTRUN -> [SKIP][137] ([Intel XE#6312]) +1 other test skip
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-shrfb-draw-mmap-wc:
- shard-adlp: NOTRUN -> [SKIP][138] ([Intel XE#6312]) +3 other tests skip
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-shrfb-draw-render:
- shard-dg2-set2: NOTRUN -> [SKIP][139] ([Intel XE#6312]) +2 other tests skip
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-shrfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-indfb-msflip-blt:
- shard-lnl: NOTRUN -> [SKIP][140] ([Intel XE#651]) +23 other tests skip
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-indfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-wc:
- shard-adlp: NOTRUN -> [SKIP][141] ([Intel XE#653]) +17 other tests skip
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-blt:
- shard-dg2-set2: NOTRUN -> [SKIP][142] ([Intel XE#653]) +36 other tests skip
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-suspend:
- shard-lnl: NOTRUN -> [ABORT][143] ([Intel XE#6675]) +13 other tests abort
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-2/igt@kms_frontbuffer_tracking@fbcpsr-suspend.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
- shard-bmg: NOTRUN -> [SKIP][144] ([Intel XE#2313]) +31 other tests skip
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
* igt@kms_hdr@bpc-switch-dpms:
- shard-bmg: [PASS][145] -> [ABORT][146] ([Intel XE#6740]) +1 other test abort
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-4/igt@kms_hdr@bpc-switch-dpms.html
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_hdr@bpc-switch-dpms.html
* igt@kms_hdr@invalid-hdr@pipe-a-hdmi-a-3:
- shard-bmg: NOTRUN -> [ABORT][147] ([Intel XE#6740]) +1 other test abort
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@kms_hdr@invalid-hdr@pipe-a-hdmi-a-3.html
* igt@kms_joiner@basic-big-joiner:
- shard-lnl: NOTRUN -> [SKIP][148] ([Intel XE#2925] / [Intel XE#346])
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-3/igt@kms_joiner@basic-big-joiner.html
* igt@kms_joiner@basic-force-ultra-joiner:
- shard-bmg: NOTRUN -> [SKIP][149] ([Intel XE#2934] / [Intel XE#6590])
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@kms_joiner@basic-force-ultra-joiner.html
* igt@kms_joiner@invalid-modeset-force-big-joiner:
- shard-adlp: NOTRUN -> [SKIP][150] ([Intel XE#2925] / [Intel XE#3012])
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_joiner@invalid-modeset-force-big-joiner.html
* igt@kms_joiner@invalid-modeset-ultra-joiner:
- shard-lnl: NOTRUN -> [SKIP][151] ([Intel XE#2925] / [Intel XE#2927])
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_joiner@invalid-modeset-ultra-joiner.html
- shard-adlp: NOTRUN -> [SKIP][152] ([Intel XE#2925] / [Intel XE#2927])
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_joiner@invalid-modeset-ultra-joiner.html
- shard-bmg: NOTRUN -> [SKIP][153] ([Intel XE#2927] / [Intel XE#6590])
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-4/igt@kms_joiner@invalid-modeset-ultra-joiner.html
* igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner:
- shard-adlp: NOTRUN -> [SKIP][154] ([Intel XE#2925])
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html
- shard-dg2-set2: NOTRUN -> [SKIP][155] ([Intel XE#2925]) +1 other test skip
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-433/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html
* igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
- shard-adlp: NOTRUN -> [SKIP][156] ([Intel XE#356])
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
- shard-lnl: NOTRUN -> [SKIP][157] ([Intel XE#356])
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
* igt@kms_pipe_stress@stress-xrgb8888-ytiled:
- shard-lnl: NOTRUN -> [SKIP][158] ([Intel XE#4329])
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-3/igt@kms_pipe_stress@stress-xrgb8888-ytiled.html
* igt@kms_plane@pixel-format-source-clamping@pipe-a-plane-0:
- shard-lnl: NOTRUN -> [FAIL][159] ([Intel XE#5195]) +2 other tests fail
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_plane@pixel-format-source-clamping@pipe-a-plane-0.html
* igt@kms_plane@pixel-format-source-clamping@pipe-b-plane-0:
- shard-adlp: NOTRUN -> [FAIL][160] ([Intel XE#5195]) +4 other tests fail
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_plane@pixel-format-source-clamping@pipe-b-plane-0.html
* igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b:
- shard-adlp: [PASS][161] -> [DMESG-WARN][162] ([Intel XE#2953] / [Intel XE#4173])
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-8/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_plane@plane-panning-bottom-right-suspend@pipe-b.html
* igt@kms_plane_lowres@tiling-yf:
- shard-bmg: NOTRUN -> [SKIP][163] ([Intel XE#2393])
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-4/igt@kms_plane_lowres@tiling-yf.html
- shard-lnl: NOTRUN -> [SKIP][164] ([Intel XE#599])
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@kms_plane_lowres@tiling-yf.html
* igt@kms_plane_multiple@2x-tiling-x:
- shard-adlp: NOTRUN -> [SKIP][165] ([Intel XE#4596])
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@kms_plane_multiple@2x-tiling-x.html
* igt@kms_plane_multiple@2x-tiling-yf:
- shard-dg2-set2: NOTRUN -> [SKIP][166] ([Intel XE#5021])
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@kms_plane_multiple@2x-tiling-yf.html
- shard-lnl: NOTRUN -> [SKIP][167] ([Intel XE#4596])
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_plane_multiple@2x-tiling-yf.html
* igt@kms_plane_multiple@tiling-yf:
- shard-adlp: NOTRUN -> [SKIP][168] ([Intel XE#5020])
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@kms_plane_multiple@tiling-yf.html
- shard-bmg: NOTRUN -> [SKIP][169] ([Intel XE#5020])
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@kms_plane_multiple@tiling-yf.html
- shard-dg2-set2: NOTRUN -> [SKIP][170] ([Intel XE#5020])
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@kms_plane_multiple@tiling-yf.html
* igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-a:
- shard-lnl: NOTRUN -> [SKIP][171] ([Intel XE#5825]) +5 other tests skip
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-a.html
* igt@kms_pm_backlight@fade:
- shard-bmg: NOTRUN -> [SKIP][172] ([Intel XE#870]) +1 other test skip
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@kms_pm_backlight@fade.html
- shard-adlp: NOTRUN -> [SKIP][173] ([Intel XE#870]) +1 other test skip
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_pm_backlight@fade.html
- shard-dg2-set2: NOTRUN -> [SKIP][174] ([Intel XE#870]) +1 other test skip
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@kms_pm_backlight@fade.html
* igt@kms_pm_dc@dc5-dpms-negative:
- shard-lnl: NOTRUN -> [SKIP][175] ([Intel XE#1131])
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_pm_dc@dc5-dpms-negative.html
* igt@kms_pm_dc@dc5-psr:
- shard-bmg: NOTRUN -> [SKIP][176] ([Intel XE#2392])
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_pm_dc@dc5-psr.html
* igt@kms_pm_dc@dc5-retention-flops:
- shard-lnl: NOTRUN -> [SKIP][177] ([Intel XE#3309])
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@kms_pm_dc@dc5-retention-flops.html
* igt@kms_pm_dc@dc6-dpms:
- shard-dg2-set2: NOTRUN -> [SKIP][178] ([Intel XE#908]) +1 other test skip
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@kms_pm_dc@dc6-dpms.html
- shard-lnl: NOTRUN -> [FAIL][179] ([Intel XE#718])
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_pm_dc@dc6-dpms.html
- shard-adlp: NOTRUN -> [FAIL][180] ([Intel XE#718])
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@kms_pm_dc@dc6-dpms.html
* igt@kms_pm_dc@deep-pkgc:
- shard-adlp: NOTRUN -> [SKIP][181] ([Intel XE#2007])
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@kms_pm_dc@deep-pkgc.html
- shard-bmg: NOTRUN -> [SKIP][182] ([Intel XE#2505])
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_pm_dc@deep-pkgc.html
- shard-lnl: NOTRUN -> [FAIL][183] ([Intel XE#2029])
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-3/igt@kms_pm_dc@deep-pkgc.html
* igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-exceed-fully-sf:
- shard-lnl: NOTRUN -> [SKIP][184] ([Intel XE#1406] / [Intel XE#2893]) +3 other tests skip
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-exceed-fully-sf.html
* igt@kms_psr2_sf@fbc-psr2-overlay-primary-update-sf-dmg-area:
- shard-lnl: NOTRUN -> [SKIP][185] ([Intel XE#1406] / [Intel XE#2893] / [Intel XE#4608]) +1 other test skip
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_psr2_sf@fbc-psr2-overlay-primary-update-sf-dmg-area.html
* igt@kms_psr2_sf@fbc-psr2-overlay-primary-update-sf-dmg-area@pipe-b-edp-1:
- shard-lnl: NOTRUN -> [SKIP][186] ([Intel XE#1406] / [Intel XE#4608]) +3 other tests skip
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_psr2_sf@fbc-psr2-overlay-primary-update-sf-dmg-area@pipe-b-edp-1.html
* igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf:
- shard-adlp: NOTRUN -> [SKIP][187] ([Intel XE#1406] / [Intel XE#1489]) +10 other tests skip
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf.html
* igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf:
- shard-bmg: NOTRUN -> [SKIP][188] ([Intel XE#1406] / [Intel XE#1489]) +7 other tests skip
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf.html
* igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area:
- shard-dg2-set2: NOTRUN -> [SKIP][189] ([Intel XE#1406] / [Intel XE#1489]) +11 other tests skip
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area.html
* igt@kms_psr@fbc-psr2-cursor-plane-move:
- shard-bmg: NOTRUN -> [SKIP][190] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +13 other tests skip
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_psr@fbc-psr2-cursor-plane-move.html
* igt@kms_psr@fbc-psr2-cursor-plane-onoff:
- shard-dg2-set2: NOTRUN -> [SKIP][191] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +13 other tests skip
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@kms_psr@fbc-psr2-cursor-plane-onoff.html
* igt@kms_psr@fbc-psr2-primary-blt:
- shard-adlp: NOTRUN -> [SKIP][192] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +11 other tests skip
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_psr@fbc-psr2-primary-blt.html
* igt@kms_psr@fbc-psr2-primary-blt@edp-1:
- shard-lnl: NOTRUN -> [SKIP][193] ([Intel XE#1406] / [Intel XE#4609]) +3 other tests skip
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_psr@fbc-psr2-primary-blt@edp-1.html
* igt@kms_psr@pr-no-drrs:
- shard-lnl: NOTRUN -> [SKIP][194] ([Intel XE#1406]) +6 other tests skip
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_psr@pr-no-drrs.html
* igt@kms_rotation_crc@bad-pixel-format:
- shard-bmg: NOTRUN -> [SKIP][195] ([Intel XE#3414] / [Intel XE#3904]) +2 other tests skip
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_rotation_crc@bad-pixel-format.html
* igt@kms_rotation_crc@bad-tiling:
- shard-dg2-set2: NOTRUN -> [SKIP][196] ([Intel XE#3414]) +1 other test skip
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@kms_rotation_crc@bad-tiling.html
- shard-lnl: NOTRUN -> [SKIP][197] ([Intel XE#3414] / [Intel XE#3904]) +3 other tests skip
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_rotation_crc@bad-tiling.html
- shard-adlp: NOTRUN -> [SKIP][198] ([Intel XE#3414])
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@kms_rotation_crc@bad-tiling.html
* igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0:
- shard-lnl: NOTRUN -> [SKIP][199] ([Intel XE#1127])
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0.html
- shard-adlp: NOTRUN -> [SKIP][200] ([Intel XE#1127])
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0.html
- shard-dg2-set2: NOTRUN -> [SKIP][201] ([Intel XE#1127])
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-436/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0.html
* igt@kms_sharpness_filter@invalid-filter-with-scaling-mode:
- shard-bmg: NOTRUN -> [SKIP][202] ([Intel XE#6503]) +3 other tests skip
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_sharpness_filter@invalid-filter-with-scaling-mode.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-adlp: NOTRUN -> [SKIP][203] ([Intel XE#362])
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
* igt@kms_tv_load_detect@load-detect:
- shard-dg2-set2: NOTRUN -> [SKIP][204] ([Intel XE#330])
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@kms_tv_load_detect@load-detect.html
* igt@kms_vrr@seamless-rr-switch-vrr:
- shard-bmg: NOTRUN -> [SKIP][205] ([Intel XE#1499])
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_vrr@seamless-rr-switch-vrr.html
- shard-lnl: NOTRUN -> [SKIP][206] ([Intel XE#1499])
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_vrr@seamless-rr-switch-vrr.html
* igt@xe_ccs@block-copy-compressed:
- shard-adlp: NOTRUN -> [SKIP][207] ([Intel XE#455] / [Intel XE#488] / [Intel XE#5607])
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@xe_ccs@block-copy-compressed.html
* igt@xe_compute@eu-busy-10s:
- shard-dg2-set2: NOTRUN -> [SKIP][208] ([Intel XE#6598])
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-433/igt@xe_compute@eu-busy-10s.html
* igt@xe_copy_basic@mem-copy-linear-0x369:
- shard-adlp: NOTRUN -> [SKIP][209] ([Intel XE#1123]) +1 other test skip
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@xe_copy_basic@mem-copy-linear-0x369.html
* igt@xe_copy_basic@mem-copy-linear-0x3fff:
- shard-dg2-set2: NOTRUN -> [SKIP][210] ([Intel XE#1123])
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-433/igt@xe_copy_basic@mem-copy-linear-0x3fff.html
* igt@xe_copy_basic@mem-page-copy-1:
- shard-dg2-set2: NOTRUN -> [SKIP][211] ([Intel XE#5300])
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-463/igt@xe_copy_basic@mem-page-copy-1.html
* igt@xe_copy_basic@mem-set-linear-0x3fff:
- shard-adlp: NOTRUN -> [SKIP][212] ([Intel XE#1126])
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@xe_copy_basic@mem-set-linear-0x3fff.html
- shard-dg2-set2: NOTRUN -> [SKIP][213] ([Intel XE#1126])
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-434/igt@xe_copy_basic@mem-set-linear-0x3fff.html
* igt@xe_copy_basic@mem-set-linear-0x8fffe:
- shard-adlp: NOTRUN -> [SKIP][214] ([Intel XE#5503])
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@xe_copy_basic@mem-set-linear-0x8fffe.html
- shard-dg2-set2: NOTRUN -> [SKIP][215] ([Intel XE#5503])
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@xe_copy_basic@mem-set-linear-0x8fffe.html
* igt@xe_create@create-big-vram:
- shard-adlp: NOTRUN -> [SKIP][216] ([Intel XE#1062])
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@xe_create@create-big-vram.html
- shard-lnl: NOTRUN -> [SKIP][217] ([Intel XE#1062])
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@xe_create@create-big-vram.html
* igt@xe_create@multigpu-create-massive-size:
- shard-bmg: NOTRUN -> [SKIP][218] ([Intel XE#2504])
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_create@multigpu-create-massive-size.html
* igt@xe_eu_stall@invalid-sampling-rate:
- shard-dg2-set2: NOTRUN -> [SKIP][219] ([Intel XE#5626]) +1 other test skip
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-436/igt@xe_eu_stall@invalid-sampling-rate.html
* igt@xe_eu_stall@unprivileged-access:
- shard-adlp: NOTRUN -> [SKIP][220] ([Intel XE#5626]) +1 other test skip
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@xe_eu_stall@unprivileged-access.html
* igt@xe_eudebug@multigpu-basic-client:
- shard-bmg: NOTRUN -> [SKIP][221] ([Intel XE#3894])
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_eudebug@multigpu-basic-client.html
- shard-lnl: NOTRUN -> [SKIP][222] ([Intel XE#5132])
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@xe_eudebug@multigpu-basic-client.html
* igt@xe_eudebug_online@breakpoint-many-sessions-tiles:
- shard-dg2-set2: NOTRUN -> [SKIP][223] ([Intel XE#2846])
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-433/igt@xe_eudebug_online@breakpoint-many-sessions-tiles.html
* igt@xe_eudebug_online@writes-caching-vram-bb-sram-target-sram:
- shard-lnl: NOTRUN -> [SKIP][224] ([Intel XE#2825]) +5 other tests skip
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@xe_eudebug_online@writes-caching-vram-bb-sram-target-sram.html
* igt@xe_evict@evict-beng-large-external-cm:
- shard-adlp: NOTRUN -> [SKIP][225] ([Intel XE#261] / [Intel XE#5564]) +2 other tests skip
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_evict@evict-beng-large-external-cm.html
* igt@xe_evict@evict-beng-mixed-threads-small-multi-vm:
- shard-adlp: NOTRUN -> [SKIP][226] ([Intel XE#261] / [Intel XE#688]) +1 other test skip
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@xe_evict@evict-beng-mixed-threads-small-multi-vm.html
* igt@xe_evict@evict-beng-small-multi-vm:
- shard-adlp: NOTRUN -> [SKIP][227] ([Intel XE#261] / [Intel XE#5564] / [Intel XE#688]) +3 other tests skip
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_evict@evict-beng-small-multi-vm.html
* igt@xe_evict@evict-mixed-many-threads-small:
- shard-adlp: NOTRUN -> [SKIP][228] ([Intel XE#261]) +3 other tests skip
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_evict@evict-mixed-many-threads-small.html
* igt@xe_evict_ccs@evict-overcommit-standalone-instantfree-reopen:
- shard-lnl: NOTRUN -> [SKIP][229] ([Intel XE#688]) +17 other tests skip
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-2/igt@xe_evict_ccs@evict-overcommit-standalone-instantfree-reopen.html
- shard-adlp: NOTRUN -> [SKIP][230] ([Intel XE#688]) +1 other test skip
[230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@xe_evict_ccs@evict-overcommit-standalone-instantfree-reopen.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-userptr:
- shard-bmg: NOTRUN -> [SKIP][231] ([Intel XE#2322]) +10 other tests skip
[231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-userptr.html
* igt@xe_exec_basic@multigpu-no-exec-userptr-invalidate:
- shard-lnl: NOTRUN -> [SKIP][232] ([Intel XE#1392]) +14 other tests skip
[232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@xe_exec_basic@multigpu-no-exec-userptr-invalidate.html
* igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate:
- shard-adlp: NOTRUN -> [SKIP][233] ([Intel XE#1392] / [Intel XE#5575]) +12 other tests skip
[233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate.html
* igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-userptr-invalidate-race-imm:
- shard-adlp: NOTRUN -> [SKIP][234] ([Intel XE#288] / [Intel XE#5561]) +34 other tests skip
[234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@xe_exec_fault_mode@many-execqueues-bindexecqueue-userptr-invalidate-race-imm.html
* igt@xe_exec_fault_mode@once-rebind-prefetch:
- shard-dg2-set2: NOTRUN -> [SKIP][235] ([Intel XE#288]) +39 other tests skip
[235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-433/igt@xe_exec_fault_mode@once-rebind-prefetch.html
* igt@xe_exec_queue_property@invalid-property:
- shard-adlp: NOTRUN -> [FAIL][236] ([Intel XE#6741])
[236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@xe_exec_queue_property@invalid-property.html
* igt@xe_exec_system_allocator@many-64k-mmap-free-huge-nomemset:
- shard-lnl: NOTRUN -> [SKIP][237] ([Intel XE#5007]) +2 other tests skip
[237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@xe_exec_system_allocator@many-64k-mmap-free-huge-nomemset.html
- shard-bmg: NOTRUN -> [SKIP][238] ([Intel XE#5007])
[238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@xe_exec_system_allocator@many-64k-mmap-free-huge-nomemset.html
* igt@xe_exec_system_allocator@many-64k-mmap-mlock-nomemset:
- shard-dg2-set2: NOTRUN -> [SKIP][239] ([Intel XE#4915]) +471 other tests skip
[239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@xe_exec_system_allocator@many-64k-mmap-mlock-nomemset.html
* igt@xe_exec_system_allocator@many-malloc-prefetch-madvise:
- shard-lnl: NOTRUN -> [WARN][240] ([Intel XE#5786]) +4 other tests warn
[240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-3/igt@xe_exec_system_allocator@many-malloc-prefetch-madvise.html
* igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-wt-multi-vma:
- shard-lnl: NOTRUN -> [SKIP][241] ([Intel XE#6196])
[241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-3/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-wt-multi-vma.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-free-huge:
- shard-lnl: NOTRUN -> [SKIP][242] ([Intel XE#4943]) +35 other tests skip
[242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-free-huge.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-new-huge-nomemset:
- shard-bmg: NOTRUN -> [SKIP][243] ([Intel XE#4943]) +35 other tests skip
[243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-new-huge-nomemset.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-shared-remap-dontunmap-eocheck:
- shard-adlp: NOTRUN -> [SKIP][244] ([Intel XE#4915]) +398 other tests skip
[244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-shared-remap-dontunmap-eocheck.html
* igt@xe_fault_injection@exec-queue-create-fail-xe_pxp_exec_queue_add:
- shard-bmg: NOTRUN -> [SKIP][245] ([Intel XE#6281])
[245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_fault_injection@exec-queue-create-fail-xe_pxp_exec_queue_add.html
* igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv:
- shard-lnl: NOTRUN -> [ABORT][246] ([Intel XE#5466])
[246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
* igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv:
- shard-lnl: NOTRUN -> [ABORT][247] ([Intel XE#4757])
[247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@xe_fault_injection@probe-fail-guc-xe_guc_mmio_send_recv.html
* igt@xe_huc_copy@huc_copy:
- shard-dg2-set2: NOTRUN -> [SKIP][248] ([Intel XE#255])
[248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-434/igt@xe_huc_copy@huc_copy.html
* igt@xe_media_fill@media-fill:
- shard-bmg: NOTRUN -> [SKIP][249] ([Intel XE#2459] / [Intel XE#2596])
[249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@xe_media_fill@media-fill.html
- shard-dg2-set2: NOTRUN -> [SKIP][250] ([Intel XE#560])
[250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@xe_media_fill@media-fill.html
- shard-lnl: NOTRUN -> [SKIP][251] ([Intel XE#560])
[251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-2/igt@xe_media_fill@media-fill.html
* igt@xe_mmap@pci-membarrier:
- shard-adlp: NOTRUN -> [SKIP][252] ([Intel XE#5100]) +1 other test skip
[252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_mmap@pci-membarrier.html
* igt@xe_mmap@small-bar:
- shard-adlp: NOTRUN -> [SKIP][253] ([Intel XE#512])
[253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@xe_mmap@small-bar.html
- shard-bmg: NOTRUN -> [SKIP][254] ([Intel XE#586])
[254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@xe_mmap@small-bar.html
- shard-dg2-set2: NOTRUN -> [SKIP][255] ([Intel XE#512])
[255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@xe_mmap@small-bar.html
- shard-lnl: NOTRUN -> [SKIP][256] ([Intel XE#512])
[256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@xe_mmap@small-bar.html
* igt@xe_mmap@vram:
- shard-lnl: NOTRUN -> [SKIP][257] ([Intel XE#1416])
[257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@xe_mmap@vram.html
- shard-adlp: NOTRUN -> [SKIP][258] ([Intel XE#1008] / [Intel XE#5591])
[258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_mmap@vram.html
* igt@xe_oa@privileged-forked-access-vaddr:
- shard-adlp: NOTRUN -> [SKIP][259] ([Intel XE#3573]) +8 other tests skip
[259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@xe_oa@privileged-forked-access-vaddr.html
* igt@xe_oa@whitelisted-registers-userspace-config:
- shard-dg2-set2: NOTRUN -> [SKIP][260] ([Intel XE#3573]) +9 other tests skip
[260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@xe_oa@whitelisted-registers-userspace-config.html
* igt@xe_pat@pat-index-xehpc:
- shard-lnl: NOTRUN -> [SKIP][261] ([Intel XE#1420] / [Intel XE#2838])
[261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@xe_pat@pat-index-xehpc.html
* igt@xe_pat@pat-index-xelpg:
- shard-adlp: NOTRUN -> [SKIP][262] ([Intel XE#979])
[262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@xe_pat@pat-index-xelpg.html
- shard-dg2-set2: NOTRUN -> [SKIP][263] ([Intel XE#979])
[263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@xe_pat@pat-index-xelpg.html
* igt@xe_peer2peer@read:
- shard-adlp: NOTRUN -> [SKIP][264] ([Intel XE#1061])
[264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@xe_peer2peer@read.html
* igt@xe_pm@d3cold-basic:
- shard-lnl: NOTRUN -> [SKIP][265] ([Intel XE#2284] / [Intel XE#366]) +1 other test skip
[265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@xe_pm@d3cold-basic.html
- shard-adlp: NOTRUN -> [SKIP][266] ([Intel XE#2284] / [Intel XE#366]) +1 other test skip
[266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-3/igt@xe_pm@d3cold-basic.html
* igt@xe_pm@d3cold-mmap-vram:
- shard-dg2-set2: NOTRUN -> [SKIP][267] ([Intel XE#2284] / [Intel XE#366]) +2 other tests skip
[267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-436/igt@xe_pm@d3cold-mmap-vram.html
* igt@xe_pm@d3cold-mocs:
- shard-adlp: NOTRUN -> [SKIP][268] ([Intel XE#2284])
[268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_pm@d3cold-mocs.html
- shard-bmg: NOTRUN -> [SKIP][269] ([Intel XE#2284]) +2 other tests skip
[269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@xe_pm@d3cold-mocs.html
- shard-dg2-set2: NOTRUN -> [SKIP][270] ([Intel XE#2284])
[270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@xe_pm@d3cold-mocs.html
* igt@xe_pm@d3hot-i2c:
- shard-lnl: NOTRUN -> [SKIP][271] ([Intel XE#5742])
[271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@xe_pm@d3hot-i2c.html
- shard-bmg: NOTRUN -> [SKIP][272] ([Intel XE#5742])
[272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_pm@d3hot-i2c.html
* igt@xe_pm@d3hot-mmap-vram:
- shard-adlp: NOTRUN -> [SKIP][273] ([Intel XE#1948])
[273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@xe_pm@d3hot-mmap-vram.html
* igt@xe_pm@s2idle-basic-exec:
- shard-adlp: [PASS][274] -> [ABORT][275] ([Intel XE#6675]) +2 other tests abort
[274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-6/igt@xe_pm@s2idle-basic-exec.html
[275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_pm@s2idle-basic-exec.html
* igt@xe_pm@s2idle-exec-after:
- shard-dg2-set2: NOTRUN -> [ABORT][276] ([Intel XE#6675]) +19 other tests abort
[276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-432/igt@xe_pm@s2idle-exec-after.html
* igt@xe_pm@s3-basic-exec:
- shard-lnl: NOTRUN -> [SKIP][277] ([Intel XE#584]) +1 other test skip
[277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@xe_pm@s3-basic-exec.html
* igt@xe_pmu@all-fn-engine-activity-load:
- shard-dg2-set2: NOTRUN -> [SKIP][278] ([Intel XE#4650])
[278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-464/igt@xe_pmu@all-fn-engine-activity-load.html
* igt@xe_pmu@engine-activity-accuracy-50:
- shard-lnl: [PASS][279] -> [FAIL][280] ([Intel XE#6251]) +1 other test fail
[279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-lnl-2/igt@xe_pmu@engine-activity-accuracy-50.html
[280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@xe_pmu@engine-activity-accuracy-50.html
* igt@xe_pmu@gt-c6-idle:
- shard-dg2-set2: NOTRUN -> [FAIL][281] ([Intel XE#6366])
[281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@xe_pmu@gt-c6-idle.html
* igt@xe_pxp@display-black-pxp-fb:
- shard-bmg: NOTRUN -> [SKIP][282] ([Intel XE#4733]) +4 other tests skip
[282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_pxp@display-black-pxp-fb.html
- shard-adlp: NOTRUN -> [SKIP][283] ([Intel XE#4733])
[283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-6/igt@xe_pxp@display-black-pxp-fb.html
* igt@xe_pxp@pxp-stale-bo-bind-post-rpm:
- shard-dg2-set2: NOTRUN -> [SKIP][284] ([Intel XE#4733]) +4 other tests skip
[284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-436/igt@xe_pxp@pxp-stale-bo-bind-post-rpm.html
* igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq:
- shard-adlp: NOTRUN -> [SKIP][285] ([Intel XE#4733] / [Intel XE#5594]) +3 other tests skip
[285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq.html
* igt@xe_query@multigpu-query-invalid-extension:
- shard-bmg: NOTRUN -> [SKIP][286] ([Intel XE#944]) +3 other tests skip
[286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_query@multigpu-query-invalid-extension.html
- shard-adlp: NOTRUN -> [SKIP][287] ([Intel XE#944]) +1 other test skip
[287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_query@multigpu-query-invalid-extension.html
- shard-lnl: NOTRUN -> [SKIP][288] ([Intel XE#944]) +2 other tests skip
[288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@xe_query@multigpu-query-invalid-extension.html
* igt@xe_query@multigpu-query-uc-fw-version-guc:
- shard-dg2-set2: NOTRUN -> [SKIP][289] ([Intel XE#944]) +4 other tests skip
[289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@xe_query@multigpu-query-uc-fw-version-guc.html
* igt@xe_render_copy@render-stress-1-copies:
- shard-adlp: NOTRUN -> [SKIP][290] ([Intel XE#4814] / [Intel XE#5614])
[290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@xe_render_copy@render-stress-1-copies.html
* igt@xe_sriov_auto_provisioning@fair-allocation:
- shard-lnl: NOTRUN -> [SKIP][291] ([Intel XE#4130]) +1 other test skip
[291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@xe_sriov_auto_provisioning@fair-allocation.html
* igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs:
- shard-dg2-set2: NOTRUN -> [SKIP][292] ([Intel XE#4130]) +1 other test skip
[292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-435/igt@xe_sriov_auto_provisioning@selfconfig-reprovision-reduce-numvfs.html
* igt@xe_sriov_flr@flr-vf1-clear:
- shard-dg2-set2: NOTRUN -> [SKIP][293] ([Intel XE#3342])
[293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-433/igt@xe_sriov_flr@flr-vf1-clear.html
* igt@xe_sriov_scheduling@equal-throughput:
- shard-lnl: NOTRUN -> [SKIP][294] ([Intel XE#4351])
[294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-8/igt@xe_sriov_scheduling@equal-throughput.html
* igt@xe_sriov_vram@vf-access-after-resize-up:
- shard-adlp: NOTRUN -> [SKIP][295] ([Intel XE#6376])
[295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@xe_sriov_vram@vf-access-after-resize-up.html
* igt@xe_survivability@i2c-functionality:
- shard-adlp: NOTRUN -> [SKIP][296] ([Intel XE#6529])
[296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@xe_survivability@i2c-functionality.html
* igt@xe_vm@out-of-memory:
- shard-adlp: NOTRUN -> [SKIP][297] ([Intel XE#5745])
[297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@xe_vm@out-of-memory.html
- shard-lnl: NOTRUN -> [SKIP][298] ([Intel XE#5745])
[298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@xe_vm@out-of-memory.html
#### Possible fixes ####
* igt@kms_addfb_basic@invalid-set-prop:
- shard-adlp: [DMESG-WARN][299] ([Intel XE#2953] / [Intel XE#4173]) -> [PASS][300]
[299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-1/igt@kms_addfb_basic@invalid-set-prop.html
[300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-2/igt@kms_addfb_basic@invalid-set-prop.html
* igt@kms_async_flips@async-flip-with-page-flip-events-linear@pipe-c-edp-1:
- shard-lnl: [FAIL][301] ([Intel XE#5993]) -> [PASS][302]
[301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events-linear@pipe-c-edp-1.html
[302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_async_flips@async-flip-with-page-flip-events-linear@pipe-c-edp-1.html
* igt@kms_async_flips@crc-atomic@pipe-d-hdmi-a-1:
- shard-adlp: [FAIL][303] ([Intel XE#3884]) -> [PASS][304] +3 other tests pass
[303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-4/igt@kms_async_flips@crc-atomic@pipe-d-hdmi-a-1.html
[304]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@kms_async_flips@crc-atomic@pipe-d-hdmi-a-1.html
* igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip:
- shard-adlp: [FAIL][305] ([Intel XE#6699]) -> [PASS][306]
[305]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-9/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html
[306]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html
* igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0:
- shard-adlp: [FAIL][307] ([Intel XE#1874]) -> [PASS][308]
[307]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-1/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0.html
[308]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-0.html
* igt@kms_invalid_mode@bad-vtotal@pipe-a-dp-2:
- shard-bmg: [DMESG-WARN][309] -> [PASS][310] +2 other tests pass
[309]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_invalid_mode@bad-vtotal@pipe-a-dp-2.html
[310]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_invalid_mode@bad-vtotal@pipe-a-dp-2.html
* igt@kms_pm_rpm@system-suspend-idle:
- shard-adlp: [ABORT][311] ([Intel XE#6675]) -> [PASS][312]
[311]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-3/igt@kms_pm_rpm@system-suspend-idle.html
[312]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_pm_rpm@system-suspend-idle.html
* igt@kms_psr@psr-suspend@edp-1:
- shard-lnl: [ABORT][313] ([Intel XE#2625] / [Intel XE#6675]) -> [PASS][314] +1 other test pass
[313]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-lnl-3/igt@kms_psr@psr-suspend@edp-1.html
[314]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-1/igt@kms_psr@psr-suspend@edp-1.html
* igt@xe_evict@evict-beng-mixed-threads-small-multi-vm:
- shard-bmg: [SKIP][315] ([Intel XE#6557] / [Intel XE#6703]) -> [PASS][316]
[315]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@xe_evict@evict-beng-mixed-threads-small-multi-vm.html
[316]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@xe_evict@evict-beng-mixed-threads-small-multi-vm.html
* igt@xe_exec_sip_eudebug@breakpoint-waitsip-heavy:
- shard-adlp: [SKIP][317] ([Intel XE#4837] / [Intel XE#5565]) -> [PASS][318] +1 other test pass
[317]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-1/igt@xe_exec_sip_eudebug@breakpoint-waitsip-heavy.html
[318]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@xe_exec_sip_eudebug@breakpoint-waitsip-heavy.html
* igt@xe_live_ktest@xe_eudebug:
- shard-bmg: [SKIP][319] ([Intel XE#2833]) -> [PASS][320]
[319]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-5/igt@xe_live_ktest@xe_eudebug.html
[320]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-5/igt@xe_live_ktest@xe_eudebug.html
- shard-adlp: [SKIP][321] ([Intel XE#455] / [Intel XE#5712]) -> [PASS][322]
[321]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-4/igt@xe_live_ktest@xe_eudebug.html
[322]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@xe_live_ktest@xe_eudebug.html
- shard-dg2-set2: [SKIP][323] ([Intel XE#455]) -> [PASS][324]
[323]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-dg2-463/igt@xe_live_ktest@xe_eudebug.html
[324]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-dg2-463/igt@xe_live_ktest@xe_eudebug.html
- shard-lnl: [SKIP][325] ([Intel XE#2833]) -> [PASS][326]
[325]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-lnl-8/igt@xe_live_ktest@xe_eudebug.html
[326]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-4/igt@xe_live_ktest@xe_eudebug.html
* igt@xe_pm@s3-vm-bind-userptr:
- shard-bmg: [ABORT][327] ([Intel XE#6675]) -> [PASS][328]
[327]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-3/igt@xe_pm@s3-vm-bind-userptr.html
[328]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@xe_pm@s3-vm-bind-userptr.html
* igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling:
- shard-bmg: [SKIP][329] ([Intel XE#6703]) -> [PASS][330] +30 other tests pass
[329]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling.html
[330]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling.html
#### Warnings ####
* igt@kms_async_flips@async-flip-with-page-flip-events-linear:
- shard-lnl: [FAIL][331] ([Intel XE#5993] / [Intel XE#6676]) -> [FAIL][332] ([Intel XE#6676])
[331]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events-linear.html
[332]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-lnl-5/igt@kms_async_flips@async-flip-with-page-flip-events-linear.html
* igt@kms_big_fb@y-tiled-16bpp-rotate-270:
- shard-bmg: [SKIP][333] ([Intel XE#6703]) -> [SKIP][334] ([Intel XE#1124])
[333]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_big_fb@y-tiled-16bpp-rotate-270.html
[334]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_big_fb@y-tiled-16bpp-rotate-270.html
* igt@kms_bw@linear-tiling-1-displays-1920x1080p:
- shard-bmg: [SKIP][335] ([Intel XE#6703]) -> [SKIP][336] ([Intel XE#367])
[335]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
[336]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
* igt@kms_ccs@crc-primary-basic-y-tiled-ccs:
- shard-bmg: [SKIP][337] ([Intel XE#6703]) -> [SKIP][338] ([Intel XE#2887]) +1 other test skip
[337]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_ccs@crc-primary-basic-y-tiled-ccs.html
[338]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_ccs@crc-primary-basic-y-tiled-ccs.html
* igt@kms_cursor_crc@cursor-rapid-movement-256x85:
- shard-bmg: [SKIP][339] ([Intel XE#6557] / [Intel XE#6703]) -> [SKIP][340] ([Intel XE#2320])
[339]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_cursor_crc@cursor-rapid-movement-256x85.html
[340]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_cursor_crc@cursor-rapid-movement-256x85.html
* igt@kms_feature_discovery@display-3x:
- shard-bmg: [SKIP][341] ([Intel XE#6703]) -> [SKIP][342] ([Intel XE#2373])
[341]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_feature_discovery@display-3x.html
[342]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_feature_discovery@display-3x.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][343] ([Intel XE#6703]) -> [SKIP][344] ([Intel XE#2311]) +2 other tests skip
[343]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
[344]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][345] ([Intel XE#6703]) -> [SKIP][346] ([Intel XE#2313])
[345]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-pgflip-blt.html
[346]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-pgflip-blt.html
* igt@kms_plane@plane-panning-bottom-right-suspend:
- shard-adlp: [ABORT][347] ([Intel XE#6675]) -> [ABORT][348] ([Intel XE#2953] / [Intel XE#6675]) +1 other test abort
[347]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-8/igt@kms_plane@plane-panning-bottom-right-suspend.html
[348]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-1/igt@kms_plane@plane-panning-bottom-right-suspend.html
* igt@kms_pm_rpm@system-suspend-idle:
- shard-bmg: [SKIP][349] -> [ABORT][350] ([Intel XE#6675])
[349]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_pm_rpm@system-suspend-idle.html
[350]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-4/igt@kms_pm_rpm@system-suspend-idle.html
* igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area:
- shard-bmg: [SKIP][351] ([Intel XE#1406] / [Intel XE#6703]) -> [SKIP][352] ([Intel XE#1406] / [Intel XE#1489])
[351]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area.html
[352]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-2/igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area.html
* igt@xe_eudebug_online@breakpoint-many-sessions-tiles:
- shard-adlp: [SKIP][353] ([Intel XE#4837] / [Intel XE#5565]) -> [SKIP][354] ([Intel XE#2846])
[353]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-4/igt@xe_eudebug_online@breakpoint-many-sessions-tiles.html
[354]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-8/igt@xe_eudebug_online@breakpoint-many-sessions-tiles.html
* igt@xe_eudebug_online@writes-caching-vram-bb-vram-target-vram:
- shard-adlp: [SKIP][355] ([Intel XE#4837] / [Intel XE#5565]) -> [SKIP][356] ([Intel XE#455]) +3 other tests skip
[355]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-adlp-2/igt@xe_eudebug_online@writes-caching-vram-bb-vram-target-vram.html
[356]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-adlp-9/igt@xe_eudebug_online@writes-caching-vram-bb-vram-target-vram.html
* igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate:
- shard-bmg: [SKIP][357] ([Intel XE#6703]) -> [SKIP][358] ([Intel XE#2322]) +1 other test skip
[357]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867/shard-bmg-2/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate.html
[358]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/shard-bmg-1/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#1008]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1008
[Intel XE#1061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1061
[Intel XE#1062]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1062
[Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1125]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1125
[Intel XE#1126]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1126
[Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
[Intel XE#1131]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1131
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1231]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1231
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
[Intel XE#1416]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1416
[Intel XE#1420]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1420
[Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
[Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
[Intel XE#1428]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1428
[Intel XE#1466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1466
[Intel XE#1467]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1467
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
[Intel XE#1508]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1508
[Intel XE#1512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1512
[Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
[Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
[Intel XE#1948]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1948
[Intel XE#2007]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2007
[Intel XE#2029]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2029
[Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
[Intel XE#2233]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2233
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2286]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2286
[Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
[Intel XE#2373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2373
[Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
[Intel XE#2390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2390
[Intel XE#2392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2392
[Intel XE#2393]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2393
[Intel XE#2459]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2459
[Intel XE#2504]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2504
[Intel XE#2505]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2505
[Intel XE#255]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/255
[Intel XE#2596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2596
[Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
[Intel XE#2625]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2625
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2669]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2669
[Intel XE#2825]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2825
[Intel XE#2833]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2833
[Intel XE#2838]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2838
[Intel XE#2846]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2846
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#2907]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2907
[Intel XE#2925]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2925
[Intel XE#2927]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2927
[Intel XE#2934]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2934
[Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
[Intel XE#3009]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3009
[Intel XE#3012]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3012
[Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
[Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
[Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
[Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
[Intel XE#310]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/310
[Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
[Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
[Intel XE#3278]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3278
[Intel XE#330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/330
[Intel XE#3309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3309
[Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432
[Intel XE#3442]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3442
[Intel XE#346]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/346
[Intel XE#352]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/352
[Intel XE#356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/356
[Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
[Intel XE#362]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/362
[Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
[Intel XE#3884]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3884
[Intel XE#3894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3894
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#4130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4130
[Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
[Intel XE#4156]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4156
[Intel XE#4173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4173
[Intel XE#4294]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4294
[Intel XE#4329]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4329
[Intel XE#4331]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4331
[Intel XE#4351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4351
[Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
[Intel XE#4356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4356
[Intel XE#4416]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4416
[Intel XE#4422]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4422
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
[Intel XE#4608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4608
[Intel XE#4609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4609
[Intel XE#4633]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4633
[Intel XE#4650]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4650
[Intel XE#4683]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4683
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4757]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4757
[Intel XE#4814]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4814
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#488]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/488
[Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
[Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
[Intel XE#5007]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5007
[Intel XE#5020]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5020
[Intel XE#5021]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5021
[Intel XE#5100]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5100
[Intel XE#512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/512
[Intel XE#5132]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5132
[Intel XE#5195]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5195
[Intel XE#5208]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5208
[Intel XE#5299]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5299
[Intel XE#5300]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5300
[Intel XE#5466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5466
[Intel XE#5503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5503
[Intel XE#5561]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5561
[Intel XE#5564]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5564
[Intel XE#5565]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5565
[Intel XE#5575]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5575
[Intel XE#5591]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5591
[Intel XE#5594]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5594
[Intel XE#560]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/560
[Intel XE#5607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5607
[Intel XE#5614]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5614
[Intel XE#5626]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5626
[Intel XE#5671]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5671
[Intel XE#5712]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5712
[Intel XE#5742]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5742
[Intel XE#5745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5745
[Intel XE#5786]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5786
[Intel XE#5825]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5825
[Intel XE#584]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/584
[Intel XE#586]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/586
[Intel XE#599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/599
[Intel XE#5993]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5993
[Intel XE#6196]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6196
[Intel XE#623]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/623
[Intel XE#6251]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6251
[Intel XE#6281]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6281
[Intel XE#6312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6312
[Intel XE#6366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6366
[Intel XE#6376]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6376
[Intel XE#6503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6503
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#6529]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6529
[Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
[Intel XE#6557]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6557
[Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
[Intel XE#6590]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6590
[Intel XE#6598]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6598
[Intel XE#664]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/664
[Intel XE#6675]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6675
[Intel XE#6676]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6676
[Intel XE#6677]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6677
[Intel XE#6699]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6699
[Intel XE#6703]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6703
[Intel XE#6704]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6704
[Intel XE#6740]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6740
[Intel XE#6741]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6741
[Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
[Intel XE#702]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/702
[Intel XE#718]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/718
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#908]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/908
[Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
[Intel XE#979]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/979
Build changes
-------------
* IGT: IGT_8647 -> IGT_8648
* Linux: xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867 -> xe-pw-158380v1
IGT_8647: 8647
IGT_8648: 5f69b55422b228beba08cf87b66680d249c865c9 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4179-4ffeb1fd1362e2148a7ada498cbaef7b1de27867: 4ffeb1fd1362e2148a7ada498cbaef7b1de27867
xe-pw-158380v1: 158380v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-158380v1/index.html
[-- Attachment #2: Type: text/html, Size: 121933 bytes --]
^ permalink raw reply	[flat|nested] 30+ messages in thread
* ✗ CI.checkpatch: warning for Intel Xe GPU Debug Support (eudebug) v6 (rev2)
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (23 preceding siblings ...)
2025-12-02 18:30 ` ✗ Xe.CI.Full: failure " Patchwork
@ 2025-12-03 9:13 ` Patchwork
2025-12-03 9:15 ` ✓ CI.KUnit: success " Patchwork
25 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2025-12-03 9:13 UTC (permalink / raw)
To: Mika Kuoppala; +Cc: intel-xe
== Series Details ==
Series: Intel Xe GPU Debug Support (eudebug) v6 (rev2)
URL : https://patchwork.freedesktop.org/series/158380/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
2de9a3901bc28757c7906b454717b64e2a214021
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 5a617783302a47fc8815335070690b60a3e42726
Author: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Date: Tue Dec 2 15:52:39 2025 +0200
drm/xe/eudebug: Enable EU pagefault handling
The XE2 (and PVC) HW has a limitation that a pagefault due to an invalid
access will halt the corresponding EUs. To solve this problem, enable the
EU pagefault handling functionality, which allows unhalting pagefaulted
EU threads and lets the EU debugger be informed about the attention state
of EU threads during execution.
If a pagefault occurs, send the DRM_XE_EUDEBUG_EVENT_PAGEFAULT event
after handling the pagefault.
The pagefault handling is a mechanism that allows a stalled EU thread to
enter SIP mode by installing a temporary null page in the page table entry
where the pagefault happened.
A brief description of the pagefault handling flow between the KMD
and the EU thread is as follows:
(1) An EU thread accesses an unallocated address.
(2) A pagefault happens and the EU thread stalls.
(3) The XE KMD sets a forced EU thread exception to allow running EU threads
    to enter SIP mode (the KMD sets the ForceException / ForceExternalHalt
    bits of the TD_CTL register).
    Non-stalled (non-pagefaulted) EU threads enter SIP mode.
(4) The XE KMD installs a temporary null page in the page table entry for
    the address where the pagefault happened.
(5) The XE KMD replies with a pagefault-successful message to the GuC.
(6) The stalled EU thread resumes, as the pagefault condition has been resolved.
(7) The resumed EU thread enters SIP mode due to the forced exception set in (3).
(8) Adapted to consumer/producer pagefaults.
As this feature is designed to only work when eudebug is enabled, it should
have no impact on the regular recoverable pagefault code path.
v2: - pf->q holds the vm ref so drop it (Mika)
- streamline uapi (Mika)
- cleanup the pagefault through producer if (Mika)
Signed-off-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Signed-off-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
+ /mt/dim checkpatch 6a00c52137a5a8eb36cb762d4649abe16cc0ca33 drm-intel
4f88ee8d9a76 drm/xe/eudebug: Introduce eudebug interface
-:220: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#220:
new file mode 100644
-:475: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_d' - possible side-effects?
#475: FILE: drivers/gpu/drm/xe/xe_eudebug.c:251:
+#define xe_eudebug_disconnect(_d, _err) ({ \
+ if (_xe_eudebug_disconnect((_d), (_err))) { \
+ if ((_err) == 0 || (_err) == -ETIMEDOUT) \
+ eu_dbg((_d), "Session closed (%d)", (_err)); \
+ else \
+ eu_err((_d), "Session disconnected, err = %d (%s:%d)", \
+ (_err), __func__, __LINE__); \
+ } \
+})
-:475: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_err' - possible side-effects?
#475: FILE: drivers/gpu/drm/xe/xe_eudebug.c:251:
+#define xe_eudebug_disconnect(_d, _err) ({ \
+ if (_xe_eudebug_disconnect((_d), (_err))) { \
+ if ((_err) == 0 || (_err) == -ETIMEDOUT) \
+ eu_dbg((_d), "Session closed (%d)", (_err)); \
+ else \
+ eu_err((_d), "Session disconnected, err = %d (%s:%d)", \
+ (_err), __func__, __LINE__); \
+ } \
+})
-:827: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_d' - possible side-effects?
#827: FILE: drivers/gpu/drm/xe/xe_eudebug.c:603:
+#define xe_eudebug_event_put(_d, _err) ({ \
+ if ((_err)) \
+ xe_eudebug_disconnect((_d), (_err)); \
+ xe_eudebug_put((_d)); \
+ })
-:827: CHECK:MACRO_ARG_REUSE: Macro argument reuse '_err' - possible side-effects?
#827: FILE: drivers/gpu/drm/xe/xe_eudebug.c:603:
+#define xe_eudebug_event_put(_d, _err) ({ \
+ if ((_err)) \
+ xe_eudebug_disconnect((_d), (_err)); \
+ xe_eudebug_put((_d)); \
+ })
-:1291: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#1291: FILE: drivers/gpu/drm/xe/xe_eudebug.h:20:
+#define XE_EUDEBUG_DBG_ARGS(d) (d)->session, \
+ atomic_long_read(&(d)->events.seqno), \
+ !READ_ONCE(d->target.xef) ? "disconnected" : "", \
+ current->pid, \
+ task_tgid_nr(current), \
+ READ_ONCE(d->target.xef) ? d->target.xef->pid : -1
BUT SEE:
do {} while (0) advice is over-stated in a few situations:
The more obvious case is macros, like MODULE_PARM_DESC, invoked at
file-scope, where C disallows code (it must be in functions). See
$exceptions if you have one to add by name.
More troublesome is declarative macros used at top of new scope,
like DECLARE_PER_CPU. These might just compile with a do-while-0
wrapper, but would be incorrect. Most of these are handled by
detecting struct,union,etc declaration primitives in $exceptions.
Theres also macros called inside an if (block), which "return" an
expression. These cannot do-while, and need a ({}) wrapper.
Enjoy this qualification while we work to improve our heuristics.
-:1291: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'd' - possible side-effects?
#1291: FILE: drivers/gpu/drm/xe/xe_eudebug.h:20:
+#define XE_EUDEBUG_DBG_ARGS(d) (d)->session, \
+ atomic_long_read(&(d)->events.seqno), \
+ !READ_ONCE(d->target.xef) ? "disconnected" : "", \
+ current->pid, \
+ task_tgid_nr(current), \
+ READ_ONCE(d->target.xef) ? d->target.xef->pid : -1
-:1298: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'd' - possible side-effects?
#1298: FILE: drivers/gpu/drm/xe/xe_eudebug.h:27:
+#define eu_err(d, fmt, ...) drm_err(&(d)->xe->drm, XE_EUDEBUG_DBG_STR # fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
-:1300: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'd' - possible side-effects?
#1300: FILE: drivers/gpu/drm/xe/xe_eudebug.h:29:
+#define eu_warn(d, fmt, ...) drm_warn(&(d)->xe->drm, XE_EUDEBUG_DBG_STR # fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
-:1302: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'd' - possible side-effects?
#1302: FILE: drivers/gpu/drm/xe/xe_eudebug.h:31:
+#define eu_dbg(d, fmt, ...) drm_dbg(&(d)->xe->drm, XE_EUDEBUG_DBG_STR # fmt, \
+ XE_EUDEBUG_DBG_ARGS(d), ##__VA_ARGS__)
-:1520: WARNING:LONG_LINE: line length of 130 exceeds 100 columns
#1520: FILE: include/uapi/drm/xe_drm.h:127:
+#define DRM_IOCTL_XE_EUDEBUG_CONNECT DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_EUDEBUG_CONNECT, struct drm_xe_eudebug_connect)
total: 1 errors, 2 warnings, 8 checks, 1500 lines checked
86acf646b019 drm/xe/eudebug: Introduce discovery for resources
6ed3cde9df90 drm/xe/eudebug: Introduce exec_queue events
92e3650da9f7 drm/xe: Add EUDEBUG_ENABLE exec queue property
232d4097c8b0 drm/xe/eudebug: Mark guc contexts as debuggable
9184cbe77524 drm/xe: Introduce ADD_DEBUG_DATA and REMOVE_DEBUG_DATA vm bind ops
-:44: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#44:
new file mode 100644
-:617: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#617: FILE: drivers/gpu/drm/xe/xe_vm.c:3431:
+ if (XE_IOCTL_DBG(xe, operation != DRM_XE_VM_BIND_OP_ADD_DEBUG_DATA &&
+ operation != DRM_XE_VM_BIND_OP_REMOVE_DEBUG_DATA &&
-:620: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#620: FILE: drivers/gpu/drm/xe/xe_vm.c:3434:
+ XE_IOCTL_DBG(xe, ext.name == XE_VM_BIND_OP_EXTENSIONS_DEBUG_DATA &&
+ ++debug_data_count > 1))
-:740: CHECK:UNCOMMENTED_DEFINITION: struct mutex definition without comment
#740: FILE: drivers/gpu/drm/xe/xe_vm_types.h:343:
+ struct mutex lock;
total: 0 errors, 1 warnings, 3 checks, 759 lines checked
b1b98b59491c drm/xe/eudebug: Introduce vm bind and vm bind debug data events
-:7: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#7:
This patch adds events to track the bind ioctl and associated debug data add
-:390: WARNING:LONG_LINE_COMMENT: line length of 102 exceeds 100 columns
#390: FILE: include/uapi/drm/xe_drm_eudebug.h:92:
+ * │ EVENT_VM_BIND ├──────────────────┬─┬┄┐
-:391: WARNING:LONG_LINE_COMMENT: line length of 108 exceeds 100 columns
#391: FILE: include/uapi/drm/xe_drm_eudebug.h:93:
+ * └───────────────────────┘ │ │ ┊
-:392: WARNING:LONG_LINE_COMMENT: line length of 130 exceeds 100 columns
#392: FILE: include/uapi/drm/xe_drm_eudebug.h:94:
+ * ┌──────────────────────────────────┐ │ │ ┊
-:394: WARNING:LONG_LINE_COMMENT: line length of 128 exceeds 100 columns
#394: FILE: include/uapi/drm/xe_drm_eudebug.h:96:
+ * └──────────────────────────────────┘ │ ┊
-:396: WARNING:LONG_LINE_COMMENT: line length of 128 exceeds 100 columns
#396: FILE: include/uapi/drm/xe_drm_eudebug.h:98:
+ * ┌──────────────────────────────────┐ │ ┊
-:398: WARNING:LONG_LINE_COMMENT: line length of 126 exceeds 100 columns
#398: FILE: include/uapi/drm/xe_drm_eudebug.h:100:
+ * └──────────────────────────────────┘ ┊
-:400: WARNING:LONG_LINE_COMMENT: line length of 126 exceeds 100 columns
#400: FILE: include/uapi/drm/xe_drm_eudebug.h:102:
+ * ┌┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┐ ┊
-:402: WARNING:LONG_LINE_COMMENT: line length of 116 exceeds 100 columns
#402: FILE: include/uapi/drm/xe_drm_eudebug.h:104:
+ * └┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┘
total: 0 errors, 9 warnings, 0 checks, 378 lines checked
b7ab6f17093b drm/xe/eudebug: Add UFENCE events with acks
-:309: CHECK:COMPARISON_TO_NULL: Comparison to NULL could be written "!ufence"
#309: FILE: drivers/gpu/drm/xe/xe_eudebug.c:1195:
+ xe_assert(vm->xe, ufence == NULL);
-:608: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#608: FILE: drivers/gpu/drm/xe/xe_sync_types.h:26:
+ spinlock_t lock;
total: 0 errors, 0 warnings, 2 checks, 610 lines checked
1ecb1c146a40 drm/xe/eudebug: vm open/pread/pwrite
-:124: CHECK:BRACES: Blank lines aren't necessary after an open brace '{'
#124: FILE: drivers/gpu/drm/xe/xe_eudebug.c:669:
+{
+
-:148: CHECK:LINE_SPACING: Please don't use multiple blank lines
#148: FILE: drivers/gpu/drm/xe/xe_eudebug.c:693:
+
+
-:188: ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in parentheses
#188: FILE: drivers/gpu/drm/xe/xe_eudebug.h:42:
+#define xe_eudebug_for_each_hw_engine(__hwe, __gt, __id) \
+ for_each_hw_engine(__hwe, __gt, __id) \
+ if (xe_hw_engine_has_eudebug(__hwe))
BUT SEE:
do {} while (0) advice is over-stated in a few situations:
The more obvious case is macros, like MODULE_PARM_DESC, invoked at
file-scope, where C disallows code (it must be in functions). See
$exceptions if you have one to add by name.
More troublesome is declarative macros used at top of new scope,
like DECLARE_PER_CPU. These might just compile with a do-while-0
wrapper, but would be incorrect. Most of these are handled by
detecting struct,union,etc declaration primitives in $exceptions.
Theres also macros called inside an if (block), which "return" an
expression. These cannot do-while, and need a ({}) wrapper.
Enjoy this qualification while we work to improve our heuristics.
-:188: CHECK:MACRO_ARG_REUSE: Macro argument reuse '__hwe' - possible side-effects?
#188: FILE: drivers/gpu/drm/xe/xe_eudebug.h:42:
+#define xe_eudebug_for_each_hw_engine(__hwe, __gt, __id) \
+ for_each_hw_engine(__hwe, __gt, __id) \
+ if (xe_hw_engine_has_eudebug(__hwe))
-:237: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#237:
new file mode 100644
total: 1 errors, 1 warnings, 3 checks, 619 lines checked
ee0c5b36bb8f drm/xe/eudebug: userptr vm pread/pwrite
8c318a3dc892 drm/xe/eudebug: hw enablement for eudebug
-:107: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#107:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 452 lines checked
403b3838014e drm/xe/eudebug: Introduce EU control interface
-:277: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#277: FILE: drivers/gpu/drm/xe/xe_eudebug_hw.c:183:
+static bool engine_has_runalone_set(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
-:283: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#283: FILE: drivers/gpu/drm/xe/xe_eudebug_hw.c:189:
+static bool engine_has_context_set(const struct xe_hw_engine * const hwe,
+ u32 rcu_debug1)
-:915: CHECK:BRACES: Blank lines aren't necessary after an open brace '{'
#915: FILE: include/uapi/drm/xe_drm_eudebug.h:184:
+struct drm_xe_eudebug_eu_control {
+
total: 0 errors, 0 warnings, 3 checks, 860 lines checked
1879679d9ec9 drm/xe/eudebug: Introduce per device attention scan worker
1cb16bca5aae drm/xe/eudebug_test: Introduce xe_eudebug wa kunit test
-:16: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#16:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 201 lines checked
6c904ea32cf0 drm/xe: Implement SR-IOV and eudebug exclusivity
32affb9db400 drm/xe: Add xe_client_debugfs and introduce debug_data file
-:31: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#31:
new file mode 100644
-:85: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#85: FILE: drivers/gpu/drm/xe/xe_client_debugfs.c:50:
+ len = snprintf(kbuf, MAX_LINE_LEN, "%lu 0x%llx-0x%llx 0x%llx 0x%x\t%s\n",
+ vm_index,
total: 0 errors, 1 warnings, 1 checks, 161 lines checked
78c4d67f01d9 drm/xe/eudebug: Add read/count/compare helper for eu attention
-:127: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#127:
new file mode 100644
-:149: CHECK:SPACING: spaces preferred around that '/' (ctx:VxV)
#149: FILE: drivers/gpu/drm/xe/xe_gt_debug_types.h:18:
+ XE_GT_EU_ATT_MAX_THREADS/8];
^
total: 0 errors, 1 warnings, 1 checks, 120 lines checked
d65709a989bd drm/xe/vm: Support for adding null page VMA to VM on request
-:15: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#15:
[1] https://lore.kernel.org/intel-xe/20230829231648.4438-1-yu.bruce.chang@intel.com/
total: 0 errors, 1 warnings, 0 checks, 42 lines checked
f1c09186db00 drm/xe/eudebug: Introduce EU pagefault handling interface
-:337: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#337:
new file mode 100644
-:454: CHECK:UNCOMMENTED_DEFINITION: spinlock_t definition without comment
#454: FILE: drivers/gpu/drm/xe/xe_eudebug_pagefault.c:113:
+ spinlock_t lock;
-:462: CHECK:COMPARISON_TO_NULL: Comparison to NULL could be written "!fence"
#462: FILE: drivers/gpu/drm/xe/xe_eudebug_pagefault.c:121:
+ if (fence == NULL)
-:558: CHECK:USLEEP_RANGE: usleep_range is preferred over udelay; see function description of usleep_range() and udelay().
#558: FILE: drivers/gpu/drm/xe/xe_eudebug_pagefault.c:217:
+ udelay(200);
total: 0 errors, 1 warnings, 3 checks, 859 lines checked
5a617783302a drm/xe/eudebug: Enable EU pagefault handling
^ permalink raw reply	[flat|nested] 30+ messages in thread
* ✓ CI.KUnit: success for Intel Xe GPU Debug Support (eudebug) v6 (rev2)
2025-12-02 13:52 [PATCH 00/20] Intel Xe GPU Debug Support (eudebug) v6 Mika Kuoppala
` (24 preceding siblings ...)
2025-12-03 9:13 ` ✗ CI.checkpatch: warning for Intel Xe GPU Debug Support (eudebug) v6 (rev2) Patchwork
@ 2025-12-03 9:15 ` Patchwork
25 siblings, 0 replies; 30+ messages in thread
From: Patchwork @ 2025-12-03 9:15 UTC (permalink / raw)
To: Mika Kuoppala; +Cc: intel-xe
== Series Details ==
Series: Intel Xe GPU Debug Support (eudebug) v6 (rev2)
URL : https://patchwork.freedesktop.org/series/158380/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[09:13:50] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[09:13:54] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[09:14:25] Starting KUnit Kernel (1/1)...
[09:14:25] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[09:14:25] ================== guc_buf (11 subtests) ===================
[09:14:25] [PASSED] test_smallest
[09:14:25] [PASSED] test_largest
[09:14:25] [PASSED] test_granular
[09:14:25] [PASSED] test_unique
[09:14:25] [PASSED] test_overlap
[09:14:25] [PASSED] test_reusable
[09:14:25] [PASSED] test_too_big
[09:14:25] [PASSED] test_flush
[09:14:25] [PASSED] test_lookup
[09:14:25] [PASSED] test_data
[09:14:25] [PASSED] test_class
[09:14:25] ===================== [PASSED] guc_buf =====================
[09:14:25] =================== guc_dbm (7 subtests) ===================
[09:14:25] [PASSED] test_empty
[09:14:25] [PASSED] test_default
[09:14:25] ======================== test_size ========================
[09:14:25] [PASSED] 4
[09:14:25] [PASSED] 8
[09:14:25] [PASSED] 32
[09:14:25] [PASSED] 256
[09:14:25] ==================== [PASSED] test_size ====================
[09:14:25] ======================= test_reuse ========================
[09:14:25] [PASSED] 4
[09:14:25] [PASSED] 8
[09:14:25] [PASSED] 32
[09:14:25] [PASSED] 256
[09:14:25] =================== [PASSED] test_reuse ====================
[09:14:25] =================== test_range_overlap ====================
[09:14:25] [PASSED] 4
[09:14:25] [PASSED] 8
[09:14:25] [PASSED] 32
[09:14:25] [PASSED] 256
[09:14:25] =============== [PASSED] test_range_overlap ================
[09:14:25] =================== test_range_compact ====================
[09:14:25] [PASSED] 4
[09:14:25] [PASSED] 8
[09:14:25] [PASSED] 32
[09:14:25] [PASSED] 256
[09:14:25] =============== [PASSED] test_range_compact ================
[09:14:25] ==================== test_range_spare =====================
[09:14:25] [PASSED] 4
[09:14:25] [PASSED] 8
[09:14:25] [PASSED] 32
[09:14:25] [PASSED] 256
[09:14:25] ================ [PASSED] test_range_spare =================
[09:14:25] ===================== [PASSED] guc_dbm =====================
[09:14:25] =================== guc_idm (6 subtests) ===================
[09:14:25] [PASSED] bad_init
[09:14:25] [PASSED] no_init
[09:14:25] [PASSED] init_fini
[09:14:25] [PASSED] check_used
[09:14:25] [PASSED] check_quota
[09:14:25] [PASSED] check_all
[09:14:25] ===================== [PASSED] guc_idm =====================
[09:14:25] ================== no_relay (3 subtests) ===================
[09:14:25] [PASSED] xe_drops_guc2pf_if_not_ready
[09:14:25] [PASSED] xe_drops_guc2vf_if_not_ready
[09:14:25] [PASSED] xe_rejects_send_if_not_ready
[09:14:25] ==================== [PASSED] no_relay =====================
[09:14:25] ================== pf_relay (14 subtests) ==================
[09:14:25] [PASSED] pf_rejects_guc2pf_too_short
[09:14:25] [PASSED] pf_rejects_guc2pf_too_long
[09:14:25] [PASSED] pf_rejects_guc2pf_no_payload
[09:14:25] [PASSED] pf_fails_no_payload
[09:14:25] [PASSED] pf_fails_bad_origin
[09:14:25] [PASSED] pf_fails_bad_type
[09:14:25] [PASSED] pf_txn_reports_error
[09:14:25] [PASSED] pf_txn_sends_pf2guc
[09:14:25] [PASSED] pf_sends_pf2guc
[09:14:25] [SKIPPED] pf_loopback_nop
[09:14:25] [SKIPPED] pf_loopback_echo
[09:14:25] [SKIPPED] pf_loopback_fail
[09:14:25] [SKIPPED] pf_loopback_busy
[09:14:25] [SKIPPED] pf_loopback_retry
[09:14:25] ==================== [PASSED] pf_relay =====================
[09:14:25] ================== vf_relay (3 subtests) ===================
[09:14:25] [PASSED] vf_rejects_guc2vf_too_short
[09:14:25] [PASSED] vf_rejects_guc2vf_too_long
[09:14:25] [PASSED] vf_rejects_guc2vf_no_payload
[09:14:25] ==================== [PASSED] vf_relay =====================
[09:14:25] ================ pf_gt_config (6 subtests) =================
[09:14:25] [PASSED] fair_contexts_1vf
[09:14:25] [PASSED] fair_doorbells_1vf
[09:14:25] [PASSED] fair_ggtt_1vf
[09:14:25] ====================== fair_contexts ======================
[09:14:25] [PASSED] 1 VF
[09:14:25] [PASSED] 2 VFs
[09:14:25] [PASSED] 3 VFs
[09:14:25] [PASSED] 4 VFs
[09:14:25] [PASSED] 5 VFs
[09:14:25] [PASSED] 6 VFs
[09:14:25] [PASSED] 7 VFs
[09:14:25] [PASSED] 8 VFs
[09:14:25] [PASSED] 9 VFs
[09:14:25] [PASSED] 10 VFs
[09:14:25] [PASSED] 11 VFs
[09:14:25] [PASSED] 12 VFs
[09:14:25] [PASSED] 13 VFs
[09:14:25] [PASSED] 14 VFs
[09:14:25] [PASSED] 15 VFs
[09:14:25] [PASSED] 16 VFs
[09:14:25] [PASSED] 17 VFs
[09:14:25] [PASSED] 18 VFs
[09:14:25] [PASSED] 19 VFs
[09:14:25] [PASSED] 20 VFs
[09:14:25] [PASSED] 21 VFs
[09:14:25] [PASSED] 22 VFs
[09:14:25] [PASSED] 23 VFs
[09:14:25] [PASSED] 24 VFs
[09:14:25] [PASSED] 25 VFs
[09:14:25] [PASSED] 26 VFs
[09:14:25] [PASSED] 27 VFs
[09:14:25] [PASSED] 28 VFs
[09:14:25] [PASSED] 29 VFs
[09:14:25] [PASSED] 30 VFs
[09:14:25] [PASSED] 31 VFs
[09:14:25] [PASSED] 32 VFs
[09:14:25] [PASSED] 33 VFs
[09:14:25] [PASSED] 34 VFs
[09:14:25] [PASSED] 35 VFs
[09:14:25] [PASSED] 36 VFs
[09:14:25] [PASSED] 37 VFs
[09:14:25] [PASSED] 38 VFs
[09:14:25] [PASSED] 39 VFs
[09:14:25] [PASSED] 40 VFs
[09:14:25] [PASSED] 41 VFs
[09:14:25] [PASSED] 42 VFs
[09:14:25] [PASSED] 43 VFs
[09:14:25] [PASSED] 44 VFs
[09:14:25] [PASSED] 45 VFs
[09:14:25] [PASSED] 46 VFs
[09:14:25] [PASSED] 47 VFs
[09:14:25] [PASSED] 48 VFs
[09:14:25] [PASSED] 49 VFs
[09:14:25] [PASSED] 50 VFs
[09:14:25] [PASSED] 51 VFs
[09:14:25] [PASSED] 52 VFs
[09:14:25] [PASSED] 53 VFs
[09:14:25] [PASSED] 54 VFs
[09:14:25] [PASSED] 55 VFs
[09:14:25] [PASSED] 56 VFs
[09:14:25] [PASSED] 57 VFs
[09:14:25] [PASSED] 58 VFs
[09:14:25] [PASSED] 59 VFs
[09:14:25] [PASSED] 60 VFs
[09:14:25] [PASSED] 61 VFs
[09:14:25] [PASSED] 62 VFs
[09:14:25] [PASSED] 63 VFs
[09:14:25] ================== [PASSED] fair_contexts ==================
[09:14:25] ===================== fair_doorbells ======================
[09:14:25] [PASSED] 1 VF
[09:14:25] [PASSED] 2 VFs
[09:14:25] [PASSED] 3 VFs
[09:14:25] [PASSED] 4 VFs
[09:14:25] [PASSED] 5 VFs
[09:14:25] [PASSED] 6 VFs
[09:14:25] [PASSED] 7 VFs
[09:14:25] [PASSED] 8 VFs
[09:14:25] [PASSED] 9 VFs
[09:14:25] [PASSED] 10 VFs
[09:14:25] [PASSED] 11 VFs
[09:14:25] [PASSED] 12 VFs
[09:14:25] [PASSED] 13 VFs
[09:14:25] [PASSED] 14 VFs
[09:14:25] [PASSED] 15 VFs
[09:14:25] [PASSED] 16 VFs
[09:14:25] [PASSED] 17 VFs
[09:14:25] [PASSED] 18 VFs
[09:14:25] [PASSED] 19 VFs
[09:14:25] [PASSED] 20 VFs
[09:14:25] [PASSED] 21 VFs
[09:14:25] [PASSED] 22 VFs
[09:14:25] [PASSED] 23 VFs
[09:14:25] [PASSED] 24 VFs
[09:14:25] [PASSED] 25 VFs
[09:14:25] [PASSED] 26 VFs
[09:14:25] [PASSED] 27 VFs
[09:14:25] [PASSED] 28 VFs
[09:14:25] [PASSED] 29 VFs
[09:14:25] [PASSED] 30 VFs
[09:14:25] [PASSED] 31 VFs
[09:14:25] [PASSED] 32 VFs
[09:14:25] [PASSED] 33 VFs
[09:14:25] [PASSED] 34 VFs
[09:14:25] [PASSED] 35 VFs
[09:14:25] [PASSED] 36 VFs
[09:14:25] [PASSED] 37 VFs
[09:14:25] [PASSED] 38 VFs
[09:14:25] [PASSED] 39 VFs
[09:14:25] [PASSED] 40 VFs
[09:14:25] [PASSED] 41 VFs
[09:14:25] [PASSED] 42 VFs
[09:14:25] [PASSED] 43 VFs
[09:14:25] [PASSED] 44 VFs
[09:14:25] [PASSED] 45 VFs
[09:14:25] [PASSED] 46 VFs
[09:14:25] [PASSED] 47 VFs
[09:14:25] [PASSED] 48 VFs
[09:14:25] [PASSED] 49 VFs
[09:14:25] [PASSED] 50 VFs
[09:14:25] [PASSED] 51 VFs
[09:14:25] [PASSED] 52 VFs
[09:14:25] [PASSED] 53 VFs
[09:14:25] [PASSED] 54 VFs
[09:14:25] [PASSED] 55 VFs
[09:14:25] [PASSED] 56 VFs
[09:14:25] [PASSED] 57 VFs
[09:14:25] [PASSED] 58 VFs
[09:14:25] [PASSED] 59 VFs
[09:14:25] [PASSED] 60 VFs
[09:14:25] [PASSED] 61 VFs
[09:14:25] [PASSED] 62 VFs
[09:14:25] [PASSED] 63 VFs
[09:14:25] ================= [PASSED] fair_doorbells ==================
[09:14:25] ======================== fair_ggtt ========================
[09:14:25] [PASSED] 1 VF
[09:14:25] [PASSED] 2 VFs
[09:14:25] [PASSED] 3 VFs
[09:14:25] [PASSED] 4 VFs
[09:14:25] [PASSED] 5 VFs
[09:14:25] [PASSED] 6 VFs
[09:14:25] [PASSED] 7 VFs
[09:14:25] [PASSED] 8 VFs
[09:14:25] [PASSED] 9 VFs
[09:14:25] [PASSED] 10 VFs
[09:14:25] [PASSED] 11 VFs
[09:14:25] [PASSED] 12 VFs
[09:14:25] [PASSED] 13 VFs
[09:14:25] [PASSED] 14 VFs
[09:14:25] [PASSED] 15 VFs
[09:14:25] [PASSED] 16 VFs
[09:14:25] [PASSED] 17 VFs
[09:14:25] [PASSED] 18 VFs
[09:14:25] [PASSED] 19 VFs
[09:14:25] [PASSED] 20 VFs
[09:14:25] [PASSED] 21 VFs
[09:14:25] [PASSED] 22 VFs
[09:14:25] [PASSED] 23 VFs
[09:14:25] [PASSED] 24 VFs
[09:14:25] [PASSED] 25 VFs
[09:14:25] [PASSED] 26 VFs
[09:14:25] [PASSED] 27 VFs
[09:14:25] [PASSED] 28 VFs
[09:14:25] [PASSED] 29 VFs
[09:14:25] [PASSED] 30 VFs
[09:14:25] [PASSED] 31 VFs
[09:14:25] [PASSED] 32 VFs
[09:14:25] [PASSED] 33 VFs
[09:14:25] [PASSED] 34 VFs
[09:14:25] [PASSED] 35 VFs
[09:14:25] [PASSED] 36 VFs
[09:14:25] [PASSED] 37 VFs
[09:14:25] [PASSED] 38 VFs
[09:14:25] [PASSED] 39 VFs
[09:14:25] [PASSED] 40 VFs
[09:14:25] [PASSED] 41 VFs
[09:14:25] [PASSED] 42 VFs
[09:14:25] [PASSED] 43 VFs
[09:14:25] [PASSED] 44 VFs
[09:14:25] [PASSED] 45 VFs
[09:14:25] [PASSED] 46 VFs
[09:14:25] [PASSED] 47 VFs
[09:14:25] [PASSED] 48 VFs
[09:14:25] [PASSED] 49 VFs
[09:14:25] [PASSED] 50 VFs
[09:14:25] [PASSED] 51 VFs
[09:14:25] [PASSED] 52 VFs
[09:14:25] [PASSED] 53 VFs
[09:14:25] [PASSED] 54 VFs
[09:14:25] [PASSED] 55 VFs
[09:14:25] [PASSED] 56 VFs
[09:14:25] [PASSED] 57 VFs
[09:14:25] [PASSED] 58 VFs
[09:14:25] [PASSED] 59 VFs
[09:14:25] [PASSED] 60 VFs
[09:14:25] [PASSED] 61 VFs
[09:14:25] [PASSED] 62 VFs
[09:14:25] [PASSED] 63 VFs
[09:14:25] ==================== [PASSED] fair_ggtt ====================
[09:14:25] ================== [PASSED] pf_gt_config ===================
[09:14:25] ===================== lmtt (1 subtest) =====================
[09:14:25] ======================== test_ops =========================
[09:14:25] [PASSED] 2-level
[09:14:25] [PASSED] multi-level
[09:14:25] ==================== [PASSED] test_ops =====================
[09:14:25] ====================== [PASSED] lmtt =======================
[09:14:25] ================= pf_service (11 subtests) =================
[09:14:25] [PASSED] pf_negotiate_any
[09:14:25] [PASSED] pf_negotiate_base_match
[09:14:25] [PASSED] pf_negotiate_base_newer
[09:14:25] [PASSED] pf_negotiate_base_next
[09:14:25] [SKIPPED] pf_negotiate_base_older
[09:14:25] [PASSED] pf_negotiate_base_prev
[09:14:25] [PASSED] pf_negotiate_latest_match
[09:14:25] [PASSED] pf_negotiate_latest_newer
[09:14:25] [PASSED] pf_negotiate_latest_next
[09:14:25] [SKIPPED] pf_negotiate_latest_older
[09:14:25] [SKIPPED] pf_negotiate_latest_prev
[09:14:25] =================== [PASSED] pf_service ====================
[09:14:25] ================== xe_eudebug (1 subtest) ==================
[09:14:25] =============== xe_eudebug_toggle_reg_kunit ===============
[09:14:25] ========== [SKIPPED] xe_eudebug_toggle_reg_kunit ===========
[09:14:25] =================== [SKIPPED] xe_eudebug ===================
[09:14:25] ================= xe_guc_g2g (2 subtests) ==================
[09:14:25] ============== xe_live_guc_g2g_kunit_default ==============
[09:14:25] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[09:14:25] ============== xe_live_guc_g2g_kunit_allmem ===============
[09:14:25] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[09:14:25] =================== [SKIPPED] xe_guc_g2g ===================
[09:14:25] =================== xe_mocs (2 subtests) ===================
[09:14:25] ================ xe_live_mocs_kernel_kunit ================
[09:14:25] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[09:14:25] ================ xe_live_mocs_reset_kunit =================
[09:14:25] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[09:14:25] ==================== [SKIPPED] xe_mocs =====================
[09:14:25] ================= xe_migrate (2 subtests) ==================
[09:14:25] ================= xe_migrate_sanity_kunit =================
[09:14:25] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[09:14:25] ================== xe_validate_ccs_kunit ==================
[09:14:25] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[09:14:25] =================== [SKIPPED] xe_migrate ===================
[09:14:25] ================== xe_dma_buf (1 subtest) ==================
[09:14:25] ==================== xe_dma_buf_kunit =====================
[09:14:25] ================ [SKIPPED] xe_dma_buf_kunit ================
[09:14:25] =================== [SKIPPED] xe_dma_buf ===================
[09:14:25] ================= xe_bo_shrink (1 subtest) =================
[09:14:25] =================== xe_bo_shrink_kunit ====================
[09:14:25] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[09:14:25] ================== [SKIPPED] xe_bo_shrink ==================
[09:14:25] ==================== xe_bo (2 subtests) ====================
[09:14:25] ================== xe_ccs_migrate_kunit ===================
[09:14:25] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[09:14:25] ==================== xe_bo_evict_kunit ====================
[09:14:25] =============== [SKIPPED] xe_bo_evict_kunit ================
[09:14:25] ===================== [SKIPPED] xe_bo ======================
[09:14:25] ==================== args (11 subtests) ====================
[09:14:25] [PASSED] count_args_test
[09:14:25] [PASSED] call_args_example
[09:14:25] [PASSED] call_args_test
[09:14:25] [PASSED] drop_first_arg_example
[09:14:25] [PASSED] drop_first_arg_test
[09:14:25] [PASSED] first_arg_example
[09:14:25] [PASSED] first_arg_test
[09:14:25] [PASSED] last_arg_example
[09:14:25] [PASSED] last_arg_test
[09:14:25] [PASSED] pick_arg_example
[09:14:25] [PASSED] sep_comma_example
[09:14:25] ====================== [PASSED] args =======================
[09:14:25] =================== xe_pci (3 subtests) ====================
[09:14:25] ==================== check_graphics_ip ====================
[09:14:25] [PASSED] 12.00 Xe_LP
[09:14:25] [PASSED] 12.10 Xe_LP+
[09:14:25] [PASSED] 12.55 Xe_HPG
[09:14:25] [PASSED] 12.60 Xe_HPC
[09:14:25] [PASSED] 12.70 Xe_LPG
[09:14:25] [PASSED] 12.71 Xe_LPG
[09:14:25] [PASSED] 12.74 Xe_LPG+
[09:14:25] [PASSED] 20.01 Xe2_HPG
[09:14:25] [PASSED] 20.02 Xe2_HPG
[09:14:25] [PASSED] 20.04 Xe2_LPG
[09:14:25] [PASSED] 30.00 Xe3_LPG
[09:14:25] [PASSED] 30.01 Xe3_LPG
[09:14:25] [PASSED] 30.03 Xe3_LPG
[09:14:25] [PASSED] 30.04 Xe3_LPG
[09:14:25] [PASSED] 30.05 Xe3_LPG
[09:14:25] [PASSED] 35.11 Xe3p_XPC
[09:14:25] ================ [PASSED] check_graphics_ip ================
[09:14:25] ===================== check_media_ip ======================
[09:14:25] [PASSED] 12.00 Xe_M
[09:14:25] [PASSED] 12.55 Xe_HPM
[09:14:25] [PASSED] 13.00 Xe_LPM+
[09:14:25] [PASSED] 13.01 Xe2_HPM
[09:14:25] [PASSED] 20.00 Xe2_LPM
[09:14:25] [PASSED] 30.00 Xe3_LPM
[09:14:25] [PASSED] 30.02 Xe3_LPM
[09:14:25] [PASSED] 35.00 Xe3p_LPM
[09:14:25] [PASSED] 35.03 Xe3p_HPM
[09:14:25] ================= [PASSED] check_media_ip ==================
[09:14:25] =================== check_platform_desc ===================
[09:14:25] [PASSED] 0x9A60 (TIGERLAKE)
[09:14:25] [PASSED] 0x9A68 (TIGERLAKE)
[09:14:25] [PASSED] 0x9A70 (TIGERLAKE)
[09:14:25] [PASSED] 0x9A40 (TIGERLAKE)
[09:14:25] [PASSED] 0x9A49 (TIGERLAKE)
[09:14:25] [PASSED] 0x9A59 (TIGERLAKE)
[09:14:25] [PASSED] 0x9A78 (TIGERLAKE)
[09:14:25] [PASSED] 0x9AC0 (TIGERLAKE)
[09:14:25] [PASSED] 0x9AC9 (TIGERLAKE)
[09:14:25] [PASSED] 0x9AD9 (TIGERLAKE)
[09:14:25] [PASSED] 0x9AF8 (TIGERLAKE)
[09:14:25] [PASSED] 0x4C80 (ROCKETLAKE)
[09:14:25] [PASSED] 0x4C8A (ROCKETLAKE)
[09:14:25] [PASSED] 0x4C8B (ROCKETLAKE)
[09:14:25] [PASSED] 0x4C8C (ROCKETLAKE)
[09:14:25] [PASSED] 0x4C90 (ROCKETLAKE)
[09:14:25] [PASSED] 0x4C9A (ROCKETLAKE)
[09:14:25] [PASSED] 0x4680 (ALDERLAKE_S)
[09:14:25] [PASSED] 0x4682 (ALDERLAKE_S)
[09:14:25] [PASSED] 0x4688 (ALDERLAKE_S)
[09:14:25] [PASSED] 0x468A (ALDERLAKE_S)
[09:14:25] [PASSED] 0x468B (ALDERLAKE_S)
[09:14:25] [PASSED] 0x4690 (ALDERLAKE_S)
[09:14:25] [PASSED] 0x4692 (ALDERLAKE_S)
[09:14:25] [PASSED] 0x4693 (ALDERLAKE_S)
[09:14:25] [PASSED] 0x46A0 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46A1 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46A2 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46A3 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46A6 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46A8 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46AA (ALDERLAKE_P)
[09:14:25] [PASSED] 0x462A (ALDERLAKE_P)
[09:14:25] [PASSED] 0x4626 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x4628 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46B0 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46B1 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46B2 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46B3 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46C0 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46C1 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46C2 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46C3 (ALDERLAKE_P)
[09:14:25] [PASSED] 0x46D0 (ALDERLAKE_N)
[09:14:25] [PASSED] 0x46D1 (ALDERLAKE_N)
[09:14:25] [PASSED] 0x46D2 (ALDERLAKE_N)
[09:14:25] [PASSED] 0x46D3 (ALDERLAKE_N)
[09:14:25] [PASSED] 0x46D4 (ALDERLAKE_N)
[09:14:25] [PASSED] 0xA721 (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA7A1 (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA7A9 (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA7AC (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA7AD (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA720 (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA7A0 (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA7A8 (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA7AA (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA7AB (ALDERLAKE_P)
[09:14:25] [PASSED] 0xA780 (ALDERLAKE_S)
[09:14:25] [PASSED] 0xA781 (ALDERLAKE_S)
[09:14:25] [PASSED] 0xA782 (ALDERLAKE_S)
[09:14:25] [PASSED] 0xA783 (ALDERLAKE_S)
[09:14:25] [PASSED] 0xA788 (ALDERLAKE_S)
[09:14:25] [PASSED] 0xA789 (ALDERLAKE_S)
[09:14:25] [PASSED] 0xA78A (ALDERLAKE_S)
[09:14:25] [PASSED] 0xA78B (ALDERLAKE_S)
[09:14:25] [PASSED] 0x4905 (DG1)
[09:14:25] [PASSED] 0x4906 (DG1)
[09:14:25] [PASSED] 0x4907 (DG1)
[09:14:25] [PASSED] 0x4908 (DG1)
[09:14:25] [PASSED] 0x4909 (DG1)
[09:14:25] [PASSED] 0x56C0 (DG2)
[09:14:25] [PASSED] 0x56C2 (DG2)
[09:14:25] [PASSED] 0x56C1 (DG2)
[09:14:25] [PASSED] 0x7D51 (METEORLAKE)
[09:14:25] [PASSED] 0x7DD1 (METEORLAKE)
[09:14:25] [PASSED] 0x7D41 (METEORLAKE)
[09:14:25] [PASSED] 0x7D67 (METEORLAKE)
[09:14:25] [PASSED] 0xB640 (METEORLAKE)
[09:14:25] [PASSED] 0x56A0 (DG2)
[09:14:25] [PASSED] 0x56A1 (DG2)
[09:14:25] [PASSED] 0x56A2 (DG2)
[09:14:25] [PASSED] 0x56BE (DG2)
[09:14:25] [PASSED] 0x56BF (DG2)
[09:14:25] [PASSED] 0x5690 (DG2)
[09:14:25] [PASSED] 0x5691 (DG2)
[09:14:25] [PASSED] 0x5692 (DG2)
[09:14:25] [PASSED] 0x56A5 (DG2)
[09:14:25] [PASSED] 0x56A6 (DG2)
[09:14:25] [PASSED] 0x56B0 (DG2)
[09:14:25] [PASSED] 0x56B1 (DG2)
[09:14:25] [PASSED] 0x56BA (DG2)
[09:14:25] [PASSED] 0x56BB (DG2)
[09:14:25] [PASSED] 0x56BC (DG2)
[09:14:25] [PASSED] 0x56BD (DG2)
[09:14:25] [PASSED] 0x5693 (DG2)
[09:14:25] [PASSED] 0x5694 (DG2)
[09:14:25] [PASSED] 0x5695 (DG2)
[09:14:25] [PASSED] 0x56A3 (DG2)
[09:14:25] [PASSED] 0x56A4 (DG2)
[09:14:25] [PASSED] 0x56B2 (DG2)
[09:14:25] [PASSED] 0x56B3 (DG2)
[09:14:25] [PASSED] 0x5696 (DG2)
[09:14:25] [PASSED] 0x5697 (DG2)
[09:14:25] [PASSED] 0xB69 (PVC)
[09:14:25] [PASSED] 0xB6E (PVC)
[09:14:25] [PASSED] 0xBD4 (PVC)
[09:14:25] [PASSED] 0xBD5 (PVC)
[09:14:25] [PASSED] 0xBD6 (PVC)
[09:14:25] [PASSED] 0xBD7 (PVC)
[09:14:25] [PASSED] 0xBD8 (PVC)
[09:14:25] [PASSED] 0xBD9 (PVC)
[09:14:25] [PASSED] 0xBDA (PVC)
[09:14:25] [PASSED] 0xBDB (PVC)
[09:14:25] [PASSED] 0xBE0 (PVC)
[09:14:25] [PASSED] 0xBE1 (PVC)
[09:14:25] [PASSED] 0xBE5 (PVC)
[09:14:25] [PASSED] 0x7D40 (METEORLAKE)
[09:14:25] [PASSED] 0x7D45 (METEORLAKE)
[09:14:25] [PASSED] 0x7D55 (METEORLAKE)
[09:14:25] [PASSED] 0x7D60 (METEORLAKE)
[09:14:25] [PASSED] 0x7DD5 (METEORLAKE)
[09:14:25] [PASSED] 0x6420 (LUNARLAKE)
[09:14:25] [PASSED] 0x64A0 (LUNARLAKE)
[09:14:25] [PASSED] 0x64B0 (LUNARLAKE)
[09:14:25] [PASSED] 0xE202 (BATTLEMAGE)
[09:14:25] [PASSED] 0xE209 (BATTLEMAGE)
[09:14:25] [PASSED] 0xE20B (BATTLEMAGE)
[09:14:25] [PASSED] 0xE20C (BATTLEMAGE)
[09:14:25] [PASSED] 0xE20D (BATTLEMAGE)
[09:14:25] [PASSED] 0xE210 (BATTLEMAGE)
[09:14:25] [PASSED] 0xE211 (BATTLEMAGE)
[09:14:25] [PASSED] 0xE212 (BATTLEMAGE)
[09:14:25] [PASSED] 0xE216 (BATTLEMAGE)
[09:14:25] [PASSED] 0xE220 (BATTLEMAGE)
[09:14:25] [PASSED] 0xE221 (BATTLEMAGE)
[09:14:25] [PASSED] 0xE222 (BATTLEMAGE)
[09:14:25] [PASSED] 0xE223 (BATTLEMAGE)
[09:14:25] [PASSED] 0xB080 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB081 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB082 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB083 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB084 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB085 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB086 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB087 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB08F (PANTHERLAKE)
[09:14:25] [PASSED] 0xB090 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB0A0 (PANTHERLAKE)
[09:14:25] [PASSED] 0xB0B0 (PANTHERLAKE)
[09:14:25] [PASSED] 0xD740 (NOVALAKE_S)
[09:14:25] [PASSED] 0xD741 (NOVALAKE_S)
[09:14:25] [PASSED] 0xD742 (NOVALAKE_S)
[09:14:25] [PASSED] 0xD743 (NOVALAKE_S)
[09:14:25] [PASSED] 0xD744 (NOVALAKE_S)
[09:14:25] [PASSED] 0xD745 (NOVALAKE_S)
[09:14:25] [PASSED] 0x674C (CRESCENTISLAND)
[09:14:25] [PASSED] 0xFD80 (PANTHERLAKE)
[09:14:25] [PASSED] 0xFD81 (PANTHERLAKE)
[09:14:25] =============== [PASSED] check_platform_desc ===============
[09:14:25] ===================== [PASSED] xe_pci ======================
[09:14:25] =================== xe_rtp (2 subtests) ====================
[09:14:25] =============== xe_rtp_process_to_sr_tests ================
[09:14:25] [PASSED] coalesce-same-reg
[09:14:25] [PASSED] no-match-no-add
[09:14:25] [PASSED] match-or
[09:14:25] [PASSED] match-or-xfail
[09:14:25] [PASSED] no-match-no-add-multiple-rules
[09:14:25] [PASSED] two-regs-two-entries
[09:14:25] [PASSED] clr-one-set-other
[09:14:25] [PASSED] set-field
[09:14:25] [PASSED] conflict-duplicate
[09:14:25] [PASSED] conflict-not-disjoint
[09:14:25] [PASSED] conflict-reg-type
[09:14:25] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[09:14:25] ================== xe_rtp_process_tests ===================
[09:14:25] [PASSED] active1
[09:14:25] [PASSED] active2
[09:14:25] [PASSED] active-inactive
[09:14:25] [PASSED] inactive-active
[09:14:25] [PASSED] inactive-1st_or_active-inactive
[09:14:25] [PASSED] inactive-2nd_or_active-inactive
[09:14:25] [PASSED] inactive-last_or_active-inactive
[09:14:25] [PASSED] inactive-no_or_active-inactive
[09:14:25] ============== [PASSED] xe_rtp_process_tests ===============
[09:14:25] ===================== [PASSED] xe_rtp ======================
[09:14:25] ==================== xe_wa (1 subtest) =====================
[09:14:25] ======================== xe_wa_gt =========================
[09:14:25] [PASSED] TIGERLAKE B0
[09:14:25] [PASSED] DG1 A0
[09:14:25] [PASSED] DG1 B0
[09:14:25] [PASSED] ALDERLAKE_S A0
[09:14:25] [PASSED] ALDERLAKE_S B0
[09:14:25] [PASSED] ALDERLAKE_S C0
[09:14:25] [PASSED] ALDERLAKE_S D0
[09:14:25] [PASSED] ALDERLAKE_P A0
[09:14:25] [PASSED] ALDERLAKE_P B0
[09:14:25] [PASSED] ALDERLAKE_P C0
[09:14:25] [PASSED] ALDERLAKE_S RPLS D0
[09:14:25] [PASSED] ALDERLAKE_P RPLU E0
[09:14:25] [PASSED] DG2 G10 C0
[09:14:25] [PASSED] DG2 G11 B1
[09:14:25] [PASSED] DG2 G12 A1
[09:14:25] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[09:14:25] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[09:14:25] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[09:14:25] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[09:14:25] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[09:14:25] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[09:14:25] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[09:14:25] ==================== [PASSED] xe_wa_gt =====================
[09:14:25] ====================== [PASSED] xe_wa ======================
[09:14:25] ============================================================
[09:14:25] Testing complete. Ran 511 tests: passed: 492, skipped: 19
[09:14:25] Elapsed time: 35.476s total, 4.212s configuring, 30.747s building, 0.474s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[09:14:25] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[09:14:27] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[09:14:51] Starting KUnit Kernel (1/1)...
[09:14:51] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[09:14:52] ============ drm_test_pick_cmdline (2 subtests) ============
[09:14:52] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[09:14:52] =============== drm_test_pick_cmdline_named ===============
[09:14:52] [PASSED] NTSC
[09:14:52] [PASSED] NTSC-J
[09:14:52] [PASSED] PAL
[09:14:52] [PASSED] PAL-M
[09:14:52] =========== [PASSED] drm_test_pick_cmdline_named ===========
[09:14:52] ============== [PASSED] drm_test_pick_cmdline ==============
[09:14:52] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[09:14:52] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[09:14:52] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[09:14:52] =========== drm_validate_clone_mode (2 subtests) ===========
[09:14:52] ============== drm_test_check_in_clone_mode ===============
[09:14:52] [PASSED] in_clone_mode
[09:14:52] [PASSED] not_in_clone_mode
[09:14:52] ========== [PASSED] drm_test_check_in_clone_mode ===========
[09:14:52] =============== drm_test_check_valid_clones ===============
[09:14:52] [PASSED] not_in_clone_mode
[09:14:52] [PASSED] valid_clone
[09:14:52] [PASSED] invalid_clone
[09:14:52] =========== [PASSED] drm_test_check_valid_clones ===========
[09:14:52] ============= [PASSED] drm_validate_clone_mode =============
[09:14:52] ============= drm_validate_modeset (1 subtest) =============
[09:14:52] [PASSED] drm_test_check_connector_changed_modeset
[09:14:52] ============== [PASSED] drm_validate_modeset ===============
[09:14:52] ====== drm_test_bridge_get_current_state (2 subtests) ======
[09:14:52] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[09:14:52] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[09:14:52] ======== [PASSED] drm_test_bridge_get_current_state ========
[09:14:52] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[09:14:52] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[09:14:52] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[09:14:52] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[09:14:52] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[09:14:52] ============== drm_bridge_alloc (2 subtests) ===============
[09:14:52] [PASSED] drm_test_drm_bridge_alloc_basic
[09:14:52] [PASSED] drm_test_drm_bridge_alloc_get_put
[09:14:52] ================ [PASSED] drm_bridge_alloc =================
[09:14:52] ================== drm_buddy (8 subtests) ==================
[09:14:52] [PASSED] drm_test_buddy_alloc_limit
[09:14:52] [PASSED] drm_test_buddy_alloc_optimistic
[09:14:52] [PASSED] drm_test_buddy_alloc_pessimistic
[09:14:52] [PASSED] drm_test_buddy_alloc_pathological
[09:14:52] [PASSED] drm_test_buddy_alloc_contiguous
[09:14:52] [PASSED] drm_test_buddy_alloc_clear
[09:14:52] [PASSED] drm_test_buddy_alloc_range_bias
[09:14:52] [PASSED] drm_test_buddy_fragmentation_performance
[09:14:52] ==================== [PASSED] drm_buddy ====================
[09:14:52] ============= drm_cmdline_parser (40 subtests) =============
[09:14:52] [PASSED] drm_test_cmdline_force_d_only
[09:14:52] [PASSED] drm_test_cmdline_force_D_only_dvi
[09:14:52] [PASSED] drm_test_cmdline_force_D_only_hdmi
[09:14:52] [PASSED] drm_test_cmdline_force_D_only_not_digital
[09:14:52] [PASSED] drm_test_cmdline_force_e_only
[09:14:52] [PASSED] drm_test_cmdline_res
[09:14:52] [PASSED] drm_test_cmdline_res_vesa
[09:14:52] [PASSED] drm_test_cmdline_res_vesa_rblank
[09:14:52] [PASSED] drm_test_cmdline_res_rblank
[09:14:52] [PASSED] drm_test_cmdline_res_bpp
[09:14:52] [PASSED] drm_test_cmdline_res_refresh
[09:14:52] [PASSED] drm_test_cmdline_res_bpp_refresh
[09:14:52] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[09:14:52] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[09:14:52] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[09:14:52] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[09:14:52] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[09:14:52] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[09:14:52] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[09:14:52] [PASSED] drm_test_cmdline_res_margins_force_on
[09:14:52] [PASSED] drm_test_cmdline_res_vesa_margins
[09:14:52] [PASSED] drm_test_cmdline_name
[09:14:52] [PASSED] drm_test_cmdline_name_bpp
[09:14:52] [PASSED] drm_test_cmdline_name_option
[09:14:52] [PASSED] drm_test_cmdline_name_bpp_option
[09:14:52] [PASSED] drm_test_cmdline_rotate_0
[09:14:52] [PASSED] drm_test_cmdline_rotate_90
[09:14:52] [PASSED] drm_test_cmdline_rotate_180
[09:14:52] [PASSED] drm_test_cmdline_rotate_270
[09:14:52] [PASSED] drm_test_cmdline_hmirror
[09:14:52] [PASSED] drm_test_cmdline_vmirror
[09:14:52] [PASSED] drm_test_cmdline_margin_options
[09:14:52] [PASSED] drm_test_cmdline_multiple_options
[09:14:52] [PASSED] drm_test_cmdline_bpp_extra_and_option
[09:14:52] [PASSED] drm_test_cmdline_extra_and_option
[09:14:52] [PASSED] drm_test_cmdline_freestanding_options
[09:14:52] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[09:14:52] [PASSED] drm_test_cmdline_panel_orientation
[09:14:52] ================ drm_test_cmdline_invalid =================
[09:14:52] [PASSED] margin_only
[09:14:52] [PASSED] interlace_only
[09:14:52] [PASSED] res_missing_x
[09:14:52] [PASSED] res_missing_y
[09:14:52] [PASSED] res_bad_y
[09:14:52] [PASSED] res_missing_y_bpp
[09:14:52] [PASSED] res_bad_bpp
[09:14:52] [PASSED] res_bad_refresh
[09:14:52] [PASSED] res_bpp_refresh_force_on_off
[09:14:52] [PASSED] res_invalid_mode
[09:14:52] [PASSED] res_bpp_wrong_place_mode
[09:14:52] [PASSED] name_bpp_refresh
[09:14:52] [PASSED] name_refresh
[09:14:52] [PASSED] name_refresh_wrong_mode
[09:14:52] [PASSED] name_refresh_invalid_mode
[09:14:52] [PASSED] rotate_multiple
[09:14:52] [PASSED] rotate_invalid_val
[09:14:52] [PASSED] rotate_truncated
[09:14:52] [PASSED] invalid_option
[09:14:52] [PASSED] invalid_tv_option
[09:14:52] [PASSED] truncated_tv_option
[09:14:52] ============ [PASSED] drm_test_cmdline_invalid =============
[09:14:52] =============== drm_test_cmdline_tv_options ===============
[09:14:52] [PASSED] NTSC
[09:14:52] [PASSED] NTSC_443
[09:14:52] [PASSED] NTSC_J
[09:14:52] [PASSED] PAL
[09:14:52] [PASSED] PAL_M
[09:14:52] [PASSED] PAL_N
[09:14:52] [PASSED] SECAM
[09:14:52] [PASSED] MONO_525
[09:14:52] [PASSED] MONO_625
[09:14:52] =========== [PASSED] drm_test_cmdline_tv_options ===========
[09:14:52] =============== [PASSED] drm_cmdline_parser ================
[09:14:52] ========== drmm_connector_hdmi_init (20 subtests) ==========
[09:14:52] [PASSED] drm_test_connector_hdmi_init_valid
[09:14:52] [PASSED] drm_test_connector_hdmi_init_bpc_8
[09:14:52] [PASSED] drm_test_connector_hdmi_init_bpc_10
[09:14:52] [PASSED] drm_test_connector_hdmi_init_bpc_12
[09:14:52] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[09:14:52] [PASSED] drm_test_connector_hdmi_init_bpc_null
[09:14:52] [PASSED] drm_test_connector_hdmi_init_formats_empty
[09:14:52] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[09:14:52] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[09:14:52] [PASSED] supported_formats=0x9 yuv420_allowed=1
[09:14:52] [PASSED] supported_formats=0x9 yuv420_allowed=0
[09:14:52] [PASSED] supported_formats=0x3 yuv420_allowed=1
[09:14:52] [PASSED] supported_formats=0x3 yuv420_allowed=0
[09:14:52] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[09:14:52] [PASSED] drm_test_connector_hdmi_init_null_ddc
[09:14:52] [PASSED] drm_test_connector_hdmi_init_null_product
[09:14:52] [PASSED] drm_test_connector_hdmi_init_null_vendor
[09:14:52] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[09:14:52] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[09:14:52] [PASSED] drm_test_connector_hdmi_init_product_valid
[09:14:52] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[09:14:52] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[09:14:52] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[09:14:52] ========= drm_test_connector_hdmi_init_type_valid =========
[09:14:52] [PASSED] HDMI-A
[09:14:52] [PASSED] HDMI-B
[09:14:52] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[09:14:52] ======== drm_test_connector_hdmi_init_type_invalid ========
[09:14:52] [PASSED] Unknown
[09:14:52] [PASSED] VGA
[09:14:52] [PASSED] DVI-I
[09:14:52] [PASSED] DVI-D
[09:14:52] [PASSED] DVI-A
[09:14:52] [PASSED] Composite
[09:14:52] [PASSED] SVIDEO
[09:14:52] [PASSED] LVDS
[09:14:52] [PASSED] Component
[09:14:52] [PASSED] DIN
[09:14:52] [PASSED] DP
[09:14:52] [PASSED] TV
[09:14:52] [PASSED] eDP
[09:14:52] [PASSED] Virtual
[09:14:52] [PASSED] DSI
[09:14:52] [PASSED] DPI
[09:14:52] [PASSED] Writeback
[09:14:52] [PASSED] SPI
[09:14:52] [PASSED] USB
[09:14:52] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[09:14:52] ============ [PASSED] drmm_connector_hdmi_init =============
[09:14:52] ============= drmm_connector_init (3 subtests) =============
[09:14:52] [PASSED] drm_test_drmm_connector_init
[09:14:52] [PASSED] drm_test_drmm_connector_init_null_ddc
[09:14:52] ========= drm_test_drmm_connector_init_type_valid =========
[09:14:52] [PASSED] Unknown
[09:14:52] [PASSED] VGA
[09:14:52] [PASSED] DVI-I
[09:14:52] [PASSED] DVI-D
[09:14:52] [PASSED] DVI-A
[09:14:52] [PASSED] Composite
[09:14:52] [PASSED] SVIDEO
[09:14:52] [PASSED] LVDS
[09:14:52] [PASSED] Component
[09:14:52] [PASSED] DIN
[09:14:52] [PASSED] DP
[09:14:52] [PASSED] HDMI-A
[09:14:52] [PASSED] HDMI-B
[09:14:52] [PASSED] TV
[09:14:52] [PASSED] eDP
[09:14:52] [PASSED] Virtual
[09:14:52] [PASSED] DSI
[09:14:52] [PASSED] DPI
[09:14:52] [PASSED] Writeback
[09:14:52] [PASSED] SPI
[09:14:52] [PASSED] USB
[09:14:52] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[09:14:52] =============== [PASSED] drmm_connector_init ===============
[09:14:52] ========= drm_connector_dynamic_init (6 subtests) ==========
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_init
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_init_properties
[09:14:52] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[09:14:52] [PASSED] Unknown
[09:14:52] [PASSED] VGA
[09:14:52] [PASSED] DVI-I
[09:14:52] [PASSED] DVI-D
[09:14:52] [PASSED] DVI-A
[09:14:52] [PASSED] Composite
[09:14:52] [PASSED] SVIDEO
[09:14:52] [PASSED] LVDS
[09:14:52] [PASSED] Component
[09:14:52] [PASSED] DIN
[09:14:52] [PASSED] DP
[09:14:52] [PASSED] HDMI-A
[09:14:52] [PASSED] HDMI-B
[09:14:52] [PASSED] TV
[09:14:52] [PASSED] eDP
[09:14:52] [PASSED] Virtual
[09:14:52] [PASSED] DSI
[09:14:52] [PASSED] DPI
[09:14:52] [PASSED] Writeback
[09:14:52] [PASSED] SPI
[09:14:52] [PASSED] USB
[09:14:52] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[09:14:52] ======== drm_test_drm_connector_dynamic_init_name =========
[09:14:52] [PASSED] Unknown
[09:14:52] [PASSED] VGA
[09:14:52] [PASSED] DVI-I
[09:14:52] [PASSED] DVI-D
[09:14:52] [PASSED] DVI-A
[09:14:52] [PASSED] Composite
[09:14:52] [PASSED] SVIDEO
[09:14:52] [PASSED] LVDS
[09:14:52] [PASSED] Component
[09:14:52] [PASSED] DIN
[09:14:52] [PASSED] DP
[09:14:52] [PASSED] HDMI-A
[09:14:52] [PASSED] HDMI-B
[09:14:52] [PASSED] TV
[09:14:52] [PASSED] eDP
[09:14:52] [PASSED] Virtual
[09:14:52] [PASSED] DSI
[09:14:52] [PASSED] DPI
[09:14:52] [PASSED] Writeback
[09:14:52] [PASSED] SPI
[09:14:52] [PASSED] USB
[09:14:52] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[09:14:52] =========== [PASSED] drm_connector_dynamic_init ============
[09:14:52] ==== drm_connector_dynamic_register_early (4 subtests) =====
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[09:14:52] ====== [PASSED] drm_connector_dynamic_register_early =======
[09:14:52] ======= drm_connector_dynamic_register (7 subtests) ========
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[09:14:52] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[09:14:52] ========= [PASSED] drm_connector_dynamic_register ==========
[09:14:52] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[09:14:52] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[09:14:52] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[09:14:52] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[09:14:52] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[09:14:52] ========== drm_test_get_tv_mode_from_name_valid ===========
[09:14:52] [PASSED] NTSC
[09:14:52] [PASSED] NTSC-443
[09:14:52] [PASSED] NTSC-J
[09:14:52] [PASSED] PAL
[09:14:52] [PASSED] PAL-M
[09:14:52] [PASSED] PAL-N
[09:14:52] [PASSED] SECAM
[09:14:52] [PASSED] Mono
[09:14:52] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[09:14:52] [PASSED] drm_test_get_tv_mode_from_name_truncated
[09:14:52] ============ [PASSED] drm_get_tv_mode_from_name ============
[09:14:52] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[09:14:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[09:14:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[09:14:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[09:14:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[09:14:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[09:14:52] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[09:14:52] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[09:14:52] [PASSED] VIC 96
[09:14:52] [PASSED] VIC 97
[09:14:52] [PASSED] VIC 101
[09:14:52] [PASSED] VIC 102
[09:14:52] [PASSED] VIC 106
[09:14:52] [PASSED] VIC 107
[09:14:52] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[09:14:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[09:14:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[09:14:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[09:14:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[09:14:52] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[09:14:52] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[09:14:52] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[09:14:52] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[09:14:52] [PASSED] Automatic
[09:14:52] [PASSED] Full
[09:14:52] [PASSED] Limited 16:235
[09:14:52] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[09:14:52] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[09:14:52] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[09:14:52] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[09:14:52] === drm_test_drm_hdmi_connector_get_output_format_name ====
[09:14:52] [PASSED] RGB
[09:14:52] [PASSED] YUV 4:2:0
[09:14:52] [PASSED] YUV 4:2:2
[09:14:52] [PASSED] YUV 4:4:4
[09:14:52] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[09:14:52] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[09:14:52] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[09:14:52] ============= drm_damage_helper (21 subtests) ==============
[09:14:52] [PASSED] drm_test_damage_iter_no_damage
[09:14:52] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[09:14:52] [PASSED] drm_test_damage_iter_no_damage_src_moved
[09:14:52] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[09:14:52] [PASSED] drm_test_damage_iter_no_damage_not_visible
[09:14:52] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[09:14:52] [PASSED] drm_test_damage_iter_no_damage_no_fb
[09:14:52] [PASSED] drm_test_damage_iter_simple_damage
[09:14:52] [PASSED] drm_test_damage_iter_single_damage
[09:14:52] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[09:14:52] [PASSED] drm_test_damage_iter_single_damage_outside_src
[09:14:52] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[09:14:52] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[09:14:52] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[09:14:52] [PASSED] drm_test_damage_iter_single_damage_src_moved
[09:14:52] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[09:14:52] [PASSED] drm_test_damage_iter_damage
[09:14:52] [PASSED] drm_test_damage_iter_damage_one_intersect
[09:14:52] [PASSED] drm_test_damage_iter_damage_one_outside
[09:14:52] [PASSED] drm_test_damage_iter_damage_src_moved
[09:14:52] [PASSED] drm_test_damage_iter_damage_not_visible
[09:14:52] ================ [PASSED] drm_damage_helper ================
[09:14:52] ============== drm_dp_mst_helper (3 subtests) ==============
[09:14:52] ============== drm_test_dp_mst_calc_pbn_mode ==============
[09:14:52] [PASSED] Clock 154000 BPP 30 DSC disabled
[09:14:52] [PASSED] Clock 234000 BPP 30 DSC disabled
[09:14:52] [PASSED] Clock 297000 BPP 24 DSC disabled
[09:14:52] [PASSED] Clock 332880 BPP 24 DSC enabled
[09:14:52] [PASSED] Clock 324540 BPP 24 DSC enabled
[09:14:52] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[09:14:52] ============== drm_test_dp_mst_calc_pbn_div ===============
[09:14:52] [PASSED] Link rate 2000000 lane count 4
[09:14:52] [PASSED] Link rate 2000000 lane count 2
[09:14:52] [PASSED] Link rate 2000000 lane count 1
[09:14:52] [PASSED] Link rate 1350000 lane count 4
[09:14:52] [PASSED] Link rate 1350000 lane count 2
[09:14:52] [PASSED] Link rate 1350000 lane count 1
[09:14:52] [PASSED] Link rate 1000000 lane count 4
[09:14:52] [PASSED] Link rate 1000000 lane count 2
[09:14:52] [PASSED] Link rate 1000000 lane count 1
[09:14:52] [PASSED] Link rate 810000 lane count 4
[09:14:52] [PASSED] Link rate 810000 lane count 2
[09:14:52] [PASSED] Link rate 810000 lane count 1
[09:14:52] [PASSED] Link rate 540000 lane count 4
[09:14:52] [PASSED] Link rate 540000 lane count 2
[09:14:52] [PASSED] Link rate 540000 lane count 1
[09:14:52] [PASSED] Link rate 270000 lane count 4
[09:14:52] [PASSED] Link rate 270000 lane count 2
[09:14:52] [PASSED] Link rate 270000 lane count 1
[09:14:52] [PASSED] Link rate 162000 lane count 4
[09:14:52] [PASSED] Link rate 162000 lane count 2
[09:14:52] [PASSED] Link rate 162000 lane count 1
[09:14:52] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[09:14:52] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[09:14:52] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[09:14:52] [PASSED] DP_POWER_UP_PHY with port number
[09:14:52] [PASSED] DP_POWER_DOWN_PHY with port number
[09:14:52] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[09:14:52] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[09:14:52] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[09:14:52] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[09:14:52] [PASSED] DP_QUERY_PAYLOAD with port number
[09:14:52] [PASSED] DP_QUERY_PAYLOAD with VCPI
[09:14:52] [PASSED] DP_REMOTE_DPCD_READ with port number
[09:14:52] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[09:14:52] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[09:14:52] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[09:14:52] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[09:14:52] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[09:14:52] [PASSED] DP_REMOTE_I2C_READ with port number
[09:14:52] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[09:14:52] [PASSED] DP_REMOTE_I2C_READ with transactions array
[09:14:52] [PASSED] DP_REMOTE_I2C_WRITE with port number
[09:14:52] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[09:14:52] [PASSED] DP_REMOTE_I2C_WRITE with data array
[09:14:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[09:14:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[09:14:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[09:14:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[09:14:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[09:14:52] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[09:14:52] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[09:14:52] ================ [PASSED] drm_dp_mst_helper ================
[09:14:52] ================== drm_exec (7 subtests) ===================
[09:14:52] [PASSED] sanitycheck
[09:14:52] [PASSED] test_lock
[09:14:52] [PASSED] test_lock_unlock
[09:14:52] [PASSED] test_duplicates
[09:14:52] [PASSED] test_prepare
[09:14:52] [PASSED] test_prepare_array
[09:14:52] [PASSED] test_multiple_loops
[09:14:52] ==================== [PASSED] drm_exec =====================
[09:14:52] =========== drm_format_helper_test (17 subtests) ===========
[09:14:52] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[09:14:52] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[09:14:52] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[09:14:52] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[09:14:52] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[09:14:52] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[09:14:52] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[09:14:52] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[09:14:52] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[09:14:52] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[09:14:52] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[09:14:52] ============== drm_test_fb_xrgb8888_to_mono ===============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[09:14:52] ==================== drm_test_fb_swab =====================
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ================ [PASSED] drm_test_fb_swab =================
[09:14:52] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[09:14:52] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[09:14:52] [PASSED] single_pixel_source_buffer
[09:14:52] [PASSED] single_pixel_clip_rectangle
[09:14:52] [PASSED] well_known_colors
[09:14:52] [PASSED] destination_pitch
[09:14:52] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[09:14:52] ================= drm_test_fb_clip_offset =================
[09:14:52] [PASSED] pass through
[09:14:52] [PASSED] horizontal offset
[09:14:52] [PASSED] vertical offset
[09:14:52] [PASSED] horizontal and vertical offset
[09:14:52] [PASSED] horizontal offset (custom pitch)
[09:14:52] [PASSED] vertical offset (custom pitch)
[09:14:52] [PASSED] horizontal and vertical offset (custom pitch)
[09:14:52] ============= [PASSED] drm_test_fb_clip_offset =============
[09:14:52] =================== drm_test_fb_memcpy ====================
[09:14:52] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[09:14:52] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[09:14:52] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[09:14:52] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[09:14:52] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[09:14:52] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[09:14:52] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[09:14:52] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[09:14:52] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[09:14:52] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[09:14:52] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[09:14:52] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[09:14:52] =============== [PASSED] drm_test_fb_memcpy ================
[09:14:52] ============= [PASSED] drm_format_helper_test ==============
[09:14:52] ================= drm_format (18 subtests) =================
[09:14:52] [PASSED] drm_test_format_block_width_invalid
[09:14:52] [PASSED] drm_test_format_block_width_one_plane
[09:14:52] [PASSED] drm_test_format_block_width_two_plane
[09:14:52] [PASSED] drm_test_format_block_width_three_plane
[09:14:52] [PASSED] drm_test_format_block_width_tiled
[09:14:52] [PASSED] drm_test_format_block_height_invalid
[09:14:52] [PASSED] drm_test_format_block_height_one_plane
[09:14:52] [PASSED] drm_test_format_block_height_two_plane
[09:14:52] [PASSED] drm_test_format_block_height_three_plane
[09:14:52] [PASSED] drm_test_format_block_height_tiled
[09:14:52] [PASSED] drm_test_format_min_pitch_invalid
[09:14:52] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[09:14:52] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[09:14:52] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[09:14:52] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[09:14:52] [PASSED] drm_test_format_min_pitch_two_plane
[09:14:52] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[09:14:52] [PASSED] drm_test_format_min_pitch_tiled
[09:14:52] =================== [PASSED] drm_format ====================
[09:14:52] ============== drm_framebuffer (10 subtests) ===============
[09:14:52] ========== drm_test_framebuffer_check_src_coords ==========
[09:14:52] [PASSED] Success: source fits into fb
[09:14:52] [PASSED] Fail: overflowing fb with x-axis coordinate
[09:14:52] [PASSED] Fail: overflowing fb with y-axis coordinate
[09:14:52] [PASSED] Fail: overflowing fb with source width
[09:14:52] [PASSED] Fail: overflowing fb with source height
[09:14:52] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[09:14:52] [PASSED] drm_test_framebuffer_cleanup
[09:14:52] =============== drm_test_framebuffer_create ===============
[09:14:52] [PASSED] ABGR8888 normal sizes
[09:14:52] [PASSED] ABGR8888 max sizes
[09:14:52] [PASSED] ABGR8888 pitch greater than min required
[09:14:52] [PASSED] ABGR8888 pitch less than min required
[09:14:52] [PASSED] ABGR8888 Invalid width
[09:14:52] [PASSED] ABGR8888 Invalid buffer handle
[09:14:52] [PASSED] No pixel format
[09:14:52] [PASSED] ABGR8888 Width 0
[09:14:52] [PASSED] ABGR8888 Height 0
[09:14:52] [PASSED] ABGR8888 Out of bound height * pitch combination
[09:14:52] [PASSED] ABGR8888 Large buffer offset
[09:14:52] [PASSED] ABGR8888 Buffer offset for inexistent plane
[09:14:52] [PASSED] ABGR8888 Invalid flag
[09:14:52] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[09:14:52] [PASSED] ABGR8888 Valid buffer modifier
[09:14:52] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[09:14:52] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[09:14:52] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[09:14:52] [PASSED] NV12 Normal sizes
[09:14:52] [PASSED] NV12 Max sizes
[09:14:52] [PASSED] NV12 Invalid pitch
[09:14:52] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[09:14:52] [PASSED] NV12 different modifier per-plane
[09:14:52] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[09:14:52] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[09:14:52] [PASSED] NV12 Modifier for inexistent plane
[09:14:52] [PASSED] NV12 Handle for inexistent plane
[09:14:52] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[09:14:52] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[09:14:52] [PASSED] YVU420 Normal sizes
[09:14:52] [PASSED] YVU420 Max sizes
[09:14:52] [PASSED] YVU420 Invalid pitch
[09:14:52] [PASSED] YVU420 Different pitches
[09:14:52] [PASSED] YVU420 Different buffer offsets/pitches
[09:14:52] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[09:14:52] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[09:14:52] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[09:14:52] [PASSED] YVU420 Valid modifier
[09:14:52] [PASSED] YVU420 Different modifiers per plane
[09:14:52] [PASSED] YVU420 Modifier for inexistent plane
[09:14:52] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[09:14:52] [PASSED] X0L2 Normal sizes
[09:14:52] [PASSED] X0L2 Max sizes
[09:14:52] [PASSED] X0L2 Invalid pitch
[09:14:52] [PASSED] X0L2 Pitch greater than minimum required
[09:14:52] [PASSED] X0L2 Handle for inexistent plane
[09:14:52] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[09:14:52] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[09:14:52] [PASSED] X0L2 Valid modifier
[09:14:52] [PASSED] X0L2 Modifier for inexistent plane
[09:14:52] =========== [PASSED] drm_test_framebuffer_create ===========
[09:14:52] [PASSED] drm_test_framebuffer_free
[09:14:52] [PASSED] drm_test_framebuffer_init
[09:14:52] [PASSED] drm_test_framebuffer_init_bad_format
[09:14:52] [PASSED] drm_test_framebuffer_init_dev_mismatch
[09:14:52] [PASSED] drm_test_framebuffer_lookup
[09:14:52] [PASSED] drm_test_framebuffer_lookup_inexistent
[09:14:52] [PASSED] drm_test_framebuffer_modifiers_not_supported
[09:14:52] ================= [PASSED] drm_framebuffer =================
[09:14:52] ================ drm_gem_shmem (8 subtests) ================
[09:14:52] [PASSED] drm_gem_shmem_test_obj_create
[09:14:52] [PASSED] drm_gem_shmem_test_obj_create_private
[09:14:52] [PASSED] drm_gem_shmem_test_pin_pages
[09:14:52] [PASSED] drm_gem_shmem_test_vmap
[09:14:52] [PASSED] drm_gem_shmem_test_get_pages_sgt
[09:14:52] [PASSED] drm_gem_shmem_test_get_sg_table
[09:14:52] [PASSED] drm_gem_shmem_test_madvise
[09:14:52] [PASSED] drm_gem_shmem_test_purge
[09:14:52] ================== [PASSED] drm_gem_shmem ==================
[09:14:52] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[09:14:52] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[09:14:52] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[09:14:52] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[09:14:52] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[09:14:52] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[09:14:52] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[09:14:52] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[09:14:52] [PASSED] Automatic
[09:14:52] [PASSED] Full
[09:14:52] [PASSED] Limited 16:235
[09:14:52] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[09:14:52] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[09:14:52] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[09:14:52] [PASSED] drm_test_check_disable_connector
[09:14:52] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[09:14:52] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[09:14:52] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[09:14:52] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[09:14:52] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[09:14:52] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[09:14:52] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[09:14:52] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[09:14:52] [PASSED] drm_test_check_output_bpc_dvi
[09:14:52] [PASSED] drm_test_check_output_bpc_format_vic_1
[09:14:52] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[09:14:52] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[09:14:52] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[09:14:52] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[09:14:52] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[09:14:52] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[09:14:52] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[09:14:52] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[09:14:52] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[09:14:52] [PASSED] drm_test_check_broadcast_rgb_value
[09:14:52] [PASSED] drm_test_check_bpc_8_value
[09:14:52] [PASSED] drm_test_check_bpc_10_value
[09:14:52] [PASSED] drm_test_check_bpc_12_value
[09:14:52] [PASSED] drm_test_check_format_value
[09:14:52] [PASSED] drm_test_check_tmds_char_value
[09:14:52] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[09:14:52] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[09:14:52] [PASSED] drm_test_check_mode_valid
[09:14:52] [PASSED] drm_test_check_mode_valid_reject
[09:14:52] [PASSED] drm_test_check_mode_valid_reject_rate
[09:14:52] [PASSED] drm_test_check_mode_valid_reject_max_clock
[09:14:52] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[09:14:52] ================= drm_managed (2 subtests) =================
[09:14:52] [PASSED] drm_test_managed_release_action
[09:14:52] [PASSED] drm_test_managed_run_action
[09:14:52] =================== [PASSED] drm_managed ===================
[09:14:52] =================== drm_mm (6 subtests) ====================
[09:14:52] [PASSED] drm_test_mm_init
[09:14:52] [PASSED] drm_test_mm_debug
[09:14:52] [PASSED] drm_test_mm_align32
[09:14:52] [PASSED] drm_test_mm_align64
[09:14:52] [PASSED] drm_test_mm_lowest
[09:14:52] [PASSED] drm_test_mm_highest
[09:14:52] ===================== [PASSED] drm_mm ======================
[09:14:52] ============= drm_modes_analog_tv (5 subtests) =============
[09:14:52] [PASSED] drm_test_modes_analog_tv_mono_576i
[09:14:52] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[09:14:52] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[09:14:52] [PASSED] drm_test_modes_analog_tv_pal_576i
[09:14:52] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[09:14:52] =============== [PASSED] drm_modes_analog_tv ===============
[09:14:52] ============== drm_plane_helper (2 subtests) ===============
[09:14:52] =============== drm_test_check_plane_state ================
[09:14:52] [PASSED] clipping_simple
[09:14:52] [PASSED] clipping_rotate_reflect
[09:14:52] [PASSED] positioning_simple
[09:14:52] [PASSED] upscaling
[09:14:52] [PASSED] downscaling
[09:14:52] [PASSED] rounding1
[09:14:52] [PASSED] rounding2
[09:14:52] [PASSED] rounding3
[09:14:52] [PASSED] rounding4
[09:14:52] =========== [PASSED] drm_test_check_plane_state ============
[09:14:52] =========== drm_test_check_invalid_plane_state ============
[09:14:52] [PASSED] positioning_invalid
[09:14:52] [PASSED] upscaling_invalid
[09:14:52] [PASSED] downscaling_invalid
[09:14:52] ======= [PASSED] drm_test_check_invalid_plane_state ========
[09:14:52] ================ [PASSED] drm_plane_helper =================
[09:14:52] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[09:14:52] ====== drm_test_connector_helper_tv_get_modes_check =======
[09:14:52] [PASSED] None
[09:14:52] [PASSED] PAL
[09:14:52] [PASSED] NTSC
[09:14:52] [PASSED] Both, NTSC Default
[09:14:52] [PASSED] Both, PAL Default
[09:14:52] [PASSED] Both, NTSC Default, with PAL on command-line
[09:14:52] [PASSED] Both, PAL Default, with NTSC on command-line
[09:14:52] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[09:14:52] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[09:14:52] ================== drm_rect (9 subtests) ===================
[09:14:52] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[09:14:52] [PASSED] drm_test_rect_clip_scaled_not_clipped
[09:14:52] [PASSED] drm_test_rect_clip_scaled_clipped
[09:14:52] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[09:14:52] ================= drm_test_rect_intersect =================
[09:14:52] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[09:14:52] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[09:14:52] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[09:14:52] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[09:14:52] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[09:14:52] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[09:14:52] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[09:14:52] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[09:14:52] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[09:14:52] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[09:14:52] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[09:14:52] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[09:14:52] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[09:14:52] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[09:14:52] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[09:14:52] ============= [PASSED] drm_test_rect_intersect =============
[09:14:52] ================ drm_test_rect_calc_hscale ================
[09:14:52] [PASSED] normal use
[09:14:52] [PASSED] out of max range
[09:14:52] [PASSED] out of min range
[09:14:52] [PASSED] zero dst
[09:14:52] [PASSED] negative src
[09:14:52] [PASSED] negative dst
[09:14:52] ============ [PASSED] drm_test_rect_calc_hscale ============
[09:14:52] ================ drm_test_rect_calc_vscale ================
[09:14:52] [PASSED] normal use
[09:14:52] [PASSED] out of max range
[09:14:52] [PASSED] out of min range
[09:14:52] [PASSED] zero dst
[09:14:52] [PASSED] negative src
[09:14:52] [PASSED] negative dst
[09:14:52] ============ [PASSED] drm_test_rect_calc_vscale ============
[09:14:52] ================== drm_test_rect_rotate ===================
[09:14:52] [PASSED] reflect-x
[09:14:52] [PASSED] reflect-y
[09:14:52] [PASSED] rotate-0
[09:14:52] [PASSED] rotate-90
[09:14:52] [PASSED] rotate-180
[09:14:52] [PASSED] rotate-270
[09:14:52] ============== [PASSED] drm_test_rect_rotate ===============
[09:14:52] ================ drm_test_rect_rotate_inv =================
[09:14:52] [PASSED] reflect-x
[09:14:52] [PASSED] reflect-y
[09:14:52] [PASSED] rotate-0
[09:14:52] [PASSED] rotate-90
[09:14:52] [PASSED] rotate-180
[09:14:52] [PASSED] rotate-270
[09:14:52] ============ [PASSED] drm_test_rect_rotate_inv =============
[09:14:52] ==================== [PASSED] drm_rect =====================
[09:14:52] ============ drm_sysfb_modeset_test (1 subtest) ============
[09:14:52] ============ drm_test_sysfb_build_fourcc_list =============
[09:14:52] [PASSED] no native formats
[09:14:52] [PASSED] XRGB8888 as native format
[09:14:52] [PASSED] remove duplicates
[09:14:52] [PASSED] convert alpha formats
[09:14:52] [PASSED] random formats
[09:14:52] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[09:14:52] ============= [PASSED] drm_sysfb_modeset_test ==============
[09:14:52] ================== drm_fixp (2 subtests) ===================
[09:14:52] [PASSED] drm_test_int2fixp
[09:14:52] [PASSED] drm_test_sm2fixp
[09:14:52] ==================== [PASSED] drm_fixp =====================
[09:14:52] ============================================================
[09:14:52] Testing complete. Ran 624 tests: passed: 624
[09:14:52] Elapsed time: 26.685s total, 1.701s configuring, 24.564s building, 0.390s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[09:14:52] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[09:14:54] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[09:15:03] Starting KUnit Kernel (1/1)...
[09:15:03] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[09:15:03] ================= ttm_device (5 subtests) ==================
[09:15:03] [PASSED] ttm_device_init_basic
[09:15:03] [PASSED] ttm_device_init_multiple
[09:15:03] [PASSED] ttm_device_fini_basic
[09:15:03] [PASSED] ttm_device_init_no_vma_man
[09:15:03] ================== ttm_device_init_pools ==================
[09:15:03] [PASSED] No DMA allocations, no DMA32 required
[09:15:03] [PASSED] DMA allocations, DMA32 required
[09:15:03] [PASSED] No DMA allocations, DMA32 required
[09:15:03] [PASSED] DMA allocations, no DMA32 required
[09:15:03] ============== [PASSED] ttm_device_init_pools ==============
[09:15:03] =================== [PASSED] ttm_device ====================
[09:15:03] ================== ttm_pool (8 subtests) ===================
[09:15:03] ================== ttm_pool_alloc_basic ===================
[09:15:03] [PASSED] One page
[09:15:03] [PASSED] More than one page
[09:15:03] [PASSED] Above the allocation limit
[09:15:03] [PASSED] One page, with coherent DMA mappings enabled
[09:15:03] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[09:15:03] ============== [PASSED] ttm_pool_alloc_basic ===============
[09:15:03] ============== ttm_pool_alloc_basic_dma_addr ==============
[09:15:03] [PASSED] One page
[09:15:03] [PASSED] More than one page
[09:15:03] [PASSED] Above the allocation limit
[09:15:03] [PASSED] One page, with coherent DMA mappings enabled
[09:15:03] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[09:15:03] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[09:15:03] [PASSED] ttm_pool_alloc_order_caching_match
[09:15:03] [PASSED] ttm_pool_alloc_caching_mismatch
[09:15:03] [PASSED] ttm_pool_alloc_order_mismatch
[09:15:03] [PASSED] ttm_pool_free_dma_alloc
[09:15:03] [PASSED] ttm_pool_free_no_dma_alloc
[09:15:03] [PASSED] ttm_pool_fini_basic
[09:15:03] ==================== [PASSED] ttm_pool =====================
[09:15:03] ================ ttm_resource (8 subtests) =================
[09:15:03] ================= ttm_resource_init_basic =================
[09:15:03] [PASSED] Init resource in TTM_PL_SYSTEM
[09:15:03] [PASSED] Init resource in TTM_PL_VRAM
[09:15:03] [PASSED] Init resource in a private placement
[09:15:03] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[09:15:03] ============= [PASSED] ttm_resource_init_basic =============
[09:15:03] [PASSED] ttm_resource_init_pinned
[09:15:03] [PASSED] ttm_resource_fini_basic
[09:15:03] [PASSED] ttm_resource_manager_init_basic
[09:15:03] [PASSED] ttm_resource_manager_usage_basic
[09:15:03] [PASSED] ttm_resource_manager_set_used_basic
[09:15:03] [PASSED] ttm_sys_man_alloc_basic
[09:15:03] [PASSED] ttm_sys_man_free_basic
[09:15:03] ================== [PASSED] ttm_resource ===================
[09:15:03] =================== ttm_tt (15 subtests) ===================
[09:15:03] ==================== ttm_tt_init_basic ====================
[09:15:03] [PASSED] Page-aligned size
[09:15:03] [PASSED] Extra pages requested
[09:15:03] ================ [PASSED] ttm_tt_init_basic ================
[09:15:03] [PASSED] ttm_tt_init_misaligned
[09:15:03] [PASSED] ttm_tt_fini_basic
[09:15:03] [PASSED] ttm_tt_fini_sg
[09:15:03] [PASSED] ttm_tt_fini_shmem
[09:15:03] [PASSED] ttm_tt_create_basic
[09:15:03] [PASSED] ttm_tt_create_invalid_bo_type
[09:15:03] [PASSED] ttm_tt_create_ttm_exists
[09:15:03] [PASSED] ttm_tt_create_failed
[09:15:03] [PASSED] ttm_tt_destroy_basic
[09:15:03] [PASSED] ttm_tt_populate_null_ttm
[09:15:03] [PASSED] ttm_tt_populate_populated_ttm
[09:15:03] [PASSED] ttm_tt_unpopulate_basic
[09:15:03] [PASSED] ttm_tt_unpopulate_empty_ttm
[09:15:03] [PASSED] ttm_tt_swapin_basic
[09:15:03] ===================== [PASSED] ttm_tt ======================
[09:15:03] =================== ttm_bo (14 subtests) ===================
[09:15:03] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[09:15:03] [PASSED] Cannot be interrupted and sleeps
[09:15:03] [PASSED] Cannot be interrupted, locks straight away
[09:15:03] [PASSED] Can be interrupted, sleeps
[09:15:03] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[09:15:03] [PASSED] ttm_bo_reserve_locked_no_sleep
[09:15:03] [PASSED] ttm_bo_reserve_no_wait_ticket
[09:15:03] [PASSED] ttm_bo_reserve_double_resv
[09:15:03] [PASSED] ttm_bo_reserve_interrupted
[09:15:03] [PASSED] ttm_bo_reserve_deadlock
[09:15:03] [PASSED] ttm_bo_unreserve_basic
[09:15:03] [PASSED] ttm_bo_unreserve_pinned
[09:15:03] [PASSED] ttm_bo_unreserve_bulk
[09:15:03] [PASSED] ttm_bo_fini_basic
[09:15:03] [PASSED] ttm_bo_fini_shared_resv
[09:15:03] [PASSED] ttm_bo_pin_basic
[09:15:03] [PASSED] ttm_bo_pin_unpin_resource
[09:15:03] [PASSED] ttm_bo_multiple_pin_one_unpin
[09:15:03] ===================== [PASSED] ttm_bo ======================
[09:15:03] ============== ttm_bo_validate (21 subtests) ===============
[09:15:03] ============== ttm_bo_init_reserved_sys_man ===============
[09:15:03] [PASSED] Buffer object for userspace
[09:15:03] [PASSED] Kernel buffer object
[09:15:03] [PASSED] Shared buffer object
[09:15:03] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[09:15:03] ============== ttm_bo_init_reserved_mock_man ==============
[09:15:03] [PASSED] Buffer object for userspace
[09:15:03] [PASSED] Kernel buffer object
[09:15:03] [PASSED] Shared buffer object
[09:15:03] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[09:15:03] [PASSED] ttm_bo_init_reserved_resv
[09:15:03] ================== ttm_bo_validate_basic ==================
[09:15:03] [PASSED] Buffer object for userspace
[09:15:03] [PASSED] Kernel buffer object
[09:15:03] [PASSED] Shared buffer object
[09:15:03] ============== [PASSED] ttm_bo_validate_basic ==============
[09:15:03] [PASSED] ttm_bo_validate_invalid_placement
[09:15:03] ============= ttm_bo_validate_same_placement ==============
[09:15:03] [PASSED] System manager
[09:15:03] [PASSED] VRAM manager
[09:15:03] ========= [PASSED] ttm_bo_validate_same_placement ==========
[09:15:03] [PASSED] ttm_bo_validate_failed_alloc
[09:15:03] [PASSED] ttm_bo_validate_pinned
[09:15:03] [PASSED] ttm_bo_validate_busy_placement
[09:15:03] ================ ttm_bo_validate_multihop =================
[09:15:03] [PASSED] Buffer object for userspace
[09:15:03] [PASSED] Kernel buffer object
[09:15:03] [PASSED] Shared buffer object
[09:15:03] ============ [PASSED] ttm_bo_validate_multihop =============
[09:15:03] ========== ttm_bo_validate_no_placement_signaled ==========
[09:15:03] [PASSED] Buffer object in system domain, no page vector
[09:15:03] [PASSED] Buffer object in system domain with an existing page vector
[09:15:03] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[09:15:03] ======== ttm_bo_validate_no_placement_not_signaled ========
[09:15:03] [PASSED] Buffer object for userspace
[09:15:03] [PASSED] Kernel buffer object
[09:15:03] [PASSED] Shared buffer object
[09:15:03] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[09:15:03] [PASSED] ttm_bo_validate_move_fence_signaled
[09:15:03] ========= ttm_bo_validate_move_fence_not_signaled =========
[09:15:03] [PASSED] Waits for GPU
[09:15:03] [PASSED] Tries to lock straight away
[09:15:03] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[09:15:03] [PASSED] ttm_bo_validate_happy_evict
[09:15:03] [PASSED] ttm_bo_validate_all_pinned_evict
[09:15:03] [PASSED] ttm_bo_validate_allowed_only_evict
[09:15:03] [PASSED] ttm_bo_validate_deleted_evict
[09:15:03] [PASSED] ttm_bo_validate_busy_domain_evict
[09:15:03] [PASSED] ttm_bo_validate_evict_gutting
[09:15:03] [PASSED] ttm_bo_validate_recrusive_evict
[09:15:03] ================= [PASSED] ttm_bo_validate =================
[09:15:03] ============================================================
[09:15:03] Testing complete. Ran 101 tests: passed: 101
[09:15:03] Elapsed time: 11.195s total, 1.709s configuring, 9.269s building, 0.183s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel