Intel-XE Archive on lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver
@ 2025-10-06 14:20 Aakash Deep Sarkar
  2025-10-06 14:20 ` [PATCH v5 1/8] drm/xe: Add a new xe_user structure Aakash Deep Sarkar
                   ` (11 more replies)
  0 siblings, 12 replies; 17+ messages in thread
From: Aakash Deep Sarkar @ 2025-10-06 14:20 UTC (permalink / raw)
  To: intel-xe
  Cc: jeevaka.badrappan, rodrigo.vivi, matthew.brost, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit, Aakash Deep Sarkar

This patch series implements the Android VSR requirement GPU work
period event for the Intel Xe driver.

|GpuWorkPeriodEvent| defines a non-overlapping, non-zero period
of time from |start_time_ns| (inclusive) until |end_time_ns|
(exclusive) for a given |uid|, and includes details of how much
work the GPU was performing for |uid| during the period. When
GPU work for a given |uid| runs on the GPU, the driver must track
one or more periods that cover the time where the work was running,
and emit events soon after.

Full requirement is defined in the following file:
https://cs.android.com/android/platform/superproject/main/+\
main:frameworks/native/services/gpuservice/gpuwork/bpfprogs/gpuWork.c;l=35

The requirement is implemented using a delayed worker thread per
user id instance to accumulate its runtime on the gpu and emit
the event. Each user id instance is tracked using an xe_user
structure and the runtime is updated every time the kworker is
executed for this uid. The delay period is hardcoded to 500 msecs.

The runtime on the gpu is collected for each xe file individually
inside the function xe_exec_queue_update_run_ticks and accumulated
into the corresponding xe_user active_duration_ns field. The HW
Context timestamp field in the GTT is used to derive the runtime
in clock ticks and then converted into nanosecs before updating the
active duration.

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>

Aakash Deep Sarkar (8):
  drm/xe: Add a new xe_user structure
  drm/xe: Add xe_gt_clock_interval_to_ns function
  drm/xe: Modify xe_exec_queue_update_run_ticks
  drm/xe: Handle xe_user creation and removal
  drm/xe: Implement xe_work_period_worker
  drm/xe: Add a Kconfig option for GPU work period
  drm/xe: Handle xe_work_period destruction
  Hack patch: Do not merge

 drivers/gpu/drm/xe/Makefile          |   2 +
 drivers/gpu/drm/xe/xe_device.c       |  23 +++
 drivers/gpu/drm/xe/xe_device_types.h |  19 ++
 drivers/gpu/drm/xe/xe_exec_queue.c   |   8 +
 drivers/gpu/drm/xe/xe_gt_clock.c     |  14 ++
 drivers/gpu/drm/xe/xe_gt_clock.h     |   1 +
 drivers/gpu/drm/xe/xe_pm.c           |   5 +
 drivers/gpu/drm/xe/xe_user.c         | 288 +++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_user.h         | 131 ++++++++++++
 9 files changed, 491 insertions(+)
 create mode 100644 drivers/gpu/drm/xe/xe_user.c
 create mode 100644 drivers/gpu/drm/xe/xe_user.h

-- 
2.49.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v5 1/8] drm/xe: Add a new xe_user structure
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
@ 2025-10-06 14:20 ` Aakash Deep Sarkar
  2025-10-06 14:20 ` [PATCH v5 2/8] drm/xe: Add xe_gt_clock_interval_to_ns function Aakash Deep Sarkar
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Aakash Deep Sarkar @ 2025-10-06 14:20 UTC (permalink / raw)
  To: intel-xe
  Cc: jeevaka.badrappan, rodrigo.vivi, matthew.brost, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit, Aakash Deep Sarkar

For Android GPU work period event we need to track the runtime
on the GPU for each user id. This means we can have multiple
xe files opened by different processes/threads belonging to
the same user id. All these xe files need to be grouped together
so that one can easily identify these while calculating the
run time for the given user id.

Currently, the xe driver doesn't record the user id of the
calling process. Also, all the xe files created using open
call are clubbed together inside the xe device structure
with no way to distinguish between them based on the user id
of the calling process.

To remedy these limitations we are adding another layer of
indirection between xe device and xe file. xe device will
now have a list of xe users each with a given user id; and each
xe user will have a list of xe files each of which is created
by a process that is associated with this user id.

The lifetime of the xe user structure should be between when
a process with a new user id has opened the xe device; and when
the last xe file belonging to this user id is closed.

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
 drivers/gpu/drm/xe/Makefile  |  2 +
 drivers/gpu/drm/xe/xe_user.c | 89 ++++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_user.h | 78 +++++++++++++++++++++++++++++++
 3 files changed, 169 insertions(+)
 create mode 100644 drivers/gpu/drm/xe/xe_user.c
 create mode 100644 drivers/gpu/drm/xe/xe_user.h

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 3c5d2388997d..b078834ec762 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -336,6 +336,8 @@ ifeq ($(CONFIG_DEBUG_FS),y)
 
 	xe-$(CONFIG_PCI_IOV) += xe_gt_sriov_pf_debugfs.o
 
+	xe-y += xe_user.o
+
 	xe-$(CONFIG_DRM_XE_DISPLAY) += \
 		i915-display/intel_display_debugfs.o \
 		i915-display/intel_display_debugfs_params.o \
diff --git a/drivers/gpu/drm/xe/xe_user.c b/drivers/gpu/drm/xe/xe_user.c
new file mode 100644
index 000000000000..f35e18776300
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_user.c
@@ -0,0 +1,89 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include "xe_user.h"
+
+
+/**
+ * DOC: Xe User
+ *
+ * Xe User adds support for handling UID (i.e. persistent, unique ID of the
+ * Android app) based requirements for Android platforms.
+ *
+ * For Android GPU work period event we need to track the runtime on the GPU
+ * for each UID. This means we can have multiple xe files opened by different
+ * processes/threads that belongs to the same UID. All these xe files need to
+ * be grouped together so that one can easily identify them while calculating
+ * the run time for the given UID.
+ *
+ * Currently, the xe driver doesn't record the user id of the calling process.
+ * Also, all the xe files created using open call are clubbed together inside
+ * the xe device structure with no way to distinguish between them based on
+ * the UID of the calling process.
+ *
+ * To remedy these limitations we are adding another layer of indirection
+ * between the xe device and the xe file. xe device will now also have a list
+ * of xe users each with a given UID, and each xe user will have a list of xe
+ * files that are created by a process that belongs to this UID.
+ *
+ * The lifetime of a xe user structure should be between when a process with
+ * a new UID has first opened the xe device, and when the last xe file
+ * belonging to this UID is closed.
+ *
+ * In order to implement this we maintain an xarray of xe user structures
+ * inside our xe device instance. Whenever a new xe file is created via an
+ * open call, we check if the calling process' UID is already present in our
+ * xarray. If so, we increment the refcount for the associated xe user and add
+ * our newly created xe file to the list of xe files belonging to this xe user.
+ * Otherwise, we allocate a new xe user structure for this UID and initialize
+ * its file list with our newly create xe file.
+ *
+ * Whenever an xe file is being destroyed, we decrement the refcount of the
+ * associated xe user. When the last xe file in the xe user's file list is
+ * destroyed, the xe user refcount should drop to zero and the xe user should
+ * be cleaned up. During the cleanup path we remove the xarray entry in our xe
+ * device for this xe user and free up its memory.
+ */
+
+
+
+
+/**
+ * xe_user_alloc() - Allocate xe user
+ * @void: No arg
+ *
+ * Allocate xe user struct to track activity on the gpu
+ * by the application. Call this API whenever a new app
+ * has opened xe device.
+ *
+ * Return: pointer to user struct or NULL if can't allocate
+ */
+struct xe_user *xe_user_alloc(void)
+{
+	struct xe_user *user;
+
+	user = kzalloc(sizeof(*user), GFP_KERNEL);
+	if (!user)
+		return NULL;
+
+	kref_init(&user->refcount);
+	mutex_init(&user->filelist_lock);
+	INIT_LIST_HEAD(&user->filelist);
+	return user;
+}
+
+/**
+ * __xe_user_free() - Free user struct
+ * @kref: The reference
+ *
+ * Return: void
+ */
+void __xe_user_free(struct kref *kref)
+{
+	struct xe_user *user =
+		container_of(kref, struct xe_user, refcount);
+
+	kfree(user);
+}
diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
new file mode 100644
index 000000000000..9628cc628a37
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_user.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_USER_H_
+#define _XE_USER_H_
+
+/**
+ * struct xe_user - xe user structure
+ *
+ * This is a per UID structure for tracking an xe device client. It is
+ * allocated when a new process/app opens the xe device and destroyed
+ * when the last xe file belonging to this UID is destroyed.
+ */
+struct xe_user {
+	/**
+	 * @refcount: reference count
+	 */
+	struct kref refcount;
+
+	/**
+	 * @xe: pointer to the xe_device
+	 */
+	struct xe_device *xe;
+
+	/**
+	 * @filelist_lock: lock protecting the filelist
+	 */
+	struct mutex filelist_lock;
+
+	/**
+	 * @filelist: list of xe files belonging to this xe user
+	 */
+	struct list_head filelist;
+
+	/**
+	 * @work: work to emit the gpu work period event for this
+	 * xe user
+	 */
+	struct work_struct work;
+
+	/**
+	 * @uid: UID of this xe_user
+	 */
+	u32 uid;
+
+	/**
+	 * @active_duration_ns: sum total of xe_file.active_duration_ns
+	 * for all xe files belonging to this xe user
+	 */
+	u64 active_duration_ns;
+
+	/**
+	 * @last_timestamp_ns: timestamp in ns when we last emitted event
+	 * for this xe user
+	 */
+	u64 last_timestamp_ns;
+};
+
+struct xe_user *xe_user_alloc(void);
+
+static inline struct xe_user *
+xe_user_get(struct xe_user *user)
+{
+	kref_get(&user->refcount);
+	return user;
+}
+
+void __xe_user_free(struct kref *kref);
+
+static inline void xe_user_put(struct xe_user *user)
+{
+	kref_put(&user->refcount, __xe_user_free);
+}
+
+#endif // _XE_USER_H_
+
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v5 2/8] drm/xe: Add xe_gt_clock_interval_to_ns function
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
  2025-10-06 14:20 ` [PATCH v5 1/8] drm/xe: Add a new xe_user structure Aakash Deep Sarkar
@ 2025-10-06 14:20 ` Aakash Deep Sarkar
  2025-10-06 14:20 ` [PATCH v5 3/8] drm/xe: Modify xe_exec_queue_update_run_ticks Aakash Deep Sarkar
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Aakash Deep Sarkar @ 2025-10-06 14:20 UTC (permalink / raw)
  To: intel-xe
  Cc: jeevaka.badrappan, rodrigo.vivi, matthew.brost, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit, Aakash Deep Sarkar

The runtime of a user id in the GPU work period event are required
to be given in nanosec unit. Since we want to use the HW Context
timestamp register to derive the runtime for a context, we need
a way to convert from GT clock ticks to nano seconds.

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_clock.c | 14 ++++++++++++++
 drivers/gpu/drm/xe/xe_gt_clock.h |  1 +
 2 files changed, 15 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_gt_clock.c b/drivers/gpu/drm/xe/xe_gt_clock.c
index 4f011d1573c6..17c1cc6bff5a 100644
--- a/drivers/gpu/drm/xe/xe_gt_clock.c
+++ b/drivers/gpu/drm/xe/xe_gt_clock.c
@@ -110,3 +110,17 @@ u64 xe_gt_clock_interval_to_ms(struct xe_gt *gt, u64 count)
 {
 	return div_u64_roundup(count * MSEC_PER_SEC, gt->info.reference_clock);
 }
+
+/**
+ * xe_gt_clock_interval_to_ns - Convert sampled GT clock ticks to nanosec
+ *
+ * @gt: the &xe_gt
+ * @count: count of GT clock ticks
+ *
+ * Returns: time in nanosec
+ */
+u64 xe_gt_clock_interval_to_ns(struct xe_gt *gt, u64 count)
+{
+	return div_u64_roundup(count * NSEC_PER_SEC, gt->info.reference_clock);
+}
+
diff --git a/drivers/gpu/drm/xe/xe_gt_clock.h b/drivers/gpu/drm/xe/xe_gt_clock.h
index 3adeb7baaca4..bd87971bce97 100644
--- a/drivers/gpu/drm/xe/xe_gt_clock.h
+++ b/drivers/gpu/drm/xe/xe_gt_clock.h
@@ -12,5 +12,6 @@ struct xe_gt;
 
 int xe_gt_clock_init(struct xe_gt *gt);
 u64 xe_gt_clock_interval_to_ms(struct xe_gt *gt, u64 count);
+u64 xe_gt_clock_interval_to_ns(struct xe_gt *gt, u64 count);
 
 #endif
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v5 3/8] drm/xe: Modify xe_exec_queue_update_run_ticks
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
  2025-10-06 14:20 ` [PATCH v5 1/8] drm/xe: Add a new xe_user structure Aakash Deep Sarkar
  2025-10-06 14:20 ` [PATCH v5 2/8] drm/xe: Add xe_gt_clock_interval_to_ns function Aakash Deep Sarkar
@ 2025-10-06 14:20 ` Aakash Deep Sarkar
  2025-10-06 14:20 ` [PATCH v5 4/8] drm/xe: Handle xe_user creation and removal Aakash Deep Sarkar
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Aakash Deep Sarkar @ 2025-10-06 14:20 UTC (permalink / raw)
  To: intel-xe
  Cc: jeevaka.badrappan, rodrigo.vivi, matthew.brost, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit, Aakash Deep Sarkar

For GPU work period event we need to record the run time of a
context on the GPU in nanosecs. In the present xe driver code,
we only record the run time in clock ticks and separately for
each engine class.

So, we are adding a uint64 variable |active_duration_ns| in
the xe file structure where we can record the cumulative
run time in ns of all the engines for this context. The
intent here is to add up the |active_duration_ns| in
all the xe files belonging to a given user id to derive
the run time for that user id.

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
 drivers/gpu/drm/xe/xe_device_types.h | 3 +++
 drivers/gpu/drm/xe/xe_exec_queue.c   | 7 +++++++
 2 files changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 53264b2bb832..54a612787289 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -681,6 +681,9 @@ struct xe_file {
 	/** @run_ticks: hw engine class run time in ticks for this drm client */
 	u64 run_ticks[XE_ENGINE_CLASS_MAX];
 
+	/** @active_duration_ns: total run time in ns for this xe file */
+	u64 active_duration_ns;
+
 	/** @client: drm client */
 	struct xe_drm_client *client;
 
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index 37b2b93b73d6..6eb34c62c779 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -15,6 +15,7 @@
 #include "xe_dep_scheduler.h"
 #include "xe_device.h"
 #include "xe_gt.h"
+#include "xe_gt_clock.h"
 #include "xe_hw_engine_class_sysfs.h"
 #include "xe_hw_engine_group.h"
 #include "xe_hw_fence.h"
@@ -887,6 +888,8 @@ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
 {
 	struct xe_device *xe = gt_to_xe(q->gt);
 	struct xe_lrc *lrc;
+	struct xe_gt *gt = q->gt;
+
 	u64 old_ts, new_ts;
 	int idx;
 
@@ -912,6 +915,10 @@ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
 	new_ts = xe_lrc_update_timestamp(lrc, &old_ts);
 	q->xef->run_ticks[q->class] += (new_ts - old_ts) * q->width;
 
+	// Accumulate the runtime in nanosec for this queue into the xe file.
+	q->xef->active_duration_ns +=
+		xe_gt_clock_interval_to_ns(gt, (new_ts - old_ts));
+
 	drm_dev_exit(idx);
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v5 4/8] drm/xe: Handle xe_user creation and removal
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
                   ` (2 preceding siblings ...)
  2025-10-06 14:20 ` [PATCH v5 3/8] drm/xe: Modify xe_exec_queue_update_run_ticks Aakash Deep Sarkar
@ 2025-10-06 14:20 ` Aakash Deep Sarkar
  2025-10-06 20:49   ` Matthew Brost
  2025-10-06 14:20 ` [PATCH v5 5/8] drm/xe: Implement xe_work_period_worker Aakash Deep Sarkar
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 17+ messages in thread
From: Aakash Deep Sarkar @ 2025-10-06 14:20 UTC (permalink / raw)
  To: intel-xe
  Cc: jeevaka.badrappan, rodrigo.vivi, matthew.brost, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit, Aakash Deep Sarkar

We want our xe user structure to be created when a new
user id opens the xe device node and to be destroyed
when the final xe file with this uid is closed. In other
words the xe_user structure for a uid should remain in
scope as long as any process with this uid has an open
xe file descriptor.

To implement this we maintain an xarray of xe user
structures inside our xe device instance. Whenever a new
xe file is created via an open call, we check if the
calling process' uid is already present in our xarray.
If so, we increment the refcount for the associated
xe user and add this xe file to the list of xe files
belonging to this xe user. Otherwise, we allocate a
new xe user structure for this uid and initialize its
file list with this xe file.

Whenever an xe file is destroyed, we decrement the
refcount of the associated xe user. When the last
xe file in the xe user's file list is destroyed,
the xe user refcount should drop to zero and the
xe user should be cleaned up. During the cleanup path
we remove the xarray entry for this xe user in our
xe device and free up its memory.

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
 drivers/gpu/drm/xe/xe_device.c       | 21 ++++++++
 drivers/gpu/drm/xe/xe_device_types.h | 16 ++++++
 drivers/gpu/drm/xe/xe_user.c         | 77 +++++++++++++++++++++++++++-
 drivers/gpu/drm/xe/xe_user.h         | 11 +++-
 4 files changed, 123 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 386940323630..5a084fd39876 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -65,6 +65,7 @@
 #include "xe_tile.h"
 #include "xe_ttm_stolen_mgr.h"
 #include "xe_ttm_sys_mgr.h"
+#include "xe_user.h"
 #include "xe_vm.h"
 #include "xe_vm_madvise.h"
 #include "xe_vram.h"
@@ -82,7 +83,9 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
 	struct xe_drm_client *client;
 	struct xe_file *xef;
 	int ret = -ENOMEM;
+	int uid = -EINVAL;
 	struct task_struct *task = NULL;
+	const struct cred *cred = NULL;
 
 	xef = kzalloc(sizeof(*xef), GFP_KERNEL);
 	if (!xef)
@@ -107,8 +110,16 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
 	file->driver_priv = xef;
 	kref_init(&xef->refcount);
 
+	INIT_LIST_HEAD(&xef->user_link);
+
 	task = get_pid_task(rcu_access_pointer(file->pid), PIDTYPE_PID);
 	if (task) {
+		cred = get_task_cred(task);
+		if (cred) {
+			uid = (unsigned int) cred->euid.val;
+			xe_user_init(xe, xef, uid);
+			put_cred(cred);
+		}
 		xef->process_name = kstrdup(task->comm, GFP_KERNEL);
 		xef->pid = task->pid;
 		put_task_struct(task);
@@ -128,6 +139,12 @@ static void xe_file_destroy(struct kref *ref)
 
 	xe_drm_client_put(xef->client);
 	kfree(xef->process_name);
+
+	mutex_lock(&xef->user->filelist_lock);
+	list_del(&xef->user_link);
+	mutex_unlock(&xef->user->filelist_lock);
+
+	xe_user_put(xef->user);
 	kfree(xef);
 }
 
@@ -467,6 +484,10 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
 
 	xa_init_flags(&xe->usm.asid_to_vm, XA_FLAGS_ALLOC);
 
+	xa_init_flags(&xe->work_period.users, XA_FLAGS_ALLOC1);
+
+	mutex_init(&xe->work_period.lock);
+
 	if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) {
 		/* Trigger a large asid and an early asid wrap. */
 		u32 asid;
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 54a612787289..4d4e9a63b3fd 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -613,6 +613,16 @@ struct xe_device {
 	atomic_t g2g_test_count;
 #endif
 
+	/**
+	 * @xe_work_period: Support for GPU work period tracepoint
+	 */
+	struct xe_work_period {
+		/** @users: list of users that have opened this xe device */
+		struct xarray users;
+		/** @lock: lock protecting this structure */
+		struct mutex lock;
+	} work_period;
+
 	/* private: */
 
 #if IS_ENABLED(CONFIG_DRM_XE_DISPLAY)
@@ -684,6 +694,12 @@ struct xe_file {
 	/** @active_duration_ns: total run time in ns for this xe file */
 	u64 active_duration_ns;
 
+	/** @user: pointer to struct xe_user associated with this xe file */
+	struct xe_user *user;
+
+	/** @user_link: link into xe_user::filelist */
+	struct list_head user_link;
+
 	/** @client: drm client */
 	struct xe_drm_client *client;
 
diff --git a/drivers/gpu/drm/xe/xe_user.c b/drivers/gpu/drm/xe/xe_user.c
index f35e18776300..cb3de75aa497 100644
--- a/drivers/gpu/drm/xe/xe_user.c
+++ b/drivers/gpu/drm/xe/xe_user.c
@@ -3,6 +3,8 @@
  * Copyright © 2025 Intel Corporation
  */
 
+#include <drm/drm_drv.h>
+
 #include "xe_user.h"
 
 
@@ -60,7 +62,7 @@
  *
  * Return: pointer to user struct or NULL if can't allocate
  */
-struct xe_user *xe_user_alloc(void)
+static struct xe_user *xe_user_alloc(void)
 {
 	struct xe_user *user;
 
@@ -71,6 +73,7 @@ struct xe_user *xe_user_alloc(void)
 	kref_init(&user->refcount);
 	mutex_init(&user->filelist_lock);
 	INIT_LIST_HEAD(&user->filelist);
+	INIT_WORK(&user->work, work_period_worker);
 	return user;
 }
 
@@ -84,6 +87,78 @@ void __xe_user_free(struct kref *kref)
 {
 	struct xe_user *user =
 		container_of(kref, struct xe_user, refcount);
+	struct xe_device *xe = user->xe;
+	void *lookup;
+
+	mutex_lock(&xe->work_period.lock);
+	lookup = xa_erase(&xe->work_period.users, user->id);
+	xe_assert(xe, lookup == user);
+	mutex_unlock(&xe->work_period.lock);
 
+	drm_dev_put(&user->xe->drm);
 	kfree(user);
 }
+
+static struct xe_user *xe_user_lookup(struct xe_device *xe, u32 uid)
+{
+	struct xe_user *user = NULL;
+	unsigned long i;
+
+	mutex_lock(&xe->work_period.lock);
+	xa_for_each(&xe->work_period.users, i, user) {
+		if (user->uid == uid) {
+			xe_user_get(user);
+			mutex_unlock(&xe->work_period.lock);
+			return user;
+		}
+	}
+	mutex_unlock(&xe->work_period.lock);
+
+	return NULL;
+}
+
+int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid)
+{
+	struct xe_user *user = NULL;
+	int ret;
+	u32 idx;
+	/*
+	 * Check if the calling process/uid has already been registered
+	 * with the xe device during a previous open call. If so then
+	 * take a reference to this xe user and add this xe file to the
+	 * filelist belonging to this xe user
+	 */
+	user = xe_user_lookup(xe, uid);
+	if (!user) {
+		/*
+		 * We couldn't find an existing xe user for the calling process.
+		 * Allocate a new struct xe_user and register it with this xe
+		 * device
+		 */
+		user = xe_user_alloc();
+		if (!user)
+			return -ENOMEM;
+
+
+		user->uid = uid;
+		user->last_timestamp_ns = ktime_get_raw_ns();
+		user->xe = xe;
+
+		mutex_lock(&xe->work_period.lock);
+		ret = xa_alloc(&xe->work_period.users, &idx, user, xa_limit_32b, GFP_KERNEL);
+		mutex_unlock(&xe->work_period.lock);
+
+		if (ret < 0)
+			return ret;
+
+		user->id = idx;
+		drm_dev_get(&xe->drm);
+	}
+
+	mutex_lock(&user->filelist_lock);
+	list_add(&xef->user_link, &user->filelist);
+	mutex_unlock(&user->filelist_lock);
+	xef->user = user;
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
index 9628cc628a37..341200c55509 100644
--- a/drivers/gpu/drm/xe/xe_user.h
+++ b/drivers/gpu/drm/xe/xe_user.h
@@ -6,6 +6,9 @@
 #ifndef _XE_USER_H_
 #define _XE_USER_H_
 
+#include "xe_device.h"
+
+
 /**
  * struct xe_user - xe user structure
  *
@@ -40,6 +43,11 @@ struct xe_user {
 	 */
 	struct work_struct work;
 
+	/**
+	 * @id: index of this user into the xe device::users xarray
+	 */
+	u32 id;
+
 	/**
 	 * @uid: UID of this xe_user
 	 */
@@ -58,7 +66,8 @@ struct xe_user {
 	u64 last_timestamp_ns;
 };
 
-struct xe_user *xe_user_alloc(void);
+int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid);
+
 
 static inline struct xe_user *
 xe_user_get(struct xe_user *user)
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v5 5/8] drm/xe: Implement xe_work_period_worker
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
                   ` (3 preceding siblings ...)
  2025-10-06 14:20 ` [PATCH v5 4/8] drm/xe: Handle xe_user creation and removal Aakash Deep Sarkar
@ 2025-10-06 14:20 ` Aakash Deep Sarkar
  2025-10-06 21:12   ` Matthew Brost
  2025-10-06 14:20 ` [PATCH v5 6/8] drm/xe: Add a Kconfig option for GPU work period Aakash Deep Sarkar
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 17+ messages in thread
From: Aakash Deep Sarkar @ 2025-10-06 14:20 UTC (permalink / raw)
  To: intel-xe
  Cc: jeevaka.badrappan, rodrigo.vivi, matthew.brost, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit, Aakash Deep Sarkar

The work of collecting the GPU run time for a given
xe_user and emitting its event, is done by the
xe_work_period_worker kworker. At the time of creation
of a new xe_user, we simultaneously start a delayed
kworker thread. The delay of execution is set to be
500 ms. After the completion of the work, the kworker
schedules itself for the next execution. This is done
as long as the reference to the xe_user pointer is
valid.

During each execution cycle the xe_work_period_worker
iterates over all the xe files in the xe_user::filelist
and accumulate their corresponding GPU runtime into the
xe_user::active_duration_ns; while also updating each of
the xe_file::active_duration_ns. The total runtime for
this uid in the current sampling period is the delta
between the previous xe_user::active_duration_ns and
the current xe_user::active_duration_ns.

We also record the current timestamp at the end of each
invocation to xe_work_period_worker function in the
xe_user::last_timestamp_ns. The sampling period for this
uid is the delta between the previous timestamp and the
current timestamp.

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
 drivers/gpu/drm/xe/xe_device.c |  11 +--
 drivers/gpu/drm/xe/xe_pm.c     |   5 ++
 drivers/gpu/drm/xe/xe_user.c   | 127 +++++++++++++++++++++++++++++++--
 drivers/gpu/drm/xe/xe_user.h   |  19 ++++-
 4 files changed, 150 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 5a084fd39876..54ac71d1265d 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -140,11 +140,12 @@ static void xe_file_destroy(struct kref *ref)
 	xe_drm_client_put(xef->client);
 	kfree(xef->process_name);
 
-	mutex_lock(&xef->user->filelist_lock);
-	list_del(&xef->user_link);
-	mutex_unlock(&xef->user->filelist_lock);
-
-	xe_user_put(xef->user);
+	if (xef->user) {
+		mutex_lock(&xef->user->lock);
+		list_del(&xef->user_link);
+		xe_user_put(xef->user);
+		mutex_unlock(&xef->user->lock);
+	}
 	kfree(xef);
 }
 
diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
index b7e3094f8acf..c7add2616189 100644
--- a/drivers/gpu/drm/xe/xe_pm.c
+++ b/drivers/gpu/drm/xe/xe_pm.c
@@ -26,6 +26,7 @@
 #include "xe_pxp.h"
 #include "xe_sriov_vf_ccs.h"
 #include "xe_trace.h"
+#include "xe_user.h"
 #include "xe_vm.h"
 #include "xe_wa.h"
 
@@ -598,6 +599,8 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
 
 	xe_i2c_pm_suspend(xe);
 
+	xe_user_cancel_workers(xe);
+
 	xe_rpm_lockmap_release(xe);
 	xe_pm_write_callback_task(xe, NULL);
 	return 0;
@@ -650,6 +653,8 @@ int xe_pm_runtime_resume(struct xe_device *xe)
 
 	xe_i2c_pm_resume(xe, xe->d3cold.allowed);
 
+	xe_user_resume_workers(xe);
+
 	xe_irq_resume(xe);
 
 	for_each_gt(gt, xe, id)
diff --git a/drivers/gpu/drm/xe/xe_user.c b/drivers/gpu/drm/xe/xe_user.c
index cb3de75aa497..fb54d2659642 100644
--- a/drivers/gpu/drm/xe/xe_user.c
+++ b/drivers/gpu/drm/xe/xe_user.c
@@ -5,8 +5,15 @@
 
 #include <drm/drm_drv.h>
 
+#include "xe_assert.h"
+#include "xe_device_types.h"
+#include "xe_exec_queue.h"
+#include "xe_pm.h"
 #include "xe_user.h"
 
+#define CREATE_TRACE_POINTS
+#include <trace/gpu_work_period.h>
+
 
 /**
  * DOC: Xe User
@@ -50,7 +57,82 @@
  */
 
 
+static inline void schedule_next_work(struct xe_device *xe, unsigned int id)
+{
+	struct xe_user *user;
+
+	mutex_lock(&xe->work_period.lock);
+	user = xa_load(&xe->work_period.users, id);
+	if (user && xe_user_get_unless_zero(user))
+		schedule_delayed_work(&user->delay_work,
+				msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL));
+	mutex_unlock(&xe->work_period.lock);
+}
+
+static void xe_work_period_worker(struct work_struct *work)
+{
+	struct xe_user *user = container_of(work, struct xe_user, delay_work.work);
+	struct xe_device *xe = user->xe;
+	struct xe_file *xef;
+	struct xe_exec_queue *q;
+
+	/*
+	 * The GPU work period event requires the following parameters
+	 *
+	 * gpuid:           GPU index in case the platform has more than one GPU
+	 * uid:             user id of the app
+	 * start_time:      start time for the sampling period in nanosecs
+	 * end_time:        end time for the sampling period in nanosecs
+	 * active_duration: Total runtime in nanosecs for this uid in
+	 *                  the current sampling period.
+	 */
+	u32 gpuid = 0, uid = user->uid, id = user->id;
+	u64 start_time, end_time, active_duration;
+	u64 last_active_duration, last_timestamp;
+	unsigned long i;
+
+	mutex_lock(&user->lock);
+
+	// Save the last recorded active duration and timestamp
+	last_active_duration = user->active_duration_ns;
+	last_timestamp = user->last_timestamp_ns;
+
+	if (xe_pm_runtime_get_if_active(xe)) {
+
+		list_for_each_entry(xef, &user->filelist, user_link) {
+
+			wait_var_event(&xef->exec_queue.pending_removal,
+			!atomic_read(&xef->exec_queue.pending_removal));
+
+			/* Accumulate all the exec queues from this file */
+			mutex_lock(&xef->exec_queue.lock);
+			xa_for_each(&xef->exec_queue.xa, i, q) {
+				xe_exec_queue_get(q);
+				mutex_unlock(&xef->exec_queue.lock);
+
+				xe_exec_queue_update_run_ticks(q);
+
+				mutex_lock(&xef->exec_queue.lock);
+				xe_exec_queue_put(q);
+			}
+			mutex_unlock(&xef->exec_queue.lock);
+			user->active_duration_ns += xef->active_duration_ns;
+		}
+
+		xe_pm_runtime_put(xe);
+
+		start_time = last_timestamp + 1;
+		end_time = ktime_get_raw_ns();
+		active_duration = user->active_duration_ns - last_active_duration;
+		trace_gpu_work_period(gpuid, uid, start_time, end_time, active_duration);
+		user->last_timestamp_ns = end_time;
+		xe_user_put(user);
+	}
+
+	mutex_unlock(&user->lock);
 
+	schedule_next_work(xe, id);
+}
 
 /**
  * xe_user_alloc() - Allocate xe user
@@ -71,9 +153,9 @@ static struct xe_user *xe_user_alloc(void)
 		return NULL;
 
 	kref_init(&user->refcount);
-	mutex_init(&user->filelist_lock);
+	mutex_init(&user->lock);
 	INIT_LIST_HEAD(&user->filelist);
-	INIT_WORK(&user->work, work_period_worker);
+	INIT_DELAYED_WORK(&user->delay_work, xe_work_period_worker);
 	return user;
 }
 
@@ -153,12 +235,49 @@ int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid)
 
 		user->id = idx;
 		drm_dev_get(&xe->drm);
+
+		xe_user_get(user);
+		if (!schedule_delayed_work(&user->delay_work,
+					msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL)))
+			xe_user_put(user);
 	}
 
-	mutex_lock(&user->filelist_lock);
+	mutex_lock(&user->lock);
 	list_add(&xef->user_link, &user->filelist);
-	mutex_unlock(&user->filelist_lock);
+	mutex_unlock(&user->lock);
 	xef->user = user;
 
 	return 0;
 }
+
+void xe_user_cancel_workers(struct xe_device *xe)
+{
+	struct xe_user *user = NULL;
+	unsigned long i = 0;
+
+	mutex_lock(&xe->work_period.lock);
+	xa_for_each(&xe->work_period.users, i, user) {
+		if (user && xe_user_get_unless_zero(user)) {
+			cancel_delayed_work_sync(&user->delay_work);
+			xe_user_put(user);
+		}
+	}
+	mutex_unlock(&xe->work_period.lock);
+}
+
+void xe_user_resume_workers(struct xe_device *xe)
+{
+	struct xe_user *user = NULL;
+	unsigned long i = 0;
+
+	mutex_lock(&xe->work_period.lock);
+	xa_for_each(&xe->work_period.users, i, user) {
+		if (user && xe_user_get_unless_zero(user)) {
+			if (!schedule_delayed_work(&user->delay_work,
+					msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL)))
+				xe_user_put(user);
+		}
+	}
+	mutex_unlock(&xe->work_period.lock);
+}
+
diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
index 341200c55509..55016ba189f1 100644
--- a/drivers/gpu/drm/xe/xe_user.h
+++ b/drivers/gpu/drm/xe/xe_user.h
@@ -9,6 +9,8 @@
 #include "xe_device.h"
 
 
+#define XE_WORK_PERIOD_INTERVAL 500
+
 /**
  * struct xe_user - xe user structure
  *
@@ -28,9 +30,9 @@ struct xe_user {
 	struct xe_device *xe;
 
 	/**
-	 * @filelist_lock: lock protecting the filelist
+	 * @filelist_lock: lock protecting this structure
 	 */
-	struct mutex filelist_lock;
+	struct mutex lock;
 
 	/**
 	 * @filelist: list of xe files belonging to this xe user
@@ -41,7 +43,7 @@ struct xe_user {
 	 * @work: work to emit the gpu work period event for this
 	 * xe user
 	 */
-	struct work_struct work;
+	struct delayed_work delay_work;
 
 	/**
 	 * @id: index of this user into the xe device::users xarray
@@ -68,6 +70,17 @@ struct xe_user {
 
 int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid);
 
+void xe_user_cancel_workers(struct xe_device *xe);
+
+void xe_user_resume_workers(struct xe_device *xe);
+
+static inline struct xe_user *
+xe_user_get_unless_zero(struct xe_user *user)
+{
+	if (kref_get_unless_zero(&user->refcount))
+		return user;
+	return NULL;
+}
 
 static inline struct xe_user *
 xe_user_get(struct xe_user *user)
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v5 6/8] drm/xe: Add a Kconfig option for GPU work period
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
                   ` (4 preceding siblings ...)
  2025-10-06 14:20 ` [PATCH v5 5/8] drm/xe: Implement xe_work_period_worker Aakash Deep Sarkar
@ 2025-10-06 14:20 ` Aakash Deep Sarkar
  2025-10-06 14:20 ` [PATCH v5 7/8] drm/xe: Handle xe_work_period destruction Aakash Deep Sarkar
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Aakash Deep Sarkar @ 2025-10-06 14:20 UTC (permalink / raw)
  To: intel-xe
  Cc: jeevaka.badrappan, rodrigo.vivi, matthew.brost, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit, Aakash Deep Sarkar

Since this requirement is intended only for Android, there's
no reason to have it enabled by default in other distributions.
So, better to have it guarded by a Kconfig option.

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
 drivers/gpu/drm/xe/Makefile        |  2 +-
 drivers/gpu/drm/xe/xe_device.c     |  1 -
 drivers/gpu/drm/xe/xe_exec_queue.c |  5 +++--
 drivers/gpu/drm/xe/xe_user.h       | 27 ++++++++++++++++++++++++++-
 4 files changed, 30 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index b078834ec762..738e47afbe89 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -336,7 +336,7 @@ ifeq ($(CONFIG_DEBUG_FS),y)
 
 	xe-$(CONFIG_PCI_IOV) += xe_gt_sriov_pf_debugfs.o
 
-	xe-y += xe_user.o
+	xe-$(CONFIG_TRACE_GPU_WORK_PERIOD) += xe_user.o
 
 	xe-$(CONFIG_DRM_XE_DISPLAY) += \
 		i915-display/intel_display_debugfs.o \
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 54ac71d1265d..61ba76144721 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -486,7 +486,6 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
 	xa_init_flags(&xe->usm.asid_to_vm, XA_FLAGS_ALLOC);
 
 	xa_init_flags(&xe->work_period.users, XA_FLAGS_ALLOC1);
-
 	mutex_init(&xe->work_period.lock);
 
 	if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) {
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index 6eb34c62c779..d5013d546348 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -915,9 +915,10 @@ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
 	new_ts = xe_lrc_update_timestamp(lrc, &old_ts);
 	q->xef->run_ticks[q->class] += (new_ts - old_ts) * q->width;
 
-	// Accumulate the runtime in nanosec for this queue into the xe file.
+
+	// Accumulate the runtime in ns for this queue
 	q->xef->active_duration_ns +=
-		xe_gt_clock_interval_to_ns(gt, (new_ts - old_ts));
+			xe_gt_clock_interval_to_ns(gt, (new_ts - old_ts));
 
 	drm_dev_exit(idx);
 }
diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
index 55016ba189f1..0d184464687c 100644
--- a/drivers/gpu/drm/xe/xe_user.h
+++ b/drivers/gpu/drm/xe/xe_user.h
@@ -68,12 +68,38 @@ struct xe_user {
 	u64 last_timestamp_ns;
 };
 
+#if IS_ENABLED(CONFIG_TRACE_GPU_WORK_PERIOD)
+
 int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid);
 
 void xe_user_cancel_workers(struct xe_device *xe);
 
 void xe_user_resume_workers(struct xe_device *xe);
 
+void __xe_user_free(struct kref *kref);
+
+#else
+
+static inline
+int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid)
+{
+	return 0;
+}
+
+static inline void __xe_user_free(struct kref *kref)
+{
+}
+
+static inline void xe_user_cancel_workers(struct xe_device *xe)
+{
+}
+
+static inline void xe_user_resume_workers(struct xe_device *xe)
+{
+}
+
+#endif // CONFIG_TRACE_GPU_WORK_PERIOD
+
 static inline struct xe_user *
 xe_user_get_unless_zero(struct xe_user *user)
 {
@@ -89,7 +115,6 @@ xe_user_get(struct xe_user *user)
 	return user;
 }
 
-void __xe_user_free(struct kref *kref);
 
 static inline void xe_user_put(struct xe_user *user)
 {
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v5 7/8] drm/xe: Handle xe_work_period destruction
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
                   ` (5 preceding siblings ...)
  2025-10-06 14:20 ` [PATCH v5 6/8] drm/xe: Add a Kconfig option for GPU work period Aakash Deep Sarkar
@ 2025-10-06 14:20 ` Aakash Deep Sarkar
  2025-10-06 14:20 ` [PATCH v5 8/8] Hack patch: Do not merge Aakash Deep Sarkar
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Aakash Deep Sarkar @ 2025-10-06 14:20 UTC (permalink / raw)
  To: intel-xe
  Cc: jeevaka.badrappan, rodrigo.vivi, matthew.brost, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit, Aakash Deep Sarkar

This patch adds the xe_work_period destruction procedure.
We iterate over all entries in the xe::work_period::users
xarray and cancel any pending delayed work. Then destroy
the xarray itself.

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
 drivers/gpu/drm/xe/xe_device.c |  2 ++
 drivers/gpu/drm/xe/xe_user.c   | 10 +++++++---
 drivers/gpu/drm/xe/xe_user.h   |  6 ++++++
 3 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 61ba76144721..1436a10244c5 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -434,6 +434,8 @@ static void xe_device_destroy(struct drm_device *dev, void *dummy)
 	if (xe->destroy_wq)
 		destroy_workqueue(xe->destroy_wq);
 
+	xe_user_fini(xe);
+
 	ttm_device_fini(&xe->ttm);
 }
 
diff --git a/drivers/gpu/drm/xe/xe_user.c b/drivers/gpu/drm/xe/xe_user.c
index fb54d2659642..90664c36e221 100644
--- a/drivers/gpu/drm/xe/xe_user.c
+++ b/drivers/gpu/drm/xe/xe_user.c
@@ -250,6 +250,12 @@ int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid)
 	return 0;
 }
 
+void xe_user_fini(struct xe_device *xe)
+{
+	xe_user_cancel_workers(xe);
+	xa_destroy(&xe->work_period.users);
+}
+
 void xe_user_cancel_workers(struct xe_device *xe)
 {
 	struct xe_user *user = NULL;
@@ -257,10 +263,8 @@ void xe_user_cancel_workers(struct xe_device *xe)
 
 	mutex_lock(&xe->work_period.lock);
 	xa_for_each(&xe->work_period.users, i, user) {
-		if (user && xe_user_get_unless_zero(user)) {
-			cancel_delayed_work_sync(&user->delay_work);
+		if (cancel_delayed_work_sync(&user->delay_work))
 			xe_user_put(user);
-		}
 	}
 	mutex_unlock(&xe->work_period.lock);
 }
diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
index 0d184464687c..d7ac58cb3db8 100644
--- a/drivers/gpu/drm/xe/xe_user.h
+++ b/drivers/gpu/drm/xe/xe_user.h
@@ -72,6 +72,8 @@ struct xe_user {
 
 int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid);
 
+void xe_user_fini(struct xe_device *xe);
+
 void xe_user_cancel_workers(struct xe_device *xe);
 
 void xe_user_resume_workers(struct xe_device *xe);
@@ -86,6 +88,10 @@ int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid)
 	return 0;
 }
 
+static inline void xe_user_fini(struct xe_device *xe)
+{
+}
+
 static inline void __xe_user_free(struct kref *kref)
 {
 }
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v5 8/8] Hack patch: Do not merge
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
                   ` (6 preceding siblings ...)
  2025-10-06 14:20 ` [PATCH v5 7/8] drm/xe: Handle xe_work_period destruction Aakash Deep Sarkar
@ 2025-10-06 14:20 ` Aakash Deep Sarkar
  2025-10-06 15:03 ` ✗ CI.checkpatch: warning for : Add GPU work period support for Xe driver (rev5) Patchwork
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Aakash Deep Sarkar @ 2025-10-06 14:20 UTC (permalink / raw)
  To: intel-xe
  Cc: jeevaka.badrappan, rodrigo.vivi, matthew.brost, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit, Aakash Deep Sarkar

This patch is only added so that our files are built and tested
on the CI

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
 drivers/gpu/drm/xe/Makefile     |  2 +-
 drivers/gpu/drm/xe/xe_user.h    |  2 +-
 drivers/gpu/trace/Kconfig       | 12 +++++++
 include/trace/gpu_work_period.h | 59 +++++++++++++++++++++++++++++++++
 4 files changed, 73 insertions(+), 2 deletions(-)
 create mode 100644 include/trace/gpu_work_period.h

diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 738e47afbe89..b078834ec762 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -336,7 +336,7 @@ ifeq ($(CONFIG_DEBUG_FS),y)
 
 	xe-$(CONFIG_PCI_IOV) += xe_gt_sriov_pf_debugfs.o
 
-	xe-$(CONFIG_TRACE_GPU_WORK_PERIOD) += xe_user.o
+	xe-y += xe_user.o
 
 	xe-$(CONFIG_DRM_XE_DISPLAY) += \
 		i915-display/intel_display_debugfs.o \
diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
index d7ac58cb3db8..f8c5e261a563 100644
--- a/drivers/gpu/drm/xe/xe_user.h
+++ b/drivers/gpu/drm/xe/xe_user.h
@@ -68,7 +68,7 @@ struct xe_user {
 	u64 last_timestamp_ns;
 };
 
-#if IS_ENABLED(CONFIG_TRACE_GPU_WORK_PERIOD)
+#if 1
 
 int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid);
 
diff --git a/drivers/gpu/trace/Kconfig b/drivers/gpu/trace/Kconfig
index cd3d19c4a201..33ffe865739b 100644
--- a/drivers/gpu/trace/Kconfig
+++ b/drivers/gpu/trace/Kconfig
@@ -11,3 +11,15 @@ config TRACE_GPU_MEM
 	  Tracepoint availability varies by GPU driver.
 
 	  If in doubt, say "N".
+
+config TRACE_GPU_WORK_PERIOD
+        bool "Enable GPU work period tracepoint"
+        default n
+        help
+          Choose this option to enable tracepoint for tracking
+          GPU usage based on the UID. Intended for performance
+          profiling and required for Android.
+
+          Tracepoint availability varies by GPU driver.
+
+          If in doubt, say "N".
diff --git a/include/trace/gpu_work_period.h b/include/trace/gpu_work_period.h
new file mode 100644
index 000000000000..e06467625705
--- /dev/null
+++ b/include/trace/gpu_work_period.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM power
+
+#if !defined(_TRACE_GPU_WORK_PERIOD_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_GPU_WORK_PERIOD_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(gpu_work_period,
+
+	TP_PROTO(
+		u32 gpu_id,
+		u32 uid,
+		u64 start_time_ns,
+		u64 end_time_ns,
+		u64 total_active_duration_ns
+	),
+
+	TP_ARGS(gpu_id, uid, start_time_ns, end_time_ns, total_active_duration_ns),
+
+	TP_STRUCT__entry(
+		__field(u32, gpu_id)
+		__field(u32, uid)
+		__field(u64, start_time_ns)
+		__field(u64, end_time_ns)
+		__field(u64, total_active_duration_ns)
+	),
+
+	TP_fast_assign(
+		__entry->gpu_id = gpu_id;
+		__entry->uid = uid;
+		__entry->start_time_ns = start_time_ns;
+		__entry->end_time_ns = end_time_ns;
+		__entry->total_active_duration_ns = total_active_duration_ns;
+	),
+
+	TP_printk("gpu_id=%u uid=%u start_time_ns=%llu end_time_ns=%llu total_active_duration_ns=%llu",
+		__entry->gpu_id,
+		__entry->uid,
+		__entry->start_time_ns,
+		__entry->end_time_ns,
+		__entry->total_active_duration_ns)
+);
+
+#endif /* _TRACE_GPU_WORK_PERIOD_H */
+
+/* This part must be outside protection */
+
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE gpu_work_period
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+
+#include <trace/define_trace.h>
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* ✗ CI.checkpatch: warning for : Add GPU work period support for Xe driver (rev5)
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
                   ` (7 preceding siblings ...)
  2025-10-06 14:20 ` [PATCH v5 8/8] Hack patch: Do not merge Aakash Deep Sarkar
@ 2025-10-06 15:03 ` Patchwork
  2025-10-06 15:04 ` ✓ CI.KUnit: success " Patchwork
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-10-06 15:03 UTC (permalink / raw)
  To: Aakash Deep Sarkar; +Cc: intel-xe

== Series Details ==

Series: : Add GPU work period support for Xe driver (rev5)
URL   : https://patchwork.freedesktop.org/series/153341/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
fbd08a78c3a3bb17964db2a326514c69c1dca660
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 2b915419bb39680c7b54422f0e5758d4482f5e60
Author: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
Date:   Mon Oct 6 14:20:29 2025 +0000

    Hack patch: Do not merge
    
    This patch is only added so that our files are built and tested
    on the CI
    
    Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
+ /mt/dim checkpatch 5d520fdf951167ca881c7ebf831dedff629e6ccf drm-intel
5be750f5e1ef drm/xe: Add a new xe_user structure
-:45: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#45: 
new file mode 100644

-:57: CHECK:LINE_SPACING: Please don't use multiple blank lines
#57: FILE: drivers/gpu/drm/xe/xe_user.c:8:
+
+

-:99: CHECK:LINE_SPACING: Please don't use multiple blank lines
#99: FILE: drivers/gpu/drm/xe/xe_user.c:50:
+
+

-:101: CHECK:LINE_SPACING: Please don't use multiple blank lines
#101: FILE: drivers/gpu/drm/xe/xe_user.c:52:
+
+

total: 0 errors, 1 warnings, 3 checks, 175 lines checked
a1fd14ab84d2 drm/xe: Add xe_gt_clock_interval_to_ns function
da0485cff781 drm/xe: Modify xe_exec_queue_update_run_ticks
ae75c2113d9a drm/xe: Handle xe_user creation and removal
-:65: CHECK:SPACING: No space is necessary after a cast
#65: FILE: drivers/gpu/drm/xe/xe_device.c:119:
+			uid = (unsigned int) cred->euid.val;

-:216: CHECK:LINE_SPACING: Please don't use multiple blank lines
#216: FILE: drivers/gpu/drm/xe/xe_user.c:142:
+
+

-:249: CHECK:LINE_SPACING: Please don't use multiple blank lines
#249: FILE: drivers/gpu/drm/xe/xe_user.h:11:
+
+

total: 0 errors, 0 warnings, 3 checks, 212 lines checked
d54ea215aa4b drm/xe: Implement xe_work_period_worker
-:117: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#117: FILE: drivers/gpu/drm/xe/xe_user.c:68:
+		schedule_delayed_work(&user->delay_work,
+				msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL));

-:150: CHECK:BRACES: Blank lines aren't necessary after an open brace '{'
#150: FILE: drivers/gpu/drm/xe/xe_user.c:101:
+	if (xe_pm_runtime_get_if_active(xe)) {
+

-:152: CHECK:BRACES: Blank lines aren't necessary after an open brace '{'
#152: FILE: drivers/gpu/drm/xe/xe_user.c:103:
+		list_for_each_entry(xef, &user->filelist, user_link) {
+

-:154: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#154: FILE: drivers/gpu/drm/xe/xe_user.c:105:
+			wait_var_event(&xef->exec_queue.pending_removal,
+			!atomic_read(&xef->exec_queue.pending_removal));

-:207: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#207: FILE: drivers/gpu/drm/xe/xe_user.c:241:
+		if (!schedule_delayed_work(&user->delay_work,
+					msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL)))

-:245: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#245: FILE: drivers/gpu/drm/xe/xe_user.c:277:
+			if (!schedule_delayed_work(&user->delay_work,
+					msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL)))

-:273: CHECK:UNCOMMENTED_DEFINITION: struct mutex definition without comment
#273: FILE: drivers/gpu/drm/xe/xe_user.h:35:
+	struct mutex lock;

total: 0 errors, 0 warnings, 7 checks, 243 lines checked
b40dc4d05d50 drm/xe: Add a Kconfig option for GPU work period
-:46: CHECK:LINE_SPACING: Please don't use multiple blank lines
#46: FILE: drivers/gpu/drm/xe/xe_exec_queue.c:944:
 
+

total: 0 errors, 0 warnings, 1 checks, 72 lines checked
4a3f6df57b32 drm/xe: Handle xe_work_period destruction
2b915419bb39 Hack patch: Do not merge
-:33: WARNING:IF_1: Consider removing the #if 1 and its #endif
#33: FILE: drivers/gpu/drm/xe/xe_user.h:71:
+#if 1

-:58: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#58: 
new file mode 100644

-:77: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#77: FILE: include/trace/gpu_work_period.h:15:
+TRACE_EVENT(gpu_work_period,
+

-:78: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#78: FILE: include/trace/gpu_work_period.h:16:
+	TP_PROTO(

-:88: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#88: FILE: include/trace/gpu_work_period.h:26:
+	TP_STRUCT__entry(

-:96: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#96: FILE: include/trace/gpu_work_period.h:34:
+	TP_fast_assign(

-:105: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#105: FILE: include/trace/gpu_work_period.h:43:
+	TP_printk("gpu_id=%u uid=%u start_time_ns=%llu end_time_ns=%llu total_active_duration_ns=%llu",
+		__entry->gpu_id,

total: 0 errors, 2 warnings, 5 checks, 90 lines checked



^ permalink raw reply	[flat|nested] 17+ messages in thread

* ✓ CI.KUnit: success for : Add GPU work period support for Xe driver (rev5)
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
                   ` (8 preceding siblings ...)
  2025-10-06 15:03 ` ✗ CI.checkpatch: warning for : Add GPU work period support for Xe driver (rev5) Patchwork
@ 2025-10-06 15:04 ` Patchwork
  2025-10-06 15:58 ` ✗ Xe.CI.BAT: failure " Patchwork
  2025-10-06 17:42 ` ✗ Xe.CI.Full: " Patchwork
  11 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-10-06 15:04 UTC (permalink / raw)
  To: Aakash Deep Sarkar; +Cc: intel-xe

== Series Details ==

Series: : Add GPU work period support for Xe driver (rev5)
URL   : https://patchwork.freedesktop.org/series/153341/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[15:03:03] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[15:03:07] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[15:03:36] Starting KUnit Kernel (1/1)...
[15:03:36] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[15:03:36] ================== guc_buf (11 subtests) ===================
[15:03:36] [PASSED] test_smallest
[15:03:36] [PASSED] test_largest
[15:03:36] [PASSED] test_granular
[15:03:36] [PASSED] test_unique
[15:03:36] [PASSED] test_overlap
[15:03:36] [PASSED] test_reusable
[15:03:36] [PASSED] test_too_big
[15:03:36] [PASSED] test_flush
[15:03:36] [PASSED] test_lookup
[15:03:36] [PASSED] test_data
[15:03:36] [PASSED] test_class
[15:03:36] ===================== [PASSED] guc_buf =====================
[15:03:36] =================== guc_dbm (7 subtests) ===================
[15:03:36] [PASSED] test_empty
[15:03:36] [PASSED] test_default
[15:03:36] ======================== test_size  ========================
[15:03:36] [PASSED] 4
[15:03:36] [PASSED] 8
[15:03:36] [PASSED] 32
[15:03:36] [PASSED] 256
[15:03:36] ==================== [PASSED] test_size ====================
[15:03:36] ======================= test_reuse  ========================
[15:03:36] [PASSED] 4
[15:03:36] [PASSED] 8
[15:03:36] [PASSED] 32
[15:03:36] [PASSED] 256
[15:03:36] =================== [PASSED] test_reuse ====================
[15:03:36] =================== test_range_overlap  ====================
[15:03:36] [PASSED] 4
[15:03:36] [PASSED] 8
[15:03:36] [PASSED] 32
[15:03:36] [PASSED] 256
[15:03:36] =============== [PASSED] test_range_overlap ================
[15:03:36] =================== test_range_compact  ====================
[15:03:36] [PASSED] 4
[15:03:36] [PASSED] 8
[15:03:36] [PASSED] 32
[15:03:36] [PASSED] 256
[15:03:36] =============== [PASSED] test_range_compact ================
[15:03:36] ==================== test_range_spare  =====================
[15:03:36] [PASSED] 4
[15:03:36] [PASSED] 8
[15:03:36] [PASSED] 32
[15:03:36] [PASSED] 256
[15:03:36] ================ [PASSED] test_range_spare =================
[15:03:36] ===================== [PASSED] guc_dbm =====================
[15:03:36] =================== guc_idm (6 subtests) ===================
[15:03:36] [PASSED] bad_init
[15:03:36] [PASSED] no_init
[15:03:36] [PASSED] init_fini
[15:03:36] [PASSED] check_used
[15:03:36] [PASSED] check_quota
[15:03:36] [PASSED] check_all
[15:03:36] ===================== [PASSED] guc_idm =====================
[15:03:36] ================== no_relay (3 subtests) ===================
[15:03:36] [PASSED] xe_drops_guc2pf_if_not_ready
[15:03:36] [PASSED] xe_drops_guc2vf_if_not_ready
[15:03:36] [PASSED] xe_rejects_send_if_not_ready
[15:03:36] ==================== [PASSED] no_relay =====================
[15:03:36] ================== pf_relay (14 subtests) ==================
[15:03:36] [PASSED] pf_rejects_guc2pf_too_short
[15:03:36] [PASSED] pf_rejects_guc2pf_too_long
[15:03:36] [PASSED] pf_rejects_guc2pf_no_payload
[15:03:36] [PASSED] pf_fails_no_payload
[15:03:36] [PASSED] pf_fails_bad_origin
[15:03:36] [PASSED] pf_fails_bad_type
[15:03:36] [PASSED] pf_txn_reports_error
[15:03:36] [PASSED] pf_txn_sends_pf2guc
[15:03:36] [PASSED] pf_sends_pf2guc
[15:03:36] [SKIPPED] pf_loopback_nop
[15:03:36] [SKIPPED] pf_loopback_echo
[15:03:36] [SKIPPED] pf_loopback_fail
[15:03:36] [SKIPPED] pf_loopback_busy
[15:03:36] [SKIPPED] pf_loopback_retry
[15:03:36] ==================== [PASSED] pf_relay =====================
[15:03:36] ================== vf_relay (3 subtests) ===================
[15:03:36] [PASSED] vf_rejects_guc2vf_too_short
[15:03:36] [PASSED] vf_rejects_guc2vf_too_long
[15:03:36] [PASSED] vf_rejects_guc2vf_no_payload
[15:03:36] ==================== [PASSED] vf_relay =====================
[15:03:36] ===================== lmtt (1 subtest) =====================
[15:03:36] ======================== test_ops  =========================
[15:03:36] [PASSED] 2-level
[15:03:36] [PASSED] multi-level
[15:03:36] ==================== [PASSED] test_ops =====================
[15:03:36] ====================== [PASSED] lmtt =======================
[15:03:36] ================= pf_service (11 subtests) =================
[15:03:36] [PASSED] pf_negotiate_any
[15:03:36] [PASSED] pf_negotiate_base_match
[15:03:36] [PASSED] pf_negotiate_base_newer
[15:03:36] [PASSED] pf_negotiate_base_next
[15:03:36] [SKIPPED] pf_negotiate_base_older
[15:03:36] [PASSED] pf_negotiate_base_prev
[15:03:36] [PASSED] pf_negotiate_latest_match
[15:03:36] [PASSED] pf_negotiate_latest_newer
[15:03:36] [PASSED] pf_negotiate_latest_next
[15:03:36] [SKIPPED] pf_negotiate_latest_older
[15:03:36] [SKIPPED] pf_negotiate_latest_prev
[15:03:36] =================== [PASSED] pf_service ====================
[15:03:36] ================= xe_guc_g2g (2 subtests) ==================
[15:03:36] ============== xe_live_guc_g2g_kunit_default  ==============
[15:03:36] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[15:03:36] ============== xe_live_guc_g2g_kunit_allmem  ===============
[15:03:36] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[15:03:36] =================== [SKIPPED] xe_guc_g2g ===================
[15:03:36] =================== xe_mocs (2 subtests) ===================
[15:03:36] ================ xe_live_mocs_kernel_kunit  ================
[15:03:36] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[15:03:36] ================ xe_live_mocs_reset_kunit  =================
[15:03:36] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[15:03:36] ==================== [SKIPPED] xe_mocs =====================
[15:03:36] ================= xe_migrate (2 subtests) ==================
[15:03:36] ================= xe_migrate_sanity_kunit  =================
[15:03:36] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[15:03:36] ================== xe_validate_ccs_kunit  ==================
[15:03:36] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[15:03:36] =================== [SKIPPED] xe_migrate ===================
[15:03:36] ================== xe_dma_buf (1 subtest) ==================
[15:03:36] ==================== xe_dma_buf_kunit  =====================
[15:03:36] ================ [SKIPPED] xe_dma_buf_kunit ================
[15:03:36] =================== [SKIPPED] xe_dma_buf ===================
[15:03:36] ================= xe_bo_shrink (1 subtest) =================
[15:03:36] =================== xe_bo_shrink_kunit  ====================
[15:03:36] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[15:03:36] ================== [SKIPPED] xe_bo_shrink ==================
[15:03:36] ==================== xe_bo (2 subtests) ====================
[15:03:36] ================== xe_ccs_migrate_kunit  ===================
[15:03:36] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[15:03:36] ==================== xe_bo_evict_kunit  ====================
[15:03:36] =============== [SKIPPED] xe_bo_evict_kunit ================
[15:03:36] ===================== [SKIPPED] xe_bo ======================
[15:03:36] ==================== args (11 subtests) ====================
[15:03:36] [PASSED] count_args_test
[15:03:36] [PASSED] call_args_example
[15:03:36] [PASSED] call_args_test
[15:03:36] [PASSED] drop_first_arg_example
[15:03:36] [PASSED] drop_first_arg_test
[15:03:36] [PASSED] first_arg_example
[15:03:36] [PASSED] first_arg_test
[15:03:36] [PASSED] last_arg_example
[15:03:36] [PASSED] last_arg_test
[15:03:36] [PASSED] pick_arg_example
[15:03:36] [PASSED] sep_comma_example
[15:03:36] ====================== [PASSED] args =======================
[15:03:36] =================== xe_pci (3 subtests) ====================
[15:03:36] ==================== check_graphics_ip  ====================
[15:03:36] [PASSED] 12.00 Xe_LP
[15:03:36] [PASSED] 12.10 Xe_LP+
[15:03:36] [PASSED] 12.55 Xe_HPG
[15:03:36] [PASSED] 12.60 Xe_HPC
[15:03:36] [PASSED] 12.70 Xe_LPG
[15:03:36] [PASSED] 12.71 Xe_LPG
[15:03:36] [PASSED] 12.74 Xe_LPG+
[15:03:36] [PASSED] 20.01 Xe2_HPG
[15:03:36] [PASSED] 20.02 Xe2_HPG
[15:03:36] [PASSED] 20.04 Xe2_LPG
[15:03:36] [PASSED] 30.00 Xe3_LPG
[15:03:36] [PASSED] 30.01 Xe3_LPG
[15:03:36] [PASSED] 30.03 Xe3_LPG
[15:03:36] ================ [PASSED] check_graphics_ip ================
[15:03:36] ===================== check_media_ip  ======================
[15:03:36] [PASSED] 12.00 Xe_M
[15:03:36] [PASSED] 12.55 Xe_HPM
[15:03:36] [PASSED] 13.00 Xe_LPM+
[15:03:36] [PASSED] 13.01 Xe2_HPM
[15:03:36] [PASSED] 20.00 Xe2_LPM
[15:03:36] [PASSED] 30.00 Xe3_LPM
[15:03:36] [PASSED] 30.02 Xe3_LPM
[15:03:36] ================= [PASSED] check_media_ip ==================
[15:03:36] ================= check_platform_gt_count  =================
[15:03:36] [PASSED] 0x9A60 (TIGERLAKE)
[15:03:36] [PASSED] 0x9A68 (TIGERLAKE)
[15:03:36] [PASSED] 0x9A70 (TIGERLAKE)
[15:03:36] [PASSED] 0x9A40 (TIGERLAKE)
[15:03:36] [PASSED] 0x9A49 (TIGERLAKE)
[15:03:36] [PASSED] 0x9A59 (TIGERLAKE)
[15:03:36] [PASSED] 0x9A78 (TIGERLAKE)
[15:03:36] [PASSED] 0x9AC0 (TIGERLAKE)
[15:03:36] [PASSED] 0x9AC9 (TIGERLAKE)
[15:03:36] [PASSED] 0x9AD9 (TIGERLAKE)
[15:03:36] [PASSED] 0x9AF8 (TIGERLAKE)
[15:03:36] [PASSED] 0x4C80 (ROCKETLAKE)
[15:03:36] [PASSED] 0x4C8A (ROCKETLAKE)
[15:03:36] [PASSED] 0x4C8B (ROCKETLAKE)
[15:03:36] [PASSED] 0x4C8C (ROCKETLAKE)
[15:03:36] [PASSED] 0x4C90 (ROCKETLAKE)
[15:03:36] [PASSED] 0x4C9A (ROCKETLAKE)
[15:03:36] [PASSED] 0x4680 (ALDERLAKE_S)
[15:03:36] [PASSED] 0x4682 (ALDERLAKE_S)
[15:03:36] [PASSED] 0x4688 (ALDERLAKE_S)
[15:03:36] [PASSED] 0x468A (ALDERLAKE_S)
[15:03:36] [PASSED] 0x468B (ALDERLAKE_S)
[15:03:36] [PASSED] 0x4690 (ALDERLAKE_S)
[15:03:36] [PASSED] 0x4692 (ALDERLAKE_S)
[15:03:36] [PASSED] 0x4693 (ALDERLAKE_S)
[15:03:36] [PASSED] 0x46A0 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46A1 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46A2 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46A3 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46A6 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46A8 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46AA (ALDERLAKE_P)
[15:03:36] [PASSED] 0x462A (ALDERLAKE_P)
[15:03:36] [PASSED] 0x4626 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x4628 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46B0 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46B1 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46B2 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46B3 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46C0 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46C1 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46C2 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46C3 (ALDERLAKE_P)
[15:03:36] [PASSED] 0x46D0 (ALDERLAKE_N)
[15:03:36] [PASSED] 0x46D1 (ALDERLAKE_N)
[15:03:36] [PASSED] 0x46D2 (ALDERLAKE_N)
[15:03:36] [PASSED] 0x46D3 (ALDERLAKE_N)
[15:03:36] [PASSED] 0x46D4 (ALDERLAKE_N)
[15:03:36] [PASSED] 0xA721 (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA7A1 (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA7A9 (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA7AC (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA7AD (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA720 (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA7A0 (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA7A8 (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA7AA (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA7AB (ALDERLAKE_P)
[15:03:36] [PASSED] 0xA780 (ALDERLAKE_S)
[15:03:36] [PASSED] 0xA781 (ALDERLAKE_S)
[15:03:36] [PASSED] 0xA782 (ALDERLAKE_S)
[15:03:36] [PASSED] 0xA783 (ALDERLAKE_S)
[15:03:36] [PASSED] 0xA788 (ALDERLAKE_S)
[15:03:36] [PASSED] 0xA789 (ALDERLAKE_S)
[15:03:36] [PASSED] 0xA78A (ALDERLAKE_S)
[15:03:36] [PASSED] 0xA78B (ALDERLAKE_S)
[15:03:36] [PASSED] 0x4905 (DG1)
[15:03:36] [PASSED] 0x4906 (DG1)
[15:03:36] [PASSED] 0x4907 (DG1)
[15:03:36] [PASSED] 0x4908 (DG1)
[15:03:36] [PASSED] 0x4909 (DG1)
[15:03:36] [PASSED] 0x56C0 (DG2)
[15:03:36] [PASSED] 0x56C2 (DG2)
[15:03:36] [PASSED] 0x56C1 (DG2)
[15:03:36] [PASSED] 0x7D51 (METEORLAKE)
[15:03:36] [PASSED] 0x7DD1 (METEORLAKE)
[15:03:36] [PASSED] 0x7D41 (METEORLAKE)
[15:03:36] [PASSED] 0x7D67 (METEORLAKE)
[15:03:36] [PASSED] 0xB640 (METEORLAKE)
[15:03:36] [PASSED] 0x56A0 (DG2)
[15:03:36] [PASSED] 0x56A1 (DG2)
[15:03:36] [PASSED] 0x56A2 (DG2)
[15:03:36] [PASSED] 0x56BE (DG2)
[15:03:36] [PASSED] 0x56BF (DG2)
[15:03:36] [PASSED] 0x5690 (DG2)
[15:03:36] [PASSED] 0x5691 (DG2)
[15:03:36] [PASSED] 0x5692 (DG2)
[15:03:36] [PASSED] 0x56A5 (DG2)
[15:03:36] [PASSED] 0x56A6 (DG2)
[15:03:36] [PASSED] 0x56B0 (DG2)
[15:03:36] [PASSED] 0x56B1 (DG2)
[15:03:36] [PASSED] 0x56BA (DG2)
[15:03:36] [PASSED] 0x56BB (DG2)
[15:03:36] [PASSED] 0x56BC (DG2)
[15:03:36] [PASSED] 0x56BD (DG2)
[15:03:36] [PASSED] 0x5693 (DG2)
[15:03:36] [PASSED] 0x5694 (DG2)
[15:03:36] [PASSED] 0x5695 (DG2)
[15:03:36] [PASSED] 0x56A3 (DG2)
[15:03:36] [PASSED] 0x56A4 (DG2)
[15:03:36] [PASSED] 0x56B2 (DG2)
[15:03:36] [PASSED] 0x56B3 (DG2)
[15:03:36] [PASSED] 0x5696 (DG2)
[15:03:36] [PASSED] 0x5697 (DG2)
[15:03:36] [PASSED] 0xB69 (PVC)
[15:03:36] [PASSED] 0xB6E (PVC)
[15:03:36] [PASSED] 0xBD4 (PVC)
[15:03:36] [PASSED] 0xBD5 (PVC)
[15:03:36] [PASSED] 0xBD6 (PVC)
[15:03:36] [PASSED] 0xBD7 (PVC)
[15:03:36] [PASSED] 0xBD8 (PVC)
[15:03:36] [PASSED] 0xBD9 (PVC)
[15:03:36] [PASSED] 0xBDA (PVC)
[15:03:36] [PASSED] 0xBDB (PVC)
[15:03:36] [PASSED] 0xBE0 (PVC)
[15:03:36] [PASSED] 0xBE1 (PVC)
[15:03:36] [PASSED] 0xBE5 (PVC)
[15:03:36] [PASSED] 0x7D40 (METEORLAKE)
[15:03:36] [PASSED] 0x7D45 (METEORLAKE)
[15:03:36] [PASSED] 0x7D55 (METEORLAKE)
[15:03:36] [PASSED] 0x7D60 (METEORLAKE)
[15:03:36] [PASSED] 0x7DD5 (METEORLAKE)
[15:03:36] [PASSED] 0x6420 (LUNARLAKE)
[15:03:36] [PASSED] 0x64A0 (LUNARLAKE)
[15:03:36] [PASSED] 0x64B0 (LUNARLAKE)
[15:03:36] [PASSED] 0xE202 (BATTLEMAGE)
[15:03:36] [PASSED] 0xE209 (BATTLEMAGE)
[15:03:36] [PASSED] 0xE20B (BATTLEMAGE)
[15:03:36] [PASSED] 0xE20C (BATTLEMAGE)
[15:03:36] [PASSED] 0xE20D (BATTLEMAGE)
[15:03:36] [PASSED] 0xE210 (BATTLEMAGE)
[15:03:36] [PASSED] 0xE211 (BATTLEMAGE)
[15:03:36] [PASSED] 0xE212 (BATTLEMAGE)
[15:03:36] [PASSED] 0xE216 (BATTLEMAGE)
[15:03:36] [PASSED] 0xE220 (BATTLEMAGE)
[15:03:36] [PASSED] 0xE221 (BATTLEMAGE)
[15:03:36] [PASSED] 0xE222 (BATTLEMAGE)
[15:03:36] [PASSED] 0xE223 (BATTLEMAGE)
[15:03:36] [PASSED] 0xB080 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB081 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB082 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB083 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB084 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB085 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB086 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB087 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB08F (PANTHERLAKE)
[15:03:36] [PASSED] 0xB090 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB0A0 (PANTHERLAKE)
[15:03:36] [PASSED] 0xB0B0 (PANTHERLAKE)
[15:03:36] [PASSED] 0xFD80 (PANTHERLAKE)
[15:03:36] [PASSED] 0xFD81 (PANTHERLAKE)
[15:03:36] ============= [PASSED] check_platform_gt_count =============
[15:03:36] ===================== [PASSED] xe_pci ======================
[15:03:36] =================== xe_rtp (2 subtests) ====================
[15:03:36] =============== xe_rtp_process_to_sr_tests  ================
[15:03:36] [PASSED] coalesce-same-reg
[15:03:36] [PASSED] no-match-no-add
[15:03:36] [PASSED] match-or
[15:03:36] [PASSED] match-or-xfail
[15:03:36] [PASSED] no-match-no-add-multiple-rules
[15:03:36] [PASSED] two-regs-two-entries
[15:03:36] [PASSED] clr-one-set-other
[15:03:36] [PASSED] set-field
[15:03:36] [PASSED] conflict-duplicate
[15:03:36] [PASSED] conflict-not-disjoint
[15:03:36] [PASSED] conflict-reg-type
[15:03:36] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[15:03:36] ================== xe_rtp_process_tests  ===================
[15:03:36] [PASSED] active1
[15:03:36] [PASSED] active2
[15:03:36] [PASSED] active-inactive
[15:03:36] [PASSED] inactive-active
[15:03:36] [PASSED] inactive-1st_or_active-inactive
[15:03:36] [PASSED] inactive-2nd_or_active-inactive
[15:03:36] [PASSED] inactive-last_or_active-inactive
[15:03:36] [PASSED] inactive-no_or_active-inactive
[15:03:36] ============== [PASSED] xe_rtp_process_tests ===============
[15:03:36] ===================== [PASSED] xe_rtp ======================
[15:03:36] ==================== xe_wa (1 subtest) =====================
[15:03:36] ======================== xe_wa_gt  =========================
[15:03:36] [PASSED] TIGERLAKE B0
[15:03:36] [PASSED] DG1 A0
[15:03:36] [PASSED] DG1 B0
[15:03:36] [PASSED] ALDERLAKE_S A0
[15:03:36] [PASSED] ALDERLAKE_S B0
stty: 'standard input': Inappropriate ioctl for device
[15:03:36] [PASSED] ALDERLAKE_S C0
[15:03:36] [PASSED] ALDERLAKE_S D0
[15:03:36] [PASSED] ALDERLAKE_P A0
[15:03:36] [PASSED] ALDERLAKE_P B0
[15:03:36] [PASSED] ALDERLAKE_P C0
[15:03:36] [PASSED] ALDERLAKE_S RPLS D0
[15:03:36] [PASSED] ALDERLAKE_P RPLU E0
[15:03:36] [PASSED] DG2 G10 C0
[15:03:36] [PASSED] DG2 G11 B1
[15:03:36] [PASSED] DG2 G12 A1
[15:03:36] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[15:03:36] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[15:03:36] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[15:03:36] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[15:03:36] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[15:03:36] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[15:03:36] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[15:03:36] ==================== [PASSED] xe_wa_gt =====================
[15:03:36] ====================== [PASSED] xe_wa ======================
[15:03:36] ============================================================
[15:03:36] Testing complete. Ran 306 tests: passed: 288, skipped: 18
[15:03:36] Elapsed time: 33.608s total, 4.198s configuring, 29.043s building, 0.328s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[15:03:36] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[15:03:38] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[15:04:02] Starting KUnit Kernel (1/1)...
[15:04:02] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[15:04:02] ============ drm_test_pick_cmdline (2 subtests) ============
[15:04:02] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[15:04:02] =============== drm_test_pick_cmdline_named  ===============
[15:04:02] [PASSED] NTSC
[15:04:02] [PASSED] NTSC-J
[15:04:02] [PASSED] PAL
[15:04:02] [PASSED] PAL-M
[15:04:02] =========== [PASSED] drm_test_pick_cmdline_named ===========
[15:04:02] ============== [PASSED] drm_test_pick_cmdline ==============
[15:04:02] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[15:04:02] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[15:04:02] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[15:04:02] =========== drm_validate_clone_mode (2 subtests) ===========
[15:04:02] ============== drm_test_check_in_clone_mode  ===============
[15:04:02] [PASSED] in_clone_mode
[15:04:02] [PASSED] not_in_clone_mode
[15:04:02] ========== [PASSED] drm_test_check_in_clone_mode ===========
[15:04:02] =============== drm_test_check_valid_clones  ===============
[15:04:02] [PASSED] not_in_clone_mode
[15:04:02] [PASSED] valid_clone
[15:04:02] [PASSED] invalid_clone
[15:04:02] =========== [PASSED] drm_test_check_valid_clones ===========
[15:04:02] ============= [PASSED] drm_validate_clone_mode =============
[15:04:02] ============= drm_validate_modeset (1 subtest) =============
[15:04:02] [PASSED] drm_test_check_connector_changed_modeset
[15:04:02] ============== [PASSED] drm_validate_modeset ===============
[15:04:02] ====== drm_test_bridge_get_current_state (2 subtests) ======
[15:04:02] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[15:04:02] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[15:04:02] ======== [PASSED] drm_test_bridge_get_current_state ========
[15:04:02] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[15:04:02] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[15:04:02] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[15:04:02] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[15:04:02] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[15:04:02] ============== drm_bridge_alloc (2 subtests) ===============
[15:04:02] [PASSED] drm_test_drm_bridge_alloc_basic
[15:04:02] [PASSED] drm_test_drm_bridge_alloc_get_put
[15:04:02] ================ [PASSED] drm_bridge_alloc =================
[15:04:02] ================== drm_buddy (7 subtests) ==================
[15:04:02] [PASSED] drm_test_buddy_alloc_limit
[15:04:02] [PASSED] drm_test_buddy_alloc_optimistic
[15:04:02] [PASSED] drm_test_buddy_alloc_pessimistic
[15:04:02] [PASSED] drm_test_buddy_alloc_pathological
[15:04:02] [PASSED] drm_test_buddy_alloc_contiguous
[15:04:02] [PASSED] drm_test_buddy_alloc_clear
[15:04:02] [PASSED] drm_test_buddy_alloc_range_bias
[15:04:02] ==================== [PASSED] drm_buddy ====================
[15:04:02] ============= drm_cmdline_parser (40 subtests) =============
[15:04:02] [PASSED] drm_test_cmdline_force_d_only
[15:04:02] [PASSED] drm_test_cmdline_force_D_only_dvi
[15:04:02] [PASSED] drm_test_cmdline_force_D_only_hdmi
[15:04:02] [PASSED] drm_test_cmdline_force_D_only_not_digital
[15:04:02] [PASSED] drm_test_cmdline_force_e_only
[15:04:02] [PASSED] drm_test_cmdline_res
[15:04:02] [PASSED] drm_test_cmdline_res_vesa
[15:04:02] [PASSED] drm_test_cmdline_res_vesa_rblank
[15:04:02] [PASSED] drm_test_cmdline_res_rblank
[15:04:02] [PASSED] drm_test_cmdline_res_bpp
[15:04:02] [PASSED] drm_test_cmdline_res_refresh
[15:04:02] [PASSED] drm_test_cmdline_res_bpp_refresh
[15:04:02] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[15:04:02] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[15:04:02] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[15:04:02] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[15:04:02] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[15:04:02] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[15:04:02] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[15:04:02] [PASSED] drm_test_cmdline_res_margins_force_on
[15:04:02] [PASSED] drm_test_cmdline_res_vesa_margins
[15:04:02] [PASSED] drm_test_cmdline_name
[15:04:02] [PASSED] drm_test_cmdline_name_bpp
[15:04:02] [PASSED] drm_test_cmdline_name_option
[15:04:02] [PASSED] drm_test_cmdline_name_bpp_option
[15:04:02] [PASSED] drm_test_cmdline_rotate_0
[15:04:02] [PASSED] drm_test_cmdline_rotate_90
[15:04:02] [PASSED] drm_test_cmdline_rotate_180
[15:04:02] [PASSED] drm_test_cmdline_rotate_270
[15:04:02] [PASSED] drm_test_cmdline_hmirror
[15:04:02] [PASSED] drm_test_cmdline_vmirror
[15:04:02] [PASSED] drm_test_cmdline_margin_options
[15:04:02] [PASSED] drm_test_cmdline_multiple_options
[15:04:02] [PASSED] drm_test_cmdline_bpp_extra_and_option
[15:04:02] [PASSED] drm_test_cmdline_extra_and_option
[15:04:02] [PASSED] drm_test_cmdline_freestanding_options
[15:04:02] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[15:04:02] [PASSED] drm_test_cmdline_panel_orientation
[15:04:02] ================ drm_test_cmdline_invalid  =================
[15:04:02] [PASSED] margin_only
[15:04:02] [PASSED] interlace_only
[15:04:02] [PASSED] res_missing_x
[15:04:02] [PASSED] res_missing_y
[15:04:02] [PASSED] res_bad_y
[15:04:02] [PASSED] res_missing_y_bpp
[15:04:02] [PASSED] res_bad_bpp
[15:04:02] [PASSED] res_bad_refresh
[15:04:02] [PASSED] res_bpp_refresh_force_on_off
[15:04:02] [PASSED] res_invalid_mode
[15:04:02] [PASSED] res_bpp_wrong_place_mode
[15:04:02] [PASSED] name_bpp_refresh
[15:04:02] [PASSED] name_refresh
[15:04:02] [PASSED] name_refresh_wrong_mode
[15:04:02] [PASSED] name_refresh_invalid_mode
[15:04:02] [PASSED] rotate_multiple
[15:04:02] [PASSED] rotate_invalid_val
[15:04:02] [PASSED] rotate_truncated
[15:04:02] [PASSED] invalid_option
[15:04:02] [PASSED] invalid_tv_option
[15:04:02] [PASSED] truncated_tv_option
[15:04:02] ============ [PASSED] drm_test_cmdline_invalid =============
[15:04:02] =============== drm_test_cmdline_tv_options  ===============
[15:04:02] [PASSED] NTSC
[15:04:02] [PASSED] NTSC_443
[15:04:02] [PASSED] NTSC_J
[15:04:02] [PASSED] PAL
[15:04:02] [PASSED] PAL_M
[15:04:02] [PASSED] PAL_N
[15:04:02] [PASSED] SECAM
[15:04:02] [PASSED] MONO_525
[15:04:02] [PASSED] MONO_625
[15:04:02] =========== [PASSED] drm_test_cmdline_tv_options ===========
[15:04:02] =============== [PASSED] drm_cmdline_parser ================
[15:04:02] ========== drmm_connector_hdmi_init (20 subtests) ==========
[15:04:02] [PASSED] drm_test_connector_hdmi_init_valid
[15:04:02] [PASSED] drm_test_connector_hdmi_init_bpc_8
[15:04:02] [PASSED] drm_test_connector_hdmi_init_bpc_10
[15:04:02] [PASSED] drm_test_connector_hdmi_init_bpc_12
[15:04:02] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[15:04:02] [PASSED] drm_test_connector_hdmi_init_bpc_null
[15:04:02] [PASSED] drm_test_connector_hdmi_init_formats_empty
[15:04:02] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[15:04:02] === drm_test_connector_hdmi_init_formats_yuv420_allowed  ===
[15:04:02] [PASSED] supported_formats=0x9 yuv420_allowed=1
[15:04:02] [PASSED] supported_formats=0x9 yuv420_allowed=0
[15:04:02] [PASSED] supported_formats=0x3 yuv420_allowed=1
[15:04:02] [PASSED] supported_formats=0x3 yuv420_allowed=0
[15:04:02] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[15:04:02] [PASSED] drm_test_connector_hdmi_init_null_ddc
[15:04:02] [PASSED] drm_test_connector_hdmi_init_null_product
[15:04:02] [PASSED] drm_test_connector_hdmi_init_null_vendor
[15:04:02] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[15:04:02] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[15:04:02] [PASSED] drm_test_connector_hdmi_init_product_valid
[15:04:02] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[15:04:02] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[15:04:02] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[15:04:02] ========= drm_test_connector_hdmi_init_type_valid  =========
[15:04:02] [PASSED] HDMI-A
[15:04:02] [PASSED] HDMI-B
[15:04:02] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[15:04:02] ======== drm_test_connector_hdmi_init_type_invalid  ========
[15:04:02] [PASSED] Unknown
[15:04:02] [PASSED] VGA
[15:04:02] [PASSED] DVI-I
[15:04:02] [PASSED] DVI-D
[15:04:02] [PASSED] DVI-A
[15:04:02] [PASSED] Composite
[15:04:02] [PASSED] SVIDEO
[15:04:02] [PASSED] LVDS
[15:04:02] [PASSED] Component
[15:04:02] [PASSED] DIN
[15:04:02] [PASSED] DP
[15:04:02] [PASSED] TV
[15:04:02] [PASSED] eDP
[15:04:02] [PASSED] Virtual
[15:04:02] [PASSED] DSI
[15:04:02] [PASSED] DPI
[15:04:02] [PASSED] Writeback
[15:04:02] [PASSED] SPI
[15:04:02] [PASSED] USB
[15:04:02] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[15:04:02] ============ [PASSED] drmm_connector_hdmi_init =============
[15:04:02] ============= drmm_connector_init (3 subtests) =============
[15:04:02] [PASSED] drm_test_drmm_connector_init
[15:04:02] [PASSED] drm_test_drmm_connector_init_null_ddc
[15:04:02] ========= drm_test_drmm_connector_init_type_valid  =========
[15:04:02] [PASSED] Unknown
[15:04:02] [PASSED] VGA
[15:04:02] [PASSED] DVI-I
[15:04:02] [PASSED] DVI-D
[15:04:02] [PASSED] DVI-A
[15:04:02] [PASSED] Composite
[15:04:02] [PASSED] SVIDEO
[15:04:02] [PASSED] LVDS
[15:04:02] [PASSED] Component
[15:04:02] [PASSED] DIN
[15:04:02] [PASSED] DP
[15:04:02] [PASSED] HDMI-A
[15:04:02] [PASSED] HDMI-B
[15:04:02] [PASSED] TV
[15:04:02] [PASSED] eDP
[15:04:02] [PASSED] Virtual
[15:04:02] [PASSED] DSI
[15:04:02] [PASSED] DPI
[15:04:02] [PASSED] Writeback
[15:04:02] [PASSED] SPI
[15:04:02] [PASSED] USB
[15:04:02] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[15:04:02] =============== [PASSED] drmm_connector_init ===============
[15:04:02] ========= drm_connector_dynamic_init (6 subtests) ==========
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_init
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_init_properties
[15:04:02] ===== drm_test_drm_connector_dynamic_init_type_valid  ======
[15:04:02] [PASSED] Unknown
[15:04:02] [PASSED] VGA
[15:04:02] [PASSED] DVI-I
[15:04:02] [PASSED] DVI-D
[15:04:02] [PASSED] DVI-A
[15:04:02] [PASSED] Composite
[15:04:02] [PASSED] SVIDEO
[15:04:02] [PASSED] LVDS
[15:04:02] [PASSED] Component
[15:04:02] [PASSED] DIN
[15:04:02] [PASSED] DP
[15:04:02] [PASSED] HDMI-A
[15:04:02] [PASSED] HDMI-B
[15:04:02] [PASSED] TV
[15:04:02] [PASSED] eDP
[15:04:02] [PASSED] Virtual
[15:04:02] [PASSED] DSI
[15:04:02] [PASSED] DPI
[15:04:02] [PASSED] Writeback
[15:04:02] [PASSED] SPI
[15:04:02] [PASSED] USB
[15:04:02] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[15:04:02] ======== drm_test_drm_connector_dynamic_init_name  =========
[15:04:02] [PASSED] Unknown
[15:04:02] [PASSED] VGA
[15:04:02] [PASSED] DVI-I
[15:04:02] [PASSED] DVI-D
[15:04:02] [PASSED] DVI-A
[15:04:02] [PASSED] Composite
[15:04:02] [PASSED] SVIDEO
[15:04:02] [PASSED] LVDS
[15:04:02] [PASSED] Component
[15:04:02] [PASSED] DIN
[15:04:02] [PASSED] DP
[15:04:02] [PASSED] HDMI-A
[15:04:02] [PASSED] HDMI-B
[15:04:02] [PASSED] TV
[15:04:02] [PASSED] eDP
[15:04:02] [PASSED] Virtual
[15:04:02] [PASSED] DSI
[15:04:02] [PASSED] DPI
[15:04:02] [PASSED] Writeback
[15:04:02] [PASSED] SPI
[15:04:02] [PASSED] USB
[15:04:02] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[15:04:02] =========== [PASSED] drm_connector_dynamic_init ============
[15:04:02] ==== drm_connector_dynamic_register_early (4 subtests) =====
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[15:04:02] ====== [PASSED] drm_connector_dynamic_register_early =======
[15:04:02] ======= drm_connector_dynamic_register (7 subtests) ========
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[15:04:02] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[15:04:02] ========= [PASSED] drm_connector_dynamic_register ==========
[15:04:02] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[15:04:02] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[15:04:02] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[15:04:02] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[15:04:02] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[15:04:02] ========== drm_test_get_tv_mode_from_name_valid  ===========
[15:04:02] [PASSED] NTSC
[15:04:02] [PASSED] NTSC-443
[15:04:02] [PASSED] NTSC-J
[15:04:02] [PASSED] PAL
[15:04:02] [PASSED] PAL-M
[15:04:02] [PASSED] PAL-N
[15:04:02] [PASSED] SECAM
[15:04:02] [PASSED] Mono
[15:04:02] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[15:04:02] [PASSED] drm_test_get_tv_mode_from_name_truncated
[15:04:02] ============ [PASSED] drm_get_tv_mode_from_name ============
[15:04:02] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[15:04:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[15:04:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[15:04:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[15:04:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[15:04:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[15:04:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[15:04:02] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[15:04:02] [PASSED] VIC 96
[15:04:02] [PASSED] VIC 97
[15:04:02] [PASSED] VIC 101
[15:04:02] [PASSED] VIC 102
[15:04:02] [PASSED] VIC 106
[15:04:02] [PASSED] VIC 107
[15:04:02] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[15:04:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[15:04:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[15:04:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[15:04:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[15:04:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[15:04:02] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[15:04:02] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[15:04:02] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[15:04:02] [PASSED] Automatic
[15:04:02] [PASSED] Full
[15:04:02] [PASSED] Limited 16:235
[15:04:02] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[15:04:02] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[15:04:02] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[15:04:02] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[15:04:02] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[15:04:02] [PASSED] RGB
[15:04:02] [PASSED] YUV 4:2:0
[15:04:02] [PASSED] YUV 4:2:2
[15:04:02] [PASSED] YUV 4:4:4
[15:04:02] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[15:04:02] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[15:04:02] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[15:04:02] ============= drm_damage_helper (21 subtests) ==============
[15:04:02] [PASSED] drm_test_damage_iter_no_damage
[15:04:02] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[15:04:02] [PASSED] drm_test_damage_iter_no_damage_src_moved
[15:04:02] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[15:04:02] [PASSED] drm_test_damage_iter_no_damage_not_visible
[15:04:02] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[15:04:02] [PASSED] drm_test_damage_iter_no_damage_no_fb
[15:04:02] [PASSED] drm_test_damage_iter_simple_damage
[15:04:02] [PASSED] drm_test_damage_iter_single_damage
[15:04:02] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[15:04:02] [PASSED] drm_test_damage_iter_single_damage_outside_src
[15:04:02] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[15:04:02] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[15:04:02] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[15:04:02] [PASSED] drm_test_damage_iter_single_damage_src_moved
[15:04:02] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[15:04:02] [PASSED] drm_test_damage_iter_damage
[15:04:02] [PASSED] drm_test_damage_iter_damage_one_intersect
[15:04:02] [PASSED] drm_test_damage_iter_damage_one_outside
[15:04:02] [PASSED] drm_test_damage_iter_damage_src_moved
[15:04:02] [PASSED] drm_test_damage_iter_damage_not_visible
[15:04:02] ================ [PASSED] drm_damage_helper ================
[15:04:02] ============== drm_dp_mst_helper (3 subtests) ==============
[15:04:02] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[15:04:02] [PASSED] Clock 154000 BPP 30 DSC disabled
[15:04:02] [PASSED] Clock 234000 BPP 30 DSC disabled
[15:04:02] [PASSED] Clock 297000 BPP 24 DSC disabled
[15:04:02] [PASSED] Clock 332880 BPP 24 DSC enabled
[15:04:02] [PASSED] Clock 324540 BPP 24 DSC enabled
[15:04:02] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[15:04:02] ============== drm_test_dp_mst_calc_pbn_div  ===============
[15:04:02] [PASSED] Link rate 2000000 lane count 4
[15:04:02] [PASSED] Link rate 2000000 lane count 2
[15:04:02] [PASSED] Link rate 2000000 lane count 1
[15:04:02] [PASSED] Link rate 1350000 lane count 4
[15:04:02] [PASSED] Link rate 1350000 lane count 2
[15:04:02] [PASSED] Link rate 1350000 lane count 1
[15:04:02] [PASSED] Link rate 1000000 lane count 4
[15:04:02] [PASSED] Link rate 1000000 lane count 2
[15:04:02] [PASSED] Link rate 1000000 lane count 1
[15:04:02] [PASSED] Link rate 810000 lane count 4
[15:04:02] [PASSED] Link rate 810000 lane count 2
[15:04:02] [PASSED] Link rate 810000 lane count 1
[15:04:02] [PASSED] Link rate 540000 lane count 4
[15:04:02] [PASSED] Link rate 540000 lane count 2
[15:04:02] [PASSED] Link rate 540000 lane count 1
[15:04:02] [PASSED] Link rate 270000 lane count 4
[15:04:02] [PASSED] Link rate 270000 lane count 2
[15:04:02] [PASSED] Link rate 270000 lane count 1
[15:04:02] [PASSED] Link rate 162000 lane count 4
[15:04:02] [PASSED] Link rate 162000 lane count 2
[15:04:02] [PASSED] Link rate 162000 lane count 1
[15:04:02] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[15:04:02] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[15:04:02] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[15:04:02] [PASSED] DP_POWER_UP_PHY with port number
[15:04:02] [PASSED] DP_POWER_DOWN_PHY with port number
[15:04:02] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[15:04:02] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[15:04:02] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[15:04:02] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[15:04:02] [PASSED] DP_QUERY_PAYLOAD with port number
[15:04:02] [PASSED] DP_QUERY_PAYLOAD with VCPI
[15:04:02] [PASSED] DP_REMOTE_DPCD_READ with port number
[15:04:02] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[15:04:02] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[15:04:02] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[15:04:02] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[15:04:02] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[15:04:02] [PASSED] DP_REMOTE_I2C_READ with port number
[15:04:02] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[15:04:02] [PASSED] DP_REMOTE_I2C_READ with transactions array
[15:04:02] [PASSED] DP_REMOTE_I2C_WRITE with port number
[15:04:02] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[15:04:02] [PASSED] DP_REMOTE_I2C_WRITE with data array
[15:04:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[15:04:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[15:04:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[15:04:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[15:04:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[15:04:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[15:04:02] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[15:04:02] ================ [PASSED] drm_dp_mst_helper ================
[15:04:02] ================== drm_exec (7 subtests) ===================
[15:04:02] [PASSED] sanitycheck
[15:04:02] [PASSED] test_lock
[15:04:02] [PASSED] test_lock_unlock
[15:04:02] [PASSED] test_duplicates
[15:04:02] [PASSED] test_prepare
[15:04:02] [PASSED] test_prepare_array
[15:04:02] [PASSED] test_multiple_loops
[15:04:02] ==================== [PASSED] drm_exec =====================
[15:04:02] =========== drm_format_helper_test (17 subtests) ===========
[15:04:02] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[15:04:02] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[15:04:02] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[15:04:02] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[15:04:02] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[15:04:02] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[15:04:02] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[15:04:02] ============= drm_test_fb_xrgb8888_to_bgr888  ==============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[15:04:02] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[15:04:02] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[15:04:02] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[15:04:02] ============== drm_test_fb_xrgb8888_to_mono  ===============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[15:04:02] ==================== drm_test_fb_swab  =====================
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ================ [PASSED] drm_test_fb_swab =================
[15:04:02] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[15:04:02] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[15:04:02] [PASSED] single_pixel_source_buffer
[15:04:02] [PASSED] single_pixel_clip_rectangle
[15:04:02] [PASSED] well_known_colors
[15:04:02] [PASSED] destination_pitch
[15:04:02] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[15:04:02] ================= drm_test_fb_clip_offset  =================
[15:04:02] [PASSED] pass through
[15:04:02] [PASSED] horizontal offset
[15:04:02] [PASSED] vertical offset
[15:04:02] [PASSED] horizontal and vertical offset
[15:04:02] [PASSED] horizontal offset (custom pitch)
[15:04:02] [PASSED] vertical offset (custom pitch)
[15:04:02] [PASSED] horizontal and vertical offset (custom pitch)
[15:04:02] ============= [PASSED] drm_test_fb_clip_offset =============
[15:04:02] =================== drm_test_fb_memcpy  ====================
[15:04:02] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[15:04:02] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[15:04:02] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[15:04:02] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[15:04:02] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[15:04:02] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[15:04:02] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[15:04:02] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[15:04:02] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[15:04:02] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[15:04:02] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[15:04:02] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[15:04:02] =============== [PASSED] drm_test_fb_memcpy ================
[15:04:02] ============= [PASSED] drm_format_helper_test ==============
[15:04:02] ================= drm_format (18 subtests) =================
[15:04:02] [PASSED] drm_test_format_block_width_invalid
[15:04:02] [PASSED] drm_test_format_block_width_one_plane
[15:04:02] [PASSED] drm_test_format_block_width_two_plane
[15:04:02] [PASSED] drm_test_format_block_width_three_plane
[15:04:02] [PASSED] drm_test_format_block_width_tiled
[15:04:02] [PASSED] drm_test_format_block_height_invalid
[15:04:02] [PASSED] drm_test_format_block_height_one_plane
[15:04:02] [PASSED] drm_test_format_block_height_two_plane
[15:04:02] [PASSED] drm_test_format_block_height_three_plane
[15:04:02] [PASSED] drm_test_format_block_height_tiled
[15:04:02] [PASSED] drm_test_format_min_pitch_invalid
[15:04:02] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[15:04:02] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[15:04:02] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[15:04:02] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[15:04:02] [PASSED] drm_test_format_min_pitch_two_plane
[15:04:02] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[15:04:02] [PASSED] drm_test_format_min_pitch_tiled
[15:04:02] =================== [PASSED] drm_format ====================
[15:04:02] ============== drm_framebuffer (10 subtests) ===============
[15:04:02] ========== drm_test_framebuffer_check_src_coords  ==========
[15:04:02] [PASSED] Success: source fits into fb
[15:04:02] [PASSED] Fail: overflowing fb with x-axis coordinate
[15:04:02] [PASSED] Fail: overflowing fb with y-axis coordinate
[15:04:02] [PASSED] Fail: overflowing fb with source width
[15:04:02] [PASSED] Fail: overflowing fb with source height
[15:04:02] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[15:04:02] [PASSED] drm_test_framebuffer_cleanup
[15:04:02] =============== drm_test_framebuffer_create  ===============
[15:04:02] [PASSED] ABGR8888 normal sizes
[15:04:02] [PASSED] ABGR8888 max sizes
[15:04:02] [PASSED] ABGR8888 pitch greater than min required
[15:04:02] [PASSED] ABGR8888 pitch less than min required
[15:04:02] [PASSED] ABGR8888 Invalid width
[15:04:02] [PASSED] ABGR8888 Invalid buffer handle
[15:04:02] [PASSED] No pixel format
[15:04:02] [PASSED] ABGR8888 Width 0
[15:04:02] [PASSED] ABGR8888 Height 0
[15:04:02] [PASSED] ABGR8888 Out of bound height * pitch combination
[15:04:02] [PASSED] ABGR8888 Large buffer offset
[15:04:02] [PASSED] ABGR8888 Buffer offset for inexistent plane
[15:04:02] [PASSED] ABGR8888 Invalid flag
[15:04:02] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[15:04:02] [PASSED] ABGR8888 Valid buffer modifier
[15:04:02] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[15:04:02] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[15:04:02] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[15:04:02] [PASSED] NV12 Normal sizes
[15:04:02] [PASSED] NV12 Max sizes
[15:04:02] [PASSED] NV12 Invalid pitch
[15:04:02] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[15:04:02] [PASSED] NV12 different  modifier per-plane
[15:04:02] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[15:04:02] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[15:04:02] [PASSED] NV12 Modifier for inexistent plane
[15:04:02] [PASSED] NV12 Handle for inexistent plane
[15:04:02] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[15:04:02] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[15:04:02] [PASSED] YVU420 Normal sizes
[15:04:02] [PASSED] YVU420 Max sizes
[15:04:02] [PASSED] YVU420 Invalid pitch
[15:04:02] [PASSED] YVU420 Different pitches
[15:04:02] [PASSED] YVU420 Different buffer offsets/pitches
[15:04:02] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[15:04:02] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[15:04:02] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[15:04:02] [PASSED] YVU420 Valid modifier
[15:04:02] [PASSED] YVU420 Different modifiers per plane
[15:04:02] [PASSED] YVU420 Modifier for inexistent plane
[15:04:02] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[15:04:02] [PASSED] X0L2 Normal sizes
[15:04:02] [PASSED] X0L2 Max sizes
[15:04:02] [PASSED] X0L2 Invalid pitch
[15:04:02] [PASSED] X0L2 Pitch greater than minimum required
[15:04:02] [PASSED] X0L2 Handle for inexistent plane
[15:04:02] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[15:04:02] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[15:04:02] [PASSED] X0L2 Valid modifier
[15:04:02] [PASSED] X0L2 Modifier for inexistent plane
[15:04:02] =========== [PASSED] drm_test_framebuffer_create ===========
[15:04:02] [PASSED] drm_test_framebuffer_free
[15:04:02] [PASSED] drm_test_framebuffer_init
[15:04:02] [PASSED] drm_test_framebuffer_init_bad_format
[15:04:02] [PASSED] drm_test_framebuffer_init_dev_mismatch
[15:04:02] [PASSED] drm_test_framebuffer_lookup
[15:04:02] [PASSED] drm_test_framebuffer_lookup_inexistent
[15:04:02] [PASSED] drm_test_framebuffer_modifiers_not_supported
[15:04:02] ================= [PASSED] drm_framebuffer =================
[15:04:02] ================ drm_gem_shmem (8 subtests) ================
[15:04:02] [PASSED] drm_gem_shmem_test_obj_create
[15:04:02] [PASSED] drm_gem_shmem_test_obj_create_private
[15:04:02] [PASSED] drm_gem_shmem_test_pin_pages
[15:04:02] [PASSED] drm_gem_shmem_test_vmap
[15:04:02] [PASSED] drm_gem_shmem_test_get_pages_sgt
[15:04:02] [PASSED] drm_gem_shmem_test_get_sg_table
[15:04:02] [PASSED] drm_gem_shmem_test_madvise
[15:04:02] [PASSED] drm_gem_shmem_test_purge
[15:04:02] ================== [PASSED] drm_gem_shmem ==================
[15:04:02] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[15:04:02] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[15:04:02] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[15:04:02] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[15:04:02] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[15:04:02] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[15:04:02] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[15:04:02] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420  =======
[15:04:02] [PASSED] Automatic
[15:04:02] [PASSED] Full
[15:04:02] [PASSED] Limited 16:235
[15:04:02] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[15:04:02] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[15:04:02] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[15:04:02] [PASSED] drm_test_check_disable_connector
[15:04:02] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[15:04:02] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[15:04:02] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[15:04:02] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[15:04:02] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[15:04:02] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[15:04:02] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[15:04:02] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[15:04:02] [PASSED] drm_test_check_output_bpc_dvi
[15:04:02] [PASSED] drm_test_check_output_bpc_format_vic_1
[15:04:02] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[15:04:02] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[15:04:02] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[15:04:02] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[15:04:02] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[15:04:02] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[15:04:02] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[15:04:02] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[15:04:02] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[15:04:02] [PASSED] drm_test_check_broadcast_rgb_value
[15:04:02] [PASSED] drm_test_check_bpc_8_value
[15:04:02] [PASSED] drm_test_check_bpc_10_value
[15:04:02] [PASSED] drm_test_check_bpc_12_value
[15:04:02] [PASSED] drm_test_check_format_value
[15:04:02] [PASSED] drm_test_check_tmds_char_value
[15:04:02] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[15:04:02] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[15:04:02] [PASSED] drm_test_check_mode_valid
[15:04:02] [PASSED] drm_test_check_mode_valid_reject
[15:04:02] [PASSED] drm_test_check_mode_valid_reject_rate
[15:04:02] [PASSED] drm_test_check_mode_valid_reject_max_clock
[15:04:02] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[15:04:02] ================= drm_managed (2 subtests) =================
[15:04:02] [PASSED] drm_test_managed_release_action
[15:04:02] [PASSED] drm_test_managed_run_action
[15:04:02] =================== [PASSED] drm_managed ===================
[15:04:02] =================== drm_mm (6 subtests) ====================
[15:04:02] [PASSED] drm_test_mm_init
[15:04:02] [PASSED] drm_test_mm_debug
[15:04:02] [PASSED] drm_test_mm_align32
[15:04:02] [PASSED] drm_test_mm_align64
[15:04:02] [PASSED] drm_test_mm_lowest
[15:04:02] [PASSED] drm_test_mm_highest
[15:04:02] ===================== [PASSED] drm_mm ======================
[15:04:02] ============= drm_modes_analog_tv (5 subtests) =============
[15:04:02] [PASSED] drm_test_modes_analog_tv_mono_576i
[15:04:02] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[15:04:02] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[15:04:02] [PASSED] drm_test_modes_analog_tv_pal_576i
[15:04:02] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[15:04:02] =============== [PASSED] drm_modes_analog_tv ===============
[15:04:02] ============== drm_plane_helper (2 subtests) ===============
[15:04:02] =============== drm_test_check_plane_state  ================
[15:04:02] [PASSED] clipping_simple
[15:04:02] [PASSED] clipping_rotate_reflect
[15:04:02] [PASSED] positioning_simple
[15:04:02] [PASSED] upscaling
[15:04:02] [PASSED] downscaling
[15:04:02] [PASSED] rounding1
[15:04:02] [PASSED] rounding2
[15:04:02] [PASSED] rounding3
[15:04:02] [PASSED] rounding4
[15:04:02] =========== [PASSED] drm_test_check_plane_state ============
[15:04:02] =========== drm_test_check_invalid_plane_state  ============
[15:04:02] [PASSED] positioning_invalid
[15:04:02] [PASSED] upscaling_invalid
[15:04:02] [PASSED] downscaling_invalid
[15:04:02] ======= [PASSED] drm_test_check_invalid_plane_state ========
[15:04:02] ================ [PASSED] drm_plane_helper =================
[15:04:02] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[15:04:02] ====== drm_test_connector_helper_tv_get_modes_check  =======
[15:04:02] [PASSED] None
[15:04:02] [PASSED] PAL
[15:04:02] [PASSED] NTSC
[15:04:02] [PASSED] Both, NTSC Default
[15:04:02] [PASSED] Both, PAL Default
[15:04:02] [PASSED] Both, NTSC Default, with PAL on command-line
[15:04:02] [PASSED] Both, PAL Default, with NTSC on command-line
[15:04:02] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[15:04:02] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[15:04:02] ================== drm_rect (9 subtests) ===================
[15:04:02] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[15:04:02] [PASSED] drm_test_rect_clip_scaled_not_clipped
[15:04:02] [PASSED] drm_test_rect_clip_scaled_clipped
[15:04:02] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[15:04:02] ================= drm_test_rect_intersect  =================
[15:04:02] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[15:04:02] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[15:04:02] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[15:04:02] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[15:04:02] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[15:04:02] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[15:04:02] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[15:04:02] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[15:04:02] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[15:04:02] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[15:04:02] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[15:04:02] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[15:04:02] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[15:04:02] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[15:04:02] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[15:04:02] ============= [PASSED] drm_test_rect_intersect =============
[15:04:02] ================ drm_test_rect_calc_hscale  ================
[15:04:02] [PASSED] normal use
[15:04:02] [PASSED] out of max range
[15:04:02] [PASSED] out of min range
[15:04:02] [PASSED] zero dst
[15:04:02] [PASSED] negative src
[15:04:02] [PASSED] negative dst
[15:04:02] ============ [PASSED] drm_test_rect_calc_hscale ============
[15:04:02] ================ drm_test_rect_calc_vscale  ================
[15:04:02] [PASSED] normal use
[15:04:02] [PASSED] out of max range
[15:04:02] [PASSED] out of min range
[15:04:02] [PASSED] zero dst
[15:04:02] [PASSED] negative src
stty: 'standard input': Inappropriate ioctl for device
[15:04:02] [PASSED] negative dst
[15:04:02] ============ [PASSED] drm_test_rect_calc_vscale ============
[15:04:02] ================== drm_test_rect_rotate  ===================
[15:04:02] [PASSED] reflect-x
[15:04:02] [PASSED] reflect-y
[15:04:02] [PASSED] rotate-0
[15:04:02] [PASSED] rotate-90
[15:04:02] [PASSED] rotate-180
[15:04:02] [PASSED] rotate-270
[15:04:02] ============== [PASSED] drm_test_rect_rotate ===============
[15:04:02] ================ drm_test_rect_rotate_inv  =================
[15:04:02] [PASSED] reflect-x
[15:04:02] [PASSED] reflect-y
[15:04:02] [PASSED] rotate-0
[15:04:02] [PASSED] rotate-90
[15:04:02] [PASSED] rotate-180
[15:04:02] [PASSED] rotate-270
[15:04:02] ============ [PASSED] drm_test_rect_rotate_inv =============
[15:04:02] ==================== [PASSED] drm_rect =====================
[15:04:02] ============ drm_sysfb_modeset_test (1 subtest) ============
[15:04:02] ============ drm_test_sysfb_build_fourcc_list  =============
[15:04:02] [PASSED] no native formats
[15:04:02] [PASSED] XRGB8888 as native format
[15:04:02] [PASSED] remove duplicates
[15:04:02] [PASSED] convert alpha formats
[15:04:02] [PASSED] random formats
[15:04:02] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[15:04:02] ============= [PASSED] drm_sysfb_modeset_test ==============
[15:04:02] ============================================================
[15:04:02] Testing complete. Ran 621 tests: passed: 621
[15:04:02] Elapsed time: 25.611s total, 1.778s configuring, 23.612s building, 0.194s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[15:04:02] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[15:04:04] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[15:04:13] Starting KUnit Kernel (1/1)...
[15:04:13] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[15:04:13] ================= ttm_device (5 subtests) ==================
[15:04:13] [PASSED] ttm_device_init_basic
[15:04:13] [PASSED] ttm_device_init_multiple
[15:04:13] [PASSED] ttm_device_fini_basic
[15:04:13] [PASSED] ttm_device_init_no_vma_man
[15:04:13] ================== ttm_device_init_pools  ==================
[15:04:13] [PASSED] No DMA allocations, no DMA32 required
[15:04:13] [PASSED] DMA allocations, DMA32 required
[15:04:13] [PASSED] No DMA allocations, DMA32 required
[15:04:13] [PASSED] DMA allocations, no DMA32 required
[15:04:13] ============== [PASSED] ttm_device_init_pools ==============
[15:04:13] =================== [PASSED] ttm_device ====================
[15:04:13] ================== ttm_pool (8 subtests) ===================
[15:04:13] ================== ttm_pool_alloc_basic  ===================
[15:04:13] [PASSED] One page
[15:04:13] [PASSED] More than one page
[15:04:13] [PASSED] Above the allocation limit
[15:04:13] [PASSED] One page, with coherent DMA mappings enabled
[15:04:13] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[15:04:13] ============== [PASSED] ttm_pool_alloc_basic ===============
[15:04:13] ============== ttm_pool_alloc_basic_dma_addr  ==============
[15:04:13] [PASSED] One page
[15:04:13] [PASSED] More than one page
[15:04:13] [PASSED] Above the allocation limit
[15:04:13] [PASSED] One page, with coherent DMA mappings enabled
[15:04:13] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[15:04:13] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[15:04:13] [PASSED] ttm_pool_alloc_order_caching_match
[15:04:13] [PASSED] ttm_pool_alloc_caching_mismatch
[15:04:13] [PASSED] ttm_pool_alloc_order_mismatch
[15:04:13] [PASSED] ttm_pool_free_dma_alloc
[15:04:13] [PASSED] ttm_pool_free_no_dma_alloc
[15:04:13] [PASSED] ttm_pool_fini_basic
[15:04:13] ==================== [PASSED] ttm_pool =====================
[15:04:13] ================ ttm_resource (8 subtests) =================
[15:04:13] ================= ttm_resource_init_basic  =================
[15:04:13] [PASSED] Init resource in TTM_PL_SYSTEM
[15:04:13] [PASSED] Init resource in TTM_PL_VRAM
[15:04:13] [PASSED] Init resource in a private placement
[15:04:13] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[15:04:13] ============= [PASSED] ttm_resource_init_basic =============
[15:04:13] [PASSED] ttm_resource_init_pinned
[15:04:13] [PASSED] ttm_resource_fini_basic
[15:04:13] [PASSED] ttm_resource_manager_init_basic
[15:04:13] [PASSED] ttm_resource_manager_usage_basic
[15:04:13] [PASSED] ttm_resource_manager_set_used_basic
[15:04:13] [PASSED] ttm_sys_man_alloc_basic
[15:04:13] [PASSED] ttm_sys_man_free_basic
[15:04:13] ================== [PASSED] ttm_resource ===================
[15:04:13] =================== ttm_tt (15 subtests) ===================
[15:04:13] ==================== ttm_tt_init_basic  ====================
[15:04:13] [PASSED] Page-aligned size
[15:04:13] [PASSED] Extra pages requested
[15:04:13] ================ [PASSED] ttm_tt_init_basic ================
[15:04:13] [PASSED] ttm_tt_init_misaligned
[15:04:13] [PASSED] ttm_tt_fini_basic
[15:04:13] [PASSED] ttm_tt_fini_sg
[15:04:13] [PASSED] ttm_tt_fini_shmem
[15:04:13] [PASSED] ttm_tt_create_basic
[15:04:13] [PASSED] ttm_tt_create_invalid_bo_type
[15:04:13] [PASSED] ttm_tt_create_ttm_exists
[15:04:13] [PASSED] ttm_tt_create_failed
[15:04:13] [PASSED] ttm_tt_destroy_basic
[15:04:13] [PASSED] ttm_tt_populate_null_ttm
[15:04:13] [PASSED] ttm_tt_populate_populated_ttm
[15:04:13] [PASSED] ttm_tt_unpopulate_basic
[15:04:13] [PASSED] ttm_tt_unpopulate_empty_ttm
[15:04:13] [PASSED] ttm_tt_swapin_basic
[15:04:13] ===================== [PASSED] ttm_tt ======================
[15:04:13] =================== ttm_bo (14 subtests) ===================
[15:04:13] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[15:04:13] [PASSED] Cannot be interrupted and sleeps
[15:04:13] [PASSED] Cannot be interrupted, locks straight away
[15:04:13] [PASSED] Can be interrupted, sleeps
[15:04:13] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[15:04:13] [PASSED] ttm_bo_reserve_locked_no_sleep
[15:04:13] [PASSED] ttm_bo_reserve_no_wait_ticket
[15:04:13] [PASSED] ttm_bo_reserve_double_resv
[15:04:13] [PASSED] ttm_bo_reserve_interrupted
[15:04:13] [PASSED] ttm_bo_reserve_deadlock
[15:04:13] [PASSED] ttm_bo_unreserve_basic
[15:04:13] [PASSED] ttm_bo_unreserve_pinned
[15:04:13] [PASSED] ttm_bo_unreserve_bulk
[15:04:13] [PASSED] ttm_bo_fini_basic
[15:04:13] [PASSED] ttm_bo_fini_shared_resv
[15:04:13] [PASSED] ttm_bo_pin_basic
[15:04:13] [PASSED] ttm_bo_pin_unpin_resource
[15:04:13] [PASSED] ttm_bo_multiple_pin_one_unpin
[15:04:13] ===================== [PASSED] ttm_bo ======================
[15:04:13] ============== ttm_bo_validate (21 subtests) ===============
[15:04:13] ============== ttm_bo_init_reserved_sys_man  ===============
[15:04:13] [PASSED] Buffer object for userspace
[15:04:13] [PASSED] Kernel buffer object
[15:04:13] [PASSED] Shared buffer object
[15:04:13] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[15:04:13] ============== ttm_bo_init_reserved_mock_man  ==============
[15:04:13] [PASSED] Buffer object for userspace
[15:04:13] [PASSED] Kernel buffer object
[15:04:13] [PASSED] Shared buffer object
[15:04:13] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[15:04:13] [PASSED] ttm_bo_init_reserved_resv
[15:04:13] ================== ttm_bo_validate_basic  ==================
[15:04:13] [PASSED] Buffer object for userspace
[15:04:13] [PASSED] Kernel buffer object
[15:04:13] [PASSED] Shared buffer object
[15:04:13] ============== [PASSED] ttm_bo_validate_basic ==============
[15:04:13] [PASSED] ttm_bo_validate_invalid_placement
[15:04:13] ============= ttm_bo_validate_same_placement  ==============
[15:04:13] [PASSED] System manager
[15:04:13] [PASSED] VRAM manager
[15:04:13] ========= [PASSED] ttm_bo_validate_same_placement ==========
[15:04:13] [PASSED] ttm_bo_validate_failed_alloc
[15:04:13] [PASSED] ttm_bo_validate_pinned
[15:04:13] [PASSED] ttm_bo_validate_busy_placement
[15:04:13] ================ ttm_bo_validate_multihop  =================
[15:04:13] [PASSED] Buffer object for userspace
[15:04:13] [PASSED] Kernel buffer object
[15:04:13] [PASSED] Shared buffer object
[15:04:13] ============ [PASSED] ttm_bo_validate_multihop =============
[15:04:13] ========== ttm_bo_validate_no_placement_signaled  ==========
[15:04:13] [PASSED] Buffer object in system domain, no page vector
[15:04:13] [PASSED] Buffer object in system domain with an existing page vector
[15:04:13] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[15:04:13] ======== ttm_bo_validate_no_placement_not_signaled  ========
[15:04:13] [PASSED] Buffer object for userspace
[15:04:13] [PASSED] Kernel buffer object
[15:04:13] [PASSED] Shared buffer object
[15:04:13] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[15:04:13] [PASSED] ttm_bo_validate_move_fence_signaled
[15:04:13] ========= ttm_bo_validate_move_fence_not_signaled  =========
[15:04:13] [PASSED] Waits for GPU
[15:04:13] [PASSED] Tries to lock straight away
[15:04:13] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[15:04:13] [PASSED] ttm_bo_validate_happy_evict
[15:04:13] [PASSED] ttm_bo_validate_all_pinned_evict
[15:04:13] [PASSED] ttm_bo_validate_allowed_only_evict
[15:04:13] [PASSED] ttm_bo_validate_deleted_evict
[15:04:13] [PASSED] ttm_bo_validate_busy_domain_evict
[15:04:13] [PASSED] ttm_bo_validate_evict_gutting
[15:04:13] [PASSED] ttm_bo_validate_recrusive_evict
stty: 'standard input': Inappropriate ioctl for device
[15:04:13] ================= [PASSED] ttm_bo_validate =================
[15:04:13] ============================================================
[15:04:13] Testing complete. Ran 101 tests: passed: 101
[15:04:13] Elapsed time: 11.162s total, 1.727s configuring, 9.168s building, 0.233s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 17+ messages in thread

* ✗ Xe.CI.BAT: failure for : Add GPU work period support for Xe driver (rev5)
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
                   ` (9 preceding siblings ...)
  2025-10-06 15:04 ` ✓ CI.KUnit: success " Patchwork
@ 2025-10-06 15:58 ` Patchwork
  2025-10-06 17:42 ` ✗ Xe.CI.Full: " Patchwork
  11 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-10-06 15:58 UTC (permalink / raw)
  To: Aakash Deep Sarkar; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 3003 bytes --]

== Series Details ==

Series: : Add GPU work period support for Xe driver (rev5)
URL   : https://patchwork.freedesktop.org/series/153341/
State : failure

== Summary ==

CI Bug Log - changes from xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc_BAT -> xe-pw-153341v5_BAT
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-153341v5_BAT absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-153341v5_BAT, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (11 -> 9)
------------------------------

  Missing    (2): bat-adlp-vm bat-ptl-vm 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-153341v5_BAT:

### IGT changes ###

#### Possible regressions ####

  * igt@core_hotunplug@unbind-rebind:
    - bat-adlp-7:         [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/bat-adlp-7/igt@core_hotunplug@unbind-rebind.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/bat-adlp-7/igt@core_hotunplug@unbind-rebind.html
    - bat-lnl-2:          [PASS][3] -> [INCOMPLETE][4]
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/bat-lnl-2/igt@core_hotunplug@unbind-rebind.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/bat-lnl-2/igt@core_hotunplug@unbind-rebind.html
    - bat-dg2-oem2:       [PASS][5] -> [INCOMPLETE][6]
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/bat-dg2-oem2/igt@core_hotunplug@unbind-rebind.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/bat-dg2-oem2/igt@core_hotunplug@unbind-rebind.html
    - bat-bmg-2:          [PASS][7] -> [INCOMPLETE][8]
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/bat-bmg-2/igt@core_hotunplug@unbind-rebind.html
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/bat-bmg-2/igt@core_hotunplug@unbind-rebind.html

  
New tests
---------

  New tests have been introduced between xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc_BAT and xe-pw-153341v5_BAT:

### New IGT tests (1) ###

  * igt@xe_waitfence:
    - Statuses :
    - Exec time: [None] s

  



Build changes
-------------

  * Linux: xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc -> xe-pw-153341v5

  IGT_8574: 44a15713124663a622c6eddf7c6ee5ba732e0d41 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc: 29dc3d947463e9e9756a253801e5cc4466536ecc
  xe-pw-153341v5: 153341v5

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/index.html

[-- Attachment #2: Type: text/html, Size: 3642 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* ✗ Xe.CI.Full: failure for : Add GPU work period support for Xe driver (rev5)
  2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
                   ` (10 preceding siblings ...)
  2025-10-06 15:58 ` ✗ Xe.CI.BAT: failure " Patchwork
@ 2025-10-06 17:42 ` Patchwork
  11 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-10-06 17:42 UTC (permalink / raw)
  To: Aakash Deep Sarkar; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 49346 bytes --]

== Series Details ==

Series: : Add GPU work period support for Xe driver (rev5)
URL   : https://patchwork.freedesktop.org/series/153341/
State : failure

== Summary ==

CI Bug Log - changes from xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc_FULL -> xe-pw-153341v5_FULL
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-153341v5_FULL absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-153341v5_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (4 -> 4)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-153341v5_FULL:

### IGT changes ###

#### Possible regressions ####

  * igt@core_hotunplug@hotunbind-rebind:
    - shard-lnl:          [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-7/igt@core_hotunplug@hotunbind-rebind.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-5/igt@core_hotunplug@hotunbind-rebind.html

  * igt@kms_pm_rpm@basic-rte:
    - shard-adlp:         [PASS][3] -> [ABORT][4] +7 other tests abort
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-3/igt@kms_pm_rpm@basic-rte.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-3/igt@kms_pm_rpm@basic-rte.html
    - shard-bmg:          [PASS][5] -> [ABORT][6] +8 other tests abort
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-3/igt@kms_pm_rpm@basic-rte.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-7/igt@kms_pm_rpm@basic-rte.html

  * igt@kms_pm_rpm@legacy-planes-dpms@plane-43:
    - shard-lnl:          [PASS][7] -> [ABORT][8] +13 other tests abort
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-5/igt@kms_pm_rpm@legacy-planes-dpms@plane-43.html
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-3/igt@kms_pm_rpm@legacy-planes-dpms@plane-43.html

  * igt@kms_pm_rpm@modeset-lpsp-stress-no-wait:
    - shard-dg2-set2:     [PASS][9] -> [ABORT][10] +10 other tests abort
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-432/igt@kms_pm_rpm@modeset-lpsp-stress-no-wait.html
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-466/igt@kms_pm_rpm@modeset-lpsp-stress-no-wait.html

  * igt@xe_fault_injection@inject-fault-probe-function-xe_guc_relay_init:
    - shard-bmg:          [PASS][11] -> [DMESG-WARN][12] +3 other tests dmesg-warn
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@xe_fault_injection@inject-fault-probe-function-xe_guc_relay_init.html
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-4/igt@xe_fault_injection@inject-fault-probe-function-xe_guc_relay_init.html
    - shard-dg2-set2:     [PASS][13] -> [DMESG-WARN][14] +7 other tests dmesg-warn
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-435/igt@xe_fault_injection@inject-fault-probe-function-xe_guc_relay_init.html
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-432/igt@xe_fault_injection@inject-fault-probe-function-xe_guc_relay_init.html

  * igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_execute:
    - shard-lnl:          [PASS][15] -> [DMESG-WARN][16] +5 other tests dmesg-warn
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-3/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_execute.html
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-2/igt@xe_fault_injection@vm-bind-fail-vm_bind_ioctl_ops_execute.html

  * igt@xe_sriov_scheduling@nonpreempt-engine-resets:
    - shard-bmg:          [PASS][17] -> [INCOMPLETE][18] +2 other tests incomplete
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-3/igt@xe_sriov_scheduling@nonpreempt-engine-resets.html
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-3/igt@xe_sriov_scheduling@nonpreempt-engine-resets.html

  * igt@xe_wedged@basic-wedged:
    - shard-adlp:         [PASS][19] -> [INCOMPLETE][20] +1 other test incomplete
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-2/igt@xe_wedged@basic-wedged.html
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-9/igt@xe_wedged@basic-wedged.html
    - shard-dg2-set2:     [PASS][21] -> [INCOMPLETE][22]
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-463/igt@xe_wedged@basic-wedged.html
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-435/igt@xe_wedged@basic-wedged.html

  * igt@xe_wedged@wedged-at-any-timeout:
    - shard-adlp:         [PASS][23] -> [DMESG-WARN][24] +5 other tests dmesg-warn
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-4/igt@xe_wedged@wedged-at-any-timeout.html
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-8/igt@xe_wedged@wedged-at-any-timeout.html

  
#### Warnings ####

  * igt@kms_big_fb@y-tiled-8bpp-rotate-180:
    - shard-adlp:         [DMESG-FAIL][25] ([Intel XE#4543]) -> [INCOMPLETE][26]
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-3/igt@kms_big_fb@y-tiled-8bpp-rotate-180.html
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-9/igt@kms_big_fb@y-tiled-8bpp-rotate-180.html

  * igt@kms_pm_rpm@dpms-mode-unset-non-lpsp:
    - shard-adlp:         [SKIP][27] ([Intel XE#836]) -> [ABORT][28]
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-4/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-8/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html
    - shard-lnl:          [SKIP][29] ([Intel XE#1439] / [Intel XE#836]) -> [ABORT][30]
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-8/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-5/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html

  * igt@kms_pm_rpm@modeset-lpsp-stress:
    - shard-bmg:          [SKIP][31] ([Intel XE#1439] / [Intel XE#3141] / [Intel XE#836]) -> [ABORT][32] +1 other test abort
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-5/igt@kms_pm_rpm@modeset-lpsp-stress.html
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-5/igt@kms_pm_rpm@modeset-lpsp-stress.html

  * igt@xe_pm@d3cold-basic-exec:
    - shard-dg2-set2:     [SKIP][33] ([Intel XE#2284] / [Intel XE#366]) -> [ABORT][34] +1 other test abort
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-466/igt@xe_pm@d3cold-basic-exec.html
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-434/igt@xe_pm@d3cold-basic-exec.html
    - shard-lnl:          [SKIP][35] ([Intel XE#2284] / [Intel XE#366]) -> [ABORT][36]
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-8/igt@xe_pm@d3cold-basic-exec.html
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-1/igt@xe_pm@d3cold-basic-exec.html

  * igt@xe_pm@d3cold-multiple-execs:
    - shard-adlp:         [SKIP][37] ([Intel XE#2284] / [Intel XE#366]) -> [ABORT][38]
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-2/igt@xe_pm@d3cold-multiple-execs.html
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-2/igt@xe_pm@d3cold-multiple-execs.html
    - shard-bmg:          [SKIP][39] ([Intel XE#2284]) -> [ABORT][40] +1 other test abort
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@xe_pm@d3cold-multiple-execs.html
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-8/igt@xe_pm@d3cold-multiple-execs.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * {igt@kms_pipe_stress@stress-xrgb8888-4tiled}:
    - shard-dg2-set2:     [PASS][41] -> [ABORT][42]
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-463/igt@kms_pipe_stress@stress-xrgb8888-4tiled.html
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-464/igt@kms_pipe_stress@stress-xrgb8888-4tiled.html

  * {igt@xe_configfs@ctx-restore-mid-bb}:
    - shard-adlp:         [PASS][43] -> [INCOMPLETE][44]
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-8/igt@xe_configfs@ctx-restore-mid-bb.html
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-2/igt@xe_configfs@ctx-restore-mid-bb.html
    - shard-bmg:          [PASS][45] -> [INCOMPLETE][46] +1 other test incomplete
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-4/igt@xe_configfs@ctx-restore-mid-bb.html
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-8/igt@xe_configfs@ctx-restore-mid-bb.html
    - shard-lnl:          [PASS][47] -> [INCOMPLETE][48] +1 other test incomplete
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-4/igt@xe_configfs@ctx-restore-mid-bb.html
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-8/igt@xe_configfs@ctx-restore-mid-bb.html

  * {igt@xe_configfs@engines-allowed}:
    - shard-adlp:         [PASS][49] -> [DMESG-WARN][50]
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-6/igt@xe_configfs@engines-allowed.html
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-3/igt@xe_configfs@engines-allowed.html
    - shard-dg2-set2:     [PASS][51] -> [INCOMPLETE][52]
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-464/igt@xe_configfs@engines-allowed.html
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-432/igt@xe_configfs@engines-allowed.html

  * {igt@xe_fault_injection@exec-queue-create-fail-xe_pxp_exec_queue_add}:
    - shard-lnl:          [PASS][53] -> [DMESG-WARN][54]
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-3/igt@xe_fault_injection@exec-queue-create-fail-xe_pxp_exec_queue_add.html
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-1/igt@xe_fault_injection@exec-queue-create-fail-xe_pxp_exec_queue_add.html

  * {igt@xe_pmu@all-fn-engine-activity-load@engine-drm_xe_engine_class_compute0}:
    - shard-bmg:          [PASS][55] -> [ABORT][56]
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-4/igt@xe_pmu@all-fn-engine-activity-load@engine-drm_xe_engine_class_compute0.html
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-8/igt@xe_pmu@all-fn-engine-activity-load@engine-drm_xe_engine_class_compute0.html

  
Known issues
------------

  Here are the changes found in xe-pw-153341v5_FULL that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6:
    - shard-dg2-set2:     NOTRUN -> [SKIP][57] ([Intel XE#787]) +41 other tests skip
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-464/igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-d-dp-4:
    - shard-dg2-set2:     NOTRUN -> [SKIP][58] ([Intel XE#455] / [Intel XE#787]) +5 other tests skip
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-464/igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc@pipe-d-dp-4.html

  * igt@kms_flip@flip-vs-suspend-interruptible@d-hdmi-a1:
    - shard-adlp:         [PASS][59] -> [DMESG-WARN][60] ([Intel XE#4543]) +4 other tests dmesg-warn
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-9/igt@kms_flip@flip-vs-suspend-interruptible@d-hdmi-a1.html
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-6/igt@kms_flip@flip-vs-suspend-interruptible@d-hdmi-a1.html

  * igt@xe_exec_basic@multigpu-once-null:
    - shard-dg2-set2:     [PASS][61] -> [SKIP][62] ([Intel XE#1392]) +1 other test skip
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-435/igt@xe_exec_basic@multigpu-once-null.html
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-432/igt@xe_exec_basic@multigpu-once-null.html

  * igt@xe_fault_injection@exec-queue-create-fail-xe_exec_queue_create_bind:
    - shard-bmg:          [PASS][63] -> [DMESG-WARN][64] ([Intel XE#1727])
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-7/igt@xe_fault_injection@exec-queue-create-fail-xe_exec_queue_create_bind.html
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-6/igt@xe_fault_injection@exec-queue-create-fail-xe_exec_queue_create_bind.html

  * igt@xe_fault_injection@exec-queue-create-fail-xe_vm_add_compute_exec_queue:
    - shard-dg2-set2:     [PASS][65] -> [DMESG-WARN][66] ([Intel XE#1727])
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-463/igt@xe_fault_injection@exec-queue-create-fail-xe_vm_add_compute_exec_queue.html
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-436/igt@xe_fault_injection@exec-queue-create-fail-xe_vm_add_compute_exec_queue.html

  * igt@xe_sriov_scheduling@nonpreempt-engine-resets@numvfs-random:
    - shard-adlp:         [PASS][67] -> [ABORT][68] ([Intel XE#1727] / [Intel XE#4917] / [Intel XE#5545]) +1 other test abort
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-8/igt@xe_sriov_scheduling@nonpreempt-engine-resets@numvfs-random.html
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-2/igt@xe_sriov_scheduling@nonpreempt-engine-resets@numvfs-random.html

  * igt@xe_wedged@wedged-at-any-timeout:
    - shard-lnl:          [PASS][69] -> [DMESG-WARN][70] ([Intel XE#3119])
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-1/igt@xe_wedged@wedged-at-any-timeout.html
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-4/igt@xe_wedged@wedged-at-any-timeout.html

  
#### Possible fixes ####

  * igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p:
    - shard-bmg:          [SKIP][71] ([Intel XE#2314] / [Intel XE#2894]) -> [PASS][72]
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-3/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html

  * igt@kms_flip@flip-vs-suspend-interruptible@b-hdmi-a1:
    - shard-adlp:         [DMESG-WARN][73] ([Intel XE#4543]) -> [PASS][74]
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-9/igt@kms_flip@flip-vs-suspend-interruptible@b-hdmi-a1.html
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-6/igt@kms_flip@flip-vs-suspend-interruptible@b-hdmi-a1.html

  * igt@kms_setmode@invalid-clone-single-crtc:
    - shard-bmg:          [SKIP][75] ([Intel XE#1435]) -> [PASS][76]
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@kms_setmode@invalid-clone-single-crtc.html
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-3/igt@kms_setmode@invalid-clone-single-crtc.html

  * igt@xe_module_load@load:
    - shard-lnl:          ([PASS][77], [PASS][78], [PASS][79], [PASS][80], [PASS][81], [PASS][82], [PASS][83], [PASS][84], [PASS][85], [SKIP][86], [PASS][87], [PASS][88], [PASS][89], [PASS][90], [PASS][91], [PASS][92], [PASS][93], [PASS][94], [PASS][95], [PASS][96], [PASS][97], [PASS][98], [PASS][99], [PASS][100], [PASS][101], [PASS][102]) ([Intel XE#378]) -> ([PASS][103], [PASS][104], [PASS][105], [PASS][106], [PASS][107], [PASS][108], [PASS][109], [PASS][110], [PASS][111], [PASS][112], [PASS][113], [PASS][114], [PASS][115], [PASS][116], [PASS][117], [PASS][118], [PASS][119], [PASS][120], [PASS][121], [PASS][122], [PASS][123], [PASS][124], [PASS][125], [PASS][126])
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-4/igt@xe_module_load@load.html
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-4/igt@xe_module_load@load.html
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-3/igt@xe_module_load@load.html
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-1/igt@xe_module_load@load.html
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-2/igt@xe_module_load@load.html
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-2/igt@xe_module_load@load.html
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-8/igt@xe_module_load@load.html
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-8/igt@xe_module_load@load.html
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-8/igt@xe_module_load@load.html
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-5/igt@xe_module_load@load.html
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-3/igt@xe_module_load@load.html
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-4/igt@xe_module_load@load.html
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-4/igt@xe_module_load@load.html
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-5/igt@xe_module_load@load.html
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-1/igt@xe_module_load@load.html
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-2/igt@xe_module_load@load.html
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-1/igt@xe_module_load@load.html
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-3/igt@xe_module_load@load.html
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-5/igt@xe_module_load@load.html
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-7/igt@xe_module_load@load.html
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-7/igt@xe_module_load@load.html
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-7/igt@xe_module_load@load.html
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-7/igt@xe_module_load@load.html
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-3/igt@xe_module_load@load.html
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-5/igt@xe_module_load@load.html
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-lnl-8/igt@xe_module_load@load.html
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-1/igt@xe_module_load@load.html
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-1/igt@xe_module_load@load.html
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-1/igt@xe_module_load@load.html
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-2/igt@xe_module_load@load.html
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-8/igt@xe_module_load@load.html
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-7/igt@xe_module_load@load.html
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-4/igt@xe_module_load@load.html
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-2/igt@xe_module_load@load.html
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-4/igt@xe_module_load@load.html
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-4/igt@xe_module_load@load.html
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-1/igt@xe_module_load@load.html
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-5/igt@xe_module_load@load.html
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-5/igt@xe_module_load@load.html
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-5/igt@xe_module_load@load.html
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-5/igt@xe_module_load@load.html
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-8/igt@xe_module_load@load.html
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-7/igt@xe_module_load@load.html
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-7/igt@xe_module_load@load.html
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-7/igt@xe_module_load@load.html
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-3/igt@xe_module_load@load.html
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-3/igt@xe_module_load@load.html
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-3/igt@xe_module_load@load.html
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-3/igt@xe_module_load@load.html
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-lnl-8/igt@xe_module_load@load.html
    - shard-bmg:          ([PASS][127], [PASS][128], [PASS][129], [PASS][130], [PASS][131], [PASS][132], [PASS][133], [PASS][134], [PASS][135], [PASS][136], [PASS][137], [PASS][138], [PASS][139], [PASS][140], [PASS][141], [PASS][142], [PASS][143], [PASS][144], [PASS][145], [PASS][146], [SKIP][147], [PASS][148], [PASS][149], [PASS][150], [PASS][151], [PASS][152]) ([Intel XE#2457]) -> ([PASS][153], [PASS][154], [PASS][155], [PASS][156], [PASS][157], [PASS][158], [PASS][159], [PASS][160], [PASS][161], [PASS][162], [PASS][163], [PASS][164], [PASS][165], [PASS][166], [PASS][167], [PASS][168], [PASS][169], [PASS][170], [PASS][171], [PASS][172], [PASS][173], [PASS][174], [PASS][175], [PASS][176])
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-1/igt@xe_module_load@load.html
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-8/igt@xe_module_load@load.html
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@xe_module_load@load.html
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@xe_module_load@load.html
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@xe_module_load@load.html
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-5/igt@xe_module_load@load.html
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-8/igt@xe_module_load@load.html
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-2/igt@xe_module_load@load.html
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-4/igt@xe_module_load@load.html
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-4/igt@xe_module_load@load.html
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@xe_module_load@load.html
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-3/igt@xe_module_load@load.html
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-3/igt@xe_module_load@load.html
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-1/igt@xe_module_load@load.html
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-8/igt@xe_module_load@load.html
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-5/igt@xe_module_load@load.html
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-1/igt@xe_module_load@load.html
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-7/igt@xe_module_load@load.html
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-7/igt@xe_module_load@load.html
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-3/igt@xe_module_load@load.html
   [147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-2/igt@xe_module_load@load.html
   [148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-2/igt@xe_module_load@load.html
   [149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-3/igt@xe_module_load@load.html
   [150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-5/igt@xe_module_load@load.html
   [151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-2/igt@xe_module_load@load.html
   [152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-7/igt@xe_module_load@load.html
   [153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-4/igt@xe_module_load@load.html
   [154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-4/igt@xe_module_load@load.html
   [155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-5/igt@xe_module_load@load.html
   [156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-8/igt@xe_module_load@load.html
   [157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-8/igt@xe_module_load@load.html
   [158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-8/igt@xe_module_load@load.html
   [159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-7/igt@xe_module_load@load.html
   [160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-3/igt@xe_module_load@load.html
   [161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-6/igt@xe_module_load@load.html
   [162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-6/igt@xe_module_load@load.html
   [163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-1/igt@xe_module_load@load.html
   [164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-5/igt@xe_module_load@load.html
   [165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-6/igt@xe_module_load@load.html
   [166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-7/igt@xe_module_load@load.html
   [167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-2/igt@xe_module_load@load.html
   [168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-2/igt@xe_module_load@load.html
   [169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-1/igt@xe_module_load@load.html
   [170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-3/igt@xe_module_load@load.html
   [171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-3/igt@xe_module_load@load.html
   [172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-2/igt@xe_module_load@load.html
   [173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-4/igt@xe_module_load@load.html
   [174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-4/igt@xe_module_load@load.html
   [175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-5/igt@xe_module_load@load.html
   [176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-5/igt@xe_module_load@load.html
    - shard-adlp:         ([PASS][177], [PASS][178], [PASS][179], [PASS][180], [PASS][181], [SKIP][182], [PASS][183], [PASS][184], [PASS][185], [PASS][186], [PASS][187], [PASS][188], [PASS][189], [PASS][190], [PASS][191], [PASS][192], [PASS][193], [PASS][194], [PASS][195], [PASS][196], [PASS][197], [PASS][198], [PASS][199], [PASS][200], [PASS][201], [PASS][202]) ([Intel XE#378] / [Intel XE#5612]) -> ([PASS][203], [PASS][204], [PASS][205], [PASS][206], [PASS][207], [PASS][208], [PASS][209], [PASS][210], [PASS][211], [PASS][212], [PASS][213], [PASS][214], [PASS][215], [PASS][216], [PASS][217], [PASS][218], [PASS][219], [PASS][220], [PASS][221], [PASS][222], [PASS][223], [PASS][224], [PASS][225], [PASS][226], [PASS][227])
   [177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-6/igt@xe_module_load@load.html
   [178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-8/igt@xe_module_load@load.html
   [179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-4/igt@xe_module_load@load.html
   [180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-4/igt@xe_module_load@load.html
   [181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-3/igt@xe_module_load@load.html
   [182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-2/igt@xe_module_load@load.html
   [183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-1/igt@xe_module_load@load.html
   [184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-1/igt@xe_module_load@load.html
   [185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-2/igt@xe_module_load@load.html
   [186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-2/igt@xe_module_load@load.html
   [187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-4/igt@xe_module_load@load.html
   [188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-6/igt@xe_module_load@load.html
   [189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-4/igt@xe_module_load@load.html
   [190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-8/igt@xe_module_load@load.html
   [191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-8/igt@xe_module_load@load.html
   [192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-8/igt@xe_module_load@load.html
   [193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-1/igt@xe_module_load@load.html
   [194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-6/igt@xe_module_load@load.html
   [195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-9/igt@xe_module_load@load.html
   [196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-9/igt@xe_module_load@load.html
   [197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-3/igt@xe_module_load@load.html
   [198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-3/igt@xe_module_load@load.html
   [199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-2/igt@xe_module_load@load.html
   [200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-9/igt@xe_module_load@load.html
   [201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-9/igt@xe_module_load@load.html
   [202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-adlp-2/igt@xe_module_load@load.html
   [203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-8/igt@xe_module_load@load.html
   [204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-8/igt@xe_module_load@load.html
   [205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-9/igt@xe_module_load@load.html
   [206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-3/igt@xe_module_load@load.html
   [207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-3/igt@xe_module_load@load.html
   [208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-6/igt@xe_module_load@load.html
   [209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-6/igt@xe_module_load@load.html
   [210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-3/igt@xe_module_load@load.html
   [211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-9/igt@xe_module_load@load.html
   [212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-9/igt@xe_module_load@load.html
   [213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-1/igt@xe_module_load@load.html
   [214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-1/igt@xe_module_load@load.html
   [215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-4/igt@xe_module_load@load.html
   [216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-1/igt@xe_module_load@load.html
   [217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-4/igt@xe_module_load@load.html
   [218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-1/igt@xe_module_load@load.html
   [219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-6/igt@xe_module_load@load.html
   [220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-3/igt@xe_module_load@load.html
   [221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-8/igt@xe_module_load@load.html
   [222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-8/igt@xe_module_load@load.html
   [223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-4/igt@xe_module_load@load.html
   [224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-2/igt@xe_module_load@load.html
   [225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-6/igt@xe_module_load@load.html
   [226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-2/igt@xe_module_load@load.html
   [227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-adlp-2/igt@xe_module_load@load.html
    - shard-dg2-set2:     ([PASS][228], [SKIP][229], [PASS][230], [PASS][231], [PASS][232], [PASS][233], [PASS][234], [PASS][235], [PASS][236], [PASS][237], [PASS][238], [PASS][239], [PASS][240], [PASS][241], [PASS][242], [PASS][243], [PASS][244], [PASS][245], [PASS][246], [PASS][247], [PASS][248], [PASS][249], [PASS][250], [PASS][251], [PASS][252], [PASS][253]) ([Intel XE#378]) -> ([PASS][254], [PASS][255], [PASS][256], [PASS][257], [PASS][258], [PASS][259], [PASS][260], [PASS][261], [PASS][262], [PASS][263], [PASS][264], [PASS][265], [PASS][266], [PASS][267], [PASS][268], [PASS][269], [PASS][270], [PASS][271], [PASS][272], [PASS][273], [PASS][274], [PASS][275], [PASS][276], [PASS][277], [PASS][278])
   [228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-463/igt@xe_module_load@load.html
   [229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-435/igt@xe_module_load@load.html
   [230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-435/igt@xe_module_load@load.html
   [231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-433/igt@xe_module_load@load.html
   [232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-435/igt@xe_module_load@load.html
   [233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-436/igt@xe_module_load@load.html
   [234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-434/igt@xe_module_load@load.html
   [235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-434/igt@xe_module_load@load.html
   [236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-432/igt@xe_module_load@load.html
   [237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-432/igt@xe_module_load@load.html
   [238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-432/igt@xe_module_load@load.html
   [239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-436/igt@xe_module_load@load.html
   [240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-464/igt@xe_module_load@load.html
   [241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-433/igt@xe_module_load@load.html
   [242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-433/igt@xe_module_load@load.html
   [243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-432/igt@xe_module_load@load.html
   [244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-463/igt@xe_module_load@load.html
   [245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-464/igt@xe_module_load@load.html
   [246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-464/igt@xe_module_load@load.html
   [247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-466/igt@xe_module_load@load.html
   [248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-435/igt@xe_module_load@load.html
   [249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-435/igt@xe_module_load@load.html
   [250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-466/igt@xe_module_load@load.html
   [251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-463/igt@xe_module_load@load.html
   [252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-466/igt@xe_module_load@load.html
   [253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-dg2-466/igt@xe_module_load@load.html
   [254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-433/igt@xe_module_load@load.html
   [255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-433/igt@xe_module_load@load.html
   [256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-463/igt@xe_module_load@load.html
   [257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-463/igt@xe_module_load@load.html
   [258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-434/igt@xe_module_load@load.html
   [259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-434/igt@xe_module_load@load.html
   [260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-434/igt@xe_module_load@load.html
   [261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-435/igt@xe_module_load@load.html
   [262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-436/igt@xe_module_load@load.html
   [263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-432/igt@xe_module_load@load.html
   [264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-436/igt@xe_module_load@load.html
   [265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-464/igt@xe_module_load@load.html
   [266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-433/igt@xe_module_load@load.html
   [267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-432/igt@xe_module_load@load.html
   [268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-435/igt@xe_module_load@load.html
   [269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-435/igt@xe_module_load@load.html
   [270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-432/igt@xe_module_load@load.html
   [271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-464/igt@xe_module_load@load.html
   [272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-432/igt@xe_module_load@load.html
   [273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-466/igt@xe_module_load@load.html
   [274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-463/igt@xe_module_load@load.html
   [275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-466/igt@xe_module_load@load.html
   [276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-466/igt@xe_module_load@load.html
   [277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-464/igt@xe_module_load@load.html
   [278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-dg2-464/igt@xe_module_load@load.html

  
#### Warnings ####

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-mmap-wc:
    - shard-bmg:          [SKIP][279] ([Intel XE#2312]) -> [SKIP][280] ([Intel XE#5390])
   [279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-mmap-wc.html
   [280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-pgflip-blt:
    - shard-bmg:          [SKIP][281] ([Intel XE#2312]) -> [SKIP][282] ([Intel XE#2313]) +1 other test skip
   [281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-pgflip-blt.html
   [282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-pgflip-blt.html

  * igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv:
    - shard-bmg:          [ABORT][283] ([Intel XE#5466] / [Intel XE#5530]) -> [INCOMPLETE][284] ([Intel XE#5466])
   [283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc/shard-bmg-3/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
   [284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/shard-bmg-5/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
  [Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
  [Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
  [Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
  [Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
  [Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
  [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
  [Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
  [Intel XE#2457]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2457
  [Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
  [Intel XE#3119]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3119
  [Intel XE#3141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3141
  [Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
  [Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
  [Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
  [Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
  [Intel XE#4917]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4917
  [Intel XE#5390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5390
  [Intel XE#5466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5466
  [Intel XE#5530]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5530
  [Intel XE#5545]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5545
  [Intel XE#5612]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5612
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#836]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/836


Build changes
-------------

  * Linux: xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc -> xe-pw-153341v5

  IGT_8574: 44a15713124663a622c6eddf7c6ee5ba732e0d41 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-3871-29dc3d947463e9e9756a253801e5cc4466536ecc: 29dc3d947463e9e9756a253801e5cc4466536ecc
  xe-pw-153341v5: 153341v5

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-153341v5/index.html

[-- Attachment #2: Type: text/html, Size: 51704 bytes --]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v5 4/8] drm/xe: Handle xe_user creation and removal
  2025-10-06 14:20 ` [PATCH v5 4/8] drm/xe: Handle xe_user creation and removal Aakash Deep Sarkar
@ 2025-10-06 20:49   ` Matthew Brost
  2025-10-06 21:00     ` Matthew Brost
  0 siblings, 1 reply; 17+ messages in thread
From: Matthew Brost @ 2025-10-06 20:49 UTC (permalink / raw)
  To: Aakash Deep Sarkar
  Cc: intel-xe, jeevaka.badrappan, rodrigo.vivi, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit

On Mon, Oct 06, 2025 at 02:20:25PM +0000, Aakash Deep Sarkar wrote:
> We want our xe user structure to be created when a new
> user id opens the xe device node and to be destroyed
> when the final xe file with this uid is closed. In other
> words the xe_user structure for a uid should remain in
> scope as long as any process with this uid has an open
> xe file descriptor.
> 
> To implement this we maintain an xarray of xe user
> structures inside our xe device instance. Whenever a new
> xe file is created via an open call, we check if the
> calling process' uid is already present in our xarray.
> If so, we increment the refcount for the associated
> xe user and add this xe file to the list of xe files
> belonging to this xe user. Otherwise, we allocate a
> new xe user structure for this uid and initialize its
> file list with this xe file.
> 
> Whenever an xe file is destroyed, we decrement the
> refcount of the associated xe user. When the last
> xe file in the xe user's file list is destroyed,
> the xe user refcount should drop to zero and the
> xe user should be cleaned up. During the cleanup path
> we remove the xarray entry for this xe user in our
> xe device and free up its memory.
> 
> Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_device.c       | 21 ++++++++
>  drivers/gpu/drm/xe/xe_device_types.h | 16 ++++++
>  drivers/gpu/drm/xe/xe_user.c         | 77 +++++++++++++++++++++++++++-
>  drivers/gpu/drm/xe/xe_user.h         | 11 +++-
>  4 files changed, 123 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 386940323630..5a084fd39876 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -65,6 +65,7 @@
>  #include "xe_tile.h"
>  #include "xe_ttm_stolen_mgr.h"
>  #include "xe_ttm_sys_mgr.h"
> +#include "xe_user.h"
>  #include "xe_vm.h"
>  #include "xe_vm_madvise.h"
>  #include "xe_vram.h"
> @@ -82,7 +83,9 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
>  	struct xe_drm_client *client;
>  	struct xe_file *xef;
>  	int ret = -ENOMEM;
> +	int uid = -EINVAL;
>  	struct task_struct *task = NULL;
> +	const struct cred *cred = NULL;
>  
>  	xef = kzalloc(sizeof(*xef), GFP_KERNEL);
>  	if (!xef)
> @@ -107,8 +110,16 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
>  	file->driver_priv = xef;
>  	kref_init(&xef->refcount);
>  
> +	INIT_LIST_HEAD(&xef->user_link);
> +
>  	task = get_pid_task(rcu_access_pointer(file->pid), PIDTYPE_PID);
>  	if (task) {
> +		cred = get_task_cred(task);
> +		if (cred) {
> +			uid = (unsigned int) cred->euid.val;
> +			xe_user_init(xe, xef, uid);
> +			put_cred(cred);
> +		}
>  		xef->process_name = kstrdup(task->comm, GFP_KERNEL);
>  		xef->pid = task->pid;
>  		put_task_struct(task);
> @@ -128,6 +139,12 @@ static void xe_file_destroy(struct kref *ref)
>  
>  	xe_drm_client_put(xef->client);
>  	kfree(xef->process_name);
> +
> +	mutex_lock(&xef->user->filelist_lock);
> +	list_del(&xef->user_link);
> +	mutex_unlock(&xef->user->filelist_lock);
> +
> +	xe_user_put(xef->user);
>  	kfree(xef);
>  }
>  
> @@ -467,6 +484,10 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
>  
>  	xa_init_flags(&xe->usm.asid_to_vm, XA_FLAGS_ALLOC);
>  
> +	xa_init_flags(&xe->work_period.users, XA_FLAGS_ALLOC1);
> +
> +	mutex_init(&xe->work_period.lock);
> +
>  	if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) {
>  		/* Trigger a large asid and an early asid wrap. */
>  		u32 asid;
> diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
> index 54a612787289..4d4e9a63b3fd 100644
> --- a/drivers/gpu/drm/xe/xe_device_types.h
> +++ b/drivers/gpu/drm/xe/xe_device_types.h
> @@ -613,6 +613,16 @@ struct xe_device {
>  	atomic_t g2g_test_count;
>  #endif
>  
> +	/**
> +	 * @xe_work_period: Support for GPU work period tracepoint
> +	 */
> +	struct xe_work_period {
> +		/** @users: list of users that have opened this xe device */
> +		struct xarray users;
> +		/** @lock: lock protecting this structure */
> +		struct mutex lock;
> +	} work_period;
> +
>  	/* private: */
>  
>  #if IS_ENABLED(CONFIG_DRM_XE_DISPLAY)
> @@ -684,6 +694,12 @@ struct xe_file {
>  	/** @active_duration_ns: total run time in ns for this xe file */
>  	u64 active_duration_ns;
>  
> +	/** @user: pointer to struct xe_user associated with this xe file */
> +	struct xe_user *user;
> +
> +	/** @user_link: link into xe_user::filelist */
> +	struct list_head user_link;
> +
>  	/** @client: drm client */
>  	struct xe_drm_client *client;
>  
> diff --git a/drivers/gpu/drm/xe/xe_user.c b/drivers/gpu/drm/xe/xe_user.c
> index f35e18776300..cb3de75aa497 100644
> --- a/drivers/gpu/drm/xe/xe_user.c
> +++ b/drivers/gpu/drm/xe/xe_user.c
> @@ -3,6 +3,8 @@
>   * Copyright © 2025 Intel Corporation
>   */
>  
> +#include <drm/drm_drv.h>
> +
>  #include "xe_user.h"
>  
>  
> @@ -60,7 +62,7 @@
>   *
>   * Return: pointer to user struct or NULL if can't allocate
>   */
> -struct xe_user *xe_user_alloc(void)
> +static struct xe_user *xe_user_alloc(void)
>  {
>  	struct xe_user *user;
>  
> @@ -71,6 +73,7 @@ struct xe_user *xe_user_alloc(void)
>  	kref_init(&user->refcount);
>  	mutex_init(&user->filelist_lock);
>  	INIT_LIST_HEAD(&user->filelist);
> +	INIT_WORK(&user->work, work_period_worker);
>  	return user;
>  }
>  
> @@ -84,6 +87,78 @@ void __xe_user_free(struct kref *kref)
>  {
>  	struct xe_user *user =
>  		container_of(kref, struct xe_user, refcount);
> +	struct xe_device *xe = user->xe;
> +	void *lookup;
> +
> +	mutex_lock(&xe->work_period.lock);

You don't need to take work_period.lock look here. Xa have there own
locking. work_period.lock should protect the reference upon lookup,
that's it.

> +	lookup = xa_erase(&xe->work_period.users, user->id);
> +	xe_assert(xe, lookup == user);
> +	mutex_unlock(&xe->work_period.lock);
>  
> +	drm_dev_put(&user->xe->drm);
>  	kfree(user);
>  }
> +
> +static struct xe_user *xe_user_lookup(struct xe_device *xe, u32 uid)
> +{
> +	struct xe_user *user = NULL;
> +	unsigned long i;
> +
> +	mutex_lock(&xe->work_period.lock);

guard(mutex)(&xe->work_period.lock) will work better here.

> +	xa_for_each(&xe->work_period.users, i, user) {
> +		if (user->uid == uid) {
> +			xe_user_get(user);
> +			mutex_unlock(&xe->work_period.lock);
> +			return user;
> +		}
> +	}
> +	mutex_unlock(&xe->work_period.lock);
> +
> +	return NULL;
> +}
> +
> +int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid)
> +{
> +	struct xe_user *user = NULL;
> +	int ret;
> +	u32 idx;
> +	/*
> +	 * Check if the calling process/uid has already been registered
> +	 * with the xe device during a previous open call. If so then
> +	 * take a reference to this xe user and add this xe file to the
> +	 * filelist belonging to this xe user
> +	 */
> +	user = xe_user_lookup(xe, uid);
> +	if (!user) {
> +		/*
> +		 * We couldn't find an existing xe user for the calling process.
> +		 * Allocate a new struct xe_user and register it with this xe
> +		 * device
> +		 */
> +		user = xe_user_alloc();
> +		if (!user)
> +			return -ENOMEM;
> +
> +
> +		user->uid = uid;
> +		user->last_timestamp_ns = ktime_get_raw_ns();
> +		user->xe = xe;
> +
> +		mutex_lock(&xe->work_period.lock);
> +		ret = xa_alloc(&xe->work_period.users, &idx, user, xa_limit_32b, GFP_KERNEL);

You don't lock here either.

Matt

> +		mutex_unlock(&xe->work_period.lock);
> +
> +		if (ret < 0)
> +			return ret;
> +
> +		user->id = idx;
> +		drm_dev_get(&xe->drm);
> +	}
> +
> +	mutex_lock(&user->filelist_lock);
> +	list_add(&xef->user_link, &user->filelist);
> +	mutex_unlock(&user->filelist_lock);
> +	xef->user = user;
> +
> +	return 0;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
> index 9628cc628a37..341200c55509 100644
> --- a/drivers/gpu/drm/xe/xe_user.h
> +++ b/drivers/gpu/drm/xe/xe_user.h
> @@ -6,6 +6,9 @@
>  #ifndef _XE_USER_H_
>  #define _XE_USER_H_
>  
> +#include "xe_device.h"
> +
> +
>  /**
>   * struct xe_user - xe user structure
>   *
> @@ -40,6 +43,11 @@ struct xe_user {
>  	 */
>  	struct work_struct work;
>  
> +	/**
> +	 * @id: index of this user into the xe device::users xarray
> +	 */
> +	u32 id;
> +
>  	/**
>  	 * @uid: UID of this xe_user
>  	 */
> @@ -58,7 +66,8 @@ struct xe_user {
>  	u64 last_timestamp_ns;
>  };
>  
> -struct xe_user *xe_user_alloc(void);
> +int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid);
> +
>  
>  static inline struct xe_user *
>  xe_user_get(struct xe_user *user)
> -- 
> 2.49.0
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v5 4/8] drm/xe: Handle xe_user creation and removal
  2025-10-06 20:49   ` Matthew Brost
@ 2025-10-06 21:00     ` Matthew Brost
  0 siblings, 0 replies; 17+ messages in thread
From: Matthew Brost @ 2025-10-06 21:00 UTC (permalink / raw)
  To: Aakash Deep Sarkar
  Cc: intel-xe, jeevaka.badrappan, rodrigo.vivi, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit

On Mon, Oct 06, 2025 at 01:49:32PM -0700, Matthew Brost wrote:
> On Mon, Oct 06, 2025 at 02:20:25PM +0000, Aakash Deep Sarkar wrote:
> > We want our xe user structure to be created when a new
> > user id opens the xe device node and to be destroyed
> > when the final xe file with this uid is closed. In other
> > words the xe_user structure for a uid should remain in
> > scope as long as any process with this uid has an open
> > xe file descriptor.
> > 
> > To implement this we maintain an xarray of xe user
> > structures inside our xe device instance. Whenever a new
> > xe file is created via an open call, we check if the
> > calling process' uid is already present in our xarray.
> > If so, we increment the refcount for the associated
> > xe user and add this xe file to the list of xe files
> > belonging to this xe user. Otherwise, we allocate a
> > new xe user structure for this uid and initialize its
> > file list with this xe file.
> > 
> > Whenever an xe file is destroyed, we decrement the
> > refcount of the associated xe user. When the last
> > xe file in the xe user's file list is destroyed,
> > the xe user refcount should drop to zero and the
> > xe user should be cleaned up. During the cleanup path
> > we remove the xarray entry for this xe user in our
> > xe device and free up its memory.
> > 
> > Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_device.c       | 21 ++++++++
> >  drivers/gpu/drm/xe/xe_device_types.h | 16 ++++++
> >  drivers/gpu/drm/xe/xe_user.c         | 77 +++++++++++++++++++++++++++-
> >  drivers/gpu/drm/xe/xe_user.h         | 11 +++-
> >  4 files changed, 123 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> > index 386940323630..5a084fd39876 100644
> > --- a/drivers/gpu/drm/xe/xe_device.c
> > +++ b/drivers/gpu/drm/xe/xe_device.c
> > @@ -65,6 +65,7 @@
> >  #include "xe_tile.h"
> >  #include "xe_ttm_stolen_mgr.h"
> >  #include "xe_ttm_sys_mgr.h"
> > +#include "xe_user.h"
> >  #include "xe_vm.h"
> >  #include "xe_vm_madvise.h"
> >  #include "xe_vram.h"
> > @@ -82,7 +83,9 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
> >  	struct xe_drm_client *client;
> >  	struct xe_file *xef;
> >  	int ret = -ENOMEM;
> > +	int uid = -EINVAL;
> >  	struct task_struct *task = NULL;
> > +	const struct cred *cred = NULL;
> >  
> >  	xef = kzalloc(sizeof(*xef), GFP_KERNEL);
> >  	if (!xef)
> > @@ -107,8 +110,16 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
> >  	file->driver_priv = xef;
> >  	kref_init(&xef->refcount);
> >  
> > +	INIT_LIST_HEAD(&xef->user_link);
> > +
> >  	task = get_pid_task(rcu_access_pointer(file->pid), PIDTYPE_PID);
> >  	if (task) {
> > +		cred = get_task_cred(task);
> > +		if (cred) {
> > +			uid = (unsigned int) cred->euid.val;
> > +			xe_user_init(xe, xef, uid);
> > +			put_cred(cred);
> > +		}
> >  		xef->process_name = kstrdup(task->comm, GFP_KERNEL);
> >  		xef->pid = task->pid;
> >  		put_task_struct(task);
> > @@ -128,6 +139,12 @@ static void xe_file_destroy(struct kref *ref)
> >  
> >  	xe_drm_client_put(xef->client);
> >  	kfree(xef->process_name);
> > +
> > +	mutex_lock(&xef->user->filelist_lock);
> > +	list_del(&xef->user_link);
> > +	mutex_unlock(&xef->user->filelist_lock);
> > +
> > +	xe_user_put(xef->user);
> >  	kfree(xef);
> >  }
> >  
> > @@ -467,6 +484,10 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
> >  
> >  	xa_init_flags(&xe->usm.asid_to_vm, XA_FLAGS_ALLOC);
> >  
> > +	xa_init_flags(&xe->work_period.users, XA_FLAGS_ALLOC1);
> > +
> > +	mutex_init(&xe->work_period.lock);
> > +
> >  	if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) {
> >  		/* Trigger a large asid and an early asid wrap. */
> >  		u32 asid;
> > diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
> > index 54a612787289..4d4e9a63b3fd 100644
> > --- a/drivers/gpu/drm/xe/xe_device_types.h
> > +++ b/drivers/gpu/drm/xe/xe_device_types.h
> > @@ -613,6 +613,16 @@ struct xe_device {
> >  	atomic_t g2g_test_count;
> >  #endif
> >  
> > +	/**
> > +	 * @xe_work_period: Support for GPU work period tracepoint
> > +	 */
> > +	struct xe_work_period {
> > +		/** @users: list of users that have opened this xe device */
> > +		struct xarray users;
> > +		/** @lock: lock protecting this structure */
> > +		struct mutex lock;
> > +	} work_period;
> > +
> >  	/* private: */
> >  
> >  #if IS_ENABLED(CONFIG_DRM_XE_DISPLAY)
> > @@ -684,6 +694,12 @@ struct xe_file {
> >  	/** @active_duration_ns: total run time in ns for this xe file */
> >  	u64 active_duration_ns;
> >  
> > +	/** @user: pointer to struct xe_user associated with this xe file */
> > +	struct xe_user *user;
> > +
> > +	/** @user_link: link into xe_user::filelist */
> > +	struct list_head user_link;
> > +
> >  	/** @client: drm client */
> >  	struct xe_drm_client *client;
> >  
> > diff --git a/drivers/gpu/drm/xe/xe_user.c b/drivers/gpu/drm/xe/xe_user.c
> > index f35e18776300..cb3de75aa497 100644
> > --- a/drivers/gpu/drm/xe/xe_user.c
> > +++ b/drivers/gpu/drm/xe/xe_user.c
> > @@ -3,6 +3,8 @@
> >   * Copyright © 2025 Intel Corporation
> >   */
> >  
> > +#include <drm/drm_drv.h>
> > +
> >  #include "xe_user.h"
> >  
> >  
> > @@ -60,7 +62,7 @@
> >   *
> >   * Return: pointer to user struct or NULL if can't allocate
> >   */
> > -struct xe_user *xe_user_alloc(void)
> > +static struct xe_user *xe_user_alloc(void)
> >  {
> >  	struct xe_user *user;
> >  
> > @@ -71,6 +73,7 @@ struct xe_user *xe_user_alloc(void)
> >  	kref_init(&user->refcount);
> >  	mutex_init(&user->filelist_lock);
> >  	INIT_LIST_HEAD(&user->filelist);
> > +	INIT_WORK(&user->work, work_period_worker);
> >  	return user;
> >  }
> >  
> > @@ -84,6 +87,78 @@ void __xe_user_free(struct kref *kref)
> >  {
> >  	struct xe_user *user =
> >  		container_of(kref, struct xe_user, refcount);
> > +	struct xe_device *xe = user->xe;
> > +	void *lookup;
> > +
> > +	mutex_lock(&xe->work_period.lock);
> 
> You don't need to take work_period.lock look here. Xa have there own
> locking. work_period.lock should protect the reference upon lookup,
> that's it.
> 

Actually I mis-spoke here. You do need this part with how you have it
coded but something looks very odd with the work_period.lock / xarray /
ref counting / waiting on workers under work_period.lock scheme.

Let me think on this part for a bit.

Matt

> > +	lookup = xa_erase(&xe->work_period.users, user->id);
> > +	xe_assert(xe, lookup == user);
> > +	mutex_unlock(&xe->work_period.lock);
> >  
> > +	drm_dev_put(&user->xe->drm);
> >  	kfree(user);
> >  }
> > +
> > +static struct xe_user *xe_user_lookup(struct xe_device *xe, u32 uid)
> > +{
> > +	struct xe_user *user = NULL;
> > +	unsigned long i;
> > +
> > +	mutex_lock(&xe->work_period.lock);
> 
> guard(mutex)(&xe->work_period.lock) will work better here.
> 
> > +	xa_for_each(&xe->work_period.users, i, user) {
> > +		if (user->uid == uid) {
> > +			xe_user_get(user);
> > +			mutex_unlock(&xe->work_period.lock);
> > +			return user;
> > +		}
> > +	}
> > +	mutex_unlock(&xe->work_period.lock);
> > +
> > +	return NULL;
> > +}
> > +
> > +int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid)
> > +{
> > +	struct xe_user *user = NULL;
> > +	int ret;
> > +	u32 idx;
> > +	/*
> > +	 * Check if the calling process/uid has already been registered
> > +	 * with the xe device during a previous open call. If so then
> > +	 * take a reference to this xe user and add this xe file to the
> > +	 * filelist belonging to this xe user
> > +	 */
> > +	user = xe_user_lookup(xe, uid);
> > +	if (!user) {
> > +		/*
> > +		 * We couldn't find an existing xe user for the calling process.
> > +		 * Allocate a new struct xe_user and register it with this xe
> > +		 * device
> > +		 */
> > +		user = xe_user_alloc();
> > +		if (!user)
> > +			return -ENOMEM;
> > +
> > +
> > +		user->uid = uid;
> > +		user->last_timestamp_ns = ktime_get_raw_ns();
> > +		user->xe = xe;
> > +
> > +		mutex_lock(&xe->work_period.lock);
> > +		ret = xa_alloc(&xe->work_period.users, &idx, user, xa_limit_32b, GFP_KERNEL);
> 
> You don't lock here either.
> 
> Matt
> 
> > +		mutex_unlock(&xe->work_period.lock);
> > +
> > +		if (ret < 0)
> > +			return ret;
> > +
> > +		user->id = idx;
> > +		drm_dev_get(&xe->drm);
> > +	}
> > +
> > +	mutex_lock(&user->filelist_lock);
> > +	list_add(&xef->user_link, &user->filelist);
> > +	mutex_unlock(&user->filelist_lock);
> > +	xef->user = user;
> > +
> > +	return 0;
> > +}
> > diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
> > index 9628cc628a37..341200c55509 100644
> > --- a/drivers/gpu/drm/xe/xe_user.h
> > +++ b/drivers/gpu/drm/xe/xe_user.h
> > @@ -6,6 +6,9 @@
> >  #ifndef _XE_USER_H_
> >  #define _XE_USER_H_
> >  
> > +#include "xe_device.h"
> > +
> > +
> >  /**
> >   * struct xe_user - xe user structure
> >   *
> > @@ -40,6 +43,11 @@ struct xe_user {
> >  	 */
> >  	struct work_struct work;
> >  
> > +	/**
> > +	 * @id: index of this user into the xe device::users xarray
> > +	 */
> > +	u32 id;
> > +
> >  	/**
> >  	 * @uid: UID of this xe_user
> >  	 */
> > @@ -58,7 +66,8 @@ struct xe_user {
> >  	u64 last_timestamp_ns;
> >  };
> >  
> > -struct xe_user *xe_user_alloc(void);
> > +int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid);
> > +
> >  
> >  static inline struct xe_user *
> >  xe_user_get(struct xe_user *user)
> > -- 
> > 2.49.0
> > 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v5 5/8] drm/xe: Implement xe_work_period_worker
  2025-10-06 14:20 ` [PATCH v5 5/8] drm/xe: Implement xe_work_period_worker Aakash Deep Sarkar
@ 2025-10-06 21:12   ` Matthew Brost
  2025-10-06 21:38     ` Matthew Brost
  0 siblings, 1 reply; 17+ messages in thread
From: Matthew Brost @ 2025-10-06 21:12 UTC (permalink / raw)
  To: Aakash Deep Sarkar
  Cc: intel-xe, jeevaka.badrappan, rodrigo.vivi, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit

On Mon, Oct 06, 2025 at 02:20:26PM +0000, Aakash Deep Sarkar wrote:
> The work of collecting the GPU run time for a given
> xe_user and emitting its event, is done by the
> xe_work_period_worker kworker. At the time of creation
> of a new xe_user, we simultaneously start a delayed
> kworker thread. The delay of execution is set to be
> 500 ms. After the completion of the work, the kworker
> schedules itself for the next execution. This is done
> as long as the reference to the xe_user pointer is
> valid.
> 
> During each execution cycle the xe_work_period_worker
> iterates over all the xe files in the xe_user::filelist
> and accumulate their corresponding GPU runtime into the
> xe_user::active_duration_ns; while also updating each of
> the xe_file::active_duration_ns. The total runtime for
> this uid in the current sampling period is the delta
> between the previous xe_user::active_duration_ns and
> the current xe_user::active_duration_ns.
> 
> We also record the current timestamp at the end of each
> invocation to xe_work_period_worker function in the
> xe_user::last_timestamp_ns. The sampling period for this
> uid is the delta between the previous timestamp and the
> current timestamp.
> 
> Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_device.c |  11 +--
>  drivers/gpu/drm/xe/xe_pm.c     |   5 ++
>  drivers/gpu/drm/xe/xe_user.c   | 127 +++++++++++++++++++++++++++++++--
>  drivers/gpu/drm/xe/xe_user.h   |  19 ++++-
>  4 files changed, 150 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 5a084fd39876..54ac71d1265d 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -140,11 +140,12 @@ static void xe_file_destroy(struct kref *ref)
>  	xe_drm_client_put(xef->client);
>  	kfree(xef->process_name);
>  
> -	mutex_lock(&xef->user->filelist_lock);
> -	list_del(&xef->user_link);
> -	mutex_unlock(&xef->user->filelist_lock);
> -
> -	xe_user_put(xef->user);
> +	if (xef->user) {
> +		mutex_lock(&xef->user->lock);
> +		list_del(&xef->user_link);
> +		xe_user_put(xef->user);
> +		mutex_unlock(&xef->user->lock);
> +	}
>  	kfree(xef);
>  }
>  
> diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> index b7e3094f8acf..c7add2616189 100644
> --- a/drivers/gpu/drm/xe/xe_pm.c
> +++ b/drivers/gpu/drm/xe/xe_pm.c
> @@ -26,6 +26,7 @@
>  #include "xe_pxp.h"
>  #include "xe_sriov_vf_ccs.h"
>  #include "xe_trace.h"
> +#include "xe_user.h"
>  #include "xe_vm.h"
>  #include "xe_wa.h"
>  
> @@ -598,6 +599,8 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
>  
>  	xe_i2c_pm_suspend(xe);
>  
> +	xe_user_cancel_workers(xe);
> +
>  	xe_rpm_lockmap_release(xe);
>  	xe_pm_write_callback_task(xe, NULL);
>  	return 0;
> @@ -650,6 +653,8 @@ int xe_pm_runtime_resume(struct xe_device *xe)
>  
>  	xe_i2c_pm_resume(xe, xe->d3cold.allowed);
>  
> +	xe_user_resume_workers(xe);
> +
>  	xe_irq_resume(xe);
>  
>  	for_each_gt(gt, xe, id)
> diff --git a/drivers/gpu/drm/xe/xe_user.c b/drivers/gpu/drm/xe/xe_user.c
> index cb3de75aa497..fb54d2659642 100644
> --- a/drivers/gpu/drm/xe/xe_user.c
> +++ b/drivers/gpu/drm/xe/xe_user.c
> @@ -5,8 +5,15 @@
>  
>  #include <drm/drm_drv.h>
>  
> +#include "xe_assert.h"
> +#include "xe_device_types.h"
> +#include "xe_exec_queue.h"
> +#include "xe_pm.h"
>  #include "xe_user.h"
>  
> +#define CREATE_TRACE_POINTS
> +#include <trace/gpu_work_period.h>
> +
>  
>  /**
>   * DOC: Xe User
> @@ -50,7 +57,82 @@
>   */
>  
>  
> +static inline void schedule_next_work(struct xe_device *xe, unsigned int id)
> +{
> +	struct xe_user *user;
> +
> +	mutex_lock(&xe->work_period.lock);
> +	user = xa_load(&xe->work_period.users, id);
> +	if (user && xe_user_get_unless_zero(user))
> +		schedule_delayed_work(&user->delay_work,
> +				msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL));
> +	mutex_unlock(&xe->work_period.lock);
> +}
> +
> +static void xe_work_period_worker(struct work_struct *work)
> +{
> +	struct xe_user *user = container_of(work, struct xe_user, delay_work.work);
> +	struct xe_device *xe = user->xe;
> +	struct xe_file *xef;
> +	struct xe_exec_queue *q;
> +
> +	/*
> +	 * The GPU work period event requires the following parameters
> +	 *
> +	 * gpuid:           GPU index in case the platform has more than one GPU
> +	 * uid:             user id of the app
> +	 * start_time:      start time for the sampling period in nanosecs
> +	 * end_time:        end time for the sampling period in nanosecs
> +	 * active_duration: Total runtime in nanosecs for this uid in
> +	 *                  the current sampling period.
> +	 */
> +	u32 gpuid = 0, uid = user->uid, id = user->id;
> +	u64 start_time, end_time, active_duration;
> +	u64 last_active_duration, last_timestamp;
> +	unsigned long i;
> +
> +	mutex_lock(&user->lock);
> +
> +	// Save the last recorded active duration and timestamp
> +	last_active_duration = user->active_duration_ns;
> +	last_timestamp = user->last_timestamp_ns;
> +
> +	if (xe_pm_runtime_get_if_active(xe)) {
> +
> +		list_for_each_entry(xef, &user->filelist, user_link) {
> +
> +			wait_var_event(&xef->exec_queue.pending_removal,
> +			!atomic_read(&xef->exec_queue.pending_removal));
> +
> +			/* Accumulate all the exec queues from this file */
> +			mutex_lock(&xef->exec_queue.lock);
> +			xa_for_each(&xef->exec_queue.xa, i, q) {
> +				xe_exec_queue_get(q);
> +				mutex_unlock(&xef->exec_queue.lock);
> +
> +				xe_exec_queue_update_run_ticks(q);
> +
> +				mutex_lock(&xef->exec_queue.lock);
> +				xe_exec_queue_put(q);
> +			}
> +			mutex_unlock(&xef->exec_queue.lock);
> +			user->active_duration_ns += xef->active_duration_ns;
> +		}
> +
> +		xe_pm_runtime_put(xe);
> +
> +		start_time = last_timestamp + 1;
> +		end_time = ktime_get_raw_ns();
> +		active_duration = user->active_duration_ns - last_active_duration;
> +		trace_gpu_work_period(gpuid, uid, start_time, end_time, active_duration);
> +		user->last_timestamp_ns = end_time;
> +		xe_user_put(user);
> +	}
> +
> +	mutex_unlock(&user->lock);
>  
> +	schedule_next_work(xe, id);
> +}
>  
>  /**
>   * xe_user_alloc() - Allocate xe user
> @@ -71,9 +153,9 @@ static struct xe_user *xe_user_alloc(void)
>  		return NULL;
>  
>  	kref_init(&user->refcount);
> -	mutex_init(&user->filelist_lock);
> +	mutex_init(&user->lock);
>  	INIT_LIST_HEAD(&user->filelist);
> -	INIT_WORK(&user->work, work_period_worker);
> +	INIT_DELAYED_WORK(&user->delay_work, xe_work_period_worker);
>  	return user;
>  }
>  
> @@ -153,12 +235,49 @@ int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid)
>  
>  		user->id = idx;
>  		drm_dev_get(&xe->drm);
> +
> +		xe_user_get(user);
> +		if (!schedule_delayed_work(&user->delay_work,
> +					msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL)))
> +			xe_user_put(user);
>  	}
>  
> -	mutex_lock(&user->filelist_lock);
> +	mutex_lock(&user->lock);
>  	list_add(&xef->user_link, &user->filelist);
> -	mutex_unlock(&user->filelist_lock);
> +	mutex_unlock(&user->lock);
>  	xef->user = user;
>  
>  	return 0;
>  }
> +
> +void xe_user_cancel_workers(struct xe_device *xe)
> +{
> +	struct xe_user *user = NULL;
> +	unsigned long i = 0;
> +
> +	mutex_lock(&xe->work_period.lock);
> +	xa_for_each(&xe->work_period.users, i, user) {
> +		if (user && xe_user_get_unless_zero(user)) {
> +			cancel_delayed_work_sync(&user->delay_work);
> +			xe_user_put(user);


Here’s where this looks problematic:

- Calling cancel_delayed_work_sync while holding a lock creates a locking
  chain between work_period.lock and every lock acquired in
  &user->delay_work, which is a pretty risky thing to do.

- __xe_user_free acquires xe->work_period.lock, so if xe_user_put is the 
  final reference drop, it could lead to a deadlock.

At a minimum, you need to release xe->work_period.lock inside the if
statement. Ideally, you should reconsider the entire locking strategy.

Matt

> +		}
> +	}
> +	mutex_unlock(&xe->work_period.lock);
> +}
> +
> +void xe_user_resume_workers(struct xe_device *xe)
> +{
> +	struct xe_user *user = NULL;
> +	unsigned long i = 0;
> +
> +	mutex_lock(&xe->work_period.lock);
> +	xa_for_each(&xe->work_period.users, i, user) {
> +		if (user && xe_user_get_unless_zero(user)) {
> +			if (!schedule_delayed_work(&user->delay_work,
> +					msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL)))
> +				xe_user_put(user);
> +		}
> +	}
> +	mutex_unlock(&xe->work_period.lock);
> +}
> +
> diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
> index 341200c55509..55016ba189f1 100644
> --- a/drivers/gpu/drm/xe/xe_user.h
> +++ b/drivers/gpu/drm/xe/xe_user.h
> @@ -9,6 +9,8 @@
>  #include "xe_device.h"
>  
>  
> +#define XE_WORK_PERIOD_INTERVAL 500
> +
>  /**
>   * struct xe_user - xe user structure
>   *
> @@ -28,9 +30,9 @@ struct xe_user {
>  	struct xe_device *xe;
>  
>  	/**
> -	 * @filelist_lock: lock protecting the filelist
> +	 * @filelist_lock: lock protecting this structure
>  	 */
> -	struct mutex filelist_lock;
> +	struct mutex lock;
>  
>  	/**
>  	 * @filelist: list of xe files belonging to this xe user
> @@ -41,7 +43,7 @@ struct xe_user {
>  	 * @work: work to emit the gpu work period event for this
>  	 * xe user
>  	 */
> -	struct work_struct work;
> +	struct delayed_work delay_work;
>  
>  	/**
>  	 * @id: index of this user into the xe device::users xarray
> @@ -68,6 +70,17 @@ struct xe_user {
>  
>  int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid);
>  
> +void xe_user_cancel_workers(struct xe_device *xe);
> +
> +void xe_user_resume_workers(struct xe_device *xe);
> +
> +static inline struct xe_user *
> +xe_user_get_unless_zero(struct xe_user *user)
> +{
> +	if (kref_get_unless_zero(&user->refcount))
> +		return user;
> +	return NULL;
> +}
>  
>  static inline struct xe_user *
>  xe_user_get(struct xe_user *user)
> -- 
> 2.49.0
> 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v5 5/8] drm/xe: Implement xe_work_period_worker
  2025-10-06 21:12   ` Matthew Brost
@ 2025-10-06 21:38     ` Matthew Brost
  0 siblings, 0 replies; 17+ messages in thread
From: Matthew Brost @ 2025-10-06 21:38 UTC (permalink / raw)
  To: Aakash Deep Sarkar
  Cc: intel-xe, jeevaka.badrappan, rodrigo.vivi, carlos.santa,
	matthew.auld, jani.nikula, ashutosh.dixit

On Mon, Oct 06, 2025 at 02:12:45PM -0700, Matthew Brost wrote:
> On Mon, Oct 06, 2025 at 02:20:26PM +0000, Aakash Deep Sarkar wrote:
> > The work of collecting the GPU run time for a given
> > xe_user and emitting its event, is done by the
> > xe_work_period_worker kworker. At the time of creation
> > of a new xe_user, we simultaneously start a delayed
> > kworker thread. The delay of execution is set to be
> > 500 ms. After the completion of the work, the kworker
> > schedules itself for the next execution. This is done
> > as long as the reference to the xe_user pointer is
> > valid.
> > 
> > During each execution cycle the xe_work_period_worker
> > iterates over all the xe files in the xe_user::filelist
> > and accumulate their corresponding GPU runtime into the
> > xe_user::active_duration_ns; while also updating each of
> > the xe_file::active_duration_ns. The total runtime for
> > this uid in the current sampling period is the delta
> > between the previous xe_user::active_duration_ns and
> > the current xe_user::active_duration_ns.
> > 
> > We also record the current timestamp at the end of each
> > invocation to xe_work_period_worker function in the
> > xe_user::last_timestamp_ns. The sampling period for this
> > uid is the delta between the previous timestamp and the
> > current timestamp.
> > 
> > Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_device.c |  11 +--
> >  drivers/gpu/drm/xe/xe_pm.c     |   5 ++
> >  drivers/gpu/drm/xe/xe_user.c   | 127 +++++++++++++++++++++++++++++++--
> >  drivers/gpu/drm/xe/xe_user.h   |  19 ++++-
> >  4 files changed, 150 insertions(+), 12 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> > index 5a084fd39876..54ac71d1265d 100644
> > --- a/drivers/gpu/drm/xe/xe_device.c
> > +++ b/drivers/gpu/drm/xe/xe_device.c
> > @@ -140,11 +140,12 @@ static void xe_file_destroy(struct kref *ref)
> >  	xe_drm_client_put(xef->client);
> >  	kfree(xef->process_name);
> >  
> > -	mutex_lock(&xef->user->filelist_lock);
> > -	list_del(&xef->user_link);
> > -	mutex_unlock(&xef->user->filelist_lock);
> > -
> > -	xe_user_put(xef->user);
> > +	if (xef->user) {
> > +		mutex_lock(&xef->user->lock);
> > +		list_del(&xef->user_link);
> > +		xe_user_put(xef->user);

You also have a potential lock inversion in the current code.

There appears to be a possible chain of:

- user->lock -> xe->work_period.users if xe_user_put() is the final put.

However, cancel_delayed_work_sync() is called under
xe->work_period.users below, and xe_work_period_worker() takes
user->lock, which is the inverse order.

I don’t think it’s actually possible to trigger the inversion due to the
reference counting, but it’s still quite concerning.

It would be best to avoid calling xe_user_put() while holding
xef->user->lock.

Also, if you can’t come up with a better reference counting or xarray
scheme for xef->user, I’d suggest adding a
might_lock(&xe->work_period.users) to xe_user_put() so lockdep
immediately knows what xe_user_put() can do on the final put.

Matt

> > +		mutex_unlock(&xef->user->lock);
> > +	}
> >  	kfree(xef);
> >  }
> >  
> > diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> > index b7e3094f8acf..c7add2616189 100644
> > --- a/drivers/gpu/drm/xe/xe_pm.c
> > +++ b/drivers/gpu/drm/xe/xe_pm.c
> > @@ -26,6 +26,7 @@
> >  #include "xe_pxp.h"
> >  #include "xe_sriov_vf_ccs.h"
> >  #include "xe_trace.h"
> > +#include "xe_user.h"
> >  #include "xe_vm.h"
> >  #include "xe_wa.h"
> >  
> > @@ -598,6 +599,8 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
> >  
> >  	xe_i2c_pm_suspend(xe);
> >  
> > +	xe_user_cancel_workers(xe);
> > +
> >  	xe_rpm_lockmap_release(xe);
> >  	xe_pm_write_callback_task(xe, NULL);
> >  	return 0;
> > @@ -650,6 +653,8 @@ int xe_pm_runtime_resume(struct xe_device *xe)
> >  
> >  	xe_i2c_pm_resume(xe, xe->d3cold.allowed);
> >  
> > +	xe_user_resume_workers(xe);
> > +
> >  	xe_irq_resume(xe);
> >  
> >  	for_each_gt(gt, xe, id)
> > diff --git a/drivers/gpu/drm/xe/xe_user.c b/drivers/gpu/drm/xe/xe_user.c
> > index cb3de75aa497..fb54d2659642 100644
> > --- a/drivers/gpu/drm/xe/xe_user.c
> > +++ b/drivers/gpu/drm/xe/xe_user.c
> > @@ -5,8 +5,15 @@
> >  
> >  #include <drm/drm_drv.h>
> >  
> > +#include "xe_assert.h"
> > +#include "xe_device_types.h"
> > +#include "xe_exec_queue.h"
> > +#include "xe_pm.h"
> >  #include "xe_user.h"
> >  
> > +#define CREATE_TRACE_POINTS
> > +#include <trace/gpu_work_period.h>
> > +
> >  
> >  /**
> >   * DOC: Xe User
> > @@ -50,7 +57,82 @@
> >   */
> >  
> >  
> > +static inline void schedule_next_work(struct xe_device *xe, unsigned int id)
> > +{
> > +	struct xe_user *user;
> > +
> > +	mutex_lock(&xe->work_period.lock);
> > +	user = xa_load(&xe->work_period.users, id);
> > +	if (user && xe_user_get_unless_zero(user))
> > +		schedule_delayed_work(&user->delay_work,
> > +				msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL));
> > +	mutex_unlock(&xe->work_period.lock);
> > +}
> > +
> > +static void xe_work_period_worker(struct work_struct *work)
> > +{
> > +	struct xe_user *user = container_of(work, struct xe_user, delay_work.work);
> > +	struct xe_device *xe = user->xe;
> > +	struct xe_file *xef;
> > +	struct xe_exec_queue *q;
> > +
> > +	/*
> > +	 * The GPU work period event requires the following parameters
> > +	 *
> > +	 * gpuid:           GPU index in case the platform has more than one GPU
> > +	 * uid:             user id of the app
> > +	 * start_time:      start time for the sampling period in nanosecs
> > +	 * end_time:        end time for the sampling period in nanosecs
> > +	 * active_duration: Total runtime in nanosecs for this uid in
> > +	 *                  the current sampling period.
> > +	 */
> > +	u32 gpuid = 0, uid = user->uid, id = user->id;
> > +	u64 start_time, end_time, active_duration;
> > +	u64 last_active_duration, last_timestamp;
> > +	unsigned long i;
> > +
> > +	mutex_lock(&user->lock);
> > +
> > +	// Save the last recorded active duration and timestamp
> > +	last_active_duration = user->active_duration_ns;
> > +	last_timestamp = user->last_timestamp_ns;
> > +
> > +	if (xe_pm_runtime_get_if_active(xe)) {
> > +
> > +		list_for_each_entry(xef, &user->filelist, user_link) {
> > +
> > +			wait_var_event(&xef->exec_queue.pending_removal,
> > +			!atomic_read(&xef->exec_queue.pending_removal));
> > +
> > +			/* Accumulate all the exec queues from this file */
> > +			mutex_lock(&xef->exec_queue.lock);
> > +			xa_for_each(&xef->exec_queue.xa, i, q) {
> > +				xe_exec_queue_get(q);
> > +				mutex_unlock(&xef->exec_queue.lock);
> > +
> > +				xe_exec_queue_update_run_ticks(q);
> > +
> > +				mutex_lock(&xef->exec_queue.lock);
> > +				xe_exec_queue_put(q);
> > +			}
> > +			mutex_unlock(&xef->exec_queue.lock);
> > +			user->active_duration_ns += xef->active_duration_ns;
> > +		}
> > +
> > +		xe_pm_runtime_put(xe);
> > +
> > +		start_time = last_timestamp + 1;
> > +		end_time = ktime_get_raw_ns();
> > +		active_duration = user->active_duration_ns - last_active_duration;
> > +		trace_gpu_work_period(gpuid, uid, start_time, end_time, active_duration);
> > +		user->last_timestamp_ns = end_time;
> > +		xe_user_put(user);
> > +	}
> > +
> > +	mutex_unlock(&user->lock);
> >  
> > +	schedule_next_work(xe, id);
> > +}
> >  
> >  /**
> >   * xe_user_alloc() - Allocate xe user
> > @@ -71,9 +153,9 @@ static struct xe_user *xe_user_alloc(void)
> >  		return NULL;
> >  
> >  	kref_init(&user->refcount);
> > -	mutex_init(&user->filelist_lock);
> > +	mutex_init(&user->lock);
> >  	INIT_LIST_HEAD(&user->filelist);
> > -	INIT_WORK(&user->work, work_period_worker);
> > +	INIT_DELAYED_WORK(&user->delay_work, xe_work_period_worker);
> >  	return user;
> >  }
> >  
> > @@ -153,12 +235,49 @@ int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid)
> >  
> >  		user->id = idx;
> >  		drm_dev_get(&xe->drm);
> > +
> > +		xe_user_get(user);
> > +		if (!schedule_delayed_work(&user->delay_work,
> > +					msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL)))
> > +			xe_user_put(user);
> >  	}
> >  
> > -	mutex_lock(&user->filelist_lock);
> > +	mutex_lock(&user->lock);
> >  	list_add(&xef->user_link, &user->filelist);
> > -	mutex_unlock(&user->filelist_lock);
> > +	mutex_unlock(&user->lock);
> >  	xef->user = user;
> >  
> >  	return 0;
> >  }
> > +
> > +void xe_user_cancel_workers(struct xe_device *xe)
> > +{
> > +	struct xe_user *user = NULL;
> > +	unsigned long i = 0;
> > +
> > +	mutex_lock(&xe->work_period.lock);
> > +	xa_for_each(&xe->work_period.users, i, user) {
> > +		if (user && xe_user_get_unless_zero(user)) {
> > +			cancel_delayed_work_sync(&user->delay_work);
> > +			xe_user_put(user);
> 
> 
> Here’s where this looks problematic:
> 
> - Calling cancel_delayed_work_sync while holding a lock creates a locking
>   chain between work_period.lock and every lock acquired in
>   &user->delay_work, which is a pretty risky thing to do.
> 
> - __xe_user_free acquires xe->work_period.lock, so if xe_user_put is the 
>   final reference drop, it could lead to a deadlock.
> 
> At a minimum, you need to release xe->work_period.lock inside the if
> statement. Ideally, you should reconsider the entire locking strategy.
> 
> Matt
> 
> > +		}
> > +	}
> > +	mutex_unlock(&xe->work_period.lock);
> > +}
> > +
> > +void xe_user_resume_workers(struct xe_device *xe)
> > +{
> > +	struct xe_user *user = NULL;
> > +	unsigned long i = 0;
> > +
> > +	mutex_lock(&xe->work_period.lock);
> > +	xa_for_each(&xe->work_period.users, i, user) {
> > +		if (user && xe_user_get_unless_zero(user)) {
> > +			if (!schedule_delayed_work(&user->delay_work,
> > +					msecs_to_jiffies(XE_WORK_PERIOD_INTERVAL)))
> > +				xe_user_put(user);
> > +		}
> > +	}
> > +	mutex_unlock(&xe->work_period.lock);
> > +}
> > +
> > diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
> > index 341200c55509..55016ba189f1 100644
> > --- a/drivers/gpu/drm/xe/xe_user.h
> > +++ b/drivers/gpu/drm/xe/xe_user.h
> > @@ -9,6 +9,8 @@
> >  #include "xe_device.h"
> >  
> >  
> > +#define XE_WORK_PERIOD_INTERVAL 500
> > +
> >  /**
> >   * struct xe_user - xe user structure
> >   *
> > @@ -28,9 +30,9 @@ struct xe_user {
> >  	struct xe_device *xe;
> >  
> >  	/**
> > -	 * @filelist_lock: lock protecting the filelist
> > +	 * @filelist_lock: lock protecting this structure
> >  	 */
> > -	struct mutex filelist_lock;
> > +	struct mutex lock;
> >  
> >  	/**
> >  	 * @filelist: list of xe files belonging to this xe user
> > @@ -41,7 +43,7 @@ struct xe_user {
> >  	 * @work: work to emit the gpu work period event for this
> >  	 * xe user
> >  	 */
> > -	struct work_struct work;
> > +	struct delayed_work delay_work;
> >  
> >  	/**
> >  	 * @id: index of this user into the xe device::users xarray
> > @@ -68,6 +70,17 @@ struct xe_user {
> >  
> >  int xe_user_init(struct xe_device *xe, struct xe_file *xef, unsigned int uid);
> >  
> > +void xe_user_cancel_workers(struct xe_device *xe);
> > +
> > +void xe_user_resume_workers(struct xe_device *xe);
> > +
> > +static inline struct xe_user *
> > +xe_user_get_unless_zero(struct xe_user *user)
> > +{
> > +	if (kref_get_unless_zero(&user->refcount))
> > +		return user;
> > +	return NULL;
> > +}
> >  
> >  static inline struct xe_user *
> >  xe_user_get(struct xe_user *user)
> > -- 
> > 2.49.0
> > 

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2025-10-06 21:39 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-10-06 14:20 [PATCH v5 0/8] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
2025-10-06 14:20 ` [PATCH v5 1/8] drm/xe: Add a new xe_user structure Aakash Deep Sarkar
2025-10-06 14:20 ` [PATCH v5 2/8] drm/xe: Add xe_gt_clock_interval_to_ns function Aakash Deep Sarkar
2025-10-06 14:20 ` [PATCH v5 3/8] drm/xe: Modify xe_exec_queue_update_run_ticks Aakash Deep Sarkar
2025-10-06 14:20 ` [PATCH v5 4/8] drm/xe: Handle xe_user creation and removal Aakash Deep Sarkar
2025-10-06 20:49   ` Matthew Brost
2025-10-06 21:00     ` Matthew Brost
2025-10-06 14:20 ` [PATCH v5 5/8] drm/xe: Implement xe_work_period_worker Aakash Deep Sarkar
2025-10-06 21:12   ` Matthew Brost
2025-10-06 21:38     ` Matthew Brost
2025-10-06 14:20 ` [PATCH v5 6/8] drm/xe: Add a Kconfig option for GPU work period Aakash Deep Sarkar
2025-10-06 14:20 ` [PATCH v5 7/8] drm/xe: Handle xe_work_period destruction Aakash Deep Sarkar
2025-10-06 14:20 ` [PATCH v5 8/8] Hack patch: Do not merge Aakash Deep Sarkar
2025-10-06 15:03 ` ✗ CI.checkpatch: warning for : Add GPU work period support for Xe driver (rev5) Patchwork
2025-10-06 15:04 ` ✓ CI.KUnit: success " Patchwork
2025-10-06 15:58 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-10-06 17:42 ` ✗ Xe.CI.Full: " Patchwork

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox