From: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: jeevaka.badrappan@intel.com, rodrigo.vivi@intel.com,
matthew.brost@intel.com, carlos.santa@intel.com,
matthew.auld@intel.com,
Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
Subject: [PATCH v4 1/9] drm/xe: Add a new xe_user structure
Date: Fri, 26 Sep 2025 10:45:12 +0000
Message-ID: <20250926104521.1815428-2-aakash.deep.sarkar@intel.com>
In-Reply-To: <20250926104521.1815428-1-aakash.deep.sarkar@intel.com>
For the Android GPU work period event we need to track the GPU
runtime for each user id. Multiple xe files may be opened by
different processes/threads belonging to the same user id, and all
of these files need to be grouped together so that they can easily
be identified when calculating the runtime for a given user id.

Currently the xe driver does not record the user id of the calling
process. In addition, all xe files created via open() are lumped
together inside the xe device structure, with no way to distinguish
between them based on the user id of the calling process.

To remedy these limitations, add another layer of indirection
between the xe device and the xe file: the xe device now holds a
list of xe users, each with a given user id, and each xe user holds
a list of xe files, each created by a process associated with that
user id.

An xe user structure lives from the moment a process with a new
user id opens the xe device until the last xe file belonging to
that user id is closed.
Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
drivers/gpu/drm/xe/Makefile | 2 +
drivers/gpu/drm/xe/xe_user.c | 59 ++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_user.h | 81 ++++++++++++++++++++++++++++++++++++
3 files changed, 142 insertions(+)
create mode 100644 drivers/gpu/drm/xe/xe_user.c
create mode 100644 drivers/gpu/drm/xe/xe_user.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index d9c6cf0f189e..ff6b584f3293 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -333,6 +333,8 @@ ifeq ($(CONFIG_DEBUG_FS),y)
xe-$(CONFIG_PCI_IOV) += xe_gt_sriov_pf_debugfs.o
+ xe-y += xe_user.o
+
xe-$(CONFIG_DRM_XE_DISPLAY) += \
i915-display/intel_display_debugfs.o \
i915-display/intel_display_debugfs_params.o \
diff --git a/drivers/gpu/drm/xe/xe_user.c b/drivers/gpu/drm/xe/xe_user.c
new file mode 100644
index 000000000000..8c285a68115a
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_user.c
@@ -0,0 +1,59 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include <linux/slab.h>
+
+#include "xe_user.h"
+
+/**
+ * work_period_worker() - Emit the GPU work period event for this xe user
+ * @work: work instance embedded in this xe user
+ */
+static void work_period_worker(struct work_struct *work)
+{
+ /* TODO: Implement this worker */
+}
+
+/**
+ * xe_user_alloc() - Allocate an xe user
+ *
+ * Allocate an xe_user struct to track activity on the GPU by the
+ * application. Call this whenever a process with a new user id
+ * opens the xe device.
+ *
+ * Return: pointer to the new xe_user, or NULL on allocation failure
+ */
+struct xe_user *xe_user_alloc(void)
+{
+ struct xe_user *user;
+
+ user = kzalloc(sizeof(*user), GFP_KERNEL);
+ if (!user)
+ return NULL;
+
+ kref_init(&user->refcount);
+ mutex_init(&user->filelist_lock);
+ INIT_LIST_HEAD(&user->filelist);
+ /* TODO: Add a hook into the xe device */
+ INIT_WORK(&user->work, work_period_worker);
+ return user;
+}
+
+/**
+ * __xe_user_free() - Free an xe user
+ * @kref: the reference count embedded in the xe user
+ *
+ * Called via xe_user_put() when the last reference is dropped;
+ * do not call this directly.
+ */
+void __xe_user_free(struct kref *kref)
+{
+ struct xe_user *user =
+ container_of(kref, struct xe_user, refcount);
+
+ mutex_destroy(&user->filelist_lock);
+ kfree(user);
+}
diff --git a/drivers/gpu/drm/xe/xe_user.h b/drivers/gpu/drm/xe/xe_user.h
new file mode 100644
index 000000000000..e52f66d3f3b0
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_user.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_USER_H_
+#define _XE_USER_H_
+
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+/**
+ * struct xe_user - per user id state for an xe device client
+ *
+ * Allocated when a process with a new user id opens the xe device
+ * and destroyed when the last xe file belonging to that user id is
+ * closed.
+ */
+struct xe_user {
+ /**
+ * @refcount: reference count
+ */
+ struct kref refcount;
+
+ /**
+ * @xe: pointer to the xe_device
+ */
+ struct xe_device *xe;
+
+ /**
+ * @filelist_lock: lock protecting the filelist
+ */
+ struct mutex filelist_lock;
+
+ /**
+ * @filelist: list of xe files belonging to this xe user
+ */
+ struct list_head filelist;
+
+ /**
+ * @work: work to emit the gpu work period event for this
+ * xe user
+ */
+ struct work_struct work;
+
+ /**
+ * @uid: user id for this xe_user
+ */
+ u32 uid;
+
+ /**
+ * @active_duration_ns: sum total of xe_file.active_duration_ns
+ * for all xe files belonging to this xe user
+ */
+ u64 active_duration_ns;
+
+ /**
+ * @last_timestamp_ns: timestamp in ns when we last emitted event
+ * for this xe user
+ */
+ u64 last_timestamp_ns;
+};
+
+struct xe_user *xe_user_alloc(void);
+
+static inline struct xe_user *
+xe_user_get(struct xe_user *user)
+{
+ kref_get(&user->refcount);
+ return user;
+}
+
+void __xe_user_free(struct kref *kref);
+
+static inline void xe_user_put(struct xe_user *user)
+{
+ kref_put(&user->refcount, __xe_user_free);
+}
+
+#endif /* _XE_USER_H_ */
--
2.49.0