Intel-XE Archive on lore.kernel.org
From: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: jeevaka.badrappan@intel.com, rodrigo.vivi@intel.com,
	matthew.brost@intel.com, carlos.santa@intel.com,
	matthew.auld@intel.com,
	Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
Subject: [PATCH v4 3/9] drm/xe: Add a trace point for GPU work period
Date: Fri, 26 Sep 2025 10:45:14 +0000
Message-ID: <20250926104521.1815428-4-aakash.deep.sarkar@intel.com> (raw)
In-Reply-To: <20250926104521.1815428-1-aakash.deep.sarkar@intel.com>

The GPU work period event is required to have the following format,
which defines the structure of the kernel tracepoint exposed at
/sys/kernel/tracing/events/power/gpu_work_period:

A value that uniquely identifies the GPU within the system.
  uint32_t gpu_id;

The UID of the application (i.e. persistent, unique ID of the Android
app) that submitted work to the GPU.
  uint32_t uid;

The start time of the period in nanoseconds. The clock must be
CLOCK_MONOTONIC_RAW, as returned by the ktime_get_raw_ns(void) function.
  uint64_t start_time_ns;

The end time of the period in nanoseconds. The clock must be
CLOCK_MONOTONIC_RAW, as returned by the ktime_get_raw_ns(void) function.
  uint64_t end_time_ns;

The amount of time the GPU was running GPU work for |uid| during the
period, in nanoseconds, without double-counting parallel GPU work for the
same |uid|. For example, this might include the amount of time the GPU
spent performing shader work (vertex work, fragment work, etc.) for
|uid|.
  uint64_t total_active_duration_ns;

Signed-off-by: Aakash Deep Sarkar <aakash.deep.sarkar@intel.com>
---
 include/trace/gpu_work_period.h | 59 +++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)
 create mode 100644 include/trace/gpu_work_period.h

diff --git a/include/trace/gpu_work_period.h b/include/trace/gpu_work_period.h
new file mode 100644
index 000000000000..e06467625705
--- /dev/null
+++ b/include/trace/gpu_work_period.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM power
+
+#if !defined(_TRACE_GPU_WORK_PERIOD_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_GPU_WORK_PERIOD_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(gpu_work_period,
+
+	TP_PROTO(
+		u32 gpu_id,
+		u32 uid,
+		u64 start_time_ns,
+		u64 end_time_ns,
+		u64 total_active_duration_ns
+	),
+
+	TP_ARGS(gpu_id, uid, start_time_ns, end_time_ns, total_active_duration_ns),
+
+	TP_STRUCT__entry(
+		__field(u32, gpu_id)
+		__field(u32, uid)
+		__field(u64, start_time_ns)
+		__field(u64, end_time_ns)
+		__field(u64, total_active_duration_ns)
+	),
+
+	TP_fast_assign(
+		__entry->gpu_id = gpu_id;
+		__entry->uid = uid;
+		__entry->start_time_ns = start_time_ns;
+		__entry->end_time_ns = end_time_ns;
+		__entry->total_active_duration_ns = total_active_duration_ns;
+	),
+
+	TP_printk("gpu_id=%u uid=%u start_time_ns=%llu end_time_ns=%llu total_active_duration_ns=%llu",
+		__entry->gpu_id,
+		__entry->uid,
+		__entry->start_time_ns,
+		__entry->end_time_ns,
+		__entry->total_active_duration_ns)
+);
+
+#endif /* _TRACE_GPU_WORK_PERIOD_H */
+
+/* This part must be outside protection */
+
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE gpu_work_period
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+
+#include <trace/define_trace.h>
-- 
2.49.0


Thread overview: 20+ messages
2025-09-26 10:45 [PATCH v4 0/9] [ANDROID]: Add GPU work period support for Xe driver Aakash Deep Sarkar
2025-09-26 10:45 ` [PATCH v4 1/9] drm/xe: Add a new xe_user structure Aakash Deep Sarkar
2025-10-02 14:40   ` Rodrigo Vivi
2025-09-26 10:45 ` [PATCH v4 2/9] drm/xe: Add xe_gt_clock_interval_to_ns function Aakash Deep Sarkar
2025-09-26 10:45 ` Aakash Deep Sarkar [this message]
2025-10-02 14:42   ` [PATCH v4 3/9] drm/xe: Add a trace point for GPU work period Rodrigo Vivi
2025-10-03 21:41     ` Dixit, Ashutosh
2025-09-26 10:45 ` [PATCH v4 4/9] drm/xe: Modify xe_exec_queue_update_run_ticks Aakash Deep Sarkar
2025-09-26 10:45 ` [PATCH v4 5/9] drm/xe: Handle xe_user creation and removal Aakash Deep Sarkar
2025-09-26 11:29   ` Jani Nikula
2025-09-26 10:45 ` [PATCH v4 6/9] drm/xe: Implement xe_work_period_worker Aakash Deep Sarkar
2025-09-26 11:31   ` Jani Nikula
2025-09-26 10:45 ` [PATCH v4 7/9] drm/xe: Add a Kconfig option for GPU work period Aakash Deep Sarkar
2025-09-26 10:45 ` [PATCH v4 8/9] drm/xe: Handle xe_work_period destruction Aakash Deep Sarkar
2025-09-26 11:32   ` Jani Nikula
2025-09-26 10:45 ` [PATCH v4 9/9] Hack patch: Do not merge Aakash Deep Sarkar
2025-09-26 11:59 ` ✗ CI.checkpatch: warning for : Add GPU work period support for Xe driver (rev4) Patchwork
2025-09-26 12:01 ` ✓ CI.KUnit: success " Patchwork
2025-09-26 12:51 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-09-26 18:04 ` ✗ Xe.CI.Full: " Patchwork
