From: Lucas De Marchi <lucas.demarchi@intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: Jonathan Cavitt <jonathan.cavitt@intel.com>,
Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>,
Lucas De Marchi <lucas.demarchi@intel.com>
Subject: [PATCH 3/3] drm/xe: Stop accumulating LRC timestamp on job_free
Date: Sat, 26 Oct 2024 01:26:58 -0500
Message-ID: <20241026062658.28060-4-lucas.demarchi@intel.com>
In-Reply-To: <20241026062658.28060-1-lucas.demarchi@intel.com>
The exec queue timestamp is only really useful when it's queried
through fdinfo, so there's no need to update it as often as on every
job_free(). Tracing a simple app like vkcube shows an update rate of
~120 Hz.
The update on job_free() was covering a gap: if an exec queue is
created and destroyed rapidly, before a new fdinfo query, its timestamp
still needs to be accumulated and accounted for on the xef.
The initial implementation in commit 6109f24f87d7 ("drm/xe: Add helper to
accumulate exec queue runtime") couldn't do this in exec_queue_fini()
since the xef could already be gone by that point. However, since commit
ce8c161cbad4 ("drm/xe: Add ref counting for xe_file") the xef is
refcounted and the exec queue holds a reference, so the accumulation can
now happen there.
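In other words, the lifetimes now allow the pattern below (a rough
sketch of the relevant get/put pairing around xe_file_get()/xe_file_put();
the surrounding code is elided):

  /* at exec queue creation: the queue takes a reference on the file */
  q->xef = xe_file_get(xef);

  /* ... the queue runs jobs; the LRC timestamp advances ... */

  /*
   * at xe_exec_queue_fini(): the reference guarantees the xef is
   * still alive, so the final accumulation is safe here
   */
  xe_exec_queue_update_run_ticks(q);
  xe_file_put(q->xef);  /* dropped when the queue is freed */

This closes the same create/destroy-before-next-query gap that the
job_free() update covered, but does the work once per queue instead of
once per job.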
Improve the fix in commit 2149ded63079 ("drm/xe: Fix use after free when
client stats are captured") by reducing the frequency at which the
update is needed.
Fixes: 2149ded63079 ("drm/xe: Fix use after free when client stats are captured")
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
---
drivers/gpu/drm/xe/xe_exec_queue.c | 6 ++++++
drivers/gpu/drm/xe/xe_guc_submit.c | 2 --
2 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
index b15ca84b2422..bc2fc917e0de 100644
--- a/drivers/gpu/drm/xe/xe_exec_queue.c
+++ b/drivers/gpu/drm/xe/xe_exec_queue.c
@@ -260,8 +260,14 @@ void xe_exec_queue_fini(struct xe_exec_queue *q)
 {
 	int i;
 
+	/*
+	 * Before releasing our ref to lrc and xef, accumulate our run ticks
+	 */
+	xe_exec_queue_update_run_ticks(q);
+
 	for (i = 0; i < q->width; ++i)
 		xe_lrc_put(q->lrc[i]);
+
 	__xe_exec_queue_free(q);
 }
 
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index e5d7c767a744..ebe4665d9159 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -747,8 +747,6 @@ static void guc_exec_queue_free_job(struct drm_sched_job *drm_job)
 {
 	struct xe_sched_job *job = to_xe_sched_job(drm_job);
 
-	xe_exec_queue_update_run_ticks(job->q);
-
 	trace_xe_sched_job_free(job);
 	xe_sched_job_put(job);
 }
--
2.47.0