From: Lucas De Marchi <lucas.demarchi@intel.com>
To: <intel-xe@lists.freedesktop.org>
Cc: Jonathan Cavitt <jonathan.cavitt@intel.com>,
Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>,
Matthew Brost <matthew.brost@intel.com>,
Lucas De Marchi <lucas.demarchi@intel.com>
Subject: [PATCH v2 0/4] drm/xe: Fix races on fdinfo
Date: Tue, 29 Oct 2024 14:43:47 -0700 [thread overview]
Message-ID: <20241029214351.776293-1-lucas.demarchi@intel.com> (raw)
The current reading of engine utilization has some races. This series should
fix most of them while also drastically reducing the update rate needed
for "normal apps".
I left tests/xe_drm_fdinfo --r utilization-single-full-load-destroy-queue
running on 2 systems and, after 100 iterations, saw no failures of the
kind where execution cycles are reported as 0.
There are still issues calculating the percentage load: while I have
one additional patch to "fix" it on an idle system, I can still
consistently reproduce the issue on an LNL machine by overloading the CPU
with `stress --cpu $(nproc)`. I will leave that for later since it's
a different issue, not related to killing the exec queue.
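For context, a minimal sketch of how a userspace tool typically derives the
percentage load mentioned above from two fdinfo samples; this is not the
kernel code from this series, and the field names are taken from the DRM
fdinfo conventions (drm-cycles-<engine> / drm-total-cycles-<engine> in
Documentation/gpu/drm-usage-stats.rst):

```python
def utilization_pct(cycles0, total0, cycles1, total1):
    """Percentage load of an engine between two fdinfo samples.

    cyclesN: drm-cycles-<engine> value at sample N
    totalN:  drm-total-cycles-<engine> value at sample N
    """
    dtotal = total1 - total0
    if dtotal <= 0:
        # Timestamp did not advance between samples; the load is
        # undefined, so report idle rather than dividing by zero.
        return 0.0
    return 100.0 * (cycles1 - cycles0) / dtotal
```

Races like the ones this series addresses show up here as a delta of
execution cycles that is stale or zero even though the engine was busy.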
Lucas De Marchi (4):
drm/xe: Add trace to lrc timestamp update
drm/xe: Stop accumulating LRC timestamp on job_free
drm/xe: Reword exec_queue.lock doc
drm/xe: Wait on killed exec queues
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_device_types.h | 11 ++++--
drivers/gpu/drm/xe/xe_drm_client.c | 7 ++++
drivers/gpu/drm/xe/xe_exec_queue.c | 10 ++++++
drivers/gpu/drm/xe/xe_guc_submit.c | 2 --
drivers/gpu/drm/xe/xe_lrc.c | 3 ++
drivers/gpu/drm/xe/xe_trace_lrc.c | 9 +++++
drivers/gpu/drm/xe/xe_trace_lrc.h | 52 ++++++++++++++++++++++++++++
8 files changed, 90 insertions(+), 5 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_trace_lrc.c
create mode 100644 drivers/gpu/drm/xe/xe_trace_lrc.h
--
2.47.0
Thread overview: 21+ messages
2024-10-29 21:43 Lucas De Marchi [this message]
2024-10-29 21:43 ` [PATCH v2 1/4] drm/xe: Add trace to lrc timestamp update Lucas De Marchi
2024-10-30 9:24 ` Nirmoy Das
2024-10-29 21:43 ` [PATCH v2 2/4] drm/xe: Stop accumulating LRC timestamp on job_free Lucas De Marchi
2024-10-29 21:43 ` [PATCH v2 3/4] drm/xe: Reword exec_queue.lock doc Lucas De Marchi
2024-10-29 21:50 ` Cavitt, Jonathan
2024-10-30 9:28 ` Nirmoy Das
2024-10-29 21:43 ` [PATCH v2 4/4] drm/xe: Wait on killed exec queues Lucas De Marchi
2024-10-29 22:05 ` Cavitt, Jonathan
2024-10-30 14:01 ` Lucas De Marchi
2024-10-30 10:56 ` Matthew Auld
2024-10-30 13:30 ` Lucas De Marchi
2024-10-29 22:30 ` ✓ CI.Patch_applied: success for drm/xe: Fix races on fdinfo (rev3) Patchwork
2024-10-29 22:30 ` ✗ CI.checkpatch: warning " Patchwork
2024-10-29 22:31 ` ✓ CI.KUnit: success " Patchwork
2024-10-29 22:43 ` ✓ CI.Build: " Patchwork
2024-10-29 22:47 ` ✓ CI.Hooks: " Patchwork
2024-10-29 22:49 ` ✓ CI.checksparse: " Patchwork
2024-10-29 23:11 ` ✓ CI.BAT: " Patchwork
2024-10-29 23:48 ` [PATCH v2 0/4] drm/xe: Fix races on fdinfo Umesh Nerlige Ramappa
2024-10-30 3:24 ` ✗ CI.FULL: failure for drm/xe: Fix races on fdinfo (rev3) Patchwork