From: Stuart Summers <stuart.summers@intel.com>
Cc: intel-xe@lists.freedesktop.org, matthew.brost@intel.com,
farah.kassabri@intel.com,
Stuart Summers <stuart.summers@intel.com>
Subject: [PATCH 1/9] drm/xe: Move explicit CT lock in TLB invalidation sequence
Date: Wed, 20 Aug 2025 23:30:49 +0000
Message-ID: <20250820233057.83894-2-stuart.summers@intel.com>
In-Reply-To: <20250820233057.83894-1-stuart.summers@intel.com>

Currently the CT lock is used to cover TLB invalidation
sequence number updates. In an effort to separate the GuC
back end's tracking of communication with the firmware from
the front end's TLB sequence number tracking, add a new lock
here dedicated to the sequence number updates coming in from
the user.

Apart from the CT lock, we also have a pending lock covering
both the pending fences and the sequence numbers received
from the back end. That lock is taken in interrupt context,
so it makes sense not to overload it with the sequence
numbers coming in from new transactions. For that reason,
use a mutex here.
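
As a rough userspace sketch of the split described above (pthreads
standing in for the kernel's mutex and IRQ spinlock; the struct and
function names here are hypothetical, with only the seqno wrap
behavior taken from the driver): a sleepable seqno_lock serializes
seqno allocation on the send path, while a separate pending_lock
covers state written from the receive/interrupt path.

```c
#include <pthread.h>

#define SEQNO_MAX 0x100000	/* mirrors TLB_INVALIDATION_SEQNO_MAX */

struct tlb_inval {
	pthread_mutex_t seqno_lock;	/* send path: serializes seqno allocation */
	pthread_mutex_t pending_lock;	/* receive path: stands in for the IRQ spinlock */
	int seqno;			/* next seqno to hand out; 0 is never valid */
	int seqno_recv;			/* last seqno acked by the back end */
};

/* Send path: allocate the next seqno under the dedicated mutex. */
static int tlb_seqno_alloc(struct tlb_inval *ti)
{
	int seqno;

	pthread_mutex_lock(&ti->seqno_lock);
	seqno = ti->seqno;
	ti->seqno = (ti->seqno + 1) % SEQNO_MAX;
	if (!ti->seqno)
		ti->seqno = 1;	/* wrap past SEQNO_MAX, skipping 0 */
	pthread_mutex_unlock(&ti->seqno_lock);

	return seqno;
}

/* Receive path: only pending_lock is taken, never seqno_lock. */
static void tlb_seqno_recv(struct tlb_inval *ti, int seqno)
{
	pthread_mutex_lock(&ti->pending_lock);
	ti->seqno_recv = seqno;
	pthread_mutex_unlock(&ti->pending_lock);
}
```

The design point is the same as in the patch: a sender may sleep
while holding seqno_lock (the CT send can block), whereas the
receive side runs in interrupt context and must only ever take the
spinlock-style pending_lock.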
v2: Actually add the correct lock rather than just dropping
it... (Matt)
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
---
drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 19 +++++++++++++------
drivers/gpu/drm/xe/xe_gt_types.h | 2 ++
2 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
index 02f0bb92d6e0..75854b963d66 100644
--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -118,6 +118,9 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
  */
 int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 {
+	struct xe_device *xe = gt_to_xe(gt);
+	int err;
+
 	gt->tlb_invalidation.seqno = 1;
 	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
 	spin_lock_init(&gt->tlb_invalidation.pending_lock);
@@ -125,6 +128,10 @@ int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
 	INIT_DELAYED_WORK(&gt->tlb_invalidation.fence_tdr,
 			  xe_gt_tlb_fence_timeout);
 
+	err = drmm_mutex_init(&xe->drm, &gt->tlb_invalidation.seqno_lock);
+	if (err)
+		return err;
+
 	gt->tlb_invalidation.job_wq =
 		drmm_alloc_ordered_workqueue(&gt_to_xe(gt)->drm, "gt-tbl-inval-job-wq",
 					     WQ_MEM_RECLAIM);
@@ -158,7 +165,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 	 * appear.
 	 */
 
-	mutex_lock(&gt->uc.guc.ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 	cancel_delayed_work(&gt->tlb_invalidation.fence_tdr);
 	/*
@@ -178,7 +185,7 @@ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)
 				 &gt->tlb_invalidation.pending_fences, link)
 		invalidation_fence_signal(gt_to_xe(gt), fence);
 	spin_unlock_irq(&gt->tlb_invalidation.pending_lock);
-	mutex_unlock(&gt->uc.guc.ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 }
 
 static bool tlb_invalidation_seqno_past(struct xe_gt *gt, int seqno)
@@ -211,13 +218,13 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 	 * need to be updated.
 	 */
 
-	mutex_lock(&guc->ct.lock);
+	mutex_lock(&gt->tlb_invalidation.seqno_lock);
 	seqno = gt->tlb_invalidation.seqno;
 	fence->seqno = seqno;
 	trace_xe_gt_tlb_invalidation_fence_send(xe, fence);
 	action[1] = seqno;
-	ret = xe_guc_ct_send_locked(&guc->ct, action, len,
-				    G2H_LEN_DW_TLB_INVALIDATE, 1);
+	ret = xe_guc_ct_send(&guc->ct, action, len,
+			     G2H_LEN_DW_TLB_INVALIDATE, 1);
 	if (!ret) {
 		spin_lock_irq(&gt->tlb_invalidation.pending_lock);
 		/*
@@ -248,7 +255,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
 		if (!gt->tlb_invalidation.seqno)
 			gt->tlb_invalidation.seqno = 1;
 	}
-	mutex_unlock(&guc->ct.lock);
+	mutex_unlock(&gt->tlb_invalidation.seqno_lock);
 
 	xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
 	return ret;
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index ef0f2eecfa29..4dbc40fa6639 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -190,6 +190,8 @@ struct xe_gt {
 		/** @tlb_invalidation.seqno: TLB invalidation seqno, protected by CT lock */
 #define TLB_INVALIDATION_SEQNO_MAX	0x100000
 		int seqno;
+		/** @tlb_invalidation.seqno_lock: protects @tlb_invalidation.seqno */
+		struct mutex seqno_lock;
 		/**
 		 * @tlb_invalidation.seqno_recv: last received TLB invalidation seqno,
 		 * protected by CT lock
--
2.34.1