From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org
Subject: [PATCH v7 05/32] drm/xe: Don't change LRC ring head on job resubmission
Date: Tue, 7 Oct 2025 04:26:14 -0700 [thread overview]
Message-ID: <20251007112641.2669655-6-matthew.brost@intel.com> (raw)
In-Reply-To: <20251007112641.2669655-1-matthew.brost@intel.com>

Now that we save each job's ring head at submission time, it is no
longer necessary to adjust the LRC ring head during resubmission.
Instead, rewinding the software ring tail lets resubmission rewrite the
old jobs in place. Notably, adjusting the LRC ring head did not work on
parallel queues, which was causing failures in our CI.
v5:
- Add comment in guc_exec_queue_start explaining why the function works
(Auld)
v7:
- Only adjust first state on first unsignaled job (Auld)
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Tomasz Lis <tomasz.lis@intel.com>
---
drivers/gpu/drm/xe/xe_gpu_scheduler.h | 21 +++++++++++++++++----
drivers/gpu/drm/xe/xe_guc_submit.c | 18 ++++++++++++++++--
2 files changed, 33 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
index e548b2aed95a..3a9ff78d9346 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
@@ -77,17 +77,30 @@ static inline void xe_sched_add_pending_job(struct xe_gpu_scheduler *sched,
spin_unlock(&sched->base.job_list_lock);
}
+/**
+ * xe_sched_first_pending_job() - Find the first unsignaled pending job
+ * @sched: Xe GPU scheduler
+ *
+ * Return: The first unsignaled job in the pending list, or NULL if none.
+ */
static inline
struct xe_sched_job *xe_sched_first_pending_job(struct xe_gpu_scheduler *sched)
{
- struct xe_sched_job *job;
+ struct xe_sched_job *job, *r_job = NULL;
spin_lock(&sched->base.job_list_lock);
- job = list_first_entry_or_null(&sched->base.pending_list,
- struct xe_sched_job, drm.list);
+ list_for_each_entry(job, &sched->base.pending_list, drm.list) {
+ struct drm_sched_fence *s_fence = job->drm.s_fence;
+ struct dma_fence *hw_fence = s_fence->parent;
+
+ if (hw_fence && !dma_fence_is_signaled(hw_fence)) {
+ r_job = job;
+ break;
+ }
+ }
spin_unlock(&sched->base.job_list_lock);
- return job;
+ return r_job;
}
static inline int
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 3a534d93505f..d123bdb63369 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -2008,11 +2008,25 @@ static void guc_exec_queue_start(struct xe_exec_queue *q)
struct xe_gpu_scheduler *sched = &q->guc->sched;
if (!exec_queue_killed_or_banned_or_wedged(q)) {
+ struct xe_sched_job *job = xe_sched_first_pending_job(sched);
int i;
trace_xe_exec_queue_resubmit(q);
- for (i = 0; i < q->width; ++i)
- xe_lrc_set_ring_head(q->lrc[i], q->lrc[i]->ring.tail);
+ if (job) {
+ for (i = 0; i < q->width; ++i) {
+ /*
+ * The GuC context is unregistered at this point.
+ * Adjusting the software ring tail ensures the
+ * jobs are rewritten in their original placement,
+ * while adjusting the LRC tail ensures the newly
+ * loaded GuC / contexts only observe the LRC
+ * tail increasing as jobs are written out.
+ */
+ q->lrc[i]->ring.tail = job->ptrs[i].head;
+ xe_lrc_set_ring_tail(q->lrc[i],
+ xe_lrc_ring_head(q->lrc[i]));
+ }
+ }
xe_sched_resubmit_jobs(sched);
}
--
2.34.1