Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org
Subject: [PATCH v10 05/34] drm/xe: Return first unsignaled job from first pending job helper
Date: Wed,  8 Oct 2025 14:45:03 -0700	[thread overview]
Message-ID: <20251008214532.3442967-6-matthew.brost@intel.com> (raw)
In-Reply-To: <20251008214532.3442967-1-matthew.brost@intel.com>

In all cases where the first pending job helper is called, we only want
to retrieve the first unsignaled pending job, as this helper is used
exclusively in recovery flows. Signaled jobs can remain in the pending
list while the scheduler is stopped, so those should be skipped.

Also, add kernel documentation to clarify this behavior.

v8:
 - Split out into own patch (Auld)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
 drivers/gpu/drm/xe/xe_gpu_scheduler.h | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
index e548b2aed95a..3a9ff78d9346 100644
--- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
+++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
@@ -77,17 +77,30 @@ static inline void xe_sched_add_pending_job(struct xe_gpu_scheduler *sched,
 	spin_unlock(&sched->base.job_list_lock);
 }
 
+/**
+ * xe_sched_first_pending_job() - Find first pending job which is unsignaled
+ * @sched: Xe GPU scheduler
+ *
+ * Return: first unsignaled job in the pending list or NULL
+ */
 static inline
 struct xe_sched_job *xe_sched_first_pending_job(struct xe_gpu_scheduler *sched)
 {
-	struct xe_sched_job *job;
+	struct xe_sched_job *job, *r_job = NULL;
 
 	spin_lock(&sched->base.job_list_lock);
-	job = list_first_entry_or_null(&sched->base.pending_list,
-				       struct xe_sched_job, drm.list);
+	list_for_each_entry(job, &sched->base.pending_list, drm.list) {
+		struct drm_sched_fence *s_fence = job->drm.s_fence;
+		struct dma_fence *hw_fence = s_fence->parent;
+
+		if (hw_fence && !dma_fence_is_signaled(hw_fence)) {
+			r_job = job;
+			break;
+		}
+	}
 	spin_unlock(&sched->base.job_list_lock);
 
-	return job;
+	return r_job;
 }
 
 static inline int
-- 
2.34.1


Thread overview: 40+ messages
2025-10-08 21:44 [PATCH v10 00/34] VF migration redesign Matthew Brost
2025-10-08 21:44 ` [PATCH v10 01/34] drm/xe: Add NULL checks to scratch LRC allocation Matthew Brost
2025-10-08 21:45 ` [PATCH v10 02/34] drm/xe: Save off position in ring in which a job was programmed Matthew Brost
2025-10-08 21:45 ` [PATCH v10 03/34] drm/xe/guc: Track pending-enable source in submission state Matthew Brost
2025-10-08 21:45 ` [PATCH v10 04/34] drm/xe: Track LR jobs in DRM scheduler pending list Matthew Brost
2025-10-08 21:45 ` Matthew Brost [this message]
2025-10-08 21:45 ` [PATCH v10 06/34] drm/xe: Don't change LRC ring head on job resubmission Matthew Brost
2025-10-08 21:45 ` [PATCH v10 07/34] drm/xe: Make LRC W/A scratch buffer usage consistent Matthew Brost
2025-10-08 21:45 ` [PATCH v10 08/34] drm/xe/vf: Add xe_gt_recovery_pending helper Matthew Brost
2025-10-08 21:45 ` [PATCH v10 09/34] drm/xe/vf: Make VF recovery run on per-GT worker Matthew Brost
2025-10-08 21:45 ` [PATCH v10 10/34] drm/xe/vf: Abort H2G sends during VF post-migration recovery Matthew Brost
2025-10-08 21:45 ` [PATCH v10 11/34] drm/xe/vf: Remove memory allocations from VF post migration recovery Matthew Brost
2025-10-08 21:45 ` [PATCH v10 12/34] drm/xe: Move GGTT lock init to alloc Matthew Brost
2025-10-08 21:45 ` [PATCH v10 13/34] drm/xe/vf: Move LMEM config to tile layer Matthew Brost
2025-10-08 21:45 ` [PATCH v10 14/34] drm/xe/vf: Close multi-GT GGTT shift race Matthew Brost
2025-10-08 21:45 ` [PATCH v10 15/34] drm/xe/vf: Teardown VF post migration worker on driver unload Matthew Brost
2025-10-08 21:45 ` [PATCH v10 16/34] drm/xe/vf: Don't allow GT reset to be queued during VF post migration recovery Matthew Brost
2025-10-08 21:45 ` [PATCH v10 17/34] drm/xe/vf: Wakeup in GuC backend on " Matthew Brost
2025-10-08 21:45 ` [PATCH v10 18/34] drm/xe/vf: Avoid indefinite blocking in preempt rebind worker for VFs supporting migration Matthew Brost
2025-10-08 21:45 ` [PATCH v10 19/34] drm/xe/vf: Use GUC_HXG_TYPE_EVENT for GuC context register Matthew Brost
2025-10-08 21:45 ` [PATCH v10 20/34] drm/xe/vf: Flush and stop CTs in VF post migration recovery Matthew Brost
2025-10-08 21:45 ` [PATCH v10 21/34] drm/xe/vf: Reset TLB invalidations during " Matthew Brost
2025-10-08 21:45 ` [PATCH v10 22/34] drm/xe/vf: Kickstart after resfix in " Matthew Brost
2025-10-08 21:45 ` [PATCH v10 23/34] drm/xe: Add CTB_H2G_BUFFER_OFFSET define Matthew Brost
2025-10-08 21:45 ` [PATCH v10 24/34] drm/xe/vf: Start CTs before resfix VF post migration recovery Matthew Brost
2025-10-08 21:45 ` [PATCH v10 25/34] drm/xe/vf: Abort VF post migration recovery on failure Matthew Brost
2025-10-08 21:45 ` [PATCH v10 26/34] drm/xe/vf: Replay GuC submission state on pause / unpause Matthew Brost
2025-10-13 11:54   ` Raag Jadav
2025-10-08 21:45 ` [PATCH v10 27/34] drm/xe: Move queue init before LRC creation Matthew Brost
2025-10-08 21:45 ` [PATCH v10 28/34] drm/xe/vf: Add debug prints for GuC replaying state during VF recovery Matthew Brost
2025-10-08 21:45 ` [PATCH v10 29/34] drm/xe/vf: Workaround for race condition in GuC firmware during VF pause Matthew Brost
2025-10-08 21:45 ` [PATCH v10 30/34] drm/xe: Use PPGTT addresses for TLB invalidation to avoid GGTT fixups Matthew Brost
2025-10-08 21:45 ` [PATCH v10 31/34] drm/xe/vf: Use primary GT ordered work queue on media GT on PTL VF Matthew Brost
2025-10-08 21:45 ` [PATCH v10 32/34] drm/xe/vf: Ensure media GT VF recovery runs after primary GT on PTL Matthew Brost
2025-10-08 21:45 ` [PATCH v10 33/34] drm/xe/vf: Rebase CCS save/restore BB GGTT addresses Matthew Brost
2025-10-08 21:45 ` [PATCH v10 34/34] drm/xe/guc: Increase wait timeout to 2sec after BUSY reply from GuC Matthew Brost
2025-10-08 21:53 ` ✗ CI.checkpatch: warning for VF migration redesign (rev10) Patchwork
2025-10-08 21:54 ` ✓ CI.KUnit: success " Patchwork
2025-10-08 22:35 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-09  2:36 ` ✗ Xe.CI.Full: failure " Patchwork
