Intel-XE Archive on lore.kernel.org
* [PATCH] drm/sched: Consolidate drm_sched_job_timedout
@ 2025-07-16 14:48 Tvrtko Ursulin
  2025-07-16 17:45 ` ✓ CI.KUnit: success for " Patchwork
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Tvrtko Ursulin @ 2025-07-16 14:48 UTC (permalink / raw)
  To: dri-devel
  Cc: kernel-dev, intel-xe, amd-gfx, Tvrtko Ursulin,
	Christian König, Danilo Krummrich, Maíra Canal,
	Matthew Brost, Philipp Stanner

Reduce to a single spin_unlock for a hopefully slightly clearer flow
in the function.

It may appear there is a behavioural change, with
drm_sched_start_timeout_unlocked() now not being called when the
pending list was initially empty and jobs appeared only after the
unlock. However, any code relying on the TDR handler restarting itself
in that case would be broken anyway, because it would equally fail to
do so whenever the job arrived on the pending list just after the
check.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Maíra Canal <mcanal@igalia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Philipp Stanner <phasta@kernel.org>
---
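For reviewers' convenience, here is a condensed sketch of how
drm_sched_job_timedout() reads with this patch applied. It is
reconstructed from the diff below; the trailing
drm_sched_start_timeout_unlocked() call is assumed from the existing
code outside the hunk, so treat this as an approximation rather than
the literal end result:

	static void drm_sched_job_timedout(struct work_struct *work)
	{
		struct drm_gpu_scheduler *sched =
			container_of(work, struct drm_gpu_scheduler, work_tdr.work);
		enum drm_gpu_sched_stat status;
		struct drm_sched_job *job;

		/* Protects against concurrent deletion in drm_sched_get_finished_job */
		spin_lock(&sched->job_list_lock);
		job = list_first_entry_or_null(&sched->pending_list,
					       struct drm_sched_job, list);
		if (job)
			/* Remove the bad job so a concurrent free cannot race with us */
			list_del_init(&job->list);
		spin_unlock(&sched->job_list_lock);	/* the single unlock site */

		if (!job)
			return;

		status = job->sched->ops->timedout_job(job);

		/* Guilty job completed and needs manual removal, see drm_sched_stop() */
		if (sched->free_guilty) {
			job->sched->ops->free_job(job);
			sched->free_guilty = false;
		} else if (status == DRM_GPU_SCHED_STAT_NO_HANG) {
			drm_sched_job_reinsert_on_false_timeout(sched, job);
		}

		if (status != DRM_GPU_SCHED_STAT_ENODEV)
			drm_sched_start_timeout_unlocked(sched); /* assumed from context */
	}
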
 drivers/gpu/drm/scheduler/sched_main.c | 36 ++++++++++++--------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index e2cda28a1af4..60ae600590dc 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -556,17 +556,15 @@ static void drm_sched_job_reinsert_on_false_timeout(struct drm_gpu_scheduler *sc
 
 static void drm_sched_job_timedout(struct work_struct *work)
 {
-	struct drm_gpu_scheduler *sched;
+	struct drm_gpu_scheduler *sched =
+		container_of(work, struct drm_gpu_scheduler, work_tdr.work);
+	enum drm_gpu_sched_stat status;
 	struct drm_sched_job *job;
-	enum drm_gpu_sched_stat status = DRM_GPU_SCHED_STAT_RESET;
-
-	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
 
 	/* Protects against concurrent deletion in drm_sched_get_finished_job */
 	spin_lock(&sched->job_list_lock);
 	job = list_first_entry_or_null(&sched->pending_list,
 				       struct drm_sched_job, list);
-
 	if (job) {
 		/*
 		 * Remove the bad job so it cannot be freed by a concurrent
@@ -575,23 +573,23 @@ static void drm_sched_job_timedout(struct work_struct *work)
 		 * cancelled, at which point it's safe.
 		 */
 		list_del_init(&job->list);
-		spin_unlock(&sched->job_list_lock);
+	}
+	spin_unlock(&sched->job_list_lock);
 
-		status = job->sched->ops->timedout_job(job);
+	if (!job)
+		return;
 
-		/*
-		 * Guilty job did complete and hence needs to be manually removed
-		 * See drm_sched_stop doc.
-		 */
-		if (sched->free_guilty) {
-			job->sched->ops->free_job(job);
-			sched->free_guilty = false;
-		}
+	status = job->sched->ops->timedout_job(job);
 
-		if (status == DRM_GPU_SCHED_STAT_NO_HANG)
-			drm_sched_job_reinsert_on_false_timeout(sched, job);
-	} else {
-		spin_unlock(&sched->job_list_lock);
+	/*
+	 * Guilty job did complete and hence needs to be manually removed. See
+	 * documentation for drm_sched_stop.
+	 */
+	if (sched->free_guilty) {
+		job->sched->ops->free_job(job);
+		sched->free_guilty = false;
+	} else if (status == DRM_GPU_SCHED_STAT_NO_HANG) {
+		drm_sched_job_reinsert_on_false_timeout(sched, job);
 	}
 
 	if (status != DRM_GPU_SCHED_STAT_ENODEV)
-- 
2.48.0


Thread overview: 6+ messages
2025-07-16 14:48 [PATCH] drm/sched: Consolidate drm_sched_job_timedout Tvrtko Ursulin
2025-07-16 17:45 ` ✓ CI.KUnit: success for " Patchwork
2025-07-16 19:27 ` ✓ Xe.CI.BAT: " Patchwork
2025-07-16 20:53 ` [PATCH] " Maíra Canal
2025-07-17 10:19 ` Danilo Krummrich
2025-07-17 14:52 ` ✗ Xe.CI.Full: failure for " Patchwork
