public inbox for linux-kernel@vger.kernel.org
* [PATCH v7 0/3] drm/sched: Documentation and refcount improvements
@ 2025-03-05 13:05 Philipp Stanner
  2025-03-05 13:05 ` [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Philipp Stanner @ 2025-03-05 13:05 UTC (permalink / raw)
  To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
	Christian König, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal
  Cc: dri-devel, linux-kernel

Changes in v7:
  - Add Christian's info about violated dma_fence rules. (Christian)
  - Fix some typos

Changes in v6:
  - Switch patches 1 and 2. (Maira)
  - Move section related to run_job() out of timedout_job() patch.
    (Maira).
  - Fix some places for correct crosslink references.
  - Add comment to inform about run_job() incrementing the refcount.
    (Danilo)

Changes in v5:
  - Fix broken enumerated list in timedout_job's docu.
  - Add TODO for documenting the dma_fence rules in timedout_job one
    day.

Changes in v4:
  - Remove mention of vague "dma_fence rules" in timedout_job() again
    since I couldn't get input on what those rules precisely are.
  - Address a forgotten TODO. (Me)
  - Reposition "Return:" statements to make them congruent with the
    official kernel style. (Tvrtko)
  - Change formatting a bit because of crazy make htmldocs errors. (Me)

Changes in v3:
  - timedout_job(): various docu wording improvements. (Danilo)
  - Use the term "ring" consistently. (Danilo)
  - Add fully fledged docu for enum drm_gpu_sched_stat. (Danilo)

Changes in v2:
  - Document what run_job() is allowed to return. (Tvrtko)
  - Delete confusing comment about putting the fence. (Danilo)
  - Apply Danilo's RB to patch 1.
  - Delete info about job recovery for entities in patch 3. (Danilo, me)
  - Set the term "ring" as the fixed term for both HW rings and FW rings. A
    ring shall always be the thingy on the CPU ;) (Danilo)
  - Many (all) other comments improvements in patch 3. (Danilo)

This is a series succeeding my previous patch [1].

I noticed that the callback docu still refers to a non-existent function
and to a deprecated one. We should probably also point out the important
distinction between hardware and firmware schedulers more clearly.

Please give me feedback, especially on the RFC comments in patch 3.

(This series still fires docu-build-warnings. I want to gather feedback
on the open questions first and will solve them in v2.)

Thank you,
Philipp

[1] https://lore.kernel.org/all/20241220124515.93169-2-phasta@kernel.org/

Philipp Stanner (3):
  drm/sched: Adjust outdated docu for run_job()
  drm/sched: Document run_job() refcount hazard
  drm/sched: Update timedout_job()'s documentation

 drivers/gpu/drm/scheduler/sched_main.c |   9 +-
 include/drm/gpu_scheduler.h            | 112 +++++++++++++++++--------
 2 files changed, 83 insertions(+), 38 deletions(-)

-- 
2.48.1


^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job()
  2025-03-05 13:05 [PATCH v7 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
@ 2025-03-05 13:05 ` Philipp Stanner
  2025-03-05 13:45   ` Bagas Sanjaya
  2025-03-07 18:09   ` Maíra Canal
  2025-03-05 13:05 ` [PATCH v7 2/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 14+ messages in thread
From: Philipp Stanner @ 2025-03-05 13:05 UTC (permalink / raw)
  To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
	Christian König, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal
  Cc: dri-devel, linux-kernel

The documentation for drm_sched_backend_ops.run_job() mentions a certain
function called drm_sched_job_recovery(). This function does not exist.
What's actually meant is drm_sched_resubmit_jobs(), which is by now also
deprecated.

Furthermore, the scheduler expects to "inherit" a reference on the fence
from the run_job() callback. This, so far, is also not documented.

Remove the mention of the removed function.

Discourage the behavior of drm_sched_backend_ops.run_job() being called
multiple times for the same job.

Document the necessity of incrementing the refcount in run_job().

Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
 include/drm/gpu_scheduler.h | 34 ++++++++++++++++++++++++++++++----
 1 file changed, 30 insertions(+), 4 deletions(-)

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 50928a7ae98e..6381baae8024 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -410,10 +410,36 @@ struct drm_sched_backend_ops {
 					 struct drm_sched_entity *s_entity);
 
 	/**
-         * @run_job: Called to execute the job once all of the dependencies
-         * have been resolved.  This may be called multiple times, if
-	 * timedout_job() has happened and drm_sched_job_recovery()
-	 * decides to try it again.
+	 * @run_job: Called to execute the job once all of the dependencies
+	 * have been resolved.
+	 *
+	 * @sched_job: the job to run
+	 *
+	 * The deprecated drm_sched_resubmit_jobs() (called by &struct
+	 * drm_sched_backend_ops.timedout_job) can invoke this again with the
+	 * same parameters. Using this is discouraged because it violates
+	 * dma_fence rules, notably dma_fence_init() has to be called on
+	 * already initialized fences for a second time. Moreover, this is
+	 * dangerous because attempts to allocate memory might deadlock with
+	 * memory management code waiting for the reset to complete.
+	 *
+	 * TODO: Document what drivers should do / use instead.
+	 *
+	 * This method is called in a workqueue context - either from the
+	 * submit_wq the driver passed through drm_sched_init(), or, if the
+	 * driver passed NULL, a separate, ordered workqueue the scheduler
+	 * allocated.
+	 *
+	 * Note that the scheduler expects to 'inherit' its own reference to
+	 * this fence from the callback. It does not invoke an extra
+	 * dma_fence_get() on it. Consequently, this callback must take a
+	 * reference for the scheduler, and additional ones for the driver's
+	 * respective needs.
+	 *
+	 * Return:
+	 * * On success: dma_fence the driver must signal once the hardware has
+	 * completed the job ("hardware fence").
+	 * * On failure: NULL or an ERR_PTR.
 	 */
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
 
-- 
2.48.1



* [PATCH v7 2/3] drm/sched: Document run_job() refcount hazard
  2025-03-05 13:05 [PATCH v7 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
  2025-03-05 13:05 ` [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
@ 2025-03-05 13:05 ` Philipp Stanner
  2025-03-05 13:05 ` [PATCH v7 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
  2025-03-06 14:28 ` [PATCH v7 0/3] drm/sched: Documentation and refcount improvements Danilo Krummrich
  3 siblings, 0 replies; 14+ messages in thread
From: Philipp Stanner @ 2025-03-05 13:05 UTC (permalink / raw)
  To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
	Christian König, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal
  Cc: dri-devel, linux-kernel, Philipp Stanner

From: Philipp Stanner <pstanner@redhat.com>

drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
That fence is signalled by the driver once the hardware completed the
associated job. The scheduler does not increment the reference count on
that fence, but implicitly expects to inherit this fence from run_job().

This is relatively subtle and prone to misunderstandings.

This implies that, to keep a reference for itself, a driver needs to
call dma_fence_get() in addition to dma_fence_init() in that callback.

It's further complicated by the fact that the scheduler even decrements
the refcount in drm_sched_run_job_work() since it created a new
reference in drm_sched_fence_scheduled(). It does, however, still use
its pointer to the fence after calling dma_fence_put() - which is safe
because of the aforementioned new reference, but actually still violates
the refcounting rules.

Move the call to dma_fence_put() to the position behind the last usage
of the fence.

Suggested-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Philipp Stanner <pstanner@redhat.com>
Reviewed-by: Danilo Krummrich <dakr@kernel.org>
---
 drivers/gpu/drm/scheduler/sched_main.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index c634993f1346..c8da6b4edf0a 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1216,20 +1216,23 @@ static void drm_sched_run_job_work(struct work_struct *w)
 	drm_sched_job_begin(sched_job);
 
 	trace_drm_run_job(sched_job, entity);
+	/*
+	 * The run_job() callback must by definition return a fence whose
+	 * refcount has been incremented for the scheduler already.
+	 */
 	fence = sched->ops->run_job(sched_job);
 	complete_all(&entity->entity_idle);
 	drm_sched_fence_scheduled(s_fence, fence);
 
 	if (!IS_ERR_OR_NULL(fence)) {
-		/* Drop for original kref_init of the fence */
-		dma_fence_put(fence);
-
 		r = dma_fence_add_callback(fence, &sched_job->cb,
 					   drm_sched_job_done_cb);
 		if (r == -ENOENT)
 			drm_sched_job_done(sched_job, fence->error);
 		else if (r)
 			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
+
+		dma_fence_put(fence);
 	} else {
 		drm_sched_job_done(sched_job, IS_ERR(fence) ?
 				   PTR_ERR(fence) : 0);
-- 
2.48.1



* [PATCH v7 3/3] drm/sched: Update timedout_job()'s documentation
  2025-03-05 13:05 [PATCH v7 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
  2025-03-05 13:05 ` [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
  2025-03-05 13:05 ` [PATCH v7 2/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
@ 2025-03-05 13:05 ` Philipp Stanner
  2025-03-06 20:57   ` Matthew Brost
  2025-03-06 14:28 ` [PATCH v7 0/3] drm/sched: Documentation and refcount improvements Danilo Krummrich
  3 siblings, 1 reply; 14+ messages in thread
From: Philipp Stanner @ 2025-03-05 13:05 UTC (permalink / raw)
  To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
	Christian König, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal
  Cc: dri-devel, linux-kernel

drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
mentions the deprecated function drm_sched_resubmit_jobs(). Furthermore,
it does not point out the important distinction between hardware and
firmware schedulers.

Since firmware schedulers typically only use one entity per scheduler,
timeout handling is significantly simpler because the entity the
faulted job came from can just be killed without affecting innocent
processes.

Update the documentation with that distinction and other details.

Reformat the docstring to conform to a unified style with the other
callbacks.

Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
 include/drm/gpu_scheduler.h | 78 ++++++++++++++++++++++---------------
 1 file changed, 47 insertions(+), 31 deletions(-)

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 6381baae8024..1a7e377d4cbb 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -383,8 +383,15 @@ struct drm_sched_job {
 	struct xarray			dependencies;
 };
 
+/**
+ * enum drm_gpu_sched_stat - the scheduler's status
+ *
+ * @DRM_GPU_SCHED_STAT_NONE: Reserved. Do not use.
+ * @DRM_GPU_SCHED_STAT_NOMINAL: Operation succeeded.
+ * @DRM_GPU_SCHED_STAT_ENODEV: Error: Device is not available anymore.
+ */
 enum drm_gpu_sched_stat {
-	DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
+	DRM_GPU_SCHED_STAT_NONE,
 	DRM_GPU_SCHED_STAT_NOMINAL,
 	DRM_GPU_SCHED_STAT_ENODEV,
 };
@@ -447,43 +454,52 @@ struct drm_sched_backend_ops {
 	 * @timedout_job: Called when a job has taken too long to execute,
 	 * to trigger GPU recovery.
 	 *
-	 * This method is called in a workqueue context.
+	 * @sched_job: The job that has timed out
 	 *
-	 * Drivers typically issue a reset to recover from GPU hangs, and this
-	 * procedure usually follows the following workflow:
+	 * Drivers typically issue a reset to recover from GPU hangs.
+	 * This procedure looks very different depending on whether a firmware
+	 * or a hardware scheduler is being used.
 	 *
-	 * 1. Stop the scheduler using drm_sched_stop(). This will park the
-	 *    scheduler thread and cancel the timeout work, guaranteeing that
-	 *    nothing is queued while we reset the hardware queue
-	 * 2. Try to gracefully stop non-faulty jobs (optional)
-	 * 3. Issue a GPU reset (driver-specific)
-	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
-	 * 5. Restart the scheduler using drm_sched_start(). At that point, new
-	 *    jobs can be queued, and the scheduler thread is unblocked
+	 * For a FIRMWARE SCHEDULER, each ring has one scheduler, and each
+	 * scheduler has one entity. Hence, the steps taken typically look as
+	 * follows:
+	 *
+	 * 1. Stop the scheduler using drm_sched_stop(). This will pause the
+	 *    scheduler workqueues and cancel the timeout work, guaranteeing
+	 *    that nothing is queued while the ring is being removed.
+	 * 2. Remove the ring. The firmware will make sure that the
+	 *    corresponding parts of the hardware are reset, and that other
+	 *    rings are not impacted.
+	 * 3. Kill the entity and the associated scheduler.
+	 *
+	 *
+	 * For a HARDWARE SCHEDULER, a scheduler instance schedules jobs from
+	 * one or more entities to one ring. This implies that all entities
+	 * associated with the affected scheduler cannot be torn down, because
+	 * this would effectively also affect innocent userspace processes which
+	 * did not submit faulty jobs (for example).
+	 *
+	 * Consequently, the procedure to recover with a hardware scheduler
+	 * should look like this:
+	 *
+	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
+	 * 2. Kill the entity the faulty job stems from.
+	 * 3. Issue a GPU reset on all faulty rings (driver-specific).
+	 * 4. Re-submit jobs on all schedulers impacted by the reset by
+	 *    re-submitting them to the entities which are still alive.
+	 * 5. Restart all schedulers that were stopped in step #1 using
+	 *    drm_sched_start().
 	 *
 	 * Note that some GPUs have distinct hardware queues but need to reset
 	 * the GPU globally, which requires extra synchronization between the
-	 * timeout handler of the different &drm_gpu_scheduler. One way to
-	 * achieve this synchronization is to create an ordered workqueue
-	 * (using alloc_ordered_workqueue()) at the driver level, and pass this
-	 * queue to drm_sched_init(), to guarantee that timeout handlers are
-	 * executed sequentially. The above workflow needs to be slightly
-	 * adjusted in that case:
+	 * timeout handlers of different schedulers. One way to achieve this
+	 * synchronization is to create an ordered workqueue (using
+	 * alloc_ordered_workqueue()) at the driver level, and pass this queue
+	 * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
+	 * that timeout handlers are executed sequentially.
 	 *
-	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
-	 * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
-	 *    the reset (optional)
-	 * 3. Issue a GPU reset on all faulty queues (driver-specific)
-	 * 4. Re-submit jobs on all schedulers impacted by the reset using
-	 *    drm_sched_resubmit_jobs()
-	 * 5. Restart all schedulers that were stopped in step #1 using
-	 *    drm_sched_start()
+	 * Return: The scheduler's status, defined by &enum drm_gpu_sched_stat
 	 *
-	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
-	 * and the underlying driver has started or completed recovery.
-	 *
-	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
-	 * available, i.e. has been unplugged.
 	 */
 	enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
 
-- 
2.48.1



* Re: [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job()
  2025-03-05 13:05 ` [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
@ 2025-03-05 13:45   ` Bagas Sanjaya
  2025-03-05 14:24     ` Philipp Stanner
  2025-03-07 18:09   ` Maíra Canal
  1 sibling, 1 reply; 14+ messages in thread
From: Bagas Sanjaya @ 2025-03-05 13:45 UTC (permalink / raw)
  To: Philipp Stanner, Matthew Brost, Danilo Krummrich,
	Christian König, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal
  Cc: dri-devel, linux-kernel


On Wed, Mar 05, 2025 at 02:05:50PM +0100, Philipp Stanner wrote:
>  	/**
> -         * @run_job: Called to execute the job once all of the dependencies
> -         * have been resolved.  This may be called multiple times, if
> -	 * timedout_job() has happened and drm_sched_job_recovery()
> -	 * decides to try it again.
> +	 * @run_job: Called to execute the job once all of the dependencies
> +	 * have been resolved.
> +	 *
> +	 * @sched_job: the job to run
> +	 *
> +	 * The deprecated drm_sched_resubmit_jobs() (called by &struct
> +	 * drm_sched_backend_ops.timedout_job) can invoke this again with the
> +	 * same parameters. Using this is discouraged because it violates
> +	 * dma_fence rules, notably dma_fence_init() has to be called on
> +	 * already initialized fences for a second time. Moreover, this is
> +	 * dangerous because attempts to allocate memory might deadlock with
> +	 * memory management code waiting for the reset to complete.
> +	 *
> +	 * TODO: Document what drivers should do / use instead.

No replacement? Or bespoke/roll-your-own functionality as a must?

Confused...

-- 
An old man doll... just what I always wanted! - Clara



* Re: [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job()
  2025-03-05 13:45   ` Bagas Sanjaya
@ 2025-03-05 14:24     ` Philipp Stanner
  0 siblings, 0 replies; 14+ messages in thread
From: Philipp Stanner @ 2025-03-05 14:24 UTC (permalink / raw)
  To: Bagas Sanjaya, Philipp Stanner, Matthew Brost, Danilo Krummrich,
	Christian König, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal
  Cc: dri-devel, linux-kernel

On Wed, 2025-03-05 at 20:45 +0700, Bagas Sanjaya wrote:
> On Wed, Mar 05, 2025 at 02:05:50PM +0100, Philipp Stanner wrote:
> >  	/**
> > -         * @run_job: Called to execute the job once all of the
> > dependencies
> > -         * have been resolved.  This may be called multiple times,
> > if
> > -	 * timedout_job() has happened and
> > drm_sched_job_recovery()
> > -	 * decides to try it again.
> > +	 * @run_job: Called to execute the job once all of the
> > dependencies
> > +	 * have been resolved.
> > +	 *
> > +	 * @sched_job: the job to run
> > +	 *
> > +	 * The deprecated drm_sched_resubmit_jobs() (called by
> > &struct
> > +	 * drm_sched_backend_ops.timedout_job) can invoke this
> > again with the
> > +	 * same parameters. Using this is discouraged because it
> > violates
> > +	 * dma_fence rules, notably dma_fence_init() has to be
> > called on
> > +	 * already initialized fences for a second time. Moreover,
> > this is
> > +	 * dangerous because attempts to allocate memory might
> > deadlock with
> > +	 * memory management code waiting for the reset to
> > complete.
> > +	 *
> > +	 * TODO: Document what drivers should do / use instead.
> 
> No replacement? Or bespoke/roll-your-own functionality as a must?
> 
> Confused...

We will document this in a follow-up. I've been trying for 2 months now
[1] just to fix up some broken, outdated documentation – and that in a
component that *I* am maintaining.

It's very difficult to reach the relevant stakeholders, and I really
want to unblock this series.

Feel free to provide a proposal for the TODO based on this series or
jump into the discussion here [2].

Otherwise I will propose a fix for the TODO some time in the next weeks.

P.


[1] https://lore.kernel.org/dri-devel/20250109133710.39404-2-phasta@kernel.org/
[2] https://lore.kernel.org/dri-devel/688b5665-496d-470d-9835-0c6eadfa5569@gmail.com/




* Re: [PATCH v7 0/3] drm/sched: Documentation and refcount improvements
  2025-03-05 13:05 [PATCH v7 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
                   ` (2 preceding siblings ...)
  2025-03-05 13:05 ` [PATCH v7 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
@ 2025-03-06 14:28 ` Danilo Krummrich
  2025-03-06 15:44   ` Philipp Stanner
  3 siblings, 1 reply; 14+ messages in thread
From: Danilo Krummrich @ 2025-03-06 14:28 UTC (permalink / raw)
  To: Philipp Stanner
  Cc: Matthew Brost, Christian König, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, dri-devel, linux-kernel

On Wed, Mar 05, 2025 at 02:05:49PM +0100, Philipp Stanner wrote:
> Philipp Stanner (3):
>   drm/sched: Adjust outdated docu for run_job()
>   drm/sched: Document run_job() refcount hazard
>   drm/sched: Update timedout_job()'s documentation

Acked-by: Danilo Krummrich <dakr@kernel.org>


* Re: [PATCH v7 0/3] drm/sched: Documentation and refcount improvements
  2025-03-06 14:28 ` [PATCH v7 0/3] drm/sched: Documentation and refcount improvements Danilo Krummrich
@ 2025-03-06 15:44   ` Philipp Stanner
  0 siblings, 0 replies; 14+ messages in thread
From: Philipp Stanner @ 2025-03-06 15:44 UTC (permalink / raw)
  To: Danilo Krummrich, Philipp Stanner
  Cc: Matthew Brost, Christian König, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, dri-devel, linux-kernel

On Thu, 2025-03-06 at 15:28 +0100, Danilo Krummrich wrote:
> On Wed, Mar 05, 2025 at 02:05:49PM +0100, Philipp Stanner wrote:
> > Philipp Stanner (3):
> >   drm/sched: Adjust outdated docu for run_job()
> >   drm/sched: Document run_job() refcount hazard
> >   drm/sched: Update timedout_job()'s documentation
> 
> Acked-by: Danilo Krummrich <dakr@kernel.org>

Applied to drm-misc-next.

Thanks


* Re: [PATCH v7 3/3] drm/sched: Update timedout_job()'s documentation
  2025-03-05 13:05 ` [PATCH v7 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
@ 2025-03-06 20:57   ` Matthew Brost
  2025-03-07  9:37     ` Philipp Stanner
  0 siblings, 1 reply; 14+ messages in thread
From: Matthew Brost @ 2025-03-06 20:57 UTC (permalink / raw)
  To: Philipp Stanner
  Cc: Danilo Krummrich, Christian König, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, dri-devel, linux-kernel

On Wed, Mar 05, 2025 at 02:05:52PM +0100, Philipp Stanner wrote:
> drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
> mentions the deprecated function drm_sched_resubmit_jobs(). Furthermore,
> it does not point out the important distinction between hardware and
> firmware schedulers.
> 
> Since firmware schedulers typically only use one entity per scheduler,
> timeout handling is significantly simpler because the entity the
> faulted job came from can just be killed without affecting innocent
> processes.
> 
> Update the documentation with that distinction and other details.
> 
> Reformat the docstring to conform to a unified style with the other
> callbacks.
> 

Looks really good, one suggestion.

> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
>  include/drm/gpu_scheduler.h | 78 ++++++++++++++++++++++---------------
>  1 file changed, 47 insertions(+), 31 deletions(-)
> 
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 6381baae8024..1a7e377d4cbb 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -383,8 +383,15 @@ struct drm_sched_job {
>  	struct xarray			dependencies;
>  };
>  
> +/**
> + * enum drm_gpu_sched_stat - the scheduler's status
> + *
> + * @DRM_GPU_SCHED_STAT_NONE: Reserved. Do not use.
> + * @DRM_GPU_SCHED_STAT_NOMINAL: Operation succeeded.
> + * @DRM_GPU_SCHED_STAT_ENODEV: Error: Device is not available anymore.
> + */
>  enum drm_gpu_sched_stat {
> -	DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
> +	DRM_GPU_SCHED_STAT_NONE,
>  	DRM_GPU_SCHED_STAT_NOMINAL,
>  	DRM_GPU_SCHED_STAT_ENODEV,
>  };
> @@ -447,43 +454,52 @@ struct drm_sched_backend_ops {
>  	 * @timedout_job: Called when a job has taken too long to execute,
>  	 * to trigger GPU recovery.
>  	 *
> -	 * This method is called in a workqueue context.
> +	 * @sched_job: The job that has timed out
>  	 *
> -	 * Drivers typically issue a reset to recover from GPU hangs, and this
> -	 * procedure usually follows the following workflow:
> +	 * Drivers typically issue a reset to recover from GPU hangs.
> +	 * This procedure looks very different depending on whether a firmware
> +	 * or a hardware scheduler is being used.
>  	 *
> -	 * 1. Stop the scheduler using drm_sched_stop(). This will park the
> -	 *    scheduler thread and cancel the timeout work, guaranteeing that
> -	 *    nothing is queued while we reset the hardware queue
> -	 * 2. Try to gracefully stop non-faulty jobs (optional)
> -	 * 3. Issue a GPU reset (driver-specific)
> -	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> -	 * 5. Restart the scheduler using drm_sched_start(). At that point, new
> -	 *    jobs can be queued, and the scheduler thread is unblocked
> +	 * For a FIRMWARE SCHEDULER, each ring has one scheduler, and each
> +	 * scheduler has one entity. Hence, the steps taken typically look as
> +	 * follows:
> +	 *
> +	 * 1. Stop the scheduler using drm_sched_stop(). This will pause the
> +	 *    scheduler workqueues and cancel the timeout work, guaranteeing
> +	 *    that nothing is queued while the ring is being removed.
> +	 * 2. Remove the ring. The firmware will make sure that the
> +	 *    corresponding parts of the hardware are reset, and that other
> +	 *    rings are not impacted.
> +	 * 3. Kill the entity and the associated scheduler.

Xe doesn't do step 3.

It does:
- Ban entity / scheduler so future submissions are a NOP. This would be
  submissions with unmet dependencies. Submissions at the IOCTL level are
  disallowed
- Signal all job's fences on the pending list
- Restart scheduler so free_job() is naturally called

I'm unsure if this is how other firmware schedulers do it, but it seems
to work quite well in Xe.

Matt

> +	 *
> +	 *
> +	 * For a HARDWARE SCHEDULER, a scheduler instance schedules jobs from
> +	 * one or more entities to one ring. This implies that all entities
> +	 * associated with the affected scheduler cannot be torn down, because
> +	 * this would effectively also affect innocent userspace processes which
> +	 * did not submit faulty jobs (for example).
> +	 *
> +	 * Consequently, the procedure to recover with a hardware scheduler
> +	 * should look like this:
> +	 *
> +	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
> +	 * 2. Kill the entity the faulty job stems from.
> +	 * 3. Issue a GPU reset on all faulty rings (driver-specific).
> > +	 * 4. Re-submit jobs on all schedulers impacted by the reset by
> > +	 *    re-submitting them to the entities which are still alive.
> +	 * 5. Restart all schedulers that were stopped in step #1 using
> +	 *    drm_sched_start().
>  	 *
>  	 * Note that some GPUs have distinct hardware queues but need to reset
>  	 * the GPU globally, which requires extra synchronization between the
> -	 * timeout handler of the different &drm_gpu_scheduler. One way to
> -	 * achieve this synchronization is to create an ordered workqueue
> -	 * (using alloc_ordered_workqueue()) at the driver level, and pass this
> -	 * queue to drm_sched_init(), to guarantee that timeout handlers are
> -	 * executed sequentially. The above workflow needs to be slightly
> -	 * adjusted in that case:
> +	 * timeout handlers of different schedulers. One way to achieve this
> +	 * synchronization is to create an ordered workqueue (using
> +	 * alloc_ordered_workqueue()) at the driver level, and pass this queue
> +	 * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
> +	 * that timeout handlers are executed sequentially.
>  	 *
> -	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
> -	 * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
> -	 *    the reset (optional)
> -	 * 3. Issue a GPU reset on all faulty queues (driver-specific)
> -	 * 4. Re-submit jobs on all schedulers impacted by the reset using
> -	 *    drm_sched_resubmit_jobs()
> -	 * 5. Restart all schedulers that were stopped in step #1 using
> -	 *    drm_sched_start()
> +	 * Return: The scheduler's status, defined by &enum drm_gpu_sched_stat
>  	 *
> -	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> -	 * and the underlying driver has started or completed recovery.
> -	 *
> -	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
> -	 * available, i.e. has been unplugged.
>  	 */
>  	enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
>  
> -- 
> 2.48.1
> 


* Re: [PATCH v7 3/3] drm/sched: Update timedout_job()'s documentation
  2025-03-06 20:57   ` Matthew Brost
@ 2025-03-07  9:37     ` Philipp Stanner
  2025-03-07 17:07       ` Matthew Brost
  2025-03-07 17:35       ` Matthew Brost
  0 siblings, 2 replies; 14+ messages in thread
From: Philipp Stanner @ 2025-03-07  9:37 UTC (permalink / raw)
  To: Matthew Brost, Philipp Stanner
  Cc: Danilo Krummrich, Christian König, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, dri-devel, linux-kernel

On Thu, 2025-03-06 at 12:57 -0800, Matthew Brost wrote:
> On Wed, Mar 05, 2025 at 02:05:52PM +0100, Philipp Stanner wrote:
> > drm_sched_backend_ops.timedout_job()'s documentation is outdated.
> > It
> > mentions the deprecated function drm_sched_resubmit_jobs().
> > Furthermore,
> > it does not point out the important distinction between hardware
> > and
> > firmware schedulers.
> > 
> > Since firmware schedulers typically only use one entity per
> > scheduler,
> > timeout handling is significantly simpler because the entity
> > the
> > faulted job came from can just be killed without affecting innocent
> > processes.
> > 
> > Update the documentation with that distinction and other details.
> > 
> > Reformat the docstring to conform to a unified style with the other
> > callbacks.
> > 
> 
> Looks really good, one suggestion.

Already merged. But I'm already working on the TODO and could address
your feedback in that follow-up.

Of course, would also be great if you could provide a proposal in a
patch? :)

> 
> > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > ---
> >  include/drm/gpu_scheduler.h | 78 ++++++++++++++++++++++-----------
> > ----
> >  1 file changed, 47 insertions(+), 31 deletions(-)
> > 
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index 6381baae8024..1a7e377d4cbb 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -383,8 +383,15 @@ struct drm_sched_job {
> >  	struct xarray			dependencies;
> >  };
> >  
> > +/**
> > + * enum drm_gpu_sched_stat - the scheduler's status
> > + *
> > + * @DRM_GPU_SCHED_STAT_NONE: Reserved. Do not use.
> > + * @DRM_GPU_SCHED_STAT_NOMINAL: Operation succeeded.
> > + * @DRM_GPU_SCHED_STAT_ENODEV: Error: Device is not available
> > anymore.
> > + */
> >  enum drm_gpu_sched_stat {
> > -	DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
> > +	DRM_GPU_SCHED_STAT_NONE,
> >  	DRM_GPU_SCHED_STAT_NOMINAL,
> >  	DRM_GPU_SCHED_STAT_ENODEV,
> >  };
> > @@ -447,43 +454,52 @@ struct drm_sched_backend_ops {
> >  	 * @timedout_job: Called when a job has taken too long to
> > execute,
> >  	 * to trigger GPU recovery.
> >  	 *
> > -	 * This method is called in a workqueue context.
> > +	 * @sched_job: The job that has timed out
> >  	 *
> > -	 * Drivers typically issue a reset to recover from GPU
> > hangs, and this
> > -	 * procedure usually follows the following workflow:
> > +	 * Drivers typically issue a reset to recover from GPU
> > hangs.
> > +	 * This procedure looks very different depending on
> > whether a firmware
> > +	 * or a hardware scheduler is being used.
> >  	 *
> > -	 * 1. Stop the scheduler using drm_sched_stop(). This will
> > park the
> > -	 *    scheduler thread and cancel the timeout work,
> > guaranteeing that
> > -	 *    nothing is queued while we reset the hardware queue
> > -	 * 2. Try to gracefully stop non-faulty jobs (optional)
> > -	 * 3. Issue a GPU reset (driver-specific)
> > -	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> > -	 * 5. Restart the scheduler using drm_sched_start(). At
> > that point, new
> > -	 *    jobs can be queued, and the scheduler thread is
> > unblocked
> > +	 * For a FIRMWARE SCHEDULER, each ring has one scheduler,
> > and each
> > +	 * scheduler has one entity. Hence, the steps taken
> > typically look as
> > +	 * follows:
> > +	 *
> > +	 * 1. Stop the scheduler using drm_sched_stop(). This will
> > pause the
> > +	 *    scheduler workqueues and cancel the timeout work,
> > guaranteeing
> > +	 *    that nothing is queued while the ring is being
> > removed.
> > +	 * 2. Remove the ring. The firmware will make sure that
> > the
> > +	 *    corresponding parts of the hardware are resetted,
> > and that other
> > +	 *    rings are not impacted.
> > +	 * 3. Kill the entity and the associated scheduler.
> 
> Xe doesn't do step 3.
> 
> It does:
> - Ban entity / scheduler so future submissions are a NOP. This would
>   be submissions with unmet dependencies. Submissions at the IOCTL
>   level are disallowed.
> - Signal all jobs' fences on the pending list
> - Restart scheduler so free_job() is naturally called
> 
> I'm unsure if this is how other firmware schedulers do this, but it
> seems to work quite well in Xe.

Alright, so if I interpret this correctly you do that to avoid our
infamous memory leaks. That makes sense.

The memory leaks are documented in drm_sched_fini()'s docu, but it
could make sense to mention them here, too.

… thinking about it, we probably actually have to rephrase this line.
Just tearing down entity & sched makes those leaks very likely. Argh.

Nouveau, also a firmware scheduler, has effectively a copy of the
pending_list and also ensures that all fences get signalled. Only once
that copy of the pending list is empty does it call into drm_sched_fini().
Take a look at nouveau_sched.c if you want, the code is quite
straightforward.

P.

> 
> Matt
> 
> > +	 *
> > +	 *
> > +	 * For a HARDWARE SCHEDULER, a scheduler instance
> > schedules jobs from
> > +	 * one or more entities to one ring. This implies that all
> > entities
> > +	 * associated with the affected scheduler cannot be torn
> > down, because
> > +	 * this would effectively also affect innocent userspace
> > processes which
> > +	 * did not submit faulty jobs (for example).
> > +	 *
> > +	 * Consequently, the procedure to recover with a hardware
> > scheduler
> > +	 * should look like this:
> > +	 *
> > +	 * 1. Stop all schedulers impacted by the reset using
> > drm_sched_stop().
> > +	 * 2. Kill the entity the faulty job stems from.
> > +	 * 3. Issue a GPU reset on all faulty rings (driver-
> > specific).
> > +	 * 4. Re-submit jobs on all schedulers impacted by re-
> > submitting them to
> > +	 *    the entities which are still alive.
> > +	 * 5. Restart all schedulers that were stopped in step #1
> > using
> > +	 *    drm_sched_start().
> >  	 *
> >  	 * Note that some GPUs have distinct hardware queues but
> > need to reset
> >  	 * the GPU globally, which requires extra synchronization
> > between the
> > -	 * timeout handler of the different &drm_gpu_scheduler.
> > One way to
> > -	 * achieve this synchronization is to create an ordered
> > workqueue
> > -	 * (using alloc_ordered_workqueue()) at the driver level,
> > and pass this
> > -	 * queue to drm_sched_init(), to guarantee that timeout
> > handlers are
> > -	 * executed sequentially. The above workflow needs to be
> > slightly
> > -	 * adjusted in that case:
> > +	 * timeout handlers of different schedulers. One way to
> > achieve this
> > +	 * synchronization is to create an ordered workqueue
> > (using
> > +	 * alloc_ordered_workqueue()) at the driver level, and
> > pass this queue
> > +	 * as drm_sched_init()'s @timeout_wq parameter. This will
> > guarantee
> > +	 * that timeout handlers are executed sequentially.
> >  	 *
> > -	 * 1. Stop all schedulers impacted by the reset using
> > drm_sched_stop()
> > -	 * 2. Try to gracefully stop non-faulty jobs on all queues
> > impacted by
> > -	 *    the reset (optional)
> > -	 * 3. Issue a GPU reset on all faulty queues (driver-
> > specific)
> > -	 * 4. Re-submit jobs on all schedulers impacted by the
> > reset using
> > -	 *    drm_sched_resubmit_jobs()
> > -	 * 5. Restart all schedulers that were stopped in step #1
> > using
> > -	 *    drm_sched_start()
> > +	 * Return: The scheduler's status, defined by &enum
> > drm_gpu_sched_stat
> >  	 *
> > -	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> > -	 * and the underlying driver has started or completed
> > recovery.
> > -	 *
> > -	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no
> > longer
> > -	 * available, i.e. has been unplugged.
> >  	 */
> >  	enum drm_gpu_sched_stat (*timedout_job)(struct
> > drm_sched_job *sched_job);
> >  
> > -- 
> > 2.48.1
> > 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v7 3/3] drm/sched: Update timedout_job()'s documentation
  2025-03-07  9:37     ` Philipp Stanner
@ 2025-03-07 17:07       ` Matthew Brost
  2025-03-07 17:35       ` Matthew Brost
  1 sibling, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-03-07 17:07 UTC (permalink / raw)
  To: phasta
  Cc: Danilo Krummrich, Christian König, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, dri-devel, linux-kernel

On Fri, Mar 07, 2025 at 10:37:04AM +0100, Philipp Stanner wrote:
> On Thu, 2025-03-06 at 12:57 -0800, Matthew Brost wrote:
> > On Wed, Mar 05, 2025 at 02:05:52PM +0100, Philipp Stanner wrote:
> > > drm_sched_backend_ops.timedout_job()'s documentation is outdated.
> > > It
> > > mentions the deprecated function drm_sched_resubmit_jobs().
> > > Furthermore,
> > > it does not point out the important distinction between hardware
> > > and
> > > firmware schedulers.
> > > 
> > > Since firmware schedulers typically only use one entity per
> > > scheduler,
> > > timeout handling is significantly more simple because the entity
> > > the
> > > faulted job came from can just be killed without affecting innocent
> > > processes.
> > > 
> > > Update the documentation with that distinction and other details.
> > > 
> > > Reformat the docstring to work to a unified style with the other
> > > handles.
> > > 
> > 
> > Looks really good, one suggestion.
> 
> Already merged. But I'm working already on the TODO and could address
> your feedback in that followup.
> 
> Of course, would also be great if you could provide a proposal in a
> patch? :)
> 

I can post something. Let me try to get something out today.

Matt

> > 

> > > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > > ---
> > >  include/drm/gpu_scheduler.h | 78 ++++++++++++++++++++++-----------
> > > ----
> > >  1 file changed, 47 insertions(+), 31 deletions(-)
> > > 
> > > diff --git a/include/drm/gpu_scheduler.h
> > > b/include/drm/gpu_scheduler.h
> > > index 6381baae8024..1a7e377d4cbb 100644
> > > --- a/include/drm/gpu_scheduler.h
> > > +++ b/include/drm/gpu_scheduler.h
> > > @@ -383,8 +383,15 @@ struct drm_sched_job {
> > >  	struct xarray			dependencies;
> > >  };
> > >  
> > > +/**
> > > + * enum drm_gpu_sched_stat - the scheduler's status
> > > + *
> > > + * @DRM_GPU_SCHED_STAT_NONE: Reserved. Do not use.
> > > + * @DRM_GPU_SCHED_STAT_NOMINAL: Operation succeeded.
> > > + * @DRM_GPU_SCHED_STAT_ENODEV: Error: Device is not available
> > > anymore.
> > > + */
> > >  enum drm_gpu_sched_stat {
> > > -	DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
> > > +	DRM_GPU_SCHED_STAT_NONE,
> > >  	DRM_GPU_SCHED_STAT_NOMINAL,
> > >  	DRM_GPU_SCHED_STAT_ENODEV,
> > >  };
> > > @@ -447,43 +454,52 @@ struct drm_sched_backend_ops {
> > >  	 * @timedout_job: Called when a job has taken too long to
> > > execute,
> > >  	 * to trigger GPU recovery.
> > >  	 *
> > > -	 * This method is called in a workqueue context.
> > > +	 * @sched_job: The job that has timed out
> > >  	 *
> > > -	 * Drivers typically issue a reset to recover from GPU
> > > hangs, and this
> > > -	 * procedure usually follows the following workflow:
> > > +	 * Drivers typically issue a reset to recover from GPU
> > > hangs.
> > > +	 * This procedure looks very different depending on
> > > whether a firmware
> > > +	 * or a hardware scheduler is being used.
> > >  	 *
> > > -	 * 1. Stop the scheduler using drm_sched_stop(). This will
> > > park the
> > > -	 *    scheduler thread and cancel the timeout work,
> > > guaranteeing that
> > > -	 *    nothing is queued while we reset the hardware queue
> > > -	 * 2. Try to gracefully stop non-faulty jobs (optional)
> > > -	 * 3. Issue a GPU reset (driver-specific)
> > > -	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> > > -	 * 5. Restart the scheduler using drm_sched_start(). At
> > > that point, new
> > > -	 *    jobs can be queued, and the scheduler thread is
> > > unblocked
> > > +	 * For a FIRMWARE SCHEDULER, each ring has one scheduler,
> > > and each
> > > +	 * scheduler has one entity. Hence, the steps taken
> > > typically look as
> > > +	 * follows:
> > > +	 *
> > > +	 * 1. Stop the scheduler using drm_sched_stop(). This will
> > > pause the
> > > +	 *    scheduler workqueues and cancel the timeout work,
> > > guaranteeing
> > > +	 *    that nothing is queued while the ring is being
> > > removed.
> > > +	 * 2. Remove the ring. The firmware will make sure that
> > > the
> > > +	 *    corresponding parts of the hardware are resetted,
> > > and that other
> > > +	 *    rings are not impacted.
> > > +	 * 3. Kill the entity and the associated scheduler.
> > 
> > Xe doesn't do step 3.
> > 
> > It does:
> > - Ban entity / scheduler so future submissions are a NOP. This would
> >   be submissions with unmet dependencies. Submissions at the IOCTL
> >   level are disallowed.
> > - Signal all jobs' fences on the pending list
> > - Restart scheduler so free_job() is naturally called
> > 
> > I'm unsure if this is how other firmware schedulers do this, but it
> > seems to work quite well in Xe.
> 
> Alright, so if I interpret this correctly you do that to avoid our
> infamous memory leaks. That makes sense.
> 
> The memory leaks are documented in drm_sched_fini()'s docu, but it
> could make sense to mention them here, too.
> 
> … thinking about it, we probably actually have to rephrase this line.
> Just tearing down entity & sched makes those leaks very likely. Argh.
> 
> Nouveau, also a firmware scheduler, has effectively a copy of the
> pending_list and also ensures that all fences get signalled. Only once
> that copy of the pending list is empty does it call into drm_sched_fini().
> Take a look at nouveau_sched.c if you want, the code is quite
> straightforward.
> 
> P.
> 
> > 
> > Matt
> > 
> > > +	 *
> > > +	 *
> > > +	 * For a HARDWARE SCHEDULER, a scheduler instance
> > > schedules jobs from
> > > +	 * one or more entities to one ring. This implies that all
> > > entities
> > > +	 * associated with the affected scheduler cannot be torn
> > > down, because
> > > +	 * this would effectively also affect innocent userspace
> > > processes which
> > > +	 * did not submit faulty jobs (for example).
> > > +	 *
> > > +	 * Consequently, the procedure to recover with a hardware
> > > scheduler
> > > +	 * should look like this:
> > > +	 *
> > > +	 * 1. Stop all schedulers impacted by the reset using
> > > drm_sched_stop().
> > > +	 * 2. Kill the entity the faulty job stems from.
> > > +	 * 3. Issue a GPU reset on all faulty rings (driver-
> > > specific).
> > > +	 * 4. Re-submit jobs on all schedulers impacted by re-
> > > submitting them to
> > > +	 *    the entities which are still alive.
> > > +	 * 5. Restart all schedulers that were stopped in step #1
> > > using
> > > +	 *    drm_sched_start().
> > >  	 *
> > >  	 * Note that some GPUs have distinct hardware queues but
> > > need to reset
> > >  	 * the GPU globally, which requires extra synchronization
> > > between the
> > > -	 * timeout handler of the different &drm_gpu_scheduler.
> > > One way to
> > > -	 * achieve this synchronization is to create an ordered
> > > workqueue
> > > -	 * (using alloc_ordered_workqueue()) at the driver level,
> > > and pass this
> > > -	 * queue to drm_sched_init(), to guarantee that timeout
> > > handlers are
> > > -	 * executed sequentially. The above workflow needs to be
> > > slightly
> > > -	 * adjusted in that case:
> > > +	 * timeout handlers of different schedulers. One way to
> > > achieve this
> > > +	 * synchronization is to create an ordered workqueue
> > > (using
> > > +	 * alloc_ordered_workqueue()) at the driver level, and
> > > pass this queue
> > > +	 * as drm_sched_init()'s @timeout_wq parameter. This will
> > > guarantee
> > > +	 * that timeout handlers are executed sequentially.
> > >  	 *
> > > -	 * 1. Stop all schedulers impacted by the reset using
> > > drm_sched_stop()
> > > -	 * 2. Try to gracefully stop non-faulty jobs on all queues
> > > impacted by
> > > -	 *    the reset (optional)
> > > -	 * 3. Issue a GPU reset on all faulty queues (driver-
> > > specific)
> > > -	 * 4. Re-submit jobs on all schedulers impacted by the
> > > reset using
> > > -	 *    drm_sched_resubmit_jobs()
> > > -	 * 5. Restart all schedulers that were stopped in step #1
> > > using
> > > -	 *    drm_sched_start()
> > > +	 * Return: The scheduler's status, defined by &enum
> > > drm_gpu_sched_stat
> > >  	 *
> > > -	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> > > -	 * and the underlying driver has started or completed
> > > recovery.
> > > -	 *
> > > -	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no
> > > longer
> > > -	 * available, i.e. has been unplugged.
> > >  	 */
> > >  	enum drm_gpu_sched_stat (*timedout_job)(struct
> > > drm_sched_job *sched_job);
> > >  
> > > -- 
> > > 2.48.1
> > > 
> 


* Re: [PATCH v7 3/3] drm/sched: Update timedout_job()'s documentation
  2025-03-07  9:37     ` Philipp Stanner
  2025-03-07 17:07       ` Matthew Brost
@ 2025-03-07 17:35       ` Matthew Brost
  1 sibling, 0 replies; 14+ messages in thread
From: Matthew Brost @ 2025-03-07 17:35 UTC (permalink / raw)
  To: phasta
  Cc: Danilo Krummrich, Christian König, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, dri-devel, linux-kernel

On Fri, Mar 07, 2025 at 10:37:04AM +0100, Philipp Stanner wrote:
> On Thu, 2025-03-06 at 12:57 -0800, Matthew Brost wrote:
> > On Wed, Mar 05, 2025 at 02:05:52PM +0100, Philipp Stanner wrote:
> > > drm_sched_backend_ops.timedout_job()'s documentation is outdated.
> > > It
> > > mentions the deprecated function drm_sched_resubmit_jobs().
> > > Furthermore,
> > > it does not point out the important distinction between hardware
> > > and
> > > firmware schedulers.
> > > 
> > > Since firmware schedulers typically only use one entity per
> > > scheduler,
> > > timeout handling is significantly more simple because the entity
> > > the
> > > faulted job came from can just be killed without affecting innocent
> > > processes.
> > > 
> > > Update the documentation with that distinction and other details.
> > > 
> > > Reformat the docstring to work to a unified style with the other
> > > handles.
> > > 
> > 
> > Looks really good, one suggestion.
> 
> Already merged. But I'm working already on the TODO and could address
> your feedback in that followup.
> 
> Of course, would also be great if you could provide a proposal in a
> patch? :)
> 
> > 
> > > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > > ---
> > >  include/drm/gpu_scheduler.h | 78 ++++++++++++++++++++++-----------
> > > ----
> > >  1 file changed, 47 insertions(+), 31 deletions(-)
> > > 
> > > diff --git a/include/drm/gpu_scheduler.h
> > > b/include/drm/gpu_scheduler.h
> > > index 6381baae8024..1a7e377d4cbb 100644
> > > --- a/include/drm/gpu_scheduler.h
> > > +++ b/include/drm/gpu_scheduler.h
> > > @@ -383,8 +383,15 @@ struct drm_sched_job {
> > >  	struct xarray			dependencies;
> > >  };
> > >  
> > > +/**
> > > + * enum drm_gpu_sched_stat - the scheduler's status
> > > + *
> > > + * @DRM_GPU_SCHED_STAT_NONE: Reserved. Do not use.
> > > + * @DRM_GPU_SCHED_STAT_NOMINAL: Operation succeeded.
> > > + * @DRM_GPU_SCHED_STAT_ENODEV: Error: Device is not available
> > > anymore.
> > > + */
> > >  enum drm_gpu_sched_stat {
> > > -	DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
> > > +	DRM_GPU_SCHED_STAT_NONE,
> > >  	DRM_GPU_SCHED_STAT_NOMINAL,
> > >  	DRM_GPU_SCHED_STAT_ENODEV,
> > >  };
> > > @@ -447,43 +454,52 @@ struct drm_sched_backend_ops {
> > >  	 * @timedout_job: Called when a job has taken too long to
> > > execute,
> > >  	 * to trigger GPU recovery.
> > >  	 *
> > > -	 * This method is called in a workqueue context.
> > > +	 * @sched_job: The job that has timed out
> > >  	 *
> > > -	 * Drivers typically issue a reset to recover from GPU
> > > hangs, and this
> > > -	 * procedure usually follows the following workflow:
> > > +	 * Drivers typically issue a reset to recover from GPU
> > > hangs.
> > > +	 * This procedure looks very different depending on
> > > whether a firmware
> > > +	 * or a hardware scheduler is being used.
> > >  	 *
> > > -	 * 1. Stop the scheduler using drm_sched_stop(). This will
> > > park the
> > > -	 *    scheduler thread and cancel the timeout work,
> > > guaranteeing that
> > > -	 *    nothing is queued while we reset the hardware queue
> > > -	 * 2. Try to gracefully stop non-faulty jobs (optional)
> > > -	 * 3. Issue a GPU reset (driver-specific)
> > > -	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> > > -	 * 5. Restart the scheduler using drm_sched_start(). At
> > > that point, new
> > > -	 *    jobs can be queued, and the scheduler thread is
> > > unblocked
> > > +	 * For a FIRMWARE SCHEDULER, each ring has one scheduler,
> > > and each
> > > +	 * scheduler has one entity. Hence, the steps taken
> > > typically look as
> > > +	 * follows:
> > > +	 *
> > > +	 * 1. Stop the scheduler using drm_sched_stop(). This will
> > > pause the
> > > +	 *    scheduler workqueues and cancel the timeout work,
> > > guaranteeing
> > > +	 *    that nothing is queued while the ring is being
> > > removed.
> > > +	 * 2. Remove the ring. The firmware will make sure that
> > > the
> > > +	 *    corresponding parts of the hardware are resetted,
> > > and that other
> > > +	 *    rings are not impacted.
> > > +	 * 3. Kill the entity and the associated scheduler.
> > 
> > Xe doesn't do step 3.
> > 
> > It does:
> > - Ban entity / scheduler so future submissions are a NOP. This would
> >   be submissions with unmet dependencies. Submissions at the IOCTL
> >   level are disallowed.
> > - Signal all jobs' fences on the pending list
> > - Restart scheduler so free_job() is naturally called
> > 
> > I'm unsure if this is how other firmware schedulers do this, but it
> > seems to work quite well in Xe.

Missed this part of the reply.

> 
> Alright, so if I interpret this correctly you do that to avoid our
> infamous memory leaks. That makes sense.
>

Yes.
 
> The memory leaks are documented in drm_sched_fini()'s docu, but it
> could make sense to mention them here, too.
> 

The jobs in Xe refcount the scheduler, so we never call drm_sched_fini()
until all jobs in the pending list and dependency queues have called
free_job().

> … thinking about it, we probably actually have to rephrase this line.
> Just tearing down entity & sched makes those leaks very likely. Argh.
> 
> Nouveau, also a firmware scheduler, has effectively a copy of the
> pending_list and also ensures that all fences get signalled. Only once
> that copy of the pending list is empty does it call into drm_sched_fini().
> Take a look at nouveau_sched.c if you want, the code is quite
> straightforward.
> 

Same idea in Xe, I think; we just directly access the pending list.
Let me look at what Nouveau is doing before posting an updated doc
patch here.

Matt

> P.
> 
> > 
> > Matt
> > 
> > > +	 *
> > > +	 *
> > > +	 * For a HARDWARE SCHEDULER, a scheduler instance
> > > schedules jobs from
> > > +	 * one or more entities to one ring. This implies that all
> > > entities
> > > +	 * associated with the affected scheduler cannot be torn
> > > down, because
> > > +	 * this would effectively also affect innocent userspace
> > > processes which
> > > +	 * did not submit faulty jobs (for example).
> > > +	 *
> > > +	 * Consequently, the procedure to recover with a hardware
> > > scheduler
> > > +	 * should look like this:
> > > +	 *
> > > +	 * 1. Stop all schedulers impacted by the reset using
> > > drm_sched_stop().
> > > +	 * 2. Kill the entity the faulty job stems from.
> > > +	 * 3. Issue a GPU reset on all faulty rings (driver-
> > > specific).
> > > +	 * 4. Re-submit jobs on all schedulers impacted by re-
> > > submitting them to
> > > +	 *    the entities which are still alive.
> > > +	 * 5. Restart all schedulers that were stopped in step #1
> > > using
> > > +	 *    drm_sched_start().
> > >  	 *
> > >  	 * Note that some GPUs have distinct hardware queues but
> > > need to reset
> > >  	 * the GPU globally, which requires extra synchronization
> > > between the
> > > -	 * timeout handler of the different &drm_gpu_scheduler.
> > > One way to
> > > -	 * achieve this synchronization is to create an ordered
> > > workqueue
> > > -	 * (using alloc_ordered_workqueue()) at the driver level,
> > > and pass this
> > > -	 * queue to drm_sched_init(), to guarantee that timeout
> > > handlers are
> > > -	 * executed sequentially. The above workflow needs to be
> > > slightly
> > > -	 * adjusted in that case:
> > > +	 * timeout handlers of different schedulers. One way to
> > > achieve this
> > > +	 * synchronization is to create an ordered workqueue
> > > (using
> > > +	 * alloc_ordered_workqueue()) at the driver level, and
> > > pass this queue
> > > +	 * as drm_sched_init()'s @timeout_wq parameter. This will
> > > guarantee
> > > +	 * that timeout handlers are executed sequentially.
> > >  	 *
> > > -	 * 1. Stop all schedulers impacted by the reset using
> > > drm_sched_stop()
> > > -	 * 2. Try to gracefully stop non-faulty jobs on all queues
> > > impacted by
> > > -	 *    the reset (optional)
> > > -	 * 3. Issue a GPU reset on all faulty queues (driver-
> > > specific)
> > > -	 * 4. Re-submit jobs on all schedulers impacted by the
> > > reset using
> > > -	 *    drm_sched_resubmit_jobs()
> > > -	 * 5. Restart all schedulers that were stopped in step #1
> > > using
> > > -	 *    drm_sched_start()
> > > +	 * Return: The scheduler's status, defined by &enum
> > > drm_gpu_sched_stat
> > >  	 *
> > > -	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> > > -	 * and the underlying driver has started or completed
> > > recovery.
> > > -	 *
> > > -	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no
> > > longer
> > > -	 * available, i.e. has been unplugged.
> > >  	 */
> > >  	enum drm_gpu_sched_stat (*timedout_job)(struct
> > > drm_sched_job *sched_job);
> > >  
> > > -- 
> > > 2.48.1
> > > 
> 


* Re: [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job()
  2025-03-05 13:05 ` [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
  2025-03-05 13:45   ` Bagas Sanjaya
@ 2025-03-07 18:09   ` Maíra Canal
  2025-03-07 18:17     ` Philipp Stanner
  1 sibling, 1 reply; 14+ messages in thread
From: Maíra Canal @ 2025-03-07 18:09 UTC (permalink / raw)
  To: Philipp Stanner, Matthew Brost, Danilo Krummrich,
	Christian König, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal
  Cc: dri-devel, linux-kernel

Hi Philipp,

On 05/03/25 10:05, Philipp Stanner wrote:
> The documentation for drm_sched_backend_ops.run_job() mentions a certain
> function called drm_sched_job_recovery(). This function does not exist.
> What's actually meant is drm_sched_resubmit_jobs(), which is by now also
> deprecated.
> 
> Furthermore, the scheduler expects to "inherit" a reference on the fence
> from the run_job() callback. This, so far, is also not documented.
> 
> Remove the mention of the removed function.
> 
> Discourage the behavior of drm_sched_backend_ops.run_job() being called
> multiple times for the same job.
> 
> Document the necessity of incrementing the refcount in run_job().
> 
> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
>   include/drm/gpu_scheduler.h | 34 ++++++++++++++++++++++++++++++----
>   1 file changed, 30 insertions(+), 4 deletions(-)
> 
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 50928a7ae98e..6381baae8024 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -410,10 +410,36 @@ struct drm_sched_backend_ops {
>   					 struct drm_sched_entity *s_entity);
>   
>   	/**
> -         * @run_job: Called to execute the job once all of the dependencies
> -         * have been resolved.  This may be called multiple times, if
> -	 * timedout_job() has happened and drm_sched_job_recovery()
> -	 * decides to try it again.
> +	 * @run_job: Called to execute the job once all of the dependencies
> +	 * have been resolved.
> +	 *
> +	 * @sched_job: the job to run
> +	 *
> +	 * The deprecated drm_sched_resubmit_jobs() (called by &struct
> +	 * drm_sched_backend_ops.timedout_job) can invoke this again with the
> +	 * same parameters. Using this is discouraged because it violates
> +	 * dma_fence rules, notably dma_fence_init() has to be called on
> +	 * already initialized fences for a second time. Moreover, this is
> +	 * dangerous because attempts to allocate memory might deadlock with
> +	 * memory management code waiting for the reset to complete.

Thanks for adding this paragraph! Also, thanks Christian for providing
this explanation in v5. It really helped clarify the reasoning behind
deprecating drm_sched_resubmit_jobs().

Best Regards,
- Maíra

> +	 *
> +	 * TODO: Document what drivers should do / use instead.
> +	 *
> +	 * This method is called in a workqueue context - either from the
> +	 * submit_wq the driver passed through drm_sched_init(), or, if the
> +	 * driver passed NULL, a separate, ordered workqueue the scheduler
> +	 * allocated.
> +	 *
> +	 * Note that the scheduler expects to 'inherit' its own reference to
> +	 * this fence from the callback. It does not invoke an extra
> +	 * dma_fence_get() on it. Consequently, this callback must take a
> +	 * reference for the scheduler, and additional ones for the driver's
> +	 * respective needs.
> +	 *
> +	 * Return:
> +	 * * On success: dma_fence the driver must signal once the hardware has
> +	 * completed the job ("hardware fence").
> +	 * * On failure: NULL or an ERR_PTR.
>   	 */
>   	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>   



* Re: [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job()
  2025-03-07 18:09   ` Maíra Canal
@ 2025-03-07 18:17     ` Philipp Stanner
  0 siblings, 0 replies; 14+ messages in thread
From: Philipp Stanner @ 2025-03-07 18:17 UTC (permalink / raw)
  To: Maíra Canal, Philipp Stanner, Matthew Brost,
	Danilo Krummrich, Christian König, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal
  Cc: dri-devel, linux-kernel

On Fri, 2025-03-07 at 15:09 -0300, Maíra Canal wrote:
> Hi Philipp,
> 
> On 05/03/25 10:05, Philipp Stanner wrote:
> > The documentation for drm_sched_backend_ops.run_job() mentions a
> > certain
> > function called drm_sched_job_recovery(). This function does not
> > exist.
> > What's actually meant is drm_sched_resubmit_jobs(), which is by now
> > also
> > deprecated.
> > 
> > Furthermore, the scheduler expects to "inherit" a reference on the
> > fence
> > from the run_job() callback. This, so far, is also not documented.
> > 
> > Remove the mention of the removed function.
> > 
> > Discourage the behavior of drm_sched_backend_ops.run_job() being
> > called
> > multiple times for the same job.
> > 
> > Document the necessity of incrementing the refcount in run_job().
> > 
> > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > ---
> >   include/drm/gpu_scheduler.h | 34 ++++++++++++++++++++++++++++++--
> > --
> >   1 file changed, 30 insertions(+), 4 deletions(-)
> > 
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index 50928a7ae98e..6381baae8024 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -410,10 +410,36 @@ struct drm_sched_backend_ops {
> >   					 struct drm_sched_entity
> > *s_entity);
> >   
> >   	/**
> > -         * @run_job: Called to execute the job once all of the
> > dependencies
> > -         * have been resolved.  This may be called multiple times,
> > if
> > -	 * timedout_job() has happened and
> > drm_sched_job_recovery()
> > -	 * decides to try it again.
> > +	 * @run_job: Called to execute the job once all of the
> > dependencies
> > +	 * have been resolved.
> > +	 *
> > +	 * @sched_job: the job to run
> > +	 *
> > +	 * The deprecated drm_sched_resubmit_jobs() (called by
> > &struct
> > +	 * drm_sched_backend_ops.timedout_job) can invoke this
> > again with the
> > +	 * same parameters. Using this is discouraged because it
> > violates
> > +	 * dma_fence rules, notably dma_fence_init() has to be
> > called on
> > +	 * already initialized fences for a second time. Moreover,
> > this is
> > +	 * dangerous because attempts to allocate memory might
> > deadlock with
> > +	 * memory management code waiting for the reset to
> > complete.
> 
> Thanks for adding this paragraph!

You're welcome


>  Also, thanks Christian for providing
> this explanation in v5. It really helped clarify the reasoning behind
> deprecating drm_sched_resubmit_jobs().

I thought a bit more about it over the last few days and think that you
are right: we definitely have to tell drivers with a hardware scheduler
how they can achieve that without using drm_sched_resubmit_jobs().

Unfortunately, I discovered that this is quite complicated and
certainly difficult to do right.

So I'd only feel comfortable writing more docu about that once we've got
more input from Christian or someone else who's got a hardware
scheduler about how they're currently doing it.


Cheers
P.

> 
> Best Regards,
> - Maíra
> 
> > +	 *
> > +	 * TODO: Document what drivers should do / use instead.
> > +	 *
> > +	 * This method is called in a workqueue context - either
> > from the
> > +	 * submit_wq the driver passed through drm_sched_init(),
> > or, if the
> > +	 * driver passed NULL, a separate, ordered workqueue the
> > scheduler
> > +	 * allocated.
> > +	 *
> > +	 * Note that the scheduler expects to 'inherit' its own
> > reference to
> > +	 * this fence from the callback. It does not invoke an
> > extra
> > +	 * dma_fence_get() on it. Consequently, this callback must
> > take a
> > +	 * reference for the scheduler, and additional ones for
> > the driver's
> > +	 * respective needs.
> > +	 *
> > +	 * Return:
> > +	 * * On success: dma_fence the driver must signal once the
> > hardware has
> > +	 * completed the job ("hardware fence").
> > +	 * * On failure: NULL or an ERR_PTR.
> >   	 */
> >   	struct dma_fence *(*run_job)(struct drm_sched_job
> > *sched_job);
> >   
> 



end of thread, other threads:[~2025-03-07 18:17 UTC | newest]

Thread overview: 14+ messages
2025-03-05 13:05 [PATCH v7 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
2025-03-05 13:05 ` [PATCH v7 1/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
2025-03-05 13:45   ` Bagas Sanjaya
2025-03-05 14:24     ` Philipp Stanner
2025-03-07 18:09   ` Maíra Canal
2025-03-07 18:17     ` Philipp Stanner
2025-03-05 13:05 ` [PATCH v7 2/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
2025-03-05 13:05 ` [PATCH v7 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
2025-03-06 20:57   ` Matthew Brost
2025-03-07  9:37     ` Philipp Stanner
2025-03-07 17:07       ` Matthew Brost
2025-03-07 17:35       ` Matthew Brost
2025-03-06 14:28 ` [PATCH v7 0/3] drm/sched: Documentation and refcount improvements Danilo Krummrich
2025-03-06 15:44   ` Philipp Stanner

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox