* [PATCH v5 0/3] drm/sched: Documentation and refcount improvements
@ 2025-02-20 11:28 Philipp Stanner
2025-02-20 11:28 ` [PATCH v5 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-02-20 11:28 UTC (permalink / raw)
To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
Christian König, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Tvrtko Ursulin
Cc: dri-devel, linux-kernel
Changes in v5:
- Fix broken enumerated list in timedout_job's docu.
- Add TODO for documenting the dma_fence rules in timedout_job one
day.
Changes in v4:
- Remove mention of vague "dma_fence rules" in timedout_job() again
since I couldn't get input on what those rules precisely are.
- Address a forgotten TODO. (Me)
- Reposition "Return:" statements to make them congruent with the
official kernel style. (Tvrtko)
- Change formatting a bit because of crazy make htmldocs errors. (Me)
Changes in v3:
- timedout_job(): various docu wording improvements. (Danilo)
- Use the term "ring" consistently. (Danilo)
- Add fully fledged docu for enum drm_gpu_sched_stat. (Danilo)
Changes in v2:
- Document what run_job() is allowed to return. (Tvrtko)
- Delete confusing comment about putting the fence. (Danilo)
- Apply Danilo's RB to patch 1.
- Delete info about job recovery for entities in patch 3. (Danilo, me)
- Set the term "ring" as fix term for both HW rings and FW rings. A
ring shall always be the thingy on the CPU ;) (Danilo)
- Many (all) other comments improvements in patch 3. (Danilo)
This is a series succeeding my previous patch [1].
I recognized that we are still referring to a nonexistent function and
a deprecated one in the callback docu. We should probably also point out
the important distinction between hardware and firmware schedulers more
cleanly.
Please give me feedback, especially on the RFC comments in patch 3.
(This series still fires docu-build-warnings. I want to gather feedback
on the open questions first and will solve them in the next version.)
Thank you,
Philipp
[1] https://lore.kernel.org/all/20241220124515.93169-2-phasta@kernel.org/
Philipp Stanner (3):
drm/sched: Document run_job() refcount hazard
drm/sched: Adjust outdated docu for run_job()
drm/sched: Update timedout_job()'s documentation
drivers/gpu/drm/scheduler/sched_main.c | 5 +-
include/drm/gpu_scheduler.h | 109 +++++++++++++++++--------
2 files changed, 76 insertions(+), 38 deletions(-)
--
2.47.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v5 1/3] drm/sched: Document run_job() refcount hazard
2025-02-20 11:28 [PATCH v5 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
@ 2025-02-20 11:28 ` Philipp Stanner
2025-02-20 11:28 ` [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
2025-02-20 11:28 ` [PATCH v5 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
2 siblings, 0 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-02-20 11:28 UTC (permalink / raw)
To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
Christian König, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Tvrtko Ursulin
Cc: dri-devel, linux-kernel, Philipp Stanner
From: Philipp Stanner <pstanner@redhat.com>
drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
That fence is signalled by the driver once the hardware completed the
associated job. The scheduler does not increment the reference count on
that fence, but implicitly expects to inherit this fence from run_job().
This is relatively subtle and prone to misunderstandings.
This implies that, to keep a reference for itself, a driver needs to
call dma_fence_get() in addition to dma_fence_init() in that callback.
It's further complicated by the fact that the scheduler even decrements
the refcount in drm_sched_run_job_work() since it created a new
reference in drm_sched_fence_scheduled(). It does, however, still use
its pointer to the fence after calling dma_fence_put() - which is safe
because of the aforementioned new reference, but actually still violates
the refcounting rules.
Move the call to dma_fence_put() to the position behind the last usage
of the fence.
Document the necessity to increment the reference count in
drm_sched_backend_ops.run_job().
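The reference flow described above can be sketched with a minimal model. The types and helpers below are invented stand-ins, not the kernel's dma_fence API; they only mirror the get/put contract between driver and scheduler:

```c
#include <assert.h>

/*
 * Stand-in model of the refcounting contract: the fence starts with
 * one reference (like dma_fence_init()), and the driver's run_job()
 * must take an extra one that the scheduler then inherits.
 */
struct mock_fence {
	int refcount;
};

static struct mock_fence hw_fence;

static void mock_fence_get(struct mock_fence *f) { f->refcount++; }
static void mock_fence_put(struct mock_fence *f) { f->refcount--; }

/* Models a driver's run_job(): hands one reference to the scheduler. */
static struct mock_fence *mock_run_job(void)
{
	hw_fence.refcount = 1;		/* like dma_fence_init(): one ref */
	mock_fence_get(&hw_fence);	/* extra ref the scheduler inherits */
	return &hw_fence;
}
```

With this flow, the scheduler's dma_fence_put() after its last use of the fence (as moved by this patch) leaves the driver's own reference intact.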
Suggested-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Philipp Stanner <pstanner@redhat.com>
Reviewed-by: Danilo Krummrich <dakr@kernel.org>
---
drivers/gpu/drm/scheduler/sched_main.c | 5 ++---
include/drm/gpu_scheduler.h | 19 +++++++++++++++----
2 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 8c36a59afb72..02af3f89099d 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1222,15 +1222,14 @@ static void drm_sched_run_job_work(struct work_struct *w)
drm_sched_fence_scheduled(s_fence, fence);
if (!IS_ERR_OR_NULL(fence)) {
- /* Drop for original kref_init of the fence */
- dma_fence_put(fence);
-
r = dma_fence_add_callback(fence, &sched_job->cb,
drm_sched_job_done_cb);
if (r == -ENOENT)
drm_sched_job_done(sched_job, fence->error);
else if (r)
DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
+
+ dma_fence_put(fence);
} else {
drm_sched_job_done(sched_job, IS_ERR(fence) ?
PTR_ERR(fence) : 0);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 6bf458dbce84..916279b5aa00 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
struct drm_sched_entity *s_entity);
/**
- * @run_job: Called to execute the job once all of the dependencies
- * have been resolved. This may be called multiple times, if
- * timedout_job() has happened and drm_sched_job_recovery()
- * decides to try it again.
+ * @run_job: Called to execute the job once all of the dependencies
+ * have been resolved. This may be called multiple times, if
+ * timedout_job() has happened and drm_sched_job_recovery() decides to
+ * try it again.
+ *
+ * @sched_job: the job to run
+ *
+ * Returns: dma_fence the driver must signal once the hardware has
+ * completed the job ("hardware fence").
+ *
+ * Note that the scheduler expects to 'inherit' its own reference to
+ * this fence from the callback. It does not invoke an extra
+ * dma_fence_get() on it. Consequently, this callback must take a
+ * reference for the scheduler, and additional ones for the driver's
+ * respective needs.
*/
struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
--
2.47.1
* [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job()
2025-02-20 11:28 [PATCH v5 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
2025-02-20 11:28 ` [PATCH v5 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
@ 2025-02-20 11:28 ` Philipp Stanner
2025-02-20 13:28 ` Maíra Canal
2025-02-20 11:28 ` [PATCH v5 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
2 siblings, 1 reply; 13+ messages in thread
From: Philipp Stanner @ 2025-02-20 11:28 UTC (permalink / raw)
To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
Christian König, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Tvrtko Ursulin
Cc: dri-devel, linux-kernel
The documentation for drm_sched_backend_ops.run_job() mentions a certain
function called drm_sched_job_recovery(). This function does not exist.
What's actually meant is drm_sched_resubmit_jobs(), which is by now also
deprecated.
Remove the mention of the removed function.
Discourage the behavior of drm_sched_backend_ops.run_job() being called
multiple times for the same job.
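For illustration, the return convention this patch documents can be modeled with stand-in versions of the kernel's ERR_PTR() helpers. The definitions below are simplified re-implementations mirroring include/linux/err.h, not the kernel code, and classify_run_job_result() mirrors the branch the scheduler takes in drm_sched_run_job_work():

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's ERR_PTR() pointer encoding. */
#define MAX_ERRNO 4095

static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }

static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}

/*
 * Models the scheduler's handling of run_job()'s return value:
 * a real fence means success; NULL or an ERR_PTR means the job is
 * completed immediately with the corresponding error (0 for NULL).
 */
static long classify_run_job_result(const void *fence)
{
	if (!IS_ERR_OR_NULL(fence))
		return 1;	/* success: driver returned a fence */

	return IS_ERR(fence) ? PTR_ERR(fence) : 0;
}
```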
Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
include/drm/gpu_scheduler.h | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 916279b5aa00..29e5bda91806 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -421,20 +421,27 @@ struct drm_sched_backend_ops {
/**
* @run_job: Called to execute the job once all of the dependencies
- * have been resolved. This may be called multiple times, if
- * timedout_job() has happened and drm_sched_job_recovery() decides to
- * try it again.
+ * have been resolved.
+ *
+ * The deprecated drm_sched_resubmit_jobs() (called from
+ * drm_sched_backend_ops.timedout_job()) can invoke this again with the
+ * same parameters. Using this is discouraged because it, presumably,
+ * violates dma_fence rules.
+ *
+ * TODO: Document which fence rules above.
*
* @sched_job: the job to run
*
- * Returns: dma_fence the driver must signal once the hardware has
- * completed the job ("hardware fence").
- *
* Note that the scheduler expects to 'inherit' its own reference to
* this fence from the callback. It does not invoke an extra
* dma_fence_get() on it. Consequently, this callback must take a
* reference for the scheduler, and additional ones for the driver's
* respective needs.
+ *
+ * Return:
+ * * On success: dma_fence the driver must signal once the hardware has
+ * completed the job ("hardware fence").
+ * * On failure: NULL or an ERR_PTR.
*/
struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
--
2.47.1
* [PATCH v5 3/3] drm/sched: Update timedout_job()'s documentation
2025-02-20 11:28 [PATCH v5 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
2025-02-20 11:28 ` [PATCH v5 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
2025-02-20 11:28 ` [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
@ 2025-02-20 11:28 ` Philipp Stanner
2025-02-20 13:42 ` Maíra Canal
2 siblings, 1 reply; 13+ messages in thread
From: Philipp Stanner @ 2025-02-20 11:28 UTC (permalink / raw)
To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
Christian König, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Tvrtko Ursulin
Cc: dri-devel, linux-kernel
drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
mentions the deprecated function drm_sched_resubmit_job(). Furthermore,
it does not point out the important distinction between hardware and
firmware schedulers.
Since firmware schedulers typically only use one entity per scheduler,
timeout handling is significantly simpler, because the entity the
faulted job came from can just be killed without affecting innocent
processes.
Update the documentation with that distinction and other details.
Reformat the docstring to a unified style consistent with the other
callbacks.
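The hardware-scheduler recovery sequence this patch documents can be sketched as a call-order model. Every function below is an invented stub that only records its position in the sequence; the real entry points are drm_sched_stop() and drm_sched_start(), with driver-specific code in between:

```c
#include <assert.h>
#include <string.h>

/* Records the order in which the recovery steps are performed. */
static const char *steps[8];
static int nsteps;

static void record(const char *step)
{
	steps[nsteps++] = step;
}

/* Invented stubs standing in for the real recovery steps. */
static void stop_schedulers_stub(void)  { record("stop"); }
static void kill_entity_stub(void)      { record("kill_entity"); }
static void gpu_reset_stub(void)        { record("reset"); }
static void resubmit_jobs_stub(void)    { record("resubmit"); }
static void start_schedulers_stub(void) { record("start"); }

/* Models a timedout_job() handler for a hardware scheduler. */
static void mock_timedout_job(void)
{
	stop_schedulers_stub();	 /* 1. drm_sched_stop() on impacted schedulers */
	kill_entity_stub();	 /* 2. kill the faulty job's entity */
	gpu_reset_stub();	 /* 3. driver-specific GPU reset */
	resubmit_jobs_stub();	 /* 4. re-submit to surviving entities */
	start_schedulers_stub(); /* 5. drm_sched_start() */
}
```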
Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
include/drm/gpu_scheduler.h | 83 +++++++++++++++++++++++--------------
1 file changed, 52 insertions(+), 31 deletions(-)
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 29e5bda91806..18cdeacf8651 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -393,8 +393,15 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
return s_job && atomic_inc_return(&s_job->karma) > threshold;
}
+/**
+ * enum drm_gpu_sched_stat - the scheduler's status
+ *
+ * @DRM_GPU_SCHED_STAT_NONE: Reserved. Do not use.
+ * @DRM_GPU_SCHED_STAT_NOMINAL: Operation succeeded.
+ * @DRM_GPU_SCHED_STAT_ENODEV: Error: Device is not available anymore.
+ */
enum drm_gpu_sched_stat {
- DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
+ DRM_GPU_SCHED_STAT_NONE,
DRM_GPU_SCHED_STAT_NOMINAL,
DRM_GPU_SCHED_STAT_ENODEV,
};
@@ -430,6 +437,11 @@ struct drm_sched_backend_ops {
*
* TODO: Document which fence rules above.
*
+ * This method is called in a workqueue context - either from the
+ * submit_wq the driver passed through &drm_sched_init(), or, if the
+ * driver passed NULL, a separate, ordered workqueue the scheduler
+ * allocated.
+ *
* @sched_job: the job to run
*
* Note that the scheduler expects to 'inherit' its own reference to
@@ -449,43 +461,52 @@ struct drm_sched_backend_ops {
* @timedout_job: Called when a job has taken too long to execute,
* to trigger GPU recovery.
*
- * This method is called in a workqueue context.
+ * @sched_job: The job that has timed out
*
- * Drivers typically issue a reset to recover from GPU hangs, and this
- * procedure usually follows the following workflow:
+ * Drivers typically issue a reset to recover from GPU hangs.
+ * This procedure looks very different depending on whether a firmware
+ * or a hardware scheduler is being used.
*
- * 1. Stop the scheduler using drm_sched_stop(). This will park the
- * scheduler thread and cancel the timeout work, guaranteeing that
- * nothing is queued while we reset the hardware queue
- * 2. Try to gracefully stop non-faulty jobs (optional)
- * 3. Issue a GPU reset (driver-specific)
- * 4. Re-submit jobs using drm_sched_resubmit_jobs()
- * 5. Restart the scheduler using drm_sched_start(). At that point, new
- * jobs can be queued, and the scheduler thread is unblocked
+ * For a FIRMWARE SCHEDULER, each ring has one scheduler, and each
+ * scheduler has one entity. Hence, the steps taken typically look as
+ * follows:
+ *
+ * 1. Stop the scheduler using drm_sched_stop(). This will pause the
+ * scheduler workqueues and cancel the timeout work, guaranteeing
+ * that nothing is queued while the ring is being removed.
+ * 2. Remove the ring. The firmware will make sure that the
+ * corresponding parts of the hardware are reset, and that other
+ * rings are not impacted.
+ * 3. Kill the entity and the associated scheduler.
+ *
+ *
+ * For a HARDWARE SCHEDULER, a scheduler instance schedules jobs from
+ * one or more entities to one ring. This implies that all entities
+ * associated with the affected scheduler cannot be torn down, because
+ * this would effectively also affect innocent userspace processes which
+ * did not submit faulty jobs (for example).
+ *
+ * Consequently, the procedure to recover with a hardware scheduler
+ * should look like this:
+ *
+ * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
+ * 2. Kill the entity the faulty job stems from.
+ * 3. Issue a GPU reset on all faulty rings (driver-specific).
+ * 4. Re-submit jobs on all impacted schedulers by re-submitting them
+ * to the entities which are still alive.
+ * 5. Restart all schedulers that were stopped in step #1 using
+ * drm_sched_start().
*
* Note that some GPUs have distinct hardware queues but need to reset
* the GPU globally, which requires extra synchronization between the
- * timeout handler of the different &drm_gpu_scheduler. One way to
- * achieve this synchronization is to create an ordered workqueue
- * (using alloc_ordered_workqueue()) at the driver level, and pass this
- * queue to drm_sched_init(), to guarantee that timeout handlers are
- * executed sequentially. The above workflow needs to be slightly
- * adjusted in that case:
+ * timeout handlers of different schedulers. One way to achieve this
+ * synchronization is to create an ordered workqueue (using
+ * alloc_ordered_workqueue()) at the driver level, and pass this queue
+ * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
+ * that timeout handlers are executed sequentially.
*
- * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
- * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
- * the reset (optional)
- * 3. Issue a GPU reset on all faulty queues (driver-specific)
- * 4. Re-submit jobs on all schedulers impacted by the reset using
- * drm_sched_resubmit_jobs()
- * 5. Restart all schedulers that were stopped in step #1 using
- * drm_sched_start()
+ * Return: The scheduler's status, defined by &drm_gpu_sched_stat
*
- * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
- * and the underlying driver has started or completed recovery.
- *
- * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
- * available, i.e. has been unplugged.
*/
enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
--
2.47.1
* Re: [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job()
2025-02-20 11:28 ` [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
@ 2025-02-20 13:28 ` Maíra Canal
2025-02-20 15:28 ` Philipp Stanner
0 siblings, 1 reply; 13+ messages in thread
From: Maíra Canal @ 2025-02-20 13:28 UTC (permalink / raw)
To: Philipp Stanner, Matthew Brost, Danilo Krummrich,
Christian König, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Tvrtko Ursulin
Cc: dri-devel, linux-kernel
Hi Philipp,
On 20/02/25 08:28, Philipp Stanner wrote:
> The documentation for drm_sched_backend_ops.run_job() mentions a certain
> function called drm_sched_job_recovery(). This function does not exist.
> What's actually meant is drm_sched_resubmit_jobs(), which is by now also
> deprecated.
>
> Remove the mention of the removed function.
>
> Discourage the behavior of drm_sched_backend_ops.run_job() being called
> multiple times for the same job.
It looks odd to me that this patch removes lines that were added in
patch 1/3. Maybe you could change the patchset order and place this one
as the first.
>
> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
> include/drm/gpu_scheduler.h | 19 +++++++++++++------
> 1 file changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 916279b5aa00..29e5bda91806 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -421,20 +421,27 @@ struct drm_sched_backend_ops {
>
> /**
> * @run_job: Called to execute the job once all of the dependencies
> - * have been resolved. This may be called multiple times, if
> - * timedout_job() has happened and drm_sched_job_recovery() decides to
> - * try it again.
> + * have been resolved.
> + *
> + * The deprecated drm_sched_resubmit_jobs() (called from
> + * drm_sched_backend_ops.timedout_job()) can invoke this again with the
I think it would be "@timedout_job".
> + * same parameters. Using this is discouraged because it, presumably,
> + * violates dma_fence rules.
I believe it would be "struct dma_fence".
> + *
> + * TODO: Document which fence rules above.
> *
> * @sched_job: the job to run
> *
> - * Returns: dma_fence the driver must signal once the hardware has
> - * completed the job ("hardware fence").
> - *
> * Note that the scheduler expects to 'inherit' its own reference to
> * this fence from the callback. It does not invoke an extra
> * dma_fence_get() on it. Consequently, this callback must take a
> * reference for the scheduler, and additional ones for the driver's
> * respective needs.
Would it be possible to add a comment that `run_job()` must check if
`s_fence->finished.error` is different from 0? If you increase the karma
of a job and don't check for `s_fence->finished.error`, you might run a
cancelled job.
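The check suggested here could be sketched as follows, with a stand-in type (the real field would be the job's s_fence->finished.error; the struct below only models that one field):

```c
#include <assert.h>

/* Stand-in for a job; models only s_fence->finished.error. */
struct mock_sched_job {
	int finished_error;	/* non-zero if the job was cancelled */
	int ran;		/* set when the job is programmed on the ring */
};

/* Returns 0 if the job was run, or the job's error if it was cancelled. */
static int mock_run_job_checked(struct mock_sched_job *job)
{
	if (job->finished_error)
		return job->finished_error;	/* cancelled: do not run it */

	job->ran = 1;	/* driver-specific ring programming would go here */
	return 0;
}
```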
> + *
> + * Return:
> + * * On success: dma_fence the driver must signal once the hardware has
> + * completed the job ("hardware fence").
A suggestion: "the fence that the driver must signal once the hardware
has completed the job".
Best Regards,
- Maíra
> + * * On failure: NULL or an ERR_PTR.
> */
> struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>
* Re: [PATCH v5 3/3] drm/sched: Update timedout_job()'s documentation
2025-02-20 11:28 ` [PATCH v5 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
@ 2025-02-20 13:42 ` Maíra Canal
2025-02-20 15:18 ` Philipp Stanner
0 siblings, 1 reply; 13+ messages in thread
From: Maíra Canal @ 2025-02-20 13:42 UTC (permalink / raw)
To: Philipp Stanner, Matthew Brost, Danilo Krummrich,
Christian König, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Tvrtko Ursulin
Cc: dri-devel, linux-kernel
Hi Philipp,
On 20/02/25 08:28, Philipp Stanner wrote:
> drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
> mentions the deprecated function drm_sched_resubmit_job(). Furthermore,
> it does not point out the important distinction between hardware and
> firmware schedulers.
>
> Since firmware schedulers tyipically only use one entity per scheduler,
> timeout handling is significantly more simple because the entity the
> faulted job came from can just be killed without affecting innocent
> processes.
>
> Update the documentation with that distinction and other details.
>
> Reformat the docstring to work to a unified style with the other
> handles.
>
> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
> include/drm/gpu_scheduler.h | 83 +++++++++++++++++++++++--------------
> 1 file changed, 52 insertions(+), 31 deletions(-)
>
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 29e5bda91806..18cdeacf8651 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -393,8 +393,15 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
> return s_job && atomic_inc_return(&s_job->karma) > threshold;
> }
>
> +/**
> + * enum drm_gpu_sched_stat - the scheduler's status
> + *
> + * @DRM_GPU_SCHED_STAT_NONE: Reserved. Do not use.
> + * @DRM_GPU_SCHED_STAT_NOMINAL: Operation succeeded.
> + * @DRM_GPU_SCHED_STAT_ENODEV: Error: Device is not available anymore.
> + */
> enum drm_gpu_sched_stat {
> - DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
> + DRM_GPU_SCHED_STAT_NONE,
> DRM_GPU_SCHED_STAT_NOMINAL,
> DRM_GPU_SCHED_STAT_ENODEV,
> };
> @@ -430,6 +437,11 @@ struct drm_sched_backend_ops {
> *
> * TODO: Document which fence rules above.
> *
> + * This method is called in a workqueue context - either from the
> + * submit_wq the driver passed through &drm_sched_init(), or, if the
> + * driver passed NULL, a separate, ordered workqueue the scheduler
> + * allocated.
> + *
The commit message mentions "Update timedout_job()'s documentation". As
this hunk is related to `run_job()`, maybe it would be a better fit to
patch 2/3.
> * @sched_job: the job to run
> *
> * Note that the scheduler expects to 'inherit' its own reference to
> @@ -449,43 +461,52 @@ struct drm_sched_backend_ops {
> * @timedout_job: Called when a job has taken too long to execute,
> * to trigger GPU recovery.
> *
> - * This method is called in a workqueue context.
> + * @sched_job: The job that has timed out
> *
> - * Drivers typically issue a reset to recover from GPU hangs, and this
> - * procedure usually follows the following workflow:
> + * Drivers typically issue a reset to recover from GPU hangs.
> + * This procedure looks very different depending on whether a firmware
> + * or a hardware scheduler is being used.
> *
> - * 1. Stop the scheduler using drm_sched_stop(). This will park the
> - * scheduler thread and cancel the timeout work, guaranteeing that
> - * nothing is queued while we reset the hardware queue
> - * 2. Try to gracefully stop non-faulty jobs (optional)
> - * 3. Issue a GPU reset (driver-specific)
> - * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> - * 5. Restart the scheduler using drm_sched_start(). At that point, new
> - * jobs can be queued, and the scheduler thread is unblocked
> + * For a FIRMWARE SCHEDULER, each ring has one scheduler, and each
> + * scheduler has one entity. Hence, the steps taken typically look as
> + * follows:
> + *
> + * 1. Stop the scheduler using drm_sched_stop(). This will pause the
> + * scheduler workqueues and cancel the timeout work, guaranteeing
> + * that nothing is queued while the ring is being removed.
> + * 2. Remove the ring. The firmware will make sure that the
> + * corresponding parts of the hardware are resetted, and that other
> + * rings are not impacted.
> + * 3. Kill the entity and the associated scheduler.
> + *
> + *
> + * For a HARDWARE SCHEDULER, a scheduler instance schedules jobs from
> + * one or more entities to one ring. This implies that all entities
> + * associated with the affected scheduler cannot be torn down, because
> + * this would effectively also affect innocent userspace processes which
> + * did not submit faulty jobs (for example).
> + *
> + * Consequently, the procedure to recover with a hardware scheduler
> + * should look like this:
> + *
> + * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
> + * 2. Kill the entity the faulty job stems from.
> + * 3. Issue a GPU reset on all faulty rings (driver-specific).
> + * 4. Re-submit jobs on all schedulers impacted by re-submitting them to
> + * the entities which are still alive.
I believe that a mention of `drm_sched_resubmit_jobs()` is still
worthwhile, even if only to note that it is deprecated and shouldn't be
used in new code. It is deprecated indeed, but we still have five users.
Best Regards,
- Maíra
> + * 5. Restart all schedulers that were stopped in step #1 using
> + * drm_sched_start().
> *
> * Note that some GPUs have distinct hardware queues but need to reset
> * the GPU globally, which requires extra synchronization between the
> - * timeout handler of the different &drm_gpu_scheduler. One way to
> - * achieve this synchronization is to create an ordered workqueue
> - * (using alloc_ordered_workqueue()) at the driver level, and pass this
> - * queue to drm_sched_init(), to guarantee that timeout handlers are
> - * executed sequentially. The above workflow needs to be slightly
> - * adjusted in that case:
> + * timeout handlers of different schedulers. One way to achieve this
> + * synchronization is to create an ordered workqueue (using
> + * alloc_ordered_workqueue()) at the driver level, and pass this queue
> + * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
> + * that timeout handlers are executed sequentially.
> *
> - * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
> - * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
> - * the reset (optional)
> - * 3. Issue a GPU reset on all faulty queues (driver-specific)
> - * 4. Re-submit jobs on all schedulers impacted by the reset using
> - * drm_sched_resubmit_jobs()
> - * 5. Restart all schedulers that were stopped in step #1 using
> - * drm_sched_start()
> + * Return: The scheduler's status, defined by &drm_gpu_sched_stat
> *
> - * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> - * and the underlying driver has started or completed recovery.
> - *
> - * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
> - * available, i.e. has been unplugged.
> */
> enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
>
* Re: [PATCH v5 3/3] drm/sched: Update timedout_job()'s documentation
2025-02-20 13:42 ` Maíra Canal
@ 2025-02-20 15:18 ` Philipp Stanner
0 siblings, 0 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-02-20 15:18 UTC (permalink / raw)
To: Maíra Canal, Philipp Stanner, Matthew Brost,
Danilo Krummrich, Christian König, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Tvrtko Ursulin
Cc: dri-devel, linux-kernel
On Thu, 2025-02-20 at 10:42 -0300, Maíra Canal wrote:
> Hi Philipp,
>
> On 20/02/25 08:28, Philipp Stanner wrote:
> > drm_sched_backend_ops.timedout_job()'s documentation is outdated.
> > It
> > mentions the deprecated function drm_sched_resubmit_job().
> > Furthermore,
> > it does not point out the important distinction between hardware
> > and
> > firmware schedulers.
> >
> > Since firmware schedulers tyipically only use one entity per
> > scheduler,
> > timeout handling is significantly more simple because the entity
> > the
> > faulted job came from can just be killed without affecting innocent
> > processes.
> >
> > Update the documentation with that distinction and other details.
> >
> > Reformat the docstring to work to a unified style with the other
> > handles.
> >
> > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > ---
> > include/drm/gpu_scheduler.h | 83 +++++++++++++++++++++++---------
> > -----
> > 1 file changed, 52 insertions(+), 31 deletions(-)
> >
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index 29e5bda91806..18cdeacf8651 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -393,8 +393,15 @@ static inline bool
> > drm_sched_invalidate_job(struct drm_sched_job *s_job,
> > return s_job && atomic_inc_return(&s_job->karma) >
> > threshold;
> > }
> >
> > +/**
> > + * enum drm_gpu_sched_stat - the scheduler's status
> > + *
> > + * @DRM_GPU_SCHED_STAT_NONE: Reserved. Do not use.
> > + * @DRM_GPU_SCHED_STAT_NOMINAL: Operation succeeded.
> > + * @DRM_GPU_SCHED_STAT_ENODEV: Error: Device is not available
> > anymore.
> > + */
> > enum drm_gpu_sched_stat {
> > - DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
> > + DRM_GPU_SCHED_STAT_NONE,
> > DRM_GPU_SCHED_STAT_NOMINAL,
> > DRM_GPU_SCHED_STAT_ENODEV,
> > };
> > @@ -430,6 +437,11 @@ struct drm_sched_backend_ops {
> > *
> > * TODO: Document which fence rules above.
> > *
> > + * This method is called in a workqueue context - either
> > from the
> > + * submit_wq the driver passed through &drm_sched_init(),
> > or, if the
> > + * driver passed NULL, a separate, ordered workqueue the
> > scheduler
> > + * allocated.
> > + *
>
> The commit message mentions "Update timedout_job()'s documentation".
> As
> this hunk is related to `run_job()`, maybe it would be a better fit
> to
> patch 2/3.
>
> > * @sched_job: the job to run
> > *
> > * Note that the scheduler expects to 'inherit' its own
> > reference to
> > @@ -449,43 +461,52 @@ struct drm_sched_backend_ops {
> > * @timedout_job: Called when a job has taken too long to
> > execute,
> > * to trigger GPU recovery.
> > *
> > - * This method is called in a workqueue context.
> > + * @sched_job: The job that has timed out
> > *
> > - * Drivers typically issue a reset to recover from GPU hangs, and this
> > - * procedure usually follows the following workflow:
> > + * Drivers typically issue a reset to recover from GPU hangs.
> > + * This procedure looks very different depending on whether a firmware
> > + * or a hardware scheduler is being used.
> > *
> > - * 1. Stop the scheduler using drm_sched_stop(). This will park the
> > - * scheduler thread and cancel the timeout work, guaranteeing that
> > - * nothing is queued while we reset the hardware queue
> > - * 2. Try to gracefully stop non-faulty jobs (optional)
> > - * 3. Issue a GPU reset (driver-specific)
> > - * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> > - * 5. Restart the scheduler using drm_sched_start(). At that point, new
> > - * jobs can be queued, and the scheduler thread is unblocked
> > + * For a FIRMWARE SCHEDULER, each ring has one scheduler, and each
> > + * scheduler has one entity. Hence, the steps taken typically look as
> > + * follows:
> > + *
> > + * 1. Stop the scheduler using drm_sched_stop(). This will pause the
> > + * scheduler workqueues and cancel the timeout work, guaranteeing
> > + * that nothing is queued while the ring is being removed.
> > + * 2. Remove the ring. The firmware will make sure that the
> > + * corresponding parts of the hardware are reset, and that other
> > + * rings are not impacted.
> > + * 3. Kill the entity and the associated scheduler.
> > + *
> > + *
> > + * For a HARDWARE SCHEDULER, a scheduler instance schedules jobs from
> > + * one or more entities to one ring. This implies that all entities
> > + * associated with the affected scheduler cannot be torn down, because
> > + * this would effectively also affect innocent userspace processes which
> > + * did not submit faulty jobs (for example).
> > + *
> > + * Consequently, the procedure to recover with a hardware scheduler
> > + * should look like this:
> > + *
> > + * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
> > + * 2. Kill the entity the faulty job stems from.
> > + * 3. Issue a GPU reset on all faulty rings (driver-specific).
> > + * 4. Re-submit jobs on all schedulers impacted by the reset by
> > + * re-submitting them to the entities which are still alive.
>
> I believe that a mention of `drm_sched_resubmit_jobs()` is still worth
> it, even mentioning that it is a deprecated option and it shouldn't be
> used in new code. It is deprecated indeed, but we still have five users.
I see no reason to mention a deprecated function. What would that be
good for? Why should I direct someone to something that he must not
use?
The drivers which already use it don't need that documentation, since
they're more or less functioning already. And even they shouldn't be
encouraged to keep using it; the list above basically is a list
exclusively about how to do things right.
And the new drivers should best not even know that this function
exists.
Furthermore, additional mentions of the function just increase the
probability that the comment / docu will be forgotten when the
deprecated function is finally removed.
(We have multiple such places within the scheduler. Some comments still
refer to a "thread", despite the scheduler now being based on
workqueues)
So NACK to that idea.
Regarding your other review ideas, I'll look into them
Thx
P.
>
> Best Regards,
> - Maíra
>
> > + * 5. Restart all schedulers that were stopped in step #1 using
> > + * drm_sched_start().
> > *
> > * Note that some GPUs have distinct hardware queues but need to reset
> > * the GPU globally, which requires extra synchronization between the
> > - * timeout handler of the different &drm_gpu_scheduler. One way to
> > - * achieve this synchronization is to create an ordered workqueue
> > - * (using alloc_ordered_workqueue()) at the driver level, and pass this
> > - * queue to drm_sched_init(), to guarantee that timeout handlers are
> > - * executed sequentially. The above workflow needs to be slightly
> > - * adjusted in that case:
> > + * timeout handlers of different schedulers. One way to achieve this
> > + * synchronization is to create an ordered workqueue (using
> > + * alloc_ordered_workqueue()) at the driver level, and pass this queue
> > + * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
> > + * that timeout handlers are executed sequentially.
> > *
> > - * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
> > - * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
> > - * the reset (optional)
> > - * 3. Issue a GPU reset on all faulty queues (driver-specific)
> > - * 4. Re-submit jobs on all schedulers impacted by the reset using
> > - * drm_sched_resubmit_jobs()
> > - * 5. Restart all schedulers that were stopped in step #1 using
> > - * drm_sched_start()
> > + * Return: The scheduler's status, defined by &drm_gpu_sched_stat
> > *
> > - * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> > - * and the underlying driver has started or completed recovery.
> > - *
> > - * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
> > - * available, i.e. has been unplugged.
> > */
> > enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
> >
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job()
2025-02-20 13:28 ` Maíra Canal
@ 2025-02-20 15:28 ` Philipp Stanner
2025-02-24 13:29 ` Maíra Canal
0 siblings, 1 reply; 13+ messages in thread
From: Philipp Stanner @ 2025-02-20 15:28 UTC (permalink / raw)
To: Maíra Canal, Philipp Stanner, Matthew Brost,
Danilo Krummrich, Christian König, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Tvrtko Ursulin
Cc: dri-devel, linux-kernel
On Thu, 2025-02-20 at 10:28 -0300, Maíra Canal wrote:
> Hi Philipp,
>
> On 20/02/25 08:28, Philipp Stanner wrote:
> > The documentation for drm_sched_backend_ops.run_job() mentions a certain
> > function called drm_sched_job_recovery(). This function does not exist.
> > What's actually meant is drm_sched_resubmit_jobs(), which is by now also
> > deprecated.
> >
> > Remove the mention of the removed function.
> >
> > Discourage the behavior of drm_sched_backend_ops.run_job() being called
> > multiple times for the same job.
>
> It looks odd to me that this patch removes lines that were added in
> patch 1/3. Maybe you could change the patchset order and place this
> one
> as the first.
>
> >
> > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > ---
> > include/drm/gpu_scheduler.h | 19 +++++++++++++------
> > 1 file changed, 13 insertions(+), 6 deletions(-)
> >
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index 916279b5aa00..29e5bda91806 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -421,20 +421,27 @@ struct drm_sched_backend_ops {
> >
> > 	/**
> > 	 * @run_job: Called to execute the job once all of the dependencies
> > - * have been resolved. This may be called multiple times, if
> > - * timedout_job() has happened and drm_sched_job_recovery() decides to
> > - * try it again.
> > + * have been resolved.
> > + *
> > + * The deprecated drm_sched_resubmit_jobs() (called from
> > + * drm_sched_backend_ops.timedout_job()) can invoke this again with the
>
> I think it would be "@timedout_job".
Not sure, isn't referencing in docstrings done with '&'?
>
> > + * same parameters. Using this is discouraged because it, presumably,
> > + * violates dma_fence rules.
>
> I believe it would be "struct dma_fence".
Well, in this case strictly speaking not IMO, because it's about the
rules of the "DMA Fence Subsystem", not about the struct itself.
I'd just keep it that way or call it "dma fence"
>
> > + *
> > + * TODO: Document which fence rules above.
> > *
> > * @sched_job: the job to run
> > *
> > - * Returns: dma_fence the driver must signal once the hardware has
> > - * completed the job ("hardware fence").
> > - *
> > * Note that the scheduler expects to 'inherit' its own reference to
> > * this fence from the callback. It does not invoke an extra
> > * dma_fence_get() on it. Consequently, this callback must take a
> > * reference for the scheduler, and additional ones for the driver's
> > * respective needs.
>
> Would it be possible to add a comment that `run_job()` must check if
> `s_fence->finished.error` is different than 0? If you increase the karma
> of a job and don't check for `s_fence->finished.error`, you might run a
> cancelled job.
s_fence->finished is only signaled and its error set once the hardware
fence got signaled; or when the entity is killed.
In any case, signaling "finished" will cause the job to be prevented
from being executed (again), and will never reach run_job() in the
first place.
Correct me if I am mistaken.
Or are you suggesting that there is a race?
P.
>
> > + *
> > + * Return:
> > + * * On success: dma_fence the driver must signal once the hardware has
> > + * completed the job ("hardware fence").
>
> A suggestion: "the fence that the driver must signal once the hardware
> has completed the job".
>
> Best Regards,
> - Maíra
>
> > + * * On failure: NULL or an ERR_PTR.
> > */
> > 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
> >
>
* Re: [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job()
2025-02-20 15:28 ` Philipp Stanner
@ 2025-02-24 13:29 ` Maíra Canal
2025-02-24 14:43 ` Danilo Krummrich
0 siblings, 1 reply; 13+ messages in thread
From: Maíra Canal @ 2025-02-24 13:29 UTC (permalink / raw)
To: phasta, Matthew Brost, Danilo Krummrich, Christian König,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Tvrtko Ursulin
Cc: dri-devel, linux-kernel
Hi Philipp,
On 20/02/25 12:28, Philipp Stanner wrote:
> On Thu, 2025-02-20 at 10:28 -0300, Maíra Canal wrote:
>> Hi Philipp,
>>
>> On 20/02/25 08:28, Philipp Stanner wrote:
>>> The documentation for drm_sched_backend_ops.run_job() mentions a
>>> certain
>>> function called drm_sched_job_recovery(). This function does not
>>> exist.
>>> What's actually meant is drm_sched_resubmit_jobs(), which is by now
>>> also
>>> deprecated.
>>>
>>> Remove the mention of the removed function.
>>>
>>> Discourage the behavior of drm_sched_backend_ops.run_job() being
>>> called
>>> multiple times for the same job.
>>
>> It looks odd to me that this patch removes lines that were added in
>> patch 1/3. Maybe you could change the patchset order and place this
>> one
>> as the first.
>>
>>>
>>> Signed-off-by: Philipp Stanner <phasta@kernel.org>
>>> ---
>>> include/drm/gpu_scheduler.h | 19 +++++++++++++------
>>> 1 file changed, 13 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/include/drm/gpu_scheduler.h
>>> b/include/drm/gpu_scheduler.h
>>> index 916279b5aa00..29e5bda91806 100644
>>> --- a/include/drm/gpu_scheduler.h
>>> +++ b/include/drm/gpu_scheduler.h
>>> @@ -421,20 +421,27 @@ struct drm_sched_backend_ops {
>>>
>>> /**
>>> * @run_job: Called to execute the job once all of the
>>> dependencies
>>> - * have been resolved. This may be called multiple times,
>>> if
>>> - * timedout_job() has happened and
>>> drm_sched_job_recovery() decides to
>>> - * try it again.
>>> + * have been resolved.
>>> + *
>>> + * The deprecated drm_sched_resubmit_jobs() (called from
>>> + * drm_sched_backend_ops.timedout_job()) can invoke this
>>> again with the
>>
>> I think it would be "@timedout_job".
>
> Not sure, isn't referencing in docstrings done with '&'?
`timedout_job` is a member of the same struct, so I believe it should be
@. But, I'm no kernel-doc expert, it's just my understanding of [1]. If
we don't use @, it should be at least
"&drm_sched_backend_ops.timedout_job".
[1] https://docs.kernel.org/doc-guide/kernel-doc.html
>
>>
>>> + * same parameters. Using this is discouraged because it,
>>> presumably,
>>> + * violates dma_fence rules.
>>
>> I believe it would be "struct dma_fence".
>
> Well, in this case strictly speaking not IMO, because it's about the
> rules of the "DMA Fence Subsystem", not about the struct itself.
>
> I'd just keep it that way or call it "dma fence"
>
>>
>>> + *
>>> + * TODO: Document which fence rules above.
>>> *
>>> * @sched_job: the job to run
>>> *
>>> - * Returns: dma_fence the driver must signal once the
>>> hardware has
>>> - * completed the job ("hardware fence").
>>> - *
>>> * Note that the scheduler expects to 'inherit' its own
>>> reference to
>>> * this fence from the callback. It does not invoke an
>>> extra
>>> * dma_fence_get() on it. Consequently, this callback must
>>> take a
>>> * reference for the scheduler, and additional ones for
>>> the driver's
>>> * respective needs.
>>
>> Would it be possible to add a comment that `run_job()` must check if
>> `s_fence->finished.error` is different than 0? If you increase the
>> karma
>> of a job and don't check for `s_fence->finished.error`, you might run
>> a
>> cancelled job.
>
> s_fence->finished is only signaled and its error set once the hardware
> fence got signaled; or when the entity is killed.
If you have a timeout, increase the karma of that job with
`drm_sched_increase_karma()` and call `drm_sched_resubmit_jobs()`, the
latter will flag an error in the dma fence. If you don't check for it in
`run_job()`, you will run the guilty job again.
I'm still talking about `drm_sched_resubmit_jobs()`, because I'm
currently fixing an issue in V3D with the GPU reset and we still use
`drm_sched_resubmit_jobs()`. I read the documentation of `run_job()` and
`timedout_job()` and the information I commented here (which was crucial
to fix the bug) wasn't available there.
`drm_sched_resubmit_jobs()` was deprecated in 2022, but Xe introduced a
new use in 2023, for example. The commit that deprecated it just
mentions AMD's case, but do we know if the function works as expected
for the other users? For V3D, it does. Also, we need to make it clear
which dma_fence requirements the function violates.
If we shouldn't use `drm_sched_resubmit_jobs()`, would it be possible to
provide a common interface for job resubmission?
Best Regards,
- Maíra
>
> In any case, signaling "finished" will cause the job to be prevented
> from being executed (again), and will never reach run_job() in the
> first place.
>
> Correct me if I am mistaken.
>
> Or are you suggesting that there is a race?
>
>
> P.
>
>>
>>> + *
>>> + * Return:
>>> + * * On success: dma_fence the driver must signal once the
>>> hardware has
>>> + * completed the job ("hardware fence").
>>
>> A suggestion: "the fence that the driver must signal once the
>> hardware
>> has completed the job".
>>
>> Best Regards,
>> - Maíra
>>
>>> + * * On failure: NULL or an ERR_PTR.
>>> */
>>> struct dma_fence *(*run_job)(struct drm_sched_job
>>> *sched_job);
>>>
>>
>
* Re: [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job()
2025-02-24 13:29 ` Maíra Canal
@ 2025-02-24 14:43 ` Danilo Krummrich
2025-02-24 16:25 ` Matthew Brost
0 siblings, 1 reply; 13+ messages in thread
From: Danilo Krummrich @ 2025-02-24 14:43 UTC (permalink / raw)
To: Maíra Canal, Christian König
Cc: phasta, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Tvrtko Ursulin,
dri-devel, linux-kernel
On Mon, Feb 24, 2025 at 10:29:26AM -0300, Maíra Canal wrote:
> On 20/02/25 12:28, Philipp Stanner wrote:
> > On Thu, 2025-02-20 at 10:28 -0300, Maíra Canal wrote:
> > > Would it be possible to add a comment that `run_job()` must check if
> > > `s_fence->finished.error` is different than 0? If you increase the
> > > karma
> > > of a job and don't check for `s_fence->finished.error`, you might run
> > > a
> > > cancelled job.
> >
> > s_fence->finished is only signaled and its error set once the hardware
> > fence got signaled; or when the entity is killed.
>
> If you have a timeout, increase the karma of that job with
> `drm_sched_increase_karma()` and call `drm_sched_resubmit_jobs()`, the
> latter will flag an error in the dma fence. If you don't check for it in
> `run_job()`, you will run the guilty job again.
Considering that drm_sched_resubmit_jobs() is deprecated I don't think we need
to add this hint to the documentation; the drivers that are still using the API
hopefully got it right.
> I'm still talking about `drm_sched_resubmit_jobs()`, because I'm
> currently fixing an issue in V3D with the GPU reset and we still use
> `drm_sched_resubmit_jobs()`. I read the documentation of `run_job()` and
> `timeout_job()` and the information I commented here (which was crucial
> to fix the bug) wasn't available there.
Well, hopefully... :-)
>
> `drm_sched_resubmit_jobs()` was deprecated in 2022, but Xe introduced a
> new use in 2023
Yeah, that's a bit odd, since Xe relies on a firmware scheduler and uses a 1:1
scheduler - entity setup. I'm a bit surprised Xe does use this function.
> for example. The commit that deprecated it just
> mentions AMD's case, but do we know if the function works as expected
> for the other users?
I read the comment [1] you're referring to differently. It says that
"Re-submitting jobs was a concept AMD came up as cheap way to implement recovery
after a job timeout".
It further explains that "there are many problem with the dma_fence
implementation and requirements. Either the implementation is risking deadlocks
with core memory management or violating documented implementation details of
the dma_fence object", which doesn't give any hint to me that the conceptual
issues are limited to amdgpu.
> For V3D, it does. Also, we need to make it clear which
> are the dma fence requirements that the functions violates.
This I fully agree with, unfortunately the comment does not explain what's the
issue at all.
While I do think I have a vague idea of what's the potential issue with this
approach, I think it would be way better to get Christian, as the expert for DMA
fence rules to comment on this.
@Christian: Can you please shed some light on this?
>
> If we shouldn't use `drm_sched_resubmit_jobs()`, would it be possible to
> provide a common interface for job resubmission?
I wonder why this question did not come up when drm_sched_resubmit_jobs() was
deprecated two years ago, did it?
Anyway, let's shed some light on the difficulties with drm_sched_resubmit_jobs()
and then we can figure out how we can do better.
I think it would also be interesting to know how amdgpu handles jobs from
unrelated entities being discarded by not re-submitting them when a job
from another entity hangs the HW ring.
[1] https://patchwork.freedesktop.org/patch/msgid/20221109095010.141189-5-christian.koenig@amd.com
* Re: [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job()
2025-02-24 14:43 ` Danilo Krummrich
@ 2025-02-24 16:25 ` Matthew Brost
2025-03-04 9:05 ` Christian König
0 siblings, 1 reply; 13+ messages in thread
From: Matthew Brost @ 2025-02-24 16:25 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Maíra Canal, Christian König, phasta, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Tvrtko Ursulin, dri-devel, linux-kernel
On Mon, Feb 24, 2025 at 03:43:49PM +0100, Danilo Krummrich wrote:
> On Mon, Feb 24, 2025 at 10:29:26AM -0300, Maíra Canal wrote:
> > On 20/02/25 12:28, Philipp Stanner wrote:
> > > On Thu, 2025-02-20 at 10:28 -0300, Maíra Canal wrote:
> > > > Would it be possible to add a comment that `run_job()` must check if
> > > > `s_fence->finished.error` is different than 0? If you increase the
> > > > karma
> > > > of a job and don't check for `s_fence->finished.error`, you might run
> > > > a
> > > > cancelled job.
> > >
> > > s_fence->finished is only signaled and its error set once the hardware
> > > fence got signaled; or when the entity is killed.
> >
> > If you have a timeout, increase the karma of that job with
> > `drm_sched_increase_karma()` and call `drm_sched_resubmit_jobs()`, the
> > latter will flag an error in the dma fence. If you don't check for it in
> > `run_job()`, you will run the guilty job again.
>
> Considering that drm_sched_resubmit_jobs() is deprecated I don't think we need
> to add this hint to the documentation; the drivers that are still using the API
> hopefully got it right.
>
> > I'm still talking about `drm_sched_resubmit_jobs()`, because I'm
> > currently fixing an issue in V3D with the GPU reset and we still use
> > `drm_sched_resubmit_jobs()`. I read the documentation of `run_job()` and
> > `timeout_job()` and the information I commented here (which was crucial
> > to fix the bug) wasn't available there.
>
> Well, hopefully... :-)
>
> >
> > `drm_sched_resubmit_jobs()` was deprecated in 2022, but Xe introduced a
> > new use in 2023
>
> Yeah, that's a bit odd, since Xe relies on a firmware scheduler and uses a 1:1
> scheduler - entity setup. I'm a bit surprised Xe does use this function.
>
To clarify Xe's usage. We use this function to resubmit jobs after a
device reset for queues which had nothing to do with the device reset.
In practice, a device reset should never occur, as we have per-queue
resets in our hardware. If a per-queue reset occurs, we ban the queue
rather than doing a resubmit.
Matt
> > for example. The commit that deprecated it just
> > mentions AMD's case, but do we know if the function works as expected
> > for the other users?
>
> I read the comment [1] you're referring to differently. It says that
> "Re-submitting jobs was a concept AMD came up as cheap way to implement recovery
> after a job timeout".
>
> It further explains that "there are many problem with the dma_fence
> implementation and requirements. Either the implementation is risking deadlocks
> with core memory management or violating documented implementation details of
> the dma_fence object", which doesn't give any hint to me that the conceptual
> issues are limited to amdgpu.
>
> > For V3D, it does. Also, we need to make it clear which
> > are the dma fence requirements that the functions violates.
>
> This I fully agree with, unfortunately the comment does not explain what's the
> issue at all.
>
> While I do think I have a vague idea of what's the potential issue with this
> approach, I think it would be way better to get Christian, as the expert for DMA
> fence rules to comment on this.
>
> @Christian: Can you please shed some light on this?
>
> >
> > If we shouldn't use `drm_sched_resubmit_jobs()`, would it be possible to
> > provide a common interface for job resubmission?
>
> I wonder why this question did not come up when drm_sched_resubmit_jobs() was
> deprecated two years ago, did it?
>
> Anyway, let's shed some light on the difficulties with drm_sched_resubmit_jobs()
> and then we can figure out how we can do better.
>
> I think it would also be interesting to know how amdgpu handles job from
> unrelated entities being discarded by not re-submitting them when a job from
> another entitiy hangs the HW ring.
>
> [1] https://patchwork.freedesktop.org/patch/msgid/20221109095010.141189-5-christian.koenig@amd.com
* Re: [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job()
2025-02-24 16:25 ` Matthew Brost
@ 2025-03-04 9:05 ` Christian König
2025-03-04 9:52 ` Philipp Stanner
0 siblings, 1 reply; 13+ messages in thread
From: Christian König @ 2025-03-04 9:05 UTC (permalink / raw)
To: Matthew Brost, Danilo Krummrich
Cc: Maíra Canal, phasta, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Tvrtko Ursulin,
dri-devel, linux-kernel
Am 24.02.25 um 17:25 schrieb Matthew Brost:
> On Mon, Feb 24, 2025 at 03:43:49PM +0100, Danilo Krummrich wrote:
>> On Mon, Feb 24, 2025 at 10:29:26AM -0300, Maíra Canal wrote:
>>> On 20/02/25 12:28, Philipp Stanner wrote:
>>>> On Thu, 2025-02-20 at 10:28 -0300, Maíra Canal wrote:
>>>>> Would it be possible to add a comment that `run_job()` must check if
>>>>> `s_fence->finished.error` is different than 0? If you increase the
>>>>> karma
>>>>> of a job and don't check for `s_fence->finished.error`, you might run
>>>>> a
>>>>> cancelled job.
>>>> s_fence->finished is only signaled and its error set once the hardware
>>>> fence got signaled; or when the entity is killed.
>>> If you have a timeout, increase the karma of that job with
>>> `drm_sched_increase_karma()` and call `drm_sched_resubmit_jobs()`, the
>>> latter will flag an error in the dma fence. If you don't check for it in
>>> `run_job()`, you will run the guilty job again.
>> Considering that drm_sched_resubmit_jobs() is deprecated I don't think we need
>> to add this hint to the documentation; the drivers that are still using the API
>> hopefully got it right.
>>
>>> I'm still talking about `drm_sched_resubmit_jobs()`, because I'm
>>> currently fixing an issue in V3D with the GPU reset and we still use
>>> `drm_sched_resubmit_jobs()`. I read the documentation of `run_job()` and
>>> `timeout_job()` and the information I commented here (which was crucial
>>> to fix the bug) wasn't available there.
>> Well, hopefully... :-)
>>
>>> `drm_sched_resubmit_jobs()` was deprecated in 2022, but Xe introduced a
>>> new use in 2023
>> Yeah, that's a bit odd, since Xe relies on a firmware scheduler and uses a 1:1
>> scheduler - entity setup. I'm a bit surprised Xe does use this function.
>>
> To clarify Xe's usage. We use this function to resubmit jobs after
> device reset for queues which had nothing to do with the device reset.
> In practice, a device should never occur as we have per-queue resets in
> our harwdare. If a per-queue reset occurs, we ban the queue rather than
> doing a resubmit.
That's still invalid usage. Re-submitting jobs by the scheduler is a completely broken concept in general.
What you can do is to re-create the queue content after device reset inside your driver, but *never* use drm_sched_resubmit_jobs() for that.
>
> Matt
>
>>> for example. The commit that deprecated it just
>>> mentions AMD's case, but do we know if the function works as expected
>>> for the other users?
>> I read the comment [1] you're referring to differently. It says that
>> "Re-submitting jobs was a concept AMD came up as cheap way to implement recovery
>> after a job timeout".
>>
>> It further explains that "there are many problem with the dma_fence
>> implementation and requirements. Either the implementation is risking deadlocks
>> with core memory management or violating documented implementation details of
>> the dma_fence object", which doesn't give any hint to me that the conceptual
>> issues are limited to amdgpu.
>>
>>> For V3D, it does. Also, we need to make it clear which
>>> are the dma fence requirements that the functions violates.
>> This I fully agree with, unfortunately the comment does not explain what's the
>> issue at all.
>>
>> While I do think I have a vague idea of what's the potential issue with this
>> approach, I think it would be way better to get Christian, as the expert for DMA
>> fence rules to comment on this.
>>
>> @Christian: Can you please shed some light on this?
>>
>>> If we shouldn't use `drm_sched_resubmit_jobs()`, would it be possible to
>>> provide a common interface for job resubmission?
>> I wonder why this question did not come up when drm_sched_resubmit_jobs() was
>> deprecated two years ago, did it?
Exactly that's the point why drm_sched_resubmit_jobs() was deprecated.
It is not possible to provide a common interface to re-submit jobs (with switching of hardware dma_fences) without breaking dma_fence rules.
The idea behind the scheduler is that you pack your submission state into a job object which as soon as it is picked up is converted into a hardware dma_fence for execution. This hardware dma_fence is then the object which represents execution of the submission on the hardware.
So on re-submission you either use the same dma_fence multiple times, which results in a *horrible* kref_init() on an already initialized reference (it's a wonder that this doesn't crash all the time in amdgpu). Or you do things like starting to allocate memory while the memory management potentially waits for the reset to complete.
What we could do is to provide a helper for the device drivers in the form of an iterator which gives you all the hardware fences the scheduler is waiting for, but in general device drivers should have this information by themselves.
>>
>> Anyway, let's shed some light on the difficulties with drm_sched_resubmit_jobs()
>> and then we can figure out how we can do better.
>>
>> I think it would also be interesting to know how amdgpu handles job from
>> unrelated entities being discarded by not re-submitting them when a job from
>> another entitiy hangs the HW ring.
Quite simply, this case never happens in the first place.
When you have individual queues for each process (e.g. like Xe and upcoming amdgpu HW generations) you should always be able to reset the device without losing everything.
Otherwise things like userspace queues also don't work at all, because then neither the kernel nor the DRM scheduler is involved in the submission any more.
Regards,
Christian.
>>
>> [1] https://patchwork.freedesktop.org/patch/msgid/20221109095010.141189-5-christian.koenig@amd.com
* Re: [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job()
2025-03-04 9:05 ` Christian König
@ 2025-03-04 9:52 ` Philipp Stanner
0 siblings, 0 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-03-04 9:52 UTC (permalink / raw)
To: Christian König, Matthew Brost, Danilo Krummrich
Cc: Maíra Canal, phasta, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Tvrtko Ursulin,
dri-devel, linux-kernel
On Tue, 2025-03-04 at 10:05 +0100, Christian König wrote:
> Am 24.02.25 um 17:25 schrieb Matthew Brost:
> > On Mon, Feb 24, 2025 at 03:43:49PM +0100, Danilo Krummrich wrote:
> > > On Mon, Feb 24, 2025 at 10:29:26AM -0300, Maíra Canal wrote:
> > > > On 20/02/25 12:28, Philipp Stanner wrote:
> > > > > On Thu, 2025-02-20 at 10:28 -0300, Maíra Canal wrote:
> > > > > > Would it be possible to add a comment that `run_job()` must
> > > > > > check if
> > > > > > `s_fence->finished.error` is different than 0? If you
> > > > > > increase the
> > > > > > karma
> > > > > > of a job and don't check for `s_fence->finished.error`, you
> > > > > > might run
> > > > > > a
> > > > > > cancelled job.
> > > > > s_fence->finished is only signaled and its error set once the
> > > > > hardware
> > > > > fence got signaled; or when the entity is killed.
> > > > If you have a timeout, increase the karma of that job with
> > > > `drm_sched_increase_karma()` and call
> > > > `drm_sched_resubmit_jobs()`, the
> > > > latter will flag an error in the dma fence. If you don't check
> > > > for it in
> > > > `run_job()`, you will run the guilty job again.
> > > Considering that drm_sched_resubmit_jobs() is deprecated I don't
> > > think we need
> > > to add this hint to the documentation; the drivers that are still
> > > using the API
> > > hopefully got it right.
> > >
> > > > I'm still talking about `drm_sched_resubmit_jobs()`, because
> > > > I'm
> > > > currently fixing an issue in V3D with the GPU reset and we
> > > > still use
> > > > `drm_sched_resubmit_jobs()`. I read the documentation of
> > > > `run_job()` and
> > > > `timeout_job()` and the information I commented here (which was
> > > > crucial
> > > > to fix the bug) wasn't available there.
> > > Well, hopefully... :-)
> > >
> > > > `drm_sched_resubmit_jobs()` was deprecated in 2022, but Xe
> > > > introduced a
> > > > new use in 2023
> > > Yeah, that's a bit odd, since Xe relies on a firmware scheduler
> > > and uses a 1:1
> > > scheduler - entity setup. I'm a bit surprised Xe does use this
> > > function.
> > >
> > To clarify Xe's usage. We use this function to resubmit jobs after
> > device reset for queues which had nothing to do with the device
> > reset.
> > In practice, a device should never occur as we have per-queue
> > resets in
> > our harwdare. If a per-queue reset occurs, we ban the queue rather
> > than
> > doing a resubmit.
>
> That's still invalid usage. Re-submitting jobs by the scheduler is a
> completely broken concept in general.
>
> What you can do is to re-create the queue content after device reset
> inside your driver, but *never* use drm_sched_resubmit_jobs() for
> that.
>
> >
> > Matt
> >
> > > > for example. The commit that deprecated it just
> > > > mentions AMD's case, but do we know if the function works as
> > > > expected
> > > > for the other users?
> > > I read the comment [1] you're referring to differently. It says
> > > that
> > > "Re-submitting jobs was a concept AMD came up as cheap way to
> > > implement recovery
> > > after a job timeout".
> > >
> > > It further explains that "there are many problem with the
> > > dma_fence
> > > implementation and requirements. Either the implementation is
> > > risking deadlocks
> > > with core memory management or violating documented
> > > implementation details of
> > > the dma_fence object", which doesn't give any hint to me that the
> > > conceptual
> > > issues are limited to amdgpu.
> > >
> > > > For V3D, it does. Also, we need to make it clear which
> > > > are the dma fence requirements that the functions violates.
> > > This I fully agree with, unfortunately the comment does not
> > > explain what's the
> > > issue at all.
> > >
> > > While I do think I have a vague idea of what the potential
> > > issue with this
> > > approach is, I think it would be way better to get Christian, as
> > > the expert for DMA
> > > fence rules, to comment on this.
> > >
> > > @Christian: Can you please shed some light on this?
> > >
> > > > If we shouldn't use `drm_sched_resubmit_jobs()`, would it be
> > > > possible to
> > > > provide a common interface for job resubmission?
> > > I wonder why this question did not come up when
> > > drm_sched_resubmit_jobs() was
> > > deprecated two years ago. Or did it?
>
> Exactly that's the point why drm_sched_resubmit_jobs() was
> deprecated.
>
> It is not possible to provide a common interface to re-submit jobs
> (with switching of hardware dma_fences) without breaking dma_fence
> rules.
>
> The idea behind the scheduler is that you pack your submission state
> into a job object which as soon as it is picked up is converted into
> a hardware dma_fence for execution. This hardware dma_fence is then
> the object which represents execution of the submission on the
> hardware.
>
> So on re-submission you either use the same dma_fence multiple times,
> which results in a *horrible* kref_init() on an already initialized
> reference (it's a wonder that this doesn't crash all the time in
> amdgpu). Or you do things like starting to allocate memory while the
> memory management potentially waits for the reset to complete.
>
> What we could do is to provide a helper for the device drivers in the
> form of an iterator which gives you all the hardware fences the
> scheduler is waiting for, but in general device drivers should have
> this information by themselves.
What we should work out in this patch series first is some lines of
documentation telling the drivers what the current state is and what
they should do.
Maíra is not OK with me just removing the mention of
drm_sched_resubmit_jobs().
So the question is what drivers should do instead, and, accordingly,
what amdgpu, for example, does instead. See also below.
>
> > >
> > > Anyway, let's shed some light on the difficulties with
> > > drm_sched_resubmit_jobs()
> > > and then we can figure out how we can do better.
> > >
> > > I think it would also be interesting to know how amdgpu handles
> > > jobs from
> > > unrelated entities being discarded by not re-submitting them when
> > > a job from
> > > another entity hangs the HW ring.
>
> Quite simply, this case never happens in the first place.
>
> When you have individual queues for each process (e.g. like Xe and
> upcoming amdgpu HW generations)
If amdgpu's *current* HW generation does not have individual queues,
why can this never happen currently?
How does amdgpu make sure that jobs from innocent entities get
rescheduled after a GPU reset? AFAIK AMD cards currently have 4 run
queues, which are shared by many entities from many processes.
P.
> you should always be able to reset the device without losing
> everything.
>
> Otherwise things like userspace queues don't work at all,
> because then neither the kernel nor the DRM scheduler is involved in
> the submission anymore.
>
> Regards,
> Christian.
>
> > >
> > > [1]
> > > https://patchwork.freedesktop.org/patch/msgid/20221109095010.141189-5-christian.koenig@amd.com
>
end of thread, other threads: [~2025-03-04 9:52 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-02-20 11:28 [PATCH v5 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
2025-02-20 11:28 ` [PATCH v5 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
2025-02-20 11:28 ` [PATCH v5 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
2025-02-20 13:28 ` Maíra Canal
2025-02-20 15:28 ` Philipp Stanner
2025-02-24 13:29 ` Maíra Canal
2025-02-24 14:43 ` Danilo Krummrich
2025-02-24 16:25 ` Matthew Brost
2025-03-04 9:05 ` Christian König
2025-03-04 9:52 ` Philipp Stanner
2025-02-20 11:28 ` [PATCH v5 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
2025-02-20 13:42 ` Maíra Canal
2025-02-20 15:18 ` Philipp Stanner