* [PATCH v2 0/3] drm/sched: Documentation and refcount improvements
@ 2025-01-21 15:15 Philipp Stanner
2025-01-21 15:15 ` [PATCH v2 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Philipp Stanner @ 2025-01-21 15:15 UTC (permalink / raw)
To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Sumit Semwal, Christian König
Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig,
Philipp Stanner
Changes in v2:
- Document what run_job() is allowed to return. (Tvrtko)
- Delete confusing comment about putting the fence. (Danilo)
- Apply Danilo's RB to patch 1.
- Delete info about job recovery for entities in patch 3. (Danilo, me)
- Set the term "ring" as the fixed term for both HW rings and FW rings. A
  ring shall always be the thingy on the CPU ;) (Danilo)
- Many (all) other comment improvements in patch 3. (Danilo)
This is a series succeeding my previous patch [1].
I realized that we are still referring to a non-existing function and
a deprecated one in the callback docu. We should probably also point out
the important distinction between hardware and firmware schedulers more
clearly.
Please give me feedback, especially on the RFC comments in patch 3.
(This series still fires docu-build warnings. I want to gather feedback
on the open questions first and will solve them in the next version.)
Thank you,
Philipp
[1] https://lore.kernel.org/all/20241220124515.93169-2-phasta@kernel.org/
Philipp Stanner (3):
drm/sched: Document run_job() refcount hazard
drm/sched: Adjust outdated docu for run_job()
drm/sched: Update timedout_job()'s documentation
drivers/gpu/drm/scheduler/sched_main.c | 5 +-
include/drm/gpu_scheduler.h | 106 ++++++++++++++++---------
2 files changed, 71 insertions(+), 40 deletions(-)
--
2.47.1
* [PATCH v2 1/3] drm/sched: Document run_job() refcount hazard
2025-01-21 15:15 [PATCH v2 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
@ 2025-01-21 15:15 ` Philipp Stanner
2025-01-21 15:15 ` [PATCH v2 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
` (2 subsequent siblings)
3 siblings, 0 replies; 8+ messages in thread
From: Philipp Stanner @ 2025-01-21 15:15 UTC (permalink / raw)
To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Sumit Semwal, Christian König
Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig
From: Philipp Stanner <pstanner@redhat.com>
drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
That fence is signalled by the driver once the hardware completed the
associated job. The scheduler does not increment the reference count on
that fence, but implicitly expects to inherit this fence from run_job().
This is relatively subtle and prone to misunderstandings.
This implies that, to keep a reference for itself, a driver needs to
call dma_fence_get() in addition to dma_fence_init() in that callback.
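For illustration only (not part of the patch; the driver name, job struct
and submit helper below are hypothetical), a run_job() implementation
following this rule might look roughly like this:

static struct dma_fence *foo_run_job(struct drm_sched_job *sched_job)
{
        struct foo_job *job = to_foo_job(sched_job);
        struct dma_fence *fence;

        /* Submits the job and returns the fence created with dma_fence_init(). */
        fence = foo_hw_submit(job);
        if (IS_ERR(fence))
                return fence;

        /*
         * The scheduler inherits the initial reference created by
         * dma_fence_init(). If the driver wants to keep its own pointer to
         * the fence, it must take an additional reference for itself.
         */
        job->hw_fence = dma_fence_get(fence);

        return fence;
}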
It's further complicated by the fact that the scheduler even decrements
the refcount in drm_sched_run_job_work() since it created a new
reference in drm_sched_fence_scheduled(). It does, however, still use
its pointer to the fence after calling dma_fence_put() - which is safe
because of the aforementioned new reference, but actually still violates
the refcounting rules.
Move the call to dma_fence_put() to the position behind the last usage
of the fence.
Document the necessity to increment the reference count in
drm_sched_backend_ops.run_job().
Suggested-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Philipp Stanner <pstanner@redhat.com>
Reviewed-by: Danilo Krummrich <dakr@kernel.org>
---
drivers/gpu/drm/scheduler/sched_main.c | 5 ++---
include/drm/gpu_scheduler.h | 19 +++++++++++++++----
2 files changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 57da84908752..7e69ebc09513 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1218,15 +1218,14 @@ static void drm_sched_run_job_work(struct work_struct *w)
drm_sched_fence_scheduled(s_fence, fence);
if (!IS_ERR_OR_NULL(fence)) {
- /* Drop for original kref_init of the fence */
- dma_fence_put(fence);
-
r = dma_fence_add_callback(fence, &sched_job->cb,
drm_sched_job_done_cb);
if (r == -ENOENT)
drm_sched_job_done(sched_job, fence->error);
else if (r)
DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
+
+ dma_fence_put(fence);
} else {
drm_sched_job_done(sched_job, IS_ERR(fence) ?
PTR_ERR(fence) : 0);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 95e17504e46a..d5cd2a78f27c 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
struct drm_sched_entity *s_entity);
/**
- * @run_job: Called to execute the job once all of the dependencies
- * have been resolved. This may be called multiple times, if
- * timedout_job() has happened and drm_sched_job_recovery()
- * decides to try it again.
+ * @run_job: Called to execute the job once all of the dependencies
+ * have been resolved. This may be called multiple times, if
+ * timedout_job() has happened and drm_sched_job_recovery() decides to
+ * try it again.
+ *
+ * @sched_job: the job to run
+ *
+ * Returns: dma_fence the driver must signal once the hardware has
+ * completed the job ("hardware fence").
+ *
+ * Note that the scheduler expects to 'inherit' its own reference to
+ * this fence from the callback. It does not invoke an extra
+ * dma_fence_get() on it. Consequently, this callback must take a
+ * reference for the scheduler, and additional ones for the driver's
+ * respective needs.
*/
struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
--
2.47.1
* [PATCH v2 2/3] drm/sched: Adjust outdated docu for run_job()
2025-01-21 15:15 [PATCH v2 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
2025-01-21 15:15 ` [PATCH v2 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
@ 2025-01-21 15:15 ` Philipp Stanner
2025-01-21 15:15 ` [PATCH v2 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
2025-01-22 8:23 ` [PATCH v2 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
3 siblings, 0 replies; 8+ messages in thread
From: Philipp Stanner @ 2025-01-21 15:15 UTC (permalink / raw)
To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Sumit Semwal, Christian König
Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig,
Philipp Stanner
The documentation for drm_sched_backend_ops.run_job() mentions a certain
function called drm_sched_job_recovery(). This function does not exist.
What's actually meant is drm_sched_resubmit_jobs(), which is by now also
deprecated.
Remove the mention of the nonexistent function.
Discourage calling drm_sched_backend_ops.run_job() multiple times for the
same job.
Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
Folks, I need input for those "refcount" rules. I say that we either
delete that section or someone (Christian?) should provide details about
what those rules are, as Danilo requested.
P.
---
include/drm/gpu_scheduler.h | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index d5cd2a78f27c..cf40fdb55541 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -421,14 +421,19 @@ struct drm_sched_backend_ops {
/**
* @run_job: Called to execute the job once all of the dependencies
- * have been resolved. This may be called multiple times, if
- * timedout_job() has happened and drm_sched_job_recovery() decides to
- * try it again.
+ * have been resolved.
+ *
+ * The deprecated drm_sched_resubmit_jobs() (called from
+ * drm_sched_backend_ops.timedout_job()) can invoke this again with the
+ * same parameters. Doing this is strongly discouraged because it
+ * violates dma_fence rules.
*
* @sched_job: the job to run
*
- * Returns: dma_fence the driver must signal once the hardware has
- * completed the job ("hardware fence").
+ * Returns:
+ * On success: dma_fence the driver must signal once the hardware has
+ * completed the job ("hardware fence").
+ * On failure: NULL or an ERR_PTR.
*
* Note that the scheduler expects to 'inherit' its own reference to
* this fence from the callback. It does not invoke an extra
--
2.47.1
* [PATCH v2 3/3] drm/sched: Update timedout_job()'s documentation
2025-01-21 15:15 [PATCH v2 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
2025-01-21 15:15 ` [PATCH v2 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
2025-01-21 15:15 ` [PATCH v2 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
@ 2025-01-21 15:15 ` Philipp Stanner
2025-01-24 12:27 ` Danilo Krummrich
2025-01-22 8:23 ` [PATCH v2 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
3 siblings, 1 reply; 8+ messages in thread
From: Philipp Stanner @ 2025-01-21 15:15 UTC (permalink / raw)
To: Matthew Brost, Danilo Krummrich, Philipp Stanner,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Sumit Semwal, Christian König
Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig,
Philipp Stanner
drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
mentions the deprecated function drm_sched_resubmit_job(). Furthermore,
it does not point out the important distinction between hardware and
firmware schedulers.
Since firmware schedulers typically only use one entity per scheduler,
timeout handling is significantly simpler because the entity the
faulted job came from can just be killed without affecting innocent
processes.
Update the documentation with that distinction and other details.
Reformat the docstring to match the unified style of the other
callbacks.
Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
include/drm/gpu_scheduler.h | 82 ++++++++++++++++++++++---------------
1 file changed, 49 insertions(+), 33 deletions(-)
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index cf40fdb55541..4806740b9023 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -394,8 +394,14 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
}
enum drm_gpu_sched_stat {
- DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
+ /* Reserve 0 */
+ DRM_GPU_SCHED_STAT_NONE,
+
+ /* Operation succeeded */
DRM_GPU_SCHED_STAT_NOMINAL,
+
+ /* Failure because dev is no longer available, for example because
+ * it was unplugged. */
DRM_GPU_SCHED_STAT_ENODEV,
};
@@ -447,43 +453,53 @@ struct drm_sched_backend_ops {
* @timedout_job: Called when a job has taken too long to execute,
* to trigger GPU recovery.
*
- * This method is called in a workqueue context.
+ * @sched_job: The job that has timed out
*
- * Drivers typically issue a reset to recover from GPU hangs, and this
- * procedure usually follows the following workflow:
+ * Returns: A drm_gpu_sched_stat enum.
*
- * 1. Stop the scheduler using drm_sched_stop(). This will park the
- * scheduler thread and cancel the timeout work, guaranteeing that
- * nothing is queued while we reset the hardware queue
- * 2. Try to gracefully stop non-faulty jobs (optional)
- * 3. Issue a GPU reset (driver-specific)
- * 4. Re-submit jobs using drm_sched_resubmit_jobs()
- * 5. Restart the scheduler using drm_sched_start(). At that point, new
- * jobs can be queued, and the scheduler thread is unblocked
+ * Drivers typically issue a reset to recover from GPU hangs.
+ * This procedure looks very different depending on whether a firmware
+ * or a hardware scheduler is being used.
+ *
+ * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one scheduler, and
+ * each scheduler has one entity. Hence, you typically follow those
+ * steps:
+ *
+ * 1. Stop the scheduler using drm_sched_stop(). This will pause the
+ * scheduler workqueues and cancel the timeout work, guaranteeing
+ * that nothing is queued while we remove the ring.
+ * 2. Remove the ring. In most (all?) cases the firmware will make sure
+ * that the corresponding parts of the hardware are reset, and that
+ * other rings are not impacted.
+ * 3. Kill the entity the faulted job stems from, and the associated
+ * scheduler.
+ *
+ *
+ * For a HARDWARE SCHEDULER, each ring also has one scheduler, but each
+ * scheduler is typically associated with many entities. This implies
+ * that all entities associated with the affected scheduler cannot be
+ * torn down, because this would effectively also kill innocent
+ * userspace processes which did not submit faulty jobs (for example).
+ *
+ * Consequently, the procedure to recover with a hardware scheduler
+ * should look like this:
+ *
+ * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
+ * 2. Figure out to which entity the faulted job belongs.
+ * 3. Kill that entity.
+ * 4. Issue a GPU reset on all faulty rings (driver-specific).
+ * 5. Re-submit jobs on all schedulers impacted by the reset by
+ * re-submitting them to the entities which are still alive.
+ * 6. Restart all schedulers that were stopped in step #1 using
+ * drm_sched_start().
*
* Note that some GPUs have distinct hardware queues but need to reset
* the GPU globally, which requires extra synchronization between the
- * timeout handler of the different &drm_gpu_scheduler. One way to
- * achieve this synchronization is to create an ordered workqueue
- * (using alloc_ordered_workqueue()) at the driver level, and pass this
- * queue to drm_sched_init(), to guarantee that timeout handlers are
- * executed sequentially. The above workflow needs to be slightly
- * adjusted in that case:
- *
- * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
- * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
- * the reset (optional)
- * 3. Issue a GPU reset on all faulty queues (driver-specific)
- * 4. Re-submit jobs on all schedulers impacted by the reset using
- * drm_sched_resubmit_jobs()
- * 5. Restart all schedulers that were stopped in step #1 using
- * drm_sched_start()
- *
- * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
- * and the underlying driver has started or completed recovery.
- *
- * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
- * available, i.e. has been unplugged.
+ * timeout handlers of different schedulers. One way to achieve this
+ * synchronization is to create an ordered workqueue (using
+ * alloc_ordered_workqueue()) at the driver level, and pass this queue
+ * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
+ * that timeout handlers are executed sequentially.
*/
enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
--
2.47.1
* Re: [PATCH v2 0/3] drm/sched: Documentation and refcount improvements
2025-01-21 15:15 [PATCH v2 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
` (2 preceding siblings ...)
2025-01-21 15:15 ` [PATCH v2 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
@ 2025-01-22 8:23 ` Philipp Stanner
3 siblings, 0 replies; 8+ messages in thread
From: Philipp Stanner @ 2025-01-22 8:23 UTC (permalink / raw)
To: Philipp Stanner, Matthew Brost, Danilo Krummrich,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Sumit Semwal, Christian König
Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig
On Tue, 2025-01-21 at 16:15 +0100, Philipp Stanner wrote:
> Changes in v2:
> - Document what run_job() is allowed to return. (Tvrtko)
> - Delete confusing comment about putting the fence. (Danilo)
> - Apply Danilo's RB to patch 1.
> - Delete info about job recovery for entities in patch 3. (Danilo,
> me)
> - Set the term "ring" as fix term for both HW rings and FW rings. A
> ring shall always be the thingy on the CPU ;) (Danilo)
s/CPU/GPU
obviously.
P.
> - Many (all) other comments improvements in patch 3. (Danilo)
>
> This is as series succeeding my previous patch [1].
>
> I recognized that we are still referring to a non-existing function
> and
> a deprecated one in the callback docu. We should probably also point
> out
> the important distinction between hardware and firmware schedulers
> more
> cleanly.
>
> Please give me feedback, especially on the RFC comments in patch3.
>
> (This series still fires docu-build-warnings. I want to gather
> feedback
> on the opion questions first and will solve them in v2.)
>
> Thank you,
> Philipp
>
> [1]
> https://lore.kernel.org/all/20241220124515.93169-2-phasta@kernel.org/
>
> Philipp Stanner (3):
> drm/sched: Document run_job() refcount hazard
> drm/sched: Adjust outdated docu for run_job()
> drm/sched: Update timedout_job()'s documentation
>
> drivers/gpu/drm/scheduler/sched_main.c | 5 +-
> include/drm/gpu_scheduler.h | 106 ++++++++++++++++-------
> --
> 2 files changed, 71 insertions(+), 40 deletions(-)
>
* Re: [PATCH v2 3/3] drm/sched: Update timedout_job()'s documentation
2025-01-21 15:15 ` [PATCH v2 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
@ 2025-01-24 12:27 ` Danilo Krummrich
2025-01-27 12:32 ` Philipp Stanner
0 siblings, 1 reply; 8+ messages in thread
From: Danilo Krummrich @ 2025-01-24 12:27 UTC (permalink / raw)
To: Philipp Stanner
Cc: Matthew Brost, Philipp Stanner, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
Christian König, dri-devel, linux-kernel, linux-media,
linaro-mm-sig
On Tue, Jan 21, 2025 at 04:15:46PM +0100, Philipp Stanner wrote:
> drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
> mentions the deprecated function drm_sched_resubmit_job(). Furthermore,
> it does not point out the important distinction between hardware and
> firmware schedulers.
>
> Since firmware schedulers tyipically only use one entity per scheduler,
> timeout handling is significantly more simple because the entity the
> faulted job came from can just be killed without affecting innocent
> processes.
>
> Update the documentation with that distinction and other details.
>
> Reformat the docstring to work to a unified style with the other
> handles.
>
> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
> include/drm/gpu_scheduler.h | 82 ++++++++++++++++++++++---------------
> 1 file changed, 49 insertions(+), 33 deletions(-)
>
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index cf40fdb55541..4806740b9023 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -394,8 +394,14 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
> }
>
> enum drm_gpu_sched_stat {
> - DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
> + /* Reserve 0 */
> + DRM_GPU_SCHED_STAT_NONE,
> +
> + /* Operation succeeded */
> DRM_GPU_SCHED_STAT_NOMINAL,
> +
> + /* Failure because dev is no longer available, for example because
> + * it was unplugged. */
> DRM_GPU_SCHED_STAT_ENODEV,
> };
>
> @@ -447,43 +453,53 @@ struct drm_sched_backend_ops {
> * @timedout_job: Called when a job has taken too long to execute,
> * to trigger GPU recovery.
> *
> - * This method is called in a workqueue context.
Why remove this line?
> + * @sched_job: The job that has timed out
> *
> - * Drivers typically issue a reset to recover from GPU hangs, and this
> - * procedure usually follows the following workflow:
> + * Returns: A drm_gpu_sched_stat enum.
Maybe "The status of the scheduler, defined by &drm_gpu_sched_stat".
I think you forgot to add the corresponding parts in the documentation of
drm_gpu_sched_stat.
> *
> - * 1. Stop the scheduler using drm_sched_stop(). This will park the
> - * scheduler thread and cancel the timeout work, guaranteeing that
> - * nothing is queued while we reset the hardware queue
> - * 2. Try to gracefully stop non-faulty jobs (optional)
> - * 3. Issue a GPU reset (driver-specific)
> - * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> - * 5. Restart the scheduler using drm_sched_start(). At that point, new
> - * jobs can be queued, and the scheduler thread is unblocked
> + * Drivers typically issue a reset to recover from GPU hangs.
> + * This procedure looks very different depending on whether a firmware
> + * or a hardware scheduler is being used.
> + *
> + * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one scheduler, and
Why pseudo? It's still a real ring buffer.
> + * each scheduler has one entity. Hence, you typically follow those
> + * steps:
Maybe better "Hence, the steps taken typically look as follows:".
> + *
> + * 1. Stop the scheduler using drm_sched_stop(). This will pause the
> + * scheduler workqueues and cancel the timeout work, guaranteeing
> + * that nothing is queued while we remove the ring.
"while the ring is removed"
> + * 2. Remove the ring. In most (all?) cases the firmware will make sure
At least I don't know about other cases and I also don't think it'd make a lot
of sense if it'd be different. But of course there's no rule preventing people
from implementing things weirdly.
> + * that the corresponding parts of the hardware are resetted, and that
> + * other rings are not impacted.
> + * 3. Kill the entity the faulted job stems from, and the associated
There can only be one entity in this case, so you can drop "the faulted job
stems from".
> + * scheduler.
> + *
> + *
> + * For a HARDWARE SCHEDULER, each ring also has one scheduler, but each
> + * scheduler is typically associated with many entities. This implies
What about "each scheduler can be scheduling one or more entities"?
> + * that all entities associated with the affected scheduler cannot be
I think you want to say that not all entities can be torn down, rather than none
of them can be torn down.
> + * torn down, because this would effectively also kill innocent
> + * userspace processes which did not submit faulty jobs (for example).
This is phrased ambiguously; "kill userspace processes" typically means something
different from what you mean in this context.
> + *
> + * Consequently, the procedure to recover with a hardware scheduler
> + * should look like this:
> + *
> + * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
> + * 2. Figure out to which entity the faulted job belongs to.
> + * 3. Kill that entity.
I'd combine the two steps: "2. Kill the entity the faulty job originates from".
> + * 4. Issue a GPU reset on all faulty rings (driver-specific).
> + * 5. Re-submit jobs on all schedulers impacted by re-submitting them to
> + * the entities which are still alive.
> + * 6. Restart all schedulers that were stopped in step #1 using
> + * drm_sched_start().
> *
> * Note that some GPUs have distinct hardware queues but need to reset
> * the GPU globally, which requires extra synchronization between the
> - * timeout handler of the different &drm_gpu_scheduler. One way to
> - * achieve this synchronization is to create an ordered workqueue
> - * (using alloc_ordered_workqueue()) at the driver level, and pass this
> - * queue to drm_sched_init(), to guarantee that timeout handlers are
> - * executed sequentially. The above workflow needs to be slightly
> - * adjusted in that case:
> - *
> - * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
> - * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
> - * the reset (optional)
> - * 3. Issue a GPU reset on all faulty queues (driver-specific)
> - * 4. Re-submit jobs on all schedulers impacted by the reset using
> - * drm_sched_resubmit_jobs()
> - * 5. Restart all schedulers that were stopped in step #1 using
> - * drm_sched_start()
> - *
> - * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> - * and the underlying driver has started or completed recovery.
> - *
> - * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
> - * available, i.e. has been unplugged.
> + * timeout handlers of different schedulers. One way to achieve this
> + * synchronization is to create an ordered workqueue (using
> + * alloc_ordered_workqueue()) at the driver level, and pass this queue
> + * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
> + * that timeout handlers are executed sequentially.
> */
> enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
>
> --
> 2.47.1
>
* Re: [PATCH v2 3/3] drm/sched: Update timedout_job()'s documentation
2025-01-24 12:27 ` Danilo Krummrich
@ 2025-01-27 12:32 ` Philipp Stanner
2025-01-27 12:59 ` Danilo Krummrich
0 siblings, 1 reply; 8+ messages in thread
From: Philipp Stanner @ 2025-01-27 12:32 UTC (permalink / raw)
To: Danilo Krummrich, Philipp Stanner
Cc: Matthew Brost, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
Christian König, dri-devel, linux-kernel, linux-media,
linaro-mm-sig
On Fri, 2025-01-24 at 13:27 +0100, Danilo Krummrich wrote:
> On Tue, Jan 21, 2025 at 04:15:46PM +0100, Philipp Stanner wrote:
> > drm_sched_backend_ops.timedout_job()'s documentation is outdated.
> > It
> > mentions the deprecated function drm_sched_resubmit_job().
> > Furthermore,
> > it does not point out the important distinction between hardware
> > and
> > firmware schedulers.
> >
> > Since firmware schedulers tyipically only use one entity per
> > scheduler,
> > timeout handling is significantly more simple because the entity
> > the
> > faulted job came from can just be killed without affecting innocent
> > processes.
> >
> > Update the documentation with that distinction and other details.
> >
> > Reformat the docstring to work to a unified style with the other
> > handles.
> >
> > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > ---
> > include/drm/gpu_scheduler.h | 82 ++++++++++++++++++++++-----------
> > ----
> > 1 file changed, 49 insertions(+), 33 deletions(-)
> >
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index cf40fdb55541..4806740b9023 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -394,8 +394,14 @@ static inline bool
> > drm_sched_invalidate_job(struct drm_sched_job *s_job,
> > }
> >
> > enum drm_gpu_sched_stat {
> > - DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
> > + /* Reserve 0 */
> > + DRM_GPU_SCHED_STAT_NONE,
> > +
> > + /* Operation succeeded */
> > DRM_GPU_SCHED_STAT_NOMINAL,
> > +
> > + /* Failure because dev is no longer available, for example
> > because
> > + * it was unplugged. */
> > DRM_GPU_SCHED_STAT_ENODEV,
> > };
> >
> > @@ -447,43 +453,53 @@ struct drm_sched_backend_ops {
> > * @timedout_job: Called when a job has taken too long to
> > execute,
> > * to trigger GPU recovery.
> > *
> > - * This method is called in a workqueue context.
>
> Why remove this line?
I felt it's surplus. All the functions here are callbacks that are
invoked by "the scheduler". I thought that's all the driver really
needs to know. Why should it care about the wq context?
Also, it's the only function for which the context is mentioned. If we
keep it here, we should probably provide it everywhere else, too.
>
> > + * @sched_job: The job that has timed out
> > *
> > - * Drivers typically issue a reset to recover from GPU
> > hangs, and this
> > - * procedure usually follows the following workflow:
> > + * Returns: A drm_gpu_sched_stat enum.
>
> Maybe "The status of the scheduler, defined by &drm_gpu_sched_stat".
>
> I think you forgot to add the corresponding parts in the
> documentation of
> drm_gpu_sched_stat.
What do you mean, precisely? I added information to that enum. You mean
that I should add that that enum is a return type for this callback
here?
>
> > *
> > - * 1. Stop the scheduler using drm_sched_stop(). This will
> > park the
> > - * scheduler thread and cancel the timeout work,
> > guaranteeing that
> > - * nothing is queued while we reset the hardware queue
> > - * 2. Try to gracefully stop non-faulty jobs (optional)
> > - * 3. Issue a GPU reset (driver-specific)
> > - * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> > - * 5. Restart the scheduler using drm_sched_start(). At
> > that point, new
> > - * jobs can be queued, and the scheduler thread is
> > unblocked
> > + * Drivers typically issue a reset to recover from GPU
> > hangs.
> > + * This procedure looks very different depending on
> > whether a firmware
> > + * or a hardware scheduler is being used.
> > + *
> > + * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one
> > scheduler, and
>
> Why pseudo? It's still a real ring buffer.
>
> > + * each scheduler has one entity. Hence, you typically
> > follow those
> > + * steps:
>
> Maybe better "Hence, the steps taken typically look as follows:".
>
> > + *
> > + * 1. Stop the scheduler using drm_sched_stop(). This will
> > pause the
> > + * scheduler workqueues and cancel the timeout work,
> > guaranteeing
> > + * that nothing is queued while we remove the ring.
>
> "while the ring is removed"
>
> > + * 2. Remove the ring. In most (all?) cases the firmware
> > will make sure
>
> At least I don't know about other cases and I also don't think it'd
> make a lot
> of sense if it'd be different. But of course there's no rule
> preventing people
> to implement things weirdly.
Seems like we can then use an absolute phrase here and who really wants
to do weird things won't be stopped by that anyways :]
>
> > + * that the corresponding parts of the hardware are
> > resetted, and that
> > + * other rings are not impacted.
> > + * 3. Kill the entity the faulted job stems from, and the
> > associated
>
> There can only be one entity in this case, so you can drop "the
> faulted job
> stems from".
>
> > + * scheduler.
> > + *
> > + *
> > + * For a HARDWARE SCHEDULER, each ring also has one
> > scheduler, but each
> > + * scheduler is typically associated with many entities.
> > This implies
>
> What about "each scheduler can be scheduling one or more entities"?
>
> > + * that all entities associated with the affected
> > scheduler cannot be
>
> I think you want to say that not all entites can be torn down, rather
> than none
> of them can be torn down.
>
> > + * torn down, because this would effectively also kill
> > innocent
> > + * userspace processes which did not submit faulty jobs
> > (for example).
>
> This is phrased ambiguously, "kill userspace processs" typically
> means something
> different than you mean in this context.
then let's say "down, because this would also affect users that did not
provide faulty jobs through their entities.", ack?
Thanks,
P.
>
> > + *
> > + * Consequently, the procedure to recover with a hardware
> > scheduler
> > + * should look like this:
> > + *
> > + * 1. Stop all schedulers impacted by the reset using
> > drm_sched_stop().
> > + * 2. Figure out to which entity the faulted job belongs
> > to.
> > + * 3. Kill that entity.
>
> I'd combine the two steps: "2. Kill the entity the faulty job
> originates from".
>
> > + * 4. Issue a GPU reset on all faulty rings (driver-
> > specific).
> > + * 5. Re-submit jobs on all schedulers impacted by re-
> > submitting them to
> > + * the entities which are still alive.
> > + * 6. Restart all schedulers that were stopped in step #1
> > using
> > + * drm_sched_start().
> > *
> > * Note that some GPUs have distinct hardware queues but
> > need to reset
> > * the GPU globally, which requires extra synchronization
> > between the
> > - * timeout handler of the different &drm_gpu_scheduler.
> > One way to
> > - * achieve this synchronization is to create an ordered
> > workqueue
> > - * (using alloc_ordered_workqueue()) at the driver level,
> > and pass this
> > - * queue to drm_sched_init(), to guarantee that timeout
> > handlers are
> > - * executed sequentially. The above workflow needs to be
> > slightly
> > - * adjusted in that case:
> > - *
> > - * 1. Stop all schedulers impacted by the reset using
> > drm_sched_stop()
> > - * 2. Try to gracefully stop non-faulty jobs on all queues
> > impacted by
> > - * the reset (optional)
> > - * 3. Issue a GPU reset on all faulty queues (driver-
> > specific)
> > - * 4. Re-submit jobs on all schedulers impacted by the
> > reset using
> > - * drm_sched_resubmit_jobs()
> > - * 5. Restart all schedulers that were stopped in step #1
> > using
> > - * drm_sched_start()
> > - *
> > - * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> > - * and the underlying driver has started or completed
> > recovery.
> > - *
> > - * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no
> > longer
> > - * available, i.e. has been unplugged.
> > + * timeout handlers of different schedulers. One way to
> > achieve this
> > + * synchronization is to create an ordered workqueue
> > (using
> > + * alloc_ordered_workqueue()) at the driver level, and
> > pass this queue
> > + * as drm_sched_init()'s @timeout_wq parameter. This will
> > guarantee
> > + * that timeout handlers are executed sequentially.
> > */
> > enum drm_gpu_sched_stat (*timedout_job)(struct
> > drm_sched_job *sched_job);
> >
> > --
> > 2.47.1
> >
* Re: [PATCH v2 3/3] drm/sched: Update timedout_job()'s documentation
2025-01-27 12:32 ` Philipp Stanner
@ 2025-01-27 12:59 ` Danilo Krummrich
0 siblings, 0 replies; 8+ messages in thread
From: Danilo Krummrich @ 2025-01-27 12:59 UTC (permalink / raw)
To: Philipp Stanner
Cc: Philipp Stanner, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
Christian König, dri-devel, linux-kernel, linux-media,
linaro-mm-sig
On Mon, Jan 27, 2025 at 01:32:40PM +0100, Philipp Stanner wrote:
> On Fri, 2025-01-24 at 13:27 +0100, Danilo Krummrich wrote:
> > On Tue, Jan 21, 2025 at 04:15:46PM +0100, Philipp Stanner wrote:
> > > drm_sched_backend_ops.timedout_job()'s documentation is outdated.
> > > It
> > > mentions the deprecated function drm_sched_resubmit_job().
> > > Furthermore,
> > > it does not point out the important distinction between hardware
> > > and
> > > firmware schedulers.
> > >
> > > Since firmware schedulers tyipically only use one entity per
> > > scheduler,
> > > timeout handling is significantly more simple because the entity
> > > the
> > > faulted job came from can just be killed without affecting innocent
> > > processes.
> > >
> > > Update the documentation with that distinction and other details.
> > >
> > > Reformat the docstring to work to a unified style with the other
> > > handles.
> > >
> > > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > > ---
> > > include/drm/gpu_scheduler.h | 82 ++++++++++++++++++++++-----------
> > > ----
> > > 1 file changed, 49 insertions(+), 33 deletions(-)
> > >
> > > diff --git a/include/drm/gpu_scheduler.h
> > > b/include/drm/gpu_scheduler.h
> > > index cf40fdb55541..4806740b9023 100644
> > > --- a/include/drm/gpu_scheduler.h
> > > +++ b/include/drm/gpu_scheduler.h
> > > @@ -394,8 +394,14 @@ static inline bool
> > > drm_sched_invalidate_job(struct drm_sched_job *s_job,
> > > }
> > >
> > > enum drm_gpu_sched_stat {
> > > - DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
> > > + /* Reserve 0 */
> > > + DRM_GPU_SCHED_STAT_NONE,
> > > +
> > > + /* Operation succeeded */
> > > DRM_GPU_SCHED_STAT_NOMINAL,
> > > +
> > > + /* Failure because dev is no longer available, for example
> > > because
> > > + * it was unplugged. */
> > > DRM_GPU_SCHED_STAT_ENODEV,
> > > };
> > >
> > > @@ -447,43 +453,53 @@ struct drm_sched_backend_ops {
> > > * @timedout_job: Called when a job has taken too long to
> > > execute,
> > > * to trigger GPU recovery.
> > > *
> > > - * This method is called in a workqueue context.
> >
> > Why remove this line?
>
> I felt its surplus. All the functions here are callbacks that are
> invoked by "the scheduler". I thought that's all the driver really
> needs to know. Why should it care about the wq context?
Yes, I think we should even be more clear and say which workqueue it's scheduled
on. The fact that this runs in the context of workqueues is not transparent to
users. The exact workqueue to use is even specified in drm_sched_init().
It's a good hint for drivers in terms of dma-fence handling.
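As an illustration (purely a sketch; the device struct and workqueue name are
made up, only alloc_ordered_workqueue() and drm_sched_init()'s timeout_wq
parameter are taken from the existing API), a driver could wire this up like so:

static int foo_create_reset_wq(struct foo_device *foo)
{
        /*
         * One ordered workqueue per device guarantees that the timeout
         * handlers of all of the device's schedulers run sequentially,
         * never concurrently.
         */
        foo->reset_wq = alloc_ordered_workqueue("foo-timeout", 0);
        if (!foo->reset_wq)
                return -ENOMEM;

        /*
         * foo->reset_wq is later passed as the timeout_wq argument of
         * drm_sched_init() for every scheduler instance of the device.
         */
        return 0;
}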
>
> Also, it's the only function for which the context is mentioned. If we
> keep it here, we should probably provide it everywhere else, too.
Sounds good.
>
> >
> > > + * @sched_job: The job that has timed out
> > > *
> > > - * Drivers typically issue a reset to recover from GPU
> > > hangs, and this
> > > - * procedure usually follows the following workflow:
> > > + * Returns: A drm_gpu_sched_stat enum.
> >
> > Maybe "The status of the scheduler, defined by &drm_gpu_sched_stat".
> >
> > I think you forgot to add the corresponding parts in the
> > documentation of
> > drm_gpu_sched_stat.
>
> What do you mean, precisely? I added information to that enum. You mean
> that I should add that that enum is a return type for this callback
> here?
You did add information to &drm_gpu_sched_stat, but no kernel doc comments you
can actually refer to.
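For example, a kernel-doc sketch for the enum (wording illustrative, not a
concrete proposal) could look like:

/**
 * enum drm_gpu_sched_stat - the scheduler's status
 *
 * @DRM_GPU_SCHED_STAT_NONE: Reserved. Do not use.
 * @DRM_GPU_SCHED_STAT_NOMINAL: Operation succeeded.
 * @DRM_GPU_SCHED_STAT_ENODEV: Error: Device is not available anymore.
 */
enum drm_gpu_sched_stat {
        DRM_GPU_SCHED_STAT_NONE,
        DRM_GPU_SCHED_STAT_NOMINAL,
        DRM_GPU_SCHED_STAT_ENODEV,
};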
>
> >
> > > *
> > > - * 1. Stop the scheduler using drm_sched_stop(). This will
> > > park the
> > > - * scheduler thread and cancel the timeout work,
> > > guaranteeing that
> > > - * nothing is queued while we reset the hardware queue
> > > - * 2. Try to gracefully stop non-faulty jobs (optional)
> > > - * 3. Issue a GPU reset (driver-specific)
> > > - * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> > > - * 5. Restart the scheduler using drm_sched_start(). At
> > > that point, new
> > > - * jobs can be queued, and the scheduler thread is
> > > unblocked
> > > + * Drivers typically issue a reset to recover from GPU
> > > hangs.
> > > + * This procedure looks very different depending on
> > > whether a firmware
> > > + * or a hardware scheduler is being used.
> > > + *
> > > + * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one
> > > scheduler, and
> >
> > Why pseudo? It's still a real ring buffer.
> >
> > > + * each scheduler has one entity. Hence, you typically
> > > follow those
> > > + * steps:
> >
> > Maybe better "Hence, the steps taken typically look as follows:".
> >
> > > + *
> > > + * 1. Stop the scheduler using drm_sched_stop(). This will
> > > pause the
> > > + * scheduler workqueues and cancel the timeout work,
> > > guaranteeing
> > > + * that nothing is queued while we remove the ring.
> >
> > "while the ring is removed"
> >
> > > + * 2. Remove the ring. In most (all?) cases the firmware
> > > will make sure
> >
> > At least I don't know about other cases and I also don't think it'd
> > make a lot
> > of sense if it'd be different. But of course there's no rule
> > preventing people
> > to implement things weirdly.
>
> Seems like we can then use an absolute phrase here and who really wants
> to do weird things won't be stopped by that anyways :]
>
> >
> > > + * that the corresponding parts of the hardware are
> > > resetted, and that
> > > + * other rings are not impacted.
> > > + * 3. Kill the entity the faulted job stems from, and the
> > > associated
> >
> > There can only be one entity in this case, so you can drop "the
> > faulted job
> > stems from".
> >
> > > + * scheduler.
> > > + *
> > > + *
> > > + * For a HARDWARE SCHEDULER, each ring also has one
> > > scheduler, but each
> > > + * scheduler is typically associated with many entities.
> > > This implies
> >
> > What about "each scheduler can be scheduling one or more entities"?
> >
> > > + * that all entities associated with the affected
> > > scheduler cannot be
> >
> > I think you want to say that not all entites can be torn down, rather
> > than none
> > of them can be torn down.
> >
> > > + * torn down, because this would effectively also kill
> > > innocent
> > > + * userspace processes which did not submit faulty jobs
> > > (for example).
> >
> > This is phrased ambiguously, "kill userspace processs" typically
> > means something
> > different than you mean in this context.
>
> then let's say "down, because this would also affect users that did not
> provide faulty jobs through their entities.", ack?
Sounds good.
>
>
> Danke,
> P.
>
> >
> > > + *
> > > + * Consequently, the procedure to recover with a hardware
> > > scheduler
> > > + * should look like this:
> > > + *
> > > + * 1. Stop all schedulers impacted by the reset using
> > > drm_sched_stop().
> > > + * 2. Figure out to which entity the faulted job belongs
> > > to.
> > > + * 3. Kill that entity.
> >
> > I'd combine the two steps: "2. Kill the entity the faulty job
> > originates from".
> >
> > > + * 4. Issue a GPU reset on all faulty rings (driver-
> > > specific).
> > > + * 5. Re-submit jobs on all schedulers impacted by re-
> > > submitting them to
> > > + * the entities which are still alive.
> > > + * 6. Restart all schedulers that were stopped in step #1
> > > using
> > > + * drm_sched_start().
> > > *
> > > * Note that some GPUs have distinct hardware queues but
> > > need to reset
> > > * the GPU globally, which requires extra synchronization
> > > between the
> > > - * timeout handler of the different &drm_gpu_scheduler.
> > > One way to
> > > - * achieve this synchronization is to create an ordered
> > > workqueue
> > > - * (using alloc_ordered_workqueue()) at the driver level,
> > > and pass this
> > > - * queue to drm_sched_init(), to guarantee that timeout
> > > handlers are
> > > - * executed sequentially. The above workflow needs to be
> > > slightly
> > > - * adjusted in that case:
> > > - *
> > > - * 1. Stop all schedulers impacted by the reset using
> > > drm_sched_stop()
> > > - * 2. Try to gracefully stop non-faulty jobs on all queues
> > > impacted by
> > > - * the reset (optional)
> > > - * 3. Issue a GPU reset on all faulty queues (driver-
> > > specific)
> > > - * 4. Re-submit jobs on all schedulers impacted by the
> > > reset using
> > > - * drm_sched_resubmit_jobs()
> > > - * 5. Restart all schedulers that were stopped in step #1
> > > using
> > > - * drm_sched_start()
> > > - *
> > > - * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> > > - * and the underlying driver has started or completed
> > > recovery.
> > > - *
> > > - * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no
> > > longer
> > > - * available, i.e. has been unplugged.
> > > + * timeout handlers of different schedulers. One way to
> > > achieve this
> > > + * synchronization is to create an ordered workqueue
> > > (using
> > > + * alloc_ordered_workqueue()) at the driver level, and
> > > pass this queue
> > > + * as drm_sched_init()'s @timeout_wq parameter. This will
> > > guarantee
> > > + * that timeout handlers are executed sequentially.
> > > */
> > > enum drm_gpu_sched_stat (*timedout_job)(struct
> > > drm_sched_job *sched_job);
> > >
> > > --
> > > 2.47.1
> > >
>