From: Philipp Stanner <phasta@kernel.org>
To: "Matthew Brost" <matthew.brost@intel.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"Philipp Stanner" <pstanner@redhat.com>,
"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
"Maxime Ripard" <mripard@kernel.org>,
"Thomas Zimmermann" <tzimmermann@suse.de>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Sumit Semwal" <sumit.semwal@linaro.org>,
"Christian König" <christian.koenig@amd.com>
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
Philipp Stanner <phasta@kernel.org>
Subject: [PATCH v2 3/3] drm/sched: Update timedout_job()'s documentation
Date: Tue, 21 Jan 2025 16:15:46 +0100 [thread overview]
Message-ID: <20250121151544.44949-6-phasta@kernel.org> (raw)
In-Reply-To: <20250121151544.44949-2-phasta@kernel.org>
drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
mentions the deprecated function drm_sched_resubmit_job(). Furthermore,
it does not point out the important distinction between hardware and
firmware schedulers.
Since firmware schedulers typically use only one entity per scheduler,
timeout handling is significantly simpler: the entity the faulted job
came from can simply be killed without affecting innocent processes.
Update the documentation with that distinction and other details.
Reformat the docstring to follow a unified style with the other
callbacks.
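
As an illustration, the firmware-scheduler flow could be sketched
roughly like below. All my_* names are made up for this example; only
the drm_sched_*() calls are real API, and real drivers typically defer
the final entity / scheduler teardown rather than doing all of it
synchronously from the timeout handler:

```c
/* Hypothetical sketch, not real driver code. */
static enum drm_gpu_sched_stat
my_timedout_job(struct drm_sched_job *sched_job)
{
	struct my_ctx *ctx = to_my_ctx(sched_job);

	/* 1. Park the scheduler and cancel the timeout work. */
	drm_sched_stop(&ctx->sched, sched_job);

	/* 2. Remove the ring; the firmware resets the affected parts
	 *    of the hardware without disturbing other rings.
	 */
	my_fw_ring_remove(ctx);

	/* 3. Kill the (single) entity and, eventually, the scheduler.
	 *    Sketched synchronously here; usually deferred.
	 */
	drm_sched_entity_destroy(&ctx->entity);

	return DRM_GPU_SCHED_STAT_NOMINAL;
}
```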
Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
include/drm/gpu_scheduler.h | 82 ++++++++++++++++++++++---------------
1 file changed, 49 insertions(+), 33 deletions(-)
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index cf40fdb55541..4806740b9023 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -394,8 +394,14 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
}
enum drm_gpu_sched_stat {
- DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
+ /* Reserve 0 */
+ DRM_GPU_SCHED_STAT_NONE,
+
+ /* Operation succeeded */
DRM_GPU_SCHED_STAT_NOMINAL,
+
+ /* Failure because dev is no longer available, for example because
+ * it was unplugged. */
DRM_GPU_SCHED_STAT_ENODEV,
};
@@ -447,43 +453,53 @@ struct drm_sched_backend_ops {
* @timedout_job: Called when a job has taken too long to execute,
* to trigger GPU recovery.
*
- * This method is called in a workqueue context.
+ * @sched_job: The job that has timed out
*
- * Drivers typically issue a reset to recover from GPU hangs, and this
- * procedure usually follows the following workflow:
+ * Returns: A drm_gpu_sched_stat enum.
*
- * 1. Stop the scheduler using drm_sched_stop(). This will park the
- * scheduler thread and cancel the timeout work, guaranteeing that
- * nothing is queued while we reset the hardware queue
- * 2. Try to gracefully stop non-faulty jobs (optional)
- * 3. Issue a GPU reset (driver-specific)
- * 4. Re-submit jobs using drm_sched_resubmit_jobs()
- * 5. Restart the scheduler using drm_sched_start(). At that point, new
- * jobs can be queued, and the scheduler thread is unblocked
+ * Drivers typically issue a reset to recover from GPU hangs.
+ * This procedure looks very different depending on whether a firmware
+ * or a hardware scheduler is being used.
+ *
+ * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one scheduler, and
+ * each scheduler has one entity. Hence, you typically follow those
+ * steps:
+ *
+ * 1. Stop the scheduler using drm_sched_stop(). This will pause the
+ * scheduler workqueues and cancel the timeout work, guaranteeing
+ * that nothing is queued while we remove the ring.
+ * 2. Remove the ring. In most (all?) cases the firmware will make sure
+ * that the corresponding parts of the hardware are reset, and that
+ * other rings are not impacted.
+ * 3. Kill the entity the faulted job stems from, and the associated
+ * scheduler.
+ *
+ *
+ * For a HARDWARE SCHEDULER, each ring also has one scheduler, but each
+ * scheduler is typically associated with many entities. This implies
+ * that all entities associated with the affected scheduler cannot be
+ * torn down, because this would effectively also kill innocent
+ * userspace processes which did not submit faulty jobs (for example).
+ *
+ * Consequently, the procedure to recover with a hardware scheduler
+ * should look like this:
+ *
+ * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
+ * 2. Figure out which entity the faulted job belongs to.
+ * 3. Kill that entity.
+ * 4. Issue a GPU reset on all faulty rings (driver-specific).
+ * 5. Re-submit jobs on all impacted schedulers by re-submitting them to
+ * the entities which are still alive.
+ * 6. Restart all schedulers that were stopped in step #1 using
+ * drm_sched_start().
*
* Note that some GPUs have distinct hardware queues but need to reset
* the GPU globally, which requires extra synchronization between the
- * timeout handler of the different &drm_gpu_scheduler. One way to
- * achieve this synchronization is to create an ordered workqueue
- * (using alloc_ordered_workqueue()) at the driver level, and pass this
- * queue to drm_sched_init(), to guarantee that timeout handlers are
- * executed sequentially. The above workflow needs to be slightly
- * adjusted in that case:
- *
- * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
- * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
- * the reset (optional)
- * 3. Issue a GPU reset on all faulty queues (driver-specific)
- * 4. Re-submit jobs on all schedulers impacted by the reset using
- * drm_sched_resubmit_jobs()
- * 5. Restart all schedulers that were stopped in step #1 using
- * drm_sched_start()
- *
- * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
- * and the underlying driver has started or completed recovery.
- *
- * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
- * available, i.e. has been unplugged.
+ * timeout handlers of different schedulers. One way to achieve this
+ * synchronization is to create an ordered workqueue (using
+ * alloc_ordered_workqueue()) at the driver level, and pass this queue
+ * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
+ * that timeout handlers are executed sequentially.
*/
enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
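
For the ordered-workqueue serialization the comment above mentions,
driver setup could look roughly like this sketch (the exact
drm_sched_init() argument list varies across kernel versions; the only
point here is passing the ordered queue as the timeout_wq parameter):

```c
/* Driver-level ordered workqueue, so the timeout handlers of all
 * schedulers sharing it are guaranteed to run one after another. */
struct workqueue_struct *tdr_wq;

tdr_wq = alloc_ordered_workqueue("my-driver-tdr", 0);
if (!tdr_wq)
	return -ENOMEM;

/* Pass tdr_wq as the timeout_wq parameter of drm_sched_init() for
 * every scheduler that takes part in the global reset. */
```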
--
2.47.1