public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/3] drm/sched: Documentation and refcount improvements
@ 2025-01-09 13:37 Philipp Stanner
  2025-01-09 13:37 ` [PATCH 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-01-09 13:37 UTC (permalink / raw)
  To: Luben Tuikov, Matthew Brost, Danilo Krummrich, Philipp Stanner,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Sumit Semwal, Christian König
  Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig,
	Philipp Stanner

This is a series succeeding my previous patch [1].

I noticed that we are still referring to a non-existent function and to
a deprecated one in the callback documentation. We should probably also
point out the important distinction between hardware and firmware
schedulers more clearly.

Please give me feedback, especially on the RFC comments in patch 3.

(This series still fires documentation build warnings. I want to gather
feedback on the open questions first and will solve them in v2.)

Thank you,
Philipp

[1] https://lore.kernel.org/all/20241220124515.93169-2-phasta@kernel.org/

Philipp Stanner (3):
  drm/sched: Document run_job() refcount hazard
  drm/sched: Adjust outdated docu for run_job()
  drm/sched: Update timedout_job()'s documentation

 drivers/gpu/drm/scheduler/sched_main.c |  10 ++-
 include/drm/gpu_scheduler.h            | 105 ++++++++++++++++---------
 2 files changed, 77 insertions(+), 38 deletions(-)

-- 
2.47.1


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/3] drm/sched: Document run_job() refcount hazard
  2025-01-09 13:37 [PATCH 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
@ 2025-01-09 13:37 ` Philipp Stanner
  2025-01-09 13:44   ` Tvrtko Ursulin
                     ` (2 more replies)
  2025-01-09 13:37 ` [PATCH 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
  2025-01-09 13:37 ` [PATCH 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
  2 siblings, 3 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-01-09 13:37 UTC (permalink / raw)
  To: Luben Tuikov, Matthew Brost, Danilo Krummrich, Philipp Stanner,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Sumit Semwal, Christian König
  Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig

From: Philipp Stanner <pstanner@redhat.com>

drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
That fence is signalled by the driver once the hardware completed the
associated job. The scheduler does not increment the reference count on
that fence, but implicitly expects to inherit this fence from run_job().

This is relatively subtle and prone to misunderstandings.

This implies that, to keep a reference for itself, a driver needs to
call dma_fence_get() in addition to dma_fence_init() in that callback.

It's further complicated by the fact that the scheduler even decrements
the refcount in drm_sched_run_job_work() since it created a new
reference in drm_sched_fence_scheduled(). It does, however, still use
its pointer to the fence after calling dma_fence_put() - which is safe
because of the aforementioned new reference, but actually still violates
the refcounting rules.

Improve the explanatory comment for that decrement.

Move the call to dma_fence_put() to the position behind the last usage
of the fence.

Document the necessity to increment the reference count in
drm_sched_backend_ops.run_job().
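
As an illustration (all driver-side names here are hypothetical), a
driver honoring this contract would do something along these lines:

```c
static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
{
	struct my_job *job = to_my_job(sched_job);	/* hypothetical */
	struct dma_fence *fence = &job->hw_fence;

	/*
	 * dma_fence_init() starts the refcount at 1. This single
	 * reference is what the scheduler inherits via the return
	 * value below.
	 */
	dma_fence_init(fence, &my_fence_ops, &job->lock,
		       job->fence_context, job->seqno);

	/*
	 * To keep using the fence itself, the driver must take an
	 * additional reference of its own:
	 */
	dma_fence_get(fence);

	my_submit_to_hw(job);	/* hypothetical */

	return fence;	/* this reference now belongs to the scheduler */
}
```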

Signed-off-by: Philipp Stanner <pstanner@redhat.com>
---
 drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
 include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
 2 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 57da84908752..5f46c01eb01e 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct work_struct *w)
 	drm_sched_fence_scheduled(s_fence, fence);
 
 	if (!IS_ERR_OR_NULL(fence)) {
-		/* Drop for original kref_init of the fence */
-		dma_fence_put(fence);
-
 		r = dma_fence_add_callback(fence, &sched_job->cb,
 					   drm_sched_job_done_cb);
 		if (r == -ENOENT)
 			drm_sched_job_done(sched_job, fence->error);
 		else if (r)
 			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
+
+		/*
+		 * s_fence took a new reference to fence in the call to
+		 * drm_sched_fence_scheduled() above. The reference passed by
+		 * run_job() above is now not needed any longer. Drop it.
+		 */
+		dma_fence_put(fence);
 	} else {
 		drm_sched_job_done(sched_job, IS_ERR(fence) ?
 				   PTR_ERR(fence) : 0);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 95e17504e46a..d5cd2a78f27c 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
 					 struct drm_sched_entity *s_entity);
 
 	/**
-         * @run_job: Called to execute the job once all of the dependencies
-         * have been resolved.  This may be called multiple times, if
-	 * timedout_job() has happened and drm_sched_job_recovery()
-	 * decides to try it again.
+	 * @run_job: Called to execute the job once all of the dependencies
+	 * have been resolved. This may be called multiple times, if
+	 * timedout_job() has happened and drm_sched_job_recovery() decides to
+	 * try it again.
+	 *
+	 * @sched_job: the job to run
+	 *
+	 * Returns: dma_fence the driver must signal once the hardware has
+	 *	completed the job ("hardware fence").
+	 *
+	 * Note that the scheduler expects to 'inherit' its own reference to
+	 * this fence from the callback. It does not invoke an extra
+	 * dma_fence_get() on it. Consequently, this callback must take a
+	 * reference for the scheduler, and additional ones for the driver's
+	 * respective needs.
 	 */
 	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
 
-- 
2.47.1



* [PATCH 2/3] drm/sched: Adjust outdated docu for run_job()
  2025-01-09 13:37 [PATCH 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
  2025-01-09 13:37 ` [PATCH 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
@ 2025-01-09 13:37 ` Philipp Stanner
  2025-01-13 10:20   ` Danilo Krummrich
  2025-01-09 13:37 ` [PATCH 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
  2 siblings, 1 reply; 13+ messages in thread
From: Philipp Stanner @ 2025-01-09 13:37 UTC (permalink / raw)
  To: Luben Tuikov, Matthew Brost, Danilo Krummrich, Philipp Stanner,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Sumit Semwal, Christian König
  Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig,
	Philipp Stanner

The documentation for drm_sched_backend_ops.run_job() mentions a certain
function called drm_sched_job_recovery(). This function does not exist.
What's actually meant is drm_sched_resubmit_jobs(), which is by now also
deprecated.

Remove the mention of the deprecated function.

Discourage the behavior of drm_sched_backend_ops.run_job() being called
multiple times for the same job.

Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
 include/drm/gpu_scheduler.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index d5cd2a78f27c..c4e65f9f7f22 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -421,9 +421,12 @@ struct drm_sched_backend_ops {
 
 	/**
 	 * @run_job: Called to execute the job once all of the dependencies
-	 * have been resolved. This may be called multiple times, if
-	 * timedout_job() has happened and drm_sched_job_recovery() decides to
-	 * try it again.
+	 * have been resolved.
+	 *
+	 * The deprecated drm_sched_resubmit_jobs() (called from
+	 * drm_sched_backend_ops.timedout_job()) can invoke this again with the
+	 * same parameters. Doing this is strongly discouraged because it
+	 * violates dma_fence rules.
 	 *
 	 * @sched_job: the job to run
 	 *
-- 
2.47.1



* [PATCH 3/3] drm/sched: Update timedout_job()'s documentation
  2025-01-09 13:37 [PATCH 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
  2025-01-09 13:37 ` [PATCH 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
  2025-01-09 13:37 ` [PATCH 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
@ 2025-01-09 13:37 ` Philipp Stanner
  2025-01-13 11:46   ` Danilo Krummrich
  2 siblings, 1 reply; 13+ messages in thread
From: Philipp Stanner @ 2025-01-09 13:37 UTC (permalink / raw)
  To: Luben Tuikov, Matthew Brost, Danilo Krummrich, Philipp Stanner,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Sumit Semwal, Christian König
  Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig,
	Philipp Stanner

drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
mentions the deprecated function drm_sched_resubmit_job(). Furthermore,
it does not point out the important distinction between hardware and
firmware schedulers.

Since firmware schedulers typically only use one entity per scheduler,
timeout handling is significantly simpler, because the entity the
faulted job came from can simply be killed without affecting innocent
processes.

Update the documentation with that distinction and other details.
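
As a rough sketch of the firmware-scheduler flow described above
(driver-specific helpers are hypothetical, and drm_sched_start()'s
second parameter has varied across kernel versions):

```c
static enum drm_gpu_sched_stat
my_fw_timedout_job(struct drm_sched_job *sched_job)
{
	struct drm_gpu_scheduler *sched = sched_job->sched;

	/* 1. Park the scheduler so nothing runs during the reset. */
	drm_sched_stop(sched, sched_job);

	/* 2. Driver-specific ring reset (hypothetical helper). */
	my_fw_reset_ring(sched_job);

	/*
	 * 3. Kill the entity the faulted job came from. With one
	 *    entity per firmware scheduler, no innocent process is
	 *    affected. (Driver bookkeeping, not shown.)
	 */

	/* 4. Unblock job submission again. */
	drm_sched_start(sched, 0);

	return DRM_GPU_SCHED_STAT_NOMINAL;
}
```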

Reformat the docstring to a unified style consistent with the other
callbacks.

Signed-off-by: Philipp Stanner <phasta@kernel.org>
---
 include/drm/gpu_scheduler.h | 83 +++++++++++++++++++++++--------------
 1 file changed, 52 insertions(+), 31 deletions(-)

diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index c4e65f9f7f22..380b8840c591 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -445,43 +445,64 @@ struct drm_sched_backend_ops {
 	 * @timedout_job: Called when a job has taken too long to execute,
 	 * to trigger GPU recovery.
 	 *
-	 * This method is called in a workqueue context.
+	 * @sched_job: The job that has timed out
 	 *
-	 * Drivers typically issue a reset to recover from GPU hangs, and this
-	 * procedure usually follows the following workflow:
+	 * Returns:
+	 * - DRM_GPU_SCHED_STAT_NOMINAL, on success, i.e., if the underlying
+	 *   driver has started or completed recovery.
+	 * - DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
+	 *   available, i.e., has been unplugged.
 	 *
-	 * 1. Stop the scheduler using drm_sched_stop(). This will park the
-	 *    scheduler thread and cancel the timeout work, guaranteeing that
-	 *    nothing is queued while we reset the hardware queue
-	 * 2. Try to gracefully stop non-faulty jobs (optional)
-	 * 3. Issue a GPU reset (driver-specific)
-	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
+	 * Drivers typically issue a reset to recover from GPU hangs.
+	 * This procedure looks very different depending on whether a firmware
+	 * or a hardware scheduler is being used.
+	 *
+	 * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one scheduler, and
+	 * each scheduler (typically) has one entity. Hence, you typically
+	 * follow those steps:
+	 *
+	 * 1. Stop the scheduler using drm_sched_stop(). This will pause the
+	 *    scheduler workqueues and cancel the timeout work, guaranteeing
+	 *    that nothing is queued while we reset the hardware queue.
+	 * 2. Try to gracefully stop non-faulty jobs (optional).
+	 * TODO: RFC ^ Folks, should we remove this step? What does it even mean
+	 * precisely to "stop" those jobs? Is that even helpful to userspace in
+	 * any way?
+	 * 3. Issue a GPU reset (driver-specific).
+	 * 4. Kill the entity the faulted job stems from, and the associated
+	 *    scheduler.
 	 * 5. Restart the scheduler using drm_sched_start(). At that point, new
-	 *    jobs can be queued, and the scheduler thread is unblocked
+	 *    jobs can be queued, and the scheduler workqueues awake again.
+	 *
+	 * For a HARDWARE SCHEDULER, each ring also has one scheduler, but each
+	 * scheduler typically has many attached entities. This implies that you
+	 * cannot tear down all entities associated with the affected scheduler,
+	 * because this would effectively also kill innocent userspace processes
+	 * which did not submit faulty jobs (for example).
+	 *
+	 * Consequently, the procedure to recover with a hardware scheduler
+	 * should look like this:
+	 *
+	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
+	 * 2. Figure out to which entity the faulted job belongs.
+	 * 3. Try to gracefully stop non-faulty jobs (optional).
+	 * TODO: RFC ^ Folks, should we remove this step? What does it even mean
+	 * precisely to "stop" those jobs? Is that even helpful to userspace in
+	 * any way?
+	 * 4. Kill that entity.
+	 * 5. Issue a GPU reset on all faulty rings (driver-specific).
+	 * 6. Re-submit jobs on all schedulers impacted by re-submitting them to
+	 *    the entities which are still alive.
+	 * 7. Restart all schedulers that were stopped in step #1 using
+	 *    drm_sched_start().
 	 *
 	 * Note that some GPUs have distinct hardware queues but need to reset
 	 * the GPU globally, which requires extra synchronization between the
-	 * timeout handler of the different &drm_gpu_scheduler. One way to
-	 * achieve this synchronization is to create an ordered workqueue
-	 * (using alloc_ordered_workqueue()) at the driver level, and pass this
-	 * queue to drm_sched_init(), to guarantee that timeout handlers are
-	 * executed sequentially. The above workflow needs to be slightly
-	 * adjusted in that case:
-	 *
-	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
-	 * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
-	 *    the reset (optional)
-	 * 3. Issue a GPU reset on all faulty queues (driver-specific)
-	 * 4. Re-submit jobs on all schedulers impacted by the reset using
-	 *    drm_sched_resubmit_jobs()
-	 * 5. Restart all schedulers that were stopped in step #1 using
-	 *    drm_sched_start()
-	 *
-	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
-	 * and the underlying driver has started or completed recovery.
-	 *
-	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
-	 * available, i.e. has been unplugged.
+	 * timeout handlers of different schedulers. One way to achieve this
+	 * synchronization is to create an ordered workqueue (using
+	 * alloc_ordered_workqueue()) at the driver level, and pass this queue
+	 * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
+	 * that timeout handlers are executed sequentially.
 	 */
 	enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
 
-- 
2.47.1



* Re: [PATCH 1/3] drm/sched: Document run_job() refcount hazard
  2025-01-09 13:37 ` [PATCH 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
@ 2025-01-09 13:44   ` Tvrtko Ursulin
  2025-01-09 13:47     ` Philipp Stanner
  2025-01-09 14:01   ` Christian König
  2025-01-09 14:30   ` Danilo Krummrich
  2 siblings, 1 reply; 13+ messages in thread
From: Tvrtko Ursulin @ 2025-01-09 13:44 UTC (permalink / raw)
  To: Philipp Stanner, Luben Tuikov, Matthew Brost, Danilo Krummrich,
	Philipp Stanner, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König
  Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig


On 09/01/2025 13:37, Philipp Stanner wrote:
> From: Philipp Stanner <pstanner@redhat.com>
> 
> drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
> That fence is signalled by the driver once the hardware completed the
> associated job. The scheduler does not increment the reference count on
> that fence, but implicitly expects to inherit this fence from run_job().
> 
> This is relatively subtle and prone to misunderstandings.
> 
> This implies that, to keep a reference for itself, a driver needs to
> call dma_fence_get() in addition to dma_fence_init() in that callback.
> 
> It's further complicated by the fact that the scheduler even decrements
> the refcount in drm_sched_run_job_work() since it created a new
> reference in drm_sched_fence_scheduled(). It does, however, still use
> its pointer to the fence after calling dma_fence_put() - which is safe
> because of the aforementioned new reference, but actually still violates
> the refcounting rules.
> 
> Improve the explanatory comment for that decrement.
> 
> Move the call to dma_fence_put() to the position behind the last usage
> of the fence.
> 
> Document the necessity to increment the reference count in
> drm_sched_backend_ops.run_job().
> 
> Signed-off-by: Philipp Stanner <pstanner@redhat.com>
> ---
>   drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
>   include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
>   2 files changed, 22 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 57da84908752..5f46c01eb01e 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct work_struct *w)
>   	drm_sched_fence_scheduled(s_fence, fence);
>   
>   	if (!IS_ERR_OR_NULL(fence)) {
> -		/* Drop for original kref_init of the fence */
> -		dma_fence_put(fence);
> -
>   		r = dma_fence_add_callback(fence, &sched_job->cb,
>   					   drm_sched_job_done_cb);
>   		if (r == -ENOENT)
>   			drm_sched_job_done(sched_job, fence->error);
>   		else if (r)
>   			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
> +
> +		/*
> +		 * s_fence took a new reference to fence in the call to
> +		 * drm_sched_fence_scheduled() above. The reference passed by
> +		 * run_job() above is now not needed any longer. Drop it.
> +		 */
> +		dma_fence_put(fence);
>   	} else {
>   		drm_sched_job_done(sched_job, IS_ERR(fence) ?
>   				   PTR_ERR(fence) : 0);
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 95e17504e46a..d5cd2a78f27c 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
>   					 struct drm_sched_entity *s_entity);
>   
>   	/**
> -         * @run_job: Called to execute the job once all of the dependencies
> -         * have been resolved.  This may be called multiple times, if
> -	 * timedout_job() has happened and drm_sched_job_recovery()
> -	 * decides to try it again.
> +	 * @run_job: Called to execute the job once all of the dependencies
> +	 * have been resolved. This may be called multiple times, if
> +	 * timedout_job() has happened and drm_sched_job_recovery() decides to
> +	 * try it again.
> +	 *
> +	 * @sched_job: the job to run
> +	 *
> +	 * Returns: dma_fence the driver must signal once the hardware has
> +	 *	completed the job ("hardware fence").
> +	 *
> +	 * Note that the scheduler expects to 'inherit' its own reference to
> +	 * this fence from the callback. It does not invoke an extra
> +	 * dma_fence_get() on it. Consequently, this callback must take a
> +	 * reference for the scheduler, and additional ones for the driver's
> +	 * respective needs.

Another thing which would be really good to document here is what other 
things run_job can return apart from the fence.

For instance amdgpu can return an ERR_PTR but I haven't looked into 
other drivers.

Regards,

Tvrtko

>   	 */
>   	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>   


* Re: [PATCH 1/3] drm/sched: Document run_job() refcount hazard
  2025-01-09 13:44   ` Tvrtko Ursulin
@ 2025-01-09 13:47     ` Philipp Stanner
  0 siblings, 0 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-01-09 13:47 UTC (permalink / raw)
  To: Tvrtko Ursulin, Philipp Stanner, Luben Tuikov, Matthew Brost,
	Danilo Krummrich, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König
  Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig

On Thu, 2025-01-09 at 13:44 +0000, Tvrtko Ursulin wrote:
> 
> On 09/01/2025 13:37, Philipp Stanner wrote:
> > From: Philipp Stanner <pstanner@redhat.com>
> > 
> > drm_sched_backend_ops.run_job() returns a dma_fence for the
> > scheduler.
> > That fence is signalled by the driver once the hardware completed
> > the
> > associated job. The scheduler does not increment the reference
> > count on
> > that fence, but implicitly expects to inherit this fence from
> > run_job().
> > 
> > This is relatively subtle and prone to misunderstandings.
> > 
> > This implies that, to keep a reference for itself, a driver needs
> > to
> > call dma_fence_get() in addition to dma_fence_init() in that
> > callback.
> > 
> > It's further complicated by the fact that the scheduler even
> > decrements
> > the refcount in drm_sched_run_job_work() since it created a new
> > reference in drm_sched_fence_scheduled(). It does, however, still
> > use
> > its pointer to the fence after calling dma_fence_put() - which is
> > safe
> > because of the aforementioned new reference, but actually still
> > violates
> > the refcounting rules.
> > 
> > Improve the explanatory comment for that decrement.
> > 
> > Move the call to dma_fence_put() to the position behind the last
> > usage
> > of the fence.
> > 
> > Document the necessity to increment the reference count in
> > drm_sched_backend_ops.run_job().
> > 
> > Signed-off-by: Philipp Stanner <pstanner@redhat.com>
> > ---
> >   drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
> >   include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
> >   2 files changed, 22 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> > b/drivers/gpu/drm/scheduler/sched_main.c
> > index 57da84908752..5f46c01eb01e 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct
> > work_struct *w)
> >   	drm_sched_fence_scheduled(s_fence, fence);
> >   
> >   	if (!IS_ERR_OR_NULL(fence)) {
> > -		/* Drop for original kref_init of the fence */
> > -		dma_fence_put(fence);
> > -
> >   		r = dma_fence_add_callback(fence, &sched_job->cb,
> >   					   drm_sched_job_done_cb);
> >   		if (r == -ENOENT)
> >   			drm_sched_job_done(sched_job, fence-
> > >error);
> >   		else if (r)
> >   			DRM_DEV_ERROR(sched->dev, "fence add
> > callback failed (%d)\n", r);
> > +
> > +		/*
> > +		 * s_fence took a new reference to fence in the
> > call to
> > +		 * drm_sched_fence_scheduled() above. The
> > reference passed by
> > +		 * run_job() above is now not needed any longer.
> > Drop it.
> > +		 */
> > +		dma_fence_put(fence);
> >   	} else {
> >   		drm_sched_job_done(sched_job, IS_ERR(fence) ?
> >   				   PTR_ERR(fence) : 0);
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index 95e17504e46a..d5cd2a78f27c 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
> >   					 struct drm_sched_entity
> > *s_entity);
> >   
> >   	/**
> > -         * @run_job: Called to execute the job once all of the
> > dependencies
> > -         * have been resolved.  This may be called multiple times,
> > if
> > -	 * timedout_job() has happened and
> > drm_sched_job_recovery()
> > -	 * decides to try it again.
> > +	 * @run_job: Called to execute the job once all of the
> > dependencies
> > +	 * have been resolved. This may be called multiple times,
> > if
> > +	 * timedout_job() has happened and
> > drm_sched_job_recovery() decides to
> > +	 * try it again.
> > +	 *
> > +	 * @sched_job: the job to run
> > +	 *
> > +	 * Returns: dma_fence the driver must signal once the
> > hardware has
> > +	 *	completed the job ("hardware fence").
> > +	 *
> > +	 * Note that the scheduler expects to 'inherit' its own
> > reference to
> > +	 * this fence from the callback. It does not invoke an
> > extra
> > +	 * dma_fence_get() on it. Consequently, this callback must
> > take a
> > +	 * reference for the scheduler, and additional ones for
> > the driver's
> > +	 * respective needs.
> 
> Another thing which would be really good to document here is what
> other 
> things run_job can return apart from the fence.
> 
> For instance amdgpu can return an ERR_PTR but I haven't looked into 
> other drivers.

ERR_PTR and NULL are both fine.

But good catch, I'll add that in v2
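
In sketch form (the submission helper is hypothetical), the error path
then looks like:

```c
static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
{
	struct dma_fence *fence;
	int ret;

	ret = my_submit(sched_job, &fence);	/* hypothetical */
	if (ret)
		/*
		 * The scheduler treats an ERR_PTR return as a failed
		 * job and completes it with this error; there is no
		 * fence to refcount in that case.
		 */
		return ERR_PTR(ret);

	return fence;
}
```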

P.

> 
> Regards,
> 
> Tvrtko
> 
> >   	 */
> >   	struct dma_fence *(*run_job)(struct drm_sched_job
> > *sched_job);
> >   
> 



* Re: [PATCH 1/3] drm/sched: Document run_job() refcount hazard
  2025-01-09 13:37 ` [PATCH 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
  2025-01-09 13:44   ` Tvrtko Ursulin
@ 2025-01-09 14:01   ` Christian König
  2025-01-13 12:07     ` Philipp Stanner
  2025-01-09 14:30   ` Danilo Krummrich
  2 siblings, 1 reply; 13+ messages in thread
From: Christian König @ 2025-01-09 14:01 UTC (permalink / raw)
  To: Philipp Stanner, Luben Tuikov, Matthew Brost, Danilo Krummrich,
	Philipp Stanner, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal
  Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig

Am 09.01.25 um 14:37 schrieb Philipp Stanner:
> From: Philipp Stanner <pstanner@redhat.com>
>
> drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
> That fence is signalled by the driver once the hardware completed the
> associated job. The scheduler does not increment the reference count on
> that fence, but implicitly expects to inherit this fence from run_job().
>
> This is relatively subtle and prone to misunderstandings.
>
> This implies that, to keep a reference for itself, a driver needs to
> call dma_fence_get() in addition to dma_fence_init() in that callback.
>
> It's further complicated by the fact that the scheduler even decrements
> the refcount in drm_sched_run_job_work() since it created a new
> reference in drm_sched_fence_scheduled(). It does, however, still use
> its pointer to the fence after calling dma_fence_put() - which is safe
> because of the aforementioned new reference, but actually still violates
> the refcounting rules.
>
> Improve the explanatory comment for that decrement.
>
> Move the call to dma_fence_put() to the position behind the last usage
> of the fence.
>
> Document the necessity to increment the reference count in
> drm_sched_backend_ops.run_job().
>
> Signed-off-by: Philipp Stanner <pstanner@redhat.com>
> ---
>   drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
>   include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
>   2 files changed, 22 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 57da84908752..5f46c01eb01e 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct work_struct *w)
>   	drm_sched_fence_scheduled(s_fence, fence);
>   
>   	if (!IS_ERR_OR_NULL(fence)) {
> -		/* Drop for original kref_init of the fence */
> -		dma_fence_put(fence);
> -
>   		r = dma_fence_add_callback(fence, &sched_job->cb,
>   					   drm_sched_job_done_cb);
>   		if (r == -ENOENT)
>   			drm_sched_job_done(sched_job, fence->error);
>   		else if (r)
>   			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
> +
> +		/*
> +		 * s_fence took a new reference to fence in the call to
> +		 * drm_sched_fence_scheduled() above. The reference passed by
> +		 * run_job() above is now not needed any longer. Drop it.
> +		 */
> +		dma_fence_put(fence);
>   	} else {
>   		drm_sched_job_done(sched_job, IS_ERR(fence) ?
>   				   PTR_ERR(fence) : 0);
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 95e17504e46a..d5cd2a78f27c 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
>   					 struct drm_sched_entity *s_entity);
>   
>   	/**
> -         * @run_job: Called to execute the job once all of the dependencies
> -         * have been resolved.  This may be called multiple times, if
> -	 * timedout_job() has happened and drm_sched_job_recovery()
> -	 * decides to try it again.
> +	 * @run_job: Called to execute the job once all of the dependencies
> +	 * have been resolved. This may be called multiple times, if
> +	 * timedout_job() has happened and drm_sched_job_recovery() decides to
> +	 * try it again.

I just came to realize that this hack with calling run_job multiple 
times won't work any more with this patch here.

The background is that you can't allocate memory for a newly returned 
fence and as far as I know no driver pre allocates multiple HW fences 
for a job.

So at least amdgpu used to re-use the same HW fence as return over and 
over again, just re-initialized the reference count. I removed that hack 
from amdgpu, but just FYI it could be that other driver did the same.

Apart from that concern I think that this patch is really the right 
thing and that driver hacks relying on the order of dropping references 
are fundamentally broken approaches.

So feel free to add Reviewed-by: Christian König <christian.koenig@amd.com>.

Regards,
Christian.

> +	 *
> +	 * @sched_job: the job to run
> +	 *
> +	 * Returns: dma_fence the driver must signal once the hardware has
> +	 *	completed the job ("hardware fence").
> +	 *
> +	 * Note that the scheduler expects to 'inherit' its own reference to
> +	 * this fence from the callback. It does not invoke an extra
> +	 * dma_fence_get() on it. Consequently, this callback must take a
> +	 * reference for the scheduler, and additional ones for the driver's
> +	 * respective needs.
>   	 */
>   	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>   



* Re: [PATCH 1/3] drm/sched: Document run_job() refcount hazard
  2025-01-09 13:37 ` [PATCH 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
  2025-01-09 13:44   ` Tvrtko Ursulin
  2025-01-09 14:01   ` Christian König
@ 2025-01-09 14:30   ` Danilo Krummrich
  2025-01-10  9:14     ` Philipp Stanner
  2 siblings, 1 reply; 13+ messages in thread
From: Danilo Krummrich @ 2025-01-09 14:30 UTC (permalink / raw)
  To: Philipp Stanner
  Cc: Luben Tuikov, Matthew Brost, Philipp Stanner, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, Christian König, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

On Thu, Jan 09, 2025 at 02:37:10PM +0100, Philipp Stanner wrote:
> From: Philipp Stanner <pstanner@redhat.com>
> 
> drm_sched_backend_ops.run_job() returns a dma_fence for the scheduler.
> That fence is signalled by the driver once the hardware completed the
> associated job. The scheduler does not increment the reference count on
> that fence, but implicitly expects to inherit this fence from run_job().
> 
> This is relatively subtle and prone to misunderstandings.
> 
> This implies that, to keep a reference for itself, a driver needs to
> call dma_fence_get() in addition to dma_fence_init() in that callback.
> 
> It's further complicated by the fact that the scheduler even decrements
> the refcount in drm_sched_run_job_work() since it created a new
> reference in drm_sched_fence_scheduled(). It does, however, still use
> its pointer to the fence after calling dma_fence_put() - which is safe
> because of the aforementioned new reference, but actually still violates
> the refcounting rules.
> 
> Improve the explanatory comment for that decrement.
> 
> Move the call to dma_fence_put() to the position behind the last usage
> of the fence.
> 
> Document the necessity to increment the reference count in
> drm_sched_backend_ops.run_job().
> 
> Signed-off-by: Philipp Stanner <pstanner@redhat.com>
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
>  include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
>  2 files changed, 22 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 57da84908752..5f46c01eb01e 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct work_struct *w)
>  	drm_sched_fence_scheduled(s_fence, fence);
>  
>  	if (!IS_ERR_OR_NULL(fence)) {
> -		/* Drop for original kref_init of the fence */
> -		dma_fence_put(fence);
> -
>  		r = dma_fence_add_callback(fence, &sched_job->cb,
>  					   drm_sched_job_done_cb);
>  		if (r == -ENOENT)
>  			drm_sched_job_done(sched_job, fence->error);
>  		else if (r)
>  			DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", r);
> +
> +		/*
> +		 * s_fence took a new reference to fence in the call to
> +		 * drm_sched_fence_scheduled() above. The reference passed by

I think mentioning that in this context is a bit misleading. The reason we can
put the fence here is that we stop using the local fence pointer we hold a
reference for (from run_job()).

This has nothing to do with the fact that drm_sched_fence_scheduled() took its
own reference when it stored a copy of this fence pointer in a separate data
structure.

With that fixed,

Reviewed-by: Danilo Krummrich <dakr@kernel.org>

> +		 * run_job() above is now not needed any longer. Drop it.
> +		 */
> +		dma_fence_put(fence);
>  	} else {
>  		drm_sched_job_done(sched_job, IS_ERR(fence) ?
>  				   PTR_ERR(fence) : 0);
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index 95e17504e46a..d5cd2a78f27c 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
>  					 struct drm_sched_entity *s_entity);
>  
>  	/**
> -         * @run_job: Called to execute the job once all of the dependencies
> -         * have been resolved.  This may be called multiple times, if
> -	 * timedout_job() has happened and drm_sched_job_recovery()
> -	 * decides to try it again.
> +	 * @run_job: Called to execute the job once all of the dependencies
> +	 * have been resolved. This may be called multiple times, if
> +	 * timedout_job() has happened and drm_sched_job_recovery() decides to
> +	 * try it again.
> +	 *
> +	 * @sched_job: the job to run
> +	 *
> +	 * Returns: dma_fence the driver must signal once the hardware has
> +	 *	completed the job ("hardware fence").
> +	 *
> +	 * Note that the scheduler expects to 'inherit' its own reference to
> +	 * this fence from the callback. It does not invoke an extra
> +	 * dma_fence_get() on it. Consequently, this callback must take a
> +	 * reference for the scheduler, and additional ones for the driver's
> +	 * respective needs.
>  	 */
>  	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>  
> -- 
> 2.47.1
> 


* Re: [PATCH 1/3] drm/sched: Document run_job() refcount hazard
  2025-01-09 14:30   ` Danilo Krummrich
@ 2025-01-10  9:14     ` Philipp Stanner
  0 siblings, 0 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-01-10  9:14 UTC (permalink / raw)
  To: Danilo Krummrich, Philipp Stanner
  Cc: Luben Tuikov, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, dri-devel, linux-kernel, linux-media,
	linaro-mm-sig

On Thu, 2025-01-09 at 15:30 +0100, Danilo Krummrich wrote:
> On Thu, Jan 09, 2025 at 02:37:10PM +0100, Philipp Stanner wrote:
> > From: Philipp Stanner <pstanner@redhat.com>
> > 
> > drm_sched_backend_ops.run_job() returns a dma_fence for the
> > scheduler.
> > That fence is signalled by the driver once the hardware completed
> > the
> > associated job. The scheduler does not increment the reference
> > count on
> > that fence, but implicitly expects to inherit this fence from
> > run_job().
> > 
> > This is relatively subtle and prone to misunderstandings.
> > 
> > This implies that, to keep a reference for itself, a driver needs
> > to
> > call dma_fence_get() in addition to dma_fence_init() in that
> > callback.
> > 
> > It's further complicated by the fact that the scheduler even
> > decrements
> > the refcount in drm_sched_run_job_work() since it created a new
> > reference in drm_sched_fence_scheduled(). It does, however, still
> > use
> > its pointer to the fence after calling dma_fence_put() - which is
> > safe
> > because of the aforementioned new reference, but actually still
> > violates
> > the refcounting rules.
> > 
> > Improve the explanatory comment for that decrement.
> > 
> > Move the call to dma_fence_put() to the position behind the last
> > usage
> > of the fence.
> > 
> > Document the necessity to increment the reference count in
> > drm_sched_backend_ops.run_job().
> > 
> > Signed-off-by: Philipp Stanner <pstanner@redhat.com>
> > ---
> >  drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
> >  include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
> >  2 files changed, 22 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> > b/drivers/gpu/drm/scheduler/sched_main.c
> > index 57da84908752..5f46c01eb01e 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct
> > work_struct *w)
> >  	drm_sched_fence_scheduled(s_fence, fence);
> >  
> >  	if (!IS_ERR_OR_NULL(fence)) {
> > -		/* Drop for original kref_init of the fence */
> > -		dma_fence_put(fence);
> > -
> >  		r = dma_fence_add_callback(fence, &sched_job->cb,
> >  					   drm_sched_job_done_cb);
> >  		if (r == -ENOENT)
> >  			drm_sched_job_done(sched_job, fence->error);
> >  		else if (r)
> >  			DRM_DEV_ERROR(sched->dev, "fence add
> > callback failed (%d)\n", r);
> > +
> > +		/*
> > +		 * s_fence took a new reference to fence in the
> > call to
> > +		 * drm_sched_fence_scheduled() above. The
> > reference passed by
> 
> I think mentioning that in this context is a bit misleading. The
> reason we can
> put the fence here, is because we stop using the local fence pointer
> we have a
> reference for (from run_job()).
> 
> This has nothing to do with the fact that drm_sched_fence_scheduled()
> took its
> own reference when it stored a copy of this fence pointer in a
> separate data
> structure.
> 
> With that fixed,

Then let's remove the comment completely I'd say.

> 
> Reviewed-by: Danilo Krummrich <dakr@kernel.org>

And I forgot your SB. Will add.


Danke,
P.

> 
> > +		 * run_job() above is now not needed any longer.
> > Drop it.
> > +		 */
> > +		dma_fence_put(fence);
> >  	} else {
> >  		drm_sched_job_done(sched_job, IS_ERR(fence) ?
> >  				   PTR_ERR(fence) : 0);
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index 95e17504e46a..d5cd2a78f27c 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
> >  					 struct drm_sched_entity
> > *s_entity);
> >  
> >  	/**
> > -         * @run_job: Called to execute the job once all of the
> > dependencies
> > -         * have been resolved.  This may be called multiple times,
> > if
> > -	 * timedout_job() has happened and
> > drm_sched_job_recovery()
> > -	 * decides to try it again.
> > +	 * @run_job: Called to execute the job once all of the
> > dependencies
> > +	 * have been resolved. This may be called multiple times,
> > if
> > +	 * timedout_job() has happened and
> > drm_sched_job_recovery() decides to
> > +	 * try it again.
> > +	 *
> > +	 * @sched_job: the job to run
> > +	 *
> > +	 * Returns: dma_fence the driver must signal once the
> > hardware has
> > +	 *	completed the job ("hardware fence").
> > +	 *
> > +	 * Note that the scheduler expects to 'inherit' its own
> > reference to
> > +	 * this fence from the callback. It does not invoke an
> > extra
> > +	 * dma_fence_get() on it. Consequently, this callback must
> > take a
> > +	 * reference for the scheduler, and additional ones for
> > the driver's
> > +	 * respective needs.
> >  	 */
> >  	struct dma_fence *(*run_job)(struct drm_sched_job
> > *sched_job);
> >  
> > -- 
> > 2.47.1
> > 



* Re: [PATCH 2/3] drm/sched: Adjust outdated docu for run_job()
  2025-01-09 13:37 ` [PATCH 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
@ 2025-01-13 10:20   ` Danilo Krummrich
  0 siblings, 0 replies; 13+ messages in thread
From: Danilo Krummrich @ 2025-01-13 10:20 UTC (permalink / raw)
  To: Philipp Stanner
  Cc: Luben Tuikov, Matthew Brost, Philipp Stanner, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, Christian König, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

On Thu, Jan 09, 2025 at 02:37:11PM +0100, Philipp Stanner wrote:
> The documentation for drm_sched_backend_ops.run_job() mentions a certain
> function called drm_sched_job_recovery(). This function does not exist.
> What's actually meant is drm_sched_resubmit_jobs(), which is by now also
> deprecated.
> 
> Remove the mention of the deprecated function.
> 
> Discourage the behavior of drm_sched_backend_ops.run_job() being called
> multiple times for the same job.
> 
> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
>  include/drm/gpu_scheduler.h | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index d5cd2a78f27c..c4e65f9f7f22 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -421,9 +421,12 @@ struct drm_sched_backend_ops {
>  
>  	/**
>  	 * @run_job: Called to execute the job once all of the dependencies
> -	 * have been resolved. This may be called multiple times, if
> -	 * timedout_job() has happened and drm_sched_job_recovery() decides to
> -	 * try it again.
> +	 * have been resolved.
> +	 *
> +	 * The deprecated drm_sched_resubmit_jobs() (called from
> +	 * drm_sched_backend_ops.timedout_job()) can invoke this again with the
> +	 * same parameters. Doing this is strongly discouraged because it

Maybe "invoke this again for the same job"?

> +	 * violates dma_fence rules.

Does it? AFAIU it puts certain expectations on the driver before it can
call this function, which likely leads the driver to violate dma_fence rules,
right?

Maybe we should also list the exact rules that are (likely to be) violated to
allow drivers to fix it at their end more easily.

>  	 *
>  	 * @sched_job: the job to run
>  	 *
> -- 
> 2.47.1
> 


* Re: [PATCH 3/3] drm/sched: Update timedout_job()'s documentation
  2025-01-09 13:37 ` [PATCH 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
@ 2025-01-13 11:46   ` Danilo Krummrich
  2025-01-20 10:07     ` Philipp Stanner
  0 siblings, 1 reply; 13+ messages in thread
From: Danilo Krummrich @ 2025-01-13 11:46 UTC (permalink / raw)
  To: Philipp Stanner
  Cc: Luben Tuikov, Matthew Brost, Philipp Stanner, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Sumit Semwal, Christian König, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

On Thu, Jan 09, 2025 at 02:37:12PM +0100, Philipp Stanner wrote:
> drm_sched_backend_ops.timedout_job()'s documentation is outdated. It
> mentions the deprecated function drm_sched_resubmit_jobs(). Furthermore,
> it does not point out the important distinction between hardware and
> firmware schedulers.
> 
> Since firmware schedulers typically only use one entity per scheduler,
> timeout handling is significantly simpler, because the entity the
> faulted job came from can just be killed without affecting innocent
> processes.
> 
> Update the documentation with that distinction and other details.
> 
> Reformat the docstring to conform to a unified style with the other
> handlers.
> 
> Signed-off-by: Philipp Stanner <phasta@kernel.org>
> ---
>  include/drm/gpu_scheduler.h | 83 +++++++++++++++++++++++--------------
>  1 file changed, 52 insertions(+), 31 deletions(-)
> 
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index c4e65f9f7f22..380b8840c591 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -445,43 +445,64 @@ struct drm_sched_backend_ops {
>  	 * @timedout_job: Called when a job has taken too long to execute,
>  	 * to trigger GPU recovery.
>  	 *
> -	 * This method is called in a workqueue context.
> +	 * @sched_job: The job that has timed out
>  	 *
> -	 * Drivers typically issue a reset to recover from GPU hangs, and this
> -	 * procedure usually follows the following workflow:
> +	 * Returns:
> +	 * - DRM_GPU_SCHED_STAT_NOMINAL, on success, i.e., if the underlying
> +	 *   driver has started or completed recovery.
> +	 * - DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
> +	 *   available, i.e., has been unplugged.

Maybe it'd be better to document this at the enum level and add a link.

>  	 *
> -	 * 1. Stop the scheduler using drm_sched_stop(). This will park the
> -	 *    scheduler thread and cancel the timeout work, guaranteeing that
> -	 *    nothing is queued while we reset the hardware queue
> -	 * 2. Try to gracefully stop non-faulty jobs (optional)
> -	 * 3. Issue a GPU reset (driver-specific)
> -	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> +	 * Drivers typically issue a reset to recover from GPU hangs.
> +	 * This procedure looks very different depending on whether a firmware
> +	 * or a hardware scheduler is being used.
> +	 *
> +	 * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one scheduler, and
> +	 * each scheduler (typically) has one entity. Hence, you typically

I think you can remove the first "typically" here. I don't think that for a
firmware scheduler we ever have something other than a 1:1 relation; besides,
having something other than a 1:1 relation (whatever that would be) would
likely contradict the statement above.

> +	 * follow those steps:
> +	 *
> +	 * 1. Stop the scheduler using drm_sched_stop(). This will pause the
> +	 *    scheduler workqueues and cancel the timeout work, guaranteeing
> +	 *    that nothing is queued while we reset the hardware queue.

Rather reset / remove the software queue / ring.

(Besides: I think we should also define a distinct terminology; sometimes "queue"
refers to the ring buffer, sometimes it refers to the entity, etc. At least we
should be consistent within this commit, and then fix the rest.)

> +	 * 2. Try to gracefully stop non-faulty jobs (optional).
> +	 * TODO: RFC ^ Folks, should we remove this step? What does it even mean
> +	 * precisely to "stop" those jobs? Is that even helpful to userspace in
> +	 * any way?

I think this means to prevent unrelated queues / jobs from being impacted by the
subsequent GPU reset.

So, this is likely not applicable here, see below.

> +	 * 3. Issue a GPU reset (driver-specific).

I'm not entirely sure it really applies to all GPUs that feature a FW scheduler,
but I'd expect that the FW takes care of resetting the corresponding HW
(including preventing impact on other FW rings) if the faulty FW ring is removed
by the driver.

> +	 * 4. Kill the entity the faulted job stems from, and the associated
> +	 *    scheduler.
>  	 * 5. Restart the scheduler using drm_sched_start(). At that point, new
> -	 *    jobs can be queued, and the scheduler thread is unblocked
> +	 *    jobs can be queued, and the scheduler workqueues awake again.

How can we start the scheduler again after we've killed it? I think the most
likely scenario is that the FW ring (including the scheduler structures) is
removed entirely and simply re-created. So, I think we can probably remove 5.

> +	 *
> +	 * For a HARDWARE SCHEDULER, each ring also has one scheduler, but each
> +	 * scheduler typically has many attached entities. This implies that you

Maybe better "associated". For the second sentence, I'd rephrase it, e.g. "This
implies that all entities associated with the affected scheduler can't be torn
down, because [...]".

> +	 * cannot tear down all entities associated with the affected scheduler,
> +	 * because this would effectively also kill innocent userspace processes
> +	 * which did not submit faulty jobs (for example).
> +	 *
> +	 * Consequently, the procedure to recover with a hardware scheduler
> +	 * should look like this:
> +	 *
> +	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop().
> +	 * 2. Figure out to which entity the faulted job belongs.

"belongs to"

> +	 * 3. Try to gracefully stop non-faulty jobs (optional).
> +	 * TODO: RFC ^ Folks, should we remove this step? What does it even mean
> +	 * precisely to "stop" those jobs? Is that even helpful to userspace in
> +	 * any way?

See above.

> +	 * 4. Kill that entity.
> +	 * 5. Issue a GPU reset on all faulty rings (driver-specific).
> +	 * 6. Re-submit jobs on all schedulers impacted by re-submitting them to
> +	 *    the entities which are still alive.
> +	 * 7. Restart all schedulers that were stopped in step #1 using
> +	 *    drm_sched_start().
>  	 *
>  	 * Note that some GPUs have distinct hardware queues but need to reset
>  	 * the GPU globally, which requires extra synchronization between the
> -	 * timeout handler of the different &drm_gpu_scheduler. One way to
> -	 * achieve this synchronization is to create an ordered workqueue
> -	 * (using alloc_ordered_workqueue()) at the driver level, and pass this
> -	 * queue to drm_sched_init(), to guarantee that timeout handlers are
> -	 * executed sequentially. The above workflow needs to be slightly
> -	 * adjusted in that case:
> -	 *
> -	 * 1. Stop all schedulers impacted by the reset using drm_sched_stop()
> -	 * 2. Try to gracefully stop non-faulty jobs on all queues impacted by
> -	 *    the reset (optional)
> -	 * 3. Issue a GPU reset on all faulty queues (driver-specific)
> -	 * 4. Re-submit jobs on all schedulers impacted by the reset using
> -	 *    drm_sched_resubmit_jobs()
> -	 * 5. Restart all schedulers that were stopped in step #1 using
> -	 *    drm_sched_start()
> -	 *
> -	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> -	 * and the underlying driver has started or completed recovery.
> -	 *
> -	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
> -	 * available, i.e. has been unplugged.
> +	 * timeout handlers of different schedulers. One way to achieve this
> +	 * synchronization is to create an ordered workqueue (using
> +	 * alloc_ordered_workqueue()) at the driver level, and pass this queue
> +	 * as drm_sched_init()'s @timeout_wq parameter. This will guarantee
> +	 * that timeout handlers are executed sequentially.
>  	 */
>  	enum drm_gpu_sched_stat (*timedout_job)(struct drm_sched_job *sched_job);
>  
> -- 
> 2.47.1
> 


* Re: [PATCH 1/3] drm/sched: Document run_job() refcount hazard
  2025-01-09 14:01   ` Christian König
@ 2025-01-13 12:07     ` Philipp Stanner
  0 siblings, 0 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-01-13 12:07 UTC (permalink / raw)
  To: Christian König, Philipp Stanner, Luben Tuikov,
	Matthew Brost, Danilo Krummrich, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal
  Cc: dri-devel, linux-kernel, linux-media, linaro-mm-sig

On Thu, 2025-01-09 at 15:01 +0100, Christian König wrote:
> Am 09.01.25 um 14:37 schrieb Philipp Stanner:
> > From: Philipp Stanner <pstanner@redhat.com>
> > 
> > drm_sched_backend_ops.run_job() returns a dma_fence for the
> > scheduler.
> > That fence is signalled by the driver once the hardware completed
> > the
> > associated job. The scheduler does not increment the reference
> > count on
> > that fence, but implicitly expects to inherit this fence from
> > run_job().
> > 
> > This is relatively subtle and prone to misunderstandings.
> > 
> > This implies that, to keep a reference for itself, a driver needs
> > to
> > call dma_fence_get() in addition to dma_fence_init() in that
> > callback.
> > 
> > It's further complicated by the fact that the scheduler even
> > decrements
> > the refcount in drm_sched_run_job_work() since it created a new
> > reference in drm_sched_fence_scheduled(). It does, however, still
> > use
> > its pointer to the fence after calling dma_fence_put() - which is
> > safe
> > because of the aforementioned new reference, but actually still
> > violates
> > the refcounting rules.
> > 
> > Improve the explanatory comment for that decrement.
> > 
> > Move the call to dma_fence_put() to the position behind the last
> > usage
> > of the fence.
> > 
> > Document the necessity to increment the reference count in
> > drm_sched_backend_ops.run_job().
> > 
> > Signed-off-by: Philipp Stanner <pstanner@redhat.com>
> > ---
> >   drivers/gpu/drm/scheduler/sched_main.c | 10 +++++++---
> >   include/drm/gpu_scheduler.h            | 19 +++++++++++++++----
> >   2 files changed, 22 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> > b/drivers/gpu/drm/scheduler/sched_main.c
> > index 57da84908752..5f46c01eb01e 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -1218,15 +1218,19 @@ static void drm_sched_run_job_work(struct
> > work_struct *w)
> >   	drm_sched_fence_scheduled(s_fence, fence);
> >   
> >   	if (!IS_ERR_OR_NULL(fence)) {
> > -		/* Drop for original kref_init of the fence */
> > -		dma_fence_put(fence);
> > -
> >   		r = dma_fence_add_callback(fence, &sched_job->cb,
> >   					   drm_sched_job_done_cb);
> >   		if (r == -ENOENT)
> >   			drm_sched_job_done(sched_job, fence->error);
> >   		else if (r)
> >   			DRM_DEV_ERROR(sched->dev, "fence add
> > callback failed (%d)\n", r);
> > +
> > +		/*
> > +		 * s_fence took a new reference to fence in the
> > call to
> > +		 * drm_sched_fence_scheduled() above. The
> > reference passed by
> > +		 * run_job() above is now not needed any longer.
> > Drop it.
> > +		 */
> > +		dma_fence_put(fence);
> >   	} else {
> >   		drm_sched_job_done(sched_job, IS_ERR(fence) ?
> >   				   PTR_ERR(fence) : 0);
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index 95e17504e46a..d5cd2a78f27c 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -420,10 +420,21 @@ struct drm_sched_backend_ops {
> >   					 struct drm_sched_entity
> > *s_entity);
> >   
> >   	/**
> > -         * @run_job: Called to execute the job once all of the
> > dependencies
> > -         * have been resolved.  This may be called multiple times,
> > if
> > -	 * timedout_job() has happened and
> > drm_sched_job_recovery()
> > -	 * decides to try it again.
> > +	 * @run_job: Called to execute the job once all of the
> > dependencies
> > +	 * have been resolved. This may be called multiple times,
> > if
> > +	 * timedout_job() has happened and
> > drm_sched_job_recovery() decides to
> > +	 * try it again.
> 
> I just came to realize that this hack with calling run_job multiple
> times won't work any more with this patch here.
> 
> The background is that you can't allocate memory for a newly returned
> fence, and as far as I know no driver pre-allocates multiple HW fences
> for a job.
> 
> So at least amdgpu used to re-use the same HW fence as return value
> over and over again, just re-initializing the reference count. I
> removed that hack from amdgpu, but just FYI it could be that other
> drivers did the same.
> 
> Apart from that concern I think that this patch is really the right
> thing, and that driver hacks relying on the order of dropping
> references are fundamentally broken approaches.

I don't see how a hack couldn't work anymore with this patch. All it
does is add comments, and it moves the putting of the dma_fence to
where it belongs. But we're in run_job_work() here, which executes
sequentially.
Drivers reusing fences or modifying the refcount might be bad and
hacky, but as long as we're not racing here it doesn't matter whether
the scheduler decrements the refcount a few nanoseconds later than
before.

Anyways, even if I'm wrong, it seems we agree that provoking such
explosions would be worth it.

P.

> 
> So feel free to add Reviewed-by: Christian König
> <christian.koenig@amd.com>.
> 
> Regards,
> Christian.
> 
> > +	 *
> > +	 * @sched_job: the job to run
> > +	 *
> > +	 * Returns: dma_fence the driver must signal once the
> > hardware has
> > +	 *	completed the job ("hardware fence").
> > +	 *
> > +	 * Note that the scheduler expects to 'inherit' its own
> > reference to
> > +	 * this fence from the callback. It does not invoke an
> > extra
> > +	 * dma_fence_get() on it. Consequently, this callback must
> > take a
> > +	 * reference for the scheduler, and additional ones for
> > the driver's
> > +	 * respective needs.
> >   	 */
> >   	struct dma_fence *(*run_job)(struct drm_sched_job
> > *sched_job);
> >   
> 



* Re: [PATCH 3/3] drm/sched: Update timedout_job()'s documentation
  2025-01-13 11:46   ` Danilo Krummrich
@ 2025-01-20 10:07     ` Philipp Stanner
  0 siblings, 0 replies; 13+ messages in thread
From: Philipp Stanner @ 2025-01-20 10:07 UTC (permalink / raw)
  To: Danilo Krummrich, Philipp Stanner
  Cc: Luben Tuikov, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Sumit Semwal,
	Christian König, dri-devel, linux-kernel, linux-media,
	linaro-mm-sig

On Mon, 2025-01-13 at 12:46 +0100, Danilo Krummrich wrote:
> On Thu, Jan 09, 2025 at 02:37:12PM +0100, Philipp Stanner wrote:
> > drm_sched_backend_ops.timedout_job()'s documentation is outdated.
> > It
> > mentions the deprecated function drm_sched_resubmit_job().
> > Furthermore,
> > it does not point out the important distinction between hardware
> > and
> > firmware schedulers.
> > 
> > Since firmware schedulers typically only use one entity per
> > scheduler,
> > timeout handling is significantly more simple because the entity
> > the
> > faulted job came from can just be killed without affecting innocent
> > processes.
> > 
> > Update the documentation with that distinction and other details.
> > 
> > Reformat the docstring to conform to a unified style with the other
> > handlers.
> > 
> > Signed-off-by: Philipp Stanner <phasta@kernel.org>
> > ---
> >  include/drm/gpu_scheduler.h | 83 +++++++++++++++++++++++----------
> > ----
> >  1 file changed, 52 insertions(+), 31 deletions(-)
> > 
> > diff --git a/include/drm/gpu_scheduler.h
> > b/include/drm/gpu_scheduler.h
> > index c4e65f9f7f22..380b8840c591 100644
> > --- a/include/drm/gpu_scheduler.h
> > +++ b/include/drm/gpu_scheduler.h
> > @@ -445,43 +445,64 @@ struct drm_sched_backend_ops {
> >  	 * @timedout_job: Called when a job has taken too long to
> > execute,
> >  	 * to trigger GPU recovery.
> >  	 *
> > -	 * This method is called in a workqueue context.
> > +	 * @sched_job: The job that has timed out
> >  	 *
> > -	 * Drivers typically issue a reset to recover from GPU
> > hangs, and this
> > -	 * procedure usually follows the following workflow:
> > +	 * Returns:
> > +	 * - DRM_GPU_SCHED_STAT_NOMINAL, on success, i.e., if the
> > underlying
> > +	 *   driver has started or completed recovery.
> > +	 * - DRM_GPU_SCHED_STAT_ENODEV, if the device is no longer
> > +	 *   available, i.e., has been unplugged.
> 
> Maybe it'd be better to document this at the enum level and add a
> link.
> 
> >  	 *
> > -	 * 1. Stop the scheduler using drm_sched_stop(). This will
> > park the
> > -	 *    scheduler thread and cancel the timeout work,
> > guaranteeing that
> > -	 *    nothing is queued while we reset the hardware queue
> > -	 * 2. Try to gracefully stop non-faulty jobs (optional)
> > -	 * 3. Issue a GPU reset (driver-specific)
> > -	 * 4. Re-submit jobs using drm_sched_resubmit_jobs()
> > +	 * Drivers typically issue a reset to recover from GPU
> > hangs.
> > +	 * This procedure looks very different depending on
> > whether a firmware
> > +	 * or a hardware scheduler is being used.
> > +	 *
> > +	 * For a FIRMWARE SCHEDULER, each (pseudo-)ring has one
> > scheduler, and
> > +	 * each scheduler (typically) has one entity. Hence, you
> > typically
> 
> I think you can remove the first "typically" here. I don't think that
> for a
> firmware scheduler we ever have something else than a 1:1 relation,
> besides that
> having something else than a 1:1 relation (whatever that would be)
> would likely
> be a contradiction to the statement above.
> 
> > +	 * follow those steps:
> > +	 *
> > +	 * 1. Stop the scheduler using drm_sched_stop(). This will
> > pause the
> > +	 *    scheduler workqueues and cancel the timeout work,
> > guaranteeing
> > +	 *    that nothing is queued while we reset the hardware
> > queue.
> 
> Rather reset / remove the software queue / ring.
> 
> (Besides: I think we should also define a distinct terminology,
> sometimes "queue"
> refers to the ring buffer, sometimes it refers to the entity, etc. At
> least we
> should be consistent within this commit, and then fix the rest.)


Very good idea!

How about we, from now on, always just call it "ring" or "hardware
ring"?

I think queue is very generic and, as you point out, often used for the
entities and stuff like that.

> 
> > +	 * 2. Try to gracefully stop non-faulty jobs (optional).
> > +	 * TODO: RFC ^ Folks, should we remove this step? What
> > does it even mean
> > +	 * precisely to "stop" those jobs? Is that even helpful to
> > userspace in
> > +	 * any way?
> 
> I think this means to prevent unrelated queues / jobs from being
> impacted by the
> subsequent GPU reset.
> 
> So, this is likely not applicable here, see below.
> 
> > +	 * 3. Issue a GPU reset (driver-specific).
> 
> I'm not entirely sure it really applies to all GPUs that feature a FW
> scheduler,
> but I'd expect that the FW takes care of resetting the corresponding
> HW
> (including preventing impact on other FW rings) if the faulty FW ring
> is removed
> by the driver.

@Christian: Agree? AMD is still purely a HW scheduler afaik.

> 
> > +	 * 4. Kill the entity the faulted job stems from, and the
> > associated
> > +	 *    scheduler.
> >  	 * 5. Restart the scheduler using drm_sched_start(). At
> > that point, new
> > -	 *    jobs can be queued, and the scheduler thread is
> > unblocked
> > +	 *    jobs can be queued, and the scheduler workqueues
> > awake again.
> 
> How can we start the scheduler again after we've killed it? I think
> the most
> likely scenario is that the FW ring (including the scheduler
> structures) is
> removed entirely and simply re-created. So, I think we can probably
> remove 5.

ACK

> 
> > +	 *
> > +	 * For a HARDWARE SCHEDULER, each ring also has one
> > scheduler, but each
> > +	 * scheduler typically has many attached entities. This
> > implies that you
> 
> Maybe better "associated". For the second sentence, I'd rephrase it,
> e.g. "This
> implies that all entities associated with the affected scheduler
> can't be torn
> down, because [...]".
> 
> > +	 * cannot tear down all entities associated with the
> > affected scheduler,
> > +	 * because this would effectively also kill innocent
> > userspace processes
> > +	 * which did not submit faulty jobs (for example).
> > +	 *
> > +	 * Consequently, the procedure to recover with a hardware
> > scheduler
> > +	 * should look like this:
> > +	 *
> > +	 * 1. Stop all schedulers impacted by the reset using
> > drm_sched_stop().
> > +	 * 2. Figure out to which entity the faulted job belongs.
> 
> "belongs to"
> 
> > +	 * 3. Try to gracefully stop non-faulty jobs (optional).
> > +	 * TODO: RFC ^ Folks, should we remove this step? What
> > does it even mean
> > +	 * precisely to "stop" those jobs? Is that even helpful to
> > userspace in
> > +	 * any way?
> 
> See above.

Agree with all. Will fix those in v2.

Danke,
P.

> 
> > +	 * 4. Kill that entity.
> > +	 * 5. Issue a GPU reset on all faulty rings (driver-
> > specific).
> > +	 * 6. Re-submit jobs on all schedulers impacted by re-
> > submitting them to
> > +	 *    the entities which are still alive.
> > +	 * 7. Restart all schedulers that were stopped in step #1
> > using
> > +	 *    drm_sched_start().
> >  	 *
> >  	 * Note that some GPUs have distinct hardware queues but
> > need to reset
> >  	 * the GPU globally, which requires extra synchronization
> > between the
> > -	 * timeout handler of the different &drm_gpu_scheduler.
> > One way to
> > -	 * achieve this synchronization is to create an ordered
> > workqueue
> > -	 * (using alloc_ordered_workqueue()) at the driver level,
> > and pass this
> > -	 * queue to drm_sched_init(), to guarantee that timeout
> > handlers are
> > -	 * executed sequentially. The above workflow needs to be
> > slightly
> > -	 * adjusted in that case:
> > -	 *
> > -	 * 1. Stop all schedulers impacted by the reset using
> > drm_sched_stop()
> > -	 * 2. Try to gracefully stop non-faulty jobs on all queues
> > impacted by
> > -	 *    the reset (optional)
> > -	 * 3. Issue a GPU reset on all faulty queues (driver-
> > specific)
> > -	 * 4. Re-submit jobs on all schedulers impacted by the
> > reset using
> > -	 *    drm_sched_resubmit_jobs()
> > -	 * 5. Restart all schedulers that were stopped in step #1
> > using
> > -	 *    drm_sched_start()
> > -	 *
> > -	 * Return DRM_GPU_SCHED_STAT_NOMINAL, when all is normal,
> > -	 * and the underlying driver has started or completed
> > recovery.
> > -	 *
> > -	 * Return DRM_GPU_SCHED_STAT_ENODEV, if the device is no
> > longer
> > -	 * available, i.e. has been unplugged.
> > +	 * timeout handlers of different schedulers. One way to
> > achieve this
> > +	 * synchronization is to create an ordered workqueue
> > (using
> > +	 * alloc_ordered_workqueue()) at the driver level, and
> > pass this queue
> > +	 * as drm_sched_init()'s @timeout_wq parameter. This will
> > guarantee
> > +	 * that timeout handlers are executed sequentially.
> >  	 */
> >  	enum drm_gpu_sched_stat (*timedout_job)(struct
> > drm_sched_job *sched_job);
> >  
> > -- 
> > 2.47.1
> > 
> 

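For the archives, here is roughly how the ordered-workqueue setup from
the last hunk reads in code. Sketch only — drm_sched_init()'s parameter
list has changed between kernel versions, so the argument order below
(current at time of writing) needs checking against your tree:

```
/* Driver init: one ordered workqueue shared by all schedulers of the
 * device, so that their timeout handlers never run concurrently and a
 * global GPU reset in timedout_job() needs no extra locking against
 * other timeout handlers.
 */
struct workqueue_struct *timeout_wq;
int ret;

timeout_wq = alloc_ordered_workqueue("my-gpu-tdr", 0);
if (!timeout_wq)
	return -ENOMEM;

ret = drm_sched_init(&my_ring->sched, &my_sched_ops,
		     NULL,			/* default submit_wq */
		     DRM_SCHED_PRIORITY_COUNT,	/* num_rqs */
		     credit_limit,
		     0,				/* hang_limit */
		     msecs_to_jiffies(500),	/* timeout */
		     timeout_wq,		/* serializes timedout_job() */
		     NULL,			/* score */
		     "my-ring", dev);
```

The same timeout_wq is passed to every drm_sched_init() call of the
device, which is what gives the sequential-execution guarantee the docu
text describes.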


Thread overview: 13+ messages
2025-01-09 13:37 [PATCH 0/3] drm/sched: Documentation and refcount improvements Philipp Stanner
2025-01-09 13:37 ` [PATCH 1/3] drm/sched: Document run_job() refcount hazard Philipp Stanner
2025-01-09 13:44   ` Tvrtko Ursulin
2025-01-09 13:47     ` Philipp Stanner
2025-01-09 14:01   ` Christian König
2025-01-13 12:07     ` Philipp Stanner
2025-01-09 14:30   ` Danilo Krummrich
2025-01-10  9:14     ` Philipp Stanner
2025-01-09 13:37 ` [PATCH 2/3] drm/sched: Adjust outdated docu for run_job() Philipp Stanner
2025-01-13 10:20   ` Danilo Krummrich
2025-01-09 13:37 ` [PATCH 3/3] drm/sched: Update timedout_job()'s documentation Philipp Stanner
2025-01-13 11:46   ` Danilo Krummrich
2025-01-20 10:07     ` Philipp Stanner
