* [PATCH V8 00/27] Reset improvements
From: Alex Deucher @ 2025-06-13 21:47 UTC
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

This set improves per-queue reset support for a number of IPs.
When we reset a queue, its contents are lost, so we need
to re-emit the unprocessed state from subsequent submissions.
To make sure we actually restore that unprocessed state, we
need to enable legacy enforce isolation so that we can safely
re-emit it.  If we don't, multiple jobs can run in parallel
and we may not end up resetting the correct one.  This is
similar to how Windows handles queues.  It also gives us
correct guilty tracking for GC.

Tested on GC 10 and 11 chips by running a game and then
triggering hang tests.  The game pauses when the hang
happens, then continues after the queue reset.

I tried the same approach on GC 8 and 9, but it was not as
reliable as soft recovery.  As such, I've dropped the KGQ
reset code for pre-GC10.

The same approach is extended to SDMA and VCN.  They don't
need enforce isolation because those engines are
single-threaded, so they always operate serially.

Rework re-emit to signal the seq number of the bad job and
verify it to confirm that the reset worked, then re-emit the
rest of the non-guilty state.  This way we are not waiting on
the rest of the state to complete, and if the subsequent state
also contains a bad job, we'll end up in another queue reset
rather than an adapter reset.
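
Roughly, each per-queue reset now follows the flow sketched
below.  This is only an illustration: the backup/re-emit helper
names are placeholders, not the actual functions from this
series.

    /* Sketch only; backup/re-emit helper names are hypothetical. */
    static int queue_reset(struct amdgpu_ring *ring,
                           struct amdgpu_fence *guilty_fence)
    {
            int r;

            /* keep scheduler workers off the ring buffer */
            drm_sched_wqueue_stop(&ring->sched);
            /* save unprocessed, non-guilty jobs (hypothetical helper) */
            backup_nonguilty_state(ring, guilty_fence);
            r = reset_hw_queue(ring);       /* IP-specific reset */
            if (r)
                    return r;               /* caller falls back to adapter reset */
            /* mark the jobs that were on the ring as complete */
            amdgpu_fence_driver_force_completion(ring);
            /* re-emit the saved state (hypothetical helper) */
            reemit_nonguilty_state(ring);
            drm_sched_wqueue_start(&ring->sched);
            return 0;
    }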

Git tree:
https://gitlab.freedesktop.org/agd5f/linux/-/commits/kq_resets?ref_type=heads

v4: Drop explicit padding patches
    Drop new timeout macro
    Rework re-emit sequence
v5: Add a helper for reemit
    Convert VCN, JPEG, SDMA to use new helpers
v6: Update SDMA 4.4.2 to use new helpers
    Move ptr tracking to amdgpu_fence
    Skip all jobs from the bad context on the ring
v7: Rework the backup logic
    Move and clean up the guilty logic for engine resets
    Integrate suggestions from Christian
    Add JPEG 4.0.5 support
v8: Add non-guilty ring backup handling
    Clean up new function signatures
    Reorder some bug fixes to the start of the series

Alex Deucher (26):
  drm/amdgpu: switch job hw_fence to amdgpu_fence
  drm/amdgpu/sdma: handle paging queues in amdgpu_sdma_reset_engine()
  drm/amdgpu: enable legacy enforce isolation by default
  drm/amdgpu: update ring reset function signature
  drm/amdgpu: move force completion into ring resets
  drm/amdgpu: move guilty handling into ring resets
  drm/amdgpu: track ring state associated with a job
  drm/amdgpu/gfx9: re-emit unprocessed state on kcq reset
  drm/amdgpu/gfx9.4.3: re-emit unprocessed state on kcq reset
  drm/amdgpu/gfx10: re-emit unprocessed state on ring reset
  drm/amdgpu/gfx11: re-emit unprocessed state on ring reset
  drm/amdgpu/gfx12: re-emit unprocessed state on ring reset
  drm/amdgpu/sdma6: re-emit unprocessed state on ring reset
  drm/amdgpu/sdma7: re-emit unprocessed state on ring reset
  drm/amdgpu/jpeg2: re-emit unprocessed state on ring reset
  drm/amdgpu/jpeg2.5: re-emit unprocessed state on ring reset
  drm/amdgpu/jpeg3: re-emit unprocessed state on ring reset
  drm/amdgpu/jpeg4: re-emit unprocessed state on ring reset
  drm/amdgpu/jpeg4.0.3: re-emit unprocessed state on ring reset
  drm/amdgpu/jpeg4.0.5: add queue reset
  drm/amdgpu/jpeg5: add queue reset
  drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset
  drm/amdgpu/vcn4: re-emit unprocessed state on ring reset
  drm/amdgpu/vcn4.0.3: re-emit unprocessed state on ring reset
  drm/amdgpu/vcn4.0.5: re-emit unprocessed state on ring reset
  drm/amdgpu/vcn5: re-emit unprocessed state on ring reset

Christian König (1):
  drm/amdgpu: rework queue reset scheduler interaction

 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c   | 126 ++++++++++++++++----
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c      |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  59 ++++-----
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c    |  27 +++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h    |  36 +++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c    |  10 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c      |  64 +++++-----
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c      |  57 +++++----
 drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c      |  57 +++++----
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c       |  21 +++-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c     |  23 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c      |  21 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c      |  21 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c      |  21 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c      |  21 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c    |  21 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c    |  25 ++++
 drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c    |  28 +++++
 drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c    |  21 +++-
 drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c    |  60 ++++++----
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c      |  32 ++++-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c      |  34 +++++-
 drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c      |  36 +++++-
 drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c      |  37 +++++-
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c       |  19 ++-
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c     |  20 +++-
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c     |  20 +++-
 drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c     |  20 +++-
 31 files changed, 748 insertions(+), 207 deletions(-)

-- 
2.49.0



* [PATCH 01/27] drm/amdgpu: switch job hw_fence to amdgpu_fence
From: Alex Deucher @ 2025-06-13 21:47 UTC
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Use the amdgpu fence container so we can store additional
data in the fence.  This also fixes the start_time handling
for MCBP, since we were casting the fence to an amdgpu_fence
when it wasn't one.
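
To illustrate the bug (a sketch, not code from the patch): with a
bare dma_fence embedded in the job, casting it to amdgpu_fence read
past the object, while the embedded container makes the conversion
well defined:

    struct amdgpu_fence {
            struct dma_fence base;
            struct amdgpu_ring *ring;
            ktime_t start_timestamp;
    };

    /* old: invalid, job->hw_fence was only a struct dma_fence */
    am_fence = (struct amdgpu_fence *)&job->hw_fence;

    /* new: job->hw_fence is a struct amdgpu_fence, so this is valid */
    job = container_of(f, struct amdgpu_job, hw_fence.base);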

Fixes: 3f4c175d62d8 ("drm/amdgpu: MCBP based on DRM scheduler (v9)")
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c   | 30 +++++----------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     | 12 ++++-----
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h    | 16 +++++++++++
 6 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 8e626f50b362e..f81608330a3d0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1902,7 +1902,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
 			continue;
 		}
 		job = to_amdgpu_job(s_job);
-		if (preempted && (&job->hw_fence) == fence)
+		if (preempted && (&job->hw_fence.base) == fence)
 			/* mark the job as preempted */
 			job->preemption_status |= AMDGPU_IB_PREEMPTED;
 	}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 00174437b01ec..4893f834f4fd4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -6397,7 +6397,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 	 *
 	 * job->base holds a reference to parent fence
 	 */
-	if (job && dma_fence_is_signaled(&job->hw_fence)) {
+	if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
 		job_signaled = true;
 		dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
 		goto skip_hw_reset;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 2f24a6aa13bf6..569e0e5373927 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -41,22 +41,6 @@
 #include "amdgpu_trace.h"
 #include "amdgpu_reset.h"
 
-/*
- * Fences mark an event in the GPUs pipeline and are used
- * for GPU/CPU synchronization.  When the fence is written,
- * it is expected that all buffers associated with that fence
- * are no longer in use by the associated ring on the GPU and
- * that the relevant GPU caches have been flushed.
- */
-
-struct amdgpu_fence {
-	struct dma_fence base;
-
-	/* RB, DMA, etc. */
-	struct amdgpu_ring		*ring;
-	ktime_t				start_timestamp;
-};
-
 static struct kmem_cache *amdgpu_fence_slab;
 
 int amdgpu_fence_slab_init(void)
@@ -151,12 +135,12 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
 		am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
 		if (am_fence == NULL)
 			return -ENOMEM;
-		fence = &am_fence->base;
-		am_fence->ring = ring;
 	} else {
 		/* take use of job-embedded fence */
-		fence = &job->hw_fence;
+		am_fence = &job->hw_fence;
 	}
+	fence = &am_fence->base;
+	am_fence->ring = ring;
 
 	seq = ++ring->fence_drv.sync_seq;
 	if (job && job->job_run_counter) {
@@ -718,7 +702,7 @@ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
 			 * it right here or we won't be able to track them in fence_drv
 			 * and they will remain unsignaled during sa_bo free.
 			 */
-			job = container_of(old, struct amdgpu_job, hw_fence);
+			job = container_of(old, struct amdgpu_job, hw_fence.base);
 			if (!job->base.s_fence && !dma_fence_is_signaled(old))
 				dma_fence_signal(old);
 			RCU_INIT_POINTER(*ptr, NULL);
@@ -780,7 +764,7 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
 
 static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
 {
-	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
+	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
 
 	return (const char *)to_amdgpu_ring(job->base.sched)->name;
 }
@@ -810,7 +794,7 @@ static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
  */
 static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
 {
-	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
+	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
 
 	if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
 		amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
@@ -845,7 +829,7 @@ static void amdgpu_job_fence_free(struct rcu_head *rcu)
 	struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
 
 	/* free job if fence has a parent job */
-	kfree(container_of(f, struct amdgpu_job, hw_fence));
+	kfree(container_of(f, struct amdgpu_job, hw_fence.base));
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index acb21fc8b3ce5..ddb9d3269357c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -272,8 +272,8 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
 	/* Check if any fences where initialized */
 	if (job->base.s_fence && job->base.s_fence->finished.ops)
 		f = &job->base.s_fence->finished;
-	else if (job->hw_fence.ops)
-		f = &job->hw_fence;
+	else if (job->hw_fence.base.ops)
+		f = &job->hw_fence.base;
 	else
 		f = NULL;
 
@@ -290,10 +290,10 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
 	amdgpu_sync_free(&job->explicit_sync);
 
 	/* only put the hw fence if has embedded fence */
-	if (!job->hw_fence.ops)
+	if (!job->hw_fence.base.ops)
 		kfree(job);
 	else
-		dma_fence_put(&job->hw_fence);
+		dma_fence_put(&job->hw_fence.base);
 }
 
 void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
@@ -322,10 +322,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
 	if (job->gang_submit != &job->base.s_fence->scheduled)
 		dma_fence_put(job->gang_submit);
 
-	if (!job->hw_fence.ops)
+	if (!job->hw_fence.base.ops)
 		kfree(job);
 	else
-		dma_fence_put(&job->hw_fence);
+		dma_fence_put(&job->hw_fence.base);
 }
 
 struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index f2c049129661f..931fed8892cc1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -48,7 +48,7 @@ struct amdgpu_job {
 	struct drm_sched_job    base;
 	struct amdgpu_vm	*vm;
 	struct amdgpu_sync	explicit_sync;
-	struct dma_fence	hw_fence;
+	struct amdgpu_fence	hw_fence;
 	struct dma_fence	*gang_submit;
 	uint32_t		preamble_status;
 	uint32_t                preemption_status;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index b95b471107692..e1f25218943a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -127,6 +127,22 @@ struct amdgpu_fence_driver {
 	struct dma_fence		**fences;
 };
 
+/*
+ * Fences mark an event in the GPUs pipeline and are used
+ * for GPU/CPU synchronization.  When the fence is written,
+ * it is expected that all buffers associated with that fence
+ * are no longer in use by the associated ring on the GPU and
+ * that the relevant GPU caches have been flushed.
+ */
+
+struct amdgpu_fence {
+	struct dma_fence base;
+
+	/* RB, DMA, etc. */
+	struct amdgpu_ring		*ring;
+	ktime_t				start_timestamp;
+};
+
 extern const struct drm_sched_backend_ops amdgpu_sched_ops;
 
 void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
-- 
2.49.0



* [PATCH 02/27] drm/amdgpu/sdma: handle paging queues in amdgpu_sdma_reset_engine()
From: Alex Deucher @ 2025-06-13 21:47 UTC
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

We need to properly start and stop the paging queues if they
are present.

This is not an issue today since we don't support a paging queue
on any chips with queue reset.

Fixes: ffe43cc82a04 ("drm/amdgpu: switch amdgpu_sdma_reset_engine to use the new sdma function pointers")
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
index a1e54bcef495c..cf5733d5d26dd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
@@ -571,8 +571,11 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
 		page_sched_stopped = true;
 	}
 
-	if (sdma_instance->funcs->stop_kernel_queue)
+	if (sdma_instance->funcs->stop_kernel_queue) {
 		sdma_instance->funcs->stop_kernel_queue(gfx_ring);
+		if (adev->sdma.has_page_queue)
+			sdma_instance->funcs->stop_kernel_queue(page_ring);
+	}
 
 	/* Perform the SDMA reset for the specified instance */
 	ret = amdgpu_sdma_soft_reset(adev, instance_id);
@@ -581,8 +584,11 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
 		goto exit;
 	}
 
-	if (sdma_instance->funcs->start_kernel_queue)
+	if (sdma_instance->funcs->start_kernel_queue) {
 		sdma_instance->funcs->start_kernel_queue(gfx_ring);
+		if (adev->sdma.has_page_queue)
+			sdma_instance->funcs->start_kernel_queue(page_ring);
+	}
 
 exit:
 	/* Restart the scheduler's work queue for the GFX and page rings
-- 
2.49.0



* [PATCH 03/27] drm/amdgpu: enable legacy enforce isolation by default
From: Alex Deucher @ 2025-06-13 21:47 UTC
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Enable legacy enforce isolation (just serialize kernel
GC submissions) by default.  This way we can reset a ring
and only affect the process currently using that ring.
This mirrors what Windows does.
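
After this change the amdgpu_enforce_isolation parameter maps as
follows (summarizing the hunk below):

    0            -> AMDGPU_ENFORCE_ISOLATION_DISABLE
    1            -> AMDGPU_ENFORCE_ISOLATION_ENABLE
    2            -> AMDGPU_ENFORCE_ISOLATION_ENABLE_LEGACY
    -1 / default -> AMDGPU_ENFORCE_ISOLATION_ENABLE_LEGACY (was DISABLE)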

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 4893f834f4fd4..88d4da1df6af9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2148,9 +2148,7 @@ static int amdgpu_device_check_arguments(struct amdgpu_device *adev)
 
 	for (i = 0; i < MAX_XCP; i++) {
 		switch (amdgpu_enforce_isolation) {
-		case -1:
 		case 0:
-		default:
 			/* disable */
 			adev->enforce_isolation[i] = AMDGPU_ENFORCE_ISOLATION_DISABLE;
 			break;
@@ -2159,7 +2157,9 @@ static int amdgpu_device_check_arguments(struct amdgpu_device *adev)
 			adev->enforce_isolation[i] =
 				AMDGPU_ENFORCE_ISOLATION_ENABLE;
 			break;
+		case -1:
 		case 2:
+		default:
 			/* enable legacy mode */
 			adev->enforce_isolation[i] =
 				AMDGPU_ENFORCE_ISOLATION_ENABLE_LEGACY;
-- 
2.49.0



* [PATCH 04/27] drm/amdgpu: update ring reset function signature
From: Alex Deucher @ 2025-06-13 21:47 UTC
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Going forward, we'll need more than just the vmid.  Add the
guilty amdgpu_fence.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 5 +++--
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 7 +++++--
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c   | 8 ++++++--
 drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c   | 8 ++++++--
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c    | 3 ++-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c  | 3 ++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c   | 4 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c   | 4 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c   | 4 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c   | 4 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c | 4 +++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c | 4 +++-
 drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 4 +++-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c   | 4 +++-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c   | 4 +++-
 drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c   | 4 +++-
 drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c   | 4 +++-
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c    | 4 +++-
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c  | 4 +++-
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c  | 4 +++-
 drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c  | 4 +++-
 22 files changed, 70 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index ddb9d3269357c..a7ff1fa4c778e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -155,7 +155,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 		if (is_guilty)
 			dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
 
-		r = amdgpu_ring_reset(ring, job->vmid);
+		r = amdgpu_ring_reset(ring, job->vmid, NULL);
 		if (!r) {
 			if (amdgpu_ring_sched_ready(ring))
 				drm_sched_stop(&ring->sched, s_job);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index e1f25218943a4..ff3a4b81e51ab 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -268,7 +268,8 @@ struct amdgpu_ring_funcs {
 	void (*patch_cntl)(struct amdgpu_ring *ring, unsigned offset);
 	void (*patch_ce)(struct amdgpu_ring *ring, unsigned offset);
 	void (*patch_de)(struct amdgpu_ring *ring, unsigned offset);
-	int (*reset)(struct amdgpu_ring *ring, unsigned int vmid);
+	int (*reset)(struct amdgpu_ring *ring, unsigned int vmid,
+		     struct amdgpu_fence *guilty_fence);
 	void (*emit_cleaner_shader)(struct amdgpu_ring *ring);
 	bool (*is_guilty)(struct amdgpu_ring *ring);
 };
@@ -425,7 +426,7 @@ struct amdgpu_ring {
 #define amdgpu_ring_patch_cntl(r, o) ((r)->funcs->patch_cntl((r), (o)))
 #define amdgpu_ring_patch_ce(r, o) ((r)->funcs->patch_ce((r), (o)))
 #define amdgpu_ring_patch_de(r, o) ((r)->funcs->patch_de((r), (o)))
-#define amdgpu_ring_reset(r, v) (r)->funcs->reset((r), (v))
+#define amdgpu_ring_reset(r, v, f) (r)->funcs->reset((r), (v), (f))
 
 unsigned int amdgpu_ring_max_ibs(enum amdgpu_ring_type type);
 int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned ndw);
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 75ea071744eb5..444753b0ac885 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -9522,7 +9522,9 @@ static void gfx_v10_ring_insert_nop(struct amdgpu_ring *ring, uint32_t num_nop)
 	amdgpu_ring_insert_nop(ring, num_nop - 1);
 }
 
-static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
+			       unsigned int vmid,
+			       struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
@@ -9579,7 +9581,8 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
 }
 
 static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
-			       unsigned int vmid)
+			       unsigned int vmid,
+			       struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index ec9b84f92d467..4293f2a1b9bfb 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -6811,7 +6811,9 @@ static int gfx_v11_reset_gfx_pipe(struct amdgpu_ring *ring)
 	return 0;
 }
 
-static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
+			       unsigned int vmid,
+			       struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	int r;
@@ -6973,7 +6975,9 @@ static int gfx_v11_0_reset_compute_pipe(struct amdgpu_ring *ring)
 	return 0;
 }
 
-static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
+			       unsigned int vmid,
+			       struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	int r = 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index 1234c8d64e20d..aea21ef177d05 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@ -5307,7 +5307,9 @@ static int gfx_v12_reset_gfx_pipe(struct amdgpu_ring *ring)
 	return 0;
 }
 
-static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
+			       unsigned int vmid,
+			       struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	int r;
@@ -5421,7 +5423,9 @@ static int gfx_v12_0_reset_compute_pipe(struct amdgpu_ring *ring)
 	return 0;
 }
 
-static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
+			       unsigned int vmid,
+			       struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	int r;
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index d50e125fd3e0d..c0ffe7afca9b8 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -7153,7 +7153,8 @@ static void gfx_v9_ring_insert_nop(struct amdgpu_ring *ring, uint32_t num_nop)
 }
 
 static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
-			      unsigned int vmid)
+			      unsigned int vmid,
+			      struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
index c233edf605694..79d4ae0645ffc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
@@ -3552,7 +3552,8 @@ static int gfx_v9_4_3_reset_hw_pipe(struct amdgpu_ring *ring)
 }
 
 static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
-				unsigned int vmid)
+				unsigned int vmid,
+				struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_kiq *kiq = &adev->gfx.kiq[ring->xcc_id];
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
index 4cde8a8bcc837..4c1ff6d0e14ea 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
@@ -764,7 +764,9 @@ static int jpeg_v2_0_process_interrupt(struct amdgpu_device *adev,
 	return 0;
 }
 
-static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
+				unsigned int vmid,
+				struct amdgpu_fence *guilty_fence)
 {
 	jpeg_v2_0_stop(ring->adev);
 	jpeg_v2_0_start(ring->adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
index 8b39e114f3be1..5a18b8644de2f 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
@@ -643,7 +643,9 @@ static int jpeg_v2_5_process_interrupt(struct amdgpu_device *adev,
 	return 0;
 }
 
-static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
+				unsigned int vmid,
+				struct amdgpu_fence *guilty_fence)
 {
 	jpeg_v2_5_stop_inst(ring->adev, ring->me);
 	jpeg_v2_5_start_inst(ring->adev, ring->me);
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
index 2f8510c2986b9..4963feddefae5 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
@@ -555,7 +555,9 @@ static int jpeg_v3_0_process_interrupt(struct amdgpu_device *adev,
 	return 0;
 }
 
-static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
+				unsigned int vmid,
+				struct amdgpu_fence *guilty_fence)
 {
 	jpeg_v3_0_stop(ring->adev);
 	jpeg_v3_0_start(ring->adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
index f17ec5414fd69..327adb474b0d3 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
@@ -720,7 +720,9 @@ static int jpeg_v4_0_process_interrupt(struct amdgpu_device *adev,
 	return 0;
 }
 
-static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
+				unsigned int vmid,
+				struct amdgpu_fence *guilty_fence)
 {
 	if (amdgpu_sriov_vf(ring->adev))
 		return -EINVAL;
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
index 79e342d5ab28d..c951b4b170c5b 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
@@ -1143,7 +1143,9 @@ static void jpeg_v4_0_3_core_stall_reset(struct amdgpu_ring *ring)
 	WREG32_SOC15(JPEG, jpeg_inst, regJPEG_CORE_RST_CTRL, 0x00);
 }
 
-static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
+				  unsigned int vmid,
+				  struct amdgpu_fence *guilty_fence)
 {
 	if (amdgpu_sriov_vf(ring->adev))
 		return -EOPNOTSUPP;
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
index 3b6f65a256464..51ae62c24c49e 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
@@ -834,7 +834,9 @@ static void jpeg_v5_0_1_core_stall_reset(struct amdgpu_ring *ring)
 	WREG32_SOC15(JPEG, jpeg_inst, regJPEG_CORE_RST_CTRL, 0x00);
 }
 
-static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
+				  unsigned int vmid,
+				  struct amdgpu_fence *guilty_fence)
 {
 	if (amdgpu_sriov_vf(ring->adev))
 		return -EOPNOTSUPP;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index 35b0a7fb37b96..83596e032ee35 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -1670,7 +1670,9 @@ static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
 	return sdma_v4_4_2_is_queue_selected(adev, instance_id, true);
 }
 
-static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
+				   unsigned int vmid,
+				   struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	u32 id = ring->me;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 9505ae96fbecc..6cdaf60826923 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -1538,7 +1538,9 @@ static int sdma_v5_0_soft_reset(struct amdgpu_ip_block *ip_block)
 	return 0;
 }
 
-static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring,
+				 unsigned int vmid,
+				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	u32 inst_id = ring->me;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index a6e612b4a8928..1f7e21994b796 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -1451,7 +1451,9 @@ static int sdma_v5_2_wait_for_idle(struct amdgpu_ip_block *ip_block)
 	return -ETIMEDOUT;
 }
 
-static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring,
+				 unsigned int vmid,
+				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	u32 inst_id = ring->me;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index 5a70ae17be04e..43bb4a7456b90 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1537,7 +1537,9 @@ static int sdma_v6_0_ring_preempt_ib(struct amdgpu_ring *ring)
 	return r;
 }
 
-static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
+				 unsigned int vmid,
+				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	int i, r;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index ad47d0bdf7775..b5c168cb1354d 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -802,7 +802,9 @@ static bool sdma_v7_0_check_soft_reset(struct amdgpu_ip_block *ip_block)
 	return false;
 }
 
-static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
+				 unsigned int vmid,
+				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	int i, r;
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
index b5071f77f78d2..083fde15e83a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
@@ -1967,7 +1967,9 @@ static int vcn_v4_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
 	return 0;
 }
 
-static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
+			       unsigned int vmid,
+			       struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
index 5a33140f57235..57c59c4868a50 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
@@ -1594,7 +1594,9 @@ static void vcn_v4_0_3_unified_ring_set_wptr(struct amdgpu_ring *ring)
 	}
 }
 
-static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
+				 unsigned int vmid,
+				 struct amdgpu_fence *guilty_fence)
 {
 	int r = 0;
 	int vcn_inst;
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
index 16ade84facc78..4aad7d2e36379 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
@@ -1465,7 +1465,9 @@ static void vcn_v4_0_5_unified_ring_set_wptr(struct amdgpu_ring *ring)
 	}
 }
 
-static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
+				 unsigned int vmid,
+				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
index f8e3f0b882da5..b9c8a2b8c5e0d 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
@@ -1192,7 +1192,9 @@ static void vcn_v5_0_0_unified_ring_set_wptr(struct amdgpu_ring *ring)
 	}
 }
 
-static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
+				 unsigned int vmid,
+				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
-- 
2.49.0



* [PATCH 05/27] drm/amdgpu: rework queue reset scheduler interaction
From: Alex Deucher @ 2025-06-13 21:47 UTC
  To: amd-gfx, christian.koenig, sasundar; +Cc: Christian König, Alex Deucher

From: Christian König <ckoenig.leichtzumerken@gmail.com>

Stopping the scheduler for queue reset is generally a good idea because
it prevents any worker from touching the ring buffer.
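
After this change the timeout handler brackets the reset with the
scheduler work queue calls (simplified from the hunk below):

    drm_sched_wqueue_stop(&ring->sched);  /* nobody touches the ring */
    r = amdgpu_ring_reset(ring, job->vmid, NULL);
    if (!r) {
            if (is_guilty)
                    atomic_inc(&ring->adev->gpu_reset_counter);
            drm_sched_wqueue_start(&ring->sched);
    }
    /* on failure, fall through to full adapter reset */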

Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 35 ++++++++++++++-----------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index a7ff1fa4c778e..93413be59e08f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -91,8 +91,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 	struct amdgpu_job *job = to_amdgpu_job(s_job);
 	struct amdgpu_task_info *ti;
 	struct amdgpu_device *adev = ring->adev;
-	int idx;
-	int r;
+	bool set_error = false;
+	int idx, r;
 
 	if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
 		dev_info(adev->dev, "%s - device unplugged skipping recovery on scheduler:%s",
@@ -136,10 +136,12 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 	} else if (amdgpu_gpu_recovery && ring->funcs->reset) {
 		bool is_guilty;
 
-		dev_err(adev->dev, "Starting %s ring reset\n", s_job->sched->name);
-		/* stop the scheduler, but don't mess with the
-		 * bad job yet because if ring reset fails
-		 * we'll fall back to full GPU reset.
+		dev_err(adev->dev, "Starting %s ring reset\n",
+			s_job->sched->name);
+
+		/*
+		 * Stop the scheduler to prevent anybody else from touching the
+		 * ring buffer.
 		 */
 		drm_sched_wqueue_stop(&ring->sched);
 
@@ -152,26 +154,29 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 		else
 			is_guilty = true;
 
-		if (is_guilty)
+		if (is_guilty) {
 			dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
+			set_error = true;
+		}
 
 		r = amdgpu_ring_reset(ring, job->vmid, NULL);
 		if (!r) {
-			if (amdgpu_ring_sched_ready(ring))
-				drm_sched_stop(&ring->sched, s_job);
 			if (is_guilty) {
 				atomic_inc(&ring->adev->gpu_reset_counter);
 				amdgpu_fence_driver_force_completion(ring);
 			}
-			if (amdgpu_ring_sched_ready(ring))
-				drm_sched_start(&ring->sched, 0);
-			dev_err(adev->dev, "Ring %s reset succeeded\n", ring->sched.name);
-			drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE);
+			drm_sched_wqueue_start(&ring->sched);
+			dev_err(adev->dev, "Ring %s reset succeeded\n",
+				ring->sched.name);
+			drm_dev_wedged_event(adev_to_drm(adev),
+					     DRM_WEDGE_RECOVERY_NONE);
 			goto exit;
 		}
-		dev_err(adev->dev, "Ring %s reset failure\n", ring->sched.name);
+		dev_err(adev->dev, "Ring %s reset failed\n", ring->sched.name);
 	}
-	dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
+
+	if (!set_error)
+		dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
 
 	if (amdgpu_device_should_recover_gpu(ring->adev)) {
 		struct amdgpu_reset_context reset_context;
-- 
2.49.0



* [PATCH 06/27] drm/amdgpu: move force completion into ring resets
From: Alex Deucher @ 2025-06-13 21:47 UTC
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Move the force completion handling into each ring
reset function so that each engine can determine
whether or not it needs to force completion on the
jobs in the ring.
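
Each reset callback now ends with the same pattern (see the hunks
below): only report success after the ring test passes, and only
then force-complete the fences of the jobs that were on the ring:

    r = amdgpu_ring_test_ring(ring);  /* or amdgpu_ring_test_helper() */
    if (r)
            return r;
    amdgpu_fence_driver_force_completion(ring);
    return 0;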

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  4 +---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   | 12 ++++++++++--
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c   | 12 ++++++++++--
 drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c   | 12 ++++++++++--
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c    |  7 ++++++-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c  |  7 ++++++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c   |  8 +++++++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c   |  8 +++++++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c   |  8 +++++++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c   |  8 +++++++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c |  8 +++++++-
 drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c |  8 +++++++-
 drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c |  8 +++++++-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c   |  7 ++++++-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c   |  7 ++++++-
 drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c   |  6 +++++-
 drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c   |  6 +++++-
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c    |  7 ++++++-
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c  |  6 ++++--
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c  |  7 ++++++-
 drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c  |  7 ++++++-
 21 files changed, 136 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 93413be59e08f..177f04491a11b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -161,10 +161,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 
 		r = amdgpu_ring_reset(ring, job->vmid, NULL);
 		if (!r) {
-			if (is_guilty) {
+			if (is_guilty)
 				atomic_inc(&ring->adev->gpu_reset_counter);
-				amdgpu_fence_driver_force_completion(ring);
-			}
 			drm_sched_wqueue_start(&ring->sched);
 			dev_err(adev->dev, "Ring %s reset succeeded\n",
 				ring->sched.name);
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 444753b0ac885..b4f4ad966db82 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -9577,7 +9577,11 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
 		return r;
 	}
 
-	return amdgpu_ring_test_ring(ring);
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
@@ -9650,7 +9654,11 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 
-	return amdgpu_ring_test_ring(ring);
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static void gfx_v10_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 4293f2a1b9bfb..5707ce7dd5c82 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -6842,7 +6842,11 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
 		return r;
 	}
 
-	return amdgpu_ring_test_ring(ring);
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static int gfx_v11_0_reset_compute_pipe(struct amdgpu_ring *ring)
@@ -7004,7 +7008,11 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
 		return r;
 	}
 
-	return amdgpu_ring_test_ring(ring);
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static void gfx_v11_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index aea21ef177d05..259a83c3acb5d 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@ -5337,7 +5337,11 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
 		return r;
 	}
 
-	return amdgpu_ring_test_ring(ring);
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static int gfx_v12_0_reset_compute_pipe(struct amdgpu_ring *ring)
@@ -5452,7 +5456,11 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
 		return r;
 	}
 
-	return amdgpu_ring_test_ring(ring);
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static void gfx_v12_0_ring_begin_use(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index c0ffe7afca9b8..e0dec946b7cdc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -7223,7 +7223,12 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
 		DRM_ERROR("fail to remap queue\n");
 		return r;
 	}
-	return amdgpu_ring_test_ring(ring);
+
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static void gfx_v9_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
index 79d4ae0645ffc..e5fcc63cd99df 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
@@ -3620,7 +3620,12 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
 		dev_err(adev->dev, "fail to remap queue\n");
 		return r;
 	}
-	return amdgpu_ring_test_ring(ring);
+
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 enum amdgpu_gfx_cp_ras_mem_id {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
index 4c1ff6d0e14ea..0b1fa35a441ae 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
@@ -768,9 +768,15 @@ static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
 				unsigned int vmid,
 				struct amdgpu_fence *guilty_fence)
 {
+	int r;
+
 	jpeg_v2_0_stop(ring->adev);
 	jpeg_v2_0_start(ring->adev);
-	return amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static const struct amd_ip_funcs jpeg_v2_0_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
index 5a18b8644de2f..7a9e91f6495de 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
@@ -647,9 +647,15 @@ static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
 				unsigned int vmid,
 				struct amdgpu_fence *guilty_fence)
 {
+	int r;
+
 	jpeg_v2_5_stop_inst(ring->adev, ring->me);
 	jpeg_v2_5_start_inst(ring->adev, ring->me);
-	return amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static const struct amd_ip_funcs jpeg_v2_5_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
index 4963feddefae5..81ee1ba4c0a3c 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
@@ -559,9 +559,15 @@ static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
 				unsigned int vmid,
 				struct amdgpu_fence *guilty_fence)
 {
+	int r;
+
 	jpeg_v3_0_stop(ring->adev);
 	jpeg_v3_0_start(ring->adev);
-	return amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static const struct amd_ip_funcs jpeg_v3_0_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
index 327adb474b0d3..06f75091e1304 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
@@ -724,12 +724,18 @@ static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
 				unsigned int vmid,
 				struct amdgpu_fence *guilty_fence)
 {
+	int r;
+
 	if (amdgpu_sriov_vf(ring->adev))
 		return -EINVAL;
 
 	jpeg_v4_0_stop(ring->adev);
 	jpeg_v4_0_start(ring->adev);
-	return amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static const struct amd_ip_funcs jpeg_v4_0_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
index c951b4b170c5b..10a7b990b0adf 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
@@ -1147,12 +1147,18 @@ static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
 				  unsigned int vmid,
 				  struct amdgpu_fence *guilty_fence)
 {
+	int r;
+
 	if (amdgpu_sriov_vf(ring->adev))
 		return -EOPNOTSUPP;
 
 	jpeg_v4_0_3_core_stall_reset(ring);
 	jpeg_v4_0_3_start_jrbc(ring);
-	return amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static const struct amd_ip_funcs jpeg_v4_0_3_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
index 51ae62c24c49e..88dea7a47a1e5 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
@@ -838,12 +838,18 @@ static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
 				  unsigned int vmid,
 				  struct amdgpu_fence *guilty_fence)
 {
+	int r;
+
 	if (amdgpu_sriov_vf(ring->adev))
 		return -EOPNOTSUPP;
 
 	jpeg_v5_0_1_core_stall_reset(ring);
 	jpeg_v5_0_1_init_jrbc(ring);
-	return amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static const struct amd_ip_funcs jpeg_v5_0_1_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index 83596e032ee35..c5e0d2e730740 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -1674,6 +1674,7 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
 				   unsigned int vmid,
 				   struct amdgpu_fence *guilty_fence)
 {
+	bool is_guilty = ring->funcs->is_guilty(ring);
 	struct amdgpu_device *adev = ring->adev;
 	u32 id = ring->me;
 	int r;
@@ -1684,8 +1685,13 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
 	amdgpu_amdkfd_suspend(adev, false);
 	r = amdgpu_sdma_reset_engine(adev, id);
 	amdgpu_amdkfd_resume(adev, false);
+	if (r)
+		return r;
 
-	return r;
+	if (is_guilty)
+		amdgpu_fence_driver_force_completion(ring);
+
+	return 0;
 }
 
 static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 6cdaf60826923..09419db2d49a6 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -1544,8 +1544,13 @@ static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring,
 {
 	struct amdgpu_device *adev = ring->adev;
 	u32 inst_id = ring->me;
+	int r;
 
-	return amdgpu_sdma_reset_engine(adev, inst_id);
+	r = amdgpu_sdma_reset_engine(adev, inst_id);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static int sdma_v5_0_stop_queue(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index 1f7e21994b796..365c710ee9e8c 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -1457,8 +1457,13 @@ static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring,
 {
 	struct amdgpu_device *adev = ring->adev;
 	u32 inst_id = ring->me;
+	int r;
 
-	return amdgpu_sdma_reset_engine(adev, inst_id);
+	r = amdgpu_sdma_reset_engine(adev, inst_id);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static int sdma_v5_2_stop_queue(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index 43bb4a7456b90..746f14862d9ff 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1561,7 +1561,11 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 
-	return sdma_v6_0_gfx_resume_instance(adev, i, true);
+	r = sdma_v6_0_gfx_resume_instance(adev, i, true);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static int sdma_v6_0_set_trap_irq_state(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index b5c168cb1354d..2e4c658598001 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -826,7 +826,11 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 
-	return sdma_v7_0_gfx_resume_instance(adev, i, true);
+	r = sdma_v7_0_gfx_resume_instance(adev, i, true);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
index 083fde15e83a1..0d73b2bd4aad6 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
@@ -1973,6 +1973,7 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
+	int r;
 
 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
 		return -EOPNOTSUPP;
@@ -1980,7 +1981,11 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
 	vcn_v4_0_stop(vinst);
 	vcn_v4_0_start(vinst);
 
-	return amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static struct amdgpu_ring_funcs vcn_v4_0_unified_ring_vm_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
index 57c59c4868a50..bf9edfef2107e 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
@@ -1623,8 +1623,10 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
 	vcn_v4_0_3_hw_init_inst(vinst);
 	vcn_v4_0_3_start_dpg_mode(vinst, adev->vcn.inst[ring->me].indirect_sram);
 	r = amdgpu_ring_test_helper(ring);
-
-	return r;
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static const struct amdgpu_ring_funcs vcn_v4_0_3_unified_ring_vm_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
index 4aad7d2e36379..3a3ed600e15f0 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
@@ -1471,6 +1471,7 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
+	int r;
 
 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
 		return -EOPNOTSUPP;
@@ -1478,7 +1479,11 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
 	vcn_v4_0_5_stop(vinst);
 	vcn_v4_0_5_start(vinst);
 
-	return amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static struct amdgpu_ring_funcs vcn_v4_0_5_unified_ring_vm_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
index b9c8a2b8c5e0d..c7953116ad532 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
@@ -1198,6 +1198,7 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
 {
 	struct amdgpu_device *adev = ring->adev;
 	struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
+	int r;
 
 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
 		return -EOPNOTSUPP;
@@ -1205,7 +1206,11 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
 	vcn_v5_0_0_stop(vinst);
 	vcn_v5_0_0_start(vinst);
 
-	return amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_helper(ring);
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static const struct amdgpu_ring_funcs vcn_v5_0_0_unified_ring_vm_funcs = {
-- 
2.49.0


* [PATCH 07/27] drm/amdgpu: move guilty handling into ring resets
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Move the guilty handling logic into the ring reset callbacks.  This
allows each ring reset callback to better handle fence errors and
force completions in line with the reset behavior of each IP.  It
also allows us to remove the ring is_guilty() callback since that
logic now lives in the reset callback.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
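Note that for engine level resets (e.g., SDMA 4.4.2), resetting the
engine tears down queues that may be unaffected, so the reset
callback now reads the queue's CONTEXT_STATUS SELECTED bit itself to
decide whether the scheduled ring was actually guilty before bumping
gpu_reset_counter.
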
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  | 22 +---------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 -
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   |  2 +
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c   |  2 +
 drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c   |  2 +
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c    |  1 +
 drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c  |  1 +
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c   |  1 +
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c   |  1 +
 drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c   |  1 +
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c   |  1 +
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c |  1 +
 drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c |  1 +
 drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 51 ++++++++++++------------
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c   | 23 ++++++++++-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c   | 25 ++++++++++--
 drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c   |  1 +
 drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c   |  1 +
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c    |  1 +
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c  |  1 +
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c  |  1 +
 drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c  |  1 +
 22 files changed, 88 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 177f04491a11b..680cdd8fc3ab2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -91,7 +91,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 	struct amdgpu_job *job = to_amdgpu_job(s_job);
 	struct amdgpu_task_info *ti;
 	struct amdgpu_device *adev = ring->adev;
-	bool set_error = false;
 	int idx, r;
 
 	if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
@@ -134,8 +133,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 	if (unlikely(adev->debug_disable_gpu_ring_reset)) {
 		dev_err(adev->dev, "Ring reset disabled by debug mask\n");
 	} else if (amdgpu_gpu_recovery && ring->funcs->reset) {
-		bool is_guilty;
-
 		dev_err(adev->dev, "Starting %s ring reset\n",
 			s_job->sched->name);
 
@@ -145,24 +142,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 		 */
 		drm_sched_wqueue_stop(&ring->sched);
 
-		/* for engine resets, we need to reset the engine,
-		 * but individual queues may be unaffected.
-		 * check here to make sure the accounting is correct.
-		 */
-		if (ring->funcs->is_guilty)
-			is_guilty = ring->funcs->is_guilty(ring);
-		else
-			is_guilty = true;
-
-		if (is_guilty) {
-			dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
-			set_error = true;
-		}
-
 		r = amdgpu_ring_reset(ring, job->vmid, NULL);
 		if (!r) {
-			if (is_guilty)
-				atomic_inc(&ring->adev->gpu_reset_counter);
 			drm_sched_wqueue_start(&ring->sched);
 			dev_err(adev->dev, "Ring %s reset succeeded\n",
 				ring->sched.name);
@@ -173,8 +154,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 		dev_err(adev->dev, "Ring %s reset failed\n", ring->sched.name);
 	}
 
-	if (!set_error)
-		dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
+	dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
 
 	if (amdgpu_device_should_recover_gpu(ring->adev)) {
 		struct amdgpu_reset_context reset_context;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index ff3a4b81e51ab..c1d14183abfe6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -271,7 +271,6 @@ struct amdgpu_ring_funcs {
 	int (*reset)(struct amdgpu_ring *ring, unsigned int vmid,
 		     struct amdgpu_fence *guilty_fence);
 	void (*emit_cleaner_shader)(struct amdgpu_ring *ring);
-	bool (*is_guilty)(struct amdgpu_ring *ring);
 };
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index b4f4ad966db82..a9d26d91c8468 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -9581,6 +9581,7 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
@@ -9658,6 +9659,7 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 5707ce7dd5c82..3dd2e04830dc6 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -6846,6 +6846,7 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
@@ -7012,6 +7013,7 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index 259a83c3acb5d..d2ee4543ce222 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@ -5341,6 +5341,7 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
@@ -5460,6 +5461,7 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index e0dec946b7cdc..1b767094dfa24 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -7228,6 +7228,7 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
index e5fcc63cd99df..05abe86ecd9ac 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
@@ -3625,6 +3625,7 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
index 0b1fa35a441ae..dbc28042c7d53 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
@@ -776,6 +776,7 @@ static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
index 7a9e91f6495de..f8af473e2a7a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
@@ -655,6 +655,7 @@ static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
index 81ee1ba4c0a3c..83559a32ed3d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
@@ -567,6 +567,7 @@ static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
index 06f75091e1304..b0f80f2a549c6 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
@@ -735,6 +735,7 @@ static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
index 10a7b990b0adf..4fd9386d2efd6 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
@@ -1158,6 +1158,7 @@ static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
index 88dea7a47a1e5..beca4d1e941b3 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
@@ -849,6 +849,7 @@ static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index c5e0d2e730740..0199d5bb5821d 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -1651,37 +1651,21 @@ static bool sdma_v4_4_2_is_queue_selected(struct amdgpu_device *adev, uint32_t i
 	return (context_status & SDMA_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
 }
 
-static bool sdma_v4_4_2_ring_is_guilty(struct amdgpu_ring *ring)
-{
-	struct amdgpu_device *adev = ring->adev;
-	uint32_t instance_id = ring->me;
-
-	return sdma_v4_4_2_is_queue_selected(adev, instance_id, false);
-}
-
-static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
-{
-	struct amdgpu_device *adev = ring->adev;
-	uint32_t instance_id = ring->me;
-
-	if (!adev->sdma.has_page_queue)
-		return false;
-
-	return sdma_v4_4_2_is_queue_selected(adev, instance_id, true);
-}
-
 static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
 				   unsigned int vmid,
 				   struct amdgpu_fence *guilty_fence)
 {
-	bool is_guilty = ring->funcs->is_guilty(ring);
 	struct amdgpu_device *adev = ring->adev;
 	u32 id = ring->me;
+	bool is_guilty;
 	int r;
 
 	if (!(adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
 		return -EOPNOTSUPP;
 
+	is_guilty = sdma_v4_4_2_is_queue_selected(adev, id,
+						  &adev->sdma.instance[id].page == ring);
+
 	amdgpu_amdkfd_suspend(adev, false);
 	r = amdgpu_sdma_reset_engine(adev, id);
 	amdgpu_amdkfd_resume(adev, false);
@@ -1689,7 +1673,7 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
 		return r;
 
 	if (is_guilty)
-		amdgpu_fence_driver_force_completion(ring);
+		atomic_inc(&ring->adev->gpu_reset_counter);
 
 	return 0;
 }
@@ -1735,8 +1719,8 @@ static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
 static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
 {
 	struct amdgpu_device *adev = ring->adev;
-	u32 inst_mask;
-	int i;
+	u32 inst_mask, tmp_mask;
+	int i, r;
 
 	inst_mask = 1 << ring->me;
 	udelay(50);
@@ -1753,7 +1737,24 @@ static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
 		return -ETIMEDOUT;
 	}
 
-	return sdma_v4_4_2_inst_start(adev, inst_mask, true);
+	r = sdma_v4_4_2_inst_start(adev, inst_mask, true);
+	if (r)
+		return r;
+
+	tmp_mask = inst_mask;
+	for_each_inst(i, tmp_mask) {
+		ring = &adev->sdma.instance[i].ring;
+
+		amdgpu_fence_driver_force_completion(ring);
+
+		if (adev->sdma.has_page_queue) {
+			struct amdgpu_ring *page = &adev->sdma.instance[i].page;
+
+			amdgpu_fence_driver_force_completion(page);
+		}
+	}
+
+	return 0;
 }
 
 static int sdma_v4_4_2_soft_reset_engine(struct amdgpu_device *adev,
@@ -2159,7 +2161,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_ring_funcs = {
 	.emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
 	.reset = sdma_v4_4_2_reset_queue,
-	.is_guilty = sdma_v4_4_2_ring_is_guilty,
 };
 
 static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
@@ -2192,7 +2193,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
 	.emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
 	.reset = sdma_v4_4_2_reset_queue,
-	.is_guilty = sdma_v4_4_2_page_ring_is_guilty,
 };
 
 static void sdma_v4_4_2_set_ring_funcs(struct amdgpu_device *adev)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 09419db2d49a6..4a36e5199f248 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -1538,18 +1538,34 @@ static int sdma_v5_0_soft_reset(struct amdgpu_ip_block *ip_block)
 	return 0;
 }
 
+static bool sdma_v5_0_is_queue_selected(struct amdgpu_device *adev,
+					uint32_t instance_id)
+{
+	u32 context_status = RREG32(sdma_v5_0_get_reg_offset(adev, instance_id,
+							     mmSDMA0_GFX_CONTEXT_STATUS));
+
+	/* Check if the SELECTED bit is set */
+	return (context_status & SDMA0_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
+}
+
 static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring,
 				 unsigned int vmid,
 				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	u32 inst_id = ring->me;
+	bool is_guilty = sdma_v5_0_is_queue_selected(adev, inst_id);
 	int r;
 
+	amdgpu_amdkfd_suspend(adev, false);
 	r = amdgpu_sdma_reset_engine(adev, inst_id);
+	amdgpu_amdkfd_resume(adev, false);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	if (is_guilty)
+		atomic_inc(&ring->adev->gpu_reset_counter);
+
 	return 0;
 }
 
@@ -1617,7 +1633,10 @@ static int sdma_v5_0_restore_queue(struct amdgpu_ring *ring)
 
 	r = sdma_v5_0_gfx_resume_instance(adev, inst_id, true);
 	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
-	return r;
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static int sdma_v5_0_ring_preempt_ib(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index 365c710ee9e8c..84d85ef30701c 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -1451,18 +1451,34 @@ static int sdma_v5_2_wait_for_idle(struct amdgpu_ip_block *ip_block)
 	return -ETIMEDOUT;
 }
 
+static bool sdma_v5_2_is_queue_selected(struct amdgpu_device *adev,
+					uint32_t instance_id)
+{
+	u32 context_status = RREG32(sdma_v5_2_get_reg_offset(adev, instance_id,
+							     mmSDMA0_GFX_CONTEXT_STATUS));
+
+	/* Check if the SELECTED bit is set */
+	return (context_status & SDMA0_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
+}
+
 static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring,
 				 unsigned int vmid,
 				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
 	u32 inst_id = ring->me;
+	bool is_guilty = sdma_v5_2_is_queue_selected(adev, inst_id);
 	int r;
 
+	amdgpu_amdkfd_suspend(adev, false);
 	r = amdgpu_sdma_reset_engine(adev, inst_id);
+	amdgpu_amdkfd_resume(adev, false);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	if (is_guilty)
+		atomic_inc(&ring->adev->gpu_reset_counter);
+
 	return 0;
 }
 
@@ -1529,11 +1545,12 @@ static int sdma_v5_2_restore_queue(struct amdgpu_ring *ring)
 	freeze = RREG32(sdma_v5_2_get_reg_offset(adev, inst_id, mmSDMA0_FREEZE));
 	freeze = REG_SET_FIELD(freeze, SDMA0_FREEZE, FREEZE, 0);
 	WREG32(sdma_v5_2_get_reg_offset(adev, inst_id, mmSDMA0_FREEZE), freeze);
-
 	r = sdma_v5_2_gfx_resume_instance(adev, inst_id, true);
-
 	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
-	return r;
+	if (r)
+		return r;
+	amdgpu_fence_driver_force_completion(ring);
+	return 0;
 }
 
 static int sdma_v5_2_ring_preempt_ib(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index 746f14862d9ff..595e90a5274ea 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1565,6 +1565,7 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index 2e4c658598001..3e036c37b1f5a 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -830,6 +830,7 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
index 0d73b2bd4aad6..d5be19361cc89 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
@@ -1985,6 +1985,7 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
index bf9edfef2107e..c7c2b7f5ba56d 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
@@ -1626,6 +1626,7 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
index 3a3ed600e15f0..af75617cf6df5 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
@@ -1483,6 +1483,7 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
index c7953116ad532..64f2b64da6258 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
@@ -1210,6 +1210,7 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
 	if (r)
 		return r;
 	amdgpu_fence_driver_force_completion(ring);
+	atomic_inc(&ring->adev->gpu_reset_counter);
 	return 0;
 }
 
-- 
2.49.0


* [PATCH 08/27] drm/amdgpu: track ring state associated with a job
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

We need to know the wptr and sequence number associated with a job
so that we can re-emit the unprocessed state after a ring reset.
Pre-allocate storage for the ring buffer contents and add helpers to
save off the unprocessed state and re-emit it after the queue is
reset.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
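A minimal sketch of how a ->reset() callback is expected to drive
these helpers (xxx_stop_and_restart_queue() is a hypothetical
placeholder for the IP specific sequence; the real per-IP versions
follow later in the series, and engines that can reset innocent
queues additionally check the SELECTED bit first as in patch 7):

static int xxx_reset_queue(struct amdgpu_ring *ring, unsigned int vmid,
			   struct amdgpu_fence *guilty_fence)
{
	int r;

	/* save the ring contents submitted after the guilty fence */
	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);

	/* hypothetical IP specific queue stop/restart */
	r = xxx_stop_and_restart_queue(ring, vmid);
	if (r)
		return r;

	/* signal the fence of the bad job with -ETIME */
	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
	atomic_inc(&ring->adev->gpu_reset_counter);

	/* re-emit the innocent submissions that were saved off */
	r = amdgpu_ring_reemit_unprocessed_commands(ring);
	if (r)
		/* if we fail to re-emit, force complete all fences */
		amdgpu_fence_driver_force_completion(ring);
	return 0;
}
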
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 96 +++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c    |  8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c   |  4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  | 27 +++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  | 14 ++++
 5 files changed, 146 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 569e0e5373927..d9e75d38bebf7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -135,12 +135,20 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
 		am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
 		if (am_fence == NULL)
 			return -ENOMEM;
+		am_fence->context = 0;
 	} else {
 		/* take use of job-embedded fence */
 		am_fence = &job->hw_fence;
+		if (job->base.s_fence) {
+			struct dma_fence *finished = &job->base.s_fence->finished;
+			am_fence->context = finished->context;
+		} else {
+			am_fence->context = 0;
+		}
 	}
 	fence = &am_fence->base;
 	am_fence->ring = ring;
+	am_fence->wptr = 0;
 
 	seq = ++ring->fence_drv.sync_seq;
 	if (job && job->job_run_counter) {
@@ -276,6 +284,7 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
 
 	do {
 		struct dma_fence *fence, **ptr;
+		struct amdgpu_fence *am_fence;
 
 		++last_seq;
 		last_seq &= drv->num_fences_mask;
@@ -288,6 +297,9 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
 		if (!fence)
 			continue;
 
+		am_fence = container_of(fence, struct amdgpu_fence, base);
+		if (am_fence->wptr)
+			drv->last_wptr = am_fence->wptr;
 		dma_fence_signal(fence);
 		dma_fence_put(fence);
 		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
@@ -748,6 +760,90 @@ void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring)
 	amdgpu_fence_process(ring);
 }
 
+/**
+ * amdgpu_fence_driver_guilty_force_completion - force signal of specified sequence
+ *
+ * @fence: the guilty fence to force signal
+ *
+ */
+void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *fence)
+{
+	dma_fence_set_error(&fence->base, -ETIME);
+	amdgpu_fence_write(fence->ring, fence->base.seqno);
+	amdgpu_fence_process(fence->ring);
+}
+
+void amdgpu_fence_save_wptr(struct dma_fence *fence)
+{
+	struct amdgpu_fence *am_fence = container_of(fence, struct amdgpu_fence, base);
+
+	am_fence->wptr = am_fence->ring->wptr;
+}
+
+static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring,
+						   unsigned int idx,
+						   u64 start_wptr, u32 end_wptr)
+{
+	unsigned int first_idx = start_wptr & ring->buf_mask;
+	unsigned int last_idx = end_wptr & ring->buf_mask;
+	unsigned int i, j, entries_to_copy;
+
+	if (last_idx < first_idx) {
+		entries_to_copy = ring->buf_mask + 1 - first_idx;
+		for (i = 0; i < entries_to_copy; i++)
+			ring->ring_backup[idx + i] = ring->ring[first_idx + i];
+		ring->ring_backup_entries_to_copy += entries_to_copy;
+		entries_to_copy = last_idx;
+		for (j = 0; j < entries_to_copy; j++)
+			ring->ring_backup[idx + i + j] = ring->ring[j];
+		ring->ring_backup_entries_to_copy += entries_to_copy;
+	} else {
+		entries_to_copy = last_idx - first_idx;
+		for (i = 0; i < entries_to_copy; i++)
+			ring->ring_backup[idx + i] = ring->ring[first_idx + i];
+		ring->ring_backup_entries_to_copy += entries_to_copy;
+	}
+}
+
+void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
+					     struct amdgpu_fence *guilty_fence)
+{
+	struct amdgpu_fence *fence;
+	struct dma_fence *unprocessed, **ptr;
+	u64 wptr, i, seqno;
+
+	if (guilty_fence) {
+		seqno = guilty_fence->base.seqno;
+		wptr = guilty_fence->wptr;
+	} else {
+		seqno = amdgpu_fence_read(ring);
+		wptr = ring->fence_drv.last_wptr;
+	}
+	ring->ring_backup_entries_to_copy = 0;
+	for (i = seqno + 1; i <= ring->fence_drv.sync_seq; ++i) {
+		ptr = &ring->fence_drv.fences[i & ring->fence_drv.num_fences_mask];
+		rcu_read_lock();
+		unprocessed = rcu_dereference(*ptr);
+
+		if (unprocessed && !dma_fence_is_signaled(unprocessed)) {
+			fence = container_of(unprocessed, struct amdgpu_fence, base);
+
+			/* save everything if the ring is not guilty, otherwise
+			 * just save the content from other contexts.
+			 */
+			if (fence->wptr &&
+			    (!guilty_fence || (fence->context != guilty_fence->context))) {
+				amdgpu_ring_backup_unprocessed_command(ring,
+								       ring->ring_backup_entries_to_copy,
+								       wptr,
+								       fence->wptr);
+				wptr = fence->wptr;
+			}
+		}
+		rcu_read_unlock();
+	}
+}
+
 /*
  * Common fence implementation
  */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 802743efa3b39..789f9b2af8f99 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -138,7 +138,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
 	int vmid = AMDGPU_JOB_GET_VMID(job);
 	bool need_pipe_sync = false;
 	unsigned int cond_exec;
-
 	unsigned int i;
 	int r = 0;
 
@@ -306,6 +305,13 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
 
 	amdgpu_ring_ib_end(ring);
 	amdgpu_ring_commit(ring);
+
+	/* This must be last for resets to work properly
+	 * as we need to save the wptr associated with this
+	 * fence.
+	 */
+	amdgpu_fence_save_wptr(*f);
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 680cdd8fc3ab2..af45b4a1e0c83 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -89,8 +89,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 {
 	struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
 	struct amdgpu_job *job = to_amdgpu_job(s_job);
-	struct amdgpu_task_info *ti;
 	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_task_info *ti;
 	int idx, r;
 
 	if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
@@ -142,7 +142,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
 		 */
 		drm_sched_wqueue_stop(&ring->sched);
 
-		r = amdgpu_ring_reset(ring, job->vmid, NULL);
+		r = amdgpu_ring_reset(ring, job->vmid, &job->hw_fence);
 		if (!r) {
 			drm_sched_wqueue_start(&ring->sched);
 			dev_err(adev->dev, "Ring %s reset succeeded\n",
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
index 426834806fbf2..736ff5bafd520 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
@@ -333,6 +333,12 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
 	/*  Initialize cached_rptr to 0 */
 	ring->cached_rptr = 0;
 
+	if (!ring->ring_backup) {
+		ring->ring_backup = kvzalloc(ring->ring_size, GFP_KERNEL);
+		if (!ring->ring_backup)
+			return -ENOMEM;
+	}
+
 	/* Allocate ring buffer */
 	if (ring->ring_obj == NULL) {
 		r = amdgpu_bo_create_kernel(adev, ring->ring_size + ring->funcs->extra_dw, PAGE_SIZE,
@@ -342,6 +348,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
 					    (void **)&ring->ring);
 		if (r) {
 			dev_err(adev->dev, "(%d) ring create failed\n", r);
+			kvfree(ring->ring_backup);
 			return r;
 		}
 		amdgpu_ring_clear_ring(ring);
@@ -385,6 +392,8 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring)
 	amdgpu_bo_free_kernel(&ring->ring_obj,
 			      &ring->gpu_addr,
 			      (void **)&ring->ring);
+	kvfree(ring->ring_backup);
+	ring->ring_backup = NULL;
 
 	dma_fence_put(ring->vmid_wait);
 	ring->vmid_wait = NULL;
@@ -753,3 +762,21 @@ bool amdgpu_ring_sched_ready(struct amdgpu_ring *ring)
 
 	return true;
 }
+
+int amdgpu_ring_reemit_unprocessed_commands(struct amdgpu_ring *ring)
+{
+	unsigned int i;
+	int r;
+
+	/* re-emit the unprocessed ring contents */
+	if (ring->ring_backup_entries_to_copy) {
+		r = amdgpu_ring_alloc(ring, ring->ring_backup_entries_to_copy);
+		if (r)
+			return r;
+		for (i = 0; i < ring->ring_backup_entries_to_copy; i++)
+			amdgpu_ring_write(ring, ring->ring_backup[i]);
+		amdgpu_ring_commit(ring);
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index c1d14183abfe6..7a61f3b74c69f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -118,6 +118,7 @@ struct amdgpu_fence_driver {
 	/* sync_seq is protected by ring emission lock */
 	uint32_t			sync_seq;
 	atomic_t			last_seq;
+	u64				last_wptr;
 	bool				initialized;
 	struct amdgpu_irq_src		*irq_src;
 	unsigned			irq_type;
@@ -141,6 +142,11 @@ struct amdgpu_fence {
 	/* RB, DMA, etc. */
 	struct amdgpu_ring		*ring;
 	ktime_t				start_timestamp;
+
+	/* wptr for the fence for resets */
+	u64				wptr;
+	/* fence context for resets */
+	u64				context;
 };
 
 extern const struct drm_sched_backend_ops amdgpu_sched_ops;
@@ -148,6 +154,8 @@ extern const struct drm_sched_backend_ops amdgpu_sched_ops;
 void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
 void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error);
 void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);
+void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *fence);
+void amdgpu_fence_save_wptr(struct dma_fence *fence);
 
 int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring);
 int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
@@ -284,6 +292,9 @@ struct amdgpu_ring {
 
 	struct amdgpu_bo	*ring_obj;
 	uint32_t		*ring;
+	/* backups for resets */
+	uint32_t		*ring_backup;
+	unsigned int		ring_backup_entries_to_copy;
 	unsigned		rptr_offs;
 	u64			rptr_gpu_addr;
 	volatile u32		*rptr_cpu_addr;
@@ -550,4 +561,7 @@ int amdgpu_ib_pool_init(struct amdgpu_device *adev);
 void amdgpu_ib_pool_fini(struct amdgpu_device *adev);
 int amdgpu_ib_ring_tests(struct amdgpu_device *adev);
 bool amdgpu_ring_sched_ready(struct amdgpu_ring *ring);
+void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
+					     struct amdgpu_fence *guilty_fence);
+int amdgpu_ring_reemit_unprocessed_commands(struct amdgpu_ring *ring);
 #endif
-- 
2.49.0


* [PATCH 09/27] drm/amdgpu/gfx9: re-emit unprocessed state on kcq reset
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
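Note the ordering below: the unprocessed commands are saved off
before the KIQ unmaps the queue, only the guilty fence is
force-completed once the ring tests good, and force-completing all
fences remains the fallback if re-emission fails.  The KIQ ring test
also now runs while kiq->ring_lock is still held.
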
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 1b767094dfa24..3b3dd98245dcc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -7168,6 +7168,8 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
 	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
 		return -EINVAL;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	spin_lock_irqsave(&kiq->ring_lock, flags);
 
 	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
@@ -7217,18 +7219,24 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
 	}
 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
 	amdgpu_ring_commit(kiq_ring);
-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
 	r = amdgpu_ring_test_ring(kiq_ring);
+	spin_unlock_irqrestore(&kiq->ring_lock, flags);
 	if (r) {
 		DRM_ERROR("fail to remap queue\n");
 		return r;
 	}
-
 	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


* [PATCH 10/27] drm/amdgpu/gfx9.4.3: re-emit unprocessed state on kcq reset
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
index 05abe86ecd9ac..5323830691937 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
@@ -3567,6 +3567,8 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
 	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
 		return -EINVAL;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	spin_lock_irqsave(&kiq->ring_lock, flags);
 
 	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
@@ -3613,9 +3615,8 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
 	}
 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
 	amdgpu_ring_commit(kiq_ring);
-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
 	r = amdgpu_ring_test_ring(kiq_ring);
+	spin_unlock_irqrestore(&kiq->ring_lock, flags);
 	if (r) {
 		dev_err(adev->dev, "fail to remap queue\n");
 		return r;
@@ -3624,8 +3625,15 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
 	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


* [PATCH 11/27] drm/amdgpu/gfx10: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.  Also drop
the soft recovery callback; with per-queue reset and re-emission in
place it is no longer used on gfx10.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 49 ++++++++++++--------------
 1 file changed, 23 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index a9d26d91c8468..6402736a87c64 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -9046,21 +9046,6 @@ static void gfx_v10_0_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
 							   ref, mask);
 }
 
-static void gfx_v10_0_ring_soft_recovery(struct amdgpu_ring *ring,
-					 unsigned int vmid)
-{
-	struct amdgpu_device *adev = ring->adev;
-	uint32_t value = 0;
-
-	value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
-	value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
-	value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
-	value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
-	amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
-	WREG32_SOC15(GC, 0, mmSQ_CMD, value);
-	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
-}
-
 static void
 gfx_v10_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
 				      uint32_t me, uint32_t pipe,
@@ -9540,6 +9525,8 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
 	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
 		return -EINVAL;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	spin_lock_irqsave(&kiq->ring_lock, flags);
 
 	if (amdgpu_ring_alloc(kiq_ring, 5 + 7 + 7 + kiq->pmf->map_queues_size)) {
@@ -9564,10 +9551,8 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
 				     SOC15_REG_OFFSET(GC, 0, mmCP_VMID_RESET), 0, 0xffffffff);
 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
 	amdgpu_ring_commit(kiq_ring);
-
-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
 	r = amdgpu_ring_test_ring(kiq_ring);
+	spin_unlock_irqrestore(&kiq->ring_lock, flags);
 	if (r)
 		return r;
 
@@ -9580,8 +9565,15 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
 	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
@@ -9601,6 +9593,8 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
 	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
 		return -EINVAL;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	spin_lock_irqsave(&kiq->ring_lock, flags);
 
 	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
@@ -9611,9 +9605,8 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
 	kiq->pmf->kiq_unmap_queues(kiq_ring, ring, RESET_QUEUES,
 				   0, 0);
 	amdgpu_ring_commit(kiq_ring);
-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
 	r = amdgpu_ring_test_ring(kiq_ring);
+	spin_unlock_irqrestore(&kiq->ring_lock, flags);
 	if (r)
 		return r;
 
@@ -9649,17 +9642,23 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
 	}
 	kiq->pmf->kiq_map_queues(kiq_ring, ring);
 	amdgpu_ring_commit(kiq_ring);
-	spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
 	r = amdgpu_ring_test_ring(kiq_ring);
+	spin_unlock_irqrestore(&kiq->ring_lock, flags);
 	if (r)
 		return r;
 
 	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
@@ -9895,7 +9894,6 @@ static const struct amdgpu_ring_funcs gfx_v10_0_ring_funcs_gfx = {
 	.emit_wreg = gfx_v10_0_ring_emit_wreg,
 	.emit_reg_wait = gfx_v10_0_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = gfx_v10_0_ring_emit_reg_write_reg_wait,
-	.soft_recovery = gfx_v10_0_ring_soft_recovery,
 	.emit_mem_sync = gfx_v10_0_emit_mem_sync,
 	.reset = gfx_v10_0_reset_kgq,
 	.emit_cleaner_shader = gfx_v10_0_ring_emit_cleaner_shader,
@@ -9936,7 +9934,6 @@ static const struct amdgpu_ring_funcs gfx_v10_0_ring_funcs_compute = {
 	.emit_wreg = gfx_v10_0_ring_emit_wreg,
 	.emit_reg_wait = gfx_v10_0_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = gfx_v10_0_ring_emit_reg_write_reg_wait,
-	.soft_recovery = gfx_v10_0_ring_soft_recovery,
 	.emit_mem_sync = gfx_v10_0_emit_mem_sync,
 	.reset = gfx_v10_0_reset_kcq,
 	.emit_cleaner_shader = gfx_v10_0_ring_emit_cleaner_shader,
-- 
2.49.0


* [PATCH 12/27] drm/amdgpu/gfx11: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.  Also drop
the soft recovery callback; with per-queue reset and re-emission in
place it is no longer used on gfx11.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 39 +++++++++++++-------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 3dd2e04830dc6..8deea355d4b5b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -6283,21 +6283,6 @@ static void gfx_v11_0_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
 			       ref, mask, 0x20);
 }
 
-static void gfx_v11_0_ring_soft_recovery(struct amdgpu_ring *ring,
-					 unsigned vmid)
-{
-	struct amdgpu_device *adev = ring->adev;
-	uint32_t value = 0;
-
-	value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
-	value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
-	value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
-	value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
-	amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
-	WREG32_SOC15(GC, 0, regSQ_CMD, value);
-	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
-}
-
 static void
 gfx_v11_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
 				      uint32_t me, uint32_t pipe,
@@ -6821,6 +6806,8 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
 	if (amdgpu_sriov_vf(adev))
 		return -EINVAL;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false);
 	if (r) {
 
@@ -6845,8 +6832,15 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
 	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
@@ -6990,6 +6984,8 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
 	if (amdgpu_sriov_vf(adev))
 		return -EINVAL;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true);
 	if (r) {
 		dev_warn(adev->dev, "fail(%d) to reset kcq and try pipe reset\n", r);
@@ -7012,8 +7008,15 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
 	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
@@ -7250,7 +7253,6 @@ static const struct amdgpu_ring_funcs gfx_v11_0_ring_funcs_gfx = {
 	.emit_wreg = gfx_v11_0_ring_emit_wreg,
 	.emit_reg_wait = gfx_v11_0_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = gfx_v11_0_ring_emit_reg_write_reg_wait,
-	.soft_recovery = gfx_v11_0_ring_soft_recovery,
 	.emit_mem_sync = gfx_v11_0_emit_mem_sync,
 	.reset = gfx_v11_0_reset_kgq,
 	.emit_cleaner_shader = gfx_v11_0_ring_emit_cleaner_shader,
@@ -7292,7 +7294,6 @@ static const struct amdgpu_ring_funcs gfx_v11_0_ring_funcs_compute = {
 	.emit_wreg = gfx_v11_0_ring_emit_wreg,
 	.emit_reg_wait = gfx_v11_0_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = gfx_v11_0_ring_emit_reg_write_reg_wait,
-	.soft_recovery = gfx_v11_0_ring_soft_recovery,
 	.emit_mem_sync = gfx_v11_0_emit_mem_sync,
 	.reset = gfx_v11_0_reset_kcq,
 	.emit_cleaner_shader = gfx_v11_0_ring_emit_cleaner_shader,
-- 
2.49.0


* [PATCH 13/27] drm/amdgpu/gfx12: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.  Also drop
the soft recovery callback; with per-queue reset and re-emission in
place it is no longer used on gfx12.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 39 +++++++++++++-------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index d2ee4543ce222..693a3f0aa58b1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@ -4690,21 +4690,6 @@ static void gfx_v12_0_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
 			       ref, mask, 0x20);
 }
 
-static void gfx_v12_0_ring_soft_recovery(struct amdgpu_ring *ring,
-					 unsigned vmid)
-{
-	struct amdgpu_device *adev = ring->adev;
-	uint32_t value = 0;
-
-	value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
-	value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
-	value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
-	value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
-	amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
-	WREG32_SOC15(GC, 0, regSQ_CMD, value);
-	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
-}
-
 static void
 gfx_v12_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
 				      uint32_t me, uint32_t pipe,
@@ -5317,6 +5302,8 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
 	if (amdgpu_sriov_vf(adev))
 		return -EINVAL;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false);
 	if (r) {
 		dev_warn(adev->dev, "reset via MES failed and try pipe reset %d\n", r);
@@ -5340,8 +5327,15 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
 	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
@@ -5438,6 +5432,8 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
 	if (amdgpu_sriov_vf(adev))
 		return -EINVAL;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true);
 	if (r) {
 		dev_warn(adev->dev, "fail(%d) to reset kcq  and try pipe reset\n", r);
@@ -5460,8 +5456,15 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
 	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
@@ -5540,7 +5543,6 @@ static const struct amdgpu_ring_funcs gfx_v12_0_ring_funcs_gfx = {
 	.emit_wreg = gfx_v12_0_ring_emit_wreg,
 	.emit_reg_wait = gfx_v12_0_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = gfx_v12_0_ring_emit_reg_write_reg_wait,
-	.soft_recovery = gfx_v12_0_ring_soft_recovery,
 	.emit_mem_sync = gfx_v12_0_emit_mem_sync,
 	.reset = gfx_v12_0_reset_kgq,
 	.emit_cleaner_shader = gfx_v12_0_ring_emit_cleaner_shader,
@@ -5579,7 +5581,6 @@ static const struct amdgpu_ring_funcs gfx_v12_0_ring_funcs_compute = {
 	.emit_wreg = gfx_v12_0_ring_emit_wreg,
 	.emit_reg_wait = gfx_v12_0_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = gfx_v12_0_ring_emit_reg_write_reg_wait,
-	.soft_recovery = gfx_v12_0_ring_soft_recovery,
 	.emit_mem_sync = gfx_v12_0_emit_mem_sync,
 	.reset = gfx_v12_0_reset_kcq,
 	.emit_cleaner_shader = gfx_v12_0_ring_emit_cleaner_shader,
-- 
2.49.0


* [PATCH 14/27] drm/amdgpu/sdma6: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
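Unlike the GFX paths, the SDMA callback first checks the queue's
CONTEXT_STATUS SELECTED bit: only when this queue was actually
executing is the scheduled fence treated as guilty and
force-completed; otherwise everything, including that job, is backed
up and re-emitted.
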
 drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 29 ++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index 595e90a5274ea..00c7f440a6ba0 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1537,11 +1537,23 @@ static int sdma_v6_0_ring_preempt_ib(struct amdgpu_ring *ring)
 	return r;
 }
 
+static bool sdma_v6_0_is_queue_selected(struct amdgpu_device *adev,
+					u32 instance_id)
+{
+	/* we always use queue0 for KGD */
+	u32 context_status = RREG32(sdma_v6_0_get_reg_offset(adev, instance_id,
+							     regSDMA0_QUEUE0_CONTEXT_STATUS));
+
+	/* Check if the SELECTED bit is set */
+	return (context_status & SDMA0_QUEUE0_CONTEXT_STATUS__SELECTED_MASK) != 0;
+}
+
 static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
 				 unsigned int vmid,
 				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
+	bool is_guilty;
 	int i, r;
 
 	if (amdgpu_sriov_vf(adev))
@@ -1557,6 +1569,10 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
 		return -EINVAL;
 	}
 
+	is_guilty = sdma_v6_0_is_queue_selected(adev, i);
+
+	amdgpu_ring_backup_unprocessed_commands(ring, is_guilty ? guilty_fence : NULL);
+
 	r = amdgpu_mes_reset_legacy_queue(adev, ring, vmid, true);
 	if (r)
 		return r;
@@ -1564,8 +1580,17 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
 	r = sdma_v6_0_gfx_resume_instance(adev, i, true);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
-	atomic_inc(&ring->adev->gpu_reset_counter);
+
+	if (is_guilty) {
+		/* signal the fence of the bad job */
+		amdgpu_fence_driver_guilty_force_completion(guilty_fence);
+		atomic_inc(&ring->adev->gpu_reset_counter);
+	}
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


* [PATCH 15/27] drm/amdgpu/sdma7: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 30 ++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index 3e036c37b1f5a..9d89bd1ed8075 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -802,11 +802,23 @@ static bool sdma_v7_0_check_soft_reset(struct amdgpu_ip_block *ip_block)
 	return false;
 }
 
+static bool sdma_v7_0_is_queue_selected(struct amdgpu_device *adev,
+					uint32_t instance_id)
+{
+	/* we always use queue0 for KGD */
+	u32 context_status = RREG32(sdma_v7_0_get_reg_offset(adev, instance_id,
+							     regSDMA0_QUEUE0_CONTEXT_STATUS));
+
+	/* Check if the SELECTED bit is set */
+	return (context_status & SDMA0_QUEUE0_CONTEXT_STATUS__SELECTED_MASK) != 0;
+}
+
 static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
 				 unsigned int vmid,
 				 struct amdgpu_fence *guilty_fence)
 {
 	struct amdgpu_device *adev = ring->adev;
+	bool is_guilty;
 	int i, r;
 
 	if (amdgpu_sriov_vf(adev))
@@ -822,6 +834,11 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
 		return -EINVAL;
 	}
 
+	is_guilty = sdma_v7_0_is_queue_selected(adev, i);
+
+	amdgpu_ring_backup_unprocessed_commands(ring,
+						is_guilty ? guilty_fence : NULL);
+
 	r = amdgpu_mes_reset_legacy_queue(adev, ring, vmid, true);
 	if (r)
 		return r;
@@ -829,8 +846,17 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
 	r = sdma_v7_0_gfx_resume_instance(adev, i, true);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
-	atomic_inc(&ring->adev->gpu_reset_counter);
+
+	if (is_guilty) {
+		/* signal the fence of the bad job */
+		amdgpu_fence_driver_guilty_force_completion(guilty_fence);
+		atomic_inc(&ring->adev->gpu_reset_counter);
+	}
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


* [PATCH 16/27] drm/amdgpu/jpeg2: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
index dbc28042c7d53..f6060256f28c8 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
@@ -770,13 +770,21 @@ static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
 {
 	int r;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
 	jpeg_v2_0_stop(ring->adev);
 	jpeg_v2_0_start(ring->adev);
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread
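
The JPEG and VCN conversions in the rest of the series all follow the
same five-step contract; only the stop/start step differs per IP.  As a
sketch (ip_stop/ip_start are placeholders for the per-IP calls, the
rest are the helpers used verbatim above):

	static int ip_ring_reset(struct amdgpu_ring *ring, unsigned int vmid,
				 struct amdgpu_fence *guilty_fence)
	{
		int r;

		/* 1. save submissions queued after the bad job */
		amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
		/* 2. IP-specific reset (the only step that varies) */
		ip_stop(ring->adev);
		ip_start(ring->adev);
		/* 3. verify the ring came back */
		r = amdgpu_ring_test_ring(ring);
		if (r)
			return r;	/* reset failed, let the caller escalate */
		/* 4. complete only the guilty fence, count the reset */
		amdgpu_fence_driver_guilty_force_completion(guilty_fence);
		atomic_inc(&ring->adev->gpu_reset_counter);
		/* 5. replay the innocent submissions */
		r = amdgpu_ring_reemit_unprocessed_commands(ring);
		if (r)
			/* if we fail to reemit, force complete all fences */
			amdgpu_fence_driver_force_completion(ring);
		return 0;
	}

Note that a failed re-emit still returns 0: by that point every
outstanding fence has been force completed, so from the scheduler's
point of view the queue has recovered; returning an error here would
instead escalate the recovery.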

* [PATCH 17/27] drm/amdgpu/jpeg2.5: re-emit unprocessed state on ring reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (15 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 16/27] drm/amdgpu/jpeg2: " Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 18/27] drm/amdgpu/jpeg3: " Alex Deucher
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
index f8af473e2a7a4..4d566cc1c90bd 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
@@ -649,13 +649,21 @@ static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
 {
 	int r;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
 	jpeg_v2_5_stop_inst(ring->adev, ring->me);
 	jpeg_v2_5_start_inst(ring->adev, ring->me);
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 18/27] drm/amdgpu/jpeg3: re-emit unprocessed state on ring reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (16 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 17/27] drm/amdgpu/jpeg2.5: " Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 19/27] drm/amdgpu/jpeg4: " Alex Deucher
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
index 83559a32ed3d2..46becf4e63482 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
@@ -561,13 +561,21 @@ static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
 {
 	int r;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
 	jpeg_v3_0_stop(ring->adev);
 	jpeg_v3_0_start(ring->adev);
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 19/27] drm/amdgpu/jpeg4: re-emit unprocessed state on ring reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (17 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 18/27] drm/amdgpu/jpeg3: " Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 20/27] drm/amdgpu/jpeg4.0.3: " Alex Deucher
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
index b0f80f2a549c6..f63ac61f06e00 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
@@ -729,13 +729,21 @@ static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
 	if (amdgpu_sriov_vf(ring->adev))
 		return -EINVAL;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
 	jpeg_v4_0_stop(ring->adev);
 	jpeg_v4_0_start(ring->adev);
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 20/27] drm/amdgpu/jpeg4.0.3: re-emit unprocessed state on ring reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (18 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 19/27] drm/amdgpu/jpeg4: " Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 21/27] drm/amdgpu/jpeg4.0.5: add queue reset Alex Deucher
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
index 4fd9386d2efd6..913162d9930d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
@@ -1152,13 +1152,21 @@ static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
 	if (amdgpu_sriov_vf(ring->adev))
 		return -EOPNOTSUPP;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
 	jpeg_v4_0_3_core_stall_reset(ring);
 	jpeg_v4_0_3_start_jrbc(ring);
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 21/27] drm/amdgpu/jpeg4.0.5: add queue reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (19 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 20/27] drm/amdgpu/jpeg4.0.3: " Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 22/27] drm/amdgpu/jpeg5: " Alex Deucher
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Add queue reset support for jpeg 4.0.5.
Use the new helpers to re-emit the unprocessed state
after resetting the queue.

Untested.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c | 25 ++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
index 974030a5c03c9..c6e89aa9217df 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
@@ -767,6 +767,30 @@ static int jpeg_v4_0_5_process_interrupt(struct amdgpu_device *adev,
 	return 0;
 }
 
+static int jpeg_v4_0_5_ring_reset(struct amdgpu_ring *ring,
+				  unsigned int vmid,
+				  struct amdgpu_fence *guilty_fence)
+{
+	int r;
+
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+	jpeg_v4_0_5_stop(ring->adev);
+	jpeg_v4_0_5_start(ring->adev);
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
+	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
+	return 0;
+}
+
 static const struct amd_ip_funcs jpeg_v4_0_5_ip_funcs = {
 	.name = "jpeg_v4_0_5",
 	.early_init = jpeg_v4_0_5_early_init,
@@ -812,6 +836,7 @@ static const struct amdgpu_ring_funcs jpeg_v4_0_5_dec_ring_vm_funcs = {
 	.emit_wreg = jpeg_v2_0_dec_ring_emit_wreg,
 	.emit_reg_wait = jpeg_v2_0_dec_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+	.reset = jpeg_v4_0_5_ring_reset,
 };
 
 static void jpeg_v4_0_5_set_dec_ring_funcs(struct amdgpu_device *adev)
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread
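
Wiring the callback into jpeg_v4_0_5_dec_ring_vm_funcs is what makes
the path reachable: the timeout handler only attempts a per-queue reset
when a ring provides .reset.  Condensed from the amdgpu_job_timedout()
hunk quoted in the patch 07 review further down this thread (error
handling trimmed):

	if (amdgpu_gpu_recovery && ring->funcs->reset) {
		/* park the scheduler so nothing new is pushed
		 * while the queue is being reset
		 */
		drm_sched_wqueue_stop(&ring->sched);
		r = amdgpu_ring_reset(ring, job->vmid, NULL);
		if (!r)
			drm_sched_wqueue_start(&ring->sched);
	}

Rings without the callback (or with ring reset disabled via the debug
mask) fall through to the full adapter reset path instead.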

* [PATCH 22/27] drm/amdgpu/jpeg5: add queue reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (20 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 21/27] drm/amdgpu/jpeg4.0.5: add queue reset Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 23/27] drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset Alex Deucher
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Add queue reset support for jpeg 5.0.0.
Use the new helpers to re-emit the unprocessed state
after resetting the queue.

Untested.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c | 28 ++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
index 31d213ccbe0a8..df47693a30d77 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
@@ -644,6 +644,33 @@ static int jpeg_v5_0_0_process_interrupt(struct amdgpu_device *adev,
 	return 0;
 }
 
+static int jpeg_v5_0_0_ring_reset(struct amdgpu_ring *ring,
+				  unsigned int vmid,
+				  struct amdgpu_fence *guilty_fence)
+{
+	int r;
+
+	if (amdgpu_sriov_vf(ring->adev))
+		return -EINVAL;
+
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+	jpeg_v5_0_0_stop(ring->adev);
+	jpeg_v5_0_0_start(ring->adev);
+	r = amdgpu_ring_test_ring(ring);
+	if (r)
+		return r;
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
+	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
+	return 0;
+}
+
 static const struct amd_ip_funcs jpeg_v5_0_0_ip_funcs = {
 	.name = "jpeg_v5_0_0",
 	.early_init = jpeg_v5_0_0_early_init,
@@ -689,6 +716,7 @@ static const struct amdgpu_ring_funcs jpeg_v5_0_0_dec_ring_vm_funcs = {
 	.emit_wreg = jpeg_v4_0_3_dec_ring_emit_wreg,
 	.emit_reg_wait = jpeg_v4_0_3_dec_ring_emit_reg_wait,
 	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+	.reset = jpeg_v5_0_0_ring_reset,
 };
 
 static void jpeg_v5_0_0_set_dec_ring_funcs(struct amdgpu_device *adev)
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 23/27] drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (21 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 22/27] drm/amdgpu/jpeg5: " Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 24/27] drm/amdgpu/vcn4: " Alex Deucher
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
index beca4d1e941b3..abdf0f9a5cd20 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
@@ -843,13 +843,21 @@ static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
 	if (amdgpu_sriov_vf(ring->adev))
 		return -EOPNOTSUPP;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
 	jpeg_v5_0_1_core_stall_reset(ring);
 	jpeg_v5_0_1_init_jrbc(ring);
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 24/27] drm/amdgpu/vcn4: re-emit unprocessed state on ring reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (22 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 23/27] drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 25/27] drm/amdgpu/vcn4.0.3: " Alex Deucher
                   ` (2 subsequent siblings)
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
index d5be19361cc89..ddf7c2fd94952 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
@@ -1978,14 +1978,21 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
 		return -EOPNOTSUPP;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
 	vcn_v4_0_stop(vinst);
 	vcn_v4_0_start(vinst);
-
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread
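
The VCN callbacks add a guard the JPEG ones don't need: per-queue reset
is an advertised capability, so the callback bails out early when it is
absent and recovery falls through to a full adapter reset.  From the
context lines above:

	/* supported_reset is presumably populated at init based on
	 * firmware/IP capability; without the per-queue bit the only
	 * remaining recovery option is a full reset
	 */
	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
		return -EOPNOTSUPP;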

* [PATCH 25/27] drm/amdgpu/vcn4.0.3: re-emit unprocessed state on ring reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (23 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 24/27] drm/amdgpu/vcn4: " Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 26/27] drm/amdgpu/vcn4.0.5: " Alex Deucher
  2025-06-13 21:47 ` [PATCH 27/27] drm/amdgpu/vcn5: " Alex Deucher
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
index c7c2b7f5ba56d..6cdd49e9ef07a 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
@@ -1609,6 +1609,8 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
 		return -EOPNOTSUPP;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	vcn_inst = GET_INST(VCN, ring->me);
 	r = amdgpu_dpm_reset_vcn(adev, 1 << vcn_inst);
 
@@ -1622,11 +1624,18 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
 		adev->vcn.caps |= AMDGPU_VCN_CAPS(RRMT_ENABLED);
 	vcn_v4_0_3_hw_init_inst(vinst);
 	vcn_v4_0_3_start_dpg_mode(vinst, adev->vcn.inst[ring->me].indirect_sram);
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread
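
vcn 4.0.3 is the one VCN variant here that doesn't stop and restart the
instance locally: the reset is requested from the SMU with an instance
bit mask, and DPG mode then has to be brought back up by hand before
the usual ring test and re-emit.  The distinctive steps:

	/* GET_INST maps the logical instance (ring->me) to the
	 * physical VCN instance; each set bit in the mask asks the
	 * SMU to reset that instance
	 */
	vcn_inst = GET_INST(VCN, ring->me);
	r = amdgpu_dpm_reset_vcn(adev, 1 << vcn_inst);

	/* re-init the instance and restart DPG mode */
	vcn_v4_0_3_hw_init_inst(vinst);
	vcn_v4_0_3_start_dpg_mode(vinst, adev->vcn.inst[ring->me].indirect_sram);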

* [PATCH 26/27] drm/amdgpu/vcn4.0.5: re-emit unprocessed state on ring reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (24 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 25/27] drm/amdgpu/vcn4.0.3: " Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  2025-06-13 21:47 ` [PATCH 27/27] drm/amdgpu/vcn5: " Alex Deucher
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
index af75617cf6df5..5cea5f0df6105 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
@@ -1476,14 +1476,22 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
 		return -EOPNOTSUPP;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	vcn_v4_0_5_stop(vinst);
 	vcn_v4_0_5_start(vinst);
-
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [PATCH 27/27] drm/amdgpu/vcn5: re-emit unprocessed state on ring reset
  2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
                   ` (25 preceding siblings ...)
  2025-06-13 21:47 ` [PATCH 26/27] drm/amdgpu/vcn4.0.5: " Alex Deucher
@ 2025-06-13 21:47 ` Alex Deucher
  26 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-13 21:47 UTC (permalink / raw)
  To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher

Re-emit the unprocessed state after resetting the queue.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
index 64f2b64da6258..01bf3bbe8cd93 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
@@ -1203,14 +1203,22 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
 	if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
 		return -EOPNOTSUPP;
 
+	amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+
 	vcn_v5_0_0_stop(vinst);
 	vcn_v5_0_0_start(vinst);
-
-	r = amdgpu_ring_test_helper(ring);
+	r = amdgpu_ring_test_ring(ring);
 	if (r)
 		return r;
-	amdgpu_fence_driver_force_completion(ring);
+
+	/* signal the fence of the bad job */
+	amdgpu_fence_driver_guilty_force_completion(guilty_fence);
 	atomic_inc(&ring->adev->gpu_reset_counter);
+	r = amdgpu_ring_reemit_unprocessed_commands(ring);
+	if (r)
+		/* if we fail to reemit, force complete all fences */
+		amdgpu_fence_driver_force_completion(ring);
+
 	return 0;
 }
 
-- 
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* RE: [PATCH 02/27] drm/amdgpu/sdma: handle paging queues in amdgpu_sdma_reset_engine()
  2025-06-13 21:47 ` [PATCH 02/27] drm/amdgpu/sdma: handle paging queues in amdgpu_sdma_reset_engine() Alex Deucher
@ 2025-06-14 12:31   ` Zhang, Jesse(Jie)
  0 siblings, 0 replies; 35+ messages in thread
From: Zhang, Jesse(Jie) @ 2025-06-14 12:31 UTC (permalink / raw)
  To: Deucher, Alexander, amd-gfx@lists.freedesktop.org,
	Koenig, Christian, Sundararaju, Sathishkumar
  Cc: Deucher, Alexander

This patch is Reviewed-by: Jesse Zhang <Jesse.Zhang@amd.com>

-----Original Message-----
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Alex Deucher
Sent: Saturday, June 14, 2025 5:47 AM
To: amd-gfx@lists.freedesktop.org; Koenig, Christian <Christian.Koenig@amd.com>; Sundararaju, Sathishkumar <Sathishkumar.Sundararaju@amd.com>
Cc: Deucher, Alexander <Alexander.Deucher@amd.com>
Subject: [PATCH 02/27] drm/amdgpu/sdma: handle paging queues in amdgpu_sdma_reset_engine()

Need to properly start and stop paging queues if they are present.

This is not an issue today since we don't support a paging queue on any chips that support queue reset.

Fixes: ffe43cc82a04 ("drm/amdgpu: switch amdgpu_sdma_reset_engine to use the new sdma function pointers")
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
index a1e54bcef495c..cf5733d5d26dd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
@@ -571,8 +571,11 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
                page_sched_stopped = true;
        }

-       if (sdma_instance->funcs->stop_kernel_queue)
+       if (sdma_instance->funcs->stop_kernel_queue) {
                sdma_instance->funcs->stop_kernel_queue(gfx_ring);
+               if (adev->sdma.has_page_queue)
+                       sdma_instance->funcs->stop_kernel_queue(page_ring);
+       }

        /* Perform the SDMA reset for the specified instance */
        ret = amdgpu_sdma_soft_reset(adev, instance_id);
@@ -581,8 +584,11 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
                goto exit;
        }

-       if (sdma_instance->funcs->start_kernel_queue)
+       if (sdma_instance->funcs->start_kernel_queue) {
                sdma_instance->funcs->start_kernel_queue(gfx_ring);
+               if (adev->sdma.has_page_queue)
+                       sdma_instance->funcs->start_kernel_queue(page_ring);
+       }

 exit:
        /* Restart the scheduler's work queue for the GFX and page rings
--
2.49.0


^ permalink raw reply related	[flat|nested] 35+ messages in thread
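
The shape of the fix is deliberately symmetric: whatever kernel queues
are stopped before the soft reset are started again after it, with the
page queue going through the same funcs pointers as the gfx queue.
Condensed (error handling elided):

	if (sdma_instance->funcs->stop_kernel_queue) {
		sdma_instance->funcs->stop_kernel_queue(gfx_ring);
		if (adev->sdma.has_page_queue)
			sdma_instance->funcs->stop_kernel_queue(page_ring);
	}

	ret = amdgpu_sdma_soft_reset(adev, instance_id);

	if (sdma_instance->funcs->start_kernel_queue) {
		sdma_instance->funcs->start_kernel_queue(gfx_ring);
		if (adev->sdma.has_page_queue)
			sdma_instance->funcs->start_kernel_queue(page_ring);
	}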

* Re: [PATCH 07/27] drm/amdgpu: move guilty handling into ring resets
  2025-06-13 21:47 ` [PATCH 07/27] drm/amdgpu: move guilty handling " Alex Deucher
@ 2025-06-16  3:46   ` Lazar, Lijo
  2025-06-16 16:03     ` Alex Deucher
  0 siblings, 1 reply; 35+ messages in thread
From: Lazar, Lijo @ 2025-06-16  3:46 UTC (permalink / raw)
  To: Alex Deucher, amd-gfx, christian.koenig, sasundar



On 6/14/2025 3:17 AM, Alex Deucher wrote:
> Move guilty logic into the ring reset callbacks.  This
> allows each ring reset callback to better handle fence
> errors and force completions in line with the reset
> behavior for each IP.  It also allows us to remove
> the ring guilty callback since that logic now lives
> in the reset callback.
> 
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  | 22 +---------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 -
>  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   |  2 +
>  drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c   |  2 +
>  drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c   |  2 +
>  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c    |  1 +
>  drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c  |  1 +
>  drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c   |  1 +
>  drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c   |  1 +
>  drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c   |  1 +
>  drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c   |  1 +
>  drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c |  1 +
>  drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c |  1 +
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 52 ++++++++++++------------
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c   | 23 ++++++++++-
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c   | 25 ++++++++++--
>  drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c   |  1 +
>  drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c   |  1 +
>  drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c    |  1 +
>  drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c  |  1 +
>  drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c  |  1 +
>  drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c  |  1 +
>  22 files changed, 89 insertions(+), 54 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 177f04491a11b..680cdd8fc3ab2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -91,7 +91,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
>  	struct amdgpu_job *job = to_amdgpu_job(s_job);
>  	struct amdgpu_task_info *ti;
>  	struct amdgpu_device *adev = ring->adev;
> -	bool set_error = false;
>  	int idx, r;
>  
>  	if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
> @@ -134,8 +133,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
>  	if (unlikely(adev->debug_disable_gpu_ring_reset)) {
>  		dev_err(adev->dev, "Ring reset disabled by debug mask\n");
>  	} else if (amdgpu_gpu_recovery && ring->funcs->reset) {
> -		bool is_guilty;
> -
>  		dev_err(adev->dev, "Starting %s ring reset\n",
>  			s_job->sched->name);
>  
> @@ -145,24 +142,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
>  		 */
>  		drm_sched_wqueue_stop(&ring->sched);
>  
> -		/* for engine resets, we need to reset the engine,
> -		 * but individual queues may be unaffected.
> -		 * check here to make sure the accounting is correct.
> -		 */
> -		if (ring->funcs->is_guilty)
> -			is_guilty = ring->funcs->is_guilty(ring);
> -		else
> -			is_guilty = true;
> -
> -		if (is_guilty) {
> -			dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
> -			set_error = true;
> -		}
> -
>  		r = amdgpu_ring_reset(ring, job->vmid, NULL);
>  		if (!r) {
> -			if (is_guilty)
> -				atomic_inc(&ring->adev->gpu_reset_counter);
>  			drm_sched_wqueue_start(&ring->sched);
>  			dev_err(adev->dev, "Ring %s reset succeeded\n",
>  				ring->sched.name);
> @@ -173,8 +154,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
>  		dev_err(adev->dev, "Ring %s reset failed\n", ring->sched.name);
>  	}
>  
> -	if (!set_error)
> -		dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
> +	dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
>  
>  	if (amdgpu_device_should_recover_gpu(ring->adev)) {
>  		struct amdgpu_reset_context reset_context;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index ff3a4b81e51ab..c1d14183abfe6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -271,7 +271,6 @@ struct amdgpu_ring_funcs {
>  	int (*reset)(struct amdgpu_ring *ring, unsigned int vmid,
>  		     struct amdgpu_fence *guilty_fence);
>  	void (*emit_cleaner_shader)(struct amdgpu_ring *ring);
> -	bool (*is_guilty)(struct amdgpu_ring *ring);
>  };
>  
>  /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> index b4f4ad966db82..a9d26d91c8468 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> @@ -9581,6 +9581,7 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> @@ -9658,6 +9659,7 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> index 5707ce7dd5c82..3dd2e04830dc6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> @@ -6846,6 +6846,7 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> @@ -7012,6 +7013,7 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
> index 259a83c3acb5d..d2ee4543ce222 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
> @@ -5341,6 +5341,7 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> @@ -5460,6 +5461,7 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> index e0dec946b7cdc..1b767094dfa24 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> @@ -7228,6 +7228,7 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> index e5fcc63cd99df..05abe86ecd9ac 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> @@ -3625,6 +3625,7 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> index 0b1fa35a441ae..dbc28042c7d53 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> @@ -776,6 +776,7 @@ static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
> index 7a9e91f6495de..f8af473e2a7a4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
> @@ -655,6 +655,7 @@ static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
> index 81ee1ba4c0a3c..83559a32ed3d2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
> @@ -567,6 +567,7 @@ static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
> index 06f75091e1304..b0f80f2a549c6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
> @@ -735,6 +735,7 @@ static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
> index 10a7b990b0adf..4fd9386d2efd6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
> @@ -1158,6 +1158,7 @@ static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
> index 88dea7a47a1e5..beca4d1e941b3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
> @@ -849,6 +849,7 @@ static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> index c5e0d2e730740..0199d5bb5821d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> @@ -1651,37 +1651,21 @@ static bool sdma_v4_4_2_is_queue_selected(struct amdgpu_device *adev, uint32_t i
>  	return (context_status & SDMA_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
>  }
>  
> -static bool sdma_v4_4_2_ring_is_guilty(struct amdgpu_ring *ring)
> -{
> -	struct amdgpu_device *adev = ring->adev;
> -	uint32_t instance_id = ring->me;
> -
> -	return sdma_v4_4_2_is_queue_selected(adev, instance_id, false);
> -}
> -
> -static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
> -{
> -	struct amdgpu_device *adev = ring->adev;
> -	uint32_t instance_id = ring->me;
> -
> -	if (!adev->sdma.has_page_queue)
> -		return false;
> -
> -	return sdma_v4_4_2_is_queue_selected(adev, instance_id, true);
> -}
> -
>  static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
>  				   unsigned int vmid,
>  				   struct amdgpu_fence *guilty_fence)
>  {
> -	bool is_guilty = ring->funcs->is_guilty(ring);
>  	struct amdgpu_device *adev = ring->adev;
>  	u32 id = ring->me;
> +	bool is_guilty;
>  	int r;
>  
>  	if (!(adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
>  		return -EOPNOTSUPP;
>  
> +	is_guilty = sdma_v4_4_2_is_queue_selected(adev, id,
> +						  &adev->sdma.instance[id].page == ring);
> +
>  	amdgpu_amdkfd_suspend(adev, false);
>  	r = amdgpu_sdma_reset_engine(adev, id);
>  	amdgpu_amdkfd_resume(adev, false);
> @@ -1689,7 +1673,7 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
>  		return r;
>  
>  	if (is_guilty)
> -		amdgpu_fence_driver_force_completion(ring);
> +		atomic_inc(&ring->adev->gpu_reset_counter);

This may not be related to this patch as such. The SDMA reset happens
regardless of whether the page or sdma queue is the guilty one. Why is
this increment done conditionally in that case?

Thanks,
Lijo

>  
>  	return 0;
>  }
> @@ -1735,8 +1719,8 @@ static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
>  static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
>  {
>  	struct amdgpu_device *adev = ring->adev;
> -	u32 inst_mask;
> -	int i;
> +	u32 inst_mask, tmp_mask;
> +	int i, r;
>  
>  	inst_mask = 1 << ring->me;
>  	udelay(50);
> @@ -1753,7 +1737,25 @@ static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
>  		return -ETIMEDOUT;
>  	}
>  
> -	return sdma_v4_4_2_inst_start(adev, inst_mask, true);
> +	r = sdma_v4_4_2_inst_start(adev, inst_mask, true);
> +	if (r) {
> +		return r;
> +	}
> +
> +	tmp_mask = inst_mask;
> +	for_each_inst(i, tmp_mask) {
> +		ring = &adev->sdma.instance[i].ring;
> +
> +		amdgpu_fence_driver_force_completion(ring);
> +
> +		if (adev->sdma.has_page_queue) {
> +			struct amdgpu_ring *page = &adev->sdma.instance[i].page;
> +
> +			amdgpu_fence_driver_force_completion(page);
> +		}
> +	}
> +
> +	return r;
>  }
>  
>  static int sdma_v4_4_2_soft_reset_engine(struct amdgpu_device *adev,
> @@ -2159,7 +2161,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_ring_funcs = {
>  	.emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
>  	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
>  	.reset = sdma_v4_4_2_reset_queue,
> -	.is_guilty = sdma_v4_4_2_ring_is_guilty,
>  };
>  
>  static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
> @@ -2192,7 +2193,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
>  	.emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
>  	.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
>  	.reset = sdma_v4_4_2_reset_queue,
> -	.is_guilty = sdma_v4_4_2_page_ring_is_guilty,
>  };
>  
>  static void sdma_v4_4_2_set_ring_funcs(struct amdgpu_device *adev)
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> index 09419db2d49a6..4a36e5199f248 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> @@ -1538,18 +1538,34 @@ static int sdma_v5_0_soft_reset(struct amdgpu_ip_block *ip_block)
>  	return 0;
>  }
>  
> +static bool sdma_v5_0_is_queue_selected(struct amdgpu_device *adev,
> +					uint32_t instance_id)
> +{
> +	u32 context_status = RREG32(sdma_v5_0_get_reg_offset(adev, instance_id,
> +							     mmSDMA0_GFX_CONTEXT_STATUS));
> +
> +	/* Check if the SELECTED bit is set */
> +	return (context_status & SDMA0_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
> +}
> +
>  static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring,
>  				 unsigned int vmid,
>  				 struct amdgpu_fence *guilty_fence)
>  {
>  	struct amdgpu_device *adev = ring->adev;
>  	u32 inst_id = ring->me;
> +	bool is_guilty = sdma_v5_0_is_queue_selected(adev, inst_id);
>  	int r;
>  
> +	amdgpu_amdkfd_suspend(adev, false);
>  	r = amdgpu_sdma_reset_engine(adev, inst_id);
> +	amdgpu_amdkfd_resume(adev, false);
>  	if (r)
>  		return r;
> -	amdgpu_fence_driver_force_completion(ring);
> +
> +	if (is_guilty)
> +		atomic_inc(&ring->adev->gpu_reset_counter);
> +
>  	return 0;
>  }
>  
> @@ -1617,7 +1633,10 @@ static int sdma_v5_0_restore_queue(struct amdgpu_ring *ring)
>  
>  	r = sdma_v5_0_gfx_resume_instance(adev, inst_id, true);
>  	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
> -	return r;
> +	if (r)
> +		return r;
> +	amdgpu_fence_driver_force_completion(ring);
> +	return 0;
>  }
>  
>  static int sdma_v5_0_ring_preempt_ib(struct amdgpu_ring *ring)
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> index 365c710ee9e8c..84d85ef30701c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> @@ -1451,18 +1451,34 @@ static int sdma_v5_2_wait_for_idle(struct amdgpu_ip_block *ip_block)
>  	return -ETIMEDOUT;
>  }
>  
> +static bool sdma_v5_2_is_queue_selected(struct amdgpu_device *adev,
> +					uint32_t instance_id)
> +{
> +	u32 context_status = RREG32(sdma_v5_2_get_reg_offset(adev, instance_id,
> +							     mmSDMA0_GFX_CONTEXT_STATUS));
> +
> +	/* Check if the SELECTED bit is set */
> +	return (context_status & SDMA0_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
> +}
> +
>  static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring,
>  				 unsigned int vmid,
>  				 struct amdgpu_fence *guilty_fence)
>  {
>  	struct amdgpu_device *adev = ring->adev;
>  	u32 inst_id = ring->me;
> +	bool is_guilty = sdma_v5_2_is_queue_selected(adev, inst_id);
>  	int r;
>  
> +	amdgpu_amdkfd_suspend(adev, false);
>  	r = amdgpu_sdma_reset_engine(adev, inst_id);
> +	amdgpu_amdkfd_resume(adev, false);
>  	if (r)
>  		return r;
> -	amdgpu_fence_driver_force_completion(ring);
> +
> +	if (is_guilty)
> +		atomic_inc(&ring->adev->gpu_reset_counter);
> +
>  	return 0;
>  }
>  
> @@ -1529,11 +1545,12 @@ static int sdma_v5_2_restore_queue(struct amdgpu_ring *ring)
>  	freeze = RREG32(sdma_v5_2_get_reg_offset(adev, inst_id, mmSDMA0_FREEZE));
>  	freeze = REG_SET_FIELD(freeze, SDMA0_FREEZE, FREEZE, 0);
>  	WREG32(sdma_v5_2_get_reg_offset(adev, inst_id, mmSDMA0_FREEZE), freeze);
> -
>  	r = sdma_v5_2_gfx_resume_instance(adev, inst_id, true);
> -
>  	amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
> -	return r;
> +	if (r)
> +		return r;
> +	amdgpu_fence_driver_force_completion(ring);
> +	return 0;
>  }
>  
>  static int sdma_v5_2_ring_preempt_ib(struct amdgpu_ring *ring)
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> index 746f14862d9ff..595e90a5274ea 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> @@ -1565,6 +1565,7 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> index 2e4c658598001..3e036c37b1f5a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> @@ -830,6 +830,7 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> index 0d73b2bd4aad6..d5be19361cc89 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> @@ -1985,6 +1985,7 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
> index bf9edfef2107e..c7c2b7f5ba56d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
> @@ -1626,6 +1626,7 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
> index 3a3ed600e15f0..af75617cf6df5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
> @@ -1483,6 +1483,7 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
> index c7953116ad532..64f2b64da6258 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
> @@ -1210,6 +1210,7 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>  	amdgpu_fence_driver_force_completion(ring);
> +	atomic_inc(&ring->adev->gpu_reset_counter);
>  	return 0;
>  }
>  


^ permalink raw reply	[flat|nested] 35+ messages in thread
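
The contrast behind the question, condensed: the per-queue callbacks
elsewhere in the series bump gpu_reset_counter unconditionally after a
successful reset, while sdma_v4_4_2_reset_queue resets the whole engine
(both the gfx and page queues) yet only counts it when the hung queue
had the CONTEXT_STATUS SELECTED bit set:

	r = amdgpu_sdma_reset_engine(adev, id);	/* engine-wide reset */
	if (r)
		return r;

	if (is_guilty)	/* counted only if this queue was executing */
		atomic_inc(&ring->adev->gpu_reset_counter);

so an engine reset triggered by the non-selected queue's timeout goes
uncounted.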

* Re: [PATCH 01/27] drm/amdgpu: switch job hw_fence to amdgpu_fence
  2025-06-13 21:47 ` [PATCH 01/27] drm/amdgpu: switch job hw_fence to amdgpu_fence Alex Deucher
@ 2025-06-16 12:16   ` Christian König
  2025-06-16 13:47     ` Alex Deucher
  0 siblings, 1 reply; 35+ messages in thread
From: Christian König @ 2025-06-16 12:16 UTC (permalink / raw)
  To: Alex Deucher, amd-gfx, sasundar

On 6/13/25 23:47, Alex Deucher wrote:
> Use the amdgpu fence container so we can store additional
> data in the fence.  This also fixes the start_time handling
> for MCBP since we were casting the fence to an amdgpu_fence
> and it wasn't.
> 
> Fixes: 3f4c175d62d8 ("drm/amdgpu: MCBP based on DRM scheduler (v9)")
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c   | 30 +++++----------------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     | 12 ++++-----
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h    | 16 +++++++++++
>  6 files changed, 32 insertions(+), 32 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 8e626f50b362e..f81608330a3d0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -1902,7 +1902,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
>  			continue;
>  		}
>  		job = to_amdgpu_job(s_job);
> -		if (preempted && (&job->hw_fence) == fence)
> +		if (preempted && (&job->hw_fence.base) == fence)
>  			/* mark the job as preempted */
>  			job->preemption_status |= AMDGPU_IB_PREEMPTED;
>  	}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 00174437b01ec..4893f834f4fd4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -6397,7 +6397,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
>  	 *
>  	 * job->base holds a reference to parent fence
>  	 */
> -	if (job && dma_fence_is_signaled(&job->hw_fence)) {
> +	if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
>  		job_signaled = true;
>  		dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
>  		goto skip_hw_reset;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 2f24a6aa13bf6..569e0e5373927 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -41,22 +41,6 @@
>  #include "amdgpu_trace.h"
>  #include "amdgpu_reset.h"
>  
> -/*
> - * Fences mark an event in the GPUs pipeline and are used
> - * for GPU/CPU synchronization.  When the fence is written,
> - * it is expected that all buffers associated with that fence
> - * are no longer in use by the associated ring on the GPU and
> - * that the relevant GPU caches have been flushed.
> - */
> -
> -struct amdgpu_fence {
> -	struct dma_fence base;
> -
> -	/* RB, DMA, etc. */
> -	struct amdgpu_ring		*ring;
> -	ktime_t				start_timestamp;
> -};
> -
>  static struct kmem_cache *amdgpu_fence_slab;
>  
>  int amdgpu_fence_slab_init(void)
> @@ -151,12 +135,12 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
>  		am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
>  		if (am_fence == NULL)
>  			return -ENOMEM;
> -		fence = &am_fence->base;
> -		am_fence->ring = ring;
>  	} else {
>  		/* take use of job-embedded fence */
> -		fence = &job->hw_fence;
> +		am_fence = &job->hw_fence;
>  	}
> +	fence = &am_fence->base;
> +	am_fence->ring = ring;

I would rather completely drop the job from the parameters and the general fence allocation here.

Instead we should just provide a fence as an input parameter and submit that one.

This should make sure that we don't run into such issues again.

Apart from that looks good to me,
Christian.

>  
>  	seq = ++ring->fence_drv.sync_seq;
>  	if (job && job->job_run_counter) {
> @@ -718,7 +702,7 @@ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
>  			 * it right here or we won't be able to track them in fence_drv
>  			 * and they will remain unsignaled during sa_bo free.
>  			 */
> -			job = container_of(old, struct amdgpu_job, hw_fence);
> +			job = container_of(old, struct amdgpu_job, hw_fence.base);
>  			if (!job->base.s_fence && !dma_fence_is_signaled(old))
>  				dma_fence_signal(old);
>  			RCU_INIT_POINTER(*ptr, NULL);
> @@ -780,7 +764,7 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
>  
>  static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
>  {
> -	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
> +	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
>  
>  	return (const char *)to_amdgpu_ring(job->base.sched)->name;
>  }
> @@ -810,7 +794,7 @@ static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
>   */
>  static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
>  {
> -	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
> +	struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
>  
>  	if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
>  		amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
> @@ -845,7 +829,7 @@ static void amdgpu_job_fence_free(struct rcu_head *rcu)
>  	struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
>  
>  	/* free job if fence has a parent job */
> -	kfree(container_of(f, struct amdgpu_job, hw_fence));
> +	kfree(container_of(f, struct amdgpu_job, hw_fence.base));
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index acb21fc8b3ce5..ddb9d3269357c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -272,8 +272,8 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
>  	/* Check if any fences where initialized */
>  	if (job->base.s_fence && job->base.s_fence->finished.ops)
>  		f = &job->base.s_fence->finished;
> -	else if (job->hw_fence.ops)
> -		f = &job->hw_fence;
> +	else if (job->hw_fence.base.ops)
> +		f = &job->hw_fence.base;
>  	else
>  		f = NULL;
>  
> @@ -290,10 +290,10 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
>  	amdgpu_sync_free(&job->explicit_sync);
>  
>  	/* only put the hw fence if has embedded fence */
> -	if (!job->hw_fence.ops)
> +	if (!job->hw_fence.base.ops)
>  		kfree(job);
>  	else
> -		dma_fence_put(&job->hw_fence);
> +		dma_fence_put(&job->hw_fence.base);
>  }
>  
>  void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> @@ -322,10 +322,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
>  	if (job->gang_submit != &job->base.s_fence->scheduled)
>  		dma_fence_put(job->gang_submit);
>  
> -	if (!job->hw_fence.ops)
> +	if (!job->hw_fence.base.ops)
>  		kfree(job);
>  	else
> -		dma_fence_put(&job->hw_fence);
> +		dma_fence_put(&job->hw_fence.base);
>  }
>  
>  struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index f2c049129661f..931fed8892cc1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -48,7 +48,7 @@ struct amdgpu_job {
>  	struct drm_sched_job    base;
>  	struct amdgpu_vm	*vm;
>  	struct amdgpu_sync	explicit_sync;
> -	struct dma_fence	hw_fence;
> +	struct amdgpu_fence	hw_fence;
>  	struct dma_fence	*gang_submit;
>  	uint32_t		preamble_status;
>  	uint32_t                preemption_status;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index b95b471107692..e1f25218943a4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -127,6 +127,22 @@ struct amdgpu_fence_driver {
>  	struct dma_fence		**fences;
>  };
>  
> +/*
> + * Fences mark an event in the GPUs pipeline and are used
> + * for GPU/CPU synchronization.  When the fence is written,
> + * it is expected that all buffers associated with that fence
> + * are no longer in use by the associated ring on the GPU and
> + * that the relevant GPU caches have been flushed.
> + */
> +
> +struct amdgpu_fence {
> +	struct dma_fence base;
> +
> +	/* RB, DMA, etc. */
> +	struct amdgpu_ring		*ring;
> +	ktime_t				start_timestamp;
> +};
> +
>  extern const struct drm_sched_backend_ops amdgpu_sched_ops;
>  
>  void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);


^ permalink raw reply	[flat|nested] 35+ messages in thread
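
The mechanical core of the patch: job->hw_fence is now a struct
amdgpu_fence embedding a struct dma_fence as its 'base' member, so
every container_of() that recovers the job from a raw fence pointer
gains one more member in the path.  The recurring pattern:

	/* f is the dma_fence embedded (via the amdgpu_fence) in the job */
	struct amdgpu_job *job = container_of(f, struct amdgpu_job,
					      hw_fence.base);

This is also what fixes the MCBP start_time issue mentioned in the
commit message: previously a plain dma_fence was cast to amdgpu_fence,
reading ring/start_timestamp fields that didn't exist behind it,
whereas now every hw fence really is an amdgpu_fence.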

* Re: [PATCH 01/27] drm/amdgpu: switch job hw_fence to amdgpu_fence
  2025-06-16 12:16   ` Christian König
@ 2025-06-16 13:47     ` Alex Deucher
  2025-06-16 17:45       ` Christian König
  0 siblings, 1 reply; 35+ messages in thread
From: Alex Deucher @ 2025-06-16 13:47 UTC (permalink / raw)
  To: Christian König; +Cc: Alex Deucher, amd-gfx, sasundar

On Mon, Jun 16, 2025 at 8:16 AM Christian König
<christian.koenig@amd.com> wrote:
>
> On 6/13/25 23:47, Alex Deucher wrote:
> > Use the amdgpu fence container so we can store additional
> > data in the fence.  This also fixes the start_time handling
> > for MCBP since we were casting the fence to an amdgpu_fence
> > and it wasn't.
> >
> > Fixes: 3f4c175d62d8 ("drm/amdgpu: MCBP based on DRM scheduler (v9)")
> > Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  2 +-
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  2 +-
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c   | 30 +++++----------------
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     | 12 ++++-----
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     |  2 +-
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h    | 16 +++++++++++
> >  6 files changed, 32 insertions(+), 32 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > index 8e626f50b362e..f81608330a3d0 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> > @@ -1902,7 +1902,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
> >                       continue;
> >               }
> >               job = to_amdgpu_job(s_job);
> > -             if (preempted && (&job->hw_fence) == fence)
> > +             if (preempted && (&job->hw_fence.base) == fence)
> >                       /* mark the job as preempted */
> >                       job->preemption_status |= AMDGPU_IB_PREEMPTED;
> >       }
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > index 00174437b01ec..4893f834f4fd4 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> > @@ -6397,7 +6397,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
> >        *
> >        * job->base holds a reference to parent fence
> >        */
> > -     if (job && dma_fence_is_signaled(&job->hw_fence)) {
> > +     if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
> >               job_signaled = true;
> >               dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
> >               goto skip_hw_reset;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > index 2f24a6aa13bf6..569e0e5373927 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > @@ -41,22 +41,6 @@
> >  #include "amdgpu_trace.h"
> >  #include "amdgpu_reset.h"
> >
> > -/*
> > - * Fences mark an event in the GPUs pipeline and are used
> > - * for GPU/CPU synchronization.  When the fence is written,
> > - * it is expected that all buffers associated with that fence
> > - * are no longer in use by the associated ring on the GPU and
> > - * that the relevant GPU caches have been flushed.
> > - */
> > -
> > -struct amdgpu_fence {
> > -     struct dma_fence base;
> > -
> > -     /* RB, DMA, etc. */
> > -     struct amdgpu_ring              *ring;
> > -     ktime_t                         start_timestamp;
> > -};
> > -
> >  static struct kmem_cache *amdgpu_fence_slab;
> >
> >  int amdgpu_fence_slab_init(void)
> > @@ -151,12 +135,12 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
> >               am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
> >               if (am_fence == NULL)
> >                       return -ENOMEM;
> > -             fence = &am_fence->base;
> > -             am_fence->ring = ring;
> >       } else {
> >               /* take use of job-embedded fence */
> > -             fence = &job->hw_fence;
> > +             am_fence = &job->hw_fence;
> >       }
> > +     fence = &am_fence->base;
> > +     am_fence->ring = ring;
>
> I would rather completely drop the job from the parameters and the general fence allocation here.
>
> Instead we should just provide a fence as an input parameter and submit that one.
>
> This should make sure that we don't run into such issues again.

How about doing that as a follow-on patch?  It looks like that will be
a much bigger patch than makes sense for a stable bug fix.  I think we can clean up a
lot of stuff in amdgpu_fence.c with that change.
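
A possible shape of that follow-on, sketched here only to make the
suggestion concrete (the exact signature is an assumption, not what
the series implements):

	/* caller always supplies the fence; no job parameter and no
	 * allocation inside the emit path */
	int amdgpu_fence_emit(struct amdgpu_ring *ring,
			      struct amdgpu_fence *af, unsigned int flags);

	/* the job path would then pass its embedded fence explicitly;
	 * fence_flags stands in for whatever flags the caller computes */
	r = amdgpu_fence_emit(ring, &job->hw_fence, fence_flags);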

Alex

>
> Apart from that looks good to me,
> Christian.
>
> >
> >       seq = ++ring->fence_drv.sync_seq;
> >       if (job && job->job_run_counter) {
> > @@ -718,7 +702,7 @@ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
> >                        * it right here or we won't be able to track them in fence_drv
> >                        * and they will remain unsignaled during sa_bo free.
> >                        */
> > -                     job = container_of(old, struct amdgpu_job, hw_fence);
> > +                     job = container_of(old, struct amdgpu_job, hw_fence.base);
> >                       if (!job->base.s_fence && !dma_fence_is_signaled(old))
> >                               dma_fence_signal(old);
> >                       RCU_INIT_POINTER(*ptr, NULL);
> > @@ -780,7 +764,7 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
> >
> >  static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
> >  {
> > -     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
> > +     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
> >
> >       return (const char *)to_amdgpu_ring(job->base.sched)->name;
> >  }
> > @@ -810,7 +794,7 @@ static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
> >   */
> >  static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
> >  {
> > -     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
> > +     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
> >
> >       if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
> >               amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
> > @@ -845,7 +829,7 @@ static void amdgpu_job_fence_free(struct rcu_head *rcu)
> >       struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
> >
> >       /* free job if fence has a parent job */
> > -     kfree(container_of(f, struct amdgpu_job, hw_fence));
> > +     kfree(container_of(f, struct amdgpu_job, hw_fence.base));
> >  }
> >
> >  /**
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > index acb21fc8b3ce5..ddb9d3269357c 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > @@ -272,8 +272,8 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
> >       /* Check if any fences where initialized */
> >       if (job->base.s_fence && job->base.s_fence->finished.ops)
> >               f = &job->base.s_fence->finished;
> > -     else if (job->hw_fence.ops)
> > -             f = &job->hw_fence;
> > +     else if (job->hw_fence.base.ops)
> > +             f = &job->hw_fence.base;
> >       else
> >               f = NULL;
> >
> > @@ -290,10 +290,10 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
> >       amdgpu_sync_free(&job->explicit_sync);
> >
> >       /* only put the hw fence if has embedded fence */
> > -     if (!job->hw_fence.ops)
> > +     if (!job->hw_fence.base.ops)
> >               kfree(job);
> >       else
> > -             dma_fence_put(&job->hw_fence);
> > +             dma_fence_put(&job->hw_fence.base);
> >  }
> >
> >  void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> > @@ -322,10 +322,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
> >       if (job->gang_submit != &job->base.s_fence->scheduled)
> >               dma_fence_put(job->gang_submit);
> >
> > -     if (!job->hw_fence.ops)
> > +     if (!job->hw_fence.base.ops)
> >               kfree(job);
> >       else
> > -             dma_fence_put(&job->hw_fence);
> > +             dma_fence_put(&job->hw_fence.base);
> >  }
> >
> >  struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> > index f2c049129661f..931fed8892cc1 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> > @@ -48,7 +48,7 @@ struct amdgpu_job {
> >       struct drm_sched_job    base;
> >       struct amdgpu_vm        *vm;
> >       struct amdgpu_sync      explicit_sync;
> > -     struct dma_fence        hw_fence;
> > +     struct amdgpu_fence     hw_fence;
> >       struct dma_fence        *gang_submit;
> >       uint32_t                preamble_status;
> >       uint32_t                preemption_status;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > index b95b471107692..e1f25218943a4 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > @@ -127,6 +127,22 @@ struct amdgpu_fence_driver {
> >       struct dma_fence                **fences;
> >  };
> >
> > +/*
> > + * Fences mark an event in the GPUs pipeline and are used
> > + * for GPU/CPU synchronization.  When the fence is written,
> > + * it is expected that all buffers associated with that fence
> > + * are no longer in use by the associated ring on the GPU and
> > + * that the relevant GPU caches have been flushed.
> > + */
> > +
> > +struct amdgpu_fence {
> > +     struct dma_fence base;
> > +
> > +     /* RB, DMA, etc. */
> > +     struct amdgpu_ring              *ring;
> > +     ktime_t                         start_timestamp;
> > +};
> > +
> >  extern const struct drm_sched_backend_ops amdgpu_sched_ops;
> >
> >  void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
>


* Re: [PATCH 07/27] drm/amdgpu: move guilty handling into ring resets
  2025-06-16  3:46   ` Lazar, Lijo
@ 2025-06-16 16:03     ` Alex Deucher
  0 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-16 16:03 UTC (permalink / raw)
  To: Lazar, Lijo; +Cc: Alex Deucher, amd-gfx, christian.koenig, sasundar

On Mon, Jun 16, 2025 at 12:11 AM Lazar, Lijo <lijo.lazar@amd.com> wrote:
>
>
>
> On 6/14/2025 3:17 AM, Alex Deucher wrote:
> > Move guilty logic into the ring reset callbacks.  This
> > allows each ring reset callback to better handle fence
> > errors and force completions in line with the reset
> > behavior for each IP.  It also allows us to remove
> > the ring guilty callback since that logic now lives
> > in the reset callback.
> >
> > Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  | 22 +---------
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h |  1 -
> >  drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c   |  2 +
> >  drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c   |  2 +
> >  drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c   |  2 +
> >  drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c    |  1 +
> >  drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c  |  1 +
> >  drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c   |  1 +
> >  drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c   |  1 +
> >  drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c   |  1 +
> >  drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c   |  1 +
> >  drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c |  1 +
> >  drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c |  1 +
> >  drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 52 ++++++++++++------------
> >  drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c   | 23 ++++++++++-
> >  drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c   | 25 ++++++++++--
> >  drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c   |  1 +
> >  drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c   |  1 +
> >  drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c    |  1 +
> >  drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c  |  1 +
> >  drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c  |  1 +
> >  drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c  |  1 +
> >  22 files changed, 89 insertions(+), 54 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > index 177f04491a11b..680cdd8fc3ab2 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> > @@ -91,7 +91,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> >       struct amdgpu_job *job = to_amdgpu_job(s_job);
> >       struct amdgpu_task_info *ti;
> >       struct amdgpu_device *adev = ring->adev;
> > -     bool set_error = false;
> >       int idx, r;
> >
> >       if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
> > @@ -134,8 +133,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> >       if (unlikely(adev->debug_disable_gpu_ring_reset)) {
> >               dev_err(adev->dev, "Ring reset disabled by debug mask\n");
> >       } else if (amdgpu_gpu_recovery && ring->funcs->reset) {
> > -             bool is_guilty;
> > -
> >               dev_err(adev->dev, "Starting %s ring reset\n",
> >                       s_job->sched->name);
> >
> > @@ -145,24 +142,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> >                */
> >               drm_sched_wqueue_stop(&ring->sched);
> >
> > -             /* for engine resets, we need to reset the engine,
> > -              * but individual queues may be unaffected.
> > -              * check here to make sure the accounting is correct.
> > -              */
> > -             if (ring->funcs->is_guilty)
> > -                     is_guilty = ring->funcs->is_guilty(ring);
> > -             else
> > -                     is_guilty = true;
> > -
> > -             if (is_guilty) {
> > -                     dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
> > -                     set_error = true;
> > -             }
> > -
> >               r = amdgpu_ring_reset(ring, job->vmid, NULL);
> >               if (!r) {
> > -                     if (is_guilty)
> > -                             atomic_inc(&ring->adev->gpu_reset_counter);
> >                       drm_sched_wqueue_start(&ring->sched);
> >                       dev_err(adev->dev, "Ring %s reset succeeded\n",
> >                               ring->sched.name);
> > @@ -173,8 +154,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> >               dev_err(adev->dev, "Ring %s reset failed\n", ring->sched.name);
> >       }
> >
> > -     if (!set_error)
> > -             dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
> > +     dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
> >
> >       if (amdgpu_device_should_recover_gpu(ring->adev)) {
> >               struct amdgpu_reset_context reset_context;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > index ff3a4b81e51ab..c1d14183abfe6 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > @@ -271,7 +271,6 @@ struct amdgpu_ring_funcs {
> >       int (*reset)(struct amdgpu_ring *ring, unsigned int vmid,
> >                    struct amdgpu_fence *guilty_fence);
> >       void (*emit_cleaner_shader)(struct amdgpu_ring *ring);
> > -     bool (*is_guilty)(struct amdgpu_ring *ring);
> >  };
> >
> >  /**
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> > index b4f4ad966db82..a9d26d91c8468 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> > @@ -9581,6 +9581,7 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > @@ -9658,6 +9659,7 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> > index 5707ce7dd5c82..3dd2e04830dc6 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> > @@ -6846,6 +6846,7 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > @@ -7012,6 +7013,7 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
> > index 259a83c3acb5d..d2ee4543ce222 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
> > @@ -5341,6 +5341,7 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > @@ -5460,6 +5461,7 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> > index e0dec946b7cdc..1b767094dfa24 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> > @@ -7228,6 +7228,7 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> > index e5fcc63cd99df..05abe86ecd9ac 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> > @@ -3625,6 +3625,7 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> > index 0b1fa35a441ae..dbc28042c7d53 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> > @@ -776,6 +776,7 @@ static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
> > index 7a9e91f6495de..f8af473e2a7a4 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
> > @@ -655,6 +655,7 @@ static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
> > index 81ee1ba4c0a3c..83559a32ed3d2 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
> > @@ -567,6 +567,7 @@ static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
> > index 06f75091e1304..b0f80f2a549c6 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
> > @@ -735,6 +735,7 @@ static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
> > index 10a7b990b0adf..4fd9386d2efd6 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
> > @@ -1158,6 +1158,7 @@ static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
> > index 88dea7a47a1e5..beca4d1e941b3 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
> > @@ -849,6 +849,7 @@ static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> > index c5e0d2e730740..0199d5bb5821d 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> > @@ -1651,37 +1651,21 @@ static bool sdma_v4_4_2_is_queue_selected(struct amdgpu_device *adev, uint32_t i
> >       return (context_status & SDMA_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
> >  }
> >
> > -static bool sdma_v4_4_2_ring_is_guilty(struct amdgpu_ring *ring)
> > -{
> > -     struct amdgpu_device *adev = ring->adev;
> > -     uint32_t instance_id = ring->me;
> > -
> > -     return sdma_v4_4_2_is_queue_selected(adev, instance_id, false);
> > -}
> > -
> > -static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
> > -{
> > -     struct amdgpu_device *adev = ring->adev;
> > -     uint32_t instance_id = ring->me;
> > -
> > -     if (!adev->sdma.has_page_queue)
> > -             return false;
> > -
> > -     return sdma_v4_4_2_is_queue_selected(adev, instance_id, true);
> > -}
> > -
> >  static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
> >                                  unsigned int vmid,
> >                                  struct amdgpu_fence *guilty_fence)
> >  {
> > -     bool is_guilty = ring->funcs->is_guilty(ring);
> >       struct amdgpu_device *adev = ring->adev;
> >       u32 id = ring->me;
> > +     bool is_guilty;
> >       int r;
> >
> >       if (!(adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
> >               return -EOPNOTSUPP;
> >
> > +     is_guilty = sdma_v4_4_2_is_queue_selected(adev, id,
> > +                                               &adev->sdma.instance[id].page == ring);
> > +
> >       amdgpu_amdkfd_suspend(adev, false);
> >       r = amdgpu_sdma_reset_engine(adev, id);
> >       amdgpu_amdkfd_resume(adev, false);
> > @@ -1689,7 +1673,7 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
> >               return r;
> >
> >       if (is_guilty)
> > -             amdgpu_fence_driver_force_completion(ring);
> > +             atomic_inc(&ring->adev->gpu_reset_counter);
>
> This may not be related to this patch as such. The SDMA reset happens
> regardless of whether the page/sdma queue is guilty. Why is this increment
> done conditionally in that case?

Yes, that should be fixed.  Will fix it up in the next rev.  I was
getting ahead of myself.  After the re-emit changes, we are able to
recover the non-guilty ring(s).
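
A sketch of that fix against the hunk quoted above (assuming the rest
of sdma_v4_4_2_reset_queue() stays as posted):

	amdgpu_amdkfd_suspend(adev, false);
	r = amdgpu_sdma_reset_engine(adev, id);
	amdgpu_amdkfd_resume(adev, false);
	if (r)
		return r;

	/* the engine reset takes down the gfx and page queues of this
	 * instance together, so count it unconditionally rather than
	 * only when this queue was selected as guilty */
	atomic_inc(&adev->gpu_reset_counter);

	return 0;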

Alex

>
> Thanks,
> Lijo
>
> >
> >       return 0;
> >  }
> > @@ -1735,8 +1719,8 @@ static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
> >  static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
> >  {
> >       struct amdgpu_device *adev = ring->adev;
> > -     u32 inst_mask;
> > -     int i;
> > +     u32 inst_mask, tmp_mask;
> > +     int i, r;
> >
> >       inst_mask = 1 << ring->me;
> >       udelay(50);
> > @@ -1753,7 +1737,25 @@ static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
> >               return -ETIMEDOUT;
> >       }
> >
> > -     return sdma_v4_4_2_inst_start(adev, inst_mask, true);
> > +     r = sdma_v4_4_2_inst_start(adev, inst_mask, true);
> > +     if (r) {
> > +             return r;
> > +     }
> > +
> > +     tmp_mask = inst_mask;
> > +     for_each_inst(i, tmp_mask) {
> > +             ring = &adev->sdma.instance[i].ring;
> > +
> > +             amdgpu_fence_driver_force_completion(ring);
> > +
> > +             if (adev->sdma.has_page_queue) {
> > +                     struct amdgpu_ring *page = &adev->sdma.instance[i].page;
> > +
> > +                     amdgpu_fence_driver_force_completion(page);
> > +             }
> > +     }
> > +
> > +     return r;
> >  }
> >
> >  static int sdma_v4_4_2_soft_reset_engine(struct amdgpu_device *adev,
> > @@ -2159,7 +2161,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_ring_funcs = {
> >       .emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
> >       .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
> >       .reset = sdma_v4_4_2_reset_queue,
> > -     .is_guilty = sdma_v4_4_2_ring_is_guilty,
> >  };
> >
> >  static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
> > @@ -2192,7 +2193,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
> >       .emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
> >       .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
> >       .reset = sdma_v4_4_2_reset_queue,
> > -     .is_guilty = sdma_v4_4_2_page_ring_is_guilty,
> >  };
> >
> >  static void sdma_v4_4_2_set_ring_funcs(struct amdgpu_device *adev)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> > index 09419db2d49a6..4a36e5199f248 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> > @@ -1538,18 +1538,34 @@ static int sdma_v5_0_soft_reset(struct amdgpu_ip_block *ip_block)
> >       return 0;
> >  }
> >
> > +static bool sdma_v5_0_is_queue_selected(struct amdgpu_device *adev,
> > +                                     uint32_t instance_id)
> > +{
> > +     u32 context_status = RREG32(sdma_v5_0_get_reg_offset(adev, instance_id,
> > +                                                          mmSDMA0_GFX_CONTEXT_STATUS));
> > +
> > +     /* Check if the SELECTED bit is set */
> > +     return (context_status & SDMA0_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
> > +}
> > +
> >  static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring,
> >                                unsigned int vmid,
> >                                struct amdgpu_fence *guilty_fence)
> >  {
> >       struct amdgpu_device *adev = ring->adev;
> >       u32 inst_id = ring->me;
> > +     bool is_guilty = sdma_v5_0_is_queue_selected(adev, inst_id);
> >       int r;
> >
> > +     amdgpu_amdkfd_suspend(adev, false);
> >       r = amdgpu_sdma_reset_engine(adev, inst_id);
> > +     amdgpu_amdkfd_resume(adev, false);
> >       if (r)
> >               return r;
> > -     amdgpu_fence_driver_force_completion(ring);
> > +
> > +     if (is_guilty)
> > +             atomic_inc(&ring->adev->gpu_reset_counter);
> > +
> >       return 0;
> >  }
> >
> > @@ -1617,7 +1633,10 @@ static int sdma_v5_0_restore_queue(struct amdgpu_ring *ring)
> >
> >       r = sdma_v5_0_gfx_resume_instance(adev, inst_id, true);
> >       amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
> > -     return r;
> > +     if (r)
> > +             return r;
> > +     amdgpu_fence_driver_force_completion(ring);
> > +     return 0;
> >  }
> >
> >  static int sdma_v5_0_ring_preempt_ib(struct amdgpu_ring *ring)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> > index 365c710ee9e8c..84d85ef30701c 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> > @@ -1451,18 +1451,34 @@ static int sdma_v5_2_wait_for_idle(struct amdgpu_ip_block *ip_block)
> >       return -ETIMEDOUT;
> >  }
> >
> > +static bool sdma_v5_2_is_queue_selected(struct amdgpu_device *adev,
> > +                                     uint32_t instance_id)
> > +{
> > +     u32 context_status = RREG32(sdma_v5_2_get_reg_offset(adev, instance_id,
> > +                                                          mmSDMA0_GFX_CONTEXT_STATUS));
> > +
> > +     /* Check if the SELECTED bit is set */
> > +     return (context_status & SDMA0_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
> > +}
> > +
> >  static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring,
> >                                unsigned int vmid,
> >                                struct amdgpu_fence *guilty_fence)
> >  {
> >       struct amdgpu_device *adev = ring->adev;
> >       u32 inst_id = ring->me;
> > +     bool is_guilty = sdma_v5_2_is_queue_selected(adev, inst_id);
> >       int r;
> >
> > +     amdgpu_amdkfd_suspend(adev, false);
> >       r = amdgpu_sdma_reset_engine(adev, inst_id);
> > +     amdgpu_amdkfd_resume(adev, false);
> >       if (r)
> >               return r;
> > -     amdgpu_fence_driver_force_completion(ring);
> > +
> > +     if (is_guilty)
> > +             atomic_inc(&ring->adev->gpu_reset_counter);
> > +
> >       return 0;
> >  }
> >
> > @@ -1529,11 +1545,12 @@ static int sdma_v5_2_restore_queue(struct amdgpu_ring *ring)
> >       freeze = RREG32(sdma_v5_2_get_reg_offset(adev, inst_id, mmSDMA0_FREEZE));
> >       freeze = REG_SET_FIELD(freeze, SDMA0_FREEZE, FREEZE, 0);
> >       WREG32(sdma_v5_2_get_reg_offset(adev, inst_id, mmSDMA0_FREEZE), freeze);
> > -
> >       r = sdma_v5_2_gfx_resume_instance(adev, inst_id, true);
> > -
> >       amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
> > -     return r;
> > +     if (r)
> > +             return r;
> > +     amdgpu_fence_driver_force_completion(ring);
> > +     return 0;
> >  }
> >
> >  static int sdma_v5_2_ring_preempt_ib(struct amdgpu_ring *ring)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> > index 746f14862d9ff..595e90a5274ea 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> > @@ -1565,6 +1565,7 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> > index 2e4c658598001..3e036c37b1f5a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> > @@ -830,6 +830,7 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> > index 0d73b2bd4aad6..d5be19361cc89 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> > @@ -1985,6 +1985,7 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
> > index bf9edfef2107e..c7c2b7f5ba56d 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
> > @@ -1626,6 +1626,7 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
> > index 3a3ed600e15f0..af75617cf6df5 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
> > @@ -1483,6 +1483,7 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
> > index c7953116ad532..64f2b64da6258 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
> > @@ -1210,6 +1210,7 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
> >       if (r)
> >               return r;
> >       amdgpu_fence_driver_force_completion(ring);
> > +     atomic_inc(&ring->adev->gpu_reset_counter);
> >       return 0;
> >  }
> >
>


* Re: [PATCH 01/27] drm/amdgpu: switch job hw_fence to amdgpu_fence
  2025-06-16 13:47     ` Alex Deucher
@ 2025-06-16 17:45       ` Christian König
  2025-06-16 18:36         ` Alex Deucher
  0 siblings, 1 reply; 35+ messages in thread
From: Christian König @ 2025-06-16 17:45 UTC (permalink / raw)
  To: Alex Deucher; +Cc: Alex Deucher, amd-gfx, sasundar

On 6/16/25 15:47, Alex Deucher wrote:
> On Mon, Jun 16, 2025 at 8:16 AM Christian König
> <christian.koenig@amd.com> wrote:
>>
>> On 6/13/25 23:47, Alex Deucher wrote:
>>> Use the amdgpu fence container so we can store additional
>>> data in the fence.  This also fixes the start_time handling
>>> for MCBP since we were casting the fence to an amdgpu_fence
>>> and it wasn't.
>>>
>>> Fixes: 3f4c175d62d8 ("drm/amdgpu: MCBP based on DRM scheduler (v9)")
>>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>>> ---
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  2 +-
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  2 +-
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c   | 30 +++++----------------
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     | 12 ++++-----
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     |  2 +-
>>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h    | 16 +++++++++++
>>>  6 files changed, 32 insertions(+), 32 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>> index 8e626f50b362e..f81608330a3d0 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>> @@ -1902,7 +1902,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
>>>                       continue;
>>>               }
>>>               job = to_amdgpu_job(s_job);
>>> -             if (preempted && (&job->hw_fence) == fence)
>>> +             if (preempted && (&job->hw_fence.base) == fence)
>>>                       /* mark the job as preempted */
>>>                       job->preemption_status |= AMDGPU_IB_PREEMPTED;
>>>       }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> index 00174437b01ec..4893f834f4fd4 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> @@ -6397,7 +6397,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
>>>        *
>>>        * job->base holds a reference to parent fence
>>>        */
>>> -     if (job && dma_fence_is_signaled(&job->hw_fence)) {
>>> +     if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
>>>               job_signaled = true;
>>>               dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
>>>               goto skip_hw_reset;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>> index 2f24a6aa13bf6..569e0e5373927 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>> @@ -41,22 +41,6 @@
>>>  #include "amdgpu_trace.h"
>>>  #include "amdgpu_reset.h"
>>>
>>> -/*
>>> - * Fences mark an event in the GPUs pipeline and are used
>>> - * for GPU/CPU synchronization.  When the fence is written,
>>> - * it is expected that all buffers associated with that fence
>>> - * are no longer in use by the associated ring on the GPU and
>>> - * that the relevant GPU caches have been flushed.
>>> - */
>>> -
>>> -struct amdgpu_fence {
>>> -     struct dma_fence base;
>>> -
>>> -     /* RB, DMA, etc. */
>>> -     struct amdgpu_ring              *ring;
>>> -     ktime_t                         start_timestamp;
>>> -};
>>> -
>>>  static struct kmem_cache *amdgpu_fence_slab;
>>>
>>>  int amdgpu_fence_slab_init(void)
>>> @@ -151,12 +135,12 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
>>>               am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
>>>               if (am_fence == NULL)
>>>                       return -ENOMEM;
>>> -             fence = &am_fence->base;
>>> -             am_fence->ring = ring;
>>>       } else {
>>>               /* take use of job-embedded fence */
>>> -             fence = &job->hw_fence;
>>> +             am_fence = &job->hw_fence;
>>>       }
>>> +     fence = &am_fence->base;
>>> +     am_fence->ring = ring;
>>
>> I would rather completely drop the job from the parameters and the general fence allocation here.
>>
>> Instead we should just provide a fence as an input parameter and submit that one.
>>
>> This should make sure that we don't run into such issues again.
> 
> > How about doing that as a follow-on patch?  It looks like that will be
> > a much bigger patch than makes sense for a stable bug fix.  I think we can clean up a
> lot of stuff in amdgpu_fence.c with that change.

Works for me. I would also suggest removing the kmem_cache_alloc() and just using kmalloc() for the rare cases where we need an independent fence.

In addition to that, the ring and start_time members look suspicious. We should not have those inside the fence in the first place.
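
A sketch of what that could look like in amdgpu_fence_emit(), assuming
the slab has no other users:

	/* standalone fences are rare, so a plain kmalloc is enough */
	am_fence = kmalloc(sizeof(*am_fence), GFP_ATOMIC);
	if (!am_fence)
		return -ENOMEM;

	/* the matching free side (amdgpu_fence_free()) would then use
	 * kfree() instead of kmem_cache_free() */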

Regards,
Christian.

> 
> Alex
> 
>>
>> Apart from that looks good to me,
>> Christian.
>>
>>>
>>>       seq = ++ring->fence_drv.sync_seq;
>>>       if (job && job->job_run_counter) {
>>> @@ -718,7 +702,7 @@ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
>>>                        * it right here or we won't be able to track them in fence_drv
>>>                        * and they will remain unsignaled during sa_bo free.
>>>                        */
>>> -                     job = container_of(old, struct amdgpu_job, hw_fence);
>>> +                     job = container_of(old, struct amdgpu_job, hw_fence.base);
>>>                       if (!job->base.s_fence && !dma_fence_is_signaled(old))
>>>                               dma_fence_signal(old);
>>>                       RCU_INIT_POINTER(*ptr, NULL);
>>> @@ -780,7 +764,7 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
>>>
>>>  static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
>>>  {
>>> -     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
>>> +     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
>>>
>>>       return (const char *)to_amdgpu_ring(job->base.sched)->name;
>>>  }
>>> @@ -810,7 +794,7 @@ static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
>>>   */
>>>  static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
>>>  {
>>> -     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
>>> +     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
>>>
>>>       if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
>>>               amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
>>> @@ -845,7 +829,7 @@ static void amdgpu_job_fence_free(struct rcu_head *rcu)
>>>       struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
>>>
>>>       /* free job if fence has a parent job */
>>> -     kfree(container_of(f, struct amdgpu_job, hw_fence));
>>> +     kfree(container_of(f, struct amdgpu_job, hw_fence.base));
>>>  }
>>>
>>>  /**
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> index acb21fc8b3ce5..ddb9d3269357c 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> @@ -272,8 +272,8 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
>>>       /* Check if any fences where initialized */
>>>       if (job->base.s_fence && job->base.s_fence->finished.ops)
>>>               f = &job->base.s_fence->finished;
>>> -     else if (job->hw_fence.ops)
>>> -             f = &job->hw_fence;
>>> +     else if (job->hw_fence.base.ops)
>>> +             f = &job->hw_fence.base;
>>>       else
>>>               f = NULL;
>>>
>>> @@ -290,10 +290,10 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
>>>       amdgpu_sync_free(&job->explicit_sync);
>>>
>>>       /* only put the hw fence if has embedded fence */
>>> -     if (!job->hw_fence.ops)
>>> +     if (!job->hw_fence.base.ops)
>>>               kfree(job);
>>>       else
>>> -             dma_fence_put(&job->hw_fence);
>>> +             dma_fence_put(&job->hw_fence.base);
>>>  }
>>>
>>>  void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
>>> @@ -322,10 +322,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
>>>       if (job->gang_submit != &job->base.s_fence->scheduled)
>>>               dma_fence_put(job->gang_submit);
>>>
>>> -     if (!job->hw_fence.ops)
>>> +     if (!job->hw_fence.base.ops)
>>>               kfree(job);
>>>       else
>>> -             dma_fence_put(&job->hw_fence);
>>> +             dma_fence_put(&job->hw_fence.base);
>>>  }
>>>
>>>  struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> index f2c049129661f..931fed8892cc1 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> @@ -48,7 +48,7 @@ struct amdgpu_job {
>>>       struct drm_sched_job    base;
>>>       struct amdgpu_vm        *vm;
>>>       struct amdgpu_sync      explicit_sync;
>>> -     struct dma_fence        hw_fence;
>>> +     struct amdgpu_fence     hw_fence;
>>>       struct dma_fence        *gang_submit;
>>>       uint32_t                preamble_status;
>>>       uint32_t                preemption_status;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
>>> index b95b471107692..e1f25218943a4 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
>>> @@ -127,6 +127,22 @@ struct amdgpu_fence_driver {
>>>       struct dma_fence                **fences;
>>>  };
>>>
>>> +/*
>>> + * Fences mark an event in the GPUs pipeline and are used
>>> + * for GPU/CPU synchronization.  When the fence is written,
>>> + * it is expected that all buffers associated with that fence
>>> + * are no longer in use by the associated ring on the GPU and
>>> + * that the relevant GPU caches have been flushed.
>>> + */
>>> +
>>> +struct amdgpu_fence {
>>> +     struct dma_fence base;
>>> +
>>> +     /* RB, DMA, etc. */
>>> +     struct amdgpu_ring              *ring;
>>> +     ktime_t                         start_timestamp;
>>> +};
>>> +
>>>  extern const struct drm_sched_backend_ops amdgpu_sched_ops;
>>>
>>>  void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
>>



* Re: [PATCH 01/27] drm/amdgpu: switch job hw_fence to amdgpu_fence
  2025-06-16 17:45       ` Christian König
@ 2025-06-16 18:36         ` Alex Deucher
  0 siblings, 0 replies; 35+ messages in thread
From: Alex Deucher @ 2025-06-16 18:36 UTC (permalink / raw)
  To: Christian König; +Cc: Alex Deucher, amd-gfx, sasundar

On Mon, Jun 16, 2025 at 1:45 PM Christian König
<christian.koenig@amd.com> wrote:
>
> On 6/16/25 15:47, Alex Deucher wrote:
> > On Mon, Jun 16, 2025 at 8:16 AM Christian König
> > <christian.koenig@amd.com> wrote:
> >>
> >> On 6/13/25 23:47, Alex Deucher wrote:
> >>> Use the amdgpu fence container so we can store additional
> >>> data in the fence.  This also fixes the start_time handling
> >>> for MCBP since we were casting the fence to an amdgpu_fence
> >>> and it wasn't.
> >>>
> >>> Fixes: 3f4c175d62d8 ("drm/amdgpu: MCBP based on DRM scheduler (v9)")
> >>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> >>> ---
> >>>  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  2 +-
> >>>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  2 +-
> >>>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c   | 30 +++++----------------
> >>>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     | 12 ++++-----
> >>>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     |  2 +-
> >>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h    | 16 +++++++++++
> >>>  6 files changed, 32 insertions(+), 32 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> >>> index 8e626f50b362e..f81608330a3d0 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> >>> @@ -1902,7 +1902,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
> >>>                       continue;
> >>>               }
> >>>               job = to_amdgpu_job(s_job);
> >>> -             if (preempted && (&job->hw_fence) == fence)
> >>> +             if (preempted && (&job->hw_fence.base) == fence)
> >>>                       /* mark the job as preempted */
> >>>                       job->preemption_status |= AMDGPU_IB_PREEMPTED;
> >>>       }
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> >>> index 00174437b01ec..4893f834f4fd4 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> >>> @@ -6397,7 +6397,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
> >>>        *
> >>>        * job->base holds a reference to parent fence
> >>>        */
> >>> -     if (job && dma_fence_is_signaled(&job->hw_fence)) {
> >>> +     if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
> >>>               job_signaled = true;
> >>>               dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
> >>>               goto skip_hw_reset;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> >>> index 2f24a6aa13bf6..569e0e5373927 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> >>> @@ -41,22 +41,6 @@
> >>>  #include "amdgpu_trace.h"
> >>>  #include "amdgpu_reset.h"
> >>>
> >>> -/*
> >>> - * Fences mark an event in the GPUs pipeline and are used
> >>> - * for GPU/CPU synchronization.  When the fence is written,
> >>> - * it is expected that all buffers associated with that fence
> >>> - * are no longer in use by the associated ring on the GPU and
> >>> - * that the relevant GPU caches have been flushed.
> >>> - */
> >>> -
> >>> -struct amdgpu_fence {
> >>> -     struct dma_fence base;
> >>> -
> >>> -     /* RB, DMA, etc. */
> >>> -     struct amdgpu_ring              *ring;
> >>> -     ktime_t                         start_timestamp;
> >>> -};
> >>> -
> >>>  static struct kmem_cache *amdgpu_fence_slab;
> >>>
> >>>  int amdgpu_fence_slab_init(void)
> >>> @@ -151,12 +135,12 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
> >>>               am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
> >>>               if (am_fence == NULL)
> >>>                       return -ENOMEM;
> >>> -             fence = &am_fence->base;
> >>> -             am_fence->ring = ring;
> >>>       } else {
> >>>               /* take use of job-embedded fence */
> >>> -             fence = &job->hw_fence;
> >>> +             am_fence = &job->hw_fence;
> >>>       }
> >>> +     fence = &am_fence->base;
> >>> +     am_fence->ring = ring;
> >>
> >> I would rather completely drop the job from the parameters and the general fence allocation here.
> >>
> >> Instead we should just provide a fence as an input parameter and submit that one.
> >>
> >> This should make sure that we don't run into such issues again.
> >
> > How about doing that as a follow-on patch?  It looks like that will be
> > a much bigger patch than makes sense for a stable bug fix.  I think we can clean up a
> > lot of stuff in amdgpu_fence.c with that change.
>
> Works for me. I would also suggest removing the kmem_cache_alloc() and just using kmalloc() for the rare cases where we need an independent fence.
>
> In addition to that, the ring and start_time members look suspicious. We should not have those inside the fence in the first place.

The ring member is used in a number of places to get from the fence to
the fence_drv and the ring name.  The start_time is from MCBP.
I don't remember the details.  While we are here, I think we can
remove job->job_run_counter as well?  We don't support resubmission
anymore.
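
For reference, the fence->ring use looks roughly like the timeline-name
callback visible in the diff context (a sketch, not a verbatim quote):

	static const char *example_fence_timeline_name(struct dma_fence *f)
	{
		struct amdgpu_fence *af = container_of(f, struct amdgpu_fence, base);

		/* without the ring back-pointer there is no way from a
		 * bare dma_fence to the fence_drv or the ring name */
		return (const char *)af->ring->name;
	}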

Alex

>
> Regards,
> Christian.
>
> >
> > Alex
> >
> >>
> >> Apart from that looks good to me,
> >> Christian.
> >>
> >>>
> >>>       seq = ++ring->fence_drv.sync_seq;
> >>>       if (job && job->job_run_counter) {
> >>> @@ -718,7 +702,7 @@ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
> >>>                        * it right here or we won't be able to track them in fence_drv
> >>>                        * and they will remain unsignaled during sa_bo free.
> >>>                        */
> >>> -                     job = container_of(old, struct amdgpu_job, hw_fence);
> >>> +                     job = container_of(old, struct amdgpu_job, hw_fence.base);
> >>>                       if (!job->base.s_fence && !dma_fence_is_signaled(old))
> >>>                               dma_fence_signal(old);
> >>>                       RCU_INIT_POINTER(*ptr, NULL);
> >>> @@ -780,7 +764,7 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
> >>>
> >>>  static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
> >>>  {
> >>> -     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
> >>> +     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
> >>>
> >>>       return (const char *)to_amdgpu_ring(job->base.sched)->name;
> >>>  }
> >>> @@ -810,7 +794,7 @@ static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
> >>>   */
> >>>  static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
> >>>  {
> >>> -     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
> >>> +     struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
> >>>
> >>>       if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
> >>>               amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
> >>> @@ -845,7 +829,7 @@ static void amdgpu_job_fence_free(struct rcu_head *rcu)
> >>>       struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
> >>>
> >>>       /* free job if fence has a parent job */
> >>> -     kfree(container_of(f, struct amdgpu_job, hw_fence));
> >>> +     kfree(container_of(f, struct amdgpu_job, hw_fence.base));
> >>>  }
> >>>
> >>>  /**
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> >>> index acb21fc8b3ce5..ddb9d3269357c 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> >>> @@ -272,8 +272,8 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
> >>>       /* Check if any fences where initialized */
> >>>       if (job->base.s_fence && job->base.s_fence->finished.ops)
> >>>               f = &job->base.s_fence->finished;
> >>> -     else if (job->hw_fence.ops)
> >>> -             f = &job->hw_fence;
> >>> +     else if (job->hw_fence.base.ops)
> >>> +             f = &job->hw_fence.base;
> >>>       else
> >>>               f = NULL;
> >>>
> >>> @@ -290,10 +290,10 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
> >>>       amdgpu_sync_free(&job->explicit_sync);
> >>>
> >>>       /* only put the hw fence if has embedded fence */
> >>> -     if (!job->hw_fence.ops)
> >>> +     if (!job->hw_fence.base.ops)
> >>>               kfree(job);
> >>>       else
> >>> -             dma_fence_put(&job->hw_fence);
> >>> +             dma_fence_put(&job->hw_fence.base);
> >>>  }
> >>>
> >>>  void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> >>> @@ -322,10 +322,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
> >>>       if (job->gang_submit != &job->base.s_fence->scheduled)
> >>>               dma_fence_put(job->gang_submit);
> >>>
> >>> -     if (!job->hw_fence.ops)
> >>> +     if (!job->hw_fence.base.ops)
> >>>               kfree(job);
> >>>       else
> >>> -             dma_fence_put(&job->hw_fence);
> >>> +             dma_fence_put(&job->hw_fence.base);
> >>>  }
> >>>
> >>>  struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> >>> index f2c049129661f..931fed8892cc1 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> >>> @@ -48,7 +48,7 @@ struct amdgpu_job {
> >>>       struct drm_sched_job    base;
> >>>       struct amdgpu_vm        *vm;
> >>>       struct amdgpu_sync      explicit_sync;
> >>> -     struct dma_fence        hw_fence;
> >>> +     struct amdgpu_fence     hw_fence;
> >>>       struct dma_fence        *gang_submit;
> >>>       uint32_t                preamble_status;
> >>>       uint32_t                preemption_status;
> >>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> >>> index b95b471107692..e1f25218943a4 100644
> >>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> >>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> >>> @@ -127,6 +127,22 @@ struct amdgpu_fence_driver {
> >>>       struct dma_fence                **fences;
> >>>  };
> >>>
> >>> +/*
> >>> + * Fences mark an event in the GPUs pipeline and are used
> >>> + * for GPU/CPU synchronization.  When the fence is written,
> >>> + * it is expected that all buffers associated with that fence
> >>> + * are no longer in use by the associated ring on the GPU and
> >>> + * that the relevant GPU caches have been flushed.
> >>> + */
> >>> +
> >>> +struct amdgpu_fence {
> >>> +     struct dma_fence base;
> >>> +
> >>> +     /* RB, DMA, etc. */
> >>> +     struct amdgpu_ring              *ring;
> >>> +     ktime_t                         start_timestamp;
> >>> +};
> >>> +
> >>>  extern const struct drm_sched_backend_ops amdgpu_sched_ops;
> >>>
> >>>  void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
> >>
>


end of thread, other threads:[~2025-06-16 18:36 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-06-13 21:47 [PATCH V8 00/27] Reset improvements Alex Deucher
2025-06-13 21:47 ` [PATCH 01/27] drm/amdgpu: switch job hw_fence to amdgpu_fence Alex Deucher
2025-06-16 12:16   ` Christian König
2025-06-16 13:47     ` Alex Deucher
2025-06-16 17:45       ` Christian König
2025-06-16 18:36         ` Alex Deucher
2025-06-13 21:47 ` [PATCH 02/27] drm/amdgpu/sdma: handle paging queues in amdgpu_sdma_reset_engine() Alex Deucher
2025-06-14 12:31   ` Zhang, Jesse(Jie)
2025-06-13 21:47 ` [PATCH 03/27] drm/amdgpu: enable legacy enforce isolation by default Alex Deucher
2025-06-13 21:47 ` [PATCH 04/27] drm/amdgpu: update ring reset function signature Alex Deucher
2025-06-13 21:47 ` [PATCH 05/27] drm/amdgpu: rework queue reset scheduler interaction Alex Deucher
2025-06-13 21:47 ` [PATCH 06/27] drm/amdgpu: move force completion into ring resets Alex Deucher
2025-06-13 21:47 ` [PATCH 07/27] drm/amdgpu: move guilty handling " Alex Deucher
2025-06-16  3:46   ` Lazar, Lijo
2025-06-16 16:03     ` Alex Deucher
2025-06-13 21:47 ` [PATCH 08/27] drm/amdgpu: track ring state associated with a job Alex Deucher
2025-06-13 21:47 ` [PATCH 09/27] drm/amdgpu/gfx9: re-emit unprocessed state on kcq reset Alex Deucher
2025-06-13 21:47 ` [PATCH 10/27] drm/amdgpu/gfx9.4.3: " Alex Deucher
2025-06-13 21:47 ` [PATCH 11/27] drm/amdgpu/gfx10: re-emit unprocessed state on ring reset Alex Deucher
2025-06-13 21:47 ` [PATCH 12/27] drm/amdgpu/gfx11: " Alex Deucher
2025-06-13 21:47 ` [PATCH 13/27] drm/amdgpu/gfx12: " Alex Deucher
2025-06-13 21:47 ` [PATCH 14/27] drm/amdgpu/sdma6: " Alex Deucher
2025-06-13 21:47 ` [PATCH 15/27] drm/amdgpu/sdma7: " Alex Deucher
2025-06-13 21:47 ` [PATCH 16/27] drm/amdgpu/jpeg2: " Alex Deucher
2025-06-13 21:47 ` [PATCH 17/27] drm/amdgpu/jpeg2.5: " Alex Deucher
2025-06-13 21:47 ` [PATCH 18/27] drm/amdgpu/jpeg3: " Alex Deucher
2025-06-13 21:47 ` [PATCH 19/27] drm/amdgpu/jpeg4: " Alex Deucher
2025-06-13 21:47 ` [PATCH 20/27] drm/amdgpu/jpeg4.0.3: " Alex Deucher
2025-06-13 21:47 ` [PATCH 21/27] drm/amdgpu/jpeg4.0.5: add queue reset Alex Deucher
2025-06-13 21:47 ` [PATCH 22/27] drm/amdgpu/jpeg5: " Alex Deucher
2025-06-13 21:47 ` [PATCH 23/27] drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset Alex Deucher
2025-06-13 21:47 ` [PATCH 24/27] drm/amdgpu/vcn4: " Alex Deucher
2025-06-13 21:47 ` [PATCH 25/27] drm/amdgpu/vcn4.0.3: " Alex Deucher
2025-06-13 21:47 ` [PATCH 26/27] drm/amdgpu/vcn4.0.5: " Alex Deucher
2025-06-13 21:47 ` [PATCH 27/27] drm/amdgpu/vcn5: " Alex Deucher
