* [PATCH V9 00/36] Reset improvements
@ 2025-06-17 3:07 Alex Deucher
2025-06-17 3:07 ` [PATCH 01/36] drm/amdgpu: switch job hw_fence to amdgpu_fence Alex Deucher
` (35 more replies)
0 siblings, 36 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
This set improves per-queue reset support for a number of IPs.
When we reset a queue, the queue contents are lost, so we need
to re-emit the unprocessed state from subsequent submissions.
To make sure we actually restore that unprocessed state, we
need to enable legacy enforce isolation so that we can safely
re-emit it. If we don't, multiple jobs can run in parallel and
we may not end up resetting the correct one. This is similar
to how Windows handles queues. This also gives us correct
guilty tracking for GC.
Tested on GC 10 and 11 chips with a game running and
then running hang tests. The game pauses when the
hang happens, then continues after the queue reset.
I tried this same approach on GC 8 and 9, but it
was not as reliable as soft recovery. As such, I've dropped
the KGQ reset code for pre-GC10.
The same approach is extended to SDMA and VCN.
They don't need enforce isolation because those engines
are single threaded so they always operate serially.
Rework re-emit to signal the seq number of the bad job and
check that seq to verify that the reset worked, then re-emit the
rest of the non-guilty state. This way we are not waiting on
the rest of the state to complete, and if the subsequent state
also contains a bad job, we'll end up in queue reset again rather
than adapter reset.
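The re-emit sequence described above can be sketched as a toy
user-space model. All names and structures here are illustrative
stand-ins, not the actual amdgpu code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the queue-reset flow: force-signal the guilty job's
 * seq, check it to verify the reset worked, then re-emit only the
 * non-guilty state that followed the bad job. */

struct toy_ring {
	uint32_t signaled_seq;	/* last seq the "hardware" completed */
	uint32_t sync_seq;	/* last seq software has emitted */
};

/* Reset the queue and verify recovery by checking the guilty seq. */
static bool toy_reset_queue(struct toy_ring *ring, uint32_t guilty_seq)
{
	ring->signaled_seq = guilty_seq;	/* force completion */
	return ring->signaled_seq >= guilty_seq; /* reset verified? */
}

/* Re-emit everything emitted after the guilty job. */
static void toy_reemit_unprocessed(struct toy_ring *ring, uint32_t guilty_seq)
{
	uint32_t seq;

	for (seq = guilty_seq + 1; seq <= ring->sync_seq; seq++)
		ring->signaled_seq = seq;	/* stand-in for re-submit */
}
```

Because only the guilty seq is waited on, a second bad job among the
re-emitted state hangs again later and takes another queue reset
instead of escalating to adapter reset.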
Git tree:
https://gitlab.freedesktop.org/agd5f/linux/-/commits/kq_resets?ref_type=heads
v4: Drop explicit padding patches
Drop new timeout macro
Rework re-emit sequence
v5: Add a helper for reemit
Convert VCN, JPEG, SDMA to use new helpers
v6: Update SDMA 4.4.2 to use new helpers
Move ptr tracking to amdgpu_fence
Skip all jobs from the bad context on the ring
v7: Rework the backup logic
Move and clean up the guilty logic for engine resets
Integrate suggestions from Christian
Add JPEG 4.0.5 support
v8: Add non-guilty ring backup handling
Clean up new function signatures
Reorder some bug fixes to the start of the series
v9: Clean up fence_emit
SDMA 5.x fixes
Add new reset helpers
sched wqueue stop/start cleanup
Add support for VCNs without unified queues
Alex Deucher (35):
drm/amdgpu: switch job hw_fence to amdgpu_fence
drm/amdgpu: remove job parameter from amdgpu_fence_emit()
drm/amdgpu: remove fence slab
drm/amdgpu: enable legacy enforce isolation by default
drm/amdgpu/sdma5.x: suspend KFD queues in ring reset
drm/amdgpu/sdma5: init engine reset mutex
drm/amdgpu/sdma5.2: init engine reset mutex
drm/amdgpu: update ring reset function signature
drm/amdgpu: move force completion into ring resets
drm/amdgpu: move guilty handling into ring resets
drm/amdgpu: move scheduler wqueue handling into callbacks
drm/amdgpu: track ring state associated with a job
drm/amdgpu/gfx9: re-emit unprocessed state on kcq reset
drm/amdgpu/gfx9.4.3: re-emit unprocessed state on kcq reset
drm/amdgpu/gfx10: re-emit unprocessed state on ring reset
drm/amdgpu/gfx11: re-emit unprocessed state on ring reset
drm/amdgpu/gfx12: re-emit unprocessed state on ring reset
drm/amdgpu/sdma6: re-emit unprocessed state on ring reset
drm/amdgpu/sdma7: re-emit unprocessed state on ring reset
drm/amdgpu/jpeg2: re-emit unprocessed state on ring reset
drm/amdgpu/jpeg2.5: re-emit unprocessed state on ring reset
drm/amdgpu/jpeg3: re-emit unprocessed state on ring reset
drm/amdgpu/jpeg4: re-emit unprocessed state on ring reset
drm/amdgpu/jpeg4.0.3: re-emit unprocessed state on ring reset
drm/amdgpu/jpeg4.0.5: add queue reset
drm/amdgpu/jpeg5: add queue reset
drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset
drm/amdgpu/vcn4: re-emit unprocessed state on ring reset
drm/amdgpu/vcn4.0.3: re-emit unprocessed state on ring reset
drm/amdgpu/vcn4.0.5: re-emit unprocessed state on ring reset
drm/amdgpu/vcn5: re-emit unprocessed state on ring reset
drm/amdgpu/vcn: add a helper framework for engine resets
drm/amdgpu/vcn2: implement ring reset
drm/amdgpu/vcn2.5: implement ring reset
drm/amdgpu/vcn3: implement ring reset
Christian König (1):
drm/amdgpu: rework queue reset scheduler interaction
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 -
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 15 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 5 -
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 175 +++++++++++++-------
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 19 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 60 ++-----
drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 59 +++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 43 ++++-
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c | 17 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 64 +++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 6 +-
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 42 ++---
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 33 ++--
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 33 ++--
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 9 +-
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 11 +-
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c | 7 +-
drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c | 7 +-
drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 7 +-
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c | 7 +-
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c | 7 +-
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c | 11 ++
drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c | 14 ++
drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c | 7 +-
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 49 +++---
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 17 +-
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 17 +-
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 25 ++-
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 25 ++-
drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c | 25 +++
drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c | 24 +++
drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 24 +++
drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c | 8 +-
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c | 9 +-
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c | 8 +-
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c | 8 +-
38 files changed, 629 insertions(+), 275 deletions(-)
--
2.49.0
^ permalink raw reply [flat|nested] 59+ messages in thread
* [PATCH 01/36] drm/amdgpu: switch job hw_fence to amdgpu_fence
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 9:42 ` Christian König
2025-06-17 3:07 ` [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit() Alex Deucher
` (34 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Use the amdgpu fence container so we can store additional
data in the fence. This also fixes the start_time handling
for MCBP, since we were casting the fence to an amdgpu_fence
when it wasn't one.
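The fix is easiest to see in reduced form: casting a plain dma_fence
to an amdgpu_fence only yields valid memory when the fence really is
embedded in one. A minimal sketch with pared-down fields (the real
structs carry much more):

```c
#include <stddef.h>

/* Reduced model of the layout change: the job now embeds a full
 * amdgpu_fence, so recovering the container from the dma_fence base
 * is valid. Fields are illustrative only. */

struct dma_fence { unsigned long long seqno; };

struct amdgpu_fence {
	struct dma_fence base;		/* must be recoverable via offset */
	void *ring;
	long long start_timestamp;	/* the data MCBP wants to reach */
};

struct amdgpu_job {
	struct amdgpu_fence hw_fence;	/* was: struct dma_fence hw_fence */
};

#define toy_container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* With the old layout, casting &job->hw_fence to amdgpu_fence * read
 * past the end of the embedded struct; now the container exists. */
static struct amdgpu_fence *to_af(struct dma_fence *f)
{
	return toy_container_of(f, struct amdgpu_fence, base);
}
```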
Fixes: 3f4c175d62d8 ("drm/amdgpu: MCBP based on DRM scheduler (v9)")
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 30 +++++----------------
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 12 ++++-----
drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 16 +++++++++++
6 files changed, 32 insertions(+), 32 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 8e626f50b362e..f81608330a3d0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1902,7 +1902,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
continue;
}
job = to_amdgpu_job(s_job);
- if (preempted && (&job->hw_fence) == fence)
+ if (preempted && (&job->hw_fence.base) == fence)
/* mark the job as preempted */
job->preemption_status |= AMDGPU_IB_PREEMPTED;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index f134394047603..13070211dc69c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -6427,7 +6427,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
*
* job->base holds a reference to parent fence
*/
- if (job && dma_fence_is_signaled(&job->hw_fence)) {
+ if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
job_signaled = true;
dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
goto skip_hw_reset;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 2f24a6aa13bf6..569e0e5373927 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -41,22 +41,6 @@
#include "amdgpu_trace.h"
#include "amdgpu_reset.h"
-/*
- * Fences mark an event in the GPUs pipeline and are used
- * for GPU/CPU synchronization. When the fence is written,
- * it is expected that all buffers associated with that fence
- * are no longer in use by the associated ring on the GPU and
- * that the relevant GPU caches have been flushed.
- */
-
-struct amdgpu_fence {
- struct dma_fence base;
-
- /* RB, DMA, etc. */
- struct amdgpu_ring *ring;
- ktime_t start_timestamp;
-};
-
static struct kmem_cache *amdgpu_fence_slab;
int amdgpu_fence_slab_init(void)
@@ -151,12 +135,12 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
if (am_fence == NULL)
return -ENOMEM;
- fence = &am_fence->base;
- am_fence->ring = ring;
} else {
/* take use of job-embedded fence */
- fence = &job->hw_fence;
+ am_fence = &job->hw_fence;
}
+ fence = &am_fence->base;
+ am_fence->ring = ring;
seq = ++ring->fence_drv.sync_seq;
if (job && job->job_run_counter) {
@@ -718,7 +702,7 @@ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
* it right here or we won't be able to track them in fence_drv
* and they will remain unsignaled during sa_bo free.
*/
- job = container_of(old, struct amdgpu_job, hw_fence);
+ job = container_of(old, struct amdgpu_job, hw_fence.base);
if (!job->base.s_fence && !dma_fence_is_signaled(old))
dma_fence_signal(old);
RCU_INIT_POINTER(*ptr, NULL);
@@ -780,7 +764,7 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
{
- struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
+ struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
return (const char *)to_amdgpu_ring(job->base.sched)->name;
}
@@ -810,7 +794,7 @@ static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
*/
static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
{
- struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
+ struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
@@ -845,7 +829,7 @@ static void amdgpu_job_fence_free(struct rcu_head *rcu)
struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
/* free job if fence has a parent job */
- kfree(container_of(f, struct amdgpu_job, hw_fence));
+ kfree(container_of(f, struct amdgpu_job, hw_fence.base));
}
/**
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index acb21fc8b3ce5..ddb9d3269357c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -272,8 +272,8 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
/* Check if any fences where initialized */
if (job->base.s_fence && job->base.s_fence->finished.ops)
f = &job->base.s_fence->finished;
- else if (job->hw_fence.ops)
- f = &job->hw_fence;
+ else if (job->hw_fence.base.ops)
+ f = &job->hw_fence.base;
else
f = NULL;
@@ -290,10 +290,10 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
amdgpu_sync_free(&job->explicit_sync);
/* only put the hw fence if has embedded fence */
- if (!job->hw_fence.ops)
+ if (!job->hw_fence.base.ops)
kfree(job);
else
- dma_fence_put(&job->hw_fence);
+ dma_fence_put(&job->hw_fence.base);
}
void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
@@ -322,10 +322,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
if (job->gang_submit != &job->base.s_fence->scheduled)
dma_fence_put(job->gang_submit);
- if (!job->hw_fence.ops)
+ if (!job->hw_fence.base.ops)
kfree(job);
else
- dma_fence_put(&job->hw_fence);
+ dma_fence_put(&job->hw_fence.base);
}
struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index f2c049129661f..931fed8892cc1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -48,7 +48,7 @@ struct amdgpu_job {
struct drm_sched_job base;
struct amdgpu_vm *vm;
struct amdgpu_sync explicit_sync;
- struct dma_fence hw_fence;
+ struct amdgpu_fence hw_fence;
struct dma_fence *gang_submit;
uint32_t preamble_status;
uint32_t preemption_status;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index b95b471107692..e1f25218943a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -127,6 +127,22 @@ struct amdgpu_fence_driver {
struct dma_fence **fences;
};
+/*
+ * Fences mark an event in the GPUs pipeline and are used
+ * for GPU/CPU synchronization. When the fence is written,
+ * it is expected that all buffers associated with that fence
+ * are no longer in use by the associated ring on the GPU and
+ * that the relevant GPU caches have been flushed.
+ */
+
+struct amdgpu_fence {
+ struct dma_fence base;
+
+ /* RB, DMA, etc. */
+ struct amdgpu_ring *ring;
+ ktime_t start_timestamp;
+};
+
extern const struct drm_sched_backend_ops amdgpu_sched_ops;
void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
--
2.49.0
* [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit()
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
2025-06-17 3:07 ` [PATCH 01/36] drm/amdgpu: switch job hw_fence to amdgpu_fence Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 11:44 ` Christian König
2025-06-17 3:07 ` [PATCH 03/36] drm/amdgpu: remove fence slab Alex Deucher
` (33 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
What we actually care about is the amdgpu_fence object,
so pass that in explicitly to avoid possible mistakes
in the future.
The job_run_counter handling can be safely removed at this
point as we no longer support job resubmission.
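The resulting contract can be sketched in isolation (allocation
decision only; the real function also initializes and emits the
fence). Names mirror the patch, everything else is pared down:

```c
#include <stdlib.h>

/* Reduced model of the new amdgpu_fence_emit() contract: callers pass
 * either the job-embedded amdgpu_fence or NULL for a standalone one. */

struct amdgpu_fence { int placeholder; };

/* Return the fence that backs the emission: the caller-provided
 * embedded fence, or a freshly allocated standalone one. */
static struct amdgpu_fence *toy_pick_fence(struct amdgpu_fence *af)
{
	if (!af)
		return malloc(sizeof(struct amdgpu_fence)); /* standalone */
	return af;					    /* job-embedded */
}
```

With the job parameter gone, there is no longer a per-call question of
whether a job-embedded fence is being resubmitted, which is what makes
the job_run_counter handling removable.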
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 36 +++++++++--------------
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 5 +++-
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 4 +--
3 files changed, 20 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 569e0e5373927..e88848c14491a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -114,14 +114,14 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring)
*
* @ring: ring the fence is associated with
* @f: resulting fence object
- * @job: job the fence is embedded in
+ * @af: amdgpu fence input
* @flags: flags to pass into the subordinate .emit_fence() call
*
* Emits a fence command on the requested ring (all asics).
* Returns 0 on success, -ENOMEM on failure.
*/
-int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amdgpu_job *job,
- unsigned int flags)
+int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
+ struct amdgpu_fence *af, unsigned int flags)
{
struct amdgpu_device *adev = ring->adev;
struct dma_fence *fence;
@@ -130,36 +130,28 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
uint32_t seq;
int r;
- if (job == NULL) {
- /* create a sperate hw fence */
+ if (!af) {
+ /* create a separate hw fence */
am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
if (am_fence == NULL)
return -ENOMEM;
} else {
- /* take use of job-embedded fence */
- am_fence = &job->hw_fence;
+ am_fence = af;
}
fence = &am_fence->base;
am_fence->ring = ring;
seq = ++ring->fence_drv.sync_seq;
- if (job && job->job_run_counter) {
- /* reinit seq for resubmitted jobs */
- fence->seqno = seq;
- /* TO be inline with external fence creation and other drivers */
+ if (af) {
+ dma_fence_init(fence, &amdgpu_job_fence_ops,
+ &ring->fence_drv.lock,
+ adev->fence_context + ring->idx, seq);
+ /* Against remove in amdgpu_job_{free, free_cb} */
dma_fence_get(fence);
} else {
- if (job) {
- dma_fence_init(fence, &amdgpu_job_fence_ops,
- &ring->fence_drv.lock,
- adev->fence_context + ring->idx, seq);
- /* Against remove in amdgpu_job_{free, free_cb} */
- dma_fence_get(fence);
- } else {
- dma_fence_init(fence, &amdgpu_fence_ops,
- &ring->fence_drv.lock,
- adev->fence_context + ring->idx, seq);
- }
+ dma_fence_init(fence, &amdgpu_fence_ops,
+ &ring->fence_drv.lock,
+ adev->fence_context + ring->idx, seq);
}
amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 802743efa3b39..206b70acb29a0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -128,6 +128,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
struct amdgpu_device *adev = ring->adev;
struct amdgpu_ib *ib = &ibs[0];
struct dma_fence *tmp = NULL;
+ struct amdgpu_fence *af;
bool need_ctx_switch;
struct amdgpu_vm *vm;
uint64_t fence_ctx;
@@ -154,6 +155,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
csa_va = job->csa_va;
gds_va = job->gds_va;
init_shadow = job->init_shadow;
+ af = &job->hw_fence;
} else {
vm = NULL;
fence_ctx = 0;
@@ -161,6 +163,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
csa_va = 0;
gds_va = 0;
init_shadow = false;
+ af = NULL;
}
if (!ring->sched.ready) {
@@ -282,7 +285,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
amdgpu_ring_init_cond_exec(ring, ring->cond_exe_gpu_addr);
}
- r = amdgpu_fence_emit(ring, f, job, fence_flags);
+ r = amdgpu_fence_emit(ring, f, af, fence_flags);
if (r) {
dev_err(adev->dev, "failed to emit fence (%d)\n", r);
if (job && job->vmid)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index e1f25218943a4..9ae522baad8e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -157,8 +157,8 @@ void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev);
void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev);
int amdgpu_fence_driver_sw_init(struct amdgpu_device *adev);
void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev);
-int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **fence, struct amdgpu_job *job,
- unsigned flags);
+int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
+ struct amdgpu_fence *af, unsigned int flags);
int amdgpu_fence_emit_polling(struct amdgpu_ring *ring, uint32_t *s,
uint32_t timeout);
bool amdgpu_fence_process(struct amdgpu_ring *ring);
--
2.49.0
* [PATCH 03/36] drm/amdgpu: remove fence slab
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
2025-06-17 3:07 ` [PATCH 01/36] drm/amdgpu: switch job hw_fence to amdgpu_fence Alex Deucher
2025-06-17 3:07 ` [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit() Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 11:49 ` Christian König
2025-06-17 3:07 ` [PATCH 04/36] drm/amdgpu: enable legacy enforce isolation by default Alex Deucher
` (32 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Just use kmalloc for the fences in the rare case we need
an independent fence.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 5 -----
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 21 +++------------------
3 files changed, 3 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 5e2f086d2c99e..534d999b1433d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -470,9 +470,6 @@ struct amdgpu_sa_manager {
void *cpu_ptr;
};
-int amdgpu_fence_slab_init(void);
-void amdgpu_fence_slab_fini(void);
-
/*
* IRQS.
*/
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 7f8fa69300bf4..d645fa9bdff3b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -3113,10 +3113,6 @@ static int __init amdgpu_init(void)
if (r)
goto error_sync;
- r = amdgpu_fence_slab_init();
- if (r)
- goto error_fence;
-
r = amdgpu_userq_fence_slab_init();
if (r)
goto error_fence;
@@ -3151,7 +3147,6 @@ static void __exit amdgpu_exit(void)
amdgpu_unregister_atpx_handler();
amdgpu_acpi_release();
amdgpu_sync_fini();
- amdgpu_fence_slab_fini();
amdgpu_userq_fence_slab_fini();
mmu_notifier_synchronize();
amdgpu_xcp_drv_release();
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index e88848c14491a..5555f3ae08c60 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -41,21 +41,6 @@
#include "amdgpu_trace.h"
#include "amdgpu_reset.h"
-static struct kmem_cache *amdgpu_fence_slab;
-
-int amdgpu_fence_slab_init(void)
-{
- amdgpu_fence_slab = KMEM_CACHE(amdgpu_fence, SLAB_HWCACHE_ALIGN);
- if (!amdgpu_fence_slab)
- return -ENOMEM;
- return 0;
-}
-
-void amdgpu_fence_slab_fini(void)
-{
- rcu_barrier();
- kmem_cache_destroy(amdgpu_fence_slab);
-}
/*
* Cast helper
*/
@@ -132,8 +117,8 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
if (!af) {
/* create a separate hw fence */
- am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
- if (am_fence == NULL)
+ am_fence = kmalloc(sizeof(*am_fence), GFP_KERNEL);
+ if (!am_fence)
return -ENOMEM;
} else {
am_fence = af;
@@ -806,7 +791,7 @@ static void amdgpu_fence_free(struct rcu_head *rcu)
struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
/* free fence_slab if it's separated fence*/
- kmem_cache_free(amdgpu_fence_slab, to_amdgpu_fence(f));
+ kfree(to_amdgpu_fence(f));
}
/**
--
2.49.0
* [PATCH 04/36] drm/amdgpu: enable legacy enforce isolation by default
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (2 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 03/36] drm/amdgpu: remove fence slab Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 3:07 ` [PATCH 05/36] drm/amdgpu/sdma5.x: suspend KFD queues in ring reset Alex Deucher
` (31 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Enable legacy enforce isolation (just serialize kernel
GC submissions) for gfx9+. This way we can reset a ring and
only affect the process currently using that ring.
This mirrors what Windows does.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 13070211dc69c..508546ef55787 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2148,9 +2148,7 @@ static int amdgpu_device_check_arguments(struct amdgpu_device *adev)
for (i = 0; i < MAX_XCP; i++) {
switch (amdgpu_enforce_isolation) {
- case -1:
case 0:
- default:
/* disable */
adev->enforce_isolation[i] = AMDGPU_ENFORCE_ISOLATION_DISABLE;
break;
@@ -2164,6 +2162,17 @@ static int amdgpu_device_check_arguments(struct amdgpu_device *adev)
adev->enforce_isolation[i] =
AMDGPU_ENFORCE_ISOLATION_ENABLE_LEGACY;
break;
+ case -1:
+ default:
+ /* disable by default on GFX8 and older */
+ if (adev->asic_type <= CHIP_VEGAM)
+ /* disable */
+ adev->enforce_isolation[i] = AMDGPU_ENFORCE_ISOLATION_DISABLE;
+ else
+ /* enable legacy mode */
+ adev->enforce_isolation[i] =
+ AMDGPU_ENFORCE_ISOLATION_ENABLE_LEGACY;
+ break;
case 3:
/* enable only process isolation without submitting cleaner shader */
adev->enforce_isolation[i] =
--
2.49.0
* [PATCH 05/36] drm/amdgpu/sdma5.x: suspend KFD queues in ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (3 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 04/36] drm/amdgpu: enable legacy enforce isolation by default Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 3:07 ` [PATCH 06/36] drm/amdgpu/sdma5: init engine reset mutex Alex Deucher
` (30 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
SDMA 5.x only supports engine soft reset which resets
all queues on the engine. As such, we need to suspend
KFD queues around resets like we do for SDMA 4.x.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 7 ++++++-
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 7 ++++++-
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 9505ae96fbecc..2d94aadc31149 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -1542,8 +1542,13 @@ static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
{
struct amdgpu_device *adev = ring->adev;
u32 inst_id = ring->me;
+ int r;
+
+ amdgpu_amdkfd_suspend(adev, true);
+ r = amdgpu_sdma_reset_engine(adev, inst_id);
+ amdgpu_amdkfd_resume(adev, true);
- return amdgpu_sdma_reset_engine(adev, inst_id);
+ return r;
}
static int sdma_v5_0_stop_queue(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index a6e612b4a8928..cc934724f387c 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -1455,8 +1455,13 @@ static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
{
struct amdgpu_device *adev = ring->adev;
u32 inst_id = ring->me;
+ int r;
+
+ amdgpu_amdkfd_suspend(adev, true);
+ r = amdgpu_sdma_reset_engine(adev, inst_id);
+ amdgpu_amdkfd_resume(adev, true);
- return amdgpu_sdma_reset_engine(adev, inst_id);
+ return r;
}
static int sdma_v5_2_stop_queue(struct amdgpu_ring *ring)
--
2.49.0
* [PATCH 06/36] drm/amdgpu/sdma5: init engine reset mutex
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (4 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 05/36] drm/amdgpu/sdma5.x: suspend KFD queues in ring reset Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 5:50 ` Zhang, Jesse(Jie)
` (2 more replies)
2025-06-17 3:07 ` [PATCH 07/36] drm/amdgpu/sdma5.2: " Alex Deucher
` (29 subsequent siblings)
35 siblings, 3 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Missing the mutex init.
Fixes: e56d4bf57fab ("drm/amdgpu/: drm/amdgpu: Register the new sdma function pointers for sdma_v5_0")
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 2d94aadc31149..37f4b5b4a098f 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -1399,6 +1399,7 @@ static int sdma_v5_0_sw_init(struct amdgpu_ip_block *ip_block)
return r;
for (i = 0; i < adev->sdma.num_instances; i++) {
+ mutex_init(&adev->sdma.instance[i].engine_reset_mutex);
adev->sdma.instance[i].funcs = &sdma_v5_0_sdma_funcs;
ring = &adev->sdma.instance[i].ring;
ring->ring_obj = NULL;
--
2.49.0
* [PATCH 07/36] drm/amdgpu/sdma5.2: init engine reset mutex
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (5 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 06/36] drm/amdgpu/sdma5: init engine reset mutex Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 6:08 ` Zhang, Jesse(Jie)
2025-06-17 3:07 ` [PATCH 08/36] drm/amdgpu: update ring reset function signature Alex Deucher
` (28 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Missing the mutex init.
Fixes: 47454f2dc0bf ("drm/amdgpu: Register the new sdma function pointers for sdma_v5_2")
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index cc934724f387c..0b40411b92a0b 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -1318,6 +1318,7 @@ static int sdma_v5_2_sw_init(struct amdgpu_ip_block *ip_block)
}
for (i = 0; i < adev->sdma.num_instances; i++) {
+ mutex_init(&adev->sdma.instance[i].engine_reset_mutex);
adev->sdma.instance[i].funcs = &sdma_v5_2_sdma_funcs;
ring = &adev->sdma.instance[i].ring;
ring->ring_obj = NULL;
--
2.49.0
* [PATCH 08/36] drm/amdgpu: update ring reset function signature
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (6 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 07/36] drm/amdgpu/sdma5.2: " Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 12:20 ` Christian König
2025-06-17 3:07 ` [PATCH 09/36] drm/amdgpu: rework queue reset scheduler interaction Alex Deucher
` (27 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Going forward, we'll need more than just the vmid. Add the
guilty amdgpu_fence.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 5 +++--
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 7 +++++--
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 8 ++++++--
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 8 ++++++--
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 ++-
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 3 ++-
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c | 4 +++-
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c | 4 +++-
22 files changed, 70 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index ddb9d3269357c..a7ff1fa4c778e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -155,7 +155,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
if (is_guilty)
dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
- r = amdgpu_ring_reset(ring, job->vmid);
+ r = amdgpu_ring_reset(ring, job->vmid, NULL);
if (!r) {
if (amdgpu_ring_sched_ready(ring))
drm_sched_stop(&ring->sched, s_job);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 9ae522baad8e7..fc36b86c6dcf8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -268,7 +268,8 @@ struct amdgpu_ring_funcs {
void (*patch_cntl)(struct amdgpu_ring *ring, unsigned offset);
void (*patch_ce)(struct amdgpu_ring *ring, unsigned offset);
void (*patch_de)(struct amdgpu_ring *ring, unsigned offset);
- int (*reset)(struct amdgpu_ring *ring, unsigned int vmid);
+ int (*reset)(struct amdgpu_ring *ring, unsigned int vmid,
+ struct amdgpu_fence *guilty_fence);
void (*emit_cleaner_shader)(struct amdgpu_ring *ring);
bool (*is_guilty)(struct amdgpu_ring *ring);
};
@@ -425,7 +426,7 @@ struct amdgpu_ring {
#define amdgpu_ring_patch_cntl(r, o) ((r)->funcs->patch_cntl((r), (o)))
#define amdgpu_ring_patch_ce(r, o) ((r)->funcs->patch_ce((r), (o)))
#define amdgpu_ring_patch_de(r, o) ((r)->funcs->patch_de((r), (o)))
-#define amdgpu_ring_reset(r, v) (r)->funcs->reset((r), (v))
+#define amdgpu_ring_reset(r, v, f) (r)->funcs->reset((r), (v), (f))
unsigned int amdgpu_ring_max_ibs(enum amdgpu_ring_type type);
int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned ndw);
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 75ea071744eb5..444753b0ac885 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -9522,7 +9522,9 @@ static void gfx_v10_ring_insert_nop(struct amdgpu_ring *ring, uint32_t num_nop)
amdgpu_ring_insert_nop(ring, num_nop - 1);
}
-static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
@@ -9579,7 +9581,8 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
}
static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
- unsigned int vmid)
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index ec9b84f92d467..4293f2a1b9bfb 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -6811,7 +6811,9 @@ static int gfx_v11_reset_gfx_pipe(struct amdgpu_ring *ring)
return 0;
}
-static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
int r;
@@ -6973,7 +6975,9 @@ static int gfx_v11_0_reset_compute_pipe(struct amdgpu_ring *ring)
return 0;
}
-static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
int r = 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index 1234c8d64e20d..aea21ef177d05 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@ -5307,7 +5307,9 @@ static int gfx_v12_reset_gfx_pipe(struct amdgpu_ring *ring)
return 0;
}
-static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
int r;
@@ -5421,7 +5423,9 @@ static int gfx_v12_0_reset_compute_pipe(struct amdgpu_ring *ring)
return 0;
}
-static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring, unsigned int vmid)
+static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
int r;
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index d50e125fd3e0d..c0ffe7afca9b8 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -7153,7 +7153,8 @@ static void gfx_v9_ring_insert_nop(struct amdgpu_ring *ring, uint32_t num_nop)
}
static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
- unsigned int vmid)
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
index c233edf605694..79d4ae0645ffc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
@@ -3552,7 +3552,8 @@ static int gfx_v9_4_3_reset_hw_pipe(struct amdgpu_ring *ring)
}
static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
- unsigned int vmid)
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_kiq *kiq = &adev->gfx.kiq[ring->xcc_id];
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
index 4cde8a8bcc837..4c1ff6d0e14ea 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
@@ -764,7 +764,9 @@ static int jpeg_v2_0_process_interrupt(struct amdgpu_device *adev,
return 0;
}
-static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
jpeg_v2_0_stop(ring->adev);
jpeg_v2_0_start(ring->adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
index 8b39e114f3be1..5a18b8644de2f 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
@@ -643,7 +643,9 @@ static int jpeg_v2_5_process_interrupt(struct amdgpu_device *adev,
return 0;
}
-static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
jpeg_v2_5_stop_inst(ring->adev, ring->me);
jpeg_v2_5_start_inst(ring->adev, ring->me);
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
index 2f8510c2986b9..4963feddefae5 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
@@ -555,7 +555,9 @@ static int jpeg_v3_0_process_interrupt(struct amdgpu_device *adev,
return 0;
}
-static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
jpeg_v3_0_stop(ring->adev);
jpeg_v3_0_start(ring->adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
index f17ec5414fd69..327adb474b0d3 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
@@ -720,7 +720,9 @@ static int jpeg_v4_0_process_interrupt(struct amdgpu_device *adev,
return 0;
}
-static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
if (amdgpu_sriov_vf(ring->adev))
return -EINVAL;
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
index 79e342d5ab28d..c951b4b170c5b 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
@@ -1143,7 +1143,9 @@ static void jpeg_v4_0_3_core_stall_reset(struct amdgpu_ring *ring)
WREG32_SOC15(JPEG, jpeg_inst, regJPEG_CORE_RST_CTRL, 0x00);
}
-static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
if (amdgpu_sriov_vf(ring->adev))
return -EOPNOTSUPP;
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
index 3b6f65a256464..51ae62c24c49e 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
@@ -834,7 +834,9 @@ static void jpeg_v5_0_1_core_stall_reset(struct amdgpu_ring *ring)
WREG32_SOC15(JPEG, jpeg_inst, regJPEG_CORE_RST_CTRL, 0x00);
}
-static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
if (amdgpu_sriov_vf(ring->adev))
return -EOPNOTSUPP;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index 5b7009612190f..502d71f678922 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -1674,7 +1674,9 @@ static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
return sdma_v4_4_2_is_queue_selected(adev, instance_id, true);
}
-static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
u32 id = ring->me;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 37f4b5b4a098f..6092e2a9e210b 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -1539,7 +1539,9 @@ static int sdma_v5_0_soft_reset(struct amdgpu_ip_block *ip_block)
return 0;
}
-static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
u32 inst_id = ring->me;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index 0b40411b92a0b..2cdcf28881c3d 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -1452,7 +1452,9 @@ static int sdma_v5_2_wait_for_idle(struct amdgpu_ip_block *ip_block)
return -ETIMEDOUT;
}
-static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
u32 inst_id = ring->me;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index 5a70ae17be04e..43bb4a7456b90 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1537,7 +1537,9 @@ static int sdma_v6_0_ring_preempt_ib(struct amdgpu_ring *ring)
return r;
}
-static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
int i, r;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index ad47d0bdf7775..b5c168cb1354d 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -802,7 +802,9 @@ static bool sdma_v7_0_check_soft_reset(struct amdgpu_ip_block *ip_block)
return false;
}
-static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
+static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
int i, r;
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
index b5071f77f78d2..083fde15e83a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
@@ -1967,7 +1967,9 @@ static int vcn_v4_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
return 0;
}
-static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
index 5a33140f57235..57c59c4868a50 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
@@ -1594,7 +1594,9 @@ static void vcn_v4_0_3_unified_ring_set_wptr(struct amdgpu_ring *ring)
}
}
-static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
int r = 0;
int vcn_inst;
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
index 16ade84facc78..4aad7d2e36379 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
@@ -1465,7 +1465,9 @@ static void vcn_v4_0_5_unified_ring_set_wptr(struct amdgpu_ring *ring)
}
}
-static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
index f8e3f0b882da5..b9c8a2b8c5e0d 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
@@ -1192,7 +1192,9 @@ static void vcn_v5_0_0_unified_ring_set_wptr(struct amdgpu_ring *ring)
}
}
-static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
+static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
--
2.49.0
* [PATCH 09/36] drm/amdgpu: rework queue reset scheduler interaction
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (7 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 08/36] drm/amdgpu: update ring reset function signature Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 3:07 ` [PATCH 10/36] drm/amdgpu: move force completion into ring resets Alex Deucher
` (26 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Christian König, Alex Deucher
From: Christian König <ckoenig.leichtzumerken@gmail.com>
Stopping the scheduler for queue reset is generally a good idea because
it prevents any worker from touching the ring buffer.
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 35 ++++++++++++++-----------
1 file changed, 20 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index a7ff1fa4c778e..93413be59e08f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -91,8 +91,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
struct amdgpu_job *job = to_amdgpu_job(s_job);
struct amdgpu_task_info *ti;
struct amdgpu_device *adev = ring->adev;
- int idx;
- int r;
+ bool set_error = false;
+ int idx, r;
if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
dev_info(adev->dev, "%s - device unplugged skipping recovery on scheduler:%s",
@@ -136,10 +136,12 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
} else if (amdgpu_gpu_recovery && ring->funcs->reset) {
bool is_guilty;
- dev_err(adev->dev, "Starting %s ring reset\n", s_job->sched->name);
- /* stop the scheduler, but don't mess with the
- * bad job yet because if ring reset fails
- * we'll fall back to full GPU reset.
+ dev_err(adev->dev, "Starting %s ring reset\n",
+ s_job->sched->name);
+
+ /*
+ * Stop the scheduler to prevent anybody else from touching the
+ * ring buffer.
*/
drm_sched_wqueue_stop(&ring->sched);
@@ -152,26 +154,29 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
else
is_guilty = true;
- if (is_guilty)
+ if (is_guilty) {
dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
+ set_error = true;
+ }
r = amdgpu_ring_reset(ring, job->vmid, NULL);
if (!r) {
- if (amdgpu_ring_sched_ready(ring))
- drm_sched_stop(&ring->sched, s_job);
if (is_guilty) {
atomic_inc(&ring->adev->gpu_reset_counter);
amdgpu_fence_driver_force_completion(ring);
}
- if (amdgpu_ring_sched_ready(ring))
- drm_sched_start(&ring->sched, 0);
- dev_err(adev->dev, "Ring %s reset succeeded\n", ring->sched.name);
- drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE);
+ drm_sched_wqueue_start(&ring->sched);
+ dev_err(adev->dev, "Ring %s reset succeeded\n",
+ ring->sched.name);
+ drm_dev_wedged_event(adev_to_drm(adev),
+ DRM_WEDGE_RECOVERY_NONE);
goto exit;
}
- dev_err(adev->dev, "Ring %s reset failure\n", ring->sched.name);
+ dev_err(adev->dev, "Ring %s reset failed\n", ring->sched.name);
}
- dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
+
+ if (!set_error)
+ dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
if (amdgpu_device_should_recover_gpu(ring->adev)) {
struct amdgpu_reset_context reset_context;
--
2.49.0
* [PATCH 10/36] drm/amdgpu: move force completion into ring resets
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (8 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 09/36] drm/amdgpu: rework queue reset scheduler interaction Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 3:07 ` [PATCH 11/36] drm/amdgpu: move guilty handling " Alex Deucher
` (25 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Move the force completion handling into each ring
reset function so that each engine can determine
whether or not it needs to force completion on the
jobs in the ring.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 4 +--
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 12 +++++++--
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 12 +++++++--
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 12 +++++++--
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 7 +++++-
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 7 +++++-
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c | 8 +++++-
drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c | 8 +++++-
drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 8 +++++-
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c | 8 +++++-
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c | 8 +++++-
drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c | 8 +++++-
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 31 +++++++++++++++++++++---
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 5 +++-
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 5 +++-
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 6 ++++-
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 6 ++++-
drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c | 7 +++++-
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c | 6 +++--
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c | 7 +++++-
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c | 7 +++++-
21 files changed, 152 insertions(+), 30 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 93413be59e08f..177f04491a11b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -161,10 +161,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
r = amdgpu_ring_reset(ring, job->vmid, NULL);
if (!r) {
- if (is_guilty) {
+ if (is_guilty)
atomic_inc(&ring->adev->gpu_reset_counter);
- amdgpu_fence_driver_force_completion(ring);
- }
drm_sched_wqueue_start(&ring->sched);
dev_err(adev->dev, "Ring %s reset succeeded\n",
ring->sched.name);
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 444753b0ac885..b4f4ad966db82 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -9577,7 +9577,11 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
return r;
}
- return amdgpu_ring_test_ring(ring);
+ r = amdgpu_ring_test_ring(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
@@ -9650,7 +9654,11 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
if (r)
return r;
- return amdgpu_ring_test_ring(ring);
+ r = amdgpu_ring_test_ring(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static void gfx_v10_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 4293f2a1b9bfb..5707ce7dd5c82 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -6842,7 +6842,11 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
return r;
}
- return amdgpu_ring_test_ring(ring);
+ r = amdgpu_ring_test_ring(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static int gfx_v11_0_reset_compute_pipe(struct amdgpu_ring *ring)
@@ -7004,7 +7008,11 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
return r;
}
- return amdgpu_ring_test_ring(ring);
+ r = amdgpu_ring_test_ring(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static void gfx_v11_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index aea21ef177d05..259a83c3acb5d 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@ -5337,7 +5337,11 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
return r;
}
- return amdgpu_ring_test_ring(ring);
+ r = amdgpu_ring_test_ring(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static int gfx_v12_0_reset_compute_pipe(struct amdgpu_ring *ring)
@@ -5452,7 +5456,11 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
return r;
}
- return amdgpu_ring_test_ring(ring);
+ r = amdgpu_ring_test_ring(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static void gfx_v12_0_ring_begin_use(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index c0ffe7afca9b8..e0dec946b7cdc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -7223,7 +7223,12 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
DRM_ERROR("fail to remap queue\n");
return r;
}
- return amdgpu_ring_test_ring(ring);
+
+ r = amdgpu_ring_test_ring(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static void gfx_v9_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
index 79d4ae0645ffc..e5fcc63cd99df 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
@@ -3620,7 +3620,12 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
dev_err(adev->dev, "fail to remap queue\n");
return r;
}
- return amdgpu_ring_test_ring(ring);
+
+ r = amdgpu_ring_test_ring(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
enum amdgpu_gfx_cp_ras_mem_id {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
index 4c1ff6d0e14ea..0b1fa35a441ae 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
@@ -768,9 +768,15 @@ static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
+ int r;
+
jpeg_v2_0_stop(ring->adev);
jpeg_v2_0_start(ring->adev);
- return amdgpu_ring_test_helper(ring);
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static const struct amd_ip_funcs jpeg_v2_0_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
index 5a18b8644de2f..7a9e91f6495de 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
@@ -647,9 +647,15 @@ static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
+ int r;
+
jpeg_v2_5_stop_inst(ring->adev, ring->me);
jpeg_v2_5_start_inst(ring->adev, ring->me);
- return amdgpu_ring_test_helper(ring);
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static const struct amd_ip_funcs jpeg_v2_5_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
index 4963feddefae5..81ee1ba4c0a3c 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
@@ -559,9 +559,15 @@ static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
+ int r;
+
jpeg_v3_0_stop(ring->adev);
jpeg_v3_0_start(ring->adev);
- return amdgpu_ring_test_helper(ring);
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static const struct amd_ip_funcs jpeg_v3_0_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
index 327adb474b0d3..06f75091e1304 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
@@ -724,12 +724,18 @@ static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
+ int r;
+
if (amdgpu_sriov_vf(ring->adev))
return -EINVAL;
jpeg_v4_0_stop(ring->adev);
jpeg_v4_0_start(ring->adev);
- return amdgpu_ring_test_helper(ring);
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static const struct amd_ip_funcs jpeg_v4_0_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
index c951b4b170c5b..10a7b990b0adf 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
@@ -1147,12 +1147,18 @@ static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
+ int r;
+
if (amdgpu_sriov_vf(ring->adev))
return -EOPNOTSUPP;
jpeg_v4_0_3_core_stall_reset(ring);
jpeg_v4_0_3_start_jrbc(ring);
- return amdgpu_ring_test_helper(ring);
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static const struct amd_ip_funcs jpeg_v4_0_3_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
index 51ae62c24c49e..88dea7a47a1e5 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
@@ -838,12 +838,18 @@ static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
+ int r;
+
if (amdgpu_sriov_vf(ring->adev))
return -EOPNOTSUPP;
jpeg_v5_0_1_core_stall_reset(ring);
jpeg_v5_0_1_init_jrbc(ring);
- return amdgpu_ring_test_helper(ring);
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static const struct amd_ip_funcs jpeg_v5_0_1_ip_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index 502d71f678922..d3cb4dbae790b 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -1678,6 +1678,7 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
+ bool is_guilty = ring->funcs->is_guilty(ring);
struct amdgpu_device *adev = ring->adev;
u32 id = ring->me;
int r;
@@ -1688,8 +1689,13 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
amdgpu_amdkfd_suspend(adev, true);
r = amdgpu_sdma_reset_engine(adev, id);
amdgpu_amdkfd_resume(adev, true);
+ if (r)
+ return r;
- return r;
+ if (is_guilty)
+ amdgpu_fence_driver_force_completion(ring);
+
+ return 0;
}
static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
@@ -1733,8 +1739,8 @@ static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
- u32 inst_mask;
- int i;
+ u32 inst_mask, tmp_mask;
+ int i, r;
inst_mask = 1 << ring->me;
udelay(50);
@@ -1751,7 +1757,24 @@ static int sdma_v4_4_2_restore_queue(struct amdgpu_ring *ring)
return -ETIMEDOUT;
}
- return sdma_v4_4_2_inst_start(adev, inst_mask, true);
+ r = sdma_v4_4_2_inst_start(adev, inst_mask, true);
+ if (r)
+ return r;
+
+ tmp_mask = inst_mask;
+ for_each_inst(i, tmp_mask) {
+ ring = &adev->sdma.instance[i].ring;
+
+ amdgpu_fence_driver_force_completion(ring);
+
+ if (adev->sdma.has_page_queue) {
+ struct amdgpu_ring *page = &adev->sdma.instance[i].page;
+
+ amdgpu_fence_driver_force_completion(page);
+ }
+ }
+
+ return r;
}
static int sdma_v4_4_2_soft_reset_engine(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 6092e2a9e210b..07fe6ba1612fd 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -1618,7 +1618,10 @@ static int sdma_v5_0_restore_queue(struct amdgpu_ring *ring)
r = sdma_v5_0_gfx_resume_instance(adev, inst_id, true);
amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
- return r;
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static int sdma_v5_0_ring_preempt_ib(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index 2cdcf28881c3d..45f8b04324a1b 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -1534,7 +1534,10 @@ static int sdma_v5_2_restore_queue(struct amdgpu_ring *ring)
r = sdma_v5_2_gfx_resume_instance(adev, inst_id, true);
amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
- return r;
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static int sdma_v5_2_ring_preempt_ib(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index 43bb4a7456b90..746f14862d9ff 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1561,7 +1561,11 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
if (r)
return r;
- return sdma_v6_0_gfx_resume_instance(adev, i, true);
+ r = sdma_v6_0_gfx_resume_instance(adev, i, true);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static int sdma_v6_0_set_trap_irq_state(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index b5c168cb1354d..2e4c658598001 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -826,7 +826,11 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
if (r)
return r;
- return sdma_v7_0_gfx_resume_instance(adev, i, true);
+ r = sdma_v7_0_gfx_resume_instance(adev, i, true);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
/**
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
index 083fde15e83a1..0d73b2bd4aad6 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
@@ -1973,6 +1973,7 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
+ int r;
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
@@ -1980,7 +1981,11 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
vcn_v4_0_stop(vinst);
vcn_v4_0_start(vinst);
- return amdgpu_ring_test_helper(ring);
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static struct amdgpu_ring_funcs vcn_v4_0_unified_ring_vm_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
index 57c59c4868a50..bf9edfef2107e 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
@@ -1623,8 +1623,10 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
vcn_v4_0_3_hw_init_inst(vinst);
vcn_v4_0_3_start_dpg_mode(vinst, adev->vcn.inst[ring->me].indirect_sram);
r = amdgpu_ring_test_helper(ring);
-
- return r;
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static const struct amdgpu_ring_funcs vcn_v4_0_3_unified_ring_vm_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
index 4aad7d2e36379..3a3ed600e15f0 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
@@ -1471,6 +1471,7 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
+ int r;
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
@@ -1478,7 +1479,11 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
vcn_v4_0_5_stop(vinst);
vcn_v4_0_5_start(vinst);
- return amdgpu_ring_test_helper(ring);
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static struct amdgpu_ring_funcs vcn_v4_0_5_unified_ring_vm_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
index b9c8a2b8c5e0d..c7953116ad532 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
@@ -1198,6 +1198,7 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
+ int r;
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
@@ -1205,7 +1206,11 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
vcn_v5_0_0_stop(vinst);
vcn_v5_0_0_start(vinst);
- return amdgpu_ring_test_helper(ring);
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+ amdgpu_fence_driver_force_completion(ring);
+ return 0;
}
static const struct amdgpu_ring_funcs vcn_v5_0_0_unified_ring_vm_funcs = {
--
2.49.0
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH 11/36] drm/amdgpu: move guilty handling into ring resets
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (9 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 10/36] drm/amdgpu: move force completion into ring resets Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 12:28 ` Christian König
2025-06-17 3:07 ` [PATCH 12/36] drm/amdgpu: move scheduler wqueue handling into callbacks Alex Deucher
` (24 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Move guilty logic into the ring reset callbacks. This
allows each ring reset callback to better handle fence
errors and force completions in line with the reset
behavior for each IP. It also allows us to remove
the ring guilty callback since that logic now lives
in the reset callback.
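The control-flow change can be illustrated with a small user-space sketch. Everything below (struct names, `force_completion()`, the `hw_selected` flag) is an invented stand-in for the real amdgpu types, not the kernel API; it only shows how guilt detection and fence error handling now live inside the per-IP reset callback rather than the generic timeout handler:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

struct fence {
	int error;
	bool signaled;
};

struct ring {
	struct fence *pending;	/* fence of the hung job, if any */
	bool hw_selected;	/* did the hw flag this queue as hung? */
};

/* stand-in for amdgpu_fence_driver_force_completion() + error set */
static void force_completion(struct fence *f, int error)
{
	f->error = error;
	f->signaled = true;
}

/* The reset callback now checks guilt itself and only marks the job
 * bad when its queue was actually the one that hung. */
static int ring_reset(struct ring *r)
{
	bool guilty = r->hw_selected;	/* was ring->funcs->is_guilty() */

	/* ... hardware queue reset would happen here ... */
	if (guilty && r->pending)
		force_completion(r->pending, -ETIME);
	return 0;
}
```

With this shape, an engine reset that touches an innocent queue no longer forces an error onto that queue's fences.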
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 23 ++----------------
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 1 -
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 30 +-----------------------
3 files changed, 3 insertions(+), 51 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 177f04491a11b..3b7d3844a74bc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -91,7 +91,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
struct amdgpu_job *job = to_amdgpu_job(s_job);
struct amdgpu_task_info *ti;
struct amdgpu_device *adev = ring->adev;
- bool set_error = false;
int idx, r;
if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
@@ -134,8 +133,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
if (unlikely(adev->debug_disable_gpu_ring_reset)) {
dev_err(adev->dev, "Ring reset disabled by debug mask\n");
} else if (amdgpu_gpu_recovery && ring->funcs->reset) {
- bool is_guilty;
-
dev_err(adev->dev, "Starting %s ring reset\n",
s_job->sched->name);
@@ -145,24 +142,9 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
*/
drm_sched_wqueue_stop(&ring->sched);
- /* for engine resets, we need to reset the engine,
- * but individual queues may be unaffected.
- * check here to make sure the accounting is correct.
- */
- if (ring->funcs->is_guilty)
- is_guilty = ring->funcs->is_guilty(ring);
- else
- is_guilty = true;
-
- if (is_guilty) {
- dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
- set_error = true;
- }
-
r = amdgpu_ring_reset(ring, job->vmid, NULL);
if (!r) {
- if (is_guilty)
- atomic_inc(&ring->adev->gpu_reset_counter);
+ atomic_inc(&ring->adev->gpu_reset_counter);
drm_sched_wqueue_start(&ring->sched);
dev_err(adev->dev, "Ring %s reset succeeded\n",
ring->sched.name);
@@ -173,8 +155,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
dev_err(adev->dev, "Ring %s reset failed\n", ring->sched.name);
}
- if (!set_error)
- dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
+ dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
if (amdgpu_device_should_recover_gpu(ring->adev)) {
struct amdgpu_reset_context reset_context;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index fc36b86c6dcf8..6aaa9d0c1f25c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -271,7 +271,6 @@ struct amdgpu_ring_funcs {
int (*reset)(struct amdgpu_ring *ring, unsigned int vmid,
struct amdgpu_fence *guilty_fence);
void (*emit_cleaner_shader)(struct amdgpu_ring *ring);
- bool (*is_guilty)(struct amdgpu_ring *ring);
};
/**
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index d3cb4dbae790b..61274579b3452 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -1655,30 +1655,10 @@ static bool sdma_v4_4_2_is_queue_selected(struct amdgpu_device *adev, uint32_t i
return (context_status & SDMA_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
}
-static bool sdma_v4_4_2_ring_is_guilty(struct amdgpu_ring *ring)
-{
- struct amdgpu_device *adev = ring->adev;
- uint32_t instance_id = ring->me;
-
- return sdma_v4_4_2_is_queue_selected(adev, instance_id, false);
-}
-
-static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
-{
- struct amdgpu_device *adev = ring->adev;
- uint32_t instance_id = ring->me;
-
- if (!adev->sdma.has_page_queue)
- return false;
-
- return sdma_v4_4_2_is_queue_selected(adev, instance_id, true);
-}
-
static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
- bool is_guilty = ring->funcs->is_guilty(ring);
struct amdgpu_device *adev = ring->adev;
u32 id = ring->me;
int r;
@@ -1689,13 +1669,7 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
amdgpu_amdkfd_suspend(adev, true);
r = amdgpu_sdma_reset_engine(adev, id);
amdgpu_amdkfd_resume(adev, true);
- if (r)
- return r;
-
- if (is_guilty)
- amdgpu_fence_driver_force_completion(ring);
-
- return 0;
+ return r;
}
static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
@@ -2180,7 +2154,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_ring_funcs = {
.emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
.reset = sdma_v4_4_2_reset_queue,
- .is_guilty = sdma_v4_4_2_ring_is_guilty,
};
static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
@@ -2213,7 +2186,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
.emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
.reset = sdma_v4_4_2_reset_queue,
- .is_guilty = sdma_v4_4_2_page_ring_is_guilty,
};
static void sdma_v4_4_2_set_ring_funcs(struct amdgpu_device *adev)
--
2.49.0
* [PATCH 12/36] drm/amdgpu: move scheduler wqueue handling into callbacks
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (10 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 11/36] drm/amdgpu: move guilty handling " Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-17 3:07 ` [PATCH 13/36] drm/amdgpu: track ring state associated with a job Alex Deucher
` (23 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Move the scheduler wqueue stopping and starting into
the ring reset callbacks. On some IPs we have to reset
an engine which may have multiple queues. Move the wqueue
handling into the backend so we can handle the affected
queues as needed based on the type of reset available.
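The pattern each callback now follows can be sketched in user space as below. The `sched` type and the stop/start helpers are illustrative stand-ins for `drm_sched_wqueue_stop()`/`drm_sched_wqueue_start()`, not the real DRM scheduler API; the hardware reset is reduced to a result code:

```c
#include <assert.h>
#include <stdbool.h>

struct sched {
	bool running;
};

static void wqueue_stop(struct sched *s)  { s->running = false; }
static void wqueue_start(struct sched *s) { s->running = true; }

/* Returns 0 on success.  On failure the work queue stays stopped so
 * the caller can escalate, e.g. to a full adapter reset. */
static int reset_queue(struct sched *s, int hw_reset_result)
{
	int r;

	wqueue_stop(s);		/* no new jobs while the queue resets */
	r = hw_reset_result;	/* stand-in for the actual hw reset */
	if (r)
		return r;	/* leave the wqueue stopped on failure */
	wqueue_start(s);	/* resume submissions */
	return 0;
}
```

Keeping stop/start inside the callback lets engine-level resets stop every queue on the engine, not just the one that timed out.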
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 8 --------
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c | 17 ++++-------------
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 6 ++++++
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 6 ++++++
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 6 ++++++
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 +++
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 3 +++
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c | 2 ++
drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c | 2 ++
drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 2 ++
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c | 2 ++
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c | 2 ++
drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c | 2 ++
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 3 +++
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 3 +++
drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c | 2 ++
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c | 3 +++
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c | 2 ++
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c | 2 ++
19 files changed, 55 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 3b7d3844a74bc..f0b7080dccb8d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -135,17 +135,9 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
} else if (amdgpu_gpu_recovery && ring->funcs->reset) {
dev_err(adev->dev, "Starting %s ring reset\n",
s_job->sched->name);
-
- /*
- * Stop the scheduler to prevent anybody else from touching the
- * ring buffer.
- */
- drm_sched_wqueue_stop(&ring->sched);
-
r = amdgpu_ring_reset(ring, job->vmid, NULL);
if (!r) {
atomic_inc(&ring->adev->gpu_reset_counter);
- drm_sched_wqueue_start(&ring->sched);
dev_err(adev->dev, "Ring %s reset succeeded\n",
ring->sched.name);
drm_dev_wedged_event(adev_to_drm(adev),
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
index cf5733d5d26dd..7e26a44dcc1fd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c
@@ -554,22 +554,16 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
struct amdgpu_sdma_instance *sdma_instance = &adev->sdma.instance[instance_id];
struct amdgpu_ring *gfx_ring = &sdma_instance->ring;
struct amdgpu_ring *page_ring = &sdma_instance->page;
- bool gfx_sched_stopped = false, page_sched_stopped = false;
mutex_lock(&sdma_instance->engine_reset_mutex);
/* Stop the scheduler's work queue for the GFX and page rings if they are running.
* This ensures that no new tasks are submitted to the queues while
* the reset is in progress.
*/
- if (!amdgpu_ring_sched_ready(gfx_ring)) {
- drm_sched_wqueue_stop(&gfx_ring->sched);
- gfx_sched_stopped = true;
- }
+ drm_sched_wqueue_stop(&gfx_ring->sched);
- if (adev->sdma.has_page_queue && !amdgpu_ring_sched_ready(page_ring)) {
+ if (adev->sdma.has_page_queue)
drm_sched_wqueue_stop(&page_ring->sched);
- page_sched_stopped = true;
- }
if (sdma_instance->funcs->stop_kernel_queue) {
sdma_instance->funcs->stop_kernel_queue(gfx_ring);
@@ -596,12 +590,9 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
* to be submitted to the queues after the reset is complete.
*/
if (!ret) {
- if (gfx_sched_stopped && amdgpu_ring_sched_ready(gfx_ring)) {
- drm_sched_wqueue_start(&gfx_ring->sched);
- }
- if (page_sched_stopped && amdgpu_ring_sched_ready(page_ring)) {
+ drm_sched_wqueue_start(&gfx_ring->sched);
+ if (adev->sdma.has_page_queue)
drm_sched_wqueue_start(&page_ring->sched);
- }
}
mutex_unlock(&sdma_instance->engine_reset_mutex);
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index b4f4ad966db82..120d0d4f03a56 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -9540,6 +9540,8 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
return -EINVAL;
+ drm_sched_wqueue_stop(&ring->sched);
+
spin_lock_irqsave(&kiq->ring_lock, flags);
if (amdgpu_ring_alloc(kiq_ring, 5 + 7 + 7 + kiq->pmf->map_queues_size)) {
@@ -9581,6 +9583,7 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
@@ -9600,6 +9603,8 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
return -EINVAL;
+ drm_sched_wqueue_stop(&ring->sched);
+
spin_lock_irqsave(&kiq->ring_lock, flags);
if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
@@ -9658,6 +9663,7 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 5707ce7dd5c82..75196c0ba84b9 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -6821,6 +6821,8 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(adev))
return -EINVAL;
+ drm_sched_wqueue_stop(&ring->sched);
+
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false);
if (r) {
@@ -6846,6 +6848,7 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
@@ -6989,6 +6992,8 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(adev))
return -EINVAL;
+ drm_sched_wqueue_stop(&ring->sched);
+
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true);
if (r) {
dev_warn(adev->dev, "fail(%d) to reset kcq and try pipe reset\n", r);
@@ -7012,6 +7017,7 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index 259a83c3acb5d..543429054bfcd 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@ -5317,6 +5317,8 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(adev))
return -EINVAL;
+ drm_sched_wqueue_stop(&ring->sched);
+
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false);
if (r) {
dev_warn(adev->dev, "reset via MES failed and try pipe reset %d\n", r);
@@ -5341,6 +5343,7 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
@@ -5437,6 +5440,8 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(adev))
return -EINVAL;
+ drm_sched_wqueue_stop(&ring->sched);
+
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true);
if (r) {
dev_warn(adev->dev, "fail(%d) to reset kcq and try pipe reset\n", r);
@@ -5460,6 +5465,7 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index e0dec946b7cdc..fd43c047991e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -7168,6 +7168,8 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
return -EINVAL;
+ drm_sched_wqueue_stop(&ring->sched);
+
spin_lock_irqsave(&kiq->ring_lock, flags);
if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
@@ -7228,6 +7230,7 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
index e5fcc63cd99df..08f01f64e1c24 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
@@ -3567,6 +3567,8 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
return -EINVAL;
+ drm_sched_wqueue_stop(&ring->sched);
+
spin_lock_irqsave(&kiq->ring_lock, flags);
if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
@@ -3625,6 +3627,7 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
index 0b1fa35a441ae..2b02ecb94eeae 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
@@ -770,12 +770,14 @@ static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
{
int r;
+ drm_sched_wqueue_stop(&ring->sched);
jpeg_v2_0_stop(ring->adev);
jpeg_v2_0_start(ring->adev);
r = amdgpu_ring_test_helper(ring);
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
index 7a9e91f6495de..d8ab2a96d445e 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
@@ -649,12 +649,14 @@ static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
{
int r;
+ drm_sched_wqueue_stop(&ring->sched);
jpeg_v2_5_stop_inst(ring->adev, ring->me);
jpeg_v2_5_start_inst(ring->adev, ring->me);
r = amdgpu_ring_test_helper(ring);
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
index 81ee1ba4c0a3c..60ab0f2afeeff 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
@@ -561,12 +561,14 @@ static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
{
int r;
+ drm_sched_wqueue_stop(&ring->sched);
jpeg_v3_0_stop(ring->adev);
jpeg_v3_0_start(ring->adev);
r = amdgpu_ring_test_helper(ring);
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
index 06f75091e1304..fad64d5cccd1f 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
@@ -729,12 +729,14 @@ static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(ring->adev))
return -EINVAL;
+ drm_sched_wqueue_stop(&ring->sched);
jpeg_v4_0_stop(ring->adev);
jpeg_v4_0_start(ring->adev);
r = amdgpu_ring_test_helper(ring);
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
index 10a7b990b0adf..82ccab9cf0895 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
@@ -1152,12 +1152,14 @@ static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(ring->adev))
return -EOPNOTSUPP;
+ drm_sched_wqueue_stop(&ring->sched);
jpeg_v4_0_3_core_stall_reset(ring);
jpeg_v4_0_3_start_jrbc(ring);
r = amdgpu_ring_test_helper(ring);
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
index 88dea7a47a1e5..3ffc2a61e6bf0 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
@@ -843,12 +843,14 @@ static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(ring->adev))
return -EOPNOTSUPP;
+ drm_sched_wqueue_stop(&ring->sched);
jpeg_v5_0_1_core_stall_reset(ring);
jpeg_v5_0_1_init_jrbc(ring);
r = amdgpu_ring_test_helper(ring);
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index 746f14862d9ff..6ba8cb5995779 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1557,6 +1557,8 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
return -EINVAL;
}
+ drm_sched_wqueue_stop(&ring->sched);
+
r = amdgpu_mes_reset_legacy_queue(adev, ring, vmid, true);
if (r)
return r;
@@ -1565,6 +1567,7 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index 2e4c658598001..40416f2d03238 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -822,6 +822,8 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
return -EINVAL;
}
+ drm_sched_wqueue_stop(&ring->sched);
+
r = amdgpu_mes_reset_legacy_queue(adev, ring, vmid, true);
if (r)
return r;
@@ -830,6 +832,7 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
index 0d73b2bd4aad6..1532e9d63e132 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
@@ -1978,6 +1978,7 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
+ drm_sched_wqueue_stop(&ring->sched);
vcn_v4_0_stop(vinst);
vcn_v4_0_start(vinst);
@@ -1985,6 +1986,7 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
index bf9edfef2107e..31cd27721782f 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
@@ -1609,6 +1609,8 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
+ drm_sched_wqueue_stop(&ring->sched);
+
vcn_inst = GET_INST(VCN, ring->me);
r = amdgpu_dpm_reset_vcn(adev, 1 << vcn_inst);
@@ -1626,6 +1628,7 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
index 3a3ed600e15f0..aefa2d77a73c4 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
@@ -1476,6 +1476,7 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
+ drm_sched_wqueue_stop(&ring->sched);
vcn_v4_0_5_stop(vinst);
vcn_v4_0_5_start(vinst);
@@ -1483,6 +1484,7 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
index c7953116ad532..1de81a7541bf8 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
@@ -1203,6 +1203,7 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
+ drm_sched_wqueue_stop(&ring->sched);
vcn_v5_0_0_stop(vinst);
vcn_v5_0_0_start(vinst);
@@ -1210,6 +1211,7 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
if (r)
return r;
amdgpu_fence_driver_force_completion(ring);
+ drm_sched_wqueue_start(&ring->sched);
return 0;
}
--
2.49.0
* [PATCH 13/36] drm/amdgpu: track ring state associated with a job
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (11 preceding siblings ...)
2025-06-17 3:07 ` [PATCH 12/36] drm/amdgpu: move scheduler wqueue handling into callbacks Alex Deucher
@ 2025-06-17 3:07 ` Alex Deucher
2025-06-18 14:53 ` Christian König
2025-06-17 3:07 ` [PATCH 14/36] drm/amdgpu/gfx9: re-emit unprocessed state on kcq reset Alex Deucher
` (22 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
We need to know the wptr and sequence number associated
with a job so that we can re-emit the unprocessed state
after a ring reset. Pre-allocate storage space for
the ring buffer contents and add helpers to save off
the unprocessed state so that it can be re-emitted
after the queue is reset.
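Because wptr values increase monotonically while the ring buffer itself is a power-of-two circular buffer, the backup helper has to handle a region that straddles the end of the buffer. A user-space sketch of that wrap-around copy (names and sizes here are illustrative, not the in-kernel ones):

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 8			/* must be a power of two */
#define BUF_MASK  (RING_SIZE - 1)

/* Copy the ring contents between two wptr values into a linear backup
 * buffer starting at backup[idx]; returns the number of dwords copied. */
static unsigned int backup_region(const uint32_t *ring, uint32_t *backup,
				  unsigned int idx,
				  uint64_t start_wptr, uint32_t end_wptr)
{
	unsigned int first = start_wptr & BUF_MASK;
	unsigned int last = end_wptr & BUF_MASK;
	unsigned int i, n, copied = 0;

	if (last < first) {
		/* region wraps: copy the tail of the buffer, then the head */
		n = RING_SIZE - first;
		for (i = 0; i < n; i++)
			backup[idx + copied + i] = ring[first + i];
		copied += n;
		for (i = 0; i < last; i++)
			backup[idx + copied + i] = ring[i];
		copied += last;
	} else {
		n = last - first;
		for (i = 0; i < n; i++)
			backup[idx + i] = ring[first + i];
		copied = n;
	}
	return copied;
}
```

The re-emit path then walks the saved fences from the guilty sequence number forward, invoking a copy like this once per surviving fence interval.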
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 90 +++++++++++++++++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 14 +++-
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 4 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 59 +++++++++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 17 +++++
5 files changed, 181 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 5555f3ae08c60..b8d51ee60adcc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -120,11 +120,13 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
am_fence = kmalloc(sizeof(*am_fence), GFP_KERNEL);
if (!am_fence)
return -ENOMEM;
+ am_fence->context = 0;
} else {
am_fence = af;
}
fence = &am_fence->base;
am_fence->ring = ring;
+ am_fence->wptr = 0;
seq = ++ring->fence_drv.sync_seq;
if (af) {
@@ -253,6 +255,7 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
do {
struct dma_fence *fence, **ptr;
+ struct amdgpu_fence *am_fence;
++last_seq;
last_seq &= drv->num_fences_mask;
@@ -265,6 +268,9 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
if (!fence)
continue;
+ am_fence = container_of(fence, struct amdgpu_fence, base);
+ if (am_fence->wptr)
+ drv->last_wptr = am_fence->wptr;
dma_fence_signal(fence);
dma_fence_put(fence);
pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
@@ -725,6 +731,90 @@ void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring)
amdgpu_fence_process(ring);
}
+/**
+ * amdgpu_fence_driver_guilty_force_completion - force signal of specified sequence
+ *
+ * @fence: fence of the ring to signal
+ *
+ */
+void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *fence)
+{
+ dma_fence_set_error(&fence->base, -ETIME);
+ amdgpu_fence_write(fence->ring, fence->base.seqno);
+ amdgpu_fence_process(fence->ring);
+}
+
+void amdgpu_fence_save_wptr(struct dma_fence *fence)
+{
+ struct amdgpu_fence *am_fence = container_of(fence, struct amdgpu_fence, base);
+
+ am_fence->wptr = am_fence->ring->wptr;
+}
+
+static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring,
+ unsigned int idx,
+ u64 start_wptr, u32 end_wptr)
+{
+ unsigned int first_idx = start_wptr & ring->buf_mask;
+ unsigned int last_idx = end_wptr & ring->buf_mask;
+ unsigned int i, j, entries_to_copy;
+
+ if (last_idx < first_idx) {
+ entries_to_copy = ring->buf_mask + 1 - first_idx;
+ for (i = 0; i < entries_to_copy; i++)
+ ring->ring_backup[idx + i] = ring->ring[first_idx + i];
+ ring->ring_backup_entries_to_copy += entries_to_copy;
+ entries_to_copy = last_idx;
+ for (j = 0; j < entries_to_copy; j++)
+ ring->ring_backup[idx + i + j] = ring->ring[j];
+ ring->ring_backup_entries_to_copy += entries_to_copy;
+ } else {
+ entries_to_copy = last_idx - first_idx;
+ for (i = 0; i < entries_to_copy; i++)
+ ring->ring_backup[idx + i] = ring->ring[first_idx + i];
+ ring->ring_backup_entries_to_copy += entries_to_copy;
+ }
+}
+
+void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
+ struct amdgpu_fence *guilty_fence)
+{
+ struct amdgpu_fence *fence;
+ struct dma_fence *unprocessed, **ptr;
+ u64 wptr, i, seqno;
+
+ if (guilty_fence) {
+ seqno = guilty_fence->base.seqno;
+ wptr = guilty_fence->wptr;
+ } else {
+ seqno = amdgpu_fence_read(ring);
+ wptr = ring->fence_drv.last_wptr;
+ }
+ ring->ring_backup_entries_to_copy = 0;
+ for (i = seqno + 1; i <= ring->fence_drv.sync_seq; ++i) {
+ ptr = &ring->fence_drv.fences[i & ring->fence_drv.num_fences_mask];
+ rcu_read_lock();
+ unprocessed = rcu_dereference(*ptr);
+
+ if (unprocessed && !dma_fence_is_signaled(unprocessed)) {
+ fence = container_of(unprocessed, struct amdgpu_fence, base);
+
+ /* save everything if the ring is not guilty, otherwise
+ * just save the content from other contexts.
+ */
+ if (fence->wptr &&
+ (!guilty_fence || (fence->context != guilty_fence->context))) {
+ amdgpu_ring_backup_unprocessed_command(ring,
+ ring->ring_backup_entries_to_copy,
+ wptr,
+ fence->wptr);
+ wptr = fence->wptr;
+ }
+ }
+ rcu_read_unlock();
+ }
+}
+
/*
* Common fence implementation
*/
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
index 206b70acb29a0..4e6a598043df8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
@@ -139,7 +139,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
int vmid = AMDGPU_JOB_GET_VMID(job);
bool need_pipe_sync = false;
unsigned int cond_exec;
-
unsigned int i;
int r = 0;
@@ -156,6 +155,12 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
gds_va = job->gds_va;
init_shadow = job->init_shadow;
af = &job->hw_fence;
+ if (job->base.s_fence) {
+ struct dma_fence *finished = &job->base.s_fence->finished;
+ af->context = finished->context;
+ } else {
+ af->context = 0;
+ }
} else {
vm = NULL;
fence_ctx = 0;
@@ -309,6 +314,13 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
amdgpu_ring_ib_end(ring);
amdgpu_ring_commit(ring);
+
+ /* This must be last for resets to work properly
+ * as we need to save the wptr associated with this
+ * fence.
+ */
+ amdgpu_fence_save_wptr(*f);
+
return 0;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index f0b7080dccb8d..45febdc2f3493 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -89,8 +89,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
{
struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
struct amdgpu_job *job = to_amdgpu_job(s_job);
- struct amdgpu_task_info *ti;
struct amdgpu_device *adev = ring->adev;
+ struct amdgpu_task_info *ti;
int idx, r;
if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
@@ -135,7 +135,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
} else if (amdgpu_gpu_recovery && ring->funcs->reset) {
dev_err(adev->dev, "Starting %s ring reset\n",
s_job->sched->name);
- r = amdgpu_ring_reset(ring, job->vmid, NULL);
+ r = amdgpu_ring_reset(ring, job->vmid, &job->hw_fence);
if (!r) {
atomic_inc(&ring->adev->gpu_reset_counter);
dev_err(adev->dev, "Ring %s reset succeeded\n",
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
index 426834806fbf2..0985eba010e17 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
@@ -333,6 +333,12 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
/* Initialize cached_rptr to 0 */
ring->cached_rptr = 0;
+ if (!ring->ring_backup) {
+ ring->ring_backup = kvzalloc(ring->ring_size, GFP_KERNEL);
+ if (!ring->ring_backup)
+ return -ENOMEM;
+ }
+
/* Allocate ring buffer */
if (ring->ring_obj == NULL) {
r = amdgpu_bo_create_kernel(adev, ring->ring_size + ring->funcs->extra_dw, PAGE_SIZE,
@@ -342,6 +348,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
(void **)&ring->ring);
if (r) {
dev_err(adev->dev, "(%d) ring create failed\n", r);
+ kvfree(ring->ring_backup);
return r;
}
amdgpu_ring_clear_ring(ring);
@@ -385,6 +392,8 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring)
amdgpu_bo_free_kernel(&ring->ring_obj,
&ring->gpu_addr,
(void **)&ring->ring);
+ kvfree(ring->ring_backup);
+ ring->ring_backup = NULL;
dma_fence_put(ring->vmid_wait);
ring->vmid_wait = NULL;
@@ -753,3 +762,53 @@ bool amdgpu_ring_sched_ready(struct amdgpu_ring *ring)
return true;
}
+
+static int amdgpu_ring_reemit_unprocessed_commands(struct amdgpu_ring *ring)
+{
+ unsigned int i;
+ int r;
+
+ /* re-emit the unprocessed ring contents */
+ if (ring->ring_backup_entries_to_copy) {
+ r = amdgpu_ring_alloc(ring, ring->ring_backup_entries_to_copy);
+ if (r)
+ return r;
+ for (i = 0; i < ring->ring_backup_entries_to_copy; i++)
+ amdgpu_ring_write(ring, ring->ring_backup[i]);
+ amdgpu_ring_commit(ring);
+ }
+
+ return 0;
+}
+
+void amdgpu_ring_reset_helper_begin(struct amdgpu_ring *ring,
+ struct amdgpu_fence *guilty_fence)
+{
+ /* Stop the scheduler to prevent anybody else from touching the ring buffer. */
+ drm_sched_wqueue_stop(&ring->sched);
+ /* back up the non-guilty commands */
+ amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
+}
+
+int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring,
+ struct amdgpu_fence *guilty_fence)
+{
+ int r;
+
+ /* verify that the ring is functional */
+ r = amdgpu_ring_test_ring(ring);
+ if (r)
+ return r;
+
+ /* signal the fence of the bad job */
+ if (guilty_fence)
+ amdgpu_fence_driver_guilty_force_completion(guilty_fence);
+ /* Re-emit the non-guilty commands */
+ r = amdgpu_ring_reemit_unprocessed_commands(ring);
+ if (r)
+ /* if we fail to reemit, force complete all fences */
+ amdgpu_fence_driver_force_completion(ring);
+ /* Start the scheduler again */
+ drm_sched_wqueue_start(&ring->sched);
+ return 0;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
index 6aaa9d0c1f25c..dcf20adda2f36 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
@@ -118,6 +118,7 @@ struct amdgpu_fence_driver {
/* sync_seq is protected by ring emission lock */
uint32_t sync_seq;
atomic_t last_seq;
+ u64 last_wptr;
bool initialized;
struct amdgpu_irq_src *irq_src;
unsigned irq_type;
@@ -141,6 +142,11 @@ struct amdgpu_fence {
/* RB, DMA, etc. */
struct amdgpu_ring *ring;
ktime_t start_timestamp;
+
+ /* wptr for the fence for resets */
+ u64 wptr;
+ /* fence context for resets */
+ u64 context;
};
extern const struct drm_sched_backend_ops amdgpu_sched_ops;
@@ -148,6 +154,8 @@ extern const struct drm_sched_backend_ops amdgpu_sched_ops;
void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error);
void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);
+void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *fence);
+void amdgpu_fence_save_wptr(struct dma_fence *fence);
int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring);
int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
@@ -284,6 +292,9 @@ struct amdgpu_ring {
struct amdgpu_bo *ring_obj;
uint32_t *ring;
+ /* backups for resets */
+ uint32_t *ring_backup;
+ unsigned int ring_backup_entries_to_copy;
unsigned rptr_offs;
u64 rptr_gpu_addr;
volatile u32 *rptr_cpu_addr;
@@ -550,4 +561,10 @@ int amdgpu_ib_pool_init(struct amdgpu_device *adev);
void amdgpu_ib_pool_fini(struct amdgpu_device *adev);
int amdgpu_ib_ring_tests(struct amdgpu_device *adev);
bool amdgpu_ring_sched_ready(struct amdgpu_ring *ring);
+void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
+ struct amdgpu_fence *guilty_fence);
+void amdgpu_ring_reset_helper_begin(struct amdgpu_ring *ring,
+ struct amdgpu_fence *guilty_fence);
+int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring,
+ struct amdgpu_fence *guilty_fence);
#endif
--
2.49.0
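The backup helper above relies on the ring size being a power of two: masking the monotonically increasing write pointers with `buf_mask` yields buffer indices, and a range that wraps past the end of the buffer is copied in two runs. A minimal user-space sketch of that copy logic (ring size, names and signature here are illustrative stand-ins, not the driver's):

```c
#include <assert.h>

#define RING_SIZE 8U                /* power of two, like amdgpu ring sizes */
#define BUF_MASK  (RING_SIZE - 1U)

/* Stand-in for amdgpu_ring_backup_unprocessed_command(): copy the
 * entries between two write pointers out of a wrapping ring buffer
 * into a linear backup array starting at backup[idx].  Returns the
 * number of entries copied. */
static unsigned int backup_range(const unsigned int *ring,
                                 unsigned int *backup, unsigned int idx,
                                 unsigned long long start_wptr,
                                 unsigned int end_wptr)
{
    unsigned int first = (unsigned int)(start_wptr & BUF_MASK);
    unsigned int last = end_wptr & BUF_MASK;
    unsigned int copied = 0, i;

    if (last < first) {
        /* range wraps: copy to the end of the buffer, then from 0 */
        for (i = first; i < RING_SIZE; i++)
            backup[idx + copied++] = ring[i];
        for (i = 0; i < last; i++)
            backup[idx + copied++] = ring[i];
    } else {
        for (i = first; i < last; i++)
            backup[idx + copied++] = ring[i];
    }
    return copied;
}
```

With RING_SIZE 8, backing up the wptr range [6, 10) copies entries 6 and 7 followed by 0 and 1, mirroring the two-run copy in the patch.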
^ permalink raw reply related [flat|nested] 59+ messages in thread
* [PATCH 14/36] drm/amdgpu/gfx9: re-emit unprocessed state on kcq reset
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index fd43c047991e7..d6b6c1ad6636d 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -7168,7 +7168,7 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
return -EINVAL;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
spin_lock_irqsave(&kiq->ring_lock, flags);
@@ -7219,19 +7219,13 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
}
kiq->pmf->kiq_map_queues(kiq_ring, ring);
amdgpu_ring_commit(kiq_ring);
- spin_unlock_irqrestore(&kiq->ring_lock, flags);
r = amdgpu_ring_test_ring(kiq_ring);
+ spin_unlock_irqrestore(&kiq->ring_lock, flags);
if (r) {
DRM_ERROR("fail to remap queue\n");
return r;
}
-
- r = amdgpu_ring_test_ring(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static void gfx_v9_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
--
2.49.0
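The helper flow used here signals the guilty job's fence before re-emitting the survivors: amdgpu_fence_driver_guilty_force_completion() sets -ETIME on the fence, writes its seqno to the fence memory, and lets normal fence processing signal everything up to it. A toy user-space model of that ordering (struct layout and error value are illustrative, not the driver's types):

```c
#include <assert.h>

struct toy_fence {
    unsigned int seqno;
    int signaled;
    int error;              /* 0, or a negative errno-style code */
};

/* Model of amdgpu_fence_process(): signal every fence whose seqno is
 * at or below the value last written to the fence memory. */
static void fence_process(struct toy_fence *fences, int n,
                          unsigned int hw_seq)
{
    int i;

    for (i = 0; i < n; i++)
        if (fences[i].seqno <= hw_seq)
            fences[i].signaled = 1;
}

/* Model of amdgpu_fence_driver_guilty_force_completion(): mark the
 * guilty fence with an error, pretend the hardware completed its
 * seqno, then run normal processing so earlier fences signal too. */
static void guilty_force_completion(struct toy_fence *fences, int n,
                                    struct toy_fence *guilty)
{
    guilty->error = -62;    /* stands in for -ETIME */
    fence_process(fences, n, guilty->seqno);
}
```

Fences past the guilty seqno stay unsignaled; those are exactly the ones the re-emit path replays rather than force-completes.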
* [PATCH 15/36] drm/amdgpu/gfx9.4.3: re-emit unprocessed state on kcq reset
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
index 08f01f64e1c24..e4b84a9a1ef3f 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
@@ -3567,7 +3567,7 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
return -EINVAL;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
spin_lock_irqsave(&kiq->ring_lock, flags);
@@ -3615,20 +3615,14 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
}
kiq->pmf->kiq_map_queues(kiq_ring, ring);
amdgpu_ring_commit(kiq_ring);
- spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
r = amdgpu_ring_test_ring(kiq_ring);
+ spin_unlock_irqrestore(&kiq->ring_lock, flags);
if (r) {
dev_err(adev->dev, "fail to remap queue\n");
return r;
}
- r = amdgpu_ring_test_ring(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
enum amdgpu_gfx_cp_ras_mem_id {
--
2.49.0
* [PATCH 16/36] drm/amdgpu/gfx10: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 45 ++++----------------------
1 file changed, 7 insertions(+), 38 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 120d0d4f03a56..7a203d8cee12e 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -9046,21 +9046,6 @@ static void gfx_v10_0_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
ref, mask);
}
-static void gfx_v10_0_ring_soft_recovery(struct amdgpu_ring *ring,
- unsigned int vmid)
-{
- struct amdgpu_device *adev = ring->adev;
- uint32_t value = 0;
-
- value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
- value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
- value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
- value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
- amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
- WREG32_SOC15(GC, 0, mmSQ_CMD, value);
- amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
-}
-
static void
gfx_v10_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
uint32_t me, uint32_t pipe,
@@ -9540,7 +9525,7 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
return -EINVAL;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
spin_lock_irqsave(&kiq->ring_lock, flags);
@@ -9566,10 +9551,8 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
SOC15_REG_OFFSET(GC, 0, mmCP_VMID_RESET), 0, 0xffffffff);
kiq->pmf->kiq_map_queues(kiq_ring, ring);
amdgpu_ring_commit(kiq_ring);
-
- spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
r = amdgpu_ring_test_ring(kiq_ring);
+ spin_unlock_irqrestore(&kiq->ring_lock, flags);
if (r)
return r;
@@ -9579,12 +9562,7 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
return r;
}
- r = amdgpu_ring_test_ring(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
@@ -9603,7 +9581,7 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
return -EINVAL;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
spin_lock_irqsave(&kiq->ring_lock, flags);
@@ -9615,9 +9593,8 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
kiq->pmf->kiq_unmap_queues(kiq_ring, ring, RESET_QUEUES,
0, 0);
amdgpu_ring_commit(kiq_ring);
- spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
r = amdgpu_ring_test_ring(kiq_ring);
+ spin_unlock_irqrestore(&kiq->ring_lock, flags);
if (r)
return r;
@@ -9653,18 +9630,12 @@ static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
}
kiq->pmf->kiq_map_queues(kiq_ring, ring);
amdgpu_ring_commit(kiq_ring);
- spin_unlock_irqrestore(&kiq->ring_lock, flags);
-
r = amdgpu_ring_test_ring(kiq_ring);
+ spin_unlock_irqrestore(&kiq->ring_lock, flags);
if (r)
return r;
- r = amdgpu_ring_test_ring(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static void gfx_v10_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
@@ -9899,7 +9870,6 @@ static const struct amdgpu_ring_funcs gfx_v10_0_ring_funcs_gfx = {
.emit_wreg = gfx_v10_0_ring_emit_wreg,
.emit_reg_wait = gfx_v10_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v10_0_ring_emit_reg_write_reg_wait,
- .soft_recovery = gfx_v10_0_ring_soft_recovery,
.emit_mem_sync = gfx_v10_0_emit_mem_sync,
.reset = gfx_v10_0_reset_kgq,
.emit_cleaner_shader = gfx_v10_0_ring_emit_cleaner_shader,
@@ -9940,7 +9910,6 @@ static const struct amdgpu_ring_funcs gfx_v10_0_ring_funcs_compute = {
.emit_wreg = gfx_v10_0_ring_emit_wreg,
.emit_reg_wait = gfx_v10_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v10_0_ring_emit_reg_write_reg_wait,
- .soft_recovery = gfx_v10_0_ring_soft_recovery,
.emit_mem_sync = gfx_v10_0_emit_mem_sync,
.reset = gfx_v10_0_reset_kcq,
.emit_cleaner_shader = gfx_v10_0_ring_emit_cleaner_shader,
--
2.49.0
* [PATCH 17/36] drm/amdgpu/gfx11: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 35 +++-----------------------
1 file changed, 4 insertions(+), 31 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 75196c0ba84b9..98253eeaa07a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -6283,21 +6283,6 @@ static void gfx_v11_0_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
ref, mask, 0x20);
}
-static void gfx_v11_0_ring_soft_recovery(struct amdgpu_ring *ring,
- unsigned vmid)
-{
- struct amdgpu_device *adev = ring->adev;
- uint32_t value = 0;
-
- value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
- value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
- value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
- value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
- amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
- WREG32_SOC15(GC, 0, regSQ_CMD, value);
- amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
-}
-
static void
gfx_v11_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
uint32_t me, uint32_t pipe,
@@ -6821,7 +6806,7 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(adev))
return -EINVAL;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false);
if (r) {
@@ -6844,12 +6829,7 @@ static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
return r;
}
- r = amdgpu_ring_test_ring(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static int gfx_v11_0_reset_compute_pipe(struct amdgpu_ring *ring)
@@ -6992,7 +6972,7 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(adev))
return -EINVAL;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true);
if (r) {
@@ -7013,12 +6993,7 @@ static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
return r;
}
- r = amdgpu_ring_test_ring(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static void gfx_v11_ip_print(struct amdgpu_ip_block *ip_block, struct drm_printer *p)
@@ -7254,7 +7229,6 @@ static const struct amdgpu_ring_funcs gfx_v11_0_ring_funcs_gfx = {
.emit_wreg = gfx_v11_0_ring_emit_wreg,
.emit_reg_wait = gfx_v11_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v11_0_ring_emit_reg_write_reg_wait,
- .soft_recovery = gfx_v11_0_ring_soft_recovery,
.emit_mem_sync = gfx_v11_0_emit_mem_sync,
.reset = gfx_v11_0_reset_kgq,
.emit_cleaner_shader = gfx_v11_0_ring_emit_cleaner_shader,
@@ -7296,7 +7270,6 @@ static const struct amdgpu_ring_funcs gfx_v11_0_ring_funcs_compute = {
.emit_wreg = gfx_v11_0_ring_emit_wreg,
.emit_reg_wait = gfx_v11_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v11_0_ring_emit_reg_write_reg_wait,
- .soft_recovery = gfx_v11_0_ring_soft_recovery,
.emit_mem_sync = gfx_v11_0_emit_mem_sync,
.reset = gfx_v11_0_reset_kcq,
.emit_cleaner_shader = gfx_v11_0_ring_emit_cleaner_shader,
--
2.49.0
* [PATCH 18/36] drm/amdgpu/gfx12: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 35 +++-----------------------
1 file changed, 4 insertions(+), 31 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
index 543429054bfcd..2f7968360bd39 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
@@ -4690,21 +4690,6 @@ static void gfx_v12_0_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
ref, mask, 0x20);
}
-static void gfx_v12_0_ring_soft_recovery(struct amdgpu_ring *ring,
- unsigned vmid)
-{
- struct amdgpu_device *adev = ring->adev;
- uint32_t value = 0;
-
- value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
- value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
- value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
- value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
- amdgpu_gfx_rlc_enter_safe_mode(adev, 0);
- WREG32_SOC15(GC, 0, regSQ_CMD, value);
- amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
-}
-
static void
gfx_v12_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
uint32_t me, uint32_t pipe,
@@ -5317,7 +5302,7 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(adev))
return -EINVAL;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, false);
if (r) {
@@ -5339,12 +5324,7 @@ static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
return r;
}
- r = amdgpu_ring_test_ring(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static int gfx_v12_0_reset_compute_pipe(struct amdgpu_ring *ring)
@@ -5440,7 +5420,7 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
if (amdgpu_sriov_vf(adev))
return -EINVAL;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
r = amdgpu_mes_reset_legacy_queue(ring->adev, ring, vmid, true);
if (r) {
@@ -5461,12 +5441,7 @@ static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
return r;
}
- r = amdgpu_ring_test_ring(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static void gfx_v12_0_ring_begin_use(struct amdgpu_ring *ring)
@@ -5544,7 +5519,6 @@ static const struct amdgpu_ring_funcs gfx_v12_0_ring_funcs_gfx = {
.emit_wreg = gfx_v12_0_ring_emit_wreg,
.emit_reg_wait = gfx_v12_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v12_0_ring_emit_reg_write_reg_wait,
- .soft_recovery = gfx_v12_0_ring_soft_recovery,
.emit_mem_sync = gfx_v12_0_emit_mem_sync,
.reset = gfx_v12_0_reset_kgq,
.emit_cleaner_shader = gfx_v12_0_ring_emit_cleaner_shader,
@@ -5583,7 +5557,6 @@ static const struct amdgpu_ring_funcs gfx_v12_0_ring_funcs_compute = {
.emit_wreg = gfx_v12_0_ring_emit_wreg,
.emit_reg_wait = gfx_v12_0_ring_emit_reg_wait,
.emit_reg_write_reg_wait = gfx_v12_0_ring_emit_reg_write_reg_wait,
- .soft_recovery = gfx_v12_0_ring_soft_recovery,
.emit_mem_sync = gfx_v12_0_emit_mem_sync,
.reset = gfx_v12_0_reset_kcq,
.emit_cleaner_shader = gfx_v12_0_ring_emit_cleaner_shader,
--
2.49.0
* [PATCH 19/36] drm/amdgpu/sdma6: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index 6ba8cb5995779..6fee53afd6809 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1537,11 +1537,23 @@ static int sdma_v6_0_ring_preempt_ib(struct amdgpu_ring *ring)
return r;
}
+static bool sdma_v6_0_is_queue_selected(struct amdgpu_device *adev,
+ u32 instance_id)
+{
+ /* we always use queue0 for KGD */
+ u32 context_status = RREG32(sdma_v6_0_get_reg_offset(adev, instance_id,
+ regSDMA0_QUEUE0_CONTEXT_STATUS));
+
+ /* Check if the SELECTED bit is set */
+ return (context_status & SDMA0_QUEUE0_CONTEXT_STATUS__SELECTED_MASK) != 0;
+}
+
static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
+ bool is_guilty;
int i, r;
if (amdgpu_sriov_vf(adev))
@@ -1557,7 +1569,8 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
return -EINVAL;
}
- drm_sched_wqueue_stop(&ring->sched);
+ is_guilty = sdma_v6_0_is_queue_selected(adev, i);
+ amdgpu_ring_reset_helper_begin(ring, is_guilty ? guilty_fence : NULL);
r = amdgpu_mes_reset_legacy_queue(adev, ring, vmid, true);
if (r)
@@ -1566,9 +1579,8 @@ static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
r = sdma_v6_0_gfx_resume_instance(adev, i, true);
if (r)
return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+
+ return amdgpu_ring_reset_helper_end(ring, is_guilty ? guilty_fence : NULL);
}
static int sdma_v6_0_set_trap_irq_state(struct amdgpu_device *adev,
--
2.49.0
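The guilty check above only treats the hw fence as guilty when this SDMA instance's queue was actually selected at hang time. The bit test itself is a plain mask-and-compare; a sketch (the mask value below is a placeholder, the real SDMA0_QUEUE0_CONTEXT_STATUS__SELECTED_MASK comes from the generated register headers):

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder for SDMA0_QUEUE0_CONTEXT_STATUS__SELECTED_MASK; the real
 * value is defined in the auto-generated SDMA register headers. */
#define TOY_SELECTED_MASK 0x00000001u

/* Mirrors sdma_v6_0_is_queue_selected(): the instance whose queue0 was
 * executing when the hang hit has SELECTED set in CONTEXT_STATUS, so
 * only that instance's hw fence is passed in as the guilty fence. */
static int queue_is_selected(uint32_t context_status)
{
    return (context_status & TOY_SELECTED_MASK) != 0;
}
```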
* [PATCH 20/36] drm/amdgpu/sdma7: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-17 3:07 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index 40416f2d03238..2b8e9239ad0ba 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -802,11 +802,23 @@ static bool sdma_v7_0_check_soft_reset(struct amdgpu_ip_block *ip_block)
return false;
}
+static bool sdma_v7_0_is_queue_selected(struct amdgpu_device *adev,
+ uint32_t instance_id)
+{
+ /* we always use queue0 for KGD */
+ u32 context_status = RREG32(sdma_v7_0_get_reg_offset(adev, instance_id,
+ regSDMA0_QUEUE0_CONTEXT_STATUS));
+
+ /* Check if the SELECTED bit is set */
+ return (context_status & SDMA0_QUEUE0_CONTEXT_STATUS__SELECTED_MASK) != 0;
+}
+
static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
struct amdgpu_device *adev = ring->adev;
+ bool is_guilty;
int i, r;
if (amdgpu_sriov_vf(adev))
@@ -822,7 +834,8 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
return -EINVAL;
}
- drm_sched_wqueue_stop(&ring->sched);
+ is_guilty = sdma_v7_0_is_queue_selected(adev, i);
+ amdgpu_ring_reset_helper_begin(ring, is_guilty ? guilty_fence : NULL);
r = amdgpu_mes_reset_legacy_queue(adev, ring, vmid, true);
if (r)
@@ -831,9 +844,8 @@ static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
r = sdma_v7_0_gfx_resume_instance(adev, i, true);
if (r)
return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+
+ return amdgpu_ring_reset_helper_end(ring, is_guilty ? guilty_fence : NULL);
}
/**
--
2.49.0
* [PATCH 21/36] drm/amdgpu/jpeg2: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
index 2b02ecb94eeae..13bdfb1ea2646 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
@@ -768,17 +768,10 @@ static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
- int r;
-
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
jpeg_v2_0_stop(ring->adev);
jpeg_v2_0_start(ring->adev);
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static const struct amd_ip_funcs jpeg_v2_0_ip_funcs = {
--
2.49.0
* [PATCH 22/36] drm/amdgpu/jpeg2.5: re-emit unprocessed state on ring reset
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
index d8ab2a96d445e..b98d4536001dc 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
@@ -647,17 +647,10 @@ static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
- int r;
-
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
jpeg_v2_5_stop_inst(ring->adev, ring->me);
jpeg_v2_5_start_inst(ring->adev, ring->me);
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static const struct amd_ip_funcs jpeg_v2_5_ip_funcs = {
--
2.49.0
* [PATCH 23/36] drm/amdgpu/jpeg3: re-emit unprocessed state on ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (21 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 22/36] drm/amdgpu/jpeg2.5: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 24/36] drm/amdgpu/jpeg4: " Alex Deucher
` (12 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
index 60ab0f2afeeff..87b080eb0adef 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
@@ -559,17 +559,10 @@ static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
- int r;
-
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
jpeg_v3_0_stop(ring->adev);
jpeg_v3_0_start(ring->adev);
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static const struct amd_ip_funcs jpeg_v3_0_ip_funcs = {
--
2.49.0
* [PATCH 24/36] drm/amdgpu/jpeg4: re-emit unprocessed state on ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (22 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 23/36] drm/amdgpu/jpeg3: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 25/36] drm/amdgpu/jpeg4.0.3: " Alex Deucher
` (11 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
index fad64d5cccd1f..6ca8a3ae4549c 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
@@ -724,20 +724,13 @@ static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
- int r;
-
if (amdgpu_sriov_vf(ring->adev))
return -EINVAL;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
jpeg_v4_0_stop(ring->adev);
jpeg_v4_0_start(ring->adev);
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static const struct amd_ip_funcs jpeg_v4_0_ip_funcs = {
--
2.49.0
* [PATCH 25/36] drm/amdgpu/jpeg4.0.3: re-emit unprocessed state on ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (23 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 24/36] drm/amdgpu/jpeg4: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 26/36] drm/amdgpu/jpeg4.0.5: add queue reset Alex Deucher
` (10 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
index 82ccab9cf0895..3e92b099e91a2 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
@@ -1147,20 +1147,13 @@ static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
- int r;
-
if (amdgpu_sriov_vf(ring->adev))
return -EOPNOTSUPP;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
jpeg_v4_0_3_core_stall_reset(ring);
jpeg_v4_0_3_start_jrbc(ring);
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static const struct amd_ip_funcs jpeg_v4_0_3_ip_funcs = {
--
2.49.0
* [PATCH 26/36] drm/amdgpu/jpeg4.0.5: add queue reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (24 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 25/36] drm/amdgpu/jpeg4.0.3: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 27/36] drm/amdgpu/jpeg5: " Alex Deucher
` (9 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Add queue reset support for jpeg 4.0.5.
Use the new helpers to re-emit the unprocessed state
after resetting the queue.
Untested.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
index 974030a5c03c9..8d187c7f7afe9 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_5.c
@@ -767,6 +767,16 @@ static int jpeg_v4_0_5_process_interrupt(struct amdgpu_device *adev,
return 0;
}
+static int jpeg_v4_0_5_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
+{
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
+ jpeg_v4_0_5_stop(ring->adev);
+ jpeg_v4_0_5_start(ring->adev);
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
+}
+
static const struct amd_ip_funcs jpeg_v4_0_5_ip_funcs = {
.name = "jpeg_v4_0_5",
.early_init = jpeg_v4_0_5_early_init,
@@ -812,6 +822,7 @@ static const struct amdgpu_ring_funcs jpeg_v4_0_5_dec_ring_vm_funcs = {
.emit_wreg = jpeg_v2_0_dec_ring_emit_wreg,
.emit_reg_wait = jpeg_v2_0_dec_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+ .reset = jpeg_v4_0_5_ring_reset,
};
static void jpeg_v4_0_5_set_dec_ring_funcs(struct amdgpu_device *adev)
--
2.49.0
* [PATCH 27/36] drm/amdgpu/jpeg5: add queue reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (25 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 26/36] drm/amdgpu/jpeg4.0.5: add queue reset Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 28/36] drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset Alex Deucher
` (8 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Add queue reset support for jpeg 5.0.0.
Use the new helpers to re-emit the unprocessed state
after resetting the queue.
Untested.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
index 31d213ccbe0a8..339cf3a033a2e 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_0.c
@@ -644,6 +644,19 @@ static int jpeg_v5_0_0_process_interrupt(struct amdgpu_device *adev,
return 0;
}
+static int jpeg_v5_0_0_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
+{
+ if (amdgpu_sriov_vf(ring->adev))
+ return -EINVAL;
+
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
+ jpeg_v5_0_0_stop(ring->adev);
+ jpeg_v5_0_0_start(ring->adev);
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
+}
+
static const struct amd_ip_funcs jpeg_v5_0_0_ip_funcs = {
.name = "jpeg_v5_0_0",
.early_init = jpeg_v5_0_0_early_init,
@@ -689,6 +702,7 @@ static const struct amdgpu_ring_funcs jpeg_v5_0_0_dec_ring_vm_funcs = {
.emit_wreg = jpeg_v4_0_3_dec_ring_emit_wreg,
.emit_reg_wait = jpeg_v4_0_3_dec_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+ .reset = jpeg_v5_0_0_ring_reset,
};
static void jpeg_v5_0_0_set_dec_ring_funcs(struct amdgpu_device *adev)
--
2.49.0
* [PATCH 28/36] drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (26 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 27/36] drm/amdgpu/jpeg5: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 29/36] drm/amdgpu/vcn4: " Alex Deucher
` (7 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
index 3ffc2a61e6bf0..f49f3cf53b693 100644
--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
@@ -838,20 +838,13 @@ static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
unsigned int vmid,
struct amdgpu_fence *guilty_fence)
{
- int r;
-
if (amdgpu_sriov_vf(ring->adev))
return -EOPNOTSUPP;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
jpeg_v5_0_1_core_stall_reset(ring);
jpeg_v5_0_1_init_jrbc(ring);
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static const struct amd_ip_funcs jpeg_v5_0_1_ip_funcs = {
--
2.49.0
* [PATCH 29/36] drm/amdgpu/vcn4: re-emit unprocessed state on ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (27 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 28/36] drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 30/36] drm/amdgpu/vcn4.0.3: " Alex Deucher
` (6 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
index 1532e9d63e132..b29e69d034a73 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
@@ -1973,21 +1973,14 @@ static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
- int r;
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
vcn_v4_0_stop(vinst);
vcn_v4_0_start(vinst);
-
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static struct amdgpu_ring_funcs vcn_v4_0_unified_ring_vm_funcs = {
--
2.49.0
* [PATCH 30/36] drm/amdgpu/vcn4.0.3: re-emit unprocessed state on ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (28 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 29/36] drm/amdgpu/vcn4: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 31/36] drm/amdgpu/vcn4.0.5: " Alex Deucher
` (5 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
index 31cd27721782f..fcb0f5954ea06 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
@@ -1609,7 +1609,7 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
vcn_inst = GET_INST(VCN, ring->me);
r = amdgpu_dpm_reset_vcn(adev, 1 << vcn_inst);
@@ -1624,12 +1624,8 @@ static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
adev->vcn.caps |= AMDGPU_VCN_CAPS(RRMT_ENABLED);
vcn_v4_0_3_hw_init_inst(vinst);
vcn_v4_0_3_start_dpg_mode(vinst, adev->vcn.inst[ring->me].indirect_sram);
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static const struct amdgpu_ring_funcs vcn_v4_0_3_unified_ring_vm_funcs = {
--
2.49.0
* [PATCH 31/36] drm/amdgpu/vcn4.0.5: re-emit unprocessed state on ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (29 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 30/36] drm/amdgpu/vcn4.0.3: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 32/36] drm/amdgpu/vcn5: " Alex Deucher
` (4 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
index aefa2d77a73c4..06f2785df16f4 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
@@ -1471,21 +1471,14 @@ static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
- int r;
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
vcn_v4_0_5_stop(vinst);
vcn_v4_0_5_start(vinst);
-
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static struct amdgpu_ring_funcs vcn_v4_0_5_unified_ring_vm_funcs = {
--
2.49.0
* [PATCH 32/36] drm/amdgpu/vcn5: re-emit unprocessed state on ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (30 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 31/36] drm/amdgpu/vcn4.0.5: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 33/36] drm/amdgpu/vcn: add a helper framework for engine resets Alex Deucher
` (3 subsequent siblings)
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Re-emit the unprocessed state after resetting the queue.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c | 11 ++---------
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
index 1de81a7541bf8..e293f71085e82 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
@@ -1198,21 +1198,14 @@ static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
{
struct amdgpu_device *adev = ring->adev;
struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
- int r;
if (!(adev->vcn.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
return -EOPNOTSUPP;
- drm_sched_wqueue_stop(&ring->sched);
+ amdgpu_ring_reset_helper_begin(ring, guilty_fence);
vcn_v5_0_0_stop(vinst);
vcn_v5_0_0_start(vinst);
-
- r = amdgpu_ring_test_helper(ring);
- if (r)
- return r;
- amdgpu_fence_driver_force_completion(ring);
- drm_sched_wqueue_start(&ring->sched);
- return 0;
+ return amdgpu_ring_reset_helper_end(ring, guilty_fence);
}
static const struct amdgpu_ring_funcs vcn_v5_0_0_unified_ring_vm_funcs = {
--
2.49.0
* [PATCH 33/36] drm/amdgpu/vcn: add a helper framework for engine resets
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (31 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 32/36] drm/amdgpu/vcn5: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 4:30 ` Sundararaju, Sathishkumar
2025-06-17 3:08 ` [PATCH 34/36] drm/amdgpu/vcn2: implement ring reset Alex Deucher
` (2 subsequent siblings)
35 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
With engine resets we reset all queues on the engine rather
than just a single queue. Add a framework to handle this
similar to SDMA.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 64 +++++++++++++++++++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 6 ++-
2 files changed, 69 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index c8885c3d54b33..075740ed275eb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -134,6 +134,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev, int i)
mutex_init(&adev->vcn.inst[i].vcn1_jpeg1_workaround);
mutex_init(&adev->vcn.inst[i].vcn_pg_lock);
+ mutex_init(&adev->vcn.inst[i].engine_reset_mutex);
atomic_set(&adev->vcn.inst[i].total_submission_cnt, 0);
INIT_DELAYED_WORK(&adev->vcn.inst[i].idle_work, amdgpu_vcn_idle_work_handler);
atomic_set(&adev->vcn.inst[i].dpg_enc_submission_cnt, 0);
@@ -1451,3 +1452,66 @@ int vcn_set_powergating_state(struct amdgpu_ip_block *ip_block,
return ret;
}
+
+/**
+ * amdgpu_vcn_reset_engine - Reset a specific VCN engine
+ * @adev: Pointer to the AMDGPU device
+ * @instance_id: VCN engine instance to reset
+ *
+ * Returns: 0 on success, or a negative error code on failure.
+ */
+static int amdgpu_vcn_reset_engine(struct amdgpu_device *adev,
+ uint32_t instance_id)
+{
+ struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[instance_id];
+ int r, i;
+
+ mutex_lock(&vinst->engine_reset_mutex);
+ /* Stop the scheduler's work queue for the dec and enc rings if they are running.
+ * This ensures that no new tasks are submitted to the queues while
+ * the reset is in progress.
+ */
+ drm_sched_wqueue_stop(&vinst->ring_dec.sched);
+ for (i = 0; i < vinst->num_enc_rings; i++)
+ drm_sched_wqueue_stop(&vinst->ring_enc[i].sched);
+
+ /* Perform the VCN reset for the specified instance */
+ r = vinst->reset(vinst);
+ if (r) {
+ dev_err(adev->dev, "Failed to reset VCN instance %u\n", instance_id);
+ } else {
+ /* Restart the scheduler's work queue for the dec and enc rings
+ * if they were stopped by this function. This allows new tasks
+ * to be submitted to the queues after the reset is complete.
+ */
+ drm_sched_wqueue_start(&vinst->ring_dec.sched);
+ for (i = 0; i < vinst->num_enc_rings; i++)
+ drm_sched_wqueue_start(&vinst->ring_enc[i].sched);
+ }
+ mutex_unlock(&vinst->engine_reset_mutex);
+
+ return r;
+}
+
+/**
+ * amdgpu_vcn_ring_reset - Reset a VCN ring
+ * @ring: ring to reset
+ * @vmid: vmid of guilty job
+ * @guilty_fence: guilty fence
+ *
+ * This helper is for VCN blocks without unified queues because
+ * resetting the engine resets all queues in that case. With
+ * unified queues we have one queue per engine.
+ * Returns: 0 on success, or a negative error code on failure.
+ */
+int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence)
+{
+ struct amdgpu_device *adev = ring->adev;
+
+ if (adev->vcn.inst[ring->me].using_unified_queue)
+ return -EINVAL;
+
+ return amdgpu_vcn_reset_engine(adev, ring->me);
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
index 83adf81defc71..0bc0a94d7cf0f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
@@ -330,7 +330,9 @@ struct amdgpu_vcn_inst {
struct dpg_pause_state *new_state);
int (*set_pg_state)(struct amdgpu_vcn_inst *vinst,
enum amd_powergating_state state);
+ int (*reset)(struct amdgpu_vcn_inst *vinst);
bool using_unified_queue;
+ struct mutex engine_reset_mutex;
};
struct amdgpu_vcn_ras {
@@ -552,5 +554,7 @@ void amdgpu_debugfs_vcn_sched_mask_init(struct amdgpu_device *adev);
int vcn_set_powergating_state(struct amdgpu_ip_block *ip_block,
enum amd_powergating_state state);
-
+int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
+ unsigned int vmid,
+ struct amdgpu_fence *guilty_fence);
#endif
--
2.49.0
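The framework in patch 33 differs from the per-queue resets earlier in the series: an engine reset takes down every queue on the VCN instance, so all of the instance's schedulers must be parked before the reset and restarted only if the reset succeeds. A small self-contained model of that ordering, with hypothetical names standing in for the real amdgpu types, might look like:

```c
#include <assert.h>

#define NUM_ENC 2

/* Toy model of the engine-level reset: stop every scheduler on the
 * instance, run the reset, and restart the schedulers only on
 * success.  Names are illustrative, not the real amdgpu API. */

struct vcn_inst_model {
	int dec_running;
	int enc_running[NUM_ENC];
	int hw_reset_works;  /* models what vinst->reset() will report */
};

static int engine_reset(struct vcn_inst_model *v)
{
	int i, r;

	/* stop the dec and all enc schedulers: no new submissions
	 * while the whole engine is being reset */
	v->dec_running = 0;
	for (i = 0; i < NUM_ENC; i++)
		v->enc_running[i] = 0;

	r = v->hw_reset_works ? 0 : -5;  /* vinst->reset(vinst) */
	if (r)
		return r;  /* on failure, the queues stay parked */

	/* restart the schedulers only after a successful reset */
	v->dec_running = 1;
	for (i = 0; i < NUM_ENC; i++)
		v->enc_running[i] = 1;
	return 0;
}
```

Note that, as in the patch, a failed reset leaves the queues stopped; a subsequent (likely full-adapter) recovery is expected to deal with them.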
* [PATCH 34/36] drm/amdgpu/vcn2: implement ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (32 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 33/36] drm/amdgpu/vcn: add a helper framework for engine resets Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 35/36] drm/amdgpu/vcn2.5: " Alex Deucher
2025-06-17 3:08 ` [PATCH 36/36] drm/amdgpu/vcn3: " Alex Deucher
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Use the new helpers to handle engine resets for VCN.
Untested.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
index 148b651be7ca7..4ab02533f2fa0 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
@@ -98,6 +98,8 @@ static int vcn_v2_0_set_pg_state(struct amdgpu_vcn_inst *vinst,
static int vcn_v2_0_pause_dpg_mode(struct amdgpu_vcn_inst *vinst,
struct dpg_pause_state *new_state);
static int vcn_v2_0_start_sriov(struct amdgpu_device *adev);
+static int vcn_v2_0_reset(struct amdgpu_vcn_inst *vinst);
+
/**
* vcn_v2_0_early_init - set function pointers and load microcode
*
@@ -213,6 +215,7 @@ static int vcn_v2_0_sw_init(struct amdgpu_ip_block *ip_block)
}
adev->vcn.inst[0].pause_dpg_mode = vcn_v2_0_pause_dpg_mode;
+ adev->vcn.inst[0].reset = vcn_v2_0_reset;
r = amdgpu_virt_alloc_mm_table(adev);
if (r)
@@ -1355,6 +1358,26 @@ static int vcn_v2_0_pause_dpg_mode(struct amdgpu_vcn_inst *vinst,
return 0;
}
+static int vcn_v2_0_reset(struct amdgpu_vcn_inst *vinst)
+{
+ int i, r;
+
+ vcn_v2_0_stop(vinst);
+ vcn_v2_0_start(vinst);
+ r = amdgpu_ring_test_ring(&vinst->ring_dec);
+ if (r)
+ return r;
+ for (i = 0; i < vinst->num_enc_rings; i++) {
+ r = amdgpu_ring_test_ring(&vinst->ring_enc[i]);
+ if (r)
+ return r;
+ }
+ amdgpu_fence_driver_force_completion(&vinst->ring_dec);
+ for (i = 0; i < vinst->num_enc_rings; i++)
+ amdgpu_fence_driver_force_completion(&vinst->ring_enc[i]);
+ return 0;
+}
+
static bool vcn_v2_0_is_idle(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -2176,6 +2199,7 @@ static const struct amdgpu_ring_funcs vcn_v2_0_dec_ring_vm_funcs = {
.emit_wreg = vcn_v2_0_dec_ring_emit_wreg,
.emit_reg_wait = vcn_v2_0_dec_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+ .reset = amdgpu_vcn_ring_reset,
};
static const struct amdgpu_ring_funcs vcn_v2_0_enc_ring_vm_funcs = {
@@ -2205,6 +2229,7 @@ static const struct amdgpu_ring_funcs vcn_v2_0_enc_ring_vm_funcs = {
.emit_wreg = vcn_v2_0_enc_ring_emit_wreg,
.emit_reg_wait = vcn_v2_0_enc_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+ .reset = amdgpu_vcn_ring_reset,
};
static void vcn_v2_0_set_dec_ring_funcs(struct amdgpu_device *adev)
--
2.49.0
* [PATCH 35/36] drm/amdgpu/vcn2.5: implement ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (33 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 34/36] drm/amdgpu/vcn2: implement ring reset Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 3:08 ` [PATCH 36/36] drm/amdgpu/vcn3: " Alex Deucher
35 siblings, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Use the new helpers to handle engine resets for VCN.
Untested.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
index 58b527a6b795f..9bc82ba3537dd 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
@@ -102,6 +102,7 @@ static int vcn_v2_5_pause_dpg_mode(struct amdgpu_vcn_inst *vinst,
struct dpg_pause_state *new_state);
static int vcn_v2_5_sriov_start(struct amdgpu_device *adev);
static void vcn_v2_5_set_ras_funcs(struct amdgpu_device *adev);
+static int vcn_v2_5_reset(struct amdgpu_vcn_inst *vinst);
static int amdgpu_ih_clientid_vcns[] = {
SOC15_IH_CLIENTID_VCN,
@@ -404,6 +405,7 @@ static int vcn_v2_5_sw_init(struct amdgpu_ip_block *ip_block)
if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)
adev->vcn.inst[j].pause_dpg_mode = vcn_v2_5_pause_dpg_mode;
+ adev->vcn.inst[j].reset = vcn_v2_5_reset;
}
if (amdgpu_sriov_vf(adev)) {
@@ -1816,6 +1818,7 @@ static const struct amdgpu_ring_funcs vcn_v2_5_dec_ring_vm_funcs = {
.emit_wreg = vcn_v2_0_dec_ring_emit_wreg,
.emit_reg_wait = vcn_v2_0_dec_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+ .reset = amdgpu_vcn_ring_reset,
};
/**
@@ -1914,6 +1917,7 @@ static const struct amdgpu_ring_funcs vcn_v2_5_enc_ring_vm_funcs = {
.emit_wreg = vcn_v2_0_enc_ring_emit_wreg,
.emit_reg_wait = vcn_v2_0_enc_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+ .reset = amdgpu_vcn_ring_reset,
};
static void vcn_v2_5_set_dec_ring_funcs(struct amdgpu_device *adev)
@@ -1942,6 +1946,26 @@ static void vcn_v2_5_set_enc_ring_funcs(struct amdgpu_device *adev)
}
}
+static int vcn_v2_5_reset(struct amdgpu_vcn_inst *vinst)
+{
+ int i, r;
+
+ vcn_v2_5_stop(vinst);
+ vcn_v2_5_start(vinst);
+ r = amdgpu_ring_test_ring(&vinst->ring_dec);
+ if (r)
+ return r;
+ for (i = 0; i < vinst->num_enc_rings; i++) {
+ r = amdgpu_ring_test_ring(&vinst->ring_enc[i]);
+ if (r)
+ return r;
+ }
+ amdgpu_fence_driver_force_completion(&vinst->ring_dec);
+ for (i = 0; i < vinst->num_enc_rings; i++)
+ amdgpu_fence_driver_force_completion(&vinst->ring_enc[i]);
+ return 0;
+}
+
static bool vcn_v2_5_is_idle(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
--
2.49.0
* [PATCH 36/36] drm/amdgpu/vcn3: implement ring reset
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
` (34 preceding siblings ...)
2025-06-17 3:08 ` [PATCH 35/36] drm/amdgpu/vcn2.5: " Alex Deucher
@ 2025-06-17 3:08 ` Alex Deucher
2025-06-17 19:22 ` Sundararaju, Sathishkumar
35 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 3:08 UTC (permalink / raw)
To: amd-gfx, christian.koenig, sasundar; +Cc: Alex Deucher
Use the new helpers to handle engine resets for VCN.
Untested.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
index 9fb0d53805892..ec4d2ab75fc4d 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
@@ -110,6 +110,7 @@ static int vcn_v3_0_set_pg_state(struct amdgpu_vcn_inst *vinst,
enum amd_powergating_state state);
static int vcn_v3_0_pause_dpg_mode(struct amdgpu_vcn_inst *vinst,
struct dpg_pause_state *new_state);
+static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst);
static void vcn_v3_0_dec_ring_set_wptr(struct amdgpu_ring *ring);
static void vcn_v3_0_enc_ring_set_wptr(struct amdgpu_ring *ring);
@@ -289,6 +290,7 @@ static int vcn_v3_0_sw_init(struct amdgpu_ip_block *ip_block)
if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)
adev->vcn.inst[i].pause_dpg_mode = vcn_v3_0_pause_dpg_mode;
+ adev->vcn.inst[i].reset = vcn_v3_0_reset;
}
if (amdgpu_sriov_vf(adev)) {
@@ -1869,6 +1871,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = {
.emit_wreg = vcn_dec_sw_ring_emit_wreg,
.emit_reg_wait = vcn_dec_sw_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+ .reset = amdgpu_vcn_ring_reset,
};
static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
@@ -2033,6 +2036,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_ring_vm_funcs = {
.emit_wreg = vcn_v2_0_dec_ring_emit_wreg,
.emit_reg_wait = vcn_v2_0_dec_ring_emit_reg_wait,
.emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
+ .reset = amdgpu_vcn_ring_reset,
};
/**
@@ -2164,6 +2168,26 @@ static void vcn_v3_0_set_enc_ring_funcs(struct amdgpu_device *adev)
}
}
+static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst)
+{
+ int i, r;
+
+ vcn_v3_0_stop(vinst);
+ vcn_v3_0_start(vinst);
+ r = amdgpu_ring_test_ring(&vinst->ring_dec);
+ if (r)
+ return r;
+ for (i = 0; i < vinst->num_enc_rings; i++) {
+ r = amdgpu_ring_test_ring(&vinst->ring_enc[i]);
+ if (r)
+ return r;
+ }
+ amdgpu_fence_driver_force_completion(&vinst->ring_dec);
+ for (i = 0; i < vinst->num_enc_rings; i++)
+ amdgpu_fence_driver_force_completion(&vinst->ring_enc[i]);
+ return 0;
+}
+
static bool vcn_v3_0_is_idle(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
--
2.49.0
^ permalink raw reply related [flat|nested] 59+ messages in thread
* Re: [PATCH 33/36] drm/amdgpu/vcn: add a helper framework for engine resets
2025-06-17 3:08 ` [PATCH 33/36] drm/amdgpu/vcn: add a helper framework for engine resets Alex Deucher
@ 2025-06-17 4:30 ` Sundararaju, Sathishkumar
2025-06-17 6:10 ` Sundararaju, Sathishkumar
0 siblings, 1 reply; 59+ messages in thread
From: Sundararaju, Sathishkumar @ 2025-06-17 4:30 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, christian.koenig
Hi Alex,
Would it be good to have this logic in the reset callback itself?
Adding a common vinst->reset removes the flexibility of having
separate reset implementations for the enc and dec rings; with
per-ring callbacks we could selectively handle
drm_sched_wqueue_start/stop and the re-emit of guilty/non-guilty
state for enc and dec separately.
Also, the usual vcn_stop() followed by vcn_start() does not actually
reset the engine on VCN 3.
As a workaround I tried pausing DPG, enabling static clockgating and
powergating, and then doing the stop()/start() of the engine; that
has been working consistently so far.
Regards,
Sathish
On 6/17/2025 8:38 AM, Alex Deucher wrote:
> With engine resets we reset all queues on the engine rather
> than just a single queue. Add a framework to handle this
> similar to SDMA.
>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 64 +++++++++++++++++++++++++
> drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 6 ++-
> 2 files changed, 69 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> index c8885c3d54b33..075740ed275eb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> @@ -134,6 +134,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_device *adev, int i)
>
> mutex_init(&adev->vcn.inst[i].vcn1_jpeg1_workaround);
> mutex_init(&adev->vcn.inst[i].vcn_pg_lock);
> + mutex_init(&adev->vcn.inst[i].engine_reset_mutex);
> atomic_set(&adev->vcn.inst[i].total_submission_cnt, 0);
> INIT_DELAYED_WORK(&adev->vcn.inst[i].idle_work, amdgpu_vcn_idle_work_handler);
> atomic_set(&adev->vcn.inst[i].dpg_enc_submission_cnt, 0);
> @@ -1451,3 +1452,66 @@ int vcn_set_powergating_state(struct amdgpu_ip_block *ip_block,
>
> return ret;
> }
> +
> +/**
> + * amdgpu_vcn_reset_engine - Reset a specific VCN engine
> + * @adev: Pointer to the AMDGPU device
> + * @instance_id: VCN engine instance to reset
> + *
> + * Returns: 0 on success, or a negative error code on failure.
> + */
> +static int amdgpu_vcn_reset_engine(struct amdgpu_device *adev,
> + uint32_t instance_id)
> +{
> + struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[instance_id];
> + int r, i;
> +
> + mutex_lock(&vinst->engine_reset_mutex);
> + /* Stop the scheduler's work queue for the dec and enc rings if they are running.
> + * This ensures that no new tasks are submitted to the queues while
> + * the reset is in progress.
> + */
> + drm_sched_wqueue_stop(&vinst->ring_dec.sched);
> + for (i = 0; i < vinst->num_enc_rings; i++)
> + drm_sched_wqueue_stop(&vinst->ring_enc[i].sched);
> +
> + /* Perform the VCN reset for the specified instance */
> + r = vinst->reset(vinst);
> + if (r) {
> + dev_err(adev->dev, "Failed to reset VCN instance %u\n", instance_id);
> + } else {
> + /* Restart the scheduler's work queue for the dec and enc rings
> + * if they were stopped by this function. This allows new tasks
> + * to be submitted to the queues after the reset is complete.
> + */
> + drm_sched_wqueue_start(&vinst->ring_dec.sched);
> + for (i = 0; i < vinst->num_enc_rings; i++)
> + drm_sched_wqueue_start(&vinst->ring_enc[i].sched);
> + }
> + mutex_unlock(&vinst->engine_reset_mutex);
> +
> + return r;
> +}
> +
> +/**
> + * amdgpu_vcn_ring_reset - Reset a VCN ring
> + * @ring: ring to reset
> + * @vmid: vmid of guilty job
> + * @guilty_fence: guilty fence
> + *
> + * This helper is for VCN blocks without unified queues because
> + * resetting the engine resets all queues in that case. With
> + * unified queues we have one queue per engine.
> + * Returns: 0 on success, or a negative error code on failure.
> + */
> +int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> +{
> + struct amdgpu_device *adev = ring->adev;
> +
> + if (adev->vcn.inst[ring->me].using_unified_queue)
> + return -EINVAL;
> +
> + return amdgpu_vcn_reset_engine(adev, ring->me);
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
> index 83adf81defc71..0bc0a94d7cf0f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
> @@ -330,7 +330,9 @@ struct amdgpu_vcn_inst {
> struct dpg_pause_state *new_state);
> int (*set_pg_state)(struct amdgpu_vcn_inst *vinst,
> enum amd_powergating_state state);
> + int (*reset)(struct amdgpu_vcn_inst *vinst);
> bool using_unified_queue;
> + struct mutex engine_reset_mutex;
> };
>
> struct amdgpu_vcn_ras {
> @@ -552,5 +554,7 @@ void amdgpu_debugfs_vcn_sched_mask_init(struct amdgpu_device *adev);
>
> int vcn_set_powergating_state(struct amdgpu_ip_block *ip_block,
> enum amd_powergating_state state);
> -
> +int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence);
> #endif
* RE: [PATCH 06/36] drm/amdgpu/sdma5: init engine reset mutex
2025-06-17 3:07 ` [PATCH 06/36] drm/amdgpu/sdma5: init engine reset mutex Alex Deucher
@ 2025-06-17 5:50 ` Zhang, Jesse(Jie)
2025-06-17 6:09 ` Zhang, Jesse(Jie)
2025-06-17 11:50 ` Christian König
2 siblings, 0 replies; 59+ messages in thread
From: Zhang, Jesse(Jie) @ 2025-06-17 5:50 UTC (permalink / raw)
To: Deucher, Alexander, amd-gfx@lists.freedesktop.org,
Koenig, Christian, Sundararaju, Sathishkumar
Cc: Deucher, Alexander
[AMD Official Use Only - AMD Internal Distribution Only]
This patch is Reviewed-by: Jesse Zhang <Jesse.Zhang@amd.com>
-----Original Message-----
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Alex Deucher
Sent: Tuesday, June 17, 2025 11:08 AM
To: amd-gfx@lists.freedesktop.org; Koenig, Christian <Christian.Koenig@amd.com>; Sundararaju, Sathishkumar <Sathishkumar.Sundararaju@amd.com>
Cc: Deucher, Alexander <Alexander.Deucher@amd.com>
Subject: [PATCH 06/36] drm/amdgpu/sdma5: init engine reset mutex
Missing the mutex init.
Fixes: e56d4bf57fab ("drm/amdgpu/: drm/amdgpu: Register the new sdma function pointers for sdma_v5_0")
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 2d94aadc31149..37f4b5b4a098f 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -1399,6 +1399,7 @@ static int sdma_v5_0_sw_init(struct amdgpu_ip_block *ip_block)
return r;
for (i = 0; i < adev->sdma.num_instances; i++) {
+ mutex_init(&adev->sdma.instance[i].engine_reset_mutex);
adev->sdma.instance[i].funcs = &sdma_v5_0_sdma_funcs;
ring = &adev->sdma.instance[i].ring;
ring->ring_obj = NULL;
--
2.49.0
* RE: [PATCH 07/36] drm/amdgpu/sdma5.2: init engine reset mutex
2025-06-17 3:07 ` [PATCH 07/36] drm/amdgpu/sdma5.2: " Alex Deucher
@ 2025-06-17 6:08 ` Zhang, Jesse(Jie)
0 siblings, 0 replies; 59+ messages in thread
From: Zhang, Jesse(Jie) @ 2025-06-17 6:08 UTC (permalink / raw)
To: Deucher, Alexander, amd-gfx@lists.freedesktop.org,
Koenig, Christian, Sundararaju, Sathishkumar
Cc: Deucher, Alexander
This patch is Reviewed-by: Jesse Zhang <Jesse.Zhang@amd.com>
-----Original Message-----
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of Alex Deucher
Sent: Tuesday, June 17, 2025 11:08 AM
To: amd-gfx@lists.freedesktop.org; Koenig, Christian <Christian.Koenig@amd.com>; Sundararaju, Sathishkumar <Sathishkumar.Sundararaju@amd.com>
Cc: Deucher, Alexander <Alexander.Deucher@amd.com>
Subject: [PATCH 07/36] drm/amdgpu/sdma5.2: init engine reset mutex
Missing the mutex init.
Fixes: 47454f2dc0bf ("drm/amdgpu: Register the new sdma function pointers for sdma_v5_2")
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index cc934724f387c..0b40411b92a0b 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -1318,6 +1318,7 @@ static int sdma_v5_2_sw_init(struct amdgpu_ip_block *ip_block)
}
for (i = 0; i < adev->sdma.num_instances; i++) {
+ mutex_init(&adev->sdma.instance[i].engine_reset_mutex);
adev->sdma.instance[i].funcs = &sdma_v5_2_sdma_funcs;
ring = &adev->sdma.instance[i].ring;
ring->ring_obj = NULL;
--
2.49.0
* Re: [PATCH 33/36] drm/amdgpu/vcn: add a helper framework for engine resets
2025-06-17 4:30 ` Sundararaju, Sathishkumar
@ 2025-06-17 6:10 ` Sundararaju, Sathishkumar
2025-06-17 13:09 ` Alex Deucher
0 siblings, 1 reply; 59+ messages in thread
From: Sundararaju, Sathishkumar @ 2025-06-17 6:10 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, christian.koenig
Please ignore my previous comments here; the new helper additions for
VCN non-unified queues are good.
One concern, though: the vinst->reset(vinst) callback should take a
ring pointer so it can handle the guilty/non-guilty distinction for
the re-emit part. Otherwise the guilty ring has to be tracked within
the ring structure or identified by some query within the reset.
Regards,
Sathish
On 6/17/2025 10:00 AM, Sundararaju, Sathishkumar wrote:
> Hi Alex,
>
> Would it be good to have this logic in the reset callback itself?
>
> Adding a common vinst->reset removes the flexibility of having
> separate reset implementations for the enc and dec rings; with
> per-ring callbacks we could selectively handle
> drm_sched_wqueue_start/stop and the re-emit of guilty/non-guilty
> state for enc and dec separately.
>
> Also, the usual vcn_stop() followed by vcn_start() does not
> actually reset the engine on VCN 3.
>
> As a workaround I tried pausing DPG, enabling static clockgating
> and powergating, and then doing the stop()/start() of the engine;
> that has been working consistently so far.
>
> Regards,
> Sathish
>
> On 6/17/2025 8:38 AM, Alex Deucher wrote:
>> With engine resets we reset all queues on the engine rather
>> than just a single queue. Add a framework to handle this
>> similar to SDMA.
>>
>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>> ---
>> drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 64 +++++++++++++++++++++++++
>> drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 6 ++-
>> 2 files changed, 69 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> index c8885c3d54b33..075740ed275eb 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> @@ -134,6 +134,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_device
>> *adev, int i)
>> mutex_init(&adev->vcn.inst[i].vcn1_jpeg1_workaround);
>> mutex_init(&adev->vcn.inst[i].vcn_pg_lock);
>> + mutex_init(&adev->vcn.inst[i].engine_reset_mutex);
>> atomic_set(&adev->vcn.inst[i].total_submission_cnt, 0);
>> INIT_DELAYED_WORK(&adev->vcn.inst[i].idle_work,
>> amdgpu_vcn_idle_work_handler);
>> atomic_set(&adev->vcn.inst[i].dpg_enc_submission_cnt, 0);
>> @@ -1451,3 +1452,66 @@ int vcn_set_powergating_state(struct
>> amdgpu_ip_block *ip_block,
>> return ret;
>> }
>> +
>> +/**
>> + * amdgpu_vcn_reset_engine - Reset a specific VCN engine
>> + * @adev: Pointer to the AMDGPU device
>> + * @instance_id: VCN engine instance to reset
>> + *
>> + * Returns: 0 on success, or a negative error code on failure.
>> + */
>> +static int amdgpu_vcn_reset_engine(struct amdgpu_device *adev,
>> + uint32_t instance_id)
>> +{
>> + struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[instance_id];
>> + int r, i;
>> +
>> + mutex_lock(&vinst->engine_reset_mutex);
>> + /* Stop the scheduler's work queue for the dec and enc rings if
>> they are running.
>> + * This ensures that no new tasks are submitted to the queues while
>> + * the reset is in progress.
>> + */
>> + drm_sched_wqueue_stop(&vinst->ring_dec.sched);
>> + for (i = 0; i < vinst->num_enc_rings; i++)
>> + drm_sched_wqueue_stop(&vinst->ring_enc[i].sched);
>> +
>> + /* Perform the VCN reset for the specified instance */
>> + r = vinst->reset(vinst);
>> + if (r) {
>> + dev_err(adev->dev, "Failed to reset VCN instance %u\n",
>> instance_id);
>> + } else {
>> + /* Restart the scheduler's work queue for the dec and enc rings
>> + * if they were stopped by this function. This allows new tasks
>> + * to be submitted to the queues after the reset is complete.
>> + */
>> + drm_sched_wqueue_start(&vinst->ring_dec.sched);
>> + for (i = 0; i < vinst->num_enc_rings; i++)
>> + drm_sched_wqueue_start(&vinst->ring_enc[i].sched);
>> + }
>> + mutex_unlock(&vinst->engine_reset_mutex);
>> +
>> + return r;
>> +}
>> +
>> +/**
>> + * amdgpu_vcn_ring_reset - Reset a VCN ring
>> + * @ring: ring to reset
>> + * @vmid: vmid of guilty job
>> + * @guilty_fence: guilty fence
>> + *
>> + * This helper is for VCN blocks without unified queues because
>> + * resetting the engine resets all queues in that case. With
>> + * unified queues we have one queue per engine.
>> + * Returns: 0 on success, or a negative error code on failure.
>> + */
>> +int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
>> + unsigned int vmid,
>> + struct amdgpu_fence *guilty_fence)
>> +{
>> + struct amdgpu_device *adev = ring->adev;
>> +
>> + if (adev->vcn.inst[ring->me].using_unified_queue)
>> + return -EINVAL;
>> +
>> + return amdgpu_vcn_reset_engine(adev, ring->me);
>> +}
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>> index 83adf81defc71..0bc0a94d7cf0f 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>> @@ -330,7 +330,9 @@ struct amdgpu_vcn_inst {
>> struct dpg_pause_state *new_state);
>> int (*set_pg_state)(struct amdgpu_vcn_inst *vinst,
>> enum amd_powergating_state state);
>> + int (*reset)(struct amdgpu_vcn_inst *vinst);
>> bool using_unified_queue;
>> + struct mutex engine_reset_mutex;
>> };
>> struct amdgpu_vcn_ras {
>> @@ -552,5 +554,7 @@ void amdgpu_debugfs_vcn_sched_mask_init(struct
>> amdgpu_device *adev);
>> int vcn_set_powergating_state(struct amdgpu_ip_block *ip_block,
>> enum amd_powergating_state state);
>> -
>> +int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
>> + unsigned int vmid,
>> + struct amdgpu_fence *guilty_fence);
>> #endif
>
* Re: [PATCH 01/36] drm/amdgpu: switch job hw_fence to amdgpu_fence
2025-06-17 3:07 ` [PATCH 01/36] drm/amdgpu: switch job hw_fence to amdgpu_fence Alex Deucher
@ 2025-06-17 9:42 ` Christian König
0 siblings, 0 replies; 59+ messages in thread
From: Christian König @ 2025-06-17 9:42 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, sasundar
On 6/17/25 05:07, Alex Deucher wrote:
> Use the amdgpu fence container so we can store additional
> data in the fence. This also fixes the start_time handling
> for MCBP since we were casting the fence to an amdgpu_fence
> and it wasn't.
>
> Fixes: 3f4c175d62d8 ("drm/amdgpu: MCBP based on DRM scheduler (v9)")
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
CC: stable?
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 30 +++++----------------
> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 12 ++++-----
> drivers/gpu/drm/amd/amdgpu/amdgpu_job.h | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 16 +++++++++++
> 6 files changed, 32 insertions(+), 32 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 8e626f50b362e..f81608330a3d0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -1902,7 +1902,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
> continue;
> }
> job = to_amdgpu_job(s_job);
> - if (preempted && (&job->hw_fence) == fence)
> + if (preempted && (&job->hw_fence.base) == fence)
> /* mark the job as preempted */
> job->preemption_status |= AMDGPU_IB_PREEMPTED;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index f134394047603..13070211dc69c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -6427,7 +6427,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
> *
> * job->base holds a reference to parent fence
> */
> - if (job && dma_fence_is_signaled(&job->hw_fence)) {
> + if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
> job_signaled = true;
> dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
> goto skip_hw_reset;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 2f24a6aa13bf6..569e0e5373927 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -41,22 +41,6 @@
> #include "amdgpu_trace.h"
> #include "amdgpu_reset.h"
>
> -/*
> - * Fences mark an event in the GPUs pipeline and are used
> - * for GPU/CPU synchronization. When the fence is written,
> - * it is expected that all buffers associated with that fence
> - * are no longer in use by the associated ring on the GPU and
> - * that the relevant GPU caches have been flushed.
> - */
> -
> -struct amdgpu_fence {
> - struct dma_fence base;
> -
> - /* RB, DMA, etc. */
> - struct amdgpu_ring *ring;
> - ktime_t start_timestamp;
> -};
> -
> static struct kmem_cache *amdgpu_fence_slab;
>
> int amdgpu_fence_slab_init(void)
> @@ -151,12 +135,12 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
> am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
> if (am_fence == NULL)
> return -ENOMEM;
> - fence = &am_fence->base;
> - am_fence->ring = ring;
> } else {
> /* take use of job-embedded fence */
> - fence = &job->hw_fence;
> + am_fence = &job->hw_fence;
> }
> + fence = &am_fence->base;
> + am_fence->ring = ring;
>
> seq = ++ring->fence_drv.sync_seq;
> if (job && job->job_run_counter) {
> @@ -718,7 +702,7 @@ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
> * it right here or we won't be able to track them in fence_drv
> * and they will remain unsignaled during sa_bo free.
> */
> - job = container_of(old, struct amdgpu_job, hw_fence);
> + job = container_of(old, struct amdgpu_job, hw_fence.base);
> if (!job->base.s_fence && !dma_fence_is_signaled(old))
> dma_fence_signal(old);
> RCU_INIT_POINTER(*ptr, NULL);
> @@ -780,7 +764,7 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
>
> static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
> {
> - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
> + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
>
> return (const char *)to_amdgpu_ring(job->base.sched)->name;
> }
> @@ -810,7 +794,7 @@ static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
> */
> static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
> {
> - struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
> + struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
>
> if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
> amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
> @@ -845,7 +829,7 @@ static void amdgpu_job_fence_free(struct rcu_head *rcu)
> struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
>
> /* free job if fence has a parent job */
> - kfree(container_of(f, struct amdgpu_job, hw_fence));
> + kfree(container_of(f, struct amdgpu_job, hw_fence.base));
> }
>
> /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index acb21fc8b3ce5..ddb9d3269357c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -272,8 +272,8 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
> /* Check if any fences where initialized */
> if (job->base.s_fence && job->base.s_fence->finished.ops)
> f = &job->base.s_fence->finished;
> - else if (job->hw_fence.ops)
> - f = &job->hw_fence;
> + else if (job->hw_fence.base.ops)
> + f = &job->hw_fence.base;
> else
> f = NULL;
>
> @@ -290,10 +290,10 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
> amdgpu_sync_free(&job->explicit_sync);
>
> /* only put the hw fence if has embedded fence */
> - if (!job->hw_fence.ops)
> + if (!job->hw_fence.base.ops)
> kfree(job);
> else
> - dma_fence_put(&job->hw_fence);
> + dma_fence_put(&job->hw_fence.base);
> }
>
> void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
> @@ -322,10 +322,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
> if (job->gang_submit != &job->base.s_fence->scheduled)
> dma_fence_put(job->gang_submit);
>
> - if (!job->hw_fence.ops)
> + if (!job->hw_fence.base.ops)
> kfree(job);
> else
> - dma_fence_put(&job->hw_fence);
> + dma_fence_put(&job->hw_fence.base);
> }
>
> struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index f2c049129661f..931fed8892cc1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -48,7 +48,7 @@ struct amdgpu_job {
> struct drm_sched_job base;
> struct amdgpu_vm *vm;
> struct amdgpu_sync explicit_sync;
> - struct dma_fence hw_fence;
> + struct amdgpu_fence hw_fence;
> struct dma_fence *gang_submit;
> uint32_t preamble_status;
> uint32_t preemption_status;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index b95b471107692..e1f25218943a4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -127,6 +127,22 @@ struct amdgpu_fence_driver {
> struct dma_fence **fences;
> };
>
> +/*
> + * Fences mark an event in the GPUs pipeline and are used
> + * for GPU/CPU synchronization. When the fence is written,
> + * it is expected that all buffers associated with that fence
> + * are no longer in use by the associated ring on the GPU and
> + * that the relevant GPU caches have been flushed.
> + */
> +
> +struct amdgpu_fence {
> + struct dma_fence base;
> +
> + /* RB, DMA, etc. */
> + struct amdgpu_ring *ring;
> + ktime_t start_timestamp;
> +};
> +
> extern const struct drm_sched_backend_ops amdgpu_sched_ops;
>
> void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
* Re: [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit()
2025-06-17 3:07 ` [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit() Alex Deucher
@ 2025-06-17 11:44 ` Christian König
2025-06-17 13:46 ` Alex Deucher
2025-06-18 22:32 ` Alex Deucher
0 siblings, 2 replies; 59+ messages in thread
From: Christian König @ 2025-06-17 11:44 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, sasundar
On 6/17/25 05:07, Alex Deucher wrote:
> What we actually care about is the amdgpu_fence object
> so pass that in explicitly to avoid possible mistakes
> in the future.
>
> The job_run_counter handling can be safely removed at this
> point as we no longer support job resubmission.
>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 36 +++++++++--------------
> drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 5 +++-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 4 +--
> 3 files changed, 20 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 569e0e5373927..e88848c14491a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -114,14 +114,14 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring)
> *
> * @ring: ring the fence is associated with
> * @f: resulting fence object
> - * @job: job the fence is embedded in
> + * @af: amdgpu fence input
> * @flags: flags to pass into the subordinate .emit_fence() call
> *
> * Emits a fence command on the requested ring (all asics).
> * Returns 0 on success, -ENOMEM on failure.
> */
> -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amdgpu_job *job,
> - unsigned int flags)
> +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> + struct amdgpu_fence *af, unsigned int flags)
> {
> struct amdgpu_device *adev = ring->adev;
> struct dma_fence *fence;
> @@ -130,36 +130,28 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
> uint32_t seq;
> int r;
>
> - if (job == NULL) {
> - /* create a sperate hw fence */
> + if (!af) {
> + /* create a separate hw fence */
> am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
> if (am_fence == NULL)
> return -ENOMEM;
I think we should remove the output parameter as well.
An amdgpu_fence can be trivially allocated by the caller.
Apart from that looks good to me.
Regards,
Christian.
> } else {
> - /* take use of job-embedded fence */
> - am_fence = &job->hw_fence;
> + am_fence = af;
> }
> fence = &am_fence->base;
> am_fence->ring = ring;
>
> seq = ++ring->fence_drv.sync_seq;
> - if (job && job->job_run_counter) {
> - /* reinit seq for resubmitted jobs */
> - fence->seqno = seq;
> - /* TO be inline with external fence creation and other drivers */
> + if (af) {
> + dma_fence_init(fence, &amdgpu_job_fence_ops,
> + &ring->fence_drv.lock,
> + adev->fence_context + ring->idx, seq);
> + /* Against remove in amdgpu_job_{free, free_cb} */
> dma_fence_get(fence);
> } else {
> - if (job) {
> - dma_fence_init(fence, &amdgpu_job_fence_ops,
> - &ring->fence_drv.lock,
> - adev->fence_context + ring->idx, seq);
> - /* Against remove in amdgpu_job_{free, free_cb} */
> - dma_fence_get(fence);
> - } else {
> - dma_fence_init(fence, &amdgpu_fence_ops,
> - &ring->fence_drv.lock,
> - adev->fence_context + ring->idx, seq);
> - }
> + dma_fence_init(fence, &amdgpu_fence_ops,
> + &ring->fence_drv.lock,
> + adev->fence_context + ring->idx, seq);
> }
>
> amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> index 802743efa3b39..206b70acb29a0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> @@ -128,6 +128,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> struct amdgpu_device *adev = ring->adev;
> struct amdgpu_ib *ib = &ibs[0];
> struct dma_fence *tmp = NULL;
> + struct amdgpu_fence *af;
> bool need_ctx_switch;
> struct amdgpu_vm *vm;
> uint64_t fence_ctx;
> @@ -154,6 +155,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> csa_va = job->csa_va;
> gds_va = job->gds_va;
> init_shadow = job->init_shadow;
> + af = &job->hw_fence;
> } else {
> vm = NULL;
> fence_ctx = 0;
> @@ -161,6 +163,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> csa_va = 0;
> gds_va = 0;
> init_shadow = false;
> + af = NULL;
> }
>
> if (!ring->sched.ready) {
> @@ -282,7 +285,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> amdgpu_ring_init_cond_exec(ring, ring->cond_exe_gpu_addr);
> }
>
> - r = amdgpu_fence_emit(ring, f, job, fence_flags);
> + r = amdgpu_fence_emit(ring, f, af, fence_flags);
> if (r) {
> dev_err(adev->dev, "failed to emit fence (%d)\n", r);
> if (job && job->vmid)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index e1f25218943a4..9ae522baad8e7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -157,8 +157,8 @@ void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev);
> void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev);
> int amdgpu_fence_driver_sw_init(struct amdgpu_device *adev);
> void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev);
> -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **fence, struct amdgpu_job *job,
> - unsigned flags);
> +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> + struct amdgpu_fence *af, unsigned int flags);
> int amdgpu_fence_emit_polling(struct amdgpu_ring *ring, uint32_t *s,
> uint32_t timeout);
> bool amdgpu_fence_process(struct amdgpu_ring *ring);
* Re: [PATCH 03/36] drm/amdgpu: remove fence slab
2025-06-17 3:07 ` [PATCH 03/36] drm/amdgpu: remove fence slab Alex Deucher
@ 2025-06-17 11:49 ` Christian König
0 siblings, 0 replies; 59+ messages in thread
From: Christian König @ 2025-06-17 11:49 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, sasundar
On 6/17/25 05:07, Alex Deucher wrote:
> Just use kmalloc for the fences in the rare case we need
> an independent fence.
>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This also means that we can nuke the two different fence implementations here, see amdgpu_job_fence_free().
But this patch alone is Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu.h | 3 ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 5 -----
> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 21 +++------------------
> 3 files changed, 3 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 5e2f086d2c99e..534d999b1433d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -470,9 +470,6 @@ struct amdgpu_sa_manager {
> void *cpu_ptr;
> };
>
> -int amdgpu_fence_slab_init(void);
> -void amdgpu_fence_slab_fini(void);
> -
> /*
> * IRQS.
> */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 7f8fa69300bf4..d645fa9bdff3b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -3113,10 +3113,6 @@ static int __init amdgpu_init(void)
> if (r)
> goto error_sync;
>
> - r = amdgpu_fence_slab_init();
> - if (r)
> - goto error_fence;
> -
> r = amdgpu_userq_fence_slab_init();
> if (r)
> goto error_fence;
> @@ -3151,7 +3147,6 @@ static void __exit amdgpu_exit(void)
> amdgpu_unregister_atpx_handler();
> amdgpu_acpi_release();
> amdgpu_sync_fini();
> - amdgpu_fence_slab_fini();
> amdgpu_userq_fence_slab_fini();
> mmu_notifier_synchronize();
> amdgpu_xcp_drv_release();
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index e88848c14491a..5555f3ae08c60 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -41,21 +41,6 @@
> #include "amdgpu_trace.h"
> #include "amdgpu_reset.h"
>
> -static struct kmem_cache *amdgpu_fence_slab;
> -
> -int amdgpu_fence_slab_init(void)
> -{
> - amdgpu_fence_slab = KMEM_CACHE(amdgpu_fence, SLAB_HWCACHE_ALIGN);
> - if (!amdgpu_fence_slab)
> - return -ENOMEM;
> - return 0;
> -}
> -
> -void amdgpu_fence_slab_fini(void)
> -{
> - rcu_barrier();
> - kmem_cache_destroy(amdgpu_fence_slab);
> -}
> /*
> * Cast helper
> */
> @@ -132,8 +117,8 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
>
> if (!af) {
> /* create a separate hw fence */
> - am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
> - if (am_fence == NULL)
> + am_fence = kmalloc(sizeof(*am_fence), GFP_KERNEL);
> + if (!am_fence)
> return -ENOMEM;
> } else {
> am_fence = af;
> @@ -806,7 +791,7 @@ static void amdgpu_fence_free(struct rcu_head *rcu)
> struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
>
> /* free fence_slab if it's separated fence*/
> - kmem_cache_free(amdgpu_fence_slab, to_amdgpu_fence(f));
> + kfree(to_amdgpu_fence(f));
> }
>
> /**
* Re: [PATCH 06/36] drm/amdgpu/sdma5: init engine reset mutex
2025-06-17 3:07 ` [PATCH 06/36] drm/amdgpu/sdma5: init engine reset mutex Alex Deucher
2025-06-17 5:50 ` Zhang, Jesse(Jie)
2025-06-17 6:09 ` Zhang, Jesse(Jie)
@ 2025-06-17 11:50 ` Christian König
2 siblings, 0 replies; 59+ messages in thread
From: Christian König @ 2025-06-17 11:50 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, sasundar
On 6/17/25 05:07, Alex Deucher wrote:
> Missing the mutex init.
>
> Fixes: e56d4bf57fab ("drm/amdgpu/: drm/amdgpu: Register the new sdma function pointers for sdma_v5_0")
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> index 2d94aadc31149..37f4b5b4a098f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> @@ -1399,6 +1399,7 @@ static int sdma_v5_0_sw_init(struct amdgpu_ip_block *ip_block)
> return r;
>
> for (i = 0; i < adev->sdma.num_instances; i++) {
> + mutex_init(&adev->sdma.instance[i].engine_reset_mutex);
> adev->sdma.instance[i].funcs = &sdma_v5_0_sdma_funcs;
> ring = &adev->sdma.instance[i].ring;
> ring->ring_obj = NULL;
* Re: [PATCH 08/36] drm/amdgpu: update ring reset function signature
2025-06-17 3:07 ` [PATCH 08/36] drm/amdgpu: update ring reset function signature Alex Deucher
@ 2025-06-17 12:20 ` Christian König
0 siblings, 0 replies; 59+ messages in thread
From: Christian König @ 2025-06-17 12:20 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, sasundar
On 6/17/25 05:07, Alex Deucher wrote:
> Going forward, we'll need more than just the vmid. Add the
> guilty amdgpu_fence.
>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Maybe not name it guilty_fence since that is not necessarily the case for SDMA, for example. Maybe just fence or timedout or something like that.
Apart from that, feel free to add Reviewed-by: Christian König <christian.koenig@amd.com>.
Regards,
Christian.
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 5 +++--
> drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 7 +++++--
> drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 8 ++++++--
> drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 8 ++++++--
> drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 ++-
> drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 3 ++-
> drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c | 4 +++-
> drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c | 4 +++-
> 22 files changed, 70 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index ddb9d3269357c..a7ff1fa4c778e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -155,7 +155,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> if (is_guilty)
> dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
>
> - r = amdgpu_ring_reset(ring, job->vmid);
> + r = amdgpu_ring_reset(ring, job->vmid, NULL);
> if (!r) {
> if (amdgpu_ring_sched_ready(ring))
> drm_sched_stop(&ring->sched, s_job);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index 9ae522baad8e7..fc36b86c6dcf8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -268,7 +268,8 @@ struct amdgpu_ring_funcs {
> void (*patch_cntl)(struct amdgpu_ring *ring, unsigned offset);
> void (*patch_ce)(struct amdgpu_ring *ring, unsigned offset);
> void (*patch_de)(struct amdgpu_ring *ring, unsigned offset);
> - int (*reset)(struct amdgpu_ring *ring, unsigned int vmid);
> + int (*reset)(struct amdgpu_ring *ring, unsigned int vmid,
> + struct amdgpu_fence *guilty_fence);
> void (*emit_cleaner_shader)(struct amdgpu_ring *ring);
> bool (*is_guilty)(struct amdgpu_ring *ring);
> };
> @@ -425,7 +426,7 @@ struct amdgpu_ring {
> #define amdgpu_ring_patch_cntl(r, o) ((r)->funcs->patch_cntl((r), (o)))
> #define amdgpu_ring_patch_ce(r, o) ((r)->funcs->patch_ce((r), (o)))
> #define amdgpu_ring_patch_de(r, o) ((r)->funcs->patch_de((r), (o)))
> -#define amdgpu_ring_reset(r, v) (r)->funcs->reset((r), (v))
> +#define amdgpu_ring_reset(r, v, f) (r)->funcs->reset((r), (v), (f))
>
> unsigned int amdgpu_ring_max_ibs(enum amdgpu_ring_type type);
> int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned ndw);
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> index 75ea071744eb5..444753b0ac885 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
> @@ -9522,7 +9522,9 @@ static void gfx_v10_ring_insert_nop(struct amdgpu_ring *ring, uint32_t num_nop)
> amdgpu_ring_insert_nop(ring, num_nop - 1);
> }
>
> -static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
> +static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
> @@ -9579,7 +9581,8 @@ static int gfx_v10_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
> }
>
> static int gfx_v10_0_reset_kcq(struct amdgpu_ring *ring,
> - unsigned int vmid)
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> index ec9b84f92d467..4293f2a1b9bfb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> @@ -6811,7 +6811,9 @@ static int gfx_v11_reset_gfx_pipe(struct amdgpu_ring *ring)
> return 0;
> }
>
> -static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
> +static int gfx_v11_0_reset_kgq(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> int r;
> @@ -6973,7 +6975,9 @@ static int gfx_v11_0_reset_compute_pipe(struct amdgpu_ring *ring)
> return 0;
> }
>
> -static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring, unsigned int vmid)
> +static int gfx_v11_0_reset_kcq(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> int r = 0;
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
> index 1234c8d64e20d..aea21ef177d05 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
> @@ -5307,7 +5307,9 @@ static int gfx_v12_reset_gfx_pipe(struct amdgpu_ring *ring)
> return 0;
> }
>
> -static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring, unsigned int vmid)
> +static int gfx_v12_0_reset_kgq(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> int r;
> @@ -5421,7 +5423,9 @@ static int gfx_v12_0_reset_compute_pipe(struct amdgpu_ring *ring)
> return 0;
> }
>
> -static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring, unsigned int vmid)
> +static int gfx_v12_0_reset_kcq(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> int r;
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> index d50e125fd3e0d..c0ffe7afca9b8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> @@ -7153,7 +7153,8 @@ static void gfx_v9_ring_insert_nop(struct amdgpu_ring *ring, uint32_t num_nop)
> }
>
> static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
> - unsigned int vmid)
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> index c233edf605694..79d4ae0645ffc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
> @@ -3552,7 +3552,8 @@ static int gfx_v9_4_3_reset_hw_pipe(struct amdgpu_ring *ring)
> }
>
> static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
> - unsigned int vmid)
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> struct amdgpu_kiq *kiq = &adev->gfx.kiq[ring->xcc_id];
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> index 4cde8a8bcc837..4c1ff6d0e14ea 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
> @@ -764,7 +764,9 @@ static int jpeg_v2_0_process_interrupt(struct amdgpu_device *adev,
> return 0;
> }
>
> -static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int jpeg_v2_0_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> jpeg_v2_0_stop(ring->adev);
> jpeg_v2_0_start(ring->adev);
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
> index 8b39e114f3be1..5a18b8644de2f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
> @@ -643,7 +643,9 @@ static int jpeg_v2_5_process_interrupt(struct amdgpu_device *adev,
> return 0;
> }
>
> -static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int jpeg_v2_5_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> jpeg_v2_5_stop_inst(ring->adev, ring->me);
> jpeg_v2_5_start_inst(ring->adev, ring->me);
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
> index 2f8510c2986b9..4963feddefae5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c
> @@ -555,7 +555,9 @@ static int jpeg_v3_0_process_interrupt(struct amdgpu_device *adev,
> return 0;
> }
>
> -static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int jpeg_v3_0_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> jpeg_v3_0_stop(ring->adev);
> jpeg_v3_0_start(ring->adev);
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
> index f17ec5414fd69..327adb474b0d3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0.c
> @@ -720,7 +720,9 @@ static int jpeg_v4_0_process_interrupt(struct amdgpu_device *adev,
> return 0;
> }
>
> -static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int jpeg_v4_0_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> if (amdgpu_sriov_vf(ring->adev))
> return -EINVAL;
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
> index 79e342d5ab28d..c951b4b170c5b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
> @@ -1143,7 +1143,9 @@ static void jpeg_v4_0_3_core_stall_reset(struct amdgpu_ring *ring)
> WREG32_SOC15(JPEG, jpeg_inst, regJPEG_CORE_RST_CTRL, 0x00);
> }
>
> -static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int jpeg_v4_0_3_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> if (amdgpu_sriov_vf(ring->adev))
> return -EOPNOTSUPP;
> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
> index 3b6f65a256464..51ae62c24c49e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v5_0_1.c
> @@ -834,7 +834,9 @@ static void jpeg_v5_0_1_core_stall_reset(struct amdgpu_ring *ring)
> WREG32_SOC15(JPEG, jpeg_inst, regJPEG_CORE_RST_CTRL, 0x00);
> }
>
> -static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int jpeg_v5_0_1_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> if (amdgpu_sriov_vf(ring->adev))
> return -EOPNOTSUPP;
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> index 5b7009612190f..502d71f678922 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> @@ -1674,7 +1674,9 @@ static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
> return sdma_v4_4_2_is_queue_selected(adev, instance_id, true);
> }
>
> -static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
> +static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> u32 id = ring->me;
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> index 37f4b5b4a098f..6092e2a9e210b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> @@ -1539,7 +1539,9 @@ static int sdma_v5_0_soft_reset(struct amdgpu_ip_block *ip_block)
> return 0;
> }
>
> -static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
> +static int sdma_v5_0_reset_queue(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> u32 inst_id = ring->me;
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> index 0b40411b92a0b..2cdcf28881c3d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> @@ -1452,7 +1452,9 @@ static int sdma_v5_2_wait_for_idle(struct amdgpu_ip_block *ip_block)
> return -ETIMEDOUT;
> }
>
> -static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
> +static int sdma_v5_2_reset_queue(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> u32 inst_id = ring->me;
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> index 5a70ae17be04e..43bb4a7456b90 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> @@ -1537,7 +1537,9 @@ static int sdma_v6_0_ring_preempt_ib(struct amdgpu_ring *ring)
> return r;
> }
>
> -static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
> +static int sdma_v6_0_reset_queue(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> int i, r;
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> index ad47d0bdf7775..b5c168cb1354d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> @@ -802,7 +802,9 @@ static bool sdma_v7_0_check_soft_reset(struct amdgpu_ip_block *ip_block)
> return false;
> }
>
> -static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
> +static int sdma_v7_0_reset_queue(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> int i, r;
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> index b5071f77f78d2..083fde15e83a1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0.c
> @@ -1967,7 +1967,9 @@ static int vcn_v4_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
> return 0;
> }
>
> -static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int vcn_v4_0_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
> index 5a33140f57235..57c59c4868a50 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
> @@ -1594,7 +1594,9 @@ static void vcn_v4_0_3_unified_ring_set_wptr(struct amdgpu_ring *ring)
> }
> }
>
> -static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int vcn_v4_0_3_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> int r = 0;
> int vcn_inst;
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
> index 16ade84facc78..4aad7d2e36379 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_5.c
> @@ -1465,7 +1465,9 @@ static void vcn_v4_0_5_unified_ring_set_wptr(struct amdgpu_ring *ring)
> }
> }
>
> -static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int vcn_v4_0_5_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
> index f8e3f0b882da5..b9c8a2b8c5e0d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_0.c
> @@ -1192,7 +1192,9 @@ static void vcn_v5_0_0_unified_ring_set_wptr(struct amdgpu_ring *ring)
> }
> }
>
> -static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring, unsigned int vmid)
> +static int vcn_v5_0_0_ring_reset(struct amdgpu_ring *ring,
> + unsigned int vmid,
> + struct amdgpu_fence *guilty_fence)
> {
> struct amdgpu_device *adev = ring->adev;
> struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me];
* Re: [PATCH 11/36] drm/amdgpu: move guilty handling into ring resets
2025-06-17 3:07 ` [PATCH 11/36] drm/amdgpu: move guilty handling " Alex Deucher
@ 2025-06-17 12:28 ` Christian König
0 siblings, 0 replies; 59+ messages in thread
From: Christian König @ 2025-06-17 12:28 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, sasundar
On 6/17/25 05:07, Alex Deucher wrote:
> Move guilty logic into the ring reset callbacks. This
> allows each ring reset callback to better handle fence
> errors and force completions in line with the reset
> behavior for each IP. It also allows us to remove
> the ring guilty callback since that logic now lives
> in the reset callback.
>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 23 ++----------------
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 1 -
> drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 30 +-----------------------
> 3 files changed, 3 insertions(+), 51 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 177f04491a11b..3b7d3844a74bc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -91,7 +91,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> struct amdgpu_job *job = to_amdgpu_job(s_job);
> struct amdgpu_task_info *ti;
> struct amdgpu_device *adev = ring->adev;
> - bool set_error = false;
> int idx, r;
>
> if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
> @@ -134,8 +133,6 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> if (unlikely(adev->debug_disable_gpu_ring_reset)) {
> dev_err(adev->dev, "Ring reset disabled by debug mask\n");
> } else if (amdgpu_gpu_recovery && ring->funcs->reset) {
> - bool is_guilty;
> -
> dev_err(adev->dev, "Starting %s ring reset\n",
> s_job->sched->name);
>
> @@ -145,24 +142,9 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> */
> drm_sched_wqueue_stop(&ring->sched);
>
> - /* for engine resets, we need to reset the engine,
> - * but individual queues may be unaffected.
> - * check here to make sure the accounting is correct.
> - */
> - if (ring->funcs->is_guilty)
> - is_guilty = ring->funcs->is_guilty(ring);
> - else
> - is_guilty = true;
> -
> - if (is_guilty) {
> - dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
> - set_error = true;
> - }
> -
> r = amdgpu_ring_reset(ring, job->vmid, NULL);
> if (!r) {
> - if (is_guilty)
> - atomic_inc(&ring->adev->gpu_reset_counter);
> + atomic_inc(&ring->adev->gpu_reset_counter);
> drm_sched_wqueue_start(&ring->sched);
> dev_err(adev->dev, "Ring %s reset succeeded\n",
> ring->sched.name);
> @@ -173,8 +155,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> dev_err(adev->dev, "Ring %s reset failed\n", ring->sched.name);
> }
>
> - if (!set_error)
> - dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
> + dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
>
> if (amdgpu_device_should_recover_gpu(ring->adev)) {
> struct amdgpu_reset_context reset_context;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index fc36b86c6dcf8..6aaa9d0c1f25c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -271,7 +271,6 @@ struct amdgpu_ring_funcs {
> int (*reset)(struct amdgpu_ring *ring, unsigned int vmid,
> struct amdgpu_fence *guilty_fence);
> void (*emit_cleaner_shader)(struct amdgpu_ring *ring);
> - bool (*is_guilty)(struct amdgpu_ring *ring);
> };
>
> /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> index d3cb4dbae790b..61274579b3452 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> @@ -1655,30 +1655,10 @@ static bool sdma_v4_4_2_is_queue_selected(struct amdgpu_device *adev, uint32_t i
> return (context_status & SDMA_GFX_CONTEXT_STATUS__SELECTED_MASK) != 0;
> }
>
> -static bool sdma_v4_4_2_ring_is_guilty(struct amdgpu_ring *ring)
> -{
> - struct amdgpu_device *adev = ring->adev;
> - uint32_t instance_id = ring->me;
> -
> - return sdma_v4_4_2_is_queue_selected(adev, instance_id, false);
> -}
> -
> -static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
> -{
> - struct amdgpu_device *adev = ring->adev;
> - uint32_t instance_id = ring->me;
> -
> - if (!adev->sdma.has_page_queue)
> - return false;
> -
> - return sdma_v4_4_2_is_queue_selected(adev, instance_id, true);
> -}
> -
> static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
> unsigned int vmid,
> struct amdgpu_fence *guilty_fence)
> {
> - bool is_guilty = ring->funcs->is_guilty(ring);
> struct amdgpu_device *adev = ring->adev;
> u32 id = ring->me;
> int r;
> @@ -1689,13 +1669,7 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring,
> amdgpu_amdkfd_suspend(adev, true);
> r = amdgpu_sdma_reset_engine(adev, id);
> amdgpu_amdkfd_resume(adev, true);
> - if (r)
> - return r;
> -
> - if (is_guilty)
> - amdgpu_fence_driver_force_completion(ring);
> -
> - return 0;
> + return r;
> }
>
> static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
> @@ -2180,7 +2154,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_ring_funcs = {
> .emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
> .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
> .reset = sdma_v4_4_2_reset_queue,
> - .is_guilty = sdma_v4_4_2_ring_is_guilty,
> };
>
> static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
> @@ -2213,7 +2186,6 @@ static const struct amdgpu_ring_funcs sdma_v4_4_2_page_ring_funcs = {
> .emit_reg_wait = sdma_v4_4_2_ring_emit_reg_wait,
> .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
> .reset = sdma_v4_4_2_reset_queue,
> - .is_guilty = sdma_v4_4_2_page_ring_is_guilty,
> };
>
> static void sdma_v4_4_2_set_ring_funcs(struct amdgpu_device *adev)
* Re: [PATCH 33/36] drm/amdgpu/vcn: add a helper framework for engine resets
2025-06-17 6:10 ` Sundararaju, Sathishkumar
@ 2025-06-17 13:09 ` Alex Deucher
2025-06-17 16:49 ` Sundararaju, Sathishkumar
0 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 13:09 UTC (permalink / raw)
To: Sundararaju, Sathishkumar; +Cc: Alex Deucher, amd-gfx, christian.koenig
On Tue, Jun 17, 2025 at 2:10 AM Sundararaju, Sathishkumar
<sasundar@amd.com> wrote:
>
> Please ignore my previous comments here, the new helper additions for
> vcn non unified queues are good.
>
> But one concern: the vinst->reset(vinst) callback must take in the ring
> pointer to handle the guilty/non-guilty distinction for the re-emit
> path; otherwise the guilty ring has to be tracked within the ring
> structure or identified by some query within the reset.
I wasn't sure if we could handle the re-emit properly on these VCN
chips. So at least for the first iteration, I just killed all the
queues. Is there a way to know which ring caused the hang? How does
the VCN firmware handle the rings?
Alex
>
> Regards,
> Sathish
>
>
> On 6/17/2025 10:00 AM, Sundararaju, Sathishkumar wrote:
> > Hi Alex,
> >
> > Would it be good to have this logic in the reset callback itself?
> >
> > Adding a common vinst->reset removes the flexibility of having separate
> > reset functionality for enc rings and decode rings; with separate
> > callbacks we could selectively handle the drm_sched_wqueue_start/stop
> > and the re-emit of guilty/non-guilty state for enc and dec.
> >
> > And the usual vcn_stop() followed by vcn_start() doesn't actually
> > reset the engine for vcn3.
> >
> > I tried a workaround: pause_dpg, enable static clockgating and
> > powergating, and then stop()/start() the engine, which has been
> > working consistently so far.
> >
> > Regards,
> > Sathish
> >
> > On 6/17/2025 8:38 AM, Alex Deucher wrote:
> >> With engine resets we reset all queues on the engine rather
> >> than just a single queue. Add a framework to handle this
> >> similar to SDMA.
> >>
> >> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> >> ---
> >> drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 64 +++++++++++++++++++++++++
> >> drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 6 ++-
> >> 2 files changed, 69 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> >> index c8885c3d54b33..075740ed275eb 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> >> @@ -134,6 +134,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_device
> >> *adev, int i)
> >> mutex_init(&adev->vcn.inst[i].vcn1_jpeg1_workaround);
> >> mutex_init(&adev->vcn.inst[i].vcn_pg_lock);
> >> + mutex_init(&adev->vcn.inst[i].engine_reset_mutex);
> >> atomic_set(&adev->vcn.inst[i].total_submission_cnt, 0);
> >> INIT_DELAYED_WORK(&adev->vcn.inst[i].idle_work,
> >> amdgpu_vcn_idle_work_handler);
> >> atomic_set(&adev->vcn.inst[i].dpg_enc_submission_cnt, 0);
> >> @@ -1451,3 +1452,66 @@ int vcn_set_powergating_state(struct
> >> amdgpu_ip_block *ip_block,
> >> return ret;
> >> }
> >> +
> >> +/**
> >> + * amdgpu_vcn_reset_engine - Reset a specific VCN engine
> >> + * @adev: Pointer to the AMDGPU device
> >> + * @instance_id: VCN engine instance to reset
> >> + *
> >> + * Returns: 0 on success, or a negative error code on failure.
> >> + */
> >> +static int amdgpu_vcn_reset_engine(struct amdgpu_device *adev,
> >> + uint32_t instance_id)
> >> +{
> >> + struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[instance_id];
> >> + int r, i;
> >> +
> >> + mutex_lock(&vinst->engine_reset_mutex);
> >> + /* Stop the scheduler's work queue for the dec and enc rings if
> >> they are running.
> >> + * This ensures that no new tasks are submitted to the queues while
> >> + * the reset is in progress.
> >> + */
> >> + drm_sched_wqueue_stop(&vinst->ring_dec.sched);
> >> + for (i = 0; i < vinst->num_enc_rings; i++)
> >> + drm_sched_wqueue_stop(&vinst->ring_enc[i].sched);
> >> +
> >> + /* Perform the VCN reset for the specified instance */
> >> + r = vinst->reset(vinst);
> >> + if (r) {
> >> + dev_err(adev->dev, "Failed to reset VCN instance %u\n",
> >> instance_id);
> >> + } else {
> >> + /* Restart the scheduler's work queue for the dec and enc rings
> >> + * if they were stopped by this function. This allows new tasks
> >> + * to be submitted to the queues after the reset is complete.
> >> + */
> >> + drm_sched_wqueue_start(&vinst->ring_dec.sched);
> >> + for (i = 0; i < vinst->num_enc_rings; i++)
> >> + drm_sched_wqueue_start(&vinst->ring_enc[i].sched);
> >> + }
> >> + mutex_unlock(&vinst->engine_reset_mutex);
> >> +
> >> + return r;
> >> +}
> >> +
> >> +/**
> >> + * amdgpu_vcn_ring_reset - Reset a VCN ring
> >> + * @ring: ring to reset
> >> + * @vmid: vmid of guilty job
> >> + * @guilty_fence: guilty fence
> >> + *
> >> + * This helper is for VCN blocks without unified queues because
> >> + * resetting the engine resets all queues in that case. With
> >> + * unified queues we have one queue per engine.
> >> + * Returns: 0 on success, or a negative error code on failure.
> >> + */
> >> +int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
> >> + unsigned int vmid,
> >> + struct amdgpu_fence *guilty_fence)
> >> +{
> >> + struct amdgpu_device *adev = ring->adev;
> >> +
> >> + if (adev->vcn.inst[ring->me].using_unified_queue)
> >> + return -EINVAL;
> >> +
> >> + return amdgpu_vcn_reset_engine(adev, ring->me);
> >> +}
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
> >> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
> >> index 83adf81defc71..0bc0a94d7cf0f 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
> >> @@ -330,7 +330,9 @@ struct amdgpu_vcn_inst {
> >> struct dpg_pause_state *new_state);
> >> int (*set_pg_state)(struct amdgpu_vcn_inst *vinst,
> >> enum amd_powergating_state state);
> >> + int (*reset)(struct amdgpu_vcn_inst *vinst);
> >> bool using_unified_queue;
> >> + struct mutex engine_reset_mutex;
> >> };
> >> struct amdgpu_vcn_ras {
> >> @@ -552,5 +554,7 @@ void amdgpu_debugfs_vcn_sched_mask_init(struct
> >> amdgpu_device *adev);
> >> int vcn_set_powergating_state(struct amdgpu_ip_block *ip_block,
> >> enum amd_powergating_state state);
> >> -
> >> +int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
> >> + unsigned int vmid,
> >> + struct amdgpu_fence *guilty_fence);
> >> #endif
> >
>
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit()
2025-06-17 11:44 ` Christian König
@ 2025-06-17 13:46 ` Alex Deucher
2025-06-17 13:49 ` Alex Deucher
2025-06-18 22:32 ` Alex Deucher
1 sibling, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 13:46 UTC (permalink / raw)
To: Christian König; +Cc: Alex Deucher, amd-gfx, sasundar
On Tue, Jun 17, 2025 at 7:57 AM Christian König
<christian.koenig@amd.com> wrote:
>
> On 6/17/25 05:07, Alex Deucher wrote:
> > What we actually care about is the amdgpu_fence object
> > so pass that in explicitly to avoid possible mistakes
> > in the future.
> >
> > The job_run_counter handling can be safely removed at this
> > point as we no longer support job resubmission.
> >
> > Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 36 +++++++++--------------
> > drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 5 +++-
> > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 4 +--
> > 3 files changed, 20 insertions(+), 25 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > index 569e0e5373927..e88848c14491a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > @@ -114,14 +114,14 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring)
> > *
> > * @ring: ring the fence is associated with
> > * @f: resulting fence object
> > - * @job: job the fence is embedded in
> > + * @af: amdgpu fence input
> > * @flags: flags to pass into the subordinate .emit_fence() call
> > *
> > * Emits a fence command on the requested ring (all asics).
> > * Returns 0 on success, -ENOMEM on failure.
> > */
> > -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amdgpu_job *job,
> > - unsigned int flags)
> > +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> > + struct amdgpu_fence *af, unsigned int flags)
> > {
> > struct amdgpu_device *adev = ring->adev;
> > struct dma_fence *fence;
> > @@ -130,36 +130,28 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
> > uint32_t seq;
> > int r;
> >
> > - if (job == NULL) {
> > - /* create a sperate hw fence */
> > + if (!af) {
> > + /* create a separate hw fence */
> > am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
> > if (am_fence == NULL)
> > return -ENOMEM;
>
> I think we should remove the output parameter as well.
>
> An amdgpu_fence can be trivially allocated by the caller.
Is there anything special about amdgpu_job_fence_ops vs
amdgpu_fence_ops other than the slab handling? I was worried I was
missing something about the fence lifetimes with amdgpu_job_{free,
free_cb}.
Alex
>
> Apart from that looks good to me.
>
> Regards,
> Christian.
>
> > } else {
> > - /* take use of job-embedded fence */
> > - am_fence = &job->hw_fence;
> > + am_fence = af;
> > }
> > fence = &am_fence->base;
> > am_fence->ring = ring;
> >
> > seq = ++ring->fence_drv.sync_seq;
> > - if (job && job->job_run_counter) {
> > - /* reinit seq for resubmitted jobs */
> > - fence->seqno = seq;
> > - /* TO be inline with external fence creation and other drivers */
> > + if (af) {
> > + dma_fence_init(fence, &amdgpu_job_fence_ops,
> > + &ring->fence_drv.lock,
> > + adev->fence_context + ring->idx, seq);
> > + /* Against remove in amdgpu_job_{free, free_cb} */
> > dma_fence_get(fence);
> > } else {
> > - if (job) {
> > - dma_fence_init(fence, &amdgpu_job_fence_ops,
> > - &ring->fence_drv.lock,
> > - adev->fence_context + ring->idx, seq);
> > - /* Against remove in amdgpu_job_{free, free_cb} */
> > - dma_fence_get(fence);
> > - } else {
> > - dma_fence_init(fence, &amdgpu_fence_ops,
> > - &ring->fence_drv.lock,
> > - adev->fence_context + ring->idx, seq);
> > - }
> > + dma_fence_init(fence, &amdgpu_fence_ops,
> > + &ring->fence_drv.lock,
> > + adev->fence_context + ring->idx, seq);
> > }
> >
> > amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > index 802743efa3b39..206b70acb29a0 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > @@ -128,6 +128,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > struct amdgpu_device *adev = ring->adev;
> > struct amdgpu_ib *ib = &ibs[0];
> > struct dma_fence *tmp = NULL;
> > + struct amdgpu_fence *af;
> > bool need_ctx_switch;
> > struct amdgpu_vm *vm;
> > uint64_t fence_ctx;
> > @@ -154,6 +155,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > csa_va = job->csa_va;
> > gds_va = job->gds_va;
> > init_shadow = job->init_shadow;
> > + af = &job->hw_fence;
> > } else {
> > vm = NULL;
> > fence_ctx = 0;
> > @@ -161,6 +163,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > csa_va = 0;
> > gds_va = 0;
> > init_shadow = false;
> > + af = NULL;
> > }
> >
> > if (!ring->sched.ready) {
> > @@ -282,7 +285,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > amdgpu_ring_init_cond_exec(ring, ring->cond_exe_gpu_addr);
> > }
> >
> > - r = amdgpu_fence_emit(ring, f, job, fence_flags);
> > + r = amdgpu_fence_emit(ring, f, af, fence_flags);
> > if (r) {
> > dev_err(adev->dev, "failed to emit fence (%d)\n", r);
> > if (job && job->vmid)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > index e1f25218943a4..9ae522baad8e7 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > @@ -157,8 +157,8 @@ void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev);
> > void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev);
> > int amdgpu_fence_driver_sw_init(struct amdgpu_device *adev);
> > void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev);
> > -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **fence, struct amdgpu_job *job,
> > - unsigned flags);
> > +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> > + struct amdgpu_fence *af, unsigned int flags);
> > int amdgpu_fence_emit_polling(struct amdgpu_ring *ring, uint32_t *s,
> > uint32_t timeout);
> > bool amdgpu_fence_process(struct amdgpu_ring *ring);
>
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit()
2025-06-17 13:46 ` Alex Deucher
@ 2025-06-17 13:49 ` Alex Deucher
2025-06-18 7:15 ` Christian König
0 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 13:49 UTC (permalink / raw)
To: Christian König; +Cc: Alex Deucher, amd-gfx, sasundar
On Tue, Jun 17, 2025 at 9:46 AM Alex Deucher <alexdeucher@gmail.com> wrote:
>
> On Tue, Jun 17, 2025 at 7:57 AM Christian König
> <christian.koenig@amd.com> wrote:
> >
> > On 6/17/25 05:07, Alex Deucher wrote:
> > > What we actually care about is the amdgpu_fence object
> > > so pass that in explicitly to avoid possible mistakes
> > > in the future.
> > >
> > > The job_run_counter handling can be safely removed at this
> > > point as we no longer support job resubmission.
> > >
> > > Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> > > ---
> > > drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 36 +++++++++--------------
> > > drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 5 +++-
> > > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 4 +--
> > > 3 files changed, 20 insertions(+), 25 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > > index 569e0e5373927..e88848c14491a 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > > @@ -114,14 +114,14 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring)
> > > *
> > > * @ring: ring the fence is associated with
> > > * @f: resulting fence object
> > > - * @job: job the fence is embedded in
> > > + * @af: amdgpu fence input
> > > * @flags: flags to pass into the subordinate .emit_fence() call
> > > *
> > > * Emits a fence command on the requested ring (all asics).
> > > * Returns 0 on success, -ENOMEM on failure.
> > > */
> > > -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amdgpu_job *job,
> > > - unsigned int flags)
> > > +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> > > + struct amdgpu_fence *af, unsigned int flags)
> > > {
> > > struct amdgpu_device *adev = ring->adev;
> > > struct dma_fence *fence;
> > > @@ -130,36 +130,28 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
> > > uint32_t seq;
> > > int r;
> > >
> > > - if (job == NULL) {
> > > - /* create a sperate hw fence */
> > > + if (!af) {
> > > + /* create a separate hw fence */
> > > am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
> > > if (am_fence == NULL)
> > > return -ENOMEM;
> >
> > I think we should remove the output parameter as well.
> >
> > An amdgpu_fence can be trivially allocated by the caller.
>
> Is there anything special about amdgpu_job_fence_ops vs
> amdgpu_fence_ops other than the slab handling? I was worried I was
> missing something about the fence lifetimes with amdgpu_job_{free,
> free_cb}.
Specifically this chunk of code is confusing to me:
/* only put the hw fence if has embedded fence */
if (!job->hw_fence.base.ops)
kfree(job);
else
dma_fence_put(&job->hw_fence.base);
Alex
>
> Alex
>
> >
> > Apart from that looks good to me.
> >
> > Regards,
> > Christian.
> >
> > > } else {
> > > - /* take use of job-embedded fence */
> > > - am_fence = &job->hw_fence;
> > > + am_fence = af;
> > > }
> > > fence = &am_fence->base;
> > > am_fence->ring = ring;
> > >
> > > seq = ++ring->fence_drv.sync_seq;
> > > - if (job && job->job_run_counter) {
> > > - /* reinit seq for resubmitted jobs */
> > > - fence->seqno = seq;
> > > - /* TO be inline with external fence creation and other drivers */
> > > + if (af) {
> > > + dma_fence_init(fence, &amdgpu_job_fence_ops,
> > > + &ring->fence_drv.lock,
> > > + adev->fence_context + ring->idx, seq);
> > > + /* Against remove in amdgpu_job_{free, free_cb} */
> > > dma_fence_get(fence);
> > > } else {
> > > - if (job) {
> > > - dma_fence_init(fence, &amdgpu_job_fence_ops,
> > > - &ring->fence_drv.lock,
> > > - adev->fence_context + ring->idx, seq);
> > > - /* Against remove in amdgpu_job_{free, free_cb} */
> > > - dma_fence_get(fence);
> > > - } else {
> > > - dma_fence_init(fence, &amdgpu_fence_ops,
> > > - &ring->fence_drv.lock,
> > > - adev->fence_context + ring->idx, seq);
> > > - }
> > > + dma_fence_init(fence, &amdgpu_fence_ops,
> > > + &ring->fence_drv.lock,
> > > + adev->fence_context + ring->idx, seq);
> > > }
> > >
> > > amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > > index 802743efa3b39..206b70acb29a0 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > > @@ -128,6 +128,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > > struct amdgpu_device *adev = ring->adev;
> > > struct amdgpu_ib *ib = &ibs[0];
> > > struct dma_fence *tmp = NULL;
> > > + struct amdgpu_fence *af;
> > > bool need_ctx_switch;
> > > struct amdgpu_vm *vm;
> > > uint64_t fence_ctx;
> > > @@ -154,6 +155,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > > csa_va = job->csa_va;
> > > gds_va = job->gds_va;
> > > init_shadow = job->init_shadow;
> > > + af = &job->hw_fence;
> > > } else {
> > > vm = NULL;
> > > fence_ctx = 0;
> > > @@ -161,6 +163,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > > csa_va = 0;
> > > gds_va = 0;
> > > init_shadow = false;
> > > + af = NULL;
> > > }
> > >
> > > if (!ring->sched.ready) {
> > > @@ -282,7 +285,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > > amdgpu_ring_init_cond_exec(ring, ring->cond_exe_gpu_addr);
> > > }
> > >
> > > - r = amdgpu_fence_emit(ring, f, job, fence_flags);
> > > + r = amdgpu_fence_emit(ring, f, af, fence_flags);
> > > if (r) {
> > > dev_err(adev->dev, "failed to emit fence (%d)\n", r);
> > > if (job && job->vmid)
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > > index e1f25218943a4..9ae522baad8e7 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > > @@ -157,8 +157,8 @@ void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev);
> > > void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev);
> > > int amdgpu_fence_driver_sw_init(struct amdgpu_device *adev);
> > > void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev);
> > > -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **fence, struct amdgpu_job *job,
> > > - unsigned flags);
> > > +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> > > + struct amdgpu_fence *af, unsigned int flags);
> > > int amdgpu_fence_emit_polling(struct amdgpu_ring *ring, uint32_t *s,
> > > uint32_t timeout);
> > > bool amdgpu_fence_process(struct amdgpu_ring *ring);
> >
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [PATCH 33/36] drm/amdgpu/vcn: add a helper framework for engine resets
2025-06-17 13:09 ` Alex Deucher
@ 2025-06-17 16:49 ` Sundararaju, Sathishkumar
0 siblings, 0 replies; 59+ messages in thread
From: Sundararaju, Sathishkumar @ 2025-06-17 16:49 UTC (permalink / raw)
To: Alex Deucher; +Cc: Alex Deucher, amd-gfx, christian.koenig
On 6/17/2025 6:39 PM, Alex Deucher wrote:
> On Tue, Jun 17, 2025 at 2:10 AM Sundararaju, Sathishkumar
> <sasundar@amd.com> wrote:
>> Please ignore my previous comments here, the new helper additions for
>> vcn non unified queues are good.
>>
>> But one concern: the vinst->reset(vinst) callback must take in a ring
>> pointer to handle guilty/non-guilty for the re-emit part; otherwise the
>> guilty ring has to be tracked within the ring structure or identified
>> by some query within reset.
> I wasn't sure if we could handle the reemit properly on these VCN
> chips.
Your suspicion about it is right; I have never been able to get the
re-emit to work on the non-guilty ring (assuming the timed-out job's
ring is the hung one), so eventually I had to force complete on all
queues anyway.
The only thing I was able to consistently get to work was re-emitting
and saving good context on the timed-out job's ring.
> So at least for the first iteration, I just killed all the
> queues.
Agree with you, thanks for explaining.
> Is there a way to know which ring caused the hang? How does
> the VCN firmware handle the rings?
I haven't figured that out yet, e.g. via some status registers.
I always assume the timed-out job's ring is the hung one.
Regards,
Sathish
>
> Alex
>
>> Regards,
>> Sathish
>>
>>
>> On 6/17/2025 10:00 AM, Sundararaju, Sathishkumar wrote:
>>> Hi Alex,
>>>
>>> Would it be good to have this logic in the reset call back itself ?
>>>
>>> Adding a common vinst->reset removes the flexibility of having separate
>>> reset functionality for enc rings and dec rings; separate callbacks
>>> could selectively handle drm_sched_wqueue_start/stop and the re-emit of
>>> guilty/non-guilty for enc and dec.
>>>
>>> And the usual vcn_stop() followed by vcn_start() isn't helping in
>>> reset of the engine for vcn3.
>>>
>>> I tried a workaround to pause_dpg and enable static clockgate and
>>> powergate, and then stop()/start() the engine
>>> which is working consistently so far.
>>>
>>> Regards,
>>> Sathish
>>>
>>> On 6/17/2025 8:38 AM, Alex Deucher wrote:
>>>> With engine resets we reset all queues on the engine rather
>>>> than just a single queue. Add a framework to handle this
>>>> similar to SDMA.
>>>>
>>>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>>>> ---
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 64 +++++++++++++++++++++++++
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h | 6 ++-
>>>> 2 files changed, 69 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>> index c8885c3d54b33..075740ed275eb 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>> @@ -134,6 +134,7 @@ int amdgpu_vcn_sw_init(struct amdgpu_device
>>>> *adev, int i)
>>>> mutex_init(&adev->vcn.inst[i].vcn1_jpeg1_workaround);
>>>> mutex_init(&adev->vcn.inst[i].vcn_pg_lock);
>>>> + mutex_init(&adev->vcn.inst[i].engine_reset_mutex);
>>>> atomic_set(&adev->vcn.inst[i].total_submission_cnt, 0);
>>>> INIT_DELAYED_WORK(&adev->vcn.inst[i].idle_work,
>>>> amdgpu_vcn_idle_work_handler);
>>>> atomic_set(&adev->vcn.inst[i].dpg_enc_submission_cnt, 0);
>>>> @@ -1451,3 +1452,66 @@ int vcn_set_powergating_state(struct
>>>> amdgpu_ip_block *ip_block,
>>>> return ret;
>>>> }
>>>> +
>>>> +/**
>>>> + * amdgpu_vcn_reset_engine - Reset a specific VCN engine
>>>> + * @adev: Pointer to the AMDGPU device
>>>> + * @instance_id: VCN engine instance to reset
>>>> + *
>>>> + * Returns: 0 on success, or a negative error code on failure.
>>>> + */
>>>> +static int amdgpu_vcn_reset_engine(struct amdgpu_device *adev,
>>>> + uint32_t instance_id)
>>>> +{
>>>> + struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[instance_id];
>>>> + int r, i;
>>>> +
>>>> + mutex_lock(&vinst->engine_reset_mutex);
>>>> + /* Stop the scheduler's work queue for the dec and enc rings if
>>>> they are running.
>>>> + * This ensures that no new tasks are submitted to the queues while
>>>> + * the reset is in progress.
>>>> + */
>>>> + drm_sched_wqueue_stop(&vinst->ring_dec.sched);
>>>> + for (i = 0; i < vinst->num_enc_rings; i++)
>>>> + drm_sched_wqueue_stop(&vinst->ring_enc[i].sched);
>>>> +
>>>> + /* Perform the VCN reset for the specified instance */
>>>> + r = vinst->reset(vinst);
>>>> + if (r) {
>>>> + dev_err(adev->dev, "Failed to reset VCN instance %u\n",
>>>> instance_id);
>>>> + } else {
>>>> + /* Restart the scheduler's work queue for the dec and enc rings
>>>> + * if they were stopped by this function. This allows new tasks
>>>> + * to be submitted to the queues after the reset is complete.
>>>> + */
>>>> + drm_sched_wqueue_start(&vinst->ring_dec.sched);
>>>> + for (i = 0; i < vinst->num_enc_rings; i++)
>>>> + drm_sched_wqueue_start(&vinst->ring_enc[i].sched);
>>>> + }
>>>> + mutex_unlock(&vinst->engine_reset_mutex);
>>>> +
>>>> + return r;
>>>> +}
>>>> +
>>>> +/**
>>>> + * amdgpu_vcn_ring_reset - Reset a VCN ring
>>>> + * @ring: ring to reset
>>>> + * @vmid: vmid of guilty job
>>>> + * @guilty_fence: guilty fence
>>>> + *
>>>> + * This helper is for VCN blocks without unified queues because
>>>> + * resetting the engine resets all queues in that case. With
>>>> + * unified queues we have one queue per engine.
>>>> + * Returns: 0 on success, or a negative error code on failure.
>>>> + */
>>>> +int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
>>>> + unsigned int vmid,
>>>> + struct amdgpu_fence *guilty_fence)
>>>> +{
>>>> + struct amdgpu_device *adev = ring->adev;
>>>> +
>>>> + if (adev->vcn.inst[ring->me].using_unified_queue)
>>>> + return -EINVAL;
>>>> +
>>>> + return amdgpu_vcn_reset_engine(adev, ring->me);
>>>> +}
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>>>> index 83adf81defc71..0bc0a94d7cf0f 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>>>> @@ -330,7 +330,9 @@ struct amdgpu_vcn_inst {
>>>> struct dpg_pause_state *new_state);
>>>> int (*set_pg_state)(struct amdgpu_vcn_inst *vinst,
>>>> enum amd_powergating_state state);
>>>> + int (*reset)(struct amdgpu_vcn_inst *vinst);
>>>> bool using_unified_queue;
>>>> + struct mutex engine_reset_mutex;
>>>> };
>>>> struct amdgpu_vcn_ras {
>>>> @@ -552,5 +554,7 @@ void amdgpu_debugfs_vcn_sched_mask_init(struct
>>>> amdgpu_device *adev);
>>>> int vcn_set_powergating_state(struct amdgpu_ip_block *ip_block,
>>>> enum amd_powergating_state state);
>>>> -
>>>> +int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring,
>>>> + unsigned int vmid,
>>>> + struct amdgpu_fence *guilty_fence);
>>>> #endif
^ permalink raw reply [flat|nested] 59+ messages in thread
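[Editorial note: the stop/reset/restart pattern in the helper above can be sketched in isolation. This is a minimal, self-contained model; the struct and function names are simplified stand-ins for the amdgpu ones, and the engine_reset_mutex and real drm_sched plumbing are deliberately omitted.]

```c
#include <assert.h>
#include <string.h>

#define NUM_ENC_RINGS 2

/* Illustrative stand-ins for the amdgpu scheduler/instance types. */
struct sched { int running; };

struct vcn_inst {
	struct sched dec;
	struct sched enc[NUM_ENC_RINGS];
	int (*reset)(struct vcn_inst *vinst);
};

static int reset_engine(struct vcn_inst *vinst)
{
	int r, i;

	/* Stop every queue first so no new work is submitted mid-reset. */
	vinst->dec.running = 0;
	for (i = 0; i < NUM_ENC_RINGS; i++)
		vinst->enc[i].running = 0;

	r = vinst->reset(vinst);
	if (!r) {
		/* Restart only on success; on failure the queues stay
		 * stopped so the caller can escalate to a bigger reset. */
		vinst->dec.running = 1;
		for (i = 0; i < NUM_ENC_RINGS; i++)
			vinst->enc[i].running = 1;
	}
	return r;
}

/* Two toy reset callbacks to exercise both paths. */
static int reset_ok(struct vcn_inst *vinst)   { (void)vinst; return 0; }
static int reset_fail(struct vcn_inst *vinst) { (void)vinst; return -22; }
```

The key design point mirrored here is that a failed engine reset leaves the work queues stopped, rather than restarting them on a half-reset engine.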
* Re: [PATCH 36/36] drm/amdgpu/vcn3: implement ring reset
2025-06-17 3:08 ` [PATCH 36/36] drm/amdgpu/vcn3: " Alex Deucher
@ 2025-06-17 19:22 ` Sundararaju, Sathishkumar
2025-06-17 20:14 ` Alex Deucher
0 siblings, 1 reply; 59+ messages in thread
From: Sundararaju, Sathishkumar @ 2025-06-17 19:22 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, christian.koenig
Hi Alex,
On 6/17/2025 8:38 AM, Alex Deucher wrote:
> Use the new helpers to handle engine resets for VCN.
>
> Untested.
>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 24 ++++++++++++++++++++++++
> 1 file changed, 24 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> index 9fb0d53805892..ec4d2ab75fc4d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> @@ -110,6 +110,7 @@ static int vcn_v3_0_set_pg_state(struct amdgpu_vcn_inst *vinst,
> enum amd_powergating_state state);
> static int vcn_v3_0_pause_dpg_mode(struct amdgpu_vcn_inst *vinst,
> struct dpg_pause_state *new_state);
> +static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst);
>
> static void vcn_v3_0_dec_ring_set_wptr(struct amdgpu_ring *ring);
> static void vcn_v3_0_enc_ring_set_wptr(struct amdgpu_ring *ring);
> @@ -289,6 +290,7 @@ static int vcn_v3_0_sw_init(struct amdgpu_ip_block *ip_block)
>
> if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)
> adev->vcn.inst[i].pause_dpg_mode = vcn_v3_0_pause_dpg_mode;
> + adev->vcn.inst[i].reset = vcn_v3_0_reset;
> }
>
> if (amdgpu_sriov_vf(adev)) {
> @@ -1869,6 +1871,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = {
> .emit_wreg = vcn_dec_sw_ring_emit_wreg,
> .emit_reg_wait = vcn_dec_sw_ring_emit_reg_wait,
> .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
> + .reset = amdgpu_vcn_ring_reset,
You probably wanted to add the reset callback to vcn_v3_0_enc_ring_vm_funcs
instead of vcn_v3_0_dec_sw_ring_vm_funcs.
With that, the vcn and jpeg changes in this series are :-
Reviewed-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
Tested-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
Test exceptions: VCN/JPEG 4_0_3 and VCN/JPEG 5_0_1.
Regards,
Sathish
> };
>
> static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
> @@ -2033,6 +2036,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_ring_vm_funcs = {
> .emit_wreg = vcn_v2_0_dec_ring_emit_wreg,
> .emit_reg_wait = vcn_v2_0_dec_ring_emit_reg_wait,
> .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
> + .reset = amdgpu_vcn_ring_reset,
> };
>
> /**
> @@ -2164,6 +2168,26 @@ static void vcn_v3_0_set_enc_ring_funcs(struct amdgpu_device *adev)
> }
> }
>
> +static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst)
> +{
> + int i, r;
> +
> + vcn_v3_0_stop(vinst);
> + vcn_v3_0_start(vinst);
> + r = amdgpu_ring_test_ring(&vinst->ring_dec);
> + if (r)
> + return r;
> + for (i = 0; i < vinst->num_enc_rings; i++) {
> + r = amdgpu_ring_test_ring(&vinst->ring_enc[i]);
> + if (r)
> + return r;
> + }
> + amdgpu_fence_driver_force_completion(&vinst->ring_dec);
> + for (i = 0; i < vinst->num_enc_rings; i++)
> + amdgpu_fence_driver_force_completion(&vinst->ring_enc[i]);
> + return 0;
> +}
> +
> static bool vcn_v3_0_is_idle(struct amdgpu_ip_block *ip_block)
> {
> struct amdgpu_device *adev = ip_block->adev;
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [PATCH 36/36] drm/amdgpu/vcn3: implement ring reset
2025-06-17 19:22 ` Sundararaju, Sathishkumar
@ 2025-06-17 20:14 ` Alex Deucher
2025-06-18 7:35 ` Sundararaju, Sathishkumar
0 siblings, 1 reply; 59+ messages in thread
From: Alex Deucher @ 2025-06-17 20:14 UTC (permalink / raw)
To: Sundararaju, Sathishkumar; +Cc: Alex Deucher, amd-gfx, christian.koenig
On Tue, Jun 17, 2025 at 4:02 PM Sundararaju, Sathishkumar
<sasundar@amd.com> wrote:
>
> Hi Alex,
>
> On 6/17/2025 8:38 AM, Alex Deucher wrote:
> > Use the new helpers to handle engine resets for VCN.
> >
> > Untested.
> >
> > Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> > ---
> > drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 24 ++++++++++++++++++++++++
> > 1 file changed, 24 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> > index 9fb0d53805892..ec4d2ab75fc4d 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
> > @@ -110,6 +110,7 @@ static int vcn_v3_0_set_pg_state(struct amdgpu_vcn_inst *vinst,
> > enum amd_powergating_state state);
> > static int vcn_v3_0_pause_dpg_mode(struct amdgpu_vcn_inst *vinst,
> > struct dpg_pause_state *new_state);
> > +static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst);
> >
> > static void vcn_v3_0_dec_ring_set_wptr(struct amdgpu_ring *ring);
> > static void vcn_v3_0_enc_ring_set_wptr(struct amdgpu_ring *ring);
> > @@ -289,6 +290,7 @@ static int vcn_v3_0_sw_init(struct amdgpu_ip_block *ip_block)
> >
> > if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)
> > adev->vcn.inst[i].pause_dpg_mode = vcn_v3_0_pause_dpg_mode;
> > + adev->vcn.inst[i].reset = vcn_v3_0_reset;
> > }
> >
> > if (amdgpu_sriov_vf(adev)) {
> > @@ -1869,6 +1871,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = {
> > .emit_wreg = vcn_dec_sw_ring_emit_wreg,
> > .emit_reg_wait = vcn_dec_sw_ring_emit_reg_wait,
> > .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
> > + .reset = amdgpu_vcn_ring_reset,
>
> You probably wanted to add the reset callback to vcn_v3_0_enc_ring_vm_funcs
> instead of vcn_v3_0_dec_sw_ring_vm_funcs.
I'll fix that up.
>
> With that, the vcn and jpeg changes in this series are :-
>
> Reviewed-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
> Tested-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
You mentioned that the start/stop sequence didn't work for some chips.
What sequence should I use for those?
Alex
>
> Test exceptions: VCN/JPEG 4_0_3 and VCN/JPEG 5_0_1.
>
> Regards,
> Sathish
>
> > };
> >
> > static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
> > @@ -2033,6 +2036,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_ring_vm_funcs = {
> > .emit_wreg = vcn_v2_0_dec_ring_emit_wreg,
> > .emit_reg_wait = vcn_v2_0_dec_ring_emit_reg_wait,
> > .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
> > + .reset = amdgpu_vcn_ring_reset,
> > };
> >
> > /**
> > @@ -2164,6 +2168,26 @@ static void vcn_v3_0_set_enc_ring_funcs(struct amdgpu_device *adev)
> > }
> > }
> >
> > +static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst)
> > +{
> > + int i, r;
> > +
> > + vcn_v3_0_stop(vinst);
> > + vcn_v3_0_start(vinst);
> > + r = amdgpu_ring_test_ring(&vinst->ring_dec);
> > + if (r)
> > + return r;
> > + for (i = 0; i < vinst->num_enc_rings; i++) {
> > + r = amdgpu_ring_test_ring(&vinst->ring_enc[i]);
> > + if (r)
> > + return r;
> > + }
> > + amdgpu_fence_driver_force_completion(&vinst->ring_dec);
> > + for (i = 0; i < vinst->num_enc_rings; i++)
> > + amdgpu_fence_driver_force_completion(&vinst->ring_enc[i]);
> > + return 0;
> > +}
> > +
> > static bool vcn_v3_0_is_idle(struct amdgpu_ip_block *ip_block)
> > {
> > struct amdgpu_device *adev = ip_block->adev;
>
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit()
2025-06-17 13:49 ` Alex Deucher
@ 2025-06-18 7:15 ` Christian König
0 siblings, 0 replies; 59+ messages in thread
From: Christian König @ 2025-06-18 7:15 UTC (permalink / raw)
To: Alex Deucher; +Cc: Alex Deucher, amd-gfx, sasundar
On 6/17/25 15:49, Alex Deucher wrote:
> On Tue, Jun 17, 2025 at 9:46 AM Alex Deucher <alexdeucher@gmail.com> wrote:
>>
>> On Tue, Jun 17, 2025 at 7:57 AM Christian König
>> <christian.koenig@amd.com> wrote:
>>>
>>> On 6/17/25 05:07, Alex Deucher wrote:
>>>> What we actually care about is the amdgpu_fence object
>>>> so pass that in explicitly to avoid possible mistakes
>>>> in the future.
>>>>
>>>> The job_run_counter handling can be safely removed at this
>>>> point as we no longer support job resubmission.
>>>>
>>>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>>>> ---
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 36 +++++++++--------------
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 5 +++-
>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 4 +--
>>>> 3 files changed, 20 insertions(+), 25 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>>> index 569e0e5373927..e88848c14491a 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
>>>> @@ -114,14 +114,14 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring)
>>>> *
>>>> * @ring: ring the fence is associated with
>>>> * @f: resulting fence object
>>>> - * @job: job the fence is embedded in
>>>> + * @af: amdgpu fence input
>>>> * @flags: flags to pass into the subordinate .emit_fence() call
>>>> *
>>>> * Emits a fence command on the requested ring (all asics).
>>>> * Returns 0 on success, -ENOMEM on failure.
>>>> */
>>>> -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amdgpu_job *job,
>>>> - unsigned int flags)
>>>> +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
>>>> + struct amdgpu_fence *af, unsigned int flags)
>>>> {
>>>> struct amdgpu_device *adev = ring->adev;
>>>> struct dma_fence *fence;
>>>> @@ -130,36 +130,28 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
>>>> uint32_t seq;
>>>> int r;
>>>>
>>>> - if (job == NULL) {
>>>> - /* create a sperate hw fence */
>>>> + if (!af) {
>>>> + /* create a separate hw fence */
>>>> am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
>>>> if (am_fence == NULL)
>>>> return -ENOMEM;
>>>
>>> I think we should remove the output parameter as well.
>>>
>>> An amdgpu_fence can be trivially allocated by the caller.
>>
>> Is there anything special about amdgpu_job_fence_ops vs
>> amdgpu_fence_ops other than the slab handling? I was worried I was
>> missing something about the fence lifetimes with amdgpu_job_{free,
>> free_cb}.
>
> Specifically this chunk of code is confusing to me:
>
> /* only put the hw fence if has embedded fence */
> if (!job->hw_fence.base.ops)
> kfree(job);
> else
> dma_fence_put(&job->hw_fence.base);
That looks like it's testing whether the HW fence was ever initialized and, based on that, either freeing the job directly or dropping the fence refcount.
As far as I can see, all we need to ensure is that the HW fence is the first member in the job structure.
But somebody should really audit the code and not just make random changes to get their problem solved.
Regards,
Christian.
>
> Alex
>
>>
>> Alex
>>
>>>
>>> Apart from that looks good to me.
>>>
>>> Regards,
>>> Christian.
>>>
>>>> } else {
>>>> - /* take use of job-embedded fence */
>>>> - am_fence = &job->hw_fence;
>>>> + am_fence = af;
>>>> }
>>>> fence = &am_fence->base;
>>>> am_fence->ring = ring;
>>>>
>>>> seq = ++ring->fence_drv.sync_seq;
>>>> - if (job && job->job_run_counter) {
>>>> - /* reinit seq for resubmitted jobs */
>>>> - fence->seqno = seq;
>>>> - /* TO be inline with external fence creation and other drivers */
>>>> + if (af) {
>>>> + dma_fence_init(fence, &amdgpu_job_fence_ops,
>>>> + &ring->fence_drv.lock,
>>>> + adev->fence_context + ring->idx, seq);
>>>> + /* Against remove in amdgpu_job_{free, free_cb} */
>>>> dma_fence_get(fence);
>>>> } else {
>>>> - if (job) {
>>>> - dma_fence_init(fence, &amdgpu_job_fence_ops,
>>>> - &ring->fence_drv.lock,
>>>> - adev->fence_context + ring->idx, seq);
>>>> - /* Against remove in amdgpu_job_{free, free_cb} */
>>>> - dma_fence_get(fence);
>>>> - } else {
>>>> - dma_fence_init(fence, &amdgpu_fence_ops,
>>>> - &ring->fence_drv.lock,
>>>> - adev->fence_context + ring->idx, seq);
>>>> - }
>>>> + dma_fence_init(fence, &amdgpu_fence_ops,
>>>> + &ring->fence_drv.lock,
>>>> + adev->fence_context + ring->idx, seq);
>>>> }
>>>>
>>>> amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>>> index 802743efa3b39..206b70acb29a0 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>>> @@ -128,6 +128,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
>>>> struct amdgpu_device *adev = ring->adev;
>>>> struct amdgpu_ib *ib = &ibs[0];
>>>> struct dma_fence *tmp = NULL;
>>>> + struct amdgpu_fence *af;
>>>> bool need_ctx_switch;
>>>> struct amdgpu_vm *vm;
>>>> uint64_t fence_ctx;
>>>> @@ -154,6 +155,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
>>>> csa_va = job->csa_va;
>>>> gds_va = job->gds_va;
>>>> init_shadow = job->init_shadow;
>>>> + af = &job->hw_fence;
>>>> } else {
>>>> vm = NULL;
>>>> fence_ctx = 0;
>>>> @@ -161,6 +163,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
>>>> csa_va = 0;
>>>> gds_va = 0;
>>>> init_shadow = false;
>>>> + af = NULL;
>>>> }
>>>>
>>>> if (!ring->sched.ready) {
>>>> @@ -282,7 +285,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
>>>> amdgpu_ring_init_cond_exec(ring, ring->cond_exe_gpu_addr);
>>>> }
>>>>
>>>> - r = amdgpu_fence_emit(ring, f, job, fence_flags);
>>>> + r = amdgpu_fence_emit(ring, f, af, fence_flags);
>>>> if (r) {
>>>> dev_err(adev->dev, "failed to emit fence (%d)\n", r);
>>>> if (job && job->vmid)
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
>>>> index e1f25218943a4..9ae522baad8e7 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
>>>> @@ -157,8 +157,8 @@ void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev);
>>>> void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev);
>>>> int amdgpu_fence_driver_sw_init(struct amdgpu_device *adev);
>>>> void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev);
>>>> -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **fence, struct amdgpu_job *job,
>>>> - unsigned flags);
>>>> +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
>>>> + struct amdgpu_fence *af, unsigned int flags);
>>>> int amdgpu_fence_emit_polling(struct amdgpu_ring *ring, uint32_t *s,
>>>> uint32_t timeout);
>>>> bool amdgpu_fence_process(struct amdgpu_ring *ring);
>>>
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [PATCH 36/36] drm/amdgpu/vcn3: implement ring reset
2025-06-17 20:14 ` Alex Deucher
@ 2025-06-18 7:35 ` Sundararaju, Sathishkumar
2025-06-18 14:16 ` Sundararaju, Sathishkumar
0 siblings, 1 reply; 59+ messages in thread
From: Sundararaju, Sathishkumar @ 2025-06-18 7:35 UTC (permalink / raw)
To: Alex Deucher; +Cc: Alex Deucher, amd-gfx, christian.koenig
On 6/18/2025 1:44 AM, Alex Deucher wrote:
> On Tue, Jun 17, 2025 at 4:02 PM Sundararaju, Sathishkumar
> <sasundar@amd.com> wrote:
>> Hi Alex,
>>
>> On 6/17/2025 8:38 AM, Alex Deucher wrote:
>>> Use the new helpers to handle engine resets for VCN.
>>>
>>> Untested.
>>>
>>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 24 ++++++++++++++++++++++++
>>> 1 file changed, 24 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>>> index 9fb0d53805892..ec4d2ab75fc4d 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>>> @@ -110,6 +110,7 @@ static int vcn_v3_0_set_pg_state(struct amdgpu_vcn_inst *vinst,
>>> enum amd_powergating_state state);
>>> static int vcn_v3_0_pause_dpg_mode(struct amdgpu_vcn_inst *vinst,
>>> struct dpg_pause_state *new_state);
>>> +static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst);
>>>
>>> static void vcn_v3_0_dec_ring_set_wptr(struct amdgpu_ring *ring);
>>> static void vcn_v3_0_enc_ring_set_wptr(struct amdgpu_ring *ring);
>>> @@ -289,6 +290,7 @@ static int vcn_v3_0_sw_init(struct amdgpu_ip_block *ip_block)
>>>
>>> if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)
>>> adev->vcn.inst[i].pause_dpg_mode = vcn_v3_0_pause_dpg_mode;
>>> + adev->vcn.inst[i].reset = vcn_v3_0_reset;
>>> }
>>>
>>> if (amdgpu_sriov_vf(adev)) {
>>> @@ -1869,6 +1871,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = {
>>> .emit_wreg = vcn_dec_sw_ring_emit_wreg,
>>> .emit_reg_wait = vcn_dec_sw_ring_emit_reg_wait,
>>> .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
>>> + .reset = amdgpu_vcn_ring_reset,
>> You probably wanted to add the reset callback to vcn_v3_0_enc_ring_vm_funcs
>> instead of vcn_v3_0_dec_sw_ring_vm_funcs.
> I'll fix that up.
>
>> With that, the vcn and jpeg changes in this series are :-
>>
>> Reviewed-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
>> Tested-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
> You mentioned that the start/stop sequence didn't work for some chips.
> What sequence should I use for those?
It is for vcn3 and vcn2 (non unified); I am testing on vcn3.
Your changes as they are work for an encode hang, but failed to reset a
decode hang on vcn3.
The workaround (works for both dec and enc on vcn3) is :-
vcn_v3_0_stop(vinst);
vcn_v3_0_enable_clock_gating(vinst);
vcn_v3_0_enable_static_power_gating(vinst);
vcn_v3_0_start(vinst);
If you are okay with adding the workaround, that would be good until the
firmware also handles this properly or the requirements for resetting the
rings are clarified. Even without it, this is a good first iteration, so
I'll leave it to your decision whether to add it or not.
I have also requested a vcn2 machine from the lab, which I think I will
get by EOD. I am hoping this works on vcn2 as well, since they are
similar; I will keep you updated on the result.
Regards,
Sathish
>
> Alex
>
>> Test exceptions: VCN/JPEG 4_0_3 and VCN/JPEG 5_0_1.
>>
>> Regards,
>> Sathish
>>
>>> };
>>>
>>> static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
>>> @@ -2033,6 +2036,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_ring_vm_funcs = {
>>> .emit_wreg = vcn_v2_0_dec_ring_emit_wreg,
>>> .emit_reg_wait = vcn_v2_0_dec_ring_emit_reg_wait,
>>> .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper,
>>> + .reset = amdgpu_vcn_ring_reset,
>>> };
>>>
>>> /**
>>> @@ -2164,6 +2168,26 @@ static void vcn_v3_0_set_enc_ring_funcs(struct amdgpu_device *adev)
>>> }
>>> }
>>>
>>> +static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst)
>>> +{
>>> + int i, r;
>>> +
>>> + vcn_v3_0_stop(vinst);
>>> + vcn_v3_0_start(vinst);
>>> + r = amdgpu_ring_test_ring(&vinst->ring_dec);
>>> + if (r)
>>> + return r;
>>> + for (i = 0; i < vinst->num_enc_rings; i++) {
>>> + r = amdgpu_ring_test_ring(&vinst->ring_enc[i]);
>>> + if (r)
>>> + return r;
>>> + }
>>> + amdgpu_fence_driver_force_completion(&vinst->ring_dec);
>>> + for (i = 0; i < vinst->num_enc_rings; i++)
>>> + amdgpu_fence_driver_force_completion(&vinst->ring_enc[i]);
>>> + return 0;
>>> +}
>>> +
>>> static bool vcn_v3_0_is_idle(struct amdgpu_ip_block *ip_block)
>>> {
>>> struct amdgpu_device *adev = ip_block->adev;
* Re: [PATCH 36/36] drm/amdgpu/vcn3: implement ring reset
2025-06-18 7:35 ` Sundararaju, Sathishkumar
@ 2025-06-18 14:16 ` Sundararaju, Sathishkumar
0 siblings, 0 replies; 59+ messages in thread
From: Sundararaju, Sathishkumar @ 2025-06-18 14:16 UTC (permalink / raw)
To: Alex Deucher; +Cc: Alex Deucher, amd-gfx, christian.koenig
On 6/18/2025 1:05 PM, Sundararaju, Sathishkumar wrote:
>
>
> On 6/18/2025 1:44 AM, Alex Deucher wrote:
>> On Tue, Jun 17, 2025 at 4:02 PM Sundararaju, Sathishkumar
>> <sasundar@amd.com> wrote:
>>> Hi Alex,
>>>
>>> On 6/17/2025 8:38 AM, Alex Deucher wrote:
>>>> Use the new helpers to handle engine resets for VCN.
>>>>
>>>> Untested.
>>>>
>>>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>>>> ---
>>>> drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 24 ++++++++++++++++++++++++
>>>> 1 file changed, 24 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>>>> b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>>>> index 9fb0d53805892..ec4d2ab75fc4d 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
>>>> @@ -110,6 +110,7 @@ static int vcn_v3_0_set_pg_state(struct
>>>> amdgpu_vcn_inst *vinst,
>>>> enum amd_powergating_state state);
>>>> static int vcn_v3_0_pause_dpg_mode(struct amdgpu_vcn_inst *vinst,
>>>> struct dpg_pause_state *new_state);
>>>> +static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst);
>>>>
>>>> static void vcn_v3_0_dec_ring_set_wptr(struct amdgpu_ring *ring);
>>>> static void vcn_v3_0_enc_ring_set_wptr(struct amdgpu_ring *ring);
>>>> @@ -289,6 +290,7 @@ static int vcn_v3_0_sw_init(struct
>>>> amdgpu_ip_block *ip_block)
>>>>
>>>> if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG)
>>>> adev->vcn.inst[i].pause_dpg_mode =
>>>> vcn_v3_0_pause_dpg_mode;
>>>> + adev->vcn.inst[i].reset = vcn_v3_0_reset;
>>>> }
>>>>
>>>> if (amdgpu_sriov_vf(adev)) {
>>>> @@ -1869,6 +1871,7 @@ static const struct amdgpu_ring_funcs
>>>> vcn_v3_0_dec_sw_ring_vm_funcs = {
>>>> .emit_wreg = vcn_dec_sw_ring_emit_wreg,
>>>> .emit_reg_wait = vcn_dec_sw_ring_emit_reg_wait,
>>>> .emit_reg_write_reg_wait =
>>>> amdgpu_ring_emit_reg_write_reg_wait_helper,
>>>> + .reset = amdgpu_vcn_ring_reset,
>>> You probably wanted to add the reset callback to vcn_v3_0_enc_ring_vm_funcs
>>> instead of vcn_v3_0_dec_sw_ring_vm_funcs.
>> I'll fix that up.
>>
>>> With that, the vcn and jpeg changes in this series are :-
>>>
>>> Reviewed-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
>>> Tested-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
>> You mentioned that the start/stop sequence didn't work for some chips.
>> What sequence should I use for those?
> It is for vcn3 and vcn2 (non unified); I am testing on vcn3.
> Your changes as they are work for an encode hang, but failed to reset a
> decode hang on vcn3.
> The workaround (works for both dec and enc on vcn3) is :-
>
> vcn_v3_0_stop(vinst);
> vcn_v3_0_enable_clock_gating(vinst);
> vcn_v3_0_enable_static_power_gating(vinst);
> vcn_v3_0_start(vinst);
>
> If you are okay with adding the workaround, that would be good until the
> firmware also handles this properly or the requirements for resetting
> the rings are clarified. Even without it, this is a good first
> iteration, so I'll leave it to your decision whether to add it or not.
>
> I have also requested a vcn2 machine from the lab, which I think I will
> get by EOD. I am hoping this works on vcn2 as well, since they are
> similar; I will keep you updated on the result.
I got a vcn2 machine to test; reset isn't working on vcn2 even with the
workaround. Nothing looks consistent w.r.t. vcn non-unified queue reset
functionality.
But having it enabled as it is in V9 will help us debug and work with the
firmware team on this.
Regards,
Sathish
>
> Regards,
> Sathish
>>
>> Alex
>>
>>> Test exceptions: VCN/JPEG 4_0_3 and VCN/JPEG 5_0_1.
>>>
>>> Regards,
>>> Sathish
>>>
>>>> };
>>>>
>>>> static int vcn_v3_0_limit_sched(struct amdgpu_cs_parser *p,
>>>> @@ -2033,6 +2036,7 @@ static const struct amdgpu_ring_funcs
>>>> vcn_v3_0_dec_ring_vm_funcs = {
>>>> .emit_wreg = vcn_v2_0_dec_ring_emit_wreg,
>>>> .emit_reg_wait = vcn_v2_0_dec_ring_emit_reg_wait,
>>>> .emit_reg_write_reg_wait =
>>>> amdgpu_ring_emit_reg_write_reg_wait_helper,
>>>> + .reset = amdgpu_vcn_ring_reset,
>>>> };
>>>>
>>>> /**
>>>> @@ -2164,6 +2168,26 @@ static void
>>>> vcn_v3_0_set_enc_ring_funcs(struct amdgpu_device *adev)
>>>> }
>>>> }
>>>>
>>>> +static int vcn_v3_0_reset(struct amdgpu_vcn_inst *vinst)
>>>> +{
>>>> + int i, r;
>>>> +
>>>> + vcn_v3_0_stop(vinst);
>>>> + vcn_v3_0_start(vinst);
>>>> + r = amdgpu_ring_test_ring(&vinst->ring_dec);
>>>> + if (r)
>>>> + return r;
>>>> + for (i = 0; i < vinst->num_enc_rings; i++) {
>>>> + r = amdgpu_ring_test_ring(&vinst->ring_enc[i]);
>>>> + if (r)
>>>> + return r;
>>>> + }
>>>> + amdgpu_fence_driver_force_completion(&vinst->ring_dec);
>>>> + for (i = 0; i < vinst->num_enc_rings; i++)
>>>> + amdgpu_fence_driver_force_completion(&vinst->ring_enc[i]);
>>>> + return 0;
>>>> +}
>>>> +
>>>> static bool vcn_v3_0_is_idle(struct amdgpu_ip_block *ip_block)
>>>> {
>>>> struct amdgpu_device *adev = ip_block->adev;
>
* Re: [PATCH 13/36] drm/amdgpu: track ring state associated with a job
2025-06-17 3:07 ` [PATCH 13/36] drm/amdgpu: track ring state associated with a job Alex Deucher
@ 2025-06-18 14:53 ` Christian König
0 siblings, 0 replies; 59+ messages in thread
From: Christian König @ 2025-06-18 14:53 UTC (permalink / raw)
To: Alex Deucher, amd-gfx, sasundar
On 6/17/25 05:07, Alex Deucher wrote:
> We need to know the wptr and sequence number associated
> with a job so that we can re-emit the unprocessed state
I suggest replacing "job" with "fence" here and in the subject line.
> after a ring reset. Pre-allocate storage space for
> the ring buffer contents and add helpers to save off
> and re-emit the unprocessed state so that it can be
> re-emitted after the queue is reset.
>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 90 +++++++++++++++++++++++
> drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 14 +++-
> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 4 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 59 +++++++++++++++
> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 17 +++++
> 5 files changed, 181 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index 5555f3ae08c60..b8d51ee60adcc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -120,11 +120,13 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> am_fence = kmalloc(sizeof(*am_fence), GFP_KERNEL);
> if (!am_fence)
> return -ENOMEM;
> + am_fence->context = 0;
> } else {
> am_fence = af;
> }
> fence = &am_fence->base;
> am_fence->ring = ring;
> + am_fence->wptr = 0;
>
> seq = ++ring->fence_drv.sync_seq;
> if (af) {
> @@ -253,6 +255,7 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
>
> do {
> struct dma_fence *fence, **ptr;
> + struct amdgpu_fence *am_fence;
>
> ++last_seq;
> last_seq &= drv->num_fences_mask;
> @@ -265,6 +268,9 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
> if (!fence)
> continue;
>
> + am_fence = container_of(fence, struct amdgpu_fence, base);
> + if (am_fence->wptr)
> + drv->last_wptr = am_fence->wptr;
> dma_fence_signal(fence);
> dma_fence_put(fence);
> pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> @@ -725,6 +731,90 @@ void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring)
> amdgpu_fence_process(ring);
> }
>
> +/**
> + * amdgpu_fence_driver_guilty_force_completion - force signal of specified sequence
> + *
> + * @fence: fence of the ring to signal
> + *
> + */
> +void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *fence)
> +{
> + dma_fence_set_error(&fence->base, -ETIME);
> + amdgpu_fence_write(fence->ring, fence->base.seqno);
> + amdgpu_fence_process(fence->ring);
> +}
> +
> +void amdgpu_fence_save_wptr(struct dma_fence *fence)
> +{
> + struct amdgpu_fence *am_fence = container_of(fence, struct amdgpu_fence, base);
> +
> + am_fence->wptr = am_fence->ring->wptr;
> +}
> +
> +static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring,
> + unsigned int idx,
> + u64 start_wptr, u32 end_wptr)
> +{
> + unsigned int first_idx = start_wptr & ring->buf_mask;
> + unsigned int last_idx = end_wptr & ring->buf_mask;
> + unsigned int i, j, entries_to_copy;
> +
> + if (last_idx < first_idx) {
> + entries_to_copy = ring->buf_mask + 1 - first_idx;
> + for (i = 0; i < entries_to_copy; i++)
> + ring->ring_backup[idx + i] = ring->ring[first_idx + i];
> + ring->ring_backup_entries_to_copy += entries_to_copy;
> + entries_to_copy = last_idx;
> + for (j = 0; j < entries_to_copy; j++)
> + ring->ring_backup[idx + i + j] = ring->ring[j];
> + ring->ring_backup_entries_to_copy += entries_to_copy;
> + } else {
> + entries_to_copy = last_idx - first_idx;
> + for (i = 0; i < entries_to_copy; i++)
> + ring->ring_backup[idx + i] = ring->ring[first_idx + i];
> + ring->ring_backup_entries_to_copy += entries_to_copy;
> + }
That took me a moment to understand. Why not simplify it to something like this:
unsigned int i, count;
for (i = first_idx, count = 0; i != last_idx; ++i, i &= ring->buf_mask, ++count)
ring->ring_backup_entries[idx++] = ring->ring[i];
ring->ring_backup_entries_to_copy += count;
I need to take a closer look at all the details, and we should probably throw in some documentation here and there.
Regards,
Christian.
> +}
> +
> +void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
> + struct amdgpu_fence *guilty_fence)
> +{
> + struct amdgpu_fence *fence;
> + struct dma_fence *unprocessed, **ptr;
> + u64 wptr, i, seqno;
> +
> + if (guilty_fence) {
> + seqno = guilty_fence->base.seqno;
> + wptr = guilty_fence->wptr;
> + } else {
> + seqno = amdgpu_fence_read(ring);
> + wptr = ring->fence_drv.last_wptr;
> + }
> + ring->ring_backup_entries_to_copy = 0;
> + for (i = seqno + 1; i <= ring->fence_drv.sync_seq; ++i) {
> + ptr = &ring->fence_drv.fences[i & ring->fence_drv.num_fences_mask];
> + rcu_read_lock();
> + unprocessed = rcu_dereference(*ptr);
> +
> + if (unprocessed && !dma_fence_is_signaled(unprocessed)) {
> + fence = container_of(unprocessed, struct amdgpu_fence, base);
> +
> + /* save everything if the ring is not guilty, otherwise
> + * just save the content from other contexts.
> + */
> + if (fence->wptr &&
> + (!guilty_fence || (fence->context != guilty_fence->context))) {
> + amdgpu_ring_backup_unprocessed_command(ring,
> + ring->ring_backup_entries_to_copy,
> + wptr,
> + fence->wptr);
> + wptr = fence->wptr;
> + }
> + }
> + rcu_read_unlock();
> + }
> +}
> +
> /*
> * Common fence implementation
> */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> index 206b70acb29a0..4e6a598043df8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> @@ -139,7 +139,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> int vmid = AMDGPU_JOB_GET_VMID(job);
> bool need_pipe_sync = false;
> unsigned int cond_exec;
> -
> unsigned int i;
> int r = 0;
>
> @@ -156,6 +155,12 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> gds_va = job->gds_va;
> init_shadow = job->init_shadow;
> af = &job->hw_fence;
> + if (job->base.s_fence) {
> + struct dma_fence *finished = &job->base.s_fence->finished;
> + af->context = finished->context;
> + } else {
> + af->context = 0;
> + }
> } else {
> vm = NULL;
> fence_ctx = 0;
> @@ -309,6 +314,13 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
>
> amdgpu_ring_ib_end(ring);
> amdgpu_ring_commit(ring);
> +
> + /* This must be last for resets to work properly
> + * as we need to save the wptr associated with this
> + * fence.
> + */
> + amdgpu_fence_save_wptr(*f);
> +
> return 0;
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index f0b7080dccb8d..45febdc2f3493 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -89,8 +89,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> {
> struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
> struct amdgpu_job *job = to_amdgpu_job(s_job);
> - struct amdgpu_task_info *ti;
> struct amdgpu_device *adev = ring->adev;
> + struct amdgpu_task_info *ti;
> int idx, r;
>
> if (!drm_dev_enter(adev_to_drm(adev), &idx)) {
> @@ -135,7 +135,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> } else if (amdgpu_gpu_recovery && ring->funcs->reset) {
> dev_err(adev->dev, "Starting %s ring reset\n",
> s_job->sched->name);
> - r = amdgpu_ring_reset(ring, job->vmid, NULL);
> + r = amdgpu_ring_reset(ring, job->vmid, &job->hw_fence);
> if (!r) {
> atomic_inc(&ring->adev->gpu_reset_counter);
> dev_err(adev->dev, "Ring %s reset succeeded\n",
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> index 426834806fbf2..0985eba010e17 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> @@ -333,6 +333,12 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
> /* Initialize cached_rptr to 0 */
> ring->cached_rptr = 0;
>
> + if (!ring->ring_backup) {
> + ring->ring_backup = kvzalloc(ring->ring_size, GFP_KERNEL);
> + if (!ring->ring_backup)
> + return -ENOMEM;
> + }
> +
> /* Allocate ring buffer */
> if (ring->ring_obj == NULL) {
> r = amdgpu_bo_create_kernel(adev, ring->ring_size + ring->funcs->extra_dw, PAGE_SIZE,
> @@ -342,6 +348,7 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
> (void **)&ring->ring);
> if (r) {
> dev_err(adev->dev, "(%d) ring create failed\n", r);
> + kvfree(ring->ring_backup);
> return r;
> }
> amdgpu_ring_clear_ring(ring);
> @@ -385,6 +392,8 @@ void amdgpu_ring_fini(struct amdgpu_ring *ring)
> amdgpu_bo_free_kernel(&ring->ring_obj,
> &ring->gpu_addr,
> (void **)&ring->ring);
> + kvfree(ring->ring_backup);
> + ring->ring_backup = NULL;
>
> dma_fence_put(ring->vmid_wait);
> ring->vmid_wait = NULL;
> @@ -753,3 +762,53 @@ bool amdgpu_ring_sched_ready(struct amdgpu_ring *ring)
>
> return true;
> }
> +
> +static int amdgpu_ring_reemit_unprocessed_commands(struct amdgpu_ring *ring)
> +{
> + unsigned int i;
> + int r;
> +
> + /* re-emit the unprocessed ring contents */
> + if (ring->ring_backup_entries_to_copy) {
> + r = amdgpu_ring_alloc(ring, ring->ring_backup_entries_to_copy);
> + if (r)
> + return r;
> + for (i = 0; i < ring->ring_backup_entries_to_copy; i++)
> + amdgpu_ring_write(ring, ring->ring_backup[i]);
> + amdgpu_ring_commit(ring);
> + }
> +
> + return 0;
> +}
> +
> +void amdgpu_ring_reset_helper_begin(struct amdgpu_ring *ring,
> + struct amdgpu_fence *guilty_fence)
> +{
> + /* Stop the scheduler to prevent anybody else from touching the ring buffer. */
> + drm_sched_wqueue_stop(&ring->sched);
> + /* back up the non-guilty commands */
> + amdgpu_ring_backup_unprocessed_commands(ring, guilty_fence);
> +}
> +
> +int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring,
> + struct amdgpu_fence *guilty_fence)
> +{
> + int r;
> +
> + /* verify that the ring is functional */
> + r = amdgpu_ring_test_ring(ring);
> + if (r)
> + return r;
> +
> + /* signal the fence of the bad job */
> + if (guilty_fence)
> + amdgpu_fence_driver_guilty_force_completion(guilty_fence);
> + /* Re-emit the non-guilty commands */
> + r = amdgpu_ring_reemit_unprocessed_commands(ring);
> + if (r)
> + /* if we fail to reemit, force complete all fences */
> + amdgpu_fence_driver_force_completion(ring);
> + /* Start the scheduler again */
> + drm_sched_wqueue_start(&ring->sched);
> + return 0;
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index 6aaa9d0c1f25c..dcf20adda2f36 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -118,6 +118,7 @@ struct amdgpu_fence_driver {
> /* sync_seq is protected by ring emission lock */
> uint32_t sync_seq;
> atomic_t last_seq;
> + u64 last_wptr;
> bool initialized;
> struct amdgpu_irq_src *irq_src;
> unsigned irq_type;
> @@ -141,6 +142,11 @@ struct amdgpu_fence {
> /* RB, DMA, etc. */
> struct amdgpu_ring *ring;
> ktime_t start_timestamp;
> +
> + /* wptr for the fence for resets */
> + u64 wptr;
> + /* fence context for resets */
> + u64 context;
> };
>
> extern const struct drm_sched_backend_ops amdgpu_sched_ops;
> @@ -148,6 +154,8 @@ extern const struct drm_sched_backend_ops amdgpu_sched_ops;
> void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
> void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error);
> void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);
> +void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *fence);
> +void amdgpu_fence_save_wptr(struct dma_fence *fence);
>
> int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring);
> int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
> @@ -284,6 +292,9 @@ struct amdgpu_ring {
>
> struct amdgpu_bo *ring_obj;
> uint32_t *ring;
> + /* backups for resets */
> + uint32_t *ring_backup;
> + unsigned int ring_backup_entries_to_copy;
> unsigned rptr_offs;
> u64 rptr_gpu_addr;
> volatile u32 *rptr_cpu_addr;
> @@ -550,4 +561,10 @@ int amdgpu_ib_pool_init(struct amdgpu_device *adev);
> void amdgpu_ib_pool_fini(struct amdgpu_device *adev);
> int amdgpu_ib_ring_tests(struct amdgpu_device *adev);
> bool amdgpu_ring_sched_ready(struct amdgpu_ring *ring);
> +void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
> + struct amdgpu_fence *guilty_fence);
> +void amdgpu_ring_reset_helper_begin(struct amdgpu_ring *ring,
> + struct amdgpu_fence *guilty_fence);
> +int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring,
> + struct amdgpu_fence *guilty_fence);
> #endif
* Re: [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit()
2025-06-17 11:44 ` Christian König
2025-06-17 13:46 ` Alex Deucher
@ 2025-06-18 22:32 ` Alex Deucher
1 sibling, 0 replies; 59+ messages in thread
From: Alex Deucher @ 2025-06-18 22:32 UTC (permalink / raw)
To: Christian König; +Cc: Alex Deucher, amd-gfx, sasundar
On Tue, Jun 17, 2025 at 7:57 AM Christian König
<christian.koenig@amd.com> wrote:
>
> On 6/17/25 05:07, Alex Deucher wrote:
> > What we actually care about is the amdgpu_fence object
> > so pass that in explicitly to avoid possible mistakes
> > in the future.
> >
> > The job_run_counter handling can be safely removed at this
> > point as we no longer support job resubmission.
> >
> > Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 36 +++++++++--------------
> > drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 5 +++-
> > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 4 +--
> > 3 files changed, 20 insertions(+), 25 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > index 569e0e5373927..e88848c14491a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> > @@ -114,14 +114,14 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring)
> > *
> > * @ring: ring the fence is associated with
> > * @f: resulting fence object
> > - * @job: job the fence is embedded in
> > + * @af: amdgpu fence input
> > * @flags: flags to pass into the subordinate .emit_fence() call
> > *
> > * Emits a fence command on the requested ring (all asics).
> > * Returns 0 on success, -ENOMEM on failure.
> > */
> > -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amdgpu_job *job,
> > - unsigned int flags)
> > +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> > + struct amdgpu_fence *af, unsigned int flags)
> > {
> > struct amdgpu_device *adev = ring->adev;
> > struct dma_fence *fence;
> > @@ -130,36 +130,28 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
> > uint32_t seq;
> > int r;
> >
> > - if (job == NULL) {
> > - /* create a sperate hw fence */
> > + if (!af) {
> > + /* create a separate hw fence */
> > am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
> > if (am_fence == NULL)
> > return -ENOMEM;
>
> I think we should remove the output parameter as well.
>
> An amdgpu_fence can be trivially allocated by the caller.
I think we should either take this patch as is or just drop it. It's
looking to be non-trivial to clean this up further. We still need
some sort of parameter to determine whether to use
amdgpu_job_fence_ops or amdgpu_fence_ops. Removing those is
non-trivial, as there are all sorts of corner cases for fence
lifetimes. Because the fence is part of the amdgpu_job structure, we
need to make sure the fence lifetime aligns with the job lifetime,
since the fence release also frees the job structure. There is also
special handling in amdgpu_fence_driver_clear_job_fences() to handle
memory leaks in failed IB tests. It's definitely a worthwhile cleanup
that needs to be done, but it's looking like a whole project on its own.
Alex
>
> Apart from that looks good to me.
>
> Regards,
> Christian.
>
> > } else {
> > - /* take use of job-embedded fence */
> > - am_fence = &job->hw_fence;
> > + am_fence = af;
> > }
> > fence = &am_fence->base;
> > am_fence->ring = ring;
> >
> > seq = ++ring->fence_drv.sync_seq;
> > - if (job && job->job_run_counter) {
> > - /* reinit seq for resubmitted jobs */
> > - fence->seqno = seq;
> > - /* TO be inline with external fence creation and other drivers */
> > + if (af) {
> > + dma_fence_init(fence, &amdgpu_job_fence_ops,
> > + &ring->fence_drv.lock,
> > + adev->fence_context + ring->idx, seq);
> > + /* Against remove in amdgpu_job_{free, free_cb} */
> > dma_fence_get(fence);
> > } else {
> > - if (job) {
> > - dma_fence_init(fence, &amdgpu_job_fence_ops,
> > - &ring->fence_drv.lock,
> > - adev->fence_context + ring->idx, seq);
> > - /* Against remove in amdgpu_job_{free, free_cb} */
> > - dma_fence_get(fence);
> > - } else {
> > - dma_fence_init(fence, &amdgpu_fence_ops,
> > - &ring->fence_drv.lock,
> > - adev->fence_context + ring->idx, seq);
> > - }
> > + dma_fence_init(fence, &amdgpu_fence_ops,
> > + &ring->fence_drv.lock,
> > + adev->fence_context + ring->idx, seq);
> > }
> >
> > amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > index 802743efa3b39..206b70acb29a0 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> > @@ -128,6 +128,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > struct amdgpu_device *adev = ring->adev;
> > struct amdgpu_ib *ib = &ibs[0];
> > struct dma_fence *tmp = NULL;
> > + struct amdgpu_fence *af;
> > bool need_ctx_switch;
> > struct amdgpu_vm *vm;
> > uint64_t fence_ctx;
> > @@ -154,6 +155,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > csa_va = job->csa_va;
> > gds_va = job->gds_va;
> > init_shadow = job->init_shadow;
> > + af = &job->hw_fence;
> > } else {
> > vm = NULL;
> > fence_ctx = 0;
> > @@ -161,6 +163,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > csa_va = 0;
> > gds_va = 0;
> > init_shadow = false;
> > + af = NULL;
> > }
> >
> > if (!ring->sched.ready) {
> > @@ -282,7 +285,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
> > amdgpu_ring_init_cond_exec(ring, ring->cond_exe_gpu_addr);
> > }
> >
> > - r = amdgpu_fence_emit(ring, f, job, fence_flags);
> > + r = amdgpu_fence_emit(ring, f, af, fence_flags);
> > if (r) {
> > dev_err(adev->dev, "failed to emit fence (%d)\n", r);
> > if (job && job->vmid)
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > index e1f25218943a4..9ae522baad8e7 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> > @@ -157,8 +157,8 @@ void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev);
> > void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev);
> > int amdgpu_fence_driver_sw_init(struct amdgpu_device *adev);
> > void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev);
> > -int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **fence, struct amdgpu_job *job,
> > - unsigned flags);
> > +int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
> > + struct amdgpu_fence *af, unsigned int flags);
> > int amdgpu_fence_emit_polling(struct amdgpu_ring *ring, uint32_t *s,
> > uint32_t timeout);
> > bool amdgpu_fence_process(struct amdgpu_ring *ring);
>
Thread overview: 59+ messages
2025-06-17 3:07 [PATCH V9 00/36] Reset improvements Alex Deucher
2025-06-17 3:07 ` [PATCH 01/36] drm/amdgpu: switch job hw_fence to amdgpu_fence Alex Deucher
2025-06-17 9:42 ` Christian König
2025-06-17 3:07 ` [PATCH 02/36] drm/amdgpu: remove job parameter from amdgpu_fence_emit() Alex Deucher
2025-06-17 11:44 ` Christian König
2025-06-17 13:46 ` Alex Deucher
2025-06-17 13:49 ` Alex Deucher
2025-06-18 7:15 ` Christian König
2025-06-18 22:32 ` Alex Deucher
2025-06-17 3:07 ` [PATCH 03/36] drm/amdgpu: remove fence slab Alex Deucher
2025-06-17 11:49 ` Christian König
2025-06-17 3:07 ` [PATCH 04/36] drm/amdgpu: enable legacy enforce isolation by default Alex Deucher
2025-06-17 3:07 ` [PATCH 05/36] drm/amdgpu/sdma5.x: suspend KFD queues in ring reset Alex Deucher
2025-06-17 3:07 ` [PATCH 06/36] drm/amdgpu/sdma5: init engine reset mutex Alex Deucher
2025-06-17 5:50 ` Zhang, Jesse(Jie)
2025-06-17 6:09 ` Zhang, Jesse(Jie)
2025-06-17 11:50 ` Christian König
2025-06-17 3:07 ` [PATCH 07/36] drm/amdgpu/sdma5.2: " Alex Deucher
2025-06-17 6:08 ` Zhang, Jesse(Jie)
2025-06-17 3:07 ` [PATCH 08/36] drm/amdgpu: update ring reset function signature Alex Deucher
2025-06-17 12:20 ` Christian König
2025-06-17 3:07 ` [PATCH 09/36] drm/amdgpu: rework queue reset scheduler interaction Alex Deucher
2025-06-17 3:07 ` [PATCH 10/36] drm/amdgpu: move force completion into ring resets Alex Deucher
2025-06-17 3:07 ` [PATCH 11/36] drm/amdgpu: move guilty handling " Alex Deucher
2025-06-17 12:28 ` Christian König
2025-06-17 3:07 ` [PATCH 12/36] drm/amdgpu: move scheduler wqueue handling into callbacks Alex Deucher
2025-06-17 3:07 ` [PATCH 13/36] drm/amdgpu: track ring state associated with a job Alex Deucher
2025-06-18 14:53 ` Christian König
2025-06-17 3:07 ` [PATCH 14/36] drm/amdgpu/gfx9: re-emit unprocessed state on kcq reset Alex Deucher
2025-06-17 3:07 ` [PATCH 15/36] drm/amdgpu/gfx9.4.3: " Alex Deucher
2025-06-17 3:07 ` [PATCH 16/36] drm/amdgpu/gfx10: re-emit unprocessed state on ring reset Alex Deucher
2025-06-17 3:07 ` [PATCH 17/36] drm/amdgpu/gfx11: " Alex Deucher
2025-06-17 3:07 ` [PATCH 18/36] drm/amdgpu/gfx12: " Alex Deucher
2025-06-17 3:07 ` [PATCH 19/36] drm/amdgpu/sdma6: " Alex Deucher
2025-06-17 3:07 ` [PATCH 20/36] drm/amdgpu/sdma7: " Alex Deucher
2025-06-17 3:08 ` [PATCH 21/36] drm/amdgpu/jpeg2: " Alex Deucher
2025-06-17 3:08 ` [PATCH 22/36] drm/amdgpu/jpeg2.5: " Alex Deucher
2025-06-17 3:08 ` [PATCH 23/36] drm/amdgpu/jpeg3: " Alex Deucher
2025-06-17 3:08 ` [PATCH 24/36] drm/amdgpu/jpeg4: " Alex Deucher
2025-06-17 3:08 ` [PATCH 25/36] drm/amdgpu/jpeg4.0.3: " Alex Deucher
2025-06-17 3:08 ` [PATCH 26/36] drm/amdgpu/jpeg4.0.5: add queue reset Alex Deucher
2025-06-17 3:08 ` [PATCH 27/36] drm/amdgpu/jpeg5: " Alex Deucher
2025-06-17 3:08 ` [PATCH 28/36] drm/amdgpu/jpeg5.0.1: re-emit unprocessed state on ring reset Alex Deucher
2025-06-17 3:08 ` [PATCH 29/36] drm/amdgpu/vcn4: " Alex Deucher
2025-06-17 3:08 ` [PATCH 30/36] drm/amdgpu/vcn4.0.3: " Alex Deucher
2025-06-17 3:08 ` [PATCH 31/36] drm/amdgpu/vcn4.0.5: " Alex Deucher
2025-06-17 3:08 ` [PATCH 32/36] drm/amdgpu/vcn5: " Alex Deucher
2025-06-17 3:08 ` [PATCH 33/36] drm/amdgpu/vcn: add a helper framework for engine resets Alex Deucher
2025-06-17 4:30 ` Sundararaju, Sathishkumar
2025-06-17 6:10 ` Sundararaju, Sathishkumar
2025-06-17 13:09 ` Alex Deucher
2025-06-17 16:49 ` Sundararaju, Sathishkumar
2025-06-17 3:08 ` [PATCH 34/36] drm/amdgpu/vcn2: implement ring reset Alex Deucher
2025-06-17 3:08 ` [PATCH 35/36] drm/amdgpu/vcn2.5: " Alex Deucher
2025-06-17 3:08 ` [PATCH 36/36] drm/amdgpu/vcn3: " Alex Deucher
2025-06-17 19:22 ` Sundararaju, Sathishkumar
2025-06-17 20:14 ` Alex Deucher
2025-06-18 7:35 ` Sundararaju, Sathishkumar
2025-06-18 14:16 ` Sundararaju, Sathishkumar