* [PATCH 00/10] Improvements for IB handling V3
@ 2026-01-16 16:20 Alex Deucher
2026-01-16 16:20 ` [PATCH 01/10] drm/amdgpu: fix type for wptr in ring backup Alex Deucher
` (9 more replies)
0 siblings, 10 replies; 20+ messages in thread
From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw)
To: amd-gfx; +Cc: Alex Deucher
This set contains a number of bug fixes and cleanups for
IB handling that I worked on over the holidays. The first
three patches from V1 are already reviewed, so I didn't
include them in V2 or V3.
Patches 1-2:
Standalone fix and cleanup
Patches 3-5:
Removes the direct submit path for IBs and requires
that all IB submissions use a job structure. This
greatly simplifies the IB submission code. V2 uses
GFP_ATOMIC when in reset. V3 squashes all of the
IP changes into one patch. I'm not sure there is much
value in breaking this out per IP.
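
For reference, the resulting pattern in the ring test functions
looks roughly like the snippet below (a sketch only, lifted from
the patch 4 hunks; the IB size and the job id constant vary per IP):

	struct amdgpu_job *job;
	struct amdgpu_ib *ib;
	struct dma_fence *f = NULL;
	int r;

	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256,
				     AMDGPU_IB_POOL_DIRECT, &job,
				     AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST);
	if (r)
		return r;
	ib = &job->ibs[0];

	/* fill ib->ptr[] with the test packets and set ib->length_dw */

	r = amdgpu_job_submit_direct(job, ring, &f);
	if (r) {
		/* the job owns the IB, so free the job on submit failure */
		amdgpu_job_free(job);
		return r;
	}

	/* wait on f, check the result, then dma_fence_put(f) */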
Patches 6-9:
Improvements for adapter resets. Stop calling
drm_sched_stop/start(). Just stop/start the workqueues.
I can't see why we need this at all; per queue reset
doesn't use this. Stop calling drm_sched_increase_karma(),
which seems to result in the jobs always getting marked
as innocent and prevents the subsequent fix for marking
the job as timedout from working. Per queue resets don't
call this and they work correctly. Properly set the
error on the timedout fence so user space sees it
as guilty. These changes also resulted in a small
cleanup of the VCN reset helper.
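
Roughly, the timedout-fence change boils down to something like the
sketch below. This is a simplification rather than the exact code
from patch 8, and the -ETIME value is an assumption on my part:

	/* mark the fence that actually timed out as guilty before it is
	 * force-completed, so user space sees an error rather than a
	 * successful wait (-ETIME is an assumed error code here)
	 */
	if (timedout_fence)
		dma_fence_set_error(&timedout_fence->base, -ETIME);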
Patch 10:
Rework the backup and reemit code for per ring reset
so that we can safely reemit repeatedly. This removes
the single reemit limit currently in place.
This drops the new proposed reemit framework from V1
and V2 and sticks with saving and restoring the ring
content.
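
For context, the backup path already walks the ring between two saved
wptr values masked by ring->buf_mask (see patch 1), and the rework
keeps that model. A minimal sketch of the copy-out step follows; the
actual helper may differ, and the ring_backup[] field name is an
assumption (only ring_backup_entries_to_copy appears in the hunks in
this series):

	static void backup_unprocessed_commands(struct amdgpu_ring *ring,
						u64 start_wptr, u64 end_wptr)
	{
		unsigned int i = start_wptr & ring->buf_mask;
		unsigned int last = end_wptr & ring->buf_mask;

		while (i != last) {
			/* ring_backup[] is an assumed name for the backup buffer */
			ring->ring_backup[ring->ring_backup_entries_to_copy++] =
				ring->ring[i];
			i = (i + 1) & ring->buf_mask;
		}
	}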
Git tree available as well:
https://gitlab.freedesktop.org/agd5f/linux/-/commits/ib_improvements2?ref_type=heads
Alex Deucher (10):
drm/amdgpu: fix type for wptr in ring backup
drm/amdgpu: rename amdgpu_fence_driver_guilty_force_completion()
drm/amdgpu/job: use GFP_ATOMIC while in gpu reset
drm/amdgpu: switch all IPs to using job for IBs
drm/amdgpu: require a job to schedule an IB
drm/amdgpu: don't call drm_sched_stop/start() in asic reset
drm/amdgpu: drop drm_sched_increase_karma()
drm/amdgpu: plumb timedout fence through to force completion
drm/amdgpu: simplify VCN reset helper
drm/amdgpu: rework ring reset backup and reemit
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 13 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 124 +++++++++--------
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 117 +++++++---------
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 14 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 3 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 37 +-----
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 26 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c | 4 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 52 +++-----
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c | 37 +++---
drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 31 +++--
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 29 ++--
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 29 ++--
drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 29 ++--
drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c | 29 ++--
drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 24 ++--
drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 25 ++--
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 139 ++++++++++----------
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 138 +++++++++----------
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c | 26 ++--
drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 29 ++--
drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c | 38 +++---
drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 38 +++---
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 38 +++---
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 38 +++---
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 37 +++---
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 36 ++---
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 36 ++---
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 36 ++---
drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c | 36 ++---
drivers/gpu/drm/amd/amdgpu/si_dma.c | 29 ++--
36 files changed, 668 insertions(+), 662 deletions(-)
--
2.52.0
^ permalink raw reply [flat|nested] 20+ messages in thread* [PATCH 01/10] drm/amdgpu: fix type for wptr in ring backup 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-19 12:18 ` Christian König 2026-01-16 16:20 ` [PATCH 02/10] drm/amdgpu: rename amdgpu_fence_driver_guilty_force_completion() Alex Deucher ` (8 subsequent siblings) 9 siblings, 1 reply; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher Needs to be a u64. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index 3a23cce5f769a..f2f0288b7dce4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -764,7 +764,7 @@ void amdgpu_fence_save_wptr(struct amdgpu_fence *af) } static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring, - u64 start_wptr, u32 end_wptr) + u64 start_wptr, u64 end_wptr) { unsigned int first_idx = start_wptr & ring->buf_mask; unsigned int last_idx = end_wptr & ring->buf_mask; -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH 01/10] drm/amdgpu: fix type for wptr in ring backup 2026-01-16 16:20 ` [PATCH 01/10] drm/amdgpu: fix type for wptr in ring backup Alex Deucher @ 2026-01-19 12:18 ` Christian König 0 siblings, 0 replies; 20+ messages in thread From: Christian König @ 2026-01-19 12:18 UTC (permalink / raw) To: Alex Deucher, amd-gfx On 1/16/26 17:20, Alex Deucher wrote: > Needs to be a u64. > > Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c > index 3a23cce5f769a..f2f0288b7dce4 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c > @@ -764,7 +764,7 @@ void amdgpu_fence_save_wptr(struct amdgpu_fence *af) > } > > static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring, > - u64 start_wptr, u32 end_wptr) > + u64 start_wptr, u64 end_wptr) > { > unsigned int first_idx = start_wptr & ring->buf_mask; > unsigned int last_idx = end_wptr & ring->buf_mask; ^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 02/10] drm/amdgpu: rename amdgpu_fence_driver_guilty_force_completion() 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher 2026-01-16 16:20 ` [PATCH 01/10] drm/amdgpu: fix type for wptr in ring backup Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-19 12:22 ` Christian König 2026-01-16 16:20 ` [PATCH 03/10] drm/amdgpu/job: use GFP_ATOMIC while in gpu reset Alex Deucher ` (7 subsequent siblings) 9 siblings, 1 reply; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher The function no longer signals the fence so rename it to better match what it does. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 6 +++--- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 4 ++-- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index f2f0288b7dce4..c7a2dff33d80b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -709,12 +709,12 @@ void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring) */ /** - * amdgpu_fence_driver_guilty_force_completion - force signal of specified sequence + * amdgpu_fence_driver_update_timedout_fence_state - Update fence state and set errors * - * @af: fence of the ring to signal + * @af: fence of the ring to update * */ -void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *af) +void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af) { struct dma_fence *unprocessed; struct dma_fence __rcu **ptr; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c index 600e6bb98af7a..b82357c657237 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c @@ -885,9 +885,9 @@ int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring, if (r) return r; - /* signal the guilty fence and set an error on all fences from the context */ + /* set an error on all fences from the context */ if (guilty_fence) - amdgpu_fence_driver_guilty_force_completion(guilty_fence); + amdgpu_fence_driver_update_timedout_fence_state(guilty_fence); /* Re-emit the non-guilty commands */ if (ring->ring_backup_entries_to_copy) { amdgpu_ring_alloc_reemit(ring, ring->ring_backup_entries_to_copy); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h index 87c9df6c2ecfe..cb0fb1a989d2f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h @@ -161,7 +161,7 @@ extern const struct drm_sched_backend_ops amdgpu_sched_ops; void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error); void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring); -void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *af); +void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af); void amdgpu_fence_save_wptr(struct amdgpu_fence *af); int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring); -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH 02/10] drm/amdgpu: rename amdgpu_fence_driver_guilty_force_completion() 2026-01-16 16:20 ` [PATCH 02/10] drm/amdgpu: rename amdgpu_fence_driver_guilty_force_completion() Alex Deucher @ 2026-01-19 12:22 ` Christian König 0 siblings, 0 replies; 20+ messages in thread From: Christian König @ 2026-01-19 12:22 UTC (permalink / raw) To: Alex Deucher, amd-gfx On 1/16/26 17:20, Alex Deucher wrote: > The function no longer signals the fence so rename it to > better match what it does. > > Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 6 +++--- > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 4 ++-- > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 2 +- > 3 files changed, 6 insertions(+), 6 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c > index f2f0288b7dce4..c7a2dff33d80b 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c > @@ -709,12 +709,12 @@ void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring) > */ > > /** > - * amdgpu_fence_driver_guilty_force_completion - force signal of specified sequence > + * amdgpu_fence_driver_update_timedout_fence_state - Update fence state and set errors > * > - * @af: fence of the ring to signal > + * @af: fence of the ring to update > * > */ > -void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *af) > +void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af) > { > struct dma_fence *unprocessed; > struct dma_fence __rcu **ptr; > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > index 600e6bb98af7a..b82357c657237 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > @@ -885,9 +885,9 @@ int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring, > if (r) > return r; > > - /* signal the guilty fence and set an error on all fences from the context */ > + /* set an error on all fences from the context */ > if (guilty_fence) > - amdgpu_fence_driver_guilty_force_completion(guilty_fence); > + amdgpu_fence_driver_update_timedout_fence_state(guilty_fence); > /* Re-emit the non-guilty commands */ > if (ring->ring_backup_entries_to_copy) { > amdgpu_ring_alloc_reemit(ring, ring->ring_backup_entries_to_copy); > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > index 87c9df6c2ecfe..cb0fb1a989d2f 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > @@ -161,7 +161,7 @@ extern const struct drm_sched_backend_ops amdgpu_sched_ops; > > void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error); > void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring); > -void amdgpu_fence_driver_guilty_force_completion(struct amdgpu_fence *af); > +void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af); > void amdgpu_fence_save_wptr(struct amdgpu_fence *af); > > int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring); ^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 03/10] drm/amdgpu/job: use GFP_ATOMIC while in gpu reset 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher 2026-01-16 16:20 ` [PATCH 01/10] drm/amdgpu: fix type for wptr in ring backup Alex Deucher 2026-01-16 16:20 ` [PATCH 02/10] drm/amdgpu: rename amdgpu_fence_driver_guilty_force_completion() Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-19 12:27 ` Christian König 2026-01-16 16:20 ` [PATCH 04/10] drm/amdgpu: switch all IPs to using job for IBs Alex Deucher ` (6 subsequent siblings) 9 siblings, 1 reply; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher If we need to allocate a job during GPU reset, use GFP_ATOMIC rather than GFP_KERNEL. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 ++++++--- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 3 ++- drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c | 6 ++++-- 4 files changed, 13 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c index 72ec455fa932c..136e50de712a0 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c @@ -68,7 +68,7 @@ int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm, int r; if (size) { - r = amdgpu_sa_bo_new(&adev->ib_pools[pool_type], + r = amdgpu_sa_bo_new(adev, &adev->ib_pools[pool_type], &ib->sa_bo, size); if (r) { dev_err(adev->dev, "failed to get a new IB (%d)\n", r); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index 1daa9145b217e..c7e4d79b9f61d 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -192,18 +192,21 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm, if (num_ibs == 0) return -EINVAL; - *job = kzalloc(struct_size(*job, ibs, num_ibs), GFP_KERNEL); + *job = kzalloc(struct_size(*job, ibs, num_ibs), + amdgpu_in_reset(adev) ? GFP_ATOMIC : GFP_KERNEL); if (!*job) return -ENOMEM; - af = kzalloc(sizeof(struct amdgpu_fence), GFP_KERNEL); + af = kzalloc(sizeof(struct amdgpu_fence), + amdgpu_in_reset(adev) ? GFP_ATOMIC : GFP_KERNEL); if (!af) { r = -ENOMEM; goto err_job; } (*job)->hw_fence = af; - af = kzalloc(sizeof(struct amdgpu_fence), GFP_KERNEL); + af = kzalloc(sizeof(struct amdgpu_fence), + amdgpu_in_reset(adev) ? 
GFP_ATOMIC : GFP_KERNEL); if (!af) { r = -ENOMEM; goto err_fence; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h index 912c9afaf9e11..7ee0cc46b4608 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h @@ -339,7 +339,8 @@ void amdgpu_sa_bo_manager_fini(struct amdgpu_device *adev, struct amdgpu_sa_manager *sa_manager); int amdgpu_sa_bo_manager_start(struct amdgpu_device *adev, struct amdgpu_sa_manager *sa_manager); -int amdgpu_sa_bo_new(struct amdgpu_sa_manager *sa_manager, +int amdgpu_sa_bo_new(struct amdgpu_device *adev, + struct amdgpu_sa_manager *sa_manager, struct drm_suballoc **sa_bo, unsigned int size); void amdgpu_sa_bo_free(struct drm_suballoc **sa_bo, diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c index 39070b2a4c04f..fc13969f8ef49 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c @@ -76,12 +76,14 @@ void amdgpu_sa_bo_manager_fini(struct amdgpu_device *adev, amdgpu_bo_free_kernel(&sa_manager->bo, &sa_manager->gpu_addr, &sa_manager->cpu_ptr); } -int amdgpu_sa_bo_new(struct amdgpu_sa_manager *sa_manager, +int amdgpu_sa_bo_new(struct amdgpu_device *adev, + struct amdgpu_sa_manager *sa_manager, struct drm_suballoc **sa_bo, unsigned int size) { struct drm_suballoc *sa = drm_suballoc_new(&sa_manager->base, size, - GFP_KERNEL, false, 0); + amdgpu_in_reset(adev) ? GFP_ATOMIC : GFP_KERNEL, + false, 0); if (IS_ERR(sa)) { *sa_bo = NULL; -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH 03/10] drm/amdgpu/job: use GFP_ATOMIC while in gpu reset 2026-01-16 16:20 ` [PATCH 03/10] drm/amdgpu/job: use GFP_ATOMIC while in gpu reset Alex Deucher @ 2026-01-19 12:27 ` Christian König 0 siblings, 0 replies; 20+ messages in thread From: Christian König @ 2026-01-19 12:27 UTC (permalink / raw) To: Alex Deucher, amd-gfx On 1/16/26 17:20, Alex Deucher wrote: > If we need to allocate a job during GPU reset, use > GFP_ATOMIC rather than GFP_KERNEL. > > Signed-off-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 2 +- > drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 9 ++++++--- > drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 3 ++- > drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c | 6 ++++-- > 4 files changed, 13 insertions(+), 7 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c > index 72ec455fa932c..136e50de712a0 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c > @@ -68,7 +68,7 @@ int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm, > int r; > > if (size) { > - r = amdgpu_sa_bo_new(&adev->ib_pools[pool_type], > + r = amdgpu_sa_bo_new(adev, &adev->ib_pools[pool_type], > &ib->sa_bo, size); > if (r) { > dev_err(adev->dev, "failed to get a new IB (%d)\n", r); > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > index 1daa9145b217e..c7e4d79b9f61d 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > @@ -192,18 +192,21 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm, > if (num_ibs == 0) > return -EINVAL; > > - *job = kzalloc(struct_size(*job, ibs, num_ibs), GFP_KERNEL); > + *job = kzalloc(struct_size(*job, ibs, num_ibs), > + amdgpu_in_reset(adev) ? GFP_ATOMIC : GFP_KERNEL); That's an extremely bad idea, amdgpu_in_reset() returns true even outside of the reset thread. We really need to look at the pool type. Regards, Christian. > if (!*job) > return -ENOMEM; > > - af = kzalloc(sizeof(struct amdgpu_fence), GFP_KERNEL); > + af = kzalloc(sizeof(struct amdgpu_fence), > + amdgpu_in_reset(adev) ? GFP_ATOMIC : GFP_KERNEL); > if (!af) { > r = -ENOMEM; > goto err_job; > } > (*job)->hw_fence = af; > > - af = kzalloc(sizeof(struct amdgpu_fence), GFP_KERNEL); > + af = kzalloc(sizeof(struct amdgpu_fence), > + amdgpu_in_reset(adev) ? 
GFP_ATOMIC : GFP_KERNEL); > if (!af) { > r = -ENOMEM; > goto err_fence; > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h > index 912c9afaf9e11..7ee0cc46b4608 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h > @@ -339,7 +339,8 @@ void amdgpu_sa_bo_manager_fini(struct amdgpu_device *adev, > struct amdgpu_sa_manager *sa_manager); > int amdgpu_sa_bo_manager_start(struct amdgpu_device *adev, > struct amdgpu_sa_manager *sa_manager); > -int amdgpu_sa_bo_new(struct amdgpu_sa_manager *sa_manager, > +int amdgpu_sa_bo_new(struct amdgpu_device *adev, > + struct amdgpu_sa_manager *sa_manager, > struct drm_suballoc **sa_bo, > unsigned int size); > void amdgpu_sa_bo_free(struct drm_suballoc **sa_bo, > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c > index 39070b2a4c04f..fc13969f8ef49 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c > @@ -76,12 +76,14 @@ void amdgpu_sa_bo_manager_fini(struct amdgpu_device *adev, > amdgpu_bo_free_kernel(&sa_manager->bo, &sa_manager->gpu_addr, &sa_manager->cpu_ptr); > } > > -int amdgpu_sa_bo_new(struct amdgpu_sa_manager *sa_manager, > +int amdgpu_sa_bo_new(struct amdgpu_device *adev, > + struct amdgpu_sa_manager *sa_manager, > struct drm_suballoc **sa_bo, > unsigned int size) > { > struct drm_suballoc *sa = drm_suballoc_new(&sa_manager->base, size, > - GFP_KERNEL, false, 0); > + amdgpu_in_reset(adev) ? GFP_ATOMIC : GFP_KERNEL, > + false, 0); > > if (IS_ERR(sa)) { > *sa_bo = NULL; ^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH 04/10] drm/amdgpu: switch all IPs to using job for IBs 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher ` (2 preceding siblings ...) 2026-01-16 16:20 ` [PATCH 03/10] drm/amdgpu/job: use GFP_ATOMIC while in gpu reset Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-19 12:31 ` Christian König 2026-01-16 16:20 ` [PATCH 05/10] drm/amdgpu: require a job to schedule an IB Alex Deucher ` (5 subsequent siblings) 9 siblings, 1 reply; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher Switch to using a job structure for IBs. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c | 37 +++--- drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 31 ++--- drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 29 ++--- drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 29 ++--- drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 29 ++--- drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c | 29 ++--- drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 24 ++-- drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 25 ++-- drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 139 ++++++++++++----------- drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 138 +++++++++++----------- drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c | 26 ++--- drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 29 ++--- drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c | 38 ++++--- drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 38 ++++--- drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 38 ++++--- drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 38 ++++--- drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 37 +++--- drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 36 +++--- drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 36 +++--- drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 36 +++--- drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c | 36 +++--- drivers/gpu/drm/amd/amdgpu/si_dma.c | 29 +++-- 22 files changed, 500 insertions(+), 427 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c index fd881388d6125..9fb1946be1ba2 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c @@ -817,7 +817,8 @@ static int vpe_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; const uint32_t test_pattern = 0xdeadbeef; - struct amdgpu_ib ib = {}; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; uint32_t index; uint64_t wb_addr; @@ -832,23 +833,28 @@ static int vpe_ring_test_ib(struct amdgpu_ring *ring, long timeout) adev->wb.wb[index] = 0; wb_addr = adev->wb.gpu_addr + (index * 4); - ret = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); + ret = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_VPE_RING_TEST); if (ret) goto err0; - - ib.ptr[0] = VPE_CMD_HEADER(VPE_CMD_OPCODE_FENCE, 0); - ib.ptr[1] = lower_32_bits(wb_addr); - ib.ptr[2] = upper_32_bits(wb_addr); - ib.ptr[3] = test_pattern; - ib.ptr[4] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); - ib.ptr[5] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); - ib.ptr[6] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); - ib.ptr[7] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); - ib.length_dw = 8; - - ret = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (ret) + ib = &job->ibs[0]; + + ib->ptr[0] = VPE_CMD_HEADER(VPE_CMD_OPCODE_FENCE, 0); + ib->ptr[1] = lower_32_bits(wb_addr); + ib->ptr[2] = upper_32_bits(wb_addr); + ib->ptr[3] = test_pattern; + ib->ptr[4] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); + ib->ptr[5] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 
0); + ib->ptr[6] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); + ib->ptr[7] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); + ib->length_dw = 8; + + ret = amdgpu_job_submit_direct(job, ring, &f); + if (ret) { + amdgpu_job_free(job); goto err1; + } ret = dma_fence_wait_timeout(f, false, timeout); if (ret <= 0) { @@ -859,7 +865,6 @@ static int vpe_ring_test_ib(struct amdgpu_ring *ring, long timeout) ret = (le32_to_cpu(adev->wb.wb[index]) == test_pattern) ? 0 : -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c index 9e8715b4739da..e2ca96f5a7cfb 100644 --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c @@ -652,7 +652,8 @@ static int cik_sdma_ring_test_ring(struct amdgpu_ring *ring) static int cik_sdma_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; u32 tmp = 0; @@ -666,22 +667,27 @@ static int cik_sdma_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); tmp = 0xCAFEDEAD; adev->wb.wb[index] = cpu_to_le32(tmp); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 256, - AMDGPU_IB_POOL_DIRECT, &ib); + + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) goto err0; + ib = &job->ibs[0]; - ib.ptr[0] = SDMA_PACKET(SDMA_OPCODE_WRITE, + ib->ptr[0] = SDMA_PACKET(SDMA_OPCODE_WRITE, SDMA_WRITE_SUB_OPCODE_LINEAR, 0); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = 1; - ib.ptr[4] = 0xDEADBEEF; - ib.length_dw = 5; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = 1; + ib->ptr[4] = 0xDEADBEEF; + ib->length_dw = 5; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -697,7 +703,6 @@ static int cik_sdma_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c index 41bbedb8e157e..496121bdc1de1 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c @@ -4071,15 +4071,14 @@ static int gfx_v10_0_ring_test_ring(struct amdgpu_ring *ring) static int gfx_v10_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned int index; uint64_t gpu_addr; uint32_t *cpu_ptr; long r; - memset(&ib, 0, sizeof(ib)); - r = amdgpu_device_wb_get(adev, &index); if (r) return r; @@ -4088,22 +4087,27 @@ static int gfx_v10_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); cpu_ptr = &adev->wb.wb[index]; - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); if (r) { drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); goto err1; } + ib = 
&job->ibs[0]; - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; - ib.ptr[2] = lower_32_bits(gpu_addr); - ib.ptr[3] = upper_32_bits(gpu_addr); - ib.ptr[4] = 0xDEADBEEF; - ib.length_dw = 5; + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; + ib->ptr[2] = lower_32_bits(gpu_addr); + ib->ptr[3] = upper_32_bits(gpu_addr); + ib->ptr[4] = 0xDEADBEEF; + ib->length_dw = 5; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err2; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -4118,7 +4122,6 @@ static int gfx_v10_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) else r = -EINVAL; err2: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err1: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c index 3a4ca104b1612..5ad2516a60240 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c @@ -604,7 +604,8 @@ static int gfx_v11_0_ring_test_ring(struct amdgpu_ring *ring) static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; uint64_t gpu_addr; @@ -616,8 +617,6 @@ static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) ring->funcs->type == AMDGPU_RING_TYPE_KIQ) return 0; - memset(&ib, 0, sizeof(ib)); - r = amdgpu_device_wb_get(adev, &index); if (r) return r; @@ -626,22 +625,27 @@ static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); cpu_ptr = &adev->wb.wb[index]; - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); if (r) { drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); goto err1; } + ib = &job->ibs[0]; - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; - ib.ptr[2] = lower_32_bits(gpu_addr); - ib.ptr[3] = upper_32_bits(gpu_addr); - ib.ptr[4] = 0xDEADBEEF; - ib.length_dw = 5; + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; + ib->ptr[2] = lower_32_bits(gpu_addr); + ib->ptr[3] = upper_32_bits(gpu_addr); + ib->ptr[4] = 0xDEADBEEF; + ib->length_dw = 5; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err2; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -656,7 +660,6 @@ static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) else r = -EINVAL; err2: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err1: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c index 40660b05f9794..4d5c6bdd8cad7 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c @@ -493,7 +493,8 @@ static int gfx_v12_0_ring_test_ring(struct amdgpu_ring *ring) static int gfx_v12_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; 
unsigned index; uint64_t gpu_addr; @@ -505,8 +506,6 @@ static int gfx_v12_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) ring->funcs->type == AMDGPU_RING_TYPE_KIQ) return 0; - memset(&ib, 0, sizeof(ib)); - r = amdgpu_device_wb_get(adev, &index); if (r) return r; @@ -515,22 +514,27 @@ static int gfx_v12_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); cpu_ptr = &adev->wb.wb[index]; - r = amdgpu_ib_get(adev, NULL, 16, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 16, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); if (r) { drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); goto err1; } + ib = &job->ibs[0]; - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; - ib.ptr[2] = lower_32_bits(gpu_addr); - ib.ptr[3] = upper_32_bits(gpu_addr); - ib.ptr[4] = 0xDEADBEEF; - ib.length_dw = 5; + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; + ib->ptr[2] = lower_32_bits(gpu_addr); + ib->ptr[3] = upper_32_bits(gpu_addr); + ib->ptr[4] = 0xDEADBEEF; + ib->length_dw = 5; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err2; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -545,7 +549,6 @@ static int gfx_v12_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) else r = -EINVAL; err2: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err1: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c index 86cc90a662965..7d02569cd4738 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c @@ -306,7 +306,8 @@ static int gfx_v12_1_ring_test_ring(struct amdgpu_ring *ring) static int gfx_v12_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; uint64_t gpu_addr; @@ -318,8 +319,6 @@ static int gfx_v12_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) ring->funcs->type == AMDGPU_RING_TYPE_KIQ) return 0; - memset(&ib, 0, sizeof(ib)); - r = amdgpu_device_wb_get(adev, &index); if (r) return r; @@ -328,22 +327,27 @@ static int gfx_v12_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); cpu_ptr = &adev->wb.wb[index]; - r = amdgpu_ib_get(adev, NULL, 16, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 16, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); if (r) { dev_err(adev->dev, "amdgpu: failed to get ib (%ld).\n", r); goto err1; } + ib = &job->ibs[0]; - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; - ib.ptr[2] = lower_32_bits(gpu_addr); - ib.ptr[3] = upper_32_bits(gpu_addr); - ib.ptr[4] = 0xDEADBEEF; - ib.length_dw = 5; + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; + ib->ptr[2] = lower_32_bits(gpu_addr); + ib->ptr[3] = upper_32_bits(gpu_addr); + ib->ptr[4] = 0xDEADBEEF; + ib->length_dw = 5; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err2; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -358,7 
+362,6 @@ static int gfx_v12_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) else r = -EINVAL; err2: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err1: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c index 73223d97a87f5..2f8aa99f17480 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c @@ -1895,24 +1895,29 @@ static int gfx_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; struct dma_fence *f = NULL; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; uint32_t tmp = 0; long r; WREG32(mmSCRATCH_REG0, 0xCAFEDEAD); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); if (r) return r; - ib.ptr[0] = PACKET3(PACKET3_SET_CONFIG_REG, 1); - ib.ptr[1] = mmSCRATCH_REG0 - PACKET3_SET_CONFIG_REG_START; - ib.ptr[2] = 0xDEADBEEF; - ib.length_dw = 3; + ib = &job->ibs[0]; + ib->ptr[0] = PACKET3(PACKET3_SET_CONFIG_REG, 1); + ib->ptr[1] = mmSCRATCH_REG0 - PACKET3_SET_CONFIG_REG_START; + ib->ptr[2] = 0xDEADBEEF; + ib->length_dw = 3; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto error; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -1928,7 +1933,6 @@ static int gfx_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; error: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); return r; } diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c index 2b691452775bc..fa235b981c2e9 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c @@ -2291,25 +2291,31 @@ static void gfx_v7_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags) static int gfx_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; uint32_t tmp = 0; long r; WREG32(mmSCRATCH_REG0, 0xCAFEDEAD); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); + + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); if (r) return r; - ib.ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1); - ib.ptr[1] = mmSCRATCH_REG0 - PACKET3_SET_UCONFIG_REG_START; - ib.ptr[2] = 0xDEADBEEF; - ib.length_dw = 3; + ib = &job->ibs[0]; + ib->ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1); + ib->ptr[1] = mmSCRATCH_REG0 - PACKET3_SET_UCONFIG_REG_START; + ib->ptr[2] = 0xDEADBEEF; + ib->length_dw = 3; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto error; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -2325,7 +2331,6 @@ static int gfx_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; error: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); return r; } diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c index a6b4c8f41dc11..4736216cd0211 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c @@ -868,9 +868,9 @@ static int 
gfx_v8_0_ring_test_ring(struct amdgpu_ring *ring) static int gfx_v8_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; - unsigned int index; uint64_t gpu_addr; uint32_t tmp; @@ -882,22 +882,26 @@ static int gfx_v8_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); if (r) goto err1; - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; - ib.ptr[2] = lower_32_bits(gpu_addr); - ib.ptr[3] = upper_32_bits(gpu_addr); - ib.ptr[4] = 0xDEADBEEF; - ib.length_dw = 5; + ib = &job->ibs[0]; + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; + ib->ptr[2] = lower_32_bits(gpu_addr); + ib->ptr[3] = upper_32_bits(gpu_addr); + ib->ptr[4] = 0xDEADBEEF; + ib->length_dw = 5; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err2; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -914,7 +918,6 @@ static int gfx_v8_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err2: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err1: amdgpu_device_wb_free(adev, index); @@ -1474,7 +1477,8 @@ static const u32 sec_ded_counter_registers[] = static int gfx_v8_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) { struct amdgpu_ring *ring = &adev->gfx.compute_ring[0]; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; int r, i; u32 tmp; @@ -1505,106 +1509,108 @@ static int gfx_v8_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) total_size += sizeof(sgpr_init_compute_shader); /* allocate an indirect buffer to put the commands in */ - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, total_size, - AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, total_size, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_RUN_SHADER); if (r) { drm_err(adev_to_drm(adev), "failed to get ib (%d).\n", r); return r; } + ib = &job->ibs[0]; /* load the compute shaders */ for (i = 0; i < ARRAY_SIZE(vgpr_init_compute_shader); i++) - ib.ptr[i + (vgpr_offset / 4)] = vgpr_init_compute_shader[i]; + ib->ptr[i + (vgpr_offset / 4)] = vgpr_init_compute_shader[i]; for (i = 0; i < ARRAY_SIZE(sgpr_init_compute_shader); i++) - ib.ptr[i + (sgpr_offset / 4)] = sgpr_init_compute_shader[i]; + ib->ptr[i + (sgpr_offset / 4)] = sgpr_init_compute_shader[i]; /* init the ib length to 0 */ - ib.length_dw = 0; + ib->length_dw = 0; /* VGPR */ /* write the register state for the compute dispatch */ for (i = 0; i < ARRAY_SIZE(vgpr_init_regs); i += 2) { - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); - ib.ptr[ib.length_dw++] = vgpr_init_regs[i] - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = vgpr_init_regs[i + 1]; + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); + ib->ptr[ib->length_dw++] = vgpr_init_regs[i] - PACKET3_SET_SH_REG_START; + ib->ptr[ib->length_dw++] = vgpr_init_regs[i + 1]; } /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ - gpu_addr = 
(ib.gpu_addr + (u64)vgpr_offset) >> 8; - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); - ib.ptr[ib.length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); + gpu_addr = (ib->gpu_addr + (u64)vgpr_offset) >> 8; + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); + ib->ptr[ib->length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); /* write dispatch packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); - ib.ptr[ib.length_dw++] = 8; /* x */ - ib.ptr[ib.length_dw++] = 1; /* y */ - ib.ptr[ib.length_dw++] = 1; /* z */ - ib.ptr[ib.length_dw++] = + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); + ib->ptr[ib->length_dw++] = 8; /* x */ + ib->ptr[ib->length_dw++] = 1; /* y */ + ib->ptr[ib->length_dw++] = 1; /* z */ + ib->ptr[ib->length_dw++] = REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); /* write CS partial flush packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); /* SGPR1 */ /* write the register state for the compute dispatch */ for (i = 0; i < ARRAY_SIZE(sgpr1_init_regs); i += 2) { - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); - ib.ptr[ib.length_dw++] = sgpr1_init_regs[i] - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = sgpr1_init_regs[i + 1]; + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); + ib->ptr[ib->length_dw++] = sgpr1_init_regs[i] - PACKET3_SET_SH_REG_START; + ib->ptr[ib->length_dw++] = sgpr1_init_regs[i + 1]; } /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ - gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8; - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); - ib.ptr[ib.length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); + gpu_addr = (ib->gpu_addr + (u64)sgpr_offset) >> 8; + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); + ib->ptr[ib->length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); /* write dispatch packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); - ib.ptr[ib.length_dw++] = 8; /* x */ - ib.ptr[ib.length_dw++] = 1; /* y */ - ib.ptr[ib.length_dw++] = 1; /* z */ - ib.ptr[ib.length_dw++] = + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); + ib->ptr[ib->length_dw++] = 8; /* x */ + ib->ptr[ib->length_dw++] = 1; /* y */ + ib->ptr[ib->length_dw++] = 1; /* z */ + ib->ptr[ib->length_dw++] = REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); /* write CS partial flush packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); /* SGPR2 */ /* write the register state for the compute dispatch */ for (i = 0; i < ARRAY_SIZE(sgpr2_init_regs); i += 2) { - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); - ib.ptr[ib.length_dw++] = sgpr2_init_regs[i] - 
PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = sgpr2_init_regs[i + 1]; + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); + ib->ptr[ib->length_dw++] = sgpr2_init_regs[i] - PACKET3_SET_SH_REG_START; + ib->ptr[ib->length_dw++] = sgpr2_init_regs[i + 1]; } /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ - gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8; - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); - ib.ptr[ib.length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); + gpu_addr = (ib->gpu_addr + (u64)sgpr_offset) >> 8; + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); + ib->ptr[ib->length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); /* write dispatch packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); - ib.ptr[ib.length_dw++] = 8; /* x */ - ib.ptr[ib.length_dw++] = 1; /* y */ - ib.ptr[ib.length_dw++] = 1; /* z */ - ib.ptr[ib.length_dw++] = + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); + ib->ptr[ib->length_dw++] = 8; /* x */ + ib->ptr[ib->length_dw++] = 1; /* y */ + ib->ptr[ib->length_dw++] = 1; /* z */ + ib->ptr[ib->length_dw++] = REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); /* write CS partial flush packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); /* shedule the ib on the ring */ - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); + r = amdgpu_job_submit_direct(job, ring, &f); if (r) { drm_err(adev_to_drm(adev), "ib submit failed (%d).\n", r); + amdgpu_job_free(job); goto fail; } @@ -1629,7 +1635,6 @@ static int gfx_v8_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) RREG32(sec_ded_counter_registers[i]); fail: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); return r; diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c index 7e9d753f4a808..36f0300a21bfa 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c @@ -1224,9 +1224,9 @@ static int gfx_v9_0_ring_test_ring(struct amdgpu_ring *ring) static int gfx_v9_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; - unsigned index; uint64_t gpu_addr; uint32_t tmp; @@ -1238,22 +1238,26 @@ static int gfx_v9_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); if (r) goto err1; - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; - ib.ptr[2] = lower_32_bits(gpu_addr); - ib.ptr[3] = upper_32_bits(gpu_addr); - ib.ptr[4] = 0xDEADBEEF; - ib.length_dw = 5; + ib = &job->ibs[0]; + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; + ib->ptr[2] = lower_32_bits(gpu_addr); + ib->ptr[3] = 
upper_32_bits(gpu_addr); + ib->ptr[4] = 0xDEADBEEF; + ib->length_dw = 5; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err2; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -1270,7 +1274,6 @@ static int gfx_v9_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err2: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err1: amdgpu_device_wb_free(adev, index); @@ -4624,7 +4627,8 @@ static int gfx_v9_0_do_edc_gds_workarounds(struct amdgpu_device *adev) static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) { struct amdgpu_ring *ring = &adev->gfx.compute_ring[0]; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; int r, i; unsigned total_size, vgpr_offset, sgpr_offset; @@ -4670,9 +4674,9 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) total_size += sizeof(sgpr_init_compute_shader); /* allocate an indirect buffer to put the commands in */ - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, total_size, - AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, total_size, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_RUN_SHADER); if (r) { drm_err(adev_to_drm(adev), "failed to get ib (%d).\n", r); return r; @@ -4680,102 +4684,103 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) /* load the compute shaders */ for (i = 0; i < vgpr_init_shader_size/sizeof(u32); i++) - ib.ptr[i + (vgpr_offset / 4)] = vgpr_init_shader_ptr[i]; + ib->ptr[i + (vgpr_offset / 4)] = vgpr_init_shader_ptr[i]; for (i = 0; i < ARRAY_SIZE(sgpr_init_compute_shader); i++) - ib.ptr[i + (sgpr_offset / 4)] = sgpr_init_compute_shader[i]; + ib->ptr[i + (sgpr_offset / 4)] = sgpr_init_compute_shader[i]; /* init the ib length to 0 */ - ib.length_dw = 0; + ib->length_dw = 0; /* VGPR */ /* write the register state for the compute dispatch */ for (i = 0; i < gpr_reg_size; i++) { - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); - ib.ptr[ib.length_dw++] = SOC15_REG_ENTRY_OFFSET(vgpr_init_regs_ptr[i]) + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); + ib->ptr[ib->length_dw++] = SOC15_REG_ENTRY_OFFSET(vgpr_init_regs_ptr[i]) - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = vgpr_init_regs_ptr[i].reg_value; + ib->ptr[ib->length_dw++] = vgpr_init_regs_ptr[i].reg_value; } /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ - gpu_addr = (ib.gpu_addr + (u64)vgpr_offset) >> 8; - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); - ib.ptr[ib.length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) + gpu_addr = (ib->gpu_addr + (u64)vgpr_offset) >> 8; + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); + ib->ptr[ib->length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); /* write dispatch packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); - ib.ptr[ib.length_dw++] = compute_dim_x * 2; /* x */ - ib.ptr[ib.length_dw++] = 1; /* y */ - ib.ptr[ib.length_dw++] = 1; /* z */ - ib.ptr[ib.length_dw++] = + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); + ib->ptr[ib->length_dw++] = compute_dim_x * 2; /* x */ + ib->ptr[ib->length_dw++] = 1; /* 
y */ + ib->ptr[ib->length_dw++] = 1; /* z */ + ib->ptr[ib->length_dw++] = REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); /* write CS partial flush packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); /* SGPR1 */ /* write the register state for the compute dispatch */ for (i = 0; i < gpr_reg_size; i++) { - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); - ib.ptr[ib.length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr1_init_regs[i]) + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); + ib->ptr[ib->length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr1_init_regs[i]) - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = sgpr1_init_regs[i].reg_value; + ib->ptr[ib->length_dw++] = sgpr1_init_regs[i].reg_value; } /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ - gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8; - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); - ib.ptr[ib.length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) + gpu_addr = (ib->gpu_addr + (u64)sgpr_offset) >> 8; + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); + ib->ptr[ib->length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); /* write dispatch packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); - ib.ptr[ib.length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */ - ib.ptr[ib.length_dw++] = 1; /* y */ - ib.ptr[ib.length_dw++] = 1; /* z */ - ib.ptr[ib.length_dw++] = + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); + ib->ptr[ib->length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */ + ib->ptr[ib->length_dw++] = 1; /* y */ + ib->ptr[ib->length_dw++] = 1; /* z */ + ib->ptr[ib->length_dw++] = REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); /* write CS partial flush packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); /* SGPR2 */ /* write the register state for the compute dispatch */ for (i = 0; i < gpr_reg_size; i++) { - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); - ib.ptr[ib.length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr2_init_regs[i]) + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); + ib->ptr[ib->length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr2_init_regs[i]) - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = sgpr2_init_regs[i].reg_value; + ib->ptr[ib->length_dw++] = sgpr2_init_regs[i].reg_value; } /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ - gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8; - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); - ib.ptr[ib.length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) + gpu_addr = (ib->gpu_addr + (u64)sgpr_offset) >> 8; + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); + ib->ptr[ib->length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) - PACKET3_SET_SH_REG_START; - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); - ib.ptr[ib.length_dw++] = 
upper_32_bits(gpu_addr); + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); /* write dispatch packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); - ib.ptr[ib.length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */ - ib.ptr[ib.length_dw++] = 1; /* y */ - ib.ptr[ib.length_dw++] = 1; /* z */ - ib.ptr[ib.length_dw++] = + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); + ib->ptr[ib->length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */ + ib->ptr[ib->length_dw++] = 1; /* y */ + ib->ptr[ib->length_dw++] = 1; /* z */ + ib->ptr[ib->length_dw++] = REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); /* write CS partial flush packet */ - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); /* shedule the ib on the ring */ - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); + r = amdgpu_job_submit_direct(job, ring, &f); if (r) { drm_err(adev_to_drm(adev), "ib schedule failed (%d).\n", r); + amdgpu_job_free(job); goto fail; } @@ -4787,7 +4792,6 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) } fail: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); return r; diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c index 8058ea91ecafd..424b05b84ea74 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c @@ -345,12 +345,13 @@ const struct soc15_reg_entry sgpr64_init_regs_aldebaran[] = { static int gfx_v9_4_2_run_shader(struct amdgpu_device *adev, struct amdgpu_ring *ring, - struct amdgpu_ib *ib, const u32 *shader_ptr, u32 shader_size, const struct soc15_reg_entry *init_regs, u32 regs_size, u32 compute_dim_x, u64 wb_gpu_addr, u32 pattern, struct dma_fence **fence_ptr) { + struct amdgpu_job *job; + struct amdgpu_ib *ib; int r, i; uint32_t total_size, shader_offset; u64 gpu_addr; @@ -360,10 +361,9 @@ static int gfx_v9_4_2_run_shader(struct amdgpu_device *adev, shader_offset = total_size; total_size += ALIGN(shader_size, 256); - /* allocate an indirect buffer to put the commands in */ - memset(ib, 0, sizeof(*ib)); - r = amdgpu_ib_get(adev, NULL, total_size, - AMDGPU_IB_POOL_DIRECT, ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, total_size, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_RUN_SHADER); if (r) { dev_err(adev->dev, "failed to get ib (%d).\n", r); return r; @@ -408,11 +408,11 @@ static int gfx_v9_4_2_run_shader(struct amdgpu_device *adev, ib->ptr[ib->length_dw++] = REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); - /* shedule the ib on the ring */ - r = amdgpu_ib_schedule(ring, 1, ib, NULL, fence_ptr); + /* schedule the ib on the ring */ + r = amdgpu_job_submit_direct(job, ring, fence_ptr); if (r) { dev_err(adev->dev, "ib submit failed (%d).\n", r); - amdgpu_ib_free(ib, NULL); + amdgpu_job_free(job); } return r; } @@ -493,7 +493,6 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) int wb_size = adev->gfx.config.max_shader_engines * CU_ID_MAX * SIMD_ID_MAX * WAVE_ID_MAX; struct amdgpu_ib wb_ib; - struct amdgpu_ib disp_ibs[3]; struct dma_fence *fences[3]; u32 pattern[3] = { 0x1, 0x5, 0xa }; @@ -514,7 +513,6 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) r = gfx_v9_4_2_run_shader(adev, &adev->gfx.compute_ring[0], - 
&disp_ibs[0], sgpr112_init_compute_shader_aldebaran, sizeof(sgpr112_init_compute_shader_aldebaran), sgpr112_init_regs_aldebaran, @@ -539,7 +537,6 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) r = gfx_v9_4_2_run_shader(adev, &adev->gfx.compute_ring[1], - &disp_ibs[1], sgpr96_init_compute_shader_aldebaran, sizeof(sgpr96_init_compute_shader_aldebaran), sgpr96_init_regs_aldebaran, @@ -579,7 +576,6 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) memset(wb_ib.ptr, 0, (1 + wb_size) * sizeof(uint32_t)); r = gfx_v9_4_2_run_shader(adev, &adev->gfx.compute_ring[0], - &disp_ibs[2], sgpr64_init_compute_shader_aldebaran, sizeof(sgpr64_init_compute_shader_aldebaran), sgpr64_init_regs_aldebaran, @@ -611,13 +607,10 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) } disp2_failed: - amdgpu_ib_free(&disp_ibs[2], NULL); dma_fence_put(fences[2]); disp1_failed: - amdgpu_ib_free(&disp_ibs[1], NULL); dma_fence_put(fences[1]); disp0_failed: - amdgpu_ib_free(&disp_ibs[0], NULL); dma_fence_put(fences[0]); pro_end: amdgpu_ib_free(&wb_ib, NULL); @@ -637,7 +630,6 @@ static int gfx_v9_4_2_do_vgprs_init(struct amdgpu_device *adev) int wb_size = adev->gfx.config.max_shader_engines * CU_ID_MAX * SIMD_ID_MAX * WAVE_ID_MAX; struct amdgpu_ib wb_ib; - struct amdgpu_ib disp_ib; struct dma_fence *fence; u32 pattern = 0xa; @@ -657,7 +649,6 @@ static int gfx_v9_4_2_do_vgprs_init(struct amdgpu_device *adev) r = gfx_v9_4_2_run_shader(adev, &adev->gfx.compute_ring[0], - &disp_ib, vgpr_init_compute_shader_aldebaran, sizeof(vgpr_init_compute_shader_aldebaran), vgpr_init_regs_aldebaran, @@ -687,7 +678,6 @@ static int gfx_v9_4_2_do_vgprs_init(struct amdgpu_device *adev) } disp_failed: - amdgpu_ib_free(&disp_ib, NULL); dma_fence_put(fence); pro_end: amdgpu_ib_free(&wb_ib, NULL); diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c index ad4d442e7345e..d78b2c2ae13a3 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c @@ -451,9 +451,9 @@ static int gfx_v9_4_3_ring_test_ring(struct amdgpu_ring *ring) static int gfx_v9_4_3_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; - unsigned index; uint64_t gpu_addr; uint32_t tmp; @@ -465,22 +465,26 @@ static int gfx_v9_4_3_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); if (r) goto err1; - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; - ib.ptr[2] = lower_32_bits(gpu_addr); - ib.ptr[3] = upper_32_bits(gpu_addr); - ib.ptr[4] = 0xDEADBEEF; - ib.length_dw = 5; + ib = &job->ibs[0]; + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; + ib->ptr[2] = lower_32_bits(gpu_addr); + ib->ptr[3] = upper_32_bits(gpu_addr); + ib->ptr[4] = 0xDEADBEEF; + ib->length_dw = 5; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err2; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -497,7 +501,6 @@ static int 
gfx_v9_4_3_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err2: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err1: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c index 92ce580647cdc..46263d50cc9ef 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c @@ -584,7 +584,8 @@ static int sdma_v2_4_ring_test_ring(struct amdgpu_ring *ring) static int sdma_v2_4_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; u32 tmp = 0; @@ -598,26 +599,30 @@ static int sdma_v2_4_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); tmp = 0xCAFEDEAD; adev->wb.wb[index] = cpu_to_le32(tmp); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 256, - AMDGPU_IB_POOL_DIRECT, &ib); + + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) goto err0; - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | + ib = &job->ibs[0]; + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(1); - ib.ptr[4] = 0xDEADBEEF; - ib.ptr[5] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); - ib.ptr[6] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); - ib.ptr[7] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); - ib.length_dw = 8; - - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(1); + ib->ptr[4] = 0xDEADBEEF; + ib->ptr[5] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); + ib->ptr[6] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); + ib->ptr[7] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); + ib->length_dw = 8; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -633,7 +638,6 @@ static int sdma_v2_4_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c index 1c076bd1cf73e..f9f05768072ad 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c @@ -858,7 +858,8 @@ static int sdma_v3_0_ring_test_ring(struct amdgpu_ring *ring) static int sdma_v3_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; u32 tmp = 0; @@ -872,26 +873,30 @@ static int sdma_v3_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); tmp = 0xCAFEDEAD; adev->wb.wb[index] = cpu_to_le32(tmp); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 256, - AMDGPU_IB_POOL_DIRECT, &ib); + + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) goto err0; - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | + ib = &job->ibs[0]; + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | 
SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(1); - ib.ptr[4] = 0xDEADBEEF; - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.length_dw = 8; - - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(1); + ib->ptr[4] = 0xDEADBEEF; + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->length_dw = 8; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -906,7 +911,6 @@ static int sdma_v3_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) else r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c index f38004e6064e5..56d2832ccba2d 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c @@ -1516,7 +1516,8 @@ static int sdma_v4_0_ring_test_ring(struct amdgpu_ring *ring) static int sdma_v4_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; long r; @@ -1530,26 +1531,30 @@ static int sdma_v4_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); tmp = 0xCAFEDEAD; adev->wb.wb[index] = cpu_to_le32(tmp); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 256, - AMDGPU_IB_POOL_DIRECT, &ib); + + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) goto err0; - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | + ib = &job->ibs[0]; + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); - ib.ptr[4] = 0xDEADBEEF; - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.length_dw = 8; - - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); + ib->ptr[4] = 0xDEADBEEF; + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->length_dw = 8; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -1565,7 +1570,6 @@ static int sdma_v4_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c index a1443990d5c60..dd8d6a572710d 100644 --- 
a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c @@ -1112,7 +1112,8 @@ static int sdma_v4_4_2_ring_test_ring(struct amdgpu_ring *ring) static int sdma_v4_4_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; long r; @@ -1126,26 +1127,30 @@ static int sdma_v4_4_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); tmp = 0xCAFEDEAD; adev->wb.wb[index] = cpu_to_le32(tmp); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 256, - AMDGPU_IB_POOL_DIRECT, &ib); + + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) goto err0; - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | + ib = &job->ibs[0]; + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); - ib.ptr[4] = 0xDEADBEEF; - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.length_dw = 8; - - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); + ib->ptr[4] = 0xDEADBEEF; + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->length_dw = 8; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -1161,7 +1166,6 @@ static int sdma_v4_4_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c index 7811cbb1f7ba3..786f1776fa30d 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c @@ -1074,7 +1074,8 @@ static int sdma_v5_0_ring_test_ring(struct amdgpu_ring *ring) static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; long r; @@ -1082,7 +1083,6 @@ static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) u64 gpu_addr; tmp = 0xCAFEDEAD; - memset(&ib, 0, sizeof(ib)); r = amdgpu_device_wb_get(adev, &index); if (r) { @@ -1093,27 +1093,31 @@ static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); adev->wb.wb[index] = cpu_to_le32(tmp); - r = amdgpu_ib_get(adev, NULL, 256, - AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) { drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); goto err0; } - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | + ib = &job->ibs[0]; + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); - ib.ptr[1] = 
lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); - ib.ptr[4] = 0xDEADBEEF; - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.length_dw = 8; - - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); + ib->ptr[4] = 0xDEADBEEF; + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->length_dw = 8; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -1133,7 +1137,6 @@ static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c index dbe5b8f109f6a..49005b96aa3f2 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c @@ -974,7 +974,8 @@ static int sdma_v5_2_ring_test_ring(struct amdgpu_ring *ring) static int sdma_v5_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; long r; @@ -982,7 +983,6 @@ static int sdma_v5_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) u64 gpu_addr; tmp = 0xCAFEDEAD; - memset(&ib, 0, sizeof(ib)); r = amdgpu_device_wb_get(adev, &index); if (r) { @@ -993,26 +993,31 @@ static int sdma_v5_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); adev->wb.wb[index] = cpu_to_le32(tmp); - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) { drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); goto err0; } - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | + ib = &job->ibs[0]; + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); - ib.ptr[4] = 0xDEADBEEF; - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.length_dw = 8; - - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); + ib->ptr[4] = 0xDEADBEEF; + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->length_dw = 8; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -1032,7 +1037,6 @@ static int sdma_v5_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); 
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c index eec659194718d..210ea6ba6212f 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c @@ -981,7 +981,8 @@ static int sdma_v6_0_ring_test_ring(struct amdgpu_ring *ring) static int sdma_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; long r; @@ -989,7 +990,6 @@ static int sdma_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) u64 gpu_addr; tmp = 0xCAFEDEAD; - memset(&ib, 0, sizeof(ib)); r = amdgpu_device_wb_get(adev, &index); if (r) { @@ -1000,26 +1000,31 @@ static int sdma_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); adev->wb.wb[index] = cpu_to_le32(tmp); - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) { drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); goto err0; } - ib.ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | + ib = &job->ibs[0]; + ib->ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); - ib.ptr[4] = 0xDEADBEEF; - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.length_dw = 8; - - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); + ib->ptr[4] = 0xDEADBEEF; + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->length_dw = 8; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -1039,7 +1044,6 @@ static int sdma_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c index 8d16ef257bcb9..3b4417d19212e 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c @@ -997,7 +997,8 @@ static int sdma_v7_0_ring_test_ring(struct amdgpu_ring *ring) static int sdma_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; long r; @@ -1005,7 +1006,6 @@ static int sdma_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) u64 gpu_addr; tmp = 0xCAFEDEAD; - memset(&ib, 0, sizeof(ib)); r = amdgpu_device_wb_get(adev, &index); if (r) { @@ -1016,26 +1016,31 @@ static int sdma_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); adev->wb.wb[index] = cpu_to_le32(tmp); - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); + r = 
amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) { drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); goto err0; } - ib.ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | + ib = &job->ibs[0]; + ib->ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); - ib.ptr[4] = 0xDEADBEEF; - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.length_dw = 8; - - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); + ib->ptr[4] = 0xDEADBEEF; + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->length_dw = 8; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -1055,7 +1060,6 @@ static int sdma_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c index 5bc45c3e00d18..d71a546bdde61 100644 --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c @@ -987,7 +987,8 @@ static int sdma_v7_1_ring_test_ring(struct amdgpu_ring *ring) static int sdma_v7_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; long r; @@ -995,7 +996,6 @@ static int sdma_v7_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) u64 gpu_addr; tmp = 0xCAFEDEAD; - memset(&ib, 0, sizeof(ib)); r = amdgpu_device_wb_get(adev, &index); if (r) { @@ -1006,26 +1006,31 @@ static int sdma_v7_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); adev->wb.wb[index] = cpu_to_le32(tmp); - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) { DRM_ERROR("amdgpu: failed to get ib (%ld).\n", r); goto err0; } - ib.ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | + ib = &job->ibs[0]; + ib->ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr); - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); - ib.ptr[4] = 0xDEADBEEF; - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); - ib.length_dw = 8; - - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr); + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); + ib->ptr[4] = 0xDEADBEEF; + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[6] = 
SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); + ib->length_dw = 8; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -1045,7 +1050,6 @@ static int sdma_v7_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c b/drivers/gpu/drm/amd/amdgpu/si_dma.c index 74fcaa340d9b1..b67bd343f795f 100644 --- a/drivers/gpu/drm/amd/amdgpu/si_dma.c +++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c @@ -259,7 +259,8 @@ static int si_dma_ring_test_ring(struct amdgpu_ring *ring) static int si_dma_ring_test_ib(struct amdgpu_ring *ring, long timeout) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib ib; + struct amdgpu_job *job; + struct amdgpu_ib *ib; struct dma_fence *f = NULL; unsigned index; u32 tmp = 0; @@ -273,20 +274,25 @@ static int si_dma_ring_test_ib(struct amdgpu_ring *ring, long timeout) gpu_addr = adev->wb.gpu_addr + (index * 4); tmp = 0xCAFEDEAD; adev->wb.wb[index] = cpu_to_le32(tmp); - memset(&ib, 0, sizeof(ib)); - r = amdgpu_ib_get(adev, NULL, 256, - AMDGPU_IB_POOL_DIRECT, &ib); + + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, + AMDGPU_IB_POOL_DIRECT, &job, + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); if (r) goto err0; - ib.ptr[0] = DMA_PACKET(DMA_PACKET_WRITE, 0, 0, 0, 1); - ib.ptr[1] = lower_32_bits(gpu_addr); - ib.ptr[2] = upper_32_bits(gpu_addr) & 0xff; - ib.ptr[3] = 0xDEADBEEF; - ib.length_dw = 4; - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); - if (r) + ib = &job->ibs[0]; + ib->ptr[0] = DMA_PACKET(DMA_PACKET_WRITE, 0, 0, 0, 1); + ib->ptr[1] = lower_32_bits(gpu_addr); + ib->ptr[2] = upper_32_bits(gpu_addr) & 0xff; + ib->ptr[3] = 0xDEADBEEF; + ib->length_dw = 4; + + r = amdgpu_job_submit_direct(job, ring, &f); + if (r) { + amdgpu_job_free(job); goto err1; + } r = dma_fence_wait_timeout(f, false, timeout); if (r == 0) { @@ -302,7 +308,6 @@ static int si_dma_ring_test_ib(struct amdgpu_ring *ring, long timeout) r = -EINVAL; err1: - amdgpu_ib_free(&ib, NULL); dma_fence_put(f); err0: amdgpu_device_wb_free(adev, index); -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
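[Editorial note, not part of the patch] The change repeated in every test_ib() above follows one pattern: the IP no longer allocates and frees a bare amdgpu_ib, but allocates a job that carries the IB, submits it with amdgpu_job_submit_direct(), and frees the job explicitly only if submission fails (on success the job is owned by the scheduler fence). A minimal sketch of that pattern, condensed from the SDMA hunks above, is shown below; the function name is illustrative, and the timeout/readback handling is restated from the driver's usual ring-test flow rather than copied from any single hunk.

static int example_sdma_ring_test_ib(struct amdgpu_ring *ring, long timeout)
{
	struct amdgpu_device *adev = ring->adev;
	struct amdgpu_job *job;
	struct amdgpu_ib *ib;
	struct dma_fence *f = NULL;
	unsigned index;
	u32 tmp;
	u64 gpu_addr;
	long r;

	/* grab a writeback slot the IB will write to */
	r = amdgpu_device_wb_get(adev, &index);
	if (r)
		return r;

	gpu_addr = adev->wb.gpu_addr + (index * 4);
	adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD);

	/* one allocation now provides both the job and its IB;
	 * no separate amdgpu_ib_get()/memset() anymore */
	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256,
				     AMDGPU_IB_POOL_DIRECT, &job,
				     AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST);
	if (r)
		goto err0;
	ib = &job->ibs[0];

	/* write 0xDEADBEEF to the writeback slot, then pad with NOPs */
	ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) |
		SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR);
	ib->ptr[1] = lower_32_bits(gpu_addr);
	ib->ptr[2] = upper_32_bits(gpu_addr);
	ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0);
	ib->ptr[4] = 0xDEADBEEF;
	ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP);
	ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP);
	ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP);
	ib->length_dw = 8;

	/* on success the job is owned via the returned fence;
	 * only free it explicitly when submission fails */
	r = amdgpu_job_submit_direct(job, ring, &f);
	if (r) {
		amdgpu_job_free(job);
		goto err1;
	}

	r = dma_fence_wait_timeout(f, false, timeout);
	if (r == 0) {
		r = -ETIMEDOUT;
		goto err1;
	} else if (r < 0) {
		goto err1;
	}

	/* check that the GPU actually wrote the pattern back */
	tmp = le32_to_cpu(adev->wb.wb[index]);
	r = (tmp == 0xDEADBEEF) ? 0 : -EINVAL;

err1:
	/* no amdgpu_ib_free() here anymore; the IB belongs to the job */
	dma_fence_put(f);
err0:
	amdgpu_device_wb_free(adev, index);
	return r;
}

The same ownership rule explains every error-path hunk in the patch: amdgpu_ib_free() disappears from the err labels because freeing the fence (or the job on a failed submit) is what releases the IB.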
* Re: [PATCH 04/10] drm/amdgpu: switch all IPs to using job for IBs 2026-01-16 16:20 ` [PATCH 04/10] drm/amdgpu: switch all IPs to using job for IBs Alex Deucher @ 2026-01-19 12:31 ` Christian König 0 siblings, 0 replies; 20+ messages in thread From: Christian König @ 2026-01-19 12:31 UTC (permalink / raw) To: Alex Deucher, amd-gfx On 1/16/26 17:20, Alex Deucher wrote: > Switch to using a job structure for IBs. > > Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Christian König <christian.koenig@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c | 37 +++--- > drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 31 ++--- > drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 29 ++--- > drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 29 ++--- > drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c | 29 ++--- > drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c | 29 ++--- > drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 24 ++-- > drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 25 ++-- > drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 139 ++++++++++++----------- > drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 138 +++++++++++----------- > drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c | 26 ++--- > drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c | 29 ++--- > drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c | 38 ++++--- > drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 38 ++++--- > drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 38 ++++--- > drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 38 ++++--- > drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 37 +++--- > drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 36 +++--- > drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 36 +++--- > drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 36 +++--- > drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c | 36 +++--- > drivers/gpu/drm/amd/amdgpu/si_dma.c | 29 +++-- > 22 files changed, 500 insertions(+), 427 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c > index fd881388d6125..9fb1946be1ba2 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c > @@ -817,7 +817,8 @@ static int vpe_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > const uint32_t test_pattern = 0xdeadbeef; > - struct amdgpu_ib ib = {}; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > uint32_t index; > uint64_t wb_addr; > @@ -832,23 +833,28 @@ static int vpe_ring_test_ib(struct amdgpu_ring *ring, long timeout) > adev->wb.wb[index] = 0; > wb_addr = adev->wb.gpu_addr + (index * 4); > > - ret = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); > + ret = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_VPE_RING_TEST); > if (ret) > goto err0; > - > - ib.ptr[0] = VPE_CMD_HEADER(VPE_CMD_OPCODE_FENCE, 0); > - ib.ptr[1] = lower_32_bits(wb_addr); > - ib.ptr[2] = upper_32_bits(wb_addr); > - ib.ptr[3] = test_pattern; > - ib.ptr[4] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); > - ib.ptr[5] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); > - ib.ptr[6] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); > - ib.ptr[7] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); > - ib.length_dw = 8; > - > - ret = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (ret) > + ib = &job->ibs[0]; > + > + ib->ptr[0] = VPE_CMD_HEADER(VPE_CMD_OPCODE_FENCE, 0); > + ib->ptr[1] = lower_32_bits(wb_addr); > + ib->ptr[2] = upper_32_bits(wb_addr); > + ib->ptr[3] = test_pattern; > + ib->ptr[4] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); > + ib->ptr[5] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); > + 
ib->ptr[6] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); > + ib->ptr[7] = VPE_CMD_HEADER(VPE_CMD_OPCODE_NOP, 0); > + ib->length_dw = 8; > + > + ret = amdgpu_job_submit_direct(job, ring, &f); > + if (ret) { > + amdgpu_job_free(job); > goto err1; > + } > > ret = dma_fence_wait_timeout(f, false, timeout); > if (ret <= 0) { > @@ -859,7 +865,6 @@ static int vpe_ring_test_ib(struct amdgpu_ring *ring, long timeout) > ret = (le32_to_cpu(adev->wb.wb[index]) == test_pattern) ? 0 : -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c > index 9e8715b4739da..e2ca96f5a7cfb 100644 > --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c > +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c > @@ -652,7 +652,8 @@ static int cik_sdma_ring_test_ring(struct amdgpu_ring *ring) > static int cik_sdma_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > u32 tmp = 0; > @@ -666,22 +667,27 @@ static int cik_sdma_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > tmp = 0xCAFEDEAD; > adev->wb.wb[index] = cpu_to_le32(tmp); > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, 256, > - AMDGPU_IB_POOL_DIRECT, &ib); > + > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) > goto err0; > + ib = &job->ibs[0]; > > - ib.ptr[0] = SDMA_PACKET(SDMA_OPCODE_WRITE, > + ib->ptr[0] = SDMA_PACKET(SDMA_OPCODE_WRITE, > SDMA_WRITE_SUB_OPCODE_LINEAR, 0); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = 1; > - ib.ptr[4] = 0xDEADBEEF; > - ib.length_dw = 5; > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = 1; > + ib->ptr[4] = 0xDEADBEEF; > + ib->length_dw = 5; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -697,7 +703,6 @@ static int cik_sdma_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c > index 41bbedb8e157e..496121bdc1de1 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c > @@ -4071,15 +4071,14 @@ static int gfx_v10_0_ring_test_ring(struct amdgpu_ring *ring) > static int gfx_v10_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned int index; > uint64_t gpu_addr; > uint32_t *cpu_ptr; > long r; > > - memset(&ib, 0, sizeof(ib)); > - > r = amdgpu_device_wb_get(adev, &index); > if (r) > return r; > @@ -4088,22 +4087,27 @@ static int gfx_v10_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); > cpu_ptr = &adev->wb.wb[index]; > > - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); > + r = 
amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); > if (r) { > drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); > goto err1; > } > + ib = &job->ibs[0]; > > - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > - ib.ptr[2] = lower_32_bits(gpu_addr); > - ib.ptr[3] = upper_32_bits(gpu_addr); > - ib.ptr[4] = 0xDEADBEEF; > - ib.length_dw = 5; > + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > + ib->ptr[2] = lower_32_bits(gpu_addr); > + ib->ptr[3] = upper_32_bits(gpu_addr); > + ib->ptr[4] = 0xDEADBEEF; > + ib->length_dw = 5; > > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err2; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -4118,7 +4122,6 @@ static int gfx_v10_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > else > r = -EINVAL; > err2: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err1: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c > index 3a4ca104b1612..5ad2516a60240 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c > @@ -604,7 +604,8 @@ static int gfx_v11_0_ring_test_ring(struct amdgpu_ring *ring) > static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > uint64_t gpu_addr; > @@ -616,8 +617,6 @@ static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > ring->funcs->type == AMDGPU_RING_TYPE_KIQ) > return 0; > > - memset(&ib, 0, sizeof(ib)); > - > r = amdgpu_device_wb_get(adev, &index); > if (r) > return r; > @@ -626,22 +625,27 @@ static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); > cpu_ptr = &adev->wb.wb[index]; > > - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); > if (r) { > drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); > goto err1; > } > + ib = &job->ibs[0]; > > - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > - ib.ptr[2] = lower_32_bits(gpu_addr); > - ib.ptr[3] = upper_32_bits(gpu_addr); > - ib.ptr[4] = 0xDEADBEEF; > - ib.length_dw = 5; > + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > + ib->ptr[2] = lower_32_bits(gpu_addr); > + ib->ptr[3] = upper_32_bits(gpu_addr); > + ib->ptr[4] = 0xDEADBEEF; > + ib->length_dw = 5; > > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err2; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -656,7 +660,6 @@ static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > else > r = -EINVAL; > err2: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err1: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c > index 
40660b05f9794..4d5c6bdd8cad7 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c > @@ -493,7 +493,8 @@ static int gfx_v12_0_ring_test_ring(struct amdgpu_ring *ring) > static int gfx_v12_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > uint64_t gpu_addr; > @@ -505,8 +506,6 @@ static int gfx_v12_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > ring->funcs->type == AMDGPU_RING_TYPE_KIQ) > return 0; > > - memset(&ib, 0, sizeof(ib)); > - > r = amdgpu_device_wb_get(adev, &index); > if (r) > return r; > @@ -515,22 +514,27 @@ static int gfx_v12_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); > cpu_ptr = &adev->wb.wb[index]; > > - r = amdgpu_ib_get(adev, NULL, 16, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 16, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); > if (r) { > drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); > goto err1; > } > + ib = &job->ibs[0]; > > - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > - ib.ptr[2] = lower_32_bits(gpu_addr); > - ib.ptr[3] = upper_32_bits(gpu_addr); > - ib.ptr[4] = 0xDEADBEEF; > - ib.length_dw = 5; > + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > + ib->ptr[2] = lower_32_bits(gpu_addr); > + ib->ptr[3] = upper_32_bits(gpu_addr); > + ib->ptr[4] = 0xDEADBEEF; > + ib->length_dw = 5; > > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err2; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -545,7 +549,6 @@ static int gfx_v12_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > else > r = -EINVAL; > err2: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err1: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c > index 86cc90a662965..7d02569cd4738 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_1.c > @@ -306,7 +306,8 @@ static int gfx_v12_1_ring_test_ring(struct amdgpu_ring *ring) > static int gfx_v12_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > uint64_t gpu_addr; > @@ -318,8 +319,6 @@ static int gfx_v12_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) > ring->funcs->type == AMDGPU_RING_TYPE_KIQ) > return 0; > > - memset(&ib, 0, sizeof(ib)); > - > r = amdgpu_device_wb_get(adev, &index); > if (r) > return r; > @@ -328,22 +327,27 @@ static int gfx_v12_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) > adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); > cpu_ptr = &adev->wb.wb[index]; > > - r = amdgpu_ib_get(adev, NULL, 16, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 16, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); > if (r) { > dev_err(adev->dev, "amdgpu: failed to get ib (%ld).\n", r); > goto err1; > } > + ib = &job->ibs[0]; > > - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 
3); > - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > - ib.ptr[2] = lower_32_bits(gpu_addr); > - ib.ptr[3] = upper_32_bits(gpu_addr); > - ib.ptr[4] = 0xDEADBEEF; > - ib.length_dw = 5; > + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > + ib->ptr[2] = lower_32_bits(gpu_addr); > + ib->ptr[3] = upper_32_bits(gpu_addr); > + ib->ptr[4] = 0xDEADBEEF; > + ib->length_dw = 5; > > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err2; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -358,7 +362,6 @@ static int gfx_v12_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) > else > r = -EINVAL; > err2: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err1: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c > index 73223d97a87f5..2f8aa99f17480 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c > @@ -1895,24 +1895,29 @@ static int gfx_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > struct dma_fence *f = NULL; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > uint32_t tmp = 0; > long r; > > WREG32(mmSCRATCH_REG0, 0xCAFEDEAD); > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); > if (r) > return r; > > - ib.ptr[0] = PACKET3(PACKET3_SET_CONFIG_REG, 1); > - ib.ptr[1] = mmSCRATCH_REG0 - PACKET3_SET_CONFIG_REG_START; > - ib.ptr[2] = 0xDEADBEEF; > - ib.length_dw = 3; > + ib = &job->ibs[0]; > + ib->ptr[0] = PACKET3(PACKET3_SET_CONFIG_REG, 1); > + ib->ptr[1] = mmSCRATCH_REG0 - PACKET3_SET_CONFIG_REG_START; > + ib->ptr[2] = 0xDEADBEEF; > + ib->length_dw = 3; > > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto error; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -1928,7 +1933,6 @@ static int gfx_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > error: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > return r; > } > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c > index 2b691452775bc..fa235b981c2e9 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c > @@ -2291,25 +2291,31 @@ static void gfx_v7_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags) > static int gfx_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > uint32_t tmp = 0; > long r; > > WREG32(mmSCRATCH_REG0, 0xCAFEDEAD); > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); > + > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); > if (r) > return r; > > - ib.ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1); > - ib.ptr[1] = mmSCRATCH_REG0 - PACKET3_SET_UCONFIG_REG_START; > - ib.ptr[2] = 0xDEADBEEF; > - ib.length_dw = 3; 
> + ib = &job->ibs[0]; > + ib->ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1); > + ib->ptr[1] = mmSCRATCH_REG0 - PACKET3_SET_UCONFIG_REG_START; > + ib->ptr[2] = 0xDEADBEEF; > + ib->length_dw = 3; > > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto error; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -2325,7 +2331,6 @@ static int gfx_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > error: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > return r; > } > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > index a6b4c8f41dc11..4736216cd0211 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c > @@ -868,9 +868,9 @@ static int gfx_v8_0_ring_test_ring(struct amdgpu_ring *ring) > static int gfx_v8_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > - > unsigned int index; > uint64_t gpu_addr; > uint32_t tmp; > @@ -882,22 +882,26 @@ static int gfx_v8_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > > gpu_addr = adev->wb.gpu_addr + (index * 4); > adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); > - memset(&ib, 0, sizeof(ib)); > > - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); > if (r) > goto err1; > > - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > - ib.ptr[2] = lower_32_bits(gpu_addr); > - ib.ptr[3] = upper_32_bits(gpu_addr); > - ib.ptr[4] = 0xDEADBEEF; > - ib.length_dw = 5; > + ib = &job->ibs[0]; > + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > + ib->ptr[2] = lower_32_bits(gpu_addr); > + ib->ptr[3] = upper_32_bits(gpu_addr); > + ib->ptr[4] = 0xDEADBEEF; > + ib->length_dw = 5; > > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err2; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -914,7 +918,6 @@ static int gfx_v8_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err2: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err1: > amdgpu_device_wb_free(adev, index); > @@ -1474,7 +1477,8 @@ static const u32 sec_ded_counter_registers[] = > static int gfx_v8_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) > { > struct amdgpu_ring *ring = &adev->gfx.compute_ring[0]; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > int r, i; > u32 tmp; > @@ -1505,106 +1509,108 @@ static int gfx_v8_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) > total_size += sizeof(sgpr_init_compute_shader); > > /* allocate an indirect buffer to put the commands in */ > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, total_size, > - AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, total_size, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_RUN_SHADER); > if (r) { > drm_err(adev_to_drm(adev), "failed to get ib (%d).\n", r); > return r; > } > + ib = &job->ibs[0]; > > /* load 
the compute shaders */ > for (i = 0; i < ARRAY_SIZE(vgpr_init_compute_shader); i++) > - ib.ptr[i + (vgpr_offset / 4)] = vgpr_init_compute_shader[i]; > + ib->ptr[i + (vgpr_offset / 4)] = vgpr_init_compute_shader[i]; > > for (i = 0; i < ARRAY_SIZE(sgpr_init_compute_shader); i++) > - ib.ptr[i + (sgpr_offset / 4)] = sgpr_init_compute_shader[i]; > + ib->ptr[i + (sgpr_offset / 4)] = sgpr_init_compute_shader[i]; > > /* init the ib length to 0 */ > - ib.length_dw = 0; > + ib->length_dw = 0; > > /* VGPR */ > /* write the register state for the compute dispatch */ > for (i = 0; i < ARRAY_SIZE(vgpr_init_regs); i += 2) { > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > - ib.ptr[ib.length_dw++] = vgpr_init_regs[i] - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = vgpr_init_regs[i + 1]; > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > + ib->ptr[ib->length_dw++] = vgpr_init_regs[i] - PACKET3_SET_SH_REG_START; > + ib->ptr[ib->length_dw++] = vgpr_init_regs[i + 1]; > } > /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ > - gpu_addr = (ib.gpu_addr + (u64)vgpr_offset) >> 8; > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > - ib.ptr[ib.length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); > - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); > + gpu_addr = (ib->gpu_addr + (u64)vgpr_offset) >> 8; > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > + ib->ptr[ib->length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; > + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); > + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); > > /* write dispatch packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > - ib.ptr[ib.length_dw++] = 8; /* x */ > - ib.ptr[ib.length_dw++] = 1; /* y */ > - ib.ptr[ib.length_dw++] = 1; /* z */ > - ib.ptr[ib.length_dw++] = > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > + ib->ptr[ib->length_dw++] = 8; /* x */ > + ib->ptr[ib->length_dw++] = 1; /* y */ > + ib->ptr[ib->length_dw++] = 1; /* z */ > + ib->ptr[ib->length_dw++] = > REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); > > /* write CS partial flush packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > > /* SGPR1 */ > /* write the register state for the compute dispatch */ > for (i = 0; i < ARRAY_SIZE(sgpr1_init_regs); i += 2) { > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > - ib.ptr[ib.length_dw++] = sgpr1_init_regs[i] - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = sgpr1_init_regs[i + 1]; > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > + ib->ptr[ib->length_dw++] = sgpr1_init_regs[i] - PACKET3_SET_SH_REG_START; > + ib->ptr[ib->length_dw++] = sgpr1_init_regs[i + 1]; > } > /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ > - gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8; > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > - ib.ptr[ib.length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); > - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); > + gpu_addr = (ib->gpu_addr + (u64)sgpr_offset) >> 8; > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > + ib->ptr[ib->length_dw++] = 
mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; > + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); > + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); > > /* write dispatch packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > - ib.ptr[ib.length_dw++] = 8; /* x */ > - ib.ptr[ib.length_dw++] = 1; /* y */ > - ib.ptr[ib.length_dw++] = 1; /* z */ > - ib.ptr[ib.length_dw++] = > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > + ib->ptr[ib->length_dw++] = 8; /* x */ > + ib->ptr[ib->length_dw++] = 1; /* y */ > + ib->ptr[ib->length_dw++] = 1; /* z */ > + ib->ptr[ib->length_dw++] = > REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); > > /* write CS partial flush packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > > /* SGPR2 */ > /* write the register state for the compute dispatch */ > for (i = 0; i < ARRAY_SIZE(sgpr2_init_regs); i += 2) { > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > - ib.ptr[ib.length_dw++] = sgpr2_init_regs[i] - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = sgpr2_init_regs[i + 1]; > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > + ib->ptr[ib->length_dw++] = sgpr2_init_regs[i] - PACKET3_SET_SH_REG_START; > + ib->ptr[ib->length_dw++] = sgpr2_init_regs[i + 1]; > } > /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ > - gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8; > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > - ib.ptr[ib.length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); > - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); > + gpu_addr = (ib->gpu_addr + (u64)sgpr_offset) >> 8; > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > + ib->ptr[ib->length_dw++] = mmCOMPUTE_PGM_LO - PACKET3_SET_SH_REG_START; > + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); > + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); > > /* write dispatch packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > - ib.ptr[ib.length_dw++] = 8; /* x */ > - ib.ptr[ib.length_dw++] = 1; /* y */ > - ib.ptr[ib.length_dw++] = 1; /* z */ > - ib.ptr[ib.length_dw++] = > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > + ib->ptr[ib->length_dw++] = 8; /* x */ > + ib->ptr[ib->length_dw++] = 1; /* y */ > + ib->ptr[ib->length_dw++] = 1; /* z */ > + ib->ptr[ib->length_dw++] = > REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); > > /* write CS partial flush packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > > /* shedule the ib on the ring */ > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > + r = amdgpu_job_submit_direct(job, ring, &f); > if (r) { > drm_err(adev_to_drm(adev), "ib submit failed (%d).\n", r); > + amdgpu_job_free(job); > goto fail; > } > > @@ -1629,7 +1635,6 @@ static int gfx_v8_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) > RREG32(sec_ded_counter_registers[i]); > > fail: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > > return r; > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c 
b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c > index 7e9d753f4a808..36f0300a21bfa 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c > @@ -1224,9 +1224,9 @@ static int gfx_v9_0_ring_test_ring(struct amdgpu_ring *ring) > static int gfx_v9_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > - > unsigned index; > uint64_t gpu_addr; > uint32_t tmp; > @@ -1238,22 +1238,26 @@ static int gfx_v9_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > > gpu_addr = adev->wb.gpu_addr + (index * 4); > adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); > - memset(&ib, 0, sizeof(ib)); > > - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); > if (r) > goto err1; > > - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > - ib.ptr[2] = lower_32_bits(gpu_addr); > - ib.ptr[3] = upper_32_bits(gpu_addr); > - ib.ptr[4] = 0xDEADBEEF; > - ib.length_dw = 5; > + ib = &job->ibs[0]; > + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > + ib->ptr[2] = lower_32_bits(gpu_addr); > + ib->ptr[3] = upper_32_bits(gpu_addr); > + ib->ptr[4] = 0xDEADBEEF; > + ib->length_dw = 5; > > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err2; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -1270,7 +1274,6 @@ static int gfx_v9_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err2: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err1: > amdgpu_device_wb_free(adev, index); > @@ -4624,7 +4627,8 @@ static int gfx_v9_0_do_edc_gds_workarounds(struct amdgpu_device *adev) > static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) > { > struct amdgpu_ring *ring = &adev->gfx.compute_ring[0]; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > int r, i; > unsigned total_size, vgpr_offset, sgpr_offset; > @@ -4670,9 +4674,9 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) > total_size += sizeof(sgpr_init_compute_shader); > > /* allocate an indirect buffer to put the commands in */ > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, total_size, > - AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, total_size, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_RUN_SHADER); > if (r) { > drm_err(adev_to_drm(adev), "failed to get ib (%d).\n", r); > return r; > @@ -4680,102 +4684,103 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) > > /* load the compute shaders */ > for (i = 0; i < vgpr_init_shader_size/sizeof(u32); i++) > - ib.ptr[i + (vgpr_offset / 4)] = vgpr_init_shader_ptr[i]; > + ib->ptr[i + (vgpr_offset / 4)] = vgpr_init_shader_ptr[i]; > > for (i = 0; i < ARRAY_SIZE(sgpr_init_compute_shader); i++) > - ib.ptr[i + (sgpr_offset / 4)] = sgpr_init_compute_shader[i]; > + ib->ptr[i + (sgpr_offset / 4)] = sgpr_init_compute_shader[i]; > > /* init the ib length to 0 */ > - ib.length_dw = 0; > + ib->length_dw = 0; > > /* VGPR */ > /* write the register state for the compute dispatch 
*/ > for (i = 0; i < gpr_reg_size; i++) { > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > - ib.ptr[ib.length_dw++] = SOC15_REG_ENTRY_OFFSET(vgpr_init_regs_ptr[i]) > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > + ib->ptr[ib->length_dw++] = SOC15_REG_ENTRY_OFFSET(vgpr_init_regs_ptr[i]) > - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = vgpr_init_regs_ptr[i].reg_value; > + ib->ptr[ib->length_dw++] = vgpr_init_regs_ptr[i].reg_value; > } > /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ > - gpu_addr = (ib.gpu_addr + (u64)vgpr_offset) >> 8; > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > - ib.ptr[ib.length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) > + gpu_addr = (ib->gpu_addr + (u64)vgpr_offset) >> 8; > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > + ib->ptr[ib->length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) > - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); > - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); > + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); > + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); > > /* write dispatch packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > - ib.ptr[ib.length_dw++] = compute_dim_x * 2; /* x */ > - ib.ptr[ib.length_dw++] = 1; /* y */ > - ib.ptr[ib.length_dw++] = 1; /* z */ > - ib.ptr[ib.length_dw++] = > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > + ib->ptr[ib->length_dw++] = compute_dim_x * 2; /* x */ > + ib->ptr[ib->length_dw++] = 1; /* y */ > + ib->ptr[ib->length_dw++] = 1; /* z */ > + ib->ptr[ib->length_dw++] = > REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); > > /* write CS partial flush packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > > /* SGPR1 */ > /* write the register state for the compute dispatch */ > for (i = 0; i < gpr_reg_size; i++) { > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > - ib.ptr[ib.length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr1_init_regs[i]) > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > + ib->ptr[ib->length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr1_init_regs[i]) > - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = sgpr1_init_regs[i].reg_value; > + ib->ptr[ib->length_dw++] = sgpr1_init_regs[i].reg_value; > } > /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ > - gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8; > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > - ib.ptr[ib.length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) > + gpu_addr = (ib->gpu_addr + (u64)sgpr_offset) >> 8; > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > + ib->ptr[ib->length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) > - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); > - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); > + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); > + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); > > /* write dispatch packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > - ib.ptr[ib.length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */ > - ib.ptr[ib.length_dw++] = 1; /* y */ > - ib.ptr[ib.length_dw++] = 1; /* z */ > - 
ib.ptr[ib.length_dw++] = > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > + ib->ptr[ib->length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */ > + ib->ptr[ib->length_dw++] = 1; /* y */ > + ib->ptr[ib->length_dw++] = 1; /* z */ > + ib->ptr[ib->length_dw++] = > REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); > > /* write CS partial flush packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > > /* SGPR2 */ > /* write the register state for the compute dispatch */ > for (i = 0; i < gpr_reg_size; i++) { > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > - ib.ptr[ib.length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr2_init_regs[i]) > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 1); > + ib->ptr[ib->length_dw++] = SOC15_REG_ENTRY_OFFSET(sgpr2_init_regs[i]) > - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = sgpr2_init_regs[i].reg_value; > + ib->ptr[ib->length_dw++] = sgpr2_init_regs[i].reg_value; > } > /* write the shader start address: mmCOMPUTE_PGM_LO, mmCOMPUTE_PGM_HI */ > - gpu_addr = (ib.gpu_addr + (u64)sgpr_offset) >> 8; > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > - ib.ptr[ib.length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) > + gpu_addr = (ib->gpu_addr + (u64)sgpr_offset) >> 8; > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_SET_SH_REG, 2); > + ib->ptr[ib->length_dw++] = SOC15_REG_OFFSET(GC, 0, mmCOMPUTE_PGM_LO) > - PACKET3_SET_SH_REG_START; > - ib.ptr[ib.length_dw++] = lower_32_bits(gpu_addr); > - ib.ptr[ib.length_dw++] = upper_32_bits(gpu_addr); > + ib->ptr[ib->length_dw++] = lower_32_bits(gpu_addr); > + ib->ptr[ib->length_dw++] = upper_32_bits(gpu_addr); > > /* write dispatch packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > - ib.ptr[ib.length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */ > - ib.ptr[ib.length_dw++] = 1; /* y */ > - ib.ptr[ib.length_dw++] = 1; /* z */ > - ib.ptr[ib.length_dw++] = > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_DISPATCH_DIRECT, 3); > + ib->ptr[ib->length_dw++] = compute_dim_x / 2 * sgpr_work_group_size; /* x */ > + ib->ptr[ib->length_dw++] = 1; /* y */ > + ib->ptr[ib->length_dw++] = 1; /* z */ > + ib->ptr[ib->length_dw++] = > REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); > > /* write CS partial flush packet */ > - ib.ptr[ib.length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > - ib.ptr[ib.length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > + ib->ptr[ib->length_dw++] = PACKET3(PACKET3_EVENT_WRITE, 0); > + ib->ptr[ib->length_dw++] = EVENT_TYPE(7) | EVENT_INDEX(4); > > /* shedule the ib on the ring */ > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > + r = amdgpu_job_submit_direct(job, ring, &f); > if (r) { > drm_err(adev_to_drm(adev), "ib schedule failed (%d).\n", r); > + amdgpu_job_free(job); > goto fail; > } > > @@ -4787,7 +4792,6 @@ static int gfx_v9_0_do_edc_gpr_workarounds(struct amdgpu_device *adev) > } > > fail: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > > return r; > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c > index 8058ea91ecafd..424b05b84ea74 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2.c > @@ -345,12 +345,13 @@ const struct soc15_reg_entry sgpr64_init_regs_aldebaran[] = { > 
> static int gfx_v9_4_2_run_shader(struct amdgpu_device *adev, > struct amdgpu_ring *ring, > - struct amdgpu_ib *ib, > const u32 *shader_ptr, u32 shader_size, > const struct soc15_reg_entry *init_regs, u32 regs_size, > u32 compute_dim_x, u64 wb_gpu_addr, u32 pattern, > struct dma_fence **fence_ptr) > { > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > int r, i; > uint32_t total_size, shader_offset; > u64 gpu_addr; > @@ -360,10 +361,9 @@ static int gfx_v9_4_2_run_shader(struct amdgpu_device *adev, > shader_offset = total_size; > total_size += ALIGN(shader_size, 256); > > - /* allocate an indirect buffer to put the commands in */ > - memset(ib, 0, sizeof(*ib)); > - r = amdgpu_ib_get(adev, NULL, total_size, > - AMDGPU_IB_POOL_DIRECT, ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, total_size, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_RUN_SHADER); > if (r) { > dev_err(adev->dev, "failed to get ib (%d).\n", r); > return r; > @@ -408,11 +408,11 @@ static int gfx_v9_4_2_run_shader(struct amdgpu_device *adev, > ib->ptr[ib->length_dw++] = > REG_SET_FIELD(0, COMPUTE_DISPATCH_INITIATOR, COMPUTE_SHADER_EN, 1); > > - /* shedule the ib on the ring */ > - r = amdgpu_ib_schedule(ring, 1, ib, NULL, fence_ptr); > + /* schedule the ib on the ring */ > + r = amdgpu_job_submit_direct(job, ring, fence_ptr); > if (r) { > dev_err(adev->dev, "ib submit failed (%d).\n", r); > - amdgpu_ib_free(ib, NULL); > + amdgpu_job_free(job); > } > return r; > } > @@ -493,7 +493,6 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) > int wb_size = adev->gfx.config.max_shader_engines * > CU_ID_MAX * SIMD_ID_MAX * WAVE_ID_MAX; > struct amdgpu_ib wb_ib; > - struct amdgpu_ib disp_ibs[3]; > struct dma_fence *fences[3]; > u32 pattern[3] = { 0x1, 0x5, 0xa }; > > @@ -514,7 +513,6 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) > > r = gfx_v9_4_2_run_shader(adev, > &adev->gfx.compute_ring[0], > - &disp_ibs[0], > sgpr112_init_compute_shader_aldebaran, > sizeof(sgpr112_init_compute_shader_aldebaran), > sgpr112_init_regs_aldebaran, > @@ -539,7 +537,6 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) > > r = gfx_v9_4_2_run_shader(adev, > &adev->gfx.compute_ring[1], > - &disp_ibs[1], > sgpr96_init_compute_shader_aldebaran, > sizeof(sgpr96_init_compute_shader_aldebaran), > sgpr96_init_regs_aldebaran, > @@ -579,7 +576,6 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) > memset(wb_ib.ptr, 0, (1 + wb_size) * sizeof(uint32_t)); > r = gfx_v9_4_2_run_shader(adev, > &adev->gfx.compute_ring[0], > - &disp_ibs[2], > sgpr64_init_compute_shader_aldebaran, > sizeof(sgpr64_init_compute_shader_aldebaran), > sgpr64_init_regs_aldebaran, > @@ -611,13 +607,10 @@ static int gfx_v9_4_2_do_sgprs_init(struct amdgpu_device *adev) > } > > disp2_failed: > - amdgpu_ib_free(&disp_ibs[2], NULL); > dma_fence_put(fences[2]); > disp1_failed: > - amdgpu_ib_free(&disp_ibs[1], NULL); > dma_fence_put(fences[1]); > disp0_failed: > - amdgpu_ib_free(&disp_ibs[0], NULL); > dma_fence_put(fences[0]); > pro_end: > amdgpu_ib_free(&wb_ib, NULL); > @@ -637,7 +630,6 @@ static int gfx_v9_4_2_do_vgprs_init(struct amdgpu_device *adev) > int wb_size = adev->gfx.config.max_shader_engines * > CU_ID_MAX * SIMD_ID_MAX * WAVE_ID_MAX; > struct amdgpu_ib wb_ib; > - struct amdgpu_ib disp_ib; > struct dma_fence *fence; > u32 pattern = 0xa; > > @@ -657,7 +649,6 @@ static int gfx_v9_4_2_do_vgprs_init(struct amdgpu_device *adev) > > r = gfx_v9_4_2_run_shader(adev, > &adev->gfx.compute_ring[0], > - 
&disp_ib, > vgpr_init_compute_shader_aldebaran, > sizeof(vgpr_init_compute_shader_aldebaran), > vgpr_init_regs_aldebaran, > @@ -687,7 +678,6 @@ static int gfx_v9_4_2_do_vgprs_init(struct amdgpu_device *adev) > } > > disp_failed: > - amdgpu_ib_free(&disp_ib, NULL); > dma_fence_put(fence); > pro_end: > amdgpu_ib_free(&wb_ib, NULL); > diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c > index ad4d442e7345e..d78b2c2ae13a3 100644 > --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c > +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c > @@ -451,9 +451,9 @@ static int gfx_v9_4_3_ring_test_ring(struct amdgpu_ring *ring) > static int gfx_v9_4_3_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > - > unsigned index; > uint64_t gpu_addr; > uint32_t tmp; > @@ -465,22 +465,26 @@ static int gfx_v9_4_3_ring_test_ib(struct amdgpu_ring *ring, long timeout) > > gpu_addr = adev->wb.gpu_addr + (index * 4); > adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD); > - memset(&ib, 0, sizeof(ib)); > > - r = amdgpu_ib_get(adev, NULL, 20, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 20, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_GFX_RING_TEST); > if (r) > goto err1; > > - ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > - ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > - ib.ptr[2] = lower_32_bits(gpu_addr); > - ib.ptr[3] = upper_32_bits(gpu_addr); > - ib.ptr[4] = 0xDEADBEEF; > - ib.length_dw = 5; > + ib = &job->ibs[0]; > + ib->ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3); > + ib->ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM; > + ib->ptr[2] = lower_32_bits(gpu_addr); > + ib->ptr[3] = upper_32_bits(gpu_addr); > + ib->ptr[4] = 0xDEADBEEF; > + ib->length_dw = 5; > > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err2; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -497,7 +501,6 @@ static int gfx_v9_4_3_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err2: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err1: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c > index 92ce580647cdc..46263d50cc9ef 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c > @@ -584,7 +584,8 @@ static int sdma_v2_4_ring_test_ring(struct amdgpu_ring *ring) > static int sdma_v2_4_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > u32 tmp = 0; > @@ -598,26 +599,30 @@ static int sdma_v2_4_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > tmp = 0xCAFEDEAD; > adev->wb.wb[index] = cpu_to_le32(tmp); > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, 256, > - AMDGPU_IB_POOL_DIRECT, &ib); > + > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) > goto err0; > > - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > + ib = &job->ibs[0]; > + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > 
SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(1); > - ib.ptr[4] = 0xDEADBEEF; > - ib.ptr[5] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[6] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[7] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); > - ib.length_dw = 8; > - > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(1); > + ib->ptr[4] = 0xDEADBEEF; > + ib->ptr[5] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[6] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[7] = SDMA_PKT_HEADER_OP(SDMA_OP_NOP); > + ib->length_dw = 8; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -633,7 +638,6 @@ static int sdma_v2_4_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c > index 1c076bd1cf73e..f9f05768072ad 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c > @@ -858,7 +858,8 @@ static int sdma_v3_0_ring_test_ring(struct amdgpu_ring *ring) > static int sdma_v3_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > u32 tmp = 0; > @@ -872,26 +873,30 @@ static int sdma_v3_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > tmp = 0xCAFEDEAD; > adev->wb.wb[index] = cpu_to_le32(tmp); > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, 256, > - AMDGPU_IB_POOL_DIRECT, &ib); > + > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) > goto err0; > > - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > + ib = &job->ibs[0]; > + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(1); > - ib.ptr[4] = 0xDEADBEEF; > - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.length_dw = 8; > - > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(1); > + ib->ptr[4] = 0xDEADBEEF; > + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->length_dw = 8; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -906,7 +911,6 @@ static int sdma_v3_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > else > r = -EINVAL; > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > 
amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c > index f38004e6064e5..56d2832ccba2d 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c > @@ -1516,7 +1516,8 @@ static int sdma_v4_0_ring_test_ring(struct amdgpu_ring *ring) > static int sdma_v4_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > long r; > @@ -1530,26 +1531,30 @@ static int sdma_v4_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > tmp = 0xCAFEDEAD; > adev->wb.wb[index] = cpu_to_le32(tmp); > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, 256, > - AMDGPU_IB_POOL_DIRECT, &ib); > + > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) > goto err0; > > - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > + ib = &job->ibs[0]; > + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > - ib.ptr[4] = 0xDEADBEEF; > - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.length_dw = 8; > - > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > + ib->ptr[4] = 0xDEADBEEF; > + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->length_dw = 8; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -1565,7 +1570,6 @@ static int sdma_v4_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c > index a1443990d5c60..dd8d6a572710d 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c > @@ -1112,7 +1112,8 @@ static int sdma_v4_4_2_ring_test_ring(struct amdgpu_ring *ring) > static int sdma_v4_4_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > long r; > @@ -1126,26 +1127,30 @@ static int sdma_v4_4_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > tmp = 0xCAFEDEAD; > adev->wb.wb[index] = cpu_to_le32(tmp); > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, 256, > - AMDGPU_IB_POOL_DIRECT, &ib); > + > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) > goto err0; > > - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | 
> + ib = &job->ibs[0]; > + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > - ib.ptr[4] = 0xDEADBEEF; > - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.length_dw = 8; > - > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > + ib->ptr[4] = 0xDEADBEEF; > + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->length_dw = 8; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -1161,7 +1166,6 @@ static int sdma_v4_4_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c > index 7811cbb1f7ba3..786f1776fa30d 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c > @@ -1074,7 +1074,8 @@ static int sdma_v5_0_ring_test_ring(struct amdgpu_ring *ring) > static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > long r; > @@ -1082,7 +1083,6 @@ static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > u64 gpu_addr; > > tmp = 0xCAFEDEAD; > - memset(&ib, 0, sizeof(ib)); > > r = amdgpu_device_wb_get(adev, &index); > if (r) { > @@ -1093,27 +1093,31 @@ static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > adev->wb.wb[index] = cpu_to_le32(tmp); > > - r = amdgpu_ib_get(adev, NULL, 256, > - AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) { > drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); > goto err0; > } > > - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > + ib = &job->ibs[0]; > + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > - ib.ptr[4] = 0xDEADBEEF; > - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.length_dw = 8; > - > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > + ib->ptr[4] = 0xDEADBEEF; > + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->length_dw = 8; > + > + r = 
amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -1133,7 +1137,6 @@ static int sdma_v5_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c > index dbe5b8f109f6a..49005b96aa3f2 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c > @@ -974,7 +974,8 @@ static int sdma_v5_2_ring_test_ring(struct amdgpu_ring *ring) > static int sdma_v5_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > long r; > @@ -982,7 +983,6 @@ static int sdma_v5_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) > u64 gpu_addr; > > tmp = 0xCAFEDEAD; > - memset(&ib, 0, sizeof(ib)); > > r = amdgpu_device_wb_get(adev, &index); > if (r) { > @@ -993,26 +993,31 @@ static int sdma_v5_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > adev->wb.wb[index] = cpu_to_le32(tmp); > > - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) { > drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); > goto err0; > } > > - ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > + ib = &job->ibs[0]; > + ib->ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) | > SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > - ib.ptr[4] = 0xDEADBEEF; > - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.length_dw = 8; > - > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > + ib->ptr[4] = 0xDEADBEEF; > + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->length_dw = 8; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -1032,7 +1037,6 @@ static int sdma_v5_2_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c > index eec659194718d..210ea6ba6212f 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c > @@ -981,7 +981,8 @@ static int sdma_v6_0_ring_test_ring(struct amdgpu_ring *ring) > static int sdma_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = 
NULL; > unsigned index; > long r; > @@ -989,7 +990,6 @@ static int sdma_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > u64 gpu_addr; > > tmp = 0xCAFEDEAD; > - memset(&ib, 0, sizeof(ib)); > > r = amdgpu_device_wb_get(adev, &index); > if (r) { > @@ -1000,26 +1000,31 @@ static int sdma_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > adev->wb.wb[index] = cpu_to_le32(tmp); > > - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) { > drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); > goto err0; > } > > - ib.ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | > + ib = &job->ibs[0]; > + ib->ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | > SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > - ib.ptr[4] = 0xDEADBEEF; > - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.length_dw = 8; > - > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > + ib->ptr[4] = 0xDEADBEEF; > + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->length_dw = 8; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -1039,7 +1044,6 @@ static int sdma_v6_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c > index 8d16ef257bcb9..3b4417d19212e 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c > @@ -997,7 +997,8 @@ static int sdma_v7_0_ring_test_ring(struct amdgpu_ring *ring) > static int sdma_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > long r; > @@ -1005,7 +1006,6 @@ static int sdma_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > u64 gpu_addr; > > tmp = 0xCAFEDEAD; > - memset(&ib, 0, sizeof(ib)); > > r = amdgpu_device_wb_get(adev, &index); > if (r) { > @@ -1016,26 +1016,31 @@ static int sdma_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > adev->wb.wb[index] = cpu_to_le32(tmp); > > - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) { > drm_err(adev_to_drm(adev), "failed to get ib (%ld).\n", r); > goto err0; > } > > - ib.ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | > + ib = &job->ibs[0]; > + ib->ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) 
| > SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > - ib.ptr[4] = 0xDEADBEEF; > - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.length_dw = 8; > - > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > + ib->ptr[4] = 0xDEADBEEF; > + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->length_dw = 8; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -1055,7 +1060,6 @@ static int sdma_v7_0_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c > index 5bc45c3e00d18..d71a546bdde61 100644 > --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c > +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c > @@ -987,7 +987,8 @@ static int sdma_v7_1_ring_test_ring(struct amdgpu_ring *ring) > static int sdma_v7_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > long r; > @@ -995,7 +996,6 @@ static int sdma_v7_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) > u64 gpu_addr; > > tmp = 0xCAFEDEAD; > - memset(&ib, 0, sizeof(ib)); > > r = amdgpu_device_wb_get(adev, &index); > if (r) { > @@ -1006,26 +1006,31 @@ static int sdma_v7_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > adev->wb.wb[index] = cpu_to_le32(tmp); > > - r = amdgpu_ib_get(adev, NULL, 256, AMDGPU_IB_POOL_DIRECT, &ib); > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) { > DRM_ERROR("amdgpu: failed to get ib (%ld).\n", r); > goto err0; > } > > - ib.ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | > + ib = &job->ibs[0]; > + ib->ptr[0] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_WRITE) | > SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr); > - ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > - ib.ptr[4] = 0xDEADBEEF; > - ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > - ib.length_dw = 8; > - > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr); > + ib->ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0); > + ib->ptr[4] = 0xDEADBEEF; > + ib->ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP); > + ib->length_dw = 8; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + 
amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -1045,7 +1050,6 @@ static int sdma_v7_1_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); > diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c b/drivers/gpu/drm/amd/amdgpu/si_dma.c > index 74fcaa340d9b1..b67bd343f795f 100644 > --- a/drivers/gpu/drm/amd/amdgpu/si_dma.c > +++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c > @@ -259,7 +259,8 @@ static int si_dma_ring_test_ring(struct amdgpu_ring *ring) > static int si_dma_ring_test_ib(struct amdgpu_ring *ring, long timeout) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib ib; > + struct amdgpu_job *job; > + struct amdgpu_ib *ib; > struct dma_fence *f = NULL; > unsigned index; > u32 tmp = 0; > @@ -273,20 +274,25 @@ static int si_dma_ring_test_ib(struct amdgpu_ring *ring, long timeout) > gpu_addr = adev->wb.gpu_addr + (index * 4); > tmp = 0xCAFEDEAD; > adev->wb.wb[index] = cpu_to_le32(tmp); > - memset(&ib, 0, sizeof(ib)); > - r = amdgpu_ib_get(adev, NULL, 256, > - AMDGPU_IB_POOL_DIRECT, &ib); > + > + r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256, > + AMDGPU_IB_POOL_DIRECT, &job, > + AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST); > if (r) > goto err0; > > - ib.ptr[0] = DMA_PACKET(DMA_PACKET_WRITE, 0, 0, 0, 1); > - ib.ptr[1] = lower_32_bits(gpu_addr); > - ib.ptr[2] = upper_32_bits(gpu_addr) & 0xff; > - ib.ptr[3] = 0xDEADBEEF; > - ib.length_dw = 4; > - r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f); > - if (r) > + ib = &job->ibs[0]; > + ib->ptr[0] = DMA_PACKET(DMA_PACKET_WRITE, 0, 0, 0, 1); > + ib->ptr[1] = lower_32_bits(gpu_addr); > + ib->ptr[2] = upper_32_bits(gpu_addr) & 0xff; > + ib->ptr[3] = 0xDEADBEEF; > + ib->length_dw = 4; > + > + r = amdgpu_job_submit_direct(job, ring, &f); > + if (r) { > + amdgpu_job_free(job); > goto err1; > + } > > r = dma_fence_wait_timeout(f, false, timeout); > if (r == 0) { > @@ -302,7 +308,6 @@ static int si_dma_ring_test_ib(struct amdgpu_ring *ring, long timeout) > r = -EINVAL; > > err1: > - amdgpu_ib_free(&ib, NULL); > dma_fence_put(f); > err0: > amdgpu_device_wb_free(adev, index); ^ permalink raw reply [flat|nested] 20+ messages in thread
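All of the per-IP hunks above follow the same conversion, so here is the flow in one place. This is a condensed sketch rather than a copy of any single function: the helper signatures and the AMDGPU_KERNEL_JOB_ID_* constant are the ones used in the hunks above, the function name is invented, and the writeback setup/checking is elided so the ownership rules stay visible.

	static int ring_test_ib_sketch(struct amdgpu_ring *ring, long timeout)
	{
		struct amdgpu_job *job;
		struct amdgpu_ib *ib;
		struct dma_fence *f = NULL;
		long r;

		/* the IB is now embedded in a job instead of living on the stack */
		r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, 256,
					     AMDGPU_IB_POOL_DIRECT, &job,
					     AMDGPU_KERNEL_JOB_ID_SDMA_RING_TEST);
		if (r)
			return r;

		ib = &job->ibs[0];
		ib->length_dw = 0;
		/* fill ib->ptr[] with the engine's test packets and bump length_dw */

		r = amdgpu_job_submit_direct(job, ring, &f);
		if (r) {
			/* the job never reached the ring, so free it explicitly */
			amdgpu_job_free(job);
			return r;
		}

		r = dma_fence_wait_timeout(f, false, timeout);
		if (r == 0)
			r = -ETIMEDOUT;	/* fence never signaled */
		else if (r > 0)
			r = 0;		/* the writeback value is normally checked here */

		/* no amdgpu_ib_free(): the IB is cleaned up with the job/fence */
		dma_fence_put(f);
		return r;
	}

The ownership rule is the one visible in every error path above: amdgpu_job_free() is called only when amdgpu_job_submit_direct() fails; on success the caller just drops its fence reference with dma_fence_put() and never frees the job itself.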
* [PATCH 05/10] drm/amdgpu: require a job to schedule an IB 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher ` (3 preceding siblings ...) 2026-01-16 16:20 ` [PATCH 04/10] drm/amdgpu: switch all IPs to using job for IBs Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-19 12:41 ` Christian König 2026-01-16 16:20 ` [PATCH 06/10] drm/amdgpu: don't call drm_sched_stop/start() in asic reset Alex Deucher ` (4 subsequent siblings) 9 siblings, 1 reply; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher Remove the old direct submit path. This simplifies the code. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 106 ++++++++------------- drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 3 +- 4 files changed, 45 insertions(+), 71 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c index 877d0df50376a..399a59bf907dd 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c @@ -686,7 +686,7 @@ int amdgpu_amdkfd_submit_ib(struct amdgpu_device *adev, job->vmid = vmid; job->num_ibs = 1; - ret = amdgpu_ib_schedule(ring, 1, ib, job, &f); + ret = amdgpu_ib_schedule(ring, job, &f); if (ret) { drm_err(adev_to_drm(adev), "failed to schedule IB.\n"); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c index 136e50de712a0..fb58637ed1507 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c @@ -103,8 +103,6 @@ void amdgpu_ib_free(struct amdgpu_ib *ib, struct dma_fence *f) * amdgpu_ib_schedule - schedule an IB (Indirect Buffer) on the ring * * @ring: ring index the IB is associated with - * @num_ibs: number of IBs to schedule - * @ibs: IB objects to schedule * @job: job to schedule * @f: fence created during this submission * @@ -121,12 +119,11 @@ void amdgpu_ib_free(struct amdgpu_ib *ib, struct dma_fence *f) * a CONST_IB), it will be put on the ring prior to the DE IB. Prior * to SI there was just a DE IB. */ -int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, - struct amdgpu_ib *ibs, struct amdgpu_job *job, +int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job, struct dma_fence **f) { struct amdgpu_device *adev = ring->adev; - struct amdgpu_ib *ib = &ibs[0]; + struct amdgpu_ib *ib; struct dma_fence *tmp = NULL; struct amdgpu_fence *af; bool need_ctx_switch; @@ -142,64 +139,51 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, unsigned int i; int r = 0; - if (num_ibs == 0) + if (!job) + return -EINVAL; + if (job->num_ibs == 0) return -EINVAL; - /* ring tests don't use a job */ - if (job) { - vm = job->vm; - fence_ctx = job->base.s_fence ? - job->base.s_fence->finished.context : 0; - shadow_va = job->shadow_va; - csa_va = job->csa_va; - gds_va = job->gds_va; - init_shadow = job->init_shadow; - af = job->hw_fence; - /* Save the context of the job for reset handling. - * The driver needs this so it can skip the ring - * contents for guilty contexts. 
- */ - af->context = fence_ctx; - /* the vm fence is also part of the job's context */ - job->hw_vm_fence->context = fence_ctx; - } else { - vm = NULL; - fence_ctx = 0; - shadow_va = 0; - csa_va = 0; - gds_va = 0; - init_shadow = false; - af = kzalloc(sizeof(*af), GFP_ATOMIC); - if (!af) - return -ENOMEM; - } + ib = &job->ibs[0]; + vm = job->vm; + fence_ctx = job->base.s_fence ? + job->base.s_fence->finished.context : 0; + shadow_va = job->shadow_va; + csa_va = job->csa_va; + gds_va = job->gds_va; + init_shadow = job->init_shadow; + af = job->hw_fence; + /* Save the context of the job for reset handling. + * The driver needs this so it can skip the ring + * contents for guilty contexts. + */ + af->context = fence_ctx; + /* the vm fence is also part of the job's context */ + job->hw_vm_fence->context = fence_ctx; if (!ring->sched.ready) { dev_err(adev->dev, "couldn't schedule ib on ring <%s>\n", ring->name); - r = -EINVAL; - goto free_fence; + return -EINVAL; } if (vm && !job->vmid) { dev_err(adev->dev, "VM IB without ID\n"); - r = -EINVAL; - goto free_fence; + return -EINVAL; } if ((ib->flags & AMDGPU_IB_FLAGS_SECURE) && (!ring->funcs->secure_submission_supported)) { dev_err(adev->dev, "secure submissions not supported on ring <%s>\n", ring->name); - r = -EINVAL; - goto free_fence; + return -EINVAL; } - alloc_size = ring->funcs->emit_frame_size + num_ibs * + alloc_size = ring->funcs->emit_frame_size + job->num_ibs * ring->funcs->emit_ib_size; r = amdgpu_ring_alloc(ring, alloc_size); if (r) { dev_err(adev->dev, "scheduling IB failed (%d).\n", r); - goto free_fence; + return r; } need_ctx_switch = ring->current_ctx != fence_ctx; @@ -225,12 +209,10 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, if (ring->funcs->insert_start) ring->funcs->insert_start(ring); - if (job) { - r = amdgpu_vm_flush(ring, job, need_pipe_sync); - if (r) { - amdgpu_ring_undo(ring); - return r; - } + r = amdgpu_vm_flush(ring, job, need_pipe_sync); + if (r) { + amdgpu_ring_undo(ring); + return r; } amdgpu_ring_ib_begin(ring); @@ -248,7 +230,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, if (need_ctx_switch) status |= AMDGPU_HAVE_CTX_SWITCH; - if (job && ring->funcs->emit_cntxcntl) { + if (ring->funcs->emit_cntxcntl) { status |= job->preamble_status; status |= job->preemption_status; amdgpu_ring_emit_cntxcntl(ring, status); @@ -257,15 +239,15 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, /* Setup initial TMZiness and send it off. 
*/ secure = false; - if (job && ring->funcs->emit_frame_cntl) { + if (ring->funcs->emit_frame_cntl) { secure = ib->flags & AMDGPU_IB_FLAGS_SECURE; amdgpu_ring_emit_frame_cntl(ring, true, secure); } - for (i = 0; i < num_ibs; ++i) { - ib = &ibs[i]; + for (i = 0; i < job->num_ibs; ++i) { + ib = &job->ibs[i]; - if (job && ring->funcs->emit_frame_cntl) { + if (ring->funcs->emit_frame_cntl) { if (secure != !!(ib->flags & AMDGPU_IB_FLAGS_SECURE)) { amdgpu_ring_emit_frame_cntl(ring, false, secure); secure = !secure; @@ -277,7 +259,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, status &= ~AMDGPU_HAVE_CTX_SWITCH; } - if (job && ring->funcs->emit_frame_cntl) + if (ring->funcs->emit_frame_cntl) amdgpu_ring_emit_frame_cntl(ring, false, secure); amdgpu_device_invalidate_hdp(adev, ring); @@ -286,7 +268,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, fence_flags |= AMDGPU_FENCE_FLAG_TC_WB_ONLY; /* wrap the last IB with fence */ - if (job && job->uf_addr) { + if (job->uf_addr) { amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence, fence_flags | AMDGPU_FENCE_FLAG_64BIT); } @@ -299,15 +281,14 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, r = amdgpu_fence_emit(ring, af, fence_flags); if (r) { dev_err(adev->dev, "failed to emit fence (%d)\n", r); - if (job && job->vmid) + if (job->vmid) amdgpu_vmid_reset(adev, ring->vm_hub, job->vmid); amdgpu_ring_undo(ring); - goto free_fence; + return r; } *f = &af->base; /* get a ref for the job */ - if (job) - dma_fence_get(*f); + dma_fence_get(*f); if (ring->funcs->insert_end) ring->funcs->insert_end(ring); @@ -315,7 +296,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, amdgpu_ring_patch_cond_exec(ring, cond_exec); ring->current_ctx = fence_ctx; - if (job && ring->funcs->emit_switch_buffer) + if (ring->funcs->emit_switch_buffer) amdgpu_ring_emit_switch_buffer(ring); if (ring->funcs->emit_wave_limit && @@ -334,11 +315,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, amdgpu_ring_commit(ring); return 0; - -free_fence: - if (!job) - kfree(af); - return r; } /** diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c index c7e4d79b9f61d..b09ac3dda17a4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c @@ -373,7 +373,7 @@ int amdgpu_job_submit_direct(struct amdgpu_job *job, struct amdgpu_ring *ring, int r; job->base.sched = &ring->sched; - r = amdgpu_ib_schedule(ring, job->num_ibs, job->ibs, job, fence); + r = amdgpu_ib_schedule(ring, job, fence); if (r) return r; @@ -443,8 +443,7 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job) dev_dbg(adev->dev, "Skip scheduling IBs in ring(%s)", ring->name); } else { - r = amdgpu_ib_schedule(ring, job->num_ibs, job->ibs, job, - &fence); + r = amdgpu_ib_schedule(ring, job, &fence); if (r) dev_err(adev->dev, "Error scheduling IBs (%d) in ring(%s)", r, diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h index cb0fb1a989d2f..86a788d476957 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h @@ -569,8 +569,7 @@ int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm, enum amdgpu_ib_pool_type pool, struct amdgpu_ib *ib); void amdgpu_ib_free(struct amdgpu_ib *ib, struct dma_fence *f); -int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs, - struct amdgpu_ib *ibs, struct 
amdgpu_job *job, +int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job, struct dma_fence **f); int amdgpu_ib_pool_init(struct amdgpu_device *adev); void amdgpu_ib_pool_fini(struct amdgpu_device *adev); -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH 05/10] drm/amdgpu: require a job to schedule an IB 2026-01-16 16:20 ` [PATCH 05/10] drm/amdgpu: require a job to schedule an IB Alex Deucher @ 2026-01-19 12:41 ` Christian König 0 siblings, 0 replies; 20+ messages in thread From: Christian König @ 2026-01-19 12:41 UTC (permalink / raw) To: Alex Deucher, amd-gfx On 1/16/26 17:20, Alex Deucher wrote: > Remove the old direct submit path. This simplifies > the code. > > Signed-off-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c | 2 +- > drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 106 ++++++++------------- > drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 5 +- > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 3 +- > 4 files changed, 45 insertions(+), 71 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c > index 877d0df50376a..399a59bf907dd 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c > @@ -686,7 +686,7 @@ int amdgpu_amdkfd_submit_ib(struct amdgpu_device *adev, > job->vmid = vmid; > job->num_ibs = 1; > > - ret = amdgpu_ib_schedule(ring, 1, ib, job, &f); > + ret = amdgpu_ib_schedule(ring, job, &f); > > if (ret) { > drm_err(adev_to_drm(adev), "failed to schedule IB.\n"); > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c > index 136e50de712a0..fb58637ed1507 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c > @@ -103,8 +103,6 @@ void amdgpu_ib_free(struct amdgpu_ib *ib, struct dma_fence *f) > * amdgpu_ib_schedule - schedule an IB (Indirect Buffer) on the ring > * > * @ring: ring index the IB is associated with > - * @num_ibs: number of IBs to schedule > - * @ibs: IB objects to schedule > * @job: job to schedule > * @f: fence created during this submission > * > @@ -121,12 +119,11 @@ void amdgpu_ib_free(struct amdgpu_ib *ib, struct dma_fence *f) > * a CONST_IB), it will be put on the ring prior to the DE IB. Prior > * to SI there was just a DE IB. > */ > -int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > - struct amdgpu_ib *ibs, struct amdgpu_job *job, > +int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job, > struct dma_fence **f) > { > struct amdgpu_device *adev = ring->adev; > - struct amdgpu_ib *ib = &ibs[0]; > + struct amdgpu_ib *ib; > struct dma_fence *tmp = NULL; > struct amdgpu_fence *af; > bool need_ctx_switch; > @@ -142,64 +139,51 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > unsigned int i; > int r = 0; > > - if (num_ibs == 0) > + if (!job) > + return -EINVAL; > + if (job->num_ibs == 0) > return -EINVAL; > > - /* ring tests don't use a job */ > - if (job) { > - vm = job->vm; > - fence_ctx = job->base.s_fence ? > - job->base.s_fence->finished.context : 0; > - shadow_va = job->shadow_va; > - csa_va = job->csa_va; > - gds_va = job->gds_va; > - init_shadow = job->init_shadow; > - af = job->hw_fence; > - /* Save the context of the job for reset handling. > - * The driver needs this so it can skip the ring > - * contents for guilty contexts. 
> - */ > - af->context = fence_ctx; > - /* the vm fence is also part of the job's context */ > - job->hw_vm_fence->context = fence_ctx; > - } else { > - vm = NULL; > - fence_ctx = 0; > - shadow_va = 0; > - csa_va = 0; > - gds_va = 0; > - init_shadow = false; > - af = kzalloc(sizeof(*af), GFP_ATOMIC); > - if (!af) > - return -ENOMEM; > - } > + ib = &job->ibs[0]; > + vm = job->vm; > + fence_ctx = job->base.s_fence ? > + job->base.s_fence->finished.context : 0; > + shadow_va = job->shadow_va; > + csa_va = job->csa_va; > + gds_va = job->gds_va; > + init_shadow = job->init_shadow; > + af = job->hw_fence; I think we could also drop those local variable now as well (maybe except the VM). They are only used once or twice anyway. Apart from that looks good to me, Christian. > + /* Save the context of the job for reset handling. > + * The driver needs this so it can skip the ring > + * contents for guilty contexts. > + */ > + af->context = fence_ctx; > + /* the vm fence is also part of the job's context */ > + job->hw_vm_fence->context = fence_ctx; > > if (!ring->sched.ready) { > dev_err(adev->dev, "couldn't schedule ib on ring <%s>\n", ring->name); > - r = -EINVAL; > - goto free_fence; > + return -EINVAL; > } > > if (vm && !job->vmid) { > dev_err(adev->dev, "VM IB without ID\n"); > - r = -EINVAL; > - goto free_fence; > + return -EINVAL; > } > > if ((ib->flags & AMDGPU_IB_FLAGS_SECURE) && > (!ring->funcs->secure_submission_supported)) { > dev_err(adev->dev, "secure submissions not supported on ring <%s>\n", ring->name); > - r = -EINVAL; > - goto free_fence; > + return -EINVAL; > } > > - alloc_size = ring->funcs->emit_frame_size + num_ibs * > + alloc_size = ring->funcs->emit_frame_size + job->num_ibs * > ring->funcs->emit_ib_size; > > r = amdgpu_ring_alloc(ring, alloc_size); > if (r) { > dev_err(adev->dev, "scheduling IB failed (%d).\n", r); > - goto free_fence; > + return r; > } > > need_ctx_switch = ring->current_ctx != fence_ctx; > @@ -225,12 +209,10 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > if (ring->funcs->insert_start) > ring->funcs->insert_start(ring); > > - if (job) { > - r = amdgpu_vm_flush(ring, job, need_pipe_sync); > - if (r) { > - amdgpu_ring_undo(ring); > - return r; > - } > + r = amdgpu_vm_flush(ring, job, need_pipe_sync); > + if (r) { > + amdgpu_ring_undo(ring); > + return r; > } > > amdgpu_ring_ib_begin(ring); > @@ -248,7 +230,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > if (need_ctx_switch) > status |= AMDGPU_HAVE_CTX_SWITCH; > > - if (job && ring->funcs->emit_cntxcntl) { > + if (ring->funcs->emit_cntxcntl) { > status |= job->preamble_status; > status |= job->preemption_status; > amdgpu_ring_emit_cntxcntl(ring, status); > @@ -257,15 +239,15 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > /* Setup initial TMZiness and send it off. 
> */ > secure = false; > - if (job && ring->funcs->emit_frame_cntl) { > + if (ring->funcs->emit_frame_cntl) { > secure = ib->flags & AMDGPU_IB_FLAGS_SECURE; > amdgpu_ring_emit_frame_cntl(ring, true, secure); > } > > - for (i = 0; i < num_ibs; ++i) { > - ib = &ibs[i]; > + for (i = 0; i < job->num_ibs; ++i) { > + ib = &job->ibs[i]; > > - if (job && ring->funcs->emit_frame_cntl) { > + if (ring->funcs->emit_frame_cntl) { > if (secure != !!(ib->flags & AMDGPU_IB_FLAGS_SECURE)) { > amdgpu_ring_emit_frame_cntl(ring, false, secure); > secure = !secure; > @@ -277,7 +259,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > status &= ~AMDGPU_HAVE_CTX_SWITCH; > } > > - if (job && ring->funcs->emit_frame_cntl) > + if (ring->funcs->emit_frame_cntl) > amdgpu_ring_emit_frame_cntl(ring, false, secure); > > amdgpu_device_invalidate_hdp(adev, ring); > @@ -286,7 +268,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > fence_flags |= AMDGPU_FENCE_FLAG_TC_WB_ONLY; > > /* wrap the last IB with fence */ > - if (job && job->uf_addr) { > + if (job->uf_addr) { > amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence, > fence_flags | AMDGPU_FENCE_FLAG_64BIT); > } > @@ -299,15 +281,14 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > r = amdgpu_fence_emit(ring, af, fence_flags); > if (r) { > dev_err(adev->dev, "failed to emit fence (%d)\n", r); > - if (job && job->vmid) > + if (job->vmid) > amdgpu_vmid_reset(adev, ring->vm_hub, job->vmid); > amdgpu_ring_undo(ring); > - goto free_fence; > + return r; > } > *f = &af->base; > /* get a ref for the job */ > - if (job) > - dma_fence_get(*f); > + dma_fence_get(*f); > > if (ring->funcs->insert_end) > ring->funcs->insert_end(ring); > @@ -315,7 +296,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > amdgpu_ring_patch_cond_exec(ring, cond_exec); > > ring->current_ctx = fence_ctx; > - if (job && ring->funcs->emit_switch_buffer) > + if (ring->funcs->emit_switch_buffer) > amdgpu_ring_emit_switch_buffer(ring); > > if (ring->funcs->emit_wave_limit && > @@ -334,11 +315,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs, > amdgpu_ring_commit(ring); > > return 0; > - > -free_fence: > - if (!job) > - kfree(af); > - return r; > } > > /** > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > index c7e4d79b9f61d..b09ac3dda17a4 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c > @@ -373,7 +373,7 @@ int amdgpu_job_submit_direct(struct amdgpu_job *job, struct amdgpu_ring *ring, > int r; > > job->base.sched = &ring->sched; > - r = amdgpu_ib_schedule(ring, job->num_ibs, job->ibs, job, fence); > + r = amdgpu_ib_schedule(ring, job, fence); > > if (r) > return r; > @@ -443,8 +443,7 @@ static struct dma_fence *amdgpu_job_run(struct drm_sched_job *sched_job) > dev_dbg(adev->dev, "Skip scheduling IBs in ring(%s)", > ring->name); > } else { > - r = amdgpu_ib_schedule(ring, job->num_ibs, job->ibs, job, > - &fence); > + r = amdgpu_ib_schedule(ring, job, &fence); > if (r) > dev_err(adev->dev, > "Error scheduling IBs (%d) in ring(%s)", r, > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > index cb0fb1a989d2f..86a788d476957 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > @@ -569,8 +569,7 @@ int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm, > enum 
amdgpu_ib_pool_type pool, > struct amdgpu_ib *ib); > void amdgpu_ib_free(struct amdgpu_ib *ib, struct dma_fence *f); > -int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs, > - struct amdgpu_ib *ibs, struct amdgpu_job *job, > +int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job, > struct dma_fence **f); > int amdgpu_ib_pool_init(struct amdgpu_device *adev); > void amdgpu_ib_pool_fini(struct amdgpu_device *adev); ^ permalink raw reply [flat|nested] 20+ messages in thread
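For illustration only (not part of the series): the locals Christian suggests dropping are mostly read a single time, so the cleanup amounts to reading the job fields at their one use site. A minimal sketch, assuming the existing emit_gfx_shadow call site in amdgpu_ib_schedule(); the macro name and argument order follow the upstream helper and may differ in detail:

	/* sketch: use job fields directly instead of caching them in locals */
	if (ring->funcs->emit_gfx_shadow)
		amdgpu_ring_emit_gfx_shadow(ring, job->shadow_va, job->csa_va,
					    job->gds_va, job->init_shadow,
					    job->vmid);

The vm pointer would likely stay as a local, as Christian notes, since it is tested and dereferenced in several places.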
* [PATCH 06/10] drm/amdgpu: don't call drm_sched_stop/start() in asic reset 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher ` (4 preceding siblings ...) 2026-01-16 16:20 ` [PATCH 05/10] drm/amdgpu: require a job to schedule an IB Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-19 12:42 ` Christian König 2026-01-16 16:20 ` [PATCH 07/10] drm/amdgpu: drop drm_sched_increase_karma() Alex Deucher ` (3 subsequent siblings) 9 siblings, 1 reply; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher We only want to stop the work queues, not mess with the pending list so just stop the work queues. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index 362ab2b344984..ed7f13752f462 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -6310,7 +6310,7 @@ static void amdgpu_device_halt_activities(struct amdgpu_device *adev, if (!amdgpu_ring_sched_ready(ring)) continue; - drm_sched_stop(&ring->sched, job ? &job->base : NULL); + drm_sched_wqueue_stop(&ring->sched); if (need_emergency_restart) amdgpu_job_stop_all_jobs_on_sched(&ring->sched); @@ -6394,7 +6394,7 @@ static int amdgpu_device_sched_resume(struct list_head *device_list, if (!amdgpu_ring_sched_ready(ring)) continue; - drm_sched_start(&ring->sched, 0); + drm_sched_wqueue_start(&ring->sched); } if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH 06/10] drm/amdgpu: don't call drm_sched_stop/start() in asic reset 2026-01-16 16:20 ` [PATCH 06/10] drm/amdgpu: don't call drm_sched_stop/start() in asic reset Alex Deucher @ 2026-01-19 12:42 ` Christian König 0 siblings, 0 replies; 20+ messages in thread From: Christian König @ 2026-01-19 12:42 UTC (permalink / raw) To: Alex Deucher, amd-gfx On 1/16/26 17:20, Alex Deucher wrote: > We only want to stop the work queues, not mess with the > pending list so just stop the work queues. > > Signed-off-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 ++-- > 1 file changed, 2 insertions(+), 2 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > index 362ab2b344984..ed7f13752f462 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > @@ -6310,7 +6310,7 @@ static void amdgpu_device_halt_activities(struct amdgpu_device *adev, > if (!amdgpu_ring_sched_ready(ring)) > continue; > > - drm_sched_stop(&ring->sched, job ? &job->base : NULL); > + drm_sched_wqueue_stop(&ring->sched); I'm pretty sure that this will lead to a memory leak if we either don't manually free the job or restore the code that inserts it back into the list. Regards, Christian. > > if (need_emergency_restart) > amdgpu_job_stop_all_jobs_on_sched(&ring->sched); > @@ -6394,7 +6394,7 @@ static int amdgpu_device_sched_resume(struct list_head *device_list, > if (!amdgpu_ring_sched_ready(ring)) > continue; > > - drm_sched_start(&ring->sched, 0); > + drm_sched_wqueue_start(&ring->sched); > } > > if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) ^ permalink raw reply [flat|nested] 20+ messages in thread
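For comparison, the per-queue reset path later in this series keeps pending work from being stranded by force-completing the fences before restarting the workqueue. A minimal sketch of that pairing, condensed from the amdgpu_sdma_reset_engine() hunk quoted under patch 08 (error handling omitted):

	drm_sched_wqueue_stop(&ring->sched);
	/* ... reset the queue ... */
	amdgpu_fence_driver_force_completion(ring, NULL);
	drm_sched_wqueue_start(&ring->sched);

In the adapter-reset path, whatever replaces drm_sched_stop() similarly needs something to signal or free the jobs left on the pending list, which is the leak Christian is pointing at.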
* [PATCH 07/10] drm/amdgpu: drop drm_sched_increase_karma() 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher ` (5 preceding siblings ...) 2026-01-16 16:20 ` [PATCH 06/10] drm/amdgpu: don't call drm_sched_stop/start() in asic reset Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-19 12:44 ` Christian König 2026-01-16 16:20 ` [PATCH 08/10] drm/amdgpu: plumb timedout fence through to force completion Alex Deucher ` (2 subsequent siblings) 9 siblings, 1 reply; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher It was leftover from when the driver supported drm sched resubmit. That was dropped long ago, so drop this as well. Leaving this in place also causes userspace to treat the context as innocent. Removing this fixes reset behavior. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index ed7f13752f462..70faf914b4f0b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -5817,9 +5817,6 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev, amdgpu_fence_driver_isr_toggle(adev, false); - if (job && job->vm) - drm_sched_increase_karma(&job->base); - r = amdgpu_reset_prepare_hwcontext(adev, reset_context); /* If reset handler not implemented, continue; otherwise return */ if (r == -EOPNOTSUPP) -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH 07/10] drm/amdgpu: drop drm_sched_increase_karma() 2026-01-16 16:20 ` [PATCH 07/10] drm/amdgpu: drop drm_sched_increase_karma() Alex Deucher @ 2026-01-19 12:44 ` Christian König 0 siblings, 0 replies; 20+ messages in thread From: Christian König @ 2026-01-19 12:44 UTC (permalink / raw) To: Alex Deucher, amd-gfx On 1/16/26 17:20, Alex Deucher wrote: > It was leftover from when the driver supported drm sched > resubmit. That was dropped long ago, so drop this as well. > Leaving this in place also causes userspace to treat > the context as innocent. Removing this fixes reset behavior. > > Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Looks like you didn't see my earlier reply. This won't work because we unfortunately still need this for the AMDGPU_CTX_QUERY2_FLAGS_GUILTY flag. We first need to fix that one before we can drop that here. Regards, Christian. > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 3 --- > 1 file changed, 3 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > index ed7f13752f462..70faf914b4f0b 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c > @@ -5817,9 +5817,6 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev, > > amdgpu_fence_driver_isr_toggle(adev, false); > > - if (job && job->vm) > - drm_sched_increase_karma(&job->base); > - > r = amdgpu_reset_prepare_hwcontext(adev, reset_context); > /* If reset handler not implemented, continue; otherwise return */ > if (r == -EOPNOTSUPP) -- 2.52.0 ^ permalink raw reply [flat|nested] 20+ messages in thread
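For context, AMDGPU_CTX_QUERY2_FLAGS_GUILTY is what userspace reads back to decide whether its own context caused the hang. A minimal sketch of the query through libdrm, assuming libdrm's amdgpu headers and an existing amdgpu_context_handle ctx; handle_device_lost() is a hypothetical callback and Mesa's real handling is more involved:

	#include <amdgpu.h>
	#include <amdgpu_drm.h>

	uint64_t flags = 0;

	/* sketch: ask the kernel whether this context was marked guilty */
	if (!amdgpu_cs_query_reset_state2(ctx, &flags) &&
	    (flags & AMDGPU_CTX_QUERY2_FLAGS_GUILTY))
		handle_device_lost(/*guilty=*/true);

Christian's point is that this flag is still driven by the karma mechanism today, so the karma bump needs a replacement before it can be removed.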
* [PATCH 08/10] drm/amdgpu: plumb timedout fence through to force completion 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher ` (6 preceding siblings ...) 2026-01-16 16:20 ` [PATCH 07/10] drm/amdgpu: drop drm_sched_increase_karma() Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-16 16:20 ` [PATCH 09/10] drm/amdgpu: simplify VCN reset helper Alex Deucher 2026-01-16 16:20 ` [PATCH 10/10] drm/amdgpu: rework ring reset backup and reemit Alex Deucher 9 siblings, 0 replies; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher, Christian König When we do a full adapter reset, if we know the timedout fence mark the fence with -ETIME rather than -ECANCELED so it gets properly handled by userspace. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 6 ++++- drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 28 +++++++++++++++++---- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 3 ++- drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c | 4 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 21 ++++++++++------ 7 files changed, 47 insertions(+), 19 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c index 1f3e52637326b..e36c8e3cfb0f0 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c @@ -1960,7 +1960,7 @@ static int amdgpu_debugfs_ib_preempt(void *data, u64 val) /* swap out the old fences */ amdgpu_ib_preempt_fences_swap(ring, fences); - amdgpu_fence_driver_force_completion(ring); + amdgpu_fence_driver_force_completion(ring, NULL); /* resubmit unfinished jobs */ amdgpu_ib_preempt_job_recovery(&ring->sched); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c index 70faf914b4f0b..2daf8cd55e74f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -5792,6 +5792,7 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev, { int i, r = 0; struct amdgpu_job *job = NULL; + struct dma_fence *fence = NULL; struct amdgpu_device *tmp_adev = reset_context->reset_req_dev; bool need_full_reset = test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags); @@ -5804,6 +5805,9 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev, amdgpu_fence_driver_isr_toggle(adev, true); + if (job) + fence = &job->hw_fence->base; + /* block all schedulers and reset given job's ring */ for (i = 0; i < AMDGPU_MAX_RINGS; ++i) { struct amdgpu_ring *ring = adev->rings[i]; @@ -5812,7 +5816,7 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev, continue; /* after all hw jobs are reset, hw fence is meaningless, so force_completion */ - amdgpu_fence_driver_force_completion(ring); + amdgpu_fence_driver_force_completion(ring, fence); } amdgpu_fence_driver_isr_toggle(adev, false); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index c7a2dff33d80b..d48f61076c06a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -568,7 +568,7 @@ void amdgpu_fence_driver_hw_fini(struct amdgpu_device *adev) r = -ENODEV; /* no need to trigger GPU reset as we are unloading */ if (r) - amdgpu_fence_driver_force_completion(ring); + 
amdgpu_fence_driver_force_completion(ring, NULL); if (!drm_dev_is_unplugged(adev_to_drm(adev)) && ring->fence_drv.irq_src && @@ -683,16 +683,34 @@ void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error) * amdgpu_fence_driver_force_completion - force signal latest fence of ring * * @ring: fence of the ring to signal + * @timedout_fence: fence of the timedout job * */ -void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring) +void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring, + struct dma_fence *timedout_fence) { - amdgpu_fence_driver_set_error(ring, -ECANCELED); + struct amdgpu_fence_driver *drv = &ring->fence_drv; + unsigned long flags; + + spin_lock_irqsave(&drv->lock, flags); + for (unsigned int i = 0; i <= drv->num_fences_mask; ++i) { + struct dma_fence *fence; + + fence = rcu_dereference_protected(drv->fences[i], + lockdep_is_held(&drv->lock)); + if (fence && !dma_fence_is_signaled_locked(fence)) { + if (fence == timedout_fence) + dma_fence_set_error(fence, -ETIME); + else + dma_fence_set_error(fence, -ECANCELED); + } + } + spin_unlock_irqrestore(&drv->lock, flags); + amdgpu_fence_write(ring, ring->fence_drv.sync_seq); amdgpu_fence_process(ring); } - /* * Kernel queue reset handling * @@ -753,7 +771,7 @@ void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af) if (reemitted) { /* if we've already reemitted once then just cancel everything */ - amdgpu_fence_driver_force_completion(af->ring); + amdgpu_fence_driver_force_completion(af->ring, &af->base); af->ring->ring_backup_entries_to_copy = 0; } } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h index 86a788d476957..ce095427611fb 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h @@ -160,7 +160,8 @@ struct amdgpu_fence { extern const struct drm_sched_backend_ops amdgpu_sched_ops; void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error); -void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring); +void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring, + struct dma_fence *timedout_fence); void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af); void amdgpu_fence_save_wptr(struct amdgpu_fence *af); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c index 8b8a04138711c..c270a40de5e5d 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sdma.c @@ -597,10 +597,10 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id, * to be submitted to the queues after the reset is complete. 
*/ if (!ret) { - amdgpu_fence_driver_force_completion(gfx_ring); + amdgpu_fence_driver_force_completion(gfx_ring, NULL); drm_sched_wqueue_start(&gfx_ring->sched); if (adev->sdma.has_page_queue) { - amdgpu_fence_driver_force_completion(page_ring); + amdgpu_fence_driver_force_completion(page_ring, NULL); drm_sched_wqueue_start(&page_ring->sched); } } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c index 9d5cca7da1d9e..3a3bc0d370fa6 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c @@ -512,7 +512,7 @@ int amdgpu_uvd_resume(struct amdgpu_device *adev) } memset_io(ptr, 0, size); /* to restore uvd fence seq */ - amdgpu_fence_driver_force_completion(&adev->uvd.inst[i].ring); + amdgpu_fence_driver_force_completion(&adev->uvd.inst[i].ring, NULL); } } return 0; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c index 75ae9b429420e..d22c8980fa42b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c @@ -1482,15 +1482,16 @@ int vcn_set_powergating_state(struct amdgpu_ip_block *ip_block, /** * amdgpu_vcn_reset_engine - Reset a specific VCN engine - * @adev: Pointer to the AMDGPU device - * @instance_id: VCN engine instance to reset + * @ring: Pointer to the VCN ring + * @timedout_fence: fence that timed out * * Returns: 0 on success, or a negative error code on failure. */ -static int amdgpu_vcn_reset_engine(struct amdgpu_device *adev, - uint32_t instance_id) +static int amdgpu_vcn_reset_engine(struct amdgpu_ring *ring, + struct amdgpu_fence *timedout_fence) { - struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[instance_id]; + struct amdgpu_device *adev = ring->adev; + struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me]; int r, i; mutex_lock(&vinst->engine_reset_mutex); @@ -1514,9 +1515,13 @@ static int amdgpu_vcn_reset_engine(struct amdgpu_device *adev, if (r) goto unlock; } - amdgpu_fence_driver_force_completion(&vinst->ring_dec); + amdgpu_fence_driver_force_completion(&vinst->ring_dec, + (&vinst->ring_dec == ring) ? + &timedout_fence->base : NULL); for (i = 0; i < vinst->num_enc_rings; i++) - amdgpu_fence_driver_force_completion(&vinst->ring_enc[i]); + amdgpu_fence_driver_force_completion(&vinst->ring_enc[i], + (&vinst->ring_enc[i] == ring) ? + &timedout_fence->base : NULL); /* Restart the scheduler's work queue for the dec and enc rings * if they were stopped by this function. This allows new tasks @@ -1552,7 +1557,7 @@ int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring, if (adev->vcn.inst[ring->me].using_unified_queue) return -EINVAL; - return amdgpu_vcn_reset_engine(adev, ring->me); + return amdgpu_vcn_reset_engine(ring, timedout_fence); } int amdgpu_vcn_reg_dump_init(struct amdgpu_device *adev, -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
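Once force-completion signals the leftover fences, kernel-side consumers can tell the timed-out submission apart from the collateral ones by the error carried on the fence. A small hedged sketch; the helper name is made up, and dma_fence_get_status() returns a negative error for fences that signalled with an error set:

	#include <linux/dma-fence.h>

	/* sketch: distinguish the -ETIME (timed out) fence from -ECANCELED ones */
	static bool fence_timed_out(struct dma_fence *fence)
	{
		return dma_fence_get_status(fence) == -ETIME;
	}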
* [PATCH 09/10] drm/amdgpu: simplify VCN reset helper 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher ` (7 preceding siblings ...) 2026-01-16 16:20 ` [PATCH 08/10] drm/amdgpu: plumb timedout fence through to force completion Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-16 16:20 ` [PATCH 10/10] drm/amdgpu: rework ring reset backup and reemit Alex Deucher 9 siblings, 0 replies; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher Remove the wrapper function. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 41 ++++++++----------------- 1 file changed, 13 insertions(+), 28 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c index d22c8980fa42b..4de5c8b9a4cc4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c @@ -1481,19 +1481,27 @@ int vcn_set_powergating_state(struct amdgpu_ip_block *ip_block, } /** - * amdgpu_vcn_reset_engine - Reset a specific VCN engine - * @ring: Pointer to the VCN ring - * @timedout_fence: fence that timed out + * amdgpu_vcn_ring_reset - Reset a VCN ring + * @ring: ring to reset + * @vmid: vmid of guilty job + * @timedout_fence: fence of timed out job * + * This helper is for VCN blocks without unified queues because + * resetting the engine resets all queues in that case. With + * unified queues we have one queue per engine. * Returns: 0 on success, or a negative error code on failure. */ -static int amdgpu_vcn_reset_engine(struct amdgpu_ring *ring, - struct amdgpu_fence *timedout_fence) +int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring, + unsigned int vmid, + struct amdgpu_fence *timedout_fence) { struct amdgpu_device *adev = ring->adev; struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[ring->me]; int r, i; + if (adev->vcn.inst[ring->me].using_unified_queue) + return -EINVAL; + mutex_lock(&vinst->engine_reset_mutex); /* Stop the scheduler's work queue for the dec and enc rings if they are running. * This ensures that no new tasks are submitted to the queues while @@ -1537,29 +1545,6 @@ static int amdgpu_vcn_reset_engine(struct amdgpu_ring *ring, return r; } -/** - * amdgpu_vcn_ring_reset - Reset a VCN ring - * @ring: ring to reset - * @vmid: vmid of guilty job - * @timedout_fence: fence of timed out job - * - * This helper is for VCN blocks without unified queues because - * resetting the engine resets all queues in that case. With - * unified queues we have one queue per engine. - * Returns: 0 on success, or a negative error code on failure. - */ -int amdgpu_vcn_ring_reset(struct amdgpu_ring *ring, - unsigned int vmid, - struct amdgpu_fence *timedout_fence) -{ - struct amdgpu_device *adev = ring->adev; - - if (adev->vcn.inst[ring->me].using_unified_queue) - return -EINVAL; - - return amdgpu_vcn_reset_engine(ring, timedout_fence); -} - int amdgpu_vcn_reg_dump_init(struct amdgpu_device *adev, const struct amdgpu_hwip_reg_entry *reg, u32 count) { -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
* [PATCH 10/10] drm/amdgpu: rework ring reset backup and reemit 2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher ` (8 preceding siblings ...) 2026-01-16 16:20 ` [PATCH 09/10] drm/amdgpu: simplify VCN reset helper Alex Deucher @ 2026-01-16 16:20 ` Alex Deucher 2026-01-19 13:19 ` Christian König 2026-01-20 2:41 ` Timur Kristóf 9 siblings, 2 replies; 20+ messages in thread From: Alex Deucher @ 2026-01-16 16:20 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher Store the start and end wptrs in the IB fence. On queue reset, only save the ring contents of the non-guilty contexts. Since the VM fence is a sub-fence of the the IB fence, the IB fence stores the start and end wptrs for both fences as the IB state encapsulates the VM fence state. For reemit, only reemit the state for non-guilty contexts; for guilty contexts, just emit the fences. Split the reemit per fence when when we reemit, update the start and end wptrs with the new values from reemit. This allows us to reemit jobs repeatedly as the wptrs get properly updated each time. Submitting the fence directly also simplifies the logic as we no longer need to store additional wptrs for each fence within the IB ring sequence. If the context is guilty, we emit the fence(s) and if not, we reemit the whole IB ring sequence for that IB. Signed-off-by: Alex Deucher <alexander.deucher@amd.com> --- drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 98 +++++++++-------------- drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 9 +-- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 37 +-------- drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 20 ++--- drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 1 + 5 files changed, 54 insertions(+), 111 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c index d48f61076c06a..541cdbe1e28bf 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c @@ -89,16 +89,6 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring) return seq; } -static void amdgpu_fence_save_fence_wptr_start(struct amdgpu_fence *af) -{ - af->fence_wptr_start = af->ring->wptr; -} - -static void amdgpu_fence_save_fence_wptr_end(struct amdgpu_fence *af) -{ - af->fence_wptr_end = af->ring->wptr; -} - /** * amdgpu_fence_emit - emit a fence on the requested ring * @@ -126,11 +116,10 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct amdgpu_fence *af, &ring->fence_drv.lock, adev->fence_context + ring->idx, seq); - amdgpu_fence_save_fence_wptr_start(af); + af->flags = flags | AMDGPU_FENCE_FLAG_INT; amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, - seq, flags | AMDGPU_FENCE_FLAG_INT); - amdgpu_fence_save_fence_wptr_end(af); - amdgpu_fence_save_wptr(af); + seq, af->flags); + pm_runtime_get_noresume(adev_to_drm(adev)->dev); ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask]; if (unlikely(rcu_dereference_protected(*ptr, 1))) { @@ -241,7 +230,6 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring) do { struct dma_fence *fence, **ptr; - struct amdgpu_fence *am_fence; ++last_seq; last_seq &= drv->num_fences_mask; @@ -254,12 +242,6 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring) if (!fence) continue; - /* Save the wptr in the fence driver so we know what the last processed - * wptr was. This is required for re-emitting the ring state for - * queues that are reset but are not guilty and thus have no guilty fence. 
- */ - am_fence = container_of(fence, struct amdgpu_fence, base); - drv->signalled_wptr = am_fence->wptr; dma_fence_signal(fence); dma_fence_put(fence); pm_runtime_mark_last_busy(adev_to_drm(adev)->dev); @@ -727,25 +709,27 @@ void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring, */ /** - * amdgpu_fence_driver_update_timedout_fence_state - Update fence state and set errors + * amdgpu_ring_set_fence_errors_and_reemit - Set dma_fence errors and reemit * - * @af: fence of the ring to update + * @guilty_fence: fence of the ring to update * */ -void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af) +void amdgpu_ring_set_fence_errors_and_reemit(struct amdgpu_ring *ring, + struct amdgpu_fence *guilty_fence) { struct dma_fence *unprocessed; struct dma_fence __rcu **ptr; struct amdgpu_fence *fence; - struct amdgpu_ring *ring = af->ring; unsigned long flags; u32 seq, last_seq; - bool reemitted = false; + unsigned int i; last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask; seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask; /* mark all fences from the guilty context with an error */ + amdgpu_ring_alloc(ring, ring->ring_backup_entries_to_copy + + 20 * ring->guilty_fence_count); spin_lock_irqsave(&ring->fence_drv.lock, flags); do { last_seq++; @@ -758,39 +742,41 @@ void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af) if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) { fence = container_of(unprocessed, struct amdgpu_fence, base); - if (fence->reemitted > 1) - reemitted = true; - else if (fence == af) + if (fence == guilty_fence) dma_fence_set_error(&fence->base, -ETIME); - else if (fence->context == af->context) + else if (fence->context == guilty_fence->context) dma_fence_set_error(&fence->base, -ECANCELED); + + if (fence->context == guilty_fence->context) { + /* just emit the fence for the guilty context */ + amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, + fence->base.seqno, fence->flags); + } else { + /* reemit the packet stream and update wptrs */ + fence->wptr_start = ring->wptr; + for (i = 0; i < fence->backup_size; i++) + amdgpu_ring_write(ring, ring->ring_backup[fence->backup_idx + i]); + fence->wptr_end = ring->wptr; + } } rcu_read_unlock(); } while (last_seq != seq); spin_unlock_irqrestore(&ring->fence_drv.lock, flags); - - if (reemitted) { - /* if we've already reemitted once then just cancel everything */ - amdgpu_fence_driver_force_completion(af->ring, &af->base); - af->ring->ring_backup_entries_to_copy = 0; - } -} - -void amdgpu_fence_save_wptr(struct amdgpu_fence *af) -{ - af->wptr = af->ring->wptr; + amdgpu_ring_commit(ring); } static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring, - u64 start_wptr, u64 end_wptr) + struct amdgpu_fence *af) { - unsigned int first_idx = start_wptr & ring->buf_mask; - unsigned int last_idx = end_wptr & ring->buf_mask; + unsigned int first_idx = af->wptr_start & ring->buf_mask; + unsigned int last_idx = af->wptr_end & ring->buf_mask; unsigned int i; /* Backup the contents of the ring buffer. 
*/ + af->backup_idx = ring->ring_backup_entries_to_copy; for (i = first_idx; i != last_idx; ++i, i &= ring->buf_mask) ring->ring_backup[ring->ring_backup_entries_to_copy++] = ring->ring[i]; + af->backup_size = ring->ring_backup_entries_to_copy - af->backup_idx; } void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring, @@ -799,13 +785,12 @@ void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring, struct dma_fence *unprocessed; struct dma_fence __rcu **ptr; struct amdgpu_fence *fence; - u64 wptr; u32 seq, last_seq; last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask; seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask; - wptr = ring->fence_drv.signalled_wptr; ring->ring_backup_entries_to_copy = 0; + ring->guilty_fence_count = 0; do { last_seq++; @@ -818,21 +803,12 @@ void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring, if (unprocessed && !dma_fence_is_signaled(unprocessed)) { fence = container_of(unprocessed, struct amdgpu_fence, base); - /* save everything if the ring is not guilty, otherwise - * just save the content from other contexts. - */ - if (!fence->reemitted && - (!guilty_fence || (fence->context != guilty_fence->context))) { - amdgpu_ring_backup_unprocessed_command(ring, wptr, - fence->wptr); - } else if (!fence->reemitted) { - /* always save the fence */ - amdgpu_ring_backup_unprocessed_command(ring, - fence->fence_wptr_start, - fence->fence_wptr_end); - } - wptr = fence->wptr; - fence->reemitted++; + /* keep track of the guilty fences for reemit */ + if (fence->context == guilty_fence->context) + ring->guilty_fence_count++; + /* Non-vm fence has all the state. Backup the non-guilty contexts */ + if (!fence->is_vm_fence && (fence->context != guilty_fence->context)) + amdgpu_ring_backup_unprocessed_command(ring, fence); } rcu_read_unlock(); } while (last_seq != seq); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c index fb58637ed1507..865a803026c3b 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c @@ -185,6 +185,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job, dev_err(adev->dev, "scheduling IB failed (%d).\n", r); return r; } + af->wptr_start = ring->wptr; need_ctx_switch = ring->current_ctx != fence_ctx; if (ring->funcs->emit_pipeline_sync && job && @@ -303,13 +304,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job, ring->hw_prio == AMDGPU_GFX_PIPE_PRIO_HIGH) ring->funcs->emit_wave_limit(ring, false); - /* Save the wptr associated with this fence. - * This must be last for resets to work properly - * as we need to save the wptr associated with this - * fence so we know what rings contents to backup - * after we reset the queue. - */ - amdgpu_fence_save_wptr(af); + af->wptr_end = ring->wptr; amdgpu_ring_ib_end(ring); amdgpu_ring_commit(ring); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c index b82357c657237..df37335127979 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c @@ -104,29 +104,6 @@ int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned int ndw) return 0; } -/** - * amdgpu_ring_alloc_reemit - allocate space on the ring buffer for reemit - * - * @ring: amdgpu_ring structure holding ring information - * @ndw: number of dwords to allocate in the ring buffer - * - * Allocate @ndw dwords in the ring buffer (all asics). 
- * doesn't check the max_dw limit as we may be reemitting - * several submissions. - */ -static void amdgpu_ring_alloc_reemit(struct amdgpu_ring *ring, unsigned int ndw) -{ - /* Align requested size with padding so unlock_commit can - * pad safely */ - ndw = (ndw + ring->funcs->align_mask) & ~ring->funcs->align_mask; - - ring->count_dw = ndw; - ring->wptr_old = ring->wptr; - - if (ring->funcs->begin_use) - ring->funcs->begin_use(ring); -} - /** * amdgpu_ring_insert_nop - insert NOP packets * @@ -877,7 +854,6 @@ void amdgpu_ring_reset_helper_begin(struct amdgpu_ring *ring, int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring, struct amdgpu_fence *guilty_fence) { - unsigned int i; int r; /* verify that the ring is functional */ @@ -885,16 +861,9 @@ int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring, if (r) return r; - /* set an error on all fences from the context */ - if (guilty_fence) - amdgpu_fence_driver_update_timedout_fence_state(guilty_fence); - /* Re-emit the non-guilty commands */ - if (ring->ring_backup_entries_to_copy) { - amdgpu_ring_alloc_reemit(ring, ring->ring_backup_entries_to_copy); - for (i = 0; i < ring->ring_backup_entries_to_copy; i++) - amdgpu_ring_write(ring, ring->ring_backup[i]); - amdgpu_ring_commit(ring); - } + /* set an error on all fences from the context and reemit */ + amdgpu_ring_set_fence_errors_and_reemit(ring, guilty_fence); + /* Start the scheduler again */ drm_sched_wqueue_start(&ring->sched); return 0; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h index ce095427611fb..6dca1ccd2fc2e 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h @@ -121,7 +121,6 @@ struct amdgpu_fence_driver { /* sync_seq is protected by ring emission lock */ uint32_t sync_seq; atomic_t last_seq; - u64 signalled_wptr; bool initialized; struct amdgpu_irq_src *irq_src; unsigned irq_type; @@ -146,15 +145,17 @@ struct amdgpu_fence { struct amdgpu_ring *ring; ktime_t start_timestamp; + bool is_vm_fence; + unsigned int flags; /* wptr for the total submission for resets */ - u64 wptr; + u64 wptr_start; + u64 wptr_end; /* fence context for resets */ u64 context; - /* has this fence been reemitted */ - unsigned int reemitted; - /* wptr for the fence for the submission */ - u64 fence_wptr_start; - u64 fence_wptr_end; + /* idx into the ring backup */ + unsigned int backup_idx; + unsigned int backup_size; + }; extern const struct drm_sched_backend_ops amdgpu_sched_ops; @@ -162,8 +163,8 @@ extern const struct drm_sched_backend_ops amdgpu_sched_ops; void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error); void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring, struct dma_fence *timedout_fence); -void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af); -void amdgpu_fence_save_wptr(struct amdgpu_fence *af); +void amdgpu_ring_set_fence_errors_and_reemit(struct amdgpu_ring *ring, + struct amdgpu_fence *guilty_fence); int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring); int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring, @@ -314,6 +315,7 @@ struct amdgpu_ring { /* backups for resets */ uint32_t *ring_backup; unsigned int ring_backup_entries_to_copy; + unsigned int guilty_fence_count; unsigned rptr_offs; u64 rptr_gpu_addr; u32 *rptr_cpu_addr; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c index 6a2ea200d90c8..bd2c01b1ef18f 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +++ 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c @@ -848,6 +848,7 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job, r = amdgpu_fence_emit(ring, job->hw_vm_fence, 0); if (r) return r; + job->hw_vm_fence->is_vm_fence = true; fence = &job->hw_vm_fence->base; /* get a ref for the job */ dma_fence_get(fence); -- 2.52.0 ^ permalink raw reply related [flat|nested] 20+ messages in thread
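The backup and reemit bookkeeping above relies on wptr values being free-running while actual ring indices are masked. For illustration, a self-contained sketch of the wrap-safe copy that amdgpu_ring_backup_unprocessed_command() performs; the helper and its standalone types are hypothetical:

	#include <stdint.h>

	static unsigned int copy_span(const uint32_t *ring_buf, uint32_t buf_mask,
				      uint64_t wptr_start, uint64_t wptr_end,
				      uint32_t *backup, unsigned int count)
	{
		unsigned int i = wptr_start & buf_mask;
		unsigned int last = wptr_end & buf_mask;

		/* re-mask each step so the copy wraps cleanly at the ring end */
		while (i != last) {
			backup[count++] = ring_buf[i];
			i = (i + 1) & buf_mask;
		}
		return count;
	}

Recording per-fence backup_idx/backup_size offsets into that flat backup buffer is what lets the reemit path replay a single submission's span later.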
* Re: [PATCH 10/10] drm/amdgpu: rework ring reset backup and reemit 2026-01-16 16:20 ` [PATCH 10/10] drm/amdgpu: rework ring reset backup and reemit Alex Deucher @ 2026-01-19 13:19 ` Christian König 2026-01-20 2:41 ` Timur Kristóf 1 sibling, 0 replies; 20+ messages in thread From: Christian König @ 2026-01-19 13:19 UTC (permalink / raw) To: Alex Deucher, amd-gfx On 1/16/26 17:20, Alex Deucher wrote: > Store the start and end wptrs in the IB fence. On queue > reset, only save the ring contents of the non-guilty contexts. > Since the VM fence is a sub-fence of the the IB fence, the IB > fence stores the start and end wptrs for both fences as the IB > state encapsulates the VM fence state. Most looks like a step in the right direction, but that is not such a good idea. Even for a gulty fence we still want to re-submit the VM stuff since submissions from other context might depend on getting this right. Regards, Christian. > > For reemit, only reemit the state for non-guilty contexts; for > guilty contexts, just emit the fences. Split the reemit per > fence when when we reemit, update the start and end wptrs > with the new values from reemit. This allows us to > reemit jobs repeatedly as the wptrs get properly updated > each time. Submitting the fence directly also simplifies > the logic as we no longer need to store additional wptrs for > each fence within the IB ring sequence. If the context is > guilty, we emit the fence(s) and if not, we reemit the > whole IB ring sequence for that IB. > > Signed-off-by: Alex Deucher <alexander.deucher@amd.com> > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 98 +++++++++-------------- > drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 9 +-- > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 37 +-------- > drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 20 ++--- > drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 1 + > 5 files changed, 54 insertions(+), 111 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c > index d48f61076c06a..541cdbe1e28bf 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c > @@ -89,16 +89,6 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring) > return seq; > } > > -static void amdgpu_fence_save_fence_wptr_start(struct amdgpu_fence *af) > -{ > - af->fence_wptr_start = af->ring->wptr; > -} > - > -static void amdgpu_fence_save_fence_wptr_end(struct amdgpu_fence *af) > -{ > - af->fence_wptr_end = af->ring->wptr; > -} > - > /** > * amdgpu_fence_emit - emit a fence on the requested ring > * > @@ -126,11 +116,10 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct amdgpu_fence *af, > &ring->fence_drv.lock, > adev->fence_context + ring->idx, seq); > > - amdgpu_fence_save_fence_wptr_start(af); > + af->flags = flags | AMDGPU_FENCE_FLAG_INT; > amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, > - seq, flags | AMDGPU_FENCE_FLAG_INT); > - amdgpu_fence_save_fence_wptr_end(af); > - amdgpu_fence_save_wptr(af); > + seq, af->flags); > + > pm_runtime_get_noresume(adev_to_drm(adev)->dev); > ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask]; > if (unlikely(rcu_dereference_protected(*ptr, 1))) { > @@ -241,7 +230,6 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring) > > do { > struct dma_fence *fence, **ptr; > - struct amdgpu_fence *am_fence; > > ++last_seq; > last_seq &= drv->num_fences_mask; > @@ -254,12 +242,6 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring) > if (!fence) > continue; > > - /* Save the wptr in the fence 
driver so we know what the last processed > - * wptr was. This is required for re-emitting the ring state for > - * queues that are reset but are not guilty and thus have no guilty fence. > - */ > - am_fence = container_of(fence, struct amdgpu_fence, base); > - drv->signalled_wptr = am_fence->wptr; > dma_fence_signal(fence); > dma_fence_put(fence); > pm_runtime_mark_last_busy(adev_to_drm(adev)->dev); > @@ -727,25 +709,27 @@ void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring, > */ > > /** > - * amdgpu_fence_driver_update_timedout_fence_state - Update fence state and set errors > + * amdgpu_ring_set_fence_errors_and_reemit - Set dma_fence errors and reemit > * > - * @af: fence of the ring to update > + * @guilty_fence: fence of the ring to update > * > */ > -void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af) > +void amdgpu_ring_set_fence_errors_and_reemit(struct amdgpu_ring *ring, > + struct amdgpu_fence *guilty_fence) > { > struct dma_fence *unprocessed; > struct dma_fence __rcu **ptr; > struct amdgpu_fence *fence; > - struct amdgpu_ring *ring = af->ring; > unsigned long flags; > u32 seq, last_seq; > - bool reemitted = false; > + unsigned int i; > > last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask; > seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask; > > /* mark all fences from the guilty context with an error */ > + amdgpu_ring_alloc(ring, ring->ring_backup_entries_to_copy + > + 20 * ring->guilty_fence_count); > spin_lock_irqsave(&ring->fence_drv.lock, flags); > do { > last_seq++; > @@ -758,39 +742,41 @@ void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af) > if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) { > fence = container_of(unprocessed, struct amdgpu_fence, base); > > - if (fence->reemitted > 1) > - reemitted = true; > - else if (fence == af) > + if (fence == guilty_fence) > dma_fence_set_error(&fence->base, -ETIME); > - else if (fence->context == af->context) > + else if (fence->context == guilty_fence->context) > dma_fence_set_error(&fence->base, -ECANCELED); > + > + if (fence->context == guilty_fence->context) { > + /* just emit the fence for the guilty context */ > + amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr, > + fence->base.seqno, fence->flags); > + } else { > + /* reemit the packet stream and update wptrs */ > + fence->wptr_start = ring->wptr; > + for (i = 0; i < fence->backup_size; i++) > + amdgpu_ring_write(ring, ring->ring_backup[fence->backup_idx + i]); > + fence->wptr_end = ring->wptr; > + } > } > rcu_read_unlock(); > } while (last_seq != seq); > spin_unlock_irqrestore(&ring->fence_drv.lock, flags); > - > - if (reemitted) { > - /* if we've already reemitted once then just cancel everything */ > - amdgpu_fence_driver_force_completion(af->ring, &af->base); > - af->ring->ring_backup_entries_to_copy = 0; > - } > -} > - > -void amdgpu_fence_save_wptr(struct amdgpu_fence *af) > -{ > - af->wptr = af->ring->wptr; > + amdgpu_ring_commit(ring); > } > > static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring, > - u64 start_wptr, u64 end_wptr) > + struct amdgpu_fence *af) > { > - unsigned int first_idx = start_wptr & ring->buf_mask; > - unsigned int last_idx = end_wptr & ring->buf_mask; > + unsigned int first_idx = af->wptr_start & ring->buf_mask; > + unsigned int last_idx = af->wptr_end & ring->buf_mask; > unsigned int i; > > /* Backup the contents of the ring buffer. 
*/ > + af->backup_idx = ring->ring_backup_entries_to_copy; > for (i = first_idx; i != last_idx; ++i, i &= ring->buf_mask) > ring->ring_backup[ring->ring_backup_entries_to_copy++] = ring->ring[i]; > + af->backup_size = ring->ring_backup_entries_to_copy - af->backup_idx; > } > > void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring, > @@ -799,13 +785,12 @@ void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring, > struct dma_fence *unprocessed; > struct dma_fence __rcu **ptr; > struct amdgpu_fence *fence; > - u64 wptr; > u32 seq, last_seq; > > last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask; > seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask; > - wptr = ring->fence_drv.signalled_wptr; > ring->ring_backup_entries_to_copy = 0; > + ring->guilty_fence_count = 0; > > do { > last_seq++; > @@ -818,21 +803,12 @@ void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring, > if (unprocessed && !dma_fence_is_signaled(unprocessed)) { > fence = container_of(unprocessed, struct amdgpu_fence, base); > > - /* save everything if the ring is not guilty, otherwise > - * just save the content from other contexts. > - */ > - if (!fence->reemitted && > - (!guilty_fence || (fence->context != guilty_fence->context))) { > - amdgpu_ring_backup_unprocessed_command(ring, wptr, > - fence->wptr); > - } else if (!fence->reemitted) { > - /* always save the fence */ > - amdgpu_ring_backup_unprocessed_command(ring, > - fence->fence_wptr_start, > - fence->fence_wptr_end); > - } > - wptr = fence->wptr; > - fence->reemitted++; > + /* keep track of the guilty fences for reemit */ > + if (fence->context == guilty_fence->context) > + ring->guilty_fence_count++; > + /* Non-vm fence has all the state. Backup the non-guilty contexts */ > + if (!fence->is_vm_fence && (fence->context != guilty_fence->context)) > + amdgpu_ring_backup_unprocessed_command(ring, fence); > } > rcu_read_unlock(); > } while (last_seq != seq); > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c > index fb58637ed1507..865a803026c3b 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c > @@ -185,6 +185,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job, > dev_err(adev->dev, "scheduling IB failed (%d).\n", r); > return r; > } > + af->wptr_start = ring->wptr; > > need_ctx_switch = ring->current_ctx != fence_ctx; > if (ring->funcs->emit_pipeline_sync && job && > @@ -303,13 +304,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job, > ring->hw_prio == AMDGPU_GFX_PIPE_PRIO_HIGH) > ring->funcs->emit_wave_limit(ring, false); > > - /* Save the wptr associated with this fence. > - * This must be last for resets to work properly > - * as we need to save the wptr associated with this > - * fence so we know what rings contents to backup > - * after we reset the queue. 
> - */ > - amdgpu_fence_save_wptr(af); > + af->wptr_end = ring->wptr; > > amdgpu_ring_ib_end(ring); > amdgpu_ring_commit(ring); > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > index b82357c657237..df37335127979 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c > @@ -104,29 +104,6 @@ int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned int ndw) > return 0; > } > > -/** > - * amdgpu_ring_alloc_reemit - allocate space on the ring buffer for reemit > - * > - * @ring: amdgpu_ring structure holding ring information > - * @ndw: number of dwords to allocate in the ring buffer > - * > - * Allocate @ndw dwords in the ring buffer (all asics). > - * doesn't check the max_dw limit as we may be reemitting > - * several submissions. > - */ > -static void amdgpu_ring_alloc_reemit(struct amdgpu_ring *ring, unsigned int ndw) > -{ > - /* Align requested size with padding so unlock_commit can > - * pad safely */ > - ndw = (ndw + ring->funcs->align_mask) & ~ring->funcs->align_mask; > - > - ring->count_dw = ndw; > - ring->wptr_old = ring->wptr; > - > - if (ring->funcs->begin_use) > - ring->funcs->begin_use(ring); > -} > - > /** > * amdgpu_ring_insert_nop - insert NOP packets > * > @@ -877,7 +854,6 @@ void amdgpu_ring_reset_helper_begin(struct amdgpu_ring *ring, > int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring, > struct amdgpu_fence *guilty_fence) > { > - unsigned int i; > int r; > > /* verify that the ring is functional */ > @@ -885,16 +861,9 @@ int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring, > if (r) > return r; > > - /* set an error on all fences from the context */ > - if (guilty_fence) > - amdgpu_fence_driver_update_timedout_fence_state(guilty_fence); > - /* Re-emit the non-guilty commands */ > - if (ring->ring_backup_entries_to_copy) { > - amdgpu_ring_alloc_reemit(ring, ring->ring_backup_entries_to_copy); > - for (i = 0; i < ring->ring_backup_entries_to_copy; i++) > - amdgpu_ring_write(ring, ring->ring_backup[i]); > - amdgpu_ring_commit(ring); > - } > + /* set an error on all fences from the context and reemit */ > + amdgpu_ring_set_fence_errors_and_reemit(ring, guilty_fence); > + > /* Start the scheduler again */ > drm_sched_wqueue_start(&ring->sched); > return 0; > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > index ce095427611fb..6dca1ccd2fc2e 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h > @@ -121,7 +121,6 @@ struct amdgpu_fence_driver { > /* sync_seq is protected by ring emission lock */ > uint32_t sync_seq; > atomic_t last_seq; > - u64 signalled_wptr; > bool initialized; > struct amdgpu_irq_src *irq_src; > unsigned irq_type; > @@ -146,15 +145,17 @@ struct amdgpu_fence { > struct amdgpu_ring *ring; > ktime_t start_timestamp; > > + bool is_vm_fence; > + unsigned int flags; > /* wptr for the total submission for resets */ > - u64 wptr; > + u64 wptr_start; > + u64 wptr_end; > /* fence context for resets */ > u64 context; > - /* has this fence been reemitted */ > - unsigned int reemitted; > - /* wptr for the fence for the submission */ > - u64 fence_wptr_start; > - u64 fence_wptr_end; > + /* idx into the ring backup */ > + unsigned int backup_idx; > + unsigned int backup_size; > + > }; > > extern const struct drm_sched_backend_ops amdgpu_sched_ops; > @@ -162,8 +163,8 @@ extern const struct drm_sched_backend_ops amdgpu_sched_ops; > void 
amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error); > void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring, > struct dma_fence *timedout_fence); > -void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af); > -void amdgpu_fence_save_wptr(struct amdgpu_fence *af); > +void amdgpu_ring_set_fence_errors_and_reemit(struct amdgpu_ring *ring, > + struct amdgpu_fence *guilty_fence); > > int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring); > int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring, > @@ -314,6 +315,7 @@ struct amdgpu_ring { > /* backups for resets */ > uint32_t *ring_backup; > unsigned int ring_backup_entries_to_copy; > + unsigned int guilty_fence_count; > unsigned rptr_offs; > u64 rptr_gpu_addr; > u32 *rptr_cpu_addr; > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c > index 6a2ea200d90c8..bd2c01b1ef18f 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c > @@ -848,6 +848,7 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job, > r = amdgpu_fence_emit(ring, job->hw_vm_fence, 0); > if (r) > return r; > + job->hw_vm_fence->is_vm_fence = true; > fence = &job->hw_vm_fence->base; > /* get a ref for the job */ > dma_fence_get(fence); ^ permalink raw reply [flat|nested] 20+ messages in thread
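One possible reading of that comment, strictly as a sketch and not something proposed in the series: give the VM sub-fence its own backed-up span so the reemit loop can replay the page-table update and VM flush even when the guilty IB payload is dropped. Using the structures from patch 10, with a hypothetical branch layout:

	if (fence->context == guilty_fence->context) {
		/* hypothetical: replay only the VM span for the guilty context */
		if (fence->is_vm_fence && fence->backup_size)
			for (i = 0; i < fence->backup_size; i++)
				amdgpu_ring_write(ring,
					ring->ring_backup[fence->backup_idx + i]);
		amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
				       fence->base.seqno, fence->flags);
	} else {
		/* non-guilty contexts keep the full reemit from patch 10 */
		fence->wptr_start = ring->wptr;
		for (i = 0; i < fence->backup_size; i++)
			amdgpu_ring_write(ring,
					  ring->ring_backup[fence->backup_idx + i]);
		fence->wptr_end = ring->wptr;
	}

This would also require the backup pass to record a span for VM fences, which patch 10 currently skips.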
* Re: [PATCH 10/10] drm/amdgpu: rework ring reset backup and reemit 2026-01-16 16:20 ` [PATCH 10/10] drm/amdgpu: rework ring reset backup and reemit Alex Deucher 2026-01-19 13:19 ` Christian König @ 2026-01-20 2:41 ` Timur Kristóf 1 sibling, 0 replies; 20+ messages in thread From: Timur Kristóf @ 2026-01-20 2:41 UTC (permalink / raw) To: amd-gfx; +Cc: Alex Deucher, Alex Deucher On 2026. január 16., péntek 17:20:27 közép-európai téli idő Alex Deucher wrote: > Store the start and end wptrs in the IB fence. On queue > reset, only save the ring contents of the non-guilty contexts. > Since the VM fence is a sub-fence of the the IB fence, the IB > fence stores the start and end wptrs for both fences as the IB > state encapsulates the VM fence state. > > For reemit, only reemit the state for non-guilty contexts; for > guilty contexts, just emit the fences. Split the reemit per > fence when when we reemit, update the start and end wptrs > with the new values from reemit. This allows us to > reemit jobs repeatedly as the wptrs get properly updated > each time. Submitting the fence directly also simplifies > the logic as we no longer need to store additional wptrs for > each fence within the IB ring sequence. If the context is > guilty, we emit the fence(s) and if not, we reemit the > whole IB ring sequence for that IB. > Hi Alex, I have a few ideas that may help simplify this: 1. We could rework the pipeline sync so that it doesn't depend on the fence of the previous submission, similarly to how you already did for gfx9. - Then it becomes unnecessary to re-emit the guilty fence or track where the fence was emitted. - This also fixes queue reset for the case when the maximum scheduled submissions are greater than two. - For those engines that do not have something like VS/PS/CS_PARTIAL_FLUSH and ACQUIRE_MEM, you can emit a fence + wait_reg inside the pipeline sync. 2. When backing up the ring contents, you could replace submissions that belong to the guilty process with NOPs, and remember the start position of the first submission inside the ring buffer. When restoring, you can pad the ring buffer with NOPs until you get back to the same position. - This way, all the pointers to things inside the ring will remain valid and no extra logic is needed when re-emitting the same content repeatedly. 
Hope this helps,
Timur

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c | 98 +++++++++--------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c    |  9 +--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c  | 37 +--------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h  | 20 ++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c    |  1 +
>  5 files changed, 54 insertions(+), 111 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index d48f61076c06a..541cdbe1e28bf 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -89,16 +89,6 @@ static u32 amdgpu_fence_read(struct amdgpu_ring *ring)
>  	return seq;
>  }
>
> -static void amdgpu_fence_save_fence_wptr_start(struct amdgpu_fence *af)
> -{
> -	af->fence_wptr_start = af->ring->wptr;
> -}
> -
> -static void amdgpu_fence_save_fence_wptr_end(struct amdgpu_fence *af)
> -{
> -	af->fence_wptr_end = af->ring->wptr;
> -}
> -
>  /**
>   * amdgpu_fence_emit - emit a fence on the requested ring
>   *
> @@ -126,11 +116,10 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct amdgpu_fence *af,
>  			       &ring->fence_drv.lock,
>  			       adev->fence_context + ring->idx, seq);
>
> -	amdgpu_fence_save_fence_wptr_start(af);
> +	af->flags = flags | AMDGPU_FENCE_FLAG_INT;
>  	amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
> -			       seq, flags | AMDGPU_FENCE_FLAG_INT);
> -	amdgpu_fence_save_fence_wptr_end(af);
> -	amdgpu_fence_save_wptr(af);
> +			       seq, af->flags);
> +
>  	pm_runtime_get_noresume(adev_to_drm(adev)->dev);
>  	ptr = &ring->fence_drv.fences[seq & ring->fence_drv.num_fences_mask];
>  	if (unlikely(rcu_dereference_protected(*ptr, 1))) {
> @@ -241,7 +230,6 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
>
>  	do {
>  		struct dma_fence *fence, **ptr;
> -		struct amdgpu_fence *am_fence;
>
>  		++last_seq;
>  		last_seq &= drv->num_fences_mask;
> @@ -254,12 +242,6 @@ bool amdgpu_fence_process(struct amdgpu_ring *ring)
>  		if (!fence)
>  			continue;
>
> -		/* Save the wptr in the fence driver so we know what the last processed
> -		 * wptr was. This is required for re-emitting the ring state for
> -		 * queues that are reset but are not guilty and thus have no guilty fence.
> -		 */
> -		am_fence = container_of(fence, struct amdgpu_fence, base);
> -		drv->signalled_wptr = am_fence->wptr;
>  		dma_fence_signal(fence);
>  		dma_fence_put(fence);
>  		pm_runtime_mark_last_busy(adev_to_drm(adev)->dev);
> @@ -727,25 +709,27 @@ void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring,
>   */
>
>  /**
> - * amdgpu_fence_driver_update_timedout_fence_state - Update fence state and set errors
> + * amdgpu_ring_set_fence_errors_and_reemit - Set dma_fence errors and reemit
>   *
> - * @af: fence of the ring to update
> + * @guilty_fence: fence of the ring to update
>   *
>   */
> -void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af)
> +void amdgpu_ring_set_fence_errors_and_reemit(struct amdgpu_ring *ring,
> +					     struct amdgpu_fence *guilty_fence)
>  {
>  	struct dma_fence *unprocessed;
>  	struct dma_fence __rcu **ptr;
>  	struct amdgpu_fence *fence;
> -	struct amdgpu_ring *ring = af->ring;
>  	unsigned long flags;
>  	u32 seq, last_seq;
> -	bool reemitted = false;
> +	unsigned int i;
>
>  	last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask;
>  	seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask;
>
>  	/* mark all fences from the guilty context with an error */
> +	amdgpu_ring_alloc(ring, ring->ring_backup_entries_to_copy +
> +			  20 * ring->guilty_fence_count);
>  	spin_lock_irqsave(&ring->fence_drv.lock, flags);
>  	do {
>  		last_seq++;
> @@ -758,39 +742,41 @@ void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af)
>  		if (unprocessed && !dma_fence_is_signaled_locked(unprocessed)) {
>  			fence = container_of(unprocessed, struct amdgpu_fence, base);
>
> -			if (fence->reemitted > 1)
> -				reemitted = true;
> -			else if (fence == af)
> +			if (fence == guilty_fence)
>  				dma_fence_set_error(&fence->base, -ETIME);
> -			else if (fence->context == af->context)
> +			else if (fence->context == guilty_fence->context)
>  				dma_fence_set_error(&fence->base, -ECANCELED);
> +
> +			if (fence->context == guilty_fence->context) {
> +				/* just emit the fence for the guilty context */
> +				amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
> +						       fence->base.seqno, fence->flags);
> +			} else {
> +				/* reemit the packet stream and update wptrs */
> +				fence->wptr_start = ring->wptr;
> +				for (i = 0; i < fence->backup_size; i++)
> +					amdgpu_ring_write(ring, ring->ring_backup[fence->backup_idx + i]);
> +				fence->wptr_end = ring->wptr;
> +			}
>  		}
>  		rcu_read_unlock();
>  	} while (last_seq != seq);
>  	spin_unlock_irqrestore(&ring->fence_drv.lock, flags);
> -
> -	if (reemitted) {
> -		/* if we've already reemitted once then just cancel everything */
> -		amdgpu_fence_driver_force_completion(af->ring, &af->base);
> -		af->ring->ring_backup_entries_to_copy = 0;
> -	}
> -}
> -
> -void amdgpu_fence_save_wptr(struct amdgpu_fence *af)
> -{
> -	af->wptr = af->ring->wptr;
> +	amdgpu_ring_commit(ring);
>  }
>
>  static void amdgpu_ring_backup_unprocessed_command(struct amdgpu_ring *ring,
> -						    u64 start_wptr, u64 end_wptr)
> +						    struct amdgpu_fence *af)
>  {
> -	unsigned int first_idx = start_wptr & ring->buf_mask;
> -	unsigned int last_idx = end_wptr & ring->buf_mask;
> +	unsigned int first_idx = af->wptr_start & ring->buf_mask;
> +	unsigned int last_idx = af->wptr_end & ring->buf_mask;
>  	unsigned int i;
>
>  	/* Backup the contents of the ring buffer.
>  	 */
> +	af->backup_idx = ring->ring_backup_entries_to_copy;
>  	for (i = first_idx; i != last_idx; ++i, i &= ring->buf_mask)
>  		ring->ring_backup[ring->ring_backup_entries_to_copy++] = ring->ring[i];
> +	af->backup_size = ring->ring_backup_entries_to_copy - af->backup_idx;
>  }
>
>  void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
> @@ -799,13 +785,12 @@ void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
>  	struct dma_fence *unprocessed;
>  	struct dma_fence __rcu **ptr;
>  	struct amdgpu_fence *fence;
> -	u64 wptr;
>  	u32 seq, last_seq;
>
>  	last_seq = amdgpu_fence_read(ring) & ring->fence_drv.num_fences_mask;
>  	seq = ring->fence_drv.sync_seq & ring->fence_drv.num_fences_mask;
> -	wptr = ring->fence_drv.signalled_wptr;
>  	ring->ring_backup_entries_to_copy = 0;
> +	ring->guilty_fence_count = 0;
>
>  	do {
>  		last_seq++;
> @@ -818,21 +803,12 @@ void amdgpu_ring_backup_unprocessed_commands(struct amdgpu_ring *ring,
>  		if (unprocessed && !dma_fence_is_signaled(unprocessed)) {
>  			fence = container_of(unprocessed, struct amdgpu_fence, base);
>
> -			/* save everything if the ring is not guilty, otherwise
> -			 * just save the content from other contexts.
> -			 */
> -			if (!fence->reemitted &&
> -			    (!guilty_fence || (fence->context != guilty_fence->context))) {
> -				amdgpu_ring_backup_unprocessed_command(ring, wptr,
> -								       fence->wptr);
> -			} else if (!fence->reemitted) {
> -				/* always save the fence */
> -				amdgpu_ring_backup_unprocessed_command(ring,
> -								       fence->fence_wptr_start,
> -								       fence->fence_wptr_end);
> -			}
> -			wptr = fence->wptr;
> -			fence->reemitted++;
> +			/* keep track of the guilty fences for reemit */
> +			if (fence->context == guilty_fence->context)
> +				ring->guilty_fence_count++;
> +			/* Non-vm fence has all the state. Backup the non-guilty contexts */
> +			if (!fence->is_vm_fence && (fence->context != guilty_fence->context))
> +				amdgpu_ring_backup_unprocessed_command(ring, fence);
>  		}
>  		rcu_read_unlock();
>  	} while (last_seq != seq);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> index fb58637ed1507..865a803026c3b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> @@ -185,6 +185,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job,
>  		dev_err(adev->dev, "scheduling IB failed (%d).\n", r);
>  		return r;
>  	}
> +	af->wptr_start = ring->wptr;
>
>  	need_ctx_switch = ring->current_ctx != fence_ctx;
>  	if (ring->funcs->emit_pipeline_sync && job &&
> @@ -303,13 +304,7 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, struct amdgpu_job *job,
>  	    ring->hw_prio == AMDGPU_GFX_PIPE_PRIO_HIGH)
>  		ring->funcs->emit_wave_limit(ring, false);
>
> -	/* Save the wptr associated with this fence.
> -	 * This must be last for resets to work properly
> -	 * as we need to save the wptr associated with this
> -	 * fence so we know what rings contents to backup
> -	 * after we reset the queue.
> -	 */
> -	amdgpu_fence_save_wptr(af);
> +	af->wptr_end = ring->wptr;
>
>  	amdgpu_ring_ib_end(ring);
>  	amdgpu_ring_commit(ring);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> index b82357c657237..df37335127979 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
> @@ -104,29 +104,6 @@ int amdgpu_ring_alloc(struct amdgpu_ring *ring, unsigned int ndw)
>  	return 0;
>  }
>
> -/**
> - * amdgpu_ring_alloc_reemit - allocate space on the ring buffer for reemit
> - *
> - * @ring: amdgpu_ring structure holding ring information
> - * @ndw: number of dwords to allocate in the ring buffer
> - *
> - * Allocate @ndw dwords in the ring buffer (all asics).
> - * doesn't check the max_dw limit as we may be reemitting
> - * several submissions.
> - */
> -static void amdgpu_ring_alloc_reemit(struct amdgpu_ring *ring, unsigned int ndw)
> -{
> -	/* Align requested size with padding so unlock_commit can
> -	 * pad safely */
> -	ndw = (ndw + ring->funcs->align_mask) & ~ring->funcs->align_mask;
> -
> -	ring->count_dw = ndw;
> -	ring->wptr_old = ring->wptr;
> -
> -	if (ring->funcs->begin_use)
> -		ring->funcs->begin_use(ring);
> -}
> -
>  /**
>   * amdgpu_ring_insert_nop - insert NOP packets
>   *
> @@ -877,7 +854,6 @@ void amdgpu_ring_reset_helper_begin(struct amdgpu_ring *ring,
>  int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring,
>  				 struct amdgpu_fence *guilty_fence)
>  {
> -	unsigned int i;
>  	int r;
>
>  	/* verify that the ring is functional */
> @@ -885,16 +861,9 @@ int amdgpu_ring_reset_helper_end(struct amdgpu_ring *ring,
>  	if (r)
>  		return r;
>
> -	/* set an error on all fences from the context */
> -	if (guilty_fence)
> -		amdgpu_fence_driver_update_timedout_fence_state(guilty_fence);
> -	/* Re-emit the non-guilty commands */
> -	if (ring->ring_backup_entries_to_copy) {
> -		amdgpu_ring_alloc_reemit(ring, ring->ring_backup_entries_to_copy);
> -		for (i = 0; i < ring->ring_backup_entries_to_copy; i++)
> -			amdgpu_ring_write(ring, ring->ring_backup[i]);
> -		amdgpu_ring_commit(ring);
> -	}
> +	/* set an error on all fences from the context and reemit */
> +	amdgpu_ring_set_fence_errors_and_reemit(ring, guilty_fence);
> +
>  	/* Start the scheduler again */
>  	drm_sched_wqueue_start(&ring->sched);
>  	return 0;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index ce095427611fb..6dca1ccd2fc2e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -121,7 +121,6 @@ struct amdgpu_fence_driver {
>  	/* sync_seq is protected by ring emission lock */
>  	uint32_t		sync_seq;
>  	atomic_t		last_seq;
> -	u64			signalled_wptr;
>  	bool			initialized;
>  	struct amdgpu_irq_src	*irq_src;
>  	unsigned		irq_type;
> @@ -146,15 +145,17 @@ struct amdgpu_fence {
>  	struct amdgpu_ring	*ring;
>  	ktime_t			start_timestamp;
>
> +	bool			is_vm_fence;
> +	unsigned int		flags;
>  	/* wptr for the total submission for resets */
> -	u64			wptr;
> +	u64			wptr_start;
> +	u64			wptr_end;
>  	/* fence context for resets */
>  	u64			context;
> -	/* has this fence been reemitted */
> -	unsigned int		reemitted;
> -	/* wptr for the fence for the submission */
> -	u64			fence_wptr_start;
> -	u64			fence_wptr_end;
> +	/* idx into the ring backup */
> +	unsigned int		backup_idx;
> +	unsigned int		backup_size;
> +
>  };
>
>  extern const struct drm_sched_backend_ops amdgpu_sched_ops;
> @@ -162,8 +163,8 @@ extern const struct drm_sched_backend_ops amdgpu_sched_ops;
>  void amdgpu_fence_driver_set_error(struct amdgpu_ring *ring, int error);
>  void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring,
>  					   struct dma_fence *timedout_fence);
> -void amdgpu_fence_driver_update_timedout_fence_state(struct amdgpu_fence *af);
> -void amdgpu_fence_save_wptr(struct amdgpu_fence *af);
> +void amdgpu_ring_set_fence_errors_and_reemit(struct amdgpu_ring *ring,
> +					     struct amdgpu_fence *guilty_fence);
>
>  int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring);
>  int amdgpu_fence_driver_start_ring(struct amdgpu_ring *ring,
> @@ -314,6 +315,7 @@ struct amdgpu_ring {
>  	/* backups for resets */
>  	uint32_t		*ring_backup;
>  	unsigned int		ring_backup_entries_to_copy;
> +	unsigned int		guilty_fence_count;
>  	unsigned		rptr_offs;
>  	u64			rptr_gpu_addr;
>  	u32			*rptr_cpu_addr;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 6a2ea200d90c8..bd2c01b1ef18f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -848,6 +848,7 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
>  	r = amdgpu_fence_emit(ring, job->hw_vm_fence, 0);
>  	if (r)
>  		return r;
> +	job->hw_vm_fence->is_vm_fence = true;
>  	fence = &job->hw_vm_fence->base;
>  	/* get a ref for the job */
>  	dma_fence_get(fence);

^ permalink raw reply	[flat|nested] 20+ messages in thread
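As a reading aid for the backup/reemit bookkeeping in the quoted patch: each pending
submission records the wptr range it occupies (wptr_start/wptr_end) and, once backed
up, where its dwords sit in the backup array (backup_idx/backup_size), so the masked
copy loop can save it across a ring wrap-around and write it back after a reset. The
sketch below is a minimal, standalone toy model of that idea, not driver code; the
demo_ring/demo_fence names, sizes and the main() walk-through are invented for
illustration, and only the masked-index copy loop mirrors the patch.

/* Standalone sketch (assumed names, not amdgpu code): back up the dwords of one
 * submission from a power-of-two ring buffer and replay them later.
 */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 16                      /* must be a power of two */
#define BUF_MASK  (RING_SIZE - 1)

struct demo_fence {
	uint64_t wptr_start;              /* wptr before the submission */
	uint64_t wptr_end;                /* wptr after the submission */
	unsigned int backup_idx;          /* slot in the backup array */
	unsigned int backup_size;         /* number of dwords backed up */
};

struct demo_ring {
	uint32_t ring[RING_SIZE];
	uint64_t wptr;                    /* monotonically increasing */
	uint32_t backup[RING_SIZE];
	unsigned int backup_count;
};

static void demo_ring_write(struct demo_ring *r, uint32_t v)
{
	r->ring[r->wptr & BUF_MASK] = v;
	r->wptr++;
}

/* Copy the dwords of one submission into the backup array and remember where
 * they landed, using the same masked-index loop as the patch above.
 */
static void demo_backup_submission(struct demo_ring *r, struct demo_fence *f)
{
	unsigned int first = f->wptr_start & BUF_MASK;
	unsigned int last = f->wptr_end & BUF_MASK;
	unsigned int i;

	f->backup_idx = r->backup_count;
	for (i = first; i != last; ++i, i &= BUF_MASK)
		r->backup[r->backup_count++] = r->ring[i];
	f->backup_size = r->backup_count - f->backup_idx;
}

/* Replay one backed-up submission and refresh its wptr bookkeeping. */
static void demo_reemit_submission(struct demo_ring *r, struct demo_fence *f)
{
	unsigned int i;

	f->wptr_start = r->wptr;
	for (i = 0; i < f->backup_size; i++)
		demo_ring_write(r, r->backup[f->backup_idx + i]);
	f->wptr_end = r->wptr;
}

int main(void)
{
	struct demo_ring r = { .wptr = 14 };  /* start close to the wrap point */
	struct demo_fence f = { .wptr_start = 14 };
	uint32_t v;

	for (v = 1; v <= 5; v++)              /* a 5-dword "submission" */
		demo_ring_write(&r, 0xCAFE0000 + v);
	f.wptr_end = r.wptr;

	demo_backup_submission(&r, &f);       /* copy it out across the wrap */
	demo_reemit_submission(&r, &f);       /* write it back after a "reset" */

	printf("backed up %u dwords, reemitted at wptr %llu..%llu\n",
	       f.backup_size, (unsigned long long)f.wptr_start,
	       (unsigned long long)f.wptr_end);
	return 0;
}

In the patch itself the replay is also policy-aware: submissions from the guilty
context only get their fence re-emitted (with -ETIME/-ECANCELED set on the
dma_fences), while the other contexts have their backed-up dwords written back;
the sketch leaves that distinction out.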
end of thread, other threads:[~2026-01-20  2:41 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-16 16:20 [PATCH 00/10] Improvements for IB handling V3 Alex Deucher
2026-01-16 16:20 ` [PATCH 01/10] drm/amdgpu: fix type for wptr in ring backup Alex Deucher
2026-01-19 12:18   ` Christian König
2026-01-16 16:20 ` [PATCH 02/10] drm/amdgpu: rename amdgpu_fence_driver_guilty_force_completion() Alex Deucher
2026-01-19 12:22   ` Christian König
2026-01-16 16:20 ` [PATCH 03/10] drm/amdgpu/job: use GFP_ATOMIC while in gpu reset Alex Deucher
2026-01-19 12:27   ` Christian König
2026-01-16 16:20 ` [PATCH 04/10] drm/amdgpu: switch all IPs to using job for IBs Alex Deucher
2026-01-19 12:31   ` Christian König
2026-01-16 16:20 ` [PATCH 05/10] drm/amdgpu: require a job to schedule an IB Alex Deucher
2026-01-19 12:41   ` Christian König
2026-01-16 16:20 ` [PATCH 06/10] drm/amdgpu: don't call drm_sched_stop/start() in asic reset Alex Deucher
2026-01-19 12:42   ` Christian König
2026-01-16 16:20 ` [PATCH 07/10] drm/amdgpu: drop drm_sched_increase_karma() Alex Deucher
2026-01-19 12:44   ` Christian König
2026-01-16 16:20 ` [PATCH 08/10] drm/amdgpu: plumb timedout fence through to force completion Alex Deucher
2026-01-16 16:20 ` [PATCH 09/10] drm/amdgpu: simplify VCN reset helper Alex Deucher
2026-01-16 16:20 ` [PATCH 10/10] drm/amdgpu: rework ring reset backup and reemit Alex Deucher
2026-01-19 13:19   ` Christian König
2026-01-20  2:41     ` Timur Kristóf
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox