public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves
@ 2025-11-13 16:05 Pierre-Eric Pelloux-Prayer
  2025-11-13 16:05 ` [PATCH v2 01/20] drm/amdgpu: give each kernel job a unique id Pierre-Eric Pelloux-Prayer
                   ` (19 more replies)
  0 siblings, 20 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  Cc: Pierre-Eric Pelloux-Prayer, Christian König, Alex Deucher,
	David Airlie, Felix Kuehling, Harry Wentland, Huang Rui, Leo Li,
	Maarten Lankhorst, Maxime Ripard, Simona Vetter, Sumit Semwal,
	Thomas Zimmermann, amd-gfx, dri-devel, linaro-mm-sig,
	linux-kernel, linux-media

The drm/ttm patch modifies TTM to support multiple contexts for
pipelined moves.

Then amdgpu/ttm is updated to express dependencies between jobs
explicitly, instead of relying on the execution ordering guaranteed by
the use of a single instance.
With all of this in place, we can use multiple entities, each having
access to all the available SDMA instances.

This rework is also an opportunity to merge the clear functions into a
single one and to optimize GART usage a bit.

(The first patch of the series has already been merged through drm-misc but I'm
including it here to reduce conflicts)


v2:
  - addressed comments from Christian
  - dropped "drm/amdgpu: prepare amdgpu_fill_buffer to use N entities" and
    "drm/amdgpu: use multiple entities in amdgpu_fill_buffer"
  - added "drm/amdgpu: handle resv dependencies in amdgpu_ttm_map_buffer",
    "drm/amdgpu: round robin through clear_entities in amdgpu_fill_buffer"
  - reworked how sdma rings/scheds are passed to amdgpu_ttm
v1: https://lists.freedesktop.org/archives/dri-devel/2025-November/534517.html

Pierre-Eric Pelloux-Prayer (20):
  drm/amdgpu: give each kernel job a unique id
  drm/ttm: rework pipelined eviction fence handling
  drm/amdgpu: remove direct_submit arg from amdgpu_copy_buffer
  drm/amdgpu: introduce amdgpu_ttm_buffer_entity
  drm/amdgpu: pass the entity to use to ttm functions
  drm/amdgpu: statically assign gart windows to ttm entities
  drm/amdgpu: allocate multiple clear entities
  drm/amdgpu: allocate multiple move entities
  drm/amdgpu: pass optional dependency to amdgpu_fill_buffer
  drm/amdgpu: handle resv dependencies in amdgpu_ttm_map_buffer
  drm/amdgpu: round robin through clear_entities in amdgpu_fill_buffer
  drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences
  drm/amdgpu: use multiple entities in amdgpu_move_blit
  drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds
  drm/amdgpu: pass all the sdma scheds to amdgpu_mman
  drm/amdgpu: give ttm entities access to all the sdma scheds
  drm/amdgpu: get rid of amdgpu_ttm_clear_buffer
  drm/amdgpu: rename amdgpu_fill_buffer as amdgpu_ttm_clear_buffer
  drm/amdgpu: use larger gart window when possible
  drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE

 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |   4 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |   9 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c        |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c       |  25 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c   |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c       |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h       |  19 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c      |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  14 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 435 +++++++++++-------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  50 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c       |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c       |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c      |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        |  26 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h        |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c    |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c     |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c   |  12 +-
 drivers/gpu/drm/amd/amdgpu/cik_sdma.c         |  12 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c        |  12 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c        |  12 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c        |  19 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c      |  19 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c        |  18 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c        |  18 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c        |  12 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c        |  12 +-
 drivers/gpu/drm/amd/amdgpu/si_dma.c           |  12 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c         |   6 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c         |   6 +-
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  32 +-
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c          |   3 +-
 .../amd/display/amdgpu_dm/amdgpu_dm_plane.c   |   6 +-
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c  |   6 +-
 .../gpu/drm/ttm/tests/ttm_bo_validate_test.c  |  11 +-
 drivers/gpu/drm/ttm/tests/ttm_resource_test.c |   5 +-
 drivers/gpu/drm/ttm/ttm_bo.c                  |  47 +-
 drivers/gpu/drm/ttm/ttm_bo_util.c             |  38 +-
 drivers/gpu/drm/ttm/ttm_resource.c            |  31 +-
 include/drm/ttm/ttm_resource.h                |  29 +-
 45 files changed, 588 insertions(+), 436 deletions(-)

-- 
2.43.0


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v2 01/20] drm/amdgpu: give each kernel job a unique id
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-14 12:26   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 02/20] drm/ttm: rework pipelined eviction fence handling Pierre-Eric Pelloux-Prayer
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Felix Kuehling
  Cc: Pierre-Eric Pelloux-Prayer, Arunpravin Paneer Selvam, amd-gfx,
	dri-devel, linux-kernel

Userspace jobs use drm_file.client_id as a unique identifier of
the job's owner. For kernel jobs, we can allocate arbitrary
values - the risk of overlap with userspace ids is small (given
that it's a u64 value).
In the unlikely case an overlap happens, it only impacts
trace events.

Since this id is traced in the gpu_scheduler trace events, it
makes it possible to determine the source of each job sent to
the hardware.

To make grepping easier, the IDs are defined as they will appear
in the trace output.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Link: https://lore.kernel.org/r/20250604122827.2191-1-pierre-eric.pelloux-prayer@amd.com
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c     |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c     |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  5 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     | 19 +++++++++++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c    |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c  |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     | 28 +++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h     |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c     |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c     |  5 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c     |  8 +++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c      |  6 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h      |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c  |  4 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c   |  4 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 12 +++++----
 drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c       |  6 +++--
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c       |  6 +++--
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c    |  3 ++-
 19 files changed, 84 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
index 3d24f9cd750a..29c927f4d6df 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
@@ -1549,7 +1549,8 @@ static int amdgpu_gfx_run_cleaner_shader_job(struct amdgpu_ring *ring)
 	owner = (void *)(unsigned long)atomic_inc_return(&counter);
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, &entity, owner,
-				     64, 0, &job);
+				     64, 0, &job,
+				     AMDGPU_KERNEL_JOB_ID_CLEANER_SHADER);
 	if (r)
 		goto err;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 97b562a79ea8..9dcf51991b5b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -690,7 +690,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.high_pr,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
-				     &job);
+				     &job, AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB);
 	if (r)
 		goto error_alloc;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 55c7e104d5ca..3457bd649623 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -234,11 +234,12 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev,
 			     struct drm_sched_entity *entity, void *owner,
 			     size_t size, enum amdgpu_ib_pool_type pool_type,
-			     struct amdgpu_job **job)
+			     struct amdgpu_job **job, u64 k_job_id)
 {
 	int r;
 
-	r = amdgpu_job_alloc(adev, NULL, entity, owner, 1, job, 0);
+	r = amdgpu_job_alloc(adev, NULL, entity, owner, 1, job,
+			     k_job_id);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index d25f1fcf0242..7abf069d17d4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -44,6 +44,22 @@
 struct amdgpu_fence;
 enum amdgpu_ib_pool_type;
 
+/* Internal kernel job ids. (decreasing values, starting from U64_MAX). */
+#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE              (18446744073709551615ULL)
+#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE_PDES         (18446744073709551614ULL)
+#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE_RANGE        (18446744073709551613ULL)
+#define AMDGPU_KERNEL_JOB_ID_VM_PT_CLEAR            (18446744073709551612ULL)
+#define AMDGPU_KERNEL_JOB_ID_TTM_MAP_BUFFER         (18446744073709551611ULL)
+#define AMDGPU_KERNEL_JOB_ID_TTM_ACCESS_MEMORY_SDMA (18446744073709551610ULL)
+#define AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER        (18446744073709551609ULL)
+#define AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE       (18446744073709551608ULL)
+#define AMDGPU_KERNEL_JOB_ID_MOVE_BLIT              (18446744073709551607ULL)
+#define AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER       (18446744073709551606ULL)
+#define AMDGPU_KERNEL_JOB_ID_CLEANER_SHADER         (18446744073709551605ULL)
+#define AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB          (18446744073709551604ULL)
+#define AMDGPU_KERNEL_JOB_ID_KFD_GART_MAP           (18446744073709551603ULL)
+#define AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST          (18446744073709551602ULL)
+
 struct amdgpu_job {
 	struct drm_sched_job    base;
 	struct amdgpu_vm	*vm;
@@ -97,7 +113,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev,
 			     struct drm_sched_entity *entity, void *owner,
 			     size_t size, enum amdgpu_ib_pool_type pool_type,
-			     struct amdgpu_job **job);
+			     struct amdgpu_job **job,
+			     u64 k_job_id);
 void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
 			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
 void amdgpu_job_free_resources(struct amdgpu_job *job);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
index 91678621f1ff..63ee6ba6a931 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
@@ -196,7 +196,8 @@ static int amdgpu_jpeg_dec_set_reg(struct amdgpu_ring *ring, uint32_t handle,
 	int i, r;
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
-				     AMDGPU_IB_POOL_DIRECT, &job);
+				     AMDGPU_IB_POOL_DIRECT, &job,
+				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index fe486988a738..e08f58de4b17 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -1321,7 +1321,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	if (r)
 		goto out;
 
-	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true);
+	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true,
+			       AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
 	if (WARN_ON(r))
 		goto out;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index e226c3aff7d7..326476089db3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -227,7 +227,8 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
 	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     num_dw * 4 + num_bytes,
-				     AMDGPU_IB_POOL_DELAYED, &job);
+				     AMDGPU_IB_POOL_DELAYED, &job,
+				     AMDGPU_KERNEL_JOB_ID_TTM_MAP_BUFFER);
 	if (r)
 		return r;
 
@@ -406,7 +407,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 		struct dma_fence *wipe_fence = NULL;
 
 		r = amdgpu_fill_buffer(abo, 0, NULL, &wipe_fence,
-				       false);
+				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
 		if (r) {
 			goto error;
 		} else if (wipe_fence) {
@@ -1488,7 +1489,8 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
 	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     num_dw * 4, AMDGPU_IB_POOL_DELAYED,
-				     &job);
+				     &job,
+				     AMDGPU_KERNEL_JOB_ID_TTM_ACCESS_MEMORY_SDMA);
 	if (r)
 		goto out;
 
@@ -2212,7 +2214,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
 				  struct dma_resv *resv,
 				  bool vm_needs_flush,
 				  struct amdgpu_job **job,
-				  bool delayed)
+				  bool delayed, u64 k_job_id)
 {
 	enum amdgpu_ib_pool_type pool = direct_submit ?
 		AMDGPU_IB_POOL_DIRECT :
@@ -2222,7 +2224,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
 						    &adev->mman.high_pr;
 	r = amdgpu_job_alloc_with_ib(adev, entity,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
-				     num_dw * 4, pool, job);
+				     num_dw * 4, pool, job, k_job_id);
 	if (r)
 		return r;
 
@@ -2262,7 +2264,8 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
 	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
 	r = amdgpu_ttm_prepare_job(adev, direct_submit, num_dw,
-				   resv, vm_needs_flush, &job, false);
+				   resv, vm_needs_flush, &job, false,
+				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
 	if (r)
 		return r;
 
@@ -2297,7 +2300,8 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
 			       uint64_t dst_addr, uint32_t byte_count,
 			       struct dma_resv *resv,
 			       struct dma_fence **fence,
-			       bool vm_needs_flush, bool delayed)
+			       bool vm_needs_flush, bool delayed,
+			       u64 k_job_id)
 {
 	struct amdgpu_device *adev = ring->adev;
 	unsigned int num_loops, num_dw;
@@ -2310,7 +2314,7 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
 	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
 	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
 	r = amdgpu_ttm_prepare_job(adev, false, num_dw, resv, vm_needs_flush,
-				   &job, delayed);
+				   &job, delayed, k_job_id);
 	if (r)
 		return r;
 
@@ -2380,7 +2384,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 			goto err;
 
 		r = amdgpu_ttm_fill_mem(ring, 0, addr, size, resv,
-					&next, true, true);
+					&next, true, true,
+					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
 		if (r)
 			goto err;
 
@@ -2399,7 +2404,8 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
 			uint32_t src_data,
 			struct dma_resv *resv,
 			struct dma_fence **f,
-			bool delayed)
+			bool delayed,
+			u64 k_job_id)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
@@ -2429,7 +2435,7 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
 			goto error;
 
 		r = amdgpu_ttm_fill_mem(ring, src_data, to, cur_size, resv,
-					&next, true, delayed);
+					&next, true, delayed, k_job_id);
 		if (r)
 			goto error;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 054d48823d5f..577ee04ce0bf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -175,7 +175,8 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
 			uint32_t src_data,
 			struct dma_resv *resv,
 			struct dma_fence **fence,
-			bool delayed);
+			bool delayed,
+			u64 k_job_id);
 
 int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
 void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index 74758b5ffc6c..5c38f0d30c87 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -1136,7 +1136,8 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
 	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->uvd.entity,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     64, direct ? AMDGPU_IB_POOL_DIRECT :
-				     AMDGPU_IB_POOL_DELAYED, &job);
+				     AMDGPU_IB_POOL_DELAYED, &job,
+				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index b9060bcd4806..ce318f5de047 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -449,7 +449,7 @@ static int amdgpu_vce_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
 	r = amdgpu_job_alloc_with_ib(ring->adev, &ring->adev->vce.entity,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
-				     &job);
+				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
@@ -540,7 +540,8 @@ static int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     ib_size_dw * 4,
 				     direct ? AMDGPU_IB_POOL_DIRECT :
-				     AMDGPU_IB_POOL_DELAYED, &job);
+				     AMDGPU_IB_POOL_DELAYED, &job,
+				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 5ae7cc0d5f57..5e0786ea911b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -626,7 +626,7 @@ static int amdgpu_vcn_dec_send_msg(struct amdgpu_ring *ring,
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
 				     64, AMDGPU_IB_POOL_DIRECT,
-				     &job);
+				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		goto err;
 
@@ -806,7 +806,7 @@ static int amdgpu_vcn_dec_sw_send_msg(struct amdgpu_ring *ring,
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
 				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
-				     &job);
+				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		goto err;
 
@@ -936,7 +936,7 @@ static int amdgpu_vcn_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t hand
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
 				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
-				     &job);
+				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
@@ -1003,7 +1003,7 @@ static int amdgpu_vcn_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
 				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
-				     &job);
+				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index db66b4232de0..2f8e83f840a8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -983,7 +983,8 @@ int amdgpu_vm_update_pdes(struct amdgpu_device *adev,
 	params.vm = vm;
 	params.immediate = immediate;
 
-	r = vm->update_funcs->prepare(&params, NULL);
+	r = vm->update_funcs->prepare(&params, NULL,
+				      AMDGPU_KERNEL_JOB_ID_VM_UPDATE_PDES);
 	if (r)
 		goto error;
 
@@ -1152,7 +1153,8 @@ int amdgpu_vm_update_range(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 		dma_fence_put(tmp);
 	}
 
-	r = vm->update_funcs->prepare(&params, sync);
+	r = vm->update_funcs->prepare(&params, sync,
+				      AMDGPU_KERNEL_JOB_ID_VM_UPDATE_RANGE);
 	if (r)
 		goto error_free;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 77207f4e448e..cf0ec94e8a07 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -308,7 +308,7 @@ struct amdgpu_vm_update_params {
 struct amdgpu_vm_update_funcs {
 	int (*map_table)(struct amdgpu_bo_vm *bo);
 	int (*prepare)(struct amdgpu_vm_update_params *p,
-		       struct amdgpu_sync *sync);
+		       struct amdgpu_sync *sync, u64 k_job_id);
 	int (*update)(struct amdgpu_vm_update_params *p,
 		      struct amdgpu_bo_vm *bo, uint64_t pe, uint64_t addr,
 		      unsigned count, uint32_t incr, uint64_t flags);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
index 0c1ef5850a5e..22e2e5b47341 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
@@ -40,12 +40,14 @@ static int amdgpu_vm_cpu_map_table(struct amdgpu_bo_vm *table)
  *
  * @p: see amdgpu_vm_update_params definition
  * @sync: sync obj with fences to wait on
+ * @k_job_id: the id for tracing/debug purposes
  *
  * Returns:
  * Negativ errno, 0 for success.
  */
 static int amdgpu_vm_cpu_prepare(struct amdgpu_vm_update_params *p,
-				 struct amdgpu_sync *sync)
+				 struct amdgpu_sync *sync,
+				 u64 k_job_id)
 {
 	if (!sync)
 		return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
index 30022123b0bf..f794fb1cc06e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
@@ -26,6 +26,7 @@
 #include "amdgpu.h"
 #include "amdgpu_trace.h"
 #include "amdgpu_vm.h"
+#include "amdgpu_job.h"
 
 /*
  * amdgpu_vm_pt_cursor - state for for_each_amdgpu_vm_pt
@@ -395,7 +396,8 @@ int amdgpu_vm_pt_clear(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	params.vm = vm;
 	params.immediate = immediate;
 
-	r = vm->update_funcs->prepare(&params, NULL);
+	r = vm->update_funcs->prepare(&params, NULL,
+				      AMDGPU_KERNEL_JOB_ID_VM_PT_CLEAR);
 	if (r)
 		goto exit;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
index 46d9fb433ab2..36805dcfa159 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
@@ -40,7 +40,7 @@ static int amdgpu_vm_sdma_map_table(struct amdgpu_bo_vm *table)
 
 /* Allocate a new job for @count PTE updates */
 static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
-				    unsigned int count)
+				    unsigned int count, u64 k_job_id)
 {
 	enum amdgpu_ib_pool_type pool = p->immediate ? AMDGPU_IB_POOL_IMMEDIATE
 		: AMDGPU_IB_POOL_DELAYED;
@@ -56,7 +56,7 @@ static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
 	ndw = min(ndw, AMDGPU_VM_SDMA_MAX_NUM_DW);
 
 	r = amdgpu_job_alloc_with_ib(p->adev, entity, AMDGPU_FENCE_OWNER_VM,
-				     ndw * 4, pool, &p->job);
+				     ndw * 4, pool, &p->job, k_job_id);
 	if (r)
 		return r;
 
@@ -69,16 +69,17 @@ static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
  *
  * @p: see amdgpu_vm_update_params definition
  * @sync: amdgpu_sync object with fences to wait for
+ * @k_job_id: identifier of the job, for tracing purpose
  *
  * Returns:
  * Negativ errno, 0 for success.
  */
 static int amdgpu_vm_sdma_prepare(struct amdgpu_vm_update_params *p,
-				  struct amdgpu_sync *sync)
+				  struct amdgpu_sync *sync, u64 k_job_id)
 {
 	int r;
 
-	r = amdgpu_vm_sdma_alloc_job(p, 0);
+	r = amdgpu_vm_sdma_alloc_job(p, 0, k_job_id);
 	if (r)
 		return r;
 
@@ -249,7 +250,8 @@ static int amdgpu_vm_sdma_update(struct amdgpu_vm_update_params *p,
 			if (r)
 				return r;
 
-			r = amdgpu_vm_sdma_alloc_job(p, count);
+			r = amdgpu_vm_sdma_alloc_job(p, count,
+						     AMDGPU_KERNEL_JOB_ID_VM_UPDATE);
 			if (r)
 				return r;
 		}
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
index 1c07b701d0e4..ceb94bbb03a4 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
@@ -217,7 +217,8 @@ static int uvd_v6_0_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle
 	int i, r;
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
-				     AMDGPU_IB_POOL_DIRECT, &job);
+				     AMDGPU_IB_POOL_DIRECT, &job,
+				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
@@ -281,7 +282,8 @@ static int uvd_v6_0_enc_get_destroy_msg(struct amdgpu_ring *ring,
 	int i, r;
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
-				     AMDGPU_IB_POOL_DIRECT, &job);
+				     AMDGPU_IB_POOL_DIRECT, &job,
+				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
index 9d237b5937fb..1f8866f3f63c 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
@@ -225,7 +225,8 @@ static int uvd_v7_0_enc_get_create_msg(struct amdgpu_ring *ring, u32 handle,
 	int i, r;
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
-				     AMDGPU_IB_POOL_DIRECT, &job);
+				     AMDGPU_IB_POOL_DIRECT, &job,
+				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
@@ -288,7 +289,8 @@ static int uvd_v7_0_enc_get_destroy_msg(struct amdgpu_ring *ring, u32 handle,
 	int i, r;
 
 	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
-				     AMDGPU_IB_POOL_DIRECT, &job);
+				     AMDGPU_IB_POOL_DIRECT, &job,
+				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 3653c563ee9a..46c84fc60af1 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -67,7 +67,8 @@ svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     num_dw * 4 + num_bytes,
 				     AMDGPU_IB_POOL_DELAYED,
-				     &job);
+				     &job,
+				     AMDGPU_KERNEL_JOB_ID_KFD_GART_MAP);
 	if (r)
 		return r;
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v2 02/20] drm/ttm: rework pipelined eviction fence handling
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
  2025-11-13 16:05 ` [PATCH v2 01/20] drm/amdgpu: give each kernel job a unique id Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-14 12:47   ` Christian König
  2025-11-18 15:00   ` Thomas Hellström
  2025-11-13 16:05 ` [PATCH v2 03/20] drm/amdgpu: remove direct_submit arg from amdgpu_copy_buffer Pierre-Eric Pelloux-Prayer
                   ` (17 subsequent siblings)
  19 siblings, 2 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Huang Rui, Matthew Auld, Matthew Brost, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, Sumit Semwal
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

Until now ttm stored a single pipelined eviction fence, which meant
drivers had to use a single entity for these evictions.

To lift this requirement, this commit allows up to 8 entities to
be used.

Ideally a dma_resv object would have been used as a container for
the eviction fences, but the locking rules make this complex.
dma_resv objects all share the same ww_class, which means
"Attempting to lock more mutexes after ww_acquire_done." is an
error.

One alternative considered was to introduce a 2nd ww_class for
specific resv objects holding a single "transient" lock (= the
resv lock would only be held for a short period, without taking
any other locks).

The other option is to statically reserve a fence array and extend
the existing code to deal with N fences instead of 1. This is the
approach taken here.

The driver is still responsible for reserving the correct number
of fence slots.

---
v2:
- simplified code
- dropped n_fences
- name changes
---

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |  8 ++--
 .../gpu/drm/ttm/tests/ttm_bo_validate_test.c  | 11 +++--
 drivers/gpu/drm/ttm/tests/ttm_resource_test.c |  5 +-
 drivers/gpu/drm/ttm/ttm_bo.c                  | 47 ++++++++++---------
 drivers/gpu/drm/ttm/ttm_bo_util.c             | 38 ++++++++++++---
 drivers/gpu/drm/ttm/ttm_resource.c            | 31 +++++++-----
 include/drm/ttm/ttm_resource.h                | 29 ++++++++----
 7 files changed, 109 insertions(+), 60 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 326476089db3..3b46a24a8c48 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -2156,7 +2156,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 {
 	struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
 	uint64_t size;
-	int r;
+	int r, i;
 
 	if (!adev->mman.initialized || amdgpu_in_reset(adev) ||
 	    adev->mman.buffer_funcs_enabled == enable || adev->gmc.is_app_apu)
@@ -2190,8 +2190,10 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 	} else {
 		drm_sched_entity_destroy(&adev->mman.high_pr);
 		drm_sched_entity_destroy(&adev->mman.low_pr);
-		dma_fence_put(man->move);
-		man->move = NULL;
+		for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
+			dma_fence_put(man->eviction_fences[i]);
+			man->eviction_fences[i] = NULL;
+		}
 	}
 
 	/* this just adjusts TTM size idea, which sets lpfn to the correct value */
diff --git a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
index 3148f5d3dbd6..8f71906c4238 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
@@ -651,7 +651,7 @@ static void ttm_bo_validate_move_fence_signaled(struct kunit *test)
 	int err;
 
 	man = ttm_manager_type(priv->ttm_dev, mem_type);
-	man->move = dma_fence_get_stub();
+	man->eviction_fences[0] = dma_fence_get_stub();
 
 	bo = ttm_bo_kunit_init(test, test->priv, size, NULL);
 	bo->type = bo_type;
@@ -668,7 +668,7 @@ static void ttm_bo_validate_move_fence_signaled(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, ctx.bytes_moved, size);
 
 	ttm_bo_put(bo);
-	dma_fence_put(man->move);
+	dma_fence_put(man->eviction_fences[0]);
 }
 
 static const struct ttm_bo_validate_test_case ttm_bo_validate_wait_cases[] = {
@@ -732,9 +732,9 @@ static void ttm_bo_validate_move_fence_not_signaled(struct kunit *test)
 
 	spin_lock_init(&fence_lock);
 	man = ttm_manager_type(priv->ttm_dev, fst_mem);
-	man->move = alloc_mock_fence(test);
+	man->eviction_fences[0] = alloc_mock_fence(test);
 
-	task = kthread_create(threaded_fence_signal, man->move, "move-fence-signal");
+	task = kthread_create(threaded_fence_signal, man->eviction_fences[0], "move-fence-signal");
 	if (IS_ERR(task))
 		KUNIT_FAIL(test, "Couldn't create move fence signal task\n");
 
@@ -742,7 +742,8 @@ static void ttm_bo_validate_move_fence_not_signaled(struct kunit *test)
 	err = ttm_bo_validate(bo, placement_val, &ctx_val);
 	dma_resv_unlock(bo->base.resv);
 
-	dma_fence_wait_timeout(man->move, false, MAX_SCHEDULE_TIMEOUT);
+	dma_fence_wait_timeout(man->eviction_fences[0], false, MAX_SCHEDULE_TIMEOUT);
+	man->eviction_fences[0] = NULL;
 
 	KUNIT_EXPECT_EQ(test, err, 0);
 	KUNIT_EXPECT_EQ(test, ctx_val.bytes_moved, size);
diff --git a/drivers/gpu/drm/ttm/tests/ttm_resource_test.c b/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
index e6ea2bd01f07..c0e4e35e0442 100644
--- a/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
+++ b/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
@@ -207,6 +207,7 @@ static void ttm_resource_manager_init_basic(struct kunit *test)
 	struct ttm_resource_test_priv *priv = test->priv;
 	struct ttm_resource_manager *man;
 	size_t size = SZ_16K;
+	int i;
 
 	man = kunit_kzalloc(test, sizeof(*man), GFP_KERNEL);
 	KUNIT_ASSERT_NOT_NULL(test, man);
@@ -216,8 +217,8 @@ static void ttm_resource_manager_init_basic(struct kunit *test)
 	KUNIT_ASSERT_PTR_EQ(test, man->bdev, priv->devs->ttm_dev);
 	KUNIT_ASSERT_EQ(test, man->size, size);
 	KUNIT_ASSERT_EQ(test, man->usage, 0);
-	KUNIT_ASSERT_NULL(test, man->move);
-	KUNIT_ASSERT_NOT_NULL(test, &man->move_lock);
+	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++)
+		KUNIT_ASSERT_NULL(test, man->eviction_fences[i]);
 
 	for (int i = 0; i < TTM_MAX_BO_PRIORITY; ++i)
 		KUNIT_ASSERT_TRUE(test, list_empty(&man->lru[i]));
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index f4d9e68b21e7..0b3732ed6f6c 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -658,34 +658,35 @@ void ttm_bo_unpin(struct ttm_buffer_object *bo)
 EXPORT_SYMBOL(ttm_bo_unpin);
 
 /*
- * Add the last move fence to the BO as kernel dependency and reserve a new
- * fence slot.
+ * Add the pipelined eviction fences to the BO as kernel dependencies and
+ * reserve new fence slots.
  */
-static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo,
-				 struct ttm_resource_manager *man,
-				 bool no_wait_gpu)
+static int ttm_bo_add_pipelined_eviction_fences(struct ttm_buffer_object *bo,
+						struct ttm_resource_manager *man,
+						bool no_wait_gpu)
 {
 	struct dma_fence *fence;
-	int ret;
+	int i;
 
-	spin_lock(&man->move_lock);
-	fence = dma_fence_get(man->move);
-	spin_unlock(&man->move_lock);
+	spin_lock(&man->eviction_lock);
+	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
+		fence = man->eviction_fences[i];
+		if (!fence)
+			continue;
 
-	if (!fence)
-		return 0;
-
-	if (no_wait_gpu) {
-		ret = dma_fence_is_signaled(fence) ? 0 : -EBUSY;
-		dma_fence_put(fence);
-		return ret;
+		if (no_wait_gpu) {
+			if (!dma_fence_is_signaled(fence)) {
+				spin_unlock(&man->eviction_lock);
+				return -EBUSY;
+			}
+		} else {
+			dma_resv_add_fence(bo->base.resv, fence, DMA_RESV_USAGE_KERNEL);
+		}
 	}
+	spin_unlock(&man->eviction_lock);
 
-	dma_resv_add_fence(bo->base.resv, fence, DMA_RESV_USAGE_KERNEL);
-
-	ret = dma_resv_reserve_fences(bo->base.resv, 1);
-	dma_fence_put(fence);
-	return ret;
+	/* TODO: this call should be removed. */
+	return dma_resv_reserve_fences(bo->base.resv, 1);
 }
 
 /**
@@ -718,7 +719,7 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
 	int i, ret;
 
 	ticket = dma_resv_locking_ctx(bo->base.resv);
-	ret = dma_resv_reserve_fences(bo->base.resv, 1);
+	ret = dma_resv_reserve_fences(bo->base.resv, TTM_NUM_MOVE_FENCES);
 	if (unlikely(ret))
 		return ret;
 
@@ -757,7 +758,7 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
 				return ret;
 		}
 
-		ret = ttm_bo_add_move_fence(bo, man, ctx->no_wait_gpu);
+		ret = ttm_bo_add_pipelined_eviction_fences(bo, man, ctx->no_wait_gpu);
 		if (unlikely(ret)) {
 			ttm_resource_free(bo, res);
 			if (ret == -EBUSY)
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index acbbca9d5c92..2ff35d55e462 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -258,7 +258,7 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
 	ret = dma_resv_trylock(&fbo->base.base._resv);
 	WARN_ON(!ret);
 
-	ret = dma_resv_reserve_fences(&fbo->base.base._resv, 1);
+	ret = dma_resv_reserve_fences(&fbo->base.base._resv, TTM_NUM_MOVE_FENCES);
 	if (ret) {
 		dma_resv_unlock(&fbo->base.base._resv);
 		kfree(fbo);
@@ -646,20 +646,44 @@ static void ttm_bo_move_pipeline_evict(struct ttm_buffer_object *bo,
 {
 	struct ttm_device *bdev = bo->bdev;
 	struct ttm_resource_manager *from;
+	struct dma_fence *tmp;
+	int i;
 
 	from = ttm_manager_type(bdev, bo->resource->mem_type);
 
 	/**
 	 * BO doesn't have a TTM we need to bind/unbind. Just remember
-	 * this eviction and free up the allocation
+	 * this eviction and free up the allocation.
+	 * The fence will be saved in the first free slot or in the slot
+	 * already used to store a fence from the same context. Since
+	 * drivers can't use more than TTM_NUM_MOVE_FENCES contexts for
+	 * evictions, we should always find a slot to use.
 	 */
-	spin_lock(&from->move_lock);
-	if (!from->move || dma_fence_is_later(fence, from->move)) {
-		dma_fence_put(from->move);
-		from->move = dma_fence_get(fence);
+	spin_lock(&from->eviction_lock);
+	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
+		tmp = from->eviction_fences[i];
+		if (!tmp)
+			break;
+		if (fence->context != tmp->context)
+			continue;
+		if (dma_fence_is_later(fence, tmp)) {
+			dma_fence_put(tmp);
+			break;
+		}
+		goto unlock;
+	}
+	if (i < TTM_NUM_MOVE_FENCES) {
+		from->eviction_fences[i] = dma_fence_get(fence);
+	} else {
+		WARN(1, "not enough fence slots for all fence contexts");
+		spin_unlock(&from->eviction_lock);
+		dma_fence_wait(fence, false);
+		goto end;
 	}
-	spin_unlock(&from->move_lock);
 
+unlock:
+	spin_unlock(&from->eviction_lock);
+end:
 	ttm_resource_free(bo, &bo->resource);
 }
 
diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index e2c82ad07eb4..62c34cafa387 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -523,14 +523,15 @@ void ttm_resource_manager_init(struct ttm_resource_manager *man,
 {
 	unsigned i;
 
-	spin_lock_init(&man->move_lock);
 	man->bdev = bdev;
 	man->size = size;
 	man->usage = 0;
 
 	for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i)
 		INIT_LIST_HEAD(&man->lru[i]);
-	man->move = NULL;
+	spin_lock_init(&man->eviction_lock);
+	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++)
+		man->eviction_fences[i] = NULL;
 }
 EXPORT_SYMBOL(ttm_resource_manager_init);
 
@@ -551,7 +552,7 @@ int ttm_resource_manager_evict_all(struct ttm_device *bdev,
 		.no_wait_gpu = false,
 	};
 	struct dma_fence *fence;
-	int ret;
+	int ret, i;
 
 	do {
 		ret = ttm_bo_evict_first(bdev, man, &ctx);
@@ -561,18 +562,24 @@ int ttm_resource_manager_evict_all(struct ttm_device *bdev,
 	if (ret && ret != -ENOENT)
 		return ret;
 
-	spin_lock(&man->move_lock);
-	fence = dma_fence_get(man->move);
-	spin_unlock(&man->move_lock);
+	ret = 0;
 
-	if (fence) {
-		ret = dma_fence_wait(fence, false);
-		dma_fence_put(fence);
-		if (ret)
-			return ret;
+	spin_lock(&man->eviction_lock);
+	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
+		fence = man->eviction_fences[i];
+		if (fence && !dma_fence_is_signaled(fence)) {
+			dma_fence_get(fence);
+			spin_unlock(&man->eviction_lock);
+			ret = dma_fence_wait(fence, false);
+			dma_fence_put(fence);
+			if (ret)
+				return ret;
+			spin_lock(&man->eviction_lock);
+		}
 	}
+	spin_unlock(&man->eviction_lock);
 
-	return 0;
+	return ret;
 }
 EXPORT_SYMBOL(ttm_resource_manager_evict_all);
 
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index f49daa504c36..50e6added509 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -50,6 +50,15 @@ struct io_mapping;
 struct sg_table;
 struct scatterlist;
 
+/**
+ * define TTM_NUM_MOVE_FENCES - How many entities can be used for evictions
+ *
+ * Pipelined evictions can be spread across multiple entities. This
+ * is the maximum number of entities that a driver can use for
+ * that purpose.
+ */
+#define TTM_NUM_MOVE_FENCES 8
+
 /**
  * enum ttm_lru_item_type - enumerate ttm_lru_item subclasses
  */
@@ -180,8 +189,8 @@ struct ttm_resource_manager_func {
  * @size: Size of the managed region.
  * @bdev: ttm device this manager belongs to
  * @func: structure pointer implementing the range manager. See above
- * @move_lock: lock for move fence
- * @move: The fence of the last pipelined move operation.
+ * @eviction_lock: lock for eviction fences
+ * @eviction_fences: The fences of the last pipelined eviction, one per fence context.
  * @lru: The lru list for this memory type.
  *
  * This structure is used to identify and manage memory types for a device.
@@ -195,12 +204,12 @@ struct ttm_resource_manager {
 	struct ttm_device *bdev;
 	uint64_t size;
 	const struct ttm_resource_manager_func *func;
-	spinlock_t move_lock;
 
-	/*
-	 * Protected by @move_lock.
+	/* This is very similar to a dma_resv object, but locking rules make
+	 * it difficult to use one in this context.
 	 */
-	struct dma_fence *move;
+	spinlock_t eviction_lock;
+	struct dma_fence *eviction_fences[TTM_NUM_MOVE_FENCES];
 
 	/*
 	 * Protected by the bdev->lru_lock.
@@ -421,8 +430,12 @@ static inline bool ttm_resource_manager_used(struct ttm_resource_manager *man)
 static inline void
 ttm_resource_manager_cleanup(struct ttm_resource_manager *man)
 {
-	dma_fence_put(man->move);
-	man->move = NULL;
+	int i;
+
+	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
+		dma_fence_put(man->eviction_fences[i]);
+		man->eviction_fences[i] = NULL;
+	}
 }
 
 void ttm_lru_bulk_move_init(struct ttm_lru_bulk_move *bulk);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v2 03/20] drm/amdgpu: remove direct_submit arg from amdgpu_copy_buffer
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
  2025-11-13 16:05 ` [PATCH v2 01/20] drm/amdgpu: give each kernel job a unique id Pierre-Eric Pelloux-Prayer
  2025-11-13 16:05 ` [PATCH v2 02/20] drm/ttm: rework pipelined eviction fence handling Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-14 12:48   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 04/20] drm/amdgpu: introduce amdgpu_ttm_buffer_entity Pierre-Eric Pelloux-Prayer
                   ` (16 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Felix Kuehling, Sumit Semwal
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

It was always false.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 20 +++++++------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  2 +-
 4 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
index 199693369c7c..02c2479a8840 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
@@ -39,7 +39,7 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
 	for (i = 0; i < n; i++) {
 		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
 		r = amdgpu_copy_buffer(ring, saddr, daddr, size, NULL, &fence,
-				       false, false, 0);
+				       false, 0);
 		if (r)
 			goto exit_do_move;
 		r = dma_fence_wait(fence, false);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 3b46a24a8c48..c985f57fa227 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -354,7 +354,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 		}
 
 		r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
-				       &next, false, true, copy_flags);
+				       &next, true, copy_flags);
 		if (r)
 			goto error;
 
@@ -2211,16 +2211,13 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 }
 
 static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
-				  bool direct_submit,
 				  unsigned int num_dw,
 				  struct dma_resv *resv,
 				  bool vm_needs_flush,
 				  struct amdgpu_job **job,
 				  bool delayed, u64 k_job_id)
 {
-	enum amdgpu_ib_pool_type pool = direct_submit ?
-		AMDGPU_IB_POOL_DIRECT :
-		AMDGPU_IB_POOL_DELAYED;
+	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
 	int r;
 	struct drm_sched_entity *entity = delayed ? &adev->mman.low_pr :
 						    &adev->mman.high_pr;
@@ -2246,7 +2243,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
 int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 		       uint64_t dst_offset, uint32_t byte_count,
 		       struct dma_resv *resv,
-		       struct dma_fence **fence, bool direct_submit,
+		       struct dma_fence **fence,
 		       bool vm_needs_flush, uint32_t copy_flags)
 {
 	struct amdgpu_device *adev = ring->adev;
@@ -2256,7 +2253,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 	unsigned int i;
 	int r;
 
-	if (!direct_submit && !ring->sched.ready) {
+	if (!ring->sched.ready) {
 		dev_err(adev->dev,
 			"Trying to move memory with ring turned off.\n");
 		return -EINVAL;
@@ -2265,7 +2262,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 	max_bytes = adev->mman.buffer_funcs->copy_max_bytes;
 	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
 	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
-	r = amdgpu_ttm_prepare_job(adev, direct_submit, num_dw,
+	r = amdgpu_ttm_prepare_job(adev, num_dw,
 				   resv, vm_needs_flush, &job, false,
 				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
 	if (r)
@@ -2283,10 +2280,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 
 	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
 	WARN_ON(job->ibs[0].length_dw > num_dw);
-	if (direct_submit)
-		r = amdgpu_job_submit_direct(job, ring, fence);
-	else
-		*fence = amdgpu_job_submit(job);
+	*fence = amdgpu_job_submit(job);
 	if (r)
 		goto error_free;
 
@@ -2315,7 +2309,7 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
 	max_bytes = adev->mman.buffer_funcs->fill_max_bytes;
 	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
 	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
-	r = amdgpu_ttm_prepare_job(adev, false, num_dw, resv, vm_needs_flush,
+	r = amdgpu_ttm_prepare_job(adev, num_dw, resv, vm_needs_flush,
 				   &job, delayed, k_job_id);
 	if (r)
 		return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 577ee04ce0bf..50e40380fe95 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -166,7 +166,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
 int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 		       uint64_t dst_offset, uint32_t byte_count,
 		       struct dma_resv *resv,
-		       struct dma_fence **fence, bool direct_submit,
+		       struct dma_fence **fence,
 		       bool vm_needs_flush, uint32_t copy_flags);
 int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 			    struct dma_resv *resv,
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 46c84fc60af1..378af0b2aaa9 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -153,7 +153,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
 		}
 
 		r = amdgpu_copy_buffer(ring, gart_s, gart_d, size * PAGE_SIZE,
-				       NULL, &next, false, true, 0);
+				       NULL, &next, true, 0);
 		if (r) {
 			dev_err(adev->dev, "fail %d to copy memory\n", r);
 			goto out_unlock;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v2 04/20] drm/amdgpu: introduce amdgpu_ttm_buffer_entity
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (2 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 03/20] drm/amdgpu: remove direct_submit arg from amdgpu_copy_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-14 12:57   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions Pierre-Eric Pelloux-Prayer
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Felix Kuehling
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

No functional change for now, but this struct will have more
fields added in the next commit.

Technically the change introduces a synchronisation issue, because
dependencies between successive jobs are not handled properly.
For instance, amdgpu_ttm_clear_buffer uses amdgpu_ttm_map_buffer
and then amdgpu_ttm_fill_mem, which use different entities
(default_entity, then the move/clear entity).
But everything still works as expected, because all entities use
the same SDMA instance for now and default_entity has a higher
priority, so its job always gets scheduled first.

The next commits will deal with these dependencies correctly.

---
v2: renamed amdgpu_ttm_buffer_entity
---

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 30 +++++++++++++++++-------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  | 12 ++++++----
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 13 ++++++----
 4 files changed, 39 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 9dcf51991b5b..8e2d41c9c271 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -687,7 +687,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	 * itself at least for GART.
 	 */
 	mutex_lock(&adev->mman.gtt_window_lock);
-	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.high_pr,
+	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
 				     &job, AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index c985f57fa227..42d448cd6a6d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -224,7 +224,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
 	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
 	num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
 
-	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
+	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     num_dw * 4 + num_bytes,
 				     AMDGPU_IB_POOL_DELAYED, &job,
@@ -1486,7 +1486,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
 		memcpy(adev->mman.sdma_access_ptr, buf, len);
 
 	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
-	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
+	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     num_dw * 4, AMDGPU_IB_POOL_DELAYED,
 				     &job,
@@ -2168,7 +2168,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 
 		ring = adev->mman.buffer_funcs_ring;
 		sched = &ring->sched;
-		r = drm_sched_entity_init(&adev->mman.high_pr,
+		r = drm_sched_entity_init(&adev->mman.default_entity.base,
 					  DRM_SCHED_PRIORITY_KERNEL, &sched,
 					  1, NULL);
 		if (r) {
@@ -2178,18 +2178,30 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 			return;
 		}
 
-		r = drm_sched_entity_init(&adev->mman.low_pr,
+		r = drm_sched_entity_init(&adev->mman.clear_entity.base,
+					  DRM_SCHED_PRIORITY_NORMAL, &sched,
+					  1, NULL);
+		if (r) {
+			dev_err(adev->dev,
+				"Failed setting up TTM BO clear entity (%d)\n",
+				r);
+			goto error_free_entity;
+		}
+
+		r = drm_sched_entity_init(&adev->mman.move_entity.base,
 					  DRM_SCHED_PRIORITY_NORMAL, &sched,
 					  1, NULL);
 		if (r) {
 			dev_err(adev->dev,
 				"Failed setting up TTM BO move entity (%d)\n",
 				r);
+			drm_sched_entity_destroy(&adev->mman.clear_entity.base);
 			goto error_free_entity;
 		}
 	} else {
-		drm_sched_entity_destroy(&adev->mman.high_pr);
-		drm_sched_entity_destroy(&adev->mman.low_pr);
+		drm_sched_entity_destroy(&adev->mman.default_entity.base);
+		drm_sched_entity_destroy(&adev->mman.clear_entity.base);
+		drm_sched_entity_destroy(&adev->mman.move_entity.base);
 		for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
 			dma_fence_put(man->eviction_fences[i]);
 			man->eviction_fences[i] = NULL;
@@ -2207,7 +2219,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 	return;
 
 error_free_entity:
-	drm_sched_entity_destroy(&adev->mman.high_pr);
+	drm_sched_entity_destroy(&adev->mman.default_entity.base);
 }
 
 static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
@@ -2219,8 +2231,8 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
 {
 	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
 	int r;
-	struct drm_sched_entity *entity = delayed ? &adev->mman.low_pr :
-						    &adev->mman.high_pr;
+	struct drm_sched_entity *entity = delayed ? &adev->mman.clear_entity.base :
+						    &adev->mman.move_entity.base;
 	r = amdgpu_job_alloc_with_ib(adev, entity,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     num_dw * 4, pool, job, k_job_id);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 50e40380fe95..d2295d6c2b67 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -52,6 +52,10 @@ struct amdgpu_gtt_mgr {
 	spinlock_t lock;
 };
 
+struct amdgpu_ttm_buffer_entity {
+	struct drm_sched_entity base;
+};
+
 struct amdgpu_mman {
 	struct ttm_device		bdev;
 	struct ttm_pool			*ttm_pools;
@@ -64,10 +68,10 @@ struct amdgpu_mman {
 	bool					buffer_funcs_enabled;
 
 	struct mutex				gtt_window_lock;
-	/* High priority scheduler entity for buffer moves */
-	struct drm_sched_entity			high_pr;
-	/* Low priority scheduler entity for VRAM clearing */
-	struct drm_sched_entity			low_pr;
+
+	struct amdgpu_ttm_buffer_entity default_entity;
+	struct amdgpu_ttm_buffer_entity clear_entity;
+	struct amdgpu_ttm_buffer_entity move_entity;
 
 	struct amdgpu_vram_mgr vram_mgr;
 	struct amdgpu_gtt_mgr gtt_mgr;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 378af0b2aaa9..d74ff6e90590 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -45,7 +45,9 @@ svm_migrate_direct_mapping_addr(struct amdgpu_device *adev, u64 addr)
 }
 
 static int
-svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
+svm_migrate_gart_map(struct amdgpu_ring *ring,
+		     struct amdgpu_ttm_buffer_entity *entity,
+		     u64 npages,
 		     dma_addr_t *addr, u64 *gart_addr, u64 flags)
 {
 	struct amdgpu_device *adev = ring->adev;
@@ -63,7 +65,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
 	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
 	num_bytes = npages * 8;
 
-	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
+	r = amdgpu_job_alloc_with_ib(adev, &entity->base,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     num_dw * 4 + num_bytes,
 				     AMDGPU_IB_POOL_DELAYED,
@@ -128,11 +130,14 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
 {
 	const u64 GTT_MAX_PAGES = AMDGPU_GTT_MAX_TRANSFER_SIZE;
 	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+	struct amdgpu_ttm_buffer_entity *entity;
 	u64 gart_s, gart_d;
 	struct dma_fence *next;
 	u64 size;
 	int r;
 
+	entity = &adev->mman.move_entity;
+
 	mutex_lock(&adev->mman.gtt_window_lock);
 
 	while (npages) {
@@ -140,10 +145,10 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
 
 		if (direction == FROM_VRAM_TO_RAM) {
 			gart_s = svm_migrate_direct_mapping_addr(adev, *vram);
-			r = svm_migrate_gart_map(ring, size, sys, &gart_d, 0);
+			r = svm_migrate_gart_map(ring, entity, size, sys, &gart_d, 0);
 
 		} else if (direction == FROM_RAM_TO_VRAM) {
-			r = svm_migrate_gart_map(ring, size, sys, &gart_s,
+			r = svm_migrate_gart_map(ring, entity, size, sys, &gart_s,
 						 KFD_IOCTL_SVM_FLAG_GPU_RO);
 			gart_d = svm_migrate_direct_mapping_addr(adev, *vram);
 		}
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (3 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 04/20] drm/amdgpu: introduce amdgpu_ttm_buffer_entity Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-14 13:07   ` Christian König
  2025-11-14 20:20   ` Felix Kuehling
  2025-11-13 16:05 ` [PATCH v2 06/20] drm/amdgpu: statically assign gart windows to ttm entities Pierre-Eric Pelloux-Prayer
                   ` (14 subsequent siblings)
  19 siblings, 2 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Felix Kuehling, Sumit Semwal
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

This way the caller can select the one it wants to use.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 75 +++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       | 16 ++--
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
 5 files changed, 60 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
index 02c2479a8840..b59040a8771f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
@@ -38,7 +38,8 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
 	stime = ktime_get();
 	for (i = 0; i < n; i++) {
 		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
-		r = amdgpu_copy_buffer(ring, saddr, daddr, size, NULL, &fence,
+		r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
+				       saddr, daddr, size, NULL, &fence,
 				       false, 0);
 		if (r)
 			goto exit_do_move;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index e08f58de4b17..c06c132a753c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -1321,8 +1321,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	if (r)
 		goto out;
 
-	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true,
-			       AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
+	r = amdgpu_fill_buffer(&adev->mman.clear_entity, abo, 0, &bo->base._resv,
+			       &fence, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
 	if (WARN_ON(r))
 		goto out;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 42d448cd6a6d..c8d59ca2b3bd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -164,6 +164,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
 
 /**
  * amdgpu_ttm_map_buffer - Map memory into the GART windows
+ * @entity: entity to run the window setup job
  * @bo: buffer object to map
  * @mem: memory object to map
  * @mm_cur: range to map
@@ -176,7 +177,8 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
  * Setup one of the GART windows to access a specific piece of memory or return
  * the physical address for local memory.
  */
-static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
+static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
+				 struct ttm_buffer_object *bo,
 				 struct ttm_resource *mem,
 				 struct amdgpu_res_cursor *mm_cur,
 				 unsigned int window, struct amdgpu_ring *ring,
@@ -224,7 +226,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
 	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
 	num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
 
-	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
+	r = amdgpu_job_alloc_with_ib(adev, entity,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     num_dw * 4 + num_bytes,
 				     AMDGPU_IB_POOL_DELAYED, &job,
@@ -274,6 +276,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
 /**
  * amdgpu_ttm_copy_mem_to_mem - Helper function for copy
  * @adev: amdgpu device
+ * @entity: entity to run the jobs
  * @src: buffer/address where to read from
  * @dst: buffer/address where to write to
  * @size: number of bytes to copy
@@ -288,6 +291,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
  */
 __attribute__((nonnull))
 static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
+				      struct drm_sched_entity *entity,
 				      const struct amdgpu_copy_mem *src,
 				      const struct amdgpu_copy_mem *dst,
 				      uint64_t size, bool tmz,
@@ -320,12 +324,14 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 		cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
 
 		/* Map src to window 0 and dst to window 1. */
-		r = amdgpu_ttm_map_buffer(src->bo, src->mem, &src_mm,
+		r = amdgpu_ttm_map_buffer(entity,
+					  src->bo, src->mem, &src_mm,
 					  0, ring, tmz, &cur_size, &from);
 		if (r)
 			goto error;
 
-		r = amdgpu_ttm_map_buffer(dst->bo, dst->mem, &dst_mm,
+		r = amdgpu_ttm_map_buffer(entity,
+					  dst->bo, dst->mem, &dst_mm,
 					  1, ring, tmz, &cur_size, &to);
 		if (r)
 			goto error;
@@ -353,7 +359,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 							     write_compress_disable));
 		}
 
-		r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
+		r = amdgpu_copy_buffer(ring, entity, from, to, cur_size, resv,
 				       &next, true, copy_flags);
 		if (r)
 			goto error;
@@ -394,7 +400,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 	src.offset = 0;
 	dst.offset = 0;
 
-	r = amdgpu_ttm_copy_mem_to_mem(adev, &src, &dst,
+	r = amdgpu_ttm_copy_mem_to_mem(adev,
+				       &adev->mman.move_entity.base,
+				       &src, &dst,
 				       new_mem->size,
 				       amdgpu_bo_encrypted(abo),
 				       bo->base.resv, &fence);
@@ -406,8 +414,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
 		struct dma_fence *wipe_fence = NULL;
 
-		r = amdgpu_fill_buffer(abo, 0, NULL, &wipe_fence,
-				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
+		r = amdgpu_fill_buffer(&adev->mman.move_entity,
+				       abo, 0, NULL, &wipe_fence,
+				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
 		if (r) {
 			goto error;
 		} else if (wipe_fence) {
@@ -2223,16 +2232,15 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 }
 
 static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
+				  struct drm_sched_entity *entity,
 				  unsigned int num_dw,
 				  struct dma_resv *resv,
 				  bool vm_needs_flush,
 				  struct amdgpu_job **job,
-				  bool delayed, u64 k_job_id)
+				  u64 k_job_id)
 {
 	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
 	int r;
-	struct drm_sched_entity *entity = delayed ? &adev->mman.clear_entity.base :
-						    &adev->mman.move_entity.base;
 	r = amdgpu_job_alloc_with_ib(adev, entity,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     num_dw * 4, pool, job, k_job_id);
@@ -2252,7 +2260,9 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
 						   DMA_RESV_USAGE_BOOKKEEP);
 }
 
-int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
+int amdgpu_copy_buffer(struct amdgpu_ring *ring,
+		       struct drm_sched_entity *entity,
+		       uint64_t src_offset,
 		       uint64_t dst_offset, uint32_t byte_count,
 		       struct dma_resv *resv,
 		       struct dma_fence **fence,
@@ -2274,8 +2284,8 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 	max_bytes = adev->mman.buffer_funcs->copy_max_bytes;
 	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
 	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
-	r = amdgpu_ttm_prepare_job(adev, num_dw,
-				   resv, vm_needs_flush, &job, false,
+	r = amdgpu_ttm_prepare_job(adev, entity, num_dw,
+				   resv, vm_needs_flush, &job,
 				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
 	if (r)
 		return r;
@@ -2304,11 +2314,13 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 	return r;
 }
 
-static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
+static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
+			       struct drm_sched_entity *entity,
+			       uint32_t src_data,
 			       uint64_t dst_addr, uint32_t byte_count,
 			       struct dma_resv *resv,
 			       struct dma_fence **fence,
-			       bool vm_needs_flush, bool delayed,
+			       bool vm_needs_flush,
 			       u64 k_job_id)
 {
 	struct amdgpu_device *adev = ring->adev;
@@ -2321,8 +2333,8 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
 	max_bytes = adev->mman.buffer_funcs->fill_max_bytes;
 	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
 	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
-	r = amdgpu_ttm_prepare_job(adev, num_dw, resv, vm_needs_flush,
-				   &job, delayed, k_job_id);
+	r = amdgpu_ttm_prepare_job(adev, entity, num_dw, resv,
+				   vm_needs_flush, &job, k_job_id);
 	if (r)
 		return r;
 
@@ -2386,13 +2398,14 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 		/* Never clear more than 256MiB at once to avoid timeouts */
 		size = min(cursor.size, 256ULL << 20);
 
-		r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &cursor,
+		r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity.base,
+					  &bo->tbo, bo->tbo.resource, &cursor,
 					  1, ring, false, &size, &addr);
 		if (r)
 			goto err;
 
-		r = amdgpu_ttm_fill_mem(ring, 0, addr, size, resv,
-					&next, true, true,
+		r = amdgpu_ttm_fill_mem(ring, &adev->mman.clear_entity.base, 0, addr, size, resv,
+					&next, true,
 					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
 		if (r)
 			goto err;
@@ -2408,12 +2421,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 	return r;
 }
 
-int amdgpu_fill_buffer(struct amdgpu_bo *bo,
-			uint32_t src_data,
-			struct dma_resv *resv,
-			struct dma_fence **f,
-			bool delayed,
-			u64 k_job_id)
+int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
+		       struct amdgpu_bo *bo,
+		       uint32_t src_data,
+		       struct dma_resv *resv,
+		       struct dma_fence **f,
+		       u64 k_job_id)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
@@ -2437,13 +2450,15 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
 		/* Never fill more than 256MiB at once to avoid timeouts */
 		cur_size = min(dst.size, 256ULL << 20);
 
-		r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &dst,
+		r = amdgpu_ttm_map_buffer(&entity->base,
+					  &bo->tbo, bo->tbo.resource, &dst,
 					  1, ring, false, &cur_size, &to);
 		if (r)
 			goto error;
 
-		r = amdgpu_ttm_fill_mem(ring, src_data, to, cur_size, resv,
-					&next, true, delayed, k_job_id);
+		r = amdgpu_ttm_fill_mem(ring, &entity->base,
+					src_data, to, cur_size, resv,
+					&next, true, k_job_id);
 		if (r)
 			goto error;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index d2295d6c2b67..e1655f86a016 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -167,7 +167,9 @@ int amdgpu_ttm_init(struct amdgpu_device *adev);
 void amdgpu_ttm_fini(struct amdgpu_device *adev);
 void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
 					bool enable);
-int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
+int amdgpu_copy_buffer(struct amdgpu_ring *ring,
+		       struct drm_sched_entity *entity,
+		       uint64_t src_offset,
 		       uint64_t dst_offset, uint32_t byte_count,
 		       struct dma_resv *resv,
 		       struct dma_fence **fence,
@@ -175,12 +177,12 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
 int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 			    struct dma_resv *resv,
 			    struct dma_fence **fence);
-int amdgpu_fill_buffer(struct amdgpu_bo *bo,
-			uint32_t src_data,
-			struct dma_resv *resv,
-			struct dma_fence **fence,
-			bool delayed,
-			u64 k_job_id);
+int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
+		       struct amdgpu_bo *bo,
+		       uint32_t src_data,
+		       struct dma_resv *resv,
+		       struct dma_fence **f,
+		       u64 k_job_id);
 
 int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
 void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index d74ff6e90590..09756132fa1b 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -157,7 +157,8 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
 			goto out_unlock;
 		}
 
-		r = amdgpu_copy_buffer(ring, gart_s, gart_d, size * PAGE_SIZE,
+		r = amdgpu_copy_buffer(ring, &entity->base,
+				       gart_s, gart_d, size * PAGE_SIZE,
 				       NULL, &next, true, 0);
 		if (r) {
 			dev_err(adev->dev, "fail %d to copy memory\n", r);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v2 06/20] drm/amdgpu: statically assign gart windows to ttm entities
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (4 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-14 15:15   ` Christian König
  2025-11-14 20:24   ` Felix Kuehling
  2025-11-13 16:05 ` [PATCH v2 07/20] drm/amdgpu: allocate multiple clear entities Pierre-Eric Pelloux-Prayer
                   ` (13 subsequent siblings)
  19 siblings, 2 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Felix Kuehling
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

If multiple entities share the same window, we must make sure
that jobs using it are executed sequentially.

This commit gives a separate window id to each entity, so jobs
from multiple entities can execute in parallel if needed.
(For now they all use the first SDMA engine, so it makes no
difference yet.)

default_entity doesn't get any GART windows reserved since it
has no use for them.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c  |  9 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 50 ++++++++++++++----------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  |  9 +++--
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  8 ++--
 4 files changed, 46 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 8e2d41c9c271..2a444d02cf4b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -686,7 +686,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	 * translation. Avoid this by doing the invalidation from the SDMA
 	 * itself at least for GART.
 	 */
-	mutex_lock(&adev->mman.gtt_window_lock);
+	mutex_lock(&adev->mman.clear_entity.gart_window_lock);
+	mutex_lock(&adev->mman.move_entity.gart_window_lock);
 	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
@@ -699,7 +700,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop;
 	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
 	fence = amdgpu_job_submit(job);
-	mutex_unlock(&adev->mman.gtt_window_lock);
+	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
+	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
 
 	dma_fence_wait(fence, false);
 	dma_fence_put(fence);
@@ -707,7 +709,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	return;
 
 error_alloc:
-	mutex_unlock(&adev->mman.gtt_window_lock);
+	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
+	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
 	dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index c8d59ca2b3bd..7193a341689d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -291,7 +291,7 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
  */
 __attribute__((nonnull))
 static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
-				      struct drm_sched_entity *entity,
+				      struct amdgpu_ttm_buffer_entity *entity,
 				      const struct amdgpu_copy_mem *src,
 				      const struct amdgpu_copy_mem *dst,
 				      uint64_t size, bool tmz,
@@ -314,7 +314,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 	amdgpu_res_first(src->mem, src->offset, size, &src_mm);
 	amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
 
-	mutex_lock(&adev->mman.gtt_window_lock);
+	mutex_lock(&entity->gart_window_lock);
 	while (src_mm.remaining) {
 		uint64_t from, to, cur_size, tiling_flags;
 		uint32_t num_type, data_format, max_com, write_compress_disable;
@@ -324,15 +324,15 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 		cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
 
 		/* Map src to window 0 and dst to window 1. */
-		r = amdgpu_ttm_map_buffer(entity,
+		r = amdgpu_ttm_map_buffer(&entity->base,
 					  src->bo, src->mem, &src_mm,
-					  0, ring, tmz, &cur_size, &from);
+					  entity->gart_window_id0, ring, tmz, &cur_size, &from);
 		if (r)
 			goto error;
 
-		r = amdgpu_ttm_map_buffer(entity,
+		r = amdgpu_ttm_map_buffer(&entity->base,
 					  dst->bo, dst->mem, &dst_mm,
-					  1, ring, tmz, &cur_size, &to);
+					  entity->gart_window_id1, ring, tmz, &cur_size, &to);
 		if (r)
 			goto error;
 
@@ -359,7 +359,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 							     write_compress_disable));
 		}
 
-		r = amdgpu_copy_buffer(ring, entity, from, to, cur_size, resv,
+		r = amdgpu_copy_buffer(ring, &entity->base, from, to, cur_size, resv,
 				       &next, true, copy_flags);
 		if (r)
 			goto error;
@@ -371,7 +371,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 		amdgpu_res_next(&dst_mm, cur_size);
 	}
 error:
-	mutex_unlock(&adev->mman.gtt_window_lock);
+	mutex_unlock(&entity->gart_window_lock);
 	*f = fence;
 	return r;
 }
@@ -401,7 +401,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 	dst.offset = 0;
 
 	r = amdgpu_ttm_copy_mem_to_mem(adev,
-				       &adev->mman.move_entity.base,
+				       &adev->mman.move_entity,
 				       &src, &dst,
 				       new_mem->size,
 				       amdgpu_bo_encrypted(abo),
@@ -1893,8 +1893,6 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
 	uint64_t gtt_size;
 	int r;
 
-	mutex_init(&adev->mman.gtt_window_lock);
-
 	dma_set_max_seg_size(adev->dev, UINT_MAX);
 	/* No others user of address space so set it to 0 */
 	r = ttm_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev,
@@ -2207,6 +2205,15 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 			drm_sched_entity_destroy(&adev->mman.clear_entity.base);
 			goto error_free_entity;
 		}
+
+		/* Statically assign GART windows to each entity. */
+		mutex_init(&adev->mman.default_entity.gart_window_lock);
+		adev->mman.move_entity.gart_window_id0 = 0;
+		adev->mman.move_entity.gart_window_id1 = 1;
+		mutex_init(&adev->mman.move_entity.gart_window_lock);
+		/* Clearing entity doesn't use id0 */
+		adev->mman.clear_entity.gart_window_id1 = 2;
+		mutex_init(&adev->mman.clear_entity.gart_window_lock);
 	} else {
 		drm_sched_entity_destroy(&adev->mman.default_entity.base);
 		drm_sched_entity_destroy(&adev->mman.clear_entity.base);
@@ -2371,6 +2378,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+	struct amdgpu_ttm_buffer_entity *entity;
 	struct amdgpu_res_cursor cursor;
 	u64 addr;
 	int r = 0;
@@ -2381,11 +2389,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 	if (!fence)
 		return -EINVAL;
 
+	entity = &adev->mman.clear_entity;
 	*fence = dma_fence_get_stub();
 
 	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
 
-	mutex_lock(&adev->mman.gtt_window_lock);
+	mutex_lock(&entity->gart_window_lock);
 	while (cursor.remaining) {
 		struct dma_fence *next = NULL;
 		u64 size;
@@ -2398,13 +2407,13 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 		/* Never clear more than 256MiB at once to avoid timeouts */
 		size = min(cursor.size, 256ULL << 20);
 
-		r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity.base,
+		r = amdgpu_ttm_map_buffer(&entity->base,
 					  &bo->tbo, bo->tbo.resource, &cursor,
-					  1, ring, false, &size, &addr);
+					  entity->gart_window_id1, ring, false, &size, &addr);
 		if (r)
 			goto err;
 
-		r = amdgpu_ttm_fill_mem(ring, &adev->mman.clear_entity.base, 0, addr, size, resv,
+		r = amdgpu_ttm_fill_mem(ring, &entity->base, 0, addr, size, resv,
 					&next, true,
 					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
 		if (r)
@@ -2416,12 +2425,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 		amdgpu_res_next(&cursor, size);
 	}
 err:
-	mutex_unlock(&adev->mman.gtt_window_lock);
+	mutex_unlock(&entity->gart_window_lock);
 
 	return r;
 }
 
-int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
+int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		       struct amdgpu_bo *bo,
 		       uint32_t src_data,
 		       struct dma_resv *resv,
@@ -2442,7 +2451,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
 
 	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &dst);
 
-	mutex_lock(&adev->mman.gtt_window_lock);
+	mutex_lock(&entity->gart_window_lock);
 	while (dst.remaining) {
 		struct dma_fence *next;
 		uint64_t cur_size, to;
@@ -2452,7 +2461,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
 
 		r = amdgpu_ttm_map_buffer(&entity->base,
 					  &bo->tbo, bo->tbo.resource, &dst,
-					  1, ring, false, &cur_size, &to);
+					  entity->gart_window_id1, ring, false,
+					  &cur_size, &to);
 		if (r)
 			goto error;
 
@@ -2468,7 +2478,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
 		amdgpu_res_next(&dst, cur_size);
 	}
 error:
-	mutex_unlock(&adev->mman.gtt_window_lock);
+	mutex_unlock(&entity->gart_window_lock);
 	if (f)
 		*f = dma_fence_get(fence);
 	dma_fence_put(fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index e1655f86a016..f4f762be9fdd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -39,7 +39,7 @@
 #define __AMDGPU_PL_NUM	(TTM_PL_PRIV + 6)
 
 #define AMDGPU_GTT_MAX_TRANSFER_SIZE	512
-#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS	2
+#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS	3
 
 extern const struct attribute_group amdgpu_vram_mgr_attr_group;
 extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
@@ -54,6 +54,9 @@ struct amdgpu_gtt_mgr {
 
 struct amdgpu_ttm_buffer_entity {
 	struct drm_sched_entity base;
+	struct mutex		gart_window_lock;
+	u32			gart_window_id0;
+	u32			gart_window_id1;
 };
 
 struct amdgpu_mman {
@@ -69,7 +72,7 @@ struct amdgpu_mman {
 
 	struct mutex				gtt_window_lock;
 
-	struct amdgpu_ttm_buffer_entity default_entity;
+	struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
 	struct amdgpu_ttm_buffer_entity clear_entity;
 	struct amdgpu_ttm_buffer_entity move_entity;
 
@@ -177,7 +180,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring,
 int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 			    struct dma_resv *resv,
 			    struct dma_fence **fence);
-int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
+int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		       struct amdgpu_bo *bo,
 		       uint32_t src_data,
 		       struct dma_resv *resv,
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 09756132fa1b..bc47fc362a17 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -60,7 +60,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
 	int r;
 
 	/* use gart window 0 */
-	*gart_addr = adev->gmc.gart_start;
+	*gart_addr = entity->gart_window_id0;
 
 	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
 	num_bytes = npages * 8;
@@ -116,7 +116,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
  * multiple GTT_MAX_PAGES transfer, all sdma operations are serialized, wait for
  * the last sdma finish fence which is returned to check copy memory is done.
  *
- * Context: Process context, takes and releases gtt_window_lock
+ * Context: Process context, takes and releases gart_window_lock
  *
  * Return:
  * 0 - OK, otherwise error code
@@ -138,7 +138,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
 
 	entity = &adev->mman.move_entity;
 
-	mutex_lock(&adev->mman.gtt_window_lock);
+	mutex_lock(&entity->gart_window_lock);
 
 	while (npages) {
 		size = min(GTT_MAX_PAGES, npages);
@@ -175,7 +175,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
 	}
 
 out_unlock:
-	mutex_unlock(&adev->mman.gtt_window_lock);
+	mutex_unlock(&entity->gart_window_lock);
 
 	return r;
 }
-- 
2.43.0



* [PATCH v2 07/20] drm/amdgpu: allocate multiple clear entities
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (5 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 06/20] drm/amdgpu: statically assign gart windows to ttm entities Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-17  8:41   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 08/20] drm/amdgpu: allocate multiple move entities Pierre-Eric Pelloux-Prayer
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

No functional change for now, as we always use entity 0.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c     | 11 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c |  6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c  |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     | 76 +++++++++++++--------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h     | 10 +--
 5 files changed, 66 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 2a444d02cf4b..e73dcfed5338 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -655,7 +655,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
 	struct dma_fence *fence;
 	struct amdgpu_job *job;
-	int r;
+	int r, i;
 
 	if (!hub->sdma_invalidation_workaround || vmid ||
 	    !adev->mman.buffer_funcs_enabled || !adev->ib_pool_ready ||
@@ -686,8 +686,9 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	 * translation. Avoid this by doing the invalidation from the SDMA
 	 * itself at least for GART.
 	 */
-	mutex_lock(&adev->mman.clear_entity.gart_window_lock);
 	mutex_lock(&adev->mman.move_entity.gart_window_lock);
+	for (i = 0; i < adev->mman.num_clear_entities; i++)
+		mutex_lock(&adev->mman.clear_entities[i].gart_window_lock);
 	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
@@ -701,7 +702,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
 	fence = amdgpu_job_submit(job);
 	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
-	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
+	for (i = 0; i < adev->mman.num_clear_entities; i++)
+		mutex_unlock(&adev->mman.clear_entities[i].gart_window_lock);
 
 	dma_fence_wait(fence, false);
 	dma_fence_put(fence);
@@ -710,7 +712,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 
 error_alloc:
 	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
-	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
+	for (i = 0; i < adev->mman.num_clear_entities; i++)
+		mutex_unlock(&adev->mman.clear_entities[i].gart_window_lock);
 	dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index 0760e70402ec..3771e89035f5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -269,10 +269,12 @@ static const struct ttm_resource_manager_func amdgpu_gtt_mgr_func = {
  *
  * @adev: amdgpu_device pointer
  * @gtt_size: maximum size of GTT
+ * @reserved_windows: num of already used windows
  *
  * Allocate and initialize the GTT manager.
  */
-int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
+int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size,
+			u32 reserved_windows)
 {
 	struct amdgpu_gtt_mgr *mgr = &adev->mman.gtt_mgr;
 	struct ttm_resource_manager *man = &mgr->manager;
@@ -283,7 +285,7 @@ int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
 
 	ttm_resource_manager_init(man, &adev->mman.bdev, gtt_size);
 
-	start = AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS;
+	start = AMDGPU_GTT_MAX_TRANSFER_SIZE * reserved_windows;
 	size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
 	drm_mm_init(&mgr->mm, start, size);
 	spin_lock_init(&mgr->lock);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index c06c132a753c..e7b2cae031b3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -1321,7 +1321,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	if (r)
 		goto out;
 
-	r = amdgpu_fill_buffer(&adev->mman.clear_entity, abo, 0, &bo->base._resv,
+	r = amdgpu_fill_buffer(&adev->mman.clear_entities[0], abo, 0, &bo->base._resv,
 			       &fence, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
 	if (WARN_ON(r))
 		goto out;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 7193a341689d..2f305ad32080 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1891,6 +1891,7 @@ static void amdgpu_ttm_mmio_remap_bo_fini(struct amdgpu_device *adev)
 int amdgpu_ttm_init(struct amdgpu_device *adev)
 {
 	uint64_t gtt_size;
+	u32 gart_window;
 	int r;
 
 	dma_set_max_seg_size(adev->dev, UINT_MAX);
@@ -1923,7 +1924,7 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
 	}
 
 	/* Change the size here instead of the init above so only lpfn is affected */
-	amdgpu_ttm_set_buffer_funcs_status(adev, false);
+	gart_window = amdgpu_ttm_set_buffer_funcs_status(adev, false);
 #ifdef CONFIG_64BIT
 #ifdef CONFIG_X86
 	if (adev->gmc.xgmi.connected_to_cpu)
@@ -2019,7 +2020,7 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
 	}
 
 	/* Initialize GTT memory pool */
-	r = amdgpu_gtt_mgr_init(adev, gtt_size);
+	r = amdgpu_gtt_mgr_init(adev, gtt_size, gart_window);
 	if (r) {
 		dev_err(adev->dev, "Failed initializing GTT heap.\n");
 		return r;
@@ -2158,16 +2159,22 @@ void amdgpu_ttm_fini(struct amdgpu_device *adev)
  *
  * Enable/disable use of buffer functions during suspend/resume. This should
  * only be called at bootup or when userspace isn't running.
+ *
+ * Returns: the number of GART reserved window
  */
-void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
+u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 {
 	struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
 	uint64_t size;
-	int r, i;
+	int r, i, j;
+	u32 num_clear_entities, windows, w;
+
+	num_clear_entities = adev->sdma.num_instances;
+	windows = adev->gmc.is_app_apu ? 0 : (2 + num_clear_entities);
 
 	if (!adev->mman.initialized || amdgpu_in_reset(adev) ||
 	    adev->mman.buffer_funcs_enabled == enable || adev->gmc.is_app_apu)
-		return;
+		return windows;
 
 	if (enable) {
 		struct amdgpu_ring *ring;
@@ -2180,19 +2187,9 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 					  1, NULL);
 		if (r) {
 			dev_err(adev->dev,
-				"Failed setting up TTM BO move entity (%d)\n",
+				"Failed setting up TTM BO eviction entity (%d)\n",
 				r);
-			return;
-		}
-
-		r = drm_sched_entity_init(&adev->mman.clear_entity.base,
-					  DRM_SCHED_PRIORITY_NORMAL, &sched,
-					  1, NULL);
-		if (r) {
-			dev_err(adev->dev,
-				"Failed setting up TTM BO clear entity (%d)\n",
-				r);
-			goto error_free_entity;
+			return 0;
 		}
 
 		r = drm_sched_entity_init(&adev->mman.move_entity.base,
@@ -2202,26 +2199,51 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 			dev_err(adev->dev,
 				"Failed setting up TTM BO move entity (%d)\n",
 				r);
-			drm_sched_entity_destroy(&adev->mman.clear_entity.base);
 			goto error_free_entity;
 		}
 
+		adev->mman.num_clear_entities = num_clear_entities;
+		adev->mman.clear_entities = kcalloc(num_clear_entities,
+						    sizeof(struct amdgpu_ttm_buffer_entity),
+						    GFP_KERNEL);
+		if (!adev->mman.clear_entities)
+			goto error_free_entity;
+
+		for (i = 0; i < num_clear_entities; i++) {
+			r = drm_sched_entity_init(&adev->mman.clear_entities[i].base,
+						  DRM_SCHED_PRIORITY_NORMAL, &sched,
+						  1, NULL);
+			if (r) {
+				for (j = 0; j < i; j++)
+					drm_sched_entity_destroy(
+						&adev->mman.clear_entities[j].base);
+				kfree(adev->mman.clear_entities);
+				goto error_free_entity;
+			}
+		}
+
 		/* Statically assign GART windows to each entity. */
+		w = 0;
 		mutex_init(&adev->mman.default_entity.gart_window_lock);
-		adev->mman.move_entity.gart_window_id0 = 0;
-		adev->mman.move_entity.gart_window_id1 = 1;
+		adev->mman.move_entity.gart_window_id0 = w++;
+		adev->mman.move_entity.gart_window_id1 = w++;
 		mutex_init(&adev->mman.move_entity.gart_window_lock);
-		/* Clearing entity doesn't use id0 */
-		adev->mman.clear_entity.gart_window_id1 = 2;
-		mutex_init(&adev->mman.clear_entity.gart_window_lock);
+		for (i = 0; i < num_clear_entities; i++) {
+			/* Clearing entities don't use id0 */
+			adev->mman.clear_entities[i].gart_window_id1 = w++;
+			mutex_init(&adev->mman.clear_entities[i].gart_window_lock);
+		}
+		WARN_ON(w != windows);
 	} else {
 		drm_sched_entity_destroy(&adev->mman.default_entity.base);
-		drm_sched_entity_destroy(&adev->mman.clear_entity.base);
 		drm_sched_entity_destroy(&adev->mman.move_entity.base);
+		for (i = 0; i < num_clear_entities; i++)
+			drm_sched_entity_destroy(&adev->mman.clear_entities[i].base);
 		for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
 			dma_fence_put(man->eviction_fences[i]);
 			man->eviction_fences[i] = NULL;
 		}
+		kfree(adev->mman.clear_entities);
 	}
 
 	/* this just adjusts TTM size idea, which sets lpfn to the correct value */
@@ -2232,10 +2254,11 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 	man->size = size;
 	adev->mman.buffer_funcs_enabled = enable;
 
-	return;
+	return windows;
 
 error_free_entity:
 	drm_sched_entity_destroy(&adev->mman.default_entity.base);
+	return 0;
 }
 
 static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
@@ -2388,8 +2411,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 
 	if (!fence)
 		return -EINVAL;
-
-	entity = &adev->mman.clear_entity;
+	entity = &adev->mman.clear_entities[0];
 	*fence = dma_fence_get_stub();
 
 	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index f4f762be9fdd..797f851a4578 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -39,7 +39,6 @@
 #define __AMDGPU_PL_NUM	(TTM_PL_PRIV + 6)
 
 #define AMDGPU_GTT_MAX_TRANSFER_SIZE	512
-#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS	3
 
 extern const struct attribute_group amdgpu_vram_mgr_attr_group;
 extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
@@ -73,8 +72,9 @@ struct amdgpu_mman {
 	struct mutex				gtt_window_lock;
 
 	struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
-	struct amdgpu_ttm_buffer_entity clear_entity;
 	struct amdgpu_ttm_buffer_entity move_entity;
+	struct amdgpu_ttm_buffer_entity *clear_entities;
+	u32 num_clear_entities;
 
 	struct amdgpu_vram_mgr vram_mgr;
 	struct amdgpu_gtt_mgr gtt_mgr;
@@ -134,7 +134,7 @@ struct amdgpu_copy_mem {
 #define AMDGPU_COPY_FLAGS_GET(value, field) \
 	(((__u32)(value) >> AMDGPU_COPY_FLAGS_##field##_SHIFT) & AMDGPU_COPY_FLAGS_##field##_MASK)
 
-int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size);
+int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size, u32 reserved_windows);
 void amdgpu_gtt_mgr_fini(struct amdgpu_device *adev);
 int amdgpu_preempt_mgr_init(struct amdgpu_device *adev);
 void amdgpu_preempt_mgr_fini(struct amdgpu_device *adev);
@@ -168,8 +168,8 @@ bool amdgpu_res_cpu_visible(struct amdgpu_device *adev,
 
 int amdgpu_ttm_init(struct amdgpu_device *adev);
 void amdgpu_ttm_fini(struct amdgpu_device *adev);
-void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
-					bool enable);
+u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
+				       bool enable);
 int amdgpu_copy_buffer(struct amdgpu_ring *ring,
 		       struct drm_sched_entity *entity,
 		       uint64_t src_offset,
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v2 08/20] drm/amdgpu: allocate multiple move entities
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (6 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 07/20] drm/amdgpu: allocate multiple clear entities Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-14 20:57   ` Felix Kuehling
  2025-11-13 16:05 ` [PATCH v2 09/20] drm/amdgpu: pass optional dependency to amdgpu_fill_buffer Pierre-Eric Pelloux-Prayer
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Felix Kuehling
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

No functional change for now, as we always use entity 0.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c  |  9 +++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 48 +++++++++++++++---------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  |  3 +-
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  2 +-
 4 files changed, 39 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index e73dcfed5338..2713dd51ab9a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -686,9 +686,10 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	 * translation. Avoid this by doing the invalidation from the SDMA
 	 * itself at least for GART.
 	 */
-	mutex_lock(&adev->mman.move_entity.gart_window_lock);
 	for (i = 0; i < adev->mman.num_clear_entities; i++)
 		mutex_lock(&adev->mman.clear_entities[i].gart_window_lock);
+	for (i = 0; i < adev->mman.num_move_entities; i++)
+		mutex_lock(&adev->mman.move_entities[i].gart_window_lock);
 	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
 				     AMDGPU_FENCE_OWNER_UNDEFINED,
 				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
@@ -701,7 +702,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop;
 	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
 	fence = amdgpu_job_submit(job);
-	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
+	for (i = 0; i < adev->mman.num_move_entities; i++)
+		mutex_unlock(&adev->mman.move_entities[i].gart_window_lock);
 	for (i = 0; i < adev->mman.num_clear_entities; i++)
 		mutex_unlock(&adev->mman.clear_entities[i].gart_window_lock);
 
@@ -711,7 +713,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	return;
 
 error_alloc:
-	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
+	for (i = 0; i < adev->mman.num_move_entities; i++)
+		mutex_unlock(&adev->mman.move_entities[i].gart_window_lock);
 	for (i = 0; i < adev->mman.num_clear_entities; i++)
 		mutex_unlock(&adev->mman.clear_entities[i].gart_window_lock);
 	dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 2f305ad32080..e1f0567fd2d5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -401,7 +401,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 	dst.offset = 0;
 
 	r = amdgpu_ttm_copy_mem_to_mem(adev,
-				       &adev->mman.move_entity,
+				       &adev->mman.move_entities[0],
 				       &src, &dst,
 				       new_mem->size,
 				       amdgpu_bo_encrypted(abo),
@@ -414,7 +414,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
 		struct dma_fence *wipe_fence = NULL;
 
-		r = amdgpu_fill_buffer(&adev->mman.move_entity,
+		r = amdgpu_fill_buffer(&adev->mman.move_entities[0],
 				       abo, 0, NULL, &wipe_fence,
 				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
 		if (r) {
@@ -2167,10 +2167,11 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 	struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
 	uint64_t size;
 	int r, i, j;
-	u32 num_clear_entities, windows, w;
+	u32 num_clear_entities, num_move_entities, windows, w;
 
 	num_clear_entities = adev->sdma.num_instances;
-	windows = adev->gmc.is_app_apu ? 0 : (2 + num_clear_entities);
+	num_move_entities = MIN(adev->sdma.num_instances, TTM_NUM_MOVE_FENCES);
+	windows = adev->gmc.is_app_apu ? 0 : (2 * num_move_entities + num_clear_entities);
 
 	if (!adev->mman.initialized || amdgpu_in_reset(adev) ||
 	    adev->mman.buffer_funcs_enabled == enable || adev->gmc.is_app_apu)
@@ -2186,20 +2187,25 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 					  DRM_SCHED_PRIORITY_KERNEL, &sched,
 					  1, NULL);
 		if (r) {
-			dev_err(adev->dev,
-				"Failed setting up TTM BO eviction entity (%d)\n",
+			dev_err(adev->dev, "Failed setting up entity (%d)\n",
 				r);
 			return 0;
 		}
 
-		r = drm_sched_entity_init(&adev->mman.move_entity.base,
-					  DRM_SCHED_PRIORITY_NORMAL, &sched,
-					  1, NULL);
-		if (r) {
-			dev_err(adev->dev,
-				"Failed setting up TTM BO move entity (%d)\n",
-				r);
-			goto error_free_entity;
+		adev->mman.num_move_entities = num_move_entities;
+		for (i = 0; i < num_move_entities; i++) {
+			r = drm_sched_entity_init(&adev->mman.move_entities[i].base,
+						  DRM_SCHED_PRIORITY_NORMAL, &sched,
+						  1, NULL);
+			if (r) {
+				dev_err(adev->dev,
+					"Failed setting up TTM BO move entities (%d)\n",
+					r);
+				for (j = 0; j < i; j++)
+					drm_sched_entity_destroy(
+						&adev->mman.move_entities[j].base);
+				goto error_free_entity;
+			}
 		}
 
 		adev->mman.num_clear_entities = num_clear_entities;
@@ -2214,6 +2220,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 						  DRM_SCHED_PRIORITY_NORMAL, &sched,
 						  1, NULL);
 			if (r) {
+				for (j = 0; j < num_move_entities; j++)
+					drm_sched_entity_destroy(
+						&adev->mman.move_entities[j].base);
 				for (j = 0; j < i; j++)
 					drm_sched_entity_destroy(
 						&adev->mman.clear_entities[j].base);
@@ -2225,9 +2234,11 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 		/* Statically assign GART windows to each entity. */
 		w = 0;
 		mutex_init(&adev->mman.default_entity.gart_window_lock);
-		adev->mman.move_entity.gart_window_id0 = w++;
-		adev->mman.move_entity.gart_window_id1 = w++;
-		mutex_init(&adev->mman.move_entity.gart_window_lock);
+		for (i = 0; i < num_move_entities; i++) {
+			adev->mman.move_entities[i].gart_window_id0 = w++;
+			adev->mman.move_entities[i].gart_window_id1 = w++;
+			mutex_init(&adev->mman.move_entities[i].gart_window_lock);
+		}
 		for (i = 0; i < num_clear_entities; i++) {
 			/* Clearing entities don't use id0 */
 			adev->mman.clear_entities[i].gart_window_id1 = w++;
@@ -2236,7 +2247,8 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 		WARN_ON(w != windows);
 	} else {
 		drm_sched_entity_destroy(&adev->mman.default_entity.base);
-		drm_sched_entity_destroy(&adev->mman.move_entity.base);
+		for (i = 0; i < num_move_entities; i++)
+			drm_sched_entity_destroy(&adev->mman.move_entities[i].base);
 		for (i = 0; i < num_clear_entities; i++)
 			drm_sched_entity_destroy(&adev->mman.clear_entities[i].base);
 		for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 797f851a4578..9d4891e86675 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -72,9 +72,10 @@ struct amdgpu_mman {
 	struct mutex				gtt_window_lock;
 
 	struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
-	struct amdgpu_ttm_buffer_entity move_entity;
 	struct amdgpu_ttm_buffer_entity *clear_entities;
 	u32 num_clear_entities;
+	struct amdgpu_ttm_buffer_entity move_entities[TTM_NUM_MOVE_FENCES];
+	u32 num_move_entities;
 
 	struct amdgpu_vram_mgr vram_mgr;
 	struct amdgpu_gtt_mgr gtt_mgr;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index bc47fc362a17..943c3438c7ee 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -136,7 +136,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
 	u64 size;
 	int r;
 
-	entity = &adev->mman.move_entity;
+	entity = &adev->mman.move_entities[0];
 
 	mutex_lock(&entity->gart_window_lock);
 
-- 
2.43.0



* [PATCH v2 09/20] drm/amdgpu: pass optional dependency to amdgpu_fill_buffer
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (7 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 08/20] drm/amdgpu: allocate multiple move entities Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-17  8:43   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 10/20] drm/amdgpu: handle resv dependencies in amdgpu_ttm_map_buffer Pierre-Eric Pelloux-Prayer
                   ` (10 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Sumit Semwal
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

If the fill job depends on a previous fence, the caller can now pass it
to guarantee correct ordering of the jobs.
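
The pattern can be sketched in plain C (a toy model with illustrative
names, not the amdgpu or DRM scheduler API): a job carries an optional
dependency fence and may only run once that fence has signaled.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy fence: just a signaled flag. */
struct toy_fence {
	bool signaled;
};

/* Toy job with an optional dependency, mirroring the new
 * amdgpu_fill_buffer() parameter. */
struct toy_job {
	struct toy_fence *dep;	/* optional dependency, may be NULL */
	struct toy_fence done;	/* signaled once the job has run */
};

/* Run the job only if its optional dependency has signaled.
 * Returns true if the job ran. */
static bool toy_job_try_run(struct toy_job *job)
{
	if (job->dep && !job->dep->signaled)
		return false;	/* must wait for the previous job */
	job->done.signaled = true;
	return true;
}
```

In the real code the dependency is recorded with
drm_sched_job_add_dependency() and the scheduler defers the job; the toy
model only illustrates the ordering contract.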

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c    | 22 ++++++++++++++++------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h    |  1 +
 3 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index e7b2cae031b3..be3532134e46 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -1322,7 +1322,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 		goto out;
 
 	r = amdgpu_fill_buffer(&adev->mman.clear_entities[0], abo, 0, &bo->base._resv,
-			       &fence, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
+			       &fence, NULL, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
 	if (WARN_ON(r))
 		goto out;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index e1f0567fd2d5..b13f0993dbf1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -173,6 +173,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
  * @tmz: if we should setup a TMZ enabled mapping
  * @size: in number of bytes to map, out number of bytes mapped
  * @addr: resulting address inside the MC address space
+ * @dep: optional dependency
  *
  * Setup one of the GART windows to access a specific piece of memory or return
  * the physical address for local memory.
@@ -182,7 +183,8 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
 				 struct ttm_resource *mem,
 				 struct amdgpu_res_cursor *mm_cur,
 				 unsigned int window, struct amdgpu_ring *ring,
-				 bool tmz, uint64_t *size, uint64_t *addr)
+				 bool tmz, uint64_t *size, uint64_t *addr,
+				 struct dma_fence *dep)
 {
 	struct amdgpu_device *adev = ring->adev;
 	unsigned int offset, num_pages, num_dw, num_bytes;
@@ -234,6 +236,9 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
 	if (r)
 		return r;
 
+	if (dep)
+		drm_sched_job_add_dependency(&job->base, dma_fence_get(dep));
+
 	src_addr = num_dw * 4;
 	src_addr += job->ibs[0].gpu_addr;
 
@@ -326,13 +331,15 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 		/* Map src to window 0 and dst to window 1. */
 		r = amdgpu_ttm_map_buffer(&entity->base,
 					  src->bo, src->mem, &src_mm,
-					  entity->gart_window_id0, ring, tmz, &cur_size, &from);
+					  entity->gart_window_id0, ring, tmz, &cur_size, &from,
+					  NULL);
 		if (r)
 			goto error;
 
 		r = amdgpu_ttm_map_buffer(&entity->base,
 					  dst->bo, dst->mem, &dst_mm,
-					  entity->gart_window_id1, ring, tmz, &cur_size, &to);
+					  entity->gart_window_id1, ring, tmz, &cur_size, &to,
+					  NULL);
 		if (r)
 			goto error;
 
@@ -415,7 +422,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 		struct dma_fence *wipe_fence = NULL;
 
 		r = amdgpu_fill_buffer(&adev->mman.move_entities[0],
-				       abo, 0, NULL, &wipe_fence,
+				       abo, 0, NULL, &wipe_fence, fence,
 				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
 		if (r) {
 			goto error;
@@ -2443,7 +2450,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 
 		r = amdgpu_ttm_map_buffer(&entity->base,
 					  &bo->tbo, bo->tbo.resource, &cursor,
-					  entity->gart_window_id1, ring, false, &size, &addr);
+					  entity->gart_window_id1, ring, false, &size, &addr,
+					  NULL);
 		if (r)
 			goto err;
 
@@ -2469,6 +2477,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		       uint32_t src_data,
 		       struct dma_resv *resv,
 		       struct dma_fence **f,
+		       struct dma_fence *dependency,
 		       u64 k_job_id)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
@@ -2496,7 +2505,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		r = amdgpu_ttm_map_buffer(&entity->base,
 					  &bo->tbo, bo->tbo.resource, &dst,
 					  entity->gart_window_id1, ring, false,
-					  &cur_size, &to);
+					  &cur_size, &to,
+					  dependency);
 		if (r)
 			goto error;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 9d4891e86675..e8f8165f5bcf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -186,6 +186,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		       uint32_t src_data,
 		       struct dma_resv *resv,
 		       struct dma_fence **f,
+		       struct dma_fence *dependency,
 		       u64 k_job_id);
 
 int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
-- 
2.43.0



* [PATCH v2 10/20] drm/amdgpu: handle resv dependencies in amdgpu_ttm_map_buffer
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (8 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 09/20] drm/amdgpu: pass optional dependency to amdgpu_fill_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-17  8:44   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 11/20] drm/amdgpu: round robin through clear_entities in amdgpu_fill_buffer Pierre-Eric Pelloux-Prayer
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Sumit Semwal
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

If a resv object is passed, its fences are treated as dependencies
of the amdgpu_ttm_map_buffer operation.

This will be used by amdgpu_bo_release_notify through
amdgpu_fill_buffer.
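
As a rough model (plain C with hypothetical names, not the DRM API):
adding resv dependencies means every fence currently attached to the
reservation object becomes a scheduler dependency of the new job, which
is what drm_sched_job_add_resv_dependencies() does for real.

```c
#include <stdbool.h>

#define TOY_MAX_FENCES 8

struct toy_fence {
	bool signaled;
};

/* Toy reservation object: a flat list of attached fences. */
struct toy_resv {
	struct toy_fence *fences[TOY_MAX_FENCES];
	int count;
};

struct toy_job {
	struct toy_fence *deps[TOY_MAX_FENCES];
	int num_deps;
};

/* Record every fence of the resv as a dependency of the job. */
static void toy_job_add_resv_deps(struct toy_job *job,
				  const struct toy_resv *resv)
{
	for (int i = 0; i < resv->count; i++)
		job->deps[job->num_deps++] = resv->fences[i];
}

/* The job may run only once all recorded dependencies signaled. */
static bool toy_job_ready(const struct toy_job *job)
{
	for (int i = 0; i < job->num_deps; i++)
		if (!job->deps[i]->signaled)
			return false;
	return true;
}
```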

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index b13f0993dbf1..411997db70eb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -184,7 +184,8 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
 				 struct amdgpu_res_cursor *mm_cur,
 				 unsigned int window, struct amdgpu_ring *ring,
 				 bool tmz, uint64_t *size, uint64_t *addr,
-				 struct dma_fence *dep)
+				 struct dma_fence *dep,
+				 struct dma_resv *resv)
 {
 	struct amdgpu_device *adev = ring->adev;
 	unsigned int offset, num_pages, num_dw, num_bytes;
@@ -239,6 +240,10 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
 	if (dep)
 		drm_sched_job_add_dependency(&job->base, dma_fence_get(dep));
 
+	if (resv)
+		drm_sched_job_add_resv_dependencies(&job->base, resv,
+						    DMA_RESV_USAGE_BOOKKEEP);
+
 	src_addr = num_dw * 4;
 	src_addr += job->ibs[0].gpu_addr;
 
@@ -332,14 +337,14 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 		r = amdgpu_ttm_map_buffer(&entity->base,
 					  src->bo, src->mem, &src_mm,
 					  entity->gart_window_id0, ring, tmz, &cur_size, &from,
-					  NULL);
+					  NULL, NULL);
 		if (r)
 			goto error;
 
 		r = amdgpu_ttm_map_buffer(&entity->base,
 					  dst->bo, dst->mem, &dst_mm,
 					  entity->gart_window_id1, ring, tmz, &cur_size, &to,
-					  NULL);
+					  NULL, NULL);
 		if (r)
 			goto error;
 
@@ -2451,7 +2456,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 		r = amdgpu_ttm_map_buffer(&entity->base,
 					  &bo->tbo, bo->tbo.resource, &cursor,
 					  entity->gart_window_id1, ring, false, &size, &addr,
-					  NULL);
+					  NULL, NULL);
 		if (r)
 			goto err;
 
@@ -2506,7 +2511,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 					  &bo->tbo, bo->tbo.resource, &dst,
 					  entity->gart_window_id1, ring, false,
 					  &cur_size, &to,
-					  dependency);
+					  dependency,
+					  resv);
 		if (r)
 			goto error;
 
-- 
2.43.0



* [PATCH v2 11/20] drm/amdgpu: round robin through clear_entities in amdgpu_fill_buffer
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (9 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 10/20] drm/amdgpu: handle resv dependencies in amdgpu_ttm_map_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-17  8:47   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 12/20] drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences Pierre-Eric Pelloux-Prayer
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

This makes clears of different BOs run in parallel. Partial jobs
clearing a single BO still execute sequentially.
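
The selection scheme can be sketched in userspace C (C11 atomics stand
in for the kernel's atomic_t; note that atomic_fetch_add returns the old
value where the kernel's atomic_inc_return returns the new one — the
modulo distribution over entities is the same either way):

```c
#include <stdatomic.h>

#define NUM_CLEAR_ENTITIES 4	/* stand-in for adev->sdma.num_instances */

static atomic_uint next_clear_entity;

/* Pick the next clear entity round-robin; concurrent callers get
 * distinct counter values, so independent clears are spread across
 * all SDMA instances. */
static unsigned int pick_clear_entity(void)
{
	return atomic_fetch_add(&next_clear_entity, 1) % NUM_CLEAR_ENTITIES;
}
```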

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c    | 9 ++++++++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h    | 1 +
 3 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index be3532134e46..33b397107778 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -1321,7 +1321,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	if (r)
 		goto out;
 
-	r = amdgpu_fill_buffer(&adev->mman.clear_entities[0], abo, 0, &bo->base._resv,
+	r = amdgpu_fill_buffer(NULL, abo, 0, &bo->base._resv,
 			       &fence, NULL, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
 	if (WARN_ON(r))
 		goto out;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 411997db70eb..486c701d0d5b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -2224,6 +2224,7 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 		adev->mman.clear_entities = kcalloc(num_clear_entities,
 						    sizeof(struct amdgpu_ttm_buffer_entity),
 						    GFP_KERNEL);
+		atomic_set(&adev->mman.next_clear_entity, 0);
 		if (!adev->mman.clear_entities)
 			goto error_free_entity;
 
@@ -2489,7 +2490,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
 	struct dma_fence *fence = NULL;
 	struct amdgpu_res_cursor dst;
-	int r;
+	int r, e;
 
 	if (!adev->mman.buffer_funcs_enabled) {
 		dev_err(adev->dev,
@@ -2497,6 +2498,12 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		return -EINVAL;
 	}
 
+	if (entity == NULL) {
+		e = atomic_inc_return(&adev->mman.next_clear_entity) %
+				      adev->mman.num_clear_entities;
+		entity = &adev->mman.clear_entities[e];
+	}
+
 	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &dst);
 
 	mutex_lock(&entity->gart_window_lock);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index e8f8165f5bcf..781b0bdca56c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -73,6 +73,7 @@ struct amdgpu_mman {
 
 	struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
 	struct amdgpu_ttm_buffer_entity *clear_entities;
+	atomic_t next_clear_entity;
 	u32 num_clear_entities;
 	struct amdgpu_ttm_buffer_entity move_entities[TTM_NUM_MOVE_FENCES];
 	u32 num_move_entities;
-- 
2.43.0



* [PATCH v2 12/20] drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (10 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 11/20] drm/amdgpu: round robin through clear_entities in amdgpu_fill_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-14 20:57   ` Felix Kuehling
  2025-11-17  9:07   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 13/20] drm/amdgpu: use multiple entities in amdgpu_move_blit Pierre-Eric Pelloux-Prayer
                   ` (7 subsequent siblings)
  19 siblings, 2 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Felix Kuehling, Harry Wentland, Leo Li, Rodrigo Siqueira
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

Use TTM_NUM_MOVE_FENCES as an upper bound on the number of fences
TTM might need for moves and evictions.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
v2: removed drm_err calls

 drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c                  | 5 ++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c                 | 2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c                | 6 ++----
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c                  | 3 ++-
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c                    | 3 +--
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c | 6 ++----
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c    | 6 ++----
 7 files changed, 12 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
index ecdfe6cb36cc..0338522761bc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
@@ -916,9 +916,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 			goto out_free_user_pages;
 
 		amdgpu_bo_list_for_each_entry(e, p->bo_list) {
-			/* One fence for TTM and one for each CS job */
 			r = drm_exec_prepare_obj(&p->exec, &e->bo->tbo.base,
-						 1 + p->gang_size);
+						 TTM_NUM_MOVE_FENCES + p->gang_size);
 			drm_exec_retry_on_contention(&p->exec);
 			if (unlikely(r))
 				goto out_free_user_pages;
@@ -928,7 +927,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
 
 		if (p->uf_bo) {
 			r = drm_exec_prepare_obj(&p->exec, &p->uf_bo->tbo.base,
-						 1 + p->gang_size);
+						 TTM_NUM_MOVE_FENCES + p->gang_size);
 			drm_exec_retry_on_contention(&p->exec);
 			if (unlikely(r))
 				goto out_free_user_pages;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index ce073e894584..2fe6899f6344 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -353,7 +353,7 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
 
 	drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
 	drm_exec_until_all_locked(&exec) {
-		r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 1);
+		r = drm_exec_prepare_obj(&exec, &bo->tbo.base, TTM_NUM_MOVE_FENCES);
 		drm_exec_retry_on_contention(&exec);
 		if (unlikely(r))
 			goto out_unlock;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
index 79bad9cbe2ab..b92561eea3da 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
@@ -326,11 +326,9 @@ static int amdgpu_vkms_prepare_fb(struct drm_plane *plane,
 		return r;
 	}
 
-	r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
-	if (r) {
-		dev_err(adev->dev, "allocating fence slot failed (%d)\n", r);
+	r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
+	if (r)
 		goto error_unlock;
-	}
 
 	if (plane->type != DRM_PLANE_TYPE_CURSOR)
 		domain = amdgpu_display_supported_domains(adev, rbo->flags);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 2f8e83f840a8..aaa44199e9f4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2630,7 +2630,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
 	}
 
 	amdgpu_vm_bo_base_init(&vm->root, vm, root_bo);
-	r = dma_resv_reserve_fences(root_bo->tbo.base.resv, 1);
+	r = dma_resv_reserve_fences(root_bo->tbo.base.resv,
+				    TTM_NUM_MOVE_FENCES);
 	if (r)
 		goto error_free_root;
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index ffb7b36e577c..968cef1cbea6 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -627,9 +627,8 @@ svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
 		}
 	}
 
-	r = dma_resv_reserve_fences(bo->tbo.base.resv, 1);
+	r = dma_resv_reserve_fences(bo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
 	if (r) {
-		pr_debug("failed %d to reserve bo\n", r);
 		amdgpu_bo_unreserve(bo);
 		goto reserve_bo_failed;
 	}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
index 56cb866ac6f8..ceb55dd183ed 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
@@ -952,11 +952,9 @@ static int amdgpu_dm_plane_helper_prepare_fb(struct drm_plane *plane,
 		return r;
 	}
 
-	r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
-	if (r) {
-		drm_err(adev_to_drm(adev), "reserving fence slot failed (%d)\n", r);
+	r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
+	if (r)
 		goto error_unlock;
-	}
 
 	if (plane->type != DRM_PLANE_TYPE_CURSOR)
 		domain = amdgpu_display_supported_domains(adev, rbo->flags);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
index d9527c05fc87..110f0173eee6 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
@@ -106,11 +106,9 @@ static int amdgpu_dm_wb_prepare_job(struct drm_writeback_connector *wb_connector
 		return r;
 	}
 
-	r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
-	if (r) {
-		drm_err(adev_to_drm(adev), "reserving fence slot failed (%d)\n", r);
+	r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
+	if (r)
 		goto error_unlock;
-	}
 
 	domain = amdgpu_display_supported_domains(adev, rbo->flags);
 
-- 
2.43.0



* [PATCH v2 13/20] drm/amdgpu: use multiple entities in amdgpu_move_blit
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (11 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 12/20] drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-17  9:12   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 14/20] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds Pierre-Eric Pelloux-Prayer
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

Thanks to "drm/ttm: rework pipelined eviction fence handling", TTM can
now correctly handle moves and evictions executed from different
contexts.

Create several entities and use them in a round-robin fashion.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 13 ++++++++++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h |  1 +
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 486c701d0d5b..6c333dba7a35 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -401,6 +401,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
 	struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
+	struct amdgpu_ttm_buffer_entity *entity;
 	struct amdgpu_copy_mem src, dst;
 	struct dma_fence *fence = NULL;
 	int r;
@@ -412,8 +413,12 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 	src.offset = 0;
 	dst.offset = 0;
 
+	int e = atomic_inc_return(&adev->mman.next_move_entity) %
+				  adev->mman.num_move_entities;
+	entity = &adev->mman.move_entities[e];
+
 	r = amdgpu_ttm_copy_mem_to_mem(adev,
-				       &adev->mman.move_entities[0],
+				       entity,
 				       &src, &dst,
 				       new_mem->size,
 				       amdgpu_bo_encrypted(abo),
@@ -426,7 +431,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
 		struct dma_fence *wipe_fence = NULL;
 
-		r = amdgpu_fill_buffer(&adev->mman.move_entities[0],
+		r = amdgpu_fill_buffer(entity,
 				       abo, 0, NULL, &wipe_fence, fence,
 				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
 		if (r) {
@@ -2179,7 +2184,8 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 	struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
 	uint64_t size;
 	int r, i, j;
-	u32 num_clear_entities, num_move_entities, windows, w;
+	u32 num_clear_entities, num_move_entities;
+	u32 windows, w;
 
 	num_clear_entities = adev->sdma.num_instances;
 	num_move_entities = MIN(adev->sdma.num_instances, TTM_NUM_MOVE_FENCES);
@@ -2205,6 +2211,7 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 		}
 
 		adev->mman.num_move_entities = num_move_entities;
+		atomic_set(&adev->mman.next_move_entity, 0);
 		for (i = 0; i < num_move_entities; i++) {
 			r = drm_sched_entity_init(&adev->mman.move_entities[i].base,
 						  DRM_SCHED_PRIORITY_NORMAL, &sched,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 781b0bdca56c..4844f001f590 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -76,6 +76,7 @@ struct amdgpu_mman {
 	atomic_t next_clear_entity;
 	u32 num_clear_entities;
 	struct amdgpu_ttm_buffer_entity move_entities[TTM_NUM_MOVE_FENCES];
+	atomic_t next_move_entity;
 	u32 num_move_entities;
 
 	struct amdgpu_vram_mgr vram_mgr;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v2 14/20] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (12 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 13/20] drm/amdgpu: use multiple entities in amdgpu_move_blit Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-17  9:30   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman Pierre-Eric Pelloux-Prayer
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

All SDMA versions use the same logic, so add a helper and move the
common code to a single place.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
v2: pass amdgpu_vm_pte_funcs as well
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   | 17 +++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/cik_sdma.c    |  9 +--------
 drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c   |  9 +--------
 drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c   |  9 +--------
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c   | 13 +------------
 drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 13 +------------
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c   | 12 ++----------
 drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c   | 12 ++----------
 drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c   |  9 +--------
 drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c   |  9 +--------
 drivers/gpu/drm/amd/amdgpu/si_dma.c      |  9 +--------
 12 files changed, 31 insertions(+), 92 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 50079209c472..3fab3dc9f3e4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1613,6 +1613,8 @@ struct dma_fence *amdgpu_device_enforce_isolation(struct amdgpu_device *adev,
 bool amdgpu_device_has_display_hardware(struct amdgpu_device *adev);
 ssize_t amdgpu_get_soft_full_reset_mask(struct amdgpu_ring *ring);
 ssize_t amdgpu_show_reset_mask(char *buf, uint32_t supported_reset);
+void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
+				   const struct amdgpu_vm_pte_funcs *vm_pte_funcs);
 
 /* atpx handler */
 #if defined(CONFIG_VGA_SWITCHEROO)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index aaa44199e9f4..3d29c3642d1a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -3210,3 +3210,20 @@ void amdgpu_vm_print_task_info(struct amdgpu_device *adev,
 		task_info->process_name, task_info->tgid,
 		task_info->task.comm, task_info->task.pid);
 }
+
+void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
+				   const struct amdgpu_vm_pte_funcs *vm_pte_funcs)
+{
+	struct drm_gpu_scheduler *sched;
+	int i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		if (adev->sdma.has_page_queue)
+			sched = &adev->sdma.instance[i].page.sched;
+		else
+			sched = &adev->sdma.instance[i].ring.sched;
+		adev->vm_manager.vm_pte_scheds[i] = sched;
+	}
+	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+	adev->vm_manager.vm_pte_funcs = vm_pte_funcs;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
index 9e8715b4739d..5fe162f52c92 100644
--- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
@@ -1347,14 +1347,7 @@ static const struct amdgpu_vm_pte_funcs cik_sdma_vm_pte_funcs = {
 
 static void cik_sdma_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	unsigned i;
-
-	adev->vm_manager.vm_pte_funcs = &cik_sdma_vm_pte_funcs;
-	for (i = 0; i < adev->sdma.num_instances; i++) {
-		adev->vm_manager.vm_pte_scheds[i] =
-			&adev->sdma.instance[i].ring.sched;
-	}
-	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+	amdgpu_sdma_set_vm_pte_scheds(adev, &cik_sdma_vm_pte_funcs);
 }
 
 const struct amdgpu_ip_block_version cik_sdma_ip_block =
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
index 92ce580647cd..63636643db3d 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
@@ -1242,14 +1242,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v2_4_vm_pte_funcs = {
 
 static void sdma_v2_4_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	unsigned i;
-
-	adev->vm_manager.vm_pte_funcs = &sdma_v2_4_vm_pte_funcs;
-	for (i = 0; i < adev->sdma.num_instances; i++) {
-		adev->vm_manager.vm_pte_scheds[i] =
-			&adev->sdma.instance[i].ring.sched;
-	}
-	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v2_4_vm_pte_funcs);
 }
 
 const struct amdgpu_ip_block_version sdma_v2_4_ip_block = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
index 1c076bd1cf73..0153626b5df2 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
@@ -1684,14 +1684,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v3_0_vm_pte_funcs = {
 
 static void sdma_v3_0_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	unsigned i;
-
-	adev->vm_manager.vm_pte_funcs = &sdma_v3_0_vm_pte_funcs;
-	for (i = 0; i < adev->sdma.num_instances; i++) {
-		adev->vm_manager.vm_pte_scheds[i] =
-			 &adev->sdma.instance[i].ring.sched;
-	}
-	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v3_0_vm_pte_funcs);
 }
 
 const struct amdgpu_ip_block_version sdma_v3_0_ip_block =
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index f38004e6064e..96a67b30854c 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -2625,18 +2625,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v4_0_vm_pte_funcs = {
 
 static void sdma_v4_0_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	struct drm_gpu_scheduler *sched;
-	unsigned i;
-
-	adev->vm_manager.vm_pte_funcs = &sdma_v4_0_vm_pte_funcs;
-	for (i = 0; i < adev->sdma.num_instances; i++) {
-		if (adev->sdma.has_page_queue)
-			sched = &adev->sdma.instance[i].page.sched;
-		else
-			sched = &adev->sdma.instance[i].ring.sched;
-		adev->vm_manager.vm_pte_scheds[i] = sched;
-	}
-	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v4_0_vm_pte_funcs);
 }
 
 static void sdma_v4_0_get_ras_error_count(uint32_t value,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index 36b1ca73c2ed..04dc8a8f4d66 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -2326,18 +2326,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v4_4_2_vm_pte_funcs = {
 
 static void sdma_v4_4_2_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	struct drm_gpu_scheduler *sched;
-	unsigned i;
-
-	adev->vm_manager.vm_pte_funcs = &sdma_v4_4_2_vm_pte_funcs;
-	for (i = 0; i < adev->sdma.num_instances; i++) {
-		if (adev->sdma.has_page_queue)
-			sched = &adev->sdma.instance[i].page.sched;
-		else
-			sched = &adev->sdma.instance[i].ring.sched;
-		adev->vm_manager.vm_pte_scheds[i] = sched;
-	}
-	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v4_4_2_vm_pte_funcs);
 }
 
 /**
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 7dc67a22a7a0..19c717f2c602 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -2081,16 +2081,8 @@ static const struct amdgpu_vm_pte_funcs sdma_v5_0_vm_pte_funcs = {
 
 static void sdma_v5_0_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	unsigned i;
-
-	if (adev->vm_manager.vm_pte_funcs == NULL) {
-		adev->vm_manager.vm_pte_funcs = &sdma_v5_0_vm_pte_funcs;
-		for (i = 0; i < adev->sdma.num_instances; i++) {
-			adev->vm_manager.vm_pte_scheds[i] =
-				&adev->sdma.instance[i].ring.sched;
-		}
-		adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-	}
+	if (adev->vm_manager.vm_pte_funcs == NULL)
+		amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v5_0_vm_pte_funcs);
 }
 
 const struct amdgpu_ip_block_version sdma_v5_0_ip_block = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index 3bd44c24f692..7a07b8f4e86d 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -2091,16 +2091,8 @@ static const struct amdgpu_vm_pte_funcs sdma_v5_2_vm_pte_funcs = {
 
 static void sdma_v5_2_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	unsigned i;
-
-	if (adev->vm_manager.vm_pte_funcs == NULL) {
-		adev->vm_manager.vm_pte_funcs = &sdma_v5_2_vm_pte_funcs;
-		for (i = 0; i < adev->sdma.num_instances; i++) {
-			adev->vm_manager.vm_pte_scheds[i] =
-				&adev->sdma.instance[i].ring.sched;
-		}
-		adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-	}
+	if (adev->vm_manager.vm_pte_funcs == NULL)
+		amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v5_2_vm_pte_funcs);
 }
 
 const struct amdgpu_ip_block_version sdma_v5_2_ip_block = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index db6e41967f12..8f8228c7adee 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1897,14 +1897,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v6_0_vm_pte_funcs = {
 
 static void sdma_v6_0_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	unsigned i;
-
-	adev->vm_manager.vm_pte_funcs = &sdma_v6_0_vm_pte_funcs;
-	for (i = 0; i < adev->sdma.num_instances; i++) {
-		adev->vm_manager.vm_pte_scheds[i] =
-			&adev->sdma.instance[i].ring.sched;
-	}
-	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v6_0_vm_pte_funcs);
 }
 
 const struct amdgpu_ip_block_version sdma_v6_0_ip_block = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index 326ecc8d37d2..cf412d8fb0ed 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -1839,14 +1839,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v7_0_vm_pte_funcs = {
 
 static void sdma_v7_0_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	unsigned i;
-
-	adev->vm_manager.vm_pte_funcs = &sdma_v7_0_vm_pte_funcs;
-	for (i = 0; i < adev->sdma.num_instances; i++) {
-		adev->vm_manager.vm_pte_scheds[i] =
-			&adev->sdma.instance[i].ring.sched;
-	}
-	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v7_0_vm_pte_funcs);
 }
 
 const struct amdgpu_ip_block_version sdma_v7_0_ip_block = {
diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c b/drivers/gpu/drm/amd/amdgpu/si_dma.c
index 7f18e4875287..863e00086c30 100644
--- a/drivers/gpu/drm/amd/amdgpu/si_dma.c
+++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c
@@ -840,14 +840,7 @@ static const struct amdgpu_vm_pte_funcs si_dma_vm_pte_funcs = {
 
 static void si_dma_set_vm_pte_funcs(struct amdgpu_device *adev)
 {
-	unsigned i;
-
-	adev->vm_manager.vm_pte_funcs = &si_dma_vm_pte_funcs;
-	for (i = 0; i < adev->sdma.num_instances; i++) {
-		adev->vm_manager.vm_pte_scheds[i] =
-			&adev->sdma.instance[i].ring.sched;
-	}
-	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+	amdgpu_sdma_set_vm_pte_scheds(adev, &si_dma_vm_pte_funcs);
 }
 
 const struct amdgpu_ip_block_version si_dma_ip_block =
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (13 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 14/20] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-14 21:23   ` Felix Kuehling
  2025-11-17  9:46   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 16/20] drm/amdgpu: give ttm entities access to all the sdma scheds Pierre-Eric Pelloux-Prayer
                   ` (4 subsequent siblings)
  19 siblings, 2 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Felix Kuehling
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

This will allow all of them to be used for clear/fill buffer
operations.

Since drm_sched_entity_init requires a scheduler array, we store
schedulers rather than rings. For the few places that need access to
a ring, we can get it back from the scheduler using container_of.

Since the code is the same for all SDMA versions, add a new helper,
amdgpu_sdma_set_buffer_funcs_scheds, to set buffer_funcs_scheds based
on the number of SDMA instances.

Note: the new sched array is identical to the amdgpu_vm_manager one;
these two could be merged.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  8 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c       |  4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 41 +++++++++++++++----
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  3 +-
 drivers/gpu/drm/amd/amdgpu/cik_sdma.c         |  3 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c        |  3 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c        |  3 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c        |  6 +--
 drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c      |  6 +--
 drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c        |  6 +--
 drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c        |  6 +--
 drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c        |  3 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c        |  3 +-
 drivers/gpu/drm/amd/amdgpu/si_dma.c           |  3 +-
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
 17 files changed, 62 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 3fab3dc9f3e4..05c13fb0e6bf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1615,6 +1615,8 @@ ssize_t amdgpu_get_soft_full_reset_mask(struct amdgpu_ring *ring);
 ssize_t amdgpu_show_reset_mask(char *buf, uint32_t supported_reset);
 void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
 				   const struct amdgpu_vm_pte_funcs *vm_pte_funcs);
+void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
+					 const struct amdgpu_buffer_funcs *buffer_funcs);
 
 /* atpx handler */
 #if defined(CONFIG_VGA_SWITCHEROO)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
index b59040a8771f..9ea927e07a77 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
@@ -32,12 +32,14 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
 				    uint64_t saddr, uint64_t daddr, int n, s64 *time_ms)
 {
 	ktime_t stime, etime;
+	struct amdgpu_ring *ring;
 	struct dma_fence *fence;
 	int i, r;
 
+	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
+
 	stime = ktime_get();
 	for (i = 0; i < n; i++) {
-		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
 		r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
 				       saddr, daddr, size, NULL, &fence,
 				       false, 0);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index b92234d63562..1927d940fbca 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3303,7 +3303,7 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
 	if (r)
 		goto init_failed;
 
-	if (adev->mman.buffer_funcs_ring->sched.ready)
+	if (adev->mman.buffer_funcs_scheds[0]->ready)
 		amdgpu_ttm_set_buffer_funcs_status(adev, true);
 
 	/* Don't init kfd if whole hive need to be reset during init */
@@ -4143,7 +4143,7 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
 
 	r = amdgpu_device_ip_resume_phase2(adev);
 
-	if (adev->mman.buffer_funcs_ring->sched.ready)
+	if (adev->mman.buffer_funcs_scheds[0]->ready)
 		amdgpu_ttm_set_buffer_funcs_status(adev, true);
 
 	if (r)
@@ -4493,7 +4493,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	adev->num_rings = 0;
 	RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
 	adev->mman.buffer_funcs = NULL;
-	adev->mman.buffer_funcs_ring = NULL;
+	adev->mman.num_buffer_funcs_scheds = 0;
 	adev->vm_manager.vm_pte_funcs = NULL;
 	adev->vm_manager.vm_pte_num_scheds = 0;
 	adev->gmc.gmc_funcs = NULL;
@@ -5965,7 +5965,7 @@ int amdgpu_device_reinit_after_reset(struct amdgpu_reset_context *reset_context)
 				if (r)
 					goto out;
 
-				if (tmp_adev->mman.buffer_funcs_ring->sched.ready)
+				if (tmp_adev->mman.buffer_funcs_scheds[0]->ready)
 					amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
 
 				r = amdgpu_device_ip_resume_phase3(tmp_adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 2713dd51ab9a..4433d8620129 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -651,12 +651,14 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
 void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 			      uint32_t vmhub, uint32_t flush_type)
 {
-	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+	struct amdgpu_ring *ring;
 	struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
 	struct dma_fence *fence;
 	struct amdgpu_job *job;
 	int r, i;
 
+	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
+
 	if (!hub->sdma_invalidation_workaround || vmid ||
 	    !adev->mman.buffer_funcs_enabled || !adev->ib_pool_ready ||
 	    !ring->sched.ready) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 6c333dba7a35..11fec0fa4c11 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -308,7 +308,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 				      struct dma_resv *resv,
 				      struct dma_fence **f)
 {
-	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+	struct amdgpu_ring *ring;
 	struct amdgpu_res_cursor src_mm, dst_mm;
 	struct dma_fence *fence = NULL;
 	int r = 0;
@@ -321,6 +321,8 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 		return -EINVAL;
 	}
 
+	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
+
 	amdgpu_res_first(src->mem, src->offset, size, &src_mm);
 	amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
 
@@ -1493,6 +1495,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
 	struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
 	struct amdgpu_device *adev = amdgpu_ttm_adev(abo->tbo.bdev);
 	struct amdgpu_res_cursor src_mm;
+	struct amdgpu_ring *ring;
 	struct amdgpu_job *job;
 	struct dma_fence *fence;
 	uint64_t src_addr, dst_addr;
@@ -1530,7 +1533,8 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
 	amdgpu_emit_copy_buffer(adev, &job->ibs[0], src_addr, dst_addr,
 				PAGE_SIZE, 0);
 
-	amdgpu_ring_pad_ib(adev->mman.buffer_funcs_ring, &job->ibs[0]);
+	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
+	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
 	WARN_ON(job->ibs[0].length_dw > num_dw);
 
 	fence = amdgpu_job_submit(job);
@@ -2196,11 +2200,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 		return windows;
 
 	if (enable) {
-		struct amdgpu_ring *ring;
 		struct drm_gpu_scheduler *sched;
 
-		ring = adev->mman.buffer_funcs_ring;
-		sched = &ring->sched;
+		sched = adev->mman.buffer_funcs_scheds[0];
 		r = drm_sched_entity_init(&adev->mman.default_entity.base,
 					  DRM_SCHED_PRIORITY_KERNEL, &sched,
 					  1, NULL);
@@ -2432,7 +2434,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 			    struct dma_fence **fence)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+	struct amdgpu_ring *ring;
 	struct amdgpu_ttm_buffer_entity *entity;
 	struct amdgpu_res_cursor cursor;
 	u64 addr;
@@ -2443,6 +2445,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
 
 	if (!fence)
 		return -EINVAL;
+
+	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
 	entity = &adev->mman.clear_entities[0];
 	*fence = dma_fence_get_stub();
 
@@ -2494,9 +2498,9 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		       u64 k_job_id)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
 	struct dma_fence *fence = NULL;
 	struct amdgpu_res_cursor dst;
+	struct amdgpu_ring *ring;
 	int r, e;
 
 	if (!adev->mman.buffer_funcs_enabled) {
@@ -2505,6 +2509,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		return -EINVAL;
 	}
 
+	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
+
 	if (entity == NULL) {
 		e = atomic_inc_return(&adev->mman.next_clear_entity) %
 				      adev->mman.num_clear_entities;
@@ -2579,6 +2585,27 @@ int amdgpu_ttm_evict_resources(struct amdgpu_device *adev, int mem_type)
 	return ttm_resource_manager_evict_all(&adev->mman.bdev, man);
 }
 
+void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
+					 const struct amdgpu_buffer_funcs *buffer_funcs)
+{
+	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
+	struct drm_gpu_scheduler *sched;
+	int i;
+
+	adev->mman.buffer_funcs = buffer_funcs;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		if (adev->sdma.has_page_queue)
+			sched = &adev->sdma.instance[i].page.sched;
+		else
+			sched = &adev->sdma.instance[i].ring.sched;
+		adev->mman.buffer_funcs_scheds[i] = sched;
+	}
+
+	adev->mman.num_buffer_funcs_scheds = hub->sdma_invalidation_workaround ?
+		1 : adev->sdma.num_instances;
+}
+
 #if defined(CONFIG_DEBUG_FS)
 
 static int amdgpu_ttm_page_pool_show(struct seq_file *m, void *unused)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 4844f001f590..63c3e2466708 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -66,7 +66,8 @@ struct amdgpu_mman {
 
 	/* buffer handling */
 	const struct amdgpu_buffer_funcs	*buffer_funcs;
-	struct amdgpu_ring			*buffer_funcs_ring;
+	struct drm_gpu_scheduler		*buffer_funcs_scheds[AMDGPU_MAX_RINGS];
+	u32					num_buffer_funcs_scheds;
 	bool					buffer_funcs_enabled;
 
 	struct mutex				gtt_window_lock;
diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
index 5fe162f52c92..a36385ad8da8 100644
--- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
@@ -1333,8 +1333,7 @@ static const struct amdgpu_buffer_funcs cik_sdma_buffer_funcs = {
 
 static void cik_sdma_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	adev->mman.buffer_funcs = &cik_sdma_buffer_funcs;
-	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	amdgpu_sdma_set_buffer_funcs_scheds(adev, &cik_sdma_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs cik_sdma_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
index 63636643db3d..4a3ba136a36c 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
@@ -1228,8 +1228,7 @@ static const struct amdgpu_buffer_funcs sdma_v2_4_buffer_funcs = {
 
 static void sdma_v2_4_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	adev->mman.buffer_funcs = &sdma_v2_4_buffer_funcs;
-	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v2_4_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs sdma_v2_4_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
index 0153626b5df2..3cf527bcadf6 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
@@ -1670,8 +1670,7 @@ static const struct amdgpu_buffer_funcs sdma_v3_0_buffer_funcs = {
 
 static void sdma_v3_0_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	adev->mman.buffer_funcs = &sdma_v3_0_buffer_funcs;
-	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v3_0_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs sdma_v3_0_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 96a67b30854c..7e106baecad5 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -2608,11 +2608,7 @@ static const struct amdgpu_buffer_funcs sdma_v4_0_buffer_funcs = {
 
 static void sdma_v4_0_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	adev->mman.buffer_funcs = &sdma_v4_0_buffer_funcs;
-	if (adev->sdma.has_page_queue)
-		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].page;
-	else
-		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v4_0_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs sdma_v4_0_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index 04dc8a8f4d66..7cb0e213bab2 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -2309,11 +2309,7 @@ static const struct amdgpu_buffer_funcs sdma_v4_4_2_buffer_funcs = {
 
 static void sdma_v4_4_2_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	adev->mman.buffer_funcs = &sdma_v4_4_2_buffer_funcs;
-	if (adev->sdma.has_page_queue)
-		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].page;
-	else
-		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v4_4_2_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs sdma_v4_4_2_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 19c717f2c602..eab09c5fc762 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -2066,10 +2066,8 @@ static const struct amdgpu_buffer_funcs sdma_v5_0_buffer_funcs = {
 
 static void sdma_v5_0_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	if (adev->mman.buffer_funcs == NULL) {
-		adev->mman.buffer_funcs = &sdma_v5_0_buffer_funcs;
-		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
-	}
+	if (adev->mman.buffer_funcs == NULL)
+		amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v5_0_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs sdma_v5_0_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index 7a07b8f4e86d..e843da1dce59 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -2076,10 +2076,8 @@ static const struct amdgpu_buffer_funcs sdma_v5_2_buffer_funcs = {
 
 static void sdma_v5_2_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	if (adev->mman.buffer_funcs == NULL) {
-		adev->mman.buffer_funcs = &sdma_v5_2_buffer_funcs;
-		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
-	}
+	if (adev->mman.buffer_funcs == NULL)
+		amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v5_2_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs sdma_v5_2_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index 8f8228c7adee..d078bff42983 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -1884,8 +1884,7 @@ static const struct amdgpu_buffer_funcs sdma_v6_0_buffer_funcs = {
 
 static void sdma_v6_0_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	adev->mman.buffer_funcs = &sdma_v6_0_buffer_funcs;
-	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v6_0_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs sdma_v6_0_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index cf412d8fb0ed..77ad6f128e75 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -1826,8 +1826,7 @@ static const struct amdgpu_buffer_funcs sdma_v7_0_buffer_funcs = {
 
 static void sdma_v7_0_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	adev->mman.buffer_funcs = &sdma_v7_0_buffer_funcs;
-	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v7_0_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs sdma_v7_0_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c b/drivers/gpu/drm/amd/amdgpu/si_dma.c
index 863e00086c30..4f6d7eeceb37 100644
--- a/drivers/gpu/drm/amd/amdgpu/si_dma.c
+++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c
@@ -826,8 +826,7 @@ static const struct amdgpu_buffer_funcs si_dma_buffer_funcs = {
 
 static void si_dma_set_buffer_funcs(struct amdgpu_device *adev)
 {
-	adev->mman.buffer_funcs = &si_dma_buffer_funcs;
-	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	amdgpu_sdma_set_buffer_funcs_scheds(adev, &si_dma_buffer_funcs);
 }
 
 static const struct amdgpu_vm_pte_funcs si_dma_vm_pte_funcs = {
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 943c3438c7ee..3f7b85aabb72 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -129,13 +129,14 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
 			     struct dma_fence **mfence)
 {
 	const u64 GTT_MAX_PAGES = AMDGPU_GTT_MAX_TRANSFER_SIZE;
-	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+	struct amdgpu_ring *ring;
 	struct amdgpu_ttm_buffer_entity *entity;
 	u64 gart_s, gart_d;
 	struct dma_fence *next;
 	u64 size;
 	int r;
 
+	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
 	entity = &adev->mman.move_entities[0];
 
 	mutex_lock(&entity->gart_window_lock);
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v2 16/20] drm/amdgpu: give ttm entities access to all the sdma scheds
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (14 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-17  9:54   ` Christian König
  2025-11-13 16:05 ` [PATCH v2 17/20] drm/amdgpu: get rid of amdgpu_ttm_clear_buffer Pierre-Eric Pelloux-Prayer
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 11fec0fa4c11..94d0ff34593f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -2191,8 +2191,8 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 	u32 num_clear_entities, num_move_entities;
 	u32 windows, w;
 
-	num_clear_entities = adev->sdma.num_instances;
-	num_move_entities = MIN(adev->sdma.num_instances, TTM_NUM_MOVE_FENCES);
+	num_clear_entities = MIN(adev->mman.num_buffer_funcs_scheds, TTM_NUM_MOVE_FENCES);
+	num_move_entities = MIN(adev->mman.num_buffer_funcs_scheds, TTM_NUM_MOVE_FENCES);
 	windows = adev->gmc.is_app_apu ? 0 : (2 * num_move_entities + num_clear_entities);
 
 	if (!adev->mman.initialized || amdgpu_in_reset(adev) ||
@@ -2200,11 +2200,8 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 		return windows;
 
 	if (enable) {
-		struct drm_gpu_scheduler *sched;
-
-		sched = adev->mman.buffer_funcs_scheds[0];
 		r = drm_sched_entity_init(&adev->mman.default_entity.base,
-					  DRM_SCHED_PRIORITY_KERNEL, &sched,
+					  DRM_SCHED_PRIORITY_KERNEL, adev->mman.buffer_funcs_scheds,
 					  1, NULL);
 		if (r) {
 			dev_err(adev->dev, "Failed setting up entity (%d)\n",
@@ -2216,8 +2213,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 		atomic_set(&adev->mman.next_move_entity, 0);
 		for (i = 0; i < num_move_entities; i++) {
 			r = drm_sched_entity_init(&adev->mman.move_entities[i].base,
-						  DRM_SCHED_PRIORITY_NORMAL, &sched,
-						  1, NULL);
+						  DRM_SCHED_PRIORITY_NORMAL,
+						  adev->mman.buffer_funcs_scheds,
+						  adev->mman.num_buffer_funcs_scheds, NULL);
 			if (r) {
 				dev_err(adev->dev,
 					"Failed setting up TTM BO move entities (%d)\n",
@@ -2239,8 +2237,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
 
 		for (i = 0; i < num_clear_entities; i++) {
 			r = drm_sched_entity_init(&adev->mman.clear_entities[i].base,
-						  DRM_SCHED_PRIORITY_NORMAL, &sched,
-						  1, NULL);
+						  DRM_SCHED_PRIORITY_NORMAL,
+						  adev->mman.buffer_funcs_scheds,
+						  adev->mman.num_buffer_funcs_scheds, NULL);
 			if (r) {
 				for (j = 0; j < num_move_entities; j++)
 					drm_sched_entity_destroy(
-- 
2.43.0



* [PATCH v2 17/20] drm/amdgpu: get rid of amdgpu_ttm_clear_buffer
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (15 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 16/20] drm/amdgpu: give ttm entities access to all the sdma scheds Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-13 16:05 ` [PATCH v2 18/20] drm/amdgpu: rename amdgpu_fill_buffer as amdgpu_ttm_clear_buffer Pierre-Eric Pelloux-Prayer
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Sumit Semwal
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

It does the same thing as amdgpu_fill_buffer(src_data=0), so drop it.

The only caveat is that amdgpu_res_cleared()'s return value is only
valid right after allocation.

---
v2: introduce new "bool consider_clear_status" arg
---

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 15 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c    | 94 +++++-----------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h    |  6 +-
 3 files changed, 32 insertions(+), 83 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 33b397107778..4490b19752b8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -725,13 +725,16 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
 	    bo->tbo.resource->mem_type == TTM_PL_VRAM) {
 		struct dma_fence *fence;
 
-		r = amdgpu_ttm_clear_buffer(bo, bo->tbo.base.resv, &fence);
+		r = amdgpu_fill_buffer(NULL, bo, 0, NULL, &fence, NULL,
+				       true, AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
 		if (unlikely(r))
 			goto fail_unreserve;
 
-		dma_resv_add_fence(bo->tbo.base.resv, fence,
-				   DMA_RESV_USAGE_KERNEL);
-		dma_fence_put(fence);
+		if (fence) {
+			dma_resv_add_fence(bo->tbo.base.resv, fence,
+					   DMA_RESV_USAGE_KERNEL);
+			dma_fence_put(fence);
+		}
 	}
 	if (!bp->resv)
 		amdgpu_bo_unreserve(bo);
@@ -1321,8 +1324,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	if (r)
 		goto out;
 
-	r = amdgpu_fill_buffer(NULL, abo, 0, &bo->base._resv,
-			       &fence, NULL, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
+	r = amdgpu_fill_buffer(NULL, abo, 0, &bo->base._resv, &fence, NULL,
+			       false, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
 	if (WARN_ON(r))
 		goto out;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 94d0ff34593f..df05768c3817 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -435,7 +435,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 
 		r = amdgpu_fill_buffer(entity,
 				       abo, 0, NULL, &wipe_fence, fence,
-				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
+				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
 		if (r) {
 			goto error;
 		} else if (wipe_fence) {
@@ -2418,82 +2418,27 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
 }
 
 /**
- * amdgpu_ttm_clear_buffer - clear memory buffers
- * @bo: amdgpu buffer object
- * @resv: reservation object
- * @fence: dma_fence associated with the operation
+ * amdgpu_fill_buffer - fill a buffer with a given value
+ * @entity: optional entity to use. If NULL, the clearing entities will be
+ *          used to load-balance the partial clears
+ * @bo: the bo to fill
+ * @src_data: the value to set
+ * @resv: fences contained in this reservation will be used as dependencies.
+ * @out_fence: the fence from the last clear will be stored here. It might be
+ *             NULL if no job was run.
+ * @dependency: optional input dependency fence.
+ * @consider_clear_status: true if regions reported as cleared by
+ *                         amdgpu_res_cleared() should be skipped.
+ * @k_job_id: trace id
  *
- * Clear the memory buffer resource.
- *
- * Returns:
- * 0 for success or a negative error code on failure.
  */
-int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
-			    struct dma_resv *resv,
-			    struct dma_fence **fence)
-{
-	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
-	struct amdgpu_ring *ring;
-	struct amdgpu_ttm_buffer_entity *entity;
-	struct amdgpu_res_cursor cursor;
-	u64 addr;
-	int r = 0;
-
-	if (!adev->mman.buffer_funcs_enabled)
-		return -EINVAL;
-
-	if (!fence)
-		return -EINVAL;
-
-	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
-	entity = &adev->mman.clear_entities[0];
-	*fence = dma_fence_get_stub();
-
-	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
-
-	mutex_lock(&entity->gart_window_lock);
-	while (cursor.remaining) {
-		struct dma_fence *next = NULL;
-		u64 size;
-
-		if (amdgpu_res_cleared(&cursor)) {
-			amdgpu_res_next(&cursor, cursor.size);
-			continue;
-		}
-
-		/* Never clear more than 256MiB at once to avoid timeouts */
-		size = min(cursor.size, 256ULL << 20);
-
-		r = amdgpu_ttm_map_buffer(&entity->base,
-					  &bo->tbo, bo->tbo.resource, &cursor,
-					  entity->gart_window_id1, ring, false, &size, &addr,
-					  NULL, NULL);
-		if (r)
-			goto err;
-
-		r = amdgpu_ttm_fill_mem(ring, &entity->base, 0, addr, size, resv,
-					&next, true,
-					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
-		if (r)
-			goto err;
-
-		dma_fence_put(*fence);
-		*fence = next;
-
-		amdgpu_res_next(&cursor, size);
-	}
-err:
-	mutex_unlock(&entity->gart_window_lock);
-
-	return r;
-}
-
 int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		       struct amdgpu_bo *bo,
 		       uint32_t src_data,
 		       struct dma_resv *resv,
-		       struct dma_fence **f,
+		       struct dma_fence **out_fence,
 		       struct dma_fence *dependency,
+		       bool consider_clear_status,
 		       u64 k_job_id)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
@@ -2523,6 +2468,11 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		struct dma_fence *next;
 		uint64_t cur_size, to;
 
+		if (consider_clear_status && amdgpu_res_cleared(&dst)) {
+			amdgpu_res_next(&dst, dst.size);
+			continue;
+		}
+
 		/* Never fill more than 256MiB at once to avoid timeouts */
 		cur_size = min(dst.size, 256ULL << 20);
 
@@ -2548,9 +2498,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 	}
 error:
 	mutex_unlock(&entity->gart_window_lock);
-	if (f)
-		*f = dma_fence_get(fence);
-	dma_fence_put(fence);
+	*out_fence = fence;
 	return r;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 63c3e2466708..e01c2173d79f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -181,15 +181,13 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring,
 		       struct dma_resv *resv,
 		       struct dma_fence **fence,
 		       bool vm_needs_flush, uint32_t copy_flags);
-int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
-			    struct dma_resv *resv,
-			    struct dma_fence **fence);
 int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 		       struct amdgpu_bo *bo,
 		       uint32_t src_data,
 		       struct dma_resv *resv,
-		       struct dma_fence **f,
+		       struct dma_fence **out_fence,
 		       struct dma_fence *dependency,
+		       bool consider_clear_status,
 		       u64 k_job_id);
 
 int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
-- 
2.43.0



* [PATCH v2 18/20] drm/amdgpu: rename amdgpu_fill_buffer as amdgpu_ttm_clear_buffer
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (16 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 17/20] drm/amdgpu: get rid of amdgpu_ttm_clear_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:05 ` Pierre-Eric Pelloux-Prayer
  2025-11-17  9:56   ` Christian König
  2025-11-13 16:06 ` [PATCH v2 19/20] drm/amdgpu: use larger gart window when possible Pierre-Eric Pelloux-Prayer
  2025-11-13 16:06 ` [PATCH v2 20/20] drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE Pierre-Eric Pelloux-Prayer
  19 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:05 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
	Sumit Semwal
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel,
	linux-media, linaro-mm-sig

Clearing (src_data=0) is the only remaining use case for this function,
so rename it to match.

---
v2: amdgpu_ttm_clear_buffer instead of amdgpu_clear_buffer
---

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  8 +++----
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c    | 26 ++++++++++------------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h    | 15 ++++++-------
 3 files changed, 23 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 4490b19752b8..4b9518097899 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -725,8 +725,8 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
 	    bo->tbo.resource->mem_type == TTM_PL_VRAM) {
 		struct dma_fence *fence;
 
-		r = amdgpu_fill_buffer(NULL, bo, 0, NULL, &fence, NULL,
-				       true, AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
+		r = amdgpu_ttm_clear_buffer(NULL, bo, NULL, &fence, NULL,
+					    true, AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
 		if (unlikely(r))
 			goto fail_unreserve;
 
@@ -1324,8 +1324,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
 	if (r)
 		goto out;
 
-	r = amdgpu_fill_buffer(NULL, abo, 0, &bo->base._resv, &fence, NULL,
-			       false, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
+	r = amdgpu_ttm_clear_buffer(NULL, abo, &bo->base._resv, &fence, NULL,
+				    false, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
 	if (WARN_ON(r))
 		goto out;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index df05768c3817..0a55bc4ea91f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -433,9 +433,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
 	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
 		struct dma_fence *wipe_fence = NULL;
 
-		r = amdgpu_fill_buffer(entity,
-				       abo, 0, NULL, &wipe_fence, fence,
-				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
+		r = amdgpu_ttm_clear_buffer(entity,
+					    abo, NULL, &wipe_fence, fence,
+					    false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
 		if (r) {
 			goto error;
 		} else if (wipe_fence) {
@@ -2418,11 +2418,10 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
 }
 
 /**
- * amdgpu_fill_buffer - fill a buffer with a given value
+ * amdgpu_ttm_clear_buffer - fill a buffer with 0
  * @entity: optional entity to use. If NULL, the clearing entities will be
  *          used to load-balance the partial clears
  * @bo: the bo to fill
- * @src_data: the value to set
  * @resv: fences contained in this reservation will be used as dependencies.
  * @out_fence: the fence from the last clear will be stored here. It might be
  *             NULL if no job was run.
@@ -2432,14 +2431,13 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
  * @k_job_id: trace id
  *
  */
-int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
-		       struct amdgpu_bo *bo,
-		       uint32_t src_data,
-		       struct dma_resv *resv,
-		       struct dma_fence **out_fence,
-		       struct dma_fence *dependency,
-		       bool consider_clear_status,
-		       u64 k_job_id)
+int amdgpu_ttm_clear_buffer(struct amdgpu_ttm_buffer_entity *entity,
+			    struct amdgpu_bo *bo,
+			    struct dma_resv *resv,
+			    struct dma_fence **out_fence,
+			    struct dma_fence *dependency,
+			    bool consider_clear_status,
+			    u64 k_job_id)
 {
 	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
 	struct dma_fence *fence = NULL;
@@ -2486,7 +2484,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
 			goto error;
 
 		r = amdgpu_ttm_fill_mem(ring, &entity->base,
-					src_data, to, cur_size, resv,
+					0, to, cur_size, resv,
 					&next, true, k_job_id);
 		if (r)
 			goto error;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index e01c2173d79f..585aee9a173b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -181,14 +181,13 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring,
 		       struct dma_resv *resv,
 		       struct dma_fence **fence,
 		       bool vm_needs_flush, uint32_t copy_flags);
-int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
-		       struct amdgpu_bo *bo,
-		       uint32_t src_data,
-		       struct dma_resv *resv,
-		       struct dma_fence **out_fence,
-		       struct dma_fence *dependency,
-		       bool consider_clear_status,
-		       u64 k_job_id);
+int amdgpu_ttm_clear_buffer(struct amdgpu_ttm_buffer_entity *entity,
+			    struct amdgpu_bo *bo,
+			    struct dma_resv *resv,
+			    struct dma_fence **out_fence,
+			    struct dma_fence *dependency,
+			    bool consider_clear_status,
+			    u64 k_job_id);
 
 int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
 void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
-- 
2.43.0



* [PATCH v2 19/20] drm/amdgpu: use larger gart window when possible
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (17 preceding siblings ...)
  2025-11-13 16:05 ` [PATCH v2 18/20] drm/amdgpu: rename amdgpu_fill_buffer as amdgpu_ttm_clear_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:06 ` Pierre-Eric Pelloux-Prayer
  2025-11-13 16:06 ` [PATCH v2 20/20] drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE Pierre-Eric Pelloux-Prayer
  19 siblings, 0 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:06 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

Each entity's gart windows are contiguous, so when copying a buffer
whose src doesn't need a gart window, src's window can be used to
extend dst's (and vice versa).

This doubles the gart window size and reduces the number of jobs
required.

---
v2: pass adev instead of ring to amdgpu_ttm_needs_gart_window
---

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 84 ++++++++++++++++++-------
 1 file changed, 62 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 0a55bc4ea91f..9397459ec462 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -162,6 +162,21 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
 	*placement = abo->placement;
 }
 
+static bool amdgpu_ttm_needs_gart_window(struct amdgpu_device *adev,
+					 struct ttm_resource *mem,
+					 struct amdgpu_res_cursor *mm_cur,
+					 bool tmz,
+					 uint64_t *addr)
+{
+	/* Map only what can't be accessed directly */
+	if (!tmz && mem->start != AMDGPU_BO_INVALID_OFFSET) {
+		*addr = amdgpu_ttm_domain_start(adev, mem->mem_type) +
+			mm_cur->start;
+		return false;
+	}
+	return true;
+}
+
 /**
  * amdgpu_ttm_map_buffer - Map memory into the GART windows
  * @entity: entity to run the window setup job
@@ -169,6 +184,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
  * @mem: memory object to map
  * @mm_cur: range to map
  * @window: which GART window to use
+ * @use_two_windows: if true, use a double window
  * @ring: DMA ring to use for the copy
  * @tmz: if we should setup a TMZ enabled mapping
  * @size: in number of bytes to map, out number of bytes mapped
@@ -182,7 +198,9 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
 				 struct ttm_buffer_object *bo,
 				 struct ttm_resource *mem,
 				 struct amdgpu_res_cursor *mm_cur,
-				 unsigned int window, struct amdgpu_ring *ring,
+				 unsigned int window,
+				 bool use_two_windows,
+				 struct amdgpu_ring *ring,
 				 bool tmz, uint64_t *size, uint64_t *addr,
 				 struct dma_fence *dep,
 				 struct dma_resv *resv)
@@ -202,13 +220,8 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
 	if (WARN_ON(mem->mem_type == AMDGPU_PL_PREEMPT))
 		return -EINVAL;
 
-	/* Map only what can't be accessed directly */
-	if (!tmz && mem->start != AMDGPU_BO_INVALID_OFFSET) {
-		*addr = amdgpu_ttm_domain_start(adev, mem->mem_type) +
-			mm_cur->start;
+	if (!amdgpu_ttm_needs_gart_window(adev, mem, mm_cur, tmz, addr))
 		return 0;
-	}
-
 
 	/*
 	 * If start begins at an offset inside the page, then adjust the size
@@ -217,7 +230,8 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
 	offset = mm_cur->start & ~PAGE_MASK;
 
 	num_pages = PFN_UP(*size + offset);
-	num_pages = min_t(uint32_t, num_pages, AMDGPU_GTT_MAX_TRANSFER_SIZE);
+	num_pages = min_t(uint32_t,
+		num_pages, AMDGPU_GTT_MAX_TRANSFER_SIZE * (use_two_windows ? 2 : 1));
 
 	*size = min(*size, (uint64_t)num_pages * PAGE_SIZE - offset);
 
@@ -308,8 +322,11 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 				      struct dma_resv *resv,
 				      struct dma_fence **f)
 {
+
+	bool src_needs_gart_window, dst_needs_gart_window, use_two_gart_windows;
 	struct amdgpu_ring *ring;
 	struct amdgpu_res_cursor src_mm, dst_mm;
+	int src_gart_window, dst_gart_window;
 	struct dma_fence *fence = NULL;
 	int r = 0;
 	uint32_t copy_flags = 0;
@@ -335,20 +352,43 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
 		/* Never copy more than 256MiB at once to avoid a timeout */
 		cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
 
-		/* Map src to window 0 and dst to window 1. */
-		r = amdgpu_ttm_map_buffer(&entity->base,
-					  src->bo, src->mem, &src_mm,
-					  entity->gart_window_id0, ring, tmz, &cur_size, &from,
-					  NULL, NULL);
-		if (r)
-			goto error;
+		/* If only one direction needs a gart window to access memory, use both
+		 * windows for it.
+		 */
+		src_needs_gart_window =
+			amdgpu_ttm_needs_gart_window(adev, src->mem, &src_mm, tmz, &from);
+		dst_needs_gart_window =
+			amdgpu_ttm_needs_gart_window(adev, dst->mem, &dst_mm, tmz, &to);
 
-		r = amdgpu_ttm_map_buffer(&entity->base,
-					  dst->bo, dst->mem, &dst_mm,
-					  entity->gart_window_id1, ring, tmz, &cur_size, &to,
-					  NULL, NULL);
-		if (r)
-			goto error;
+		if (src_needs_gart_window) {
+			src_gart_window = entity->gart_window_id0;
+			use_two_gart_windows = !dst_needs_gart_window;
+		}
+		if (dst_needs_gart_window) {
+			dst_gart_window = src_needs_gart_window ?
+				entity->gart_window_id1 : entity->gart_window_id0;
+			use_two_gart_windows = !src_needs_gart_window;
+		}
+
+		if (src_needs_gart_window) {
+			r = amdgpu_ttm_map_buffer(&entity->base,
+						  src->bo, src->mem, &src_mm,
+						  src_gart_window, use_two_gart_windows,
+						  ring, tmz, &cur_size, &from,
+						  NULL, NULL);
+			if (r)
+				goto error;
+		}
+
+		if (dst_needs_gart_window) {
+			r = amdgpu_ttm_map_buffer(&entity->base,
+						  dst->bo, dst->mem, &dst_mm,
+						  dst_gart_window, use_two_gart_windows,
+						  ring, tmz, &cur_size, &to,
+						  NULL, NULL);
+			if (r)
+				goto error;
+		}
 
 		abo_src = ttm_to_amdgpu_bo(src->bo);
 		abo_dst = ttm_to_amdgpu_bo(dst->bo);
@@ -2476,7 +2516,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_ttm_buffer_entity *entity,
 
 		r = amdgpu_ttm_map_buffer(&entity->base,
 					  &bo->tbo, bo->tbo.resource, &dst,
-					  entity->gart_window_id1, ring, false,
+					  entity->gart_window_id1, false, ring, false,
 					  &cur_size, &to,
 					  dependency,
 					  resv);
-- 
2.43.0



* [PATCH v2 20/20] drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE
  2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
                   ` (18 preceding siblings ...)
  2025-11-13 16:06 ` [PATCH v2 19/20] drm/amdgpu: use larger gart window when possible Pierre-Eric Pelloux-Prayer
@ 2025-11-13 16:06 ` Pierre-Eric Pelloux-Prayer
  19 siblings, 0 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-13 16:06 UTC (permalink / raw)
  To: Alex Deucher, Christian König, David Airlie, Simona Vetter
  Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel

Makes copies/evictions faster when gart windows are required.

Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 585aee9a173b..910728cd084e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -38,7 +38,7 @@
 #define AMDGPU_PL_MMIO_REMAP	(TTM_PL_PRIV + 5)
 #define __AMDGPU_PL_NUM	(TTM_PL_PRIV + 6)
 
-#define AMDGPU_GTT_MAX_TRANSFER_SIZE	512
+#define AMDGPU_GTT_MAX_TRANSFER_SIZE	1024
 
 extern const struct attribute_group amdgpu_vram_mgr_attr_group;
 extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
-- 
2.43.0



* Re: [PATCH v2 01/20] drm/amdgpu: give each kernel job a unique id
  2025-11-13 16:05 ` [PATCH v2 01/20] drm/amdgpu: give each kernel job a unique id Pierre-Eric Pelloux-Prayer
@ 2025-11-14 12:26   ` Christian König
  2025-11-14 14:36     ` Pierre-Eric Pelloux-Prayer
  0 siblings, 1 reply; 53+ messages in thread
From: Christian König @ 2025-11-14 12:26 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Felix Kuehling
  Cc: Arunpravin Paneer Selvam, amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> Userspace jobs have drm_file.client_id as a unique identifier
> as job's owners. For kernel jobs, we can allocate arbitrary
> values - the risk of overlap with userspace ids is small (given
> that it's a u64 value).
> In the unlikely case the overlap happens, it'll only impact
> trace events.
> 
> Since this ID is traced in the gpu_scheduler trace events, this
> allows to determine the source of each job sent to the hardware.
> 
> To make grepping easier, the IDs are defined as they will appear
> in the trace output.
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> Acked-by: Alex Deucher <alexander.deucher@amd.com>
> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
> Link: https://lore.kernel.org/r/20250604122827.2191-1-pierre-eric.pelloux-prayer@amd.com

Acked-by: Christian König <christian.koenig@amd.com>

You should probably start pushing this patch to amd-staging-drm-next even when not the full patch set is reviewed.

We need to get this partially merged through drm-misc-next because of the TTM dependencies anyway.

Regards,
Christian

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c     |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c     |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  5 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     | 19 +++++++++++++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c    |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c  |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     | 28 +++++++++++++--------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h     |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c     |  3 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c     |  5 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c     |  8 +++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c      |  6 +++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h      |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c  |  4 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c   |  4 ++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 12 +++++----
>  drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c       |  6 +++--
>  drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c       |  6 +++--
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c    |  3 ++-
>  19 files changed, 84 insertions(+), 41 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> index 3d24f9cd750a..29c927f4d6df 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
> @@ -1549,7 +1549,8 @@ static int amdgpu_gfx_run_cleaner_shader_job(struct amdgpu_ring *ring)
>  	owner = (void *)(unsigned long)atomic_inc_return(&counter);
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, &entity, owner,
> -				     64, 0, &job);
> +				     64, 0, &job,
> +				     AMDGPU_KERNEL_JOB_ID_CLEANER_SHADER);
>  	if (r)
>  		goto err;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index 97b562a79ea8..9dcf51991b5b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -690,7 +690,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.high_pr,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
> -				     &job);
> +				     &job, AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB);
>  	if (r)
>  		goto error_alloc;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 55c7e104d5ca..3457bd649623 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -234,11 +234,12 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>  int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev,
>  			     struct drm_sched_entity *entity, void *owner,
>  			     size_t size, enum amdgpu_ib_pool_type pool_type,
> -			     struct amdgpu_job **job)
> +			     struct amdgpu_job **job, u64 k_job_id)
>  {
>  	int r;
>  
> -	r = amdgpu_job_alloc(adev, NULL, entity, owner, 1, job, 0);
> +	r = amdgpu_job_alloc(adev, NULL, entity, owner, 1, job,
> +			     k_job_id);
>  	if (r)
>  		return r;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> index d25f1fcf0242..7abf069d17d4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
> @@ -44,6 +44,22 @@
>  struct amdgpu_fence;
>  enum amdgpu_ib_pool_type;
>  
> +/* Internal kernel job ids. (decreasing values, starting from U64_MAX). */
> +#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE              (18446744073709551615ULL)
> +#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE_PDES         (18446744073709551614ULL)
> +#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE_RANGE        (18446744073709551613ULL)
> +#define AMDGPU_KERNEL_JOB_ID_VM_PT_CLEAR            (18446744073709551612ULL)
> +#define AMDGPU_KERNEL_JOB_ID_TTM_MAP_BUFFER         (18446744073709551611ULL)
> +#define AMDGPU_KERNEL_JOB_ID_TTM_ACCESS_MEMORY_SDMA (18446744073709551610ULL)
> +#define AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER        (18446744073709551609ULL)
> +#define AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE       (18446744073709551608ULL)
> +#define AMDGPU_KERNEL_JOB_ID_MOVE_BLIT              (18446744073709551607ULL)
> +#define AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER       (18446744073709551606ULL)
> +#define AMDGPU_KERNEL_JOB_ID_CLEANER_SHADER         (18446744073709551605ULL)
> +#define AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB          (18446744073709551604ULL)
> +#define AMDGPU_KERNEL_JOB_ID_KFD_GART_MAP           (18446744073709551603ULL)
> +#define AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST          (18446744073709551602ULL)
> +
>  struct amdgpu_job {
>  	struct drm_sched_job    base;
>  	struct amdgpu_vm	*vm;
> @@ -97,7 +113,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>  int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev,
>  			     struct drm_sched_entity *entity, void *owner,
>  			     size_t size, enum amdgpu_ib_pool_type pool_type,
> -			     struct amdgpu_job **job);
> +			     struct amdgpu_job **job,
> +			     u64 k_job_id);
>  void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
>  			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>  void amdgpu_job_free_resources(struct amdgpu_job *job);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
> index 91678621f1ff..63ee6ba6a931 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
> @@ -196,7 +196,8 @@ static int amdgpu_jpeg_dec_set_reg(struct amdgpu_ring *ring, uint32_t handle,
>  	int i, r;
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
> -				     AMDGPU_IB_POOL_DIRECT, &job);
> +				     AMDGPU_IB_POOL_DIRECT, &job,
> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index fe486988a738..e08f58de4b17 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -1321,7 +1321,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>  	if (r)
>  		goto out;
>  
> -	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true);
> +	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true,
> +			       AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>  	if (WARN_ON(r))
>  		goto out;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index e226c3aff7d7..326476089db3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -227,7 +227,8 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>  	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     num_dw * 4 + num_bytes,
> -				     AMDGPU_IB_POOL_DELAYED, &job);
> +				     AMDGPU_IB_POOL_DELAYED, &job,
> +				     AMDGPU_KERNEL_JOB_ID_TTM_MAP_BUFFER);
>  	if (r)
>  		return r;
>  
> @@ -406,7 +407,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>  		struct dma_fence *wipe_fence = NULL;
>  
>  		r = amdgpu_fill_buffer(abo, 0, NULL, &wipe_fence,
> -				       false);
> +				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>  		if (r) {
>  			goto error;
>  		} else if (wipe_fence) {
> @@ -1488,7 +1489,8 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>  	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     num_dw * 4, AMDGPU_IB_POOL_DELAYED,
> -				     &job);
> +				     &job,
> +				     AMDGPU_KERNEL_JOB_ID_TTM_ACCESS_MEMORY_SDMA);
>  	if (r)
>  		goto out;
>  
> @@ -2212,7 +2214,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>  				  struct dma_resv *resv,
>  				  bool vm_needs_flush,
>  				  struct amdgpu_job **job,
> -				  bool delayed)
> +				  bool delayed, u64 k_job_id)
>  {
>  	enum amdgpu_ib_pool_type pool = direct_submit ?
>  		AMDGPU_IB_POOL_DIRECT :
> @@ -2222,7 +2224,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>  						    &adev->mman.high_pr;
>  	r = amdgpu_job_alloc_with_ib(adev, entity,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
> -				     num_dw * 4, pool, job);
> +				     num_dw * 4, pool, job, k_job_id);
>  	if (r)
>  		return r;
>  
> @@ -2262,7 +2264,8 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>  	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
>  	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
>  	r = amdgpu_ttm_prepare_job(adev, direct_submit, num_dw,
> -				   resv, vm_needs_flush, &job, false);
> +				   resv, vm_needs_flush, &job, false,
> +				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>  	if (r)
>  		return r;
>  
> @@ -2297,7 +2300,8 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>  			       uint64_t dst_addr, uint32_t byte_count,
>  			       struct dma_resv *resv,
>  			       struct dma_fence **fence,
> -			       bool vm_needs_flush, bool delayed)
> +			       bool vm_needs_flush, bool delayed,
> +			       u64 k_job_id)
>  {
>  	struct amdgpu_device *adev = ring->adev;
>  	unsigned int num_loops, num_dw;
> @@ -2310,7 +2314,7 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>  	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
>  	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
>  	r = amdgpu_ttm_prepare_job(adev, false, num_dw, resv, vm_needs_flush,
> -				   &job, delayed);
> +				   &job, delayed, k_job_id);
>  	if (r)
>  		return r;
>  
> @@ -2380,7 +2384,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  			goto err;
>  
>  		r = amdgpu_ttm_fill_mem(ring, 0, addr, size, resv,
> -					&next, true, true);
> +					&next, true, true,
> +					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>  		if (r)
>  			goto err;
>  
> @@ -2399,7 +2404,8 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>  			uint32_t src_data,
>  			struct dma_resv *resv,
>  			struct dma_fence **f,
> -			bool delayed)
> +			bool delayed,
> +			u64 k_job_id)
>  {
>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>  	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> @@ -2429,7 +2435,7 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>  			goto error;
>  
>  		r = amdgpu_ttm_fill_mem(ring, src_data, to, cur_size, resv,
> -					&next, true, delayed);
> +					&next, true, delayed, k_job_id);
>  		if (r)
>  			goto error;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 054d48823d5f..577ee04ce0bf 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -175,7 +175,8 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>  			uint32_t src_data,
>  			struct dma_resv *resv,
>  			struct dma_fence **fence,
> -			bool delayed);
> +			bool delayed,
> +			u64 k_job_id);
>  
>  int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
>  void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> index 74758b5ffc6c..5c38f0d30c87 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
> @@ -1136,7 +1136,8 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>  	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->uvd.entity,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     64, direct ? AMDGPU_IB_POOL_DIRECT :
> -				     AMDGPU_IB_POOL_DELAYED, &job);
> +				     AMDGPU_IB_POOL_DELAYED, &job,
> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> index b9060bcd4806..ce318f5de047 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> @@ -449,7 +449,7 @@ static int amdgpu_vce_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
>  	r = amdgpu_job_alloc_with_ib(ring->adev, &ring->adev->vce.entity,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
> -				     &job);
> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> @@ -540,7 +540,8 @@ static int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     ib_size_dw * 4,
>  				     direct ? AMDGPU_IB_POOL_DIRECT :
> -				     AMDGPU_IB_POOL_DELAYED, &job);
> +				     AMDGPU_IB_POOL_DELAYED, &job,
> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> index 5ae7cc0d5f57..5e0786ea911b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> @@ -626,7 +626,7 @@ static int amdgpu_vcn_dec_send_msg(struct amdgpu_ring *ring,
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>  				     64, AMDGPU_IB_POOL_DIRECT,
> -				     &job);
> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		goto err;
>  
> @@ -806,7 +806,7 @@ static int amdgpu_vcn_dec_sw_send_msg(struct amdgpu_ring *ring,
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>  				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
> -				     &job);
> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		goto err;
>  
> @@ -936,7 +936,7 @@ static int amdgpu_vcn_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t hand
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>  				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
> -				     &job);
> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> @@ -1003,7 +1003,7 @@ static int amdgpu_vcn_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>  				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
> -				     &job);
> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index db66b4232de0..2f8e83f840a8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -983,7 +983,8 @@ int amdgpu_vm_update_pdes(struct amdgpu_device *adev,
>  	params.vm = vm;
>  	params.immediate = immediate;
>  
> -	r = vm->update_funcs->prepare(&params, NULL);
> +	r = vm->update_funcs->prepare(&params, NULL,
> +				      AMDGPU_KERNEL_JOB_ID_VM_UPDATE_PDES);
>  	if (r)
>  		goto error;
>  
> @@ -1152,7 +1153,8 @@ int amdgpu_vm_update_range(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>  		dma_fence_put(tmp);
>  	}
>  
> -	r = vm->update_funcs->prepare(&params, sync);
> +	r = vm->update_funcs->prepare(&params, sync,
> +				      AMDGPU_KERNEL_JOB_ID_VM_UPDATE_RANGE);
>  	if (r)
>  		goto error_free;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index 77207f4e448e..cf0ec94e8a07 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -308,7 +308,7 @@ struct amdgpu_vm_update_params {
>  struct amdgpu_vm_update_funcs {
>  	int (*map_table)(struct amdgpu_bo_vm *bo);
>  	int (*prepare)(struct amdgpu_vm_update_params *p,
> -		       struct amdgpu_sync *sync);
> +		       struct amdgpu_sync *sync, u64 k_job_id);
>  	int (*update)(struct amdgpu_vm_update_params *p,
>  		      struct amdgpu_bo_vm *bo, uint64_t pe, uint64_t addr,
>  		      unsigned count, uint32_t incr, uint64_t flags);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
> index 0c1ef5850a5e..22e2e5b47341 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
> @@ -40,12 +40,14 @@ static int amdgpu_vm_cpu_map_table(struct amdgpu_bo_vm *table)
>   *
>   * @p: see amdgpu_vm_update_params definition
>   * @sync: sync obj with fences to wait on
> + * @k_job_id: the id for tracing/debug purposes
>   *
>   * Returns:
>   * Negativ errno, 0 for success.
>   */
>  static int amdgpu_vm_cpu_prepare(struct amdgpu_vm_update_params *p,
> -				 struct amdgpu_sync *sync)
> +				 struct amdgpu_sync *sync,
> +				 u64 k_job_id)
>  {
>  	if (!sync)
>  		return 0;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
> index 30022123b0bf..f794fb1cc06e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
> @@ -26,6 +26,7 @@
>  #include "amdgpu.h"
>  #include "amdgpu_trace.h"
>  #include "amdgpu_vm.h"
> +#include "amdgpu_job.h"
>  
>  /*
>   * amdgpu_vm_pt_cursor - state for for_each_amdgpu_vm_pt
> @@ -395,7 +396,8 @@ int amdgpu_vm_pt_clear(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>  	params.vm = vm;
>  	params.immediate = immediate;
>  
> -	r = vm->update_funcs->prepare(&params, NULL);
> +	r = vm->update_funcs->prepare(&params, NULL,
> +				      AMDGPU_KERNEL_JOB_ID_VM_PT_CLEAR);
>  	if (r)
>  		goto exit;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
> index 46d9fb433ab2..36805dcfa159 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
> @@ -40,7 +40,7 @@ static int amdgpu_vm_sdma_map_table(struct amdgpu_bo_vm *table)
>  
>  /* Allocate a new job for @count PTE updates */
>  static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
> -				    unsigned int count)
> +				    unsigned int count, u64 k_job_id)
>  {
>  	enum amdgpu_ib_pool_type pool = p->immediate ? AMDGPU_IB_POOL_IMMEDIATE
>  		: AMDGPU_IB_POOL_DELAYED;
> @@ -56,7 +56,7 @@ static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
>  	ndw = min(ndw, AMDGPU_VM_SDMA_MAX_NUM_DW);
>  
>  	r = amdgpu_job_alloc_with_ib(p->adev, entity, AMDGPU_FENCE_OWNER_VM,
> -				     ndw * 4, pool, &p->job);
> +				     ndw * 4, pool, &p->job, k_job_id);
>  	if (r)
>  		return r;
>  
> @@ -69,16 +69,17 @@ static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
>   *
>   * @p: see amdgpu_vm_update_params definition
>   * @sync: amdgpu_sync object with fences to wait for
> + * @k_job_id: identifier of the job, for tracing purposes
>   *
>   * Returns:
>   * Negativ errno, 0 for success.
>   */
>  static int amdgpu_vm_sdma_prepare(struct amdgpu_vm_update_params *p,
> -				  struct amdgpu_sync *sync)
> +				  struct amdgpu_sync *sync, u64 k_job_id)
>  {
>  	int r;
>  
> -	r = amdgpu_vm_sdma_alloc_job(p, 0);
> +	r = amdgpu_vm_sdma_alloc_job(p, 0, k_job_id);
>  	if (r)
>  		return r;
>  
> @@ -249,7 +250,8 @@ static int amdgpu_vm_sdma_update(struct amdgpu_vm_update_params *p,
>  			if (r)
>  				return r;
>  
> -			r = amdgpu_vm_sdma_alloc_job(p, count);
> +			r = amdgpu_vm_sdma_alloc_job(p, count,
> +						     AMDGPU_KERNEL_JOB_ID_VM_UPDATE);
>  			if (r)
>  				return r;
>  		}
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
> index 1c07b701d0e4..ceb94bbb03a4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
> @@ -217,7 +217,8 @@ static int uvd_v6_0_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle
>  	int i, r;
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
> -				     AMDGPU_IB_POOL_DIRECT, &job);
> +				     AMDGPU_IB_POOL_DIRECT, &job,
> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> @@ -281,7 +282,8 @@ static int uvd_v6_0_enc_get_destroy_msg(struct amdgpu_ring *ring,
>  	int i, r;
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
> -				     AMDGPU_IB_POOL_DIRECT, &job);
> +				     AMDGPU_IB_POOL_DIRECT, &job,
> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> index 9d237b5937fb..1f8866f3f63c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> @@ -225,7 +225,8 @@ static int uvd_v7_0_enc_get_create_msg(struct amdgpu_ring *ring, u32 handle,
>  	int i, r;
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
> -				     AMDGPU_IB_POOL_DIRECT, &job);
> +				     AMDGPU_IB_POOL_DIRECT, &job,
> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> @@ -288,7 +289,8 @@ static int uvd_v7_0_enc_get_destroy_msg(struct amdgpu_ring *ring, u32 handle,
>  	int i, r;
>  
>  	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
> -				     AMDGPU_IB_POOL_DIRECT, &job);
> +				     AMDGPU_IB_POOL_DIRECT, &job,
> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>  	if (r)
>  		return r;
>  
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 3653c563ee9a..46c84fc60af1 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -67,7 +67,8 @@ svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     num_dw * 4 + num_bytes,
>  				     AMDGPU_IB_POOL_DELAYED,
> -				     &job);
> +				     &job,
> +				     AMDGPU_KERNEL_JOB_ID_KFD_GART_MAP);
>  	if (r)
>  		return r;
>  


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 02/20] drm/ttm: rework pipelined eviction fence handling
  2025-11-13 16:05 ` [PATCH v2 02/20] drm/ttm: rework pipelined eviction fence handling Pierre-Eric Pelloux-Prayer
@ 2025-11-14 12:47   ` Christian König
  2025-11-18 15:00   ` Thomas Hellström
  1 sibling, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-14 12:47 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Huang Rui, Matthew Auld, Matthew Brost,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig,
	open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> Until now, TTM stored a single pipelined eviction fence, which meant
> drivers had to use a single entity for these evictions.
> 
> To lift this requirement, this commit allows up to 8 entities to
> be used.
> 
> Ideally a dma_resv object would have been used as a container for
> the eviction fences, but the locking rules make this complex:
> all dma_resv objects share the same ww_class, which means "Attempting
> to lock more mutexes after ww_acquire_done." is an error.
> 
> One alternative considered was to introduce a 2nd ww_class for
> specific resvs holding a single "transient" lock (i.e. the resv
> lock would only be held for a short period, without taking any
> other locks).
> 
> The other option is to statically reserve a fence array and
> extend the existing code to deal with N fences instead of 1.
> 
> The driver is still responsible for reserving the correct number
> of fence slots.
> 
> ---
> v2:
> - simplified code
> - dropped n_fences
> - name changes
> ---
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |  8 ++--
>  .../gpu/drm/ttm/tests/ttm_bo_validate_test.c  | 11 +++--
>  drivers/gpu/drm/ttm/tests/ttm_resource_test.c |  5 +-
>  drivers/gpu/drm/ttm/ttm_bo.c                  | 47 ++++++++++---------
>  drivers/gpu/drm/ttm/ttm_bo_util.c             | 38 ++++++++++++---
>  drivers/gpu/drm/ttm/ttm_resource.c            | 31 +++++++-----
>  include/drm/ttm/ttm_resource.h                | 29 ++++++++----
>  7 files changed, 109 insertions(+), 60 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 326476089db3..3b46a24a8c48 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -2156,7 +2156,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  {
>  	struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
>  	uint64_t size;
> -	int r;
> +	int r, i;
>  
>  	if (!adev->mman.initialized || amdgpu_in_reset(adev) ||
>  	    adev->mman.buffer_funcs_enabled == enable || adev->gmc.is_app_apu)
> @@ -2190,8 +2190,10 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  	} else {
>  		drm_sched_entity_destroy(&adev->mman.high_pr);
>  		drm_sched_entity_destroy(&adev->mman.low_pr);
> -		dma_fence_put(man->move);
> -		man->move = NULL;
> +		for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
> +			dma_fence_put(man->eviction_fences[i]);
> +			man->eviction_fences[i] = NULL;
> +		}

That code should have been a TTM function in the first place.

I suggest just calling ttm_resource_manager_cleanup() here instead and adding this as a comment:

/* Drop all the old fences since re-creating the scheduler entities will allocate new contexts */
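Something along these lines (completely untested sketch, assuming the eviction_fences rename from this patch; ttm_resource_manager_cleanup() already does exactly that put-and-clear loop):

	} else {
		drm_sched_entity_destroy(&adev->mman.high_pr);
		drm_sched_entity_destroy(&adev->mman.low_pr);
		/* Drop all the old fences since re-creating the scheduler
		 * entities will allocate new contexts.
		 */
		ttm_resource_manager_cleanup(man);
	}

That way the fence handling stays inside TTM instead of being open coded in the driver.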

Apart from that looks good to me.

Regards,
Christian.

>  	}
>  
>  	/* this just adjusts TTM size idea, which sets lpfn to the correct value */
> diff --git a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
> index 3148f5d3dbd6..8f71906c4238 100644
> --- a/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
> +++ b/drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
> @@ -651,7 +651,7 @@ static void ttm_bo_validate_move_fence_signaled(struct kunit *test)
>  	int err;
>  
>  	man = ttm_manager_type(priv->ttm_dev, mem_type);
> -	man->move = dma_fence_get_stub();
> +	man->eviction_fences[0] = dma_fence_get_stub();
>  
>  	bo = ttm_bo_kunit_init(test, test->priv, size, NULL);
>  	bo->type = bo_type;
> @@ -668,7 +668,7 @@ static void ttm_bo_validate_move_fence_signaled(struct kunit *test)
>  	KUNIT_EXPECT_EQ(test, ctx.bytes_moved, size);
>  
>  	ttm_bo_put(bo);
> -	dma_fence_put(man->move);
> +	dma_fence_put(man->eviction_fences[0]);
>  }
>  
>  static const struct ttm_bo_validate_test_case ttm_bo_validate_wait_cases[] = {
> @@ -732,9 +732,9 @@ static void ttm_bo_validate_move_fence_not_signaled(struct kunit *test)
>  
>  	spin_lock_init(&fence_lock);
>  	man = ttm_manager_type(priv->ttm_dev, fst_mem);
> -	man->move = alloc_mock_fence(test);
> +	man->eviction_fences[0] = alloc_mock_fence(test);
>  
> -	task = kthread_create(threaded_fence_signal, man->move, "move-fence-signal");
> +	task = kthread_create(threaded_fence_signal, man->eviction_fences[0], "move-fence-signal");
>  	if (IS_ERR(task))
>  		KUNIT_FAIL(test, "Couldn't create move fence signal task\n");
>  
> @@ -742,7 +742,8 @@ static void ttm_bo_validate_move_fence_not_signaled(struct kunit *test)
>  	err = ttm_bo_validate(bo, placement_val, &ctx_val);
>  	dma_resv_unlock(bo->base.resv);
>  
> -	dma_fence_wait_timeout(man->move, false, MAX_SCHEDULE_TIMEOUT);
> +	dma_fence_wait_timeout(man->eviction_fences[0], false, MAX_SCHEDULE_TIMEOUT);
> +	man->eviction_fences[0] = NULL;
>  
>  	KUNIT_EXPECT_EQ(test, err, 0);
>  	KUNIT_EXPECT_EQ(test, ctx_val.bytes_moved, size);
> diff --git a/drivers/gpu/drm/ttm/tests/ttm_resource_test.c b/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
> index e6ea2bd01f07..c0e4e35e0442 100644
> --- a/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
> +++ b/drivers/gpu/drm/ttm/tests/ttm_resource_test.c
> @@ -207,6 +207,7 @@ static void ttm_resource_manager_init_basic(struct kunit *test)
>  	struct ttm_resource_test_priv *priv = test->priv;
>  	struct ttm_resource_manager *man;
>  	size_t size = SZ_16K;
> +	int i;
>  
>  	man = kunit_kzalloc(test, sizeof(*man), GFP_KERNEL);
>  	KUNIT_ASSERT_NOT_NULL(test, man);
> @@ -216,8 +217,8 @@ static void ttm_resource_manager_init_basic(struct kunit *test)
>  	KUNIT_ASSERT_PTR_EQ(test, man->bdev, priv->devs->ttm_dev);
>  	KUNIT_ASSERT_EQ(test, man->size, size);
>  	KUNIT_ASSERT_EQ(test, man->usage, 0);
> -	KUNIT_ASSERT_NULL(test, man->move);
> -	KUNIT_ASSERT_NOT_NULL(test, &man->move_lock);
> +	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++)
> +		KUNIT_ASSERT_NULL(test, man->eviction_fences[i]);
>  
>  	for (int i = 0; i < TTM_MAX_BO_PRIORITY; ++i)
>  		KUNIT_ASSERT_TRUE(test, list_empty(&man->lru[i]));
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index f4d9e68b21e7..0b3732ed6f6c 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -658,34 +658,35 @@ void ttm_bo_unpin(struct ttm_buffer_object *bo)
>  EXPORT_SYMBOL(ttm_bo_unpin);
>  
>  /*
> - * Add the last move fence to the BO as kernel dependency and reserve a new
> - * fence slot.
> + * Add the pipelined eviction fences to the BO as kernel dependencies and reserve new
> + * fence slots.
>   */
> -static int ttm_bo_add_move_fence(struct ttm_buffer_object *bo,
> -				 struct ttm_resource_manager *man,
> -				 bool no_wait_gpu)
> +static int ttm_bo_add_pipelined_eviction_fences(struct ttm_buffer_object *bo,
> +						struct ttm_resource_manager *man,
> +						bool no_wait_gpu)
>  {
>  	struct dma_fence *fence;
> -	int ret;
> +	int i;
>  
> -	spin_lock(&man->move_lock);
> -	fence = dma_fence_get(man->move);
> -	spin_unlock(&man->move_lock);
> +	spin_lock(&man->eviction_lock);
> +	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
> +		fence = man->eviction_fences[i];
> +		if (!fence)
> +			continue;
>  
> -	if (!fence)
> -		return 0;
> -
> -	if (no_wait_gpu) {
> -		ret = dma_fence_is_signaled(fence) ? 0 : -EBUSY;
> -		dma_fence_put(fence);
> -		return ret;
> +		if (no_wait_gpu) {
> +			if (!dma_fence_is_signaled(fence)) {
> +				spin_unlock(&man->eviction_lock);
> +				return -EBUSY;
> +			}
> +		} else {
> +			dma_resv_add_fence(bo->base.resv, fence, DMA_RESV_USAGE_KERNEL);
> +		}
>  	}
> +	spin_unlock(&man->eviction_lock);
>  
> -	dma_resv_add_fence(bo->base.resv, fence, DMA_RESV_USAGE_KERNEL);
> -
> -	ret = dma_resv_reserve_fences(bo->base.resv, 1);
> -	dma_fence_put(fence);
> -	return ret;
> +	/* TODO: this call should be removed. */
> +	return dma_resv_reserve_fences(bo->base.resv, 1);
>  }
>  
>  /**
> @@ -718,7 +719,7 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
>  	int i, ret;
>  
>  	ticket = dma_resv_locking_ctx(bo->base.resv);
> -	ret = dma_resv_reserve_fences(bo->base.resv, 1);
> +	ret = dma_resv_reserve_fences(bo->base.resv, TTM_NUM_MOVE_FENCES);
>  	if (unlikely(ret))
>  		return ret;
>  
> @@ -757,7 +758,7 @@ static int ttm_bo_alloc_resource(struct ttm_buffer_object *bo,
>  				return ret;
>  		}
>  
> -		ret = ttm_bo_add_move_fence(bo, man, ctx->no_wait_gpu);
> +		ret = ttm_bo_add_pipelined_eviction_fences(bo, man, ctx->no_wait_gpu);
>  		if (unlikely(ret)) {
>  			ttm_resource_free(bo, res);
>  			if (ret == -EBUSY)
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index acbbca9d5c92..2ff35d55e462 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -258,7 +258,7 @@ static int ttm_buffer_object_transfer(struct ttm_buffer_object *bo,
>  	ret = dma_resv_trylock(&fbo->base.base._resv);
>  	WARN_ON(!ret);
>  
> -	ret = dma_resv_reserve_fences(&fbo->base.base._resv, 1);
> +	ret = dma_resv_reserve_fences(&fbo->base.base._resv, TTM_NUM_MOVE_FENCES);
>  	if (ret) {
>  		dma_resv_unlock(&fbo->base.base._resv);
>  		kfree(fbo);
> @@ -646,20 +646,44 @@ static void ttm_bo_move_pipeline_evict(struct ttm_buffer_object *bo,
>  {
>  	struct ttm_device *bdev = bo->bdev;
>  	struct ttm_resource_manager *from;
> +	struct dma_fence *tmp;
> +	int i;
>  
>  	from = ttm_manager_type(bdev, bo->resource->mem_type);
>  
>  	/**
>  	 * BO doesn't have a TTM we need to bind/unbind. Just remember
> -	 * this eviction and free up the allocation
> +	 * this eviction and free up the allocation.
> +	 * The fence will be saved in the first free slot or in the slot
> +	 * already used to store a fence from the same context. Since
> +	 * drivers can't use more than TTM_NUM_MOVE_FENCES contexts for
> +	 * evictions we should always find a slot to use.
>  	 */
> -	spin_lock(&from->move_lock);
> -	if (!from->move || dma_fence_is_later(fence, from->move)) {
> -		dma_fence_put(from->move);
> -		from->move = dma_fence_get(fence);
> +	spin_lock(&from->eviction_lock);
> +	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
> +		tmp = from->eviction_fences[i];
> +		if (!tmp)
> +			break;
> +		if (fence->context != tmp->context)
> +			continue;
> +		if (dma_fence_is_later(fence, tmp)) {
> +			dma_fence_put(tmp);
> +			break;
> +		}
> +		goto unlock;
> +	}
> +	if (i < TTM_NUM_MOVE_FENCES) {
> +		from->eviction_fences[i] = dma_fence_get(fence);
> +	} else {
> +		WARN(1, "not enough fence slots for all fence contexts");
> +		spin_unlock(&from->eviction_lock);
> +		dma_fence_wait(fence, false);
> +		goto end;
>  	}
> -	spin_unlock(&from->move_lock);
>  
> +unlock:
> +	spin_unlock(&from->eviction_lock);
> +end:
>  	ttm_resource_free(bo, &bo->resource);
>  }
>  
> diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
> index e2c82ad07eb4..62c34cafa387 100644
> --- a/drivers/gpu/drm/ttm/ttm_resource.c
> +++ b/drivers/gpu/drm/ttm/ttm_resource.c
> @@ -523,14 +523,15 @@ void ttm_resource_manager_init(struct ttm_resource_manager *man,
>  {
>  	unsigned i;
>  
> -	spin_lock_init(&man->move_lock);
>  	man->bdev = bdev;
>  	man->size = size;
>  	man->usage = 0;
>  
>  	for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i)
>  		INIT_LIST_HEAD(&man->lru[i]);
> -	man->move = NULL;
> +	spin_lock_init(&man->eviction_lock);
> +	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++)
> +		man->eviction_fences[i] = NULL;
>  }
>  EXPORT_SYMBOL(ttm_resource_manager_init);
>  
> @@ -551,7 +552,7 @@ int ttm_resource_manager_evict_all(struct ttm_device *bdev,
>  		.no_wait_gpu = false,
>  	};
>  	struct dma_fence *fence;
> -	int ret;
> +	int ret, i;
>  
>  	do {
>  		ret = ttm_bo_evict_first(bdev, man, &ctx);
> @@ -561,18 +562,24 @@ int ttm_resource_manager_evict_all(struct ttm_device *bdev,
>  	if (ret && ret != -ENOENT)
>  		return ret;
>  
> -	spin_lock(&man->move_lock);
> -	fence = dma_fence_get(man->move);
> -	spin_unlock(&man->move_lock);
> +	ret = 0;
>  
> -	if (fence) {
> -		ret = dma_fence_wait(fence, false);
> -		dma_fence_put(fence);
> -		if (ret)
> -			return ret;
> +	spin_lock(&man->eviction_lock);
> +	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
> +		fence = man->eviction_fences[i];
> +		if (fence && !dma_fence_is_signaled(fence)) {
> +			dma_fence_get(fence);
> +			spin_unlock(&man->eviction_lock);
> +			ret = dma_fence_wait(fence, false);
> +			dma_fence_put(fence);
> +			if (ret)
> +				return ret;
> +			spin_lock(&man->eviction_lock);
> +		}
>  	}
> +	spin_unlock(&man->eviction_lock);
>  
> -	return 0;
> +	return ret;
>  }
>  EXPORT_SYMBOL(ttm_resource_manager_evict_all);
>  
> diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
> index f49daa504c36..50e6added509 100644
> --- a/include/drm/ttm/ttm_resource.h
> +++ b/include/drm/ttm/ttm_resource.h
> @@ -50,6 +50,15 @@ struct io_mapping;
>  struct sg_table;
>  struct scatterlist;
>  
> +/**
> + * define TTM_NUM_MOVE_FENCES - How many entities can be used for evictions
> + *
> + * Pipelined evictions can be spread on multiple entities. This
> + * is the max number of entities that can be used by the driver
> + * for that purpose.
> + */
> +#define TTM_NUM_MOVE_FENCES 8
> +
>  /**
>   * enum ttm_lru_item_type - enumerate ttm_lru_item subclasses
>   */
> @@ -180,8 +189,8 @@ struct ttm_resource_manager_func {
>   * @size: Size of the managed region.
>   * @bdev: ttm device this manager belongs to
>   * @func: structure pointer implementing the range manager. See above
> - * @move_lock: lock for move fence
> - * @move: The fence of the last pipelined move operation.
> + * @eviction_lock: lock for eviction fences
> + * @eviction_fences: The fences of the last pipelined move operations.
>   * @lru: The lru list for this memory type.
>   *
>   * This structure is used to identify and manage memory types for a device.
> @@ -195,12 +204,12 @@ struct ttm_resource_manager {
>  	struct ttm_device *bdev;
>  	uint64_t size;
>  	const struct ttm_resource_manager_func *func;
> -	spinlock_t move_lock;
>  
> -	/*
> -	 * Protected by @move_lock.
> +	/* This is very similar to a dma_resv object, but locking rules make
> +	 * it difficult to use one in this context.
>  	 */
> -	struct dma_fence *move;
> +	spinlock_t eviction_lock;
> +	struct dma_fence *eviction_fences[TTM_NUM_MOVE_FENCES];
>  
>  	/*
>  	 * Protected by the bdev->lru_lock.
> @@ -421,8 +430,12 @@ static inline bool ttm_resource_manager_used(struct ttm_resource_manager *man)
>  static inline void
>  ttm_resource_manager_cleanup(struct ttm_resource_manager *man)
>  {
> -	dma_fence_put(man->move);
> -	man->move = NULL;
> +	int i;
> +
> +	for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
> +		dma_fence_put(man->eviction_fences[i]);
> +		man->eviction_fences[i] = NULL;
> +	}
>  }
>  
>  void ttm_lru_bulk_move_init(struct ttm_lru_bulk_move *bulk);


^ permalink raw reply	[flat|nested] 53+ messages in thread
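[Editor's note: the lock-drop pattern in the `ttm_resource_manager_evict_all()` hunk above (take the lock, reference an unsignaled fence, drop the lock to wait, then re-take it) can be illustrated outside the kernel. The sketch below is a toy model only: `toy_fence`, `fence_wait()` and friends are simplified stand-ins for `dma_fence`, and a pthread mutex stands in for the spinlock.]

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define TTM_NUM_MOVE_FENCES 8

/* Toy stand-in for struct dma_fence. */
struct toy_fence { bool signaled; int refcount; };

static struct toy_fence *fence_get(struct toy_fence *f) { f->refcount++; return f; }
static void fence_put(struct toy_fence *f) { f->refcount--; }
/* Waiting on a toy fence simply marks it signaled. */
static int fence_wait(struct toy_fence *f) { f->signaled = true; return 0; }

/* Mirrors the eviction-fence loop above: for each slot holding an
 * unsignaled fence, take a reference, drop the lock (never sleep while
 * holding it), wait, drop the reference, then re-take the lock before
 * examining the next slot. */
static int wait_all_eviction_fences(pthread_mutex_t *lock,
                                    struct toy_fence *fences[TTM_NUM_MOVE_FENCES])
{
    int i, ret = 0;

    pthread_mutex_lock(lock);
    for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
        struct toy_fence *fence = fences[i];

        if (fence && !fence->signaled) {
            fence_get(fence);
            pthread_mutex_unlock(lock);
            ret = fence_wait(fence);
            fence_put(fence);
            if (ret)
                return ret;
            pthread_mutex_lock(lock);
        }
    }
    pthread_mutex_unlock(lock);
    return ret;
}
```

Note that after re-taking the lock the slot may hold a different fence than before the wait; the real code tolerates this because each iteration re-reads `man->eviction_fences[i]` under the lock.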

* Re: [PATCH v2 03/20] drm/amdgpu: remove direct_submit arg from amdgpu_copy_buffer
  2025-11-13 16:05 ` [PATCH v2 03/20] drm/amdgpu: remove direct_submit arg from amdgpu_copy_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-14 12:48   ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-14 12:48 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Felix Kuehling, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> It was always false.
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> Reviewed-by: Christian König <christian.koenig@amd.com>

Please push to amd-staging-drm-next.

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 20 +++++++------------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  2 +-
>  4 files changed, 10 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> index 199693369c7c..02c2479a8840 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> @@ -39,7 +39,7 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
>  	for (i = 0; i < n; i++) {
>  		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>  		r = amdgpu_copy_buffer(ring, saddr, daddr, size, NULL, &fence,
> -				       false, false, 0);
> +				       false, 0);
>  		if (r)
>  			goto exit_do_move;
>  		r = dma_fence_wait(fence, false);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 3b46a24a8c48..c985f57fa227 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -354,7 +354,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  		}
>  
>  		r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
> -				       &next, false, true, copy_flags);
> +				       &next, true, copy_flags);
>  		if (r)
>  			goto error;
>  
> @@ -2211,16 +2211,13 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  }
>  
>  static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
> -				  bool direct_submit,
>  				  unsigned int num_dw,
>  				  struct dma_resv *resv,
>  				  bool vm_needs_flush,
>  				  struct amdgpu_job **job,
>  				  bool delayed, u64 k_job_id)
>  {
> -	enum amdgpu_ib_pool_type pool = direct_submit ?
> -		AMDGPU_IB_POOL_DIRECT :
> -		AMDGPU_IB_POOL_DELAYED;
> +	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
>  	int r;
>  	struct drm_sched_entity *entity = delayed ? &adev->mman.low_pr :
>  						    &adev->mman.high_pr;
> @@ -2246,7 +2243,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>  int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>  		       uint64_t dst_offset, uint32_t byte_count,
>  		       struct dma_resv *resv,
> -		       struct dma_fence **fence, bool direct_submit,
> +		       struct dma_fence **fence,
>  		       bool vm_needs_flush, uint32_t copy_flags)
>  {
>  	struct amdgpu_device *adev = ring->adev;
> @@ -2256,7 +2253,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>  	unsigned int i;
>  	int r;
>  
> -	if (!direct_submit && !ring->sched.ready) {
> +	if (!ring->sched.ready) {
>  		dev_err(adev->dev,
>  			"Trying to move memory with ring turned off.\n");
>  		return -EINVAL;
> @@ -2265,7 +2262,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>  	max_bytes = adev->mman.buffer_funcs->copy_max_bytes;
>  	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
>  	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
> -	r = amdgpu_ttm_prepare_job(adev, direct_submit, num_dw,
> +	r = amdgpu_ttm_prepare_job(adev, num_dw,
>  				   resv, vm_needs_flush, &job, false,
>  				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>  	if (r)
> @@ -2283,10 +2280,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>  
>  	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>  	WARN_ON(job->ibs[0].length_dw > num_dw);
> -	if (direct_submit)
> -		r = amdgpu_job_submit_direct(job, ring, fence);
> -	else
> -		*fence = amdgpu_job_submit(job);
> +	*fence = amdgpu_job_submit(job);
>  	if (r)
>  		goto error_free;
>  
> @@ -2315,7 +2309,7 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>  	max_bytes = adev->mman.buffer_funcs->fill_max_bytes;
>  	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
>  	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
> -	r = amdgpu_ttm_prepare_job(adev, false, num_dw, resv, vm_needs_flush,
> +	r = amdgpu_ttm_prepare_job(adev, num_dw, resv, vm_needs_flush,
>  				   &job, delayed, k_job_id);
>  	if (r)
>  		return r;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 577ee04ce0bf..50e40380fe95 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -166,7 +166,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
>  int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>  		       uint64_t dst_offset, uint32_t byte_count,
>  		       struct dma_resv *resv,
> -		       struct dma_fence **fence, bool direct_submit,
> +		       struct dma_fence **fence,
>  		       bool vm_needs_flush, uint32_t copy_flags);
>  int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  			    struct dma_resv *resv,
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 46c84fc60af1..378af0b2aaa9 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -153,7 +153,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>  		}
>  
>  		r = amdgpu_copy_buffer(ring, gart_s, gart_d, size * PAGE_SIZE,
> -				       NULL, &next, false, true, 0);
> +				       NULL, &next, true, 0);
>  		if (r) {
>  			dev_err(adev->dev, "fail %d to copy memory\n", r);
>  			goto out_unlock;


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 04/20] drm/amdgpu: introduce amdgpu_ttm_buffer_entity
  2025-11-13 16:05 ` [PATCH v2 04/20] drm/amdgpu: introduce amdgpu_ttm_buffer_entity Pierre-Eric Pelloux-Prayer
@ 2025-11-14 12:57   ` Christian König
  2025-11-14 20:18     ` Felix Kuehling
  0 siblings, 1 reply; 53+ messages in thread
From: Christian König @ 2025-11-14 12:57 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Felix Kuehling
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> No functional change for now, but this struct will have more
> fields added in the next commit.
> 
> Technically the change introduces a synchronisation issue, because
> dependencies between successive jobs are not taken care of
> properly. For instance, amdgpu_ttm_clear_buffer uses
> amdgpu_ttm_map_buffer then amdgpu_ttm_fill_mem which use
> different entities (default_entity then move/clear entity).
> But it's all working as expected, because all entities use the
> same sdma instance for now and default_entity has a higher prio
> so its job always gets scheduled first.
> 
> The next commits will deal with these dependencies correctly.
> 
> ---
> v2: renamed amdgpu_ttm_buffer_entity
> ---
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 30 +++++++++++++++++-------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  | 12 ++++++----
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 13 ++++++----
>  4 files changed, 39 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index 9dcf51991b5b..8e2d41c9c271 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -687,7 +687,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  	 * itself at least for GART.
>  	 */
>  	mutex_lock(&adev->mman.gtt_window_lock);
> -	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.high_pr,
> +	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
>  				     &job, AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index c985f57fa227..42d448cd6a6d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -224,7 +224,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>  	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>  	num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
>  
> -	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
> +	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     num_dw * 4 + num_bytes,
>  				     AMDGPU_IB_POOL_DELAYED, &job,
> @@ -1486,7 +1486,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>  		memcpy(adev->mman.sdma_access_ptr, buf, len);
>  
>  	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
> -	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
> +	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     num_dw * 4, AMDGPU_IB_POOL_DELAYED,
>  				     &job,
> @@ -2168,7 +2168,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  
>  		ring = adev->mman.buffer_funcs_ring;
>  		sched = &ring->sched;
> -		r = drm_sched_entity_init(&adev->mman.high_pr,
> +		r = drm_sched_entity_init(&adev->mman.default_entity.base,
>  					  DRM_SCHED_PRIORITY_KERNEL, &sched,
>  					  1, NULL);
>  		if (r) {
> @@ -2178,18 +2178,30 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  			return;
>  		}
>  
> -		r = drm_sched_entity_init(&adev->mman.low_pr,
> +		r = drm_sched_entity_init(&adev->mman.clear_entity.base,
> +					  DRM_SCHED_PRIORITY_NORMAL, &sched,
> +					  1, NULL);
> +		if (r) {
> +			dev_err(adev->dev,
> +				"Failed setting up TTM BO clear entity (%d)\n",
> +				r);
> +			goto error_free_entity;
> +		}
> +
> +		r = drm_sched_entity_init(&adev->mman.move_entity.base,
>  					  DRM_SCHED_PRIORITY_NORMAL, &sched,
>  					  1, NULL);
>  		if (r) {
>  			dev_err(adev->dev,
>  				"Failed setting up TTM BO move entity (%d)\n",
>  				r);
> +			drm_sched_entity_destroy(&adev->mman.clear_entity.base);
>  			goto error_free_entity;
>  		}
>  	} else {
> -		drm_sched_entity_destroy(&adev->mman.high_pr);
> -		drm_sched_entity_destroy(&adev->mman.low_pr);
> +		drm_sched_entity_destroy(&adev->mman.default_entity.base);
> +		drm_sched_entity_destroy(&adev->mman.clear_entity.base);
> +		drm_sched_entity_destroy(&adev->mman.move_entity.base);
>  		for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
>  			dma_fence_put(man->eviction_fences[i]);
>  			man->eviction_fences[i] = NULL;
> @@ -2207,7 +2219,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  	return;
>  
>  error_free_entity:
> -	drm_sched_entity_destroy(&adev->mman.high_pr);
> +	drm_sched_entity_destroy(&adev->mman.default_entity.base);
>  }
>  
>  static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
> @@ -2219,8 +2231,8 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>  {
>  	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
>  	int r;
> -	struct drm_sched_entity *entity = delayed ? &adev->mman.low_pr :
> -						    &adev->mman.high_pr;
> +	struct drm_sched_entity *entity = delayed ? &adev->mman.clear_entity.base :
> +						    &adev->mman.move_entity.base;
>  	r = amdgpu_job_alloc_with_ib(adev, entity,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     num_dw * 4, pool, job, k_job_id);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 50e40380fe95..d2295d6c2b67 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -52,6 +52,10 @@ struct amdgpu_gtt_mgr {
>  	spinlock_t lock;
>  };
>  
> +struct amdgpu_ttm_buffer_entity {
> +	struct drm_sched_entity base;
> +};
> +
>  struct amdgpu_mman {
>  	struct ttm_device		bdev;
>  	struct ttm_pool			*ttm_pools;
> @@ -64,10 +68,10 @@ struct amdgpu_mman {
>  	bool					buffer_funcs_enabled;
>  
>  	struct mutex				gtt_window_lock;
> -	/* High priority scheduler entity for buffer moves */
> -	struct drm_sched_entity			high_pr;
> -	/* Low priority scheduler entity for VRAM clearing */
> -	struct drm_sched_entity			low_pr;
> +
> +	struct amdgpu_ttm_buffer_entity default_entity;
> +	struct amdgpu_ttm_buffer_entity clear_entity;
> +	struct amdgpu_ttm_buffer_entity move_entity;
>  
>  	struct amdgpu_vram_mgr vram_mgr;
>  	struct amdgpu_gtt_mgr gtt_mgr;
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 378af0b2aaa9..d74ff6e90590 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -45,7 +45,9 @@ svm_migrate_direct_mapping_addr(struct amdgpu_device *adev, u64 addr)
>  }
>  
>  static int
> -svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
> +svm_migrate_gart_map(struct amdgpu_ring *ring,
> +		     struct amdgpu_ttm_buffer_entity *entity,
> +		     u64 npages,
>  		     dma_addr_t *addr, u64 *gart_addr, u64 flags)
>  {
>  	struct amdgpu_device *adev = ring->adev;
> @@ -63,7 +65,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
>  	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>  	num_bytes = npages * 8;
>  
> -	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
> +	r = amdgpu_job_alloc_with_ib(adev, &entity->base,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     num_dw * 4 + num_bytes,
>  				     AMDGPU_IB_POOL_DELAYED,
> @@ -128,11 +130,14 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>  {
>  	const u64 GTT_MAX_PAGES = AMDGPU_GTT_MAX_TRANSFER_SIZE;
>  	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ttm_buffer_entity *entity;
>  	u64 gart_s, gart_d;
>  	struct dma_fence *next;
>  	u64 size;
>  	int r;
>  
> +	entity = &adev->mman.move_entity;
> +
>  	mutex_lock(&adev->mman.gtt_window_lock);
>  
>  	while (npages) {
> @@ -140,10 +145,10 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>  
>  		if (direction == FROM_VRAM_TO_RAM) {
>  			gart_s = svm_migrate_direct_mapping_addr(adev, *vram);
> -			r = svm_migrate_gart_map(ring, size, sys, &gart_d, 0);
> +			r = svm_migrate_gart_map(ring, entity, size, sys, &gart_d, 0);
>  
>  		} else if (direction == FROM_RAM_TO_VRAM) {
> -			r = svm_migrate_gart_map(ring, size, sys, &gart_s,
> +			r = svm_migrate_gart_map(ring, entity, size, sys, &gart_s,
>  						 KFD_IOCTL_SVM_FLAG_GPU_RO);
>  			gart_d = svm_migrate_direct_mapping_addr(adev, *vram);
>  		}


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions
  2025-11-13 16:05 ` [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions Pierre-Eric Pelloux-Prayer
@ 2025-11-14 13:07   ` Christian König
  2025-11-14 14:41     ` Pierre-Eric Pelloux-Prayer
  2025-11-14 20:20   ` Felix Kuehling
  1 sibling, 1 reply; 53+ messages in thread
From: Christian König @ 2025-11-14 13:07 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Felix Kuehling, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> This way the caller can select the one it wants to use.
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  3 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 75 +++++++++++--------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       | 16 ++--
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
>  5 files changed, 60 insertions(+), 41 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> index 02c2479a8840..b59040a8771f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> @@ -38,7 +38,8 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
>  	stime = ktime_get();
>  	for (i = 0; i < n; i++) {
>  		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> -		r = amdgpu_copy_buffer(ring, saddr, daddr, size, NULL, &fence,
> +		r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
> +				       saddr, daddr, size, NULL, &fence,
>  				       false, 0);
>  		if (r)
>  			goto exit_do_move;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index e08f58de4b17..c06c132a753c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -1321,8 +1321,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>  	if (r)
>  		goto out;
>  
> -	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true,
> -			       AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
> +	r = amdgpu_fill_buffer(&adev->mman.clear_entity, abo, 0, &bo->base._resv,
> +			       &fence, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>  	if (WARN_ON(r))
>  		goto out;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 42d448cd6a6d..c8d59ca2b3bd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -164,6 +164,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
>  
>  /**
>   * amdgpu_ttm_map_buffer - Map memory into the GART windows
> + * @entity: entity to run the window setup job
>   * @bo: buffer object to map
>   * @mem: memory object to map
>   * @mm_cur: range to map
> @@ -176,7 +177,8 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
>   * Setup one of the GART windows to access a specific piece of memory or return
>   * the physical address for local memory.
>   */
> -static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
> +static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
> +				 struct ttm_buffer_object *bo,


Probably better to split this patch into multiple patches.

One which changes amdgpu_ttm_map_buffer() and then another one or two for the higher level copy_buffer and fill_buffer functions.

>  				 struct ttm_resource *mem,
>  				 struct amdgpu_res_cursor *mm_cur,
>  				 unsigned int window, struct amdgpu_ring *ring,
> @@ -224,7 +226,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>  	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>  	num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
>  
> -	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
> +	r = amdgpu_job_alloc_with_ib(adev, entity,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     num_dw * 4 + num_bytes,
>  				     AMDGPU_IB_POOL_DELAYED, &job,
> @@ -274,6 +276,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>  /**
>   * amdgpu_ttm_copy_mem_to_mem - Helper function for copy
>   * @adev: amdgpu device
> + * @entity: entity to run the jobs
>   * @src: buffer/address where to read from
>   * @dst: buffer/address where to write to
>   * @size: number of bytes to copy
> @@ -288,6 +291,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>   */
>  __attribute__((nonnull))
>  static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
> +				      struct drm_sched_entity *entity,
>  				      const struct amdgpu_copy_mem *src,
>  				      const struct amdgpu_copy_mem *dst,
>  				      uint64_t size, bool tmz,
> @@ -320,12 +324,14 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  		cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
>  
>  		/* Map src to window 0 and dst to window 1. */
> -		r = amdgpu_ttm_map_buffer(src->bo, src->mem, &src_mm,
> +		r = amdgpu_ttm_map_buffer(entity,
> +					  src->bo, src->mem, &src_mm,
>  					  0, ring, tmz, &cur_size, &from);
>  		if (r)
>  			goto error;
>  
> -		r = amdgpu_ttm_map_buffer(dst->bo, dst->mem, &dst_mm,
> +		r = amdgpu_ttm_map_buffer(entity,
> +					  dst->bo, dst->mem, &dst_mm,
>  					  1, ring, tmz, &cur_size, &to);
>  		if (r)
>  			goto error;
> @@ -353,7 +359,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  							     write_compress_disable));
>  		}
>  
> -		r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
> +		r = amdgpu_copy_buffer(ring, entity, from, to, cur_size, resv,
>  				       &next, true, copy_flags);
>  		if (r)
>  			goto error;
> @@ -394,7 +400,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>  	src.offset = 0;
>  	dst.offset = 0;
>  
> -	r = amdgpu_ttm_copy_mem_to_mem(adev, &src, &dst,
> +	r = amdgpu_ttm_copy_mem_to_mem(adev,
> +				       &adev->mman.move_entity.base,
> +				       &src, &dst,
>  				       new_mem->size,
>  				       amdgpu_bo_encrypted(abo),
>  				       bo->base.resv, &fence);
> @@ -406,8 +414,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>  	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
>  		struct dma_fence *wipe_fence = NULL;
>  
> -		r = amdgpu_fill_buffer(abo, 0, NULL, &wipe_fence,
> -				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
> +		r = amdgpu_fill_buffer(&adev->mman.move_entity,
> +				       abo, 0, NULL, &wipe_fence,
> +				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>  		if (r) {
>  			goto error;
>  		} else if (wipe_fence) {
> @@ -2223,16 +2232,15 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  }
>  
>  static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
> +				  struct drm_sched_entity *entity,
>  				  unsigned int num_dw,
>  				  struct dma_resv *resv,
>  				  bool vm_needs_flush,
>  				  struct amdgpu_job **job,
> -				  bool delayed, u64 k_job_id)
> +				  u64 k_job_id)
>  {
>  	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
>  	int r;
> -	struct drm_sched_entity *entity = delayed ? &adev->mman.clear_entity.base :
> -						    &adev->mman.move_entity.base;
>  	r = amdgpu_job_alloc_with_ib(adev, entity,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     num_dw * 4, pool, job, k_job_id);
> @@ -2252,7 +2260,9 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>  						   DMA_RESV_USAGE_BOOKKEEP);
>  }
>  
> -int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
> +int amdgpu_copy_buffer(struct amdgpu_ring *ring,
> +		       struct drm_sched_entity *entity,
> +		       uint64_t src_offset,
>  		       uint64_t dst_offset, uint32_t byte_count,
>  		       struct dma_resv *resv,
>  		       struct dma_fence **fence,
> @@ -2274,8 +2284,8 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>  	max_bytes = adev->mman.buffer_funcs->copy_max_bytes;
>  	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
>  	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
> -	r = amdgpu_ttm_prepare_job(adev, num_dw,
> -				   resv, vm_needs_flush, &job, false,
> +	r = amdgpu_ttm_prepare_job(adev, entity, num_dw,
> +				   resv, vm_needs_flush, &job,
>  				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>  	if (r)
>  		return r;
> @@ -2304,11 +2314,13 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>  	return r;
>  }
>  
> -static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
> +static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
> +			       struct drm_sched_entity *entity,
> +			       uint32_t src_data,
>  			       uint64_t dst_addr, uint32_t byte_count,
>  			       struct dma_resv *resv,
>  			       struct dma_fence **fence,
> -			       bool vm_needs_flush, bool delayed,
> +			       bool vm_needs_flush,
>  			       u64 k_job_id)
>  {
>  	struct amdgpu_device *adev = ring->adev;
> @@ -2321,8 +2333,8 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>  	max_bytes = adev->mman.buffer_funcs->fill_max_bytes;
>  	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
>  	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
> -	r = amdgpu_ttm_prepare_job(adev, num_dw, resv, vm_needs_flush,
> -				   &job, delayed, k_job_id);
> +	r = amdgpu_ttm_prepare_job(adev, entity, num_dw, resv,
> +				   vm_needs_flush, &job, k_job_id);
>  	if (r)
>  		return r;
>  
> @@ -2386,13 +2398,14 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  		/* Never clear more than 256MiB at once to avoid timeouts */
>  		size = min(cursor.size, 256ULL << 20);
>  
> -		r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &cursor,
> +		r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity.base,
> +					  &bo->tbo, bo->tbo.resource, &cursor,
>  					  1, ring, false, &size, &addr);
>  		if (r)
>  			goto err;
>  
> -		r = amdgpu_ttm_fill_mem(ring, 0, addr, size, resv,
> -					&next, true, true,
> +		r = amdgpu_ttm_fill_mem(ring, &adev->mman.clear_entity.base, 0, addr, size, resv,
> +					&next, true,
>  					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>  		if (r)
>  			goto err;
> @@ -2408,12 +2421,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  	return r;
>  }
>  
> -int amdgpu_fill_buffer(struct amdgpu_bo *bo,
> -			uint32_t src_data,
> -			struct dma_resv *resv,
> -			struct dma_fence **f,
> -			bool delayed,
> -			u64 k_job_id)
> +int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
> +		       struct amdgpu_bo *bo,
> +		       uint32_t src_data,
> +		       struct dma_resv *resv,
> +		       struct dma_fence **f,
> +		       u64 k_job_id)
>  {
>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>  	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> @@ -2437,13 +2450,15 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>  		/* Never fill more than 256MiB at once to avoid timeouts */
>  		cur_size = min(dst.size, 256ULL << 20);
>  
> -		r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &dst,
> +		r = amdgpu_ttm_map_buffer(&entity->base,
> +					  &bo->tbo, bo->tbo.resource, &dst,
>  					  1, ring, false, &cur_size, &to);
>  		if (r)
>  			goto error;
>  
> -		r = amdgpu_ttm_fill_mem(ring, src_data, to, cur_size, resv,
> -					&next, true, delayed, k_job_id);
> +		r = amdgpu_ttm_fill_mem(ring, &entity->base,
> +					src_data, to, cur_size, resv,
> +					&next, true, k_job_id);
>  		if (r)
>  			goto error;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index d2295d6c2b67..e1655f86a016 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -167,7 +167,9 @@ int amdgpu_ttm_init(struct amdgpu_device *adev);
>  void amdgpu_ttm_fini(struct amdgpu_device *adev);
>  void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
>  					bool enable);
> -int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
> +int amdgpu_copy_buffer(struct amdgpu_ring *ring,
> +		       struct drm_sched_entity *entity,

If I'm not completely mistaken you should be able to drop the ring argument since that can be determined from the entity.

Apart from that looks rather good to me.

Regards,
Christian.

> +		       uint64_t src_offset,
>  		       uint64_t dst_offset, uint32_t byte_count,
>  		       struct dma_resv *resv,
>  		       struct dma_fence **fence,
> @@ -175,12 +177,12 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>  int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  			    struct dma_resv *resv,
>  			    struct dma_fence **fence);
> -int amdgpu_fill_buffer(struct amdgpu_bo *bo,
> -			uint32_t src_data,
> -			struct dma_resv *resv,
> -			struct dma_fence **fence,
> -			bool delayed,
> -			u64 k_job_id);
> +int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
> +		       struct amdgpu_bo *bo,
> +		       uint32_t src_data,
> +		       struct dma_resv *resv,
> +		       struct dma_fence **f,
> +		       u64 k_job_id);
>  
>  int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
>  void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index d74ff6e90590..09756132fa1b 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -157,7 +157,8 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>  			goto out_unlock;
>  		}
>  
> -		r = amdgpu_copy_buffer(ring, gart_s, gart_d, size * PAGE_SIZE,
> +		r = amdgpu_copy_buffer(ring, &entity->base,
> +				       gart_s, gart_d, size * PAGE_SIZE,
>  				       NULL, &next, true, 0);
>  		if (r) {
>  			dev_err(adev->dev, "fail %d to copy memory\n", r);


^ permalink raw reply	[flat|nested] 53+ messages in thread
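[Editor's note: Christian's suggestion to drop the `ring` argument presumably relies on amdgpu embedding the `drm_gpu_scheduler` inside `struct amdgpu_ring`, so the ring is recoverable from `entity->rq->sched` via `container_of()`. The sketch below is a simplified mock-up of that layout, not the real amdgpu/scheduler definitions.]

```c
#include <assert.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified stand-ins for the drm scheduler types. */
struct drm_gpu_scheduler { int ready; };
struct drm_sched_rq { struct drm_gpu_scheduler *sched; };
struct drm_sched_entity { struct drm_sched_rq *rq; };

struct amdgpu_ring {
    int idx;
    struct drm_gpu_scheduler sched;  /* scheduler embedded in the ring */
};

/* Analogue of applying amdgpu's to_amdgpu_ring() to the entity's current
 * scheduler: walk entity -> rq -> sched, then recover the enclosing ring. */
static struct amdgpu_ring *ring_from_entity(struct drm_sched_entity *entity)
{
    return container_of(entity->rq->sched, struct amdgpu_ring, sched);
}
```

This only works while the entity is bound to a single scheduler, which matches the `drm_sched_entity_init(..., &sched, 1, NULL)` calls in the series.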

* Re: [PATCH v2 01/20] drm/amdgpu: give each kernel job a unique id
  2025-11-14 12:26   ` Christian König
@ 2025-11-14 14:36     ` Pierre-Eric Pelloux-Prayer
  2025-11-14 14:57       ` Christian König
  0 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-14 14:36 UTC (permalink / raw)
  To: Christian König, Pierre-Eric Pelloux-Prayer, Alex Deucher,
	David Airlie, Simona Vetter, Felix Kuehling
  Cc: Arunpravin Paneer Selvam, amd-gfx, dri-devel, linux-kernel



Le 14/11/2025 à 13:26, Christian König a écrit :
> On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
>> Userspace jobs have drm_file.client_id as a unique identifier
>> as job's owners. For kernel jobs, we can allocate arbitrary
>> values - the risk of overlap with userspace ids is small (given
>> that it's a u64 value).
>> In the unlikely case the overlap happens, it'll only impact
>> trace events.
>>
>> Since this ID is traced in the gpu_scheduler trace events, this
>> allows to determine the source of each job sent to the hardware.
>>
>> To make grepping easier, the IDs are defined as they will appear
>> in the trace output.
>>
>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>> Acked-by: Alex Deucher <alexander.deucher@amd.com>
>> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
>> Link: https://lore.kernel.org/r/20250604122827.2191-1-pierre-eric.pelloux-prayer@amd.com
> 
> Acked-by: Christian König <christian.koenig@amd.com>
> 
> You should probably start pushing this patch to amd-staging-drm-next even if the full patch set hasn't been reviewed yet.
> 
> We need to get this partially merged through drm-misc-next because of the TTM dependencies anyway.

I've mentioned in the cover letter that this patch was already merged through 
drm-misc. I'm including it in the series to avoid conflicts.

Pierre-Eric

> 
> Regards,
> Christian
> 
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c     |  3 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c     |  2 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  5 ++--
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     | 19 +++++++++++++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c    |  3 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c  |  3 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     | 28 +++++++++++++--------
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h     |  3 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c     |  3 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c     |  5 ++--
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c     |  8 +++---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c      |  6 +++--
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h      |  2 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c  |  4 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c   |  4 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 12 +++++----
>>   drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c       |  6 +++--
>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c       |  6 +++--
>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c    |  3 ++-
>>   19 files changed, 84 insertions(+), 41 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>> index 3d24f9cd750a..29c927f4d6df 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>> @@ -1549,7 +1549,8 @@ static int amdgpu_gfx_run_cleaner_shader_job(struct amdgpu_ring *ring)
>>   	owner = (void *)(unsigned long)atomic_inc_return(&counter);
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, &entity, owner,
>> -				     64, 0, &job);
>> +				     64, 0, &job,
>> +				     AMDGPU_KERNEL_JOB_ID_CLEANER_SHADER);
>>   	if (r)
>>   		goto err;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> index 97b562a79ea8..9dcf51991b5b 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> @@ -690,7 +690,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.high_pr,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
>> -				     &job);
>> +				     &job, AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB);
>>   	if (r)
>>   		goto error_alloc;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> index 55c7e104d5ca..3457bd649623 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> @@ -234,11 +234,12 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>   int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev,
>>   			     struct drm_sched_entity *entity, void *owner,
>>   			     size_t size, enum amdgpu_ib_pool_type pool_type,
>> -			     struct amdgpu_job **job)
>> +			     struct amdgpu_job **job, u64 k_job_id)
>>   {
>>   	int r;
>>   
>> -	r = amdgpu_job_alloc(adev, NULL, entity, owner, 1, job, 0);
>> +	r = amdgpu_job_alloc(adev, NULL, entity, owner, 1, job,
>> +			     k_job_id);
>>   	if (r)
>>   		return r;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>> index d25f1fcf0242..7abf069d17d4 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>> @@ -44,6 +44,22 @@
>>   struct amdgpu_fence;
>>   enum amdgpu_ib_pool_type;
>>   
>> +/* Internal kernel job ids. (decreasing values, starting from U64_MAX). */
>> +#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE              (18446744073709551615ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE_PDES         (18446744073709551614ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE_RANGE        (18446744073709551613ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_VM_PT_CLEAR            (18446744073709551612ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_TTM_MAP_BUFFER         (18446744073709551611ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_TTM_ACCESS_MEMORY_SDMA (18446744073709551610ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER        (18446744073709551609ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE       (18446744073709551608ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_MOVE_BLIT              (18446744073709551607ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER       (18446744073709551606ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_CLEANER_SHADER         (18446744073709551605ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB          (18446744073709551604ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_KFD_GART_MAP           (18446744073709551603ULL)
>> +#define AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST          (18446744073709551602ULL)
>> +
>>   struct amdgpu_job {
>>   	struct drm_sched_job    base;
>>   	struct amdgpu_vm	*vm;
>> @@ -97,7 +113,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>   int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev,
>>   			     struct drm_sched_entity *entity, void *owner,
>>   			     size_t size, enum amdgpu_ib_pool_type pool_type,
>> -			     struct amdgpu_job **job);
>> +			     struct amdgpu_job **job,
>> +			     u64 k_job_id);
>>   void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
>>   			      struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>>   void amdgpu_job_free_resources(struct amdgpu_job *job);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
>> index 91678621f1ff..63ee6ba6a931 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
>> @@ -196,7 +196,8 @@ static int amdgpu_jpeg_dec_set_reg(struct amdgpu_ring *ring, uint32_t handle,
>>   	int i, r;
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>> -				     AMDGPU_IB_POOL_DIRECT, &job);
>> +				     AMDGPU_IB_POOL_DIRECT, &job,
>> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> index fe486988a738..e08f58de4b17 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> @@ -1321,7 +1321,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>>   	if (r)
>>   		goto out;
>>   
>> -	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true);
>> +	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true,
>> +			       AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>>   	if (WARN_ON(r))
>>   		goto out;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> index e226c3aff7d7..326476089db3 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> @@ -227,7 +227,8 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>   	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     num_dw * 4 + num_bytes,
>> -				     AMDGPU_IB_POOL_DELAYED, &job);
>> +				     AMDGPU_IB_POOL_DELAYED, &job,
>> +				     AMDGPU_KERNEL_JOB_ID_TTM_MAP_BUFFER);
>>   	if (r)
>>   		return r;
>>   
>> @@ -406,7 +407,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>>   		struct dma_fence *wipe_fence = NULL;
>>   
>>   		r = amdgpu_fill_buffer(abo, 0, NULL, &wipe_fence,
>> -				       false);
>> +				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>>   		if (r) {
>>   			goto error;
>>   		} else if (wipe_fence) {
>> @@ -1488,7 +1489,8 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>>   	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     num_dw * 4, AMDGPU_IB_POOL_DELAYED,
>> -				     &job);
>> +				     &job,
>> +				     AMDGPU_KERNEL_JOB_ID_TTM_ACCESS_MEMORY_SDMA);
>>   	if (r)
>>   		goto out;
>>   
>> @@ -2212,7 +2214,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>   				  struct dma_resv *resv,
>>   				  bool vm_needs_flush,
>>   				  struct amdgpu_job **job,
>> -				  bool delayed)
>> +				  bool delayed, u64 k_job_id)
>>   {
>>   	enum amdgpu_ib_pool_type pool = direct_submit ?
>>   		AMDGPU_IB_POOL_DIRECT :
>> @@ -2222,7 +2224,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>   						    &adev->mman.high_pr;
>>   	r = amdgpu_job_alloc_with_ib(adev, entity,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>> -				     num_dw * 4, pool, job);
>> +				     num_dw * 4, pool, job, k_job_id);
>>   	if (r)
>>   		return r;
>>   
>> @@ -2262,7 +2264,8 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>   	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
>>   	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
>>   	r = amdgpu_ttm_prepare_job(adev, direct_submit, num_dw,
>> -				   resv, vm_needs_flush, &job, false);
>> +				   resv, vm_needs_flush, &job, false,
>> +				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>>   	if (r)
>>   		return r;
>>   
>> @@ -2297,7 +2300,8 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>>   			       uint64_t dst_addr, uint32_t byte_count,
>>   			       struct dma_resv *resv,
>>   			       struct dma_fence **fence,
>> -			       bool vm_needs_flush, bool delayed)
>> +			       bool vm_needs_flush, bool delayed,
>> +			       u64 k_job_id)
>>   {
>>   	struct amdgpu_device *adev = ring->adev;
>>   	unsigned int num_loops, num_dw;
>> @@ -2310,7 +2314,7 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>>   	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
>>   	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
>>   	r = amdgpu_ttm_prepare_job(adev, false, num_dw, resv, vm_needs_flush,
>> -				   &job, delayed);
>> +				   &job, delayed, k_job_id);
>>   	if (r)
>>   		return r;
>>   
>> @@ -2380,7 +2384,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>   			goto err;
>>   
>>   		r = amdgpu_ttm_fill_mem(ring, 0, addr, size, resv,
>> -					&next, true, true);
>> +					&next, true, true,
>> +					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>>   		if (r)
>>   			goto err;
>>   
>> @@ -2399,7 +2404,8 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>   			uint32_t src_data,
>>   			struct dma_resv *resv,
>>   			struct dma_fence **f,
>> -			bool delayed)
>> +			bool delayed,
>> +			u64 k_job_id)
>>   {
>>   	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>   	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>> @@ -2429,7 +2435,7 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>   			goto error;
>>   
>>   		r = amdgpu_ttm_fill_mem(ring, src_data, to, cur_size, resv,
>> -					&next, true, delayed);
>> +					&next, true, delayed, k_job_id);
>>   		if (r)
>>   			goto error;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> index 054d48823d5f..577ee04ce0bf 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> @@ -175,7 +175,8 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>   			uint32_t src_data,
>>   			struct dma_resv *resv,
>>   			struct dma_fence **fence,
>> -			bool delayed);
>> +			bool delayed,
>> +			u64 k_job_id);
>>   
>>   int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
>>   void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> index 74758b5ffc6c..5c38f0d30c87 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> @@ -1136,7 +1136,8 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->uvd.entity,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     64, direct ? AMDGPU_IB_POOL_DIRECT :
>> -				     AMDGPU_IB_POOL_DELAYED, &job);
>> +				     AMDGPU_IB_POOL_DELAYED, &job,
>> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
>> index b9060bcd4806..ce318f5de047 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
>> @@ -449,7 +449,7 @@ static int amdgpu_vce_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, &ring->adev->vce.entity,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
>> -				     &job);
>> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> @@ -540,7 +540,8 @@ static int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     ib_size_dw * 4,
>>   				     direct ? AMDGPU_IB_POOL_DIRECT :
>> -				     AMDGPU_IB_POOL_DELAYED, &job);
>> +				     AMDGPU_IB_POOL_DELAYED, &job,
>> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> index 5ae7cc0d5f57..5e0786ea911b 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> @@ -626,7 +626,7 @@ static int amdgpu_vcn_dec_send_msg(struct amdgpu_ring *ring,
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>>   				     64, AMDGPU_IB_POOL_DIRECT,
>> -				     &job);
>> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		goto err;
>>   
>> @@ -806,7 +806,7 @@ static int amdgpu_vcn_dec_sw_send_msg(struct amdgpu_ring *ring,
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>>   				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
>> -				     &job);
>> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		goto err;
>>   
>> @@ -936,7 +936,7 @@ static int amdgpu_vcn_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t hand
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>>   				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
>> -				     &job);
>> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> @@ -1003,7 +1003,7 @@ static int amdgpu_vcn_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>>   				     ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
>> -				     &job);
>> +				     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index db66b4232de0..2f8e83f840a8 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -983,7 +983,8 @@ int amdgpu_vm_update_pdes(struct amdgpu_device *adev,
>>   	params.vm = vm;
>>   	params.immediate = immediate;
>>   
>> -	r = vm->update_funcs->prepare(&params, NULL);
>> +	r = vm->update_funcs->prepare(&params, NULL,
>> +				      AMDGPU_KERNEL_JOB_ID_VM_UPDATE_PDES);
>>   	if (r)
>>   		goto error;
>>   
>> @@ -1152,7 +1153,8 @@ int amdgpu_vm_update_range(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>   		dma_fence_put(tmp);
>>   	}
>>   
>> -	r = vm->update_funcs->prepare(&params, sync);
>> +	r = vm->update_funcs->prepare(&params, sync,
>> +				      AMDGPU_KERNEL_JOB_ID_VM_UPDATE_RANGE);
>>   	if (r)
>>   		goto error_free;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> index 77207f4e448e..cf0ec94e8a07 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>> @@ -308,7 +308,7 @@ struct amdgpu_vm_update_params {
>>   struct amdgpu_vm_update_funcs {
>>   	int (*map_table)(struct amdgpu_bo_vm *bo);
>>   	int (*prepare)(struct amdgpu_vm_update_params *p,
>> -		       struct amdgpu_sync *sync);
>> +		       struct amdgpu_sync *sync, u64 k_job_id);
>>   	int (*update)(struct amdgpu_vm_update_params *p,
>>   		      struct amdgpu_bo_vm *bo, uint64_t pe, uint64_t addr,
>>   		      unsigned count, uint32_t incr, uint64_t flags);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
>> index 0c1ef5850a5e..22e2e5b47341 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
>> @@ -40,12 +40,14 @@ static int amdgpu_vm_cpu_map_table(struct amdgpu_bo_vm *table)
>>    *
>>    * @p: see amdgpu_vm_update_params definition
>>    * @sync: sync obj with fences to wait on
>> + * @k_job_id: the id for tracing/debug purposes
>>    *
>>    * Returns:
>>    * Negativ errno, 0 for success.
>>    */
>>   static int amdgpu_vm_cpu_prepare(struct amdgpu_vm_update_params *p,
>> -				 struct amdgpu_sync *sync)
>> +				 struct amdgpu_sync *sync,
>> +				 u64 k_job_id)
>>   {
>>   	if (!sync)
>>   		return 0;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
>> index 30022123b0bf..f794fb1cc06e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
>> @@ -26,6 +26,7 @@
>>   #include "amdgpu.h"
>>   #include "amdgpu_trace.h"
>>   #include "amdgpu_vm.h"
>> +#include "amdgpu_job.h"
>>   
>>   /*
>>    * amdgpu_vm_pt_cursor - state for for_each_amdgpu_vm_pt
>> @@ -395,7 +396,8 @@ int amdgpu_vm_pt_clear(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>   	params.vm = vm;
>>   	params.immediate = immediate;
>>   
>> -	r = vm->update_funcs->prepare(&params, NULL);
>> +	r = vm->update_funcs->prepare(&params, NULL,
>> +				      AMDGPU_KERNEL_JOB_ID_VM_PT_CLEAR);
>>   	if (r)
>>   		goto exit;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
>> index 46d9fb433ab2..36805dcfa159 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
>> @@ -40,7 +40,7 @@ static int amdgpu_vm_sdma_map_table(struct amdgpu_bo_vm *table)
>>   
>>   /* Allocate a new job for @count PTE updates */
>>   static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
>> -				    unsigned int count)
>> +				    unsigned int count, u64 k_job_id)
>>   {
>>   	enum amdgpu_ib_pool_type pool = p->immediate ? AMDGPU_IB_POOL_IMMEDIATE
>>   		: AMDGPU_IB_POOL_DELAYED;
>> @@ -56,7 +56,7 @@ static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
>>   	ndw = min(ndw, AMDGPU_VM_SDMA_MAX_NUM_DW);
>>   
>>   	r = amdgpu_job_alloc_with_ib(p->adev, entity, AMDGPU_FENCE_OWNER_VM,
>> -				     ndw * 4, pool, &p->job);
>> +				     ndw * 4, pool, &p->job, k_job_id);
>>   	if (r)
>>   		return r;
>>   
>> @@ -69,16 +69,17 @@ static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
>>    *
>>    * @p: see amdgpu_vm_update_params definition
>>    * @sync: amdgpu_sync object with fences to wait for
>> + * @k_job_id: identifier of the job, for tracing purpose
>>    *
>>    * Returns:
>>    * Negativ errno, 0 for success.
>>    */
>>   static int amdgpu_vm_sdma_prepare(struct amdgpu_vm_update_params *p,
>> -				  struct amdgpu_sync *sync)
>> +				  struct amdgpu_sync *sync, u64 k_job_id)
>>   {
>>   	int r;
>>   
>> -	r = amdgpu_vm_sdma_alloc_job(p, 0);
>> +	r = amdgpu_vm_sdma_alloc_job(p, 0, k_job_id);
>>   	if (r)
>>   		return r;
>>   
>> @@ -249,7 +250,8 @@ static int amdgpu_vm_sdma_update(struct amdgpu_vm_update_params *p,
>>   			if (r)
>>   				return r;
>>   
>> -			r = amdgpu_vm_sdma_alloc_job(p, count);
>> +			r = amdgpu_vm_sdma_alloc_job(p, count,
>> +						     AMDGPU_KERNEL_JOB_ID_VM_UPDATE);
>>   			if (r)
>>   				return r;
>>   		}
>> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>> index 1c07b701d0e4..ceb94bbb03a4 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>> @@ -217,7 +217,8 @@ static int uvd_v6_0_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle
>>   	int i, r;
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>> -				     AMDGPU_IB_POOL_DIRECT, &job);
>> +				     AMDGPU_IB_POOL_DIRECT, &job,
>> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> @@ -281,7 +282,8 @@ static int uvd_v6_0_enc_get_destroy_msg(struct amdgpu_ring *ring,
>>   	int i, r;
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>> -				     AMDGPU_IB_POOL_DIRECT, &job);
>> +				     AMDGPU_IB_POOL_DIRECT, &job,
>> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>> index 9d237b5937fb..1f8866f3f63c 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>> @@ -225,7 +225,8 @@ static int uvd_v7_0_enc_get_create_msg(struct amdgpu_ring *ring, u32 handle,
>>   	int i, r;
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>> -				     AMDGPU_IB_POOL_DIRECT, &job);
>> +				     AMDGPU_IB_POOL_DIRECT, &job,
>> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> @@ -288,7 +289,8 @@ static int uvd_v7_0_enc_get_destroy_msg(struct amdgpu_ring *ring, u32 handle,
>>   	int i, r;
>>   
>>   	r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>> -				     AMDGPU_IB_POOL_DIRECT, &job);
>> +				     AMDGPU_IB_POOL_DIRECT, &job,
>> +				     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>   	if (r)
>>   		return r;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> index 3653c563ee9a..46c84fc60af1 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> @@ -67,7 +67,8 @@ svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     num_dw * 4 + num_bytes,
>>   				     AMDGPU_IB_POOL_DELAYED,
>> -				     &job);
>> +				     &job,
>> +				     AMDGPU_KERNEL_JOB_ID_KFD_GART_MAP);
>>   	if (r)
>>   		return r;
>>   



* Re: [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions
  2025-11-14 13:07   ` Christian König
@ 2025-11-14 14:41     ` Pierre-Eric Pelloux-Prayer
  2025-11-17  9:41       ` Pierre-Eric Pelloux-Prayer
  0 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-14 14:41 UTC (permalink / raw)
  To: Christian König, Pierre-Eric Pelloux-Prayer, Alex Deucher,
	David Airlie, Simona Vetter, Felix Kuehling, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig



On 14/11/2025 14:07, Christian König wrote:
> On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
>> This way the caller can select the one it wants to use.
>>
>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  3 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 75 +++++++++++--------
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       | 16 ++--
>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
>>   5 files changed, 60 insertions(+), 41 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>> index 02c2479a8840..b59040a8771f 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>> @@ -38,7 +38,8 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
>>   	stime = ktime_get();
>>   	for (i = 0; i < n; i++) {
>>   		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>> -		r = amdgpu_copy_buffer(ring, saddr, daddr, size, NULL, &fence,
>> +		r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
>> +				       saddr, daddr, size, NULL, &fence,
>>   				       false, 0);
>>   		if (r)
>>   			goto exit_do_move;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> index e08f58de4b17..c06c132a753c 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> @@ -1321,8 +1321,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>>   	if (r)
>>   		goto out;
>>   
>> -	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true,
>> -			       AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>> +	r = amdgpu_fill_buffer(&adev->mman.clear_entity, abo, 0, &bo->base._resv,
>> +			       &fence, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>>   	if (WARN_ON(r))
>>   		goto out;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> index 42d448cd6a6d..c8d59ca2b3bd 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> @@ -164,6 +164,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
>>   
>>   /**
>>    * amdgpu_ttm_map_buffer - Map memory into the GART windows
>> + * @entity: entity to run the window setup job
>>    * @bo: buffer object to map
>>    * @mem: memory object to map
>>    * @mm_cur: range to map
>> @@ -176,7 +177,8 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
>>    * Setup one of the GART windows to access a specific piece of memory or return
>>    * the physical address for local memory.
>>    */
>> -static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>> +static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>> +				 struct ttm_buffer_object *bo,
> 
> 
> Probably better to split this patch into multiple patches.
> 
> One which changes amdgpu_ttm_map_buffer() and then another one or two for the higher level copy_buffer and fill_buffer functions.

OK.

> 
>>   				 struct ttm_resource *mem,
>>   				 struct amdgpu_res_cursor *mm_cur,
>>   				 unsigned int window, struct amdgpu_ring *ring,
>> @@ -224,7 +226,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>   	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>>   	num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
>>   
>> -	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
>> +	r = amdgpu_job_alloc_with_ib(adev, entity,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     num_dw * 4 + num_bytes,
>>   				     AMDGPU_IB_POOL_DELAYED, &job,
>> @@ -274,6 +276,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>   /**
>>    * amdgpu_ttm_copy_mem_to_mem - Helper function for copy
>>    * @adev: amdgpu device
>> + * @entity: entity to run the jobs
>>    * @src: buffer/address where to read from
>>    * @dst: buffer/address where to write to
>>    * @size: number of bytes to copy
>> @@ -288,6 +291,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>    */
>>   __attribute__((nonnull))
>>   static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>> +				      struct drm_sched_entity *entity,
>>   				      const struct amdgpu_copy_mem *src,
>>   				      const struct amdgpu_copy_mem *dst,
>>   				      uint64_t size, bool tmz,
>> @@ -320,12 +324,14 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>   		cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
>>   
>>   		/* Map src to window 0 and dst to window 1. */
>> -		r = amdgpu_ttm_map_buffer(src->bo, src->mem, &src_mm,
>> +		r = amdgpu_ttm_map_buffer(entity,
>> +					  src->bo, src->mem, &src_mm,
>>   					  0, ring, tmz, &cur_size, &from);
>>   		if (r)
>>   			goto error;
>>   
>> -		r = amdgpu_ttm_map_buffer(dst->bo, dst->mem, &dst_mm,
>> +		r = amdgpu_ttm_map_buffer(entity,
>> +					  dst->bo, dst->mem, &dst_mm,
>>   					  1, ring, tmz, &cur_size, &to);
>>   		if (r)
>>   			goto error;
>> @@ -353,7 +359,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>   							     write_compress_disable));
>>   		}
>>   
>> -		r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
>> +		r = amdgpu_copy_buffer(ring, entity, from, to, cur_size, resv,
>>   				       &next, true, copy_flags);
>>   		if (r)
>>   			goto error;
>> @@ -394,7 +400,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>>   	src.offset = 0;
>>   	dst.offset = 0;
>>   
>> -	r = amdgpu_ttm_copy_mem_to_mem(adev, &src, &dst,
>> +	r = amdgpu_ttm_copy_mem_to_mem(adev,
>> +				       &adev->mman.move_entity.base,
>> +				       &src, &dst,
>>   				       new_mem->size,
>>   				       amdgpu_bo_encrypted(abo),
>>   				       bo->base.resv, &fence);
>> @@ -406,8 +414,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>>   	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
>>   		struct dma_fence *wipe_fence = NULL;
>>   
>> -		r = amdgpu_fill_buffer(abo, 0, NULL, &wipe_fence,
>> -				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>> +		r = amdgpu_fill_buffer(&adev->mman.move_entity,
>> +				       abo, 0, NULL, &wipe_fence,
>> +				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>>   		if (r) {
>>   			goto error;
>>   		} else if (wipe_fence) {
>> @@ -2223,16 +2232,15 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>   }
>>   
>>   static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>> +				  struct drm_sched_entity *entity,
>>   				  unsigned int num_dw,
>>   				  struct dma_resv *resv,
>>   				  bool vm_needs_flush,
>>   				  struct amdgpu_job **job,
>> -				  bool delayed, u64 k_job_id)
>> +				  u64 k_job_id)
>>   {
>>   	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
>>   	int r;
>> -	struct drm_sched_entity *entity = delayed ? &adev->mman.clear_entity.base :
>> -						    &adev->mman.move_entity.base;
>>   	r = amdgpu_job_alloc_with_ib(adev, entity,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     num_dw * 4, pool, job, k_job_id);
>> @@ -2252,7 +2260,9 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>   						   DMA_RESV_USAGE_BOOKKEEP);
>>   }
>>   
>> -int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>> +int amdgpu_copy_buffer(struct amdgpu_ring *ring,
>> +		       struct drm_sched_entity *entity,
>> +		       uint64_t src_offset,
>>   		       uint64_t dst_offset, uint32_t byte_count,
>>   		       struct dma_resv *resv,
>>   		       struct dma_fence **fence,
>> @@ -2274,8 +2284,8 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>   	max_bytes = adev->mman.buffer_funcs->copy_max_bytes;
>>   	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
>>   	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
>> -	r = amdgpu_ttm_prepare_job(adev, num_dw,
>> -				   resv, vm_needs_flush, &job, false,
>> +	r = amdgpu_ttm_prepare_job(adev, entity, num_dw,
>> +				   resv, vm_needs_flush, &job,
>>   				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>>   	if (r)
>>   		return r;
>> @@ -2304,11 +2314,13 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>   	return r;
>>   }
>>   
>> -static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>> +static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
>> +			       struct drm_sched_entity *entity,
>> +			       uint32_t src_data,
>>   			       uint64_t dst_addr, uint32_t byte_count,
>>   			       struct dma_resv *resv,
>>   			       struct dma_fence **fence,
>> -			       bool vm_needs_flush, bool delayed,
>> +			       bool vm_needs_flush,
>>   			       u64 k_job_id)
>>   {
>>   	struct amdgpu_device *adev = ring->adev;
>> @@ -2321,8 +2333,8 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>>   	max_bytes = adev->mman.buffer_funcs->fill_max_bytes;
>>   	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
>>   	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
>> -	r = amdgpu_ttm_prepare_job(adev, num_dw, resv, vm_needs_flush,
>> -				   &job, delayed, k_job_id);
>> +	r = amdgpu_ttm_prepare_job(adev, entity, num_dw, resv,
>> +				   vm_needs_flush, &job, k_job_id);
>>   	if (r)
>>   		return r;
>>   
>> @@ -2386,13 +2398,14 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>   		/* Never clear more than 256MiB at once to avoid timeouts */
>>   		size = min(cursor.size, 256ULL << 20);
>>   
>> -		r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &cursor,
>> +		r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity.base,
>> +					  &bo->tbo, bo->tbo.resource, &cursor,
>>   					  1, ring, false, &size, &addr);
>>   		if (r)
>>   			goto err;
>>   
>> -		r = amdgpu_ttm_fill_mem(ring, 0, addr, size, resv,
>> -					&next, true, true,
>> +		r = amdgpu_ttm_fill_mem(ring, &adev->mman.clear_entity.base, 0, addr, size, resv,
>> +					&next, true,
>>   					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>>   		if (r)
>>   			goto err;
>> @@ -2408,12 +2421,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>   	return r;
>>   }
>>   
>> -int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>> -			uint32_t src_data,
>> -			struct dma_resv *resv,
>> -			struct dma_fence **f,
>> -			bool delayed,
>> -			u64 k_job_id)
>> +int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>> +		       struct amdgpu_bo *bo,
>> +		       uint32_t src_data,
>> +		       struct dma_resv *resv,
>> +		       struct dma_fence **f,
>> +		       u64 k_job_id)
>>   {
>>   	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>   	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>> @@ -2437,13 +2450,15 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>   		/* Never fill more than 256MiB at once to avoid timeouts */
>>   		cur_size = min(dst.size, 256ULL << 20);
>>   
>> -		r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &dst,
>> +		r = amdgpu_ttm_map_buffer(&entity->base,
>> +					  &bo->tbo, bo->tbo.resource, &dst,
>>   					  1, ring, false, &cur_size, &to);
>>   		if (r)
>>   			goto error;
>>   
>> -		r = amdgpu_ttm_fill_mem(ring, src_data, to, cur_size, resv,
>> -					&next, true, delayed, k_job_id);
>> +		r = amdgpu_ttm_fill_mem(ring, &entity->base,
>> +					src_data, to, cur_size, resv,
>> +					&next, true, k_job_id);
>>   		if (r)
>>   			goto error;
>>   
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> index d2295d6c2b67..e1655f86a016 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> @@ -167,7 +167,9 @@ int amdgpu_ttm_init(struct amdgpu_device *adev);
>>   void amdgpu_ttm_fini(struct amdgpu_device *adev);
>>   void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
>>   					bool enable);
>> -int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>> +int amdgpu_copy_buffer(struct amdgpu_ring *ring,
>> +		       struct drm_sched_entity *entity,
> 
> If I'm not completely mistaken, you should be able to drop the ring argument since that can be determined from the entity.

OK will do.

Pierre-Eric


> 
> Apart from that looks rather good to me.
> 
> Regards,
> Christian.
> 
>> +		       uint64_t src_offset,
>>   		       uint64_t dst_offset, uint32_t byte_count,
>>   		       struct dma_resv *resv,
>>   		       struct dma_fence **fence,
>> @@ -175,12 +177,12 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>   int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>   			    struct dma_resv *resv,
>>   			    struct dma_fence **fence);
>> -int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>> -			uint32_t src_data,
>> -			struct dma_resv *resv,
>> -			struct dma_fence **fence,
>> -			bool delayed,
>> -			u64 k_job_id);
>> +int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>> +		       struct amdgpu_bo *bo,
>> +		       uint32_t src_data,
>> +		       struct dma_resv *resv,
>> +		       struct dma_fence **f,
>> +		       u64 k_job_id);
>>   
>>   int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
>>   void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> index d74ff6e90590..09756132fa1b 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> @@ -157,7 +157,8 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>>   			goto out_unlock;
>>   		}
>>   
>> -		r = amdgpu_copy_buffer(ring, gart_s, gart_d, size * PAGE_SIZE,
>> +		r = amdgpu_copy_buffer(ring, &entity->base,
>> +				       gart_s, gart_d, size * PAGE_SIZE,
>>   				       NULL, &next, true, 0);
>>   		if (r) {
>>   			dev_err(adev->dev, "fail %d to copy memory\n", r);


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 01/20] drm/amdgpu: give each kernel job a unique id
  2025-11-14 14:36     ` Pierre-Eric Pelloux-Prayer
@ 2025-11-14 14:57       ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-14 14:57 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Pierre-Eric Pelloux-Prayer,
	Alex Deucher, David Airlie, Simona Vetter, Felix Kuehling
  Cc: Arunpravin Paneer Selvam, amd-gfx, dri-devel, linux-kernel



On 11/14/25 15:36, Pierre-Eric Pelloux-Prayer wrote:
> 
> 
> Le 14/11/2025 à 13:26, Christian König a écrit :
>> On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
>>> Userspace jobs have drm_file.client_id as a unique identifier
>>> of the job's owner. For kernel jobs, we can allocate arbitrary
>>> values - the risk of overlap with userspace ids is small (given
>>> that it's a u64 value).
>>> In the unlikely case the overlap happens, it'll only impact
>>> trace events.
>>>
>>> Since this ID is traced in the gpu_scheduler trace events, this
>>> makes it possible to determine the source of each job sent to the hardware.
>>>
>>> To make grepping easier, the IDs are defined as they will appear
>>> in the trace output.
>>>
>>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>>> Acked-by: Alex Deucher <alexander.deucher@amd.com>
>>> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
>>> Link: https://lore.kernel.org/r/20250604122827.2191-1-pierre-eric.pelloux-prayer@amd.com
>>
>> Acked-by: Christian König <christian.koenig@amd.com>
>>
>> You should probably start pushing this patch to amd-staging-drm-next even when not the full patch set is reviewed.
>>
>> We need to get this partially merged through drm-misc-next because of the TTM dependencies anyway.
> 
> I've mentioned in the cover letter that this patch was already merged through drm-misc. I'm including it in the series to avoid conflicts.

Sorry, I missed that. BTW, please base the patches on top of drm-misc-next if possible.

At least the TTM patch needs to go through that branch.

If the others then don't apply cleanly anymore, just send out the TTM change rebased.

Regards,
Christian.

> 
> Pierre-Eric
> 
>>
>> Regards,
>> Christian
>>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c     |  3 ++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c     |  2 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  5 ++--
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.h     | 19 +++++++++++++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c    |  3 ++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c  |  3 ++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     | 28 +++++++++++++--------
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h     |  3 ++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c     |  3 ++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c     |  5 ++--
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c     |  8 +++---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c      |  6 +++--
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h      |  2 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c  |  4 ++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c   |  4 ++-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c | 12 +++++----
>>>   drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c       |  6 +++--
>>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c       |  6 +++--
>>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c    |  3 ++-
>>>   19 files changed, 84 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>>> index 3d24f9cd750a..29c927f4d6df 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
>>> @@ -1549,7 +1549,8 @@ static int amdgpu_gfx_run_cleaner_shader_job(struct amdgpu_ring *ring)
>>>       owner = (void *)(unsigned long)atomic_inc_return(&counter);
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, &entity, owner,
>>> -                     64, 0, &job);
>>> +                     64, 0, &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_CLEANER_SHADER);
>>>       if (r)
>>>           goto err;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>>> index 97b562a79ea8..9dcf51991b5b 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>>> @@ -690,7 +690,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>>>       r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.high_pr,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>>                        16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
>>> -                     &job);
>>> +                     &job, AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB);
>>>       if (r)
>>>           goto error_alloc;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> index 55c7e104d5ca..3457bd649623 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> @@ -234,11 +234,12 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>   int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev,
>>>                    struct drm_sched_entity *entity, void *owner,
>>>                    size_t size, enum amdgpu_ib_pool_type pool_type,
>>> -                 struct amdgpu_job **job)
>>> +                 struct amdgpu_job **job, u64 k_job_id)
>>>   {
>>>       int r;
>>>   -    r = amdgpu_job_alloc(adev, NULL, entity, owner, 1, job, 0);
>>> +    r = amdgpu_job_alloc(adev, NULL, entity, owner, 1, job,
>>> +                 k_job_id);
>>>       if (r)
>>>           return r;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> index d25f1fcf0242..7abf069d17d4 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
>>> @@ -44,6 +44,22 @@
>>>   struct amdgpu_fence;
>>>   enum amdgpu_ib_pool_type;
>>>   +/* Internal kernel job ids. (decreasing values, starting from U64_MAX). */
>>> +#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE              (18446744073709551615ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE_PDES         (18446744073709551614ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_VM_UPDATE_RANGE        (18446744073709551613ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_VM_PT_CLEAR            (18446744073709551612ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_TTM_MAP_BUFFER         (18446744073709551611ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_TTM_ACCESS_MEMORY_SDMA (18446744073709551610ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER        (18446744073709551609ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE       (18446744073709551608ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_MOVE_BLIT              (18446744073709551607ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER       (18446744073709551606ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_CLEANER_SHADER         (18446744073709551605ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB          (18446744073709551604ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_KFD_GART_MAP           (18446744073709551603ULL)
>>> +#define AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST          (18446744073709551602ULL)
>>> +
>>>   struct amdgpu_job {
>>>       struct drm_sched_job    base;
>>>       struct amdgpu_vm    *vm;
>>> @@ -97,7 +113,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>   int amdgpu_job_alloc_with_ib(struct amdgpu_device *adev,
>>>                    struct drm_sched_entity *entity, void *owner,
>>>                    size_t size, enum amdgpu_ib_pool_type pool_type,
>>> -                 struct amdgpu_job **job);
>>> +                 struct amdgpu_job **job,
>>> +                 u64 k_job_id);
>>>   void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
>>>                     struct amdgpu_bo *gws, struct amdgpu_bo *oa);
>>>   void amdgpu_job_free_resources(struct amdgpu_job *job);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
>>> index 91678621f1ff..63ee6ba6a931 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_jpeg.c
>>> @@ -196,7 +196,8 @@ static int amdgpu_jpeg_dec_set_reg(struct amdgpu_ring *ring, uint32_t handle,
>>>       int i, r;
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>>> -                     AMDGPU_IB_POOL_DIRECT, &job);
>>> +                     AMDGPU_IB_POOL_DIRECT, &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> index fe486988a738..e08f58de4b17 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> @@ -1321,7 +1321,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>>>       if (r)
>>>           goto out;
>>>   -    r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true);
>>> +    r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true,
>>> +                   AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>>>       if (WARN_ON(r))
>>>           goto out;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> index e226c3aff7d7..326476089db3 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> @@ -227,7 +227,8 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>>       r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>>                        num_dw * 4 + num_bytes,
>>> -                     AMDGPU_IB_POOL_DELAYED, &job);
>>> +                     AMDGPU_IB_POOL_DELAYED, &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_TTM_MAP_BUFFER);
>>>       if (r)
>>>           return r;
>>>   @@ -406,7 +407,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>>>           struct dma_fence *wipe_fence = NULL;
>>>             r = amdgpu_fill_buffer(abo, 0, NULL, &wipe_fence,
>>> -                       false);
>>> +                       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>>>           if (r) {
>>>               goto error;
>>>           } else if (wipe_fence) {
>>> @@ -1488,7 +1489,8 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>>>       r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>>                        num_dw * 4, AMDGPU_IB_POOL_DELAYED,
>>> -                     &job);
>>> +                     &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_TTM_ACCESS_MEMORY_SDMA);
>>>       if (r)
>>>           goto out;
>>>   @@ -2212,7 +2214,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>>                     struct dma_resv *resv,
>>>                     bool vm_needs_flush,
>>>                     struct amdgpu_job **job,
>>> -                  bool delayed)
>>> +                  bool delayed, u64 k_job_id)
>>>   {
>>>       enum amdgpu_ib_pool_type pool = direct_submit ?
>>>           AMDGPU_IB_POOL_DIRECT :
>>> @@ -2222,7 +2224,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>>                               &adev->mman.high_pr;
>>>       r = amdgpu_job_alloc_with_ib(adev, entity,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>> -                     num_dw * 4, pool, job);
>>> +                     num_dw * 4, pool, job, k_job_id);
>>>       if (r)
>>>           return r;
>>>   @@ -2262,7 +2264,8 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>>       num_loops = DIV_ROUND_UP(byte_count, max_bytes);
>>>       num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
>>>       r = amdgpu_ttm_prepare_job(adev, direct_submit, num_dw,
>>> -                   resv, vm_needs_flush, &job, false);
>>> +                   resv, vm_needs_flush, &job, false,
>>> +                   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>>>       if (r)
>>>           return r;
>>>   @@ -2297,7 +2300,8 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>>>                      uint64_t dst_addr, uint32_t byte_count,
>>>                      struct dma_resv *resv,
>>>                      struct dma_fence **fence,
>>> -                   bool vm_needs_flush, bool delayed)
>>> +                   bool vm_needs_flush, bool delayed,
>>> +                   u64 k_job_id)
>>>   {
>>>       struct amdgpu_device *adev = ring->adev;
>>>       unsigned int num_loops, num_dw;
>>> @@ -2310,7 +2314,7 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>>>       num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
>>>       num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
>>>       r = amdgpu_ttm_prepare_job(adev, false, num_dw, resv, vm_needs_flush,
>>> -                   &job, delayed);
>>> +                   &job, delayed, k_job_id);
>>>       if (r)
>>>           return r;
>>>   @@ -2380,7 +2384,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>>               goto err;
>>>             r = amdgpu_ttm_fill_mem(ring, 0, addr, size, resv,
>>> -                    &next, true, true);
>>> +                    &next, true, true,
>>> +                    AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>>>           if (r)
>>>               goto err;
>>>   @@ -2399,7 +2404,8 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>>               uint32_t src_data,
>>>               struct dma_resv *resv,
>>>               struct dma_fence **f,
>>> -            bool delayed)
>>> +            bool delayed,
>>> +            u64 k_job_id)
>>>   {
>>>       struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>>       struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>> @@ -2429,7 +2435,7 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>>               goto error;
>>>             r = amdgpu_ttm_fill_mem(ring, src_data, to, cur_size, resv,
>>> -                    &next, true, delayed);
>>> +                    &next, true, delayed, k_job_id);
>>>           if (r)
>>>               goto error;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>>> index 054d48823d5f..577ee04ce0bf 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>>> @@ -175,7 +175,8 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>>               uint32_t src_data,
>>>               struct dma_resv *resv,
>>>               struct dma_fence **fence,
>>> -            bool delayed);
>>> +            bool delayed,
>>> +            u64 k_job_id);
>>>     int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
>>>   void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>> index 74758b5ffc6c..5c38f0d30c87 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>>> @@ -1136,7 +1136,8 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
>>>       r = amdgpu_job_alloc_with_ib(ring->adev, &adev->uvd.entity,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>>                        64, direct ? AMDGPU_IB_POOL_DIRECT :
>>> -                     AMDGPU_IB_POOL_DELAYED, &job);
>>> +                     AMDGPU_IB_POOL_DELAYED, &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
>>> index b9060bcd4806..ce318f5de047 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
>>> @@ -449,7 +449,7 @@ static int amdgpu_vce_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
>>>       r = amdgpu_job_alloc_with_ib(ring->adev, &ring->adev->vce.entity,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>>                        ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
>>> -                     &job);
>>> +                     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   @@ -540,7 +540,8 @@ static int amdgpu_vce_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>>                        ib_size_dw * 4,
>>>                        direct ? AMDGPU_IB_POOL_DIRECT :
>>> -                     AMDGPU_IB_POOL_DELAYED, &job);
>>> +                     AMDGPU_IB_POOL_DELAYED, &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>> index 5ae7cc0d5f57..5e0786ea911b 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>> @@ -626,7 +626,7 @@ static int amdgpu_vcn_dec_send_msg(struct amdgpu_ring *ring,
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>>>                        64, AMDGPU_IB_POOL_DIRECT,
>>> -                     &job);
>>> +                     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           goto err;
>>>   @@ -806,7 +806,7 @@ static int amdgpu_vcn_dec_sw_send_msg(struct amdgpu_ring *ring,
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>>>                        ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
>>> -                     &job);
>>> +                     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           goto err;
>>>   @@ -936,7 +936,7 @@ static int amdgpu_vcn_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t hand
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>>>                        ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
>>> -                     &job);
>>> +                     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   @@ -1003,7 +1003,7 @@ static int amdgpu_vcn_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t han
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL,
>>>                        ib_size_dw * 4, AMDGPU_IB_POOL_DIRECT,
>>> -                     &job);
>>> +                     &job, AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index db66b4232de0..2f8e83f840a8 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -983,7 +983,8 @@ int amdgpu_vm_update_pdes(struct amdgpu_device *adev,
>>>       params.vm = vm;
>>>       params.immediate = immediate;
>>>   -    r = vm->update_funcs->prepare(&params, NULL);
>>> +    r = vm->update_funcs->prepare(&params, NULL,
>>> +                      AMDGPU_KERNEL_JOB_ID_VM_UPDATE_PDES);
>>>       if (r)
>>>           goto error;
>>>   @@ -1152,7 +1153,8 @@ int amdgpu_vm_update_range(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>           dma_fence_put(tmp);
>>>       }
>>>   -    r = vm->update_funcs->prepare(&params, sync);
>>> +    r = vm->update_funcs->prepare(&params, sync,
>>> +                      AMDGPU_KERNEL_JOB_ID_VM_UPDATE_RANGE);
>>>       if (r)
>>>           goto error_free;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> index 77207f4e448e..cf0ec94e8a07 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>> @@ -308,7 +308,7 @@ struct amdgpu_vm_update_params {
>>>   struct amdgpu_vm_update_funcs {
>>>       int (*map_table)(struct amdgpu_bo_vm *bo);
>>>       int (*prepare)(struct amdgpu_vm_update_params *p,
>>> -               struct amdgpu_sync *sync);
>>> +               struct amdgpu_sync *sync, u64 k_job_id);
>>>       int (*update)(struct amdgpu_vm_update_params *p,
>>>                 struct amdgpu_bo_vm *bo, uint64_t pe, uint64_t addr,
>>>                 unsigned count, uint32_t incr, uint64_t flags);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
>>> index 0c1ef5850a5e..22e2e5b47341 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_cpu.c
>>> @@ -40,12 +40,14 @@ static int amdgpu_vm_cpu_map_table(struct amdgpu_bo_vm *table)
>>>    *
>>>    * @p: see amdgpu_vm_update_params definition
>>>    * @sync: sync obj with fences to wait on
>>> + * @k_job_id: the id for tracing/debug purposes
>>>    *
>>>    * Returns:
>>>    * Negativ errno, 0 for success.
>>>    */
>>>   static int amdgpu_vm_cpu_prepare(struct amdgpu_vm_update_params *p,
>>> -                 struct amdgpu_sync *sync)
>>> +                 struct amdgpu_sync *sync,
>>> +                 u64 k_job_id)
>>>   {
>>>       if (!sync)
>>>           return 0;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
>>> index 30022123b0bf..f794fb1cc06e 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
>>> @@ -26,6 +26,7 @@
>>>   #include "amdgpu.h"
>>>   #include "amdgpu_trace.h"
>>>   #include "amdgpu_vm.h"
>>> +#include "amdgpu_job.h"
>>>     /*
>>>    * amdgpu_vm_pt_cursor - state for for_each_amdgpu_vm_pt
>>> @@ -395,7 +396,8 @@ int amdgpu_vm_pt_clear(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>       params.vm = vm;
>>>       params.immediate = immediate;
>>>   -    r = vm->update_funcs->prepare(&params, NULL);
>>> +    r = vm->update_funcs->prepare(&params, NULL,
>>> +                      AMDGPU_KERNEL_JOB_ID_VM_PT_CLEAR);
>>>       if (r)
>>>           goto exit;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
>>> index 46d9fb433ab2..36805dcfa159 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_sdma.c
>>> @@ -40,7 +40,7 @@ static int amdgpu_vm_sdma_map_table(struct amdgpu_bo_vm *table)
>>>     /* Allocate a new job for @count PTE updates */
>>>   static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
>>> -                    unsigned int count)
>>> +                    unsigned int count, u64 k_job_id)
>>>   {
>>>       enum amdgpu_ib_pool_type pool = p->immediate ? AMDGPU_IB_POOL_IMMEDIATE
>>>           : AMDGPU_IB_POOL_DELAYED;
>>> @@ -56,7 +56,7 @@ static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
>>>       ndw = min(ndw, AMDGPU_VM_SDMA_MAX_NUM_DW);
>>>         r = amdgpu_job_alloc_with_ib(p->adev, entity, AMDGPU_FENCE_OWNER_VM,
>>> -                     ndw * 4, pool, &p->job);
>>> +                     ndw * 4, pool, &p->job, k_job_id);
>>>       if (r)
>>>           return r;
>>>   @@ -69,16 +69,17 @@ static int amdgpu_vm_sdma_alloc_job(struct amdgpu_vm_update_params *p,
>>>    *
>>>    * @p: see amdgpu_vm_update_params definition
>>>    * @sync: amdgpu_sync object with fences to wait for
>>> + * @k_job_id: identifier of the job, for tracing purpose
>>>    *
>>>    * Returns:
>>>    * Negativ errno, 0 for success.
>>>    */
>>>   static int amdgpu_vm_sdma_prepare(struct amdgpu_vm_update_params *p,
>>> -                  struct amdgpu_sync *sync)
>>> +                  struct amdgpu_sync *sync, u64 k_job_id)
>>>   {
>>>       int r;
>>>   -    r = amdgpu_vm_sdma_alloc_job(p, 0);
>>> +    r = amdgpu_vm_sdma_alloc_job(p, 0, k_job_id);
>>>       if (r)
>>>           return r;
>>>   @@ -249,7 +250,8 @@ static int amdgpu_vm_sdma_update(struct amdgpu_vm_update_params *p,
>>>               if (r)
>>>                   return r;
>>>   -            r = amdgpu_vm_sdma_alloc_job(p, count);
>>> +            r = amdgpu_vm_sdma_alloc_job(p, count,
>>> +                             AMDGPU_KERNEL_JOB_ID_VM_UPDATE);
>>>               if (r)
>>>                   return r;
>>>           }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>>> index 1c07b701d0e4..ceb94bbb03a4 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>>> @@ -217,7 +217,8 @@ static int uvd_v6_0_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle
>>>       int i, r;
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>>> -                     AMDGPU_IB_POOL_DIRECT, &job);
>>> +                     AMDGPU_IB_POOL_DIRECT, &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   @@ -281,7 +282,8 @@ static int uvd_v6_0_enc_get_destroy_msg(struct amdgpu_ring *ring,
>>>       int i, r;
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>>> -                     AMDGPU_IB_POOL_DIRECT, &job);
>>> +                     AMDGPU_IB_POOL_DIRECT, &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>> index 9d237b5937fb..1f8866f3f63c 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>> @@ -225,7 +225,8 @@ static int uvd_v7_0_enc_get_create_msg(struct amdgpu_ring *ring, u32 handle,
>>>       int i, r;
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>>> -                     AMDGPU_IB_POOL_DIRECT, &job);
>>> +                     AMDGPU_IB_POOL_DIRECT, &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   @@ -288,7 +289,8 @@ static int uvd_v7_0_enc_get_destroy_msg(struct amdgpu_ring *ring, u32 handle,
>>>       int i, r;
>>>         r = amdgpu_job_alloc_with_ib(ring->adev, NULL, NULL, ib_size_dw * 4,
>>> -                     AMDGPU_IB_POOL_DIRECT, &job);
>>> +                     AMDGPU_IB_POOL_DIRECT, &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_VCN_RING_TEST);
>>>       if (r)
>>>           return r;
>>>   diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>> index 3653c563ee9a..46c84fc60af1 100644
>>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>> @@ -67,7 +67,8 @@ svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>>                        num_dw * 4 + num_bytes,
>>>                        AMDGPU_IB_POOL_DELAYED,
>>> -                     &job);
>>> +                     &job,
>>> +                     AMDGPU_KERNEL_JOB_ID_KFD_GART_MAP);
>>>       if (r)
>>>           return r;
>>>   


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 06/20] drm/amdgpu: statically assign gart windows to ttm entities
  2025-11-13 16:05 ` [PATCH v2 06/20] drm/amdgpu: statically assign gart windows to ttm entities Pierre-Eric Pelloux-Prayer
@ 2025-11-14 15:15   ` Christian König
  2025-11-14 20:24   ` Felix Kuehling
  1 sibling, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-14 15:15 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Felix Kuehling
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> If multiple entities share the same window, we must make sure
> that jobs using them are executed sequentially.
> 
> This commit gives a separate window id to each entity, so jobs
> from multiple entities can execute in parallel if needed.
> (For now they all use the first SDMA engine, so it makes no
> difference yet.)
> 
> default_entity doesn't get any windows reserved since there is
> no use for them.
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c  |  9 +++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 50 ++++++++++++++----------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  |  9 +++--
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  8 ++--
>  4 files changed, 46 insertions(+), 30 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index 8e2d41c9c271..2a444d02cf4b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -686,7 +686,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  	 * translation. Avoid this by doing the invalidation from the SDMA
>  	 * itself at least for GART.
>  	 */
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&adev->mman.clear_entity.gart_window_lock);
> +	mutex_lock(&adev->mman.move_entity.gart_window_lock);

That looks strange; we want to use the default entity here, not anything else. Why are the other ones locked?

>  	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
> @@ -699,7 +700,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  	job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop;
>  	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>  	fence = amdgpu_job_submit(job);
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
> +	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
>  
>  	dma_fence_wait(fence, false);
>  	dma_fence_put(fence);
> @@ -707,7 +709,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  	return;
>  
>  error_alloc:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
> +	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
>  	dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index c8d59ca2b3bd..7193a341689d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -291,7 +291,7 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>   */
>  __attribute__((nonnull))
>  static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
> -				      struct drm_sched_entity *entity,
> +				      struct amdgpu_ttm_buffer_entity *entity,

That could already have been part of the patch which adds the entity as a parameter.

In general we should probably always use amdgpu_ttm_buffer_entity as the parameter type inside amdgpu_ttm.c.

>  				      const struct amdgpu_copy_mem *src,
>  				      const struct amdgpu_copy_mem *dst,
>  				      uint64_t size, bool tmz,
> @@ -314,7 +314,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  	amdgpu_res_first(src->mem, src->offset, size, &src_mm);
>  	amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
>  
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&entity->gart_window_lock);
>  	while (src_mm.remaining) {
>  		uint64_t from, to, cur_size, tiling_flags;
>  		uint32_t num_type, data_format, max_com, write_compress_disable;
> @@ -324,15 +324,15 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  		cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
>  
>  		/* Map src to window 0 and dst to window 1. */
> -		r = amdgpu_ttm_map_buffer(entity,
> +		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  src->bo, src->mem, &src_mm,
> -					  0, ring, tmz, &cur_size, &from);
> +					  entity->gart_window_id0, ring, tmz, &cur_size, &from);
>  		if (r)
>  			goto error;
>  
> -		r = amdgpu_ttm_map_buffer(entity,
> +		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  dst->bo, dst->mem, &dst_mm,
> -					  1, ring, tmz, &cur_size, &to);
> +					  entity->gart_window_id1, ring, tmz, &cur_size, &to);
>  		if (r)
>  			goto error;
>  
> @@ -359,7 +359,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  							     write_compress_disable));
>  		}
>  
> -		r = amdgpu_copy_buffer(ring, entity, from, to, cur_size, resv,
> +		r = amdgpu_copy_buffer(ring, &entity->base, from, to, cur_size, resv,
>  				       &next, true, copy_flags);
>  		if (r)
>  			goto error;
> @@ -371,7 +371,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  		amdgpu_res_next(&dst_mm, cur_size);
>  	}
>  error:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&entity->gart_window_lock);
>  	*f = fence;
>  	return r;
>  }
> @@ -401,7 +401,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>  	dst.offset = 0;
>  
>  	r = amdgpu_ttm_copy_mem_to_mem(adev,
> -				       &adev->mman.move_entity.base,
> +				       &adev->mman.move_entity,
>  				       &src, &dst,
>  				       new_mem->size,
>  				       amdgpu_bo_encrypted(abo),
> @@ -1893,8 +1893,6 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
>  	uint64_t gtt_size;
>  	int r;
>  
> -	mutex_init(&adev->mman.gtt_window_lock);
> -
>  	dma_set_max_seg_size(adev->dev, UINT_MAX);
>  	/* No others user of address space so set it to 0 */
>  	r = ttm_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev,
> @@ -2207,6 +2205,15 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  			drm_sched_entity_destroy(&adev->mman.clear_entity.base);
>  			goto error_free_entity;
>  		}
> +
> +		/* Statically assign GART windows to each entity. */
> +		mutex_init(&adev->mman.default_entity.gart_window_lock);
> +		adev->mman.move_entity.gart_window_id0 = 0;
> +		adev->mman.move_entity.gart_window_id1 = 1;
> +		mutex_init(&adev->mman.move_entity.gart_window_lock);
> +		/* Clearing entity doesn't use id0 */
> +		adev->mman.clear_entity.gart_window_id1 = 2;
> +		mutex_init(&adev->mman.clear_entity.gart_window_lock);

I thought for a moment that you init the same mutex twice.

Maybe add an amdgpu_ttm_buffer_entity_init() function?
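
Something along these lines, perhaps (an untested sketch, not the actual driver code; the struct mutex stand-in below wraps pthreads only so the snippet is self-contained outside the kernel):

```c
#include <pthread.h>

typedef unsigned int u32;

/* Stand-in for the kernel's struct mutex / mutex_init(), for illustration only. */
struct mutex { pthread_mutex_t m; };
static void mutex_init(struct mutex *lock)
{
	pthread_mutex_init(&lock->m, NULL);
}

struct amdgpu_ttm_buffer_entity {
	struct mutex gart_window_lock;
	u32 gart_window_id0;
	u32 gart_window_id1;
};

/* One helper centralizes the per-entity setup instead of open-coding
 * the mutex_init() plus window-id assignments three times in
 * amdgpu_ttm_set_buffer_funcs_status(). */
static void amdgpu_ttm_buffer_entity_init(struct amdgpu_ttm_buffer_entity *entity,
					  u32 id0, u32 id1)
{
	mutex_init(&entity->gart_window_lock);
	entity->gart_window_id0 = id0;
	entity->gart_window_id1 = id1;
}
```

Then the caller would do amdgpu_ttm_buffer_entity_init(&adev->mman.move_entity, 0, 1) and so on, which also makes it obvious at a glance which entity owns which windows.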

>  	} else {
>  		drm_sched_entity_destroy(&adev->mman.default_entity.base);
>  		drm_sched_entity_destroy(&adev->mman.clear_entity.base);
> @@ -2371,6 +2378,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  {
>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>  	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ttm_buffer_entity *entity;
>  	struct amdgpu_res_cursor cursor;
>  	u64 addr;
>  	int r = 0;
> @@ -2381,11 +2389,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  	if (!fence)
>  		return -EINVAL;
>  
> +	entity = &adev->mman.clear_entity;
>  	*fence = dma_fence_get_stub();
>  
>  	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
>  
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&entity->gart_window_lock);
>  	while (cursor.remaining) {
>  		struct dma_fence *next = NULL;
>  		u64 size;
> @@ -2398,13 +2407,13 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  		/* Never clear more than 256MiB at once to avoid timeouts */
>  		size = min(cursor.size, 256ULL << 20);
>  
> -		r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity.base,
> +		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  &bo->tbo, bo->tbo.resource, &cursor,
> -					  1, ring, false, &size, &addr);
> +					  entity->gart_window_id1, ring, false, &size, &addr);
>  		if (r)
>  			goto err;
>  
> -		r = amdgpu_ttm_fill_mem(ring, &adev->mman.clear_entity.base, 0, addr, size, resv,
> +		r = amdgpu_ttm_fill_mem(ring, &entity->base, 0, addr, size, resv,
>  					&next, true,
>  					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>  		if (r)
> @@ -2416,12 +2425,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  		amdgpu_res_next(&cursor, size);
>  	}
>  err:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&entity->gart_window_lock);
>  
>  	return r;
>  }
>  
> -int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
> +int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  		       struct amdgpu_bo *bo,
>  		       uint32_t src_data,
>  		       struct dma_resv *resv,
> @@ -2442,7 +2451,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>  
>  	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &dst);
>  
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&entity->gart_window_lock);
>  	while (dst.remaining) {
>  		struct dma_fence *next;
>  		uint64_t cur_size, to;
> @@ -2452,7 +2461,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>  
>  		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  &bo->tbo, bo->tbo.resource, &dst,
> -					  1, ring, false, &cur_size, &to);
> +					  entity->gart_window_id1, ring, false,
> +					  &cur_size, &to);
>  		if (r)
>  			goto error;
>  
> @@ -2468,7 +2478,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>  		amdgpu_res_next(&dst, cur_size);
>  	}
>  error:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&entity->gart_window_lock);
>  	if (f)
>  		*f = dma_fence_get(fence);
>  	dma_fence_put(fence);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index e1655f86a016..f4f762be9fdd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -39,7 +39,7 @@
>  #define __AMDGPU_PL_NUM	(TTM_PL_PRIV + 6)
>  
>  #define AMDGPU_GTT_MAX_TRANSFER_SIZE	512
> -#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS	2
> +#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS	3
>  
>  extern const struct attribute_group amdgpu_vram_mgr_attr_group;
>  extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
> @@ -54,6 +54,9 @@ struct amdgpu_gtt_mgr {
>  
>  struct amdgpu_ttm_buffer_entity {
>  	struct drm_sched_entity base;
> +	struct mutex		gart_window_lock;

Don't call that gart_window_lock. Essentially we are protecting the entity from concurrent access. Just "lock" should probably do it.

> +	u32			gart_window_id0;
> +	u32			gart_window_id1;
>  };
>  
>  struct amdgpu_mman {
> @@ -69,7 +72,7 @@ struct amdgpu_mman {
>  
>  	struct mutex				gtt_window_lock;
>  
> -	struct amdgpu_ttm_buffer_entity default_entity;
> +	struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
>  	struct amdgpu_ttm_buffer_entity clear_entity;
>  	struct amdgpu_ttm_buffer_entity move_entity;
>  
> @@ -177,7 +180,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring,
>  int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  			    struct dma_resv *resv,
>  			    struct dma_fence **fence);
> -int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
> +int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  		       struct amdgpu_bo *bo,
>  		       uint32_t src_data,
>  		       struct dma_resv *resv,
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 09756132fa1b..bc47fc362a17 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -60,7 +60,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
>  	int r;
>  
>  	/* use gart window 0 */
> -	*gart_addr = adev->gmc.gart_start;
> +	*gart_addr = entity->gart_window_id0;

The comment above should probably be removed.

Regards,
Christian.

>  
>  	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>  	num_bytes = npages * 8;
> @@ -116,7 +116,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
>   * multiple GTT_MAX_PAGES transfer, all sdma operations are serialized, wait for
>   * the last sdma finish fence which is returned to check copy memory is done.
>   *
> - * Context: Process context, takes and releases gtt_window_lock
> + * Context: Process context, takes and releases gart_window_lock
>   *
>   * Return:
>   * 0 - OK, otherwise error code
> @@ -138,7 +138,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>  
>  	entity = &adev->mman.move_entity;
>  
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&entity->gart_window_lock);
>  
>  	while (npages) {
>  		size = min(GTT_MAX_PAGES, npages);
> @@ -175,7 +175,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>  	}
>  
>  out_unlock:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&entity->gart_window_lock);
>  
>  	return r;
>  }


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 04/20] drm/amdgpu: introduce amdgpu_ttm_buffer_entity
  2025-11-14 12:57   ` Christian König
@ 2025-11-14 20:18     ` Felix Kuehling
  0 siblings, 0 replies; 53+ messages in thread
From: Felix Kuehling @ 2025-11-14 20:18 UTC (permalink / raw)
  To: Christian König, Pierre-Eric Pelloux-Prayer, Alex Deucher,
	David Airlie, Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel

On 2025-11-14 07:57, Christian König wrote:
> On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
>> No functional change for now, but this struct will have more
>> fields added in the next commit.
>>
>> Technically the change introduces a synchronisation issue, because
>> dependencies between successive jobs are not taken care of
>> properly. For instance, amdgpu_ttm_clear_buffer uses
>> amdgpu_ttm_map_buffer then amdgpu_ttm_fill_mem which use
>> different entities (default_entity then move/clear entity).
>> But it's all working as expected, because all entities use the
>> same sdma instance for now and default_entity has a higher prio
>> so its job always gets scheduled first.
>>
>> The next commits will deal with these dependencies correctly.
>>
>> ---
>> v2: renamed amdgpu_ttm_buffer_entity
>> ---
>>
>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> Reviewed-by: Christian König <christian.koenig@amd.com>
>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c  |  2 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 30 +++++++++++++++++-------
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  | 12 ++++++----
>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 13 ++++++----
>>   4 files changed, 39 insertions(+), 18 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> index 9dcf51991b5b..8e2d41c9c271 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> @@ -687,7 +687,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>>   	 * itself at least for GART.
>>   	 */
>>   	mutex_lock(&adev->mman.gtt_window_lock);
>> -	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.high_pr,
>> +	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
>>   				     &job, AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> index c985f57fa227..42d448cd6a6d 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> @@ -224,7 +224,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>   	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>>   	num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
>>   
>> -	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
>> +	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     num_dw * 4 + num_bytes,
>>   				     AMDGPU_IB_POOL_DELAYED, &job,
>> @@ -1486,7 +1486,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>>   		memcpy(adev->mman.sdma_access_ptr, buf, len);
>>   
>>   	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>> -	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
>> +	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     num_dw * 4, AMDGPU_IB_POOL_DELAYED,
>>   				     &job,
>> @@ -2168,7 +2168,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>   
>>   		ring = adev->mman.buffer_funcs_ring;
>>   		sched = &ring->sched;
>> -		r = drm_sched_entity_init(&adev->mman.high_pr,
>> +		r = drm_sched_entity_init(&adev->mman.default_entity.base,
>>   					  DRM_SCHED_PRIORITY_KERNEL, &sched,
>>   					  1, NULL);
>>   		if (r) {
>> @@ -2178,18 +2178,30 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>   			return;
>>   		}
>>   
>> -		r = drm_sched_entity_init(&adev->mman.low_pr,
>> +		r = drm_sched_entity_init(&adev->mman.clear_entity.base,
>> +					  DRM_SCHED_PRIORITY_NORMAL, &sched,
>> +					  1, NULL);
>> +		if (r) {
>> +			dev_err(adev->dev,
>> +				"Failed setting up TTM BO clear entity (%d)\n",
>> +				r);
>> +			goto error_free_entity;
>> +		}
>> +
>> +		r = drm_sched_entity_init(&adev->mman.move_entity.base,
>>   					  DRM_SCHED_PRIORITY_NORMAL, &sched,
>>   					  1, NULL);
>>   		if (r) {
>>   			dev_err(adev->dev,
>>   				"Failed setting up TTM BO move entity (%d)\n",
>>   				r);
>> +			drm_sched_entity_destroy(&adev->mman.clear_entity.base);
>>   			goto error_free_entity;
>>   		}
>>   	} else {
>> -		drm_sched_entity_destroy(&adev->mman.high_pr);
>> -		drm_sched_entity_destroy(&adev->mman.low_pr);
>> +		drm_sched_entity_destroy(&adev->mman.default_entity.base);
>> +		drm_sched_entity_destroy(&adev->mman.clear_entity.base);
>> +		drm_sched_entity_destroy(&adev->mman.move_entity.base);
>>   		for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
>>   			dma_fence_put(man->eviction_fences[i]);
>>   			man->eviction_fences[i] = NULL;
>> @@ -2207,7 +2219,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>   	return;
>>   
>>   error_free_entity:
>> -	drm_sched_entity_destroy(&adev->mman.high_pr);
>> +	drm_sched_entity_destroy(&adev->mman.default_entity.base);
>>   }
>>   
>>   static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>> @@ -2219,8 +2231,8 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>   {
>>   	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
>>   	int r;
>> -	struct drm_sched_entity *entity = delayed ? &adev->mman.low_pr :
>> -						    &adev->mman.high_pr;
>> +	struct drm_sched_entity *entity = delayed ? &adev->mman.clear_entity.base :
>> +						    &adev->mman.move_entity.base;
>>   	r = amdgpu_job_alloc_with_ib(adev, entity,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     num_dw * 4, pool, job, k_job_id);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> index 50e40380fe95..d2295d6c2b67 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> @@ -52,6 +52,10 @@ struct amdgpu_gtt_mgr {
>>   	spinlock_t lock;
>>   };
>>   
>> +struct amdgpu_ttm_buffer_entity {
>> +	struct drm_sched_entity base;
>> +};
>> +
>>   struct amdgpu_mman {
>>   	struct ttm_device		bdev;
>>   	struct ttm_pool			*ttm_pools;
>> @@ -64,10 +68,10 @@ struct amdgpu_mman {
>>   	bool					buffer_funcs_enabled;
>>   
>>   	struct mutex				gtt_window_lock;
>> -	/* High priority scheduler entity for buffer moves */
>> -	struct drm_sched_entity			high_pr;
>> -	/* Low priority scheduler entity for VRAM clearing */
>> -	struct drm_sched_entity			low_pr;
>> +
>> +	struct amdgpu_ttm_buffer_entity default_entity;
>> +	struct amdgpu_ttm_buffer_entity clear_entity;
>> +	struct amdgpu_ttm_buffer_entity move_entity;
>>   
>>   	struct amdgpu_vram_mgr vram_mgr;
>>   	struct amdgpu_gtt_mgr gtt_mgr;
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> index 378af0b2aaa9..d74ff6e90590 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> @@ -45,7 +45,9 @@ svm_migrate_direct_mapping_addr(struct amdgpu_device *adev, u64 addr)
>>   }
>>   
>>   static int
>> -svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
>> +svm_migrate_gart_map(struct amdgpu_ring *ring,
>> +		     struct amdgpu_ttm_buffer_entity *entity,

Do we still need the ring parameter, or is that implied by the entity?
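
(For reference, the ring should be recoverable from the entity, since to_amdgpu_ring() is just a container_of() on the entity's scheduler. A minimal user-space sketch of that relationship; the struct layouts are stand-ins, the real kernel structures carry many more fields, and whether entity->rq is always set here is an assumption worth checking:)

```c
#include <stddef.h>

/* Stand-in for the kernel's container_of() helper. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-ins for the DRM scheduler structures involved. */
struct drm_gpu_scheduler { int dummy; };
struct drm_sched_rq { struct drm_gpu_scheduler *sched; };
struct drm_sched_entity { struct drm_sched_rq *rq; };

/* The scheduler is embedded in the ring, so the ring can be
 * recovered from the entity's current run queue. */
struct amdgpu_ring {
	int idx;
	struct drm_gpu_scheduler sched;
};

/* Mirrors the kernel's to_amdgpu_ring() macro from amdgpu_ring.h. */
#define to_amdgpu_ring(s) container_of((s), struct amdgpu_ring, sched)
```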

Other than that, the patch is

Acked-by: Felix Kuehling <felix.kuehling@amd.com>


>> +		     u64 npages,
>>   		     dma_addr_t *addr, u64 *gart_addr, u64 flags)
>>   {
>>   	struct amdgpu_device *adev = ring->adev;
>> @@ -63,7 +65,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
>>   	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>>   	num_bytes = npages * 8;
>>   
>> -	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
>> +	r = amdgpu_job_alloc_with_ib(adev, &entity->base,
>>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>   				     num_dw * 4 + num_bytes,
>>   				     AMDGPU_IB_POOL_DELAYED,
>> @@ -128,11 +130,14 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>>   {
>>   	const u64 GTT_MAX_PAGES = AMDGPU_GTT_MAX_TRANSFER_SIZE;
>>   	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>> +	struct amdgpu_ttm_buffer_entity *entity;
>>   	u64 gart_s, gart_d;
>>   	struct dma_fence *next;
>>   	u64 size;
>>   	int r;
>>   
>> +	entity = &adev->mman.move_entity;
>> +
>>   	mutex_lock(&adev->mman.gtt_window_lock);
>>   
>>   	while (npages) {
>> @@ -140,10 +145,10 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>>   
>>   		if (direction == FROM_VRAM_TO_RAM) {
>>   			gart_s = svm_migrate_direct_mapping_addr(adev, *vram);
>> -			r = svm_migrate_gart_map(ring, size, sys, &gart_d, 0);
>> +			r = svm_migrate_gart_map(ring, entity, size, sys, &gart_d, 0);
>>   
>>   		} else if (direction == FROM_RAM_TO_VRAM) {
>> -			r = svm_migrate_gart_map(ring, size, sys, &gart_s,
>> +			r = svm_migrate_gart_map(ring, entity, size, sys, &gart_s,
>>   						 KFD_IOCTL_SVM_FLAG_GPU_RO);
>>   			gart_d = svm_migrate_direct_mapping_addr(adev, *vram);
>>   		}

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions
  2025-11-13 16:05 ` [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions Pierre-Eric Pelloux-Prayer
  2025-11-14 13:07   ` Christian König
@ 2025-11-14 20:20   ` Felix Kuehling
  1 sibling, 0 replies; 53+ messages in thread
From: Felix Kuehling @ 2025-11-14 20:20 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, Christian König,
	David Airlie, Simona Vetter, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig


On 2025-11-13 11:05, Pierre-Eric Pelloux-Prayer wrote:
> This way the caller can select the one it wants to use.
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

I agree with Christian's comment to eliminate the ring parameter where 
it's implied by the entity. Other than that, the patch is

Acked-by: Felix Kuehling <felix.kuehling@amd.com>


> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  3 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 75 +++++++++++--------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       | 16 ++--
>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
>   5 files changed, 60 insertions(+), 41 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> index 02c2479a8840..b59040a8771f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> @@ -38,7 +38,8 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
>   	stime = ktime_get();
>   	for (i = 0; i < n; i++) {
>   		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> -		r = amdgpu_copy_buffer(ring, saddr, daddr, size, NULL, &fence,
> +		r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
> +				       saddr, daddr, size, NULL, &fence,
>   				       false, 0);
>   		if (r)
>   			goto exit_do_move;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index e08f58de4b17..c06c132a753c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -1321,8 +1321,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>   	if (r)
>   		goto out;
>   
> -	r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true,
> -			       AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
> +	r = amdgpu_fill_buffer(&adev->mman.clear_entity, abo, 0, &bo->base._resv,
> +			       &fence, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>   	if (WARN_ON(r))
>   		goto out;
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 42d448cd6a6d..c8d59ca2b3bd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -164,6 +164,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
>   
>   /**
>    * amdgpu_ttm_map_buffer - Map memory into the GART windows
> + * @entity: entity to run the window setup job
>    * @bo: buffer object to map
>    * @mem: memory object to map
>    * @mm_cur: range to map
> @@ -176,7 +177,8 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
>    * Setup one of the GART windows to access a specific piece of memory or return
>    * the physical address for local memory.
>    */
> -static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
> +static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
> +				 struct ttm_buffer_object *bo,
>   				 struct ttm_resource *mem,
>   				 struct amdgpu_res_cursor *mm_cur,
>   				 unsigned int window, struct amdgpu_ring *ring,
> @@ -224,7 +226,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>   	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>   	num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
>   
> -	r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
> +	r = amdgpu_job_alloc_with_ib(adev, entity,
>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>   				     num_dw * 4 + num_bytes,
>   				     AMDGPU_IB_POOL_DELAYED, &job,
> @@ -274,6 +276,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>   /**
>    * amdgpu_ttm_copy_mem_to_mem - Helper function for copy
>    * @adev: amdgpu device
> + * @entity: entity to run the jobs
>    * @src: buffer/address where to read from
>    * @dst: buffer/address where to write to
>    * @size: number of bytes to copy
> @@ -288,6 +291,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>    */
>   __attribute__((nonnull))
>   static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
> +				      struct drm_sched_entity *entity,
>   				      const struct amdgpu_copy_mem *src,
>   				      const struct amdgpu_copy_mem *dst,
>   				      uint64_t size, bool tmz,
> @@ -320,12 +324,14 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>   		cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
>   
>   		/* Map src to window 0 and dst to window 1. */
> -		r = amdgpu_ttm_map_buffer(src->bo, src->mem, &src_mm,
> +		r = amdgpu_ttm_map_buffer(entity,
> +					  src->bo, src->mem, &src_mm,
>   					  0, ring, tmz, &cur_size, &from);
>   		if (r)
>   			goto error;
>   
> -		r = amdgpu_ttm_map_buffer(dst->bo, dst->mem, &dst_mm,
> +		r = amdgpu_ttm_map_buffer(entity,
> +					  dst->bo, dst->mem, &dst_mm,
>   					  1, ring, tmz, &cur_size, &to);
>   		if (r)
>   			goto error;
> @@ -353,7 +359,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>   							     write_compress_disable));
>   		}
>   
> -		r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
> +		r = amdgpu_copy_buffer(ring, entity, from, to, cur_size, resv,
>   				       &next, true, copy_flags);
>   		if (r)
>   			goto error;
> @@ -394,7 +400,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>   	src.offset = 0;
>   	dst.offset = 0;
>   
> -	r = amdgpu_ttm_copy_mem_to_mem(adev, &src, &dst,
> +	r = amdgpu_ttm_copy_mem_to_mem(adev,
> +				       &adev->mman.move_entity.base,
> +				       &src, &dst,
>   				       new_mem->size,
>   				       amdgpu_bo_encrypted(abo),
>   				       bo->base.resv, &fence);
> @@ -406,8 +414,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>   	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
>   		struct dma_fence *wipe_fence = NULL;
>   
> -		r = amdgpu_fill_buffer(abo, 0, NULL, &wipe_fence,
> -				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
> +		r = amdgpu_fill_buffer(&adev->mman.move_entity,
> +				       abo, 0, NULL, &wipe_fence,
> +				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>   		if (r) {
>   			goto error;
>   		} else if (wipe_fence) {
> @@ -2223,16 +2232,15 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>   }
>   
>   static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
> +				  struct drm_sched_entity *entity,
>   				  unsigned int num_dw,
>   				  struct dma_resv *resv,
>   				  bool vm_needs_flush,
>   				  struct amdgpu_job **job,
> -				  bool delayed, u64 k_job_id)
> +				  u64 k_job_id)
>   {
>   	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
>   	int r;
> -	struct drm_sched_entity *entity = delayed ? &adev->mman.clear_entity.base :
> -						    &adev->mman.move_entity.base;
>   	r = amdgpu_job_alloc_with_ib(adev, entity,
>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>   				     num_dw * 4, pool, job, k_job_id);
> @@ -2252,7 +2260,9 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>   						   DMA_RESV_USAGE_BOOKKEEP);
>   }
>   
> -int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
> +int amdgpu_copy_buffer(struct amdgpu_ring *ring,
> +		       struct drm_sched_entity *entity,
> +		       uint64_t src_offset,
>   		       uint64_t dst_offset, uint32_t byte_count,
>   		       struct dma_resv *resv,
>   		       struct dma_fence **fence,
> @@ -2274,8 +2284,8 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>   	max_bytes = adev->mman.buffer_funcs->copy_max_bytes;
>   	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
>   	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
> -	r = amdgpu_ttm_prepare_job(adev, num_dw,
> -				   resv, vm_needs_flush, &job, false,
> +	r = amdgpu_ttm_prepare_job(adev, entity, num_dw,
> +				   resv, vm_needs_flush, &job,
>   				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>   	if (r)
>   		return r;
> @@ -2304,11 +2314,13 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>   	return r;
>   }
>   
> -static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
> +static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
> +			       struct drm_sched_entity *entity,
> +			       uint32_t src_data,
>   			       uint64_t dst_addr, uint32_t byte_count,
>   			       struct dma_resv *resv,
>   			       struct dma_fence **fence,
> -			       bool vm_needs_flush, bool delayed,
> +			       bool vm_needs_flush,
>   			       u64 k_job_id)
>   {
>   	struct amdgpu_device *adev = ring->adev;
> @@ -2321,8 +2333,8 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>   	max_bytes = adev->mman.buffer_funcs->fill_max_bytes;
>   	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
>   	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
> -	r = amdgpu_ttm_prepare_job(adev, num_dw, resv, vm_needs_flush,
> -				   &job, delayed, k_job_id);
> +	r = amdgpu_ttm_prepare_job(adev, entity, num_dw, resv,
> +				   vm_needs_flush, &job, k_job_id);
>   	if (r)
>   		return r;
>   
> @@ -2386,13 +2398,14 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   		/* Never clear more than 256MiB at once to avoid timeouts */
>   		size = min(cursor.size, 256ULL << 20);
>   
> -		r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &cursor,
> +		r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity.base,
> +					  &bo->tbo, bo->tbo.resource, &cursor,
>   					  1, ring, false, &size, &addr);
>   		if (r)
>   			goto err;
>   
> -		r = amdgpu_ttm_fill_mem(ring, 0, addr, size, resv,
> -					&next, true, true,
> +		r = amdgpu_ttm_fill_mem(ring, &adev->mman.clear_entity.base, 0, addr, size, resv,
> +					&next, true,
>   					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>   		if (r)
>   			goto err;
> @@ -2408,12 +2421,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   	return r;
>   }
>   
> -int amdgpu_fill_buffer(struct amdgpu_bo *bo,
> -			uint32_t src_data,
> -			struct dma_resv *resv,
> -			struct dma_fence **f,
> -			bool delayed,
> -			u64 k_job_id)
> +int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
> +		       struct amdgpu_bo *bo,
> +		       uint32_t src_data,
> +		       struct dma_resv *resv,
> +		       struct dma_fence **f,
> +		       u64 k_job_id)
>   {
>   	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>   	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> @@ -2437,13 +2450,15 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>   		/* Never fill more than 256MiB at once to avoid timeouts */
>   		cur_size = min(dst.size, 256ULL << 20);
>   
> -		r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &dst,
> +		r = amdgpu_ttm_map_buffer(&entity->base,
> +					  &bo->tbo, bo->tbo.resource, &dst,
>   					  1, ring, false, &cur_size, &to);
>   		if (r)
>   			goto error;
>   
> -		r = amdgpu_ttm_fill_mem(ring, src_data, to, cur_size, resv,
> -					&next, true, delayed, k_job_id);
> +		r = amdgpu_ttm_fill_mem(ring, &entity->base,
> +					src_data, to, cur_size, resv,
> +					&next, true, k_job_id);
>   		if (r)
>   			goto error;
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index d2295d6c2b67..e1655f86a016 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -167,7 +167,9 @@ int amdgpu_ttm_init(struct amdgpu_device *adev);
>   void amdgpu_ttm_fini(struct amdgpu_device *adev);
>   void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
>   					bool enable);
> -int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
> +int amdgpu_copy_buffer(struct amdgpu_ring *ring,
> +		       struct drm_sched_entity *entity,
> +		       uint64_t src_offset,
>   		       uint64_t dst_offset, uint32_t byte_count,
>   		       struct dma_resv *resv,
>   		       struct dma_fence **fence,
> @@ -175,12 +177,12 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>   int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   			    struct dma_resv *resv,
>   			    struct dma_fence **fence);
> -int amdgpu_fill_buffer(struct amdgpu_bo *bo,
> -			uint32_t src_data,
> -			struct dma_resv *resv,
> -			struct dma_fence **fence,
> -			bool delayed,
> -			u64 k_job_id);
> +int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
> +		       struct amdgpu_bo *bo,
> +		       uint32_t src_data,
> +		       struct dma_resv *resv,
> +		       struct dma_fence **f,
> +		       u64 k_job_id);
>   
>   int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
>   void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index d74ff6e90590..09756132fa1b 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -157,7 +157,8 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>   			goto out_unlock;
>   		}
>   
> -		r = amdgpu_copy_buffer(ring, gart_s, gart_d, size * PAGE_SIZE,
> +		r = amdgpu_copy_buffer(ring, &entity->base,
> +				       gart_s, gart_d, size * PAGE_SIZE,
>   				       NULL, &next, true, 0);
>   		if (r) {
>   			dev_err(adev->dev, "fail %d to copy memory\n", r);

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 06/20] drm/amdgpu: statically assign gart windows to ttm entities
  2025-11-13 16:05 ` [PATCH v2 06/20] drm/amdgpu: statically assign gart windows to ttm entities Pierre-Eric Pelloux-Prayer
  2025-11-14 15:15   ` Christian König
@ 2025-11-14 20:24   ` Felix Kuehling
  2025-11-19  9:55     ` Pierre-Eric Pelloux-Prayer
  1 sibling, 1 reply; 53+ messages in thread
From: Felix Kuehling @ 2025-11-14 20:24 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, Christian König,
	David Airlie, Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel


On 2025-11-13 11:05, Pierre-Eric Pelloux-Prayer wrote:
> If multiple entities share the same window, we must make sure
> that jobs using it are executed sequentially.
>
> This commit gives a separate window id to each entity, so jobs
> from multiple entities can execute in parallel when needed
> (for now they all use the first SDMA engine, so it makes no
> difference yet).
>
> default_entity doesn't get any windows reserved since it has
> no use for them.
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c  |  9 +++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 50 ++++++++++++++----------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  |  9 +++--
>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  8 ++--
>   4 files changed, 46 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index 8e2d41c9c271..2a444d02cf4b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -686,7 +686,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>   	 * translation. Avoid this by doing the invalidation from the SDMA
>   	 * itself at least for GART.
>   	 */
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&adev->mman.clear_entity.gart_window_lock);
> +	mutex_lock(&adev->mman.move_entity.gart_window_lock);
>   	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>   				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
> @@ -699,7 +700,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>   	job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop;
>   	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>   	fence = amdgpu_job_submit(job);
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
> +	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
>   
>   	dma_fence_wait(fence, false);
>   	dma_fence_put(fence);
> @@ -707,7 +709,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>   	return;
>   
>   error_alloc:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
> +	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
>   	dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
>   }
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index c8d59ca2b3bd..7193a341689d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -291,7 +291,7 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>    */
>   __attribute__((nonnull))
>   static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
> -				      struct drm_sched_entity *entity,
> +				      struct amdgpu_ttm_buffer_entity *entity,
>   				      const struct amdgpu_copy_mem *src,
>   				      const struct amdgpu_copy_mem *dst,
>   				      uint64_t size, bool tmz,
> @@ -314,7 +314,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>   	amdgpu_res_first(src->mem, src->offset, size, &src_mm);
>   	amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
>   
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&entity->gart_window_lock);
>   	while (src_mm.remaining) {
>   		uint64_t from, to, cur_size, tiling_flags;
>   		uint32_t num_type, data_format, max_com, write_compress_disable;
> @@ -324,15 +324,15 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>   		cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
>   
>   		/* Map src to window 0 and dst to window 1. */
> -		r = amdgpu_ttm_map_buffer(entity,
> +		r = amdgpu_ttm_map_buffer(&entity->base,
>   					  src->bo, src->mem, &src_mm,
> -					  0, ring, tmz, &cur_size, &from);
> +					  entity->gart_window_id0, ring, tmz, &cur_size, &from);
>   		if (r)
>   			goto error;
>   
> -		r = amdgpu_ttm_map_buffer(entity,
> +		r = amdgpu_ttm_map_buffer(&entity->base,
>   					  dst->bo, dst->mem, &dst_mm,
> -					  1, ring, tmz, &cur_size, &to);
> +					  entity->gart_window_id1, ring, tmz, &cur_size, &to);
>   		if (r)
>   			goto error;
>   
> @@ -359,7 +359,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>   							     write_compress_disable));
>   		}
>   
> -		r = amdgpu_copy_buffer(ring, entity, from, to, cur_size, resv,
> +		r = amdgpu_copy_buffer(ring, &entity->base, from, to, cur_size, resv,
>   				       &next, true, copy_flags);
>   		if (r)
>   			goto error;
> @@ -371,7 +371,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>   		amdgpu_res_next(&dst_mm, cur_size);
>   	}
>   error:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&entity->gart_window_lock);
>   	*f = fence;
>   	return r;
>   }
> @@ -401,7 +401,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>   	dst.offset = 0;
>   
>   	r = amdgpu_ttm_copy_mem_to_mem(adev,
> -				       &adev->mman.move_entity.base,
> +				       &adev->mman.move_entity,
>   				       &src, &dst,
>   				       new_mem->size,
>   				       amdgpu_bo_encrypted(abo),
> @@ -1893,8 +1893,6 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
>   	uint64_t gtt_size;
>   	int r;
>   
> -	mutex_init(&adev->mman.gtt_window_lock);
> -
>   	dma_set_max_seg_size(adev->dev, UINT_MAX);
>   	/* No others user of address space so set it to 0 */
>   	r = ttm_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev,
> @@ -2207,6 +2205,15 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>   			drm_sched_entity_destroy(&adev->mman.clear_entity.base);
>   			goto error_free_entity;
>   		}
> +
> +		/* Statically assign GART windows to each entity. */
> +		mutex_init(&adev->mman.default_entity.gart_window_lock);
> +		adev->mman.move_entity.gart_window_id0 = 0;
> +		adev->mman.move_entity.gart_window_id1 = 1;
> +		mutex_init(&adev->mman.move_entity.gart_window_lock);
> +		/* Clearing entity doesn't use id0 */
> +		adev->mman.clear_entity.gart_window_id1 = 2;
> +		mutex_init(&adev->mman.clear_entity.gart_window_lock);
>   	} else {
>   		drm_sched_entity_destroy(&adev->mman.default_entity.base);
>   		drm_sched_entity_destroy(&adev->mman.clear_entity.base);
> @@ -2371,6 +2378,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   {
>   	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>   	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ttm_buffer_entity *entity;
>   	struct amdgpu_res_cursor cursor;
>   	u64 addr;
>   	int r = 0;
> @@ -2381,11 +2389,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   	if (!fence)
>   		return -EINVAL;
>   
> +	entity = &adev->mman.clear_entity;
>   	*fence = dma_fence_get_stub();
>   
>   	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
>   
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&entity->gart_window_lock);
>   	while (cursor.remaining) {
>   		struct dma_fence *next = NULL;
>   		u64 size;
> @@ -2398,13 +2407,13 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   		/* Never clear more than 256MiB at once to avoid timeouts */
>   		size = min(cursor.size, 256ULL << 20);
>   
> -		r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity.base,
> +		r = amdgpu_ttm_map_buffer(&entity->base,
>   					  &bo->tbo, bo->tbo.resource, &cursor,
> -					  1, ring, false, &size, &addr);
> +					  entity->gart_window_id1, ring, false, &size, &addr);
>   		if (r)
>   			goto err;
>   
> -		r = amdgpu_ttm_fill_mem(ring, &adev->mman.clear_entity.base, 0, addr, size, resv,
> +		r = amdgpu_ttm_fill_mem(ring, &entity->base, 0, addr, size, resv,
>   					&next, true,
>   					AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>   		if (r)
> @@ -2416,12 +2425,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   		amdgpu_res_next(&cursor, size);
>   	}
>   err:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&entity->gart_window_lock);
>   
>   	return r;
>   }
>   
> -int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
> +int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>   		       struct amdgpu_bo *bo,
>   		       uint32_t src_data,
>   		       struct dma_resv *resv,
> @@ -2442,7 +2451,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>   
>   	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &dst);
>   
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&entity->gart_window_lock);
>   	while (dst.remaining) {
>   		struct dma_fence *next;
>   		uint64_t cur_size, to;
> @@ -2452,7 +2461,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>   
>   		r = amdgpu_ttm_map_buffer(&entity->base,
>   					  &bo->tbo, bo->tbo.resource, &dst,
> -					  1, ring, false, &cur_size, &to);
> +					  entity->gart_window_id1, ring, false,
> +					  &cur_size, &to);
>   		if (r)
>   			goto error;
>   
> @@ -2468,7 +2478,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>   		amdgpu_res_next(&dst, cur_size);
>   	}
>   error:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&entity->gart_window_lock);
>   	if (f)
>   		*f = dma_fence_get(fence);
>   	dma_fence_put(fence);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index e1655f86a016..f4f762be9fdd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -39,7 +39,7 @@
>   #define __AMDGPU_PL_NUM	(TTM_PL_PRIV + 6)
>   
>   #define AMDGPU_GTT_MAX_TRANSFER_SIZE	512
> -#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS	2
> +#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS	3
>   
>   extern const struct attribute_group amdgpu_vram_mgr_attr_group;
>   extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
> @@ -54,6 +54,9 @@ struct amdgpu_gtt_mgr {
>   
>   struct amdgpu_ttm_buffer_entity {
>   	struct drm_sched_entity base;
> +	struct mutex		gart_window_lock;
> +	u32			gart_window_id0;
> +	u32			gart_window_id1;
>   };
>   
>   struct amdgpu_mman {
> @@ -69,7 +72,7 @@ struct amdgpu_mman {
>   
>   	struct mutex				gtt_window_lock;
>   
> -	struct amdgpu_ttm_buffer_entity default_entity;
> +	struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
>   	struct amdgpu_ttm_buffer_entity clear_entity;
>   	struct amdgpu_ttm_buffer_entity move_entity;
>   
> @@ -177,7 +180,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring,
>   int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   			    struct dma_resv *resv,
>   			    struct dma_fence **fence);
> -int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
> +int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>   		       struct amdgpu_bo *bo,
>   		       uint32_t src_data,
>   		       struct dma_resv *resv,
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 09756132fa1b..bc47fc362a17 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -60,7 +60,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
>   	int r;
>   
>   	/* use gart window 0 */
> -	*gart_addr = adev->gmc.gart_start;
> +	*gart_addr = entity->gart_window_id0;

gart_window_id0 doesn't look like an address. What's the actual MC 
address that any copy through this window should use?

Regards,
   Felix


>   
>   	num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>   	num_bytes = npages * 8;
> @@ -116,7 +116,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
>    * multiple GTT_MAX_PAGES transfer, all sdma operations are serialized, wait for
>    * the last sdma finish fence which is returned to check copy memory is done.
>    *
> - * Context: Process context, takes and releases gtt_window_lock
> + * Context: Process context, takes and releases gart_window_lock
>    *
>    * Return:
>    * 0 - OK, otherwise error code
> @@ -138,7 +138,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>   
>   	entity = &adev->mman.move_entity;
>   
> -	mutex_lock(&adev->mman.gtt_window_lock);
> +	mutex_lock(&entity->gart_window_lock);
>   
>   	while (npages) {
>   		size = min(GTT_MAX_PAGES, npages);
> @@ -175,7 +175,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>   	}
>   
>   out_unlock:
> -	mutex_unlock(&adev->mman.gtt_window_lock);
> +	mutex_unlock(&entity->gart_window_lock);
>   
>   	return r;
>   }

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 08/20] drm/amdgpu: allocate multiple move entities
  2025-11-13 16:05 ` [PATCH v2 08/20] drm/amdgpu: allocate multiple move entities Pierre-Eric Pelloux-Prayer
@ 2025-11-14 20:57   ` Felix Kuehling
  0 siblings, 0 replies; 53+ messages in thread
From: Felix Kuehling @ 2025-11-14 20:57 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, Christian König,
	David Airlie, Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel

On 2025-11-13 11:05, Pierre-Eric Pelloux-Prayer wrote:
> No functional change for now, as we always use entity 0.
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

Acked-by: Felix Kuehling <felix.kuehling@amd.com>


> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c  |  9 +++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 48 +++++++++++++++---------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  |  3 +-
>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  2 +-
>   4 files changed, 39 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index e73dcfed5338..2713dd51ab9a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -686,9 +686,10 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>   	 * translation. Avoid this by doing the invalidation from the SDMA
>   	 * itself at least for GART.
>   	 */
> -	mutex_lock(&adev->mman.move_entity.gart_window_lock);
>   	for (i = 0; i < adev->mman.num_clear_entities; i++)
>   		mutex_lock(&adev->mman.clear_entities[i].gart_window_lock);
> +	for (i = 0; i < adev->mman.num_move_entities; i++)
> +		mutex_lock(&adev->mman.move_entities[i].gart_window_lock);
>   	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
>   				     AMDGPU_FENCE_OWNER_UNDEFINED,
>   				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
> @@ -701,7 +702,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>   	job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop;
>   	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>   	fence = amdgpu_job_submit(job);
> -	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
> +	for (i = 0; i < adev->mman.num_move_entities; i++)
> +		mutex_unlock(&adev->mman.move_entities[i].gart_window_lock);
>   	for (i = 0; i < adev->mman.num_clear_entities; i++)
>   		mutex_unlock(&adev->mman.clear_entities[i].gart_window_lock);
>   
> @@ -711,7 +713,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>   	return;
>   
>   error_alloc:
> -	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
> +	for (i = 0; i < adev->mman.num_move_entities; i++)
> +		mutex_unlock(&adev->mman.move_entities[i].gart_window_lock);
>   	for (i = 0; i < adev->mman.num_clear_entities; i++)
>   		mutex_unlock(&adev->mman.clear_entities[i].gart_window_lock);
>   	dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 2f305ad32080..e1f0567fd2d5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -401,7 +401,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>   	dst.offset = 0;
>   
>   	r = amdgpu_ttm_copy_mem_to_mem(adev,
> -				       &adev->mman.move_entity,
> +				       &adev->mman.move_entities[0],
>   				       &src, &dst,
>   				       new_mem->size,
>   				       amdgpu_bo_encrypted(abo),
> @@ -414,7 +414,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>   	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
>   		struct dma_fence *wipe_fence = NULL;
>   
> -		r = amdgpu_fill_buffer(&adev->mman.move_entity,
> +		r = amdgpu_fill_buffer(&adev->mman.move_entities[0],
>   				       abo, 0, NULL, &wipe_fence,
>   				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>   		if (r) {
> @@ -2167,10 +2167,11 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>   	struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
>   	uint64_t size;
>   	int r, i, j;
> -	u32 num_clear_entities, windows, w;
> +	u32 num_clear_entities, num_move_entities, windows, w;
>   
>   	num_clear_entities = adev->sdma.num_instances;
> -	windows = adev->gmc.is_app_apu ? 0 : (2 + num_clear_entities);
> +	num_move_entities = MIN(adev->sdma.num_instances, TTM_NUM_MOVE_FENCES);
> +	windows = adev->gmc.is_app_apu ? 0 : (2 * num_move_entities + num_clear_entities);
>   
>   	if (!adev->mman.initialized || amdgpu_in_reset(adev) ||
>   	    adev->mman.buffer_funcs_enabled == enable || adev->gmc.is_app_apu)
> @@ -2186,20 +2187,25 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>   					  DRM_SCHED_PRIORITY_KERNEL, &sched,
>   					  1, NULL);
>   		if (r) {
> -			dev_err(adev->dev,
> -				"Failed setting up TTM BO eviction entity (%d)\n",
> +			dev_err(adev->dev, "Failed setting up entity (%d)\n",
>   				r);
>   			return 0;
>   		}
>   
> -		r = drm_sched_entity_init(&adev->mman.move_entity.base,
> -					  DRM_SCHED_PRIORITY_NORMAL, &sched,
> -					  1, NULL);
> -		if (r) {
> -			dev_err(adev->dev,
> -				"Failed setting up TTM BO move entity (%d)\n",
> -				r);
> -			goto error_free_entity;
> +		adev->mman.num_move_entities = num_move_entities;
> +		for (i = 0; i < num_move_entities; i++) {
> +			r = drm_sched_entity_init(&adev->mman.move_entities[i].base,
> +						  DRM_SCHED_PRIORITY_NORMAL, &sched,
> +						  1, NULL);
> +			if (r) {
> +				dev_err(adev->dev,
> +					"Failed setting up TTM BO move entities (%d)\n",
> +					r);
> +				for (j = 0; j < i; j++)
> +					drm_sched_entity_destroy(
> +						&adev->mman.move_entities[j].base);
> +				goto error_free_entity;
> +			}
>   		}
>   
>   		adev->mman.num_clear_entities = num_clear_entities;
> @@ -2214,6 +2220,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>   						  DRM_SCHED_PRIORITY_NORMAL, &sched,
>   						  1, NULL);
>   			if (r) {
> +				for (j = 0; j < num_move_entities; j++)
> +					drm_sched_entity_destroy(
> +						&adev->mman.move_entities[j].base);
>   				for (j = 0; j < i; j++)
>   					drm_sched_entity_destroy(
>   						&adev->mman.clear_entities[j].base);
> @@ -2225,9 +2234,11 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>   		/* Statically assign GART windows to each entity. */
>   		w = 0;
>   		mutex_init(&adev->mman.default_entity.gart_window_lock);
> -		adev->mman.move_entity.gart_window_id0 = w++;
> -		adev->mman.move_entity.gart_window_id1 = w++;
> -		mutex_init(&adev->mman.move_entity.gart_window_lock);
> +		for (i = 0; i < num_move_entities; i++) {
> +			adev->mman.move_entities[i].gart_window_id0 = w++;
> +			adev->mman.move_entities[i].gart_window_id1 = w++;
> +			mutex_init(&adev->mman.move_entities[i].gart_window_lock);
> +		}
>   		for (i = 0; i < num_clear_entities; i++) {
>   			/* Clearing entities don't use id0 */
>   			adev->mman.clear_entities[i].gart_window_id1 = w++;
> @@ -2236,7 +2247,8 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>   		WARN_ON(w != windows);
>   	} else {
>   		drm_sched_entity_destroy(&adev->mman.default_entity.base);
> -		drm_sched_entity_destroy(&adev->mman.move_entity.base);
> +		for (i = 0; i < num_move_entities; i++)
> +			drm_sched_entity_destroy(&adev->mman.move_entities[i].base);
>   		for (i = 0; i < num_clear_entities; i++)
>   			drm_sched_entity_destroy(&adev->mman.clear_entities[i].base);
>   		for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 797f851a4578..9d4891e86675 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -72,9 +72,10 @@ struct amdgpu_mman {
>   	struct mutex				gtt_window_lock;
>   
>   	struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
> -	struct amdgpu_ttm_buffer_entity move_entity;
>   	struct amdgpu_ttm_buffer_entity *clear_entities;
>   	u32 num_clear_entities;
> +	struct amdgpu_ttm_buffer_entity move_entities[TTM_NUM_MOVE_FENCES];
> +	u32 num_move_entities;
>   
>   	struct amdgpu_vram_mgr vram_mgr;
>   	struct amdgpu_gtt_mgr gtt_mgr;
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index bc47fc362a17..943c3438c7ee 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -136,7 +136,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>   	u64 size;
>   	int r;
>   
> -	entity = &adev->mman.move_entity;
> +	entity = &adev->mman.move_entities[0];
>   
>   	mutex_lock(&entity->gart_window_lock);
>   

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 12/20] drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences
  2025-11-13 16:05 ` [PATCH v2 12/20] drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences Pierre-Eric Pelloux-Prayer
@ 2025-11-14 20:57   ` Felix Kuehling
  2025-11-17  9:07   ` Christian König
  1 sibling, 0 replies; 53+ messages in thread
From: Felix Kuehling @ 2025-11-14 20:57 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, Christian König,
	David Airlie, Simona Vetter, Harry Wentland, Leo Li,
	Rodrigo Siqueira
  Cc: amd-gfx, dri-devel, linux-kernel

On 2025-11-13 11:05, Pierre-Eric Pelloux-Prayer wrote:
> Use TTM_NUM_MOVE_FENCES as an upper bound on how many fences
> TTM might need when handling moves/evictions.
>
> ---
> v2: removed drm_err calls
> ---
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

Acked-by: Felix Kuehling <felix.kuehling@amd.com>


> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c                  | 5 ++---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c                 | 2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c                | 6 ++----
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c                  | 3 ++-
>   drivers/gpu/drm/amd/amdkfd/kfd_svm.c                    | 3 +--
>   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c | 6 ++----
>   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c    | 6 ++----
>   7 files changed, 12 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index ecdfe6cb36cc..0338522761bc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -916,9 +916,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   			goto out_free_user_pages;
>   
>   		amdgpu_bo_list_for_each_entry(e, p->bo_list) {
> -			/* One fence for TTM and one for each CS job */
>   			r = drm_exec_prepare_obj(&p->exec, &e->bo->tbo.base,
> -						 1 + p->gang_size);
> +						 TTM_NUM_MOVE_FENCES + p->gang_size);
>   			drm_exec_retry_on_contention(&p->exec);
>   			if (unlikely(r))
>   				goto out_free_user_pages;
> @@ -928,7 +927,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>   
>   		if (p->uf_bo) {
>   			r = drm_exec_prepare_obj(&p->exec, &p->uf_bo->tbo.base,
> -						 1 + p->gang_size);
> +						 TTM_NUM_MOVE_FENCES + p->gang_size);
>   			drm_exec_retry_on_contention(&p->exec);
>   			if (unlikely(r))
>   				goto out_free_user_pages;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index ce073e894584..2fe6899f6344 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -353,7 +353,7 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
>   
>   	drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
>   	drm_exec_until_all_locked(&exec) {
> -		r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 1);
> +		r = drm_exec_prepare_obj(&exec, &bo->tbo.base, TTM_NUM_MOVE_FENCES);
>   		drm_exec_retry_on_contention(&exec);
>   		if (unlikely(r))
>   			goto out_unlock;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
> index 79bad9cbe2ab..b92561eea3da 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
> @@ -326,11 +326,9 @@ static int amdgpu_vkms_prepare_fb(struct drm_plane *plane,
>   		return r;
>   	}
>   
> -	r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
> -	if (r) {
> -		dev_err(adev->dev, "allocating fence slot failed (%d)\n", r);
> +	r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> +	if (r)
>   		goto error_unlock;
> -	}
>   
>   	if (plane->type != DRM_PLANE_TYPE_CURSOR)
>   		domain = amdgpu_display_supported_domains(adev, rbo->flags);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 2f8e83f840a8..aaa44199e9f4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2630,7 +2630,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>   	}
>   
>   	amdgpu_vm_bo_base_init(&vm->root, vm, root_bo);
> -	r = dma_resv_reserve_fences(root_bo->tbo.base.resv, 1);
> +	r = dma_resv_reserve_fences(root_bo->tbo.base.resv,
> +				    TTM_NUM_MOVE_FENCES);
>   	if (r)
>   		goto error_free_root;
>   
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> index ffb7b36e577c..968cef1cbea6 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> @@ -627,9 +627,8 @@ svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
>   		}
>   	}
>   
> -	r = dma_resv_reserve_fences(bo->tbo.base.resv, 1);
> +	r = dma_resv_reserve_fences(bo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
>   	if (r) {
> -		pr_debug("failed %d to reserve bo\n", r);
>   		amdgpu_bo_unreserve(bo);
>   		goto reserve_bo_failed;
>   	}
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> index 56cb866ac6f8..ceb55dd183ed 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> @@ -952,11 +952,9 @@ static int amdgpu_dm_plane_helper_prepare_fb(struct drm_plane *plane,
>   		return r;
>   	}
>   
> -	r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
> -	if (r) {
> -		drm_err(adev_to_drm(adev), "reserving fence slot failed (%d)\n", r);
> +	r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> +	if (r)
>   		goto error_unlock;
> -	}
>   
>   	if (plane->type != DRM_PLANE_TYPE_CURSOR)
>   		domain = amdgpu_display_supported_domains(adev, rbo->flags);
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
> index d9527c05fc87..110f0173eee6 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
> @@ -106,11 +106,9 @@ static int amdgpu_dm_wb_prepare_job(struct drm_writeback_connector *wb_connector
>   		return r;
>   	}
>   
> -	r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
> -	if (r) {
> -		drm_err(adev_to_drm(adev), "reserving fence slot failed (%d)\n", r);
> +	r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> +	if (r)
>   		goto error_unlock;
> -	}
>   
>   	domain = amdgpu_display_supported_domains(adev, rbo->flags);
>   

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman
  2025-11-13 16:05 ` [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman Pierre-Eric Pelloux-Prayer
@ 2025-11-14 21:23   ` Felix Kuehling
  2025-11-17  9:46   ` Christian König
  1 sibling, 0 replies; 53+ messages in thread
From: Felix Kuehling @ 2025-11-14 21:23 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, Christian König,
	David Airlie, Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel

On 2025-11-13 11:05, Pierre-Eric Pelloux-Prayer wrote:
> This will allow the use of all of them for clear/fill buffer
> operations.
> Since drm_sched_entity_init requires a scheduler array, we
> store schedulers rather than rings. For the few places that need
> access to a ring, we can get it from the sched using container_of.
>
> Since the code is the same for all sdma versions, add a new
> helper amdgpu_sdma_set_buffer_funcs_scheds to set buffer_funcs_scheds
> based on the number of sdma instances.
>
> Note: the new sched array is identical to the amdgpu_vm_manager one.
> These two could be merged.
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

Acked-by: Felix Kuehling <felix.kuehling@amd.com>


> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  2 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  8 ++--
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c       |  4 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 41 +++++++++++++++----
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  3 +-
>   drivers/gpu/drm/amd/amdgpu/cik_sdma.c         |  3 +-
>   drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c        |  3 +-
>   drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c        |  3 +-
>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c        |  6 +--
>   drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c      |  6 +--
>   drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c        |  6 +--
>   drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c        |  6 +--
>   drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c        |  3 +-
>   drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c        |  3 +-
>   drivers/gpu/drm/amd/amdgpu/si_dma.c           |  3 +-
>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
>   17 files changed, 62 insertions(+), 45 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 3fab3dc9f3e4..05c13fb0e6bf 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1615,6 +1615,8 @@ ssize_t amdgpu_get_soft_full_reset_mask(struct amdgpu_ring *ring);
>   ssize_t amdgpu_show_reset_mask(char *buf, uint32_t supported_reset);
>   void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
>   				   const struct amdgpu_vm_pte_funcs *vm_pte_funcs);
> +void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
> +					 const struct amdgpu_buffer_funcs *buffer_funcs);
>   
>   /* atpx handler */
>   #if defined(CONFIG_VGA_SWITCHEROO)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> index b59040a8771f..9ea927e07a77 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> @@ -32,12 +32,14 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
>   				    uint64_t saddr, uint64_t daddr, int n, s64 *time_ms)
>   {
>   	ktime_t stime, etime;
> +	struct amdgpu_ring *ring;
>   	struct dma_fence *fence;
>   	int i, r;
>   
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> +
>   	stime = ktime_get();
>   	for (i = 0; i < n; i++) {
> -		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>   		r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
>   				       saddr, daddr, size, NULL, &fence,
>   				       false, 0);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index b92234d63562..1927d940fbca 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -3303,7 +3303,7 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
>   	if (r)
>   		goto init_failed;
>   
> -	if (adev->mman.buffer_funcs_ring->sched.ready)
> +	if (adev->mman.buffer_funcs_scheds[0]->ready)
>   		amdgpu_ttm_set_buffer_funcs_status(adev, true);
>   
>   	/* Don't init kfd if whole hive need to be reset during init */
> @@ -4143,7 +4143,7 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
>   
>   	r = amdgpu_device_ip_resume_phase2(adev);
>   
> -	if (adev->mman.buffer_funcs_ring->sched.ready)
> +	if (adev->mman.buffer_funcs_scheds[0]->ready)
>   		amdgpu_ttm_set_buffer_funcs_status(adev, true);
>   
>   	if (r)
> @@ -4493,7 +4493,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>   	adev->num_rings = 0;
>   	RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>   	adev->mman.buffer_funcs = NULL;
> -	adev->mman.buffer_funcs_ring = NULL;
> +	adev->mman.num_buffer_funcs_scheds = 0;
>   	adev->vm_manager.vm_pte_funcs = NULL;
>   	adev->vm_manager.vm_pte_num_scheds = 0;
>   	adev->gmc.gmc_funcs = NULL;
> @@ -5965,7 +5965,7 @@ int amdgpu_device_reinit_after_reset(struct amdgpu_reset_context *reset_context)
>   				if (r)
>   					goto out;
>   
> -				if (tmp_adev->mman.buffer_funcs_ring->sched.ready)
> +				if (tmp_adev->mman.buffer_funcs_scheds[0]->ready)
>   					amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
>   
>   				r = amdgpu_device_ip_resume_phase3(tmp_adev);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index 2713dd51ab9a..4433d8620129 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -651,12 +651,14 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
>   void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>   			      uint32_t vmhub, uint32_t flush_type)
>   {
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ring *ring;
>   	struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
>   	struct dma_fence *fence;
>   	struct amdgpu_job *job;
>   	int r, i;
>   
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> +
>   	if (!hub->sdma_invalidation_workaround || vmid ||
>   	    !adev->mman.buffer_funcs_enabled || !adev->ib_pool_ready ||
>   	    !ring->sched.ready) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 6c333dba7a35..11fec0fa4c11 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -308,7 +308,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>   				      struct dma_resv *resv,
>   				      struct dma_fence **f)
>   {
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ring *ring;
>   	struct amdgpu_res_cursor src_mm, dst_mm;
>   	struct dma_fence *fence = NULL;
>   	int r = 0;
> @@ -321,6 +321,8 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>   		return -EINVAL;
>   	}
>   
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> +
>   	amdgpu_res_first(src->mem, src->offset, size, &src_mm);
>   	amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
>   
> @@ -1493,6 +1495,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>   	struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
>   	struct amdgpu_device *adev = amdgpu_ttm_adev(abo->tbo.bdev);
>   	struct amdgpu_res_cursor src_mm;
> +	struct amdgpu_ring *ring;
>   	struct amdgpu_job *job;
>   	struct dma_fence *fence;
>   	uint64_t src_addr, dst_addr;
> @@ -1530,7 +1533,8 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>   	amdgpu_emit_copy_buffer(adev, &job->ibs[0], src_addr, dst_addr,
>   				PAGE_SIZE, 0);
>   
> -	amdgpu_ring_pad_ib(adev->mman.buffer_funcs_ring, &job->ibs[0]);
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> +	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>   	WARN_ON(job->ibs[0].length_dw > num_dw);
>   
>   	fence = amdgpu_job_submit(job);
> @@ -2196,11 +2200,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>   		return windows;
>   
>   	if (enable) {
> -		struct amdgpu_ring *ring;
>   		struct drm_gpu_scheduler *sched;
>   
> -		ring = adev->mman.buffer_funcs_ring;
> -		sched = &ring->sched;
> +		sched = adev->mman.buffer_funcs_scheds[0];
>   		r = drm_sched_entity_init(&adev->mman.default_entity.base,
>   					  DRM_SCHED_PRIORITY_KERNEL, &sched,
>   					  1, NULL);
> @@ -2432,7 +2434,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   			    struct dma_fence **fence)
>   {
>   	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ring *ring;
>   	struct amdgpu_ttm_buffer_entity *entity;
>   	struct amdgpu_res_cursor cursor;
>   	u64 addr;
> @@ -2443,6 +2445,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>   
>   	if (!fence)
>   		return -EINVAL;
> +
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>   	entity = &adev->mman.clear_entities[0];
>   	*fence = dma_fence_get_stub();
>   
> @@ -2494,9 +2498,9 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>   		       u64 k_job_id)
>   {
>   	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>   	struct dma_fence *fence = NULL;
>   	struct amdgpu_res_cursor dst;
> +	struct amdgpu_ring *ring;
>   	int r, e;
>   
>   	if (!adev->mman.buffer_funcs_enabled) {
> @@ -2505,6 +2509,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>   		return -EINVAL;
>   	}
>   
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> +
>   	if (entity == NULL) {
>   		e = atomic_inc_return(&adev->mman.next_clear_entity) %
>   				      adev->mman.num_clear_entities;
> @@ -2579,6 +2585,27 @@ int amdgpu_ttm_evict_resources(struct amdgpu_device *adev, int mem_type)
>   	return ttm_resource_manager_evict_all(&adev->mman.bdev, man);
>   }
>   
> +void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
> +					 const struct amdgpu_buffer_funcs *buffer_funcs)
> +{
> +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
> +	struct drm_gpu_scheduler *sched;
> +	int i;
> +
> +	adev->mman.buffer_funcs = buffer_funcs;
> +
> +	for (i = 0; i < adev->sdma.num_instances; i++) {
> +		if (adev->sdma.has_page_queue)
> +			sched = &adev->sdma.instance[i].page.sched;
> +		else
> +			sched = &adev->sdma.instance[i].ring.sched;
> +		adev->mman.buffer_funcs_scheds[i] = sched;
> +	}
> +
> +	adev->mman.num_buffer_funcs_scheds = hub->sdma_invalidation_workaround ?
> +		1 : adev->sdma.num_instances;
> +}
> +
>   #if defined(CONFIG_DEBUG_FS)
>   
>   static int amdgpu_ttm_page_pool_show(struct seq_file *m, void *unused)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 4844f001f590..63c3e2466708 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -66,7 +66,8 @@ struct amdgpu_mman {
>   
>   	/* buffer handling */
>   	const struct amdgpu_buffer_funcs	*buffer_funcs;
> -	struct amdgpu_ring			*buffer_funcs_ring;
> +	struct drm_gpu_scheduler		*buffer_funcs_scheds[AMDGPU_MAX_RINGS];
> +	u32					num_buffer_funcs_scheds;
>   	bool					buffer_funcs_enabled;
>   
>   	struct mutex				gtt_window_lock;
> diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> index 5fe162f52c92..a36385ad8da8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> @@ -1333,8 +1333,7 @@ static const struct amdgpu_buffer_funcs cik_sdma_buffer_funcs = {
>   
>   static void cik_sdma_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	adev->mman.buffer_funcs = &cik_sdma_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &cik_sdma_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs cik_sdma_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
> index 63636643db3d..4a3ba136a36c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
> @@ -1228,8 +1228,7 @@ static const struct amdgpu_buffer_funcs sdma_v2_4_buffer_funcs = {
>   
>   static void sdma_v2_4_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	adev->mman.buffer_funcs = &sdma_v2_4_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v2_4_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs sdma_v2_4_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> index 0153626b5df2..3cf527bcadf6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> @@ -1670,8 +1670,7 @@ static const struct amdgpu_buffer_funcs sdma_v3_0_buffer_funcs = {
>   
>   static void sdma_v3_0_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	adev->mman.buffer_funcs = &sdma_v3_0_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v3_0_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs sdma_v3_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> index 96a67b30854c..7e106baecad5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> @@ -2608,11 +2608,7 @@ static const struct amdgpu_buffer_funcs sdma_v4_0_buffer_funcs = {
>   
>   static void sdma_v4_0_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	adev->mman.buffer_funcs = &sdma_v4_0_buffer_funcs;
> -	if (adev->sdma.has_page_queue)
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].page;
> -	else
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v4_0_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs sdma_v4_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> index 04dc8a8f4d66..7cb0e213bab2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> @@ -2309,11 +2309,7 @@ static const struct amdgpu_buffer_funcs sdma_v4_4_2_buffer_funcs = {
>   
>   static void sdma_v4_4_2_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	adev->mman.buffer_funcs = &sdma_v4_4_2_buffer_funcs;
> -	if (adev->sdma.has_page_queue)
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].page;
> -	else
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v4_4_2_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs sdma_v4_4_2_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> index 19c717f2c602..eab09c5fc762 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> @@ -2066,10 +2066,8 @@ static const struct amdgpu_buffer_funcs sdma_v5_0_buffer_funcs = {
>   
>   static void sdma_v5_0_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	if (adev->mman.buffer_funcs == NULL) {
> -		adev->mman.buffer_funcs = &sdma_v5_0_buffer_funcs;
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> -	}
> +	if (adev->mman.buffer_funcs == NULL)
> +		amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v5_0_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs sdma_v5_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> index 7a07b8f4e86d..e843da1dce59 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> @@ -2076,10 +2076,8 @@ static const struct amdgpu_buffer_funcs sdma_v5_2_buffer_funcs = {
>   
>   static void sdma_v5_2_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	if (adev->mman.buffer_funcs == NULL) {
> -		adev->mman.buffer_funcs = &sdma_v5_2_buffer_funcs;
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> -	}
> +	if (adev->mman.buffer_funcs == NULL)
> +		amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v5_2_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs sdma_v5_2_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> index 8f8228c7adee..d078bff42983 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> @@ -1884,8 +1884,7 @@ static const struct amdgpu_buffer_funcs sdma_v6_0_buffer_funcs = {
>   
>   static void sdma_v6_0_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	adev->mman.buffer_funcs = &sdma_v6_0_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v6_0_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs sdma_v6_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> index cf412d8fb0ed..77ad6f128e75 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> @@ -1826,8 +1826,7 @@ static const struct amdgpu_buffer_funcs sdma_v7_0_buffer_funcs = {
>   
>   static void sdma_v7_0_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	adev->mman.buffer_funcs = &sdma_v7_0_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v7_0_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs sdma_v7_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c b/drivers/gpu/drm/amd/amdgpu/si_dma.c
> index 863e00086c30..4f6d7eeceb37 100644
> --- a/drivers/gpu/drm/amd/amdgpu/si_dma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c
> @@ -826,8 +826,7 @@ static const struct amdgpu_buffer_funcs si_dma_buffer_funcs = {
>   
>   static void si_dma_set_buffer_funcs(struct amdgpu_device *adev)
>   {
> -	adev->mman.buffer_funcs = &si_dma_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &si_dma_buffer_funcs);
>   }
>   
>   static const struct amdgpu_vm_pte_funcs si_dma_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 943c3438c7ee..3f7b85aabb72 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -129,13 +129,14 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>   			     struct dma_fence **mfence)
>   {
>   	const u64 GTT_MAX_PAGES = AMDGPU_GTT_MAX_TRANSFER_SIZE;
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ring *ring;
>   	struct amdgpu_ttm_buffer_entity *entity;
>   	u64 gart_s, gart_d;
>   	struct dma_fence *next;
>   	u64 size;
>   	int r;
>   
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>   	entity = &adev->mman.move_entities[0];
>   
>   	mutex_lock(&entity->gart_window_lock);


* Re: [PATCH v2 07/20] drm/amdgpu: allocate multiple clear entities
  2025-11-13 16:05 ` [PATCH v2 07/20] drm/amdgpu: allocate multiple clear entities Pierre-Eric Pelloux-Prayer
@ 2025-11-17  8:41   ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-17  8:41 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> No functional change for now, as we always use entity 0.
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c     | 11 +--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c |  6 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c  |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     | 76 +++++++++++++--------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h     | 10 +--
>  5 files changed, 66 insertions(+), 39 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index 2a444d02cf4b..e73dcfed5338 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -655,7 +655,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  	struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
>  	struct dma_fence *fence;
>  	struct amdgpu_job *job;
> -	int r;
> +	int r, i;
>  
>  	if (!hub->sdma_invalidation_workaround || vmid ||
>  	    !adev->mman.buffer_funcs_enabled || !adev->ib_pool_ready ||
> @@ -686,8 +686,9 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  	 * translation. Avoid this by doing the invalidation from the SDMA
>  	 * itself at least for GART.
>  	 */
> -	mutex_lock(&adev->mman.clear_entity.gart_window_lock);
>  	mutex_lock(&adev->mman.move_entity.gart_window_lock);
> +	for (i = 0; i < adev->mman.num_clear_entities; i++)
> +		mutex_lock(&adev->mman.clear_entities[i].gart_window_lock);

Ok, that doesn't make much sense here, but I already pointed it out on the other patch, so let's discuss it there.

>  	r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>  				     16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
> @@ -701,7 +702,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>  	fence = amdgpu_job_submit(job);
>  	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
> -	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
> +	for (i = 0; i < adev->mman.num_clear_entities; i++)
> +		mutex_unlock(&adev->mman.clear_entities[i].gart_window_lock);
>  
>  	dma_fence_wait(fence, false);
>  	dma_fence_put(fence);
> @@ -710,7 +712,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  
>  error_alloc:
>  	mutex_unlock(&adev->mman.move_entity.gart_window_lock);
> -	mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
> +	for (i = 0; i < adev->mman.num_clear_entities; i++)
> +		mutex_unlock(&adev->mman.clear_entities[i].gart_window_lock);
>  	dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
>  }
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> index 0760e70402ec..3771e89035f5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> @@ -269,10 +269,12 @@ static const struct ttm_resource_manager_func amdgpu_gtt_mgr_func = {
>   *
>   * @adev: amdgpu_device pointer
>   * @gtt_size: maximum size of GTT
> + * @reserved_windows: number of already-used windows
>   *
>   * Allocate and initialize the GTT manager.
>   */
> -int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
> +int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size,
> +			u32 reserved_windows)
>  {
>  	struct amdgpu_gtt_mgr *mgr = &adev->mman.gtt_mgr;
>  	struct ttm_resource_manager *man = &mgr->manager;
> @@ -283,7 +285,7 @@ int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
>  
>  	ttm_resource_manager_init(man, &adev->mman.bdev, gtt_size);
>  
> -	start = AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS;
> +	start = AMDGPU_GTT_MAX_TRANSFER_SIZE * reserved_windows;

Probably best to separate this change out into its own patch, e.g. something like "remove fixed AMDGPU_GTT_NUM_TRANSFER_WINDOWS"...

>  	size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
>  	drm_mm_init(&mgr->mm, start, size);
>  	spin_lock_init(&mgr->lock);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index c06c132a753c..e7b2cae031b3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -1321,7 +1321,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>  	if (r)
>  		goto out;
>  
> -	r = amdgpu_fill_buffer(&adev->mman.clear_entity, abo, 0, &bo->base._resv,
> +	r = amdgpu_fill_buffer(&adev->mman.clear_entities[0], abo, 0, &bo->base._resv,
>  			       &fence, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>  	if (WARN_ON(r))
>  		goto out;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 7193a341689d..2f305ad32080 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -1891,6 +1891,7 @@ static void amdgpu_ttm_mmio_remap_bo_fini(struct amdgpu_device *adev)
>  int amdgpu_ttm_init(struct amdgpu_device *adev)
>  {
>  	uint64_t gtt_size;
> +	u32 gart_window;

Make that num_gart_windows or reserved_windows or something like that.

>  	int r;
>  
>  	dma_set_max_seg_size(adev->dev, UINT_MAX);
> @@ -1923,7 +1924,7 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
>  	}
>  
>  	/* Change the size here instead of the init above so only lpfn is affected */
> -	amdgpu_ttm_set_buffer_funcs_status(adev, false);
> +	gart_window = amdgpu_ttm_set_buffer_funcs_status(adev, false);
>  #ifdef CONFIG_64BIT
>  #ifdef CONFIG_X86
>  	if (adev->gmc.xgmi.connected_to_cpu)
> @@ -2019,7 +2020,7 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
>  	}
>  
>  	/* Initialize GTT memory pool */
> -	r = amdgpu_gtt_mgr_init(adev, gtt_size);
> +	r = amdgpu_gtt_mgr_init(adev, gtt_size, gart_window);
>  	if (r) {
>  		dev_err(adev->dev, "Failed initializing GTT heap.\n");
>  		return r;
> @@ -2158,16 +2159,22 @@ void amdgpu_ttm_fini(struct amdgpu_device *adev)
>   *
>   * Enable/disable use of buffer functions during suspend/resume. This should
>   * only be called at bootup or when userspace isn't running.
> + *
> + * Returns: the number of reserved GART windows
>   */
> -void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
> +u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  {
>  	struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
>  	uint64_t size;
> -	int r, i;
> +	int r, i, j;
> +	u32 num_clear_entities, windows, w;
> +
> +	num_clear_entities = adev->sdma.num_instances;
> +	windows = adev->gmc.is_app_apu ? 0 : (2 + num_clear_entities);

Huh? That doesn't make much sense.

>  
>  	if (!adev->mman.initialized || amdgpu_in_reset(adev) ||
>  	    adev->mman.buffer_funcs_enabled == enable || adev->gmc.is_app_apu)

This also doesn't make much sense to begin with. Why are the SDMA functions disabled when is_app_apu is true?

Regards,
Christian.

> -		return;
> +		return windows;
>  
>  	if (enable) {
>  		struct amdgpu_ring *ring;
> @@ -2180,19 +2187,9 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  					  1, NULL);
>  		if (r) {
>  			dev_err(adev->dev,
> -				"Failed setting up TTM BO move entity (%d)\n",
> +				"Failed setting up TTM BO eviction entity (%d)\n",
>  				r);
> -			return;
> -		}
> -
> -		r = drm_sched_entity_init(&adev->mman.clear_entity.base,
> -					  DRM_SCHED_PRIORITY_NORMAL, &sched,
> -					  1, NULL);
> -		if (r) {
> -			dev_err(adev->dev,
> -				"Failed setting up TTM BO clear entity (%d)\n",
> -				r);
> -			goto error_free_entity;
> +			return 0;
>  		}
>  
>  		r = drm_sched_entity_init(&adev->mman.move_entity.base,
> @@ -2202,26 +2199,51 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  			dev_err(adev->dev,
>  				"Failed setting up TTM BO move entity (%d)\n",
>  				r);
> -			drm_sched_entity_destroy(&adev->mman.clear_entity.base);
>  			goto error_free_entity;
>  		}
>  
> +		adev->mman.num_clear_entities = num_clear_entities;
> +		adev->mman.clear_entities = kcalloc(num_clear_entities,
> +						    sizeof(struct amdgpu_ttm_buffer_entity),
> +						    GFP_KERNEL);
> +		if (!adev->mman.clear_entities)
> +			goto error_free_entity;
> +
> +		for (i = 0; i < num_clear_entities; i++) {
> +			r = drm_sched_entity_init(&adev->mman.clear_entities[i].base,
> +						  DRM_SCHED_PRIORITY_NORMAL, &sched,
> +						  1, NULL);
> +			if (r) {
> +				for (j = 0; j < i; j++)
> +					drm_sched_entity_destroy(
> +						&adev->mman.clear_entities[j].base);
> +				kfree(adev->mman.clear_entities);
> +				goto error_free_entity;
> +			}
> +		}
> +
>  		/* Statically assign GART windows to each entity. */
> +		w = 0;
>  		mutex_init(&adev->mman.default_entity.gart_window_lock);
> -		adev->mman.move_entity.gart_window_id0 = 0;
> -		adev->mman.move_entity.gart_window_id1 = 1;
> +		adev->mman.move_entity.gart_window_id0 = w++;
> +		adev->mman.move_entity.gart_window_id1 = w++;
>  		mutex_init(&adev->mman.move_entity.gart_window_lock);
> -		/* Clearing entity doesn't use id0 */
> -		adev->mman.clear_entity.gart_window_id1 = 2;
> -		mutex_init(&adev->mman.clear_entity.gart_window_lock);
> +		for (i = 0; i < num_clear_entities; i++) {
> +			/* Clearing entities don't use id0 */
> +			adev->mman.clear_entities[i].gart_window_id1 = w++;
> +			mutex_init(&adev->mman.clear_entities[i].gart_window_lock);
> +		}
> +		WARN_ON(w != windows);
>  	} else {
>  		drm_sched_entity_destroy(&adev->mman.default_entity.base);
> -		drm_sched_entity_destroy(&adev->mman.clear_entity.base);
>  		drm_sched_entity_destroy(&adev->mman.move_entity.base);
> +		for (i = 0; i < num_clear_entities; i++)
> +			drm_sched_entity_destroy(&adev->mman.clear_entities[i].base);
>  		for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
>  			dma_fence_put(man->eviction_fences[i]);
>  			man->eviction_fences[i] = NULL;
>  		}
> +		kfree(adev->mman.clear_entities);
>  	}
>  
>  	/* this just adjusts TTM size idea, which sets lpfn to the correct value */
> @@ -2232,10 +2254,11 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  	man->size = size;
>  	adev->mman.buffer_funcs_enabled = enable;
>  
> -	return;
> +	return windows;
>  
>  error_free_entity:
>  	drm_sched_entity_destroy(&adev->mman.default_entity.base);
> +	return 0;
>  }
>  
>  static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
> @@ -2388,8 +2411,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  
>  	if (!fence)
>  		return -EINVAL;
> -
> -	entity = &adev->mman.clear_entity;
> +	entity = &adev->mman.clear_entities[0];
>  	*fence = dma_fence_get_stub();
>  
>  	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index f4f762be9fdd..797f851a4578 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -39,7 +39,6 @@
>  #define __AMDGPU_PL_NUM	(TTM_PL_PRIV + 6)
>  
>  #define AMDGPU_GTT_MAX_TRANSFER_SIZE	512
> -#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS	3
>  
>  extern const struct attribute_group amdgpu_vram_mgr_attr_group;
>  extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
> @@ -73,8 +72,9 @@ struct amdgpu_mman {
>  	struct mutex				gtt_window_lock;
>  
>  	struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
> -	struct amdgpu_ttm_buffer_entity clear_entity;
>  	struct amdgpu_ttm_buffer_entity move_entity;
> +	struct amdgpu_ttm_buffer_entity *clear_entities;
> +	u32 num_clear_entities;
>  
>  	struct amdgpu_vram_mgr vram_mgr;
>  	struct amdgpu_gtt_mgr gtt_mgr;
> @@ -134,7 +134,7 @@ struct amdgpu_copy_mem {
>  #define AMDGPU_COPY_FLAGS_GET(value, field) \
>  	(((__u32)(value) >> AMDGPU_COPY_FLAGS_##field##_SHIFT) & AMDGPU_COPY_FLAGS_##field##_MASK)
>  
> -int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size);
> +int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size, u32 reserved_windows);
>  void amdgpu_gtt_mgr_fini(struct amdgpu_device *adev);
>  int amdgpu_preempt_mgr_init(struct amdgpu_device *adev);
>  void amdgpu_preempt_mgr_fini(struct amdgpu_device *adev);
> @@ -168,8 +168,8 @@ bool amdgpu_res_cpu_visible(struct amdgpu_device *adev,
>  
>  int amdgpu_ttm_init(struct amdgpu_device *adev);
>  void amdgpu_ttm_fini(struct amdgpu_device *adev);
> -void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
> -					bool enable);
> +u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
> +				       bool enable);
>  int amdgpu_copy_buffer(struct amdgpu_ring *ring,
>  		       struct drm_sched_entity *entity,
>  		       uint64_t src_offset,


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 09/20] drm/amdgpu: pass optional dependency to amdgpu_fill_buffer
  2025-11-13 16:05 ` [PATCH v2 09/20] drm/amdgpu: pass optional dependency to amdgpu_fill_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-17  8:43   ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-17  8:43 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> In case the fill job depends on a previous fence, the caller can
> now pass it to make sure the ordering of the jobs is correct.

I don't think you need that patch any more.

> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c    | 22 ++++++++++++++++------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h    |  1 +
>  3 files changed, 18 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index e7b2cae031b3..be3532134e46 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -1322,7 +1322,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>  		goto out;
>  
>  	r = amdgpu_fill_buffer(&adev->mman.clear_entities[0], abo, 0, &bo->base._resv,
> -			       &fence, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
> +			       &fence, NULL, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>  	if (WARN_ON(r))
>  		goto out;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index e1f0567fd2d5..b13f0993dbf1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -173,6 +173,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
>   * @tmz: if we should setup a TMZ enabled mapping
>   * @size: in number of bytes to map, out number of bytes mapped
>   * @addr: resulting address inside the MC address space
> + * @dep: optional dependency
>   *
>   * Setup one of the GART windows to access a specific piece of memory or return
>   * the physical address for local memory.
> @@ -182,7 +183,8 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>  				 struct ttm_resource *mem,
>  				 struct amdgpu_res_cursor *mm_cur,
>  				 unsigned int window, struct amdgpu_ring *ring,
> -				 bool tmz, uint64_t *size, uint64_t *addr)
> +				 bool tmz, uint64_t *size, uint64_t *addr,
> +				 struct dma_fence *dep)
>  {
>  	struct amdgpu_device *adev = ring->adev;
>  	unsigned int offset, num_pages, num_dw, num_bytes;
> @@ -234,6 +236,9 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>  	if (r)
>  		return r;
>  
> +	if (dep)
> +		drm_sched_job_add_dependency(&job->base, dma_fence_get(dep));
> +
>  	src_addr = num_dw * 4;
>  	src_addr += job->ibs[0].gpu_addr;
>  
> @@ -326,13 +331,15 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  		/* Map src to window 0 and dst to window 1. */
>  		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  src->bo, src->mem, &src_mm,
> -					  entity->gart_window_id0, ring, tmz, &cur_size, &from);
> +					  entity->gart_window_id0, ring, tmz, &cur_size, &from,
> +					  NULL);
>  		if (r)
>  			goto error;
>  
>  		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  dst->bo, dst->mem, &dst_mm,
> -					  entity->gart_window_id1, ring, tmz, &cur_size, &to);
> +					  entity->gart_window_id1, ring, tmz, &cur_size, &to,
> +					  NULL);
>  		if (r)
>  			goto error;
>  
> @@ -415,7 +422,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>  		struct dma_fence *wipe_fence = NULL;
>  
>  		r = amdgpu_fill_buffer(&adev->mman.move_entities[0],
> -				       abo, 0, NULL, &wipe_fence,
> +				       abo, 0, NULL, &wipe_fence, fence,
>  				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>  		if (r) {
>  			goto error;
> @@ -2443,7 +2450,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  
>  		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  &bo->tbo, bo->tbo.resource, &cursor,
> -					  entity->gart_window_id1, ring, false, &size, &addr);
> +					  entity->gart_window_id1, ring, false, &size, &addr,
> +					  NULL);
>  		if (r)
>  			goto err;
>  
> @@ -2469,6 +2477,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  		       uint32_t src_data,
>  		       struct dma_resv *resv,
>  		       struct dma_fence **f,
> +		       struct dma_fence *dependency,
>  		       u64 k_job_id)
>  {
>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> @@ -2496,7 +2505,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  &bo->tbo, bo->tbo.resource, &dst,
>  					  entity->gart_window_id1, ring, false,
> -					  &cur_size, &to);
> +					  &cur_size, &to,
> +					  dependency);
>  		if (r)
>  			goto error;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 9d4891e86675..e8f8165f5bcf 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -186,6 +186,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  		       uint32_t src_data,
>  		       struct dma_resv *resv,
>  		       struct dma_fence **f,
> +		       struct dma_fence *dependency,
>  		       u64 k_job_id);
>  
>  int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);



* Re: [PATCH v2 10/20] drm/admgpu: handle resv dependencies in amdgpu_ttm_map_buffer
  2025-11-13 16:05 ` [PATCH v2 10/20] drm/admgpu: handle resv dependencies in amdgpu_ttm_map_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-17  8:44   ` Christian König
  2025-11-19  8:28     ` Pierre-Eric Pelloux-Prayer
  0 siblings, 1 reply; 53+ messages in thread
From: Christian König @ 2025-11-17  8:44 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> If a resv object is passed, its fences are treated as a dependency
> for the amdgpu_ttm_map_buffer operation.
> 
> This will be used by amdgpu_bo_release_notify through
> amdgpu_fill_buffer.

Why should updating the GART window depend on fences in a resv object?

Regards,
Christian.

> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index b13f0993dbf1..411997db70eb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -184,7 +184,8 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>  				 struct amdgpu_res_cursor *mm_cur,
>  				 unsigned int window, struct amdgpu_ring *ring,
>  				 bool tmz, uint64_t *size, uint64_t *addr,
> -				 struct dma_fence *dep)
> +				 struct dma_fence *dep,
> +				 struct dma_resv *resv)
>  {
>  	struct amdgpu_device *adev = ring->adev;
>  	unsigned int offset, num_pages, num_dw, num_bytes;
> @@ -239,6 +240,10 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>  	if (dep)
>  		drm_sched_job_add_dependency(&job->base, dma_fence_get(dep));
>  
> +	if (resv)
> +		drm_sched_job_add_resv_dependencies(&job->base, resv,
> +						    DMA_RESV_USAGE_BOOKKEEP);
> +
>  	src_addr = num_dw * 4;
>  	src_addr += job->ibs[0].gpu_addr;
>  
> @@ -332,14 +337,14 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  src->bo, src->mem, &src_mm,
>  					  entity->gart_window_id0, ring, tmz, &cur_size, &from,
> -					  NULL);
> +					  NULL, NULL);
>  		if (r)
>  			goto error;
>  
>  		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  dst->bo, dst->mem, &dst_mm,
>  					  entity->gart_window_id1, ring, tmz, &cur_size, &to,
> -					  NULL);
> +					  NULL, NULL);
>  		if (r)
>  			goto error;
>  
> @@ -2451,7 +2456,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  		r = amdgpu_ttm_map_buffer(&entity->base,
>  					  &bo->tbo, bo->tbo.resource, &cursor,
>  					  entity->gart_window_id1, ring, false, &size, &addr,
> -					  NULL);
> +					  NULL, NULL);
>  		if (r)
>  			goto err;
>  
> @@ -2506,7 +2511,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  					  &bo->tbo, bo->tbo.resource, &dst,
>  					  entity->gart_window_id1, ring, false,
>  					  &cur_size, &to,
> -					  dependency);
> +					  dependency,
> +					  resv);
>  		if (r)
>  			goto error;
>  



* Re: [PATCH v2 11/20] drm/amdgpu: round robin through clear_entities in amdgpu_fill_buffer
  2025-11-13 16:05 ` [PATCH v2 11/20] drm/amdgpu: round robin through clear_entities in amdgpu_fill_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-17  8:47   ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-17  8:47 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> This makes clears of different BOs run in parallel. Partial jobs
> clearing a single BO still execute sequentially.
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c    | 9 ++++++++-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h    | 1 +
>  3 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index be3532134e46..33b397107778 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -1321,7 +1321,7 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>  	if (r)
>  		goto out;
>  
> -	r = amdgpu_fill_buffer(&adev->mman.clear_entities[0], abo, 0, &bo->base._resv,
> +	r = amdgpu_fill_buffer(NULL, abo, 0, &bo->base._resv,
>  			       &fence, NULL, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>  	if (WARN_ON(r))
>  		goto out;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 411997db70eb..486c701d0d5b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -2224,6 +2224,7 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  		adev->mman.clear_entities = kcalloc(num_clear_entities,
>  						    sizeof(struct amdgpu_ttm_buffer_entity),
>  						    GFP_KERNEL);
> +		atomic_set(&adev->mman.next_clear_entity, 0);
>  		if (!adev->mman.clear_entities)
>  			goto error_free_entity;
>  
> @@ -2489,7 +2490,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>  	struct dma_fence *fence = NULL;
>  	struct amdgpu_res_cursor dst;
> -	int r;
> +	int r, e;
>  
>  	if (!adev->mman.buffer_funcs_enabled) {
>  		dev_err(adev->dev,
> @@ -2497,6 +2498,12 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  		return -EINVAL;
>  	}
>  
> +	if (entity == NULL) {
> +		e = atomic_inc_return(&adev->mman.next_clear_entity) %
> +				      adev->mman.num_clear_entities;
> +		entity = &adev->mman.clear_entities[e];
> +	}
> +

Oh, that is really ugly.

I think you should have something like amdgpu_ttm_next_clear_entity() which returns the entity pointer in round-robin order.

And then give that as parameter to amdgpu_fill_buffer().

Regards,
Christian.

>  	amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &dst);
>  
>  	mutex_lock(&entity->gart_window_lock);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index e8f8165f5bcf..781b0bdca56c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -73,6 +73,7 @@ struct amdgpu_mman {
>  
>  	struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
>  	struct amdgpu_ttm_buffer_entity *clear_entities;
> +	atomic_t next_clear_entity;
>  	u32 num_clear_entities;
>  	struct amdgpu_ttm_buffer_entity move_entities[TTM_NUM_MOVE_FENCES];
>  	u32 num_move_entities;



* Re: [PATCH v2 12/20] drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences
  2025-11-13 16:05 ` [PATCH v2 12/20] drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences Pierre-Eric Pelloux-Prayer
  2025-11-14 20:57   ` Felix Kuehling
@ 2025-11-17  9:07   ` Christian König
  1 sibling, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-17  9:07 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Felix Kuehling, Harry Wentland, Leo Li,
	Rodrigo Siqueira
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> Use TTM_NUM_MOVE_FENCES as an upper bound on how many fences
> TTM might need to deal with for moves/evictions.
> 
> ---
> v2: removed drm_err calls
> ---
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c                  | 5 ++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c                 | 2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c                | 6 ++----
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c                  | 3 ++-
>  drivers/gpu/drm/amd/amdkfd/kfd_svm.c                    | 3 +--
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c | 6 ++----
>  drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c    | 6 ++----
>  7 files changed, 12 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index ecdfe6cb36cc..0338522761bc 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -916,9 +916,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>  			goto out_free_user_pages;
>  
>  		amdgpu_bo_list_for_each_entry(e, p->bo_list) {
> -			/* One fence for TTM and one for each CS job */
>  			r = drm_exec_prepare_obj(&p->exec, &e->bo->tbo.base,
> -						 1 + p->gang_size);
> +						 TTM_NUM_MOVE_FENCES + p->gang_size);
>  			drm_exec_retry_on_contention(&p->exec);
>  			if (unlikely(r))
>  				goto out_free_user_pages;
> @@ -928,7 +927,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>  
>  		if (p->uf_bo) {
>  			r = drm_exec_prepare_obj(&p->exec, &p->uf_bo->tbo.base,
> -						 1 + p->gang_size);
> +						 TTM_NUM_MOVE_FENCES + p->gang_size);
>  			drm_exec_retry_on_contention(&p->exec);
>  			if (unlikely(r))
>  				goto out_free_user_pages;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index ce073e894584..2fe6899f6344 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -353,7 +353,7 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
>  
>  	drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
>  	drm_exec_until_all_locked(&exec) {
> -		r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 1);
> +		r = drm_exec_prepare_obj(&exec, &bo->tbo.base, TTM_NUM_MOVE_FENCES);
>  		drm_exec_retry_on_contention(&exec);
>  		if (unlikely(r))
>  			goto out_unlock;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
> index 79bad9cbe2ab..b92561eea3da 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
> @@ -326,11 +326,9 @@ static int amdgpu_vkms_prepare_fb(struct drm_plane *plane,
>  		return r;
>  	}
>  
> -	r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
> -	if (r) {
> -		dev_err(adev->dev, "allocating fence slot failed (%d)\n", r);
> +	r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> +	if (r)
>  		goto error_unlock;
> -	}
>  
>  	if (plane->type != DRM_PLANE_TYPE_CURSOR)
>  		domain = amdgpu_display_supported_domains(adev, rbo->flags);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 2f8e83f840a8..aaa44199e9f4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2630,7 +2630,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>  	}
>  
>  	amdgpu_vm_bo_base_init(&vm->root, vm, root_bo);
> -	r = dma_resv_reserve_fences(root_bo->tbo.base.resv, 1);
> +	r = dma_resv_reserve_fences(root_bo->tbo.base.resv,
> +				    TTM_NUM_MOVE_FENCES);
>  	if (r)
>  		goto error_free_root;
>  
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> index ffb7b36e577c..968cef1cbea6 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> @@ -627,9 +627,8 @@ svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
>  		}
>  	}
>  
> -	r = dma_resv_reserve_fences(bo->tbo.base.resv, 1);
> +	r = dma_resv_reserve_fences(bo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
>  	if (r) {
> -		pr_debug("failed %d to reserve bo\n", r);
>  		amdgpu_bo_unreserve(bo);
>  		goto reserve_bo_failed;
>  	}
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> index 56cb866ac6f8..ceb55dd183ed 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> @@ -952,11 +952,9 @@ static int amdgpu_dm_plane_helper_prepare_fb(struct drm_plane *plane,
>  		return r;
>  	}
>  
> -	r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
> -	if (r) {
> -		drm_err(adev_to_drm(adev), "reserving fence slot failed (%d)\n", r);
> +	r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> +	if (r)
>  		goto error_unlock;
> -	}
>  
>  	if (plane->type != DRM_PLANE_TYPE_CURSOR)
>  		domain = amdgpu_display_supported_domains(adev, rbo->flags);
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
> index d9527c05fc87..110f0173eee6 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
> @@ -106,11 +106,9 @@ static int amdgpu_dm_wb_prepare_job(struct drm_writeback_connector *wb_connector
>  		return r;
>  	}
>  
> -	r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
> -	if (r) {
> -		drm_err(adev_to_drm(adev), "reserving fence slot failed (%d)\n", r);
> +	r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> +	if (r)
>  		goto error_unlock;
> -	}
>  
>  	domain = amdgpu_display_supported_domains(adev, rbo->flags);
>  



* Re: [PATCH v2 13/20] drm/amdgpu: use multiple entities in amdgpu_move_blit
  2025-11-13 16:05 ` [PATCH v2 13/20] drm/amdgpu: use multiple entities in amdgpu_move_blit Pierre-Eric Pelloux-Prayer
@ 2025-11-17  9:12   ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-17  9:12 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> Thanks to "drm/ttm: rework pipelined eviction fence handling", ttm
> can deal correctly with moves and evictions being executed from
> different contexts.
> 
> Create several entities and use them in a round-robin fashion.
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 13 ++++++++++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h |  1 +
>  2 files changed, 11 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 486c701d0d5b..6c333dba7a35 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -401,6 +401,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>  {
>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
>  	struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
> +	struct amdgpu_ttm_buffer_entity *entity;
>  	struct amdgpu_copy_mem src, dst;
>  	struct dma_fence *fence = NULL;
>  	int r;
> @@ -412,8 +413,12 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>  	src.offset = 0;
>  	dst.offset = 0;
>  
> +	int e = atomic_inc_return(&adev->mman.next_move_entity) %
> +				  adev->mman.num_move_entities;
> +	entity = &adev->mman.move_entities[e];

Similar to clears, this could be a separate function, but not a must-have.

Reviewed-by: Christian König <christian.koenig@amd.com> either way.

Regards,
Christian.

> +
>  	r = amdgpu_ttm_copy_mem_to_mem(adev,
> -				       &adev->mman.move_entities[0],
> +				       entity,
>  				       &src, &dst,
>  				       new_mem->size,
>  				       amdgpu_bo_encrypted(abo),
> @@ -426,7 +431,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>  	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
>  		struct dma_fence *wipe_fence = NULL;
>  
> -		r = amdgpu_fill_buffer(&adev->mman.move_entities[0],
> +		r = amdgpu_fill_buffer(entity,
>  				       abo, 0, NULL, &wipe_fence, fence,
>  				       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>  		if (r) {
> @@ -2179,7 +2184,8 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  	struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
>  	uint64_t size;
>  	int r, i, j;
> -	u32 num_clear_entities, num_move_entities, windows, w;
> +	u32 num_clear_entities, num_move_entities;
> +	u32 windows, w;
>  
>  	num_clear_entities = adev->sdma.num_instances;
>  	num_move_entities = MIN(adev->sdma.num_instances, TTM_NUM_MOVE_FENCES);
> @@ -2205,6 +2211,7 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  		}
>  
>  		adev->mman.num_move_entities = num_move_entities;
> +		atomic_set(&adev->mman.next_move_entity, 0);
>  		for (i = 0; i < num_move_entities; i++) {
>  			r = drm_sched_entity_init(&adev->mman.move_entities[i].base,
>  						  DRM_SCHED_PRIORITY_NORMAL, &sched,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 781b0bdca56c..4844f001f590 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -76,6 +76,7 @@ struct amdgpu_mman {
>  	atomic_t next_clear_entity;
>  	u32 num_clear_entities;
>  	struct amdgpu_ttm_buffer_entity move_entities[TTM_NUM_MOVE_FENCES];
> +	atomic_t next_move_entity;
>  	u32 num_move_entities;
>  
>  	struct amdgpu_vram_mgr vram_mgr;



* Re: [PATCH v2 14/20] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds
  2025-11-13 16:05 ` [PATCH v2 14/20] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds Pierre-Eric Pelloux-Prayer
@ 2025-11-17  9:30   ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-17  9:30 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> All sdma versions used the same logic, so add a helper and move the
> common code to a single place.
> 
> ---
> v2: pass amdgpu_vm_pte_funcs as well
> ---
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  2 ++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   | 17 +++++++++++++++++
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c    |  9 +--------
>  drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c   |  9 +--------
>  drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c   |  9 +--------
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c   | 13 +------------
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 13 +------------
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c   | 12 ++----------
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c   | 12 ++----------
>  drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c   |  9 +--------
>  drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c   |  9 +--------
>  drivers/gpu/drm/amd/amdgpu/si_dma.c      |  9 +--------
>  12 files changed, 31 insertions(+), 92 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 50079209c472..3fab3dc9f3e4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1613,6 +1613,8 @@ struct dma_fence *amdgpu_device_enforce_isolation(struct amdgpu_device *adev,
>  bool amdgpu_device_has_display_hardware(struct amdgpu_device *adev);
>  ssize_t amdgpu_get_soft_full_reset_mask(struct amdgpu_ring *ring);
>  ssize_t amdgpu_show_reset_mask(char *buf, uint32_t supported_reset);
> +void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
> +				   const struct amdgpu_vm_pte_funcs *vm_pte_funcs);
>  
>  /* atpx handler */
>  #if defined(CONFIG_VGA_SWITCHEROO)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index aaa44199e9f4..3d29c3642d1a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -3210,3 +3210,20 @@ void amdgpu_vm_print_task_info(struct amdgpu_device *adev,
>  		task_info->process_name, task_info->tgid,
>  		task_info->task.comm, task_info->task.pid);
>  }
> +
> +void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
> +				   const struct amdgpu_vm_pte_funcs *vm_pte_funcs)
> +{
> +	struct drm_gpu_scheduler *sched;
> +	int i;
> +
> +	for (i = 0; i < adev->sdma.num_instances; i++) {
> +		if (adev->sdma.has_page_queue)
> +			sched = &adev->sdma.instance[i].page.sched;
> +		else
> +			sched = &adev->sdma.instance[i].ring.sched;
> +		adev->vm_manager.vm_pte_scheds[i] = sched;
> +	}
> +	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> +	adev->vm_manager.vm_pte_funcs = vm_pte_funcs;
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> index 9e8715b4739d..5fe162f52c92 100644
> --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> @@ -1347,14 +1347,7 @@ static const struct amdgpu_vm_pte_funcs cik_sdma_vm_pte_funcs = {
>  
>  static void cik_sdma_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {
> -	unsigned i;
> -
> -	adev->vm_manager.vm_pte_funcs = &cik_sdma_vm_pte_funcs;
> -	for (i = 0; i < adev->sdma.num_instances; i++) {
> -		adev->vm_manager.vm_pte_scheds[i] =
> -			&adev->sdma.instance[i].ring.sched;
> -	}
> -	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> +	amdgpu_sdma_set_vm_pte_scheds(adev, &cik_sdma_vm_pte_funcs);
>  }
>  
>  const struct amdgpu_ip_block_version cik_sdma_ip_block =
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
> index 92ce580647cd..63636643db3d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
> @@ -1242,14 +1242,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v2_4_vm_pte_funcs = {
>  
>  static void sdma_v2_4_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {

I think we can also get rid of all those sdma_v*_set_vm_pte_funcs() functions, which are now just single-line calls.

Regards,
Christian.

> -	unsigned i;
> -
> -	adev->vm_manager.vm_pte_funcs = &sdma_v2_4_vm_pte_funcs;
> -	for (i = 0; i < adev->sdma.num_instances; i++) {
> -		adev->vm_manager.vm_pte_scheds[i] =
> -			&adev->sdma.instance[i].ring.sched;
> -	}
> -	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> +	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v2_4_vm_pte_funcs);
>  }
>  
>  const struct amdgpu_ip_block_version sdma_v2_4_ip_block = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> index 1c076bd1cf73..0153626b5df2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> @@ -1684,14 +1684,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v3_0_vm_pte_funcs = {
>  
>  static void sdma_v3_0_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {
> -	unsigned i;
> -
> -	adev->vm_manager.vm_pte_funcs = &sdma_v3_0_vm_pte_funcs;
> -	for (i = 0; i < adev->sdma.num_instances; i++) {
> -		adev->vm_manager.vm_pte_scheds[i] =
> -			 &adev->sdma.instance[i].ring.sched;
> -	}
> -	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> +	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v3_0_vm_pte_funcs);
>  }
>  
>  const struct amdgpu_ip_block_version sdma_v3_0_ip_block =
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> index f38004e6064e..96a67b30854c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> @@ -2625,18 +2625,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v4_0_vm_pte_funcs = {
>  
>  static void sdma_v4_0_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {
> -	struct drm_gpu_scheduler *sched;
> -	unsigned i;
> -
> -	adev->vm_manager.vm_pte_funcs = &sdma_v4_0_vm_pte_funcs;
> -	for (i = 0; i < adev->sdma.num_instances; i++) {
> -		if (adev->sdma.has_page_queue)
> -			sched = &adev->sdma.instance[i].page.sched;
> -		else
> -			sched = &adev->sdma.instance[i].ring.sched;
> -		adev->vm_manager.vm_pte_scheds[i] = sched;
> -	}
> -	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> +	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v4_0_vm_pte_funcs);
>  }
>  
>  static void sdma_v4_0_get_ras_error_count(uint32_t value,
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> index 36b1ca73c2ed..04dc8a8f4d66 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> @@ -2326,18 +2326,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v4_4_2_vm_pte_funcs = {
>  
>  static void sdma_v4_4_2_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {
> -	struct drm_gpu_scheduler *sched;
> -	unsigned i;
> -
> -	adev->vm_manager.vm_pte_funcs = &sdma_v4_4_2_vm_pte_funcs;
> -	for (i = 0; i < adev->sdma.num_instances; i++) {
> -		if (adev->sdma.has_page_queue)
> -			sched = &adev->sdma.instance[i].page.sched;
> -		else
> -			sched = &adev->sdma.instance[i].ring.sched;
> -		adev->vm_manager.vm_pte_scheds[i] = sched;
> -	}
> -	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> +	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v4_4_2_vm_pte_funcs);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> index 7dc67a22a7a0..19c717f2c602 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> @@ -2081,16 +2081,8 @@ static const struct amdgpu_vm_pte_funcs sdma_v5_0_vm_pte_funcs = {
>  
>  static void sdma_v5_0_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {
> -	unsigned i;
> -
> -	if (adev->vm_manager.vm_pte_funcs == NULL) {
> -		adev->vm_manager.vm_pte_funcs = &sdma_v5_0_vm_pte_funcs;
> -		for (i = 0; i < adev->sdma.num_instances; i++) {
> -			adev->vm_manager.vm_pte_scheds[i] =
> -				&adev->sdma.instance[i].ring.sched;
> -		}
> -		adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> -	}
> +	if (adev->vm_manager.vm_pte_funcs == NULL)
> +		amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v5_0_vm_pte_funcs);
>  }
>  
>  const struct amdgpu_ip_block_version sdma_v5_0_ip_block = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> index 3bd44c24f692..7a07b8f4e86d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> @@ -2091,16 +2091,8 @@ static const struct amdgpu_vm_pte_funcs sdma_v5_2_vm_pte_funcs = {
>  
>  static void sdma_v5_2_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {
> -	unsigned i;
> -
> -	if (adev->vm_manager.vm_pte_funcs == NULL) {
> -		adev->vm_manager.vm_pte_funcs = &sdma_v5_2_vm_pte_funcs;
> -		for (i = 0; i < adev->sdma.num_instances; i++) {
> -			adev->vm_manager.vm_pte_scheds[i] =
> -				&adev->sdma.instance[i].ring.sched;
> -		}
> -		adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> -	}
> +	if (adev->vm_manager.vm_pte_funcs == NULL)
> +		amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v5_2_vm_pte_funcs);
>  }
>  
>  const struct amdgpu_ip_block_version sdma_v5_2_ip_block = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> index db6e41967f12..8f8228c7adee 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> @@ -1897,14 +1897,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v6_0_vm_pte_funcs = {
>  
>  static void sdma_v6_0_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {
> -	unsigned i;
> -
> -	adev->vm_manager.vm_pte_funcs = &sdma_v6_0_vm_pte_funcs;
> -	for (i = 0; i < adev->sdma.num_instances; i++) {
> -		adev->vm_manager.vm_pte_scheds[i] =
> -			&adev->sdma.instance[i].ring.sched;
> -	}
> -	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> +	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v6_0_vm_pte_funcs);
>  }
>  
>  const struct amdgpu_ip_block_version sdma_v6_0_ip_block = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> index 326ecc8d37d2..cf412d8fb0ed 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> @@ -1839,14 +1839,7 @@ static const struct amdgpu_vm_pte_funcs sdma_v7_0_vm_pte_funcs = {
>  
>  static void sdma_v7_0_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {
> -	unsigned i;
> -
> -	adev->vm_manager.vm_pte_funcs = &sdma_v7_0_vm_pte_funcs;
> -	for (i = 0; i < adev->sdma.num_instances; i++) {
> -		adev->vm_manager.vm_pte_scheds[i] =
> -			&adev->sdma.instance[i].ring.sched;
> -	}
> -	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> +	amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v7_0_vm_pte_funcs);
>  }
>  
>  const struct amdgpu_ip_block_version sdma_v7_0_ip_block = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c b/drivers/gpu/drm/amd/amdgpu/si_dma.c
> index 7f18e4875287..863e00086c30 100644
> --- a/drivers/gpu/drm/amd/amdgpu/si_dma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c
> @@ -840,14 +840,7 @@ static const struct amdgpu_vm_pte_funcs si_dma_vm_pte_funcs = {
>  
>  static void si_dma_set_vm_pte_funcs(struct amdgpu_device *adev)
>  {
> -	unsigned i;
> -
> -	adev->vm_manager.vm_pte_funcs = &si_dma_vm_pte_funcs;
> -	for (i = 0; i < adev->sdma.num_instances; i++) {
> -		adev->vm_manager.vm_pte_scheds[i] =
> -			&adev->sdma.instance[i].ring.sched;
> -	}
> -	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
> +	amdgpu_sdma_set_vm_pte_scheds(adev, &si_dma_vm_pte_funcs);
>  }
>  
>  const struct amdgpu_ip_block_version si_dma_ip_block =


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions
  2025-11-14 14:41     ` Pierre-Eric Pelloux-Prayer
@ 2025-11-17  9:41       ` Pierre-Eric Pelloux-Prayer
  0 siblings, 0 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-17  9:41 UTC (permalink / raw)
  To: Christian König, Pierre-Eric Pelloux-Prayer, Alex Deucher,
	David Airlie, Simona Vetter, Felix Kuehling, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig



On 14/11/2025 15:41, Pierre-Eric Pelloux-Prayer wrote:
> 
> 
> On 14/11/2025 14:07, Christian König wrote:
>> On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
>>> This way the caller can select the one it wants to use.
>>>
>>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  3 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 75 +++++++++++--------
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       | 16 ++--
>>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
>>>   5 files changed, 60 insertions(+), 41 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>>> index 02c2479a8840..b59040a8771f 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>>> @@ -38,7 +38,8 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
>>>       stime = ktime_get();
>>>       for (i = 0; i < n; i++) {
>>>           struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>> -        r = amdgpu_copy_buffer(ring, saddr, daddr, size, NULL, &fence,
>>> +        r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
>>> +                       saddr, daddr, size, NULL, &fence,
>>>                          false, 0);
>>>           if (r)
>>>               goto exit_do_move;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> index e08f58de4b17..c06c132a753c 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>>> @@ -1321,8 +1321,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>>>       if (r)
>>>           goto out;
>>> -    r = amdgpu_fill_buffer(abo, 0, &bo->base._resv, &fence, true,
>>> -                   AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>>> +    r = amdgpu_fill_buffer(&adev->mman.clear_entity, abo, 0, &bo->base._resv,
>>> +                   &fence, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>>>       if (WARN_ON(r))
>>>           goto out;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> index 42d448cd6a6d..c8d59ca2b3bd 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> @@ -164,6 +164,7 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
>>>   /**
>>>    * amdgpu_ttm_map_buffer - Map memory into the GART windows
>>> + * @entity: entity to run the window setup job
>>>    * @bo: buffer object to map
>>>    * @mem: memory object to map
>>>    * @mm_cur: range to map
>>> @@ -176,7 +177,8 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
>>>    * Setup one of the GART windows to access a specific piece of memory or return
>>>    * the physical address for local memory.
>>>    */
>>> -static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>> +static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>>> +                 struct ttm_buffer_object *bo,
>>
>>
>> Probably better to split this patch into multiple patches.
>>
>> One which changes amdgpu_ttm_map_buffer() and then another one or two for the 
>> higher level copy_buffer and fill_buffer functions.
> 
> OK.
> 
>>
>>>                    struct ttm_resource *mem,
>>>                    struct amdgpu_res_cursor *mm_cur,
>>>                    unsigned int window, struct amdgpu_ring *ring,
>>> @@ -224,7 +226,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>>       num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>>>       num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
>>> -    r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
>>> +    r = amdgpu_job_alloc_with_ib(adev, entity,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>>                        num_dw * 4 + num_bytes,
>>>                        AMDGPU_IB_POOL_DELAYED, &job,
>>> @@ -274,6 +276,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>>   /**
>>>    * amdgpu_ttm_copy_mem_to_mem - Helper function for copy
>>>    * @adev: amdgpu device
>>> + * @entity: entity to run the jobs
>>>    * @src: buffer/address where to read from
>>>    * @dst: buffer/address where to write to
>>>    * @size: number of bytes to copy
>>> @@ -288,6 +291,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
>>>    */
>>>   __attribute__((nonnull))
>>>   static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>> +                      struct drm_sched_entity *entity,
>>>                         const struct amdgpu_copy_mem *src,
>>>                         const struct amdgpu_copy_mem *dst,
>>>                         uint64_t size, bool tmz,
>>> @@ -320,12 +324,14 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>>           cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
>>>           /* Map src to window 0 and dst to window 1. */
>>> -        r = amdgpu_ttm_map_buffer(src->bo, src->mem, &src_mm,
>>> +        r = amdgpu_ttm_map_buffer(entity,
>>> +                      src->bo, src->mem, &src_mm,
>>>                         0, ring, tmz, &cur_size, &from);
>>>           if (r)
>>>               goto error;
>>> -        r = amdgpu_ttm_map_buffer(dst->bo, dst->mem, &dst_mm,
>>> +        r = amdgpu_ttm_map_buffer(entity,
>>> +                      dst->bo, dst->mem, &dst_mm,
>>>                         1, ring, tmz, &cur_size, &to);
>>>           if (r)
>>>               goto error;
>>> @@ -353,7 +359,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>>                                    write_compress_disable));
>>>           }
>>> -        r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
>>> +        r = amdgpu_copy_buffer(ring, entity, from, to, cur_size, resv,
>>>                          &next, true, copy_flags);
>>>           if (r)
>>>               goto error;
>>> @@ -394,7 +400,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>>>       src.offset = 0;
>>>       dst.offset = 0;
>>> -    r = amdgpu_ttm_copy_mem_to_mem(adev, &src, &dst,
>>> +    r = amdgpu_ttm_copy_mem_to_mem(adev,
>>> +                       &adev->mman.move_entity.base,
>>> +                       &src, &dst,
>>>                          new_mem->size,
>>>                          amdgpu_bo_encrypted(abo),
>>>                          bo->base.resv, &fence);
>>> @@ -406,8 +414,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>>>           (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
>>>           struct dma_fence *wipe_fence = NULL;
>>> -        r = amdgpu_fill_buffer(abo, 0, NULL, &wipe_fence,
>>> -                       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>>> +        r = amdgpu_fill_buffer(&adev->mman.move_entity,
>>> +                       abo, 0, NULL, &wipe_fence,
>>> +                       AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>>>           if (r) {
>>>               goto error;
>>>           } else if (wipe_fence) {
>>> @@ -2223,16 +2232,15 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>>   }
>>>   static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>> +                  struct drm_sched_entity *entity,
>>>                     unsigned int num_dw,
>>>                     struct dma_resv *resv,
>>>                     bool vm_needs_flush,
>>>                     struct amdgpu_job **job,
>>> -                  bool delayed, u64 k_job_id)
>>> +                  u64 k_job_id)
>>>   {
>>>       enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
>>>       int r;
>>> -    struct drm_sched_entity *entity = delayed ? &adev->mman.clear_entity.base :
>>> -                            &adev->mman.move_entity.base;
>>>       r = amdgpu_job_alloc_with_ib(adev, entity,
>>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>>                        num_dw * 4, pool, job, k_job_id);
>>> @@ -2252,7 +2260,9 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>>                              DMA_RESV_USAGE_BOOKKEEP);
>>>   }
>>> -int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>> +int amdgpu_copy_buffer(struct amdgpu_ring *ring,
>>> +               struct drm_sched_entity *entity,
>>> +               uint64_t src_offset,
>>>                  uint64_t dst_offset, uint32_t byte_count,
>>>                  struct dma_resv *resv,
>>>                  struct dma_fence **fence,
>>> @@ -2274,8 +2284,8 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>>       max_bytes = adev->mman.buffer_funcs->copy_max_bytes;
>>>       num_loops = DIV_ROUND_UP(byte_count, max_bytes);
>>>       num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
>>> -    r = amdgpu_ttm_prepare_job(adev, num_dw,
>>> -                   resv, vm_needs_flush, &job, false,
>>> +    r = amdgpu_ttm_prepare_job(adev, entity, num_dw,
>>> +                   resv, vm_needs_flush, &job,
>>>                      AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>>>       if (r)
>>>           return r;
>>> @@ -2304,11 +2314,13 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>>       return r;
>>>   }
>>> -static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>>> +static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
>>> +                   struct drm_sched_entity *entity,
>>> +                   uint32_t src_data,
>>>                      uint64_t dst_addr, uint32_t byte_count,
>>>                      struct dma_resv *resv,
>>>                      struct dma_fence **fence,
>>> -                   bool vm_needs_flush, bool delayed,
>>> +                   bool vm_needs_flush,
>>>                      u64 k_job_id)
>>>   {
>>>       struct amdgpu_device *adev = ring->adev;
>>> @@ -2321,8 +2333,8 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring, uint32_t src_data,
>>>       max_bytes = adev->mman.buffer_funcs->fill_max_bytes;
>>>       num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
>>>       num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
>>> -    r = amdgpu_ttm_prepare_job(adev, num_dw, resv, vm_needs_flush,
>>> -                   &job, delayed, k_job_id);
>>> +    r = amdgpu_ttm_prepare_job(adev, entity, num_dw, resv,
>>> +                   vm_needs_flush, &job, k_job_id);
>>>       if (r)
>>>           return r;
>>> @@ -2386,13 +2398,14 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>>           /* Never clear more than 256MiB at once to avoid timeouts */
>>>           size = min(cursor.size, 256ULL << 20);
>>> -        r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &cursor,
>>> +        r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity.base,
>>> +                      &bo->tbo, bo->tbo.resource, &cursor,
>>>                         1, ring, false, &size, &addr);
>>>           if (r)
>>>               goto err;
>>> -        r = amdgpu_ttm_fill_mem(ring, 0, addr, size, resv,
>>> -                    &next, true, true,
>>> +        r = amdgpu_ttm_fill_mem(ring, &adev->mman.clear_entity.base, 0, addr, size, resv,
>>> +                    &next, true,
>>>                       AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>>>           if (r)
>>>               goto err;
>>> @@ -2408,12 +2421,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>>       return r;
>>>   }
>>> -int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>> -            uint32_t src_data,
>>> -            struct dma_resv *resv,
>>> -            struct dma_fence **f,
>>> -            bool delayed,
>>> -            u64 k_job_id)
>>> +int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>>> +               struct amdgpu_bo *bo,
>>> +               uint32_t src_data,
>>> +               struct dma_resv *resv,
>>> +               struct dma_fence **f,
>>> +               u64 k_job_id)
>>>   {
>>>       struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>>       struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>> @@ -2437,13 +2450,15 @@ int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>>           /* Never fill more than 256MiB at once to avoid timeouts */
>>>           cur_size = min(dst.size, 256ULL << 20);
>>> -        r = amdgpu_ttm_map_buffer(&bo->tbo, bo->tbo.resource, &dst,
>>> +        r = amdgpu_ttm_map_buffer(&entity->base,
>>> +                      &bo->tbo, bo->tbo.resource, &dst,
>>>                         1, ring, false, &cur_size, &to);
>>>           if (r)
>>>               goto error;
>>> -        r = amdgpu_ttm_fill_mem(ring, src_data, to, cur_size, resv,
>>> -                    &next, true, delayed, k_job_id);
>>> +        r = amdgpu_ttm_fill_mem(ring, &entity->base,
>>> +                    src_data, to, cur_size, resv,
>>> +                    &next, true, k_job_id);
>>>           if (r)
>>>               goto error;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>>> index d2295d6c2b67..e1655f86a016 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>>> @@ -167,7 +167,9 @@ int amdgpu_ttm_init(struct amdgpu_device *adev);
>>>   void amdgpu_ttm_fini(struct amdgpu_device *adev);
>>>   void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev,
>>>                       bool enable);
>>> -int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>> +int amdgpu_copy_buffer(struct amdgpu_ring *ring,
>>> +               struct drm_sched_entity *entity,
>>
>> If I'm not completely mistaken you should be able to drop the ring argument 
>> since that can be determined from the entity.
> 
> OK will do.
> 

AFAIU the only way to get the ring from the entity is to get it from the 
drm_gpu_scheduler pointer. This would require adding a new function:

struct drm_gpu_scheduler *
drm_sched_entity_get_scheduler(struct drm_sched_entity *entity)
{
	struct drm_gpu_scheduler *sched = NULL;

	spin_lock(&entity->lock);
	if (entity->rq)
		sched = entity->rq->sched;
	spin_unlock(&entity->lock);

	return sched;
}

Alternatively, I can access the ring from the buffer_funcs_ring / 
buffer_funcs_sched stored in amdgpu_mman.

What do you think?


Thanks,
Pierre-Eric

> 
> 
>>
>> Apart from that looks rather good to me.
>>
>> Regards,
>> Christian.
>>
>>> +               uint64_t src_offset,
>>>                  uint64_t dst_offset, uint32_t byte_count,
>>>                  struct dma_resv *resv,
>>>                  struct dma_fence **fence,
>>> @@ -175,12 +177,12 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring, uint64_t src_offset,
>>>   int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>>                   struct dma_resv *resv,
>>>                   struct dma_fence **fence);
>>> -int amdgpu_fill_buffer(struct amdgpu_bo *bo,
>>> -            uint32_t src_data,
>>> -            struct dma_resv *resv,
>>> -            struct dma_fence **fence,
>>> -            bool delayed,
>>> -            u64 k_job_id);
>>> +int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>>> +               struct amdgpu_bo *bo,
>>> +               uint32_t src_data,
>>> +               struct dma_resv *resv,
>>> +               struct dma_fence **f,
>>> +               u64 k_job_id);
>>>   int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
>>>   void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);
>>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>> index d74ff6e90590..09756132fa1b 100644
>>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>>> @@ -157,7 +157,8 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>>>               goto out_unlock;
>>>           }
>>> -        r = amdgpu_copy_buffer(ring, gart_s, gart_d, size * PAGE_SIZE,
>>> +        r = amdgpu_copy_buffer(ring, &entity->base,
>>> +                       gart_s, gart_d, size * PAGE_SIZE,
>>>                          NULL, &next, true, 0);
>>>           if (r) {
>>>               dev_err(adev->dev, "fail %d to copy memory\n", r);


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman
  2025-11-13 16:05 ` [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman Pierre-Eric Pelloux-Prayer
  2025-11-14 21:23   ` Felix Kuehling
@ 2025-11-17  9:46   ` Christian König
  2025-11-19  9:34     ` Pierre-Eric Pelloux-Prayer
  1 sibling, 1 reply; 53+ messages in thread
From: Christian König @ 2025-11-17  9:46 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Felix Kuehling
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> This will allow the use of all of them for clear/fill buffer
> operations.
> Since drm_sched_entity_init requires a scheduler array, we
> store schedulers rather than rings. For the few places that need
> access to a ring, we can get it from the sched using container_of.
> 
> Since the code is the same for all sdma versions, add a new
> helper amdgpu_sdma_set_buffer_funcs_scheds to set buffer_funcs_scheds
> based on the number of sdma instances.
> 
> Note: the new sched array is identical to the amdgpu_vm_manager one.
> These 2 could be merged.

I realized a bit after we discussed it that this isn't true.

We need the two arrays separated for a Navi 1x workaround to work correctly.

Anyway, that doesn't affect reviewing this patch here.

> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  2 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  4 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  8 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c       |  4 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 41 +++++++++++++++----
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  3 +-
>  drivers/gpu/drm/amd/amdgpu/cik_sdma.c         |  3 +-
>  drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c        |  3 +-
>  drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c        |  3 +-
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c        |  6 +--
>  drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c      |  6 +--
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c        |  6 +--
>  drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c        |  6 +--
>  drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c        |  3 +-
>  drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c        |  3 +-
>  drivers/gpu/drm/amd/amdgpu/si_dma.c           |  3 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
>  17 files changed, 62 insertions(+), 45 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 3fab3dc9f3e4..05c13fb0e6bf 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -1615,6 +1615,8 @@ ssize_t amdgpu_get_soft_full_reset_mask(struct amdgpu_ring *ring);
>  ssize_t amdgpu_show_reset_mask(char *buf, uint32_t supported_reset);
>  void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
>  				   const struct amdgpu_vm_pte_funcs *vm_pte_funcs);
> +void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
> +					 const struct amdgpu_buffer_funcs *buffer_funcs);
>  
>  /* atpx handler */
>  #if defined(CONFIG_VGA_SWITCHEROO)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> index b59040a8771f..9ea927e07a77 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
> @@ -32,12 +32,14 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
>  				    uint64_t saddr, uint64_t daddr, int n, s64 *time_ms)
>  {
>  	ktime_t stime, etime;
> +	struct amdgpu_ring *ring;
>  	struct dma_fence *fence;
>  	int i, r;
>  
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);

We have the to_amdgpu_ring() macro for that.

> +
>  	stime = ktime_get();
>  	for (i = 0; i < n; i++) {
> -		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>  		r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
>  				       saddr, daddr, size, NULL, &fence,
>  				       false, 0);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index b92234d63562..1927d940fbca 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -3303,7 +3303,7 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
>  	if (r)
>  		goto init_failed;
>  
> -	if (adev->mman.buffer_funcs_ring->sched.ready)
> +	if (adev->mman.buffer_funcs_scheds[0]->ready)
>  		amdgpu_ttm_set_buffer_funcs_status(adev, true);
>  
>  	/* Don't init kfd if whole hive need to be reset during init */
> @@ -4143,7 +4143,7 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
>  
>  	r = amdgpu_device_ip_resume_phase2(adev);
>  
> -	if (adev->mman.buffer_funcs_ring->sched.ready)
> +	if (adev->mman.buffer_funcs_scheds[0]->ready)

We should probably drop that check here and move this into amdgpu_ttm_set_buffer_funcs_status().

>  		amdgpu_ttm_set_buffer_funcs_status(adev, true);
>  
>  	if (r)
> @@ -4493,7 +4493,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>  	adev->num_rings = 0;
>  	RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>  	adev->mman.buffer_funcs = NULL;
> -	adev->mman.buffer_funcs_ring = NULL;
> +	adev->mman.num_buffer_funcs_scheds = 0;
>  	adev->vm_manager.vm_pte_funcs = NULL;
>  	adev->vm_manager.vm_pte_num_scheds = 0;
>  	adev->gmc.gmc_funcs = NULL;
> @@ -5965,7 +5965,7 @@ int amdgpu_device_reinit_after_reset(struct amdgpu_reset_context *reset_context)
>  				if (r)
>  					goto out;
>  
> -				if (tmp_adev->mman.buffer_funcs_ring->sched.ready)
> +				if (tmp_adev->mman.buffer_funcs_scheds[0]->ready)
>  					amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
>  
>  				r = amdgpu_device_ip_resume_phase3(tmp_adev);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> index 2713dd51ab9a..4433d8620129 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
> @@ -651,12 +651,14 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
>  void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>  			      uint32_t vmhub, uint32_t flush_type)
>  {
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ring *ring;
>  	struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
>  	struct dma_fence *fence;
>  	struct amdgpu_job *job;
>  	int r, i;
>  
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> +
>  	if (!hub->sdma_invalidation_workaround || vmid ||
>  	    !adev->mman.buffer_funcs_enabled || !adev->ib_pool_ready ||
>  	    !ring->sched.ready) {
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 6c333dba7a35..11fec0fa4c11 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -308,7 +308,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  				      struct dma_resv *resv,
>  				      struct dma_fence **f)
>  {
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ring *ring;
>  	struct amdgpu_res_cursor src_mm, dst_mm;
>  	struct dma_fence *fence = NULL;
>  	int r = 0;
> @@ -321,6 +321,8 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>  		return -EINVAL;
>  	}
>  
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> +
>  	amdgpu_res_first(src->mem, src->offset, size, &src_mm);
>  	amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
>  
> @@ -1493,6 +1495,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>  	struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
>  	struct amdgpu_device *adev = amdgpu_ttm_adev(abo->tbo.bdev);
>  	struct amdgpu_res_cursor src_mm;
> +	struct amdgpu_ring *ring;
>  	struct amdgpu_job *job;
>  	struct dma_fence *fence;
>  	uint64_t src_addr, dst_addr;
> @@ -1530,7 +1533,8 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>  	amdgpu_emit_copy_buffer(adev, &job->ibs[0], src_addr, dst_addr,
>  				PAGE_SIZE, 0);
>  
> -	amdgpu_ring_pad_ib(adev->mman.buffer_funcs_ring, &job->ibs[0]);
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> +	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>  	WARN_ON(job->ibs[0].length_dw > num_dw);
>  
>  	fence = amdgpu_job_submit(job);
> @@ -2196,11 +2200,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  		return windows;
>  
>  	if (enable) {
> -		struct amdgpu_ring *ring;
>  		struct drm_gpu_scheduler *sched;
>  
> -		ring = adev->mman.buffer_funcs_ring;
> -		sched = &ring->sched;
> +		sched = adev->mman.buffer_funcs_scheds[0];
>  		r = drm_sched_entity_init(&adev->mman.default_entity.base,
>  					  DRM_SCHED_PRIORITY_KERNEL, &sched,
>  					  1, NULL);
> @@ -2432,7 +2434,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  			    struct dma_fence **fence)
>  {
>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ring *ring;
>  	struct amdgpu_ttm_buffer_entity *entity;
>  	struct amdgpu_res_cursor cursor;
>  	u64 addr;
> @@ -2443,6 +2445,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>  
>  	if (!fence)
>  		return -EINVAL;
> +
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>  	entity = &adev->mman.clear_entities[0];
>  	*fence = dma_fence_get_stub();
>  
> @@ -2494,9 +2498,9 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  		       u64 k_job_id)
>  {
>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>  	struct dma_fence *fence = NULL;
>  	struct amdgpu_res_cursor dst;
> +	struct amdgpu_ring *ring;
>  	int r, e;
>  
>  	if (!adev->mman.buffer_funcs_enabled) {
> @@ -2505,6 +2509,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  		return -EINVAL;
>  	}
>  
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> +
>  	if (entity == NULL) {
>  		e = atomic_inc_return(&adev->mman.next_clear_entity) %
>  				      adev->mman.num_clear_entities;
> @@ -2579,6 +2585,27 @@ int amdgpu_ttm_evict_resources(struct amdgpu_device *adev, int mem_type)
>  	return ttm_resource_manager_evict_all(&adev->mman.bdev, man);
>  }
>  
> +void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
> +					 const struct amdgpu_buffer_funcs *buffer_funcs)
> +{
> +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
> +	struct drm_gpu_scheduler *sched;
> +	int i;
> +
> +	adev->mman.buffer_funcs = buffer_funcs;
> +
> +	for (i = 0; i < adev->sdma.num_instances; i++) {
> +		if (adev->sdma.has_page_queue)
> +			sched = &adev->sdma.instance[i].page.sched;
> +		else
> +			sched = &adev->sdma.instance[i].ring.sched;
> +		adev->mman.buffer_funcs_scheds[i] = sched;
> +	}
> +
> +	adev->mman.num_buffer_funcs_scheds = hub->sdma_invalidation_workaround ?
> +		1 : adev->sdma.num_instances;
> +}
> +

It would probably be better to split this in two: one patch making all SDMA versions switch to amdgpu_sdma_set_buffer_funcs_scheds(), and a second patch changing amdgpu_sdma_set_buffer_funcs_scheds() to use more than one DMA engine.

Regards,
Christian.

>  #if defined(CONFIG_DEBUG_FS)
>  
>  static int amdgpu_ttm_page_pool_show(struct seq_file *m, void *unused)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 4844f001f590..63c3e2466708 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -66,7 +66,8 @@ struct amdgpu_mman {
>  
>  	/* buffer handling */
>  	const struct amdgpu_buffer_funcs	*buffer_funcs;
> -	struct amdgpu_ring			*buffer_funcs_ring;
> +	struct drm_gpu_scheduler		*buffer_funcs_scheds[AMDGPU_MAX_RINGS];
> +	u32					num_buffer_funcs_scheds;
>  	bool					buffer_funcs_enabled;
>  
>  	struct mutex				gtt_window_lock;
> diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> index 5fe162f52c92..a36385ad8da8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
> @@ -1333,8 +1333,7 @@ static const struct amdgpu_buffer_funcs cik_sdma_buffer_funcs = {
>  
>  static void cik_sdma_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	adev->mman.buffer_funcs = &cik_sdma_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &cik_sdma_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs cik_sdma_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
> index 63636643db3d..4a3ba136a36c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
> @@ -1228,8 +1228,7 @@ static const struct amdgpu_buffer_funcs sdma_v2_4_buffer_funcs = {
>  
>  static void sdma_v2_4_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	adev->mman.buffer_funcs = &sdma_v2_4_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v2_4_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs sdma_v2_4_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> index 0153626b5df2..3cf527bcadf6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
> @@ -1670,8 +1670,7 @@ static const struct amdgpu_buffer_funcs sdma_v3_0_buffer_funcs = {
>  
>  static void sdma_v3_0_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	adev->mman.buffer_funcs = &sdma_v3_0_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v3_0_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs sdma_v3_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> index 96a67b30854c..7e106baecad5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> @@ -2608,11 +2608,7 @@ static const struct amdgpu_buffer_funcs sdma_v4_0_buffer_funcs = {
>  
>  static void sdma_v4_0_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	adev->mman.buffer_funcs = &sdma_v4_0_buffer_funcs;
> -	if (adev->sdma.has_page_queue)
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].page;
> -	else
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v4_0_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs sdma_v4_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> index 04dc8a8f4d66..7cb0e213bab2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> @@ -2309,11 +2309,7 @@ static const struct amdgpu_buffer_funcs sdma_v4_4_2_buffer_funcs = {
>  
>  static void sdma_v4_4_2_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	adev->mman.buffer_funcs = &sdma_v4_4_2_buffer_funcs;
> -	if (adev->sdma.has_page_queue)
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].page;
> -	else
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v4_4_2_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs sdma_v4_4_2_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> index 19c717f2c602..eab09c5fc762 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
> @@ -2066,10 +2066,8 @@ static const struct amdgpu_buffer_funcs sdma_v5_0_buffer_funcs = {
>  
>  static void sdma_v5_0_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	if (adev->mman.buffer_funcs == NULL) {
> -		adev->mman.buffer_funcs = &sdma_v5_0_buffer_funcs;
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> -	}
> +	if (adev->mman.buffer_funcs == NULL)
> +		amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v5_0_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs sdma_v5_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> index 7a07b8f4e86d..e843da1dce59 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
> @@ -2076,10 +2076,8 @@ static const struct amdgpu_buffer_funcs sdma_v5_2_buffer_funcs = {
>  
>  static void sdma_v5_2_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	if (adev->mman.buffer_funcs == NULL) {
> -		adev->mman.buffer_funcs = &sdma_v5_2_buffer_funcs;
> -		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> -	}
> +	if (adev->mman.buffer_funcs == NULL)
> +		amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v5_2_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs sdma_v5_2_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> index 8f8228c7adee..d078bff42983 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
> @@ -1884,8 +1884,7 @@ static const struct amdgpu_buffer_funcs sdma_v6_0_buffer_funcs = {
>  
>  static void sdma_v6_0_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	adev->mman.buffer_funcs = &sdma_v6_0_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v6_0_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs sdma_v6_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> index cf412d8fb0ed..77ad6f128e75 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
> @@ -1826,8 +1826,7 @@ static const struct amdgpu_buffer_funcs sdma_v7_0_buffer_funcs = {
>  
>  static void sdma_v7_0_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	adev->mman.buffer_funcs = &sdma_v7_0_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &sdma_v7_0_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs sdma_v7_0_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c b/drivers/gpu/drm/amd/amdgpu/si_dma.c
> index 863e00086c30..4f6d7eeceb37 100644
> --- a/drivers/gpu/drm/amd/amdgpu/si_dma.c
> +++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c
> @@ -826,8 +826,7 @@ static const struct amdgpu_buffer_funcs si_dma_buffer_funcs = {
>  
>  static void si_dma_set_buffer_funcs(struct amdgpu_device *adev)
>  {
> -	adev->mman.buffer_funcs = &si_dma_buffer_funcs;
> -	adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
> +	amdgpu_sdma_set_buffer_funcs_scheds(adev, &si_dma_buffer_funcs);
>  }
>  
>  static const struct amdgpu_vm_pte_funcs si_dma_vm_pte_funcs = {
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 943c3438c7ee..3f7b85aabb72 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -129,13 +129,14 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>  			     struct dma_fence **mfence)
>  {
>  	const u64 GTT_MAX_PAGES = AMDGPU_GTT_MAX_TRANSFER_SIZE;
> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
> +	struct amdgpu_ring *ring;
>  	struct amdgpu_ttm_buffer_entity *entity;
>  	u64 gart_s, gart_d;
>  	struct dma_fence *next;
>  	u64 size;
>  	int r;
>  
> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>  	entity = &adev->mman.move_entities[0];
>  
>  	mutex_lock(&entity->gart_window_lock);


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 16/20] drm/amdgpu: give ttm entities access to all the sdma scheds
  2025-11-13 16:05 ` [PATCH v2 16/20] drm/amdgpu: give ttm entities access to all the sdma scheds Pierre-Eric Pelloux-Prayer
@ 2025-11-17  9:54   ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-17  9:54 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

This patch needs some more text in the commit message, but apart from that Reviewed-by: Christian König <christian.koenig@amd.com>

Regards,
Christian.

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 19 +++++++++----------
>  1 file changed, 9 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 11fec0fa4c11..94d0ff34593f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -2191,8 +2191,8 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  	u32 num_clear_entities, num_move_entities;
>  	u32 windows, w;
>  
> -	num_clear_entities = adev->sdma.num_instances;
> -	num_move_entities = MIN(adev->sdma.num_instances, TTM_NUM_MOVE_FENCES);
> +	num_clear_entities = MIN(adev->mman.num_buffer_funcs_scheds, TTM_NUM_MOVE_FENCES);
> +	num_move_entities = MIN(adev->mman.num_buffer_funcs_scheds, TTM_NUM_MOVE_FENCES);
>  	windows = adev->gmc.is_app_apu ? 0 : (2 * num_move_entities + num_clear_entities);
>  
>  	if (!adev->mman.initialized || amdgpu_in_reset(adev) ||
> @@ -2200,11 +2200,8 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  		return windows;
>  
>  	if (enable) {
> -		struct drm_gpu_scheduler *sched;
> -
> -		sched = adev->mman.buffer_funcs_scheds[0];
>  		r = drm_sched_entity_init(&adev->mman.default_entity.base,
> -					  DRM_SCHED_PRIORITY_KERNEL, &sched,
> +					  DRM_SCHED_PRIORITY_KERNEL, adev->mman.buffer_funcs_scheds,
>  					  1, NULL);
>  		if (r) {
>  			dev_err(adev->dev, "Failed setting up entity (%d)\n",
> @@ -2216,8 +2213,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  		atomic_set(&adev->mman.next_move_entity, 0);
>  		for (i = 0; i < num_move_entities; i++) {
>  			r = drm_sched_entity_init(&adev->mman.move_entities[i].base,
> -						  DRM_SCHED_PRIORITY_NORMAL, &sched,
> -						  1, NULL);
> +						  DRM_SCHED_PRIORITY_NORMAL,
> +						  adev->mman.buffer_funcs_scheds,
> +						  adev->mman.num_buffer_funcs_scheds, NULL);
>  			if (r) {
>  				dev_err(adev->dev,
>  					"Failed setting up TTM BO move entities (%d)\n",
> @@ -2239,8 +2237,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>  
>  		for (i = 0; i < num_clear_entities; i++) {
>  			r = drm_sched_entity_init(&adev->mman.clear_entities[i].base,
> -						  DRM_SCHED_PRIORITY_NORMAL, &sched,
> -						  1, NULL);
> +						  DRM_SCHED_PRIORITY_NORMAL,
> +						  adev->mman.buffer_funcs_scheds,
> +						  adev->mman.num_buffer_funcs_scheds, NULL);
>  			if (r) {
>  				for (j = 0; j < num_move_entities; j++)
>  					drm_sched_entity_destroy(



* Re: [PATCH v2 18/20] drm/amdgpu: rename amdgpu_fill_buffer as amdgpu_ttm_clear_buffer
  2025-11-13 16:05 ` [PATCH v2 18/20] drm/amdgpu: rename amdgpu_fill_buffer as amdgpu_ttm_clear_buffer Pierre-Eric Pelloux-Prayer
@ 2025-11-17  9:56   ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-17  9:56 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
	Simona Vetter, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig

On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
> This is the only use case for this function.
> 
> ---
> v2: amdgpu_ttm_clear_buffer instead of amdgpu_clear_buffer
> ---
> 
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c |  8 +++----
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c    | 26 ++++++++++------------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h    | 15 ++++++-------
>  3 files changed, 23 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> index 4490b19752b8..4b9518097899 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
> @@ -725,8 +725,8 @@ int amdgpu_bo_create(struct amdgpu_device *adev,
>  	    bo->tbo.resource->mem_type == TTM_PL_VRAM) {
>  		struct dma_fence *fence;
>  
> -		r = amdgpu_fill_buffer(NULL, bo, 0, NULL, &fence, NULL,
> -				       true, AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
> +		r = amdgpu_ttm_clear_buffer(NULL, bo, NULL, &fence, NULL,
> +					    true, AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>  		if (unlikely(r))
>  			goto fail_unreserve;
>  
> @@ -1324,8 +1324,8 @@ void amdgpu_bo_release_notify(struct ttm_buffer_object *bo)
>  	if (r)
>  		goto out;
>  
> -	r = amdgpu_fill_buffer(NULL, abo, 0, &bo->base._resv, &fence, NULL,
> -			       false, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
> +	r = amdgpu_ttm_clear_buffer(NULL, abo, &bo->base._resv, &fence, NULL,
> +				    false, AMDGPU_KERNEL_JOB_ID_CLEAR_ON_RELEASE);
>  	if (WARN_ON(r))
>  		goto out;
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index df05768c3817..0a55bc4ea91f 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -433,9 +433,9 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>  	    (abo->flags & AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE)) {
>  		struct dma_fence *wipe_fence = NULL;
>  
> -		r = amdgpu_fill_buffer(entity,
> -				       abo, 0, NULL, &wipe_fence, fence,
> -				       false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
> +		r = amdgpu_ttm_clear_buffer(entity,
> +					    abo, NULL, &wipe_fence, fence,
> +					    false, AMDGPU_KERNEL_JOB_ID_MOVE_BLIT);
>  		if (r) {
>  			goto error;
>  		} else if (wipe_fence) {
> @@ -2418,11 +2418,10 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
>  }
>  
>  /**
> - * amdgpu_fill_buffer - fill a buffer with a given value
> + * amdgpu_ttm_clear_buffer - fill a buffer with 0
>   * @entity: optional entity to use. If NULL, the clearing entities will be
>   *          used to load-balance the partial clears
>   * @bo: the bo to fill
> - * @src_data: the value to set
>   * @resv: fences contained in this reservation will be used as dependencies.
>   * @out_fence: the fence from the last clear will be stored here. It might be
>   *             NULL if no job was run.
> @@ -2432,14 +2431,13 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_ring *ring,
>   * @k_job_id: trace id
>   *
>   */
> -int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
> -		       struct amdgpu_bo *bo,
> -		       uint32_t src_data,
> -		       struct dma_resv *resv,
> -		       struct dma_fence **out_fence,
> -		       struct dma_fence *dependency,
> -		       bool consider_clear_status,
> -		       u64 k_job_id)
> +int amdgpu_ttm_clear_buffer(struct amdgpu_ttm_buffer_entity *entity,
> +			    struct amdgpu_bo *bo,
> +			    struct dma_resv *resv,
> +			    struct dma_fence **out_fence,
> +			    struct dma_fence *dependency,
> +			    bool consider_clear_status,
> +			    u64 k_job_id)
>  {
>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>  	struct dma_fence *fence = NULL;
> @@ -2486,7 +2484,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>  			goto error;
>  
>  		r = amdgpu_ttm_fill_mem(ring, &entity->base,
> -					src_data, to, cur_size, resv,
> +					0, to, cur_size, resv,
>  					&next, true, k_job_id);
>  		if (r)
>  			goto error;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index e01c2173d79f..585aee9a173b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -181,14 +181,13 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring,
>  		       struct dma_resv *resv,
>  		       struct dma_fence **fence,
>  		       bool vm_needs_flush, uint32_t copy_flags);
> -int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
> -		       struct amdgpu_bo *bo,
> -		       uint32_t src_data,
> -		       struct dma_resv *resv,
> -		       struct dma_fence **out_fence,
> -		       struct dma_fence *dependency,
> -		       bool consider_clear_status,
> -		       u64 k_job_id);
> +int amdgpu_ttm_clear_buffer(struct amdgpu_ttm_buffer_entity *entity,
> +			    struct amdgpu_bo *bo,
> +			    struct dma_resv *resv,
> +			    struct dma_fence **out_fence,
> +			    struct dma_fence *dependency,
> +			    bool consider_clear_status,
> +			    u64 k_job_id);
>  
>  int amdgpu_ttm_alloc_gart(struct ttm_buffer_object *bo);
>  void amdgpu_ttm_recover_gart(struct ttm_buffer_object *tbo);



* Re: [PATCH v2 02/20] drm/ttm: rework pipelined eviction fence handling
  2025-11-13 16:05 ` [PATCH v2 02/20] drm/ttm: rework pipelined eviction fence handling Pierre-Eric Pelloux-Prayer
  2025-11-14 12:47   ` Christian König
@ 2025-11-18 15:00   ` Thomas Hellström
  2025-11-19 14:57     ` Christian König
  1 sibling, 1 reply; 53+ messages in thread
From: Thomas Hellström @ 2025-11-18 15:00 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Alex Deucher, Christian König,
	David Airlie, Simona Vetter, Huang Rui, Matthew Auld,
	Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig

Hi, Pierre-Eric

On Thu, 2025-11-13 at 17:05 +0100, Pierre-Eric Pelloux-Prayer wrote:
> Until now ttm stored a single pipelined eviction fence which means
> drivers had to use a single entity for these evictions.
> 
> To lift this requirement, this commit allows up to 8 entities to
> be used.
> 
> Ideally a dma_resv object would have been used as a container of
> the eviction fences, but the locking rules makes it complex.
> dma_resv locks all have the same ww_class, which means "Attempting to
> lock more mutexes after ww_acquire_done." is an error.
> 
> One alternative considered was to introduce a 2nd ww_class for
> specific resv to hold a single "transient" lock (= the resv lock
> would only be held for a short period, without taking any other
> locks).

Wouldn't it be possible to use lockdep_set_class_and_name() to modify
the resv lock class for these particular resv objects after they are
allocated? Reusing the resv code certainly sounds attractive.

Thanks,
Thomas



* Re: [PATCH v2 10/20] drm/admgpu: handle resv dependencies in amdgpu_ttm_map_buffer
  2025-11-17  8:44   ` Christian König
@ 2025-11-19  8:28     ` Pierre-Eric Pelloux-Prayer
  0 siblings, 0 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-19  8:28 UTC (permalink / raw)
  To: Christian König, Pierre-Eric Pelloux-Prayer, Alex Deucher,
	David Airlie, Simona Vetter, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig



On 17/11/2025 at 09:44, Christian König wrote:
> On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
>> If a resv object is passed, its fences are treated as a dependency
>> for the amdgpu_ttm_map_buffer operation.
>>
>> This will be used by amdgpu_bo_release_notify through
>> amdgpu_fill_buffer.
> 
> Why should updating the GART window depend on fences in a resv object?
> 

You're right, this is not needed. I'll drop the patch.

Pierre-Eric

> Regards,
> Christian.
> 
>>
>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 16 +++++++++++-----
>>   1 file changed, 11 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> index b13f0993dbf1..411997db70eb 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> @@ -184,7 +184,8 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>>   				 struct amdgpu_res_cursor *mm_cur,
>>   				 unsigned int window, struct amdgpu_ring *ring,
>>   				 bool tmz, uint64_t *size, uint64_t *addr,
>> -				 struct dma_fence *dep)
>> +				 struct dma_fence *dep,
>> +				 struct dma_resv *resv)
>>   {
>>   	struct amdgpu_device *adev = ring->adev;
>>   	unsigned int offset, num_pages, num_dw, num_bytes;
>> @@ -239,6 +240,10 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>>   	if (dep)
>>   		drm_sched_job_add_dependency(&job->base, dma_fence_get(dep));
>>   
>> +	if (resv)
>> +		drm_sched_job_add_resv_dependencies(&job->base, resv,
>> +						    DMA_RESV_USAGE_BOOKKEEP);
>> +
>>   	src_addr = num_dw * 4;
>>   	src_addr += job->ibs[0].gpu_addr;
>>   
>> @@ -332,14 +337,14 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>   		r = amdgpu_ttm_map_buffer(&entity->base,
>>   					  src->bo, src->mem, &src_mm,
>>   					  entity->gart_window_id0, ring, tmz, &cur_size, &from,
>> -					  NULL);
>> +					  NULL, NULL);
>>   		if (r)
>>   			goto error;
>>   
>>   		r = amdgpu_ttm_map_buffer(&entity->base,
>>   					  dst->bo, dst->mem, &dst_mm,
>>   					  entity->gart_window_id1, ring, tmz, &cur_size, &to,
>> -					  NULL);
>> +					  NULL, NULL);
>>   		if (r)
>>   			goto error;
>>   
>> @@ -2451,7 +2456,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>   		r = amdgpu_ttm_map_buffer(&entity->base,
>>   					  &bo->tbo, bo->tbo.resource, &cursor,
>>   					  entity->gart_window_id1, ring, false, &size, &addr,
>> -					  NULL);
>> +					  NULL, NULL);
>>   		if (r)
>>   			goto err;
>>   
>> @@ -2506,7 +2511,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>>   					  &bo->tbo, bo->tbo.resource, &dst,
>>   					  entity->gart_window_id1, ring, false,
>>   					  &cur_size, &to,
>> -					  dependency);
>> +					  dependency,
>> +					  resv);
>>   		if (r)
>>   			goto error;
>>   



* Re: [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman
  2025-11-17  9:46   ` Christian König
@ 2025-11-19  9:34     ` Pierre-Eric Pelloux-Prayer
  2025-11-19 10:49       ` Christian König
  0 siblings, 1 reply; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-19  9:34 UTC (permalink / raw)
  To: Christian König, Pierre-Eric Pelloux-Prayer, Alex Deucher,
	David Airlie, Simona Vetter, Felix Kuehling
  Cc: amd-gfx, dri-devel, linux-kernel



On 17/11/2025 at 10:46, Christian König wrote:
> On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
>> This will allow the use of all of them for clear/fill buffer
>> operations.
>> Since drm_sched_entity_init requires a scheduler array, we
>> store schedulers rather than rings. For the few places that need
>> access to a ring, we can get it from the sched using container_of.
>>
>> Since the code is the same for all sdma versions, add a new
>> helper amdgpu_sdma_set_buffer_funcs_scheds to set buffer_funcs_scheds
>> based on the number of sdma instances.
>>
>> Note: the new sched array is identical to the amdgpu_vm_manager one.
>> These 2 could be merged.
> 
> I realized a bit after we discussed it that this isn't true.
> 
> We need the two arrays separated for a Navi 1x workaround to work correctly.

Why 2 arrays? AFAICT the only thing needed is for amdgpu_ttm to be aware
that it should only use a single sched in this situation.

> 
> Anyway, that doesn't affect reviewing this patch here.
> 
>>
>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  2 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  4 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  8 ++--
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c       |  4 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 41 +++++++++++++++----
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  3 +-
>>   drivers/gpu/drm/amd/amdgpu/cik_sdma.c         |  3 +-
>>   drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c        |  3 +-
>>   drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c        |  3 +-
>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c        |  6 +--
>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c      |  6 +--
>>   drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c        |  6 +--
>>   drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c        |  6 +--
>>   drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c        |  3 +-
>>   drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c        |  3 +-
>>   drivers/gpu/drm/amd/amdgpu/si_dma.c           |  3 +-
>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
>>   17 files changed, 62 insertions(+), 45 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> index 3fab3dc9f3e4..05c13fb0e6bf 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> @@ -1615,6 +1615,8 @@ ssize_t amdgpu_get_soft_full_reset_mask(struct amdgpu_ring *ring);
>>   ssize_t amdgpu_show_reset_mask(char *buf, uint32_t supported_reset);
>>   void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
>>   				   const struct amdgpu_vm_pte_funcs *vm_pte_funcs);
>> +void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
>> +					 const struct amdgpu_buffer_funcs *buffer_funcs);
>>   
>>   /* atpx handler */
>>   #if defined(CONFIG_VGA_SWITCHEROO)
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>> index b59040a8771f..9ea927e07a77 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>> @@ -32,12 +32,14 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
>>   				    uint64_t saddr, uint64_t daddr, int n, s64 *time_ms)
>>   {
>>   	ktime_t stime, etime;
>> +	struct amdgpu_ring *ring;
>>   	struct dma_fence *fence;
>>   	int i, r;
>>   
>> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
> 
> We have the to_amdgpu_ring() macro for that.

I'll update the patch, thx.
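For readers less familiar with the pattern being discussed: to_amdgpu_ring() is just a container_of() over the embedded sched member, so storing drm_gpu_scheduler pointers loses nothing -- the enclosing ring can always be recovered. A minimal userspace sketch (the struct definitions below are simplified mocks, not the real amdgpu/scheduler layouts):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structs, for illustration only;
 * the real definitions live in amdgpu_ring.h and gpu_scheduler.h. */
struct drm_gpu_scheduler {
	bool ready;
};

struct amdgpu_ring {
	int id;
	struct drm_gpu_scheduler sched;	/* embedded, as in the real struct */
};

/* Userspace re-implementation of the kernel's container_of() */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Equivalent of the to_amdgpu_ring() macro mentioned in the review:
 * recover the enclosing ring from a pointer to its scheduler alone. */
#define to_amdgpu_ring(s) container_of(s, struct amdgpu_ring, sched)
```

The arithmetic is pure pointer offsetting, which is why the patch can freely convert between `adev->mman.buffer_funcs_scheds[0]` and the ring it belongs to.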

> 
>> +
>>   	stime = ktime_get();
>>   	for (i = 0; i < n; i++) {
>> -		struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>   		r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
>>   				       saddr, daddr, size, NULL, &fence,
>>   				       false, 0);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> index b92234d63562..1927d940fbca 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> @@ -3303,7 +3303,7 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
>>   	if (r)
>>   		goto init_failed;
>>   
>> -	if (adev->mman.buffer_funcs_ring->sched.ready)
>> +	if (adev->mman.buffer_funcs_scheds[0]->ready)
>>   		amdgpu_ttm_set_buffer_funcs_status(adev, true);
>>   
>>   	/* Don't init kfd if whole hive need to be reset during init */
>> @@ -4143,7 +4143,7 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
>>   
>>   	r = amdgpu_device_ip_resume_phase2(adev);
>>   
>> -	if (adev->mman.buffer_funcs_ring->sched.ready)
>> +	if (adev->mman.buffer_funcs_scheds[0]->ready)
> 
> We should probably drop that check here and move this into amdgpu_ttm_set_buffer_funcs_status().

What should amdgpu_ttm_set_buffer_funcs_status() do if ready is false but enable 
is true? Exit early?

> 
>>   		amdgpu_ttm_set_buffer_funcs_status(adev, true);
>>   
>>   	if (r)
>> @@ -4493,7 +4493,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>>   	adev->num_rings = 0;
>>   	RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>>   	adev->mman.buffer_funcs = NULL;
>> -	adev->mman.buffer_funcs_ring = NULL;
>> +	adev->mman.num_buffer_funcs_scheds = 0;
>>   	adev->vm_manager.vm_pte_funcs = NULL;
>>   	adev->vm_manager.vm_pte_num_scheds = 0;
>>   	adev->gmc.gmc_funcs = NULL;
>> @@ -5965,7 +5965,7 @@ int amdgpu_device_reinit_after_reset(struct amdgpu_reset_context *reset_context)
>>   				if (r)
>>   					goto out;
>>   
>> -				if (tmp_adev->mman.buffer_funcs_ring->sched.ready)
>> +				if (tmp_adev->mman.buffer_funcs_scheds[0]->ready)
>>   					amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
>>   
>>   				r = amdgpu_device_ip_resume_phase3(tmp_adev);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> index 2713dd51ab9a..4433d8620129 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> @@ -651,12 +651,14 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
>>   void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>>   			      uint32_t vmhub, uint32_t flush_type)
>>   {
>> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>> +	struct amdgpu_ring *ring;
>>   	struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
>>   	struct dma_fence *fence;
>>   	struct amdgpu_job *job;
>>   	int r, i;
>>   
>> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>> +
>>   	if (!hub->sdma_invalidation_workaround || vmid ||
>>   	    !adev->mman.buffer_funcs_enabled || !adev->ib_pool_ready ||
>>   	    !ring->sched.ready) {
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> index 6c333dba7a35..11fec0fa4c11 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> @@ -308,7 +308,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>   				      struct dma_resv *resv,
>>   				      struct dma_fence **f)
>>   {
>> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>> +	struct amdgpu_ring *ring;
>>   	struct amdgpu_res_cursor src_mm, dst_mm;
>>   	struct dma_fence *fence = NULL;
>>   	int r = 0;
>> @@ -321,6 +321,8 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>   		return -EINVAL;
>>   	}
>>   
>> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>> +
>>   	amdgpu_res_first(src->mem, src->offset, size, &src_mm);
>>   	amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
>>   
>> @@ -1493,6 +1495,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>>   	struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
>>   	struct amdgpu_device *adev = amdgpu_ttm_adev(abo->tbo.bdev);
>>   	struct amdgpu_res_cursor src_mm;
>> +	struct amdgpu_ring *ring;
>>   	struct amdgpu_job *job;
>>   	struct dma_fence *fence;
>>   	uint64_t src_addr, dst_addr;
>> @@ -1530,7 +1533,8 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>>   	amdgpu_emit_copy_buffer(adev, &job->ibs[0], src_addr, dst_addr,
>>   				PAGE_SIZE, 0);
>>   
>> -	amdgpu_ring_pad_ib(adev->mman.buffer_funcs_ring, &job->ibs[0]);
>> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>> +	amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>>   	WARN_ON(job->ibs[0].length_dw > num_dw);
>>   
>>   	fence = amdgpu_job_submit(job);
>> @@ -2196,11 +2200,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>   		return windows;
>>   
>>   	if (enable) {
>> -		struct amdgpu_ring *ring;
>>   		struct drm_gpu_scheduler *sched;
>>   
>> -		ring = adev->mman.buffer_funcs_ring;
>> -		sched = &ring->sched;
>> +		sched = adev->mman.buffer_funcs_scheds[0];
>>   		r = drm_sched_entity_init(&adev->mman.default_entity.base,
>>   					  DRM_SCHED_PRIORITY_KERNEL, &sched,
>>   					  1, NULL);
>> @@ -2432,7 +2434,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>   			    struct dma_fence **fence)
>>   {
>>   	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>> +	struct amdgpu_ring *ring;
>>   	struct amdgpu_ttm_buffer_entity *entity;
>>   	struct amdgpu_res_cursor cursor;
>>   	u64 addr;
>> @@ -2443,6 +2445,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>   
>>   	if (!fence)
>>   		return -EINVAL;
>> +
>> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>>   	entity = &adev->mman.clear_entities[0];
>>   	*fence = dma_fence_get_stub();
>>   
>> @@ -2494,9 +2498,9 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>>   		       u64 k_job_id)
>>   {
>>   	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>> -	struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>   	struct dma_fence *fence = NULL;
>>   	struct amdgpu_res_cursor dst;
>> +	struct amdgpu_ring *ring;
>>   	int r, e;
>>   
>>   	if (!adev->mman.buffer_funcs_enabled) {
>> @@ -2505,6 +2509,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>>   		return -EINVAL;
>>   	}
>>   
>> +	ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>> +
>>   	if (entity == NULL) {
>>   		e = atomic_inc_return(&adev->mman.next_clear_entity) %
>>   				      adev->mman.num_clear_entities;
>> @@ -2579,6 +2585,27 @@ int amdgpu_ttm_evict_resources(struct amdgpu_device *adev, int mem_type)
>>   	return ttm_resource_manager_evict_all(&adev->mman.bdev, man);
>>   }
>>   
>> +void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
>> +					 const struct amdgpu_buffer_funcs *buffer_funcs)
>> +{
>> +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
>> +	struct drm_gpu_scheduler *sched;
>> +	int i;
>> +
>> +	adev->mman.buffer_funcs = buffer_funcs;
>> +
>> +	for (i = 0; i < adev->sdma.num_instances; i++) {
>> +		if (adev->sdma.has_page_queue)
>> +			sched = &adev->sdma.instance[i].page.sched;
>> +		else
>> +			sched = &adev->sdma.instance[i].ring.sched;
>> +		adev->mman.buffer_funcs_scheds[i] = sched;
>> +	}
>> +
>> +	adev->mman.num_buffer_funcs_scheds = hub->sdma_invalidation_workaround ?
>> +		1 : adev->sdma.num_instances;
>> +}
>> +
> 
> Probably better to make all SDMA version switch to use amdgpu_sdma_set_buffer_funcs_scheds() one patch and then changing amdgpu_sdma_set_buffer_funcs_scheds() to use more than one DMA engine a second patch.

I'm not sure it's useful: this patch simply creates an array of schedulers, but 
every user of this array only uses the first sched.
Enabling the use of multiple schedulers is done in the "drm/amdgpu: give ttm 
entities access to all the sdma scheds" patch.

Pierre-Eric

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 06/20] drm/amdgpu: statically assign gart windows to ttm entities
  2025-11-14 20:24   ` Felix Kuehling
@ 2025-11-19  9:55     ` Pierre-Eric Pelloux-Prayer
  0 siblings, 0 replies; 53+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2025-11-19  9:55 UTC (permalink / raw)
  To: Felix Kuehling, Pierre-Eric Pelloux-Prayer, Alex Deucher,
	Christian König, David Airlie, Simona Vetter
  Cc: amd-gfx, dri-devel, linux-kernel



On 14/11/2025 at 21:24, Felix Kuehling wrote:
> 
> On 2025-11-13 11:05, Pierre-Eric Pelloux-Prayer wrote:
>> If multiple entities share the same window we must make sure
>> that jobs using them are executed sequentially.
>>
>> This commit gives separate window id to each entity, so jobs
>> from multiple entities could execute in parallel if needed.
>> (for now they all use the first sdma engine, so it makes no
>> difference yet).
>>
>> default_entity doesn't get any windows reserved since there is
>> no use for them.
>>
>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c  |  9 +++--
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  | 50 ++++++++++++++----------
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h  |  9 +++--
>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c |  8 ++--
>>   4 files changed, 46 insertions(+), 30 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> index 8e2d41c9c271..2a444d02cf4b 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>> @@ -686,7 +686,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>>        * translation. Avoid this by doing the invalidation from the SDMA
>>        * itself at least for GART.
>>        */
>> -    mutex_lock(&adev->mman.gtt_window_lock);
>> +    mutex_lock(&adev->mman.clear_entity.gart_window_lock);
>> +    mutex_lock(&adev->mman.move_entity.gart_window_lock);
>>       r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
>>                        AMDGPU_FENCE_OWNER_UNDEFINED,
>>                        16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
>> @@ -699,7 +700,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>>       job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop;
>>       amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>>       fence = amdgpu_job_submit(job);
>> -    mutex_unlock(&adev->mman.gtt_window_lock);
>> +    mutex_unlock(&adev->mman.move_entity.gart_window_lock);
>> +    mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
>>       dma_fence_wait(fence, false);
>>       dma_fence_put(fence);
>> @@ -707,7 +709,8 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>>       return;
>>   error_alloc:
>> -    mutex_unlock(&adev->mman.gtt_window_lock);
>> +    mutex_unlock(&adev->mman.move_entity.gart_window_lock);
>> +    mutex_unlock(&adev->mman.clear_entity.gart_window_lock);
>>       dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
>>   }
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> index c8d59ca2b3bd..7193a341689d 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> @@ -291,7 +291,7 @@ static int amdgpu_ttm_map_buffer(struct drm_sched_entity *entity,
>>    */
>>   __attribute__((nonnull))
>>   static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>> -                      struct drm_sched_entity *entity,
>> +                      struct amdgpu_ttm_buffer_entity *entity,
>>                         const struct amdgpu_copy_mem *src,
>>                         const struct amdgpu_copy_mem *dst,
>>                         uint64_t size, bool tmz,
>> @@ -314,7 +314,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>       amdgpu_res_first(src->mem, src->offset, size, &src_mm);
>>       amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
>> -    mutex_lock(&adev->mman.gtt_window_lock);
>> +    mutex_lock(&entity->gart_window_lock);
>>       while (src_mm.remaining) {
>>           uint64_t from, to, cur_size, tiling_flags;
>>           uint32_t num_type, data_format, max_com, write_compress_disable;
>> @@ -324,15 +324,15 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>           cur_size = min3(src_mm.size, dst_mm.size, 256ULL << 20);
>>           /* Map src to window 0 and dst to window 1. */
>> -        r = amdgpu_ttm_map_buffer(entity,
>> +        r = amdgpu_ttm_map_buffer(&entity->base,
>>                         src->bo, src->mem, &src_mm,
>> -                      0, ring, tmz, &cur_size, &from);
>> +                      entity->gart_window_id0, ring, tmz, &cur_size, &from);
>>           if (r)
>>               goto error;
>> -        r = amdgpu_ttm_map_buffer(entity,
>> +        r = amdgpu_ttm_map_buffer(&entity->base,
>>                         dst->bo, dst->mem, &dst_mm,
>> -                      1, ring, tmz, &cur_size, &to);
>> +                      entity->gart_window_id1, ring, tmz, &cur_size, &to);
>>           if (r)
>>               goto error;
>> @@ -359,7 +359,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>                                    write_compress_disable));
>>           }
>> -        r = amdgpu_copy_buffer(ring, entity, from, to, cur_size, resv,
>> +        r = amdgpu_copy_buffer(ring, &entity->base, from, to, cur_size, resv,
>>                          &next, true, copy_flags);
>>           if (r)
>>               goto error;
>> @@ -371,7 +371,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>           amdgpu_res_next(&dst_mm, cur_size);
>>       }
>>   error:
>> -    mutex_unlock(&adev->mman.gtt_window_lock);
>> +    mutex_unlock(&entity->gart_window_lock);
>>       *f = fence;
>>       return r;
>>   }
>> @@ -401,7 +401,7 @@ static int amdgpu_move_blit(struct ttm_buffer_object *bo,
>>       dst.offset = 0;
>>       r = amdgpu_ttm_copy_mem_to_mem(adev,
>> -                       &adev->mman.move_entity.base,
>> +                       &adev->mman.move_entity,
>>                          &src, &dst,
>>                          new_mem->size,
>>                          amdgpu_bo_encrypted(abo),
>> @@ -1893,8 +1893,6 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
>>       uint64_t gtt_size;
>>       int r;
>> -    mutex_init(&adev->mman.gtt_window_lock);
>> -
>>       dma_set_max_seg_size(adev->dev, UINT_MAX);
>>       /* No others user of address space so set it to 0 */
>>       r = ttm_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev,
>> @@ -2207,6 +2205,15 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>               drm_sched_entity_destroy(&adev->mman.clear_entity.base);
>>               goto error_free_entity;
>>           }
>> +
>> +        /* Statically assign GART windows to each entity. */
>> +        mutex_init(&adev->mman.default_entity.gart_window_lock);
>> +        adev->mman.move_entity.gart_window_id0 = 0;
>> +        adev->mman.move_entity.gart_window_id1 = 1;
>> +        mutex_init(&adev->mman.move_entity.gart_window_lock);
>> +        /* Clearing entity doesn't use id0 */
>> +        adev->mman.clear_entity.gart_window_id1 = 2;
>> +        mutex_init(&adev->mman.clear_entity.gart_window_lock);
>>       } else {
>>           drm_sched_entity_destroy(&adev->mman.default_entity.base);
>>           drm_sched_entity_destroy(&adev->mman.clear_entity.base);
>> @@ -2371,6 +2378,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>   {
>>       struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>       struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>> +    struct amdgpu_ttm_buffer_entity *entity;
>>       struct amdgpu_res_cursor cursor;
>>       u64 addr;
>>       int r = 0;
>> @@ -2381,11 +2389,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>       if (!fence)
>>           return -EINVAL;
>> +    entity = &adev->mman.clear_entity;
>>       *fence = dma_fence_get_stub();
>>       amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
>> -    mutex_lock(&adev->mman.gtt_window_lock);
>> +    mutex_lock(&entity->gart_window_lock);
>>       while (cursor.remaining) {
>>           struct dma_fence *next = NULL;
>>           u64 size;
>> @@ -2398,13 +2407,13 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>           /* Never clear more than 256MiB at once to avoid timeouts */
>>           size = min(cursor.size, 256ULL << 20);
>> -        r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity.base,
>> +        r = amdgpu_ttm_map_buffer(&entity->base,
>>                         &bo->tbo, bo->tbo.resource, &cursor,
>> -                      1, ring, false, &size, &addr);
>> +                      entity->gart_window_id1, ring, false, &size, &addr);
>>           if (r)
>>               goto err;
>> -    r = amdgpu_ttm_fill_mem(ring, &adev->mman.clear_entity.base, 0, addr, size, resv,
>> +        r = amdgpu_ttm_fill_mem(ring, &entity->base, 0, addr, size, resv,
>>                       &next, true,
>>                       AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
>>           if (r)
>> @@ -2416,12 +2425,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>           amdgpu_res_next(&cursor, size);
>>       }
>>   err:
>> -    mutex_unlock(&adev->mman.gtt_window_lock);
>> +    mutex_unlock(&entity->gart_window_lock);
>>       return r;
>>   }
>> -int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>> +int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>>                  struct amdgpu_bo *bo,
>>                  uint32_t src_data,
>>                  struct dma_resv *resv,
>> @@ -2442,7 +2451,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>>       amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &dst);
>> -    mutex_lock(&adev->mman.gtt_window_lock);
>> +    mutex_lock(&entity->gart_window_lock);
>>       while (dst.remaining) {
>>           struct dma_fence *next;
>>           uint64_t cur_size, to;
>> @@ -2452,7 +2461,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>>           r = amdgpu_ttm_map_buffer(&entity->base,
>>                         &bo->tbo, bo->tbo.resource, &dst,
>> -                      1, ring, false, &cur_size, &to);
>> +                      entity->gart_window_id1, ring, false,
>> +                      &cur_size, &to);
>>           if (r)
>>               goto error;
>> @@ -2468,7 +2478,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>>           amdgpu_res_next(&dst, cur_size);
>>       }
>>   error:
>> -    mutex_unlock(&adev->mman.gtt_window_lock);
>> +    mutex_unlock(&entity->gart_window_lock);
>>       if (f)
>>           *f = dma_fence_get(fence);
>>       dma_fence_put(fence);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> index e1655f86a016..f4f762be9fdd 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
>> @@ -39,7 +39,7 @@
>>   #define __AMDGPU_PL_NUM    (TTM_PL_PRIV + 6)
>>   #define AMDGPU_GTT_MAX_TRANSFER_SIZE    512
>> -#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS    2
>> +#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS    3
>>   extern const struct attribute_group amdgpu_vram_mgr_attr_group;
>>   extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
>> @@ -54,6 +54,9 @@ struct amdgpu_gtt_mgr {
>>   struct amdgpu_ttm_buffer_entity {
>>       struct drm_sched_entity base;
>> +    struct mutex        gart_window_lock;
>> +    u32            gart_window_id0;
>> +    u32            gart_window_id1;
>>   };
>>   struct amdgpu_mman {
>> @@ -69,7 +72,7 @@ struct amdgpu_mman {
>>       struct mutex                gtt_window_lock;
>> -    struct amdgpu_ttm_buffer_entity default_entity;
>> +    struct amdgpu_ttm_buffer_entity default_entity; /* has no gart windows */
>>       struct amdgpu_ttm_buffer_entity clear_entity;
>>       struct amdgpu_ttm_buffer_entity move_entity;
>> @@ -177,7 +180,7 @@ int amdgpu_copy_buffer(struct amdgpu_ring *ring,
>>   int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>                   struct dma_resv *resv,
>>                   struct dma_fence **fence);
>> -int amdgpu_fill_buffer(struct amdgpu_ttm_entity *entity,
>> +int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>>                  struct amdgpu_bo *bo,
>>                  uint32_t src_data,
>>                  struct dma_resv *resv,
>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> index 09756132fa1b..bc47fc362a17 100644
>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
>> @@ -60,7 +60,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
>>       int r;
>>       /* use gart window 0 */
>> -    *gart_addr = adev->gmc.gart_start;
>> +    *gart_addr = entity->gart_window_id0;
> 
> gart_window_id0 doesn't look like an address. What's the actual MC address that 
> any copy through this window should use?

I believe the address should be:

    adev->gmc.gart_start + (u64)entity->gart_window_id0 * 
AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GPU_PAGE_SIZE

I'll update in v3.

Thanks,
Pierre-Eric


> 
> Regards,
>    Felix
> 
> 
>>       num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
>>       num_bytes = npages * 8;
>> @@ -116,7 +116,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
>>    * multiple GTT_MAX_PAGES transfer, all sdma operations are serialized, wait for
>>    * the last sdma finish fence which is returned to check copy memory is done.
>>    *
>> - * Context: Process context, takes and releases gtt_window_lock
>> + * Context: Process context, takes and releases gart_window_lock
>>    *
>>    * Return:
>>    * 0 - OK, otherwise error code
>> @@ -138,7 +138,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>>       entity = &adev->mman.move_entity;
>> -    mutex_lock(&adev->mman.gtt_window_lock);
>> +    mutex_lock(&entity->gart_window_lock);
>>       while (npages) {
>>           size = min(GTT_MAX_PAGES, npages);
>> @@ -175,7 +175,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
>>       }
>>   out_unlock:
>> -    mutex_unlock(&adev->mman.gtt_window_lock);
>> +    mutex_unlock(&entity->gart_window_lock);
>>       return r;
>>   }


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman
  2025-11-19  9:34     ` Pierre-Eric Pelloux-Prayer
@ 2025-11-19 10:49       ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-19 10:49 UTC (permalink / raw)
  To: Pierre-Eric Pelloux-Prayer, Pierre-Eric Pelloux-Prayer,
	Alex Deucher, David Airlie, Simona Vetter, Felix Kuehling
  Cc: amd-gfx, dri-devel, linux-kernel

On 11/19/25 10:34, Pierre-Eric Pelloux-Prayer wrote:
> 
> 
> On 17/11/2025 at 10:46, Christian König wrote:
>> On 11/13/25 17:05, Pierre-Eric Pelloux-Prayer wrote:
>>> This will allow the use of all of them for clear/fill buffer
>>> operations.
>>> Since drm_sched_entity_init requires a scheduler array, we
>>> store schedulers rather than rings. For the few places that need
>>> access to a ring, we can get it from the sched using container_of.
>>>
>>> Since the code is the same for all sdma versions, add a new
>>> helper amdgpu_sdma_set_buffer_funcs_scheds to set buffer_funcs_scheds
>>> based on the number of sdma instances.
>>>
>>> Note: the new sched array is identical to the amdgpu_vm_manager one.
>>> These 2 could be merged.
>>
>> I realized a bit after we discussed it that this isn't true.
>>
>> We need the two arrays separated for a Navi 1x workaround to work correctly.
> 
> Why 2 arrays? AFAICT the only needed thing is for amdgpu_ttm to be aware that it should only use a single sched in this situation.

So it could just use the first entry of the array for TTM and the full array for VM updates.

Good point, I haven't thought about that possibility.

> 
>>
>> Anyway, that doesn't affect reviewing this patch here.
>>
>>>
>>> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  2 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c |  4 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    |  8 ++--
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c       |  4 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       | 41 +++++++++++++++----
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h       |  3 +-
>>>   drivers/gpu/drm/amd/amdgpu/cik_sdma.c         |  3 +-
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c        |  3 +-
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c        |  3 +-
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c        |  6 +--
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c      |  6 +--
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c        |  6 +--
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c        |  6 +--
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c        |  3 +-
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c        |  3 +-
>>>   drivers/gpu/drm/amd/amdgpu/si_dma.c           |  3 +-
>>>   drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  3 +-
>>>   17 files changed, 62 insertions(+), 45 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> index 3fab3dc9f3e4..05c13fb0e6bf 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> @@ -1615,6 +1615,8 @@ ssize_t amdgpu_get_soft_full_reset_mask(struct amdgpu_ring *ring);
>>>   ssize_t amdgpu_show_reset_mask(char *buf, uint32_t supported_reset);
>>>   void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
>>>                      const struct amdgpu_vm_pte_funcs *vm_pte_funcs);
>>> +void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
>>> +                     const struct amdgpu_buffer_funcs *buffer_funcs);
>>>     /* atpx handler */
>>>   #if defined(CONFIG_VGA_SWITCHEROO)
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>>> index b59040a8771f..9ea927e07a77 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
>>> @@ -32,12 +32,14 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
>>>                       uint64_t saddr, uint64_t daddr, int n, s64 *time_ms)
>>>   {
>>>       ktime_t stime, etime;
>>> +    struct amdgpu_ring *ring;
>>>       struct dma_fence *fence;
>>>       int i, r;
>>>   +    ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>>
>> We have the to_amdgpu_ring() macro for that.
> 
> I'll update the patch, thx.
> 
>>
>>> +
>>>       stime = ktime_get();
>>>       for (i = 0; i < n; i++) {
>>> -        struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>>           r = amdgpu_copy_buffer(ring, &adev->mman.default_entity.base,
>>>                          saddr, daddr, size, NULL, &fence,
>>>                          false, 0);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> index b92234d63562..1927d940fbca 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>>> @@ -3303,7 +3303,7 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
>>>       if (r)
>>>           goto init_failed;
>>>   -    if (adev->mman.buffer_funcs_ring->sched.ready)
>>> +    if (adev->mman.buffer_funcs_scheds[0]->ready)
>>>           amdgpu_ttm_set_buffer_funcs_status(adev, true);
>>>         /* Don't init kfd if whole hive need to be reset during init */
>>> @@ -4143,7 +4143,7 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
>>>         r = amdgpu_device_ip_resume_phase2(adev);
>>>   -    if (adev->mman.buffer_funcs_ring->sched.ready)
>>> +    if (adev->mman.buffer_funcs_scheds[0]->ready)
>>
>> We should probably drop that check here and move this into amdgpu_ttm_set_buffer_funcs_status().
> 
> What should amdgpu_ttm_set_buffer_funcs_status() do if ready is false but enable is true? Exit early?

Yes, probably while printing a warning like "Not enabling DMA transfers for in kernel use..." or something like that.

>>
>>>           amdgpu_ttm_set_buffer_funcs_status(adev, true);
>>>         if (r)
>>> @@ -4493,7 +4493,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
>>>       adev->num_rings = 0;
>>>       RCU_INIT_POINTER(adev->gang_submit, dma_fence_get_stub());
>>>       adev->mman.buffer_funcs = NULL;
>>> -    adev->mman.buffer_funcs_ring = NULL;
>>> +    adev->mman.num_buffer_funcs_scheds = 0;
>>>       adev->vm_manager.vm_pte_funcs = NULL;
>>>       adev->vm_manager.vm_pte_num_scheds = 0;
>>>       adev->gmc.gmc_funcs = NULL;
>>> @@ -5965,7 +5965,7 @@ int amdgpu_device_reinit_after_reset(struct amdgpu_reset_context *reset_context)
>>>                   if (r)
>>>                       goto out;
>>>   -                if (tmp_adev->mman.buffer_funcs_ring->sched.ready)
>>> +                if (tmp_adev->mman.buffer_funcs_scheds[0]->ready)
>>>                       amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
>>>                     r = amdgpu_device_ip_resume_phase3(tmp_adev);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>>> index 2713dd51ab9a..4433d8620129 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
>>> @@ -651,12 +651,14 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
>>>   void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
>>>                     uint32_t vmhub, uint32_t flush_type)
>>>   {
>>> -    struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>> +    struct amdgpu_ring *ring;
>>>       struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
>>>       struct dma_fence *fence;
>>>       struct amdgpu_job *job;
>>>       int r, i;
>>>   +    ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>>> +
>>>       if (!hub->sdma_invalidation_workaround || vmid ||
>>>           !adev->mman.buffer_funcs_enabled || !adev->ib_pool_ready ||
>>>           !ring->sched.ready) {
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> index 6c333dba7a35..11fec0fa4c11 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>> @@ -308,7 +308,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>>                         struct dma_resv *resv,
>>>                         struct dma_fence **f)
>>>   {
>>> -    struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>> +    struct amdgpu_ring *ring;
>>>       struct amdgpu_res_cursor src_mm, dst_mm;
>>>       struct dma_fence *fence = NULL;
>>>       int r = 0;
>>> @@ -321,6 +321,8 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
>>>           return -EINVAL;
>>>       }
>>>   +    ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>>> +
>>>       amdgpu_res_first(src->mem, src->offset, size, &src_mm);
>>>       amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
>>>   @@ -1493,6 +1495,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>>>       struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
>>>       struct amdgpu_device *adev = amdgpu_ttm_adev(abo->tbo.bdev);
>>>       struct amdgpu_res_cursor src_mm;
>>> +    struct amdgpu_ring *ring;
>>>       struct amdgpu_job *job;
>>>       struct dma_fence *fence;
>>>       uint64_t src_addr, dst_addr;
>>> @@ -1530,7 +1533,8 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
>>>       amdgpu_emit_copy_buffer(adev, &job->ibs[0], src_addr, dst_addr,
>>>                   PAGE_SIZE, 0);
>>>   -    amdgpu_ring_pad_ib(adev->mman.buffer_funcs_ring, &job->ibs[0]);
>>> +    ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>>> +    amdgpu_ring_pad_ib(ring, &job->ibs[0]);
>>>       WARN_ON(job->ibs[0].length_dw > num_dw);
>>>         fence = amdgpu_job_submit(job);
>>> @@ -2196,11 +2200,9 @@ u32 amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>>           return windows;
>>>         if (enable) {
>>> -        struct amdgpu_ring *ring;
>>>           struct drm_gpu_scheduler *sched;
>>>   -        ring = adev->mman.buffer_funcs_ring;
>>> -        sched = &ring->sched;
>>> +        sched = adev->mman.buffer_funcs_scheds[0];
>>>           r = drm_sched_entity_init(&adev->mman.default_entity.base,
>>>                         DRM_SCHED_PRIORITY_KERNEL, &sched,
>>>                         1, NULL);
>>> @@ -2432,7 +2434,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>>                   struct dma_fence **fence)
>>>   {
>>>       struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>> -    struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>> +    struct amdgpu_ring *ring;
>>>       struct amdgpu_ttm_buffer_entity *entity;
>>>       struct amdgpu_res_cursor cursor;
>>>       u64 addr;
>>> @@ -2443,6 +2445,8 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
>>>         if (!fence)
>>>           return -EINVAL;
>>> +
>>> +    ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>>>       entity = &adev->mman.clear_entities[0];
>>>       *fence = dma_fence_get_stub();
>>>   @@ -2494,9 +2498,9 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>>>                  u64 k_job_id)
>>>   {
>>>       struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>>> -    struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
>>>       struct dma_fence *fence = NULL;
>>>       struct amdgpu_res_cursor dst;
>>> +    struct amdgpu_ring *ring;
>>>       int r, e;
>>>         if (!adev->mman.buffer_funcs_enabled) {
>>> @@ -2505,6 +2509,8 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
>>>           return -EINVAL;
>>>       }
>>>   +    ring = container_of(adev->mman.buffer_funcs_scheds[0], struct amdgpu_ring, sched);
>>> +
>>>       if (entity == NULL) {
>>>           e = atomic_inc_return(&adev->mman.next_clear_entity) %
>>>                         adev->mman.num_clear_entities;
>>> @@ -2579,6 +2585,27 @@ int amdgpu_ttm_evict_resources(struct amdgpu_device *adev, int mem_type)
>>>       return ttm_resource_manager_evict_all(&adev->mman.bdev, man);
>>>   }
>>>   +void amdgpu_sdma_set_buffer_funcs_scheds(struct amdgpu_device *adev,
>>> +                     const struct amdgpu_buffer_funcs *buffer_funcs)
>>> +{
>>> +    struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB(0)];
>>> +    struct drm_gpu_scheduler *sched;
>>> +    int i;
>>> +
>>> +    adev->mman.buffer_funcs = buffer_funcs;
>>> +
>>> +    for (i = 0; i < adev->sdma.num_instances; i++) {
>>> +        if (adev->sdma.has_page_queue)
>>> +            sched = &adev->sdma.instance[i].page.sched;
>>> +        else
>>> +            sched = &adev->sdma.instance[i].ring.sched;
>>> +        adev->mman.buffer_funcs_scheds[i] = sched;
>>> +    }
>>> +
>>> +    adev->mman.num_buffer_funcs_scheds = hub->sdma_invalidation_workaround ?
>>> +        1 : adev->sdma.num_instances;
>>> +}
>>> +
>>
>> Probably better to make all SDMA versions switch to amdgpu_sdma_set_buffer_funcs_scheds() in one patch, and then change amdgpu_sdma_set_buffer_funcs_scheds() to use more than one DMA engine in a second patch.
> 
> I'm not sure it's useful: this patch simply creates an array of schedulers, but every user of this array only uses the first sched.
> Enabling the use of multiple schedulers is done in the "drm/amdgpu: give ttm entities access to all the sdma scheds" patch.

Oh, good point as well. Yeah, in that case please go ahead with what you have currently.

Thanks,
Christian.

> 
> Pierre-Eric


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v2 02/20] drm/ttm: rework pipelined eviction fence handling
  2025-11-18 15:00   ` Thomas Hellström
@ 2025-11-19 14:57     ` Christian König
  0 siblings, 0 replies; 53+ messages in thread
From: Christian König @ 2025-11-19 14:57 UTC (permalink / raw)
  To: Thomas Hellström, Pierre-Eric Pelloux-Prayer, Alex Deucher,
	David Airlie, Simona Vetter, Huang Rui, Matthew Auld,
	Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Sumit Semwal
  Cc: amd-gfx, dri-devel, linux-kernel, linux-media, linaro-mm-sig

On 11/18/25 16:00, Thomas Hellström wrote:
> Hi, Pierre-Eric
> 
> On Thu, 2025-11-13 at 17:05 +0100, Pierre-Eric Pelloux-Prayer wrote:
>> Until now ttm stored a single pipelined eviction fence which means
>> drivers had to use a single entity for these evictions.
>>
>> To lift this requirement, this commit allows up to 8 entities to
>> be used.
>>
>> Ideally a dma_resv object would have been used as a container of
>> the eviction fences, but the locking rules make it complex.
>> All dma_resv objects have the same ww_class, which means "Attempting
>> to lock more mutexes after ww_acquire_done." is an error.
>>
>> One alternative considered was to introduce a 2nd ww_class for
>> specific resvs holding a single "transient" lock (= the resv lock
>> would only be held for a short period, without taking any other
>> locks).
> 
> Wouldn't it be possible to use lockdep_set_class_and_name() to modify
> the resv lock class for these particular resv objects after they are
> allocated? Reusing the resv code certainly sounds attractive.

Even if we can convince lockdep that this is unproblematic, I don't think re-using the dma_resv code here is a good idea.

We should avoid dynamic memory allocation as much as possible, and a static array seems to do the job just fine.

Regards,
Christian.

> 
> Thanks,
> Thomas
> 


^ permalink raw reply	[flat|nested] 53+ messages in thread

end of thread, other threads:[~2025-11-19 14:57 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-13 16:05 [PATCH v2 00/20] drm/amdgpu: use all SDMA instances for TTM clears and moves Pierre-Eric Pelloux-Prayer
2025-11-13 16:05 ` [PATCH v2 01/20] drm/amdgpu: give each kernel job a unique id Pierre-Eric Pelloux-Prayer
2025-11-14 12:26   ` Christian König
2025-11-14 14:36     ` Pierre-Eric Pelloux-Prayer
2025-11-14 14:57       ` Christian König
2025-11-13 16:05 ` [PATCH v2 02/20] drm/ttm: rework pipelined eviction fence handling Pierre-Eric Pelloux-Prayer
2025-11-14 12:47   ` Christian König
2025-11-18 15:00   ` Thomas Hellström
2025-11-19 14:57     ` Christian König
2025-11-13 16:05 ` [PATCH v2 03/20] drm/amdgpu: remove direct_submit arg from amdgpu_copy_buffer Pierre-Eric Pelloux-Prayer
2025-11-14 12:48   ` Christian König
2025-11-13 16:05 ` [PATCH v2 04/20] drm/amdgpu: introduce amdgpu_ttm_buffer_entity Pierre-Eric Pelloux-Prayer
2025-11-14 12:57   ` Christian König
2025-11-14 20:18     ` Felix Kuehling
2025-11-13 16:05 ` [PATCH v2 05/20] drm/amdgpu: pass the entity to use to ttm functions Pierre-Eric Pelloux-Prayer
2025-11-14 13:07   ` Christian König
2025-11-14 14:41     ` Pierre-Eric Pelloux-Prayer
2025-11-17  9:41       ` Pierre-Eric Pelloux-Prayer
2025-11-14 20:20   ` Felix Kuehling
2025-11-13 16:05 ` [PATCH v2 06/20] drm/amdgpu: statically assign gart windows to ttm entities Pierre-Eric Pelloux-Prayer
2025-11-14 15:15   ` Christian König
2025-11-14 20:24   ` Felix Kuehling
2025-11-19  9:55     ` Pierre-Eric Pelloux-Prayer
2025-11-13 16:05 ` [PATCH v2 07/20] drm/amdgpu: allocate multiple clear entities Pierre-Eric Pelloux-Prayer
2025-11-17  8:41   ` Christian König
2025-11-13 16:05 ` [PATCH v2 08/20] drm/amdgpu: allocate multiple move entities Pierre-Eric Pelloux-Prayer
2025-11-14 20:57   ` Felix Kuehling
2025-11-13 16:05 ` [PATCH v2 09/20] drm/amdgpu: pass optional dependency to amdgpu_fill_buffer Pierre-Eric Pelloux-Prayer
2025-11-17  8:43   ` Christian König
2025-11-13 16:05 ` [PATCH v2 10/20] drm/admgpu: handle resv dependencies in amdgpu_ttm_map_buffer Pierre-Eric Pelloux-Prayer
2025-11-17  8:44   ` Christian König
2025-11-19  8:28     ` Pierre-Eric Pelloux-Prayer
2025-11-13 16:05 ` [PATCH v2 11/20] drm/amdgpu: round robin through clear_entities in amdgpu_fill_buffer Pierre-Eric Pelloux-Prayer
2025-11-17  8:47   ` Christian König
2025-11-13 16:05 ` [PATCH v2 12/20] drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences Pierre-Eric Pelloux-Prayer
2025-11-14 20:57   ` Felix Kuehling
2025-11-17  9:07   ` Christian König
2025-11-13 16:05 ` [PATCH v2 13/20] drm/amdgpu: use multiple entities in amdgpu_move_blit Pierre-Eric Pelloux-Prayer
2025-11-17  9:12   ` Christian König
2025-11-13 16:05 ` [PATCH v2 14/20] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds Pierre-Eric Pelloux-Prayer
2025-11-17  9:30   ` Christian König
2025-11-13 16:05 ` [PATCH v2 15/20] drm/amdgpu: pass all the sdma scheds to amdgpu_mman Pierre-Eric Pelloux-Prayer
2025-11-14 21:23   ` Felix Kuehling
2025-11-17  9:46   ` Christian König
2025-11-19  9:34     ` Pierre-Eric Pelloux-Prayer
2025-11-19 10:49       ` Christian König
2025-11-13 16:05 ` [PATCH v2 16/20] drm/amdgpu: give ttm entities access to all the sdma scheds Pierre-Eric Pelloux-Prayer
2025-11-17  9:54   ` Christian König
2025-11-13 16:05 ` [PATCH v2 17/20] drm/amdgpu: get rid of amdgpu_ttm_clear_buffer Pierre-Eric Pelloux-Prayer
2025-11-13 16:05 ` [PATCH v2 18/20] drm/amdgpu: rename amdgpu_fill_buffer as amdgpu_ttm_clear_buffer Pierre-Eric Pelloux-Prayer
2025-11-17  9:56   ` Christian König
2025-11-13 16:06 ` [PATCH v2 19/20] drm/amdgpu: use larger gart window when possible Pierre-Eric Pelloux-Prayer
2025-11-13 16:06 ` [PATCH v2 20/20] drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE Pierre-Eric Pelloux-Prayer

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox