From: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
To: "Alex Deucher" <alexander.deucher@amd.com>,
"Christian König" <christian.koenig@amd.com>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Felix Kuehling" <Felix.Kuehling@amd.com>
Cc: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>,
<amd-gfx@lists.freedesktop.org>,
<dri-devel@lists.freedesktop.org>, <linux-kernel@vger.kernel.org>
Subject: [PATCH v2 04/20] drm/amdgpu: introduce amdgpu_ttm_buffer_entity
Date: Thu, 13 Nov 2025 17:05:45 +0100
Message-ID: <20251113160632.5889-5-pierre-eric.pelloux-prayer@amd.com>
In-Reply-To: <20251113160632.5889-1-pierre-eric.pelloux-prayer@amd.com>

No functional change for now, but this struct will gain more fields
in the next commit.

Technically the change introduces a synchronisation issue, because
dependencies between successive jobs are not handled properly. For
instance, amdgpu_ttm_clear_buffer calls amdgpu_ttm_map_buffer and then
amdgpu_ttm_fill_mem, which use different entities (default_entity,
then the move/clear entity). Everything still works as expected,
because all entities currently use the same SDMA instance and
default_entity has a higher priority, so its job always gets
scheduled first.

The next commits will handle these dependencies correctly.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
v2: renamed the struct to amdgpu_ttm_buffer_entity
---
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 30 +++++++++++++++++-------
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 12 ++++++----
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 13 ++++++----
4 files changed, 39 insertions(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index 9dcf51991b5b..8e2d41c9c271 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -687,7 +687,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
* itself at least for GART.
*/
mutex_lock(&adev->mman.gtt_window_lock);
- r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.high_pr,
+ r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
AMDGPU_FENCE_OWNER_UNDEFINED,
16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
&job, AMDGPU_KERNEL_JOB_ID_FLUSH_GPU_TLB);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index c985f57fa227..42d448cd6a6d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -224,7 +224,7 @@ static int amdgpu_ttm_map_buffer(struct ttm_buffer_object *bo,
num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
num_bytes = num_pages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
- r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
+ r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
AMDGPU_FENCE_OWNER_UNDEFINED,
num_dw * 4 + num_bytes,
AMDGPU_IB_POOL_DELAYED, &job,
@@ -1486,7 +1486,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
memcpy(adev->mman.sdma_access_ptr, buf, len);
num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
- r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
+ r = amdgpu_job_alloc_with_ib(adev, &adev->mman.default_entity.base,
AMDGPU_FENCE_OWNER_UNDEFINED,
num_dw * 4, AMDGPU_IB_POOL_DELAYED,
&job,
@@ -2168,7 +2168,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
ring = adev->mman.buffer_funcs_ring;
sched = &ring->sched;
- r = drm_sched_entity_init(&adev->mman.high_pr,
+ r = drm_sched_entity_init(&adev->mman.default_entity.base,
DRM_SCHED_PRIORITY_KERNEL, &sched,
1, NULL);
if (r) {
@@ -2178,18 +2178,30 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
return;
}
- r = drm_sched_entity_init(&adev->mman.low_pr,
+ r = drm_sched_entity_init(&adev->mman.clear_entity.base,
+ DRM_SCHED_PRIORITY_NORMAL, &sched,
+ 1, NULL);
+ if (r) {
+ dev_err(adev->dev,
+ "Failed setting up TTM BO clear entity (%d)\n",
+ r);
+ goto error_free_entity;
+ }
+
+ r = drm_sched_entity_init(&adev->mman.move_entity.base,
DRM_SCHED_PRIORITY_NORMAL, &sched,
1, NULL);
if (r) {
dev_err(adev->dev,
"Failed setting up TTM BO move entity (%d)\n",
r);
+ drm_sched_entity_destroy(&adev->mman.clear_entity.base);
goto error_free_entity;
}
} else {
- drm_sched_entity_destroy(&adev->mman.high_pr);
- drm_sched_entity_destroy(&adev->mman.low_pr);
+ drm_sched_entity_destroy(&adev->mman.default_entity.base);
+ drm_sched_entity_destroy(&adev->mman.clear_entity.base);
+ drm_sched_entity_destroy(&adev->mman.move_entity.base);
for (i = 0; i < TTM_NUM_MOVE_FENCES; i++) {
dma_fence_put(man->eviction_fences[i]);
man->eviction_fences[i] = NULL;
@@ -2207,7 +2219,7 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
return;
error_free_entity:
- drm_sched_entity_destroy(&adev->mman.high_pr);
+ drm_sched_entity_destroy(&adev->mman.default_entity.base);
}
static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
@@ -2219,8 +2231,8 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
{
enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
int r;
- struct drm_sched_entity *entity = delayed ? &adev->mman.low_pr :
- &adev->mman.high_pr;
+ struct drm_sched_entity *entity = delayed ? &adev->mman.clear_entity.base :
+ &adev->mman.move_entity.base;
r = amdgpu_job_alloc_with_ib(adev, entity,
AMDGPU_FENCE_OWNER_UNDEFINED,
num_dw * 4, pool, job, k_job_id);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 50e40380fe95..d2295d6c2b67 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -52,6 +52,10 @@ struct amdgpu_gtt_mgr {
spinlock_t lock;
};
+struct amdgpu_ttm_buffer_entity {
+ struct drm_sched_entity base;
+};
+
struct amdgpu_mman {
struct ttm_device bdev;
struct ttm_pool *ttm_pools;
@@ -64,10 +68,10 @@ struct amdgpu_mman {
bool buffer_funcs_enabled;
struct mutex gtt_window_lock;
- /* High priority scheduler entity for buffer moves */
- struct drm_sched_entity high_pr;
- /* Low priority scheduler entity for VRAM clearing */
- struct drm_sched_entity low_pr;
+
+ struct amdgpu_ttm_buffer_entity default_entity;
+ struct amdgpu_ttm_buffer_entity clear_entity;
+ struct amdgpu_ttm_buffer_entity move_entity;
struct amdgpu_vram_mgr vram_mgr;
struct amdgpu_gtt_mgr gtt_mgr;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 378af0b2aaa9..d74ff6e90590 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -45,7 +45,9 @@ svm_migrate_direct_mapping_addr(struct amdgpu_device *adev, u64 addr)
}
static int
-svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
+svm_migrate_gart_map(struct amdgpu_ring *ring,
+ struct amdgpu_ttm_buffer_entity *entity,
+ u64 npages,
dma_addr_t *addr, u64 *gart_addr, u64 flags)
{
struct amdgpu_device *adev = ring->adev;
@@ -63,7 +65,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring, u64 npages,
num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
num_bytes = npages * 8;
- r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
+ r = amdgpu_job_alloc_with_ib(adev, &entity->base,
AMDGPU_FENCE_OWNER_UNDEFINED,
num_dw * 4 + num_bytes,
AMDGPU_IB_POOL_DELAYED,
@@ -128,11 +130,14 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
{
const u64 GTT_MAX_PAGES = AMDGPU_GTT_MAX_TRANSFER_SIZE;
struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+ struct amdgpu_ttm_buffer_entity *entity;
u64 gart_s, gart_d;
struct dma_fence *next;
u64 size;
int r;
+ entity = &adev->mman.move_entity;
+
mutex_lock(&adev->mman.gtt_window_lock);
while (npages) {
@@ -140,10 +145,10 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
if (direction == FROM_VRAM_TO_RAM) {
gart_s = svm_migrate_direct_mapping_addr(adev, *vram);
- r = svm_migrate_gart_map(ring, size, sys, &gart_d, 0);
+ r = svm_migrate_gart_map(ring, entity, size, sys, &gart_d, 0);
} else if (direction == FROM_RAM_TO_VRAM) {
- r = svm_migrate_gart_map(ring, size, sys, &gart_s,
+ r = svm_migrate_gart_map(ring, entity, size, sys, &gart_s,
KFD_IOCTL_SVM_FLAG_GPU_RO);
gart_d = svm_migrate_direct_mapping_addr(adev, *vram);
}
--
2.43.0