* [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series
@ 2026-01-26 13:34 Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 01/11] drm/amdgpu: remove gart_window_lock usage from gmc v12_1 Pierre-Eric Pelloux-Prayer
` (10 more replies)
0 siblings, 11 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:34 UTC (permalink / raw)
Cc: Pierre-Eric Pelloux-Prayer, Christian König, Alex Deucher,
David Airlie, Felix Kuehling, Simona Vetter, amd-gfx, dri-devel,
linux-kernel
This series is a subset of the "use all SDMA instances" series.
It starts at the first modified patch and ends at the last patch
before the drm/ttm patch that got merged through drm-misc-next.
The main changes from v5 are:
* split "drm/amdgpu: remove AMDGPU_GTT_NUM_TRANSFER_WINDOWS" into
3 patches: one modifying vce, one dealing with ttm and one removing
AMDGPU_GTT_NUM_TRANSFER_WINDOWS.
* dropped "drm/amdgpu: use larger gart window when possible".
v5:
https://lists.freedesktop.org/archives/amd-gfx/2026-January/137268.html
v3 of the full series:
https://lists.freedesktop.org/archives/dri-devel/2025-November/537830.html
Pierre-Eric Pelloux-Prayer (11):
drm/amdgpu: remove gart_window_lock usage from gmc v12_1
drm/amdgpu: statically assign gart windows to ttm entities
drm/amdgpu: add amdgpu_ttm_buffer_entity_fini func
amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries
amdgpu/ttm: use amdgpu_gtt_mgr_alloc_entries
amdgpu/gtt: remove AMDGPU_GTT_NUM_TRANSFER_WINDOWS
drm/amdgpu: add missing lock in amdgpu_benchmark_do_move
drm/amdgpu: check entity lock is held in amdgpu_ttm_job_submit
drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE
drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds
drm/amdgpu: move sched status check inside
amdgpu_ttm_set_buffer_funcs_status
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c | 2 +
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 13 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 161 ++++++++++++------
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 23 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 18 --
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++
drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 31 +---
drivers/gpu/drm/amd/amdgpu/gmc_v12_1.c | 2 -
drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c | 31 +---
drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 31 +---
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 35 +---
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 35 +---
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 31 +---
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 31 +---
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 29 +---
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 29 +---
drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c | 29 +---
drivers/gpu/drm/amd/amdgpu/si_dma.c | 31 +---
drivers/gpu/drm/amd/amdgpu/vce_v1_0.c | 32 ++--
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 12 +-
24 files changed, 274 insertions(+), 365 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v6 01/11] drm/amdgpu: remove gart_window_lock usage from gmc v12_1
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:34 ` Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 02/11] drm/amdgpu: statically assign gart windows to ttm entities Pierre-Eric Pelloux-Prayer
` (9 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:34 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
Same as what was done in commit 904de683fa5f
("drm/amdgpu: remove gart_window_lock usage from gmc v12") for v12.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/gmc_v12_1.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v12_1.c b/drivers/gpu/drm/amd/amdgpu/gmc_v12_1.c
index ef6e550ce7c3..dc8865c5879c 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v12_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v12_1.c
@@ -345,9 +345,7 @@ static void gmc_v12_1_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
return;
}
- mutex_lock(&adev->mman.gtt_window_lock);
gmc_v12_1_flush_vm_hub(adev, vmid, vmhub, 0);
- mutex_unlock(&adev->mman.gtt_window_lock);
return;
}
--
2.43.0
* [PATCH v6 02/11] drm/amdgpu: statically assign gart windows to ttm entities
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 01/11] drm/amdgpu: remove gart_window_lock usage from gmc v12_1 Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:34 ` Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 03/11] drm/amdgpu: add amdgpu_ttm_buffer_entity_fini func Pierre-Eric Pelloux-Prayer
` (8 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:34 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter,
Felix Kuehling
Cc: Pierre-Eric Pelloux-Prayer, Felix Kuehling, amd-gfx, dri-devel,
linux-kernel
If multiple entities share the same window, we must make sure
that jobs using them are executed sequentially.
This commit gives each entity its own windows, so jobs from
multiple entities can execute in parallel if needed (for now
they all use the first SDMA engine, so it makes no difference
yet).
The entity stores the GART window offsets, centralizing the
"window id" to "window offset" mapping in a single place.
default_entity doesn't get any windows reserved since it has
no use for them.
---
v3:
- renamed gart_window_lock -> lock (Christian)
- added amdgpu_ttm_buffer_entity_init (Christian)
- fixed gart_addr in svm_migrate_gart_map (Felix)
- renamed gart_window_idX -> gart_window_offs[]
- added amdgpu_compute_gart_address
v4:
- u32 -> u64
- added kerneldoc
v5:
- removed gtt_window_lock
- simplified gart window creation and use: entities using a
single window now use window #0 instead of #1
- fix dst_addr calculation in kfd_migrate.c
---
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Acked-by: Felix Kuehling <felix.kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c | 6 +--
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 64 +++++++++++++++++-------
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 21 ++++++--
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c | 12 ++---
4 files changed, 72 insertions(+), 31 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index d9ff68a43178..fcde88e3a6b9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -737,7 +737,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
* translation. Avoid this by doing the invalidation from the SDMA
* itself at least for GART.
*/
- mutex_lock(&adev->mman.gtt_window_lock);
+ mutex_lock(&adev->mman.default_entity.lock);
r = amdgpu_job_alloc_with_ib(ring->adev, &adev->mman.default_entity.base,
AMDGPU_FENCE_OWNER_UNDEFINED,
16 * 4, AMDGPU_IB_POOL_IMMEDIATE,
@@ -750,7 +750,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
job->ibs->ptr[job->ibs->length_dw++] = ring->funcs->nop;
amdgpu_ring_pad_ib(ring, &job->ibs[0]);
fence = amdgpu_job_submit(job);
- mutex_unlock(&adev->mman.gtt_window_lock);
+ mutex_unlock(&adev->mman.default_entity.lock);
dma_fence_wait(fence, false);
dma_fence_put(fence);
@@ -758,7 +758,7 @@ void amdgpu_gmc_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
return;
error_alloc:
- mutex_unlock(&adev->mman.gtt_window_lock);
+ mutex_unlock(&adev->mman.default_entity.lock);
dev_err(adev->dev, "Error flushing GPU TLB using the SDMA (%d)!\n", r);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 6d7a5bf2d0c8..5850a013e65e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -228,9 +228,7 @@ static int amdgpu_ttm_map_buffer(struct amdgpu_ttm_buffer_entity *entity,
*size = min(*size, (uint64_t)num_pages * PAGE_SIZE - offset);
- *addr = adev->gmc.gart_start;
- *addr += (u64)window * AMDGPU_GTT_MAX_TRANSFER_SIZE *
- AMDGPU_GPU_PAGE_SIZE;
+ *addr = amdgpu_compute_gart_address(&adev->gmc, entity, window);
*addr += offset;
num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
@@ -248,7 +246,7 @@ static int amdgpu_ttm_map_buffer(struct amdgpu_ttm_buffer_entity *entity,
src_addr += job->ibs[0].gpu_addr;
dst_addr = amdgpu_bo_gpu_offset(adev->gart.bo);
- dst_addr += window * AMDGPU_GTT_MAX_TRANSFER_SIZE * 8;
+ dst_addr += (entity->gart_window_offs[window] >> AMDGPU_GPU_PAGE_SHIFT) * 8;
amdgpu_emit_copy_buffer(adev, &job->ibs[0], src_addr,
dst_addr, num_bytes, 0);
@@ -313,7 +311,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
amdgpu_res_first(src->mem, src->offset, size, &src_mm);
amdgpu_res_first(dst->mem, dst->offset, size, &dst_mm);
- mutex_lock(&adev->mman.gtt_window_lock);
+ mutex_lock(&entity->lock);
while (src_mm.remaining) {
uint64_t from, to, cur_size, tiling_flags;
uint32_t num_type, data_format, max_com, write_compress_disable;
@@ -368,7 +366,7 @@ static int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
amdgpu_res_next(&dst_mm, cur_size);
}
error:
- mutex_unlock(&adev->mman.gtt_window_lock);
+ mutex_unlock(&entity->lock);
*f = fence;
return r;
}
@@ -1580,7 +1578,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
if (r)
goto out;
- mutex_lock(&adev->mman.gtt_window_lock);
+ mutex_lock(&adev->mman.default_entity.lock);
amdgpu_res_first(abo->tbo.resource, offset, len, &src_mm);
src_addr = amdgpu_ttm_domain_start(adev, bo->resource->mem_type) +
src_mm.start;
@@ -1592,7 +1590,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
PAGE_SIZE, 0);
fence = amdgpu_ttm_job_submit(adev, job, num_dw);
- mutex_unlock(&adev->mman.gtt_window_lock);
+ mutex_unlock(&adev->mman.default_entity.lock);
if (!dma_fence_wait_timeout(fence, false, adev->sdma_timeout))
r = -ETIMEDOUT;
@@ -2014,6 +2012,27 @@ static void amdgpu_ttm_free_mmio_remap_bo(struct amdgpu_device *adev)
adev->rmmio_remap.bo = NULL;
}
+static int amdgpu_ttm_buffer_entity_init(struct amdgpu_ttm_buffer_entity *entity,
+ int starting_gart_window,
+ u32 num_gart_windows)
+{
+ int i;
+
+ mutex_init(&entity->lock);
+
+ if (ARRAY_SIZE(entity->gart_window_offs) < num_gart_windows)
+ return starting_gart_window;
+
+ for (i = 0; i < num_gart_windows; i++) {
+ entity->gart_window_offs[i] =
+ (u64)starting_gart_window * AMDGPU_GTT_MAX_TRANSFER_SIZE *
+ AMDGPU_GPU_PAGE_SIZE;
+ starting_gart_window++;
+ }
+
+ return starting_gart_window;
+}
+
/*
* amdgpu_ttm_init - Init the memory management (ttm) as well as various
* gtt/vram related fields.
@@ -2028,8 +2047,6 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
uint64_t gtt_size;
int r;
- mutex_init(&adev->mman.gtt_window_lock);
-
dma_set_max_seg_size(adev->dev, UINT_MAX);
/* No others user of address space so set it to 0 */
r = ttm_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev,
@@ -2300,6 +2317,7 @@ void amdgpu_ttm_fini(struct amdgpu_device *adev)
void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
{
struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
+ u32 used_windows;
uint64_t size;
int r;
@@ -2343,6 +2361,13 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
drm_sched_entity_destroy(&adev->mman.clear_entity.base);
goto error_free_entity;
}
+
+ /* Statically assign GART windows to each entity. */
+ used_windows = amdgpu_ttm_buffer_entity_init(&adev->mman.default_entity, 0, 0);
+ used_windows = amdgpu_ttm_buffer_entity_init(&adev->mman.move_entity,
+ used_windows, 2);
+ used_windows = amdgpu_ttm_buffer_entity_init(&adev->mman.clear_entity,
+ used_windows, 1);
} else {
drm_sched_entity_destroy(&adev->mman.default_entity.base);
drm_sched_entity_destroy(&adev->mman.clear_entity.base);
@@ -2501,6 +2526,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
struct dma_fence **fence)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+ struct amdgpu_ttm_buffer_entity *entity;
struct amdgpu_res_cursor cursor;
u64 addr;
int r = 0;
@@ -2511,11 +2537,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
if (!fence)
return -EINVAL;
+ entity = &adev->mman.clear_entity;
*fence = dma_fence_get_stub();
amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &cursor);
- mutex_lock(&adev->mman.gtt_window_lock);
+ mutex_lock(&entity->lock);
while (cursor.remaining) {
struct dma_fence *next = NULL;
u64 size;
@@ -2528,13 +2555,12 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
/* Never clear more than 256MiB at once to avoid timeouts */
size = min(cursor.size, 256ULL << 20);
- r = amdgpu_ttm_map_buffer(&adev->mman.clear_entity,
- &bo->tbo, bo->tbo.resource, &cursor,
- 1, false, &size, &addr);
+ r = amdgpu_ttm_map_buffer(entity, &bo->tbo, bo->tbo.resource, &cursor,
+ 0, false, &size, &addr);
if (r)
goto err;
- r = amdgpu_ttm_fill_mem(adev, &adev->mman.clear_entity, 0, addr, size, resv,
+ r = amdgpu_ttm_fill_mem(adev, entity, 0, addr, size, resv,
&next, true,
AMDGPU_KERNEL_JOB_ID_TTM_CLEAR_BUFFER);
if (r)
@@ -2546,7 +2572,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
amdgpu_res_next(&cursor, size);
}
err:
- mutex_unlock(&adev->mman.gtt_window_lock);
+ mutex_unlock(&entity->lock);
return r;
}
@@ -2571,7 +2597,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
amdgpu_res_first(bo->tbo.resource, 0, amdgpu_bo_size(bo), &dst);
- mutex_lock(&adev->mman.gtt_window_lock);
+ mutex_lock(&entity->lock);
while (dst.remaining) {
struct dma_fence *next;
uint64_t cur_size, to;
@@ -2580,7 +2606,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
cur_size = min(dst.size, 256ULL << 20);
r = amdgpu_ttm_map_buffer(entity, &bo->tbo, bo->tbo.resource, &dst,
- 1, false, &cur_size, &to);
+ 0, false, &cur_size, &to);
if (r)
goto error;
@@ -2596,7 +2622,7 @@ int amdgpu_fill_buffer(struct amdgpu_ttm_buffer_entity *entity,
amdgpu_res_next(&dst, cur_size);
}
error:
- mutex_unlock(&adev->mman.gtt_window_lock);
+ mutex_unlock(&entity->lock);
if (f)
*f = dma_fence_get(fence);
dma_fence_put(fence);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 143201ecea3f..871388b86503 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -29,6 +29,7 @@
#include <drm/ttm/ttm_placement.h>
#include "amdgpu_vram_mgr.h"
#include "amdgpu_hmm.h"
+#include "amdgpu_gmc.h"
#define AMDGPU_PL_GDS (TTM_PL_PRIV + 0)
#define AMDGPU_PL_GWS (TTM_PL_PRIV + 1)
@@ -39,7 +40,7 @@
#define __AMDGPU_PL_NUM (TTM_PL_PRIV + 6)
#define AMDGPU_GTT_MAX_TRANSFER_SIZE 512
-#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS 2
+#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS 3
extern const struct attribute_group amdgpu_vram_mgr_attr_group;
extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
@@ -54,6 +55,8 @@ struct amdgpu_gtt_mgr {
struct amdgpu_ttm_buffer_entity {
struct drm_sched_entity base;
+ struct mutex lock;
+ u64 gart_window_offs[2];
};
struct amdgpu_mman {
@@ -67,8 +70,7 @@ struct amdgpu_mman {
struct amdgpu_ring *buffer_funcs_ring;
bool buffer_funcs_enabled;
- struct mutex gtt_window_lock;
-
+ /* @default_entity: for workarounds, has no gart windows */
struct amdgpu_ttm_buffer_entity default_entity;
struct amdgpu_ttm_buffer_entity clear_entity;
struct amdgpu_ttm_buffer_entity move_entity;
@@ -205,6 +207,19 @@ static inline int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo,
}
#endif
+/**
+ * amdgpu_compute_gart_address() - Returns GART address of an entity's window
+ * @gmc: The &struct amdgpu_gmc instance to use
+ * @entity: The &struct amdgpu_ttm_buffer_entity owning the GART window
+ * @index: The window to use (must be 0 or 1)
+ */
+static inline u64 amdgpu_compute_gart_address(struct amdgpu_gmc *gmc,
+ struct amdgpu_ttm_buffer_entity *entity,
+ int index)
+{
+ return gmc->gart_start + entity->gart_window_offs[index];
+}
+
void amdgpu_ttm_tt_set_user_pages(struct ttm_tt *ttm, struct amdgpu_hmm_range *range);
int amdgpu_ttm_tt_get_userptr(const struct ttm_buffer_object *tbo,
uint64_t *user_addr);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 3df2bbd935e2..b3d304aab686 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -59,8 +59,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
void *cpu_addr;
int r;
- /* use gart window 0 */
- *gart_addr = adev->gmc.gart_start;
+ *gart_addr = amdgpu_compute_gart_address(&adev->gmc, entity, 0);
num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
num_bytes = npages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;
@@ -78,6 +77,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
src_addr += job->ibs[0].gpu_addr;
dst_addr = amdgpu_bo_gpu_offset(adev->gart.bo);
+ dst_addr += (entity->gart_window_offs[0] >> AMDGPU_GPU_PAGE_SHIFT) * 8;
amdgpu_emit_copy_buffer(adev, &job->ibs[0], src_addr,
dst_addr, num_bytes, 0);
@@ -116,7 +116,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring,
* multiple GTT_MAX_PAGES transfer, all sdma operations are serialized, wait for
* the last sdma finish fence which is returned to check copy memory is done.
*
- * Context: Process context, takes and releases gtt_window_lock
+ * Context: Process context
*
* Return:
* 0 - OK, otherwise error code
@@ -136,9 +136,9 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
u64 size;
int r;
- entity = &adev->mman.default_entity;
+ entity = &adev->mman.move_entity;
- mutex_lock(&adev->mman.gtt_window_lock);
+ mutex_lock(&entity->lock);
while (npages) {
size = min(GTT_MAX_PAGES, npages);
@@ -175,7 +175,7 @@ svm_migrate_copy_memory_gart(struct amdgpu_device *adev, dma_addr_t *sys,
}
out_unlock:
- mutex_unlock(&adev->mman.gtt_window_lock);
+ mutex_unlock(&entity->lock);
return r;
}
--
2.43.0
* [PATCH v6 03/11] drm/amdgpu: add amdgpu_ttm_buffer_entity_fini func
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 01/11] drm/amdgpu: remove gart_window_lock usage from gmc v12_1 Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 02/11] drm/amdgpu: statically assign gart windows to ttm entities Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:34 ` Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 04/11] amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries Pierre-Eric Pelloux-Prayer
` (7 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:34 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
Add init/fini functions holding all the init and teardown
code for amdgpu_ttm_buffer_entity.
For now only the drm_sched_entity init/destroy calls are moved
there, but as entities gain new members this will keep the
code simpler.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 73 +++++++++++++------------
1 file changed, 38 insertions(+), 35 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 5850a013e65e..8b38b5ed9a9c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -2013,10 +2013,18 @@ static void amdgpu_ttm_free_mmio_remap_bo(struct amdgpu_device *adev)
}
static int amdgpu_ttm_buffer_entity_init(struct amdgpu_ttm_buffer_entity *entity,
+ enum drm_sched_priority prio,
+ struct drm_gpu_scheduler **scheds,
+ int num_schedulers,
int starting_gart_window,
u32 num_gart_windows)
{
- int i;
+ int i, r;
+
+ r = drm_sched_entity_init(&entity->base, prio, scheds, num_schedulers, NULL);
+ if (r)
+ return r;
+
mutex_init(&entity->lock);
@@ -2033,6 +2041,11 @@ static int amdgpu_ttm_buffer_entity_init(struct amdgpu_ttm_buffer_entity *entity
return starting_gart_window;
}
+static void amdgpu_ttm_buffer_entity_fini(struct amdgpu_ttm_buffer_entity *entity)
+{
+ drm_sched_entity_destroy(&entity->base);
+}
+
/*
* amdgpu_ttm_init - Init the memory management (ttm) as well as various
* gtt/vram related fields.
@@ -2317,7 +2330,6 @@ void amdgpu_ttm_fini(struct amdgpu_device *adev)
void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
{
struct ttm_resource_manager *man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);
- u32 used_windows;
uint64_t size;
int r;
@@ -2331,47 +2343,36 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
ring = adev->mman.buffer_funcs_ring;
sched = &ring->sched;
- r = drm_sched_entity_init(&adev->mman.default_entity.base,
- DRM_SCHED_PRIORITY_KERNEL, &sched,
- 1, NULL);
- if (r) {
+ r = amdgpu_ttm_buffer_entity_init(&adev->mman.default_entity,
+ DRM_SCHED_PRIORITY_KERNEL, &sched, 1,
+ 0, 0);
+ if (r < 0) {
dev_err(adev->dev,
- "Failed setting up TTM BO move entity (%d)\n",
- r);
+ "Failed setting up TTM entity (%d)\n", r);
return;
}
- r = drm_sched_entity_init(&adev->mman.clear_entity.base,
- DRM_SCHED_PRIORITY_NORMAL, &sched,
- 1, NULL);
- if (r) {
+ r = amdgpu_ttm_buffer_entity_init(&adev->mman.clear_entity,
+ DRM_SCHED_PRIORITY_NORMAL, &sched, 1,
+ r, 1);
+ if (r < 0) {
dev_err(adev->dev,
- "Failed setting up TTM BO clear entity (%d)\n",
- r);
- goto error_free_entity;
+ "Failed setting up TTM BO clear entity (%d)\n", r);
+ goto error_free_default_entity;
}
- r = drm_sched_entity_init(&adev->mman.move_entity.base,
- DRM_SCHED_PRIORITY_NORMAL, &sched,
- 1, NULL);
- if (r) {
+ r = amdgpu_ttm_buffer_entity_init(&adev->mman.move_entity,
+ DRM_SCHED_PRIORITY_NORMAL, &sched, 1,
+ r, 2);
+ if (r < 0) {
dev_err(adev->dev,
- "Failed setting up TTM BO move entity (%d)\n",
- r);
- drm_sched_entity_destroy(&adev->mman.clear_entity.base);
- goto error_free_entity;
+ "Failed setting up TTM BO move entity (%d)\n", r);
+ goto error_free_clear_entity;
}
-
- /* Statically assign GART windows to each entity. */
- used_windows = amdgpu_ttm_buffer_entity_init(&adev->mman.default_entity, 0, 0);
- used_windows = amdgpu_ttm_buffer_entity_init(&adev->mman.move_entity,
- used_windows, 2);
- used_windows = amdgpu_ttm_buffer_entity_init(&adev->mman.clear_entity,
- used_windows, 1);
} else {
- drm_sched_entity_destroy(&adev->mman.default_entity.base);
- drm_sched_entity_destroy(&adev->mman.clear_entity.base);
- drm_sched_entity_destroy(&adev->mman.move_entity.base);
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.default_entity);
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.clear_entity);
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.move_entity);
/* Drop all the old fences since re-creating the scheduler entities
* will allocate new contexts.
*/
@@ -2388,8 +2389,10 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
return;
-error_free_entity:
- drm_sched_entity_destroy(&adev->mman.default_entity.base);
+error_free_clear_entity:
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.clear_entity);
+error_free_default_entity:
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.default_entity);
}
static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
--
2.43.0
* [PATCH v6 04/11] amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patchs for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
` (2 preceding siblings ...)
2026-01-26 13:34 ` [PATCH v6 03/11] drm/amdgpu: add amdgpu_ttm_buffer_entity_fini func Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:34 ` Pierre-Eric Pelloux-Prayer
2026-01-26 19:09 ` Christian König
2026-01-28 13:21 ` kernel test robot
2026-01-26 13:35 ` [PATCH v6 05/11] amdgpu/ttm: " Pierre-Eric Pelloux-Prayer
` (6 subsequent siblings)
10 siblings, 2 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:34 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
Instead of statically reserving a number of GTT pages for
VCE 1.0, use amdgpu_gtt_mgr_alloc_entries to allocate the
pages when initializing VCE 1.0.
While at it, remove the "does the VCPU BO already have a
32-bit address" check, as suggested by Timur.
This decouples VCE init from GTT init.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 1 -
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 18 ------------
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h | 2 +-
drivers/gpu/drm/amd/amdgpu/vce_v1_0.c | 32 +++++++++++----------
4 files changed, 18 insertions(+), 35 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index dd9b845d5783..f2e89fb4b666 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -332,7 +332,6 @@ int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
ttm_resource_manager_init(man, &adev->mman.bdev, gtt_size);
start = AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS;
- start += amdgpu_vce_required_gart_pages(adev);
size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
drm_mm_init(&mgr->mm, start, size);
spin_lock_init(&mgr->lock);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index a7d8f1ce6ac2..eb4a15db2ef2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -450,24 +450,6 @@ void amdgpu_vce_free_handles(struct amdgpu_device *adev, struct drm_file *filp)
}
}
-/**
- * amdgpu_vce_required_gart_pages() - gets number of GART pages required by VCE
- *
- * @adev: amdgpu_device pointer
- *
- * Returns how many GART pages we need before GTT for the VCE IP block.
- * For VCE1, see vce_v1_0_ensure_vcpu_bo_32bit_addr for details.
- * For VCE2+, this is not needed so return zero.
- */
-u32 amdgpu_vce_required_gart_pages(struct amdgpu_device *adev)
-{
- /* VCE IP block not added yet, so can't use amdgpu_ip_version */
- if (adev->family == AMDGPU_FAMILY_SI)
- return 512;
-
- return 0;
-}
-
/**
* amdgpu_vce_get_create_msg - generate a VCE create msg
*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
index 1c3464ce5037..a59d87e09004 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
@@ -52,6 +52,7 @@ struct amdgpu_vce {
uint32_t srbm_soft_reset;
unsigned num_rings;
uint32_t keyselect;
+ struct drm_mm_node node;
};
int amdgpu_vce_early_init(struct amdgpu_device *adev);
@@ -61,7 +62,6 @@ int amdgpu_vce_entity_init(struct amdgpu_device *adev, struct amdgpu_ring *ring)
int amdgpu_vce_suspend(struct amdgpu_device *adev);
int amdgpu_vce_resume(struct amdgpu_device *adev);
void amdgpu_vce_free_handles(struct amdgpu_device *adev, struct drm_file *filp);
-u32 amdgpu_vce_required_gart_pages(struct amdgpu_device *adev);
int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
struct amdgpu_ib *ib);
int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p,
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v1_0.c
index 9ae424618556..bca34a30dbf3 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v1_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v1_0.c
@@ -47,11 +47,6 @@
#define VCE_V1_0_DATA_SIZE (7808 * (AMDGPU_MAX_VCE_HANDLES + 1))
#define VCE_STATUS_VCPU_REPORT_FW_LOADED_MASK 0x02
-#define VCE_V1_0_GART_PAGE_START \
- (AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS)
-#define VCE_V1_0_GART_ADDR_START \
- (VCE_V1_0_GART_PAGE_START * AMDGPU_GPU_PAGE_SIZE)
-
static void vce_v1_0_set_ring_funcs(struct amdgpu_device *adev);
static void vce_v1_0_set_irq_funcs(struct amdgpu_device *adev);
@@ -541,21 +536,24 @@ static int vce_v1_0_ensure_vcpu_bo_32bit_addr(struct amdgpu_device *adev)
u64 num_pages = ALIGN(bo_size, AMDGPU_GPU_PAGE_SIZE) / AMDGPU_GPU_PAGE_SIZE;
u64 pa = amdgpu_gmc_vram_pa(adev, adev->vce.vcpu_bo);
u64 flags = AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE | AMDGPU_PTE_VALID;
+ u64 vce_gart_start;
+ int r;
- /*
- * Check if the VCPU BO already has a 32-bit address.
- * Eg. if MC is configured to put VRAM in the low address range.
- */
- if (gpu_addr <= max_vcpu_bo_addr)
- return 0;
+ r = amdgpu_gtt_mgr_alloc_entries(&adev->mman.gtt_mgr,
+ &adev->vce.node, num_pages,
+ DRM_MM_INSERT_LOW);
+ if (r)
+ return r;
+
+ vce_gart_start = adev->vce.node.start * AMDGPU_GPU_PAGE_SIZE;
/* Check if we can map the VCPU BO in GART to a 32-bit address. */
- if (adev->gmc.gart_start + VCE_V1_0_GART_ADDR_START > max_vcpu_bo_addr)
+ if (adev->gmc.gart_start + vce_gart_start > max_vcpu_bo_addr)
return -EINVAL;
- amdgpu_gart_map_vram_range(adev, pa, VCE_V1_0_GART_PAGE_START,
+ amdgpu_gart_map_vram_range(adev, pa, adev->vce.node.start,
num_pages, flags, adev->gart.ptr);
- adev->vce.gpu_addr = adev->gmc.gart_start + VCE_V1_0_GART_ADDR_START;
+ adev->vce.gpu_addr = adev->gmc.gart_start + vce_gart_start;
if (adev->vce.gpu_addr > max_vcpu_bo_addr)
return -EINVAL;
@@ -610,7 +608,11 @@ static int vce_v1_0_sw_fini(struct amdgpu_ip_block *ip_block)
if (r)
return r;
- return amdgpu_vce_sw_fini(adev);
+ r = amdgpu_vce_sw_fini(adev);
+
+ amdgpu_gtt_mgr_free_entries(&adev->mman.gtt_mgr, &adev->vce.node);
+
+ return r;
}
/**
--
2.43.0
* [PATCH v6 05/11] amdgpu/ttm: use amdgpu_gtt_mgr_alloc_entries
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
` (3 preceding siblings ...)
2026-01-26 13:34 ` [PATCH v6 04/11] amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:35 ` Pierre-Eric Pelloux-Prayer
2026-01-27 10:09 ` Christian König
2026-01-26 13:35 ` [PATCH v6 06/11] amdgpu/gtt: remove AMDGPU_GTT_NUM_TRANSFER_WINDOWS Pierre-Eric Pelloux-Prayer
` (5 subsequent siblings)
10 siblings, 1 reply; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:35 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
Use amdgpu_gtt_mgr_alloc_entries for each entity instead
of reserving a fixed number of pages.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 66 ++++++++++++++++---------
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 1 +
2 files changed, 43 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 8b38b5ed9a9c..d23d3046919b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -2012,37 +2012,47 @@ static void amdgpu_ttm_free_mmio_remap_bo(struct amdgpu_device *adev)
adev->rmmio_remap.bo = NULL;
}
-static int amdgpu_ttm_buffer_entity_init(struct amdgpu_ttm_buffer_entity *entity,
+static int amdgpu_ttm_buffer_entity_init(struct amdgpu_gtt_mgr *mgr,
+ struct amdgpu_ttm_buffer_entity *entity,
enum drm_sched_priority prio,
struct drm_gpu_scheduler **scheds,
int num_schedulers,
- int starting_gart_window,
u32 num_gart_windows)
{
- int i, r;
+ int i, r, num_pages;
r = drm_sched_entity_init(&entity->base, prio, scheds, num_schedulers, NULL);
if (r)
return r;
-
mutex_init(&entity->lock);
if (ARRAY_SIZE(entity->gart_window_offs) < num_gart_windows)
- return starting_gart_window;
+ return -EINVAL;
+ if (num_gart_windows == 0)
+ return 0;
+
+ num_pages = num_gart_windows * AMDGPU_GTT_MAX_TRANSFER_SIZE;
+ r = amdgpu_gtt_mgr_alloc_entries(mgr, &entity->node, num_pages,
+ DRM_MM_INSERT_BEST);
+ if (r) {
+ drm_sched_entity_destroy(&entity->base);
+ return r;
+ }
for (i = 0; i < num_gart_windows; i++) {
entity->gart_window_offs[i] =
- (u64)starting_gart_window * AMDGPU_GTT_MAX_TRANSFER_SIZE *
- AMDGPU_GPU_PAGE_SIZE;
- starting_gart_window++;
+ (entity->node.start + (u64)i * AMDGPU_GTT_MAX_TRANSFER_SIZE) *
+ AMDGPU_GPU_PAGE_SIZE;
}
- return starting_gart_window;
+ return 0;
}
-static void amdgpu_ttm_buffer_entity_fini(struct amdgpu_ttm_buffer_entity *entity)
+static void amdgpu_ttm_buffer_entity_fini(struct amdgpu_gtt_mgr *mgr,
+ struct amdgpu_ttm_buffer_entity *entity)
{
+ amdgpu_gtt_mgr_free_entries(mgr, &entity->node);
drm_sched_entity_destroy(&entity->base);
}
@@ -2343,36 +2353,42 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
ring = adev->mman.buffer_funcs_ring;
sched = &ring->sched;
- r = amdgpu_ttm_buffer_entity_init(&adev->mman.default_entity,
- DRM_SCHED_PRIORITY_KERNEL, &sched, 1,
- 0, 0);
+ r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
+ &adev->mman.default_entity,
+ DRM_SCHED_PRIORITY_KERNEL,
+ &sched, 1, 0);
if (r < 0) {
dev_err(adev->dev,
"Failed setting up TTM entity (%d)\n", r);
return;
}
- r = amdgpu_ttm_buffer_entity_init(&adev->mman.clear_entity,
- DRM_SCHED_PRIORITY_NORMAL, &sched, 1,
- r, 1);
+ r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
+ &adev->mman.clear_entity,
+ DRM_SCHED_PRIORITY_NORMAL,
+ &sched, 1, 1);
if (r < 0) {
dev_err(adev->dev,
"Failed setting up TTM BO clear entity (%d)\n", r);
goto error_free_default_entity;
}
- r = amdgpu_ttm_buffer_entity_init(&adev->mman.move_entity,
- DRM_SCHED_PRIORITY_NORMAL, &sched, 1,
- r, 2);
+ r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
+ &adev->mman.move_entity,
+ DRM_SCHED_PRIORITY_NORMAL,
+ &sched, 1, 2);
if (r < 0) {
dev_err(adev->dev,
"Failed setting up TTM BO move entity (%d)\n", r);
goto error_free_clear_entity;
}
} else {
- amdgpu_ttm_buffer_entity_fini(&adev->mman.default_entity);
- amdgpu_ttm_buffer_entity_fini(&adev->mman.clear_entity);
- amdgpu_ttm_buffer_entity_fini(&adev->mman.move_entity);
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
+ &adev->mman.default_entity);
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
+ &adev->mman.clear_entity);
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
+ &adev->mman.move_entity);
/* Drop all the old fences since re-creating the scheduler entities
* will allocate new contexts.
*/
@@ -2390,9 +2406,11 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
return;
error_free_clear_entity:
- amdgpu_ttm_buffer_entity_fini(&adev->mman.clear_entity);
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
+ &adev->mman.clear_entity);
error_free_default_entity:
- amdgpu_ttm_buffer_entity_fini(&adev->mman.default_entity);
+ amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
+ &adev->mman.default_entity);
}
static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 871388b86503..5419344d60fb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -56,6 +56,7 @@ struct amdgpu_gtt_mgr {
struct amdgpu_ttm_buffer_entity {
struct drm_sched_entity base;
struct mutex lock;
+ struct drm_mm_node node;
u64 gart_window_offs[2];
};
--
2.43.0
* [PATCH v6 06/11] amdgpu/gtt: remove AMDGPU_GTT_NUM_TRANSFER_WINDOWS
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
` (4 preceding siblings ...)
2026-01-26 13:35 ` [PATCH v6 05/11] amdgpu/ttm: " Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:35 ` Pierre-Eric Pelloux-Prayer
2026-01-27 10:18 ` Christian König
2026-01-27 10:22 ` Christian König
2026-01-26 13:35 ` [PATCH v6 07/11] drm/amdgpu: add missing lock in amdgpu_benchmark_do_move Pierre-Eric Pelloux-Prayer
` (4 subsequent siblings)
10 siblings, 2 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:35 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
It is no longer needed: the previous patches made each TTM entity allocate its own GART windows through amdgpu_gtt_mgr_alloc_entries instead of carving them out of a fixed reserved region at the start of the GART.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 5 +----
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 1 -
2 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
index f2e89fb4b666..9b0bcf6aca44 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
@@ -324,16 +324,13 @@ int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
{
struct amdgpu_gtt_mgr *mgr = &adev->mman.gtt_mgr;
struct ttm_resource_manager *man = &mgr->manager;
- uint64_t start, size;
man->use_tt = true;
man->func = &amdgpu_gtt_mgr_func;
ttm_resource_manager_init(man, &adev->mman.bdev, gtt_size);
- start = AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS;
- size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
- drm_mm_init(&mgr->mm, start, size);
+ drm_mm_init(&mgr->mm, 0, adev->gmc.gart_size >> PAGE_SHIFT);
spin_lock_init(&mgr->lock);
ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_TT, &mgr->manager);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index 5419344d60fb..c8284cb2d22c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -40,7 +40,6 @@
#define __AMDGPU_PL_NUM (TTM_PL_PRIV + 6)
#define AMDGPU_GTT_MAX_TRANSFER_SIZE 512
-#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS 3
extern const struct attribute_group amdgpu_vram_mgr_attr_group;
extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
--
2.43.0
* [PATCH v6 07/11] drm/amdgpu: add missing lock in amdgpu_benchmark_do_move
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
` (5 preceding siblings ...)
2026-01-26 13:35 ` [PATCH v6 06/11] amdgpu/gtt: remove AMDGPU_GTT_NUM_TRANSFER_WINDOWS Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:35 ` Pierre-Eric Pelloux-Prayer
2026-01-26 13:35 ` [PATCH v6 08/11] drm/amdgpu: check entity lock is held in amdgpu_ttm_job_submit Pierre-Eric Pelloux-Prayer
` (3 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:35 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
Taking the entity lock is required to guarantee the ordering of
execution. The next commit will add a check that the lock is
held.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
index 1cbba9803d31..98ccd7ab9e9a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_benchmark.c
@@ -35,6 +35,7 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
struct dma_fence *fence;
int i, r;
+ mutex_lock(&adev->mman.default_entity.lock);
stime = ktime_get();
for (i = 0; i < n; i++) {
r = amdgpu_copy_buffer(adev, &adev->mman.default_entity,
@@ -47,6 +48,7 @@ static int amdgpu_benchmark_do_move(struct amdgpu_device *adev, unsigned size,
if (r)
goto exit_do_move;
}
+ mutex_unlock(&adev->mman.default_entity.lock);
exit_do_move:
etime = ktime_get();
--
2.43.0
* [PATCH v6 08/11] drm/amdgpu: check entity lock is held in amdgpu_ttm_job_submit
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
` (6 preceding siblings ...)
2026-01-26 13:35 ` [PATCH v6 07/11] drm/amdgpu: add missing lock in amdgpu_benchmark_do_move Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:35 ` Pierre-Eric Pelloux-Prayer
2026-01-26 13:35 ` [PATCH v6 09/11] drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE Pierre-Eric Pelloux-Prayer
` (2 subsequent siblings)
10 siblings, 0 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:35 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
drm_sched_job_arm and drm_sched_entity_push_job must be called
under the same lock to guarantee the order of execution.
This commit adds a check in amdgpu_ttm_job_submit and fixes the
places where the lock was missing.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index d23d3046919b..e149092da8f1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -163,7 +163,8 @@ static void amdgpu_evict_flags(struct ttm_buffer_object *bo,
}
static struct dma_fence *
-amdgpu_ttm_job_submit(struct amdgpu_device *adev, struct amdgpu_job *job, u32 num_dw)
+amdgpu_ttm_job_submit(struct amdgpu_device *adev, struct amdgpu_ttm_buffer_entity *entity,
+ struct amdgpu_job *job, u32 num_dw)
{
struct amdgpu_ring *ring;
@@ -171,6 +172,8 @@ amdgpu_ttm_job_submit(struct amdgpu_device *adev, struct amdgpu_job *job, u32 nu
amdgpu_ring_pad_ib(ring, &job->ibs[0]);
WARN_ON(job->ibs[0].length_dw > num_dw);
+ lockdep_assert_held(&entity->lock);
+
return amdgpu_job_submit(job);
}
@@ -267,7 +270,7 @@ static int amdgpu_ttm_map_buffer(struct amdgpu_ttm_buffer_entity *entity,
amdgpu_gart_map_vram_range(adev, pa, 0, num_pages, flags, cpu_addr);
}
- dma_fence_put(amdgpu_ttm_job_submit(adev, job, num_dw));
+ dma_fence_put(amdgpu_ttm_job_submit(adev, entity, job, num_dw));
return 0;
}
@@ -1589,7 +1592,7 @@ static int amdgpu_ttm_access_memory_sdma(struct ttm_buffer_object *bo,
amdgpu_emit_copy_buffer(adev, &job->ibs[0], src_addr, dst_addr,
PAGE_SIZE, 0);
- fence = amdgpu_ttm_job_submit(adev, job, num_dw);
+ fence = amdgpu_ttm_job_submit(adev, &adev->mman.default_entity, job, num_dw);
mutex_unlock(&adev->mman.default_entity.lock);
if (!dma_fence_wait_timeout(fence, false, adev->sdma_timeout))
@@ -2484,7 +2487,7 @@ int amdgpu_copy_buffer(struct amdgpu_device *adev,
byte_count -= cur_size_in_bytes;
}
- *fence = amdgpu_ttm_job_submit(adev, job, num_dw);
+ *fence = amdgpu_ttm_job_submit(adev, entity, job, num_dw);
return 0;
@@ -2527,7 +2530,7 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_device *adev,
byte_count -= cur_size;
}
- *fence = amdgpu_ttm_job_submit(adev, job, num_dw);
+ *fence = amdgpu_ttm_job_submit(adev, entity, job, num_dw);
return 0;
}
--
2.43.0
* [PATCH v6 09/11] drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
` (7 preceding siblings ...)
2026-01-26 13:35 ` [PATCH v6 08/11] drm/amdgpu: check entity lock is held in amdgpu_ttm_job_submit Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:35 ` Pierre-Eric Pelloux-Prayer
2026-01-26 13:35 ` [PATCH v6 10/11] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds Pierre-Eric Pelloux-Prayer
2026-01-26 13:35 ` [PATCH v6 11/11] drm/amdgpu: move sched status check inside amdgpu_ttm_set_buffer_funcs_status Pierre-Eric Pelloux-Prayer
10 siblings, 0 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:35 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
This makes copies and evictions faster when GART windows are required, since each window now covers twice as many pages per mapping.
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
index c8284cb2d22c..6fce4bc10faf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
@@ -39,7 +39,7 @@
#define AMDGPU_PL_MMIO_REMAP (TTM_PL_PRIV + 5)
#define __AMDGPU_PL_NUM (TTM_PL_PRIV + 6)
-#define AMDGPU_GTT_MAX_TRANSFER_SIZE 512
+#define AMDGPU_GTT_MAX_TRANSFER_SIZE 1024
extern const struct attribute_group amdgpu_vram_mgr_attr_group;
extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
--
2.43.0
* [PATCH v6 10/11] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patches for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
` (8 preceding siblings ...)
2026-01-26 13:35 ` [PATCH v6 09/11] drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:35 ` Pierre-Eric Pelloux-Prayer
2026-01-26 13:35 ` [PATCH v6 11/11] drm/amdgpu: move sched status check inside amdgpu_ttm_set_buffer_funcs_status Pierre-Eric Pelloux-Prayer
10 siblings, 0 replies; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:35 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
All SDMA versions used the same logic, so add a helper and move the
common code to a single place.
---
v2: pass amdgpu_vm_pte_funcs as well
v3: drop all the *_set_vm_pte_funcs one liners
v5: rebased
---
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 2 ++
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 17 ++++++++++++
drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 31 ++++++---------------
drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c | 31 ++++++---------------
drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 31 ++++++---------------
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 35 ++++++------------------
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 35 ++++++------------------
drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 31 ++++++---------------
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 31 ++++++---------------
drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c | 29 ++++++--------------
drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c | 29 ++++++--------------
drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c | 29 ++++++--------------
drivers/gpu/drm/amd/amdgpu/si_dma.c | 31 ++++++---------------
13 files changed, 113 insertions(+), 249 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 9c11535c44c6..31b63f88de0f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1528,6 +1528,8 @@ struct dma_fence *amdgpu_device_enforce_isolation(struct amdgpu_device *adev,
bool amdgpu_device_has_display_hardware(struct amdgpu_device *adev);
ssize_t amdgpu_get_soft_full_reset_mask(struct amdgpu_ring *ring);
ssize_t amdgpu_show_reset_mask(char *buf, uint32_t supported_reset);
+void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
+ const struct amdgpu_vm_pte_funcs *vm_pte_funcs);
/* atpx handler */
#if defined(CONFIG_VGA_SWITCHEROO)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 6a2ea200d90c..24c1f95ec507 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -3227,3 +3227,20 @@ void amdgpu_vm_print_task_info(struct amdgpu_device *adev,
task_info->process_name, task_info->tgid,
task_info->task.comm, task_info->task.pid);
}
+
+void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
+ const struct amdgpu_vm_pte_funcs *vm_pte_funcs)
+{
+ struct drm_gpu_scheduler *sched;
+ int i;
+
+ for (i = 0; i < adev->sdma.num_instances; i++) {
+ if (adev->sdma.has_page_queue)
+ sched = &adev->sdma.instance[i].page.sched;
+ else
+ sched = &adev->sdma.instance[i].ring.sched;
+ adev->vm_manager.vm_pte_scheds[i] = sched;
+ }
+ adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
+ adev->vm_manager.vm_pte_funcs = vm_pte_funcs;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
index 9e8715b4739d..22780c09177d 100644
--- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
+++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
@@ -53,7 +53,6 @@ static const u32 sdma_offsets[SDMA_MAX_INSTANCE] =
static void cik_sdma_set_ring_funcs(struct amdgpu_device *adev);
static void cik_sdma_set_irq_funcs(struct amdgpu_device *adev);
static void cik_sdma_set_buffer_funcs(struct amdgpu_device *adev);
-static void cik_sdma_set_vm_pte_funcs(struct amdgpu_device *adev);
static int cik_sdma_soft_reset(struct amdgpu_ip_block *ip_block);
u32 amdgpu_cik_gpu_check_soft_reset(struct amdgpu_device *adev);
@@ -919,6 +918,14 @@ static void cik_enable_sdma_mgls(struct amdgpu_device *adev,
}
}
+static const struct amdgpu_vm_pte_funcs cik_sdma_vm_pte_funcs = {
+ .copy_pte_num_dw = 7,
+ .copy_pte = cik_sdma_vm_copy_pte,
+
+ .write_pte = cik_sdma_vm_write_pte,
+ .set_pte_pde = cik_sdma_vm_set_pte_pde,
+};
+
static int cik_sdma_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -933,7 +940,7 @@ static int cik_sdma_early_init(struct amdgpu_ip_block *ip_block)
cik_sdma_set_ring_funcs(adev);
cik_sdma_set_irq_funcs(adev);
cik_sdma_set_buffer_funcs(adev);
- cik_sdma_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &cik_sdma_vm_pte_funcs);
return 0;
}
@@ -1337,26 +1344,6 @@ static void cik_sdma_set_buffer_funcs(struct amdgpu_device *adev)
adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
}
-static const struct amdgpu_vm_pte_funcs cik_sdma_vm_pte_funcs = {
- .copy_pte_num_dw = 7,
- .copy_pte = cik_sdma_vm_copy_pte,
-
- .write_pte = cik_sdma_vm_write_pte,
- .set_pte_pde = cik_sdma_vm_set_pte_pde,
-};
-
-static void cik_sdma_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- unsigned i;
-
- adev->vm_manager.vm_pte_funcs = &cik_sdma_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- adev->vm_manager.vm_pte_scheds[i] =
- &adev->sdma.instance[i].ring.sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-}
-
const struct amdgpu_ip_block_version cik_sdma_ip_block =
{
.type = AMD_IP_BLOCK_TYPE_SDMA,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
index 92ce580647cd..0090ace49024 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
@@ -51,7 +51,6 @@
static void sdma_v2_4_set_ring_funcs(struct amdgpu_device *adev);
static void sdma_v2_4_set_buffer_funcs(struct amdgpu_device *adev);
-static void sdma_v2_4_set_vm_pte_funcs(struct amdgpu_device *adev);
static void sdma_v2_4_set_irq_funcs(struct amdgpu_device *adev);
MODULE_FIRMWARE("amdgpu/topaz_sdma.bin");
@@ -809,6 +808,14 @@ static void sdma_v2_4_ring_emit_wreg(struct amdgpu_ring *ring,
amdgpu_ring_write(ring, val);
}
+static const struct amdgpu_vm_pte_funcs sdma_v2_4_vm_pte_funcs = {
+ .copy_pte_num_dw = 7,
+ .copy_pte = sdma_v2_4_vm_copy_pte,
+
+ .write_pte = sdma_v2_4_vm_write_pte,
+ .set_pte_pde = sdma_v2_4_vm_set_pte_pde,
+};
+
static int sdma_v2_4_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -822,7 +829,7 @@ static int sdma_v2_4_early_init(struct amdgpu_ip_block *ip_block)
sdma_v2_4_set_ring_funcs(adev);
sdma_v2_4_set_buffer_funcs(adev);
- sdma_v2_4_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v2_4_vm_pte_funcs);
sdma_v2_4_set_irq_funcs(adev);
return 0;
@@ -1232,26 +1239,6 @@ static void sdma_v2_4_set_buffer_funcs(struct amdgpu_device *adev)
adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
}
-static const struct amdgpu_vm_pte_funcs sdma_v2_4_vm_pte_funcs = {
- .copy_pte_num_dw = 7,
- .copy_pte = sdma_v2_4_vm_copy_pte,
-
- .write_pte = sdma_v2_4_vm_write_pte,
- .set_pte_pde = sdma_v2_4_vm_set_pte_pde,
-};
-
-static void sdma_v2_4_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- unsigned i;
-
- adev->vm_manager.vm_pte_funcs = &sdma_v2_4_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- adev->vm_manager.vm_pte_scheds[i] =
- &adev->sdma.instance[i].ring.sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-}
-
const struct amdgpu_ip_block_version sdma_v2_4_ip_block = {
.type = AMD_IP_BLOCK_TYPE_SDMA,
.major = 2,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
index 1c076bd1cf73..2526d393162a 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
@@ -51,7 +51,6 @@
static void sdma_v3_0_set_ring_funcs(struct amdgpu_device *adev);
static void sdma_v3_0_set_buffer_funcs(struct amdgpu_device *adev);
-static void sdma_v3_0_set_vm_pte_funcs(struct amdgpu_device *adev);
static void sdma_v3_0_set_irq_funcs(struct amdgpu_device *adev);
MODULE_FIRMWARE("amdgpu/tonga_sdma.bin");
@@ -1082,6 +1081,14 @@ static void sdma_v3_0_ring_emit_wreg(struct amdgpu_ring *ring,
amdgpu_ring_write(ring, val);
}
+static const struct amdgpu_vm_pte_funcs sdma_v3_0_vm_pte_funcs = {
+ .copy_pte_num_dw = 7,
+ .copy_pte = sdma_v3_0_vm_copy_pte,
+
+ .write_pte = sdma_v3_0_vm_write_pte,
+ .set_pte_pde = sdma_v3_0_vm_set_pte_pde,
+};
+
static int sdma_v3_0_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -1102,7 +1109,7 @@ static int sdma_v3_0_early_init(struct amdgpu_ip_block *ip_block)
sdma_v3_0_set_ring_funcs(adev);
sdma_v3_0_set_buffer_funcs(adev);
- sdma_v3_0_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v3_0_vm_pte_funcs);
sdma_v3_0_set_irq_funcs(adev);
return 0;
@@ -1674,26 +1681,6 @@ static void sdma_v3_0_set_buffer_funcs(struct amdgpu_device *adev)
adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
}
-static const struct amdgpu_vm_pte_funcs sdma_v3_0_vm_pte_funcs = {
- .copy_pte_num_dw = 7,
- .copy_pte = sdma_v3_0_vm_copy_pte,
-
- .write_pte = sdma_v3_0_vm_write_pte,
- .set_pte_pde = sdma_v3_0_vm_set_pte_pde,
-};
-
-static void sdma_v3_0_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- unsigned i;
-
- adev->vm_manager.vm_pte_funcs = &sdma_v3_0_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- adev->vm_manager.vm_pte_scheds[i] =
- &adev->sdma.instance[i].ring.sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-}
-
const struct amdgpu_ip_block_version sdma_v3_0_ip_block =
{
.type = AMD_IP_BLOCK_TYPE_SDMA,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index f38004e6064e..a35d9951e22a 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -129,7 +129,6 @@ static const struct amdgpu_hwip_reg_entry sdma_reg_list_4_0[] = {
static void sdma_v4_0_set_ring_funcs(struct amdgpu_device *adev);
static void sdma_v4_0_set_buffer_funcs(struct amdgpu_device *adev);
-static void sdma_v4_0_set_vm_pte_funcs(struct amdgpu_device *adev);
static void sdma_v4_0_set_irq_funcs(struct amdgpu_device *adev);
static void sdma_v4_0_set_ras_funcs(struct amdgpu_device *adev);
@@ -1751,6 +1750,14 @@ static bool sdma_v4_0_fw_support_paging_queue(struct amdgpu_device *adev)
}
}
+static const struct amdgpu_vm_pte_funcs sdma_v4_0_vm_pte_funcs = {
+ .copy_pte_num_dw = 7,
+ .copy_pte = sdma_v4_0_vm_copy_pte,
+
+ .write_pte = sdma_v4_0_vm_write_pte,
+ .set_pte_pde = sdma_v4_0_vm_set_pte_pde,
+};
+
static int sdma_v4_0_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -1769,7 +1776,7 @@ static int sdma_v4_0_early_init(struct amdgpu_ip_block *ip_block)
sdma_v4_0_set_ring_funcs(adev);
sdma_v4_0_set_buffer_funcs(adev);
- sdma_v4_0_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v4_0_vm_pte_funcs);
sdma_v4_0_set_irq_funcs(adev);
sdma_v4_0_set_ras_funcs(adev);
@@ -2615,30 +2622,6 @@ static void sdma_v4_0_set_buffer_funcs(struct amdgpu_device *adev)
adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
}
-static const struct amdgpu_vm_pte_funcs sdma_v4_0_vm_pte_funcs = {
- .copy_pte_num_dw = 7,
- .copy_pte = sdma_v4_0_vm_copy_pte,
-
- .write_pte = sdma_v4_0_vm_write_pte,
- .set_pte_pde = sdma_v4_0_vm_set_pte_pde,
-};
-
-static void sdma_v4_0_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- struct drm_gpu_scheduler *sched;
- unsigned i;
-
- adev->vm_manager.vm_pte_funcs = &sdma_v4_0_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- if (adev->sdma.has_page_queue)
- sched = &adev->sdma.instance[i].page.sched;
- else
- sched = &adev->sdma.instance[i].ring.sched;
- adev->vm_manager.vm_pte_scheds[i] = sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-}
-
static void sdma_v4_0_get_ras_error_count(uint32_t value,
uint32_t instance,
uint32_t *sec_count)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index a1443990d5c6..7f77367848d4 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -104,7 +104,6 @@ static const struct amdgpu_hwip_reg_entry sdma_reg_list_4_4_2[] = {
static void sdma_v4_4_2_set_ring_funcs(struct amdgpu_device *adev);
static void sdma_v4_4_2_set_buffer_funcs(struct amdgpu_device *adev);
-static void sdma_v4_4_2_set_vm_pte_funcs(struct amdgpu_device *adev);
static void sdma_v4_4_2_set_irq_funcs(struct amdgpu_device *adev);
static void sdma_v4_4_2_set_ras_funcs(struct amdgpu_device *adev);
static void sdma_v4_4_2_update_reset_mask(struct amdgpu_device *adev);
@@ -1347,6 +1346,14 @@ static const struct amdgpu_sdma_funcs sdma_v4_4_2_sdma_funcs = {
.soft_reset_kernel_queue = &sdma_v4_4_2_soft_reset_engine,
};
+static const struct amdgpu_vm_pte_funcs sdma_v4_4_2_vm_pte_funcs = {
+ .copy_pte_num_dw = 7,
+ .copy_pte = sdma_v4_4_2_vm_copy_pte,
+
+ .write_pte = sdma_v4_4_2_vm_write_pte,
+ .set_pte_pde = sdma_v4_4_2_vm_set_pte_pde,
+};
+
static int sdma_v4_4_2_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -1362,7 +1369,7 @@ static int sdma_v4_4_2_early_init(struct amdgpu_ip_block *ip_block)
sdma_v4_4_2_set_ring_funcs(adev);
sdma_v4_4_2_set_buffer_funcs(adev);
- sdma_v4_4_2_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v4_4_2_vm_pte_funcs);
sdma_v4_4_2_set_irq_funcs(adev);
sdma_v4_4_2_set_ras_funcs(adev);
return 0;
@@ -2316,30 +2323,6 @@ static void sdma_v4_4_2_set_buffer_funcs(struct amdgpu_device *adev)
adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
}
-static const struct amdgpu_vm_pte_funcs sdma_v4_4_2_vm_pte_funcs = {
- .copy_pte_num_dw = 7,
- .copy_pte = sdma_v4_4_2_vm_copy_pte,
-
- .write_pte = sdma_v4_4_2_vm_write_pte,
- .set_pte_pde = sdma_v4_4_2_vm_set_pte_pde,
-};
-
-static void sdma_v4_4_2_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- struct drm_gpu_scheduler *sched;
- unsigned i;
-
- adev->vm_manager.vm_pte_funcs = &sdma_v4_4_2_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- if (adev->sdma.has_page_queue)
- sched = &adev->sdma.instance[i].page.sched;
- else
- sched = &adev->sdma.instance[i].ring.sched;
- adev->vm_manager.vm_pte_scheds[i] = sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-}
-
/**
* sdma_v4_4_2_update_reset_mask - update reset mask for SDMA
* @adev: Pointer to the AMDGPU device structure
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
index 7811cbb1f7ba..445e2b4828b3 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
@@ -110,7 +110,6 @@ static const struct amdgpu_hwip_reg_entry sdma_reg_list_5_0[] = {
static void sdma_v5_0_set_ring_funcs(struct amdgpu_device *adev);
static void sdma_v5_0_set_buffer_funcs(struct amdgpu_device *adev);
-static void sdma_v5_0_set_vm_pte_funcs(struct amdgpu_device *adev);
static void sdma_v5_0_set_irq_funcs(struct amdgpu_device *adev);
static int sdma_v5_0_stop_queue(struct amdgpu_ring *ring);
static int sdma_v5_0_restore_queue(struct amdgpu_ring *ring);
@@ -1357,6 +1356,13 @@ static const struct amdgpu_sdma_funcs sdma_v5_0_sdma_funcs = {
.soft_reset_kernel_queue = &sdma_v5_0_soft_reset_engine,
};
+static const struct amdgpu_vm_pte_funcs sdma_v5_0_vm_pte_funcs = {
+ .copy_pte_num_dw = 7,
+ .copy_pte = sdma_v5_0_vm_copy_pte,
+ .write_pte = sdma_v5_0_vm_write_pte,
+ .set_pte_pde = sdma_v5_0_vm_set_pte_pde,
+};
+
static int sdma_v5_0_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -1368,7 +1374,7 @@ static int sdma_v5_0_early_init(struct amdgpu_ip_block *ip_block)
sdma_v5_0_set_ring_funcs(adev);
sdma_v5_0_set_buffer_funcs(adev);
- sdma_v5_0_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v5_0_vm_pte_funcs);
sdma_v5_0_set_irq_funcs(adev);
sdma_v5_0_set_mqd_funcs(adev);
@@ -2073,27 +2079,6 @@ static void sdma_v5_0_set_buffer_funcs(struct amdgpu_device *adev)
}
}
-static const struct amdgpu_vm_pte_funcs sdma_v5_0_vm_pte_funcs = {
- .copy_pte_num_dw = 7,
- .copy_pte = sdma_v5_0_vm_copy_pte,
- .write_pte = sdma_v5_0_vm_write_pte,
- .set_pte_pde = sdma_v5_0_vm_set_pte_pde,
-};
-
-static void sdma_v5_0_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- unsigned i;
-
- if (adev->vm_manager.vm_pte_funcs == NULL) {
- adev->vm_manager.vm_pte_funcs = &sdma_v5_0_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- adev->vm_manager.vm_pte_scheds[i] =
- &adev->sdma.instance[i].ring.sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
- }
-}
-
const struct amdgpu_ip_block_version sdma_v5_0_ip_block = {
.type = AMD_IP_BLOCK_TYPE_SDMA,
.major = 5,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
index dbe5b8f109f6..4a98042a6578 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
@@ -111,7 +111,6 @@ static const struct amdgpu_hwip_reg_entry sdma_reg_list_5_2[] = {
static void sdma_v5_2_set_ring_funcs(struct amdgpu_device *adev);
static void sdma_v5_2_set_buffer_funcs(struct amdgpu_device *adev);
-static void sdma_v5_2_set_vm_pte_funcs(struct amdgpu_device *adev);
static void sdma_v5_2_set_irq_funcs(struct amdgpu_device *adev);
static int sdma_v5_2_stop_queue(struct amdgpu_ring *ring);
static int sdma_v5_2_restore_queue(struct amdgpu_ring *ring);
@@ -1248,6 +1247,13 @@ static void sdma_v5_2_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
amdgpu_ring_emit_reg_wait(ring, reg1, mask, mask);
}
+static const struct amdgpu_vm_pte_funcs sdma_v5_2_vm_pte_funcs = {
+ .copy_pte_num_dw = 7,
+ .copy_pte = sdma_v5_2_vm_copy_pte,
+ .write_pte = sdma_v5_2_vm_write_pte,
+ .set_pte_pde = sdma_v5_2_vm_set_pte_pde,
+};
+
static int sdma_v5_2_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -1259,7 +1265,7 @@ static int sdma_v5_2_early_init(struct amdgpu_ip_block *ip_block)
sdma_v5_2_set_ring_funcs(adev);
sdma_v5_2_set_buffer_funcs(adev);
- sdma_v5_2_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v5_2_vm_pte_funcs);
sdma_v5_2_set_irq_funcs(adev);
sdma_v5_2_set_mqd_funcs(adev);
@@ -2084,27 +2090,6 @@ static void sdma_v5_2_set_buffer_funcs(struct amdgpu_device *adev)
}
}
-static const struct amdgpu_vm_pte_funcs sdma_v5_2_vm_pte_funcs = {
- .copy_pte_num_dw = 7,
- .copy_pte = sdma_v5_2_vm_copy_pte,
- .write_pte = sdma_v5_2_vm_write_pte,
- .set_pte_pde = sdma_v5_2_vm_set_pte_pde,
-};
-
-static void sdma_v5_2_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- unsigned i;
-
- if (adev->vm_manager.vm_pte_funcs == NULL) {
- adev->vm_manager.vm_pte_funcs = &sdma_v5_2_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- adev->vm_manager.vm_pte_scheds[i] =
- &adev->sdma.instance[i].ring.sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
- }
-}
-
const struct amdgpu_ip_block_version sdma_v5_2_ip_block = {
.type = AMD_IP_BLOCK_TYPE_SDMA,
.major = 5,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
index eec659194718..45d13ac09f9b 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v6_0.c
@@ -120,7 +120,6 @@ static const struct amdgpu_hwip_reg_entry sdma_reg_list_6_0[] = {
static void sdma_v6_0_set_ring_funcs(struct amdgpu_device *adev);
static void sdma_v6_0_set_buffer_funcs(struct amdgpu_device *adev);
-static void sdma_v6_0_set_vm_pte_funcs(struct amdgpu_device *adev);
static void sdma_v6_0_set_irq_funcs(struct amdgpu_device *adev);
static int sdma_v6_0_start(struct amdgpu_device *adev);
@@ -1280,6 +1279,13 @@ static void sdma_v6_0_get_csa_info(struct amdgpu_device *adev,
csa_info->alignment = SDMA6_CSA_ALIGNMENT;
}
+static const struct amdgpu_vm_pte_funcs sdma_v6_0_vm_pte_funcs = {
+ .copy_pte_num_dw = 7,
+ .copy_pte = sdma_v6_0_vm_copy_pte,
+ .write_pte = sdma_v6_0_vm_write_pte,
+ .set_pte_pde = sdma_v6_0_vm_set_pte_pde,
+};
+
static int sdma_v6_0_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -1308,7 +1314,7 @@ static int sdma_v6_0_early_init(struct amdgpu_ip_block *ip_block)
sdma_v6_0_set_ring_funcs(adev);
sdma_v6_0_set_buffer_funcs(adev);
- sdma_v6_0_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v6_0_vm_pte_funcs);
sdma_v6_0_set_irq_funcs(adev);
sdma_v6_0_set_mqd_funcs(adev);
sdma_v6_0_set_ras_funcs(adev);
@@ -1902,25 +1908,6 @@ static void sdma_v6_0_set_buffer_funcs(struct amdgpu_device *adev)
adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
}
-static const struct amdgpu_vm_pte_funcs sdma_v6_0_vm_pte_funcs = {
- .copy_pte_num_dw = 7,
- .copy_pte = sdma_v6_0_vm_copy_pte,
- .write_pte = sdma_v6_0_vm_write_pte,
- .set_pte_pde = sdma_v6_0_vm_set_pte_pde,
-};
-
-static void sdma_v6_0_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- unsigned i;
-
- adev->vm_manager.vm_pte_funcs = &sdma_v6_0_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- adev->vm_manager.vm_pte_scheds[i] =
- &adev->sdma.instance[i].ring.sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-}
-
const struct amdgpu_ip_block_version sdma_v6_0_ip_block = {
.type = AMD_IP_BLOCK_TYPE_SDMA,
.major = 6,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
index 8d16ef257bcb..f938be0524cd 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
@@ -119,7 +119,6 @@ static const struct amdgpu_hwip_reg_entry sdma_reg_list_7_0[] = {
static void sdma_v7_0_set_ring_funcs(struct amdgpu_device *adev);
static void sdma_v7_0_set_buffer_funcs(struct amdgpu_device *adev);
-static void sdma_v7_0_set_vm_pte_funcs(struct amdgpu_device *adev);
static void sdma_v7_0_set_irq_funcs(struct amdgpu_device *adev);
static int sdma_v7_0_start(struct amdgpu_device *adev);
@@ -1264,6 +1263,13 @@ static void sdma_v7_0_get_csa_info(struct amdgpu_device *adev,
csa_info->alignment = SDMA7_CSA_ALIGNMENT;
}
+static const struct amdgpu_vm_pte_funcs sdma_v7_0_vm_pte_funcs = {
+ .copy_pte_num_dw = 8,
+ .copy_pte = sdma_v7_0_vm_copy_pte,
+ .write_pte = sdma_v7_0_vm_write_pte,
+ .set_pte_pde = sdma_v7_0_vm_set_pte_pde,
+};
+
static int sdma_v7_0_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -1294,7 +1300,7 @@ static int sdma_v7_0_early_init(struct amdgpu_ip_block *ip_block)
sdma_v7_0_set_ring_funcs(adev);
sdma_v7_0_set_buffer_funcs(adev);
- sdma_v7_0_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v7_0_vm_pte_funcs);
sdma_v7_0_set_irq_funcs(adev);
sdma_v7_0_set_mqd_funcs(adev);
adev->sdma.get_csa_info = &sdma_v7_0_get_csa_info;
@@ -1843,25 +1849,6 @@ static void sdma_v7_0_set_buffer_funcs(struct amdgpu_device *adev)
adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
}
-static const struct amdgpu_vm_pte_funcs sdma_v7_0_vm_pte_funcs = {
- .copy_pte_num_dw = 8,
- .copy_pte = sdma_v7_0_vm_copy_pte,
- .write_pte = sdma_v7_0_vm_write_pte,
- .set_pte_pde = sdma_v7_0_vm_set_pte_pde,
-};
-
-static void sdma_v7_0_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- unsigned i;
-
- adev->vm_manager.vm_pte_funcs = &sdma_v7_0_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- adev->vm_manager.vm_pte_scheds[i] =
- &adev->sdma.instance[i].ring.sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-}
-
const struct amdgpu_ip_block_version sdma_v7_0_ip_block = {
.type = AMD_IP_BLOCK_TYPE_SDMA,
.major = 7,
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c
index 5bc45c3e00d1..16031b8d310a 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_1.c
@@ -110,7 +110,6 @@ static const struct amdgpu_hwip_reg_entry sdma_reg_list_7_1[] = {
static void sdma_v7_1_set_ring_funcs(struct amdgpu_device *adev);
static void sdma_v7_1_set_buffer_funcs(struct amdgpu_device *adev);
-static void sdma_v7_1_set_vm_pte_funcs(struct amdgpu_device *adev);
static void sdma_v7_1_set_irq_funcs(struct amdgpu_device *adev);
static int sdma_v7_1_inst_start(struct amdgpu_device *adev,
uint32_t inst_mask);
@@ -1248,6 +1247,13 @@ static void sdma_v7_1_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
amdgpu_ring_emit_reg_wait(ring, reg1, mask, mask);
}
+static const struct amdgpu_vm_pte_funcs sdma_v7_1_vm_pte_funcs = {
+ .copy_pte_num_dw = 8,
+ .copy_pte = sdma_v7_1_vm_copy_pte,
+ .write_pte = sdma_v7_1_vm_write_pte,
+ .set_pte_pde = sdma_v7_1_vm_set_pte_pde,
+};
+
static int sdma_v7_1_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -1261,7 +1267,7 @@ static int sdma_v7_1_early_init(struct amdgpu_ip_block *ip_block)
sdma_v7_1_set_ring_funcs(adev);
sdma_v7_1_set_buffer_funcs(adev);
- sdma_v7_1_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &sdma_v7_1_vm_pte_funcs);
sdma_v7_1_set_irq_funcs(adev);
sdma_v7_1_set_mqd_funcs(adev);
@@ -1764,25 +1770,6 @@ static void sdma_v7_1_set_buffer_funcs(struct amdgpu_device *adev)
adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
}
-static const struct amdgpu_vm_pte_funcs sdma_v7_1_vm_pte_funcs = {
- .copy_pte_num_dw = 8,
- .copy_pte = sdma_v7_1_vm_copy_pte,
- .write_pte = sdma_v7_1_vm_write_pte,
- .set_pte_pde = sdma_v7_1_vm_set_pte_pde,
-};
-
-static void sdma_v7_1_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- unsigned i;
-
- adev->vm_manager.vm_pte_funcs = &sdma_v7_1_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- adev->vm_manager.vm_pte_scheds[i] =
- &adev->sdma.instance[i].ring.sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-}
-
const struct amdgpu_ip_block_version sdma_v7_1_ip_block = {
.type = AMD_IP_BLOCK_TYPE_SDMA,
.major = 7,
diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c b/drivers/gpu/drm/amd/amdgpu/si_dma.c
index 74fcaa340d9b..3e58feb2d5e4 100644
--- a/drivers/gpu/drm/amd/amdgpu/si_dma.c
+++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c
@@ -37,7 +37,6 @@ const u32 sdma_offsets[SDMA_MAX_INSTANCE] =
static void si_dma_set_ring_funcs(struct amdgpu_device *adev);
static void si_dma_set_buffer_funcs(struct amdgpu_device *adev);
-static void si_dma_set_vm_pte_funcs(struct amdgpu_device *adev);
static void si_dma_set_irq_funcs(struct amdgpu_device *adev);
/**
@@ -473,6 +472,14 @@ static void si_dma_ring_emit_wreg(struct amdgpu_ring *ring,
amdgpu_ring_write(ring, val);
}
+static const struct amdgpu_vm_pte_funcs si_dma_vm_pte_funcs = {
+ .copy_pte_num_dw = 5,
+ .copy_pte = si_dma_vm_copy_pte,
+
+ .write_pte = si_dma_vm_write_pte,
+ .set_pte_pde = si_dma_vm_set_pte_pde,
+};
+
static int si_dma_early_init(struct amdgpu_ip_block *ip_block)
{
struct amdgpu_device *adev = ip_block->adev;
@@ -481,7 +488,7 @@ static int si_dma_early_init(struct amdgpu_ip_block *ip_block)
si_dma_set_ring_funcs(adev);
si_dma_set_buffer_funcs(adev);
- si_dma_set_vm_pte_funcs(adev);
+ amdgpu_sdma_set_vm_pte_scheds(adev, &si_dma_vm_pte_funcs);
si_dma_set_irq_funcs(adev);
return 0;
@@ -830,26 +837,6 @@ static void si_dma_set_buffer_funcs(struct amdgpu_device *adev)
adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
}
-static const struct amdgpu_vm_pte_funcs si_dma_vm_pte_funcs = {
- .copy_pte_num_dw = 5,
- .copy_pte = si_dma_vm_copy_pte,
-
- .write_pte = si_dma_vm_write_pte,
- .set_pte_pde = si_dma_vm_set_pte_pde,
-};
-
-static void si_dma_set_vm_pte_funcs(struct amdgpu_device *adev)
-{
- unsigned i;
-
- adev->vm_manager.vm_pte_funcs = &si_dma_vm_pte_funcs;
- for (i = 0; i < adev->sdma.num_instances; i++) {
- adev->vm_manager.vm_pte_scheds[i] =
- &adev->sdma.instance[i].ring.sched;
- }
- adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
-}
-
const struct amdgpu_ip_block_version si_dma_ip_block =
{
.type = AMD_IP_BLOCK_TYPE_SDMA,
--
2.43.0
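The five per-IP set_vm_pte_funcs() helpers removed in this patch share an identical body (modulo the vm_pte_funcs == NULL guard kept only by the v5.x variants). The replacement helper, amdgpu_sdma_set_vm_pte_scheds() introduced by patch 10, is not shown in this excerpt; the following standalone sketch reconstructs its likely shape from the removed bodies. The stand-in struct definitions, the array bound, and the retained NULL guard are assumptions for illustration, not the real amdgpu definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the amdgpu structures touched by the removed
 * per-IP helpers; field names mirror the diff, sizes are illustrative. */
#define AMDGPU_MAX_SDMA_INSTANCES 16

struct drm_gpu_scheduler { int dummy; };
struct amdgpu_ring { struct drm_gpu_scheduler sched; };
struct amdgpu_sdma_instance { struct amdgpu_ring ring; };
struct amdgpu_vm_pte_funcs { unsigned copy_pte_num_dw; };

struct amdgpu_device {
	struct {
		struct amdgpu_sdma_instance instance[AMDGPU_MAX_SDMA_INSTANCES];
		unsigned num_instances;
	} sdma;
	struct {
		const struct amdgpu_vm_pte_funcs *vm_pte_funcs;
		struct drm_gpu_scheduler *vm_pte_scheds[AMDGPU_MAX_SDMA_INSTANCES];
		unsigned vm_pte_num_scheds;
	} vm_manager;
};

/* Sketch of the shared helper, reconstructed from the identical per-IP
 * bodies removed above: point vm_pte_funcs at the IP-specific func table
 * and register one scheduler per SDMA instance. Whether the real helper
 * keeps the NULL guard of the v5.x variants is an assumption. */
static void amdgpu_sdma_set_vm_pte_scheds(struct amdgpu_device *adev,
					  const struct amdgpu_vm_pte_funcs *funcs)
{
	unsigned i;

	if (adev->vm_manager.vm_pte_funcs)
		return;

	adev->vm_manager.vm_pte_funcs = funcs;
	for (i = 0; i < adev->sdma.num_instances; i++)
		adev->vm_manager.vm_pte_scheds[i] =
			&adev->sdma.instance[i].ring.sched;
	adev->vm_manager.vm_pte_num_scheds = adev->sdma.num_instances;
}
```

Each sdma_vX_Y_early_init() would then pass its own const func table, as the + hunks above do.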
^ permalink raw reply related [flat|nested] 18+ messages in thread
* [PATCH v6 11/11] drm/amdgpu: move sched status check inside amdgpu_ttm_set_buffer_funcs_status
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patchs for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
` (9 preceding siblings ...)
2026-01-26 13:35 ` [PATCH v6 10/11] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds Pierre-Eric Pelloux-Prayer
@ 2026-01-26 13:35 ` Pierre-Eric Pelloux-Prayer
2026-01-27 10:23 ` Christian König
10 siblings, 1 reply; 18+ messages in thread
From: Pierre-Eric Pelloux-Prayer @ 2026-01-26 13:35 UTC (permalink / raw)
To: Alex Deucher, Christian König, David Airlie, Simona Vetter
Cc: Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel, linux-kernel
It avoids duplicated code and allows a warning to be printed.
---
v4: move check inside the existing if (enable) test
---
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 13 ++++---------
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 5 +++++
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 362ab2b34498..98aead91b98b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3158,9 +3158,7 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
if (r)
goto init_failed;
- if (adev->mman.buffer_funcs_ring &&
- adev->mman.buffer_funcs_ring->sched.ready)
- amdgpu_ttm_set_buffer_funcs_status(adev, true);
+ amdgpu_ttm_set_buffer_funcs_status(adev, true);
/* Don't init kfd if whole hive need to be reset during init */
if (adev->init_lvl->level != AMDGPU_INIT_LEVEL_MINIMAL_XGMI) {
@@ -4052,8 +4050,7 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
r = amdgpu_device_ip_resume_phase2(adev);
- if (adev->mman.buffer_funcs_ring->sched.ready)
- amdgpu_ttm_set_buffer_funcs_status(adev, true);
+ amdgpu_ttm_set_buffer_funcs_status(adev, true);
if (r)
return r;
@@ -5199,8 +5196,7 @@ int amdgpu_device_suspend(struct drm_device *dev, bool notify_clients)
return 0;
unwind_evict:
- if (adev->mman.buffer_funcs_ring->sched.ready)
- amdgpu_ttm_set_buffer_funcs_status(adev, true);
+ amdgpu_ttm_set_buffer_funcs_status(adev, true);
amdgpu_fence_driver_hw_init(adev);
unwind_userq:
@@ -5931,8 +5927,7 @@ int amdgpu_device_reinit_after_reset(struct amdgpu_reset_context *reset_context)
if (r)
goto out;
- if (tmp_adev->mman.buffer_funcs_ring->sched.ready)
- amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
+ amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
r = amdgpu_device_ip_resume_phase3(tmp_adev);
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index e149092da8f1..1929a03daf18 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -2354,6 +2354,11 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
struct amdgpu_ring *ring;
struct drm_gpu_scheduler *sched;
+ if (!adev->mman.buffer_funcs_ring || !adev->mman.buffer_funcs_ring->sched.ready) {
+ dev_warn(adev->dev, "Not enabling DMA transfers for in kernel use");
+ return;
+ }
+
ring = adev->mman.buffer_funcs_ring;
sched = &ring->sched;
r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
--
2.43.0
* Re: [PATCH v6 04/11] amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries
2026-01-26 13:34 ` [PATCH v6 04/11] amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries Pierre-Eric Pelloux-Prayer
@ 2026-01-26 19:09 ` Christian König
2026-01-28 13:21 ` kernel test robot
1 sibling, 0 replies; 18+ messages in thread
From: Christian König @ 2026-01-26 19:09 UTC (permalink / raw)
To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
Simona Vetter
Cc: amd-gfx, dri-devel, linux-kernel
On 1/26/26 14:34, Pierre-Eric Pelloux-Prayer wrote:
> Instead of reserving a number of GTT pages for VCE 1.0 this
> commit now uses amdgpu_gtt_mgr_alloc_entries to allocate
> the pages when initializing vce 1.0.
>
> While at it remove the "does the VCPU BO already have a
> 32-bit address" check as suggested by Timur.
>
> This decouples vce init from gtt init.
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 1 -
> drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 18 ------------
> drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h | 2 +-
> drivers/gpu/drm/amd/amdgpu/vce_v1_0.c | 32 +++++++++++----------
> 4 files changed, 18 insertions(+), 35 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> index dd9b845d5783..f2e89fb4b666 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> @@ -332,7 +332,6 @@ int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
> ttm_resource_manager_init(man, &adev->mman.bdev, gtt_size);
>
> start = AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS;
> - start += amdgpu_vce_required_gart_pages(adev);
> size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
> drm_mm_init(&mgr->mm, start, size);
> spin_lock_init(&mgr->lock);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> index a7d8f1ce6ac2..eb4a15db2ef2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> @@ -450,24 +450,6 @@ void amdgpu_vce_free_handles(struct amdgpu_device *adev, struct drm_file *filp)
> }
> }
>
> -/**
> - * amdgpu_vce_required_gart_pages() - gets number of GART pages required by VCE
> - *
> - * @adev: amdgpu_device pointer
> - *
> - * Returns how many GART pages we need before GTT for the VCE IP block.
> - * For VCE1, see vce_v1_0_ensure_vcpu_bo_32bit_addr for details.
> - * For VCE2+, this is not needed so return zero.
> - */
> -u32 amdgpu_vce_required_gart_pages(struct amdgpu_device *adev)
> -{
> - /* VCE IP block not added yet, so can't use amdgpu_ip_version */
> - if (adev->family == AMDGPU_FAMILY_SI)
> - return 512;
> -
> - return 0;
> -}
> -
> /**
> * amdgpu_vce_get_create_msg - generate a VCE create msg
> *
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
> index 1c3464ce5037..a59d87e09004 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.h
> @@ -52,6 +52,7 @@ struct amdgpu_vce {
> uint32_t srbm_soft_reset;
> unsigned num_rings;
> uint32_t keyselect;
> + struct drm_mm_node node;
Maybe name that gart_node.
> };
>
> int amdgpu_vce_early_init(struct amdgpu_device *adev);
> @@ -61,7 +62,6 @@ int amdgpu_vce_entity_init(struct amdgpu_device *adev, struct amdgpu_ring *ring)
> int amdgpu_vce_suspend(struct amdgpu_device *adev);
> int amdgpu_vce_resume(struct amdgpu_device *adev);
> void amdgpu_vce_free_handles(struct amdgpu_device *adev, struct drm_file *filp);
> -u32 amdgpu_vce_required_gart_pages(struct amdgpu_device *adev);
> int amdgpu_vce_ring_parse_cs(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
> struct amdgpu_ib *ib);
> int amdgpu_vce_ring_parse_cs_vm(struct amdgpu_cs_parser *p,
> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v1_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v1_0.c
> index 9ae424618556..bca34a30dbf3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vce_v1_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v1_0.c
> @@ -47,11 +47,6 @@
> #define VCE_V1_0_DATA_SIZE (7808 * (AMDGPU_MAX_VCE_HANDLES + 1))
> #define VCE_STATUS_VCPU_REPORT_FW_LOADED_MASK 0x02
>
> -#define VCE_V1_0_GART_PAGE_START \
> - (AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS)
> -#define VCE_V1_0_GART_ADDR_START \
> - (VCE_V1_0_GART_PAGE_START * AMDGPU_GPU_PAGE_SIZE)
> -
> static void vce_v1_0_set_ring_funcs(struct amdgpu_device *adev);
> static void vce_v1_0_set_irq_funcs(struct amdgpu_device *adev);
>
> @@ -541,21 +536,24 @@ static int vce_v1_0_ensure_vcpu_bo_32bit_addr(struct amdgpu_device *adev)
> u64 num_pages = ALIGN(bo_size, AMDGPU_GPU_PAGE_SIZE) / AMDGPU_GPU_PAGE_SIZE;
> u64 pa = amdgpu_gmc_vram_pa(adev, adev->vce.vcpu_bo);
> u64 flags = AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE | AMDGPU_PTE_VALID;
> + u64 vce_gart_start;
Maybe name that vce_gart_offs. The GART start in MC address space is something different.
> + int r;
>
> - /*
> - * Check if the VCPU BO already has a 32-bit address.
> - * Eg. if MC is configured to put VRAM in the low address range.
> - */
> - if (gpu_addr <= max_vcpu_bo_addr)
> - return 0;
> + r = amdgpu_gtt_mgr_alloc_entries(&adev->mman.gtt_mgr,
> + &adev->vce.node, num_pages,
> + DRM_MM_INSERT_LOW);
> + if (r)
> + return r;
> +
> + vce_gart_start = adev->vce.node.start * AMDGPU_GPU_PAGE_SIZE;
IIRC that should only be PAGE_SIZE and not AMDGPU_GPU_PAGE_SIZE.
Apart from that looks good to me,
Christian.
>
> /* Check if we can map the VCPU BO in GART to a 32-bit address. */
> - if (adev->gmc.gart_start + VCE_V1_0_GART_ADDR_START > max_vcpu_bo_addr)
> + if (adev->gmc.gart_start + vce_gart_start > max_vcpu_bo_addr)
> return -EINVAL;
>
> - amdgpu_gart_map_vram_range(adev, pa, VCE_V1_0_GART_PAGE_START,
> + amdgpu_gart_map_vram_range(adev, pa, adev->vce.node.start,
> num_pages, flags, adev->gart.ptr);
> - adev->vce.gpu_addr = adev->gmc.gart_start + VCE_V1_0_GART_ADDR_START;
> + adev->vce.gpu_addr = adev->gmc.gart_start + vce_gart_start;
> if (adev->vce.gpu_addr > max_vcpu_bo_addr)
> return -EINVAL;
>
> @@ -610,7 +608,11 @@ static int vce_v1_0_sw_fini(struct amdgpu_ip_block *ip_block)
> if (r)
> return r;
>
> - return amdgpu_vce_sw_fini(adev);
> + r = amdgpu_vce_sw_fini(adev);
> +
> + amdgpu_gtt_mgr_free_entries(&adev->mman.gtt_mgr, &adev->vce.node);
> +
> + return r;
> }
>
> /**
* Re: [PATCH v6 05/11] amdgpu/ttm: use amdgpu_gtt_mgr_alloc_entries
2026-01-26 13:35 ` [PATCH v6 05/11] amdgpu/ttm: " Pierre-Eric Pelloux-Prayer
@ 2026-01-27 10:09 ` Christian König
0 siblings, 0 replies; 18+ messages in thread
From: Christian König @ 2026-01-27 10:09 UTC (permalink / raw)
To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
Simona Vetter
Cc: amd-gfx, dri-devel, linux-kernel
On 1/26/26 14:35, Pierre-Eric Pelloux-Prayer wrote:
> Use amdgpu_gtt_mgr_alloc_entries for each entity instead
> of reserving a fixed number of pages.
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 66 ++++++++++++++++---------
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 1 +
> 2 files changed, 43 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 8b38b5ed9a9c..d23d3046919b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -2012,37 +2012,47 @@ static void amdgpu_ttm_free_mmio_remap_bo(struct amdgpu_device *adev)
> adev->rmmio_remap.bo = NULL;
> }
>
> -static int amdgpu_ttm_buffer_entity_init(struct amdgpu_ttm_buffer_entity *entity,
> +static int amdgpu_ttm_buffer_entity_init(struct amdgpu_gtt_mgr *mgr,
> + struct amdgpu_ttm_buffer_entity *entity,
> enum drm_sched_priority prio,
> struct drm_gpu_scheduler **scheds,
> int num_schedulers,
> - int starting_gart_window,
> u32 num_gart_windows)
> {
> - int i, r;
> + int i, r, num_pages;
>
> r = drm_sched_entity_init(&entity->base, prio, scheds, num_schedulers, NULL);
> if (r)
> return r;
>
> -
> mutex_init(&entity->lock);
>
> if (ARRAY_SIZE(entity->gart_window_offs) < num_gart_windows)
> - return starting_gart_window;
> + return -EINVAL;
> + if (num_gart_windows == 0)
> + return 0;
> +
> + num_pages = num_gart_windows * AMDGPU_GTT_MAX_TRANSFER_SIZE;
> + r = amdgpu_gtt_mgr_alloc_entries(mgr, &entity->node, num_pages,
> + DRM_MM_INSERT_BEST);
> + if (r) {
> + drm_sched_entity_destroy(&entity->base);
> + return r;
> + }
>
> for (i = 0; i < num_gart_windows; i++) {
> entity->gart_window_offs[i] =
> - (u64)starting_gart_window * AMDGPU_GTT_MAX_TRANSFER_SIZE *
> - AMDGPU_GPU_PAGE_SIZE;
> - starting_gart_window++;
> + (entity->node.start + (u64)i * AMDGPU_GTT_MAX_TRANSFER_SIZE) *
> + AMDGPU_GPU_PAGE_SIZE;
If I'm not completely mistaken the GTT manager works with PAGE_SIZE and not AMDGPU_GPU_PAGE_SIZE.
Background is that we can only map CPU pages (4k-64k) into the GTT/GART and not GPU pages (which are always 4k).
It would probably be the cleanest to have a helper for that in amdgpu_gtt_mgr.c (or a header).
Apart from that patch looks good to me,
Christian.
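Christian's point about CPU pages versus GPU pages can be sketched as follows. The constants below are illustrative (GPU pages in amdgpu are always 4 KiB while the CPU PAGE_SIZE may be larger, e.g. 16 KiB or 64 KiB on some arm64 configs), and the helper names are made up for this example, not the real amdgpu API; the real helper he suggests would live in amdgpu_gtt_mgr.c or a header.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values, not taken from amdgpu headers. */
#define AMDGPU_GPU_PAGE_SIZE 4096ULL	/* GPU pages: always 4 KiB */
#define PAGE_SIZE            16384ULL	/* example CPU page size: 16 KiB */

/* Hypothetical helper: a GTT manager node start is tracked in CPU pages,
 * so the GART byte offset scales by PAGE_SIZE, not AMDGPU_GPU_PAGE_SIZE. */
static inline uint64_t gtt_node_byte_offset(uint64_t node_start)
{
	return node_start * PAGE_SIZE;
}

/* Same start expressed in GPU pages, e.g. for GART PTE indexing. */
static inline uint64_t gtt_node_gpu_page_offset(uint64_t node_start)
{
	return node_start * (PAGE_SIZE / AMDGPU_GPU_PAGE_SIZE);
}
```

With 16 KiB CPU pages, one node slot covers four GPU pages, which is why multiplying a node start by AMDGPU_GPU_PAGE_SIZE would undercount the offset.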
> }
>
> - return starting_gart_window;
> + return 0;
> }
>
> -static void amdgpu_ttm_buffer_entity_fini(struct amdgpu_ttm_buffer_entity *entity)
> +static void amdgpu_ttm_buffer_entity_fini(struct amdgpu_gtt_mgr *mgr,
> + struct amdgpu_ttm_buffer_entity *entity)
> {
> + amdgpu_gtt_mgr_free_entries(mgr, &entity->node);
> drm_sched_entity_destroy(&entity->base);
> }
>
> @@ -2343,36 +2353,42 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>
> ring = adev->mman.buffer_funcs_ring;
> sched = &ring->sched;
> - r = amdgpu_ttm_buffer_entity_init(&adev->mman.default_entity,
> - DRM_SCHED_PRIORITY_KERNEL, &sched, 1,
> - 0, 0);
> + r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
> + &adev->mman.default_entity,
> + DRM_SCHED_PRIORITY_KERNEL,
> + &sched, 1, 0);
> if (r < 0) {
> dev_err(adev->dev,
> "Failed setting up TTM entity (%d)\n", r);
> return;
> }
>
> - r = amdgpu_ttm_buffer_entity_init(&adev->mman.clear_entity,
> - DRM_SCHED_PRIORITY_NORMAL, &sched, 1,
> - r, 1);
> + r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
> + &adev->mman.clear_entity,
> + DRM_SCHED_PRIORITY_NORMAL,
> + &sched, 1, 1);
> if (r < 0) {
> dev_err(adev->dev,
> "Failed setting up TTM BO clear entity (%d)\n", r);
> goto error_free_default_entity;
> }
>
> - r = amdgpu_ttm_buffer_entity_init(&adev->mman.move_entity,
> - DRM_SCHED_PRIORITY_NORMAL, &sched, 1,
> - r, 2);
> + r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
> + &adev->mman.move_entity,
> + DRM_SCHED_PRIORITY_NORMAL,
> + &sched, 1, 2);
> if (r < 0) {
> dev_err(adev->dev,
> "Failed setting up TTM BO move entity (%d)\n", r);
> goto error_free_clear_entity;
> }
> } else {
> - amdgpu_ttm_buffer_entity_fini(&adev->mman.default_entity);
> - amdgpu_ttm_buffer_entity_fini(&adev->mman.clear_entity);
> - amdgpu_ttm_buffer_entity_fini(&adev->mman.move_entity);
> + amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
> + &adev->mman.default_entity);
> + amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
> + &adev->mman.clear_entity);
> + amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
> + &adev->mman.move_entity);
> /* Drop all the old fences since re-creating the scheduler entities
> * will allocate new contexts.
> */
> @@ -2390,9 +2406,11 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
> return;
>
> error_free_clear_entity:
> - amdgpu_ttm_buffer_entity_fini(&adev->mman.clear_entity);
> + amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
> + &adev->mman.clear_entity);
> error_free_default_entity:
> - amdgpu_ttm_buffer_entity_fini(&adev->mman.default_entity);
> + amdgpu_ttm_buffer_entity_fini(&adev->mman.gtt_mgr,
> + &adev->mman.default_entity);
> }
>
> static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 871388b86503..5419344d60fb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -56,6 +56,7 @@ struct amdgpu_gtt_mgr {
> struct amdgpu_ttm_buffer_entity {
> struct drm_sched_entity base;
> struct mutex lock;
> + struct drm_mm_node node;
> u64 gart_window_offs[2];
> };
>
* Re: [PATCH v6 06/11] amdgpu/gtt: remove AMDGPU_GTT_NUM_TRANSFER_WINDOWS
2026-01-26 13:35 ` [PATCH v6 06/11] amdgpu/gtt: remove AMDGPU_GTT_NUM_TRANSFER_WINDOWS Pierre-Eric Pelloux-Prayer
@ 2026-01-27 10:18 ` Christian König
2026-01-27 10:22 ` Christian König
1 sibling, 0 replies; 18+ messages in thread
From: Christian König @ 2026-01-27 10:18 UTC (permalink / raw)
To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
Simona Vetter
Cc: amd-gfx, dri-devel, linux-kernel
On 1/26/26 14:35, Pierre-Eric Pelloux-Prayer wrote:
> It's not needed anymore.
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c | 5 +----
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h | 1 -
> 2 files changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> index f2e89fb4b666..9b0bcf6aca44 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gtt_mgr.c
> @@ -324,16 +324,13 @@ int amdgpu_gtt_mgr_init(struct amdgpu_device *adev, uint64_t gtt_size)
> {
> struct amdgpu_gtt_mgr *mgr = &adev->mman.gtt_mgr;
> struct ttm_resource_manager *man = &mgr->manager;
> - uint64_t start, size;
>
> man->use_tt = true;
> man->func = &amdgpu_gtt_mgr_func;
>
> ttm_resource_manager_init(man, &adev->mman.bdev, gtt_size);
>
> - start = AMDGPU_GTT_MAX_TRANSFER_SIZE * AMDGPU_GTT_NUM_TRANSFER_WINDOWS;
> - size = (adev->gmc.gart_size >> PAGE_SHIFT) - start;
> - drm_mm_init(&mgr->mm, start, size);
> + drm_mm_init(&mgr->mm, 0, adev->gmc.gart_size >> PAGE_SHIFT);
> spin_lock_init(&mgr->lock);
>
> ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_TT, &mgr->manager);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> index 5419344d60fb..c8284cb2d22c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
> @@ -40,7 +40,6 @@
> #define __AMDGPU_PL_NUM (TTM_PL_PRIV + 6)
>
> #define AMDGPU_GTT_MAX_TRANSFER_SIZE 512
> -#define AMDGPU_GTT_NUM_TRANSFER_WINDOWS 3
>
> extern const struct attribute_group amdgpu_vram_mgr_attr_group;
> extern const struct attribute_group amdgpu_gtt_mgr_attr_group;
* Re: [PATCH v6 11/11] drm/amdgpu: move sched status check inside amdgpu_ttm_set_buffer_funcs_status
2026-01-26 13:35 ` [PATCH v6 11/11] drm/amdgpu: move sched status check inside amdgpu_ttm_set_buffer_funcs_status Pierre-Eric Pelloux-Prayer
@ 2026-01-27 10:23 ` Christian König
0 siblings, 0 replies; 18+ messages in thread
From: Christian König @ 2026-01-27 10:23 UTC (permalink / raw)
To: Pierre-Eric Pelloux-Prayer, Alex Deucher, David Airlie,
Simona Vetter
Cc: amd-gfx, dri-devel, linux-kernel
On 1/26/26 14:35, Pierre-Eric Pelloux-Prayer wrote:
> It avoids duplicated code and allows to output a warning.
>
> ---
> v4: move check inside the existing if (enable) test
> ---
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 13 ++++---------
> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 5 +++++
> 2 files changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 362ab2b34498..98aead91b98b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -3158,9 +3158,7 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
> if (r)
> goto init_failed;
>
> - if (adev->mman.buffer_funcs_ring &&
> - adev->mman.buffer_funcs_ring->sched.ready)
> - amdgpu_ttm_set_buffer_funcs_status(adev, true);
> + amdgpu_ttm_set_buffer_funcs_status(adev, true);
>
> /* Don't init kfd if whole hive need to be reset during init */
> if (adev->init_lvl->level != AMDGPU_INIT_LEVEL_MINIMAL_XGMI) {
> @@ -4052,8 +4050,7 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
>
> r = amdgpu_device_ip_resume_phase2(adev);
>
> - if (adev->mman.buffer_funcs_ring->sched.ready)
> - amdgpu_ttm_set_buffer_funcs_status(adev, true);
> + amdgpu_ttm_set_buffer_funcs_status(adev, true);
>
> if (r)
> return r;
> @@ -5199,8 +5196,7 @@ int amdgpu_device_suspend(struct drm_device *dev, bool notify_clients)
> return 0;
>
> unwind_evict:
> - if (adev->mman.buffer_funcs_ring->sched.ready)
> - amdgpu_ttm_set_buffer_funcs_status(adev, true);
> + amdgpu_ttm_set_buffer_funcs_status(adev, true);
> amdgpu_fence_driver_hw_init(adev);
>
> unwind_userq:
> @@ -5931,8 +5927,7 @@ int amdgpu_device_reinit_after_reset(struct amdgpu_reset_context *reset_context)
> if (r)
> goto out;
>
> - if (tmp_adev->mman.buffer_funcs_ring->sched.ready)
> - amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
> + amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
>
> r = amdgpu_device_ip_resume_phase3(tmp_adev);
> if (r)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index e149092da8f1..1929a03daf18 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -2354,6 +2354,11 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
> struct amdgpu_ring *ring;
> struct drm_gpu_scheduler *sched;
>
> + if (!adev->mman.buffer_funcs_ring || !adev->mman.buffer_funcs_ring->sched.ready) {
> + dev_warn(adev->dev, "Not enabling DMA transfers for in kernel use");
> + return;
> + }
> +
> ring = adev->mman.buffer_funcs_ring;
> sched = &ring->sched;
> r = amdgpu_ttm_buffer_entity_init(&adev->mman.gtt_mgr,
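The refactor in the patch above follows a common kernel pattern: instead of every caller repeating the `buffer_funcs_ring && sched.ready` check, the check is hoisted into the callee as a single early return that can also warn. A minimal standalone sketch of that pattern, with simplified stand-in types (not the real amdgpu structures or API):

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the amdgpu structures. */
struct scheduler { bool ready; };
struct ring { struct scheduler sched; };
struct device { struct ring *buffer_funcs_ring; };

/*
 * Before the refactor, every call site guarded this function with
 * "if (ring && ring->sched.ready)". After it, the function guards
 * itself once, and unready hardware produces a visible warning
 * instead of a silent skip.
 */
static void set_buffer_funcs_status(struct device *dev, bool enable)
{
	if (!dev->buffer_funcs_ring || !dev->buffer_funcs_ring->sched.ready) {
		fprintf(stderr, "Not enabling DMA transfers for in kernel use\n");
		return;
	}
	printf("buffer funcs %s\n", enable ? "enabled" : "disabled");
}
```

The design win is that the invariant ("only touch buffer funcs when the scheduler is ready") lives in exactly one place, so new callers cannot forget it.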
* Re: [PATCH v6 04/11] amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries
2026-01-26 13:34 ` [PATCH v6 04/11] amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries Pierre-Eric Pelloux-Prayer
2026-01-26 19:09 ` Christian König
@ 2026-01-28 13:21 ` kernel test robot
1 sibling, 0 replies; 18+ messages in thread
From: kernel test robot @ 2026-01-28 13:21 UTC (permalink / raw)
To: Pierre-Eric Pelloux-Prayer, Alex Deucher, Christian König,
David Airlie, Simona Vetter
Cc: oe-kbuild-all, Pierre-Eric Pelloux-Prayer, amd-gfx, dri-devel,
linux-kernel
Hi Pierre-Eric,
kernel test robot noticed the following build warnings:
[auto build test WARNING on next-20260123]
[cannot apply to drm-misc/drm-misc-next v6.19-rc7 v6.19-rc6 v6.19-rc5 linus/master v6.19-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Pierre-Eric-Pelloux-Prayer/drm-amdgpu-remove-gart_window_lock-usage-from-gmc-v12_1/20260126-214013
base: next-20260123
patch link: https://lore.kernel.org/r/20260126133518.2486-5-pierre-eric.pelloux-prayer%40amd.com
patch subject: [PATCH v6 04/11] amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries
config: alpha-allyesconfig (https://download.01.org/0day-ci/archive/20260128/202601282153.4kuaeoS5-lkp@intel.com/config)
compiler: alpha-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260128/202601282153.4kuaeoS5-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601282153.4kuaeoS5-lkp@intel.com/
All warnings (new ones prefixed by >>):
drivers/gpu/drm/amd/amdgpu/vce_v1_0.c: In function 'vce_v1_0_ensure_vcpu_bo_32bit_addr':
>> drivers/gpu/drm/amd/amdgpu/vce_v1_0.c:533:13: warning: unused variable 'gpu_addr' [-Wunused-variable]
533 | u64 gpu_addr = amdgpu_bo_gpu_offset(adev->vce.vcpu_bo);
| ^~~~~~~~
vim +/gpu_addr +533 drivers/gpu/drm/amd/amdgpu/vce_v1_0.c
d4a640d4b9f34a Timur Kristóf 2025-11-07 516
221cadb9c6bc2e Timur Kristóf 2025-11-07 517 /**
221cadb9c6bc2e Timur Kristóf 2025-11-07 518 * vce_v1_0_ensure_vcpu_bo_32bit_addr() - ensure the VCPU BO has a 32-bit address
221cadb9c6bc2e Timur Kristóf 2025-11-07 519 *
221cadb9c6bc2e Timur Kristóf 2025-11-07 520 * @adev: amdgpu_device pointer
221cadb9c6bc2e Timur Kristóf 2025-11-07 521 *
221cadb9c6bc2e Timur Kristóf 2025-11-07 522 * Due to various hardware limitations, the VCE1 requires
221cadb9c6bc2e Timur Kristóf 2025-11-07 523 * the VCPU BO to be in the low 32 bit address range.
221cadb9c6bc2e Timur Kristóf 2025-11-07 524 * Ensure that the VCPU BO has a 32-bit GPU address,
221cadb9c6bc2e Timur Kristóf 2025-11-07 525 * or return an error code when that isn't possible.
221cadb9c6bc2e Timur Kristóf 2025-11-07 526 *
221cadb9c6bc2e Timur Kristóf 2025-11-07 527 * To accomodate that, we put GART to the LOW address range
221cadb9c6bc2e Timur Kristóf 2025-11-07 528 * and reserve some GART pages where we map the VCPU BO,
221cadb9c6bc2e Timur Kristóf 2025-11-07 529 * so that it gets a 32-bit address.
221cadb9c6bc2e Timur Kristóf 2025-11-07 530 */
221cadb9c6bc2e Timur Kristóf 2025-11-07 531 static int vce_v1_0_ensure_vcpu_bo_32bit_addr(struct amdgpu_device *adev)
221cadb9c6bc2e Timur Kristóf 2025-11-07 532 {
221cadb9c6bc2e Timur Kristóf 2025-11-07 @533 u64 gpu_addr = amdgpu_bo_gpu_offset(adev->vce.vcpu_bo);
221cadb9c6bc2e Timur Kristóf 2025-11-07 534 u64 bo_size = amdgpu_bo_size(adev->vce.vcpu_bo);
221cadb9c6bc2e Timur Kristóf 2025-11-07 535 u64 max_vcpu_bo_addr = 0xffffffff - bo_size;
221cadb9c6bc2e Timur Kristóf 2025-11-07 536 u64 num_pages = ALIGN(bo_size, AMDGPU_GPU_PAGE_SIZE) / AMDGPU_GPU_PAGE_SIZE;
221cadb9c6bc2e Timur Kristóf 2025-11-07 537 u64 pa = amdgpu_gmc_vram_pa(adev, adev->vce.vcpu_bo);
221cadb9c6bc2e Timur Kristóf 2025-11-07 538 u64 flags = AMDGPU_PTE_READABLE | AMDGPU_PTE_WRITEABLE | AMDGPU_PTE_VALID;
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 539 u64 vce_gart_start;
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 540 int r;
221cadb9c6bc2e Timur Kristóf 2025-11-07 541
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 542 r = amdgpu_gtt_mgr_alloc_entries(&adev->mman.gtt_mgr,
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 543 &adev->vce.node, num_pages,
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 544 DRM_MM_INSERT_LOW);
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 545 if (r)
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 546 return r;
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 547
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 548 vce_gart_start = adev->vce.node.start * AMDGPU_GPU_PAGE_SIZE;
221cadb9c6bc2e Timur Kristóf 2025-11-07 549
221cadb9c6bc2e Timur Kristóf 2025-11-07 550 /* Check if we can map the VCPU BO in GART to a 32-bit address. */
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 551 if (adev->gmc.gart_start + vce_gart_start > max_vcpu_bo_addr)
221cadb9c6bc2e Timur Kristóf 2025-11-07 552 return -EINVAL;
221cadb9c6bc2e Timur Kristóf 2025-11-07 553
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 554 amdgpu_gart_map_vram_range(adev, pa, adev->vce.node.start,
221cadb9c6bc2e Timur Kristóf 2025-11-07 555 num_pages, flags, adev->gart.ptr);
6e8defc22cdff6 Pierre-Eric Pelloux-Prayer 2026-01-26 556 adev->vce.gpu_addr = adev->gmc.gart_start + vce_gart_start;
221cadb9c6bc2e Timur Kristóf 2025-11-07 557 if (adev->vce.gpu_addr > max_vcpu_bo_addr)
221cadb9c6bc2e Timur Kristóf 2025-11-07 558 return -EINVAL;
221cadb9c6bc2e Timur Kristóf 2025-11-07 559
221cadb9c6bc2e Timur Kristóf 2025-11-07 560 return 0;
221cadb9c6bc2e Timur Kristóf 2025-11-07 561 }
221cadb9c6bc2e Timur Kristóf 2025-11-07 562
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
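The `vce_v1_0_ensure_vcpu_bo_32bit_addr()` function quoted above centers on two pieces of arithmetic: rounding the BO size up to whole GPU pages, and checking that the GART-mapped address of the BO's last byte still fits below the 32-bit boundary VCE1 requires. A minimal standalone sketch of that arithmetic, with simplified names (not the amdgpu helpers):

```c
#include <stdint.h>

#define GPU_PAGE_SIZE 4096ULL

/* Round bo_size up to whole GPU pages, like the ALIGN()/divide above. */
static uint64_t num_gpu_pages(uint64_t bo_size)
{
	return (bo_size + GPU_PAGE_SIZE - 1) / GPU_PAGE_SIZE;
}

/*
 * The BO is mapped at gart_start + node_offset; its last byte must
 * stay at or below 0xffffffff, i.e. the start address must not
 * exceed 0xffffffff - bo_size (max_vcpu_bo_addr in the code above).
 */
static int vcpu_bo_addr_fits_32bit(uint64_t gart_start, uint64_t node_offset,
				   uint64_t bo_size)
{
	uint64_t max_vcpu_bo_addr = 0xffffffffULL - bo_size;

	return gart_start + node_offset <= max_vcpu_bo_addr;
}
```

This also shows why the patch allocates the GART node with `DRM_MM_INSERT_LOW`: the lower the node offset, the more likely the sum stays under the 32-bit limit.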
end of thread, other threads:[~2026-01-28 13:21 UTC | newest]
Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-26 13:34 [PATCH v6 00/11] drm/amdgpu: preparation patchs for the use all SDMA instances series Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 01/11] drm/amdgpu: remove gart_window_lock usage from gmc v12_1 Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 02/11] drm/amdgpu: statically assign gart windows to ttm entities Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 03/11] drm/amdgpu: add amdgpu_ttm_buffer_entity_fini func Pierre-Eric Pelloux-Prayer
2026-01-26 13:34 ` [PATCH v6 04/11] amdgpu/vce: use amdgpu_gtt_mgr_alloc_entries Pierre-Eric Pelloux-Prayer
2026-01-26 19:09 ` Christian König
2026-01-28 13:21 ` kernel test robot
2026-01-26 13:35 ` [PATCH v6 05/11] amdgpu/ttm: " Pierre-Eric Pelloux-Prayer
2026-01-27 10:09 ` Christian König
2026-01-26 13:35 ` [PATCH v6 06/11] amdgpu/gtt: remove AMDGPU_GTT_NUM_TRANSFER_WINDOWS Pierre-Eric Pelloux-Prayer
2026-01-27 10:18 ` Christian König
2026-01-27 10:22 ` Christian König
2026-01-26 13:35 ` [PATCH v6 07/11] drm/amdgpu: add missing lock in amdgpu_benchmark_do_move Pierre-Eric Pelloux-Prayer
2026-01-26 13:35 ` [PATCH v6 08/11] drm/amdgpu: check entity lock is held in amdgpu_ttm_job_submit Pierre-Eric Pelloux-Prayer
2026-01-26 13:35 ` [PATCH v6 09/11] drm/amdgpu: double AMDGPU_GTT_MAX_TRANSFER_SIZE Pierre-Eric Pelloux-Prayer
2026-01-26 13:35 ` [PATCH v6 10/11] drm/amdgpu: introduce amdgpu_sdma_set_vm_pte_scheds Pierre-Eric Pelloux-Prayer
2026-01-26 13:35 ` [PATCH v6 11/11] drm/amdgpu: move sched status check inside amdgpu_ttm_set_buffer_funcs_status Pierre-Eric Pelloux-Prayer
2026-01-27 10:23 ` Christian König