From: "Christian König" <christian.koenig@amd.com>
To: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>,
Alex Deucher <alexander.deucher@amd.com>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
Felix Kuehling <Felix.Kuehling@amd.com>,
Harry Wentland <harry.wentland@amd.com>,
Leo Li <sunpeng.li@amd.com>,
Rodrigo Siqueira <siqueira@igalia.com>
Cc: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 23/28] drm/amdgpu: use TTM_NUM_MOVE_FENCES when reserving fences
Date: Fri, 21 Nov 2025 16:41:42 +0100
Message-ID: <12dbb538-c430-4848-94cd-51329e4e04b0@amd.com>
In-Reply-To: <20251121101315.3585-24-pierre-eric.pelloux-prayer@amd.com>
On 11/21/25 11:12, Pierre-Eric Pelloux-Prayer wrote:
> Use TTM_NUM_MOVE_FENCES as an upper bound on how many fences
> TTM might need for moves/evictions.
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> Acked-by: Felix Kuehling <felix.kuehling@amd.com>
> Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> v2: removed drm_err calls
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 5 ++---
> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c | 6 ++----
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 3 ++-
> drivers/gpu/drm/amd/amdkfd/kfd_svm.c | 3 +--
> drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c | 6 ++----
> drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c | 6 ++----
> 7 files changed, 12 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> index d591dce0f3b3..5215238f8fc9 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> @@ -916,9 +916,8 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
> goto out_free_user_pages;
>
> amdgpu_bo_list_for_each_entry(e, p->bo_list) {
> - /* One fence for TTM and one for each CS job */
> r = drm_exec_prepare_obj(&p->exec, &e->bo->tbo.base,
> - 1 + p->gang_size);
> + TTM_NUM_MOVE_FENCES + p->gang_size);
> drm_exec_retry_on_contention(&p->exec);
> if (unlikely(r))
> goto out_free_user_pages;
> @@ -928,7 +927,7 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
>
> if (p->uf_bo) {
> r = drm_exec_prepare_obj(&p->exec, &p->uf_bo->tbo.base,
> - 1 + p->gang_size);
> + TTM_NUM_MOVE_FENCES + p->gang_size);
> drm_exec_retry_on_contention(&p->exec);
> if (unlikely(r))
> goto out_free_user_pages;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index f2505ae5fd65..41fd84a19d66 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -354,7 +354,7 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
>
> drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
> drm_exec_until_all_locked(&exec) {
> - r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 1);
> + r = drm_exec_prepare_obj(&exec, &bo->tbo.base, TTM_NUM_MOVE_FENCES);
This one can stay at 1; we just need a single slot for the VM update fence.
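I.e. the hunk above can simply keep the existing call unchanged:

```c
r = drm_exec_prepare_obj(&exec, &bo->tbo.base, 1);
```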
> drm_exec_retry_on_contention(&exec);
> if (unlikely(r))
> goto out_unlock;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
> index 79bad9cbe2ab..b92561eea3da 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vkms.c
> @@ -326,11 +326,9 @@ static int amdgpu_vkms_prepare_fb(struct drm_plane *plane,
> return r;
> }
>
> - r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
> - if (r) {
> - dev_err(adev->dev, "allocating fence slot failed (%d)\n", r);
> + r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> + if (r)
> goto error_unlock;
> - }
>
> if (plane->type != DRM_PLANE_TYPE_CURSOR)
> domain = amdgpu_display_supported_domains(adev, rbo->flags);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 5061d5b0f875..62f37a9ca966 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -2656,7 +2656,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
> }
>
> amdgpu_vm_bo_base_init(&vm->root, vm, root_bo);
> - r = dma_resv_reserve_fences(root_bo->tbo.base.resv, 1);
> + r = dma_resv_reserve_fences(root_bo->tbo.base.resv,
> + TTM_NUM_MOVE_FENCES);
Here we only need a single slot to clear the root PD during init.
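I.e. the reservation here can stay as it is today:

```c
r = dma_resv_reserve_fences(root_bo->tbo.base.resv, 1);
```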
With those two removed, the patch is Reviewed-by: Christian König <christian.koenig@amd.com>.
Fingers crossed that we haven't missed any.
Regards,
Christian.
> if (r)
> goto error_free_root;
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> index 97c2270f278f..0f8d85ee97fc 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> @@ -627,9 +627,8 @@ svm_range_vram_node_new(struct kfd_node *node, struct svm_range *prange,
> }
> }
>
> - r = dma_resv_reserve_fences(bo->tbo.base.resv, 1);
> + r = dma_resv_reserve_fences(bo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> if (r) {
> - pr_debug("failed %d to reserve bo\n", r);
> amdgpu_bo_unreserve(bo);
> goto reserve_bo_failed;
> }
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> index 56cb866ac6f8..ceb55dd183ed 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
> @@ -952,11 +952,9 @@ static int amdgpu_dm_plane_helper_prepare_fb(struct drm_plane *plane,
> return r;
> }
>
> - r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
> - if (r) {
> - drm_err(adev_to_drm(adev), "reserving fence slot failed (%d)\n", r);
> + r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> + if (r)
> goto error_unlock;
> - }
>
> if (plane->type != DRM_PLANE_TYPE_CURSOR)
> domain = amdgpu_display_supported_domains(adev, rbo->flags);
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
> index d9527c05fc87..110f0173eee6 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_wb.c
> @@ -106,11 +106,9 @@ static int amdgpu_dm_wb_prepare_job(struct drm_writeback_connector *wb_connector
> return r;
> }
>
> - r = dma_resv_reserve_fences(rbo->tbo.base.resv, 1);
> - if (r) {
> - drm_err(adev_to_drm(adev), "reserving fence slot failed (%d)\n", r);
> + r = dma_resv_reserve_fences(rbo->tbo.base.resv, TTM_NUM_MOVE_FENCES);
> + if (r)
> goto error_unlock;
> - }
>
> domain = amdgpu_display_supported_domains(adev, rbo->flags);
>