AMD-GFX Archive on lore.kernel.org
From: "Christian König" <christian.koenig@amd.com>
To: "Khatri, Sunil" <sukhatri@amd.com>,
	tursulin@ursulin.net, Alexander.Deucher@amd.com,
	Prike.Liang@amd.com, Yogesh.Mohanmarimuthu@amd.com,
	SRINIVASAN.SHANMUGAM@amd.com, Sunil.Khatri@amd.com,
	amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 11/11] drm/amdgpu: WIP sync amdgpu_ttm_fill_mem only to kernel fences
Date: Tue, 17 Mar 2026 11:52:01 +0100	[thread overview]
Message-ID: <7fa0c8f7-887a-40a5-8fdf-55ab7aa58aa4@amd.com> (raw)
In-Reply-To: <2c655fb8-f8d7-46f7-9ab8-9574a45b1fde@amd.com>

On 3/17/26 09:59, Khatri, Sunil wrote:
> It would be good if we added some explanation of why we use DMA_RESV_USAGE_BOOKKEEP for buffer copy but DMA_RESV_USAGE_KERNEL for fill.
> Either a comment or a note in the commit message would help new folks get a hold of it. Other than that, it's a good catch.
> 
> Acked-by: Sunil Khatri <sunil.khatri@amd.com>
> 
> 
> For my understanding:
> A buffer copy could also involve a buffer move to a different domain and so might need to depend on all fences, including read/write and internal kernel fences. A buffer fill, on the other hand, only
> writes to the memory and therefore only depends on kernel implicit sync fences?

No, the patch is actually buggy like hell. Both copy and fill should wait for all fences.

I only added this patch as a hack to work around MES problems and will probably drop it again when those are fixed.

Regards,
Christian.

> 
> Regards
> Sunil Khatri 
> 
> On 11-03-2026 12:43 am, Christian König wrote:
>> That's not even remotely correct, but should unblock testing for now.
>>
>> Signed-off-by: Christian König <christian.koenig@amd.com>
>> ---
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 12 +++++++-----
>>  1 file changed, 7 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> index 714fd8d12ca5..69f52a078022 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> @@ -2428,12 +2428,14 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>  				  struct amdgpu_ttm_buffer_entity *entity,
>>  				  unsigned int num_dw,
>>  				  struct dma_resv *resv,
>> +				  enum dma_resv_usage usage,
>>  				  bool vm_needs_flush,
>>  				  struct amdgpu_job **job,
>>  				  u64 k_job_id)
>>  {
>>  	enum amdgpu_ib_pool_type pool = AMDGPU_IB_POOL_DELAYED;
>>  	int r;
>> +
>>  	r = amdgpu_job_alloc_with_ib(adev, &entity->base,
>>  				     AMDGPU_FENCE_OWNER_UNDEFINED,
>>  				     num_dw * 4, pool, job, k_job_id);
>> @@ -2449,8 +2451,7 @@ static int amdgpu_ttm_prepare_job(struct amdgpu_device *adev,
>>  	if (!resv)
>>  		return 0;
>>  
>> -	return drm_sched_job_add_resv_dependencies(&(*job)->base, resv,
>> -						   DMA_RESV_USAGE_BOOKKEEP);
>> +	return drm_sched_job_add_resv_dependencies(&(*job)->base, resv, usage);
>>  }
>>  
>>  int amdgpu_copy_buffer(struct amdgpu_device *adev,
>> @@ -2479,9 +2480,9 @@ int amdgpu_copy_buffer(struct amdgpu_device *adev,
>>  	max_bytes = adev->mman.buffer_funcs->copy_max_bytes;
>>  	num_loops = DIV_ROUND_UP(byte_count, max_bytes);
>>  	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->copy_num_dw, 8);
>> -	r = amdgpu_ttm_prepare_job(adev, entity, num_dw,
>> -				   resv, vm_needs_flush, &job,
>> -				   AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>> +	r = amdgpu_ttm_prepare_job(adev, entity, num_dw, resv,
>> +				   DMA_RESV_USAGE_BOOKKEEP, vm_needs_flush,
>> +				   &job, AMDGPU_KERNEL_JOB_ID_TTM_COPY_BUFFER);
>>  	if (r)
>>  		goto error_free;
>>  
>> @@ -2524,6 +2525,7 @@ static int amdgpu_ttm_fill_mem(struct amdgpu_device *adev,
>>  	num_loops = DIV_ROUND_UP_ULL(byte_count, max_bytes);
>>  	num_dw = ALIGN(num_loops * adev->mman.buffer_funcs->fill_num_dw, 8);
>>  	r = amdgpu_ttm_prepare_job(adev, entity, num_dw, resv,
>> +				   DMA_RESV_USAGE_KERNEL,
>>  				   vm_needs_flush, &job, k_job_id);
>>  	if (r)
>>  		return r;


Thread overview: 44+ messages
2026-03-10 19:13 [PATCH 01/11] drm/amdgpu: revert to old status lock handling v4 Christian König
2026-03-10 19:13 ` [PATCH 02/11] drm/amdgpu: restructure VM state machine Christian König
2026-03-11  8:47   ` Khatri, Sunil
2026-03-12 12:49     ` Christian König
2026-03-12 10:07   ` Liang, Prike
2026-03-12 14:32   ` Tvrtko Ursulin
2026-03-16 13:44     ` Christian König
2026-03-16 14:26       ` Tvrtko Ursulin
2026-03-10 19:13 ` [PATCH 03/11] drm/amdgpu: fix amdgpu_userq_evict Christian König
2026-03-11  8:51   ` Khatri, Sunil
2026-03-13  7:25   ` Liang, Prike
2026-03-10 19:13 ` [PATCH 04/11] drm/amdgpu: completely rework eviction fence handling Christian König
2026-03-11 12:27   ` Khatri, Sunil
2026-03-13  8:00     ` Khatri, Sunil
2026-03-17  9:41     ` Christian König
2026-03-13  8:28   ` Liang, Prike
2026-03-17  9:57     ` Christian König
2026-03-17 11:21       ` Liang, Prike
2026-03-17 11:23         ` Christian König
2026-03-17 11:54           ` Liang, Prike
2026-03-10 19:13 ` [PATCH 05/11] drm/amdgpu: fix eviction fence and userq manager shutdown Christian König
2026-03-11 12:26   ` Khatri, Sunil
2026-03-13  9:35     ` Khatri, Sunil
2026-03-10 19:13 ` [PATCH 06/11] drm/amdgpu: fix adding eviction fence Christian König
2026-03-11 12:26   ` Khatri, Sunil
2026-03-10 19:13 ` [PATCH 07/11] drm/amdgpu: rework amdgpu_userq_wait_ioctl v3 Christian König
2026-03-12 16:34   ` Tvrtko Ursulin
2026-03-16 14:19     ` Christian König
2026-03-16 14:44       ` Tvrtko Ursulin
2026-03-17  7:05   ` Khatri, Sunil
2026-03-10 19:13 ` [PATCH 08/11] drm/amdgpu: make amdgpu_user_wait_ioctl more resilent v2 Christian König
2026-03-17  7:15   ` Khatri, Sunil
2026-03-10 19:13 ` [PATCH 09/11] drm/amdgpu: annotate eviction fence signaling path Christian König
2026-03-17  7:35   ` Khatri, Sunil
2026-03-10 19:13 ` [PATCH 10/11] drm/amdgpu: fix some more bug in amdgpu_gem_va_ioctl Christian König
2026-03-17  8:44   ` Khatri, Sunil
2026-03-17 11:08     ` Christian König
2026-03-10 19:13 ` [PATCH 11/11] drm/amdgpu: WIP sync amdgpu_ttm_fill_mem only to kernel fences Christian König
2026-03-17  8:59   ` Khatri, Sunil
2026-03-17 10:52     ` Christian König [this message]
2026-03-11  7:43 ` [PATCH 01/11] drm/amdgpu: revert to old status lock handling v4 Khatri, Sunil
2026-03-12  7:13 ` Liang, Prike
  -- strict thread matches above, loose matches on Subject: below --
2026-04-21 12:55 [PATCH 01/11] drm/amdgpu: fix AMDGPU_INFO_READ_MMR_REG Christian König
2026-04-21 12:55 ` [PATCH 11/11] drm/amdgpu: WIP sync amdgpu_ttm_fill_mem only to kernel fences Christian König
2026-04-23 10:47   ` Khatri, Sunil
