From: "Chen, Xiaogang" <xiaogang.chen@amd.com>
To: Felix Kuehling <felix.kuehling@amd.com>, amd-gfx@lists.freedesktop.org
Cc: Philip.Yang@amd.com
Subject: Re: [PATCH] drm/amdkfd: Use partial migrations in GPU page faults
Date: Thu, 31 Aug 2023 16:29:10 -0500	[thread overview]
Message-ID: <a83c2317-932b-3a7d-2a54-0ccda4dd77be@amd.com> (raw)
In-Reply-To: <0da257e5-85a6-4843-4f49-5666d049110e@amd.com>


On 8/31/2023 3:59 PM, Felix Kuehling wrote:
> On 2023-08-31 16:33, Chen, Xiaogang wrote:
>>>>>>> That said, I'm not actually sure why we're freeing the DMA 
>>>>>>> address array after migration to RAM at all. I think we still 
>>>>>>> need it even when we're using VRAM. We call svm_range_dma_map in 
>>>>>>> svm_range_validate_and_map regardless of whether the range is in 
>>>>>>> VRAM or system memory. So it will just allocate a new array the 
>>>>>>> next time the range is validated anyway. VRAM pages use a 
>>>>>>> special address encoding to indicate VRAM pages to the GPUVM code.
>>>>>>
>>>>>> I think we do not need to free the DMA address array, as you 
>>>>>> said; that is a separate issue though.
>>>>>>
>>>>>> We do need to unmap the DMA addresses (dma_unmap_page) after 
>>>>>> migrating from RAM to VRAM, because we always call dma_map_page 
>>>>>> in svm_range_validate_and_map. Otherwise we would end up with 
>>>>>> multiple DMA mappings for the same system RAM page.
>>>>>
>>>>> svm_range_dma_map_dev calls dma_unmap_page before overwriting an 
>>>>> existing valid entry in the dma_addr array. Anyway, dma unmapping 
>>>>> the old pages in bulk may still be cleaner. And it avoids delays 
>>>>> in cleaning up DMA mappings after migrations.
>>>>>
>>>>> Regards,
>>>>>   Felix
>>>>>
>>>>>
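
The invariant Felix describes above (unmap an existing valid entry 
before overwriting it) can be sketched with a toy model. This is not 
the real amdkfd code; all names here (toy_*, TOY_DMA_VALID) are 
hypothetical stand-ins for the driver's per-range DMA address array 
and its validity check:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* stand-in for the driver's "valid system-RAM DMA mapping" flag */
#define TOY_DMA_VALID (1ULL << 0)

static unsigned int toy_unmap_count; /* counts dma_unmap_page() stand-ins */

static void toy_dma_unmap(uint64_t addr)
{
	(void)addr;
	toy_unmap_count++;	/* real code: dma_unmap_page(dev, ...) */
}

/*
 * Overwrite one slot of the per-range DMA address array.  If the slot
 * already holds a valid sys-RAM mapping, it is unmapped first, so
 * re-validating a range cannot leak a second mapping of the same page.
 */
static void toy_dma_map_slot(uint64_t *dma_addr, size_t i, uint64_t new_addr)
{
	if (dma_addr[i] & TOY_DMA_VALID)
		toy_dma_unmap(dma_addr[i]);
	dma_addr[i] = new_addr | TOY_DMA_VALID;
}
```

So per-slot no double mapping is leaked; the open question in the 
thread is only whether unmapping in bulk at migration time is cleaner 
than waiting for the next validation to do it slot by slot.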
>>>> Then we may not need to do dma_unmap after migrating from RAM to 
>>>> VRAM, since svm_range_dma_map_dev always calls dma_unmap_page when 
>>>> an entry holds a valid DMA address for system RAM, and after 
>>>> migrating from RAM to VRAM we always redo the GPU mapping?
>>>
>>> I think with XNACK enabled, the DMA mapping may be delayed until a 
>>> page fault. For example on a multi-GPU system, GPU1 page faults and 
>>> migrates data from system memory to its VRAM. Immediately 
>>> afterwards, the page fault handler should use svm_validate_and_map 
>>> to update GPU1 page tables. But GPU2 page tables are not updated 
>>> immediately. So the now stale DMA mappings for GPU2 would continue 
>>> to exist until the next page fault on GPU2.
>>>
>>> Regards,
>>>   Felix
>>>
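
The multi-GPU timeline Felix sketches can be made concrete with a toy 
model (again hypothetical names, not the real driver structures): a 
migration invalidates every GPU's view of the page, but with XNACK 
enabled only the faulting GPU revalidates, so the other GPU's DMA 
mapping stays stale until that GPU faults itself:

```c
#include <assert.h>
#include <stdbool.h>

enum toy_loc { SYSRAM, VRAM_GPU1 };

struct toy_mapping {
	enum toy_loc target;	/* where this GPU's mapping points */
	bool stale;		/* true once the page has moved elsewhere */
};

static enum toy_loc toy_page_loc = SYSRAM;

/* migration moves the page; every GPU's old mapping is now stale */
static void toy_migrate(struct toy_mapping map[2], enum toy_loc to)
{
	toy_page_loc = to;
	map[0].stale = true;
	map[1].stale = true;
}

/* only the faulting GPU revalidates and maps the page's current home */
static void toy_fault(struct toy_mapping map[2], int gpu)
{
	map[gpu].target = toy_page_loc;
	map[gpu].stale = false;
}
```

In the real flow GPU1's fault is what triggers the migration and the 
svm_validate_and_map for GPU1; the model just separates the two steps 
to show that GPU2 is untouched in between.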
>> If I understand correctly: when the user calls svm_range_set_attr 
>> and p->xnack_enabled is true, we can skip calling 
>> svm_range_validate_and_map. We postpone validating the buffer and 
>> mapping it on the GPU until a page fault, i.e. until the buffer is 
>> actually used by a GPU, and then DMA map and GPU map only for that 
>> GPU.
>
> The current implementation of svm_range_set_attr skips the validation 
> after migration if XNACK is off, because it is handled by 
> svm_range_restore_work that gets scheduled by the MMU notifier 
> triggered by the migration.
>
> With XNACK on, svm_range_set_attr currently validates and maps after 
> migration assuming that the data will be used by the GPU(s) soon. That 
> is something we could change and let page faults take care of the 
> mappings as needed.
>
Yes, with XNACK on, my understanding is that we can skip 
svm_range_validate_and_map in svm_range_set_attr after migration; the 
page fault handler will then do the DMA and GPU mapping. That would 
save the initial DMA and GPU mapping, which otherwise applies to all 
GPUs the user requested access for, while the GPU page fault handler 
does the DMA and GPU mapping only for the GPU that triggered the 
fault. Is that OK?
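
The policy being proposed reduces to one decision after a migration. 
A minimal sketch, with a hypothetical helper name (the real driver has 
no such function; the XNACK-off path is handled by 
svm_range_restore_work, as Felix notes above):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * After a migration triggered by svm_range_set_attr, can the DMA and
 * GPU mapping be deferred to each GPU's page fault handler?
 *
 * XNACK on:  faults are recoverable, so each GPU can map lazily on its
 *            first access (the proposal in this mail).
 * XNACK off: faults are fatal, so mappings must be restored eagerly
 *            (done by svm_range_restore_work via the MMU notifier).
 */
static bool toy_defer_mapping_to_fault(bool xnack_enabled)
{
	return xnack_enabled;
}
```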

Regards

Xiaogang

> Regards,
>   Felix
>
>

