AMD-GFX Archive on lore.kernel.org
From: "Das, Nirmoy" <nirmoy.das@amd.com>
To: "Christian König" <ckoenig.leichtzumerken@gmail.com>,
	"Christian König" <christian.koenig@amd.com>,
	amd-gfx@lists.freedesktop.org
Cc: Felix.Kuehling@amd.com
Subject: Re: [PATCH 1/1] drm/amdgpu: add helper function for vm pasid
Date: Tue, 22 Jun 2021 12:51:24 +0200	[thread overview]
Message-ID: <18309ad2-8fc9-30dc-cfa0-9baa1a08f5f4@amd.com> (raw)
In-Reply-To: <c28d9c08-0ab8-780a-ae44-e444ae49500e@gmail.com>




On 6/22/2021 12:36 PM, Christian König wrote:
> Am 22.06.21 um 12:30 schrieb Das, Nirmoy:
>>
>> On 6/22/2021 10:36 AM, Christian König wrote:
>>
>>> Am 22.06.21 um 09:39 schrieb Das, Nirmoy:
>>>>
>>>> On 6/22/2021 9:03 AM, Christian König wrote:
>>>>>
>>>>>
>>>>> Am 22.06.21 um 08:57 schrieb Nirmoy Das:
>>>>>> Cleanup code related to vm pasid by adding helper functions.
>>>>>>
>>>>>> Signed-off-by: Nirmoy Das <nirmoy.das@amd.com>
>>>>>> ---
>>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 105 ++++++++++++-------------
>>>>>>   1 file changed, 50 insertions(+), 55 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> index 63975bda8e76..6e476b173cbb 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>>> @@ -87,6 +87,46 @@ struct amdgpu_prt_cb {
>>>>>>       struct dma_fence_cb cb;
>>>>>>   };
>>>>>>
>>>>>> +static int amdgpu_vm_pasid_alloc(struct amdgpu_device *adev,
>>>>>> +                 struct amdgpu_vm *vm,
>>>>>> +                 unsigned int pasid,
>>>>>> +                 unsigned int *vm_pasid)
>>>>>> +{
>>>>>> +    unsigned long flags;
>>>>>> +    int r;
>>>>>> +
>>>>>> +    if (!pasid)
>>>>>> +        return 0;
>>>>>> +
>>>>>> +    spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
>>>>>> +    r = idr_alloc(&adev->vm_manager.pasid_idr, vm, pasid, pasid + 1,
>>>>>> +              GFP_ATOMIC);
>>>>>> +    spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
>>>>>> +    if (r < 0)
>>>>>> +        return r;
>>>>>> +    if (vm_pasid)
>>>>>> +        *vm_pasid = pasid;
>>>>>> +
>>>>>
>>>>> Ok the more I read from this patch the less it makes sense.
>>>>>
>>>>> We don't allocate the pasid here, we just set it up in the idr.
>>>>>
>>>>> What we could do is to replace the idr with an xarray, that would 
>>>>> certainly make more sense than this here.
>>>>
>>>>
>>>> xarray looks great, with that we don't need pasid_lock either.
>>>
>>> You still need the lock to protect against VM destruction while 
>>> looking things up, but you could switch to RCU for this instead.
>>
>>
>> xarray has xa_{lock|unlock}_irqsave(), and adev->vm_manager.pasid_xa
>> will exist for the device's lifetime.
>
> That's just a wrapper around the lock.
>
>> So I am thinking something like:
>>
>> amdgpu_vm_pasid_insert()
>>
>> {
>>
>> ...
>>
>> xa_lock_irqsave(&adev->vm_manager.pasids, flags)
>> r = xa_err(__xa_store(&adev->vm_manager.pasids, pasid, vm, GFP_ATOMIC));
>> xa_unlock_irqrestore(&adev->vm_manager.pasids, flags)
>
> It would be really nice if we could avoid the GFP_ATOMIC here, but not 
> much of a problem since we had that before.


I think that is possible, as only amdgpu_vm_handle_fault() runs in
interrupt context, and it only looks up a vm pointer rather than
storing one.
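
To make that concrete, here is a rough, hypothetical sketch of how the two
paths could split (the helper names and the pasids xarray field are
assumptions taken from this thread, not the final patch; xa_store_irq(),
xa_load(), xa_lock_irqsave() and xa_unlock_irqrestore() are stock kernel
xarray helpers, so this fragment only builds in kernel context):

```c
/* Process context (VM creation): may sleep, so GFP_KERNEL is fine.
 * xa_store_irq() takes xa_lock_irq() internally, which keeps the
 * entry consistent with the interrupt-context reader below. */
static int amdgpu_vm_pasid_insert(struct amdgpu_device *adev,
				  struct amdgpu_vm *vm, u32 pasid)
{
	return xa_err(xa_store_irq(&adev->vm_manager.pasids, pasid, vm,
				   GFP_KERNEL));
}

/* Interrupt context (fault handling): lookup only, no allocation. */
static struct amdgpu_vm *amdgpu_vm_pasid_find(struct amdgpu_device *adev,
					      u32 pasid)
{
	struct amdgpu_vm *vm;
	unsigned long flags;

	xa_lock_irqsave(&adev->vm_manager.pasids, flags);
	vm = xa_load(&adev->vm_manager.pasids, pasid);
	xa_unlock_irqrestore(&adev->vm_manager.pasids, flags);
	return vm;
}
```

The point of the split is exactly what is argued above: only the store
needs an allocation, and the store never runs in interrupt context, so
GFP_ATOMIC can be dropped from the write path.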


>
>> }
>>
>> amdgpu_vm_pasid_remove()
>>
>> {
>>
>> ....
>>
>> xa_lock_irqsave(&adev->vm_manager.pasids, flags)
>> __xa_erase(&adev->vm_manager.pasids, pasid);
>> xa_unlock_irqrestore(&adev->vm_manager.pasids, flags)
>>
>> }
>>
>>
>> xa_{lock|unlock}_irqsave() can be used while looking up the vm pointer
>> for a pasid.
>>
>>
>> Shouldn't this be enough?
>>
>
> Yeah I think so.


Great.


Nirmoy
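
As an aside, if the xa_erase() sketch quoted above holds up, the removal
helper could collapse to a single call. This is only a hypothetical
sketch under the same xarray assumption; xa_erase_irq() is the stock
kernel helper that takes xa_lock_irq() internally:

```c
static void amdgpu_vm_pasid_remove(struct amdgpu_device *adev,
				   unsigned int pasid)
{
	if (!pasid)
		return;

	/* xa_erase_irq() locks the array itself, so no explicit
	 * spin_lock_irqsave()/pasid_lock is needed here. */
	xa_erase_irq(&adev->vm_manager.pasids, pasid);
}
```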

>
> Christian.
>
>>
>> Regards,
>>
>> Nirmoy
>>
>>>
>>> Christian.
>>>
>>>>
>>>>
>>>> Thanks
>>>>
>>>> Nirmoy
>>>>
>>>>
>>>>>
>>>>> Christian.
>>>>>
>>>>>> +    return 0;
>>>>>> +}
>>>>>> +
>>>>>> +static void amdgpu_vm_pasid_remove(struct amdgpu_device *adev,
>>>>>> +                   unsigned int pasid,
>>>>>> +                   unsigned int *vm_pasid)
>>>>>> +{
>>>>>> +    unsigned long flags;
>>>>>> +
>>>>>> +    if (!pasid)
>>>>>> +        return;
>>>>>> +
>>>>>> +    spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
>>>>>> +    idr_remove(&adev->vm_manager.pasid_idr, pasid);
>>>>>> +    spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
>>>>>> +
>>>>>> +    if (vm_pasid)
>>>>>> +        *vm_pasid = 0;
>>>>>> +}
>>>>>> +
>>>>>>   /*
>>>>>>    * vm eviction_lock can be taken in MMU notifiers. Make sure no reclaim-FS
>>>>>>    * happens while holding this lock anywhere to prevent deadlocks when
>>>>>> @@ -2940,18 +2980,8 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm, u32 pasid)
>>>>>>
>>>>>>       amdgpu_bo_unreserve(vm->root.bo);
>>>>>>
>>>>>> -    if (pasid) {
>>>>>> -        unsigned long flags;
>>>>>> -
>>>>>> -        spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
>>>>>> -        r = idr_alloc(&adev->vm_manager.pasid_idr, vm, pasid, pasid + 1,
>>>>>> -                  GFP_ATOMIC);
>>>>>> -        spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
>>>>>> -        if (r < 0)
>>>>>> -            goto error_free_root;
>>>>>> -
>>>>>> -        vm->pasid = pasid;
>>>>>> -    }
>>>>>> +    if (amdgpu_vm_pasid_alloc(adev, vm, pasid, &vm->pasid))
>>>>>> +        goto error_free_root;
>>>>>>
>>>>>>       INIT_KFIFO(vm->faults);
>>>>>>
>>>>>> @@ -3038,19 +3068,11 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>>>>       r = amdgpu_vm_check_clean_reserved(adev, vm);
>>>>>>       if (r)
>>>>>>           goto unreserve_bo;
>>>>>> +    r = amdgpu_vm_pasid_alloc(adev, vm, pasid, NULL);
>>>>>> +    if (r ==  -ENOSPC)
>>>>>> +        goto unreserve_bo;
>>>>>>
>>>>>> -    if (pasid) {
>>>>>> -        unsigned long flags;
>>>>>> -
>>>>>> -        spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
>>>>>> -        r = idr_alloc(&adev->vm_manager.pasid_idr, vm, pasid, pasid + 1,
>>>>>> -                  GFP_ATOMIC);
>>>>>> -        spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
>>>>>> -
>>>>>> -        if (r == -ENOSPC)
>>>>>> -            goto unreserve_bo;
>>>>>> -        r = 0;
>>>>>> -    }
>>>>>> +    r = 0;
>>>>>>
>>>>>>       /* Check if PD needs to be reinitialized and do it before
>>>>>>        * changing any other state, in case it fails.
>>>>>> @@ -3089,35 +3111,23 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>>>>       vm->is_compute_context = true;
>>>>>>
>>>>>>       if (vm->pasid) {
>>>>>> -        unsigned long flags;
>>>>>> -
>>>>>> -        spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
>>>>>> -        idr_remove(&adev->vm_manager.pasid_idr, vm->pasid);
>>>>>> -        spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
>>>>>> -
>>>>>>           /* Free the original amdgpu allocated pasid
>>>>>>            * Will be replaced with kfd allocated pasid
>>>>>>            */
>>>>>>           amdgpu_pasid_free(vm->pasid);
>>>>>> -        vm->pasid = 0;
>>>>>> +        amdgpu_vm_pasid_remove(adev, vm->pasid, &vm->pasid);
>>>>>>       }
>>>>>>
>>>>>>       /* Free the shadow bo for compute VM */
>>>>>>       amdgpu_bo_unref(&to_amdgpu_bo_vm(vm->root.bo)->shadow);
>>>>>> -
>>>>>>       if (pasid)
>>>>>>           vm->pasid = pasid;
>>>>>>
>>>>>>       goto unreserve_bo;
>>>>>>
>>>>>>   free_idr:
>>>>>> -    if (pasid) {
>>>>>> -        unsigned long flags;
>>>>>> +    amdgpu_vm_pasid_remove(adev, pasid, NULL);
>>>>>>
>>>>>> -        spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
>>>>>> -        idr_remove(&adev->vm_manager.pasid_idr, pasid);
>>>>>> -        spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
>>>>>> -    }
>>>>>>   unreserve_bo:
>>>>>>       amdgpu_bo_unreserve(vm->root.bo);
>>>>>>       return r;
>>>>>> @@ -3133,14 +3143,7 @@ int amdgpu_vm_make_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>>>>>    */
>>>>>>   void amdgpu_vm_release_compute(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>>>>   {
>>>>>> -    if (vm->pasid) {
>>>>>> -        unsigned long flags;
>>>>>> -
>>>>>> -        spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
>>>>>> -        idr_remove(&adev->vm_manager.pasid_idr, vm->pasid);
>>>>>> -        spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
>>>>>> -    }
>>>>>> -    vm->pasid = 0;
>>>>>> +    amdgpu_vm_pasid_remove(adev, vm->pasid, &vm->pasid);
>>>>>>       vm->is_compute_context = false;
>>>>>>   }
>>>>>>
>>>>>> @@ -3164,15 +3167,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>>>>
>>>>>>       root = amdgpu_bo_ref(vm->root.bo);
>>>>>>       amdgpu_bo_reserve(root, true);
>>>>>> -    if (vm->pasid) {
>>>>>> -        unsigned long flags;
>>>>>> -
>>>>>> -        spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);
>>>>>> -        idr_remove(&adev->vm_manager.pasid_idr, vm->pasid);
>>>>>> -        spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);
>>>>>> -        vm->pasid = 0;
>>>>>> -    }
>>>>>> -
>>>>>> +    amdgpu_vm_pasid_remove(adev, vm->pasid, &vm->pasid);
>>>>>>       dma_fence_wait(vm->last_unlocked, false);
>>>>>>       dma_fence_put(vm->last_unlocked);
>>>>>>
>>>>>> -- 
>>>>>> 2.32.0
>>>>>>
>>>>>
>>>
>


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Thread overview: 19+ messages
2021-06-22  6:57 [PATCH 1/1] drm/amdgpu: add helper function for vm pasid Nirmoy Das
2021-06-22  7:03 ` Christian König
2021-06-22  7:39   ` Das, Nirmoy
2021-06-22  8:36     ` Christian König
2021-06-22 10:30       ` Das, Nirmoy
2021-06-22 10:36         ` Christian König
2021-06-22 10:51           ` Das, Nirmoy [this message]
  -- strict thread matches above, loose matches on Subject: below --
2021-06-17 13:03 Nirmoy Das
2021-06-21  9:47 ` Das, Nirmoy
2021-06-21 10:26   ` Christian König
2021-06-21 10:56     ` Das, Nirmoy
2021-06-21 10:59       ` Christian König
2021-06-21 11:07         ` Das, Nirmoy
2021-06-21 11:13           ` Christian König
2021-06-21 11:10         ` Das, Nirmoy
2021-06-21 11:18           ` Christian König
2021-06-21 11:27             ` Das, Nirmoy
2021-06-21 11:35               ` Christian König
2021-06-21 11:41                 ` Das, Nirmoy
