AMD-GFX Archive on lore.kernel.org
From: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
To: Daniel Vetter <daniel@ffwll.ch>, christian.koenig@amd.com
Cc: robh@kernel.org, amd-gfx@lists.freedesktop.org,
	daniel.vetter@ffwll.ch, dri-devel@lists.freedesktop.org,
	eric@anholt.net, ppaalanen@gmail.com, yuq825@gmail.com,
	gregkh@linuxfoundation.org, Alexander.Deucher@amd.com,
	Harry.Wentland@amd.com, l.stach@pengutronix.de
Subject: Re: [PATCH v3 01/12] drm: Add dummy page per device or GEM object
Date: Fri, 8 Jan 2021 09:26:22 -0500	[thread overview]
Message-ID: <a140ca34-9cfc-9c2f-39e2-1af156faabfe@amd.com> (raw)
In-Reply-To: <589ece1f-2718-87ab-ec07-4044c3df1c58@amd.com>

Hey Christian, just a ping.

Andrey

On 1/7/21 11:37 AM, Andrey Grodzovsky wrote:
>
> On 1/7/21 11:30 AM, Daniel Vetter wrote:
>> On Thu, Jan 07, 2021 at 11:26:52AM -0500, Andrey Grodzovsky wrote:
>>> On 1/7/21 11:21 AM, Daniel Vetter wrote:
>>>> On Tue, Jan 05, 2021 at 04:04:16PM -0500, Andrey Grodzovsky wrote:
>>>>> On 11/23/20 3:01 AM, Christian König wrote:
>>>>>> Am 23.11.20 um 05:54 schrieb Andrey Grodzovsky:
>>>>>>> On 11/21/20 9:15 AM, Christian König wrote:
>>>>>>>> Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:
>>>>>>>>> Will be used to reroute CPU-mapped BOs' page faults once the
>>>>>>>>> device is removed.
>>>>>>>> Uff, one page for each exported DMA-buf? That's not something we can do.
>>>>>>>>
>>>>>>>> We need to find a different approach here.
>>>>>>>>
>>>>>>>> Can't we call alloc_page() on each fault and link them together
>>>>>>>> so they are freed when the device is finally reaped?
>>>>>>> Sure, it's better to optimize and allocate on demand when we reach
>>>>>>> this corner case, but why the linking?
>>>>>>> Shouldn't drm_prime_gem_destroy be a good enough place to free them?
>>>>>> I want to avoid keeping the page in the GEM object.
>>>>>>
>>>>>> What we can do is to allocate a page on demand for each fault and link
>>>>>> them together in the bdev instead.
>>>>>>
>>>>>> And when the bdev is finally destroyed after the last application has
>>>>>> closed, we can release all of them.
>>>>>>
>>>>>> Christian.
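
[Editor's note: Christian's suggestion — allocate a zeroed page per fault, link it into the bdev, and free the whole list only at final device teardown — can be sketched in userspace C. All names here (fake_bdev, fake_fault, ...) are hypothetical stand-ins, not the real TTM API.]

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

struct dummy_page {
	void *mem;               /* stands in for a struct page */
	struct dummy_page *next; /* linked into the bdev, not the BO */
};

struct fake_bdev {
	struct dummy_page *dummy_pages;
};

/* Called from the fault path: allocate a zeroed page on demand and
 * link it into the per-device list. */
static void *fake_fault(struct fake_bdev *bdev)
{
	struct dummy_page *p = malloc(sizeof(*p));

	if (!p)
		return NULL;
	p->mem = calloc(1, PAGE_SIZE); /* zeroed, like __GFP_ZERO */
	if (!p->mem) {
		free(p);
		return NULL;
	}
	p->next = bdev->dummy_pages;
	bdev->dummy_pages = p;
	return p->mem;
}

/* Called when the bdev is finally destroyed after the last app closed:
 * release every dummy page allocated along the way. */
static void fake_bdev_fini(struct fake_bdev *bdev)
{
	while (bdev->dummy_pages) {
		struct dummy_page *p = bdev->dummy_pages;

		bdev->dummy_pages = p->next;
		free(p->mem);
		free(p);
	}
}
```

The point of the list is that no pointer has to live in the GEM object itself; the cost is one allocation per fault until teardown.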
>>>>> Hey, I started to implement this and then realized that by
>>>>> indiscriminately allocating a page for each fault we would allocate a
>>>>> new page for every faulting virtual address within a VA range belonging
>>>>> to the same BO, which is obviously too much and not the intention.
>>>>> Should I instead use, say, a hashtable keyed by the faulting BO address,
>>>>> so that we keep allocating and reusing the same dummy zero page per GEM
>>>>> BO (or, for non-imported BOs, per DRM file object address)?
>>>> Why do we need a hashtable? All the sw structures to track this should
>>>> still be around:
>>>> - if gem_bo->dma_buf is set the buffer is currently exported as a dma-buf,
>>>>     so defensively allocate a per-bo page
>>>> - otherwise allocate a per-file page
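
[Editor's note: Daniel's rule can be sketched as a small selector. The struct names below are hypothetical stand-ins for drm_gem_object and drm_file, not the actual DRM types.]

```c
#include <assert.h>
#include <stddef.h>

struct page; /* opaque here */

struct fake_gem_bo {
	void *dma_buf;           /* non-NULL when exported as a dma-buf */
	struct page *dummy_page; /* defensively allocated per-BO page */
};

struct fake_drm_file {
	struct page *dummy_page; /* per-file page for everything else */
};

/* If the BO is currently exported, use its own dummy page;
 * otherwise fall back to the per-file one. */
static struct page *pick_dummy_page(const struct fake_gem_bo *bo,
				    const struct fake_drm_file *file)
{
	return bo->dma_buf ? bo->dummy_page : file->dummy_page;
}
```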
>>>
>>> That's exactly what we have in the current implementation.
>>>
>>>
>>>> Or is the idea to save the struct page * pointer? That feels a bit like
>>>> over-optimizing stuff. Better to have a simple implementation first and
>>>> then tune it if (and only if) any part of it becomes a problem for normal
>>>> usage.
>>>
>>> Exactly - the idea is to avoid adding an extra pointer to drm_gem_object.
>>> Christian suggested to instead keep a linked list of dummy pages to be
>>> allocated on demand once we hit a vm_fault. I will then also prefault the
>>> entire VA range from vma->vm_start to vma->vm_end and map it all to that
>>> single dummy page.
>> This strongly feels like premature optimization. If you're worried about
>> the overhead on amdgpu, pay down the debt by removing one of the redundant
>> pointers between gem and ttm bo structs (I think we still have some) :-)
>>
>> Until we've nuked these easy&obvious ones we shouldn't play "avoid 1
>> pointer just because" games with hashtables.
>> -Daniel
>
>
> Well, if you and Christian can agree on this approach and perhaps suggest
> which pointer is redundant and can be removed from the GEM struct, so that
> we can use the 'credit' to add the dummy page to GEM, I will be happy to
> follow through.
>
> P.S. The hashtable is off the table anyway; we are talking only about a
> linked list here, since by prefaulting the entire VA range for a vmf->vma
> I will be avoiding redundant page faults to the same VMA VA range, and so
> I don't need to search for and reuse an existing dummy page but can simply
> create a new one for each next fault.
>
> Andrey
>
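
[Editor's note: the prefault idea above — every page of a VA range backed by one shared dummy page — has a userspace analogue: map the same page of a memfd at every page of a reserved range. The sketch below is an illustration only (memfd_create is Linux-specific); the kernel side would remap the vma itself, e.g. with something like vmf_insert_pfn, rather than anything shown here.]

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Reserve npages of virtual address space, then map page 0 of a single
 * memfd at every page of it, so the whole range aliases one backing page. */
static char *map_dummy_range(size_t npages, size_t pgsz)
{
	int fd = memfd_create("dummy_page", 0);

	if (fd < 0 || ftruncate(fd, (off_t)pgsz) != 0)
		return NULL;

	/* Reserve the full VA range first. */
	char *base = mmap(NULL, npages * pgsz, PROT_NONE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED)
		return NULL;

	/* Overlay the same backing page at each page of the range. */
	for (size_t i = 0; i < npages; i++)
		if (mmap(base + i * pgsz, pgsz, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED)
			return NULL;

	close(fd); /* the mappings keep the backing page alive */
	return base;
}
```

Because every page of the range points at the same backing page, a write through one address is visible through all the others — which is exactly why one dummy page per VA range suffices.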
>
>>
>>> Andrey
>>>
>>>
>>>> -Daniel
>>>>
>>>>> Andrey
>>>>>
>>>>>
>>>>>>> Andrey
>>>>>>>
>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Christian.
>>>>>>>>
>>>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>>>> ---
>>>>>>>>>     drivers/gpu/drm/drm_file.c  |  8 ++++++++
>>>>>>>>>     drivers/gpu/drm/drm_prime.c | 10 ++++++++++
>>>>>>>>>     include/drm/drm_file.h      |  2 ++
>>>>>>>>>     include/drm/drm_gem.h       |  2 ++
>>>>>>>>>     4 files changed, 22 insertions(+)
>>>>>>>>>
>>>>>>>>> diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
>>>>>>>>> index 0ac4566..ff3d39f 100644
>>>>>>>>> --- a/drivers/gpu/drm/drm_file.c
>>>>>>>>> +++ b/drivers/gpu/drm/drm_file.c
>>>>>>>>> @@ -193,6 +193,12 @@ struct drm_file *drm_file_alloc(struct drm_minor *minor)
>>>>>>>>>  		goto out_prime_destroy;
>>>>>>>>>  	}
>>>>>>>>>  
>>>>>>>>> +	file->dummy_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>>>>>>>>> +	if (!file->dummy_page) {
>>>>>>>>> +		ret = -ENOMEM;
>>>>>>>>> +		goto out_prime_destroy;
>>>>>>>>> +	}
>>>>>>>>> +
>>>>>>>>>  	return file;
>>>>>>>>>  
>>>>>>>>>  out_prime_destroy:
>>>>>>>>> @@ -289,6 +295,8 @@ void drm_file_free(struct drm_file *file)
>>>>>>>>>  	if (dev->driver->postclose)
>>>>>>>>>  		dev->driver->postclose(dev, file);
>>>>>>>>>  
>>>>>>>>> +	__free_page(file->dummy_page);
>>>>>>>>> +
>>>>>>>>>  	drm_prime_destroy_file_private(&file->prime);
>>>>>>>>>  
>>>>>>>>>  	WARN_ON(!list_empty(&file->event_list));
>>>>>>>>> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
>>>>>>>>> index 1693aa7..987b45c 100644
>>>>>>>>> --- a/drivers/gpu/drm/drm_prime.c
>>>>>>>>> +++ b/drivers/gpu/drm/drm_prime.c
>>>>>>>>> @@ -335,6 +335,13 @@ int drm_gem_prime_fd_to_handle(struct drm_device *dev,
>>>>>>>>>  
>>>>>>>>>  	ret = drm_prime_add_buf_handle(&file_priv->prime,
>>>>>>>>>  			dma_buf, *handle);
>>>>>>>>> +
>>>>>>>>> +	if (!ret) {
>>>>>>>>> +		obj->dummy_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>>>>>>>>> +		if (!obj->dummy_page)
>>>>>>>>> +			ret = -ENOMEM;
>>>>>>>>> +	}
>>>>>>>>> +
>>>>>>>>>  	mutex_unlock(&file_priv->prime.lock);
>>>>>>>>>  	if (ret)
>>>>>>>>>  		goto fail;
>>>>>>>>> @@ -1020,6 +1027,9 @@ void drm_prime_gem_destroy(struct drm_gem_object *obj, struct sg_table *sg)
>>>>>>>>>  		dma_buf_unmap_attachment(attach, sg, DMA_BIDIRECTIONAL);
>>>>>>>>>  	dma_buf = attach->dmabuf;
>>>>>>>>>  	dma_buf_detach(attach->dmabuf, attach);
>>>>>>>>> +
>>>>>>>>> +	__free_page(obj->dummy_page);
>>>>>>>>> +
>>>>>>>>>  	/* remove the reference */
>>>>>>>>>  	dma_buf_put(dma_buf);
>>>>>>>>>  }
>>>>>>>>> diff --git a/include/drm/drm_file.h b/include/drm/drm_file.h
>>>>>>>>> index 716990b..2a011fc 100644
>>>>>>>>> --- a/include/drm/drm_file.h
>>>>>>>>> +++ b/include/drm/drm_file.h
>>>>>>>>> @@ -346,6 +346,8 @@ struct drm_file {
>>>>>>>>>  	 */
>>>>>>>>>  	struct drm_prime_file_private prime;
>>>>>>>>>  
>>>>>>>>> +	struct page *dummy_page;
>>>>>>>>> +
>>>>>>>>>  	/* private: */
>>>>>>>>>  #if IS_ENABLED(CONFIG_DRM_LEGACY)
>>>>>>>>>  	unsigned long lock_count; /* DRI1 legacy lock count */
>>>>>>>>> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
>>>>>>>>> index 337a483..76a97a3 100644
>>>>>>>>> --- a/include/drm/drm_gem.h
>>>>>>>>> +++ b/include/drm/drm_gem.h
>>>>>>>>> @@ -311,6 +311,8 @@ struct drm_gem_object {
>>>>>>>>>  	 *
>>>>>>>>>  	 */
>>>>>>>>>  	const struct drm_gem_object_funcs *funcs;
>>>>>>>>> +
>>>>>>>>> +	struct page *dummy_page;
>>>>>>>>>  };
>>>>>>>>>  
>>>>>>>>>  /**
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


Thread overview: 105+ messages
2020-11-21  5:21 [PATCH v3 00/12] RFC Support hot device unplug in amdgpu Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 01/12] drm: Add dummy page per device or GEM object Andrey Grodzovsky
2020-11-21 14:15   ` Christian König
2020-11-23  4:54     ` Andrey Grodzovsky
2020-11-23  8:01       ` Christian König
2021-01-05 21:04         ` Andrey Grodzovsky
2021-01-07 16:21           ` Daniel Vetter
2021-01-07 16:26             ` Andrey Grodzovsky
2021-01-07 16:28               ` Andrey Grodzovsky
2021-01-07 16:30               ` Daniel Vetter
2021-01-07 16:37                 ` Andrey Grodzovsky
2021-01-08 14:26                   ` Andrey Grodzovsky [this message]
2021-01-08 14:33                     ` Christian König
2021-01-08 14:46                       ` Andrey Grodzovsky
2021-01-08 14:52                         ` Christian König
2021-01-08 16:49                           ` Grodzovsky, Andrey
2021-01-11 16:13                             ` Daniel Vetter
2021-01-11 16:15                               ` Daniel Vetter
2021-01-11 17:41                                 ` Andrey Grodzovsky
2021-01-11 20:45                                 ` Andrey Grodzovsky
2021-01-12  9:10                                   ` Daniel Vetter
2021-01-12 12:32                                     ` Christian König
2021-01-12 15:59                                       ` Andrey Grodzovsky
2021-01-13  9:14                                         ` Christian König
2021-01-13 14:40                                           ` Andrey Grodzovsky
2021-01-12 15:54                                     ` Andrey Grodzovsky
2021-01-12  8:12                               ` Christian König
2021-01-12  9:13                                 ` Daniel Vetter
2020-11-21  5:21 ` [PATCH v3 02/12] drm: Unamp the entire device address space on device unplug Andrey Grodzovsky
2020-11-21 14:16   ` Christian König
2020-11-24 14:44     ` Daniel Vetter
2020-11-21  5:21 ` [PATCH v3 03/12] drm/ttm: Remap all page faults to per process dummy page Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 04/12] drm/ttm: Set dma addr to null after freee Andrey Grodzovsky
2020-11-21 14:13   ` Christian König
2020-11-23  5:15     ` Andrey Grodzovsky
2020-11-23  8:04       ` Christian König
2020-11-21  5:21 ` [PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use Andrey Grodzovsky
2020-11-25 10:42   ` Christian König
2020-11-23 20:05     ` Andrey Grodzovsky
2020-11-23 20:20       ` Christian König
2020-11-23 20:38         ` Andrey Grodzovsky
2020-11-23 20:41           ` Christian König
2020-11-23 21:08             ` Andrey Grodzovsky
2020-11-24  7:41               ` Christian König
2020-11-24 16:22                 ` Andrey Grodzovsky
2020-11-24 16:44                   ` Christian König
2020-11-25 10:40                     ` Daniel Vetter
2020-11-25 12:57                       ` Christian König
2020-11-25 16:36                         ` Daniel Vetter
2020-11-25 19:34                           ` Andrey Grodzovsky
2020-11-27 13:10                             ` Grodzovsky, Andrey
2020-11-27 14:59                             ` Daniel Vetter
2020-11-27 16:04                               ` Andrey Grodzovsky
2020-11-30 14:15                                 ` Daniel Vetter
2020-11-25 16:56                         ` Michel Dänzer
2020-11-25 17:02                           ` Daniel Vetter
2020-12-15 20:18                     ` Andrey Grodzovsky
2020-12-16  8:04                       ` Christian König
2020-12-16 14:21                         ` Daniel Vetter
2020-12-16 16:13                           ` Andrey Grodzovsky
2020-12-16 16:18                             ` Christian König
2020-12-16 17:12                               ` Daniel Vetter
2020-12-16 17:20                                 ` Daniel Vetter
2020-12-16 18:26                                 ` Andrey Grodzovsky
2020-12-16 23:15                                   ` Daniel Vetter
2020-12-17  0:20                                     ` Andrey Grodzovsky
2020-12-17 12:01                                       ` Daniel Vetter
2020-12-17 19:19                                         ` Andrey Grodzovsky
2020-12-17 20:10                                           ` Christian König
2020-12-17 20:38                                             ` Andrey Grodzovsky
2020-12-17 20:48                                               ` Daniel Vetter
2020-12-17 21:06                                                 ` Andrey Grodzovsky
2020-12-18 14:30                                                   ` Daniel Vetter
2020-12-17 20:42                                           ` Daniel Vetter
2020-12-17 21:13                                             ` Andrey Grodzovsky
2021-01-04 16:33                                               ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 06/12] drm/sched: Cancel and flush all oustatdning jobs before finish Andrey Grodzovsky
2020-11-22 11:56   ` Christian König
2020-11-21  5:21 ` [PATCH v3 07/12] drm/sched: Prevent any job recoveries after device is unplugged Andrey Grodzovsky
2020-11-22 11:57   ` Christian König
2020-11-23  5:37     ` Andrey Grodzovsky
2020-11-23  8:06       ` Christian König
2020-11-24  1:12         ` Luben Tuikov
2020-11-24  7:50           ` Christian König
2020-11-24 17:11             ` Luben Tuikov
2020-11-24 17:17               ` Andrey Grodzovsky
2020-11-24 17:41                 ` Luben Tuikov
2020-11-24 17:40               ` Christian König
2020-11-24 17:44                 ` Luben Tuikov
2020-11-21  5:21 ` [PATCH v3 08/12] drm/amdgpu: Split amdgpu_device_fini into early and late Andrey Grodzovsky
2020-11-24 14:53   ` Daniel Vetter
2020-11-24 15:51     ` Andrey Grodzovsky
2020-11-25 10:41       ` Daniel Vetter
2020-11-25 17:41         ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 09/12] drm/amdgpu: Add early fini callback Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 10/12] drm/amdgpu: Avoid sysfs dirs removal post device unplug Andrey Grodzovsky
2020-11-24 14:49   ` Daniel Vetter
2020-11-24 22:27     ` Andrey Grodzovsky
2020-11-25  9:04       ` Daniel Vetter
2020-11-25 17:39         ` Andrey Grodzovsky
2020-11-27 13:12           ` Grodzovsky, Andrey
2020-11-27 15:04           ` Daniel Vetter
2020-11-27 15:34             ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 11/12] drm/amdgpu: Register IOMMU topology notifier per device Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 12/12] drm/amdgpu: Fix a bunch of sdma code crash post device unplug Andrey Grodzovsky
