AMD-GFX Archive on lore.kernel.org
From: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: robh@kernel.org, daniel.vetter@ffwll.ch,
	dri-devel@lists.freedesktop.org, eric@anholt.net,
	ppaalanen@gmail.com, amd-gfx@lists.freedesktop.org,
	gregkh@linuxfoundation.org, Alexander.Deucher@amd.com,
	yuq825@gmail.com, Harry.Wentland@amd.com,
	christian.koenig@amd.com, l.stach@pengutronix.de
Subject: Re: [PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use
Date: Fri, 27 Nov 2020 11:04:55 -0500	[thread overview]
Message-ID: <a1759f4c-1b00-fb6f-697c-40db915fb58e@amd.com> (raw)
In-Reply-To: <20201127145936.GC401619@phenom.ffwll.local>


On 11/27/20 9:59 AM, Daniel Vetter wrote:
> On Wed, Nov 25, 2020 at 02:34:44PM -0500, Andrey Grodzovsky wrote:
>> On 11/25/20 11:36 AM, Daniel Vetter wrote:
>>> On Wed, Nov 25, 2020 at 01:57:40PM +0100, Christian König wrote:
>>>> Am 25.11.20 um 11:40 schrieb Daniel Vetter:
>>>>> On Tue, Nov 24, 2020 at 05:44:07PM +0100, Christian König wrote:
>>>>>> Am 24.11.20 um 17:22 schrieb Andrey Grodzovsky:
>>>>>>> On 11/24/20 2:41 AM, Christian König wrote:
>>>>>>>> Am 23.11.20 um 22:08 schrieb Andrey Grodzovsky:
>>>>>>>>> On 11/23/20 3:41 PM, Christian König wrote:
>>>>>>>>>> Am 23.11.20 um 21:38 schrieb Andrey Grodzovsky:
>>>>>>>>>>> On 11/23/20 3:20 PM, Christian König wrote:
>>>>>>>>>>>> Am 23.11.20 um 21:05 schrieb Andrey Grodzovsky:
>>>>>>>>>>>>> On 11/25/20 5:42 AM, Christian König wrote:
>>>>>>>>>>>>>> Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:
>>>>>>>>>>>>>>> It's needed to drop IOMMU-backed pages on device unplug
>>>>>>>>>>>>>>> before the device's IOMMU group is released.
>>>>>>>>>>>>>> It would be cleaner if we could do the whole
>>>>>>>>>>>>>> handling in TTM. I also need to double check
>>>>>>>>>>>>>> what you are doing with this function.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Christian.
>>>>>>>>>>>>> Check the patch "drm/amdgpu: Register IOMMU topology
>>>>>>>>>>>>> notifier per device." to see
>>>>>>>>>>>>> how I use it. I don't see why this should go
>>>>>>>>>>>>> into the TTM mid-layer - the stuff I do inside
>>>>>>>>>>>>> is vendor specific, and I don't think TTM is
>>>>>>>>>>>>> explicitly aware of IOMMU?
>>>>>>>>>>>>> Do you mean you'd prefer the IOMMU notifier to be
>>>>>>>>>>>>> registered from within TTM,
>>>>>>>>>>>>> which then uses a hook to call into a vendor-specific handler?
>>>>>>>>>>>> No, that is really vendor specific.
>>>>>>>>>>>>
>>>>>>>>>>>> What I meant is to have a function like
>>>>>>>>>>>> ttm_resource_manager_evict_all() which you only need
>>>>>>>>>>>> to call and all tt objects are unpopulated.
>>>>>>>>>>> So instead of this BO list I create and later iterate in
>>>>>>>>>>> amdgpu from the IOMMU patch, you just want to do it
>>>>>>>>>>> within
>>>>>>>>>>> TTM with a single function? Makes much more sense.
>>>>>>>>>> Yes, exactly.
>>>>>>>>>>
>>>>>>>>>> The list_empty() checks we have in TTM for the LRU are
>>>>>>>>>> actually not the best idea; we should now check the
>>>>>>>>>> pin_count instead. This way we could also have a list of the
>>>>>>>>>> pinned BOs in TTM.
>>>>>>>>> So from my IOMMU topology handler I will iterate the TTM LRU for
>>>>>>>>> the unpinned BOs and this new function for the pinned ones?
>>>>>>>>> It's probably a good idea to combine both iterations into this
>>>>>>>>> new function to cover all the BOs allocated on the device.
>>>>>>>> Yes, that's what I had in my mind as well.
>>>>>>>>
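A self-contained toy model of the single-function sweep being discussed - one walk over every BO a device has allocated, pinned or not, dropping the backing pages of each. All names below (toy_bo, toy_unpopulate_all, ...) are hypothetical stand-ins, not the actual TTM API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the helper discussed above: one sweep that unpopulates
 * every buffer object on a device, pinned or not. All names here are
 * hypothetical -- this is not the real TTM API. */

struct toy_bo {
    int pin_count;        /* > 0 means pinned, so it is off the LRU  */
    int populated;        /* 1 while backing pages are attached      */
    struct toy_bo *next;  /* intrusive singly linked device list     */
};

struct toy_device {
    struct toy_bo *bos;   /* every BO allocated on this device       */
};

/* Drop the backing pages of one BO (stand-in for ttm_tt_unpopulate). */
static void toy_unpopulate(struct toy_bo *bo)
{
    bo->populated = 0;
}

/* The single-function sweep: walk all BOs regardless of pin state,
 * instead of iterating the LRU (unpinned only) plus a separate pinned
 * list. Returns how many BOs were unpopulated. */
static int toy_unpopulate_all(struct toy_device *dev)
{
    int n = 0;
    for (struct toy_bo *bo = dev->bos; bo; bo = bo->next) {
        if (bo->populated) {
            toy_unpopulate(bo);
            n++;
        }
    }
    return n;
}
```

The real helper would of course take the TTM locks and call ttm_tt_unpopulate(); the point is only that a single list walk replaces the LRU-plus-pinned-list double iteration.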
>>>>>>>>>> BTW: Have you thought about what happens when we unpopulate
>>>>>>>>>> a BO while we still try to use a kernel mapping for it? That
>>>>>>>>>> could have unforeseen consequences.
>>>>>>>>> Are you asking what happens to kmap- or vmap-style mapped CPU
>>>>>>>>> accesses once we drop all the DMA backing pages for a particular
>>>>>>>>> BO? Because for user mappings
>>>>>>>>> (mmap) we took care of this with the dummy-page reroute, but indeed
>>>>>>>>> nothing was done for in-kernel CPU mappings.
>>>>>>>> Yes exactly that.
>>>>>>>>
>>>>>>>> In other words what happens if we free the ring buffer while the
>>>>>>>> kernel still writes to it?
>>>>>>>>
>>>>>>>> Christian.
>>>>>>> While we can't control user application accesses to the mapped buffers
>>>>>>> explicitly and hence we use page fault rerouting
>>>>>>> I am thinking that in this case we may be able to sprinkle
>>>>>>> drm_dev_enter/exit in any such sensitive place where we might
>>>>>>> CPU-access a DMA buffer from the kernel?
>>>>>> Yes, I fear we are going to need that.
>>>>> Uh ... the problem is that dma_buf_vmap mappings are usually permanent
>>>>> things. Maybe we could stuff this into begin/end_cpu_access
>>
>> Do you mean guarding with drm_dev_enter/exit in the dma_buf_ops.begin/end_cpu_access
>> driver-specific hook?
>>
>>
>>>>> (but only for the kernel, so a
>>>>> bit tricky)?
>>
>> Why only kernel? Why is it a problem to do it if it comes from dma_buf_ioctl by
>> some user process? And if we do need this distinction, I think we should be able to
>> differentiate by looking at the current->mm (i.e. mm_struct) pointer being NULL
>> for kernel threads.
> Userspace mmap is handled by punching out the pte. So we don't need to do
> anything special there.
>
> For kernel mmap the begin/end should be all in the same context (so we
> could use the srcu lock that works underneath drm_dev_enter/exit), since
> at least right now kernel vmaps of dma-buf are very long-lived.


If by "same context" you mean the right drm_device (the exporter's one)
then this should be OK, judging by amdgpu's implementation
of the callback - amdgpu_dma_buf_begin_cpu_access. We just need to add a
handler for the .end_cpu_access callback to call drm_dev_exit there.
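A minimal sketch of that guarding, with toy_* names as hypothetical stand-ins for drm_dev_enter/drm_dev_exit and the dma_buf_ops callbacks - begin_cpu_access fails with -ENODEV once the device is unplugged, and end_cpu_access pairs with the enter:

```c
#include <assert.h>

/* Stand-in for the drm_device unplug state. */
struct toy_drm_device {
    int unplugged;
};

/* Mimics drm_dev_enter(): returns nonzero only while the device is
 * still present; the caller may then touch device-backed memory. */
static int toy_dev_enter(struct toy_drm_device *dev)
{
    return !dev->unplugged;
}

static void toy_dev_exit(struct toy_drm_device *dev)
{
    (void)dev; /* the real code would drop the SRCU read lock here */
}

/* Model of the begin_cpu_access hook: only report success while the
 * device is alive, so nothing ever touches freed DMA pages. */
static int toy_begin_cpu_access(struct toy_drm_device *dev)
{
    if (!toy_dev_enter(dev))
        return -19; /* -ENODEV */
    return 0;
}

/* Model of the end_cpu_access hook: pairs with the enter above. */
static void toy_end_cpu_access(struct toy_drm_device *dev)
{
    toy_dev_exit(dev);
}
```

The begin/end pairing is what makes this workable at all: both calls happen in the same context, so the SRCU section underneath drm_dev_enter/exit can span the whole CPU access.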

Andrey


>
> But the good news is that Thomas Zimmermann is working on this problem
> already for different reasons, so it might be that we won't have any
> long-lived kernel vmap anymore. And we could put the drm_dev_enter/exit in
> there.
>
>>>> Oh very very good point! I haven't thought about DMA-buf mmaps in this
>>>> context yet.
>>>>
>>>>
>>>>> btw the other issue with dma-buf (and even worse with dma_fence) is
>>>>> refcounting of the underlying drm_device. I'd expect that all your
>>>>> callbacks go boom if the dma_buf outlives your drm_device. That part isn't
>>>>> yet solved in your series here.
>>>> Well, thinking more about this, it seems to be another really good argument
>>>> why mapping pages from DMA-bufs into application address space directly is a
>>>> very bad idea :)
>>>>
>>>> But yes, we essentially can't remove the device as long as there is a
>>>> DMA-buf with mappings. No idea how to clean that one up.
>>> drm_dev_get/put in drm_prime helpers should get us like 90% there I think.
>>
>> What are the other 10% ?
> dma_fence, which is also about 90% of the work, probably. But I'm
> guesstimating only 10% of the oopses you can hit, since generally the
> dma_fences for a buffer don't outlive the underlying buffer. So usually no
> problems happen once we've solved the dma-buf sharing, but a dma_fence
> can outlive the dma-buf, so there are still possibilities of crashing.
>
>>> The even more worrying thing is random dma_fence attached to the dma_resv
>>> object. We could try to clean all of ours up, but they could have escaped
>>> already into some other driver. And since we're talking about egpu
>>> hotunplug, dma_fence escaping to the igpu is a pretty reasonable use-case.
>>>
>>> I have no idea how to fix that one :-/
>>> -Daniel
>>
>> I assume you are referring to the sync_file_create/sync_file_get_fence API for
>> dma_fence export/import?
> So dma_fence is a general issue, there's a pile of interfaces that result
> in sharing with other drivers:
> - dma_resv in the dma_buf
> - sync_file
> - drm_syncobj (but I think that's not yet cross-driver, though that will
>    probably change)
>
> In each of these cases drivers can pick up the dma_fence and use it
> internally for all kinds of purposes (could end up in the scheduler or
> wherever).
>
>> So with DMA-bufs we have the drm_gem_object as exporter-specific private
>> data, so we can do drm_dev_get and put at the drm_gem_object layer to bind
>> the device life cycle to that of each GEM object; but we don't have such a
>> mid-layer for dma_fence which could allow us to increment the device
>> reference for each fence out there related to that device - is my
>> understanding correct?
> Yeah that's the annoying part with dma-fence. No existing generic place to
> put the drm_dev_get/put. tbf I'd note this as a todo and try to solve the
> other problems first.
> -Daniel
>
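A toy refcount model of the drm_dev_get/put idea applied to fences - the missing generic hook would take a device reference when a fence is created and drop it on fence release, so final device teardown waits for the last escaped fence. All names here are hypothetical:

```c
#include <assert.h>

/* Toy refcount model of drm_dev_get/put: anything that can outlive
 * the device (a dma-buf, a dma_fence) takes a reference, and the
 * device is only torn down when the last reference drops. */

struct toy_dev {
    int refcount;
    int released;   /* 1 once final teardown has run */
};

static void toy_dev_get(struct toy_dev *dev)
{
    dev->refcount++;
}

static void toy_dev_put(struct toy_dev *dev)
{
    if (--dev->refcount == 0)
        dev->released = 1; /* final teardown would run here */
}

struct toy_fence {
    struct toy_dev *dev;
};

/* The missing generic hook for dma_fence: pin the device at fence
 * creation so driver callbacks stay valid for the fence's lifetime. */
static void toy_fence_init(struct toy_fence *f, struct toy_dev *dev)
{
    toy_dev_get(dev);
    f->dev = dev;
}

static void toy_fence_release(struct toy_fence *f)
{
    toy_dev_put(f->dev);
}
```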
>> Andrey
>>
>>>> Christian.
>>>>
>>>>> -Daniel
>>>>>
>>>>>>> Things like CPU page table updates, ring buffer accesses and FW memcpy?
>>>>>>> Are there other places?
>>>>>> Puh, good question. I have no idea.
>>>>>>
>>>>>>> Another point is that at this stage the driver shouldn't access any such
>>>>>>> buffers anyway, as we are in the process of finalizing the device.
>>>>>>> AFAIK there is no page fault mechanism for kernel mappings, so I don't
>>>>>>> think there is anything else to do?
>>>>>> Well there is a page fault handler for kernel mappings, but that one just
>>>>>> prints the stack trace into the system log and calls BUG(); :)
>>>>>>
>>>>>> Long story short we need to avoid any access to released pages after unplug.
>>>>>> No matter if it's from the kernel or userspace.
>>>>>>
>>>>>> Regards,
>>>>>> Christian.
>>>>>>
>>>>>>> Andrey