AMD-GFX Archive on lore.kernel.org
From: "Christian König" <ckoenig.leichtzumerken@gmail.com>
To: Felix Kuehling <felix.kuehling@amd.com>, Daniel Vetter <daniel@ffwll.ch>
Cc: Alex Sierra <alex.sierra@amd.com>,
	"Yang, Philip" <philip.yang@amd.com>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	dri-devel <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD
Date: Thu, 14 Jan 2021 13:19:16 +0100
Message-ID: <fbd26c26-b84a-f9c6-95fd-3afd79c0bd47@gmail.com>
In-Reply-To: <b9db6b8c-3979-d499-d276-77b9e9a2ab6a@amd.com>

Am 14.01.21 um 06:34 schrieb Felix Kuehling:
> Am 2021-01-11 um 11:29 a.m. schrieb Daniel Vetter:
>> On Fri, Jan 08, 2021 at 12:56:24PM -0500, Felix Kuehling wrote:
>>> Am 2021-01-08 um 11:53 a.m. schrieb Daniel Vetter:
>>>> On Fri, Jan 8, 2021 at 5:36 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>>>>> Am 2021-01-08 um 11:06 a.m. schrieb Daniel Vetter:
>>>>>> On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling <felix.kuehling@amd.com> wrote:
>>>>>>> Am 2021-01-08 um 9:40 a.m. schrieb Daniel Vetter:
>>>>>>>> On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
>>>>>>>>> Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
>>>>>>>>>> On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
>>>>>>>>>>> This is the first version of our HMM based shared virtual memory manager
>>>>>>>>>>> for KFD. There are still a number of known issues that we're working through
>>>>>>>>>>> (see below). This will likely lead to some pretty significant changes in
>>>>>>>>>>> MMU notifier handling and locking on the migration code paths. So don't
>>>>>>>>>>> get hung up on those details yet.
>>>>>>>>>>>
>>>>>>>>>>> But I think this is a good time to start getting feedback. We're pretty
>>>>>>>>>>> confident about the ioctl API, which is both simple and extensible for the
>>>>>>>>>>> future. (see patches 4,16) The user mode side of the API can be found here:
>>>>>>>>>>> https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
>>>>>>>>>>>
>>>>>>>>>>> I'd also like another pair of eyes on how we're interfacing with the GPU VM
>>>>>>>>>>> code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
>>>>>>>>>>> and some retry IRQ handling changes (32).
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Known issues:
>>>>>>>>>>> * won't work with IOMMU enabled, we need to dma_map all pages properly
>>>>>>>>>>> * still working on some race conditions and random bugs
>>>>>>>>>>> * performance is not great yet
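
For the IOMMU item above: with the IOMMU enabled the driver can no longer hand
raw page frame numbers to the GPU; every system page resolved through
hmm_range_fault() has to be mapped through the DMA API first. A minimal sketch
of that step, assuming a populated struct hmm_range and using made-up function
and parameter names rather than the actual amdgpu helpers:

    #include <linux/hmm.h>
    #include <linux/dma-mapping.h>

    /* Sketch only: map HMM-resolved system pages for device access. */
    static int svm_dma_map_pages(struct device *dev, struct hmm_range *range,
                                 dma_addr_t *dma_addrs, unsigned long npages)
    {
            unsigned long i;

            for (i = 0; i < npages; i++) {
                    struct page *page = hmm_pfn_to_page(range->hmm_pfns[i]);

                    dma_addrs[i] = dma_map_page(dev, page, 0, PAGE_SIZE,
                                                DMA_BIDIRECTIONAL);
                    if (dma_mapping_error(dev, dma_addrs[i]))
                            return -EFAULT; /* caller unmaps what was mapped */
            }
            return 0;
    }

The GPU page tables would then be filled from dma_addrs[] instead of the raw
PFNs, and the mappings torn down again when the range is invalidated.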
>>>>>>>>>> Still catching up, but I think there's another one for your list:
>>>>>>>>>>
>>>>>>>>>>   * hmm gpu context preempt vs page fault handling. I've had a short
>>>>>>>>>>     discussion about this one with Christian before the holidays, and also
>>>>>>>>>>     some private chats with Jerome. It's nasty since there's no easy fix,
>>>>>>>>>>     much less a good idea of what the best approach is here.
>>>>>>>>> Do you have a pointer to that discussion or any more details?
>>>>>>>> Essentially if you're handling an hmm page fault from the gpu, you can
>>>>>>>> deadlock by calling dma_fence_wait on (possibly a chain of) other command
>>>>>>>> submissions or compute contexts. That deadlocks if you can't preempt
>>>>>>>> while you have that page fault pending. Two solutions:
>>>>>>>>
>>>>>>>> - your hw can (at least for compute ctx) preempt even when a page fault is
>>>>>>>>    pending
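
To make the deadlock scenario concrete: the fault handling path may need to
wait on fences from other submissions, while those submissions cannot make
progress because the faulting context still occupies the engine. A rough,
purely illustrative pseudo-C sketch (struct gpu_fault and the helpers are made
up, not real amdgpu/KFD code):

    #include <linux/dma-fence.h>

    /* Hypothetical types and helpers for illustration only. */
    struct gpu_fault;
    extern struct dma_fence *evict_busy_bo_for_space(void);
    extern int resolve_fault_and_resume(struct gpu_fault *fault);

    /* GPU page fault handling while the faulting context still sits on HW. */
    static int handle_gpu_page_fault(struct gpu_fault *fault)
    {
            /* Servicing the fault may require evicting or migrating a BO... */
            struct dma_fence *f = evict_busy_bo_for_space();

            /* ...which means waiting for the jobs still using that BO. */
            dma_fence_wait(f, false);

            /*
             * If one of those jobs is queued behind the faulting context on
             * the same engine, and the hardware cannot preempt a context with
             * a fault pending, this wait never returns: deadlock.
             */
            return resolve_fault_and_resume(fault);
    }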
>>>>>>> Our GFXv9 GPUs can do this. GFXv10 cannot.
>>>>>> Uh, why did your hw guys drop this :-/
>>> Performance. It's the same reason why the XNACK mode selection API
>>> exists (patch 16). When we enable recoverable page fault handling in the
>>> compute units on GFXv9, it costs some performance even when no page
>>> faults are happening. On GFXv10 that retry fault handling moved out of
>>> the compute units, so they don't take the performance hit. But that
>>> sacrificed the ability to preempt during page faults. We'll need to work
>>> with our hardware teams to restore that capability in a future generation.
>> Ah yes, you need to stall at more points in the compute cores to make sure
>> you can recover if the page fault gets interrupted.
>>
>> Maybe my knowledge is outdated, but my understanding is that nvidia can
>> also preempt (but only for compute jobs, since oh dear the pain this would
>> be for all the fixed function stuff). Since gfx10 moved page fault
>> handling further away from compute cores, do you know whether this now
>> means you can do page faults for (some?) fixed function stuff too? Or
>> still only for compute?
> I'm not sure.
>
>
>> Supporting page fault for 3d would be real pain with the corner we're
>> stuck in right now, but better we know about this early than later :-/
> I know Christian hates the idea.

Well I don't hate the idea. I just don't think that this will ever work 
correctly and with good performance.

A big part of the additional fun is that we currently have a mix of 
HMM-capable engines (3D, compute, DMA) and non-HMM-capable engines 
(display, multimedia, etc.).

> We know that page faults on GPUs can be
> a huge performance drain because you're stalling potentially so many
> threads and the CPU can become a bottleneck dealing with all the page
> faults from many GPU threads. On the compute side, applications will be
> optimized to avoid them as much as possible, e.g. by pre-faulting or
> pre-fetching data before it's needed.
>
> But I think you need page faults to make overcommitted memory with user
> mode command submission not suck.

Yeah, completely agree.

The only short-term alternative I see is to have an IOCTL telling the 
kernel which memory is currently in use. And that is complete nonsense 
because it defeats the very advantage that makes us want user mode 
command submission in the first place.

Regards,
Christian.

>>>>>> I do think it can be rescued with what I call gang scheduling of
>>>>>> engines: I.e. when a given engine is running a context (or a group of
>>>>>> engines, depending how your hw works) that can cause a page fault, you
>>>>>> must flush out all workloads running on the same engine which could
>>>>>> block a dma_fence (preempt them, or for non-compute stuff, force their
>>>>>> completion). And the other way round, i.e. before you can run a legacy
>>>>>> gl workload with a dma_fence on these engines you need to preempt all
>>>>>> ctxs that could cause page faults and take them at least out of the hw
>>>>>> scheduler queue.
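
A rough sketch of what this gang scheduling could look like at submission
time, with all types and helper names hypothetical:

    #include <linux/types.h>

    /* Hypothetical types and helpers, for illustration only. */
    struct engine;
    struct job;
    extern bool job_can_page_fault(struct job *job);
    extern void engine_drain_fence_jobs(struct engine *e);      /* preempt or force completion */
    extern void engine_preempt_faulting_ctxs(struct engine *e);
    extern int  engine_submit(struct engine *e, struct job *job);

    /* Never let fault-capable and dma_fence-based work share an engine. */
    static int engine_run_job(struct engine *e, struct job *job)
    {
            if (job_can_page_fault(job)) {
                    /* Flush out everything a dma_fence could still wait on. */
                    engine_drain_fence_jobs(e);
            } else {
                    /* Legacy dma_fence job: nothing on this engine may be
                     * able to stall on a page fault underneath it. */
                    engine_preempt_faulting_ctxs(e);
            }
            return engine_submit(e, job);
    }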
>>>>> Yuck! But yeah, that would work. A less invasive alternative would be to
>>>>> reserve some compute units for graphics contexts so we can guarantee
>>>>> forward progress for graphics contexts even when all CUs working on
>>>>> compute stuff are stuck on page faults.
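
The reservation could be expressed as a CU mask that fault-capable compute
queues are restricted to, leaving a few CUs that only graphics contexts may
use. A purely illustrative sketch, with invented numbers and names rather than
the existing CU-mask interface:

    #include <linux/bits.h>
    #include <linux/types.h>

    #define TOTAL_CUS               60      /* example ASIC */
    #define CUS_RESERVED_FOR_GFX     4      /* kept free for graphics */

    /* CU mask applied to fault-capable compute queues: the top CUs are
     * carved out so graphics always has somewhere to make forward progress. */
    static u64 compute_queue_cu_mask(void)
    {
            u64 mask = GENMASK_ULL(TOTAL_CUS - 1, 0);

            mask &= ~GENMASK_ULL(TOTAL_CUS - 1,
                                 TOTAL_CUS - CUS_RESERVED_FOR_GFX);
            return mask;
    }

The cost is obviously proportional to CUS_RESERVED_FOR_GFX, which is exactly
the trade-off discussed below.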
>>>> Won't this hurt compute workloads? I think we need something where at
>>>> least pure compute or pure gl/vk workloads run at full performance.
>>>> And without preempt we can't take anything back when we need it, so
>>>> would have to always upfront reserve some cores just in case.
>>> Yes, it would hurt proportionally to how many CUs get reserved. On big
>>> GPUs with many CUs the impact could be quite small.
>> Also, we could do the reservation only for the time when there's actually
>> a legacy context with normal dma_fence in the scheduler queue. Assuming
>> that reserving/unreserving CUs isn't too expensive an operation. If it's
>> as expensive as a full stall, it's probably not worth the complexity here;
>> just go with a full stall and only run one or the other at a time.
>>
>> Wrt desktops I'm also somewhat worried that we might end up killing
>> desktop workloads if there's not enough CUs reserved for these and they
>> end up taking too long and anger either TDR or, worse, the user because the
>> desktop is unusable when you start a compute job and get a big pile of
>> faults. Probably needs some testing to see how bad it is.
>>
>>> That said, I'm not sure it'll work on our hardware. Our CUs can execute
>>> multiple wavefronts from different contexts and switch between them with
>>> fine granularity. I'd need to check with our HW engineers whether this
>>> CU-internal context switching is still possible during page faults on
>>> GFXv10.
>> You'd need to do the reservation for all contexts/engines which can cause
>> page faults, otherwise it'd leak.
> All engines that can page fault and cannot be preempted during faults.
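
Expressed as a predicate over engine capabilities, that amounts to roughly the
following (struct and field names invented for illustration):

    #include <linux/types.h>

    /* Illustrative: engines needing the gang-scheduling / CU-reservation
     * treatment are the ones that can fault but cannot be preempted while
     * a fault is outstanding. */
    struct engine_caps {
            bool can_page_fault;
            bool can_preempt_during_fault;
    };

    static bool engine_needs_reservation(const struct engine_caps *caps)
    {
            return caps->can_page_fault && !caps->can_preempt_during_fault;
    }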
>
> Regards,
>    Felix
>


Thread overview: 84+ messages
2021-01-07  3:00 [PATCH 00/35] Add HMM-based SVM memory manager to KFD Felix Kuehling
2021-01-07  3:00 ` [PATCH 01/35] drm/amdkfd: select kernel DEVICE_PRIVATE option Felix Kuehling
2021-01-07  3:00 ` [PATCH 02/35] drm/amdgpu: replace per_device_list by array Felix Kuehling
2021-01-07  3:00 ` [PATCH 03/35] drm/amdkfd: helper to convert gpu id and idx Felix Kuehling
2021-01-07  3:00 ` [PATCH 04/35] drm/amdkfd: add svm ioctl API Felix Kuehling
2021-01-07  3:00 ` [PATCH 05/35] drm/amdkfd: Add SVM API support capability bits Felix Kuehling
2021-01-07  3:00 ` [PATCH 06/35] drm/amdkfd: register svm range Felix Kuehling
2021-01-07  3:00 ` [PATCH 07/35] drm/amdkfd: add svm ioctl GET_ATTR op Felix Kuehling
2021-01-07  3:01 ` [PATCH 08/35] drm/amdgpu: add common HMM get pages function Felix Kuehling
2021-01-07 10:53   ` Christian König
2021-01-07  3:01 ` [PATCH 09/35] drm/amdkfd: validate svm range system memory Felix Kuehling
2021-01-07  3:01 ` [PATCH 10/35] drm/amdkfd: register overlap system memory range Felix Kuehling
2021-01-07  3:01 ` [PATCH 11/35] drm/amdkfd: deregister svm range Felix Kuehling
2021-01-07  3:01 ` [PATCH 12/35] drm/amdgpu: export vm update mapping interface Felix Kuehling
2021-01-07 10:54   ` Christian König
2021-01-07  3:01 ` [PATCH 13/35] drm/amdkfd: map svm range to GPUs Felix Kuehling
2021-01-07  3:01 ` [PATCH 14/35] drm/amdkfd: svm range eviction and restore Felix Kuehling
2021-01-07  3:01 ` [PATCH 15/35] drm/amdkfd: add xnack enabled flag to kfd_process Felix Kuehling
2021-01-07  3:01 ` [PATCH 16/35] drm/amdkfd: add ioctl to configure and query xnack retries Felix Kuehling
2021-01-07  3:01 ` [PATCH 17/35] drm/amdkfd: register HMM device private zone Felix Kuehling
2021-03-01  8:32   ` Daniel Vetter
2021-03-01  8:46     ` Thomas Hellström (Intel)
2021-03-01  8:58       ` Daniel Vetter
2021-03-01  9:30         ` Thomas Hellström (Intel)
2021-03-04 17:58       ` Felix Kuehling
2021-03-11 12:24         ` Thomas Hellström (Intel)
2021-01-07  3:01 ` [PATCH 18/35] drm/amdkfd: validate vram svm range from TTM Felix Kuehling
2021-01-07  3:01 ` [PATCH 19/35] drm/amdkfd: support xgmi same hive mapping Felix Kuehling
2021-01-07  3:01 ` [PATCH 20/35] drm/amdkfd: copy memory through gart table Felix Kuehling
2021-01-07  3:01 ` [PATCH 21/35] drm/amdkfd: HMM migrate ram to vram Felix Kuehling
2021-01-07  3:01 ` [PATCH 22/35] drm/amdkfd: HMM migrate vram to ram Felix Kuehling
2021-01-07  3:01 ` [PATCH 23/35] drm/amdkfd: invalidate tables on page retry fault Felix Kuehling
2021-01-07  3:01 ` [PATCH 24/35] drm/amdkfd: page table restore through svm API Felix Kuehling
2021-01-07  3:01 ` [PATCH 25/35] drm/amdkfd: SVM API call to restore page tables Felix Kuehling
2021-01-07  3:01 ` [PATCH 26/35] drm/amdkfd: add svm_bo reference for eviction fence Felix Kuehling
2021-01-07  3:01 ` [PATCH 27/35] drm/amdgpu: add param bit flag to create SVM BOs Felix Kuehling
2021-01-07  3:01 ` [PATCH 28/35] drm/amdkfd: add svm_bo eviction mechanism support Felix Kuehling
2021-01-07  3:01 ` [PATCH 29/35] drm/amdgpu: svm bo enable_signal call condition Felix Kuehling
2021-01-07 10:56   ` Christian König
2021-01-07 16:16     ` Felix Kuehling
2021-01-07 16:28       ` Christian König
2021-01-07 16:53         ` Felix Kuehling
2021-01-07  3:01 ` [PATCH 30/35] drm/amdgpu: add svm_bo eviction to enable_signal cb Felix Kuehling
2021-01-07  3:01 ` [PATCH 31/35] drm/amdgpu: reserve fence slot to update page table Felix Kuehling
2021-01-07 10:57   ` Christian König
2021-01-07  3:01 ` [PATCH 32/35] drm/amdgpu: enable retry fault wptr overflow Felix Kuehling
2021-01-07 11:01   ` Christian König
2021-01-07  3:01 ` [PATCH 33/35] drm/amdkfd: refine migration policy with xnack on Felix Kuehling
2021-01-07  3:01 ` [PATCH 34/35] drm/amdkfd: add svm range validate timestamp Felix Kuehling
2021-01-07  3:01 ` [PATCH 35/35] drm/amdkfd: multiple gpu migrate vram to vram Felix Kuehling
2021-01-07  9:23 ` [PATCH 00/35] Add HMM-based SVM memory manager to KFD Daniel Vetter
2021-01-07 16:25   ` Felix Kuehling
2021-01-08 14:40     ` Daniel Vetter
2021-01-08 14:45       ` Christian König
2021-01-08 15:58       ` Felix Kuehling
2021-01-08 16:06         ` Daniel Vetter
2021-01-08 16:36           ` Felix Kuehling
2021-01-08 16:53             ` Daniel Vetter
2021-01-08 17:56               ` Felix Kuehling
2021-01-11 16:29                 ` Daniel Vetter
2021-01-14  5:34                   ` Felix Kuehling
2021-01-14 12:19                     ` Christian König [this message]
2021-01-13 16:56       ` Jerome Glisse
2021-01-13 20:31         ` Daniel Vetter
2021-01-14  3:27           ` Jerome Glisse
2021-01-14  9:26             ` Daniel Vetter
2021-01-14 10:39               ` Daniel Vetter
2021-01-14 10:49         ` Christian König
2021-01-14 11:52           ` Daniel Vetter
2021-01-14 13:37             ` HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD) Christian König
2021-01-14 13:57               ` Daniel Vetter
2021-01-14 14:13                 ` Christian König
2021-01-14 14:23                   ` Daniel Vetter
2021-01-14 15:08                     ` Christian König
2021-01-14 15:40                       ` Daniel Vetter
2021-01-14 16:01                         ` Christian König
2021-01-14 16:36                           ` Daniel Vetter
2021-01-14 19:08                             ` Christian König
2021-01-14 20:09                               ` Daniel Vetter
2021-01-14 16:51               ` Jerome Glisse
2021-01-14 21:13                 ` Felix Kuehling
2021-01-15  7:47                   ` Christian König
2021-01-13 16:47 ` [PATCH 00/35] Add HMM-based SVM memory manager to KFD Jerome Glisse
2021-01-14  0:06   ` Felix Kuehling
