From: Matthew Brost <matthew.brost@intel.com>
To: "Zeng, Oak" <oak.zeng@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
"Hellstrom, Thomas" <thomas.hellstrom@intel.com>,
"Welty, Brian" <brian.welty@intel.com>,
"Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>,
"De Marchi, Lucas" <lucas.demarchi@intel.com>
Subject: Re: [PATCH 4/4] drm/xe: destroy userptr vma on UNMAP event
Date: Thu, 21 Mar 2024 17:01:50 +0000 [thread overview]
Message-ID: <Zfxn/o1Qugq1hG6C@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <SA1PR11MB699154B66898A29896FFB5D192322@SA1PR11MB6991.namprd11.prod.outlook.com>
On Thu, Mar 21, 2024 at 08:56:15AM -0600, Zeng, Oak wrote:
> Hi Matt,
>
> > -----Original Message-----
> > From: Brost, Matthew <matthew.brost@intel.com>
> > Sent: Wednesday, March 20, 2024 11:22 PM
> > To: Zeng, Oak <oak.zeng@intel.com>
> > Cc: intel-xe@lists.freedesktop.org; Hellstrom, Thomas
> > <thomas.hellstrom@intel.com>; Welty, Brian <brian.welty@intel.com>; Ghimiray,
> > Himal Prasad <himal.prasad.ghimiray@intel.com>; De Marchi, Lucas
> > <lucas.demarchi@intel.com>
> > Subject: Re: [PATCH 4/4] drm/xe: destroy userptr vma on UNMAP event
> >
> > On Wed, Mar 20, 2024 at 10:29:39PM -0400, Oak Zeng wrote:
> > > When there is MMU_NOTIFY_UNMAP event happens, the userptr
> > > is munmapped from CPU. There is no need to keep the xe_vma
> > > for this userptr from GPU side. So we destroy it.
> > >
> > > But we can't destroy vma directly from the mmu notifier
> > > callback function, because we need to remove mmu
> > > notifier during vma destroy. If we remove mmu notifier
> > > directly from mmu notifier callback, it is a deadlock.
> > > xe_vma_destroy is modified to destroy vma in a worker
> > > thread.
> > >
> > > Another reason of this change is, for the future
> > > hmmptr codes, we destroy vma when hmmptr is unmapped
> > > from CPU. We want to unify the hmmptr and userptr
> > > code.
> > >
> > > I believe this is also the correct behavior for userptr.
> >
It is not; the user is still responsible for unmapping.
>
> Do you see a problem of the scheme I described? In that scheme, if user vm_unbind after userptr is freed/munmapped, we can return user an error "userptr is already freed". Is this a problem to you?
>
We did this until [1] was merged - the next userptr pin pages call would
return -EFAULT. It turns out the compute apps call free / munmap on userptrs
with bindings still live all the time for performance reasons. Thus with [1]
we now just invalidate the userptr if we can't pin the pages.
[1] https://patchwork.freedesktop.org/series/130935/
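For reference, the post-[1] pin path looks roughly like this (a sketch from
memory, not the exact upstream code - names and structure may differ):

```c
/* Sketch only: userptr repin loop after [1], names approximate. */
static void userptr_repin(struct xe_vm *vm)
{
	struct xe_vma *vma;

	list_for_each_entry(vma, &vm->userptr.repin_list,
			    userptr.repin_link) {
		int err = xe_vma_userptr_pin_pages(vma);

		if (err == -EFAULT) {
			/*
			 * Pages are gone (e.g. the app called free / munmap
			 * with the binding still live). Don't error out the
			 * VM - just invalidate the GPU mapping so nothing
			 * touches stale pages.
			 */
			xe_vm_invalidate_vma(vma);
		} else if (err) {
			/* real error handling elided */
			...
		}
	}
}
```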
> My thinking is, the xe_vma state for userptr depends on a valid userptr. When user free/munmap userptr, the userptr is gone, so the xe_vma should also be gone. Isn't this a general valid design principle? Xekmd itself seems apply this principle a lot. For example, when we destroy a vm, all the exec queue of this vm is destroyed, not requiring user explicitly destroy queues.
>
Not a wrong way of thinking about it, but see above.
>
> Even if this was
> > the correct behavior we'd have to unmap from the GPU and update internal
> > PT state before calling xe_vma_destroy.
>
> In vma_userptr_invalidate, we called xe_vm_invalidate_vma to zap PT state. But currently it is done only for fault mode. If we use my scheme, this should be done for both fault and non-fault mode during a MMU_NOTIFY_UNMAP event.
>
I'm unsure where that is done in this patch - xe_vma_destroy frees the KMD
state / memory for the xe_vma. It does not remove / update the GPUVM state
or the PT state, nor does it interact with the GPU to update the page
tables. We only call xe_vm_invalidate_vma in the notifier for fault mode
because in the other modes (dma-fence / preempt fence) it is called
somewhere else (the exec IOCTL / preempt rebind worker, respectively). The
dma-resv slots in these modes ensure that after vma_userptr_invalidate
returns the GPU will not be touching the userptr. GPU execution is resumed
in the exec IOCTL or preempt rebind worker only after the userptr is either
rebound or invalidated.
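Roughly, the split in the notifier looks like this (a simplified sketch, not
the literal xe_vm.c code - the wait call and arguments are approximate):

```c
static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
				   const struct mmu_notifier_range *range,
				   unsigned long cur_seq)
{
	...
	/* Wait on the dma-resv slots: after this the GPU is not touching
	 * the userptr in the dma-fence / preempt-fence modes. */
	dma_resv_wait_timeout(xe_vm_resv(vm), ...);

	if (xe_vm_in_fault_mode(vm))
		/* Fault mode: zap the PTEs here; the next GPU access
		 * faults and re-pins. */
		xe_vm_invalidate_vma(vma);
	/* Other modes: just mark the userptr invalid; the exec IOCTL /
	 * preempt rebind worker rebinds or invalidates before resuming
	 * GPU execution. */
	...
}
```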
Make sense?
>
> >
> > The correct behavior in this case is to just let the userptr page pin fail
> > and invalidate the GPU mappings. This recently got merged in [1].
>
> This also works, but the design is not as clean as my scheme IMO.
>
The UMDs have requested the existing behavior.
> For hmmptr, there is no vm_bind (except the initial gigantic bind which doesn't count). It is reasonable to destroy vma when user free/munmap, otherwise we left over a lot of zombie vmas. So at least for hmmptr we should apply my scheme. Whether we should unify the scheme for both userptr and hmmptr, I think it is still debatable.
>
In the HMMPTR case, this concept is correct. Hence I implemented a garbage
collector in my PoC [2]. The implementation here, however, is not. See my
comments above about what xe_vma_destroy does. Rather than calling that
function, the VMA should be added to a list which a worker picks up; the
worker then properly modifies all the state (GPUVM, internal PT state, and
GPU page tables) needed to destroy the VMA. The garbage collector can be
thought of as a mini-IOCTL that only implements DRM_XE_VM_BIND_OP_UNMAP for
the VMA. Hence in my PoC it calls the same functions the bind IOCTL calls,
albeit with slightly different arguments.
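i.e. something like this (sketch only - the list / lock / work names here are
made up for illustration, check the PoC for the real code):

```c
/* Notifier side: never destroy in place, just defer to the collector. */
static bool vma_userptr_invalidate(...)
{
	...
	if (range->event == MMU_NOTIFY_UNMAP) {
		spin_lock(&vm->garbage.lock);
		list_add_tail(&vma->garbage_link, &vm->garbage.list);
		spin_unlock(&vm->garbage.lock);
		queue_work(system_unbound_wq, &vm->garbage.work);
	}
	...
}

/* Worker side: effectively a mini-IOCTL doing DRM_XE_VM_BIND_OP_UNMAP per
 * VMA - update GPUVM state, PT state, and the GPU page tables, then free
 * the xe_vma. */
static void vm_garbage_collector(struct work_struct *w)
{
	...
	list_for_each_entry_safe(vma, next, &garbage, garbage_link)
		/* same unmap path the bind IOCTL takes */
		xe_vm_unbind_vma(vma);
	...
}
```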
Again, does this make sense?
Matt
[2] https://gitlab.freedesktop.org/mbrost/xe-kernel-driver-system-allocator/-/commit/91cf8b38428ae9180c92435d3519193d074e837a
> Oak
>
> >
> > Have a pending test for this in [2].
> >
> > Matt
> >
> > [1] https://patchwork.freedesktop.org/series/130935/
> > [2] https://patchwork.freedesktop.org/series/131211/
> >
> > > This patch is experimental for CI and open to discuss
> > >
> > > Signed-off-by: Oak Zeng <oak.zeng@intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_vm.c | 8 +++++++-
> > > 1 file changed, 7 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 11a4bb9d5415..90d1163c1090 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -40,6 +40,8 @@
> > > #include "xe_wa.h"
> > > #include "xe_hmm.h"
> > >
> > > +static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence);
> > > +
> > > static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> > > {
> > > return vm->gpuvm.r_obj;
> > > @@ -604,6 +606,9 @@ static bool vma_userptr_invalidate(struct
> > mmu_interval_notifier *mni,
> > >
> > > trace_xe_vma_userptr_invalidate_complete(vma);
> > >
> > > + if (range->event == MMU_NOTIFY_UNMAP)
> > > + xe_vma_destroy(vma, NULL);
> > > +
> > > return true;
> > > }
> > >
> > > @@ -901,7 +906,8 @@ static void xe_vma_destroy(struct xe_vma *vma, struct
> > dma_fence *fence)
> > > xe_vma_destroy_late(vma);
> > > }
> > > } else {
> > > - xe_vma_destroy_late(vma);
> > > + INIT_WORK(&vma->destroy_work, vma_destroy_work_func);
> > > + queue_work(system_unbound_wq, &vma->destroy_work);
> > > }
> > > }
> > >
> > > --
> > > 2.26.3
> > >