From: Matthew Brost <matthew.brost@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
<dri-devel@lists.freedesktop.org>, <airlied@gmail.com>,
<christian.koenig@amd.com>, <matthew.auld@intel.com>,
<daniel@ffwll.ch>
Subject: Re: [RFC PATCH 05/28] drm/gpusvm: Add support for GPU Shared Virtual Memory
Date: Fri, 30 Aug 2024 13:47:03 +0000
Message-ID: <ZtHNV6z8B7g4nbPp@DUT025-TGLU.fm.intel.com>
In-Reply-To: <666768946f0d7dbf9373ab6369a4b0c7eefd8e2d.camel@linux.intel.com>
On Fri, Aug 30, 2024 at 11:57:33AM +0200, Thomas Hellström wrote:
> Hi, Matthew,
>
> Agreed the below might not be important just now, but some ideas:
>
> On Thu, 2024-08-29 at 20:56 +0000, Matthew Brost wrote:
> > Issues with removing a SVM range:
> >
> > - Xe bind code stores invalidation / present state in the VMA; this
> >   would need to be moved to the radix tree. I have a Jira open for
> >   that work, which I believe other developers are going to own.
>
> Yeah, although we shouldn't *design* around xe bind-code and page-table
> code shortcomings.
>
I'm thinking this one should certainly be fixed sooner rather than
later, which would be helpful.
But let's also consider the case where we get a bunch of individual page
invalidates serially for an entire range (I can't remember when this
happens but I have seen it in my testing; I will look into this more to
figure out exactly when). If we invalidate 1 page at a time in the radix
tree, each invalidation could potentially result in a TLB invalidation
interaction with the hardware in cases where larger GPU pages are not
being used. A TLB invalidation is going to be vastly slower than any CPU
operation (e.g. RB search, radix tree walk). If we key on a range and
invalidate the entire range once, on the first invalidation, this may
end up being significantly faster.
The above is pure speculation though, as is a lot of what both of us
are saying... which is another reason I'd like to get apps running and
do profiling. It would be nice to make design decisions based on data,
not speculation.
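To make the batching idea concrete, here is a minimal sketch (all names
are made up, this is not actual Xe code) of keying the TLB invalidation
on the range so that a serial storm of per-page invalidates pays for
the hardware interaction only once:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stub for the (slow) hardware TLB invalidation. */
static void gpu_tlb_invalidate(uint64_t start, uint64_t end) { }

struct svm_range {
	uint64_t start, end;	/* VA span covered by this range */
	bool tlb_invalidated;	/* set once per invalidation epoch */
};

static void svm_range_invalidate_page(struct svm_range *range,
				      uint64_t addr)
{
	/* ...zap the per-page state for @addr in the radix tree... */

	/*
	 * Only the first per-page invalidate issues the expensive
	 * hardware TLB invalidation; the rest short-circuit here.
	 */
	if (!range->tlb_invalidated) {
		gpu_tlb_invalidate(range->start, range->end);
		range->tlb_invalidated = true;
	}
}

The flag would of course have to be cleared on rebind, and the real
thing needs locking, but it shows where a per-range key saves the
hardware round-trips.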
>
> > - Where would the dma mapping / device pages be stored?
> >   - In the radix tree? What if ATS is enabled? We don't have a
> >     driver-owned radix tree. How do we reasonably connect a
> >     driver-owned radix tree to a common GPUSVM layer?
>
> With ATS you mean IOMMU SVA, right? I think we could assume that any
> user of this code also has a gpu page-table since otherwise they
> couldn't be using VRAM and a simpler solution would be in place.
>
Fair point.
> But to that specific question, drm_gpusvm state would live in a
> drm_gpusvm radix tree and driver-specific stuff in the driver tree. A
> helper based approach would then call drm_gpusvm_unmap_dma(range),
> whereas a middle layer would just traverse the tree and unmap.
>
Let me consider this. Open to all options.
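For reference, a minimal sketch of the helper-based split as I
understand it (everything here is hypothetical, including the
drm_gpusvm_range layout): the common layer owns the dma-mapping state
per range, the driver owns its own tree and calls the helper when
invalidating, whereas a midlayer would do the traversal itself.

#include <stddef.h>
#include <stdint.h>

struct drm_gpusvm_range {
	uint64_t start, end;
	uint64_t *dma_addrs;		/* common-layer dma state */
	size_t npages;
	struct drm_gpusvm_range *next;	/* stand-in for the driver tree */
};

/* Common-layer helper: unmap and drop the dma state for one range. */
static void drm_gpusvm_unmap_dma(struct drm_gpusvm_range *range)
{
	/* a dma_unmap_page() loop would go here in a real driver */
	range->dma_addrs = NULL;
	range->npages = 0;
}

/*
 * Driver side: walk the driver-owned structure, defer the dma work to
 * the common helper.
 */
static void driver_invalidate(struct drm_gpusvm_range *head,
			      uint64_t start, uint64_t end)
{
	struct drm_gpusvm_range *r;

	for (r = head; r; r = r->next)
		if (r->start < end && r->end > start)
			drm_gpusvm_unmap_dma(r);
}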
> >   - In the notifier? What if the notifier is sparsely populated?
> >     We would be wasting huge amounts of memory. What if the
> >     notifier is configured to span the entire virtual address
> >     space?
>
> Let's assume you use a fake page-table like in xe_pt_walk.c as your
> "radix tree", adapted to the relevant page-sizes; sparsity is then
> not a problem.
>
Ok, makes sense I think.
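Something like the following toy walker is what I picture (purely
illustrative, 9 bits per level like x86 page tables, @addr treated as
a page index rather than a byte address): unpopulated subtrees are
just NULL pointers, so a notifier spanning the whole VA space costs
nothing for the holes.

#include <stdint.h>
#include <stdlib.h>

#define PT_SHIFT	9
#define PT_ENTRIES	(1u << PT_SHIFT)

struct pt_dir {
	void *entries[PT_ENTRIES];	/* child dir or leaf state */
};

/*
 * Descend @levels - 1 directory levels and return the leaf slot for
 * @addr, allocating directories on the way down if @alloc is set.
 */
static void **pt_slot(struct pt_dir **root, uint64_t addr,
		      int levels, int alloc)
{
	struct pt_dir **dirp = root;
	int l;

	for (l = levels - 1; l > 0; l--) {
		unsigned int idx;

		if (!*dirp) {
			if (!alloc || !(*dirp = calloc(1, sizeof(**dirp))))
				return NULL;
		}
		idx = (addr >> (l * PT_SHIFT)) & (PT_ENTRIES - 1);
		dirp = (struct pt_dir **)&(*dirp)->entries[idx];
	}
	return (void **)dirp;
}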
> > - How does the garbage collector work? We can't allocate memory in
> >   the notifier, so we don't have anything to add to the garbage
> >   collector. We can't directly modify page tables given you need a
> >   lock in the path of reclaim.
>
> The garbage collector would operate on the whole invalidated range. In
> the case of xe, upon zapping under reclaim you mark the individual
> page-table bos that are to be removed as "invalid", and the garbage
> collector walks the range removing the "invalid" entries. Subsequent
> (re)binding avoids the "invalid" entries (perhaps even helps remove
> them) and can thus race with the garbage collector. Hence, any ranges
> implied by the page-table code are eliminated.
>
This is pretty much what I came up with too for the case where we
don't have an SVM range.
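i.e., roughly this shape (hypothetical sketch, no locking shown): the
notifier, which runs under reclaim and cannot allocate, only flips a
flag, and the worker later walks the range and reaps the marked
entries.

#include <stdbool.h>
#include <stddef.h>

struct pt_entry {
	bool invalid;		/* set by the notifier under reclaim */
	/* ...page-table bo / dma state... */
};

struct pt_range {
	struct pt_entry *entries;
	size_t nents;
};

/* Notifier / zap path: no allocations, just mark. */
static void zap_range(struct pt_range *r)
{
	size_t i;

	for (i = 0; i < r->nents; i++)
		r->entries[i].invalid = true;
}

/*
 * Garbage-collector path: reap the marked entries. Rebinding skips
 * (or helps free) "invalid" entries, so it can race with this walk.
 */
static void garbage_collect(struct pt_range *r)
{
	size_t i;

	for (i = 0; i < r->nents; i++) {
		if (r->entries[i].invalid) {
			/* free the page-table bo, clear dma state, ... */
			r->entries[i].invalid = false;
		}
	}
}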
> > - How do we deal with fault storms (e.g. tons of faults hitting the
> >   same SVM range in a row)? Without an SVM range there is no easy
> >   way to know if the mapping is valid and the GPU page fault handler
> >   can be short-circuited.
>
> Perhaps look at the page-table tree and check whether the gpu_pte
> causing the fault is valid.
>
Came up with the same thing.
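For the record, the short-circuit I have in mind (sketch only, with
gpu_pte_valid() standing in for a real page-table tree walk):

#include <stdbool.h>
#include <stdint.h>

/* Stand-in for walking the GPU page-table tree for @addr. */
static bool gpu_pte_valid(uint64_t addr)
{
	return false;	/* placeholder */
}

static int handle_gpu_fault(uint64_t addr)
{
	/*
	 * A fault storm queues many handlers for the same span; all
	 * but the first see a valid PTE and return immediately.
	 */
	if (gpu_pte_valid(addr))
		return 0;

	/* ...slow path: migrate pages, dma map, (re)bind... */
	return 0;
}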
> > - Do we have notifier seqno for every PTE?
>
> I'd say no. With this approach it makes sense to have a wide notifier.
> The seqno now only affects binding of new gpu_ptes, so the problem with
> a wide notifier becomes that if invalidation occurs to *any* part of
> the notifier while we're in the read section during binding, we need to
I have avoided this with drm_gpusvm_range_pages_valid(). This isn't
just an optimization, it is actually required for the 2-tile case to be
able to safely know when dma pages can be unmapped (i.e. you can't dma
unmap pages if either tile has a valid mapping).
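In sketch form (hypothetical layout, the real code tracks this per
range in drm_gpusvm): dma pages may only be unmapped once *no* tile
still holds a valid mapping of the range.

#include <stdbool.h>

#define MAX_TILES 2

struct gpusvm_range {
	bool tile_valid[MAX_TILES];	/* per-tile mapping state */
};

static bool range_pages_valid(struct gpusvm_range *r)
{
	int i;

	for (i = 0; i < MAX_TILES; i++)
		if (r->tile_valid[i])
			return true;	/* some tile still maps it */
	return false;
}

static void tile_invalidate(struct gpusvm_range *r, int tile)
{
	r->tile_valid[tile] = false;
	if (!range_pages_valid(r)) {
		/* last valid tile gone: safe to dma unmap the pages */
	}
}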
Matt
> rerun the binding. Adding more notifiers to mitigate that would be to
> optimize faulting performance over core invalidation performance,
> which Jason asked us to avoid.
>
> /Thomas
>
>
>