Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Alistair Popple <apopple@nvidia.com>
Cc: <intel-xe@lists.freedesktop.org>,
	<dri-devel@lists.freedesktop.org>,
	<himal.prasad.ghimiray@intel.com>, <airlied@gmail.com>,
	<thomas.hellstrom@linux.intel.com>, <simona.vetter@ffwll.ch>,
	<felix.kuehling@amd.com>, <dakr@kernel.org>
Subject: Re: [PATCH v7 32/32] drm/doc: gpusvm: Add GPU SVM documentation
Date: Wed, 5 Mar 2025 22:08:39 -0800
Message-ID: <Z8k757kchJi3fWdG@lstrano-desk.jf.intel.com>
In-Reply-To: <4vqsd4n7umeimw4gqwa6c5oeuvrpqxfxquzsaizfzuqcdfd7vs@bed32kv2hjom>

On Thu, Mar 06, 2025 at 04:45:31PM +1100, Alistair Popple wrote:
> On Wed, Mar 05, 2025 at 05:26:57PM -0800, Matthew Brost wrote:
> > Add documentation for agreed upon GPU SVM design principles, current
> > status, and future plans.
> 
> One minor nit and a comment below, but feel free to add:
> 
> Acked-by: Alistair Popple <apopple@nvidia.com>
> 

Thanks!

> > v4:
> >  - Address Thomas's feedback
> > v5:
> >  - s/Current/Baseline (Thomas)
> > v7:
> >  - Add license (CI)
> >  - Add examples for design guideline reasoning (Alistair)
> >  - Add snippet about possible livelock with concurrent GPU and CPU
> >    access (Alistair)
> > 
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > ---
> >  Documentation/gpu/rfc/gpusvm.rst | 106 +++++++++++++++++++++++++++++++
> >  Documentation/gpu/rfc/index.rst  |   4 ++
> >  2 files changed, 110 insertions(+)
> >  create mode 100644 Documentation/gpu/rfc/gpusvm.rst
> > 
> > diff --git a/Documentation/gpu/rfc/gpusvm.rst b/Documentation/gpu/rfc/gpusvm.rst
> > new file mode 100644
> > index 000000000000..87d9f9506155
> > --- /dev/null
> > +++ b/Documentation/gpu/rfc/gpusvm.rst
> > @@ -0,0 +1,106 @@
> > +.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
> > +
> > +===============
> > +GPU SVM Section
> > +===============
> > +
> > +Agreed upon design principles
> > +=============================
> > +
> > +* migrate_to_ram path
> > +	* Rely only on core MM concepts (migration PTEs, page references, and
> > +	  page locking).
> > +	* No driver-specific locks other than locks for hardware interaction
> > +	  in this path. Inventing driver-defined locks to seal core MM races
> > +	  is not required and is generally a bad idea.
> > +	* An example of a driver-specific lock causing issues occurred before
> > +	  fixing do_swap_page to lock the faulting page. A driver-exclusive lock
> > +	  in migrate_to_ram produced a stable livelock if enough threads read
> > +	  the faulting page.
> > +	* Partial migration is supported (i.e., a subset of pages attempting to
> > +	  migrate can actually migrate, with only the faulting page guaranteed
> > +	  to migrate).
> > +	* Driver handles mixed migrations via retry loops rather than locking.
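
For anyone less familiar with this path, below is a minimal sketch of a
migrate_to_ram handler honouring the above rules, loosely modelled on
lib/test_hmm.c rather than on the Xe code. The my_ name is hypothetical
and the hardware copy is reduced to a comment; only core MM primitives
(migration PTEs, page references, page locking) are used, and anything
that fails to migrate is simply left in place to be re-faulted later.

#include <linux/memremap.h>
#include <linux/migrate.h>
#include <linux/mm.h>

static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf)
{
	unsigned long src = 0, dst = 0;
	struct page *dpage;
	struct migrate_vma migrate = {
		.vma		= vmf->vma,
		.start		= vmf->address,
		.end		= vmf->address + PAGE_SIZE,
		.src		= &src,
		.dst		= &dst,
		.pgmap_owner	= vmf->page->pgmap->owner,
		.flags		= MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
		.fault_page	= vmf->page,	/* locked by core MM */
	};

	if (migrate_vma_setup(&migrate))
		return VM_FAULT_SIGBUS;

	/* Partial migration: if the page was not collected, core MM
	 * restores the CPU page table and the fault is retried. */
	if (src & MIGRATE_PFN_MIGRATE) {
		dpage = alloc_page(GFP_HIGHUSER);
		if (!dpage)
			return VM_FAULT_OOM;
		lock_page(dpage);
		dst = migrate_pfn(page_to_pfn(dpage));
		/* Hardware copy VRAM -> dpage goes here; any lock taken
		 * is a hardware lock, not a seal against core MM races. */
	}

	migrate_vma_pages(&migrate);
	migrate_vma_finalize(&migrate);
	return 0;
}
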
> > +* Eviction
> > +	* Eviction is defined as migrating data from the GPU back to the
> > +	  CPU without a virtual address to free up GPU memory.
> > +	* Only look at physical memory data structures and locks, as opposed
> > +	  to virtual memory data structures and locks.
> > +	* Do not look at mm/vma structs or rely on those being locked.
> > +	* The rationale for the above two points is that CPU virtual addresses
> > +	  can change at any moment, while the physical pages remain stable.
> > +	* GPU page table invalidation, which requires a GPU virtual address, is
> > +	  handled via the notifier that has access to the GPU virtual address.
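
As an illustration of the points above, an eviction path can be written
entirely against the pfn-based migrate_device_* API, never touching an
mm or vma. A rough, hypothetical sketch (error handling elided; my_evict
and the contiguous-allocation assumption are mine, the migrate_device_*
calls are the real API):

#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Evict a physically contiguous device allocation to system memory. */
static void my_evict(unsigned long start_pfn, unsigned long npages)
{
	unsigned long *src, *dst;
	unsigned long i;

	src = kvcalloc(npages, sizeof(*src), GFP_KERNEL);
	dst = kvcalloc(npages, sizeof(*dst), GFP_KERNEL);
	if (!src || !dst)
		goto out;

	/* Collect device pages by pfn, installing migration PTEs for
	 * any that happen to be mapped; no CPU virtual address is
	 * needed. GPU page table invalidation is triggered separately
	 * through the mmu notifier. */
	migrate_device_range(src, start_pfn, npages);

	for (i = 0; i < npages; i++) {
		struct page *dpage;

		if (!(src[i] & MIGRATE_PFN_MIGRATE))
			continue;
		dpage = alloc_page(GFP_HIGHUSER);
		if (!dpage)
			continue;	/* picked up by a later pass */
		lock_page(dpage);
		dst[i] = migrate_pfn(page_to_pfn(dpage));
	}

	/* Hardware copy VRAM -> system pages goes here. */

	migrate_device_pages(src, dst, npages);
	migrate_device_finalize(src, dst, npages);
out:
	kvfree(src);
	kvfree(dst);
}
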
> > +* GPU fault side
> > +	* The mmap_read lock is only used around core MM functions which
> > +	  require it, and the driver should strive to take the mmap_read lock
> > +	  only in the GPU SVM layer.
> > +	* Big retry loop to handle all races with the mmu notifier under the
> > +	  gpu pagetable locks/mmu notifier range lock/whatever we end up
> > +	  calling those.
> > +	* Races (especially against concurrent eviction or migrate_to_ram)
> > +	  should not be handled on the fault side by trying to hold locks;
> > +	  rather, they should be handled using retry loops. One possible
> > +	  exception is holding a BO's dma-resv lock during the initial migration
> > +	  to VRAM, as this is a well-defined lock that can be taken underneath
> > +	  the mmap_read lock.
> > +	* One possible issue with the above approach is if a driver has a strict
> > +	  migration policy requiring GPU access to occur in GPU memory.
> > +	  Concurrent CPU access could cause a livelock due to endless retries.
> > +	  While no current user (Xe) of GPU SVM has such a policy, it is likely
> > +	  to be added in the future. Ideally, this should be resolved on the
> > +	  core-MM side rather than through a driver-side lock.
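
Concretely, the "big retry loop" here is the standard pattern from
Documentation/mm/hmm.rst; a minimal sketch (the my_gpu_fault wrapper and
its parameters are hypothetical, the core calls are the real API):

#include <linux/hmm.h>
#include <linux/mmap_lock.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

static int my_gpu_fault(struct mm_struct *mm,
			struct mmu_interval_notifier *notifier,
			struct hmm_range *range, struct mutex *pt_lock)
{
	unsigned long seq;
	int err;

again:
	seq = mmu_interval_read_begin(notifier);
	range->notifier_seq = seq;

	/* mmap_read lock only around the core MM call that needs it. */
	mmap_read_lock(mm);
	err = hmm_range_fault(range);
	mmap_read_unlock(mm);
	if (err == -EBUSY)
		goto again;	/* collided with an invalidation */
	if (err)
		return err;

	mutex_lock(pt_lock);	/* gpu pagetable / notifier range lock */
	if (mmu_interval_read_retry(notifier, seq)) {
		mutex_unlock(pt_lock);
		goto again;	/* raced: no extra locks, just retry */
	}
	/* Program GPU PTEs from range->hmm_pfns here. */
	mutex_unlock(pt_lock);
	return 0;
}
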
> > +* Physical memory to virtual backpointer
> > +	* This does not work, as no pointers from physical memory to virtual
> > +	  memory should exist. mremap() is an example of the core MM updating
> > +	  the virtual address without notifying the driver.
> 
> Pretty minor nit, but this could be read as core MM won't send an mmu
> notifier when calling mremap(). That's not the case: the driver will get
> a notifier invalidating the old address, but won't explicitly be
> notified of the new address.
> 
> Not worth sending an update just for that though.
> 

Yep, will fix up when merging.

> > +	* The physical memory backpointer (page->zone_device_data) should remain
> > +	  stable from allocation to page free. Safely updating this against a
> > +	  concurrent user would be very difficult unless the page is free.
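
A sketch of that lifetime rule, using the real page->zone_device_data
field and dev_pagemap_ops but hypothetical my_ types: the backpointer is
written once when the device page is handed out and torn down only in
the pagemap's page_free op, so no concurrent update ever happens.

#include <linux/kref.h>
#include <linux/memremap.h>
#include <linux/slab.h>

struct my_allocation {
	struct kref refcount;
	/* driver bookkeeping for one VRAM allocation */
};

static void my_allocation_release(struct kref *kref)
{
	kfree(container_of(kref, struct my_allocation, refcount));
}

/* Set exactly once, when the device page is allocated ... */
static void my_devmem_page_init(struct page *page,
				struct my_allocation *alloc)
{
	kref_get(&alloc->refcount);
	page->zone_device_data = alloc;
}

/* ... and cleared only once the page is free. */
static void my_page_free(struct page *page)
{
	struct my_allocation *alloc = page->zone_device_data;

	page->zone_device_data = NULL;
	kref_put(&alloc->refcount, my_allocation_release);
}

static const struct dev_pagemap_ops my_pagemap_ops = {
	.page_free	= my_page_free,
	/* .migrate_to_ram = my_migrate_to_ram, as sketched earlier */
};
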
> > +* GPU pagetable locking
> > +	* Notifier lock only protects range tree, pages valid state for a range
> > +	  (rather than seqno due to wider notifiers), pagetable entries, and
> > +	  mmu notifier seqno tracking, it is not a global lock to protect
> > +          against races.
> > +	* All races handled with big retry as mentioned above.
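
For completeness, the invalidation side of that lock looks roughly like
the following (my_ names hypothetical; only the seqno bump and the PTE
zap sit under the notifier lock, nothing else is serialized by it):

#include <linux/mmu_notifier.h>
#include <linux/rwsem.h>

struct my_svm {
	struct mmu_interval_notifier mni;
	struct rw_semaphore notifier_lock;	/* the "notifier lock" */
};

static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_svm *svm = container_of(mni, struct my_svm, mni);

	if (!mmu_notifier_range_blockable(range))
		return false;

	down_write(&svm->notifier_lock);
	mmu_interval_set_seq(mni, cur_seq);
	/* Zap GPU PTEs overlapping [range->start, range->end) and mark
	 * pages invalid for affected ranges; the fault side notices via
	 * mmu_interval_read_retry() and re-validates in its retry loop. */
	up_write(&svm->notifier_lock);
	return true;
}

static const struct mmu_interval_notifier_ops my_notifier_ops = {
	.invalidate = my_invalidate,
};
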
> > +
> > +Overview of baseline design
> > +===========================
> > +
> > +The baseline design is as simple as possible to get a working baseline
> > +which can be built upon.
> > +
> > +.. kernel-doc:: drivers/gpu/drm/xe/drm_gpusvm.c
> > +   :doc: Overview
> > +   :doc: Locking
> > +   :doc: Migration
> > +   :doc: Partial Unmapping of Ranges
> > +   :doc: Examples
> > +
> > +Possible future design features
> > +===============================
> > +
> > +* Concurrent GPU faults
> > +	* CPU faults are concurrent, so it makes sense to have concurrent GPU
> > +	  faults.
> > +	* Should be possible with fine-grained locking in the driver GPU
> > +	  fault handler.
> > +	* No expected GPU SVM changes required.
> > +* Ranges with mixed system and device pages
> > +	* Can be added to drm_gpusvm_get_pages fairly easily if required.
> > +* Multi-GPU support
> > +	* Work in progress, with patches expected after GPU SVM initially
> > +	  lands.
> > +	* Ideally can be done with little to no changes to GPU SVM.
> > +* Drop ranges in favor of radix tree
> > +	* May be desirable for faster notifiers.
> > +* Compound device pages
> > +	* Nvidia, AMD, and Intel have all agreed that expensive core MM
> > +	  functions in the migrate device layer are a performance bottleneck;
> > +	  having compound device pages should help increase performance by
> > +	  reducing the number of these expensive calls.
> 
> Balbir has also just posted an initial RFC implementation of this here:
> 
> https://lore.kernel.org/linux-mm/20250306044239.3874247-1-balbirs@nvidia.com/
> 

Nice. Will take a look as we have time. I also want to pull in Leon's
series [1] too.

Matt

[1] https://lore.kernel.org/all/cover.1738765879.git.leonro@nvidia.com/

> > +* Higher order dma mapping for migration
> > +	* 4k dma mapping adversely affects migration performance on Intel
> > +	  hardware; higher order (2M) dma mapping should help here.
> > +* Build common userptr implementation on top of GPU SVM
> > +* Driver side madvise implementation and migration policies
> > +* Pull in pending dma-mapping API changes from Leon / Nvidia when these land
> > diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
> > index 476719771eef..396e535377fb 100644
> > --- a/Documentation/gpu/rfc/index.rst
> > +++ b/Documentation/gpu/rfc/index.rst
> > @@ -16,6 +16,10 @@ host such documentation:
> >  * Once the code has landed move all the documentation to the right places in
> >    the main core, helper or driver sections.
> >  
> > +.. toctree::
> > +
> > +    gpusvm.rst
> > +
> >  .. toctree::
> >  
> >      i915_gem_lmem.rst
> > -- 
> > 2.34.1
> > 

Thread overview: 52+ messages
2025-03-06  1:26 [PATCH v7 00/32] Introduce GPU SVM and Xe SVM implementation Matthew Brost
2025-03-06  1:26 ` [PATCH v7 01/32] drm/xe: Retry BO allocation Matthew Brost
2025-03-06  1:26 ` [PATCH v7 02/32] mm/migrate: Add migrate_device_pfns Matthew Brost
2025-03-06  1:26 ` [PATCH v7 03/32] mm/migrate: Trylock device page in do_swap_page Matthew Brost
2025-03-06  1:26 ` [PATCH v7 04/32] drm/pagemap: Add DRM pagemap Matthew Brost
2025-03-06  1:26 ` [PATCH v7 05/32] drm/xe/bo: Introduce xe_bo_put_async Matthew Brost
2025-03-06  1:26 ` [PATCH v7 06/32] drm/gpusvm: Add support for GPU Shared Virtual Memory Matthew Brost
2025-03-06  1:26 ` [PATCH v7 07/32] drm/xe: Select DRM_GPUSVM Kconfig Matthew Brost
2025-03-06  1:26 ` [PATCH v7 08/32] drm/xe/uapi: Add DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR Matthew Brost
2025-03-06  1:26 ` [PATCH v7 09/32] drm/xe: Add SVM init / close / fini to faulting VMs Matthew Brost
2025-03-06  1:26 ` [PATCH v7 10/32] drm/xe: Add dma_addr res cursor Matthew Brost
2025-03-06  1:26 ` [PATCH v7 11/32] drm/xe: Nuke VM's mapping upon close Matthew Brost
2025-03-06  1:26 ` [PATCH v7 12/32] drm/xe: Add SVM range invalidation and page fault Matthew Brost
2025-03-06  1:26 ` [PATCH v7 13/32] drm/gpuvm: Add DRM_GPUVA_OP_DRIVER Matthew Brost
2025-03-06  1:26 ` [PATCH v7 14/32] drm/xe: Add (re)bind to SVM page fault handler Matthew Brost
2025-03-06  1:26 ` [PATCH v7 15/32] drm/xe: Add SVM garbage collector Matthew Brost
2025-03-06  1:26 ` [PATCH v7 16/32] drm/xe: Add unbind to " Matthew Brost
2025-03-06  1:26 ` [PATCH v7 17/32] drm/xe: Do not allow CPU address mirror VMA unbind if Matthew Brost
2025-03-06  1:26 ` [PATCH v7 18/32] drm/xe: Enable CPU address mirror uAPI Matthew Brost
2025-03-06  1:26 ` [PATCH v7 19/32] drm/xe/uapi: Add DRM_XE_QUERY_CONFIG_FLAG_HAS_CPU_ADDR_MIRROR Matthew Brost
2025-03-06  1:26 ` [PATCH v7 20/32] drm/xe: Add migrate layer functions for SVM support Matthew Brost
2025-03-06  1:26 ` [PATCH v7 21/32] drm/xe: Add SVM device memory mirroring Matthew Brost
2025-03-06  1:26 ` [PATCH v7 22/32] drm/xe: Add drm_gpusvm_devmem to xe_bo Matthew Brost
2025-03-06  1:26 ` [PATCH v7 23/32] drm/xe: Add drm_pagemap ops to SVM Matthew Brost
2025-03-06  1:26 ` [PATCH v7 24/32] drm/xe: Add GPUSVM device memory copy vfunc functions Matthew Brost
2025-03-06  1:26 ` [PATCH v7 25/32] drm/xe: Add Xe SVM populate_devmem_pfn GPU SVM vfunc Matthew Brost
2025-03-06  1:26 ` [PATCH v7 26/32] drm/xe: Add Xe SVM devmem_release " Matthew Brost
2025-03-06  1:26 ` [PATCH v7 27/32] drm/xe: Add SVM VRAM migration Matthew Brost
2025-03-06  1:26 ` [PATCH v7 28/32] drm/xe: Basic SVM BO eviction Matthew Brost
2025-03-06  1:26 ` [PATCH v7 29/32] drm/xe: Add SVM debug Matthew Brost
2025-03-06  1:26 ` [PATCH v7 30/32] drm/xe: Add modparam for SVM notifier size Matthew Brost
2025-03-06  1:26 ` [PATCH v7 31/32] drm/xe: Add always_migrate_to_vram modparam Matthew Brost
2025-03-06  1:26 ` [PATCH v7 32/32] drm/doc: gpusvm: Add GPU SVM documentation Matthew Brost
2025-03-06  5:45   ` Alistair Popple
2025-03-06  6:08     ` Matthew Brost [this message]
2025-03-06  1:54 ` ✓ CI.Patch_applied: success for Introduce GPU SVM and Xe SVM implementation (rev7) Patchwork
2025-03-06  1:54 ` ✗ CI.checkpatch: warning " Patchwork
2025-03-06  1:56 ` ✓ CI.KUnit: success " Patchwork
2025-03-06  2:12 ` ✓ CI.Build: " Patchwork
2025-03-06  2:14 ` ✓ CI.Hooks: " Patchwork
2025-03-06  2:16 ` ✗ CI.checksparse: warning " Patchwork
2025-03-06  2:51 ` ✓ Xe.CI.BAT: success " Patchwork
2025-03-06  8:25 ` ✗ Xe.CI.Full: failure " Patchwork
2025-03-06  9:57   ` Matthew Brost
2025-03-06  9:22 ` ✓ CI.Patch_applied: success for Introduce GPU SVM and Xe SVM implementation (rev8) Patchwork
2025-03-06  9:23 ` ✗ CI.checkpatch: warning " Patchwork
2025-03-06  9:24 ` ✓ CI.KUnit: success " Patchwork
2025-03-06  9:41 ` ✓ CI.Build: " Patchwork
2025-03-06  9:43 ` ✓ CI.Hooks: " Patchwork
2025-03-06  9:44 ` ✗ CI.checksparse: warning " Patchwork
2025-03-06 10:17 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-03-06 18:41 ` ✗ Xe.CI.Full: " Patchwork
