Intel-XE Archive on lore.kernel.org
From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>,
	intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: himal.prasad.ghimiray@intel.com, apopple@nvidia.com,
	airlied@gmail.com,  simona.vetter@ffwll.ch,
	felix.kuehling@amd.com, dakr@kernel.org
Subject: Re: [PATCH v6 27/32] drm/xe: Add SVM VRAM migration
Date: Wed, 26 Feb 2025 17:47:26 +0100	[thread overview]
Message-ID: <7bd3d2b1f477d3ee580625f081b4408d8fd233f2.camel@linux.intel.com> (raw)
In-Reply-To: <20250225044311.3178695-28-matthew.brost@intel.com>

On Mon, 2025-02-24 at 20:43 -0800, Matthew Brost wrote:
> Migration is implemented with range granularity, with VRAM backing being
> a VM private TTM BO (i.e., shares dma-resv with VM). The lifetime of the
> TTM BO is limited to when the SVM range is in VRAM (i.e., when a VRAM
> SVM range is migrated to SRAM, the TTM BO is destroyed).
> 
> The design choice of using a TTM BO for the VRAM backing store, as opposed
> to a direct buddy allocation, is as follows:
> 
> - DRM buddy allocations are not at page granularity, offering no
>   advantage over a BO.
> - Unified eviction is required (SVM VRAM and TTM BOs need to be able to
>   evict each other).
> - For exhaustive eviction [1], SVM VRAM allocations will almost certainly
>   require a dma-resv.
> - The likely allocation size is 2M, which makes the size of a BO (872
>   bytes) acceptable per allocation (872 / 2M == .0004158).
> 
> With this, using a TTM BO for the VRAM backing store is an obvious choice,
> as it allows the TTM eviction code to be leveraged.
> 
> The current migration policy is to migrate any SVM range greater than or
> equal to 64k, once per range.
> 
> [1] https://patchwork.freedesktop.org/series/133643/
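
(As a condensed illustration of the policy above: the gate this patch adds to
xe_svm_handle_pagefault() in the hunk further down reduces to roughly the
following; locking, the retry path and debug output are elided here.)

	/* Migrate a CPU-address-mirror range to VRAM at most once, and only
	 * if it is large enough and flagged as migratable.
	 */
	if (!range->skip_migrate && range->base.flags.migrate_devmem &&
	    xe_svm_range_size(range) >= SZ_64K) {
		range->skip_migrate = true;	/* never attempted again for this range */
		err = xe_svm_alloc_vram(vm, tile, range, &ctx);
		/* On failure, fall back to retrying the fault in system memory. */
	}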
> 
> v2:
>  - Rebase on latest GPU SVM
>  - Retry page fault on get pages returning mixed allocation
>  - Use drm_gpusvm_devmem
> v3:
>  - Use new BO flags
>  - New range structure (Thomas)
>  - Hide migration behind Kconfig
>  - Kernel doc (Thomas)
>  - Use check_pages_threshold
> v4:
>  - Don't evict partial unmaps in garbage collector (Thomas)
>  - Use %pe to print errors (Thomas)
>  - Use %p to print pointers (Thomas)
> v5:
>  - Use range size helper (Thomas)
>  - Make BO external (Thomas)
>  - Set tile to NULL for BO creation (Thomas)
>  - Drop BO mirror flag (Thomas)
>  - Hold BO dma-resv lock across migration (Auld, Thomas)
> v6:
>  - s/drm_info/drm_dbg (Thomas)
>  - s/migrated/skip_migrate (Himal)
>  - Better debug message on VRAM migration failure (Himal)
>  - Drop return BO from VRAM allocation function (Thomas)
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>


> ---
>  drivers/gpu/drm/xe/xe_svm.c | 97 +++++++++++++++++++++++++++++++++++--
>  drivers/gpu/drm/xe/xe_svm.h |  5 ++
>  2 files changed, 98 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index cb3567b561be..d8ff7fc708ee 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -32,6 +32,11 @@ static unsigned long xe_svm_range_end(struct xe_svm_range *range)
>  	return drm_gpusvm_range_end(&range->base);
>  }
>  
> +static unsigned long xe_svm_range_size(struct xe_svm_range *range)
> +{
> +	return drm_gpusvm_range_size(&range->base);
> +}
> +
>  static void *xe_svm_devm_owner(struct xe_device *xe)
>  {
>  	return xe;
> @@ -508,7 +513,6 @@ static int xe_svm_populate_devmem_pfn(struct drm_gpusvm_devmem *devmem_allocatio
>  	return 0;
>  }
>  
> -__maybe_unused
>  static const struct drm_gpusvm_devmem_ops gpusvm_devmem_ops = {
>  	.devmem_release = xe_svm_devmem_release,
>  	.populate_devmem_pfn = xe_svm_populate_devmem_pfn,
> @@ -588,6 +592,62 @@ static bool xe_svm_range_is_valid(struct xe_svm_range *range,
>  	return (range->tile_present & ~range->tile_invalidated) & BIT(tile->id);
>  }
>  
> +static struct xe_vram_region *tile_to_vr(struct xe_tile *tile)
> +{
> +	return &tile->mem.vram;
> +}
> +
> +static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> +			     struct xe_svm_range *range,
> +			     const struct drm_gpusvm_ctx *ctx)
> +{
> +	struct mm_struct *mm = vm->svm.gpusvm.mm;
> +	struct xe_vram_region *vr = tile_to_vr(tile);
> +	struct drm_buddy_block *block;
> +	struct list_head *blocks;
> +	struct xe_bo *bo;
> +	ktime_t end = 0;
> +	int err;
> +
> +	if (!mmget_not_zero(mm))
> +		return -EFAULT;
> +	mmap_read_lock(mm);
> +
> +retry:
> +	bo = xe_bo_create_locked(tile_to_xe(tile), NULL, NULL,
> +				 xe_svm_range_size(range),
> +				 ttm_bo_type_device,
> +				 XE_BO_FLAG_VRAM_IF_DGFX(tile));
> +	if (IS_ERR(bo)) {
> +		err = PTR_ERR(bo);
> +		if (xe_vm_validate_should_retry(NULL, err, &end))
> +			goto retry;
> +		goto unlock;
> +	}
> +
> +	drm_gpusvm_devmem_init(&bo->devmem_allocation,
> +			       vm->xe->drm.dev, mm,
> +			       &gpusvm_devmem_ops,
> +			       &tile->mem.vram.dpagemap,
> +			       xe_svm_range_size(range));
> +
> +	blocks = &to_xe_ttm_vram_mgr_resource(bo->ttm.resource)->blocks;
> +	list_for_each_entry(block, blocks, link)
> +		block->private = vr;
> +
> +	err = drm_gpusvm_migrate_to_devmem(&vm->svm.gpusvm, &range->base,
> +					   &bo->devmem_allocation, ctx);
> +	xe_bo_unlock(bo);
> +	if (err)
> +		xe_bo_put(bo);	/* Creation ref */
> +
> +unlock:
> +	mmap_read_unlock(mm);
> +	mmput(mm);
> +
> +	return err;
> +}
> +
>  /**
>   * xe_svm_handle_pagefault() - SVM handle page fault
>   * @vm: The VM.
> @@ -596,7 +656,8 @@ static bool xe_svm_range_is_valid(struct xe_svm_range *range,
>   * @fault_addr: The GPU fault address.
>   * @atomic: The fault atomic access bit.
>   *
> - * Create GPU bindings for a SVM page fault.
> + * Create GPU bindings for a SVM page fault. Optionally migrate to device
> + * memory.
>   *
>   * Return: 0 on success, negative error code on error.
>   */
> @@ -604,7 +665,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>  			    struct xe_tile *tile, u64 fault_addr,
>  			    bool atomic)
>  {
> -	struct drm_gpusvm_ctx ctx = { .read_only = xe_vma_read_only(vma), };
> +	struct drm_gpusvm_ctx ctx = {
> +		.read_only = xe_vma_read_only(vma),
> +		.devmem_possible = IS_DGFX(vm->xe) &&
> +			IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
> +		.check_pages_threshold = IS_DGFX(vm->xe) &&
> +			IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
> +	};
>  	struct xe_svm_range *range;
>  	struct drm_gpusvm_range *r;
>  	struct drm_exec exec;
> @@ -631,9 +698,31 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>  	if (xe_svm_range_is_valid(range, tile))
>  		return 0;
>  
> +	/* XXX: Add migration policy, for now migrate range once */
> +	if (!range->skip_migrate && range->base.flags.migrate_devmem &&
> +	    xe_svm_range_size(range) >= SZ_64K) {
> +		range->skip_migrate = true;
> +
> +		err = xe_svm_alloc_vram(vm, tile, range, &ctx);
> +		if (err) {
> +			drm_dbg(&vm->xe->drm,
> +				"VRAM allocation failed, falling back to "
> +				"retrying fault, asid=%u, errno %pe\n",
> +				vm->usm.asid, ERR_PTR(err));
> +			goto retry;
> +		}
> +	}
> +
>  	err = drm_gpusvm_range_get_pages(&vm->svm.gpusvm, r, &ctx);
> -	if (err == -EFAULT || err == -EPERM)	/* Corner where CPU mappings have changed */
> +	/* Corner where CPU mappings have changed */
> +	if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
> +		if (err == -EOPNOTSUPP)
> +			drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);
> +		drm_dbg(&vm->xe->drm,
> +			"Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno %pe\n",
> +			vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>  		goto retry;
> +	}
>  	if (err)
>  		goto err_out;
>  
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index 49c35e9ec183..5d4eeb2d34ce 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -35,6 +35,11 @@ struct xe_svm_range {
>  	 * range. Protected by GPU SVM notifier lock.
>  	 */
>  	u8 tile_invalidated;
> +	/**
> +	 * @skip_migrate: Skip migration to VRAM, protected by GPU fault handler
> +	 * locking.
> +	 */
> +	u8 skip_migrate	:1;
>  };
>  
>  #if IS_ENABLED(CONFIG_DRM_GPUSVM)
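
(For orientation, the VRAM allocation path added above reduces to the sequence
below; this is a condensed reading of xe_svm_alloc_vram() from the quoted hunk,
with the mm refcounting, mmap_read_lock, retry-on-eviction loop and error
unwinding elided.)

	/* 1. Back the SVM range with a locked, VM-private VRAM BO sized to the range. */
	bo = xe_bo_create_locked(tile_to_xe(tile), NULL, NULL,
				 xe_svm_range_size(range), ttm_bo_type_device,
				 XE_BO_FLAG_VRAM_IF_DGFX(tile));

	/* 2. Register the BO as a GPU SVM device-memory allocation. */
	drm_gpusvm_devmem_init(&bo->devmem_allocation, vm->xe->drm.dev, mm,
			       &gpusvm_devmem_ops, &tile->mem.vram.dpagemap,
			       xe_svm_range_size(range));

	/* 3. Tag each buddy block with the VRAM region so device pfns can be resolved. */
	blocks = &to_xe_ttm_vram_mgr_resource(bo->ttm.resource)->blocks;
	list_for_each_entry(block, blocks, link)
		block->private = vr;

	/* 4. Migrate the range's pages into the new device memory; on failure the
	 *    creation reference is dropped so the BO is released.
	 */
	err = drm_gpusvm_migrate_to_devmem(&vm->svm.gpusvm, &range->base,
					   &bo->devmem_allocation, ctx);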


Thread overview: 56+ messages
2025-02-25  4:42 [PATCH v6 00/32] Introduce GPU SVM and Xe SVM implementation Matthew Brost
2025-02-25  4:42 ` [PATCH v6 01/32] drm/xe: Retry BO allocation Matthew Brost
2025-02-25  4:42 ` [PATCH v6 02/32] mm/migrate: Add migrate_device_pfns Matthew Brost
2025-02-25  4:42 ` [PATCH v6 03/32] mm/migrate: Trylock device page in do_swap_page Matthew Brost
2025-02-25  4:42 ` [PATCH v6 04/32] drm/pagemap: Add DRM pagemap Matthew Brost
2025-02-25 15:06   ` Matthew Auld
2025-02-25 18:16     ` Matthew Brost
2025-02-25  4:42 ` [PATCH v6 05/32] drm/xe/bo: Introduce xe_bo_put_async Matthew Brost
2025-02-25  4:42 ` [PATCH v6 06/32] drm/gpusvm: Add support for GPU Shared Virtual Memory Matthew Brost
2025-02-25 15:14   ` Matthew Auld
2025-02-25 18:16     ` Matthew Brost
2025-02-25  4:42 ` [PATCH v6 07/32] drm/xe: Select DRM_GPUSVM Kconfig Matthew Brost
2025-02-25  4:42 ` [PATCH v6 08/32] drm/xe/uapi: Add DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR Matthew Brost
2025-02-25  4:42 ` [PATCH v6 09/32] drm/xe: Add SVM init / close / fini to faulting VMs Matthew Brost
2025-02-25  4:42 ` [PATCH v6 10/32] drm/xe: Add dma_addr res cursor Matthew Brost
2025-02-25  4:42 ` [PATCH v6 11/32] drm/xe: Nuke VM's mapping upon close Matthew Brost
2025-02-25 18:05   ` Matthew Auld
2025-02-25 18:14     ` Matthew Brost
2025-02-25  4:42 ` [PATCH v6 12/32] drm/xe: Add SVM range invalidation and page fault Matthew Brost
2025-02-25  4:42 ` [PATCH v6 13/32] drm/gpuvm: Add DRM_GPUVA_OP_DRIVER Matthew Brost
2025-02-25  4:42 ` [PATCH v6 14/32] drm/xe: Add (re)bind to SVM page fault handler Matthew Brost
2025-02-26 17:00   ` Thomas Hellström
2025-02-26 17:18   ` Ghimiray, Himal Prasad
2025-02-25  4:42 ` [PATCH v6 15/32] drm/xe: Add SVM garbage collector Matthew Brost
2025-02-25  4:42 ` [PATCH v6 16/32] drm/xe: Add unbind to " Matthew Brost
2025-02-25  4:42 ` [PATCH v6 17/32] drm/xe: Do not allow CPU address mirror VMA unbind if the GPU has bindings Matthew Brost
2025-02-27 17:01   ` Thomas Hellström
2025-02-25  4:42 ` [PATCH v6 18/32] drm/xe: Enable CPU address mirror uAPI Matthew Brost
2025-02-25  4:42 ` [PATCH v6 19/32] drm/xe/uapi: Add DRM_XE_QUERY_CONFIG_FLAG_HAS_CPU_ADDR_MIRROR Matthew Brost
2025-02-25  4:42 ` [PATCH v6 20/32] drm/xe: Add migrate layer functions for SVM support Matthew Brost
2025-02-25  4:43 ` [PATCH v6 21/32] drm/xe: Add SVM device memory mirroring Matthew Brost
2025-02-25  4:43 ` [PATCH v6 22/32] drm/xe: Add drm_gpusvm_devmem to xe_bo Matthew Brost
2025-02-25  4:43 ` [PATCH v6 23/32] drm/xe: Add drm_pagemap ops to SVM Matthew Brost
2025-02-25  4:43 ` [PATCH v6 24/32] drm/xe: Add GPUSVM device memory copy vfunc functions Matthew Brost
2025-02-25  4:43 ` [PATCH v6 25/32] drm/xe: Add Xe SVM populate_devmem_pfn GPU SVM vfunc Matthew Brost
2025-02-25  4:43 ` [PATCH v6 26/32] drm/xe: Add Xe SVM devmem_release " Matthew Brost
2025-02-25  4:43 ` [PATCH v6 27/32] drm/xe: Add SVM VRAM migration Matthew Brost
2025-02-26 16:47   ` Thomas Hellström [this message]
2025-02-26 17:16   ` Ghimiray, Himal Prasad
2025-02-25  4:43 ` [PATCH v6 28/32] drm/xe: Basic SVM BO eviction Matthew Brost
2025-02-25  4:43 ` [PATCH v6 29/32] drm/xe: Add SVM debug Matthew Brost
2025-02-25  4:43 ` [PATCH v6 30/32] drm/xe: Add modparam for SVM notifier size Matthew Brost
2025-02-25  4:43 ` [PATCH v6 31/32] drm/xe: Add always_migrate_to_vram modparam Matthew Brost
2025-02-25  4:43 ` [PATCH v6 32/32] drm/doc: gpusvm: Add GPU SVM documentation Matthew Brost
2025-02-28  2:34   ` Alistair Popple
2025-02-28  4:36     ` Matthew Brost
2025-02-28  5:53       ` Alistair Popple
2025-03-01  0:35         ` Matthew Brost
2025-02-25  4:50 ` ✓ CI.Patch_applied: success for Introduce GPU SVM and Xe SVM implementation (rev6) Patchwork
2025-02-25  4:51 ` ✗ CI.checkpatch: warning " Patchwork
2025-02-25  4:52 ` ✓ CI.KUnit: success " Patchwork
2025-02-25  5:08 ` ✓ CI.Build: " Patchwork
2025-02-25  5:10 ` ✗ CI.Hooks: failure " Patchwork
2025-02-25  5:12 ` ✗ CI.checksparse: warning " Patchwork
2025-02-25  5:32 ` ✓ Xe.CI.BAT: success " Patchwork
2025-02-25  9:55 ` ✗ Xe.CI.Full: failure " Patchwork
