Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: intel-xe@lists.freedesktop.org, stable@vger.kernel.org,
	dri-devel@lists.freedesktop.org, himal.prasad.ghimiray@intel.com,
	apopple@nvidia.com, airlied@gmail.com,
	"Simona Vetter" <simona.vetter@ffwll.ch>,
	felix.kuehling@amd.com,
	"Christian König" <christian.koenig@amd.com>,
	dakr@kernel.org, "Mrozek, Michal" <michal.mrozek@intel.com>,
	"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>
Subject: Re: [PATCH v5 03/24] drm/pagemap, drm/xe: Ensure that the devmem allocation is idle before use
Date: Thu, 18 Dec 2025 10:33:34 -0800	[thread overview]
Message-ID: <aURI/tEMA6GInnCh@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20251218162101.605379-4-thomas.hellstrom@linux.intel.com>

On Thu, Dec 18, 2025 at 05:20:40PM +0100, Thomas Hellström wrote:
> In situations where no system memory is migrated to devmem, and in
> upcoming patches where another GPU is performing the migration to
> the newly allocated devmem buffer, there is nothing to ensure any
> ongoing clear to the devmem allocation or async eviction from the
> devmem allocation is complete.
> 
> Address that by passing a struct dma_fence down to the copy
> functions, and ensure it is waited for before migration is marked
> complete.
> 
> v3:
> - New patch.
> v4:
> - Update the logic used for determining when to wait for the
>   pre_migrate_fence.
> - Update the logic used for determining when to warn for the
>   pre_migrate_fence since the scheduler fences apparently
>   can signal out-of-order.
> v5:
> - Fix a UAF (CI)
> - Remove references to source P2P migration (Himal)
> - Put the pre_migrate_fence after migration.
> 
> Fixes: c5b3eb5a906c ("drm/xe: Add GPUSVM device memory copy vfunc functions")
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: <stable@vger.kernel.org> # v6.15+
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>  drivers/gpu/drm/drm_pagemap.c | 17 ++++++---
>  drivers/gpu/drm/xe/xe_svm.c   | 65 ++++++++++++++++++++++++++++++-----
>  include/drm/drm_pagemap.h     | 17 +++++++--
>  3 files changed, 83 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index 4cf8f54e5a27..ac3832f85190 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -3,6 +3,7 @@
>   * Copyright © 2024-2025 Intel Corporation
>   */
>  
> +#include <linux/dma-fence.h>
>  #include <linux/dma-mapping.h>
>  #include <linux/migrate.h>
>  #include <linux/pagemap.h>
> @@ -408,10 +409,14 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
>  		drm_pagemap_get_devmem_page(page, zdd);
>  	}
>  
> -	err = ops->copy_to_devmem(pages, pagemap_addr, npages);
> +	err = ops->copy_to_devmem(pages, pagemap_addr, npages,
> +				  devmem_allocation->pre_migrate_fence);
>  	if (err)
>  		goto err_finalize;
>  
> +	dma_fence_put(devmem_allocation->pre_migrate_fence);
> +	devmem_allocation->pre_migrate_fence = NULL;
> +
>  	/* Upon success bind devmem allocation to range and zdd */
>  	devmem_allocation->timeslice_expiration = get_jiffies_64() +
>  		msecs_to_jiffies(timeslice_ms);
> @@ -596,7 +601,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
>  	for (i = 0; i < npages; ++i)
>  		pages[i] = migrate_pfn_to_page(src[i]);
>  
> -	err = ops->copy_to_ram(pages, pagemap_addr, npages);
> +	err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
>  	if (err)
>  		goto err_finalize;
>  
> @@ -719,7 +724,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
>  	for (i = 0; i < npages; ++i)
>  		pages[i] = migrate_pfn_to_page(migrate.src[i]);
>  
> -	err = ops->copy_to_ram(pages, pagemap_addr, npages);
> +	err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
>  	if (err)
>  		goto err_finalize;
>  
> @@ -800,11 +805,14 @@ EXPORT_SYMBOL_GPL(drm_pagemap_pagemap_ops_get);
>   * @ops: Pointer to the operations structure for GPU SVM device memory
>   * @dpagemap: The struct drm_pagemap we're allocating from.
>   * @size: Size of device memory allocation
> + * @pre_migrate_fence: Fence to wait for or pipeline behind before migration starts.
> + * (May be NULL).
>   */
>  void drm_pagemap_devmem_init(struct drm_pagemap_devmem *devmem_allocation,
>  			     struct device *dev, struct mm_struct *mm,
>  			     const struct drm_pagemap_devmem_ops *ops,
> -			     struct drm_pagemap *dpagemap, size_t size)
> +			     struct drm_pagemap *dpagemap, size_t size,
> +			     struct dma_fence *pre_migrate_fence)
>  {
>  	init_completion(&devmem_allocation->detached);
>  	devmem_allocation->dev = dev;
> @@ -812,6 +820,7 @@ void drm_pagemap_devmem_init(struct drm_pagemap_devmem *devmem_allocation,
>  	devmem_allocation->ops = ops;
>  	devmem_allocation->dpagemap = dpagemap;
>  	devmem_allocation->size = size;
> +	devmem_allocation->pre_migrate_fence = pre_migrate_fence;
>  }
>  EXPORT_SYMBOL_GPL(drm_pagemap_devmem_init);
>  
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index bab8e6cbe53d..b806a1fce188 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -472,11 +472,12 @@ static void xe_svm_copy_us_stats_incr(struct xe_gt *gt,
>  
>  static int xe_svm_copy(struct page **pages,
>  		       struct drm_pagemap_addr *pagemap_addr,
> -		       unsigned long npages, const enum xe_svm_copy_dir dir)
> +		       unsigned long npages, const enum xe_svm_copy_dir dir,
> +		       struct dma_fence *pre_migrate_fence)
>  {
>  	struct xe_vram_region *vr = NULL;
>  	struct xe_gt *gt = NULL;
> -	struct xe_device *xe;
> +	struct xe_device *xe = NULL;
>  	struct dma_fence *fence = NULL;
>  	unsigned long i;
>  #define XE_VRAM_ADDR_INVALID	~0x0ull
> @@ -485,6 +486,16 @@ static int xe_svm_copy(struct page **pages,
>  	bool sram = dir == XE_SVM_COPY_TO_SRAM;
>  	ktime_t start = xe_gt_stats_ktime_get();
>  
> +	if (pre_migrate_fence && dma_fence_is_container(pre_migrate_fence)) {
> +		/*
> +		 * This would typically be a composite fence operation on the destination memory.
> +		 * Ensure that the other GPU operation on the destination is complete.
> +		 */
> +		err = dma_fence_wait(pre_migrate_fence, true);
> +		if (err)
> +			return err;
> +	}
> +

I'm not fully convinced this code works. Consider the case where we
allocate memory on device A and are copying from device B. In this case
device A issues the clear but device B issues the copy. The
pre_migrate_fence is not going to be a container and I believe our
ordering breaks.

Would it be simpler to pass the pre_migrate_fence in as a dependency to
the first copy job issued, then set it to NULL? The drm scheduler is
smart enough to squash the input fence into a NOP if the copy / clear
are from the same device's queues. Roughly like the sketch below.
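
Untested sketch of what I mean - issue_copy_chunk() is a made-up
stand-in for however the per-chunk copy job actually gets queued here,
not a real interface:

	/*
	 * Only the first copy job needs the pre-migrate dependency;
	 * later jobs on the same queue are ordered behind it anyway.
	 */
	__fence = issue_copy_chunk(chunk, pre_migrate_fence);
	pre_migrate_fence = NULL;
	if (IS_ERR(__fence)) {
		err = PTR_ERR(__fence);
		goto err_out;
	}
	dma_fence_put(fence);
	fence = __fence;

If pre_migrate_fence is still non-NULL at err_out, no copy job was ever
issued and we just wait on it there directly (see below).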

>  	/*
>  	 * This flow is complex: it locates physically contiguous device pages,
>  	 * derives the starting physical address, and performs a single GPU copy
> @@ -621,10 +632,28 @@ static int xe_svm_copy(struct page **pages,
>  
>  err_out:
>  	/* Wait for all copies to complete */
> -	if (fence) {
> +	if (fence)
>  		dma_fence_wait(fence, false);
> -		dma_fence_put(fence);
> +

Then here...

	if (pre_migrate_fence)
		dma_fence_wait(pre_migrate_fence, false);

Matt

> +	/*
> +	 * If migrating to devmem, we should have pipelined the migration behind
> +	 * the pre_migrate_fence. Verify that this is indeed likely. If we
> +	 * didn't perform any copying, just wait for the pre_migrate_fence.
> +	 */
> +	if (pre_migrate_fence && !dma_fence_is_signaled(pre_migrate_fence)) {
> +		if (xe && fence &&
> +		    (pre_migrate_fence->context != fence->context ||
> +		     dma_fence_is_later(pre_migrate_fence, fence))) {
> +			drm_WARN(&xe->drm, true, "Unsignaled pre-migrate fence");
> +			drm_warn(&xe->drm, "fence contexts: %llu %llu. container %d\n",
> +				 (unsigned long long)fence->context,
> +				 (unsigned long long)pre_migrate_fence->context,
> +				 dma_fence_is_container(pre_migrate_fence));
> +		}
> +
> +		dma_fence_wait(pre_migrate_fence, false);
>  	}
> +	dma_fence_put(fence);
>  
>  	/*
>  	 * XXX: We can't derive the GT here (or anywhere in this functions, but
> @@ -641,16 +670,20 @@ static int xe_svm_copy(struct page **pages,
>  
>  static int xe_svm_copy_to_devmem(struct page **pages,
>  				 struct drm_pagemap_addr *pagemap_addr,
> -				 unsigned long npages)
> +				 unsigned long npages,
> +				 struct dma_fence *pre_migrate_fence)
>  {
> -	return xe_svm_copy(pages, pagemap_addr, npages, XE_SVM_COPY_TO_VRAM);
> +	return xe_svm_copy(pages, pagemap_addr, npages, XE_SVM_COPY_TO_VRAM,
> +			   pre_migrate_fence);
>  }
>  
>  static int xe_svm_copy_to_ram(struct page **pages,
>  			      struct drm_pagemap_addr *pagemap_addr,
> -			      unsigned long npages)
> +			      unsigned long npages,
> +			      struct dma_fence *pre_migrate_fence)
>  {
> -	return xe_svm_copy(pages, pagemap_addr, npages, XE_SVM_COPY_TO_SRAM);
> +	return xe_svm_copy(pages, pagemap_addr, npages, XE_SVM_COPY_TO_SRAM,
> +			   pre_migrate_fence);
>  }
>  
>  static struct xe_bo *to_xe_bo(struct drm_pagemap_devmem *devmem_allocation)
> @@ -663,6 +696,7 @@ static void xe_svm_devmem_release(struct drm_pagemap_devmem *devmem_allocation)
>  	struct xe_bo *bo = to_xe_bo(devmem_allocation);
>  	struct xe_device *xe = xe_bo_device(bo);
>  
> +	dma_fence_put(devmem_allocation->pre_migrate_fence);
>  	xe_bo_put_async(bo);
>  	xe_pm_runtime_put(xe);
>  }
> @@ -857,6 +891,7 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
>  				      unsigned long timeslice_ms)
>  {
>  	struct xe_vram_region *vr = container_of(dpagemap, typeof(*vr), dpagemap);
> +	struct dma_fence *pre_migrate_fence = NULL;
>  	struct xe_device *xe = vr->xe;
>  	struct device *dev = xe->drm.dev;
>  	struct drm_buddy_block *block;
> @@ -883,8 +918,20 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
>  			break;
>  		}
>  
> +		/* Ensure that any clearing or async eviction will complete before migration. */
> +		if (!dma_resv_test_signaled(bo->ttm.base.resv, DMA_RESV_USAGE_KERNEL)) {
> +			err = dma_resv_get_singleton(bo->ttm.base.resv, DMA_RESV_USAGE_KERNEL,
> +						     &pre_migrate_fence);
> +			if (err)
> +				dma_resv_wait_timeout(bo->ttm.base.resv, DMA_RESV_USAGE_KERNEL,
> +						      false, MAX_SCHEDULE_TIMEOUT);
> +			else if (pre_migrate_fence)
> +				dma_fence_enable_sw_signaling(pre_migrate_fence);
> +		}
> +
>  		drm_pagemap_devmem_init(&bo->devmem_allocation, dev, mm,
> -					&dpagemap_devmem_ops, dpagemap, end - start);
> +					&dpagemap_devmem_ops, dpagemap, end - start,
> +					pre_migrate_fence);
>  
>  		blocks = &to_xe_ttm_vram_mgr_resource(bo->ttm.resource)->blocks;
>  		list_for_each_entry(block, blocks, link)
> diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> index f6e7e234c089..70a7991f784f 100644
> --- a/include/drm/drm_pagemap.h
> +++ b/include/drm/drm_pagemap.h
> @@ -8,6 +8,7 @@
>  
>  #define NR_PAGES(order) (1U << (order))
>  
> +struct dma_fence;
>  struct drm_pagemap;
>  struct drm_pagemap_zdd;
>  struct device;
> @@ -174,6 +175,8 @@ struct drm_pagemap_devmem_ops {
>  	 * @pages: Pointer to array of device memory pages (destination)
>  	 * @pagemap_addr: Pointer to array of DMA information (source)
>  	 * @npages: Number of pages to copy
> +	 * @pre_migrate_fence: dma-fence to wait for before migration start.
> +	 * May be NULL.
>  	 *
>  	 * Copy pages to device memory. If the order of a @pagemap_addr entry
>  	 * is greater than 0, the entry is populated but subsequent entries
> @@ -183,13 +186,16 @@ struct drm_pagemap_devmem_ops {
>  	 */
>  	int (*copy_to_devmem)(struct page **pages,
>  			      struct drm_pagemap_addr *pagemap_addr,
> -			      unsigned long npages);
> +			      unsigned long npages,
> +			      struct dma_fence *pre_migrate_fence);
>  
>  	/**
>  	 * @copy_to_ram: Copy to system RAM (required for migration)
>  	 * @pages: Pointer to array of device memory pages (source)
>  	 * @pagemap_addr: Pointer to array of DMA information (destination)
>  	 * @npages: Number of pages to copy
> +	 * @pre_migrate_fence: dma-fence to wait for before migration start.
> +	 * May be NULL.
>  	 *
>  	 * Copy pages to system RAM. If the order of a @pagemap_addr entry
>  	 * is greater than 0, the entry is populated but subsequent entries
> @@ -199,7 +205,8 @@ struct drm_pagemap_devmem_ops {
>  	 */
>  	int (*copy_to_ram)(struct page **pages,
>  			   struct drm_pagemap_addr *pagemap_addr,
> -			   unsigned long npages);
> +			   unsigned long npages,
> +			   struct dma_fence *pre_migrate_fence);
>  };
>  
>  /**
> @@ -212,6 +219,8 @@ struct drm_pagemap_devmem_ops {
>   * @dpagemap: The struct drm_pagemap of the pages this allocation belongs to.
>   * @size: Size of device memory allocation
>   * @timeslice_expiration: Timeslice expiration in jiffies
> + * @pre_migrate_fence: Fence to wait for or pipeline behind before migration starts.
> + * (May be NULL).
>   */
>  struct drm_pagemap_devmem {
>  	struct device *dev;
> @@ -221,6 +230,7 @@ struct drm_pagemap_devmem {
>  	struct drm_pagemap *dpagemap;
>  	size_t size;
>  	u64 timeslice_expiration;
> +	struct dma_fence *pre_migrate_fence;
>  };
>  
>  int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> @@ -238,7 +248,8 @@ struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page);
>  void drm_pagemap_devmem_init(struct drm_pagemap_devmem *devmem_allocation,
>  			     struct device *dev, struct mm_struct *mm,
>  			     const struct drm_pagemap_devmem_ops *ops,
> -			     struct drm_pagemap *dpagemap, size_t size);
> +			     struct drm_pagemap *dpagemap, size_t size,
> +			     struct dma_fence *pre_migrate_fence);
>  
>  int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
>  			    unsigned long start, unsigned long end,
> -- 
> 2.51.1
> 

