Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: intel-xe@lists.freedesktop.org, stable@vger.kernel.org,
	dri-devel@lists.freedesktop.org, himal.prasad.ghimiray@intel.com,
	apopple@nvidia.com, airlied@gmail.com,
	"Simona Vetter" <simona.vetter@ffwll.ch>,
	felix.kuehling@amd.com,
	"Christian König" <christian.koenig@amd.com>,
	dakr@kernel.org, "Mrozek, Michal" <michal.mrozek@intel.com>,
	"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>
Subject: Re: [PATCH v5 03/24] drm/pagemap, drm/xe: Ensure that the devmem allocation is idle before use
Date: Thu, 18 Dec 2025 11:33:06 -0800
Message-ID: <aURW8sAjF7fDlU0n@lstrano-desk.jf.intel.com>
In-Reply-To: <8ed95ebd4525bbedbf62aa2ca26bcaf8ae1e4526.camel@linux.intel.com>

On Thu, Dec 18, 2025 at 08:18:24PM +0100, Thomas Hellström wrote:
> On Thu, 2025-12-18 at 10:33 -0800, Matthew Brost wrote:
> > On Thu, Dec 18, 2025 at 05:20:40PM +0100, Thomas Hellström wrote:
> > > In situations where no system memory is migrated to devmem, and in
> > > upcoming patches where another GPU performs the migration to the
> > > newly allocated devmem buffer, there is nothing to ensure that any
> > > ongoing clear of the devmem allocation, or async eviction from it,
> > > has completed.
> 
> 
> > > 
> > > Address that by passing a struct dma_fence down to the copy
> > > functions, and ensure it is waited for before migration is marked
> > > complete.
> > > 
> > > v3:
> > > - New patch.
> > > v4:
> > > - Update the logic used for determining when to wait for the
> > >   pre_migrate_fence.
> > > - Update the logic used for determining when to warn for the
> > >   pre_migrate_fence since the scheduler fences apparently
> > >   can signal out-of-order.
> > > v5:
> > > - Fix a UAF (CI)
> > > - Remove references to source P2P migration (Himal)
> > > - Put the pre_migrate_fence after migration.
> > > 
> > > Fixes: c5b3eb5a906c ("drm/xe: Add GPUSVM device memory copy vfunc functions")
> > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > Cc: <stable@vger.kernel.org> # v6.15+
> > > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > ---
> > >  drivers/gpu/drm/drm_pagemap.c | 17 ++++++---
> > >  drivers/gpu/drm/xe/xe_svm.c   | 65 ++++++++++++++++++++++++++++++-----
> > >  include/drm/drm_pagemap.h     | 17 +++++++--
> > >  3 files changed, 83 insertions(+), 16 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> > > index 4cf8f54e5a27..ac3832f85190 100644
> > > --- a/drivers/gpu/drm/drm_pagemap.c
> > > +++ b/drivers/gpu/drm/drm_pagemap.c
> > > @@ -3,6 +3,7 @@
> > >   * Copyright © 2024-2025 Intel Corporation
> > >   */
> > >  
> > > +#include <linux/dma-fence.h>
> > >  #include <linux/dma-mapping.h>
> > >  #include <linux/migrate.h>
> > >  #include <linux/pagemap.h>
> > > @@ -408,10 +409,14 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> > >  		drm_pagemap_get_devmem_page(page, zdd);
> > >  	}
> > >  
> > > -	err = ops->copy_to_devmem(pages, pagemap_addr, npages);
> > > +	err = ops->copy_to_devmem(pages, pagemap_addr, npages,
> > > +				  devmem_allocation->pre_migrate_fence);
> > >  	if (err)
> > >  		goto err_finalize;
> > >  
> > > +	dma_fence_put(devmem_allocation->pre_migrate_fence);
> > > +	devmem_allocation->pre_migrate_fence = NULL;
> > > +
> > >  	/* Upon success bind devmem allocation to range and zdd */
> > >  	devmem_allocation->timeslice_expiration = get_jiffies_64() +
> > >  		msecs_to_jiffies(timeslice_ms);
> > > @@ -596,7 +601,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
> > >  	for (i = 0; i < npages; ++i)
> > >  		pages[i] = migrate_pfn_to_page(src[i]);
> > >  
> > > -	err = ops->copy_to_ram(pages, pagemap_addr, npages);
> > > +	err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
> > >  	if (err)
> > >  		goto err_finalize;
> > >  
> > > @@ -719,7 +724,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> > >  	for (i = 0; i < npages; ++i)
> > >  		pages[i] = migrate_pfn_to_page(migrate.src[i]);
> > >  
> > > -	err = ops->copy_to_ram(pages, pagemap_addr, npages);
> > > +	err = ops->copy_to_ram(pages, pagemap_addr, npages, NULL);
> > >  	if (err)
> > >  		goto err_finalize;
> > >  
> > > @@ -800,11 +805,14 @@ EXPORT_SYMBOL_GPL(drm_pagemap_pagemap_ops_get);
> > >   * @ops: Pointer to the operations structure for GPU SVM device memory
> > >   * @dpagemap: The struct drm_pagemap we're allocating from.
> > >   * @size: Size of device memory allocation
> > > + * @pre_migrate_fence: Fence to wait for or pipeline behind before migration starts.
> > > + * (May be NULL).
> > >   */
> > >  void drm_pagemap_devmem_init(struct drm_pagemap_devmem *devmem_allocation,
> > >  			     struct device *dev, struct mm_struct *mm,
> > >  			     const struct drm_pagemap_devmem_ops *ops,
> > > -			     struct drm_pagemap *dpagemap, size_t size)
> > > +			     struct drm_pagemap *dpagemap, size_t size,
> > > +			     struct dma_fence *pre_migrate_fence)
> > >  {
> > >  	init_completion(&devmem_allocation->detached);
> > >  	devmem_allocation->dev = dev;
> > > @@ -812,6 +820,7 @@ void drm_pagemap_devmem_init(struct drm_pagemap_devmem *devmem_allocation,
> > >  	devmem_allocation->ops = ops;
> > >  	devmem_allocation->dpagemap = dpagemap;
> > >  	devmem_allocation->size = size;
> > > +	devmem_allocation->pre_migrate_fence = pre_migrate_fence;
> > >  }
> > >  EXPORT_SYMBOL_GPL(drm_pagemap_devmem_init);
> > >  
> > > diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> > > index bab8e6cbe53d..b806a1fce188 100644
> > > --- a/drivers/gpu/drm/xe/xe_svm.c
> > > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > > @@ -472,11 +472,12 @@ static void xe_svm_copy_us_stats_incr(struct xe_gt *gt,
> > >  
> > >  static int xe_svm_copy(struct page **pages,
> > >  		       struct drm_pagemap_addr *pagemap_addr,
> > > -		       unsigned long npages, const enum xe_svm_copy_dir dir)
> > > +		       unsigned long npages, const enum xe_svm_copy_dir dir,
> > > +		       struct dma_fence *pre_migrate_fence)
> > >  {
> > >  	struct xe_vram_region *vr = NULL;
> > >  	struct xe_gt *gt = NULL;
> > > -	struct xe_device *xe;
> > > +	struct xe_device *xe = NULL;
> > >  	struct dma_fence *fence = NULL;
> > >  	unsigned long i;
> > >  #define XE_VRAM_ADDR_INVALID	~0x0ull
> > > @@ -485,6 +486,16 @@ static int xe_svm_copy(struct page **pages,
> > >  	bool sram = dir == XE_SVM_COPY_TO_SRAM;
> > >  	ktime_t start = xe_gt_stats_ktime_get();
> > >  
> > > +	if (pre_migrate_fence && dma_fence_is_container(pre_migrate_fence)) {
> > > +		/*
> > > +		 * This would typically be a composite fence operation on the destination memory.
> > > +		 * Ensure that the other GPU operation on the destination is complete.
> > > +		 */
> > > +		err = dma_fence_wait(pre_migrate_fence, true);
> > > +		if (err)
> > > +			return err;
> > > +	}
> > > +
> > 
> > I'm not fully convinced this code works. Consider the case where we
> > allocate memory on device A but copy from device B. In that case
> > device A issues the clear while device B issues the copy. The
> > pre_migrate_fence is not going to be a container, and I believe our
> > ordering breaks.
> 
> So the logic to handle that case was moved to the patch that enables
> source migration, as per Himal's review comment. Consider this patch
> to cover only destination migration, where the devmem allocation is
> made on the same GPU that performs the migration.
>

Ah, yes I see that logic in patch 23.
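
To spell out the assumption for the destination-only case, something
like the following (a rough sketch, not the actual code;
fence_from_local_queue() is a hypothetical helper standing in for
however patch 23 distinguishes where the fence came from):

	/* Illustrative sketch only; fence_from_local_queue() is made up. */
	if (pre_migrate_fence &&
	    (dma_fence_is_container(pre_migrate_fence) ||
	     !fence_from_local_queue(pre_migrate_fence))) {
		/* Composite or foreign fence: wait before touching devmem. */
		err = dma_fence_wait(pre_migrate_fence, true);
		if (err)
			return err;
	}
	/* Otherwise it is this GPU's own clear; the copy queues behind it. */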
 
> > 
> > Would it be simpler to pass the pre_migrate_fence in as a dependency
> > to the first copy job issued, then set it to NULL? The drm scheduler
> > is smart enough to squash the input fence into a NOP if the copy and
> > clear come from the same device's queues.
> 
> I intended to do that as a follow-up if needed, but since this patch
> fixes an existing problem I wanted it to be lightweight. Let me take a
> quick look at that, though, and probably resend only this patch.
> 

I'm fine with doing this in a follow-up if that is the preference,
though I do think it would be cleaner.
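
Concretely, something along these lines (a rough sketch only; "job"
stands in for however the first copy job is built in the xe migrate
path, and I'm assuming xe_sched_job's embedded drm_sched_job here):

	/*
	 * Sketch: attach the pre-migrate fence as a scheduler dependency
	 * of the first copy job instead of waiting synchronously.
	 * drm_sched_job_add_dependency() takes over the fence reference
	 * and deduplicates fences from the same fence context, so a
	 * clear issued on the same queue effectively squashes to a NOP.
	 */
	if (devmem_allocation->pre_migrate_fence) {
		err = drm_sched_job_add_dependency(&job->drm,
				devmem_allocation->pre_migrate_fence);
		devmem_allocation->pre_migrate_fence = NULL;
		if (err)
			return err;
	}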

So if you prefer to leave this as is:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> /Thomas
> 
> 

