From: Francois Dugast <francois.dugast@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
<dri-devel@lists.freedesktop.org>, <leonro@nvidia.com>,
<jgg@ziepe.ca>, <thomas.hellstrom@linux.intel.com>,
<himal.prasad.ghimiray@intel.com>
Subject: Re: [PATCH v5 3/5] drm/pagemap: Drop source_peer_migrates flag and assume true
Date: Thu, 2 Apr 2026 12:33:34 +0200
Message-ID: <ac5F_oLmMeFwrWc0@fdugast-desk>
In-Reply-To: <aZd4OfBJyMepd4Of@lstrano-desk.jf.intel.com>

On Thu, Feb 19, 2026 at 12:53:13PM -0800, Matthew Brost wrote:
> On Thu, Feb 19, 2026 at 12:10:55PM -0800, Matthew Brost wrote:
> > All current users of DRM pagemap set source_peer_migrates to true during
> > migration, and it is unclear whether any user would ever want to disable
> > this for performance reasons or for features such as compression. It is
> > also questionable whether this flag could be made to work with
> > high-speed fabric mapping APIs.
> >
> > Drop the flag and make DRM pagemap unconditionally assume that
> > source_peer_migrates is true.
> >
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/drm_pagemap.c | 10 ++++------
> > drivers/gpu/drm/xe/xe_svm.c | 1 -
> > include/drm/drm_pagemap.h | 8 ++------
> > 3 files changed, 6 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> > index 01a06d1fd1a0..32535ab01c0f 100644
> > --- a/drivers/gpu/drm/drm_pagemap.c
> > +++ b/drivers/gpu/drm/drm_pagemap.c
> > @@ -602,12 +602,10 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> > own_pages++;
> > continue;
> > }
> > - if (mdetails->source_peer_migrates) {
> > - cur.dpagemap = src_zdd->dpagemap;
> > - cur.ops = src_zdd->devmem_allocation->ops;
> > - cur.device = cur.dpagemap->drm->dev;
> > - pages[i] = src_page;
> > - }
> > + cur.dpagemap = src_zdd->dpagemap;
> > + cur.ops = src_zdd->devmem_allocation->ops;
> > + cur.device = cur.dpagemap->drm->dev;
> > + pages[i] = src_page;
> > }
> > if (!pages[i]) {
> > cur.dpagemap = NULL;
> > diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> > index c96ed760c077..e86e69087c7e 100644
> > --- a/drivers/gpu/drm/xe/xe_svm.c
> > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > @@ -1027,7 +1027,6 @@ static int xe_drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> > struct xe_pagemap *xpagemap = container_of(dpagemap, typeof(*xpagemap), dpagemap);
> > struct drm_pagemap_migrate_details mdetails = {
> > .timeslice_ms = timeslice_ms,
> > - .source_peer_migrates = 1,
> > };
> > struct xe_vram_region *vr = xe_pagemap_to_vr(xpagemap);
> > struct dma_fence *pre_migrate_fence = NULL;
> > diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> > index 72f6828f2604..5c33982141c2 100644
> > --- a/include/drm/drm_pagemap.h
> > +++ b/include/drm/drm_pagemap.h
> > @@ -329,12 +329,8 @@ struct drm_pagemap_devmem {
> > * struct drm_pagemap_migrate_details - Details to govern migration.
> > * @timeslice_ms: The time requested for the migrated pagemap pages to
> > * be present in @mm before being allowed to be migrated back.
> > - * @can_migrate_same_pagemap: Whether the copy function as indicated by
> > - * the @source_peer_migrates flag, can migrate device pages within a
> > - * single drm_pagemap.
> > - * @source_peer_migrates: Whether on p2p migration, The source drm_pagemap
> > - * should use the copy_to_ram() callback rather than the destination
> > - * drm_pagemap should use the copy_to_devmem() callback.
> > + * @can_migrate_same_pagemap: Whether the copy function can migrate
> > + * device pages within a single drm_pagemap.
>
> I forgot to delete this variable. In an effort to save CI cycles, I will
> fix this in the next rev or when merging.
With source_peer_migrates removed:
Reviewed-by: Francois Dugast <francois.dugast@intel.com>
>
> Matt
>
> > */
> > struct drm_pagemap_migrate_details {
> > unsigned long timeslice_ms;
> > --
> > 2.34.1
> >
Thread overview: 14+ messages
2026-02-19 20:10 [PATCH v5 0/5] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
2026-02-19 20:10 ` [PATCH v5 1/5] drm/pagemap: Add helper to access zone_device_data Matthew Brost
2026-02-19 20:10 ` [PATCH v5 2/5] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
2026-02-19 20:10 ` [PATCH v5 3/5] drm/pagemap: Drop source_peer_migrates flag and assume true Matthew Brost
2026-02-19 20:53 ` Matthew Brost
2026-04-02 10:33 ` Francois Dugast [this message]
2026-02-19 20:10 ` [PATCH v5 4/5] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
2026-04-02 14:12 ` Francois Dugast
2026-02-19 20:10 ` [PATCH v5 5/5] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
2026-04-02 15:59 ` Francois Dugast
2026-04-08 16:46 ` Matthew Brost
2026-02-19 20:18 ` ✓ CI.KUnit: success for Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev5) Patchwork
2026-02-20 8:47 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-20 14:26 ` ✗ Xe.CI.FULL: failure " Patchwork