From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>,
intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: Francois Dugast <francois.dugast@intel.com>
Subject: Re: [PATCH v7 3/5] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
Date: Fri, 17 Apr 2026 13:09:17 +0200
Message-ID: <53f2047397754b7b6ec03e77e7a3114ecc1a0fd5.camel@linux.intel.com>
In-Reply-To: <20260410205929.3914474-4-matthew.brost@intel.com>
On Fri, 2026-04-10 at 13:59 -0700, Matthew Brost wrote:
> Split drm_pagemap_migrate_map_pages into device / system helpers,
> clearly separating these operations. This will help with upcoming
> changes to split IOVA allocation steps.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Reviewed-by: Francois Dugast <francois.dugast@intel.com>
Acked-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
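
For anyone jumping into the series at this patch, the resulting call-site
pattern looks roughly like below. This is a condensed illustration based on
the two helpers added here, not verbatim tree code:

        /*
         * Device-private pages (remote -> local copies): the pages belong to
         * a drm_pagemap, so the owning pagemap's device_map() op produces the
         * DMA address, for local as well as peer pages.
         */
        err = drm_pagemap_migrate_map_device_private_pages(dev, local_dpagemap,
                                                           pagemap_addr, migrate_pfn,
                                                           npages, dir, mdetails);

        /*
         * System / device-coherent pages (sys -> dev, eviction, migrate to
         * ram): ordinary struct pages, streaming-mapped with dma_map_page().
         */
        err = drm_pagemap_migrate_map_system_pages(dev, pagemap_addr, migrate_pfn,
                                                   npages, dir);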
> ---
>  drivers/gpu/drm/drm_pagemap.c | 151 ++++++++++++++++++++++------------
>  1 file changed, 100 insertions(+), 51 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index 63f32cf6e1a7..ee4d9f90bf67 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -216,7 +216,8 @@ static void drm_pagemap_get_devmem_page(struct page *page,
>  }
> 
>  /**
> - * drm_pagemap_migrate_map_pages() - Map migration pages for GPU SVM migration
> + * drm_pagemap_migrate_map_device_private_pages() - Map device private migration
> + * pages for GPU SVM migration
>   * @dev: The device performing the migration.
>   * @local_dpagemap: The drm_pagemap local to the migrating device.
>   * @pagemap_addr: Array to store DMA information corresponding to mapped pages.
> @@ -232,58 +233,50 @@ static void drm_pagemap_get_devmem_page(struct page *page,
>   *
>   * Returns: 0 on success, -EFAULT if an error occurs during mapping.
>   */
> -static int drm_pagemap_migrate_map_pages(struct device *dev,
> -                                         struct drm_pagemap *local_dpagemap,
> -                                         struct drm_pagemap_addr *pagemap_addr,
> -                                         unsigned long *migrate_pfn,
> -                                         unsigned long npages,
> -                                         enum dma_data_direction dir,
> -                                         const struct drm_pagemap_migrate_details *mdetails)
> +static int
> +drm_pagemap_migrate_map_device_private_pages(struct device *dev,
> +                                             struct drm_pagemap *local_dpagemap,
> +                                             struct drm_pagemap_addr *pagemap_addr,
> +                                             unsigned long *migrate_pfn,
> +                                             unsigned long npages,
> +                                             enum dma_data_direction dir,
> +                                             const struct drm_pagemap_migrate_details *mdetails)
>  {
>          unsigned long num_peer_pages = 0, num_local_pages = 0, i;
> 
>          for (i = 0; i < npages;) {
>                  struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> -                dma_addr_t dma_addr;
> +                struct drm_pagemap_zdd *zdd;
> +                struct drm_pagemap *dpagemap;
> +                struct drm_pagemap_addr addr;
>                  struct folio *folio;
>                  unsigned int order = 0;
> 
>                  if (!page)
>                          goto next;
> 
> +                WARN_ON_ONCE(!is_device_private_page(page));
>                  folio = page_folio(page);
>                  order = folio_order(folio);
> 
> -                if (is_device_private_page(page)) {
> -                        struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
> -                        struct drm_pagemap *dpagemap = zdd->dpagemap;
> -                        struct drm_pagemap_addr addr;
> -
> -                        if (dpagemap == local_dpagemap) {
> -                                if (!mdetails->can_migrate_same_pagemap)
> -                                        goto next;
> -
> -                                num_local_pages += NR_PAGES(order);
> -                        } else {
> -                                num_peer_pages += NR_PAGES(order);
> -                        }
> +                zdd = drm_pagemap_page_zone_device_data(page);
> +                dpagemap = zdd->dpagemap;
> 
> -                        addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
> -                        if (dma_mapping_error(dev, addr.addr))
> -                                return -EFAULT;
> +                if (dpagemap == local_dpagemap) {
> +                        if (!mdetails->can_migrate_same_pagemap)
> +                                goto next;
> 
> -                        pagemap_addr[i] = addr;
> +                        num_local_pages += NR_PAGES(order);
>                  } else {
> -                        dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
> -                        if (dma_mapping_error(dev, dma_addr))
> -                                return -EFAULT;
> -
> -                        pagemap_addr[i] =
> -                                drm_pagemap_addr_encode(dma_addr,
> -                                                        DRM_INTERCONNECT_SYSTEM,
> -                                                        order, dir);
> +                        num_peer_pages += NR_PAGES(order);
>                  }
> 
> +                addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
> +                if (dma_mapping_error(dev, addr.addr))
> +                        return -EFAULT;
> +
> +                pagemap_addr[i] = addr;
> +
>  next:
>                  i += NR_PAGES(order);
>          }
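
A nice side effect of the split is that local and peer device-private pages
now share the exact same device_map() path; only the local/peer accounting
and the can_migrate_same_pagemap skip differ. Condensed for illustration
(accounting dropped, not verbatim from the patch):

        for (i = 0; i < npages; i += NR_PAGES(order)) {
                order = 0;
                page = migrate_pfn_to_page(migrate_pfn[i]);
                if (!page)
                        continue;       /* hole: step by a single page */
                order = folio_order(page_folio(page));

                zdd = drm_pagemap_page_zone_device_data(page);
                dpagemap = zdd->dpagemap;
                if (dpagemap == local_dpagemap && !mdetails->can_migrate_same_pagemap)
                        continue;       /* same pagemap: leave the data in place */

                addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
                if (dma_mapping_error(dev, addr.addr))
                        return -EFAULT;
                pagemap_addr[i] = addr;
        }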
> @@ -298,6 +291,60 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
>          return 0;
>  }
> 
> +/**
> + * drm_pagemap_migrate_map_system_pages() - Map system or device coherent
> + * migration pages for GPU SVM migration
> + * @dev: The device performing the migration.
> + * @pagemap_addr: Array to store DMA information corresponding to mapped pages.
> + * @migrate_pfn: Array of page frame numbers of system pages or peer pages to map.
> + * @npages: Number of system or device coherent pages to map.
> + * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> + *
> + * This function maps pages of memory for migration usage in GPU SVM. It
> + * iterates over each page frame number provided in @migrate_pfn, maps the
> + * corresponding page, and stores the DMA address in the provided @dma_addr
> + * array.
> + *
> + * Returns: 0 on success, -EFAULT if an error occurs during mapping.
> + */
> +static int
> +drm_pagemap_migrate_map_system_pages(struct device *dev,
> +                                     struct drm_pagemap_addr *pagemap_addr,
> +                                     unsigned long *migrate_pfn,
> +                                     unsigned long npages,
> +                                     enum dma_data_direction dir)
> +{
> +        unsigned long i;
> +
> +        for (i = 0; i < npages;) {
> +                struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> +                dma_addr_t dma_addr;
> +                struct folio *folio;
> +                unsigned int order = 0;
> +
> +                if (!page)
> +                        goto next;
> +
> +                WARN_ON_ONCE(is_device_private_page(page));
> +                folio = page_folio(page);
> +                order = folio_order(folio);
> +
> +                dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
> +                if (dma_mapping_error(dev, dma_addr))
> +                        return -EFAULT;
> +
> +                pagemap_addr[i] =
> +                        drm_pagemap_addr_encode(dma_addr,
> +                                                DRM_INTERCONNECT_SYSTEM,
> +                                                order, dir);
> +
> +next:
> +                i += NR_PAGES(order);
> +        }
> +
> +        return 0;
> +}
> +
>  /**
>   * drm_pagemap_migrate_unmap_pages() - Unmap pages previously mapped for GPU SVM migration
>   * @dev: The device for which the pages were mapped
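
The system-page side is the old dma_map_page() branch lifted out unchanged.
Worth noting how the loop strides over folios: only the folio head gets a
pagemap_addr[] entry, tagged with the folio order, and the index then
advances by the folio's page count. A small worked example, assuming
NR_PAGES(order) expands to 1 << order and 4 KiB base pages:

        /*
         * migrate_pfn[]: [ 2 MiB THP head, ..., hole, 4 KiB page, ... ]
         *
         * i = 0:   order = 9, dma_map_page() maps the whole 2 MiB folio
         *          (page_size() of the head page), pagemap_addr[0] is
         *          encoded with order 9;            i += NR_PAGES(9) == 512
         * i = 512: no page, order stays 0;          i += 1
         * i = 513: order = 0, one 4 KiB page mapped; i += 1
         */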
> @@ -358,9 +405,13 @@ drm_pagemap_migrate_remote_to_local(struct drm_pagemap_devmem *devmem,
>                                      const struct drm_pagemap_migrate_details *mdetails)
> 
>  {
> -        int err = drm_pagemap_migrate_map_pages(remote_device, remote_dpagemap,
> -                                                pagemap_addr, local_pfns,
> -                                                npages, DMA_FROM_DEVICE, mdetails);
> +        int err = drm_pagemap_migrate_map_device_private_pages(remote_device,
> +                                                               remote_dpagemap,
> +                                                               pagemap_addr,
> +                                                               local_pfns,
> +                                                               npages,
> +                                                               DMA_FROM_DEVICE,
> +                                                               mdetails);
> 
>          if (err)
>                  goto out;
> @@ -379,12 +430,11 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
>                                 struct page *local_pages[],
>                                 struct drm_pagemap_addr pagemap_addr[],
>                                 unsigned long npages,
> -                               const struct drm_pagemap_devmem_ops *ops,
> -                               const struct drm_pagemap_migrate_details *mdetails)
> +                               const struct drm_pagemap_devmem_ops *ops)
>  {
> -        int err = drm_pagemap_migrate_map_pages(devmem->dev, devmem->dpagemap,
> -                                                pagemap_addr, sys_pfns, npages,
> -                                                DMA_TO_DEVICE, mdetails);
> +        int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
> +                                                       pagemap_addr, sys_pfns,
> +                                                       npages, DMA_TO_DEVICE);
> 
>          if (err)
>                  goto out;
> @@ -448,7 +498,7 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
>                                                       &pages[last->start],
>                                                       &pagemap_addr[last->start],
>                                                       cur->start - last->start,
> -                                                     last->ops, mdetails);
> +                                                     last->ops);
> 
>  out:
>          *last = *cur;
> @@ -1010,7 +1060,6 @@ EXPORT_SYMBOL(drm_pagemap_put);
>  int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
>  {
>          const struct drm_pagemap_devmem_ops *ops = devmem_allocation->ops;
> -        struct drm_pagemap_migrate_details mdetails = {};
>          unsigned long npages, mpages = 0;
>          struct page **pages;
>          unsigned long *src, *dst;
> @@ -1049,10 +1098,10 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
>          if (err || !mpages)
>                  goto err_finalize;
> 
> -        err = drm_pagemap_migrate_map_pages(devmem_allocation->dev,
> -                                            devmem_allocation->dpagemap, pagemap_addr,
> -                                            dst, npages, DMA_FROM_DEVICE,
> -                                            &mdetails);
> +        err = drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
> +                                                   pagemap_addr,
> +                                                   dst, npages,
> +                                                   DMA_FROM_DEVICE);
>          if (err)
>                  goto err_finalize;
> 
> @@ -1121,7 +1170,6 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
>                                          MIGRATE_VMA_SELECT_COMPOUND,
>                  .fault_page = page,
>          };
> -        struct drm_pagemap_migrate_details mdetails = {};
>          struct drm_pagemap_zdd *zdd;
>          const struct drm_pagemap_devmem_ops *ops;
>          struct device *dev = NULL;
> @@ -1179,8 +1227,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
>          if (err)
>                  goto err_finalize;
> 
> -        err = drm_pagemap_migrate_map_pages(dev, zdd->dpagemap, pagemap_addr, migrate.dst, npages,
> -                                            DMA_FROM_DEVICE, &mdetails);
> +        err = drm_pagemap_migrate_map_system_pages(dev, pagemap_addr,
> +                                                   migrate.dst, npages,
> +                                                   DMA_FROM_DEVICE);
>          if (err)
>                  goto err_finalize;
> 
Thread overview: 13+ messages
2026-04-10 20:59 [PATCH v7 0/5] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
2026-04-10 20:59 ` [PATCH v7 1/5] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
2026-04-29 20:46 ` Nathan Chancellor
2026-04-10 20:59 ` [PATCH v7 2/5] drm/pagemap: Drop source_peer_migrates flag and assume true Matthew Brost
2026-04-17 11:03 ` Thomas Hellström
2026-04-10 20:59 ` [PATCH v7 3/5] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
2026-04-17 11:09 ` Thomas Hellström [this message]
2026-04-10 20:59 ` [PATCH v7 4/5] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
2026-04-17 11:11 ` Thomas Hellström
2026-04-10 20:59 ` [PATCH v7 5/5] drm/pagemap: Fix drm_pagemap_migrate_unmap_pages kerneldoc Matthew Brost
2026-04-10 21:07 ` ✓ CI.KUnit: success for Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev7) Patchwork
2026-04-10 21:44 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-11 8:43 ` ✓ Xe.CI.FULL: " Patchwork