From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>,
intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: leonro@nvidia.com, jgg@ziepe.ca, francois.dugast@intel.com,
himal.prasad.ghimiray@intel.com
Subject: Re: [PATCH v4 3/4] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
Date: Mon, 09 Feb 2026 16:49:03 +0100
Message-ID: <93d2a45fd5a8af50b23bd15cd45c21300f804768.camel@linux.intel.com>
In-Reply-To: <20260205041921.3781292-4-matthew.brost@intel.com>
On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> Split drm_pagemap_migrate_map_pages into device / system helpers,
> clearly separating these operations. This will help with upcoming
> changes to split the IOVA allocation steps.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/drm_pagemap.c | 146 ++++++++++++++++++++++------------
>  1 file changed, 96 insertions(+), 50 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index fbd69f383457..29677b19bb69 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -205,7 +205,7 @@ static void drm_pagemap_get_devmem_page(struct page *page,
>  }
>  
>  /**
> - * drm_pagemap_migrate_map_pages() - Map migration pages for GPU SVM migration
> + * drm_pagemap_migrate_map_device_pages() - Map device migration pages for GPU SVM migration
>   * @dev: The device performing the migration.
>   * @local_dpagemap: The drm_pagemap local to the migrating device.
>   * @pagemap_addr: Array to store DMA information corresponding to mapped pages.
> @@ -221,19 +221,22 @@ static void drm_pagemap_get_devmem_page(struct page *page,
>   *
>   * Returns: 0 on success, -EFAULT if an error occurs during mapping.
>   */
> -static int drm_pagemap_migrate_map_pages(struct device *dev,
> -					 struct drm_pagemap *local_dpagemap,
> -					 struct drm_pagemap_addr *pagemap_addr,
> -					 unsigned long *migrate_pfn,
> -					 unsigned long npages,
> -					 enum dma_data_direction dir,
> -					 const struct drm_pagemap_migrate_details *mdetails)
> +static int
> +drm_pagemap_migrate_map_device_pages(struct device *dev,
> +				     struct drm_pagemap *local_dpagemap,
> +				     struct drm_pagemap_addr *pagemap_addr,
> +				     unsigned long *migrate_pfn,
> +				     unsigned long npages,
> +				     enum dma_data_direction dir,
> +				     const struct drm_pagemap_migrate_details *mdetails)
We might want to call these device_private pages. Device-coherent pages
are treated like system pages here, but I figure those are known to the
dma subsystem and can be handled by the map_system_pages helper.
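Roughly this distinction, as an untested sketch (the helper name is
made up):

static bool drm_pagemap_needs_device_map(const struct page *page)
{
	/*
	 * Device-coherent (MEMORY_DEVICE_COHERENT) pages have a normal,
	 * DMA-mappable struct page, so dma_map_page() handles them like
	 * system RAM; only device-private pages need the
	 * dpagemap->ops->device_map() path.
	 */
	return is_device_private_page(page);
}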
>  {
>  	unsigned long num_peer_pages = 0, num_local_pages = 0, i;
>  
>  	for (i = 0; i < npages;) {
>  		struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> -		dma_addr_t dma_addr;
> +		struct drm_pagemap_zdd *zdd;
> +		struct drm_pagemap *dpagemap;
> +		struct drm_pagemap_addr addr;
>  		struct folio *folio;
>  		unsigned int order = 0;
>  
> @@ -243,36 +246,26 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
>  		folio = page_folio(page);
>  		order = folio_order(folio);
>  
> -		if (is_device_private_page(page)) {
> -			struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
> -			struct drm_pagemap *dpagemap = zdd->dpagemap;
> -			struct drm_pagemap_addr addr;
> -
> -			if (dpagemap == local_dpagemap) {
> -				if (!mdetails->can_migrate_same_pagemap)
> -					goto next;
> +		WARN_ON_ONCE(!is_device_private_page(page));
>  
> -				num_local_pages += NR_PAGES(order);
> -			} else {
> -				num_peer_pages += NR_PAGES(order);
> -			}
> +		zdd = drm_pagemap_page_zone_device_data(page);
> +		dpagemap = zdd->dpagemap;
>  
> -			addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
> -			if (dma_mapping_error(dev, addr.addr))
> -				return -EFAULT;
> +		if (dpagemap == local_dpagemap) {
> +			if (!mdetails->can_migrate_same_pagemap)
> +				goto next;
>  
> -			pagemap_addr[i] = addr;
> +			num_local_pages += NR_PAGES(order);
>  		} else {
> -			dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
> -			if (dma_mapping_error(dev, dma_addr))
> -				return -EFAULT;
> -
> -			pagemap_addr[i] =
> -				drm_pagemap_addr_encode(dma_addr,
> -							DRM_INTERCONNECT_SYSTEM,
> -							order, dir);
> +			num_peer_pages += NR_PAGES(order);
>  		}
>  
> +		addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
> +		if (dma_mapping_error(dev, addr.addr))
> +			return -EFAULT;
> +
> +		pagemap_addr[i] = addr;
> +
>  next:
>  		i += NR_PAGES(order);
>  	}
> @@ -287,6 +280,59 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
>  	return 0;
>  }
>  
> +/**
> + * drm_pagemap_migrate_map_system_pages() - Map system migration pages for GPU SVM migration
> + * @dev: The device performing the migration.
> + * @pagemap_addr: Array to store DMA information corresponding to mapped pages.
> + * @migrate_pfn: Array of page frame numbers of system pages or peer pages to map.
system pages or device coherent pages? "Peer" pages would typically be
device-private pages with the same owner.
> + * @npages: Number of system pages or peer pages to map.
Same here.
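Perhaps something like this (wording suggestion only):

 * @migrate_pfn: Array of page frame numbers of system pages or
 *               device-coherent pages to map.
 * @npages: Number of system pages or device-coherent pages to map.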
> + * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> + *
> + * This function maps pages of memory for migration usage in GPU SVM. It
> + * iterates over each page frame number provided in @migrate_pfn, maps the
> + * corresponding page, and stores the DMA address in the provided @dma_addr
> + * array.
> + *
> + * Returns: 0 on success, -EFAULT if an error occurs during mapping.
> + */
> +static int
> +drm_pagemap_migrate_map_system_pages(struct device *dev,
> +				     struct drm_pagemap_addr *pagemap_addr,
> +				     unsigned long *migrate_pfn,
> +				     unsigned long npages,
> +				     enum dma_data_direction dir)
> +{
> +	unsigned long i;
> +
> +	for (i = 0; i < npages;) {
> +		struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> +		dma_addr_t dma_addr;
> +		struct folio *folio;
> +		unsigned int order = 0;
> +
> +		if (!page)
> +			goto next;
> +
> +		WARN_ON_ONCE(is_device_private_page(page));
> +		folio = page_folio(page);
> +		order = folio_order(folio);
> +
> +		dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
> +		if (dma_mapping_error(dev, dma_addr))
> +			return -EFAULT;
> +
> +		pagemap_addr[i] =
> +			drm_pagemap_addr_encode(dma_addr,
> +						DRM_INTERCONNECT_SYSTEM,
> +						order, dir);
> +
> +next:
> +		i += NR_PAGES(order);
> +	}
> +
> +	return 0;
> +}
> +
>  /**
>   * drm_pagemap_migrate_unmap_pages() - Unmap pages previously mapped for GPU SVM migration
>   * @dev: The device for which the pages were mapped
> @@ -347,9 +393,11 @@ drm_pagemap_migrate_remote_to_local(struct drm_pagemap_devmem *devmem,
>  				    const struct drm_pagemap_migrate_details *mdetails)
>  
>  {
> -	int err = drm_pagemap_migrate_map_pages(remote_device, remote_dpagemap,
> -						pagemap_addr, local_pfns,
> -						npages, DMA_FROM_DEVICE, mdetails);
> +	int err = drm_pagemap_migrate_map_device_pages(remote_device,
> +						       remote_dpagemap,
> +						       pagemap_addr, local_pfns,
> +						       npages, DMA_FROM_DEVICE,
> +						       mdetails);
>  
>  	if (err)
>  		goto out;
> @@ -368,12 +416,11 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
>  			       struct page *local_pages[],
>  			       struct drm_pagemap_addr pagemap_addr[],
>  			       unsigned long npages,
> -			       const struct drm_pagemap_devmem_ops *ops,
> -			       const struct drm_pagemap_migrate_details *mdetails)
> +			       const struct drm_pagemap_devmem_ops *ops)
>  {
> -	int err = drm_pagemap_migrate_map_pages(devmem->dev, devmem->dpagemap,
> -						pagemap_addr, sys_pfns, npages,
> -						DMA_TO_DEVICE, mdetails);
> +	int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
> +						       pagemap_addr, sys_pfns,
> +						       npages, DMA_TO_DEVICE);
Unfortunately it's a bit more complicated than this. If the destination
GPU migrates, the range to migrate can be a mix of system pages,
device-coherent pages and device-private pages, and previously
drm_pagemap_migrate_map_pages() took care of that and did the correct
thing on a per-page basis.
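That is, per page roughly the following, as an untested sketch (the
helper name drm_pagemap_migrate_map_one() is made up; the calls follow
the code quoted above):

static int
drm_pagemap_migrate_map_one(struct device *dev,
			    struct drm_pagemap_addr *pagemap_addr,
			    struct page *page, unsigned int order,
			    enum dma_data_direction dir)
{
	if (is_device_private_page(page)) {
		struct drm_pagemap_zdd *zdd =
			drm_pagemap_page_zone_device_data(page);
		struct drm_pagemap *dpagemap = zdd->dpagemap;
		struct drm_pagemap_addr addr;

		/* Device-private pages map through the owning pagemap. */
		addr = dpagemap->ops->device_map(dpagemap, dev, page,
						 order, dir);
		if (dma_mapping_error(dev, addr.addr))
			return -EFAULT;

		*pagemap_addr = addr;
	} else {
		/* System and device-coherent pages map via the DMA API. */
		dma_addr_t dma_addr = dma_map_page(dev, page, 0,
						   page_size(page), dir);

		if (dma_mapping_error(dev, dma_addr))
			return -EFAULT;

		*pagemap_addr = drm_pagemap_addr_encode(dma_addr,
							DRM_INTERCONNECT_SYSTEM,
							order, dir);
	}

	return 0;
}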
You can exercise this by setting mdetails::source_peer_migrates to
false on xe. That typically "works" but might generate some errors in
the atomic multi-device tests AFAICT, because reading from the BAR does
not flush the L2 caches on BMG. But it should be sufficient to exercise
this path.
/Thomas