From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	 leonro@nvidia.com, jgg@ziepe.ca, francois.dugast@intel.com,
	 himal.prasad.ghimiray@intel.com
Subject: Re: [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
Date: Wed, 11 Feb 2026 19:48:59 +0100
Message-ID: <1215d2ec94ecf13944d01bd4de29bf29bd4f8bf7.camel@linux.intel.com>
In-Reply-To: <aYyiHQ0avcRcti8l@lstrano-desk.jf.intel.com>

On Wed, 2026-02-11 at 07:37 -0800, Matthew Brost wrote:
> On Wed, Feb 11, 2026 at 12:34:12PM +0100, Thomas Hellström wrote:
> > On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> > > The dma-map IOVA alloc, link, and sync APIs perform significantly better
> > > than dma-map / dma-unmap, as they avoid costly IOMMU synchronizations.
> > > This difference is especially noticeable when mapping a 2MB region in
> > > 4KB pages.
> > > 
> > > Use the IOVA alloc, link, and sync APIs for DRM pagemap, which create DMA
> > > mappings between the CPU and GPU for copying data.
> > > 
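
(For the archives, the win being described boils down to this: with
dma_map_page() every 4KB page takes its own IOMMU mapping and
synchronization, while the IOVA API batches the sync. Very rough sketch,
not the actual patch - the variable names are made up, the calls are the
ones used in the diff below:)

	/* Old path: one IOMMU mapping + sync per 4KB page. */
	for (i = 0; i < npages; i++)
		addrs[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE, dir);

	/* New path: one IOVA range, cheap per-page links, a single sync. */
	dma_iova_try_alloc(dev, &state, 0, npages * PAGE_SIZE);
	if (dma_use_iova(&state)) {
		for (i = 0; i < npages; i++)
			dma_iova_link(dev, &state, page_to_phys(pages[i]),
				      i * PAGE_SIZE, PAGE_SIZE, dir, 0);
		dma_iova_sync(dev, &state, 0, npages * PAGE_SIZE);
	}
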
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > ---
> > > v4:
> > >  - Pack IOVA and drop dummy page (Jason)
> > > 
> > >  drivers/gpu/drm/drm_pagemap.c | 84 +++++++++++++++++++++++++++++------
> > >  1 file changed, 70 insertions(+), 14 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> > > index 29677b19bb69..52a196bc8459 100644
> > > --- a/drivers/gpu/drm/drm_pagemap.c
> > > +++ b/drivers/gpu/drm/drm_pagemap.c
> > > @@ -280,6 +280,20 @@ drm_pagemap_migrate_map_device_pages(struct device *dev,
> > >  	return 0;
> > >  }
> > >  
> > > +/**
> > > + * struct drm_pagemap_iova_state - DRM pagemap IOVA state
> > > + *
> > 
> > No newline 
> > 
> 
> +1
> 
> > > + * @dma_state: DMA IOVA state.
> > > + * @offset: Current offset in IOVA.
> > > + *
> > > + * This structure acts as an iterator for packing all IOVA addresses within a
> > > + * contiguous range.
> > > + */
> > > +struct drm_pagemap_iova_state {
> > > +	struct dma_iova_state dma_state;
> > > +	unsigned long offset;
> > > +};
> > > +
> > >  /**
> > >   * drm_pagemap_migrate_map_system_pages() - Map system migration pages for GPU SVM migration
> > >   * @dev: The device performing the migration.
> > > @@ -287,6 +301,7 @@ drm_pagemap_migrate_map_device_pages(struct device *dev,
> > >   * @migrate_pfn: Array of page frame numbers of system pages or peer pages to map.
> > >   * @npages: Number of system pages or peer pages to map.
> > >   * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > > + * @state: DMA IOVA state for mapping.
> > >   *
> > >   * This function maps pages of memory for migration usage in GPU SVM. It
> > >   * iterates over each page frame number provided in @migrate_pfn, maps the
> > > @@ -300,9 +315,11 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
> > >  				     struct drm_pagemap_addr *pagemap_addr,
> > >  				     unsigned long *migrate_pfn,
> > >  				     unsigned long npages,
> > > -				     enum dma_data_direction dir)
> > > +				     enum dma_data_direction dir,
> > > +				     struct drm_pagemap_iova_state *state)
> > >  {
> > >  	unsigned long i;
> > > +	bool try_alloc = false;
> > >  
> > >  	for (i = 0; i < npages;) {
> > >  		struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> > > @@ -317,9 +334,31 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
> > >  		folio = page_folio(page);
> > >  		order = folio_order(folio);
> > >  
> > > -		dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
> > > -		if (dma_mapping_error(dev, dma_addr))
> > > -			return -EFAULT;
> > > +		if (!try_alloc) {
> > > +			dma_iova_try_alloc(dev, &state->dma_state,
> > > +					   npages * PAGE_SIZE >=
> > > +					   HPAGE_PMD_SIZE ?
> > > +					   HPAGE_PMD_SIZE : 0,
> > > +					   npages * PAGE_SIZE);
> > > +			try_alloc = true;
> > > +		}
> > 
> > What happens if dma_iova_try_alloc() fails for all i < some value x and
> > then suddenly succeeds for i == x? While the below code looks correct,
> 
> We only try to alloc on the first valid page - 'i' may be any value
> based on the first page found, or we may never alloc if the number of
> pages found == 0 (possible, hence why it is inside the loop). This step
> is done at most once. If the allocation fails, we use the map_page path
> for the remaining loop iterations.
> 
> > I figure we'd allocate a too large IOVA region and possibly get the
> > alignment wrong?
> 
> The first and only IOVA allocation attempts an aligned allocation. What
> can happen is only a subset of the IOVA is used for the copy, but we pack
> in the pages starting at IOVA[0] and end at IOVA[number valid pages - 1].
> 
> Matt

So to be a little nicer on the IOVA allocator we could use the below?

		dma_iova_try_alloc(dev, &state->dma_state,
				   (npages - i) * PAGE_SIZE >= HPAGE_PMD_SIZE ?
				   HPAGE_PMD_SIZE : 0,
				   (npages - i) * PAGE_SIZE);
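
I.e. in context, something like the below (untested sketch on top of v4;
"rem" is just a local name for illustration):

		if (!try_alloc) {
			unsigned long rem = (npages - i) * PAGE_SIZE;

			/* One attempt only; on failure the loop falls back
			 * to dma_map_page() for the remaining pages.
			 */
			dma_iova_try_alloc(dev, &state->dma_state,
					   rem >= HPAGE_PMD_SIZE ?
					   HPAGE_PMD_SIZE : 0, rem);
			try_alloc = true;
		}

That way we never ask the allocator for more than what's actually left
to map.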

Thanks,
Thomas

> 
> > 
> > Otherwise LGTM.
> > 
> > 
> > > +
> > > +		if (dma_use_iova(&state->dma_state)) {
> > > +			int err = dma_iova_link(dev, &state->dma_state,
> > > +						page_to_phys(page),
> > > +						state->offset, page_size(page),
> > > +						dir, 0);
> > > +			if (err)
> > > +				return err;
> > > +
> > > +			dma_addr = state->dma_state.addr + state->offset;
> > > +			state->offset += page_size(page);
> > > +		} else {
> > > +			dma_addr = dma_map_page(dev, page, 0, page_size(page),
> > > +						dir);
> > > +			if (dma_mapping_error(dev, dma_addr))
> > > +				return -EFAULT;
> > > +		}
> > >  
> > >  		pagemap_addr[i] =
> > >  			drm_pagemap_addr_encode(dma_addr,
> > > @@ -330,6 +369,9 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
> > >  		i += NR_PAGES(order);
> > >  	}
> > >  
> > > +	if (dma_use_iova(&state->dma_state))
> > > +		return dma_iova_sync(dev, &state->dma_state, 0, state->offset);
> > > +
> > >  	return 0;
> > >  }
> > >  
> > > @@ -341,6 +383,7 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
> > >   * @pagemap_addr: Array of DMA information corresponding to mapped pages
> > >   * @npages: Number of pages to unmap
> > >   * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > > + * @state: DMA IOVA state for mapping.
> > >   *
> > >   * This function unmaps previously mapped pages of memory for GPU Shared Virtual
> > >   * Memory (SVM). It iterates over each DMA address provided in @dma_addr, checks
> > > @@ -350,10 +393,17 @@ static void drm_pagemap_migrate_unmap_pages(struct device *dev,
> > >  					    struct drm_pagemap_addr *pagemap_addr,
> > >  					    unsigned long *migrate_pfn,
> > >  					    unsigned long npages,
> > > -					    enum dma_data_direction dir)
> > > +					    enum dma_data_direction dir,
> > > +					    struct drm_pagemap_iova_state *state)
> > >  {
> > >  	unsigned long i;
> > >  
> > > +	if (state && dma_use_iova(&state->dma_state)) {
> > > +		dma_iova_unlink(dev, &state->dma_state, 0, state->offset, dir, 0);
> > > +		dma_iova_free(dev, &state->dma_state);
> > > +		return;
> > > +	}
> > > +
> > >  	for (i = 0; i < npages;) {
> > >  		struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> > >  
> > > @@ -406,7 +456,7 @@ drm_pagemap_migrate_remote_to_local(struct drm_pagemap_devmem *devmem,
> > >  			       devmem->pre_migrate_fence);
> > >  out:
> > >  	drm_pagemap_migrate_unmap_pages(remote_device, pagemap_addr, local_pfns,
> > > -					npages, DMA_FROM_DEVICE);
> > > +					npages, DMA_FROM_DEVICE, NULL);
> > >  	return err;
> > >  }
> > >  
> > > @@ -416,11 +466,13 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
> > >  			       struct page *local_pages[],
> > >  			       struct drm_pagemap_addr pagemap_addr[],
> > >  			       unsigned long npages,
> > > -			       const struct drm_pagemap_devmem_ops *ops)
> > > +			       const struct drm_pagemap_devmem_ops *ops,
> > > +			       struct drm_pagemap_iova_state *state)
> > >  {
> > >  	int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
> > >  						       pagemap_addr, sys_pfns,
> > > -						       npages, DMA_TO_DEVICE);
> > > +						       npages, DMA_TO_DEVICE,
> > > +						       state);
> > >  
> > >  	if (err)
> > >  		goto out;
> > > @@ -429,7 +481,7 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
> > >  				  devmem->pre_migrate_fence);
> > >  out:
> > >  	drm_pagemap_migrate_unmap_pages(devmem->dev, pagemap_addr, sys_pfns, npages,
> > > -					DMA_TO_DEVICE);
> > > +					DMA_TO_DEVICE, state);
> > >  	return err;
> > >  }
> > >  
> > > @@ -457,6 +509,7 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
> > >  				     const struct migrate_range_loc *cur,
> > >  				     const struct drm_pagemap_migrate_details *mdetails)
> > >  {
> > > +	struct drm_pagemap_iova_state state = {};
> > >  	int ret = 0;
> > >  
> > >  	if (cur->start == 0)
> > > @@ -484,7 +537,7 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
> > >  						     &pages[last->start],
> > >  						     &pagemap_addr[last->start],
> > >  						     cur->start - last->start,
> > > -						     last->ops);
> > > +						     last->ops, &state);
> > >  
> > >  out:
> > >  	*last = *cur;
> > > @@ -1001,6 +1054,7 @@ EXPORT_SYMBOL(drm_pagemap_put);
> > >  int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
> > >  {
> > >  	const struct drm_pagemap_devmem_ops *ops = devmem_allocation->ops;
> > > +	struct drm_pagemap_iova_state state = {};
> > >  	unsigned long npages, mpages = 0;
> > >  	struct page **pages;
> > >  	unsigned long *src, *dst;
> > > @@ -1042,7 +1096,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
> > >  	err = drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
> > >  						   pagemap_addr,
> > >  						   dst, npages,
> > > -						   DMA_FROM_DEVICE);
> > > +						   DMA_FROM_DEVICE, &state);
> > >  	if (err)
> > >  		goto err_finalize;
> > >  
> > > @@ -1059,7 +1113,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
> > >  	migrate_device_pages(src, dst, npages);
> > >  	migrate_device_finalize(src, dst, npages);
> > >  	drm_pagemap_migrate_unmap_pages(devmem_allocation->dev, pagemap_addr, dst, npages,
> > > -					DMA_FROM_DEVICE);
> > > +					DMA_FROM_DEVICE, &state);
> > >  
> > >  err_free:
> > >  	kvfree(buf);
> > > @@ -1103,6 +1157,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> > >  		MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> > >  		.fault_page	= page,
> > >  	};
> > > +	struct drm_pagemap_iova_state state = {};
> > >  	struct drm_pagemap_zdd *zdd;
> > >  	const struct drm_pagemap_devmem_ops *ops;
> > >  	struct device *dev = NULL;
> > > @@ -1162,7 +1217,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> > >  
> > >  	err = drm_pagemap_migrate_map_system_pages(dev, pagemap_addr,
> > >  						   migrate.dst, npages,
> > > -						   DMA_FROM_DEVICE);
> > > +						   DMA_FROM_DEVICE, &state);
> > >  	if (err)
> > >  		goto err_finalize;
> > >  
> > > @@ -1180,7 +1235,8 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> > >  	migrate_vma_finalize(&migrate);
> > >  	if (dev)
> > >  		drm_pagemap_migrate_unmap_pages(dev, pagemap_addr, migrate.dst,
> > > -						npages, DMA_FROM_DEVICE);
> > > +						npages, DMA_FROM_DEVICE,
> > > +						&state);
> > >  err_free:
> > >  	kvfree(buf);
> > >  err_out:
