From: Matthew Brost <matthew.brost@intel.com>
To: Leon Romanovsky <leonro@nvidia.com>
Cc: <intel-xe@lists.freedesktop.org>,
<dri-devel@lists.freedesktop.org>, <francois.dugast@intel.com>,
<thomas.hellstrom@linux.intel.com>,
<himal.prasad.ghimiray@intel.com>, <jgg@ziepe.ca>
Subject: Re: [RFC PATCH v3 04/11] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
Date: Wed, 28 Jan 2026 09:46:44 -0800 [thread overview]
Message-ID: <aXpLhN08jVbltQC0@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20260128142853.GH40916@unreal>
On Wed, Jan 28, 2026 at 04:28:53PM +0200, Leon Romanovsky wrote:
> On Tue, Jan 27, 2026 at 04:48:34PM -0800, Matthew Brost wrote:
> > The dma-map IOVA alloc, link, and sync APIs perform significantly better
> > than dma-map / dma-unmap, as they avoid costly IOMMU synchronizations.
> > This difference is especially noticeable when mapping a 2MB region in
> > 4KB pages.
> >
> > Use the IOVA alloc, link, and sync APIs for DRM pagemap, which create DMA
> > mappings between the CPU and GPU for copying data.
> >
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/drm_pagemap.c | 121 +++++++++++++++++++++++++++-------
> > 1 file changed, 96 insertions(+), 25 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> > index 4b79d4019453..b928c89f4bd1 100644
> > --- a/drivers/gpu/drm/drm_pagemap.c
> > +++ b/drivers/gpu/drm/drm_pagemap.c
> > @@ -287,6 +287,7 @@ drm_pagemap_migrate_map_device_pages(struct device *dev,
> > * @migrate_pfn: Array of page frame numbers of system pages or peer pages to map.
> > * @npages: Number of system pages or peer pages to map.
> > * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > + * @state: DMA IOVA state for mapping.
> > *
> > * This function maps pages of memory for migration usage in GPU SVM. It
> > * iterates over each page frame number provided in @migrate_pfn, maps the
> > @@ -300,26 +301,79 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
> > struct drm_pagemap_addr *pagemap_addr,
> > unsigned long *migrate_pfn,
> > unsigned long npages,
> > - enum dma_data_direction dir)
> > + enum dma_data_direction dir,
> > + struct dma_iova_state *state)
> > {
> > - unsigned long i;
> > + struct page *dummy_page = NULL;
> > + unsigned long i, psize;
> > + bool try_alloc = false;
> >
> > for (i = 0; i < npages;) {
> > struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> > - dma_addr_t dma_addr;
> > - struct folio *folio;
> > + dma_addr_t dma_addr = -1;
> > unsigned int order = 0;
> >
> > - if (!page)
> > - goto next;
> > + if (!page) {
> > + if (!dummy_page)
> > + goto next;
> >
> > - WARN_ON_ONCE(is_device_private_page(page));
> > - folio = page_folio(page);
> > - order = folio_order(folio);
> > + page = dummy_page;
>
> Why is this dummy_page required? Is it intended to introduce holes in the
> IOVA space? If so, what necessitates those holes? You can have less mapped
> than IOVA and dma_iova_*() API can handle it.

It is intended to fill holes, not introduce them. The input pages come
from the migrate_vma_* functions, which can return a sparsely populated
array of pages for a region (e.g., they scan a 2M range but find only a
handful of the 512 pages eligible for migration). As a result, if
(!page) is true for many entries.
I was actually going to ask you about this, so I'm glad you brought it
up here. Again, this is a hack to avoid holes: the holes are never
touched by our copy function, only skipped, so we jam a dummy address
into each one so that the entire IOVA range is backed by valid IOMMU
pages.

It is meant to avoid the warning in [1]; without the dummy page,
unmapped != size there, since only some of the IOMMU pages in the IOVA
range being destroyed are actually populated.

I added this early on when everything was breaking and then moved on, so
at the moment I'm not sure whether the warning affects actual
functionality or whether we could simply delete it. Let me get back to
you on whether it just causes dmesg spam or has functional implications.
If it's the former, I'd much prefer to remove the warning rather than
carry this hack.
Perhaps you can also explain why this warning exists?
Matt
[1] https://elixir.bootlin.com/linux/v6.19-rc5/source/drivers/iommu/dma-iommu.c#L2045
>
> Thanks
Thread overview: 30+ messages
2026-01-28 0:48 [RFC PATCH v3 00/11] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
2026-01-28 0:48 ` [RFC PATCH v3 01/11] drm/pagemap: Add helper to access zone_device_data Matthew Brost
2026-01-28 13:53 ` Leon Romanovsky
2026-01-28 0:48 ` [RFC PATCH v3 02/11] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
2026-01-28 14:04 ` Leon Romanovsky
2026-01-28 0:48 ` [RFC PATCH v3 03/11] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
2026-01-28 0:48 ` [RFC PATCH v3 04/11] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
2026-01-28 14:28 ` Leon Romanovsky
2026-01-28 17:46 ` Matthew Brost [this message]
[not found] ` <20260128175531.GR1641016@ziepe.ca>
2026-01-28 19:29 ` Matthew Brost
2026-01-28 19:45 ` Leon Romanovsky
2026-01-28 21:04 ` Matthew Brost
2026-01-29 10:14 ` Leon Romanovsky
2026-01-29 18:22 ` Matthew Brost
2026-01-28 0:48 ` [RFC PATCH v3 05/11] drm/pagemap: Reduce number of IOVA link calls Matthew Brost
2026-01-28 0:48 ` [RFC PATCH v3 06/11] drm/pagemap: Add IOVA interface to DRM pagemap Matthew Brost
[not found] ` <20260128151458.GJ1641016@ziepe.ca>
2026-01-28 18:42 ` Matthew Brost
2026-01-28 19:41 ` Matthew Brost
[not found] ` <20260128193509.GU1641016@ziepe.ca>
2026-01-28 20:24 ` Matthew Brost
2026-01-29 18:57 ` Jason Gunthorpe
2026-01-29 19:28 ` Matthew Brost
2026-01-29 19:32 ` Jason Gunthorpe
2026-01-28 0:48 ` [RFC PATCH v3 07/11] drm/xe: Stub out DRM pagemap IOVA alloc implementation Matthew Brost
2026-01-28 0:48 ` [RFC PATCH v3 08/11] drm/pagemap: Use device-to-device IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
2026-01-28 0:48 ` [RFC PATCH v3 09/11] drm/xe: Drop BO dma-resv lock during SVM migrate-to-device Matthew Brost
2026-01-28 0:48 ` [RFC PATCH v3 10/11] drm/xe: Implement DRM pagemap IOVA vfuncs Matthew Brost
2026-01-28 0:48 ` [RFC PATCH v3 11/11] drm/gpusvm: Use device-to-device IOVA alloc, link, and sync API in GPU SVM Matthew Brost
2026-01-28 0:59 ` ✗ CI.checkpatch: warning for Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev3) Patchwork
2026-01-28 1:01 ` ✓ CI.KUnit: success " Patchwork
2026-01-28 1:42 ` ✓ Xe.CI.BAT: " Patchwork