From: Leon Romanovsky <leonro@nvidia.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>, <intel-xe@lists.freedesktop.org>,
<dri-devel@lists.freedesktop.org>, <francois.dugast@intel.com>,
<thomas.hellstrom@linux.intel.com>,
<himal.prasad.ghimiray@intel.com>
Subject: Re: [RFC PATCH v3 04/11] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
Date: Wed, 28 Jan 2026 21:45:40 +0200 [thread overview]
Message-ID: <20260128194540.GA10992@unreal> (raw)
In-Reply-To: <aXpjk5sAgOzE3OcR@lstrano-desk.jf.intel.com>
On Wed, Jan 28, 2026 at 11:29:23AM -0800, Matthew Brost wrote:
> On Wed, Jan 28, 2026 at 01:55:31PM -0400, Jason Gunthorpe wrote:
> > On Wed, Jan 28, 2026 at 09:46:44AM -0800, Matthew Brost wrote:
> >
> > > It is intended to fill holes. The input pages come from the
> > > migrate_vma_* functions, which can return a sparsely populated array of
> > > pages for a region (e.g., it scans a 2M range but only finds several of
> > > the 512 pages eligible for migration). As a result, if (!page) is true
> > > for many entries.
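To make the sparseness concrete for anyone skimming: the src[] array
from migrate_vma_setup() covers the whole scanned range and is consumed
roughly like this (sketch only, untested; src/npages taken from the
migrate context):

        for (i = 0; i < npages; i++) {
                struct page *page = migrate_pfn_to_page(src[i]);

                if (!page)
                        continue;       /* hole: not eligible to migrate */
                /* map and copy this page */
        }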
> >
> > This is migration?? So something is DMA'ing from A -> B - why put
> > holes in the first place? Can you tightly pack the pages in the IOVA?
> >
>
> This could probably be made to work. I think it would be an initial
> pass to figure out the IOVA size, then tightly pack.
>
> Let me look at this. It is probably better anyway, since installing
> dummy pages has a non-zero cost; I assume dma_iova_link() is a radix
> tree walk.
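Something like this, perhaps (a completely untested sketch against the
dma_iova_*() API as I remember it; error unwind omitted, dev/state/dir
assumed from context):

        size_t mapped = 0, offset = 0;
        int i, err = 0;

        /* Pass 1: count the pages that are actually migrating. */
        for (i = 0; i < npages; i++)
                if (migrate_pfn_to_page(src[i]))
                        mapped++;

        if (!dma_iova_try_alloc(dev, &state, 0, mapped << PAGE_SHIFT))
                return -ENOMEM;

        /* Pass 2: link the pages tightly packed, no holes in the IOVA. */
        for (i = 0; i < npages; i++) {
                struct page *page = migrate_pfn_to_page(src[i]);

                if (!page)
                        continue;
                err = dma_iova_link(dev, &state, page_to_phys(page),
                                    offset, PAGE_SIZE, dir, 0);
                if (err)
                        break;
                offset += PAGE_SIZE;
        }
        if (!err)
                err = dma_iova_sync(dev, &state, 0, offset);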
>
> > If there is no iommu then the addresses are scattered all over anyhow
> > so it can't be relying on some dma_addr_t relationship?
>
> Scattered dma-addresses are already handled in the copy code, and
> likewise holes, so that is a non-issue.
>
> >
> > You don't have to fully populate the allocated IOVA; you can link
> > from A-B and then unlink from A-B, even if B is less than the total
> > size requested.
> >
> > The hmm users have the holes because hmm is dynamically
> > adding/removing pages as it runs and it can't do anything to pack the
> > mapping.
> >
> > > > IOVA space? If so, what necessitates those holes? You can have
> > > > less mapped than the IOVA, and the dma_iova_*() API can handle it.
> > >
> > > I was actually going to ask you about this, so I’m glad you brought it
> > > up here. Again, this is a hack to avoid holes — the holes are never
> > > touched by our copy function, but rather skipped, so we just jam in a
> > > dummy address so the entire IOVA range has valid IOMMU pages.
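(If I read the patch right, the hack is essentially the lines below,
with dummy_phys standing in for whatever dummy page the driver
allocates; sketch only:)

        /*
         * dummy_phys backs the hole so the whole IOVA range has valid
         * IOMMU pages; the copy function never touches it.
         */
        phys = page ? page_to_phys(page) : dummy_phys;
        err = dma_iova_link(dev, &state, phys, offset, PAGE_SIZE, dir, 0);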
> >
> > I would say what you are doing is trying to optimize unmap by
>
> Yes, and to keep the code simple.
>
> > unmapping everything in one shot instead of just the mapped areas, and
> > the WARN_ON is telling you that it isn't allowed to unmap across a
> > hole.
> >
> > > at the moment I’m not sure whether this warning affects actual
> > > functionality or if we could just delete it.
> >
> > It means the IOMMU page table stopped unmapping when it hit a hole,
> > and there are a bunch of leftover mappings in the page table that
> > shouldn't be there. So yes, it is serious and cannot be deleted.
> >
>
> Cool, this explains the warning.
>
> > This is a possible option: teach things to detect the holes and
> > ignore them.
>
> That is another option, and IMO probably the best one, as it makes
> potential usage with holes the simplest at the driver level. Let me
> look at this too.
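At the driver level, skipping holes on teardown would look roughly like
this (sketch; the mapped[] bookkeeping is made up):

        /*
         * Unlink each contiguous mapped run separately so the IOMMU
         * page-table walk never crosses a hole.
         */
        for (i = 0; i < npages; ) {
                size_t start;

                if (!mapped[i]) {
                        i++;
                        continue;
                }
                start = i;
                while (i < npages && mapped[i])
                        i++;
                dma_iova_unlink(dev, &state, start << PAGE_SHIFT,
                                (i - start) << PAGE_SHIFT, dir, 0);
        }
        dma_iova_free(dev, &state);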
It would be ideal if we could code a more general solution. In HMM we
release pages one by one, and it would be preferable to have a
single-shot unmap routine instead, similar to NVMe, which releases all
of its IOVA space with one call to dma_iova_destroy().
HMM chain:

        ib_umem_odp_unmap_dma_pages()
          -> for (...)
            -> hmm_dma_unmap_pfn()
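i.e. every page is unlinked individually today, whereas the NVMe-style
teardown is a single call (sketch; mapped_len being whatever was
actually linked):

        dma_iova_destroy(dev, &state, mapped_len, dir, 0);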
After giving more thought to my earlier suggestion to use
hmm_pfn_to_phys(), I began to wonder why you did not use the
hmm_dma_*() API instead?
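For reference, the hmm_dma_*() flow is roughly the following (from
memory of include/linux/hmm-dma.h, so treat the exact signatures as
approximate):

        struct hmm_dma_map map;

        err = hmm_dma_map_alloc(dev, &map, npages, PAGE_SIZE);
        ...
        for (i = 0; i < npages; i++)
                dma_addr = hmm_dma_map_pfn(dev, &map, i, &p2pdma_state);
        ...
        for (i = 0; i < npages; i++)
                hmm_dma_unmap_pfn(dev, &map, i);
        hmm_dma_map_free(dev, &map);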
>
> Do you think we need a flag somewhere for 'ignore holes', or can I
> just blindly skip them?
It would be better to have something like a
dma_iova_with_holes_destroy() call, to make sure we don't hurt the
performance of existing dma_iova_destroy() users.
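Usage-wise I would expect the split to look like this (the second name
is only a proposal from this thread; nothing is implemented):

        /* Existing fast path, contiguous mappings only: */
        dma_iova_destroy(dev, &state, mapped_len, dir, 0);

        /* Hypothetical hole-tolerant variant for the migrate case: */
        dma_iova_with_holes_destroy(dev, &state, size, dir, 0);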
Thanks
>
> Matt
>
> >
> > Jason