From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: "Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>,
intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org, apopple@nvidia.com,
airlied@gmail.com, "Simona Vetter" <simona.vetter@ffwll.ch>,
felix.kuehling@amd.com, "Matthew Brost" <matthew.brost@intel.com>,
"Christian König" <christian.koenig@amd.com>,
dakr@kernel.org, "Mrozek, Michal" <michal.mrozek@intel.com>,
"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>
Subject: Re: [PATCH v4 19/22] drm/gpusvm: Introduce a function to scan the current migration state
Date: Fri, 12 Dec 2025 12:35:00 +0100
Message-ID: <b43179723c15c9a20594f4ff52c1d936a5fa04c3.camel@linux.intel.com>
In-Reply-To: <b70bec9c-da7b-4bff-9a41-b7548ea92518@intel.com>
Hi, Himal
On Fri, 2025-12-12 at 16:51 +0530, Ghimiray, Himal Prasad wrote:
>
>
> On 11-12-2025 22:29, Thomas Hellström wrote:
> > With multi-device we are much more likely to have multiple
> > drm-gpusvm ranges pointing to the same struct mm range.
> >
> > To avoid calling into drm_pagemap_populate_mm(), which is always
> > very costly, introduce a much less costly drm_gpusvm function,
> > drm_gpusvm_scan_mm() to scan the current migration state.
> > The device fault-handler and prefetcher can use this function to
> > determine whether migration is really necessary.
> >
> > There are a couple of performance improvements that can be done
> > for this function if it turns out to be too costly. Those are
> > documented in the code.
> >
> > v3:
> > - New patch.
> >
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > ---
> >   drivers/gpu/drm/drm_gpusvm.c | 121 +++++++++++++++++++++++++++++++++++
> >   include/drm/drm_gpusvm.h     |  29 +++++++++
> >   2 files changed, 150 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> > index 4c7474a331bc..aa9a0b60e727 100644
> > --- a/drivers/gpu/drm/drm_gpusvm.c
> > +++ b/drivers/gpu/drm/drm_gpusvm.c
> > @@ -743,6 +743,127 @@ static bool drm_gpusvm_check_pages(struct drm_gpusvm *gpusvm,
> >          return err ? false : true;
> >  }
> >  
> > +/**
> > + * drm_gpusvm_scan_mm() - Check the migration state of a drm_gpusvm_range
> > + * @range: Pointer to the struct drm_gpusvm_range to check.
> > + * @dev_private_owner: The struct dev_private_owner to use to determine
> > + * compatible device-private pages.
> > + * @pagemap: The struct dev_pagemap pointer to use for pagemap-specific
> > + * checks.
> > + *
> > + * Scan the CPU address space corresponding to @range and return the
> > + * current migration state. Note that the result may be invalid as
> > + * soon as the function returns. It's an advisory check.
> > + *
> > + * TODO: Bail early and call hmm_range_fault() for subranges.
> > + *
> > + * Return: See &enum drm_gpusvm_scan_result.
> > + */
> > +enum drm_gpusvm_scan_result drm_gpusvm_scan_mm(struct drm_gpusvm_range *range,
> > +                                               void *dev_private_owner,
> > +                                               const struct dev_pagemap *pagemap)
> > +{
> > +        struct mmu_interval_notifier *notifier = &range->notifier->notifier;
> > +        unsigned long start = drm_gpusvm_range_start(range);
> > +        unsigned long end = drm_gpusvm_range_end(range);
> > +        struct hmm_range hmm_range = {
> > +                .default_flags = 0,
> > +                .notifier = notifier,
> > +                .start = start,
> > +                .end = end,
> > +                .dev_private_owner = dev_private_owner,
> > +        };
> > +        unsigned long timeout =
> > +                jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
> > +        enum drm_gpusvm_scan_result state = DRM_GPUSVM_SCAN_UNPOPULATED, new_state;
> > +        unsigned long *pfns;
> > +        unsigned long npages = npages_in_range(start, end);
> > +        const struct dev_pagemap *other = NULL;
> > +        int err, i;
> > +
> > +        pfns = kvmalloc_array(npages, sizeof(*pfns), GFP_KERNEL);
> > +        if (!pfns)
> > +                return DRM_GPUSVM_SCAN_UNPOPULATED;
> > +
> > +        hmm_range.hmm_pfns = pfns;
> > +
> > +retry:
> > +        hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
> > +        mmap_read_lock(range->gpusvm->mm);
> > +
> > +        while (true) {
> > +                err = hmm_range_fault(&hmm_range);
> > +                if (err == -EBUSY) {
> > +                        if (time_after(jiffies, timeout))
> > +                                break;
> > +
> > +                        hmm_range.notifier_seq =
> > +                                mmu_interval_read_begin(notifier);
> > +                        continue;
> > +                }
> > +                break;
> > +        }
> > +        mmap_read_unlock(range->gpusvm->mm);
> > +        if (err)
> > +                goto err_free;
> > +
> > +        drm_gpusvm_notifier_lock(range->gpusvm);
> > +        if (mmu_interval_read_retry(notifier, hmm_range.notifier_seq)) {
> > +                drm_gpusvm_notifier_unlock(range->gpusvm);
> > +                goto retry;
> > +        }
> > +
> > +        for (i = 0; i < npages;) {
> > +                struct page *page;
> > +                const struct dev_pagemap *cur = NULL;
> > +
> > +                if (!(pfns[i] & HMM_PFN_VALID)) {
> > +                        state = DRM_GPUSVM_SCAN_UNPOPULATED;
> > +                        goto err_free;
> > +                }
> > +
> > +                page = hmm_pfn_to_page(pfns[i]);
> > +                if (is_device_private_page(page) ||
> > +                    is_device_coherent_page(page))
> > +                        cur = page_pgmap(page);
> > +
> > +                if (cur == pagemap) {
> > +                        new_state = DRM_GPUSVM_SCAN_EQUAL;
> > +                } else if (cur && (cur == other || !other)) {
> > +                        new_state = DRM_GPUSVM_SCAN_OTHER;
> > +                        other = cur;
> > +                } else if (cur) {
> > +                        new_state = DRM_GPUSVM_SCAN_MIXED_DEVICE;
> > +                } else {
> > +                        new_state = DRM_GPUSVM_SCAN_SYSTEM;
> > +                }
> > +
> > +                /*
> > +                 * TODO: Could use an array for state
> > +                 * transitions, and caller might want it
> > +                 * to bail early for some results.
> > +                 */
> > +                if (state == DRM_GPUSVM_SCAN_UNPOPULATED) {
> > +                        state = new_state;
> > +                } else if (state != new_state) {
> > +                        if (new_state == DRM_GPUSVM_SCAN_SYSTEM ||
> > +                            state == DRM_GPUSVM_SCAN_SYSTEM)
> > +                                state = DRM_GPUSVM_SCAN_MIXED;
> > +                        else if (state != DRM_GPUSVM_SCAN_MIXED)
> > +                                state = DRM_GPUSVM_SCAN_MIXED_DEVICE;
> > +                }
> > +
> > +                i += 1ul << drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
> > +        }
> > +
> > +err_free:
> > +        drm_gpusvm_notifier_unlock(range->gpusvm);
> > +
> > +        kvfree(pfns);
> > +        return state;
> > +}
> > +EXPORT_SYMBOL(drm_gpusvm_scan_mm);
> > +
> >  /**
> >   * drm_gpusvm_range_chunk_size() - Determine chunk size for GPU SVM range
> >   * @gpusvm: Pointer to the GPU SVM structure
> > diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
> > index 632e100e6efb..2578ac92a8d4 100644
> > --- a/include/drm/drm_gpusvm.h
> > +++ b/include/drm/drm_gpusvm.h
> > @@ -328,6 +328,35 @@ void drm_gpusvm_free_pages(struct drm_gpusvm *gpusvm,
> >                             struct drm_gpusvm_pages *svm_pages,
> >                             unsigned long npages);
> >  
> > +/**
> > + * enum drm_gpusvm_scan_result - Scan result from the drm_gpusvm_scan_mm() function.
> > + * @DRM_GPUSVM_SCAN_UNPOPULATED: At least one page was not present or inaccessible.
> > + * @DRM_GPUSVM_SCAN_EQUAL: All pages belong to the struct dev_pagemap indicated as
> > + * the @pagemap argument to the drm_gpusvm_scan_mm() function.
> > + * @DRM_GPUSVM_SCAN_OTHER: All pages belong to exactly one dev_pagemap, which is
> > + * *NOT* the @pagemap argument to the drm_gpusvm_scan_mm(). All pages belong to
> > + * the same device private owner.
> > + * @DRM_GPUSVM_SCAN_SYSTEM: All pages are present and system pages.
> > + * @DRM_GPUSVM_SCAN_MIXED_DEVICE: All pages are device pages and belong to at least
> > + * two different struct dev_pagemaps. All pages belong to the same device private
> > + * owner.
> > + * @DRM_GPUSVM_SCAN_MIXED: Pages are present and are a mix of system pages
> > + * and device-private pages. All device-private pages belong to the same device
> > + * private owner.
> > + */
> > +enum drm_gpusvm_scan_result {
> > +        DRM_GPUSVM_SCAN_UNPOPULATED,
> > +        DRM_GPUSVM_SCAN_EQUAL,
> > +        DRM_GPUSVM_SCAN_OTHER,
> > +        DRM_GPUSVM_SCAN_SYSTEM,
> > +        DRM_GPUSVM_SCAN_MIXED_DEVICE,
> > +        DRM_GPUSVM_SCAN_MIXED,
> > +};
> > +
>
> Do we really need these enums? Won't simply returning whether all pages
> in the range have the same pgmap be sufficient? Return true or false and
> use that to decide whether the range needs migration or not.
>
> If we are expecting some further use cases for these enums, then this
> looks OK though.
For the deferred series implementing the migrate_system_only migration
policy, the usage looks like this:
        if (migration_status == DRM_GPUSVM_SCAN_EQUAL ||
            (!details->migrate_same_owner &&
             (migration_status == DRM_GPUSVM_SCAN_OTHER ||
              migration_status == DRM_GPUSVM_SCAN_MIXED_DEVICE))) {
                drm_info(dpagemap->drm, "Already migrated!\n");
                return 0;
        }
So not all are needed in the near future. I could look at simplifying a
bit, though, so that we actually only expose what we use currently.
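
Purely as an illustration (the helper name below is made up and not part
of this series), a driver that only wants a yes / no answer could wrap
the scan result along the lines of the check above:

static bool xe_svm_range_already_resident(enum drm_gpusvm_scan_result res,
                                          bool migrate_same_owner)
{
        /* All pages already backed by the drm_pagemap we would migrate to. */
        if (res == DRM_GPUSVM_SCAN_EQUAL)
                return true;

        /* Resident in device memory of the same device-private owner. */
        if (!migrate_same_owner &&
            (res == DRM_GPUSVM_SCAN_OTHER ||
             res == DRM_GPUSVM_SCAN_MIXED_DEVICE))
                return true;

        return false;
}

Everything else would then simply mean "migrate" from the caller's point
of view.
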
/Thomas
>
> > +enum drm_gpusvm_scan_result drm_gpusvm_scan_mm(struct drm_gpusvm_range *range,
> > +                                               void *dev_private_owner,
> > +                                               const struct dev_pagemap *pagemap);
> > +
> >  #ifdef CONFIG_LOCKDEP
> >  /**
> >   * drm_gpusvm_driver_set_lock() - Set the lock protecting accesses to GPU SVM
>