From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: dri-devel@lists.freedesktop.org,
"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Matthew Brost" <matthew.brost@intel.com>
Subject: [PATCH v2 1/7] drm/gpusvm: fix hmm_pfn_to_map_order() usage
Date: Fri, 28 Mar 2025 18:10:30 +0000
Message-ID: <20250328181028.288312-10-matthew.auld@intel.com>
In-Reply-To: <20250328181028.288312-9-matthew.auld@intel.com>
Handle the case where the hmm range only partially covers a huge page
(like 2M), otherwise we can end up mapping memory which lies outside the
range, and which may not even be mapped by the mm. The fix is based on
the xe userptr code, which in a future patch will use gpusvm directly,
and so needs the same handling here.
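To illustrate the clamping math, here is a minimal standalone userspace
sketch (illustrative only, not the kernel code: order_clamped() and
MAP_ORDER are made-up names, the hmm_pfn flag masking is omitted, and
ilog2() is open-coded via __builtin_clzl()):

  #include <stdio.h>

  #define MAP_ORDER 9 /* pretend hmm_pfn_to_map_order() reported 2M */

  static unsigned int order_clamped(unsigned long pfn,
                                    unsigned long index,
                                    unsigned long npages)
  {
          unsigned long size = 1UL << MAP_ORDER;

          /* Drop the pages sitting before @pfn inside the huge page. */
          size -= pfn & (size - 1);
          /* Drop the pages sitting past the end of the hmm range. */
          if (index + size > npages)
                  size -= index + size - npages;
          /* Round down to a power of two; open-coded ilog2(). */
          return 63 - __builtin_clzl(size);
  }

  int main(void)
  {
          /* pfn 384 pages into a 512-page PMD, 100 pages left in range */
          printf("order=%u\n", order_clamped(384, 0, 100)); /* 6 */
          return 0;
  }

e.g. a pfn that sits 384 pages into a 512-page (2M) PMD with only 100
pages left in the range yields order 6 (64 pages); rounding down means
any remainder is picked up on the next loop iteration.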
v2:
- Add kernel-doc (Matt B)
- s/fls/ilog2/ (Thomas)
Reported-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 33 +++++++++++++++++++++++++++++++--
1 file changed, 31 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 2451c816edd5..07dda042624c 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -817,6 +817,35 @@ drm_gpusvm_range_alloc(struct drm_gpusvm *gpusvm,
return range;
}
+/**
+ * drm_gpusvm_hmm_pfn_to_order() - Get the largest CPU mapping order.
+ * @hmm_pfn: The current hmm_pfn.
+ * @hmm_pfn_index: Index of the @hmm_pfn within the pfn array.
+ * @npages: Number of pages within the pfn array, i.e. the hmm range size.
+ *
+ * To allow skipping PFNs with the same flags (like when they belong to
+ * the same huge PTE) when looping over the pfn array, take a given hmm_pfn,
+ * and return the largest order that will fit inside the CPU PTE, while also
+ * accounting for the original hmm range boundaries.
+ *
+ * Return: The largest order that will safely fit within the hmm_pfn's CPU
+ * PTE mapping, clamped to the hmm range boundaries.
+ */
+static unsigned int drm_gpusvm_hmm_pfn_to_order(unsigned long hmm_pfn,
+ unsigned long hmm_pfn_index,
+ unsigned long npages)
+{
+ unsigned long size;
+
+ size = 1UL << hmm_pfn_to_map_order(hmm_pfn);
+ size -= (hmm_pfn & ~HMM_PFN_FLAGS) & (size - 1);
+ hmm_pfn_index += size;
+ if (hmm_pfn_index > npages)
+ size -= (hmm_pfn_index - npages);
+
+ return ilog2(size);
+}
+
/**
* drm_gpusvm_check_pages() - Check pages
* @gpusvm: Pointer to the GPU SVM structure
@@ -875,7 +904,7 @@ static bool drm_gpusvm_check_pages(struct drm_gpusvm *gpusvm,
err = -EFAULT;
goto err_free;
}
- i += 0x1 << hmm_pfn_to_map_order(pfns[i]);
+ i += 0x1 << drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
}
err_free:
@@ -1408,7 +1437,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
for (i = 0, j = 0; i < npages; ++j) {
struct page *page = hmm_pfn_to_page(pfns[i]);
- order = hmm_pfn_to_map_order(pfns[i]);
+ order = drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
if (is_device_private_page(page) ||
is_device_coherent_page(page)) {
if (zdd != page->zone_device_data && i > 0) {
--
2.48.1