From: Oak Zeng <oak.zeng@intel.com>
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 15/22] drm/xe/svm: Implement functions to register and unregister mmu notifier
Date: Wed, 20 Dec 2023 23:38:05 -0500 [thread overview]
Message-ID: <20231221043812.3783313-16-oak.zeng@intel.com> (raw)
In-Reply-To: <20231221043812.3783313-1-oak.zeng@intel.com>
The xe driver registers an mmu interval notifier with core mm to
monitor vma changes. We register one mmu interval notifier per svm
range. The mmu interval notifier has to be unregistered from a worker
(see the next patch in this series), so also initialize a kernel
worker to unregister the notifier.
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@intel.com>
Cc: Brian Welty <brian.welty@intel.com>
---
drivers/gpu/drm/xe/xe_svm.h | 14 ++++++
drivers/gpu/drm/xe/xe_svm_range.c | 73 +++++++++++++++++++++++++++++++
2 files changed, 87 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 6b93055934f8..90e665f2bfc6 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -52,16 +52,28 @@ struct xe_svm {
* struct xe_svm_range - Represents a shared virtual address range.
*/
struct xe_svm_range {
+ /** @svm: pointer to the xe_svm that this range belongs to */
+ struct xe_svm *svm;
+
/** @notifier: The mmu interval notifer used to keep track of CPU
* side address range change. Driver will get a callback with this
* notifier if anything changed from CPU side, such as range is
* unmapped from CPU
*/
struct mmu_interval_notifier notifier;
+ bool mmu_notifier_registered;
/** @start: start address of this range, inclusive */
u64 start;
/** @end: end address of this range, exclusive */
u64 end;
+ /** @vma: the corresponding vma of this svm range
+ * The relationship between vma and svm range is 1:N,
+ * which means one vma can be split into multiple
+ * @xe_svm_range while one @xe_svm_range can map to
+ * only one vma. An N:N mapping would add complexity
+ * to the code, so assume 1:N for now.
+ */
+ struct vm_area_struct *vma;
/** @unregister_notifier_work: A worker used to unregister this notifier */
struct work_struct unregister_notifier_work;
/** @inode: used to link this range to svm's range_tree */
@@ -77,6 +89,8 @@ struct xe_svm_range *xe_svm_range_from_addr(struct xe_svm *svm,
bool xe_svm_range_belongs_to_vma(struct mm_struct *mm,
struct xe_svm_range *range,
struct vm_area_struct *vma);
+void xe_svm_range_unregister_mmu_notifier(struct xe_svm_range *range);
+int xe_svm_range_register_mmu_notifier(struct xe_svm_range *range);
int xe_svm_build_sg(struct hmm_range *range, struct sg_table *st);
int xe_svm_devm_add(struct xe_tile *tile, struct xe_mem_region *mem);
diff --git a/drivers/gpu/drm/xe/xe_svm_range.c b/drivers/gpu/drm/xe/xe_svm_range.c
index b32c32f60315..286d5f7d6ecd 100644
--- a/drivers/gpu/drm/xe/xe_svm_range.c
+++ b/drivers/gpu/drm/xe/xe_svm_range.c
@@ -4,6 +4,7 @@
*/
#include <linux/interval_tree.h>
+#include <linux/mmu_notifier.h>
#include <linux/container_of.h>
#include <linux/mm_types.h>
#include <linux/mutex.h>
@@ -57,3 +58,75 @@ bool xe_svm_range_belongs_to_vma(struct mm_struct *mm,
return (vma1 == vma) && (vma2 == vma);
}
+
+static const struct mmu_interval_notifier_ops xe_svm_mni_ops = {
+ .invalidate = NULL,
+};
+
+/**
+ * xe_svm_range_unregister_mmu_notifier() - unregister a svm range's mmu interval notifier
+ *
+ * @range: svm range
+ *
+ */
+void xe_svm_range_unregister_mmu_notifier(struct xe_svm_range *range)
+{
+ if (!range->mmu_notifier_registered)
+ return;
+
+ mmu_interval_notifier_remove(&range->notifier);
+ range->mmu_notifier_registered = false;
+}
+
+static void xe_svm_unregister_notifier_work(struct work_struct *work)
+{
+ struct xe_svm_range *range;
+
+ range = container_of(work, struct xe_svm_range, unregister_notifier_work);
+
+ xe_svm_range_unregister_mmu_notifier(range);
+
+ /*
+ * This worker is scheduled from the mmu notifier MUNMAP event. Once
+ * munmap is called, this range is no longer valid, so remove it.
+ */
+ mutex_lock(&range->svm->mutex);
+ interval_tree_remove(&range->inode, &range->svm->range_tree);
+ mutex_unlock(&range->svm->mutex);
+ kfree(range);
+}
+
+/**
+ * xe_svm_range_register_mmu_notifier() - register a mmu interval notifier to monitor vma changes
+ *
+ * @range: svm range to monitor
+ *
+ * This has to be called with mmap_read_lock held
+ */
+int xe_svm_range_register_mmu_notifier(struct xe_svm_range *range)
+{
+ struct vm_area_struct *vma = range->vma;
+ struct mm_struct *mm = range->svm->mm;
+ u64 start, length;
+ int ret = 0;
+
+ if (range->mmu_notifier_registered)
+ return 0;
+
+ start = range->start;
+ length = range->end - start;
+ /* We are called with mmap_read_lock held, but registering a mmu
+ * notifier requires mmap_write_lock, so temporarily take the write lock.
+ */
+ mmap_read_unlock(mm);
+ mmap_write_lock(mm);
+ ret = mmu_interval_notifier_insert_locked(&range->notifier, vma->vm_mm,
+ start, length, &xe_svm_mni_ops);
+ mmap_write_downgrade(mm);
+ if (ret)
+ return ret;
+
+ INIT_WORK(&range->unregister_notifier_work, xe_svm_unregister_notifier_work);
+ range->mmu_notifier_registered = true;
+ return ret;
+}
--
2.26.3
Thread overview: 23+ messages
2023-12-21 4:37 [PATCH 00/22] XeKmd basic SVM support Oak Zeng
2023-12-21 4:37 ` [PATCH 01/22] drm/xe/svm: Add SVM document Oak Zeng
2023-12-21 4:37 ` [PATCH 02/22] drm/xe/svm: Add svm key data structures Oak Zeng
2023-12-21 4:37 ` [PATCH 03/22] drm/xe/svm: create xe svm during vm creation Oak Zeng
2023-12-21 4:37 ` [PATCH 04/22] drm/xe/svm: Trace svm creation Oak Zeng
2023-12-21 4:37 ` [PATCH 05/22] drm/xe/svm: add helper to retrieve svm range from address Oak Zeng
2023-12-21 4:37 ` [PATCH 06/22] drm/xe/svm: Introduce a helper to build sg table from hmm range Oak Zeng
2023-12-21 4:37 ` [PATCH 07/22] drm/xe/svm: Add helper for binding hmm range to gpu Oak Zeng
2023-12-21 4:37 ` [PATCH 08/22] drm/xe/svm: Add helper to invalidate svm range from GPU Oak Zeng
2023-12-21 4:37 ` [PATCH 09/22] drm/xe/svm: Remap and provide memmap backing for GPU vram Oak Zeng
2023-12-21 4:38 ` [PATCH 10/22] drm/xe/svm: Introduce svm migration function Oak Zeng
2023-12-21 4:38 ` [PATCH 11/22] drm/xe/svm: implement functions to allocate and free device memory Oak Zeng
2023-12-21 4:38 ` [PATCH 12/22] drm/xe/svm: Trace buddy block allocation and free Oak Zeng
2023-12-21 4:38 ` [PATCH 13/22] drm/xe/svm: Handle CPU page fault Oak Zeng
2023-12-21 4:38 ` [PATCH 14/22] drm/xe/svm: trace svm range migration Oak Zeng
2023-12-21 4:38 ` Oak Zeng [this message]
2023-12-21 4:38 ` [PATCH 16/22] drm/xe/svm: Implement the mmu notifier range invalidate callback Oak Zeng
2023-12-21 4:38 ` [PATCH 17/22] drm/xe/svm: clean up svm range during process exit Oak Zeng
2023-12-21 4:38 ` [PATCH 18/22] drm/xe/svm: Move a few structures to xe_gt.h Oak Zeng
2023-12-21 4:38 ` [PATCH 19/22] drm/xe/svm: migrate svm range to vram Oak Zeng
2023-12-21 4:38 ` [PATCH 20/22] drm/xe/svm: Populate svm range Oak Zeng
2023-12-21 4:38 ` [PATCH 21/22] drm/xe/svm: GPU page fault support Oak Zeng
2023-12-21 4:38 ` [PATCH 22/22] drm/xe/svm: Add DRM_XE_SVM kernel config entry Oak Zeng