From: Arvind Yadav <arvind.yadav@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, himal.prasad.ghimiray@intel.com,
thomas.hellstrom@linux.intel.com
Subject: [RFC 6/7] drm/xe/vm: Wire MADVISE_AUTORESET notifiers into VM lifecycle
Date: Thu, 19 Feb 2026 14:43:11 +0530
Message-ID: <20260219091312.796749-7-arvind.yadav@intel.com>
In-Reply-To: <20260219091312.796749-1-arvind.yadav@intel.com>
Initialise the MADVISE_AUTORESET interval notifier infrastructure for
fault-mode VMs and tear it down during VM close.
The interval notifier callback cannot take vm->lock, so the notifier work
is deferred to a workqueue. Because those workers take vm->lock themselves,
VM close must drop vm->lock around madvise teardown while the workqueue is
drained, then retake it for the remaining SVM cleanup.
For the madvise ioctl, collect the cpu_addr_mirror VMA ranges under
vm->lock and register the interval notifiers after dropping vm->lock to
avoid lock ordering issues with mmap_lock.
Also skip SVM PTE zapping for cpu_addr_mirror VMAs that are still marked
CPU_AUTORESET_ACTIVE since they do not have GPU mappings yet.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 9 +++
drivers/gpu/drm/xe/xe_vm.c | 22 ++++++
drivers/gpu/drm/xe/xe_vm_madvise.c | 113 ++++++++++++++++++++++++++++-
3 files changed, 140 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 3f09f5f6481f..8335fdc976b5 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -879,9 +879,18 @@ int xe_svm_init(struct xe_vm *vm)
xe_modparam.svm_notifier_size * SZ_1M,
&gpusvm_ops, fault_chunk_sizes,
ARRAY_SIZE(fault_chunk_sizes));
+ if (err) {
+ xe_svm_put_pagemaps(vm);
+ drm_pagemap_release_owner(&vm->svm.peer);
+ return err;
+ }
+
drm_gpusvm_driver_set_lock(&vm->svm.gpusvm, &vm->lock);
+ /* Initialize madvise notifier infrastructure after gpusvm */
+ err = xe_vm_madvise_init(vm);
if (err) {
+ drm_gpusvm_fini(&vm->svm.gpusvm);
xe_svm_put_pagemaps(vm);
drm_pagemap_release_owner(&vm->svm.peer);
return err;
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 152ee355e5c3..00799e56d089 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -39,6 +39,7 @@
#include "xe_tile.h"
#include "xe_tlb_inval.h"
#include "xe_trace_bo.h"
+#include "xe_vm_madvise.h"
#include "xe_wa.h"
static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
@@ -1835,6 +1836,27 @@ void xe_vm_close_and_put(struct xe_vm *vm)
xe_vma_destroy_unlocked(vma);
}
+ /*
+ * xe_vm_madvise_fini() drains the madvise workqueue, and workers take vm->lock.
+ * Drop vm->lock around madvise teardown to avoid deadlock.
+ *
+ * Safe since the VM is already closed, and madvise teardown prevents new work
+ * from being queued.
+ */
+ xe_assert(vm->xe, xe_vm_is_closed_or_banned(vm));
+ up_write(&vm->lock);
+
+ /* Teardown madvise MMU notifiers + drain workers */
+ if (vm->flags & XE_VM_FLAG_FAULT_MODE)
+ xe_vm_madvise_fini(vm);
+
+ /*
+ * Retake vm->lock for SVM cleanup. drm_gpusvm_fini() needs to remove
+ * any remaining GPU SVM ranges, and drm_gpusvm_range_remove() requires
+ * the driver lock (vm->lock) to be held.
+ */
+ down_write(&vm->lock);
+
xe_svm_fini(vm);
up_write(&vm->lock);
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 98663707d039..32aecad31a9c 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -23,6 +23,12 @@ struct xe_vmas_in_madvise_range {
int num_vmas;
bool has_bo_vmas;
bool has_svm_userptr_vmas;
+ bool has_cpu_addr_mirror_vmas;
+};
+
+struct xe_madvise_notifier_range {
+ u64 start;
+ u64 end;
};
/**
@@ -61,7 +67,10 @@ static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_r
if (xe_vma_bo(vma))
madvise_range->has_bo_vmas = true;
- else if (xe_vma_is_cpu_addr_mirror(vma) || xe_vma_is_userptr(vma))
+ else if (xe_vma_is_cpu_addr_mirror(vma)) {
+ madvise_range->has_svm_userptr_vmas = true;
+ madvise_range->has_cpu_addr_mirror_vmas = true;
+ } else if (xe_vma_is_userptr(vma))
madvise_range->has_svm_userptr_vmas = true;
if (madvise_range->num_vmas == max_vmas) {
@@ -213,9 +222,19 @@ static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
continue;
if (xe_vma_is_cpu_addr_mirror(vma)) {
- tile_mask |= xe_svm_ranges_zap_ptes_in_range(vm,
- xe_vma_start(vma),
- xe_vma_end(vma));
+ /*
+ * CPU-only VMAs (CPU_AUTORESET_ACTIVE set) have no GPU mappings yet.
+ * Flag MUST be cleared via xe_vma_gpu_touch() before installing GPU PTEs.
+ * Today, CPU_ADDR_MIRROR GPU PTEs are installed via the SVM fault path.
+ * If additional paths are added (prefetch, migration, explicit bind),
+ * they must clear CPU_AUTORESET_ACTIVE before PTE install.
+ *
+ * Once flag is cleared (GPU faulted), SVM handles munmap via its notifier.
+ */
+ if (!xe_vma_has_cpu_autoreset_active(vma))
+ tile_mask |= xe_svm_ranges_zap_ptes_in_range(vm,
+ xe_vma_start(vma),
+ xe_vma_end(vma));
} else {
for_each_tile(tile, vm->xe, id) {
if (xe_pt_zap_ptes(tile, vma)) {
@@ -416,6 +435,8 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
struct xe_madvise_details details;
struct xe_vm *vm;
struct drm_exec exec;
+ struct xe_madvise_notifier_range *notifier_ranges = NULL;
+ int num_notifier_ranges = 0;
int err, attr_type;
vm = xe_vm_lookup(xef, args->vm_id);
@@ -490,6 +511,89 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
if (madvise_range.has_svm_userptr_vmas)
xe_svm_notifier_unlock(vm);
+ if (err)
+ goto err_fini;
+
+ /*
+ * Collect ranges (not VMA pointers) that need madvise notifiers.
+ * Must be done while still holding vm->lock to safely inspect VMAs.
+ * After releasing vm->lock, we'll register notifiers using only
+ * the collected {start,end} ranges, avoiding UAF issues.
+ */
+ if (madvise_range.has_cpu_addr_mirror_vmas) {
+ /* Allocate array for ranges - use kvcalloc for large counts */
+ notifier_ranges = kvcalloc(madvise_range.num_vmas,
+ sizeof(*notifier_ranges),
+ GFP_KERNEL);
+ if (!notifier_ranges) {
+ err = -ENOMEM;
+ goto err_fini;
+ }
+
+ /* Collect ranges for VMAs needing notifiers */
+ for (int i = 0; i < madvise_range.num_vmas; i++) {
+ struct xe_vma *vma = madvise_range.vmas[i];
+
+ if (!xe_vma_is_cpu_addr_mirror(vma))
+ continue;
+
+ /*
+ * Only collect ranges for VMAs with MADV_AUTORESET
+ * that are still CPU-only.
+ */
+ if (!(vma->gpuva.flags & XE_VMA_MADV_AUTORESET))
+ continue;
+
+ if (!(vma->gpuva.flags & XE_VMA_CPU_AUTORESET_ACTIVE))
+ continue;
+
+ /* Skip duplicates (same range already collected) */
+ if (num_notifier_ranges > 0 &&
+ notifier_ranges[num_notifier_ranges - 1].start == xe_vma_start(vma) &&
+ notifier_ranges[num_notifier_ranges - 1].end == xe_vma_end(vma))
+ continue;
+
+ /* Save range - don't hold VMA pointer */
+ notifier_ranges[num_notifier_ranges].start = xe_vma_start(vma);
+ notifier_ranges[num_notifier_ranges].end = xe_vma_end(vma);
+ num_notifier_ranges++;
+ }
+ }
+
+ /* Normal cleanup path - all resources released properly */
+ if (madvise_range.has_bo_vmas)
+ drm_exec_fini(&exec);
+ kfree(madvise_range.vmas);
+ xe_madvise_details_fini(&details);
+ up_write(&vm->lock);
+
+ /*
+ * Register madvise notifiers using collected ranges.
+ * Must be done after dropping vm->lock to avoid lock ordering issues.
+ *
+ * Race window: munmap between lock drop and registration is acceptable.
+ * Auto-reset is best-effort; core correctness comes from CPU_AUTORESET_ACTIVE
+ * preventing GPU PTE zaps on CPU-only VMAs.
+ */
+ for (int i = 0; i < num_notifier_ranges; i++) {
+ int reg_err;
+
+ reg_err = xe_vm_madvise_register_notifier_range(vm,
+ notifier_ranges[i].start,
+ notifier_ranges[i].end);
+ if (reg_err) {
+ /* Expected failures: -ENOMEM, -ENOENT (munmap race), -EINVAL */
+ if (reg_err != -ENOMEM && reg_err != -ENOENT && reg_err != -EINVAL)
+ drm_warn(&vm->xe->drm,
+ "madvise notifier reg failed [%#llx-%#llx]: %d\n",
+ notifier_ranges[i].start, notifier_ranges[i].end, reg_err);
+ }
+ }
+
+ kvfree(notifier_ranges);
+ xe_vm_put(vm);
+ return 0;
+
err_fini:
if (madvise_range.has_bo_vmas)
drm_exec_fini(&exec);
@@ -499,6 +603,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
xe_madvise_details_fini(&details);
unlock_vm:
up_write(&vm->lock);
+ kvfree(notifier_ranges);
put_vm:
xe_vm_put(vm);
return err;
--
2.43.0
Thread overview: 19+ messages
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
2026-02-19 9:13 ` [RFC 1/7] drm/xe/vm: Add CPU_AUTORESET_ACTIVE VMA flag Arvind Yadav
2026-02-19 9:13 ` [RFC 2/7] drm/xe/vm: Preserve CPU_AUTORESET_ACTIVE across GPUVA operations Arvind Yadav
2026-02-19 9:13 ` [RFC 3/7] drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault Arvind Yadav
2026-02-20 20:12 ` Matthew Brost
2026-02-20 22:33 ` Matthew Brost
2026-03-05 3:38 ` Yadav, Arvind
2026-02-19 9:13 ` [RFC 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure Arvind Yadav
2026-02-25 23:34 ` Matthew Brost
2026-03-09 7:07 ` Yadav, Arvind
2026-03-09 9:32 ` Thomas Hellström
2026-03-11 6:34 ` Yadav, Arvind
2026-02-19 9:13 ` [RFC 5/7] drm/xe/vm: Deactivate madvise notifier on GPU touch Arvind Yadav
2026-02-19 9:13 ` Arvind Yadav [this message]
2026-02-19 9:13 ` [RFC 7/7] drm/xe/svm: Correct memory attribute reset for partial unmap Arvind Yadav
2026-02-19 9:40 ` ✗ CI.checkpatch: warning for drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Patchwork
2026-02-19 9:42 ` ✓ CI.KUnit: success " Patchwork
2026-02-19 10:40 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-19 13:04 ` ✗ Xe.CI.FULL: failure " Patchwork