Intel-XE Archive on lore.kernel.org
From: Arvind Yadav <arvind.yadav@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, himal.prasad.ghimiray@intel.com,
	thomas.hellstrom@linux.intel.com, pallavi.mishra@intel.com
Subject: [RFC v3 5/8] drm/xe/vm: Prevent binding of purged buffer objects
Date: Wed, 10 Dec 2025 10:00:49 +0530	[thread overview]
Message-ID: <20251210043112.3267620-6-arvind.yadav@intel.com> (raw)
In-Reply-To: <20251210043112.3267620-1-arvind.yadav@intel.com>

Add a check_purged parameter to vma_lock_and_validate() to block
new mapping operations on purged BOs while still allowing cleanup
operations to proceed.

Purged BOs have their backing pages freed by the kernel. New
mapping operations (MAP, PREFETCH, REMAP) must be rejected with
-EINVAL to prevent GPU access to invalid memory. Cleanup
operations (UNMAP) must be allowed so applications can release
resources after detecting purge via the retained field.

REMAP operations require mixed handling: reject the new prev/next
VMAs if the BO is purged, but allow the unmap portion to proceed
for cleanup.

The check_purged parameter distinguishes between these cases:
true for new mappings (must reject), false for cleanup (allow).

v2:
  - Clarify that purged BOs are permanently invalid (i915 semantics)
  - Remove incorrect claim about madvise(WILLNEED) restoring purged BOs

v3:
  - Move xe_bo_is_purged check under vma_lock_and_validate (Matthew Brost)
  - Add check_purged parameter to distinguish new mappings from cleanup
  - Allow UNMAP operations to prevent resource leaks
  - Handle REMAP operation's dual nature (cleanup + new mappings)

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 drivers/gpu/drm/xe/xe_vm.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 762098d368a6..9a6c9a26c6da 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2864,7 +2864,7 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
 }
 
 static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
-				 bool res_evict, bool validate)
+				 bool res_evict, bool validate, bool check_purged)
 {
 	struct xe_bo *bo = xe_vma_bo(vma);
 	struct xe_vm *vm = xe_vma_vm(vma);
@@ -2873,6 +2873,11 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
 	if (bo) {
 		if (!bo->vm)
 			err = drm_exec_lock_obj(exec, &bo->ttm.base);
+
+		/* Reject new mappings to purged BOs; allow cleanup operations */
+		if (!err && check_purged && xe_bo_is_purged(bo))
+			err = -EINVAL;
+
 		if (!err && validate)
 			err = xe_bo_validate(bo, vm,
 					     !xe_vm_in_preempt_fence_mode(vm) &&
@@ -2964,7 +2969,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 			err = vma_lock_and_validate(exec, op->map.vma,
 						    res_evict,
 						    !xe_vm_in_fault_mode(vm) ||
-						    op->map.immediate);
+						    op->map.immediate,
+						    true);
 		break;
 	case DRM_GPUVA_OP_REMAP:
 		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
@@ -2973,13 +2979,13 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.remap.unmap->va),
-					    res_evict, false);
+					    res_evict, false, false);
 		if (!err && op->remap.prev)
 			err = vma_lock_and_validate(exec, op->remap.prev,
-						    res_evict, true);
+						    res_evict, true, true);
 		if (!err && op->remap.next)
 			err = vma_lock_and_validate(exec, op->remap.next,
-						    res_evict, true);
+						    res_evict, true, true);
 		break;
 	case DRM_GPUVA_OP_UNMAP:
 		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
@@ -2988,7 +2994,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.unmap.va),
-					    res_evict, false);
+					    res_evict, false, false);
 		break;
 	case DRM_GPUVA_OP_PREFETCH:
 	{
@@ -3003,7 +3009,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.prefetch.va),
-					    res_evict, false);
+					    res_evict, false, true);
 		if (!err && !xe_vma_has_no_bo(vma))
 			err = xe_bo_migrate(xe_vma_bo(vma),
 					    region_to_mem_type[region],
-- 
2.43.0

