From: Arvind Yadav <arvind.yadav@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, himal.prasad.ghimiray@intel.com,
thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 05/12] drm/xe/vm: Prevent binding of purged buffer objects
Date: Thu, 26 Mar 2026 11:21:04 +0530 [thread overview]
Message-ID: <20260326055115.3422240-6-arvind.yadav@intel.com> (raw)
In-Reply-To: <20260326055115.3422240-1-arvind.yadav@intel.com>
Add purge checking to vma_lock_and_validate() to block new mapping
operations on purged BOs while allowing cleanup operations to proceed.

Purged BOs have their backing pages freed by the kernel. New mapping
operations (MAP, PREFETCH, REMAP) must be rejected with -EINVAL to
prevent GPU access to invalid memory. Cleanup operations (UNMAP) must
be allowed so applications can release resources after detecting purge
via the retained field.

REMAP operations require mixed handling: reject new prev/next VMAs if
the BO is purged, but allow the unmap portion to proceed for cleanup.

The check_purged flag in struct xe_vma_lock_and_validate_flags
distinguishes between these cases: true for new mappings (must reject),
false for cleanup (allow).
v2:
- Clarify that purged BOs are permanently invalid (i915 semantics)
- Remove incorrect claim about madvise(WILLNEED) restoring purged BOs

v3:
- Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
- Add check_purged parameter to distinguish new mappings from cleanup
- Allow UNMAP operations to prevent resource leaks
- Handle REMAP operation's dual nature (cleanup + new mappings)

v5:
- Replace three boolean parameters with struct xe_vma_lock_and_validate_flags
  to improve readability and prevent argument transposition (Matt)
- Use u32 bitfields instead of bool members to match xe_bo_shrink_flags
  pattern - more efficient packing and follows xe driver conventions (Thomas)
- Pass struct as const since flags are read-only (Matt)

v6:
- Block VM_BIND to DONTNEED BOs with -EBUSY (Thomas, Matt)

v7:
- Pass xe_vma_lock_and_validate_flags by value instead of by
  pointer, consistent with xe driver style. (Thomas)
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 82 ++++++++++++++++++++++++++++++++------
1 file changed, 69 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index a0ade67d616e..9c1a82b64a43 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2918,8 +2918,22 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
 	}
 }
 
+/**
+ * struct xe_vma_lock_and_validate_flags - Flags for vma_lock_and_validate()
+ * @res_evict: Allow evicting resources during validation
+ * @validate: Perform BO validation
+ * @request_decompress: Request BO decompression
+ * @check_purged: Reject operation if BO is purged
+ */
+struct xe_vma_lock_and_validate_flags {
+	u32 res_evict : 1;
+	u32 validate : 1;
+	u32 request_decompress : 1;
+	u32 check_purged : 1;
+};
+
 static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
-				 bool res_evict, bool validate, bool request_decompress)
+				 struct xe_vma_lock_and_validate_flags flags)
 {
 	struct xe_bo *bo = xe_vma_bo(vma);
 	struct xe_vm *vm = xe_vma_vm(vma);
@@ -2928,15 +2942,24 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
 	if (bo) {
 		if (!bo->vm)
 			err = drm_exec_lock_obj(exec, &bo->ttm.base);
-		if (!err && validate)
+
+		/* Reject new mappings to DONTNEED/purged BOs; allow cleanup operations */
+		if (!err && flags.check_purged) {
+			if (xe_bo_madv_is_dontneed(bo))
+				err = -EBUSY; /* BO marked purgeable */
+			else if (xe_bo_is_purged(bo))
+				err = -EINVAL; /* BO already purged */
+		}
+
+		if (!err && flags.validate)
 			err = xe_bo_validate(bo, vm,
 					     xe_vm_allow_vm_eviction(vm) &&
-					     res_evict, exec);
+					     flags.res_evict, exec);
 		if (err)
 			return err;
-		if (request_decompress)
+		if (flags.request_decompress)
 			err = xe_bo_decompress(bo);
 	}
@@ -3030,10 +3053,13 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 	case DRM_GPUVA_OP_MAP:
 		if (!op->map.invalidate_on_bind)
 			err = vma_lock_and_validate(exec, op->map.vma,
-						    res_evict,
-						    !xe_vm_in_fault_mode(vm) ||
-						    op->map.immediate,
-						    op->map.request_decompress);
+						    (struct xe_vma_lock_and_validate_flags) {
+							.res_evict = res_evict,
+							.validate = !xe_vm_in_fault_mode(vm) ||
+								    op->map.immediate,
+							.request_decompress = op->map.request_decompress,
+							.check_purged = true,
+						    });
 		break;
 	case DRM_GPUVA_OP_REMAP:
 		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
@@ -3042,13 +3068,28 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.remap.unmap->va),
-					    res_evict, false, false);
+					    (struct xe_vma_lock_and_validate_flags) {
+						    .res_evict = res_evict,
+						    .validate = false,
+						    .request_decompress = false,
+						    .check_purged = false,
+					    });
 		if (!err && op->remap.prev)
 			err = vma_lock_and_validate(exec, op->remap.prev,
-						    res_evict, true, false);
+						    (struct xe_vma_lock_and_validate_flags) {
+							    .res_evict = res_evict,
+							    .validate = true,
+							    .request_decompress = false,
+							    .check_purged = true,
+						    });
 		if (!err && op->remap.next)
 			err = vma_lock_and_validate(exec, op->remap.next,
-						    res_evict, true, false);
+						    (struct xe_vma_lock_and_validate_flags) {
+							    .res_evict = res_evict,
+							    .validate = true,
+							    .request_decompress = false,
+							    .check_purged = true,
+						    });
 		break;
 	case DRM_GPUVA_OP_UNMAP:
 		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
@@ -3057,7 +3098,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.unmap.va),
-					    res_evict, false, false);
+					    (struct xe_vma_lock_and_validate_flags) {
+						    .res_evict = res_evict,
+						    .validate = false,
+						    .request_decompress = false,
+						    .check_purged = false,
+					    });
 		break;
 	case DRM_GPUVA_OP_PREFETCH:
 	{
@@ -3070,9 +3116,19 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 				  region <= ARRAY_SIZE(region_to_mem_type));
 		}
 
+		/*
+		 * Prefetch attempts to migrate BO's backing store without
+		 * repopulating it first. Purged BOs have no backing store
+		 * to migrate, so reject the operation.
+		 */
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.prefetch.va),
-					    res_evict, false, false);
+					    (struct xe_vma_lock_and_validate_flags) {
+						    .res_evict = res_evict,
+						    .validate = false,
+						    .request_decompress = false,
+						    .check_purged = true,
+					    });
 		if (!err && !xe_vma_has_no_bo(vma))
 			err = xe_bo_migrate(xe_vma_bo(vma),
 					    region_to_mem_type[region],
--
2.43.0