From: Matthew Brost <matthew.brost@intel.com>
To: Arvind Yadav <arvind.yadav@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
<himal.prasad.ghimiray@intel.com>,
<thomas.hellstrom@linux.intel.com>, <pallavi.mishra@intel.com>
Subject: Re: [PATCH v5 5/9] drm/xe/vm: Prevent binding of purged buffer objects
Date: Wed, 11 Feb 2026 08:17:13 -0800
Message-ID: <aYyriSl/d6IRDbr5@lstrano-desk.jf.intel.com>
In-Reply-To: <20260211152644.1661165-6-arvind.yadav@intel.com>
On Wed, Feb 11, 2026 at 08:56:34PM +0530, Arvind Yadav wrote:
> Add purge checking to vma_lock_and_validate() to block new mapping
> operations on purged BOs while allowing cleanup operations to proceed.
>
> Purged BOs have their backing pages freed by the kernel. New
> mapping operations (MAP, PREFETCH, REMAP) must be rejected with
> -EINVAL to prevent GPU access to invalid memory. Cleanup
> operations (UNMAP) must be allowed so applications can release
> resources after detecting purge via the retained field.
>
> REMAP operations require mixed handling - reject new prev/next
> VMAs if the BO is purged, but allow the unmap portion to proceed
> for cleanup.
>
> The check_purged flag in struct xe_vma_lock_and_validate_flags
> distinguishes between these cases: set for new mappings (reject if the
> BO is purged), clear for cleanup operations (always allow).
>
> v2:
> - Clarify that purged BOs are permanently invalid (i915 semantics)
> - Remove incorrect claim about madvise(WILLNEED) restoring purged BOs
>
> v3:
> - Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
> - Add check_purged parameter to distinguish new mappings from cleanup
> - Allow UNMAP operations to prevent resource leaks
> - Handle REMAP operation's dual nature (cleanup + new mappings)
>
> v5:
> - Replace three boolean parameters with struct xe_vma_lock_and_validate_flags
> to improve readability and prevent argument transposition (Matt)
> - Use u32 bitfields instead of bool members to match xe_bo_shrink_flags
> pattern - more efficient packing and follows xe driver conventions (Thomas)
> - Pass struct as const since flags are read-only (Thomas)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 67 +++++++++++++++++++++++++++++++-------
> 1 file changed, 56 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 21a2527ca064..71cf3ce6c62b 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2907,8 +2907,20 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
> }
> }
>
> +/**
> + * struct xe_vma_lock_and_validate_flags - Flags for vma_lock_and_validate()
> + * @res_evict: Allow evicting resources during validation
> + * @validate: Perform BO validation
> + * @check_purged: Reject operation if BO is purged
> + */
> +struct xe_vma_lock_and_validate_flags {
> + u32 res_evict : 1;
> + u32 validate : 1;
> + u32 check_purged : 1;
> +};
This looks better, thanks for the cleanup.
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
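
Side note for anyone else following the thread: the per-op policy this patch
ends up encoding at each vma_lock_and_validate() call site in
op_lock_and_prep() boils down to roughly the helper below. Purely
illustrative, the name is made up, and I'm not asking for it to be added:

    #include <drm/drm_gpuvm.h>

    /* Illustrative only: summarizes which GPUVA ops reject purged BOs. */
    static bool op_rejects_purged_bo(enum drm_gpuva_op_type op,
                                     bool remap_unmap_part)
    {
    	switch (op) {
    	case DRM_GPUVA_OP_MAP:		/* new mapping must not touch purged pages */
    	case DRM_GPUVA_OP_PREFETCH:	/* nothing left to migrate once purged */
    		return true;
    	case DRM_GPUVA_OP_REMAP:	/* unmap half is cleanup, prev/next are new */
    		return !remap_unmap_part;
    	case DRM_GPUVA_OP_UNMAP:	/* cleanup must always be allowed */
    	default:
    		return false;
    	}
    }

In other words, only the unmap/cleanup paths skip the purge check.
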
> +
> static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
> - bool res_evict, bool validate)
> + const struct xe_vma_lock_and_validate_flags *flags)
> {
> struct xe_bo *bo = xe_vma_bo(vma);
> struct xe_vm *vm = xe_vma_vm(vma);
> @@ -2917,10 +2929,15 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
> if (bo) {
> if (!bo->vm)
> err = drm_exec_lock_obj(exec, &bo->ttm.base);
> - if (!err && validate)
> +
> + /* Reject new mappings to purged BOs; allow cleanup operations */
> + if (!err && flags->check_purged && xe_bo_is_purged(bo))
> + err = -EINVAL;
> +
> + if (!err && flags->validate)
> err = xe_bo_validate(bo, vm,
> !xe_vm_in_preempt_fence_mode(vm) &&
> - res_evict, exec);
> + flags->res_evict, exec);
> }
>
> return err;
> @@ -3013,9 +3030,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> case DRM_GPUVA_OP_MAP:
> if (!op->map.invalidate_on_bind)
> err = vma_lock_and_validate(exec, op->map.vma,
> - res_evict,
> - !xe_vm_in_fault_mode(vm) ||
> - op->map.immediate);
> + &(struct xe_vma_lock_and_validate_flags) {
> + .res_evict = res_evict,
> + .validate = !xe_vm_in_fault_mode(vm) ||
> + op->map.immediate,
> + .check_purged = true
> + });
> break;
> case DRM_GPUVA_OP_REMAP:
> err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
> @@ -3024,13 +3044,25 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>
> err = vma_lock_and_validate(exec,
> gpuva_to_vma(op->base.remap.unmap->va),
> - res_evict, false);
> + &(struct xe_vma_lock_and_validate_flags) {
> + .res_evict = res_evict,
> + .validate = false,
> + .check_purged = false
> + });
> if (!err && op->remap.prev)
> err = vma_lock_and_validate(exec, op->remap.prev,
> - res_evict, true);
> + &(struct xe_vma_lock_and_validate_flags) {
> + .res_evict = res_evict,
> + .validate = true,
> + .check_purged = true
> + });
> if (!err && op->remap.next)
> err = vma_lock_and_validate(exec, op->remap.next,
> - res_evict, true);
> + &(struct xe_vma_lock_and_validate_flags) {
> + .res_evict = res_evict,
> + .validate = true,
> + .check_purged = true
> + });
> break;
> case DRM_GPUVA_OP_UNMAP:
> err = check_ufence(gpuva_to_vma(op->base.unmap.va));
> @@ -3039,7 +3071,11 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>
> err = vma_lock_and_validate(exec,
> gpuva_to_vma(op->base.unmap.va),
> - res_evict, false);
> + &(struct xe_vma_lock_and_validate_flags) {
> + .res_evict = res_evict,
> + .validate = false,
> + .check_purged = false
> + });
> break;
> case DRM_GPUVA_OP_PREFETCH:
> {
> @@ -3052,9 +3088,18 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> region <= ARRAY_SIZE(region_to_mem_type));
> }
>
> + /*
> + * Prefetch attempts to migrate BO's backing store without
> + * repopulating it first. Purged BOs have no backing store
> + * to migrate, so reject the operation.
> + */
> err = vma_lock_and_validate(exec,
> gpuva_to_vma(op->base.prefetch.va),
> - res_evict, false);
> + &(struct xe_vma_lock_and_validate_flags) {
> + .res_evict = res_evict,
> + .validate = false,
> + .check_purged = true
> + });
> if (!err && !xe_vma_has_no_bo(vma))
> err = xe_bo_migrate(xe_vma_bo(vma),
> region_to_mem_type[region],
> --
> 2.43.0
>
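
One more note on the expected userspace flow as I read the series: since a
purged BO is permanently invalid (i915 semantics, per the commit message),
an application that sees the BO was not retained is expected to unmap the
stale binding (UNMAP still succeeds) and switch to a freshly allocated BO.
Pseudocode only, these helper names are invented and not the real UAPI
(which lives in patch 1):

    /* Pseudocode sketch; madvise_willneed(), vm_unmap(), vm_map() and
     * create_and_fill_new_bo() are invented names, not real ioctl wrappers. */
    if (!madvise_willneed(bo).retained) {
    	vm_unmap(vm, addr, size);        /* cleanup is still allowed */
    	bo = create_and_fill_new_bo();   /* purged contents are gone for good */
    	vm_map(vm, bo, addr, size);
    }
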
Thread overview: 36+ messages
2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
2026-02-11 15:26 ` [PATCH v5 1/9] drm/xe/uapi: Add UAPI " Arvind Yadav
2026-02-24 10:50 ` Thomas Hellström
2026-02-26 17:58 ` Souza, Jose
2026-02-27 9:32 ` Yadav, Arvind
2026-02-11 15:26 ` [PATCH v5 2/9] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
2026-02-11 16:00 ` Matthew Brost
2026-02-11 15:26 ` [PATCH v5 3/9] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
2026-02-24 12:21 ` Thomas Hellström
2026-02-24 14:56 ` Yadav, Arvind
2026-02-11 15:26 ` [PATCH v5 4/9] drm/xe/bo: Handle CPU faults on purged buffer objects Arvind Yadav
2026-02-11 15:26 ` [PATCH v5 5/9] drm/xe/vm: Prevent binding of " Arvind Yadav
2026-02-11 16:17 ` Matthew Brost [this message]
2026-02-11 15:26 ` [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
2026-02-24 12:48 ` Thomas Hellström
2026-02-24 15:07 ` Yadav, Arvind
2026-02-24 16:36 ` Matthew Brost
2026-02-25 5:35 ` Yadav, Arvind
2026-02-25 8:21 ` Thomas Hellström
2026-02-25 9:04 ` Matthew Brost
2026-02-25 9:18 ` Thomas Hellström
2026-02-25 9:40 ` Yadav, Arvind
2026-02-25 18:32 ` Matthew Brost
2026-02-11 15:26 ` [PATCH v5 7/9] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
2026-02-24 14:15 ` Thomas Hellström
2026-02-11 15:26 ` [PATCH v5 8/9] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
2026-02-24 14:21 ` Thomas Hellström
2026-02-24 15:09 ` Yadav, Arvind
2026-02-11 15:26 ` [PATCH v5 9/9] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
2026-02-11 15:40 ` Matthew Brost
2026-02-11 15:46 ` [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Matthew Brost
2026-02-25 10:10 ` Yadav, Arvind
2026-02-11 16:21 ` ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev6) Patchwork
2026-02-11 16:22 ` ✓ CI.KUnit: success " Patchwork
2026-02-11 17:11 ` ✗ Xe.CI.BAT: failure " Patchwork
2026-02-13 1:15 ` ✗ Xe.CI.FULL: " Patchwork