From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Arvind Yadav <arvind.yadav@intel.com>, intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, himal.prasad.ghimiray@intel.com
Subject: Re: [PATCH v7 05/12] drm/xe/vm: Prevent binding of purged buffer objects
Date: Tue, 24 Mar 2026 13:21:32 +0100	[thread overview]
Message-ID: <f1d430d0177f033f6460a93c9650e6594b8a344e.camel@linux.intel.com> (raw)
In-Reply-To: <20260323093106.2986900-6-arvind.yadav@intel.com>

On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
> Add purge checking to vma_lock_and_validate() to block new mapping
> operations on purged BOs while allowing cleanup operations to
> proceed.
> 
> Purged BOs have their backing pages freed by the kernel. New
> mapping operations (MAP, PREFETCH, REMAP) must be rejected with
> -EINVAL to prevent GPU access to invalid memory. Cleanup
> operations (UNMAP) must be allowed so applications can release
> resources after detecting purge via the retained field.
> 
> REMAP operations require mixed handling - reject new prev/next
> VMAs if the BO is purged, but allow the unmap portion to proceed
> for cleanup.
> 
> The check_purged flag in struct xe_vma_lock_and_validate_flags
> distinguishes between these cases: true for new mappings (must
> reject), false for cleanup (allow).
> 
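
As a note for readers of the archive, the userspace contract described
above could look roughly like the sketch below. The ioctl, struct and
madvise names here are placeholders rather than the actual UAPI added
in patch 01/12 of this series, so treat it as illustrative only:

	struct drm_xe_gem_madvise args = {	/* placeholder struct name */
		.handle = bo_handle,
		.madv = XE_MADV_WILLNEED,	/* placeholder constant */
	};

	ioctl(fd, DRM_IOCTL_XE_GEM_MADVISE, &args);	/* placeholder ioctl */
	if (!args.retained) {
		/*
		 * Backing store was purged: new binds now fail with
		 * -EINVAL, but UNMAP still succeeds, so the application
		 * can release its resources.
		 */
		unbind_and_close(vm_id, bo_handle);	/* app-side helper */
	}
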
> v2:
>   - Clarify that purged BOs are permanently invalid (i915 semantics)
>   - Remove incorrect claim about madvise(WILLNEED) restoring purged BOs
> 
> v3:
>   - Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
>   - Add check_purged parameter to distinguish new mappings from cleanup
>   - Allow UNMAP operations to prevent resource leaks
>   - Handle REMAP operation's dual nature (cleanup + new mappings)
> 
> v5:
>   - Replace three boolean parameters with struct
>     xe_vma_lock_and_validate_flags to improve readability and
>     prevent argument transposition (Matt)
>   - Use u32 bitfields instead of bool members to match
>     xe_bo_shrink_flags pattern - more efficient packing and follows
>     xe driver conventions (Thomas)
>   - Pass struct as const since flags are read-only (Matt)
> 
> v6:
>   - Block VM_BIND to DONTNEED BOs with -EBUSY (Thomas, Matt)
> 
> v7:
>   - Pass xe_vma_lock_and_validate_flags by value instead of by
>     pointer, consistent with xe driver style. (Thomas)
> 
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

> ---
>  drivers/gpu/drm/xe/xe_vm.c | 82 ++++++++++++++++++++++++++++++++------
>  1 file changed, 69 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index a0ade67d616e..9c1a82b64a43 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2918,8 +2918,22 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
>  	}
>  }
>  
> +/**
> + * struct xe_vma_lock_and_validate_flags - Flags for vma_lock_and_validate()
> + * @res_evict: Allow evicting resources during validation
> + * @validate: Perform BO validation
> + * @request_decompress: Request BO decompression
> + * @check_purged: Reject operation if BO is purged
> + */
> +struct xe_vma_lock_and_validate_flags {
> +	u32 res_evict : 1;
> +	u32 validate : 1;
> +	u32 request_decompress : 1;
> +	u32 check_purged : 1;
> +};
> +
>  static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
> -				 bool res_evict, bool validate, bool request_decompress)
> +				 struct xe_vma_lock_and_validate_flags flags)
>  {
>  	struct xe_bo *bo = xe_vma_bo(vma);
>  	struct xe_vm *vm = xe_vma_vm(vma);
> @@ -2928,15 +2942,24 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
>  	if (bo) {
>  		if (!bo->vm)
>  			err = drm_exec_lock_obj(exec, &bo->ttm.base);
> -		if (!err && validate)
> +
> +		/* Reject new mappings to DONTNEED/purged BOs; allow cleanup operations */
> +		if (!err && flags.check_purged) {
> +			if (xe_bo_madv_is_dontneed(bo))
> +				err = -EBUSY;  /* BO marked purgeable */
> +			else if (xe_bo_is_purged(bo))
> +				err = -EINVAL; /* BO already purged */
> +		}
> +
> +		if (!err && flags.validate)
>  			err = xe_bo_validate(bo, vm,
>  					     xe_vm_allow_vm_eviction(vm) &&
> -					     res_evict, exec);
> +					     flags.res_evict, exec);
>  
>  		if (err)
>  			return err;
>  
> -		if (request_decompress)
> +		if (flags.request_decompress)
>  			err = xe_bo_decompress(bo);
>  	}
>  
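
A side note for the archive: the readability win of the flags struct
over the old positional bools is easiest to see side by side. A
condensed before/after of one call site (the real hunks below spell
out all four members explicitly):

	/* v4 and earlier: three positional bools, easy to transpose */
	err = vma_lock_and_validate(exec, vma, res_evict, true, false);

	/*
	 * v5 onwards: designated initializers name each flag at the call
	 * site; members left out of the compound literal are zero.
	 */
	err = vma_lock_and_validate(exec, vma,
				    (struct xe_vma_lock_and_validate_flags) {
					    .res_evict = res_evict,
					    .validate = true,
				    });
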
> @@ -3030,10 +3053,13 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  	case DRM_GPUVA_OP_MAP:
>  		if (!op->map.invalidate_on_bind)
>  			err = vma_lock_and_validate(exec, op->map.vma,
> -						    res_evict,
> -						    !xe_vm_in_fault_mode(vm) ||
> -						    op->map.immediate,
> -						    op->map.request_decompress);
> +						    (struct xe_vma_lock_and_validate_flags) {
> +							.res_evict = res_evict,
> +							.validate = !xe_vm_in_fault_mode(vm) ||
> +								    op->map.immediate,
> +							.request_decompress = op->map.request_decompress,
> +							.check_purged = true,
> +						    });
>  		break;
>  	case DRM_GPUVA_OP_REMAP:
>  		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
> @@ -3042,13 +3068,28 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  
>  		err = vma_lock_and_validate(exec,
>  					    gpuva_to_vma(op->base.remap.unmap->va),
> -					    res_evict, false, false);
> +					    (struct xe_vma_lock_and_validate_flags) {
> +						    .res_evict = res_evict,
> +						    .validate = false,
> +						    .request_decompress = false,
> +						    .check_purged = false,
> +					    });
>  		if (!err && op->remap.prev)
>  			err = vma_lock_and_validate(exec, op->remap.prev,
> -						    res_evict, true, false);
> +						    (struct xe_vma_lock_and_validate_flags) {
> +							    .res_evict = res_evict,
> +							    .validate = true,
> +							    .request_decompress = false,
> +							    .check_purged = true,
> +						    });
>  		if (!err && op->remap.next)
>  			err = vma_lock_and_validate(exec, op->remap.next,
> -						    res_evict, true, false);
> +						    (struct xe_vma_lock_and_validate_flags) {
> +							    .res_evict = res_evict,
> +							    .validate = true,
> +							    .request_decompress = false,
> +							    .check_purged = true,
> +						    });
>  		break;
>  	case DRM_GPUVA_OP_UNMAP:
>  		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
> @@ -3057,7 +3098,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  
>  		err = vma_lock_and_validate(exec,
>  					    gpuva_to_vma(op->base.unmap.va),
> -					    res_evict, false, false);
> +					    (struct xe_vma_lock_and_validate_flags) {
> +						    .res_evict = res_evict,
> +						    .validate = false,
> +						    .request_decompress = false,
> +						    .check_purged = false,
> +					    });
>  		break;
>  	case DRM_GPUVA_OP_PREFETCH:
>  	{
> @@ -3070,9 +3116,19 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  				  region <= ARRAY_SIZE(region_to_mem_type));
>  		}
>  
> +		/*
> +		 * Prefetch attempts to migrate BO's backing store without
> +		 * repopulating it first. Purged BOs have no backing store
> +		 * to migrate, so reject the operation.
> +		 */
>  		err = vma_lock_and_validate(exec,
>  					    gpuva_to_vma(op->base.prefetch.va),
> -					    res_evict, false, false);
> +					    (struct xe_vma_lock_and_validate_flags) {
> +						    .res_evict = res_evict,
> +						    .validate = false,
> +						    .request_decompress = false,
> +						    .check_purged = true,
> +					    });
>  		if (!err && !xe_vma_has_no_bo(vma))
>  			err = xe_bo_migrate(xe_vma_bo(vma),
>  					    region_to_mem_type[region],


Thread overview: 29+ messages
2026-03-23  9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
2026-03-23  9:30 ` [PATCH v7 01/12] drm/xe/uapi: Add UAPI " Arvind Yadav
2026-03-23  9:30 ` [PATCH v7 02/12] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
2026-03-23  9:30 ` [PATCH v7 03/12] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
2026-03-25 15:01   ` Thomas Hellström
2026-03-26  4:02     ` Yadav, Arvind
2026-03-23  9:30 ` [PATCH v7 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects Arvind Yadav
2026-03-23  9:30 ` [PATCH v7 05/12] drm/xe/vm: Prevent binding of purged " Arvind Yadav
2026-03-24 12:21   ` Thomas Hellström [this message]
2026-03-23  9:30 ` [PATCH v7 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
2026-03-24 12:25   ` Thomas Hellström
2026-03-23  9:30 ` [PATCH v7 07/12] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
2026-03-24 14:13   ` Thomas Hellström
2026-03-23  9:30 ` [PATCH v7 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs Arvind Yadav
2026-03-26  1:33   ` Matthew Brost
2026-03-26  2:49     ` Yadav, Arvind
2026-03-23  9:30 ` [PATCH v7 09/12] drm/xe/dma_buf: Block export " Arvind Yadav
2026-03-24 14:47   ` Thomas Hellström
2026-03-26  2:50     ` Yadav, Arvind
2026-03-23  9:30 ` [PATCH v7 10/12] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
2026-03-24 14:51   ` Thomas Hellström
2026-03-23  9:31 ` [PATCH v7 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
2026-03-23  9:31 ` [PATCH v7 12/12] drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl Arvind Yadav
2026-03-24  3:35   ` Matthew Brost
2026-03-23  9:40 ` ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev8) Patchwork
2026-03-23  9:42 ` ✓ CI.KUnit: success " Patchwork
2026-03-23 10:40 ` ✓ Xe.CI.BAT: " Patchwork
2026-03-23 12:05 ` ✓ Xe.CI.FULL: " Patchwork
2026-03-23 15:45 ` [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Souza, Jose
