public inbox for intel-xe@lists.freedesktop.org
From: "Yadav, Arvind" <arvind.yadav@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Matthew Brost" <matthew.brost@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	<himal.prasad.ghimiray@intel.com>, <pallavi.mishra@intel.com>
Subject: Re: [PATCH v4 5/8] drm/xe/vm: Prevent binding of purged buffer objects
Date: Fri, 30 Jan 2026 13:47:40 +0530	[thread overview]
Message-ID: <4489e6d4-e1b7-42aa-88a0-c189944525a2@intel.com> (raw)
In-Reply-To: <bedb36f03c7e82ffca688a1ba69929a343bc4f37.camel@linux.intel.com>


On 23-01-2026 18:07, Thomas Hellström wrote:
> On Fri, 2026-01-23 at 11:11 +0530, Yadav, Arvind wrote:
>> On 20-01-2026 22:57, Matthew Brost wrote:
>>> On Tue, Jan 20, 2026 at 11:38:51AM +0530, Arvind Yadav wrote:
>>>> Add check_purged parameter to vma_lock_and_validate() to block
>>>> new mapping operations on purged BOs while allowing cleanup
>>>> operations to proceed.
>>>>
>>>> Purged BOs have their backing pages freed by the kernel. New
>>>> mapping operations (MAP, PREFETCH, REMAP) must be rejected with
>>>> -EINVAL to prevent GPU access to invalid memory. Cleanup
>>>> operations (UNMAP) must be allowed so applications can release
>>>> resources after detecting purge via the retained field.
>>>>
>>>> REMAP operations require mixed handling - reject new prev/next
>>>> VMAs if the BO is purged, but allow the unmap portion to proceed
>>>> for cleanup.
>>>>
>>>> The check_purged parameter distinguishes between these cases:
>>>> true for new mappings (must reject), false for cleanup (allow).
>>>>
>>>> v2:
>>>>     - Clarify that purged BOs are permanently invalid (i915 semantics)
>>>>     - Remove incorrect claim about madvise(WILLNEED) restoring purged BOs
>>>>
>>>> v3:
>>>>     - Move xe_bo_is_purged check under vma_lock_and_validate (Matthew Brost)
>>>>     - Add check_purged parameter to distinguish new mappings from cleanup
>>>>     - Allow UNMAP operations to prevent resource leaks
>>>>     - Handle REMAP operation's dual nature (cleanup + new mappings)
>>>>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>>>> ---
>>>>    drivers/gpu/drm/xe/xe_vm.c | 20 +++++++++++++-------
>>>>    1 file changed, 13 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c
>>>> b/drivers/gpu/drm/xe/xe_vm.c
>>>> index c3a5fe76ff96..f250daae3012 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>>> @@ -2883,7 +2883,7 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
>>>>    }
>>>>
>>>>    static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
>>>> -				 bool res_evict, bool validate)
>>>> +				 bool res_evict, bool validate, bool check_purged)
>>> It's probably time to add something like this to avoid transposing
>>> arguments:
>>>
>>> struct lock_and_validate_flags {
>>> 	bool res_evict;
>>> 	bool validate;
>>> 	bool check_purged;
>>> };
>>>
>>> Logic in the patch looks correct though.
>>
>> Noted, I will add "struct xe_lock_and_validate_flags". Thanks, Arvind
> Note that if you follow the pattern of struct xe_bo_shrink_flags,
> passing the struct as a const and using
>
> struct lock_and_validate_flags {
> 	u32 res_evict : 1;
> 	u32 validate : 1;
> 	u32 check_purged : 1;
> };
>
> this will be more or less equivalent to passing a bit-field with
> type-checking.
>
> Some reviewers frown on using "bool" in compound types, although we
> accept that in the xe driver.


Noted, I will make the changes as suggested.


Thanks,
Arvind

> Otherwise patch LGTM as well.
>
> /Thomas
>
>
>>> Matt
>>>
>>>>    {
>>>>    	struct xe_bo *bo = xe_vma_bo(vma);
>>>>    	struct xe_vm *vm = xe_vma_vm(vma);
>>>> @@ -2892,6 +2892,11 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
>>>>    	if (bo) {
>>>>    		if (!bo->vm)
>>>>    			err = drm_exec_lock_obj(exec, &bo->ttm.base);
>>>> +
>>>> +		/* Reject new mappings to purged BOs; allow cleanup operations */
>>>> +		if (!err && check_purged && xe_bo_is_purged(bo))
>>>> +			err = -EINVAL;
>>>> +
>>>>    		if (!err && validate)
>>>>    			err = xe_bo_validate(bo, vm,
>>>>    					     !xe_vm_in_preempt_fence_mode(vm) &&
>>>> @@ -2990,7 +2995,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>>>>    			err = vma_lock_and_validate(exec, op->map.vma,
>>>>    						    res_evict,
>>>>    						    !xe_vm_in_fault_mode(vm) ||
>>>> -						    op->map.immediate);
>>>> +						    op->map.immediate,
>>>> +						    true);
>>>>    		break;
>>>>    	case DRM_GPUVA_OP_REMAP:
>>>>    		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
>>>> @@ -2999,13 +3005,13 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>>>>    
>>>>    		err = vma_lock_and_validate(exec,
>>>>    					    gpuva_to_vma(op->base.remap.unmap->va),
>>>> -					    res_evict, false);
>>>> +					    res_evict, false, false);
>>>>    		if (!err && op->remap.prev)
>>>>    			err = vma_lock_and_validate(exec, op->remap.prev,
>>>> -						    res_evict, true);
>>>> +						    res_evict, true, true);
>>>>    		if (!err && op->remap.next)
>>>>    			err = vma_lock_and_validate(exec, op->remap.next,
>>>> -						    res_evict, true);
>>>> +						    res_evict, true, true);
>>>>    		break;
>>>>    	case DRM_GPUVA_OP_UNMAP:
>>>>    		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
>>>> @@ -3014,7 +3020,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>>>>    
>>>>    		err = vma_lock_and_validate(exec,
>>>>    					    gpuva_to_vma(op->base.unmap.va),
>>>> -					    res_evict, false);
>>>> +					    res_evict, false, false);
>>>>    		break;
>>>>    	case DRM_GPUVA_OP_PREFETCH:
>>>>    	{
>>>> @@ -3029,7 +3035,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>>>>    
>>>>    		err = vma_lock_and_validate(exec,
>>>>    					    gpuva_to_vma(op->base.prefetch.va),
>>>> -					    res_evict, false);
>>>> +					    res_evict, false, true);
>>>>    		if (!err && !xe_vma_has_no_bo(vma))
>>>>    			err = xe_bo_migrate(xe_vma_bo(vma),
>>>>    					    region_to_mem_type[region],
>>>> -- 
>>>> 2.43.0

