Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Arvind Yadav <arvind.yadav@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	<himal.prasad.ghimiray@intel.com>,
	<thomas.hellstrom@linux.intel.com>
Subject: Re: [RFC v2 2/7] drm/xe/vm: Preserve cpu_autoreset_active across GPUVA operations
Date: Wed, 29 Apr 2026 21:29:21 -0700	[thread overview]
Message-ID: <afLaoQqACZaoECxb@gsse-cloud1.jf.intel.com> (raw)
In-Reply-To: <20260406085830.1118431-3-arvind.yadav@intel.com>

On Mon, Apr 06, 2026 at 02:28:25PM +0530, Arvind Yadav wrote:
> GPUVA split and remap rebuild VMAs using XE_VMA_CREATE_MASK, which does
> not carry runtime-only state. Forward XE_VMA_CPU_AUTORESET_ACTIVE through
> the pipeline so xe_vma_create() restores cpu_autoreset_active in the new
> VMA.
> 
> Also preserve the effective PAT index on REMAP so madvise-applied
> overrides survive splits.
> 
> Relax XE_WARN_ON in the UNMAP path to allow non-default attributes on
> cpu_autoreset_active VMAs.
> 
> Add xe_vma_effective_create_flags() to centralise flag propagation.
> 
> v2:
>   - Move runtime state to xe_vma bool and keep
>     XE_VMA_CPU_AUTORESET_ACTIVE as pipeline-only. (Matt)
>   - Add xe_vma_effective_create_flags() to centralise flag handling.
> 
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_vm.c | 52 +++++++++++++++++++++++++++++++++-----
>  1 file changed, 45 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 2408b547ca3d..65425f2f1bf1 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1106,7 +1106,9 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>  	vma->gpuva.vm = &vm->gpuvm;
>  	vma->gpuva.va.addr = start;
>  	vma->gpuva.va.range = end - start + 1;
> -	vma->gpuva.flags = flags;
> +	/* Pipeline-only; runtime state lives in vma->cpu_autoreset_active. */
> +	vma->gpuva.flags = flags & ~XE_VMA_CPU_AUTORESET_ACTIVE;
> +	vma->cpu_autoreset_active = !!(flags & XE_VMA_CPU_AUTORESET_ACTIVE);

Should we only set cpu_autoreset_active when the attributes are not the
default and the VMA is xe_vma_is_cpu_addr_mirror()? I believe that is
the only place where this is relevant, right?

Matt

>  
>  	for_each_tile(tile, vm->xe, id)
>  		vma->tile_mask |= 0x1 << id;
> @@ -2454,8 +2456,10 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>  				op->map.vma_flags |= XE_VMA_SYSTEM_ALLOCATOR;
>  			if (flags & DRM_XE_VM_BIND_FLAG_DUMPABLE)
>  				op->map.vma_flags |= XE_VMA_DUMPABLE;
> -			if (flags & DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET)
> +			if (flags & DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET) {
>  				op->map.vma_flags |= XE_VMA_MADV_AUTORESET;
> +				op->map.vma_flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
> +			}
>  			op->map.request_decompress = flags & DRM_XE_VM_BIND_FLAG_DECOMPRESS;
>  			op->map.pat_index = pat_index;
>  			op->map.invalidate_on_bind =
> @@ -2775,6 +2779,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>  			};
>  
>  			flags |= op->map.vma_flags & XE_VMA_CREATE_MASK;
> +			/*
> +			 * Pipeline-only flag; forward explicitly so xe_vma_create()
> +			 * restores state.
> +			 */
> +			if (op->map.vma_flags & XE_VMA_CPU_AUTORESET_ACTIVE)
> +				flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
>  
>  			vma = new_vma(vm, &op->base.map, &default_attr,
>  				      flags);
> @@ -2817,6 +2827,10 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>  			op->remap.old_range = op->remap.range;
>  
>  			flags |= op->base.remap.unmap->va->flags & XE_VMA_CREATE_MASK;
> +			/* Pipeline-only; forward explicitly. */
> +			if (xe_vma_has_cpu_autoreset_active(old))
> +				flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
> +
>  			if (op->base.remap.prev) {
>  				vma = new_vma(vm, op->base.remap.prev,
>  					      &old->attr, flags);
> @@ -4659,6 +4673,20 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
>  	}
>  }
>  
> +/*
> + * Returns vma->gpuva.flags with XE_VMA_CPU_AUTORESET_ACTIVE injected if
> + * the runtime state is set, for passing through MAP/REMAP pipelines.
> + */
> +static unsigned int xe_vma_effective_create_flags(struct xe_vma *vma)
> +{
> +	unsigned int flags = vma->gpuva.flags;
> +
> +	if (xe_vma_has_cpu_autoreset_active(vma))
> +		flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
> +
> +	return flags;
> +}
> +
>  static int xe_vm_alloc_vma(struct xe_vm *vm,
>  			   struct drm_gpuvm_map_req *map_req,
>  			   bool is_madvise)
> @@ -4694,19 +4722,25 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
>  		if (!is_madvise) {
>  			if (__op->op == DRM_GPUVA_OP_UNMAP) {
>  				vma = gpuva_to_vma(op->base.unmap.va);
> -				XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma));
> +				/* Attributes must be default on UNMAP; CPU-only VMAs are exempt. */
> +				XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma) &&
> +					   !xe_vma_has_cpu_autoreset_active(vma));
>  				default_pat = vma->attr.default_pat_index;
> -				vma_flags = vma->gpuva.flags;
> +				vma_flags = xe_vma_effective_create_flags(vma);
>  			}
>  
>  			if (__op->op == DRM_GPUVA_OP_REMAP) {
>  				vma = gpuva_to_vma(op->base.remap.unmap->va);
> -				default_pat = vma->attr.default_pat_index;
> -				vma_flags = vma->gpuva.flags;
> +				/* Preserve current PAT index, not default, for remap */
> +				default_pat = vma->attr.pat_index;
> +				vma_flags = xe_vma_effective_create_flags(vma);
>  			}
>  
>  			if (__op->op == DRM_GPUVA_OP_MAP) {
>  				op->map.vma_flags |= vma_flags & XE_VMA_CREATE_MASK;
> +				/* Pipeline-only; forward explicitly. */
> +				if (vma_flags & XE_VMA_CPU_AUTORESET_ACTIVE)
> +					op->map.vma_flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
>  				op->map.pat_index = default_pat;
>  			}
>  		} else {
> @@ -4715,10 +4749,11 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
>  				xe_assert(vm->xe, !remap_op);
>  				xe_assert(vm->xe, xe_vma_has_no_bo(vma));
>  				remap_op = true;
> -				vma_flags = vma->gpuva.flags;
> +				vma_flags = xe_vma_effective_create_flags(vma);
>  			}
>  
>  			if (__op->op == DRM_GPUVA_OP_MAP) {
> +				/* Madvise MAP follows REMAP (split/merge). */
>  				xe_assert(vm->xe, remap_op);
>  				remap_op = false;
>  				/*
> @@ -4728,6 +4763,9 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
>  				 * unmapping.
>  				 */
>  				op->map.vma_flags |= vma_flags & XE_VMA_CREATE_MASK;
> +				/* Pipeline-only; forward explicitly. */
> +				if (vma_flags & XE_VMA_CPU_AUTORESET_ACTIVE)
> +					op->map.vma_flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
>  			}
>  		}
>  		print_op(vm->xe, __op);
> -- 
> 2.43.0
> 


Thread overview: 17+ messages
2026-04-06  8:58 [RFC v2 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
2026-04-06  8:58 ` [RFC v2 1/7] drm/xe/vm: Track CPU_AUTORESET state in xe_vma Arvind Yadav
2026-04-30  4:07   ` Matthew Brost
2026-04-06  8:58 ` [RFC v2 2/7] drm/xe/vm: Preserve cpu_autoreset_active across GPUVA operations Arvind Yadav
2026-04-30  4:29   ` Matthew Brost [this message]
2026-04-06  8:58 ` [RFC v2 3/7] drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault Arvind Yadav
2026-04-30  4:26   ` Matthew Brost
2026-04-06  8:58 ` [RFC v2 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure Arvind Yadav
2026-04-06  8:58 ` [RFC v2 5/7] drm/xe/vm: Deactivate madvise notifier on GPU touch Arvind Yadav
2026-04-06  8:58 ` [RFC v2 6/7] drm/xe/vm: Wire MADVISE_AUTORESET notifiers into VM lifecycle Arvind Yadav
2026-04-06  8:58 ` [RFC v2 7/7] drm/xe/svm: Correct memory attribute reset for partial unmap Arvind Yadav
2026-04-30  5:02   ` Matthew Brost
2026-04-30  5:08     ` Matthew Brost
2026-04-06  9:04 ` ✗ CI.checkpatch: warning for drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap (rev2) Patchwork
2026-04-06  9:06 ` ✓ CI.KUnit: success " Patchwork
2026-04-06  9:54 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-06 12:36 ` ✓ Xe.CI.FULL: " Patchwork
