Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Nitin Gote <nitin.r.gote@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <matthew.auld@intel.com>,
	<thomas.hellstrom@intel.com>
Subject: Re: [PATCH v2 3/3] drm/xe: implement VM_BIND decompression in vm_bind_ioctl
Date: Mon, 13 Oct 2025 09:12:10 -0700	[thread overview]
Message-ID: <aO0k2rx8oqoDV/0v@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <aO0kal4XbxtdJGki@lstrano-desk.jf.intel.com>

On Mon, Oct 13, 2025 at 09:10:18AM -0700, Matthew Brost wrote:
> On Mon, Oct 13, 2025 at 08:18:50PM +0530, Nitin Gote wrote:
> > Implement handling of VM_BIND(..., DECOMPRESS) in xe_vm_bind_ioctl.
> > 
> > Key changes:
> > - Parse and record per-op intent (op->map.request_decompress) when the
> >   DECOMPRESS flag is present.
> > - Validate DECOMPRESS preconditions in the ioctl path:
> >   - Only valid for MAP ops.
> >   - The provided pat_index must select the device's "no-compression" PAT.
> >   - Only meaningful for VRAM-backed BOs on devices with flat CCS and the
> >     required XE2+ hardware (reject with -EOPNOTSUPP otherwise).
> >   - Use XE_IOCTL_DBG for uAPI sanity checks.
> > - Implement xe_bo_schedule_decompress():
> >   - Locate and invalidate any overlapping VMA.
> >   - Schedule an in-place resolve via the migrate/resolve path.
> >   - Install the resulting dma_fence into the BO's kernel reservation
> >     (DMA_RESV_USAGE_KERNEL).
> > - Wire scheduling into vma_lock_and_validate() so VM_BIND will schedule
> >   decompression when request_decompress is set.
> > - Handle fault-mode VMs by performing decompression synchronously during
> >   the bind process, ensuring that the resolve is completed before the bind
> >   finishes.
> > 
> > This schedules an in-place GPU resolve (xe_migrate_resolve) for
> > decompression.
> > 
> > v2:
> >    - Move decompression work out of vm_bind ioctl. (Matt)
> >    - Put that work in a small helper at the BO/migrate layer and invoke it
> >      from vma_lock_and_validate, which already runs under drm_exec.
> >    - Move lightweight checks to vm_bind_ioctl_check_args (Matthew Auld)
> > 
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > Signed-off-by: Nitin Gote <nitin.r.gote@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_bo.c       | 53 ++++++++++++++++++++++++++++++++
> >  drivers/gpu/drm/xe/xe_bo.h       |  3 ++
> >  drivers/gpu/drm/xe/xe_vm.c       | 48 ++++++++++++++++++++---------
> >  drivers/gpu/drm/xe/xe_vm_types.h |  2 ++
> >  4 files changed, 92 insertions(+), 14 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> > index 7b6502081873..e9b28c8462c6 100644
> > --- a/drivers/gpu/drm/xe/xe_bo.c
> > +++ b/drivers/gpu/drm/xe/xe_bo.c
> > @@ -3307,6 +3307,59 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
> >  	return 0;
> >  }
> >  
> > +/**
> > + * xe_bo_schedule_decompress - schedule in-place decompress and install fence
> > + * @bo:   buffer object (caller should hold drm_exec reservations for VM+BO)
> > + * @vm:   VM containing the VMA range
> > + * @addr: start GPU virtual address of the range
> > + * @range:length in bytes of the range
> > + *
> > + * Schedules an in-place resolve via the migrate layer and installs the
> > + * returned dma_fence into the BO kernel reservation slot (DMA_RESV_USAGE_KERNEL).
> > + * Returns 0 on success, negative errno on error.
> > + */
> > +int xe_bo_schedule_decompress(struct xe_bo *bo, struct xe_vm *vm,
> 
> s/xe_bo_schedule_decompress/xe_bo_decompress
> 
> > +			      u64 addr, u64 range)
> 
> The caller has the VMA, so just pass it in rather than looking it up here,
> but I don't even think the VMA is needed - more on that below.
> 
> > +{
> > +	struct xe_tile *tile = xe_device_get_root_tile(vm->xe);
> > +	struct xe_vma *existing_vma = NULL;
> > +	struct dma_fence *decomp_fence = NULL;
> > +	int err = 0;
> > +
> > +	/* Validate buffer is VRAM-backed  */
> > +	if (!bo->ttm.resource || !mem_type_is_vram(bo->ttm.resource->mem_type)) {
> > +		drm_err(&vm->xe->drm, "Decompression requires VRAM buffer\n");
> > +		return -EINVAL;
> > +	}
> 
> I think we should just slightly skip the decompress step if the BO

Typo. s/slightly/silently 

By silently I mean just return 0.
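
Roughly (untested sketch, using the xe_bo_decompress name suggested above and
the mem_type_is_vram check already in the patch):

	/* BO was already evicted to system memory; nothing to decompress */
	if (!bo->ttm.resource || !mem_type_is_vram(bo->ttm.resource->mem_type))
		return 0;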

Matt

> isn't in VRAM. It is possible we have already evicted the BO to system
> memory behind the user's back, and decompression isn't required then.
> 
> > +
> > +	/* Find overlapping VMA */
> > +	existing_vma = xe_vm_find_overlapping_vma(vm, addr, range);
> > +	if (existing_vma) {
> 
> See above - a lookup isn't required as the caller has the VMA already.
> 
> > +		drm_dbg(&vm->xe->drm,
> > +			"Found overlapping VMA - automatic invalidation will occur\n");
> > +
> > +		/* Invalidate the VMA */
> > +		err = xe_vm_invalidate_vma(existing_vma);
> 
> xe_vm_invalidate_vma is only valid on a faulting VM. I think you want to
> call xe_bo_move_notify here, as that will do all the correct things wrt
> invalidations.
> 
> > +		if (err)
> > +			return err;
> > +	}
> > +
> > +	/* Schedule the in-place decompression */
> > +	decomp_fence = xe_migrate_resolve(tile->migrate,
> > +					  bo, bo,
> > +					  bo->ttm.resource, bo->ttm.resource,
> > +					  false);
> > +
> > +	if (IS_ERR(decomp_fence))
> > +		return PTR_ERR(decomp_fence);
> > +
> > +	/* Install kernel-usage fence */
> > +	dma_resv_add_fence(bo->ttm.base.resv, decomp_fence, DMA_RESV_USAGE_KERNEL);
> 
> I believe you need to call dma_resv_reserve_fences before
> xe_migrate_resolve to ensure that dma_resv_add_fence will succeed.
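> 
> Untested sketch of the ordering I mean (reserving a single fence slot is my
> assumption; the xe_migrate_resolve arguments are the ones already in the patch):
> 
> 	err = dma_resv_reserve_fences(bo->ttm.base.resv, 1);
> 	if (err)
> 		return err;
> 
> 	decomp_fence = xe_migrate_resolve(...);
> 	if (IS_ERR(decomp_fence))
> 		return PTR_ERR(decomp_fence);
> 
> 	dma_resv_add_fence(bo->ttm.base.resv, decomp_fence, DMA_RESV_USAGE_KERNEL);
> 	dma_fence_put(decomp_fence);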
> 
> > +	dma_fence_put(decomp_fence);
> > +
> > +	return 0;
> > +}
> > +
> >  /**
> >   * xe_bo_lock() - Lock the buffer object's dma_resv object
> >   * @bo: The struct xe_bo whose lock is to be taken
> > diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> > index 353d607d301d..b5dd8630cde5 100644
> > --- a/drivers/gpu/drm/xe/xe_bo.h
> > +++ b/drivers/gpu/drm/xe/xe_bo.h
> > @@ -308,6 +308,9 @@ int xe_bo_dumb_create(struct drm_file *file_priv,
> >  
> >  bool xe_bo_needs_ccs_pages(struct xe_bo *bo);
> >  
> > +int xe_bo_schedule_decompress(struct xe_bo *bo, struct xe_vm *vm,
> > +			      u64 addr, u64 range);
> > +
> >  static inline size_t xe_bo_ccs_pages_start(struct xe_bo *bo)
> >  {
> >  	return PAGE_ALIGN(xe_bo_size(bo));
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index 179758ca7cb8..32bc6d899c36 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -2304,6 +2304,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
> >  			op->map.is_cpu_addr_mirror = flags &
> >  				DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR;
> >  			op->map.dumpable = flags & DRM_XE_VM_BIND_FLAG_DUMPABLE;
> > +			op->map.request_decompress = flags & DRM_XE_VM_BIND_FLAG_DECOMPRESS;
> >  			op->map.pat_index = pat_index;
> >  			op->map.invalidate_on_bind =
> >  				__xe_vm_needs_clear_scratch_pages(vm, flags);
> > @@ -2858,7 +2859,7 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
> >  }
> >  
> >  static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
> > -				 bool res_evict, bool validate)
> > +				 bool res_evict, bool validate, bool request_decompress)
> 
> We have too many bools here, but we can clean this up in a follow-up.
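> 
> Follow-up idea only, the names below are placeholders and not existing code:
> 
> 	#define VMA_LV_RES_EVICT	BIT(0)
> 	#define VMA_LV_VALIDATE		BIT(1)
> 	#define VMA_LV_DECOMPRESS	BIT(2)
> 
> 	static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
> 					 unsigned int flags);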
> 
> >  {
> >  	struct xe_bo *bo = xe_vma_bo(vma);
> >  	struct xe_vm *vm = xe_vma_vm(vma);
> > @@ -2871,6 +2872,17 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
> >  			err = xe_bo_validate(bo, vm,
> >  					     !xe_vm_in_preempt_fence_mode(vm) &&
> >  					     res_evict, exec);
> > +
> > +		if (err)
> > +			return err;
> > +
> > +		if (request_decompress) {
> > +			int ret = xe_bo_schedule_decompress(bo, vm,
> > +							    xe_vma_start(vma),
> > +							    xe_vma_size(vma));
> 
> As stated above, start / size are not needed.
> 
> > +			if (ret)
> > +				return ret;
> > +		}
> 
> 
> I'd write this like:
> 
> if (request_decompress)
> 	err = xe_bo_schedule_decompress(...);
> 
> >  	}
> >  
> >  	return err;
> > @@ -2958,7 +2970,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> >  			err = vma_lock_and_validate(exec, op->map.vma,
> >  						    res_evict,
> >  						    !xe_vm_in_fault_mode(vm) ||
> > -						    op->map.immediate);
> > +						    op->map.immediate,
> > +						    op->map.request_decompress);
> >  		break;
> >  	case DRM_GPUVA_OP_REMAP:
> >  		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
> > @@ -2967,13 +2980,13 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> >  
> >  		err = vma_lock_and_validate(exec,
> >  					    gpuva_to_vma(op->base.remap.unmap->va),
> > -					    res_evict, false);
> > +					    res_evict, false, false);
> >  		if (!err && op->remap.prev)
> >  			err = vma_lock_and_validate(exec, op->remap.prev,
> > -						    res_evict, true);
> > +						    res_evict, true, false);
> >  		if (!err && op->remap.next)
> >  			err = vma_lock_and_validate(exec, op->remap.next,
> > -						    res_evict, true);
> > +						    res_evict, true, false);
> >  		break;
> >  	case DRM_GPUVA_OP_UNMAP:
> >  		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
> > @@ -2982,7 +2995,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> >  
> >  		err = vma_lock_and_validate(exec,
> >  					    gpuva_to_vma(op->base.unmap.va),
> > -					    res_evict, false);
> > +					    res_evict, false, false);
> >  		break;
> >  	case DRM_GPUVA_OP_PREFETCH:
> >  	{
> > @@ -2997,7 +3010,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> >  
> >  		err = vma_lock_and_validate(exec,
> >  					    gpuva_to_vma(op->base.prefetch.va),
> > -					    res_evict, false);
> > +					    res_evict, false, false);
> >  		if (!err && !xe_vma_has_no_bo(vma))
> >  			err = xe_bo_migrate(xe_vma_bo(vma),
> >  					    region_to_mem_type[region],
> > @@ -3305,7 +3318,8 @@ ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_execute, ERRNO);
> >  	 DRM_XE_VM_BIND_FLAG_NULL | \
> >  	 DRM_XE_VM_BIND_FLAG_DUMPABLE | \
> >  	 DRM_XE_VM_BIND_FLAG_CHECK_PXP | \
> > -	 DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR)
> > +	 DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR | \
> > +	 DRM_XE_VM_BIND_FLAG_DECOMPRESS)
> >  
> >  #ifdef TEST_VM_OPS_ERROR
> >  #define SUPPORTED_FLAGS	(SUPPORTED_FLAGS_STUB | FORCE_OP_ERROR)
> > @@ -3322,7 +3336,6 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
> >  {
> >  	int err;
> >  	int i;
> > -
> 
> Not related.
> 
> >  	if (XE_IOCTL_DBG(xe, args->pad || args->pad2) ||
> >  	    XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> >  		return -EINVAL;
> > @@ -3363,9 +3376,9 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
> >  		bool is_null = flags & DRM_XE_VM_BIND_FLAG_NULL;
> >  		bool is_cpu_addr_mirror = flags &
> >  			DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR;
> > +		bool is_decompress = flags & DRM_XE_VM_BIND_FLAG_DECOMPRESS;
> >  		u16 pat_index = (*bind_ops)[i].pat_index;
> >  		u16 coh_mode;
> > -
> 
> Not related.
> 
> >  		if (XE_IOCTL_DBG(xe, is_cpu_addr_mirror &&
> >  				 (!xe_vm_in_fault_mode(vm) ||
> >  				 !IS_ENABLED(CONFIG_DRM_XE_GPUSVM)))) {
> > @@ -3397,7 +3410,9 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
> >  		    XE_IOCTL_DBG(xe, obj_offset && (is_null ||
> >  						    is_cpu_addr_mirror)) ||
> >  		    XE_IOCTL_DBG(xe, op != DRM_XE_VM_BIND_OP_MAP &&
> > -				 (is_null || is_cpu_addr_mirror)) ||
> > +				 (is_decompress || is_null || is_cpu_addr_mirror)) ||
> > +		    XE_IOCTL_DBG(xe, is_decompress &&
> > +				 pat_index == xe->pat.idx[XE_CACHE_NONE_COMPRESSION]) ||
> 
> The above check doesn't look right. I think you want to check whether
> pat_index enables compression and reject that case.
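> 
> One possible reading, given the commit message says the pat_index must select
> the no-compression PAT, is to flip the comparison (untested):
> 
> 		    XE_IOCTL_DBG(xe, is_decompress &&
> 				 pat_index != xe->pat.idx[XE_CACHE_NONE_COMPRESSION]) ||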
> 
> >  		    XE_IOCTL_DBG(xe, !obj &&
> >  				 op == DRM_XE_VM_BIND_OP_MAP &&
> >  				 !is_null && !is_cpu_addr_mirror) ||
> > @@ -3417,8 +3432,8 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
> >  				 op == DRM_XE_VM_BIND_OP_PREFETCH) ||
> >  		    XE_IOCTL_DBG(xe, prefetch_region &&
> >  				 op != DRM_XE_VM_BIND_OP_PREFETCH) ||
> > -		    XE_IOCTL_DBG(xe,  (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
> > -				       !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
> > +		    XE_IOCTL_DBG(xe, (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
> > +				      !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
> 
> Not related, but it is a good cleanup.
> 
> >  		    XE_IOCTL_DBG(xe, obj &&
> >  				 op == DRM_XE_VM_BIND_OP_UNMAP)) {
> >  			err = -EINVAL;
> > @@ -3433,6 +3448,12 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
> >  			err = -EINVAL;
> >  			goto free_bind_ops;
> >  		}
> > +
> > +		if (is_decompress && (XE_IOCTL_DBG(xe, !xe_device_has_flat_ccs(xe)) ||
> > +				      XE_IOCTL_DBG(xe, GRAPHICS_VER(xe) < 20))) {
> > +			err = -EOPNOTSUPP;
> > +			goto free_bind_ops;
> > +		}
> >  	}
> >  
> >  	return 0;
> > @@ -3686,7 +3707,6 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> >  		u64 obj_offset = bind_ops[i].obj_offset;
> >  		u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
> >  		u16 pat_index = bind_ops[i].pat_index;
> > -
> 
> Not related.
> 
> Matt
> 
> >  		ops[i] = vm_bind_ioctl_ops_create(vm, &vops, bos[i], obj_offset,
> >  						  addr, range, op, flags,
> >  						  prefetch_region, pat_index);
> > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> > index 413353e1c225..7d652d17b0dc 100644
> > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > @@ -357,6 +357,8 @@ struct xe_vma_op_map {
> >  	bool dumpable;
> >  	/** @invalidate: invalidate the VMA before bind */
> >  	bool invalidate_on_bind;
> > +	/** @request_decompress: schedule decompression for GPU map */
> > +	bool request_decompress;
> >  	/** @pat_index: The pat index to use for this operation. */
> >  	u16 pat_index;
> >  };
> > -- 
> > 2.25.1
> > 

Thread overview: 9+ messages
2025-10-13 14:48 [PATCH v2 0/3] drm/xe: add VM_BIND DECOMPRESS support and on‑demand decompression Nitin Gote
2025-10-13 14:28 ` ✓ CI.KUnit: success for drm/xe: add VM_BIND DECOMPRESS support and on‑demand decompression (rev2) Patchwork
2025-10-13 14:48 ` [PATCH v2 1/3] drm/xe: add VM_BIND DECOMPRESS uapi flag Nitin Gote
2025-10-13 14:48 ` [PATCH v2 2/3] drm/xe: add xe_migrate_resolve wrapper and is_vram_resolve support Nitin Gote
2025-10-13 14:48 ` [PATCH v2 3/3] drm/xe: implement VM_BIND decompression in vm_bind_ioctl Nitin Gote
2025-10-13 16:10   ` Matthew Brost
2025-10-13 16:12     ` Matthew Brost [this message]
2025-10-13 15:06 ` ✓ Xe.CI.BAT: success for drm/xe: add VM_BIND DECOMPRESS support and on‑demand decompression (rev2) Patchwork
2025-10-13 16:28 ` ✓ Xe.CI.Full: " Patchwork
