From: Nitin Gote <nitin.r.gote@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: himal.prasad.ghimiray@intel.com, matthew.brost@intel.com,
thomas.hellstrom@intel.com, Nitin Gote <nitin.r.gote@intel.com>
Subject: [PATCH v1 4/5] drm/xe: implement VM_BIND decompression in vm_bind_ioctl
Date: Thu, 18 Sep 2025 19:55:28 +0530
Message-ID: <20250918142529.608432-5-nitin.r.gote@intel.com>
In-Reply-To: <20250918142529.608432-1-nitin.r.gote@intel.com>

Implement handling of VM_BIND(..., DECOMPRESS) in xe_vm_bind_ioctl:
- Validate decompression preconditions (VRAM buffer, flat CCS support,
XE2+ hardware, and an uncompressed PAT index).
- Mark the BO with DRM_XE_VM_BIND_FLAG_DECOMPRESS.
- Invalidate any overlapping VMA so stale GPU mappings are destroyed.
- Use the helper xe_vm_exec_lock_vm_and_bo(..., wait_bookkeep=true) to
  atomically acquire the VM reservation and the optional BO reservation,
  and to handle the BOOKKEEP wait/retry:
* Drop vm->lock before calling the helper.
* Do VMA invalidation or schedule the migrate resolve while reservations
are held.
* Call drm_exec_fini() to release the reservations, then wait on the
migrate/decompression fence without holding any reservations, and
re-acquire vm->lock to continue.
- Defer decompression when the VM is in fault mode.

This schedules an in-place GPU resolve (xe_migrate_resolve) for
decompression and waits for completion before proceeding with the bind.
The change centralises the drm_exec + BOOKKEEP retry logic and avoids
sleeping while holding resv locks; a sketch of the pattern follows below.
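
For reference, a minimal sketch of the pattern the helper centralises
(the helper itself is added in patch 3/5 of this series; everything
else here is illustrative, assumes the current drm_exec and dma_resv
APIs, and trims error paths):

  #include <drm/drm_exec.h>
  #include <linux/dma-resv.h>
  #include <linux/sched.h>      /* MAX_SCHEDULE_TIMEOUT */
  #include "xe_bo.h"            /* xe headers assumed in-tree */
  #include "xe_vm.h"

  /* Illustrative sketch only: lock the VM resv plus an optional BO
   * resv under drm_exec, then optionally wait for BOOKKEEP fences
   * while the reservations are held.
   */
  static int xe_vm_exec_lock_vm_and_bo(struct xe_vm *vm, struct xe_bo *bo,
                                       struct drm_exec *exec,
                                       bool wait_bookkeep, bool intr)
  {
          long err = 0;

          drm_exec_init(exec, intr ? DRM_EXEC_INTERRUPTIBLE_WAIT : 0, 0);
          drm_exec_until_all_locked(exec) {
                  /* VM reservation first, then the optional BO. */
                  err = drm_exec_lock_obj(exec, xe_vm_obj(vm));
                  drm_exec_retry_on_contention(exec);
                  if (err)
                          break;
                  if (bo) {
                          err = drm_exec_lock_obj(exec, &bo->ttm.base);
                          drm_exec_retry_on_contention(exec);
                          if (err)
                                  break;
                  }
          }
          if (!err && wait_bookkeep && bo)
                  /* Wait for all fences, including BOOKKEEP usage. */
                  err = dma_resv_wait_timeout(bo->ttm.base.resv,
                                              DMA_RESV_USAGE_BOOKKEEP,
                                              intr, MAX_SCHEDULE_TIMEOUT);
          if (err < 0)
                  drm_exec_fini(exec);
          return err < 0 ? err : 0;
  }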
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Nitin Gote <nitin.r.gote@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 134 +++++++++++++++++++++++++++++++++++--
1 file changed, 130 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index fae88c4c981e..cdeb7995eab1 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -3266,7 +3266,8 @@ ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_execute, ERRNO);
DRM_XE_VM_BIND_FLAG_NULL | \
DRM_XE_VM_BIND_FLAG_DUMPABLE | \
DRM_XE_VM_BIND_FLAG_CHECK_PXP | \
- DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR)
+ DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR | \
+ DRM_XE_VM_BIND_FLAG_DECOMPRESS)
#ifdef TEST_VM_OPS_ERROR
#define SUPPORTED_FLAGS (SUPPORTED_FLAGS_STUB | FORCE_OP_ERROR)
@@ -3324,6 +3325,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
bool is_null = flags & DRM_XE_VM_BIND_FLAG_NULL;
bool is_cpu_addr_mirror = flags &
DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR;
+ bool is_decompress = flags & DRM_XE_VM_BIND_FLAG_DECOMPRESS;
u16 pat_index = (*bind_ops)[i].pat_index;
u16 coh_mode;
@@ -3361,7 +3363,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
(is_null || is_cpu_addr_mirror)) ||
XE_IOCTL_DBG(xe, !obj &&
op == DRM_XE_VM_BIND_OP_MAP &&
- !is_null && !is_cpu_addr_mirror) ||
+ (is_decompress || (!is_null && !is_cpu_addr_mirror))) ||
XE_IOCTL_DBG(xe, !obj &&
op == DRM_XE_VM_BIND_OP_UNMAP_ALL) ||
XE_IOCTL_DBG(xe, addr &&
@@ -3378,8 +3380,8 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
op == DRM_XE_VM_BIND_OP_PREFETCH) ||
XE_IOCTL_DBG(xe, prefetch_region &&
op != DRM_XE_VM_BIND_OP_PREFETCH) ||
- XE_IOCTL_DBG(xe, (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
- !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
+ XE_IOCTL_DBG(xe, (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
+ !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
XE_IOCTL_DBG(xe, obj &&
op == DRM_XE_VM_BIND_OP_UNMAP)) {
err = -EINVAL;
@@ -3698,6 +3700,130 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
u64 obj_offset = bind_ops[i].obj_offset;
u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
u16 pat_index = bind_ops[i].pat_index;
+ bool do_decompress = flags & DRM_XE_VM_BIND_FLAG_DECOMPRESS;
+
+ /* Handle in-place decompression before creating the bind ops */
+ if (do_decompress && bos[i]) {
+ struct xe_vma *existing_vma;
+ struct dma_fence *decomp_fence;
+ struct xe_tile *tile = xe_device_get_root_tile(xe);
+
+ bos[i]->flags |= DRM_XE_VM_BIND_FLAG_DECOMPRESS;
+
+ /* Decompression operates in place and requires a VRAM buffer */
+ if (!mem_type_is_vram(bos[i]->ttm.resource->mem_type)) {
+ drm_err(&vm->xe->drm,
+ "Decompression requires VRAM buffer\n");
+ err = -EINVAL;
+ goto unwind_ops;
+ }
+
+ /* Check hardware support for decompression */
+ if (!xe_device_has_flat_ccs(vm->xe)) {
+ drm_err(&vm->xe->drm,
+ "Decompression requires flat CCS support\n");
+ err = -EOPNOTSUPP;
+ goto unwind_ops;
+ }
+
+ if (GRAPHICS_VER(vm->xe) < 20) {
+ drm_err(&vm->xe->drm,
+ "Decompression requires XE2+ hardware\n");
+ err = -EOPNOTSUPP;
+ goto unwind_ops;
+ }
+
+ /* Validate that user provided an uncompressed PAT index */
+ if (pat_index == vm->xe->pat.idx[XE_CACHE_NONE_COMPRESSION]) {
+ drm_err(&vm->xe->drm,
+ "Decompression requires uncompressed PAT index\n");
+ err = -EINVAL;
+ goto unwind_ops;
+ }
+
+ /* Invalidate VMA so GPU mappings are destroyed */
+ existing_vma = xe_vm_find_overlapping_vma(vm, addr, range);
+ if (existing_vma) {
+ struct xe_bo *existing_bo = xe_vma_bo(existing_vma);
+
+ drm_dbg(&vm->xe->drm,
+ "Found overlapping VMA - automatic invalidation will occur\n");
+
+ /* Drop vm->lock and atomically take VM+BO resvs via drm_exec */
+ up_write(&vm->lock);
+ struct drm_exec exec;
+
+ err = xe_vm_exec_lock_vm_and_bo(vm, existing_bo, &exec, true, true);
+ if (err) {
+ down_write(&vm->lock);
+ goto put_obj;
+ }
+
+ /* Invalidate the VMA while reservations are held. */
+ err = xe_vm_invalidate_vma(existing_vma);
+
+ /* release reservations (drm_exec_fini releases VM+BO resvs) */
+ drm_exec_fini(&exec);
+
+ /* Re-acquire vm->lock */
+ down_write(&vm->lock);
+
+ if (err)
+ goto put_obj;
+ }
+
+ /* Handle decompression differently for fault mode */
+ if (xe_vm_in_fault_mode(vm)) {
+ drm_dbg(&vm->xe->drm,
+ "Deferring decompression for fault mode\n");
+ /* Skip immediate decompression; it is handled on demand at fault time */
+ goto unwind_ops;
+ }
+
+ /* Drop vm->lock and atomically acquire VM+BO reservations via drm_exec.
+ * Hold the reservations while we wait for BOOKKEEP and schedule the
+ * in-place resolve. Release the reservations (drm_exec_fini) before
+ * waiting on the migrate fence so we don't hold resv locks while
+ * sleeping.
+ */
+ up_write(&vm->lock);
+ struct drm_exec exec;
+
+ err = xe_vm_exec_lock_vm_and_bo(vm, bos[i], &exec, true, true);
+ if (err) {
+ drm_err(&vm->xe->drm,
+ "Failed to acquire reservations for decompression: %d\n",
+ err);
+ down_write(&vm->lock);
+ goto unwind_ops;
+ }
+
+ /* With exec holding reservations, schedule the in-place decompression */
+ decomp_fence = xe_migrate_resolve(tile->migrate,
+ bos[i], bos[i],
+ bos[i]->ttm.resource,
+ bos[i]->ttm.resource,
+ false);
+
+ /* Release the reservations so the scheduler can observe the BO fences */
+ drm_exec_fini(&exec);
+
+ if (IS_ERR(decomp_fence)) {
+ err = PTR_ERR(decomp_fence);
+ drm_err(&vm->xe->drm,
+ "In-place decompression failed: %d\n", err);
+ /* vm->lock must be held for unwind_ops cleanup */
+ down_write(&vm->lock);
+ goto unwind_ops;
+ }
+
+ /* Wait for in-place decompression to complete without holding vm->lock */
+ dma_fence_wait(decomp_fence, false);
+ dma_fence_put(decomp_fence);
+
+ /* Now re-acquire vm->lock and continue */
+ down_write(&vm->lock);
+ }
ops[i] = vm_bind_ioctl_ops_create(vm, &vops, bos[i], obj_offset,
addr, range, op, flags,
prefetch_region, pat_index);
--
2.25.1
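
For completeness, binding with the new flag from userspace would look
roughly like this (a sketch assuming the DRM_XE_VM_BIND_FLAG_DECOMPRESS
uapi flag from patch 1/5; fd, vm_id and bo_handle come from the usual
device/VM/BO creation calls, and error handling is elided):

  #include <stdint.h>
  #include <xf86drm.h>        /* drmIoctl() */
  #include <drm/xe_drm.h>     /* xe uapi; install path may vary */

  /* Sketch: request an in-place decompressing map of a VRAM BO.
   * vm_id and bo_handle are assumed to come from earlier
   * DRM_IOCTL_XE_VM_CREATE / DRM_IOCTL_XE_GEM_CREATE calls, and
   * pat_index must be an uncompressed PAT index for the device.
   */
  static int bind_decompress(int fd, uint32_t vm_id, uint32_t bo_handle,
                             uint64_t gpu_addr, uint64_t size,
                             uint16_t uncompressed_pat_index)
  {
          struct drm_xe_vm_bind bind = {
                  .vm_id = vm_id,
                  .num_binds = 1,
                  .bind = {
                          .obj = bo_handle,
                          .range = size,
                          .addr = gpu_addr,
                          .op = DRM_XE_VM_BIND_OP_MAP,
                          .flags = DRM_XE_VM_BIND_FLAG_DECOMPRESS,
                          .pat_index = uncompressed_pat_index,
                  },
          };

          return drmIoctl(fd, DRM_IOCTL_XE_VM_BIND, &bind);
  }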