Intel-XE Archive on lore.kernel.org
From: Matthew Auld <matthew.auld@intel.com>
To: Nitin Gote <nitin.r.gote@intel.com>, matthew.brost@intel.com
Cc: intel-xe@lists.freedesktop.org, michal.mrozek@intel.com
Subject: Re: [PATCH v5 3/3] drm/xe: implement VM_BIND decompression in vm_bind_ioctl
Date: Mon, 10 Nov 2025 12:26:05 +0000	[thread overview]
Message-ID: <40c255c5-3861-4aac-b5d0-7caa6c9ae624@intel.com> (raw)
In-Reply-To: <a6d92296-31b3-41bf-9ae1-d3942bd017c4@intel.com>

On 10/11/2025 11:46, Matthew Auld wrote:
> On 07/11/2025 11:48, Nitin Gote wrote:
>> Implement handling of VM_BIND(..., DECOMPRESS) in xe_vm_bind_ioctl.
>>
>> Key changes:
>> - Parse and record per-op intent (op->map.request_decompress) when the
>>    DECOMPRESS flag is present.
>> - Introduce xe_pat_index_get_comp_en() helper to check if a PAT index
>>    has compression enabled via the XE2_COMP_EN bit.
>> - Validate DECOMPRESS preconditions in the ioctl path:
>>    - Only valid for MAP ops.
>>    - The provided pat_index must select the device's "no-compression" PAT.
>>    - Only meaningful on devices with flat CCS and XE2+ graphics;
>>      otherwise return -EOPNOTSUPP.
>>    - Use XE_IOCTL_DBG for uAPI sanity checks.
>> - Implement xe_bo_decompress():
>>    For VRAM BOs run xe_bo_move_notify(), reserve one fence slot,
>>    schedule xe_migrate_resolve(), and attach the returned fence
>>    with DMA_RESV_USAGE_KERNEL. Non-VRAM cases are silent no-ops.
>> - Wire scheduling into vma_lock_and_validate() so VM_BIND will schedule
>>    decompression when request_decompress is set.
>> - Handle fault-mode VMs by performing decompression synchronously during
>>    the bind process, ensuring that the resolve is completed before the
>>    bind finishes.
>>
>> This schedules an in-place GPU resolve (xe_migrate_resolve) for
>> decompression.
>>
>> v5: (Matt)
>>     - Correct the condition check of xe_pat_index_get_comp_en
>>
>> v4: (Matt)
>>     - Introduce xe_pat_index_get_comp_en(), which checks
>>       XE2_COMP_EN for the pat_index
>>     - .interruptible should be true, everything else false
>>
>> v3: (Matt)
>>     - s/xe_bo_schedule_decompress/xe_bo_decompress
>>     - skip the decompress step if the BO isn't in VRAM
>>     - start/size not required in xe_bo_schedule_decompress
>>     - Use xe_bo_move_notify instead of xe_vm_invalidate_vma
>>       with respect to invalidation.
>>     - Nits
>>
>> v2:
>>     - Move decompression work out of vm_bind ioctl. (Matt)
>>     - Put that work in a small helper at the BO/migrate layer invoke it
>>       from vma_lock_and_validate which already runs under drm_exec.
>>     - Move lightweight checks to vm_bind_ioctl_check_args (Matthew Auld)
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Matthew Auld <matthew.auld@intel.com>
>> Acked-by: Michal Mrozek <michal.mrozek@intel.com>
>> Signed-off-by: Nitin Gote <nitin.r.gote@intel.com>
>> ---
>> Hi Matt,
>>
>> You're correct. The polarity of the xe_pat_index_get_comp_en
>> check was wrong. I misread it, sorry for the confusion.
>> Corrected it in this version.
>>
>> Thank you,
>> -Nitin
>>
>>
>>   drivers/gpu/drm/xe/xe_bo.c       | 52 ++++++++++++++++++++++++++++++++
>>   drivers/gpu/drm/xe/xe_bo.h       |  2 ++
>>   drivers/gpu/drm/xe/xe_pat.c      |  6 ++++
>>   drivers/gpu/drm/xe/xe_pat.h      | 10 ++++++
>>   drivers/gpu/drm/xe/xe_vm.c       | 40 +++++++++++++++++-------
>>   drivers/gpu/drm/xe/xe_vm_types.h |  2 ++
>>   6 files changed, 101 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index b0bd31d14bb9..95331285c418 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -3330,6 +3330,58 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
>>       return 0;
>>   }
>> +/**
>> + * xe_bo_decompress - schedule in-place decompress and install fence
>> + * @bo:   buffer object (caller should hold drm_exec reservations for VM+BO)
>> + * @vm:   VM containing the VMA range
>> + *
>> + * Schedules an in-place resolve via the migrate layer and installs the
>> + * returned dma_fence into the BO kernel reservation slot (DMA_RESV_USAGE_KERNEL).
>> + * In preempt fence mode, this operation interrupts hardware execution,
>> + * which is expensive. Page fault mode is recommended for better performance.
> 
> Maybe add a comment somewhere here that this currently only works on 
> dgpu? In case someone tries to use this on igpu too, which the code 
> below will silently skip.
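
Something like this wording could work for the kernel-doc note (just a suggestion, phrasing up for grabs):

```c
/*
 * Note: currently only supported on dgpu; on igpu, or for any BO not
 * currently placed in VRAM, this is a silent no-op.
 */
```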
> 
>> + *
>> + * Returns 0 on success, negative errno on error.
>> + */
>> +int xe_bo_decompress(struct xe_bo *bo, struct xe_vm *vm)
>> +{
>> +    struct xe_tile *tile = xe_device_get_root_tile(vm->xe);
> 
> Looks like we could extract this from the bo instead:
> 
> xe = xe_bo_device(bo);
> xe_device_get_root_tile(xe);
> 
> And we can then drop the vm arg?
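
i.e. the opening would look roughly like this if the vm argument goes away (untested sketch, assuming the existing xe_bo_device() helper):

```c
int xe_bo_decompress(struct xe_bo *bo)
{
	struct xe_device *xe = xe_bo_device(bo);
	struct xe_tile *tile = xe_device_get_root_tile(xe);
	...
}
```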
> 
>> +    struct dma_fence *decomp_fence = NULL;
>> +    int err = 0;
> 
> Nit: Please put err at the end of the block. Reverse Christmas tree.
> 
>> +    struct ttm_operation_ctx op_ctx = {
>> +        .interruptible = true,
>> +        .no_wait_gpu = false,
>> +        .gfp_retry_mayfail = false,
>> +    };
>> +
>> +    /* Silently skip decompression for non-VRAM buffers */
>> +    if (!bo->ttm.resource || !mem_type_is_vram(bo->ttm.resource->mem_type))
>> +        return 0;
>> +
>> +    /* Notify before scheduling resolve */
>> +    err = xe_bo_move_notify(bo, &op_ctx);
>> +    if (err)
>> +        return err;
>> +
>> +    /* Reserve fence slot before scheduling */
>> +    err = dma_resv_reserve_fences(bo->ttm.base.resv, 1);
>> +    if (err)
>> +        return err;
>> +
>> +    /* Schedule the in-place decompression */
>> +    decomp_fence = xe_migrate_resolve(tile->migrate,
>> +                      bo,
>> +                      bo->ttm.resource);
>> +
>> +    if (IS_ERR(decomp_fence))
>> +        return PTR_ERR(decomp_fence);
>> +
>> +    /* Install kernel-usage fence */
>> +    dma_resv_add_fence(bo->ttm.base.resv, decomp_fence, DMA_RESV_USAGE_KERNEL);
>> +    dma_fence_put(decomp_fence);
>> +
>> +    return 0;
>> +}
>> +
>>   /**
>>    * xe_bo_lock() - Lock the buffer object's dma_resv object
>>    * @bo: The struct xe_bo whose lock is to be taken
>> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
>> index 911d5b90461a..346b50a1247c 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.h
>> +++ b/drivers/gpu/drm/xe/xe_bo.h
>> @@ -309,6 +309,8 @@ int xe_bo_dumb_create(struct drm_file *file_priv,
>>   bool xe_bo_needs_ccs_pages(struct xe_bo *bo);
>> +int xe_bo_decompress(struct xe_bo *bo, struct xe_vm *vm);
>> +
>>   static inline size_t xe_bo_ccs_pages_start(struct xe_bo *bo)
>>   {
>>       return PAGE_ALIGN(xe_bo_size(bo));
>> diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
>> index 68171cceea18..1b4d5d3def0f 100644
>> --- a/drivers/gpu/drm/xe/xe_pat.c
>> +++ b/drivers/gpu/drm/xe/xe_pat.c
>> @@ -196,6 +196,12 @@ u16 xe_pat_index_get_coh_mode(struct xe_device *xe, u16 pat_index)
>>       return xe->pat.table[pat_index].coh_mode;
>>   }
>> +bool xe_pat_index_get_comp_en(struct xe_device *xe, u16 pat_index)
>> +{
>> +    WARN_ON(pat_index >= xe->pat.n_entries);
>> +    return !!(xe->pat.table[pat_index].value & XE2_COMP_EN);
>> +}
> 
> Looks like there are now three in-flight series, each implementing a 
> helper like this, but all slightly different. Also not sure which is 
> going to be merged first. I personally prefer making this the same as 
> coh_mode above, with a comp_en field, but either works.
> 
> https://patchwork.freedesktop.org/series/156658/
> https://patchwork.freedesktop.org/series/157036/
> 
> Wondering if it would make sense to split this helper into a separate 
> patch and merge that first, and then have all three series converge on 
> it? That would also hopefully prevent accidentally merging multiple 
> helpers for this, depending on the merge order.
> 
> + Sanjay & Xin
> 
>> +
>>   static void program_pat(struct xe_gt *gt, const struct xe_pat_table_entry table[],
>>               int n_entries)
>>   {
>> diff --git a/drivers/gpu/drm/xe/xe_pat.h b/drivers/gpu/drm/xe/xe_pat.h
>> index 05dae03a5f54..b8559120989e 100644
>> --- a/drivers/gpu/drm/xe/xe_pat.h
>> +++ b/drivers/gpu/drm/xe/xe_pat.h
>> @@ -58,4 +58,14 @@ int xe_pat_dump(struct xe_gt *gt, struct drm_printer *p);
>>    */
>>   u16 xe_pat_index_get_coh_mode(struct xe_device *xe, u16 pat_index);
>> +/**
>> + * xe_pat_index_get_comp_en - Extract the compression enable flag for
>> + * the given pat_index.
>> + * @xe: xe device
>> + * @pat_index: The pat_index to query
>> + *
>> + * Return: true if compression is enabled for this pat_index, false otherwise.
>> + */
>> +bool xe_pat_index_get_comp_en(struct xe_device *xe, u16 pat_index);
>> +
>>   #endif
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 8fb5cc6a69ec..a9927e665ad0 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2302,6 +2302,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>>                   op->map.vma_flags |= XE_VMA_DUMPABLE;
>>               if (flags & DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET)
>>                   op->map.vma_flags |= XE_VMA_MADV_AUTORESET;
>> +            op->map.request_decompress = flags & DRM_XE_VM_BIND_FLAG_DECOMPRESS;
>>               op->map.pat_index = pat_index;
>>               op->map.invalidate_on_bind =
>>                   __xe_vm_needs_clear_scratch_pages(vm, flags);
>> @@ -2838,7 +2839,7 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
>>   }
>>   static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
>> -                 bool res_evict, bool validate)
>> +                 bool res_evict, bool validate, bool request_decompress)
>>   {
>>       struct xe_bo *bo = xe_vma_bo(vma);
>>       struct xe_vm *vm = xe_vma_vm(vma);
>> @@ -2851,6 +2852,12 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
>>               err = xe_bo_validate(bo, vm,
>>                            !xe_vm_in_preempt_fence_mode(vm) &&
>>                            res_evict, exec);
>> +
>> +        if (err)
>> +            return err;
>> +
>> +        if (request_decompress)
>> +            err = xe_bo_decompress(bo, vm);
>>       }
>>       return err;
>> @@ -2938,7 +2945,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>>               err = vma_lock_and_validate(exec, op->map.vma,
>>                               res_evict,
>>                               !xe_vm_in_fault_mode(vm) ||
>> -                            op->map.immediate);
>> +                            op->map.immediate,
>> +                            op->map.request_decompress);
>>           break;
>>       case DRM_GPUVA_OP_REMAP:
>>           err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
>> @@ -2947,13 +2955,13 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>>           err = vma_lock_and_validate(exec,
>>                           gpuva_to_vma(op->base.remap.unmap->va),
>> -                        res_evict, false);
>> +                        res_evict, false, false);
>>           if (!err && op->remap.prev)
>>               err = vma_lock_and_validate(exec, op->remap.prev,
>> -                            res_evict, true);
>> +                            res_evict, true, false);
>>           if (!err && op->remap.next)
>>               err = vma_lock_and_validate(exec, op->remap.next,
>> -                            res_evict, true);
>> +                            res_evict, true, false);
>>           break;
>>       case DRM_GPUVA_OP_UNMAP:
>>           err = check_ufence(gpuva_to_vma(op->base.unmap.va));
>> @@ -2962,7 +2970,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>>           err = vma_lock_and_validate(exec,
>>                           gpuva_to_vma(op->base.unmap.va),
>> -                        res_evict, false);
>> +                        res_evict, false, false);
>>           break;
>>       case DRM_GPUVA_OP_PREFETCH:
>>       {
>> @@ -2977,7 +2985,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>>           err = vma_lock_and_validate(exec,
>>                           gpuva_to_vma(op->base.prefetch.va),
>> -                        res_evict, false);
>> +                        res_evict, false, false);
>>           if (!err && !xe_vma_has_no_bo(vma))
>>               err = xe_bo_migrate(xe_vma_bo(vma),
>>                           region_to_mem_type[region],
>> @@ -3299,7 +3307,8 @@ ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_execute, ERRNO);
>>        DRM_XE_VM_BIND_FLAG_DUMPABLE | \
>>        DRM_XE_VM_BIND_FLAG_CHECK_PXP | \
>>        DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR | \
>> -     DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET)
>> +     DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET | \
>> +     DRM_XE_VM_BIND_FLAG_DECOMPRESS)
>>   #ifdef TEST_VM_OPS_ERROR
>>   #define SUPPORTED_FLAGS    (SUPPORTED_FLAGS_STUB | FORCE_OP_ERROR)
>> @@ -3357,6 +3366,7 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
>>           bool is_null = flags & DRM_XE_VM_BIND_FLAG_NULL;
>>           bool is_cpu_addr_mirror = flags &
>>               DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR;
>> +        bool is_decompress = flags & DRM_XE_VM_BIND_FLAG_DECOMPRESS;
>>           u16 pat_index = (*bind_ops)[i].pat_index;
>>           u16 coh_mode;
>> @@ -3391,7 +3401,9 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
>>               XE_IOCTL_DBG(xe, obj_offset && (is_null ||
>>                               is_cpu_addr_mirror)) ||
>>               XE_IOCTL_DBG(xe, op != DRM_XE_VM_BIND_OP_MAP &&
>> -                 (is_null || is_cpu_addr_mirror)) ||
>> +                 (is_decompress || is_null || is_cpu_addr_mirror)) ||
>> +            XE_IOCTL_DBG(xe, is_decompress &&
>> +                 xe_pat_index_get_comp_en(xe, pat_index)) ||
>>               XE_IOCTL_DBG(xe, !obj &&
>>                    op == DRM_XE_VM_BIND_OP_MAP &&
>>                    !is_null && !is_cpu_addr_mirror) ||
>> @@ -3411,8 +3423,8 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
>>                    op == DRM_XE_VM_BIND_OP_PREFETCH) ||
>>               XE_IOCTL_DBG(xe, prefetch_region &&
>>                    op != DRM_XE_VM_BIND_OP_PREFETCH) ||
>> -            XE_IOCTL_DBG(xe,  (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
>> -                       !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
>> +            XE_IOCTL_DBG(xe, (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
>> +                      !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
> 
> Is this just fixing whitespace? If so, please consider splitting 
> unrelated styling/cosmetic fixes into a separate patch.
> 
> Otherwise lgtm,
> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
> 
>>               XE_IOCTL_DBG(xe, obj &&
>>                    op == DRM_XE_VM_BIND_OP_UNMAP) ||
>>               XE_IOCTL_DBG(xe, (flags & DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET) &&
>> @@ -3429,6 +3441,12 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
>>               err = -EINVAL;
>>               goto free_bind_ops;
>>           }
>> +
>> +        if (is_decompress && (XE_IOCTL_DBG(xe, !xe_device_has_flat_ccs(xe)) ||
>> +                      XE_IOCTL_DBG(xe, GRAPHICS_VER(xe) < 20))) {

Hmm, reading the IGT, wondering if maybe we should rather reject this on 
igpu, instead of silently skipping? It gives the impression that it 
"worked", but it actually did nothing.

Also xe_device_has_flat_ccs() can sometimes be false on igpu due to a 
BIOS setting that turns off compression on igpu, but I don't think UMD 
has a way to know that, and they might be surprised by this suddenly 
failing on xe2+ igpu in some configurations. On the other hand, if we 
always reject on igpu there are no surprises?

>> +            err = -EOPNOTSUPP;
>> +            goto free_bind_ops;
>> +        }
>>       }
>>       return 0;
>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
>> index ccd6cc090309..f4962ca0df09 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>> @@ -346,6 +346,8 @@ struct xe_vma_op_map {
>>       bool immediate;
>>       /** @read_only: Read only */
>>       bool invalidate_on_bind;
>> +    /** @request_decompress: schedule decompression for GPU map */
>> +    bool request_decompress;
>>       /** @pat_index: The pat index to use for this operation. */
>>       u16 pat_index;
>>   };
> 


Thread overview: 10+ messages
2025-11-07 11:47 [PATCH v5 0/3] drm/xe: add VM_BIND DECOMPRESS support and on‑demand decompression Nitin Gote
2025-11-07 11:47 ` [PATCH v5 1/3] drm/xe: add VM_BIND DECOMPRESS uapi flag Nitin Gote
2025-11-07 11:48 ` [PATCH v5 2/3] drm/xe: add xe_migrate_resolve wrapper and is_vram_resolve support Nitin Gote
2025-11-07 11:48 ` [PATCH v5 3/3] drm/xe: implement VM_BIND decompression in vm_bind_ioctl Nitin Gote
2025-11-10 11:46   ` Matthew Auld
2025-11-10 12:26     ` Matthew Auld [this message]
2025-11-10 22:16     ` Wang, X
2025-11-07 12:42 ` ✓ CI.KUnit: success for drm/xe: add VM_BIND DECOMPRESS support and on‑demand decompression (rev5) Patchwork
2025-11-07 13:54 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-11-09  1:09 ` ✗ Xe.CI.Full: " Patchwork
