From: Matthew Brost <matthew.brost@intel.com>
To: Nitin Gote <nitin.r.gote@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
<himal.prasad.ghimiray@intel.com>, <thomas.hellstrom@intel.com>
Subject: Re: [PATCH v1 5/5] drm/xe: perform on-demand decompression on VMA pagefaults
Date: Tue, 23 Sep 2025 10:29:42 -0700 [thread overview]
Message-ID: <aNLZBtZQbHdspk6b@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20250918142529.608432-6-nitin.r.gote@intel.com>
On Thu, Sep 18, 2025 at 07:55:29PM +0530, Nitin Gote wrote:
> When a VMA pagefault occurs and the bound BO has the DECOMPRESS flag,
> schedule a resolve (xe_migrate_resolve) to perform per-page decompression
> before binding the VMA to the faulting GT.
>
> Behavior:
> - schedule xe_migrate_resolve() while holding appropriate reservations,
> - wait on the returned fence, then clear the BO's DECOMPRESS flag
> on success,
> - proceed with the normal rebind path.
>
> This ensures on-demand fault-time decompression works consistently with
> the VM_BIND path.
>
Why special-case decompress for page faults? I don't really understand
the reasoning. Beyond that, there are quite a few problematic issues
with the way this is implemented, but I don't see the point in going
over them until I understand why this is special-cased (i.e., why don't
you just do this in the bind IOCTL?).
Matt
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Nitin Gote <nitin.r.gote@intel.com>
> ---
> drivers/gpu/drm/xe/xe_gt_pagefault.c | 41 ++++++++++++++++++++++++++--
> 1 file changed, 38 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> index a054d6010ae0..4a3f2682ed85 100644
> --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
> +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> @@ -95,11 +95,14 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
> bool atomic)
> {
> struct xe_vm *vm = xe_vma_vm(vma);
> + struct xe_bo *bo = xe_vma_bo(vma);
> struct xe_tile *tile = gt_to_tile(gt);
> struct xe_validation_ctx ctx;
> struct drm_exec exec;
> struct dma_fence *fence;
> int err, needs_vram;
> + bool needs_decompression = false;
> + struct dma_fence *decomp_fence = NULL;
>
> lockdep_assert_held_write(&vm->lock);
>
> @@ -112,8 +115,14 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
>
> trace_xe_vma_pagefault(vma);
>
> + /* Check if decompression is needed */
> + if (bo && (bo->flags & DRM_XE_VM_BIND_FLAG_DECOMPRESS)) {
> + needs_decompression = true;
> + drm_dbg(&vm->xe->drm, "Decompressing VMA during page fault handling\n");
> + }
> +
> /* Check if VMA is valid, opportunistic check only */
> - if (vma_is_valid(tile, vma) && !atomic)
> + if (vma_is_valid(tile, vma) && !atomic && !needs_decompression)
> return 0;
>
> retry_userptr:
> @@ -135,6 +144,21 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
> if (err)
> goto unlock_dma_resv;
>
> + /* Perform decompression inside proper locking context */
> + if (needs_decompression) {
> + decomp_fence = xe_migrate_resolve(tile->migrate, xe_vma_bo(vma),
> + xe_vma_bo(vma),
> + xe_vma_bo(vma)->ttm.resource,
> + xe_vma_bo(vma)->ttm.resource,
> + false);
> + if (IS_ERR(decomp_fence)) {
> + drm_err(&vm->xe->drm,
> + "Decompression failed during page fault handling\n");
> + err = PTR_ERR(decomp_fence);
> + goto unlock_dma_resv;
> + }
> + }
> +
> /* Bind VMA only to the GT that has faulted */
> trace_xe_vma_pf_bind(vma);
> xe_vm_set_validation_exec(vm, &exec);
> @@ -147,8 +171,19 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
> }
> }
>
> - dma_fence_wait(fence, false);
> - dma_fence_put(fence);
> + /* Wait for decompression first, then rebind */
> + if (decomp_fence) {
> + dma_fence_wait(decomp_fence, false);
> + dma_fence_put(decomp_fence);
> +
> + /* Clear the decompression flag after successful decompression */
> + bo->flags &= ~DRM_XE_VM_BIND_FLAG_DECOMPRESS;
> + }
> +
> + if (fence) {
> + dma_fence_wait(fence, false);
> + dma_fence_put(fence);
> + }
>
> unlock_dma_resv:
> xe_validation_ctx_fini(&ctx);
> --
> 2.25.1
>
Thread overview: 14+ messages
2025-09-18 14:25 [PATCH v1 0/5] drm/xe: add VM_BIND DECOMPRESS support and on-demand decompression Nitin Gote
2025-09-18 14:25 ` [PATCH v1 1/5] drm/xe: add VM_BIND DECOMPRESS uapi flag Nitin Gote
2025-09-23 17:35 ` Matthew Brost
2025-09-18 14:25 ` [PATCH v1 2/5] drm/xe: add xe_migrate_resolve wrapper and is_vram_resolve support Nitin Gote
2025-09-18 14:25 ` [PATCH v1 3/5] drm/xe: add drm_exec helper to atomically lock VM (+ optional BO) with BOOKKEEP retry Nitin Gote
2025-09-18 14:25 ` [PATCH v1 4/5] drm/xe: implement VM_BIND decompression in vm_bind_ioctl Nitin Gote
2025-09-18 16:13 ` Matthew Auld
2025-09-22 7:57 ` Gote, Nitin R
2025-09-23 17:26 ` Matthew Brost
2025-09-18 14:25 ` [PATCH v1 5/5] drm/xe: perform on-demand decompression on VMA pagefaults Nitin Gote
2025-09-23 17:29 ` Matthew Brost [this message]
2025-09-18 15:12 ` ✓ CI.KUnit: success for drm/xe: add VM_BIND DECOMPRESS support and on-demand decompression Patchwork
2025-09-18 15:57 ` ✓ Xe.CI.BAT: " Patchwork
2025-09-19 0:37 ` ✓ Xe.CI.Full: " Patchwork