From: Matthew Brost <matthew.brost@intel.com>
To: "Chang, Bruce" <yu.bruce.chang@intel.com>
Cc: Stuart Summers <stuart.summers@intel.com>,
intel-xe@lists.freedesktop.org
Subject: Re: [Intel-xe] [PATCH] drm/xe: Enable scratch page when page fault is enabled
Date: Mon, 28 Aug 2023 19:22:17 +0000 [thread overview]
Message-ID: <ZOzz6eMcc+ou494X@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <20230826001412.6949-1-yu.bruce.chang@intel.com>
On Sat, Aug 26, 2023 at 12:14:12AM +0000, Chang, Bruce wrote:
> i915 can use a scratch page even when page faulting is enabled; this
> patch ports that feature over to Xe.
>
I think we need to ask why we support this before merging it.
"Because i915 does this" is not a valid answer. Please explain why a
UMD needs this feature.
> The current i915 solution modifies the page tables directly, which may
> be hard to upstream, so following the existing i915 approach would
> require a more complex solution to fit the current Xe framework.
>
> This patch aims to have minimal impact on the existing driver while
> still enabling scratch page support.
>
> So, the idea is to bind a scratch vma if the page fault comes from an
> invalid access. This patch takes advantage of the null pte at this
> point; we may introduce a dedicated vma type for the scratch vma if
> needed. After the bind, the user app can continue to run without
> causing a fatal failure or a reset and stop.
>
> If the app later binds this scratch vma's range to a valid address,
> that bind will fail; this patch handles the failure and unbinds the
> scratch vma[s] so that the user bind can be retried at the valid
> address.
>
> This patch only kicks in when there is a failure in both the page
> fault and bind paths, so it should have no impact on the existing code
> path. On the other hand, since it uses actual page tables instead of
> special scratch page tables, it opens the possibility of skipping TLB
> invalidation on unbind when all upper-level page tables are still in
> use.
>
> Tested with new scratch IGT tests, which will be sent out for review.
>
> Cc: Oak Zeng <oak.zeng@intel.com>
> Cc: Brian Welty <brian.welty@intel.com>
> Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
> Cc: Stuart Summers <stuart.summers@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_gt_pagefault.c | 9 ++++--
> drivers/gpu/drm/xe/xe_vm.c | 48 +++++++++++++++++++++++-----
> drivers/gpu/drm/xe/xe_vm.h | 2 ++
> 3 files changed, 49 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> index b6f781b3d9d7..524b38df3d7a 100644
> --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
> +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> @@ -137,8 +137,13 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
> write_locked = true;
> vma = lookup_vma(vm, pf->page_addr);
> if (!vma) {
> - ret = -EINVAL;
> - goto unlock_vm;
> + if (vm->flags & XE_VM_FLAG_SCRATCH_PAGE)
> + vma = xe_bind_scratch_vma(vm, pf->page_addr, SZ_64K);
I think this would be better.
s/xe_bind_scratch_vma/xe_vm_create_scratch_vma
> +
> + if (!vma) {
> + ret = -EINVAL;
> + goto unlock_vm;
> + }
> }
>
> if (!xe_vma_is_userptr(vma) || !xe_vma_userptr_check_repin(vma)) {
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 389ac5ba8ddf..4c3d5d781b58 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1262,7 +1262,8 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
> }
> }
>
> - if (flags & XE_VM_FLAG_SCRATCH_PAGE) {
> + if (flags & XE_VM_FLAG_SCRATCH_PAGE &&
> + (!(flags & XE_VM_FLAG_FAULT_MODE))) {
> for_each_tile(tile, xe, id) {
> if (!vm->pt_root[id])
> continue;
> @@ -1998,10 +1999,6 @@ int xe_vm_create_ioctl(struct drm_device *dev, void *data,
> if (XE_IOCTL_DBG(xe, args->flags & ~ALL_DRM_XE_VM_CREATE_FLAGS))
> return -EINVAL;
>
> - if (XE_IOCTL_DBG(xe, args->flags & DRM_XE_VM_CREATE_SCRATCH_PAGE &&
> - args->flags & DRM_XE_VM_CREATE_FAULT_MODE))
> - return -EINVAL;
> -
> if (XE_IOCTL_DBG(xe, args->flags & DRM_XE_VM_CREATE_COMPUTE_MODE &&
> args->flags & DRM_XE_VM_CREATE_FAULT_MODE))
> return -EINVAL;
> @@ -2783,6 +2780,39 @@ static int __xe_vma_op_execute(struct xe_vm *vm, struct xe_vma *vma,
> return err;
> }
>
> +struct xe_vma *xe_bind_scratch_vma(struct xe_vm *vm, u64 addr, u64 size)
> +{
Nit: the size argument really isn't needed.
> + struct xe_vma *vma = 0;
> +
> + if (!vm->size)
Use xe_vm_is_closed_or_banned() rather than a vm->size check.
> + return 0;
Probably an ERR_PTR with the correct error codes.
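For example (a quick userspace sketch, not Xe code — ERR_PTR/IS_ERR are
re-implemented here only to show the convention; the real helpers live
in linux/err.h, and create_scratch_vma / fake_vma are made-up names):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's ERR_PTR helpers (linux/err.h). */
#define MAX_ERRNO 4095
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
	return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

struct fake_vma { uint64_t start; };

/* Hypothetical shape of the create helper: a valid pointer on success,
 * an encoded errno on failure -- never a bare 0/NULL, so the caller can
 * distinguish "VM closed" from other failures. */
static struct fake_vma *create_scratch_vma(int vm_closed)
{
	static struct fake_vma vma = { .start = 0x1000 };

	if (vm_closed)
		return ERR_PTR(-ENOENT);
	return &vma;
}
```

i.e. the helper would return ERR_PTR(-ENOENT) (or similar) on a
closed/banned VM rather than 0, and handle_pagefault() would check
IS_ERR() on the result.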
> +
> + vma = xe_vma_create(vm, NULL, 0, addr, addr + size - 1, false, true, 0);
> + if (!vma)
> + return 0;
> + xe_vm_insert_vma(vm, vma);
> +
Need to check the return of xe_vm_insert_vma, also probably WARN on this
as it shouldn't ever fail.
> + /* fault will handle the bind */
> +
> + return vma;
> +}
> +
> +int xe_unbind_scratch_vma(struct xe_vm *vm, u64 addr, u64 range)
> +{
> + struct xe_vma *vma;
> +
> + vma = xe_vm_find_overlapping_vma(vm, addr, range);
> + if (!vma)
> + return -ENXIO;
> +
> + if (xe_vma_is_null(vma)) {
> + prep_vma_destroy(vm, vma, true);
> + xe_vm_unbind(vm, vma, NULL, NULL, 0, NULL, true, false);
> + }
> +
> + return 0;
> +}
> +
> static int xe_vma_op_execute(struct xe_vm *vm, struct xe_vma_op *op)
> {
> int ret = 0;
> @@ -3205,7 +3235,6 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> err = vm_bind_ioctl_check_args(xe, args, &bind_ops, &async);
> if (err)
> return err;
> -
Not related.
> if (args->exec_queue_id) {
> q = xe_exec_queue_lookup(xef, args->exec_queue_id);
> if (XE_IOCTL_DBG(xe, !q)) {
> @@ -3352,10 +3381,13 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> u64 range = bind_ops[i].range;
> u64 addr = bind_ops[i].addr;
> u32 op = bind_ops[i].op;
> -
> +retry:
> err = vm_bind_ioctl_lookup_vma(vm, bos[i], addr, range, op);
> - if (err)
> + if (err) {
> + if (!xe_unbind_scratch_vma(vm, addr, range))
> + goto retry;
> goto free_syncs;
> + }
You don't need this change, GPUVA handles all of this (e.g. it will
create ops to unbind the old VMA, bind the new one).
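To illustrate the idea (a hedged userspace toy, not the actual drm_gpuva
code — compute_ops, sm_ops, and range are made-up names): when a new
bind overlaps an existing mapping, the VA manager itself generates the
unmap/remap ops, so no manual retry loop is needed in the ioctl:

```c
#include <assert.h>
#include <stdint.h>

struct range { uint64_t start, end; }; /* half-open: [start, end) */

struct sm_ops {
	int unmap_old;       /* old mapping must be torn down */
	struct range prev;   /* surviving piece before the new bind */
	struct range next;   /* surviving piece after the new bind */
};

/* Toy version of split/merge op generation: given an existing mapping
 * and a new bind request, compute what happens to the old mapping.
 * The real driver gets the equivalent op list from GPUVA. */
static struct sm_ops compute_ops(struct range old, struct range req)
{
	struct sm_ops ops = { 0 };

	if (req.start < old.end && old.start < req.end) {
		ops.unmap_old = 1;
		if (old.start < req.start)
			ops.prev = (struct range){ old.start, req.start };
		if (req.end < old.end)
			ops.next = (struct range){ req.end, old.end };
	}
	return ops;
}
```

So binding a real BO over the scratch vma's range would simply yield an
unmap of the scratch mapping plus a map of the new one in one op list.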
Matt
> }
>
> for (i = 0; i < args->num_binds; ++i) {
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 6de6e3edb24a..6447bed427b1 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -212,6 +212,8 @@ int xe_vma_userptr_pin_pages(struct xe_vma *vma);
>
> int xe_vma_userptr_check_repin(struct xe_vma *vma);
>
> +struct xe_vma *xe_bind_scratch_vma(struct xe_vm *vm, u64 addr, u64 size);
> +
> /*
> * XE_ONSTACK_TV is used to size the tv_onstack array that is input
> * to xe_vm_lock_dma_resv() and xe_vm_unlock_dma_resv().
> --
> 2.25.1
>