From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>, Oak Zeng <oak.zeng@intel.com>
Cc: intel-xe@lists.freedesktop.org, joonas.lahtinen@linux.intel.com
Subject: Re: [PATCH 2/3] drm/xe: Clear scratch page before vm_bind
Date: Wed, 29 Jan 2025 09:21:49 +0100
Message-ID: <3f8fa3accc06366710b24f8145d5b23268ea9b5d.camel@linux.intel.com>
In-Reply-To: <Z5m22oRBNKgHPy0B@lstrano-desk.jf.intel.com>
On Tue, 2025-01-28 at 21:04 -0800, Matthew Brost wrote:
> On Tue, Jan 28, 2025 at 03:19:13PM -0800, Matthew Brost wrote:
> > On Tue, Jan 28, 2025 at 05:21:44PM -0500, Oak Zeng wrote:
> > > When a VM runs in fault mode with scratch pages enabled, we need
> > > to clear the scratch page mappings for the vm_bind address range
> > > before the bind. In fault mode we depend on recoverable page
> > > faults to establish mappings in the page table; if the scratch
> > > mapping is not cleared, a GPU access to the address never faults
> > > because it always hits the existing scratch page mapping.
> > >
> > > When vm_bind is called with the IMMEDIATE flag there is no need
> > > for clearing, as an immediate bind overwrites the scratch page
> > > mapping.
> > >
> > > So far only xe2 and xe3 products are allowed to enable scratch
> > > pages in fault mode. Other platforms don't allow scratch pages in
> > > fault mode, so no such clearing is needed there.
> > >
> > > Signed-off-by: Oak Zeng <oak.zeng@intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_vm.c | 32 ++++++++++++++++++++++++++++++++
> > > 1 file changed, 32 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 690330352d4c..196d347c6ac0 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -38,6 +38,7 @@
> > >  #include "xe_trace_bo.h"
> > >  #include "xe_wa.h"
> > >  #include "xe_hmm.h"
> > > +#include "i915_drv.h"
> > >  
> > >  static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> > >  {
> > > @@ -2917,6 +2918,34 @@ static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
> > >  	return 0;
> > >  }
> > >
> > > +static bool __xe_vm_needs_clear_scratch_pages(struct xe_device *xe,
> > > +					      struct xe_vm *vm, u32 bind_flags)
> > > +{
> > > +	if (!xe_vm_in_fault_mode(vm))
> > > +		return false;
> > > +
> > > +	if (!xe_vm_has_scratch(vm))
> > > +		return false;
> > > +
> > > +	if (bind_flags & DRM_XE_VM_BIND_FLAG_IMMEDIATE)
> > > +		return false;
> > > +
> > > +	if (!(IS_LUNARLAKE(xe) || IS_BATTLEMAGE(xe) || IS_PANTHERLAKE(xe)))
> > > +		return false;
> > > +
> > > +	return true;
> > > +}
> > > +
> > > +static void __xe_vm_clear_scratch_pages(struct xe_device *xe, struct xe_vm *vm,
> > > +					u64 start, u64 end)
> > > +{
> > > +	struct xe_tile *tile;
> > > +	u8 id;
> > > +
> > > +	for_each_tile(tile, xe, id)
> > > +		xe_pt_zap_range(tile, vm, start, end);
> > > +}
> > > +
> > >  int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> > >  {
> > >  	struct xe_device *xe = to_xe_device(dev);
> > > @@ -3062,6 +3091,9 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> > >  		u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
> > >  		u16 pat_index = bind_ops[i].pat_index;
> > >  
> > > +		if (__xe_vm_needs_clear_scratch_pages(xe, vm, flags))
> > > +			__xe_vm_clear_scratch_pages(xe, vm, addr, addr + range);
> >
> > A few things...
> >
> > - I believe this is only needed for bind user operations or
> >   internal MAP GPU VMA operations.
> > - I believe a TLB invalidation will be required.
> > - I don't think calling zap PTEs range works here, given how the
> >   scratch tables are set up (i.e., new PTEs need to be created
> >   pointing to an invalid state).
> > - This series appears to be untested based on the points above.
> >
> > Therefore, instead of this series, I believe you will need to fully
> > update the bind pipeline to process MAP GPU VMA operations here.
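
Agreed, especially on the TLB invalidation. To illustrate that point
with a rough, untested sketch (invalidate_tlb_range() is a placeholder
here, not an existing helper, and I'm assuming vm->xe for the device
pointer): even if zapping the range did work, each clear would need a
follow-up ranged invalidation before the bind can be considered
complete, which the regular bind pipeline already handles for us:

static void clear_scratch_range(struct xe_vm *vm, u64 start, u64 end)
{
	struct xe_tile *tile;
	u8 id;

	for_each_tile(tile, vm->xe, id) {
		/* Write the range's PTEs to an invalid state ... */
		xe_pt_zap_range(tile, vm, start, end);
		/*
		 * ... then flush the stale scratch translations;
		 * otherwise the GPU keeps hitting the cached scratch
		 * mapping and never faults.
		 */
		invalidate_tlb_range(tile, vm, start, end); /* placeholder */
	}
}
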
> >
> > So roughly...
> >
> > - Maybe include a bit in xe_vma_op_map that specifies "invalidate
> >   on bind," set in vm_bind_ioctl_ops_create, since this will need
> >   to be wired throughout the bind pipeline.
> > - Don't validate backing memory in this case.
> > - Ensure that xe_vma_ops_incr_pt_update_ops is called in this case
> >   for MAP operations, forcing entry into the xe_pt.c backend.
> > - Update xe_pt_stage_bind_walk with a variable that indicates
> >   clearing the PTE. Instead of calling pte_encode_vma in
> >   xe_pt_stage_bind_entry, set this variable for PT bind operations
> >   derived from MAP operations that meet the "invalidate on bind"
> >   condition.
>
> Ugh, typos / nonsense here. Let me try again.
>
> "Update xe_pt_stage_bind_walk with a variable that indicates clearing
> the PTE, set this for MAP ops that meet the 'invalidate on bind'
> condition. When this variable is set, instead of calling
> pte_encode_vma
> in xe_pt_stage_bind_entry, call a function which encodes an invalid
> PTE."
>
> Matt
I agree with Matt here. This needs to look just like a regular bind,
with the exception that PTEs are cleared instead of pointing towards
valid pages.
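
Roughly like the below in xe_pt_stage_bind_entry() (untested sketch;
clear_pt and pte_encode_invalid() are placeholder names rather than
the actual xe_pt.c interface, and the argument lists are
approximations):

	/*
	 * clear_pt would be set in vm_bind_ioctl_ops_create() for MAP
	 * ops that meet the "invalidate on bind" condition and wired
	 * through the bind pipeline.
	 */
	if (xe_walk->clear_pt)
		/*
		 * Encode the entry in the invalid state so a GPU
		 * access takes a recoverable fault instead of hitting
		 * the stale scratch mapping.
		 */
		pte = pte_encode_invalid(pat_index);
	else
		pte = pte_encode_vma(vma, offset, pat_index);
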
/Thomas