Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: "Zeng, Oak" <oak.zeng@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"joonas.lahtinen@linux.intel.com"
	<joonas.lahtinen@linux.intel.com>,
	"Thomas.Hellstrom@linux.intel.com"
	<Thomas.Hellstrom@linux.intel.com>
Subject: Re: [PATCH 2/3] drm/xe: Clear scratch page before vm_bind
Date: Wed, 29 Jan 2025 17:36:35 -0800	[thread overview]
Message-ID: <Z5rXo41N+OmJaN9Q@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <MN2PR11MB47285CC53C87B37B279D7C4092EE2@MN2PR11MB4728.namprd11.prod.outlook.com>

On Wed, Jan 29, 2025 at 02:01:38PM -0700, Zeng, Oak wrote:
> 
> 
> > -----Original Message-----
> > From: Brost, Matthew <matthew.brost@intel.com>
> > Sent: January 28, 2025 6:19 PM
> > To: Zeng, Oak <oak.zeng@intel.com>
> > Cc: intel-xe@lists.freedesktop.org; joonas.lahtinen@linux.intel.com;
> > Thomas.Hellstrom@linux.intel.com
> > Subject: Re: [PATCH 2/3] drm/xe: Clear scratch page before vm_bind
> > 
> > On Tue, Jan 28, 2025 at 05:21:44PM -0500, Oak Zeng wrote:
> > > When a vm runs under fault mode, if the scratch page is enabled, we
> > > need to clear the scratch page mapping before vm_bind for the
> > > vm_bind address range. Under fault mode, we depend on recoverable
> > > page faults to establish mappings in the page table. If the scratch
> > > page is not cleared, GPU access of the address won't cause a page
> > > fault because it always hits the existing scratch page mapping.
> > >
> > > When vm_bind is called with the IMMEDIATE flag, there is no need to
> > > clear, as an immediate bind can overwrite the scratch page mapping.
> > >
> > > So far only xe2 and xe3 products are allowed to enable scratch
> > > pages under fault mode. Other platforms don't allow scratch pages
> > > under fault mode, so no such clearing is needed.
> > >
> > > Signed-off-by: Oak Zeng <oak.zeng@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_vm.c | 32 ++++++++++++++++++++++++++++++++
> > >  1 file changed, 32 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 690330352d4c..196d347c6ac0 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -38,6 +38,7 @@
> > >  #include "xe_trace_bo.h"
> > >  #include "xe_wa.h"
> > >  #include "xe_hmm.h"
> > > +#include "i915_drv.h"
> > >
> > >  static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> > >  {
> > > @@ -2917,6 +2918,34 @@ static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
> > >  	return 0;
> > >  }
> > >
> > > +static bool __xe_vm_needs_clear_scratch_pages(struct xe_device *xe,
> > > +					      struct xe_vm *vm, u32 bind_flags)
> > > +{
> > > +	if (!xe_vm_in_fault_mode(vm))
> > > +		return false;
> > > +
> > > +	if (!xe_vm_has_scratch(vm))
> > > +		return false;
> > > +
> > > +	if (bind_flags & DRM_XE_VM_BIND_FLAG_IMMEDIATE)
> > > +		return false;
> > > +
> > > +	if (!(IS_LUNARLAKE(xe) || IS_BATTLEMAGE(xe) || IS_PANTHERLAKE(xe)))
> > > +		return false;
> > > +
> > > +	return true;
> > > +}
> > > +
> > > +static void __xe_vm_clear_scratch_pages(struct xe_device *xe, struct xe_vm *vm,
> > > +					u64 start, u64 end)
> > > +{
> > > +	struct xe_tile *tile;
> > > +	u8 id;
> > > +
> > > +	for_each_tile(tile, xe, id)
> > > +		xe_pt_zap_range(tile, vm, start, end);
> > > +}
> > > +
> > >  int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> > >  {
> > >  	struct xe_device *xe = to_xe_device(dev);
> > > @@ -3062,6 +3091,9 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> > >  		u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
> > >  		u16 pat_index = bind_ops[i].pat_index;
> > >
> > > +		if (__xe_vm_needs_clear_scratch_pages(xe, vm, flags))
> > > +			__xe_vm_clear_scratch_pages(xe, vm, addr, addr + range);
> > 
> > A few things...
> > 
> > - I believe this is only needed for bind user operations or internal MAP
> >   GPU VMA operations.
> 
> Did you mean this is only needed for user bind, but not for internal map?
>

Needed for user bind and also for internal MAP. Once you update the bind
pipeline, you will only care about MAP ops.
 
> > - I believe a TLB invalidation will be required.
> 
> My understanding is that NULL PTEs won't be cached in the TLB, so TLB invalidation is not needed.
> 

I would think NULL PTEs are cached, but I'm not certain either way.

> > - I don't think calling zap PTEs range works here, given how the scratch
> >   tables are set up (i.e., new PTEs need to be created pointing to an
> >   invalid state).
> 
> You are right. 
> 
> I didn't realize that using xe_pt_walk_shared to walk and zap PTEs has a limitation:
> for the virtual address range we want to zap, all the page tables have to already
> exist. This interface doesn't create new page tables. Even though xe_pt_walk_shared
> takes a range parameter (addr, end), the range can't be arbitrary.
> 
> Today only [xe_vma_start, xe_vma_end) is used to specify the xe_pt_walk_shared
> walking range. An arbitrary range won't work, as you pointed out. To me this is a small
> interface design issue. If you agree, I can re-parameterize xe_pt_walk_shared to take
> a VMA instead of addr/end. That way people won't make the same mistake in the future.
> 

Ah, no. SVM will use ranges so I think (addr, end) are the right
parameters for internal PT functions.

> Anyway, I will follow the direction you give below to rework this series.
> 

+1

Matt

> Oak
> 
> > - This series appears to be untested based on the points above.
> > 
> > Therefore, instead of this series, I believe you will need to fully
> > update the bind pipeline to process MAP GPU VMA operations here.
> > 
> > So roughly...
> > 
> > - Maybe include a bit in xe_vma_op_map that specifies "invalidate on
> >   bind," set in vm_bind_ioctl_ops_create, since this will need to be
> >   wired throughout the bind pipeline.
> > - Don't validate backing memory in this case.
> > - Ensure that xe_vma_ops_incr_pt_update_ops is called in this case for
> >   MAP operations, forcing entry into the xe_pt.c backend.
> > - Update xe_pt_stage_bind_walk with a variable that indicates clearing
> >   the PTE. Instead of calling pte_encode_vma in xe_pt_stage_bind_entry,
> >   set this variable for PT bind operations derived from MAP operations
> >   that meet the "invalidate on bind" condition.
> > - Ensure needs_invalidation is set in struct xe_vm_pgtable_update_ops if
> >   a MAP operation is included that meets the "invalidate on bind"
> >   condition.
> > - Set the VMA tile_invalidated in addition to tile_present for MAP
> >   operations that meet the "invalidate on bind" condition.
> > 
> > I might be missing some implementation details mentioned above, but this
> > should provide you with some direction.
> > 
> > Lastly, and perhaps most importantly, please test this using an IGT and
> > include the results in the next post.
> > 
> > Matt
> > 
> > > +
> > >  		ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
> > >  						  addr, range, op, flags,
> > >  						  prefetch_region, pat_index);
> > > --
> > > 2.26.3
> > >


Thread overview: 32+ messages
2025-01-28 22:21 [PATCH 1/3] drm/xe: Add a function to zap page table by address range Oak Zeng
2025-01-28 22:21 ` [PATCH 2/3] drm/xe: Clear scratch page before vm_bind Oak Zeng
2025-01-28 23:05   ` Cavitt, Jonathan
2025-01-28 23:19   ` Matthew Brost
2025-01-29  5:04     ` Matthew Brost
2025-01-29  8:21       ` Thomas Hellström
2025-01-29 21:01     ` Zeng, Oak
2025-01-30  1:36       ` Matthew Brost [this message]
2025-01-30  8:36         ` Thomas Hellström
2025-01-30 15:18           ` Zeng, Oak
2025-01-28 22:21 ` [PATCH 3/3] drm/xe: Allow scratch page under fault mode for certain platform Oak Zeng
2025-01-28 23:05   ` Cavitt, Jonathan
2025-01-29  8:52   ` Thomas Hellström
2025-01-29 16:41     ` Matthew Brost
2025-01-28 23:04 ` [PATCH 1/3] drm/xe: Add a function to zap page table by address range Cavitt, Jonathan
2025-01-28 23:12 ` ✓ CI.Patch_applied: success for series starting with [1/3] " Patchwork
2025-01-28 23:12 ` ✓ CI.checkpatch: " Patchwork
2025-01-28 23:13 ` ✗ CI.KUnit: failure " Patchwork
2025-01-28 23:38 ` [PATCH 1/3] " Matthew Brost
2025-01-29  8:48 ` Thomas Hellström
  -- strict thread matches above, loose matches on Subject: below --
2025-02-04 18:45 [PATCH 1/3] drm/xe: Introduced needs_scratch bit in device descriptor Oak Zeng
2025-02-04 18:45 ` [PATCH 2/3] drm/xe: Clear scratch page before vm_bind Oak Zeng
2025-02-05 14:35   ` Thomas Hellström
2025-02-06  1:52     ` Zeng, Oak
2025-02-06 12:51       ` Thomas Hellström
2025-02-06 18:56         ` Zeng, Oak
2025-02-06 20:11           ` Thomas Hellström
2025-02-06 21:05             ` Zeng, Oak
2025-02-06 10:34   ` Matthew Brost
2025-02-06 10:43     ` Thomas Hellström
2025-02-10 17:08       ` Matthew Brost
2025-02-06 15:16     ` Zeng, Oak
2025-02-10 17:09       ` Matthew Brost
