Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: "Zeng, Oak" <oak.zeng@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 11/13] drm/xe: Move ufence add to vm_bind_ioctl_ops_fini
Date: Fri, 19 Apr 2024 19:45:06 +0000	[thread overview]
Message-ID: <ZiLJwnOZzrx/PaBn@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <SA1PR11MB699187124D07E9B753DEAA46920D2@SA1PR11MB6991.namprd11.prod.outlook.com>

On Fri, Apr 19, 2024 at 09:24:18AM -0600, Zeng, Oak wrote:
> 
> 
> > -----Original Message-----
> > From: Intel-xe <intel-xe-bounces@lists.freedesktop.org> On Behalf Of
> > Matthew Brost
> > Sent: Wednesday, April 10, 2024 1:41 AM
> > To: intel-xe@lists.freedesktop.org
> > Cc: Brost, Matthew <matthew.brost@intel.com>
> > Subject: [PATCH 11/13] drm/xe: Move ufence add to vm_bind_ioctl_ops_fini
> > 
> > Rather than adding a ufence to a VMA in the bind function, add the
> > ufence to all VMAs in the IOCTL that require binds in
> > vm_bind_ioctl_ops_install_fences.
> 
> This is a typo right? From the codes, it should be vm_bind_ioctl_ops_fini
> 

Yes, typo. Will fix in next rev.

> I also want to make sure I understand here: so the ufence added to vma is *only* used to make sure last vma bind has been completed upon vma unbind time. So even though it is more natural to set ufence at bind function, it is safe to set it after all operations are submitted (vm_bind_ioctl_ops_fini func). No vm_bind ioctl (and the vma unbind triggered by ioctl) can go through *before* the last vm bind ioctl's ops finish, right?
>

The ufence is attached to all VMAs being bound in an IOCTL. It prevents
any of those VMAs from being unbound until the attached ufence has
signaled (i.e., the binding operation has completed).

It is safe (and correct) to attach the ufence *after* the operations
are submitted because we are past the point of failure and are holding
the VM->lock. The ufence may be signaled or unsignaled when attached to
the VMA; attaching an already-signaled ufence is safe due to reference
counting. Future IOCTLs return -EBUSY when trying to unbind a VMA which
has an unsignaled ufence.
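To make the refcounting argument concrete, here is a minimal standalone
C mock-up (my sketch, not the real driver code: mock_ufence/mock_vma and
these helpers are simplified stand-ins for xe_user_fence, xe_vma,
__xe_sync_ufence_get()/xe_sync_ufence_put() and the unbind-time check):

```c
/* Illustrative sketch only -- simplified stand-ins for the driver's
 * user-fence handling. The real xe_user_fence uses a kref and dma-fence
 * machinery; this mock-up only models the refcount/signal state. */
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

struct mock_ufence {
	int refcount;   /* stands in for the kref in xe_user_fence */
	bool signalled; /* set once the bind job completes */
};

struct mock_vma {
	struct mock_ufence *ufence;
};

/* Mirrors __xe_sync_ufence_get(): take a reference, return the fence. */
static struct mock_ufence *mock_ufence_get(struct mock_ufence *f)
{
	f->refcount++;
	return f;
}

static void mock_ufence_put(struct mock_ufence *f)
{
	f->refcount--;
}

/* Mirrors vma_add_ufence(): drop any previously attached fence, take a
 * reference on the new one. Attaching an already-signalled fence is
 * harmless -- the reference keeps the object alive either way. */
static void mock_vma_add_ufence(struct mock_vma *vma, struct mock_ufence *f)
{
	if (vma->ufence)
		mock_ufence_put(vma->ufence);
	vma->ufence = mock_ufence_get(f);
}

/* Sketch of the unbind-time check: a later IOCTL bounces with -EBUSY
 * while the attached ufence is still unsignalled. */
static int mock_vma_check_ufence(const struct mock_vma *vma)
{
	if (vma->ufence && !vma->ufence->signalled)
		return -EBUSY;
	return 0;
}
```

So an unbind attempted before the bind's ufence signals fails with
-EBUSY, and retrying after signal succeeds; the reference taken at
attach time keeps the fence object valid across that window.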

Matt
 
> Oak
> 
> 
> 
> > This will help with the transition to
> > job 1 per VM bind IOCTL.
> > 
> > v2:
> >  - Rebase
> > 
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_sync.c | 15 ++++++++++++
> >  drivers/gpu/drm/xe/xe_sync.h |  1 +
> >  drivers/gpu/drm/xe/xe_vm.c   | 44 ++++++++++++++++++++++++++++++------
> >  3 files changed, 53 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
> > index 65f1f1628235..2883d9aca404 100644
> > --- a/drivers/gpu/drm/xe/xe_sync.c
> > +++ b/drivers/gpu/drm/xe/xe_sync.c
> > @@ -338,6 +338,21 @@ xe_sync_in_fence_get(struct xe_sync_entry *sync, int num_sync,
> >  	return ERR_PTR(-ENOMEM);
> >  }
> > 
> > +/**
> > + * __xe_sync_ufence_get() - Get user fence from user fence
> > + * @ufence: input user fence
> > + *
> > + * Get a user fence reference from user fence
> > + *
> > + * Return: xe_user_fence pointer with reference
> > + */
> > +struct xe_user_fence *__xe_sync_ufence_get(struct xe_user_fence *ufence)
> > +{
> > +	user_fence_get(ufence);
> > +
> > +	return ufence;
> > +}
> > +
> >  /**
> >   * xe_sync_ufence_get() - Get user fence from sync
> >   * @sync: input sync
> > diff --git a/drivers/gpu/drm/xe/xe_sync.h b/drivers/gpu/drm/xe/xe_sync.h
> > index 3e03396af2c6..006dbf780793 100644
> > --- a/drivers/gpu/drm/xe/xe_sync.h
> > +++ b/drivers/gpu/drm/xe/xe_sync.h
> > @@ -37,6 +37,7 @@ static inline bool xe_sync_is_ufence(struct xe_sync_entry *sync)
> >  	return !!sync->ufence;
> >  }
> > 
> > +struct xe_user_fence *__xe_sync_ufence_get(struct xe_user_fence *ufence);
> >  struct xe_user_fence *xe_sync_ufence_get(struct xe_sync_entry *sync);
> >  void xe_sync_ufence_put(struct xe_user_fence *ufence);
> >  int xe_sync_ufence_get_status(struct xe_user_fence *ufence);
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index 0319e70577fe..1da68a03407b 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -1900,17 +1900,10 @@ xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma, struct xe_exec_queue *q,
> >  {
> >  	struct dma_fence *fence;
> >  	struct xe_exec_queue *wait_exec_queue = to_wait_exec_queue(vm, q);
> > -	struct xe_user_fence *ufence;
> > 
> >  	xe_vm_assert_held(vm);
> >  	xe_bo_assert_held(bo);
> > 
> > -	ufence = find_ufence_get(syncs, num_syncs);
> > -	if (vma->ufence && ufence)
> > -		xe_sync_ufence_put(vma->ufence);
> > -
> > -	vma->ufence = ufence ?: vma->ufence;
> > -
> >  	if (immediate) {
> >  		fence = xe_vm_bind_vma(vma, q, syncs, num_syncs, tile_mask,
> >  				       first_op, last_op);
> > @@ -2918,20 +2911,57 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
> >  	return fence;
> >  }
> > 
> > +static void vma_add_ufence(struct xe_vma *vma, struct xe_user_fence *ufence)
> > +{
> > +	if (vma->ufence)
> > +		xe_sync_ufence_put(vma->ufence);
> > +	vma->ufence = __xe_sync_ufence_get(ufence);
> > +}
> > +
> > +static void op_add_ufence(struct xe_vm *vm, struct xe_vma_op *op,
> > +			  struct xe_user_fence *ufence)
> > +{
> > +	switch (op->base.op) {
> > +	case DRM_GPUVA_OP_MAP:
> > +		vma_add_ufence(op->map.vma, ufence);
> > +		break;
> > +	case DRM_GPUVA_OP_REMAP:
> > +		if (op->remap.prev)
> > +			vma_add_ufence(op->remap.prev, ufence);
> > +		if (op->remap.next)
> > +			vma_add_ufence(op->remap.next, ufence);
> > +		break;
> > +	case DRM_GPUVA_OP_UNMAP:
> > +		break;
> > +	case DRM_GPUVA_OP_PREFETCH:
> > +		vma_add_ufence(gpuva_to_vma(op->base.prefetch.va), ufence);
> > +		break;
> > +	default:
> > +		drm_warn(&vm->xe->drm, "NOT POSSIBLE");
> > +	}
> > +}
> > +
> >  static void vm_bind_ioctl_ops_fini(struct xe_vm *vm, struct xe_vma_ops *vops,
> >  				   struct dma_fence *fence)
> >  {
> >  	struct xe_exec_queue *wait_exec_queue = to_wait_exec_queue(vm, vops->q);
> > +	struct xe_user_fence *ufence;
> >  	struct xe_vma_op *op;
> >  	int i;
> > 
> > +	ufence = find_ufence_get(vops->syncs, vops->num_syncs);
> >  	list_for_each_entry(op, &vops->list, link) {
> > +		if (ufence)
> > +			op_add_ufence(vm, op, ufence);
> > +
> >  		if (op->base.op == DRM_GPUVA_OP_UNMAP)
> >  			xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), fence);
> >  		else if (op->base.op == DRM_GPUVA_OP_REMAP)
> >  			xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va),
> >  				       fence);
> >  	}
> > +	if (ufence)
> > +		xe_sync_ufence_put(ufence);
> >  	for (i = 0; i < vops->num_syncs; i++)
> >  		xe_sync_entry_signal(vops->syncs + i, fence);
> >  	xe_exec_queue_last_fence_set(wait_exec_queue, vm, fence);
> > --
> > 2.34.1
> 

