Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: "Zeng, Oak" <oak.zeng@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v4 10/30] drm/xe: Add vm_bind_ioctl_ops_install_fences helper
Date: Mon, 25 Mar 2024 19:34:58 +0000	[thread overview]
Message-ID: <ZgHR4qXDFWIT0umE@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <SA1PR11MB699123EE17279306D203608992362@SA1PR11MB6991.namprd11.prod.outlook.com>

On Mon, Mar 25, 2024 at 10:51:43AM -0600, Zeng, Oak wrote:
> 
> 
> > -----Original Message-----
> > From: Intel-xe <intel-xe-bounces@lists.freedesktop.org> On Behalf Of Matthew
> > Brost
> > Sent: Friday, March 8, 2024 12:08 AM
> > To: intel-xe@lists.freedesktop.org
> > Cc: Brost, Matthew <matthew.brost@intel.com>
> > Subject: [PATCH v4 10/30] drm/xe: Add vm_bind_ioctl_ops_install_fences helper
> > 
> > Simplify VM bind code by signaling out-fences / destroying VMAs in a
> > single location. This will help with the transition to a single job for many bind ops.
> > 
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_vm.c | 55 ++++++++++++++++----------------------
> >  1 file changed, 23 insertions(+), 32 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index f8b27746e5a7..8c96c98cba37 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -1658,7 +1658,7 @@ xe_vm_unbind_vma(struct xe_vma *vma, struct xe_exec_queue *q,
> >  	struct dma_fence *fence = NULL;
> >  	struct dma_fence **fences = NULL;
> >  	struct dma_fence_array *cf = NULL;
> > -	int cur_fence = 0, i;
> > +	int cur_fence = 0;
> >  	int number_tiles = hweight8(vma->tile_present);
> >  	int err;
> >  	u8 id;
> > @@ -1716,10 +1716,6 @@ xe_vm_unbind_vma(struct xe_vma *vma, struct xe_exec_queue *q,
> > 
> >  	fence = cf ? &cf->base : !fence ?
> >  		xe_exec_queue_last_fence_get(wait_exec_queue, vm) : fence;
> > -	if (last_op) {
> > -		for (i = 0; i < num_syncs; i++)
> > -			xe_sync_entry_signal(&syncs[i], NULL, fence);
> > -	}
> > 
> >  	return fence;
> > 
> > @@ -1743,7 +1739,7 @@ xe_vm_bind_vma(struct xe_vma *vma, struct xe_exec_queue *q,
> >  	struct dma_fence **fences = NULL;
> >  	struct dma_fence_array *cf = NULL;
> >  	struct xe_vm *vm = xe_vma_vm(vma);
> > -	int cur_fence = 0, i;
> > +	int cur_fence = 0;
> >  	int number_tiles = hweight8(vma->tile_mask);
> >  	int err;
> >  	u8 id;
> > @@ -1790,12 +1786,6 @@ xe_vm_bind_vma(struct xe_vma *vma, struct xe_exec_queue *q,
> >  		}
> >  	}
> > 
> > -	if (last_op) {
> > -		for (i = 0; i < num_syncs; i++)
> > -			xe_sync_entry_signal(&syncs[i], NULL,
> > -					     cf ? &cf->base : fence);
> > -	}
> > -
> >  	return cf ? &cf->base : fence;
> > 
> >  err_fences:
> > @@ -1847,15 +1837,8 @@ xe_vm_bind(struct xe_vm *vm, struct xe_vma *vma, struct xe_exec_queue *q,
> >  		if (IS_ERR(fence))
> >  			return fence;
> >  	} else {
> > -		int i;
> > -
> >  		xe_assert(vm->xe, xe_vm_in_fault_mode(vm));
> > -
> >  		fence = xe_exec_queue_last_fence_get(wait_exec_queue, vm);
> > -		if (last_op) {
> > -			for (i = 0; i < num_syncs; i++)
> > -				xe_sync_entry_signal(&syncs[i], NULL, fence);
> > -		}
> >  	}
> > 
> >  	if (last_op)
> > @@ -1879,7 +1862,6 @@ xe_vm_unbind(struct xe_vm *vm, struct xe_vma *vma,
> >  	if (IS_ERR(fence))
> >  		return fence;
> > 
> > -	xe_vma_destroy(vma, fence);
> >  	if (last_op)
> >  		xe_exec_queue_last_fence_set(wait_exec_queue, vm, fence);
> > 
> > @@ -2037,17 +2019,7 @@ xe_vm_prefetch(struct xe_vm *vm, struct xe_vma *vma,
> >  		return xe_vm_bind(vm, vma, q, xe_vma_bo(vma), syncs, num_syncs,
> >  				  vma->tile_mask, true, first_op, last_op);
> >  	} else {
> > -		struct dma_fence *fence =
> > -			xe_exec_queue_last_fence_get(wait_exec_queue, vm);
> > -		int i;
> > -
> > -		/* Nothing to do, signal fences now */
> > -		if (last_op) {
> > -			for (i = 0; i < num_syncs; i++)
> > -				xe_sync_entry_signal(&syncs[i], NULL, fence);
> > -		}
> > -
> > -		return fence;
> > +		return xe_exec_queue_last_fence_get(wait_exec_queue, vm);
> >  	}
> >  }
> > 
> > @@ -2844,6 +2816,25 @@ struct dma_fence *xe_vm_ops_execute(struct xe_vm *vm, struct xe_vma_ops *vops)
> >  	return fence;
> >  }
> > 
> > +static void vm_bind_ioctl_ops_install_fences(struct xe_vm *vm,
> > +					     struct xe_vma_ops *vops,
> > +					     struct dma_fence *fence)
> 
> Is this the correct function name? From the code below, you destroy the temporary vmas during the vm_ioctl, then signal all the sync entries, then destroy the fence generated from the last operation... it looks more like a cleanup stage of the vm_bind. But I don't quite understand the code; see my questions below...
> 

Yes, let me rename. How about vm_bind_ioctl_ops_execute_fini?

'destroyed the temporary vmas during vm_ioctl' - This is destroying
unmapped VMAs when the fence signals.
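
Roughly, the pattern is to attach a dma_fence callback and defer the
actual teardown until the fence signals. A minimal sketch of the idea
(the destroy_cb/destroy_work fields and the worker hand-off are
illustrative, not the exact body of xe_vma_destroy):

	static void vma_destroy_cb(struct dma_fence *fence,
				   struct dma_fence_cb *cb)
	{
		struct xe_vma *vma = container_of(cb, struct xe_vma, destroy_cb);

		/* Fence signaling context - punt real teardown to a worker */
		queue_work(system_unbound_wq, &vma->destroy_work);
	}

	static void vma_destroy_on_signal(struct xe_vma *vma,
					  struct dma_fence *fence)
	{
		/* dma_fence_add_callback() returns -ENOENT if the fence
		 * has already signaled; destroy immediately in that case */
		if (dma_fence_add_callback(fence, &vma->destroy_cb,
					   vma_destroy_cb))
			queue_work(system_unbound_wq, &vma->destroy_work);
	}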

> 
> > +{
> > +	struct xe_vma_op *op;
> > +	int i;
> > +
> > +	list_for_each_entry(op, &vops->list, link) {
> > +		if (op->base.op == DRM_GPUVA_OP_UNMAP)
> > +			xe_vma_destroy(gpuva_to_vma(op->base.unmap.va),
> > +				       fence);
> > +		else if (op->base.op == DRM_GPUVA_OP_REMAP)
> > +			xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va),
> > +				       fence);
> > +	}
> > +	for (i = 0; i < vops->num_syncs; i++)
> > +		xe_sync_entry_signal(vops->syncs + i, NULL, fence);
> 
> This signals the out-fence of the vm_bind ioctl. Shouldn't this be done *after* the fence is signaled (i.e., after the last vm bind operation is done)?
> 
> 

This, xe_sync_entry_signal, is a bad name. It really should be
xe_sync_entry_install_fence or something like that. What it really does
is install the fence in all the out-syncs; the fence signals, and then
the out-fences signal.
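
In other words, for a syncobj out-sync the 'signal' step is just a
fence install. A sketch, assuming the sync entry carries a drm_syncobj
pointer (the helper name is the hypothetical one from above):

	static void xe_sync_entry_install_fence(struct xe_sync_entry *sync,
						struct dma_fence *fence)
	{
		/* Userspace waiting on the syncobj only wakes once
		 * 'fence' itself signals */
		if (sync->syncobj)
			drm_syncobj_replace_fence(sync->syncobj, fence);
	}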

> > +	dma_fence_put(fence);
> 
> 
> I know this is also in the original code below... but I also don't understand why we can destroy the fence here. As I understand it, this fence is generated during the vm_bind operations, and it is the last fence. Shouldn't we wait on this fence somewhere so we know all the vm bind operations have completed? I need your help to understand the picture here.
>

This isn't destroying the fence - it is dropping a reference. This is
the reference returned from xe_vm_ops_execute: we install the fence
anywhere it is needed (which may take more references), then drop the
reference from xe_vm_ops_execute.
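
So nothing here waits on or frees the fence directly; its lifetime
follows the usual dma_fence refcount rules. Schematically (a sketch of
the flow, not literal driver code):

	fence = xe_vm_ops_execute(vm, vops);	/* one reference held */
	vm_bind_ioctl_ops_install_fences(vm, vops, fence);
	/* ... which internally:
	 *  - attaches dma_fence callbacks for the deferred VMA destroys
	 *  - installs the fence in the out-syncs (e.g.
	 *    drm_syncobj_replace_fence() takes its own reference)
	 *  - then dma_fence_put() drops the reference returned by
	 *    xe_vm_ops_execute()
	 * The fence object stays alive as long as any holder still has
	 * a reference, so dropping ours does not free it early. */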

Matt
 
> Oak
> 
> > +}
> > +
> >  static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
> >  				     struct xe_vma_ops *vops)
> >  {
> > @@ -2868,7 +2859,7 @@ static int vm_bind_ioctl_ops_execute(struct xe_vm *vm,
> >  			xe_vm_kill(vm, false);
> >  			goto unlock;
> >  		} else {
> > -			dma_fence_put(fence);
> > +			vm_bind_ioctl_ops_install_fences(vm, vops, fence);
> >  		}
> >  	}
> > 
> > --
> > 2.34.1
> 


Thread overview: 76+ messages
2024-03-08  5:07 [PATCH v4 00/30] Refactor VM bind code Matthew Brost
2024-03-08  5:07 ` [PATCH v4 01/30] drm/xe: Lock all gpuva ops during VM bind IOCTL Matthew Brost
2024-03-10 17:44   ` Zeng, Oak
2024-03-11 19:48     ` Matthew Brost
2024-03-11 22:02       ` Zeng, Oak
2024-03-12  1:29         ` Matthew Brost
2024-03-08  5:07 ` [PATCH v4 02/30] drm/xe: Add ops_execute function which returns a fence Matthew Brost
2024-03-22 16:11   ` Zeng, Oak
2024-03-22 17:31     ` Matthew Brost
2024-03-22 19:39       ` Zeng, Oak
2024-03-08  5:07 ` [PATCH v4 03/30] drm/xe: Move migrate to prefetch to op_lock function Matthew Brost
2024-03-22 17:06   ` Zeng, Oak
2024-03-22 17:36     ` Matthew Brost
2024-03-22 19:45       ` Zeng, Oak
2024-03-08  5:07 ` [PATCH v4 04/30] drm/xe: Add struct xe_vma_ops abstraction Matthew Brost
2024-03-22 17:13   ` Zeng, Oak
2024-03-08  5:07 ` [PATCH v4 05/30] drm/xe: Update xe_vm_rebind to use dummy VMA operations Matthew Brost
2024-03-22 21:23   ` Zeng, Oak
2024-03-22 22:51     ` Matthew Brost
2024-03-08  5:07 ` [PATCH v4 06/30] drm/xe: Simplify VM bind IOCTL error handling and cleanup Matthew Brost
2024-03-25 16:03   ` Zeng, Oak
2024-03-26 18:46     ` Matthew Brost
2024-03-08  5:07 ` [PATCH v4 07/30] drm/xe: Update pagefaults to use dummy VMA operations Matthew Brost
2024-03-08  5:07 ` [PATCH v4 08/30] drm/xe: s/xe_tile_migrate_engine/xe_tile_migrate_exec_queue Matthew Brost
2024-03-25 16:05   ` Zeng, Oak
2024-03-08  5:07 ` [PATCH v4 09/30] drm/xe: Add some members to xe_vma_ops Matthew Brost
2024-03-25 16:10   ` Zeng, Oak
2024-03-26 18:47     ` Matthew Brost
2024-03-08  5:07 ` [PATCH v4 10/30] drm/xe: Add vm_bind_ioctl_ops_install_fences helper Matthew Brost
2024-03-25 16:51   ` Zeng, Oak
2024-03-25 19:34     ` Matthew Brost [this message]
2024-03-25 19:44       ` Zeng, Oak
2024-03-08  5:07 ` [PATCH v4 11/30] drm/xe: Move setting last fence to vm_bind_ioctl_ops_install_fences Matthew Brost
2024-03-25 17:02   ` Zeng, Oak
2024-03-25 19:35     ` Matthew Brost
2024-03-08  5:07 ` [PATCH v4 12/30] drm/xe: Move ufence check to op_lock Matthew Brost
2024-03-25 20:37   ` Zeng, Oak
2024-03-26 18:49     ` Matthew Brost
2024-03-08  5:07 ` [PATCH v4 13/30] drm/xe: Move ufence add to vm_bind_ioctl_ops_install_fences Matthew Brost
2024-03-25 20:54   ` Zeng, Oak
2024-03-26 18:54     ` Matthew Brost
2024-03-26 20:59       ` Zeng, Oak
2024-03-08  5:07 ` [PATCH v4 14/30] drm/xe: Add xe_gt_tlb_invalidation_range and convert PT layer to use this Matthew Brost
2024-03-25 21:35   ` Zeng, Oak
2024-03-26 18:57     ` Matthew Brost
2024-03-08  5:07 ` [PATCH v4 15/30] drm/xe: Add xe_vm_pgtable_update_op to xe_vma_ops Matthew Brost
2024-03-25 21:58   ` Zeng, Oak
2024-03-26 19:05     ` Matthew Brost
2024-03-27  1:29       ` Zeng, Oak
2024-03-08  5:07 ` [PATCH v4 16/30] drm/xe: Use ordered WQ for TLB invalidation fences Matthew Brost
2024-03-25 22:30   ` Zeng, Oak
2024-03-26 19:10     ` Matthew Brost
2024-03-08  5:07 ` [PATCH v4 17/30] drm/xe: Delete PT update selftest Matthew Brost
2024-03-25 22:31   ` Zeng, Oak
2024-03-08  5:07 ` [PATCH v4 18/30] drm/xe: Convert multiple bind ops into single job Matthew Brost
2024-03-27  2:40   ` Zeng, Oak
2024-03-27 19:26     ` Matthew Brost
2024-03-08  5:07 ` [PATCH v4 19/30] drm/xe: Remove old functions defs in xe_pt.h Matthew Brost
2024-03-08  5:07 ` [PATCH v4 20/30] drm/xe: Update PT layer with better error handling Matthew Brost
2024-03-08  5:07 ` [PATCH v4 21/30] drm/xe: Update xe_vm_rebind to return int Matthew Brost
2024-03-08  5:07 ` [PATCH v4 22/30] drm/xe: Move vma rebinding to the drm_exec locking loop Matthew Brost
2024-03-08  5:07 ` [PATCH v4 23/30] drm/xe: Update VM trace events Matthew Brost
2024-03-08  5:08 ` [PATCH v4 24/30] drm/xe: Update clear / populate arguments Matthew Brost
2024-03-08  5:08 ` [PATCH v4 25/30] drm/xe: Add __xe_migrate_update_pgtables_cpu helper Matthew Brost
2024-03-08  5:08 ` [PATCH v4 26/30] drm/xe: CPU binds for jobs Matthew Brost
2024-03-08  5:08 ` [PATCH v4 27/30] drm/xe: Don't use migrate exec queue for page fault binds Matthew Brost
2024-03-08  5:08 ` [PATCH v4 28/30] drm/xe: Add VM bind IOCTL error injection Matthew Brost
2024-03-08  5:08 ` [PATCH v4 29/30] drm/xe/guc: Assert time'd out jobs are not from a VM exec queue Matthew Brost
2024-03-08  5:08 ` [PATCH v4 30/30] drm/xe: Add PT exec queues Matthew Brost
2024-03-08  5:42 ` ✓ CI.Patch_applied: success for Refactor VM bind code (rev5) Patchwork
2024-03-08  5:43 ` ✗ CI.checkpatch: warning " Patchwork
2024-03-08  5:44 ` ✓ CI.KUnit: success " Patchwork
2024-03-08  5:55 ` ✓ CI.Build: " Patchwork
2024-03-08  5:55 ` ✗ CI.Hooks: failure " Patchwork
2024-03-08  5:56 ` ✓ CI.checksparse: success " Patchwork
2024-03-08  6:26 ` ✗ CI.BAT: failure " Patchwork
