Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Francois Dugast <francois.dugast@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v4 06/13] drm/xe/exec_queue: Prepare last fence for hw engine group resume context
Date: Mon, 5 Aug 2024 20:25:19 +0000
Message-ID: <ZrE1L2hhrsFndmrP@DUT025-TGLU.fm.intel.com>
In-Reply-To: <20240801125748.355078-7-francois.dugast@intel.com>

On Thu, Aug 01, 2024 at 02:56:47PM +0200, Francois Dugast wrote:
> Ensure we can safely take a ref of the exec queue's last fence from the
> context of resuming jobs from the hw engine group. The locking requirements
> differ from the general case, hence the introduction of
> xe_exec_queue_last_fence_{get,put}_for_resume().
> 
> v2: Add kernel doc, rework the code to prevent code duplication
> 
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_exec_queue.c | 123 ++++++++++++++++++++++++++---
>  drivers/gpu/drm/xe/xe_exec_queue.h |   3 +
>  2 files changed, 113 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index 85da2a30a4a4..a860ba752786 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -815,8 +815,37 @@ int xe_exec_queue_destroy_ioctl(struct drm_device *dev, void *data,
>  	return 0;
>  }
>  
> -static void xe_exec_queue_last_fence_lockdep_assert(struct xe_exec_queue *q,
> -						    struct xe_vm *vm)
> +/**
> + * exec_queue_last_fence_lockdep_assert() - Assert locking to access last fence
> + * @q: The exec queue
> + * @vm: The VM
> + *
> + * These are the locking requirements to access the last fence of this exec
> + * queue. This is the general and strictest version, to be used by default.
> + */
> +static void exec_queue_last_fence_lockdep_assert(struct xe_exec_queue *q,
> +						 struct xe_vm *vm)
> +{
> +	if (q->flags & EXEC_QUEUE_FLAG_VM) {
> +		lockdep_assert_held(&vm->lock);
> +	} else {
> +		xe_vm_assert_held(vm);
> +		lockdep_assert_held(&q->hwe->hw_engine_group->mode_sem);

I think this (lockdep_assert_held(&q->hwe->hw_engine_group->mode_sem))
is the only change needed here, then just use the existing functions as is.
Yes, we won't have lockdep_assert_held_write in this layer for resume,
but we'd have it in the resume worker, so I think we are fine?
Does that sound reasonable?
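
Roughly something like this (a rough, untested sketch of what I have in
mind, not a tested change):

static void xe_exec_queue_last_fence_lockdep_assert(struct xe_exec_queue *q,
						    struct xe_vm *vm)
{
	if (q->flags & EXEC_QUEUE_FLAG_VM) {
		lockdep_assert_held(&vm->lock);
	} else {
		xe_vm_assert_held(vm);
		/*
		 * Held for read or write both satisfy this assert; the
		 * resume worker would hold mode_sem for write and can
		 * assert that itself.
		 */
		lockdep_assert_held(&q->hwe->hw_engine_group->mode_sem);
	}
}

i.e. keep the existing name and its callers (get/put/set/test_dep)
untouched and drop the _for_resume variants.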

Note you will need this patch though [1] to stop kernel binds from
blowing up lockdep.

Matt

[1] https://patchwork.freedesktop.org/series/136898/

> +	}
> +}
> +
> +/**
> + * exec_queue_last_fence_lockdep_assert_for_test_dep() - Assert locking only to
> + * test the last fence dep
> + * @q: The exec queue
> + * @vm: The VM
> + *
> + * This version of exec_queue_last_fence_lockdep_assert() does not require
> + * locking of the hw engine group semaphore. It is exclusively meant to be used
> + * from the context of xe_exec_queue_last_fence_test_dep().
> + */
> +static void exec_queue_last_fence_lockdep_assert_for_test_dep(struct xe_exec_queue *q,
> +							      struct xe_vm *vm)
>  {
>  	if (q->flags & EXEC_QUEUE_FLAG_VM)
>  		lockdep_assert_held(&vm->lock);
> @@ -831,7 +860,22 @@ static void xe_exec_queue_last_fence_lockdep_assert(struct xe_exec_queue *q,
>   */
>  void xe_exec_queue_last_fence_put(struct xe_exec_queue *q, struct xe_vm *vm)
>  {
> -	xe_exec_queue_last_fence_lockdep_assert(q, vm);
> +	exec_queue_last_fence_lockdep_assert(q, vm);
> +
> +	xe_exec_queue_last_fence_put_unlocked(q);
> +}
> +
> +/**
> + * xe_exec_queue_last_fence_put_for_resume() - Drop ref to last fence
> + * @q: The exec queue
> + * @vm: The VM the engine does a bind or exec for
> + *
> + * Only safe to be called in the context of resuming the hw engine group's
> + * long-running exec queue, when the group semaphore is held.
> + */
> +void xe_exec_queue_last_fence_put_for_resume(struct xe_exec_queue *q, struct xe_vm *vm)
> +{
> +	lockdep_assert_held_write(&q->hwe->hw_engine_group->mode_sem);
>  
>  	xe_exec_queue_last_fence_put_unlocked(q);
>  }
> @@ -851,30 +895,82 @@ void xe_exec_queue_last_fence_put_unlocked(struct xe_exec_queue *q)
>  }
>  
>  /**
> - * xe_exec_queue_last_fence_get() - Get last fence
> + * exec_queue_last_fence_get() - Get last fence
>   * @q: The exec queue
>   * @vm: The VM the engine does a bind or exec for
> + * @only_for_test_dep: True if context is testing last fence dependency
> + * @only_for_resume: True if context is resuming the hw engine group's queues
>   *
> - * Get last fence, takes a ref
> + * Get last fence, takes a ref, after checking locking depending on the
> + * context.
>   *
> - * Returns: last fence if not signaled, dma fence stub if signaled
> + * This function is parameterized to cover various contexts and consequently
> + * various locking requirements from which the caller gets the last fence,
> + * while also avoiding code duplication.
> + *
> + * It is intentionally made static to hide those parameters externally and
> + * limit possible incorrect uses from the wrong context.
>   */
> -struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *q,
> -					       struct xe_vm *vm)
> +static struct dma_fence *exec_queue_last_fence_get(struct xe_exec_queue *q,
> +						   struct xe_vm *vm,
> +						   bool only_for_test_dep,
> +						   bool only_for_resume)
>  {
>  	struct dma_fence *fence;
>  
> -	xe_exec_queue_last_fence_lockdep_assert(q, vm);
> +	xe_assert(vm->xe, !(only_for_test_dep && only_for_resume));
> +	if (only_for_resume)
> +		lockdep_assert_held_write(&q->hwe->hw_engine_group->mode_sem);
> +	else if (only_for_test_dep)
> +		exec_queue_last_fence_lockdep_assert_for_test_dep(q, vm);
> +	else
> +		exec_queue_last_fence_lockdep_assert(q, vm);
>  
>  	if (q->last_fence &&
> -	    test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &q->last_fence->flags))
> -		xe_exec_queue_last_fence_put(q, vm);
> +	    test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &q->last_fence->flags)) {
> +		if (only_for_resume)
> +			xe_exec_queue_last_fence_put_for_resume(q, vm);
> +		else
> +			xe_exec_queue_last_fence_put(q, vm);
> +	}
>  
>  	fence = q->last_fence ? q->last_fence : dma_fence_get_stub();
>  	dma_fence_get(fence);
>  	return fence;
>  }
>  
> +/**
> + * xe_exec_queue_last_fence_get() - Get last fence
> + * @q: The exec queue
> + * @vm: The VM the engine does a bind or exec for
> + *
> + * Get last fence, takes a ref
> + *
> + * Returns: last fence if not signaled, dma fence stub if signaled
> + */
> +struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *q,
> +					       struct xe_vm *vm)
> +{
> +	return exec_queue_last_fence_get(q, vm, false, false);
> +}
> +
> +/**
> + * xe_exec_queue_last_fence_get_for_resume() - Get last fence
> + * @q: The exec queue
> + * @vm: The VM the engine does a bind or exec for
> + *
> + * Get last fence, takes a ref. Only safe to be called in the context of
> + * resuming the hw engine group's long-running exec queue, when the group
> + * semaphore is held.
> + *
> + * Returns: last fence if not signaled, dma fence stub if signaled
> + */
> +struct dma_fence *xe_exec_queue_last_fence_get_for_resume(struct xe_exec_queue *q,
> +							  struct xe_vm *vm)
> +{
> +	return exec_queue_last_fence_get(q, vm, false, true);
> +}
> +
>  /**
>   * xe_exec_queue_last_fence_set() - Set last fence
>   * @q: The exec queue
> @@ -887,7 +983,7 @@ struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *q,
>  void xe_exec_queue_last_fence_set(struct xe_exec_queue *q, struct xe_vm *vm,
>  				  struct dma_fence *fence)
>  {
> -	xe_exec_queue_last_fence_lockdep_assert(q, vm);
> +	exec_queue_last_fence_lockdep_assert(q, vm);
>  
>  	xe_exec_queue_last_fence_put(q, vm);
>  	q->last_fence = dma_fence_get(fence);
> @@ -906,7 +1002,8 @@ int xe_exec_queue_last_fence_test_dep(struct xe_exec_queue *q, struct xe_vm *vm)
>  	struct dma_fence *fence;
>  	int err = 0;
>  
> -	fence = xe_exec_queue_last_fence_get(q, vm);
> +	fence = exec_queue_last_fence_get(q, vm, true, false);
> +
>  	if (fence) {
>  		err = test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) ?
>  			0 : -ETIME;
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
> index ded77b0f3b90..fc9884a6ce85 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> @@ -71,8 +71,11 @@ enum xe_exec_queue_priority xe_exec_queue_device_get_max_priority(struct xe_devi
>  
>  void xe_exec_queue_last_fence_put(struct xe_exec_queue *e, struct xe_vm *vm);
>  void xe_exec_queue_last_fence_put_unlocked(struct xe_exec_queue *e);
> +void xe_exec_queue_last_fence_put_for_resume(struct xe_exec_queue *e, struct xe_vm *vm);
>  struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *e,
>  					       struct xe_vm *vm);
> +struct dma_fence *xe_exec_queue_last_fence_get_for_resume(struct xe_exec_queue *e,
> +							  struct xe_vm *vm);
>  void xe_exec_queue_last_fence_set(struct xe_exec_queue *e, struct xe_vm *vm,
>  				  struct dma_fence *fence);
>  int xe_exec_queue_last_fence_test_dep(struct xe_exec_queue *q,
> -- 
> 2.43.0
> 
