Intel-XE Archive on lore.kernel.org
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	Matthew Brost <matthew.brost@intel.com>
Subject: Re: [PATCH v3 2/5] drm/xe: Split lrc seqno fence creation up
Date: Fri, 24 May 2024 08:32:26 -0400	[thread overview]
Message-ID: <ZlCI2jTQuHHtiij0@intel.com> (raw)
In-Reply-To: <20240524071940.83042-3-thomas.hellstrom@linux.intel.com>

On Fri, May 24, 2024 at 09:19:37AM +0200, Thomas Hellström wrote:
> Since sometimes a lock is required to initialize a seqno fence,
> and it might be desirable not to hold that lock while performing
> memory allocations, split the lrc seqno fence creation up into an
> allocation phase and an initialization phase.
> 
> Since lrc seqno fences under the hood are hw_fences, do the same
> for these and remove the xe_hw_fence_create() function since it
> is not used anymore.

This patch reinforces my suggestion from the previous patch to rename
the job seqno to fence_seqno.

Then, in the commit message here and throughout the rest of the patch
below, I would swap the terms and use 'fence seqno' instead of 'seqno fence'.

But other than that the patch looks correct to me, so up to you:

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> 
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>  drivers/gpu/drm/xe/xe_hw_fence.c | 59 +++++++++++++++++++++++++-------
>  drivers/gpu/drm/xe/xe_hw_fence.h |  7 ++--
>  drivers/gpu/drm/xe/xe_lrc.c      | 48 ++++++++++++++++++++++++--
>  drivers/gpu/drm/xe/xe_lrc.h      |  3 ++
>  4 files changed, 101 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_hw_fence.c b/drivers/gpu/drm/xe/xe_hw_fence.c
> index f872ef103127..35c0063a831a 100644
> --- a/drivers/gpu/drm/xe/xe_hw_fence.c
> +++ b/drivers/gpu/drm/xe/xe_hw_fence.c
> @@ -208,23 +208,58 @@ static struct xe_hw_fence *to_xe_hw_fence(struct dma_fence *fence)
>  	return container_of(fence, struct xe_hw_fence, dma);
>  }
>  
> -struct xe_hw_fence *xe_hw_fence_create(struct xe_hw_fence_ctx *ctx,
> -				       struct iosys_map seqno_map)
> +/**
> + * xe_hw_fence_alloc() -  Allocate an hw fence.
> + *
> + * Allocate but don't initialize an hw fence.
> + *
> + * Return: Pointer to the allocated fence or
> + * negative error pointer on error.
> + */
> +struct dma_fence *xe_hw_fence_alloc(void)
>  {
> -	struct xe_hw_fence *fence;
> +	struct xe_hw_fence *hw_fence = fence_alloc();
>  
> -	fence = fence_alloc();
> -	if (!fence)
> +	if (!hw_fence)
>  		return ERR_PTR(-ENOMEM);
>  
> -	fence->ctx = ctx;
> -	fence->seqno_map = seqno_map;
> -	INIT_LIST_HEAD(&fence->irq_link);
> +	return &hw_fence->dma;
> +}
>  
> -	dma_fence_init(&fence->dma, &xe_hw_fence_ops, &ctx->irq->lock,
> -		       ctx->dma_fence_ctx, ctx->next_seqno++);
> +/**
> + * xe_hw_fence_free() - Free an hw fence.
> + * @fence: Pointer to the fence to free.
> + *
> + * Frees an hw fence that hasn't yet been
> + * initialized.
> + */
> +void xe_hw_fence_free(struct dma_fence *fence)
> +{
> +	fence_free(&fence->rcu);
> +}
>  
> -	trace_xe_hw_fence_create(fence);
> +/**
> + * xe_hw_fence_init() - Initialize an hw fence.
> + * @fence: Pointer to the fence to initialize.
> + * @ctx: Pointer to the struct xe_hw_fence_ctx fence context.
> + * @seqno_map: Pointer to the map into where the seqno is blitted.
> + *
> + * Initializes a pre-allocated hw fence.
> + * After initialization, the fence is subject to normal
> + * dma-fence refcounting.
> + */
> +void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
> +		      struct iosys_map seqno_map)
> +{
> +	struct  xe_hw_fence *hw_fence =
> +		container_of(fence, typeof(*hw_fence), dma);
> +
> +	hw_fence->ctx = ctx;
> +	hw_fence->seqno_map = seqno_map;
> +	INIT_LIST_HEAD(&hw_fence->irq_link);
> +
> +	dma_fence_init(fence, &xe_hw_fence_ops, &ctx->irq->lock,
> +		       ctx->dma_fence_ctx, ctx->next_seqno++);
>  
> -	return fence;
> +	trace_xe_hw_fence_create(hw_fence);
>  }
> diff --git a/drivers/gpu/drm/xe/xe_hw_fence.h b/drivers/gpu/drm/xe/xe_hw_fence.h
> index cfe5fd603787..f13a1c4982c7 100644
> --- a/drivers/gpu/drm/xe/xe_hw_fence.h
> +++ b/drivers/gpu/drm/xe/xe_hw_fence.h
> @@ -24,7 +24,10 @@ void xe_hw_fence_ctx_init(struct xe_hw_fence_ctx *ctx, struct xe_gt *gt,
>  			  struct xe_hw_fence_irq *irq, const char *name);
>  void xe_hw_fence_ctx_finish(struct xe_hw_fence_ctx *ctx);
>  
> -struct xe_hw_fence *xe_hw_fence_create(struct xe_hw_fence_ctx *ctx,
> -				       struct iosys_map seqno_map);
> +struct dma_fence *xe_hw_fence_alloc(void);
>  
> +void xe_hw_fence_free(struct dma_fence *fence);
> +
> +void xe_hw_fence_init(struct dma_fence *fence, struct xe_hw_fence_ctx *ctx,
> +		      struct iosys_map seqno_map);
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
> index f679cb9aaea7..c28796cfaa03 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.c
> +++ b/drivers/gpu/drm/xe/xe_lrc.c
> @@ -1033,10 +1033,54 @@ u32 xe_lrc_seqno_ggtt_addr(struct xe_lrc *lrc)
>  	return __xe_lrc_seqno_ggtt_addr(lrc);
>  }
>  
> +/**
> + * xe_lrc_alloc_seqno_fence() - Allocate an lrc seqno fence.
> + *
> + * Allocate but don't initialize an lrc seqno fence.
> + *
> + * Return: Pointer to the allocated fence or
> + * negative error pointer on error.
> + */
> +struct dma_fence *xe_lrc_alloc_seqno_fence(void)
> +{
> +	return xe_hw_fence_alloc();
> +}
> +
> +/**
> + * xe_lrc_free_seqno_fence() - Free an lrc seqno fence.
> + * @fence: Pointer to the fence to free.
> + *
> + * Frees an lrc seqno fence that hasn't yet been
> + * initialized.
> + */
> +void xe_lrc_free_seqno_fence(struct dma_fence *fence)
> +{
> +	xe_hw_fence_free(fence);
> +}
> +
> +/**
> + * xe_lrc_init_seqno_fence() - Initialize an lrc seqno fence.
> + * @lrc: Pointer to the lrc.
> + * @fence: Pointer to the fence to initialize.
> + *
> + * Initializes a pre-allocated lrc seqno fence.
> + * After initialization, the fence is subject to normal
> + * dma-fence refcounting.
> + */
> +void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct dma_fence *fence)
> +{
> +	xe_hw_fence_init(fence, &lrc->fence_ctx, __xe_lrc_seqno_map(lrc));
> +}
> +
>  struct dma_fence *xe_lrc_create_seqno_fence(struct xe_lrc *lrc)
>  {
> -	return &xe_hw_fence_create(&lrc->fence_ctx,
> -				   __xe_lrc_seqno_map(lrc))->dma;
> +	struct dma_fence *fence = xe_lrc_alloc_seqno_fence();
> +
> +	if (IS_ERR(fence))
> +		return fence;
> +
> +	xe_lrc_init_seqno_fence(lrc, fence);
> +	return fence;
>  }
>  
>  s32 xe_lrc_seqno(struct xe_lrc *lrc)
> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
> index b9da1031083b..7a8752fd8484 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.h
> +++ b/drivers/gpu/drm/xe/xe_lrc.h
> @@ -44,6 +44,9 @@ void xe_lrc_write_ctx_reg(struct xe_lrc *lrc, int reg_nr, u32 val);
>  u64 xe_lrc_descriptor(struct xe_lrc *lrc);
>  
>  u32 xe_lrc_seqno_ggtt_addr(struct xe_lrc *lrc);
> +struct dma_fence *xe_lrc_alloc_seqno_fence(void);
> +void xe_lrc_free_seqno_fence(struct dma_fence *fence);
> +void xe_lrc_init_seqno_fence(struct xe_lrc *lrc, struct dma_fence *fence);
>  struct dma_fence *xe_lrc_create_seqno_fence(struct xe_lrc *lrc);
>  s32 xe_lrc_seqno(struct xe_lrc *lrc);
>  
> -- 
> 2.44.0
> 
