Intel-XE Archive on lore.kernel.org
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: "Michał Winiarski" <michal.winiarski@intel.com>
Cc: "Matthew Brost" <matthew.brost@intel.com>,
	intel-xe@lists.freedesktop.org,
	"Lucas De Marchi" <lucas.demarchi@intel.com>,
	"Matt Roper" <matthew.d.roper@intel.com>,
	"Ville Syrjälä" <ville.syrjala@linux.intel.com>,
	"Brian Welty" <brian.welty@intel.com>,
	"Michal Wajdeczko" <michal.wajdeczko@intel.com>
Subject: Re: [PATCH 1/4] drm/xe/guc: Allocate GuC data structures in system memory for initial load
Date: Thu, 1 Feb 2024 16:57:25 -0500	[thread overview]
Message-ID: <ZbwTxSYFyERYDp0g@intel.com> (raw)
In-Reply-To: <vbsgkhdcgtpndbhbqpxv6wurmeuxzvgvrnjdhoarlztajbydke@bw5uym2eomjo>

On Thu, Feb 01, 2024 at 09:25:38PM +0100, Michał Winiarski wrote:
> On Thu, Feb 01, 2024 at 05:36:20PM +0000, Matthew Brost wrote:
> > On Mon, Jan 29, 2024 at 02:03:05PM +0100, Michał Winiarski wrote:
> > > GuC load will need to happen at an earlier point in probe, where local
> > > memory is not yet available. Use system memory for GuC data structures
> > > used for initial "hwconfig" load, and realloc at a later,
> > > "post-hwconfig" load if needed, when local memory is available.
> > > 
> > > Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_bo.c           | 32 ++++++++++++++++++++++++++
> > >  drivers/gpu/drm/xe/xe_bo.h           |  1 +
> > >  drivers/gpu/drm/xe/xe_guc.c          | 34 ++++++++++++++++++++++++++++
> > >  drivers/gpu/drm/xe/xe_guc_ads.c      |  2 +-
> > >  drivers/gpu/drm/xe/xe_guc_ct.c       |  2 +-
> > >  drivers/gpu/drm/xe/xe_guc_hwconfig.c |  2 +-
> > >  drivers/gpu/drm/xe/xe_guc_log.c      |  2 +-
> > >  7 files changed, 71 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> > > index d6a193060cc0b..7df87fbad0938 100644
> > > --- a/drivers/gpu/drm/xe/xe_bo.c
> > > +++ b/drivers/gpu/drm/xe/xe_bo.c
> > > @@ -1605,6 +1605,38 @@ struct xe_bo *xe_managed_bo_create_from_data(struct xe_device *xe, struct xe_til
> > >  	return bo;
> > >  }
> > >  
> > > +/**
> > > + * xe_managed_bo_reinit_in_vram - Replace a managed system-memory BO with a VRAM copy
> > > + * @xe: xe device
> > > + * @tile: Tile where the new buffer will be created
> > > + * @src: Managed buffer object allocated in system memory
> > > + *
> > > + * Replace a managed src buffer object allocated in system memory with a new
> > > + * one allocated in vram, copying the data between them.
> > > + * The buffer object in VRAM will not have the same GGTT address; the caller
> > > + * is responsible for making sure that any old references to it are updated.
> > > + *
> > > + * Return: 0 on success, negative error code otherwise.
> > > + */
> > > +int xe_managed_bo_reinit_in_vram(struct xe_device *xe, struct xe_tile *tile, struct xe_bo **src)
> > > +{
> > > +	struct xe_bo *bo;
> > > +
> > > +	xe_assert(xe, IS_DGFX(xe));
> > > +	xe_assert(xe, !(*src)->vmap.is_iomem);
> > > +
> > > +	bo = xe_managed_bo_create_from_data(xe, tile, (*src)->vmap.vaddr, (*src)->size,
> > > +					    XE_BO_CREATE_VRAM_IF_DGFX(tile) |
> > > +					    XE_BO_CREATE_GGTT_BIT);
> > > +	if (IS_ERR(bo))
> > > +		return PTR_ERR(bo);
> > > +
> > > +	drmm_release_action(&xe->drm, __xe_bo_unpin_map_no_vm, *src);
> > 
> > Should we not destroy / release the *src BO here?
> > 
> > Matt
> 
> That's what __xe_bo_unpin_map_no_vm is doing. When the BO is
> allocated using xe_managed_bo_create_from_data, the __xe_bo_unpin_map_no_vm
> action is added as a DRM-managed resource to automatically destroy the BO.
> With drmm_release_action, we're calling it immediately (and removing the
> action from the managed resources list, so that it won't be called again
> when the underlying DRM device gets released).

This indeed looks correct to me:

Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>


> 
> Thanks,
> -Michał
> 
> > 
> > > +	*src = bo;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  /*
> > >   * XXX: This is in the VM bind data path, likely should calculate this once and
> > >   * store, with a recalculation if the BO is moved.
> > > diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> > > index db4b2db6b0730..ff919a836d163 100644
> > > --- a/drivers/gpu/drm/xe/xe_bo.h
> > > +++ b/drivers/gpu/drm/xe/xe_bo.h
> > > @@ -129,6 +129,7 @@ struct xe_bo *xe_managed_bo_create_pin_map(struct xe_device *xe, struct xe_tile
> > >  					   size_t size, u32 flags);
> > >  struct xe_bo *xe_managed_bo_create_from_data(struct xe_device *xe, struct xe_tile *tile,
> > >  					     const void *data, size_t size, u32 flags);
> > > +int xe_managed_bo_reinit_in_vram(struct xe_device *xe, struct xe_tile *tile, struct xe_bo **src);
> > >  
> > >  int xe_bo_placement_for_flags(struct xe_device *xe, struct xe_bo *bo,
> > >  			      u32 bo_flags);
> > > diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
> > > index fcb8a9efac704..9b9a1252f3090 100644
> > > --- a/drivers/gpu/drm/xe/xe_guc.c
> > > +++ b/drivers/gpu/drm/xe/xe_guc.c
> > > @@ -272,6 +272,34 @@ void xe_guc_comm_init_early(struct xe_guc *guc)
> > >  		guc->notify_reg = GUC_HOST_INTERRUPT;
> > >  }
> > >  
> > > +static int xe_guc_realloc_post_hwconfig(struct xe_guc *guc)
> > > +{
> > > +	struct xe_tile *tile = gt_to_tile(guc_to_gt(guc));
> > > +	struct xe_device *xe = guc_to_xe(guc);
> > > +	int ret;
> > > +
> > > +	if (!IS_DGFX(guc_to_xe(guc)))
> > > +		return 0;
> > > +
> > > +	ret = xe_managed_bo_reinit_in_vram(xe, tile, &guc->fw.bo);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	ret = xe_managed_bo_reinit_in_vram(xe, tile, &guc->log.bo);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	ret = xe_managed_bo_reinit_in_vram(xe, tile, &guc->ads.bo);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	ret = xe_managed_bo_reinit_in_vram(xe, tile, &guc->ct.bo);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  int xe_guc_init(struct xe_guc *guc)
> > >  {
> > >  	struct xe_device *xe = guc_to_xe(guc);
> > > @@ -331,6 +359,12 @@ int xe_guc_init(struct xe_guc *guc)
> > >   */
> > >  int xe_guc_init_post_hwconfig(struct xe_guc *guc)
> > >  {
> > > +	int ret;
> > > +
> > > +	ret = xe_guc_realloc_post_hwconfig(guc);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > >  	guc_init_params_post_hwconfig(guc);
> > >  
> > >  	return xe_guc_ads_init_post_hwconfig(&guc->ads);
> > > diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
> > > index 390e6f1bf4e1c..6ad4c1a90a787 100644
> > > --- a/drivers/gpu/drm/xe/xe_guc_ads.c
> > > +++ b/drivers/gpu/drm/xe/xe_guc_ads.c
> > > @@ -273,7 +273,7 @@ int xe_guc_ads_init(struct xe_guc_ads *ads)
> > >  	ads->regset_size = calculate_regset_size(gt);
> > >  
> > >  	bo = xe_managed_bo_create_pin_map(xe, tile, guc_ads_size(ads) + MAX_GOLDEN_LRC_SIZE,
> > > -					  XE_BO_CREATE_VRAM_IF_DGFX(tile) |
> > > +					  XE_BO_CREATE_SYSTEM_BIT |
> > >  					  XE_BO_CREATE_GGTT_BIT);
> > >  	if (IS_ERR(bo))
> > >  		return PTR_ERR(bo);
> > > diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
> > > index f3d356383cedf..355edd4d758af 100644
> > > --- a/drivers/gpu/drm/xe/xe_guc_ct.c
> > > +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
> > > @@ -155,7 +155,7 @@ int xe_guc_ct_init(struct xe_guc_ct *ct)
> > >  	primelockdep(ct);
> > >  
> > >  	bo = xe_managed_bo_create_pin_map(xe, tile, guc_ct_size(),
> > > -					  XE_BO_CREATE_VRAM_IF_DGFX(tile) |
> > > +					  XE_BO_CREATE_SYSTEM_BIT |
> > >  					  XE_BO_CREATE_GGTT_BIT);
> > >  	if (IS_ERR(bo))
> > >  		return PTR_ERR(bo);
> > > diff --git a/drivers/gpu/drm/xe/xe_guc_hwconfig.c b/drivers/gpu/drm/xe/xe_guc_hwconfig.c
> > > index 2a13a00917f8c..ea49f3885c108 100644
> > > --- a/drivers/gpu/drm/xe/xe_guc_hwconfig.c
> > > +++ b/drivers/gpu/drm/xe/xe_guc_hwconfig.c
> > > @@ -78,7 +78,7 @@ int xe_guc_hwconfig_init(struct xe_guc *guc)
> > >  		return -EINVAL;
> > >  
> > >  	bo = xe_managed_bo_create_pin_map(xe, tile, PAGE_ALIGN(size),
> > > -					  XE_BO_CREATE_VRAM_IF_DGFX(tile) |
> > > +					  XE_BO_CREATE_SYSTEM_BIT |
> > >  					  XE_BO_CREATE_GGTT_BIT);
> > >  	if (IS_ERR(bo))
> > >  		return PTR_ERR(bo);
> > > diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
> > > index bcd2f4d34081d..45135c3520e54 100644
> > > --- a/drivers/gpu/drm/xe/xe_guc_log.c
> > > +++ b/drivers/gpu/drm/xe/xe_guc_log.c
> > > @@ -84,7 +84,7 @@ int xe_guc_log_init(struct xe_guc_log *log)
> > >  	struct xe_bo *bo;
> > >  
> > >  	bo = xe_managed_bo_create_pin_map(xe, tile, guc_log_size(),
> > > -					  XE_BO_CREATE_VRAM_IF_DGFX(tile) |
> > > +					  XE_BO_CREATE_SYSTEM_BIT |
> > >  					  XE_BO_CREATE_GGTT_BIT);
> > >  	if (IS_ERR(bo))
> > >  		return PTR_ERR(bo);
> > > -- 
> > > 2.43.0
> > > 


Thread overview: 17+ messages
2024-01-29 13:03 [PATCH 0/4] drm/xe: Remaining probe reordering Michał Winiarski
2024-01-29 13:03 ` [PATCH 1/4] drm/xe/guc: Allocate GuC data structures in system memory for initial load Michał Winiarski
2024-02-01 17:36   ` Matthew Brost
2024-02-01 20:25     ` Michał Winiarski
2024-02-01 21:57       ` Rodrigo Vivi [this message]
2024-02-02  0:37         ` Matthew Brost
2024-01-29 13:03 ` [PATCH 2/4] drm/xe/huc: Realloc HuC FW in vram for post-hwconfig Michał Winiarski
2024-02-01 17:41   ` Matthew Brost
2024-02-01 21:58     ` Rodrigo Vivi
2024-01-29 13:03 ` [PATCH 3/4] drm/xe/guc: Move GuC power control init to "post-hwconfig" Michał Winiarski
2024-01-29 13:03 ` [PATCH 4/4] drm/xe: Initialize GuC earlier during probe Michał Winiarski
2024-01-29 13:06 ` ✓ CI.Patch_applied: success for drm/xe: Remaining probe reordering Patchwork
2024-01-29 13:06 ` ✓ CI.checkpatch: " Patchwork
2024-01-29 13:07 ` ✓ CI.KUnit: " Patchwork
2024-01-29 13:14 ` ✓ CI.Build: " Patchwork
2024-01-29 13:15 ` ✗ CI.Hooks: failure " Patchwork
2024-01-29 13:16 ` ✓ CI.checksparse: success " Patchwork
