Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	Michal Wajdeczko <michal.wajdeczko@intel.com>,
	Matthew Auld <matthew.auld@intel.com>
Subject: Re: [PATCH v4 3/3] drm/xe/vf: Clear CCS read/write buffers in atomic way
Date: Tue, 7 Oct 2025 18:40:31 -0700	[thread overview]
Message-ID: <aOXBD0w5mCPKv9pY@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <aOQiu9ImX79DLiLQ@lstrano-desk.jf.intel.com>

On Mon, Oct 06, 2025 at 01:12:43PM -0700, Matthew Brost wrote:
> On Mon, Oct 06, 2025 at 08:54:47PM +0530, Satyanarayana K V P wrote:
> > Clear the contents of the CCS read/write batch buffer, ensuring no page
> > faults / GPU hang occur if migration happens midway.
> > 
> 
> It is going to take me a minute to fully validate the algorithm given
> the complexity, but here are some quick comments.
> 
> > Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> > Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > 
> > ---
> > V3 -> V4:
> > - New commit added.
> > 
> > V2 -> V3:
> > - None
> > 
> > V1 -> V2:
> > - None
> > ---
> >  drivers/gpu/drm/xe/xe_migrate.c      | 130 +++++++++++++++++++++++++++
> >  drivers/gpu/drm/xe/xe_migrate.h      |   3 +
> >  drivers/gpu/drm/xe/xe_sriov_vf_ccs.c |   5 +-
> >  3 files changed, 137 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> > index 4c575be45a76..71c446e74d84 100644
> > --- a/drivers/gpu/drm/xe/xe_migrate.c
> > +++ b/drivers/gpu/drm/xe/xe_migrate.c
> > @@ -651,6 +651,42 @@ static void emit_pte(struct xe_migrate *m,
> >  	}
> >  }
> >  
> > +static void emit_pte_clear(struct xe_gt *gt, struct xe_bb *bb, int start_offset,
> > +			   int end_offset)
> > +{
> > +	u32 dw_nop[SZ_4] = {MI_NOOP};
> 
> SZ_2 or just 2.
> 
> > +	int i = start_offset;
> > +	int len = end_offset;
> > +	u32 *cs = bb->cs;
> > +
> > +	/* Reverses the operations performed by emit_pte() */
> > +	while (i < len) {
> > +		u32 dwords, qwords;
> > +
> > +		xe_assert(gt_to_xe(gt), (REG_FIELD_GET(REG_GENMASK(31, 23), cs[i]) == 0x20));

Ah, I see you have asserts everywhere, which is self-validating. Good
idea. If these asserts are not popping then I don't really have any more
comments. Looks good.
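
For anyone reading along, the length math those asserts guard can be
modeled in userspace like this (function names and the decoded-field
convention here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Userspace model of the MI_STORE_DATA_IMM length arithmetic that
 * emit_pte_clear() / ccs_rw_pte_size() rely on. The driver reads the
 * field with REG_FIELD_GET(MI_SDI_LEN_DW, ...); here it is passed in.
 *
 * A Store-QW command writing n qwords occupies 1 header + 2 address +
 * 2*n data dwords; the length field excludes the first two dwords, so
 * it is 2*n + 1: odd values starting at 3.
 */
static uint32_t sdi_len_field(uint32_t n_qwords)
{
	return 2 * n_qwords + 1;
}

/* Data dwords to NOP out after header + address (patch: qwords - 1). */
static uint32_t sdi_data_dwords(uint32_t len_field)
{
	return len_field - 1;
}

/* Stride to the next command (patch: dwords + 3). */
static uint32_t sdi_stride(uint32_t len_field)
{
	return sdi_data_dwords(len_field) + 3;
}
```

i.e. the field is always odd and >= 3, which is exactly what the
`qwords - 1` / `dwords + 3` arithmetic in the patch assumes.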

Matt

> > +
> > +		qwords = REG_FIELD_GET(MI_SDI_LEN_DW, cs[i]);
> > +		/*
> > +		 * If Store QW is enabled, then the value of the dw length
> > +		 * includes the header, address and multiple QW pairs of data,
> > +		 * which means the values will be limited to odd values starting
> > +		 * at 3 (3 representing the size of a 5 DW command
> > +		 * including header, 2 dw address and 2 dw data).
> > +		 */
> > +		dwords = qwords - 1;
> > +		/*
> > +		 * Do not clear the header first. Clear the PTEs first and then
> > +		 * clear the header to avoid page faults.
> > +		 */
> > +		memset(&cs[i + 3], MI_NOOP, (dwords) * sizeof(u32));
> > +
> 
> I think you need a wmb() here to ensure the above is GPU visible before
> clearing the header instruction.
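
i.e., something like this ordering (a userspace model only; the
driver's wmb() is approximated by a compiler barrier, and the
offsets/MI_NOOP follow the patch):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MI_NOOP 0u

/*
 * Sketch of the intended ordering: NOP the payload first, make that
 * visible (wmb() in the kernel, a compiler barrier in this model),
 * and only then kill the header, so the GPU can never execute a live
 * SDI header over already-cleared PTE data.
 */
static void clear_sdi_cmd(uint32_t *cs, uint32_t data_dwords)
{
	/* 1) payload: data dwords after the header + 2 address dwords */
	memset(&cs[3], MI_NOOP, data_dwords * sizeof(uint32_t));

	/* 2) wmb() in the driver; compiler barrier in this model */
	__asm__ __volatile__("" ::: "memory");

	/* 3) header + first address dword as one 64-bit store ... */
	*(volatile uint64_t *)(void *)&cs[0] = 0;
	/* ... then the remaining address dword */
	cs[2] = MI_NOOP;
}
```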
> 
> > +		WRITE_ONCE(*(u64 *)&cs[i], READ_ONCE(*(u64 *)dw_nop));
> > +
> > +		cs[i + 2] = MI_NOOP;
> > +		i += (dwords + 3);
> > +	}
> > +}
> > +
> >  static void memcpy_vmovdqu(void *dst, const void *src, u32 size)
> >  {
> >  	kernel_fpu_begin();
> > @@ -732,6 +768,17 @@ static void emit_copy_ccs(struct xe_gt *gt, struct xe_bb *bb,
> >  	bb->len = cs - bb->cs;
> >  }
> >  
> > +static u32 emit_copy_ccs_clear(struct xe_gt *gt, struct xe_bb *bb, u32 offset)
> > +{
> > +	u32 dw[EMIT_COPY_CCS_DW] = {MI_NOOP};
> > +	u32 *cs = bb->cs + offset - EMIT_COPY_CCS_DW;
> > +
> > +	xe_assert(gt_to_xe(gt), (REG_FIELD_GET(REG_GENMASK(31, 22), *cs) == 0x148));
> > +	emit_atomic(gt, cs, dw, sizeof(dw));
> 
> I think you need a wmb() here so the clearing of the copy is GPU visible
> before you start clearing out the PTEs.
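
For reference, here is how I'm reading the per-chunk layout and the
back-to-front unwind (COPY_CCS_DW below is a stand-in; the real
EMIT_COPY_CCS_DW value lives in xe_migrate.c):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Schematic of one chunk as xe_migrate_ccs_rw_copy() lays it out and
 * xe_migrate_ccs_rw_copy_clear() walks it back:
 *
 *   [ PTE SDI cmds | TLB inval | CCS copy | TLB inval ]
 */
#define FLUSH_DW	4	/* the SZ_4 used in the patch */
#define COPY_CCS_DW	5	/* illustrative stand-in */

/* End of the chunk, given where the PTE commands end. */
static uint32_t chunk_end(uint32_t pte_end)
{
	return pte_end + FLUSH_DW + COPY_CCS_DW + FLUSH_DW;
}

/* Unwind from the chunk end; returns the end of the PTE block. */
static uint32_t clear_tail(uint32_t end)
{
	uint32_t off = end - FLUSH_DW;	/* second TLB inval */

	off -= COPY_CCS_DW;		/* CCS copy */
	off -= FLUSH_DW;		/* first TLB inval */
	return off;
}
```

Each clear helper returning the start of what it just NOPed is what
lets the PTE clear pick up its end offset.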
> 
> > +
> > +	return offset - EMIT_COPY_CCS_DW;
> > +}
> > +
> >  #define EMIT_COPY_DW 10
> >  static void emit_copy(struct xe_gt *gt, struct xe_bb *bb,
> >  		      u64 src_ofs, u64 dst_ofs, unsigned int size,
> > @@ -1062,6 +1109,19 @@ static int emit_flush_invalidate(struct xe_exec_queue *q, u32 *dw, int i, u32 fl
> >  	return i + j;
> >  }
> >  
> > +static u32 emit_flush_invalidate_clear(struct xe_gt *gt, struct xe_bb *bb,
> > +				       u32 offset)
> > +{
> > +	u32 dw[SZ_4] = {MI_NOOP};
> 
> As discussed in patch 1 use EMIT_FLUSH_INVALIDATE_DW.
> 
> Matt
> 
> > +	u32 *cs = bb->cs + offset - SZ_4;
> > +
> > +	xe_assert(gt_to_xe(gt), (REG_FIELD_GET(REG_GENMASK(31, 23), *cs) == 0x26));
> > +
> > +	emit_atomic(gt, cs, dw, sizeof(dw));
> > +
> > +	return offset - SZ_4;
> > +}
> > +
> >  /**
> >   * xe_migrate_ccs_rw_copy() - Copy content of TTM resources.
> >   * @tile: Tile whose migration context to be used.
> > @@ -1186,6 +1246,76 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> >  	return err;
> >  }
> >  
> > +static u32 ccs_rw_pte_size(struct xe_gt *gt, struct xe_bb *bb, u32 offset)
> > +{
> > +	int len = bb->len;
> > +	u32 *cs = bb->cs;
> > +	u32 i = offset;
> > +
> > +	while (i < len) {
> > +		u32 dwords, qwords;
> > +
> > +		xe_assert(gt_to_xe(gt), (REG_FIELD_GET(REG_GENMASK(31, 23), cs[i]) == 0x20));
> > +
> > +		qwords = REG_FIELD_GET(MI_SDI_LEN_DW, cs[i]);
> > +		/*
> > +		 * If Store QW is enabled, then the value of the dw length
> > +		 * includes the header, address and multiple QW pairs of data,
> > +		 * which means the values will be limited to odd values starting
> > +		 * at 3 (3 representing the size of a 5 DW command
> > +		 * including header, 2 dw address and 2 dw data).
> > +		 */
> > +		dwords = qwords - 1;
> > +		i += dwords + 3;
> > +
> > +		/*
> > +		 * Break if the next dword is for emit_flush_invalidate_clear()
> > +		 * or emit_copy_ccs_clear()
> > +		 */
> > +		if ((REG_FIELD_GET(REG_GENMASK(31, 23), cs[i]) == 0x26) ||
> > +		    (REG_FIELD_GET(REG_GENMASK(31, 22), cs[i]) == 0x148))
> > +			break;
> > +	}
> > +	return i;
> > +}
> > +
> > +/**
> > + * xe_migrate_ccs_rw_copy_clear() - Clear the CCS read/write batch buffer
> > + * content.
> > + * @tile: Tile whose migration context to be used.
> > + * @src_bo: The buffer object @src is currently bound to.
> > + * @read_write: Identifies the CCS read/write BB commands to clear.
> > + *
> > + * The CCS copy command has three stages: PTE setup, TLB invalidation, and CCS
> > + * copy. Each stage includes a header followed by instructions. When clearing,
> > + * remove the instructions first, then the header. For the TLB invalidation and
> > + * CCS copy stages, ensure the writes are atomic.
> > + *
> > + * This reverses the operations performed by xe_migrate_ccs_rw_copy().
> > + *
> > + * Returns: None.
> > + */
> > +void xe_migrate_ccs_rw_copy_clear(struct xe_tile *tile, struct xe_bo *src_bo,
> > +				  enum xe_sriov_vf_ccs_rw_ctxs read_write)
> > +{
> > +	struct xe_bb *bb = src_bo->bb_ccs[read_write];
> > +	u32 bb_offset = 0, bb_offset_chunk = 0;
> > +	struct xe_gt *gt = tile->primary_gt;
> > +
> > +	while (bb_offset_chunk >= 0 && bb_offset_chunk < bb->len) {
> > +		bb_offset = ccs_rw_pte_size(gt, bb, bb_offset_chunk);
> > +		/*
> > +		 * After PTE entries, we have one TLB invalidation, CCS copy
> > +		 * command and another TLB invalidation command.
> > +		 */
> > +		bb_offset_chunk = bb_offset + SZ_4 + EMIT_COPY_CCS_DW + SZ_4;
> > +		bb_offset = emit_flush_invalidate_clear(gt, bb, bb_offset_chunk);
> > +		bb_offset = emit_copy_ccs_clear(gt, bb, bb_offset);
> > +		bb_offset = emit_flush_invalidate_clear(gt, bb, bb_offset);
> > +		emit_pte_clear(gt, bb, bb_offset_chunk, bb_offset);
> > +	}
> > +}
> > +
> >  /**
> >   * xe_get_migrate_exec_queue() - Get the execution queue from migrate context.
> >   * @migrate: Migrate context.
> > diff --git a/drivers/gpu/drm/xe/xe_migrate.h b/drivers/gpu/drm/xe/xe_migrate.h
> > index 0d8944b1cee6..bd2c0eb3ad94 100644
> > --- a/drivers/gpu/drm/xe/xe_migrate.h
> > +++ b/drivers/gpu/drm/xe/xe_migrate.h
> > @@ -129,6 +129,9 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> >  			   struct xe_bo *src_bo,
> >  			   enum xe_sriov_vf_ccs_rw_ctxs read_write);
> >  
> > +void xe_migrate_ccs_rw_copy_clear(struct xe_tile *tile, struct xe_bo *src_bo,
> > +				  enum xe_sriov_vf_ccs_rw_ctxs read_write);
> > +
> >  struct xe_lrc *xe_migrate_lrc(struct xe_migrate *migrate);
> >  struct xe_exec_queue *xe_migrate_exec_queue(struct xe_migrate *migrate);
> >  struct dma_fence *xe_migrate_raw_vram_copy(struct xe_bo *vram_bo, u64 vram_offset,
> > diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> > index 790249801364..2d3728cb24ca 100644
> > --- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> > +++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
> > @@ -387,6 +387,7 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
> >  {
> >  	struct xe_device *xe = xe_bo_device(bo);
> >  	enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
> > +	struct xe_tile *tile;
> >  	struct xe_bb *bb;
> >  
> >  	xe_assert(xe, IS_VF_CCS_READY(xe));
> > @@ -394,12 +395,14 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
> >  	if (!xe_bo_has_valid_ccs_bb(bo))
> >  		return 0;
> >  
> > +	tile = xe_device_get_root_tile(xe);
> > +
> >  	for_each_ccs_rw_ctx(ctx_id) {
> >  		bb = bo->bb_ccs[ctx_id];
> >  		if (!bb)
> >  			continue;
> >  
> > -		memset(bb->cs, MI_NOOP, bb->len * sizeof(u32));
> > +		xe_migrate_ccs_rw_copy_clear(tile, bo, ctx_id);
> >  		xe_bb_free(bb, NULL);
> >  		bo->bb_ccs[ctx_id] = NULL;
> >  	}
> > -- 
> > 2.51.0
> > 

Thread overview: 9+ messages
2025-10-06 15:24 [PATCH v4 0/3] drm/xe/migrate: Atomicize CCS copy command setup Satyanarayana K V P
2025-10-06 15:24 ` [PATCH v4 1/3] " Satyanarayana K V P
2025-10-06 19:58   ` Matthew Brost
2025-10-08  9:50     ` K V P, Satyanarayana
2025-10-06 15:24 ` [PATCH v4 2/3] drm/xe/migrate: Make emit_pte() header write atomic Satyanarayana K V P
2025-10-06 19:42   ` Matthew Brost
2025-10-06 15:24 ` [PATCH v4 3/3] drm/xe/vf: Clear CCS read/write buffers in atomic way Satyanarayana K V P
2025-10-06 20:12   ` Matthew Brost
2025-10-08  1:40     ` Matthew Brost [this message]
