From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 17 Oct 2025 17:27:02 +0300
From: Ville Syrjälä
To: Satyanarayana K V P
Cc: intel-xe@lists.freedesktop.org, Michal Wajdeczko, Matthew Brost,
	Matthew Auld, Rodrigo Vivi, Matt Roper
Subject: Re: [PATCH v7 1/3] drm/xe/migrate: Atomicize CCS copy command setup
References: <20251017141226.924-5-satyanarayana.k.v.p@intel.com>
	<20251017141226.924-6-satyanarayana.k.v.p@intel.com>
In-Reply-To: <20251017141226.924-6-satyanarayana.k.v.p@intel.com>

On Fri, Oct 17, 2025 at 07:42:28PM +0530, Satyanarayana K V P wrote:
> The CCS copy command is a 5-dword sequence. If the vCPU halts during
> save/restore while this sequence is being programmed, partial writes may
> trigger page faults when saving iGPU CCS metadata. Use the VMOVDQU
> instruction to write the sequence atomically.

If this whole thing is so racy, why don't you always add a new BB_END
after the new commands, and only replace the previous BB_END with a NOOP
_after_ the new commands have been fully written?
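
I.e. something along these lines (a rough, untested sketch just to
illustrate the ordering; bb_append_atomic() is a made-up name, and
whether a plain wmb() is sufficient for the GuC consumer is an open
question):

static void bb_append_atomic(struct xe_bb *bb, const u32 *cmds, int ndw)
{
	u32 *old_end = bb->cs + bb->len - 1; /* currently MI_BATCH_BUFFER_END */
	u32 *cs = bb->cs + bb->len;
	int i;

	/* Write the new commands past the old BB_END; still unreachable. */
	for (i = 0; i < ndw; i++)
		*cs++ = cmds[i];

	/* Terminate the new commands. */
	*cs++ = MI_BATCH_BUFFER_END;

	/* Everything must be visible before the old BB_END flips. */
	wmb();

	/*
	 * A single aligned dword store is atomic by itself, so the
	 * consumer sees either the old BB_END (and stops short of the
	 * new commands) or a NOOP followed by fully written commands.
	 */
	WRITE_ONCE(*old_end, MI_NOOP);

	bb->len = cs - bb->cs;
}

That way the writer never needs anything wider than a naturally aligned
dword store, no AVX required.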

> 
> Since VMOVDQU operates on 256-bit chunks, update EMIT_COPY_CCS_DW to emit
> 8 dwords instead of 5 dwords.
> 
> Update emit_flush_invalidate() to use VMOVDQU operating with 128-bit
> chunks.
> 
> Signed-off-by: Satyanarayana K V P
> Cc: Michal Wajdeczko
> Cc: Matthew Brost
> Cc: Matthew Auld
> Cc: Rodrigo Vivi
> Cc: Matt Roper
> 
> ---
> V6 -> V7:
> - Added description explaining why to use assembly instructions for
> atomicity.
> - Assert if DGFX tries to use memcpy_vmovdqu(). (Rodrigo)
> - Include <asm/fpu/api.h> though checkpatch complains. With <linux/fpu.h>
> KUnit is throwing errors.
> 
> V5 -> V6:
> - Fixed review comments (Rodrigo)
> 
> V4 -> V5:
> - Fixed review comments. (Matt B)
> 
> V3 -> V4:
> - Fixed review comments. (Wajdeczko)
> - Fix issues reported by patchworks.
> 
> V2 -> V3:
> - Added support for 128 bit and 256 bit instructions with memcpy_vmovdqu
> - Updated emit_flush_invalidate() to use vmovdqu instruction.
> 
> V1 -> V2:
> - Use memcpy_vmovdqu only for x86 arch and for VF. Else use memcpy
> (Auld, Matthew)
> - Fix issues reported by patchworks.
> ---
>  drivers/gpu/drm/xe/xe_migrate.c | 112 ++++++++++++++++++++++++++------
>  1 file changed, 91 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index 3112c966c67d..e0be7396a0ab 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -5,6 +5,8 @@
>  
>  #include "xe_migrate.h"
>  
> +#include <asm/cpufeature.h>
> +#include <asm/fpu/api.h>
>  #include <linux/bitfield.h>
>  #include <linux/sizes.h>
> 
> @@ -33,6 +35,7 @@
>  #include "xe_res_cursor.h"
>  #include "xe_sa.h"
>  #include "xe_sched_job.h"
> +#include "xe_sriov_vf_ccs.h"
>  #include "xe_sync.h"
>  #include "xe_trace_bo.h"
>  #include "xe_validation.h"
> @@ -657,18 +660,68 @@ static void emit_pte(struct xe_migrate *m,
>  	}
>  }
>  
> -#define EMIT_COPY_CCS_DW 5
> +/*
> + * VF KMD registers two specialized LRCs with the GuC to handle save/restore
> + * operations for CCS metadata on iGPU. The GuC executes these LRCAs during
> + * VF save/restore operations.
> + *
> + * Each LRC contains a batch buffer pool that GuC submits to hardware during
> + * VF state save/restore operations. Since these operations can occur
> + * asynchronously at any time, we must ensure GPU instructions in the batch
> + * buffer are written atomically to prevent corruption from incomplete writes.
> + *
> + * To guarantee atomic instruction writes, we use x86 SIMD instructions
> + * (128-bit XMM and 256-bit YMM) within kernel_fpu_begin()/kernel_fpu_end()
> + * sections. This prevents vCPU preemption during instruction generation,
> + * ensuring complete GPU commands are written to the batch buffer.
> + */
> +
> +static void memcpy_vmovdqu(struct xe_device *xe, void *dst, const void *src, u32 size)
> +{
> +	xe_assert(xe, !IS_DGFX(xe));
> +#ifdef CONFIG_X86
> +	kernel_fpu_begin();
> +	if (size == SZ_128) {
> +		asm("vmovdqu (%0), %%xmm0\n"
> +		    "vmovups %%xmm0, (%1)\n"
> +		    :: "r" (src), "r" (dst) : "memory");
> +	} else if (size == SZ_256) {
> +		asm("vmovdqu (%0), %%ymm0\n"
> +		    "vmovups %%ymm0, (%1)\n"
> +		    :: "r" (src), "r" (dst) : "memory");
> +	}
> +	kernel_fpu_end();
> +#endif
> +}
> +
> +static void emit_atomic(struct xe_gt *gt, void *dst, const void *src, u32 size)
> +{
> +	u32 instr_size = size * BITS_PER_BYTE;
> +
> +	xe_gt_assert(gt, instr_size == SZ_128 || instr_size == SZ_256);
> +
> +	if (IS_VF_CCS_READY(gt_to_xe(gt))) {
> +		xe_gt_assert(gt, static_cpu_has(X86_FEATURE_AVX));
> +		memcpy_vmovdqu(gt_to_xe(gt), dst, src, instr_size);
> +	} else {
> +		memcpy(dst, src, size);
> +	}
> +}
> +
> +#define EMIT_COPY_CCS_DW 8
>  static void emit_copy_ccs(struct xe_gt *gt, struct xe_bb *bb,
>  			  u64 dst_ofs, bool dst_is_indirect,
>  			  u64 src_ofs, bool src_is_indirect,
>  			  u32 size)
>  {
> +	u32 dw[EMIT_COPY_CCS_DW] = {MI_NOOP};
>  	struct xe_device *xe = gt_to_xe(gt);
>  	u32 *cs = bb->cs + bb->len;
>  	u32 num_ccs_blks;
>  	u32 num_pages;
>  	u32 ccs_copy_size;
>  	u32 mocs;
> +	u32 i = 0;
>  
>  	if (GRAPHICS_VERx100(xe) >= 2000) {
>  		num_pages = DIV_ROUND_UP(size, XE_PAGE_SIZE);
> @@ -686,15 +739,23 @@ static void emit_copy_ccs(struct xe_gt *gt, struct xe_bb *bb,
>  		mocs = FIELD_PREP(XY_CTRL_SURF_MOCS_MASK, gt->mocs.uc_index);
>  	}
>  
> -	*cs++ = XY_CTRL_SURF_COPY_BLT |
> -		(src_is_indirect ? 0x0 : 0x1) << SRC_ACCESS_TYPE_SHIFT |
> -		(dst_is_indirect ? 0x0 : 0x1) << DST_ACCESS_TYPE_SHIFT |
> -		ccs_copy_size;
> -	*cs++ = lower_32_bits(src_ofs);
> -	*cs++ = upper_32_bits(src_ofs) | mocs;
> -	*cs++ = lower_32_bits(dst_ofs);
> -	*cs++ = upper_32_bits(dst_ofs) | mocs;
> +	dw[i++] = XY_CTRL_SURF_COPY_BLT |
> +		  (src_is_indirect ? 0x0 : 0x1) << SRC_ACCESS_TYPE_SHIFT |
> +		  (dst_is_indirect ? 0x0 : 0x1) << DST_ACCESS_TYPE_SHIFT |
> +		  ccs_copy_size;
> +	dw[i++] = lower_32_bits(src_ofs);
> +	dw[i++] = upper_32_bits(src_ofs) | mocs;
> +	dw[i++] = lower_32_bits(dst_ofs);
> +	dw[i++] = upper_32_bits(dst_ofs) | mocs;
>  
> +	/*
> +	 * The CCS copy command is a 5-dword sequence. If the vCPU halts during
> +	 * save/restore while this sequence is being issued, partial writes may
> +	 * trigger page faults when saving iGPU CCS metadata. Use the VMOVDQU
> +	 * instruction to write the sequence atomically.
> +	 */
> +	emit_atomic(gt, cs, dw, sizeof(dw));
> +	cs += EMIT_COPY_CCS_DW;
>  	bb->len = cs - bb->cs;
>  }
>  
> @@ -1006,18 +1067,27 @@ static u64 migrate_vm_ppgtt_addr_tlb_inval(void)
>  	return (NUM_KERNEL_PDE - 2) * XE_PAGE_SIZE;
>  }
>  
> -static int emit_flush_invalidate(u32 *dw, int i, u32 flags)
> +/*
> + * The MI_FLUSH_DW command is a 4-dword sequence. If the vCPU halts during
> + * save/restore while this sequence is being issued, partial writes may
> + * trigger page faults when saving iGPU CCS metadata. Use
> + * emit_atomic() to write the sequence atomically.
> + */
> +#define EMIT_FLUSH_INVALIDATE_DW 4
> +static int emit_flush_invalidate(struct xe_exec_queue *q, u32 *cs, int i, u32 flags)
>  {
>  	u64 addr = migrate_vm_ppgtt_addr_tlb_inval();
> +	u32 dw[EMIT_FLUSH_INVALIDATE_DW] = {MI_NOOP}, j = 0;
> +
> +	dw[j++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
> +		  MI_FLUSH_IMM_DW | flags;
> +	dw[j++] = lower_32_bits(addr);
> +	dw[j++] = upper_32_bits(addr);
> +	dw[j++] = MI_NOOP;
>  
> -	dw[i++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
> -		  MI_FLUSH_IMM_DW | flags;
> -	dw[i++] = lower_32_bits(addr);
> -	dw[i++] = upper_32_bits(addr);
> -	dw[i++] = MI_NOOP;
> -	dw[i++] = MI_NOOP;
> +	emit_atomic(q->gt, &cs[i], dw, sizeof(dw));
>  
> -	return i;
> +	return i + j;
>  }
>  
>  /**
> @@ -1062,7 +1132,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  	/* Calculate Batch buffer size */
>  	batch_size = 0;
>  	while (size) {
> -		batch_size += 10; /* Flush + ggtt addr + 2 NOP */
> +		batch_size += EMIT_FLUSH_INVALIDATE_DW * 2; /* Flush + ggtt addr + 1 NOP */
>  		u64 ccs_ofs, ccs_size;
>  		u32 ccs_pt;
>  
> @@ -1103,7 +1173,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  	 * sizes here again before copy command is emitted.
>  	 */
>  	while (size) {
> -		batch_size += 10; /* Flush + ggtt addr + 2 NOP */
> +		batch_size += EMIT_FLUSH_INVALIDATE_DW * 2; /* Flush + ggtt addr + 1 NOP */
>  		u32 flush_flags = 0;
>  		u64 ccs_ofs, ccs_size;
>  		u32 ccs_pt;
> @@ -1126,11 +1196,11 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  
>  		emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
>  
> -		bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> +		bb->len = emit_flush_invalidate(q, bb->cs, bb->len, flush_flags);
>  		flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
>  						  src_L0_ofs, dst_is_pltt,
>  						  src_L0, ccs_ofs, true);
> -		bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> +		bb->len = emit_flush_invalidate(q, bb->cs, bb->len, flush_flags);
>  
>  		size -= src_L0;
>  	}
> -- 
> 2.51.0

-- 
Ville Syrjälä
Intel