* [PATCH v6 0/3] drm/xe/migrate: Atomicize CCS copy command setup
@ 2025-10-10 12:39 Satyanarayana K V P
2025-10-10 12:39 ` [PATCH v6 1/3] " Satyanarayana K V P
` (3 more replies)
0 siblings, 4 replies; 6+ messages in thread
From: Satyanarayana K V P @ 2025-10-10 12:39 UTC (permalink / raw)
To: intel-xe; +Cc: Satyanarayana K V P
The CCS copy command is a 5-dword sequence. If the vCPU halts during
save/restore while this sequence is being programmed, partial writes may
trigger page faults when saving iGPU CCS metadata. Use the VMOVDQU
instruction to write the sequence atomically. Since VMOVDQU operates on
256-bit chunks, update EMIT_COPY_CCS_DW to emit 8 dwords instead of 5.
Update emit_flush_invalidate() to use VMOVDQU with 128-bit chunks.
The MI_STORE_DATA_IMM instruction header is four dwords in size. If the
vCPU halts during save/restore while the header is being programmed,
partial writes may trigger page faults when saving iGPU CCS metadata.
Update the instruction header atomically.
Clear the contents of the CCS read/write batch buffer, ensuring no page
faults or GPU hangs occur if migration happens midway.
---
V5 -> V6:
- Used xe_gt_assert() instead of xe_assert() (Matt B).
- Use emit_atomic() function to write MI_STORE_DATA_IMM instruction
(Matt B).
- Fixed review comments (Rodrigo)
V4 -> V5:
- Fixed review comments (Matt B)
V3 -> V4:
- Fixed review comments (Wajdeczko)
- Fix issues reported by patchworks.
V2 -> V3:
- Added support for 128 bit and 256 bit instructions with memcpy_vmovdqu
- Updated emit_flush_invalidate() to use vmovdqu instruction.
V1 -> V2:
- Use memcpy_vmovdqu only for x86 arch and for VF. Else use memcpy
(Auld, Matthew)
- Fix issues reported by patchworks.
Satyanarayana K V P (3):
drm/xe/migrate: Atomicize CCS copy command setup
drm/xe/migrate: Make emit_pte() header write atomic
drm/xe/vf: Clear CCS read/write buffers in atomic way
drivers/gpu/drm/xe/xe_migrate.c | 253 ++++++++++++++++++++++++---
drivers/gpu/drm/xe/xe_migrate.h | 3 +
drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +-
3 files changed, 236 insertions(+), 25 deletions(-)
--
2.51.0
^ permalink raw reply [flat|nested] 6+ messages in thread
* [PATCH v6 1/3] drm/xe/migrate: Atomicize CCS copy command setup
2025-10-10 12:39 [PATCH v6 0/3] drm/xe/migrate: Atomicize CCS copy command setup Satyanarayana K V P
@ 2025-10-10 12:39 ` Satyanarayana K V P
2025-10-16 12:15 ` Rodrigo Vivi
2025-10-10 12:39 ` [PATCH v6 2/3] drm/xe/migrate: Make emit_pte() header write atomic Satyanarayana K V P
` (2 subsequent siblings)
3 siblings, 1 reply; 6+ messages in thread
From: Satyanarayana K V P @ 2025-10-10 12:39 UTC (permalink / raw)
To: intel-xe
Cc: Satyanarayana K V P, Michal Wajdeczko, Matthew Brost,
Matthew Auld, Rodrigo Vivi, Matt Roper
The CCS copy command is a 5-dword sequence. If the vCPU halts during
save/restore while this sequence is being programmed, partial writes may
trigger page faults when saving iGPU CCS metadata. Use the VMOVDQU
instruction to write the sequence atomically.
Since VMOVDQU operates on 256-bit chunks, update EMIT_COPY_CCS_DW to emit
8 dwords instead of 5.
Update emit_flush_invalidate() to use VMOVDQU with 128-bit chunks.
Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Matt Roper <matthew.d.roper@intel.com>
---
V5 -> V6:
- Fixed review comments (Rodrigo)
V4 -> V5:
- Fixed review comments. (Matt B)
V3 -> V4:
- Fixed review comments. (Wajdeczko)
- Fix issues reported by patchworks.
V2 -> V3:
- Added support for 128 bit and 256 bit instructions with memcpy_vmovdqu
- Updated emit_flush_invalidate() to use vmovdqu instruction.
V1 -> V2:
- Use memcpy_vmovdqu only for x86 arch and for VF. Else use memcpy
(Auld, Matthew)
- Fix issues reported by patchworks.
---
drivers/gpu/drm/xe/xe_migrate.c | 105 +++++++++++++++++++++++++-------
1 file changed, 84 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index ad03afb5145f..8f7fb3f561e7 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -5,7 +5,9 @@
#include "xe_migrate.h"
+#include <asm/fpu/api.h>
#include <linux/bitfield.h>
+#include <linux/cpufeature.h>
#include <linux/sizes.h>
#include <drm/drm_managed.h>
@@ -33,6 +35,7 @@
#include "xe_res_cursor.h"
#include "xe_sa.h"
#include "xe_sched_job.h"
+#include "xe_sriov_vf_ccs.h"
#include "xe_sync.h"
#include "xe_trace_bo.h"
#include "xe_validation.h"
@@ -644,18 +647,61 @@ static void emit_pte(struct xe_migrate *m,
}
}
-#define EMIT_COPY_CCS_DW 5
+/*
+ * Some GPU sequences span more than two dwords. If a vCPU halts during
+ * save/restore while such a sequence is being programmed, a torn write can
+ * trigger page faults when saving iGPU CCS metadata. Use a single x86 vector
+ * store (VMOVDQU) under kernel_fpu_begin()/end() to emit the sequence as one
+ * instruction, ensuring it is not preempted mid-write when the vCPU halts.
+ *
+ * Do not use this for dGFX: on non-x86 hosts the VMOVDQU instruction may not
+ * be available.
+ */
+static void memcpy_vmovdqu(void *dst, const void *src, u32 size)
+{
+#ifdef CONFIG_X86
+ kernel_fpu_begin();
+ if (size == SZ_128) {
+ asm("vmovdqu (%0), %%xmm0\n"
+ "vmovups %%xmm0, (%1)\n"
+ :: "r" (src), "r" (dst) : "memory");
+ } else if (size == SZ_256) {
+ asm("vmovdqu (%0), %%ymm0\n"
+ "vmovups %%ymm0, (%1)\n"
+ :: "r" (src), "r" (dst) : "memory");
+ }
+ kernel_fpu_end();
+#endif
+}
+
+static void emit_atomic(struct xe_gt *gt, void *dst, const void *src, u32 size)
+{
+ u32 instr_size = size * BITS_PER_BYTE;
+
+ xe_gt_assert(gt, instr_size == SZ_128 || instr_size == SZ_256);
+
+ if (IS_VF_CCS_READY(gt_to_xe(gt))) {
+ xe_gt_assert(gt, static_cpu_has(X86_FEATURE_AVX));
+ memcpy_vmovdqu(dst, src, instr_size);
+ } else {
+ memcpy(dst, src, size);
+ }
+}
+
+#define EMIT_COPY_CCS_DW 8
static void emit_copy_ccs(struct xe_gt *gt, struct xe_bb *bb,
u64 dst_ofs, bool dst_is_indirect,
u64 src_ofs, bool src_is_indirect,
u32 size)
{
+ u32 dw[EMIT_COPY_CCS_DW] = {MI_NOOP};
struct xe_device *xe = gt_to_xe(gt);
u32 *cs = bb->cs + bb->len;
u32 num_ccs_blks;
u32 num_pages;
u32 ccs_copy_size;
u32 mocs;
+ u32 i = 0;
if (GRAPHICS_VERx100(xe) >= 2000) {
num_pages = DIV_ROUND_UP(size, XE_PAGE_SIZE);
@@ -673,15 +719,23 @@ static void emit_copy_ccs(struct xe_gt *gt, struct xe_bb *bb,
mocs = FIELD_PREP(XY_CTRL_SURF_MOCS_MASK, gt->mocs.uc_index);
}
- *cs++ = XY_CTRL_SURF_COPY_BLT |
- (src_is_indirect ? 0x0 : 0x1) << SRC_ACCESS_TYPE_SHIFT |
- (dst_is_indirect ? 0x0 : 0x1) << DST_ACCESS_TYPE_SHIFT |
- ccs_copy_size;
- *cs++ = lower_32_bits(src_ofs);
- *cs++ = upper_32_bits(src_ofs) | mocs;
- *cs++ = lower_32_bits(dst_ofs);
- *cs++ = upper_32_bits(dst_ofs) | mocs;
+ dw[i++] = XY_CTRL_SURF_COPY_BLT |
+ (src_is_indirect ? 0x0 : 0x1) << SRC_ACCESS_TYPE_SHIFT |
+ (dst_is_indirect ? 0x0 : 0x1) << DST_ACCESS_TYPE_SHIFT |
+ ccs_copy_size;
+ dw[i++] = lower_32_bits(src_ofs);
+ dw[i++] = upper_32_bits(src_ofs) | mocs;
+ dw[i++] = lower_32_bits(dst_ofs);
+ dw[i++] = upper_32_bits(dst_ofs) | mocs;
+ /*
+ * The CCS copy command is a 5-dword sequence. If the vCPU halts during
+ * save/restore while this sequence is being issued, partial writes may trigger
+ * page faults when saving iGPU CCS metadata. Use the VMOVDQU instruction to
+ * write the sequence atomically.
+ */
+ emit_atomic(gt, cs, dw, sizeof(dw));
+ cs += EMIT_COPY_CCS_DW;
bb->len = cs - bb->cs;
}
@@ -993,18 +1047,27 @@ static u64 migrate_vm_ppgtt_addr_tlb_inval(void)
return (NUM_KERNEL_PDE - 2) * XE_PAGE_SIZE;
}
-static int emit_flush_invalidate(u32 *dw, int i, u32 flags)
+/*
+ * The MI_FLUSH_DW command is a 4-dword sequence. If the vCPU halts during
+ * save/restore while this sequence is being issued, partial writes may
+ * trigger page faults when saving iGPU CCS metadata. Use
+ * emit_atomic() to write the sequence atomically.
+ */
+#define EMIT_FLUSH_INVALIDATE_DW 4
+static int emit_flush_invalidate(struct xe_exec_queue *q, u32 *cs, int i, u32 flags)
{
u64 addr = migrate_vm_ppgtt_addr_tlb_inval();
+ u32 dw[EMIT_FLUSH_INVALIDATE_DW] = {MI_NOOP}, j = 0;
+
+ dw[j++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
+ MI_FLUSH_IMM_DW | flags;
+ dw[j++] = lower_32_bits(addr);
+ dw[j++] = upper_32_bits(addr);
+ dw[j++] = MI_NOOP;
- dw[i++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
- MI_FLUSH_IMM_DW | flags;
- dw[i++] = lower_32_bits(addr);
- dw[i++] = upper_32_bits(addr);
- dw[i++] = MI_NOOP;
- dw[i++] = MI_NOOP;
+ emit_atomic(q->gt, &cs[i], dw, sizeof(dw));
- return i;
+ return i + j;
}
/**
@@ -1049,7 +1112,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
/* Calculate Batch buffer size */
batch_size = 0;
while (size) {
- batch_size += 10; /* Flush + ggtt addr + 2 NOP */
+ batch_size += EMIT_FLUSH_INVALIDATE_DW * 2; /* Flush + ggtt addr + 1 NOP */
u64 ccs_ofs, ccs_size;
u32 ccs_pt;
@@ -1090,7 +1153,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
* sizes here again before copy command is emitted.
*/
while (size) {
- batch_size += 10; /* Flush + ggtt addr + 2 NOP */
+ batch_size += EMIT_FLUSH_INVALIDATE_DW * 2; /* Flush + ggtt addr + 1 NOP */
u32 flush_flags = 0;
u64 ccs_ofs, ccs_size;
u32 ccs_pt;
@@ -1113,11 +1176,11 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
- bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
+ bb->len = emit_flush_invalidate(q, bb->cs, bb->len, flush_flags);
flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
src_L0_ofs, dst_is_pltt,
src_L0, ccs_ofs, true);
- bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
+ bb->len = emit_flush_invalidate(q, bb->cs, bb->len, flush_flags);
size -= src_L0;
}
--
2.51.0
* [PATCH v6 2/3] drm/xe/migrate: Make emit_pte() header write atomic
2025-10-10 12:39 [PATCH v6 0/3] drm/xe/migrate: Atomicize CCS copy command setup Satyanarayana K V P
2025-10-10 12:39 ` [PATCH v6 1/3] " Satyanarayana K V P
@ 2025-10-10 12:39 ` Satyanarayana K V P
2025-10-10 12:39 ` [PATCH v6 3/3] drm/xe/vf: Clear CCS read/write buffers in atomic way Satyanarayana K V P
2025-10-10 14:19 ` ✗ CI.KUnit: failure for drm/xe/migrate: Atomicize CCS copy command setup Patchwork
3 siblings, 0 replies; 6+ messages in thread
From: Satyanarayana K V P @ 2025-10-10 12:39 UTC (permalink / raw)
To: intel-xe; +Cc: Satyanarayana K V P, Michal Wajdeczko, Matthew Brost,
Matthew Auld
The MI_STORE_DATA_IMM instruction header is four dwords in size. If the
vCPU halts during save/restore while the header is being programmed,
partial writes may trigger page faults when saving iGPU CCS metadata.
Update the instruction header atomically.
Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
---
V5 -> V6:
- Use emit_atomic() function to write MI_STORE_DATA_IMM instruction
(Matt B).
V4 -> V5:
- Fixed review comments (Matt B).
V3 -> V4:
- New commit added.
V2 -> V3:
- None
V1 -> V2:
- None
---
drivers/gpu/drm/xe/xe_migrate.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index 8f7fb3f561e7..3f97ad759ffd 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -82,6 +82,8 @@ struct xe_migrate {
#define MAX_NUM_PTE 512
#define IDENTITY_OFFSET 256ULL
+static void emit_atomic(struct xe_gt *gt, void *dst, const void *src, u32 size);
+
/*
* Although MI_STORE_DATA_IMM's "length" field is 10-bits, 0x3FE is the largest
* legal value accepted. Since that instruction field is always stored in
@@ -583,6 +585,7 @@ static u32 pte_update_size(struct xe_migrate *m,
return cmds;
}
+#define EMIT_STORE_DATA_IMM_DW 4
static void emit_pte(struct xe_migrate *m,
struct xe_bb *bb, u32 at_pt,
bool is_vram, bool is_comp_pte,
@@ -606,11 +609,16 @@ static void emit_pte(struct xe_migrate *m,
ptes = DIV_ROUND_UP(size, XE_PAGE_SIZE);
while (ptes) {
+ u32 dw[EMIT_STORE_DATA_IMM_DW] = {MI_NOOP}, i = 0;
u32 chunk = min(MAX_PTE_PER_SDI, ptes);
- bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_NUM_QW(chunk);
- bb->cs[bb->len++] = ofs;
- bb->cs[bb->len++] = 0;
+ dw[i++] = MI_STORE_DATA_IMM | MI_SDI_NUM_QW(chunk);
+ dw[i++] = ofs;
+ dw[i++] = 0;
+
+ emit_atomic(m->q->gt, &bb->cs[bb->len], dw, sizeof(dw));
+
+ bb->len += i;
cur_ofs = ofs;
ofs += chunk * 8;
--
2.51.0
* [PATCH v6 3/3] drm/xe/vf: Clear CCS read/write buffers in atomic way
2025-10-10 12:39 [PATCH v6 0/3] drm/xe/migrate: Atomicize CCS copy command setup Satyanarayana K V P
2025-10-10 12:39 ` [PATCH v6 1/3] " Satyanarayana K V P
2025-10-10 12:39 ` [PATCH v6 2/3] drm/xe/migrate: Make emit_pte() header write atomic Satyanarayana K V P
@ 2025-10-10 12:39 ` Satyanarayana K V P
2025-10-10 14:19 ` ✗ CI.KUnit: failure for drm/xe/migrate: Atomicize CCS copy command setup Patchwork
3 siblings, 0 replies; 6+ messages in thread
From: Satyanarayana K V P @ 2025-10-10 12:39 UTC (permalink / raw)
To: intel-xe; +Cc: Satyanarayana K V P, Michal Wajdeczko, Matthew Brost,
Matthew Auld
Clear the contents of the CCS read/write batch buffer, ensuring no page
faults or GPU hangs occur if migration happens midway.
Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
---
V5 -> V6:
- Used xe_gt_assert() instead of xe_assert() (Matt B).
V4 -> V5:
- Fixed review comments (Matt B).
V3 -> V4:
- New commit added.
V2 -> V3:
- None
V1 -> V2:
- None
---
drivers/gpu/drm/xe/xe_migrate.c | 134 +++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_migrate.h | 3 +
drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +-
3 files changed, 141 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index 3f97ad759ffd..8df28c0245d6 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -655,6 +655,43 @@ static void emit_pte(struct xe_migrate *m,
}
}
+static void emit_pte_clear(struct xe_gt *gt, struct xe_bb *bb, int start_offset,
+ int end_offset)
+{
+ u32 dw_nop[SZ_2] = {MI_NOOP};
+ int i = start_offset;
+ int len = end_offset;
+ u32 *cs = bb->cs;
+
+ /* Reverses the operations performed by emit_pte() */
+ while (i < len) {
+ u32 dwords, qwords;
+
+ xe_gt_assert(gt, (REG_FIELD_GET(REG_GENMASK(31, 23), cs[i]) == 0x20));
+
+ qwords = REG_FIELD_GET(MI_SDI_LEN_DW, cs[i]);
+ /*
+ * If Store QW is enabled, then the value of the dw length
+ * includes the header, address and multiple QW pairs of data,
+ * which means the values will be limited to odd values starting
+ * at a value of 3 (3 representing the size of a 5 DW command
+ * including header, 2 dw address and 2 dw data).
+ */
+ dwords = qwords - 1;
+ /*
+ * Do not clear header first. Clear PTEs first and then clear the
+ * header to avoid page faults.
+ */
+ memset(&cs[i + 3], MI_NOOP, (dwords) * sizeof(u32));
+
+ xe_device_wmb(gt_to_xe(gt));
+ WRITE_ONCE(*(u64 *)&cs[i], *(u64 *)dw_nop);
+
+ cs[i + 2] = MI_NOOP;
+ i += (dwords + 3);
+ }
+}
+
/*
* Some GPU sequences span more than two dwords. If a vCPU halts during
* save/restore while such a sequence is being programmed, a torn write can
@@ -747,6 +784,18 @@ static void emit_copy_ccs(struct xe_gt *gt, struct xe_bb *bb,
bb->len = cs - bb->cs;
}
+static u32 emit_copy_ccs_clear(struct xe_gt *gt, struct xe_bb *bb, u32 offset)
+{
+ u32 dw[EMIT_COPY_CCS_DW] = {MI_NOOP};
+ u32 *cs = bb->cs + offset - EMIT_COPY_CCS_DW;
+
+ xe_gt_assert(gt, (REG_FIELD_GET(REG_GENMASK(31, 22), *cs) == 0x148));
+ emit_atomic(gt, cs, dw, sizeof(dw));
+ xe_device_wmb(gt_to_xe(gt));
+
+ return offset - EMIT_COPY_CCS_DW;
+}
+
#define EMIT_COPY_DW 10
static void emit_copy(struct xe_gt *gt, struct xe_bb *bb,
u64 src_ofs, u64 dst_ofs, unsigned int size,
@@ -1078,6 +1127,19 @@ static int emit_flush_invalidate(struct xe_exec_queue *q, u32 *cs, int i, u32 fl
return i + j;
}
+static u32 emit_flush_invalidate_clear(struct xe_gt *gt, struct xe_bb *bb,
+ u32 offset)
+{
+ u32 dw[EMIT_FLUSH_INVALIDATE_DW] = {MI_NOOP};
+ u32 *cs = bb->cs + offset - EMIT_FLUSH_INVALIDATE_DW;
+
+ xe_gt_assert(gt, (REG_FIELD_GET(REG_GENMASK(31, 23), *cs) == 0x26));
+
+ emit_atomic(gt, cs, dw, sizeof(dw));
+
+ return offset - EMIT_FLUSH_INVALIDATE_DW;
+}
+
/**
* xe_migrate_ccs_rw_copy() - Copy content of TTM resources.
* @tile: Tile whose migration context to be used.
@@ -1202,6 +1264,78 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
return err;
}
+static u32 ccs_rw_pte_size(struct xe_gt *gt, struct xe_bb *bb, u32 offset)
+{
+ int len = bb->len;
+ u32 *cs = bb->cs;
+ u32 i = offset;
+
+ while (i < len) {
+ u32 dwords, qwords;
+
+ xe_gt_assert(gt, (REG_FIELD_GET(REG_GENMASK(31, 23), cs[i]) == 0x20));
+
+ qwords = REG_FIELD_GET(MI_SDI_LEN_DW, cs[i]);
+ /*
+ * If Store QW is enabled, then the value of the dw length
+ * includes the header, address and multiple QW pairs of data,
+ * which means the values will be limited to odd values starting
+ * at a value of 3 (3 representing the size of a 5 DW command
+ * including header, 2 dw address and 2 dw data).
+ */
+ dwords = qwords - 1;
+ i += dwords + 3;
+
+ /*
+ * Break if the next dword is for emit_flush_invalidate_clear()
+ * or emit_copy_ccs_clear()
+ */
+ if ((REG_FIELD_GET(REG_GENMASK(31, 23), cs[i]) == 0x26) ||
+ (REG_FIELD_GET(REG_GENMASK(31, 22), cs[i]) == 0x148))
+ break;
+ }
+ return i;
+}
+
+/**
+ * xe_migrate_ccs_rw_copy_clear() - Clear the CCS read/write batch buffer
+ * content.
+ * @tile: Tile whose migration context to be used.
+ * @src_bo: The buffer object @src is currently bound to.
+ * @read_write : Creates BB commands for CCS read/write.
+ *
+ * The CCS copy command has three stages: PTE setup, TLB invalidation, and CCS
+ * copy. Each stage includes a header followed by instructions. When clearing,
+ * remove the instructions first, then the header. For the TLB invalidation and
+ * CCS copy stages, ensure the writes are atomic.
+ *
+ * This reverses the operations performed by xe_migrate_ccs_rw_copy().
+ *
+ * Returns: None.
+ */
+void xe_migrate_ccs_rw_copy_clear(struct xe_tile *tile, struct xe_bo *src_bo,
+ enum xe_sriov_vf_ccs_rw_ctxs read_write)
+{
+ struct xe_bb *bb = src_bo->bb_ccs[read_write];
+ u32 bb_offset = 0, bb_offset_chunk = 0;
+ struct xe_gt *gt = tile->primary_gt;
+
+ while (bb_offset_chunk >= 0 && bb_offset_chunk < bb->len) {
+ bb_offset = ccs_rw_pte_size(gt, bb, bb_offset_chunk);
+ /*
+ * After PTE entries, we have one TLB invalidation, CCS copy
+ * command and another TLB invalidation command.
+ */
+ bb_offset_chunk = bb_offset + EMIT_FLUSH_INVALIDATE_DW +
+ EMIT_COPY_CCS_DW + EMIT_FLUSH_INVALIDATE_DW;
+
+ bb_offset = emit_flush_invalidate_clear(gt, bb, bb_offset_chunk);
+ bb_offset = emit_copy_ccs_clear(gt, bb, bb_offset);
+ bb_offset = emit_flush_invalidate_clear(gt, bb, bb_offset);
+ emit_pte_clear(gt, bb, bb_offset_chunk, bb_offset);
+ }
+}
+
/**
* xe_get_migrate_exec_queue() - Get the execution queue from migrate context.
* @migrate: Migrate context.
diff --git a/drivers/gpu/drm/xe/xe_migrate.h b/drivers/gpu/drm/xe/xe_migrate.h
index 4fad324b6253..7d3d4c5109dd 100644
--- a/drivers/gpu/drm/xe/xe_migrate.h
+++ b/drivers/gpu/drm/xe/xe_migrate.h
@@ -129,6 +129,9 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
struct xe_bo *src_bo,
enum xe_sriov_vf_ccs_rw_ctxs read_write);
+void xe_migrate_ccs_rw_copy_clear(struct xe_tile *tile, struct xe_bo *src_bo,
+ enum xe_sriov_vf_ccs_rw_ctxs read_write);
+
struct xe_lrc *xe_migrate_lrc(struct xe_migrate *migrate);
struct xe_exec_queue *xe_migrate_exec_queue(struct xe_migrate *migrate);
int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
index 790249801364..2d3728cb24ca 100644
--- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
@@ -387,6 +387,7 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
{
struct xe_device *xe = xe_bo_device(bo);
enum xe_sriov_vf_ccs_rw_ctxs ctx_id;
+ struct xe_tile *tile;
struct xe_bb *bb;
xe_assert(xe, IS_VF_CCS_READY(xe));
@@ -394,12 +395,14 @@ int xe_sriov_vf_ccs_detach_bo(struct xe_bo *bo)
if (!xe_bo_has_valid_ccs_bb(bo))
return 0;
+ tile = xe_device_get_root_tile(xe);
+
for_each_ccs_rw_ctx(ctx_id) {
bb = bo->bb_ccs[ctx_id];
if (!bb)
continue;
- memset(bb->cs, MI_NOOP, bb->len * sizeof(u32));
+ xe_migrate_ccs_rw_copy_clear(tile, bo, ctx_id);
xe_bb_free(bb, NULL);
bo->bb_ccs[ctx_id] = NULL;
}
--
2.51.0
* ✗ CI.KUnit: failure for drm/xe/migrate: Atomicize CCS copy command setup
2025-10-10 12:39 [PATCH v6 0/3] drm/xe/migrate: Atomicize CCS copy command setup Satyanarayana K V P
` (2 preceding siblings ...)
2025-10-10 12:39 ` [PATCH v6 3/3] drm/xe/vf: Clear CCS read/write buffers in atomic way Satyanarayana K V P
@ 2025-10-10 14:19 ` Patchwork
3 siblings, 0 replies; 6+ messages in thread
From: Patchwork @ 2025-10-10 14:19 UTC (permalink / raw)
To: Satyanarayana K V P; +Cc: intel-xe
== Series Details ==
Series: drm/xe/migrate: Atomicize CCS copy command setup
URL : https://patchwork.freedesktop.org/series/155744/
State : failure
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
ERROR:root:In file included from ../include/linux/bitfield.h:10,
from ../drivers/gpu/drm/xe/xe_migrate.c:9:
../drivers/gpu/drm/xe/xe_migrate.c: In function ‘emit_atomic’:
../drivers/gpu/drm/xe/xe_migrate.c:729:34: error: implicit declaration of function ‘static_cpu_has’; did you mean ‘static_key_false’? [-Werror=implicit-function-declaration]
729 | xe_gt_assert(gt, static_cpu_has(X86_FEATURE_AVX));
| ^~~~~~~~~~~~~~
../include/linux/build_bug.h:30:63: note: in definition of macro ‘BUILD_BUG_ON_INVALID’
30 | #define BUILD_BUG_ON_INVALID(e) ((void)(sizeof((__force long)(e))))
| ^
../drivers/gpu/drm/xe/xe_assert.h:112:9: note: in expansion of macro ‘__xe_assert_msg’
112 | __xe_assert_msg(__xe, condition, \
| ^~~~~~~~~~~~~~~
../drivers/gpu/drm/xe/xe_assert.h:148:9: note: in expansion of macro ‘xe_assert_msg’
148 | xe_assert_msg(tile_to_xe(__tile), condition, "tile: %u VRAM %s\n" msg, \
| ^~~~~~~~~~~~~
../drivers/gpu/drm/xe/xe_assert.h:172:9: note: in expansion of macro ‘xe_tile_assert_msg’
172 | xe_tile_assert_msg(gt_to_tile(__gt), condition, "GT: %u type %d\n" msg, \
| ^~~~~~~~~~~~~~~~~~
../drivers/gpu/drm/xe/xe_assert.h:169:37: note: in expansion of macro ‘xe_gt_assert_msg’
169 | #define xe_gt_assert(gt, condition) xe_gt_assert_msg((gt), condition, "")
| ^~~~~~~~~~~~~~~~
../drivers/gpu/drm/xe/xe_migrate.c:729:17: note: in expansion of macro ‘xe_gt_assert’
729 | xe_gt_assert(gt, static_cpu_has(X86_FEATURE_AVX));
| ^~~~~~~~~~~~
cc1: some warnings being treated as errors
make[7]: *** [../scripts/Makefile.build:287: drivers/gpu/drm/xe/xe_migrate.o] Error 1
make[7]: *** Waiting for unfinished jobs....
make[6]: *** [../scripts/Makefile.build:556: drivers/gpu/drm/xe] Error 2
make[5]: *** [../scripts/Makefile.build:556: drivers/gpu/drm] Error 2
make[4]: *** [../scripts/Makefile.build:556: drivers/gpu] Error 2
make[3]: *** [../scripts/Makefile.build:556: drivers] Error 2
make[2]: *** [/kernel/Makefile:2011: .] Error 2
make[1]: *** [/kernel/Makefile:248: __sub-make] Error 2
make: *** [Makefile:248: __sub-make] Error 2
[14:19:12] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:19:16] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=25
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* Re: [PATCH v6 1/3] drm/xe/migrate: Atomicize CCS copy command setup
2025-10-10 12:39 ` [PATCH v6 1/3] " Satyanarayana K V P
@ 2025-10-16 12:15 ` Rodrigo Vivi
0 siblings, 0 replies; 6+ messages in thread
From: Rodrigo Vivi @ 2025-10-16 12:15 UTC (permalink / raw)
To: Satyanarayana K V P
Cc: intel-xe, Michal Wajdeczko, Matthew Brost, Matthew Auld,
Matt Roper
On Fri, Oct 10, 2025 at 06:09:02PM +0530, Satyanarayana K V P wrote:
> The CCS copy command is a 5-dword sequence. If the vCPU halts during
> save/restore while this sequence is being programmed, partial writes may
> trigger page faults when saving IGPU CCS metadata. Use the VMOVDQU
> instruction to write the sequence atomically.
>
> Since VMOVDQU operates on 256-bit chunks, update EMIT_COPY_CCS_DW to emit
> 8 dwords instead of 5 dwords.
>
> Update emit_flush_invalidate() to use VMOVDQU operating with 128-bit
> chunks.
>
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Matt Roper <matthew.d.roper@intel.com>
>
> ---
> V5 -> V6:
> - Fixed review comments (Rodrigo)
what review comments?
Next time, please specify exactly what was changed to address the
review comments. This line here doesn't help at all.
>
> V4 -> V5:
> - Fixed review comments. (Matt B)
>
> V3 -> V4:
> - Fixed review comments. (Wajdeczko)
> - Fix issues reported by patchworks.
>
> V2 -> V3:
> - Added support for 128 bit and 256 bit instructions with memcpy_vmovdqu
> - Updated emit_flush_invalidate() to use vmovdqu instruction.
>
> V1 -> V2:
> - Use memcpy_vmovdqu only for x86 arch and for VF. Else use memcpy
> (Auld, Matthew)
> - Fix issues reported by patchworks.
> ---
> drivers/gpu/drm/xe/xe_migrate.c | 105 +++++++++++++++++++++++++-------
> 1 file changed, 84 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index ad03afb5145f..8f7fb3f561e7 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -5,7 +5,9 @@
>
> #include "xe_migrate.h"
>
> +#include <asm/fpu/api.h>
> #include <linux/bitfield.h>
> +#include <linux/cpufeature.h>
> #include <linux/sizes.h>
>
> #include <drm/drm_managed.h>
> @@ -33,6 +35,7 @@
> #include "xe_res_cursor.h"
> #include "xe_sa.h"
> #include "xe_sched_job.h"
> +#include "xe_sriov_vf_ccs.h"
> #include "xe_sync.h"
> #include "xe_trace_bo.h"
> #include "xe_validation.h"
> @@ -644,18 +647,61 @@ static void emit_pte(struct xe_migrate *m,
> }
> }
>
> -#define EMIT_COPY_CCS_DW 5
> +/*
> + * Some GPU sequences span more than two dwords. If a vCPU halts during
> + * save/restore while such a sequence is being programmed, a torn write can
I saw in the previous reply that you said the full flow is documented
somewhere else. But the problem is that every developer reading this phrase
here in the future and looking at the code below will ask the same questions
over and over: What? Why is assembly code needed? How can the CPU halt
while the commands are being executed on the GPU? Why doesn't this buffer
get written first and submitted later?
Please make sure that there is enough information here so we don't have to
keep justifying this over and over in the future.
> + * trigger page faults when saving iGPU CCS metadata. Use a single x86 vector
> + * store (VMOVDQU) under kernel_fpu_begin()/end() to emit the sequence as one
> + * instruction, ensuring it is not preempted mid-write when the vCPU halts.
> + *
> + * Do not use this for dGFX: on non-x86 hosts the VMOVDQU instruction may not
> + * be available.
This is not what I asked. Please return with an error message (warn?!) if dGFX
ever reaches this path...
> + */
> +static void memcpy_vmovdqu(void *dst, const void *src, u32 size)
> +{
> +#ifdef CONFIG_X86
> + kernel_fpu_begin();
> + if (size == SZ_128) {
> + asm("vmovdqu (%0), %%xmm0\n"
> + "vmovups %%xmm0, (%1)\n"
> + :: "r" (src), "r" (dst) : "memory");
> + } else if (size == SZ_256) {
> + asm("vmovdqu (%0), %%ymm0\n"
> + "vmovups %%ymm0, (%1)\n"
> + :: "r" (src), "r" (dst) : "memory");
> + }
> + kernel_fpu_end();
> +#endif
> +}
> +
> +static void emit_atomic(struct xe_gt *gt, void *dst, const void *src, u32 size)
> +{
> + u32 instr_size = size * BITS_PER_BYTE;
> +
> + xe_gt_assert(gt, instr_size == SZ_128 || instr_size == SZ_256);
> +
> + if (IS_VF_CCS_READY(gt_to_xe(gt))) {
> + xe_gt_assert(gt, static_cpu_has(X86_FEATURE_AVX));
> + memcpy_vmovdqu(dst, src, instr_size);
> + } else {
> + memcpy(dst, src, size);
> + }
> +}
> +
> +#define EMIT_COPY_CCS_DW 8
> static void emit_copy_ccs(struct xe_gt *gt, struct xe_bb *bb,
> u64 dst_ofs, bool dst_is_indirect,
> u64 src_ofs, bool src_is_indirect,
> u32 size)
> {
> + u32 dw[EMIT_COPY_CCS_DW] = {MI_NOOP};
> struct xe_device *xe = gt_to_xe(gt);
> u32 *cs = bb->cs + bb->len;
> u32 num_ccs_blks;
> u32 num_pages;
> u32 ccs_copy_size;
> u32 mocs;
> + u32 i = 0;
>
> if (GRAPHICS_VERx100(xe) >= 2000) {
> num_pages = DIV_ROUND_UP(size, XE_PAGE_SIZE);
> @@ -673,15 +719,23 @@ static void emit_copy_ccs(struct xe_gt *gt, struct xe_bb *bb,
> mocs = FIELD_PREP(XY_CTRL_SURF_MOCS_MASK, gt->mocs.uc_index);
> }
>
> - *cs++ = XY_CTRL_SURF_COPY_BLT |
> - (src_is_indirect ? 0x0 : 0x1) << SRC_ACCESS_TYPE_SHIFT |
> - (dst_is_indirect ? 0x0 : 0x1) << DST_ACCESS_TYPE_SHIFT |
> - ccs_copy_size;
> - *cs++ = lower_32_bits(src_ofs);
> - *cs++ = upper_32_bits(src_ofs) | mocs;
> - *cs++ = lower_32_bits(dst_ofs);
> - *cs++ = upper_32_bits(dst_ofs) | mocs;
> + dw[i++] = XY_CTRL_SURF_COPY_BLT |
> + (src_is_indirect ? 0x0 : 0x1) << SRC_ACCESS_TYPE_SHIFT |
> + (dst_is_indirect ? 0x0 : 0x1) << DST_ACCESS_TYPE_SHIFT |
> + ccs_copy_size;
> + dw[i++] = lower_32_bits(src_ofs);
> + dw[i++] = upper_32_bits(src_ofs) | mocs;
> + dw[i++] = lower_32_bits(dst_ofs);
> + dw[i++] = upper_32_bits(dst_ofs) | mocs;
>
> + /*
> + * The CCS copy command is a 5-dword sequence. If the vCPU halts during
> + * save/restore while this sequence is being issued, partial writes may trigger
> + * page faults when saving iGPU CCS metadata. Use the VMOVDQU instruction to
> + * write the sequence atomically.
> + */
> + emit_atomic(gt, cs, dw, sizeof(dw));
> + cs += EMIT_COPY_CCS_DW;
> bb->len = cs - bb->cs;
> }
>
> @@ -993,18 +1047,27 @@ static u64 migrate_vm_ppgtt_addr_tlb_inval(void)
> return (NUM_KERNEL_PDE - 2) * XE_PAGE_SIZE;
> }
>
> -static int emit_flush_invalidate(u32 *dw, int i, u32 flags)
> +/*
> + * The MI_FLUSH_DW command is a 4-dword sequence. If the vCPU halts during
> + * save/restore while this sequence is being issued, partial writes may
> + * trigger page faults when saving iGPU CCS metadata. Use
> + * emit_atomic() to write the sequence atomically.
> + */
> +#define EMIT_FLUSH_INVALIDATE_DW 4
> +static int emit_flush_invalidate(struct xe_exec_queue *q, u32 *cs, int i, u32 flags)
> {
> u64 addr = migrate_vm_ppgtt_addr_tlb_inval();
> + u32 dw[EMIT_FLUSH_INVALIDATE_DW] = {MI_NOOP}, j = 0;
> +
> + dw[j++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
> + MI_FLUSH_IMM_DW | flags;
> + dw[j++] = lower_32_bits(addr);
> + dw[j++] = upper_32_bits(addr);
> + dw[j++] = MI_NOOP;
>
> - dw[i++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
> - MI_FLUSH_IMM_DW | flags;
> - dw[i++] = lower_32_bits(addr);
> - dw[i++] = upper_32_bits(addr);
> - dw[i++] = MI_NOOP;
> - dw[i++] = MI_NOOP;
> + emit_atomic(q->gt, &cs[i], dw, sizeof(dw));
>
> - return i;
> + return i + j;
> }
>
> /**
> @@ -1049,7 +1112,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> /* Calculate Batch buffer size */
> batch_size = 0;
> while (size) {
> - batch_size += 10; /* Flush + ggtt addr + 2 NOP */
> + batch_size += EMIT_FLUSH_INVALIDATE_DW * 2; /* Flush + ggtt addr + 1 NOP */
> u64 ccs_ofs, ccs_size;
> u32 ccs_pt;
>
> @@ -1090,7 +1153,7 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
> * sizes here again before copy command is emitted.
> */
> while (size) {
> - batch_size += 10; /* Flush + ggtt addr + 2 NOP */
> + batch_size += EMIT_FLUSH_INVALIDATE_DW * 2; /* Flush + ggtt addr + 1 NOP */
> u32 flush_flags = 0;
> u64 ccs_ofs, ccs_size;
> u32 ccs_pt;
> @@ -1113,11 +1176,11 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>
> emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src);
>
> - bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> + bb->len = emit_flush_invalidate(q, bb->cs, bb->len, flush_flags);
> flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs, src_is_pltt,
> src_L0_ofs, dst_is_pltt,
> src_L0, ccs_ofs, true);
> - bb->len = emit_flush_invalidate(bb->cs, bb->len, flush_flags);
> + bb->len = emit_flush_invalidate(q, bb->cs, bb->len, flush_flags);
>
> size -= src_L0;
> }
> --
> 2.51.0
>