Intel-XE Archive on lore.kernel.org
From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: "Satyanarayana K V P" <satyanarayana.k.v.p@intel.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Matthew Brost" <matthew.brost@intel.com>
Subject: [PATCH v3 4/8] drm/xe/migrate: ignore CCS for kernel objects
Date: Fri,  7 Mar 2025 18:29:01 +0000
Message-ID: <20250307182856.304850-14-matthew.auld@intel.com>
In-Reply-To: <20250307182856.304850-10-matthew.auld@intel.com>

For kernel BOs we don't clear the CCS state on creation, so we should be
careful to ignore it when copying pages. A future patch will use the copy
path here for kernel BOs as well, so this now needs to be handled.

Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_migrate.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
index df4282c71bf0..dbd4bff75783 100644
--- a/drivers/gpu/drm/xe/xe_migrate.c
+++ b/drivers/gpu/drm/xe/xe_migrate.c
@@ -779,10 +779,12 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
 	bool dst_is_pltt = dst->mem_type == XE_PL_TT;
 	bool src_is_vram = mem_type_is_vram(src->mem_type);
 	bool dst_is_vram = mem_type_is_vram(dst->mem_type);
+	bool type_device = src_bo->ttm.type == ttm_bo_type_device;
+	bool needs_ccs_emit = type_device && xe_migrate_needs_ccs_emit(xe);
 	bool copy_ccs = xe_device_has_flat_ccs(xe) &&
 		xe_bo_needs_ccs_pages(src_bo) && xe_bo_needs_ccs_pages(dst_bo);
 	bool copy_system_ccs = copy_ccs && (!src_is_vram || !dst_is_vram);
-	bool use_comp_pat = xe_device_has_flat_ccs(xe) &&
+	bool use_comp_pat = type_device && xe_device_has_flat_ccs(xe) &&
 		GRAPHICS_VER(xe) >= 20 && src_is_vram && !dst_is_vram;
 
 	/* Copying CCS between two different BOs is not supported yet. */
@@ -792,6 +794,12 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
 	if (src_bo != dst_bo && XE_WARN_ON(src_bo->size != dst_bo->size))
 		return ERR_PTR(-EINVAL);
 
+	if (src_bo != dst_bo && XE_WARN_ON(src_bo->ttm.type != dst_bo->ttm.type))
+		return ERR_PTR(-EINVAL);
+
+	if (XE_WARN_ON(type_device && copy_only_ccs))
+		return ERR_PTR(-EINVAL);
+
 	if (!src_is_vram)
 		xe_res_first_sg(xe_bo_sg(src_bo), 0, size, &src_it);
 	else
@@ -839,6 +847,7 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
 					      avail_pts, avail_pts);
 
 		if (copy_system_ccs) {
+			xe_assert(xe, type_device);
 			ccs_size = xe_device_ccs_bytes(xe, src_L0);
 			batch_size += pte_update_size(m, 0, NULL, &ccs_it, &ccs_size,
 						      &ccs_ofs, &ccs_pt, 0,
@@ -849,7 +858,7 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
 
 		/* Add copy commands size here */
 		batch_size += ((copy_only_ccs) ? 0 : EMIT_COPY_DW) +
-			((xe_migrate_needs_ccs_emit(xe) ? EMIT_COPY_CCS_DW : 0));
+			((needs_ccs_emit ? EMIT_COPY_CCS_DW : 0));
 
 		bb = xe_bb_new(gt, batch_size, usm);
 		if (IS_ERR(bb)) {
@@ -878,7 +887,7 @@ struct dma_fence *xe_migrate_copy(struct xe_migrate *m,
 		if (!copy_only_ccs)
 			emit_copy(gt, bb, src_L0_ofs, dst_L0_ofs, src_L0, XE_PAGE_SIZE);
 
-		if (xe_migrate_needs_ccs_emit(xe))
+		if (needs_ccs_emit)
 			flush_flags = xe_migrate_ccs_copy(m, bb, src_L0_ofs,
 							  IS_DGFX(xe) ? src_is_vram : src_is_pltt,
 							  dst_L0_ofs,
-- 
2.48.1


Thread overview: 20+ messages
2025-03-07 18:28 [PATCH v3 0/8] Improve SRIOV VRAM provisioning Matthew Auld
2025-03-07 18:28 ` [PATCH v3 1/8] drm/xe: use backup object for pinned save/restore Matthew Auld
2025-03-07 18:28 ` [PATCH v3 2/8] drm/xe: split pinned save/restore into phases Matthew Auld
2025-03-07 18:29 ` [PATCH v3 3/8] drm/xe: Add XE_BO_FLAG_PINNED_NORESTORE Matthew Auld
2025-03-07 18:29 ` Matthew Auld [this message]
2025-03-26  6:09   ` [PATCH v3 4/8] drm/xe/migrate: ignore CCS for kernel objects K V P, Satyanarayana
2025-03-07 18:29 ` [PATCH v3 5/8] drm/xe: add XE_BO_FLAG_PINNED_LATE_RESTORE Matthew Auld
2025-03-26  6:15   ` K V P, Satyanarayana
2025-03-26  9:03     ` Matthew Auld
2025-03-07 18:29 ` [PATCH v3 6/8] drm/xe: unconditionally apply PINNED for pin_map() Matthew Auld
2025-03-07 18:29 ` [PATCH v3 7/8] drm/xe: allow non-contig VRAM kernel BO Matthew Auld
2025-03-07 18:29 ` [PATCH v3 8/8] drm/xe/sriov: support non-contig VRAM provisioning Matthew Auld
2025-03-07 20:51 ` ✓ CI.Patch_applied: success for Improve SRIOV VRAM provisioning (rev2) Patchwork
2025-03-07 20:52 ` ✓ CI.checkpatch: " Patchwork
2025-03-07 20:53 ` ✓ CI.KUnit: " Patchwork
2025-03-07 21:10 ` ✓ CI.Build: " Patchwork
2025-03-07 21:13 ` ✓ CI.Hooks: " Patchwork
2025-03-07 21:14 ` ✓ CI.checksparse: " Patchwork
2025-03-07 21:44 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-03-09  1:59 ` ✗ Xe.CI.Full: " Patchwork
