Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: "Michał Winiarski" <michal.winiarski@intel.com>
Cc: "Alex Williamson" <alex@shazbot.org>,
	"Lucas De Marchi" <lucas.demarchi@intel.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Rodrigo Vivi" <rodrigo.vivi@intel.com>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Yishai Hadas" <yishaih@nvidia.com>,
	"Kevin Tian" <kevin.tian@intel.com>,
	"Shameer Kolothum" <skolothumtho@nvidia.com>,
	intel-xe@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org,
	"Michal Wajdeczko" <michal.wajdeczko@intel.com>,
	dri-devel@lists.freedesktop.org,
	"Jani Nikula" <jani.nikula@linux.intel.com>,
	"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>,
	"Tvrtko Ursulin" <tursulin@ursulin.net>,
	"David Airlie" <airlied@gmail.com>,
	"Simona Vetter" <simona@ffwll.ch>,
	"Lukasz Laguna" <lukasz.laguna@intel.com>,
	"Christoph Hellwig" <hch@infradead.org>
Subject: Re: [PATCH v3 21/28] drm/xe/migrate: Add function to copy VRAM data in chunks
Date: Mon, 3 Nov 2025 14:29:08 -0800
Message-ID: <aQkstCpJQ6ZAnQr7@lstrano-desk.jf.intel.com>
In-Reply-To: <20251030203135.337696-22-michal.winiarski@intel.com>

On Thu, Oct 30, 2025 at 09:31:28PM +0100, Michał Winiarski wrote:
> From: Lukasz Laguna <lukasz.laguna@intel.com>
> 
> Introduce a new function to copy data between VRAM and sysmem objects.
> The existing xe_migrate_copy() is tailored for eviction and restore
> operations; it involves additional logic and operates on entire
> objects.
> The new xe_migrate_vram_copy_chunk() allows copying chunks of data to
> or from a dedicated buffer object, which is essential for VF
> migration.
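
For anyone following along on the VFIO/PF side: a save-path caller
could presumably look something like the sketch below. This is only an
illustration of the chunked flow, not code from this series -
vf_save_vram_sketch, staging_bo and VF_MIG_CHUNK_SZ are made-up names,
and offsets/sizes have to stay PAGE_SIZE-aligned per the asserts in the
new helper.

/* Hypothetical sketch, illustration only. */
static int vf_save_vram_sketch(struct xe_bo *vf_vram_bo,
			       struct xe_bo *staging_bo, u64 total_size)
{
	u64 offset = 0;

	while (offset < total_size) {
		u64 chunk = min_t(u64, VF_MIG_CHUNK_SZ, total_size - offset);
		struct dma_fence *fence;

		/* Copy one VRAM chunk into the sysmem staging object. */
		fence = xe_migrate_vram_copy_chunk(vf_vram_bo, offset,
						   staging_bo, 0, chunk,
						   XE_MIGRATE_COPY_TO_SRAM);
		if (IS_ERR(fence))
			return PTR_ERR(fence);

		/* Wait for the copy before consuming the staging data. */
		dma_fence_wait(fence, false);
		dma_fence_put(fence);

		/* ... hand staging_bo contents to the migration stream ... */

		offset += chunk;
	}

	return 0;
}
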
> 
> Signed-off-by: Lukasz Laguna <lukasz.laguna@intel.com>
> Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_migrate.c | 128 ++++++++++++++++++++++++++++++--
>  drivers/gpu/drm/xe/xe_migrate.h |   8 ++
>  2 files changed, 131 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> index 56a5804726e96..dbe9320863ab0 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.c
> +++ b/drivers/gpu/drm/xe/xe_migrate.c
> @@ -29,6 +29,7 @@
>  #include "xe_lrc.h"
>  #include "xe_map.h"
>  #include "xe_mocs.h"
> +#include "xe_printk.h"
>  #include "xe_pt.h"
>  #include "xe_res_cursor.h"
>  #include "xe_sa.h"
> @@ -1210,6 +1211,128 @@ struct xe_exec_queue *xe_migrate_exec_queue(struct xe_migrate *migrate)
>  	return migrate->q;
>  }
>  
> +/**
> + * xe_migrate_vram_copy_chunk() - Copy a chunk of a VRAM buffer object.
> + * @vram_bo: The VRAM buffer object.
> + * @vram_offset: The VRAM offset.
> + * @sysmem_bo: The sysmem buffer object.
> + * @sysmem_offset: The sysmem offset.
> + * @size: The size of VRAM chunk to copy.
> + * @dir: The direction of the copy operation.
> + *
> + * Copies a portion of a buffer object between VRAM and system memory.
> + * On Xe2 platforms that support flat CCS, VRAM data is decompressed when
> + * copying to system memory.
> + *
> + * Return: Pointer to a dma_fence representing the last copy batch, or
> + * an error pointer on failure. If there is a failure, any copy operation
> + * started by the function call has been synced.
> + */
> +struct dma_fence *xe_migrate_vram_copy_chunk(struct xe_bo *vram_bo, u64 vram_offset,
> +					     struct xe_bo *sysmem_bo, u64 sysmem_offset,
> +					     u64 size, enum xe_migrate_copy_dir dir)
> +{
> +	struct xe_device *xe = xe_bo_device(vram_bo);
> +	struct xe_tile *tile = vram_bo->tile;
> +	struct xe_gt *gt = tile->primary_gt;
> +	struct xe_migrate *m = tile->migrate;
> +	struct dma_fence *fence = NULL;
> +	struct ttm_resource *vram = vram_bo->ttm.resource;
> +	struct ttm_resource *sysmem = sysmem_bo->ttm.resource;
> +	struct xe_res_cursor vram_it, sysmem_it;
> +	u64 vram_L0_ofs, sysmem_L0_ofs;
> +	u32 vram_L0_pt, sysmem_L0_pt;
> +	u64 vram_L0, sysmem_L0;
> +	bool to_sysmem = (dir == XE_MIGRATE_COPY_TO_SRAM);
> +	bool use_comp_pat = to_sysmem &&
> +		GRAPHICS_VER(xe) >= 20 && xe_device_has_flat_ccs(xe);
> +	int pass = 0;
> +	int err;
> +
> +	xe_assert(xe, IS_ALIGNED(vram_offset | sysmem_offset | size, PAGE_SIZE));
> +	xe_assert(xe, xe_bo_is_vram(vram_bo));
> +	xe_assert(xe, !xe_bo_is_vram(sysmem_bo));
> +	xe_assert(xe, !range_overflows(vram_offset, size, (u64)vram_bo->ttm.base.size));
> +	xe_assert(xe, !range_overflows(sysmem_offset, size, (u64)sysmem_bo->ttm.base.size));
> +
> +	xe_res_first(vram, vram_offset, size, &vram_it);
> +	xe_res_first_sg(xe_bo_sg(sysmem_bo), sysmem_offset, size, &sysmem_it);
> +
> +	while (size) {
> +		u32 pte_flags = PTE_UPDATE_FLAG_IS_VRAM;
> +		u32 batch_size = 2; /* arb_clear() + MI_BATCH_BUFFER_END */
> +		struct xe_sched_job *job;
> +		struct xe_bb *bb;
> +		u32 update_idx;
> +		bool usm = xe->info.has_usm;
> +		u32 avail_pts = max_mem_transfer_per_pass(xe) / LEVEL0_PAGE_TABLE_ENCODE_SIZE;
> +
> +		sysmem_L0 = xe_migrate_res_sizes(m, &sysmem_it);
> +		vram_L0 = min(xe_migrate_res_sizes(m, &vram_it), sysmem_L0);
> +
> +		xe_dbg(xe, "Pass %u, size: %llu\n", pass++, vram_L0);
> +
> +		pte_flags |= use_comp_pat ? PTE_UPDATE_FLAG_IS_COMP_PTE : 0;
> +		batch_size += pte_update_size(m, pte_flags, vram, &vram_it, &vram_L0,
> +					      &vram_L0_ofs, &vram_L0_pt, 0, 0, avail_pts);
> +
> +		batch_size += pte_update_size(m, 0, sysmem, &sysmem_it, &vram_L0, &sysmem_L0_ofs,
> +					      &sysmem_L0_pt, 0, avail_pts, avail_pts);
> +		batch_size += EMIT_COPY_DW;
> +
> +		bb = xe_bb_new(gt, batch_size, usm);
> +		if (IS_ERR(bb)) {
> +			err = PTR_ERR(bb);
> +			return ERR_PTR(err);
> +		}
> +
> +		if (xe_migrate_allow_identity(vram_L0, &vram_it))
> +			xe_res_next(&vram_it, vram_L0);
> +		else
> +			emit_pte(m, bb, vram_L0_pt, true, use_comp_pat, &vram_it, vram_L0, vram);
> +
> +		emit_pte(m, bb, sysmem_L0_pt, false, false, &sysmem_it, vram_L0, sysmem);
> +
> +		bb->cs[bb->len++] = MI_BATCH_BUFFER_END;
> +		update_idx = bb->len;
> +
> +		if (to_sysmem)
> +			emit_copy(gt, bb, vram_L0_ofs, sysmem_L0_ofs, vram_L0, XE_PAGE_SIZE);
> +		else
> +			emit_copy(gt, bb, sysmem_L0_ofs, vram_L0_ofs, vram_L0, XE_PAGE_SIZE);
> +
> +		job = xe_bb_create_migration_job(m->q, bb, xe_migrate_batch_base(m, usm),
> +						 update_idx);
> +		if (IS_ERR(job)) {
> +			xe_bb_free(bb, NULL);
> +			err = PTR_ERR(job);
> +			return ERR_PTR(err);
> +		}
> +
> +		xe_sched_job_add_migrate_flush(job, MI_INVALIDATE_TLB);
> +
> +		xe_assert(xe, dma_resv_test_signaled(vram_bo->ttm.base.resv,
> +						     DMA_RESV_USAGE_BOOKKEEP));
> +		xe_assert(xe, dma_resv_test_signaled(sysmem_bo->ttm.base.resv,
> +						     DMA_RESV_USAGE_BOOKKEEP));
> +
> +		scoped_guard(mutex, &m->job_mutex) {
> +			xe_sched_job_arm(job);
> +			dma_fence_put(fence);
> +			fence = dma_fence_get(&job->drm.s_fence->finished);
> +			xe_sched_job_push(job);
> +
> +			dma_fence_put(m->fence);
> +			m->fence = dma_fence_get(fence);
> +		}
> +
> +		xe_bb_free(bb, fence);
> +		size -= vram_L0;
> +	}
> +
> +	return fence;
> +}
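
The Return documentation (last-batch fence, partial copies already
synced on error) keeps the caller side simple. For the restore
direction, a minimal sketch of what I'd expect a caller to do - again
with made-up names, purely illustrative and not code from this series:

/* Hypothetical restore-side sketch, illustration only. */
static int vf_restore_chunk_sketch(struct xe_bo *vf_vram_bo, u64 vram_offset,
				   struct xe_bo *staging_bo, u64 chunk_size)
{
	struct dma_fence *fence;
	long ret;

	fence = xe_migrate_vram_copy_chunk(vf_vram_bo, vram_offset,
					   staging_bo, 0, chunk_size,
					   XE_MIGRATE_COPY_TO_VRAM);
	if (IS_ERR(fence))
		/* Per the kernel-doc, partial copies are already synced. */
		return PTR_ERR(fence);

	ret = dma_fence_wait(fence, true);
	dma_fence_put(fence);

	return ret < 0 ? ret : 0;
}
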
> +
>  static void emit_clear_link_copy(struct xe_gt *gt, struct xe_bb *bb, u64 src_ofs,
>  				 u32 size, u32 pitch)
>  {
> @@ -1912,11 +2035,6 @@ static bool xe_migrate_vram_use_pde(struct drm_pagemap_addr *sram_addr,
>  	return true;
>  }
>  
> -enum xe_migrate_copy_dir {
> -	XE_MIGRATE_COPY_TO_VRAM,
> -	XE_MIGRATE_COPY_TO_SRAM,
> -};
> -
>  #define XE_CACHELINE_BYTES	64ull
>  #define XE_CACHELINE_MASK	(XE_CACHELINE_BYTES - 1)
>  
> diff --git a/drivers/gpu/drm/xe/xe_migrate.h b/drivers/gpu/drm/xe/xe_migrate.h
> index 4fad324b62535..d7bcc6ad8464e 100644
> --- a/drivers/gpu/drm/xe/xe_migrate.h
> +++ b/drivers/gpu/drm/xe/xe_migrate.h
> @@ -28,6 +28,11 @@ struct xe_vma;
>  
>  enum xe_sriov_vf_ccs_rw_ctxs;
>  
> +enum xe_migrate_copy_dir {
> +	XE_MIGRATE_COPY_TO_VRAM,
> +	XE_MIGRATE_COPY_TO_SRAM,
> +};
> +
>  /**
>   * struct xe_migrate_pt_update_ops - Callbacks for the
>   * xe_migrate_update_pgtables() function.
> @@ -131,6 +136,9 @@ int xe_migrate_ccs_rw_copy(struct xe_tile *tile, struct xe_exec_queue *q,
>  
>  struct xe_lrc *xe_migrate_lrc(struct xe_migrate *migrate);
>  struct xe_exec_queue *xe_migrate_exec_queue(struct xe_migrate *migrate);
> +struct dma_fence *xe_migrate_vram_copy_chunk(struct xe_bo *vram_bo, u64 vram_offset,
> +					     struct xe_bo *sysmem_bo, u64 sysmem_offset,
> +					     u64 size, enum xe_migrate_copy_dir dir);
>  int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
>  			     unsigned long offset, void *buf, int len,
>  			     int write);
> -- 
> 2.50.1
> 
