From: Michal Wajdeczko <michal.wajdeczko@intel.com>
To: "Michał Winiarski" <michal.winiarski@intel.com>,
"Alex Williamson" <alex.williamson@redhat.com>,
"Lucas De Marchi" <lucas.demarchi@intel.com>,
"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Rodrigo Vivi" <rodrigo.vivi@intel.com>,
"Jason Gunthorpe" <jgg@ziepe.ca>,
"Yishai Hadas" <yishaih@nvidia.com>,
"Kevin Tian" <kevin.tian@intel.com>,
"Shameer Kolothum" <shameerali.kolothum.thodi@huawei.com>,
intel-xe@lists.freedesktop.org, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org
Cc: <dri-devel@lists.freedesktop.org>,
Matthew Brost <matthew.brost@intel.com>,
Jani Nikula <jani.nikula@linux.intel.com>,
"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>,
Tvrtko Ursulin <tursulin@ursulin.net>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
Lukasz Laguna <lukasz.laguna@intel.com>
Subject: Re: [PATCH 12/26] drm/xe/pf: Increase PF GuC Buffer Cache size and use it for VF migration
Date: Mon, 13 Oct 2025 13:27:55 +0200 [thread overview]
Message-ID: <208353be-f7ad-445b-9015-4f4da61cd046@intel.com> (raw)
In-Reply-To: <20251011193847.1836454-13-michal.winiarski@intel.com>
On 10/11/2025 9:38 PM, Michał Winiarski wrote:
> Contiguous PF GGTT VMAs can be scarce after creating VFs.
> Increase the GuC buffer cache size to 8M for PF so that we can fit GuC
> migration data (which currently maxes out at just over 4M) and use the
> cache instead of allocating fresh BOs.
>
> Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
> ---
> drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c | 54 +++++++------------
> drivers/gpu/drm/xe/xe_guc.c | 2 +-
> 2 files changed, 20 insertions(+), 36 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c
> index 50f09994e2854..8b96eff8df93b 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_migration.c
> @@ -11,7 +11,7 @@
> #include "xe_gt_sriov_pf_helpers.h"
> #include "xe_gt_sriov_pf_migration.h"
> #include "xe_gt_sriov_printk.h"
> -#include "xe_guc.h"
> +#include "xe_guc_buf.h"
> #include "xe_guc_ct.h"
> #include "xe_sriov.h"
> #include "xe_sriov_pf_migration.h"
> @@ -57,73 +57,57 @@ static int pf_send_guc_query_vf_state_size(struct xe_gt *gt, unsigned int vfid)
>
> /* Return: number of state dwords saved or a negative error code on failure */
> static int pf_send_guc_save_vf_state(struct xe_gt *gt, unsigned int vfid,
> - void *buff, size_t size)
> + void *dst, size_t size)
> {
> const int ndwords = size / sizeof(u32);
> - struct xe_tile *tile = gt_to_tile(gt);
> - struct xe_device *xe = tile_to_xe(tile);
> 	struct xe_guc *guc = &gt->uc.guc;
> - struct xe_bo *bo;
> + CLASS(xe_guc_buf, buf)(&guc->buf, ndwords);
> int ret;
>
> xe_gt_assert(gt, size % sizeof(u32) == 0);
> xe_gt_assert(gt, size == ndwords * sizeof(u32));
>
> - bo = xe_bo_create_pin_map_novm(xe, tile,
> - ALIGN(size, PAGE_SIZE),
> - ttm_bo_type_kernel,
> - XE_BO_FLAG_SYSTEM |
> - XE_BO_FLAG_GGTT |
> - XE_BO_FLAG_GGTT_INVALIDATE, false);
> - if (IS_ERR(bo))
> - return PTR_ERR(bo);
> + if (!xe_guc_buf_is_valid(buf))
> + return -ENOBUFS;
> +
> + memset(xe_guc_buf_cpu_ptr(buf), 0, size);
Is that memset necessary? GuC will overwrite the buffer anyway.
>
> ret = guc_action_vf_save_restore(guc, vfid, GUC_PF_OPCODE_VF_SAVE,
> - xe_bo_ggtt_addr(bo), ndwords);
> - if (!ret)
> + xe_guc_buf_flush(buf), ndwords);
> + if (!ret) {
> ret = -ENODATA;
> - else if (ret > ndwords)
> + } else if (ret > ndwords) {
> ret = -EPROTO;
> - else if (ret > 0)
> - xe_map_memcpy_from(xe, buff, &bo->vmap, 0, ret * sizeof(u32));
> + } else if (ret > 0) {
> + xe_guc_buf_sync(buf);
> + memcpy(dst, xe_guc_buf_cpu_ptr(buf), ret * sizeof(u32));
With the small change suggested earlier, this could be just:

	memcpy(dst, xe_guc_buf_sync(buf), ret * sizeof(u32));
> + }
>
> - xe_bo_unpin_map_no_vm(bo);
> return ret;
> }
>
> /* Return: number of state dwords restored or a negative error code on failure */
> static int pf_send_guc_restore_vf_state(struct xe_gt *gt, unsigned int vfid,
> - const void *buff, size_t size)
> + const void *src, size_t size)
> {
> const int ndwords = size / sizeof(u32);
> - struct xe_tile *tile = gt_to_tile(gt);
> - struct xe_device *xe = tile_to_xe(tile);
> 	struct xe_guc *guc = &gt->uc.guc;
> - struct xe_bo *bo;
> + CLASS(xe_guc_buf_from_data, buf)(&guc->buf, src, size);
> int ret;
>
> xe_gt_assert(gt, size % sizeof(u32) == 0);
> xe_gt_assert(gt, size == ndwords * sizeof(u32));
>
> - bo = xe_bo_create_pin_map_novm(xe, tile,
> - ALIGN(size, PAGE_SIZE),
> - ttm_bo_type_kernel,
> - XE_BO_FLAG_SYSTEM |
> - XE_BO_FLAG_GGTT |
> - XE_BO_FLAG_GGTT_INVALIDATE, false);
> - if (IS_ERR(bo))
> - return PTR_ERR(bo);
> -
> - xe_map_memcpy_to(xe, &bo->vmap, 0, buff, size);
> + if (!xe_guc_buf_is_valid(buf))
> + return -ENOBUFS;
>
> ret = guc_action_vf_save_restore(guc, vfid, GUC_PF_OPCODE_VF_RESTORE,
> - xe_bo_ggtt_addr(bo), ndwords);
> + xe_guc_buf_flush(buf), ndwords);
> if (!ret)
> ret = -ENODATA;
> else if (ret > ndwords)
> ret = -EPROTO;
>
> - xe_bo_unpin_map_no_vm(bo);
> return ret;
> }
>
> diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
> index ccc7c60ae9b77..71ca06d1af62b 100644
> --- a/drivers/gpu/drm/xe/xe_guc.c
> +++ b/drivers/gpu/drm/xe/xe_guc.c
> @@ -857,7 +857,7 @@ int xe_guc_init_post_hwconfig(struct xe_guc *guc)
> if (ret)
> return ret;
>
> - ret = xe_guc_buf_cache_init(&guc->buf, SZ_8K);
> + ret = xe_guc_buf_cache_init(&guc->buf, IS_SRIOV_PF(guc_to_xe(guc)) ? SZ_8M : SZ_8K);
Shouldn't we also check xe_sriov_pf_migration_supported()?
Also, shouldn't this SZ_8M come from the PF code instead of being hardcoded here?
And maybe the PF could (one day) query that size from the GuC?
> if (ret)
> return ret;
>