From: Michal Wajdeczko <michal.wajdeczko@intel.com>
To: Tomasz Lis <tomasz.lis@intel.com>, intel-xe@lists.freedesktop.org
Cc: "Michał Winiarski" <michal.winiarski@intel.com>,
"Piotr Piórkowski" <piotr.piorkowski@intel.com>,
"Matthew Brost" <matthew.brost@intel.com>,
"Lucas De Marchi" <lucas.demarchi@intel.com>,
"Satyanarayana K V P" <satyanarayana.k.v.p@intel.com>
Subject: Re: [PATCH v4 6/8] drm/xe/vf: Rebase MEMIRQ structures for all contexts after migration
Date: Mon, 9 Jun 2025 13:15:41 +0200
Message-ID: <a61cdf2b-9314-4a67-b66a-80e3a39b77be@intel.com>
In-Reply-To: <20250606001823.1010994-7-tomasz.lis@intel.com>
On 06.06.2025 02:18, Tomasz Lis wrote:
> All contexts require an update of state data, as the data includes
> GGTT references to memirq-related buffers.
>
> Default contexts need these references updated as well, because they
> are not refreshed when a new context is created from them.
>
> v2: Update addresses by xe_lrc_write_ctx_reg() rather than
> set_memory_based_intr()
> v3: Renamed parameter, reordered parameters in some functs
> v4: Check if have MEMIRQ, move `xe_gt*` funct to proper file
>
> Signed-off-by: Tomasz Lis <tomasz.lis@intel.com>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Michal Winiarski <michal.winiarski@intel.com>
> Acked-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> ---
> drivers/gpu/drm/xe/xe_exec_queue.c | 4 ++-
> drivers/gpu/drm/xe/xe_gt_sriov_vf.c | 14 +++++++++++
> drivers/gpu/drm/xe/xe_gt_sriov_vf.h | 1 +
> drivers/gpu/drm/xe/xe_lrc.c | 38 +++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_lrc.h | 2 ++
> drivers/gpu/drm/xe/xe_sriov_vf.c | 4 ++-
> 6 files changed, 61 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index c44a523ba906..86b2f9034902 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -1038,6 +1038,8 @@ void xe_exec_queue_contexts_hwsp_rebase(struct xe_exec_queue *q)
> {
> int i;
>
> - for (i = 0; i < q->width; ++i)
> + for (i = 0; i < q->width; ++i) {
> + xe_lrc_update_memirq_regs_with_address(q->lrc[i], q->hwe);
> xe_lrc_update_hwctx_regs_with_address(q->lrc[i]);
> + }
> }
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> index 8fa210c0ef1a..0a5f78cf0490 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> @@ -26,6 +26,7 @@
> #include "xe_guc_ct.h"
> #include "xe_guc_hxg_helpers.h"
> #include "xe_guc_relay.h"
> +#include "xe_lrc.h"
> #include "xe_mmio.h"
> #include "xe_sriov.h"
> #include "xe_sriov_vf.h"
> @@ -709,6 +710,19 @@ int xe_gt_sriov_vf_connect(struct xe_gt *gt)
> return err;
> }
>
> +/**
> + * xe_gt_sriov_vf_default_lrcs_hwsp_rebase - Update GGTT references in HWSP of default LRCs.
> + * @gt: the &xe_gt struct instance
> + */
> +void xe_gt_sriov_vf_default_lrcs_hwsp_rebase(struct xe_gt *gt)
> +{
> + struct xe_hw_engine *hwe;
> + enum xe_hw_engine_id id;
> +
> + for_each_hw_engine(hwe, gt, id)
> + xe_default_lrc_update_memirq_regs_with_address(hwe);
> +}
this function is not really specific to the data maintained by the VF,
so a better fit would likely be xe_lrc.c or xe_hw_engine.c, with a name like
xe_hw_engines_update_default_lrc(gt)
or
xe_lrc_update_defaults(gt)
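e.g. a thin helper in xe_lrc.c could look like this (just a sketch reusing
the loop from this patch; final name and placement up to you):

```c
/* in xe_lrc.c - sketch only, reusing the loop body from this patch */
void xe_lrc_update_defaults(struct xe_gt *gt)
{
	struct xe_hw_engine *hwe;
	enum xe_hw_engine_id id;

	for_each_hw_engine(hwe, gt, id)
		xe_default_lrc_update_memirq_regs_with_address(hwe);
}
```

then xe_gt_sriov_vf_default_lrcs_hwsp_rebase() could be dropped and the
caller in xe_sriov_vf.c would use this directly.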
> +
> /**
> * xe_gt_sriov_vf_migrated_event_handler - Start a VF migration recovery,
> * or just mark that a GuC is ready for it.
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
> index 6250fe774d89..5ab25a3c24ea 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
> @@ -17,6 +17,7 @@ int xe_gt_sriov_vf_bootstrap(struct xe_gt *gt);
> int xe_gt_sriov_vf_query_config(struct xe_gt *gt);
> int xe_gt_sriov_vf_connect(struct xe_gt *gt);
> int xe_gt_sriov_vf_query_runtime(struct xe_gt *gt);
> +void xe_gt_sriov_vf_default_lrcs_hwsp_rebase(struct xe_gt *gt);
> int xe_gt_sriov_vf_notify_resfix_done(struct xe_gt *gt);
> void xe_gt_sriov_vf_migrated_event_handler(struct xe_gt *gt);
>
> diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
> index 72690f71289c..c74856f3974a 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.c
> +++ b/drivers/gpu/drm/xe/xe_lrc.c
> @@ -898,6 +898,44 @@ static void *empty_lrc_data(struct xe_hw_engine *hwe)
> return data;
> }
>
> +/**
> + * xe_default_lrc_update_memirq_regs_with_address - Re-compute GGTT references in default LRC
> + * of given engine.
> + * @hwe: the &xe_hw_engine struct instance
> + */
> +void xe_default_lrc_update_memirq_regs_with_address(struct xe_hw_engine *hwe)
> +{
> + struct xe_gt *gt = hwe->gt;
> + u32 *regs;
> +
> + if (!gt->default_lrc[hwe->class])
> + return;
> +
> + regs = gt->default_lrc[hwe->class] + LRC_PPHWSP_SIZE;
> + set_memory_based_intr(regs, hwe);
> +}
> +
> +/**
> + * xe_lrc_update_memirq_regs_with_address - Re-compute GGTT references in mem interrupt data
> + * for given LRC.
> + * @lrc: the &xe_lrc struct instance
> + * @hwe: the &xe_hw_engine struct instance
> + */
> +void xe_lrc_update_memirq_regs_with_address(struct xe_lrc *lrc, struct xe_hw_engine *hwe)
> +{
> + struct xe_memirq *memirq = &gt_to_tile(hwe->gt)->memirq;
> +
> + if (!xe_device_uses_memirq(gt_to_xe(hwe->gt)))
> + return;
> +
> + xe_lrc_write_ctx_reg(lrc, CTX_INT_MASK_ENABLE_PTR,
> + xe_memirq_enable_ptr(memirq));
> + xe_lrc_write_ctx_reg(lrc, CTX_INT_STATUS_REPORT_PTR,
> + xe_memirq_status_ptr(memirq, hwe));
> + xe_lrc_write_ctx_reg(lrc, CTX_INT_SRC_REPORT_PTR,
> + xe_memirq_source_ptr(memirq, hwe));
> +}
> +
> static void xe_lrc_set_ppgtt(struct xe_lrc *lrc, struct xe_vm *vm)
> {
> u64 desc = xe_vm_pdp4_descriptor(vm, gt_to_tile(lrc->gt));
> diff --git a/drivers/gpu/drm/xe/xe_lrc.h b/drivers/gpu/drm/xe/xe_lrc.h
> index e7a99cfd0abe..801a6b943f6e 100644
> --- a/drivers/gpu/drm/xe/xe_lrc.h
> +++ b/drivers/gpu/drm/xe/xe_lrc.h
> @@ -89,6 +89,8 @@ u32 xe_lrc_indirect_ring_ggtt_addr(struct xe_lrc *lrc);
> u32 xe_lrc_ggtt_addr(struct xe_lrc *lrc);
> u32 *xe_lrc_regs(struct xe_lrc *lrc);
> void xe_lrc_update_hwctx_regs_with_address(struct xe_lrc *lrc);
> +void xe_default_lrc_update_memirq_regs_with_address(struct xe_hw_engine *hwe);
> +void xe_lrc_update_memirq_regs_with_address(struct xe_lrc *lrc, struct xe_hw_engine *hwe);
>
> u32 xe_lrc_read_ctx_reg(struct xe_lrc *lrc, int reg_nr);
> void xe_lrc_write_ctx_reg(struct xe_lrc *lrc, int reg_nr, u32 val);
> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf.c b/drivers/gpu/drm/xe/xe_sriov_vf.c
> index d54048bc4576..cf07d037a83a 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_vf.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_vf.c
> @@ -247,8 +247,10 @@ static void vf_post_migration_fixup_contexts(struct xe_device *xe)
> struct xe_gt *gt;
> unsigned int id;
>
> - for_each_gt(gt, xe, id)
> + for_each_gt(gt, xe, id) {
> + xe_gt_sriov_vf_default_lrcs_hwsp_rebase(gt);
> + xe_guc_contexts_hwsp_rebase(&gt->uc.guc);
> + }
> }
>
> static void vf_post_migration_fixup_ctb(struct xe_device *xe)