From: Matthew Brost <matthew.brost@intel.com>
To: Tomasz Lis <tomasz.lis@intel.com>
Cc: intel-xe@lists.freedesktop.org,
"Michał Winiarski" <michal.winiarski@intel.com>,
"Michał Wajdeczko" <michal.wajdeczko@intel.com>,
"Piotr Piórkowski" <piotr.piorkowski@intel.com>
Subject: Re: [PATCH v3 4/4] drm/xe/vf: Redo LRC creation while in VF fixups
Date: Wed, 25 Feb 2026 17:49:31 -0800 [thread overview]
Message-ID: <aZ+mq7//0vNGe03S@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20260225235447.2772383-5-tomasz.lis@intel.com>
On Thu, Feb 26, 2026 at 12:54:47AM +0100, Tomasz Lis wrote:
> If the xe module within a VM is creating a new LRC during save/
> restore, that LRC will be invalid. The fixups procedure may not
> be able to reach it, as there is a race to add the new LRC
> reference to an exec queue.
>
> Even if the new LRC being created during VM migration is added
> to the exec queue in time for fixups, it may still remain damaged.
> In a small percentage of specially crafted test cases, the resulting
> LRC was still damaged and caused a GPU hang.
>
> Any LRC created in such a situation has to be re-created.
>
> Since a VM can be configured with an arbitrary number of CPU
> cores, that number may be as low as 1. In such a case, the kernel
> may switch CPU contexts in a way which misses a VF migration
> recovery running in parallel (by simply not switching to the LRC
> creation thread during recovery). Therefore, checking whether a
> migration is in progress just after LRC creation is not enough
> to ensure detection.
>
> Free the incorrectly created LRC and trigger a re-run of the
> creation, but only after waiting for the default LRC to get
> fixups. Use an additional atomic value, incremented after fixups,
> to ensure that any VF migration which avoided detection by a
> simple recovery-in-progress check will be caught.
>
> v2: Merge marker and wait for default LRC, reducing the number of
> calls within xe_init_eq(). Alter the LRC creation loop to remove
> a race with the post-migration fixups worker.
>
> Signed-off-by: Tomasz Lis <tomasz.lis@intel.com>
> ---
> drivers/gpu/drm/xe/xe_exec_queue.c | 29 ++++++++++++++++-------
> drivers/gpu/drm/xe/xe_gt_sriov_vf.c | 25 +++++++++++++++++--
> drivers/gpu/drm/xe/xe_gt_sriov_vf.h | 3 ++-
> drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h | 2 ++
> 4 files changed, 47 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index 2cb37af42021..07c6d8bc3ad8 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -365,17 +365,28 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q, u32 exec_queue_flags)
> * from the moment vCPU resumes execution.
> */
> for (i = 0; i < q->width; ++i) {
> - struct xe_lrc *lrc;
> + struct xe_lrc *__lrc = NULL;
> + int marker;
>
> - xe_gt_sriov_vf_wait_valid_ggtt(q->gt);
> - lrc = xe_lrc_create(q->hwe, q->vm, q->replay_state,
> - xe_lrc_ring_size(), q->msix_vec, flags);
> - if (IS_ERR(lrc)) {
> - err = PTR_ERR(lrc);
> - goto err_lrc;
> - }
> + do {
> + struct xe_lrc *lrc;
> +
> + marker = xe_gt_sriov_vf_wait_valid_ggtt(q->gt);
> +
> + lrc = xe_lrc_create(q->hwe, q->vm, q->replay_state,
> + xe_lrc_ring_size(), q->msix_vec, flags);
> + if (IS_ERR(lrc)) {
> + err = PTR_ERR(lrc);
> + goto err_lrc;
> + }
> +
> + xe_exec_queue_set_lrc(q, lrc, i);
> +
> + if (__lrc)
> + xe_lrc_put(__lrc);
> + __lrc = lrc;
>
> - xe_exec_queue_set_lrc(q, lrc, i);
> + } while (marker != xe_vf_migration_fixups_complete_count(q->gt));
> }
>
> return 0;
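BTW, for anyone following along, the loop above boils down to a
generation-counter retry: sample the fixup generation, do the work,
and redo it if the generation moved. A minimal userspace sketch of the
idea (all names here are illustrative stand-ins, not the driver's
symbols, and the "recovery worker" is simulated inline rather than
running in a parallel thread):

```c
#include <stdatomic.h>

static atomic_int fixups_complete;    /* bumped once per completed recovery */
static atomic_int migrations_pending; /* simulated migrations yet to happen */

/* Simulated recovery worker: applies fixups, then bumps the generation. */
static void maybe_run_recovery(void)
{
	if (atomic_load(&migrations_pending) > 0) {
		atomic_fetch_sub(&migrations_pending, 1);
		atomic_fetch_add(&fixups_complete, 1);
	}
}

/*
 * Generation-counter retry: sample the counter, create the object, and
 * redo the creation if any recovery completed in between.  Returns the
 * number of creation attempts, for illustration.
 */
static int create_object_stable(void)
{
	int attempts = 0;
	int marker;

	do {
		marker = atomic_load(&fixups_complete);
		attempts++;
		/* ...object creation here; a migration may land meanwhile... */
		maybe_run_recovery();
	} while (marker != atomic_load(&fixups_complete));

	return attempts;
}
```

The compare-after-create is what closes the race: a recovery that runs
entirely between the sample and the re-check still changes the counter,
so the stale object is discarded and rebuilt even if the creation
thread never observed recovery-in-progress directly.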
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> index 7f83c0d3b099..3da38f2ee317 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> @@ -1277,6 +1277,8 @@ static int vf_post_migration_fixups(struct xe_gt *gt)
> if (err)
> return err;
>
> + atomic_inc(&gt->sriov.vf.migration.fixups_complete);
> +
> return 0;
> }
>
> @@ -1515,20 +1517,39 @@ static bool vf_valid_ggtt(struct xe_gt *gt)
> return true;
> }
>
> +int xe_vf_migration_fixups_complete_count(struct xe_gt *gt)
Kernel doc.
> +{
> + if (!IS_SRIOV_VF(gt_to_xe(gt)) ||
> + !xe_sriov_vf_migration_supported(gt_to_xe(gt)))
> + return 0;
> +
> + /* should never match fixups_complete value */
> + if (!vf_valid_ggtt(gt))
> + return -1;
> +
> + return atomic_read(&gt->sriov.vf.migration.fixups_complete);
> +}
> +
> /**
> * xe_gt_sriov_vf_wait_valid_ggtt() - wait for valid GGTT nodes and address refs
> * @gt: the &xe_gt
Return: explain return value.
> */
> -void xe_gt_sriov_vf_wait_valid_ggtt(struct xe_gt *gt)
> +int xe_gt_sriov_vf_wait_valid_ggtt(struct xe_gt *gt)
> {
> int ret;
>
> + /*
> + * this condition shall be identical to the one in
> + * xe_vf_migration_fixups_complete_count()
> + */
> if (!IS_SRIOV_VF(gt_to_xe(gt)) ||
> !xe_sriov_vf_migration_supported(gt_to_xe(gt)))
> - return;
> + return 0;
>
> ret = wait_event_interruptible_timeout(gt->sriov.vf.migration.wq,
> vf_valid_ggtt(gt),
> HZ * 5);
> xe_gt_WARN_ON(gt, !ret);
> +
> + return atomic_read(&gt->sriov.vf.migration.fixups_complete);
> }
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
> index 7d97189c2d3d..a6f7127521a5 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
> @@ -39,6 +39,7 @@ void xe_gt_sriov_vf_print_config(struct xe_gt *gt, struct drm_printer *p);
> void xe_gt_sriov_vf_print_runtime(struct xe_gt *gt, struct drm_printer *p);
> void xe_gt_sriov_vf_print_version(struct xe_gt *gt, struct drm_printer *p);
>
> -void xe_gt_sriov_vf_wait_valid_ggtt(struct xe_gt *gt);
> +int xe_gt_sriov_vf_wait_valid_ggtt(struct xe_gt *gt);
> +int xe_vf_migration_fixups_complete_count(struct xe_gt *gt);
>
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
> index fca18be589db..0b397f259a26 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
> @@ -54,6 +54,8 @@ struct xe_gt_sriov_vf_migration {
> wait_queue_head_t wq;
> /** @scratch: Scratch memory for VF recovery */
> void *scratch;
> + /** @fixups_complete: Counts completed fixups stages */
> + atomic_t fixups_complete;
Hate to nitpick names but fixups_complete_count?
Logic in patch LGTM.
Matt
> /** @debug: Debug hooks for delaying migration */
> struct {
> /**
> --
> 2.25.1
>