Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: "Lis, Tomasz" <tomasz.lis@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	"Wajdeczko, Michal" <Michal.Wajdeczko@intel.com>
Subject: Re: [PATCH v4 11/34] drm/xe/vf: Add xe_gt_recovery_inprogress helper
Date: Fri, 3 Oct 2025 21:32:53 -0700	[thread overview]
Message-ID: <aOCjdTKFjbNBBFI6@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <fcf3bbdf-e100-4b7a-9280-a056dc1dc478@intel.com>

On Fri, Oct 03, 2025 at 03:39:08AM +0200, Lis, Tomasz wrote:
> 
> On 10/2/2025 7:53 AM, Matthew Brost wrote:
> > Add xe_gt_recovery_inprogress helper.
> > 
> > This helper serves as the singular point to determine whether a GT
> > recovery is currently in progress. Expected callers include the GuC CT
> > layer and the GuC submission layer. Atomically visible as soon as vCPUs
> > are unhalted until VF recovery completes.
> > 
> > v3:
> >   - Add GT layer xe_gt_recovery_inprogress (Michal)
> >   - Don't blow up if memirq not enabled (CI)
> >   - Add __memirq_received with clear argument (Michal)
> >   - xe_memirq_sw_int_0_irq_pending rename (Michal)
> >   - Use offset in xe_memirq_sw_int_0_irq_pending (Michal)
> > v4:
> >   - Refactor xe_gt_recovery_inprogress logic around memirq (Michal)
> > 
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> >   drivers/gpu/drm/xe/xe_gt.h                | 13 ++++++
> >   drivers/gpu/drm/xe/xe_gt_sriov_vf.c       | 22 +++++++++++
> >   drivers/gpu/drm/xe/xe_gt_sriov_vf.h       |  2 +
> >   drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h | 10 +++++
> >   drivers/gpu/drm/xe/xe_memirq.c            | 48 +++++++++++++++++++++--
> >   drivers/gpu/drm/xe/xe_memirq.h            |  2 +
> >   6 files changed, 93 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_gt.h b/drivers/gpu/drm/xe/xe_gt.h
> > index 41880979f4de..ee0239b2f48c 100644
> > --- a/drivers/gpu/drm/xe/xe_gt.h
> > +++ b/drivers/gpu/drm/xe/xe_gt.h
> > @@ -12,6 +12,7 @@
> >   #include "xe_device.h"
> >   #include "xe_device_types.h"
> > +#include "xe_gt_sriov_vf.h"
> >   #include "xe_hw_engine.h"
> >   #define for_each_hw_engine(hwe__, gt__, id__) \
> > @@ -124,4 +125,16 @@ static inline bool xe_gt_is_usm_hwe(struct xe_gt *gt, struct xe_hw_engine *hwe)
> >   		hwe->instance == gt->usm.reserved_bcs_instance;
> >   }
> > +/**
> > + * xe_gt_recovery_inprogress() - GT recovery in progress
> > + * @gt: the &xe_gt
> > + *
> > + * Return: True if GT recovery in progress, False otherwise
> 
> ```
> True if migration recovery is pending or in progress
> ```
> Both the messages we log and the spec clearly establish that migration recovery
> is the queued work, not the whole period after the vCPUs start.
> 
> > + */
> > +static inline bool xe_gt_recovery_inprogress(struct xe_gt *gt)
> > +{
> > +	return IS_SRIOV_VF(gt_to_xe(gt)) &&
> > +		xe_gt_sriov_vf_recovery_inprogress(gt);
> > +}
> > +
> >   #endif
> > diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> > index 0461d5513487..c2be8fc14c88 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> > +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
> > @@ -26,6 +26,7 @@
> >   #include "xe_guc_hxg_helpers.h"
> >   #include "xe_guc_relay.h"
> >   #include "xe_lrc.h"
> > +#include "xe_memirq.h"
> >   #include "xe_mmio.h"
> >   #include "xe_sriov.h"
> >   #include "xe_sriov_vf.h"
> > @@ -776,6 +777,7 @@ void xe_gt_sriov_vf_migrated_event_handler(struct xe_gt *gt)
> >   	struct xe_device *xe = gt_to_xe(gt);
> >   	xe_gt_assert(gt, IS_SRIOV_VF(xe));
> > +	xe_gt_assert(gt, xe_gt_sriov_vf_recovery_inprogress(gt));
> 
> If the handler was called, then this assert will always pass. It does not fully
> verify the solution, because it does not prove the MEMIRQ source was already set
> on the first vCPU cycle, which these patches rely on.
> 
> It only verifies that:
> 
> * the IRQ type is MEMIRQ
> 
> * the IRQ handler does not clear the bytes which indicate the source of the
> request before calling the handler
> 
> So this assert is ok to stay, but it does not establish any requirement on the
> moment the IRQ is raised; that needs to be stated explicitly. Currently this is
> only mentioned in the commit message, and not as a requirement but rather... as
> a fact? It is hard to tell, since the sentence does not even name a subject.
> 
> I think we need this stated plainly, and not only in the commit message but also
> in the sources or kerneldoc.
> 
> I don't think it would be a problem to extend the solution to IRQs whose
> source is in MMIO; we could expand the function to also check MMIO bits if
> any platform requires that.
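> 
> Just to illustrate the idea (a rough sketch only; the register name below is a
> placeholder, not a real xe define, and the exact xe_mmio_read32() form is
> assumed), such an MMIO-source variant could look roughly like:
> 
> static bool vf_sw_int_0_pending_mmio(struct xe_gt *gt)
> {
> 	/*
> 	 * VF_SW_INT_SOURCE is hypothetical; this assumes SW_INT_0 shows up
> 	 * as the same GUC_INTR_SW_INT_0 bit in some MMIO source register.
> 	 */
> 	u32 source = xe_mmio_read32(&gt->mmio, VF_SW_INT_SOURCE);
> 
> 	return source & GUC_INTR_SW_INT_0;
> }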
> 
> I also now think the requirement that the IRQ bytes are set from the first vCPU
> instruction is reasonable, meaning all VMM systems will meet it: VM memory
> restore of all types has to finish before the GuC restore, and the GuC needs to
> drive the HW to trigger the IRQ during its state load; it cannot defer it.
> (BTW, the MEMIRQ buffer seems to be in system memory, not in VRAM as I
> previously thought.)
> 
> So I'm not sure if Michal will agree, but my perspective is: this is ok and
> just needs comments to emphasize the "IRQ source data visible from the first
> vCPU cycle" requirement.

Will do.
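
Something roughly like this for the xe_gt_sriov_vf_recovery_inprogress()
kerneldoc, then (sketch only, wording to be polished in the respin):

/**
 * xe_gt_sriov_vf_recovery_inprogress() - VF post migration recovery in progress
 * @gt: the &xe_gt
 *
 * Early detection relies on the SW_INT_0 memirq source byte being set by the
 * time the vCPUs are unhalted after migration, i.e. the IRQ source data must
 * be visible from the first vCPU cycle and stay set until the recovery worker
 * consumes it.
 *
 * Return: True if VF post migration recovery is pending or in progress,
 * False otherwise.
 */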

Matt

> 
> -Tomasz
> 
> >   	set_bit(gt->info.id, &xe->sriov.vf.migration.gt_flags);
> >   	/*
> > @@ -1118,3 +1120,23 @@ void xe_gt_sriov_vf_print_version(struct xe_gt *gt, struct drm_printer *p)
> >   	drm_printf(p, "\thandshake:\t%u.%u\n",
> >   		   pf_version->major, pf_version->minor);
> >   }
> > +
> > +/**
> > + * xe_gt_sriov_vf_recovery_inprogress() - VF post migration recovery in progress
> > + * @gt: the &xe_gt
> > + *
> > + * Return: True if VF post migration recovery in progress, False otherwise
> > + */
> > +bool xe_gt_sriov_vf_recovery_inprogress(struct xe_gt *gt)
> > +{
> > +	struct xe_memirq *memirq = &gt_to_tile(gt)->memirq;
> > +
> > +	xe_gt_assert(gt, IS_SRIOV_VF(gt_to_xe(gt)));
> > +
> > +	/* early detection until recovery starts */
> > +	if (xe_device_uses_memirq(gt_to_xe(gt)) &&
> > +	    xe_memirq_sw_int_0_irq_pending(memirq, &gt->uc.guc))
> > +		return true;
> > +
> > +	return READ_ONCE(gt->sriov.vf.migration.recovery_inprogress);
> > +}
> > diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
> > index 0af1dc769fe0..bb5f8eace19b 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
> > +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
> > @@ -25,6 +25,8 @@ void xe_gt_sriov_vf_default_lrcs_hwsp_rebase(struct xe_gt *gt);
> >   int xe_gt_sriov_vf_notify_resfix_done(struct xe_gt *gt);
> >   void xe_gt_sriov_vf_migrated_event_handler(struct xe_gt *gt);
> > +bool xe_gt_sriov_vf_recovery_inprogress(struct xe_gt *gt);
> > +
> >   u32 xe_gt_sriov_vf_gmdid(struct xe_gt *gt);
> >   u16 xe_gt_sriov_vf_guc_ids(struct xe_gt *gt);
> >   u64 xe_gt_sriov_vf_lmem(struct xe_gt *gt);
> > diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
> > index 298dedf4b009..1dfef60ec044 100644
> > --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
> > +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
> > @@ -46,6 +46,14 @@ struct xe_gt_sriov_vf_runtime {
> >   	} *regs;
> >   };
> > +/**
> > + * struct xe_gt_sriov_vf_migration - VF migration data.
> > + */
> > +struct xe_gt_sriov_vf_migration {
> > +	/** @recovery_inprogress: VF post migration recovery in progress */
> > +	bool recovery_inprogress;
> > +};
> > +
> >   /**
> >    * struct xe_gt_sriov_vf - GT level VF virtualization data.
> >    */
> > @@ -58,6 +66,8 @@ struct xe_gt_sriov_vf {
> >   	struct xe_gt_sriov_vf_selfconfig self_config;
> >   	/** @runtime: runtime data retrieved from the PF. */
> >   	struct xe_gt_sriov_vf_runtime runtime;
> > +	/** @migration: migration data for the VF. */
> > +	struct xe_gt_sriov_vf_migration migration;
> >   };
> >   #endif
> > diff --git a/drivers/gpu/drm/xe/xe_memirq.c b/drivers/gpu/drm/xe/xe_memirq.c
> > index 49c45ec3e83c..2391993634b5 100644
> > --- a/drivers/gpu/drm/xe/xe_memirq.c
> > +++ b/drivers/gpu/drm/xe/xe_memirq.c
> > @@ -398,8 +398,9 @@ void xe_memirq_postinstall(struct xe_memirq *memirq)
> >   		memirq_set_enable(memirq, true);
> >   }
> > -static bool memirq_received(struct xe_memirq *memirq, struct iosys_map *vector,
> > -			    u16 offset, const char *name)
> > +static bool __memirq_received(struct xe_memirq *memirq,
> > +			      struct iosys_map *vector, u16 offset,
> > +			      const char *name, bool clear)
> >   {
> >   	u8 value;
> > @@ -409,12 +410,26 @@ static bool memirq_received(struct xe_memirq *memirq, struct iosys_map *vector,
> >   			memirq_err_ratelimited(memirq,
> >   					       "Unexpected memirq value %#x from %s at %u\n",
> >   					       value, name, offset);
> > -		iosys_map_wr(vector, offset, u8, 0x00);
> > +		if (clear)
> > +			iosys_map_wr(vector, offset, u8, 0x00);
> >   	}
> >   	return value;
> >   }
> > +static bool memirq_received_noclear(struct xe_memirq *memirq,
> > +				    struct iosys_map *vector,
> > +				    u16 offset, const char *name)
> > +{
> > +	return __memirq_received(memirq, vector, offset, name, false);
> > +}
> > +
> > +static bool memirq_received(struct xe_memirq *memirq, struct iosys_map *vector,
> > +			    u16 offset, const char *name)
> > +{
> > +	return __memirq_received(memirq, vector, offset, name, true);
> > +}
> > +
> >   static void memirq_dispatch_engine(struct xe_memirq *memirq, struct iosys_map *status,
> >   				   struct xe_hw_engine *hwe)
> >   {
> > @@ -434,8 +449,16 @@ static void memirq_dispatch_guc(struct xe_memirq *memirq, struct iosys_map *stat
> >   	if (memirq_received(memirq, status, ilog2(GUC_INTR_GUC2HOST), name))
> >   		xe_guc_irq_handler(guc, GUC_INTR_GUC2HOST);
> > -	if (memirq_received(memirq, status, ilog2(GUC_INTR_SW_INT_0), name))
> > +	/*
> > +	 * We must wait to perform the clear operation until after
> > +	 * xe_gt_sriov_vf_start_migration_recovery() runs, to avoid race
> > +	 * conditions where xe_gt_sriov_vf_recovery_inprogress() returns false.
> > +	 */
> > +	if (memirq_received_noclear(memirq, status, ilog2(GUC_INTR_SW_INT_0),
> > +				    name)) {
> >   		xe_guc_irq_handler(guc, GUC_INTR_SW_INT_0);
> > +		iosys_map_wr(status, ilog2(GUC_INTR_SW_INT_0), u8, 0x00);
> > +	}
> >   }
> >   /**
> > @@ -460,6 +483,23 @@ void xe_memirq_hwe_handler(struct xe_memirq *memirq, struct xe_hw_engine *hwe)
> >   	}
> >   }
> > +/**
> > + * xe_memirq_sw_int_0_irq_pending() - SW_INT_0 IRQ is pending
> > + * @memirq: the &xe_memirq
> > + * @guc: the &xe_guc to check for IRQ
> > + *
> > + * Return: True if SW_INT_0 IRQ is pending on @guc, False otherwise
> > + */
> > +bool xe_memirq_sw_int_0_irq_pending(struct xe_memirq *memirq, struct xe_guc *guc)
> > +{
> > +	struct xe_gt *gt = guc_to_gt(guc);
> > +	u32 offset = xe_gt_is_media_type(gt) ? ilog2(INTR_MGUC) : ilog2(INTR_GUC);
> > +	struct iosys_map map = IOSYS_MAP_INIT_OFFSET(&memirq->status, offset * SZ_16);
> > +
> > +	return memirq_received_noclear(memirq, &map, ilog2(GUC_INTR_SW_INT_0),
> > +				       guc_name(guc));
> > +}
> > +
> >   /**
> >    * xe_memirq_handler - The `Memory Based Interrupts`_ Handler.
> >    * @memirq: the &xe_memirq
> > diff --git a/drivers/gpu/drm/xe/xe_memirq.h b/drivers/gpu/drm/xe/xe_memirq.h
> > index 06130650e9d6..a6fffdadef88 100644
> > --- a/drivers/gpu/drm/xe/xe_memirq.h
> > +++ b/drivers/gpu/drm/xe/xe_memirq.h
> > @@ -25,4 +25,6 @@ void xe_memirq_handler(struct xe_memirq *memirq);
> >   int xe_memirq_init_guc(struct xe_memirq *memirq, struct xe_guc *guc);
> > +bool xe_memirq_sw_int_0_irq_pending(struct xe_memirq *memirq, struct xe_guc *guc);
> > +
> >   #endif

Thread overview: 71+ messages
2025-10-02  5:53 [PATCH v4 00/34] VF migration redesign Matthew Brost
2025-10-02  5:53 ` [PATCH v4 01/34] drm/xe: Add NULL checks to scratch LRC allocation Matthew Brost
2025-10-02 22:02   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 02/34] Revert "drm/xe/vf: Rebase exec queue parallel commands during migration recovery" Matthew Brost
2025-10-02  5:53 ` [PATCH v4 03/34] Revert "drm/xe/vf: Post migration, repopulate ring area for pending request" Matthew Brost
2025-10-02  5:53 ` [PATCH v4 04/34] Revert "drm/xe/vf: Fixup CTB send buffer messages after migration" Matthew Brost
2025-10-02  5:53 ` [PATCH v4 05/34] drm/xe: Save off position in ring in which a job was programmed Matthew Brost
2025-10-02  5:53 ` [PATCH v4 06/34] drm/xe/guc: Track pending-enable source in submission state Matthew Brost
2025-10-02  5:53 ` [PATCH v4 07/34] drm/xe: Track LR jobs in DRM scheduler pending list Matthew Brost
2025-10-02 16:14   ` Matthew Auld
2025-10-05  5:21     ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 08/34] drm/xe: Don't change LRC ring head on job resubmission Matthew Brost
2025-10-02 14:15   ` Matthew Auld
2025-10-05  5:25     ` Matthew Brost
2025-10-05  6:53       ` Matthew Brost
2025-10-06  8:59         ` Matthew Auld
2025-10-02  5:53 ` [PATCH v4 09/34] drm/xe: Make LRC W/A scratch buffer usage consistent Matthew Brost
2025-10-02  5:53 ` [PATCH v4 10/34] drm/xe/guc: Document GuC submission backend Matthew Brost
2025-10-03 14:30   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 11/34] drm/xe/vf: Add xe_gt_recovery_inprogress helper Matthew Brost
2025-10-03  1:39   ` Lis, Tomasz
2025-10-04  4:32     ` Matthew Brost [this message]
2025-10-03  8:40   ` Michal Wajdeczko
2025-10-04  4:32     ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 12/34] drm/xe/vf: Make VF recovery run on per-GT worker Matthew Brost
2025-10-02  5:53 ` [PATCH v4 13/34] drm/xe/vf: Abort H2G sends during VF post-migration recovery Matthew Brost
2025-10-02  5:53 ` [PATCH v4 14/34] drm/xe/vf: Remove memory allocations from VF post migration recovery Matthew Brost
2025-10-02  5:53 ` [PATCH v4 15/34] drm/xe/vf: Close multi-GT GGTT shift race Matthew Brost
2025-10-03 14:24   ` Michal Wajdeczko
2025-10-04  4:36     ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 16/34] drm/xe/vf: Teardown VF post migration worker on driver unload Matthew Brost
2025-10-02  5:53 ` [PATCH v4 17/34] drm/xe/vf: Don't allow GT reset to be queued during VF post migration recovery Matthew Brost
2025-10-03 16:09   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 18/34] drm/xe/vf: Wakeup in GuC backend on " Matthew Brost
2025-10-03 14:38   ` Michal Wajdeczko
2025-10-05  6:22     ` Matthew Brost
2025-10-05  6:35       ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 19/34] drm/xe/vf: Avoid indefinite blocking in preempt rebind worker for VFs supporting migration Matthew Brost
2025-10-02  5:53 ` [PATCH v4 20/34] drm/xe/vf: Use GUC_HXG_TYPE_EVENT for GuC context register Matthew Brost
2025-10-03 14:26   ` Lis, Tomasz
2025-10-05  5:43     ` Matthew Brost
2025-10-03 14:57   ` Michal Wajdeczko
2025-10-02  5:53 ` [PATCH v4 21/34] drm/xe/vf: Flush and stop CTs in VF post migration recovery Matthew Brost
2025-10-02  5:53 ` [PATCH v4 22/34] drm/xe/vf: Reset TLB invalidations during " Matthew Brost
2025-10-02  5:53 ` [PATCH v4 23/34] drm/xe/vf: Kickstart after resfix in " Matthew Brost
2025-10-02  5:53 ` [PATCH v4 24/34] drm/xe/vf: Start CTs before resfix " Matthew Brost
2025-10-02 21:50   ` Lis, Tomasz
2025-10-03 15:10   ` Michal Wajdeczko
2025-10-05  6:49     ` Matthew Brost
2025-10-05 12:28       ` Michal Wajdeczko
2025-10-02  5:53 ` [PATCH v4 25/34] drm/xe/vf: Abort VF post migration recovery on failure Matthew Brost
2025-10-02  5:53 ` [PATCH v4 26/34] drm/xe/vf: Replay GuC submission state on pause / unpause Matthew Brost
2025-10-02  5:53 ` [PATCH v4 27/34] drm/xe: Move queue init before LRC creation Matthew Brost
2025-10-03 13:25   ` Lis, Tomasz
2025-10-05  8:03     ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 28/34] drm/xe/vf: Add debug prints for GuC replaying state during VF recovery Matthew Brost
2025-10-03 13:08   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 29/34] drm/xe/vf: Workaround for race condition in GuC firmware during VF pause Matthew Brost
2025-10-03 13:06   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 30/34] drm/xe: Use PPGTT addresses for TLB invalidation to avoid GGTT fixups Matthew Brost
2025-10-02  5:53 ` [PATCH v4 31/34] drm/xe/vf: Use primary GT ordered work queue on media GT on PTL VF Matthew Brost
2025-10-02 21:00   ` Lis, Tomasz
2025-10-05  7:03     ` Matthew Brost
2025-10-02  5:54 ` [PATCH v4 32/34] drm/xe/vf: Ensure media GT VF recovery runs after primary GT on PTL Matthew Brost
2025-10-02 20:19   ` Lis, Tomasz
2025-10-02  5:54 ` [PATCH v4 33/34] drm/xe/vf: Rebase CCS save/restore BB GGTT addresses Matthew Brost
2025-10-02  5:54 ` [PATCH v4 34/34] drm/xe/guc: Increase wait timeout to 2sec after BUSY reply from GuC Matthew Brost
2025-10-02  6:45 ` ✗ CI.checkpatch: warning for VF migration redesign (rev4) Patchwork
2025-10-02  6:47 ` ✓ CI.KUnit: success " Patchwork
2025-10-02  7:33 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-10-02  9:19 ` ✗ Xe.CI.Full: " Patchwork
