From: Matthew Brost <matthew.brost@intel.com>
To: Tomasz Lis <tomasz.lis@intel.com>
Cc: intel-xe@lists.freedesktop.org,
"Michał Winiarski" <michal.winiarski@intel.com>,
"Michał Wajdeczko" <michal.wajdeczko@intel.com>,
"Piotr Piórkowski" <piotr.piorkowski@intel.com>
Subject: Re: [PATCH v4 2/4] drm/xe: Wrappers for setting and getting LRC references
Date: Thu, 26 Feb 2026 15:01:00 -0800
Message-ID: <aaDQrDdO2rxRWxoH@lstrano-desk.jf.intel.com>
In-Reply-To: <20260226212701.2937065-3-tomasz.lis@intel.com>
On Thu, Feb 26, 2026 at 10:26:59PM +0100, Tomasz Lis wrote:
> There is a small but non-zero chance that VF post-migration fixups
> are running on an exec queue during its teardown. The chance is
> reduced by starting the teardown with the release of guc_id, but it
> remains non-zero. On the other hand, the sync between fixups and EQ
> creation (wait_valid_ggtt) drastically increases the chance of such
> a parallel teardown if the queue creation error path is entered
> (err_lrc label).
>
> The exec queue itself is not going to cause an issue, but LRCs have
> a small chance of getting freed during the fixups.
>
> Creating a setter and a getter makes it easier to protect the fixup
> operations with a lock. For other driver activities, the original
> access method (without any protection) can still be used.
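
For anyone following along, the pattern described above can be sketched in
userspace terms. This is a hypothetical, simplified analogue only: the names
(fake_lrc, fake_queue, queue_set_lrc, queue_get_lrc) are made up for the
sketch, and a pthread mutex stands in for the driver's spinlock:

```c
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-ins for the driver's types, not the real ones. */
struct fake_lrc {
	int refcount;
};

struct fake_queue {
	pthread_mutex_t lrc_lookup_lock; /* stands in for spinlock_t */
	struct fake_lrc *lrc[4];
};

/* Setter: publish (or clear) the slot under the lock, so a concurrent
 * lookup either sees the LRC and references it, or sees NULL. */
static void queue_set_lrc(struct fake_queue *q, struct fake_lrc *lrc, int idx)
{
	pthread_mutex_lock(&q->lrc_lookup_lock);
	q->lrc[idx] = lrc;
	pthread_mutex_unlock(&q->lrc_lookup_lock);
}

/* Getter: take the reference inside the same critical section, closing
 * the window in which teardown could free the object between the
 * lookup and the reference bump. */
static struct fake_lrc *queue_get_lrc(struct fake_queue *q, int idx)
{
	struct fake_lrc *lrc;

	pthread_mutex_lock(&q->lrc_lookup_lock);
	lrc = q->lrc[idx];
	if (lrc)
		lrc->refcount++; /* xe_lrc_get() in the real code */
	pthread_mutex_unlock(&q->lrc_lookup_lock);
	return lrc;
}
```

The key point is that the refcount increment happens under the same lock as
the lookup; taking the reference after dropping the lock would reopen the
race with a parallel teardown.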
>
> v2: Separate lock, only for LRCs. Kerneldoc fixes. Subject tag fix.
>
> Signed-off-by: Tomasz Lis <tomasz.lis@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_exec_queue.c | 73 ++++++++++++++++++------
> drivers/gpu/drm/xe/xe_exec_queue.h | 1 +
> drivers/gpu/drm/xe/xe_exec_queue_types.h | 5 ++
> 3 files changed, 60 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index b4ef725a682d..c0e95f1a9911 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -231,6 +231,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
> INIT_LIST_HEAD(&q->hw_engine_group_link);
> INIT_LIST_HEAD(&q->pxp.link);
> spin_lock_init(&q->multi_queue.lock);
> + spin_lock_init(&q->lrc_lookup_lock);
> q->multi_queue.priority = XE_MULTI_QUEUE_PRIORITY_NORMAL;
>
> q->sched_props.timeslice_us = hwe->eclass->sched_props.timeslice_us;
> @@ -270,6 +271,56 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
> return q;
> }
>
> +static void xe_exec_queue_set_lrc(struct xe_exec_queue *q, struct xe_lrc *lrc, u16 idx)
> +{
> + xe_assert(gt_to_xe(q->gt), idx < q->width);
> +
> + scoped_guard(spinlock, &q->lrc_lookup_lock)
> + q->lrc[idx] = lrc;
> +}
> +
> +/**
> + * xe_exec_queue_get_lrc() - Get the LRC from exec queue.
> + * @q: The exec queue instance.
> + * @idx: Index within multi-LRC array.
> + *
> + * Retrieves the LRC at the given index for the exec queue under the
> + * lookup lock and takes a reference, which the caller must release
> + * with xe_lrc_put() when done.
> + *
> + * Return: Pointer to the LRC with a reference held, or NULL if not set.
> + */
> +struct xe_lrc *xe_exec_queue_get_lrc(struct xe_exec_queue *q, u16 idx)
> +{
> + struct xe_lrc *lrc;
> +
> + xe_assert(gt_to_xe(q->gt), idx < q->width);
> +
> + scoped_guard(spinlock, &q->lrc_lookup_lock) {
> + lrc = q->lrc[idx];
> + if (lrc)
> + xe_lrc_get(lrc);
> + }
> +
> + return lrc;
> +}
> +
> +/**
> + * xe_exec_queue_lrc() - Get the LRC from exec queue.
> + * @q: The exec queue instance.
> + *
> + * Retrieves the primary LRC for the exec queue. Note that this function
> + * returns only the first LRC instance, even when multiple parallel LRCs
> + * are configured. It does not take a reference on the LRC, so there
> + * is nothing to release after use.
> + *
> + * Return: Pointer to the primary LRC
> + */
> +struct xe_lrc *xe_exec_queue_lrc(struct xe_exec_queue *q)
> +{
> + return q->lrc[0];
> +}
> +
> static void __xe_exec_queue_fini(struct xe_exec_queue *q)
> {
> int i;
> @@ -327,8 +378,7 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q, u32 exec_queue_flags)
> goto err_lrc;
> }
>
> - /* Pairs with READ_ONCE to xe_exec_queue_contexts_hwsp_rebase */
> - WRITE_ONCE(q->lrc[i], lrc);
> + xe_exec_queue_set_lrc(q, lrc, i);
> }
>
> return 0;
> @@ -1288,21 +1338,6 @@ int xe_exec_queue_get_property_ioctl(struct drm_device *dev, void *data,
> return ret;
> }
>
> -/**
> - * xe_exec_queue_lrc() - Get the LRC from exec queue.
> - * @q: The exec_queue.
> - *
> - * Retrieves the primary LRC for the exec queue. Note that this function
> - * returns only the first LRC instance, even when multiple parallel LRCs
> - * are configured.
> - *
> - * Return: Pointer to LRC on success, error on failure
> - */
> -struct xe_lrc *xe_exec_queue_lrc(struct xe_exec_queue *q)
> -{
> - return q->lrc[0];
> -}
> -
> /**
> * xe_exec_queue_is_lr() - Whether an exec_queue is long-running
> * @q: The exec_queue
> @@ -1662,14 +1697,14 @@ int xe_exec_queue_contexts_hwsp_rebase(struct xe_exec_queue *q, void *scratch)
> for (i = 0; i < q->width; ++i) {
> struct xe_lrc *lrc;
>
> - /* Pairs with WRITE_ONCE in __xe_exec_queue_init */
> - lrc = READ_ONCE(q->lrc[i]);
> + lrc = xe_exec_queue_get_lrc(q, i);
> if (!lrc)
> continue;
>
> xe_lrc_update_memirq_regs_with_address(lrc, q->hwe, scratch);
> xe_lrc_update_hwctx_regs_with_address(lrc);
> err = xe_lrc_setup_wa_bb_with_scratch(lrc, q->hwe, scratch);
> + xe_lrc_put(lrc);
> if (err)
> break;
> }
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.h b/drivers/gpu/drm/xe/xe_exec_queue.h
> index c9e3a7c2d249..a82d99bd77bc 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.h
> @@ -160,6 +160,7 @@ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q);
> int xe_exec_queue_contexts_hwsp_rebase(struct xe_exec_queue *q, void *scratch);
>
> struct xe_lrc *xe_exec_queue_lrc(struct xe_exec_queue *q);
> +struct xe_lrc *xe_exec_queue_get_lrc(struct xe_exec_queue *q, u16 idx);
>
> /**
> * xe_exec_queue_idle_skip_suspend() - Can exec queue skip suspend
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index 3791fed34ffa..a1f3938f4173 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -257,6 +257,11 @@ struct xe_exec_queue {
> u64 tlb_flush_seqno;
> /** @hw_engine_group_link: link into exec queues in the same hw engine group */
> struct list_head hw_engine_group_link;
> + /**
> + * @lrc_lookup_lock: Lock protecting access to the @lrc array. Only
> + * needed on paths that may run in parallel with queue teardown.
> + */
> + spinlock_t lrc_lookup_lock;
> /** @lrc: logical ring context for this exec queue */
> struct xe_lrc *lrc[] __counted_by(width);
> };
> --
> 2.25.1
>
Thread overview: 11+ messages
2026-02-26 21:26 [PATCH v4 0/4] drm/xe/vf: Fix exec queue creation during post-migration recovery Tomasz Lis
2026-02-26 21:26 ` [PATCH v4 1/4] drm/xe/queue: Call fini on exec queue creation fail Tomasz Lis
2026-02-26 21:26 ` [PATCH v4 2/4] drm/xe: Wrappers for setting and getting LRC references Tomasz Lis
2026-02-26 23:01 ` Matthew Brost [this message]
2026-02-26 21:27 ` [PATCH v4 3/4] drm/xe/vf: Wait for all fixups before using default LRCs Tomasz Lis
2026-02-26 21:27 ` [PATCH v4 4/4] drm/xe/vf: Redo LRC creation while in VF fixups Tomasz Lis
2026-02-26 23:14 ` Matthew Brost
2026-02-26 21:32 ` ✓ CI.KUnit: success for drm/xe/vf: Fix exec queue creation during post-migration recovery (rev4) Patchwork
2026-02-26 22:21 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-27 3:43 ` ✗ Xe.CI.FULL: failure " Patchwork
2026-02-27 13:25 ` Lis, Tomasz