From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>,
intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com
Subject: Re: [PATCH v2] drm/xe/multi_queue: Protect priority against concurrent access
Date: Mon, 26 Jan 2026 13:39:14 +0100 [thread overview]
Message-ID: <a1502d2dbd1d550f14b1e2f3c2bfc82770417685.camel@linux.intel.com> (raw)
In-Reply-To: <20260123212710.3393405-2-niranjana.vishwanathapura@intel.com>
On Fri, 2026-01-23 at 13:27 -0800, Niranjana Vishwanathapura wrote:
> Use a spinlock to protect the multi-queue priority from being
> concurrently updated by multiple set_priority ioctls, and to guard
> against concurrent reads and writes of this field.
>
> v2: Update documentation, remove WRITE/READ_LOCK() (Thomas)
> Use scoped_guard, reduced lock scope (Matt Brost)
> v3: Fix author (checkpatch)
>
> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_exec_queue.c | 1 +
> drivers/gpu/drm/xe/xe_exec_queue_types.h | 7 ++++++-
> drivers/gpu/drm/xe/xe_guc_submit.c | 19 +++++++++++++++----
> 3 files changed, 22 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index 7e7e663189e4..66d0e10ee2c4 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -230,6 +230,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
> INIT_LIST_HEAD(&q->multi_gt_link);
> INIT_LIST_HEAD(&q->hw_engine_group_link);
> INIT_LIST_HEAD(&q->pxp.link);
> + spin_lock_init(&q->multi_queue.lock);
> q->multi_queue.priority = XE_MULTI_QUEUE_PRIORITY_NORMAL;
>
> q->sched_props.timeslice_us = hwe->eclass->sched_props.timeslice_us;
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index e987d431ce27..3791fed34ffa 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -161,8 +161,13 @@ struct xe_exec_queue {
> struct xe_exec_queue_group *group;
> /** @multi_queue.link: Link into group's secondary queues list */
> struct list_head link;
> - /** @multi_queue.priority: Queue priority within the multi-queue group */
> + /**
> + * @multi_queue.priority: Queue priority within the multi-queue group.
> + * It is protected by @multi_queue.lock.
> + */
> enum xe_multi_queue_priority priority;
> + /** @multi_queue.lock: Lock for protecting certain members */
> + spinlock_t lock;
> /** @multi_queue.pos: Position of queue within the multi-queue group */
> u8 pos;
> /** @multi_queue.valid: Queue belongs to a multi queue group */
> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> index 456f549c16f6..1f4625ddae0e 100644
> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> @@ -804,6 +804,7 @@ static void xe_guc_exec_queue_group_cgp_sync(struct xe_guc *guc,
> {
> struct xe_exec_queue_group *group = q->multi_queue.group;
> struct xe_device *xe = guc_to_xe(guc);
> + enum xe_multi_queue_priority priority;
> long ret;
>
> /*
> @@ -827,7 +828,10 @@ static void xe_guc_exec_queue_group_cgp_sync(struct xe_guc *guc,
> return;
> }
>
> - xe_lrc_set_multi_queue_priority(q->lrc[0], q->multi_queue.priority);
> + scoped_guard(spinlock, &q->multi_queue.lock)
> + priority = q->multi_queue.priority;
> +
> + xe_lrc_set_multi_queue_priority(q->lrc[0], priority);
> xe_guc_exec_queue_group_cgp_update(xe, q);
>
> WRITE_ONCE(group->sync_pending, true);
> @@ -2181,15 +2185,22 @@ static int guc_exec_queue_set_multi_queue_priority(struct xe_exec_queue *q,
>
> xe_gt_assert(guc_to_gt(exec_queue_to_guc(q)),
> xe_exec_queue_is_multi_queue(q));
>
> - if (q->multi_queue.priority == priority ||
> - exec_queue_killed_or_banned_or_wedged(q))
> + if (exec_queue_killed_or_banned_or_wedged(q))
> return 0;
>
> msg = kmalloc(sizeof(*msg), GFP_KERNEL);
> if (!msg)
> return -ENOMEM;
>
> - q->multi_queue.priority = priority;
> + scoped_guard(spinlock, &q->multi_queue.lock) {
> + if (q->multi_queue.priority == priority) {
> + kfree(msg);
> + return 0;
> + }
> +
> + q->multi_queue.priority = priority;
> + }
> +
> guc_exec_queue_add_msg(q, msg, SET_MULTI_QUEUE_PRIORITY);
>
> return 0;
Thread overview: 6+ messages
2026-01-23 21:27 [PATCH v2] drm/xe/multi_queue: Protect priority against concurrent access Niranjana Vishwanathapura
2026-01-23 21:34 ` ✓ CI.KUnit: success for drm/xe/multi_queue: Protect priority against concurrent access (rev3) Patchwork
2026-01-24 6:04 ` ✗ Xe.CI.Full: failure " Patchwork
2026-01-26 12:39 ` Thomas Hellström [this message]
-- strict thread matches above, loose matches on Subject: below --
2026-01-23 19:51 [PATCH v2] drm/xe/multi_queue: Protect priority against concurrent access Niranjana Vishwanathapura
2026-01-23 19:55 ` Matthew Brost