From: Matthew Brost <matthew.brost@intel.com>
To: Matthew Auld <matthew.auld@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <stable@vger.kernel.org>
Subject: Re: [PATCH] drm/xe: prevent UAF around preempt fence
Date: Wed, 14 Aug 2024 15:21:44 +0000 [thread overview]
Message-ID: <ZrzLiMSn059qlu7P@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <20240814110129.825847-2-matthew.auld@intel.com>
On Wed, Aug 14, 2024 at 12:01:30PM +0100, Matthew Auld wrote:
> The fence lock is part of the queue, so in the current design anything
> locking the fence should also hold a ref to the queue to prevent the
> queue from being freed.
>
> However, currently it looks like we signal the fence and then drop the
> queue ref. If something is waiting on the fence, the waiter is kicked
> to wake up at some later point, and upon waking it first grabs the
> lock before checking the fence state. If we have already dropped the
> queue ref by then, the lock may already have been freed as part of the
> queue, leading to a UAF.
>
> To prevent this, move the fence lock into the fence itself so we don't
> run into lifetime issues. Alternatives might be a device-level lock,
> or releasing the queue only in the fence release callback; however,
> that might require pushing the release to another worker to avoid
> locking issues.
>
> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2454
> References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2342
> References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2020
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
Good catch. This is indeed a problem, and you have coded it in probably
the safest way possible.
With that:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> Cc: <stable@vger.kernel.org> # v6.8+
> ---
> drivers/gpu/drm/xe/xe_exec_queue.c | 1 -
> drivers/gpu/drm/xe/xe_exec_queue_types.h | 2 --
> drivers/gpu/drm/xe/xe_preempt_fence.c | 3 ++-
> drivers/gpu/drm/xe/xe_preempt_fence_types.h | 2 ++
> 4 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index 971e1234b8ea..0f610d273fb6 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -614,7 +614,6 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
>
> if (xe_vm_in_preempt_fence_mode(vm)) {
> q->lr.context = dma_fence_context_alloc(1);
> - spin_lock_init(&q->lr.lock);
>
> err = xe_vm_add_compute_exec_queue(vm, q);
> if (XE_IOCTL_DBG(xe, err))
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index 1408b02eea53..fc2a1a20b7e4 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -126,8 +126,6 @@ struct xe_exec_queue {
> u32 seqno;
> /** @lr.link: link into VM's list of exec queues */
> struct list_head link;
> - /** @lr.lock: preemption fences lock */
> - spinlock_t lock;
> } lr;
>
> /** @ops: submission backend exec queue operations */
> diff --git a/drivers/gpu/drm/xe/xe_preempt_fence.c b/drivers/gpu/drm/xe/xe_preempt_fence.c
> index 56e709d2fb30..83fbeea5aa20 100644
> --- a/drivers/gpu/drm/xe/xe_preempt_fence.c
> +++ b/drivers/gpu/drm/xe/xe_preempt_fence.c
> @@ -134,8 +134,9 @@ xe_preempt_fence_arm(struct xe_preempt_fence *pfence, struct xe_exec_queue *q,
> {
> list_del_init(&pfence->link);
> pfence->q = xe_exec_queue_get(q);
> + spin_lock_init(&pfence->lock);
> dma_fence_init(&pfence->base, &preempt_fence_ops,
> - &q->lr.lock, context, seqno);
> + &pfence->lock, context, seqno);
>
> return &pfence->base;
> }
> diff --git a/drivers/gpu/drm/xe/xe_preempt_fence_types.h b/drivers/gpu/drm/xe/xe_preempt_fence_types.h
> index b54b5c29b533..312c3372a49f 100644
> --- a/drivers/gpu/drm/xe/xe_preempt_fence_types.h
> +++ b/drivers/gpu/drm/xe/xe_preempt_fence_types.h
> @@ -25,6 +25,8 @@ struct xe_preempt_fence {
> struct xe_exec_queue *q;
> /** @preempt_work: work struct which issues preemption */
> struct work_struct preempt_work;
> + /** @lock: dma-fence fence lock */
> + spinlock_t lock;
> /** @error: preempt fence is in error state */
> int error;
> };
> --
> 2.46.0
>
2024-08-14 11:01 [PATCH] drm/xe: prevent UAF around preempt fence Matthew Auld
2024-08-14 12:17 ` ✓ CI.Patch_applied: success for " Patchwork
2024-08-14 12:17 ` ✗ CI.checkpatch: warning " Patchwork
2024-08-14 12:18 ` ✓ CI.KUnit: success " Patchwork
2024-08-14 12:30 ` ✓ CI.Build: " Patchwork
2024-08-14 12:32 ` ✓ CI.Hooks: " Patchwork
2024-08-14 12:34 ` ✓ CI.checksparse: " Patchwork
2024-08-14 13:05 ` ✓ CI.BAT: " Patchwork
2024-08-14 15:21 ` Matthew Brost [this message]
2024-08-14 16:15 ` ✗ CI.FULL: failure " Patchwork