From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: intel-xe@lists.freedesktop.org
Cc: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Matthew Brost" <matthew.brost@intel.com>,
himal.prasad.ghimiray@intel.com,
"Matthew Auld" <matthew.auld@intel.com>
Subject: [PATCH v3 2/5] drm/xe/svm: Fix a potential bo UAF
Date: Mon, 24 Mar 2025 17:54:57 +0100
Message-ID: <20250324165500.20680-3-thomas.hellstrom@linux.intel.com>
In-Reply-To: <20250324165500.20680-1-thomas.hellstrom@linux.intel.com>
If drm_gpusvm_migrate_to_devmem() succeeds and a CPU access then happens
to the range, the bo may be freed before xe_bo_unlock(), causing a UAF.
Since the creation reference is transferred on success, use
xe_svm_devmem_release() to release it on drm_gpusvm_migrate_to_devmem()
failure, and hold a local reference across the call to protect against
the UAF.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 52613dd8573a..c7424c824a14 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -702,11 +702,14 @@ static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
 	list_for_each_entry(block, blocks, link)
 		block->private = vr;
 
+	xe_bo_get(bo);
 	err = drm_gpusvm_migrate_to_devmem(&vm->svm.gpusvm, &range->base,
					   &bo->devmem_allocation, ctx);
-	xe_bo_unlock(bo);
 	if (err)
-		xe_bo_put(bo); /* Creation ref */
+		xe_svm_devmem_release(&bo->devmem_allocation);
+
+	xe_bo_unlock(bo);
+	xe_bo_put(bo);
 
 unlock:
 	mmap_read_unlock(mm);
--
2.48.1