Intel-XE Archive on lore.kernel.org
* [PATCH] drm/xe/sync: Fix user fence leak on alloc failure
@ 2026-02-19  1:42 Shuicheng Lin
From: Shuicheng Lin @ 2026-02-19  1:42 UTC (permalink / raw)
  To: intel-xe; +Cc: Shuicheng Lin, Matthew Brost

When dma_fence_chain_alloc() fails, properly release the user fence
reference to prevent a memory leak.

The error cleanup path in callers (xe_exec.c, xe_oa.c, xe_vm.c) uses a
while loop that cleans up syncs from index 0 to num_syncs-1. The failed
sync at the current index num_syncs is not covered by this loop, so the
local user_fence_put() is necessary to prevent a leak.
Also set sync->ufence = NULL after the user_fence_put() call: otherwise,
if the caller later calls xe_sync_entry_cleanup() on the failed sync,
the stale pointer would trigger another user_fence_put() on the
already-freed memory, causing a use-after-free bug.

Also remove extra whitespace in function call and comment.

Fixes: adda4e855ab6 ("drm/xe: Enforce correct user fence signaling order using")
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
---
 drivers/gpu/drm/xe/xe_sync.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
index c8fdcdbd6ae7..c5f71067fcd2 100644
--- a/drivers/gpu/drm/xe/xe_sync.c
+++ b/drivers/gpu/drm/xe/xe_sync.c
@@ -200,8 +200,11 @@ int xe_sync_entry_parse(struct xe_device *xe, struct xe_file *xef,
 			if (XE_IOCTL_DBG(xe, IS_ERR(sync->ufence)))
 				return PTR_ERR(sync->ufence);
 			sync->ufence_chain_fence = dma_fence_chain_alloc();
-			if (!sync->ufence_chain_fence)
+			if (!sync->ufence_chain_fence) {
+				user_fence_put(sync->ufence);
+				sync->ufence = NULL;
 				return -ENOMEM;
+			}
 			sync->ufence_syncobj = ufence_syncobj;
 		}
 
@@ -222,7 +225,7 @@ ALLOW_ERROR_INJECTION(xe_sync_entry_parse, ERRNO);
 int xe_sync_entry_add_deps(struct xe_sync_entry *sync, struct xe_sched_job *job)
 {
 	if (sync->fence)
-		return  drm_sched_job_add_dependency(&job->drm,
+		return drm_sched_job_add_dependency(&job->drm,
 						     dma_fence_get(sync->fence));
 
 	return 0;
@@ -311,7 +314,7 @@ void xe_sync_entry_cleanup(struct xe_sync_entry *sync)
  *
  * Get a fence from syncs, exec queue, and VM. If syncs contain in-fences create
  * and return a composite fence of all in-fences + last fence. If no in-fences
- * return last fence on  input exec queue. Caller must drop reference to
+ * return last fence on input exec queue. Caller must drop reference to
  * returned fence.
  *
  * Return: fence on success, ERR_PTR(-ENOMEM) on failure
-- 
2.50.1




Thread overview: 6+ messages
2026-02-19  1:42 [PATCH] drm/xe/sync: Fix user fence leak on alloc failure Shuicheng Lin
2026-02-19  2:03 ` ✓ CI.KUnit: success for " Patchwork
2026-02-19  2:38 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-19  3:00 ` [PATCH] " Matthew Brost
2026-02-19 21:32   ` Lin, Shuicheng
2026-02-19  3:37 ` ✗ Xe.CI.FULL: failure for " Patchwork
