From: "Christian König" <christian.koenig@amd.com>
To: David Baum <davidbaum461@gmail.com>,
alexdeucher@gmail.com, "Khatri, Sunil" <Sunil.Khatri@amd.com>
Cc: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: Re: [PATCH] drm/amdgpu: fix resource leaks in userqueue creation error paths
Date: Tue, 31 Mar 2026 10:41:20 +0200 [thread overview]
Message-ID: <d229bcbf-dbdb-4e6d-bb81-0aa40d96d406@amd.com> (raw)
In-Reply-To: <20260328021614.20100-1-davidbaum461@gmail.com>
On 3/28/26 03:16, David Baum wrote:
> amdgpu_userq_create() has multiple error paths that jump directly to the
> 'unlock' label, which only releases the mutex. This leaks resources that
> were allocated earlier in the function:
>
> - When amdgpu_userq_fence_driver_alloc() fails, the queue struct,
> doorbell BO, and VA list entries are leaked.
> - When xa_store_irq() fails, the MQD and fence driver are leaked
> in addition to the queue struct.
> - When kasprintf() fails for the queue debug name, the entire queue
> with all its resources (xa entry, MQD, fence driver, queue struct)
> is leaked.
>
> Fix this by adding cleanup labels in reverse allocation order
> (erase_xa, destroy_mqd, free_fence_driver, free_queue) before the
> existing unlock label, and routing each error path to the correct
> label that matches the resources allocated up to that point.
Sunil, can you take a look at that?
Thanks,
Christian.
>
> Signed-off-by: David Baum <davidbaum461@gmail.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c | 37 +++++++++++------------
> 1 file changed, 17 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
> index 7c4503508..93c44798c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_userq.c
> @@ -819,17 +819,15 @@ amdgpu_userq_create(struct drm_file *filp, union drm_amdgpu_userq *args)
> amdgpu_userq_input_va_validate(adev, queue, args->in.rptr_va, AMDGPU_GPU_PAGE_SIZE) ||
> amdgpu_userq_input_va_validate(adev, queue, args->in.wptr_va, AMDGPU_GPU_PAGE_SIZE)) {
> r = -EINVAL;
> - kfree(queue);
> - goto unlock;
> + goto free_queue;
> }
>
> /* Convert relative doorbell offset into absolute doorbell index */
> index = amdgpu_userq_get_doorbell_index(uq_mgr, &db_info, filp);
> if (index == (uint64_t)-EINVAL) {
> drm_file_err(uq_mgr->file, "Failed to get doorbell for queue\n");
> - kfree(queue);
> r = -EINVAL;
> - goto unlock;
> + goto free_queue;
> }
>
> queue->doorbell_index = index;
> @@ -837,15 +835,13 @@ amdgpu_userq_create(struct drm_file *filp, union drm_amdgpu_userq *args)
> r = amdgpu_userq_fence_driver_alloc(adev, queue);
> if (r) {
> drm_file_err(uq_mgr->file, "Failed to alloc fence driver\n");
> - goto unlock;
> + goto free_queue;
> }
>
> r = uq_funcs->mqd_create(queue, &args->in);
> if (r) {
> drm_file_err(uq_mgr->file, "Failed to create Queue\n");
> - amdgpu_userq_fence_driver_free(queue);
> - kfree(queue);
> - goto unlock;
> + goto free_fence_driver;
> }
>
> /* drop this refcount during queue destroy */
> @@ -855,21 +851,17 @@ amdgpu_userq_create(struct drm_file *filp, union drm_amdgpu_userq *args)
> down_read(&adev->reset_domain->sem);
> r = xa_err(xa_store_irq(&adev->userq_doorbell_xa, index, queue, GFP_KERNEL));
> if (r) {
> - kfree(queue);
> up_read(&adev->reset_domain->sem);
> - goto unlock;
> + goto destroy_mqd;
> }
>
> r = xa_alloc(&uq_mgr->userq_xa, &qid, queue,
> XA_LIMIT(1, AMDGPU_MAX_USERQ_COUNT), GFP_KERNEL);
> if (r) {
> drm_file_err(uq_mgr->file, "Failed to allocate a queue id\n");
> - amdgpu_userq_fence_driver_free(queue);
> - uq_funcs->mqd_destroy(queue);
> - kfree(queue);
> r = -ENOMEM;
> up_read(&adev->reset_domain->sem);
> - goto unlock;
> + goto destroy_mqd;
> }
> up_read(&adev->reset_domain->sem);
>
> @@ -884,18 +876,14 @@ amdgpu_userq_create(struct drm_file *filp, union drm_amdgpu_userq *args)
> r = amdgpu_userq_map_helper(queue);
> if (r) {
> drm_file_err(uq_mgr->file, "Failed to map Queue\n");
> - xa_erase(&uq_mgr->userq_xa, qid);
> - amdgpu_userq_fence_driver_free(queue);
> - uq_funcs->mqd_destroy(queue);
> - kfree(queue);
> - goto unlock;
> + goto erase_xa;
> }
> }
>
> queue_name = kasprintf(GFP_KERNEL, "queue-%d", qid);
> if (!queue_name) {
> r = -ENOMEM;
> - goto unlock;
> + goto erase_xa;
> }
>
> #if defined(CONFIG_DEBUG_FS)
> @@ -908,7 +896,16 @@ amdgpu_userq_create(struct drm_file *filp, union drm_amdgpu_userq *args)
>
> args->out.queue_id = qid;
> atomic_inc(&uq_mgr->userq_count[queue->queue_type]);
> + goto unlock;
>
> +erase_xa:
> + xa_erase_irq(&uq_mgr->userq_xa, qid);
> +destroy_mqd:
> + uq_funcs->mqd_destroy(queue);
> +free_fence_driver:
> + amdgpu_userq_fence_driver_free(queue);
> +free_queue:
> + kfree(queue);
> unlock:
> mutex_unlock(&uq_mgr->userq_mutex);
>
> --
> 2.50.1 (Apple Git-155)
>