From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>, intel-xe@lists.freedesktop.org
Subject: Re: [PATCH] drm/xe: Move VM dma-resv lock from xe_exec_queue_create to __xe_exec_queue_init
Date: Fri, 9 Aug 2024 21:50:39 +0200 [thread overview]
Message-ID: <5e48b665-dd7f-4707-a963-9d5e8fd53e14@linux.intel.com> (raw)
In-Reply-To: <20240724152831.1848325-1-matthew.brost@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
On 2024-07-24 17:28, Matthew Brost wrote:
> The critical section which requires the VM dma-resv lock is the call to
> xe_lrc_create in __xe_exec_queue_init. Move this lock into
> __xe_exec_queue_init, holding it just around xe_lrc_create. Not only is
> this good practice, it also fixes double locking of the VM dma-resv in
> the error paths of __xe_exec_queue_init, as xe_lrc_put tries to acquire
> it too, resulting in a deadlock.
>
> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_exec_queue.c | 23 ++++++++++++++---------
> 1 file changed, 14 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index 69867a7b7c77..0d72846af9bf 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -105,22 +105,35 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
>
> static int __xe_exec_queue_init(struct xe_exec_queue *q)
> {
> + struct xe_vm *vm = q->vm;
> int i, err;
>
> + if (vm) {
> + err = xe_vm_lock(vm, true);
> + if (err)
> + return err;
> + }
> +
> for (i = 0; i < q->width; ++i) {
> q->lrc[i] = xe_lrc_create(q->hwe, q->vm, SZ_16K);
> if (IS_ERR(q->lrc[i])) {
> err = PTR_ERR(q->lrc[i]);
> - goto err_lrc;
> + goto err_unlock;
> }
> }
>
> + if (vm)
> + xe_vm_unlock(vm);
> +
> err = q->ops->init(q);
> if (err)
> goto err_lrc;
>
> return 0;
>
> +err_unlock:
> + if (vm)
> + xe_vm_unlock(vm);
> err_lrc:
> for (i = i - 1; i >= 0; --i)
> xe_lrc_put(q->lrc[i]);
> @@ -140,15 +153,7 @@ struct xe_exec_queue *xe_exec_queue_create(struct xe_device *xe, struct xe_vm *v
> if (IS_ERR(q))
> return q;
>
> - if (vm) {
> - err = xe_vm_lock(vm, true);
> - if (err)
> - goto err_post_alloc;
> - }
> -
> err = __xe_exec_queue_init(q);
> - if (vm)
> - xe_vm_unlock(vm);
> if (err)
> goto err_post_alloc;
>
Thread overview: 10+ messages
2024-07-24 15:28 [PATCH] drm/xe: Move VM dma-resv lock from xe_exec_queue_create to __xe_exec_queue_init Matthew Brost
2024-07-24 15:40 ` ✓ CI.Patch_applied: success for " Patchwork
2024-07-24 15:40 ` ✗ CI.checkpatch: warning " Patchwork
2024-07-24 15:42 ` ✓ CI.KUnit: success " Patchwork
2024-07-24 15:54 ` ✓ CI.Build: " Patchwork
2024-07-24 15:56 ` ✓ CI.Hooks: " Patchwork
2024-07-24 15:57 ` ✓ CI.checksparse: " Patchwork
2024-07-24 16:18 ` ✓ CI.BAT: " Patchwork
2024-07-24 18:19 ` ✗ CI.FULL: failure " Patchwork
2024-08-09 19:50 ` Maarten Lankhorst [this message]