From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Alice Ryhl <aliceryhl@google.com>,
Danilo Krummrich <dakr@kernel.org>,
Matthew Brost <matthew.brost@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
Maxime Ripard <mripard@kernel.org>,
Thomas Zimmermann <tzimmermann@suse.de>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
Boris Brezillon <boris.brezillon@collabora.com>,
Steven Price <steven.price@arm.com>,
Daniel Almeida <daniel.almeida@collabora.com>,
Liviu Dudau <liviu.dudau@arm.com>,
dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
rust-for-linux@vger.kernel.org
Subject: Re: [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup
Date: Tue, 09 Sep 2025 16:20:32 +0200
Message-ID: <c7a7aac3e82fde7a20970e6a65d200ab79804b0f.camel@linux.intel.com>
In-Reply-To: <20250909-vmbo-defer-v2-1-9835d7349089@google.com>
On Tue, 2025-09-09 at 13:36 +0000, Alice Ryhl wrote:
> When using GPUVM in immediate mode, it is necessary to call
> drm_gpuvm_unlink() from the fence signalling critical path. However,
> unlink may call drm_gpuvm_bo_put(), which causes some challenges:
>
> 1. drm_gpuvm_bo_put() often requires you to take resv locks, which you
>    can't do from the fence signalling critical path.
> 2. drm_gpuvm_bo_put() calls drm_gem_object_put(), which is often going
>    to be unsafe to call from the fence signalling critical path.
>
> To solve these issues, add a deferred version of drm_gpuvm_unlink() that
> adds the vm_bo to a deferred cleanup list, and then cleans it up later.
>
> The new methods take the GEM's GPUVA lock internally rather than letting
> the caller do it, because they also need to perform an operation after
> releasing the mutex again. This is to prevent freeing the GEM while
> holding the mutex (more info as comments in the patch). This means that
> the new methods can only be used with DRM_GPUVM_IMMEDIATE_MODE.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
> drivers/gpu/drm/drm_gpuvm.c | 174 ++++++++++++++++++++++++++++++++++++++++++++
> include/drm/drm_gpuvm.h     |  26 +++++++
> 2 files changed, 200 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index 78a1a4f095095e9379bdf604d583f6c8b9863ccb..5aa8b3813019705f70101950af2d8fe4e648e9d0 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -876,6 +876,27 @@ __drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm, spinlock_t *lock,
>  	cond_spin_unlock(lock, !!lock);
>  }
>
> +/**
> + * drm_gpuvm_bo_is_dead() - check whether this vm_bo is scheduled for cleanup

NIT: Is zombie a better name than dead?

> + * @vm_bo: the &drm_gpuvm_bo
> + *
> + * When a vm_bo is scheduled for cleanup using the bo_defer list, it is not
> + * immediately removed from the evict and extobj lists if they are protected by
> + * the resv lock, as we can't take that lock during run_job() in immediate
> + * mode. Therefore, anyone iterating these lists should skip entries that are
> + * being destroyed.
> + *
> + * Checking the refcount without incrementing it is okay as long as the lock
> + * protecting the evict/extobj list is held for as long as you are using the
> + * vm_bo, because even if the refcount hits zero while you are using it, freeing
> + * the vm_bo requires taking the list's lock.
> + */
> +static bool
> +drm_gpuvm_bo_is_dead(struct drm_gpuvm_bo *vm_bo)
> +{
> +	return !kref_read(&vm_bo->kref);
> +}
> +
>  /**
>   * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list
>   * @__vm_bo: the &drm_gpuvm_bo
> @@ -1081,6 +1102,9 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
>  	INIT_LIST_HEAD(&gpuvm->evict.list);
>  	spin_lock_init(&gpuvm->evict.lock);
>
> + INIT_LIST_HEAD(&gpuvm->bo_defer.list);
> + spin_lock_init(&gpuvm->bo_defer.lock);
> +
This list appears to exactly follow the pattern a lockless list was
designed for: it would save some space in the vm_bo and get rid of the
excessive locking. See <include/linux/llist.h>.
Otherwise LGTM.
/Thomas
Thread overview: 14+ messages
2025-09-09 13:36 [PATCH v2 0/2] Defer vm_bo cleanup in GPUVM with DRM_GPUVM_IMMEDIATE_MODE Alice Ryhl
2025-09-09 13:36 ` [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup Alice Ryhl
2025-09-09 13:39 ` Alice Ryhl
2025-09-11 11:57 ` Boris Brezillon
2025-09-11 12:00 ` Boris Brezillon
2025-09-09 14:20 ` Thomas Hellström [this message]
2025-09-10 6:39 ` Alice Ryhl
2025-09-11 12:18 ` Boris Brezillon
2025-09-09 13:36 ` [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer() Alice Ryhl
2025-09-11 10:15 ` Boris Brezillon
2025-09-11 11:08 ` Alice Ryhl
2025-09-11 11:18 ` Boris Brezillon
2025-09-11 12:35 ` Boris Brezillon
2025-09-11 12:38 ` Boris Brezillon