From: Boris Brezillon <boris.brezillon@collabora.com>
To: Alice Ryhl <aliceryhl@google.com>
Cc: "Danilo Krummrich" <dakr@kernel.org>,
"Matthew Brost" <matthew.brost@intel.com>,
"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
"Maxime Ripard" <mripard@kernel.org>,
"Thomas Zimmermann" <tzimmermann@suse.de>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Steven Price" <steven.price@arm.com>,
"Daniel Almeida" <daniel.almeida@collabora.com>,
"Liviu Dudau" <liviu.dudau@arm.com>,
dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
rust-for-linux@vger.kernel.org
Subject: Re: [PATCH 1/2] drm/gpuvm: add deferred vm_bo cleanup
Date: Mon, 8 Sep 2025 11:37:13 +0200
Message-ID: <20250908113713.34e57f16@fedora>
In-Reply-To: <aL6TJYRmWIkQXujj@google.com>

On Mon, 8 Sep 2025 08:26:13 +0000
Alice Ryhl <aliceryhl@google.com> wrote:
> On Mon, Sep 08, 2025 at 09:11:40AM +0200, Boris Brezillon wrote:
> > Hi Alice,
> >
> > On Sun, 7 Sep 2025 11:39:41 +0000
> > Alice Ryhl <aliceryhl@google.com> wrote:
> >
> > > Yeah, I guess we could have unlink remove the gpuva, but then allow the
> > > end-user to attach the gpuva to a list of gpuvas whose kfree is
> > > deferred. That way, the drm_gpuva_unlink() itself is not deferred, but
> > > any resources the gpuva holds can be.
> >
> > This ^.
> >
> > >
> > > Of course, this approach also makes deferred gpuva cleanup somewhat
> > > orthogonal to this patch.
> >
> > Well, yes and no, because if you go for deferred gpuva cleanup, you
> > don't really need the fancy kref_put() you have in this patch; it's
> > just a regular vm_bo_put() called from the deferred gpuva path on the
> > vm_bo attached to the gpuva being released.
>
> Ok, so what you suggest is that on gpuva_unlink() we remove the gpuva
> from the vm_bo's list, but instead of dropping the vm_bo's refcount we
> add the gpuva to a list, and in the deferred cleanup codepath we
> iterate gpuvas and drop vm_bo refcounts *at that point*. Did I
> understand that correctly?
Yes, that's what I'm suggesting.
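To make it a bit more concrete, here is a very rough sketch of what I
have in mind on the driver side. All the my_* names and fields are made
up; only drm_gpuvm_bo_get()/drm_gpuvm_bo_put(), drm_gpuva_unlink() and
the list/spinlock helpers are existing API, and locking details are
deliberately glossed over:

#include <drm/drm_gpuvm.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_vm {
        struct drm_gpuvm base;
        spinlock_t gc_lock;
        struct list_head gc_list;   /* gpuvas waiting for deferred cleanup */
};

struct my_vma {
        struct drm_gpuva base;
        struct drm_gpuvm_bo *vm_bo; /* ref to drop at deferred-cleanup time */
        struct list_head gc_node;
};

/* Called from the path that can't take the resv locks (e.g. VM_BIND job
 * execution). The unlink itself is not deferred, only the vm_bo put and
 * the kfree() are. The GEM gpuva lock is assumed to be held, as for any
 * drm_gpuva_unlink() call.
 */
static void my_vma_unlink_deferred(struct my_vm *vm, struct my_vma *vma)
{
        /* drm_gpuva_unlink() drops a vm_bo reference; take an extra one
         * first so the potentially-last put happens later.
         */
        vma->vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
        drm_gpuva_unlink(&vma->base);

        spin_lock(&vm->gc_lock);
        list_add_tail(&vma->gc_node, &vm->gc_list);
        spin_unlock(&vm->gc_lock);
}

/* Called from a context where the locks needed by drm_gpuvm_bo_put() can
 * be taken (worker, submit path, VM destruction).
 */
static void my_vm_flush_gc(struct my_vm *vm)
{
        struct my_vma *vma, *next;
        LIST_HEAD(gc);

        spin_lock(&vm->gc_lock);
        list_splice_init(&vm->gc_list, &gc);
        spin_unlock(&vm->gc_lock);

        list_for_each_entry_safe(vma, next, &gc, gc_node) {
                /* Regular put, no special kref_put() variant needed. */
                drm_gpuvm_bo_put(vma->vm_bo);
                kfree(vma);
        }
}

With something like that, the deferred work boils down to a list splice
plus regular puts, and it can live entirely in the driver or be turned
into a drm_gpuvm helper, whichever we prefer.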
>
> That means we don't immediately remove the vm_bo from the gem.gpuva
> list, but the gpuva list in the vm_bo will be empty. I guess you already
> have to handle such vm_bos anyway since you can already have an empty
> vm_bo in between vm_bo_obtain() and the first call to gpuva_link().
>
> One disadvantage is that we might end up preparing or unevicting a GEM
> object that doesn't have any VAs left, which the current approach
> avoids.
Fair enough, though, as you said, the unnecessary ::prepare() already
exists in the "queue-map-operation" path, because the object is added
to the extobj list as soon as vm_bo_obtain() is called, not at _map()
time. This being said, I don't know if it really matters in practice,
because:
1. if buffer eviction happens too often, the system is already suffering
   from the constant swapout/swapin; a single extra buffer is not going
   to make a difference
2. hopefully the buffer will be gone before the next job is submitted
most of the time
3. we can flush the gpuva destruction list before preparing/unevicting
   GEM objects, so we don't end up processing vm_bos that no longer
   exist (rough sketch below). Actually, I think this step is good to
   have regardless of what we decide here, because the fewer elements we
   have to iterate, the better, and in the submit path we've already
   acquired all the GEM locks needed to do that cleanup, so we're
   probably better off doing it right away.
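Roughly what I mean for point 3, reusing the made-up gc list/helper from
the sketch earlier in the thread; drm_gpuvm_exec_lock()/_validate()/
_unlock() are the real helpers, everything else is hypothetical:

/* vm_exec->vm, vm_exec->flags and vm_exec->num_fences are assumed to be
 * filled in by the caller.
 */
static int my_vm_lock_and_validate(struct my_vm *vm,
                                   struct drm_gpuvm_exec *vm_exec)
{
        int ret;

        ret = drm_gpuvm_exec_lock(vm_exec);
        if (ret)
                return ret;

        /* All the resvs are held at this point, so releasing the vm_bos
         * of already unlinked gpuvas is allowed, and it shortens the
         * lists drm_gpuvm_exec_validate() has to walk.
         */
        my_vm_flush_gc(vm);

        ret = drm_gpuvm_exec_validate(vm_exec);
        if (ret)
                drm_gpuvm_exec_unlock(vm_exec);

        /* On success, the caller submits the job, attaches its fences
         * and then calls drm_gpuvm_exec_unlock().
         */
        return ret;
}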
>
> > > One annoying part is that we don't have a gpuvm ops operation for
> > > freeing gpuvas, and if we add one for this, it would *only* be used in
> > > this case, as most drivers explicitly kfree gpuvas, which could be
> > > confusing for end-users.
> >
> > I'm also not sure ::vm_bo_free() was meant to be used like that. It was
> > for drivers that need to control the drm_gpuvm_bo allocation, not those
> > that rely on the default implementation (kmalloc). Given how things
> > are described in the doc, it feels weird to have a ::vm_bo_free()
> > without ::vm_bo_alloc(). So, if we decide to go this way (which I'm
> > still not convinced we should, given that ultimately we might want to
> > defer gpuva cleanup), the ::vm_bo_free() doc should be extended to
> > cover this 'deferred vm_bo free' case.
>
> I can implement vm_bo_alloc() too, but it seems like a pretty natural
> way to use vm_bo_free().
Not necessarily saying we should have a vm_bo_alloc(), just saying it
feels weird as it is now because of the asymmetry and how things are
described in the doc.
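For reference, the symmetric use the doc currently has in mind would
look something like this (my_vm_bo and its driver-private contents are
made up):

#include <drm/drm_gpuvm.h>
#include <linux/slab.h>

struct my_vm_bo {
        struct drm_gpuvm_bo base;
        /* driver-private per-(VM, BO) state goes here */
};

static struct drm_gpuvm_bo *my_vm_bo_alloc(void)
{
        struct my_vm_bo *vm_bo = kzalloc(sizeof(*vm_bo), GFP_KERNEL);

        return vm_bo ? &vm_bo->base : NULL;
}

static void my_vm_bo_free(struct drm_gpuvm_bo *vm_bo)
{
        kfree(container_of(vm_bo, struct my_vm_bo, base));
}

static const struct drm_gpuvm_ops my_gpuvm_ops = {
        .vm_bo_alloc = my_vm_bo_alloc,
        .vm_bo_free = my_vm_bo_free,
        /* ... */
};

Using ::vm_bo_free() alone as a deferred-kfree hook for the default
kmalloc'd vm_bos doesn't really fit that pattern, hence my request to at
least document it if we go down that road.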