From: Daniel Vetter <daniel@ffwll.ch>
To: Dmitry Osipenko <dmitry.osipenko@collabora.com>,
Daniel Stone <daniel@fooishbar.org>
Cc: David Airlie <airlied@linux.ie>,
dri-devel@lists.freedesktop.org,
Gurchetan Singh <gurchetansingh@chromium.org>,
Dmitry Osipenko <digetx@gmail.com>, Rob Herring <robh@kernel.org>,
Daniel Stone <daniel@fooishbar.org>,
Steven Price <steven.price@arm.com>,
Gustavo Padovan <gustavo.padovan@collabora.com>,
Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
Chia-I Wu <olvaffe@gmail.com>, Daniel Vetter <daniel@ffwll.ch>,
Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
Maxime Ripard <mripard@kernel.org>,
virtualization@lists.linux-foundation.org,
Tomeu Vizoso <tomeu.vizoso@collabora.com>,
Gert Wollny <gert.wollny@collabora.com>,
Emil Velikov <emil.l.velikov@gmail.com>,
linux-kernel@vger.kernel.org, Rob Clark <robdclark@gmail.com>,
Thomas Zimmermann <tzimmermann@suse.de>,
Robin Murphy <robin.murphy@arm.com>
Subject: Re: [PATCH v4 10/15] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
Date: Wed, 27 Apr 2022 16:50:04 +0200 [thread overview]
Message-ID: <YmlYHNlcmNMfOeyy@phenom.ffwll.local> (raw)
In-Reply-To: <d9e7bec1-fffb-e0c4-8659-ef3ce2c31280@collabora.com>
On Mon, Apr 18, 2022 at 10:18:54PM +0300, Dmitry Osipenko wrote:
> Hello,
>
> On 4/18/22 21:38, Thomas Zimmermann wrote:
> > Hi
> >
> > Am 18.04.22 um 00:37 schrieb Dmitry Osipenko:
> >> Replace drm_gem_shmem locks with the reservation lock to make GEM
> >> locking more consistent.
> >>
> >> Previously drm_gem_shmem_vmap() and drm_gem_shmem_get_pages() were
> >> protected by separate locks; now they use the same lock, but this
> >> makes no difference for the current GEM SHMEM users. Only the Panfrost
> >> and Lima drivers use vmap(), and they do it in slow code paths, so
> >> there was no practical justification for a separate lock in vmap().
> >>
> >> Suggested-by: Daniel Vetter <daniel@ffwll.ch>
> >> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> >> ---
> ...
> >> @@ -310,7 +306,7 @@ static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> >>  	} else {
> >>  		pgprot_t prot = PAGE_KERNEL;
> >> -		ret = drm_gem_shmem_get_pages(shmem);
> >> +		ret = drm_gem_shmem_get_pages_locked(shmem);
> >>  		if (ret)
> >>  			goto err_zero_use;
> >> @@ -360,11 +356,11 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> >>  {
> >>  	int ret;
> >> -	ret = mutex_lock_interruptible(&shmem->vmap_lock);
> >> +	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
> >>  	if (ret)
> >>  		return ret;
> >>  	ret = drm_gem_shmem_vmap_locked(shmem, map);
> >
> > Within drm_gem_shmem_vmap_locked(), there's a call to dma_buf_vmap() for
> > imported pages. If the exporter side also holds/acquires the same
> > reservation lock as our object, the whole thing can deadlock. We cannot
> > move dma_buf_vmap() out of the critical section, because we still need
> > to increment the reference counter. I honestly don't know how to easily
> > fix this problem. There's a TODO item about replacing these locks at [1].
> > Since Daniel suggested this patch, we should talk to him about the issue.
> >
> > Best regards
> > Thomas
> >
> > [1]
> > https://www.kernel.org/doc/html/latest/gpu/todo.html#move-buffer-object-locking-to-dma-resv-lock
>
> Indeed, good catch! Perhaps we could simply use a separate lock for the
> vmapping of the *imported* GEMs? The vmap_use_count is used only by
> vmap/vunmap, so it doesn't matter which lock is used by these functions
> in the case of imported GEMs since we only need to protect the
> vmap_use_count.
Apologies for the late reply, I'm flooded.
I discussed this with Daniel Stone last week in a chat; roughly what we
need to do is:
1. Pick a function from shmem helpers.
2. Go through all drivers that call this, and make sure that we acquire
dma_resv_lock in the top level driver entry point for this.
3. Once all driver code paths are converted, add a dma_resv_assert_held()
call to that function to make sure it is all correct.
4. Repeat 1-3 until all shmem helper functions are converted over (a rough
sketch of steps 2-3 follows below).
5. Ditch the 3 different shmem helper locks.
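As a rough illustration of steps 2-3 (the driver function below is entirely
made up; the helper is the existing drm_gem_shmem_get_pages() with the
assert added only once every caller has been converted):

/* hypothetical driver entry point */
static int foo_driver_pin(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	int ret;

	ret = dma_resv_lock_interruptible(obj->resv, NULL);
	if (ret)
		return ret;

	ret = drm_gem_shmem_get_pages(shmem);

	dma_resv_unlock(obj->resv);
	return ret;
}

/* shmem helper, after step 3 */
int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
{
	dma_resv_assert_held(shmem->base.resv);
	/* ... page allocation and pages_use_count handling as today ... */
	return 0;
}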
The trouble is that I forgot that vmap is a thing, so that needs more
work. I think there are two approaches here:
- Do the vmap at import time. This is the trick we used to untangle the
  dma_resv_lock issues around dma_buf_attachment_map().
- Change the dma_buf_vmap() rules so that callers must hold the
  dma_resv_lock (see the sketch after this list).
- Maybe also do what you suggest and keep a separate lock for this, but
the fundamental issue is that this doesn't really work - if you share
buffers both ways with two drivers using shmem helpers, then the
ordering of this vmap_count_mutex vs dma_resv_lock is inconsistent and
you can get some nice deadlocks. So not a great approach (and also the
reason why we really need to get everyone to move towards dma_resv_lock
as _the_ buffer object lock, since otherwise we'll never get a
consistent lock nesting hierarchy).
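To make the second option a bit more concrete, here's a rough sketch based
on the current shmem helper structure (hypothetical: it assumes the
dma_buf_vmap() rule change has landed, i.e. that dma_buf_vmap() may be
called with the reservation lock already held, which is not the documented
contract today; the vmap_use_count bookkeeping is elided):

int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
		       struct iosys_map *map)
{
	struct drm_gem_object *obj = &shmem->base;
	int ret;

	ret = dma_resv_lock_interruptible(obj->resv, NULL);
	if (ret)
		return ret;

	if (obj->import_attach)
		/* only valid once dma_buf_vmap() may be called with resv held */
		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
	else
		ret = drm_gem_shmem_vmap_locked(shmem, map);

	dma_resv_unlock(obj->resv);
	return ret;
}

With that, native and imported objects are serialized by the same dma_resv
lock and no extra vmap lock with its own (inconsistent) ordering is needed.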
The trouble here is that trying to be clever and doing the conversion just
in shmem helpers won't work, because there are a lot of cases where the
drivers are all kinds of inconsistent with their locking.
Adding Daniel S.; for questions it'd probably be fastest to chat on IRC.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch