From: Daniel Vetter <daniel@ffwll.ch>
To: "Christian König" <ckoenig.leichtzumerken@gmail.com>
Cc: David Airlie <airlied@linux.ie>,
dri-devel@lists.freedesktop.org,
Gurchetan Singh <gurchetansingh@chromium.org>,
Dmitry Osipenko <digetx@gmail.com>, Rob Herring <robh@kernel.org>,
Daniel Stone <daniel@fooishbar.org>,
Steven Price <steven.price@arm.com>,
Gustavo Padovan <gustavo.padovan@collabora.com>,
Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
Dmitry Osipenko <dmitry.osipenko@collabora.com>,
Chia-I Wu <olvaffe@gmail.com>,
Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
Maxime Ripard <mripard@kernel.org>,
virtualization@lists.linux-foundation.org,
Tomeu Vizoso <tomeu.vizoso@collabora.com>,
Gert Wollny <gert.wollny@collabora.com>,
Emil Velikov <emil.l.velikov@gmail.com>,
linux-kernel@vger.kernel.org, Rob Clark <robdclark@gmail.com>,
Thomas Zimmermann <tzimmermann@suse.de>,
Robin Murphy <robin.murphy@arm.com>
Subject: Re: [PATCH v4 10/15] drm/shmem-helper: Take reservation lock instead of drm_gem_shmem locks
Date: Wed, 11 May 2022 17:07:57 +0200
Message-ID: <YnvRTaMoO24y8xE5@phenom.ffwll.local>
In-Reply-To: <56787b70-fb64-64da-6006-d3aa3ed59d12@gmail.com>
On Wed, May 11, 2022 at 04:24:28PM +0200, Christian König wrote:
> Am 11.05.22 um 15:00 schrieb Daniel Vetter:
> > On Tue, May 10, 2022 at 04:39:53PM +0300, Dmitry Osipenko wrote:
> > > [SNIP]
> > > Since vmapping implies implicit pinning, we can't use a separate lock in
> > > drm_gem_shmem_vmap() because we need to protect drm_gem_shmem_get_pages(),
> > > which is invoked by drm_gem_shmem_vmap() to pin the pages and requires the
> > > dma_resv_lock to be held.
> > >
> > > Hence the problem is:
> > >
> > > 1. If dma-buf importer holds the dma_resv_lock and invokes
> > > dma_buf_vmap() -> drm_gem_shmem_vmap(), then drm_gem_shmem_vmap() shall
> > > not take the dma_resv_lock.
> > >
> > > 2. Since the dma-buf locking convention isn't specified, we can't assume
> > > that the dma-buf importer holds the dma_resv_lock around dma_buf_vmap().
> > >
> > > The possible solutions are:
> > >
> > > 1. Specify the dma_resv_lock convention for dma-bufs and make all
> > > drivers follow it.
> > >
> > > 2. Make only DRM drivers hold the dma_resv_lock around dma_buf_vmap().
> > > Other, non-DRM drivers will get lockdep warnings.
> > >
> > > 3. Make drm_gem_shmem_vmap() take the dma_resv_lock, deadlocking
> > > if the dma-buf importer already holds the lock.
> > >
> > > ...
> > Yeah this is all very annoying.
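To make the inversion concrete, here's a minimal hand-written sketch of
the conflict between scenarios 2 and 3 above. importer_vmap() is a
made-up example, not code from the series:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/iosys-map.h>

/* A hypothetical importer that follows a "caller locks" convention. */
static int importer_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
{
        int ret;

        ret = dma_resv_lock(dmabuf->resv, NULL);
        if (ret)
                return ret;

        /*
         * If the exporter's ->vmap path takes dma_resv_lock() on the
         * same object itself (option 3 above), this call deadlocks.
         * If the importer doesn't lock at all (option 2), an exporter
         * that relies on the lock being held races instead.
         */
        ret = dma_buf_vmap(dmabuf, map);

        dma_resv_unlock(dmabuf->resv);
        return ret;
}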
>
> Ah, yes that topic again :)
>
> I think we could relatively easily fix that by just defining and enforcing
> that the dma_resv_lock must be held by the caller when dma_buf_vmap()
> is called.
>
> A two-step approach should work:
> 1. Move the call to dma_resv_lock() into the dma_buf_vmap() function and
> remove all lock taking from the vmap callback implementations.
> 2. Move the call to dma_resv_lock() into the callers of dma_buf_vmap() and
> enforce that the function is called with the lock held.
>
> It shouldn't be that hard to clean up. The last time I looked into it, my
> main problem was that we didn't have any easy unit test for it.
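For reference, step 1 of the two-step approach above could look roughly
like this (a hand-written sketch; the real dma_buf_vmap() also does
vmap-pointer caching under dmabuf->lock, which is omitted here):

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/iosys-map.h>

/* Step 1: the dma-buf core takes the lock, ->vmap callbacks don't. */
int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
{
        int ret;

        iosys_map_clear(map);

        if (!dmabuf->ops->vmap)
                return -EINVAL;

        dma_resv_lock(dmabuf->resv, NULL);
        ret = dmabuf->ops->vmap(dmabuf, map);
        dma_resv_unlock(dmabuf->resv);

        return ret;
}

Step 2 then moves the dma_resv_lock()/dma_resv_unlock() pair out into
every caller and replaces it here with
dma_resv_assert_held(dmabuf->resv), so lockdep catches anyone calling
dma_buf_vmap() unlocked.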
Yeah, I think it's doable, or at least a lot less work than the map/unmap
side, which really was unfixable without just pinning at import time to
avoid the locking fun. But vmap is used a lot less, and mostly by display
drivers (where locking against dma_resv_lock is a lot easier), so it might
be possible to pull off.
-Daniel
>
> Regards,
> Christian.
>
> >
> > > There are actually very few drivers in the kernel that use dma_buf_vmap()
> > > [1], so perhaps it's not really a big deal to first try to define the
> > > locking and pinning convention for dma-bufs? At least for
> > > dma_buf_vmap()? Let me try to do this.
> > >
> > > [1] https://elixir.bootlin.com/linux/v5.18-rc6/C/ident/dma_buf_vmap
> > Yeah, looking through the code there are largely two classes of drivers that
> > need vmap:
> >
> > - display drivers that need to do CPU uploads (USB, SPI, I2C displays).
> > Those generally set up the vmap at import time or when creating the
> > drm_framebuffer object (e.g. see
> > drm_gem_cma_prime_import_sg_table_vmap()), because that's really the
> > only place where you can safely do that without running into locking
> > inversion issues sooner or later.
> >
> > - lots of other drivers (and shmem helpers) seem to do dma_buf_vmap just
> > because they can, but only actually ever use vmap on native objects,
> > never on imported objects. Or at least I think so.
> >
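A rough sketch of that import-time pattern, for illustration.
my_gem_prime_import() is a made-up driver hook modelled on what the cma
helper does, with most error unwinding omitted:

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/iosys-map.h>
#include <drm/drm_device.h>
#include <drm/drm_gem.h>
#include <drm/drm_prime.h>

/*
 * Hypothetical import hook for a CPU-upload display driver: take the
 * vmap once at import time, where calling dma_buf_vmap() (and hence
 * taking dma_resv_lock inside it) is safe, instead of at commit time.
 */
static struct drm_gem_object *
my_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf)
{
        struct drm_gem_object *obj;
        struct iosys_map map;
        int ret;

        obj = drm_gem_prime_import(dev, dma_buf);
        if (IS_ERR(obj))
                return obj;

        ret = dma_buf_vmap(dma_buf, &map);
        if (ret) {
                drm_gem_object_put(obj);
                return ERR_PTR(ret);
        }

        /*
         * The driver would stash 'map' in its own GEM object here for
         * later CPU access; omitted in this sketch.
         */
        return obj;
}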
> > So maybe another approach here:
> >
> > 1. In general, drivers which need a vmap need to set that up at dma_buf
> > import time - the same way we pin the buffers at import time for
> > non-dynamic importers, because that's the only place where, across all
> > drivers, it's ok to just take dma_resv_lock.
> >
> > 2. We remove the "just because we can" dma_buf_vmap support from
> > helpers/drivers - the paths can all already cope with NULL, since
> > dma_buf_vmap() can fail. vmap will only work on native objects, not imported
> > ones.
> >
> > 3. If there is any driver using shmem helpers that absolutely needs vmap
> > to also work on imported objects, it needs a special import function (like
> > the cma helpers have) which sets up the vmap at import time.
> >
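A minimal sketch of what (2) boils down to for a shmem-based driver.
shmem_vmap_native_only() is a made-up name, and this is explicitly not
what drm_gem_shmem_vmap() does today, where imports get forwarded to
dma_buf_vmap():

#include <linux/errno.h>
#include <linux/iosys-map.h>
#include <drm/drm_gem_shmem_helper.h>

/*
 * vmap only works on native shmem objects; imported ones fail and the
 * caller takes its existing "vmap failed" fallback path. A driver that
 * really needs vmap on imports would instead use a special import
 * function that vmaps at import time, as in (3) above.
 */
static int shmem_vmap_native_only(struct drm_gem_shmem_object *shmem,
                                  struct iosys_map *map)
{
        if (shmem->base.import_attach)
                return -EOPNOTSUPP;

        return drm_gem_shmem_vmap(shmem, map);
}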
> > So since this is all very tricky ... what did I miss this time around?
> >
> > > I envision that the extra dma_resv_locks for dma-bufs may create
> > > unnecessary bottlenecks for some drivers if locking isn't really
> > > needed by a specific driver, so drivers will need to keep this in
> > > mind. On the other hand, I don't think that any of today's drivers
> > > will notice the additional resv locks in practice.
> > Nah, I don't think the extra locking will ever create a bottleneck,
> > especially not for vmap. Generally vmap is a fallback or at least a CPU-side
> > operation, so at that point you're already going very slowly.
> > -Daniel
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch