From: Boris Brezillon <boris.brezillon@collabora.com>
To: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Cc: kernel@collabora.com, "Thomas Zimmermann" <tzimmermann@suse.de>,
	"Emma Anholt" <emma@anholt.net>,
	"Christian König" <christian.koenig@amd.com>,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	"Maxime Ripard" <mripard@kernel.org>,
	"Gurchetan Singh" <gurchetansingh@chromium.org>,
	"Melissa Wen" <mwen@igalia.com>,
	"Danilo Krummrich" <dakr@redhat.com>,
	"Gerd Hoffmann" <kraxel@redhat.com>,
	"Steven Price" <steven.price@arm.com>,
	virtualization@lists.linux-foundation.org,
	"Qiang Yu" <yuq825@gmail.com>
Subject: Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
Date: Mon, 31 Jul 2023 15:35:51 +0200	[thread overview]
Message-ID: <20230731153551.7365daa4@collabora.com> (raw)
In-Reply-To: <4c5fa735-9bfd-f92a-8deb-888c7368f89e@collabora.com>

+Danilo, to confirm that my understanding of the gpuva remap operation
is correct.

On Mon, 31 Jul 2023 15:27:31 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:

> On 7/25/23 11:32, Boris Brezillon wrote:
> >> Can we make it an atomic_t, so we can avoid taking the lock when the
> >> GEM has already been pinned? I need that so I can grab a pin-ref in a
> >> path where the GEM resv lock is already held[1]. We could of course
> >> expose the locked version,
> > My bad, that's actually not true. The problem is not that I call
> > drm_gem_shmem_pin() with the resv lock already held, but that I call
> > drm_gem_shmem_pin() in a dma-signaling path where I'm not allowed to
> > take a resv lock. I know for sure pin_count > 0, because all GEM objects
> > mapped to a VM have their memory pinned right now, and this should
> > stand until we decide to add support for live-GEM eviction, at which
> > point we'll probably have a way to detect when a GEM is evicted, and
> > avoid calling drm_gem_shmem_pin() on it.
> > 
> > TL;DR: I can't trade the atomic_t for a drm_gem_shmem_pin_locked(),
> > because that wouldn't solve my problem. The other solution would be to
> > add an atomic_t at the driver-GEM level, and only call
> > drm_gem_shmem_[un]pin() on 0 <-> 1 transitions, but I thought using an
> > atomic at the GEM-shmem level, to avoid locking when we can, would be
> > beneficial to the rest of the ecosystem. Let me know if that's not an
> > option, and I'll go back to the driver-specific atomic_t.
> 
> Could you please explain why you need to pin a GEM in a signal handler?
> This is not something drivers usually do or need to do. You likely also
> shouldn't need to detect that a GEM is evicted in your driver. I'd
> expect Panthor not to differ from Panfrost in how GEM memory management
> is done, and Panfrost doesn't need to do anything special.

Panthor VM management is completely different, and the case I'm
referring to is 'asynchronous VM_BIND': mapping a GEM object to a GPU VM
asynchronously, so we can make it depend on other operations, encoded as
syncobjs passed to the VM_BIND operation.

Here is the workflow we have for this use case:

1. Create + push a VM_BIND job to the VM_BIND queue (a drm_sched_entity
that takes care of asynchronous VM map/unmap operations). Because this
operation is asynchronous, and the execution itself happens in a
dma-signaling path (drm_sched::run_job()), we need to pre-allocate the
MMU page tables for the worst-case scenario and make sure the GEM pages
are pinned at job creation time.

2. The VM operation itself is executed when all dependencies are met
(drm_sched calls run_job()). In the case of a map operation, we call
drm_gpuva_sm_map(), which might split the map operation into
remap+unmap+map ones if the region being mapped covers a region that
was previously mapped to a different GEM object or to a different
portion of the same GEM object (see the gpuva_mgr doc [1]). A remap
operation is just a way to split an existing mapping into 2 mappings
covering the left/right sides of the previous mapping, plus a hole in
the middle. This means that our VM mapping object (drm_gpuva), which
was pointing to a GEM object whose pages were pinned, is now turned
into 2 mapping objects, and we need to make sure each of those 2
mappings owns a reference to the pages, otherwise we'll end up with an
unbalanced refcount when we release those 2 mappings further down the
road (see the hypothetical remap callback sketched after step 3).

3. Release resources attached to mappings that were removed (that
includes releasing the ref we had on GEM pages) and free the mapping
objects. We do that asynchronously, outside of the dma-signaling path.
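
To make the refcounting issue in step 2 more concrete, here is a rough
sketch of what the remap step could look like on the panthor side. This
is purely hypothetical code: the panthor function doesn't exist, the
gpuva op field names are from memory and might not exactly match the
current gpuva_mgr series, and it assumes drm_gem_shmem_pin() grows the
lockless fast path discussed in this thread.

  /* Hypothetical sm_step_remap() callback passed to drm_gpuva_sm_map().
   * It runs from drm_sched::run_job(), that is, in a dma-signaling
   * path, so no dma_resv lock can be taken here.
   */
  static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
                                         void *priv)
  {
          struct drm_gem_object *obj = op->remap.unmap->va->gem.obj;
          struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

          /* The mapping being split already owns a pin ref, so we know
           * pin_count > 0. Each surviving side of the split needs its
           * own ref, and those refs must be taken locklessly, hence the
           * atomic_t request.
           */
          if (op->remap.prev)
                  drm_gem_shmem_pin(shmem);
          if (op->remap.next)
                  drm_gem_shmem_pin(shmem);

          /* The ref owned by the mapping being unmapped is released
           * later, in step 3, outside of the dma-signaling path.
           */
          return 0;
  }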

> 
> Note that patch #14 makes the locked pin/unpin functions public and
> turns the unlocked variants into helpers, so you'll be able to
> experiment with these functions in the Panthor driver.

Unfortunately, those won't help. I really need a way to increment the
refcount without holding the lock, because we're in a dma-signaling
path when we call drm_gpuva_sm_map(). Note that I could live with a
drm_gem_shmem_pin_if_already_pinned() variant that would fail when
pin_count == 0 instead of trying to acquire the lock, but I'd still
need this refcount to be an atomic_t.
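
For the record, here is roughly what I have in mind for the atomic_t
version. Just a sketch, not a patch: it assumes pages_pin_count is
turned into an atomic_t, and that drm_gem_shmem_pin_locked() keeps
doing the actual page population and refcount increment under the resv
lock.

  int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
  {
          int ret;

          /* Lockless fast path: if the GEM is already pinned, just take
           * an extra ref. This is what makes the function usable from a
           * dma-signaling path.
           */
          if (atomic_inc_not_zero(&shmem->pages_pin_count))
                  return 0;

          /* Slow path: 0 -> 1 transition, we need the resv lock to
           * populate and pin the pages.
           */
          ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
          if (ret)
                  return ret;

          ret = drm_gem_shmem_pin_locked(shmem);
          dma_resv_unlock(shmem->base.resv);

          return ret;
  }

The drm_gem_shmem_pin_if_already_pinned() variant I mentioned above
would then just be the atomic_inc_not_zero() part.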

As I said, an alternative to this approach would be to have a separate
atomic refcount at the panthor_gem_object level, but I feel like we'd
just be duplicating something that exists already.
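
Just to show what that duplication would look like, here is a sketch of
the driver-level fallback (hypothetical panthor code, struct/function
names made up for the example): the same atomic fast path, one level
up, with the 0 <-> 1 transitions forwarded to the shmem helpers.

  struct panthor_gem_object {
          struct drm_gem_shmem_object base;
          atomic_t pin_count;
  };

  static int panthor_gem_pin(struct panthor_gem_object *bo)
  {
          int ret;

          /* Lockless fast path, safe in dma-signaling context. */
          if (atomic_inc_not_zero(&bo->pin_count))
                  return 0;

          /* 0 -> 1 transition: only reachable from the ioctl path (job
           * creation), where taking the resv lock is fine. The lock
           * also serializes concurrent 0 -> 1 transitions.
           */
          ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
          if (ret)
                  return ret;

          if (!atomic_read(&bo->pin_count)) {
                  ret = drm_gem_shmem_pin_locked(&bo->base);
                  if (!ret)
                          atomic_set(&bo->pin_count, 1);
          } else {
                  atomic_inc(&bo->pin_count);
          }
          dma_resv_unlock(bo->base.base.resv);

          return ret;
  }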

[1]https://cgit.freedesktop.org/drm/drm-misc/tree/drivers/gpu/drm/drm_gpuva_mgr.c#n67
