From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
himal.prasad.ghimiray@intel.com, apopple@nvidia.com,
airlied@gmail.com, "Simona Vetter" <simona.vetter@ffwll.ch>,
felix.kuehling@amd.com,
"Christian König" <christian.koenig@amd.com>,
dakr@kernel.org, "Mrozek, Michal" <michal.mrozek@intel.com>,
"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>
Subject: Re: [PATCH 04/15] drm/pagemap: Add a drm_pagemap cache and shrinker
Date: Tue, 28 Oct 2025 10:46:38 +0100 [thread overview]
Message-ID: <17d29da26bf86172510133c28e18a99e90772c7d.camel@linux.intel.com> (raw)
In-Reply-To: <aQAbGiYv/u/0wnto@lstrano-desk.jf.intel.com>
On Mon, 2025-10-27 at 18:23 -0700, Matthew Brost wrote:
> On Sat, Oct 25, 2025 at 02:04:01PM +0200, Thomas Hellström wrote:
> > Pagemaps are costly to set up and tear down, and they consume a lot
> > of system memory for the struct pages. Ideally they should be
> > created only when needed.
> >
> > Add a caching mechanism to allow doing just that: Create the
> > drm_pagemaps when needed for migration. Keep them around to avoid
> > destruction and re-creation latencies and destroy inactive/unused
> > drm_pagemaps on memory pressure using a shrinker.
> >
> > Only add the helper functions. They will be hooked up to the xe
> > driver in the upcoming patch.
> >
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > ---
> > drivers/gpu/drm/Makefile | 3 +-
> > drivers/gpu/drm/drm_pagemap.c | 79 +++++-
> > drivers/gpu/drm/drm_pagemap_util.c | 426 +++++++++++++++++++++++++++++
> > include/drm/drm_pagemap.h | 53 +++-
> > include/drm/drm_pagemap_util.h | 25 ++
> > 5 files changed, 569 insertions(+), 17 deletions(-)
> > create mode 100644 drivers/gpu/drm/drm_pagemap_util.c
> > create mode 100644 include/drm/drm_pagemap_util.h
> >
> > diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> > index c2672f369aed..cdca68fd9f23 100644
> > --- a/drivers/gpu/drm/Makefile
> > +++ b/drivers/gpu/drm/Makefile
> > @@ -107,7 +107,8 @@ obj-$(CONFIG_DRM_GPUVM) += drm_gpuvm.o
> >
> > drm_gpusvm_helper-y := \
> > drm_gpusvm.o\
> > - drm_pagemap.o
> > + drm_pagemap.o\
> > + drm_pagemap_util.o
> > obj-$(CONFIG_DRM_GPUSVM) += drm_gpusvm_helper.o
> >
> > obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
> > diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> > index fb18a80d6a1c..5ca5b2b53bc1 100644
> > --- a/drivers/gpu/drm/drm_pagemap.c
> > +++ b/drivers/gpu/drm/drm_pagemap.c
> > @@ -8,6 +8,7 @@
> > #include <linux/pagemap.h>
> > #include <drm/drm_drv.h>
> > #include <drm/drm_pagemap.h>
> > +#include <drm/drm_pagemap_util.h>
> > #include <drm/drm_print.h>
> >
> > /**
> > @@ -578,7 +579,7 @@ static void drm_pagemap_release(struct kref *ref)
> > * pagemap provider drm_device and its module.
> > */
> > dpagemap->dev_hold = NULL;
> > - kfree(dpagemap);
> > + drm_pagemap_shrinker_add(dpagemap);
> > llist_add(&dev_hold->link, &drm_pagemap_unhold_list);
> > schedule_work(&drm_pagemap_work);
> > /*
> > @@ -628,6 +629,58 @@ drm_pagemap_dev_hold(struct drm_pagemap *dpagemap)
> > return dev_hold;
> > }
> >
> > +/**
> > + * drm_pagemap_reinit() - Reinitialize a drm_pagemap
> > + * @dpagemap: The drm_pagemap to reinitialize
> > + *
> > + * Reinitialize a drm_pagemap, for which drm_pagemap_release
> > + * has already been called. This interface is intended for the
> > + * situation where the driver caches a destroyed drm_pagemap.
> > + *
> > + * Return: 0 on success, negative error code on failure.
> > + */
> > +int drm_pagemap_reinit(struct drm_pagemap *dpagemap)
> > +{
> > + dpagemap->dev_hold = drm_pagemap_dev_hold(dpagemap);
> > + if (IS_ERR(dpagemap->dev_hold))
> > + return PTR_ERR(dpagemap->dev_hold);
> > +
> > + kref_init(&dpagemap->ref);
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(drm_pagemap_reinit);
> > +
> > +/**
> > + * drm_pagemap_init() - Initialize a pre-allocated drm_pagemap
> > + * @dpagemap: The drm_pagemap to initialize.
> > + * @pagemap: The associated dev_pagemap providing the device
> > + * private pages.
> > + * @drm: The drm device. The drm_pagemap holds a reference on the
> > + * drm_device and the module owning the drm_device until
> > + * drm_pagemap_release(). This facilitates drm_pagemap exporting.
> > + * @ops: The drm_pagemap ops.
> > + *
> > + * Initialize and take an initial reference on a drm_pagemap.
> > + * After successful return, use drm_pagemap_put() to destroy.
> > + *
> > + * Return: 0 on success, negative error code on error.
> > + */
> > +int drm_pagemap_init(struct drm_pagemap *dpagemap,
> > + struct dev_pagemap *pagemap,
> > + struct drm_device *drm,
> > + const struct drm_pagemap_ops *ops)
> > +{
> > + kref_init(&dpagemap->ref);
> > + dpagemap->ops = ops;
> > + dpagemap->pagemap = pagemap;
> > + dpagemap->drm = drm;
> > + dpagemap->cache = NULL;
> > + INIT_LIST_HEAD(&dpagemap->shrink_link);
> > +
> > + return drm_pagemap_reinit(dpagemap);
> > +}
> > +EXPORT_SYMBOL(drm_pagemap_init);
> > +
> > /**
> > * drm_pagemap_create() - Create a struct drm_pagemap.
> > * @drm: Pointer to a struct drm_device providing the device-private memory.
> > @@ -645,22 +698,14 @@ drm_pagemap_create(struct drm_device *drm, const struct drm_pagemap_ops *ops)
> > {
> > struct drm_pagemap *dpagemap = kzalloc(sizeof(*dpagemap), GFP_KERNEL);
> > - struct drm_pagemap_dev_hold *dev_hold;
> > + int err;
> >
> > if (!dpagemap)
> > return ERR_PTR(-ENOMEM);
> >
> > - kref_init(&dpagemap->ref);
> > - dpagemap->drm = drm;
> > - dpagemap->ops = ops;
> > - dpagemap->pagemap = pagemap;
> > -
> > - dev_hold = drm_pagemap_dev_hold(dpagemap);
> > - if (IS_ERR(dev_hold)) {
> > - kfree(dpagemap);
> > - return ERR_CAST(dev_hold);
> > - }
> > - dpagemap->dev_hold = dev_hold;
> > + err = drm_pagemap_init(dpagemap, pagemap, drm, ops);
> > + if (err)
> > + return ERR_PTR(err);
> >
> > return dpagemap;
> > }
> > @@ -1023,6 +1068,14 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> > }
> > EXPORT_SYMBOL(drm_pagemap_populate_mm);
> >
> > +void drm_pagemap_destroy(struct drm_pagemap *dpagemap, bool is_atomic_or_reclaim)
> > +{
> > + if (dpagemap->ops->destroy)
> > + dpagemap->ops->destroy(dpagemap, is_atomic_or_reclaim);
> > + else
> > + kfree(dpagemap);
> > +}
> > +
> > static void drm_pagemap_exit(void)
> > {
> > flush_work(&drm_pagemap_work);
> > diff --git a/drivers/gpu/drm/drm_pagemap_util.c b/drivers/gpu/drm/drm_pagemap_util.c
> > new file mode 100644
> > index 000000000000..e1a1d6bf25f4
> > --- /dev/null
> > +++ b/drivers/gpu/drm/drm_pagemap_util.c
> > @@ -0,0 +1,426 @@
> > +// SPDX-License-Identifier: GPL-2.0-only OR MIT
> > +/*
> > + * Copyright © 2025 Intel Corporation
> > + */
> > +
> > +#include <drm/drm_drv.h>
> > +#include <drm/drm_managed.h>
> > +#include <drm/drm_pagemap.h>
> > +#include <drm/drm_pagemap_util.h>
> > +#include <drm/drm_print.h>
> > +
> > +/**
> > + * struct drm_pagemap_cache - Lookup structure for pagemaps
> > + *
> > + * Structure to keep track of active (refcount > 1) and inactive
> > + * (refcount == 0) pagemaps. Inactive pagemaps can be made active
> > + * again by waiting for the @queued completion (indicating that the
> > + * pagemap has been put on the @shrinker's list of shrinkable
> > + * pagemaps) and then successfully removing it from @shrinker's
> > + * list. The latter may fail if the shrinker is already in the
> > + * process of freeing the pagemap. A struct drm_pagemap_cache can
> > + * hold a single struct drm_pagemap.
> > + */
> > +struct drm_pagemap_cache {
> > + /** @lookup_mutex: Mutex making the lookup process atomic */
> > + struct mutex lookup_mutex;
> > + /** @lock: Lock protecting the @dpagemap pointer */
> > + spinlock_t lock;
> > + /** @shrinker: Pointer to the shrinker used for this cache. Immutable. */
> > + struct drm_pagemap_shrinker *shrinker;
> > + /** @dpagemap: Non-refcounted pointer to the drm_pagemap */
> > + struct drm_pagemap *dpagemap;
> > + /**
> > + * @queued: Signals when an inactive drm_pagemap has been put on
> > + * @shrinker's list.
> > + */
> > + struct completion queued;
> > +};
> > +
> > +/**
> > + * struct drm_pagemap_shrinker - Shrinker to remove unused pagemaps
> > + */
> > +struct drm_pagemap_shrinker {
> > + /** @drm: Pointer to the drm device. */
> > + struct drm_device *drm;
> > + /** @lock: Spinlock to protect the @dpagemaps list. */
> > + spinlock_t lock;
> > + /** @dpagemaps: List of unused dpagemaps. */
> > + struct list_head dpagemaps;
> > + /** @num_dpagemaps: Number of unused dpagemaps in @dpagemaps. */
> > + atomic_t num_dpagemaps;
> > + /** @shrink: Pointer to the struct shrinker. */
> > + struct shrinker *shrink;
> > +};
> > +
> > +static bool drm_pagemap_shrinker_cancel(struct drm_pagemap *dpagemap);
> > +
> > +static void drm_pagemap_cache_fini(void *arg)
> > +{
> > + struct drm_pagemap_cache *cache = arg;
> > + struct drm_pagemap *dpagemap;
> > +
> > + drm_dbg(cache->shrinker->drm, "Destroying dpagemap cache.\n");
> > + spin_lock(&cache->lock);
> > + dpagemap = cache->dpagemap;
> > + if (!dpagemap) {
> > + spin_unlock(&cache->lock);
> > + goto out;
> > + }
> > +
> > + if (drm_pagemap_shrinker_cancel(dpagemap)) {
> > + cache->dpagemap = NULL;
> > + spin_unlock(&cache->lock);
> > + drm_pagemap_destroy(dpagemap, false);
> > + }
> > +
> > +out:
> > + mutex_destroy(&cache->lookup_mutex);
> > + kfree(cache);
> > +}
> > +
> > +/**
> > + * drm_pagemap_cache_create_devm() - Create a drm_pagemap_cache
> > + * @shrinker: Pointer to a struct drm_pagemap_shrinker.
> > + *
> > + * Create a device-managed drm_pagemap cache. The cache is
> > + * automatically destroyed on struct device removal, at which point
> > + * any *inactive* drm_pagemaps are destroyed.
> > + *
> > + * Return: Pointer to a struct drm_pagemap_cache on success. Error
> > + * pointer on failure.
> > + */
> > +struct drm_pagemap_cache *drm_pagemap_cache_create_devm(struct drm_pagemap_shrinker *shrinker)
> > +{
> > + struct drm_pagemap_cache *cache = kzalloc(sizeof(*cache), GFP_KERNEL);
> > + int err;
> > +
> > + if (!cache)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + mutex_init(&cache->lookup_mutex);
> > + spin_lock_init(&cache->lock);
> > + cache->shrinker = shrinker;
> > + init_completion(&cache->queued);
> > + err = devm_add_action_or_reset(shrinker->drm->dev, drm_pagemap_cache_fini, cache);
> > + if (err)
> > + return ERR_PTR(err);
> > +
> > + return cache;
> > +}
> > +EXPORT_SYMBOL(drm_pagemap_cache_create_devm);
> > +
> > +/**
> > + * DOC: Cache lookup
> > + *
> > + * Cache lookup should be done under a locked mutex, so that a
> > + * failed drm_pagemap_get_from_cache() and a following
> > + * drm_pagemap_cache_set_pagemap() are carried out as an atomic
> > + * operation WRT other lookups. Otherwise, racing lookups may
> > + * unnecessarily concurrently create pagemaps to fulfill a failed
> > + * lookup. The API provides two functions to take this lock,
> > + * drm_pagemap_cache_lock_lookup() and drm_pagemap_cache_unlock_lookup(),
> > + * and they should be used in the following way:
> > + *
> > + * .. code-block:: c
> > + *
> > + * drm_pagemap_lock_lookup(cache);
> > + * dpagemap = drm_pagemap_get_from_cache(cache);
> > + * if (dpagemap)
> > + * goto out_unlock;
> > + *
> > + * dpagemap = driver_create_new_dpagemap();
> > + * if (!IS_ERR(dpagemap))
> > + drm_pagemap_cache_set_pagemap(cache, dpagemap);
> > + *
> > + * out_unlock:
> > + * drm_pagemap_unlock_lookup(cache);
> > + */
> > +
> > +/**
> > + * drm_pagemap_cache_lock_lookup() - Lock a drm_pagemap_cache for lookup
> > + * @cache: The drm_pagemap_cache to lock.
> > + *
> > + * Return: %-EINTR if interrupted while blocking. %0 otherwise.
> > + */
> > +int drm_pagemap_cache_lock_lookup(struct drm_pagemap_cache *cache)
> > +{
> > + return mutex_lock_interruptible(&cache->lookup_mutex);
> > +}
> > +EXPORT_SYMBOL(drm_pagemap_cache_lock_lookup);
> > +
> > +/**
> > + * drm_pagemap_cache_unlock_lookup() - Unlock a drm_pagemap_cache after lookup
> > + * @cache: The drm_pagemap_cache to unlock.
> > + */
> > +void drm_pagemap_cache_unlock_lookup(struct drm_pagemap_cache *cache)
> > +{
> > + mutex_unlock(&cache->lookup_mutex);
> > +}
> > +EXPORT_SYMBOL(drm_pagemap_cache_unlock_lookup);
> > +
> > +/**
> > + * drm_pagemap_get_from_cache() - Lookup of drm_pagemaps.
> > + * @cache: The cache used for lookup.
> > + *
> > + * If an active pagemap is present in the cache, it is immediately
> > + * returned. If an inactive pagemap is present, it's removed from
> > + * the shrinker list and an attempt is made to make it active.
> > + * If no pagemap is present, or the attempt to make it active failed,
> > + * %NULL is returned to indicate to the caller to create a new
> > + * drm_pagemap and insert it into the cache.
> > + *
> > + * Return: A reference-counted pointer to a drm_pagemap if successful.
> > + * An error pointer if an error occurred, or %NULL if no drm_pagemap
> > + * was found and the caller should insert a new one.
> > + */
> > +struct drm_pagemap *drm_pagemap_get_from_cache(struct drm_pagemap_cache *cache)
> > +{
> > + struct drm_pagemap *dpagemap;
> > + int err;
> > +
> > + lockdep_assert_held(&cache->lookup_mutex);
> > +retry:
> > + spin_lock(&cache->lock);
> > + dpagemap = cache->dpagemap;
> > + if (drm_pagemap_get_unless_zero(dpagemap)) {
> > + spin_unlock(&cache->lock);
> > + return dpagemap;
> > + }
> > +
> > + if (!dpagemap) {
> > + spin_unlock(&cache->lock);
> > + return NULL;
> > + }
> > +
> > + if (!try_wait_for_completion(&cache->queued)) {
> > + spin_unlock(&cache->lock);
> > + err = wait_for_completion_interruptible(&cache->queued);
> > + if (err)
> > + return ERR_PTR(err);
> > + goto retry;
> > + }
> > +
> > + if (drm_pagemap_shrinker_cancel(dpagemap)) {
> > + cache->dpagemap = NULL;
> > + spin_unlock(&cache->lock);
> > + err = drm_pagemap_reinit(dpagemap);
> > + if (err) {
> > + drm_pagemap_destroy(dpagemap, false);
> > + return ERR_PTR(err);
> > + }
> > + drm_pagemap_cache_set_pagemap(cache, dpagemap);
> > + } else {
> > + cache->dpagemap = NULL;
> > + spin_unlock(&cache->lock);
> > + dpagemap = NULL;
> > + }
> > +
> > + return dpagemap;
> > +}
> > +EXPORT_SYMBOL(drm_pagemap_get_from_cache);
> > +
> > +/**
> > + * drm_pagemap_cache_set_pagemap() - Assign a drm_pagemap to a drm_pagemap_cache
> > + * @cache: The cache to assign the drm_pagemap to.
> > + * @dpagemap: The drm_pagemap to assign.
> > + *
> > + * The function must be called to populate a drm_pagemap_cache only
> > + * after a call to drm_pagemap_get_from_cache() returns NULL.
> > + */
> > +void drm_pagemap_cache_set_pagemap(struct drm_pagemap_cache *cache, struct drm_pagemap *dpagemap)
> > +{
> > + struct drm_device *drm = dpagemap->drm;
> > +
> > + lockdep_assert_held(&cache->lookup_mutex);
> > + spin_lock(&cache->lock);
> > + dpagemap->cache = cache;
> > + swap(cache->dpagemap, dpagemap);
> > + reinit_completion(&cache->queued);
> > + spin_unlock(&cache->lock);
> > + drm_WARN_ON(drm, !!dpagemap);
> > +}
> > +EXPORT_SYMBOL(drm_pagemap_cache_set_pagemap);
> > +
> > +/**
> > + * drm_pagemap_get_from_cache_if_active() - Quick lookup of active drm_pagemaps
> > + * @cache: The cache to lookup from.
> > + *
> > + * Function that should be used to lookup a drm_pagemap that is
> > + * already active (refcount > 0).
> > + *
> > + * Return: A pointer to the cache's drm_pagemap if it's active;
> > + * %NULL otherwise.
> > + */
> > +struct drm_pagemap *drm_pagemap_get_from_cache_if_active(struct drm_pagemap_cache *cache)
> > +{
> > + struct drm_pagemap *dpagemap;
> > +
> > + spin_lock(&cache->lock);
> > + dpagemap = drm_pagemap_get_unless_zero(cache->dpagemap);
> > + spin_unlock(&cache->lock);
> > +
> > + return dpagemap;
> > +}
> > +EXPORT_SYMBOL(drm_pagemap_get_from_cache_if_active);
> > +
> > +static bool drm_pagemap_shrinker_cancel(struct drm_pagemap *dpagemap)
> > +{
> > + struct drm_pagemap_cache *cache = dpagemap->cache;
> > + struct drm_pagemap_shrinker *shrinker = cache->shrinker;
> > +
> > + spin_lock(&shrinker->lock);
> > + if (list_empty(&dpagemap->shrink_link)) {
> > + spin_unlock(&shrinker->lock);
> > + return false;
> > + }
> > +
> > + list_del_init(&dpagemap->shrink_link);
> > + atomic_dec(&shrinker->num_dpagemaps);
> > + spin_unlock(&shrinker->lock);
> > + return true;
> > +}
> > +
> > +/**
> > + * drm_pagemap_shrinker_add() - Add a drm_pagemap to the shrinker list or destroy
> > + * @dpagemap: The drm_pagemap.
> > + *
> > + * If @dpagemap is associated with a &struct drm_pagemap_cache AND the
> > + * struct device backing the drm device is still alive, add @dpagemap to
> > + * the &struct drm_pagemap_shrinker list of shrinkable drm_pagemaps.
> > + *
> > + * Otherwise destroy the pagemap directly using drm_pagemap_destroy().
> > + *
> > + * This is an internal function which is not intended to be exposed to
> > + * drivers.
> > + */
> > +void drm_pagemap_shrinker_add(struct drm_pagemap *dpagemap)
>
> Not a full review - slowly wrapping my head around the first 6 patches
> but one quick question.
>
> This is called from drm_pagemap_put. How do we know what type of
> context we're in? It seems like this could be called from either
> process context or atomic context (e.g., via drm_pagemap_zdd_destroy
> through drm_pagemap_page_free). This code doesn’t appear to work in
> atomic contexts—if I recall correctly, drm_dev_enter can’t be called
> from atomic context. Also, we're missing irqsave on the spinlock.
From reading up on srcu_read_lock(), which is hiding behind
drm_dev_enter(), it should be OK to call from atomic context as long as
it is also released from the same context. I indeed checked that we
could call it under a spinlock without getting any lockdep warnings.
The irqsave on the spinlock is a different thing, though. Do we know
that drm_pagemap_page_free() will be called from irq context?
/Thomas
>
> We had a worker for ZDD destroy at one point—should we revive that?
> If we did, I think we could safely enforce a rule that drm_pagemap
> operations must only be called from process context.
>
> Matt
>
> > +{
> > + struct drm_pagemap_cache *cache;
> > + struct drm_pagemap_shrinker *shrinker;
> > + int idx;
> > +
> > + /*
> > + * The pagemap cache and shrinker are disabled at
> > + * pci device remove time. After that, dpagemaps
> > + * are freed directly.
> > + */
> > + if (!drm_dev_enter(dpagemap->drm, &idx))
> > + goto out_no_cache;
> > +
> > + cache = dpagemap->cache;
> > + if (!cache) {
> > + drm_dev_exit(idx);
> > + goto out_no_cache;
> > + }
> > +
> > + shrinker = cache->shrinker;
> > + spin_lock(&shrinker->lock);
> > + list_add_tail(&dpagemap->shrink_link, &shrinker->dpagemaps);
> > + atomic_inc(&shrinker->num_dpagemaps);
> > + spin_unlock(&shrinker->lock);
> > + complete_all(&cache->queued);
> > + drm_dev_exit(idx);
> > + return;
> > +
> > +out_no_cache:
> > + drm_pagemap_destroy(dpagemap, true);
> > +}
> > +
> > +static unsigned long
> > +drm_pagemap_shrinker_count(struct shrinker *shrink, struct shrink_control *sc)
> > +{
> > + struct drm_pagemap_shrinker *shrinker = shrink->private_data;
> > + unsigned long count = atomic_read(&shrinker->num_dpagemaps);
> > +
> > + return count ? : SHRINK_EMPTY;
> > +}
> > +
> > +static unsigned long
> > +drm_pagemap_shrinker_scan(struct shrinker *shrink, struct shrink_control *sc)
> > +{
> > + struct drm_pagemap_shrinker *shrinker = shrink->private_data;
> > + struct drm_pagemap *dpagemap;
> > + struct drm_pagemap_cache *cache;
> > + unsigned long nr_freed = 0;
> > +
> > + sc->nr_scanned = 0;
> > + spin_lock(&shrinker->lock);
> > + do {
> > + dpagemap = list_first_entry_or_null(&shrinker->dpagemaps,
> > + typeof(*dpagemap), shrink_link);
> > + if (!dpagemap)
> > + break;
> > +
> > + atomic_dec(&shrinker->num_dpagemaps);
> > + list_del_init(&dpagemap->shrink_link);
> > + spin_unlock(&shrinker->lock);
> > +
> > + sc->nr_scanned++;
> > + nr_freed++;
> > +
> > + cache = dpagemap->cache;
> > + spin_lock(&cache->lock);
> > + cache->dpagemap = NULL;
> > + spin_unlock(&cache->lock);
> > +
> > + drm_dbg(dpagemap->drm, "Shrinking dpagemap %p.\n", dpagemap);
> > + drm_pagemap_destroy(dpagemap, true);
> > + spin_lock(&shrinker->lock);
> > + } while (sc->nr_scanned < sc->nr_to_scan);
> > + spin_unlock(&shrinker->lock);
> > +
> > + return sc->nr_scanned ? nr_freed : SHRINK_STOP;
> > +}
> > +
> > +static void drm_pagemap_shrinker_fini(void *arg)
> > +{
> > + struct drm_pagemap_shrinker *shrinker = arg;
> > +
> > + drm_dbg(shrinker->drm, "Destroying dpagemap shrinker.\n");
> > + drm_WARN_ON(shrinker->drm, !!atomic_read(&shrinker->num_dpagemaps));
> > + shrinker_free(shrinker->shrink);
> > + kfree(shrinker);
> > +}
> > +
> > +/**
> > + * drm_pagemap_shrinker_create_devm() - Create and register a pagemap shrinker
> > + * @drm: The drm device
> > + *
> > + * Create and register a pagemap shrinker that shrinks unused pagemaps
> > + * and thereby reduces memory footprint.
> > + * The shrinker is drm_device managed and unregisters itself when
> > + * the drm device is removed.
> > + *
> > + * Return: Pointer to a struct drm_pagemap_shrinker on success, error
> > + * pointer on failure.
> > + */
> > +struct drm_pagemap_shrinker *drm_pagemap_shrinker_create_devm(struct drm_device *drm)
> > +{
> > + struct drm_pagemap_shrinker *shrinker;
> > + struct shrinker *shrink;
> > + int err;
> > +
> > + shrinker = kzalloc(sizeof(*shrinker), GFP_KERNEL);
> > + if (!shrinker)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + shrink = shrinker_alloc(0, "drm-drm_pagemap:%s", drm->unique);
> > + if (!shrink) {
> > + kfree(shrinker);
> > + return ERR_PTR(-ENOMEM);
> > + }
> > +
> > + spin_lock_init(&shrinker->lock);
> > + INIT_LIST_HEAD(&shrinker->dpagemaps);
> > + shrinker->drm = drm;
> > + shrinker->shrink = shrink;
> > + shrink->count_objects = drm_pagemap_shrinker_count;
> > + shrink->scan_objects = drm_pagemap_shrinker_scan;
> > + shrink->private_data = shrinker;
> > + shrinker_register(shrink);
> > +
> > + err = devm_add_action_or_reset(drm->dev, drm_pagemap_shrinker_fini, shrinker);
> > + if (err)
> > + return ERR_PTR(err);
> > +
> > + return shrinker;
> > +}
> > +EXPORT_SYMBOL(drm_pagemap_shrinker_create_devm);
> > diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> > index 5cfe54331ba7..4b9af5e785c6 100644
> > --- a/include/drm/drm_pagemap.h
> > +++ b/include/drm/drm_pagemap.h
> > @@ -9,6 +9,7 @@
> > #define NR_PAGES(order) (1U << (order))
> >
> > struct drm_pagemap;
> > +struct drm_pagemap_cache;
> > struct drm_pagemap_dev_hold;
> > struct drm_pagemap_zdd;
> > struct device;
> > @@ -124,6 +125,25 @@ struct drm_pagemap_ops {
> > unsigned long start, unsigned long end,
> > struct mm_struct *mm,
> > unsigned long timeslice_ms);
> > + /**
> > + * @destroy: Destroy the drm_pagemap and associated resources.
> > + * @dpagemap: The drm_pagemap to destroy.
> > + * @is_atomic_or_reclaim: The function may be called from
> > + * atomic- or reclaim context.
> > + *
> > + * The implementation should take care not to attempt to
> > + * destroy resources that may already have been destroyed
> > + * using devm_ callbacks, since this function may be called
> > + * after the underlying struct device has been unbound.
> > + * If the implementation defers the execution to a work item
> > + * to avoid locking issues, then it must make sure the work
> > + * items are flushed before module exit. If the destroy call
> > + * happens after the provider's pci_remove() callback has
> > + * been executed, a module reference and a drm device reference
> > + * are held across the destroy callback.
> > + */
> > + void (*destroy)(struct drm_pagemap *dpagemap,
> > + bool is_atomic_or_reclaim);
> > };
> >
> > /**
> > @@ -135,6 +155,10 @@ struct drm_pagemap_ops {
> > * @pagemap: Pointer to the underlying dev_pagemap.
> > * @dev_hold: Pointer to a struct drm_pagemap_dev_hold for
> > * device referencing.
> > + * @cache: Back-pointer to the &struct drm_pagemap_cache used for this
> > + * &struct drm_pagemap. May be NULL if no cache is used.
> > + * @shrink_link: Link into the shrinker's list of drm_pagemaps. Only
> > + * used if also using a pagemap cache.
> > */
> > struct drm_pagemap {
> > const struct drm_pagemap_ops *ops;
> > @@ -142,6 +166,8 @@ struct drm_pagemap {
> > struct drm_device *drm;
> > struct dev_pagemap *pagemap;
> > struct drm_pagemap_dev_hold *dev_hold;
> > + struct drm_pagemap_cache *cache;
> > + struct list_head shrink_link;
> > };
> >
> > struct drm_pagemap_devmem;
> > @@ -210,6 +236,11 @@ struct drm_pagemap_devmem_ops {
> > unsigned long npages);
> > };
> >
> > +int drm_pagemap_init(struct drm_pagemap *dpagemap,
> > + struct dev_pagemap *pagemap,
> > + struct drm_device *drm,
> > + const struct drm_pagemap_ops *ops);
> > +
> > struct drm_pagemap *drm_pagemap_create(struct drm_device *drm,
> > struct dev_pagemap *pagemap,
> > const struct drm_pagemap_ops *ops);
> > @@ -228,9 +259,9 @@ static inline void drm_pagemap_put(struct drm_pagemap *dpagemap)
> >
> > /**
> > * drm_pagemap_get() - Obtain a reference on a struct drm_pagemap
> > - * @dpagemap: Pointer to the struct drm_pagemap.
> > + * @dpagemap: Pointer to the struct drm_pagemap, or NULL.
> > *
> > - * Return: Pointer to the struct drm_pagemap.
> > + * Return: Pointer to the struct drm_pagemap, or NULL.
> > */
> > static inline struct drm_pagemap *
> > drm_pagemap_get(struct drm_pagemap *dpagemap)
> > @@ -241,6 +272,20 @@ drm_pagemap_get(struct drm_pagemap *dpagemap)
> > return dpagemap;
> > }
> >
> > +/**
> > + * drm_pagemap_get_unless_zero() - Obtain a reference on a struct drm_pagemap
> > + * unless the current reference count is zero.
> > + * @dpagemap: Pointer to the drm_pagemap or NULL.
> > + *
> > + * Return: A pointer to @dpagemap if the reference count was
> > + * successfully incremented. NULL if @dpagemap was NULL, or its
> > + * refcount was 0.
> > + */
> > +static inline struct drm_pagemap * __must_check
> > +drm_pagemap_get_unless_zero(struct drm_pagemap *dpagemap)
> > +{
> > + return (dpagemap && kref_get_unless_zero(&dpagemap->ref)) ? dpagemap : NULL;
> > +}
> > +
> > /**
> > * struct drm_pagemap_devmem - Structure representing a GPU SVM device memory allocation
> > *
> > @@ -284,5 +329,7 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> > struct mm_struct *mm,
> > unsigned long timeslice_ms);
> >
> > -#endif
> > +void drm_pagemap_destroy(struct drm_pagemap *dpagemap, bool is_atomic_or_reclaim);
> >
> > +int drm_pagemap_reinit(struct drm_pagemap *dpagemap);
> > +#endif
> > diff --git a/include/drm/drm_pagemap_util.h b/include/drm/drm_pagemap_util.h
> > new file mode 100644
> > index 000000000000..292244d429ee
> > --- /dev/null
> > +++ b/include/drm/drm_pagemap_util.h
> > @@ -0,0 +1,25 @@
> > +/* SPDX-License-Identifier: MIT */
> > +#ifndef _DRM_PAGEMAP_UTIL_H_
> > +#define _DRM_PAGEMAP_UTIL_H_
> > +
> > +struct drm_device;
> > +struct drm_pagemap;
> > +struct drm_pagemap_cache;
> > +struct drm_pagemap_shrinker;
> > +
> > +void drm_pagemap_shrinker_add(struct drm_pagemap *dpagemap);
> > +
> > +int drm_pagemap_cache_lock_lookup(struct drm_pagemap_cache *cache);
> > +
> > +void drm_pagemap_cache_unlock_lookup(struct drm_pagemap_cache *cache);
> > +
> > +struct drm_pagemap_shrinker *drm_pagemap_shrinker_create_devm(struct drm_device *drm);
> > +
> > +struct drm_pagemap_cache *drm_pagemap_cache_create_devm(struct drm_pagemap_shrinker *shrinker);
> > +
> > +struct drm_pagemap *drm_pagemap_get_from_cache(struct drm_pagemap_cache *cache);
> > +
> > +void drm_pagemap_cache_set_pagemap(struct drm_pagemap_cache *cache, struct drm_pagemap *dpagemap);
> > +
> > +struct drm_pagemap *drm_pagemap_get_from_cache_if_active(struct drm_pagemap_cache *cache);
> > +#endif
> > --
> > 2.51.0
> >