Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
Cc: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	himal.prasad.ghimiray@intel.com, apopple@nvidia.com,
	airlied@gmail.com, "Simona Vetter" <simona.vetter@ffwll.ch>,
	felix.kuehling@amd.com,
	"Christian König" <christian.koenig@amd.com>,
	dakr@kernel.org, "Mrozek, Michal" <michal.mrozek@intel.com>,
	"Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>
Subject: Re: [PATCH 07/15] drm/pagemap_util: Add a utility to assign an owner to a set of interconnected gpus
Date: Tue, 28 Oct 2025 18:21:20 -0700	[thread overview]
Message-ID: <aQFsEO84w6E1NXG3@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20251025120412.12262-8-thomas.hellstrom@linux.intel.com>

On Sat, Oct 25, 2025 at 02:04:04PM +0200, Thomas Hellström wrote:
> The hmm_range_fault() and the migration helpers currently need a common
> "owner" to identify pagemaps and clients with fast interconnect.
> Add a drm_pagemap utility to set up such owners by registering
> drm_pagemaps in a registry and, for each new drm_pagemap, querying
> which existing drm_pagemaps have fast interconnects with it.
> 
> The "owner" scheme is limited in that it is static at drm_pagemap creation.
> Ideally one would want the owner to be adjusted at run-time, but that
> requires changes to hmm. If the proposed scheme becomes too limited,
> we need to revisit.
> 
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>  drivers/gpu/drm/drm_pagemap_util.c | 118 +++++++++++++++++++++++++++++
>  include/drm/drm_pagemap_util.h     |  53 +++++++++++++
>  2 files changed, 171 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_pagemap_util.c b/drivers/gpu/drm/drm_pagemap_util.c
> index e1a1d6bf25f4..dd573b620157 100644
> --- a/drivers/gpu/drm/drm_pagemap_util.c
> +++ b/drivers/gpu/drm/drm_pagemap_util.c
> @@ -3,6 +3,8 @@
>   * Copyright © 2025 Intel Corporation
>   */
>  
> +#include <linux/slab.h>
> +
>  #include <drm/drm_drv.h>
>  #include <drm/drm_managed.h>
>  #include <drm/drm_pagemap.h>
> @@ -424,3 +426,119 @@ struct drm_pagemap_shrinker *drm_pagemap_shrinker_create_devm(struct drm_device
>  	return shrinker;
>  }
>  EXPORT_SYMBOL(drm_pagemap_shrinker_create_devm);
> +
> +/**
> + * struct drm_pagemap_owner - Device interconnect group
> + * @kref: Reference count.
> + *
> + * A struct drm_pagemap_owner identifies a device interconnect group.
> + */
> +struct drm_pagemap_owner {
> +	struct kref kref;
> +};
> +
> +static void drm_pagemap_owner_release(struct kref *kref)
> +{
> +	kfree(container_of(kref, struct drm_pagemap_owner, kref));
> +}
> +
> +/**
> + * drm_pagemap_release_owner() - Stop participating in an interconnect group
> + * @peer: Pointer to the struct drm_pagemap_peer used when joining the group
> + *
> + * Stop participating in an interconnect group. This function is typically
> + * called when a pagemap is removed to indicate that it doesn't need to
> + * be taken into account.
> + */
> +void drm_pagemap_release_owner(struct drm_pagemap_peer *peer)
> +{
> +	struct drm_pagemap_owner_list *owner_list = peer->list;
> +
> +	if (!owner_list)
> +		return;
> +
> +	mutex_lock(&owner_list->lock);
> +	list_del(&peer->link);
> +	kref_put(&peer->owner->kref, drm_pagemap_owner_release);
> +	peer->owner = NULL;
> +	mutex_unlock(&owner_list->lock);
> +}
> +EXPORT_SYMBOL(drm_pagemap_release_owner);
> +
> +/**
> + * typedef interconnect_fn - Callback function to identify fast interconnects
> + * @peer1: First endpoint.
> + * @peer2: Second endpoint.
> + *
> + * The function returns %true iff @peer1 and @peer2 have a fast interconnect.
> + * Note that the relation is symmetric: the function has no notion of client and
> + * provider, which may not be sufficient in some cases. However, since the callback
> + * is intended to guide the assignment of common pagemap owners, the notion of a
> + * common owner indicating fast interconnects would then have to change as well.
> + *
> + * Return: %true iff @peer1 and @peer2 have a fast interconnect, %false otherwise.
> + */
> +typedef bool (*interconnect_fn)(struct drm_pagemap_peer *peer1, struct drm_pagemap_peer *peer2);
> +
> +/**
> + * drm_pagemap_acquire_owner() - Join an interconnect group
> + * @peer: A struct drm_pagemap_peer keeping track of the device interconnect
> + * @owner_list: Pointer to the owner_list, keeping track of all interconnects
> + * @has_interconnect: Callback function to determine whether two peers have a
> + * fast local interconnect.
> + *
> + * Repeatedly calls @has_interconnect for @peer and other peers on @owner_list to
> + * determine a set of peers for which @peer has a fast interconnect. That set will
> + * have common &struct drm_pagemap_owner, and upon successful return, @peer::owner
> + * will point to that struct, holding a reference, and @peer will be registered in
> + * @owner_list. If @peer doesn't have any fast interconnects to other peers, a
> + * new unique &struct drm_pagemap_owner will be allocated for it, and that
> + * may be shared with other peers that, at a later point, are determined to have
> + * a fast interconnect with @peer.
> + *
> + * When @peer no longer participates in an interconnect group,
> + * drm_pagemap_release_owner() should be called to drop the reference on the
> + * struct drm_pagemap_owner.
> + *
> + * Return: %0 on success, negative error code on failure.
> + */
> +int drm_pagemap_acquire_owner(struct drm_pagemap_peer *peer,
> +			      struct drm_pagemap_owner_list *owner_list,
> +			      interconnect_fn has_interconnect)
> +{
> +	struct drm_pagemap_peer *cur_peer;
> +	struct drm_pagemap_owner *owner = NULL;
> +	bool interconnect = false;
> +
> +	mutex_lock(&owner_list->lock);
> +	might_alloc(GFP_KERNEL);
> +	list_for_each_entry(cur_peer, &owner_list->peers, link) {
> +		if (cur_peer->owner != owner) {
> +			if (owner && interconnect)
> +				break;
> +			owner = cur_peer->owner;
> +			interconnect = true;
> +		}
> +		if (interconnect && !has_interconnect(peer, cur_peer))
> +			interconnect = false;
> +	}
> +
> +	if (!interconnect) {
> +		owner = kmalloc(sizeof(*owner), GFP_KERNEL);
> +		if (!owner) {
> +			mutex_unlock(&owner_list->lock);
> +			return -ENOMEM;
> +		}
> +		kref_init(&owner->kref);
> +		list_add_tail(&peer->link, &owner_list->peers);
> +	} else {
> +		kref_get(&owner->kref);
> +		list_add_tail(&peer->link, &cur_peer->link);
> +	}
> +	peer->owner = owner;
> +	peer->list = owner_list;
> +	mutex_unlock(&owner_list->lock);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(drm_pagemap_acquire_owner);
> diff --git a/include/drm/drm_pagemap_util.h b/include/drm/drm_pagemap_util.h
> index 292244d429ee..1889630b8950 100644
> --- a/include/drm/drm_pagemap_util.h
> +++ b/include/drm/drm_pagemap_util.h
> @@ -1,12 +1,58 @@
>  /* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +

Nit: The above copyright should be moved to an earlier patch.



>  #ifndef _DRM_PAGEMAP_UTIL_H_
>  #define _DRM_PAGEMAP_UTIL_H_
>  
> +#include <linux/list.h>
> +#include <linux/mutex.h>
> +
>  struct drm_device;
>  struct drm_pagemap;
>  struct drm_pagemap_cache;
> +struct drm_pagemap_owner;
>  struct drm_pagemap_shrinker;
>  
> +/**
> + * struct drm_pagemap_peer - Structure representing a fast interconnect peer
> + * @list: Pointer to a &struct drm_pagemap_owner_list used to keep track of peers
> + * @link: List link for @list's list of peers.
> + * @owner: Pointer to a &struct drm_pagemap_owner, common for a set of peers having
> + * fast interconnects.
> + * @private: Pointer private to the struct embedding this struct.
> + */
> +struct drm_pagemap_peer {
> +	struct drm_pagemap_owner_list *list;
> +	struct list_head link;
> +	struct drm_pagemap_owner *owner;
> +	void *private;
> +};
> +
> +/**
> + * struct drm_pagemap_owner_list - Keeping track of peers and owners
> + *
> + * The owner list defines the scope where we identify peers having fast interconnects
> + * and a common owner. Typically a driver has a single global owner list to
> + * keep track of common owners for the driver's pagemaps.
> + */
> +struct drm_pagemap_owner_list {
> +	/** @lock: Mutex protecting the @peers list. */
> +	struct mutex lock;
> +	/** @peers: List of peers. */
> +	struct list_head peers;
> +};
> +
> +/*
> + * Convenience macro to define an owner list.

I'd perhaps mention this is typically a static module-scope instantiation.

Patch itself lgtm, and makes sense. With that:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> + */
> +#define DRM_PAGEMAP_OWNER_LIST_DEFINE(_name)	\
> +	struct drm_pagemap_owner_list _name = {	\
> +	.lock = __MUTEX_INITIALIZER(_name.lock), \
> +	.peers = LIST_HEAD_INIT(_name.peers) }
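For the kerneldoc, something like the following sketch could show the intended
pattern (not compilable as-is; the identifiers and the callback body are made
up, not part of this series):

```c
/* Typically a static, module-scope instantiation, shared by all of the
 * driver's pagemaps:
 */
static DRM_PAGEMAP_OWNER_LIST_DEFINE(xe_pagemap_owner_list);

static bool xe_has_interconnect(struct drm_pagemap_peer *peer1,
				struct drm_pagemap_peer *peer2)
{
	/* e.g. probe for a fast link between the two devices */
	...
}

	/* at pagemap creation: */
	err = drm_pagemap_acquire_owner(&pagemap->peer,
					&xe_pagemap_owner_list,
					xe_has_interconnect);
	/* at pagemap removal: */
	drm_pagemap_release_owner(&pagemap->peer);
```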
> +
>  void drm_pagemap_shrinker_add(struct drm_pagemap *dpagemap);
>  
>  int drm_pagemap_cache_lock_lookup(struct drm_pagemap_cache *cache);
> @@ -22,4 +68,11 @@ struct drm_pagemap *drm_pagemap_get_from_cache(struct drm_pagemap_cache *cache);
>  void drm_pagemap_cache_set_pagemap(struct drm_pagemap_cache *cache, struct drm_pagemap *dpagemap);
>  
>  struct drm_pagemap *drm_pagemap_get_from_cache_if_active(struct drm_pagemap_cache *cache);
> +
> +void drm_pagemap_release_owner(struct drm_pagemap_peer *peer);
> +
> +int drm_pagemap_acquire_owner(struct drm_pagemap_peer *peer,
> +			      struct drm_pagemap_owner_list *owner_list,
> +			      bool (*has_interconnect)(struct drm_pagemap_peer *peer1,
> +						       struct drm_pagemap_peer *peer2));
>  #endif
> -- 
> 2.51.0
> 

Thread overview: 54+ messages
2025-10-25 12:03 [PATCH 00/15] Dynamic drm_pagemaps and Initial multi-device SVM Thomas Hellström
2025-10-25 12:03 ` [PATCH 01/15] drm/pagemap, drm/xe: Add refcounting to struct drm_pagemap Thomas Hellström
2025-10-29  0:31   ` Matthew Brost
2025-10-29  1:11   ` Matthew Brost
2025-10-29 14:51     ` Thomas Hellström
2025-10-25 12:03 ` [PATCH 02/15] drm/pagemap: Add a refcounted drm_pagemap backpointer to struct drm_pagemap_zdd Thomas Hellström
2025-10-29  0:33   ` Matthew Brost
2025-10-25 12:04 ` [PATCH 03/15] drm/pagemap, drm/xe: Manage drm_pagemap provider lifetimes Thomas Hellström
2025-10-29  0:46   ` Matthew Brost
2025-10-29 14:49     ` Thomas Hellström
2025-10-30  2:46       ` Matthew Brost
2025-10-25 12:04 ` [PATCH 04/15] drm/pagemap: Add a drm_pagemap cache and shrinker Thomas Hellström
2025-10-28  1:23   ` Matthew Brost
2025-10-28  9:46     ` Thomas Hellström
2025-10-28 10:29       ` Thomas Hellström
2025-10-28 18:38         ` Matthew Brost
2025-10-29 22:41   ` Matthew Brost
2025-10-29 22:48   ` Matthew Brost
2025-10-25 12:04 ` [PATCH 05/15] drm/xe: Use the " Thomas Hellström
2025-10-30  0:43   ` Matthew Brost
2025-10-25 12:04 ` [PATCH 06/15] drm/pagemap: Remove the drm_pagemap_create() interface Thomas Hellström
2025-10-29  1:00   ` Matthew Brost
2025-10-25 12:04 ` [PATCH 07/15] drm/pagemap_util: Add a utility to assign an owner to a set of interconnected gpus Thomas Hellström
2025-10-29  1:21   ` Matthew Brost [this message]
2025-10-29 14:52     ` Thomas Hellström
2025-10-25 12:04 ` [PATCH 08/15] drm/xe: Use the drm_pagemap_util helper to get a svm pagemap owner Thomas Hellström
2025-10-27 23:02   ` Matthew Brost
2025-10-25 12:04 ` [PATCH 09/15] drm/xe: Pass a drm_pagemap pointer around with the memory advise attributes Thomas Hellström
2025-10-28  0:35   ` Matthew Brost
2025-11-26  0:31   ` Matthew Brost
2025-10-25 12:04 ` [PATCH 10/15] drm/xe: Use the vma attibute drm_pagemap to select where to migrate Thomas Hellström
2025-10-25 18:01   ` kernel test robot
2025-10-29  3:27   ` Matthew Brost
2025-10-29 14:56     ` Thomas Hellström
2025-10-29 16:59   ` kernel test robot
2025-10-25 12:04 ` [PATCH 11/15] drm/xe: Simplify madvise_preferred_mem_loc() Thomas Hellström
2025-10-27 23:14   ` Matthew Brost
2025-10-25 12:04 ` [PATCH 12/15] drm/xe/uapi: Extend the madvise functionality to support foreign pagemap placement for svm Thomas Hellström
2025-10-28  0:51   ` Matthew Brost
2025-10-25 12:04 ` [PATCH 13/15] drm/xe: Support pcie p2p dma as a fast interconnect Thomas Hellström
2025-10-28  1:14   ` Matthew Brost
2025-10-28  9:32     ` Thomas Hellström
2025-10-29  2:17   ` Matthew Brost
2025-10-29 14:54     ` Thomas Hellström
2025-10-25 12:04 ` [PATCH 14/15] drm/xe/vm: Add a prefetch debug printout Thomas Hellström
2025-10-27 23:16   ` Matthew Brost
2025-10-25 12:04 ` [PATCH 15/15] drm/xe: Retry migration once Thomas Hellström
2025-10-28  0:13   ` Matthew Brost
2025-10-28  9:11     ` Thomas Hellström
2025-10-28 19:03       ` Matthew Brost
2025-10-25 12:16 ` ✗ CI.checkpatch: warning for Dynamic drm_pagemaps and Initial multi-device SVM Patchwork
2025-10-25 12:17 ` ✓ CI.KUnit: success " Patchwork
2025-10-25 13:06 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-25 14:14 ` ✗ Xe.CI.Full: failure " Patchwork
