Intel-XE Archive on lore.kernel.org
From: Oak Zeng <oak.zeng@intel.com>
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Cc: felix.kuehling@amd.com, airlied@gmail.com, christian.koenig@amd.com
Subject: [Intel-xe] [RFC 04/11] drm: Add evict function pointer to drm lru entity
Date: Thu,  2 Nov 2023 00:32:59 -0400	[thread overview]
Message-ID: <20231102043306.2931989-5-oak.zeng@intel.com> (raw)
In-Reply-To: <20231102043306.2931989-1-oak.zeng@intel.com>

The drm lru manager provides generic functions to manage an lru list
and to evict an lru entity, but how an entity is evicted is
implemented in the entity's sub-class. This patch introduces a few
function pointers on the drm lru entity for this purpose. These
functions are abstracted from the current ttm resource eviction
process; they will need to be tuned in the future when the svm code
comes into the picture.

Also implement a drm_lru_evict_first function to evict the first lru
entity from the lru manager. Both the ttm and svm code are supposed
to call this function to evict the first resource from the lru list.
This way, ttm and svm can mutually evict each other's resources.

Signed-off-by: Oak Zeng <oak.zeng@intel.com>
---
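Note (not part of the patch): to illustrate how a sub-class might wire up
these callbacks, here is a rough userspace mock of the proposed interface.
The struct layouts mirror the patch, but every mock_* name is hypothetical,
and the real dma-resv/TTM locking is stubbed out with plain booleans:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct drm_lru_entity;

struct drm_lru_evict_ctx {
	void *data1, *data2, *data3;
};

struct drm_lru_evict_func {
	bool (*evict_allowable)(struct drm_lru_entity *entity,
				const struct drm_lru_evict_ctx *ctx,
				bool *busy, bool *locked);
	int (*evict_busy_entity)(struct drm_lru_entity *entity,
				 const struct drm_lru_evict_ctx *ctx);
	int (*evict_entity)(struct drm_lru_entity *entity,
			    const struct drm_lru_evict_ctx *ctx, bool locked);
};

struct drm_lru_entity {
	const struct drm_lru_evict_func *evict_func;
	bool reserved;	/* stand-in for the bo's dma-resv being held */
	bool evicted;
};

/* Allow eviction only when the (mock) reservation trylock succeeds */
static bool mock_evict_allowable(struct drm_lru_entity *entity,
				 const struct drm_lru_evict_ctx *ctx,
				 bool *busy, bool *locked)
{
	(void)ctx;
	if (entity->reserved) {		/* another client holds the lock */
		*busy = true;
		*locked = false;
		return false;
	}
	*busy = false;
	*locked = true;			/* we "trylocked" it ourselves */
	return true;
}

/* Evict the resource; drop the reservation if this check took it */
static int mock_evict_entity(struct drm_lru_entity *entity,
			     const struct drm_lru_evict_ctx *ctx, bool locked)
{
	(void)ctx;
	entity->evicted = true;
	if (locked)
		entity->reserved = false;
	return 0;
}

/* Wait (trivially, in this mock) for the reservation, then evict */
static int mock_evict_busy_entity(struct drm_lru_entity *entity,
				  const struct drm_lru_evict_ctx *ctx)
{
	entity->reserved = false;
	return mock_evict_entity(entity, ctx, false);
}

static const struct drm_lru_evict_func mock_evict_funcs = {
	.evict_allowable = mock_evict_allowable,
	.evict_busy_entity = mock_evict_busy_entity,
	.evict_entity = mock_evict_entity,
};
```

The lru manager only ever sees the three function pointers; whether the
entity is a ttm resource or an svm range is hidden behind them.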
 drivers/gpu/drm/drm_evictable_lru.c | 42 +++++++++++++++++-
 include/drm/drm_evictable_lru.h     | 74 ++++++++++++++++++++++++++++-
 2 files changed, 114 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_evictable_lru.c b/drivers/gpu/drm/drm_evictable_lru.c
index 2ba9105cca03..7b62cae2dfea 100644
--- a/drivers/gpu/drm/drm_evictable_lru.c
+++ b/drivers/gpu/drm/drm_evictable_lru.c
@@ -19,13 +19,15 @@ static inline struct drm_lru_mgr *entity_to_mgr(struct drm_lru_entity *entity)
 }
 
 void drm_lru_entity_init(struct drm_lru_entity *entity, struct drm_device *drm,
-			uint32_t mem_type, uint64_t size, uint32_t priority)
+			uint32_t mem_type, uint64_t size, uint32_t priority,
+			struct drm_lru_evict_func *evict_func)
 {
 	entity->drm = drm;
 	entity->mem_type = mem_type;
 	entity->size = size;
 	entity->priority = priority;
 	INIT_LIST_HEAD(&entity->lru);
+	entity->evict_func = evict_func;
 }
 
 /**
@@ -230,3 +232,41 @@ void drm_lru_del_bulk_move(struct drm_lru_entity *entity,
 	}
 }
 EXPORT_SYMBOL(drm_lru_del_bulk_move);
+
+int drm_lru_evict_first(struct drm_lru_mgr *mgr,
+			const struct drm_lru_evict_ctx *evict_ctx)
+{
+	struct drm_lru_entity *entity, *busy_entity = NULL;
+	struct drm_lru_cursor cursor;
+	bool locked = false, busy = false, found = false;
+
+	spin_lock(mgr->lru_lock);
+
+	/* First, find a victim to evict */
+	drm_lru_for_each_entity(mgr, &cursor, entity) {
+		if (!entity->evict_func->evict_allowable(entity,
+			evict_ctx, &busy, &locked)) {
+			if (!busy_entity && busy)
+				busy_entity = entity;
+			continue;
+		}
+		found = true;
+		break;
+	}
+
+	/* We didn't find a victim, but we found a busy entity, i.e.
+	 * other clients hold a reservation lock on it. Wait for the
+	 * reservation and evict the busy entity.
+	 */
+	if (!found && busy_entity)
+		return busy_entity->evict_func->evict_busy_entity(busy_entity, evict_ctx);
+
+	/* Neither a victim nor a busy entity: nothing can be evicted */
+	if (!found) {
+		spin_unlock(mgr->lru_lock);
+		return -EBUSY;
+	}
+
+	/* We found a victim, evict it */
+	return entity->evict_func->evict_entity(entity, evict_ctx, locked);
+}
diff --git a/include/drm/drm_evictable_lru.h b/include/drm/drm_evictable_lru.h
index 3fd6bd2475d9..7f49964f2f9b 100644
--- a/include/drm/drm_evictable_lru.h
+++ b/include/drm/drm_evictable_lru.h
@@ -15,6 +15,12 @@ struct drm_device;
 #define DRM_MAX_LRU_PRIORITY 4
 #define DRM_NUM_MEM_TYPES 8
 
+struct drm_lru_evict_ctx {
+	void *data1;
+	void *data2;
+	void *data3;
+};
+
 /**
  * struct drm_lru_entity
  *
@@ -23,6 +29,7 @@ struct drm_device;
  * @size: resource size of this entity
  * @priority: The priority of this entity
  * @lru: least recent used list node, see &drm_lru_mgr.lru
+ * @evict_func: functions to evict this entity
  *
  * This structure represents an entity in drm_lru_mgr's
  * list. This structure is supposed to be embedded in
@@ -34,6 +41,7 @@ struct drm_lru_entity {
 	uint64_t size;
 	uint32_t priority;
 	struct list_head lru;
+	struct drm_lru_evict_func *evict_func;
 };
 
 /**
@@ -97,7 +105,67 @@ struct drm_lru_bulk_move {
 	struct drm_lru_bulk_move_range range[DRM_NUM_MEM_TYPES][DRM_MAX_LRU_PRIORITY];
 };
 
+struct drm_lru_evict_func {
+	/**
+	 * evict_allowable
+	 *
+	 * @lru_entity: the struct ttm_resource::lru_entity used when this resource
+	 * was added to the drm lru list.
+	 * @evict_ctx: eviction context. This is opaque data to the drm lru layer. It
+	 * is passed to the drm lru layer through the drm_lru_evict_first function,
+	 * and the drm lru layer simply passes it back to the ttm or svm code by
+	 * calling ttm or svm callback functions.
+	 * @busy: used to return whether the current resource is busy (i.e., locked
+	 * by other clients)
+	 * @locked: used to return whether this resource was locked during this check,
+	 * i.e., whether the bo's dma reservation object was successfully trylocked
+	 *
+	 * Check whether we are allowed to evict a memory resource. Return true if
+	 * the resource may be evicted; false otherwise.
+	 *
+	 * When this function returns true, a resource reference count is held. This
+	 * reference needs to be released after the later evict operation.
+	 *
+	 * This function should be called with the lru_lock held.
+	 */
+	bool (*evict_allowable)(struct drm_lru_entity *lru_entity,
+			const struct drm_lru_evict_ctx *evict_ctx,
+			bool *busy, bool *locked);
 
+	/**
+	 * evict_busy_entity
+	 *
+	 * @lru_entity: the struct ttm_resource::lru_entity used when this resource
+	 * was added to the drm lru list.
+	 * @evict_ctx: eviction context. This is opaque data to the drm lru layer. It
+	 * is passed to the drm lru layer through the drm_lru_evict_first function,
+	 * and the drm lru layer simply passes it back to the ttm or svm code by
+	 * calling ttm or svm callback functions.
+	 *
+	 * Evict a busy memory resource.
+	 * This function should be called with the lru_lock held.
+	 */
+	int (*evict_busy_entity)(struct drm_lru_entity *lru_entity,
+			const struct drm_lru_evict_ctx *evict_ctx);
+
+	/**
+	 * evict_entity
+	 *
+	 * @lru_entity: the struct ttm_resource::lru_entity used when this resource
+	 * was added to the drm lru list.
+	 * @evict_ctx: eviction context. This is opaque data to the drm lru layer. It
+	 * is passed to the drm lru layer through the drm_lru_evict_first function,
+	 * and the drm lru layer simply passes it back to the ttm or svm code by
+	 * calling ttm or svm callback functions.
+	 * @locked: whether this resource is dma-reserved (if reserved, this function
+	 * needs to unreserve it)
+	 *
+	 * Evict the memory resource corresponding to @lru_entity. This should be
+	 * called with the lru_lock held.
+	 */
+	int (*evict_entity)(struct drm_lru_entity *lru_entity,
+			const struct drm_lru_evict_ctx *evict_ctx, bool locked);
+};
 
 /**
  * drm_lru_add_entity
@@ -145,7 +213,8 @@ static inline void drm_lru_mgr_fini(struct drm_lru_mgr *mgr)
 }
 
 void drm_lru_entity_init(struct drm_lru_entity *entity, struct drm_device *drm,
-			uint32_t mem_type, uint64_t size, uint32_t priority);
+			uint32_t mem_type, uint64_t size, uint32_t priority,
+			struct drm_lru_evict_func *evict_func);
 
 struct drm_lru_entity *
 drm_lru_first(struct drm_lru_mgr *mgr, struct drm_lru_cursor *cursor);
@@ -172,6 +241,9 @@ void drm_lru_add_bulk_move(struct drm_lru_entity *entity,
 
 void drm_lru_del_bulk_move(struct drm_lru_entity *entity,
 		struct drm_lru_bulk_move *bulk_move);
+
+int drm_lru_evict_first(struct drm_lru_mgr *mgr,
+			const struct drm_lru_evict_ctx *evict_ctx);
 /**
  * drm_lru_for_each_entity
  *
-- 
2.26.3
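
Editor's note (not part of the patch): the commit message describes ttm and
svm allocation paths looping around drm_lru_evict_first() until a request
fits. A rough userspace sketch of that retry pattern follows; the mock_*
names, the fixed capacity, and the array-based lru are all hypothetical
stand-ins for the real manager:

```c
#include <assert.h>

#define MOCK_CAPACITY 100	/* total bytes in the mock memory domain */

struct mock_mgr {
	unsigned long used;		/* bytes currently resident */
	unsigned long sizes[8];		/* lru order, index 0 = coldest */
	int count;
};

/* Stand-in for drm_lru_evict_first(): evict the coldest entity */
static int mock_evict_first(struct mock_mgr *mgr)
{
	int i;

	if (!mgr->count)
		return -1;	/* nothing evictable (-EBUSY in the kernel) */
	mgr->used -= mgr->sizes[0];
	for (i = 1; i < mgr->count; i++)
		mgr->sizes[i - 1] = mgr->sizes[i];
	mgr->count--;
	return 0;
}

/* Evict until @size bytes fit, mirroring how an allocation path
 * would retry around drm_lru_evict_first() */
static int mock_alloc(struct mock_mgr *mgr, unsigned long size)
{
	while (MOCK_CAPACITY - mgr->used < size) {
		if (mock_evict_first(mgr))
			return -1;	/* out of evictable memory */
	}
	mgr->used += size;
	return 0;
}
```

Because both ttm resources and svm ranges would sit on the same lru, this
single retry loop is what lets either side reclaim the other's memory.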

