rust-for-linux.vger.kernel.org archive mirror
* [PATCH v2 0/2] Defer vm_bo cleanup in GPUVM with DRM_GPUVM_IMMEDIATE_MODE
@ 2025-09-09 13:36 Alice Ryhl
  2025-09-09 13:36 ` [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup Alice Ryhl
  2025-09-09 13:36 ` [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer() Alice Ryhl
  0 siblings, 2 replies; 14+ messages in thread
From: Alice Ryhl @ 2025-09-09 13:36 UTC (permalink / raw)
  To: Danilo Krummrich, Matthew Brost, Thomas Hellström
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Steven Price, Daniel Almeida,
	Liviu Dudau, dri-devel, linux-kernel, rust-for-linux, Alice Ryhl

There are two main ways that GPUVM might be used:

* staged mode, where VM_BIND ioctls update the GPUVM immediately so that
  the GPUVM reflects the state of the VM *including* staged changes that
  are not yet applied to the GPU's virtual address space.
* immediate mode, where the GPUVM state is updated during run_job(),
  i.e., in the DMA fence signalling critical path, to ensure that the
  GPUVM and the GPU's virtual address space have the same state at all
  times.

Currently, only Panthor uses GPUVM in immediate mode, but the Rust
drivers Tyr and Nova will also use GPUVM in immediate mode, so it is
worth supporting both staged and immediate mode well in GPUVM. To use
immediate mode, we must manage the vm_bos and vas during the fence
signalling critical path.

The first part of that work was the introduction of a fence signalling
safe mutex for the GEM's GPUVA list in commit e7fa80e2932c ("drm_gem: add
mutex to drm_gem_object.gpuva").

This series is the second part of that work: Dropping a vm_bo object in
the fence signalling critical path is problematic for two reasons:

* When using DRM_GPUVM_RESV_PROTECTED, you cannot remove the vm_bo from
  the extobj/evicted lists during the fence signalling path.
* Dropping a vm_bo could lead to the GEM object getting destroyed.
  The requirement that GEM object cleanup is fence signalling safe is
  dubious and likely to be violated in practice.

Panthor already has its own custom implementation of postponing vm_bo
cleanup. Take inspiration from that by moving the logic into GPUVM, and
adjust Panthor to use the new GPUVM logic.
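
For illustration, a rough sketch of the calling pattern this series
enables for an immediate-mode driver (the helper names are the ones
added in patch 1; the variables and calling context are placeholders):

	/* From the fence signalling critical path, e.g. run_job():
	 * unlink the VA without taking resv locks or dropping GEM
	 * references.
	 */
	drm_gpuva_unlink_defer(va);

	/* Later, from a context that may take locks and free memory,
	 * e.g. when tearing down the bind-job context:
	 */
	drm_gpuvm_bo_deferred_cleanup(gpuvm);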

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
Changes in v2:
- Fix missing kfree in Panthor.
- Rework mutex_lock() calls to be less confusing.
- Add note about resv lock in drm_gpuvm_bo_is_dead() docs.
- Link to v1: https://lore.kernel.org/r/20250905-vmbo-defer-v1-0-7ae1a382b674@google.com

---
Alice Ryhl (2):
      drm/gpuvm: add deferred vm_bo cleanup
      panthor: use drm_gpuva_unlink_defer()

 drivers/gpu/drm/drm_gpuvm.c           | 174 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/panthor/panthor_mmu.c | 113 ++++------------------
 include/drm/drm_gpuvm.h               |  26 +++++
 3 files changed, 219 insertions(+), 94 deletions(-)
---
base-commit: 7156602d56e5ad689ae11e03680ab6326238b5e3
change-id: 20250905-vmbo-defer-3faf90d821f5

Best regards,
-- 
Alice Ryhl <aliceryhl@google.com>



* [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup
  2025-09-09 13:36 [PATCH v2 0/2] Defer vm_bo cleanup in GPUVM with DRM_GPUVM_IMMEDIATE_MODE Alice Ryhl
@ 2025-09-09 13:36 ` Alice Ryhl
  2025-09-09 13:39   ` Alice Ryhl
                     ` (2 more replies)
  2025-09-09 13:36 ` [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer() Alice Ryhl
  1 sibling, 3 replies; 14+ messages in thread
From: Alice Ryhl @ 2025-09-09 13:36 UTC (permalink / raw)
  To: Danilo Krummrich, Matthew Brost, Thomas Hellström
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Steven Price, Daniel Almeida,
	Liviu Dudau, dri-devel, linux-kernel, rust-for-linux, Alice Ryhl

When using GPUVM in immediate mode, it is necessary to call
drm_gpuva_unlink() from the fence signalling critical path. However,
unlink may call drm_gpuvm_bo_put(), which causes some challenges:

1. drm_gpuvm_bo_put() often requires you to take resv locks, which you
   can't do from the fence signalling critical path.
2. drm_gpuvm_bo_put() calls drm_gem_object_put(), which is often going
   to be unsafe to call from the fence signalling critical path.

To solve these issues, add a deferred version of drm_gpuva_unlink() that
adds the vm_bo to a deferred cleanup list so that it can be cleaned up later.

The new methods take the GEM's GPUVA lock internally rather than letting
the caller do it, because they also need to perform an operation after
releasing the mutex again. This is to prevent freeing the GEM while
holding the mutex (more details in the comments in the patch). This means
that the new methods can only be used with DRM_GPUVM_IMMEDIATE_MODE.
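
To illustrate the ordering constraint, a reduced sketch of the release
path (simplified from drm_gpuvm_bo_defer() in the patch below):

	mutex_lock(&vm_bo->obj->gpuva.lock);
	/* unlink the vm_bo from the GEM's GPUVA list */
	list_del(&vm_bo->list.entry.gem);
	mutex_unlock(&vm_bo->obj->gpuva.lock);

	/*
	 * Only add the vm_bo to the VM's bo_defer list after the unlock:
	 * once it is on that list, another thread may run
	 * drm_gpuvm_bo_deferred_cleanup() and drop the last GEM
	 * reference, which would free the mutex we were just holding.
	 */
	drm_gpuvm_bo_list_add(vm_bo, bo_defer, true);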

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/gpu/drm/drm_gpuvm.c | 174 ++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drm_gpuvm.h     |  26 +++++++
 2 files changed, 200 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 78a1a4f095095e9379bdf604d583f6c8b9863ccb..5aa8b3813019705f70101950af2d8fe4e648e9d0 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -876,6 +876,27 @@ __drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm, spinlock_t *lock,
 	cond_spin_unlock(lock, !!lock);
 }
 
+/**
+ * drm_gpuvm_bo_is_dead() - check whether this vm_bo is scheduled for cleanup
+ * @vm_bo: the &drm_gpuvm_bo
+ *
+ * When a vm_bo is scheduled for cleanup using the bo_defer list, it is not
+ * immediately removed from the evict and extobj lists if they are protected by
+ * the resv lock, as we can't take that lock during run_job() in immediate
+ * mode. Therefore, anyone iterating these lists should skip entries that are
+ * being destroyed.
+ *
+ * Checking the refcount without incrementing it is okay as long as the lock
+ * protecting the evict/extobj list is held for as long as you are using the
+ * vm_bo, because even if the refcount hits zero while you are using it, freeing
+ * the vm_bo requires taking the list's lock.
+ */
+static bool
+drm_gpuvm_bo_is_dead(struct drm_gpuvm_bo *vm_bo)
+{
+	return !kref_read(&vm_bo->kref);
+}
+
 /**
  * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list
  * @__vm_bo: the &drm_gpuvm_bo
@@ -1081,6 +1102,9 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
 	INIT_LIST_HEAD(&gpuvm->evict.list);
 	spin_lock_init(&gpuvm->evict.lock);
 
+	INIT_LIST_HEAD(&gpuvm->bo_defer.list);
+	spin_lock_init(&gpuvm->bo_defer.lock);
+
 	kref_init(&gpuvm->kref);
 
 	gpuvm->name = name ? name : "unknown";
@@ -1122,6 +1146,8 @@ drm_gpuvm_fini(struct drm_gpuvm *gpuvm)
 		 "Extobj list should be empty.\n");
 	drm_WARN(gpuvm->drm, !list_empty(&gpuvm->evict.list),
 		 "Evict list should be empty.\n");
+	drm_WARN(gpuvm->drm, !list_empty(&gpuvm->bo_defer.list),
+		 "VM BO cleanup list should be empty.\n");
 
 	drm_gem_object_put(gpuvm->r_obj);
 }
@@ -1217,6 +1243,9 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
 
 	drm_gpuvm_resv_assert_held(gpuvm);
 	list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) {
+		if (drm_gpuvm_bo_is_dead(vm_bo))
+			continue;
+
 		ret = exec_prepare_obj(exec, vm_bo->obj, num_fences);
 		if (ret)
 			break;
@@ -1460,6 +1489,9 @@ drm_gpuvm_validate_locked(struct drm_gpuvm *gpuvm, struct drm_exec *exec)
 
 	list_for_each_entry_safe(vm_bo, next, &gpuvm->evict.list,
 				 list.entry.evict) {
+		if (drm_gpuvm_bo_is_dead(vm_bo))
+			continue;
+
 		ret = ops->vm_bo_validate(vm_bo, exec);
 		if (ret)
 			break;
@@ -1560,6 +1592,7 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
 
 	INIT_LIST_HEAD(&vm_bo->list.entry.extobj);
 	INIT_LIST_HEAD(&vm_bo->list.entry.evict);
+	INIT_LIST_HEAD(&vm_bo->list.entry.bo_defer);
 
 	return vm_bo;
 }
@@ -1621,6 +1654,113 @@ drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo)
 }
 EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put);
 
+static void
+drm_gpuvm_bo_defer_locked(struct kref *kref)
+{
+	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
+						  kref);
+	struct drm_gpuvm *gpuvm = vm_bo->vm;
+
+	if (!drm_gpuvm_resv_protected(gpuvm)) {
+		drm_gpuvm_bo_list_del(vm_bo, extobj, true);
+		drm_gpuvm_bo_list_del(vm_bo, evict, true);
+	}
+
+	list_del(&vm_bo->list.entry.gem);
+}
+
+static void
+drm_gpuvm_bo_defer(struct kref *kref)
+{
+	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
+						  kref);
+
+	mutex_lock(&vm_bo->obj->gpuva.lock);
+	drm_gpuvm_bo_defer_locked(kref);
+	mutex_unlock(&vm_bo->obj->gpuva.lock);
+
+	/*
+	 * It's important that the GEM stays alive for the duration in which we
+	 * hold the mutex, but the instant we add the vm_bo to bo_defer,
+	 * another thread might call drm_gpuvm_bo_deferred_cleanup() and put
+	 * the GEM. Therefore, to avoid kfreeing a mutex we are holding, we add
+	 * the vm_bo to bo_defer *after* releasing the GEM's mutex.
+	 */
+	drm_gpuvm_bo_list_add(vm_bo, bo_defer, true);
+}
+
+/**
+ * drm_gpuvm_bo_put_deferred() - drop a struct drm_gpuvm_bo reference with
+ * deferred cleanup
+ * @vm_bo: the &drm_gpuvm_bo to release the reference of
+ *
+ * This releases a reference to @vm_bo.
+ *
+ * This might take and release the GEMs GPUVA lock. You should call
+ * drm_gpuvm_bo_deferred_cleanup() later to complete the cleanup process.
+ *
+ * Returns: true if vm_bo is being destroyed, false otherwise.
+ */
+bool
+drm_gpuvm_bo_put_deferred(struct drm_gpuvm_bo *vm_bo)
+{
+	if (!vm_bo)
+		return false;
+
+	drm_WARN_ON(vm_bo->vm->drm, !drm_gpuvm_immediate_mode(vm_bo->vm));
+
+	return !!kref_put(&vm_bo->kref, drm_gpuvm_bo_defer);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put_deferred);
+
+/**
+ * drm_gpuvm_bo_deferred_cleanup() - clean up vm_bos on the deferred
+ * cleanup list
+ * @gpuvm: the VM to clean up
+ *
+ * Cleans up &drm_gpuvm_bo instances in the deferred cleanup list.
+ */
+void
+drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
+{
+	const struct drm_gpuvm_ops *ops = gpuvm->ops;
+	struct drm_gpuvm_bo *vm_bo;
+	struct drm_gem_object *obj;
+	LIST_HEAD(bo_defer);
+
+	spin_lock(&gpuvm->bo_defer.lock);
+	list_replace_init(&gpuvm->bo_defer.list, &bo_defer);
+	spin_unlock(&gpuvm->bo_defer.lock);
+
+	if (list_empty(&bo_defer))
+		return;
+
+	if (drm_gpuvm_resv_protected(gpuvm)) {
+		dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL);
+		list_for_each_entry(vm_bo, &bo_defer, list.entry.bo_defer) {
+			drm_gpuvm_bo_list_del(vm_bo, extobj, false);
+			drm_gpuvm_bo_list_del(vm_bo, evict, false);
+		}
+		dma_resv_unlock(drm_gpuvm_resv(gpuvm));
+	}
+
+	while (!list_empty(&bo_defer)) {
+		vm_bo = list_first_entry(&bo_defer,
+			struct drm_gpuvm_bo, list.entry.bo_defer);
+		obj = vm_bo->obj;
+
+		list_del(&vm_bo->list.entry.bo_defer);
+		if (ops && ops->vm_bo_free)
+			ops->vm_bo_free(vm_bo);
+		else
+			kfree(vm_bo);
+
+		drm_gpuvm_put(gpuvm);
+		drm_gem_object_put(obj);
+	}
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_deferred_cleanup);
+
 static struct drm_gpuvm_bo *
 __drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
 		    struct drm_gem_object *obj)
@@ -1948,6 +2088,40 @@ drm_gpuva_unlink(struct drm_gpuva *va)
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
 
+/**
+ * drm_gpuva_unlink_defer() - unlink a &drm_gpuva with deferred vm_bo cleanup
+ * @va: the &drm_gpuva to unlink
+ *
+ * Similar to drm_gpuva_unlink(), but uses drm_gpuvm_bo_put_deferred() and takes
+ * the lock for the caller.
+ */
+void
+drm_gpuva_unlink_defer(struct drm_gpuva *va)
+{
+	struct drm_gem_object *obj = va->gem.obj;
+	struct drm_gpuvm_bo *vm_bo = va->vm_bo;
+	bool should_defer_bo;
+
+	if (unlikely(!obj))
+		return;
+
+	drm_WARN_ON(vm_bo->vm->drm, !drm_gpuvm_immediate_mode(vm_bo->vm));
+
+	mutex_lock(&obj->gpuva.lock);
+	list_del_init(&va->gem.entry);
+
+	/*
+	 * This is drm_gpuvm_bo_put_deferred() except we already hold the mutex.
+	 */
+	should_defer_bo = kref_put(&vm_bo->kref, drm_gpuvm_bo_defer_locked);
+	mutex_unlock(&obj->gpuva.lock);
+	if (should_defer_bo)
+		drm_gpuvm_bo_list_add(vm_bo, bo_defer, true);
+
+	va->vm_bo = NULL;
+}
+EXPORT_SYMBOL_GPL(drm_gpuva_unlink_defer);
+
 /**
  * drm_gpuva_find_first() - find the first &drm_gpuva in the given range
  * @gpuvm: the &drm_gpuvm to search in
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 727b8f336fad0d853998e4379cbd374155468e18..1f80389e14312f552a8abc95d12f52f83beb9be8 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -152,6 +152,7 @@ void drm_gpuva_remove(struct drm_gpuva *va);
 
 void drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo);
 void drm_gpuva_unlink(struct drm_gpuva *va);
+void drm_gpuva_unlink_defer(struct drm_gpuva *va);
 
 struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm,
 				 u64 addr, u64 range);
@@ -331,6 +332,22 @@ struct drm_gpuvm {
 		 */
 		spinlock_t lock;
 	} evict;
+
+	/**
+	 * @bo_defer: structure holding vm_bos that need to be destroyed
+	 */
+	struct {
+		/**
+		 * @bo_defer.list: &list_head storing &drm_gpuvm_bos that need
+		 * to be destroyed
+		 */
+		struct list_head list;
+
+		/**
+		 * @bo_defer.lock: spinlock to protect the bo_defer list
+		 */
+		spinlock_t lock;
+	} bo_defer;
 };
 
 void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
@@ -714,6 +731,12 @@ struct drm_gpuvm_bo {
 			 * &drm_gpuvms evict list.
 			 */
 			struct list_head evict;
+
+			/**
+			 * @list.entry.bo_defer: List entry to attach to
+			 * the &drm_gpuvms bo_defer list.
+			 */
+			struct list_head bo_defer;
 		} entry;
 	} list;
 };
@@ -746,6 +769,9 @@ drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo)
 
 bool drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo);
 
+bool drm_gpuvm_bo_put_deferred(struct drm_gpuvm_bo *vm_bo);
+void drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm);
+
 struct drm_gpuvm_bo *
 drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
 		  struct drm_gem_object *obj);

-- 
2.51.0.384.g4c02a37b29-goog



* [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer()
  2025-09-09 13:36 [PATCH v2 0/2] Defer vm_bo cleanup in GPUVM with DRM_GPUVM_IMMEDIATE_MODE Alice Ryhl
  2025-09-09 13:36 ` [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup Alice Ryhl
@ 2025-09-09 13:36 ` Alice Ryhl
  2025-09-11 10:15   ` Boris Brezillon
                     ` (2 more replies)
  1 sibling, 3 replies; 14+ messages in thread
From: Alice Ryhl @ 2025-09-09 13:36 UTC (permalink / raw)
  To: Danilo Krummrich, Matthew Brost, Thomas Hellström
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Steven Price, Daniel Almeida,
	Liviu Dudau, dri-devel, linux-kernel, rust-for-linux, Alice Ryhl

Instead of manually deferring cleanup of vm_bos, use the new GPUVM
infrastructure for doing so.

To avoid manual management of vm_bo refcounts, the panthor_vma_link()
and panthor_vma_unlink() methods are changed to get and put a refcount
on the vm_bo themselves. This simplifies the code a lot. I preserved the
behavior where panthor_gpuva_sm_step_map() drops the refcount right away
rather than letting panthor_vm_cleanup_op_ctx() do it later.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/gpu/drm/panthor/panthor_mmu.c | 113 ++++++----------------------------
 1 file changed, 19 insertions(+), 94 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 6dec4354e3789d17c5a87fc8de3bc86764b804bc..fd9ed88a4259e5fb88e5acffcf6d8a658cc7115d 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -181,20 +181,6 @@ struct panthor_vm_op_ctx {
 		u64 range;
 	} va;
 
-	/**
-	 * @returned_vmas: List of panthor_vma objects returned after a VM operation.
-	 *
-	 * For unmap operations, this will contain all VMAs that were covered by the
-	 * specified VA range.
-	 *
-	 * For map operations, this will contain all VMAs that previously mapped to
-	 * the specified VA range.
-	 *
-	 * Those VMAs, and the resources they point to will be released as part of
-	 * the op_ctx cleanup operation.
-	 */
-	struct list_head returned_vmas;
-
 	/** @map: Fields specific to a map operation. */
 	struct {
 		/** @map.vm_bo: Buffer object to map. */
@@ -1081,47 +1067,18 @@ void panthor_vm_free_va(struct panthor_vm *vm, struct drm_mm_node *va_node)
 	mutex_unlock(&vm->mm_lock);
 }
 
-static void panthor_vm_bo_put(struct drm_gpuvm_bo *vm_bo)
+static void panthor_vm_bo_free(struct drm_gpuvm_bo *vm_bo)
 {
 	struct panthor_gem_object *bo = to_panthor_bo(vm_bo->obj);
-	struct drm_gpuvm *vm = vm_bo->vm;
-	bool unpin;
-
-	/* We must retain the GEM before calling drm_gpuvm_bo_put(),
-	 * otherwise the mutex might be destroyed while we hold it.
-	 * Same goes for the VM, since we take the VM resv lock.
-	 */
-	drm_gem_object_get(&bo->base.base);
-	drm_gpuvm_get(vm);
-
-	/* We take the resv lock to protect against concurrent accesses to the
-	 * gpuvm evicted/extobj lists that are modified in
-	 * drm_gpuvm_bo_destroy(), which is called if drm_gpuvm_bo_put()
-	 * releases sthe last vm_bo reference.
-	 * We take the BO GPUVA list lock to protect the vm_bo removal from the
-	 * GEM vm_bo list.
-	 */
-	dma_resv_lock(drm_gpuvm_resv(vm), NULL);
-	mutex_lock(&bo->base.base.gpuva.lock);
-	unpin = drm_gpuvm_bo_put(vm_bo);
-	mutex_unlock(&bo->base.base.gpuva.lock);
-	dma_resv_unlock(drm_gpuvm_resv(vm));
 
-	/* If the vm_bo object was destroyed, release the pin reference that
-	 * was hold by this object.
-	 */
-	if (unpin && !drm_gem_is_imported(&bo->base.base))
+	if (!drm_gem_is_imported(&bo->base.base))
 		drm_gem_shmem_unpin(&bo->base);
-
-	drm_gpuvm_put(vm);
-	drm_gem_object_put(&bo->base.base);
+	kfree(vm_bo);
 }
 
 static void panthor_vm_cleanup_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 				      struct panthor_vm *vm)
 {
-	struct panthor_vma *vma, *tmp_vma;
-
 	u32 remaining_pt_count = op_ctx->rsvd_page_tables.count -
 				 op_ctx->rsvd_page_tables.ptr;
 
@@ -1134,16 +1091,12 @@ static void panthor_vm_cleanup_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	kfree(op_ctx->rsvd_page_tables.pages);
 
 	if (op_ctx->map.vm_bo)
-		panthor_vm_bo_put(op_ctx->map.vm_bo);
+		drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
 
 	for (u32 i = 0; i < ARRAY_SIZE(op_ctx->preallocated_vmas); i++)
 		kfree(op_ctx->preallocated_vmas[i]);
 
-	list_for_each_entry_safe(vma, tmp_vma, &op_ctx->returned_vmas, node) {
-		list_del(&vma->node);
-		panthor_vm_bo_put(vma->base.vm_bo);
-		kfree(vma);
-	}
+	drm_gpuvm_bo_deferred_cleanup(&vm->base);
 }
 
 static struct panthor_vma *
@@ -1232,7 +1185,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 		return -EINVAL;
 
 	memset(op_ctx, 0, sizeof(*op_ctx));
-	INIT_LIST_HEAD(&op_ctx->returned_vmas);
 	op_ctx->flags = flags;
 	op_ctx->va.range = size;
 	op_ctx->va.addr = va;
@@ -1243,7 +1195,9 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 
 	if (!drm_gem_is_imported(&bo->base.base)) {
 		/* Pre-reserve the BO pages, so the map operation doesn't have to
-		 * allocate.
+		 * allocate. This pin is dropped in panthor_vm_bo_free(), so
+		 * once we call drm_gpuvm_bo_create(), GPUVM will take care of
+		 * dropping the pin for us.
 		 */
 		ret = drm_gem_shmem_pin(&bo->base);
 		if (ret)
@@ -1263,9 +1217,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 
 	preallocated_vm_bo = drm_gpuvm_bo_create(&vm->base, &bo->base.base);
 	if (!preallocated_vm_bo) {
-		if (!drm_gem_is_imported(&bo->base.base))
-			drm_gem_shmem_unpin(&bo->base);
-
 		ret = -ENOMEM;
 		goto err_cleanup;
 	}
@@ -1282,16 +1233,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	mutex_unlock(&bo->base.base.gpuva.lock);
 	dma_resv_unlock(panthor_vm_resv(vm));
 
-	/* If the a vm_bo for this <VM,BO> combination exists, it already
-	 * retains a pin ref, and we can release the one we took earlier.
-	 *
-	 * If our pre-allocated vm_bo is picked, it now retains the pin ref,
-	 * which will be released in panthor_vm_bo_put().
-	 */
-	if (preallocated_vm_bo != op_ctx->map.vm_bo &&
-	    !drm_gem_is_imported(&bo->base.base))
-		drm_gem_shmem_unpin(&bo->base);
-
 	op_ctx->map.bo_offset = offset;
 
 	/* L1, L2 and L3 page tables.
@@ -1339,7 +1280,6 @@ static int panthor_vm_prepare_unmap_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	int ret;
 
 	memset(op_ctx, 0, sizeof(*op_ctx));
-	INIT_LIST_HEAD(&op_ctx->returned_vmas);
 	op_ctx->va.range = size;
 	op_ctx->va.addr = va;
 	op_ctx->flags = DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP;
@@ -1387,7 +1327,6 @@ static void panthor_vm_prepare_sync_only_op_ctx(struct panthor_vm_op_ctx *op_ctx
 						struct panthor_vm *vm)
 {
 	memset(op_ctx, 0, sizeof(*op_ctx));
-	INIT_LIST_HEAD(&op_ctx->returned_vmas);
 	op_ctx->flags = DRM_PANTHOR_VM_BIND_OP_TYPE_SYNC_ONLY;
 }
 
@@ -2033,26 +1972,13 @@ static void panthor_vma_link(struct panthor_vm *vm,
 
 	mutex_lock(&bo->base.base.gpuva.lock);
 	drm_gpuva_link(&vma->base, vm_bo);
-	drm_WARN_ON(&vm->ptdev->base, drm_gpuvm_bo_put(vm_bo));
 	mutex_unlock(&bo->base.base.gpuva.lock);
 }
 
-static void panthor_vma_unlink(struct panthor_vm *vm,
-			       struct panthor_vma *vma)
+static void panthor_vma_unlink(struct panthor_vma *vma)
 {
-	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
-	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
-
-	mutex_lock(&bo->base.base.gpuva.lock);
-	drm_gpuva_unlink(&vma->base);
-	mutex_unlock(&bo->base.base.gpuva.lock);
-
-	/* drm_gpuva_unlink() release the vm_bo, but we manually retained it
-	 * when entering this function, so we can implement deferred VMA
-	 * destruction. Re-assign it here.
-	 */
-	vma->base.vm_bo = vm_bo;
-	list_add_tail(&vma->node, &vm->op_ctx->returned_vmas);
+	drm_gpuva_unlink_defer(&vma->base);
+	kfree(vma);
 }
 
 static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
@@ -2084,12 +2010,12 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
 	if (ret)
 		return ret;
 
-	/* Ref owned by the mapping now, clear the obj field so we don't release the
-	 * pinning/obj ref behind GPUVA's back.
-	 */
 	drm_gpuva_map(&vm->base, &vma->base, &op->map);
 	panthor_vma_link(vm, vma, op_ctx->map.vm_bo);
+
+	drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
 	op_ctx->map.vm_bo = NULL;
+
 	return 0;
 }
 
@@ -2128,16 +2054,14 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
 		 * owned by the old mapping which will be released when this
 		 * mapping is destroyed, we need to grab a ref here.
 		 */
-		panthor_vma_link(vm, prev_vma,
-				 drm_gpuvm_bo_get(op->remap.unmap->va->vm_bo));
+		panthor_vma_link(vm, prev_vma, op->remap.unmap->va->vm_bo);
 	}
 
 	if (next_vma) {
-		panthor_vma_link(vm, next_vma,
-				 drm_gpuvm_bo_get(op->remap.unmap->va->vm_bo));
+		panthor_vma_link(vm, next_vma, op->remap.unmap->va->vm_bo);
 	}
 
-	panthor_vma_unlink(vm, unmap_vma);
+	panthor_vma_unlink(unmap_vma);
 	return 0;
 }
 
@@ -2154,12 +2078,13 @@ static int panthor_gpuva_sm_step_unmap(struct drm_gpuva_op *op,
 		return ret;
 
 	drm_gpuva_unmap(&op->unmap);
-	panthor_vma_unlink(vm, unmap_vma);
+	panthor_vma_unlink(unmap_vma);
 	return 0;
 }
 
 static const struct drm_gpuvm_ops panthor_gpuvm_ops = {
 	.vm_free = panthor_vm_free,
+	.vm_bo_free = panthor_vm_bo_free,
 	.sm_step_map = panthor_gpuva_sm_step_map,
 	.sm_step_remap = panthor_gpuva_sm_step_remap,
 	.sm_step_unmap = panthor_gpuva_sm_step_unmap,

-- 
2.51.0.384.g4c02a37b29-goog



* Re: [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup
  2025-09-09 13:36 ` [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup Alice Ryhl
@ 2025-09-09 13:39   ` Alice Ryhl
  2025-09-11 11:57     ` Boris Brezillon
  2025-09-11 12:00     ` Boris Brezillon
  2025-09-09 14:20   ` Thomas Hellström
  2025-09-11 12:18   ` Boris Brezillon
  2 siblings, 2 replies; 14+ messages in thread
From: Alice Ryhl @ 2025-09-09 13:39 UTC (permalink / raw)
  To: Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Boris Brezillon
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Daniel Almeida, Liviu Dudau,
	dri-devel, linux-kernel, rust-for-linux

On Tue, Sep 09, 2025 at 01:36:22PM +0000, Alice Ryhl wrote:
> When using GPUVM in immediate mode, it is necessary to call
> drm_gpuvm_unlink() from the fence signalling critical path. However,
> unlink may call drm_gpuvm_bo_put(), which causes some challenges:
> 
> 1. drm_gpuvm_bo_put() often requires you to take resv locks, which you
>    can't do from the fence signalling critical path.
> 2. drm_gpuvm_bo_put() calls drm_gem_object_put(), which is often going
>    to be unsafe to call from the fence signalling critical path.
> 
> To solve these issues, add a deferred version of drm_gpuvm_unlink() that
> adds the vm_bo to a deferred cleanup list, and then clean it up later.
> 
> The new methods take the GEMs GPUVA lock internally rather than letting
> the caller do it because it also needs to perform an operation after
> releasing the mutex again. This is to prevent freeing the GEM while
> holding the mutex (more info as comments in the patch). This means that
> the new methods can only be used with DRM_GPUVM_IMMEDIATE_MODE.
> 
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>

I'm not sure if we came to a conclusion on the gpuva defer stuff on v1,
but I found a less confusing way to implement the locking. Please check
it out.

Alice


* Re: [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup
  2025-09-09 13:36 ` [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup Alice Ryhl
  2025-09-09 13:39   ` Alice Ryhl
@ 2025-09-09 14:20   ` Thomas Hellström
  2025-09-10  6:39     ` Alice Ryhl
  2025-09-11 12:18   ` Boris Brezillon
  2 siblings, 1 reply; 14+ messages in thread
From: Thomas Hellström @ 2025-09-09 14:20 UTC (permalink / raw)
  To: Alice Ryhl, Danilo Krummrich, Matthew Brost
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Steven Price, Daniel Almeida,
	Liviu Dudau, dri-devel, linux-kernel, rust-for-linux

On Tue, 2025-09-09 at 13:36 +0000, Alice Ryhl wrote:
> When using GPUVM in immediate mode, it is necessary to call
> drm_gpuvm_unlink() from the fence signalling critical path. However,
> unlink may call drm_gpuvm_bo_put(), which causes some challenges:
> 
> 1. drm_gpuvm_bo_put() often requires you to take resv locks, which
> you
>    can't do from the fence signalling critical path.
> 2. drm_gpuvm_bo_put() calls drm_gem_object_put(), which is often
> going
>    to be unsafe to call from the fence signalling critical path.
> 
> To solve these issues, add a deferred version of drm_gpuvm_unlink()
> that
> adds the vm_bo to a deferred cleanup list, and then clean it up
> later.
> 
> The new methods take the GEMs GPUVA lock internally rather than
> letting
> the caller do it because it also needs to perform an operation after
> releasing the mutex again. This is to prevent freeing the GEM while
> holding the mutex (more info as comments in the patch). This means
> that
> the new methods can only be used with DRM_GPUVM_IMMEDIATE_MODE.
> 
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  drivers/gpu/drm/drm_gpuvm.c | 174
> ++++++++++++++++++++++++++++++++++++++++++++
>  include/drm/drm_gpuvm.h     |  26 +++++++
>  2 files changed, 200 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gpuvm.c
> b/drivers/gpu/drm/drm_gpuvm.c
> index
> 78a1a4f095095e9379bdf604d583f6c8b9863ccb..5aa8b3813019705f70101950af2
> d8fe4e648e9d0 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -876,6 +876,27 @@ __drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm,
> spinlock_t *lock,
>  	cond_spin_unlock(lock, !!lock);
>  }
>  
> +/**
> + * drm_gpuvm_bo_is_dead() - check whether this vm_bo is scheduled

NIT: Is zombie a better name than dead?

> for cleanup
> + * @vm_bo: the &drm_gpuvm_bo
> + *
> + * When a vm_bo is scheduled for cleanup using the bo_defer list, it
> is not
> + * immediately removed from the evict and extobj lists if they are
> protected by
> + * the resv lock, as we can't take that lock during run_job() in
> immediate
> + * mode. Therefore, anyone iterating these lists should skip entries
> that are
> + * being destroyed.
> + *
> + * Checking the refcount without incrementing it is okay as long as
> the lock
> + * protecting the evict/extobj list is held for as long as you are
> using the
> + * vm_bo, because even if the refcount hits zero while you are using
> it, freeing
> + * the vm_bo requires taking the list's lock.
> + */
> +static bool
> +drm_gpuvm_bo_is_dead(struct drm_gpuvm_bo *vm_bo)
> +{
> +	return !kref_read(&vm_bo->kref);
> +}
> +
>  /**
>   * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list
>   * @__vm_bo: the &drm_gpuvm_bo
> @@ -1081,6 +1102,9 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const
> char *name,
>  	INIT_LIST_HEAD(&gpuvm->evict.list);
>  	spin_lock_init(&gpuvm->evict.lock);
>  
> +	INIT_LIST_HEAD(&gpuvm->bo_defer.list);
> +	spin_lock_init(&gpuvm->bo_defer.lock);
> +

This list appears to exactly follow the pattern a lockless list was
designed for. Saves some space in the vm_bo and gets rid of the
excessive locking. <include/linux/llist.h>
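
Roughly like this (the field names are only for illustration, with a
struct llist_node in the vm_bo and a struct llist_head in the gpuvm):

	/* producer side, safe from the fence signalling path */
	llist_add(&vm_bo->defer_node, &gpuvm->bo_defer);

	/* consumer side, grabs the whole list in one go */
	struct llist_node *first = llist_del_all(&gpuvm->bo_defer);
	struct drm_gpuvm_bo *vm_bo, *next;

	llist_for_each_entry_safe(vm_bo, next, first, defer_node) {
		/* drop list entries, call ops->vm_bo_free(), put refs */
	}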

Otherwise LGTM.

/Thomas



* Re: [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup
  2025-09-09 14:20   ` Thomas Hellström
@ 2025-09-10  6:39     ` Alice Ryhl
  0 siblings, 0 replies; 14+ messages in thread
From: Alice Ryhl @ 2025-09-10  6:39 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: Danilo Krummrich, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Boris Brezillon,
	Steven Price, Daniel Almeida, Liviu Dudau, dri-devel,
	linux-kernel, rust-for-linux

On Tue, Sep 09, 2025 at 04:20:32PM +0200, Thomas Hellström wrote:
> On Tue, 2025-09-09 at 13:36 +0000, Alice Ryhl wrote:
> > When using GPUVM in immediate mode, it is necessary to call
> > drm_gpuvm_unlink() from the fence signalling critical path. However,
> > unlink may call drm_gpuvm_bo_put(), which causes some challenges:
> > 
> > 1. drm_gpuvm_bo_put() often requires you to take resv locks, which
> > you
> >    can't do from the fence signalling critical path.
> > 2. drm_gpuvm_bo_put() calls drm_gem_object_put(), which is often
> > going
> >    to be unsafe to call from the fence signalling critical path.
> > 
> > To solve these issues, add a deferred version of drm_gpuvm_unlink()
> > that
> > adds the vm_bo to a deferred cleanup list, and then clean it up
> > later.
> > 
> > The new methods take the GEMs GPUVA lock internally rather than
> > letting
> > the caller do it because it also needs to perform an operation after
> > releasing the mutex again. This is to prevent freeing the GEM while
> > holding the mutex (more info as comments in the patch). This means
> > that
> > the new methods can only be used with DRM_GPUVM_IMMEDIATE_MODE.
> > 
> > Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> > ---
> >  drivers/gpu/drm/drm_gpuvm.c | 174
> > ++++++++++++++++++++++++++++++++++++++++++++
> >  include/drm/drm_gpuvm.h     |  26 +++++++
> >  2 files changed, 200 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_gpuvm.c
> > b/drivers/gpu/drm/drm_gpuvm.c
> > index
> > 78a1a4f095095e9379bdf604d583f6c8b9863ccb..5aa8b3813019705f70101950af2
> > d8fe4e648e9d0 100644
> > --- a/drivers/gpu/drm/drm_gpuvm.c
> > +++ b/drivers/gpu/drm/drm_gpuvm.c
> > @@ -876,6 +876,27 @@ __drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm,
> > spinlock_t *lock,
> >  	cond_spin_unlock(lock, !!lock);
> >  }
> >  
> > +/**
> > + * drm_gpuvm_bo_is_dead() - check whether this vm_bo is scheduled
> 
> NIT: Is zombie a better name than dead?

I could see that name make sense.

> >  /**
> >   * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list
> >   * @__vm_bo: the &drm_gpuvm_bo
> > @@ -1081,6 +1102,9 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const
> > char *name,
> >  	INIT_LIST_HEAD(&gpuvm->evict.list);
> >  	spin_lock_init(&gpuvm->evict.lock);
> >  
> > +	INIT_LIST_HEAD(&gpuvm->bo_defer.list);
> > +	spin_lock_init(&gpuvm->bo_defer.lock);
> > +
> 
> This list appears to exactly follow the pattern a lockless list was
> designed for. Saves some space in the vm_bo and gets rid of the
> excessive locking. <include/linux/llist.h>

Good point.

Alice


* Re: [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer()
  2025-09-09 13:36 ` [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer() Alice Ryhl
@ 2025-09-11 10:15   ` Boris Brezillon
  2025-09-11 11:08     ` Alice Ryhl
  2025-09-11 12:35   ` Boris Brezillon
  2025-09-11 12:38   ` Boris Brezillon
  2 siblings, 1 reply; 14+ messages in thread
From: Boris Brezillon @ 2025-09-11 10:15 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Daniel Almeida, Liviu Dudau,
	dri-devel, linux-kernel, rust-for-linux

On Tue, 09 Sep 2025 13:36:23 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

>  static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
> @@ -2084,12 +2010,12 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  	if (ret)
>  		return ret;
>  
> -	/* Ref owned by the mapping now, clear the obj field so we don't release the
> -	 * pinning/obj ref behind GPUVA's back.
> -	 */
>  	drm_gpuva_map(&vm->base, &vma->base, &op->map);
>  	panthor_vma_link(vm, vma, op_ctx->map.vm_bo);
> +
> +	drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);

Hm, I don't see why we need a drm_gpuvm_bo_put_deferred() here. The
original idea was to delegate the vm_bo ownership to the VA being added
to the VM tree, so if we put it here, we have a UAF situation, don't we?

>  	op_ctx->map.vm_bo = NULL;
> +
>  	return 0;
>  }


* Re: [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer()
  2025-09-11 10:15   ` Boris Brezillon
@ 2025-09-11 11:08     ` Alice Ryhl
  2025-09-11 11:18       ` Boris Brezillon
  0 siblings, 1 reply; 14+ messages in thread
From: Alice Ryhl @ 2025-09-11 11:08 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Daniel Almeida, Liviu Dudau,
	dri-devel, linux-kernel, rust-for-linux

On Thu, Sep 11, 2025 at 12:15:37PM +0200, Boris Brezillon wrote:
> On Tue, 09 Sep 2025 13:36:23 +0000
> Alice Ryhl <aliceryhl@google.com> wrote:
> 
> >  static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
> > @@ -2084,12 +2010,12 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> >  	if (ret)
> >  		return ret;
> >  
> > -	/* Ref owned by the mapping now, clear the obj field so we don't release the
> > -	 * pinning/obj ref behind GPUVA's back.
> > -	 */
> >  	drm_gpuva_map(&vm->base, &vma->base, &op->map);
> >  	panthor_vma_link(vm, vma, op_ctx->map.vm_bo);
> > +
> > +	drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
> >  	op_ctx->map.vm_bo = NULL;
> 
> Hm, I don't see why we need a drm_gpuvm_bo_put_deferred() here. The
> original idea was to delegate the vm_bo ownership to the VA being added
> to the VM tree, so if we put it here, we have a UAF situation, don't we?

The vm_bo refcount goes like this:

incr vm_bo_obtain()
incr vma_link()
decr vm_bo_put()

There is no decrement in panthor_vm_cleanup_op_ctx() due to this line:

	op_ctx->map.vm_bo = NULL

So when everything is done, it is linked once and the refcount is
incremented by one, which is correct.

Alice


* Re: [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer()
  2025-09-11 11:08     ` Alice Ryhl
@ 2025-09-11 11:18       ` Boris Brezillon
  0 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-09-11 11:18 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Daniel Almeida, Liviu Dudau,
	dri-devel, linux-kernel, rust-for-linux

On Thu, 11 Sep 2025 11:08:43 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> On Thu, Sep 11, 2025 at 12:15:37PM +0200, Boris Brezillon wrote:
> > On Tue, 09 Sep 2025 13:36:23 +0000
> > Alice Ryhl <aliceryhl@google.com> wrote:
> >   
> > >  static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
> > > @@ -2084,12 +2010,12 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> > >  	if (ret)
> > >  		return ret;
> > >  
> > > -	/* Ref owned by the mapping now, clear the obj field so we don't release the
> > > -	 * pinning/obj ref behind GPUVA's back.
> > > -	 */
> > >  	drm_gpuva_map(&vm->base, &vma->base, &op->map);
> > >  	panthor_vma_link(vm, vma, op_ctx->map.vm_bo);
> > > +
> > > +	drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
> > >  	op_ctx->map.vm_bo = NULL;  
> > 
> > Hm, I don't see why we need a drm_gpuvm_bo_put_deferred() here. The
> > original idea was to delegate the vm_bo ownership to the VA being added
> > to the VM tree, so if we put it here, we have a UAF situation, don't we?  
> 
> The vm_bo refcount goes like this:
> 
> incr vm_bo_obtain()
> incr vma_link()
> decr vm_bo_put()
> 
> There is no decrement in panthor_vm_cleanup_op_ctx() due to this line:
> 
> 	op_ctx->map.vm_bo = NULL
> 
> So when everything is done, it is linked once and the refcount is
> incremented by one, which is correct.

Ah, right, I overlooked the change to panthor_vma_link() where you drop
the _put().


* Re: [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup
  2025-09-09 13:39   ` Alice Ryhl
@ 2025-09-11 11:57     ` Boris Brezillon
  2025-09-11 12:00     ` Boris Brezillon
  1 sibling, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-09-11 11:57 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Daniel Almeida, Liviu Dudau,
	dri-devel, linux-kernel, rust-for-linux

On Tue, 9 Sep 2025 13:39:39 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> On Tue, Sep 09, 2025 at 01:36:22PM +0000, Alice Ryhl wrote:
> > When using GPUVM in immediate mode, it is necessary to call
> > drm_gpuvm_unlink() from the fence signalling critical path. However,
> > unlink may call drm_gpuvm_bo_put(), which causes some challenges:
> > 
> > 1. drm_gpuvm_bo_put() often requires you to take resv locks, which you
> >    can't do from the fence signalling critical path.
> > 2. drm_gpuvm_bo_put() calls drm_gem_object_put(), which is often going
> >    to be unsafe to call from the fence signalling critical path.
> > 
> > To solve these issues, add a deferred version of drm_gpuvm_unlink() that
> > adds the vm_bo to a deferred cleanup list, and then clean it up later.
> > 
> > The new methods take the GEMs GPUVA lock internally rather than letting
> > the caller do it because it also needs to perform an operation after
> > releasing the mutex again. This is to prevent freeing the GEM while
> > holding the mutex (more info as comments in the patch). This means that
> > the new methods can only be used with DRM_GPUVM_IMMEDIATE_MODE.
> > 
> > Signed-off-by: Alice Ryhl <aliceryhl@google.com>  
> 
> I'm not sure if we came to a conclusion on the gpuva defer stuff on v1,
> but I found a less confusing way to implement the locking. Please check
> it out.

Yep, I think I prefer this version too.


* Re: [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup
  2025-09-09 13:36 ` [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup Alice Ryhl
  2025-09-09 13:39   ` Alice Ryhl
  2025-09-09 14:20   ` Thomas Hellström
@ 2025-09-11 12:18   ` Boris Brezillon
  2 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-09-11 12:18 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Daniel Almeida, Liviu Dudau,
	dri-devel, linux-kernel, rust-for-linux

On Tue, 09 Sep 2025 13:36:22 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> When using GPUVM in immediate mode, it is necessary to call
> drm_gpuvm_unlink() from the fence signalling critical path. However,
> unlink may call drm_gpuvm_bo_put(), which causes some challenges:
> 
> 1. drm_gpuvm_bo_put() often requires you to take resv locks, which you
>    can't do from the fence signalling critical path.
> 2. drm_gpuvm_bo_put() calls drm_gem_object_put(), which is often going
>    to be unsafe to call from the fence signalling critical path.
> 
> To solve these issues, add a deferred version of drm_gpuvm_unlink() that
> adds the vm_bo to a deferred cleanup list, and then clean it up later.
> 
> The new methods take the GEMs GPUVA lock internally rather than letting
> the caller do it because it also needs to perform an operation after
> releasing the mutex again. This is to prevent freeing the GEM while
> holding the mutex (more info as comments in the patch). This means that
> the new methods can only be used with DRM_GPUVM_IMMEDIATE_MODE.
> 
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>

Apart from some names which I find a bit confusing (bo_defer doesn't
describe what is deferred, so I'd be tempted to go for defer_free/cleanup
instead), and the fact that I'm not a fan of the asymmetry that exists
now between ::vm_bo_alloc() (which simply deals with the allocation)
and ::vm_bo_free() (which also does some cleanup around resources that
have been acquired after drm_gpuvm_bo_create()/::vm_bo_alloc() has
returned), this looks reasonable to me.

So here's my

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

(stands even after the llist changes Thomas suggested).

> ---
>  drivers/gpu/drm/drm_gpuvm.c | 174 ++++++++++++++++++++++++++++++++++++++++++++
>  include/drm/drm_gpuvm.h     |  26 +++++++
>  2 files changed, 200 insertions(+)
> 
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index 78a1a4f095095e9379bdf604d583f6c8b9863ccb..5aa8b3813019705f70101950af2d8fe4e648e9d0 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -876,6 +876,27 @@ __drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm, spinlock_t *lock,
>  	cond_spin_unlock(lock, !!lock);
>  }
>  
> +/**
> + * drm_gpuvm_bo_is_dead() - check whether this vm_bo is scheduled for cleanup
> + * @vm_bo: the &drm_gpuvm_bo
> + *
> + * When a vm_bo is scheduled for cleanup using the bo_defer list, it is not
> + * immediately removed from the evict and extobj lists if they are protected by
> + * the resv lock, as we can't take that lock during run_job() in immediate
> + * mode. Therefore, anyone iterating these lists should skip entries that are
> + * being destroyed.
> + *
> + * Checking the refcount without incrementing it is okay as long as the lock
> + * protecting the evict/extobj list is held for as long as you are using the
> + * vm_bo, because even if the refcount hits zero while you are using it, freeing
> + * the vm_bo requires taking the list's lock.
> + */
> +static bool
> +drm_gpuvm_bo_is_dead(struct drm_gpuvm_bo *vm_bo)
> +{
> +	return !kref_read(&vm_bo->kref);
> +}
> +
>  /**
>   * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list
>   * @__vm_bo: the &drm_gpuvm_bo
> @@ -1081,6 +1102,9 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
>  	INIT_LIST_HEAD(&gpuvm->evict.list);
>  	spin_lock_init(&gpuvm->evict.lock);
>  
> +	INIT_LIST_HEAD(&gpuvm->bo_defer.list);
> +	spin_lock_init(&gpuvm->bo_defer.lock);
> +
>  	kref_init(&gpuvm->kref);
>  
>  	gpuvm->name = name ? name : "unknown";
> @@ -1122,6 +1146,8 @@ drm_gpuvm_fini(struct drm_gpuvm *gpuvm)
>  		 "Extobj list should be empty.\n");
>  	drm_WARN(gpuvm->drm, !list_empty(&gpuvm->evict.list),
>  		 "Evict list should be empty.\n");
> +	drm_WARN(gpuvm->drm, !list_empty(&gpuvm->bo_defer.list),
> +		 "VM BO cleanup list should be empty.\n");
>  
>  	drm_gem_object_put(gpuvm->r_obj);
>  }
> @@ -1217,6 +1243,9 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
>  
>  	drm_gpuvm_resv_assert_held(gpuvm);
>  	list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) {
> +		if (drm_gpuvm_bo_is_dead(vm_bo))
> +			continue;
> +
>  		ret = exec_prepare_obj(exec, vm_bo->obj, num_fences);
>  		if (ret)
>  			break;
> @@ -1460,6 +1489,9 @@ drm_gpuvm_validate_locked(struct drm_gpuvm *gpuvm, struct drm_exec *exec)
>  
>  	list_for_each_entry_safe(vm_bo, next, &gpuvm->evict.list,
>  				 list.entry.evict) {
> +		if (drm_gpuvm_bo_is_dead(vm_bo))
> +			continue;
> +
>  		ret = ops->vm_bo_validate(vm_bo, exec);
>  		if (ret)
>  			break;
> @@ -1560,6 +1592,7 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
>  
>  	INIT_LIST_HEAD(&vm_bo->list.entry.extobj);
>  	INIT_LIST_HEAD(&vm_bo->list.entry.evict);
> +	INIT_LIST_HEAD(&vm_bo->list.entry.bo_defer);
>  
>  	return vm_bo;
>  }
> @@ -1621,6 +1654,113 @@ drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo)
>  }
>  EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put);
>  
> +static void
> +drm_gpuvm_bo_defer_locked(struct kref *kref)
> +{
> +	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
> +						  kref);
> +	struct drm_gpuvm *gpuvm = vm_bo->vm;
> +
> +	if (!drm_gpuvm_resv_protected(gpuvm)) {
> +		drm_gpuvm_bo_list_del(vm_bo, extobj, true);
> +		drm_gpuvm_bo_list_del(vm_bo, evict, true);
> +	}
> +
> +	list_del(&vm_bo->list.entry.gem);
> +}
> +
> +static void
> +drm_gpuvm_bo_defer(struct kref *kref)
> +{
> +	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
> +						  kref);
> +
> +	mutex_lock(&vm_bo->obj->gpuva.lock);
> +	drm_gpuvm_bo_defer_locked(kref);
> +	mutex_unlock(&vm_bo->obj->gpuva.lock);
> +
> +	/*
> +	 * It's important that the GEM stays alive for the duration in which we
> +	 * hold the mutex, but the instant we add the vm_bo to bo_defer,
> +	 * another thread might call drm_gpuvm_bo_deferred_cleanup() and put
> +	 * the GEM. Therefore, to avoid kfreeing a mutex we are holding, we add
> +	 * the vm_bo to bo_defer *after* releasing the GEM's mutex.
> +	 */
> +	drm_gpuvm_bo_list_add(vm_bo, bo_defer, true);
> +}
> +
> +/**
> + * drm_gpuvm_bo_put_deferred() - drop a struct drm_gpuvm_bo reference with
> + * deferred cleanup
> + * @vm_bo: the &drm_gpuvm_bo to release the reference of
> + *
> + * This releases a reference to @vm_bo.
> + *
> + * This might take and release the GEMs GPUVA lock. You should call
> + * drm_gpuvm_bo_deferred_cleanup() later to complete the cleanup process.
> + *
> + * Returns: true if vm_bo is being destroyed, false otherwise.
> + */
> +bool
> +drm_gpuvm_bo_put_deferred(struct drm_gpuvm_bo *vm_bo)
> +{
> +	if (!vm_bo)
> +		return false;
> +
> +	drm_WARN_ON(vm_bo->vm->drm, !drm_gpuvm_immediate_mode(vm_bo->vm));
> +
> +	return !!kref_put(&vm_bo->kref, drm_gpuvm_bo_defer);
> +}
> +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put_deferred);
> +
> +/**
> + * drm_gpuvm_bo_deferred_cleanup() - clean up BOs in the deferred list
> + * deferred cleanup
> + * @gpuvm: the VM to clean up
> + *
> + * Cleans up &drm_gpuvm_bo instances in the deferred cleanup list.
> + */
> +void
> +drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm)
> +{
> +	const struct drm_gpuvm_ops *ops = gpuvm->ops;
> +	struct drm_gpuvm_bo *vm_bo;
> +	struct drm_gem_object *obj;
> +	LIST_HEAD(bo_defer);
> +
> +	spin_lock(&gpuvm->bo_defer.lock);
> +	list_replace_init(&gpuvm->bo_defer.list, &bo_defer);
> +	spin_unlock(&gpuvm->bo_defer.lock);
> +
> +	if (list_empty(&bo_defer))
> +		return;
> +
> +	if (drm_gpuvm_resv_protected(gpuvm)) {
> +		dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL);
> +		list_for_each_entry(vm_bo, &bo_defer, list.entry.bo_defer) {
> +			drm_gpuvm_bo_list_del(vm_bo, extobj, false);
> +			drm_gpuvm_bo_list_del(vm_bo, evict, false);
> +		}
> +		dma_resv_unlock(drm_gpuvm_resv(gpuvm));
> +	}
> +
> +	while (!list_empty(&bo_defer)) {
> +		vm_bo = list_first_entry(&bo_defer,
> +			struct drm_gpuvm_bo, list.entry.bo_defer);
> +		obj = vm_bo->obj;
> +
> +		list_del(&vm_bo->list.entry.bo_defer);
> +		if (ops && ops->vm_bo_free)
> +			ops->vm_bo_free(vm_bo);
> +		else
> +			kfree(vm_bo);
> +
> +		drm_gpuvm_put(gpuvm);
> +		drm_gem_object_put(obj);
> +	}
> +}
> +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_deferred_cleanup);
> +
>  static struct drm_gpuvm_bo *
>  __drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
>  		    struct drm_gem_object *obj)
> @@ -1948,6 +2088,40 @@ drm_gpuva_unlink(struct drm_gpuva *va)
>  }
>  EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
>  
> +/**
> + * drm_gpuva_unlink_defer() - unlink a &drm_gpuva with deferred vm_bo cleanup
> + * @va: the &drm_gpuva to unlink
> + *
> + * Similar to drm_gpuva_unlink(), but uses drm_gpuvm_bo_put_deferred() and takes
> + * the lock for the caller.
> + */
> +void
> +drm_gpuva_unlink_defer(struct drm_gpuva *va)
> +{
> +	struct drm_gem_object *obj = va->gem.obj;
> +	struct drm_gpuvm_bo *vm_bo = va->vm_bo;
> +	bool should_defer_bo;
> +
> +	if (unlikely(!obj))
> +		return;
> +
> +	drm_WARN_ON(vm_bo->vm->drm, !drm_gpuvm_immediate_mode(vm_bo->vm));
> +
> +	mutex_lock(&obj->gpuva.lock);
> +	list_del_init(&va->gem.entry);
> +
> +	/*
> +	 * This is drm_gpuvm_bo_put_deferred() except we already hold the mutex.
> +	 */
> +	should_defer_bo = kref_put(&vm_bo->kref, drm_gpuvm_bo_defer_locked);
> +	mutex_unlock(&obj->gpuva.lock);
> +	if (should_defer_bo)
> +		drm_gpuvm_bo_list_add(vm_bo, bo_defer, true);
> +
> +	va->vm_bo = NULL;
> +}
> +EXPORT_SYMBOL_GPL(drm_gpuva_unlink_defer);
> +
>  /**
>   * drm_gpuva_find_first() - find the first &drm_gpuva in the given range
>   * @gpuvm: the &drm_gpuvm to search in
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 727b8f336fad0d853998e4379cbd374155468e18..1f80389e14312f552a8abc95d12f52f83beb9be8 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -152,6 +152,7 @@ void drm_gpuva_remove(struct drm_gpuva *va);
>  
>  void drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo);
>  void drm_gpuva_unlink(struct drm_gpuva *va);
> +void drm_gpuva_unlink_defer(struct drm_gpuva *va);
>  
>  struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm,
>  				 u64 addr, u64 range);
> @@ -331,6 +332,22 @@ struct drm_gpuvm {
>  		 */
>  		spinlock_t lock;
>  	} evict;
> +
> +	/**
> +	 * @bo_defer: structure holding vm_bos that need to be destroyed
> +	 */
> +	struct {
> +		/**
> +		 * @bo_defer.list: &list_head storing &drm_gpuvm_bos that need
> +		 * to be destroyed
> +		 */
> +		struct list_head list;
> +
> +		/**
> +		 * @bo_defer.lock: spinlock to protect the bo_defer list
> +		 */
> +		spinlock_t lock;
> +	} bo_defer;
>  };
>  
>  void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
> @@ -714,6 +731,12 @@ struct drm_gpuvm_bo {
>  			 * &drm_gpuvms evict list.
>  			 */
>  			struct list_head evict;
> +
> +			/**
> +			 * @list.entry.bo_defer: List entry to attach to
> +			 * the &drm_gpuvms bo_defer list.
> +			 */
> +			struct list_head bo_defer;
>  		} entry;
>  	} list;
>  };
> @@ -746,6 +769,9 @@ drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo)
>  
>  bool drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo);
>  
> +bool drm_gpuvm_bo_put_deferred(struct drm_gpuvm_bo *vm_bo);
> +void drm_gpuvm_bo_deferred_cleanup(struct drm_gpuvm *gpuvm);
> +
>  struct drm_gpuvm_bo *
>  drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
>  		  struct drm_gem_object *obj);
> 



* Re: [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer()
  2025-09-09 13:36 ` [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer() Alice Ryhl
  2025-09-11 10:15   ` Boris Brezillon
@ 2025-09-11 12:35   ` Boris Brezillon
  2025-09-11 12:38   ` Boris Brezillon
  2 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-09-11 12:35 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Daniel Almeida, Liviu Dudau,
	dri-devel, linux-kernel, rust-for-linux

On Tue, 09 Sep 2025 13:36:23 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> Instead of manually deferring cleanup of vm_bos, use the new GPUVM
> infrastructure for doing so.
> 
> To avoid manual management of vm_bo refcounts, the panthor_vma_link()
> and panthor_vma_unlink() methods are changed to get and put a vm_bo
> refcount on the vm_bo. This simplifies the code a lot. I preserved the
> behavior where panthor_gpuva_sm_step_map() drops the refcount right away
> rather than letting panthor_vm_cleanup_op_ctx() do it later.
> 
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  drivers/gpu/drm/panthor/panthor_mmu.c | 113 ++++++----------------------------
>  1 file changed, 19 insertions(+), 94 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 6dec4354e3789d17c5a87fc8de3bc86764b804bc..fd9ed88a4259e5fb88e5acffcf6d8a658cc7115d 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -181,20 +181,6 @@ struct panthor_vm_op_ctx {
>  		u64 range;
>  	} va;
>  
> -	/**
> -	 * @returned_vmas: List of panthor_vma objects returned after a VM operation.
> -	 *
> -	 * For unmap operations, this will contain all VMAs that were covered by the
> -	 * specified VA range.
> -	 *
> -	 * For map operations, this will contain all VMAs that previously mapped to
> -	 * the specified VA range.
> -	 *
> -	 * Those VMAs, and the resources they point to will be released as part of
> -	 * the op_ctx cleanup operation.
> -	 */
> -	struct list_head returned_vmas;
> -
>  	/** @map: Fields specific to a map operation. */
>  	struct {
>  		/** @map.vm_bo: Buffer object to map. */
> @@ -1081,47 +1067,18 @@ void panthor_vm_free_va(struct panthor_vm *vm, struct drm_mm_node *va_node)
>  	mutex_unlock(&vm->mm_lock);
>  }
>  
> -static void panthor_vm_bo_put(struct drm_gpuvm_bo *vm_bo)
> +static void panthor_vm_bo_free(struct drm_gpuvm_bo *vm_bo)
>  {
>  	struct panthor_gem_object *bo = to_panthor_bo(vm_bo->obj);
> -	struct drm_gpuvm *vm = vm_bo->vm;
> -	bool unpin;
> -
> -	/* We must retain the GEM before calling drm_gpuvm_bo_put(),
> -	 * otherwise the mutex might be destroyed while we hold it.
> -	 * Same goes for the VM, since we take the VM resv lock.
> -	 */
> -	drm_gem_object_get(&bo->base.base);
> -	drm_gpuvm_get(vm);
> -
> -	/* We take the resv lock to protect against concurrent accesses to the
> -	 * gpuvm evicted/extobj lists that are modified in
> -	 * drm_gpuvm_bo_destroy(), which is called if drm_gpuvm_bo_put()
> -	 * releases sthe last vm_bo reference.
> -	 * We take the BO GPUVA list lock to protect the vm_bo removal from the
> -	 * GEM vm_bo list.
> -	 */
> -	dma_resv_lock(drm_gpuvm_resv(vm), NULL);
> -	mutex_lock(&bo->base.base.gpuva.lock);
> -	unpin = drm_gpuvm_bo_put(vm_bo);
> -	mutex_unlock(&bo->base.base.gpuva.lock);
> -	dma_resv_unlock(drm_gpuvm_resv(vm));
>  
> -	/* If the vm_bo object was destroyed, release the pin reference that
> -	 * was hold by this object.
> -	 */
> -	if (unpin && !drm_gem_is_imported(&bo->base.base))
> +	if (!drm_gem_is_imported(&bo->base.base))
>  		drm_gem_shmem_unpin(&bo->base);
> -
> -	drm_gpuvm_put(vm);
> -	drm_gem_object_put(&bo->base.base);
> +	kfree(vm_bo);
>  }
>  
>  static void panthor_vm_cleanup_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  				      struct panthor_vm *vm)
>  {
> -	struct panthor_vma *vma, *tmp_vma;
> -
>  	u32 remaining_pt_count = op_ctx->rsvd_page_tables.count -
>  				 op_ctx->rsvd_page_tables.ptr;
>  
> @@ -1134,16 +1091,12 @@ static void panthor_vm_cleanup_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	kfree(op_ctx->rsvd_page_tables.pages);
>  
>  	if (op_ctx->map.vm_bo)
> -		panthor_vm_bo_put(op_ctx->map.vm_bo);
> +		drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
>  
>  	for (u32 i = 0; i < ARRAY_SIZE(op_ctx->preallocated_vmas); i++)
>  		kfree(op_ctx->preallocated_vmas[i]);
>  
> -	list_for_each_entry_safe(vma, tmp_vma, &op_ctx->returned_vmas, node) {
> -		list_del(&vma->node);
> -		panthor_vm_bo_put(vma->base.vm_bo);
> -		kfree(vma);
> -	}
> +	drm_gpuvm_bo_deferred_cleanup(&vm->base);
>  }
>  
>  static struct panthor_vma *
> @@ -1232,7 +1185,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  		return -EINVAL;
>  
>  	memset(op_ctx, 0, sizeof(*op_ctx));
> -	INIT_LIST_HEAD(&op_ctx->returned_vmas);
>  	op_ctx->flags = flags;
>  	op_ctx->va.range = size;
>  	op_ctx->va.addr = va;
> @@ -1243,7 +1195,9 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  
>  	if (!drm_gem_is_imported(&bo->base.base)) {
>  		/* Pre-reserve the BO pages, so the map operation doesn't have to
> -		 * allocate.
> +		 * allocate. This pin is dropped in panthor_vm_bo_free(), so
> +		 * once we call drm_gpuvm_bo_create(), GPUVM will take care of
> +		 * dropping the pin for us.
>  		 */
>  		ret = drm_gem_shmem_pin(&bo->base);
>  		if (ret)
> @@ -1263,9 +1217,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  
>  	preallocated_vm_bo = drm_gpuvm_bo_create(&vm->base, &bo->base.base);
>  	if (!preallocated_vm_bo) {
> -		if (!drm_gem_is_imported(&bo->base.base))
> -			drm_gem_shmem_unpin(&bo->base);
> -

Aren't we leaking a pin ref here? If the preallocated_vm_bo is
NULL, ::vm_bo_free() won't be called and we're never releasing the ref
we acquired earlier in this function, are we?
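
If so, one way to keep the error path balanced might be to simply keep
the unpin this hunk removes, something like (sketch only):

	preallocated_vm_bo = drm_gpuvm_bo_create(&vm->base, &bo->base.base);
	if (!preallocated_vm_bo) {
		/*
		 * No vm_bo exists yet, so ::vm_bo_free() will never run for
		 * it; drop the pin taken earlier in this function here.
		 */
		if (!drm_gem_is_imported(&bo->base.base))
			drm_gem_shmem_unpin(&bo->base);

		ret = -ENOMEM;
		goto err_cleanup;
	}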

>  		ret = -ENOMEM;
>  		goto err_cleanup;
>  	}
> @@ -1282,16 +1233,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	mutex_unlock(&bo->base.base.gpuva.lock);
>  	dma_resv_unlock(panthor_vm_resv(vm));
>  
> -	/* If the a vm_bo for this <VM,BO> combination exists, it already
> -	 * retains a pin ref, and we can release the one we took earlier.
> -	 *
> -	 * If our pre-allocated vm_bo is picked, it now retains the pin ref,
> -	 * which will be released in panthor_vm_bo_put().
> -	 */
> -	if (preallocated_vm_bo != op_ctx->map.vm_bo &&
> -	    !drm_gem_is_imported(&bo->base.base))
> -		drm_gem_shmem_unpin(&bo->base);
> -
>  	op_ctx->map.bo_offset = offset;
>  
>  	/* L1, L2 and L3 page tables.
> @@ -1339,7 +1280,6 @@ static int panthor_vm_prepare_unmap_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	int ret;
>  
>  	memset(op_ctx, 0, sizeof(*op_ctx));
> -	INIT_LIST_HEAD(&op_ctx->returned_vmas);
>  	op_ctx->va.range = size;
>  	op_ctx->va.addr = va;
>  	op_ctx->flags = DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP;
> @@ -1387,7 +1327,6 @@ static void panthor_vm_prepare_sync_only_op_ctx(struct panthor_vm_op_ctx *op_ctx
>  						struct panthor_vm *vm)
>  {
>  	memset(op_ctx, 0, sizeof(*op_ctx));
> -	INIT_LIST_HEAD(&op_ctx->returned_vmas);
>  	op_ctx->flags = DRM_PANTHOR_VM_BIND_OP_TYPE_SYNC_ONLY;
>  }
>  
> @@ -2033,26 +1972,13 @@ static void panthor_vma_link(struct panthor_vm *vm,
>  
>  	mutex_lock(&bo->base.base.gpuva.lock);
>  	drm_gpuva_link(&vma->base, vm_bo);
> -	drm_WARN_ON(&vm->ptdev->base, drm_gpuvm_bo_put(vm_bo));
>  	mutex_unlock(&bo->base.base.gpuva.lock);
>  }
>  
> -static void panthor_vma_unlink(struct panthor_vm *vm,
> -			       struct panthor_vma *vma)
> +static void panthor_vma_unlink(struct panthor_vma *vma)
>  {
> -	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
> -	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
> -
> -	mutex_lock(&bo->base.base.gpuva.lock);
> -	drm_gpuva_unlink(&vma->base);
> -	mutex_unlock(&bo->base.base.gpuva.lock);
> -
> -	/* drm_gpuva_unlink() release the vm_bo, but we manually retained it
> -	 * when entering this function, so we can implement deferred VMA
> -	 * destruction. Re-assign it here.
> -	 */
> -	vma->base.vm_bo = vm_bo;
> -	list_add_tail(&vma->node, &vm->op_ctx->returned_vmas);
> +	drm_gpuva_unlink_defer(&vma->base);
> +	kfree(vma);
>  }
>  
>  static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
> @@ -2084,12 +2010,12 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  	if (ret)
>  		return ret;
>  
> -	/* Ref owned by the mapping now, clear the obj field so we don't release the
> -	 * pinning/obj ref behind GPUVA's back.
> -	 */
>  	drm_gpuva_map(&vm->base, &vma->base, &op->map);
>  	panthor_vma_link(vm, vma, op_ctx->map.vm_bo);
> +
> +	drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
>  	op_ctx->map.vm_bo = NULL;
> +
>  	return 0;
>  }
>  
> @@ -2128,16 +2054,14 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  		 * owned by the old mapping which will be released when this
>  		 * mapping is destroyed, we need to grab a ref here.
>  		 */
> -		panthor_vma_link(vm, prev_vma,
> -				 drm_gpuvm_bo_get(op->remap.unmap->va->vm_bo));
> +		panthor_vma_link(vm, prev_vma, op->remap.unmap->va->vm_bo);
>  	}
>  
>  	if (next_vma) {
> -		panthor_vma_link(vm, next_vma,
> -				 drm_gpuvm_bo_get(op->remap.unmap->va->vm_bo));
> +		panthor_vma_link(vm, next_vma, op->remap.unmap->va->vm_bo);
>  	}
>  
> -	panthor_vma_unlink(vm, unmap_vma);
> +	panthor_vma_unlink(unmap_vma);
>  	return 0;
>  }
>  
> @@ -2154,12 +2078,13 @@ static int panthor_gpuva_sm_step_unmap(struct drm_gpuva_op *op,
>  		return ret;
>  
>  	drm_gpuva_unmap(&op->unmap);
> -	panthor_vma_unlink(vm, unmap_vma);
> +	panthor_vma_unlink(unmap_vma);
>  	return 0;
>  }
>  
>  static const struct drm_gpuvm_ops panthor_gpuvm_ops = {
>  	.vm_free = panthor_vm_free,
> +	.vm_bo_free = panthor_vm_bo_free,
>  	.sm_step_map = panthor_gpuva_sm_step_map,
>  	.sm_step_remap = panthor_gpuva_sm_step_remap,
>  	.sm_step_unmap = panthor_gpuva_sm_step_unmap,
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer()
  2025-09-09 13:36 ` [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer() Alice Ryhl
  2025-09-11 10:15   ` Boris Brezillon
  2025-09-11 12:35   ` Boris Brezillon
@ 2025-09-11 12:38   ` Boris Brezillon
  2 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-09-11 12:38 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Danilo Krummrich, Matthew Brost, Thomas Hellström,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Steven Price, Daniel Almeida, Liviu Dudau,
	dri-devel, linux-kernel, rust-for-linux

On Tue, 09 Sep 2025 13:36:23 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> Instead of manually deferring cleanup of vm_bos, use the new GPUVM
> infrastructure for doing so.
> 
> To avoid manual management of vm_bo refcounts, the panthor_vma_link()
> and panthor_vma_unlink() methods are changed to get and put a vm_bo
> refcount on the vm_bo. This simplifies the code a lot. I preserved the
> behavior where panthor_gpuva_sm_step_map() drops the refcount right away
> rather than letting panthor_vm_cleanup_op_ctx() do it later.
> 
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  drivers/gpu/drm/panthor/panthor_mmu.c | 113 ++++++----------------------------
>  1 file changed, 19 insertions(+), 94 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 6dec4354e3789d17c5a87fc8de3bc86764b804bc..fd9ed88a4259e5fb88e5acffcf6d8a658cc7115d 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -181,20 +181,6 @@ struct panthor_vm_op_ctx {
>  		u64 range;
>  	} va;
>  
> -	/**
> -	 * @returned_vmas: List of panthor_vma objects returned after a VM operation.
> -	 *
> -	 * For unmap operations, this will contain all VMAs that were covered by the
> -	 * specified VA range.
> -	 *
> -	 * For map operations, this will contain all VMAs that previously mapped to
> -	 * the specified VA range.
> -	 *
> -	 * Those VMAs, and the resources they point to will be released as part of
> -	 * the op_ctx cleanup operation.
> -	 */
> -	struct list_head returned_vmas;
> -
>  	/** @map: Fields specific to a map operation. */
>  	struct {
>  		/** @map.vm_bo: Buffer object to map. */
> @@ -1081,47 +1067,18 @@ void panthor_vm_free_va(struct panthor_vm *vm, struct drm_mm_node *va_node)
>  	mutex_unlock(&vm->mm_lock);
>  }
>  
> -static void panthor_vm_bo_put(struct drm_gpuvm_bo *vm_bo)
> +static void panthor_vm_bo_free(struct drm_gpuvm_bo *vm_bo)
>  {
>  	struct panthor_gem_object *bo = to_panthor_bo(vm_bo->obj);
> -	struct drm_gpuvm *vm = vm_bo->vm;
> -	bool unpin;
> -
> -	/* We must retain the GEM before calling drm_gpuvm_bo_put(),
> -	 * otherwise the mutex might be destroyed while we hold it.
> -	 * Same goes for the VM, since we take the VM resv lock.
> -	 */
> -	drm_gem_object_get(&bo->base.base);
> -	drm_gpuvm_get(vm);
> -
> -	/* We take the resv lock to protect against concurrent accesses to the
> -	 * gpuvm evicted/extobj lists that are modified in
> -	 * drm_gpuvm_bo_destroy(), which is called if drm_gpuvm_bo_put()
> -	 * releases sthe last vm_bo reference.
> -	 * We take the BO GPUVA list lock to protect the vm_bo removal from the
> -	 * GEM vm_bo list.
> -	 */
> -	dma_resv_lock(drm_gpuvm_resv(vm), NULL);
> -	mutex_lock(&bo->base.base.gpuva.lock);
> -	unpin = drm_gpuvm_bo_put(vm_bo);
> -	mutex_unlock(&bo->base.base.gpuva.lock);
> -	dma_resv_unlock(drm_gpuvm_resv(vm));
>  
> -	/* If the vm_bo object was destroyed, release the pin reference that
> -	 * was hold by this object.
> -	 */
> -	if (unpin && !drm_gem_is_imported(&bo->base.base))
> +	if (!drm_gem_is_imported(&bo->base.base))
>  		drm_gem_shmem_unpin(&bo->base);
> -
> -	drm_gpuvm_put(vm);
> -	drm_gem_object_put(&bo->base.base);
> +	kfree(vm_bo);
>  }
>  
>  static void panthor_vm_cleanup_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  				      struct panthor_vm *vm)
>  {
> -	struct panthor_vma *vma, *tmp_vma;
> -
>  	u32 remaining_pt_count = op_ctx->rsvd_page_tables.count -
>  				 op_ctx->rsvd_page_tables.ptr;
>  
> @@ -1134,16 +1091,12 @@ static void panthor_vm_cleanup_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	kfree(op_ctx->rsvd_page_tables.pages);
>  
>  	if (op_ctx->map.vm_bo)
> -		panthor_vm_bo_put(op_ctx->map.vm_bo);
> +		drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
>  
>  	for (u32 i = 0; i < ARRAY_SIZE(op_ctx->preallocated_vmas); i++)
>  		kfree(op_ctx->preallocated_vmas[i]);
>  
> -	list_for_each_entry_safe(vma, tmp_vma, &op_ctx->returned_vmas, node) {
> -		list_del(&vma->node);
> -		panthor_vm_bo_put(vma->base.vm_bo);
> -		kfree(vma);
> -	}
> +	drm_gpuvm_bo_deferred_cleanup(&vm->base);
>  }
>  
>  static struct panthor_vma *
> @@ -1232,7 +1185,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  		return -EINVAL;
>  
>  	memset(op_ctx, 0, sizeof(*op_ctx));
> -	INIT_LIST_HEAD(&op_ctx->returned_vmas);
>  	op_ctx->flags = flags;
>  	op_ctx->va.range = size;
>  	op_ctx->va.addr = va;
> @@ -1243,7 +1195,9 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  
>  	if (!drm_gem_is_imported(&bo->base.base)) {
>  		/* Pre-reserve the BO pages, so the map operation doesn't have to
> -		 * allocate.
> +		 * allocate. This pin is dropped in panthor_vm_bo_free(), so
> +		 * once we call drm_gpuvm_bo_create(), GPUVM will take care of
> +		 * dropping the pin for us.
>  		 */
>  		ret = drm_gem_shmem_pin(&bo->base);
>  		if (ret)
> @@ -1263,9 +1217,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  
>  	preallocated_vm_bo = drm_gpuvm_bo_create(&vm->base, &bo->base.base);
>  	if (!preallocated_vm_bo) {
> -		if (!drm_gem_is_imported(&bo->base.base))
> -			drm_gem_shmem_unpin(&bo->base);

Aren't we leaking a pin ref here? If drm_gpuvm_bo_create() returns
NULL, ::vm_bo_free() won't be called, meaning we never return the pin
ref we acquired earlier in this function.

> -
>  		ret = -ENOMEM;
>  		goto err_cleanup;
>  	}
> @@ -1282,16 +1233,6 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	mutex_unlock(&bo->base.base.gpuva.lock);
>  	dma_resv_unlock(panthor_vm_resv(vm));
>  
> -	/* If the a vm_bo for this <VM,BO> combination exists, it already
> -	 * retains a pin ref, and we can release the one we took earlier.
> -	 *
> -	 * If our pre-allocated vm_bo is picked, it now retains the pin ref,
> -	 * which will be released in panthor_vm_bo_put().
> -	 */
> -	if (preallocated_vm_bo != op_ctx->map.vm_bo &&
> -	    !drm_gem_is_imported(&bo->base.base))
> -		drm_gem_shmem_unpin(&bo->base);
> -
>  	op_ctx->map.bo_offset = offset;
>  
>  	/* L1, L2 and L3 page tables.
> @@ -1339,7 +1280,6 @@ static int panthor_vm_prepare_unmap_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	int ret;
>  
>  	memset(op_ctx, 0, sizeof(*op_ctx));
> -	INIT_LIST_HEAD(&op_ctx->returned_vmas);
>  	op_ctx->va.range = size;
>  	op_ctx->va.addr = va;
>  	op_ctx->flags = DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP;
> @@ -1387,7 +1327,6 @@ static void panthor_vm_prepare_sync_only_op_ctx(struct panthor_vm_op_ctx *op_ctx
>  						struct panthor_vm *vm)
>  {
>  	memset(op_ctx, 0, sizeof(*op_ctx));
> -	INIT_LIST_HEAD(&op_ctx->returned_vmas);
>  	op_ctx->flags = DRM_PANTHOR_VM_BIND_OP_TYPE_SYNC_ONLY;
>  }
>  
> @@ -2033,26 +1972,13 @@ static void panthor_vma_link(struct panthor_vm *vm,
>  
>  	mutex_lock(&bo->base.base.gpuva.lock);
>  	drm_gpuva_link(&vma->base, vm_bo);
> -	drm_WARN_ON(&vm->ptdev->base, drm_gpuvm_bo_put(vm_bo));
>  	mutex_unlock(&bo->base.base.gpuva.lock);
>  }
>  
> -static void panthor_vma_unlink(struct panthor_vm *vm,
> -			       struct panthor_vma *vma)
> +static void panthor_vma_unlink(struct panthor_vma *vma)
>  {
> -	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
> -	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
> -
> -	mutex_lock(&bo->base.base.gpuva.lock);
> -	drm_gpuva_unlink(&vma->base);
> -	mutex_unlock(&bo->base.base.gpuva.lock);
> -
> -	/* drm_gpuva_unlink() release the vm_bo, but we manually retained it
> -	 * when entering this function, so we can implement deferred VMA
> -	 * destruction. Re-assign it here.
> -	 */
> -	vma->base.vm_bo = vm_bo;
> -	list_add_tail(&vma->node, &vm->op_ctx->returned_vmas);
> +	drm_gpuva_unlink_defer(&vma->base);
> +	kfree(vma);
>  }
>  
>  static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
> @@ -2084,12 +2010,12 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  	if (ret)
>  		return ret;
>  
> -	/* Ref owned by the mapping now, clear the obj field so we don't release the
> -	 * pinning/obj ref behind GPUVA's back.
> -	 */
>  	drm_gpuva_map(&vm->base, &vma->base, &op->map);
>  	panthor_vma_link(vm, vma, op_ctx->map.vm_bo);
> +
> +	drm_gpuvm_bo_put_deferred(op_ctx->map.vm_bo);
>  	op_ctx->map.vm_bo = NULL;
> +
>  	return 0;
>  }
>  
> @@ -2128,16 +2054,14 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  		 * owned by the old mapping which will be released when this
>  		 * mapping is destroyed, we need to grab a ref here.
>  		 */
> -		panthor_vma_link(vm, prev_vma,
> -				 drm_gpuvm_bo_get(op->remap.unmap->va->vm_bo));
> +		panthor_vma_link(vm, prev_vma, op->remap.unmap->va->vm_bo);
>  	}
>  
>  	if (next_vma) {
> -		panthor_vma_link(vm, next_vma,
> -				 drm_gpuvm_bo_get(op->remap.unmap->va->vm_bo));
> +		panthor_vma_link(vm, next_vma, op->remap.unmap->va->vm_bo);
>  	}
>  
> -	panthor_vma_unlink(vm, unmap_vma);
> +	panthor_vma_unlink(unmap_vma);
>  	return 0;
>  }
>  
> @@ -2154,12 +2078,13 @@ static int panthor_gpuva_sm_step_unmap(struct drm_gpuva_op *op,
>  		return ret;
>  
>  	drm_gpuva_unmap(&op->unmap);
> -	panthor_vma_unlink(vm, unmap_vma);
> +	panthor_vma_unlink(unmap_vma);
>  	return 0;
>  }
>  
>  static const struct drm_gpuvm_ops panthor_gpuvm_ops = {
>  	.vm_free = panthor_vm_free,
> +	.vm_bo_free = panthor_vm_bo_free,
>  	.sm_step_map = panthor_gpuva_sm_step_map,
>  	.sm_step_remap = panthor_gpuva_sm_step_remap,
>  	.sm_step_unmap = panthor_gpuva_sm_step_unmap,
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2025-09-11 12:39 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-09 13:36 [PATCH v2 0/2] Defer vm_bo cleanup in GPUVM with DRM_GPUVM_IMMEDIATE_MODE Alice Ryhl
2025-09-09 13:36 ` [PATCH v2 1/2] drm/gpuvm: add deferred vm_bo cleanup Alice Ryhl
2025-09-09 13:39   ` Alice Ryhl
2025-09-11 11:57     ` Boris Brezillon
2025-09-11 12:00     ` Boris Brezillon
2025-09-09 14:20   ` Thomas Hellström
2025-09-10  6:39     ` Alice Ryhl
2025-09-11 12:18   ` Boris Brezillon
2025-09-09 13:36 ` [PATCH v2 2/2] panthor: use drm_gpuva_unlink_defer() Alice Ryhl
2025-09-11 10:15   ` Boris Brezillon
2025-09-11 11:08     ` Alice Ryhl
2025-09-11 11:18       ` Boris Brezillon
2025-09-11 12:35   ` Boris Brezillon
2025-09-11 12:38   ` Boris Brezillon

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).