rust-for-linux.vger.kernel.org archive mirror
* [PATCH v2 0/3] Add mutex to drm_gem_object.gpuva list
@ 2025-08-22  9:28 Alice Ryhl
  2025-08-22  9:28 ` [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva Alice Ryhl
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Alice Ryhl @ 2025-08-22  9:28 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Boris Brezillon, Daniel Almeida,
	Steven Price, Liviu Dudau, Rob Clark, Rob Herring
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, dri-devel,
	linux-kernel, rust-for-linux, Alice Ryhl

See the first patch for motivation.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
Changes in v2:
- Move the mutex_destroy() call to drm_gem_private_object_fini()
- Add a third patch to get rid of the lockdep map.
- Link to v1: https://lore.kernel.org/r/20250814-gpuva-mutex-in-gem-v1-0-e202cbfe6d77@google.com

---
Alice Ryhl (3):
      drm_gem: add mutex to drm_gem_object.gpuva
      panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock
      gpuvm: remove gem.gpuva.lock_dep_map

 drivers/gpu/drm/drm_gem.c             |  2 ++
 drivers/gpu/drm/drm_gpuvm.c           | 30 +++++++++++++--------------
 drivers/gpu/drm/panthor/panthor_gem.c |  3 ---
 drivers/gpu/drm/panthor/panthor_gem.h | 12 -----------
 drivers/gpu/drm/panthor/panthor_mmu.c | 21 ++++++++++---------
 include/drm/drm_gem.h                 | 39 ++++++++++++++---------------------
 include/drm/drm_gpuvm.h               | 30 ++++++++++++++++++++++++---
 7 files changed, 70 insertions(+), 67 deletions(-)
---
base-commit: 3f13bcc886fc034113cb75cb32b8d9db1216b846
change-id: 20250814-gpuva-mutex-in-gem-a608ab334e51

Best regards,
-- 
Alice Ryhl <aliceryhl@google.com>


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva
  2025-08-22  9:28 [PATCH v2 0/3] Add mutex to drm_gem_object.gpuva list Alice Ryhl
@ 2025-08-22  9:28 ` Alice Ryhl
  2025-08-22  9:52   ` Boris Brezillon
  2025-08-22  9:28 ` [PATCH v2 2/3] panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock Alice Ryhl
  2025-08-22  9:28 ` [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map Alice Ryhl
  2 siblings, 1 reply; 12+ messages in thread
From: Alice Ryhl @ 2025-08-22  9:28 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Boris Brezillon, Daniel Almeida,
	Steven Price, Liviu Dudau, Rob Clark, Rob Herring
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, dri-devel,
	linux-kernel, rust-for-linux, Alice Ryhl

There are two main ways that GPUVM might be used:

* staged mode, where VM_BIND ioctls update the GPUVM immediately so that
  the GPUVM reflects the state of the VM *including* staged changes that
  are not yet applied to the GPU's virtual address space.
* immediate mode, where the GPUVM state is updated during run_job(),
  i.e., in the DMA fence signalling critical path, to ensure that the
  GPUVM and the GPU's virtual address space have the same state at all
  times.

Currently, only Panthor uses GPUVM in immediate mode, but the Rust
drivers Tyr and Nova will also use it in immediate mode, so it is worth
supporting both staged and immediate mode well in GPUVM. To use
immediate mode, the GEM's gpuva list must be modified during the fence
signalling path, which means that it must be protected by a lock that is
fence signalling safe.

For this reason, add a mutex to struct drm_gem_object for this purpose.
Adding it directly to the GEM object both makes it easier to use GPUVM
in immediate mode and makes it possible to take the gpuva lock from
core DRM code.

As a follow-up, another change that should probably be made to support
immediate mode is a mechanism to postpone cleanup of vm_bo objects, as
dropping a vm_bo object in the fence signalling path is problematic for
two reasons:

* When using DRM_GPUVM_RESV_PROTECTED, you cannot remove the vm_bo from
  the extobj/evicted lists during the fence signalling path.
* Dropping a vm_bo could lead to the GEM object getting destroyed.
  The requirement that GEM object cleanup is fence signalling safe is
  dubious and likely to be violated in practice.

Panthor already has its own custom implementation of postponing vm_bo
cleanup.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/gpu/drm/drm_gem.c | 2 ++
 include/drm/drm_gem.h     | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 4a89b6acb6af39720451ac24033b89e144d282dc..8d25cc65707d5b44d931beb0207c9d08a3e2de5a 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -187,6 +187,7 @@ void drm_gem_private_object_init(struct drm_device *dev,
 	kref_init(&obj->refcount);
 	obj->handle_count = 0;
 	obj->size = size;
+	mutex_init(&obj->gpuva.lock);
 	dma_resv_init(&obj->_resv);
 	if (!obj->resv)
 		obj->resv = &obj->_resv;
@@ -210,6 +211,7 @@ void drm_gem_private_object_fini(struct drm_gem_object *obj)
 	WARN_ON(obj->dma_buf);
 
 	dma_resv_fini(&obj->_resv);
+	mutex_destroy(&obj->gpuva.lock);
 }
 EXPORT_SYMBOL(drm_gem_private_object_fini);
 
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index d3a7b43e2c637b164eba5af7cc2fc8ef09d4f0a4..5934d8dc267a65aaf62d2d025869221cd110b325 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -403,11 +403,13 @@ struct drm_gem_object {
 	 * Provides the list of GPU VAs attached to this GEM object.
 	 *
 	 * Drivers should lock list accesses with the GEMs &dma_resv lock
-	 * (&drm_gem_object.resv) or a custom lock if one is provided.
+	 * (&drm_gem_object.resv) or a custom lock if one is provided. The
+	 * mutex inside this struct may be used as the custom lock.
 	 */
 	struct {
 		struct list_head list;
 
+		struct mutex lock;
 #ifdef CONFIG_LOCKDEP
 		struct lockdep_map *lock_dep_map;
 #endif

-- 
2.51.0.rc2.233.g662b1ed5c5-goog



* [PATCH v2 2/3] panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock
  2025-08-22  9:28 [PATCH v2 0/3] Add mutex to drm_gem_object.gpuva list Alice Ryhl
  2025-08-22  9:28 ` [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva Alice Ryhl
@ 2025-08-22  9:28 ` Alice Ryhl
  2025-08-22  9:28 ` [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map Alice Ryhl
  2 siblings, 0 replies; 12+ messages in thread
From: Alice Ryhl @ 2025-08-22  9:28 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Boris Brezillon, Daniel Almeida,
	Steven Price, Liviu Dudau, Rob Clark, Rob Herring
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, dri-devel,
	linux-kernel, rust-for-linux, Alice Ryhl

Now that drm_gem_object has a dedicated mutex for the gpuva list that is
intended to be used in cases that must be fence signalling safe, use it
in Panthor.

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/gpu/drm/panthor/panthor_gem.c |  4 +---
 drivers/gpu/drm/panthor/panthor_gem.h | 12 ------------
 drivers/gpu/drm/panthor/panthor_mmu.c | 16 ++++++++--------
 3 files changed, 9 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index a123bc740ba1460f96882206f598b148b64dc5f6..c654a3377903cf7e067becdb481fb14895a4eaa5 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -74,7 +74,6 @@ static void panthor_gem_free_object(struct drm_gem_object *obj)
 	mutex_destroy(&bo->label.lock);
 
 	drm_gem_free_mmap_offset(&bo->base.base);
-	mutex_destroy(&bo->gpuva_list_lock);
 	drm_gem_shmem_free(&bo->base);
 	drm_gem_object_put(vm_root_gem);
 }
@@ -246,8 +245,7 @@ struct drm_gem_object *panthor_gem_create_object(struct drm_device *ddev, size_t
 
 	obj->base.base.funcs = &panthor_gem_funcs;
 	obj->base.map_wc = !ptdev->coherent;
-	mutex_init(&obj->gpuva_list_lock);
-	drm_gem_gpuva_set_lock(&obj->base.base, &obj->gpuva_list_lock);
+	drm_gem_gpuva_set_lock(&obj->base.base, &obj->base.base.gpuva.lock);
 	mutex_init(&obj->label.lock);
 
 	panthor_gem_debugfs_bo_init(obj);
diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
index 8fc7215e9b900ed162e03aebeae999fda00eeb7a..80c6e24112d0bd0f1561ae4d2224842afb735a59 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.h
+++ b/drivers/gpu/drm/panthor/panthor_gem.h
@@ -79,18 +79,6 @@ struct panthor_gem_object {
 	 */
 	struct drm_gem_object *exclusive_vm_root_gem;
 
-	/**
-	 * @gpuva_list_lock: Custom GPUVA lock.
-	 *
-	 * Used to protect insertion of drm_gpuva elements to the
-	 * drm_gem_object.gpuva.list list.
-	 *
-	 * We can't use the GEM resv for that, because drm_gpuva_link() is
-	 * called in a dma-signaling path, where we're not allowed to take
-	 * resv locks.
-	 */
-	struct mutex gpuva_list_lock;
-
 	/** @flags: Combination of drm_panthor_bo_flags flags. */
 	u32 flags;
 
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 2003b91a84097d419484c284c2d6241a82b5cde2..2881942ab5115686803fb9d9036bc059d56b7fa3 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1107,9 +1107,9 @@ static void panthor_vm_bo_put(struct drm_gpuvm_bo *vm_bo)
 	 * GEM vm_bo list.
 	 */
 	dma_resv_lock(drm_gpuvm_resv(vm), NULL);
-	mutex_lock(&bo->gpuva_list_lock);
+	mutex_lock(&bo->base.base.gpuva.lock);
 	unpin = drm_gpuvm_bo_put(vm_bo);
-	mutex_unlock(&bo->gpuva_list_lock);
+	mutex_unlock(&bo->base.base.gpuva.lock);
 	dma_resv_unlock(drm_gpuvm_resv(vm));
 
 	/* If the vm_bo object was destroyed, release the pin reference that
@@ -1282,9 +1282,9 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	 * calling this function.
 	 */
 	dma_resv_lock(panthor_vm_resv(vm), NULL);
-	mutex_lock(&bo->gpuva_list_lock);
+	mutex_lock(&bo->base.base.gpuva.lock);
 	op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
-	mutex_unlock(&bo->gpuva_list_lock);
+	mutex_unlock(&bo->base.base.gpuva.lock);
 	dma_resv_unlock(panthor_vm_resv(vm));
 
 	/* If the a vm_bo for this <VM,BO> combination exists, it already
@@ -2036,10 +2036,10 @@ static void panthor_vma_link(struct panthor_vm *vm,
 {
 	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
 
-	mutex_lock(&bo->gpuva_list_lock);
+	mutex_lock(&bo->base.base.gpuva.lock);
 	drm_gpuva_link(&vma->base, vm_bo);
 	drm_WARN_ON(&vm->ptdev->base, drm_gpuvm_bo_put(vm_bo));
-	mutex_unlock(&bo->gpuva_list_lock);
+	mutex_unlock(&bo->base.base.gpuva.lock);
 }
 
 static void panthor_vma_unlink(struct panthor_vm *vm,
@@ -2048,9 +2048,9 @@ static void panthor_vma_unlink(struct panthor_vm *vm,
 	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
 	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
 
-	mutex_lock(&bo->gpuva_list_lock);
+	mutex_lock(&bo->base.base.gpuva.lock);
 	drm_gpuva_unlink(&vma->base);
-	mutex_unlock(&bo->gpuva_list_lock);
+	mutex_unlock(&bo->base.base.gpuva.lock);
 
 	/* drm_gpuva_unlink() release the vm_bo, but we manually retained it
 	 * when entering this function, so we can implement deferred VMA

-- 
2.51.0.rc2.233.g662b1ed5c5-goog



* [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map
  2025-08-22  9:28 [PATCH v2 0/3] Add mutex to drm_gem_object.gpuva list Alice Ryhl
  2025-08-22  9:28 ` [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva Alice Ryhl
  2025-08-22  9:28 ` [PATCH v2 2/3] panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock Alice Ryhl
@ 2025-08-22  9:28 ` Alice Ryhl
  2025-08-22  9:55   ` Boris Brezillon
  2025-08-22 10:25   ` Danilo Krummrich
  2 siblings, 2 replies; 12+ messages in thread
From: Alice Ryhl @ 2025-08-22  9:28 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Boris Brezillon, Daniel Almeida,
	Steven Price, Liviu Dudau, Rob Clark, Rob Herring
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, dri-devel,
	linux-kernel, rust-for-linux, Alice Ryhl

Since all users of gem.gpuva.lock_dep_map now use the mutex in gpuva
directly, the lock_dep_map can be removed. Whether the mutex is used is
now tracked by a flag in the gpuvm rather than by whether lock_dep_map
is NULL.

Note that a GEM object must not be added to multiple gpuvms that
disagree on the value of this new flag. But that is okay because a
single driver should use the same locking scheme everywhere, and a GEM
object is driver specific (when a GEM is exported with prime, a new GEM
object instance is created from the backing dma-buf).

The flag is present even with CONFIG_LOCKDEP=n because the intent is
that the flag will also cause vm_bo cleanup to become deferred; that
change will land in a follow-up patch.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/gpu/drm/drm_gpuvm.c           | 30 +++++++++++++--------------
 drivers/gpu/drm/panthor/panthor_gem.c |  1 -
 drivers/gpu/drm/panthor/panthor_mmu.c |  5 +++--
 include/drm/drm_gem.h                 | 39 ++++++++++++++---------------------
 include/drm/drm_gpuvm.h               | 30 ++++++++++++++++++++++++---
 5 files changed, 59 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index d6bea8a4fffd7613fb9b9ed5c795df373da2e7b6..78a1a4f095095e9379bdf604d583f6c8b9863ccb 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -497,8 +497,7 @@
  * DRM GPUVM also does not take care of the locking of the backing
  * &drm_gem_object buffers GPU VA lists and &drm_gpuvm_bo abstractions by
  * itself; drivers are responsible to enforce mutual exclusion using either the
- * GEMs dma_resv lock or alternatively a driver specific external lock. For the
- * latter see also drm_gem_gpuva_set_lock().
+ * GEMs dma_resv lock or the GEMs gpuva.lock mutex.
  *
  * However, DRM GPUVM contains lockdep checks to ensure callers of its API hold
  * the corresponding lock whenever the &drm_gem_objects GPU VA list is accessed
@@ -1582,7 +1581,7 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	drm_gem_gpuva_assert_lock_held(gpuvm, obj);
 	list_del(&vm_bo->list.entry.gem);
 
 	if (ops && ops->vm_bo_free)
@@ -1603,7 +1602,8 @@ drm_gpuvm_bo_destroy(struct kref *kref)
  * If the reference count drops to zero, the &gpuvm_bo is destroyed, which
  * includes removing it from the GEMs gpuva list. Hence, if a call to this
  * function can potentially let the reference count drop to zero the caller must
- * hold the dma-resv or driver specific GEM gpuva lock.
+ * hold the lock that the GEM uses for its gpuva list (either the GEM's
+ * dma-resv or gpuva.lock mutex).
  *
  * This function may only be called from non-atomic context.
  *
@@ -1627,7 +1627,7 @@ __drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
 {
 	struct drm_gpuvm_bo *vm_bo;
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	drm_gem_gpuva_assert_lock_held(gpuvm, obj);
 	drm_gem_for_each_gpuvm_bo(vm_bo, obj)
 		if (vm_bo->vm == gpuvm)
 			return vm_bo;
@@ -1686,7 +1686,7 @@ drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
 	if (!vm_bo)
 		return ERR_PTR(-ENOMEM);
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	drm_gem_gpuva_assert_lock_held(gpuvm, obj);
 	list_add_tail(&vm_bo->list.entry.gem, &obj->gpuva.list);
 
 	return vm_bo;
@@ -1722,7 +1722,7 @@ drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *__vm_bo)
 		return vm_bo;
 	}
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	drm_gem_gpuva_assert_lock_held(gpuvm, obj);
 	list_add_tail(&__vm_bo->list.entry.gem, &obj->gpuva.list);
 
 	return __vm_bo;
@@ -1894,8 +1894,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_remove);
  * reference of the latter is taken.
  *
  * This function expects the caller to protect the GEM's GPUVA list against
- * concurrent access using either the GEMs dma_resv lock or a driver specific
- * lock set through drm_gem_gpuva_set_lock().
+ * concurrent access using either the GEM's dma-resv or gpuva.lock mutex.
  */
 void
 drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo)
@@ -1910,7 +1909,7 @@ drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo)
 
 	va->vm_bo = drm_gpuvm_bo_get(vm_bo);
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	drm_gem_gpuva_assert_lock_held(gpuvm, obj);
 	list_add_tail(&va->gem.entry, &vm_bo->list.gpuva);
 }
 EXPORT_SYMBOL_GPL(drm_gpuva_link);
@@ -1930,8 +1929,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_link);
  * the latter is dropped.
  *
  * This function expects the caller to protect the GEM's GPUVA list against
- * concurrent access using either the GEMs dma_resv lock or a driver specific
- * lock set through drm_gem_gpuva_set_lock().
+ * concurrent access using either the GEM's dma-resv or gpuva.lock mutex.
  */
 void
 drm_gpuva_unlink(struct drm_gpuva *va)
@@ -1942,7 +1940,7 @@ drm_gpuva_unlink(struct drm_gpuva *va)
 	if (unlikely(!obj))
 		return;
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	drm_gem_gpuva_assert_lock_held(va->vm, obj);
 	list_del_init(&va->gem.entry);
 
 	va->vm_bo = NULL;
@@ -2943,8 +2941,8 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_prefetch_ops_create);
  * After the caller finished processing the returned &drm_gpuva_ops, they must
  * be freed with &drm_gpuva_ops_free.
  *
- * It is the callers responsibility to protect the GEMs GPUVA list against
- * concurrent access using the GEMs dma_resv lock.
+ * This function expects the caller to protect the GEM's GPUVA list against
+ * concurrent access using either the GEM's dma-resv or gpuva.lock mutex.
  *
  * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
  */
@@ -2956,7 +2954,7 @@ drm_gpuvm_bo_unmap_ops_create(struct drm_gpuvm_bo *vm_bo)
 	struct drm_gpuva *va;
 	int ret;
 
-	drm_gem_gpuva_assert_lock_held(vm_bo->obj);
+	drm_gem_gpuva_assert_lock_held(vm_bo->vm, vm_bo->obj);
 
 	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
 	if (!ops)
diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index c654a3377903cf7e067becdb481fb14895a4eaa5..156c7a0b62a231219645095d6012a5b108130bbc 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -245,7 +245,6 @@ struct drm_gem_object *panthor_gem_create_object(struct drm_device *ddev, size_t
 
 	obj->base.base.funcs = &panthor_gem_funcs;
 	obj->base.map_wc = !ptdev->coherent;
-	drm_gem_gpuva_set_lock(&obj->base.base, &obj->base.base.gpuva.lock);
 	mutex_init(&obj->label.lock);
 
 	panthor_gem_debugfs_bo_init(obj);
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 2881942ab5115686803fb9d9036bc059d56b7fa3..ee9e94448b76ffd31a97d82a857fa925c4cf0cb5 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2420,8 +2420,9 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
 	 * to be handled the same way user VMAs are.
 	 */
 	drm_gpuvm_init(&vm->base, for_mcu ? "panthor-MCU-VM" : "panthor-GPU-VM",
-		       DRM_GPUVM_RESV_PROTECTED, &ptdev->base, dummy_gem,
-		       min_va, va_range, 0, 0, &panthor_gpuvm_ops);
+		       DRM_GPUVM_RESV_PROTECTED | DRM_GPUVM_IMMEDIATE_MODE,
+		       &ptdev->base, dummy_gem, min_va, va_range, 0, 0,
+		       &panthor_gpuvm_ops);
 	drm_gem_object_put(dummy_gem);
 	return vm;
 
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 5934d8dc267a65aaf62d2d025869221cd110b325..85a25bbb387c4590678e4ba243b51acd94b008ed 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -402,17 +402,22 @@ struct drm_gem_object {
 	 *
 	 * Provides the list of GPU VAs attached to this GEM object.
 	 *
-	 * Drivers should lock list accesses with the GEMs &dma_resv lock
-	 * (&drm_gem_object.resv) or a custom lock if one is provided. The
-	 * mutex inside this struct may be used as the custom lock.
+	 * When DRM_GPUVM_IMMEDIATE_MODE is set, this list is protected by the
+	 * mutex. Otherwise, the list is protected by the GEMs &dma_resv lock.
+	 *
+	 * Note that all entries in this list must agree on whether
+	 * DRM_GPUVM_IMMEDIATE_MODE is set.
 	 */
 	struct {
 		struct list_head list;
 
+		/**
+		 * @gpuva.lock: Only used when DRM_GPUVM_IMMEDIATE_MODE is set.
+		 * It should be safe to take this mutex during the fence
+		 * signalling path, so do not allocate memory while holding
+		 * this lock.
+		 */
 		struct mutex lock;
-#ifdef CONFIG_LOCKDEP
-		struct lockdep_map *lock_dep_map;
-#endif
 	} gpuva;
 
 	/**
@@ -597,26 +602,12 @@ static inline bool drm_gem_is_imported(const struct drm_gem_object *obj)
 }
 
 #ifdef CONFIG_LOCKDEP
-/**
- * drm_gem_gpuva_set_lock() - Set the lock protecting accesses to the gpuva list.
- * @obj: the &drm_gem_object
- * @lock: the lock used to protect the gpuva list. The locking primitive
- * must contain a dep_map field.
- *
- * Call this if you're not proctecting access to the gpuva list with the
- * dma-resv lock, but with a custom lock.
- */
-#define drm_gem_gpuva_set_lock(obj, lock) \
-	if (!WARN((obj)->gpuva.lock_dep_map, \
-		  "GEM GPUVA lock should be set only once.")) \
-		(obj)->gpuva.lock_dep_map = &(lock)->dep_map
-#define drm_gem_gpuva_assert_lock_held(obj) \
-	lockdep_assert((obj)->gpuva.lock_dep_map ? \
-		       lock_is_held((obj)->gpuva.lock_dep_map) : \
+#define drm_gem_gpuva_assert_lock_held(gpuvm, obj) \
+	lockdep_assert(drm_gpuvm_immediate_mode(gpuvm) ? \
+		       lock_is_held(&(obj)->gpuva.lock.dep_map) : \
 		       dma_resv_held((obj)->resv))
 #else
-#define drm_gem_gpuva_set_lock(obj, lock) do {} while (0)
-#define drm_gem_gpuva_assert_lock_held(obj) do {} while (0)
+#define drm_gem_gpuva_assert_lock_held(gpuvm, obj) do {} while (0)
 #endif
 
 /**
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 4a22b9d848f7b3d5710ca554f5b01556abf95985..598ba376b9430cdd4ab7bacdc15de031a887cf71 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -196,10 +196,20 @@ enum drm_gpuvm_flags {
 	 */
 	DRM_GPUVM_RESV_PROTECTED = BIT(0),
 
+	/**
+	 * @DRM_GPUVM_IMMEDIATE_MODE: use the locking scheme that makes it safe
+	 * to modify the GPUVM during the fence signalling path
+	 *
+	 * When set, gpuva.lock is used to protect gpuva.list in all GEM
+	 * objects associated with this GPUVM. Otherwise, the GEMs dma-resv is
+	 * used.
+	 */
+	DRM_GPUVM_IMMEDIATE_MODE = BIT(1),
+
 	/**
 	 * @DRM_GPUVM_USERBITS: user defined bits
 	 */
-	DRM_GPUVM_USERBITS = BIT(1),
+	DRM_GPUVM_USERBITS = BIT(2),
 };
 
 /**
@@ -369,6 +379,19 @@ drm_gpuvm_resv_protected(struct drm_gpuvm *gpuvm)
 	return gpuvm->flags & DRM_GPUVM_RESV_PROTECTED;
 }
 
+/**
+ * drm_gpuvm_immediate_mode() - indicates whether &DRM_GPUVM_IMMEDIATE_MODE is
+ * set
+ * @gpuvm: the &drm_gpuvm
+ *
+ * Returns: true if &DRM_GPUVM_IMMEDIATE_MODE is set, false otherwise.
+ */
+static inline bool
+drm_gpuvm_immediate_mode(struct drm_gpuvm *gpuvm)
+{
+	return gpuvm->flags & DRM_GPUVM_IMMEDIATE_MODE;
+}
+
 /**
  * drm_gpuvm_resv() - returns the &drm_gpuvm's &dma_resv
  * @gpuvm__: the &drm_gpuvm
@@ -742,9 +765,10 @@ drm_gpuvm_bo_gem_evict(struct drm_gem_object *obj, bool evict)
 {
 	struct drm_gpuvm_bo *vm_bo;
 
-	drm_gem_gpuva_assert_lock_held(obj);
-	drm_gem_for_each_gpuvm_bo(vm_bo, obj)
+	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
+		drm_gem_gpuva_assert_lock_held(vm_bo->vm, obj);
 		drm_gpuvm_bo_evict(vm_bo, evict);
+	}
 }
 
 void drm_gpuvm_bo_extobj_add(struct drm_gpuvm_bo *vm_bo);

-- 
2.51.0.rc2.233.g662b1ed5c5-goog



* Re: [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva
  2025-08-22  9:28 ` [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva Alice Ryhl
@ 2025-08-22  9:52   ` Boris Brezillon
  2025-08-22 10:57     ` Alice Ryhl
  0 siblings, 1 reply; 12+ messages in thread
From: Boris Brezillon @ 2025-08-22  9:52 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Fri, 22 Aug 2025 09:28:24 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> There are two main ways that GPUVM might be used:
> 
> * staged mode, where VM_BIND ioctls update the GPUVM immediately so that
>   the GPUVM reflects the state of the VM *including* staged changes that
>   are not yet applied to the GPU's virtual address space.
> * immediate mode, where the GPUVM state is updated during run_job(),
>   i.e., in the DMA fence signalling critical path, to ensure that the
>   GPUVM and the GPU's virtual address space have the same state at all
>   times.
> 
> Currently, only Panthor uses GPUVM in immediate mode, but the Rust
> drivers Tyr and Nova will also use it in immediate mode, so it is worth
> supporting both staged and immediate mode well in GPUVM. To use
> immediate mode, the GEM's gpuva list must be modified during the fence
> signalling path, which means that it must be protected by a lock that is
> fence signalling safe.
> 
> For this reason, add a mutex to struct drm_gem_object for this purpose.
> Adding it directly to the GEM object both makes it easier to use GPUVM
> in immediate mode and makes it possible to take the gpuva lock from
> core DRM code.
> 
> As a follow-up, another change that should probably be made to support
> immediate mode is a mechanism to postpone cleanup of vm_bo objects, as
> dropping a vm_bo object in the fence signalling path is problematic for
> two reasons:
> 
> * When using DRM_GPUVM_RESV_PROTECTED, you cannot remove the vm_bo from
>   the extobj/evicted lists during the fence signalling path.
> * Dropping a vm_bo could lead to the GEM object getting destroyed.
>   The requirement that GEM object cleanup is fence signalling safe is
>   dubious and likely to be violated in practice.
> 
> Panthor already has its own custom implementation of postponing vm_bo
> cleanup.
> 
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

One minor thing below.

> ---
>  drivers/gpu/drm/drm_gem.c | 2 ++
>  include/drm/drm_gem.h     | 4 +++-
>  2 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 4a89b6acb6af39720451ac24033b89e144d282dc..8d25cc65707d5b44d931beb0207c9d08a3e2de5a 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -187,6 +187,7 @@ void drm_gem_private_object_init(struct drm_device *dev,
>  	kref_init(&obj->refcount);
>  	obj->handle_count = 0;
>  	obj->size = size;
> +	mutex_init(&obj->gpuva.lock);
>  	dma_resv_init(&obj->_resv);
>  	if (!obj->resv)
>  		obj->resv = &obj->_resv;
> @@ -210,6 +211,7 @@ void drm_gem_private_object_fini(struct drm_gem_object *obj)
>  	WARN_ON(obj->dma_buf);
>  
>  	dma_resv_fini(&obj->_resv);
> +	mutex_destroy(&obj->gpuva.lock);
>  }
>  EXPORT_SYMBOL(drm_gem_private_object_fini);
>  
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index d3a7b43e2c637b164eba5af7cc2fc8ef09d4f0a4..5934d8dc267a65aaf62d2d025869221cd110b325 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -403,11 +403,13 @@ struct drm_gem_object {
>  	 * Provides the list of GPU VAs attached to this GEM object.
>  	 *
>  	 * Drivers should lock list accesses with the GEMs &dma_resv lock
> -	 * (&drm_gem_object.resv) or a custom lock if one is provided.
> +	 * (&drm_gem_object.resv) or a custom lock if one is provided. The
> +	 * mutex inside this struct may be used as the custom lock.
>  	 */
>  	struct {
>  		struct list_head list;
>  
> +		struct mutex lock;

Maybe it's time we start moving some bits of the gpuva field docs next
to the fields they describe:

	/**
	 * @gpuva: Fields used by GPUVM to manage mappings pointing to this GEM object.
	 */
	struct {
		/**
		 * @gpuva.list: list of GPU VAs attached to this GEM object.
		 *
		 * Drivers should lock list accesses with the GEMs &dma_resv lock
		 * (&drm_gem_object.resv) or &drm_gem_object.gpuva.lock if the
		 * list is being updated in places where the resv lock can't be
		 * acquired (fence signalling path).
		 */
		struct list_head list;

		/**
		 * @gpuva.lock: lock protecting access to &drm_gem_object.gpuva.list
		 * when the resv lock can't be used.
		 *
		 * Should only be used when the VM is being modified in a fence
		 * signalling path, otherwise you should use &drm_gem_object.resv to
		 * protect accesses to &drm_gem_object.gpuva.list.
		 */
		struct mutex lock;

		...
	};

>  #ifdef CONFIG_LOCKDEP
>  		struct lockdep_map *lock_dep_map;
>  #endif
> 



* Re: [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map
  2025-08-22  9:28 ` [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map Alice Ryhl
@ 2025-08-22  9:55   ` Boris Brezillon
  2025-08-22 10:25   ` Danilo Krummrich
  1 sibling, 0 replies; 12+ messages in thread
From: Boris Brezillon @ 2025-08-22  9:55 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Fri, 22 Aug 2025 09:28:26 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 5934d8dc267a65aaf62d2d025869221cd110b325..85a25bbb387c4590678e4ba243b51acd94b008ed 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -402,17 +402,22 @@ struct drm_gem_object {
>  	 *
>  	 * Provides the list of GPU VAs attached to this GEM object.
>  	 *
> -	 * Drivers should lock list accesses with the GEMs &dma_resv lock
> -	 * (&drm_gem_object.resv) or a custom lock if one is provided. The
> -	 * mutex inside this struct may be used as the custom lock.
> +	 * When DRM_GPUVM_IMMEDIATE_MODE is set, this list is protected by the
> +	 * mutex. Otherwise, the list is protected by the GEMs &dma_resv lock.
> +	 *
> +	 * Note that all entries in this list must agree on whether
> +	 * DRM_GPUVM_IMMEDIATE_MODE is set.
>  	 */
>  	struct {
>  		struct list_head list;
>  
> +		/**
> +		 * @gpuva.lock: Only used when DRM_GPUVM_IMMEDIATE_MODE is set.
> +		 * It should be safe to take this mutex during the fence
> +		 * signalling path, so do not allocate memory while holding
> +		 * this lock.
> +		 */

To follow-up on my comment on patch 1: this makes
drm_gem_object::gpuva::list the only field to not have a proper doc.
This patch is

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

regardless.

Thanks,

Boris


* Re: [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map
  2025-08-22  9:28 ` [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map Alice Ryhl
  2025-08-22  9:55   ` Boris Brezillon
@ 2025-08-22 10:25   ` Danilo Krummrich
  2025-08-22 10:58     ` Alice Ryhl
  2025-08-27 13:35     ` Alice Ryhl
  1 sibling, 2 replies; 12+ messages in thread
From: Danilo Krummrich @ 2025-08-22 10:25 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Fri Aug 22, 2025 at 11:28 AM CEST, Alice Ryhl wrote:
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 4a22b9d848f7b3d5710ca554f5b01556abf95985..598ba376b9430cdd4ab7bacdc15de031a887cf71 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -196,10 +196,20 @@ enum drm_gpuvm_flags {
>  	 */
>  	DRM_GPUVM_RESV_PROTECTED = BIT(0),
>  
> +	/**
> +	 * @DRM_GPUVM_IMMEDIATE_MODE: use the locking scheme that makes it safe
> +	 * to modify the GPUVM during the fence signalling path

I think this isn't entirely true yet or at least can be ambiguous for now,
because it doesn't prevent users from having DRM_GPUVM_RESV_PROTECTED set, which
requires the DMA resv lock to destroy a struct drm_gpuvm_bo, which may happen
from drm_gpuva_unlink().

So, for now, until we have the deferred drop idea in place, it also
requires DRM_GPUVM_RESV_PROTECTED to not be set.

(Eventually, we of course want both to be represented as a single flag, such
that it becomes an either-or.)

I also wouldn't say "makes it safe to", but rather "makes it possible to
safely". We have no control over what the users do with the mutex. :)

NIT: missing period

> +	 *
> +	 * When set, gpuva.lock is used to protect gpuva.list in all GEM
> +	 * objects associated with this GPUVM. Otherwise, the GEMs dma-resv is
> +	 * used.
> +	 */
> +	DRM_GPUVM_IMMEDIATE_MODE = BIT(1),
> +
>  	/**
>  	 * @DRM_GPUVM_USERBITS: user defined bits
>  	 */
> -	DRM_GPUVM_USERBITS = BIT(1),
> +	DRM_GPUVM_USERBITS = BIT(2),
>  };


* Re: [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva
  2025-08-22  9:52   ` Boris Brezillon
@ 2025-08-22 10:57     ` Alice Ryhl
  2025-08-22 11:09       ` Boris Brezillon
  2025-08-22 11:09       ` Danilo Krummrich
  0 siblings, 2 replies; 12+ messages in thread
From: Alice Ryhl @ 2025-08-22 10:57 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Fri, Aug 22, 2025 at 11:52:21AM +0200, Boris Brezillon wrote:
> On Fri, 22 Aug 2025 09:28:24 +0000
> 
> Maybe it's time we start moving some bits of the gpuva field docs next
> to the fields they describe:
> 
> 	/**
> 	 * @gpuva: Fields used by GPUVM to manage mappings pointing to this GEM object.
> 	 */
> 	struct {
> 		/**
> 		 * @gpuva.list: list of GPU VAs attached to this GEM object.
> 		 *
> 		 * Drivers should lock list accesses with the GEMs &dma_resv lock
> 		 * (&drm_gem_object.resv) or &drm_gem_object.gpuva.lock if the
> 		 * list is being updated in places where the resv lock can't be
> 		 * acquired (fence signalling path).
> 		 */
> 		struct list_head list;

This isn't a new issue, but it's somewhat confusing to call it a list of
VAs when it's a list of vm_bos.

> 		/**
> 		 * @gpuva.lock: lock protecting access to &drm_gem_object.gpuva.list
> 		 * when the resv lock can't be used.
> 		 *
> 		 * Should only be used when the VM is being modified in a fence
> 		 * signalling path, otherwise you should use &drm_gem_object.resv to
> 		 * protect accesses to &drm_gem_object.gpuva.list.
> 		 */
> 		struct mutex lock;
> 
> 		...
> 	};
> 

Alice


* Re: [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map
  2025-08-22 10:25   ` Danilo Krummrich
@ 2025-08-22 10:58     ` Alice Ryhl
  2025-08-27 13:35     ` Alice Ryhl
  1 sibling, 0 replies; 12+ messages in thread
From: Alice Ryhl @ 2025-08-22 10:58 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Fri, Aug 22, 2025 at 12:25:34PM +0200, Danilo Krummrich wrote:
> On Fri Aug 22, 2025 at 11:28 AM CEST, Alice Ryhl wrote:
> > diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> > index 4a22b9d848f7b3d5710ca554f5b01556abf95985..598ba376b9430cdd4ab7bacdc15de031a887cf71 100644
> > --- a/include/drm/drm_gpuvm.h
> > +++ b/include/drm/drm_gpuvm.h
> > @@ -196,10 +196,20 @@ enum drm_gpuvm_flags {
> >  	 */
> >  	DRM_GPUVM_RESV_PROTECTED = BIT(0),
> >  
> > +	/**
> > +	 * @DRM_GPUVM_IMMEDIATE_MODE: use the locking scheme that makes it safe
> > +	 * to modify the GPUVM during the fence signalling path
> 
> I think this isn't entirely true yet or at least can be ambiguous for now,
> because it doesn't prevent users from having DRM_GPUVM_RESV_PROTECTED set, which
> requires the DMA resv lock to destroy a struct drm_gpuvm_bo, which may happen
> from drm_gpuva_unlink().
> 
> So, for now, until we have the deferred drop idea in place, it also
> requires DRM_GPUVM_RESV_PROTECTED to not be set.
> 
> (Eventually, we of course want both to be represented as a single flag, such
> that it becomes an either or.)
> 
> I also wouldn't say "makes it safe to", but rather "makes it possible to
> safely". We have no control over what the users do with the mutex. :)
> 
> NIT: missing period

Yeah, it probably makes sense to modify this wording, at least until the
deferred vm_bo work lands.

Alice


* Re: [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva
  2025-08-22 10:57     ` Alice Ryhl
@ 2025-08-22 11:09       ` Boris Brezillon
  2025-08-22 11:09       ` Danilo Krummrich
  1 sibling, 0 replies; 12+ messages in thread
From: Boris Brezillon @ 2025-08-22 11:09 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Fri, 22 Aug 2025 10:57:26 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> On Fri, Aug 22, 2025 at 11:52:21AM +0200, Boris Brezillon wrote:
> > On Fri, 22 Aug 2025 09:28:24 +0000
> > 
> > Maybe it's time we start moving some bits of the gpuva field docs next
> > to the fields they describe:
> > 
> > 	/**
> > 	 * @gpuva: Fields used by GPUVM to manage mappings pointing to this GEM object.
> > 	 */
> > 	struct {
> > 		/**
> > 		 * @gpuva.list: list of GPU VAs attached to this GEM object.
> > 		 *
> > 		 * Drivers should lock list accesses with the GEMs &dma_resv lock
> > 		 * (&drm_gem_object.resv) or &drm_gem_object.gpuva.lock if the
> > 		 * list is being updated in places where the resv lock can't be
> > 		 * acquired (fence signalling path).
> > 		 */
> > 		struct list_head list;  
> 
> This isn't a new issue, but it's somewhat confusing to call it a list of
> VAs when it's a list of vm_bos.

Yep, that's true.

> 
> > 		/**
> > 		 * @gpuva.lock: lock protecting access to &drm_gem_object.gpuva.list
> > 		 * when the resv lock can't be used.
> > 		 *
> > 		 * Should only be used when the VM is being modified in a fence
> > 		 * signalling path, otherwise you should use &drm_gem_object.resv to
> > 		 * protect accesses to &drm_gem_object.gpuva.list.
> > 		 */
> > 		struct mutex lock;
> > 
> > 		...
> > 	};
> >   
> 
> Alice



* Re: [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva
  2025-08-22 10:57     ` Alice Ryhl
  2025-08-22 11:09       ` Boris Brezillon
@ 2025-08-22 11:09       ` Danilo Krummrich
  1 sibling, 0 replies; 12+ messages in thread
From: Danilo Krummrich @ 2025-08-22 11:09 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Boris Brezillon, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Daniel Almeida,
	Steven Price, Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda,
	Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
	Andreas Hindborg, Trevor Gross, dri-devel, linux-kernel,
	rust-for-linux

On 8/22/25 12:57 PM, Alice Ryhl wrote:
> On Fri, Aug 22, 2025 at 11:52:21AM +0200, Boris Brezillon wrote:
>> On Fri, 22 Aug 2025 09:28:24 +0000
>>
>> Maybe it's time we start moving some bits of the gpuva field docs next
>> to the fields they describe:
>>
>> 	/**
>> 	 * @gpuva: Fields used by GPUVM to manage mappings pointing to this GEM object.
>> 	 */
>> 	struct {
>> 		/**
>> 		 * @gpuva.list: list of GPU VAs attached to this GEM object.
>> 		 *
>> 		 * Drivers should lock list accesses with the GEMs &dma_resv lock
>> 		 * (&drm_gem_object.resv) or &drm_gem_object.gpuva.lock if the
>> 		 * list is being updated in places where the resv lock can't be
>> 		 * acquired (fence signalling path).
>> 		 */
>> 		struct list_head list;
> 
> This isn't a new issue, but it's somewhat confusing to call it a list of
> VAs when it's a list of vm_bos.

Yes, I already suggested (I don't remember where, though) changing the name of
the anonymous struct accordingly. I think I forgot to rename it when I
introduced struct drm_gpuvm_bo.

If you want, please add a patch for this in the next version. But it's also fine
to leave it as-is for your series, of course. I can also fix it up. :)


* Re: [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map
  2025-08-22 10:25   ` Danilo Krummrich
  2025-08-22 10:58     ` Alice Ryhl
@ 2025-08-27 13:35     ` Alice Ryhl
  1 sibling, 0 replies; 12+ messages in thread
From: Alice Ryhl @ 2025-08-27 13:35 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Fri, Aug 22, 2025 at 12:25 PM Danilo Krummrich <dakr@kernel.org> wrote:
>
> On Fri Aug 22, 2025 at 11:28 AM CEST, Alice Ryhl wrote:
> > diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> > index 4a22b9d848f7b3d5710ca554f5b01556abf95985..598ba376b9430cdd4ab7bacdc15de031a887cf71 100644
> > --- a/include/drm/drm_gpuvm.h
> > +++ b/include/drm/drm_gpuvm.h
> > @@ -196,10 +196,20 @@ enum drm_gpuvm_flags {
> >        */
> >       DRM_GPUVM_RESV_PROTECTED = BIT(0),
> >
> > +     /**
> > +      * @DRM_GPUVM_IMMEDIATE_MODE: use the locking scheme that makes it safe
> > +      * to modify the GPUVM during the fence signalling path
>
> I think this isn't entirely true yet or at least can be ambiguous for now,
> because it doesn't prevent users from having DRM_GPUVM_RESV_PROTECTED set, which
> requires the DMA resv lock to destroy a struct drm_gpuvm_bo, which may happen
> from drm_gpuva_unlink().
>
> So, for now, until we have the deferred drop idea in place, it also
> requires DRM_GPUVM_RESV_PROTECTED to not be set.
>
> (Eventually, we of course want both to be represented as a single flag, such
> that it becomes an either or.)

I'm going to introduce "designed for" in the wording to address this.
I don't think we need to say that you are required to only pick one of
DRM_GPUVM_RESV_PROTECTED or this flag, since you can use both if you
manually postpone vm_bo cleanup. Let's just not elaborate on that
here.

> I also wouldn't say "makes it safe to", but rather "makes it possible to
> safely". We have no control over what the users do with the mutex. :)
>
> NIT: missing period

I didn't put a period there for consistency: none of the other summary
sentences at the top of the doc-comments end with a period.

Alice


Thread overview: 12+ messages
2025-08-22  9:28 [PATCH v2 0/3] Add mutex to drm_gem_object.gpuva list Alice Ryhl
2025-08-22  9:28 ` [PATCH v2 1/3] drm_gem: add mutex to drm_gem_object.gpuva Alice Ryhl
2025-08-22  9:52   ` Boris Brezillon
2025-08-22 10:57     ` Alice Ryhl
2025-08-22 11:09       ` Boris Brezillon
2025-08-22 11:09       ` Danilo Krummrich
2025-08-22  9:28 ` [PATCH v2 2/3] panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock Alice Ryhl
2025-08-22  9:28 ` [PATCH v2 3/3] gpuvm: remove gem.gpuva.lock_dep_map Alice Ryhl
2025-08-22  9:55   ` Boris Brezillon
2025-08-22 10:25   ` Danilo Krummrich
2025-08-22 10:58     ` Alice Ryhl
2025-08-27 13:35     ` Alice Ryhl
