* [PATCH 0/2] Add mutex to drm_gem_object.gpuva list
@ 2025-08-14 13:53 Alice Ryhl
  2025-08-14 13:53 ` [PATCH 1/2] drm_gem: add mutex to drm_gem_object.gpuva Alice Ryhl
  2025-08-14 13:53 ` [PATCH 2/2] panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock Alice Ryhl
  0 siblings, 2 replies; 7+ messages in thread
From: Alice Ryhl @ 2025-08-14 13:53 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Boris Brezillon, Daniel Almeida,
	Steven Price, Liviu Dudau, Rob Clark, Rob Herring
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, dri-devel,
	linux-kernel, rust-for-linux, Alice Ryhl

See the first patch for motivation.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
Alice Ryhl (2):
      drm_gem: add mutex to drm_gem_object.gpuva
      panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock

 drivers/gpu/drm/drm_gem.c             |  2 ++
 drivers/gpu/drm/panthor/panthor_gem.c |  4 +---
 drivers/gpu/drm/panthor/panthor_gem.h | 12 ------------
 drivers/gpu/drm/panthor/panthor_mmu.c | 16 ++++++++--------
 include/drm/drm_gem.h                 |  4 +++-
 5 files changed, 14 insertions(+), 24 deletions(-)
---
base-commit: 8f5ae30d69d7543eee0d70083daf4de8fe15d585
change-id: 20250814-gpuva-mutex-in-gem-a608ab334e51

Best regards,
-- 
Alice Ryhl <aliceryhl@google.com>



* [PATCH 1/2] drm_gem: add mutex to drm_gem_object.gpuva
  2025-08-14 13:53 [PATCH 0/2] Add mutex to drm_gem_object.gpuva list Alice Ryhl
@ 2025-08-14 13:53 ` Alice Ryhl
  2025-08-14 14:59   ` Danilo Krummrich
  2025-08-14 13:53 ` [PATCH 2/2] panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock Alice Ryhl
  1 sibling, 1 reply; 7+ messages in thread
From: Alice Ryhl @ 2025-08-14 13:53 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Boris Brezillon, Daniel Almeida,
	Steven Price, Liviu Dudau, Rob Clark, Rob Herring
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, dri-devel,
	linux-kernel, rust-for-linux, Alice Ryhl

There are two main ways that GPUVM might be used:

* staged mode, where VM_BIND ioctls update the GPUVM immediately so that
  the GPUVM reflects the state of the VM *including* staged changes that
  are not yet applied to the GPU's virtual address space.
* immediate mode, where the GPUVM state is updated during run_job(),
  i.e., in the DMA fence signalling critical path, to ensure that the
  GPUVM and the GPU's virtual address space have the same state at all
  times.

Currently, only Panthor uses GPUVM in immediate mode, but the Rust
drivers Tyr and Nova will also use GPUVM in immediate mode, so it is
worth supporting both staged and immediate mode well in GPUVM. To use
immediate mode, the GEM's gpuva list must be modified in the fence
signalling path, which means that it must be protected by a lock that is
fence signalling safe.
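
For illustration, an immediate-mode driver ends up doing roughly the
following in its run_job() path (a minimal sketch: drm_gpuva_link() is
the existing GPUVM API and the mutex is the one added here, while obj,
vma and vm_bo are assumed driver-side variables):

    /* In the DMA fence signalling path we must not take resv locks,
     * but a plain mutex that is never held across a fence wait is
     * fence signalling safe.
     */
    mutex_lock(&obj->gpuva.lock);
    drm_gpuva_link(&vma->base, vm_bo);
    mutex_unlock(&obj->gpuva.lock);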

For this reason, a mutex is added to struct drm_gem_object to serve
this purpose. Adding it directly to the GEM object not only makes it
easier to use GPUVM in immediate mode, but also makes it possible to
take the gpuva lock from core drm code.

As a follow-up, another change that should probably be made to support
immediate mode is a mechanism to postpone cleanup of vm_bo objects, as
dropping a vm_bo object in the fence signalling path is problematic for
two reasons:

* When using DRM_GPUVM_RESV_PROTECTED, you cannot remove the vm_bo from
  the extobj/evicted lists during the fence signalling path.
* Dropping a vm_bo could lead to the GEM object getting destroyed.
  The requirement that GEM object cleanup is fence signalling safe is
  dubious and likely to be violated in practice.

Panthor already has its own custom implementation of postponing vm_bo
cleanup.
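
For reference, such a mechanism could look roughly like the sketch
below. The names are assumed: struct my_vm embeds a struct drm_gpuvm as
"base" and keeps an llist_head "deferred_vm_bos" plus a work item, and
struct my_vm_bo embeds a struct drm_gpuvm_bo as "base" plus an
llist_node "defer_node". Panthor's actual implementation differs in
detail.

    /* Instead of dropping the last vm_bo reference in the fence
     * signalling path, queue the vm_bo on an llist and let a worker
     * do the actual drm_gpuvm_bo_put() with the VM's resv lock held.
     */
    static void my_vm_bo_cleanup_work(struct work_struct *work)
    {
            struct my_vm *vm = container_of(work, struct my_vm, cleanup_work);
            struct llist_node *freed = llist_del_all(&vm->deferred_vm_bos);
            struct my_vm_bo *vm_bo, *tmp;

            dma_resv_lock(drm_gpuvm_resv(&vm->base), NULL);
            llist_for_each_entry_safe(vm_bo, tmp, freed, defer_node)
                    drm_gpuvm_bo_put(&vm_bo->base);
            dma_resv_unlock(drm_gpuvm_resv(&vm->base));
    }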

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/gpu/drm/drm_gem.c | 2 ++
 include/drm/drm_gem.h     | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 6a44351e58b7741c358406c8a576b6660b5ca904..24c109ab3fadd5af2e5d9de3fe330b105217a9ce 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -187,6 +187,7 @@ void drm_gem_private_object_init(struct drm_device *dev,
 	kref_init(&obj->refcount);
 	obj->handle_count = 0;
 	obj->size = size;
+	mutex_init(&obj->gpuva.lock);
 	dma_resv_init(&obj->_resv);
 	if (!obj->resv)
 		obj->resv = &obj->_resv;
@@ -1057,6 +1058,7 @@ drm_gem_object_free(struct kref *kref)
 	if (WARN_ON(!obj->funcs->free))
 		return;
 
+	mutex_destroy(&obj->gpuva.lock);
 	obj->funcs->free(obj);
 }
 EXPORT_SYMBOL(drm_gem_object_free);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index d3a7b43e2c637b164eba5af7cc2fc8ef09d4f0a4..5934d8dc267a65aaf62d2d025869221cd110b325 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -403,11 +403,13 @@ struct drm_gem_object {
 	 * Provides the list of GPU VAs attached to this GEM object.
 	 *
 	 * Drivers should lock list accesses with the GEMs &dma_resv lock
-	 * (&drm_gem_object.resv) or a custom lock if one is provided.
+	 * (&drm_gem_object.resv) or a custom lock if one is provided. The
+	 * mutex inside this struct may be used as the custom lock.
 	 */
 	struct {
 		struct list_head list;
 
+		struct mutex lock;
 #ifdef CONFIG_LOCKDEP
 		struct lockdep_map *lock_dep_map;
 #endif

-- 
2.51.0.rc0.215.g125493bb4a-goog



* [PATCH 2/2] panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock
  2025-08-14 13:53 [PATCH 0/2] Add mutex to drm_gem_object.gpuva list Alice Ryhl
  2025-08-14 13:53 ` [PATCH 1/2] drm_gem: add mutex to drm_gem_object.gpuva Alice Ryhl
@ 2025-08-14 13:53 ` Alice Ryhl
  2025-08-21 14:05   ` Boris Brezillon
  1 sibling, 1 reply; 7+ messages in thread
From: Alice Ryhl @ 2025-08-14 13:53 UTC (permalink / raw)
  To: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Boris Brezillon, Daniel Almeida,
	Steven Price, Liviu Dudau, Rob Clark, Rob Herring
  Cc: Miguel Ojeda, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, dri-devel,
	linux-kernel, rust-for-linux, Alice Ryhl

Now that drm_gem_object has a dedicated mutex for the gpuva list that is
intended to be used in cases that must be fence signalling safe, use it
in Panthor.

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 drivers/gpu/drm/panthor/panthor_gem.c |  4 +---
 drivers/gpu/drm/panthor/panthor_gem.h | 12 ------------
 drivers/gpu/drm/panthor/panthor_mmu.c | 16 ++++++++--------
 3 files changed, 9 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index a123bc740ba1460f96882206f598b148b64dc5f6..c654a3377903cf7e067becdb481fb14895a4eaa5 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -74,7 +74,6 @@ static void panthor_gem_free_object(struct drm_gem_object *obj)
 	mutex_destroy(&bo->label.lock);
 
 	drm_gem_free_mmap_offset(&bo->base.base);
-	mutex_destroy(&bo->gpuva_list_lock);
 	drm_gem_shmem_free(&bo->base);
 	drm_gem_object_put(vm_root_gem);
 }
@@ -246,8 +245,7 @@ struct drm_gem_object *panthor_gem_create_object(struct drm_device *ddev, size_t
 
 	obj->base.base.funcs = &panthor_gem_funcs;
 	obj->base.map_wc = !ptdev->coherent;
-	mutex_init(&obj->gpuva_list_lock);
-	drm_gem_gpuva_set_lock(&obj->base.base, &obj->gpuva_list_lock);
+	drm_gem_gpuva_set_lock(&obj->base.base, &obj->base.base.gpuva.lock);
 	mutex_init(&obj->label.lock);
 
 	panthor_gem_debugfs_bo_init(obj);
diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
index 8fc7215e9b900ed162e03aebeae999fda00eeb7a..80c6e24112d0bd0f1561ae4d2224842afb735a59 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.h
+++ b/drivers/gpu/drm/panthor/panthor_gem.h
@@ -79,18 +79,6 @@ struct panthor_gem_object {
 	 */
 	struct drm_gem_object *exclusive_vm_root_gem;
 
-	/**
-	 * @gpuva_list_lock: Custom GPUVA lock.
-	 *
-	 * Used to protect insertion of drm_gpuva elements to the
-	 * drm_gem_object.gpuva.list list.
-	 *
-	 * We can't use the GEM resv for that, because drm_gpuva_link() is
-	 * called in a dma-signaling path, where we're not allowed to take
-	 * resv locks.
-	 */
-	struct mutex gpuva_list_lock;
-
 	/** @flags: Combination of drm_panthor_bo_flags flags. */
 	u32 flags;
 
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 4140f697ba5af5769492d3bbb378e18aec8ade98..49ca416c7c2c5a01ab0513029697ba9c7a35832d 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1074,9 +1074,9 @@ static void panthor_vm_bo_put(struct drm_gpuvm_bo *vm_bo)
 	 * GEM vm_bo list.
 	 */
 	dma_resv_lock(drm_gpuvm_resv(vm), NULL);
-	mutex_lock(&bo->gpuva_list_lock);
+	mutex_lock(&bo->base.base.gpuva.lock);
 	unpin = drm_gpuvm_bo_put(vm_bo);
-	mutex_unlock(&bo->gpuva_list_lock);
+	mutex_unlock(&bo->base.base.gpuva.lock);
 	dma_resv_unlock(drm_gpuvm_resv(vm));
 
 	/* If the vm_bo object was destroyed, release the pin reference that
@@ -1249,9 +1249,9 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	 * calling this function.
 	 */
 	dma_resv_lock(panthor_vm_resv(vm), NULL);
-	mutex_lock(&bo->gpuva_list_lock);
+	mutex_lock(&bo->base.base.gpuva.lock);
 	op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
-	mutex_unlock(&bo->gpuva_list_lock);
+	mutex_unlock(&bo->base.base.gpuva.lock);
 	dma_resv_unlock(panthor_vm_resv(vm));
 
 	/* If the a vm_bo for this <VM,BO> combination exists, it already
@@ -2003,10 +2003,10 @@ static void panthor_vma_link(struct panthor_vm *vm,
 {
 	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
 
-	mutex_lock(&bo->gpuva_list_lock);
+	mutex_lock(&bo->base.base.gpuva.lock);
 	drm_gpuva_link(&vma->base, vm_bo);
 	drm_WARN_ON(&vm->ptdev->base, drm_gpuvm_bo_put(vm_bo));
-	mutex_unlock(&bo->gpuva_list_lock);
+	mutex_unlock(&bo->base.base.gpuva.lock);
 }
 
 static void panthor_vma_unlink(struct panthor_vm *vm,
@@ -2015,9 +2015,9 @@ static void panthor_vma_unlink(struct panthor_vm *vm,
 	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
 	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
 
-	mutex_lock(&bo->gpuva_list_lock);
+	mutex_lock(&bo->base.base.gpuva.lock);
 	drm_gpuva_unlink(&vma->base);
-	mutex_unlock(&bo->gpuva_list_lock);
+	mutex_unlock(&bo->base.base.gpuva.lock);
 
 	/* drm_gpuva_unlink() release the vm_bo, but we manually retained it
 	 * when entering this function, so we can implement deferred VMA

-- 
2.51.0.rc0.215.g125493bb4a-goog



* Re: [PATCH 1/2] drm_gem: add mutex to drm_gem_object.gpuva
  2025-08-14 13:53 ` [PATCH 1/2] drm_gem: add mutex to drm_gem_object.gpuva Alice Ryhl
@ 2025-08-14 14:59   ` Danilo Krummrich
  2025-08-14 15:01     ` Alice Ryhl
  0 siblings, 1 reply; 7+ messages in thread
From: Danilo Krummrich @ 2025-08-14 14:59 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Thu Aug 14, 2025 at 3:53 PM CEST, Alice Ryhl wrote:
> There are two main ways that GPUVM might be used:
>
> * staged mode, where VM_BIND ioctls update the GPUVM immediately so that
>   the GPUVM reflects the state of the VM *including* staged changes that
>   are not yet applied to the GPU's virtual address space.
> * immediate mode, where the GPUVM state is updated during run_job(),
>   i.e., in the DMA fence signalling critical path, to ensure that the
>   GPUVM and the GPU's virtual address space have the same state at all
>   times.
>
> Currently, only Panthor uses GPUVM in immediate mode, but the Rust
> drivers Tyr and Nova will also use GPUVM in immediate mode, so it is
> worth supporting both staged and immediate mode well in GPUVM. To use
> immediate mode, the GEM's gpuva list must be modified in the fence
> signalling path, which means that it must be protected by a lock that is
> fence signalling safe.
>
> For this reason, a mutex is added to struct drm_gem_object to serve
> this purpose. Adding it directly to the GEM object not only makes it
> easier to use GPUVM in immediate mode, but also makes it possible to
> take the gpuva lock from core drm code.
>
> As a follow-up, another change that should probably be made to support
> immediate mode is a mechanism to postpone cleanup of vm_bo objects, as
> dropping a vm_bo object in the fence signalling path is problematic for
> two reasons:
>
> * When using DRM_GPUVM_RESV_PROTECTED, you cannot remove the vm_bo from
>   the extobj/evicted lists during the fence signalling path.
> * Dropping a vm_bo could lead to the GEM object getting destroyed.
>   The requirement that GEM object cleanup is fence signalling safe is
>   dubious and likely to be violated in practice.
>
> Panthor already has its own custom implementation of postponing vm_bo
> cleanup.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  drivers/gpu/drm/drm_gem.c | 2 ++
>  include/drm/drm_gem.h     | 4 +++-
>  2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 6a44351e58b7741c358406c8a576b6660b5ca904..24c109ab3fadd5af2e5d9de3fe330b105217a9ce 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -187,6 +187,7 @@ void drm_gem_private_object_init(struct drm_device *dev,
>  	kref_init(&obj->refcount);
>  	obj->handle_count = 0;
>  	obj->size = size;
> +	mutex_init(&obj->gpuva.lock);
>  	dma_resv_init(&obj->_resv);
>  	if (!obj->resv)
>  		obj->resv = &obj->_resv;
> @@ -1057,6 +1058,7 @@ drm_gem_object_free(struct kref *kref)
>  	if (WARN_ON(!obj->funcs->free))
>  		return;
>  
> +	mutex_destroy(&obj->gpuva.lock);
>  	obj->funcs->free(obj);

I really can't think of a valid case where we need to access this mutex from the
GEM's free() callback, yet it probably doesn't hurt to mention it in the
documentation of struct drm_gem_object_funcs.
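
Something along these lines in the kernel-doc of the free() callback
would do (the wording is only a suggestion):

    /**
     * @free:
     *
     * ...
     *
     * Note that &drm_gem_object.gpuva.lock is destroyed before this
     * callback runs and must not be used from it.
     */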

>  }
>  EXPORT_SYMBOL(drm_gem_object_free);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index d3a7b43e2c637b164eba5af7cc2fc8ef09d4f0a4..5934d8dc267a65aaf62d2d025869221cd110b325 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -403,11 +403,13 @@ struct drm_gem_object {
>  	 * Provides the list of GPU VAs attached to this GEM object.
>  	 *
>  	 * Drivers should lock list accesses with the GEMs &dma_resv lock
> -	 * (&drm_gem_object.resv) or a custom lock if one is provided.
> +	 * (&drm_gem_object.resv) or a custom lock if one is provided. The
> +	 * mutex inside this struct may be used as the custom lock.
>  	 */
>  	struct {
>  		struct list_head list;
>  
> +		struct mutex lock;
>  #ifdef CONFIG_LOCKDEP
>  		struct lockdep_map *lock_dep_map;
>  #endif

We should remove this and the corresponding functions (i.e.
drm_gem_gpuva_set_lock(), drm_gem_gpuva_assert_lock_held()) in a
subsequent patch and let GPUVM assert this mutex directly rather than
the lockdep_map.
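
E.g. GPUVM could then assert the mutex directly (a sketch, ignoring the
DRM_GPUVM_RESV_PROTECTED case where the resv lock protects the list):

    /* in drm_gpuva_link()/drm_gpuva_unlink(), replacing
     * drm_gem_gpuva_assert_lock_held(obj):
     */
    lockdep_assert_held(&obj->gpuva.lock);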


* Re: [PATCH 1/2] drm_gem: add mutex to drm_gem_object.gpuva
  2025-08-14 14:59   ` Danilo Krummrich
@ 2025-08-14 15:01     ` Alice Ryhl
  2025-08-14 15:18       ` Danilo Krummrich
  0 siblings, 1 reply; 7+ messages in thread
From: Alice Ryhl @ 2025-08-14 15:01 UTC (permalink / raw)
  To: Danilo Krummrich
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Thu, Aug 14, 2025 at 4:59 PM Danilo Krummrich <dakr@kernel.org> wrote:
>
> On Thu Aug 14, 2025 at 3:53 PM CEST, Alice Ryhl wrote:
> > There are two main ways that GPUVM might be used:
> >
> > * staged mode, where VM_BIND ioctls update the GPUVM immediately so that
> >   the GPUVM reflects the state of the VM *including* staged changes that
> >   are not yet applied to the GPU's virtual address space.
> > * immediate mode, where the GPUVM state is updated during run_job(),
> >   i.e., in the DMA fence signalling critical path, to ensure that the
> >   GPUVM and the GPU's virtual address space have the same state at all
> >   times.
> >
> > Currently, only Panthor uses GPUVM in immediate mode, but the Rust
> > drivers Tyr and Nova will also use GPUVM in immediate mode, so it is
> > worth supporting both staged and immediate mode well in GPUVM. To use
> > immediate mode, the GEM's gpuva list must be modified in the fence
> > signalling path, which means that it must be protected by a lock that is
> > fence signalling safe.
> >
> > For this reason, a mutex is added to struct drm_gem_object to serve
> > this purpose. Adding it directly to the GEM object not only makes it
> > easier to use GPUVM in immediate mode, but also makes it possible to
> > take the gpuva lock from core drm code.
> >
> > As a follow-up, another change that should probably be made to support
> > immediate mode is a mechanism to postpone cleanup of vm_bo objects, as
> > dropping a vm_bo object in the fence signalling path is problematic for
> > two reasons:
> >
> > * When using DRM_GPUVM_RESV_PROTECTED, you cannot remove the vm_bo from
> >   the extobj/evicted lists during the fence signalling path.
> > * Dropping a vm_bo could lead to the GEM object getting destroyed.
> >   The requirement that GEM object cleanup is fence signalling safe is
> >   dubious and likely to be violated in practice.
> >
> > Panthor already has its own custom implementation of postponing vm_bo
> > cleanup.
> >
> > Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> > ---
> >  drivers/gpu/drm/drm_gem.c | 2 ++
> >  include/drm/drm_gem.h     | 4 +++-
> >  2 files changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> > index 6a44351e58b7741c358406c8a576b6660b5ca904..24c109ab3fadd5af2e5d9de3fe330b105217a9ce 100644
> > --- a/drivers/gpu/drm/drm_gem.c
> > +++ b/drivers/gpu/drm/drm_gem.c
> > @@ -187,6 +187,7 @@ void drm_gem_private_object_init(struct drm_device *dev,
> >       kref_init(&obj->refcount);
> >       obj->handle_count = 0;
> >       obj->size = size;
> > +     mutex_init(&obj->gpuva.lock);
> >       dma_resv_init(&obj->_resv);
> >       if (!obj->resv)
> >               obj->resv = &obj->_resv;
> > @@ -1057,6 +1058,7 @@ drm_gem_object_free(struct kref *kref)
> >       if (WARN_ON(!obj->funcs->free))
> >               return;
> >
> > +     mutex_destroy(&obj->gpuva.lock);
> >       obj->funcs->free(obj);
>
> I really can't think of a valid case where we need to access this mutex from the
> GEM's free() callback, yet it probably doesn't hurt to mention it in the
> documentation of struct drm_gem_object_funcs.

I had wanted to move it below free(), but we can't do that since
free() also performs the kfree() call.
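
That is, the ordering is forced to be (illustrative comments):

    mutex_destroy(&obj->gpuva.lock);
    obj->funcs->free(obj);  /* typically ends in kfree(obj) */
    /* obj is freed memory here, so a mutex_destroy() at this
     * point would be a use-after-free.
     */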

> >  }
> >  EXPORT_SYMBOL(drm_gem_object_free);
> > diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> > index d3a7b43e2c637b164eba5af7cc2fc8ef09d4f0a4..5934d8dc267a65aaf62d2d025869221cd110b325 100644
> > --- a/include/drm/drm_gem.h
> > +++ b/include/drm/drm_gem.h
> > @@ -403,11 +403,13 @@ struct drm_gem_object {
> >        * Provides the list of GPU VAs attached to this GEM object.
> >        *
> >        * Drivers should lock list accesses with the GEMs &dma_resv lock
> > -      * (&drm_gem_object.resv) or a custom lock if one is provided.
> > +      * (&drm_gem_object.resv) or a custom lock if one is provided. The
> > +      * mutex inside this struct may be used as the custom lock.
> >        */
> >       struct {
> >               struct list_head list;
> >
> > +             struct mutex lock;
> >  #ifdef CONFIG_LOCKDEP
> >               struct lockdep_map *lock_dep_map;
> >  #endif
>
> We should remove this and the corresponding functions (i.e.
> drm_gem_gpuva_set_lock(), drm_gem_gpuva_assert_lock_held()) in a subsequent
> patch and let GPUVM assert for this mutex directly rather than for the
> lockdep_map.

Agreed.

Alice


* Re: [PATCH 1/2] drm_gem: add mutex to drm_gem_object.gpuva
  2025-08-14 15:01     ` Alice Ryhl
@ 2025-08-14 15:18       ` Danilo Krummrich
  0 siblings, 0 replies; 7+ messages in thread
From: Danilo Krummrich @ 2025-08-14 15:18 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Boris Brezillon, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Thu Aug 14, 2025 at 5:01 PM CEST, Alice Ryhl wrote:
> I had wanted to move it below free(), but we can't do that since
> free() also performs the kfree() call.

Actually, I think it belongs in drm_gem_private_object_fini(), where we
also call dma_resv_fini() for the GEM-owned dma-resv lock.

Any cleanup should be done where a driver calls drm_gem_object_release().
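
I.e. roughly (a sketch of the proposed move):

    void drm_gem_private_object_fini(struct drm_gem_object *obj)
    {
            WARN_ON(obj->dma_buf);

            dma_resv_fini(&obj->_resv);
            mutex_destroy(&obj->gpuva.lock);
    }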


* Re: [PATCH 2/2] panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock
  2025-08-14 13:53 ` [PATCH 2/2] panthor: use drm_gem_object.gpuva.lock instead of gpuva_list_lock Alice Ryhl
@ 2025-08-21 14:05   ` Boris Brezillon
  0 siblings, 0 replies; 7+ messages in thread
From: Boris Brezillon @ 2025-08-21 14:05 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Danilo Krummrich, Daniel Almeida, Steven Price,
	Liviu Dudau, Rob Clark, Rob Herring, Miguel Ojeda, Boqun Feng,
	Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, dri-devel, linux-kernel, rust-for-linux

On Thu, 14 Aug 2025 13:53:15 +0000
Alice Ryhl <aliceryhl@google.com> wrote:

> Now that drm_gem_object has a dedicated mutex for the gpuva list that is
> intended to be used in cases that must be fence signalling safe, use it
> in Panthor.
> 
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  drivers/gpu/drm/panthor/panthor_gem.c |  4 +---
>  drivers/gpu/drm/panthor/panthor_gem.h | 12 ------------
>  drivers/gpu/drm/panthor/panthor_mmu.c | 16 ++++++++--------
>  3 files changed, 9 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> index a123bc740ba1460f96882206f598b148b64dc5f6..c654a3377903cf7e067becdb481fb14895a4eaa5 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.c
> +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> @@ -74,7 +74,6 @@ static void panthor_gem_free_object(struct drm_gem_object *obj)
>  	mutex_destroy(&bo->label.lock);
>  
>  	drm_gem_free_mmap_offset(&bo->base.base);
> -	mutex_destroy(&bo->gpuva_list_lock);
>  	drm_gem_shmem_free(&bo->base);
>  	drm_gem_object_put(vm_root_gem);
>  }
> @@ -246,8 +245,7 @@ struct drm_gem_object *panthor_gem_create_object(struct drm_device *ddev, size_t
>  
>  	obj->base.base.funcs = &panthor_gem_funcs;
>  	obj->base.map_wc = !ptdev->coherent;
> -	mutex_init(&obj->gpuva_list_lock);
> -	drm_gem_gpuva_set_lock(&obj->base.base, &obj->gpuva_list_lock);
> +	drm_gem_gpuva_set_lock(&obj->base.base, &obj->base.base.gpuva.lock);

I guess this will go away in the previous patch if you follow Danilo's
advice to get rid of drm_gem_gpuva_set_lock(). The rest looks good to
me, so feel free to add

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

on the next version.

>  	mutex_init(&obj->label.lock);
>  
>  	panthor_gem_debugfs_bo_init(obj);
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> index 8fc7215e9b900ed162e03aebeae999fda00eeb7a..80c6e24112d0bd0f1561ae4d2224842afb735a59 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.h
> +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> @@ -79,18 +79,6 @@ struct panthor_gem_object {
>  	 */
>  	struct drm_gem_object *exclusive_vm_root_gem;
>  
> -	/**
> -	 * @gpuva_list_lock: Custom GPUVA lock.
> -	 *
> -	 * Used to protect insertion of drm_gpuva elements to the
> -	 * drm_gem_object.gpuva.list list.
> -	 *
> -	 * We can't use the GEM resv for that, because drm_gpuva_link() is
> -	 * called in a dma-signaling path, where we're not allowed to take
> -	 * resv locks.
> -	 */
> -	struct mutex gpuva_list_lock;
> -
>  	/** @flags: Combination of drm_panthor_bo_flags flags. */
>  	u32 flags;
>  
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 4140f697ba5af5769492d3bbb378e18aec8ade98..49ca416c7c2c5a01ab0513029697ba9c7a35832d 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -1074,9 +1074,9 @@ static void panthor_vm_bo_put(struct drm_gpuvm_bo *vm_bo)
>  	 * GEM vm_bo list.
>  	 */
>  	dma_resv_lock(drm_gpuvm_resv(vm), NULL);
> -	mutex_lock(&bo->gpuva_list_lock);
> +	mutex_lock(&bo->base.base.gpuva.lock);
>  	unpin = drm_gpuvm_bo_put(vm_bo);
> -	mutex_unlock(&bo->gpuva_list_lock);
> +	mutex_unlock(&bo->base.base.gpuva.lock);
>  	dma_resv_unlock(drm_gpuvm_resv(vm));
>  
>  	/* If the vm_bo object was destroyed, release the pin reference that
> @@ -1249,9 +1249,9 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	 * calling this function.
>  	 */
>  	dma_resv_lock(panthor_vm_resv(vm), NULL);
> -	mutex_lock(&bo->gpuva_list_lock);
> +	mutex_lock(&bo->base.base.gpuva.lock);
>  	op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
> -	mutex_unlock(&bo->gpuva_list_lock);
> +	mutex_unlock(&bo->base.base.gpuva.lock);
>  	dma_resv_unlock(panthor_vm_resv(vm));
>  
>  	/* If the a vm_bo for this <VM,BO> combination exists, it already
> @@ -2003,10 +2003,10 @@ static void panthor_vma_link(struct panthor_vm *vm,
>  {
>  	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
>  
> -	mutex_lock(&bo->gpuva_list_lock);
> +	mutex_lock(&bo->base.base.gpuva.lock);
>  	drm_gpuva_link(&vma->base, vm_bo);
>  	drm_WARN_ON(&vm->ptdev->base, drm_gpuvm_bo_put(vm_bo));
> -	mutex_unlock(&bo->gpuva_list_lock);
> +	mutex_unlock(&bo->base.base.gpuva.lock);
>  }
>  
>  static void panthor_vma_unlink(struct panthor_vm *vm,
> @@ -2015,9 +2015,9 @@ static void panthor_vma_unlink(struct panthor_vm *vm,
>  	struct panthor_gem_object *bo = to_panthor_bo(vma->base.gem.obj);
>  	struct drm_gpuvm_bo *vm_bo = drm_gpuvm_bo_get(vma->base.vm_bo);
>  
> -	mutex_lock(&bo->gpuva_list_lock);
> +	mutex_lock(&bo->base.base.gpuva.lock);
>  	drm_gpuva_unlink(&vma->base);
> -	mutex_unlock(&bo->gpuva_list_lock);
> +	mutex_unlock(&bo->base.base.gpuva.lock);
>  
>  	/* drm_gpuva_unlink() release the vm_bo, but we manually retained it
>  	 * when entering this function, so we can implement deferred VMA
> 


