* [PATCH v2 1/2] drm/panthor: Replace sleep locks with spinlocks in fdinfo path
@ 2025-02-14 20:55 Adrián Larumbe
2025-02-14 20:55 ` [PATCH v2 2/2] drm/panthor: Avoid sleep locking in the internal BO size path Adrián Larumbe
2025-02-15 9:28 ` [PATCH v2 1/2] drm/panthor: Replace sleep locks with spinlocks in fdinfo path Boris Brezillon
0 siblings, 2 replies; 6+ messages in thread
From: Adrián Larumbe @ 2025-02-14 20:55 UTC (permalink / raw)
To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Cc: kernel, Adrián Larumbe, dri-devel, linux-kernel
Commit 0590c94c3596 ("drm/panthor: Fix race condition when gathering fdinfo
group samples") introduced an xarray lock to deal with potential
use-after-free errors when accessing groups fdinfo figures. However, this
toggles the kernel's atomic context status, so the next nested mutex lock
will raise a warning when the kernel is compiled with mutex debug options:
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_MUTEXES=y
Replace Panthor's group fdinfo data mutex with a guarded spinlock.
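
As an illustration of the problem and of the shape of the fix, here is a minimal
sketch (the function bodies below are illustrative, not the exact driver code):
xa_lock() takes a spinlock, so everything up to xa_unlock() runs in atomic
context and must not sleep.

/* Sketch: gathering per-group samples while holding the xarray lock. */
static void gather_samples_broken(struct panthor_group_pool *gpool,
				  struct panthor_file *pfile)
{
	struct panthor_group *group;
	unsigned long i;

	xa_lock(&gpool->xa);
	xa_for_each(&gpool->xa, i, group) {
		/*
		 * BAD: mutex_lock() may sleep; with mutex debugging enabled
		 * (CONFIG_DEBUG_MUTEXES and friends) this raises a "sleeping
		 * function called from invalid context" warning, because we
		 * are in atomic context here.
		 */
		mutex_lock(&group->fdinfo.lock);
		pfile->stats.cycles += group->fdinfo.data.cycles;
		mutex_unlock(&group->fdinfo.lock);
	}
	xa_unlock(&gpool->xa);
}

/* OK: a spinlock may nest below the xarray spinlock. */
static void gather_samples_fixed(struct panthor_group_pool *gpool,
				 struct panthor_file *pfile)
{
	struct panthor_group *group;
	unsigned long i;

	xa_lock(&gpool->xa);
	xa_for_each(&gpool->xa, i, group) {
		spin_lock(&group->fdinfo.lock);
		pfile->stats.cycles += group->fdinfo.data.cycles;
		spin_unlock(&group->fdinfo.lock);
	}
	xa_unlock(&gpool->xa);
}
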
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Fixes: 0590c94c3596 ("drm/panthor: Fix race condition when gathering fdinfo group samples")
---
drivers/gpu/drm/panthor/panthor_sched.c | 26 ++++++++++++-------------
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index 1a276db095ff..4d31d1967716 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -9,6 +9,7 @@
#include <drm/panthor_drm.h>
#include <linux/build_bug.h>
+#include <linux/cleanup.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
@@ -631,10 +632,10 @@ struct panthor_group {
struct panthor_gpu_usage data;
/**
- * @lock: Mutex to govern concurrent access from drm file's fdinfo callback
- * and job post-completion processing function
+ * @fdinfo.lock: Spinlock to govern concurrent access from drm file's fdinfo
+ * callback and job post-completion processing function
*/
- struct mutex lock;
+ spinlock_t lock;
/** @fdinfo.kbo_sizes: Aggregate size of private kernel BO's held by the group. */
size_t kbo_sizes;
@@ -910,8 +911,6 @@ static void group_release_work(struct work_struct *work)
release_work);
u32 i;
- mutex_destroy(&group->fdinfo.lock);
-
for (i = 0; i < group->queue_count; i++)
group_free_queue(group, group->queues[i]);
@@ -2861,12 +2860,12 @@ static void update_fdinfo_stats(struct panthor_job *job)
struct panthor_job_profiling_data *slots = queue->profiling.slots->kmap;
struct panthor_job_profiling_data *data = &slots[job->profiling.slot];
- mutex_lock(&group->fdinfo.lock);
- if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_CYCLES)
- fdinfo->cycles += data->cycles.after - data->cycles.before;
- if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP)
- fdinfo->time += data->time.after - data->time.before;
- mutex_unlock(&group->fdinfo.lock);
+ scoped_guard(spinlock, &group->fdinfo.lock) {
+ if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_CYCLES)
+ fdinfo->cycles += data->cycles.after - data->cycles.before;
+ if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP)
+ fdinfo->time += data->time.after - data->time.before;
+ }
}
void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
@@ -2880,12 +2879,11 @@ void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
xa_lock(&gpool->xa);
xa_for_each(&gpool->xa, i, group) {
- mutex_lock(&group->fdinfo.lock);
+ guard(spinlock)(&group->fdinfo.lock);
pfile->stats.cycles += group->fdinfo.data.cycles;
pfile->stats.time += group->fdinfo.data.time;
group->fdinfo.data.cycles = 0;
group->fdinfo.data.time = 0;
- mutex_unlock(&group->fdinfo.lock);
}
xa_unlock(&gpool->xa);
}
@@ -3537,7 +3535,7 @@ int panthor_group_create(struct panthor_file *pfile,
mutex_unlock(&sched->reset.lock);
add_group_kbo_sizes(group->ptdev, group);
- mutex_init(&group->fdinfo.lock);
+ spin_lock_init(&group->fdinfo.lock);
return gid;
--
2.47.1
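
The conversion above relies on the lock guards from <linux/cleanup.h>. Roughly
speaking, and glossing over the underlying __cleanup() machinery, the two
constructs used in the patch behave like the following open-coded sequences
(a sketch under that assumption, not the actual macro expansion):

/* scoped_guard(spinlock, &group->fdinfo.lock) { ... } is roughly: */
static void update_stats_sketch(struct panthor_group *group, u64 cycle_delta)
{
	spin_lock(&group->fdinfo.lock);
	group->fdinfo.data.cycles += cycle_delta;
	spin_unlock(&group->fdinfo.lock);	/* released when the block ends */
}

/*
 * guard(spinlock)(&group->fdinfo.lock); is roughly a spin_lock() paired with
 * an automatic spin_unlock() when the enclosing scope is left, on every exit
 * path (in the patch, each iteration of the xa_for_each() loop body).
 */
static void gather_one_group_sketch(struct panthor_group *group,
				    struct panthor_file *pfile)
{
	spin_lock(&group->fdinfo.lock);
	pfile->stats.cycles += group->fdinfo.data.cycles;
	group->fdinfo.data.cycles = 0;
	spin_unlock(&group->fdinfo.lock);
}

The guard form removes the need for explicit unlock calls and for
mutex_destroy() in the teardown path, which is why group_release_work() loses
a line in the diff above.
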
^ permalink raw reply related [flat|nested] 6+ messages in thread

* [PATCH v2 2/2] drm/panthor: Avoid sleep locking in the internal BO size path
2025-02-14 20:55 [PATCH v2 1/2] drm/panthor: Replace sleep locks with spinlocks in fdinfo path Adrián Larumbe
@ 2025-02-14 20:55 ` Adrián Larumbe
2025-02-15 9:44 ` Boris Brezillon
2025-02-15 9:28 ` [PATCH v2 1/2] drm/panthor: Replace sleep locks with spinlocks in fdinfo path Boris Brezillon
1 sibling, 1 reply; 6+ messages in thread
From: Adrián Larumbe @ 2025-02-14 20:55 UTC (permalink / raw)
To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Adrián Larumbe, Mihail Atanassov
Cc: kernel, dri-devel, linux-kernel
Commit 434e5ca5b5d7 ("drm/panthor: Expose size of driver internal BO's over
fdinfo") locks the VMS xarray, to avoid UAF errors when the same VM is
being concurrently destroyed by another thread. However, that puts the
current thread in atomic context, which means taking the VMS' heap locks
will trigger a warning as the thread is no longer allowed to sleep.
Because in this case replacing the heap mutex with a spinlock isn't
feasible, the fdinfo handler no longer traverses the list of heaps for
every single VM associated with an open DRM file. Instead, when a new heap
chunk is allocated, its size is accumulated into a VM-wide tally, which
also makes the atomic context code path somewhat faster.
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Fixes: 3e2c8c718567 ("drm/panthor: Expose size of driver internal BO's over fdinfo")
---
drivers/gpu/drm/panthor/panthor_heap.c | 38 ++++++++------------------
drivers/gpu/drm/panthor/panthor_heap.h | 2 --
drivers/gpu/drm/panthor/panthor_mmu.c | 23 +++++++++++-----
drivers/gpu/drm/panthor/panthor_mmu.h | 1 +
4 files changed, 28 insertions(+), 36 deletions(-)
diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c
index db0285ce5812..e5e5953e4f87 100644
--- a/drivers/gpu/drm/panthor/panthor_heap.c
+++ b/drivers/gpu/drm/panthor/panthor_heap.c
@@ -127,6 +127,8 @@ static void panthor_free_heap_chunk(struct panthor_vm *vm,
heap->chunk_count--;
mutex_unlock(&heap->lock);
+ panthor_vm_heaps_size_accumulate(vm, -heap->chunk_size);
+
panthor_kernel_bo_destroy(chunk->bo);
kfree(chunk);
}
@@ -180,6 +182,8 @@ static int panthor_alloc_heap_chunk(struct panthor_device *ptdev,
heap->chunk_count++;
mutex_unlock(&heap->lock);
+ panthor_vm_heaps_size_accumulate(vm, heap->chunk_size);
+
return 0;
err_destroy_bo:
@@ -389,6 +393,7 @@ int panthor_heap_return_chunk(struct panthor_heap_pool *pool,
removed = chunk;
list_del(&chunk->node);
heap->chunk_count--;
+ panthor_vm_heaps_size_accumulate(chunk->bo->vm, -heap->chunk_size);
break;
}
}
@@ -560,6 +565,8 @@ panthor_heap_pool_create(struct panthor_device *ptdev, struct panthor_vm *vm)
if (ret)
goto err_destroy_pool;
+ panthor_vm_heaps_size_accumulate(vm, pool->gpu_contexts->obj->size);
+
return pool;
err_destroy_pool:
@@ -594,8 +601,11 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
xa_for_each(&pool->xa, i, heap)
drm_WARN_ON(&pool->ptdev->base, panthor_heap_destroy_locked(pool, i));
- if (!IS_ERR_OR_NULL(pool->gpu_contexts))
+ if (!IS_ERR_OR_NULL(pool->gpu_contexts)) {
+ panthor_vm_heaps_size_accumulate(pool->gpu_contexts->vm,
+ -pool->gpu_contexts->obj->size);
panthor_kernel_bo_destroy(pool->gpu_contexts);
+ }
/* Reflects the fact the pool has been destroyed. */
pool->vm = NULL;
@@ -603,29 +613,3 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
panthor_heap_pool_put(pool);
}
-
-/**
- * panthor_heap_pool_size() - Calculate size of all chunks across all heaps in a pool
- * @pool: Pool whose total chunk size to calculate.
- *
- * This function adds the size of all heap chunks across all heaps in the
- * argument pool. It also adds the size of the gpu contexts kernel bo.
- * It is meant to be used by fdinfo for displaying the size of internal
- * driver BO's that aren't exposed to userspace through a GEM handle.
- *
- */
-size_t panthor_heap_pool_size(struct panthor_heap_pool *pool)
-{
- struct panthor_heap *heap;
- unsigned long i;
- size_t size = 0;
-
- down_read(&pool->lock);
- xa_for_each(&pool->xa, i, heap)
- size += heap->chunk_size * heap->chunk_count;
- up_read(&pool->lock);
-
- size += pool->gpu_contexts->obj->size;
-
- return size;
-}
diff --git a/drivers/gpu/drm/panthor/panthor_heap.h b/drivers/gpu/drm/panthor/panthor_heap.h
index e3358d4e8edb..25a5f2bba445 100644
--- a/drivers/gpu/drm/panthor/panthor_heap.h
+++ b/drivers/gpu/drm/panthor/panthor_heap.h
@@ -27,8 +27,6 @@ struct panthor_heap_pool *
panthor_heap_pool_get(struct panthor_heap_pool *pool);
void panthor_heap_pool_put(struct panthor_heap_pool *pool);
-size_t panthor_heap_pool_size(struct panthor_heap_pool *pool);
-
int panthor_heap_grow(struct panthor_heap_pool *pool,
u64 heap_gpu_va,
u32 renderpasses_in_flight,
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 8c6fc587ddc3..9e48b34fcf80 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -347,6 +347,14 @@ struct panthor_vm {
struct mutex lock;
} heaps;
+ /**
+ * @fdinfo: VM-wide fdinfo fields.
+ */
+ struct {
+ /** @fdinfo.heaps_size: Size of all chunks across all heaps in the pool. */
+ atomic_t heaps_size;
+ } fdinfo;
+
/** @node: Used to insert the VM in the panthor_mmu::vm::list. */
struct list_head node;
@@ -1541,6 +1549,8 @@ static void panthor_vm_destroy(struct panthor_vm *vm)
vm->heaps.pool = NULL;
mutex_unlock(&vm->heaps.lock);
+ atomic_set(&vm->fdinfo.heaps_size, 0);
+
drm_WARN_ON(&vm->ptdev->base,
panthor_vm_unmap_range(vm, vm->base.mm_start, vm->base.mm_range));
panthor_vm_put(vm);
@@ -1963,13 +1973,7 @@ void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats
xa_lock(&pfile->vms->xa);
xa_for_each(&pfile->vms->xa, i, vm) {
- size_t size = 0;
-
- mutex_lock(&vm->heaps.lock);
- if (vm->heaps.pool)
- size = panthor_heap_pool_size(vm->heaps.pool);
- mutex_unlock(&vm->heaps.lock);
-
+ size_t size = atomic_read(&vm->fdinfo.heaps_size);
stats->resident += size;
if (vm->as.id >= 0)
stats->active += size;
@@ -1977,6 +1981,11 @@ void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats
xa_unlock(&pfile->vms->xa);
}
+void panthor_vm_heaps_size_accumulate(struct panthor_vm *vm, ssize_t acc)
+{
+ atomic_add(acc, &vm->fdinfo.heaps_size);
+}
+
static u64 mair_to_memattr(u64 mair, bool coherent)
{
u64 memattr = 0;
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.h b/drivers/gpu/drm/panthor/panthor_mmu.h
index fc274637114e..29030384eafe 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.h
+++ b/drivers/gpu/drm/panthor/panthor_mmu.h
@@ -39,6 +39,7 @@ struct panthor_heap_pool *
panthor_vm_get_heap_pool(struct panthor_vm *vm, bool create);
void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats *stats);
+void panthor_vm_heaps_size_accumulate(struct panthor_vm *vm, ssize_t acc);
struct panthor_vm *panthor_vm_get(struct panthor_vm *vm);
void panthor_vm_put(struct panthor_vm *vm);
--
2.47.1
^ permalink raw reply related [flat|nested] 6+ messages in thread
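
The bookkeeping scheme introduced by this second patch boils down to a single
atomic counter per VM: the slow paths that create or destroy heap chunks adjust
it, and the fdinfo path only performs a lockless read, which is safe while the
VMS xarray spinlock is held. A simplified sketch of the pattern (structure and
function names are trimmed down from the actual driver):

struct vm_fdinfo_sketch {
	atomic_t heaps_size;	/* bytes of heap memory owned by the VM */
};

/* Called from the chunk allocation/free slow paths (these may sleep). */
static void sketch_chunk_allocated(struct vm_fdinfo_sketch *fdinfo, u32 chunk_size)
{
	atomic_add(chunk_size, &fdinfo->heaps_size);
}

static void sketch_chunk_freed(struct vm_fdinfo_sketch *fdinfo, u32 chunk_size)
{
	atomic_sub(chunk_size, &fdinfo->heaps_size);
}

/* Called from the fdinfo path, possibly in atomic context: no lock needed. */
static size_t sketch_heaps_size(struct vm_fdinfo_sketch *fdinfo)
{
	return atomic_read(&fdinfo->heaps_size);
}
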

* Re: [PATCH v2 2/2] drm/panthor: Avoid sleep locking in the internal BO size path
2025-02-14 20:55 ` [PATCH v2 2/2] drm/panthor: Avoid sleep locking in the internal BO size path Adrián Larumbe
@ 2025-02-15 9:44 ` Boris Brezillon
2025-02-20 20:26 ` Adrián Larumbe
0 siblings, 1 reply; 6+ messages in thread
From: Boris Brezillon @ 2025-02-15 9:44 UTC (permalink / raw)
To: Adrián Larumbe
Cc: Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Mihail Atanassov,
kernel, dri-devel, linux-kernel

On Fri, 14 Feb 2025 20:55:21 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> Commit 434e5ca5b5d7 ("drm/panthor: Expose size of driver internal BO's over
> fdinfo") locks the VMS xarray, to avoid UAF errors when the same VM is
> being concurrently destroyed by another thread. However, that puts the
> current thread in atomic context, which means taking the VMS' heap locks
> will trigger a warning as the thread is no longer allowed to sleep.
>
> Because in this case replacing the heap mutex with a spinlock isn't
> feasible, the fdinfo handler no longer traverses the list of heaps for
> every single VM associated with an open DRM file. Instead, when a new heap
> chunk is allocated, its size is accumulated into a VM-wide tally, which
> also makes the atomic context code path somewhat faster.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> Fixes: 3e2c8c718567 ("drm/panthor: Expose size of driver internal BO's over fdinfo")
> ---
> drivers/gpu/drm/panthor/panthor_heap.c | 38 ++++++++------------------
> drivers/gpu/drm/panthor/panthor_heap.h | 2 --
> drivers/gpu/drm/panthor/panthor_mmu.c | 23 +++++++++++-----
> drivers/gpu/drm/panthor/panthor_mmu.h | 1 +
> 4 files changed, 28 insertions(+), 36 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c
> index db0285ce5812..e5e5953e4f87 100644
> --- a/drivers/gpu/drm/panthor/panthor_heap.c
> +++ b/drivers/gpu/drm/panthor/panthor_heap.c
> @@ -127,6 +127,8 @@ static void panthor_free_heap_chunk(struct panthor_vm *vm,
> heap->chunk_count--;
> mutex_unlock(&heap->lock);
> + panthor_vm_heaps_size_accumulate(vm, -heap->chunk_size);
> +
> panthor_kernel_bo_destroy(chunk->bo);
> kfree(chunk);
> }
> @@ -180,6 +182,8 @@ static int panthor_alloc_heap_chunk(struct panthor_device *ptdev,
> heap->chunk_count++;
> mutex_unlock(&heap->lock);
> + panthor_vm_heaps_size_accumulate(vm, heap->chunk_size);
> +
> return 0;
> err_destroy_bo:
> @@ -389,6 +393,7 @@ int panthor_heap_return_chunk(struct panthor_heap_pool *pool,
> removed = chunk;
> list_del(&chunk->node);
> heap->chunk_count--;
> + panthor_vm_heaps_size_accumulate(chunk->bo->vm, -heap->chunk_size);
> break;
> }
> }
> @@ -560,6 +565,8 @@ panthor_heap_pool_create(struct panthor_device *ptdev, struct panthor_vm *vm)
> if (ret)
> goto err_destroy_pool;
> + panthor_vm_heaps_size_accumulate(vm, pool->gpu_contexts->obj->size);
> +
> return pool;
> err_destroy_pool:
> @@ -594,8 +601,11 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
> xa_for_each(&pool->xa, i, heap)
> drm_WARN_ON(&pool->ptdev->base, panthor_heap_destroy_locked(pool, i));
> - if (!IS_ERR_OR_NULL(pool->gpu_contexts))
> + if (!IS_ERR_OR_NULL(pool->gpu_contexts)) {
> + panthor_vm_heaps_size_accumulate(pool->gpu_contexts->vm,
> + -pool->gpu_contexts->obj->size);
> panthor_kernel_bo_destroy(pool->gpu_contexts);
> + }
> /* Reflects the fact the pool has been destroyed. */
> pool->vm = NULL;
> @@ -603,29 +613,3 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
> panthor_heap_pool_put(pool);
> }
> -
> -/**
> - * panthor_heap_pool_size() - Calculate size of all chunks across all heaps in a pool
> - * @pool: Pool whose total chunk size to calculate.
> - *
> - * This function adds the size of all heap chunks across all heaps in the
> - * argument pool. It also adds the size of the gpu contexts kernel bo.
> - * It is meant to be used by fdinfo for displaying the size of internal
> - * driver BO's that aren't exposed to userspace through a GEM handle.
> - *
> - */
> -size_t panthor_heap_pool_size(struct panthor_heap_pool *pool)
> -{
> - struct panthor_heap *heap;
> - unsigned long i;
> - size_t size = 0;
> -
> - down_read(&pool->lock);
> - xa_for_each(&pool->xa, i, heap)
> - size += heap->chunk_size * heap->chunk_count;
> - up_read(&pool->lock);
> -
> - size += pool->gpu_contexts->obj->size;
> -
> - return size;
> -}
> diff --git a/drivers/gpu/drm/panthor/panthor_heap.h b/drivers/gpu/drm/panthor/panthor_heap.h
> index e3358d4e8edb..25a5f2bba445 100644
> --- a/drivers/gpu/drm/panthor/panthor_heap.h
> +++ b/drivers/gpu/drm/panthor/panthor_heap.h
> @@ -27,8 +27,6 @@ struct panthor_heap_pool *
> panthor_heap_pool_get(struct panthor_heap_pool *pool);
> void panthor_heap_pool_put(struct panthor_heap_pool *pool);
> -size_t panthor_heap_pool_size(struct panthor_heap_pool *pool);
> -
> int panthor_heap_grow(struct panthor_heap_pool *pool,
> u64 heap_gpu_va,
> u32 renderpasses_in_flight,
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 8c6fc587ddc3..9e48b34fcf80 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -347,6 +347,14 @@ struct panthor_vm {
> struct mutex lock;
> } heaps;
> + /**
> + * @fdinfo: VM-wide fdinfo fields.
> + */
> + struct {
> + /** @fdinfo.heaps_size: Size of all chunks across all heaps in the pool. */
> + atomic_t heaps_size;
> + } fdinfo;

Feels more like a panthor_heap_pool field to me. If you do that,
you can keep the panthor_heap_pool_size() helper.

> +
> /** @node: Used to insert the VM in the panthor_mmu::vm::list. */
> struct list_head node;
> @@ -1541,6 +1549,8 @@ static void panthor_vm_destroy(struct panthor_vm *vm)
> vm->heaps.pool = NULL;
> mutex_unlock(&vm->heaps.lock);
> + atomic_set(&vm->fdinfo.heaps_size, 0);
> +

I don't think that's needed, the VM is gone, so there's no way
someone can query its heaps size after that point.

> drm_WARN_ON(&vm->ptdev->base,
> panthor_vm_unmap_range(vm, vm->base.mm_start, vm->base.mm_range));
> panthor_vm_put(vm);
> @@ -1963,13 +1973,7 @@ void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats
> xa_lock(&pfile->vms->xa);
> xa_for_each(&pfile->vms->xa, i, vm) {
> - size_t size = 0;
> -
> - mutex_lock(&vm->heaps.lock);
> - if (vm->heaps.pool)
> - size = panthor_heap_pool_size(vm->heaps.pool);
> - mutex_unlock(&vm->heaps.lock);
> -
> + size_t size = atomic_read(&vm->fdinfo.heaps_size);
> stats->resident += size;
> if (vm->as.id >= 0)
> stats->active += size;
> @@ -1977,6 +1981,11 @@ void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats
> xa_unlock(&pfile->vms->xa);
> }
> +void panthor_vm_heaps_size_accumulate(struct panthor_vm *vm, ssize_t acc)
> +{
> + atomic_add(acc, &vm->fdinfo.heaps_size);
> +}

Calling atomic_add() directly would probably be shorter, and I prefer
the idea of calling atomic_sub(size) instead of atomic_add(-size), so
how about we drop this helper and use atomic_add/sub() directly?

> +
> static u64 mair_to_memattr(u64 mair, bool coherent)
> {
> u64 memattr = 0;
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.h b/drivers/gpu/drm/panthor/panthor_mmu.h
> index fc274637114e..29030384eafe 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.h
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.h
> @@ -39,6 +39,7 @@ struct panthor_heap_pool *
> panthor_vm_get_heap_pool(struct panthor_vm *vm, bool create);
> void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats *stats);
> +void panthor_vm_heaps_size_accumulate(struct panthor_vm *vm, ssize_t acc);
> struct panthor_vm *panthor_vm_get(struct panthor_vm *vm);
> void panthor_vm_put(struct panthor_vm *vm);
^ permalink raw reply [flat|nested] 6+ messages in thread

* Re: [PATCH v2 2/2] drm/panthor: Avoid sleep locking in the internal BO size path
2025-02-15 9:44 ` Boris Brezillon
@ 2025-02-20 20:26 ` Adrián Larumbe
2025-02-24 11:51 ` Boris Brezillon
0 siblings, 1 reply; 6+ messages in thread
From: Adrián Larumbe @ 2025-02-20 20:26 UTC (permalink / raw)
To: Boris Brezillon
Cc: Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Mihail Atanassov,
kernel, dri-devel, linux-kernel

Hi Boris,

On 15.02.2025 10:44, Boris Brezillon wrote:
> On Fri, 14 Feb 2025 20:55:21 +0000
> Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
>
> > Commit 434e5ca5b5d7 ("drm/panthor: Expose size of driver internal BO's over
> > fdinfo") locks the VMS xarray, to avoid UAF errors when the same VM is
> > being concurrently destroyed by another thread. However, that puts the
> > current thread in atomic context, which means taking the VMS' heap locks
> > will trigger a warning as the thread is no longer allowed to sleep.
> >
> > Because in this case replacing the heap mutex with a spinlock isn't
> > feasible, the fdinfo handler no longer traverses the list of heaps for
> > every single VM associated with an open DRM file. Instead, when a new heap
> > chunk is allocated, its size is accumulated into a VM-wide tally, which
> > also makes the atomic context code path somewhat faster.
> >
> > Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> > Fixes: 3e2c8c718567 ("drm/panthor: Expose size of driver internal BO's over fdinfo")
> > ---
> > drivers/gpu/drm/panthor/panthor_heap.c | 38 ++++++++------------------
> > drivers/gpu/drm/panthor/panthor_heap.h | 2 --
> > drivers/gpu/drm/panthor/panthor_mmu.c | 23 +++++++++++-----
> > drivers/gpu/drm/panthor/panthor_mmu.h | 1 +
> > 4 files changed, 28 insertions(+), 36 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c
> > index db0285ce5812..e5e5953e4f87 100644
> > --- a/drivers/gpu/drm/panthor/panthor_heap.c
> > +++ b/drivers/gpu/drm/panthor/panthor_heap.c
> > @@ -127,6 +127,8 @@ static void panthor_free_heap_chunk(struct panthor_vm *vm,
> > heap->chunk_count--;
> > mutex_unlock(&heap->lock);
> > + panthor_vm_heaps_size_accumulate(vm, -heap->chunk_size);
> > +
> > panthor_kernel_bo_destroy(chunk->bo);
> > kfree(chunk);
> > }
> > @@ -180,6 +182,8 @@ static int panthor_alloc_heap_chunk(struct panthor_device *ptdev,
> > heap->chunk_count++;
> > mutex_unlock(&heap->lock);
> > + panthor_vm_heaps_size_accumulate(vm, heap->chunk_size);
> > +
> > return 0;
> > err_destroy_bo:
> > @@ -389,6 +393,7 @@ int panthor_heap_return_chunk(struct panthor_heap_pool *pool,
> > removed = chunk;
> > list_del(&chunk->node);
> > heap->chunk_count--;
> > + panthor_vm_heaps_size_accumulate(chunk->bo->vm, -heap->chunk_size);
> > break;
> > }
> > }
> > @@ -560,6 +565,8 @@ panthor_heap_pool_create(struct panthor_device *ptdev, struct panthor_vm *vm)
> > if (ret)
> > goto err_destroy_pool;
> > + panthor_vm_heaps_size_accumulate(vm, pool->gpu_contexts->obj->size);
> > +
> > return pool;
> > err_destroy_pool:
> > @@ -594,8 +601,11 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
> > xa_for_each(&pool->xa, i, heap)
> > drm_WARN_ON(&pool->ptdev->base, panthor_heap_destroy_locked(pool, i));
> > - if (!IS_ERR_OR_NULL(pool->gpu_contexts))
> > + if (!IS_ERR_OR_NULL(pool->gpu_contexts)) {
> > + panthor_vm_heaps_size_accumulate(pool->gpu_contexts->vm,
> > + -pool->gpu_contexts->obj->size);
> > panthor_kernel_bo_destroy(pool->gpu_contexts);
> > + }
> > /* Reflects the fact the pool has been destroyed. */
> > pool->vm = NULL;
> > @@ -603,29 +613,3 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
> > panthor_heap_pool_put(pool);
> > }
> > -
> > -/**
> > - * panthor_heap_pool_size() - Calculate size of all chunks across all heaps in a pool
> > - * @pool: Pool whose total chunk size to calculate.
> > - *
> > - * This function adds the size of all heap chunks across all heaps in the
> > - * argument pool. It also adds the size of the gpu contexts kernel bo.
> > - * It is meant to be used by fdinfo for displaying the size of internal
> > - * driver BO's that aren't exposed to userspace through a GEM handle.
> > - *
> > - */
> > -size_t panthor_heap_pool_size(struct panthor_heap_pool *pool)
> > -{
> > - struct panthor_heap *heap;
> > - unsigned long i;
> > - size_t size = 0;
> > -
> > - down_read(&pool->lock);
> > - xa_for_each(&pool->xa, i, heap)
> > - size += heap->chunk_size * heap->chunk_count;
> > - up_read(&pool->lock);
> > -
> > - size += pool->gpu_contexts->obj->size;
> > -
> > - return size;
> > -}
> > diff --git a/drivers/gpu/drm/panthor/panthor_heap.h b/drivers/gpu/drm/panthor/panthor_heap.h
> > index e3358d4e8edb..25a5f2bba445 100644
> > --- a/drivers/gpu/drm/panthor/panthor_heap.h
> > +++ b/drivers/gpu/drm/panthor/panthor_heap.h
> > @@ -27,8 +27,6 @@ struct panthor_heap_pool *
> > panthor_heap_pool_get(struct panthor_heap_pool *pool);
> > void panthor_heap_pool_put(struct panthor_heap_pool *pool);
> > -size_t panthor_heap_pool_size(struct panthor_heap_pool *pool);
> > -
> > int panthor_heap_grow(struct panthor_heap_pool *pool,
> > u64 heap_gpu_va,
> > u32 renderpasses_in_flight,
> > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> > index 8c6fc587ddc3..9e48b34fcf80 100644
> > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > @@ -347,6 +347,14 @@ struct panthor_vm {
> > struct mutex lock;
> > } heaps;
> > + /**
> > + * @fdinfo: VM-wide fdinfo fields.
> > + */
> > + struct {
> > + /** @fdinfo.heaps_size: Size of all chunks across all heaps in the pool. */
> > + atomic_t heaps_size;
> > + } fdinfo;
>
> Feels more like a panthor_heap_pool field to me. If you do that,
> you can keep the panthor_heap_pool_size() helper.

The only downside of storing a per-heap-pool fdinfo size for its chunks size total is that we'll
have to traverse all the heap pools owned by a VM any time the fdinfo handler for an open
DRM file is invoked. That means spending a longer time with the vms xarray lock taken.

> > +
> > /** @node: Used to insert the VM in the panthor_mmu::vm::list. */
> > struct list_head node;
> > @@ -1541,6 +1549,8 @@ static void panthor_vm_destroy(struct panthor_vm *vm)
> > vm->heaps.pool = NULL;
> > mutex_unlock(&vm->heaps.lock);
> > + atomic_set(&vm->fdinfo.heaps_size, 0);
> > +
>
> I don't think that's needed, the VM is gone, so there's no way
> someone can query its heaps size after that point.

You're right, I had thought destruction doesn't always equal removal until the refcnt
for the VM goes to zero, but it seems all code paths that lead to panthor_vm_destroy()
either remove the VM from the VMS xarray or delete that xarray altogether.

I'll get rid of this line in the next revision.

> > drm_WARN_ON(&vm->ptdev->base,
> > panthor_vm_unmap_range(vm, vm->base.mm_start, vm->base.mm_range));
> > panthor_vm_put(vm);
> > @@ -1963,13 +1973,7 @@ void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats
> > xa_lock(&pfile->vms->xa);
> > xa_for_each(&pfile->vms->xa, i, vm) {
> > - size_t size = 0;
> > -
> > - mutex_lock(&vm->heaps.lock);
> > - if (vm->heaps.pool)
> > - size = panthor_heap_pool_size(vm->heaps.pool);
> > - mutex_unlock(&vm->heaps.lock);
> > -
> > + size_t size = atomic_read(&vm->fdinfo.heaps_size);
> > stats->resident += size;
> > if (vm->as.id >= 0)
> > stats->active += size;
> > @@ -1977,6 +1981,11 @@ void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats
> > xa_unlock(&pfile->vms->xa);
> > }
> > +void panthor_vm_heaps_size_accumulate(struct panthor_vm *vm, ssize_t acc)
> > +{
> > + atomic_add(acc, &vm->fdinfo.heaps_size);
> > +}
>
> Calling atomic_add() directly would probably be shorter, and I prefer
> the idea of calling atomic_sub(size) instead of atomic_add(-size), so
> how about we drop this helper and use atomic_add/sub() directly?

I had to add this VM interface function because the VM struct fields are kept hidden from
other compilation units, as struct panthor_vm is defined inside panthor_mmu.c. I agree
using atomic_sub() would be clearer, but that would imply exporting yet another panthor_mmu
symbol, and atomic_add() can take signed values anyway.

> > +
> > static u64 mair_to_memattr(u64 mair, bool coherent)
> > {
> > u64 memattr = 0;
> > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.h b/drivers/gpu/drm/panthor/panthor_mmu.h
> > index fc274637114e..29030384eafe 100644
> > --- a/drivers/gpu/drm/panthor/panthor_mmu.h
> > +++ b/drivers/gpu/drm/panthor/panthor_mmu.h
> > @@ -39,6 +39,7 @@ struct panthor_heap_pool *
> > panthor_vm_get_heap_pool(struct panthor_vm *vm, bool create);
> > void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats *stats);
> > +void panthor_vm_heaps_size_accumulate(struct panthor_vm *vm, ssize_t acc);
> > struct panthor_vm *panthor_vm_get(struct panthor_vm *vm);
> > void panthor_vm_put(struct panthor_vm *vm);

Adrian Larumbe
^ permalink raw reply [flat|nested] 6+ messages in thread

* Re: [PATCH v2 2/2] drm/panthor: Avoid sleep locking in the internal BO size path
2025-02-20 20:26 ` Adrián Larumbe
@ 2025-02-24 11:51 ` Boris Brezillon
0 siblings, 0 replies; 6+ messages in thread
From: Boris Brezillon @ 2025-02-24 11:51 UTC (permalink / raw)
To: Adrián Larumbe
Cc: Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, Mihail Atanassov,
kernel, dri-devel, linux-kernel

Hi Adrian,

On Thu, 20 Feb 2025 20:26:23 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> Hi Boris,
>
> On 15.02.2025 10:44, Boris Brezillon wrote:
> > On Fri, 14 Feb 2025 20:55:21 +0000
> > Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> >
> > > Commit 434e5ca5b5d7 ("drm/panthor: Expose size of driver internal BO's over
> > > fdinfo") locks the VMS xarray, to avoid UAF errors when the same VM is
> > > being concurrently destroyed by another thread. However, that puts the
> > > current thread in atomic context, which means taking the VMS' heap locks
> > > will trigger a warning as the thread is no longer allowed to sleep.
> > >
> > > Because in this case replacing the heap mutex with a spinlock isn't
> > > feasible, the fdinfo handler no longer traverses the list of heaps for
> > > every single VM associated with an open DRM file. Instead, when a new heap
> > > chunk is allocated, its size is accumulated into a VM-wide tally, which
> > > also makes the atomic context code path somewhat faster.
> > >
> > > Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> > > Fixes: 3e2c8c718567 ("drm/panthor: Expose size of driver internal BO's over fdinfo")
> > > ---
> > > drivers/gpu/drm/panthor/panthor_heap.c | 38 ++++++++------------------
> > > drivers/gpu/drm/panthor/panthor_heap.h | 2 --
> > > drivers/gpu/drm/panthor/panthor_mmu.c | 23 +++++++++++-----
> > > drivers/gpu/drm/panthor/panthor_mmu.h | 1 +
> > > 4 files changed, 28 insertions(+), 36 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/panthor/panthor_heap.c b/drivers/gpu/drm/panthor/panthor_heap.c
> > > index db0285ce5812..e5e5953e4f87 100644
> > > --- a/drivers/gpu/drm/panthor/panthor_heap.c
> > > +++ b/drivers/gpu/drm/panthor/panthor_heap.c
> > > @@ -127,6 +127,8 @@ static void panthor_free_heap_chunk(struct panthor_vm *vm,
> > > heap->chunk_count--;
> > > mutex_unlock(&heap->lock);
> > > + panthor_vm_heaps_size_accumulate(vm, -heap->chunk_size);
> > > +
> > > panthor_kernel_bo_destroy(chunk->bo);
> > > kfree(chunk);
> > > }
> > > @@ -180,6 +182,8 @@ static int panthor_alloc_heap_chunk(struct panthor_device *ptdev,
> > > heap->chunk_count++;
> > > mutex_unlock(&heap->lock);
> > > + panthor_vm_heaps_size_accumulate(vm, heap->chunk_size);
> > > +
> > > return 0;
> > > err_destroy_bo:
> > > @@ -389,6 +393,7 @@ int panthor_heap_return_chunk(struct panthor_heap_pool *pool,
> > > removed = chunk;
> > > list_del(&chunk->node);
> > > heap->chunk_count--;
> > > + panthor_vm_heaps_size_accumulate(chunk->bo->vm, -heap->chunk_size);
> > > break;
> > > }
> > > }
> > > @@ -560,6 +565,8 @@ panthor_heap_pool_create(struct panthor_device *ptdev, struct panthor_vm *vm)
> > > if (ret)
> > > goto err_destroy_pool;
> > > + panthor_vm_heaps_size_accumulate(vm, pool->gpu_contexts->obj->size);
> > > +
> > > return pool;
> > > err_destroy_pool:
> > > @@ -594,8 +601,11 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
> > > xa_for_each(&pool->xa, i, heap)
> > > drm_WARN_ON(&pool->ptdev->base, panthor_heap_destroy_locked(pool, i));
> > > - if (!IS_ERR_OR_NULL(pool->gpu_contexts))
> > > + if (!IS_ERR_OR_NULL(pool->gpu_contexts)) {
> > > + panthor_vm_heaps_size_accumulate(pool->gpu_contexts->vm,
> > > + -pool->gpu_contexts->obj->size);
> > > panthor_kernel_bo_destroy(pool->gpu_contexts);
> > > + }
> > > /* Reflects the fact the pool has been destroyed. */
> > > pool->vm = NULL;
> > > @@ -603,29 +613,3 @@ void panthor_heap_pool_destroy(struct panthor_heap_pool *pool)
> > > panthor_heap_pool_put(pool);
> > > }
> > > -
> > > -/**
> > > - * panthor_heap_pool_size() - Calculate size of all chunks across all heaps in a pool
> > > - * @pool: Pool whose total chunk size to calculate.
> > > - *
> > > - * This function adds the size of all heap chunks across all heaps in the
> > > - * argument pool. It also adds the size of the gpu contexts kernel bo.
> > > - * It is meant to be used by fdinfo for displaying the size of internal
> > > - * driver BO's that aren't exposed to userspace through a GEM handle.
> > > - *
> > > - */
> > > -size_t panthor_heap_pool_size(struct panthor_heap_pool *pool)
> > > -{
> > > - struct panthor_heap *heap;
> > > - unsigned long i;
> > > - size_t size = 0;
> > > -
> > > - down_read(&pool->lock);
> > > - xa_for_each(&pool->xa, i, heap)
> > > - size += heap->chunk_size * heap->chunk_count;
> > > - up_read(&pool->lock);
> > > -
> > > - size += pool->gpu_contexts->obj->size;
> > > -
> > > - return size;
> > > -}
> > > diff --git a/drivers/gpu/drm/panthor/panthor_heap.h b/drivers/gpu/drm/panthor/panthor_heap.h
> > > index e3358d4e8edb..25a5f2bba445 100644
> > > --- a/drivers/gpu/drm/panthor/panthor_heap.h
> > > +++ b/drivers/gpu/drm/panthor/panthor_heap.h
> > > @@ -27,8 +27,6 @@ struct panthor_heap_pool *
> > > panthor_heap_pool_get(struct panthor_heap_pool *pool);
> > > void panthor_heap_pool_put(struct panthor_heap_pool *pool);
> > > -size_t panthor_heap_pool_size(struct panthor_heap_pool *pool);
> > > -
> > > int panthor_heap_grow(struct panthor_heap_pool *pool,
> > > u64 heap_gpu_va,
> > > u32 renderpasses_in_flight,
> > > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> > > index 8c6fc587ddc3..9e48b34fcf80 100644
> > > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > > @@ -347,6 +347,14 @@ struct panthor_vm {
> > > struct mutex lock;
> > > } heaps;
> > > + /**
> > > + * @fdinfo: VM-wide fdinfo fields.
> > > + */
> > > + struct {
> > > + /** @fdinfo.heaps_size: Size of all chunks across all heaps in the pool. */
> > > + atomic_t heaps_size;
> > > + } fdinfo;
> >
> > Feels more like a panthor_heap_pool field to me. If you do that,
> > you can keep the panthor_heap_pool_size() helper.
>
> The only downside of storing a per-heap-pool fdinfo size for its chunks size total is that we'll
> have to traverse all the heap pools owned by a VM any time the fdinfo handler for an open
> DRM file is invoked. That means spending a longer time with the vms xarray lock taken.

There's only one heap pool per VM though, and once the pool is created it
can't go away, so you don't even have to take the lock to deref the
panthor_vm::heaps::pool object, you just need a NULL check.

> > > +void panthor_vm_heaps_size_accumulate(struct panthor_vm *vm, ssize_t acc)
> > > +{
> > > + atomic_add(acc, &vm->fdinfo.heaps_size);
> > > +}
> >
> > Calling atomic_add() directly would probably be shorter, and I prefer
> > the idea of calling atomic_sub(size) instead of atomic_add(-size), so
> > how about we drop this helper and use atomic_add/sub() directly?
>
> I had to add this VM interface function because the VM struct fields are kept hidden from
> other compilation units, as struct panthor_vm is defined inside panthor_mmu.c. I agree
> using atomic_sub() would be clearer, but that would imply exporting yet another panthor_mmu
> symbol, and atomic_add() can take signed values anyway.

If you move the "heaps_size" field to panthor_heap_pool (which I would
rename "size" when moving it to panthor_heap::fdinfo BTW), you no longer
have this problem, because all users of this field exist in panthor_heap.c
only.

Regards,

Boris
^ permalink raw reply [flat|nested] 6+ messages in thread
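
One possible shape of the direction suggested here, sketched under the
assumption that the counter moves into the heap pool (this is only an
illustration, not the actual follow-up revision): all updates then stay inside
panthor_heap.c, and the former panthor_heap_pool_size() helper survives as a
trivial lockless read that only needs a NULL check from the fdinfo path.

struct heap_pool_sketch {
	struct {
		atomic_t size;	/* heap chunks + gpu_contexts BO, in bytes */
	} fdinfo;
};

static size_t heap_pool_size_sketch(struct heap_pool_sketch *pool)
{
	/*
	 * The pool pointer never changes once the VM has been created, so
	 * the fdinfo caller only needs a NULL check, no mutex.
	 */
	return pool ? atomic_read(&pool->fdinfo.size) : 0;
}
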

* Re: [PATCH v2 1/2] drm/panthor: Replace sleep locks with spinlocks in fdinfo path
2025-02-14 20:55 [PATCH v2 1/2] drm/panthor: Replace sleep locks with spinlocks in fdinfo path Adrián Larumbe
2025-02-14 20:55 ` [PATCH v2 2/2] drm/panthor: Avoid sleep locking in the internal BO size path Adrián Larumbe
@ 2025-02-15 9:28 ` Boris Brezillon
1 sibling, 0 replies; 6+ messages in thread
From: Boris Brezillon @ 2025-02-15 9:28 UTC (permalink / raw)
To: Adrián Larumbe
Cc: Steven Price, Liviu Dudau, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter, kernel, dri-devel,
linux-kernel

On Fri, 14 Feb 2025 20:55:20 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> Commit 0590c94c3596 ("drm/panthor: Fix race condition when gathering fdinfo
> group samples") introduced an xarray lock to deal with potential
> use-after-free errors when accessing groups fdinfo figures. However, this
> toggles the kernel's atomic context status, so the next nested mutex lock
> will raise a warning when the kernel is compiled with mutex debug options:
>
> CONFIG_DEBUG_RT_MUTEXES=y
> CONFIG_DEBUG_MUTEXES=y
>
> Replace Panthor's group fdinfo data mutex with a guarded spinlock.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> 0590c94c3596 ("drm/panthor: Fix race condition when gathering fdinfo group samples")

My previous Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com> stands.

> ---
> drivers/gpu/drm/panthor/panthor_sched.c | 26 ++++++++++++-------------
> 1 file changed, 12 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
> index 1a276db095ff..4d31d1967716 100644
> --- a/drivers/gpu/drm/panthor/panthor_sched.c
> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> @@ -9,6 +9,7 @@
> #include <drm/panthor_drm.h>
> #include <linux/build_bug.h>
> +#include <linux/cleanup.h>
> #include <linux/clk.h>
> #include <linux/delay.h>
> #include <linux/dma-mapping.h>
> @@ -631,10 +632,10 @@ struct panthor_group {
> struct panthor_gpu_usage data;
> /**
> - * @lock: Mutex to govern concurrent access from drm file's fdinfo callback
> - * and job post-completion processing function
> + * @fdinfo.lock: Spinlock to govern concurrent access from drm file's fdinfo
> + * callback and job post-completion processing function
> */
> - struct mutex lock;
> + spinlock_t lock;
> /** @fdinfo.kbo_sizes: Aggregate size of private kernel BO's held by the group. */
> size_t kbo_sizes;
> @@ -910,8 +911,6 @@ static void group_release_work(struct work_struct *work)
> release_work);
> u32 i;
> - mutex_destroy(&group->fdinfo.lock);
> -
> for (i = 0; i < group->queue_count; i++)
> group_free_queue(group, group->queues[i]);
> @@ -2861,12 +2860,12 @@ static void update_fdinfo_stats(struct panthor_job *job)
> struct panthor_job_profiling_data *slots = queue->profiling.slots->kmap;
> struct panthor_job_profiling_data *data = &slots[job->profiling.slot];
> - mutex_lock(&group->fdinfo.lock);
> - if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_CYCLES)
> - fdinfo->cycles += data->cycles.after - data->cycles.before;
> - if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP)
> - fdinfo->time += data->time.after - data->time.before;
> - mutex_unlock(&group->fdinfo.lock);
> + scoped_guard(spinlock, &group->fdinfo.lock) {
> + if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_CYCLES)
> + fdinfo->cycles += data->cycles.after - data->cycles.before;
> + if (job->profiling.mask & PANTHOR_DEVICE_PROFILING_TIMESTAMP)
> + fdinfo->time += data->time.after - data->time.before;
> + }
> }
> void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
> @@ -2880,12 +2879,11 @@ void panthor_fdinfo_gather_group_samples(struct panthor_file *pfile)
> xa_lock(&gpool->xa);
> xa_for_each(&gpool->xa, i, group) {
> - mutex_lock(&group->fdinfo.lock);
> + guard(spinlock)(&group->fdinfo.lock);
> pfile->stats.cycles += group->fdinfo.data.cycles;
> pfile->stats.time += group->fdinfo.data.time;
> group->fdinfo.data.cycles = 0;
> group->fdinfo.data.time = 0;
> - mutex_unlock(&group->fdinfo.lock);
> }
> xa_unlock(&gpool->xa);
> }
> @@ -3537,7 +3535,7 @@ int panthor_group_create(struct panthor_file *pfile,
> mutex_unlock(&sched->reset.lock);
> add_group_kbo_sizes(group->ptdev, group);
> - mutex_init(&group->fdinfo.lock);
> + spin_lock_init(&group->fdinfo.lock);
> return gid;
^ permalink raw reply [flat|nested] 6+ messages in thread

end of thread, other threads:[~2025-02-24 11:51 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-02-14 20:55 [PATCH v2 1/2] drm/panthor: Replace sleep locks with spinlocks in fdinfo path Adrián Larumbe
2025-02-14 20:55 ` [PATCH v2 2/2] drm/panthor: Avoid sleep locking in the internal BO size path Adrián Larumbe
2025-02-15 9:44 ` Boris Brezillon
2025-02-20 20:26 ` Adrián Larumbe
2025-02-24 11:51 ` Boris Brezillon
2025-02-15 9:28 ` [PATCH v2 1/2] drm/panthor: Replace sleep locks with spinlocks in fdinfo path Boris Brezillon