* [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1)
@ 2025-03-22 21:25 Dmitry Osipenko
2025-03-22 21:25 ` [PATCH v20 01/10] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names Dmitry Osipenko
` (11 more replies)
0 siblings, 12 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:25 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Hi,
This is a continuation of a year-old series that adds a generic DRM-shmem
shrinker [1]. The old series became too big, with too many patches, so it
is more reasonable to split it up into multiple smaller patchsets. Here is
the first part, which makes the preparatory DRM changes.
[1] https://lore.kernel.org/dri-devel/20240105184624.508603-1-dmitry.osipenko@collabora.com/
Changelog:
v20:- Rebased on recent drm-misc. Added r-bs that were given to v19.
Dmitry Osipenko (10):
drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function
names
drm/gem: Add _locked postfix to functions that have unlocked
counterpart
drm/gem: Document locking rule of vmap and evict callbacks
drm/shmem-helper: Make all exported symbols GPL
drm/shmem-helper: Refactor locked/unlocked functions
drm/shmem-helper: Remove obsoleted is_iomem test
drm/shmem-helper: Add and use pages_pin_count
drm/shmem-helper: Use refcount_t for pages_use_count
drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
drm/shmem-helper: Use refcount_t for vmap_use_count
drivers/gpu/drm/drm_client.c | 10 +-
drivers/gpu/drm/drm_gem.c | 26 ++--
drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 145 +++++++++---------
drivers/gpu/drm/drm_internal.h | 4 +-
drivers/gpu/drm/drm_prime.c | 4 +-
drivers/gpu/drm/imagination/pvr_gem.c | 4 +-
drivers/gpu/drm/lima/lima_gem.c | 4 +-
drivers/gpu/drm/lima/lima_sched.c | 4 +-
drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_dump.c | 4 +-
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 +-
drivers/gpu/drm/panthor/panthor_gem.h | 4 +-
drivers/gpu/drm/panthor/panthor_sched.c | 4 +-
drivers/gpu/drm/tests/drm_gem_shmem_test.c | 28 ++--
include/drm/drm_gem.h | 15 +-
include/drm/drm_gem_shmem_helper.h | 45 ++++--
19 files changed, 167 insertions(+), 152 deletions(-)
--
2.49.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v20 01/10] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
@ 2025-03-22 21:25 ` Dmitry Osipenko
2025-03-24 13:05 ` Christian König
2025-03-22 21:26 ` [PATCH v20 02/10] drm/gem: Add _locked postfix to functions that have unlocked counterpart Dmitry Osipenko
` (10 subsequent siblings)
11 siblings, 1 reply; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:25 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Make the drm/gem API function names consistent by having locked functions
use the _locked postfix in their names, while the unlocked variants drop
the _unlocked postfix. Rename the drm_gem_v/unmap() functions to make them
consistent with the rest of the API functions.
Acked-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_client.c | 10 +++++-----
drivers/gpu/drm/drm_gem.c | 20 ++++++++++----------
drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +++---
drivers/gpu/drm/drm_internal.h | 4 ++--
drivers/gpu/drm/drm_prime.c | 4 ++--
drivers/gpu/drm/lima/lima_sched.c | 4 ++--
drivers/gpu/drm/panfrost/panfrost_dump.c | 4 ++--
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 +++---
drivers/gpu/drm/panthor/panthor_gem.h | 4 ++--
drivers/gpu/drm/panthor/panthor_sched.c | 4 ++--
include/drm/drm_gem.h | 4 ++--
11 files changed, 35 insertions(+), 35 deletions(-)
diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 549b28a5918c..f1de7faf9fb4 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -174,7 +174,7 @@ EXPORT_SYMBOL(drm_client_release);
static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
{
if (buffer->gem) {
- drm_gem_vunmap_unlocked(buffer->gem, &buffer->map);
+ drm_gem_vunmap(buffer->gem, &buffer->map);
drm_gem_object_put(buffer->gem);
}
@@ -252,7 +252,7 @@ int drm_client_buffer_vmap_local(struct drm_client_buffer *buffer,
drm_gem_lock(gem);
- ret = drm_gem_vmap(gem, map);
+ ret = drm_gem_vmap_locked(gem, map);
if (ret)
goto err_drm_gem_vmap_unlocked;
*map_copy = *map;
@@ -278,7 +278,7 @@ void drm_client_buffer_vunmap_local(struct drm_client_buffer *buffer)
struct drm_gem_object *gem = buffer->gem;
struct iosys_map *map = &buffer->map;
- drm_gem_vunmap(gem, map);
+ drm_gem_vunmap_locked(gem, map);
drm_gem_unlock(gem);
}
EXPORT_SYMBOL(drm_client_buffer_vunmap_local);
@@ -316,7 +316,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer,
ret = drm_gem_pin_locked(gem);
if (ret)
goto err_drm_gem_pin_locked;
- ret = drm_gem_vmap(gem, map);
+ ret = drm_gem_vmap_locked(gem, map);
if (ret)
goto err_drm_gem_vmap;
@@ -348,7 +348,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
struct iosys_map *map = &buffer->map;
drm_gem_lock(gem);
- drm_gem_vunmap(gem, map);
+ drm_gem_vunmap_locked(gem, map);
drm_gem_unpin_locked(gem);
drm_gem_unlock(gem);
}
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index c6240bab3fa5..27778e5ce0c0 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1216,7 +1216,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
dma_resv_unlock(obj->resv);
}
-int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
{
int ret;
@@ -1233,9 +1233,9 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
return 0;
}
-EXPORT_SYMBOL(drm_gem_vmap);
+EXPORT_SYMBOL(drm_gem_vmap_locked);
-void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
{
dma_resv_assert_held(obj->resv);
@@ -1248,7 +1248,7 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
/* Always set the mapping to NULL. Callers may rely on this. */
iosys_map_clear(map);
}
-EXPORT_SYMBOL(drm_gem_vunmap);
+EXPORT_SYMBOL(drm_gem_vunmap_locked);
void drm_gem_lock(struct drm_gem_object *obj)
{
@@ -1262,25 +1262,25 @@ void drm_gem_unlock(struct drm_gem_object *obj)
}
EXPORT_SYMBOL(drm_gem_unlock);
-int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
{
int ret;
dma_resv_lock(obj->resv, NULL);
- ret = drm_gem_vmap(obj, map);
+ ret = drm_gem_vmap_locked(obj, map);
dma_resv_unlock(obj->resv);
return ret;
}
-EXPORT_SYMBOL(drm_gem_vmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vmap);
-void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
{
dma_resv_lock(obj->resv, NULL);
- drm_gem_vunmap(obj, map);
+ drm_gem_vunmap_locked(obj, map);
dma_resv_unlock(obj->resv);
}
-EXPORT_SYMBOL(drm_gem_vunmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vunmap);
/**
* drm_gem_lock_reservations - Sets up the ww context and acquires
diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
index 0fbeb686e561..6f72e7a0f427 100644
--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
@@ -362,7 +362,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
ret = -EINVAL;
goto err_drm_gem_vunmap;
}
- ret = drm_gem_vmap_unlocked(obj, &map[i]);
+ ret = drm_gem_vmap(obj, &map[i]);
if (ret)
goto err_drm_gem_vunmap;
}
@@ -384,7 +384,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
obj = drm_gem_fb_get_obj(fb, i);
if (!obj)
continue;
- drm_gem_vunmap_unlocked(obj, &map[i]);
+ drm_gem_vunmap(obj, &map[i]);
}
return ret;
}
@@ -411,7 +411,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, struct iosys_map *map)
continue;
if (iosys_map_is_null(&map[i]))
continue;
- drm_gem_vunmap_unlocked(obj, &map[i]);
+ drm_gem_vunmap(obj, &map[i]);
}
}
EXPORT_SYMBOL(drm_gem_fb_vunmap);
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index b2b6a8e49dda..e44f28fd81d3 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -179,8 +179,8 @@ int drm_gem_pin_locked(struct drm_gem_object *obj);
void drm_gem_unpin_locked(struct drm_gem_object *obj);
int drm_gem_pin(struct drm_gem_object *obj);
void drm_gem_unpin(struct drm_gem_object *obj);
-int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
-void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
+int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map);
+void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map);
/* drm_debugfs.c drm_debugfs_crc.c */
#if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 4b8c6075e46a..d828502268b8 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -707,7 +707,7 @@ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- return drm_gem_vmap(obj, map);
+ return drm_gem_vmap_locked(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
@@ -723,7 +723,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
{
struct drm_gem_object *obj = dma_buf->priv;
- drm_gem_vunmap(obj, map);
+ drm_gem_vunmap_locked(obj, map);
}
EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index 825135a26aa4..7934098e651b 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -371,7 +371,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
} else {
buffer_chunk->size = lima_bo_size(bo);
- ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
+ ret = drm_gem_vmap(&bo->base.base, &map);
if (ret) {
kvfree(et);
goto out;
@@ -379,7 +379,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
- drm_gem_vunmap_unlocked(&bo->base.base, &map);
+ drm_gem_vunmap(&bo->base.base, &map);
}
buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/panfrost/panfrost_dump.c b/drivers/gpu/drm/panfrost/panfrost_dump.c
index 47751302f1bc..4042afe2fbf4 100644
--- a/drivers/gpu/drm/panfrost/panfrost_dump.c
+++ b/drivers/gpu/drm/panfrost/panfrost_dump.c
@@ -209,7 +209,7 @@ void panfrost_core_dump(struct panfrost_job *job)
goto dump_header;
}
- ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
+ ret = drm_gem_vmap(&bo->base.base, &map);
if (ret) {
dev_err(pfdev->dev, "Panfrost Dump: couldn't map Buffer Object\n");
iter.hdr->bomap.valid = 0;
@@ -228,7 +228,7 @@ void panfrost_core_dump(struct panfrost_job *job)
vaddr = map.vaddr;
memcpy(iter.data, vaddr, bo->base.base.size);
- drm_gem_vunmap_unlocked(&bo->base.base, &map);
+ drm_gem_vunmap(&bo->base.base, &map);
iter.hdr->bomap.valid = 1;
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index ba9b6e2b2636..52befead08c6 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -106,7 +106,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
goto err_close_bo;
}
- ret = drm_gem_vmap_unlocked(&bo->base, &map);
+ ret = drm_gem_vmap(&bo->base, &map);
if (ret)
goto err_put_mapping;
perfcnt->buf = map.vaddr;
@@ -165,7 +165,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
return 0;
err_vunmap:
- drm_gem_vunmap_unlocked(&bo->base, &map);
+ drm_gem_vunmap(&bo->base, &map);
err_put_mapping:
panfrost_gem_mapping_put(perfcnt->mapping);
err_close_bo:
@@ -195,7 +195,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
perfcnt->user = NULL;
- drm_gem_vunmap_unlocked(&perfcnt->mapping->obj->base.base, &map);
+ drm_gem_vunmap(&perfcnt->mapping->obj->base.base, &map);
perfcnt->buf = NULL;
panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
index 5749ef2ebe03..1a363bb814f4 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.h
+++ b/drivers/gpu/drm/panthor/panthor_gem.h
@@ -112,7 +112,7 @@ panthor_kernel_bo_vmap(struct panthor_kernel_bo *bo)
if (bo->kmap)
return 0;
- ret = drm_gem_vmap_unlocked(bo->obj, &map);
+ ret = drm_gem_vmap(bo->obj, &map);
if (ret)
return ret;
@@ -126,7 +126,7 @@ panthor_kernel_bo_vunmap(struct panthor_kernel_bo *bo)
if (bo->kmap) {
struct iosys_map map = IOSYS_MAP_INIT_VADDR(bo->kmap);
- drm_gem_vunmap_unlocked(bo->obj, &map);
+ drm_gem_vunmap(bo->obj, &map);
bo->kmap = NULL;
}
}
diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index 4d31d1967716..446ec780eb4a 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -840,7 +840,7 @@ panthor_queue_put_syncwait_obj(struct panthor_queue *queue)
if (queue->syncwait.kmap) {
struct iosys_map map = IOSYS_MAP_INIT_VADDR(queue->syncwait.kmap);
- drm_gem_vunmap_unlocked(queue->syncwait.obj, &map);
+ drm_gem_vunmap(queue->syncwait.obj, &map);
queue->syncwait.kmap = NULL;
}
@@ -866,7 +866,7 @@ panthor_queue_get_syncwait_obj(struct panthor_group *group, struct panthor_queue
goto err_put_syncwait_obj;
queue->syncwait.obj = &bo->base.base;
- ret = drm_gem_vmap_unlocked(queue->syncwait.obj, &map);
+ ret = drm_gem_vmap(queue->syncwait.obj, &map);
if (drm_WARN_ON(&ptdev->base, ret))
goto err_put_syncwait_obj;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 2bf893eabb4b..13c312ca07ae 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -537,8 +537,8 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
void drm_gem_lock(struct drm_gem_object *obj);
void drm_gem_unlock(struct drm_gem_object *obj);
-int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
-void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
+int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
int count, struct drm_gem_object ***objs_out);
--
2.49.0
* [PATCH v20 02/10] drm/gem: Add _locked postfix to functions that have unlocked counterpart
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
2025-03-22 21:25 ` [PATCH v20 01/10] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names Dmitry Osipenko
@ 2025-03-22 21:26 ` Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 03/10] drm/gem: Document locking rule of vmap and evict callbacks Dmitry Osipenko
` (9 subsequent siblings)
11 siblings, 0 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:26 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Add the _locked postfix to drm_gem functions that have unlocked
counterparts to make GEM function naming more consistent and intuitive
with regard to the locking requirements.
Acked-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem.c | 6 +++---
include/drm/drm_gem.h | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 27778e5ce0c0..1e659d2660f7 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1543,10 +1543,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
EXPORT_SYMBOL(drm_gem_lru_scan);
/**
- * drm_gem_evict - helper to evict backing pages for a GEM object
+ * drm_gem_evict_locked - helper to evict backing pages for a GEM object
* @obj: obj in question
*/
-int drm_gem_evict(struct drm_gem_object *obj)
+int drm_gem_evict_locked(struct drm_gem_object *obj)
{
dma_resv_assert_held(obj->resv);
@@ -1558,4 +1558,4 @@ int drm_gem_evict(struct drm_gem_object *obj)
return 0;
}
-EXPORT_SYMBOL(drm_gem_evict);
+EXPORT_SYMBOL(drm_gem_evict_locked);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 13c312ca07ae..43cf3c2c7ca0 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -561,7 +561,7 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
unsigned long *remaining,
bool (*shrink)(struct drm_gem_object *obj));
-int drm_gem_evict(struct drm_gem_object *obj);
+int drm_gem_evict_locked(struct drm_gem_object *obj);
/**
* drm_gem_object_is_shared_for_memory_stats - helper for shared memory stats
--
2.49.0
* [PATCH v20 03/10] drm/gem: Document locking rule of vmap and evict callbacks
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
2025-03-22 21:25 ` [PATCH v20 01/10] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 02/10] drm/gem: Add _locked postfix to functions that have unlocked counterpart Dmitry Osipenko
@ 2025-03-22 21:26 ` Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 04/10] drm/shmem-helper: Make all exported symbols GPL Dmitry Osipenko
` (8 subsequent siblings)
11 siblings, 0 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:26 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
The vmap/vunmap/evict GEM callbacks are always invoked with the GEM
object's reservation lock held. Document this locking rule for clarity.
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
include/drm/drm_gem.h | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 43cf3c2c7ca0..9b71f7a9f3f8 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -159,7 +159,8 @@ struct drm_gem_object_funcs {
* @vmap:
*
* Returns a virtual address for the buffer. Used by the
- * drm_gem_dmabuf_vmap() helper.
+ * drm_gem_dmabuf_vmap() helper. Called with a held GEM reservation
+ * lock.
*
* This callback is optional.
*/
@@ -169,7 +170,8 @@ struct drm_gem_object_funcs {
* @vunmap:
*
* Releases the address previously returned by @vmap. Used by the
- * drm_gem_dmabuf_vunmap() helper.
+ * drm_gem_dmabuf_vunmap() helper. Called with a held GEM reservation
+ * lock.
*
* This callback is optional.
*/
@@ -192,7 +194,8 @@ struct drm_gem_object_funcs {
* @evict:
*
* Evicts gem object out from memory. Used by the drm_gem_object_evict()
- * helper. Returns 0 on success, -errno otherwise.
+ * helper. Returns 0 on success, -errno otherwise. Called with a held
+ * GEM reservation lock.
*
* This callback is optional.
*/
--
2.49.0
* [PATCH v20 04/10] drm/shmem-helper: Make all exported symbols GPL
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
` (2 preceding siblings ...)
2025-03-22 21:26 ` [PATCH v20 03/10] drm/gem: Document locking rule of vmap and evict callbacks Dmitry Osipenko
@ 2025-03-22 21:26 ` Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 05/10] drm/shmem-helper: Refactor locked/unlocked functions Dmitry Osipenko
` (7 subsequent siblings)
11 siblings, 0 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:26 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Make all exported drm-shmem symbols GPL to make them consistent with
the rest of the drm-shmem symbols.
Acked-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index d99dee67353a..98c68999d61a 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -247,7 +247,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
shmem->pages_mark_accessed_on_put);
shmem->pages = NULL;
}
-EXPORT_SYMBOL(drm_gem_shmem_put_pages);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
{
@@ -296,7 +296,7 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
return ret;
}
-EXPORT_SYMBOL(drm_gem_shmem_pin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_pin);
/**
* drm_gem_shmem_unpin - Unpin backing pages for a shmem GEM object
@@ -315,7 +315,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
drm_gem_shmem_unpin_locked(shmem);
dma_resv_unlock(shmem->base.resv);
}
-EXPORT_SYMBOL(drm_gem_shmem_unpin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
/*
* drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
@@ -385,7 +385,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
return ret;
}
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap);
/*
* drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
@@ -421,7 +421,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
shmem->vaddr = NULL;
}
-EXPORT_SYMBOL(drm_gem_shmem_vunmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap);
static int
drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
@@ -460,7 +460,7 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
return (madv >= 0);
}
-EXPORT_SYMBOL(drm_gem_shmem_madvise);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);
void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
{
@@ -492,7 +492,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
}
-EXPORT_SYMBOL(drm_gem_shmem_purge);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
/**
* drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -670,7 +670,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
}
-EXPORT_SYMBOL(drm_gem_shmem_print_info);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);
/**
* drm_gem_shmem_get_sg_table - Provide a scatter/gather table of pinned
--
2.49.0
* [PATCH v20 05/10] drm/shmem-helper: Refactor locked/unlocked functions
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
` (3 preceding siblings ...)
2025-03-22 21:26 ` [PATCH v20 04/10] drm/shmem-helper: Make all exported symbols GPL Dmitry Osipenko
@ 2025-03-22 21:26 ` Dmitry Osipenko
2025-03-27 11:25 ` Jani Nikula
2025-03-22 21:26 ` [PATCH v20 06/10] drm/shmem-helper: Remove obsoleted is_iomem test Dmitry Osipenko
` (6 subsequent siblings)
11 siblings, 1 reply; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:26 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Add the _locked postfix to, and remove the _unlocked postfix from,
drm-shmem function names, making them consistent with the drm/gem
core code.
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 60 +++++++++----------
drivers/gpu/drm/imagination/pvr_gem.c | 4 +-
drivers/gpu/drm/lima/lima_gem.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +-
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 2 +-
drivers/gpu/drm/tests/drm_gem_shmem_test.c | 14 ++---
include/drm/drm_gem_shmem_helper.h | 28 ++++-----
7 files changed, 56 insertions(+), 56 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 98c68999d61a..a9e35a46e72b 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -174,7 +174,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
kfree(shmem->sgt);
}
if (shmem->pages)
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
drm_WARN_ON(obj->dev, shmem->pages_use_count);
@@ -186,7 +186,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
-static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct page **pages;
@@ -220,12 +220,12 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
}
/*
- * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
+ * drm_gem_shmem_put_pages_locked - Decrease use count on the backing pages for a shmem GEM object
* @shmem: shmem GEM object
*
* This function decreases the use count and puts the backing pages when use drops to zero.
*/
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
@@ -247,7 +247,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
shmem->pages_mark_accessed_on_put);
shmem->pages = NULL;
}
-EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
{
@@ -257,7 +257,7 @@ int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
drm_WARN_ON(shmem->base.dev, drm_gem_is_imported(&shmem->base));
- ret = drm_gem_shmem_get_pages(shmem);
+ ret = drm_gem_shmem_get_pages_locked(shmem);
return ret;
}
@@ -267,7 +267,7 @@ void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
{
dma_resv_assert_held(shmem->base.resv);
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
}
EXPORT_SYMBOL(drm_gem_shmem_unpin_locked);
@@ -318,7 +318,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
/*
- * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vmap_locked - Create a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
* @map: Returns the kernel virtual address of the SHMEM GEM object's backing
* store.
@@ -327,13 +327,13 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
* exists for the buffer backing the shmem GEM object. It hides the differences
* between dma-buf imported and natively allocated objects.
*
- * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_locked().
*
* Returns:
* 0 on success or a negative error code on failure.
*/
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map)
+int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map)
{
struct drm_gem_object *obj = &shmem->base;
int ret = 0;
@@ -356,7 +356,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
return 0;
}
- ret = drm_gem_shmem_get_pages(shmem);
+ ret = drm_gem_shmem_get_pages_locked(shmem);
if (ret)
goto err_zero_use;
@@ -379,28 +379,28 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
err_put_pages:
if (!drm_gem_is_imported(obj))
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
err_zero_use:
shmem->vmap_use_count = 0;
return ret;
}
-EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked);
/*
- * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vunmap_locked - Unmap a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
* @map: Kernel virtual address where the SHMEM GEM object was mapped
*
* This function cleans up a kernel virtual address mapping acquired by
- * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- * zero.
+ * drm_gem_shmem_vmap_locked(). The mapping is only removed when the use count
+ * drops to zero.
*
* This function hides the differences between dma-buf imported and natively
* allocated objects.
*/
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map)
+void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map)
{
struct drm_gem_object *obj = &shmem->base;
@@ -416,12 +416,12 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
return;
vunmap(shmem->vaddr);
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
}
shmem->vaddr = NULL;
}
-EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap_locked);
static int
drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
@@ -449,7 +449,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
/* Update madvise status, returns true if not purged, else
* false or -errno.
*/
-int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
+int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv)
{
dma_resv_assert_held(shmem->base.resv);
@@ -460,9 +460,9 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
return (madv >= 0);
}
-EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked);
-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct drm_device *dev = obj->dev;
@@ -476,7 +476,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
kfree(shmem->sgt);
shmem->sgt = NULL;
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
shmem->madv = -1;
@@ -492,7 +492,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
}
-EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked);
/**
* drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -589,7 +589,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
dma_resv_lock(shmem->base.resv, NULL);
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
dma_resv_unlock(shmem->base.resv);
drm_gem_vm_close(vma);
@@ -639,7 +639,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
return -EINVAL;
dma_resv_lock(shmem->base.resv, NULL);
- ret = drm_gem_shmem_get_pages(shmem);
+ ret = drm_gem_shmem_get_pages_locked(shmem);
dma_resv_unlock(shmem->base.resv);
if (ret)
@@ -707,7 +707,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
- ret = drm_gem_shmem_get_pages(shmem);
+ ret = drm_gem_shmem_get_pages_locked(shmem);
if (ret)
return ERR_PTR(ret);
@@ -729,7 +729,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
sg_free_table(sgt);
kfree(sgt);
err_put_pages:
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
return ERR_PTR(ret);
}
diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
index 6a8c81fe8c1e..d9d7c6d1a138 100644
--- a/drivers/gpu/drm/imagination/pvr_gem.c
+++ b/drivers/gpu/drm/imagination/pvr_gem.c
@@ -203,7 +203,7 @@ pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj)
dma_resv_lock(obj->resv, NULL);
- err = drm_gem_shmem_vmap(shmem_obj, &map);
+ err = drm_gem_shmem_vmap_locked(shmem_obj, &map);
if (err)
goto err_unlock;
@@ -257,7 +257,7 @@ pvr_gem_object_vunmap(struct pvr_gem_object *pvr_obj)
dma_sync_sgtable_for_device(dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
}
- drm_gem_shmem_vunmap(shmem_obj, &map);
+ drm_gem_shmem_vunmap_locked(shmem_obj, &map);
dma_resv_unlock(obj->resv);
}
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 9bb997dbb4b9..609221351cde 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -195,7 +195,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
if (bo->heap_size)
return -EINVAL;
- return drm_gem_shmem_vmap(&bo->base, map);
+ return drm_gem_shmem_vmap_locked(&bo->base, map);
}
static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 0f3935556ac7..a731f6b59a42 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -476,7 +476,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
}
}
- args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
+ args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv);
if (args->retained) {
if (args->madv == PANFROST_MADV_DONTNEED)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 3d9f51bd48b6..02b60ea1433a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -51,7 +51,7 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
goto unlock_mappings;
panfrost_gem_teardown_mappings_locked(bo);
- drm_gem_shmem_purge(&bo->base);
+ drm_gem_shmem_purge_locked(&bo->base);
ret = true;
dma_resv_unlock(shmem->base.resv);
diff --git a/drivers/gpu/drm/tests/drm_gem_shmem_test.c b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
index fd4215e2f982..98884966bb92 100644
--- a/drivers/gpu/drm/tests/drm_gem_shmem_test.c
+++ b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
@@ -173,7 +173,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
KUNIT_ASSERT_EQ(test, ret, 0);
- ret = drm_gem_shmem_vmap(shmem, &map);
+ ret = drm_gem_shmem_vmap_locked(shmem, &map);
KUNIT_ASSERT_EQ(test, ret, 0);
KUNIT_ASSERT_NOT_NULL(test, shmem->vaddr);
KUNIT_ASSERT_FALSE(test, iosys_map_is_null(&map));
@@ -183,7 +183,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
for (i = 0; i < TEST_SIZE; i++)
KUNIT_EXPECT_EQ(test, iosys_map_rd(&map, i, u8), TEST_BYTE);
- drm_gem_shmem_vunmap(shmem, &map);
+ drm_gem_shmem_vunmap_locked(shmem, &map);
KUNIT_EXPECT_NULL(test, shmem->vaddr);
KUNIT_EXPECT_EQ(test, shmem->vmap_use_count, 0);
}
@@ -281,17 +281,17 @@ static void drm_gem_shmem_test_madvise(struct kunit *test)
ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
KUNIT_ASSERT_EQ(test, ret, 0);
- ret = drm_gem_shmem_madvise(shmem, 1);
+ ret = drm_gem_shmem_madvise_locked(shmem, 1);
KUNIT_EXPECT_TRUE(test, ret);
KUNIT_ASSERT_EQ(test, shmem->madv, 1);
/* Set madv to a negative value */
- ret = drm_gem_shmem_madvise(shmem, -1);
+ ret = drm_gem_shmem_madvise_locked(shmem, -1);
KUNIT_EXPECT_FALSE(test, ret);
KUNIT_ASSERT_EQ(test, shmem->madv, -1);
/* Check that madv cannot be set back to a positive value */
- ret = drm_gem_shmem_madvise(shmem, 0);
+ ret = drm_gem_shmem_madvise_locked(shmem, 0);
KUNIT_EXPECT_FALSE(test, ret);
KUNIT_ASSERT_EQ(test, shmem->madv, -1);
}
@@ -319,7 +319,7 @@ static void drm_gem_shmem_test_purge(struct kunit *test)
ret = drm_gem_shmem_is_purgeable(shmem);
KUNIT_EXPECT_FALSE(test, ret);
- ret = drm_gem_shmem_madvise(shmem, 1);
+ ret = drm_gem_shmem_madvise_locked(shmem, 1);
KUNIT_EXPECT_TRUE(test, ret);
/* The scatter/gather table will be freed by drm_gem_shmem_free */
@@ -329,7 +329,7 @@ static void drm_gem_shmem_test_purge(struct kunit *test)
ret = drm_gem_shmem_is_purgeable(shmem);
KUNIT_EXPECT_TRUE(test, ret);
- drm_gem_shmem_purge(shmem);
+ drm_gem_shmem_purge_locked(shmem);
KUNIT_EXPECT_NULL(test, shmem->pages);
KUNIT_EXPECT_NULL(test, shmem->sgt);
KUNIT_EXPECT_EQ(test, shmem->madv, -1);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index cef5a6b5a4d6..0609e336479d 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -102,19 +102,19 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de
struct vfsmount *gemfs);
void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem);
int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map);
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map);
+int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map);
+void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map);
int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma);
int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem);
-int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
+int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);
static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
{
@@ -123,7 +123,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
!shmem->base.dma_buf && !drm_gem_is_imported(&shmem->base);
}
-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
@@ -214,12 +214,12 @@ static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_
}
/*
- * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap()
+ * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap_locked()
* @obj: GEM object
* @map: Returns the kernel virtual address of the SHMEM GEM object's backing store.
*
- * This function wraps drm_gem_shmem_vmap(). Drivers that employ the shmem helpers should
- * use it as their &drm_gem_object_funcs.vmap handler.
+ * This function wraps drm_gem_shmem_vmap_locked(). Drivers that employ the shmem
+ * helpers should use it as their &drm_gem_object_funcs.vmap handler.
*
* Returns:
* 0 on success or a negative error code on failure.
@@ -229,7 +229,7 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- return drm_gem_shmem_vmap(shmem, map);
+ return drm_gem_shmem_vmap_locked(shmem, map);
}
/*
@@ -237,15 +237,15 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
* @obj: GEM object
* @map: Kernel virtual address where the SHMEM GEM object was mapped
*
- * This function wraps drm_gem_shmem_vunmap(). Drivers that employ the shmem helpers should
- * use it as their &drm_gem_object_funcs.vunmap handler.
+ * This function wraps drm_gem_shmem_vunmap_locked(). Drivers that employ the shmem
+ * helpers should use it as their &drm_gem_object_funcs.vunmap handler.
*/
static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj,
struct iosys_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- drm_gem_shmem_vunmap(shmem, map);
+ drm_gem_shmem_vunmap_locked(shmem, map);
}
/**
--
2.49.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v20 06/10] drm/shmem-helper: Remove obsoleted is_iomem test
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
` (4 preceding siblings ...)
2025-03-22 21:26 ` [PATCH v20 05/10] drm/shmem-helper: Refactor locked/unlocked functions Dmitry Osipenko
@ 2025-03-22 21:26 ` Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 07/10] drm/shmem-helper: Add and use pages_pin_count Dmitry Osipenko
` (5 subsequent siblings)
11 siblings, 0 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:26 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Everything that uses the mapped buffer should be agnostic to is_iomem.
The only reason for the is_iomem test is that we're setting shmem->vaddr
to the returned map->vaddr. Now that the shmem->vaddr code is gone, remove
the obsoleted is_iomem test to clean up the code.
Acked-by: Maxime Ripard <mripard@kernel.org>
Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index a9e35a46e72b..277e792a0c5c 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -340,12 +340,6 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
if (drm_gem_is_imported(obj)) {
ret = dma_buf_vmap(obj->dma_buf, map);
- if (!ret) {
- if (drm_WARN_ON(obj->dev, map->is_iomem)) {
- dma_buf_vunmap(obj->dma_buf, map);
- return -EIO;
- }
- }
} else {
pgprot_t prot = PAGE_KERNEL;
--
2.49.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v20 07/10] drm/shmem-helper: Add and use pages_pin_count
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
` (5 preceding siblings ...)
2025-03-22 21:26 ` [PATCH v20 06/10] drm/shmem-helper: Remove obsoleted is_iomem test Dmitry Osipenko
@ 2025-03-22 21:26 ` Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 08/10] drm/shmem-helper: Use refcount_t for pages_use_count Dmitry Osipenko
` (4 subsequent siblings)
11 siblings, 0 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:26 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Add a separate pages_pin_count for tracking whether drm-shmem pages are
movable or not. With the addition of memory shrinker support to drm-shmem,
pages_use_count will no longer determine whether pages are hard-pinned
in memory, but whether pages exist and are soft-pinned (and could be swapped
out). A pages_pin_count > 0 hard-pins pages in memory.
Acked-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 16 +++++++++++++++-
include/drm/drm_gem_shmem_helper.h | 11 +++++++++++
2 files changed, 26 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 277e792a0c5c..d338b36f4eaa 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -177,6 +177,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
drm_gem_shmem_put_pages_locked(shmem);
drm_WARN_ON(obj->dev, shmem->pages_use_count);
+ drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
dma_resv_unlock(shmem->base.resv);
}
@@ -257,7 +258,12 @@ int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
drm_WARN_ON(shmem->base.dev, drm_gem_is_imported(&shmem->base));
+ if (refcount_inc_not_zero(&shmem->pages_pin_count))
+ return 0;
+
ret = drm_gem_shmem_get_pages_locked(shmem);
+ if (!ret)
+ refcount_set(&shmem->pages_pin_count, 1);
return ret;
}
@@ -267,7 +273,8 @@ void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
{
dma_resv_assert_held(shmem->base.resv);
- drm_gem_shmem_put_pages_locked(shmem);
+ if (refcount_dec_and_test(&shmem->pages_pin_count))
+ drm_gem_shmem_put_pages_locked(shmem);
}
EXPORT_SYMBOL(drm_gem_shmem_unpin_locked);
@@ -288,6 +295,9 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
+ if (refcount_inc_not_zero(&shmem->pages_pin_count))
+ return 0;
+
ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
if (ret)
return ret;
@@ -311,6 +321,9 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
+ if (refcount_dec_not_one(&shmem->pages_pin_count))
+ return;
+
dma_resv_lock(shmem->base.resv, NULL);
drm_gem_shmem_unpin_locked(shmem);
dma_resv_unlock(shmem->base.resv);
@@ -660,6 +673,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
if (drm_gem_is_imported(&shmem->base))
return;
+ drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 0609e336479d..d411215fe494 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -39,6 +39,17 @@ struct drm_gem_shmem_object {
*/
unsigned int pages_use_count;
+ /**
+ * @pages_pin_count:
+ *
+ * Reference count on the pinned pages table.
+ *
+ * Pages are hard-pinned and reside in memory if the count is
+ * greater than zero. Otherwise, when the count is zero, the pages are
+ * allowed to be evicted and purged by the memory shrinker.
+ */
+ refcount_t pages_pin_count;
+
/**
* @madv: State for madvise
*
--
2.49.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v20 08/10] drm/shmem-helper: Use refcount_t for pages_use_count
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
` (6 preceding siblings ...)
2025-03-22 21:26 ` [PATCH v20 07/10] drm/shmem-helper: Add and use pages_pin_count Dmitry Osipenko
@ 2025-03-22 21:26 ` Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin Dmitry Osipenko
` (3 subsequent siblings)
11 siblings, 0 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:26 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Use the atomic refcount_t helpers for pages_use_count to optimize the
pin/unpin functions by skipping reservation locking while the GEM object's
pin refcount is > 1.
Acked-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 33 ++++++++++------------
drivers/gpu/drm/lima/lima_gem.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +-
drivers/gpu/drm/tests/drm_gem_shmem_test.c | 8 +++---
include/drm/drm_gem_shmem_helper.h | 2 +-
5 files changed, 22 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index d338b36f4eaa..6fb96e790abd 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -176,7 +176,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
if (shmem->pages)
drm_gem_shmem_put_pages_locked(shmem);
- drm_WARN_ON(obj->dev, shmem->pages_use_count);
+ drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_use_count));
drm_WARN_ON(obj->dev, refcount_read(&shmem->pages_pin_count));
dma_resv_unlock(shmem->base.resv);
@@ -194,14 +194,13 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
dma_resv_assert_held(shmem->base.resv);
- if (shmem->pages_use_count++ > 0)
+ if (refcount_inc_not_zero(&shmem->pages_use_count))
return 0;
pages = drm_gem_get_pages(obj);
if (IS_ERR(pages)) {
drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
PTR_ERR(pages));
- shmem->pages_use_count = 0;
return PTR_ERR(pages);
}
@@ -217,6 +216,8 @@ static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
shmem->pages = pages;
+ refcount_set(&shmem->pages_use_count, 1);
+
return 0;
}
@@ -232,21 +233,17 @@ void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
dma_resv_assert_held(shmem->base.resv);
- if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
- return;
-
- if (--shmem->pages_use_count > 0)
- return;
-
+ if (refcount_dec_and_test(&shmem->pages_use_count)) {
#ifdef CONFIG_X86
- if (shmem->map_wc)
- set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+ if (shmem->map_wc)
+ set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
#endif
- drm_gem_put_pages(obj, shmem->pages,
- shmem->pages_mark_dirty_on_put,
- shmem->pages_mark_accessed_on_put);
- shmem->pages = NULL;
+ drm_gem_put_pages(obj, shmem->pages,
+ shmem->pages_mark_dirty_on_put,
+ shmem->pages_mark_accessed_on_put);
+ shmem->pages = NULL;
+ }
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
@@ -582,8 +579,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
* mmap'd, vm_open() just grabs an additional reference for the new
* mm the vma is getting copied into (ie. on fork()).
*/
- if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
- shmem->pages_use_count++;
+ drm_WARN_ON_ONCE(obj->dev,
+ !refcount_inc_not_zero(&shmem->pages_use_count));
dma_resv_unlock(shmem->base.resv);
@@ -674,7 +671,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
return;
drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
- drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
+ drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count));
drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
}
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 609221351cde..5deec673c11e 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -47,7 +47,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
}
bo->base.pages = pages;
- bo->base.pages_use_count = 1;
+ refcount_set(&bo->base.pages_use_count, 1);
mapping_set_unevictable(mapping);
}
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index b91019cd5acb..4a0b4bf03f1a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -489,7 +489,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
goto err_unlock;
}
bo->base.pages = pages;
- bo->base.pages_use_count = 1;
+ refcount_set(&bo->base.pages_use_count, 1);
} else {
pages = bo->base.pages;
if (pages[page_offset]) {
diff --git a/drivers/gpu/drm/tests/drm_gem_shmem_test.c b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
index 98884966bb92..1459cdb0c413 100644
--- a/drivers/gpu/drm/tests/drm_gem_shmem_test.c
+++ b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
@@ -134,7 +134,7 @@ static void drm_gem_shmem_test_pin_pages(struct kunit *test)
shmem = drm_gem_shmem_create(drm_dev, TEST_SIZE);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, shmem);
KUNIT_EXPECT_NULL(test, shmem->pages);
- KUNIT_EXPECT_EQ(test, shmem->pages_use_count, 0);
+ KUNIT_EXPECT_EQ(test, refcount_read(&shmem->pages_use_count), 0);
ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
KUNIT_ASSERT_EQ(test, ret, 0);
@@ -142,14 +142,14 @@ static void drm_gem_shmem_test_pin_pages(struct kunit *test)
ret = drm_gem_shmem_pin(shmem);
KUNIT_ASSERT_EQ(test, ret, 0);
KUNIT_ASSERT_NOT_NULL(test, shmem->pages);
- KUNIT_EXPECT_EQ(test, shmem->pages_use_count, 1);
+ KUNIT_EXPECT_EQ(test, refcount_read(&shmem->pages_use_count), 1);
for (i = 0; i < (shmem->base.size >> PAGE_SHIFT); i++)
KUNIT_ASSERT_NOT_NULL(test, shmem->pages[i]);
drm_gem_shmem_unpin(shmem);
KUNIT_EXPECT_NULL(test, shmem->pages);
- KUNIT_EXPECT_EQ(test, shmem->pages_use_count, 0);
+ KUNIT_EXPECT_EQ(test, refcount_read(&shmem->pages_use_count), 0);
}
/*
@@ -251,7 +251,7 @@ static void drm_gem_shmem_test_get_sg_table(struct kunit *test)
sgt = drm_gem_shmem_get_pages_sgt(shmem);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, sgt);
KUNIT_ASSERT_NOT_NULL(test, shmem->pages);
- KUNIT_EXPECT_EQ(test, shmem->pages_use_count, 1);
+ KUNIT_EXPECT_EQ(test, refcount_read(&shmem->pages_use_count), 1);
KUNIT_EXPECT_PTR_EQ(test, sgt, shmem->sgt);
for_each_sgtable_sg(sgt, sg, si) {
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d411215fe494..3a4be433d5f0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -37,7 +37,7 @@ struct drm_gem_shmem_object {
* Reference count on the pages table.
* The pages are put when the count reaches zero.
*/
- unsigned int pages_use_count;
+ refcount_t pages_use_count;
/**
* @pages_pin_count:
--
2.49.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
` (7 preceding siblings ...)
2025-03-22 21:26 ` [PATCH v20 08/10] drm/shmem-helper: Use refcount_t for pages_use_count Dmitry Osipenko
@ 2025-03-22 21:26 ` Dmitry Osipenko
2025-04-02 12:47 ` Thomas Zimmermann
2025-03-22 21:26 ` [PATCH v20 10/10] drm/shmem-helper: Use refcount_t for vmap_use_count Dmitry Osipenko
` (2 subsequent siblings)
11 siblings, 1 reply; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:26 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Vmapped pages must be pinned in memory. Previously, get/put_pages()
implicitly hard-pinned/unpinned the pages. This will no longer be the case
with the addition of the memory shrinker, because pages_use_count > 0 will
no longer determine whether pages are hard-pinned (they will be soft-pinned);
the new pages_pin_count does the hard-pinning. Switch vmap/vunmap() to use
the pin/unpin() functions in preparation for adding memory shrinker support
to drm-shmem.
Acked-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 6 +++---
include/drm/drm_gem_shmem_helper.h | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 6fb96e790abd..84a196bbe44f 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -360,7 +360,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
return 0;
}
- ret = drm_gem_shmem_get_pages_locked(shmem);
+ ret = drm_gem_shmem_pin_locked(shmem);
if (ret)
goto err_zero_use;
@@ -383,7 +383,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
err_put_pages:
if (!drm_gem_is_imported(obj))
- drm_gem_shmem_put_pages_locked(shmem);
+ drm_gem_shmem_unpin_locked(shmem);
err_zero_use:
shmem->vmap_use_count = 0;
@@ -420,7 +420,7 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
return;
vunmap(shmem->vaddr);
- drm_gem_shmem_put_pages_locked(shmem);
+ drm_gem_shmem_unpin_locked(shmem);
}
shmem->vaddr = NULL;
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 3a4be433d5f0..8b9bba87ae63 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -130,7 +130,7 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);
static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
{
return (shmem->madv > 0) &&
- !shmem->vmap_use_count && shmem->sgt &&
+ !refcount_read(&shmem->pages_pin_count) && shmem->sgt &&
!shmem->base.dma_buf && !drm_gem_is_imported(&shmem->base);
}
--
2.49.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
* [PATCH v20 10/10] drm/shmem-helper: Use refcount_t for vmap_use_count
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
` (8 preceding siblings ...)
2025-03-22 21:26 ` [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin Dmitry Osipenko
@ 2025-03-22 21:26 ` Dmitry Osipenko
2025-03-25 14:17 ` [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Thomas Zimmermann
2025-04-03 0:37 ` Lucas De Marchi
11 siblings, 0 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-22 21:26 UTC (permalink / raw)
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Use the refcount_t helpers for vmap_use_count to make the refcounting
consistent with pages_use_count and pages_pin_count, which already use
refcount_t. This also lets vmapping benefit from refcount_t's overflow
checks.
Acked-by: Maxime Ripard <mripard@kernel.org>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 28 ++++++++++------------
drivers/gpu/drm/tests/drm_gem_shmem_test.c | 6 ++---
include/drm/drm_gem_shmem_helper.h | 2 +-
3 files changed, 16 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 84a196bbe44f..2d924d547a51 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -165,7 +165,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
} else {
dma_resv_lock(shmem->base.resv, NULL);
- drm_WARN_ON(obj->dev, shmem->vmap_use_count);
+ drm_WARN_ON(obj->dev, refcount_read(&shmem->vmap_use_count));
if (shmem->sgt) {
dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
@@ -355,23 +355,25 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
dma_resv_assert_held(shmem->base.resv);
- if (shmem->vmap_use_count++ > 0) {
+ if (refcount_inc_not_zero(&shmem->vmap_use_count)) {
iosys_map_set_vaddr(map, shmem->vaddr);
return 0;
}
ret = drm_gem_shmem_pin_locked(shmem);
if (ret)
- goto err_zero_use;
+ return ret;
if (shmem->map_wc)
prot = pgprot_writecombine(prot);
shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
VM_MAP, prot);
- if (!shmem->vaddr)
+ if (!shmem->vaddr) {
ret = -ENOMEM;
- else
+ } else {
iosys_map_set_vaddr(map, shmem->vaddr);
+ refcount_set(&shmem->vmap_use_count, 1);
+ }
}
if (ret) {
@@ -384,8 +386,6 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
err_put_pages:
if (!drm_gem_is_imported(obj))
drm_gem_shmem_unpin_locked(shmem);
-err_zero_use:
- shmem->vmap_use_count = 0;
return ret;
}
@@ -413,14 +413,10 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
} else {
dma_resv_assert_held(shmem->base.resv);
- if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
- return;
-
- if (--shmem->vmap_use_count > 0)
- return;
-
- vunmap(shmem->vaddr);
- drm_gem_shmem_unpin_locked(shmem);
+ if (refcount_dec_and_test(&shmem->vmap_use_count)) {
+ vunmap(shmem->vaddr);
+ drm_gem_shmem_unpin_locked(shmem);
+ }
}
shmem->vaddr = NULL;
@@ -672,7 +668,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&shmem->pages_pin_count));
drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&shmem->pages_use_count));
- drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
+ drm_printf_indent(p, indent, "vmap_use_count=%u\n", refcount_read(&shmem->vmap_use_count));
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);
diff --git a/drivers/gpu/drm/tests/drm_gem_shmem_test.c b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
index 1459cdb0c413..81cadaecdd4f 100644
--- a/drivers/gpu/drm/tests/drm_gem_shmem_test.c
+++ b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
@@ -168,7 +168,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
shmem = drm_gem_shmem_create(drm_dev, TEST_SIZE);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, shmem);
KUNIT_EXPECT_NULL(test, shmem->vaddr);
- KUNIT_EXPECT_EQ(test, shmem->vmap_use_count, 0);
+ KUNIT_EXPECT_EQ(test, refcount_read(&shmem->vmap_use_count), 0);
ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
KUNIT_ASSERT_EQ(test, ret, 0);
@@ -177,7 +177,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
KUNIT_ASSERT_EQ(test, ret, 0);
KUNIT_ASSERT_NOT_NULL(test, shmem->vaddr);
KUNIT_ASSERT_FALSE(test, iosys_map_is_null(&map));
- KUNIT_EXPECT_EQ(test, shmem->vmap_use_count, 1);
+ KUNIT_EXPECT_EQ(test, refcount_read(&shmem->vmap_use_count), 1);
iosys_map_memset(&map, 0, TEST_BYTE, TEST_SIZE);
for (i = 0; i < TEST_SIZE; i++)
@@ -185,7 +185,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
drm_gem_shmem_vunmap_locked(shmem, &map);
KUNIT_EXPECT_NULL(test, shmem->vaddr);
- KUNIT_EXPECT_EQ(test, shmem->vmap_use_count, 0);
+ KUNIT_EXPECT_EQ(test, refcount_read(&shmem->vmap_use_count), 0);
}
/*
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 8b9bba87ae63..b4f993da3cae 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -82,7 +82,7 @@ struct drm_gem_shmem_object {
* Reference count on the virtual address.
* The address are un-mapped when the count reaches zero.
*/
- unsigned int vmap_use_count;
+ refcount_t vmap_use_count;
/**
* @pages_mark_dirty_on_put:
--
2.49.0
^ permalink raw reply related [flat|nested] 32+ messages in thread
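The diff above converts `vmap_use_count` to `refcount_t` and moves the teardown under `refcount_dec_and_test()`, so the mapping is destroyed exactly once, by whichever caller drops the last reference. A minimal userspace sketch of that pattern (plain C11 atomics standing in for the kernel's `refcount_t`; all `mock_*` names are illustrative, not the real DRM API):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Userspace sketch (not kernel code) of the refcount_dec_and_test()
 * pattern: teardown runs exactly once, on the final put. */
struct mock_shmem {
	atomic_int vmap_use_count;
	void *vaddr;        /* stand-in for the kernel vmap address */
	int vunmap_calls;   /* counts how many times teardown ran */
};

static void mock_vmap(struct mock_shmem *s)
{
	/* first user creates the mapping, later users share it */
	if (atomic_fetch_add(&s->vmap_use_count, 1) == 0)
		s->vaddr = &s->vunmap_calls; /* pretend mapping */
}

static void mock_vunmap(struct mock_shmem *s)
{
	/* mirrors refcount_dec_and_test(): true only for the last put */
	if (atomic_fetch_sub(&s->vmap_use_count, 1) == 1) {
		s->vaddr = NULL;
		s->vunmap_calls++;
	}
}
```

Unlike this sketch, the kernel's `refcount_t` additionally saturates on overflow/underflow instead of wrapping, which is the main motivation for the conversion away from a bare `unsigned int`.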
* Re: [PATCH v20 01/10] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names
2025-03-22 21:25 ` [PATCH v20 01/10] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names Dmitry Osipenko
@ 2025-03-24 13:05 ` Christian König
2025-03-25 13:15 ` Dmitry Osipenko
0 siblings, 1 reply; 32+ messages in thread
From: Christian König @ 2025-03-24 13:05 UTC (permalink / raw)
To: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
On 22.03.25 22:25, Dmitry Osipenko wrote:
> Make drm/gem API function names consistent by having locked functions
> use the _locked postfix in the name, while the unlocked variants don't
> use the _unlocked postfix. Rename drm_gem_v/unmap() function names to
> make them consistent with the rest of the API functions.
I usually prefer keeping the function which people should use for new code without a postfix, but that isn't a must have.
Either way patch #1-#3 are Reviewed-by: Christian König <christian.koenig@amd.com>.
Regards,
Christian.
>
> Acked-by: Maxime Ripard <mripard@kernel.org>
> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> drivers/gpu/drm/drm_client.c | 10 +++++-----
> drivers/gpu/drm/drm_gem.c | 20 ++++++++++----------
> drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +++---
> drivers/gpu/drm/drm_internal.h | 4 ++--
> drivers/gpu/drm/drm_prime.c | 4 ++--
> drivers/gpu/drm/lima/lima_sched.c | 4 ++--
> drivers/gpu/drm/panfrost/panfrost_dump.c | 4 ++--
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 +++---
> drivers/gpu/drm/panthor/panthor_gem.h | 4 ++--
> drivers/gpu/drm/panthor/panthor_sched.c | 4 ++--
> include/drm/drm_gem.h | 4 ++--
> 11 files changed, 35 insertions(+), 35 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> index 549b28a5918c..f1de7faf9fb4 100644
> --- a/drivers/gpu/drm/drm_client.c
> +++ b/drivers/gpu/drm/drm_client.c
> @@ -174,7 +174,7 @@ EXPORT_SYMBOL(drm_client_release);
> static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
> {
> if (buffer->gem) {
> - drm_gem_vunmap_unlocked(buffer->gem, &buffer->map);
> + drm_gem_vunmap(buffer->gem, &buffer->map);
> drm_gem_object_put(buffer->gem);
> }
>
> @@ -252,7 +252,7 @@ int drm_client_buffer_vmap_local(struct drm_client_buffer *buffer,
>
> drm_gem_lock(gem);
>
> - ret = drm_gem_vmap(gem, map);
> + ret = drm_gem_vmap_locked(gem, map);
> if (ret)
> goto err_drm_gem_vmap_unlocked;
> *map_copy = *map;
> @@ -278,7 +278,7 @@ void drm_client_buffer_vunmap_local(struct drm_client_buffer *buffer)
> struct drm_gem_object *gem = buffer->gem;
> struct iosys_map *map = &buffer->map;
>
> - drm_gem_vunmap(gem, map);
> + drm_gem_vunmap_locked(gem, map);
> drm_gem_unlock(gem);
> }
> EXPORT_SYMBOL(drm_client_buffer_vunmap_local);
> @@ -316,7 +316,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer,
> ret = drm_gem_pin_locked(gem);
> if (ret)
> goto err_drm_gem_pin_locked;
> - ret = drm_gem_vmap(gem, map);
> + ret = drm_gem_vmap_locked(gem, map);
> if (ret)
> goto err_drm_gem_vmap;
>
> @@ -348,7 +348,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
> struct iosys_map *map = &buffer->map;
>
> drm_gem_lock(gem);
> - drm_gem_vunmap(gem, map);
> + drm_gem_vunmap_locked(gem, map);
> drm_gem_unpin_locked(gem);
> drm_gem_unlock(gem);
> }
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index c6240bab3fa5..27778e5ce0c0 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1216,7 +1216,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
> dma_resv_unlock(obj->resv);
> }
>
> -int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
> +int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
> {
> int ret;
>
> @@ -1233,9 +1233,9 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
>
> return 0;
> }
> -EXPORT_SYMBOL(drm_gem_vmap);
> +EXPORT_SYMBOL(drm_gem_vmap_locked);
>
> -void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
> +void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
> {
> dma_resv_assert_held(obj->resv);
>
> @@ -1248,7 +1248,7 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
> /* Always set the mapping to NULL. Callers may rely on this. */
> iosys_map_clear(map);
> }
> -EXPORT_SYMBOL(drm_gem_vunmap);
> +EXPORT_SYMBOL(drm_gem_vunmap_locked);
>
> void drm_gem_lock(struct drm_gem_object *obj)
> {
> @@ -1262,25 +1262,25 @@ void drm_gem_unlock(struct drm_gem_object *obj)
> }
> EXPORT_SYMBOL(drm_gem_unlock);
>
> -int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
> +int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
> {
> int ret;
>
> dma_resv_lock(obj->resv, NULL);
> - ret = drm_gem_vmap(obj, map);
> + ret = drm_gem_vmap_locked(obj, map);
> dma_resv_unlock(obj->resv);
>
> return ret;
> }
> -EXPORT_SYMBOL(drm_gem_vmap_unlocked);
> +EXPORT_SYMBOL(drm_gem_vmap);
>
> -void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
> +void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
> {
> dma_resv_lock(obj->resv, NULL);
> - drm_gem_vunmap(obj, map);
> + drm_gem_vunmap_locked(obj, map);
> dma_resv_unlock(obj->resv);
> }
> -EXPORT_SYMBOL(drm_gem_vunmap_unlocked);
> +EXPORT_SYMBOL(drm_gem_vunmap);
>
> /**
> * drm_gem_lock_reservations - Sets up the ww context and acquires
> diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
> index 0fbeb686e561..6f72e7a0f427 100644
> --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
> +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
> @@ -362,7 +362,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
> ret = -EINVAL;
> goto err_drm_gem_vunmap;
> }
> - ret = drm_gem_vmap_unlocked(obj, &map[i]);
> + ret = drm_gem_vmap(obj, &map[i]);
> if (ret)
> goto err_drm_gem_vunmap;
> }
> @@ -384,7 +384,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct iosys_map *map,
> obj = drm_gem_fb_get_obj(fb, i);
> if (!obj)
> continue;
> - drm_gem_vunmap_unlocked(obj, &map[i]);
> + drm_gem_vunmap(obj, &map[i]);
> }
> return ret;
> }
> @@ -411,7 +411,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, struct iosys_map *map)
> continue;
> if (iosys_map_is_null(&map[i]))
> continue;
> - drm_gem_vunmap_unlocked(obj, &map[i]);
> + drm_gem_vunmap(obj, &map[i]);
> }
> }
> EXPORT_SYMBOL(drm_gem_fb_vunmap);
> diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
> index b2b6a8e49dda..e44f28fd81d3 100644
> --- a/drivers/gpu/drm/drm_internal.h
> +++ b/drivers/gpu/drm/drm_internal.h
> @@ -179,8 +179,8 @@ int drm_gem_pin_locked(struct drm_gem_object *obj);
> void drm_gem_unpin_locked(struct drm_gem_object *obj);
> int drm_gem_pin(struct drm_gem_object *obj);
> void drm_gem_unpin(struct drm_gem_object *obj);
> -int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
> -void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
> +int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map);
> +void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map);
>
> /* drm_debugfs.c drm_debugfs_crc.c */
> #if defined(CONFIG_DEBUG_FS)
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 4b8c6075e46a..d828502268b8 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -707,7 +707,7 @@ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map)
> {
> struct drm_gem_object *obj = dma_buf->priv;
>
> - return drm_gem_vmap(obj, map);
> + return drm_gem_vmap_locked(obj, map);
> }
> EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
>
> @@ -723,7 +723,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
> {
> struct drm_gem_object *obj = dma_buf->priv;
>
> - drm_gem_vunmap(obj, map);
> + drm_gem_vunmap_locked(obj, map);
> }
> EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
>
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index 825135a26aa4..7934098e651b 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -371,7 +371,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
> } else {
> buffer_chunk->size = lima_bo_size(bo);
>
> - ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
> + ret = drm_gem_vmap(&bo->base.base, &map);
> if (ret) {
> kvfree(et);
> goto out;
> @@ -379,7 +379,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
>
> memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>
> - drm_gem_vunmap_unlocked(&bo->base.base, &map);
> + drm_gem_vunmap(&bo->base.base, &map);
> }
>
> buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/panfrost/panfrost_dump.c b/drivers/gpu/drm/panfrost/panfrost_dump.c
> index 47751302f1bc..4042afe2fbf4 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_dump.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_dump.c
> @@ -209,7 +209,7 @@ void panfrost_core_dump(struct panfrost_job *job)
> goto dump_header;
> }
>
> - ret = drm_gem_vmap_unlocked(&bo->base.base, &map);
> + ret = drm_gem_vmap(&bo->base.base, &map);
> if (ret) {
> dev_err(pfdev->dev, "Panfrost Dump: couldn't map Buffer Object\n");
> iter.hdr->bomap.valid = 0;
> @@ -228,7 +228,7 @@ void panfrost_core_dump(struct panfrost_job *job)
> vaddr = map.vaddr;
> memcpy(iter.data, vaddr, bo->base.base.size);
>
> - drm_gem_vunmap_unlocked(&bo->base.base, &map);
> + drm_gem_vunmap(&bo->base.base, &map);
>
> iter.hdr->bomap.valid = 1;
>
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index ba9b6e2b2636..52befead08c6 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -106,7 +106,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> goto err_close_bo;
> }
>
> - ret = drm_gem_vmap_unlocked(&bo->base, &map);
> + ret = drm_gem_vmap(&bo->base, &map);
> if (ret)
> goto err_put_mapping;
> perfcnt->buf = map.vaddr;
> @@ -165,7 +165,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
> return 0;
>
> err_vunmap:
> - drm_gem_vunmap_unlocked(&bo->base, &map);
> + drm_gem_vunmap(&bo->base, &map);
> err_put_mapping:
> panfrost_gem_mapping_put(perfcnt->mapping);
> err_close_bo:
> @@ -195,7 +195,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
> GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>
> perfcnt->user = NULL;
> - drm_gem_vunmap_unlocked(&perfcnt->mapping->obj->base.base, &map);
> + drm_gem_vunmap(&perfcnt->mapping->obj->base.base, &map);
> perfcnt->buf = NULL;
> panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
> panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> index 5749ef2ebe03..1a363bb814f4 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.h
> +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> @@ -112,7 +112,7 @@ panthor_kernel_bo_vmap(struct panthor_kernel_bo *bo)
> if (bo->kmap)
> return 0;
>
> - ret = drm_gem_vmap_unlocked(bo->obj, &map);
> + ret = drm_gem_vmap(bo->obj, &map);
> if (ret)
> return ret;
>
> @@ -126,7 +126,7 @@ panthor_kernel_bo_vunmap(struct panthor_kernel_bo *bo)
> if (bo->kmap) {
> struct iosys_map map = IOSYS_MAP_INIT_VADDR(bo->kmap);
>
> - drm_gem_vunmap_unlocked(bo->obj, &map);
> + drm_gem_vunmap(bo->obj, &map);
> bo->kmap = NULL;
> }
> }
> diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
> index 4d31d1967716..446ec780eb4a 100644
> --- a/drivers/gpu/drm/panthor/panthor_sched.c
> +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> @@ -840,7 +840,7 @@ panthor_queue_put_syncwait_obj(struct panthor_queue *queue)
> if (queue->syncwait.kmap) {
> struct iosys_map map = IOSYS_MAP_INIT_VADDR(queue->syncwait.kmap);
>
> - drm_gem_vunmap_unlocked(queue->syncwait.obj, &map);
> + drm_gem_vunmap(queue->syncwait.obj, &map);
> queue->syncwait.kmap = NULL;
> }
>
> @@ -866,7 +866,7 @@ panthor_queue_get_syncwait_obj(struct panthor_group *group, struct panthor_queue
> goto err_put_syncwait_obj;
>
> queue->syncwait.obj = &bo->base.base;
> - ret = drm_gem_vmap_unlocked(queue->syncwait.obj, &map);
> + ret = drm_gem_vmap(queue->syncwait.obj, &map);
> if (drm_WARN_ON(&ptdev->base, ret))
> goto err_put_syncwait_obj;
>
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 2bf893eabb4b..13c312ca07ae 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -537,8 +537,8 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
> void drm_gem_lock(struct drm_gem_object *obj);
> void drm_gem_unlock(struct drm_gem_object *obj);
>
> -int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
> -void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
> +int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
> +void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
>
> int drm_gem_objects_lookup(struct drm_file *filp, void __user *bo_handles,
> int count, struct drm_gem_object ***objs_out);
* Re: [PATCH v20 01/10] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names
2025-03-24 13:05 ` Christian König
@ 2025-03-25 13:15 ` Dmitry Osipenko
0 siblings, 0 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-25 13:15 UTC (permalink / raw)
To: Christian König, David Airlie, Simona Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
Gerd Hoffmann, Qiang Yu, Steven Price, Boris Brezillon,
Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
On 3/24/25 16:05, Christian König wrote:
> On 22.03.25 22:25, Dmitry Osipenko wrote:
>> Make drm/gem API function names consistent by having locked functions
>> use the _locked postfix in the name, while the unlocked variants don't
>> use the _unlocked postfix. Rename drm_gem_v/unmap() function names to
>> make them consistent with the rest of the API functions.
>
> I usually prefer keeping the function which people should use for new code without a postfix, but that isn't a must have.
>
> Either way patch #1-#3 are Reviewed-by: Christian König <christian.koenig@amd.com>.
Thanks!
--
Best regards,
Dmitry
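The convention settled on in the exchange above can be sketched in userspace C (a pthread mutex standing in for the dma_resv lock; the `mock_*` names are illustrative, not the real DRM API): the bare name takes the lock itself, while the `_locked` variant expects the caller to already hold it.

```c
#include <pthread.h>

/* Sketch of the naming rule from patch 01: drm_gem_vmap() locks and
 * delegates, drm_gem_vmap_locked() assumes the lock is held. */
struct mock_gem {
	pthread_mutex_t resv;  /* stand-in for obj->resv */
	int mapped;
};

static int mock_vmap_locked(struct mock_gem *obj)
{
	/* caller holds obj->resv; the kernel would assert this with
	 * dma_resv_assert_held() */
	obj->mapped++;
	return 0;
}

static int mock_vmap(struct mock_gem *obj)
{
	int ret;

	pthread_mutex_lock(&obj->resv);
	ret = mock_vmap_locked(obj); /* unlocked wrapper delegates */
	pthread_mutex_unlock(&obj->resv);

	return ret;
}
```

Keeping the lock-taking variant postfix-free matches Christian's preference that the function new code should normally call carries the shortest name.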
* Re: [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1)
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
` (9 preceding siblings ...)
2025-03-22 21:26 ` [PATCH v20 10/10] drm/shmem-helper: Use refcount_t for vmap_use_count Dmitry Osipenko
@ 2025-03-25 14:17 ` Thomas Zimmermann
2025-03-26 20:08 ` Dmitry Osipenko
2025-04-03 0:37 ` Lucas De Marchi
11 siblings, 1 reply; 32+ messages in thread
From: Thomas Zimmermann @ 2025-03-25 14:17 UTC (permalink / raw)
To: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
I've looked through this before, so
Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
for the series.
On 22.03.25 22:25, Dmitry Osipenko wrote:
> Hi,
>
> This is a continuation of a year-old series that adds a generic DRM-shmem
> shrinker [1]. The old series became too big, with too many patches, so it is
> more reasonable to split it up into multiple smaller patchsets. Here is
> the first part, which makes the preparatory DRM changes.
>
> [1] https://lore.kernel.org/dri-devel/20240105184624.508603-1-dmitry.osipenko@collabora.com/
>
> Changelog:
>
> v20:- Rebased on recent drm-misc. Added r-bs that were given to v19.
>
> Dmitry Osipenko (10):
> drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function
> names
> drm/gem: Add _locked postfix to functions that have unlocked
> counterpart
> drm/gem: Document locking rule of vmap and evict callbacks
> drm/shmem-helper: Make all exported symbols GPL
> drm/shmem-helper: Refactor locked/unlocked functions
> drm/shmem-helper: Remove obsoleted is_iomem test
> drm/shmem-helper: Add and use pages_pin_count
> drm/shmem-helper: Use refcount_t for pages_use_count
> drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
> drm/shmem-helper: Use refcount_t for vmap_use_count
>
> drivers/gpu/drm/drm_client.c | 10 +-
> drivers/gpu/drm/drm_gem.c | 26 ++--
> drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +-
> drivers/gpu/drm/drm_gem_shmem_helper.c | 145 +++++++++---------
> drivers/gpu/drm/drm_internal.h | 4 +-
> drivers/gpu/drm/drm_prime.c | 4 +-
> drivers/gpu/drm/imagination/pvr_gem.c | 4 +-
> drivers/gpu/drm/lima/lima_gem.c | 4 +-
> drivers/gpu/drm/lima/lima_sched.c | 4 +-
> drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +-
> drivers/gpu/drm/panfrost/panfrost_dump.c | 4 +-
> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 2 +-
> drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +-
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 +-
> drivers/gpu/drm/panthor/panthor_gem.h | 4 +-
> drivers/gpu/drm/panthor/panthor_sched.c | 4 +-
> drivers/gpu/drm/tests/drm_gem_shmem_test.c | 28 ++--
> include/drm/drm_gem.h | 15 +-
> include/drm/drm_gem_shmem_helper.h | 45 ++++--
> 19 files changed, 167 insertions(+), 152 deletions(-)
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)
* Re: [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1)
2025-03-25 14:17 ` [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Thomas Zimmermann
@ 2025-03-26 20:08 ` Dmitry Osipenko
2025-03-27 10:45 ` Boris Brezillon
0 siblings, 1 reply; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-26 20:08 UTC (permalink / raw)
To: Thomas Zimmermann, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
On 3/25/25 17:17, Thomas Zimmermann wrote:
> I've looked through this before, so
>
> Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> for the series.
Applied to misc-next, thanks!
--
Best regards,
Dmitry
* Re: [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1)
2025-03-26 20:08 ` Dmitry Osipenko
@ 2025-03-27 10:45 ` Boris Brezillon
2025-03-27 10:47 ` Dmitry Osipenko
0 siblings, 1 reply; 32+ messages in thread
From: Boris Brezillon @ 2025-03-27 10:45 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: Thomas Zimmermann, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Frank Binns, Matt Coster, dri-devel, linux-kernel,
kernel
On Wed, 26 Mar 2025 23:08:55 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> On 3/25/25 17:17, Thomas Zimmermann wrote:
> > I've looked through this before, so
> >
> > Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
> >
> > for the series.
>
> Applied to misc-next, thanks!
Looks like the accel drivers were left behind. I just sent a patch
series to address that [1].
[1] https://lore.kernel.org/r/dri-devel/20250327104300.1982058-1-boris.brezillon@collabora.com/
* Re: [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1)
2025-03-27 10:45 ` Boris Brezillon
@ 2025-03-27 10:47 ` Dmitry Osipenko
0 siblings, 0 replies; 32+ messages in thread
From: Dmitry Osipenko @ 2025-03-27 10:47 UTC (permalink / raw)
To: Boris Brezillon
Cc: Thomas Zimmermann, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Frank Binns, Matt Coster, dri-devel, linux-kernel,
kernel
On 3/27/25 13:45, Boris Brezillon wrote:
> On Wed, 26 Mar 2025 23:08:55 +0300
> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>
>> On 3/25/25 17:17, Thomas Zimmermann wrote:
>>> I've looked through this before, so
>>>
>>> Acked-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>
>>> for the series.
>>
>> Applied to misc-next, thanks!
>
> Looks like the accel drivers were left behind. I just sent a patch
> series to address that [1].
>
> [1]https://lore.kernel.org/r/dri-devel/20250327104300.1982058-1-boris.brezillon@collabora.com/
Accel drivers weren't on my radar, thanks a lot!
--
Best regards,
Dmitry
* Re: [PATCH v20 05/10] drm/shmem-helper: Refactor locked/unlocked functions
2025-03-22 21:26 ` [PATCH v20 05/10] drm/shmem-helper: Refactor locked/unlocked functions Dmitry Osipenko
@ 2025-03-27 11:25 ` Jani Nikula
2025-03-27 11:27 ` Jani Nikula
0 siblings, 1 reply; 32+ messages in thread
From: Jani Nikula @ 2025-03-27 11:25 UTC (permalink / raw)
To: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Christian König,
Gerd Hoffmann, Qiang Yu, Steven Price, Boris Brezillon,
Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
On Sun, 23 Mar 2025, Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> Add the _locked postfix to, and remove the _unlocked postfix from, the
> drm-shmem function names, making them consistent with the drm/gem core code.
This breaks the build for drivers/accel/ivpu/ivpu_gem.c.
Please enable CONFIG_DRM_ACCEL_IVPU=m and fix the fallout.
BR,
Jani.
>
> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 60 +++++++++----------
> drivers/gpu/drm/imagination/pvr_gem.c | 4 +-
> drivers/gpu/drm/lima/lima_gem.c | 2 +-
> drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +-
> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 2 +-
> drivers/gpu/drm/tests/drm_gem_shmem_test.c | 14 ++---
> include/drm/drm_gem_shmem_helper.h | 28 ++++-----
> 7 files changed, 56 insertions(+), 56 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 98c68999d61a..a9e35a46e72b 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -174,7 +174,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> kfree(shmem->sgt);
> }
> if (shmem->pages)
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_put_pages_locked(shmem);
>
> drm_WARN_ON(obj->dev, shmem->pages_use_count);
>
> @@ -186,7 +186,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>
> -static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> struct page **pages;
> @@ -220,12 +220,12 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> }
>
> /*
> - * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
> + * drm_gem_shmem_put_pages_locked - Decrease use count on the backing pages for a shmem GEM object
> * @shmem: shmem GEM object
> *
> * This function decreases the use count and puts the backing pages when use drops to zero.
> */
> -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> @@ -247,7 +247,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> shmem->pages_mark_accessed_on_put);
> shmem->pages = NULL;
> }
> -EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
>
> int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
> {
> @@ -257,7 +257,7 @@ int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
>
> drm_WARN_ON(shmem->base.dev, drm_gem_is_imported(&shmem->base));
>
> - ret = drm_gem_shmem_get_pages(shmem);
> + ret = drm_gem_shmem_get_pages_locked(shmem);
>
> return ret;
> }
> @@ -267,7 +267,7 @@ void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
> {
> dma_resv_assert_held(shmem->base.resv);
>
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_put_pages_locked(shmem);
> }
> EXPORT_SYMBOL(drm_gem_shmem_unpin_locked);
>
> @@ -318,7 +318,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
> EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
>
> /*
> - * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
> + * drm_gem_shmem_vmap_locked - Create a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> * store.
> @@ -327,13 +327,13 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
> * exists for the buffer backing the shmem GEM object. It hides the differences
> * between dma-buf imported and natively allocated objects.
> *
> - * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
> + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_locked().
> *
> * Returns:
> * 0 on success or a negative error code on failure.
> */
> -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> +int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> + struct iosys_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
> int ret = 0;
> @@ -356,7 +356,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> return 0;
> }
>
> - ret = drm_gem_shmem_get_pages(shmem);
> + ret = drm_gem_shmem_get_pages_locked(shmem);
> if (ret)
> goto err_zero_use;
>
> @@ -379,28 +379,28 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>
> err_put_pages:
> if (!drm_gem_is_imported(obj))
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_put_pages_locked(shmem);
> err_zero_use:
> shmem->vmap_use_count = 0;
>
> return ret;
> }
> -EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap);
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked);
>
> /*
> - * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
> + * drm_gem_shmem_vunmap_locked - Unmap a virtual mapping for a shmem GEM object
> * @shmem: shmem GEM object
> * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> * This function cleans up a kernel virtual address mapping acquired by
> - * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> - * zero.
> + * drm_gem_shmem_vmap_locked(). The mapping is only removed when the use count
> + * drops to zero.
> *
> * This function hides the differences between dma-buf imported and natively
> * allocated objects.
> */
> -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map)
> +void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> + struct iosys_map *map)
> {
> struct drm_gem_object *obj = &shmem->base;
>
> @@ -416,12 +416,12 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> return;
>
> vunmap(shmem->vaddr);
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_put_pages_locked(shmem);
> }
>
> shmem->vaddr = NULL;
> }
> -EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap);
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap_locked);
>
> static int
> drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
> @@ -449,7 +449,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
> /* Update madvise status, returns true if not purged, else
> * false or -errno.
> */
> -int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
> +int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv)
> {
> dma_resv_assert_held(shmem->base.resv);
>
> @@ -460,9 +460,9 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
>
> return (madv >= 0);
> }
> -EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked);
>
> -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> struct drm_device *dev = obj->dev;
> @@ -476,7 +476,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
> kfree(shmem->sgt);
> shmem->sgt = NULL;
>
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_put_pages_locked(shmem);
>
> shmem->madv = -1;
>
> @@ -492,7 +492,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
>
> invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
> }
> -EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked);
>
> /**
> * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
> @@ -589,7 +589,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> dma_resv_lock(shmem->base.resv, NULL);
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_put_pages_locked(shmem);
> dma_resv_unlock(shmem->base.resv);
>
> drm_gem_vm_close(vma);
> @@ -639,7 +639,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
> return -EINVAL;
>
> dma_resv_lock(shmem->base.resv, NULL);
> - ret = drm_gem_shmem_get_pages(shmem);
> + ret = drm_gem_shmem_get_pages_locked(shmem);
> dma_resv_unlock(shmem->base.resv);
>
> if (ret)
> @@ -707,7 +707,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
>
> drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
>
> - ret = drm_gem_shmem_get_pages(shmem);
> + ret = drm_gem_shmem_get_pages_locked(shmem);
> if (ret)
> return ERR_PTR(ret);
>
> @@ -729,7 +729,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
> sg_free_table(sgt);
> kfree(sgt);
> err_put_pages:
> - drm_gem_shmem_put_pages(shmem);
> + drm_gem_shmem_put_pages_locked(shmem);
> return ERR_PTR(ret);
> }
>
> diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
> index 6a8c81fe8c1e..d9d7c6d1a138 100644
> --- a/drivers/gpu/drm/imagination/pvr_gem.c
> +++ b/drivers/gpu/drm/imagination/pvr_gem.c
> @@ -203,7 +203,7 @@ pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj)
>
> dma_resv_lock(obj->resv, NULL);
>
> - err = drm_gem_shmem_vmap(shmem_obj, &map);
> + err = drm_gem_shmem_vmap_locked(shmem_obj, &map);
> if (err)
> goto err_unlock;
>
> @@ -257,7 +257,7 @@ pvr_gem_object_vunmap(struct pvr_gem_object *pvr_obj)
> dma_sync_sgtable_for_device(dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
> }
>
> - drm_gem_shmem_vunmap(shmem_obj, &map);
> + drm_gem_shmem_vunmap_locked(shmem_obj, &map);
>
> dma_resv_unlock(obj->resv);
> }
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 9bb997dbb4b9..609221351cde 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -195,7 +195,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
> if (bo->heap_size)
> return -EINVAL;
>
> - return drm_gem_shmem_vmap(&bo->base, map);
> + return drm_gem_shmem_vmap_locked(&bo->base, map);
> }
>
> static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index 0f3935556ac7..a731f6b59a42 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -476,7 +476,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
> }
> }
>
> - args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
> + args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv);
>
> if (args->retained) {
> if (args->madv == PANFROST_MADV_DONTNEED)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> index 3d9f51bd48b6..02b60ea1433a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
> @@ -51,7 +51,7 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
> goto unlock_mappings;
>
> panfrost_gem_teardown_mappings_locked(bo);
> - drm_gem_shmem_purge(&bo->base);
> + drm_gem_shmem_purge_locked(&bo->base);
> ret = true;
>
> dma_resv_unlock(shmem->base.resv);
> diff --git a/drivers/gpu/drm/tests/drm_gem_shmem_test.c b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
> index fd4215e2f982..98884966bb92 100644
> --- a/drivers/gpu/drm/tests/drm_gem_shmem_test.c
> +++ b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
> @@ -173,7 +173,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
> ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
> KUNIT_ASSERT_EQ(test, ret, 0);
>
> - ret = drm_gem_shmem_vmap(shmem, &map);
> + ret = drm_gem_shmem_vmap_locked(shmem, &map);
> KUNIT_ASSERT_EQ(test, ret, 0);
> KUNIT_ASSERT_NOT_NULL(test, shmem->vaddr);
> KUNIT_ASSERT_FALSE(test, iosys_map_is_null(&map));
> @@ -183,7 +183,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
> for (i = 0; i < TEST_SIZE; i++)
> KUNIT_EXPECT_EQ(test, iosys_map_rd(&map, i, u8), TEST_BYTE);
>
> - drm_gem_shmem_vunmap(shmem, &map);
> + drm_gem_shmem_vunmap_locked(shmem, &map);
> KUNIT_EXPECT_NULL(test, shmem->vaddr);
> KUNIT_EXPECT_EQ(test, shmem->vmap_use_count, 0);
> }
> @@ -281,17 +281,17 @@ static void drm_gem_shmem_test_madvise(struct kunit *test)
> ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
> KUNIT_ASSERT_EQ(test, ret, 0);
>
> - ret = drm_gem_shmem_madvise(shmem, 1);
> + ret = drm_gem_shmem_madvise_locked(shmem, 1);
> KUNIT_EXPECT_TRUE(test, ret);
> KUNIT_ASSERT_EQ(test, shmem->madv, 1);
>
> /* Set madv to a negative value */
> - ret = drm_gem_shmem_madvise(shmem, -1);
> + ret = drm_gem_shmem_madvise_locked(shmem, -1);
> KUNIT_EXPECT_FALSE(test, ret);
> KUNIT_ASSERT_EQ(test, shmem->madv, -1);
>
> /* Check that madv cannot be set back to a positive value */
> - ret = drm_gem_shmem_madvise(shmem, 0);
> + ret = drm_gem_shmem_madvise_locked(shmem, 0);
> KUNIT_EXPECT_FALSE(test, ret);
> KUNIT_ASSERT_EQ(test, shmem->madv, -1);
> }
> @@ -319,7 +319,7 @@ static void drm_gem_shmem_test_purge(struct kunit *test)
> ret = drm_gem_shmem_is_purgeable(shmem);
> KUNIT_EXPECT_FALSE(test, ret);
>
> - ret = drm_gem_shmem_madvise(shmem, 1);
> + ret = drm_gem_shmem_madvise_locked(shmem, 1);
> KUNIT_EXPECT_TRUE(test, ret);
>
> /* The scatter/gather table will be freed by drm_gem_shmem_free */
> @@ -329,7 +329,7 @@ static void drm_gem_shmem_test_purge(struct kunit *test)
> ret = drm_gem_shmem_is_purgeable(shmem);
> KUNIT_EXPECT_TRUE(test, ret);
>
> - drm_gem_shmem_purge(shmem);
> + drm_gem_shmem_purge_locked(shmem);
> KUNIT_EXPECT_NULL(test, shmem->pages);
> KUNIT_EXPECT_NULL(test, shmem->sgt);
> KUNIT_EXPECT_EQ(test, shmem->madv, -1);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index cef5a6b5a4d6..0609e336479d 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -102,19 +102,19 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de
> struct vfsmount *gemfs);
> void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
>
> -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
> +void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem);
> int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
> -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map);
> -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
> - struct iosys_map *map);
> +int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> + struct iosys_map *map);
> +void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> + struct iosys_map *map);
> int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma);
>
> int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem);
> void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem);
>
> -int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
> +int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);
>
> static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
> {
> @@ -123,7 +123,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
> !shmem->base.dma_buf && !drm_gem_is_imported(&shmem->base);
> }
>
> -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
> +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
>
> struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
> struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
> @@ -214,12 +214,12 @@ static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_
> }
>
> /*
> - * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap()
> + * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap_locked()
> * @obj: GEM object
> * @map: Returns the kernel virtual address of the SHMEM GEM object's backing store.
> *
> - * This function wraps drm_gem_shmem_vmap(). Drivers that employ the shmem helpers should
> - * use it as their &drm_gem_object_funcs.vmap handler.
> + * This function wraps drm_gem_shmem_vmap_locked(). Drivers that employ the shmem
> + * helpers should use it as their &drm_gem_object_funcs.vmap handler.
> *
> * Returns:
> * 0 on success or a negative error code on failure.
> @@ -229,7 +229,7 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> - return drm_gem_shmem_vmap(shmem, map);
> + return drm_gem_shmem_vmap_locked(shmem, map);
> }
>
> /*
> @@ -237,15 +237,15 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
> * @obj: GEM object
> * @map: Kernel virtual address where the SHMEM GEM object was mapped
> *
> - * This function wraps drm_gem_shmem_vunmap(). Drivers that employ the shmem helpers should
> - * use it as their &drm_gem_object_funcs.vunmap handler.
> + * This function wraps drm_gem_shmem_vunmap_locked(). Drivers that employ the shmem
> + * helpers should use it as their &drm_gem_object_funcs.vunmap handler.
> */
> static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj,
> struct iosys_map *map)
> {
> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> - drm_gem_shmem_vunmap(shmem, map);
> + drm_gem_shmem_vunmap_locked(shmem, map);
> }
>
> /**
--
Jani Nikula, Intel
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v20 05/10] drm/shmem-helper: Refactor locked/unlocked functions
2025-03-27 11:25 ` Jani Nikula
@ 2025-03-27 11:27 ` Jani Nikula
2025-03-27 11:29 ` Jani Nikula
0 siblings, 1 reply; 32+ messages in thread
From: Jani Nikula @ 2025-03-27 11:27 UTC (permalink / raw)
To: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Christian König,
Gerd Hoffmann, Qiang Yu, Steven Price, Boris Brezillon,
Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
On Thu, 27 Mar 2025, Jani Nikula <jani.nikula@linux.intel.com> wrote:
> On Sun, 23 Mar 2025, Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>> Add locked and remove unlocked postfixes from drm-shmem function names,
>> making names consistent with the drm/gem core code.
>
> This breaks the build for drivers/accel/ivpu/ivpu_gem.c.
>
> Please enable CONFIG_DRM_ACCEL_IVPU=m and fix the fallout.
Ditto for CONFIG_DRM_ACCEL_AMDXDNA=m.
>
> BR,
> Jani.
>
>
>>
>> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
>> Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>> ---
>> drivers/gpu/drm/drm_gem_shmem_helper.c | 60 +++++++++----------
>> drivers/gpu/drm/imagination/pvr_gem.c | 4 +-
>> drivers/gpu/drm/lima/lima_gem.c | 2 +-
>> drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +-
>> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 2 +-
>> drivers/gpu/drm/tests/drm_gem_shmem_test.c | 14 ++---
>> include/drm/drm_gem_shmem_helper.h | 28 ++++-----
>> 7 files changed, 56 insertions(+), 56 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
>> index 98c68999d61a..a9e35a46e72b 100644
>> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
>> @@ -174,7 +174,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>> kfree(shmem->sgt);
>> }
>> if (shmem->pages)
>> - drm_gem_shmem_put_pages(shmem);
>> + drm_gem_shmem_put_pages_locked(shmem);
>>
>> drm_WARN_ON(obj->dev, shmem->pages_use_count);
>>
>> @@ -186,7 +186,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>> }
>> EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>>
>> -static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
>> +static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
>> {
>> struct drm_gem_object *obj = &shmem->base;
>> struct page **pages;
>> @@ -220,12 +220,12 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
>> }
>>
>> /*
>> - * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
>> + * drm_gem_shmem_put_pages_locked - Decrease use count on the backing pages for a shmem GEM object
>> * @shmem: shmem GEM object
>> *
>> * This function decreases the use count and puts the backing pages when use drops to zero.
>> */
>> -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
>> +void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
>> {
>> struct drm_gem_object *obj = &shmem->base;
>>
>> @@ -247,7 +247,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
>> shmem->pages_mark_accessed_on_put);
>> shmem->pages = NULL;
>> }
>> -EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
>> +EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
>>
>> int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
>> {
>> @@ -257,7 +257,7 @@ int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
>>
>> drm_WARN_ON(shmem->base.dev, drm_gem_is_imported(&shmem->base));
>>
>> - ret = drm_gem_shmem_get_pages(shmem);
>> + ret = drm_gem_shmem_get_pages_locked(shmem);
>>
>> return ret;
>> }
>> @@ -267,7 +267,7 @@ void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
>> {
>> dma_resv_assert_held(shmem->base.resv);
>>
>> - drm_gem_shmem_put_pages(shmem);
>> + drm_gem_shmem_put_pages_locked(shmem);
>> }
>> EXPORT_SYMBOL(drm_gem_shmem_unpin_locked);
>>
>> @@ -318,7 +318,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
>> EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
>>
>> /*
>> - * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
>> + * drm_gem_shmem_vmap_locked - Create a virtual mapping for a shmem GEM object
>> * @shmem: shmem GEM object
>> * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
>> * store.
>> @@ -327,13 +327,13 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
>> * exists for the buffer backing the shmem GEM object. It hides the differences
>> * between dma-buf imported and natively allocated objects.
>> *
>> - * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
>> + * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_locked().
>> *
>> * Returns:
>> * 0 on success or a negative error code on failure.
>> */
>> -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>> - struct iosys_map *map)
>> +int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>> + struct iosys_map *map)
>> {
>> struct drm_gem_object *obj = &shmem->base;
>> int ret = 0;
>> @@ -356,7 +356,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>> return 0;
>> }
>>
>> - ret = drm_gem_shmem_get_pages(shmem);
>> + ret = drm_gem_shmem_get_pages_locked(shmem);
>> if (ret)
>> goto err_zero_use;
>>
>> @@ -379,28 +379,28 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>>
>> err_put_pages:
>> if (!drm_gem_is_imported(obj))
>> - drm_gem_shmem_put_pages(shmem);
>> + drm_gem_shmem_put_pages_locked(shmem);
>> err_zero_use:
>> shmem->vmap_use_count = 0;
>>
>> return ret;
>> }
>> -EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap);
>> +EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked);
>>
>> /*
>> - * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
>> + * drm_gem_shmem_vunmap_locked - Unmap a virtual mapping for a shmem GEM object
>> * @shmem: shmem GEM object
>> * @map: Kernel virtual address where the SHMEM GEM object was mapped
>> *
>> * This function cleans up a kernel virtual address mapping acquired by
>> - * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
>> - * zero.
>> + * drm_gem_shmem_vmap_locked(). The mapping is only removed when the use count
>> + * drops to zero.
>> *
>> * This function hides the differences between dma-buf imported and natively
>> * allocated objects.
>> */
>> -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
>> - struct iosys_map *map)
>> +void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>> + struct iosys_map *map)
>> {
>> struct drm_gem_object *obj = &shmem->base;
>>
>> @@ -416,12 +416,12 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
>> return;
>>
>> vunmap(shmem->vaddr);
>> - drm_gem_shmem_put_pages(shmem);
>> + drm_gem_shmem_put_pages_locked(shmem);
>> }
>>
>> shmem->vaddr = NULL;
>> }
>> -EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap);
>> +EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap_locked);
>>
>> static int
>> drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
>> @@ -449,7 +449,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
>> /* Update madvise status, returns true if not purged, else
>> * false or -errno.
>> */
>> -int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
>> +int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv)
>> {
>> dma_resv_assert_held(shmem->base.resv);
>>
>> @@ -460,9 +460,9 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
>>
>> return (madv >= 0);
>> }
>> -EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);
>> +EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked);
>>
>> -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
>> +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
>> {
>> struct drm_gem_object *obj = &shmem->base;
>> struct drm_device *dev = obj->dev;
>> @@ -476,7 +476,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
>> kfree(shmem->sgt);
>> shmem->sgt = NULL;
>>
>> - drm_gem_shmem_put_pages(shmem);
>> + drm_gem_shmem_put_pages_locked(shmem);
>>
>> shmem->madv = -1;
>>
>> @@ -492,7 +492,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
>>
>> invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
>> }
>> -EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
>> +EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked);
>>
>> /**
>> * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
>> @@ -589,7 +589,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
>> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>>
>> dma_resv_lock(shmem->base.resv, NULL);
>> - drm_gem_shmem_put_pages(shmem);
>> + drm_gem_shmem_put_pages_locked(shmem);
>> dma_resv_unlock(shmem->base.resv);
>>
>> drm_gem_vm_close(vma);
>> @@ -639,7 +639,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
>> return -EINVAL;
>>
>> dma_resv_lock(shmem->base.resv, NULL);
>> - ret = drm_gem_shmem_get_pages(shmem);
>> + ret = drm_gem_shmem_get_pages_locked(shmem);
>> dma_resv_unlock(shmem->base.resv);
>>
>> if (ret)
>> @@ -707,7 +707,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
>>
>> drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
>>
>> - ret = drm_gem_shmem_get_pages(shmem);
>> + ret = drm_gem_shmem_get_pages_locked(shmem);
>> if (ret)
>> return ERR_PTR(ret);
>>
>> @@ -729,7 +729,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
>> sg_free_table(sgt);
>> kfree(sgt);
>> err_put_pages:
>> - drm_gem_shmem_put_pages(shmem);
>> + drm_gem_shmem_put_pages_locked(shmem);
>> return ERR_PTR(ret);
>> }
>>
>> diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
>> index 6a8c81fe8c1e..d9d7c6d1a138 100644
>> --- a/drivers/gpu/drm/imagination/pvr_gem.c
>> +++ b/drivers/gpu/drm/imagination/pvr_gem.c
>> @@ -203,7 +203,7 @@ pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj)
>>
>> dma_resv_lock(obj->resv, NULL);
>>
>> - err = drm_gem_shmem_vmap(shmem_obj, &map);
>> + err = drm_gem_shmem_vmap_locked(shmem_obj, &map);
>> if (err)
>> goto err_unlock;
>>
>> @@ -257,7 +257,7 @@ pvr_gem_object_vunmap(struct pvr_gem_object *pvr_obj)
>> dma_sync_sgtable_for_device(dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
>> }
>>
>> - drm_gem_shmem_vunmap(shmem_obj, &map);
>> + drm_gem_shmem_vunmap_locked(shmem_obj, &map);
>>
>> dma_resv_unlock(obj->resv);
>> }
>> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
>> index 9bb997dbb4b9..609221351cde 100644
>> --- a/drivers/gpu/drm/lima/lima_gem.c
>> +++ b/drivers/gpu/drm/lima/lima_gem.c
>> @@ -195,7 +195,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
>> if (bo->heap_size)
>> return -EINVAL;
>>
>> - return drm_gem_shmem_vmap(&bo->base, map);
>> + return drm_gem_shmem_vmap_locked(&bo->base, map);
>> }
>>
>> static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
>> index 0f3935556ac7..a731f6b59a42 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
>> @@ -476,7 +476,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
>> }
>> }
>>
>> - args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
>> + args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv);
>>
>> if (args->retained) {
>> if (args->madv == PANFROST_MADV_DONTNEED)
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
>> index 3d9f51bd48b6..02b60ea1433a 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
>> @@ -51,7 +51,7 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
>> goto unlock_mappings;
>>
>> panfrost_gem_teardown_mappings_locked(bo);
>> - drm_gem_shmem_purge(&bo->base);
>> + drm_gem_shmem_purge_locked(&bo->base);
>> ret = true;
>>
>> dma_resv_unlock(shmem->base.resv);
>> diff --git a/drivers/gpu/drm/tests/drm_gem_shmem_test.c b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
>> index fd4215e2f982..98884966bb92 100644
>> --- a/drivers/gpu/drm/tests/drm_gem_shmem_test.c
>> +++ b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
>> @@ -173,7 +173,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
>> ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
>> KUNIT_ASSERT_EQ(test, ret, 0);
>>
>> - ret = drm_gem_shmem_vmap(shmem, &map);
>> + ret = drm_gem_shmem_vmap_locked(shmem, &map);
>> KUNIT_ASSERT_EQ(test, ret, 0);
>> KUNIT_ASSERT_NOT_NULL(test, shmem->vaddr);
>> KUNIT_ASSERT_FALSE(test, iosys_map_is_null(&map));
>> @@ -183,7 +183,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
>> for (i = 0; i < TEST_SIZE; i++)
>> KUNIT_EXPECT_EQ(test, iosys_map_rd(&map, i, u8), TEST_BYTE);
>>
>> - drm_gem_shmem_vunmap(shmem, &map);
>> + drm_gem_shmem_vunmap_locked(shmem, &map);
>> KUNIT_EXPECT_NULL(test, shmem->vaddr);
>> KUNIT_EXPECT_EQ(test, shmem->vmap_use_count, 0);
>> }
>> @@ -281,17 +281,17 @@ static void drm_gem_shmem_test_madvise(struct kunit *test)
>> ret = kunit_add_action_or_reset(test, drm_gem_shmem_free_wrapper, shmem);
>> KUNIT_ASSERT_EQ(test, ret, 0);
>>
>> - ret = drm_gem_shmem_madvise(shmem, 1);
>> + ret = drm_gem_shmem_madvise_locked(shmem, 1);
>> KUNIT_EXPECT_TRUE(test, ret);
>> KUNIT_ASSERT_EQ(test, shmem->madv, 1);
>>
>> /* Set madv to a negative value */
>> - ret = drm_gem_shmem_madvise(shmem, -1);
>> + ret = drm_gem_shmem_madvise_locked(shmem, -1);
>> KUNIT_EXPECT_FALSE(test, ret);
>> KUNIT_ASSERT_EQ(test, shmem->madv, -1);
>>
>> /* Check that madv cannot be set back to a positive value */
>> - ret = drm_gem_shmem_madvise(shmem, 0);
>> + ret = drm_gem_shmem_madvise_locked(shmem, 0);
>> KUNIT_EXPECT_FALSE(test, ret);
>> KUNIT_ASSERT_EQ(test, shmem->madv, -1);
>> }
>> @@ -319,7 +319,7 @@ static void drm_gem_shmem_test_purge(struct kunit *test)
>> ret = drm_gem_shmem_is_purgeable(shmem);
>> KUNIT_EXPECT_FALSE(test, ret);
>>
>> - ret = drm_gem_shmem_madvise(shmem, 1);
>> + ret = drm_gem_shmem_madvise_locked(shmem, 1);
>> KUNIT_EXPECT_TRUE(test, ret);
>>
>> /* The scatter/gather table will be freed by drm_gem_shmem_free */
>> @@ -329,7 +329,7 @@ static void drm_gem_shmem_test_purge(struct kunit *test)
>> ret = drm_gem_shmem_is_purgeable(shmem);
>> KUNIT_EXPECT_TRUE(test, ret);
>>
>> - drm_gem_shmem_purge(shmem);
>> + drm_gem_shmem_purge_locked(shmem);
>> KUNIT_EXPECT_NULL(test, shmem->pages);
>> KUNIT_EXPECT_NULL(test, shmem->sgt);
>> KUNIT_EXPECT_EQ(test, shmem->madv, -1);
>> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
>> index cef5a6b5a4d6..0609e336479d 100644
>> --- a/include/drm/drm_gem_shmem_helper.h
>> +++ b/include/drm/drm_gem_shmem_helper.h
>> @@ -102,19 +102,19 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de
>> struct vfsmount *gemfs);
>> void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
>>
>> -void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
>> +void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem);
>> int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
>> void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
>> -int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>> - struct iosys_map *map);
>> -void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
>> - struct iosys_map *map);
>> +int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>> + struct iosys_map *map);
>> +void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
>> + struct iosys_map *map);
>> int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma);
>>
>> int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem);
>> void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem);
>>
>> -int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
>> +int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);
>>
>> static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
>> {
>> @@ -123,7 +123,7 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
>> !shmem->base.dma_buf && !drm_gem_is_imported(&shmem->base);
>> }
>>
>> -void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
>> +void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
>>
>> struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
>> struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
>> @@ -214,12 +214,12 @@ static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_
>> }
>>
>> /*
>> - * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap()
>> + * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap_locked()
>> * @obj: GEM object
>> * @map: Returns the kernel virtual address of the SHMEM GEM object's backing store.
>> *
>> - * This function wraps drm_gem_shmem_vmap(). Drivers that employ the shmem helpers should
>> - * use it as their &drm_gem_object_funcs.vmap handler.
>> + * This function wraps drm_gem_shmem_vmap_locked(). Drivers that employ the shmem
>> + * helpers should use it as their &drm_gem_object_funcs.vmap handler.
>> *
>> * Returns:
>> * 0 on success or a negative error code on failure.
>> @@ -229,7 +229,7 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
>> {
>> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>>
>> - return drm_gem_shmem_vmap(shmem, map);
>> + return drm_gem_shmem_vmap_locked(shmem, map);
>> }
>>
>> /*
>> @@ -237,15 +237,15 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
>> * @obj: GEM object
>> * @map: Kernel virtual address where the SHMEM GEM object was mapped
>> *
>> - * This function wraps drm_gem_shmem_vunmap(). Drivers that employ the shmem helpers should
>> - * use it as their &drm_gem_object_funcs.vunmap handler.
>> + * This function wraps drm_gem_shmem_vunmap_locked(). Drivers that employ the shmem
>> + * helpers should use it as their &drm_gem_object_funcs.vunmap handler.
>> */
>> static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj,
>> struct iosys_map *map)
>> {
>> struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>>
>> - drm_gem_shmem_vunmap(shmem, map);
>> + drm_gem_shmem_vunmap_locked(shmem, map);
>> }
>>
>> /**
--
Jani Nikula, Intel
* Re: [PATCH v20 05/10] drm/shmem-helper: Refactor locked/unlocked functions
2025-03-27 11:27 ` Jani Nikula
@ 2025-03-27 11:29 ` Jani Nikula
0 siblings, 0 replies; 32+ messages in thread
From: Jani Nikula @ 2025-03-27 11:29 UTC (permalink / raw)
To: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, Christian König,
Gerd Hoffmann, Qiang Yu, Steven Price, Boris Brezillon,
Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
On Thu, 27 Mar 2025, Jani Nikula <jani.nikula@linux.intel.com> wrote:
> On Thu, 27 Mar 2025, Jani Nikula <jani.nikula@linux.intel.com> wrote:
>> On Sun, 23 Mar 2025, Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>>> Add locked and remove unlocked postfixes from drm-shmem function names,
>>> making names consistent with the drm/gem core code.
>>
>> This breaks the build for drivers/accel/ivpu/ivpu_gem.c.
>>
>> Please enable CONFIG_DRM_ACCEL_IVPU=m and fix the fallout.
>
> Ditto for CONFIG_DRM_ACCEL_AMDXDNA=m.
Ah, saw the fixes from Boris [1], sorry for the noise.
[1] https://lore.kernel.org/r/20250327104300.1982058-1-boris.brezillon@collabora.com
--
Jani Nikula, Intel
* Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-03-22 21:26 ` [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin Dmitry Osipenko
@ 2025-04-02 12:47 ` Thomas Zimmermann
2025-04-02 12:58 ` Dmitry Osipenko
0 siblings, 1 reply; 32+ messages in thread
From: Thomas Zimmermann @ 2025-04-02 12:47 UTC (permalink / raw)
To: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
Hi
Am 22.03.25 um 22:26 schrieb Dmitry Osipenko:
> The vmapped pages shall be pinned in memory and previously get/put_pages()
> were implicitly hard-pinning/unpinning the pages. This will no longer be
> the case with addition of memory shrinker because pages_use_count > 0 won't
> determine anymore whether pages are hard-pinned (they will be soft-pinned),
> while the new pages_pin_count will do the hard-pinning. Switch the
> vmap/vunmap() to use pin/unpin() functions in a preparation of addition
> of the memory shrinker support to drm-shmem.
I've meanwhile rediscovered this patch and I'm sure this is not correct.
Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
pinned, but that's not automatic. For other vmaps it is necessary to
hold the reservation lock to prevent the buffer from moving.
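The contract described here can be sketched as a tiny userspace model (hypothetical names and structures, not the actual DRM helpers): vmap() itself takes no pin, so a mapping may only be created while the caller either holds the reservation lock or has pinned the buffer beforehand.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical userspace model of the rule above -- not the real DRM code.
 * bo_vmap() does not pin: it refuses to create a mapping unless the caller
 * holds the (modeled) reservation lock or has already pinned the buffer.
 */
struct bo_model {
	bool resv_locked;       /* models dma_resv_lock() being held */
	unsigned int pin_count; /* models a hard pin count */
	void *vaddr;
	char backing[64];
};

static void *bo_vmap(struct bo_model *bo)
{
	/* Caller contract: locked or pinned, otherwise refuse. */
	if (!bo->resv_locked && bo->pin_count == 0)
		return NULL;
	bo->vaddr = bo->backing;
	return bo->vaddr;
}
```

With this model, an unlocked, unpinned vmap attempt fails, while either holding the lock or pinning first makes it succeed.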
Best regards
Thomas
>
> Acked-by: Maxime Ripard <mripard@kernel.org>
> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 6 +++---
> include/drm/drm_gem_shmem_helper.h | 2 +-
> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 6fb96e790abd..84a196bbe44f 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -360,7 +360,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
> return 0;
> }
>
> - ret = drm_gem_shmem_get_pages_locked(shmem);
> + ret = drm_gem_shmem_pin_locked(shmem);
> if (ret)
> goto err_zero_use;
>
> @@ -383,7 +383,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
>
> err_put_pages:
> if (!drm_gem_is_imported(obj))
> - drm_gem_shmem_put_pages_locked(shmem);
> + drm_gem_shmem_unpin_locked(shmem);
> err_zero_use:
> shmem->vmap_use_count = 0;
>
> @@ -420,7 +420,7 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> return;
>
> vunmap(shmem->vaddr);
> - drm_gem_shmem_put_pages_locked(shmem);
> + drm_gem_shmem_unpin_locked(shmem);
> }
>
> shmem->vaddr = NULL;
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 3a4be433d5f0..8b9bba87ae63 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -130,7 +130,7 @@ int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);
> static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
> {
> return (shmem->madv > 0) &&
> - !shmem->vmap_use_count && shmem->sgt &&
> + !refcount_read(&shmem->pages_pin_count) && shmem->sgt &&
> !shmem->base.dma_buf && !drm_gem_is_imported(&shmem->base);
> }
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)
* Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-04-02 12:47 ` Thomas Zimmermann
@ 2025-04-02 12:58 ` Dmitry Osipenko
2025-04-02 13:21 ` Boris Brezillon
0 siblings, 1 reply; 32+ messages in thread
From: Dmitry Osipenko @ 2025-04-02 12:58 UTC (permalink / raw)
To: Thomas Zimmermann, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster
Cc: dri-devel, linux-kernel, kernel
On 4/2/25 15:47, Thomas Zimmermann wrote:
> Hi
>
> Am 22.03.25 um 22:26 schrieb Dmitry Osipenko:
>> The vmapped pages shall be pinned in memory and previously get/
>> put_pages()
>> were implicitly hard-pinning/unpinning the pages. This will no longer be
>> the case with addition of memory shrinker because pages_use_count > 0
>> won't
>> determine anymore whether pages are hard-pinned (they will be soft-
>> pinned),
>> while the new pages_pin_count will do the hard-pinning. Switch the
>> vmap/vunmap() to use pin/unpin() functions in a preparation of addition
>> of the memory shrinker support to drm-shmem.
>
> I've meanwhile rediscovered this patch and I'm sure this is not correct.
> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
> pinned, but that's not automatic. For other vmaps it is necessary to
> hold the reservation lock to prevent the buffer from moving.
Hi, with vmap() you're getting a kernel address. The GEM's memory should
not be movable while it's vmapped, as we can't handle kernel page faults.
Not sure what you mean by the "other vmaps", please clarify.
--
Best regards,
Dmitry
* Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-04-02 12:58 ` Dmitry Osipenko
@ 2025-04-02 13:21 ` Boris Brezillon
2025-04-03 7:20 ` Thomas Zimmermann
From: Boris Brezillon @ 2025-04-02 13:21 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: Thomas Zimmermann, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Frank Binns, Matt Coster, dri-devel, linux-kernel,
kernel
On Wed, 2 Apr 2025 15:58:55 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> On 4/2/25 15:47, Thomas Zimmermann wrote:
> > Hi
> >
> > Am 22.03.25 um 22:26 schrieb Dmitry Osipenko:
> >> The vmapped pages shall be pinned in memory and previously get/
> >> put_pages()
> >> were implicitly hard-pinning/unpinning the pages. This will no longer be
> >> the case with addition of memory shrinker because pages_use_count > 0
> >> won't
> >> determine anymore whether pages are hard-pinned (they will be soft-
> >> pinned),
> >> while the new pages_pin_count will do the hard-pinning. Switch the
> >> vmap/vunmap() to use pin/unpin() functions in a preparation of addition
> >> of the memory shrinker support to drm-shmem.
> >
> > I've meanwhile rediscovered this patch and I'm sure this is not correct.
> > Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
> > pinned, but that's not automatic. For other vmaps it is necessary to
> > hold the reservation lock to prevent the buffer from moving.
Hm, is this problematic though? If you want to vmap() inside a section
that's protected by the resv lock, you can
- drm_gem_shmem_vmap_locked()
- do whatever you need to do with the vaddr,
- drm_gem_shmem_vunmap_locked()
and the {pin,page_use}_count will be back to their original values.
Those are just ref counters, and I doubt the overhead of
incrementing/decrementing them makes a difference compared to the heavy
page-allocation/vmap operations...
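[Editor's note] The balanced refcounting Boris describes can be sketched as a tiny userspace model. This is illustrative only, not the kernel implementation; the field and function names merely mirror those in drm_gem_shmem:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of the refcounting done by
 * drm_gem_shmem_vmap_locked()/drm_gem_shmem_vunmap_locked().
 * Not the kernel code; names only mirror the kernel fields. */
struct shmem_model {
	int pages_use_count;	/* weak refs: pages allocated */
	int pages_pin_count;	/* hard refs: pages unreclaimable */
	int vmap_use_count;	/* active kernel mappings */
	void *vaddr;
};

static char backing[4096];	/* stand-in for the vmapped pages */

static void model_vmap_locked(struct shmem_model *s)
{
	if (s->vmap_use_count++ == 0) {
		/* first mapping: pin_locked() hard-pins the pages */
		s->pages_use_count++;
		s->pages_pin_count++;
		s->vaddr = backing;
	}
}

static void model_vunmap_locked(struct shmem_model *s)
{
	if (--s->vmap_use_count == 0) {
		/* last mapping gone: unpin_locked() drops the hard ref */
		s->pages_pin_count--;
		s->pages_use_count--;
		s->vaddr = NULL;
	}
}
```

After a matched vmap/vunmap pair inside a resv-locked section, all counters return to their original values, which is the point being made above.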
* Re: [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1)
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
2025-03-25 14:17 ` [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Thomas Zimmermann
@ 2025-04-03 0:37 ` Lucas De Marchi
2025-04-03 7:03 ` Thomas Zimmermann
From: Lucas De Marchi @ 2025-04-03 0:37 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Boris Brezillon, Frank Binns, Matt Coster,
dri-devel, linux-kernel, kernel, chaitanya.kumar.borah
On Sun, Mar 23, 2025 at 12:25:58AM +0300, Dmitry Osipenko wrote:
>Hi,
>
>This is a continuation of a year-old series that adds a generic DRM-shmem
>shrinker [1]. The old series became too big with too many patches; it's more
>reasonable to split it up into multiple smaller patchsets. Here is
>the first part that makes preparatory DRM changes.
>
>[1] https://lore.kernel.org/dri-devel/20240105184624.508603-1-dmitry.osipenko@collabora.com/
After these patches got merged I started seeing this warning with the ast
driver, and a similar one with qemu-cirrus:
[ 88.731560] fbcon: astdrmfb (fb0) is primary device
[ 88.767100] Console: switching to colour frame buffer device 128x48
[ 88.768982] ------------[ cut here ]------------
[ 88.768989] ast 0000:02:00.0: [drm] Dirty helper failed: ret=-12
[ 88.769016] WARNING: CPU: 0 PID: 10 at drivers/gpu/drm/drm_fbdev_shmem.c:118 drm_fbdev_shmem_helper_fb_dirty+0xcc/0xd0 [drm_shmem_helper]
...
[ 88.769092] CPU: 0 UID: 0 PID: 10 Comm: kworker/0:1 Not tainted 6.14.0-xe+ #2
...
[ 88.769095] Workqueue: events drm_fb_helper_damage_work [drm_kms_helper]
[ 88.769109] RIP: 0010:drm_fbdev_shmem_helper_fb_dirty+0xcc/0xd0 [drm_shmem_helper]
[ 88.769112] Code: 4d 8b 6c 24 50 4d 85 ed 75 04 4d 8b 2c 24 4c 89 e7 e8 98 36 a9 e0 89 d9 4c 89 ea 48 c7 c7 d8 65 54 a1 48 89 c6 e8 64 f4 ee df <0f> 0b eb 8b 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f
[ 88.769113] RSP: 0018:ffa00000003e7da0 EFLAGS: 00010246
[ 88.769115] RAX: 0000000000000000 RBX: 00000000fffffff4 RCX: 0000000000000000
[ 88.769117] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ 88.769117] RBP: ffa00000003e7db8 R08: 0000000000000000 R09: 0000000000000000
[ 88.769118] R10: 0000000000000000 R11: 0000000000000000 R12: ff11000122c190c8
[ 88.769119] R13: ff11000118588a10 R14: ff1100010f4a04c0 R15: ff1100019750b588
[ 88.769120] FS: 0000000000000000(0000) GS:ff11007e7d000000(0000) knlGS:0000000000000000
[ 88.769121] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 88.769122] CR2: 00007e43af4056f0 CR3: 00000080c89f0003 CR4: 0000000000f73ef0
[ 88.769123] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 88.769124] DR3: 0000000000000000 DR6: 00000000fffe07f0 DR7: 0000000000000400
[ 88.769125] PKRU: 55555554
[ 88.769126] Call Trace:
[ 88.769127] <TASK>
[ 88.769129] ? show_regs.cold+0x19/0x20
[ 88.769132] ? drm_fbdev_shmem_helper_fb_dirty+0xcc/0xd0 [drm_shmem_helper]
[ 88.769134] ? __warn.cold+0xd2/0x17f
[ 88.769136] ? drm_fbdev_shmem_helper_fb_dirty+0xcc/0xd0 [drm_shmem_helper]
[ 88.769139] ? report_bug+0x114/0x160
[ 88.769143] ? handle_bug+0x6e/0xb0
[ 88.769146] ? exc_invalid_op+0x18/0x80
[ 88.769147] ? asm_exc_invalid_op+0x1b/0x20
[ 88.769153] ? drm_fbdev_shmem_helper_fb_dirty+0xcc/0xd0 [drm_shmem_helper]
[ 88.769156] drm_fb_helper_damage_work+0x8a/0x190 [drm_kms_helper]
[ 88.769164] process_one_work+0x224/0x5f0
[ 88.769170] worker_thread+0x1d4/0x3e0
[ 88.769173] ? __pfx_worker_thread+0x10/0x10
[ 88.769175] kthread+0x10b/0x260
[ 88.769178] ? __pfx_kthread+0x10/0x10
[ 88.769181] ret_from_fork+0x44/0x70
[ 88.769184] ? __pfx_kthread+0x10/0x10
[ 88.769186] ret_from_fork_asm+0x1a/0x30
[ 88.769193] </TASK>
[ 88.769194] irq event stamp: 430275
[ 88.769195] hardirqs last enabled at (430281): [<ffffffff81505836>] vprintk_emit+0x416/0x470
[ 88.769198] hardirqs last disabled at (430286): [<ffffffff81505868>] vprintk_emit+0x448/0x470
[ 88.769200] softirqs last enabled at (428924): [<ffffffff8143977d>] __irq_exit_rcu+0x12d/0x150
[ 88.769203] softirqs last disabled at (428861): [<ffffffff8143977d>] __irq_exit_rcu+0x12d/0x150
[ 88.769205] ---[ end trace 0000000000000000 ]---
[ 88.769375] ast 0000:02:00.0: [drm] fb0: astdrmfb frame buffer device
[ 88.802554] ast 0000:02:00.0: [drm:drm_fb_helper_hotplug_event [drm_kms_helper]]
I confirmed that reverting the series (together with 2 patches for accel/ivpu on top)
makes the warning go away. Any idea what's going on?
Lucas De Marchi
* Re: [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1)
2025-04-03 0:37 ` Lucas De Marchi
@ 2025-04-03 7:03 ` Thomas Zimmermann
2025-04-03 14:10 ` Dmitry Osipenko
From: Thomas Zimmermann @ 2025-04-03 7:03 UTC (permalink / raw)
To: Lucas De Marchi, Dmitry Osipenko
Cc: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Christian König, Gerd Hoffmann, Qiang Yu, Steven Price,
Boris Brezillon, Frank Binns, Matt Coster, dri-devel,
linux-kernel, kernel, chaitanya.kumar.borah
Hi
Am 03.04.25 um 02:37 schrieb Lucas De Marchi:
> On Sun, Mar 23, 2025 at 12:25:58AM +0300, Dmitry Osipenko wrote:
>> Hi,
>>
>> This is a continuation of a year-old series that adds a generic DRM-shmem
>> shrinker [1]. The old series became too big with too many patches; it's more
>> reasonable to split it up into multiple smaller patchsets. Here is
>> the first part that makes preparatory DRM changes.
>>
>> [1]
>> https://lore.kernel.org/dri-devel/20240105184624.508603-1-dmitry.osipenko@collabora.com/
>
> After these patches got merged I started seeing this warning with the ast
> driver, and a similar one with qemu-cirrus:
Same here with simpledrm. I wanted to bisect today.
Best regards
Thomas
>
> [kernel log snipped]
> I confirmed that reverting the series (together with 2 patches for
> accel/ivpu on top)
> makes the warning go away. Any idea what's going on?
>
>
> Lucas De Marchi
* Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-04-02 13:21 ` Boris Brezillon
@ 2025-04-03 7:20 ` Thomas Zimmermann
2025-04-03 8:50 ` Boris Brezillon
From: Thomas Zimmermann @ 2025-04-03 7:20 UTC (permalink / raw)
To: Boris Brezillon, Dmitry Osipenko
Cc: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Christian König, Gerd Hoffmann, Qiang Yu, Steven Price,
Frank Binns, Matt Coster, dri-devel, linux-kernel, kernel
Hi
Am 02.04.25 um 15:21 schrieb Boris Brezillon:
> On Wed, 2 Apr 2025 15:58:55 +0300
> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>
>> On 4/2/25 15:47, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> Am 22.03.25 um 22:26 schrieb Dmitry Osipenko:
>>>> The vmapped pages shall be pinned in memory and previously get/
>>>> put_pages()
>>>> were implicitly hard-pinning/unpinning the pages. This will no longer be
>>>> the case with addition of memory shrinker because pages_use_count > 0
>>>> won't
>>>> determine anymore whether pages are hard-pinned (they will be soft-
>>>> pinned),
>>>> while the new pages_pin_count will do the hard-pinning. Switch the
>>>> vmap/vunmap() to use pin/unpin() functions in a preparation of addition
>>>> of the memory shrinker support to drm-shmem.
>>> I've meanwhile rediscovered this patch and I'm sure this is not correct.
>>> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
>>> pinned, but that's not automatic. For other vmaps it is necessary to
>>> hold the reservation lock to prevent the buffer from moving.
> Hm, is this problematic though? If you want to vmap() inside a section
> that's protected by the resv lock, you can
>
> - drm_gem_shmem_vmap_locked()
> - do whatever you need to do with the vaddr,
> - drm_gem_shmem_vunmap_locked()
>
> and the {pin,page_use}_count will be back to their original values.
> Those are just ref counters, and I doubt the overhead of
> incrementing/decrementing them makes a difference compared to the heavy
> page-allocation/vmap operations...
I once tried to add pin as part of vmap, so that pages stay in place.
Christian was very clear about not doing this. I found this made a lot
of sense: vmap means "make the memory available to the CPU". The memory
location doesn't matter much here. Pin means something like "make the
memory available to the GPU". But which GPU depends on the caller: calls
via GEM refer to the local GPU, calls via dma-buf usually refer to the
importer's GPU. That GPU uncertainty makes pin problematic already.
In your case, vmap and pin both intend to hold the shmem pages in memory.
They might be built on top of the same implementation, but one should
not be implemented with the other because of their different meanings.
More generally speaking, I've meanwhile come to the conclusion that pin
should not even exist in the GEM interface. It's an internal operation
of TTM and reveals too much about what happens within the
implementation. Instead GEM should be free to move buffers around.
Dma-buf importers should only tell exporters to make buffers available
to them, but not how to do this. AFAIK that's what dma-buf's
attach/detach is for.
Best regards
Thomas
* Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-04-03 7:20 ` Thomas Zimmermann
@ 2025-04-03 8:50 ` Boris Brezillon
2025-04-04 8:01 ` Thomas Zimmermann
From: Boris Brezillon @ 2025-04-03 8:50 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Frank Binns, Matt Coster, dri-devel, linux-kernel,
kernel
On Thu, 3 Apr 2025 09:20:00 +0200
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Hi
>
> Am 02.04.25 um 15:21 schrieb Boris Brezillon:
> > On Wed, 2 Apr 2025 15:58:55 +0300
> > Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> >
> >> On 4/2/25 15:47, Thomas Zimmermann wrote:
> >>> Hi
> >>>
> >>> Am 22.03.25 um 22:26 schrieb Dmitry Osipenko:
> >>>> The vmapped pages shall be pinned in memory and previously get/
> >>>> put_pages()
> >>>> were implicitly hard-pinning/unpinning the pages. This will no longer be
> >>>> the case with addition of memory shrinker because pages_use_count > 0
> >>>> won't
> >>>> determine anymore whether pages are hard-pinned (they will be soft-
> >>>> pinned),
> >>>> while the new pages_pin_count will do the hard-pinning. Switch the
> >>>> vmap/vunmap() to use pin/unpin() functions in a preparation of addition
> >>>> of the memory shrinker support to drm-shmem.
> >>> I've meanwhile rediscovered this patch and I'm sure this is not correct.
> >>> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
> >>> pinned, but that's not automatic. For other vmaps it is necessary to
> >>> hold the reservation lock to prevent the buffer from moving.
> > Hm, is this problematic though? If you want to vmap() inside a section
> > that's protected by the resv lock, you can
> >
> > - drm_gem_shmem_vmap_locked()
> > - do whatever you need to do with the vaddr,
> > - drm_gem_shmem_vunmap_locked()
> >
> > and the {pin,page_use}_count will be back to their original values.
> > Those are just ref counters, and I doubt the overhead of
> > incrementing/decrementing them makes a difference compared to the heavy
> > page-allocation/vmap operations...
>
> I once tried to add pin as part of vmap, so that pages stay in place.
> Christian was very clear about not doing this. I found this made a lot
> of sense: vmap means "make the memory available to the CPU". The memory
> location doesn't matter much here. Pin means something like "make the
> memory available to the GPU". But which GPU depends on the caller: calls
> via GEM refer to the local GPU, calls via dma-buf usually refer to the
> importer's GPU. That GPU uncertainty makes pin problematic already.
Okay, so it looks more like a naming issue then. The intent here is to
make sure the page array doesn't disappear while we have a kernel
mapping active (address returned by vmap()). The reason we went from
pages_count to pages_use_count+pin_count is because we have two kinds of
references in drm_gem_shmem:
- weak references (tracked with pages_use_count). Those are
  usually held by GPU VMs, and they are weak in the sense that they
  shouldn't prevent the shrinker from reclaiming them if the GPU VM is idle.
  The other user of weak references is userspace mappings of GEM
  objects (mmap()), because then we can repopulate those with our fault
  handler.
- hard references (tracked with pin_count), which are used to prevent
  the shrinker from even considering the GEM as reclaimable. And clearly
  kernel mappings fall in that case, because otherwise we could reclaim
  pages that might be dereferenced by the CPU later on. It's also used
  to implement drm_gem_pin() because it's the same mechanism really,
  hence the name.
>
> In your case, vmap and pin both intend to hold the shmem pages in memory.
> They might be built on top of the same implementation, but one should
> not be implemented with the other because of their different meanings.
But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
we call drm_gem_shmem_pin_locked(), but that's an internal function to
make sure the pages are allocated and stay around until
drm_gem_shmem_vunmap_locked() is called.
I guess we could rename pin_count into hard_refcount or
page_residency_count or xxx_count, and change the pin/unpin_locked()
function names accordingly, but that's just a naming detail, it doesn't
force you to call drm_gem_pin() to vmap() your GEM, it's something we
do internally.
>
> More generally speaking, I've meanwhile come to the conclusion that pin
> should not even exist in the GEM interface. It's an internal operation
> of TTM and reveals too much about what happens within the
> implementation. Instead GEM should be free to move buffers around.
Well, yes and no. There are situations where you simply can't move
things around if there are active users, and vmap() is one of those
AFAICT.
> Dma-buf importers should only tell exporters to make buffers available
> to them, but not how to do this. AFAIK that's what dma-buf's
> attach/detach is for.
And that's what they do, no? attach() tells the exporter to give the
importer a way to access those buffers, and given the exporter has no
clue about when/how the importer will access those, there's no other way
but to pin the pages. Am I missing something here?
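[Editor's note] The weak/hard reference split Boris describes earlier in this mail, together with the reworked drm_gem_shmem_is_purgeable() check quoted at the top of the thread, can be modeled roughly as follows. This is an illustrative userspace sketch, not the kernel code; field names only mirror drm_gem_shmem_object:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of a shmem GEM's reclaim-related state. */
struct gem_model {
	int madv;		/* > 0: userspace marked it purgeable */
	int pages_use_count;	/* weak refs: GPU VMs, mmap() */
	int pages_pin_count;	/* hard refs: vmap(), drm_gem_pin() */
	bool has_sgt;		/* pages backed by an sg table */
	bool exported;		/* exported via dma-buf */
};

/* Mirrors the reworked drm_gem_shmem_is_purgeable(): weak references
 * do not block reclaim, hard references (pin_count) do. */
static bool model_is_purgeable(const struct gem_model *g)
{
	return g->madv > 0 && g->pages_pin_count == 0 &&
	       g->has_sgt && !g->exported;
}
```

A GEM with only weak references is still reclaimable, while a single hard reference (e.g. an active vmap()) makes it unevictable.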
* Re: [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1)
2025-04-03 7:03 ` Thomas Zimmermann
@ 2025-04-03 14:10 ` Dmitry Osipenko
From: Dmitry Osipenko @ 2025-04-03 14:10 UTC (permalink / raw)
To: Thomas Zimmermann, Lucas De Marchi
Cc: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Christian König, Gerd Hoffmann, Qiang Yu, Steven Price,
Boris Brezillon, Frank Binns, Matt Coster, dri-devel,
linux-kernel, kernel, chaitanya.kumar.borah
On 4/3/25 10:03, Thomas Zimmermann wrote:
> Hi
>
> Am 03.04.25 um 02:37 schrieb Lucas De Marchi:
>> On Sun, Mar 23, 2025 at 12:25:58AM +0300, Dmitry Osipenko wrote:
>>> Hi,
>>>
>>> This is a continuation of a year-old series that adds a generic DRM-shmem
>>> shrinker [1]. The old series became too big with too many patches; it's more
>>> reasonable to split it up into multiple smaller patchsets. Here is
>>> the first part that makes preparatory DRM changes.
>>>
>>> [1] https://lore.kernel.org/dri-devel/20240105184624.508603-1-
>>> dmitry.osipenko@collabora.com/
>>
>> After these patches got merged I started seeing this warning with the ast
>> driver, and a similar one with qemu-cirrus:
>
> Same here with simpledrm. I wanted to bisect today.
I've reproduced it using the bochs driver; it's the last patch, "drm/shmem-helper:
Use refcount_t for vmap_use_count", causing the issue. Thanks for the report!
This change fixes it; let me send a proper patch:
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 2d924d547a51..554f1d4c1a76 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -416,10 +416,10 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
 		if (refcount_dec_and_test(&shmem->vmap_use_count)) {
 			vunmap(shmem->vaddr);
 			drm_gem_shmem_unpin_locked(shmem);
+
+			shmem->vaddr = NULL;
 		}
 	}
-
-	shmem->vaddr = NULL;
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap_locked);
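[Editor's note] The effect of this fix can be demonstrated with a minimal userspace model of the vunmap path. Illustrative only; the real code uses refcount_dec_and_test() and vunmap():

```c
#include <assert.h>
#include <stddef.h>

static char backing[64];	/* stand-in for the vmapped pages */

struct vmap_model {
	int vmap_use_count;
	void *vaddr;
};

/* Behaviour before the fix: vaddr was reset even while other vmap
 * users remained, so subsequent users saw a NULL address. */
static void vunmap_buggy(struct vmap_model *s)
{
	if (--s->vmap_use_count == 0)
		;		/* vunmap() + unpin */
	s->vaddr = NULL;	/* unconditional: wrong */
}

/* Behaviour after the fix: vaddr survives until the last user is gone. */
static void vunmap_fixed(struct vmap_model *s)
{
	if (--s->vmap_use_count == 0) {
		/* vunmap() + unpin */
		s->vaddr = NULL;
	}
}
```

With two outstanding mappings, the buggy version loses the address after the first vunmap; the fixed version keeps it until the count drops to zero.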
--
Best regards,
Dmitry
* Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-04-03 8:50 ` Boris Brezillon
@ 2025-04-04 8:01 ` Thomas Zimmermann
2025-04-04 14:52 ` Boris Brezillon
From: Thomas Zimmermann @ 2025-04-04 8:01 UTC (permalink / raw)
To: Boris Brezillon
Cc: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Frank Binns, Matt Coster, dri-devel, linux-kernel,
kernel
Hi
Am 03.04.25 um 10:50 schrieb Boris Brezillon:
> On Thu, 3 Apr 2025 09:20:00 +0200
> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
>> Hi
>>
>> Am 02.04.25 um 15:21 schrieb Boris Brezillon:
>>> On Wed, 2 Apr 2025 15:58:55 +0300
>>> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>>>
>>>> On 4/2/25 15:47, Thomas Zimmermann wrote:
>>>>> Hi
>>>>>
>>>>> Am 22.03.25 um 22:26 schrieb Dmitry Osipenko:
>>>>>> The vmapped pages shall be pinned in memory and previously get/
>>>>>> put_pages()
>>>>>> were implicitly hard-pinning/unpinning the pages. This will no longer be
>>>>>> the case with addition of memory shrinker because pages_use_count > 0
>>>>>> won't
>>>>>> determine anymore whether pages are hard-pinned (they will be soft-
>>>>>> pinned),
>>>>>> while the new pages_pin_count will do the hard-pinning. Switch the
>>>>>> vmap/vunmap() to use pin/unpin() functions in a preparation of addition
>>>>>> of the memory shrinker support to drm-shmem.
>>>>> I've meanwhile rediscovered this patch and I'm sure this is not correct.
>>>>> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
>>>>> pinned, but that's not automatic. For other vmaps it is necessary to
>>>>> hold the reservation lock to prevent the buffer from moving.
>>> Hm, is this problematic though? If you want to vmap() inside a section
>>> that's protected by the resv lock, you can
>>>
>>> - drm_gem_shmem_vmap_locked()
>>> - do whatever you need to do with the vaddr,
>>> - drm_gem_shmem_vunmap_locked()
>>>
>>> and the {pin,page_use}_count will be back to their original values.
>>> Those are just ref counters, and I doubt the overhead of
>>> incrementing/decrementing them makes a difference compared to the heavy
>>> page-allocation/vmap operations...
>> I once tried to add pin as part of vmap, so that pages stay in place.
>> Christian was very clear about not doing this. I found this made a lot
>> of sense: vmap means "make the memory available to the CPU". The memory
>> location doesn't matter much here. Pin means something like "make the
>> memory available to the GPU". But which GPU depends on the caller: calls
>> via GEM refer to the local GPU, calls via dma-buf usually refer to the
>> importer's GPU. That GPU uncertainty makes pin problematic already.
> Okay, so it looks more like a naming issue then. The intent here is to
It's certainly possible to see this as a naming problem.
> make sure the page array doesn't disappear while we have a kernel
> mapping active (address returned by vmap()). The reason we went from
> pages_count to pages_use_count+pin_count is because we have two kinds of
> references in drm_gem_shmem:
>
> - weak references (tracked with pages_use_count). Those are
>   usually held by GPU VMs, and they are weak in the sense that they
>   shouldn't prevent the shrinker from reclaiming them if the GPU VM is idle.
>   The other user of weak references is userspace mappings of GEM
>   objects (mmap()), because then we can repopulate those with our fault
>   handler.
> - hard references (tracked with pin_count), which are used to prevent
>   the shrinker from even considering the GEM as reclaimable. And clearly
>   kernel mappings fall in that case, because otherwise we could reclaim
>   pages that might be dereferenced by the CPU later on. It's also used
>   to implement drm_gem_pin() because it's the same mechanism really,
>   hence the name.
Yeah, this should be renamed, IMHO. Pin is a TTM operation that fixes
buffers in certain locations. Drivers do this internally. It has nothing
to do with gem-shmem.
There's also a pin operation in GEM BOs' drm_gem_object_funcs, but it is
only called for PRIME-exported buffers and not for general use. For
gem-shmem, the callback would be implemented on top of the hard references.
And there's also a pin in dma_buf_ops. The term 'pin' is somewhat
overloaded already.
>
>> In your case, vmap and pin both intend to hold the shmem pages in memory.
>> They might be built on top of the same implementation, but one should
>> not be implemented with the other because of their different meanings.
> But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
> we call drm_gem_shmem_pin_locked(), but that's an internal function to
> make sure the pages are allocated and stay around until
> drm_gem_shmem_vunmap_locked() is called.
>
> I guess we could rename pin_count into hard_refcount or
> page_residency_count or xxx_count, and change the pin/unpin_locked()
> function names accordingly, but that's just a naming detail, it doesn't
> force you to call drm_gem_pin() to vmap() your GEM, it's something we
> do internally.
Such a rename would be much appreciated. page_residency_count seems
appropriate.
>
>> More generally speaking, I've meanwhile come to the conclusion that pin
>> should not even exist in the GEM interface. It's an internal operation
>> of TTM and reveals too much about what happens within the
>> implementation. Instead GEM should be free to move buffers around.
> Well, yes and no. There are situations where you simply can't move
> things around if there are active users, and vmap() is one of those
> AFAICT.
Sure. What I mean here is that pin/unpin is something of an
implementation detail. IMHO the pin/unpin callbacks in
drm_gem_object_funcs should get different names, such as pin_exported.
They are not for general use.
>
>> Dma-buf importers should only tell exporters to make buffers available
>> to them, but not how to do this. AFAIK that's what dma-buf's
>> attach/detach is for.
> And that's what they do, no? attach() tells the exporter to give the
> importer a way to access those buffers, and given the exporter has no
> clue about when/how the importer will access those, there's no other way
> but to pin the pages. Am I missing something here?
Yeah, that's what they do.
Best regards
Thomas
* Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-04-04 8:01 ` Thomas Zimmermann
@ 2025-04-04 14:52 ` Boris Brezillon
2025-04-04 14:58 ` Thomas Zimmermann
From: Boris Brezillon @ 2025-04-04 14:52 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Frank Binns, Matt Coster, dri-devel, linux-kernel,
kernel
On Fri, 4 Apr 2025 10:01:50 +0200
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >> In your case, vmap and pin both intend to hold the shmem pages in memory.
> >> They might be built on top of the same implementation, but one should
> >> not be implemented with the other because of their different meanings.
> > But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
> > we call drm_gem_shmem_pin_locked(), but that's an internal function to
> > make sure the pages are allocated and stay around until
> > drm_gem_shmem_vunmap_locked() is called.
> >
> > I guess we could rename pin_count into hard_refcount or
> > page_residency_count or xxx_count, and change the pin/unpin_locked()
> > function names accordingly, but that's just a naming detail, it doesn't
> > force you to call drm_gem_pin() to vmap() your GEM, it's something we
> > do internally.
>
> Such a rename would be much appreciated. page_residency_count seems
> appropriate.
On second thought, I think I prefer
'unevictable_count/inc_unevictable()/dec_unevictable()'. But looking at
the gem-vram changes you just posted, it looks like gem-shmem is not the
only one to use the term 'pin' for this page pinning thing, so if we go
and plan for a rename, I'd rather make it DRM-wide than gem-shmem being
the outlier yet again :-).
* Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-04-04 14:52 ` Boris Brezillon
@ 2025-04-04 14:58 ` Thomas Zimmermann
2025-04-07 9:56 ` Boris Brezillon
0 siblings, 1 reply; 32+ messages in thread
From: Thomas Zimmermann @ 2025-04-04 14:58 UTC (permalink / raw)
To: Boris Brezillon
Cc: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Frank Binns, Matt Coster, dri-devel, linux-kernel,
kernel
Hi
Am 04.04.25 um 16:52 schrieb Boris Brezillon:
> On Fri, 4 Apr 2025 10:01:50 +0200
> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
>>>> In your case, vmap and pin both intend to hold the shmem pages in memory.
>>>> They might be built on top of the same implementation, but one should
>>>> not be implemented with the other because of their different meanings.
>>> But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
>>> we call drm_gem_shmem_pin_locked(), but that's an internal function to
>>> make sure the pages are allocated and stay around until
>>> drm_gem_shmem_vunmap_locked() is called.
>>>
>>> I guess we could rename pin_count into hard_refcount or
>>> page_residency_count or xxx_count, and change the pin/unpin_locked()
>>> function names accordingly, but that's just a naming detail, it doesn't
>>> force you to call drm_gem_pin() to vmap() your GEM, it's something we
>>> do internally.
>> Such a rename would be much appreciated. page_residency_count seems
>> appropriate.
> On second thought, I think I prefer
> 'unevictable_count/inc_unevictable()/dec_unevictable()'. But looking at
Ok
> the gem-vram changes you just posted, it looks like gem-shmem is not the
> only one to use the term 'pin' for this page pinning thing, so if we go
> and plan for a rename, I'd rather make it DRM-wide than gem-shmem being
> the outlier yet again :-).
Which one do you mean?
- The code in gem-vram does pinning in the TTM sense of the term.
- From the client code, the pinning got removed.
- The GEM pin/unpin callbacks are still there, but the long-term plan is
to go to dma-buf pin callbacks AFAICT.
- Renaming the dma-buf pin/unpin would be a kernel-wide change. Not
likely to happen.
Best regards
Thomas
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)
* Re: [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
2025-04-04 14:58 ` Thomas Zimmermann
@ 2025-04-07 9:56 ` Boris Brezillon
0 siblings, 0 replies; 32+ messages in thread
From: Boris Brezillon @ 2025-04-07 9:56 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Dmitry Osipenko, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Christian König, Gerd Hoffmann, Qiang Yu,
Steven Price, Frank Binns, Matt Coster, dri-devel, linux-kernel,
kernel
On Fri, 4 Apr 2025 16:58:20 +0200
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Hi
>
> Am 04.04.25 um 16:52 schrieb Boris Brezillon:
> > On Fri, 4 Apr 2025 10:01:50 +0200
> > Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >
> >>>> In your case, vmap and pin both intend to hold the shmem pages in memory.
> >>>> They might be built on top of the same implementation, but one should
> >>>> not be implemented with the other because of their different meanings.
> >>> But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
> >>> we call drm_gem_shmem_pin_locked(), but that's an internal function to
> >>> make sure the pages are allocated and stay around until
> >>> drm_gem_shmem_vunmap_locked() is called.
> >>>
> >>> I guess we could rename pin_count into hard_refcount or
> >>> page_residency_count or xxx_count, and change the pin/unpin_locked()
> >>> function names accordingly, but that's just a naming detail, it doesn't
> >>> force you to call drm_gem_pin() to vmap() your GEM, it's something we
> >>> do internally.
> >> Such a rename would be much appreciated. page_residency_count seems
> >> appropriate.
> > On second thought, I think I prefer
> > 'unevictable_count/inc_unevictable()/dec_unevictable()'. But looking at
>
> Ok
>
> > the gem-vram changes you just posted, it looks like gem-shmem is not the
> > only one to use the term 'pin' for this page pinning thing, so if we go
> > and plan for a rename, I'd rather make it DRM-wide than gem-shmem being
> > the outlier yet again :-).
>
> Which one do you mean?
>
> - The code in gem-vram does pinning in the TTM sense of the term.
Sorry, but I still don't see why pinning should be a TTM-only thing. If
I read the doc of drm_gem_vram_pin():
"
Pinning a buffer object ensures that it is not evicted from a memory
region. A pinned buffer object has to be unpinned before it can be
pinned to another region. If the pl_flag argument is 0, the buffer is
pinned at its current location (video RAM or system memory).
"
And this pinning is not so different from the pinning we have in
gem-shmem: making buffer object memory unevictable/unmovable.
>
> - From the client code, the pinning got removed.
>
> - The GEM pin/unpin callbacks are still there, but the long-term plan is
> to go to dma-buf pin callbacks AFAICT.
>
> - Renaming the dma-buf pin/unpin would be a kernel-wide change. Not
> likely to happen.
Again, I'm not sure why we'd want to do that anyway. Just because the
TTM pinning semantics might be slightly different from the
GEM/dma-buf ones doesn't mean the pinning term should be
prohibited outside the TTM scope. The concept of pinning is pretty
generic and applies to system memory too AFAICT.
Thread overview: 32+ messages
2025-03-22 21:25 [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Dmitry Osipenko
2025-03-22 21:25 ` [PATCH v20 01/10] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names Dmitry Osipenko
2025-03-24 13:05 ` Christian König
2025-03-25 13:15 ` Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 02/10] drm/gem: Add _locked postfix to functions that have unlocked counterpart Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 03/10] drm/gem: Document locking rule of vmap and evict callbacks Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 04/10] drm/shmem-helper: Make all exported symbols GPL Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 05/10] drm/shmem-helper: Refactor locked/unlocked functions Dmitry Osipenko
2025-03-27 11:25 ` Jani Nikula
2025-03-27 11:27 ` Jani Nikula
2025-03-27 11:29 ` Jani Nikula
2025-03-22 21:26 ` [PATCH v20 06/10] drm/shmem-helper: Remove obsoleted is_iomem test Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 07/10] drm/shmem-helper: Add and use pages_pin_count Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 08/10] drm/shmem-helper: Use refcount_t for pages_use_count Dmitry Osipenko
2025-03-22 21:26 ` [PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin Dmitry Osipenko
2025-04-02 12:47 ` Thomas Zimmermann
2025-04-02 12:58 ` Dmitry Osipenko
2025-04-02 13:21 ` Boris Brezillon
2025-04-03 7:20 ` Thomas Zimmermann
2025-04-03 8:50 ` Boris Brezillon
2025-04-04 8:01 ` Thomas Zimmermann
2025-04-04 14:52 ` Boris Brezillon
2025-04-04 14:58 ` Thomas Zimmermann
2025-04-07 9:56 ` Boris Brezillon
2025-03-22 21:26 ` [PATCH v20 10/10] drm/shmem-helper: Use refcount_t for vmap_use_count Dmitry Osipenko
2025-03-25 14:17 ` [PATCH v20 00/10] Add generic DRM-shmem memory shrinker (part 1) Thomas Zimmermann
2025-03-26 20:08 ` Dmitry Osipenko
2025-03-27 10:45 ` Boris Brezillon
2025-03-27 10:47 ` Dmitry Osipenko
2025-04-03 0:37 ` Lucas De Marchi
2025-04-03 7:03 ` Thomas Zimmermann
2025-04-03 14:10 ` Dmitry Osipenko