* [PATCH v14 00/12] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
This series:
1. Adds common drm-shmem memory shrinker
2. Enables shrinker for VirtIO-GPU driver
3. Switches Panfrost driver to the common shrinker
Changelog:
v14:- All the prerequisite reservation locking patches have landed upstream;
they were previously part of this series in v13 and older.
https://lore.kernel.org/dri-devel/20230529223935.2672495-1-dmitry.osipenko@collabora.com/
- Added patches to improve the locked/unlocked function names, as
suggested by Boris Brezillon for v13.
- Made all exported drm-shmem symbols GPL, as previously
discussed with Thomas Zimmermann on this series.
- Improved the virtio-gpu shrinker patch. It no longer detaches a purged
BO when userspace closes the GEM. Crosvm (unlike qemu) checks res_id on
CMD_CTX_DETACH_RESOURCE and prints a noisy error message if the ID is
invalid, which hadn't been noticed before.
v13:- Updated the virtio-gpu shrinker patch to use drm_gem_shmem_object_pin()
directly instead of drm_gem_pin(), and dropped the patch that exported
the drm_gem_pin() functions, as requested by Thomas Zimmermann in
v12.
v12:- Fixed the "no previous prototype for function" warning reported by
the kernel build bot for v11.
- Fixed the missing reservation lock reported by Intel CI for the VGEM
driver. Other drivers using drm-shmem were affected similarly to
VGEM. The problem was in the dma-buf attachment code path, which led
to the drm-shmem pinning function that assumed the reservation lock
was already held by drm_gem_pin(). In the past that code path caused
trouble for the i915 driver, and we changed the locking scheme for the
attachment code path in the dma-buf core to let exporters handle the
locking themselves. After a closer investigation, I realized that my
assumption about testing the dma-buf export code path using the
Panfrost driver was incorrect. I have now created an additional local
test to exercise the Panfrost export path. I also reproduced the issue
reported by the Intel CI for v10. It's all fixed now by making
drm_gem_shmem_pin() take the resv lock by itself.
- Patches are based on top of drm-tip, CC'd intel-gfx CI for testing.
v11:- Rebased on a recent linux-next. Added a new patch as a result:
drm/shmem-helper: Export drm_gem_shmem_get_pages_sgt_locked()
It's needed by the virtio-gpu driver to swap in/unevict a shmem
object; previously get_pages_sgt() didn't use locking.
- Separated the "Add memory shrinker" patch into smaller parts to ease
reviewing, as requested by Thomas Zimmermann:
drm/shmem-helper: Factor out pages alloc/release from
drm_gem_shmem_get/put_pages()
drm/shmem-helper: Add pages_pin_count field
drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
drm/shmem-helper: Factor out unpinning part from drm_gem_shmem_purge()
- Addressed the v10 review comments from Thomas Zimmermann: return errno
instead of bool, sort code alphabetically, rename functions, and other
minor changes.
- Added a new patch to remove the "map->is_iomem" from drm-shmem, as
suggested by Thomas Zimmermann.
- Added acks and r-b's that were given to v10.
v10:- Was partially applied to misc-fixes/next.
https://lore.kernel.org/dri-devel/6c16f303-81df-7ebe-85e9-51bb40a8b301@collabora.com/T/
Dmitry Osipenko (12):
drm/shmem-helper: Factor out pages alloc/release from
drm_gem_shmem_get/put_pages()
drm/shmem-helper: Add pages_pin_count field
drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
drm/shmem-helper: Factor out unpinning part from drm_gem_shmem_purge()
drm/shmem-helper: Add memory shrinker
drm/shmem-helper: Remove obsoleted is_iomem test
drm/shmem-helper: Export drm_gem_shmem_get_pages_sgt_locked()
drm/virtio: Support memory shrinking
drm/panfrost: Switch to generic memory shrinker
drm/shmem-helper: Refactor locked/unlocked functions
drm/shmem-helper: Make drm_gem_shmem_print_info() symbol GPL
drm/gem: Add _unlocked postfix to drm_gem_pin/unpin()
drivers/gpu/drm/drm_gem.c | 4 +-
drivers/gpu/drm/drm_gem_shmem_helper.c | 546 ++++++++++++++----
drivers/gpu/drm/drm_internal.h | 4 +-
drivers/gpu/drm/drm_prime.c | 4 +-
drivers/gpu/drm/lima/lima_gem.c | 10 +-
drivers/gpu/drm/panfrost/Makefile | 1 -
drivers/gpu/drm/panfrost/panfrost_device.h | 4 -
drivers/gpu/drm/panfrost/panfrost_drv.c | 29 +-
drivers/gpu/drm/panfrost/panfrost_gem.c | 40 +-
drivers/gpu/drm/panfrost/panfrost_gem.h | 9 -
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 122 ----
drivers/gpu/drm/panfrost/panfrost_job.c | 18 +-
drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +-
drivers/gpu/drm/v3d/v3d_bo.c | 10 +-
drivers/gpu/drm/virtio/virtgpu_drv.h | 20 +-
drivers/gpu/drm/virtio/virtgpu_gem.c | 72 +++
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 33 ++
drivers/gpu/drm/virtio/virtgpu_kms.c | 8 +
drivers/gpu/drm/virtio/virtgpu_object.c | 137 ++++-
drivers/gpu/drm/virtio/virtgpu_plane.c | 17 +-
drivers/gpu/drm/virtio/virtgpu_submit.c | 15 +-
drivers/gpu/drm/virtio/virtgpu_vq.c | 40 ++
include/drm/drm_device.h | 10 +-
include/drm/drm_gem_shmem_helper.h | 208 +++++--
include/uapi/drm/virtgpu_drm.h | 14 +
25 files changed, 986 insertions(+), 391 deletions(-)
delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
--
2.41.0
* [PATCH v14 01/12] drm/shmem-helper: Factor out pages alloc/release from drm_gem_shmem_get/put_pages()
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Factor out the pages allocation from drm_gem_shmem_get_pages() into a new
drm_gem_shmem_acquire_pages() function, and do the same for put_pages(),
in preparation for the addition of shrinker support to drm-shmem.
Once the shrinker is added, pages_use_count > 0 will no longer determine
whether pages are pinned, because pages could be swapped out by the
shrinker while pages_use_count remains greater than zero. A new
pages_pin_count will be added in a later patch.
The new common drm_gem_shmem_acquire/release_pages() helpers will be used
by the shrinker code to perform the page swapping.
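As an illustration of where this is heading: the swap-in path added later
in this series re-acquires the backing pages without touching
pages_use_count. A minimal sketch of that internal usage, assuming the
helper stays private to drm_gem_shmem_helper.c (the example_* name is
illustrative only):

static int example_swapin_locked(struct drm_gem_shmem_object *shmem)
{
	dma_resv_assert_held(shmem->base.resv);

	/* pages_use_count stays > 0; only the backing pages are restored */
	return drm_gem_shmem_acquire_pages(shmem);
}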
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 65 ++++++++++++++++++++------
1 file changed, 52 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index a783d2245599..267153853e2c 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -165,21 +165,26 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
-static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int
+drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct page **pages;
dma_resv_assert_held(shmem->base.resv);
- if (shmem->pages_use_count++ > 0)
- return 0;
+ if (shmem->madv < 0) {
+ drm_WARN_ON(obj->dev, shmem->pages);
+ return -ENOMEM;
+ }
+
+ if (drm_WARN_ON(obj->dev, !shmem->pages_use_count))
+ return -EINVAL;
pages = drm_gem_get_pages(obj);
if (IS_ERR(pages)) {
drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
PTR_ERR(pages));
- shmem->pages_use_count = 0;
return PTR_ERR(pages);
}
@@ -198,6 +203,48 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
return 0;
}
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+{
+ int err;
+
+ dma_resv_assert_held(shmem->base.resv);
+
+ if (shmem->madv < 0)
+ return -ENOMEM;
+
+ if (shmem->pages_use_count++ > 0)
+ return 0;
+
+ err = drm_gem_shmem_acquire_pages(shmem);
+ if (err)
+ goto err_zero_use;
+
+ return 0;
+
+err_zero_use:
+ shmem->pages_use_count = 0;
+
+ return err;
+}
+
+static void
+drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
+
+ dma_resv_assert_held(shmem->base.resv);
+
+#ifdef CONFIG_X86
+ if (shmem->map_wc)
+ set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
+#endif
+
+ drm_gem_put_pages(obj, shmem->pages,
+ shmem->pages_mark_dirty_on_put,
+ shmem->pages_mark_accessed_on_put);
+ shmem->pages = NULL;
+}
+
/*
* drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
* @shmem: shmem GEM object
@@ -216,15 +263,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
if (--shmem->pages_use_count > 0)
return;
-#ifdef CONFIG_X86
- if (shmem->map_wc)
- set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
-#endif
-
- drm_gem_put_pages(obj, shmem->pages,
- shmem->pages_mark_dirty_on_put,
- shmem->pages_mark_accessed_on_put);
- shmem->pages = NULL;
+ drm_gem_shmem_release_pages(shmem);
}
EXPORT_SYMBOL(drm_gem_shmem_put_pages);
--
2.41.0
* [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Add a new pages_pin_count field to struct drm_gem_shmem_object that will
determine whether pages are evictable by the memory shrinker. The pages
will be evictable only when pages_pin_count is zero. This patch prepares
the code for the addition of the memory shrinker that will utilize the
new field.
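For illustration, the shrinker-side gating that this field enables looks
roughly like the sketch below, mirroring the drm_gem_shmem_is_evictable()
check added later in this series (the example_* name is illustrative only):

static bool example_pages_evictable(struct drm_gem_shmem_object *shmem)
{
	dma_resv_assert_held(shmem->base.resv);

	/* evictable only while in use and nothing holds a hard pin */
	return shmem->pages_use_count && !shmem->pages_pin_count;
}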
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 9 +++++++++
include/drm/drm_gem_shmem_helper.h | 9 +++++++++
2 files changed, 18 insertions(+)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 267153853e2c..42ba201dda50 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -274,15 +274,24 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
dma_resv_assert_held(shmem->base.resv);
ret = drm_gem_shmem_get_pages(shmem);
+ if (!ret)
+ shmem->pages_pin_count++;
return ret;
}
static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
{
+ struct drm_gem_object *obj = &shmem->base;
+
dma_resv_assert_held(shmem->base.resv);
+ if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_pin_count))
+ return;
+
drm_gem_shmem_put_pages(shmem);
+
+ shmem->pages_pin_count--;
}
/**
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index bf0c31aa8fbe..7111f5743006 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -39,6 +39,15 @@ struct drm_gem_shmem_object {
*/
unsigned int pages_use_count;
+ /**
+ * @pages_pin_count:
+ *
+ * Reference count on the pinned pages table.
+ * The pages are allowed to be evicted by the memory shrinker
+ * only when the count is zero.
+ */
+ unsigned int pages_pin_count;
+
/**
* @madv: State for madvise
*
--
2.41.0
* [PATCH v14 03/12] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Vmapped pages must be pinned in memory. Previously, get/put pages
implicitly hard-pinned/unpinned the pages. This will no longer be the
case once the memory shrinker is added, because pages_use_count > 0 will
no longer determine whether pages are hard-pinned (they will only be
soft-pinned); the new pages_pin_count will do that instead. Switch
vmap/vunmap to use the pin/unpin functions in preparation for the
addition of memory shrinker support.
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 42ba201dda50..c236ad835448 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -379,7 +379,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
return 0;
}
- ret = drm_gem_shmem_get_pages(shmem);
+ ret = drm_gem_shmem_pin_locked(shmem);
if (ret)
goto err_zero_use;
@@ -402,7 +402,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
err_put_pages:
if (!obj->import_attach)
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_unpin_locked(shmem);
err_zero_use:
shmem->vmap_use_count = 0;
@@ -439,7 +439,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
return;
vunmap(shmem->vaddr);
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_unpin_locked(shmem);
}
shmem->vaddr = NULL;
--
2.41.0
* [PATCH v14 04/12] drm/shmem-helper: Factor out unpinning part from drm_gem_shmem_purge()
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Factor out the pages unpinning code from drm_gem_shmem_purge() into a new
drm_gem_shmem_unpin_pages(). This prepares the code for the addition of
memory shrinker support. The new common function will be used by the
shrinker for eviction of shmem pages.
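For reference, the eviction path added later in this series will use the
new helper roughly as follows; this is a sketch mirroring
drm_gem_shmem_evict() from the shrinker patch (the evicted flag is
introduced there):

	/* unlike purge, eviction keeps the BO restorable by swap-in */
	drm_gem_shmem_unpin_pages(shmem);
	shmem->evicted = true;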
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index c236ad835448..9e381b6dc712 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -485,25 +485,29 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
}
EXPORT_SYMBOL(drm_gem_shmem_madvise);
-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct drm_device *dev = obj->dev;
dma_resv_assert_held(shmem->base.resv);
- drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
-
dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
+ drm_gem_shmem_release_pages(shmem);
+ drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+
sg_free_table(shmem->sgt);
kfree(shmem->sgt);
shmem->sgt = NULL;
+}
- drm_gem_shmem_put_pages(shmem);
+void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
- shmem->madv = -1;
+ drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
- drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+ drm_gem_shmem_unpin_pages(shmem);
drm_gem_free_mmap_offset(obj);
/* Our goal here is to return as much of the memory as
@@ -514,6 +518,8 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+
+ shmem->madv = -1;
}
EXPORT_SYMBOL(drm_gem_shmem_purge);
--
2.41.0
* [PATCH v14 05/12] drm/shmem-helper: Add memory shrinker
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Introduce a common drm-shmem shrinker for DRM drivers.
To start using the drm-shmem shrinker, drivers should do the following
(see the sketch after this list):
1. Implement the evict() callback of the GEM object, where the driver
should check whether the object is purgeable or evictable using the
drm-shmem helpers and perform the shrinking action
2. Initialize the drm-shmem internals using drmm_gem_shmem_init(drm_device),
which will register the drm-shmem shrinker
3. Implement a madvise IOCTL that will use drm_gem_shmem_madvise()
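A minimal driver-side sketch of these three steps, using hypothetical
example_* names (error handling and unrelated callbacks omitted):

/* 1. evict() callback: purge if possible, otherwise evict to swap */
static int example_gem_evict(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

	if (drm_gem_shmem_is_purgeable(shmem))
		drm_gem_shmem_purge(shmem);
	else
		drm_gem_shmem_evict(shmem);

	return 0;
}

static const struct drm_gem_object_funcs example_gem_funcs = {
	/* ...other callbacks... */
	.evict = example_gem_evict,
};

/* 2. register the drm-shmem internals (and shrinker) at probe time */
err = drmm_gem_shmem_init(drm);

/* 3. madvise IOCTL handler backed by the drm-shmem helper */
args->retained = drm_gem_shmem_object_madvise_unlocked(obj, args->madv);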
Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 351 +++++++++++++++++-
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 9 +-
include/drm/drm_device.h | 10 +-
include/drm/drm_gem_shmem_helper.h | 76 +++-
4 files changed, 426 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 9e381b6dc712..0b6c4f318da5 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -20,6 +20,7 @@
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_managed.h>
#include <drm/drm_prime.h>
#include <drm/drm_print.h>
@@ -88,8 +89,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private)
if (ret)
goto err_release;
- INIT_LIST_HEAD(&shmem->madv_list);
-
if (!private) {
/*
* Our buffers are kept pinned, so allocating them
@@ -128,6 +127,57 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
+static void drm_gem_shmem_resv_assert_held(struct drm_gem_shmem_object *shmem)
+{
+ /*
+ * Destroying the object is a special case: drm_gem_shmem_free()
+ * calls many things that WARN_ON if the obj lock is not held. But
+ * acquiring the obj lock in drm_gem_shmem_free() can cause a locking
+ * order inversion between reservation_ww_class_mutex and fs_reclaim.
+ *
+ * This deadlock is not actually possible, because no one should
+ * be already holding the lock when drm_gem_shmem_free() is called.
+ * Unfortunately lockdep is not aware of this detail. So when the
+ * refcount drops to zero, we pretend it is already locked.
+ */
+ if (kref_read(&shmem->base.refcount))
+ dma_resv_assert_held(shmem->base.resv);
+}
+
+static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
+{
+ dma_resv_assert_held(shmem->base.resv);
+
+ return (shmem->madv >= 0) && shmem->base.funcs->evict &&
+ shmem->pages_use_count && !shmem->pages_pin_count &&
+ !shmem->base.dma_buf && !shmem->base.import_attach &&
+ shmem->sgt && !shmem->evicted;
+}
+
+static void
+drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
+ struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
+ struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker;
+
+ drm_gem_shmem_resv_assert_held(shmem);
+
+ if (!shmem_shrinker || obj->import_attach)
+ return;
+
+ if (shmem->madv < 0)
+ drm_gem_lru_remove(&shmem->base);
+ else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(shmem))
+ drm_gem_lru_move_tail(&shmem_shrinker->lru_evictable, &shmem->base);
+ else if (shmem->evicted)
+ drm_gem_lru_move_tail(&shmem_shrinker->lru_evicted, &shmem->base);
+ else if (!shmem->pages)
+ drm_gem_lru_remove(&shmem->base);
+ else
+ drm_gem_lru_move_tail(&shmem_shrinker->lru_pinned, &shmem->base);
+}
+
/**
* drm_gem_shmem_free - Free resources associated with a shmem GEM object
* @shmem: shmem GEM object to free
@@ -142,7 +192,8 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
if (obj->import_attach) {
drm_prime_gem_destroy(obj, shmem->sgt);
} else {
- dma_resv_lock(shmem->base.resv, NULL);
+ /* take the shmem GEM object out of the memory shrinker */
+ drm_gem_shmem_madvise(shmem, -1);
drm_WARN_ON(obj->dev, shmem->vmap_use_count);
@@ -152,12 +203,10 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
sg_free_table(shmem->sgt);
kfree(shmem->sgt);
}
- if (shmem->pages)
+ if (shmem->pages_use_count)
drm_gem_shmem_put_pages(shmem);
drm_WARN_ON(obj->dev, shmem->pages_use_count);
-
- dma_resv_unlock(shmem->base.resv);
}
drm_gem_object_release(obj);
@@ -178,6 +227,11 @@ drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
return -ENOMEM;
}
+ if (shmem->pages) {
+ drm_WARN_ON(obj->dev, !shmem->evicted);
+ return 0;
+ }
+
if (drm_WARN_ON(obj->dev, !shmem->pages_use_count))
return -EINVAL;
@@ -212,13 +266,20 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
if (shmem->madv < 0)
return -ENOMEM;
- if (shmem->pages_use_count++ > 0)
+ if (shmem->pages_use_count++ > 0) {
+ err = drm_gem_shmem_swapin(shmem);
+ if (err)
+ goto err_zero_use;
+
return 0;
+ }
err = drm_gem_shmem_acquire_pages(shmem);
if (err)
goto err_zero_use;
+ drm_gem_shmem_update_pages_state(shmem);
+
return 0;
err_zero_use:
@@ -232,7 +293,12 @@ drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
- dma_resv_assert_held(shmem->base.resv);
+ drm_gem_shmem_resv_assert_held(shmem);
+
+ if (!shmem->pages) {
+ drm_WARN_ON(obj->dev, !shmem->evicted && shmem->madv >= 0);
+ return;
+ }
#ifdef CONFIG_X86
if (shmem->map_wc)
@@ -255,7 +321,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
- dma_resv_assert_held(shmem->base.resv);
+ drm_gem_shmem_resv_assert_held(shmem);
if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
return;
@@ -264,6 +330,8 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
return;
drm_gem_shmem_release_pages(shmem);
+
+ drm_gem_shmem_update_pages_state(shmem);
}
EXPORT_SYMBOL(drm_gem_shmem_put_pages);
@@ -474,13 +542,15 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
*/
int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
{
- dma_resv_assert_held(shmem->base.resv);
+ drm_gem_shmem_resv_assert_held(shmem);
if (shmem->madv >= 0)
shmem->madv = madv;
madv = shmem->madv;
+ drm_gem_shmem_update_pages_state(shmem);
+
return (madv >= 0);
}
EXPORT_SYMBOL(drm_gem_shmem_madvise);
@@ -492,6 +562,9 @@ static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
dma_resv_assert_held(shmem->base.resv);
+ if (shmem->evicted)
+ return;
+
dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
drm_gem_shmem_release_pages(shmem);
drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
@@ -520,9 +593,60 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
shmem->madv = -1;
+ shmem->evicted = false;
+ drm_gem_shmem_update_pages_state(shmem);
}
EXPORT_SYMBOL(drm_gem_shmem_purge);
+/**
+ * drm_gem_shmem_swapin() - Moves the shmem GEM back to memory and enables
+ * hardware access to the memory.
+ * @shmem: shmem GEM object
+ *
+ * This function moves the shmem GEM back to memory if it was previously evicted
+ * by the memory shrinker. The GEM is ready to use on success.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_shmem_swapin(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
+ struct sg_table *sgt;
+ int err;
+
+ dma_resv_assert_held(shmem->base.resv);
+
+ if (shmem->evicted) {
+ err = drm_gem_shmem_acquire_pages(shmem);
+ if (err)
+ return err;
+
+ sgt = drm_gem_shmem_get_sg_table(shmem);
+ if (IS_ERR(sgt))
+ return PTR_ERR(sgt);
+
+ err = dma_map_sgtable(obj->dev->dev, sgt,
+ DMA_BIDIRECTIONAL, 0);
+ if (err) {
+ sg_free_table(sgt);
+ kfree(sgt);
+ return err;
+ }
+
+ shmem->sgt = sgt;
+ shmem->evicted = false;
+
+ drm_gem_shmem_update_pages_state(shmem);
+ }
+
+ if (!shmem->pages)
+ return -ENOMEM;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swapin);
+
/**
* drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
* @file: DRM file structure to create the dumb buffer for
@@ -569,22 +693,33 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
vm_fault_t ret;
struct page *page;
pgoff_t page_offset;
+ bool pages_unpinned;
+ int err;
/* We don't use vmf->pgoff since that has the fake offset */
page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
dma_resv_lock(shmem->base.resv, NULL);
- if (page_offset >= num_pages ||
- drm_WARN_ON_ONCE(obj->dev, !shmem->pages) ||
- shmem->madv < 0) {
+ /* Sanity-check that we have the pages pointer when it should be present */
+ pages_unpinned = (shmem->evicted || shmem->madv < 0 || !shmem->pages_use_count);
+ drm_WARN_ON_ONCE(obj->dev, !shmem->pages ^ pages_unpinned);
+
+ if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) {
ret = VM_FAULT_SIGBUS;
} else {
+ err = drm_gem_shmem_swapin(shmem);
+ if (err) {
+ ret = VM_FAULT_OOM;
+ goto unlock;
+ }
+
page = shmem->pages[page_offset];
ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
}
+unlock:
dma_resv_unlock(shmem->base.resv);
return ret;
@@ -607,6 +742,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
shmem->pages_use_count++;
+ drm_gem_shmem_update_pages_state(shmem);
dma_resv_unlock(shmem->base.resv);
drm_gem_vm_open(vma);
@@ -688,7 +824,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
+ drm_printf_indent(p, indent, "evicted=%d\n", shmem->evicted);
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
+ drm_printf_indent(p, indent, "madv=%d\n", shmem->madv);
}
EXPORT_SYMBOL(drm_gem_shmem_print_info);
@@ -743,6 +881,8 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
shmem->sgt = sgt;
+ drm_gem_shmem_update_pages_state(shmem);
+
return sgt;
err_free_sgt:
@@ -819,6 +959,191 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_sg_table);
+static struct drm_gem_shmem_shrinker *
+to_drm_gem_shmem_shrinker(struct shrinker *shrinker)
+{
+ return container_of(shrinker, struct drm_gem_shmem_shrinker, base);
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct drm_gem_shmem_shrinker *shmem_shrinker =
+ to_drm_gem_shmem_shrinker(shrinker);
+ unsigned long count = shmem_shrinker->lru_evictable.count;
+
+ if (count >= SHRINK_EMPTY)
+ return SHRINK_EMPTY - 1;
+
+ return count ?: SHRINK_EMPTY;
+}
+
+void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem)
+{
+ struct drm_gem_object *obj = &shmem->base;
+
+ drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem));
+ drm_WARN_ON(obj->dev, shmem->evicted);
+
+ drm_gem_shmem_unpin_pages(shmem);
+
+ shmem->evicted = true;
+ drm_gem_shmem_update_pages_state(shmem);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_evict);
+
+static bool drm_gem_shmem_shrinker_evict(struct drm_gem_object *obj)
+{
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+ int err;
+
+ if (!drm_gem_shmem_is_evictable(shmem) ||
+ get_nr_swap_pages() < obj->size >> PAGE_SHIFT)
+ return false;
+
+ err = drm_gem_evict(obj);
+ if (err)
+ return false;
+
+ return true;
+}
+
+static bool drm_gem_shmem_shrinker_purge(struct drm_gem_object *obj)
+{
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+ int err;
+
+ if (!drm_gem_shmem_is_purgeable(shmem))
+ return false;
+
+ err = drm_gem_evict(obj);
+ if (err)
+ return false;
+
+ return true;
+}
+
+static unsigned long
+drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct drm_gem_shmem_shrinker *shmem_shrinker;
+ unsigned long nr_to_scan = sc->nr_to_scan;
+ unsigned long remaining = 0;
+ unsigned long freed = 0;
+
+ shmem_shrinker = to_drm_gem_shmem_shrinker(shrinker);
+
+ /* purge as many objects as we can */
+ freed += drm_gem_lru_scan(&shmem_shrinker->lru_evictable,
+ nr_to_scan, &remaining,
+ drm_gem_shmem_shrinker_purge);
+
+ /* evict as many objects as we can */
+ if (freed < nr_to_scan)
+ freed += drm_gem_lru_scan(&shmem_shrinker->lru_evictable,
+ nr_to_scan - freed, &remaining,
+ drm_gem_shmem_shrinker_evict);
+
+ return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
+}
+
+static int drm_gem_shmem_shrinker_init(struct drm_gem_shmem *shmem_mm,
+ const char *shrinker_name)
+{
+ struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker;
+ int err;
+
+ shmem_shrinker->base.count_objects = drm_gem_shmem_shrinker_count_objects;
+ shmem_shrinker->base.scan_objects = drm_gem_shmem_shrinker_scan_objects;
+ shmem_shrinker->base.seeks = DEFAULT_SEEKS;
+
+ mutex_init(&shmem_shrinker->lock);
+ drm_gem_lru_init(&shmem_shrinker->lru_evictable, &shmem_shrinker->lock);
+ drm_gem_lru_init(&shmem_shrinker->lru_evicted, &shmem_shrinker->lock);
+ drm_gem_lru_init(&shmem_shrinker->lru_pinned, &shmem_shrinker->lock);
+
+ err = register_shrinker(&shmem_shrinker->base, shrinker_name);
+ if (err) {
+ mutex_destroy(&shmem_shrinker->lock);
+ return err;
+ }
+
+ return 0;
+}
+
+static void drm_gem_shmem_shrinker_release(struct drm_device *dev,
+ struct drm_gem_shmem *shmem_mm)
+{
+ struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker;
+
+ unregister_shrinker(&shmem_shrinker->base);
+ drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_evictable.list));
+ drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_evicted.list));
+ drm_WARN_ON(dev, !list_empty(&shmem_shrinker->lru_pinned.list));
+ mutex_destroy(&shmem_shrinker->lock);
+}
+
+static int drm_gem_shmem_init(struct drm_device *dev)
+{
+ int err;
+
+ if (drm_WARN_ON(dev, dev->shmem_mm))
+ return -EBUSY;
+
+ dev->shmem_mm = kzalloc(sizeof(*dev->shmem_mm), GFP_KERNEL);
+ if (!dev->shmem_mm)
+ return -ENOMEM;
+
+ err = drm_gem_shmem_shrinker_init(dev->shmem_mm, dev->unique);
+ if (err)
+ goto free_gem_shmem;
+
+ return 0;
+
+free_gem_shmem:
+ kfree(dev->shmem_mm);
+ dev->shmem_mm = NULL;
+
+ return err;
+}
+
+static void drm_gem_shmem_release(struct drm_device *dev, void *ptr)
+{
+ struct drm_gem_shmem *shmem_mm = dev->shmem_mm;
+
+ drm_gem_shmem_shrinker_release(dev, shmem_mm);
+ dev->shmem_mm = NULL;
+ kfree(shmem_mm);
+}
+
+/**
+ * drmm_gem_shmem_init() - Initialize drm-shmem internals
+ * @dev: DRM device
+ *
+ * Cleanup is automatically managed as part of DRM device releasing.
+ * Calling this function multiple times will result in an error.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drmm_gem_shmem_init(struct drm_device *dev)
+{
+ int err;
+
+ err = drm_gem_shmem_init(dev);
+ if (err)
+ return err;
+
+ err = drmm_add_action_or_reset(dev, drm_gem_shmem_release, NULL);
+ if (err)
+ return err;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(drmm_gem_shmem_init);
+
MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
MODULE_IMPORT_NS(DMA_BUF);
MODULE_LICENSE("GPL v2");
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 6a71a2555f85..865a989d67c8 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -15,6 +15,13 @@
#include "panfrost_gem.h"
#include "panfrost_mmu.h"
+static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
+{
+ return (shmem->madv > 0) &&
+ !shmem->pages_pin_count && shmem->sgt &&
+ !shmem->base.dma_buf && !shmem->base.import_attach;
+}
+
static unsigned long
panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
@@ -27,7 +34,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc
return 0;
list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
- if (drm_gem_shmem_is_purgeable(shmem))
+ if (panfrost_gem_shmem_is_purgeable(shmem))
count += shmem->base.size >> PAGE_SHIFT;
}
diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index 7cf4afae2e79..a978f0cb5e84 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -16,6 +16,7 @@ struct drm_vblank_crtc;
struct drm_vma_offset_manager;
struct drm_vram_mm;
struct drm_fb_helper;
+struct drm_gem_shmem_shrinker;
struct inode;
@@ -290,8 +291,13 @@ struct drm_device {
/** @vma_offset_manager: GEM information */
struct drm_vma_offset_manager *vma_offset_manager;
- /** @vram_mm: VRAM MM memory manager */
- struct drm_vram_mm *vram_mm;
+ union {
+ /** @vram_mm: VRAM MM memory manager */
+ struct drm_vram_mm *vram_mm;
+
+ /** @shmem_mm: SHMEM GEM memory manager */
+ struct drm_gem_shmem *shmem_mm;
+ };
/**
* @switch_power_state:
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 7111f5743006..88aa08babe23 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/mutex.h>
+#include <linux/shrinker.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>
@@ -13,6 +14,7 @@
#include <drm/drm_prime.h>
struct dma_buf_attachment;
+struct drm_device;
struct drm_mode_create_dumb;
struct drm_printer;
struct sg_table;
@@ -52,8 +54,8 @@ struct drm_gem_shmem_object {
* @madv: State for madvise
*
* 0 is active/inuse.
+ * 1 is not-needed/can-be-purged
* A negative value is the object is purged.
- * Positive values are driver specific and not used by the helpers.
*/
int madv;
@@ -100,6 +102,12 @@ struct drm_gem_shmem_object {
* @map_wc: map object write-combined (instead of using shmem defaults).
*/
bool map_wc : 1;
+
+ /**
+ * @evicted: True if shmem pages are evicted by the memory shrinker.
+ * Used internally by the memory shrinker.
+ */
+ bool evicted : 1;
};
#define to_drm_gem_shmem_obj(obj) \
@@ -121,11 +129,17 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
{
- return (shmem->madv > 0) &&
- !shmem->vmap_use_count && shmem->sgt &&
- !shmem->base.dma_buf && !shmem->base.import_attach;
+ dma_resv_assert_held(shmem->base.resv);
+
+ return (shmem->madv > 0) && shmem->base.funcs->evict &&
+ shmem->pages_use_count && !shmem->pages_pin_count &&
+ !shmem->base.dma_buf && !shmem->base.import_attach &&
+ (shmem->sgt || shmem->evicted);
}
+int drm_gem_shmem_swapin(struct drm_gem_shmem_object *shmem);
+
+void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
@@ -269,6 +283,60 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
return drm_gem_shmem_mmap(shmem, vma);
}
+/**
+ * drm_gem_shmem_object_madvise_unlocked - unlocked GEM object function for drm_gem_shmem_madvise()
+ * @obj: GEM object
+ * @madv: Madvise value
+ *
+ * This function wraps drm_gem_shmem_madvise(), providing an unlocked variant.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+static inline int drm_gem_shmem_object_madvise_unlocked(struct drm_gem_object *obj, int madv)
+{
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+ int ret;
+
+ ret = dma_resv_lock_interruptible(obj->resv, NULL);
+ if (ret)
+ return ret;
+ ret = drm_gem_shmem_madvise(shmem, madv);
+ dma_resv_unlock(obj->resv);
+
+ return ret;
+}
+
+/**
+ * struct drm_gem_shmem_shrinker - Memory shrinker of GEM shmem memory manager
+ */
+struct drm_gem_shmem_shrinker {
+ /** @base: Shrinker for purging shmem GEM objects */
+ struct shrinker base;
+
+ /** @lock: Protects @lru_* */
+ struct mutex lock;
+
+ /** @lru_pinned: List of pinned shmem GEM objects */
+ struct drm_gem_lru lru_pinned;
+
+ /** @lru_evictable: List of shmem GEM objects to be evicted */
+ struct drm_gem_lru lru_evictable;
+
+ /** @lru_evicted: List of evicted shmem GEM objects */
+ struct drm_gem_lru lru_evicted;
+};
+
+/**
+ * struct drm_gem_shmem - GEM shmem memory manager
+ */
+struct drm_gem_shmem {
+ /** @shrinker: GEM shmem shrinker */
+ struct drm_gem_shmem_shrinker shrinker;
+};
+
+int drmm_gem_shmem_init(struct drm_device *dev);
+
/*
* Driver ops
*/
--
2.41.0
* [PATCH v14 06/12] drm/shmem-helper: Remove obsoleted is_iomem test
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Everything that uses the mapped buffer should be agnostic to is_iomem.
The only reason for the is_iomem test is that we're setting shmem->vaddr
to the returned map->vaddr. Now that the shmem->vaddr code is gone, remove
the obsoleted is_iomem test to clean up the code.
Suggested-by: Thomas Zimmermann <tzimmermann@suse.de>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 0b6c4f318da5..5aa85242071a 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -431,12 +431,6 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
if (obj->import_attach) {
ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
- if (!ret) {
- if (drm_WARN_ON(obj->dev, map->is_iomem)) {
- dma_buf_vunmap(obj->import_attach->dmabuf, map);
- return -EIO;
- }
- }
} else {
pgprot_t prot = PAGE_KERNEL;
--
2.41.0
* [PATCH v14 07/12] drm/shmem-helper: Export drm_gem_shmem_get_pages_sgt_locked()
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Export drm_gem_shmem_get_pages_sgt_locked(), which will be used by the
virtio-gpu shrinker during the GEM swap-in operation performed under the
held reservation lock.
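The intended call pattern is roughly the sketch below; the real user is
the virtio-gpu swap-in path added in the next patch:

	dma_resv_lock(obj->resv, NULL);

	/* bring evicted pages back, then rebuild and map the sg-table */
	err = drm_gem_shmem_swapin(shmem);
	if (!err)
		sgt = drm_gem_shmem_get_pages_sgt_locked(shmem);

	dma_resv_unlock(obj->resv);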
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
include/drm/drm_gem_shmem_helper.h | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5aa85242071a..87cef8e91fad 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -848,7 +848,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
-static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem)
+struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
int ret;
@@ -886,6 +886,7 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
drm_gem_shmem_put_pages(shmem);
return ERR_PTR(ret);
}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt_locked);
/**
* drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 88aa08babe23..2a0b49448526 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -144,6 +144,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
+struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem);
void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
struct drm_printer *p, unsigned int indent);
--
2.41.0
* [PATCH v14 08/12] drm/virtio: Support memory shrinking
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Support the generic drm-shmem memory shrinker and add a new madvise IOCTL
to the VirtIO-GPU driver. The BO cache manager of the Mesa driver will mark
BOs as "don't need" using the new IOCTL to let the shrinker purge the
marked BOs on OOM; the shrinker will also evict unpurgeable shmem BOs from
memory if the guest supports a swap file or partition.
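From userspace, the flow is expected to look roughly like the sketch
below. The DRM_IOCTL_VIRTGPU_MADVISE macro name is an assumption based on
the driver's usual ioctl naming; the struct fields match the
virtio_gpu_madvise_ioctl() handler added in this patch:

	struct drm_virtgpu_madvise args = {
		.bo_handle = bo_handle,
		.madv = VIRTGPU_MADV_WILLNEED,	/* reclaim a cached BO */
	};

	/* assumed ioctl macro name (see note above) */
	if (drmIoctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args) == 0 &&
	    !args.retained) {
		/* BO was purged while marked DONTNEED; allocate a new one */
	}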
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 20 +++-
drivers/gpu/drm/virtio/virtgpu_gem.c | 72 +++++++++++++
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 33 ++++++
drivers/gpu/drm/virtio/virtgpu_kms.c | 8 ++
drivers/gpu/drm/virtio/virtgpu_object.c | 137 +++++++++++++++++++-----
drivers/gpu/drm/virtio/virtgpu_plane.c | 17 ++-
drivers/gpu/drm/virtio/virtgpu_submit.c | 15 ++-
drivers/gpu/drm/virtio/virtgpu_vq.c | 40 +++++++
include/uapi/drm/virtgpu_drm.h | 14 +++
9 files changed, 323 insertions(+), 33 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 4126c384286b..ee5c5848edd2 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -89,6 +89,7 @@ struct virtio_gpu_object {
uint32_t hw_res_handle;
bool dumb;
bool created;
+ bool detached;
bool host3d_blob, guest_blob;
uint32_t blob_mem, blob_flags;
@@ -277,7 +278,7 @@ struct virtio_gpu_fpriv {
};
/* virtgpu_ioctl.c */
-#define DRM_VIRTIO_NUM_IOCTLS 12
+#define DRM_VIRTIO_NUM_IOCTLS 13
extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
@@ -313,6 +314,12 @@ void virtio_gpu_array_put_free(struct virtio_gpu_object_array *objs);
void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object_array *objs);
void virtio_gpu_array_put_free_work(struct work_struct *work);
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object_array *objs);
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo);
+int virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv);
+int virtio_gpu_gem_pin(struct virtio_gpu_object *bo);
+void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo);
/* virtgpu_vq.c */
int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
@@ -324,6 +331,8 @@ void virtio_gpu_cmd_create_resource(struct virtio_gpu_device *vgdev,
struct virtio_gpu_fence *fence);
void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *bo);
+int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *bo);
void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
uint64_t offset,
uint32_t width, uint32_t height,
@@ -344,6 +353,9 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *obj,
struct virtio_gpu_mem_entry *ents,
unsigned int nents);
+void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *obj,
+ struct virtio_gpu_fence *fence);
int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev);
int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev);
void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
@@ -456,6 +468,8 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
+int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo);
+
int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
uint32_t *resid);
/* virtgpu_prime.c */
@@ -490,4 +504,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev,
int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
+/* virtgpu_gem_shrinker.c */
+int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev);
+void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev);
+
#endif
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 7db48d17ee3a..b9ceb0602fd5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -147,10 +147,20 @@ void virtio_gpu_gem_object_close(struct drm_gem_object *obj,
struct virtio_gpu_device *vgdev = obj->dev->dev_private;
struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
struct virtio_gpu_object_array *objs;
+ struct virtio_gpu_object *bo;
if (!vgdev->has_virgl_3d)
return;
+ bo = gem_to_virtio_gpu_obj(obj);
+
+ /*
+ * A purged BO was already detached and released; the resource ID
+ * is invalid by now.
+ */
+ if (!virtio_gpu_gem_madvise(bo, VIRTGPU_MADV_WILLNEED))
+ return;
+
objs = virtio_gpu_array_alloc(1);
if (!objs)
return;
@@ -294,3 +304,65 @@ void virtio_gpu_array_put_free_work(struct work_struct *work)
}
spin_unlock(&vgdev->obj_free_lock);
}
+
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object_array *objs)
+{
+ struct virtio_gpu_object *bo;
+ int ret = 0;
+ u32 i;
+
+ for (i = 0; i < objs->nents; i++) {
+ bo = gem_to_virtio_gpu_obj(objs->objs[i]);
+
+ if (virtio_gpu_is_shmem(bo) && bo->detached) {
+ ret = virtio_gpu_reattach_shmem_object(bo);
+ if (ret)
+ break;
+ }
+ }
+
+ return ret;
+}
+
+int virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv)
+{
+ /* only shmem BOs are supported by shrinker */
+ if (!virtio_gpu_is_shmem(bo) || !bo->base.pages_mark_dirty_on_put)
+ return 1;
+
+ return drm_gem_shmem_object_madvise_unlocked(&bo->base.base, madv);
+}
+
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo)
+{
+ struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+ int err;
+
+ if (bo->created) {
+ err = virtio_gpu_cmd_release_resource(vgdev, bo);
+ if (err)
+ return err;
+
+ virtio_gpu_notify(vgdev);
+ bo->created = false;
+ }
+
+ return 0;
+}
+
+int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
+{
+ int ret = 0;
+
+ if (virtio_gpu_is_shmem(bo))
+ ret = drm_gem_shmem_object_pin(&bo->base.base);
+
+ return ret;
+}
+
+void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo)
+{
+ if (virtio_gpu_is_shmem(bo))
+ drm_gem_shmem_object_unpin(&bo->base.base);
+}
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index b24b11f25197..6a41830a06c5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -246,6 +246,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
if (ret != 0)
goto err_put_free;
+ ret = virtio_gpu_array_prepare(vgdev, objs);
+ if (ret)
+ goto err_unlock;
+
fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
if (!fence) {
ret = -ENOMEM;
@@ -305,6 +309,10 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
if (ret != 0)
goto err_put_free;
+ ret = virtio_gpu_array_prepare(vgdev, objs);
+ if (ret)
+ goto err_unlock;
+
ret = -ENOMEM;
fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
0);
@@ -668,6 +676,28 @@ static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
return ret;
}
+static int virtio_gpu_madvise_ioctl(struct drm_device *dev,
+ void *data,
+ struct drm_file *file)
+{
+ struct drm_virtgpu_madvise *args = data;
+ struct virtio_gpu_object *bo;
+ struct drm_gem_object *obj;
+
+ if (args->madv > VIRTGPU_MADV_DONTNEED)
+ return -EOPNOTSUPP;
+
+ obj = drm_gem_object_lookup(file, args->bo_handle);
+ if (!obj)
+ return -ENOENT;
+
+ bo = gem_to_virtio_gpu_obj(obj);
+ args->retained = virtio_gpu_gem_madvise(bo, args->madv);
+ drm_gem_object_put(obj);
+
+ return 0;
+}
+
struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
DRM_RENDER_ALLOW),
@@ -707,4 +737,7 @@ struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl,
DRM_RENDER_ALLOW),
+
+ DRM_IOCTL_DEF_DRV(VIRTGPU_MADVISE, virtio_gpu_madvise_ioctl,
+ DRM_RENDER_ALLOW),
};
diff --git a/drivers/gpu/drm/virtio/virtgpu_kms.c b/drivers/gpu/drm/virtio/virtgpu_kms.c
index 5a3b5aaed1f3..43e237082cec 100644
--- a/drivers/gpu/drm/virtio/virtgpu_kms.c
+++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
@@ -245,6 +245,12 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
goto err_scanouts;
}
+ ret = drmm_gem_shmem_init(dev);
+ if (ret) {
+ DRM_ERROR("shmem init failed\n");
+ goto err_modeset;
+ }
+
virtio_device_ready(vgdev->vdev);
if (num_capsets)
@@ -259,6 +265,8 @@ int virtio_gpu_init(struct virtio_device *vdev, struct drm_device *dev)
}
return 0;
+err_modeset:
+ virtio_gpu_modeset_fini(vgdev);
err_scanouts:
virtio_gpu_free_vbufs(vgdev);
err_vbufs:
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index c7e74cf13022..70dcd19266dc 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -97,39 +97,54 @@ static void virtio_gpu_free_object(struct drm_gem_object *obj)
virtio_gpu_cleanup_object(bo);
}
-static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
- .free = virtio_gpu_free_object,
- .open = virtio_gpu_gem_object_open,
- .close = virtio_gpu_gem_object_close,
- .print_info = drm_gem_shmem_object_print_info,
- .export = virtgpu_gem_prime_export,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
- .vm_ops = &drm_gem_shmem_vm_ops,
-};
-
-bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo)
+static int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo)
{
- return bo->base.base.funcs == &virtio_gpu_shmem_funcs;
+ struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+ struct virtio_gpu_fence *fence;
+
+ fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
+ if (!fence)
+ return -ENOMEM;
+
+ virtio_gpu_object_detach(vgdev, bo, fence);
+ virtio_gpu_notify(vgdev);
+
+ dma_fence_wait(&fence->f, false);
+ dma_fence_put(&fence->f);
+
+ bo->detached = true;
+
+ return 0;
}
-struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
- size_t size)
+static int virtio_gpu_shmem_evict(struct drm_gem_object *obj)
{
- struct virtio_gpu_object_shmem *shmem;
- struct drm_gem_shmem_object *dshmem;
+ struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
+ int err;
+
+ /*
+ * First tell the host to stop using the guest's memory to ensure that
+ * the host won't touch the released guest memory once it's gone.
+ */
+ if (!bo->base.evicted) {
+ err = virtio_gpu_detach_object_fenced(bo);
+ if (err)
+ return err;
+ }
- shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
- if (!shmem)
- return ERR_PTR(-ENOMEM);
+ if (drm_gem_shmem_is_purgeable(&bo->base)) {
+ err = virtio_gpu_gem_host_mem_release(bo);
+ if (err) {
+ virtio_gpu_reattach_shmem_object(bo);
+ return err;
+ }
- dshmem = &shmem->base.base;
- dshmem->base.funcs = &virtio_gpu_shmem_funcs;
- return &dshmem->base;
+ drm_gem_shmem_purge(&bo->base);
+ } else {
+ drm_gem_shmem_evict(&bo->base);
+ }
+
+ return 0;
}
static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
@@ -142,7 +157,7 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
struct sg_table *pages;
int si;
- pages = drm_gem_shmem_get_pages_sgt(&bo->base);
+ pages = drm_gem_shmem_get_pages_sgt_locked(&bo->base);
if (IS_ERR(pages))
return PTR_ERR(pages);
@@ -176,6 +191,65 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
return 0;
}
+int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo)
+{
+ struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+ struct virtio_gpu_mem_entry *ents;
+ unsigned int nents;
+ int err;
+
+ err = drm_gem_shmem_swapin(&bo->base);
+ if (err)
+ return err;
+
+ err = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+ if (err)
+ return err;
+
+ virtio_gpu_object_attach(vgdev, bo, ents, nents);
+ virtio_gpu_notify(vgdev);
+
+ bo->detached = false;
+
+ return 0;
+}
+
+static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
+ .free = virtio_gpu_free_object,
+ .open = virtio_gpu_gem_object_open,
+ .close = virtio_gpu_gem_object_close,
+ .print_info = drm_gem_shmem_object_print_info,
+ .export = virtgpu_gem_prime_export,
+ .pin = drm_gem_shmem_object_pin,
+ .unpin = drm_gem_shmem_object_unpin,
+ .get_sg_table = drm_gem_shmem_object_get_sg_table,
+ .vmap = drm_gem_shmem_object_vmap,
+ .vunmap = drm_gem_shmem_object_vunmap,
+ .mmap = drm_gem_shmem_object_mmap,
+ .vm_ops = &drm_gem_shmem_vm_ops,
+ .evict = virtio_gpu_shmem_evict,
+};
+
+bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo)
+{
+ return bo->base.base.funcs == &virtio_gpu_shmem_funcs;
+}
+
+struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
+ size_t size)
+{
+ struct virtio_gpu_object_shmem *shmem;
+ struct drm_gem_shmem_object *dshmem;
+
+ shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
+ if (!shmem)
+ return ERR_PTR(-ENOMEM);
+
+ dshmem = &shmem->base.base;
+ dshmem->base.funcs = &virtio_gpu_shmem_funcs;
+ return &dshmem->base;
+}
+
int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object_params *params,
struct virtio_gpu_object **bo_ptr,
@@ -202,7 +276,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
bo->dumb = params->dumb;
+ dma_resv_lock(bo->base.base.resv, NULL);
ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+ dma_resv_unlock(bo->base.base.resv);
+
if (ret != 0)
goto err_put_id;
@@ -228,10 +305,14 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
objs, fence);
virtio_gpu_object_attach(vgdev, bo, ents, nents);
+
+ shmem_obj->pages_mark_dirty_on_put = 1;
} else {
virtio_gpu_cmd_create_resource(vgdev, bo, params,
objs, fence);
virtio_gpu_object_attach(vgdev, bo, ents, nents);
+
+ shmem_obj->pages_mark_dirty_on_put = 1;
}
*bo_ptr = bo;
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a2e045f3a000..def57b01a826 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -238,20 +238,28 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_framebuffer *vgfb;
struct virtio_gpu_object *bo;
+ int err;
if (!new_state->fb)
return 0;
vgfb = to_virtio_gpu_framebuffer(new_state->fb);
bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
- if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
+
+ err = virtio_gpu_gem_pin(bo);
+ if (err)
+ return err;
+
+ if (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)
return 0;
if (bo->dumb && (plane->state->fb != new_state->fb)) {
vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
0);
- if (!vgfb->fence)
+ if (!vgfb->fence) {
+ virtio_gpu_gem_unpin(bo);
return -ENOMEM;
+ }
}
return 0;
@@ -261,15 +269,20 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
struct drm_plane_state *state)
{
struct virtio_gpu_framebuffer *vgfb;
+ struct virtio_gpu_object *bo;
if (!state->fb)
return;
vgfb = to_virtio_gpu_framebuffer(state->fb);
+ bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+
if (vgfb->fence) {
dma_fence_put(&vgfb->fence->f);
vgfb->fence = NULL;
}
+
+ virtio_gpu_gem_unpin(bo);
}
static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
index 1d010c66910d..a88984dd3f2f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_submit.c
+++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
@@ -250,8 +250,19 @@ static void virtio_gpu_install_out_fence_fd(struct virtio_gpu_submit *submit)
static int virtio_gpu_lock_buflist(struct virtio_gpu_submit *submit)
{
- if (submit->buflist)
- return virtio_gpu_array_lock_resv(submit->buflist);
+ int err;
+
+ if (submit->buflist) {
+ err = virtio_gpu_array_lock_resv(submit->buflist);
+ if (err)
+ return err;
+
+ err = virtio_gpu_array_prepare(submit->vgdev, submit->buflist);
+ if (err) {
+ virtio_gpu_array_unlock_resv(submit->buflist);
+ return err;
+ }
+ }
return 0;
}
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index b1a00c0c25a7..14ab470f413a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -545,6 +545,21 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
virtio_gpu_cleanup_object(bo);
}
+int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *bo)
+{
+ struct virtio_gpu_resource_unref *cmd_p;
+ struct virtio_gpu_vbuffer *vbuf;
+
+ cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+ memset(cmd_p, 0, sizeof(*cmd_p));
+
+ cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_UNREF);
+ cmd_p->resource_id = cpu_to_le32(bo->hw_res_handle);
+
+ return virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
+}
+
void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev,
uint32_t scanout_id, uint32_t resource_id,
uint32_t width, uint32_t height,
@@ -645,6 +660,23 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_device *vgdev,
virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
}
+static void
+virtio_gpu_cmd_resource_detach_backing(struct virtio_gpu_device *vgdev,
+ u32 resource_id,
+ struct virtio_gpu_fence *fence)
+{
+ struct virtio_gpu_resource_detach_backing *cmd_p;
+ struct virtio_gpu_vbuffer *vbuf;
+
+ cmd_p = virtio_gpu_alloc_cmd(vgdev, &vbuf, sizeof(*cmd_p));
+ memset(cmd_p, 0, sizeof(*cmd_p));
+
+ cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING);
+ cmd_p->resource_id = cpu_to_le32(resource_id);
+
+ virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, fence);
+}
+
static void virtio_gpu_cmd_get_display_info_cb(struct virtio_gpu_device *vgdev,
struct virtio_gpu_vbuffer *vbuf)
{
@@ -1107,6 +1139,14 @@ void virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
ents, nents, NULL);
}
+void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *obj,
+ struct virtio_gpu_fence *fence)
+{
+ virtio_gpu_cmd_resource_detach_backing(vgdev, obj->hw_res_handle,
+ fence);
+}
+
void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
struct virtio_gpu_output *output)
{
diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
index 7b158fcb02b4..9fb38ad16120 100644
--- a/include/uapi/drm/virtgpu_drm.h
+++ b/include/uapi/drm/virtgpu_drm.h
@@ -48,6 +48,7 @@ extern "C" {
#define DRM_VIRTGPU_GET_CAPS 0x09
#define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a
#define DRM_VIRTGPU_CONTEXT_INIT 0x0b
+#define DRM_VIRTGPU_MADVISE 0x0c
#define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01
#define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02
@@ -197,6 +198,15 @@ struct drm_virtgpu_context_init {
__u64 ctx_set_params;
};
+#define VIRTGPU_MADV_WILLNEED 0
+#define VIRTGPU_MADV_DONTNEED 1
+struct drm_virtgpu_madvise {
+ __u32 bo_handle;
+ __u32 retained; /* out, non-zero if BO can be used */
+ __u32 madv;
+ __u32 pad;
+};
+
/*
* Event code that's given when VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is in
* effect. The event size is sizeof(drm_event), since there is no additional
@@ -247,6 +257,10 @@ struct drm_virtgpu_context_init {
DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT, \
struct drm_virtgpu_context_init)
+#define DRM_IOCTL_VIRTGPU_MADVISE \
+ DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MADVISE, \
+ struct drm_virtgpu_madvise)
+
#if defined(__cplusplus)
}
#endif
--
2.41.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
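As an aside, a minimal sketch of how userspace could drive the madvise ioctl added in this patch; the helper name and error convention are illustrative, and the include path depends on where the uapi header is installed:

#include <errno.h>
#include <sys/ioctl.h>
#include <drm/virtgpu_drm.h>	/* the header extended above */

/* 'fd' is an open virtio-gpu DRM file descriptor and 'bo_handle' a GEM
 * handle owned by it; returns args.retained, or a negative errno.
 */
static int virtgpu_bo_madvise(int fd, __u32 bo_handle, __u32 madv)
{
	struct drm_virtgpu_madvise args = {
		.bo_handle = bo_handle,
		.madv = madv,	/* VIRTGPU_MADV_WILLNEED or _DONTNEED */
	};

	if (ioctl(fd, DRM_IOCTL_VIRTGPU_MADVISE, &args))
		return -errno;

	/* retained == 0 on WILLNEED means the BO was purged meanwhile */
	return args.retained;
}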
* [PATCH v14 09/12] drm/panfrost: Switch to generic memory shrinker
2023-07-22 23:47 [PATCH v14 00/12] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
` (7 preceding siblings ...)
2023-07-22 23:47 ` [PATCH v14 08/12] drm/virtio: Support memory shrinking Dmitry Osipenko
@ 2023-07-22 23:47 ` Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 10/12] drm/shmem-helper: Refactor locked/unlocked functions Dmitry Osipenko
` (2 subsequent siblings)
11 siblings, 0 replies; 25+ messages in thread
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC (permalink / raw)
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Replace Panfrost's custom memory shrinker with a common drm-shmem
memory shrinker.
Tested-by: Steven Price <steven.price@arm.com> # Firefly-RK3288
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/panfrost/Makefile | 1 -
drivers/gpu/drm/panfrost/panfrost_device.h | 4 -
drivers/gpu/drm/panfrost/panfrost_drv.c | 27 ++--
drivers/gpu/drm/panfrost/panfrost_gem.c | 30 ++--
drivers/gpu/drm/panfrost/panfrost_gem.h | 9 --
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 129 ------------------
drivers/gpu/drm/panfrost/panfrost_job.c | 18 ++-
include/drm/drm_gem_shmem_helper.h | 7 -
8 files changed, 47 insertions(+), 178 deletions(-)
delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
diff --git a/drivers/gpu/drm/panfrost/Makefile b/drivers/gpu/drm/panfrost/Makefile
index 7da2b3f02ed9..11622e22cf15 100644
--- a/drivers/gpu/drm/panfrost/Makefile
+++ b/drivers/gpu/drm/panfrost/Makefile
@@ -5,7 +5,6 @@ panfrost-y := \
panfrost_device.o \
panfrost_devfreq.o \
panfrost_gem.o \
- panfrost_gem_shrinker.o \
panfrost_gpu.o \
panfrost_job.o \
panfrost_mmu.o \
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
index b0126b9fbadc..dcc2571c092b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -116,10 +116,6 @@ struct panfrost_device {
atomic_t pending;
} reset;
- struct mutex shrinker_lock;
- struct list_head shrinker_list;
- struct shrinker shrinker;
-
struct panfrost_devfreq pfdevfreq;
};
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 49b51f0db9b4..d1b2bd6db443 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -169,7 +169,6 @@ panfrost_lookup_bos(struct drm_device *dev,
break;
}
- atomic_inc(&bo->gpu_usecount);
job->mappings[i] = mapping;
}
@@ -394,7 +393,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
{
struct panfrost_file_priv *priv = file_priv->driver_priv;
struct drm_panfrost_madvise *args = data;
- struct panfrost_device *pfdev = dev->dev_private;
struct drm_gem_object *gem_obj;
struct panfrost_gem_object *bo;
int ret = 0;
@@ -407,11 +405,15 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
bo = to_panfrost_bo(gem_obj);
+ if (bo->is_heap) {
+ args->retained = 1;
+ goto out_put_object;
+ }
+
ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
if (ret)
goto out_put_object;
- mutex_lock(&pfdev->shrinker_lock);
mutex_lock(&bo->mappings.lock);
if (args->madv == PANFROST_MADV_DONTNEED) {
struct panfrost_gem_mapping *first;
@@ -437,17 +439,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
- if (args->retained) {
- if (args->madv == PANFROST_MADV_DONTNEED)
- list_move_tail(&bo->base.madv_list,
- &pfdev->shrinker_list);
- else if (args->madv == PANFROST_MADV_WILLNEED)
- list_del_init(&bo->base.madv_list);
- }
-
out_unlock_mappings:
mutex_unlock(&bo->mappings.lock);
- mutex_unlock(&pfdev->shrinker_lock);
dma_resv_unlock(bo->base.base.resv);
out_put_object:
drm_gem_object_put(gem_obj);
@@ -576,9 +569,6 @@ static int panfrost_probe(struct platform_device *pdev)
ddev->dev_private = pfdev;
pfdev->ddev = ddev;
- mutex_init(&pfdev->shrinker_lock);
- INIT_LIST_HEAD(&pfdev->shrinker_list);
-
err = panfrost_device_init(pfdev);
if (err) {
if (err != -EPROBE_DEFER)
@@ -600,10 +590,14 @@ static int panfrost_probe(struct platform_device *pdev)
if (err < 0)
goto err_out1;
- panfrost_gem_shrinker_init(ddev);
+ err = drmm_gem_shmem_init(ddev);
+ if (err < 0)
+ goto err_out2;
return 0;
+err_out2:
+ drm_dev_unregister(ddev);
err_out1:
pm_runtime_disable(pfdev->dev);
panfrost_device_fini(pfdev);
@@ -619,7 +613,6 @@ static void panfrost_remove(struct platform_device *pdev)
struct drm_device *ddev = pfdev->ddev;
drm_dev_unregister(ddev);
- panfrost_gem_shrinker_cleanup(ddev);
pm_runtime_get_sync(pfdev->dev);
pm_runtime_disable(pfdev->dev);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 3c812fbd126f..08d795c28b4e 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -19,16 +19,6 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
struct panfrost_gem_object *bo = to_panfrost_bo(obj);
struct panfrost_device *pfdev = obj->dev->dev_private;
- /*
- * Make sure the BO is no longer inserted in the shrinker list before
- * taking care of the destruction itself. If we don't do that we have a
- * race condition between this function and what's done in
- * panfrost_gem_shrinker_scan().
- */
- mutex_lock(&pfdev->shrinker_lock);
- list_del_init(&bo->base.madv_list);
- mutex_unlock(&pfdev->shrinker_lock);
-
/*
* If we still have mappings attached to the BO, there's a problem in
* our refcounting.
@@ -195,6 +185,25 @@ static int panfrost_gem_pin(struct drm_gem_object *obj)
return drm_gem_shmem_pin(&bo->base);
}
+static int panfrost_shmem_evict(struct drm_gem_object *obj)
+{
+ struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+
+ if (!drm_gem_shmem_is_purgeable(&bo->base))
+ return -EBUSY;
+
+ if (!mutex_trylock(&bo->mappings.lock))
+ return -EBUSY;
+
+ panfrost_gem_teardown_mappings_locked(bo);
+
+ drm_gem_shmem_purge(&bo->base);
+
+ mutex_unlock(&bo->mappings.lock);
+
+ return 0;
+}
+
static const struct drm_gem_object_funcs panfrost_gem_funcs = {
.free = panfrost_gem_free_object,
.open = panfrost_gem_open,
@@ -207,6 +216,7 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
.vunmap = drm_gem_shmem_object_vunmap,
.mmap = drm_gem_shmem_object_mmap,
.vm_ops = &drm_gem_shmem_vm_ops,
+ .evict = panfrost_shmem_evict,
};
/**
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index ad2877eeeccd..6ad1bcedb932 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -30,12 +30,6 @@ struct panfrost_gem_object {
struct mutex lock;
} mappings;
- /*
- * Count the number of jobs referencing this BO so we don't let the
- * shrinker reclaim this object prematurely.
- */
- atomic_t gpu_usecount;
-
bool noexec :1;
bool is_heap :1;
};
@@ -81,7 +75,4 @@ panfrost_gem_mapping_get(struct panfrost_gem_object *bo,
void panfrost_gem_mapping_put(struct panfrost_gem_mapping *mapping);
void panfrost_gem_teardown_mappings_locked(struct panfrost_gem_object *bo);
-void panfrost_gem_shrinker_init(struct drm_device *dev);
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev);
-
#endif /* __PANFROST_GEM_H__ */
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
deleted file mode 100644
index 865a989d67c8..000000000000
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ /dev/null
@@ -1,129 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2019 Arm Ltd.
- *
- * Based on msm_gem_freedreno.c:
- * Copyright (C) 2016 Red Hat
- * Author: Rob Clark <robdclark@gmail.com>
- */
-
-#include <linux/list.h>
-
-#include <drm/drm_device.h>
-#include <drm/drm_gem_shmem_helper.h>
-
-#include "panfrost_device.h"
-#include "panfrost_gem.h"
-#include "panfrost_mmu.h"
-
-static bool panfrost_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
-{
- return (shmem->madv > 0) &&
- !shmem->pages_pin_count && shmem->sgt &&
- !shmem->base.dma_buf && !shmem->base.import_attach;
-}
-
-static unsigned long
-panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
-{
- struct panfrost_device *pfdev =
- container_of(shrinker, struct panfrost_device, shrinker);
- struct drm_gem_shmem_object *shmem;
- unsigned long count = 0;
-
- if (!mutex_trylock(&pfdev->shrinker_lock))
- return 0;
-
- list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
- if (panfrost_gem_shmem_is_purgeable(shmem))
- count += shmem->base.size >> PAGE_SHIFT;
- }
-
- mutex_unlock(&pfdev->shrinker_lock);
-
- return count;
-}
-
-static bool panfrost_gem_purge(struct drm_gem_object *obj)
-{
- struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- struct panfrost_gem_object *bo = to_panfrost_bo(obj);
- bool ret = false;
-
- if (atomic_read(&bo->gpu_usecount))
- return false;
-
- if (!mutex_trylock(&bo->mappings.lock))
- return false;
-
- if (!dma_resv_trylock(shmem->base.resv))
- goto unlock_mappings;
-
- panfrost_gem_teardown_mappings_locked(bo);
- drm_gem_shmem_purge(&bo->base);
- ret = true;
-
- dma_resv_unlock(shmem->base.resv);
-
-unlock_mappings:
- mutex_unlock(&bo->mappings.lock);
- return ret;
-}
-
-static unsigned long
-panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
-{
- struct panfrost_device *pfdev =
- container_of(shrinker, struct panfrost_device, shrinker);
- struct drm_gem_shmem_object *shmem, *tmp;
- unsigned long freed = 0;
-
- if (!mutex_trylock(&pfdev->shrinker_lock))
- return SHRINK_STOP;
-
- list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) {
- if (freed >= sc->nr_to_scan)
- break;
- if (drm_gem_shmem_is_purgeable(shmem) &&
- panfrost_gem_purge(&shmem->base)) {
- freed += shmem->base.size >> PAGE_SHIFT;
- list_del_init(&shmem->madv_list);
- }
- }
-
- mutex_unlock(&pfdev->shrinker_lock);
-
- if (freed > 0)
- pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT);
-
- return freed;
-}
-
-/**
- * panfrost_gem_shrinker_init - Initialize panfrost shrinker
- * @dev: DRM device
- *
- * This function registers and sets up the panfrost shrinker.
- */
-void panfrost_gem_shrinker_init(struct drm_device *dev)
-{
- struct panfrost_device *pfdev = dev->dev_private;
- pfdev->shrinker.count_objects = panfrost_gem_shrinker_count;
- pfdev->shrinker.scan_objects = panfrost_gem_shrinker_scan;
- pfdev->shrinker.seeks = DEFAULT_SEEKS;
- WARN_ON(register_shrinker(&pfdev->shrinker, "drm-panfrost"));
-}
-
-/**
- * panfrost_gem_shrinker_cleanup - Clean up panfrost shrinker
- * @dev: DRM device
- *
- * This function unregisters the panfrost shrinker.
- */
-void panfrost_gem_shrinker_cleanup(struct drm_device *dev)
-{
- struct panfrost_device *pfdev = dev->dev_private;
-
- if (pfdev->shrinker.nr_deferred) {
- unregister_shrinker(&pfdev->shrinker);
- }
-}
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index dbc597ab46fb..98d9751d2b2c 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -272,6 +272,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos,
dma_resv_add_fence(bos[i]->resv, fence, DMA_RESV_USAGE_WRITE);
}
+static int panfrost_objects_prepare(struct drm_gem_object **bos, int bo_count)
+{
+ struct panfrost_gem_object *bo;
+ int ret = 0;
+
+ while (!ret && bo_count--) {
+ bo = to_panfrost_bo(bos[bo_count]);
+ ret = bo->base.madv ? -ENOMEM : 0;
+ }
+
+ return ret;
+}
+
int panfrost_job_push(struct panfrost_job *job)
{
struct panfrost_device *pfdev = job->pfdev;
@@ -283,6 +296,10 @@ int panfrost_job_push(struct panfrost_job *job)
if (ret)
return ret;
+ ret = panfrost_objects_prepare(job->bos, job->bo_count);
+ if (ret)
+ goto unlock;
+
mutex_lock(&pfdev->sched_lock);
drm_sched_job_arm(&job->base);
@@ -324,7 +341,6 @@ static void panfrost_job_cleanup(struct kref *ref)
if (!job->mappings[i])
break;
- atomic_dec(&job->mappings[i]->obj->gpu_usecount);
panfrost_gem_mapping_put(job->mappings[i]);
}
kvfree(job->mappings);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 2a0b49448526..55f5ff387bbc 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -59,13 +59,6 @@ struct drm_gem_shmem_object {
*/
int madv;
- /**
- * @madv_list: List entry for madvise tracking
- *
- * Typically used by drivers to track purgeable objects
- */
- struct list_head madv_list;
-
/**
* @sgt: Scatter/gather table for imported PRIME buffers
*/
--
2.41.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
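For illustration, after this conversion a shmem-based driver boils down to an evict callback plus a single init call; everything below outside the drm-shmem helpers is a placeholder sketch, not Panfrost code:

#include <drm/drm_device.h>
#include <drm/drm_gem_shmem_helper.h>

/* Placeholder evict callback; a real driver tears down its GPU
 * mappings before purging, as the Panfrost version above does.
 */
static int mydrv_shmem_evict(struct drm_gem_object *obj)
{
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

	if (!drm_gem_shmem_is_purgeable(shmem))
		return -EBUSY;

	drm_gem_shmem_purge(shmem);

	return 0;
}

static const struct drm_gem_object_funcs mydrv_gem_funcs = {
	/* ...the usual drm_gem_shmem_object_* callbacks... */
	.evict = mydrv_shmem_evict,
};

/* Called once at probe time; the drmm_ helper unregisters the common
 * shrinker automatically on driver teardown.
 */
static int mydrv_register_shrinker(struct drm_device *ddev)
{
	return drmm_gem_shmem_init(ddev);
}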
* [PATCH v14 10/12] drm/shmem-helper: Refactor locked/unlocked functions
2023-07-22 23:47 [PATCH v14 00/12] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
` (8 preceding siblings ...)
2023-07-22 23:47 ` [PATCH v14 09/12] drm/panfrost: Switch to generic memory shrinker Dmitry Osipenko
@ 2023-07-22 23:47 ` Dmitry Osipenko
2023-07-25 7:47 ` Boris Brezillon
2023-07-22 23:47 ` [PATCH v14 11/12] drm/shmem-helper: Make drm_gem_shmem_print_info() symbol GPL Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 12/12] drm/gem: Add _unlocked postfix to drm_gem_pin/unpin() Dmitry Osipenko
11 siblings, 1 reply; 25+ messages in thread
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC (permalink / raw)
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Add locked/unlocked postfixes to the drm-shmem function names to make it
clear which functions expect the reservation lock to be held by the caller
and which take it themselves. Add more common helpers to
drm_gem_shmem_helper.h
Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 172 +++++++++---------------
drivers/gpu/drm/lima/lima_gem.c | 10 +-
drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_gem.c | 12 +-
drivers/gpu/drm/panfrost/panfrost_mmu.c | 2 +-
drivers/gpu/drm/v3d/v3d_bo.c | 10 +-
drivers/gpu/drm/virtio/virtgpu_gem.c | 4 +-
drivers/gpu/drm/virtio/virtgpu_object.c | 16 +--
include/drm/drm_gem_shmem_helper.h | 133 +++++++++---------
9 files changed, 166 insertions(+), 195 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 87cef8e91fad..3dd4da18eedf 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -41,12 +41,12 @@ MODULE_IMPORT_NS(DMA_BUF);
static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
.free = drm_gem_shmem_object_free,
.print_info = drm_gem_shmem_object_print_info,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
+ .pin = drm_gem_shmem_object_pin_unlocked,
+ .unpin = drm_gem_shmem_object_unpin_unlocked,
.get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
+ .vmap = drm_gem_shmem_object_vmap_locked,
+ .vunmap = drm_gem_shmem_object_vunmap_locked,
+ .mmap = drm_gem_shmem_object_mmap_unlocked,
.vm_ops = &drm_gem_shmem_vm_ops,
};
@@ -155,7 +155,7 @@ static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
}
static void
-drm_gem_shmem_update_pages_state(struct drm_gem_shmem_object *shmem)
+drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
@@ -193,7 +193,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
drm_prime_gem_destroy(obj, shmem->sgt);
} else {
/* take out shmem GEM object from the memory shrinker */
- drm_gem_shmem_madvise(shmem, -1);
+ drm_gem_shmem_madvise_locked(shmem, -1);
drm_WARN_ON(obj->dev, shmem->vmap_use_count);
@@ -204,7 +204,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
kfree(shmem->sgt);
}
if (shmem->pages_use_count)
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
drm_WARN_ON(obj->dev, shmem->pages_use_count);
}
@@ -267,7 +267,7 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
return -ENOMEM;
if (shmem->pages_use_count++ > 0) {
- err = drm_gem_shmem_swapin(shmem);
+ err = drm_gem_shmem_swapin_locked(shmem);
if (err)
goto err_zero_use;
@@ -278,7 +278,7 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
if (err)
goto err_zero_use;
- drm_gem_shmem_update_pages_state(shmem);
+ drm_gem_shmem_update_pages_state_locked(shmem);
return 0;
@@ -289,7 +289,7 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
}
static void
-drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
+drm_gem_shmem_release_pages_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
@@ -312,12 +312,12 @@ drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
}
/*
- * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
+ * drm_gem_shmem_put_pages_locked - Decrease use count on the backing pages for a shmem GEM object
* @shmem: shmem GEM object
*
* This function decreases the use count and puts the backing pages when use drops to zero.
*/
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
@@ -329,16 +329,19 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
if (--shmem->pages_use_count > 0)
return;
- drm_gem_shmem_release_pages(shmem);
+ drm_gem_shmem_release_pages_locked(shmem);
- drm_gem_shmem_update_pages_state(shmem);
+ drm_gem_shmem_update_pages_state_locked(shmem);
}
-EXPORT_SYMBOL(drm_gem_shmem_put_pages);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
-static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
+int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
{
+ struct drm_gem_object *obj = &shmem->base;
int ret;
+ drm_WARN_ON(obj->dev, obj->import_attach);
+
dma_resv_assert_held(shmem->base.resv);
ret = drm_gem_shmem_get_pages(shmem);
@@ -347,8 +350,9 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
return ret;
}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_pin_locked);
-static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
@@ -357,59 +361,14 @@ static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_pin_count))
return;
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
shmem->pages_pin_count--;
}
-
-/**
- * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
- * @shmem: shmem GEM object
- *
- * This function makes sure the backing pages are pinned in memory while the
- * buffer is exported.
- *
- * Returns:
- * 0 on success or a negative error code on failure.
- */
-int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
-{
- struct drm_gem_object *obj = &shmem->base;
- int ret;
-
- drm_WARN_ON(obj->dev, obj->import_attach);
-
- ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
- if (ret)
- return ret;
- ret = drm_gem_shmem_pin_locked(shmem);
- dma_resv_unlock(shmem->base.resv);
-
- return ret;
-}
-EXPORT_SYMBOL(drm_gem_shmem_pin);
-
-/**
- * drm_gem_shmem_unpin - Unpin backing pages for a shmem GEM object
- * @shmem: shmem GEM object
- *
- * This function removes the requirement that the backing pages are pinned in
- * memory.
- */
-void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
-{
- struct drm_gem_object *obj = &shmem->base;
-
- drm_WARN_ON(obj->dev, obj->import_attach);
-
- dma_resv_lock(shmem->base.resv, NULL);
- drm_gem_shmem_unpin_locked(shmem);
- dma_resv_unlock(shmem->base.resv);
-}
-EXPORT_SYMBOL(drm_gem_shmem_unpin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin_locked);
/*
- * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vmap_locked - Create a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
* @map: Returns the kernel virtual address of the SHMEM GEM object's backing
* store.
@@ -418,13 +377,13 @@ EXPORT_SYMBOL(drm_gem_shmem_unpin);
* exists for the buffer backing the shmem GEM object. It hides the differences
* between dma-buf imported and natively allocated objects.
*
- * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap_locked().
*
* Returns:
* 0 on success or a negative error code on failure.
*/
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map)
+int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map)
{
struct drm_gem_object *obj = &shmem->base;
int ret = 0;
@@ -470,22 +429,22 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
return ret;
}
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked);
/*
- * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vunmap_locked - Unmap a virtual mapping for a shmem GEM object
* @shmem: shmem GEM object
* @map: Kernel virtual address where the SHMEM GEM object was mapped
*
* This function cleans up a kernel virtual address mapping acquired by
- * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
- * zero.
+ * drm_gem_shmem_vmap_locked(). The mapping is only removed when the use count
+ * drops to zero.
*
* This function hides the differences between dma-buf imported and natively
* allocated objects.
*/
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map)
+void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map)
{
struct drm_gem_object *obj = &shmem->base;
@@ -506,7 +465,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
shmem->vaddr = NULL;
}
-EXPORT_SYMBOL(drm_gem_shmem_vunmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap_locked);
static int
drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
@@ -534,7 +493,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
/* Update madvise status, returns true if not purged, else
* false or -errno.
*/
-int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
+int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv)
{
drm_gem_shmem_resv_assert_held(shmem);
@@ -543,13 +502,13 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv)
madv = shmem->madv;
- drm_gem_shmem_update_pages_state(shmem);
+ drm_gem_shmem_update_pages_state_locked(shmem);
return (madv >= 0);
}
-EXPORT_SYMBOL(drm_gem_shmem_madvise);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise_locked);
-static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_unpin_pages_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct drm_device *dev = obj->dev;
@@ -560,7 +519,7 @@ static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
return;
dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0);
- drm_gem_shmem_release_pages(shmem);
+ drm_gem_shmem_release_pages_locked(shmem);
drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
sg_free_table(shmem->sgt);
@@ -568,13 +527,13 @@ static void drm_gem_shmem_unpin_pages(struct drm_gem_shmem_object *shmem)
shmem->sgt = NULL;
}
-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem));
- drm_gem_shmem_unpin_pages(shmem);
+ drm_gem_shmem_unpin_pages_locked(shmem);
drm_gem_free_mmap_offset(obj);
/* Our goal here is to return as much of the memory as
@@ -588,13 +547,13 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
shmem->madv = -1;
shmem->evicted = false;
- drm_gem_shmem_update_pages_state(shmem);
+ drm_gem_shmem_update_pages_state_locked(shmem);
}
-EXPORT_SYMBOL(drm_gem_shmem_purge);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge_locked);
/**
- * drm_gem_shmem_swapin() - Moves shmem GEM back to memory and enables
- * hardware access to the memory.
+ * drm_gem_shmem_swapin_locked() - Moves shmem GEM back to memory and enables
+ * hardware access to the memory.
* @shmem: shmem GEM object
*
* This function moves shmem GEM back to memory if it was previously evicted
@@ -603,7 +562,7 @@ EXPORT_SYMBOL(drm_gem_shmem_purge);
* Returns:
* 0 on success or a negative error code on failure.
*/
-int drm_gem_shmem_swapin(struct drm_gem_shmem_object *shmem)
+int drm_gem_shmem_swapin_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
struct sg_table *sgt;
@@ -631,7 +590,7 @@ int drm_gem_shmem_swapin(struct drm_gem_shmem_object *shmem)
shmem->sgt = sgt;
shmem->evicted = false;
- drm_gem_shmem_update_pages_state(shmem);
+ drm_gem_shmem_update_pages_state_locked(shmem);
}
if (!shmem->pages)
@@ -639,7 +598,7 @@ int drm_gem_shmem_swapin(struct drm_gem_shmem_object *shmem)
return 0;
}
-EXPORT_SYMBOL_GPL(drm_gem_shmem_swapin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_swapin_locked);
/**
* drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -702,7 +661,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
if (page_offset >= num_pages || (!shmem->pages && !shmem->evicted)) {
ret = VM_FAULT_SIGBUS;
} else {
- err = drm_gem_shmem_swapin(shmem);
+ err = drm_gem_shmem_swapin_locked(shmem);
if (err) {
ret = VM_FAULT_OOM;
goto unlock;
@@ -736,7 +695,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
shmem->pages_use_count++;
- drm_gem_shmem_update_pages_state(shmem);
+ drm_gem_shmem_update_pages_state_locked(shmem);
dma_resv_unlock(shmem->base.resv);
drm_gem_vm_open(vma);
@@ -748,7 +707,7 @@ static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
dma_resv_lock(shmem->base.resv, NULL);
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
dma_resv_unlock(shmem->base.resv);
drm_gem_vm_close(vma);
@@ -762,7 +721,7 @@ const struct vm_operations_struct drm_gem_shmem_vm_ops = {
EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
/**
- * drm_gem_shmem_mmap - Memory-map a shmem GEM object
+ * drm_gem_shmem_mmap_unlocked - Memory-map a shmem GEM object
* @shmem: shmem GEM object
* @vma: VMA for the area to be mapped
*
@@ -772,7 +731,8 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
* Returns:
* 0 on success or a negative error code on failure.
*/
-int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
+int drm_gem_shmem_mmap_unlocked(struct drm_gem_shmem_object *shmem,
+ struct vm_area_struct *vma)
{
struct drm_gem_object *obj = &shmem->base;
int ret;
@@ -802,7 +762,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
return 0;
}
-EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap_unlocked);
/**
* drm_gem_shmem_print_info() - Print &drm_gem_shmem_object info for debugfs
@@ -875,7 +835,7 @@ struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object
shmem->sgt = sgt;
- drm_gem_shmem_update_pages_state(shmem);
+ drm_gem_shmem_update_pages_state_locked(shmem);
return sgt;
@@ -883,7 +843,7 @@ struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object
sg_free_table(sgt);
kfree(sgt);
err_put_pages:
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt_locked);
@@ -974,21 +934,21 @@ drm_gem_shmem_shrinker_count_objects(struct shrinker *shrinker,
return count ?: SHRINK_EMPTY;
}
-void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
drm_WARN_ON(obj->dev, !drm_gem_shmem_is_evictable(shmem));
drm_WARN_ON(obj->dev, shmem->evicted);
- drm_gem_shmem_unpin_pages(shmem);
+ drm_gem_shmem_unpin_pages_locked(shmem);
shmem->evicted = true;
- drm_gem_shmem_update_pages_state(shmem);
+ drm_gem_shmem_update_pages_state_locked(shmem);
}
-EXPORT_SYMBOL_GPL(drm_gem_shmem_evict);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_evict_locked);
-static bool drm_gem_shmem_shrinker_evict(struct drm_gem_object *obj)
+static bool drm_gem_shmem_shrinker_evict_locked(struct drm_gem_object *obj)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
int err;
@@ -1004,7 +964,7 @@ static bool drm_gem_shmem_shrinker_evict(struct drm_gem_object *obj)
return true;
}
-static bool drm_gem_shmem_shrinker_purge(struct drm_gem_object *obj)
+static bool drm_gem_shmem_shrinker_purge_locked(struct drm_gem_object *obj)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
int err;
@@ -1033,13 +993,13 @@ drm_gem_shmem_shrinker_scan_objects(struct shrinker *shrinker,
/* purge as many objects as we can */
freed += drm_gem_lru_scan(&shmem_shrinker->lru_evictable,
nr_to_scan, &remaining,
- drm_gem_shmem_shrinker_purge);
+ drm_gem_shmem_shrinker_purge_locked);
/* evict as many objects as we can */
if (freed < nr_to_scan)
freed += drm_gem_lru_scan(&shmem_shrinker->lru_evictable,
nr_to_scan - freed, &remaining,
- drm_gem_shmem_shrinker_evict);
+ drm_gem_shmem_shrinker_evict_locked);
return (freed > 0 && remaining > 0) ? freed : SHRINK_STOP;
}
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 4f9736e5f929..492e5cf739bb 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -180,7 +180,7 @@ static int lima_gem_pin(struct drm_gem_object *obj)
if (bo->heap_size)
return -EINVAL;
- return drm_gem_shmem_pin(&bo->base);
+ return drm_gem_shmem_object_pin_unlocked(obj);
}
static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
@@ -190,7 +190,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
if (bo->heap_size)
return -EINVAL;
- return drm_gem_shmem_vmap(&bo->base, map);
+ return drm_gem_shmem_object_vmap_locked(obj, map);
}
static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
@@ -200,7 +200,7 @@ static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
if (bo->heap_size)
return -EINVAL;
- return drm_gem_shmem_mmap(&bo->base, vma);
+ return drm_gem_shmem_object_mmap_unlocked(obj, vma);
}
static const struct drm_gem_object_funcs lima_gem_funcs = {
@@ -209,10 +209,10 @@ static const struct drm_gem_object_funcs lima_gem_funcs = {
.close = lima_gem_object_close,
.print_info = drm_gem_shmem_object_print_info,
.pin = lima_gem_pin,
- .unpin = drm_gem_shmem_object_unpin,
+ .unpin = drm_gem_shmem_object_unpin_unlocked,
.get_sg_table = drm_gem_shmem_object_get_sg_table,
.vmap = lima_gem_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
+ .vunmap = drm_gem_shmem_object_vunmap_locked,
.mmap = lima_gem_mmap,
.vm_ops = &drm_gem_shmem_vm_ops,
};
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index d1b2bd6db443..74d802e5b1a6 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -437,7 +437,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
}
}
- args->retained = drm_gem_shmem_madvise(&bo->base, args->madv);
+ args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv);
out_unlock_mappings:
mutex_unlock(&bo->mappings.lock);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 08d795c28b4e..6dcf8368d184 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -182,7 +182,7 @@ static int panfrost_gem_pin(struct drm_gem_object *obj)
if (bo->is_heap)
return -EINVAL;
- return drm_gem_shmem_pin(&bo->base);
+ return drm_gem_shmem_object_pin_unlocked(obj);
}
static int panfrost_shmem_evict(struct drm_gem_object *obj)
@@ -197,7 +197,7 @@ static int panfrost_shmem_evict(struct drm_gem_object *obj)
panfrost_gem_teardown_mappings_locked(bo);
- drm_gem_shmem_purge(&bo->base);
+ drm_gem_shmem_purge_locked(&bo->base);
mutex_unlock(&bo->mappings.lock);
@@ -210,11 +210,11 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
.close = panfrost_gem_close,
.print_info = drm_gem_shmem_object_print_info,
.pin = panfrost_gem_pin,
- .unpin = drm_gem_shmem_object_unpin,
+ .unpin = drm_gem_shmem_object_unpin_unlocked,
.get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
+ .vmap = drm_gem_shmem_object_vmap_locked,
+ .vunmap = drm_gem_shmem_object_vunmap_locked,
+ .mmap = drm_gem_shmem_object_mmap_unlocked,
.vm_ops = &drm_gem_shmem_vm_ops,
.evict = panfrost_shmem_evict,
};
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index c0123d09f699..7771769f0ce0 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -535,7 +535,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
err_map:
sg_free_table(sgt);
err_pages:
- drm_gem_shmem_put_pages(&bo->base);
+ drm_gem_shmem_put_pages_locked(&bo->base);
err_unlock:
dma_resv_unlock(obj->resv);
err_bo:
diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
index 8b3229a37c6d..ad83a3043d02 100644
--- a/drivers/gpu/drm/v3d/v3d_bo.c
+++ b/drivers/gpu/drm/v3d/v3d_bo.c
@@ -53,12 +53,12 @@ void v3d_free_object(struct drm_gem_object *obj)
static const struct drm_gem_object_funcs v3d_gem_funcs = {
.free = v3d_free_object,
.print_info = drm_gem_shmem_object_print_info,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
+ .pin = drm_gem_shmem_object_pin_unlocked,
+ .unpin = drm_gem_shmem_object_unpin_unlocked,
.get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
+ .vmap = drm_gem_shmem_object_vmap_locked,
+ .vunmap = drm_gem_shmem_object_vunmap_locked,
+ .mmap = drm_gem_shmem_object_mmap_unlocked,
.vm_ops = &drm_gem_shmem_vm_ops,
};
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c
index b9ceb0602fd5..aea15548ba9e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -356,7 +356,7 @@ int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
int ret = 0;
if (virtio_gpu_is_shmem(bo))
- ret = drm_gem_shmem_object_pin(&bo->base.base);
+ ret = drm_gem_shmem_object_pin_unlocked(&bo->base.base);
return ret;
}
@@ -364,5 +364,5 @@ int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo)
{
if (virtio_gpu_is_shmem(bo))
- drm_gem_shmem_object_unpin(&bo->base.base);
+ drm_gem_shmem_object_unpin_unlocked(&bo->base.base);
}
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 70dcd19266dc..6cd64eac555f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -139,9 +139,9 @@ static int virtio_gpu_shmem_evict(struct drm_gem_object *obj)
return err;
}
- drm_gem_shmem_purge(&bo->base);
+ drm_gem_shmem_purge_locked(&bo->base);
} else {
- drm_gem_shmem_evict(&bo->base);
+ drm_gem_shmem_evict_locked(&bo->base);
}
return 0;
@@ -198,7 +198,7 @@ int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo)
unsigned int nents;
int err;
- err = drm_gem_shmem_swapin(&bo->base);
+ err = drm_gem_shmem_swapin_locked(&bo->base);
if (err)
return err;
@@ -220,12 +220,12 @@ static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
.close = virtio_gpu_gem_object_close,
.print_info = drm_gem_shmem_object_print_info,
.export = virtgpu_gem_prime_export,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
+ .pin = drm_gem_shmem_object_pin_unlocked,
+ .unpin = drm_gem_shmem_object_unpin_unlocked,
.get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
+ .vmap = drm_gem_shmem_object_vmap_locked,
+ .vunmap = drm_gem_shmem_object_vunmap_locked,
+ .mmap = drm_gem_shmem_object_mmap_unlocked,
.vm_ops = &drm_gem_shmem_vm_ops,
.evict = virtio_gpu_shmem_evict,
};
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 55f5ff387bbc..73cfca5853fd 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -109,16 +109,17 @@ struct drm_gem_shmem_object {
struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem);
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
-int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);
-void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem);
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map);
-void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
- struct iosys_map *map);
-int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma);
-
-int drm_gem_shmem_madvise(struct drm_gem_shmem_object *shmem, int madv);
+void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map);
+void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map);
+int drm_gem_shmem_mmap_unlocked(struct drm_gem_shmem_object *shmem,
+ struct vm_area_struct *vma);
+int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem);
+
+int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv);
static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem)
{
@@ -130,10 +131,10 @@ static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object *shmem
(shmem->sgt || shmem->evicted);
}
-int drm_gem_shmem_swapin(struct drm_gem_shmem_object *shmem);
+int drm_gem_shmem_swapin_locked(struct drm_gem_shmem_object *shmem);
-void drm_gem_shmem_evict(struct drm_gem_shmem_object *shmem);
-void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_evict_locked(struct drm_gem_shmem_object *shmem);
+void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object *shmem);
struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
@@ -179,34 +180,6 @@ static inline void drm_gem_shmem_object_print_info(struct drm_printer *p, unsign
drm_gem_shmem_print_info(shmem, p, indent);
}
-/**
- * drm_gem_shmem_object_pin - GEM object function for drm_gem_shmem_pin()
- * @obj: GEM object
- *
- * This function wraps drm_gem_shmem_pin(). Drivers that employ the shmem helpers should
- * use it as their &drm_gem_object_funcs.pin handler.
- */
-static inline int drm_gem_shmem_object_pin(struct drm_gem_object *obj)
-{
- struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-
- return drm_gem_shmem_pin(shmem);
-}
-
-/**
- * drm_gem_shmem_object_unpin - GEM object function for drm_gem_shmem_unpin()
- * @obj: GEM object
- *
- * This function wraps drm_gem_shmem_unpin(). Drivers that employ the shmem helpers should
- * use it as their &drm_gem_object_funcs.unpin handler.
- */
-static inline void drm_gem_shmem_object_unpin(struct drm_gem_object *obj)
-{
- struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-
- drm_gem_shmem_unpin(shmem);
-}
-
/**
* drm_gem_shmem_object_get_sg_table - GEM object function for drm_gem_shmem_get_sg_table()
* @obj: GEM object
@@ -225,64 +198,102 @@ static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_
}
/*
- * drm_gem_shmem_object_vmap - GEM object function for drm_gem_shmem_vmap()
+ * drm_gem_shmem_object_vmap_locked - GEM object function for drm_gem_shmem_vmap_locked()
* @obj: GEM object
* @map: Returns the kernel virtual address of the SHMEM GEM object's backing store.
*
- * This function wraps drm_gem_shmem_vmap(). Drivers that employ the shmem helpers should
- * use it as their &drm_gem_object_funcs.vmap handler.
+ * This function wraps drm_gem_shmem_vmap_locked(). Drivers that employ the shmem
+ * helpers should use it as their &drm_gem_object_funcs.vmap handler.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
-static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
- struct iosys_map *map)
+static inline int drm_gem_shmem_object_vmap_locked(struct drm_gem_object *obj,
+ struct iosys_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- return drm_gem_shmem_vmap(shmem, map);
+ return drm_gem_shmem_vmap_locked(shmem, map);
}
/*
- * drm_gem_shmem_object_vunmap - GEM object function for drm_gem_shmem_vunmap()
+ * drm_gem_shmem_object_vunmap_locked - GEM object function for drm_gem_shmem_vunmap_locked()
* @obj: GEM object
* @map: Kernel virtual address where the SHMEM GEM object was mapped
*
- * This function wraps drm_gem_shmem_vunmap(). Drivers that employ the shmem helpers should
- * use it as their &drm_gem_object_funcs.vunmap handler.
+ * This function wraps drm_gem_shmem_vunmap_locked(). Drivers that employ the shmem
+ * helpers should use it as their &drm_gem_object_funcs.vunmap handler.
*/
-static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj,
- struct iosys_map *map)
+static inline void drm_gem_shmem_object_vunmap_locked(struct drm_gem_object *obj,
+ struct iosys_map *map)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- drm_gem_shmem_vunmap(shmem, map);
+ drm_gem_shmem_vunmap_locked(shmem, map);
}
/**
- * drm_gem_shmem_object_mmap - GEM object function for drm_gem_shmem_mmap()
+ * drm_gem_shmem_object_mmap_unlocked - GEM object function for drm_gem_shmem_mmap_unlocked()
* @obj: GEM object
* @vma: VMA for the area to be mapped
*
- * This function wraps drm_gem_shmem_mmap(). Drivers that employ the shmem helpers should
- * use it as their &drm_gem_object_funcs.mmap handler.
+ * This function wraps drm_gem_shmem_mmap_unlocked(). Drivers that employ the shmem
+ * helpers should use it as their &drm_gem_object_funcs.mmap handler.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
-static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static inline int drm_gem_shmem_object_mmap_unlocked(struct drm_gem_object *obj,
+ struct vm_area_struct *vma)
{
struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
- return drm_gem_shmem_mmap(shmem, vma);
+ return drm_gem_shmem_mmap_unlocked(shmem, vma);
+}
+
+/**
+ * drm_gem_shmem_object_pin_unlocked - unlocked GEM object function for drm_gem_shmem_pin_locked()
+ * @obj: GEM object
+ *
+ * This function wraps drm_gem_shmem_pin_locked(). Drivers that employ the shmem
+ * helpers should use it as their &drm_gem_object_funcs.pin handler.
+ */
+static inline int drm_gem_shmem_object_pin_unlocked(struct drm_gem_object *obj)
+{
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+ int ret;
+
+ ret = dma_resv_lock_interruptible(obj->resv, NULL);
+ if (ret)
+ return ret;
+ ret = drm_gem_shmem_pin_locked(shmem);
+ dma_resv_unlock(obj->resv);
+
+ return ret;
+}
+
+/**
+ * drm_gem_shmem_object_unpin_unlocked - unlocked GEM object function for drm_gem_shmem_unpin_locked()
+ * @obj: GEM object
+ *
+ * This function wraps drm_gem_shmem_unpin_locked(). Drivers that employ the shmem
+ * helpers should use it as their &drm_gem_object_funcs.unpin handler.
+ */
+static inline void drm_gem_shmem_object_unpin_unlocked(struct drm_gem_object *obj)
+{
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+
+ dma_resv_lock(obj->resv, NULL);
+ drm_gem_shmem_unpin_locked(shmem);
+ dma_resv_unlock(obj->resv);
}
/**
- * drm_gem_shmem_object_madvise_unlocked - unlocked GEM object function for drm_gem_shmem_madvise()
+ * drm_gem_shmem_object_madvise_unlocked - unlocked GEM object function for drm_gem_shmem_madvise_locked()
* @obj: GEM object
* @madv: Madvise value
*
- * This function wraps drm_gem_shmem_madvise(), providing unlocked variant.
+ * This function wraps drm_gem_shmem_madvise_locked(), providing unlocked variant.
*
* Returns:
* 0 on success or a negative error code on failure.
@@ -295,7 +306,7 @@ static inline int drm_gem_shmem_object_madvise_unlocked(struct drm_gem_object *o
ret = dma_resv_lock_interruptible(obj->resv, NULL);
if (ret)
return ret;
- ret = drm_gem_shmem_madvise(shmem, madv);
+ ret = drm_gem_shmem_madvise_locked(shmem, madv);
dma_resv_unlock(obj->resv);
return ret;
--
2.41.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
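To illustrate the naming convention, a sketch of the two calling styles; 'shmem' is assumed to be a native (non-imported) shmem GEM object:

#include <linux/dma-resv.h>
#include <drm/drm_gem_shmem_helper.h>

static int example_pin_briefly(struct drm_gem_shmem_object *shmem)
{
	int ret;

	/* _locked variants expect the caller to hold the reservation lock */
	ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
	if (ret)
		return ret;

	ret = drm_gem_shmem_pin_locked(shmem);
	if (!ret)
		drm_gem_shmem_unpin_locked(shmem);

	dma_resv_unlock(shmem->base.resv);

	/* the _unlocked wrappers, e.g. drm_gem_shmem_object_pin_unlocked(),
	 * take and drop the lock themselves */
	return ret;
}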
* [PATCH v14 11/12] drm/shmem-helper: Make drm_gem_shmem_print_info() symbol GPL
2023-07-22 23:47 [PATCH v14 00/12] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
` (9 preceding siblings ...)
2023-07-22 23:47 ` [PATCH v14 10/12] drm/shmem-helper: Refactor locked/unlocked functions Dmitry Osipenko
@ 2023-07-22 23:47 ` Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 12/12] drm/gem: Add _unlocked postfix to drm_gem_pin/unpin() Dmitry Osipenko
11 siblings, 0 replies; 25+ messages in thread
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC (permalink / raw)
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Export drm_gem_shmem_print_info() with EXPORT_SYMBOL_GPL() to make it
consistent with the rest of the drm-shmem exports. It is the only remaining
drm-shmem symbol that isn't GPL.
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 3dd4da18eedf..46190e70c3df 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -782,7 +782,7 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
drm_printf_indent(p, indent, "madv=%d\n", shmem->madv);
}
-EXPORT_SYMBOL(drm_gem_shmem_print_info);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);
/**
* drm_gem_shmem_get_sg_table - Provide a scatter/gather table of pinned
--
2.41.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH v14 12/12] drm/gem: Add _unlocked postfix to drm_gem_pin/unpin()
2023-07-22 23:47 [PATCH v14 00/12] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
` (10 preceding siblings ...)
2023-07-22 23:47 ` [PATCH v14 11/12] drm/shmem-helper: Make drm_gem_shmem_print_info() symbol GPL Dmitry Osipenko
@ 2023-07-22 23:47 ` Dmitry Osipenko
2023-07-25 7:53 ` Boris Brezillon
11 siblings, 1 reply; 25+ messages in thread
From: Dmitry Osipenko @ 2023-07-22 23:47 UTC (permalink / raw)
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Christian König, Qiang Yu, Steven Price,
Boris Brezillon, Emma Anholt, Melissa Wen
Cc: kernel, linux-kernel, dri-devel, virtualization
Make it clear that the drm_gem_pin()/drm_gem_unpin() functions take the
reservation lock themselves by adding an _unlocked postfix to the function
names.
Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/drm_gem.c | 4 ++--
drivers/gpu/drm/drm_internal.h | 4 ++--
drivers/gpu/drm/drm_prime.c | 4 ++--
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index c18686f434d4..805eb0d85297 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1146,7 +1146,7 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
obj->funcs->print_info(p, indent, obj);
}
-int drm_gem_pin(struct drm_gem_object *obj)
+int drm_gem_pin_unlocked(struct drm_gem_object *obj)
{
if (obj->funcs->pin)
return obj->funcs->pin(obj);
@@ -1154,7 +1154,7 @@ int drm_gem_pin(struct drm_gem_object *obj)
return 0;
}
-void drm_gem_unpin(struct drm_gem_object *obj)
+void drm_gem_unpin_unlocked(struct drm_gem_object *obj)
{
if (obj->funcs->unpin)
obj->funcs->unpin(obj);
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index d7e023bbb0d5..80f5bd1da8fd 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -173,8 +173,8 @@ void drm_gem_release(struct drm_device *dev, struct drm_file *file_private);
void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_gem_object *obj);
-int drm_gem_pin(struct drm_gem_object *obj);
-void drm_gem_unpin(struct drm_gem_object *obj);
+int drm_gem_pin_unlocked(struct drm_gem_object *obj);
+void drm_gem_unpin_unlocked(struct drm_gem_object *obj);
int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 63b709a67471..8145b49e95ff 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -583,7 +583,7 @@ int drm_gem_map_attach(struct dma_buf *dma_buf,
if (!obj->funcs->get_sg_table)
return -ENOSYS;
- return drm_gem_pin(obj);
+ return drm_gem_pin_unlocked(obj);
}
EXPORT_SYMBOL(drm_gem_map_attach);
@@ -601,7 +601,7 @@ void drm_gem_map_detach(struct dma_buf *dma_buf,
{
struct drm_gem_object *obj = dma_buf->priv;
- drm_gem_unpin(obj);
+ drm_gem_unpin_unlocked(obj);
}
EXPORT_SYMBOL(drm_gem_map_detach);
--
2.41.0
* Re: [PATCH v14 01/12] drm/shmem-helper: Factor out pages alloc/release from drm_gem_shmem_get/put_pages()
2023-07-22 23:47 ` [PATCH v14 01/12] drm/shmem-helper: Factor out pages alloc/release from drm_gem_shmem_get/put_pages() Dmitry Osipenko
@ 2023-07-25 7:14 ` Boris Brezillon
0 siblings, 0 replies; 25+ messages in thread
From: Boris Brezillon @ 2023-07-25 7:14 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On Sun, 23 Jul 2023 02:47:35 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> Factor out the pages allocation from drm_gem_shmem_get_pages() into a
> new drm_gem_shmem_acquire_pages() function, and do the same for
> put_pages(), in preparation for adding shrinker support to drm-shmem.
>
> Once the shrinker is added, pages_use_count > 0 will no longer mean
> that the pages are pinned, because the pages could be swapped out by
> the shrinker while pages_use_count stays greater than 0. A new
> pages_pin_count field will be added in a later patch.
>
> The new common drm_gem_shmem_acquire/release_pages() helpers will be
> used by the shrinker code to perform the page swapping.
>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 65 ++++++++++++++++++++------
> 1 file changed, 52 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index a783d2245599..267153853e2c 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -165,21 +165,26 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
> }
> EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
>
> -static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +static int
> +drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem)
> {
> struct drm_gem_object *obj = &shmem->base;
> struct page **pages;
>
> dma_resv_assert_held(shmem->base.resv);
Not directly related to this patch, but can we start using _locked
suffixes for any function that's expecting the dma-resv lock to be held?
>
> - if (shmem->pages_use_count++ > 0)
> - return 0;
> + if (shmem->madv < 0) {
> + drm_WARN_ON(obj->dev, shmem->pages);
> + return -ENOMEM;
> + }
> +
> + if (drm_WARN_ON(obj->dev, !shmem->pages_use_count))
> + return -EINVAL;
>
> pages = drm_gem_get_pages(obj);
> if (IS_ERR(pages)) {
> drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
> PTR_ERR(pages));
> - shmem->pages_use_count = 0;
> return PTR_ERR(pages);
> }
>
> @@ -198,6 +203,48 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> return 0;
> }
>
> +static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
> +{
> + int err;
> +
> + dma_resv_assert_held(shmem->base.resv);
> +
> + if (shmem->madv < 0)
> + return -ENOMEM;
> +
> + if (shmem->pages_use_count++ > 0)
> + return 0;
> +
> + err = drm_gem_shmem_acquire_pages(shmem);
> + if (err)
> + goto err_zero_use;
> +
> + return 0;
> +
> +err_zero_use:
> + shmem->pages_use_count = 0;
> +
> + return err;
> +}
> +
> +static void
> +drm_gem_shmem_release_pages(struct drm_gem_shmem_object *shmem)
> +{
> + struct drm_gem_object *obj = &shmem->base;
> +
> + dma_resv_assert_held(shmem->base.resv);
> +
> +#ifdef CONFIG_X86
> + if (shmem->map_wc)
> + set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> +#endif
> +
> + drm_gem_put_pages(obj, shmem->pages,
> + shmem->pages_mark_dirty_on_put,
> + shmem->pages_mark_accessed_on_put);
> + shmem->pages = NULL;
> +}
> +
> /*
> * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a shmem GEM object
> * @shmem: shmem GEM object
> @@ -216,15 +263,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
> if (--shmem->pages_use_count > 0)
> return;
>
> -#ifdef CONFIG_X86
> - if (shmem->map_wc)
> - set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
> -#endif
> -
> - drm_gem_put_pages(obj, shmem->pages,
> - shmem->pages_mark_dirty_on_put,
> - shmem->pages_mark_accessed_on_put);
> - shmem->pages = NULL;
> + drm_gem_shmem_release_pages(shmem);
> }
> EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>
* Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
2023-07-22 23:47 ` [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field Dmitry Osipenko
@ 2023-07-25 7:27 ` Boris Brezillon
2023-07-25 8:32 ` Boris Brezillon
0 siblings, 1 reply; 25+ messages in thread
From: Boris Brezillon @ 2023-07-25 7:27 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On Sun, 23 Jul 2023 02:47:36 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> Add a new pages_pin_count field to struct drm_gem_shmem_object that
> determines whether pages are evictable by the memory shrinker. The pages
> will be evictable only when pages_pin_count=0. This patch prepares the
> code for the addition of the memory shrinker that will use the new field.
>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 9 +++++++++
> include/drm/drm_gem_shmem_helper.h | 9 +++++++++
> 2 files changed, 18 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 267153853e2c..42ba201dda50 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -274,15 +274,24 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
> dma_resv_assert_held(shmem->base.resv);
>
> ret = drm_gem_shmem_get_pages(shmem);
> + if (!ret)
> + shmem->pages_pin_count++;
>
> return ret;
> }
>
> static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
> {
> + struct drm_gem_object *obj = &shmem->base;
> +
> dma_resv_assert_held(shmem->base.resv);
>
> + if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_pin_count))
> + return;
> +
> drm_gem_shmem_put_pages(shmem);
> +
> + shmem->pages_pin_count--;
> }
>
> /**
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index bf0c31aa8fbe..7111f5743006 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -39,6 +39,15 @@ struct drm_gem_shmem_object {
> */
> unsigned int pages_use_count;
>
> + /**
> + * @pages_pin_count:
> + *
> + * Reference count on the pinned pages table.
> + * The pages are allowed to be evicted by the memory shrinker
> + * only when the count is zero.
> + */
> + unsigned int pages_pin_count;
Can we make it an atomic_t, so we can avoid taking the lock when the
GEM has already been pinned? That's something I need to be able to grab
a pin-ref in a path where the GEM resv lock is already held [1]. We could
of course expose the locked version, but in my case, I want to enforce
the fact that the GEM has been pinned before the drm_gem_shmem_pin() call in
the section protected by the resv lock, so catching a "refcount 0 -> 1"
situation would be useful. Besides, using an atomic to avoid the
lock/unlock dance when refcount > 1 might be beneficial to everyone.
[1]https://gitlab.freedesktop.org/bbrezillon/linux/-/commit/4420fa0d5768ebdc35b34d58d4ae5fad9fbb93f9
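For the sake of illustration, here is a minimal sketch of what I have
in mind, assuming pages_pin_count is turned into an atomic_t and
reusing the existing drm_gem_shmem_get_pages() helper (the slow-path
logic below is my assumption, not code from this series):

int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
{
	int ret = 0;

	/* Fast path: no locking needed when the GEM is already pinned. */
	if (atomic_inc_not_zero(&shmem->pages_pin_count))
		return 0;

	/* Slow path: the 0 -> 1 transition happens under the resv lock. */
	dma_resv_lock(shmem->base.resv, NULL);

	/* Re-check under the lock: someone may have pinned it meanwhile. */
	if (atomic_read(&shmem->pages_pin_count) == 0)
		ret = drm_gem_shmem_get_pages(shmem);
	if (!ret)
		atomic_inc(&shmem->pages_pin_count);

	dma_resv_unlock(shmem->base.resv);

	return ret;
}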
> +
> /**
> * @madv: State for madvise
> *
* Re: [PATCH v14 10/12] drm/shmem-helper: Refactor locked/unlocked functions
2023-07-22 23:47 ` [PATCH v14 10/12] drm/shmem-helper: Refactor locked/unlocked functions Dmitry Osipenko
@ 2023-07-25 7:47 ` Boris Brezillon
2023-07-25 7:58 ` Boris Brezillon
0 siblings, 1 reply; 25+ messages in thread
From: Boris Brezillon @ 2023-07-25 7:47 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On Sun, 23 Jul 2023 02:47:44 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> Add locked/unlocked postfixes to drm-shmem function names to make it
> clear where the reservation lock is taken and where it is not.
Uh, ignore my comment on patch 1 then...
> Add more common helpers to drm_gem_shmem_helper.h
I'd do the renaming and exporting in separate patches.
>
> Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
* Re: [PATCH v14 12/12] drm/gem: Add _unlocked postfix to drm_gem_pin/unpin()
2023-07-22 23:47 ` [PATCH v14 12/12] drm/gem: Add _unlocked postfix to drm_gem_pin/unpin() Dmitry Osipenko
@ 2023-07-25 7:53 ` Boris Brezillon
2023-07-31 13:04 ` Dmitry Osipenko
0 siblings, 1 reply; 25+ messages in thread
From: Boris Brezillon @ 2023-07-25 7:53 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On Sun, 23 Jul 2023 02:47:46 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> Make it clear that the drm_gem_pin/unpin() functions take the
> reservation lock by adding an _unlocked postfix to the function names.
>
> Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
I'm still a bit confused by the fact that we sometimes use the
xxx[_locked]() pattern (the version without the _locked suffix takes the
lock) and other times the xxx[_unlocked]() pattern (the version with the
_unlocked suffix takes the lock). It'd be good to choose one pattern and
stick to it, at least for all core functions...
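To spell out the two conventions with hypothetical prototypes (these
are for illustration only, they don't all exist as such):

/* xxx[_locked]() pattern: the plain name takes the resv lock itself. */
int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem);        /* takes resv */
int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem); /* resv held */

/* xxx[_unlocked]() pattern: the plain name expects the resv lock held. */
int drm_gem_pin(struct drm_gem_object *obj);          /* resv held by caller */
int drm_gem_pin_unlocked(struct drm_gem_object *obj); /* takes resv */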
> ---
> drivers/gpu/drm/drm_gem.c | 4 ++--
> drivers/gpu/drm/drm_internal.h | 4 ++--
> drivers/gpu/drm/drm_prime.c | 4 ++--
> 3 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index c18686f434d4..805eb0d85297 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1146,7 +1146,7 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
> obj->funcs->print_info(p, indent, obj);
> }
>
> -int drm_gem_pin(struct drm_gem_object *obj)
> +int drm_gem_pin_unlocked(struct drm_gem_object *obj)
> {
> if (obj->funcs->pin)
> return obj->funcs->pin(obj);
> @@ -1154,7 +1154,7 @@ int drm_gem_pin(struct drm_gem_object *obj)
> return 0;
> }
>
> -void drm_gem_unpin(struct drm_gem_object *obj)
> +void drm_gem_unpin_unlocked(struct drm_gem_object *obj)
> {
> if (obj->funcs->unpin)
> obj->funcs->unpin(obj);
> diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
> index d7e023bbb0d5..80f5bd1da8fd 100644
> --- a/drivers/gpu/drm/drm_internal.h
> +++ b/drivers/gpu/drm/drm_internal.h
> @@ -173,8 +173,8 @@ void drm_gem_release(struct drm_device *dev, struct drm_file *file_private);
> void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
> const struct drm_gem_object *obj);
>
> -int drm_gem_pin(struct drm_gem_object *obj);
> -void drm_gem_unpin(struct drm_gem_object *obj);
> +int drm_gem_pin_unlocked(struct drm_gem_object *obj);
> +void drm_gem_unpin_unlocked(struct drm_gem_object *obj);
> int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map);
> void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
>
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 63b709a67471..8145b49e95ff 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -583,7 +583,7 @@ int drm_gem_map_attach(struct dma_buf *dma_buf,
> if (!obj->funcs->get_sg_table)
> return -ENOSYS;
>
> - return drm_gem_pin(obj);
> + return drm_gem_pin_unlocked(obj);
> }
> EXPORT_SYMBOL(drm_gem_map_attach);
>
> @@ -601,7 +601,7 @@ void drm_gem_map_detach(struct dma_buf *dma_buf,
> {
> struct drm_gem_object *obj = dma_buf->priv;
>
> - drm_gem_unpin(obj);
> + drm_gem_unpin_unlocked(obj);
> }
> EXPORT_SYMBOL(drm_gem_map_detach);
>
* Re: [PATCH v14 10/12] drm/shmem-helper: Refactor locked/unlocked functions
2023-07-25 7:47 ` Boris Brezillon
@ 2023-07-25 7:58 ` Boris Brezillon
0 siblings, 0 replies; 25+ messages in thread
From: Boris Brezillon @ 2023-07-25 7:58 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On Tue, 25 Jul 2023 09:47:02 +0200
Boris Brezillon <boris.brezillon@collabora.com> wrote:
> On Sun, 23 Jul 2023 02:47:44 +0300
> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>
> > Add locked/unlocked postfixes to drm-shmem function names to make it
> > clear where the reservation lock is taken and where it is not.
>
> Uh, ignore my comment on patch 1 then...
>
> > Add more common helpers to drm_gem_shmem_helper.h
>
> I'd do the renaming and exporting in separate patches.
Actually, I'd refrain from exporting functions until someone needs
them, as you rightfully pointed out in your previous reply.
>
> >
> > Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
> > Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
* Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
2023-07-25 7:27 ` Boris Brezillon
@ 2023-07-25 8:32 ` Boris Brezillon
2023-07-31 12:27 ` Dmitry Osipenko
0 siblings, 1 reply; 25+ messages in thread
From: Boris Brezillon @ 2023-07-25 8:32 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On Tue, 25 Jul 2023 09:27:09 +0200
Boris Brezillon <boris.brezillon@collabora.com> wrote:
> On Sun, 23 Jul 2023 02:47:36 +0300
> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>
> > Add a new pages_pin_count field to struct drm_gem_shmem_object that
> > determines whether pages are evictable by the memory shrinker. The pages
> > will be evictable only when pages_pin_count=0. This patch prepares the
> > code for the addition of the memory shrinker that will use the new field.
> >
> > Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> > ---
> > drivers/gpu/drm/drm_gem_shmem_helper.c | 9 +++++++++
> > include/drm/drm_gem_shmem_helper.h | 9 +++++++++
> > 2 files changed, 18 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index 267153853e2c..42ba201dda50 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -274,15 +274,24 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
> > dma_resv_assert_held(shmem->base.resv);
> >
> > ret = drm_gem_shmem_get_pages(shmem);
> > + if (!ret)
> > + shmem->pages_pin_count++;
> >
> > return ret;
> > }
> >
> > static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
> > {
> > + struct drm_gem_object *obj = &shmem->base;
> > +
> > dma_resv_assert_held(shmem->base.resv);
> >
> > + if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_pin_count))
> > + return;
> > +
> > drm_gem_shmem_put_pages(shmem);
> > +
> > + shmem->pages_pin_count--;
> > }
> >
> > /**
> > diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> > index bf0c31aa8fbe..7111f5743006 100644
> > --- a/include/drm/drm_gem_shmem_helper.h
> > +++ b/include/drm/drm_gem_shmem_helper.h
> > @@ -39,6 +39,15 @@ struct drm_gem_shmem_object {
> > */
> > unsigned int pages_use_count;
> >
> > + /**
> > + * @pages_pin_count:
> > + *
> > + * Reference count on the pinned pages table.
> > + * The pages are allowed to be evicted by the memory shrinker
> > + * only when the count is zero.
> > + */
> > + unsigned int pages_pin_count;
>
> Can we make it an atomic_t, so we can avoid taking the lock when the
> GEM has already been pinned? That's something I need to be able to grab
> a pin-ref in a path where the GEM resv lock is already held [1]. We could
> of course expose the locked version,
My bad, that's actually not true. The problem is not that I call
drm_gem_shmem_pin() with the resv lock already held, but that I call
drm_gem_shmem_pin() in a dma-signaling path where I'm not allowed to
take a resv lock. I know for sure pin_count > 0, because all GEM objects
mapped to a VM have their memory pinned right now, and this should
stand until we decide to add support for live-GEM eviction, at which
point we'll probably have a way to detect when a GEM is evicted, and
avoid calling drm_gem_shmem_pin() on it.
TLDR; I can't trade the atomic_t for a drm_gem_shmem_pin_locked(),
because that wouldn't solve my problem. The other solution would be to
add an atomic_t at the driver-GEM level, and only call
drm_gem_shmem_[un]pin() on 0 <-> 1 transitions, but I thought using an
atomic at the GEM-shmem level, to avoid locking when we can, would be
beneficial to the rest of the eco-system. Let me know if that's not an
option, and I'll go back to the driver-specific atomic_t.
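For reference, the driver-specific fallback I'm alluding to would look
roughly like this (the panthor names are placeholders, this is not
actual driver code):

struct panthor_gem_object {
	struct drm_gem_shmem_object base;
	atomic_t pin_count; /* driver-level pin refcount */
};

/* Job creation path (allowed to sleep and lock): only the 0 -> 1
 * transition is forwarded to drm-shmem. Concurrent 0 -> 1 transitions
 * are assumed to be serialized by a higher-level (e.g. VM) lock.
 */
static int panthor_gem_pin(struct panthor_gem_object *bo)
{
	int ret = 0;

	if (atomic_inc_return(&bo->pin_count) == 1) {
		ret = drm_gem_shmem_pin(&bo->base);
		if (ret)
			atomic_dec(&bo->pin_count);
	}

	return ret;
}

/* dma-signaling path: the pages are known to be pinned already. */
static void panthor_gem_pin_fast(struct panthor_gem_object *bo)
{
	WARN_ON(!atomic_inc_not_zero(&bo->pin_count));
}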
> but in my case, I want to enforce
> the fact that the GEM has been pinned before the drm_gem_shmem_pin() call in
> the section protected by the resv lock, so catching a "refcount 0 -> 1"
> situation would be useful. Besides, using an atomic to avoid the
> lock/unlock dance when refcount > 1 might be beneficial to everyone.
>
> [1]https://gitlab.freedesktop.org/bbrezillon/linux/-/commit/4420fa0d5768ebdc35b34d58d4ae5fad9fbb93f9
>
> > +
> > /**
> > * @madv: State for madvise
> > *
>
* Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
2023-07-25 8:32 ` Boris Brezillon
@ 2023-07-31 12:27 ` Dmitry Osipenko
2023-07-31 12:31 ` Dmitry Osipenko
2023-07-31 13:35 ` Boris Brezillon
0 siblings, 2 replies; 25+ messages in thread
From: Dmitry Osipenko @ 2023-07-31 12:27 UTC (permalink / raw)
To: Boris Brezillon
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On 7/25/23 11:32, Boris Brezillon wrote:
>> Can we make it an atomic_t, so we can avoid taking the lock when the
>> GEM has already been pinned? That's something I need to be able to grab
>> a pin-ref in a path where the GEM resv lock is already held [1]. We could
>> of course expose the locked version,
> My bad, that's actually not true. The problem is not that I call
> drm_gem_shmem_pin() with the resv lock already held, but that I call
> drm_gem_shmem_pin() in a dma-signaling path where I'm not allowed to
> take a resv lock. I know for sure pin_count > 0, because all GEM objects
> mapped to a VM have their memory pinned right now, and this should
> stand until we decide to add support for live-GEM eviction, at which
> point we'll probably have a way to detect when a GEM is evicted, and
> avoid calling drm_gem_shmem_pin() on it.
>
> TLDR; I can't trade the atomic_t for a drm_gem_shmem_pin_locked(),
> because that wouldn't solve my problem. The other solution would be to
> add an atomic_t at the driver-GEM level, and only call
> drm_gem_shmem_[un]pin() on 0 <-> 1 transitions, but I thought using an
> atomic at the GEM-shmem level, to avoid locking when we can, would be
> beneficial to the rest of the eco-system. Let me know if that's not an
> option, and I'll go back to the driver-specific atomic_t.
Could you please explain why you need to pin a GEM in a signal handler?
This is not something drivers usually do or need to do. You likely also
shouldn't need to detect that a GEM is evicted in your driver. I'd expect
that Panthor shouldn't differ from Panfrost with regard to how GEM memory
management is done, and Panfrost doesn't need to do anything special.
Note that patch #14 makes locked pin/unpin functions public and turns
the unlocked variants into helpers; you'll be able to experiment with
these funcs in the Panthor driver.
In general, using atomic_t or kref should be a good thing to do, but
AFAICS it shouldn't bring benefits to today's drm-shmem users. I'd
want to understand what you're trying to achieve in the Panthor driver.
--
Best regards,
Dmitry
* Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
2023-07-31 12:27 ` Dmitry Osipenko
@ 2023-07-31 12:31 ` Dmitry Osipenko
2023-07-31 13:35 ` Boris Brezillon
1 sibling, 0 replies; 25+ messages in thread
From: Dmitry Osipenko @ 2023-07-31 12:31 UTC (permalink / raw)
To: Boris Brezillon
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On 7/31/23 15:27, Dmitry Osipenko wrote:
> On 7/25/23 11:32, Boris Brezillon wrote:
>>> Can we make it an atomic_t, so we can avoid taking the lock when the
>>> GEM has already been pinned? That's something I need to be able to grab
>>> a pin-ref in a path where the GEM resv lock is already held [1]. We could
>>> of course expose the locked version,
>> My bad, that's actually not true. The problem is not that I call
>> drm_gem_shmem_pin() with the resv lock already held, but that I call
>> drm_gem_shmem_pin() in a dma-signaling path where I'm not allowed to
>> take a resv lock. I know for sure pin_count > 0, because all GEM objects
>> mapped to a VM have their memory pinned right now, and this should
>> stand until we decide to add support for live-GEM eviction, at which
>> point we'll probably have a way to detect when a GEM is evicted, and
>> avoid calling drm_gem_shmem_pin() on it.
>>
>> TLDR; I can't trade the atomic_t for a drm_gem_shmem_pin_locked(),
>> because that wouldn't solve my problem. The other solution would be to
>> add an atomic_t at the driver-GEM level, and only call
>> drm_gem_shmem_[un]pin() on 0 <-> 1 transitions, but I thought using an
>> atomic at the GEM-shmem level, to avoid locking when we can, would be
>> beneficial to the rest of the eco-system. Let me know if that's not an
>> option, and I'll go back to the driver-specific atomic_t.
>
> Could you please explain why you need to pin a GEM in a signal handler?
> This is not something drivers usually do or need to do. You likely also
> shouldn't need to detect that a GEM is evicted in your driver. I'd expect
> that Panthor shouldn't differ from Panfrost with regard to how GEM memory
> management is done, and Panfrost doesn't need to do anything special.
>
> Note that patch #14 makes locked pin/unpin functions public and turns
> the unlocked variants into helpers; you'll be able to experiment with
> these funcs in the Panthor driver.
correction: that's patch #10
> In general, using atomic_t or kref should be a good thing to do, but
> AFAICS it shouldn't bring benefits to today's drm-shmem users. I'd
> want to understand what you're trying to achieve in the Panthor driver.
>
--
Best regards,
Dmitry
* Re: [PATCH v14 12/12] drm/gem: Add _unlocked postfix to drm_gem_pin/unpin()
2023-07-25 7:53 ` Boris Brezillon
@ 2023-07-31 13:04 ` Dmitry Osipenko
0 siblings, 0 replies; 25+ messages in thread
From: Dmitry Osipenko @ 2023-07-31 13:04 UTC (permalink / raw)
To: Boris Brezillon
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On 7/25/23 10:53, Boris Brezillon wrote:
> On Sun, 23 Jul 2023 02:47:46 +0300
> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>
>> Make it clear that the drm_gem_pin/unpin() functions take the
>> reservation lock by adding an _unlocked postfix to the function names.
>>
>> Suggested-by: Boris Brezillon <boris.brezillon@collabora.com>
>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>
> I'm still a bit confused by the fact that we sometimes use the
> xxx[_locked]() pattern (the version without the _locked suffix takes the
> lock) and other times the xxx[_unlocked]() pattern (the version with the
> _unlocked suffix takes the lock). It'd be good to choose one pattern and
> stick to it, at least for all core functions...
After a closer look, I agree that the _locked variant is much more
common in DRM. Alright, I'll rename the drm-gem funcs.
--
Best regards,
Dmitry
* Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
2023-07-31 12:27 ` Dmitry Osipenko
2023-07-31 12:31 ` Dmitry Osipenko
@ 2023-07-31 13:35 ` Boris Brezillon
2023-08-02 2:31 ` Danilo Krummrich
1 sibling, 1 reply; 25+ messages in thread
From: Boris Brezillon @ 2023-07-31 13:35 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Danilo Krummrich, Gerd Hoffmann, Steven Price,
virtualization, Qiang Yu
+Danilo, to confirm my understanding of the gpuva remap operation is
correct.
On Mon, 31 Jul 2023 15:27:31 +0300
Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
> On 7/25/23 11:32, Boris Brezillon wrote:
> >> Can we make it an atomic_t, so we can avoid taking the lock when the
> >> GEM has already been pinned? That's something I need to be able to grab
> >> a pin-ref in a path where the GEM resv lock is already held [1]. We could
> >> of course expose the locked version,
> > My bad, that's actually not true. The problem is not that I call
> > drm_gem_shmem_pin() with the resv lock already held, but that I call
> > drm_gem_shmem_pin() in a dma-signaling path where I'm not allowed to
> > take a resv lock. I know for sure pin_count > 0, because all GEM objects
> > mapped to a VM have their memory pinned right now, and this should
> > stand until we decide to add support for live-GEM eviction, at which
> > point we'll probably have a way to detect when a GEM is evicted, and
> > avoid calling drm_gem_shmem_pin() on it.
> >
> > TLDR; I can't trade the atomic_t for a drm_gem_shmem_pin_locked(),
> > because that wouldn't solve my problem. The other solution would be to
> > add an atomic_t at the driver-GEM level, and only call
> > drm_gem_shmem_[un]pin() on 0 <-> 1 transitions, but I thought using an
> > atomic at the GEM-shmem level, to avoid locking when we can, would be
> > beneficial to the rest of the eco-system. Let me know if that's not an
> > option, and I'll go back to the driver-specific atomic_t.
>
> Could you please explain why you need to pin a GEM in a signal handler?
> This is not something drivers usually do or need to do. You likely also
> shouldn't need to detect that a GEM is evicted in your driver. I'd expect
> that Panthor shouldn't differ from Panfrost with regard to how GEM memory
> management is done, and Panfrost doesn't need to do anything special.
Panthor VM management is completely different, and the case I'm
referring to is 'asynchronous VM_BIND': mapping a GEM object to a GPU VM
asynchronously, so we can make it depend on other operations, encoded as
syncobjs passed to the VM_BIND operation.
Here is the workflow we have for this use case:
1. Create + push a VM_BIND job to the VM_BIND queue (a drm_sched_entity
that's taking care of asynchronous VM map/unmap operations). Because
this operation is asynchronous, and the execution itself happens in a
dma-signaling path (drm_sched::run_job()), we need to pre-allocate the
MMU page tables for the worst case scenario, and make sure the GEM pages
are pinned at job creation time.
2. The VM operation itself is executed when all dependencies are met
(drm_sched calls run_job()). In case of a map operation, we call
drm_gpuva_sm_map(), which might split the map operation into
remap+unmap+map ones if the region being mapped is covering a region
that was previously mapped to a different GEM object or a different
portion of the same GEM object (see the gpuva_mgr doc [1]). A
remap operation is just a way to split an existing mapping in 2 mappings
covering the left/right side of the previous mapping, plus a hole in
the middle. This means that our VM mapping object (drm_gpuva), which
was pointing to a GEM object that had its pages pinned, is now turned
into 2 mapping objects, and we need to make sure those 2 mappings own a
reference to the pages, otherwise we'll have an unbalanced refcount
when we release those 2 mappings further down the road.
3. Release resources attached to mappings that were removed (that
includes releasing the ref we had on GEM pages) and free the mapping
objects. We do that asynchronously, outside of the dma-signaling path.
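To make step 2 a bit more concrete, here is a rough sketch of the remap
step (this is not actual Panthor code: the function name is made up,
and it assumes both the drm_gpuva_mgr callback layout and the atomic
pages_pin_count discussed in this thread):

static int panthor_vm_sm_step_remap(struct drm_gpuva_op *op, void *priv)
{
	struct drm_gem_object *obj = op->remap.unmap->va->gem.obj;
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

	/* The old mapping is split into up to two new mappings, and each
	 * of them must own a reference to the GEM pages. We can't take
	 * the resv lock here (dma-signaling path), hence the atomic
	 * refcount.
	 */
	if (op->remap.prev)
		atomic_inc(&shmem->pages_pin_count);
	if (op->remap.next)
		atomic_inc(&shmem->pages_pin_count);

	/* The ref owned by the unmapped mapping is released in step 3,
	 * outside of the dma-signaling path.
	 */
	return 0;
}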
>
> Note that patch #14 makes locked pin/unpin functions public and turns
> the unlocked variants into helpers; you'll be able to experiment with
> these funcs in the Panthor driver.
Unfortunately, those won't help. I really need a way to increment the
refcount without holding the lock, because we're in a dma-signaling
path when we call drm_gpuva_sm_map(). Note that I could live with a
drm_gem_shmem_pin_if_already_pinned() variant that would return NULL if
pin_count == 0 instead of trying to acquire the lock, but I'd still
need this refcount to be an atomic_t.
As I said, an alternative to this approach would be to have a separate
atomic refcount at the panthor_gem_object level, but I feel like we'd
just be duplicating something that exists already.
[1]https://cgit.freedesktop.org/drm/drm-misc/tree/drivers/gpu/drm/drm_gpuva_mgr.c#n67
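The variant I'm describing could be as simple as this (a proposal, it
doesn't exist yet, and it's modeled here as returning a bool):

/* Grab a pin ref without locking, but only if the GEM is already pinned. */
static bool drm_gem_shmem_pin_if_already_pinned(struct drm_gem_shmem_object *shmem)
{
	return atomic_inc_not_zero(&shmem->pages_pin_count);
}

A caller in a dma-signaling path would treat a false return as a bug,
since all GEM objects mapped to a VM currently have their pages pinned.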
* Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
2023-07-31 13:35 ` Boris Brezillon
@ 2023-08-02 2:31 ` Danilo Krummrich
2023-08-02 9:06 ` Boris Brezillon
0 siblings, 1 reply; 25+ messages in thread
From: Danilo Krummrich @ 2023-08-02 2:31 UTC (permalink / raw)
To: Boris Brezillon, Dmitry Osipenko
Cc: kernel, Thomas Zimmermann, Emma Anholt, Christian König,
dri-devel, linux-kernel, Maxime Ripard, Gurchetan Singh,
Melissa Wen, Gerd Hoffmann, Steven Price, virtualization,
Qiang Yu
On 7/31/23 15:35, Boris Brezillon wrote:
> +Danilo, to confirm my understanding of the gpuva remap operation is
> correct.
Your understanding is correct.
Unfortunately, re-mapping things has such implications.
I'm currently working on tracking external GEM objects in the GPUVA
manager, where, ideally, you'd want to add the extobj to the VM when the
first mapping being backed by this GEM is created and remove it when the
last mapping being backed by this GEM is removed. Hence, extobjs need to
be ref-counted based on how many mappings they back.
However, when re-mapping such a mapping, the reference counter might
drop to 0 temporarily and the slot of the data structure tracking the
extobj is cleaned up and needs to be re-allocated. Surely, we could just
increase the reference count while re-mapping or for the whole
transaction (job), but this would make the API kinda bulky.
>
> On Mon, 31 Jul 2023 15:27:31 +0300
> Dmitry Osipenko <dmitry.osipenko@collabora.com> wrote:
>
>> On 7/25/23 11:32, Boris Brezillon wrote:
>>>> Can we make it an atomic_t, so we can avoid taking the lock when the
> >>> GEM has already been pinned? That's something I need to be able to grab
> >>> a pin-ref in a path where the GEM resv lock is already held [1]. We could
>>>> of course expose the locked version,
>>> My bad, that's actually not true. The problem is not that I call
>>> drm_gem_shmem_pin() with the resv lock already held, but that I call
>>> drm_gem_shmem_pin() in a dma-signaling path where I'm not allowed to
>>> take a resv lock. I know for sure pin_count > 0, because all GEM objects
>>> mapped to a VM have their memory pinned right now, and this should
>>> stand until we decide to add support for live-GEM eviction, at which
>>> point we'll probably have a way to detect when a GEM is evicted, and
>>> avoid calling drm_gem_shmem_pin() on it.
>>>
>>> TLDR; I can't trade the atomic_t for a drm_gem_shmem_pin_locked(),
>>> because that wouldn't solve my problem. The other solution would be to
>>> add an atomic_t at the driver-GEM level, and only call
>>> drm_gem_shmem_[un]pin() on 0 <-> 1 transitions, but I thought using an
>>> atomic at the GEM-shmem level, to avoid locking when we can, would be
>>> beneficial to the rest of the eco-system. Let me know if that's not an
>>> option, and I'll go back to the driver-specific atomic_t.
>>
>> Could you please explain why you need to pin a GEM in a signal handler?
>> This is not something drivers usually do or need to do. You likely also
>> shouldn't need to detect that a GEM is evicted in your driver. I'd expect
>> that Panthor shouldn't differ from Panfrost with regard to how GEM memory
>> management is done, and Panfrost doesn't need to do anything special.
>
> Panthor VM management is completely different, and the case I'm
> referring to is 'asynchronous VM_BIND': mapping a GEM object to a GPU VM
> asynchronously, so we can make it depend on other operations, encoded as
> syncobjs passed to the VM_BIND operation.
>
> Here is the workflow we have for this use case:
>
> 1. Create + push a VM_BIND job to the VM_BIND queue (a drm_sched_entity
> that's taking care of asynchronous VM map/unmap operations). Because
> this operation is asynchronous, and the execution itself happens in a
> dma-signaling path (drm_sched::run_job()), we need to pre-allocate the
> MMU page tables for the worst case scenario, and make sure the GEM pages
> are pinned at job creation time.
>
> 2. The VM operation itself is executed when all dependencies are met
> (drm_sched calls run_job()). In case of a map operation, we call
> drm_gpuva_sm_map(), which might split the map operation into
> remap+unmap+map ones if the region being mapped is covering a region
> that was previously mapped to a different GEM object or a different
> portion of the same GEM object (see the gpuva_mgr doc [1]). A
> remap operation is just a way to split an existing mapping in 2 mappings
> covering the left/right side of the previous mapping, plus a hole in
> the middle. This means that our VM mapping object (drm_gpuva), which
> was pointing to a GEM object that had its pages pinned, is now turned
> into 2 mapping objects, and we need to make sure those 2 mappings own a
> reference to the pages, otherwise we'll have an unbalanced refcount
> when we release those 2 mappings further down the road.
>
> 3. Release resources attached to mappings that were removed (that
> includes releasing the ref we had on GEM pages) and free the mapping
> objects. We do that asynchronously, outside of the dma-signaling path.
>
>>
>> Note that patch #14 makes locked pin/unpin functions public and turns
>> the unlocked variants into helpers; you'll be able to experiment with
>> these funcs in the Panthor driver.
>
> Unfortunately, those won't help. I really need a way to increment the
> refcount without holding the lock, because we're in a dma-signaling
> path when we call drm_gpuva_sm_map(). Note that I could live with a
> drm_gem_shmem_pin_if_already_pinned() variant that would return NULL if
> pin_count == 0 instead of trying to acquire the lock, but I'd still
> need this refcount to be an atomic_t.
>
> As I said, an alternative to this approach would be to have a separate
> atomic refcount at the panthor_gem_object level, but I feel like we'd
> just be duplicating something that exists already.
>
> [1]https://cgit.freedesktop.org/drm/drm-misc/tree/drivers/gpu/drm/drm_gpuva_mgr.c#n67
>
* Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field
2023-08-02 2:31 ` Danilo Krummrich
@ 2023-08-02 9:06 ` Boris Brezillon
0 siblings, 0 replies; 25+ messages in thread
From: Boris Brezillon @ 2023-08-02 9:06 UTC (permalink / raw)
To: Danilo Krummrich
Cc: kernel, Dmitry Osipenko, Emma Anholt, Christian König,
Thomas Zimmermann, dri-devel, linux-kernel, Maxime Ripard,
Gurchetan Singh, Melissa Wen, Gerd Hoffmann, Steven Price,
virtualization, Qiang Yu
On Wed, 2 Aug 2023 04:31:52 +0200
Danilo Krummrich <dakr@redhat.com> wrote:
> On 7/31/23 15:35, Boris Brezillon wrote:
> > +Danilo, to confirm my understanding of the gpuva remap operation is
> > correct.
>
> Your understanding is correct.
>
> Unfortunately, re-mapping things has such implications.
>
> I'm currently working on tracking external GEM objects in the GPUVA
> manager, where, ideally, you'd want to add the extobj to the VM when the
> first mapping being backed by this GEM is created and remove it when the
> last mapping being backed by this GEM is removed. Hence, extobjs need to
> be ref-counted based on how many mappings they back.
Uh, right. I went for a much simpler (but also less efficient) approach
where I basically track things at the mapping level (my panthor_vma
object, which inherits from drm_gpuva, has a list node so it can be
inserted in a shared_bos list tracked at the VM level), instead of the
GEM level. So we'd basically be trying to acquire resv locks multiple
times and reserving multiple slots if the same shared GEM is mapped
multiple times. With the IGNORE_DUPLICATES flag passed to drm_exec,
that works, but it might not be ideal if we expect shared BOs to be
mapped multiple times in the same VM.
>
> However, when re-mapping such a mapping, the reference counter might
> drop to 0 temporarily and the slot of the data structure tracking the
> extobj is cleaned up and needs to be re-allocated. Surely, we could just
> increase the reference count while re-mapping or for the whole
> transaction (job), but this would make the API kinda bulky.
With things happening in the dma-signaling path, we'd need to
pre-allocate this shared-bo container object anyway, because we can't
assume there will be one available by the time we get to run the VM
operation. So I think it's safe to assume that, even if the unmap part
of the remap operation drops the last ref of this container object, when
you get to map the same BO again, you'll have another container to play
with. It's just a matter of pre-allocating one more thing when
bo_is_shared==true && op==map, I think.
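To put the pre-allocation idea in code (a sketch, all names made up):

/* Allocated at VM_BIND job creation time, where we can sleep. */
struct panthor_vm_op_ctx {
	/* One spare container per map of a shared BO, so the
	 * dma-signaling path never has to allocate: if the unmap part
	 * of a remap drops the last ref on the tracking container, the
	 * following map re-uses the spare one.
	 */
	struct panthor_shared_bo *spare_shared_bo;
};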
end of thread, other threads:[~2023-08-02 9:06 UTC | newest]
Thread overview: 25+ messages
2023-07-22 23:47 [PATCH v14 00/12] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 01/12] drm/shmem-helper: Factor out pages alloc/release from drm_gem_shmem_get/put_pages() Dmitry Osipenko
2023-07-25 7:14 ` Boris Brezillon
2023-07-22 23:47 ` [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field Dmitry Osipenko
2023-07-25 7:27 ` Boris Brezillon
2023-07-25 8:32 ` Boris Brezillon
2023-07-31 12:27 ` Dmitry Osipenko
2023-07-31 12:31 ` Dmitry Osipenko
2023-07-31 13:35 ` Boris Brezillon
2023-08-02 2:31 ` Danilo Krummrich
2023-08-02 9:06 ` Boris Brezillon
2023-07-22 23:47 ` [PATCH v14 03/12] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 04/12] drm/shmem-helper: Factor out unpinning part from drm_gem_shmem_purge() Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 05/12] drm/shmem-helper: Add memory shrinker Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 06/12] drm/shmem-helper: Remove obsoleted is_iomem test Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 07/12] drm/shmem-helper: Export drm_gem_shmem_get_pages_sgt_locked() Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 08/12] drm/virtio: Support memory shrinking Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 09/12] drm/panfrost: Switch to generic memory shrinker Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 10/12] drm/shmem-helper: Refactor locked/unlocked functions Dmitry Osipenko
2023-07-25 7:47 ` Boris Brezillon
2023-07-25 7:58 ` Boris Brezillon
2023-07-22 23:47 ` [PATCH v14 11/12] drm/shmem-helper: Make drm_gem_shmem_print_info() symbol GPL Dmitry Osipenko
2023-07-22 23:47 ` [PATCH v14 12/12] drm/gem: Add _unlocked postfix to drm_gem_pin/unpin() Dmitry Osipenko
2023-07-25 7:53 ` Boris Brezillon
2023-07-31 13:04 ` Dmitry Osipenko