* [PATCH 01/13] drm/gem-shmem: Fix typos in documentation
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
@ 2025-12-09 13:41 ` Thomas Zimmermann
2025-12-11 10:00 ` Boris Brezillon
2025-12-11 12:03 ` Thomas Zimmermann
2025-12-09 13:41 ` [PATCH 02/13] drm/gem-shmem: Fix the MODULE_LICENSE() string Thomas Zimmermann
` (12 subsequent siblings)
13 siblings, 2 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:41 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Fix the compile-time warnings
Warning: drm_gem_shmem_helper.c:104 function parameter 'shmem' not described in 'drm_gem_shmem_init'
Warning: drm_gem_shmem_helper.c:104 function parameter 'size' not described in 'drm_gem_shmem_init'
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index dc94a27710e5..f4e77f75ec81 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -96,7 +96,8 @@ static int __drm_gem_shmem_init(struct drm_device *dev, struct drm_gem_shmem_obj
/**
* drm_gem_shmem_init - Initialize an allocated object.
* @dev: DRM device
- * @obj: The allocated shmem GEM object.
+ * @shmem: The allocated shmem GEM object.
+ * @size: Buffer size in bytes
*
* Returns:
* 0 on success, or a negative error code on failure.
--
2.52.0
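For readers unfamiliar with the kernel-doc format, the complete comment after this patch would read roughly as follows. This is a sketch assembled from the hunk above (the prototype is inferred from the warnings and the hunk, not quoted from the file):

```c
/**
 * drm_gem_shmem_init - Initialize an allocated object.
 * @dev: DRM device
 * @shmem: The allocated shmem GEM object.
 * @size: Buffer size in bytes
 *
 * Returns:
 * 0 on success, or a negative error code on failure.
 */
int drm_gem_shmem_init(struct drm_device *dev,
		       struct drm_gem_shmem_object *shmem, size_t size);
```

Each `@name:` line must match a parameter of the documented function by name; the two warnings above are emitted because the comment described a parameter `obj` that does not exist in the signature while leaving `shmem` and `size` undocumented.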
^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [PATCH 01/13] drm/gem-shmem: Fix typos in documentation
2025-12-09 13:41 ` [PATCH 01/13] drm/gem-shmem: Fix typos in documentation Thomas Zimmermann
@ 2025-12-11 10:00 ` Boris Brezillon
2025-12-11 12:03 ` Thomas Zimmermann
1 sibling, 0 replies; 26+ messages in thread
From: Boris Brezillon @ 2025-12-11 10:00 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: simona, airlied, mripard, maarten.lankhorst, ogabbay, mamin506,
lizhi.hou, maciej.falkowski, karol.wachowski, tomeu, frank.binns,
matt.coster, yuq825, robh, steven.price, adrian.larumbe,
liviu.dudau, mwen, kraxel, dmitry.osipenko, gurchetansingh,
olvaffe, corbet, dri-devel, lima, virtualization, linux-doc
On Tue, 9 Dec 2025 14:41:58 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Fix the compile-time warnings
>
> Warning: drm_gem_shmem_helper.c:104 function parameter 'shmem' not described in 'drm_gem_shmem_init'
> Warning: drm_gem_shmem_helper.c:104 function parameter 'size' not described in 'drm_gem_shmem_init'
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index dc94a27710e5..f4e77f75ec81 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -96,7 +96,8 @@ static int __drm_gem_shmem_init(struct drm_device *dev, struct drm_gem_shmem_obj
> /**
> * drm_gem_shmem_init - Initialize an allocated object.
> * @dev: DRM device
> - * @obj: The allocated shmem GEM object.
> + * @shmem: The allocated shmem GEM object.
> + * @size: Buffer size in bytes
> *
> * Returns:
> * 0 on success, or a negative error code on failure.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 01/13] drm/gem-shmem: Fix typos in documentation
2025-12-09 13:41 ` [PATCH 01/13] drm/gem-shmem: Fix typos in documentation Thomas Zimmermann
2025-12-11 10:00 ` Boris Brezillon
@ 2025-12-11 12:03 ` Thomas Zimmermann
1 sibling, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-11 12:03 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc
Am 09.12.25 um 14:41 schrieb Thomas Zimmermann:
> Fix the compile-time warnings
>
> Warning: drm_gem_shmem_helper.c:104 function parameter 'shmem' not described in 'drm_gem_shmem_init'
> Warning: drm_gem_shmem_helper.c:104 function parameter 'size' not described in 'drm_gem_shmem_init'
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Fixes: e3f4bdaf2c5b ("drm/gem/shmem: Extract drm_gem_shmem_init() from drm_gem_shmem_create()")
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index dc94a27710e5..f4e77f75ec81 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -96,7 +96,8 @@ static int __drm_gem_shmem_init(struct drm_device *dev, struct drm_gem_shmem_obj
> /**
> * drm_gem_shmem_init - Initialize an allocated object.
> * @dev: DRM device
> - * @obj: The allocated shmem GEM object.
> + * @shmem: The allocated shmem GEM object.
> + * @size: Buffer size in bytes
> *
> * Returns:
> * 0 on success, or a negative error code on failure.
--
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH 02/13] drm/gem-shmem: Fix the MODULE_LICENSE() string
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
2025-12-09 13:41 ` [PATCH 01/13] drm/gem-shmem: Fix typos in documentation Thomas Zimmermann
@ 2025-12-09 13:41 ` Thomas Zimmermann
2025-12-11 10:01 ` Boris Brezillon
2025-12-11 12:04 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 03/13] drm: Add GEM-UMA helpers for memory management Thomas Zimmermann
` (11 subsequent siblings)
13 siblings, 2 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:41 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Replace the bogus "GPL v2" with "GPL" as the MODULE_LICENSE() string. The
value does not declare the module's exact license, but only lets the
module loader test whether the module is Free Software or not.
See commit bf7fbeeae6db ("module: Cure the MODULE_LICENSE "GPL" vs.
"GPL v2" bogosity") for the details of the issue. The fix is to use
"GPL" for all modules under any variant of the GPL.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/drm_gem_shmem_helper.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f4e77f75ec81..2a67da98da25 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -896,4 +896,4 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_no_map);
MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
MODULE_IMPORT_NS("DMA_BUF");
-MODULE_LICENSE("GPL v2");
+MODULE_LICENSE("GPL");
--
2.52.0
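The practical effect of the string can be illustrated with a minimal, hypothetical module sketch (not part of the series):

```c
#include <linux/module.h>

/*
 * The module loader only checks whether this string denotes a free
 * license (which also gates access to EXPORT_SYMBOL_GPL symbols).
 * Per commit bf7fbeeae6db, "GPL" is the value to use for any GPL
 * variant; "GPL v2" carries no additional meaning for the loader.
 */
MODULE_DESCRIPTION("Example module");
MODULE_LICENSE("GPL");
```

The exact license of the module remains declared by the SPDX identifiers in its source files, not by this tag.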
^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [PATCH 02/13] drm/gem-shmem: Fix the MODULE_LICENSE() string
2025-12-09 13:41 ` [PATCH 02/13] drm/gem-shmem: Fix the MODULE_LICENSE() string Thomas Zimmermann
@ 2025-12-11 10:01 ` Boris Brezillon
2025-12-11 12:04 ` Thomas Zimmermann
1 sibling, 0 replies; 26+ messages in thread
From: Boris Brezillon @ 2025-12-11 10:01 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: simona, airlied, mripard, maarten.lankhorst, ogabbay, mamin506,
lizhi.hou, maciej.falkowski, karol.wachowski, tomeu, frank.binns,
matt.coster, yuq825, robh, steven.price, adrian.larumbe,
liviu.dudau, mwen, kraxel, dmitry.osipenko, gurchetansingh,
olvaffe, corbet, dri-devel, lima, virtualization, linux-doc
On Tue, 9 Dec 2025 14:41:59 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Replace the bogus "GPL v2" with "GPL" as the MODULE_LICENSE() string. The
> value does not declare the module's exact license, but only lets the
> module loader test whether the module is Free Software or not.
>
> See commit bf7fbeeae6db ("module: Cure the MODULE_LICENSE "GPL" vs.
> "GPL v2" bogosity") for the details of the issue. The fix is to use
> "GPL" for all modules under any variant of the GPL.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index f4e77f75ec81..2a67da98da25 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -896,4 +896,4 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_no_map);
>
> MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
> MODULE_IMPORT_NS("DMA_BUF");
> -MODULE_LICENSE("GPL v2");
> +MODULE_LICENSE("GPL");
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 02/13] drm/gem-shmem: Fix the MODULE_LICENSE() string
2025-12-09 13:41 ` [PATCH 02/13] drm/gem-shmem: Fix the MODULE_LICENSE() string Thomas Zimmermann
2025-12-11 10:01 ` Boris Brezillon
@ 2025-12-11 12:04 ` Thomas Zimmermann
1 sibling, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-11 12:04 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc
Am 09.12.25 um 14:41 schrieb Thomas Zimmermann:
> Replace the bogus "GPL v2" with "GPL" as the MODULE_LICENSE() string. The
> value does not declare the module's exact license, but only lets the
> module loader test whether the module is Free Software or not.
>
> See commit bf7fbeeae6db ("module: Cure the MODULE_LICENSE "GPL" vs.
> "GPL v2" bogosity") for the details of the issue. The fix is to use
> "GPL" for all modules under any variant of the GPL.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Fixes: 4b2b5e142ff4 ("drm: Move GEM memory managers into modules")
> ---
> drivers/gpu/drm/drm_gem_shmem_helper.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index f4e77f75ec81..2a67da98da25 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -896,4 +896,4 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_prime_import_no_map);
>
> MODULE_DESCRIPTION("DRM SHMEM memory-management helpers");
> MODULE_IMPORT_NS("DMA_BUF");
> -MODULE_LICENSE("GPL v2");
> +MODULE_LICENSE("GPL");
--
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH 03/13] drm: Add GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
2025-12-09 13:41 ` [PATCH 01/13] drm/gem-shmem: Fix typos in documentation Thomas Zimmermann
2025-12-09 13:41 ` [PATCH 02/13] drm/gem-shmem: Fix the MODULE_LICENSE() string Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 04/13] drm/gem-uma: Remove unused interfaces Thomas Zimmermann
` (10 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Duplicate the existing GEM-SHMEM helpers to GEM-UMA, including fbdev
emulation and tests.
The new GEM-UMA helpers provide memory management for DRM drivers with
Unified Memory Architecture (UMA) hardware. Here, the CPU and GPU share
use of the system memory. DRM drivers for such hardware currently often
build upon GEM-SHMEM and extend it with additional GTT/MMU functionality
for the graphics chipset.
GEM-SHMEM also serves the different use case of holding the graphics
buffers entirely in system memory and copying their content on pageflips.
This conflicts with the UMA use case and complicates GEM-SHMEM. The most
prominent example is PRIME buffer import, where GEM-SHMEM currently has
to provide two distinct implementations.
By having a separate GEM memory manager for UMA, both implementations,
SHMEM and UMA, can evolve in different directions, with each focusing
on its specific use case.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
Documentation/gpu/drm-mm.rst | 12 +
drivers/gpu/drm/Kconfig | 9 +
drivers/gpu/drm/Kconfig.debug | 1 +
drivers/gpu/drm/Makefile | 4 +
drivers/gpu/drm/drm_fbdev_uma.c | 203 +++++
drivers/gpu/drm/drm_gem_uma_helper.c | 898 +++++++++++++++++++++++
drivers/gpu/drm/tests/Makefile | 1 +
drivers/gpu/drm/tests/drm_gem_uma_test.c | 385 ++++++++++
include/drm/drm_fbdev_uma.h | 20 +
include/drm/drm_gem_uma_helper.h | 309 ++++++++
10 files changed, 1842 insertions(+)
create mode 100644 drivers/gpu/drm/drm_fbdev_uma.c
create mode 100644 drivers/gpu/drm/drm_gem_uma_helper.c
create mode 100644 drivers/gpu/drm/tests/drm_gem_uma_test.c
create mode 100644 include/drm/drm_fbdev_uma.h
create mode 100644 include/drm/drm_gem_uma_helper.h
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index d55751cad67c..225d9d760227 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -379,6 +379,18 @@ GEM SHMEM Helper Function Reference
.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
:export:
+GEM UMA Helper Function Reference
+-----------------------------------
+
+.. kernel-doc:: drivers/gpu/drm/drm_gem_uma_helper.c
+ :doc: overview
+
+.. kernel-doc:: include/drm/drm_gem_uma_helper.h
+ :internal:
+
+.. kernel-doc:: drivers/gpu/drm/drm_gem_uma_helper.c
+ :export:
+
GEM VRAM Helper Functions Reference
-----------------------------------
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 7e6bc0b3a589..a35fbd9feccb 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -257,6 +257,15 @@ config DRM_GEM_SHMEM_HELPER
help
Choose this if you need the GEM shmem helper functions
+config DRM_GEM_UMA_HELPER
+ tristate
+ depends on DRM && MMU
+ select DRM_KMS_HELPER if DRM_FBDEV_EMULATION
+ select FB_CORE if DRM_FBDEV_EMULATION
+ select FB_SYSMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION
+ help
+ Choose this if you need the GEM UMA helper functions
+
config DRM_SUBALLOC_HELPER
tristate
depends on DRM
diff --git a/drivers/gpu/drm/Kconfig.debug b/drivers/gpu/drm/Kconfig.debug
index 05dc43c0b8c5..ec791826fca8 100644
--- a/drivers/gpu/drm/Kconfig.debug
+++ b/drivers/gpu/drm/Kconfig.debug
@@ -68,6 +68,7 @@ config DRM_KUNIT_TEST
select DRM_EXEC
select DRM_EXPORT_FOR_TESTS if m
select DRM_GEM_SHMEM_HELPER
+ select DRM_GEM_UMA_HELPER
select DRM_KUNIT_TEST_HELPERS
select DRM_LIB_RANDOM
select DRM_SYSFB_HELPER
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 33ff76ae52b2..c227523aa693 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -125,6 +125,10 @@ obj-$(CONFIG_DRM_GEM_SHMEM_HELPER) += drm_shmem_helper.o
drm_suballoc_helper-y := drm_suballoc.o
obj-$(CONFIG_DRM_SUBALLOC_HELPER) += drm_suballoc_helper.o
+drm_uma_helper-y := drm_gem_uma_helper.o
+drm_uma_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fbdev_uma.o
+obj-$(CONFIG_DRM_GEM_UMA_HELPER) += drm_uma_helper.o
+
drm_vram_helper-y := drm_gem_vram_helper.o
obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o
diff --git a/drivers/gpu/drm/drm_fbdev_uma.c b/drivers/gpu/drm/drm_fbdev_uma.c
new file mode 100644
index 000000000000..76edb7dbc2ca
--- /dev/null
+++ b/drivers/gpu/drm/drm_fbdev_uma.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: MIT
+
+#include <linux/export.h>
+#include <linux/fb.h>
+
+#include <drm/drm_drv.h>
+#include <drm/drm_fbdev_uma.h>
+#include <drm/drm_fb_helper.h>
+#include <drm/drm_framebuffer.h>
+#include <drm/drm_gem_framebuffer_helper.h>
+#include <drm/drm_gem_uma_helper.h>
+#include <drm/drm_print.h>
+
+/*
+ * struct fb_ops
+ */
+
+static int drm_fbdev_uma_fb_open(struct fb_info *info, int user)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+
+ /* No need to take a ref for fbcon because it unbinds on unregister */
+ if (user && !try_module_get(fb_helper->dev->driver->fops->owner))
+ return -ENODEV;
+
+ return 0;
+}
+
+static int drm_fbdev_uma_fb_release(struct fb_info *info, int user)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+
+ if (user)
+ module_put(fb_helper->dev->driver->fops->owner);
+
+ return 0;
+}
+
+FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(drm_fbdev_uma,
+ drm_fb_helper_damage_range,
+ drm_fb_helper_damage_area);
+
+static int drm_fbdev_uma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_framebuffer *fb = fb_helper->fb;
+ struct drm_gem_object *obj = drm_gem_fb_get_obj(fb, 0);
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ if (uma->map_wc)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+
+ return fb_deferred_io_mmap(info, vma);
+}
+
+static void drm_fbdev_uma_fb_destroy(struct fb_info *info)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+
+ if (!fb_helper->dev)
+ return;
+
+ fb_deferred_io_cleanup(info);
+ drm_fb_helper_fini(fb_helper);
+
+ drm_client_buffer_vunmap(fb_helper->buffer);
+ drm_client_buffer_delete(fb_helper->buffer);
+ drm_client_release(&fb_helper->client);
+}
+
+static const struct fb_ops drm_fbdev_uma_fb_ops = {
+ .owner = THIS_MODULE,
+ .fb_open = drm_fbdev_uma_fb_open,
+ .fb_release = drm_fbdev_uma_fb_release,
+ __FB_DEFAULT_DEFERRED_OPS_RDWR(drm_fbdev_uma),
+ DRM_FB_HELPER_DEFAULT_OPS,
+ __FB_DEFAULT_DEFERRED_OPS_DRAW(drm_fbdev_uma),
+ .fb_mmap = drm_fbdev_uma_fb_mmap,
+ .fb_destroy = drm_fbdev_uma_fb_destroy,
+};
+
+static struct page *drm_fbdev_uma_get_page(struct fb_info *info, unsigned long offset)
+{
+ struct drm_fb_helper *fb_helper = info->par;
+ struct drm_framebuffer *fb = fb_helper->fb;
+ struct drm_gem_object *obj = drm_gem_fb_get_obj(fb, 0);
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+ unsigned int i = offset >> PAGE_SHIFT;
+ struct page *page;
+
+ if (fb_WARN_ON_ONCE(info, offset > obj->size))
+ return NULL;
+
+ page = uma->pages[i]; // protected by active vmap
+ if (page)
+ get_page(page);
+ fb_WARN_ON_ONCE(info, !page);
+
+ return page;
+}
+
+/*
+ * struct drm_fb_helper
+ */
+
+static int drm_fbdev_uma_helper_fb_dirty(struct drm_fb_helper *helper,
+ struct drm_clip_rect *clip)
+{
+ struct drm_device *dev = helper->dev;
+ int ret;
+
+ /* Call damage handlers only if necessary */
+ if (!(clip->x1 < clip->x2 && clip->y1 < clip->y2))
+ return 0;
+
+ if (helper->fb->funcs->dirty) {
+ ret = helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, clip, 1);
+ if (drm_WARN_ONCE(dev, ret, "Dirty helper failed: ret=%d\n", ret))
+ return ret;
+ }
+
+ return 0;
+}
+
+static const struct drm_fb_helper_funcs drm_fbdev_uma_helper_funcs = {
+ .fb_dirty = drm_fbdev_uma_helper_fb_dirty,
+};
+
+/*
+ * struct drm_driver
+ */
+
+int drm_fbdev_uma_driver_fbdev_probe(struct drm_fb_helper *fb_helper,
+ struct drm_fb_helper_surface_size *sizes)
+{
+ struct drm_client_dev *client = &fb_helper->client;
+ struct drm_device *dev = fb_helper->dev;
+ struct fb_info *info = fb_helper->info;
+ struct drm_client_buffer *buffer;
+ struct drm_gem_uma_object *uma;
+ struct drm_framebuffer *fb;
+ u32 format;
+ struct iosys_map map;
+ int ret;
+
+ drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
+ sizes->surface_width, sizes->surface_height,
+ sizes->surface_bpp);
+
+ format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp, sizes->surface_depth);
+ buffer = drm_client_buffer_create_dumb(client, sizes->surface_width,
+ sizes->surface_height, format);
+ if (IS_ERR(buffer))
+ return PTR_ERR(buffer);
+ uma = to_drm_gem_uma_obj(buffer->gem);
+
+ fb = buffer->fb;
+
+ ret = drm_client_buffer_vmap(buffer, &map);
+ if (ret) {
+ goto err_drm_client_buffer_delete;
+ } else if (drm_WARN_ON(dev, map.is_iomem)) {
+ ret = -ENODEV; /* I/O memory not supported; use generic emulation */
+ goto err_drm_client_buffer_delete;
+ }
+
+ fb_helper->funcs = &drm_fbdev_uma_helper_funcs;
+ fb_helper->buffer = buffer;
+ fb_helper->fb = fb;
+
+ drm_fb_helper_fill_info(info, fb_helper, sizes);
+
+ info->fbops = &drm_fbdev_uma_fb_ops;
+
+ /* screen */
+ info->flags |= FBINFO_VIRTFB; /* system memory */
+ if (!uma->map_wc)
+ info->flags |= FBINFO_READS_FAST; /* signal caching */
+ info->screen_size = sizes->surface_height * fb->pitches[0];
+ info->screen_buffer = map.vaddr;
+ info->fix.smem_len = info->screen_size;
+
+ /* deferred I/O */
+ fb_helper->fbdefio.delay = HZ / 20;
+ fb_helper->fbdefio.get_page = drm_fbdev_uma_get_page;
+ fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io;
+
+ info->fbdefio = &fb_helper->fbdefio;
+ ret = fb_deferred_io_init(info);
+ if (ret)
+ goto err_drm_client_buffer_vunmap;
+
+ return 0;
+
+err_drm_client_buffer_vunmap:
+ fb_helper->fb = NULL;
+ fb_helper->buffer = NULL;
+ drm_client_buffer_vunmap(buffer);
+err_drm_client_buffer_delete:
+ drm_client_buffer_delete(buffer);
+ return ret;
+}
+EXPORT_SYMBOL(drm_fbdev_uma_driver_fbdev_probe);
diff --git a/drivers/gpu/drm/drm_gem_uma_helper.c b/drivers/gpu/drm/drm_gem_uma_helper.c
new file mode 100644
index 000000000000..d617cfd981e1
--- /dev/null
+++ b/drivers/gpu/drm/drm_gem_uma_helper.c
@@ -0,0 +1,898 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2018 Noralf Trønnes
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/export.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/shmem_fs.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+#ifdef CONFIG_X86
+#include <asm/set_memory.h>
+#endif
+
+#include <drm/drm.h>
+#include <drm/drm_device.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_dumb_buffers.h>
+#include <drm/drm_gem_uma_helper.h>
+#include <drm/drm_prime.h>
+#include <drm/drm_print.h>
+
+MODULE_IMPORT_NS("DMA_BUF");
+
+/**
+ * DOC: overview
+ *
+ * This library provides helpers for GEM objects on UMA systems. Each
+ * object is backed by shmem buffers allocated using anonymous pageable
+ * memory.
+ *
+ * Functions that operate on the GEM object receive struct &drm_gem_uma_object.
+ * For GEM callback helpers in struct &drm_gem_object functions, see likewise
+ * named functions with an _object_ infix (e.g., drm_gem_uma_object_vmap() wraps
+ * drm_gem_uma_vmap()). These helpers perform the necessary type conversion.
+ */
+
+static const struct drm_gem_object_funcs drm_gem_uma_funcs = {
+ .free = drm_gem_uma_object_free,
+ .print_info = drm_gem_uma_object_print_info,
+ .pin = drm_gem_uma_object_pin,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
+ .vmap = drm_gem_uma_object_vmap,
+ .vunmap = drm_gem_uma_object_vunmap,
+ .mmap = drm_gem_uma_object_mmap,
+ .vm_ops = &drm_gem_uma_vm_ops,
+};
+
+static int __drm_gem_uma_init(struct drm_device *dev, struct drm_gem_uma_object *uma,
+ size_t size, bool private, struct vfsmount *gemfs)
+{
+ struct drm_gem_object *obj = &uma->base;
+ int ret = 0;
+
+ if (!obj->funcs)
+ obj->funcs = &drm_gem_uma_funcs;
+
+ if (private) {
+ drm_gem_private_object_init(dev, obj, size);
+ uma->map_wc = false; /* dma-buf mappings always use writecombine */
+ } else {
+ ret = drm_gem_object_init_with_mnt(dev, obj, size, gemfs);
+ }
+ if (ret) {
+ drm_gem_private_object_fini(obj);
+ return ret;
+ }
+
+ ret = drm_gem_create_mmap_offset(obj);
+ if (ret)
+ goto err_release;
+
+ INIT_LIST_HEAD(&uma->madv_list);
+
+ if (!private) {
+ /*
+ * Our buffers are kept pinned, so allocating them
+ * from the MOVABLE zone is a really bad idea, and
+ * conflicts with CMA. See comments above new_inode()
+ * why this is required _and_ expected if you're
+ * going to pin these pages.
+ */
+ mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER |
+ __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+ }
+
+ return 0;
+err_release:
+ drm_gem_object_release(obj);
+ return ret;
+}
+
+/**
+ * drm_gem_uma_init - Initialize an allocated object.
+ * @dev: DRM device
+ * @uma: The allocated GEM UMA object.
+ * @size: Buffer size in bytes
+ *
+ * Returns:
+ * 0 on success, or a negative error code on failure.
+ */
+int drm_gem_uma_init(struct drm_device *dev, struct drm_gem_uma_object *uma, size_t size)
+{
+ return __drm_gem_uma_init(dev, uma, size, false, NULL);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_init);
+
+static struct drm_gem_uma_object *
+__drm_gem_uma_create(struct drm_device *dev, size_t size, bool private,
+ struct vfsmount *gemfs)
+{
+ struct drm_gem_uma_object *uma;
+ struct drm_gem_object *obj;
+ int ret = 0;
+
+ size = PAGE_ALIGN(size);
+
+ if (dev->driver->gem_create_object) {
+ obj = dev->driver->gem_create_object(dev, size);
+ if (IS_ERR(obj))
+ return ERR_CAST(obj);
+ uma = to_drm_gem_uma_obj(obj);
+ } else {
+ uma = kzalloc(sizeof(*uma), GFP_KERNEL);
+ if (!uma)
+ return ERR_PTR(-ENOMEM);
+ obj = &uma->base;
+ }
+
+ ret = __drm_gem_uma_init(dev, uma, size, private, gemfs);
+ if (ret) {
+ kfree(obj);
+ return ERR_PTR(ret);
+ }
+
+ return uma;
+}
+
+/**
+ * drm_gem_uma_create - Allocate an object with the given size
+ * @dev: DRM device
+ * @size: Size of the object to allocate
+ *
+ * This function creates a GEM UMA object.
+ *
+ * Returns:
+ * A struct drm_gem_uma_object * on success or an ERR_PTR()-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_uma_object *drm_gem_uma_create(struct drm_device *dev, size_t size)
+{
+ return __drm_gem_uma_create(dev, size, false, NULL);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_create);
+
+/**
+ * drm_gem_uma_create_with_mnt - Allocate an object with the given size in a
+ * given mountpoint
+ * @dev: DRM device
+ * @size: Size of the object to allocate
+ * @gemfs: tmpfs mount where the GEM object will be created
+ *
+ * This function creates a GEM UMA object in a given tmpfs mountpoint.
+ *
+ * Returns:
+ * A struct drm_gem_uma_object * on success or an ERR_PTR()-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_uma_object *drm_gem_uma_create_with_mnt(struct drm_device *dev,
+ size_t size,
+ struct vfsmount *gemfs)
+{
+ return __drm_gem_uma_create(dev, size, false, gemfs);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_create_with_mnt);
+
+/**
+ * drm_gem_uma_release - Release resources associated with a GEM UMA object.
+ * @uma: GEM UMA object
+ *
+ * This function cleans up the GEM object state, but does not free the memory used to store the
+ * object itself. This function is meant to be a dedicated helper for the Rust GEM bindings.
+ */
+void drm_gem_uma_release(struct drm_gem_uma_object *uma)
+{
+ struct drm_gem_object *obj = &uma->base;
+
+ if (drm_gem_is_imported(obj)) {
+ drm_prime_gem_destroy(obj, uma->sgt);
+ } else {
+ dma_resv_lock(uma->base.resv, NULL);
+
+ drm_WARN_ON(obj->dev, refcount_read(&uma->vmap_use_count));
+
+ if (uma->sgt) {
+ dma_unmap_sgtable(obj->dev->dev, uma->sgt,
+ DMA_BIDIRECTIONAL, 0);
+ sg_free_table(uma->sgt);
+ kfree(uma->sgt);
+ }
+ if (uma->pages)
+ drm_gem_uma_put_pages_locked(uma);
+
+ drm_WARN_ON(obj->dev, refcount_read(&uma->pages_use_count));
+ drm_WARN_ON(obj->dev, refcount_read(&uma->pages_pin_count));
+
+ dma_resv_unlock(uma->base.resv);
+ }
+
+ drm_gem_object_release(obj);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_release);
+
+/**
+ * drm_gem_uma_free - Free resources associated with a GEM UMA object
+ * @uma: GEM UMA object to free
+ *
+ * This function cleans up the GEM object state and frees the memory used to
+ * store the object itself.
+ */
+void drm_gem_uma_free(struct drm_gem_uma_object *uma)
+{
+ drm_gem_uma_release(uma);
+ kfree(uma);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_free);
+
+static int drm_gem_uma_get_pages_locked(struct drm_gem_uma_object *uma)
+{
+ struct drm_gem_object *obj = &uma->base;
+ struct page **pages;
+
+ dma_resv_assert_held(uma->base.resv);
+
+ if (refcount_inc_not_zero(&uma->pages_use_count))
+ return 0;
+
+ pages = drm_gem_get_pages(obj);
+ if (IS_ERR(pages)) {
+ drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
+ PTR_ERR(pages));
+ return PTR_ERR(pages);
+ }
+
+ /*
+ * TODO: Allocating WC pages which are correctly flushed is only
+ * supported on x86. Ideal solution would be a GFP_WC flag, which also
+ * ttm_pool.c could use.
+ */
+#ifdef CONFIG_X86
+ if (uma->map_wc)
+ set_pages_array_wc(pages, obj->size >> PAGE_SHIFT);
+#endif
+
+ uma->pages = pages;
+
+ refcount_set(&uma->pages_use_count, 1);
+
+ return 0;
+}
+
+/*
+ * drm_gem_uma_put_pages_locked - Decrease use count on the backing pages for a GEM UMA object
+ * @uma: GEM UMA object
+ *
+ * This function decreases the use count and puts the backing pages when use drops to zero.
+ */
+void drm_gem_uma_put_pages_locked(struct drm_gem_uma_object *uma)
+{
+ struct drm_gem_object *obj = &uma->base;
+
+ dma_resv_assert_held(uma->base.resv);
+
+ if (refcount_dec_and_test(&uma->pages_use_count)) {
+#ifdef CONFIG_X86
+ if (uma->map_wc)
+ set_pages_array_wb(uma->pages, obj->size >> PAGE_SHIFT);
+#endif
+
+ drm_gem_put_pages(obj, uma->pages,
+ uma->pages_mark_dirty_on_put,
+ uma->pages_mark_accessed_on_put);
+ uma->pages = NULL;
+ }
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_put_pages_locked);
+
+int drm_gem_uma_pin_locked(struct drm_gem_uma_object *uma)
+{
+ int ret;
+
+ dma_resv_assert_held(uma->base.resv);
+
+ drm_WARN_ON(uma->base.dev, drm_gem_is_imported(&uma->base));
+
+ if (refcount_inc_not_zero(&uma->pages_pin_count))
+ return 0;
+
+ ret = drm_gem_uma_get_pages_locked(uma);
+ if (!ret)
+ refcount_set(&uma->pages_pin_count, 1);
+
+ return ret;
+}
+EXPORT_SYMBOL(drm_gem_uma_pin_locked);
+
+void drm_gem_uma_unpin_locked(struct drm_gem_uma_object *uma)
+{
+ dma_resv_assert_held(uma->base.resv);
+
+ if (refcount_dec_and_test(&uma->pages_pin_count))
+ drm_gem_uma_put_pages_locked(uma);
+}
+EXPORT_SYMBOL(drm_gem_uma_unpin_locked);
+
+/**
+ * drm_gem_uma_pin - Pin backing pages for a GEM UMA object
+ * @uma: GEM UMA object
+ *
+ * This function makes sure the backing pages are pinned in memory while the
+ * buffer is exported.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_uma_pin(struct drm_gem_uma_object *uma)
+{
+ struct drm_gem_object *obj = &uma->base;
+ int ret;
+
+ drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
+
+ if (refcount_inc_not_zero(&uma->pages_pin_count))
+ return 0;
+
+ ret = dma_resv_lock_interruptible(uma->base.resv, NULL);
+ if (ret)
+ return ret;
+ ret = drm_gem_uma_pin_locked(uma);
+ dma_resv_unlock(uma->base.resv);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_pin);
+
+/**
+ * drm_gem_uma_unpin - Unpin backing pages for a GEM UMA object
+ * @uma: GEM uma object
+ *
+ * This function removes the requirement that the backing pages are pinned in
+ * memory.
+ */
+void drm_gem_uma_unpin(struct drm_gem_uma_object *uma)
+{
+ struct drm_gem_object *obj = &uma->base;
+
+ drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
+
+ if (refcount_dec_not_one(&uma->pages_pin_count))
+ return;
+
+ dma_resv_lock(uma->base.resv, NULL);
+ drm_gem_uma_unpin_locked(uma);
+ dma_resv_unlock(uma->base.resv);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_unpin);
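A note for reviewers: the pin/unpin pair above keeps the common case lock-free and only takes the reservation lock for the 0 <-> 1 transitions, via refcount_inc_not_zero() and refcount_dec_not_one(). The following standalone C sketch (a userspace model for illustration only, not kernel code; all names are hypothetical) mirrors that logic:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace model of the pin/unpin fast paths. */
struct pin_model {
	unsigned int pin_count;   /* models pages_pin_count */
	unsigned int pages_alloc; /* stands in for get/put_pages */
};

/* Returns true when the lock-free fast path was taken. */
static bool model_pin(struct pin_model *m)
{
	if (m->pin_count > 0) {		/* refcount_inc_not_zero() */
		m->pin_count++;
		return true;		/* fast path, no lock needed */
	}
	/* slow path: would take the dma_resv lock and allocate pages */
	m->pages_alloc++;
	m->pin_count = 1;		/* refcount_set(..., 1) */
	return false;
}

static void model_unpin(struct pin_model *m)
{
	if (m->pin_count > 1) {		/* refcount_dec_not_one() */
		m->pin_count--;
		return;			/* fast path */
	}
	/* slow path: last reference, would take the lock and put pages */
	m->pin_count = 0;
	m->pages_alloc--;
}
```

In the real helpers the refcount operations are atomic, so concurrent pins never race with the page allocation in the slow path.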
+
+/**
+ * drm_gem_uma_vmap_locked - Create a virtual mapping for a GEM UMA object
+ * @uma: GEM UMA object
+ * @map: Returns the kernel virtual address of the GEM-UMA object's backing
+ * store.
+ *
+ * This function makes sure that a contiguous kernel virtual address mapping
+ * exists for the buffer backing the GEM UMA object. It hides the differences
+ * between dma-buf imported and natively allocated objects.
+ *
+ * Acquired mappings should be cleaned up by calling drm_gem_uma_vunmap_locked().
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_uma_vmap_locked(struct drm_gem_uma_object *uma, struct iosys_map *map)
+{
+ struct drm_gem_object *obj = &uma->base;
+ int ret = 0;
+
+ dma_resv_assert_held(obj->resv);
+
+ if (drm_gem_is_imported(obj)) {
+ ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+ } else {
+ pgprot_t prot = PAGE_KERNEL;
+
+ if (refcount_inc_not_zero(&uma->vmap_use_count)) {
+ iosys_map_set_vaddr(map, uma->vaddr);
+ return 0;
+ }
+
+ ret = drm_gem_uma_pin_locked(uma);
+ if (ret)
+ return ret;
+
+ if (uma->map_wc)
+ prot = pgprot_writecombine(prot);
+ uma->vaddr = vmap(uma->pages, obj->size >> PAGE_SHIFT, VM_MAP, prot);
+ if (!uma->vaddr) {
+ ret = -ENOMEM;
+ } else {
+ iosys_map_set_vaddr(map, uma->vaddr);
+ refcount_set(&uma->vmap_use_count, 1);
+ }
+ }
+
+ if (ret) {
+ drm_dbg_kms(obj->dev, "Failed to vmap pages, error %d\n", ret);
+ goto err_put_pages;
+ }
+
+ return 0;
+
+err_put_pages:
+ if (!drm_gem_is_imported(obj))
+ drm_gem_uma_unpin_locked(uma);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_vmap_locked);
+
+/**
+ * drm_gem_uma_vunmap_locked - Unmap a virtual mapping for a GEM UMA object
+ * @uma: GEM UMA object
+ * @map: Kernel virtual address where the GEM-UMA object was mapped
+ *
+ * This function cleans up a kernel virtual address mapping acquired by
+ * drm_gem_uma_vmap_locked(). The mapping is only removed when the use count
+ * drops to zero.
+ *
+ * This function hides the differences between dma-buf imported and natively
+ * allocated objects.
+ */
+void drm_gem_uma_vunmap_locked(struct drm_gem_uma_object *uma, struct iosys_map *map)
+{
+ struct drm_gem_object *obj = &uma->base;
+
+ dma_resv_assert_held(obj->resv);
+
+ if (drm_gem_is_imported(obj)) {
+ dma_buf_vunmap(obj->import_attach->dmabuf, map);
+ } else {
+ if (refcount_dec_and_test(&uma->vmap_use_count)) {
+ vunmap(uma->vaddr);
+ uma->vaddr = NULL;
+
+ drm_gem_uma_unpin_locked(uma);
+ }
+ }
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_vunmap_locked);
+
+static int
+drm_gem_uma_create_with_handle(struct drm_file *file_priv,
+ struct drm_device *dev, size_t size,
+ uint32_t *handle)
+{
+ struct drm_gem_uma_object *uma;
+ int ret;
+
+ uma = drm_gem_uma_create(dev, size);
+ if (IS_ERR(uma))
+ return PTR_ERR(uma);
+
+ /*
+ * Allocate an id in the idr table where the object is registered;
+ * the handle carries the id that userspace sees.
+ */
+ ret = drm_gem_handle_create(file_priv, &uma->base, handle);
+ /* drop reference from allocate - handle holds it now. */
+ drm_gem_object_put(&uma->base);
+
+ return ret;
+}
+
+/*
+ * Update the madvise status. Returns true if the object has not been
+ * purged, false otherwise.
+ */
+int drm_gem_uma_madvise_locked(struct drm_gem_uma_object *uma, int madv)
+{
+ dma_resv_assert_held(uma->base.resv);
+
+ if (uma->madv >= 0)
+ uma->madv = madv;
+
+ madv = uma->madv;
+
+ return (madv >= 0);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_madvise_locked);
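The madvise helper above implements a small, sticky state machine: a non-negative value can be updated freely, but once the value goes negative (purged), it can never be raised again. A standalone C model of the same rules (userspace illustration only, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the GEM madvise state: 0 means active,
 * positive values are driver-specific "may purge" states, and a
 * negative value marks the object as purged and is sticky. */
static int model_madv;

static bool model_madvise(int madv)
{
	if (model_madv >= 0)	/* only update while not purged */
		model_madv = madv;

	return model_madv >= 0;	/* false once purged */
}
```

This mirrors why drm_gem_uma_purge_locked() sets madv to -1: subsequent madvise calls report the object as gone instead of reviving it.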
+
+void drm_gem_uma_purge_locked(struct drm_gem_uma_object *uma)
+{
+ struct drm_gem_object *obj = &uma->base;
+ struct drm_device *dev = obj->dev;
+
+ dma_resv_assert_held(uma->base.resv);
+
+ drm_WARN_ON(obj->dev, !drm_gem_uma_is_purgeable(uma));
+
+ dma_unmap_sgtable(dev->dev, uma->sgt, DMA_BIDIRECTIONAL, 0);
+ sg_free_table(uma->sgt);
+ kfree(uma->sgt);
+ uma->sgt = NULL;
+
+ drm_gem_uma_put_pages_locked(uma);
+
+ uma->madv = -1;
+
+ drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+ drm_gem_free_mmap_offset(obj);
+
+ /* Our goal here is to return as much of the memory as
+ * is possible back to the system as we are called from OOM.
+ * To do this we must instruct the shmfs to drop all of its
+ * backing pages, *now*.
+ */
+ shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+
+ invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_purge_locked);
+
+/**
+ * drm_gem_uma_dumb_create - Create a dumb UMA buffer object
+ * @file: DRM file structure to create the dumb buffer for
+ * @dev: DRM device
+ * @args: IOCTL data
+ *
+ * This function computes the pitch of the dumb buffer and rounds it up to an
+ * integer number of bytes per pixel. Drivers for hardware that doesn't have
+ * any additional restrictions on the pitch can directly use this function as
+ * their &drm_driver.dumb_create callback.
+ *
+ * For hardware with additional restrictions, drivers can adjust the fields
+ * set up by userspace before calling into this function.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_uma_dumb_create(struct drm_file *file, struct drm_device *dev,
+ struct drm_mode_create_dumb *args)
+{
+ int ret;
+
+ ret = drm_mode_size_dumb(dev, args, SZ_8, 0);
+ if (ret)
+ return ret;
+
+ return drm_gem_uma_create_with_handle(file, dev, args->size, &args->handle);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_dumb_create);
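drm_mode_size_dumb() fills in args->pitch and args->size before the buffer is created. A rough userspace sketch of that kind of computation (illustrative only; the helper's exact validation rules may differ) looks like:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical dumb-buffer sizing: the pitch is the bytes per row,
 * rounded up to whole bytes and to a power-of-two alignment, and the
 * buffer size is the pitch times the height. */
static uint64_t model_dumb_pitch(uint32_t width, uint32_t bpp,
				 uint32_t align)
{
	uint64_t pitch = ((uint64_t)width * bpp + 7) / 8; /* bits -> bytes */

	return (pitch + align - 1) & ~(uint64_t)(align - 1);
}

static uint64_t model_dumb_size(uint32_t width, uint32_t height,
				uint32_t bpp, uint32_t align)
{
	return model_dumb_pitch(width, bpp, align) * height;
}
```

For a 640x480 XRGB8888 buffer (32 bpp) with 8-byte pitch alignment this yields a pitch of 2560 bytes and a size of 1228800 bytes before page alignment.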
+
+static vm_fault_t drm_gem_uma_fault(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ struct drm_gem_object *obj = vma->vm_private_data;
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+ loff_t num_pages = obj->size >> PAGE_SHIFT;
+ vm_fault_t ret;
+ struct page *page;
+ pgoff_t page_offset;
+
+ /* We don't use vmf->pgoff since that has the fake offset */
+ page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
+
+ dma_resv_lock(uma->base.resv, NULL);
+
+ if (page_offset >= num_pages ||
+ drm_WARN_ON_ONCE(obj->dev, !uma->pages) ||
+ uma->madv < 0) {
+ ret = VM_FAULT_SIGBUS;
+ } else {
+ page = uma->pages[page_offset];
+
+ ret = vmf_insert_pfn(vma, vmf->address, page_to_pfn(page));
+ }
+
+ dma_resv_unlock(uma->base.resv);
+
+ return ret;
+}
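The fault handler derives the page index from the VMA-relative address rather than from vmf->pgoff, because pgoff carries the fake mmap offset DRM assigns to each GEM object. A standalone C model of that index computation (userspace illustration only; a 4 KiB page size is assumed):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MODEL_PAGE_SHIFT 12	/* assume 4 KiB pages for the example */

/* Hypothetical model of the fault index computation. Returns -1 for
 * out-of-range faults, which the real handler maps to VM_FAULT_SIGBUS. */
static long model_fault_index(uintptr_t address, uintptr_t vm_start,
			      size_t obj_size)
{
	size_t num_pages = obj_size >> MODEL_PAGE_SHIFT;
	size_t page_offset = (address - vm_start) >> MODEL_PAGE_SHIFT;

	if (page_offset >= num_pages)
		return -1;	/* SIGBUS in the real handler */

	return (long)page_offset;
}
```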
+
+static void drm_gem_uma_vm_open(struct vm_area_struct *vma)
+{
+ struct drm_gem_object *obj = vma->vm_private_data;
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
+
+ dma_resv_lock(uma->base.resv, NULL);
+
+ /*
+ * We should have already pinned the pages when the buffer was first
+ * mmap'd, vm_open() just grabs an additional reference for the new
+ * mm the vma is getting copied into (ie. on fork()).
+ */
+ drm_WARN_ON_ONCE(obj->dev,
+ !refcount_inc_not_zero(&uma->pages_use_count));
+
+ dma_resv_unlock(uma->base.resv);
+
+ drm_gem_vm_open(vma);
+}
+
+static void drm_gem_uma_vm_close(struct vm_area_struct *vma)
+{
+ struct drm_gem_object *obj = vma->vm_private_data;
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ dma_resv_lock(uma->base.resv, NULL);
+ drm_gem_uma_put_pages_locked(uma);
+ dma_resv_unlock(uma->base.resv);
+
+ drm_gem_vm_close(vma);
+}
+
+const struct vm_operations_struct drm_gem_uma_vm_ops = {
+ .fault = drm_gem_uma_fault,
+ .open = drm_gem_uma_vm_open,
+ .close = drm_gem_uma_vm_close,
+};
+EXPORT_SYMBOL_GPL(drm_gem_uma_vm_ops);
+
+/**
+ * drm_gem_uma_mmap - Memory-map a GEM UMA object
+ * @uma: GEM UMA object
+ * @vma: VMA for the area to be mapped
+ *
+ * This function implements an augmented version of the GEM DRM file mmap
+ * operation for GEM UMA objects.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+int drm_gem_uma_mmap(struct drm_gem_uma_object *uma, struct vm_area_struct *vma)
+{
+ struct drm_gem_object *obj = &uma->base;
+ int ret;
+
+ if (drm_gem_is_imported(obj)) {
+ /* Reset both vm_ops and vm_private_data, so we don't end up with
+ * vm_ops pointing to our implementation if the dma-buf backend
+ * doesn't set those fields.
+ */
+ vma->vm_private_data = NULL;
+ vma->vm_ops = NULL;
+
+ ret = dma_buf_mmap(obj->dma_buf, vma, 0);
+
+ /* Drop the reference drm_gem_mmap_obj() acquired. */
+ if (!ret)
+ drm_gem_object_put(obj);
+
+ return ret;
+ }
+
+ if (is_cow_mapping(vma->vm_flags))
+ return -EINVAL;
+
+ dma_resv_lock(uma->base.resv, NULL);
+ ret = drm_gem_uma_get_pages_locked(uma);
+ dma_resv_unlock(uma->base.resv);
+
+ if (ret)
+ return ret;
+
+ vm_flags_set(vma, VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
+ vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+ if (uma->map_wc)
+ vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_mmap);
+
+/**
+ * drm_gem_uma_print_info() - Print &drm_gem_uma_object info for debugfs
+ * @uma: GEM UMA object
+ * @p: DRM printer
+ * @indent: Tab indentation level
+ */
+void drm_gem_uma_print_info(const struct drm_gem_uma_object *uma,
+ struct drm_printer *p, unsigned int indent)
+{
+ if (drm_gem_is_imported(&uma->base))
+ return;
+
+ drm_printf_indent(p, indent, "pages_pin_count=%u\n", refcount_read(&uma->pages_pin_count));
+ drm_printf_indent(p, indent, "pages_use_count=%u\n", refcount_read(&uma->pages_use_count));
+ drm_printf_indent(p, indent, "vmap_use_count=%u\n", refcount_read(&uma->vmap_use_count));
+ drm_printf_indent(p, indent, "vaddr=%p\n", uma->vaddr);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_print_info);
+
+/**
+ * drm_gem_uma_get_sg_table - Provide a scatter/gather table of pinned
+ * pages for a GEM UMA object
+ * @uma: GEM UMA object
+ *
+ * This function exports a scatter/gather table suitable for PRIME usage by
+ * calling the standard DMA mapping API.
+ *
+ * Drivers that need to acquire a scatter/gather table for objects should call
+ * drm_gem_uma_get_pages_sgt() instead.
+ *
+ * Returns:
+ * A pointer to the scatter/gather table of pinned pages or error pointer on failure.
+ */
+struct sg_table *drm_gem_uma_get_sg_table(struct drm_gem_uma_object *uma)
+{
+ struct drm_gem_object *obj = &uma->base;
+
+ drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
+
+ return drm_prime_pages_to_sg(obj->dev, uma->pages, obj->size >> PAGE_SHIFT);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_get_sg_table);
+
+static struct sg_table *drm_gem_uma_get_pages_sgt_locked(struct drm_gem_uma_object *uma)
+{
+ struct drm_gem_object *obj = &uma->base;
+ int ret;
+ struct sg_table *sgt;
+
+ if (uma->sgt)
+ return uma->sgt;
+
+ drm_WARN_ON(obj->dev, drm_gem_is_imported(obj));
+
+ ret = drm_gem_uma_get_pages_locked(uma);
+ if (ret)
+ return ERR_PTR(ret);
+
+ sgt = drm_gem_uma_get_sg_table(uma);
+ if (IS_ERR(sgt)) {
+ ret = PTR_ERR(sgt);
+ goto err_put_pages;
+ }
+ /* Map the pages for use by the h/w. */
+ ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
+ if (ret)
+ goto err_free_sgt;
+
+ uma->sgt = sgt;
+
+ return sgt;
+
+err_free_sgt:
+ sg_free_table(sgt);
+ kfree(sgt);
+err_put_pages:
+ drm_gem_uma_put_pages_locked(uma);
+ return ERR_PTR(ret);
+}
+
+/**
+ * drm_gem_uma_get_pages_sgt - Pin pages, dma map them, and return a
+ * scatter/gather table for a GEM UMA object.
+ * @uma: GEM UMA object
+ *
+ * This function returns a scatter/gather table suitable for driver usage. If
+ * the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
+ * table created.
+ *
+ * This is the main function for drivers to get at backing storage, and it hides
+ * the differences between dma-buf imported and natively allocated objects.
+ * drm_gem_uma_get_sg_table() should not be called directly by drivers.
+ *
+ * Returns:
+ * A pointer to the scatter/gather table of pinned pages or an error pointer on failure.
+ */
+struct sg_table *drm_gem_uma_get_pages_sgt(struct drm_gem_uma_object *uma)
+{
+ int ret;
+ struct sg_table *sgt;
+
+ ret = dma_resv_lock_interruptible(uma->base.resv, NULL);
+ if (ret)
+ return ERR_PTR(ret);
+ sgt = drm_gem_uma_get_pages_sgt_locked(uma);
+ dma_resv_unlock(uma->base.resv);
+
+ return sgt;
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_get_pages_sgt);
+
+/**
+ * drm_gem_uma_prime_import_sg_table - Produce a GEM UMA object from
+ * another driver's scatter/gather table of pinned pages
+ * @dev: Device to import into
+ * @attach: DMA-BUF attachment
+ * @sgt: Scatter/gather table of pinned pages
+ *
+ * This function imports a scatter/gather table exported via DMA-BUF by
+ * another driver. Drivers that use the GEM UMA helpers should set this as
+ * their &drm_driver.gem_prime_import_sg_table callback.
+ *
+ * Returns:
+ * A pointer to a newly created GEM object or an ERR_PTR-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_object *
+drm_gem_uma_prime_import_sg_table(struct drm_device *dev,
+ struct dma_buf_attachment *attach,
+ struct sg_table *sgt)
+{
+ size_t size = PAGE_ALIGN(attach->dmabuf->size);
+ struct drm_gem_uma_object *uma;
+
+ uma = __drm_gem_uma_create(dev, size, true, NULL);
+ if (IS_ERR(uma))
+ return ERR_CAST(uma);
+
+ uma->sgt = sgt;
+
+ drm_dbg_prime(dev, "size = %zu\n", size);
+
+ return &uma->base;
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_prime_import_sg_table);
+
+/**
+ * drm_gem_uma_prime_import_no_map - Import dmabuf without mapping its sg_table
+ * @dev: Device to import into
+ * @dma_buf: dma-buf object to import
+ *
+ * Drivers that use the GEM UMA helpers but want to import dma-bufs without
+ * mapping their sg_table can use this as their &drm_driver.gem_prime_import
+ * implementation.
+ *
+ * Returns:
+ * A pointer to a newly created GEM object or an ERR_PTR-encoded negative
+ * error code on failure.
+ */
+struct drm_gem_object *drm_gem_uma_prime_import_no_map(struct drm_device *dev,
+ struct dma_buf *dma_buf)
+{
+ struct dma_buf_attachment *attach;
+ struct drm_gem_uma_object *uma;
+ struct drm_gem_object *obj;
+ size_t size;
+ int ret;
+
+ if (drm_gem_is_prime_exported_dma_buf(dev, dma_buf)) {
+ /*
+ * Importing dmabuf exported from our own gem increases
+ * refcount on gem itself instead of f_count of dmabuf.
+ */
+ obj = dma_buf->priv;
+ drm_gem_object_get(obj);
+ return obj;
+ }
+
+ attach = dma_buf_attach(dma_buf, dev->dev);
+ if (IS_ERR(attach))
+ return ERR_CAST(attach);
+
+ get_dma_buf(dma_buf);
+
+ size = PAGE_ALIGN(attach->dmabuf->size);
+
+ uma = __drm_gem_uma_create(dev, size, true, NULL);
+ if (IS_ERR(uma)) {
+ ret = PTR_ERR(uma);
+ goto fail_detach;
+ }
+
+ drm_dbg_prime(dev, "size = %zu\n", size);
+
+ uma->base.import_attach = attach;
+ uma->base.resv = dma_buf->resv;
+
+ return &uma->base;
+
+fail_detach:
+ dma_buf_detach(dma_buf, attach);
+ dma_buf_put(dma_buf);
+
+ return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(drm_gem_uma_prime_import_no_map);
+
+MODULE_DESCRIPTION("DRM UMA memory-management helpers");
+MODULE_IMPORT_NS("DMA_BUF");
+MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/drm/tests/Makefile b/drivers/gpu/drm/tests/Makefile
index c0e952293ad0..801f1c00e3c1 100644
--- a/drivers/gpu/drm/tests/Makefile
+++ b/drivers/gpu/drm/tests/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_DRM_KUNIT_TEST) += \
drm_format_test.o \
drm_framebuffer_test.o \
drm_gem_shmem_test.o \
+ drm_gem_uma_test.o \
drm_hdmi_state_helper_test.o \
drm_managed_test.o \
drm_mm_test.o \
diff --git a/drivers/gpu/drm/tests/drm_gem_uma_test.c b/drivers/gpu/drm/tests/drm_gem_uma_test.c
new file mode 100644
index 000000000000..38f8b17c3d11
--- /dev/null
+++ b/drivers/gpu/drm/tests/drm_gem_uma_test.c
@@ -0,0 +1,385 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KUnit test suite for GEM objects backed by UMA buffers
+ *
+ * Copyright (C) 2023 Red Hat, Inc.
+ *
+ * Author: Marco Pagani <marpagan@redhat.com>
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/iosys-map.h>
+#include <linux/sizes.h>
+
+#include <kunit/test.h>
+
+#include <drm/drm_device.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_gem.h>
+#include <drm/drm_gem_uma_helper.h>
+#include <drm/drm_kunit_helpers.h>
+
+#define TEST_SIZE SZ_1M
+#define TEST_BYTE 0xae
+
+/*
+ * Wrappers to avoid cast warnings when passing action functions
+ * directly to kunit_add_action().
+ */
+KUNIT_DEFINE_ACTION_WRAPPER(kfree_wrapper, kfree, const void *);
+
+KUNIT_DEFINE_ACTION_WRAPPER(sg_free_table_wrapper, sg_free_table,
+ struct sg_table *);
+
+KUNIT_DEFINE_ACTION_WRAPPER(drm_gem_uma_free_wrapper, drm_gem_uma_free,
+ struct drm_gem_uma_object *);
+
+/*
+ * Test creating a UMA GEM object backed by a shmem buffer. The test
+ * case succeeds if the GEM object is successfully allocated with the
+ * shmem file node and object-functions attributes set, and the size
+ * attribute is equal to the requested size.
+ */
+static void drm_gem_uma_test_obj_create(struct kunit *test)
+{
+ struct drm_device *drm_dev = test->priv;
+ struct drm_gem_uma_object *uma;
+
+ uma = drm_gem_uma_create(drm_dev, TEST_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, uma);
+ KUNIT_EXPECT_EQ(test, uma->base.size, TEST_SIZE);
+ KUNIT_EXPECT_NOT_NULL(test, uma->base.filp);
+ KUNIT_EXPECT_NOT_NULL(test, uma->base.funcs);
+
+ drm_gem_uma_free(uma);
+}
+
+/*
+ * Test creating a UMA GEM object from a scatter/gather table exported
+ * via a DMA-BUF. The test case succeeds if the GEM object is successfully
+ * created with the file node attribute equal to NULL and the sgt
+ * attribute pointing to the scatter/gather table that has been imported.
+ */
+static void drm_gem_uma_test_obj_create_private(struct kunit *test)
+{
+ struct drm_device *drm_dev = test->priv;
+ struct drm_gem_uma_object *uma;
+ struct drm_gem_object *gem_obj;
+ struct dma_buf buf_mock;
+ struct dma_buf_attachment attach_mock;
+ struct sg_table *sgt;
+ char *buf;
+ int ret;
+
+ /* Create a mock scatter/gather table */
+ buf = kunit_kzalloc(test, TEST_SIZE, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, buf);
+
+ sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, sgt);
+
+ ret = kunit_add_action_or_reset(test, kfree_wrapper, sgt);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ ret = kunit_add_action_or_reset(test, sg_free_table_wrapper, sgt);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ sg_init_one(sgt->sgl, buf, TEST_SIZE);
+
+ /*
+ * Set the DMA mask to 64 bits and map the sgtable;
+ * otherwise drm_gem_uma_free() will cause a warning
+ * on debug kernels.
+ */
+ ret = dma_set_mask(drm_dev->dev, DMA_BIT_MASK(64));
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ ret = dma_map_sgtable(drm_dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ /* Init a mock DMA-BUF */
+ buf_mock.size = TEST_SIZE;
+ attach_mock.dmabuf = &buf_mock;
+
+ gem_obj = drm_gem_uma_prime_import_sg_table(drm_dev, &attach_mock, sgt);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gem_obj);
+ KUNIT_EXPECT_EQ(test, gem_obj->size, TEST_SIZE);
+ KUNIT_EXPECT_NULL(test, gem_obj->filp);
+ KUNIT_EXPECT_NOT_NULL(test, gem_obj->funcs);
+
+ /* The scatter/gather table will be freed by drm_gem_uma_free */
+ kunit_remove_action(test, sg_free_table_wrapper, sgt);
+ kunit_remove_action(test, kfree_wrapper, sgt);
+
+ uma = to_drm_gem_uma_obj(gem_obj);
+ KUNIT_EXPECT_PTR_EQ(test, uma->sgt, sgt);
+
+ drm_gem_uma_free(uma);
+}
+
+/*
+ * Test pinning backing pages for a UMA GEM object. The test case
+ * succeeds if a suitable number of backing pages are allocated, and
+ * the pages table counter attribute is increased by one.
+ */
+static void drm_gem_uma_test_pin_pages(struct kunit *test)
+{
+ struct drm_device *drm_dev = test->priv;
+ struct drm_gem_uma_object *uma;
+ int i, ret;
+
+ uma = drm_gem_uma_create(drm_dev, TEST_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, uma);
+ KUNIT_EXPECT_NULL(test, uma->pages);
+ KUNIT_EXPECT_EQ(test, refcount_read(&uma->pages_use_count), 0);
+
+ ret = kunit_add_action_or_reset(test, drm_gem_uma_free_wrapper, uma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ ret = drm_gem_uma_pin(uma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+ KUNIT_ASSERT_NOT_NULL(test, uma->pages);
+ KUNIT_EXPECT_EQ(test, refcount_read(&uma->pages_use_count), 1);
+
+ for (i = 0; i < (uma->base.size >> PAGE_SHIFT); i++)
+ KUNIT_ASSERT_NOT_NULL(test, uma->pages[i]);
+
+ drm_gem_uma_unpin(uma);
+ KUNIT_EXPECT_NULL(test, uma->pages);
+ KUNIT_EXPECT_EQ(test, refcount_read(&uma->pages_use_count), 0);
+}
+
+/*
+ * Test creating a virtual mapping for a UMA GEM object. The test
+ * case succeeds if the backing memory is mapped and the reference
+ * counter for virtual mapping is increased by one. Moreover, the test
+ * case writes and then reads a test pattern over the mapped memory.
+ */
+static void drm_gem_uma_test_vmap(struct kunit *test)
+{
+ struct drm_device *drm_dev = test->priv;
+ struct drm_gem_uma_object *uma;
+ struct iosys_map map;
+ int ret, i;
+
+ uma = drm_gem_uma_create(drm_dev, TEST_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, uma);
+ KUNIT_EXPECT_NULL(test, uma->vaddr);
+ KUNIT_EXPECT_EQ(test, refcount_read(&uma->vmap_use_count), 0);
+
+ ret = kunit_add_action_or_reset(test, drm_gem_uma_free_wrapper, uma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ ret = drm_gem_uma_vmap_locked(uma, &map);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+ KUNIT_ASSERT_NOT_NULL(test, uma->vaddr);
+ KUNIT_ASSERT_FALSE(test, iosys_map_is_null(&map));
+ KUNIT_EXPECT_EQ(test, refcount_read(&uma->vmap_use_count), 1);
+
+ iosys_map_memset(&map, 0, TEST_BYTE, TEST_SIZE);
+ for (i = 0; i < TEST_SIZE; i++)
+ KUNIT_EXPECT_EQ(test, iosys_map_rd(&map, i, u8), TEST_BYTE);
+
+ drm_gem_uma_vunmap_locked(uma, &map);
+ KUNIT_EXPECT_NULL(test, uma->vaddr);
+ KUNIT_EXPECT_EQ(test, refcount_read(&uma->vmap_use_count), 0);
+}
+
+/*
+ * Test exporting a scatter/gather table of pinned pages suitable for
+ * PRIME usage from a UMA GEM object. The test case succeeds if a
+ * scatter/gather table large enough to accommodate the backing memory
+ * is successfully exported.
+ */
+static void drm_gem_uma_test_get_pages_sgt(struct kunit *test)
+{
+ struct drm_device *drm_dev = test->priv;
+ struct drm_gem_uma_object *uma;
+ struct sg_table *sgt;
+ struct scatterlist *sg;
+ unsigned int si, len = 0;
+ int ret;
+
+ uma = drm_gem_uma_create(drm_dev, TEST_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, uma);
+
+ ret = kunit_add_action_or_reset(test, drm_gem_uma_free_wrapper, uma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ ret = drm_gem_uma_pin(uma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ sgt = drm_gem_uma_get_sg_table(uma);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, sgt);
+ KUNIT_EXPECT_NULL(test, uma->sgt);
+
+ ret = kunit_add_action_or_reset(test, kfree_wrapper, sgt);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ ret = kunit_add_action_or_reset(test, sg_free_table_wrapper, sgt);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ for_each_sgtable_sg(sgt, sg, si) {
+ KUNIT_EXPECT_NOT_NULL(test, sg);
+ len += sg->length;
+ }
+
+ KUNIT_EXPECT_GE(test, len, TEST_SIZE);
+}
+
+/*
+ * Test pinning pages and exporting a scatter/gather table suitable for
+ * driver usage from a UMA GEM object. The test case succeeds if the
+ * backing pages are pinned and a scatter/gather table large enough to
+ * accommodate the backing memory is successfully exported.
+ */
+static void drm_gem_uma_test_get_sg_table(struct kunit *test)
+{
+ struct drm_device *drm_dev = test->priv;
+ struct drm_gem_uma_object *uma;
+ struct sg_table *sgt;
+ struct scatterlist *sg;
+ unsigned int si, len = 0;
+ int ret;
+
+ uma = drm_gem_uma_create(drm_dev, TEST_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, uma);
+
+ ret = kunit_add_action_or_reset(test, drm_gem_uma_free_wrapper, uma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ /* The scatter/gather table will be freed by drm_gem_uma_free */
+ sgt = drm_gem_uma_get_pages_sgt(uma);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, sgt);
+ KUNIT_ASSERT_NOT_NULL(test, uma->pages);
+ KUNIT_EXPECT_EQ(test, refcount_read(&uma->pages_use_count), 1);
+ KUNIT_EXPECT_PTR_EQ(test, sgt, uma->sgt);
+
+ for_each_sgtable_sg(sgt, sg, si) {
+ KUNIT_EXPECT_NOT_NULL(test, sg);
+ len += sg->length;
+ }
+
+ KUNIT_EXPECT_GE(test, len, TEST_SIZE);
+}
+
+/*
+ * Test updating the madvise state of a UMA GEM object. The test
+ * case checks that the function for setting madv updates it only if
+ * its current value is greater than or equal to zero, and returns
+ * false if it has a negative value.
+ */
+static void drm_gem_uma_test_madvise(struct kunit *test)
+{
+ struct drm_device *drm_dev = test->priv;
+ struct drm_gem_uma_object *uma;
+ int ret;
+
+ uma = drm_gem_uma_create(drm_dev, TEST_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, uma);
+ KUNIT_ASSERT_EQ(test, uma->madv, 0);
+
+ ret = kunit_add_action_or_reset(test, drm_gem_uma_free_wrapper, uma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ ret = drm_gem_uma_madvise_locked(uma, 1);
+ KUNIT_EXPECT_TRUE(test, ret);
+ KUNIT_ASSERT_EQ(test, uma->madv, 1);
+
+ /* Set madv to a negative value */
+ ret = drm_gem_uma_madvise_locked(uma, -1);
+ KUNIT_EXPECT_FALSE(test, ret);
+ KUNIT_ASSERT_EQ(test, uma->madv, -1);
+
+ /* Check that madv cannot be set back to a positive value */
+ ret = drm_gem_uma_madvise_locked(uma, 0);
+ KUNIT_EXPECT_FALSE(test, ret);
+ KUNIT_ASSERT_EQ(test, uma->madv, -1);
+}
+
+/*
+ * Test purging a UMA GEM object. First, assert that a newly created
+ * UMA GEM object is not purgeable. Then, set madvise to a positive
+ * value and call drm_gem_uma_get_pages_sgt() to pin and dma-map the
+ * backing pages. Finally, assert that the uma GEM object is now
+ * purgeable and purge it.
+ */
+static void drm_gem_uma_test_purge(struct kunit *test)
+{
+ struct drm_device *drm_dev = test->priv;
+ struct drm_gem_uma_object *uma;
+ struct sg_table *sgt;
+ int ret;
+
+ uma = drm_gem_uma_create(drm_dev, TEST_SIZE);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, uma);
+
+ ret = kunit_add_action_or_reset(test, drm_gem_uma_free_wrapper, uma);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ ret = drm_gem_uma_is_purgeable(uma);
+ KUNIT_EXPECT_FALSE(test, ret);
+
+ ret = drm_gem_uma_madvise_locked(uma, 1);
+ KUNIT_EXPECT_TRUE(test, ret);
+
+ /* The scatter/gather table will be freed by drm_gem_uma_free */
+ sgt = drm_gem_uma_get_pages_sgt(uma);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, sgt);
+
+ ret = drm_gem_uma_is_purgeable(uma);
+ KUNIT_EXPECT_TRUE(test, ret);
+
+ drm_gem_uma_purge_locked(uma);
+ KUNIT_EXPECT_NULL(test, uma->pages);
+ KUNIT_EXPECT_NULL(test, uma->sgt);
+ KUNIT_EXPECT_EQ(test, uma->madv, -1);
+}
+
+static int drm_gem_uma_test_init(struct kunit *test)
+{
+ struct device *dev;
+ struct drm_device *drm_dev;
+
+ /* Allocate a parent device */
+ dev = drm_kunit_helper_alloc_device(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
+
+ /*
+ * The DRM core will automatically initialize the GEM core and create
+ * a DRM Memory Manager object which provides an address space pool
+ * for GEM objects allocation.
+ */
+ drm_dev = __drm_kunit_helper_alloc_drm_device(test, dev, sizeof(*drm_dev),
+ 0, DRIVER_GEM);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, drm_dev);
+
+ test->priv = drm_dev;
+
+ return 0;
+}
+
+static struct kunit_case drm_gem_uma_test_cases[] = {
+ KUNIT_CASE(drm_gem_uma_test_obj_create),
+ KUNIT_CASE(drm_gem_uma_test_obj_create_private),
+ KUNIT_CASE(drm_gem_uma_test_pin_pages),
+ KUNIT_CASE(drm_gem_uma_test_vmap),
+ KUNIT_CASE(drm_gem_uma_test_get_pages_sgt),
+ KUNIT_CASE(drm_gem_uma_test_get_sg_table),
+ KUNIT_CASE(drm_gem_uma_test_madvise),
+ KUNIT_CASE(drm_gem_uma_test_purge),
+ {}
+};
+
+static struct kunit_suite drm_gem_uma_suite = {
+ .name = "drm_gem_uma",
+ .init = drm_gem_uma_test_init,
+ .test_cases = drm_gem_uma_test_cases
+};
+
+kunit_test_suite(drm_gem_uma_suite);
+
+MODULE_DESCRIPTION("KUnit test suite for GEM objects backed by UMA buffers");
+MODULE_LICENSE("GPL");
diff --git a/include/drm/drm_fbdev_uma.h b/include/drm/drm_fbdev_uma.h
new file mode 100644
index 000000000000..2dd5d47795ea
--- /dev/null
+++ b/include/drm/drm_fbdev_uma.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: MIT */
+
+#ifndef DRM_FBDEV_UMA_H
+#define DRM_FBDEV_UMA_H
+
+struct drm_fb_helper;
+struct drm_fb_helper_surface_size;
+
+#ifdef CONFIG_DRM_FBDEV_EMULATION
+int drm_fbdev_uma_driver_fbdev_probe(struct drm_fb_helper *fb_helper,
+ struct drm_fb_helper_surface_size *sizes);
+
+#define DRM_FBDEV_UMA_DRIVER_OPS \
+ .fbdev_probe = drm_fbdev_uma_driver_fbdev_probe
+#else
+#define DRM_FBDEV_UMA_DRIVER_OPS \
+ .fbdev_probe = NULL
+#endif
+
+#endif
diff --git a/include/drm/drm_gem_uma_helper.h b/include/drm/drm_gem_uma_helper.h
new file mode 100644
index 000000000000..e7722c625fab
--- /dev/null
+++ b/include/drm/drm_gem_uma_helper.h
@@ -0,0 +1,309 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __DRM_GEM_UMA_HELPER_H__
+#define __DRM_GEM_UMA_HELPER_H__
+
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+
+#include <drm/drm_file.h>
+#include <drm/drm_gem.h>
+#include <drm/drm_ioctl.h>
+#include <drm/drm_prime.h>
+
+struct dma_buf_attachment;
+struct drm_mode_create_dumb;
+struct drm_printer;
+struct sg_table;
+
+/**
+ * struct drm_gem_uma_object - GEM object for UMA systems
+ */
+struct drm_gem_uma_object {
+ /**
+ * @base: Base GEM object
+ */
+ struct drm_gem_object base;
+
+ /**
+ * @pages: Page table
+ */
+ struct page **pages;
+
+ /**
+ * @pages_use_count:
+ *
+ * Reference count on the pages table.
+ * The pages are put when the count reaches zero.
+ */
+ refcount_t pages_use_count;
+
+ /**
+ * @pages_pin_count:
+ *
+ * Reference count on the pinned pages table.
+ *
+ * Pages are hard-pinned and reside in memory if the count is
+ * greater than zero. Otherwise, when the count is zero, the pages are
+ * allowed to be evicted and purged by the memory shrinker.
+ */
+ refcount_t pages_pin_count;
+
+ /**
+ * @madv: State for madvise
+ *
+ * 0 is active/in use.
+ * A negative value means the object has been purged.
+ * Positive values are driver specific and not used by the helpers.
+ */
+ int madv;
+
+ /**
+ * @madv_list: List entry for madvise tracking
+ *
+ * Typically used by drivers to track purgeable objects
+ */
+ struct list_head madv_list;
+
+ /**
+ * @sgt: Scatter/gather table for imported PRIME buffers
+ */
+ struct sg_table *sgt;
+
+ /**
+ * @vaddr: Kernel virtual address of the backing memory
+ */
+ void *vaddr;
+
+ /**
+ * @vmap_use_count:
+ *
+ * Reference count on the virtual address.
+ * The address is unmapped when the count reaches zero.
+ */
+ refcount_t vmap_use_count;
+
+ /**
+ * @pages_mark_dirty_on_put:
+ *
+ * Mark pages as dirty when they are put.
+ */
+ bool pages_mark_dirty_on_put : 1;
+
+ /**
+ * @pages_mark_accessed_on_put:
+ *
+ * Mark pages as accessed when they are put.
+ */
+ bool pages_mark_accessed_on_put : 1;
+
+ /**
+ * @map_wc: map object write-combined (instead of using shmem defaults).
+ */
+ bool map_wc : 1;
+};
+
+#define to_drm_gem_uma_obj(obj) \
+ container_of(obj, struct drm_gem_uma_object, base)
+
+int drm_gem_uma_init(struct drm_device *dev, struct drm_gem_uma_object *uma, size_t size);
+struct drm_gem_uma_object *drm_gem_uma_create(struct drm_device *dev, size_t size);
+struct drm_gem_uma_object *drm_gem_uma_create_with_mnt(struct drm_device *dev,
+ size_t size,
+ struct vfsmount *gemfs);
+void drm_gem_uma_release(struct drm_gem_uma_object *uma);
+void drm_gem_uma_free(struct drm_gem_uma_object *uma);
+
+void drm_gem_uma_put_pages_locked(struct drm_gem_uma_object *uma);
+int drm_gem_uma_pin(struct drm_gem_uma_object *uma);
+void drm_gem_uma_unpin(struct drm_gem_uma_object *uma);
+int drm_gem_uma_vmap_locked(struct drm_gem_uma_object *uma,
+ struct iosys_map *map);
+void drm_gem_uma_vunmap_locked(struct drm_gem_uma_object *uma,
+ struct iosys_map *map);
+int drm_gem_uma_mmap(struct drm_gem_uma_object *uma, struct vm_area_struct *vma);
+
+int drm_gem_uma_pin_locked(struct drm_gem_uma_object *uma);
+void drm_gem_uma_unpin_locked(struct drm_gem_uma_object *uma);
+
+int drm_gem_uma_madvise_locked(struct drm_gem_uma_object *uma, int madv);
+
+static inline bool drm_gem_uma_is_purgeable(struct drm_gem_uma_object *uma)
+{
+ return (uma->madv > 0) &&
+ !refcount_read(&uma->pages_pin_count) && uma->sgt &&
+ !uma->base.dma_buf && !drm_gem_is_imported(&uma->base);
+}
+
+void drm_gem_uma_purge_locked(struct drm_gem_uma_object *uma);
+
+struct sg_table *drm_gem_uma_get_sg_table(struct drm_gem_uma_object *uma);
+struct sg_table *drm_gem_uma_get_pages_sgt(struct drm_gem_uma_object *uma);
+
+void drm_gem_uma_print_info(const struct drm_gem_uma_object *uma,
+ struct drm_printer *p, unsigned int indent);
+
+extern const struct vm_operations_struct drm_gem_uma_vm_ops;
+
+/*
+ * GEM object functions
+ */
+
+/**
+ * drm_gem_uma_object_free - GEM object function for drm_gem_uma_free()
+ * @obj: GEM object to free
+ *
+ * This function wraps drm_gem_uma_free(). Drivers that employ the GEM UMA
+ * helpers should use it as their &drm_gem_object_funcs.free handler.
+ */
+static inline void drm_gem_uma_object_free(struct drm_gem_object *obj)
+{
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ drm_gem_uma_free(uma);
+}
+
+/**
+ * drm_gem_uma_object_print_info() - Print &drm_gem_uma_object info for debugfs
+ * @p: DRM printer
+ * @indent: Tab indentation level
+ * @obj: GEM object
+ *
+ * This function wraps drm_gem_uma_print_info(). Drivers that employ the GEM
+ * UMA helpers should use this function as their &drm_gem_object_funcs.print_info
+ * handler.
+ */
+static inline void drm_gem_uma_object_print_info(struct drm_printer *p, unsigned int indent,
+ const struct drm_gem_object *obj)
+{
+ const struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ drm_gem_uma_print_info(uma, p, indent);
+}
+
+/**
+ * drm_gem_uma_object_pin - GEM object function for drm_gem_uma_pin()
+ * @obj: GEM object
+ *
+ * This function wraps drm_gem_uma_pin_locked(). Drivers that employ the GEM UMA
+ * helpers should use it as their &drm_gem_object_funcs.pin handler.
+ */
+static inline int drm_gem_uma_object_pin(struct drm_gem_object *obj)
+{
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ return drm_gem_uma_pin_locked(uma);
+}
+
+/**
+ * drm_gem_uma_object_unpin - GEM object function for drm_gem_uma_unpin()
+ * @obj: GEM object
+ *
+ * This function wraps drm_gem_uma_unpin_locked(). Drivers that employ the GEM UMA
+ * helpers should use it as their &drm_gem_object_funcs.unpin handler.
+ */
+static inline void drm_gem_uma_object_unpin(struct drm_gem_object *obj)
+{
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ drm_gem_uma_unpin_locked(uma);
+}
+
+/**
+ * drm_gem_uma_object_get_sg_table - GEM object function for drm_gem_uma_get_sg_table()
+ * @obj: GEM object
+ *
+ * This function wraps drm_gem_uma_get_sg_table(). Drivers that employ the
+ * GEM UMA helpers should use it as their &drm_gem_object_funcs.get_sg_table
+ * handler.
+ *
+ * Returns:
+ * A pointer to the scatter/gather table of pinned pages, or an error pointer on failure.
+ */
+static inline struct sg_table *drm_gem_uma_object_get_sg_table(struct drm_gem_object *obj)
+{
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ return drm_gem_uma_get_sg_table(uma);
+}
+
+/**
+ * drm_gem_uma_object_vmap - GEM object function for drm_gem_uma_vmap_locked()
+ * @obj: GEM object
+ * @map: Returns the kernel virtual address of the GEM UMA object's backing store.
+ *
+ * This function wraps drm_gem_uma_vmap_locked(). Drivers that employ the
+ * GEM UMA helpers should use it as their &drm_gem_object_funcs.vmap handler.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+static inline int drm_gem_uma_object_vmap(struct drm_gem_object *obj,
+ struct iosys_map *map)
+{
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ return drm_gem_uma_vmap_locked(uma, map);
+}
+
+/**
+ * drm_gem_uma_object_vunmap - GEM object function for drm_gem_uma_vunmap_locked()
+ * @obj: GEM object
+ * @map: Kernel virtual address where the GEM UMA object was mapped
+ *
+ * This function wraps drm_gem_uma_vunmap_locked(). Drivers that employ
+ * the GEM UMA helpers should use it as their &drm_gem_object_funcs.vunmap
+ * handler.
+ */
+static inline void drm_gem_uma_object_vunmap(struct drm_gem_object *obj,
+ struct iosys_map *map)
+{
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ drm_gem_uma_vunmap_locked(uma, map);
+}
+
+/**
+ * drm_gem_uma_object_mmap - GEM object function for drm_gem_uma_mmap()
+ * @obj: GEM object
+ * @vma: VMA for the area to be mapped
+ *
+ * This function wraps drm_gem_uma_mmap(). Drivers that employ the GEM UMA
+ * helpers should use it as their &drm_gem_object_funcs.mmap handler.
+ *
+ * Returns:
+ * 0 on success or a negative error code on failure.
+ */
+static inline int drm_gem_uma_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+{
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
+
+ return drm_gem_uma_mmap(uma, vma);
+}
+
+/*
+ * Driver ops
+ */
+
+struct drm_gem_object *
+drm_gem_uma_prime_import_sg_table(struct drm_device *dev,
+ struct dma_buf_attachment *attach,
+ struct sg_table *sgt);
+int drm_gem_uma_dumb_create(struct drm_file *file, struct drm_device *dev,
+ struct drm_mode_create_dumb *args);
+struct drm_gem_object *drm_gem_uma_prime_import_no_map(struct drm_device *dev,
+ struct dma_buf *buf);
+
+/**
+ * DRM_GEM_UMA_DRIVER_OPS - Default GEM UMA operations
+ *
+ * This macro provides a shortcut for setting the GEM UMA operations
+ * in the &drm_driver structure. Drivers that do not require an s/g table
+ * for imported buffers should use this.
+ */
+#define DRM_GEM_UMA_DRIVER_OPS \
+ .gem_prime_import = drm_gem_uma_prime_import_no_map, \
+ .dumb_create = drm_gem_uma_dumb_create
+
+#endif /* __DRM_GEM_UMA_HELPER_H__ */
--
2.52.0
* [PATCH 04/13] drm/gem-uma: Remove unused interfaces
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (2 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 03/13] drm: Add GEM-UMA helpers for memory management Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 05/13] drm/imagination: Use GEM-UMA helpers for memory management Thomas Zimmermann
` (9 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Copying from GEM-SHMEM duplicated a number of interfaces that are not
required. Remove them.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/drm_gem_uma_helper.c | 111 ---------------------------
include/drm/drm_gem_uma_helper.h | 16 ----
2 files changed, 127 deletions(-)
diff --git a/drivers/gpu/drm/drm_gem_uma_helper.c b/drivers/gpu/drm/drm_gem_uma_helper.c
index d617cfd981e1..2b88e21b1db9 100644
--- a/drivers/gpu/drm/drm_gem_uma_helper.c
+++ b/drivers/gpu/drm/drm_gem_uma_helper.c
@@ -18,7 +18,6 @@
#include <drm/drm.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
-#include <drm/drm_dumb_buffers.h>
#include <drm/drm_gem_uma_helper.h>
#include <drm/drm_prime.h>
#include <drm/drm_print.h>
@@ -466,29 +465,6 @@ void drm_gem_uma_vunmap_locked(struct drm_gem_uma_object *uma, struct iosys_map
}
EXPORT_SYMBOL_GPL(drm_gem_uma_vunmap_locked);
-static int
-drm_gem_uma_create_with_handle(struct drm_file *file_priv,
- struct drm_device *dev, size_t size,
- uint32_t *handle)
-{
- struct drm_gem_uma_object *uma;
- int ret;
-
- uma = drm_gem_uma_create(dev, size);
- if (IS_ERR(uma))
- return PTR_ERR(uma);
-
- /*
- * Allocate an id of idr table where the obj is registered
- * and handle has the id what user can see.
- */
- ret = drm_gem_handle_create(file_priv, &uma->base, handle);
- /* drop reference from allocate - handle holds it now. */
- drm_gem_object_put(&uma->base);
-
- return ret;
-}
-
/* Update madvise status, returns true if not purged, else
* false or -errno.
*/
@@ -537,36 +513,6 @@ void drm_gem_uma_purge_locked(struct drm_gem_uma_object *uma)
}
EXPORT_SYMBOL_GPL(drm_gem_uma_purge_locked);
-/**
- * drm_gem_uma_dumb_create - Create a dumb UMA buffer object
- * @file: DRM file structure to create the dumb buffer for
- * @dev: DRM device
- * @args: IOCTL data
- *
- * This function computes the pitch of the dumb buffer and rounds it up to an
- * integer number of bytes per pixel. Drivers for hardware that doesn't have
- * any additional restrictions on the pitch can directly use this function as
- * their &drm_driver.dumb_create callback.
- *
- * For hardware with additional restrictions, drivers can adjust the fields
- * set up by userspace before calling into this function.
- *
- * Returns:
- * 0 on success or a negative error code on failure.
- */
-int drm_gem_uma_dumb_create(struct drm_file *file, struct drm_device *dev,
- struct drm_mode_create_dumb *args)
-{
- int ret;
-
- ret = drm_mode_size_dumb(dev, args, SZ_8, 0);
- if (ret)
- return ret;
-
- return drm_gem_uma_create_with_handle(file, dev, args->size, &args->handle);
-}
-EXPORT_SYMBOL_GPL(drm_gem_uma_dumb_create);
-
static vm_fault_t drm_gem_uma_fault(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
@@ -836,63 +782,6 @@ drm_gem_uma_prime_import_sg_table(struct drm_device *dev,
}
EXPORT_SYMBOL_GPL(drm_gem_uma_prime_import_sg_table);
-/**
- * drm_gem_uma_prime_import_no_map - Import dmabuf without mapping its sg_table
- * @dev: Device to import into
- * @dma_buf: dma-buf object to import
- *
- * Drivers that use the GEM UMA helpers but also wants to import dmabuf without
- * mapping its sg_table can use this as their &drm_driver.gem_prime_import
- * implementation.
- */
-struct drm_gem_object *drm_gem_uma_prime_import_no_map(struct drm_device *dev,
- struct dma_buf *dma_buf)
-{
- struct dma_buf_attachment *attach;
- struct drm_gem_uma_object *uma;
- struct drm_gem_object *obj;
- size_t size;
- int ret;
-
- if (drm_gem_is_prime_exported_dma_buf(dev, dma_buf)) {
- /*
- * Importing dmabuf exported from our own gem increases
- * refcount on gem itself instead of f_count of dmabuf.
- */
- obj = dma_buf->priv;
- drm_gem_object_get(obj);
- return obj;
- }
-
- attach = dma_buf_attach(dma_buf, dev->dev);
- if (IS_ERR(attach))
- return ERR_CAST(attach);
-
- get_dma_buf(dma_buf);
-
- size = PAGE_ALIGN(attach->dmabuf->size);
-
- uma = __drm_gem_uma_create(dev, size, true, NULL);
- if (IS_ERR(uma)) {
- ret = PTR_ERR(uma);
- goto fail_detach;
- }
-
- drm_dbg_prime(dev, "size = %zu\n", size);
-
- uma->base.import_attach = attach;
- uma->base.resv = dma_buf->resv;
-
- return &uma->base;
-
-fail_detach:
- dma_buf_detach(dma_buf, attach);
- dma_buf_put(dma_buf);
-
- return ERR_PTR(ret);
-}
-EXPORT_SYMBOL_GPL(drm_gem_uma_prime_import_no_map);
-
MODULE_DESCRIPTION("DRM UMA memory-management helpers");
MODULE_IMPORT_NS("DMA_BUF");
MODULE_LICENSE("GPL");
diff --git a/include/drm/drm_gem_uma_helper.h b/include/drm/drm_gem_uma_helper.h
index e7722c625fab..3d6de27efe79 100644
--- a/include/drm/drm_gem_uma_helper.h
+++ b/include/drm/drm_gem_uma_helper.h
@@ -13,7 +13,6 @@
#include <drm/drm_prime.h>
struct dma_buf_attachment;
-struct drm_mode_create_dumb;
struct drm_printer;
struct sg_table;
@@ -290,20 +289,5 @@ struct drm_gem_object *
drm_gem_uma_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);
-int drm_gem_uma_dumb_create(struct drm_file *file, struct drm_device *dev,
- struct drm_mode_create_dumb *args);
-struct drm_gem_object *drm_gem_uma_prime_import_no_map(struct drm_device *dev,
- struct dma_buf *buf);
-
-/**
- * DRM_GEM_UMA_DRIVER_OPS - Default GEM UMA operations
- *
- * This macro provides a shortcut for setting the GEM UMA operations
- * in the &drm_driver structure. Drivers that do not require an s/g table
- * for imported buffers should use this.
- */
-#define DRM_GEM_UMA_DRIVER_OPS \
- .gem_prime_import = drm_gem_uma_prime_import_no_map, \
- .dumb_create = drm_gem_uma_dumb_create
#endif /* __DRM_GEM_UMA_HELPER_H__ */
--
2.52.0
* [PATCH 05/13] drm/imagination: Use GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (3 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 04/13] drm/gem-uma: Remove unused interfaces Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 06/13] drm/lima: " Thomas Zimmermann
` (8 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Convert imagination from GEM-SHMEM to GEM-UMA. The latter is just a
copy, so this change merely renames symbols. No functional changes.
GEM-SHMEM will become more self-contained for drivers without specific
memory management. GEM-UMA's interfaces will remain flexible for drivers
with UMA hardware, such as imagination.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/imagination/Kconfig | 4 +-
drivers/gpu/drm/imagination/pvr_drv.c | 2 +-
drivers/gpu/drm/imagination/pvr_free_list.c | 2 +-
drivers/gpu/drm/imagination/pvr_gem.c | 74 ++++++++++-----------
drivers/gpu/drm/imagination/pvr_gem.h | 12 ++--
5 files changed, 47 insertions(+), 47 deletions(-)
diff --git a/drivers/gpu/drm/imagination/Kconfig b/drivers/gpu/drm/imagination/Kconfig
index 0482bfcefdde..ee796c9cfdf2 100644
--- a/drivers/gpu/drm/imagination/Kconfig
+++ b/drivers/gpu/drm/imagination/Kconfig
@@ -9,9 +9,9 @@ config DRM_POWERVR
depends on PM
depends on POWER_SEQUENCING || !POWER_SEQUENCING
select DRM_EXEC
- select DRM_GEM_SHMEM_HELPER
- select DRM_SCHED
+ select DRM_GEM_UMA_HELPER
select DRM_GPUVM
+ select DRM_SCHED
select FW_LOADER
help
Choose this option if you have a system that has an Imagination
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 916b40ced7eb..61bcbbef208c 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -1392,7 +1392,7 @@ static struct drm_driver pvr_drm_driver = {
.minor = PVR_DRIVER_MINOR,
.patchlevel = PVR_DRIVER_PATCHLEVEL,
- .gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table,
+ .gem_prime_import_sg_table = drm_gem_uma_prime_import_sg_table,
.gem_create_object = pvr_gem_create_object,
};
diff --git a/drivers/gpu/drm/imagination/pvr_free_list.c b/drivers/gpu/drm/imagination/pvr_free_list.c
index 5228e214491c..5b43f7ca2a6c 100644
--- a/drivers/gpu/drm/imagination/pvr_free_list.c
+++ b/drivers/gpu/drm/imagination/pvr_free_list.c
@@ -281,7 +281,7 @@ pvr_free_list_insert_node_locked(struct pvr_free_list_node *free_list_node)
offset = (start_page * FREE_LIST_ENTRY_SIZE) &
~((u64)ROGUE_BIF_PM_FREELIST_BASE_ADDR_ALIGNSIZE - 1);
- sgt = drm_gem_shmem_get_pages_sgt(&free_list_node->mem_obj->base);
+ sgt = drm_gem_uma_get_pages_sgt(&free_list_node->mem_obj->base);
if (WARN_ON(IS_ERR(sgt)))
return PTR_ERR(sgt);
diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
index a66cf082af24..f29c9808f4e2 100644
--- a/drivers/gpu/drm/imagination/pvr_gem.c
+++ b/drivers/gpu/drm/imagination/pvr_gem.c
@@ -25,30 +25,30 @@
static void pvr_gem_object_free(struct drm_gem_object *obj)
{
- drm_gem_shmem_object_free(obj);
+ drm_gem_uma_object_free(obj);
}
static int pvr_gem_mmap(struct drm_gem_object *gem_obj, struct vm_area_struct *vma)
{
struct pvr_gem_object *pvr_obj = gem_to_pvr_gem(gem_obj);
- struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+ struct drm_gem_uma_object *uma_obj = uma_gem_from_pvr_gem(pvr_obj);
if (!(pvr_obj->flags & DRM_PVR_BO_ALLOW_CPU_USERSPACE_ACCESS))
return -EINVAL;
- return drm_gem_shmem_mmap(shmem_obj, vma);
+ return drm_gem_uma_mmap(uma_obj, vma);
}
static const struct drm_gem_object_funcs pvr_gem_object_funcs = {
.free = pvr_gem_object_free,
- .print_info = drm_gem_shmem_object_print_info,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
+ .print_info = drm_gem_uma_object_print_info,
+ .pin = drm_gem_uma_object_pin,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
+ .vmap = drm_gem_uma_object_vmap,
+ .vunmap = drm_gem_uma_object_vunmap,
.mmap = pvr_gem_mmap,
- .vm_ops = &drm_gem_shmem_vm_ops,
+ .vm_ops = &drm_gem_uma_vm_ops,
};
/**
@@ -195,25 +195,25 @@ pvr_gem_object_from_handle(struct pvr_file *pvr_file, u32 handle)
void *
pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj)
{
- struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+ struct drm_gem_uma_object *uma_obj = uma_gem_from_pvr_gem(pvr_obj);
struct drm_gem_object *obj = gem_from_pvr_gem(pvr_obj);
struct iosys_map map;
int err;
dma_resv_lock(obj->resv, NULL);
- err = drm_gem_shmem_vmap_locked(shmem_obj, &map);
+ err = drm_gem_uma_vmap_locked(uma_obj, &map);
if (err)
goto err_unlock;
if (pvr_obj->flags & PVR_BO_CPU_CACHED) {
- struct device *dev = shmem_obj->base.dev->dev;
+ struct device *dev = uma_obj->base.dev->dev;
- /* If shmem_obj->sgt is NULL, that means the buffer hasn't been mapped
+ /* If uma_obj->sgt is NULL, that means the buffer hasn't been mapped
* in GPU space yet.
*/
- if (shmem_obj->sgt)
- dma_sync_sgtable_for_cpu(dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
+ if (uma_obj->sgt)
+ dma_sync_sgtable_for_cpu(dev, uma_obj->sgt, DMA_BIDIRECTIONAL);
}
dma_resv_unlock(obj->resv);
@@ -237,8 +237,8 @@ pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj)
void
pvr_gem_object_vunmap(struct pvr_gem_object *pvr_obj)
{
- struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
- struct iosys_map map = IOSYS_MAP_INIT_VADDR(shmem_obj->vaddr);
+ struct drm_gem_uma_object *uma_obj = uma_gem_from_pvr_gem(pvr_obj);
+ struct iosys_map map = IOSYS_MAP_INIT_VADDR(uma_obj->vaddr);
struct drm_gem_object *obj = gem_from_pvr_gem(pvr_obj);
if (WARN_ON(!map.vaddr))
@@ -247,16 +247,16 @@ pvr_gem_object_vunmap(struct pvr_gem_object *pvr_obj)
dma_resv_lock(obj->resv, NULL);
if (pvr_obj->flags & PVR_BO_CPU_CACHED) {
- struct device *dev = shmem_obj->base.dev->dev;
+ struct device *dev = uma_obj->base.dev->dev;
- /* If shmem_obj->sgt is NULL, that means the buffer hasn't been mapped
+ /* If uma_obj->sgt is NULL, that means the buffer hasn't been mapped
* in GPU space yet.
*/
- if (shmem_obj->sgt)
- dma_sync_sgtable_for_device(dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
+ if (uma_obj->sgt)
+ dma_sync_sgtable_for_device(dev, uma_obj->sgt, DMA_BIDIRECTIONAL);
}
- drm_gem_shmem_vunmap_locked(shmem_obj, &map);
+ drm_gem_uma_vunmap_locked(uma_obj, &map);
dma_resv_unlock(obj->resv);
}
@@ -336,7 +336,7 @@ struct pvr_gem_object *
pvr_gem_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags)
{
struct drm_device *drm_dev = from_pvr_device(pvr_dev);
- struct drm_gem_shmem_object *shmem_obj;
+ struct drm_gem_uma_object *uma_obj;
struct pvr_gem_object *pvr_obj;
struct sg_table *sgt;
int err;
@@ -348,19 +348,19 @@ pvr_gem_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags)
if (device_get_dma_attr(drm_dev->dev) == DEV_DMA_COHERENT)
flags |= PVR_BO_CPU_CACHED;
- shmem_obj = drm_gem_shmem_create(drm_dev, size);
- if (IS_ERR(shmem_obj))
- return ERR_CAST(shmem_obj);
+ uma_obj = drm_gem_uma_create(drm_dev, size);
+ if (IS_ERR(uma_obj))
+ return ERR_CAST(uma_obj);
- shmem_obj->pages_mark_dirty_on_put = true;
- shmem_obj->map_wc = !(flags & PVR_BO_CPU_CACHED);
- pvr_obj = shmem_gem_to_pvr_gem(shmem_obj);
+ uma_obj->pages_mark_dirty_on_put = true;
+ uma_obj->map_wc = !(flags & PVR_BO_CPU_CACHED);
+ pvr_obj = uma_gem_to_pvr_gem(uma_obj);
pvr_obj->flags = flags;
- sgt = drm_gem_shmem_get_pages_sgt(shmem_obj);
+ sgt = drm_gem_uma_get_pages_sgt(uma_obj);
if (IS_ERR(sgt)) {
err = PTR_ERR(sgt);
- goto err_shmem_object_free;
+ goto err_uma_object_free;
}
dma_sync_sgtable_for_device(drm_dev->dev, sgt, DMA_BIDIRECTIONAL);
@@ -373,8 +373,8 @@ pvr_gem_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags)
return pvr_obj;
-err_shmem_object_free:
- drm_gem_shmem_free(shmem_obj);
+err_uma_object_free:
+ drm_gem_uma_free(uma_obj);
return ERR_PTR(err);
}
@@ -394,13 +394,13 @@ int
pvr_gem_get_dma_addr(struct pvr_gem_object *pvr_obj, u32 offset,
dma_addr_t *dma_addr_out)
{
- struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+ struct drm_gem_uma_object *uma_obj = uma_gem_from_pvr_gem(pvr_obj);
u32 accumulated_offset = 0;
struct scatterlist *sgl;
unsigned int sgt_idx;
- WARN_ON(!shmem_obj->sgt);
- for_each_sgtable_dma_sg(shmem_obj->sgt, sgl, sgt_idx) {
+ WARN_ON(!uma_obj->sgt);
+ for_each_sgtable_dma_sg(uma_obj->sgt, sgl, sgt_idx) {
u32 new_offset = accumulated_offset + sg_dma_len(sgl);
if (offset >= accumulated_offset && offset < new_offset) {
diff --git a/drivers/gpu/drm/imagination/pvr_gem.h b/drivers/gpu/drm/imagination/pvr_gem.h
index c99f30cc6208..59223876b3f7 100644
--- a/drivers/gpu/drm/imagination/pvr_gem.h
+++ b/drivers/gpu/drm/imagination/pvr_gem.h
@@ -10,7 +10,7 @@
#include <uapi/drm/pvr_drm.h>
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
#include <drm/drm_mm.h>
#include <linux/bitfield.h>
@@ -82,12 +82,12 @@ struct pvr_file;
*/
struct pvr_gem_object {
/**
- * @base: The underlying &struct drm_gem_shmem_object.
+ * @base: The underlying &struct drm_gem_uma_object.
*
* Do not access this member directly, instead call
* shem_gem_from_pvr_gem().
*/
- struct drm_gem_shmem_object base;
+ struct drm_gem_uma_object base;
/**
* @flags: Options set at creation-time. Some of these options apply to
@@ -111,9 +111,9 @@ struct pvr_gem_object {
static_assert(offsetof(struct pvr_gem_object, base) == 0,
"offsetof(struct pvr_gem_object, base) not zero");
-#define shmem_gem_from_pvr_gem(pvr_obj) (&(pvr_obj)->base)
+#define uma_gem_from_pvr_gem(pvr_obj) (&(pvr_obj)->base)
-#define shmem_gem_to_pvr_gem(shmem_obj) container_of_const(shmem_obj, struct pvr_gem_object, base)
+#define uma_gem_to_pvr_gem(uma_obj) container_of_const(uma_obj, struct pvr_gem_object, base)
#define gem_from_pvr_gem(pvr_obj) (&(pvr_obj)->base.base)
@@ -134,7 +134,7 @@ struct pvr_gem_object *pvr_gem_object_from_handle(struct pvr_file *pvr_file,
static __always_inline struct sg_table *
pvr_gem_object_get_pages_sgt(struct pvr_gem_object *pvr_obj)
{
- return drm_gem_shmem_get_pages_sgt(shmem_gem_from_pvr_gem(pvr_obj));
+ return drm_gem_uma_get_pages_sgt(uma_gem_from_pvr_gem(pvr_obj));
}
void *pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj);
--
2.52.0
* [PATCH 06/13] drm/lima: Use GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (4 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 05/13] drm/imagination: Use GEM-UMA helpers for memory management Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 07/13] drm/panfrost: " Thomas Zimmermann
` (7 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Convert lima from GEM-SHMEM to GEM-UMA. The latter is just a copy,
so this change merely renames symbols. No functional changes.
GEM-SHMEM will become more self-contained for drivers without specific
memory management. GEM-UMA's interfaces will remain flexible for drivers
with UMA hardware, such as lima.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/lima/Kconfig | 4 ++--
drivers/gpu/drm/lima/lima_drv.c | 2 +-
drivers/gpu/drm/lima/lima_gem.c | 30 +++++++++++++++---------------
drivers/gpu/drm/lima/lima_gem.h | 6 +++---
4 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/lima/Kconfig b/drivers/gpu/drm/lima/Kconfig
index fa1d4f5df31e..ad854acef58d 100644
--- a/drivers/gpu/drm/lima/Kconfig
+++ b/drivers/gpu/drm/lima/Kconfig
@@ -8,9 +8,9 @@ config DRM_LIMA
depends on MMU
depends on COMMON_CLK
depends on OF
+ select DEVFREQ_GOV_SIMPLE_ONDEMAND
+ select DRM_GEM_UMA_HELPER
select DRM_SCHED
- select DRM_GEM_SHMEM_HELPER
select PM_DEVFREQ
- select DEVFREQ_GOV_SIMPLE_ONDEMAND
help
DRM driver for ARM Mali 400/450 GPUs.
diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c
index 65210ab081bb..9d57d2e002f9 100644
--- a/drivers/gpu/drm/lima/lima_drv.c
+++ b/drivers/gpu/drm/lima/lima_drv.c
@@ -276,7 +276,7 @@ static const struct drm_driver lima_drm_driver = {
.patchlevel = 0,
.gem_create_object = lima_gem_create_object,
- .gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table,
+ .gem_prime_import_sg_table = drm_gem_uma_prime_import_sg_table,
};
struct lima_block_reader {
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 9722b847a539..d6f00dde22ac 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -110,16 +110,16 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
{
int err;
gfp_t mask;
- struct drm_gem_shmem_object *shmem;
+ struct drm_gem_uma_object *uma;
struct drm_gem_object *obj;
struct lima_bo *bo;
bool is_heap = flags & LIMA_BO_FLAG_HEAP;
- shmem = drm_gem_shmem_create(dev, size);
- if (IS_ERR(shmem))
- return PTR_ERR(shmem);
+ uma = drm_gem_uma_create(dev, size);
+ if (IS_ERR(uma))
+ return PTR_ERR(uma);
- obj = &shmem->base;
+ obj = &uma->base;
/* Mali Utgard GPU can only support 32bit address space */
mask = mapping_gfp_mask(obj->filp->f_mapping);
@@ -133,7 +133,7 @@ int lima_gem_create_handle(struct drm_device *dev, struct drm_file *file,
if (err)
goto out;
} else {
- struct sg_table *sgt = drm_gem_shmem_get_pages_sgt(shmem);
+ struct sg_table *sgt = drm_gem_uma_get_pages_sgt(uma);
if (IS_ERR(sgt)) {
err = PTR_ERR(sgt);
@@ -157,7 +157,7 @@ static void lima_gem_free_object(struct drm_gem_object *obj)
if (!list_empty(&bo->va))
dev_err(obj->dev->dev, "lima gem free bo still has va\n");
- drm_gem_shmem_free(&bo->base);
+ drm_gem_uma_free(&bo->base);
}
static int lima_gem_object_open(struct drm_gem_object *obj, struct drm_file *file)
@@ -185,7 +185,7 @@ static int lima_gem_pin(struct drm_gem_object *obj)
if (bo->heap_size)
return -EINVAL;
- return drm_gem_shmem_pin_locked(&bo->base);
+ return drm_gem_uma_pin_locked(&bo->base);
}
static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
@@ -195,7 +195,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
if (bo->heap_size)
return -EINVAL;
- return drm_gem_shmem_vmap_locked(&bo->base, map);
+ return drm_gem_uma_vmap_locked(&bo->base, map);
}
static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
@@ -205,21 +205,21 @@ static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
if (bo->heap_size)
return -EINVAL;
- return drm_gem_shmem_mmap(&bo->base, vma);
+ return drm_gem_uma_mmap(&bo->base, vma);
}
static const struct drm_gem_object_funcs lima_gem_funcs = {
.free = lima_gem_free_object,
.open = lima_gem_object_open,
.close = lima_gem_object_close,
- .print_info = drm_gem_shmem_object_print_info,
+ .print_info = drm_gem_uma_object_print_info,
.pin = lima_gem_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
.vmap = lima_gem_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
+ .vunmap = drm_gem_uma_object_vunmap,
.mmap = lima_gem_mmap,
- .vm_ops = &drm_gem_shmem_vm_ops,
+ .vm_ops = &drm_gem_uma_vm_ops,
};
struct drm_gem_object *lima_gem_create_object(struct drm_device *dev, size_t size)
diff --git a/drivers/gpu/drm/lima/lima_gem.h b/drivers/gpu/drm/lima/lima_gem.h
index ccea06142f4b..6ad19fda3480 100644
--- a/drivers/gpu/drm/lima/lima_gem.h
+++ b/drivers/gpu/drm/lima/lima_gem.h
@@ -4,13 +4,13 @@
#ifndef __LIMA_GEM_H__
#define __LIMA_GEM_H__
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
struct lima_submit;
struct lima_vm;
struct lima_bo {
- struct drm_gem_shmem_object base;
+ struct drm_gem_uma_object base;
struct mutex lock;
struct list_head va;
@@ -21,7 +21,7 @@ struct lima_bo {
static inline struct lima_bo *
to_lima_bo(struct drm_gem_object *obj)
{
- return container_of(to_drm_gem_shmem_obj(obj), struct lima_bo, base);
+ return container_of(to_drm_gem_uma_obj(obj), struct lima_bo, base);
}
static inline size_t lima_bo_size(struct lima_bo *bo)
--
2.52.0
* [PATCH 07/13] drm/panfrost: Use GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (5 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 06/13] drm/lima: " Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 08/13] drm/panthor: " Thomas Zimmermann
` (6 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Convert panfrost from GEM-SHMEM to GEM-UMA. The latter is just a copy,
so this change merely renames symbols. No functional changes.
GEM-SHMEM will become more self-contained for drivers without specific
memory management. GEM-UMA's interfaces will remain flexible for drivers
with UMA hardware, such as panfrost.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/panfrost/Kconfig | 2 +-
drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +-
drivers/gpu/drm/panfrost/panfrost_gem.c | 30 +++++++++----------
drivers/gpu/drm/panfrost/panfrost_gem.h | 6 ++--
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 30 +++++++++----------
drivers/gpu/drm/panfrost/panfrost_mmu.c | 8 ++---
drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 ++--
7 files changed, 42 insertions(+), 42 deletions(-)
diff --git a/drivers/gpu/drm/panfrost/Kconfig b/drivers/gpu/drm/panfrost/Kconfig
index e6403a9d66ad..3e44858df0f3 100644
--- a/drivers/gpu/drm/panfrost/Kconfig
+++ b/drivers/gpu/drm/panfrost/Kconfig
@@ -6,10 +6,10 @@ config DRM_PANFROST
depends on ARM || ARM64 || COMPILE_TEST
depends on !GENERIC_ATOMIC64 # for IOMMU_IO_PGTABLE_LPAE
depends on MMU
+ select DRM_GEM_UMA_HELPER
select DRM_SCHED
select IOMMU_SUPPORT
select IOMMU_IO_PGTABLE_LPAE
- select DRM_GEM_SHMEM_HELPER
select PM_DEVFREQ
select DEVFREQ_GOV_SIMPLE_ONDEMAND
select WANT_DEV_COREDUMP
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 7d8c7c337606..fe38aa354bcc 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -499,7 +499,7 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, void *data,
}
}
- args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv);
+ args->retained = drm_gem_uma_madvise_locked(&bo->base, args->madv);
if (args->retained) {
if (args->madv == PANFROST_MADV_DONTNEED)
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 8041b65c6609..17a5218c9aee 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -85,7 +85,7 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
kvfree(bo->sgts);
}
- drm_gem_shmem_free(&bo->base);
+ drm_gem_uma_free(&bo->base);
}
struct panfrost_gem_mapping *
@@ -228,7 +228,7 @@ static int panfrost_gem_pin(struct drm_gem_object *obj)
if (bo->is_heap)
return -EINVAL;
- return drm_gem_shmem_pin_locked(&bo->base);
+ return drm_gem_uma_pin_locked(&bo->base);
}
static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj)
@@ -263,16 +263,16 @@ static const struct drm_gem_object_funcs panfrost_gem_funcs = {
.free = panfrost_gem_free_object,
.open = panfrost_gem_open,
.close = panfrost_gem_close,
- .print_info = drm_gem_shmem_object_print_info,
+ .print_info = drm_gem_uma_object_print_info,
.pin = panfrost_gem_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
+ .vmap = drm_gem_uma_object_vmap,
+ .vunmap = drm_gem_uma_object_vunmap,
+ .mmap = drm_gem_uma_object_mmap,
.status = panfrost_gem_status,
.rss = panfrost_gem_rss,
- .vm_ops = &drm_gem_shmem_vm_ops,
+ .vm_ops = &drm_gem_uma_vm_ops,
};
/**
@@ -306,18 +306,18 @@ struct drm_gem_object *panfrost_gem_create_object(struct drm_device *dev, size_t
struct panfrost_gem_object *
panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags)
{
- struct drm_gem_shmem_object *shmem;
+ struct drm_gem_uma_object *uma;
struct panfrost_gem_object *bo;
/* Round up heap allocations to 2MB to keep fault handling simple */
if (flags & PANFROST_BO_HEAP)
size = roundup(size, SZ_2M);
- shmem = drm_gem_shmem_create(dev, size);
- if (IS_ERR(shmem))
- return ERR_CAST(shmem);
+ uma = drm_gem_uma_create(dev, size);
+ if (IS_ERR(uma))
+ return ERR_CAST(uma);
- bo = to_panfrost_bo(&shmem->base);
+ bo = to_panfrost_bo(&uma->base);
bo->noexec = !!(flags & PANFROST_BO_NOEXEC);
bo->is_heap = !!(flags & PANFROST_BO_HEAP);
@@ -332,7 +332,7 @@ panfrost_gem_prime_import_sg_table(struct drm_device *dev,
struct drm_gem_object *obj;
struct panfrost_gem_object *bo;
- obj = drm_gem_shmem_prime_import_sg_table(dev, attach, sgt);
+ obj = drm_gem_uma_prime_import_sg_table(dev, attach, sgt);
if (IS_ERR(obj))
return ERR_CAST(obj);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 8de3e76f2717..01a796a54340 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -4,7 +4,7 @@
#ifndef __PANFROST_GEM_H__
#define __PANFROST_GEM_H__
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
#include <drm/drm_mm.h>
struct panfrost_mmu;
@@ -50,7 +50,7 @@ struct panfrost_gem_debugfs {
};
struct panfrost_gem_object {
- struct drm_gem_shmem_object base;
+ struct drm_gem_uma_object base;
struct sg_table *sgts;
/*
@@ -115,7 +115,7 @@ struct panfrost_gem_mapping {
static inline
struct panfrost_gem_object *to_panfrost_bo(struct drm_gem_object *obj)
{
- return container_of(to_drm_gem_shmem_obj(obj), struct panfrost_gem_object, base);
+ return container_of(to_drm_gem_uma_obj(obj), struct panfrost_gem_object, base);
}
static inline struct panfrost_gem_mapping *
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
index 2fe967a90bcb..b55accfcbe0d 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
@@ -9,7 +9,7 @@
#include <linux/list.h>
#include <drm/drm_device.h>
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
#include "panfrost_device.h"
#include "panfrost_gem.h"
@@ -19,15 +19,15 @@ static unsigned long
panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
struct panfrost_device *pfdev = shrinker->private_data;
- struct drm_gem_shmem_object *shmem;
+ struct drm_gem_uma_object *uma;
unsigned long count = 0;
if (!mutex_trylock(&pfdev->shrinker_lock))
return 0;
- list_for_each_entry(shmem, &pfdev->shrinker_list, madv_list) {
- if (drm_gem_shmem_is_purgeable(shmem))
- count += shmem->base.size >> PAGE_SHIFT;
+ list_for_each_entry(uma, &pfdev->shrinker_list, madv_list) {
+ if (drm_gem_uma_is_purgeable(uma))
+ count += uma->base.size >> PAGE_SHIFT;
}
mutex_unlock(&pfdev->shrinker_lock);
@@ -37,7 +37,7 @@ panfrost_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc
static bool panfrost_gem_purge(struct drm_gem_object *obj)
{
- struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+ struct drm_gem_uma_object *uma = to_drm_gem_uma_obj(obj);
struct panfrost_gem_object *bo = to_panfrost_bo(obj);
bool ret = false;
@@ -47,14 +47,14 @@ static bool panfrost_gem_purge(struct drm_gem_object *obj)
if (!mutex_trylock(&bo->mappings.lock))
return false;
- if (!dma_resv_trylock(shmem->base.resv))
+ if (!dma_resv_trylock(uma->base.resv))
goto unlock_mappings;
panfrost_gem_teardown_mappings_locked(bo);
- drm_gem_shmem_purge_locked(&bo->base);
+ drm_gem_uma_purge_locked(&bo->base);
ret = true;
- dma_resv_unlock(shmem->base.resv);
+ dma_resv_unlock(uma->base.resv);
unlock_mappings:
mutex_unlock(&bo->mappings.lock);
@@ -65,19 +65,19 @@ static unsigned long
panfrost_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
struct panfrost_device *pfdev = shrinker->private_data;
- struct drm_gem_shmem_object *shmem, *tmp;
+ struct drm_gem_uma_object *uma, *tmp;
unsigned long freed = 0;
if (!mutex_trylock(&pfdev->shrinker_lock))
return SHRINK_STOP;
- list_for_each_entry_safe(shmem, tmp, &pfdev->shrinker_list, madv_list) {
+ list_for_each_entry_safe(uma, tmp, &pfdev->shrinker_list, madv_list) {
if (freed >= sc->nr_to_scan)
break;
- if (drm_gem_shmem_is_purgeable(shmem) &&
- panfrost_gem_purge(&shmem->base)) {
- freed += shmem->base.size >> PAGE_SHIFT;
- list_del_init(&shmem->madv_list);
+ if (drm_gem_uma_is_purgeable(uma) &&
+ panfrost_gem_purge(&uma->base)) {
+ freed += uma->base.size >> PAGE_SHIFT;
+ list_del_init(&uma->madv_list);
}
}
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index 8f3b7a7b6ad0..a19bf238ba21 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -461,8 +461,8 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
{
struct panfrost_gem_object *bo = mapping->obj;
- struct drm_gem_shmem_object *shmem = &bo->base;
- struct drm_gem_object *obj = &shmem->base;
+ struct drm_gem_uma_object *uma = &bo->base;
+ struct drm_gem_object *obj = &uma->base;
struct panfrost_device *pfdev = to_panfrost_device(obj->dev);
struct sg_table *sgt;
int prot = IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE;
@@ -474,7 +474,7 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
if (bo->noexec)
prot |= IOMMU_NOEXEC;
- sgt = drm_gem_shmem_get_pages_sgt(shmem);
+ sgt = drm_gem_uma_get_pages_sgt(uma);
if (WARN_ON(IS_ERR(sgt)))
return PTR_ERR(sgt);
@@ -488,7 +488,7 @@ int panfrost_mmu_map(struct panfrost_gem_mapping *mapping)
return 0;
err_put_pages:
- drm_gem_shmem_put_pages_locked(shmem);
+ drm_gem_uma_put_pages_locked(uma);
return ret;
}
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index 7020c0192e18..874c23c7c024 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -9,7 +9,7 @@
#include <linux/uaccess.h>
#include <drm/drm_file.h>
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
#include <drm/panfrost_drm.h>
#include "panfrost_device.h"
@@ -75,7 +75,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
struct panfrost_file_priv *user = file_priv->driver_priv;
struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
struct iosys_map map;
- struct drm_gem_shmem_object *bo;
+ struct drm_gem_uma_object *bo;
u32 cfg, as;
int ret;
@@ -88,7 +88,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
if (ret < 0)
goto err_put_pm;
- bo = drm_gem_shmem_create(&pfdev->base, perfcnt->bosize);
+ bo = drm_gem_uma_create(&pfdev->base, perfcnt->bosize);
if (IS_ERR(bo)) {
ret = PTR_ERR(bo);
goto err_put_pm;
--
2.52.0
* [PATCH 08/13] drm/panthor: Use GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (6 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 07/13] drm/panfrost: " Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 09/13] drm/v3d: " Thomas Zimmermann
` (5 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Convert panthor from GEM-SHMEM to GEM-UMA. The latter is just a copy
of the former, so this change merely renames symbols. No functional
changes.
GEM-SHMEM will become more self-contained for drivers without specific
memory management. GEM-UMA's interfaces will remain flexible for drivers
with UMA hardware, such as panthor.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/panthor/Kconfig | 2 +-
drivers/gpu/drm/panthor/panthor_drv.c | 2 +-
drivers/gpu/drm/panthor/panthor_fw.c | 4 +--
drivers/gpu/drm/panthor/panthor_gem.c | 40 ++++++++++++-------------
drivers/gpu/drm/panthor/panthor_gem.h | 8 ++---
drivers/gpu/drm/panthor/panthor_mmu.c | 10 +++----
drivers/gpu/drm/panthor/panthor_sched.c | 1 -
7 files changed, 33 insertions(+), 34 deletions(-)
diff --git a/drivers/gpu/drm/panthor/Kconfig b/drivers/gpu/drm/panthor/Kconfig
index 55b40ad07f3b..dc87abc79917 100644
--- a/drivers/gpu/drm/panthor/Kconfig
+++ b/drivers/gpu/drm/panthor/Kconfig
@@ -8,7 +8,7 @@ config DRM_PANTHOR
depends on MMU
select DEVFREQ_GOV_SIMPLE_ONDEMAND
select DRM_EXEC
- select DRM_GEM_SHMEM_HELPER
+ select DRM_GEM_UMA_HELPER
select DRM_GPUVM
select DRM_SCHED
select IOMMU_IO_PGTABLE_LPAE
diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index d1d4c50da5bf..43d8791e1a47 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -1620,7 +1620,7 @@ static const struct drm_driver panthor_drm_driver = {
.minor = 5,
.gem_create_object = panthor_gem_create_object,
- .gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table,
+ .gem_prime_import_sg_table = drm_gem_uma_prime_import_sg_table,
#ifdef CONFIG_DEBUG_FS
.debugfs_init = panthor_debugfs_init,
#endif
diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c
index 38d87ab92eda..81cc6e2eee23 100644
--- a/drivers/gpu/drm/panthor/panthor_fw.c
+++ b/drivers/gpu/drm/panthor/panthor_fw.c
@@ -620,7 +620,7 @@ static int panthor_fw_load_section_entry(struct panthor_device *ptdev,
panthor_fw_init_section_mem(ptdev, section);
bo = to_panthor_bo(section->mem->obj);
- sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
+ sgt = drm_gem_uma_get_pages_sgt(&bo->base);
if (IS_ERR(sgt))
return PTR_ERR(sgt);
@@ -684,7 +684,7 @@ panthor_reload_fw_sections(struct panthor_device *ptdev, bool full_reload)
continue;
panthor_fw_init_section_mem(ptdev, section);
- sgt = drm_gem_shmem_get_pages_sgt(&to_panthor_bo(section->mem->obj)->base);
+ sgt = drm_gem_uma_get_pages_sgt(&to_panthor_bo(section->mem->obj)->base);
if (!drm_WARN_ON(&ptdev->base, IS_ERR_OR_NULL(sgt)))
dma_sync_sgtable_for_device(ptdev->base.dev, sgt, DMA_TO_DEVICE);
}
diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index 2c12c1c58e2b..68bc36fa70df 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -75,7 +75,7 @@ static void panthor_gem_free_object(struct drm_gem_object *obj)
mutex_destroy(&bo->label.lock);
drm_gem_free_mmap_offset(&bo->base.base);
- drm_gem_shmem_free(&bo->base);
+ drm_gem_uma_free(&bo->base);
drm_gem_object_put(vm_root_gem);
}
@@ -123,7 +123,7 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
size_t size, u32 bo_flags, u32 vm_map_flags,
u64 gpu_va, const char *name)
{
- struct drm_gem_shmem_object *obj;
+ struct drm_gem_uma_object *obj;
struct panthor_kernel_bo *kbo;
struct panthor_gem_object *bo;
u32 debug_flags = PANTHOR_DEBUGFS_GEM_USAGE_FLAG_KERNEL;
@@ -136,7 +136,7 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
if (!kbo)
return ERR_PTR(-ENOMEM);
- obj = drm_gem_shmem_create(&ptdev->base, size);
+ obj = drm_gem_uma_create(&ptdev->base, size);
if (IS_ERR(obj)) {
ret = PTR_ERR(obj);
goto err_free_bo;
@@ -207,16 +207,16 @@ static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
static const struct drm_gem_object_funcs panthor_gem_funcs = {
.free = panthor_gem_free_object,
- .print_info = drm_gem_shmem_object_print_info,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
+ .print_info = drm_gem_uma_object_print_info,
+ .pin = drm_gem_uma_object_pin,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
+ .vmap = drm_gem_uma_object_vmap,
+ .vunmap = drm_gem_uma_object_vunmap,
+ .mmap = drm_gem_uma_object_mmap,
.status = panthor_gem_status,
.export = panthor_gem_prime_export,
- .vm_ops = &drm_gem_shmem_vm_ops,
+ .vm_ops = &drm_gem_uma_vm_ops,
};
/**
@@ -263,14 +263,14 @@ panthor_gem_create_with_handle(struct drm_file *file,
u64 *size, u32 flags, u32 *handle)
{
int ret;
- struct drm_gem_shmem_object *shmem;
+ struct drm_gem_uma_object *uma;
struct panthor_gem_object *bo;
- shmem = drm_gem_shmem_create(ddev, *size);
- if (IS_ERR(shmem))
- return PTR_ERR(shmem);
+ uma = drm_gem_uma_create(ddev, *size);
+ if (IS_ERR(uma))
+ return PTR_ERR(uma);
- bo = to_panthor_bo(&shmem->base);
+ bo = to_panthor_bo(&uma->base);
bo->flags = flags;
if (exclusive_vm) {
@@ -288,10 +288,10 @@ panthor_gem_create_with_handle(struct drm_file *file,
* FIXME: Ideally this should be done when pages are allocated, not at
* BO creation time.
*/
- if (shmem->map_wc) {
+ if (uma->map_wc) {
struct sg_table *sgt;
- sgt = drm_gem_shmem_get_pages_sgt(shmem);
+ sgt = drm_gem_uma_get_pages_sgt(uma);
if (IS_ERR(sgt)) {
ret = PTR_ERR(sgt);
goto out_put_gem;
@@ -302,13 +302,13 @@ panthor_gem_create_with_handle(struct drm_file *file,
* Allocate an id of idr table where the obj is registered
* and handle has the id what user can see.
*/
- ret = drm_gem_handle_create(file, &shmem->base, handle);
+ ret = drm_gem_handle_create(file, &uma->base, handle);
if (!ret)
*size = bo->base.base.size;
out_put_gem:
/* drop reference from allocate - handle holds it now. */
- drm_gem_object_put(&shmem->base);
+ drm_gem_object_put(&uma->base);
return ret;
}
diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
index 80c6e24112d0..032b14a32b28 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.h
+++ b/drivers/gpu/drm/panthor/panthor_gem.h
@@ -5,7 +5,7 @@
#ifndef __PANTHOR_GEM_H__
#define __PANTHOR_GEM_H__
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
#include <drm/drm_mm.h>
#include <linux/iosys-map.h>
@@ -64,8 +64,8 @@ struct panthor_gem_debugfs {
* struct panthor_gem_object - Driver specific GEM object.
*/
struct panthor_gem_object {
- /** @base: Inherit from drm_gem_shmem_object. */
- struct drm_gem_shmem_object base;
+ /** @base: Inherit from drm_gem_uma_object. */
+ struct drm_gem_uma_object base;
/**
* @exclusive_vm_root_gem: Root GEM of the exclusive VM this GEM object
@@ -133,7 +133,7 @@ struct panthor_kernel_bo {
static inline
struct panthor_gem_object *to_panthor_bo(struct drm_gem_object *obj)
{
- return container_of(to_drm_gem_shmem_obj(obj), struct panthor_gem_object, base);
+ return container_of(to_drm_gem_uma_obj(obj), struct panthor_gem_object, base);
}
struct drm_gem_object *panthor_gem_create_object(struct drm_device *ddev, size_t size);
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 478ea98db95c..f51619cea679 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1073,7 +1073,7 @@ static void panthor_vm_bo_free(struct drm_gpuvm_bo *vm_bo)
struct panthor_gem_object *bo = to_panthor_bo(vm_bo->obj);
if (!drm_gem_is_imported(&bo->base.base))
- drm_gem_shmem_unpin(&bo->base);
+ drm_gem_uma_unpin(&bo->base);
kfree(vm_bo);
}
@@ -1218,15 +1218,15 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
* once we have successfully called drm_gpuvm_bo_create(),
* GPUVM will take care of dropping the pin for us.
*/
- ret = drm_gem_shmem_pin(&bo->base);
+ ret = drm_gem_uma_pin(&bo->base);
if (ret)
goto err_cleanup;
}
- sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
+ sgt = drm_gem_uma_get_pages_sgt(&bo->base);
if (IS_ERR(sgt)) {
if (!drm_gem_is_imported(&bo->base.base))
- drm_gem_shmem_unpin(&bo->base);
+ drm_gem_uma_unpin(&bo->base);
ret = PTR_ERR(sgt);
goto err_cleanup;
@@ -1237,7 +1237,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
preallocated_vm_bo = drm_gpuvm_bo_create(&vm->base, &bo->base.base);
if (!preallocated_vm_bo) {
if (!drm_gem_is_imported(&bo->base.base))
- drm_gem_shmem_unpin(&bo->base);
+ drm_gem_uma_unpin(&bo->base);
ret = -ENOMEM;
goto err_cleanup;
diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index e74ca071159d..d8a3f6aa9aaa 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -3,7 +3,6 @@
#include <drm/drm_drv.h>
#include <drm/drm_exec.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_managed.h>
#include <drm/drm_print.h>
#include <drm/gpu_scheduler.h>
--
2.52.0
* [PATCH 09/13] drm/v3d: Use GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (7 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 08/13] drm/panthor: " Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 10/13] drm/virtgpu: " Thomas Zimmermann
` (4 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Convert v3d from GEM-SHMEM to GEM-UMA. The latter is just a copy
of the former, so this change merely renames symbols. No functional
changes.
GEM-SHMEM will become more self-contained for drivers without specific
memory management. GEM-UMA's interfaces will remain flexible for drivers
with UMA hardware, such as v3d.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/v3d/Kconfig | 2 +-
drivers/gpu/drm/v3d/v3d_bo.c | 45 +++++++++++++++++------------------
drivers/gpu/drm/v3d/v3d_drv.h | 4 ++--
drivers/gpu/drm/v3d/v3d_mmu.c | 9 ++++---
4 files changed, 29 insertions(+), 31 deletions(-)
diff --git a/drivers/gpu/drm/v3d/Kconfig b/drivers/gpu/drm/v3d/Kconfig
index ce62c5908e1d..4345cb0f4dd6 100644
--- a/drivers/gpu/drm/v3d/Kconfig
+++ b/drivers/gpu/drm/v3d/Kconfig
@@ -5,8 +5,8 @@ config DRM_V3D
depends on DRM
depends on COMMON_CLK
depends on MMU
+ select DRM_GEM_UMA_HELPER
select DRM_SCHED
- select DRM_GEM_SHMEM_HELPER
help
Choose this option if you have a system that has a Broadcom
V3D 3.x or newer GPUs. SoCs supported include the BCM2711,
diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
index d9547f5117b9..842881e5f9a3 100644
--- a/drivers/gpu/drm/v3d/v3d_bo.c
+++ b/drivers/gpu/drm/v3d/v3d_bo.c
@@ -5,7 +5,7 @@
* DOC: V3D GEM BO management support
*
* Compared to VC4 (V3D 2.x), V3D 3.3 introduces an MMU between the
- * GPU and the bus, allowing us to use shmem objects for our storage
+ * GPU and the bus, allowing us to use UMA objects for our storage
* instead of CMA.
*
* Physically contiguous objects may still be imported to V3D, but the
@@ -59,20 +59,20 @@ void v3d_free_object(struct drm_gem_object *obj)
/* GPU execution may have dirtied any pages in the BO. */
bo->base.pages_mark_dirty_on_put = true;
- drm_gem_shmem_free(&bo->base);
+ drm_gem_uma_free(&bo->base);
}
static const struct drm_gem_object_funcs v3d_gem_funcs = {
.free = v3d_free_object,
- .print_info = drm_gem_shmem_object_print_info,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
+ .print_info = drm_gem_uma_object_print_info,
+ .pin = drm_gem_uma_object_pin,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
+ .vmap = drm_gem_uma_object_vmap,
+ .vunmap = drm_gem_uma_object_vunmap,
+ .mmap = drm_gem_uma_object_mmap,
.status = v3d_gem_status,
- .vm_ops = &drm_gem_shmem_vm_ops,
+ .vm_ops = &drm_gem_uma_vm_ops,
};
/* gem_create_object function for allocating a BO struct and doing
@@ -108,9 +108,9 @@ v3d_bo_create_finish(struct drm_gem_object *obj)
int ret;
/* So far we pin the BO in the MMU for its lifetime, so use
- * shmem's helper for getting a lifetime sgt.
+ * UMA's helper for getting a lifetime sgt.
*/
- sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
+ sgt = drm_gem_uma_get_pages_sgt(&bo->base);
if (IS_ERR(sgt))
return PTR_ERR(sgt);
@@ -149,26 +149,25 @@ v3d_bo_create_finish(struct drm_gem_object *obj)
struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct drm_file *file_priv,
size_t unaligned_size)
{
- struct drm_gem_shmem_object *shmem_obj;
+ struct drm_gem_uma_object *uma_obj;
struct v3d_dev *v3d = to_v3d_dev(dev);
struct v3d_bo *bo;
int ret;
- shmem_obj = drm_gem_shmem_create_with_mnt(dev, unaligned_size,
- v3d->gemfs);
- if (IS_ERR(shmem_obj))
- return ERR_CAST(shmem_obj);
- bo = to_v3d_bo(&shmem_obj->base);
+ uma_obj = drm_gem_uma_create_with_mnt(dev, unaligned_size, v3d->gemfs);
+ if (IS_ERR(uma_obj))
+ return ERR_CAST(uma_obj);
+ bo = to_v3d_bo(&uma_obj->base);
bo->vaddr = NULL;
- ret = v3d_bo_create_finish(&shmem_obj->base);
+ ret = v3d_bo_create_finish(&uma_obj->base);
if (ret)
goto free_obj;
return bo;
free_obj:
- drm_gem_shmem_free(shmem_obj);
+ drm_gem_uma_free(uma_obj);
return ERR_PTR(ret);
}
@@ -180,13 +179,13 @@ v3d_prime_import_sg_table(struct drm_device *dev,
struct drm_gem_object *obj;
int ret;
- obj = drm_gem_shmem_prime_import_sg_table(dev, attach, sgt);
+ obj = drm_gem_uma_prime_import_sg_table(dev, attach, sgt);
if (IS_ERR(obj))
return obj;
ret = v3d_bo_create_finish(obj);
if (ret) {
- drm_gem_shmem_free(&to_v3d_bo(obj)->base);
+ drm_gem_uma_free(&to_v3d_bo(obj)->base);
return ERR_PTR(ret);
}
@@ -195,7 +194,7 @@ v3d_prime_import_sg_table(struct drm_device *dev,
void v3d_get_bo_vaddr(struct v3d_bo *bo)
{
- struct drm_gem_shmem_object *obj = &bo->base;
+ struct drm_gem_uma_object *obj = &bo->base;
bo->vaddr = vmap(obj->pages, obj->base.size >> PAGE_SHIFT, VM_MAP,
pgprot_writecombine(PAGE_KERNEL));
diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
index 1884686985b8..3843d10fbd72 100644
--- a/drivers/gpu/drm/v3d/v3d_drv.h
+++ b/drivers/gpu/drm/v3d/v3d_drv.h
@@ -8,7 +8,7 @@
#include <drm/drm_encoder.h>
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
#include <drm/gpu_scheduler.h>
#include "v3d_performance_counters.h"
@@ -243,7 +243,7 @@ struct v3d_file_priv {
};
struct v3d_bo {
- struct drm_gem_shmem_object base;
+ struct drm_gem_uma_object base;
struct drm_mm_node node;
diff --git a/drivers/gpu/drm/v3d/v3d_mmu.c b/drivers/gpu/drm/v3d/v3d_mmu.c
index a25d25a8ae61..a634ac3eaaf6 100644
--- a/drivers/gpu/drm/v3d/v3d_mmu.c
+++ b/drivers/gpu/drm/v3d/v3d_mmu.c
@@ -82,13 +82,13 @@ int v3d_mmu_set_page_table(struct v3d_dev *v3d)
void v3d_mmu_insert_ptes(struct v3d_bo *bo)
{
- struct drm_gem_shmem_object *shmem_obj = &bo->base;
- struct v3d_dev *v3d = to_v3d_dev(shmem_obj->base.dev);
+ struct drm_gem_uma_object *uma_obj = &bo->base;
+ struct v3d_dev *v3d = to_v3d_dev(uma_obj->base.dev);
u32 page = bo->node.start;
struct scatterlist *sgl;
unsigned int count;
- for_each_sgtable_dma_sg(shmem_obj->sgt, sgl, count) {
+ for_each_sgtable_dma_sg(uma_obj->sgt, sgl, count) {
dma_addr_t dma_addr = sg_dma_address(sgl);
u32 pfn = dma_addr >> V3D_MMU_PAGE_SHIFT;
unsigned int len = sg_dma_len(sgl);
@@ -121,8 +121,7 @@ void v3d_mmu_insert_ptes(struct v3d_bo *bo)
}
}
- WARN_ON_ONCE(page - bo->node.start !=
- shmem_obj->base.size >> V3D_MMU_PAGE_SHIFT);
+ WARN_ON_ONCE(page - bo->node.start != uma_obj->base.size >> V3D_MMU_PAGE_SHIFT);
if (v3d_mmu_flush_all(v3d))
dev_err(v3d->drm.dev, "MMU flush timeout\n");
--
2.52.0
* [PATCH 10/13] drm/virtgpu: Use GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (8 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 09/13] drm/v3d: " Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 11/13] accel/amdxdna: " Thomas Zimmermann
` (3 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Convert virtgpu from GEM-SHMEM to GEM-UMA. The latter is just a copy
of the former, so this change merely renames symbols. No functional
changes.
GEM-SHMEM will become more self-contained for drivers without specific
memory management. GEM-UMA's interfaces will remain flexible for drivers
with UMA hardware, such as virtgpu.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/gpu/drm/virtio/Kconfig | 4 +-
drivers/gpu/drm/virtio/virtgpu_drv.c | 4 +-
drivers/gpu/drm/virtio/virtgpu_drv.h | 12 ++---
drivers/gpu/drm/virtio/virtgpu_object.c | 64 ++++++++++++-------------
drivers/gpu/drm/virtio/virtgpu_plane.c | 6 +--
drivers/gpu/drm/virtio/virtgpu_vq.c | 6 +--
6 files changed, 48 insertions(+), 48 deletions(-)
diff --git a/drivers/gpu/drm/virtio/Kconfig b/drivers/gpu/drm/virtio/Kconfig
index fc884fb57b7e..eee61fce17fc 100644
--- a/drivers/gpu/drm/virtio/Kconfig
+++ b/drivers/gpu/drm/virtio/Kconfig
@@ -2,10 +2,10 @@
config DRM_VIRTIO_GPU
tristate "Virtio GPU driver"
depends on DRM && VIRTIO_MENU && MMU
- select VIRTIO
select DRM_CLIENT_SELECTION
+ select DRM_GEM_UMA_HELPER
select DRM_KMS_HELPER
- select DRM_GEM_SHMEM_HELPER
+ select VIRTIO
select VIRTIO_DMA_SHARED_BUFFER
help
This is the virtual GPU driver for virtio. It can be used with
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
index a5ce96fb8a1d..8dbaa021f90f 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.c
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
@@ -37,7 +37,7 @@
#include <drm/drm.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_drv.h>
-#include <drm/drm_fbdev_shmem.h>
+#include <drm/drm_fbdev_uma.h>
#include <drm/drm_file.h>
#include <drm/drm_print.h>
@@ -234,7 +234,7 @@ static const struct drm_driver driver = {
.dumb_create = virtio_gpu_mode_dumb_create,
- DRM_FBDEV_SHMEM_DRIVER_OPS,
+ DRM_FBDEV_UMA_DRIVER_OPS,
#if defined(CONFIG_DEBUG_FS)
.debugfs_init = virtio_gpu_debugfs_init,
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index f17660a71a3e..d7a2293cc794 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -38,7 +38,7 @@
#include <drm/drm_fourcc.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
#include <drm/drm_ioctl.h>
#include <drm/drm_probe_helper.h>
#include <drm/virtgpu_drm.h>
@@ -87,7 +87,7 @@ struct virtio_gpu_object_params {
};
struct virtio_gpu_object {
- struct drm_gem_shmem_object base;
+ struct drm_gem_uma_object base;
struct sg_table *sgt;
uint32_t hw_res_handle;
bool dumb;
@@ -102,7 +102,7 @@ struct virtio_gpu_object {
#define gem_to_virtio_gpu_obj(gobj) \
container_of((gobj), struct virtio_gpu_object, base.base)
-struct virtio_gpu_object_shmem {
+struct virtio_gpu_object_uma {
struct virtio_gpu_object base;
};
@@ -113,8 +113,8 @@ struct virtio_gpu_object_vram {
struct drm_mm_node vram_node;
};
-#define to_virtio_gpu_shmem(virtio_gpu_object) \
- container_of((virtio_gpu_object), struct virtio_gpu_object_shmem, base)
+#define to_virtio_gpu_uma(virtio_gpu_object) \
+ container_of((virtio_gpu_object), struct virtio_gpu_object_uma, base)
#define to_virtio_gpu_vram(virtio_gpu_object) \
container_of((virtio_gpu_object), struct virtio_gpu_object_vram, base)
@@ -474,7 +474,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object **bo_ptr,
struct virtio_gpu_fence *fence);
-bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
+bool virtio_gpu_is_uma(struct virtio_gpu_object *bo);
int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
uint32_t *resid);
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 4270bfede7b9..31cc96b9b5a0 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -68,8 +68,8 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
- if (virtio_gpu_is_shmem(bo)) {
- drm_gem_shmem_free(&bo->base);
+ if (virtio_gpu_is_uma(bo)) {
+ drm_gem_uma_free(&bo->base);
} else if (virtio_gpu_is_vram(bo)) {
struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
@@ -123,52 +123,52 @@ int virtio_gpu_detach_object_fenced(struct virtio_gpu_object *bo)
return 0;
}
-static const struct drm_gem_object_funcs virtio_gpu_shmem_funcs = {
+static const struct drm_gem_object_funcs virtio_gpu_uma_funcs = {
.free = virtio_gpu_free_object,
.open = virtio_gpu_gem_object_open,
.close = virtio_gpu_gem_object_close,
- .print_info = drm_gem_shmem_object_print_info,
+ .print_info = drm_gem_uma_object_print_info,
.export = virtgpu_gem_prime_export,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
- .vm_ops = &drm_gem_shmem_vm_ops,
+ .pin = drm_gem_uma_object_pin,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
+ .vmap = drm_gem_uma_object_vmap,
+ .vunmap = drm_gem_uma_object_vunmap,
+ .mmap = drm_gem_uma_object_mmap,
+ .vm_ops = &drm_gem_uma_vm_ops,
};
-bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo)
+bool virtio_gpu_is_uma(struct virtio_gpu_object *bo)
{
- return bo->base.base.funcs == &virtio_gpu_shmem_funcs;
+ return bo->base.base.funcs == &virtio_gpu_uma_funcs;
}
struct drm_gem_object *virtio_gpu_create_object(struct drm_device *dev,
size_t size)
{
- struct virtio_gpu_object_shmem *shmem;
- struct drm_gem_shmem_object *dshmem;
+ struct virtio_gpu_object_uma *vguma;
+ struct drm_gem_uma_object *uma;
- shmem = kzalloc(sizeof(*shmem), GFP_KERNEL);
- if (!shmem)
+ vguma = kzalloc(sizeof(*vguma), GFP_KERNEL);
+ if (!vguma)
return ERR_PTR(-ENOMEM);
- dshmem = &shmem->base.base;
- dshmem->base.funcs = &virtio_gpu_shmem_funcs;
- return &dshmem->base;
+ uma = &vguma->base.base;
+ uma->base.funcs = &virtio_gpu_uma_funcs;
+ return &uma->base;
}
-static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
- struct virtio_gpu_object *bo,
- struct virtio_gpu_mem_entry **ents,
- unsigned int *nents)
+static int virtio_gpu_object_uma_init(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *bo,
+ struct virtio_gpu_mem_entry **ents,
+ unsigned int *nents)
{
bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
struct scatterlist *sg;
struct sg_table *pages;
int si;
- pages = drm_gem_shmem_get_pages_sgt(&bo->base);
+ pages = drm_gem_uma_get_pages_sgt(&bo->base);
if (IS_ERR(pages))
return PTR_ERR(pages);
@@ -208,7 +208,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
struct virtio_gpu_fence *fence)
{
struct virtio_gpu_object_array *objs = NULL;
- struct drm_gem_shmem_object *shmem_obj;
+ struct drm_gem_uma_object *uma_obj;
struct virtio_gpu_object *bo;
struct virtio_gpu_mem_entry *ents = NULL;
unsigned int nents;
@@ -217,10 +217,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
*bo_ptr = NULL;
params->size = roundup(params->size, PAGE_SIZE);
- shmem_obj = drm_gem_shmem_create(vgdev->ddev, params->size);
- if (IS_ERR(shmem_obj))
- return PTR_ERR(shmem_obj);
- bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
+ uma_obj = drm_gem_uma_create(vgdev->ddev, params->size);
+ if (IS_ERR(uma_obj))
+ return PTR_ERR(uma_obj);
+ bo = gem_to_virtio_gpu_obj(&uma_obj->base);
ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
if (ret < 0)
@@ -228,7 +228,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
bo->dumb = params->dumb;
- ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+ ret = virtio_gpu_object_uma_init(vgdev, bo, &ents, &nents);
if (ret != 0)
goto err_put_id;
@@ -270,6 +270,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
err_put_id:
virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
err_free_gem:
- drm_gem_shmem_free(shmem_obj);
+ drm_gem_uma_free(uma_obj);
return ret;
}
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a7863f8ee4ee..a80027e02612 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -515,12 +515,12 @@ static int virtio_drm_get_scanout_buffer(struct drm_plane *plane,
if (bo->base.vaddr) {
iosys_map_set_vaddr(&sb->map[0], bo->base.vaddr);
} else {
- struct drm_gem_shmem_object *shmem = &bo->base;
+ struct drm_gem_uma_object *uma = &bo->base;
- if (!shmem->pages)
+ if (!uma->pages)
return -ENODEV;
/* map scanout buffer later */
- sb->pages = shmem->pages;
+ sb->pages = uma->pages;
}
sb->format = plane->state->fb->format;
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 0c194b4e9488..9c6607239629 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -726,7 +726,7 @@ int virtio_gpu_panic_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
struct virtio_gpu_vbuffer *vbuf;
bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
- if (virtio_gpu_is_shmem(bo) && use_dma_api)
+ if (virtio_gpu_is_uma(bo) && use_dma_api)
dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
bo->base.sgt, DMA_TO_DEVICE);
@@ -757,7 +757,7 @@ void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
struct virtio_gpu_vbuffer *vbuf;
bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
- if (virtio_gpu_is_shmem(bo) && use_dma_api)
+ if (virtio_gpu_is_uma(bo) && use_dma_api)
dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
bo->base.sgt, DMA_TO_DEVICE);
@@ -1195,7 +1195,7 @@ void virtio_gpu_cmd_transfer_to_host_3d(struct virtio_gpu_device *vgdev,
struct virtio_gpu_vbuffer *vbuf;
bool use_dma_api = !virtio_has_dma_quirk(vgdev->vdev);
- if (virtio_gpu_is_shmem(bo) && use_dma_api)
+ if (virtio_gpu_is_uma(bo) && use_dma_api)
dma_sync_sgtable_for_device(vgdev->vdev->dev.parent,
bo->base.sgt, DMA_TO_DEVICE);
--
2.52.0
^ permalink raw reply related [flat|nested] 26+ messages in thread

* [PATCH 11/13] accel/amdxdna: Use GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (9 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 10/13] drm/virtgpu: " Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 13:42 ` [PATCH 12/13] accel/ivpu: " Thomas Zimmermann
` (2 subsequent siblings)
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Convert amdxdna from GEM-SHMEM to GEM-UMA. The latter is currently just
a copy of the former, so this change merely renames symbols. No functional changes.
GEM-SHMEM will become more self-contained for drivers without specific
memory management. GEM-UMA's interfaces will remain flexible for drivers
with UMA hardware, such as amdxdna.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/accel/amdxdna/Kconfig | 2 +-
drivers/accel/amdxdna/aie2_ctx.c | 1 -
drivers/accel/amdxdna/aie2_message.c | 1 -
drivers/accel/amdxdna/aie2_pci.c | 1 -
drivers/accel/amdxdna/aie2_psp.c | 1 -
drivers/accel/amdxdna/aie2_smu.c | 1 -
drivers/accel/amdxdna/amdxdna_ctx.c | 7 ++-
drivers/accel/amdxdna/amdxdna_gem.c | 49 +++++++++----------
drivers/accel/amdxdna/amdxdna_gem.h | 5 +-
.../accel/amdxdna/amdxdna_mailbox_helper.c | 1 -
drivers/accel/amdxdna/amdxdna_pci_drv.c | 1 -
drivers/accel/amdxdna/amdxdna_sysfs.c | 1 -
12 files changed, 32 insertions(+), 39 deletions(-)
diff --git a/drivers/accel/amdxdna/Kconfig b/drivers/accel/amdxdna/Kconfig
index f39d7a87296c..a417b18b401f 100644
--- a/drivers/accel/amdxdna/Kconfig
+++ b/drivers/accel/amdxdna/Kconfig
@@ -6,8 +6,8 @@ config DRM_ACCEL_AMDXDNA
depends on DRM_ACCEL
depends on PCI && HAS_IOMEM
depends on X86_64
+ select DRM_GEM_UMA_HELPER
select DRM_SCHED
- select DRM_GEM_SHMEM_HELPER
select FW_LOADER
select HMM_MIRROR
help
diff --git a/drivers/accel/amdxdna/aie2_ctx.c b/drivers/accel/amdxdna/aie2_ctx.c
index 42d876a427c5..7e82496495d3 100644
--- a/drivers/accel/amdxdna/aie2_ctx.c
+++ b/drivers/accel/amdxdna/aie2_ctx.c
@@ -6,7 +6,6 @@
#include <drm/amdxdna_accel.h>
#include <drm/drm_device.h>
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_print.h>
#include <drm/drm_syncobj.h>
#include <linux/hmm.h>
diff --git a/drivers/accel/amdxdna/aie2_message.c b/drivers/accel/amdxdna/aie2_message.c
index d493bb1c3360..2f639c22ebf1 100644
--- a/drivers/accel/amdxdna/aie2_message.c
+++ b/drivers/accel/amdxdna/aie2_message.c
@@ -7,7 +7,6 @@
#include <drm/drm_cache.h>
#include <drm/drm_device.h>
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_print.h>
#include <drm/gpu_scheduler.h>
#include <linux/bitfield.h>
diff --git a/drivers/accel/amdxdna/aie2_pci.c b/drivers/accel/amdxdna/aie2_pci.c
index ceef1c502e9e..498a23a070c9 100644
--- a/drivers/accel/amdxdna/aie2_pci.c
+++ b/drivers/accel/amdxdna/aie2_pci.c
@@ -6,7 +6,6 @@
#include <drm/amdxdna_accel.h>
#include <drm/drm_device.h>
#include <drm/drm_drv.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_managed.h>
#include <drm/drm_print.h>
#include <drm/gpu_scheduler.h>
diff --git a/drivers/accel/amdxdna/aie2_psp.c b/drivers/accel/amdxdna/aie2_psp.c
index f28a060a8810..81145210abf3 100644
--- a/drivers/accel/amdxdna/aie2_psp.c
+++ b/drivers/accel/amdxdna/aie2_psp.c
@@ -4,7 +4,6 @@
*/
#include <drm/drm_device.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_managed.h>
#include <drm/drm_print.h>
#include <drm/gpu_scheduler.h>
diff --git a/drivers/accel/amdxdna/aie2_smu.c b/drivers/accel/amdxdna/aie2_smu.c
index bd94ee96c2bc..a16942aa10a1 100644
--- a/drivers/accel/amdxdna/aie2_smu.c
+++ b/drivers/accel/amdxdna/aie2_smu.c
@@ -4,7 +4,6 @@
*/
#include <drm/drm_device.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_print.h>
#include <drm/gpu_scheduler.h>
#include <linux/iopoll.h>
diff --git a/drivers/accel/amdxdna/amdxdna_ctx.c b/drivers/accel/amdxdna/amdxdna_ctx.c
index d17aef89a0ad..1c02a97b1865 100644
--- a/drivers/accel/amdxdna/amdxdna_ctx.c
+++ b/drivers/accel/amdxdna/amdxdna_ctx.c
@@ -8,7 +8,6 @@
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_print.h>
#include <drm/gpu_scheduler.h>
#include <linux/xarray.h>
@@ -387,7 +386,7 @@ amdxdna_arg_bos_lookup(struct amdxdna_client *client,
gobj = drm_gem_object_lookup(client->filp, bo_hdls[i]);
if (!gobj) {
ret = -ENOENT;
- goto put_shmem_bo;
+ goto put_bos;
}
abo = to_xdna_obj(gobj);
@@ -402,7 +401,7 @@ amdxdna_arg_bos_lookup(struct amdxdna_client *client,
if (ret) {
mutex_unlock(&abo->lock);
drm_gem_object_put(gobj);
- goto put_shmem_bo;
+ goto put_bos;
}
abo->pinned = true;
mutex_unlock(&abo->lock);
@@ -412,7 +411,7 @@ amdxdna_arg_bos_lookup(struct amdxdna_client *client,
return 0;
-put_shmem_bo:
+put_bos:
amdxdna_arg_bos_put(job);
return ret;
}
diff --git a/drivers/accel/amdxdna/amdxdna_gem.c b/drivers/accel/amdxdna/amdxdna_gem.c
index dfa916eeb2d9..33c48498d1eb 100644
--- a/drivers/accel/amdxdna/amdxdna_gem.c
+++ b/drivers/accel/amdxdna/amdxdna_gem.c
@@ -7,7 +7,6 @@
#include <drm/drm_cache.h>
#include <drm/drm_device.h>
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_print.h>
#include <drm/gpu_scheduler.h>
#include <linux/dma-buf.h>
@@ -277,9 +276,9 @@ static int amdxdna_insert_pages(struct amdxdna_gem_obj *abo,
int ret;
if (!is_import_bo(abo)) {
- ret = drm_gem_shmem_mmap(&abo->base, vma);
+ ret = drm_gem_uma_mmap(&abo->base, vma);
if (ret) {
- XDNA_ERR(xdna, "Failed shmem mmap %d", ret);
+ XDNA_ERR(xdna, "Failed uma mmap %d", ret);
return ret;
}
@@ -358,11 +357,11 @@ static int amdxdna_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struc
unsigned long num_pages = vma_pages(vma);
int ret;
- vma->vm_ops = &drm_gem_shmem_vm_ops;
+ vma->vm_ops = &drm_gem_uma_vm_ops;
vma->vm_private_data = gobj;
drm_gem_object_get(gobj);
- ret = drm_gem_shmem_mmap(&abo->base, vma);
+ ret = drm_gem_uma_mmap(&abo->base, vma);
if (ret)
goto put_obj;
@@ -474,23 +473,23 @@ static void amdxdna_gem_obj_free(struct drm_gem_object *gobj)
return;
}
- drm_gem_shmem_free(&abo->base);
+ drm_gem_uma_free(&abo->base);
}
static const struct drm_gem_object_funcs amdxdna_gem_dev_obj_funcs = {
.free = amdxdna_gem_dev_obj_free,
};
-static const struct drm_gem_object_funcs amdxdna_gem_shmem_funcs = {
+static const struct drm_gem_object_funcs amdxdna_gem_uma_funcs = {
.free = amdxdna_gem_obj_free,
- .print_info = drm_gem_shmem_object_print_info,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
+ .print_info = drm_gem_uma_object_print_info,
+ .pin = drm_gem_uma_object_pin,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
+ .vmap = drm_gem_uma_object_vmap,
+ .vunmap = drm_gem_uma_object_vunmap,
.mmap = amdxdna_gem_obj_mmap,
- .vm_ops = &drm_gem_shmem_vm_ops,
+ .vm_ops = &drm_gem_uma_vm_ops,
.export = amdxdna_gem_prime_export,
};
@@ -525,21 +524,21 @@ amdxdna_gem_create_object_cb(struct drm_device *dev, size_t size)
if (IS_ERR(abo))
return ERR_CAST(abo);
- to_gobj(abo)->funcs = &amdxdna_gem_shmem_funcs;
+ to_gobj(abo)->funcs = &amdxdna_gem_uma_funcs;
return to_gobj(abo);
}
static struct amdxdna_gem_obj *
-amdxdna_gem_create_shmem_object(struct drm_device *dev, size_t size)
+amdxdna_gem_create_uma_object(struct drm_device *dev, size_t size)
{
- struct drm_gem_shmem_object *shmem = drm_gem_shmem_create(dev, size);
+ struct drm_gem_uma_object *uma = drm_gem_uma_create(dev, size);
- if (IS_ERR(shmem))
- return ERR_CAST(shmem);
+ if (IS_ERR(uma))
+ return ERR_CAST(uma);
- shmem->map_wc = false;
- return to_xdna_obj(&shmem->base);
+ uma->map_wc = false;
+ return to_xdna_obj(&uma->base);
}
static struct amdxdna_gem_obj *
@@ -589,7 +588,7 @@ amdxdna_gem_create_object(struct drm_device *dev,
if (args->vaddr)
return amdxdna_gem_create_ubuf_object(dev, args);
- return amdxdna_gem_create_shmem_object(dev, aligned_sz);
+ return amdxdna_gem_create_uma_object(dev, aligned_sz);
}
struct drm_gem_object *
@@ -615,7 +614,7 @@ amdxdna_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf)
goto fail_detach;
}
- gobj = drm_gem_shmem_prime_import_sg_table(dev, attach, sgt);
+ gobj = drm_gem_uma_prime_import_sg_table(dev, attach, sgt);
if (IS_ERR(gobj)) {
ret = PTR_ERR(gobj);
goto fail_unmap;
@@ -836,7 +835,7 @@ int amdxdna_gem_pin_nolock(struct amdxdna_gem_obj *abo)
if (is_import_bo(abo))
return 0;
- ret = drm_gem_shmem_pin(&abo->base);
+ ret = drm_gem_uma_pin(&abo->base);
XDNA_DBG(xdna, "BO type %d ret %d", abo->type, ret);
return ret;
@@ -862,7 +861,7 @@ void amdxdna_gem_unpin(struct amdxdna_gem_obj *abo)
return;
mutex_lock(&abo->lock);
- drm_gem_shmem_unpin(&abo->base);
+ drm_gem_uma_unpin(&abo->base);
mutex_unlock(&abo->lock);
}
diff --git a/drivers/accel/amdxdna/amdxdna_gem.h b/drivers/accel/amdxdna/amdxdna_gem.h
index f79fc7f3c93b..74c78c86125e 100644
--- a/drivers/accel/amdxdna/amdxdna_gem.h
+++ b/drivers/accel/amdxdna/amdxdna_gem.h
@@ -7,6 +7,9 @@
#define _AMDXDNA_GEM_H_
#include <linux/hmm.h>
+
+#include <drm/drm_gem_uma_helper.h>
+
#include "amdxdna_pci_drv.h"
struct amdxdna_umap {
@@ -33,7 +36,7 @@ struct amdxdna_mem {
};
struct amdxdna_gem_obj {
- struct drm_gem_shmem_object base;
+ struct drm_gem_uma_object base;
struct amdxdna_client *client;
u8 type;
bool pinned;
diff --git a/drivers/accel/amdxdna/amdxdna_mailbox_helper.c b/drivers/accel/amdxdna/amdxdna_mailbox_helper.c
index 6d0c24513476..63c3b82ff1b0 100644
--- a/drivers/accel/amdxdna/amdxdna_mailbox_helper.c
+++ b/drivers/accel/amdxdna/amdxdna_mailbox_helper.c
@@ -7,7 +7,6 @@
#include <drm/drm_device.h>
#include <drm/drm_print.h>
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/gpu_scheduler.h>
#include <linux/completion.h>
diff --git a/drivers/accel/amdxdna/amdxdna_pci_drv.c b/drivers/accel/amdxdna/amdxdna_pci_drv.c
index 1973ab67721b..ae069b8805c7 100644
--- a/drivers/accel/amdxdna/amdxdna_pci_drv.c
+++ b/drivers/accel/amdxdna/amdxdna_pci_drv.c
@@ -7,7 +7,6 @@
#include <drm/drm_accel.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_ioctl.h>
#include <drm/drm_managed.h>
#include <drm/gpu_scheduler.h>
diff --git a/drivers/accel/amdxdna/amdxdna_sysfs.c b/drivers/accel/amdxdna/amdxdna_sysfs.c
index f27e4ee960a0..d7fcb9c9b7b5 100644
--- a/drivers/accel/amdxdna/amdxdna_sysfs.c
+++ b/drivers/accel/amdxdna/amdxdna_sysfs.c
@@ -5,7 +5,6 @@
#include <drm/amdxdna_accel.h>
#include <drm/drm_device.h>
-#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_print.h>
#include <drm/gpu_scheduler.h>
#include <linux/types.h>
--
2.52.0
^ permalink raw reply related [flat|nested] 26+ messages in thread

* [PATCH 12/13] accel/ivpu: Use GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (10 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 11/13] accel/amdxdna: " Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 14:24 ` Karol Wachowski
2025-12-09 14:25 ` Karol Wachowski
2025-12-09 13:42 ` [PATCH 13/13] accel/rocket: " Thomas Zimmermann
2025-12-09 14:27 ` [RFC][PATCH 00/13] drm: Introduce GEM-UMA " Boris Brezillon
13 siblings, 2 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Convert ivpu from GEM-SHMEM to GEM-UMA. The latter is currently just
a copy of the former, so this change merely renames symbols. No functional changes.
GEM-SHMEM will become more self-contained for drivers without specific
memory management. GEM-UMA's interfaces will remain flexible for drivers
with UMA hardware, such as ivpu.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/accel/ivpu/Kconfig | 2 +-
drivers/accel/ivpu/ivpu_gem.c | 36 +++++++++++++++++------------------
drivers/accel/ivpu/ivpu_gem.h | 4 ++--
3 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/drivers/accel/ivpu/Kconfig b/drivers/accel/ivpu/Kconfig
index 9e055b5ce03d..49ca139a9d31 100644
--- a/drivers/accel/ivpu/Kconfig
+++ b/drivers/accel/ivpu/Kconfig
@@ -5,8 +5,8 @@ config DRM_ACCEL_IVPU
depends on DRM_ACCEL
depends on X86_64 && !UML
depends on PCI && PCI_MSI
+ select DRM_GEM_UMA_HELPER
select FW_LOADER
- select DRM_GEM_SHMEM_HELPER
select GENERIC_ALLOCATOR
select WANT_DEV_COREDUMP
help
diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
index ece68f570b7e..7f4aeb482efb 100644
--- a/drivers/accel/ivpu/ivpu_gem.c
+++ b/drivers/accel/ivpu/ivpu_gem.c
@@ -84,7 +84,7 @@ int __must_check ivpu_bo_bind(struct ivpu_bo *bo)
if (bo->base.base.import_attach)
sgt = ivpu_bo_map_attachment(vdev, bo);
else
- sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
+ sgt = drm_gem_uma_get_pages_sgt(&bo->base);
if (IS_ERR(sgt)) {
ret = PTR_ERR(sgt);
ivpu_err(vdev, "Failed to map BO in IOMMU: %d\n", ret);
@@ -223,7 +223,7 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev,
get_dma_buf(dma_buf);
- obj = drm_gem_shmem_prime_import_sg_table(dev, attach, NULL);
+ obj = drm_gem_uma_prime_import_sg_table(dev, attach, NULL);
if (IS_ERR(obj)) {
ret = PTR_ERR(obj);
goto fail_detach;
@@ -251,7 +251,7 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev,
static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags)
{
- struct drm_gem_shmem_object *shmem;
+ struct drm_gem_uma_object *uma;
struct ivpu_bo *bo;
switch (flags & DRM_IVPU_BO_CACHE_MASK) {
@@ -262,11 +262,11 @@ static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 fla
return ERR_PTR(-EINVAL);
}
- shmem = drm_gem_shmem_create(&vdev->drm, size);
- if (IS_ERR(shmem))
- return ERR_CAST(shmem);
+ uma = drm_gem_uma_create(&vdev->drm, size);
+ if (IS_ERR(uma))
+ return ERR_CAST(uma);
- bo = to_ivpu_bo(&shmem->base);
+ bo = to_ivpu_bo(&uma->base);
bo->base.map_wc = flags & DRM_IVPU_BO_WC;
bo->flags = flags;
@@ -330,7 +330,7 @@ static void ivpu_gem_bo_free(struct drm_gem_object *obj)
drm_WARN_ON(obj->dev, refcount_read(&bo->base.pages_use_count) > 1);
drm_WARN_ON(obj->dev, bo->base.base.vma_node.vm_files.rb_node);
- drm_gem_shmem_free(&bo->base);
+ drm_gem_uma_free(&bo->base);
}
static enum drm_gem_object_status ivpu_gem_status(struct drm_gem_object *obj)
@@ -347,15 +347,15 @@ static enum drm_gem_object_status ivpu_gem_status(struct drm_gem_object *obj)
static const struct drm_gem_object_funcs ivpu_gem_funcs = {
.free = ivpu_gem_bo_free,
.open = ivpu_gem_bo_open,
- .print_info = drm_gem_shmem_object_print_info,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
+ .print_info = drm_gem_uma_object_print_info,
+ .pin = drm_gem_uma_object_pin,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
+ .vmap = drm_gem_uma_object_vmap,
+ .vunmap = drm_gem_uma_object_vunmap,
+ .mmap = drm_gem_uma_object_mmap,
.status = ivpu_gem_status,
- .vm_ops = &drm_gem_shmem_vm_ops,
+ .vm_ops = &drm_gem_uma_vm_ops,
};
int ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
@@ -435,7 +435,7 @@ ivpu_bo_create(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
if (flags & DRM_IVPU_BO_MAPPABLE) {
ivpu_bo_lock(bo);
- ret = drm_gem_shmem_vmap_locked(&bo->base, &map);
+ ret = drm_gem_uma_vmap_locked(&bo->base, &map);
ivpu_bo_unlock(bo);
if (ret)
@@ -475,7 +475,7 @@ void ivpu_bo_free(struct ivpu_bo *bo)
if (bo->flags & DRM_IVPU_BO_MAPPABLE) {
ivpu_bo_lock(bo);
- drm_gem_shmem_vunmap_locked(&bo->base, &map);
+ drm_gem_uma_vunmap_locked(&bo->base, &map);
ivpu_bo_unlock(bo);
}
diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h
index 0c3350f22b55..3e5d1a64deab 100644
--- a/drivers/accel/ivpu/ivpu_gem.h
+++ b/drivers/accel/ivpu/ivpu_gem.h
@@ -6,13 +6,13 @@
#define __IVPU_GEM_H__
#include <drm/drm_gem.h>
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
#include <drm/drm_mm.h>
struct ivpu_file_priv;
struct ivpu_bo {
- struct drm_gem_shmem_object base;
+ struct drm_gem_uma_object base;
struct ivpu_mmu_context *ctx;
struct list_head bo_list_node;
struct drm_mm_node mm_node;
--
2.52.0
^ permalink raw reply related [flat|nested] 26+ messages in thread

* Re: [PATCH 12/13] accel/ivpu: Use GEM-UMA helpers for memory management
2025-12-09 13:42 ` [PATCH 12/13] accel/ivpu: " Thomas Zimmermann
@ 2025-12-09 14:24 ` Karol Wachowski
2025-12-09 14:25 ` Karol Wachowski
1 sibling, 0 replies; 26+ messages in thread
From: Karol Wachowski @ 2025-12-09 14:24 UTC (permalink / raw)
To: Thomas Zimmermann, boris.brezillon, simona, airlied, mripard,
maarten.lankhorst, ogabbay, mamin506, lizhi.hou, maciej.falkowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc
On 12/9/2025 2:42 PM, Thomas Zimmermann wrote:
> Convert ivpu from GEM-SHMEM to GEM-UMA. The latter is just a copy,
> so this change it merely renaming symbols. No functional changes.
>
> GEM-SHMEM will become more self-contained for drivers without specific
> memory management. GEM-UMA's interfaces will remain flexible for drivers
> with UMA hardware, such as ivpu.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> drivers/accel/ivpu/Kconfig | 2 +-
> drivers/accel/ivpu/ivpu_gem.c | 36 +++++++++++++++++------------------
> drivers/accel/ivpu/ivpu_gem.h | 4 ++--
> 3 files changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/accel/ivpu/Kconfig b/drivers/accel/ivpu/Kconfig
> index 9e055b5ce03d..49ca139a9d31 100644
> --- a/drivers/accel/ivpu/Kconfig
> +++ b/drivers/accel/ivpu/Kconfig
> @@ -5,8 +5,8 @@ config DRM_ACCEL_IVPU
> depends on DRM_ACCEL
> depends on X86_64 && !UML
> depends on PCI && PCI_MSI
> + select DRM_GEM_UMA_HELPER
> select FW_LOADER
> - select DRM_GEM_SHMEM_HELPER
> select GENERIC_ALLOCATOR
> select WANT_DEV_COREDUMP
> help
> diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
> index ece68f570b7e..7f4aeb482efb 100644
> --- a/drivers/accel/ivpu/ivpu_gem.c
> +++ b/drivers/accel/ivpu/ivpu_gem.c
> @@ -84,7 +84,7 @@ int __must_check ivpu_bo_bind(struct ivpu_bo *bo)
> if (bo->base.base.import_attach)
> sgt = ivpu_bo_map_attachment(vdev, bo);
> else
> - sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
> + sgt = drm_gem_uma_get_pages_sgt(&bo->base);
> if (IS_ERR(sgt)) {
> ret = PTR_ERR(sgt);
> ivpu_err(vdev, "Failed to map BO in IOMMU: %d\n", ret);
> @@ -223,7 +223,7 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev,
>
> get_dma_buf(dma_buf);
>
> - obj = drm_gem_shmem_prime_import_sg_table(dev, attach, NULL);
> + obj = drm_gem_uma_prime_import_sg_table(dev, attach, NULL);
> if (IS_ERR(obj)) {
> ret = PTR_ERR(obj);
> goto fail_detach;
> @@ -251,7 +251,7 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev,
>
> static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags)
> {
> - struct drm_gem_shmem_object *shmem;
> + struct drm_gem_uma_object *uma;
> struct ivpu_bo *bo;
>
> switch (flags & DRM_IVPU_BO_CACHE_MASK) {
> @@ -262,11 +262,11 @@ static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 fla
> return ERR_PTR(-EINVAL);
> }
>
> - shmem = drm_gem_shmem_create(&vdev->drm, size);
> - if (IS_ERR(shmem))
> - return ERR_CAST(shmem);
> + uma = drm_gem_uma_create(&vdev->drm, size);
> + if (IS_ERR(uma))
> + return ERR_CAST(uma);
>
> - bo = to_ivpu_bo(&shmem->base);
> + bo = to_ivpu_bo(&uma->base);
> bo->base.map_wc = flags & DRM_IVPU_BO_WC;
> bo->flags = flags;
>
> @@ -330,7 +330,7 @@ static void ivpu_gem_bo_free(struct drm_gem_object *obj)
>
> drm_WARN_ON(obj->dev, refcount_read(&bo->base.pages_use_count) > 1);
> drm_WARN_ON(obj->dev, bo->base.base.vma_node.vm_files.rb_node);
> - drm_gem_shmem_free(&bo->base);
> + drm_gem_uma_free(&bo->base);
> }
>
> static enum drm_gem_object_status ivpu_gem_status(struct drm_gem_object *obj)
> @@ -347,15 +347,15 @@ static enum drm_gem_object_status ivpu_gem_status(struct drm_gem_object *obj)
> static const struct drm_gem_object_funcs ivpu_gem_funcs = {
> .free = ivpu_gem_bo_free,
> .open = ivpu_gem_bo_open,
> - .print_info = drm_gem_shmem_object_print_info,
> - .pin = drm_gem_shmem_object_pin,
> - .unpin = drm_gem_shmem_object_unpin,
> - .get_sg_table = drm_gem_shmem_object_get_sg_table,
> - .vmap = drm_gem_shmem_object_vmap,
> - .vunmap = drm_gem_shmem_object_vunmap,
> - .mmap = drm_gem_shmem_object_mmap,
> + .print_info = drm_gem_uma_object_print_info,
> + .pin = drm_gem_uma_object_pin,
> + .unpin = drm_gem_uma_object_unpin,
> + .get_sg_table = drm_gem_uma_object_get_sg_table,
> + .vmap = drm_gem_uma_object_vmap,
> + .vunmap = drm_gem_uma_object_vunmap,
> + .mmap = drm_gem_uma_object_mmap,
> .status = ivpu_gem_status,
> - .vm_ops = &drm_gem_shmem_vm_ops,
> + .vm_ops = &drm_gem_uma_vm_ops,
> };
>
> int ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> @@ -435,7 +435,7 @@ ivpu_bo_create(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
>
> if (flags & DRM_IVPU_BO_MAPPABLE) {
> ivpu_bo_lock(bo);
> - ret = drm_gem_shmem_vmap_locked(&bo->base, &map);
> + ret = drm_gem_uma_vmap_locked(&bo->base, &map);
> ivpu_bo_unlock(bo);
>
> if (ret)
> @@ -475,7 +475,7 @@ void ivpu_bo_free(struct ivpu_bo *bo)
>
> if (bo->flags & DRM_IVPU_BO_MAPPABLE) {
> ivpu_bo_lock(bo);
> - drm_gem_shmem_vunmap_locked(&bo->base, &map);
> + drm_gem_uma_vunmap_locked(&bo->base, &map);
> ivpu_bo_unlock(bo);
> }
>
> diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h
> index 0c3350f22b55..3e5d1a64deab 100644
> --- a/drivers/accel/ivpu/ivpu_gem.h
> +++ b/drivers/accel/ivpu/ivpu_gem.h
> @@ -6,13 +6,13 @@
> #define __IVPU_GEM_H__
>
> #include <drm/drm_gem.h>
> -#include <drm/drm_gem_shmem_helper.h>
> +#include <drm/drm_gem_uma_helper.h>
> #include <drm/drm_mm.h>
>
> struct ivpu_file_priv;
>
> struct ivpu_bo {
> - struct drm_gem_shmem_object base;
> + struct drm_gem_uma_object base;
> struct ivpu_mmu_context *ctx;
> struct list_head bo_list_node;
> struct drm_mm_node mm_node;
Reviewed-by: Karol Wachowski <karol.wachowski@linux.intel.com>
^ permalink raw reply [flat|nested] 26+ messages in thread

* Re: [PATCH 12/13] accel/ivpu: Use GEM-UMA helpers for memory management
2025-12-09 13:42 ` [PATCH 12/13] accel/ivpu: " Thomas Zimmermann
2025-12-09 14:24 ` Karol Wachowski
@ 2025-12-09 14:25 ` Karol Wachowski
1 sibling, 0 replies; 26+ messages in thread
From: Karol Wachowski @ 2025-12-09 14:25 UTC (permalink / raw)
To: Thomas Zimmermann, boris.brezillon, simona, airlied, mripard,
maarten.lankhorst, ogabbay, mamin506, lizhi.hou, maciej.falkowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc
On 12/9/2025 2:42 PM, Thomas Zimmermann wrote:
> Convert ivpu from GEM-SHMEM to GEM-UMA. The latter is currently just
> a copy of the former, so this change merely renames symbols. No functional changes.
>
> GEM-SHMEM will become more self-contained for drivers without specific
> memory management. GEM-UMA's interfaces will remain flexible for drivers
> with UMA hardware, such as ivpu.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
> drivers/accel/ivpu/Kconfig | 2 +-
> drivers/accel/ivpu/ivpu_gem.c | 36 +++++++++++++++++------------------
> drivers/accel/ivpu/ivpu_gem.h | 4 ++--
> 3 files changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/accel/ivpu/Kconfig b/drivers/accel/ivpu/Kconfig
> index 9e055b5ce03d..49ca139a9d31 100644
> --- a/drivers/accel/ivpu/Kconfig
> +++ b/drivers/accel/ivpu/Kconfig
> @@ -5,8 +5,8 @@ config DRM_ACCEL_IVPU
> depends on DRM_ACCEL
> depends on X86_64 && !UML
> depends on PCI && PCI_MSI
> + select DRM_GEM_UMA_HELPER
> select FW_LOADER
> - select DRM_GEM_SHMEM_HELPER
> select GENERIC_ALLOCATOR
> select WANT_DEV_COREDUMP
> help
> diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
> index ece68f570b7e..7f4aeb482efb 100644
> --- a/drivers/accel/ivpu/ivpu_gem.c
> +++ b/drivers/accel/ivpu/ivpu_gem.c
> @@ -84,7 +84,7 @@ int __must_check ivpu_bo_bind(struct ivpu_bo *bo)
> if (bo->base.base.import_attach)
> sgt = ivpu_bo_map_attachment(vdev, bo);
> else
> - sgt = drm_gem_shmem_get_pages_sgt(&bo->base);
> + sgt = drm_gem_uma_get_pages_sgt(&bo->base);
> if (IS_ERR(sgt)) {
> ret = PTR_ERR(sgt);
> ivpu_err(vdev, "Failed to map BO in IOMMU: %d\n", ret);
> @@ -223,7 +223,7 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev,
>
> get_dma_buf(dma_buf);
>
> - obj = drm_gem_shmem_prime_import_sg_table(dev, attach, NULL);
> + obj = drm_gem_uma_prime_import_sg_table(dev, attach, NULL);
> if (IS_ERR(obj)) {
> ret = PTR_ERR(obj);
> goto fail_detach;
> @@ -251,7 +251,7 @@ struct drm_gem_object *ivpu_gem_prime_import(struct drm_device *dev,
>
> static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags)
> {
> - struct drm_gem_shmem_object *shmem;
> + struct drm_gem_uma_object *uma;
> struct ivpu_bo *bo;
>
> switch (flags & DRM_IVPU_BO_CACHE_MASK) {
> @@ -262,11 +262,11 @@ static struct ivpu_bo *ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 fla
> return ERR_PTR(-EINVAL);
> }
>
> - shmem = drm_gem_shmem_create(&vdev->drm, size);
> - if (IS_ERR(shmem))
> - return ERR_CAST(shmem);
> + uma = drm_gem_uma_create(&vdev->drm, size);
> + if (IS_ERR(uma))
> + return ERR_CAST(uma);
>
> - bo = to_ivpu_bo(&shmem->base);
> + bo = to_ivpu_bo(&uma->base);
> bo->base.map_wc = flags & DRM_IVPU_BO_WC;
> bo->flags = flags;
>
> @@ -330,7 +330,7 @@ static void ivpu_gem_bo_free(struct drm_gem_object *obj)
>
> drm_WARN_ON(obj->dev, refcount_read(&bo->base.pages_use_count) > 1);
> drm_WARN_ON(obj->dev, bo->base.base.vma_node.vm_files.rb_node);
> - drm_gem_shmem_free(&bo->base);
> + drm_gem_uma_free(&bo->base);
> }
>
> static enum drm_gem_object_status ivpu_gem_status(struct drm_gem_object *obj)
> @@ -347,15 +347,15 @@ static enum drm_gem_object_status ivpu_gem_status(struct drm_gem_object *obj)
> static const struct drm_gem_object_funcs ivpu_gem_funcs = {
> .free = ivpu_gem_bo_free,
> .open = ivpu_gem_bo_open,
> - .print_info = drm_gem_shmem_object_print_info,
> - .pin = drm_gem_shmem_object_pin,
> - .unpin = drm_gem_shmem_object_unpin,
> - .get_sg_table = drm_gem_shmem_object_get_sg_table,
> - .vmap = drm_gem_shmem_object_vmap,
> - .vunmap = drm_gem_shmem_object_vunmap,
> - .mmap = drm_gem_shmem_object_mmap,
> + .print_info = drm_gem_uma_object_print_info,
> + .pin = drm_gem_uma_object_pin,
> + .unpin = drm_gem_uma_object_unpin,
> + .get_sg_table = drm_gem_uma_object_get_sg_table,
> + .vmap = drm_gem_uma_object_vmap,
> + .vunmap = drm_gem_uma_object_vunmap,
> + .mmap = drm_gem_uma_object_mmap,
> .status = ivpu_gem_status,
> - .vm_ops = &drm_gem_shmem_vm_ops,
> + .vm_ops = &drm_gem_uma_vm_ops,
> };
>
> int ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> @@ -435,7 +435,7 @@ ivpu_bo_create(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
>
> if (flags & DRM_IVPU_BO_MAPPABLE) {
> ivpu_bo_lock(bo);
> - ret = drm_gem_shmem_vmap_locked(&bo->base, &map);
> + ret = drm_gem_uma_vmap_locked(&bo->base, &map);
> ivpu_bo_unlock(bo);
>
> if (ret)
> @@ -475,7 +475,7 @@ void ivpu_bo_free(struct ivpu_bo *bo)
>
> if (bo->flags & DRM_IVPU_BO_MAPPABLE) {
> ivpu_bo_lock(bo);
> - drm_gem_shmem_vunmap_locked(&bo->base, &map);
> + drm_gem_uma_vunmap_locked(&bo->base, &map);
> ivpu_bo_unlock(bo);
> }
>
> diff --git a/drivers/accel/ivpu/ivpu_gem.h b/drivers/accel/ivpu/ivpu_gem.h
> index 0c3350f22b55..3e5d1a64deab 100644
> --- a/drivers/accel/ivpu/ivpu_gem.h
> +++ b/drivers/accel/ivpu/ivpu_gem.h
> @@ -6,13 +6,13 @@
> #define __IVPU_GEM_H__
>
> #include <drm/drm_gem.h>
> -#include <drm/drm_gem_shmem_helper.h>
> +#include <drm/drm_gem_uma_helper.h>
> #include <drm/drm_mm.h>
>
> struct ivpu_file_priv;
>
> struct ivpu_bo {
> - struct drm_gem_shmem_object base;
> + struct drm_gem_uma_object base;
> struct ivpu_mmu_context *ctx;
> struct list_head bo_list_node;
> struct drm_mm_node mm_node;
Reviewed-by: Karol Wachowski <karol.wachowski@linux.intel.com>
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH 13/13] accel/rocket: Use GEM-UMA helpers for memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (11 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 12/13] accel/ivpu: " Thomas Zimmermann
@ 2025-12-09 13:42 ` Thomas Zimmermann
2025-12-09 14:27 ` [RFC][PATCH 00/13] drm: Introduce GEM-UMA " Boris Brezillon
13 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 13:42 UTC (permalink / raw)
To: boris.brezillon, simona, airlied, mripard, maarten.lankhorst,
ogabbay, mamin506, lizhi.hou, maciej.falkowski, karol.wachowski,
tomeu, frank.binns, matt.coster, yuq825, robh, steven.price,
adrian.larumbe, liviu.dudau, mwen, kraxel, dmitry.osipenko,
gurchetansingh, olvaffe, corbet
Cc: dri-devel, lima, virtualization, linux-doc, Thomas Zimmermann
Convert rocket from GEM-SHMEM to GEM-UMA. The latter is just a copy,
so this change merely renames symbols. No functional changes.
GEM-SHMEM will become more self-contained for drivers without specific
memory management. GEM-UMA's interfaces will remain flexible for drivers
with UMA hardware, such as rocket.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
drivers/accel/rocket/Kconfig | 2 +-
drivers/accel/rocket/rocket_gem.c | 46 +++++++++++++++----------------
drivers/accel/rocket/rocket_gem.h | 6 ++--
3 files changed, 27 insertions(+), 27 deletions(-)
diff --git a/drivers/accel/rocket/Kconfig b/drivers/accel/rocket/Kconfig
index 16465abe0660..26f760afdd7a 100644
--- a/drivers/accel/rocket/Kconfig
+++ b/drivers/accel/rocket/Kconfig
@@ -6,8 +6,8 @@ config DRM_ACCEL_ROCKET
depends on (ARCH_ROCKCHIP && ARM64) || COMPILE_TEST
depends on ROCKCHIP_IOMMU || COMPILE_TEST
depends on MMU
+ select DRM_GEM_UMA_HELPER
select DRM_SCHED
- select DRM_GEM_SHMEM_HELPER
help
Choose this option if you have a Rockchip SoC that contains a
compatible Neural Processing Unit (NPU), such as the RK3588. Called by
diff --git a/drivers/accel/rocket/rocket_gem.c b/drivers/accel/rocket/rocket_gem.c
index 624c4ecf5a34..1c7b75413065 100644
--- a/drivers/accel/rocket/rocket_gem.c
+++ b/drivers/accel/rocket/rocket_gem.c
@@ -29,19 +29,19 @@ static void rocket_gem_bo_free(struct drm_gem_object *obj)
rocket_iommu_domain_put(bo->domain);
bo->domain = NULL;
- drm_gem_shmem_free(&bo->base);
+ drm_gem_uma_free(&bo->base);
}
static const struct drm_gem_object_funcs rocket_gem_funcs = {
.free = rocket_gem_bo_free,
- .print_info = drm_gem_shmem_object_print_info,
- .pin = drm_gem_shmem_object_pin,
- .unpin = drm_gem_shmem_object_unpin,
- .get_sg_table = drm_gem_shmem_object_get_sg_table,
- .vmap = drm_gem_shmem_object_vmap,
- .vunmap = drm_gem_shmem_object_vunmap,
- .mmap = drm_gem_shmem_object_mmap,
- .vm_ops = &drm_gem_shmem_vm_ops,
+ .print_info = drm_gem_uma_object_print_info,
+ .pin = drm_gem_uma_object_pin,
+ .unpin = drm_gem_uma_object_unpin,
+ .get_sg_table = drm_gem_uma_object_get_sg_table,
+ .vmap = drm_gem_uma_object_vmap,
+ .vunmap = drm_gem_uma_object_vunmap,
+ .mmap = drm_gem_uma_object_mmap,
+ .vm_ops = &drm_gem_uma_vm_ops,
};
struct drm_gem_object *rocket_gem_create_object(struct drm_device *dev, size_t size)
@@ -61,17 +61,17 @@ int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *
{
struct rocket_file_priv *rocket_priv = file->driver_priv;
struct drm_rocket_create_bo *args = data;
- struct drm_gem_shmem_object *shmem_obj;
+ struct drm_gem_uma_object *uma_obj;
struct rocket_gem_object *rkt_obj;
struct drm_gem_object *gem_obj;
struct sg_table *sgt;
int ret;
- shmem_obj = drm_gem_shmem_create(dev, args->size);
- if (IS_ERR(shmem_obj))
- return PTR_ERR(shmem_obj);
+ uma_obj = drm_gem_uma_create(dev, args->size);
+ if (IS_ERR(uma_obj))
+ return PTR_ERR(uma_obj);
- gem_obj = &shmem_obj->base;
+ gem_obj = &uma_obj->base;
rkt_obj = to_rocket_bo(gem_obj);
rkt_obj->driver_priv = rocket_priv;
@@ -84,7 +84,7 @@ int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *
if (ret)
goto err;
- sgt = drm_gem_shmem_get_pages_sgt(shmem_obj);
+ sgt = drm_gem_uma_get_pages_sgt(uma_obj);
if (IS_ERR(sgt)) {
ret = PTR_ERR(sgt);
goto err;
@@ -98,7 +98,7 @@ int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *
ret = iommu_map_sgtable(rocket_priv->domain->domain,
rkt_obj->mm.start,
- shmem_obj->sgt,
+ uma_obj->sgt,
IOMMU_READ | IOMMU_WRITE);
if (ret < 0 || ret < args->size) {
drm_err(dev, "failed to map buffer: size=%d request_size=%u\n",
@@ -120,7 +120,7 @@ int rocket_ioctl_create_bo(struct drm_device *dev, void *data, struct drm_file *
mutex_unlock(&rocket_priv->mm_lock);
err:
- drm_gem_shmem_object_free(gem_obj);
+ drm_gem_uma_object_free(gem_obj);
return ret;
}
@@ -130,7 +130,7 @@ int rocket_ioctl_prep_bo(struct drm_device *dev, void *data, struct drm_file *fi
struct drm_rocket_prep_bo *args = data;
unsigned long timeout = drm_timeout_abs_to_jiffies(args->timeout_ns);
struct drm_gem_object *gem_obj;
- struct drm_gem_shmem_object *shmem_obj;
+ struct drm_gem_uma_object *uma_obj;
long ret = 0;
if (args->reserved != 0) {
@@ -146,9 +146,9 @@ int rocket_ioctl_prep_bo(struct drm_device *dev, void *data, struct drm_file *fi
if (!ret)
ret = timeout ? -ETIMEDOUT : -EBUSY;
- shmem_obj = &to_rocket_bo(gem_obj)->base;
+ uma_obj = &to_rocket_bo(gem_obj)->base;
- dma_sync_sgtable_for_cpu(dev->dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
+ dma_sync_sgtable_for_cpu(dev->dev, uma_obj->sgt, DMA_BIDIRECTIONAL);
drm_gem_object_put(gem_obj);
@@ -158,7 +158,7 @@ int rocket_ioctl_prep_bo(struct drm_device *dev, void *data, struct drm_file *fi
int rocket_ioctl_fini_bo(struct drm_device *dev, void *data, struct drm_file *file)
{
struct drm_rocket_fini_bo *args = data;
- struct drm_gem_shmem_object *shmem_obj;
+ struct drm_gem_uma_object *uma_obj;
struct rocket_gem_object *rkt_obj;
struct drm_gem_object *gem_obj;
@@ -172,9 +172,9 @@ int rocket_ioctl_fini_bo(struct drm_device *dev, void *data, struct drm_file *fi
return -ENOENT;
rkt_obj = to_rocket_bo(gem_obj);
- shmem_obj = &rkt_obj->base;
+ uma_obj = &rkt_obj->base;
- dma_sync_sgtable_for_device(dev->dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
+ dma_sync_sgtable_for_device(dev->dev, uma_obj->sgt, DMA_BIDIRECTIONAL);
drm_gem_object_put(gem_obj);
diff --git a/drivers/accel/rocket/rocket_gem.h b/drivers/accel/rocket/rocket_gem.h
index 240430334509..d5ea539519e7 100644
--- a/drivers/accel/rocket/rocket_gem.h
+++ b/drivers/accel/rocket/rocket_gem.h
@@ -4,10 +4,10 @@
#ifndef __ROCKET_GEM_H__
#define __ROCKET_GEM_H__
-#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_gem_uma_helper.h>
struct rocket_gem_object {
- struct drm_gem_shmem_object base;
+ struct drm_gem_uma_object base;
struct rocket_file_priv *driver_priv;
@@ -28,7 +28,7 @@ int rocket_ioctl_fini_bo(struct drm_device *dev, void *data, struct drm_file *fi
static inline
struct rocket_gem_object *to_rocket_bo(struct drm_gem_object *obj)
{
- return container_of(to_drm_gem_shmem_obj(obj), struct rocket_gem_object, base);
+ return container_of(to_drm_gem_uma_obj(obj), struct rocket_gem_object, base);
}
#endif
--
2.52.0
* Re: [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management
2025-12-09 13:41 [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management Thomas Zimmermann
` (12 preceding siblings ...)
2025-12-09 13:42 ` [PATCH 13/13] accel/rocket: " Thomas Zimmermann
@ 2025-12-09 14:27 ` Boris Brezillon
2025-12-09 14:51 ` Thomas Zimmermann
13 siblings, 1 reply; 26+ messages in thread
From: Boris Brezillon @ 2025-12-09 14:27 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: simona, airlied, mripard, maarten.lankhorst, ogabbay, mamin506,
lizhi.hou, maciej.falkowski, karol.wachowski, tomeu, frank.binns,
matt.coster, yuq825, robh, steven.price, adrian.larumbe,
liviu.dudau, mwen, kraxel, dmitry.osipenko, gurchetansingh,
olvaffe, corbet, dri-devel, lima, virtualization, linux-doc
On Tue, 9 Dec 2025 14:41:57 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Duplicate GEM-SHMEM to GEM-UMA. Convert all DRM drivers for UMA
> systems if they currently use GEM-SHMEM.
>
> Many DRM drivers for hardware with Unified Memory Architecture (UMA)
> currently builds upon GEM-SHMEM and extends the helpers with features
> for managing the GPU MMU. This allows the GPU to access the GEM buffer
> content for its operation.
>
> There is another, larger, set of DRM drivers that use GEM-SHMEM merely
> as buffer management with no hardware support. These drivers copy the
> buffer content to the GPU on each page flip. The GPU itself has no direct
> access. Hardware of this type is usually in servers, behind slow busses
> (SPI, USB), or provided by firmware (drivers in sysfb/).
>
> After some discussion with Boris on the future of GEM-SHMEM, it seems
> to me that both use cases more and more diverge from each other. The
> most prominent example is the implementation of gem_prime_import,
> where both use cases use distinct approaches.
>
> So we discussed the introduction of a GEM-UMA helper library for
> UMA-based hardware. GEM-UMA will remain flexible enough for drivers
> to extend it for their use case. GEM-SHMEM will become focused on the
> simple-hardware use case. The benefit for both libraries is that they
> will be easier to understand and maintain. GEM-SHMEM can be simplified
> significantly, I think.
>
> This RFC series introduces GEM-UMA and converts the UMA-related drivers.
>
> Patches 1 and 2 fix issues in GEM-SHMEM, so that we don't duplicate
> erroneous code.
>
> Patch 3 copies GEM-SHMEM to GEM-UMA. Patch 4 then does some obvious
> cleanups of unnecessary code.
Instead of copying the code as-is, I'd rather take a step back and think
about what we need and how we want to handle more complex stuff, like
reclaim. I've started working on a shrinker for panthor [1], and as part
of this series, I've added a commit implementing just enough to replace
what gem-shmem currently provides. Feels like the new GEM-UMA thing
could be designed on a composition rather than inheritance model,
where we have sub-components (backing, cpu_map, gpu_map) that can be
pulled in and re-used by the driver implementation. The common helpers
would take those sub-components instead of a plain GEM object. That
would leave drivers free to decide how their internal gem_object fields are
laid out and wouldn't require overloading the ->gem_create_object()
function. It seems to me that it would better match the model you were
describing the other day.
>
> Patches 5 to 13 update the drivers that can be converted to GEM-UMA.
> These changes are just symbol renaming. There are so far no functional
> differences between the memory managers.
>
> I gave GEM-UMA some smoke testing by running virtgpu.
[1]https://gitlab.freedesktop.org/bbrezillon/linux/-/commits/panthor-shrinker-revisited/drivers?ref_type=heads
[2]https://gitlab.freedesktop.org/bbrezillon/linux/-/commit/4e6927fc2c60265b77a5a88013f55377bc4f4ab3
>
> Thomas Zimmermann (13):
> drm/gem-shmem: Fix typos in documentation
> drm/gem-shmem: Fix the MODULE_LICENSE() string
> drm: Add GEM-UMA helpers for memory management
> drm/gem-uma: Remove unused interfaces
> drm/imagination: Use GEM-UMA helpers for memory management
> drm/lima: Use GEM-UMA helpers for memory management
> drm/panfrost: Use GEM-UMA helpers for memory management
> drm/panthor: Use GEM-UMA helpers for memory management
> drm/v3d: Use GEM-UMA helpers for memory management
> drm/virtgpu: Use GEM-UMA helpers for memory management
> accel/amdxdna: Use GEM-UMA helpers for memory management
> accel/ivpu: Use GEM-UMA helpers for memory management
> accel/rocket: Use GEM-UMA helpers for memory management
>
> Documentation/gpu/drm-mm.rst | 12 +
> drivers/accel/amdxdna/Kconfig | 2 +-
> drivers/accel/amdxdna/aie2_ctx.c | 1 -
> drivers/accel/amdxdna/aie2_message.c | 1 -
> drivers/accel/amdxdna/aie2_pci.c | 1 -
> drivers/accel/amdxdna/aie2_psp.c | 1 -
> drivers/accel/amdxdna/aie2_smu.c | 1 -
> drivers/accel/amdxdna/amdxdna_ctx.c | 7 +-
> drivers/accel/amdxdna/amdxdna_gem.c | 49 +-
> drivers/accel/amdxdna/amdxdna_gem.h | 5 +-
> .../accel/amdxdna/amdxdna_mailbox_helper.c | 1 -
> drivers/accel/amdxdna/amdxdna_pci_drv.c | 1 -
> drivers/accel/amdxdna/amdxdna_sysfs.c | 1 -
> drivers/accel/ivpu/Kconfig | 2 +-
> drivers/accel/ivpu/ivpu_gem.c | 36 +-
> drivers/accel/ivpu/ivpu_gem.h | 4 +-
> drivers/accel/rocket/Kconfig | 2 +-
> drivers/accel/rocket/rocket_gem.c | 46 +-
> drivers/accel/rocket/rocket_gem.h | 6 +-
> drivers/gpu/drm/Kconfig | 9 +
> drivers/gpu/drm/Kconfig.debug | 1 +
> drivers/gpu/drm/Makefile | 4 +
> drivers/gpu/drm/drm_fbdev_uma.c | 203 +++++
> drivers/gpu/drm/drm_gem_shmem_helper.c | 5 +-
> drivers/gpu/drm/drm_gem_uma_helper.c | 787 ++++++++++++++++++
> drivers/gpu/drm/imagination/Kconfig | 4 +-
> drivers/gpu/drm/imagination/pvr_drv.c | 2 +-
> drivers/gpu/drm/imagination/pvr_free_list.c | 2 +-
> drivers/gpu/drm/imagination/pvr_gem.c | 74 +-
> drivers/gpu/drm/imagination/pvr_gem.h | 12 +-
> drivers/gpu/drm/lima/Kconfig | 4 +-
> drivers/gpu/drm/lima/lima_drv.c | 2 +-
> drivers/gpu/drm/lima/lima_gem.c | 30 +-
> drivers/gpu/drm/lima/lima_gem.h | 6 +-
> drivers/gpu/drm/panfrost/Kconfig | 2 +-
> drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +-
> drivers/gpu/drm/panfrost/panfrost_gem.c | 30 +-
> drivers/gpu/drm/panfrost/panfrost_gem.h | 6 +-
> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 30 +-
> drivers/gpu/drm/panfrost/panfrost_mmu.c | 8 +-
> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 +-
> drivers/gpu/drm/panthor/Kconfig | 2 +-
> drivers/gpu/drm/panthor/panthor_drv.c | 2 +-
> drivers/gpu/drm/panthor/panthor_fw.c | 4 +-
> drivers/gpu/drm/panthor/panthor_gem.c | 40 +-
> drivers/gpu/drm/panthor/panthor_gem.h | 8 +-
> drivers/gpu/drm/panthor/panthor_mmu.c | 10 +-
> drivers/gpu/drm/panthor/panthor_sched.c | 1 -
> drivers/gpu/drm/tests/Makefile | 1 +
> drivers/gpu/drm/tests/drm_gem_uma_test.c | 385 +++++++++
> drivers/gpu/drm/v3d/Kconfig | 2 +-
> drivers/gpu/drm/v3d/v3d_bo.c | 45 +-
> drivers/gpu/drm/v3d/v3d_drv.h | 4 +-
> drivers/gpu/drm/v3d/v3d_mmu.c | 9 +-
> drivers/gpu/drm/virtio/Kconfig | 4 +-
> drivers/gpu/drm/virtio/virtgpu_drv.c | 4 +-
> drivers/gpu/drm/virtio/virtgpu_drv.h | 12 +-
> drivers/gpu/drm/virtio/virtgpu_object.c | 64 +-
> drivers/gpu/drm/virtio/virtgpu_plane.c | 6 +-
> drivers/gpu/drm/virtio/virtgpu_vq.c | 6 +-
> include/drm/drm_fbdev_uma.h | 20 +
> include/drm/drm_gem_uma_helper.h | 293 +++++++
> 62 files changed, 2018 insertions(+), 312 deletions(-)
> create mode 100644 drivers/gpu/drm/drm_fbdev_uma.c
> create mode 100644 drivers/gpu/drm/drm_gem_uma_helper.c
> create mode 100644 drivers/gpu/drm/tests/drm_gem_uma_test.c
> create mode 100644 include/drm/drm_fbdev_uma.h
> create mode 100644 include/drm/drm_gem_uma_helper.h
>
>
> base-commit: 0a21e96e0b6840d2a4e0b45a957679eeddeb4362
> prerequisite-patch-id: c67e5d886a47b7d0266d81100837557fda34cb24
> prerequisite-patch-id: a5a973e527c88a5b47053d7a72aefe0b550197cb
> prerequisite-patch-id: 719d09751d38f5da743beed6266585ee063e1e29
> prerequisite-patch-id: 0bbc85bc6b528c32592e07f4ceafa51795c4cad9
> prerequisite-patch-id: c856d9c8a026e3244c44ec829e426e0ad4a685ab
> prerequisite-patch-id: 13441c9ed3062ae1448a53086559dfcbbd578177
> prerequisite-patch-id: 951c039657c1f58e4b6e36bc01c7a1c69ed59767
> prerequisite-patch-id: 4370b8b803ca439666fb9d2beb862f6e78347ce3
> prerequisite-patch-id: ebbaad226ed599f7aad4784fb3f4aaebe34cb110
> prerequisite-patch-id: cb907c3e3e14de7f4d13b429f3a2a88621a8a9fe
> prerequisite-patch-id: 0e243b426742122b239af59e36d742da5795a8b1
> prerequisite-patch-id: 120f97fa1af9891375a0dcf52c51c1907b01fe6a
* Re: [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management
2025-12-09 14:27 ` [RFC][PATCH 00/13] drm: Introduce GEM-UMA " Boris Brezillon
@ 2025-12-09 14:51 ` Thomas Zimmermann
2025-12-09 15:30 ` Boris Brezillon
0 siblings, 1 reply; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-09 14:51 UTC (permalink / raw)
To: Boris Brezillon
Cc: simona, airlied, mripard, maarten.lankhorst, ogabbay, mamin506,
lizhi.hou, maciej.falkowski, karol.wachowski, tomeu, frank.binns,
matt.coster, yuq825, robh, steven.price, adrian.larumbe,
liviu.dudau, mwen, kraxel, dmitry.osipenko, gurchetansingh,
olvaffe, corbet, dri-devel, lima, virtualization, linux-doc
Hi
Am 09.12.25 um 15:27 schrieb Boris Brezillon:
> On Tue, 9 Dec 2025 14:41:57 +0100
> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
>> Duplicate GEM-SHMEM to GEM-UMA. Convert all DRM drivers for UMA
>> systems if they currently use GEM-SHMEM.
>>
>> Many DRM drivers for hardware with Unified Memory Architecture (UMA)
>> currently builds upon GEM-SHMEM and extends the helpers with features
>> for managing the GPU MMU. This allows the GPU to access the GEM buffer
>> content for its operation.
>>
>> There is another, larger, set of DRM drivers that use GEM-SHMEM merely
>> as buffer management with no hardware support. These drivers copy the
>> buffer content to the GPU on each page flip. The GPU itself has no direct
>> access. Hardware of this type is usually in servers, behind slow busses
>> (SPI, USB), or provided by firmware (drivers in sysfb/).
>>
>> After some discussion with Boris on the future of GEM-SHMEM, it seems
>> to me that both use cases more and more diverge from each other. The
>> most prominent example is the implementation of gem_prime_import,
>> where both use cases use distinct approaches.
>>
>> So we discussed the introduction of a GEM-UMA helper library for
>> UMA-based hardware. GEM-UMA will remain flexible enough for drivers
>> to extend it for their use case. GEM-SHMEM will become focused on the
>> simple-hardware use case. The benefit for both libraries is that they
>> will be easier to understand and maintain. GEM-SHMEM can be simplified
>> significantly, I think.
>>
>> This RFC series introduces GEM-UMA and converts the UMA-related drivers.
>>
>> Patches 1 and 2 fix issues in GEM-SHMEM, so that we don't duplicate
>> erroneous code.
>>
>> Patch 3 copies GEM-SHMEM to GEM-UMA. Patch 4 then does some obvious
>> cleanups of unnecessary code.
> Instead of copying the code as-is, I'd rather take a step back and think
> about what we need and how we want to handle more complex stuff, like
> reclaim. I've started working on a shrinker for panthor [1], and as part
> of this series, I've added a commit implementing just enough to replace
> what gem-shmem currently provides. Feels like the new GEM-UMA thing
> could be designed on a composition rather than inheritance model,
> where we have sub-components (backing, cpu_map, gpu_map) that can be
> pulled in and re-used by the driver implementation. The common helpers
> would take those sub-components instead of a plain GEM object. That
> would leave the drivers free of how their internal gem_object fields are
> laid out and wouldn't require overloading the ->gem_create_object()
> function. It seems to be that it would better match the model you were
> describing the other day.
Yeah, I've seen your update to that series. Making individual parts of
the memory manager freely composable with each other is a fine idea.
But the flipside is that I also want the simple drivers to move away
from the flexible approach that GEM-SHMEM currently takes. There are
many drivers that do not need or want that. These drivers benefit from
something that is self contained. Many of the drivers are also hardly
maintained, so simplifying things will also be helpful.
I could have added a new GEM implementation for these drivers, but there
are fewer UMA drivers to convert and the GEM-UMA naming generally fits
better than GEM-SHMEM.
I'd rather have GEM-UMA and evolve it from where it stands now; and also
evolve GEM-SHMEM in a different direction. There's a difference in
concepts here.
Best regards
Thomas
>
>> Patches 5 to 13 update the drivers that can be converted to GEM-UMA.
>> These changes are just symbol renaming. There are so far no functional
>> differences between the memory managers.
>>
>> I gave GEM-UMA some smoke testing by running virtgpu.
> [1]https://gitlab.freedesktop.org/bbrezillon/linux/-/commits/panthor-shrinker-revisited/drivers?ref_type=heads
> [2]https://gitlab.freedesktop.org/bbrezillon/linux/-/commit/4e6927fc2c60265b77a5a88013f55377bc4f4ab3
>
>> Thomas Zimmermann (13):
>> drm/gem-shmem: Fix typos in documentation
>> drm/gem-shmem: Fix the MODULE_LICENSE() string
>> drm: Add GEM-UMA helpers for memory management
>> drm/gem-uma: Remove unused interfaces
>> drm/imagination: Use GEM-UMA helpers for memory management
>> drm/lima: Use GEM-UMA helpers for memory management
>> drm/panfrost: Use GEM-UMA helpers for memory management
>> drm/panthor: Use GEM-UMA helpers for memory management
>> drm/v3d: Use GEM-UMA helpers for memory management
>> drm/virtgpu: Use GEM-UMA helpers for memory management
>> accel/amdxdna: Use GEM-UMA helpers for memory management
>> accel/ivpu: Use GEM-UMA helpers for memory management
>> accel/rocket: Use GEM-UMA helpers for memory management
>>
>> Documentation/gpu/drm-mm.rst | 12 +
>> drivers/accel/amdxdna/Kconfig | 2 +-
>> drivers/accel/amdxdna/aie2_ctx.c | 1 -
>> drivers/accel/amdxdna/aie2_message.c | 1 -
>> drivers/accel/amdxdna/aie2_pci.c | 1 -
>> drivers/accel/amdxdna/aie2_psp.c | 1 -
>> drivers/accel/amdxdna/aie2_smu.c | 1 -
>> drivers/accel/amdxdna/amdxdna_ctx.c | 7 +-
>> drivers/accel/amdxdna/amdxdna_gem.c | 49 +-
>> drivers/accel/amdxdna/amdxdna_gem.h | 5 +-
>> .../accel/amdxdna/amdxdna_mailbox_helper.c | 1 -
>> drivers/accel/amdxdna/amdxdna_pci_drv.c | 1 -
>> drivers/accel/amdxdna/amdxdna_sysfs.c | 1 -
>> drivers/accel/ivpu/Kconfig | 2 +-
>> drivers/accel/ivpu/ivpu_gem.c | 36 +-
>> drivers/accel/ivpu/ivpu_gem.h | 4 +-
>> drivers/accel/rocket/Kconfig | 2 +-
>> drivers/accel/rocket/rocket_gem.c | 46 +-
>> drivers/accel/rocket/rocket_gem.h | 6 +-
>> drivers/gpu/drm/Kconfig | 9 +
>> drivers/gpu/drm/Kconfig.debug | 1 +
>> drivers/gpu/drm/Makefile | 4 +
>> drivers/gpu/drm/drm_fbdev_uma.c | 203 +++++
>> drivers/gpu/drm/drm_gem_shmem_helper.c | 5 +-
>> drivers/gpu/drm/drm_gem_uma_helper.c | 787 ++++++++++++++++++
>> drivers/gpu/drm/imagination/Kconfig | 4 +-
>> drivers/gpu/drm/imagination/pvr_drv.c | 2 +-
>> drivers/gpu/drm/imagination/pvr_free_list.c | 2 +-
>> drivers/gpu/drm/imagination/pvr_gem.c | 74 +-
>> drivers/gpu/drm/imagination/pvr_gem.h | 12 +-
>> drivers/gpu/drm/lima/Kconfig | 4 +-
>> drivers/gpu/drm/lima/lima_drv.c | 2 +-
>> drivers/gpu/drm/lima/lima_gem.c | 30 +-
>> drivers/gpu/drm/lima/lima_gem.h | 6 +-
>> drivers/gpu/drm/panfrost/Kconfig | 2 +-
>> drivers/gpu/drm/panfrost/panfrost_drv.c | 2 +-
>> drivers/gpu/drm/panfrost/panfrost_gem.c | 30 +-
>> drivers/gpu/drm/panfrost/panfrost_gem.h | 6 +-
>> .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 30 +-
>> drivers/gpu/drm/panfrost/panfrost_mmu.c | 8 +-
>> drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 6 +-
>> drivers/gpu/drm/panthor/Kconfig | 2 +-
>> drivers/gpu/drm/panthor/panthor_drv.c | 2 +-
>> drivers/gpu/drm/panthor/panthor_fw.c | 4 +-
>> drivers/gpu/drm/panthor/panthor_gem.c | 40 +-
>> drivers/gpu/drm/panthor/panthor_gem.h | 8 +-
>> drivers/gpu/drm/panthor/panthor_mmu.c | 10 +-
>> drivers/gpu/drm/panthor/panthor_sched.c | 1 -
>> drivers/gpu/drm/tests/Makefile | 1 +
>> drivers/gpu/drm/tests/drm_gem_uma_test.c | 385 +++++++++
>> drivers/gpu/drm/v3d/Kconfig | 2 +-
>> drivers/gpu/drm/v3d/v3d_bo.c | 45 +-
>> drivers/gpu/drm/v3d/v3d_drv.h | 4 +-
>> drivers/gpu/drm/v3d/v3d_mmu.c | 9 +-
>> drivers/gpu/drm/virtio/Kconfig | 4 +-
>> drivers/gpu/drm/virtio/virtgpu_drv.c | 4 +-
>> drivers/gpu/drm/virtio/virtgpu_drv.h | 12 +-
>> drivers/gpu/drm/virtio/virtgpu_object.c | 64 +-
>> drivers/gpu/drm/virtio/virtgpu_plane.c | 6 +-
>> drivers/gpu/drm/virtio/virtgpu_vq.c | 6 +-
>> include/drm/drm_fbdev_uma.h | 20 +
>> include/drm/drm_gem_uma_helper.h | 293 +++++++
>> 62 files changed, 2018 insertions(+), 312 deletions(-)
>> create mode 100644 drivers/gpu/drm/drm_fbdev_uma.c
>> create mode 100644 drivers/gpu/drm/drm_gem_uma_helper.c
>> create mode 100644 drivers/gpu/drm/tests/drm_gem_uma_test.c
>> create mode 100644 include/drm/drm_fbdev_uma.h
>> create mode 100644 include/drm/drm_gem_uma_helper.h
>>
>>
>> base-commit: 0a21e96e0b6840d2a4e0b45a957679eeddeb4362
>> prerequisite-patch-id: c67e5d886a47b7d0266d81100837557fda34cb24
>> prerequisite-patch-id: a5a973e527c88a5b47053d7a72aefe0b550197cb
>> prerequisite-patch-id: 719d09751d38f5da743beed6266585ee063e1e29
>> prerequisite-patch-id: 0bbc85bc6b528c32592e07f4ceafa51795c4cad9
>> prerequisite-patch-id: c856d9c8a026e3244c44ec829e426e0ad4a685ab
>> prerequisite-patch-id: 13441c9ed3062ae1448a53086559dfcbbd578177
>> prerequisite-patch-id: 951c039657c1f58e4b6e36bc01c7a1c69ed59767
>> prerequisite-patch-id: 4370b8b803ca439666fb9d2beb862f6e78347ce3
>> prerequisite-patch-id: ebbaad226ed599f7aad4784fb3f4aaebe34cb110
>> prerequisite-patch-id: cb907c3e3e14de7f4d13b429f3a2a88621a8a9fe
>> prerequisite-patch-id: 0e243b426742122b239af59e36d742da5795a8b1
>> prerequisite-patch-id: 120f97fa1af9891375a0dcf52c51c1907b01fe6a
--
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
* Re: [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management
2025-12-09 14:51 ` Thomas Zimmermann
@ 2025-12-09 15:30 ` Boris Brezillon
2025-12-10 7:34 ` Thomas Zimmermann
0 siblings, 1 reply; 26+ messages in thread
From: Boris Brezillon @ 2025-12-09 15:30 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: simona, airlied, mripard, maarten.lankhorst, ogabbay, mamin506,
lizhi.hou, maciej.falkowski, karol.wachowski, tomeu, frank.binns,
matt.coster, yuq825, robh, steven.price, adrian.larumbe,
liviu.dudau, mwen, kraxel, dmitry.osipenko, gurchetansingh,
olvaffe, corbet, dri-devel, lima, virtualization, linux-doc
On Tue, 9 Dec 2025 15:51:21 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Hi
>
> Am 09.12.25 um 15:27 schrieb Boris Brezillon:
> > On Tue, 9 Dec 2025 14:41:57 +0100
> > Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >
> >> Duplicate GEM-SHMEM to GEM-UMA. Convert all DRM drivers for UMA
> >> systems if they currently use GEM-SHMEM.
> >>
> >> Many DRM drivers for hardware with Unified Memory Architecture (UMA)
> >> currently builds upon GEM-SHMEM and extends the helpers with features
> >> for managing the GPU MMU. This allows the GPU to access the GEM buffer
> >> content for its operation.
> >>
> >> There is another, larger, set of DRM drivers that use GEM-SHMEM merely
> >> as buffer management with no hardware support. These drivers copy the
> >> buffer content to the GPU on each page flip. The GPU itself has no direct
> >> access. Hardware of this type is usually in servers, behind slow busses
> >> (SPI, USB), or provided by firmware (drivers in sysfb/).
> >>
> >> After some discussion with Boris on the future of GEM-SHMEM, it seems
> >> to me that both use cases more and more diverge from each other. The
> >> most prominent example is the implementation of gem_prime_import,
> >> where both use cases use distinct approaches.
> >>
> >> So we discussed the introduction of a GEM-UMA helper library for
> >> UMA-based hardware. GEM-UMA will remain flexible enough for drivers
> >> to extend it for their use case. GEM-SHMEM will become focused on the
> >> simple-hardware use case. The benefit for both libraries is that they
> >> will be easier to understand and maintain. GEM-SHMEM can be simplified
> >> significantly, I think.
> >>
> >> This RFC series introduces GEM-UMA and converts the UMA-related drivers.
> >>
> >> Patches 1 and 2 fix issues in GEM-SHMEM, so that we don't duplicate
> >> erroneous code.
> >>
> >> Patch 3 copies GEM-SHMEM to GEM-UMA. Patch 4 then does some obvious
> >> cleanups of unnecessary code.
> > Instead of copying the code as-is, I'd rather take a step back and think
> > about what we need and how we want to handle more complex stuff, like
> > reclaim. I've started working on a shrinker for panthor [1], and as part
> > of this series, I've added a commit implementing just enough to replace
> > what gem-shmem currently provides. Feels like the new GEM-UMA thing
> > could be designed on a composition rather than inheritance model,
> > where we have sub-components (backing, cpu_map, gpu_map) that can be
> > pulled in and re-used by the driver implementation. The common helpers
> > would take those sub-components instead of a plain GEM object. That
> > would leave drivers free to decide how their internal gem_object fields are
> > laid out and wouldn't require overloading the ->gem_create_object()
> > function. It seems to me that it would better match the model you were
> > describing the other day.
>
> Yeah, I've seen your update to that series. Making individual parts of
> the memory manager freely composable with each other is a fine idea.
>
> But the flipside is that I also want the simple drivers to move away
> from the flexible approach that GEM-SHMEM currently takes. There are
> many drivers that do not need or want that. These drivers benefit from
> something that is self contained. Many of the drivers are also hardly
> maintained, so simplifying things will also be helpful.
>
> I could have added a new GEM implementation for these drivers, but there
> are fewer UMA drivers to convert and the GEM-UMA naming generally fits
> better than GEM-SHMEM.
>
> I'd rather have GEM-UMA and evolve it from where it stands now; and also
> evolve GEM-SHMEM in a different direction. There's a difference in
> concepts here.
Problem is, we'll be stuck trying to evolve gem-uma to something
cleaner because of the existing abuse of gem-shmem that you're moving
to gem-uma, so I'm not sure I like the idea to be honest. I'm all for
this gem-uma thing, but I'm not convinced rushing it in is the right
solution.
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management
2025-12-09 15:30 ` Boris Brezillon
@ 2025-12-10 7:34 ` Thomas Zimmermann
2025-12-10 9:21 ` Boris Brezillon
0 siblings, 1 reply; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-10 7:34 UTC (permalink / raw)
To: Boris Brezillon
Cc: simona, airlied, mripard, maarten.lankhorst, ogabbay, mamin506,
lizhi.hou, maciej.falkowski, karol.wachowski, tomeu, frank.binns,
matt.coster, yuq825, robh, steven.price, adrian.larumbe,
liviu.dudau, mwen, kraxel, dmitry.osipenko, gurchetansingh,
olvaffe, corbet, dri-devel, lima, virtualization, linux-doc
Hi
Am 09.12.25 um 16:30 schrieb Boris Brezillon:
> On Tue, 9 Dec 2025 15:51:21 +0100
> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
>> Hi
>>
>> Am 09.12.25 um 15:27 schrieb Boris Brezillon:
>>> On Tue, 9 Dec 2025 14:41:57 +0100
>>> Thomas Zimmermann <tzimmermann@suse.de> wrote:
>>>
>>>> Duplicate GEM-SHMEM to GEM-UMA. Convert all DRM drivers for UMA
>>>> systems if they currently use GEM-SHMEM.
>>>>
>>>> Many DRM drivers for hardware with Unified Memory Architecture (UMA)
>>>> currently build upon GEM-SHMEM and extend the helpers with features
>>>> for managing the GPU MMU. This allows the GPU to access the GEM buffer
>>>> content for its operation.
>>>>
>>>> There is another, larger, set of DRM drivers that use GEM-SHMEM merely
>>>> as buffer management with no hardware support. These drivers copy the
>>>> buffer content to the GPU on each page flip. The GPU itself has no direct
>>>> access. Hardware of this type is usually in servers, behind slow busses
>>>> (SPI, USB), or provided by firmware (drivers in sysfb/).
>>>>
>>>> After some discussion with Boris on the future of GEM-SHMEM, it seems
>>>> to me that both use cases more and more diverge from each other. The
>>>> most prominent example is the implementation of gem_prime_import,
>>>> where both use cases use distinct approaches.
>>>>
>>>> So we discussed the introduction of a GEM-UMA helper library for
>>>> UMA-based hardware. GEM-UMA will remain flexible enough for drivers
>>>> to extend it for their use case. GEM-SHMEM will become focused on the
>>>> simple-hardware use case. The benefit for both libraries is that they
>>>> will be easier to understand and maintain. GEM-SHMEM can be simplified
>>>> significantly, I think.
>>>>
>>>> This RFC series introduces GEM-UMA and converts the UMA-related drivers.
>>>>
>>>> Patches 1 and 2 fix issues in GEM-SHMEM, so that we don't duplicate
>>>> erroneous code.
>>>>
>>>> Patch 3 copies GEM-SHMEM to GEM-UMA. Patch 4 then does some obvious
>>>> cleanups of unnecessary code.
>>> Instead of copying the code as-is, I'd rather take a step back and think
>>> about what we need and how we want to handle more complex stuff, like
>>> reclaim. I've started working on a shrinker for panthor [1], and as part
>>> of this series, I've added a commit implementing just enough to replace
>>> what gem-shmem currently provides. Feels like the new GEM-UMA thing
>>> could be designed on a composition rather than inheritance model,
>>> where we have sub-components (backing, cpu_map, gpu_map) that can be
>>> pulled in and re-used by the driver implementation. The common helpers
>>> would take those sub-components instead of a plain GEM object. That
>>> would leave the drivers free to decide how their internal gem_object fields
>>> are laid out and wouldn't require overloading the ->gem_create_object()
>>> function. It seems to me that it would better match the model you were
>>> describing the other day.
>> Yeah, I've seen your update to that series. Making individual parts of
>> the memory manager freely composable with each other is a fine idea.
>>
>> But the flipside is that I also want the simple drivers to move away
>> from the flexible approach that GEM-SHMEM currently takes. There are
>> many drivers that do not need or want that. These drivers benefit from
>> something that is self contained. Many of the drivers are also hardly
>> maintained, so simplifying things will also be helpful.
>>
>> I could have added a new GEM implementation for these drivers, but there
>> are fewer UMA drivers to convert and the GEM-UMA naming generally fits
>> better than GEM-SHMEM.
>>
>> I'd rather have GEM-UMA and evolve it from where it stands now; and also
>> evolve GEM-SHMEM in a different direction. There's a difference in
>> concepts here.
> Problem is, we'll be stuck trying to evolve gem-uma to something
> cleaner because of the existing abuse of gem-shmem that you're moving
> to gem-uma, so I'm not sure I like the idea to be honest. I'm all for
> this gem-uma thing, but I'm not convinced rushing it in is the right
> solution.
The abuse you're talking about is what you mentioned about ivpu? How it
uses the gem-shmem internals, right? Ivpu can get its own copy of
gem-shmem, so that the developers can work it out. There's no benefit in
sharing code at all costs. Code sharing only makes sense if the callers
are conceptually aligned on what the callee does.
Also what stops you from fixing any of this in the context of gem-uma?
It should even be easier, as you won't have to keep my use cases in mind.
In parallel, gem-shmem could go in its own direction. I'd like to do
some changes and simplifications there, which conflict with where
gem-uma will be heading.
Best regards
Thomas
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management
2025-12-10 7:34 ` Thomas Zimmermann
@ 2025-12-10 9:21 ` Boris Brezillon
2025-12-10 9:57 ` Thomas Zimmermann
0 siblings, 1 reply; 26+ messages in thread
From: Boris Brezillon @ 2025-12-10 9:21 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: simona, airlied, mripard, maarten.lankhorst, ogabbay, mamin506,
lizhi.hou, maciej.falkowski, karol.wachowski, tomeu, frank.binns,
matt.coster, yuq825, robh, steven.price, adrian.larumbe,
liviu.dudau, mwen, kraxel, dmitry.osipenko, gurchetansingh,
olvaffe, corbet, dri-devel, lima, virtualization, linux-doc
Hi Thomas,
On Wed, 10 Dec 2025 08:34:02 +0100
Thomas Zimmermann <tzimmermann@suse.de> wrote:
> Hi
>
> Am 09.12.25 um 16:30 schrieb Boris Brezillon:
> > On Tue, 9 Dec 2025 15:51:21 +0100
> > Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >
> >> Hi
> >>
> >> Am 09.12.25 um 15:27 schrieb Boris Brezillon:
> >>> On Tue, 9 Dec 2025 14:41:57 +0100
> >>> Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >>>
> >>>> Duplicate GEM-SHMEM to GEM-UMA. Convert all DRM drivers for UMA
> >>>> systems if they currently use GEM-SHMEM.
> >>>>
> >>>> Many DRM drivers for hardware with Unified Memory Architecture (UMA)
> >>>> currently build upon GEM-SHMEM and extend the helpers with features
> >>>> for managing the GPU MMU. This allows the GPU to access the GEM buffer
> >>>> content for its operation.
> >>>>
> >>>> There is another, larger, set of DRM drivers that use GEM-SHMEM merely
> >>>> as buffer management with no hardware support. These drivers copy the
> >>>> buffer content to the GPU on each page flip. The GPU itself has no direct
> >>>> access. Hardware of this type is usually in servers, behind slow busses
> >>>> (SPI, USB), or provided by firmware (drivers in sysfb/).
> >>>>
> >>>> After some discussion with Boris on the future of GEM-SHMEM, it seems
> >>>> to me that both use cases more and more diverge from each other. The
> >>>> most prominent example is the implementation of gem_prime_import,
> >>>> where both use cases use distinct approaches.
> >>>>
> >>>> So we discussed the introduction of a GEM-UMA helper library for
> >>>> UMA-based hardware. GEM-UMA will remain flexible enough for drivers
> >>>> to extend it for their use case. GEM-SHMEM will become focused on the
> >>>> simple-hardware use case. The benefit for both libraries is that they
> >>>> will be easier to understand and maintain. GEM-SHMEM can be simplified
> >>>> significantly, I think.
> >>>>
> >>>> This RFC series introduces GEM-UMA and converts the UMA-related drivers.
> >>>>
> >>>> Patches 1 and 2 fix issues in GEM-SHMEM, so that we don't duplicate
> >>>> erroneous code.
> >>>>
> >>>> Patch 3 copies GEM-SHMEM to GEM-UMA. Patch 4 then does some obvious
> >>>> cleanups of unnecessary code.
> >>> Instead of copying the code as-is, I'd rather take a step back and think
> >>> about what we need and how we want to handle more complex stuff, like
> >>> reclaim. I've started working on a shrinker for panthor [1], and as part
> >>> of this series, I've added a commit implementing just enough to replace
> >>> what gem-shmem currently provides. Feels like the new GEM-UMA thing
> >>> could be designed on a composition rather than inheritance model,
> >>> where we have sub-components (backing, cpu_map, gpu_map) that can be
> >>> pulled in and re-used by the driver implementation. The common helpers
> >>> would take those sub-components instead of a plain GEM object. That
> >>> would leave the drivers free to decide how their internal gem_object fields
> >>> are laid out and wouldn't require overloading the ->gem_create_object()
> >>> function. It seems to me that it would better match the model you were
> >>> describing the other day.
> >> Yeah, I've seen your update to that series. Making individual parts of
> >> the memory manager freely composable with each other is a fine idea.
> >>
> >> But the flipside is that I also want the simple drivers to move away
> >> from the flexible approach that GEM-SHMEM currently takes. There are
> >> many drivers that do not need or want that. These drivers benefit from
> >> something that is self contained. Many of the drivers are also hardly
> >> maintained, so simplifying things will also be helpful.
> >>
> >> I could have added a new GEM implementation for these drivers, but there
> >> are fewer UMA drivers to convert and the GEM-UMA naming generally fits
> >> better than GEM-SHMEM.
> >>
> >> I'd rather have GEM-UMA and evolve it from where it stands now; and also
> >> evolve GEM-SHMEM in a different direction. There's a difference in
> >> concepts here.
> > Problem is, we'll be stuck trying to evolve gem-uma to something
> > cleaner because of the existing abuse of gem-shmem that you're moving
> > to gem-uma, so I'm not sure I like the idea to be honest. I'm all for
> > this gem-uma thing, but I'm not convinced rushing it in is the right
> > solution.
>
> The abuse you're talking about is what you mentioned about ivpu? How it
> uses the gem-shmem internals, right? Ivpu can get its own copy of
> gem-shmem, so that the developers can work it out.
There's that one, but there's also panfrost/lima manually filling the
pages array for their on-fault-allocation mechanism, and probably other
funky stuff I didn't notice yet.
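[Editor's note: to illustrate the kind of internals manipulation described above, here is a deliberately simplified, hypothetical C sketch. The struct and function names are invented stand-ins, not the real drm_gem_shmem API; the point is only that a driver populating the pages array behind the helpers' back leaves the helpers' bookkeeping out of sync.]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for struct drm_gem_shmem_object and struct page. */
struct fake_page { int allocated; };

struct fake_shmem_obj {
	struct fake_page **pages;	/* backing pages, owned by the helpers */
	size_t num_pages;
	unsigned int pages_use_count;	/* helpers' idea of who holds the pages */
};

/* What the helpers expect: pages are populated through this entry point. */
static int shmem_get_pages(struct fake_shmem_obj *obj)
{
	size_t i;

	if (obj->pages_use_count++ > 0)
		return 0;	/* already populated */

	obj->pages = calloc(obj->num_pages, sizeof(*obj->pages));
	if (!obj->pages)
		return -1;
	for (i = 0; i < obj->num_pages; i++) {
		obj->pages[i] = calloc(1, sizeof(struct fake_page));
		obj->pages[i]->allocated = 1;
	}
	return 0;
}

/*
 * The "abuse": a driver fault handler allocates one page on demand and
 * writes it straight into obj->pages[] without going through
 * shmem_get_pages(). pages_use_count now disagrees with reality, which
 * is exactly the internal-state manipulation the helpers cannot see.
 */
static void driver_fault_alloc(struct fake_shmem_obj *obj, size_t pgoff)
{
	if (!obj->pages)
		obj->pages = calloc(obj->num_pages, sizeof(*obj->pages));
	obj->pages[pgoff] = calloc(1, sizeof(struct fake_page));
	obj->pages[pgoff]->allocated = 1;
	/* note: pages_use_count deliberately not touched */
}
```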
> There's no benefit in
> sharing code at all cost. Code sharing only make sense if the callers
> are conceptually aligned on what the callee does.
At this point I think I'm clear on what you think about code sharing,
and I can pretty safely say I don't fully agree with you on this point
:P. IMHO, there's benefit in sharing code when the rules are clearly
defined, but gem-shmem has been so lax that we reached a point where it
has become the wild west, and everyone happily manipulates gem_shmem
internals without the core helpers knowing. That's when it becomes a
mess, on that, I agree.
>
> Also what stops you from fixing any of this in the context of gem-uma?
That's exactly what I want to do, except that, rather than fixing it,
I'd like to get it right from the start and progressively move existing
GPU/accel drivers using gem-shmem to gem-uma. If you blindly move every
GPU/accel driver currently using gem-shmem to gem-uma (which is just a
rebranded gem-shmem), you're just moving the problem, you're not
solving it. And all of a sudden, gem-uma, which I wanted to be this
clean slate with well defined rules, on top of which we can more
easily add building blocks for advanced stuff (reclaim, sparse
allocation, ...), is back to this wild west.
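[Editor's note: purely as an illustration, the composition model described here might look roughly like the following C sketch. All type and function names are hypothetical; nothing below exists in the kernel.]

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Composition instead of inheritance: each concern is a small
 * sub-component the driver embeds wherever it likes, and the common
 * helpers operate on the sub-component alone, never on a full GEM
 * object whose layout they would have to dictate.
 */
struct uma_backing {		/* page allocation / pinning */
	size_t size;
	int pinned;
};

struct uma_cpu_map {		/* CPU-side mappings of the backing */
	struct uma_backing *backing;
	void *vaddr;
};

/* Helpers take the sub-component, not a GEM object. */
static int uma_backing_pin(struct uma_backing *b)
{
	b->pinned++;
	return 0;
}

static int uma_cpu_map_vmap(struct uma_cpu_map *m)
{
	if (uma_backing_pin(m->backing))
		return -1;
	m->vaddr = malloc(m->backing->size);	/* stand-in for vmap() */
	return m->vaddr ? 0 : -1;
}

/*
 * A driver lays out its own object freely and pulls in only the
 * components it needs (e.g. no gpu_map for a display-only device),
 * without overloading ->gem_create_object().
 */
struct my_driver_bo {
	int driver_private_state;
	struct uma_backing backing;
	struct uma_cpu_map cpu_map;
};
```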
> It should even be easier, as you won't have to keep my use cases in mind.
I might be wrong, but KMS use cases are probably not the problematic
ones.
>
> In parallel, gem-shmem could go in its own direction.
My understanding is that you're primarily targeting KMS drivers, so why
don't you fork gem-shmem with something called gem-shmem-for-kms (or
gem-shmem-dumb) that does just enough for you, and nothing more?
I'm saying that with a bit of sarcasm, and I certainly get how painful
it is to go and patch all KMS drivers to rename things. But if you think
about it for a second, it's just as painful (if not more) to fork
gem-uma in all users that might get in the way of a cleaner
abstraction. Not only that, but all of a sudden, you need a lot more
synchronization to get that approved, and until that happens, you're
blocked on the real stuff: designing something that's sound for
more complex use cases GPU/accel drivers care about.
> I'd like to do
> some changes and simplifications there, which conflict with where
> gem-uma will be heading.
Just to be clear, I'm not going to block this if that's the direction
people want to take, but I wanted to point out that making it easier for
you might mean making others' lives harder. When I initially proposed to
fork gem-shmem it was not with the goal of pulling all current
GPU/accel users in directly, but rather design something that provides
the same set of features (with the ability to add more), with better
defined rules, so we don't end up in the same situation. What you're
doing here is the opposite: gem-uma becomes the gem-shmem's
forget-about box, and as a result, it becomes someone else's problem.
Regards,
Boris
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [RFC][PATCH 00/13] drm: Introduce GEM-UMA memory management
2025-12-10 9:21 ` Boris Brezillon
@ 2025-12-10 9:57 ` Thomas Zimmermann
0 siblings, 0 replies; 26+ messages in thread
From: Thomas Zimmermann @ 2025-12-10 9:57 UTC (permalink / raw)
To: Boris Brezillon
Cc: simona, airlied, mripard, maarten.lankhorst, ogabbay, mamin506,
lizhi.hou, maciej.falkowski, karol.wachowski, tomeu, frank.binns,
matt.coster, yuq825, robh, steven.price, adrian.larumbe,
liviu.dudau, mwen, kraxel, dmitry.osipenko, gurchetansingh,
olvaffe, corbet, dri-devel, lima, virtualization, linux-doc
Hi
Am 10.12.25 um 10:21 schrieb Boris Brezillon:
[...]
> On that, I agree.
>
>> Also what stops you from fixing any of this in the context of gem-uma?
> That's exactly what I want to do, except that, rather than fixing it,
> I'd like to get it right from the start and progressively move existing
> GPU/accel drivers using gem-shmem to gem-uma. If you blindly move every
> GPU/accel driver currently using gem-shmem to gem-uma (which is just a
> rebranded gem-shmem), you're just moving the problem, you're not
> solving it. And all of a sudden, gem-uma, which I wanted to be this
Just to be clear, I'm trying to get the simple drivers out of the way
first. Nothing more. Solving problems with the UMA drivers is out of the
question wrt this series.
> clean slate with well defined rules, on top of which we can more
> easily add building blocks for advanced stuff (reclaim, sparse
> allocation, ...), is back to this wild west.
I've done something similar with GEM VRAM helpers a few years back. We
had drivers running TTM on considerably simple hardware. They all went
to VRAM helpers at some point. Still, that took quite some time. With UMA
the problem seems more complex and the drivers are moving targets. I
feel like this will take years until you see the fruits of that work.
All while you have to maintain GEM's UMA and SHMEM code at the same time.
>
>> It should even be easier, as you won't have to keep my use cases in mind.
> I might be wrong, but KMS use cases are probably not the problematic
> ones.
>
>> In parallel, gem-shmem could go in its own direction.
> My understanding is that you're primarily targeting KMS drivers, so why
> don't you fork gem-shmem with something called gem-shmem-for-kms (or
> gem-shmem-dumb) that does just enough for you, and nothing more?
>
> I'm saying that with a bit of sarcasm, and I certainly get how painful
> it is to go and patch all KMS drivers to rename things. But if you think
> about it for a second, it's just as painful (if not more) to fork
> gem-uma in all users that might get in the way of a cleaner
> abstraction. Not only that, but all of a sudden, you need a lot more
> synchronization to get that approved, and until that happens, you're
> blocked on the real stuff: designing something that's sound for
> more complex use cases GPU/accel drivers care about.
There's nothing sarcastic about that. Forking from GEM SHMEM in the
'opposite direction' would be the other alternative. I can try to
provide something like GEM sysmem helpers with a simplified
implementation of GEM shmem that provides what the simple drivers need.
Best regards
Thomas
>
>> I'd like to do
>> some changes and simplifications there, which conflict with where
>> gem-uma will be heading.
> Just to be clear, I'm not going to block this if that's the direction
> people want to take, but I wanted to point out that making it easier for
> you might mean making others' lives harder. When I initially proposed to
> fork gem-shmem it was not with the goal of pulling all current
> GPU/accel users in directly, but rather design something that provides
> the same set of features (with the ability to add more), with better
> defined rules, so we don't end up in the same situation. What you're
> doing here is the opposite: gem-uma becomes the gem-shmem's
> forget-about box, and as a result, it becomes someone else's problem.
>
> Regards,
>
> Boris
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstr. 146, 90461 Nürnberg, Germany, www.suse.com
GF: Jochen Jaser, Andrew McDonald, Werner Knoblich, (HRB 36809, AG Nürnberg)
^ permalink raw reply [flat|nested] 26+ messages in thread