public inbox for linux-kernel@vger.kernel.org
* [RFC PATCH 0/7] Introduce sparse DRM shmem object allocations
@ 2025-02-18 23:25 Adrián Larumbe
  2025-02-18 23:25 ` [RFC PATCH 1/7] shmem: Introduce non-blocking allocation of shmem pages Adrián Larumbe
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Adrián Larumbe @ 2025-02-18 23:25 UTC (permalink / raw)
  To: dri-devel, linux-kernel, Boris Brezillon, Steven Price,
	Rob Herring
  Cc: kernel, Adrián Larumbe

This patch series proposes sparse page allocation for shmem objects. It was
initially motivated by a kind of BO managed by the Panfrost driver, the tiler
heap, which grows on demand every time the GPU faults on a virtual address
within its drm_mm-managed range.

Because keeping a struct page pointer array that can describe the entire
virtual range is wasteful when only a few backing pages have been
allocated, at Collabora we thought a sparse allocation approach with
xarrays was a more efficient choice.

Since sparse and 'dense' DRM shmem objects must be managed slightly
differently, the API is expanded to allow client drivers to create sparse
objects and also to expand their page backing range, but everything else
should remain as transparent as possible and be handled from within the DRM
shmem system itself.

Another side feature introduced in this patch series is a shmem helper for
non-blocking allocation of pages, motivated by the desire to avoid deadlocks
with the shrinker.

Adrián Larumbe (7):
  shmem: Introduce non-blocking allocation of shmem pages
  lib/scatterlist.c: Support constructing sgt from page xarray
  drm/prime: Let drm_prime_pages_to_sg use the page_array interface
  drm/shmem: Introduce the notion of sparse objects
  drm/shmem: Implement sparse allocation of pages for shmem objects
  drm/panfrost: Use shmem sparse allocation for heap BOs
  drm/panfrost/panthor: Take sparse objects into account for fdinfo

 drivers/gpu/drm/drm_gem.c               |  32 +++++
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 163 ++++++++++++++++++++++--
 drivers/gpu/drm/drm_prime.c             |   7 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c |  16 ++-
 drivers/gpu/drm/panfrost/panfrost_gem.h |   2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c |  85 +++---------
 drivers/gpu/drm/panthor/panthor_gem.c   |   4 +-
 include/drm/drm_gem.h                   |   3 +
 include/drm/drm_gem_shmem_helper.h      |  30 ++++-
 include/linux/scatterlist.h             |  47 +++++++
 include/linux/shmem_fs.h                |   7 +
 lib/scatterlist.c                       | 128 +++++++++++++++++++
 12 files changed, 435 insertions(+), 89 deletions(-)


base-commit: 4fd6ca90fc7f509977585d39885f21b2911123f3
--
2.47.1


* [RFC PATCH 1/7] shmem: Introduce non-blocking allocation of shmem pages
  2025-02-18 23:25 [RFC PATCH 0/7] Introduce sparse DRM shmem object allocations Adrián Larumbe
@ 2025-02-18 23:25 ` Adrián Larumbe
  2025-02-25 12:43   ` Boris Brezillon
  2025-02-18 23:25 ` [RFC PATCH 2/7] lib/scatterlist.c: Support constructing sgt from page xarray Adrián Larumbe
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Adrián Larumbe @ 2025-02-18 23:25 UTC (permalink / raw)
  To: dri-devel, linux-kernel, Boris Brezillon, Steven Price,
	Rob Herring, Hugh Dickins
  Cc: kernel, Adrián Larumbe, linux-mm

With the future goal of preventing deadlocks with the shrinker when reclaiming
GEM-allocated memory, introduce a variant of shmem_read_mapping_page_gfp() that
does not sleep when not enough memory is available, since sleeping could end up
triggering the shrinker on the same driver.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 include/linux/shmem_fs.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 0b273a7b9f01..5735728aeda2 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -167,6 +167,13 @@ static inline struct page *shmem_read_mapping_page(
 					mapping_gfp_mask(mapping));
 }
 
+static inline struct page *shmem_read_mapping_page_nonblocking(
+				struct address_space *mapping, pgoff_t index)
+{
+	return shmem_read_mapping_page_gfp(mapping, index,
+					mapping_gfp_mask(mapping) & ~__GFP_DIRECT_RECLAIM);
+}
+
 static inline bool shmem_file(struct file *file)
 {
 	if (!IS_ENABLED(CONFIG_SHMEM))
-- 
2.47.1



* [RFC PATCH 2/7] lib/scatterlist.c: Support constructing sgt from page xarray
  2025-02-18 23:25 [RFC PATCH 0/7] Introduce sparse DRM shmem object allocations Adrián Larumbe
  2025-02-18 23:25 ` [RFC PATCH 1/7] shmem: Introduce non-blocking allocation of shmem pages Adrián Larumbe
@ 2025-02-18 23:25 ` Adrián Larumbe
  2025-02-25 12:57   ` Boris Brezillon
  2025-02-18 23:25 ` [RFC PATCH 3/7] drm/prime: Let drm_prime_pages_to_sg use the page_array interface Adrián Larumbe
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Adrián Larumbe @ 2025-02-18 23:25 UTC (permalink / raw)
  To: dri-devel, linux-kernel, Boris Brezillon, Steven Price,
	Rob Herring, Andrew Morton
  Cc: kernel, Adrián Larumbe

In preparation for a future commit that introduces sparse allocation of pages
in DRM shmem, add a scatterlist function that knows how to build an sg table
from an xarray collection of memory pages.

Because the new function is almost identical to the existing one that handles a
page array, also introduce a page_array abstraction, which hides the way pages
are retrieved from a collection.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 include/linux/scatterlist.h |  47 +++++++++++++
 lib/scatterlist.c           | 128 ++++++++++++++++++++++++++++++++++++
 2 files changed, 175 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index d836e7440ee8..0045df9c374f 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -48,6 +48,39 @@ struct sg_append_table {
 	unsigned int total_nents;	/* Total entries in the table */
 };
 
+struct page_array {
+	union {
+		struct page **array;
+		struct xarray *xarray;
+	};
+
+	struct page *(*get_page)(struct page_array, unsigned int);
+};
+
+static inline struct page *page_array_get_page(struct page_array a,
+					       unsigned int index)
+{
+	return a.array[index];
+}
+
+static inline struct page *page_xarray_get_page(struct page_array a,
+						unsigned int index)
+{
+	return xa_load(a.xarray, index);
+}
+
+#define PAGE_ARRAY(pages)				\
+	((struct page_array) {				\
+		.array = pages,				\
+		.get_page = page_array_get_page,	\
+	})
+
+#define PAGE_XARRAY(pages)				\
+	((struct page_array) {				\
+		.xarray = pages,			\
+		.get_page = page_xarray_get_page,	\
+	})
+
 /*
  * Notes on SG table design.
  *
@@ -448,6 +481,20 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
 				      unsigned long size,
 				      unsigned int max_segment, gfp_t gfp_mask);
 
+int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
+					   unsigned int idx, unsigned int n_pages, unsigned int offset,
+					   unsigned long size, unsigned int max_segment, gfp_t gfp_mask);
+
+static inline int sg_alloc_table_from_page_xarray(struct sg_table *sgt, struct xarray *pages,
+						  unsigned int idx, unsigned int n_pages, unsigned int offset,
+						  unsigned long size, gfp_t gfp_mask)
+{
+	struct page_array parray = PAGE_XARRAY(pages);
+
+	return sg_alloc_table_from_page_array_segment(sgt, parray, idx, n_pages, offset,
+						      size, UINT_MAX, gfp_mask);
+}
+
 /**
  * sg_alloc_table_from_pages - Allocate and initialize an sg table from
  *			       an array of pages
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 5bb6b8aff232..669ebd23e4ad 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -553,6 +553,115 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
 }
 EXPORT_SYMBOL(sg_alloc_append_table_from_pages);
 
+static inline int
+sg_alloc_append_table_from_page_array(struct sg_append_table *sgt_append,
+				      struct page_array pages,
+				      unsigned int first_page,
+				      unsigned int n_pages,
+				      unsigned int offset, unsigned long size,
+				      unsigned int max_segment,
+				      unsigned int left_pages, gfp_t gfp_mask)
+{
+	unsigned int chunks, seg_len, i, prv_len = 0;
+	unsigned int added_nents = 0;
+	struct scatterlist *s = sgt_append->prv;
+	unsigned int cur_pg_index = first_page;
+	unsigned int last_pg_index = first_page + n_pages - 1;
+	struct page *last_pg;
+
+	/*
+	 * The algorithm below requires max_segment to be aligned to PAGE_SIZE
+	 * otherwise it can overshoot.
+	 */
+	max_segment = ALIGN_DOWN(max_segment, PAGE_SIZE);
+	if (WARN_ON(max_segment < PAGE_SIZE))
+		return -EINVAL;
+
+	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && sgt_append->prv)
+		return -EOPNOTSUPP;
+
+	if (sgt_append->prv) {
+		unsigned long next_pfn;
+		struct page *page;
+
+		if (WARN_ON(offset))
+			return -EINVAL;
+
+		/* Merge contiguous pages into the last SG */
+		page = pages.get_page(pages, cur_pg_index);
+		prv_len = sgt_append->prv->length;
+		next_pfn = (sg_phys(sgt_append->prv) + prv_len) / PAGE_SIZE;
+		if (page_to_pfn(page) == next_pfn) {
+			last_pg = pfn_to_page(next_pfn - 1);
+			while (cur_pg_index <= last_pg_index &&
+			       pages_are_mergeable(page, last_pg)) {
+				if (sgt_append->prv->length + PAGE_SIZE > max_segment)
+					break;
+				sgt_append->prv->length += PAGE_SIZE;
+				last_pg = page;
+				cur_pg_index++;
+			}
+			if (cur_pg_index > last_pg_index)
+				goto out;
+		}
+	}
+
+	/* compute number of contiguous chunks */
+	chunks = 1;
+	seg_len = 0;
+	for (i = cur_pg_index + 1; i <= last_pg_index; i++) {
+		seg_len += PAGE_SIZE;
+		if (seg_len >= max_segment ||
+		    !pages_are_mergeable(pages.get_page(pages, i),
+					 pages.get_page(pages, i - 1))) {
+			chunks++;
+			seg_len = 0;
+		}
+	}
+
+	/* merging chunks and putting them into the scatterlist */
+	for (i = 0; i < chunks; i++) {
+		unsigned int j, chunk_size;
+
+		/* look for the end of the current chunk */
+		seg_len = 0;
+		for (j = cur_pg_index + 1; j <= last_pg_index; j++) {
+			seg_len += PAGE_SIZE;
+			if (seg_len >= max_segment ||
+			    !pages_are_mergeable(pages.get_page(pages, j),
+						 pages.get_page(pages, j - 1)))
+				break;
+		}
+
+		/* Pass how many chunks might be left */
+		s = get_next_sg(sgt_append, s, chunks - i + left_pages,
+				gfp_mask);
+		if (IS_ERR(s)) {
+			/*
+			 * Adjust entry length to be as before function was
+			 * called.
+			 */
+			if (sgt_append->prv)
+				sgt_append->prv->length = prv_len;
+			return PTR_ERR(s);
+		}
+		chunk_size = ((j - cur_pg_index) << PAGE_SHIFT) - offset;
+		sg_set_page(s, pages.get_page(pages, cur_pg_index),
+			    min_t(unsigned long, size, chunk_size), offset);
+		added_nents++;
+		size -= chunk_size;
+		offset = 0;
+		cur_pg_index = j;
+	}
+	sgt_append->sgt.nents += added_nents;
+	sgt_append->sgt.orig_nents = sgt_append->sgt.nents;
+	sgt_append->prv = s;
+out:
+	if (!left_pages)
+		sg_mark_end(s);
+	return 0;
+}
+
 /**
  * sg_alloc_table_from_pages_segment - Allocate and initialize an sg table from
  *                                     an array of pages and given maximum
@@ -596,6 +705,25 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages_segment);
 
+int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
+					   unsigned int idx, unsigned int n_pages, unsigned int offset,
+					   unsigned long size, unsigned int max_segment, gfp_t gfp_mask)
+{
+	struct sg_append_table append = {};
+	int err;
+
+	err = sg_alloc_append_table_from_page_array(&append, pages, idx, n_pages, offset,
+						    size, max_segment, 0, gfp_mask);
+	if (err) {
+		sg_free_append_table(&append);
+		return err;
+	}
+	memcpy(sgt, &append.sgt, sizeof(*sgt));
+	WARN_ON(append.total_nents != sgt->orig_nents);
+	return 0;
+}
+EXPORT_SYMBOL(sg_alloc_table_from_page_array_segment);
+
 #ifdef CONFIG_SGL_ALLOC
 
 /**
-- 
2.47.1



* [RFC PATCH 3/7] drm/prime: Let drm_prime_pages_to_sg use the page_array interface
  2025-02-18 23:25 [RFC PATCH 0/7] Introduce sparse DRM shmem object allocations Adrián Larumbe
  2025-02-18 23:25 ` [RFC PATCH 1/7] shmem: Introduce non-blocking allocation of shmem pages Adrián Larumbe
  2025-02-18 23:25 ` [RFC PATCH 2/7] lib/scatterlist.c: Support constructing sgt from page xarray Adrián Larumbe
@ 2025-02-18 23:25 ` Adrián Larumbe
  2025-02-18 23:25 ` [RFC PATCH 4/7] drm/shmem: Introduce the notion of sparse objects Adrián Larumbe
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Adrián Larumbe @ 2025-02-18 23:25 UTC (permalink / raw)
  To: dri-devel, linux-kernel, Boris Brezillon, Steven Price,
	Rob Herring, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter
  Cc: kernel, Adrián Larumbe

Switch to sg_alloc_table_from_page_array_segment() when generating an sgtable
from an array of pages. This is functionally equivalent, but a future commit
will also let us do the same from a memory page xarray.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/drm_prime.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 32a8781cfd67..1549733d3833 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -837,6 +837,7 @@ struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
 				       struct page **pages, unsigned int nr_pages)
 {
 	struct sg_table *sg;
+	struct page_array parray = PAGE_ARRAY(pages);
 	size_t max_segment = 0;
 	int err;
 
@@ -848,9 +849,9 @@ struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
 		max_segment = dma_max_mapping_size(dev->dev);
 	if (max_segment == 0)
 		max_segment = UINT_MAX;
-	err = sg_alloc_table_from_pages_segment(sg, pages, nr_pages, 0,
-						(unsigned long)nr_pages << PAGE_SHIFT,
-						max_segment, GFP_KERNEL);
+	err = sg_alloc_table_from_page_array_segment(sg, parray, 0, nr_pages, 0,
+						     (unsigned long)nr_pages << PAGE_SHIFT,
+						     max_segment, GFP_KERNEL);
 	if (err) {
 		kfree(sg);
 		sg = ERR_PTR(err);
-- 
2.47.1



* [RFC PATCH 4/7] drm/shmem: Introduce the notion of sparse objects
  2025-02-18 23:25 [RFC PATCH 0/7] Introduce sparse DRM shmem object allocations Adrián Larumbe
                   ` (2 preceding siblings ...)
  2025-02-18 23:25 ` [RFC PATCH 3/7] drm/prime: Let drm_prime_pages_to_sg use the page_array interface Adrián Larumbe
@ 2025-02-18 23:25 ` Adrián Larumbe
  2025-02-25 13:28   ` Boris Brezillon
  2025-02-18 23:25 ` [RFC PATCH 5/7] drm/shmem: Implement sparse allocation of pages for shmem objects Adrián Larumbe
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Adrián Larumbe @ 2025-02-18 23:25 UTC (permalink / raw)
  To: dri-devel, linux-kernel, Boris Brezillon, Steven Price,
	Rob Herring, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter
  Cc: kernel, Adrián Larumbe

Sparse DRM objects will store their backing pages in an xarray, to avoid the
overhead of preallocating a huge struct page pointer array when only a very
small range of indices might be assigned.

For now, only the definition of a sparse object, as a union alternative to a
'dense' one, is provided; the functions that make use of it come in later
commits.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 42 +++++++++++++++++++++++---
 include/drm/drm_gem_shmem_helper.h     | 18 ++++++++++-
 2 files changed, 54 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5ab351409312..d63e42be2d72 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -10,6 +10,7 @@
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
+#include <linux/xarray.h>
 
 #ifdef CONFIG_X86
 #include <asm/set_memory.h>
@@ -50,7 +51,7 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
 
 static struct drm_gem_shmem_object *
 __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
-		       struct vfsmount *gemfs)
+		       bool sparse, struct vfsmount *gemfs)
 {
 	struct drm_gem_shmem_object *shmem;
 	struct drm_gem_object *obj;
@@ -90,6 +91,11 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
 
 	INIT_LIST_HEAD(&shmem->madv_list);
 
+	if (unlikely(sparse))
+		xa_init_flags(&shmem->xapages, XA_FLAGS_ALLOC);
+
+	shmem->sparse = sparse;
+
 	if (!private) {
 		/*
 		 * Our buffers are kept pinned, so allocating them
@@ -124,10 +130,16 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
  */
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
 {
-	return __drm_gem_shmem_create(dev, size, false, NULL);
+	return __drm_gem_shmem_create(dev, size, false, false, NULL);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size)
+{
+	return __drm_gem_shmem_create(dev, size, false, true, NULL);
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_create_sparse);
+
 /**
  * drm_gem_shmem_create_with_mnt - Allocate an object with the given size in a
  * given mountpoint
@@ -145,7 +157,7 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de
 							   size_t size,
 							   struct vfsmount *gemfs)
 {
-	return __drm_gem_shmem_create(dev, size, false, gemfs);
+	return __drm_gem_shmem_create(dev, size, false, false, gemfs);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create_with_mnt);
 
@@ -173,7 +185,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 			sg_free_table(shmem->sgt);
 			kfree(shmem->sgt);
 		}
-		if (shmem->pages)
+
+		if ((!shmem->sparse && shmem->pages) ||
+		    (shmem->sparse && !xa_empty(&shmem->xapages)))
 			drm_gem_shmem_put_pages(shmem);
 
 		drm_WARN_ON(obj->dev, shmem->pages_use_count);
@@ -191,11 +205,19 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
 	struct drm_gem_object *obj = &shmem->base;
 	struct page **pages;
 
+	if (drm_WARN_ON(obj->dev, shmem->sparse))
+		return -EINVAL;
+
 	dma_resv_assert_held(shmem->base.resv);
 
 	if (shmem->pages_use_count++ > 0)
 		return 0;
 
+	/* We only allow increasing the user count in the case of
+	 * sparse shmem objects with some backed pages for now */
+	if (shmem->sparse && xa_empty(&shmem->xapages))
+		return -EINVAL;
+
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
 		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
@@ -541,6 +563,8 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	struct page *page;
 	pgoff_t page_offset;
 
+	drm_WARN_ON(obj->dev, shmem->sparse);
+
 	/* We don't use vmf->pgoff since that has the fake offset */
 	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
@@ -567,6 +591,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
+	drm_WARN_ON(obj->dev, shmem->sparse);
 
 	dma_resv_lock(shmem->base.resv, NULL);
 
@@ -666,6 +691,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 	if (shmem->base.import_attach)
 		return;
 
+	if (drm_WARN_ON(shmem->base.dev, shmem->sparse))
+		return;
+
 	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
 	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
 	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
@@ -691,6 +719,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
 	struct drm_gem_object *obj = &shmem->base;
 
 	drm_WARN_ON(obj->dev, obj->import_attach);
+	drm_WARN_ON(obj->dev, shmem->sparse);
 
 	return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
 }
@@ -702,6 +731,9 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	int ret;
 	struct sg_table *sgt;
 
+	if (drm_WARN_ON(obj->dev, shmem->sparse))
+		return ERR_PTR(-EINVAL);
+
 	if (shmem->sgt)
 		return shmem->sgt;
 
@@ -787,7 +819,7 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
 	size_t size = PAGE_ALIGN(attach->dmabuf->size);
 	struct drm_gem_shmem_object *shmem;
 
-	shmem = __drm_gem_shmem_create(dev, size, true, NULL);
+	shmem = __drm_gem_shmem_create(dev, size, true, false, NULL);
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);
 
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index d22e3fb53631..902039cfc4ce 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -6,6 +6,7 @@
 #include <linux/fs.h>
 #include <linux/mm.h>
 #include <linux/mutex.h>
+#include <linux/xarray.h>
 
 #include <drm/drm_file.h>
 #include <drm/drm_gem.h>
@@ -29,7 +30,11 @@ struct drm_gem_shmem_object {
 	/**
 	 * @pages: Page table
 	 */
-	struct page **pages;
+	union {
+		/* dense objects use @pages, sparse ones @xapages */
+		struct page **pages;
+		struct xarray xapages;
+	};
 
 	/**
 	 * @pages_use_count:
@@ -91,6 +96,11 @@ struct drm_gem_shmem_object {
 	 * @map_wc: map object write-combined (instead of using shmem defaults).
 	 */
 	bool map_wc : 1;
+
+	/**
+	 * @sparse: the object's virtual memory space is only partially backed by pages
+	 */
+	bool sparse : 1;
 };
 
 #define to_drm_gem_shmem_obj(obj) \
@@ -229,6 +239,9 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
+	if (shmem->sparse)
+		return -EACCES;
+
 	return drm_gem_shmem_vmap(shmem, map);
 }
 
@@ -263,6 +276,9 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
+	if (shmem->sparse)
+		return -EACCES;
+
 	return drm_gem_shmem_mmap(shmem, vma);
 }
 
-- 
2.47.1



* [RFC PATCH 5/7] drm/shmem: Implement sparse allocation of pages for shmem objects
  2025-02-18 23:25 [RFC PATCH 0/7] Introduce sparse DRM shmem object allocations Adrián Larumbe
                   ` (3 preceding siblings ...)
  2025-02-18 23:25 ` [RFC PATCH 4/7] drm/shmem: Introduce the notion of sparse objects Adrián Larumbe
@ 2025-02-18 23:25 ` Adrián Larumbe
  2025-02-25 14:39   ` Boris Brezillon
  2025-02-18 23:25 ` [RFC PATCH 6/7] drm/panfrost: Use shmem sparse allocation for heap BOs Adrián Larumbe
  2025-02-18 23:25 ` [RFC PATCH 7/7] drm/panfrost/panthor: Take sparse objects into account for fdinfo Adrián Larumbe
  6 siblings, 1 reply; 14+ messages in thread
From: Adrián Larumbe @ 2025-02-18 23:25 UTC (permalink / raw)
  To: dri-devel, linux-kernel, Boris Brezillon, Steven Price,
	Rob Herring, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter
  Cc: kernel, Adrián Larumbe

Add a new function that lets drivers allocate pages for a subset of the shmem
object's virtual address range. Expand the shmem object's definition to include
an RSS field, since it can differ from the base GEM object's virtual size.

Also add a new function for putting the pages of a sparse page array. There is
potential for refactoring it together with drm_gem_put_pages(), but what that
should look like is yet to be decided.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/drm_gem.c              |  32 +++++++
 drivers/gpu/drm/drm_gem_shmem_helper.c | 123 ++++++++++++++++++++++++-
 include/drm/drm_gem.h                  |   3 +
 include/drm/drm_gem_shmem_helper.h     |  12 +++
 4 files changed, 165 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index ee811764c3df..930c5219e1e9 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -679,6 +679,38 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 }
 EXPORT_SYMBOL(drm_gem_put_pages);
 
+void drm_gem_put_sparse_xarray(struct xarray *pa, unsigned long idx,
+				unsigned int npages, bool dirty, bool accessed)
+{
+	struct folio_batch fbatch;
+	struct page *page;
+
+	folio_batch_init(&fbatch);
+
+	xa_for_each(pa, idx, page) {
+		struct folio *folio = page_folio(page);
+
+		if (dirty)
+			folio_mark_dirty(folio);
+		if (accessed)
+			folio_mark_accessed(folio);
+
+		/* Undo the reference we took when populating the table */
+		if (!folio_batch_add(&fbatch, folio))
+			drm_gem_check_release_batch(&fbatch);
+
+		xa_erase(pa, idx);
+
+		idx += folio_nr_pages(folio) - 1;
+	}
+
+	if (folio_batch_count(&fbatch))
+		drm_gem_check_release_batch(&fbatch);
+
+	WARN_ON((idx+1) != npages);
+}
+EXPORT_SYMBOL(drm_gem_put_sparse_xarray);
+
 static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
 			  struct drm_gem_object **objs)
 {
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index d63e42be2d72..40f7f6812195 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -10,7 +10,6 @@
 #include <linux/shmem_fs.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
-#include <linux/xarray.h>
 
 #ifdef CONFIG_X86
 #include <asm/set_memory.h>
@@ -161,6 +160,18 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create_with_mnt);
 
+static void drm_gem_shmem_put_pages_sparse(struct drm_gem_shmem_object *shmem)
+{
+	unsigned int n_pages = shmem->rss_size / PAGE_SIZE;
+
+	drm_WARN_ON(shmem->base.dev, (shmem->rss_size & (PAGE_SIZE - 1)) != 0);
+	drm_WARN_ON(shmem->base.dev, !shmem->sparse);
+
+	drm_gem_put_sparse_xarray(&shmem->xapages, 0, n_pages,
+				   shmem->pages_mark_dirty_on_put,
+				   shmem->pages_mark_accessed_on_put);
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -264,10 +275,15 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
 		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
 #endif
 
-	drm_gem_put_pages(obj, shmem->pages,
-			  shmem->pages_mark_dirty_on_put,
-			  shmem->pages_mark_accessed_on_put);
-	shmem->pages = NULL;
+	if (!shmem->sparse) {
+		drm_gem_put_pages(obj, shmem->pages,
+				  shmem->pages_mark_dirty_on_put,
+				  shmem->pages_mark_accessed_on_put);
+		shmem->pages = NULL;
+	} else {
+		drm_gem_shmem_put_pages_sparse(shmem);
+		xa_destroy(&shmem->xapages);
+	}
 }
 EXPORT_SYMBOL(drm_gem_shmem_put_pages);
 
@@ -765,6 +781,81 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
 	return ERR_PTR(ret);
 }
 
+static struct sg_table *drm_gem_shmem_get_sparse_pages_locked(struct drm_gem_shmem_object *shmem,
+							       unsigned int n_pages,
+							       pgoff_t page_offset)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	gfp_t mask = GFP_KERNEL | GFP_NOWAIT;
+	size_t size = n_pages * PAGE_SIZE;
+	struct address_space *mapping;
+	struct sg_table *sgt;
+	struct page *page;
+	bool first_alloc;
+	int ret, i;
+
+	if (!shmem->sparse)
+		return ERR_PTR(-EINVAL);
+
+	/* If the mapping exists, then bail out immediately */
+	if (xa_load(&shmem->xapages, page_offset) != NULL)
+		return ERR_PTR(-EEXIST);
+
+	dma_resv_assert_held(shmem->base.resv);
+
+	first_alloc = xa_empty(&shmem->xapages);
+
+	mapping = shmem->base.filp->f_mapping;
+	mapping_set_unevictable(mapping);
+
+	for (i = 0; i < n_pages; i++) {
+		page = shmem_read_mapping_page_nonblocking(mapping, page_offset + i);
+		if (IS_ERR(page)) {
+			ret = PTR_ERR(page);
+			goto err_free_pages;
+		}
+
+		/* Add the page into the xarray */
+		ret = xa_err(xa_store(&shmem->xapages, page_offset + i, page, mask));
+		if (ret) {
+			put_page(page);
+			goto err_free_pages;
+		}
+	}
+
+	sgt = kzalloc(sizeof(*sgt), mask);
+	if (!sgt) {
+		ret = -ENOMEM;
+		goto err_free_pages;
+	}
+
+	ret = sg_alloc_table_from_page_xarray(sgt, &shmem->xapages, page_offset, n_pages, 0, size, mask);
+	if (ret)
+		goto err_free_sgtable;
+
+	ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
+	if (ret)
+		goto err_free_sgtable;
+
+	if (first_alloc)
+		shmem->pages_use_count = 1;
+
+	shmem->rss_size += size;
+
+	return sgt;
+
+err_free_sgtable:
+	kfree(sgt);
+err_free_pages:
+	while (i--) {
+		page = xa_erase(&shmem->xapages, page_offset + i);
+		if (drm_WARN_ON(obj->dev, !page))
+			continue;
+		put_page(page);
+	}
+	return ERR_PTR(ret);
+}
+
 /**
  * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
  *				 scatter/gather table for a shmem GEM object.
@@ -796,6 +887,28 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
 
+struct sg_table *drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem,
+						     unsigned int n_pages, pgoff_t page_offset)
+{
+	struct drm_gem_object *obj = &shmem->base;
+	struct sg_table *sgt;
+	int ret;
+
+	if (drm_WARN_ON(obj->dev, !shmem->sparse))
+		return ERR_PTR(-EINVAL);
+
+	ret = dma_resv_lock(shmem->base.resv, NULL);
+	if (ret)
+		return ERR_PTR(ret);
+
+	sgt = drm_gem_shmem_get_sparse_pages_locked(shmem, n_pages, page_offset);
+
+	dma_resv_unlock(shmem->base.resv);
+
+	return sgt;
+}
+EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sparse_pages_sgt);
+
 /**
  * drm_gem_shmem_prime_import_sg_table - Produce a shmem GEM object from
  *                 another driver's scatter/gather table of pinned pages
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index fdae947682cd..4fd45169a3af 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -38,6 +38,7 @@
 #include <linux/dma-resv.h>
 #include <linux/list.h>
 #include <linux/mutex.h>
+#include <linux/xarray.h>
 
 #include <drm/drm_vma_manager.h>
 
@@ -532,6 +533,8 @@ int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
 struct page **drm_gem_get_pages(struct drm_gem_object *obj);
 void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
 		bool dirty, bool accessed);
+void drm_gem_put_sparse_xarray(struct xarray *pa, unsigned long idx,
+				unsigned int npages, bool dirty, bool accessed);
 
 void drm_gem_lock(struct drm_gem_object *obj);
 void drm_gem_unlock(struct drm_gem_object *obj);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 902039cfc4ce..fcd84c8cf8e7 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -44,6 +44,14 @@ struct drm_gem_shmem_object {
 	 */
 	unsigned int pages_use_count;
 
+	/**
+	 * @rss_size:
+	 *
+	 * Size of the object's resident set (RSS), in bytes. Unlike
+	 * the base GEM object's virtual size, it can grow over time.
+	 */
+	size_t rss_size;
+
 	/**
 	 * @madv: State for madvise
 	 *
@@ -107,6 +115,7 @@ struct drm_gem_shmem_object {
 	container_of(obj, struct drm_gem_shmem_object, base)
 
 struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
+struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size);
 struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *dev,
 							   size_t size,
 							   struct vfsmount *gemfs);
@@ -138,6 +147,9 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
 
+struct sg_table *drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem,
+						     unsigned int n_pages, pgoff_t page_offset);
+
 void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
 			      struct drm_printer *p, unsigned int indent);
 
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH 6/7] drm/panfrost: Use shmem sparse allocation for heap BOs
  2025-02-18 23:25 [RFC PATCH 0/7] Introduce sparse DRM shmem object allocations Adrián Larumbe
                   ` (4 preceding siblings ...)
  2025-02-18 23:25 ` [RFC PATCH 5/7] drm/shmem: Implement sparse allocation of pages for shmem objects Adrián Larumbe
@ 2025-02-18 23:25 ` Adrián Larumbe
  2025-02-25 15:04   ` Boris Brezillon
  2025-02-18 23:25 ` [RFC PATCH 7/7] drm/panfrost/panthor: Take sparse objects into account for fdinfo Adrián Larumbe
  6 siblings, 1 reply; 14+ messages in thread
From: Adrián Larumbe @ 2025-02-18 23:25 UTC (permalink / raw)
  To: dri-devel, linux-kernel, Boris Brezillon, Steven Price,
	Rob Herring, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter
  Cc: kernel, Adrián Larumbe

Panfrost heap BOs grow on demand when the GPU triggers a page fault after
accessing an address within the BO's virtual range.

We still store the sgts we get back from the shmem sparse allocation function,
since it was decided that management of sparse-memory SGTs should be done by
client drivers rather than by the shmem subsystem.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panfrost/panfrost_gem.c | 12 ++--
 drivers/gpu/drm/panfrost/panfrost_gem.h |  2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 85 +++++--------------------
 3 files changed, 25 insertions(+), 74 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 8e0ff3efede7..0cda2c4e524f 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -40,10 +40,10 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
 		int n_sgt = bo->base.base.size / SZ_2M;
 
 		for (i = 0; i < n_sgt; i++) {
-			if (bo->sgts[i].sgl) {
-				dma_unmap_sgtable(pfdev->dev, &bo->sgts[i],
+			if (bo->sgts[i]) {
+				dma_unmap_sgtable(pfdev->dev, bo->sgts[i],
 						  DMA_BIDIRECTIONAL, 0);
-				sg_free_table(&bo->sgts[i]);
+				sg_free_table(bo->sgts[i]);
 			}
 		}
 		kvfree(bo->sgts);
@@ -274,7 +274,11 @@ panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags)
 	if (flags & PANFROST_BO_HEAP)
 		size = roundup(size, SZ_2M);
 
-	shmem = drm_gem_shmem_create(dev, size);
+	if (flags & PANFROST_BO_HEAP)
+		shmem = drm_gem_shmem_create_sparse(dev, size);
+	else
+		shmem = drm_gem_shmem_create(dev, size);
+
 	if (IS_ERR(shmem))
 		return ERR_CAST(shmem);
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
index 7516b7ecf7fe..2a8d0752011e 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
@@ -11,7 +11,7 @@ struct panfrost_mmu;
 
 struct panfrost_gem_object {
 	struct drm_gem_shmem_object base;
-	struct sg_table *sgts;
+	struct sg_table **sgts;
 
 	/*
 	 * Use a list for now. If searching a mapping ever becomes the
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index b91019cd5acb..4a78ff9ca293 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -441,14 +441,11 @@ addr_to_mapping(struct panfrost_device *pfdev, int as, u64 addr)
 static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 				       u64 addr)
 {
-	int ret, i;
 	struct panfrost_gem_mapping *bomapping;
 	struct panfrost_gem_object *bo;
-	struct address_space *mapping;
-	struct drm_gem_object *obj;
 	pgoff_t page_offset;
 	struct sg_table *sgt;
-	struct page **pages;
+	int ret = 0;
 
 	bomapping = addr_to_mapping(pfdev, as, addr);
 	if (!bomapping)
@@ -459,94 +456,44 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 		dev_WARN(pfdev->dev, "matching BO is not heap type (GPU VA = %llx)",
 			 bomapping->mmnode.start << PAGE_SHIFT);
 		ret = -EINVAL;
-		goto err_bo;
+		goto fault_out;
 	}
 	WARN_ON(bomapping->mmu->as != as);
 
 	/* Assume 2MB alignment and size multiple */
 	addr &= ~((u64)SZ_2M - 1);
-	page_offset = addr >> PAGE_SHIFT;
-	page_offset -= bomapping->mmnode.start;
+	page_offset = (addr >> PAGE_SHIFT) - bomapping->mmnode.start;
 
-	obj = &bo->base.base;
-
-	dma_resv_lock(obj->resv, NULL);
-
-	if (!bo->base.pages) {
+	if (!bo->sgts) {
 		bo->sgts = kvmalloc_array(bo->base.base.size / SZ_2M,
-				     sizeof(struct sg_table), GFP_KERNEL | __GFP_ZERO);
+					  sizeof(struct sg_table *), GFP_KERNEL | __GFP_ZERO);
 		if (!bo->sgts) {
 			ret = -ENOMEM;
-			goto err_unlock;
-		}
-
-		pages = kvmalloc_array(bo->base.base.size >> PAGE_SHIFT,
-				       sizeof(struct page *), GFP_KERNEL | __GFP_ZERO);
-		if (!pages) {
-			kvfree(bo->sgts);
-			bo->sgts = NULL;
-			ret = -ENOMEM;
-			goto err_unlock;
-		}
-		bo->base.pages = pages;
-		bo->base.pages_use_count = 1;
-	} else {
-		pages = bo->base.pages;
-		if (pages[page_offset]) {
-			/* Pages are already mapped, bail out. */
-			goto out;
+			goto fault_out;
 		}
 	}
 
-	mapping = bo->base.base.filp->f_mapping;
-	mapping_set_unevictable(mapping);
+	sgt = drm_gem_shmem_get_sparse_pages_sgt(&bo->base, NUM_FAULT_PAGES, page_offset);
+	if (IS_ERR(sgt)) {
+		if (WARN_ON(PTR_ERR(sgt) != -EEXIST))
+			ret = PTR_ERR(sgt);
+		else
+			ret = 0;
 
-	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
-		/* Can happen if the last fault only partially filled this
-		 * section of the pages array before failing. In that case
-		 * we skip already filled pages.
-		 */
-		if (pages[i])
-			continue;
-
-		pages[i] = shmem_read_mapping_page(mapping, i);
-		if (IS_ERR(pages[i])) {
-			ret = PTR_ERR(pages[i]);
-			pages[i] = NULL;
-			goto err_unlock;
-		}
+		goto fault_out;
 	}
 
-	sgt = &bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)];
-	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
-					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
-	if (ret)
-		goto err_unlock;
-
-	ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
-	if (ret)
-		goto err_map;
-
 	mmu_map_sg(pfdev, bomapping->mmu, addr,
 		   IOMMU_WRITE | IOMMU_READ | IOMMU_NOEXEC, sgt);
 
+	bo->sgts[page_offset / (SZ_2M / PAGE_SIZE)] = sgt;
+
 	bomapping->active = true;
 	bo->heap_rss_size += SZ_2M;
 
 	dev_dbg(pfdev->dev, "mapped page fault @ AS%d %llx", as, addr);
 
-out:
-	dma_resv_unlock(obj->resv);
-
-	panfrost_gem_mapping_put(bomapping);
-
-	return 0;
-
-err_map:
-	sg_free_table(sgt);
-err_unlock:
-	dma_resv_unlock(obj->resv);
-err_bo:
+fault_out:
 	panfrost_gem_mapping_put(bomapping);
 	return ret;
 }
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH 7/7] drm/panfrost/panthor: Take sparse objects into account for fdinfo
  2025-02-18 23:25 [RFC PATCH 0/7] Introduce sparse DRM shmem object allocations Adrián Larumbe
                   ` (5 preceding siblings ...)
  2025-02-18 23:25 ` [RFC PATCH 6/7] drm/panfrost: Use shmem sparse allocation for heap BOs Adrián Larumbe
@ 2025-02-18 23:25 ` Adrián Larumbe
  2025-02-25 15:09   ` Boris Brezillon
  6 siblings, 1 reply; 14+ messages in thread
From: Adrián Larumbe @ 2025-02-18 23:25 UTC (permalink / raw)
  To: dri-devel, linux-kernel, Boris Brezillon, Steven Price,
	Rob Herring, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter, Liviu Dudau
  Cc: kernel, Adrián Larumbe

Because the shmem 'pages' field became a union when support for sparse
allocations was added, the logic that decides whether backing pages are present
must be expanded to also check the xarray for sparse objects.

Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panfrost/panfrost_gem.c | 4 +++-
 drivers/gpu/drm/panthor/panthor_gem.c   | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 0cda2c4e524f..ced2fdee74ab 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
@@ -200,7 +200,9 @@ static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj
 	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
 	enum drm_gem_object_status res = 0;
 
-	if (bo->base.base.import_attach || bo->base.pages)
+	if (bo->base.base.import_attach ||
+	    (!bo->base.sparse && bo->base.pages) ||
+	    (bo->base.sparse && !xa_empty(&bo->base.xapages)))
 		res |= DRM_GEM_OBJECT_RESIDENT;
 
 	if (bo->base.madv == PANFROST_MADV_DONTNEED)
diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index 8244a4e6c2a2..8dbaf766bd79 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -155,7 +155,9 @@ static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
 	struct panthor_gem_object *bo = to_panthor_bo(obj);
 	enum drm_gem_object_status res = 0;
 
-	if (bo->base.base.import_attach || bo->base.pages)
+	if (bo->base.base.import_attach ||
+	    (!bo->base.sparse && bo->base.pages) ||
+	    (bo->base.sparse && !xa_empty(&bo->base.xapages)))
 		res |= DRM_GEM_OBJECT_RESIDENT;
 
 	return res;
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/7] shmem: Introduce non-blocking allocation of shmem pages
  2025-02-18 23:25 ` [RFC PATCH 1/7] shmem: Introduce non-blocking allocation of shmem pages Adrián Larumbe
@ 2025-02-25 12:43   ` Boris Brezillon
  0 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-02-25 12:43 UTC (permalink / raw)
  To: Adrián Larumbe
  Cc: dri-devel, linux-kernel, Steven Price, Rob Herring, Hugh Dickins,
	kernel, linux-mm

On Tue, 18 Feb 2025 23:25:31 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> With the future goal of preventing deadlocks with the shrinker when reclaiming
> GEM-allocated memory, introduce a variant of shmem_read_mapping_page_gfp() that
> does not sleep when not enough memory is available, since sleeping could
> trigger the shrinker on the same driver.
> 
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
>  include/linux/shmem_fs.h | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 0b273a7b9f01..5735728aeda2 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -167,6 +167,13 @@ static inline struct page *shmem_read_mapping_page(
>  					mapping_gfp_mask(mapping));
>  }
>  
> +static inline struct page *shmem_read_mapping_page_nonblocking(
> +				struct address_space *mapping, pgoff_t index)
> +{
> +	return shmem_read_mapping_page_gfp(mapping, index,
> +					mapping_gfp_mask(mapping) | GFP_NOWAIT);
> +}

Just my 2 cents, but I'm not entirely sure it's worth adding a helper
for the non-blocking case given we can pretty easily call
shmem_read_mapping_page_gfp() with the GFP_NOWAIT addition where needed.

> +
>  static inline bool shmem_file(struct file *file)
>  {
>  	if (!IS_ENABLED(CONFIG_SHMEM))


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 2/7] lib/scatterlist.c: Support constructing sgt from page xarray
  2025-02-18 23:25 ` [RFC PATCH 2/7] lib/scatterlist.c: Support constructing sgt from page xarray Adrián Larumbe
@ 2025-02-25 12:57   ` Boris Brezillon
  0 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-02-25 12:57 UTC (permalink / raw)
  To: Adrián Larumbe
  Cc: dri-devel, linux-kernel, Steven Price, Rob Herring, Andrew Morton,
	kernel

On Tue, 18 Feb 2025 23:25:32 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> In preparation for a future commit that will introduce sparse allocation of
> pages in DRM shmem, introduce a scatterlist function that knows how to build
> an sg table from an xarray collection of memory pages.
> 
> Because the new function is otherwise identical to the existing one that deals
> with a plain page array, a page_array abstraction is also introduced, which
> hides how pages are retrieved from a collection.
> 
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
>  include/linux/scatterlist.h |  47 +++++++++++++
>  lib/scatterlist.c           | 128 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 175 insertions(+)
> 
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index d836e7440ee8..0045df9c374f 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -48,6 +48,39 @@ struct sg_append_table {
>  	unsigned int total_nents;	/* Total entries in the table */
>  };
>  
> +struct page_array {
> +	union {
> +		struct page **array;
> +		struct xarray *xarray;
> +	};
> +
> +	struct page *(*get_page)(struct page_array, unsigned int);
> +};
> +
> +static inline struct page *page_array_get_page(struct page_array a,
> +					       unsigned int index)
> +{
> +	return a.array[index];
> +}
> +
> +static inline struct page *page_xarray_get_page(struct page_array a,
> +						unsigned int index)
> +{
> +	return xa_load(a.xarray, index);
> +}
> +
> +#define PAGE_ARRAY(pages)				\
> +	((struct page_array) {				\
> +		.array = pages,				\
> +		.get_page = page_array_get_page,	\
> +	})
> +
> +#define PAGE_XARRAY(pages)				\
> +	((struct page_array) {				\
> +		.xarray = pages,			\
> +		.get_page = page_xarray_get_page,	\
> +	})
> +
>  /*
>   * Notes on SG table design.
>   *
> @@ -448,6 +481,20 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
>  				      unsigned long size,
>  				      unsigned int max_segment, gfp_t gfp_mask);
>  
> +int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
> +					   unsigned int idx, unsigned int n_pages, unsigned int offset,
> +					   unsigned long size, unsigned int max_segment, gfp_t gfp_mask);

The idea behind the page_array object was to avoid code duplication with
no extra-cost at runtime by letting the compiler see through the page
getter function and let it inline its content. If you make the
page_array struct public and pass such an object to
sg_alloc_table_from_page_array_segment() the compiler can no longer
determine that things are fixed, and you turn your direct call (which
is likely to be inlined if compiled with optimizations on) into an
indirect call, which we probably don't want. Besides, I suspect the
caller knows exactly what kind of array it's passing, so I'm not sure
generalizing things at this level is worth it. We're probably better off
adding
sg_alloc_table_from_page_xarray_segment()/sg_alloc_table_from_page_xarray()
helpers and have them defined in scatterlist.c.

> +
> +static inline int sg_alloc_table_from_page_xarray(struct sg_table *sgt, struct xarray *pages,
> +						  unsigned int idx, unsigned int n_pages, unsigned int offset,
> +						  unsigned long size, gfp_t gfp_mask)
> +{
> +	struct page_array parray = PAGE_XARRAY(pages);
> +
> +	return sg_alloc_table_from_page_array_segment(sgt, parray, idx, n_pages, offset,
> +						      size, UINT_MAX, gfp_mask);
> +}
> +
>  /**
>   * sg_alloc_table_from_pages - Allocate and initialize an sg table from
>   *			       an array of pages
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index 5bb6b8aff232..669ebd23e4ad 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -553,6 +553,115 @@ int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
>  }
>  EXPORT_SYMBOL(sg_alloc_append_table_from_pages);
>  
> +static inline int
> +sg_alloc_append_table_from_page_array(struct sg_append_table *sgt_append,
> +				      struct page_array pages,
> +				      unsigned int first_page,
> +				      unsigned int n_pages,
> +				      unsigned int offset, unsigned long size,
> +				      unsigned int max_segment,
> +				      unsigned int left_pages, gfp_t gfp_mask)

You've done that to avoid code duplication, but you're not using
the helper for plain arrays (AKA sg_alloc_append_table_from_pages()),
so the code is duplicated anyway. If de-duplication is the goal, you
should get rid of sg_alloc_append_table_from_pages(), and use
sg_alloc_append_table_from_page_array() instead.

> +{
> +	unsigned int chunks, seg_len, i, prv_len = 0;
> +	unsigned int added_nents = 0;
> +	struct scatterlist *s = sgt_append->prv;
> +	unsigned int cur_pg_index = first_page;
> +	unsigned int last_pg_index = first_page + n_pages - 1;
> +	struct page *last_pg;
> +
> +	/*
> +	 * The algorithm below requires max_segment to be aligned to PAGE_SIZE
> +	 * otherwise it can overshoot.
> +	 */
> +	max_segment = ALIGN_DOWN(max_segment, PAGE_SIZE);
> +	if (WARN_ON(max_segment < PAGE_SIZE))
> +		return -EINVAL;
> +
> +	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && sgt_append->prv)
> +		return -EOPNOTSUPP;
> +
> +	if (sgt_append->prv) {
> +		unsigned long next_pfn;
> +		struct page *page;
> +
> +		if (WARN_ON(offset))
> +			return -EINVAL;
> +
> +		/* Merge contiguous pages into the last SG */
> +		page = pages.get_page(pages, cur_pg_index);
> +		prv_len = sgt_append->prv->length;
> +		next_pfn = (sg_phys(sgt_append->prv) + prv_len) / PAGE_SIZE;
> +		if (page_to_pfn(page) == next_pfn) {
> +			last_pg = pfn_to_page(next_pfn - 1);
> +			while (cur_pg_index <= last_pg_index &&
> +			       pages_are_mergeable(page, last_pg)) {
> +				if (sgt_append->prv->length + PAGE_SIZE > max_segment)
> +					break;
> +				sgt_append->prv->length += PAGE_SIZE;
> +				last_pg = page;
> +				cur_pg_index++;
> +			}
> +			if (cur_pg_index > last_pg_index)
> +				goto out;
> +		}
> +	}
> +
> +	/* compute number of contiguous chunks */
> +	chunks = 1;
> +	seg_len = 0;
> +	for (i = cur_pg_index + 1; i <= last_pg_index; i++) {
> +		seg_len += PAGE_SIZE;
> +		if (seg_len >= max_segment ||
> +		    !pages_are_mergeable(pages.get_page(pages, i),
> +					 pages.get_page(pages, i - 1))) {
> +			chunks++;
> +			seg_len = 0;
> +		}
> +	}
> +
> +	/* merging chunks and putting them into the scatterlist */
> +	for (i = 0; i < chunks; i++) {
> +		unsigned int j, chunk_size;
> +
> +		/* look for the end of the current chunk */
> +		seg_len = 0;
> +		for (j = cur_pg_index + 1; j <= last_pg_index; j++) {
> +			seg_len += PAGE_SIZE;
> +			if (seg_len >= max_segment ||
> +			    !pages_are_mergeable(pages.get_page(pages, j),
> +						 pages.get_page(pages, j - 1)))
> +				break;
> +		}
> +
> +		/* Pass how many chunks might be left */
> +		s = get_next_sg(sgt_append, s, chunks - i + left_pages,
> +				gfp_mask);
> +		if (IS_ERR(s)) {
> +			/*
> +			 * Adjust entry length to be as before function was
> +			 * called.
> +			 */
> +			if (sgt_append->prv)
> +				sgt_append->prv->length = prv_len;
> +			return PTR_ERR(s);
> +		}
> +		chunk_size = ((j - cur_pg_index) << PAGE_SHIFT) - offset;
> +		sg_set_page(s, pages.get_page(pages, cur_pg_index),
> +			    min_t(unsigned long, size, chunk_size), offset);
> +		added_nents++;
> +		size -= chunk_size;
> +		offset = 0;
> +		cur_pg_index = j;
> +	}
> +	sgt_append->sgt.nents += added_nents;
> +	sgt_append->sgt.orig_nents = sgt_append->sgt.nents;
> +	sgt_append->prv = s;
> +out:
> +	if (!left_pages)
> +		sg_mark_end(s);
> +	return 0;
> +}
> +
>  /**
>   * sg_alloc_table_from_pages_segment - Allocate and initialize an sg table from
>   *                                     an array of pages and given maximum
> @@ -596,6 +705,25 @@ int sg_alloc_table_from_pages_segment(struct sg_table *sgt, struct page **pages,
>  }
>  EXPORT_SYMBOL(sg_alloc_table_from_pages_segment);
>  
> +int sg_alloc_table_from_page_array_segment(struct sg_table *sgt, struct page_array pages,
> +					   unsigned int idx, unsigned int n_pages, unsigned int offset,
> +					   unsigned long size, unsigned int max_segment, gfp_t gfp_mask)
> +{
> +	struct sg_append_table append = {};
> +	int err;
> +
> +	err = sg_alloc_append_table_from_page_array(&append, pages, idx, n_pages, offset,
> +						    size, max_segment, 0, gfp_mask);
> +	if (err) {
> +		sg_free_append_table(&append);
> +		return err;
> +	}
> +	memcpy(sgt, &append.sgt, sizeof(*sgt));
> +	WARN_ON(append.total_nents != sgt->orig_nents);
> +	return 0;
> +}
> +EXPORT_SYMBOL(sg_alloc_table_from_page_array_segment);
> +
>  #ifdef CONFIG_SGL_ALLOC
>  
>  /**


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 4/7] drm/shmem: Introduce the notion of sparse objects
  2025-02-18 23:25 ` [RFC PATCH 4/7] drm/shmem: Introduce the notion of sparse objects Adrián Larumbe
@ 2025-02-25 13:28   ` Boris Brezillon
  0 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-02-25 13:28 UTC (permalink / raw)
  To: Adrián Larumbe
  Cc: dri-devel, linux-kernel, Steven Price, Rob Herring,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, kernel

On Tue, 18 Feb 2025 23:25:34 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> Sparse DRM objects will store their backing pages in an xarray, to avoid the
> overhead of preallocating a huge struct page pointer array when only a very
> small range of indices might be assigned.
> 
> For now, only the definition of a sparse object as a union alternative to a
> 'dense' object is provided, with functions that exploit it being part of later
> commits.
> 
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 42 +++++++++++++++++++++++---
>  include/drm/drm_gem_shmem_helper.h     | 18 ++++++++++-
>  2 files changed, 54 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index 5ab351409312..d63e42be2d72 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -10,6 +10,7 @@
>  #include <linux/shmem_fs.h>
>  #include <linux/slab.h>
>  #include <linux/vmalloc.h>
> +#include <linux/xarray.h>
>  
>  #ifdef CONFIG_X86
>  #include <asm/set_memory.h>
> @@ -50,7 +51,7 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs = {
>  
>  static struct drm_gem_shmem_object *
>  __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
> -		       struct vfsmount *gemfs)
> +		       bool sparse, struct vfsmount *gemfs)
>  {
>  	struct drm_gem_shmem_object *shmem;
>  	struct drm_gem_object *obj;
> @@ -90,6 +91,11 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
>  
>  	INIT_LIST_HEAD(&shmem->madv_list);
>  
> +	if (unlikely(sparse))
> +		xa_init_flags(&shmem->xapages, XA_FLAGS_ALLOC);
> +
> +	shmem->sparse = sparse;

Looks like the only caller passing sparse=true is
drm_gem_shmem_create_sparse(), and the sparse property is not used for
the rest of the gem_shmem object initialization, so maybe we could move
that code to drm_gem_shmem_create_sparse() instead of modifying the
prototype of __drm_gem_shmem_create().

> +
>  	if (!private) {
>  		/*
>  		 * Our buffers are kept pinned, so allocating them
> @@ -124,10 +130,16 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, bool private,
>   */
>  struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size)
>  {
> -	return __drm_gem_shmem_create(dev, size, false, NULL);
> +	return __drm_gem_shmem_create(dev, size, false, false, NULL);
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
>  
> +struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size)
> +{
> +	return __drm_gem_shmem_create(dev, size, false, true, NULL);
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_create_sparse);
> +
>  /**
>   * drm_gem_shmem_create_with_mnt - Allocate an object with the given size in a
>   * given mountpoint
> @@ -145,7 +157,7 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de
>  							   size_t size,
>  							   struct vfsmount *gemfs)
>  {
> -	return __drm_gem_shmem_create(dev, size, false, gemfs);
> +	return __drm_gem_shmem_create(dev, size, false, false, gemfs);
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_create_with_mnt);
>  
> @@ -173,7 +185,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
>  			sg_free_table(shmem->sgt);
>  			kfree(shmem->sgt);
>  		}
> -		if (shmem->pages)
> +
> +		if ((!shmem->sparse && shmem->pages) ||
> +		    (shmem->sparse && !xa_empty(&shmem->xapages)))
>  			drm_gem_shmem_put_pages(shmem);

Can we let drm_gem_shmem_put_pages() do the is_empty() check?

>  
>  		drm_WARN_ON(obj->dev, shmem->pages_use_count);
> @@ -191,11 +205,19 @@ static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
>  	struct drm_gem_object *obj = &shmem->base;
>  	struct page **pages;
>  
> +	if (drm_WARN_ON(obj->dev, shmem->sparse))
> +		return -EINVAL;
> +
>  	dma_resv_assert_held(shmem->base.resv);
>  
>  	if (shmem->pages_use_count++ > 0)
>  		return 0;
>  
> +	/* We only allow increasing the user count in the case of
> +	  sparse shmem objects with some backed pages for now */
> +	if (shmem->sparse && xa_empty(&shmem->xapages))
> +		return -EINVAL;

You'll never enter this branch because you return -EINVAL early when
sparse==true.

> +
>  	pages = drm_gem_get_pages(obj);
>  	if (IS_ERR(pages)) {
>  		drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
> @@ -541,6 +563,8 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  	struct page *page;
>  	pgoff_t page_offset;
>  
> +	drm_WARN_ON(obj->dev, shmem->sparse);

For all those WARN_ON()s you add, I would add a comment explaining why
you don't expect sparse objects to enter this path. In that case, it
has to do with the fact that sparse GEMs are not mmap-able (yet?).
And, if you don't want to populate on-demand, you should probably
return VM_FAULT_SIGBUS here, even if that's not expected.

> +
>  	/* We don't use vmf->pgoff since that has the fake offset */
>  	page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
>  
> @@ -567,6 +591,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>  
>  	drm_WARN_ON(obj->dev, obj->import_attach);
> +	drm_WARN_ON(obj->dev, shmem->sparse);
>  
>  	dma_resv_lock(shmem->base.resv, NULL);
>  
> @@ -666,6 +691,9 @@ void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
>  	if (shmem->base.import_attach)
>  		return;
>  
> +	if (drm_WARN_ON(shmem->base.dev, shmem->sparse))
> +		return;
> +

We probably want to print some of these in case we're dealing with a
sparse GEM object.

>  	drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count);
>  	drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count);
>  	drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
> @@ -691,6 +719,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
>  	struct drm_gem_object *obj = &shmem->base;
>  
>  	drm_WARN_ON(obj->dev, obj->import_attach);
> +	drm_WARN_ON(obj->dev, shmem->sparse);
>  
>  	return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT);
>  }
> @@ -702,6 +731,9 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
>  	int ret;
>  	struct sg_table *sgt;
>  
> +	if (drm_WARN_ON(obj->dev, shmem->sparse))
> +		return ERR_PTR(-EINVAL);
> +
>  	if (shmem->sgt)
>  		return shmem->sgt;
>  
> @@ -787,7 +819,7 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
>  	size_t size = PAGE_ALIGN(attach->dmabuf->size);
>  	struct drm_gem_shmem_object *shmem;
>  
> -	shmem = __drm_gem_shmem_create(dev, size, true, NULL);
> +	shmem = __drm_gem_shmem_create(dev, size, true, false, NULL);
>  	if (IS_ERR(shmem))
>  		return ERR_CAST(shmem);
>  
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index d22e3fb53631..902039cfc4ce 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -6,6 +6,7 @@
>  #include <linux/fs.h>
>  #include <linux/mm.h>
>  #include <linux/mutex.h>
> +#include <linux/xarray.h>
>  
>  #include <drm/drm_file.h>
>  #include <drm/drm_gem.h>
> @@ -29,7 +30,11 @@ struct drm_gem_shmem_object {
>  	/**
>  	 * @pages: Page table
>  	 */
> -	struct page **pages;
> +	union {
> +
> +		struct page **pages;
> +		struct xarray xapages;
> +	};
>  
>  	/**
>  	 * @pages_use_count:
> @@ -91,6 +96,11 @@ struct drm_gem_shmem_object {
>  	 * @map_wc: map object write-combined (instead of using shmem defaults).
>  	 */
>  	bool map_wc : 1;
> +
> +	/**
> +	 * @sparse: the object's virtual memory space is only partially backed by pages

I would drop the "virtual memory space" part. sparse just means the
object might be partially backed by physical memory, and that memory
will be allocated on-demand (most likely on-GPU-demand, but there's
nothing preventing us from doing it on-CPU-demand, as long as the fault
handler knows the allocation granularity).

> +	 */
> +	bool sparse : 1;
>  };
>  
>  #define to_drm_gem_shmem_obj(obj) \
> @@ -229,6 +239,9 @@ static inline int drm_gem_shmem_object_vmap(struct drm_gem_object *obj,
>  {
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>  
> +	if (shmem->sparse)
> +		return -EACCES;
> +
>  	return drm_gem_shmem_vmap(shmem, map);

Do we need a WARN_ON() in drm_gem_shmem_vunmap()?

>  }
>  
> @@ -263,6 +276,9 @@ static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct v
>  {
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>  
> +	if (shmem->sparse)
> +		return -EACCES;
> +
>  	return drm_gem_shmem_mmap(shmem, vma);
>  }
>  

I guess we also need to fail in drm_gem_shmem_get_sg_table() if this is
a sparse object.


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 5/7] drm/shmem: Implement sparse allocation of pages for shmem objects
  2025-02-18 23:25 ` [RFC PATCH 5/7] drm/shmem: Implement sparse allocation of pages for shmem objects Adrián Larumbe
@ 2025-02-25 14:39   ` Boris Brezillon
  0 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-02-25 14:39 UTC (permalink / raw)
  To: Adrián Larumbe
  Cc: dri-devel, linux-kernel, Steven Price, Rob Herring,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, kernel

On Tue, 18 Feb 2025 23:25:35 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> Add a new function that lets drivers allocate pages for a subset of the shmem
> object's virtual address range. Expand the shmem object's definition to include
> an RSS field, since it's different from the base GEM object's virtual size.
> 
> Also add a new function for putting the pages of a sparse page array. There is
> refactorisation potential with drm_gem_put_pages, but it is yet to be decided
> what this should look like.
> 
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
>  drivers/gpu/drm/drm_gem.c              |  32 +++++++
>  drivers/gpu/drm/drm_gem_shmem_helper.c | 123 ++++++++++++++++++++++++-
>  include/drm/drm_gem.h                  |   3 +
>  include/drm/drm_gem_shmem_helper.h     |  12 +++
>  4 files changed, 165 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index ee811764c3df..930c5219e1e9 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -679,6 +679,38 @@ void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
>  }
>  EXPORT_SYMBOL(drm_gem_put_pages);
>  
> +void drm_gem_put_sparse_xarray(struct xarray *pa, unsigned long idx,
> +				unsigned int npages, bool dirty, bool accessed)

How about renaming that one drm_gem_put_xarray_page_range()? The sparse
property is something decided by the caller IMHO, and this aspect
doesn't necessarily have to leak through the drm_gem API.

> +{
> +	struct folio_batch fbatch;
> +	struct page *page;
> +
> +	folio_batch_init(&fbatch);
> +
> +	xa_for_each(pa, idx, page) {
> +		struct folio *folio = page_folio(page);
> +
> +		if (dirty)
> +			folio_mark_dirty(folio);
> +		if (accessed)
> +			folio_mark_accessed(folio);
> +
> +		/* Undo the reference we took when populating the table */
> +		if (!folio_batch_add(&fbatch, folio))
> +			drm_gem_check_release_batch(&fbatch);
> +
> +		xa_erase(pa, idx);
> +
> +		idx += folio_nr_pages(folio) - 1;
> +	}
> +
> +	if (folio_batch_count(&fbatch))
> +		drm_gem_check_release_batch(&fbatch);
> +
> +	WARN_ON((idx+1) != npages);
> +}
> +EXPORT_SYMBOL(drm_gem_put_sparse_xarray);

Since you already expose a helper to return pages in an xarray range,
why not add a helper to allocate/get pages? That's basically
drm_gem_shmem_get_sparse_pages_locked() but without the sgt logic, and
with the xarray passed as an argument (plus a gfp_t argument to specify
allocation constraints).

> +
>  static int objects_lookup(struct drm_file *filp, u32 *handle, int count,
>  			  struct drm_gem_object **objs)
>  {
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index d63e42be2d72..40f7f6812195 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -10,7 +10,6 @@
>  #include <linux/shmem_fs.h>
>  #include <linux/slab.h>
>  #include <linux/vmalloc.h>
> -#include <linux/xarray.h>
>  
>  #ifdef CONFIG_X86
>  #include <asm/set_memory.h>
> @@ -161,6 +160,18 @@ struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *de
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_create_with_mnt);
>  
> +static void drm_gem_shmem_put_pages_sparse(struct drm_gem_shmem_object *shmem)
> +{
> +	unsigned int n_pages = shmem->rss_size / PAGE_SIZE;
> +
> +	drm_WARN_ON(shmem->base.dev, (shmem->rss_size & (PAGE_SIZE - 1)) != 0);
> +	drm_WARN_ON(shmem->base.dev, !shmem->sparse);
> +
> +	drm_gem_put_sparse_xarray(&shmem->xapages, 0, n_pages,
> +				   shmem->pages_mark_dirty_on_put,
> +				   shmem->pages_mark_accessed_on_put);
> +}
> +
>  /**
>   * drm_gem_shmem_free - Free resources associated with a shmem GEM object
>   * @shmem: shmem GEM object to free
> @@ -264,10 +275,15 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
>  		set_pages_array_wb(shmem->pages, obj->size >> PAGE_SHIFT);
>  #endif
>  
> -	drm_gem_put_pages(obj, shmem->pages,
> -			  shmem->pages_mark_dirty_on_put,
> -			  shmem->pages_mark_accessed_on_put);
> -	shmem->pages = NULL;
> +	if (!shmem->sparse) {
> +		drm_gem_put_pages(obj, shmem->pages,
> +				  shmem->pages_mark_dirty_on_put,
> +				  shmem->pages_mark_accessed_on_put);
> +		shmem->pages = NULL;
> +	} else {
> +		drm_gem_shmem_put_pages_sparse(shmem);
> +		xa_destroy(&shmem->xapages);
> +	}
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_put_pages);
>  
> @@ -765,6 +781,81 @@ static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
>  	return ERR_PTR(ret);
>  }
>  
> +static struct sg_table *drm_gem_shmem_get_sparse_pages_locked(struct drm_gem_shmem_object *shmem,
> +							       unsigned int n_pages,
> +							       pgoff_t page_offset)

Can we keep the page allocation and sgt creation distinct, with a
drm_gem_shmem_sparse_populate_locked() returning an int, and a
drm_gem_shmem_sparse_get_sgt_for_range() returning an sgt for a
previously populated range?
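
In prototype form, that split might look something like this (both names
are the proposals above, not existing API, and the argument order is a
guess):

```c
/* Sketch of the proposed split: populate backing pages first, then
 * build an sgt only for a range that is already backed. */
int drm_gem_shmem_sparse_populate_locked(struct drm_gem_shmem_object *shmem,
					 pgoff_t page_offset,
					 unsigned int n_pages, gfp_t gfp);

struct sg_table *
drm_gem_shmem_sparse_get_sgt_for_range(struct drm_gem_shmem_object *shmem,
				       pgoff_t page_offset,
				       unsigned int n_pages);
```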

> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +	gfp_t mask = GFP_KERNEL | GFP_NOWAIT;

You shouldn't mix GFP_KERNEL and GFP_NOWAIT, as GFP_KERNEL already
includes __GFP_RECLAIM.
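
Something along these lines (a sketch only; shmem_read_mapping_page_gfp()
is the existing shmem helper that takes an explicit gfp mask, and the
`blocking` flag is hypothetical):

```c
/* Pick one coherent gfp mask instead of OR-ing both: GFP_KERNEL
 * already contains __GFP_RECLAIM, so "GFP_KERNEL | GFP_NOWAIT"
 * still permits sleeping in reclaim. */
gfp_t gfp = blocking ? GFP_KERNEL : GFP_NOWAIT;

page = shmem_read_mapping_page_gfp(mapping, page_offset + i, gfp);
```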

> +	size_t size = n_pages * PAGE_SIZE;
> +	struct address_space *mapping;
> +	struct sg_table *sgt;
> +	struct page *page;
> +	bool first_alloc;
> +	int ret, i;
> +
> +	if (!shmem->sparse)
> +		return ERR_PTR(-EINVAL);
> +
> +	/* If the mapping exists, then bail out immediately */
> +	if (xa_load(&shmem->xapages, page_offset) != NULL)
> +		return ERR_PTR(-EEXIST);

You're only checking the first page here. Maybe we should just
ignore the case where some pages are already populated, and populate
the missing ones. This implies leaving already allocated pages in place
if an error occurs in the middle instead of trying to revert what we've
allocated, but that's probably okay.
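
The allocation loop would then reduce to something like this (a sketch
under the assumption above, i.e. pages already allocated are left in
place on error):

```c
/* Populate only the holes; existing entries are skipped, and on
 * failure everything allocated so far is intentionally kept. */
for (i = 0; i < n_pages; i++) {
	if (xa_load(&shmem->xapages, page_offset + i))
		continue;	/* already populated */

	page = shmem_read_mapping_page(mapping, page_offset + i);
	if (IS_ERR(page))
		return PTR_ERR(page);

	ret = xa_err(xa_store(&shmem->xapages, page_offset + i, page, mask));
	if (ret) {
		put_page(page);
		return ret;
	}
}
```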

> +
> +	dma_resv_assert_held(shmem->base.resv);
> +
> +	first_alloc = xa_empty(&shmem->xapages);
> +
> +	mapping = shmem->base.filp->f_mapping;
> +	mapping_set_unevictable(mapping);
> +
> +	for (i = 0; i < n_pages; i++) {
> +		page = shmem_read_mapping_page_nonblocking(mapping, page_offset + i);

Looks like we're mixing the sparse and non-blocking aspects. I'd rather
make the non-blocking behaviour selectable by passing gfp_t flags to this

> +		if (IS_ERR(page)) {
> +			ret = PTR_ERR(page);
> +			goto err_free_pages;
> +		}
> +
> +		/* Add the page into the xarray */
> +		ret = xa_err(xa_store(&shmem->xapages, page_offset + i, page, mask));
> +		if (ret) {
> +			put_page(page);
> +			goto err_free_pages;
> +		}
> +	}
> +
> +	sgt = kzalloc(sizeof(*sgt), mask);
> +	if (!sgt) {
> +		ret = -ENOMEM;
> +		goto err_free_pages;
> +	}
> +
> +	ret = sg_alloc_table_from_page_xarray(sgt, &shmem->xapages, page_offset, n_pages, 0, size, mask);
> +	if (ret)
> +		goto err_free_sgtable;
> +
> +	ret = dma_map_sgtable(obj->dev->dev, sgt, DMA_BIDIRECTIONAL, 0);
> +	if (ret)
> +		goto err_free_sgtable;
> +
> +	if (first_alloc)
> +		shmem->pages_use_count = 1;
> +
> +	shmem->rss_size += size;
> +
> +	return sgt;
> +
> +err_free_sgtable:
> +	kfree(sgt);
> +err_free_pages:
> +	while (--i) {
> +		page = xa_erase(&shmem->xapages, page_offset + i);
> +		if (drm_WARN_ON(obj->dev, !page))
> +			continue;
> +		put_page(page);
> +	}

Why not call drm_gem_put_sparse_xarray() here?

> +	return ERR_PTR(ret);
> +}
> +
>  /**
>   * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
>   *				 scatter/gather table for a shmem GEM object.
> @@ -796,6 +887,28 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
>  
> +struct sg_table *drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem,
> +						     unsigned int n_pages, pgoff_t page_offset)
> +{
> +	struct drm_gem_object *obj = &shmem->base;
> +	struct sg_table *sgt;
> +	int ret;
> +
> +	if (drm_WARN_ON(obj->dev, !shmem->sparse))
> +		return ERR_PTR(-EINVAL);
> +
> +	ret = dma_resv_lock(shmem->base.resv, NULL);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	sgt = drm_gem_shmem_get_sparse_pages_locked(shmem, n_pages, page_offset);

Let's make the page allocation explicit (force the caller to call
drm_gem_shmem_sparse_populate_locked() before this function), and return
an error if pages are not populated in the requested range.

> +
> +	dma_resv_unlock(shmem->base.resv);
> +
> +	return sgt;
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sparse_pages_sgt);
> +
>  /**
>   * drm_gem_shmem_prime_import_sg_table - Produce a shmem GEM object from
>   *                 another driver's scatter/gather table of pinned pages
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index fdae947682cd..4fd45169a3af 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -38,6 +38,7 @@
>  #include <linux/dma-resv.h>
>  #include <linux/list.h>
>  #include <linux/mutex.h>
> +#include <linux/xarray.h>
>  
>  #include <drm/drm_vma_manager.h>
>  
> @@ -532,6 +533,8 @@ int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
>  struct page **drm_gem_get_pages(struct drm_gem_object *obj);
>  void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
>  		bool dirty, bool accessed);
> +void drm_gem_put_sparse_xarray(struct xarray *pa, unsigned long idx,
> +				unsigned int npages, bool dirty, bool accessed);
>  
>  void drm_gem_lock(struct drm_gem_object *obj);
>  void drm_gem_unlock(struct drm_gem_object *obj);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 902039cfc4ce..fcd84c8cf8e7 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -44,6 +44,14 @@ struct drm_gem_shmem_object {
>  	 */
>  	unsigned int pages_use_count;
>  
> +	/**
> +	 * @rss_size:
> +	 *
> +	 * Size of the object RSS, in bytes. Tracked across the
> +	 * object's lifetime.
> +	 */
> +	size_t rss_size;

Let's do that in a separate patch series dealing with memory
accounting for sparse GEMs, if you don't mind. This can probably stay
driver specific until the rest of the changes have been accepted.

> +
>  	/**
>  	 * @madv: State for madvise
>  	 *
> @@ -107,6 +115,7 @@ struct drm_gem_shmem_object {
>  	container_of(obj, struct drm_gem_shmem_object, base)
>  
>  struct drm_gem_shmem_object *drm_gem_shmem_create(struct drm_device *dev, size_t size);
> +struct drm_gem_shmem_object *drm_gem_shmem_create_sparse(struct drm_device *dev, size_t size);
>  struct drm_gem_shmem_object *drm_gem_shmem_create_with_mnt(struct drm_device *dev,
>  							   size_t size,
>  							   struct vfsmount *gemfs);
> @@ -138,6 +147,9 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem);
>  struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem);
>  struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem);
>  
> +struct sg_table *drm_gem_shmem_get_sparse_pages_sgt(struct drm_gem_shmem_object *shmem,
> +						     unsigned int n_pages, pgoff_t page_offset);
> +
>  void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
>  			      struct drm_printer *p, unsigned int indent);
>  



* Re: [RFC PATCH 6/7] drm/panfrost: Use shmem sparse allocation for heap BOs
  2025-02-18 23:25 ` [RFC PATCH 6/7] drm/panfrost: Use shmem sparse allocation for heap BOs Adrián Larumbe
@ 2025-02-25 15:04   ` Boris Brezillon
  0 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-02-25 15:04 UTC (permalink / raw)
  To: Adrián Larumbe
  Cc: dri-devel, linux-kernel, Steven Price, Rob Herring,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, kernel

On Tue, 18 Feb 2025 23:25:36 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> Panfrost heap BOs grow on demand when the GPU triggers a page fault after
> accessing an address within the BO's virtual range.
> 
> We still store the sgts we get back from the shmem sparse allocation function,
> since it was decided that management of sparse-memory SGTs should be handled by
> client drivers rather than by the shmem subsystem.
> 
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
>  drivers/gpu/drm/panfrost/panfrost_gem.c | 12 ++--
>  drivers/gpu/drm/panfrost/panfrost_gem.h |  2 +-
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 85 +++++--------------------
>  3 files changed, 25 insertions(+), 74 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
> index 8e0ff3efede7..0cda2c4e524f 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
> @@ -40,10 +40,10 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj)
>  		int n_sgt = bo->base.base.size / SZ_2M;
>  
>  		for (i = 0; i < n_sgt; i++) {
> -			if (bo->sgts[i].sgl) {
> -				dma_unmap_sgtable(pfdev->dev, &bo->sgts[i],
> +			if (bo->sgts[i]) {
> +				dma_unmap_sgtable(pfdev->dev, bo->sgts[i],
>  						  DMA_BIDIRECTIONAL, 0);
> -				sg_free_table(&bo->sgts[i]);
> +				sg_free_table(bo->sgts[i]);
>  			}
>  		}
>  		kvfree(bo->sgts);
> @@ -274,7 +274,11 @@ panfrost_gem_create(struct drm_device *dev, size_t size, u32 flags)
>  	if (flags & PANFROST_BO_HEAP)
>  		size = roundup(size, SZ_2M);
>  
> -	shmem = drm_gem_shmem_create(dev, size);
> +	if (flags & PANFROST_BO_HEAP)
> +		shmem = drm_gem_shmem_create_sparse(dev, size);
> +	else
> +		shmem = drm_gem_shmem_create(dev, size);
> +
>  	if (IS_ERR(shmem))
>  		return ERR_CAST(shmem);
>  
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h
> index 7516b7ecf7fe..2a8d0752011e 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
> @@ -11,7 +11,7 @@ struct panfrost_mmu;
>  
>  struct panfrost_gem_object {
>  	struct drm_gem_shmem_object base;
> -	struct sg_table *sgts;
> +	struct sg_table **sgts;

I guess using an xarray here would make sense. Or maybe even an
sg_append_table, since we don't expect holes in the populated pages.
This makes me wonder if we really want the gem_shmem layer to automate
sgt creation for sparse GEM objects. Looks like something the driver
can easily optimize for its use-case.


* Re: [RFC PATCH 7/7] drm/panfrost/panthor: Take sparse objects into account for fdinfo
  2025-02-18 23:25 ` [RFC PATCH 7/7] drm/panfrost/panthor: Take sparse objects into account for fdinfo Adrián Larumbe
@ 2025-02-25 15:09   ` Boris Brezillon
  0 siblings, 0 replies; 14+ messages in thread
From: Boris Brezillon @ 2025-02-25 15:09 UTC (permalink / raw)
  To: Adrián Larumbe
  Cc: dri-devel, linux-kernel, Steven Price, Rob Herring,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Liviu Dudau, kernel

On Tue, 18 Feb 2025 23:25:37 +0000
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> Because of the alternative definition of the 'pages' field in shmem after adding
> support for sparse allocations, the logic for deciding whether pages are
> available must be expanded.
> 
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
>  drivers/gpu/drm/panfrost/panfrost_gem.c | 4 +++-
>  drivers/gpu/drm/panthor/panthor_gem.c   | 4 +++-
>  2 files changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
> index 0cda2c4e524f..ced2fdee74ab 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_gem.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_gem.c
> @@ -200,7 +200,9 @@ static enum drm_gem_object_status panfrost_gem_status(struct drm_gem_object *obj
>  	struct panfrost_gem_object *bo = to_panfrost_bo(obj);
>  	enum drm_gem_object_status res = 0;
>  
> -	if (bo->base.base.import_attach || bo->base.pages)
> +	if (bo->base.base.import_attach ||
> +	    (!bo->base.sparse && bo->base.pages) ||
> +	    (bo->base.sparse && !xa_empty(&bo->base.xapages)))
>  		res |= DRM_GEM_OBJECT_RESIDENT;

Looks like we want some kind of drm_gem_shmem_is_populated() helper,
and have all drivers doing this pages != NULL test patched to use this
new helper as part of the patch introducing sparse shmem GEMs.
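
Based on the fields added earlier in the series, such a helper could be
as simple as (the name is the suggestion above, i.e. hypothetical):

```c
static inline bool drm_gem_shmem_is_populated(struct drm_gem_shmem_object *shmem)
{
	if (shmem->sparse)
		return !xa_empty(&shmem->xapages);

	return shmem->pages != NULL;
}
```

The two hunks in this patch would then collapse to a single
`drm_gem_shmem_is_populated(&bo->base)` check.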

>  
>  	if (bo->base.madv == PANFROST_MADV_DONTNEED)
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> index 8244a4e6c2a2..8dbaf766bd79 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.c
> +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> @@ -155,7 +155,9 @@ static enum drm_gem_object_status panthor_gem_status(struct drm_gem_object *obj)
>  	struct panthor_gem_object *bo = to_panthor_bo(obj);
>  	enum drm_gem_object_status res = 0;
>  
> -	if (bo->base.base.import_attach || bo->base.pages)
> +	if (bo->base.base.import_attach ||
> +	    (!bo->base.sparse && bo->base.pages) ||
> +	    (bo->base.sparse && !xa_empty(&bo->base.xapages)))
>  		res |= DRM_GEM_OBJECT_RESIDENT;
>  
>  	return res;


