public inbox for linux-kernel@vger.kernel.org
* [PATCH v10 0/6] Support sparse mappings in Panthor
@ 2026-04-29 18:32 Adrián Larumbe
  2026-04-29 18:32 ` [PATCH v10 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Adrián Larumbe @ 2026-04-29 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
	Adrián Larumbe

This patch series implements sparse mappings in Panthor. Because the hardware
MMU lacks features for sparse page table entries, this is implemented with a
dummy object onto which sparse mappings requested through VM_BIND are mapped
cyclically.

To that end, a new VM_BIND flag was added in the driver's uAPI.

The end goal of this patch series is to improve support of Vulkan sparse
resources. At the moment, on Mali hardware, a Vulkan sparse map is
implemented by mapping the specified region to a "dummy bo" so that
accesses do not fault. A newly created sparse resource starts off
unmapped, and therefore also has to be mapped to the "dummy bo". This
"dummy bo" is small (a single page) compared with the sizes of the VA
ranges we might want to map to it, so a large number of vm_bind ops can
be necessary. For
example, if the user were to create a 100e6-byte sparse resident resource, we'd
have to poke VM_BIND with ceil(100e6/0x1000)=24415 map operations.

The new VM_BIND sparse mapping feature addresses this inefficiency by
letting us implement a Vulkan sparse map operation, or the initialization of
a sparse resident resource, with a single map operation.
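For illustration, the op-count arithmetic above can be sketched as a
standalone snippet (not driver code; the helper name is made up):

```c
#include <assert.h>
#include <stdint.h>

/* Page size of the old UMD "dummy bo" scheme: one 4 KiB page. */
#define UMD_DUMMY_PAGE_SZ 0x1000ULL

/* Number of per-page VM_BIND map ops the old scheme needs to cover a
 * sparse range, i.e. ceil(range / page size). With the new sparse flag,
 * the same range takes exactly one map operation. */
static uint64_t old_map_ops(uint64_t range_bytes)
{
	return (range_bytes + UMD_DUMMY_PAGE_SZ - 1) / UMD_DUMMY_PAGE_SZ;
}
```

A 100e6-byte resource then costs old_map_ops(100000000) == 24415 map
operations under the old scheme, matching the figure above.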

Link to the discussion of the previous revision of this patch series:
https://lore.kernel.org/dri-devel/20260422122538.3117380-1-adrian.larumbe@collabora.com

Changes in v10:
 - Fixed uAPI enum ordering issue
 - Reworked sparse mapping by hardcoding the size of the dummy object.
 - Added missing cleanup in case the dummy object fails to allocate.
 - Other minor fixes.

Changes in v9:
 - Addressed some nits.
 - Rearranged argument checks for vm_bind to profit from compiler optimisations.
 - Added some further comments.

Changes in v8:
 - Allocate a single 2MiB BO as a dummy buffer for sparse mappings. Let its pages
 be retrieved just like for any other BO during a map operation.
 - Removed locking around allocation of the dummy BO by doing it right at the
  time of a VMA pool creation.
 - Some minor style fixes.
 - Refactor low level page mapping code in sm_remap and sm_map.
 - Made NO_EXEC a mandatory flag for sparse mappings.
 - Actually bumped the driver's minor revision number.

Changes in v7:
 - Switched back to Panthor BO-backed dummy object instead of raw pages so as to profit from
 the existing shrinker reclaim paths.
 - Created dummy BOs per file context to avoid information leaking between them.
 - Reorganised some of the low-level page mapping code.
 - Added commits deleting spurious whitespace and an unused op context field.

Changes in v6:
 - Moved all the GPUVM core code into the driver backend.
 - Discarded commits that touch on the gpuvm core too.
 - Redesigned the uAPI so that no repeat range or user BO is supplied for sparse mappings.
 - Replaced user-supplied BO with a kernel-allocated array of raw pages.

Changes in v5:
 - Minor fixes to drm_gpuvm.c.
 - Add panthor MMU page sizes device queryable param.
 - Add helper to make sure unmaps of repeated regions are correct.
 - Some fixes to Panthor's repeat mappings implementation.
 - Lump arguments to panthor_vm_prepare_map_op_ctx into a single struct.

Changes in v4:
 - Fixed the warnings reported by the kernel test robot.
  https://lore.kernel.org/oe-kbuild-all/202507041635.WyDu3TQ1-lkp@intel.com/
 - Fixed the warnings reported by the CI.
  https://patchwork.freedesktop.org/series/151264/

No changes in v3.

Changes in v2:
 - Make panthor use this stuff.
 - Make it possible to express a repeated mapping of any suitably sized
  and aligned range of a BO, rather than strictly the page-sized
  prefix, generalizing the API. Rename DRM_GPUVA_SINGLE_PAGE to
  DRM_GPUVA_REPEAT.
 - Clean up parts of drm/gpuvm affected by these changes.

Adrián Larumbe (6):
  drm/panthor: Expose GPU page sizes to UM
  drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx
  drm/panthor: Delete spurious whitespace from uAPI header
  drm/panthor: Remove unused operation context field
  drm/panthor: Support sparse mappings
  drm/panthor: Bump the driver version to 1.9

 drivers/gpu/drm/panthor/panthor_device.h |   3 +
 drivers/gpu/drm/panthor/panthor_drv.c    |  12 +-
 drivers/gpu/drm/panthor/panthor_gem.c    |  18 ++
 drivers/gpu/drm/panthor/panthor_gem.h    |   2 +
 drivers/gpu/drm/panthor/panthor_mmu.c    | 201 ++++++++++++++++++-----
 include/uapi/drm/panthor_drm.h           |  26 ++-
 6 files changed, 218 insertions(+), 44 deletions(-)


base-commit: 9a096b8879801a597c06c1a69d41c827458cea60
prerequisite-patch-id: 0000000000000000000000000000000000000000
--
2.53.0

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v10 1/6] drm/panthor: Expose GPU page sizes to UM
  2026-04-29 18:32 [PATCH v10 0/6] Support sparse mappings in Panthor Adrián Larumbe
@ 2026-04-29 18:32 ` Adrián Larumbe
  2026-04-29 18:32 ` [PATCH v10 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Adrián Larumbe @ 2026-04-29 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
	Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Daniel Almeida, Alice Ryhl

In future commits that will implement repeated mappings, only repeat
values that are a multiple of a GPU page size will be accepted. That
means these values must be made known to UM. Expose them through a
queryable GPU info value.
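As a rough sketch of how UM might consume such a bitmap once queried
(hypothetical helper, not part of this series; each set bit encodes one
supported power-of-two page size):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Check whether a mapping (or repeat) size is a multiple of at least one
 * page size advertised in the bitmap, where each set bit is a power-of-two
 * page size (e.g. SZ_4K | SZ_2M == 0x201000). */
static bool size_is_supported(uint64_t page_size_bitmap, uint64_t size)
{
	for (uint64_t bm = page_size_bitmap; bm; bm &= bm - 1) {
		uint64_t pgsz = bm & -bm; /* extract the lowest set bit */

		if (size && !(size & (pgsz - 1)))
			return true;
	}
	return false;
}
```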

Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_device.h |  3 +++
 drivers/gpu/drm/panthor/panthor_drv.c    |  8 ++++++++
 drivers/gpu/drm/panthor/panthor_mmu.c    |  9 ++++++++-
 include/uapi/drm/panthor_drm.h           | 13 +++++++++++++
 4 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
index 5cba272f9b4d..d856a4fe1d61 100644
--- a/drivers/gpu/drm/panthor/panthor_device.h
+++ b/drivers/gpu/drm/panthor/panthor_device.h
@@ -158,6 +158,9 @@ struct panthor_device {
 	/** @csif_info: Command stream interface information. */
 	struct drm_panthor_csif_info csif_info;
 
+	/** @mmu_info: MMU info */
+	struct drm_panthor_mmu_info mmu_info;
+
 	/** @hw: GPU-specific data. */
 	struct panthor_hw *hw;
 
diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index 73fc983dc9b4..beca75b34293 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -175,6 +175,7 @@ panthor_get_uobj_array(const struct drm_panthor_obj_array *in, u32 min_stride,
 	_Generic(_obj_name, \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_gpu_info, tiler_present), \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_csif_info, pad), \
+		 PANTHOR_UOBJ_DECL(struct drm_panthor_mmu_info, page_size_bitmap), \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_timestamp_info, current_timestamp), \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_group_priorities_info, pad), \
 		 PANTHOR_UOBJ_DECL(struct drm_panthor_sync_op, timeline_value), \
@@ -954,6 +955,10 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
 			args->size = sizeof(priorities_info);
 			return 0;
 
+		case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
+			args->size = sizeof(ptdev->mmu_info);
+			return 0;
+
 		default:
 			return -EINVAL;
 		}
@@ -984,6 +989,9 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
 		panthor_query_group_priorities_info(file, &priorities_info);
 		return PANTHOR_UOBJ_SET(args->pointer, args->size, priorities_info);
 
+	case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
+		return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->mmu_info);
+
 	default:
 		return -EINVAL;
 	}
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index fc930ee158a5..9b526b61d96d 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2768,7 +2768,7 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
 	refcount_set(&vm->as.active_cnt, 0);
 
 	pgtbl_cfg = (struct io_pgtable_cfg) {
-		.pgsize_bitmap	= SZ_4K | SZ_2M,
+		.pgsize_bitmap	= ptdev->mmu_info.page_size_bitmap,
 		.ias		= va_bits,
 		.oas		= pa_bits,
 		.coherent_walk	= ptdev->coherent,
@@ -3213,6 +3213,11 @@ static void panthor_mmu_release_wq(struct drm_device *ddev, void *res)
 	destroy_workqueue(res);
 }
 
+static void panthor_mmu_info_init(struct panthor_device *ptdev)
+{
+	ptdev->mmu_info.page_size_bitmap = SZ_4K | SZ_2M;
+}
+
 /**
  * panthor_mmu_init() - Initialize the MMU logic.
  * @ptdev: Device.
@@ -3225,6 +3230,8 @@ int panthor_mmu_init(struct panthor_device *ptdev)
 	struct panthor_mmu *mmu;
 	int ret, irq;
 
+	panthor_mmu_info_init(ptdev);
+
 	mmu = drmm_kzalloc(&ptdev->base, sizeof(*mmu), GFP_KERNEL);
 	if (!mmu)
 		return -ENOMEM;
diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
index 0e455d91e77d..b462752c793d 100644
--- a/include/uapi/drm/panthor_drm.h
+++ b/include/uapi/drm/panthor_drm.h
@@ -253,6 +253,9 @@ enum drm_panthor_dev_query_type {
 	 * @DRM_PANTHOR_DEV_QUERY_GROUP_PRIORITIES_INFO: Query allowed group priorities information.
 	 */
 	DRM_PANTHOR_DEV_QUERY_GROUP_PRIORITIES_INFO,
+
+	/** @DRM_PANTHOR_DEV_QUERY_MMU_INFO: Query MMU information. */
+	DRM_PANTHOR_DEV_QUERY_MMU_INFO,
 };
 
 /**
@@ -487,6 +490,16 @@ struct drm_panthor_timestamp_info {
 	__u64 cpu_timestamp_nsec;
 };
 
+/**
+ * struct drm_panthor_mmu_info - MMU information
+ *
+ * Structure grouping all queryable information relating to the MMU.
+ */
+struct drm_panthor_mmu_info {
+	/** @page_size_bitmap: Allowed page sizes */
+	__u64 page_size_bitmap;
+};
+
 /**
  * struct drm_panthor_group_priorities_info - Group priorities information
  *
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v10 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx
  2026-04-29 18:32 [PATCH v10 0/6] Support sparse mappings in Panthor Adrián Larumbe
  2026-04-29 18:32 ` [PATCH v10 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
@ 2026-04-29 18:32 ` Adrián Larumbe
  2026-04-29 18:32 ` [PATCH v10 3/6] drm/panthor: Delete spurious whitespace from uAPI header Adrián Larumbe
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Adrián Larumbe @ 2026-04-29 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
	Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter

Instead of passing its constituent elements, pass the whole struct to
simplify the function prototype.

Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_mmu.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 9b526b61d96d..d35e65cab2b1 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1275,9 +1275,7 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
 static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 					 struct panthor_vm *vm,
 					 struct panthor_gem_object *bo,
-					 u64 offset,
-					 u64 size, u64 va,
-					 u32 flags)
+					 const struct drm_panthor_vm_bind_op *op)
 {
 	struct drm_gpuvm_bo *preallocated_vm_bo;
 	struct sg_table *sgt = NULL;
@@ -1286,12 +1284,12 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	if (!bo)
 		return -EINVAL;
 
-	if ((flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
-	    (flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
+	if ((op->flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
+	    (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
 		return -EINVAL;
 
 	/* Make sure the VA and size are in-bounds. */
-	if (size > bo->base.size || offset > bo->base.size - size)
+	if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
 		return -EINVAL;
 
 	/* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
@@ -1299,7 +1297,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	    bo->exclusive_vm_root_gem != panthor_vm_root_gem(vm))
 		return -EINVAL;
 
-	panthor_vm_init_op_ctx(op_ctx, size, va, flags);
+	panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags);
 
 	ret = panthor_vm_op_ctx_prealloc_vmas(op_ctx);
 	if (ret)
@@ -1328,7 +1326,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	}
 
 	op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
-	op_ctx->map.bo_offset = offset;
+	op_ctx->map.bo_offset = op->bo_offset;
 
 	ret = panthor_vm_op_ctx_prealloc_pts(op_ctx);
 	if (ret)
@@ -2848,10 +2846,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
 		gem = drm_gem_object_lookup(file, op->bo_handle);
 		ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
 						    gem ? to_panthor_bo(gem) : NULL,
-						    op->bo_offset,
-						    op->size,
-						    op->va,
-						    op->flags);
+						    op);
 		drm_gem_object_put(gem);
 		return ret;
 
@@ -3047,10 +3042,16 @@ int panthor_vm_bind_exec_sync_op(struct drm_file *file,
 int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo,
 			    u64 offset, u64 size, u64 va, u32 flags)
 {
+	struct drm_panthor_vm_bind_op op = {
+		.bo_offset = offset,
+		.size = size,
+		.va = va,
+		.flags = flags,
+	};
 	struct panthor_vm_op_ctx op_ctx;
 	int ret;
 
-	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, offset, size, va, flags);
+	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
 	if (ret)
 		return ret;
 
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v10 3/6] drm/panthor: Delete spurious whitespace from uAPI header
  2026-04-29 18:32 [PATCH v10 0/6] Support sparse mappings in Panthor Adrián Larumbe
  2026-04-29 18:32 ` [PATCH v10 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
  2026-04-29 18:32 ` [PATCH v10 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
@ 2026-04-29 18:32 ` Adrián Larumbe
  2026-04-29 18:32 ` [PATCH v10 4/6] drm/panthor: Remove unused operation context field Adrián Larumbe
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Adrián Larumbe @ 2026-04-29 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
	Adrián Larumbe, Liviu Dudau, Daniel Almeida, Alice Ryhl,
	David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann

No other uAPI structure has an extra blank line after its last member,
so delete this one.

Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 include/uapi/drm/panthor_drm.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
index b462752c793d..14a93a4ef6ff 100644
--- a/include/uapi/drm/panthor_drm.h
+++ b/include/uapi/drm/panthor_drm.h
@@ -677,7 +677,6 @@ struct drm_panthor_vm_bind_op {
 	 * This array shall not be empty for sync-only operations.
 	 */
 	struct drm_panthor_obj_array syncs;
-
 };
 
 /**
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v10 4/6] drm/panthor: Remove unused operation context field
  2026-04-29 18:32 [PATCH v10 0/6] Support sparse mappings in Panthor Adrián Larumbe
                   ` (2 preceding siblings ...)
  2026-04-29 18:32 ` [PATCH v10 3/6] drm/panthor: Delete spurious whitespace from uAPI header Adrián Larumbe
@ 2026-04-29 18:32 ` Adrián Larumbe
  2026-04-29 18:32 ` [PATCH v10 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
  2026-04-29 18:32 ` [PATCH v10 6/6] drm/panthor: Bump the driver version to 1.9 Adrián Larumbe
  5 siblings, 0 replies; 11+ messages in thread
From: Adrián Larumbe @ 2026-04-29 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
	Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter

A Panthor BO's sg_table is now retrieved from its dmap field, making the
operation context's sgt field unused.

Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_mmu.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index d35e65cab2b1..f54a60cd0ec4 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -195,14 +195,6 @@ struct panthor_vm_op_ctx {
 		/** @map.bo_offset: Offset in the buffer object. */
 		u64 bo_offset;
 
-		/**
-		 * @map.sgt: sg-table pointing to pages backing the GEM object.
-		 *
-		 * This is gathered at job creation time, such that we don't have
-		 * to allocate in ::run_job().
-		 */
-		struct sg_table *sgt;
-
 		/** @map.bo: the BO being mapped. */
 		struct panthor_gem_object *bo;
 	} map;
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v10 5/6] drm/panthor: Support sparse mappings
  2026-04-29 18:32 [PATCH v10 0/6] Support sparse mappings in Panthor Adrián Larumbe
                   ` (3 preceding siblings ...)
  2026-04-29 18:32 ` [PATCH v10 4/6] drm/panthor: Remove unused operation context field Adrián Larumbe
@ 2026-04-29 18:32 ` Adrián Larumbe
  2026-04-30  7:57   ` Boris Brezillon
  2026-05-05  8:14   ` Marcin Ślusarz
  2026-04-29 18:32 ` [PATCH v10 6/6] drm/panthor: Bump the driver version to 1.9 Adrián Larumbe
  5 siblings, 2 replies; 11+ messages in thread
From: Adrián Larumbe @ 2026-04-29 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
	Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Daniel Almeida, Alice Ryhl

Allow UM to bind sparsely populated memory regions by cyclically mapping
virtual ranges over a kernel-allocated dummy BO. This is preferable to the
old method of handling sparseness in the UMD, which relied on creating a
buffer object for the same purpose, even though Vulkan sparse resources
don't need to be backed by a driver BO.

The choice of backing sparsely-bound regions with a Panthor BO was made so
as to profit from the existing shrinker reclaim code. That way no special
treatment of the dummy sparse BOs is needed when reclaiming memory, as
would be the case with a raw kernel page implementation.

A new dummy BO is allocated per open file context. Even though the Vulkan
spec mandates that writes into sparsely bound regions be discarded, our
implementation is a workaround for the fact that Mali CSF GPUs cannot
support this behaviour at the hardware level, so writes still make it into
the backing BO. With a single global dummy BO, this could become a vector
for information leaks between file contexts, which must never happen in DRM.
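The cyclic mapping this commit implements reduces to splitting the VA range
into chunks that never cross a 2 MiB dummy-BO boundary, so each chunk maps
the dummy BO from offset 0. A standalone sketch of that arithmetic (assuming
the 2 MiB dummy size; helper names are invented):

```c
#include <assert.h>
#include <stdint.h>

#define DUMMY_BO_SZ 0x200000ULL /* 2 MiB dummy BO */

/* Size of the next chunk when cyclically mapping the dummy BO at addr:
 * clamped to both the remaining range and the distance to the next
 * 2 MiB boundary. */
static uint64_t sparse_chunk(uint64_t addr, uint64_t remaining)
{
	uint64_t to_boundary = DUMMY_BO_SZ - (addr & (DUMMY_BO_SZ - 1));

	return remaining < to_boundary ? remaining : to_boundary;
}

/* How many low-level map calls a sparse bind of [iova, iova + size) takes. */
static uint64_t sparse_chunk_count(uint64_t iova, uint64_t size)
{
	uint64_t mapped = 0, n = 0;

	while (mapped < size) {
		mapped += sparse_chunk(iova + mapped, size - mapped);
		n++;
	}
	return n;
}
```

An aligned 4 MiB bind takes two chunks; shifting the start by one page adds
a third, since the first partial chunk stops at the next 2 MiB boundary.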

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_gem.c |  18 +++
 drivers/gpu/drm/panthor/panthor_gem.h |   2 +
 drivers/gpu/drm/panthor/panthor_mmu.c | 159 ++++++++++++++++++++++----
 include/uapi/drm/panthor_drm.h        |  12 ++
 4 files changed, 170 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index 13295d7a593d..c798ac2963e1 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -1345,6 +1345,24 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
 	return ERR_PTR(ret);
 }
 
+/**
+ * panthor_dummy_bo_create() - Create a Panthor BO meant to back sparse bindings.
+ * @ptdev: Device.
+ *
+ * Return: A valid pointer in case of success, an ERR_PTR() otherwise.
+ */
+struct panthor_gem_object *
+panthor_dummy_bo_create(struct panthor_device *ptdev)
+{
+	/* Since even when the DRM device's mount point has enabled THP we have no guarantee
+	 * that drm_gem_get_pages() will return a single 2MiB PMD, and also we cannot be sure
+	 * that the 2MiB won't be reclaimed and re-allocated later on as 4KiB chunks, it doesn't
+	 * make sense to pre-populate this object's page array, nor to fall back on a BO size
+	 * of 4KiB. Sticking to a dummy object size of 2MiB lets us keep things simple for now.
+	 */
+	return panthor_gem_create(&ptdev->base, SZ_2M, DRM_PANTHOR_BO_NO_MMAP, NULL, 0);
+}
+
 static bool can_swap(void)
 {
 	return get_nr_swap_pages() > 0;
diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
index ae0491d0b121..8639c2fa08e6 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.h
+++ b/drivers/gpu/drm/panthor/panthor_gem.h
@@ -315,6 +315,8 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
 
 void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo);
 
+struct panthor_gem_object *panthor_dummy_bo_create(struct panthor_device *ptdev);
+
 #ifdef CONFIG_DEBUG_FS
 void panthor_gem_debugfs_init(struct drm_minor *minor);
 #endif
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index f54a60cd0ec4..9257afd6adc9 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -112,6 +112,17 @@ struct panthor_mmu {
 struct panthor_vm_pool {
 	/** @xa: Array used for VM handle tracking. */
 	struct xarray xa;
+
+	/**
+	 * @dummy: Dummy object used for sparse mappings
+	 *
+	 * Sparse bindings map virtual address ranges onto a dummy
+	 * BO in a modulo fashion. Even though sparse writes are meant
+	 * to be discarded and reads undefined, writes are still reflected
+	 * in the dummy buffer. That means we must keep a dummy object per
+	 * file context, to avoid data leaks between them.
+	 */
+	struct panthor_gem_object *dummy;
 };
 
 /**
@@ -391,6 +402,16 @@ struct panthor_vm {
 		 */
 		struct list_head lru_node;
 	} reclaim;
+
+	/**
+	 * @dummy: Dummy object used for sparse mappings.
+	 *
+	 * VMs must keep a reference to the file context-wide dummy BO because
+	 * they can outlive the file context, which includes the VM pool holding
+	 * the original dummy BO reference.
+	 *
+	 */
+	struct panthor_gem_object *dummy;
 };
 
 /**
@@ -1020,6 +1041,30 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
 	return 0;
 }
 
+static int
+panthor_vm_map_sparse(struct panthor_vm *vm, u64 iova, int prot,
+		      struct sg_table *sgt, u64 size)
+{
+	u64 mapped = 0;
+	int ret;
+
+	while (mapped < size) {
+		u64 addr = iova + mapped;
+		u32 chunk_size = min(size - mapped, SZ_2M - (addr & (SZ_2M - 1)));
+
+		ret = panthor_vm_map_pages(vm, addr, prot,
+					   sgt, 0, chunk_size);
+		if (ret) {
+			panthor_vm_unmap_pages(vm, iova, mapped);
+			return ret;
+		}
+
+		mapped += chunk_size;
+	}
+
+	return 0;
+}
+
 static int flags_to_prot(u32 flags)
 {
 	int prot = 0;
@@ -1262,6 +1307,7 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
 	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
 	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
 	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
+	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE | \
 	 DRM_PANTHOR_VM_BIND_OP_TYPE_MASK)
 
 static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
@@ -1269,6 +1315,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 					 struct panthor_gem_object *bo,
 					 const struct drm_panthor_vm_bind_op *op)
 {
+	bool is_sparse = op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
 	struct drm_gpuvm_bo *preallocated_vm_bo;
 	struct sg_table *sgt = NULL;
 	int ret;
@@ -1280,8 +1327,21 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
 	    (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
 		return -EINVAL;
 
-	/* Make sure the VA and size are in-bounds. */
-	if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
+	/* uAPI mandates sparsely bound regions must not be executable. */
+	if (is_sparse && !(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC))
+		return -EINVAL;
+
+	/* For non-sparse, make sure the VA and size are in-bounds.
+	 * For sparse, this is not applicable, because the dummy BO is
+	 * repeatedly mapped over a potentially wider VA range.
+	 */
+	if (!is_sparse && (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size))
+		return -EINVAL;
+
+	/* For sparse, we don't expect any user BO, the BO we get passed
+	 * is the dummy BO attached to the VM pool.
+	 */
+	if (is_sparse && (op->bo_handle || op->bo_offset))
 		return -EINVAL;
 
 	/* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
@@ -1543,6 +1603,9 @@ int panthor_vm_pool_create_vm(struct panthor_device *ptdev,
 		return ret;
 	}
 
+	drm_gem_object_get(&pool->dummy->base);
+	vm->dummy = pool->dummy;
+
 	args->user_va_range = kernel_va_start;
 	return id;
 }
@@ -1634,6 +1697,7 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
 	xa_for_each(&pfile->vms->xa, i, vm)
 		panthor_vm_destroy(vm);
 
+	drm_gem_object_put(&pfile->vms->dummy->base);
 	xa_destroy(&pfile->vms->xa);
 	kfree(pfile->vms);
 }
@@ -1651,6 +1715,13 @@ int panthor_vm_pool_create(struct panthor_file *pfile)
 		return -ENOMEM;
 
 	xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
+
+	pfile->vms->dummy = panthor_dummy_bo_create(pfile->ptdev);
+	if (IS_ERR(pfile->vms->dummy)) {
+		kfree(pfile->vms);
+		return PTR_ERR(pfile->vms->dummy);
+	}
+
 	return 0;
 }
 
@@ -1987,6 +2058,9 @@ static void panthor_vm_free(struct drm_gpuvm *gpuvm)
 
 	free_io_pgtable_ops(vm->pgtbl_ops);
 
+	if (vm->dummy)
+		drm_gem_object_put(&vm->dummy->base);
+
 	drm_mm_takedown(&vm->mm);
 	kfree(vm);
 }
@@ -2146,7 +2220,23 @@ static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
 #define PANTHOR_VM_MAP_FLAGS \
 	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
 	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
-	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED)
+	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
+	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
+
+static int
+panthor_vm_exec_map_op(struct panthor_vm *vm, u32 flags,
+		       const struct drm_gpuva_op_map *op)
+{
+	struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
+	int prot = flags_to_prot(flags);
+
+	if (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
+		return panthor_vm_map_sparse(vm, op->va.addr, prot,
+					     bo->dmap.sgt, op->va.range);
+
+	return panthor_vm_map_pages(vm, op->va.addr, prot, bo->dmap.sgt,
+				    op->gem.offset, op->va.range);
+}
 
 static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
 {
@@ -2160,9 +2250,7 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
 
 	panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
 
-	ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags),
-				   op_ctx->map.bo->dmap.sgt, op->map.gem.offset,
-				   op->map.va.range);
+	ret = panthor_vm_exec_map_op(vm, vma->flags, &op->map);
 	if (ret) {
 		panthor_vm_op_ctx_return_vma(op_ctx, vma);
 		return ret;
@@ -2178,13 +2266,16 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
 }
 
 static bool
-iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr)
+iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr, bool is_sparse)
 {
 	struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
 	const struct page *pg;
 	pgoff_t bo_offset;
 
-	bo_offset = addr - op->va.addr + op->gem.offset;
+	/* Per-VM Dummy BO in sparse mappings is always 2MiB, so checking the
+	 * size of the very first page is enough.
+	 */
+	bo_offset = !is_sparse ? addr - op->va.addr + op->gem.offset : 0;
 	pg = bo->backing.pages[bo_offset >> PAGE_SHIFT];
 
 	return folio_size(page_folio(pg)) >= SZ_2M;
@@ -2194,6 +2285,8 @@ static void
 unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
 		     u64 *unmap_start, u64 *unmap_range)
 {
+	struct panthor_vma *unmap_vma = container_of(op->unmap->va, struct panthor_vma, base);
+	bool is_sparse = unmap_vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
 	u64 aligned_unmap_start, aligned_unmap_end, unmap_end;
 
 	unmap_end = *unmap_start + *unmap_range;
@@ -2205,7 +2298,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
 	 */
 	if (op->prev && aligned_unmap_start < *unmap_start &&
 	    op->prev->va.addr <= aligned_unmap_start &&
-	    iova_mapped_as_huge_page(op->prev, *unmap_start)) {
+	    (iova_mapped_as_huge_page(op->prev, *unmap_start, is_sparse))) {
 		*unmap_range += *unmap_start - aligned_unmap_start;
 		*unmap_start = aligned_unmap_start;
 	}
@@ -2215,7 +2308,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
 	 */
 	if (op->next && aligned_unmap_end > unmap_end &&
 	    op->next->va.addr + op->next->va.range >= aligned_unmap_end &&
-	    iova_mapped_as_huge_page(op->next, unmap_end - 1)) {
+	    iova_mapped_as_huge_page(op->next, unmap_end - 1, is_sparse)) {
 		*unmap_range += aligned_unmap_end - unmap_end;
 	}
 }
@@ -2250,15 +2343,27 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
 		panthor_vm_unmap_pages(vm, unmap_start, unmap_range);
 	}
 
+	/* In the following two branches, neither remap::unmap::offset nor remap::unmap::keep
+	 * can be trusted to contain legitimate values in the case of sparse mappings, because
+	 * the drm_gpuvm core calculates them on the assumption that a VM_BIND operation's
+	 * range is always less than the target BO. That doesn't hold in the case of sparse
+	 * bindings, but we don't care to adjust the BO offset of new VA's spawned by a remap
+	 * operation because we ignore them altogether when sparse-mapping pages on a HW level
+	 * just further below. If we ever wanted to make use of remap::unmap::keep, then this
+	 * logic would have to be reworked.
+	 */
 	if (op->remap.prev) {
-		struct panthor_gem_object *bo = to_panthor_bo(op->remap.prev->gem.obj);
 		u64 offset = op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr;
 		u64 size = op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start;
+		const struct drm_gpuva_op_map map_op = {
+			.va.addr = unmap_start,
+			.va.range = size,
+			.gem.obj = op->remap.prev->gem.obj,
+			.gem.offset = offset,
+		};
 
-		if (!unmap_vma->evicted) {
-			ret = panthor_vm_map_pages(vm, unmap_start,
-						   flags_to_prot(unmap_vma->flags),
-						   bo->dmap.sgt, offset, size);
+		if (!unmap_vma->evicted && size > 0) {
+			ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
 			if (ret)
 				return ret;
 		}
@@ -2269,14 +2374,17 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
 	}
 
 	if (op->remap.next) {
-		struct panthor_gem_object *bo = to_panthor_bo(op->remap.next->gem.obj);
 		u64 addr = op->remap.next->va.addr;
 		u64 size = unmap_start + unmap_range - op->remap.next->va.addr;
+		const struct drm_gpuva_op_map map_op = {
+			.va.addr = addr,
+			.va.range = size,
+			.gem.obj = op->remap.next->gem.obj,
+			.gem.offset = op->remap.next->gem.offset,
+		};
 
-		if (!unmap_vma->evicted) {
-			ret = panthor_vm_map_pages(vm, addr, flags_to_prot(unmap_vma->flags),
-						   bo->dmap.sgt, op->remap.next->gem.offset,
-						   size);
+		if (!unmap_vma->evicted && size > 0) {
+			ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
 			if (ret)
 				return ret;
 		}
@@ -2835,7 +2943,13 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
 
 	switch (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) {
 	case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
-		gem = drm_gem_object_lookup(file, op->bo_handle);
+		if (!(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)) {
+			gem = drm_gem_object_lookup(file, op->bo_handle);
+		} else {
+			gem = &vm->dummy->base;
+			drm_gem_object_get(&vm->dummy->base);
+		}
+
 		ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
 						    gem ? to_panthor_bo(gem) : NULL,
 						    op);
@@ -3043,6 +3157,9 @@ int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo
 	struct panthor_vm_op_ctx op_ctx;
 	int ret;
 
+	if (drm_WARN_ON(&vm->ptdev->base, flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE))
+		return -EINVAL;
+
 	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
 	if (ret)
 		return ret;
diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
index 14a93a4ef6ff..1b706d00b0a1 100644
--- a/include/uapi/drm/panthor_drm.h
+++ b/include/uapi/drm/panthor_drm.h
@@ -614,6 +614,18 @@ enum drm_panthor_vm_bind_op_flags {
 	 */
 	DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED = 1 << 2,
 
+	/**
+	 * @DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE: Sparsely map a virtual memory range
+	 *
+	 * Only valid with DRM_PANTHOR_VM_BIND_OP_TYPE_MAP.
+	 *
+	 * When this flag is set, the whole vm_bind range is mapped over a dummy object in a cyclic
+	 * fashion, and all GPU reads from addresses in the range return undefined values. This flag
+	 * being set means drm_panthor_vm_bind_op::bo_offset and drm_panthor_vm_bind_op::bo_handle
+	 * must both be set to 0. DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC must also be set.
+	 */
+	DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE = 1 << 3,
+
 	/**
 	 * @DRM_PANTHOR_VM_BIND_OP_TYPE_MASK: Mask used to determine the type of operation.
 	 */
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v10 6/6] drm/panthor: Bump the driver version to 1.9
  2026-04-29 18:32 [PATCH v10 0/6] Support sparse mappings in Panthor Adrián Larumbe
                   ` (4 preceding siblings ...)
  2026-04-29 18:32 ` [PATCH v10 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
@ 2026-04-29 18:32 ` Adrián Larumbe
  5 siblings, 0 replies; 11+ messages in thread
From: Adrián Larumbe @ 2026-04-29 18:32 UTC (permalink / raw)
  To: linux-kernel
  Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
	Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter

Bump the driver version to reflect the new MMU info query ioctl
parameter and the VM_BIND map sparse flag.

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
 drivers/gpu/drm/panthor/panthor_drv.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index beca75b34293..d0070a28a634 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -1787,6 +1787,8 @@ static void panthor_debugfs_init(struct drm_minor *minor)
  *       - adds DRM_IOCTL_PANTHOR_BO_QUERY_INFO ioctl
  *       - adds drm_panthor_gpu_info::selected_coherency
  * - 1.8 - extends DEV_QUERY_TIMESTAMP_INFO with flags
+ * - 1.9 - adds DRM_PANTHOR_DEV_QUERY_MMU_INFO query
+ *       - adds DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE flag
  */
 static const struct drm_driver panthor_drm_driver = {
 	.driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
@@ -1800,7 +1802,7 @@ static const struct drm_driver panthor_drm_driver = {
 	.name = "panthor",
 	.desc = "Panthor DRM driver",
 	.major = 1,
-	.minor = 8,
+	.minor = 9,
 
 	.gem_prime_import_sg_table = panthor_gem_prime_import_sg_table,
 	.gem_prime_import = panthor_gem_prime_import,
-- 
2.53.0



* Re: [PATCH v10 5/6] drm/panthor: Support sparse mappings
  2026-04-29 18:32 ` [PATCH v10 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
@ 2026-04-30  7:57   ` Boris Brezillon
  2026-04-30  9:57     ` Boris Brezillon
  2026-05-05  8:14   ` Marcin Ślusarz
  1 sibling, 1 reply; 11+ messages in thread
From: Boris Brezillon @ 2026-04-30  7:57 UTC (permalink / raw)
  To: Adrián Larumbe
  Cc: linux-kernel, dri-devel, Steven Price, kernel, Liviu Dudau,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Daniel Almeida, Alice Ryhl

On Wed, 29 Apr 2026 19:32:17 +0100
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:

> Allow UM to bind sparsely populated memory regions by cyclically mapping
> virtual ranges over a kernel-allocated dummy BO. This alternative is
> preferable to the old method of handling sparseness in the UMD, which
> relied on creating a buffer object for the same purpose, even though
> Vulkan sparse resources don't need to be backed by a driver BO.
> 
> The choice of backing sparsely-bound regions with a Panthor BO was made so
> as to profit from the existing shrinker reclaim code. That way no special
> treatment must be given to the dummy sparse BOs when reclaiming memory, as
> would be the case if we had chosen a raw kernel page implementation.
> 
> A new dummy BO is allocated per open file context, because even though the
> Vulkan spec mandates that writes into sparsely bound regions must be
> discarded, our implementation is still a workaround for the fact that Mali
> CSF GPUs cannot support this behaviour in hardware, so writes still
> make it into the backing BO. If we had a global one, then it could be a
> venue for information leaks between file contexts, which should never
> happen in DRM.
> 
> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
>  drivers/gpu/drm/panthor/panthor_gem.c |  18 +++
>  drivers/gpu/drm/panthor/panthor_gem.h |   2 +
>  drivers/gpu/drm/panthor/panthor_mmu.c | 159 ++++++++++++++++++++++----
>  include/uapi/drm/panthor_drm.h        |  12 ++
>  4 files changed, 170 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> index 13295d7a593d..c798ac2963e1 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.c
> +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> @@ -1345,6 +1345,24 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
>  	return ERR_PTR(ret);
>  }
>  
> +/**
> + * panthor_dummy_bo_create() - Create a Panthor BO meant to back sparse bindings.
> + * @ptdev: Device.
> + *
> + * Return: A valid pointer in case of success, an ERR_PTR() otherwise.
> + */
> +struct panthor_gem_object *
> +panthor_dummy_bo_create(struct panthor_device *ptdev)
> +{
> +	/* Since even when the DRM device's mount point has enabled THP we have no guarantee
> +	 * that drm_gem_get_pages() will return a single 2MiB PMD, and also we cannot be sure
> +	 * that the 2MiB won't be reclaimed and re-allocated later on as 4KiB chunks, it doesn't
> +	 * make sense to pre-populate this object's page array, nor to fall back on a BO size
> +	 * of 4KiB. Sticking to a dummy object size of 2MiB lets us keep things simple for now.
> +	 */
> +	return panthor_gem_create(&ptdev->base, SZ_2M, DRM_PANTHOR_BO_NO_MMAP, NULL, 0);
> +}
> +
>  static bool can_swap(void)
>  {
>  	return get_nr_swap_pages() > 0;
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> index ae0491d0b121..8639c2fa08e6 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.h
> +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> @@ -315,6 +315,8 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
>  
>  void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo);
>  
> +struct panthor_gem_object *panthor_dummy_bo_create(struct panthor_device *ptdev);
> +
>  #ifdef CONFIG_DEBUG_FS
>  void panthor_gem_debugfs_init(struct drm_minor *minor);
>  #endif
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index f54a60cd0ec4..9257afd6adc9 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -112,6 +112,17 @@ struct panthor_mmu {
>  struct panthor_vm_pool {
>  	/** @xa: Array used for VM handle tracking. */
>  	struct xarray xa;
> +
> +	/**
> +	 * @dummy: Dummy object used for sparse mappings
> +	 *
> +	 * Sparse bindings map virtual address ranges onto a dummy
> +	 * BO in a modulo fashion. Even though sparse writes are meant
> +	 * to be discarded and reads undefined, writes are still reflected
> +	 * in the dummy buffer. That means we must keep a dummy object per
> +	 * file context, to avoid data leaks between them.
> +	 */
> +	struct panthor_gem_object *dummy;
>  };
>  
>  /**
> @@ -391,6 +402,16 @@ struct panthor_vm {
>  		 */
>  		struct list_head lru_node;
>  	} reclaim;
> +
> +	/**
> +	 * @dummy: Dummy object used for sparse mappings.
> +	 *
> > +	 * VMs must keep a reference to the file context-wide dummy BO because
> +	 * they can outlive the file context, which includes the VM pool holding
> +	 * the original dummy BO reference.
> +	 *

nit: Drop the extra blank line.

> +	 */
> +	struct panthor_gem_object *dummy;
>  };
>  
>  /**
> @@ -1020,6 +1041,30 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
>  	return 0;
>  }
>  
> +static int
> +panthor_vm_map_sparse(struct panthor_vm *vm, u64 iova, int prot,
> +		      struct sg_table *sgt, u64 size)
> +{
> +	u64 mapped = 0;
> +	int ret;
> +
> +	while (mapped < size) {
> +		u64 addr = iova + mapped;
> +		u32 chunk_size = min(size - mapped, SZ_2M - (addr & (SZ_2M - 1)));
> +
> +		ret = panthor_vm_map_pages(vm, addr, prot,
> +					   sgt, 0, chunk_size);
> +		if (ret) {
> +			panthor_vm_unmap_pages(vm, iova, mapped);
> +			return ret;
> +		}
> +
> +		mapped += chunk_size;
> +	}
> +
> +	return 0;
> +}
> +
>  static int flags_to_prot(u32 flags)
>  {
>  	int prot = 0;
> @@ -1262,6 +1307,7 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
>  	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
>  	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
>  	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> +	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE | \
>  	 DRM_PANTHOR_VM_BIND_OP_TYPE_MASK)
>  
>  static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> @@ -1269,6 +1315,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  					 struct panthor_gem_object *bo,
>  					 const struct drm_panthor_vm_bind_op *op)
>  {
> +	bool is_sparse = op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
>  	struct drm_gpuvm_bo *preallocated_vm_bo;
>  	struct sg_table *sgt = NULL;
>  	int ret;
> @@ -1280,8 +1327,21 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	    (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
>  		return -EINVAL;
>  
> -	/* Make sure the VA and size are in-bounds. */
> -	if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
> +	/* uAPI mandates sparsely bound regions must not be executable. */
> +	if (is_sparse && !(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC))
> +		return -EINVAL;
> +
> +	/* For non-sparse, make sure the VA and size are in-bounds.
> +	 * For sparse, this is not applicable, because the dummy BO is
> +	 * repeatedly mapped over a potentially wider VA range.
> +	 */
> +	if (!is_sparse && (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size))
> +		return -EINVAL;
> +
> +	/* For sparse, we don't expect any user BO, the BO we get passed
> +	 * is the dummy BO attached to the VM pool.
> +	 */
> +	if (is_sparse && (op->bo_handle || op->bo_offset))
>  		return -EINVAL;
>  
>  	/* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
> @@ -1543,6 +1603,9 @@ int panthor_vm_pool_create_vm(struct panthor_device *ptdev,
>  		return ret;
>  	}
>  
> +	drm_gem_object_get(&pool->dummy->base);
> +	vm->dummy = pool->dummy;
> +
>  	args->user_va_range = kernel_va_start;
>  	return id;
>  }
> @@ -1634,6 +1697,7 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
>  	xa_for_each(&pfile->vms->xa, i, vm)
>  		panthor_vm_destroy(vm);
>  
> +	drm_gem_object_put(&pfile->vms->dummy->base);
>  	xa_destroy(&pfile->vms->xa);
>  	kfree(pfile->vms);
>  }
> @@ -1651,6 +1715,13 @@ int panthor_vm_pool_create(struct panthor_file *pfile)
>  		return -ENOMEM;
>  
>  	xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
> +
> +	pfile->vms->dummy = panthor_dummy_bo_create(pfile->ptdev);
> +	if (IS_ERR(pfile->vms->dummy)) {
> +		kfree(pfile->vms);
> +		return PTR_ERR(pfile->vms->dummy);
> +	}
> +
>  	return 0;
>  }
>  
> @@ -1987,6 +2058,9 @@ static void panthor_vm_free(struct drm_gpuvm *gpuvm)
>  
>  	free_io_pgtable_ops(vm->pgtbl_ops);
>  
> +	if (vm->dummy)
> +		drm_gem_object_put(&vm->dummy->base);
> +
>  	drm_mm_takedown(&vm->mm);
>  	kfree(vm);
>  }
> @@ -2146,7 +2220,23 @@ static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
>  #define PANTHOR_VM_MAP_FLAGS \
>  	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
>  	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
> -	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED)
> +	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> +	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> +
> +static int
> +panthor_vm_exec_map_op(struct panthor_vm *vm, u32 flags,
> +		       const struct drm_gpuva_op_map *op)
> +{
> +	struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
> +	int prot = flags_to_prot(flags);
> +
> +	if (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> +		return panthor_vm_map_sparse(vm, op->va.addr, prot,
> +					     bo->dmap.sgt, op->va.range);
> +
> +	return panthor_vm_map_pages(vm, op->va.addr, prot, bo->dmap.sgt,
> +				    op->gem.offset, op->va.range);
> +}
>  
>  static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  {
> @@ -2160,9 +2250,7 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  
>  	panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
>  
> -	ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags),
> -				   op_ctx->map.bo->dmap.sgt, op->map.gem.offset,
> -				   op->map.va.range);
> +	ret = panthor_vm_exec_map_op(vm, vma->flags, &op->map);
>  	if (ret) {
>  		panthor_vm_op_ctx_return_vma(op_ctx, vma);
>  		return ret;
> @@ -2178,13 +2266,16 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  }
>  
>  static bool
> -iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr)
> +iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr, bool is_sparse)
>  {
>  	struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
>  	const struct page *pg;
>  	pgoff_t bo_offset;
>  
> -	bo_offset = addr - op->va.addr + op->gem.offset;
> +	/* Per-VM Dummy BO in sparse mappings is always 2MiB, so checking the
> +	 * size of the very first page is enough.
> +	 */
> +	bo_offset = !is_sparse ? addr - op->va.addr + op->gem.offset : 0;
>  	pg = bo->backing.pages[bo_offset >> PAGE_SHIFT];
>  
>  	return folio_size(page_folio(pg)) >= SZ_2M;
> @@ -2194,6 +2285,8 @@ static void
>  unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
>  		     u64 *unmap_start, u64 *unmap_range)
>  {
> +	struct panthor_vma *unmap_vma = container_of(op->unmap->va, struct panthor_vma, base);
> +	bool is_sparse = unmap_vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
>  	u64 aligned_unmap_start, aligned_unmap_end, unmap_end;
>  
>  	unmap_end = *unmap_start + *unmap_range;
> @@ -2205,7 +2298,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
>  	 */
>  	if (op->prev && aligned_unmap_start < *unmap_start &&
>  	    op->prev->va.addr <= aligned_unmap_start &&
> -	    iova_mapped_as_huge_page(op->prev, *unmap_start)) {
> +	    (iova_mapped_as_huge_page(op->prev, *unmap_start, is_sparse))) {

Actually, I think this could be:
	    
	    (is_sparse || iova_mapped_as_huge_page(op->prev, *unmap_start)) {

such that we always end up with sparse mappings starting at offset=0 on
the dummy GEM, even if those mappings are not 2M aligned (see below for
more reasons to keep this thing accurate).

>  		*unmap_range += *unmap_start - aligned_unmap_start;
>  		*unmap_start = aligned_unmap_start;
>  	}
> @@ -2215,7 +2308,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
>  	 */
>  	if (op->next && aligned_unmap_end > unmap_end &&
>  	    op->next->va.addr + op->next->va.range >= aligned_unmap_end &&
> -	    iova_mapped_as_huge_page(op->next, unmap_end - 1)) {
> +	    iova_mapped_as_huge_page(op->next, unmap_end - 1, is_sparse)) {
>  		*unmap_range += aligned_unmap_end - unmap_end;
>  	}
>  }
> @@ -2250,15 +2343,27 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  		panthor_vm_unmap_pages(vm, unmap_start, unmap_range);
>  	}
>  
> +	/* In the following two branches, neither remap::unmap::offset nor remap::unmap::keep
> +	 * can be trusted to contain legitimate values in the case of sparse mappings, because
> +	 * the drm_gpuvm core calculates them on the assumption that a VM_BIND operation's
> +	 * range is always less than the target BO. That doesn't hold in the case of sparse
> +	 * bindings, but we don't care to adjust the BO offset of new VA's spawned by a remap
> +	 * operation because we ignore them altogether when sparse-mapping pages on a HW level
> +	 * just further below.

I think I'd prefer if we were setting gem.offset to zero when we're
dealing with a sparse remap, even if it doesn't matter in practice
because it will be ignored in case of sparse.

>	 * If we ever wanted to make use of remap::unmap::keep, then this
> +	 * logic would have to be reworked.
> +	 */
>  	if (op->remap.prev) {
> -		struct panthor_gem_object *bo = to_panthor_bo(op->remap.prev->gem.obj);
>  		u64 offset = op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr;
>  		u64 size = op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start;
> +		const struct drm_gpuva_op_map map_op = {
> +			.va.addr = unmap_start,
> +			.va.range = size,
> +			.gem.obj = op->remap.prev->gem.obj,
> +			.gem.offset = offset,

That one doesn't matter much, but for consistency, I'd go:

			.gem.offset = is_sparse ? 0 : offset,

and maybe add a WARN_ON() in panthor_vm_exec_map_op() if this is a
SPARSE request and the offset is not zero.

> +		};
>  
> -		if (!unmap_vma->evicted) {
> -			ret = panthor_vm_map_pages(vm, unmap_start,
> -						   flags_to_prot(unmap_vma->flags),
> -						   bo->dmap.sgt, offset, size);
> +		if (!unmap_vma->evicted && size > 0) {
> +			ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
>  			if (ret)
>  				return ret;
>  		}
> @@ -2269,14 +2374,17 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  	}
>  
>  	if (op->remap.next) {
> -		struct panthor_gem_object *bo = to_panthor_bo(op->remap.next->gem.obj);
>  		u64 addr = op->remap.next->va.addr;
>  		u64 size = unmap_start + unmap_range - op->remap.next->va.addr;
> +		const struct drm_gpuva_op_map map_op = {
> +			.va.addr = addr,
> +			.va.range = size,
> +			.gem.obj = op->remap.next->gem.obj,
> +			.gem.offset = op->remap.next->gem.offset,

If this is a sparse mapping, the original mapping offset should already
be zero, so I guess we can pass it directly, and let
panthor_vm_exec_map_op() complain if it's not zero.

> +		};
>  

But more importantly, I think we should

		if (is_sparse)
			op->remap.next->gem.offset = 0;

such that the remapped VA appears with a zero offset in the debugfs
file exposed by GPUVM. This makes me realize we should prevent
panthor_vm_get_bo_for_va() from returning a valid <BO,offset> pair
for a sparse VA, or at the very least, make it so the offset is masked
with (SZ_2M - 1).

> -		if (!unmap_vma->evicted) {
> -			ret = panthor_vm_map_pages(vm, addr, flags_to_prot(unmap_vma->flags),
> -						   bo->dmap.sgt, op->remap.next->gem.offset,
> -						   size);
> +		if (!unmap_vma->evicted && size > 0) {
> +			ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
>  			if (ret)
>  				return ret;
>  		}
> @@ -2835,7 +2943,13 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
>  
>  	switch (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) {
>  	case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
> -		gem = drm_gem_object_lookup(file, op->bo_handle);
> +		if (!(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)) {
> +			gem = drm_gem_object_lookup(file, op->bo_handle);
> +		} else {
> +			gem = &vm->dummy->base;
> +			drm_gem_object_get(&vm->dummy->base);
> +		}
> +
>  		ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
>  						    gem ? to_panthor_bo(gem) : NULL,
>  						    op);
> @@ -3043,6 +3157,9 @@ int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo
>  	struct panthor_vm_op_ctx op_ctx;
>  	int ret;
>  
> +	if (drm_WARN_ON(&vm->ptdev->base, flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE))
> +		return -EINVAL;
> +
>  	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
>  	if (ret)
>  		return ret;
> diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
> index 14a93a4ef6ff..1b706d00b0a1 100644
> --- a/include/uapi/drm/panthor_drm.h
> +++ b/include/uapi/drm/panthor_drm.h
> @@ -614,6 +614,18 @@ enum drm_panthor_vm_bind_op_flags {
>  	 */
>  	DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED = 1 << 2,
>  
> +	/**
> +	 * @DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE: Sparsely map a virtual memory range
> +	 *
> +	 * Only valid with DRM_PANTHOR_VM_BIND_OP_TYPE_MAP.
> +	 *
> +	 * When this flag is set, the whole vm_bind range is mapped over a dummy object in a cyclic
> +	 * fashion, and all GPU reads from addresses in the range return undefined values. This flag
> +	 * being set means drm_panthor_vm_bind_op::bo_offset and drm_panthor_vm_bind_op::bo_handle must
> +	 * both be set to 0. DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC must also be set.
> +	 */
> +	DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE = 1 << 3,
> +
>  	/**
>  	 * @DRM_PANTHOR_VM_BIND_OP_TYPE_MASK: Mask used to determine the type of operation.
>  	 */



* Re: [PATCH v10 5/6] drm/panthor: Support sparse mappings
  2026-04-30  7:57   ` Boris Brezillon
@ 2026-04-30  9:57     ` Boris Brezillon
  0 siblings, 0 replies; 11+ messages in thread
From: Boris Brezillon @ 2026-04-30  9:57 UTC (permalink / raw)
  To: Adrián Larumbe
  Cc: linux-kernel, dri-devel, Steven Price, kernel, Liviu Dudau,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, Daniel Almeida, Alice Ryhl

On Thu, 30 Apr 2026 09:57:34 +0200
Boris Brezillon <boris.brezillon@collabora.com> wrote:

> On Wed, 29 Apr 2026 19:32:17 +0100
> Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> 
> > Allow UM to bind sparsely populated memory regions by cyclically mapping
> > virtual ranges over a kernel-allocated dummy BO. This alternative is
> > preferable to the old method of handling sparseness in the UMD, because it
> > relied on the creation of a buffer object to the same end, despite the fact
> > Vulkan sparse resources don't need to be backed by a driver BO.
> > 
> > The choice of backing sparsely-bound regions with a Panthor BO was made so
> > as to profit from the existing shrinker reclaim code. That way no special
> > treatment must be given to the dummy sparse BOs when reclaiming memory, as
> > would be the case if we had chosen a raw kernel page implementation.
> > 
> > A new dummy BO is allocated per open file context, because even though the
> > Vulkan spec mandates that writes into sparsely bound regions must be
> > discarded, our implementation is still a workaround over the fact Mali CSF
> > GPUs cannot support this behaviour on the hardware level, so writes still
> > make it into the backing BO. If we had a global one, then it could be a
> > venue for information leaks between file contexts, which should never
> > happen in DRM.
> > 
> > Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> > Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> > ---
> >  drivers/gpu/drm/panthor/panthor_gem.c |  18 +++
> >  drivers/gpu/drm/panthor/panthor_gem.h |   2 +
> >  drivers/gpu/drm/panthor/panthor_mmu.c | 159 ++++++++++++++++++++++----
> >  include/uapi/drm/panthor_drm.h        |  12 ++
> >  4 files changed, 170 insertions(+), 21 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> > index 13295d7a593d..c798ac2963e1 100644
> > --- a/drivers/gpu/drm/panthor/panthor_gem.c
> > +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> > @@ -1345,6 +1345,24 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
> >  	return ERR_PTR(ret);
> >  }
> >  
> > +/**
> > + * panthor_dummy_bo_create() - Create a Panthor BO meant to back sparse bindings.
> > + * @ptdev: Device.
> > + *
> > + * Return: A valid pointer in case of success, an ERR_PTR() otherwise.
> > + */
> > +struct panthor_gem_object *
> > +panthor_dummy_bo_create(struct panthor_device *ptdev)
> > +{
> > +	/* Since even when the DRM device's mount point has enabled THP we have no guarantee
> > +	 * that drm_gem_get_pages() will return a single 2MiB PMD, and also we cannot be sure
> > +	 * that the 2MiB won't be reclaimed and re-allocated later on as 4KiB chunks, it doesn't
> > +	 * make sense to pre-populate this object's page array, nor to fall back on a BO size
> > +	 * of 4KiB. Sticking to a dummy object size of 2MiB lets us keep things simple for now.
> > +	 */
> > +	return panthor_gem_create(&ptdev->base, SZ_2M, DRM_PANTHOR_BO_NO_MMAP, NULL, 0);
> > +}
> > +
> >  static bool can_swap(void)
> >  {
> >  	return get_nr_swap_pages() > 0;
> > diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> > index ae0491d0b121..8639c2fa08e6 100644
> > --- a/drivers/gpu/drm/panthor/panthor_gem.h
> > +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> > @@ -315,6 +315,8 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
> >  
> >  void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo);
> >  
> > +struct panthor_gem_object *panthor_dummy_bo_create(struct panthor_device *ptdev);
> > +
> >  #ifdef CONFIG_DEBUG_FS
> >  void panthor_gem_debugfs_init(struct drm_minor *minor);
> >  #endif
> > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> > index f54a60cd0ec4..9257afd6adc9 100644
> > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > @@ -112,6 +112,17 @@ struct panthor_mmu {
> >  struct panthor_vm_pool {
> >  	/** @xa: Array used for VM handle tracking. */
> >  	struct xarray xa;
> > +
> > +	/**
> > +	 * @dummy: Dummy object used for sparse mappings
> > +	 *
> > +	 * Sparse bindings map virtual address ranges onto a dummy
> > +	 * BO in a modulo fashion. Even though sparse writes are meant
> > +	 * to be discarded and reads undefined, writes are still reflected
> > +	 * in the dummy buffer. That means we must keep a dummy object per
> > +	 * file context, to avoid data leaks between them.
> > +	 */
> > +	struct panthor_gem_object *dummy;
> >  };
> >  
> >  /**
> > @@ -391,6 +402,16 @@ struct panthor_vm {
> >  		 */
> >  		struct list_head lru_node;
> >  	} reclaim;
> > +
> > +	/**
> > +	 * @dummy: Dummy object used for sparse mappings.
> > +	 *
> > +	 * VMs must keep a reference to the file context-wide dummy BO because
> > +	 * they can outlive the file context, which includes the VM pool holding
> > +	 * the original dummy BO reference.
> > +	 *  
> 
> nit: Drop the extra blank line.
> 
> > +	 */
> > +	struct panthor_gem_object *dummy;
> >  };
> >  
> >  /**
> > @@ -1020,6 +1041,30 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
> >  	return 0;
> >  }
> >  
> > +static int
> > +panthor_vm_map_sparse(struct panthor_vm *vm, u64 iova, int prot,
> > +		      struct sg_table *sgt, u64 size)
> > +{
> > +	u64 mapped = 0;
> > +	int ret;
> > +
> > +	while (mapped < size) {
> > +		u64 addr = iova + mapped;
> > +		u32 chunk_size = min(size - mapped, SZ_2M - (addr & (SZ_2M - 1)));
> > +
> > +		ret = panthor_vm_map_pages(vm, addr, prot,
> > +					   sgt, 0, chunk_size);
> > +		if (ret) {
> > +			panthor_vm_unmap_pages(vm, iova, mapped);
> > +			return ret;
> > +		}
> > +
> > +		mapped += chunk_size;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> >  static int flags_to_prot(u32 flags)
> >  {
> >  	int prot = 0;
> > @@ -1262,6 +1307,7 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
> >  	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
> >  	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
> >  	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> > +	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE | \
> >  	 DRM_PANTHOR_VM_BIND_OP_TYPE_MASK)
> >  
> >  static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> > @@ -1269,6 +1315,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> >  					 struct panthor_gem_object *bo,
> >  					 const struct drm_panthor_vm_bind_op *op)
> >  {
> > +	bool is_sparse = op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
> >  	struct drm_gpuvm_bo *preallocated_vm_bo;
> >  	struct sg_table *sgt = NULL;
> >  	int ret;
> > @@ -1280,8 +1327,21 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> >  	    (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
> >  		return -EINVAL;
> >  
> > -	/* Make sure the VA and size are in-bounds. */
> > -	if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
> > +	/* uAPI mandates sparsely bound regions must not be executable. */
> > +	if (is_sparse && !(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC))
> > +		return -EINVAL;
> > +
> > +	/* For non-sparse, make sure the VA and size are in-bounds.
> > +	 * For sparse, this is not applicable, because the dummy BO is
> > +	 * repeatedly mapped over a potentially wider VA range.
> > +	 */
> > +	if (!is_sparse && (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size))
> > +		return -EINVAL;
> > +
> > +	/* For sparse, we don't expect any user BO, the BO we get passed
> > +	 * is the dummy BO attached to the VM pool.
> > +	 */
> > +	if (is_sparse && (op->bo_handle || op->bo_offset))
> >  		return -EINVAL;
> >  
> >  	/* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
> > @@ -1543,6 +1603,9 @@ int panthor_vm_pool_create_vm(struct panthor_device *ptdev,
> >  		return ret;
> >  	}
> >  
> > +	drm_gem_object_get(&pool->dummy->base);
> > +	vm->dummy = pool->dummy;
> > +
> >  	args->user_va_range = kernel_va_start;
> >  	return id;
> >  }
> > @@ -1634,6 +1697,7 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
> >  	xa_for_each(&pfile->vms->xa, i, vm)
> >  		panthor_vm_destroy(vm);
> >  
> > +	drm_gem_object_put(&pfile->vms->dummy->base);
> >  	xa_destroy(&pfile->vms->xa);
> >  	kfree(pfile->vms);
> >  }
> > @@ -1651,6 +1715,13 @@ int panthor_vm_pool_create(struct panthor_file *pfile)
> >  		return -ENOMEM;
> >  
> >  	xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
> > +
> > +	pfile->vms->dummy = panthor_dummy_bo_create(pfile->ptdev);
> > +	if (IS_ERR(pfile->vms->dummy)) {
> > +		kfree(pfile->vms);
> > +		return PTR_ERR(pfile->vms->dummy);
> > +	}
> > +
> >  	return 0;
> >  }
> >  
> > @@ -1987,6 +2058,9 @@ static void panthor_vm_free(struct drm_gpuvm *gpuvm)
> >  
> >  	free_io_pgtable_ops(vm->pgtbl_ops);
> >  
> > +	if (vm->dummy)
> > +		drm_gem_object_put(&vm->dummy->base);
> > +
> >  	drm_mm_takedown(&vm->mm);
> >  	kfree(vm);
> >  }
> > @@ -2146,7 +2220,23 @@ static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
> >  #define PANTHOR_VM_MAP_FLAGS \
> >  	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
> >  	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
> > -	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED)
> > +	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> > +	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> > +
> > +static int
> > +panthor_vm_exec_map_op(struct panthor_vm *vm, u32 flags,
> > +		       const struct drm_gpuva_op_map *op)
> > +{
> > +	struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
> > +	int prot = flags_to_prot(flags);
> > +
> > +	if (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> > +		return panthor_vm_map_sparse(vm, op->va.addr, prot,
> > +					     bo->dmap.sgt, op->va.range);
> > +
> > +	return panthor_vm_map_pages(vm, op->va.addr, prot, bo->dmap.sgt,
> > +				    op->gem.offset, op->va.range);
> > +}
> >  
> >  static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> >  {
> > @@ -2160,9 +2250,7 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> >  
> >  	panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
> >  
> > -	ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags),
> > -				   op_ctx->map.bo->dmap.sgt, op->map.gem.offset,
> > -				   op->map.va.range);
> > +	ret = panthor_vm_exec_map_op(vm, vma->flags, &op->map);
> >  	if (ret) {
> >  		panthor_vm_op_ctx_return_vma(op_ctx, vma);
> >  		return ret;
> > @@ -2178,13 +2266,16 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> >  }
> >  
> >  static bool
> > -iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr)
> > +iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr, bool is_sparse)
> >  {
> >  	struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
> >  	const struct page *pg;
> >  	pgoff_t bo_offset;
> >  
> > -	bo_offset = addr - op->va.addr + op->gem.offset;
> > +	/* Per-VM Dummy BO in sparse mappings is always 2MiB, so checking the
> > +	 * size of the very first page is enough.
> > +	 */
> > +	bo_offset = !is_sparse ? addr - op->va.addr + op->gem.offset : 0;
> >  	pg = bo->backing.pages[bo_offset >> PAGE_SHIFT];
> >  
> >  	return folio_size(page_folio(pg)) >= SZ_2M;
> > @@ -2194,6 +2285,8 @@ static void
> >  unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
> >  		     u64 *unmap_start, u64 *unmap_range)
> >  {
> > +	struct panthor_vma *unmap_vma = container_of(op->unmap->va, struct panthor_vma, base);
> > +	bool is_sparse = unmap_vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
> >  	u64 aligned_unmap_start, aligned_unmap_end, unmap_end;
> >  
> >  	unmap_end = *unmap_start + *unmap_range;
> > @@ -2205,7 +2298,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
> >  	 */
> >  	if (op->prev && aligned_unmap_start < *unmap_start &&
> >  	    op->prev->va.addr <= aligned_unmap_start &&
> > -	    iova_mapped_as_huge_page(op->prev, *unmap_start)) {
> > +	    (iova_mapped_as_huge_page(op->prev, *unmap_start, is_sparse))) {  
> 
> Actually, I think this could be:
> 	    
> 	    (is_sparse || iova_mapped_as_huge_page(op->prev, *unmap_start)) {
> 
> such that we always end up with sparse mappings starting at offset=0 on
> the dummy GEM, even if those mappings are not 2M aligned (see below for
> more reasons to keep this thing accurate).
> 
> >  		*unmap_range += *unmap_start - aligned_unmap_start;
> >  		*unmap_start = aligned_unmap_start;
> >  	}
> > @@ -2215,7 +2308,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
> >  	 */
> >  	if (op->next && aligned_unmap_end > unmap_end &&
> >  	    op->next->va.addr + op->next->va.range >= aligned_unmap_end &&
> > -	    iova_mapped_as_huge_page(op->next, unmap_end - 1)) {
> > +	    iova_mapped_as_huge_page(op->next, unmap_end - 1, is_sparse)) {
> >  		*unmap_range += aligned_unmap_end - unmap_end;
> >  	}

If we want all sparse mappings to start at offset 0, we'll also need
something like:

	if (op->next && aligned_unmap_end > unmap_end) {
		/* If this is a sparse mapping, we always unmap everything
		 * up to the next 2M boundary, just so we have the guarantee
		 * the new unaligned mapping starts at offset=0 of the dummy
		 * GEM.
		 */
		if (is_sparse) {
			u64 new_unmap_end = min(op->next->va.addr + op->next->va.range,
						aligned_unmap_end);

			*unmap_range += new_unmap_end - unmap_end;
		} else if (op->next->va.addr + op->next->va.range >= aligned_unmap_end &&
			   iova_mapped_as_huge_page(op->next, unmap_end - 1)) {
			*unmap_range += aligned_unmap_end - unmap_end;
		}	
	}

But it looks like we're accumulating special cases for SPARSE,
so I'm wondering if we're not better off setting
drm_gpuva::gem.offset to iova & (SZ_2M - 1) at this point,
and hiding all this GEM offset adjustment machinery behind
some helpers that would check the VM_BIND_MAP flags.

> >  }



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v10 5/6] drm/panthor: Support sparse mappings
  2026-04-29 18:32 ` [PATCH v10 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
  2026-04-30  7:57   ` Boris Brezillon
@ 2026-05-05  8:14   ` Marcin Ślusarz
  2026-05-05  8:33     ` Boris Brezillon
  1 sibling, 1 reply; 11+ messages in thread
From: Marcin Ślusarz @ 2026-05-05  8:14 UTC (permalink / raw)
  To: Adrián Larumbe
  Cc: linux-kernel, dri-devel, Steven Price, Boris Brezillon, kernel,
	Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
	David Airlie, Simona Vetter, Daniel Almeida, Alice Ryhl, nd

On Wed, Apr 29, 2026 at 07:32:17PM +0100, Adrián Larumbe wrote:
> @@ -1651,6 +1715,13 @@ int panthor_vm_pool_create(struct panthor_file *pfile)
>  		return -ENOMEM;
>  
>  	xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
> +
> +	pfile->vms->dummy = panthor_dummy_bo_create(pfile->ptdev);
> +	if (IS_ERR(pfile->vms->dummy)) {
> +		kfree(pfile->vms);
> +		return PTR_ERR(pfile->vms->dummy);

This is a use-after-free: PTR_ERR(pfile->vms->dummy) dereferences
pfile->vms right after the kfree() above.

> +	}
> +
>  	return 0;
>  }


* Re: [PATCH v10 5/6] drm/panthor: Support sparse mappings
  2026-05-05  8:14   ` Marcin Ślusarz
@ 2026-05-05  8:33     ` Boris Brezillon
  0 siblings, 0 replies; 11+ messages in thread
From: Boris Brezillon @ 2026-05-05  8:33 UTC (permalink / raw)
  To: Marcin Ślusarz
  Cc: Adrián Larumbe, linux-kernel, dri-devel, Steven Price,
	kernel, Liviu Dudau, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, Daniel Almeida,
	Alice Ryhl, nd

On Tue, 5 May 2026 10:14:50 +0200
Marcin Ślusarz <marcin.slusarz@arm.com> wrote:

> On Wed, Apr 29, 2026 at 07:32:17PM +0100, Adrián Larumbe wrote:
> > @@ -1651,6 +1715,13 @@ int panthor_vm_pool_create(struct panthor_file *pfile)
> >  		return -ENOMEM;
> >  
> >  	xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
> > +
> > +	pfile->vms->dummy = panthor_dummy_bo_create(pfile->ptdev);
> > +	if (IS_ERR(pfile->vms->dummy)) {
> > +		kfree(pfile->vms);
> > +		return PTR_ERR(pfile->vms->dummy);  
> 
> This is use-after-free.

Indeed. Let's add a proper error path where panthor_vm_pool_destroy()
is called to make sure we don't leak resources when an error occurs
anywhere in the creation path, and let's make panthor_vm_pool_destroy()
safe against dummy=NULL.

void panthor_vm_pool_destroy(struct panthor_file *pfile)
{
...

	if (pfile->vms->dummy)
		drm_gem_object_put(&pfile->vms->dummy->base);

...
}

int panthor_vm_pool_create(struct panthor_file *pfile)
{
	struct panthor_gem_object *dummy;

	...

	dummy = panthor_dummy_bo_create(pfile->ptdev);
	if (IS_ERR(dummy)) {
		ret = PTR_ERR(dummy);
		goto err_destroy_vm_pool;
	}

	pfile->vms->dummy = dummy;

	...

	return 0;

err_destroy_vm_pool:
	panthor_vm_pool_destroy(pfile);
	return ret;
}

> 
> > +	}
> > +
> >  	return 0;
> >  }  



end of thread, other threads:[~2026-05-05  8:33 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-29 18:32 [PATCH v10 0/6] Support sparse mappings in Panthor Adrián Larumbe
2026-04-29 18:32 ` [PATCH v10 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
2026-04-29 18:32 ` [PATCH v10 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
2026-04-29 18:32 ` [PATCH v10 3/6] drm/panthor: Delete spurious whitespace from uAPI header Adrián Larumbe
2026-04-29 18:32 ` [PATCH v10 4/6] drm/panthor: Remove unused operation context field Adrián Larumbe
2026-04-29 18:32 ` [PATCH v10 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
2026-04-30  7:57   ` Boris Brezillon
2026-04-30  9:57     ` Boris Brezillon
2026-05-05  8:14   ` Marcin Ślusarz
2026-05-05  8:33     ` Boris Brezillon
2026-04-29 18:32 ` [PATCH v10 6/6] drm/panthor: Bump the driver version to 1.9 Adrián Larumbe
