* [PATCH v7 0/6] Support sparse mappings in Panthor
@ 2026-04-15 11:28 Adrián Larumbe
2026-04-15 11:28 ` [PATCH v7 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
` (5 more replies)
0 siblings, 6 replies; 23+ messages in thread
From: Adrián Larumbe @ 2026-04-15 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
Adrián Larumbe
This patch series implements sparse mappings in Panthor. Owing to the lack of HW MMU
support for sparse page table entries, this had to be implemented using a dummy object
over which sparse mappings requested through VM_BIND are mapped cyclically.
To that end, a new VM_BIND flag was added to the driver's uAPI.
The end goal of this patch series is to improve support for Vulkan sparse
resources. At the moment, to implement this feature on Mali hardware, a Vulkan
sparse map is implemented by mapping the specified region to a "dummy bo" so
that the accesses do not fault. A newly created sparse resource starts off
unmapped, and therefore also has to be mapped to the "dummy bo". This "dummy
bo" is small (one page) in comparison to the sizes of the VA ranges that we
might want to map to it, so a large number of vm_bind ops can be necessary. For
example, if the user were to create a 100e6-byte sparse resident resource, we'd
have to poke VM_BIND with ceil(100e6/0x1000) = 24415 map operations.
The new VM_BIND sparse mapping feature addresses this inefficiency by letting
us implement a Vulkan sparse map operation, or the initialization of a sparse
resident resource, with a single map operation.
Link to the conversation for the previous patch series revision at:
https://lore.kernel.org/dri-devel/20260403192743.3572062-1-adrian.larumbe@collabora.com/
Changes in v7:
- Switched back to a Panthor BO-backed dummy object instead of raw pages, so as to benefit
from the existing shrinker reclaim paths.
- Created dummy BOs per file context to avoid information leaking between them.
- Reorganised some of the low-level page mapping code.
- Added commits deleting spurious whitespace and an unused op context field.
Changes in v6:
- Moved all the GPUVM core code into the driver backend.
- Discarded commits that touch on the gpuvm core too.
- Redesigned the uAPI so that no repeat range or user BO is supplied for sparse mappings.
- Replaced user-supplied BO with a kernel-allocated array of raw pages.
Changes in v5:
- Minor fixes to drm_gpuvm.c.
- Add a queryable device param for the panthor MMU page sizes.
- Add helper to make sure unmaps of repeated regions are correct.
- Some fixes to Panthor's repeat mappings implementation.
- Lump arguments to panthor_vm_prepare_map_op_ctx into a single struct.
Changes in v4:
- Fixed the warnings reported by the kernel test robot.
https://lore.kernel.org/oe-kbuild-all/202507041635.WyDu3TQ1-lkp@intel.com/
- Fixed the warnings reported by the CI.
https://patchwork.freedesktop.org/series/151264/
No changes in v3.
Changes in v2:
- Make panthor use this stuff.
- Make it possible to express a repeated mapping of any suitably sized
and aligned range of a BO, rather than strictly the page-sized
prefix, generalizing the API. Rename DRM_GPUVA_SINGLE_PAGE to
DRM_GPUVA_REPEAT.
- Clean up parts of drm/gpuvm affected by these changes.
Adrián Larumbe (6):
drm/panthor: Expose GPU page sizes to UM
drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx
drm/panthor: Delete spurious whitespace from uAPI header
drm/panthor: Remove unused operation context field
drm/panthor: Support sparse mappings
drm/panthor: Bump the driver version to 1.9
drivers/gpu/drm/panthor/panthor_device.h | 3 +
drivers/gpu/drm/panthor/panthor_drv.c | 10 +
drivers/gpu/drm/panthor/panthor_gem.c | 35 ++++
drivers/gpu/drm/panthor/panthor_gem.h | 2 +
drivers/gpu/drm/panthor/panthor_mmu.c | 230 ++++++++++++++++++-----
include/uapi/drm/panthor_drm.h | 26 ++-
6 files changed, 260 insertions(+), 46 deletions(-)
--
2.53.0
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH v7 1/6] drm/panthor: Expose GPU page sizes to UM
2026-04-15 11:28 [PATCH v7 0/6] Support sparse mappings in Panthor Adrián Larumbe
@ 2026-04-15 11:28 ` Adrián Larumbe
2026-04-15 13:10 ` Boris Brezillon
2026-04-15 15:19 ` Steven Price
2026-04-15 11:28 ` [PATCH v7 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
` (4 subsequent siblings)
5 siblings, 2 replies; 23+ messages in thread
From: Adrián Larumbe @ 2026-04-15 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Daniel Almeida, Alice Ryhl
Future commits implementing repeated mappings will only accept repeat
values that are multiples of the GPU page sizes. That means these
values must be made known to UM. Expose them through a queryable GPU
info value.
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
drivers/gpu/drm/panthor/panthor_device.h | 3 +++
drivers/gpu/drm/panthor/panthor_drv.c | 8 ++++++++
drivers/gpu/drm/panthor/panthor_mmu.c | 9 ++++++++-
include/uapi/drm/panthor_drm.h | 13 +++++++++++++
4 files changed, 32 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
index 5cba272f9b4d..d856a4fe1d61 100644
--- a/drivers/gpu/drm/panthor/panthor_device.h
+++ b/drivers/gpu/drm/panthor/panthor_device.h
@@ -158,6 +158,9 @@ struct panthor_device {
/** @csif_info: Command stream interface information. */
struct drm_panthor_csif_info csif_info;
+ /** @mmu_info: MMU info */
+ struct drm_panthor_mmu_info mmu_info;
+
/** @hw: GPU-specific data. */
struct panthor_hw *hw;
diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index 73fc983dc9b4..a8090bc4e33c 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -175,6 +175,7 @@ panthor_get_uobj_array(const struct drm_panthor_obj_array *in, u32 min_stride,
_Generic(_obj_name, \
PANTHOR_UOBJ_DECL(struct drm_panthor_gpu_info, tiler_present), \
PANTHOR_UOBJ_DECL(struct drm_panthor_csif_info, pad), \
+ PANTHOR_UOBJ_DECL(struct drm_panthor_mmu_info, page_size_bitmap), \
PANTHOR_UOBJ_DECL(struct drm_panthor_timestamp_info, current_timestamp), \
PANTHOR_UOBJ_DECL(struct drm_panthor_group_priorities_info, pad), \
PANTHOR_UOBJ_DECL(struct drm_panthor_sync_op, timeline_value), \
@@ -946,6 +947,10 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
args->size = sizeof(ptdev->csif_info);
return 0;
+ case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
+ args->size = sizeof(ptdev->mmu_info);
+ return 0;
+
case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO:
args->size = sizeof(timestamp_info);
return 0;
@@ -966,6 +971,9 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
case DRM_PANTHOR_DEV_QUERY_CSIF_INFO:
return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->csif_info);
+ case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
+ return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->mmu_info);
+
case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO:
ret = copy_struct_from_user(&timestamp_info,
sizeof(timestamp_info),
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index bd41c892beb7..01577be88933 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2769,7 +2769,7 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
refcount_set(&vm->as.active_cnt, 0);
pgtbl_cfg = (struct io_pgtable_cfg) {
- .pgsize_bitmap = SZ_4K | SZ_2M,
+ .pgsize_bitmap = ptdev->mmu_info.page_size_bitmap,
.ias = va_bits,
.oas = pa_bits,
.coherent_walk = ptdev->coherent,
@@ -3214,6 +3214,11 @@ static void panthor_mmu_release_wq(struct drm_device *ddev, void *res)
destroy_workqueue(res);
}
+static void panthor_mmu_info_init(struct panthor_device *ptdev)
+{
+ ptdev->mmu_info.page_size_bitmap = SZ_4K | SZ_2M;
+}
+
/**
* panthor_mmu_init() - Initialize the MMU logic.
* @ptdev: Device.
@@ -3226,6 +3231,8 @@ int panthor_mmu_init(struct panthor_device *ptdev)
struct panthor_mmu *mmu;
int ret, irq;
+ panthor_mmu_info_init(ptdev);
+
mmu = drmm_kzalloc(&ptdev->base, sizeof(*mmu), GFP_KERNEL);
if (!mmu)
return -ENOMEM;
diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
index 0e455d91e77d..dc2704fc2829 100644
--- a/include/uapi/drm/panthor_drm.h
+++ b/include/uapi/drm/panthor_drm.h
@@ -246,6 +246,9 @@ enum drm_panthor_dev_query_type {
/** @DRM_PANTHOR_DEV_QUERY_CSIF_INFO: Query command-stream interface information. */
DRM_PANTHOR_DEV_QUERY_CSIF_INFO,
+ /** @DRM_PANTHOR_DEV_QUERY_MMU_INFO: Query MMU information. */
+ DRM_PANTHOR_DEV_QUERY_MMU_INFO,
+
/** @DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO: Query timestamp information. */
DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO,
@@ -487,6 +490,16 @@ struct drm_panthor_timestamp_info {
__u64 cpu_timestamp_nsec;
};
+/**
+ * struct drm_panthor_mmu_info - MMU information
+ *
+ * Structure grouping all queryable information relating to the MMU.
+ */
+struct drm_panthor_mmu_info {
+ /** @page_size_bitmap: Allowed page sizes */
+ __u64 page_size_bitmap;
+};
+
/**
* struct drm_panthor_group_priorities_info - Group priorities information
*
--
2.53.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v7 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx
2026-04-15 11:28 [PATCH v7 0/6] Support sparse mappings in Panthor Adrián Larumbe
2026-04-15 11:28 ` [PATCH v7 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
@ 2026-04-15 11:28 ` Adrián Larumbe
2026-04-15 13:11 ` Boris Brezillon
2026-04-15 15:19 ` Steven Price
2026-04-15 11:28 ` [PATCH v7 3/6] drm/panthor: Delete spurious whitespace from uAPI header Adrián Larumbe
` (3 subsequent siblings)
5 siblings, 2 replies; 23+ messages in thread
From: Adrián Larumbe @ 2026-04-15 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Instead of passing its constituent elements, pass the whole struct to
simplify the function prototype.
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
drivers/gpu/drm/panthor/panthor_mmu.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 01577be88933..d9e2f8afb8fb 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1275,9 +1275,7 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
struct panthor_vm *vm,
struct panthor_gem_object *bo,
- u64 offset,
- u64 size, u64 va,
- u32 flags)
+ const struct drm_panthor_vm_bind_op *op)
{
struct drm_gpuvm_bo *preallocated_vm_bo;
struct sg_table *sgt = NULL;
@@ -1286,12 +1284,12 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
if (!bo)
return -EINVAL;
- if ((flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
- (flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
+ if ((op->flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
+ (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
return -EINVAL;
/* Make sure the VA and size are in-bounds. */
- if (size > bo->base.size || offset > bo->base.size - size)
+ if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
return -EINVAL;
/* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
@@ -1299,7 +1297,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
bo->exclusive_vm_root_gem != panthor_vm_root_gem(vm))
return -EINVAL;
- panthor_vm_init_op_ctx(op_ctx, size, va, flags);
+ panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags);
ret = panthor_vm_op_ctx_prealloc_vmas(op_ctx);
if (ret)
@@ -1328,7 +1326,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
}
op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
- op_ctx->map.bo_offset = offset;
+ op_ctx->map.bo_offset = op->bo_offset;
ret = panthor_vm_op_ctx_prealloc_pts(op_ctx);
if (ret)
@@ -2849,10 +2847,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
gem = drm_gem_object_lookup(file, op->bo_handle);
ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
gem ? to_panthor_bo(gem) : NULL,
- op->bo_offset,
- op->size,
- op->va,
- op->flags);
+ op);
drm_gem_object_put(gem);
return ret;
@@ -3048,10 +3043,16 @@ int panthor_vm_bind_exec_sync_op(struct drm_file *file,
int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo,
u64 offset, u64 size, u64 va, u32 flags)
{
+ struct drm_panthor_vm_bind_op op = {
+ .bo_offset = offset,
+ .size = size,
+ .va = va,
+ .flags = flags,
+ };
struct panthor_vm_op_ctx op_ctx;
int ret;
- ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, offset, size, va, flags);
+ ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
if (ret)
return ret;
--
2.53.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v7 3/6] drm/panthor: Delete spurious whitespace from uAPI header
2026-04-15 11:28 [PATCH v7 0/6] Support sparse mappings in Panthor Adrián Larumbe
2026-04-15 11:28 ` [PATCH v7 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
2026-04-15 11:28 ` [PATCH v7 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
@ 2026-04-15 11:28 ` Adrián Larumbe
2026-04-15 13:41 ` Boris Brezillon
2026-04-15 15:19 ` Steven Price
2026-04-15 11:28 ` [PATCH v7 4/6] drm/panthor: Remove unused operation context field Adrián Larumbe
` (2 subsequent siblings)
5 siblings, 2 replies; 23+ messages in thread
From: Adrián Larumbe @ 2026-04-15 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
Adrián Larumbe, Daniel Almeida, Alice Ryhl, Liviu Dudau,
David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann
There's no extra blank line after the last member of any other uAPI
structure, so delete this one.
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
include/uapi/drm/panthor_drm.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
index dc2704fc2829..42c901ebdb7a 100644
--- a/include/uapi/drm/panthor_drm.h
+++ b/include/uapi/drm/panthor_drm.h
@@ -677,7 +677,6 @@ struct drm_panthor_vm_bind_op {
* This array shall not be empty for sync-only operations.
*/
struct drm_panthor_obj_array syncs;
-
};
/**
--
2.53.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v7 4/6] drm/panthor: Remove unused operation context field
2026-04-15 11:28 [PATCH v7 0/6] Support sparse mappings in Panthor Adrián Larumbe
` (2 preceding siblings ...)
2026-04-15 11:28 ` [PATCH v7 3/6] drm/panthor: Delete spurious whitespace from uAPI header Adrián Larumbe
@ 2026-04-15 11:28 ` Adrián Larumbe
2026-04-15 13:41 ` Boris Brezillon
2026-04-15 15:20 ` Steven Price
2026-04-15 11:28 ` [PATCH v7 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
2026-04-15 11:28 ` [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9 Adrián Larumbe
5 siblings, 2 replies; 23+ messages in thread
From: Adrián Larumbe @ 2026-04-15 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
A Panthor BO's sg_table is now retrieved from its dmap field, leaving the
operation context's map.sgt field unused. Remove it.
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
drivers/gpu/drm/panthor/panthor_mmu.c | 8 --------
1 file changed, 8 deletions(-)
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index d9e2f8afb8fb..cea78e5f0591 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -195,14 +195,6 @@ struct panthor_vm_op_ctx {
/** @map.bo_offset: Offset in the buffer object. */
u64 bo_offset;
- /**
- * @map.sgt: sg-table pointing to pages backing the GEM object.
- *
- * This is gathered at job creation time, such that we don't have
- * to allocate in ::run_job().
- */
- struct sg_table *sgt;
-
/** @map.bo: the BO being mapped. */
struct panthor_gem_object *bo;
} map;
--
2.53.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v7 5/6] drm/panthor: Support sparse mappings
2026-04-15 11:28 [PATCH v7 0/6] Support sparse mappings in Panthor Adrián Larumbe
` (3 preceding siblings ...)
2026-04-15 11:28 ` [PATCH v7 4/6] drm/panthor: Remove unused operation context field Adrián Larumbe
@ 2026-04-15 11:28 ` Adrián Larumbe
2026-04-15 15:12 ` Boris Brezillon
2026-04-15 11:28 ` [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9 Adrián Larumbe
5 siblings, 1 reply; 23+ messages in thread
From: Adrián Larumbe @ 2026-04-15 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
Daniel Almeida, Alice Ryhl
Allow UM to bind sparsely populated memory regions by cyclically mapping
virtual ranges over a kernel-allocated dummy BO. This is preferable to the
old method of handling sparseness in the UMD, which relied on creating a
user buffer object for the same purpose, even though Vulkan sparse
resources don't need to be backed by a driver BO.
The choice of backing sparsely-bound regions with a Panthor BO was made so
as to benefit from the existing shrinker reclaim code. That way no special
treatment must be given to the dummy sparse BOs when reclaiming memory, as
would be the case if we had chosen a raw kernel page implementation.
A new dummy BO is allocated per open file context because, even though the
Vulkan spec mandates that writes into sparsely bound regions must be
discarded, our implementation is still a workaround for the fact that Mali
CSF GPUs cannot support this behaviour at the hardware level, so writes
still make it into the backing BO. If we had a global one, it could be a
vector for information leaks between file contexts, which should never
happen in DRM.
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
drivers/gpu/drm/panthor/panthor_gem.c | 35 +++++
drivers/gpu/drm/panthor/panthor_gem.h | 2 +
drivers/gpu/drm/panthor/panthor_mmu.c | 192 ++++++++++++++++++++++----
include/uapi/drm/panthor_drm.h | 12 ++
4 files changed, 215 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index 13295d7a593d..e27251ef113b 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -1345,6 +1345,41 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
return ERR_PTR(ret);
}
+/**
+ * panthor_dummy_bo_create() - Create a Panthor BO meant to back sparse bindings.
+ * @ptdev: Device.
+ *
+ * Return: A valid pointer in case of success, an ERR_PTR() otherwise.
+ */
+struct panthor_gem_object *
+panthor_dummy_bo_create(struct panthor_device *ptdev)
+{
+ u32 dummy_flags = DRM_PANTHOR_BO_NO_MMAP;
+ struct panthor_gem_object *bo;
+ struct page **pages;
+
+ bo = panthor_gem_create(&ptdev->base, SZ_2M, dummy_flags, NULL, 0);
+ if (IS_ERR_OR_NULL(bo))
+ return bo;
+
+ pages = drm_gem_get_pages(&bo->base);
+ if (PTR_ERR(pages) == -ENOMEM) {
+ drm_gem_object_put(&bo->base);
+ bo = panthor_gem_create(&ptdev->base, SZ_4K, dummy_flags, NULL, 0);
+ if (IS_ERR_OR_NULL(bo))
+ return bo;
+ pages = drm_gem_get_pages(&bo->base);
+ }
+
+ if (IS_ERR_OR_NULL(pages)) {
+ drm_gem_object_put(&bo->base);
+ return ERR_CAST(pages);
+ }
+
+ bo->backing.pages = pages;
+ return bo;
+}
+
static bool can_swap(void)
{
return get_nr_swap_pages() > 0;
diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
index ae0491d0b121..dcf9cdd51d93 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.h
+++ b/drivers/gpu/drm/panthor/panthor_gem.h
@@ -264,6 +264,8 @@ void panthor_gem_kernel_bo_set_label(struct panthor_kernel_bo *bo, const char *l
int panthor_gem_sync(struct drm_gem_object *obj,
u32 type, u64 offset, u64 size);
+struct panthor_gem_object *panthor_dummy_bo_create(struct panthor_device *ptdev);
+
struct drm_gem_object *
panthor_gem_prime_import(struct drm_device *dev,
struct dma_buf *dma_buf);
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index cea78e5f0591..6585fd6b5d04 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -112,6 +112,23 @@ struct panthor_mmu {
struct panthor_vm_pool {
/** @xa: Array used for VM handle tracking. */
struct xarray xa;
+
+ /** @dummy: Dummy drm object related fields
+ *
+ * Sparse bindings map virtual address ranges onto a dummy
+ * BO in a modulo fashion. Even though sparse writes are meant
+ * to be discarded and reads to return undefined values, writes are still reflected
+ * in the dummy buffer. That means we must keep a dummy object per
+ * file context, to avoid data leaks between them.
+ *
+ */
+ struct {
+ /** @dummy.obj: Dummy object used for sparse mappings. */
+ struct panthor_gem_object *obj;
+
+ /** @dummy.lock: Lock protecting against races on dummy object. */
+ struct mutex lock;
+ } dummy;
};
/**
@@ -391,6 +408,15 @@ struct panthor_vm {
*/
struct list_head lru_node;
} reclaim;
+
+ /** @dummy: Dummy object used for sparse mappings.
+ *
+ * VMs must keep a reference to the file context-wide dummy BO because
+ * they can outlive the file context, which includes the VM pool holding
+ * the original dummy BO reference.
+ *
+ */
+ struct panthor_gem_object *dummy;
};
/**
@@ -1020,6 +1046,46 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
return 0;
}
+static int
+panthor_vm_map_sparse(struct panthor_vm *vm, u64 iova, int prot,
+ struct sg_table *sgt, u64 size)
+{
+ u64 first_iova = iova;
+ u64 first_size = size;
+ int ret;
+
+ if (iova & (SZ_2M - 1)) {
+ u64 unaligned_size = min(ALIGN(iova, SZ_2M) - iova, size);
+
+ ret = panthor_vm_map_pages(vm, iova, prot, sgt,
+ 0, unaligned_size);
+ if (ret)
+ return ret;
+
+ size -= unaligned_size;
+ iova += unaligned_size;
+ }
+
+ /* TODO: we should probably optimize this at the io_pgtable level. */
+ while (size > 0) {
+ u64 next_size = min(size, sg_dma_len(sgt->sgl));
+
+ ret = panthor_vm_map_pages(vm, iova, prot,
+ sgt, 0, next_size);
+ if (ret)
+ goto err_unmap;
+
+ size -= next_size;
+ iova += next_size;
+ }
+
+ return 0;
+
+err_unmap:
+ panthor_vm_unmap_pages(vm, first_iova, first_size - size);
+ return ret;
+}
+
static int flags_to_prot(u32 flags)
{
int prot = 0;
@@ -1258,38 +1324,71 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
return 0;
}
+static struct panthor_gem_object *
+panthor_vm_get_dummy_obj(struct panthor_vm_pool *pool,
+ struct panthor_vm *vm)
+{
+ scoped_guard(mutex, &pool->dummy.lock) {
+ if (!vm->dummy) {
+ if (!pool->dummy.obj) {
+ struct panthor_gem_object *obj =
+ panthor_dummy_bo_create(vm->ptdev);
+ if (IS_ERR(obj))
+ return obj;
+
+ pool->dummy.obj = obj;
+ }
+
+ drm_gem_object_get(&pool->dummy.obj->base);
+ vm->dummy = pool->dummy.obj;
+ }
+ }
+
+ return vm->dummy;
+}
+
#define PANTHOR_VM_BIND_OP_MAP_FLAGS \
(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
+ DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE | \
DRM_PANTHOR_VM_BIND_OP_TYPE_MASK)
static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
+ struct panthor_vm_pool *pool,
struct panthor_vm *vm,
struct panthor_gem_object *bo,
const struct drm_panthor_vm_bind_op *op)
{
+ bool is_sparse = op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
struct drm_gpuvm_bo *preallocated_vm_bo;
struct sg_table *sgt = NULL;
int ret;
- if (!bo)
- return -EINVAL;
-
if ((op->flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
(op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
return -EINVAL;
/* Make sure the VA and size are in-bounds. */
- if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
+ if (bo && (is_sparse || op->size > bo->base.size ||
+ op->bo_offset > bo->base.size - op->size))
return -EINVAL;
+ else if (!is_sparse && !bo)
+ return -EINVAL;
+ else if (is_sparse && (!pool || op->bo_handle || op->bo_offset))
+ return -EINVAL;
+
+ if (is_sparse) {
+ bo = panthor_vm_get_dummy_obj(pool, vm);
+ if (IS_ERR_OR_NULL(bo))
+ return PTR_ERR(bo);
+ }
/* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
if (bo->exclusive_vm_root_gem &&
bo->exclusive_vm_root_gem != panthor_vm_root_gem(vm))
return -EINVAL;
- panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags);
+ panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags
+ | ((is_sparse) ? DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC : 0));
ret = panthor_vm_op_ctx_prealloc_vmas(op_ctx);
if (ret)
@@ -1634,6 +1733,13 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
xa_for_each(&pfile->vms->xa, i, vm)
panthor_vm_destroy(vm);
+ scoped_guard(mutex, &pfile->vms->dummy.lock) {
+ struct panthor_gem_object *bo = pfile->vms->dummy.obj;
+
+ if (bo)
+ drm_gem_object_put(&bo->base);
+ }
+
xa_destroy(&pfile->vms->xa);
kfree(pfile->vms);
}
@@ -1651,6 +1757,8 @@ int panthor_vm_pool_create(struct panthor_file *pfile)
return -ENOMEM;
xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
+
+ mutex_init(&pfile->vms->dummy.lock);
return 0;
}
@@ -1987,6 +2095,9 @@ static void panthor_vm_free(struct drm_gpuvm *gpuvm)
free_io_pgtable_ops(vm->pgtbl_ops);
+ if (vm->dummy)
+ drm_gem_object_put(&vm->dummy->base);
+
drm_mm_takedown(&vm->mm);
kfree(vm);
}
@@ -2146,7 +2257,26 @@ static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
#define PANTHOR_VM_MAP_FLAGS \
(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
- DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED)
+ DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
+ DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
+
+static int
+panthor_vm_exec_map_op(struct panthor_vm *vm, u32 flags,
+ const struct drm_gpuva_op_map *op)
+{
+ struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
+ int prot = flags_to_prot(flags);
+
+ if (!op->va.range)
+ return 0;
+
+ if (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
+ return panthor_vm_map_sparse(vm, op->va.addr, prot,
+ bo->dmap.sgt, op->va.range);
+
+ return panthor_vm_map_pages(vm, op->va.addr, prot, bo->dmap.sgt,
+ op->gem.offset, op->va.range);
+}
static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
{
@@ -2160,9 +2290,7 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
- ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags),
- op_ctx->map.bo->dmap.sgt, op->map.gem.offset,
- op->map.va.range);
+ ret = panthor_vm_exec_map_op(vm, vma->flags, &op->map);
if (ret) {
panthor_vm_op_ctx_return_vma(op_ctx, vma);
return ret;
@@ -2178,13 +2306,15 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
}
static bool
-iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr)
+iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr, bool is_sparse)
{
struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
const struct page *pg;
pgoff_t bo_offset;
bo_offset = addr - op->va.addr + op->gem.offset;
+ if (is_sparse)
+ bo_offset %= bo->base.size;
pg = bo->backing.pages[bo_offset >> PAGE_SHIFT];
return folio_size(page_folio(pg)) >= SZ_2M;
@@ -2194,6 +2324,8 @@ static void
unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
u64 *unmap_start, u64 *unmap_range)
{
+ struct panthor_vma *unmap_vma = container_of(op->unmap->va, struct panthor_vma, base);
+ bool is_sparse = unmap_vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
u64 aligned_unmap_start, aligned_unmap_end, unmap_end;
unmap_end = *unmap_start + *unmap_range;
@@ -2205,7 +2337,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
*/
if (op->prev && aligned_unmap_start < *unmap_start &&
op->prev->va.addr <= aligned_unmap_start &&
- iova_mapped_as_huge_page(op->prev, *unmap_start)) {
+ (iova_mapped_as_huge_page(op->prev, *unmap_start, is_sparse))) {
*unmap_range += *unmap_start - aligned_unmap_start;
*unmap_start = aligned_unmap_start;
}
@@ -2215,7 +2347,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
*/
if (op->next && aligned_unmap_end > unmap_end &&
op->next->va.addr + op->next->va.range >= aligned_unmap_end &&
- iova_mapped_as_huge_page(op->next, unmap_end - 1)) {
+ (iova_mapped_as_huge_page(op->next, unmap_end - 1, is_sparse))) {
*unmap_range += aligned_unmap_end - unmap_end;
}
}
@@ -2251,14 +2383,17 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
}
if (op->remap.prev) {
- struct panthor_gem_object *bo = to_panthor_bo(op->remap.prev->gem.obj);
- u64 offset = op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr;
- u64 size = op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start;
+ const struct drm_gpuva_op_map map_op = {
+ .va.addr = unmap_start,
+ .va.range =
+ op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start,
+ .gem.obj = op->remap.prev->gem.obj,
+ .gem.offset =
+ op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr,
+ };
if (!unmap_vma->evicted) {
- ret = panthor_vm_map_pages(vm, unmap_start,
- flags_to_prot(unmap_vma->flags),
- bo->dmap.sgt, offset, size);
+ ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
if (ret)
return ret;
}
@@ -2269,14 +2404,15 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
}
if (op->remap.next) {
- struct panthor_gem_object *bo = to_panthor_bo(op->remap.next->gem.obj);
- u64 addr = op->remap.next->va.addr;
- u64 size = unmap_start + unmap_range - op->remap.next->va.addr;
+ const struct drm_gpuva_op_map map_op = {
+ .va.addr = op->remap.next->va.addr,
+ .va.range = unmap_start + unmap_range - op->remap.next->va.addr,
+ .gem.obj = op->remap.next->gem.obj,
+ .gem.offset = op->remap.next->gem.offset,
+ };
if (!unmap_vma->evicted) {
- ret = panthor_vm_map_pages(vm, addr, flags_to_prot(unmap_vma->flags),
- bo->dmap.sgt, op->remap.next->gem.offset,
- size);
+ ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
if (ret)
return ret;
}
@@ -2826,6 +2962,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
const struct drm_panthor_vm_bind_op *op,
struct panthor_vm_op_ctx *op_ctx)
{
+ struct panthor_file *pfile = file->driver_priv;
ssize_t vm_pgsz = panthor_vm_page_size(vm);
struct drm_gem_object *gem;
int ret;
@@ -2837,7 +2974,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
switch (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) {
case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
gem = drm_gem_object_lookup(file, op->bo_handle);
- ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
+ ret = panthor_vm_prepare_map_op_ctx(op_ctx, pfile->vms, vm,
gem ? to_panthor_bo(gem) : NULL,
op);
drm_gem_object_put(gem);
@@ -3044,7 +3181,10 @@ int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo
struct panthor_vm_op_ctx op_ctx;
int ret;
- ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
+ if (drm_WARN_ON(&vm->ptdev->base, flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE))
+ return -EINVAL;
+
+ ret = panthor_vm_prepare_map_op_ctx(&op_ctx, NULL, vm, bo, &op);
if (ret)
return ret;
diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
index 42c901ebdb7a..1a9bcfc8f4cd 100644
--- a/include/uapi/drm/panthor_drm.h
+++ b/include/uapi/drm/panthor_drm.h
@@ -614,6 +614,18 @@ enum drm_panthor_vm_bind_op_flags {
*/
DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED = 1 << 2,
+ /**
+ * @DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE: Repeat a BO range
+ *
+ * Only valid with DRM_PANTHOR_VM_BIND_OP_TYPE_MAP.
+ *
+ * When this flag is set, the whole vm_bind range is mapped over a dummy object in a cyclic
+ * fashion, and all GPU reads from addresses in the range return undefined values. This flag
+ * being set means drm_panthor_vm_bind_op::bo_offset and drm_panthor_vm_bind_op::bo_handle
+ * must both be set to 0.
+ */
+ DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE = 1 << 3,
+
/**
* @DRM_PANTHOR_VM_BIND_OP_TYPE_MASK: Mask used to determine the type of operation.
*/
--
2.53.0
^ permalink raw reply related [flat|nested] 23+ messages in thread
* [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9
2026-04-15 11:28 [PATCH v7 0/6] Support sparse mappings in Panthor Adrián Larumbe
` (4 preceding siblings ...)
2026-04-15 11:28 ` [PATCH v7 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
@ 2026-04-15 11:28 ` Adrián Larumbe
2026-04-15 13:54 ` Jani Nikula
2026-04-15 15:22 ` Boris Brezillon
5 siblings, 2 replies; 23+ messages in thread
From: Adrián Larumbe @ 2026-04-15 11:28 UTC (permalink / raw)
To: linux-kernel
Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
Bump the driver version to reflect the new MMU info query ioctl
parameter and the VM_BIND map sparse flag.
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
---
drivers/gpu/drm/panthor/panthor_drv.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index a8090bc4e33c..c8fb63ede62c 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -1787,6 +1787,8 @@ static void panthor_debugfs_init(struct drm_minor *minor)
* - adds DRM_IOCTL_PANTHOR_BO_QUERY_INFO ioctl
* - adds drm_panthor_gpu_info::selected_coherency
* - 1.8 - extends DEV_QUERY_TIMESTAMP_INFO with flags
+ * - 1.9 - adds DRM_PANTHOR_DEV_QUERY_MMU_INFO query
+ * - adds DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE flag
*/
static const struct drm_driver panthor_drm_driver = {
.driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
--
2.53.0
* Re: [PATCH v7 1/6] drm/panthor: Expose GPU page sizes to UM
2026-04-15 11:28 ` [PATCH v7 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
@ 2026-04-15 13:10 ` Boris Brezillon
2026-04-15 15:19 ` Steven Price
1 sibling, 0 replies; 23+ messages in thread
From: Boris Brezillon @ 2026-04-15 13:10 UTC (permalink / raw)
To: Adrián Larumbe
Cc: linux-kernel, dri-devel, Steven Price, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Daniel Almeida, Alice Ryhl
On Wed, 15 Apr 2026 12:28:45 +0100
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> In future commits that will implement repeated mappings, only repeat
> values that are multiples of a GPU page size will be tolerated. That
> means these values must be made known to UM. Do it through a queryable
> GPU info value.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
> drivers/gpu/drm/panthor/panthor_device.h | 3 +++
> drivers/gpu/drm/panthor/panthor_drv.c | 8 ++++++++
> drivers/gpu/drm/panthor/panthor_mmu.c | 9 ++++++++-
> include/uapi/drm/panthor_drm.h | 13 +++++++++++++
> 4 files changed, 32 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
> index 5cba272f9b4d..d856a4fe1d61 100644
> --- a/drivers/gpu/drm/panthor/panthor_device.h
> +++ b/drivers/gpu/drm/panthor/panthor_device.h
> @@ -158,6 +158,9 @@ struct panthor_device {
> /** @csif_info: Command stream interface information. */
> struct drm_panthor_csif_info csif_info;
>
> + /** @mmu_info: MMU info */
> + struct drm_panthor_mmu_info mmu_info;
> +
> /** @hw: GPU-specific data. */
> struct panthor_hw *hw;
>
> diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
> index 73fc983dc9b4..a8090bc4e33c 100644
> --- a/drivers/gpu/drm/panthor/panthor_drv.c
> +++ b/drivers/gpu/drm/panthor/panthor_drv.c
> @@ -175,6 +175,7 @@ panthor_get_uobj_array(const struct drm_panthor_obj_array *in, u32 min_stride,
> _Generic(_obj_name, \
> PANTHOR_UOBJ_DECL(struct drm_panthor_gpu_info, tiler_present), \
> PANTHOR_UOBJ_DECL(struct drm_panthor_csif_info, pad), \
> + PANTHOR_UOBJ_DECL(struct drm_panthor_mmu_info, page_size_bitmap), \
> PANTHOR_UOBJ_DECL(struct drm_panthor_timestamp_info, current_timestamp), \
> PANTHOR_UOBJ_DECL(struct drm_panthor_group_priorities_info, pad), \
> PANTHOR_UOBJ_DECL(struct drm_panthor_sync_op, timeline_value), \
> @@ -946,6 +947,10 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
> args->size = sizeof(ptdev->csif_info);
> return 0;
>
> + case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
> + args->size = sizeof(ptdev->mmu_info);
> + return 0;
> +
> case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO:
> args->size = sizeof(timestamp_info);
> return 0;
> @@ -966,6 +971,9 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
> case DRM_PANTHOR_DEV_QUERY_CSIF_INFO:
> return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->csif_info);
>
> + case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
> + return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->mmu_info);
> +
> case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO:
> > ret = copy_struct_from_user(&timestamp_info,
> sizeof(timestamp_info),
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index bd41c892beb7..01577be88933 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -2769,7 +2769,7 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
> refcount_set(&vm->as.active_cnt, 0);
>
> pgtbl_cfg = (struct io_pgtable_cfg) {
> - .pgsize_bitmap = SZ_4K | SZ_2M,
> + .pgsize_bitmap = ptdev->mmu_info.page_size_bitmap,
> .ias = va_bits,
> .oas = pa_bits,
> .coherent_walk = ptdev->coherent,
> @@ -3214,6 +3214,11 @@ static void panthor_mmu_release_wq(struct drm_device *ddev, void *res)
> destroy_workqueue(res);
> }
>
> +static void panthor_mmu_info_init(struct panthor_device *ptdev)
> +{
> + ptdev->mmu_info.page_size_bitmap = SZ_4K | SZ_2M;
> +}
> +
> /**
> * panthor_mmu_init() - Initialize the MMU logic.
> * @ptdev: Device.
> @@ -3226,6 +3231,8 @@ int panthor_mmu_init(struct panthor_device *ptdev)
> struct panthor_mmu *mmu;
> int ret, irq;
>
> + panthor_mmu_info_init(ptdev);
> +
> mmu = drmm_kzalloc(&ptdev->base, sizeof(*mmu), GFP_KERNEL);
> if (!mmu)
> return -ENOMEM;
> diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
> index 0e455d91e77d..dc2704fc2829 100644
> --- a/include/uapi/drm/panthor_drm.h
> +++ b/include/uapi/drm/panthor_drm.h
> @@ -246,6 +246,9 @@ enum drm_panthor_dev_query_type {
> /** @DRM_PANTHOR_DEV_QUERY_CSIF_INFO: Query command-stream interface information. */
> DRM_PANTHOR_DEV_QUERY_CSIF_INFO,
>
> + /** @DRM_PANTHOR_DEV_QUERY_MMU_INFO: Query MMU information. */
> + DRM_PANTHOR_DEV_QUERY_MMU_INFO,
> +
> /** @DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO: Query timestamp information. */
> DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO,
>
> @@ -487,6 +490,16 @@ struct drm_panthor_timestamp_info {
> __u64 cpu_timestamp_nsec;
> };
>
> +/**
> + * struct drm_panthor_mmu_info - MMU information
> + *
> + * Structure grouping all queryable information relating to the MMU.
> + */
> +struct drm_panthor_mmu_info {
> + /** @page_size_bitmap: Allowed page sizes */
Maybe add more info there: like, page size is a power of two, and each
bit encodes a particular page size.
> + __u64 page_size_bitmap;
> +};
> +
> /**
> * struct drm_panthor_group_priorities_info - Group priorities information
> *
* Re: [PATCH v7 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx
2026-04-15 11:28 ` [PATCH v7 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
@ 2026-04-15 13:11 ` Boris Brezillon
2026-04-15 15:19 ` Steven Price
1 sibling, 0 replies; 23+ messages in thread
From: Boris Brezillon @ 2026-04-15 13:11 UTC (permalink / raw)
To: Adrián Larumbe
Cc: linux-kernel, dri-devel, Steven Price, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter
On Wed, 15 Apr 2026 12:28:46 +0100
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> Instead of passing its constituent elements, pass the whole struct to
> simplify the function prototype.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
> drivers/gpu/drm/panthor/panthor_mmu.c | 27 ++++++++++++++-------------
> 1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 01577be88933..d9e2f8afb8fb 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -1275,9 +1275,7 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
> static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> struct panthor_vm *vm,
> struct panthor_gem_object *bo,
> - u64 offset,
> - u64 size, u64 va,
> - u32 flags)
> + const struct drm_panthor_vm_bind_op *op)
> {
> struct drm_gpuvm_bo *preallocated_vm_bo;
> struct sg_table *sgt = NULL;
> @@ -1286,12 +1284,12 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> if (!bo)
> return -EINVAL;
>
> - if ((flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
> - (flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
> + if ((op->flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
> + (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
> return -EINVAL;
>
> /* Make sure the VA and size are in-bounds. */
> - if (size > bo->base.size || offset > bo->base.size - size)
> + if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
> return -EINVAL;
>
> /* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
> @@ -1299,7 +1297,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> bo->exclusive_vm_root_gem != panthor_vm_root_gem(vm))
> return -EINVAL;
>
> - panthor_vm_init_op_ctx(op_ctx, size, va, flags);
> + panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags);
>
> ret = panthor_vm_op_ctx_prealloc_vmas(op_ctx);
> if (ret)
> @@ -1328,7 +1326,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> }
>
> op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
> - op_ctx->map.bo_offset = offset;
> + op_ctx->map.bo_offset = op->bo_offset;
>
> ret = panthor_vm_op_ctx_prealloc_pts(op_ctx);
> if (ret)
> @@ -2849,10 +2847,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
> gem = drm_gem_object_lookup(file, op->bo_handle);
> ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
> gem ? to_panthor_bo(gem) : NULL,
> - op->bo_offset,
> - op->size,
> - op->va,
> - op->flags);
> + op);
> drm_gem_object_put(gem);
> return ret;
>
> @@ -3048,10 +3043,16 @@ int panthor_vm_bind_exec_sync_op(struct drm_file *file,
> int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo,
> u64 offset, u64 size, u64 va, u32 flags)
> {
> + struct drm_panthor_vm_bind_op op = {
You could even make it const here, but I think you're adjusting some
fields after the initialization in one of the following patches.
> + .bo_offset = offset,
> + .size = size,
> + .va = va,
> + .flags = flags,
> + };
> struct panthor_vm_op_ctx op_ctx;
> int ret;
>
> - ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, offset, size, va, flags);
> + ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
> if (ret)
> return ret;
>
* Re: [PATCH v7 3/6] drm/panthor: Delete spurious whitespace from uAPI header
2026-04-15 11:28 ` [PATCH v7 3/6] drm/panthor: Delete spurious whitespace from uAPI header Adrián Larumbe
@ 2026-04-15 13:41 ` Boris Brezillon
2026-04-15 15:19 ` Steven Price
1 sibling, 0 replies; 23+ messages in thread
From: Boris Brezillon @ 2026-04-15 13:41 UTC (permalink / raw)
To: Adrián Larumbe
Cc: linux-kernel, dri-devel, Steven Price, kernel, Daniel Almeida,
Alice Ryhl, Liviu Dudau, David Airlie, Simona Vetter,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann
On Wed, 15 Apr 2026 12:28:47 +0100
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> There's no extra blank line after the last member of any other uAPI
> structure, so delete this one.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
> include/uapi/drm/panthor_drm.h | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
> index dc2704fc2829..42c901ebdb7a 100644
> --- a/include/uapi/drm/panthor_drm.h
> +++ b/include/uapi/drm/panthor_drm.h
> @@ -677,7 +677,6 @@ struct drm_panthor_vm_bind_op {
> * This array shall not be empty for sync-only operations.
> */
> struct drm_panthor_obj_array syncs;
> -
> };
>
> /**
* Re: [PATCH v7 4/6] drm/panthor: Remove unused operation context field
2026-04-15 11:28 ` [PATCH v7 4/6] drm/panthor: Remove unused operation context field Adrián Larumbe
@ 2026-04-15 13:41 ` Boris Brezillon
2026-04-15 15:20 ` Steven Price
1 sibling, 0 replies; 23+ messages in thread
From: Boris Brezillon @ 2026-04-15 13:41 UTC (permalink / raw)
To: Adrián Larumbe
Cc: linux-kernel, dri-devel, Steven Price, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter
On Wed, 15 Apr 2026 12:28:48 +0100
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> A Panthor BO's sgtable is now retrieved from its dmap field.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
> drivers/gpu/drm/panthor/panthor_mmu.c | 8 --------
> 1 file changed, 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index d9e2f8afb8fb..cea78e5f0591 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -195,14 +195,6 @@ struct panthor_vm_op_ctx {
> /** @map.bo_offset: Offset in the buffer object. */
> u64 bo_offset;
>
> - /**
> - * @map.sgt: sg-table pointing to pages backing the GEM object.
> - *
> - * This is gathered at job creation time, such that we don't have
> - * to allocate in ::run_job().
> - */
> - struct sg_table *sgt;
> -
> /** @map.bo: the BO being mapped. */
> struct panthor_gem_object *bo;
> } map;
* Re: [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9
2026-04-15 11:28 ` [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9 Adrián Larumbe
@ 2026-04-15 13:54 ` Jani Nikula
2026-04-15 14:01 ` Adrián Larumbe
2026-04-15 15:26 ` Boris Brezillon
2026-04-15 15:22 ` Boris Brezillon
1 sibling, 2 replies; 23+ messages in thread
From: Jani Nikula @ 2026-04-15 13:54 UTC (permalink / raw)
To: Adrián Larumbe, linux-kernel
Cc: dri-devel, Steven Price, Boris Brezillon, kernel,
Adrián Larumbe, Liviu Dudau, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter
On Wed, 15 Apr 2026, Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> Bump the driver version to reflect the new MMU info query ioctl
> parameter and the VM_BIND map sparse flag.
You're not actually bumping the version, just adding to the comment.
Does the version actually work for you for checking stuff? Do you have
userspace checking it? Should it be some capability thing instead?
BR,
Jani.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
> drivers/gpu/drm/panthor/panthor_drv.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
> index a8090bc4e33c..c8fb63ede62c 100644
> --- a/drivers/gpu/drm/panthor/panthor_drv.c
> +++ b/drivers/gpu/drm/panthor/panthor_drv.c
> @@ -1787,6 +1787,8 @@ static void panthor_debugfs_init(struct drm_minor *minor)
> * - adds DRM_IOCTL_PANTHOR_BO_QUERY_INFO ioctl
> * - adds drm_panthor_gpu_info::selected_coherency
> * - 1.8 - extends DEV_QUERY_TIMESTAMP_INFO with flags
> + * - 1.9 - adds DRM_PANTHOR_DEV_QUERY_MMU_INFO query
> + * - adds DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE flag
> */
> static const struct drm_driver panthor_drm_driver = {
> .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
--
Jani Nikula, Intel
* Re: [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9
2026-04-15 13:54 ` Jani Nikula
@ 2026-04-15 14:01 ` Adrián Larumbe
2026-04-15 16:20 ` Jani Nikula
2026-04-15 15:26 ` Boris Brezillon
1 sibling, 1 reply; 23+ messages in thread
From: Adrián Larumbe @ 2026-04-15 14:01 UTC (permalink / raw)
To: Jani Nikula
Cc: linux-kernel, dri-devel, Steven Price, Boris Brezillon, kernel,
Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
David Airlie, Simona Vetter
Hi Jani,
On 15.04.2026 16:54, Jani Nikula wrote:
> On Wed, 15 Apr 2026, Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> > Bump the driver version to reflect the new MMU info query ioctl
> > parameter and the VM_BIND map sparse flag.
>
> You're not actually bumping the version, just adding to the comment.
Forgot to increase the driver's minor revision number, thanks for the catch.
> Does the version actually work for you for checking stuff? Do you have
> userspace checking it? Should it be some capability thing instead?
We absolutely need it because the UMD must stay compatible with previous kernel
versions. In the case of sparse mappings, that means the old method of creating the
dummy BO in user mode and then having the Mesa driver do repeated mappings over it
through successive VM_BIND ioctls must remain available.
> BR,
> Jani.
>
> >
> > Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> > ---
> > drivers/gpu/drm/panthor/panthor_drv.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
> > index a8090bc4e33c..c8fb63ede62c 100644
> > --- a/drivers/gpu/drm/panthor/panthor_drv.c
> > +++ b/drivers/gpu/drm/panthor/panthor_drv.c
> > @@ -1787,6 +1787,8 @@ static void panthor_debugfs_init(struct drm_minor *minor)
> > * - adds DRM_IOCTL_PANTHOR_BO_QUERY_INFO ioctl
> > * - adds drm_panthor_gpu_info::selected_coherency
> > * - 1.8 - extends DEV_QUERY_TIMESTAMP_INFO with flags
> > + * - 1.9 - adds DRM_PANTHOR_DEV_QUERY_MMU_INFO query
> > + * - adds DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE flag
> > */
> > static const struct drm_driver panthor_drm_driver = {
> > .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
>
> --
> Jani Nikula, Intel
Adrian Larumbe
* Re: [PATCH v7 5/6] drm/panthor: Support sparse mappings
2026-04-15 11:28 ` [PATCH v7 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
@ 2026-04-15 15:12 ` Boris Brezillon
2026-04-15 22:09 ` Adrián Larumbe
0 siblings, 1 reply; 23+ messages in thread
From: Boris Brezillon @ 2026-04-15 15:12 UTC (permalink / raw)
To: Adrián Larumbe
Cc: linux-kernel, dri-devel, Steven Price, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Daniel Almeida, Alice Ryhl
On Wed, 15 Apr 2026 12:28:49 +0100
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> Allow UM to bind sparsely populated memory regions by cyclically mapping
> virtual ranges over a kernel-allocated dummy BO. This alternative is
> preferable to the old method of handling sparseness in the UMD, which
> relied on creating a buffer object to the same end, despite the fact that
> Vulkan sparse resources don't need to be backed by a driver BO.
>
> The choice of backing sparsely-bound regions with a Panthor BO was made so
> as to benefit from the existing shrinker reclaim code. That way, no special
> treatment must be given to the dummy sparse BOs when reclaiming memory, as
> would be the case with a raw kernel page implementation.
>
> A new dummy BO is allocated per open file context because, even though the
> Vulkan spec mandates that writes into sparsely bound regions be discarded,
> our implementation is a workaround for the fact that Mali CSF GPUs cannot
> support this behaviour at the hardware level, so writes still land in the
> backing BO. If the dummy BO were global, it could become an avenue for
> information leaks between file contexts, which should never happen in DRM.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
> drivers/gpu/drm/panthor/panthor_gem.c | 35 +++++
> drivers/gpu/drm/panthor/panthor_gem.h | 2 +
> drivers/gpu/drm/panthor/panthor_mmu.c | 192 ++++++++++++++++++++++----
> include/uapi/drm/panthor_drm.h | 12 ++
> 4 files changed, 215 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> index 13295d7a593d..e27251ef113b 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.c
> +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> @@ -1345,6 +1345,41 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
> return ERR_PTR(ret);
> }
>
> +/**
> + * panthor_dummy_bo_create() - Create a Panthor BO meant to back sparse bindings.
> + * @ptdev: Device.
> + *
> + * Return: A valid pointer in case of success, an ERR_PTR() otherwise.
> + */
> +struct panthor_gem_object *
> +panthor_dummy_bo_create(struct panthor_device *ptdev)
> +{
> + u32 dummy_flags = DRM_PANTHOR_BO_NO_MMAP;
> + struct panthor_gem_object *bo;
> + struct page **pages;
> +
> + bo = panthor_gem_create(&ptdev->base, SZ_2M, dummy_flags, NULL, 0);
> + if (IS_ERR_OR_NULL(bo))
> + return bo;
> +
> + pages = drm_gem_get_pages(&bo->base);
Why not use panthor_gem_backing_get_pages_locked() here? Also,
drm_gem_get_pages() doesn't give any guarantee that you'll get a huge
page, nor can you guarantee that the 2M won't be reclaimed and later
on be re-allocated as 4k chunks. I'd probably keep things simple for
now, and
- keep it a 2M GEM object
- force the page allocation at map time, just like we do for regular BOs
> + if (PTR_ERR(pages) == -ENOMEM) {
> + drm_gem_object_put(&bo->base);
> + bo = panthor_gem_create(&ptdev->base, SZ_4K, dummy_flags, NULL, 0);
> + if (IS_ERR_OR_NULL(bo))
> + return bo;
> + pages = drm_gem_get_pages(&bo->base);
> + }
> +
> + if (IS_ERR_OR_NULL(pages)) {
> + drm_gem_object_put(&bo->base);
> + return ERR_CAST(pages);
> + }
> +
> + bo->backing.pages = pages;
> + return bo;
> +}
> +
> static bool can_swap(void)
> {
> return get_nr_swap_pages() > 0;
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> index ae0491d0b121..dcf9cdd51d93 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.h
> +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> @@ -264,6 +264,8 @@ void panthor_gem_kernel_bo_set_label(struct panthor_kernel_bo *bo, const char *l
> int panthor_gem_sync(struct drm_gem_object *obj,
> u32 type, u64 offset, u64 size);
>
> +struct panthor_gem_object *panthor_dummy_bo_create(struct panthor_device *ptdev);
> +
> struct drm_gem_object *
> panthor_gem_prime_import(struct drm_device *dev,
> struct dma_buf *dma_buf);
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index cea78e5f0591..6585fd6b5d04 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -112,6 +112,23 @@ struct panthor_mmu {
> struct panthor_vm_pool {
> /** @xa: Array used for VM handle tracking. */
> struct xarray xa;
> +
> + /** @dummy: Dummy drm object related fields
/**
* @dummy: Dummy drm object related fields.
> + *
> + * Sparse bindings map virtual address ranges onto a dummy
> + * BO in a modulo fashion. Even though sparse writes are meant
> + * to be discarded and reads undefined, writes are still reflected
> + * in the dummy buffer. That means we must keep a dummy object per
> + * file context, to avoid data leaks between them.
> + *
> + */
> + struct {
> + /** @dummy.obj: Dummy object used for sparse mappings. */
> + struct panthor_gem_object *obj;
> +
> + /** @dummy.lock: Lock protecting against races on dummy object. */
> + struct mutex lock;
> + } dummy;
> };
>
> /**
> @@ -391,6 +408,15 @@ struct panthor_vm {
> */
> struct list_head lru_node;
> } reclaim;
> +
> + /** @dummy: Dummy object used for sparse mappings.
/**
* @dummy: Dummy object used for sparse mappings.
> + *
> + * VM's must keep a reference to the file context-wide dummy BO because
> + * they can outlive the file context, which includes the VM pool holding
> + * the original dummy BO reference.
> + *
> + */
> + struct panthor_gem_object *dummy;
> };
>
> /**
> @@ -1020,6 +1046,46 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
> return 0;
> }
>
> +static int
> +panthor_vm_map_sparse(struct panthor_vm *vm, u64 iova, int prot,
> + struct sg_table *sgt, u64 size)
> +{
> + u64 first_iova = iova;
s/first_iova/orig_iova/
> + u64 first_size = size;
> + int ret;
> +
> + if (iova & (SZ_2M - 1)) {
> + u64 unaligned_size = min(ALIGN(iova, SZ_2M) - iova, size);
> +
> + ret = panthor_vm_map_pages(vm, iova, prot, sgt,
> + 0, unaligned_size);
> + if (ret)
> + return ret;
> +
> + size -= unaligned_size;
> + iova += unaligned_size;
> + }
> +
> + /* TODO: we should probably optimize this at the io_pgtable level. */
> + while (size > 0) {
> + u64 next_size = min(size, sg_dma_len(sgt->sgl));
> +
> + ret = panthor_vm_map_pages(vm, iova, prot,
> + sgt, 0, next_size);
> + if (ret)
> + goto err_unmap;
> +
> + size -= next_size;
> + iova += next_size;
> + }
> +
> + return 0;
> +
> +err_unmap:
> + panthor_vm_unmap_pages(vm, first_iova, first_size - size);
If you do:
panthor_vm_unmap_pages(vm, orig_iova, iova - orig_iova);
you can get rid of the first_size variable.
> + return ret;
> +}
> +
> static int flags_to_prot(u32 flags)
> {
> int prot = 0;
> @@ -1258,38 +1324,71 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
> return 0;
> }
>
> +static struct panthor_gem_object *
> +panthor_vm_get_dummy_obj(struct panthor_vm_pool *pool,
> + struct panthor_vm *vm)
> +{
> + scoped_guard(mutex, &pool->dummy.lock) {
> + if (!vm->dummy) {
> + if (!pool->dummy.obj) {
> + struct panthor_gem_object *obj =
> + panthor_dummy_bo_create(vm->ptdev);
> + if (IS_ERR(obj))
> + return obj;
> +
> + pool->dummy.obj = obj;
> + }
> +
> + drm_gem_object_get(&pool->dummy.obj->base);
> + vm->dummy = pool->dummy.obj;
> + }
> + }
The lock is taken for the whole function scope, so you can simply use
guard(mutex)() and get rid of two indentation levels:
guard(mutex)(&pool->dummy.lock);
if (vm->dummy)
return vm->dummy;
if (!pool->dummy.obj) {
struct panthor_gem_object *obj;
obj = panthor_dummy_bo_create(vm->ptdev);
if (IS_ERR(obj))
return obj;
pool->dummy.obj = obj;
}
drm_gem_object_get(&pool->dummy.obj->base);
vm->dummy = pool->dummy.obj;
return vm->dummy;
> +
> + return vm->dummy;
> +}
> +
> #define PANTHOR_VM_BIND_OP_MAP_FLAGS \
> (DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
> DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
> DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> + DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE | \
> DRM_PANTHOR_VM_BIND_OP_TYPE_MASK)
>
> static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> + struct panthor_vm_pool *pool,
Can't we just make sure vm->dummy is allocated before
panthor_vm_prepare_map_op_ctx() is called when this is
a sparse map request? That would avoid the conditional check on
pool != NULL (which only matters when is_sparse=true), and you
wouldn't have to pass the pool around.
> struct panthor_vm *vm,
> struct panthor_gem_object *bo,
> const struct drm_panthor_vm_bind_op *op)
> {
> + bool is_sparse = op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
> struct drm_gpuvm_bo *preallocated_vm_bo;
> struct sg_table *sgt = NULL;
> int ret;
>
> - if (!bo)
> - return -EINVAL;
> -
> if ((op->flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
> (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
> return -EINVAL;
>
> /* Make sure the VA and size are in-bounds. */
> - if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
> + if (bo && (is_sparse || op->size > bo->base.size ||
> + op->bo_offset > bo->base.size - op->size))
> return -EINVAL;
> + else if (is_sparse && (!pool || op->bo_handle || op->bo_offset))
> + return -EINVAL;
> +
> + if (is_sparse) {
> + bo = panthor_vm_get_dummy_obj(pool, vm);
Actually, you assign bo here, so you might as well just pass the dummy
BO to panthor_vm_prepare_map_op_ctx() and keep the
if (!bo)
return -EINVAL;
check.
As a side note, if gpuva.gem.obj != NULL for sparse mappings, it messes up
with the can_merge checks done by gpuvm, which is not a problem right now
because we simply ignore the .keep hint passed to unmap_op, but that's
probably worth a comment somewhere.
> + if (IS_ERR_OR_NULL(bo))
> + return PTR_ERR(bo);
> + }
>
> /* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
> if (bo->exclusive_vm_root_gem &&
> bo->exclusive_vm_root_gem != panthor_vm_root_gem(vm))
> return -EINVAL;
>
> - panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags);
> + panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags
> + | ((is_sparse) ? DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC : 0));
I would actually enforce NOEXEC is set and return EINVAL if
that's not the case.
>
> ret = panthor_vm_op_ctx_prealloc_vmas(op_ctx);
> if (ret)
> @@ -1634,6 +1733,13 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
> xa_for_each(&pfile->vms->xa, i, vm)
> panthor_vm_destroy(vm);
>
> + scoped_guard(mutex, &pfile->vms->dummy.lock) {
> + struct panthor_gem_object *bo = pfile->vms->dummy.obj;
> +
> + if (bo)
> + drm_gem_object_put(&bo->base);
> + }
Missing
mutex_destroy(&pfile->vms->dummy.lock);
> +
> xa_destroy(&pfile->vms->xa);
> kfree(pfile->vms);
> }
> @@ -1651,6 +1757,8 @@ int panthor_vm_pool_create(struct panthor_file *pfile)
> return -ENOMEM;
>
> xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
> +
> + mutex_init(&pfile->vms->dummy.lock);
> return 0;
> }
>
> @@ -1987,6 +2095,9 @@ static void panthor_vm_free(struct drm_gpuvm *gpuvm)
>
> free_io_pgtable_ops(vm->pgtbl_ops);
>
> + if (vm->dummy)
> + drm_gem_object_put(&vm->dummy->base);
> +
> drm_mm_takedown(&vm->mm);
> kfree(vm);
> }
> @@ -2146,7 +2257,26 @@ static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
> #define PANTHOR_VM_MAP_FLAGS \
> (DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
> DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
> - DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED)
> + DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> + DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> +
> +static int
> +panthor_vm_exec_map_op(struct panthor_vm *vm, u32 flags,
> + const struct drm_gpuva_op_map *op)
> +{
> + struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
> + int prot = flags_to_prot(flags);
> +
> + if (!op->va.range)
> + return 0;
Do we really expect a range of zero here? If not, I'd either drop
the check, or at the very least, make it a drm_WARN_ON_ONCE().
> +
> + if (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> + return panthor_vm_map_sparse(vm, op->va.addr, prot,
> + bo->dmap.sgt, op->va.range);
> +
> + return panthor_vm_map_pages(vm, op->va.addr, prot, bo->dmap.sgt,
> + op->gem.offset, op->va.range);
> +}
>
> static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> {
> @@ -2160,9 +2290,7 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>
> panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
>
> - ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags),
> - op_ctx->map.bo->dmap.sgt, op->map.gem.offset,
> - op->map.va.range);
> + ret = panthor_vm_exec_map_op(vm, vma->flags, &op->map);
> if (ret) {
> panthor_vm_op_ctx_return_vma(op_ctx, vma);
> return ret;
> @@ -2178,13 +2306,15 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> }
>
> static bool
> -iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr)
> +iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr, bool is_sparse)
> {
> struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
> const struct page *pg;
> pgoff_t bo_offset;
>
> bo_offset = addr - op->va.addr + op->gem.offset;
> + if (is_sparse)
> + bo_offset %= bo->base.size;
If this is a sparse mapping, we just have to check the first page
(so bo_offset=0).
> pg = bo->backing.pages[bo_offset >> PAGE_SHIFT];
>
> return folio_size(page_folio(pg)) >= SZ_2M;
> @@ -2194,6 +2324,8 @@ static void
> unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
> u64 *unmap_start, u64 *unmap_range)
> {
> + struct panthor_vma *unmap_vma = container_of(op->unmap->va, struct panthor_vma, base);
> + bool is_sparse = unmap_vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
> u64 aligned_unmap_start, aligned_unmap_end, unmap_end;
>
> unmap_end = *unmap_start + *unmap_range;
> @@ -2205,7 +2337,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
> */
> if (op->prev && aligned_unmap_start < *unmap_start &&
> op->prev->va.addr <= aligned_unmap_start &&
> - iova_mapped_as_huge_page(op->prev, *unmap_start)) {
> + (iova_mapped_as_huge_page(op->prev, *unmap_start, is_sparse))) {
> *unmap_range += *unmap_start - aligned_unmap_start;
> *unmap_start = aligned_unmap_start;
> }
> @@ -2215,7 +2347,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
> */
> if (op->next && aligned_unmap_end > unmap_end &&
> op->next->va.addr + op->next->va.range >= aligned_unmap_end &&
> - iova_mapped_as_huge_page(op->next, unmap_end - 1)) {
> +	    (iova_mapped_as_huge_page(op->next, unmap_end - 1, is_sparse))) {
> *unmap_range += aligned_unmap_end - unmap_end;
> }
> }
> @@ -2251,14 +2383,17 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
> }
>
> if (op->remap.prev) {
> - struct panthor_gem_object *bo = to_panthor_bo(op->remap.prev->gem.obj);
> - u64 offset = op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr;
> - u64 size = op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start;
> + const struct drm_gpuva_op_map map_op = {
> + .va.addr = unmap_start,
> + .va.range =
> + op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start,
> + .gem.obj = op->remap.prev->gem.obj,
> + .gem.offset =
> + op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr,
I believe gem.offset should be forced to zero when this is a
sparse mapping, no? Which makes me think we probably want
gem.obj to be NULL in the sparse case. It shouldn't prevent
reclaim from happening on the dummy BO, because the drm_gpuva
has a separate vm_bo field. Yes, it forces us to add a bunch
of is_sparse checks in a few other places, but I find it
cleaner than pretending this is a regular BO.
> + };
>
> if (!unmap_vma->evicted) {
> - ret = panthor_vm_map_pages(vm, unmap_start,
> - flags_to_prot(unmap_vma->flags),
> - bo->dmap.sgt, offset, size);
> + ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
> if (ret)
> return ret;
> }
> @@ -2269,14 +2404,15 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
> }
>
> if (op->remap.next) {
> - struct panthor_gem_object *bo = to_panthor_bo(op->remap.next->gem.obj);
> - u64 addr = op->remap.next->va.addr;
> - u64 size = unmap_start + unmap_range - op->remap.next->va.addr;
> + const struct drm_gpuva_op_map map_op = {
> + .va.addr = op->remap.next->va.addr,
> + .va.range = unmap_start + unmap_range - op->remap.next->va.addr,
> + .gem.obj = op->remap.next->gem.obj,
> + .gem.offset = op->remap.next->gem.offset,
Same here, I'd rather have gem.obj=NULL and gem.offset=0 when
remapping a portion of a sparse mapping.
> + };
>
> if (!unmap_vma->evicted) {
> - ret = panthor_vm_map_pages(vm, addr, flags_to_prot(unmap_vma->flags),
> - bo->dmap.sgt, op->remap.next->gem.offset,
> - size);
> + ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
> if (ret)
> return ret;
> }
> @@ -2826,6 +2962,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
> const struct drm_panthor_vm_bind_op *op,
> struct panthor_vm_op_ctx *op_ctx)
> {
> + struct panthor_file *pfile = file->driver_priv;
> ssize_t vm_pgsz = panthor_vm_page_size(vm);
> struct drm_gem_object *gem;
> int ret;
> @@ -2837,7 +2974,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
> switch (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) {
> case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
> gem = drm_gem_object_lookup(file, op->bo_handle);
> - ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
> + ret = panthor_vm_prepare_map_op_ctx(op_ctx, pfile->vms, vm,
> gem ? to_panthor_bo(gem) : NULL,
> op);
> drm_gem_object_put(gem);
> @@ -3044,7 +3181,10 @@ int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo
> struct panthor_vm_op_ctx op_ctx;
> int ret;
>
> - ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
> + if (drm_WARN_ON(&vm->ptdev->base, flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE))
> + return -EINVAL;
> +
> + ret = panthor_vm_prepare_map_op_ctx(&op_ctx, NULL, vm, bo, &op);
> if (ret)
> return ret;
>
> diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
> index 42c901ebdb7a..1a9bcfc8f4cd 100644
> --- a/include/uapi/drm/panthor_drm.h
> +++ b/include/uapi/drm/panthor_drm.h
> @@ -614,6 +614,18 @@ enum drm_panthor_vm_bind_op_flags {
> */
> DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED = 1 << 2,
>
> + /**
> + * @DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE: Repeat a BO range
> + *
> + * Only valid with DRM_PANTHOR_VM_BIND_OP_TYPE_MAP.
> + *
> + * When this flag is set, the whole vm_bind range is mapped over a dummy object in a cyclic
> + * fashion, and all GPU reads from addresses in the range return undefined values. This flag
> + * being set means drm_panthor_vm_bind_op:offset and drm_panthor_vm_bind_op::handle must
> + * both be set to 0.
> + */
> + DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE = 1 << 3,
> +
> /**
> * @DRM_PANTHOR_VM_BIND_OP_TYPE_MASK: Mask used to determine the type of operation.
> */
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: [PATCH v7 1/6] drm/panthor: Expose GPU page sizes to UM
2026-04-15 11:28 ` [PATCH v7 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
2026-04-15 13:10 ` Boris Brezillon
@ 2026-04-15 15:19 ` Steven Price
1 sibling, 0 replies; 23+ messages in thread
From: Steven Price @ 2026-04-15 15:19 UTC (permalink / raw)
To: Adrián Larumbe, linux-kernel
Cc: dri-devel, Boris Brezillon, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Daniel Almeida, Alice Ryhl
On 15/04/2026 12:28, Adrián Larumbe wrote:
> In future commits that will implement repeated mappings, only repeat
> values multiple of GPU page sizes will be tolerated. That means these
> values must be made known to UM. Do it through a queriable GPU info
> value.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
> ---
> drivers/gpu/drm/panthor/panthor_device.h | 3 +++
> drivers/gpu/drm/panthor/panthor_drv.c | 8 ++++++++
> drivers/gpu/drm/panthor/panthor_mmu.c | 9 ++++++++-
> include/uapi/drm/panthor_drm.h | 13 +++++++++++++
> 4 files changed, 32 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
> index 5cba272f9b4d..d856a4fe1d61 100644
> --- a/drivers/gpu/drm/panthor/panthor_device.h
> +++ b/drivers/gpu/drm/panthor/panthor_device.h
> @@ -158,6 +158,9 @@ struct panthor_device {
> /** @csif_info: Command stream interface information. */
> struct drm_panthor_csif_info csif_info;
>
> + /** @mmu_info: MMU info */
> + struct drm_panthor_mmu_info mmu_info;
> +
> /** @hw: GPU-specific data. */
> struct panthor_hw *hw;
>
> diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
> index 73fc983dc9b4..a8090bc4e33c 100644
> --- a/drivers/gpu/drm/panthor/panthor_drv.c
> +++ b/drivers/gpu/drm/panthor/panthor_drv.c
> @@ -175,6 +175,7 @@ panthor_get_uobj_array(const struct drm_panthor_obj_array *in, u32 min_stride,
> _Generic(_obj_name, \
> PANTHOR_UOBJ_DECL(struct drm_panthor_gpu_info, tiler_present), \
> PANTHOR_UOBJ_DECL(struct drm_panthor_csif_info, pad), \
> + PANTHOR_UOBJ_DECL(struct drm_panthor_mmu_info, page_size_bitmap), \
> PANTHOR_UOBJ_DECL(struct drm_panthor_timestamp_info, current_timestamp), \
> PANTHOR_UOBJ_DECL(struct drm_panthor_group_priorities_info, pad), \
> PANTHOR_UOBJ_DECL(struct drm_panthor_sync_op, timeline_value), \
> @@ -946,6 +947,10 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
> args->size = sizeof(ptdev->csif_info);
> return 0;
>
> + case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
> + args->size = sizeof(ptdev->mmu_info);
> + return 0;
> +
> case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO:
> args->size = sizeof(timestamp_info);
> return 0;
> @@ -966,6 +971,9 @@ static int panthor_ioctl_dev_query(struct drm_device *ddev, void *data, struct d
> case DRM_PANTHOR_DEV_QUERY_CSIF_INFO:
> return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->csif_info);
>
> + case DRM_PANTHOR_DEV_QUERY_MMU_INFO:
> + return PANTHOR_UOBJ_SET(args->pointer, args->size, ptdev->mmu_info);
> +
> case DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO:
> 		ret = copy_struct_from_user(&timestamp_info,
> sizeof(timestamp_info),
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index bd41c892beb7..01577be88933 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -2769,7 +2769,7 @@ panthor_vm_create(struct panthor_device *ptdev, bool for_mcu,
> refcount_set(&vm->as.active_cnt, 0);
>
> pgtbl_cfg = (struct io_pgtable_cfg) {
> - .pgsize_bitmap = SZ_4K | SZ_2M,
> + .pgsize_bitmap = ptdev->mmu_info.page_size_bitmap,
> .ias = va_bits,
> .oas = pa_bits,
> .coherent_walk = ptdev->coherent,
> @@ -3214,6 +3214,11 @@ static void panthor_mmu_release_wq(struct drm_device *ddev, void *res)
> destroy_workqueue(res);
> }
>
> +static void panthor_mmu_info_init(struct panthor_device *ptdev)
> +{
> + ptdev->mmu_info.page_size_bitmap = SZ_4K | SZ_2M;
> +}
> +
> /**
> * panthor_mmu_init() - Initialize the MMU logic.
> * @ptdev: Device.
> @@ -3226,6 +3231,8 @@ int panthor_mmu_init(struct panthor_device *ptdev)
> struct panthor_mmu *mmu;
> int ret, irq;
>
> + panthor_mmu_info_init(ptdev);
> +
> mmu = drmm_kzalloc(&ptdev->base, sizeof(*mmu), GFP_KERNEL);
> if (!mmu)
> return -ENOMEM;
> diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
> index 0e455d91e77d..dc2704fc2829 100644
> --- a/include/uapi/drm/panthor_drm.h
> +++ b/include/uapi/drm/panthor_drm.h
> @@ -246,6 +246,9 @@ enum drm_panthor_dev_query_type {
> /** @DRM_PANTHOR_DEV_QUERY_CSIF_INFO: Query command-stream interface information. */
> DRM_PANTHOR_DEV_QUERY_CSIF_INFO,
>
> + /** @DRM_PANTHOR_DEV_QUERY_MMU_INFO: Query MMU information. */
> + DRM_PANTHOR_DEV_QUERY_MMU_INFO,
> +
> /** @DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO: Query timestamp information. */
> DRM_PANTHOR_DEV_QUERY_TIMESTAMP_INFO,
>
> @@ -487,6 +490,16 @@ struct drm_panthor_timestamp_info {
> __u64 cpu_timestamp_nsec;
> };
>
> +/**
> + * struct drm_panthor_mmu_info - MMU information
> + *
> + * Structure grouping all queryable information relating to the MMU.
> + */
> +struct drm_panthor_mmu_info {
> + /** @page_size_bitmap: Allowed page sizes */
> + __u64 page_size_bitmap;
> +};
> +
> /**
> * struct drm_panthor_group_priorities_info - Group priorities information
> *
* Re: [PATCH v7 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx
2026-04-15 11:28 ` [PATCH v7 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
2026-04-15 13:11 ` Boris Brezillon
@ 2026-04-15 15:19 ` Steven Price
1 sibling, 0 replies; 23+ messages in thread
From: Steven Price @ 2026-04-15 15:19 UTC (permalink / raw)
To: Adrián Larumbe, linux-kernel
Cc: dri-devel, Boris Brezillon, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter
On 15/04/2026 12:28, Adrián Larumbe wrote:
> Instead of passing its constituent elements, pass the whole struct to
> simplify the function prototype.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
It's a little ugly the hack in panthor_vm_map_bo_range(), but I can't
immediately see a neater solution, so:
Reviewed-by: Steven Price <steven.price@arm.com>
> ---
> drivers/gpu/drm/panthor/panthor_mmu.c | 27 ++++++++++++++-------------
> 1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 01577be88933..d9e2f8afb8fb 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -1275,9 +1275,7 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
> static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> struct panthor_vm *vm,
> struct panthor_gem_object *bo,
> - u64 offset,
> - u64 size, u64 va,
> - u32 flags)
> + const struct drm_panthor_vm_bind_op *op)
> {
> struct drm_gpuvm_bo *preallocated_vm_bo;
> struct sg_table *sgt = NULL;
> @@ -1286,12 +1284,12 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> if (!bo)
> return -EINVAL;
>
> - if ((flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
> - (flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
> + if ((op->flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
> + (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
> return -EINVAL;
>
> /* Make sure the VA and size are in-bounds. */
> - if (size > bo->base.size || offset > bo->base.size - size)
> + if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
> return -EINVAL;
>
> /* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
> @@ -1299,7 +1297,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> bo->exclusive_vm_root_gem != panthor_vm_root_gem(vm))
> return -EINVAL;
>
> - panthor_vm_init_op_ctx(op_ctx, size, va, flags);
> + panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags);
>
> ret = panthor_vm_op_ctx_prealloc_vmas(op_ctx);
> if (ret)
> @@ -1328,7 +1326,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> }
>
> op_ctx->map.vm_bo = drm_gpuvm_bo_obtain_prealloc(preallocated_vm_bo);
> - op_ctx->map.bo_offset = offset;
> + op_ctx->map.bo_offset = op->bo_offset;
>
> ret = panthor_vm_op_ctx_prealloc_pts(op_ctx);
> if (ret)
> @@ -2849,10 +2847,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
> gem = drm_gem_object_lookup(file, op->bo_handle);
> ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
> gem ? to_panthor_bo(gem) : NULL,
> - op->bo_offset,
> - op->size,
> - op->va,
> - op->flags);
> + op);
> drm_gem_object_put(gem);
> return ret;
>
> @@ -3048,10 +3043,16 @@ int panthor_vm_bind_exec_sync_op(struct drm_file *file,
> int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo,
> u64 offset, u64 size, u64 va, u32 flags)
> {
> + struct drm_panthor_vm_bind_op op = {
> + .bo_offset = offset,
> + .size = size,
> + .va = va,
> + .flags = flags,
> + };
> struct panthor_vm_op_ctx op_ctx;
> int ret;
>
> - ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, offset, size, va, flags);
> + ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
> if (ret)
> return ret;
>
* Re: [PATCH v7 3/6] drm/panthor: Delete spurious whitespace from uAPI header
2026-04-15 11:28 ` [PATCH v7 3/6] drm/panthor: Delete spurious whitespace from uAPI header Adrián Larumbe
2026-04-15 13:41 ` Boris Brezillon
@ 2026-04-15 15:19 ` Steven Price
1 sibling, 0 replies; 23+ messages in thread
From: Steven Price @ 2026-04-15 15:19 UTC (permalink / raw)
To: Adrián Larumbe, linux-kernel
Cc: dri-devel, Boris Brezillon, kernel, Daniel Almeida, Alice Ryhl,
Liviu Dudau, David Airlie, Simona Vetter, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann
On 15/04/2026 12:28, Adrián Larumbe wrote:
> There's no extra blank line after the last member of any other uAPI
> structures, so delete it.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
> ---
> include/uapi/drm/panthor_drm.h | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
> index dc2704fc2829..42c901ebdb7a 100644
> --- a/include/uapi/drm/panthor_drm.h
> +++ b/include/uapi/drm/panthor_drm.h
> @@ -677,7 +677,6 @@ struct drm_panthor_vm_bind_op {
> * This array shall not be empty for sync-only operations.
> */
> struct drm_panthor_obj_array syncs;
> -
> };
>
> /**
* Re: [PATCH v7 4/6] drm/panthor: Remove unused operation context field
2026-04-15 11:28 ` [PATCH v7 4/6] drm/panthor: Remove unused operation context field Adrián Larumbe
2026-04-15 13:41 ` Boris Brezillon
@ 2026-04-15 15:20 ` Steven Price
1 sibling, 0 replies; 23+ messages in thread
From: Steven Price @ 2026-04-15 15:20 UTC (permalink / raw)
To: Adrián Larumbe, linux-kernel
Cc: dri-devel, Boris Brezillon, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter
On 15/04/2026 12:28, Adrián Larumbe wrote:
> A Panthor BO's sgtable is now retrieved from its dmap field.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
> ---
> drivers/gpu/drm/panthor/panthor_mmu.c | 8 --------
> 1 file changed, 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index d9e2f8afb8fb..cea78e5f0591 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -195,14 +195,6 @@ struct panthor_vm_op_ctx {
> /** @map.bo_offset: Offset in the buffer object. */
> u64 bo_offset;
>
> - /**
> - * @map.sgt: sg-table pointing to pages backing the GEM object.
> - *
> - * This is gathered at job creation time, such that we don't have
> - * to allocate in ::run_job().
> - */
> - struct sg_table *sgt;
> -
> /** @map.bo: the BO being mapped. */
> struct panthor_gem_object *bo;
> } map;
* Re: [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9
2026-04-15 11:28 ` [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9 Adrián Larumbe
2026-04-15 13:54 ` Jani Nikula
@ 2026-04-15 15:22 ` Boris Brezillon
2026-04-15 15:27 ` Boris Brezillon
1 sibling, 1 reply; 23+ messages in thread
From: Boris Brezillon @ 2026-04-15 15:22 UTC (permalink / raw)
To: Adrián Larumbe
Cc: linux-kernel, dri-devel, Steven Price, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter
On Wed, 15 Apr 2026 12:28:50 +0100
Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> Bump the driver version to reflect the new MMU info query ioctl
> parameter and the VM_BIND map sparse flag.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
> ---
> drivers/gpu/drm/panthor/panthor_drv.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
> index a8090bc4e33c..c8fb63ede62c 100644
> --- a/drivers/gpu/drm/panthor/panthor_drv.c
> +++ b/drivers/gpu/drm/panthor/panthor_drv.c
> @@ -1787,6 +1787,8 @@ static void panthor_debugfs_init(struct drm_minor *minor)
> * - adds DRM_IOCTL_PANTHOR_BO_QUERY_INFO ioctl
> * - adds drm_panthor_gpu_info::selected_coherency
> * - 1.8 - extends DEV_QUERY_TIMESTAMP_INFO with flags
> + * - 1.9 - adds DRM_PANTHOR_DEV_QUERY_MMU_INFO query
> + * - adds DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE flag
> */
> static const struct drm_driver panthor_drm_driver = {
> .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
* Re: [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9
2026-04-15 13:54 ` Jani Nikula
2026-04-15 14:01 ` Adrián Larumbe
@ 2026-04-15 15:26 ` Boris Brezillon
1 sibling, 0 replies; 23+ messages in thread
From: Boris Brezillon @ 2026-04-15 15:26 UTC (permalink / raw)
To: Jani Nikula
Cc: Adrián Larumbe, linux-kernel, dri-devel, Steven Price,
kernel, Liviu Dudau, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, David Airlie, Simona Vetter
On Wed, 15 Apr 2026 16:54:06 +0300
Jani Nikula <jani.nikula@linux.intel.com> wrote:
> On Wed, 15 Apr 2026, Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> > Bump the driver version to reflect the new MMU info query ioctl
> > parameter and the VM_BIND map sparse flag.
>
> Does the version actually work for you for checking stuff? Do you have
> userspace checking it? Should it be some capability thing instead?
Yep, we use versions to check for everything that's not a HW
feature/capability (so basically all SW features).
* Re: [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9
2026-04-15 15:22 ` Boris Brezillon
@ 2026-04-15 15:27 ` Boris Brezillon
0 siblings, 0 replies; 23+ messages in thread
From: Boris Brezillon @ 2026-04-15 15:27 UTC (permalink / raw)
To: Adrián Larumbe
Cc: linux-kernel, dri-devel, Steven Price, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter
On Wed, 15 Apr 2026 17:22:33 +0200
Boris Brezillon <boris.brezillon@collabora.com> wrote:
> On Wed, 15 Apr 2026 12:28:50 +0100
> Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
>
> > Bump the driver version to reflect the new MMU info query ioctl
> > parameter and the VM_BIND map sparse flag.
> >
> > Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
>
> Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
After actually bumping the version number, as pointed out by Jani.
>
> > ---
> > drivers/gpu/drm/panthor/panthor_drv.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
> > index a8090bc4e33c..c8fb63ede62c 100644
> > --- a/drivers/gpu/drm/panthor/panthor_drv.c
> > +++ b/drivers/gpu/drm/panthor/panthor_drv.c
> > @@ -1787,6 +1787,8 @@ static void panthor_debugfs_init(struct drm_minor *minor)
> > * - adds DRM_IOCTL_PANTHOR_BO_QUERY_INFO ioctl
> > * - adds drm_panthor_gpu_info::selected_coherency
> > * - 1.8 - extends DEV_QUERY_TIMESTAMP_INFO with flags
> > + * - 1.9 - adds DRM_PANTHOR_DEV_QUERY_MMU_INFO query
> > + * - adds DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE flag
> > */
> > static const struct drm_driver panthor_drm_driver = {
> > .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
>
* Re: [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9
2026-04-15 14:01 ` Adrián Larumbe
@ 2026-04-15 16:20 ` Jani Nikula
0 siblings, 0 replies; 23+ messages in thread
From: Jani Nikula @ 2026-04-15 16:20 UTC (permalink / raw)
To: Adrián Larumbe
Cc: linux-kernel, dri-devel, Steven Price, Boris Brezillon, kernel,
Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
David Airlie, Simona Vetter
On Wed, 15 Apr 2026, Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
> Hi Jani,
>
> On 15.04.2026 16:54, Jani Nikula wrote:
>> On Wed, 15 Apr 2026, Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
>> > Bump the driver version to reflect the new MMU info query ioctl
>> > parameter and the VM_BIND map sparse flag.
>>
>> You're not actually bumping the version, just adding to the comment.
>
> Forgot to increase the driver's minor revision number, thanks for the catch.
>
>> Does the version actually work for you for checking stuff? Do you have
>> userspace checking it? Should it be some capability thing instead?
>
> We absolutely need it because the UMD must be compatible with previous kernel versions,
> which means in the case of sparse mappings, the old method of creating the dummy BO
> in user mode and then having the Mesa driver do repeated mappings over it through
> successive VM_BIND ioctls must be available.
Okay, I was just wondering as the last time we bumped i915 version was
nearly 20 years ago. ;D
BR,
Jani.
>
>> BR,
>> Jani.
>>
>> >
>> > Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
>> > ---
>> > drivers/gpu/drm/panthor/panthor_drv.c | 2 ++
>> > 1 file changed, 2 insertions(+)
>> >
>> > diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
>> > index a8090bc4e33c..c8fb63ede62c 100644
>> > --- a/drivers/gpu/drm/panthor/panthor_drv.c
>> > +++ b/drivers/gpu/drm/panthor/panthor_drv.c
>> > @@ -1787,6 +1787,8 @@ static void panthor_debugfs_init(struct drm_minor *minor)
>> > * - adds DRM_IOCTL_PANTHOR_BO_QUERY_INFO ioctl
>> > * - adds drm_panthor_gpu_info::selected_coherency
>> > * - 1.8 - extends DEV_QUERY_TIMESTAMP_INFO with flags
>> > + * - 1.9 - adds DRM_PANTHOR_DEV_QUERY_MMU_INFO query
>> > + * - adds DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE flag
>> > */
>> > static const struct drm_driver panthor_drm_driver = {
>> > .driver_features = DRIVER_RENDER | DRIVER_GEM | DRIVER_SYNCOBJ |
>>
>> --
>> Jani Nikula, Intel
>
> Adrian Larumbe
--
Jani Nikula, Intel
* Re: [PATCH v7 5/6] drm/panthor: Support sparse mappings
2026-04-15 15:12 ` Boris Brezillon
@ 2026-04-15 22:09 ` Adrián Larumbe
0 siblings, 0 replies; 23+ messages in thread
From: Adrián Larumbe @ 2026-04-15 22:09 UTC (permalink / raw)
To: Boris Brezillon
Cc: linux-kernel, dri-devel, Steven Price, kernel, Liviu Dudau,
Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
Simona Vetter, Daniel Almeida, Alice Ryhl
Hi Boris,
On 15.04.2026 17:12, Boris Brezillon wrote:
> On Wed, 15 Apr 2026 12:28:49 +0100
> Adrián Larumbe <adrian.larumbe@collabora.com> wrote:
>
> > Allow UM to bind sparsely populated memory regions by cyclically mapping
> > virtual ranges over a kernel-allocated dummy BO. This alternative is
> > preferable to the old method of handling sparseness in the UMD, because it
> > relied on the creation of a buffer object to the same end, despite the fact
> > Vulkan sparse resources don't need to be backed by a driver BO.
> >
> > The choice of backing sparsely-bound regions with a Panthor BO was made so
> > as to profit from the existing shrinker reclaim code. That way no special
> > treatment must be given to the dummy sparse BOs when reclaiming memory, as
> > would be the case if we had chosen a raw kernel page implementation.
> >
> > A new dummy BO is allocated per open file context, because even though the
> > Vulkan spec mandates that writes into sparsely bound regions must be
> > discarded, our implementation is still a workaround over the fact Mali CSF
> > GPUs cannot support this behaviour on the hardware level, so writes still
> > make it into the backing BO. If we had a global one, then it could be a
> > venue for information leaks between file contexts, which should never
> > happen in DRM.
> >
> > Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> > ---
> > drivers/gpu/drm/panthor/panthor_gem.c | 35 +++++
> > drivers/gpu/drm/panthor/panthor_gem.h | 2 +
> > drivers/gpu/drm/panthor/panthor_mmu.c | 192 ++++++++++++++++++++++----
> > include/uapi/drm/panthor_drm.h | 12 ++
> > 4 files changed, 215 insertions(+), 26 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> > index 13295d7a593d..e27251ef113b 100644
> > --- a/drivers/gpu/drm/panthor/panthor_gem.c
> > +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> > @@ -1345,6 +1345,41 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
> > return ERR_PTR(ret);
> > }
> >
> > +/**
> > + * panthor_dummy_bo_create() - Create a Panthor BO meant to back sparse bindings.
> > + * @ptdev: Device.
> > + *
> > + * Return: A valid pointer in case of success, an ERR_PTR() otherwise.
> > + */
> > +struct panthor_gem_object *
> > +panthor_dummy_bo_create(struct panthor_device *ptdev)
> > +{
> > + u32 dummy_flags = DRM_PANTHOR_BO_NO_MMAP;
> > + struct panthor_gem_object *bo;
> > + struct page **pages;
> > +
> > + bo = panthor_gem_create(&ptdev->base, SZ_2M, dummy_flags, NULL, 0);
> > + if (IS_ERR_OR_NULL(bo))
> > + return bo;
> > +
> > + pages = drm_gem_get_pages(&bo->base);
>
> Why not use panthor_gem_backing_get_pages_locked() here? Also,
> drm_gem_get_pages() doesn't give any guarantee that you'll get a huge
> page, nor can you guarantee that the 2M won't be reclaimed and later
> on be re-allocated as 4k chunks. I'd probably keep things simple for
> now, and
> - keep it a 2M GEM object
> - force the page allocation at map time, just like we do for regular BOs
>
> > + if (PTR_ERR(pages) == -ENOMEM) {
> > + drm_gem_object_put(&bo->base);
> > + bo = panthor_gem_create(&ptdev->base, SZ_4K, dummy_flags, NULL, 0);
> > + if (IS_ERR_OR_NULL(bo))
> > + return bo;
> > + pages = drm_gem_get_pages(&bo->base);
> > + }
> > +
> > + if (IS_ERR_OR_NULL(pages)) {
> > + drm_gem_object_put(&bo->base);
> > + return ERR_CAST(pages);
> > + }
> > +
> > + bo->backing.pages = pages;
> > + return bo;
> > +}
> > +
> > static bool can_swap(void)
> > {
> > return get_nr_swap_pages() > 0;
> > diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> > index ae0491d0b121..dcf9cdd51d93 100644
> > --- a/drivers/gpu/drm/panthor/panthor_gem.h
> > +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> > @@ -264,6 +264,8 @@ void panthor_gem_kernel_bo_set_label(struct panthor_kernel_bo *bo, const char *l
> > int panthor_gem_sync(struct drm_gem_object *obj,
> > u32 type, u64 offset, u64 size);
> >
> > +struct panthor_gem_object *panthor_dummy_bo_create(struct panthor_device *ptdev);
> > +
> > struct drm_gem_object *
> > panthor_gem_prime_import(struct drm_device *dev,
> > struct dma_buf *dma_buf);
> > diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> > index cea78e5f0591..6585fd6b5d04 100644
> > --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> > +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> > @@ -112,6 +112,23 @@ struct panthor_mmu {
> > struct panthor_vm_pool {
> > /** @xa: Array used for VM handle tracking. */
> > struct xarray xa;
> > +
> > + /** @dummy: Dummy drm object related fields
>
> /**
> * @dummy: Dummy drm object related fields.
>
> > + *
> > + * Sparse bindings map virtual address ranges onto a dummy
> > + * BO in a modulo fashion. Even though sparse writes are meant
> > + * to be discarded and reads undefined, writes are still reflected
> > + * in the dummy buffer. That means we must keep a dummy object per
> > + * file context, to avoid data leaks between them.
> > + *
> > + */
> > + struct {
> > + /** @dummy.obj: Dummy object used for sparse mappings. */
> > + struct panthor_gem_object *obj;
> > +
> > + /** @dummy.lock: Lock protecting against races on dummy object. */
> > + struct mutex lock;
> > + } dummy;
> > };
> >
> > /**
> > @@ -391,6 +408,15 @@ struct panthor_vm {
> > */
> > struct list_head lru_node;
> > } reclaim;
> > +
> > + /** @dummy: Dummy object used for sparse mappings.
>
> /**
> * @dummy: Dummy object used for sparse mappings.
Thanks for the catch. Do these comment formatting errors usually only show up when I
build the sources with W=1?
> > + *
> > + * VM's must keep a reference to the file context-wide dummy BO because
> > + * they can outlive the file context, which includes the VM pool holding
> > + * the original dummy BO reference.
> > + *
> > + */
> > + struct panthor_gem_object *dummy;
> > };
> >
> > /**
> > @@ -1020,6 +1046,46 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
> > return 0;
> > }
> >
> > +static int
> > +panthor_vm_map_sparse(struct panthor_vm *vm, u64 iova, int prot,
> > + struct sg_table *sgt, u64 size)
> > +{
> > + u64 first_iova = iova;
>
> s/first_iova/orig_iova/
Will do.
> > + u64 first_size = size;
> > + int ret;
> > +
> > + if (iova & (SZ_2M - 1)) {
> > + u64 unaligned_size = min(ALIGN(iova, SZ_2M) - iova, size);
> > +
> > + ret = panthor_vm_map_pages(vm, iova, prot, sgt,
> > + 0, unaligned_size);
> > + if (ret)
> > + return ret;
> > +
> > + size -= unaligned_size;
> > + iova += unaligned_size;
> > + }
> > +
> > + /* TODO: we should probably optimize this at the io_pgtable level. */
> > + while (size > 0) {
> > + u64 next_size = min(size, sg_dma_len(sgt->sgl));
> > +
> > + ret = panthor_vm_map_pages(vm, iova, prot,
> > + sgt, 0, next_size);
> > + if (ret)
> > + goto err_unmap;
> > +
> > + size -= next_size;
> > + iova += next_size;
> > + }
> > +
> > + return 0;
> > +
> > +err_unmap:
> > + panthor_vm_unmap_pages(vm, first_iova, first_size - size);
>
> If you do:
>
> panthor_vm_unmap_pages(vm, orig_iova, iova - orig_iova);
>
> you can get rid of the first_size variable.
Will do.
> > + return ret;
> > +}
> > +
> > static int flags_to_prot(u32 flags)
> > {
> > int prot = 0;
> > @@ -1258,38 +1324,71 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
> > return 0;
> > }
> >
> > +static struct panthor_gem_object *
> > +panthor_vm_get_dummy_obj(struct panthor_vm_pool *pool,
> > + struct panthor_vm *vm)
> > +{
> > + scoped_guard(mutex, &pool->dummy.lock) {
> > + if (!vm->dummy) {
> > + if (!pool->dummy.obj) {
> > + struct panthor_gem_object *obj =
> > + panthor_dummy_bo_create(vm->ptdev);
> > + if (IS_ERR(obj))
> > + return obj;
> > +
> > + pool->dummy.obj = obj;
> > + }
> > +
> > + drm_gem_object_get(&pool->dummy.obj->base);
> > + vm->dummy = pool->dummy.obj;
> > + }
> > + }
>
> The lock is taken for the whole function scope, so you can simply use
> guard(mutex)() and get rid of two indentation levels:
>
> guard(mutex)(&pool->dummy.lock);
> if (vm->dummy)
> return vm->dummy;
>
> if (!pool->dummy.obj) {
> struct panthor_gem_object *obj;
>
> obj = panthor_dummy_bo_create(vm->ptdev);
> if (IS_ERR(obj))
> return obj;
>
> pool->dummy.obj = obj;
> }
>
> drm_gem_object_get(&pool->dummy.obj->base);
> vm->dummy = pool->dummy.obj;
> return vm->dummy;
>
It's cleaner this way, will do.
> > +
> > + return vm->dummy;
> > +}
> > +
> > #define PANTHOR_VM_BIND_OP_MAP_FLAGS \
> > (DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
> > DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
> > DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> > + DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE | \
> > DRM_PANTHOR_VM_BIND_OP_TYPE_MASK)
> >
> > static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> > + struct panthor_vm_pool *pool,
>
> Can't we just make sure vm->dummy is allocated before
> panthor_vm_prepare_map_op_ctx() is called in case this is
> a sparse map request? This would prevent the conditional check on
> pool != NULL, but only when is_sparse=true, and you wouldn't have to
> pass the pool around.
I guess the ideal place for allocating the dummy bo would be panthor_vm_bind_prepare_op_ctx(),
then panthor_vm_prepare_map_op_ctx() can remain untouched.
> > struct panthor_vm *vm,
> > struct panthor_gem_object *bo,
> > const struct drm_panthor_vm_bind_op *op)
> > {
> > + bool is_sparse = op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
> > struct drm_gpuvm_bo *preallocated_vm_bo;
> > struct sg_table *sgt = NULL;
> > int ret;
> >
> > - if (!bo)
> > - return -EINVAL;
> > -
> > if ((op->flags & ~PANTHOR_VM_BIND_OP_MAP_FLAGS) ||
> > (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
> > return -EINVAL;
> >
> > /* Make sure the VA and size are in-bounds. */
> > - if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
> > + if (bo && (is_sparse || op->size > bo->base.size ||
> > + op->bo_offset > bo->base.size - op->size))
> > return -EINVAL;
> > + else if (is_sparse && (!pool || op->bo_handle || op->bo_offset))
> > + return -EINVAL;
> > +
> > + if (is_sparse) {
> > + bo = panthor_vm_get_dummy_obj(pool, vm);
>
> Actually, you assign bo here, so you might as well just pass the dummy
> BO to panthor_vm_prepare_map_op_ctx() and keep the
>
> if (!bo)
> return -EINVAL;
>
> check.
>
> As a side note, if gpuva.gem.obj != NULL for sparse mappings, it messes up
> with the can_merge checks done by gpuvm, which is not a problem right now
> because we simply ignore the .keep hint passed to unmap_op, but that's
> probably worth a comment somewhere.
I can mention that in panthor_gpuva_sm_step_unmap.
> > + if (IS_ERR_OR_NULL(bo))
> > + return PTR_ERR(bo);
> > + }
> >
> > /* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
> > if (bo->exclusive_vm_root_gem &&
> > bo->exclusive_vm_root_gem != panthor_vm_root_gem(vm))
> > return -EINVAL;
> >
> > - panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags);
> > + panthor_vm_init_op_ctx(op_ctx, op->size, op->va, op->flags
> > + | ((is_sparse) ? DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC : 0));
>
> I would actually enforce NOEXEC is set and return EINVAL if
> that's not the case.
Will make it part of the uAPI and check for it inside panthor_vm_prepare_map_op_ctx().
> >
> > ret = panthor_vm_op_ctx_prealloc_vmas(op_ctx);
> > if (ret)
> > @@ -1634,6 +1733,13 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
> > xa_for_each(&pfile->vms->xa, i, vm)
> > panthor_vm_destroy(vm);
> >
> > + scoped_guard(mutex, &pfile->vms->dummy.lock) {
> > + struct panthor_gem_object *bo = pfile->vms->dummy.obj;
> > +
> > + if (bo)
> > + drm_gem_object_put(&bo->base);
> > + }
>
> Missing
>
> mutex_destroy(&pfile->vms->dummy.lock);
>
> > +
> > xa_destroy(&pfile->vms->xa);
> > kfree(pfile->vms);
> > }
> > @@ -1651,6 +1757,8 @@ int panthor_vm_pool_create(struct panthor_file *pfile)
> > return -ENOMEM;
> >
> > xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
> > +
> > + mutex_init(&pfile->vms->dummy.lock);
> > return 0;
> > }
> >
> > @@ -1987,6 +2095,9 @@ static void panthor_vm_free(struct drm_gpuvm *gpuvm)
> >
> > free_io_pgtable_ops(vm->pgtbl_ops);
> >
> > + if (vm->dummy)
> > + drm_gem_object_put(&vm->dummy->base);
> > +
> > drm_mm_takedown(&vm->mm);
> > kfree(vm);
> > }
> > @@ -2146,7 +2257,26 @@ static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
> > #define PANTHOR_VM_MAP_FLAGS \
> > (DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
> > DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
> > - DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED)
> > + DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> > + DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> > +
> > +static int
> > +panthor_vm_exec_map_op(struct panthor_vm *vm, u32 flags,
> > + const struct drm_gpuva_op_map *op)
> > +{
> > + struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
> > + int prot = flags_to_prot(flags);
> > +
> > + if (!op->va.range)
> > + return 0;
>
> Do we really expect a range of zero here? If not, I'd either drop
> the check, or at the very least, make it a drm_WARN_ON_ONCE().
IIRC it can happen when panthor_vm_exec_map_op() is called from panthor_gpuva_sm_step_remap(),
and the remap's unmap didn't have to be expanded to account for a THP. Although in that case,
the check being done inside panthor_vm_map_pages() should be enough.
> > +
> > + if (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> > + return panthor_vm_map_sparse(vm, op->va.addr, prot,
> > + bo->dmap.sgt, op->va.range);
> > +
> > + return panthor_vm_map_pages(vm, op->va.addr, prot, bo->dmap.sgt,
> > + op->gem.offset, op->va.range);
> > +}
> >
> > static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> > {
> > @@ -2160,9 +2290,7 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> >
> > panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
> >
> > - ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags),
> > - op_ctx->map.bo->dmap.sgt, op->map.gem.offset,
> > - op->map.va.range);
> > + ret = panthor_vm_exec_map_op(vm, vma->flags, &op->map);
> > if (ret) {
> > panthor_vm_op_ctx_return_vma(op_ctx, vma);
> > return ret;
> > @@ -2178,13 +2306,15 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
> > }
> >
> > static bool
> > -iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr)
> > +iova_mapped_as_huge_page(struct drm_gpuva_op_map *op, u64 addr, bool is_sparse)
> > {
> > struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
> > const struct page *pg;
> > pgoff_t bo_offset;
> >
> > bo_offset = addr - op->va.addr + op->gem.offset;
> > + if (is_sparse)
> > + bo_offset %= bo->base.size;
>
> If this is a sparse mapping, we just have to check the first page
> (so bo_offset=0).
Will do.
> > pg = bo->backing.pages[bo_offset >> PAGE_SHIFT];
> >
> > return folio_size(page_folio(pg)) >= SZ_2M;
> > @@ -2194,6 +2324,8 @@ static void
> > unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
> > u64 *unmap_start, u64 *unmap_range)
> > {
> > + struct panthor_vma *unmap_vma = container_of(op->unmap->va, struct panthor_vma, base);
> > + bool is_sparse = unmap_vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
> > u64 aligned_unmap_start, aligned_unmap_end, unmap_end;
> >
> > unmap_end = *unmap_start + *unmap_range;
> > @@ -2205,7 +2337,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
> > */
> > if (op->prev && aligned_unmap_start < *unmap_start &&
> > op->prev->va.addr <= aligned_unmap_start &&
> > - iova_mapped_as_huge_page(op->prev, *unmap_start)) {
> > + (iova_mapped_as_huge_page(op->prev, *unmap_start, is_sparse))) {
> > *unmap_range += *unmap_start - aligned_unmap_start;
> > *unmap_start = aligned_unmap_start;
> > }
> > @@ -2215,7 +2347,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
> > */
> > if (op->next && aligned_unmap_end > unmap_end &&
> > op->next->va.addr + op->next->va.range >= aligned_unmap_end &&
> > - iova_mapped_as_huge_page(op->next, unmap_end - 1)) {
> > + (iova_mapped_as_huge_page(op->next, unmap_end - 1, is_sparse))) {
> > *unmap_range += aligned_unmap_end - unmap_end;
> > }
> > }
> > @@ -2251,14 +2383,17 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
> > }
> >
> > if (op->remap.prev) {
> > - struct panthor_gem_object *bo = to_panthor_bo(op->remap.prev->gem.obj);
> > - u64 offset = op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr;
> > - u64 size = op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start;
> > + const struct drm_gpuva_op_map map_op = {
> > + .va.addr = unmap_start,
> > + .va.range =
> > + op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start,
> > + .gem.obj = op->remap.prev->gem.obj,
> > + .gem.offset =
> > + op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr,
>
> I believe it should be forced to zero if this is a sparse
> mapping, no? This makes me think we probably want this to be
> NULL, in the case of a sparse mapping. It shouldn't prevent
> reclaim from happening on the dummy BO, because the drm_gpuva
> has a separate vm_bo field. Yes it forces us to add bunch of
> is_sparse checks in a few other places, but I find it cleaner
> than pretending this is a regular BO.
The .gem.offset field is assigned here unconditionally, but discarded for sparse mappings
when calling panthor_vm_map_sparse() (which takes no offset argument). I assume what you mean is that
in panthor_vm_exec_op(), I should refrain from assigning .map.gem.obj and .map.gem.offset. However,
if I do that, the 'va->vm_bo = drm_gpuvm_bo_get(vm_bo);' assignment will never happen inside drm_gpuva_link().
> > + };
> >
> > if (!unmap_vma->evicted) {
> > - ret = panthor_vm_map_pages(vm, unmap_start,
> > - flags_to_prot(unmap_vma->flags),
> > - bo->dmap.sgt, offset, size);
> > + ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
> > if (ret)
> > return ret;
> > }
> > @@ -2269,14 +2404,15 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
> > }
> >
> > if (op->remap.next) {
> > - struct panthor_gem_object *bo = to_panthor_bo(op->remap.next->gem.obj);
> > - u64 addr = op->remap.next->va.addr;
> > - u64 size = unmap_start + unmap_range - op->remap.next->va.addr;
> > + const struct drm_gpuva_op_map map_op = {
> > + .va.addr = op->remap.next->va.addr,
> > + .va.range = unmap_start + unmap_range - op->remap.next->va.addr,
> > + .gem.obj = op->remap.next->gem.obj,
> > + .gem.offset = op->remap.next->gem.offset,
>
> Same here, I'd rather have gem.obj=NULL and gem.offset=0 when
> remapping a portion of a sparse mapping.
I could do that and then insert warnings in panthor_vm_map_pages() when it's sparse to make
sure these fields are zero-initialised.
> > + };
> >
> > if (!unmap_vma->evicted) {
> > - ret = panthor_vm_map_pages(vm, addr, flags_to_prot(unmap_vma->flags),
> > - bo->dmap.sgt, op->remap.next->gem.offset,
> > - size);
> > + ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
> > if (ret)
> > return ret;
> > }
> > @@ -2826,6 +2962,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
> > const struct drm_panthor_vm_bind_op *op,
> > struct panthor_vm_op_ctx *op_ctx)
> > {
> > + struct panthor_file *pfile = file->driver_priv;
> > ssize_t vm_pgsz = panthor_vm_page_size(vm);
> > struct drm_gem_object *gem;
> > int ret;
> > @@ -2837,7 +2974,7 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
> > switch (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) {
> > case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
> > gem = drm_gem_object_lookup(file, op->bo_handle);
> > - ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
> > + ret = panthor_vm_prepare_map_op_ctx(op_ctx, pfile->vms, vm,
> > gem ? to_panthor_bo(gem) : NULL,
> > op);
> > drm_gem_object_put(gem);
> > @@ -3044,7 +3181,10 @@ int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo
> > struct panthor_vm_op_ctx op_ctx;
> > int ret;
> >
> > - ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
> > + if (drm_WARN_ON(&vm->ptdev->base, flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE))
> > + return -EINVAL;
> > +
> > + ret = panthor_vm_prepare_map_op_ctx(&op_ctx, NULL, vm, bo, &op);
> > if (ret)
> > return ret;
> >
> > diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
> > index 42c901ebdb7a..1a9bcfc8f4cd 100644
> > --- a/include/uapi/drm/panthor_drm.h
> > +++ b/include/uapi/drm/panthor_drm.h
> > @@ -614,6 +614,18 @@ enum drm_panthor_vm_bind_op_flags {
> > */
> > DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED = 1 << 2,
> >
> > + /**
> > + * @DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE: Repeat a BO range
> > + *
> > + * Only valid with DRM_PANTHOR_VM_BIND_OP_TYPE_MAP.
> > + *
> > + * When this flag is set, the whole vm_bind range is mapped over a dummy object in a cyclic
> > + * fashion, and all GPU reads from addresses in the range return undefined values. This flag
> > + * being set means drm_panthor_vm_bind_op::bo_offset and drm_panthor_vm_bind_op::bo_handle
> > + * must both be set to 0.
> > + */
> > + DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE = 1 << 3,
> > +
> > /**
> > * @DRM_PANTHOR_VM_BIND_OP_TYPE_MASK: Mask used to determine the type of operation.
> > */
Adrián Larumbe
Thread overview: 23+ messages
2026-04-15 11:28 [PATCH v7 0/6] Support sparse mappings in Panthor Adrián Larumbe
2026-04-15 11:28 ` [PATCH v7 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
2026-04-15 13:10 ` Boris Brezillon
2026-04-15 15:19 ` Steven Price
2026-04-15 11:28 ` [PATCH v7 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
2026-04-15 13:11 ` Boris Brezillon
2026-04-15 15:19 ` Steven Price
2026-04-15 11:28 ` [PATCH v7 3/6] drm/panthor: Delete spurious whitespace from uAPI header Adrián Larumbe
2026-04-15 13:41 ` Boris Brezillon
2026-04-15 15:19 ` Steven Price
2026-04-15 11:28 ` [PATCH v7 4/6] drm/panthor: Remove unused operation context field Adrián Larumbe
2026-04-15 13:41 ` Boris Brezillon
2026-04-15 15:20 ` Steven Price
2026-04-15 11:28 ` [PATCH v7 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
2026-04-15 15:12 ` Boris Brezillon
2026-04-15 22:09 ` Adrián Larumbe
2026-04-15 11:28 ` [PATCH v7 6/6] drm/panthor: Bump the driver version to 1.9 Adrián Larumbe
2026-04-15 13:54 ` Jani Nikula
2026-04-15 14:01 ` Adrián Larumbe
2026-04-15 16:20 ` Jani Nikula
2026-04-15 15:26 ` Boris Brezillon
2026-04-15 15:22 ` Boris Brezillon
2026-04-15 15:27 ` Boris Brezillon