The Linux Kernel Mailing List
From: Steven Price <steven.price@arm.com>
To: "Adrián Larumbe" <adrian.larumbe@collabora.com>,
	linux-kernel@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org,
	Boris Brezillon <boris.brezillon@collabora.com>,
	kernel@collabora.com, Liviu Dudau <liviu.dudau@arm.com>,
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>,
	Thomas Zimmermann <tzimmermann@suse.de>,
	David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
	Daniel Almeida <daniel.almeida@collabora.com>,
	Alice Ryhl <aliceryhl@google.com>
Subject: Re: [PATCH v11 5/6] drm/panthor: Support sparse mappings
Date: Wed, 13 May 2026 16:46:17 +0100	[thread overview]
Message-ID: <661a6aab-a643-496b-94d2-ae9230df1a54@arm.com> (raw)
In-Reply-To: <20260507214939.2852489-6-adrian.larumbe@collabora.com>

On 07/05/2026 22:49, Adrián Larumbe wrote:
> Allow UM to bind sparsely populated memory regions by cyclically mapping
> virtual ranges over a kernel-allocated dummy BO. This is preferable to the
> old method of handling sparseness in the UMD, which relied on creating a
> buffer object for the same purpose, despite the fact that Vulkan sparse
> resources don't need to be backed by a driver BO.
> 
> The choice of backing sparsely-bound regions with a Panthor BO was made so
> as to profit from the existing shrinker reclaim code. That way no special
> treatment must be given to the dummy sparse BOs when reclaiming memory, as
> would be the case if we had chosen a raw kernel page implementation.

Do you need to fix up the remap_evicted_vma() path though? At the moment
that will go through panthor_vm_map_pages() without doing the
panthor_fix_sparse_map_offset() dance. Also I suspect it won't map the
whole region (just the first 2MB in the sgtable).

Maybe I'm missing something?
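(For reference, a quick Python model — my own sketch, not driver code — of
the 2MiB chunk walk that panthor_vm_map_sparse() performs, and which I'd
expect any remap of an evicted sparse VMA to reproduce so that every
chunk's BO offset stays below 2MiB:)

```python
SZ_2M = 2 * 1024 * 1024

def sparse_chunks(iova, size):
    """Model of the loop in panthor_vm_map_sparse(): walk the VA range
    in pieces that never cross a 2MiB boundary, so each piece maps onto
    the dummy BO at offset (addr % SZ_2M)."""
    mapped = 0
    while mapped < size:
        addr = iova + mapped
        # Clamp the chunk so it ends, at most, at the next 2MiB boundary.
        chunk = min(size - mapped, SZ_2M - (addr & (SZ_2M - 1)))
        yield addr, addr % SZ_2M, chunk
        mapped += chunk
```

A plain panthor_vm_map_pages() call over the same range would instead use a
single monotonically increasing sgtable offset, which (if I read it right)
is why it could only ever cover the first 2MiB of the dummy BO's sgtable.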

Thanks,
Steve

> A new dummy BO is allocated per open file context because, even though the
> Vulkan spec mandates that writes into sparsely bound regions must be
> discarded, our implementation is still a workaround for the fact that Mali
> CSF GPUs cannot support this behaviour at the hardware level, so writes
> still make it into the backing BO. If we had a single global dummy BO, it
> could become an avenue for information leaks between file contexts, which
> should never happen in DRM.
> 
> As a side note, care was taken to adjust dummy BO offsets for sparse
> mappings so that every address in the new VA range is mapped with matching
> 2MiB alignment against the dummy BO.
> 
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
>  drivers/gpu/drm/panthor/panthor_gem.c |  18 +++
>  drivers/gpu/drm/panthor/panthor_gem.h |   2 +
>  drivers/gpu/drm/panthor/panthor_mmu.c | 179 +++++++++++++++++++++++---
>  include/uapi/drm/panthor_drm.h        |  12 ++
>  4 files changed, 190 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> index 13295d7a593d..c798ac2963e1 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.c
> +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> @@ -1345,6 +1345,24 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
>  	return ERR_PTR(ret);
>  }
>  
> +/**
> + * panthor_dummy_bo_create() - Create a Panthor BO meant to back sparse bindings.
> + * @ptdev: Device.
> + *
> + * Return: A valid pointer in case of success, an ERR_PTR() otherwise.
> + */
> +struct panthor_gem_object *
> +panthor_dummy_bo_create(struct panthor_device *ptdev)
> +{
> +	/* Since even when the DRM device's mount point has enabled THP we have no guarantee
> +	 * that drm_gem_get_pages() will return a single 2MiB PMD, and also we cannot be sure
> +	 * that the 2MiB won't be reclaimed and re-allocated later on as 4KiB chunks, it doesn't
> +	 * make sense to pre-populate this object's page array, nor to fall back on a BO size
> +	 * of 4KiB. Sticking to a dummy object size of 2MiB lets us keep things simple for now.
> +	 */
> +	return panthor_gem_create(&ptdev->base, SZ_2M, DRM_PANTHOR_BO_NO_MMAP, NULL, 0);
> +}
> +
>  static bool can_swap(void)
>  {
>  	return get_nr_swap_pages() > 0;
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> index ae0491d0b121..8639c2fa08e6 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.h
> +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> @@ -315,6 +315,8 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
>  
>  void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo);
>  
> +struct panthor_gem_object *panthor_dummy_bo_create(struct panthor_device *ptdev);
> +
>  #ifdef CONFIG_DEBUG_FS
>  void panthor_gem_debugfs_init(struct drm_minor *minor);
>  #endif
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index f54a60cd0ec4..f0425dd70d2c 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -112,6 +112,17 @@ struct panthor_mmu {
>  struct panthor_vm_pool {
>  	/** @xa: Array used for VM handle tracking. */
>  	struct xarray xa;
> +
> +	/**
> +	 * @dummy: Dummy object used for sparse mappings
> +	 *
> +	 * Sparse bindings map virtual address ranges onto a dummy
> +	 * BO in a modulo fashion. Even though sparse writes are meant
> +	 * to be discarded and reads undefined, writes are still reflected
> +	 * in the dummy buffer. That means we must keep a dummy object per
> +	 * file context, to avoid data leaks between them.
> +	 */
> +	struct panthor_gem_object *dummy;
>  };
>  
>  /**
> @@ -391,6 +402,15 @@ struct panthor_vm {
>  		 */
>  		struct list_head lru_node;
>  	} reclaim;
> +
> +	/**
> +	 * @dummy: Dummy object used for sparse mappings.
> +	 *
> +	 * VMs must keep a reference to the file context-wide dummy BO because
> +	 * they can outlive the file context, which includes the VM pool holding
> +	 * the original dummy BO reference.
> +	 */
> +	struct panthor_gem_object *dummy;
>  };
>  
>  /**
> @@ -1020,6 +1040,30 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
>  	return 0;
>  }
>  
> +static int
> +panthor_vm_map_sparse(struct panthor_vm *vm, u64 iova, int prot,
> +		      struct sg_table *sgt, u64 size)
> +{
> +	u64 mapped = 0;
> +	int ret;
> +
> +	while (mapped < size) {
> +		u64 addr = iova + mapped;
> +		u32 chunk_size = min(size - mapped, SZ_2M - (addr & (SZ_2M - 1)));
> +
> +		ret = panthor_vm_map_pages(vm, addr, prot, sgt,
> +					   addr % SZ_2M, chunk_size);
> +		if (ret) {
> +			panthor_vm_unmap_pages(vm, iova, mapped);
> +			return ret;
> +		}
> +
> +		mapped += chunk_size;
> +	}
> +
> +	return 0;
> +}
> +
>  static int flags_to_prot(u32 flags)
>  {
>  	int prot = 0;
> @@ -1262,6 +1306,7 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
>  	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
>  	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
>  	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> +	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE | \
>  	 DRM_PANTHOR_VM_BIND_OP_TYPE_MASK)
>  
>  static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> @@ -1269,6 +1314,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  					 struct panthor_gem_object *bo,
>  					 const struct drm_panthor_vm_bind_op *op)
>  {
> +	bool is_sparse = op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
>  	struct drm_gpuvm_bo *preallocated_vm_bo;
>  	struct sg_table *sgt = NULL;
>  	int ret;
> @@ -1280,8 +1326,21 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	    (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
>  		return -EINVAL;
>  
> -	/* Make sure the VA and size are in-bounds. */
> -	if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
> +	/* uAPI mandates sparsely bound regions must not be executable. */
> +	if (is_sparse && !(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC))
> +		return -EINVAL;
> +
> +	/* For non-sparse, make sure the VA and size are in-bounds.
> +	 * For sparse, this is not applicable, because the dummy BO is
> +	 * repeatedly mapped over a potentially wider VA range.
> +	 */
> +	if (!is_sparse && (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size))
> +		return -EINVAL;
> +
> +	/* For sparse, we don't expect any user BO, the BO we get passed
> +	 * is the dummy BO attached to the VM pool.
> +	 */
> +	if (is_sparse && (op->bo_handle || op->bo_offset))
>  		return -EINVAL;
>  
>  	/* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
> @@ -1430,7 +1489,9 @@ panthor_vm_get_bo_for_va(struct panthor_vm *vm, u64 va, u64 *bo_offset)
>  	if (vma && vma->base.gem.obj) {
>  		drm_gem_object_get(vma->base.gem.obj);
>  		bo = to_panthor_bo(vma->base.gem.obj);
> -		*bo_offset = vma->base.gem.offset + (va - vma->base.va.addr);
> +		*bo_offset = !(vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE) ?
> +			vma->base.gem.offset + (va - vma->base.va.addr) :
> +			va & (SZ_2M - 1);
>  	}
>  	mutex_unlock(&vm->op_lock);
>  
> @@ -1543,6 +1604,9 @@ int panthor_vm_pool_create_vm(struct panthor_device *ptdev,
>  		return ret;
>  	}
>  
> +	drm_gem_object_get(&pool->dummy->base);
> +	vm->dummy = pool->dummy;
> +
>  	args->user_va_range = kernel_va_start;
>  	return id;
>  }
> @@ -1634,6 +1698,8 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
>  	xa_for_each(&pfile->vms->xa, i, vm)
>  		panthor_vm_destroy(vm);
>  
> +	if (pfile->vms->dummy)
> +		drm_gem_object_put(&pfile->vms->dummy->base);
>  	xa_destroy(&pfile->vms->xa);
>  	kfree(pfile->vms);
>  }
> @@ -1646,12 +1712,28 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
>   */
>  int panthor_vm_pool_create(struct panthor_file *pfile)
>  {
> +	struct panthor_gem_object *dummy;
> +	int ret;
> +
>  	pfile->vms = kzalloc_obj(*pfile->vms);
>  	if (!pfile->vms)
>  		return -ENOMEM;
>  
>  	xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
> +
> +	dummy = panthor_dummy_bo_create(pfile->ptdev);
> +	if (IS_ERR(dummy)) {
> +		ret = PTR_ERR(dummy);
> +		goto err_destroy_vm_pool;
> +	}
> +
> +	pfile->vms->dummy = dummy;
> +
>  	return 0;
> +
> +err_destroy_vm_pool:
> +	panthor_vm_pool_destroy(pfile);
> +	return ret;
>  }
>  
>  /* dummy TLB ops, the real TLB flush happens in panthor_vm_flush_range() */
> @@ -1987,6 +2069,9 @@ static void panthor_vm_free(struct drm_gpuvm *gpuvm)
>  
>  	free_io_pgtable_ops(vm->pgtbl_ops);
>  
> +	if (vm->dummy)
> +		drm_gem_object_put(&vm->dummy->base);
> +
>  	drm_mm_takedown(&vm->mm);
>  	kfree(vm);
>  }
> @@ -2146,7 +2231,30 @@ static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
>  #define PANTHOR_VM_MAP_FLAGS \
>  	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
>  	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
> -	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED)
> +	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> +	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> +
> +static void
> +panthor_fix_sparse_map_offset(struct drm_gpuva_op_map *op, u32 flags)
> +{
> +	if (op && (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE))
> +		op->gem.offset = op->va.addr & (SZ_2M - 1);
> +}
> +
> +static int
> +panthor_vm_exec_map_op(struct panthor_vm *vm, u32 flags,
> +		       const struct drm_gpuva_op_map *op)
> +{
> +	struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
> +	int prot = flags_to_prot(flags);
> +
> +	if (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> +		return panthor_vm_map_sparse(vm, op->va.addr, prot,
> +					     bo->dmap.sgt, op->va.range);
> +
> +	return panthor_vm_map_pages(vm, op->va.addr, prot, bo->dmap.sgt,
> +				    op->gem.offset, op->va.range);
> +}
>  
>  static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  {
> @@ -2159,10 +2267,9 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  		return -EINVAL;
>  
>  	panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
> +	panthor_fix_sparse_map_offset(&op->map, vma->flags);
>  
> -	ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags),
> -				   op_ctx->map.bo->dmap.sgt, op->map.gem.offset,
> -				   op->map.va.range);
> +	ret = panthor_vm_exec_map_op(vm, vma->flags, &op->map);
>  	if (ret) {
>  		panthor_vm_op_ctx_return_vma(op_ctx, vma);
>  		return ret;
> @@ -2194,6 +2301,8 @@ static void
>  unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
>  		     u64 *unmap_start, u64 *unmap_range)
>  {
> +	struct panthor_vma *unmap_vma = container_of(op->unmap->va, struct panthor_vma, base);
> +	bool is_sparse = unmap_vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
>  	u64 aligned_unmap_start, aligned_unmap_end, unmap_end;
>  
>  	unmap_end = *unmap_start + *unmap_range;
> @@ -2201,11 +2310,15 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
>  	aligned_unmap_end = ALIGN(unmap_end, SZ_2M);
>  
>  	/* If we're dealing with a huge page, make sure the unmap region is
> -	 * aligned on the start of the page.
> +	 * aligned on the start of the page. If the unmapped VMA stands for
> +	 * a sparse mapping, always assume the backing storage is a THP, since
> +	 * the overhead of unmapping 2MiB worth of 4KiB pages and remapping
> +	 * some of them is offset by the logic of working out whether it's
> +	 * the opposite case right below. This also holds true for op->next.
>  	 */
>  	if (op->prev && aligned_unmap_start < *unmap_start &&
>  	    op->prev->va.addr <= aligned_unmap_start &&
> -	    iova_mapped_as_huge_page(op->prev, *unmap_start)) {
> +	    (is_sparse || iova_mapped_as_huge_page(op->prev, *unmap_start))) {
>  		*unmap_range += *unmap_start - aligned_unmap_start;
>  		*unmap_start = aligned_unmap_start;
>  	}
> @@ -2215,7 +2328,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
>  	 */
>  	if (op->next && aligned_unmap_end > unmap_end &&
>  	    op->next->va.addr + op->next->va.range >= aligned_unmap_end &&
> -	    iova_mapped_as_huge_page(op->next, unmap_end - 1)) {
> +	    (is_sparse || iova_mapped_as_huge_page(op->next, unmap_end - 1))) {
>  		*unmap_range += aligned_unmap_end - unmap_end;
>  	}
>  }
> @@ -2232,6 +2345,11 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  
>  	drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range);
>  
> +	/* op->remap.prev's BO offset is always the same as the unmap va's, but
> +	 * that of op->remap.next must be adjusted so as to remain < SZ_2M
> +	 */
> +	panthor_fix_sparse_map_offset(op->remap.next, unmap_vma->flags);
> +
>  	/*
>  	 * ARM IOMMU page table management code disallows partial unmaps of huge pages,
>  	 * so when a partial unmap is requested, we must first unmap the entire huge
> @@ -2251,14 +2369,19 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  	}
>  
>  	if (op->remap.prev) {
> -		struct panthor_gem_object *bo = to_panthor_bo(op->remap.prev->gem.obj);
>  		u64 offset = op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr;
>  		u64 size = op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start;
>  
> -		if (!unmap_vma->evicted) {
> -			ret = panthor_vm_map_pages(vm, unmap_start,
> -						   flags_to_prot(unmap_vma->flags),
> -						   bo->dmap.sgt, offset, size);
> +		if (!unmap_vma->evicted && size > 0) {
> +			struct drm_gpuva_op_map map_op = {
> +				.va.addr = unmap_start,
> +				.va.range = size,
> +				.gem.obj = op->remap.prev->gem.obj,
> +				.gem.offset = offset,
> +			};
> +			panthor_fix_sparse_map_offset(&map_op, unmap_vma->flags);
> +
> +			ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
>  			if (ret)
>  				return ret;
>  		}
> @@ -2269,14 +2392,19 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  	}
>  
>  	if (op->remap.next) {
> -		struct panthor_gem_object *bo = to_panthor_bo(op->remap.next->gem.obj);
>  		u64 addr = op->remap.next->va.addr;
>  		u64 size = unmap_start + unmap_range - op->remap.next->va.addr;
>  
> -		if (!unmap_vma->evicted) {
> -			ret = panthor_vm_map_pages(vm, addr, flags_to_prot(unmap_vma->flags),
> -						   bo->dmap.sgt, op->remap.next->gem.offset,
> -						   size);
> +		if (!unmap_vma->evicted && size > 0) {
> +			struct drm_gpuva_op_map map_op = {
> +				.va.addr = addr,
> +				.va.range = size,
> +				.gem.obj = op->remap.next->gem.obj,
> +				.gem.offset = op->remap.next->gem.offset,
> +			};
> +			panthor_fix_sparse_map_offset(&map_op, unmap_vma->flags);
> +
> +			ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
>  			if (ret)
>  				return ret;
>  		}
> @@ -2835,7 +2963,13 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
>  
>  	switch (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) {
>  	case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
> -		gem = drm_gem_object_lookup(file, op->bo_handle);
> +		if (!(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)) {
> +			gem = drm_gem_object_lookup(file, op->bo_handle);
> +		} else {
> +			gem = &vm->dummy->base;
> +			drm_gem_object_get(&vm->dummy->base);
> +		}
> +
>  		ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
>  						    gem ? to_panthor_bo(gem) : NULL,
>  						    op);
> @@ -3043,6 +3177,9 @@ int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo
>  	struct panthor_vm_op_ctx op_ctx;
>  	int ret;
>  
> +	if (drm_WARN_ON(&vm->ptdev->base, flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE))
> +		return -EINVAL;
> +
>  	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
>  	if (ret)
>  		return ret;
> diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
> index 14a93a4ef6ff..a2ff0f4ec691 100644
> --- a/include/uapi/drm/panthor_drm.h
> +++ b/include/uapi/drm/panthor_drm.h
> @@ -614,6 +614,18 @@ enum drm_panthor_vm_bind_op_flags {
>  	 */
>  	DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED = 1 << 2,
>  
> +	/**
> +	 * @DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE: Sparsely map a virtual memory range
> +	 *
> +	 * Only valid with DRM_PANTHOR_VM_BIND_OP_TYPE_MAP.
> +	 *
> +	 * When this flag is set, the whole vm_bind range is mapped over a dummy object in a cyclic
> +	 * fashion, and all GPU reads from addresses in the range return undefined values. This flag
> +	 * being set means drm_panthor_vm_bind_op::bo_offset and drm_panthor_vm_bind_op::bo_handle
> +	 * must both be set to 0. DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC must also be set.
> +	 */
> +	DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE = 1 << 3,
> +
>  	/**
>  	 * @DRM_PANTHOR_VM_BIND_OP_TYPE_MASK: Mask used to determine the type of operation.
>  	 */



Thread overview: 9+ messages
2026-05-07 21:49 [PATCH v11 0/6] Support sparse mappings in Panthor Adrián Larumbe
2026-05-07 21:49 ` [PATCH v11 1/6] drm/panthor: Expose GPU page sizes to UM Adrián Larumbe
2026-05-07 21:49 ` [PATCH v11 2/6] drm/panthor: Pass vm_bind_op to vm_prepare_map_op_ctx Adrián Larumbe
2026-05-07 21:49 ` [PATCH v11 3/6] drm/panthor: Delete spurious whitespace from uAPI header Adrián Larumbe
2026-05-07 21:49 ` [PATCH v11 4/6] drm/panthor: Remove unused operation context field Adrián Larumbe
2026-05-07 21:49 ` [PATCH v11 5/6] drm/panthor: Support sparse mappings Adrián Larumbe
2026-05-13 15:46   ` Steven Price [this message]
2026-05-13 16:16     ` Boris Brezillon
2026-05-07 21:49 ` [PATCH v11 6/6] drm/panthor: Bump the driver version to 1.9 Adrián Larumbe
