From: Steven Price
To: Adrián Larumbe, linux-kernel@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org, Boris Brezillon, kernel@collabora.com,
 Liviu Dudau, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
 David Airlie, Simona Vetter, Daniel Almeida, Alice Ryhl
Subject: Re: [PATCH v11 5/6] drm/panthor: Support sparse mappings
Date: Wed, 13 May 2026 16:46:17 +0100
Message-ID: <661a6aab-a643-496b-94d2-ae9230df1a54@arm.com>
In-Reply-To: <20260507214939.2852489-6-adrian.larumbe@collabora.com>
References: <20260507214939.2852489-1-adrian.larumbe@collabora.com>
 <20260507214939.2852489-6-adrian.larumbe@collabora.com>

On 07/05/2026 22:49, Adrián Larumbe wrote:
> Allow UM to bind sparsely populated memory regions by cyclically mapping
> virtual ranges over a kernel-allocated dummy BO. This alternative is
> preferable to the old method of handling sparseness in the UMD, because it
> relied on the creation of a buffer object to the same end, despite the fact
> Vulkan sparse resources don't need to be backed by a driver BO.
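Let me first check I've understood the scheme: a sparse GPU VA is always
backed by the dummy BO at the offset that keeps VA and BO offset congruent
modulo 2MiB. Roughly (my own illustration, not code from this series):

	/* offset into the 2MiB dummy BO backing a sparse VA */
	dummy_offset = va & (SZ_2M - 1);

	/*
	 * e.g. a 4MiB sparse bind starting at VA 0x81100000 becomes three
	 * calls to panthor_vm_map_pages():
	 *   [0x81100000, +1MiB) -> dummy offset 0x100000
	 *   [0x81200000, +2MiB) -> dummy offset 0
	 *   [0x81400000, +1MiB) -> dummy offset 0
	 */

Assuming that's right, the rest mostly follows - one question below.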
> The choice of backing sparsely-bound regions with a Panthor BO was made so
> as to profit from the existing shrinker reclaim code. That way no special
> treatment must be given to the dummy sparse BOs when reclaiming memory, as
> would be the case if we had chosen a raw kernel page implementation.

Do you need to fix up the remap_evicted_vma() path though? At the moment
that will go through panthor_vm_map_pages() without doing the
panthor_fix_sparse_map_offset() dance. Also I suspect it won't map the
whole region (just the first 2MB in the sgtable). Maybe I'm missing
something?
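Naively I'd have expected the eviction path to end up doing something like
the below - entirely untested, and I'm guessing at the local variable
names in remap_evicted_vma(), so treat it as a sketch:

	if (vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
		ret = panthor_vm_map_sparse(vm, vma->base.va.addr,
					    flags_to_prot(vma->flags),
					    bo->dmap.sgt, vma->base.va.range);
	else
		ret = panthor_vm_map_pages(vm, vma->base.va.addr,
					   flags_to_prot(vma->flags),
					   bo->dmap.sgt, vma->base.gem.offset,
					   vma->base.va.range);

i.e. reusing panthor_vm_map_sparse() so that both the offset fix-up and
the 2MB wrap-around are handled in one place.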

Thanks,
Steve

> A new dummy BO is allocated per open file context, because even though the
> Vulkan spec mandates that writes into sparsely bound regions must be
> discarded, our implementation is still a workaround for the fact Mali CSF
> GPUs cannot support this behaviour at the hardware level, so writes still
> make it into the backing BO. If we had a global one, it could become a
> vector for information leaks between file contexts, which should never
> happen in DRM.
>
> As a side note, care was taken to adjust the dummy BO offsets of sparse
> mappings so that every address in a new VA range is mapped at the matching
> offset within the dummy BO.
>
> Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
> ---
>  drivers/gpu/drm/panthor/panthor_gem.c |  18 +++
>  drivers/gpu/drm/panthor/panthor_gem.h |   2 +
>  drivers/gpu/drm/panthor/panthor_mmu.c | 179 +++++++++++++++++++++++---
>  include/uapi/drm/panthor_drm.h        |  12 ++
>  4 files changed, 190 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> index 13295d7a593d..c798ac2963e1 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.c
> +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> @@ -1345,6 +1345,24 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
>  	return ERR_PTR(ret);
>  }
>  
> +/**
> + * panthor_dummy_bo_create() - Create a Panthor BO meant to back sparse bindings.
> + * @ptdev: Device.
> + *
> + * Return: A valid pointer in case of success, an ERR_PTR() otherwise.
> + */
> +struct panthor_gem_object *
> +panthor_dummy_bo_create(struct panthor_device *ptdev)
> +{
> +	/* Since even when the DRM device's mount point has enabled THP we have no guarantee
> +	 * that drm_gem_get_pages() will return a single 2MiB PMD, and also we cannot be sure
> +	 * that the 2MiB won't be reclaimed and re-allocated later on as 4KiB chunks, it doesn't
> +	 * make sense to pre-populate this object's page array, nor to fall back on a BO size
> +	 * of 4KiB. Sticking to a dummy object size of 2MiB lets us keep things simple for now.
> +	 */
> +	return panthor_gem_create(&ptdev->base, SZ_2M, DRM_PANTHOR_BO_NO_MMAP, NULL, 0);
> +}
> +
>  static bool can_swap(void)
>  {
>  	return get_nr_swap_pages() > 0;
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.h b/drivers/gpu/drm/panthor/panthor_gem.h
> index ae0491d0b121..8639c2fa08e6 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.h
> +++ b/drivers/gpu/drm/panthor/panthor_gem.h
> @@ -315,6 +315,8 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
>  
>  void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo);
>  
> +struct panthor_gem_object *panthor_dummy_bo_create(struct panthor_device *ptdev);
> +
>  #ifdef CONFIG_DEBUG_FS
>  void panthor_gem_debugfs_init(struct drm_minor *minor);
>  #endif
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index f54a60cd0ec4..f0425dd70d2c 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -112,6 +112,17 @@ struct panthor_mmu {
>  struct panthor_vm_pool {
>  	/** @xa: Array used for VM handle tracking. */
>  	struct xarray xa;
> +
> +	/**
> +	 * @dummy: Dummy object used for sparse mappings
> +	 *
> +	 * Sparse bindings map virtual address ranges onto a dummy
> +	 * BO in a modulo fashion. Even though sparse writes are meant
> +	 * to be discarded and reads undefined, writes are still reflected
> +	 * in the dummy buffer. That means we must keep a dummy object per
> +	 * file context, to avoid data leaks between them.
> +	 */
> +	struct panthor_gem_object *dummy;
>  };
>  
>  /**
> @@ -391,6 +402,15 @@ struct panthor_vm {
>  		 */
>  		struct list_head lru_node;
>  	} reclaim;
> +
> +	/**
> +	 * @dummy: Dummy object used for sparse mappings.
> +	 *
> +	 * VM's must keep a reference to the file context-wide dummy BO because
> +	 * they can outlive the file context, which includes the VM pool holding
> +	 * the original dummy BO reference.
> +	 */
> +	struct panthor_gem_object *dummy;
>  };
>  
>  /**
> @@ -1020,6 +1040,30 @@ panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
>  	return 0;
>  }
>  
> +static int
> +panthor_vm_map_sparse(struct panthor_vm *vm, u64 iova, int prot,
> +		      struct sg_table *sgt, u64 size)
> +{
> +	u64 mapped = 0;
> +	int ret;
> +
> +	while (mapped < size) {
> +		u64 addr = iova + mapped;
> +		u32 chunk_size = min(size - mapped, SZ_2M - (addr & (SZ_2M - 1)));
> +
> +		ret = panthor_vm_map_pages(vm, addr, prot, sgt,
> +					   addr % SZ_2M, chunk_size);
> +		if (ret) {
> +			panthor_vm_unmap_pages(vm, iova, mapped);
> +			return ret;
> +		}
> +
> +		mapped += chunk_size;
> +	}
> +
> +	return 0;
> +}
> +
>  static int flags_to_prot(u32 flags)
>  {
>  	int prot = 0;
> @@ -1262,6 +1306,7 @@ static int panthor_vm_op_ctx_prealloc_pts(struct panthor_vm_op_ctx *op_ctx)
>  	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
>  	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
>  	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> +	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE | \
>  	 DRM_PANTHOR_VM_BIND_OP_TYPE_MASK)
>  
>  static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
> @@ -1269,6 +1314,7 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  					  struct panthor_gem_object *bo,
>  					  const struct drm_panthor_vm_bind_op *op)
>  {
> +	bool is_sparse = op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
>  	struct drm_gpuvm_bo *preallocated_vm_bo;
>  	struct sg_table *sgt = NULL;
>  	int ret;
> @@ -1280,8 +1326,21 @@ static int panthor_vm_prepare_map_op_ctx(struct panthor_vm_op_ctx *op_ctx,
>  	    (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) != DRM_PANTHOR_VM_BIND_OP_TYPE_MAP)
>  		return -EINVAL;
>  
> -	/* Make sure the VA and size are in-bounds. */
> -	if (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size)
> +	/* uAPI mandates sparsely bound regions must not be executable. */
> +	if (is_sparse && !(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC))
> +		return -EINVAL;
> +
> +	/* For non-sparse, make sure the VA and size are in-bounds.
> +	 * For sparse, this is not applicable, because the dummy BO is
> +	 * repeatedly mapped over a potentially wider VA range.
> +	 */
> +	if (!is_sparse && (op->size > bo->base.size || op->bo_offset > bo->base.size - op->size))
> +		return -EINVAL;
> +
> +	/* For sparse, we don't expect any user BO, the BO we get passed
> +	 * is the dummy BO attached to the VM pool.
> +	 */
> +	if (is_sparse && (op->bo_handle || op->bo_offset))
>  		return -EINVAL;
>  
>  	/* If the BO has an exclusive VM attached, it can't be mapped to other VMs. */
> @@ -1430,7 +1489,9 @@ panthor_vm_get_bo_for_va(struct panthor_vm *vm, u64 va, u64 *bo_offset)
>  	if (vma && vma->base.gem.obj) {
>  		drm_gem_object_get(vma->base.gem.obj);
>  		bo = to_panthor_bo(vma->base.gem.obj);
> -		*bo_offset = vma->base.gem.offset + (va - vma->base.va.addr);
> +		*bo_offset = !(vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE) ?
> +			vma->base.gem.offset + (va - vma->base.va.addr) :
> +			va & (SZ_2M - 1);
>  	}
>  	mutex_unlock(&vm->op_lock);
>  
> @@ -1543,6 +1604,9 @@ int panthor_vm_pool_create_vm(struct panthor_device *ptdev,
>  		return ret;
>  	}
>  
> +	drm_gem_object_get(&pool->dummy->base);
> +	vm->dummy = pool->dummy;
> +
>  	args->user_va_range = kernel_va_start;
>  	return id;
>  }
> @@ -1634,6 +1698,8 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
>  	xa_for_each(&pfile->vms->xa, i, vm)
>  		panthor_vm_destroy(vm);
>  
> +	if (pfile->vms->dummy)
> +		drm_gem_object_put(&pfile->vms->dummy->base);
>  	xa_destroy(&pfile->vms->xa);
>  	kfree(pfile->vms);
>  }
> @@ -1646,12 +1712,28 @@ void panthor_vm_pool_destroy(struct panthor_file *pfile)
>   */
>  int panthor_vm_pool_create(struct panthor_file *pfile)
>  {
> +	struct panthor_gem_object *dummy;
> +	int ret;
> +
>  	pfile->vms = kzalloc_obj(*pfile->vms);
>  	if (!pfile->vms)
>  		return -ENOMEM;
>  
>  	xa_init_flags(&pfile->vms->xa, XA_FLAGS_ALLOC1);
> +
> +	dummy = panthor_dummy_bo_create(pfile->ptdev);
> +	if (IS_ERR(dummy)) {
> +		ret = PTR_ERR(dummy);
> +		goto err_destroy_vm_pool;
> +	}
> +
> +	pfile->vms->dummy = dummy;
> +
>  	return 0;
> +
> +err_destroy_vm_pool:
> +	panthor_vm_pool_destroy(pfile);
> +	return ret;
>  }
>  
>  /* dummy TLB ops, the real TLB flush happens in panthor_vm_flush_range() */
> @@ -1987,6 +2069,9 @@ static void panthor_vm_free(struct drm_gpuvm *gpuvm)
>  
>  	free_io_pgtable_ops(vm->pgtbl_ops);
>  
> +	if (vm->dummy)
> +		drm_gem_object_put(&vm->dummy->base);
> +
>  	drm_mm_takedown(&vm->mm);
>  	kfree(vm);
>  }
> @@ -2146,7 +2231,30 @@ static void panthor_vma_init(struct panthor_vma *vma, u32 flags)
>  #define PANTHOR_VM_MAP_FLAGS \
>  	(DRM_PANTHOR_VM_BIND_OP_MAP_READONLY | \
>  	 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC | \
> -	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED)
> +	 DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED | \
> +	 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> +
> +static void
> +panthor_fix_sparse_map_offset(struct drm_gpuva_op_map *op, u32 flags)
> +{
> +	if (op && (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE))
> +		op->gem.offset = op->va.addr & (SZ_2M - 1);
> +}
> +
> +static int
> +panthor_vm_exec_map_op(struct panthor_vm *vm, u32 flags,
> +		       const struct drm_gpuva_op_map *op)
> +{
> +	struct panthor_gem_object *bo = to_panthor_bo(op->gem.obj);
> +	int prot = flags_to_prot(flags);
> +
> +	if (flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)
> +		return panthor_vm_map_sparse(vm, op->va.addr, prot,
> +					     bo->dmap.sgt, op->va.range);
> +
> +	return panthor_vm_map_pages(vm, op->va.addr, prot, bo->dmap.sgt,
> +				    op->gem.offset, op->va.range);
> +}
>  
>  static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  {
> @@ -2159,10 +2267,9 @@ static int panthor_gpuva_sm_step_map(struct drm_gpuva_op *op, void *priv)
>  		return -EINVAL;
>  
>  	panthor_vma_init(vma, op_ctx->flags & PANTHOR_VM_MAP_FLAGS);
> +	panthor_fix_sparse_map_offset(&op->map, vma->flags);
>  
> -	ret = panthor_vm_map_pages(vm, op->map.va.addr, flags_to_prot(vma->flags),
> -				   op_ctx->map.bo->dmap.sgt, op->map.gem.offset,
> -				   op->map.va.range);
> +	ret = panthor_vm_exec_map_op(vm, vma->flags, &op->map);
>  	if (ret) {
>  		panthor_vm_op_ctx_return_vma(op_ctx, vma);
>  		return ret;
>  	}
> @@ -2194,6 +2301,8 @@ static void
>  unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
>  		     u64 *unmap_start, u64 *unmap_range)
>  {
> +	struct panthor_vma *unmap_vma = container_of(op->unmap->va, struct panthor_vma, base);
> +	bool is_sparse = unmap_vma->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE;
>  	u64 aligned_unmap_start, aligned_unmap_end, unmap_end;
>  
>  	unmap_end = *unmap_start + *unmap_range;
> @@ -2201,11 +2310,15 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
>  	aligned_unmap_end = ALIGN(unmap_end, SZ_2M);
>  
>  	/* If we're dealing with a huge page, make sure the unmap region is
> -	 * aligned on the start of the page.
> +	 * aligned on the start of the page. If the unmapped VMA stands for
> +	 * a sparse mapping, always assume the backing storage is a THP, since
> +	 * the overhead of unmapping 2MiB worth of 4KiB pages and remapping
> +	 * some of them is offset by the logic of working out whether it's
> +	 * the opposite case right below. This also holds true for op->next.
>  	 */
>  	if (op->prev && aligned_unmap_start < *unmap_start &&
>  	    op->prev->va.addr <= aligned_unmap_start &&
> -	    iova_mapped_as_huge_page(op->prev, *unmap_start)) {
> +	    (is_sparse || iova_mapped_as_huge_page(op->prev, *unmap_start))) {
>  		*unmap_range += *unmap_start - aligned_unmap_start;
>  		*unmap_start = aligned_unmap_start;
>  	}
> @@ -2215,7 +2328,7 @@ unmap_hugepage_align(const struct drm_gpuva_op_remap *op,
>  	 */
>  	if (op->next && aligned_unmap_end > unmap_end &&
>  	    op->next->va.addr + op->next->va.range >= aligned_unmap_end &&
> -	    iova_mapped_as_huge_page(op->next, unmap_end - 1)) {
> +	    (is_sparse || iova_mapped_as_huge_page(op->next, unmap_end - 1))) {
>  		*unmap_range += aligned_unmap_end - unmap_end;
>  	}
>  }
> @@ -2232,6 +2345,11 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  
>  	drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range);
>  
> +	/* op->remap.prev's BO offset is always the same as the unmap va's, but
> +	 * that of op->remap.next must be adjusted so as to remain < SZ_2M
> +	 */
> +	panthor_fix_sparse_map_offset(op->remap.next, unmap_vma->flags);
> +
>  	/*
>  	 * ARM IOMMU page table management code disallows partial unmaps of huge pages,
>  	 * so when a partial unmap is requested, we must first unmap the entire huge
> @@ -2251,14 +2369,19 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  	}
>  
>  	if (op->remap.prev) {
> -		struct panthor_gem_object *bo = to_panthor_bo(op->remap.prev->gem.obj);
>  		u64 offset = op->remap.prev->gem.offset + unmap_start - op->remap.prev->va.addr;
>  		u64 size = op->remap.prev->va.addr + op->remap.prev->va.range - unmap_start;
>  
> -		if (!unmap_vma->evicted) {
> -			ret = panthor_vm_map_pages(vm, unmap_start,
> -						   flags_to_prot(unmap_vma->flags),
> -						   bo->dmap.sgt, offset, size);
> +		if (!unmap_vma->evicted && size > 0) {
> +			struct drm_gpuva_op_map map_op = {
> +				.va.addr = unmap_start,
> +				.va.range = size,
> +				.gem.obj = op->remap.prev->gem.obj,
> +				.gem.offset = offset,
> +			};
> +			panthor_fix_sparse_map_offset(&map_op, unmap_vma->flags);
> +
> +			ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
>  			if (ret)
>  				return ret;
>  		}
> @@ -2269,14 +2392,19 @@ static int panthor_gpuva_sm_step_remap(struct drm_gpuva_op *op,
>  	}
>  
>  	if (op->remap.next) {
> -		struct panthor_gem_object *bo = to_panthor_bo(op->remap.next->gem.obj);
>  		u64 addr = op->remap.next->va.addr;
>  		u64 size = unmap_start + unmap_range - op->remap.next->va.addr;
>  
> -		if (!unmap_vma->evicted) {
> -			ret = panthor_vm_map_pages(vm, addr, flags_to_prot(unmap_vma->flags),
> -						   bo->dmap.sgt, op->remap.next->gem.offset,
> -						   size);
> +		if (!unmap_vma->evicted && size > 0) {
> +			struct drm_gpuva_op_map map_op = {
> +				.va.addr = addr,
> +				.va.range = size,
> +				.gem.obj = op->remap.next->gem.obj,
> +				.gem.offset = op->remap.next->gem.offset,
> +			};
> +			panthor_fix_sparse_map_offset(&map_op, unmap_vma->flags);
> +
> +			ret = panthor_vm_exec_map_op(vm, unmap_vma->flags, &map_op);
>  			if (ret)
>  				return ret;
>  		}
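FWIW I did convince myself that the remap dance above does the right
thing for non-evicted sparse VMAs. Taking a 2MiB-aligned sparse VMA at
VA and unmapping [VA + 1MiB, VA + 1.5MiB) as a worked example:

  - unmap_hugepage_align() widens the unmap to the whole huge page,
    [VA, VA + 2MiB), because is_sparse forces the THP assumption;
  - the prev remap puts back [VA, VA + 1MiB) at dummy offset 0;
  - the next remap puts back [VA + 1.5MiB, VA + 2MiB), with
    panthor_fix_sparse_map_offset() moving its BO offset to 0x180000.

So the VA/offset congruence survives partial unmaps - it's only the
eviction path I'm unsure about.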
> @@ -2835,7 +2963,13 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
>  
>  	switch (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) {
>  	case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
> -		gem = drm_gem_object_lookup(file, op->bo_handle);
> +		if (!(op->flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE)) {
> +			gem = drm_gem_object_lookup(file, op->bo_handle);
> +		} else {
> +			gem = &vm->dummy->base;
> +			drm_gem_object_get(&vm->dummy->base);
> +		}
> +
>  		ret = panthor_vm_prepare_map_op_ctx(op_ctx, vm,
>  						    gem ? to_panthor_bo(gem) : NULL,
>  						    op);
> @@ -3043,6 +3177,9 @@ int panthor_vm_map_bo_range(struct panthor_vm *vm, struct panthor_gem_object *bo
>  	struct panthor_vm_op_ctx op_ctx;
>  	int ret;
>  
> +	if (drm_WARN_ON(&vm->ptdev->base, flags & DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE))
> +		return -EINVAL;
> +
>  	ret = panthor_vm_prepare_map_op_ctx(&op_ctx, vm, bo, &op);
>  	if (ret)
>  		return ret;
> diff --git a/include/uapi/drm/panthor_drm.h b/include/uapi/drm/panthor_drm.h
> index 14a93a4ef6ff..a2ff0f4ec691 100644
> --- a/include/uapi/drm/panthor_drm.h
> +++ b/include/uapi/drm/panthor_drm.h
> @@ -614,6 +614,18 @@ enum drm_panthor_vm_bind_op_flags {
>  	 */
>  	DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED = 1 << 2,
>  
> +	/**
> +	 * @DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE: Sparsely map a virtual memory range
> +	 *
> +	 * Only valid with DRM_PANTHOR_VM_BIND_OP_TYPE_MAP.
> +	 *
> +	 * When this flag is set, the whole vm_bind range is mapped over a dummy object in a cyclic
> +	 * fashion, and all GPU reads from addresses in the range return undefined values. This flag
> +	 * being set means drm_panthor_vm_bind_op::bo_offset and drm_panthor_vm_bind_op::bo_handle
> +	 * must both be set to 0. DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC must also be set.
> +	 */
> +	DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE = 1 << 3,
> +
>  	/**
>  	 * @DRM_PANTHOR_VM_BIND_OP_TYPE_MASK: Mask used to determine the type of operation.
>  	 */
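For completeness, this is roughly how I'd expect a UMD to exercise the
new flag - field names as I read them from panthor_drm.h, values made up:

	struct drm_panthor_vm_bind_op op = {
		.flags = DRM_PANTHOR_VM_BIND_OP_TYPE_MAP |
			 DRM_PANTHOR_VM_BIND_OP_MAP_SPARSE |
			 DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC,
		.bo_handle = 0,		/* must be 0 for sparse */
		.bo_offset = 0,		/* must be 0 for sparse */
		.va = 0x8000000000,	/* hypothetical sparse region */
		.size = 64ull << 20,
	};

which matches the restrictions documented above (NOEXEC required,
handle/offset zero).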