linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support
@ 2025-05-19 17:51 Rob Clark
  2025-05-19 17:51 ` [PATCH v5 05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON() Rob Clark
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Rob Clark @ 2025-05-19 17:51 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, linux-arm-msm, Connor Abbott, Rob Clark, Abhinav Kumar,
	André Almeida, Arnd Bergmann, Barnabás Czémán,
	Christian König, Christopher Snowhill, Dmitry Baryshkov,
	Dmitry Baryshkov, Eugene Lepshy, open list:IOMMU SUBSYSTEM,
	Jason Gunthorpe, Jessica Zhang, Joao Martins, Jonathan Marek,
	Jun Nie, Kevin Tian, Konrad Dybcio, Krzysztof Kozlowski,
	moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	moderated list:ARM SMMU DRIVERS, open list,
	open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	Marijn Suijten, Nicolin Chen, Rob Herring (Arm), Robin Murphy,
	Sean Paul, Will Deacon

From: Rob Clark <robdclark@chromium.org>

Conversion to DRM GPU VA Manager[1], and adding support for Vulkan Sparse
Memory[2] in the form of:

1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
   MAP_NULL/UNMAP commands

2. A new VM_BIND ioctl to allow submitting batches of one or more
   MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue

I did not implement support for synchronous VM_BIND commands.  Since
userspace could just immediately wait for the `SUBMIT` to complete, I don't
think we need this extra complexity in the kernel.  Synchronous/immediate
VM_BIND operations could be implemented with a 2nd VM_BIND submitqueue.
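As a rough illustration of the batching model described above, here is a
standalone sketch of what a batch of MAP/MAP_NULL/UNMAP ops and a basic
sanity check could look like. The struct layout, op names, and validation
rules are illustrative assumptions, not the actual msm_drm.h UAPI:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum vm_bind_op { OP_MAP, OP_MAP_NULL, OP_UNMAP };

struct vm_bind_cmd {
	enum vm_bind_op op;
	uint64_t iova;    /* GPU virtual address */
	uint64_t range;   /* size of the mapping */
	uint32_t handle;  /* GEM handle; 0 for MAP_NULL/UNMAP */
};

/* The sort of sanity checking a kernel could apply before queueing a
 * batch: page-aligned addresses/sizes, and a BO handle iff the op is
 * a MAP.  Returns 0 if the batch looks valid, -1 otherwise. */
static int validate_batch(const struct vm_bind_cmd *cmds, size_t n,
			  uint64_t page_size)
{
	for (size_t i = 0; i < n; i++) {
		if (cmds[i].iova % page_size || cmds[i].range % page_size)
			return -1;
		if ((cmds[i].op == OP_MAP) != (cmds[i].handle != 0))
			return -1;
	}
	return 0;
}
```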

The corresponding mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32533

Changes in v5:
- Improved drm/sched enqueue_credit comments, and better define the
  return from drm_sched_entity_push_job()
- Improve DRM_GPUVM_VA_WEAK_REF comments, and additional WARN_ON()s to
  make it clear that some of the gpuvm functionality is not available
  in this mode.
- Link to v4: https://lore.kernel.org/all/20250514175527.42488-1-robdclark@gmail.com/

Changes in v4:
- Various locking/etc fixes
- Optimize the pgtable preallocation.  If userspace sorts the VM_BIND ops
  then the kernel detects ops that fall into the same 2MB last level PTD
  to avoid duplicate page preallocation.
- Add a way to throttle pushing jobs to the scheduler, to cap the amount
  of potentially temporary prealloc'd pgtable pages.
- Add vm_log to devcoredump for debugging.  If the vm_log_shift module
  param is set, keep a log of the last 1<<vm_log_shift VM updates for
  easier debugging of faults/crashes.
- Link to v3: https://lore.kernel.org/all/20250428205619.227835-1-robdclark@gmail.com/
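The 2MB last-level PTD dedup for sorted ops mentioned above can be sketched
as follows, assuming 4K pages (so one last-level table covers 2MB). This is
a standalone approximation of the idea, not the driver's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* With 4K granule LPAE, one last-level page table spans 2MB of IOVA. */
#define PTD_SHIFT 21

/* Count the distinct last-level tables touched by a sorted run of ops;
 * each distinct table needs at most one preallocated page.  Because the
 * ops are sorted by iova, a table shared with the previous op has
 * already been counted and can be skipped. */
static size_t count_prealloc_ptds(const uint64_t *iova,
				  const uint64_t *range, size_t n)
{
	size_t count = 0;
	uint64_t prev = UINT64_MAX;  /* last PTD counted so far */

	for (size_t i = 0; i < n; i++) {
		uint64_t first_ptd = iova[i] >> PTD_SHIFT;
		uint64_t last_ptd = (iova[i] + range[i] - 1) >> PTD_SHIFT;

		if (first_ptd == prev)
			first_ptd++;  /* already counted for previous op */
		if (first_ptd <= last_ptd)
			count += last_ptd - first_ptd + 1;
		prev = last_ptd;
	}
	return count;
}
```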

Changes in v3:
- Switched to a separate VM_BIND ioctl.  This makes the UABI a bit
  cleaner, but OTOH the userspace code was cleaner when the end result
  of either type of VkQueue led to the same ioctl.  So I'm a bit on
  the fence.
- Switched to doing the gpuvm bookkeeping synchronously, and only
  deferring the pgtable updates.  This avoids needing to hold any resv
  locks in the fence signaling path, resolving the last shrinker related
  lockdep complaints.  OTOH it means userspace can trigger invalid
  pgtable updates with multiple VM_BIND queues.  In this case, we ensure
  that unmaps happen completely (to prevent userspace from using this to
  access free'd pages), mark the context as unusable, and move on with
  life.
- Link to v2: https://lore.kernel.org/all/20250319145425.51935-1-robdclark@gmail.com/

Changes in v2:
- Dropped Bibek Kumar Patro's arm-smmu patches[3], which have since been
  merged.
- Pre-allocate all the things, and drop HACK patch which disabled shrinker.
  This includes ensuring that vm_bo objects are allocated up front, pre-
  allocating VMA objects, and pre-allocating pages used for pgtable updates.
  The latter utilizes io_pgtable_cfg callbacks for pgtable alloc/free, that
  were initially added for panthor. 
- Add back support for BO dumping for devcoredump.
- Link to v1 (RFC): https://lore.kernel.org/dri-devel/20241207161651.410556-1-robdclark@gmail.com/T/#t

[1] https://www.kernel.org/doc/html/next/gpu/drm-mm.html#drm-gpuvm
[2] https://docs.vulkan.org/spec/latest/chapters/sparsemem.html
[3] https://patchwork.kernel.org/project/linux-arm-kernel/list/?series=909700

Rob Clark (40):
  drm/gpuvm: Don't require obj lock in destructor path
  drm/gpuvm: Allow VAs to hold soft reference to BOs
  drm/gem: Add ww_acquire_ctx support to drm_gem_lru_scan()
  drm/sched: Add enqueue credit limit
  iommu/io-pgtable-arm: Add quirk to quiet WARN_ON()
  drm/msm: Rename msm_file_private -> msm_context
  drm/msm: Improve msm_context comments
  drm/msm: Rename msm_gem_address_space -> msm_gem_vm
  drm/msm: Remove vram carveout support
  drm/msm: Collapse vma allocation and initialization
  drm/msm: Collapse vma close and delete
  drm/msm: Don't close VMAs on purge
  drm/msm: drm_gpuvm conversion
  drm/msm: Convert vm locking
  drm/msm: Use drm_gpuvm types more
  drm/msm: Split out helper to get iommu prot flags
  drm/msm: Add mmu support for non-zero offset
  drm/msm: Add PRR support
  drm/msm: Rename msm_gem_vma_purge() -> _unmap()
  drm/msm: Drop queued submits on lastclose()
  drm/msm: Lazily create context VM
  drm/msm: Add opt-in for VM_BIND
  drm/msm: Mark VM as unusable on GPU hangs
  drm/msm: Add _NO_SHARE flag
  drm/msm: Crashdump prep for sparse mappings
  drm/msm: rd dumping prep for sparse mappings
  drm/msm: Crashdec support for sparse
  drm/msm: rd dumping support for sparse
  drm/msm: Extract out syncobj helpers
  drm/msm: Use DMA_RESV_USAGE_BOOKKEEP/KERNEL
  drm/msm: Add VM_BIND submitqueue
  drm/msm: Support IO_PGTABLE_QUIRK_NO_WARN_ON
  drm/msm: Support pgtable preallocation
  drm/msm: Split out map/unmap ops
  drm/msm: Add VM_BIND ioctl
  drm/msm: Add VM logging for VM_BIND updates
  drm/msm: Add VMA unmap reason
  drm/msm: Add mmu prealloc tracepoint
  drm/msm: use trylock for debugfs
  drm/msm: Bump UAPI version

 drivers/gpu/drm/drm_gem.c                     |   14 +-
 drivers/gpu/drm/drm_gpuvm.c                   |   38 +-
 drivers/gpu/drm/msm/Kconfig                   |    1 +
 drivers/gpu/drm/msm/Makefile                  |    1 +
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c         |   25 +-
 drivers/gpu/drm/msm/adreno/a2xx_gpummu.c      |    5 +-
 drivers/gpu/drm/msm/adreno/a3xx_gpu.c         |   17 +-
 drivers/gpu/drm/msm/adreno/a4xx_gpu.c         |   17 +-
 drivers/gpu/drm/msm/adreno/a5xx_debugfs.c     |    4 +-
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c         |   22 +-
 drivers/gpu/drm/msm/adreno/a5xx_power.c       |    2 +-
 drivers/gpu/drm/msm/adreno/a5xx_preempt.c     |   10 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c         |   32 +-
 drivers/gpu/drm/msm/adreno/a6xx_gmu.h         |    2 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c         |   49 +-
 drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c   |    6 +-
 drivers/gpu/drm/msm/adreno/a6xx_preempt.c     |   10 +-
 drivers/gpu/drm/msm/adreno/adreno_device.c    |    4 -
 drivers/gpu/drm/msm/adreno/adreno_gpu.c       |   99 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.h       |   23 +-
 .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c   |   14 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c   |   18 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h   |    2 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c       |   18 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c     |   14 +-
 drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h     |    4 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c     |    6 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c      |   28 +-
 drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c    |   12 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c     |    4 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c      |   19 +-
 drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c    |   12 +-
 drivers/gpu/drm/msm/dsi/dsi_host.c            |   14 +-
 drivers/gpu/drm/msm/msm_drv.c                 |  184 +--
 drivers/gpu/drm/msm/msm_drv.h                 |   35 +-
 drivers/gpu/drm/msm/msm_fb.c                  |   18 +-
 drivers/gpu/drm/msm/msm_fbdev.c               |    2 +-
 drivers/gpu/drm/msm/msm_gem.c                 |  494 +++---
 drivers/gpu/drm/msm/msm_gem.h                 |  247 ++-
 drivers/gpu/drm/msm/msm_gem_prime.c           |   15 +
 drivers/gpu/drm/msm/msm_gem_shrinker.c        |  104 +-
 drivers/gpu/drm/msm/msm_gem_submit.c          |  295 ++--
 drivers/gpu/drm/msm/msm_gem_vma.c             | 1471 ++++++++++++++++-
 drivers/gpu/drm/msm/msm_gpu.c                 |  211 ++-
 drivers/gpu/drm/msm/msm_gpu.h                 |  144 +-
 drivers/gpu/drm/msm/msm_gpu_trace.h           |   14 +
 drivers/gpu/drm/msm/msm_iommu.c               |  302 +++-
 drivers/gpu/drm/msm/msm_kms.c                 |   18 +-
 drivers/gpu/drm/msm/msm_kms.h                 |    2 +-
 drivers/gpu/drm/msm/msm_mmu.h                 |   38 +-
 drivers/gpu/drm/msm/msm_rd.c                  |   62 +-
 drivers/gpu/drm/msm/msm_ringbuffer.c          |   10 +-
 drivers/gpu/drm/msm/msm_submitqueue.c         |   96 +-
 drivers/gpu/drm/msm/msm_syncobj.c             |  172 ++
 drivers/gpu/drm/msm/msm_syncobj.h             |   37 +
 drivers/gpu/drm/scheduler/sched_entity.c      |   19 +-
 drivers/gpu/drm/scheduler/sched_main.c        |    3 +
 drivers/iommu/io-pgtable-arm.c                |   27 +-
 include/drm/drm_gem.h                         |   10 +-
 include/drm/drm_gpuvm.h                       |   19 +-
 include/drm/gpu_scheduler.h                   |   24 +-
 include/linux/io-pgtable.h                    |    8 +
 include/uapi/drm/msm_drm.h                    |  149 +-
 63 files changed, 3526 insertions(+), 1250 deletions(-)
 create mode 100644 drivers/gpu/drm/msm/msm_syncobj.c
 create mode 100644 drivers/gpu/drm/msm/msm_syncobj.h

-- 
2.49.0



^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v5 05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON()
  2025-05-19 17:51 [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support Rob Clark
@ 2025-05-19 17:51 ` Rob Clark
  2025-05-19 21:15 ` [Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support Dave Airlie
  2025-05-20 15:41 ` Will Deacon
  2 siblings, 0 replies; 8+ messages in thread
From: Rob Clark @ 2025-05-19 17:51 UTC (permalink / raw)
  To: dri-devel
  Cc: freedreno, linux-arm-msm, Connor Abbott, Rob Clark, Robin Murphy,
	Will Deacon, Joerg Roedel, Jason Gunthorpe, Nicolin Chen,
	Kevin Tian, Joao Martins, moderated list:ARM SMMU DRIVERS,
	open list:IOMMU SUBSYSTEM, open list

From: Rob Clark <robdclark@chromium.org>

In situations where the mapping/unmapping sequence can be controlled by
userspace, attempting to map over a region that has not yet been
unmapped is an error, but not something that should spam dmesg.

Now that there is a quirk, we can also drop the selftest_running
flag, and use the quirk instead for selftests.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
 drivers/iommu/io-pgtable-arm.c | 27 ++++++++++++++-------------
 include/linux/io-pgtable.h     |  8 ++++++++
 2 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index f27965caf6a1..a535d88f8943 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -253,8 +253,6 @@ static inline bool arm_lpae_concat_mandatory(struct io_pgtable_cfg *cfg,
 	       (data->start_level == 1) && (oas == 40);
 }
 
-static bool selftest_running = false;
-
 static dma_addr_t __arm_lpae_dma_addr(void *pages)
 {
 	return (dma_addr_t)virt_to_phys(pages);
@@ -373,7 +371,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 	for (i = 0; i < num_entries; i++)
 		if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) {
 			/* We require an unmap first */
-			WARN_ON(!selftest_running);
+			WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 			return -EEXIST;
 		} else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) {
 			/*
@@ -475,7 +473,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 		cptep = iopte_deref(pte, data);
 	} else if (pte) {
 		/* We require an unmap first */
-		WARN_ON(!selftest_running);
+		WARN_ON(!(cfg->quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 		return -EEXIST;
 	}
 
@@ -649,8 +647,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 	unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
 	ptep += unmap_idx_start;
 	pte = READ_ONCE(*ptep);
-	if (WARN_ON(!pte))
-		return 0;
+	if (!pte) {
+		WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
+		return -ENOENT;
+	}
 
 	/* If the size matches this level, we're in the right place */
 	if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
@@ -660,8 +660,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 		/* Find and handle non-leaf entries */
 		for (i = 0; i < num_entries; i++) {
 			pte = READ_ONCE(ptep[i]);
-			if (WARN_ON(!pte))
+			if (!pte) {
+				WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 				break;
+			}
 
 			if (!iopte_leaf(pte, lvl, iop->fmt)) {
 				__arm_lpae_clear_pte(&ptep[i], &iop->cfg, 1);
@@ -976,7 +978,8 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
 			    IO_PGTABLE_QUIRK_ARM_TTBR1 |
 			    IO_PGTABLE_QUIRK_ARM_OUTER_WBWA |
-			    IO_PGTABLE_QUIRK_ARM_HD))
+			    IO_PGTABLE_QUIRK_ARM_HD |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1079,7 +1082,8 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 	struct arm_lpae_io_pgtable *data;
 	typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr = &cfg->arm_lpae_s2_cfg.vtcr;
 
-	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB))
+	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1320,7 +1324,6 @@ static void __init arm_lpae_dump_ops(struct io_pgtable_ops *ops)
 #define __FAIL(ops, i)	({						\
 		WARN(1, "selftest: test failed for fmt idx %d\n", (i));	\
 		arm_lpae_dump_ops(ops);					\
-		selftest_running = false;				\
 		-EFAULT;						\
 })
 
@@ -1336,8 +1339,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	size_t size, mapped;
 	struct io_pgtable_ops *ops;
 
-	selftest_running = true;
-
 	for (i = 0; i < ARRAY_SIZE(fmts); ++i) {
 		cfg_cookie = cfg;
 		ops = alloc_io_pgtable_ops(fmts[i], cfg, cfg);
@@ -1426,7 +1427,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		free_io_pgtable_ops(ops);
 	}
 
-	selftest_running = false;
 	return 0;
 }
 
@@ -1448,6 +1448,7 @@ static int __init arm_lpae_do_selftests(void)
 		.tlb = &dummy_tlb_ops,
 		.coherent_walk = true,
 		.iommu_dev = &dev,
+		.quirks = IO_PGTABLE_QUIRK_NO_WARN_ON,
 	};
 
 	/* __arm_lpae_alloc_pages() merely needs dev_to_node() to work */
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index bba2a51c87d2..639b8f4fb87d 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -88,6 +88,13 @@ struct io_pgtable_cfg {
 	 *
 	 * IO_PGTABLE_QUIRK_ARM_HD: Enables dirty tracking in stage 1 pagetable.
 	 * IO_PGTABLE_QUIRK_ARM_S2FWB: Use the FWB format for the MemAttrs bits
+	 *
+	 * IO_PGTABLE_QUIRK_NO_WARN_ON: Do not WARN_ON() on conflicting
 	 *	mappings, but silently return -EEXIST.  Normally an attempt
+	 *	to map over an existing mapping would indicate some sort of
+	 *	kernel bug, which would justify the WARN_ON().  But for GPU
+	 *	drivers, this could be under control of userspace.  Which
+	 *	deserves an error return, but not to spam dmesg.
 	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS			BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS		BIT(1)
@@ -97,6 +104,7 @@ struct io_pgtable_cfg {
 	#define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA		BIT(6)
 	#define IO_PGTABLE_QUIRK_ARM_HD			BIT(7)
 	#define IO_PGTABLE_QUIRK_ARM_S2FWB		BIT(8)
+	#define IO_PGTABLE_QUIRK_NO_WARN_ON		BIT(9)
 	unsigned long			quirks;
 	unsigned long			pgsize_bitmap;
 	unsigned int			ias;
-- 
2.49.0
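The accept/reject pattern for quirk bits in the arm_64_lpae_alloc_pgtable_s2()
hunk above can be shown in isolation. The bit values are copied from the
patch; this is a standalone sketch, not the kernel code:

```c
#include <assert.h>
#include <stdbool.h>

#define BIT(n) (1UL << (n))

/* quirk bits as defined in include/linux/io-pgtable.h after this patch */
#define IO_PGTABLE_QUIRK_ARM_NS		BIT(0)
#define IO_PGTABLE_QUIRK_ARM_S2FWB	BIT(8)
#define IO_PGTABLE_QUIRK_NO_WARN_ON	BIT(9)

/* Mirrors the check in arm_64_lpae_alloc_pgtable_s2(): any quirk bit
 * outside the supported mask makes the allocation fail (return NULL in
 * the real code). */
static bool s2_quirks_ok(unsigned long quirks)
{
	return !(quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB |
			    IO_PGTABLE_QUIRK_NO_WARN_ON));
}
```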




* Re: [Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support
  2025-05-19 17:51 [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support Rob Clark
  2025-05-19 17:51 ` [PATCH v5 05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON() Rob Clark
@ 2025-05-19 21:15 ` Dave Airlie
  2025-05-19 21:24   ` Rob Clark
  2025-05-20 15:41 ` Will Deacon
  2 siblings, 1 reply; 8+ messages in thread
From: Dave Airlie @ 2025-05-19 21:15 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, freedreno, linux-arm-msm, Connor Abbott, Rob Clark,
	Abhinav Kumar, André Almeida, Arnd Bergmann,
	Barnabás Czémán, Christopher Snowhill,
	Dmitry Baryshkov, Dmitry Baryshkov, Eugene Lepshy,
	open list:IOMMU SUBSYSTEM, Jason Gunthorpe, Jessica Zhang,
	Joao Martins, Jonathan Marek, Jun Nie, Kevin Tian, Konrad Dybcio,
	Krzysztof Kozlowski,
	moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	moderated list:ARM SMMU DRIVERS, open list,
	open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	Marijn Suijten, Nicolin Chen, Rob Herring (Arm), Robin Murphy,
	Sean Paul, Will Deacon

On Tue, 20 May 2025 at 03:54, Rob Clark <robdclark@gmail.com> wrote:
>
> From: Rob Clark <robdclark@chromium.org>
>
> Conversion to DRM GPU VA Manager[1], and adding support for Vulkan Sparse
> Memory[2] in the form of:
>
> 1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
>    MAP_NULL/UNMAP commands
>
> 2. A new VM_BIND ioctl to allow submitting batches of one or more
>    MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
>
> I did not implement support for synchronous VM_BIND commands.  Since
> userspace could just immediately wait for the `SUBMIT` to complete, I don't
> think we need this extra complexity in the kernel.  Synchronous/immediate
> VM_BIND operations could be implemented with a 2nd VM_BIND submitqueue.

This seems suboptimal for Vulkan userspaces. non-sparse binds are all
synchronous, you are adding an extra ioctl to wait, or do you manage
these via a different mechanism?

Dave.



* Re: [Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support
  2025-05-19 21:15 ` [Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support Dave Airlie
@ 2025-05-19 21:24   ` Rob Clark
  2025-05-19 21:45     ` Dave Airlie
  0 siblings, 1 reply; 8+ messages in thread
From: Rob Clark @ 2025-05-19 21:24 UTC (permalink / raw)
  To: Dave Airlie
  Cc: dri-devel, freedreno, linux-arm-msm, Connor Abbott, Rob Clark,
	Abhinav Kumar, André Almeida, Arnd Bergmann,
	Barnabás Czémán, Christopher Snowhill,
	Dmitry Baryshkov, Dmitry Baryshkov, Eugene Lepshy,
	open list:IOMMU SUBSYSTEM, Jason Gunthorpe, Jessica Zhang,
	Joao Martins, Jonathan Marek, Jun Nie, Kevin Tian, Konrad Dybcio,
	Krzysztof Kozlowski,
	moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	moderated list:ARM SMMU DRIVERS, open list,
	open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	Marijn Suijten, Nicolin Chen, Rob Herring (Arm), Robin Murphy,
	Sean Paul, Will Deacon

On Mon, May 19, 2025 at 2:15 PM Dave Airlie <airlied@gmail.com> wrote:
>
> On Tue, 20 May 2025 at 03:54, Rob Clark <robdclark@gmail.com> wrote:
> >
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Conversion to DRM GPU VA Manager[1], and adding support for Vulkan Sparse
> > Memory[2] in the form of:
> >
> > 1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
> >    MAP_NULL/UNMAP commands
> >
> > 2. A new VM_BIND ioctl to allow submitting batches of one or more
> >    MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
> >
> > I did not implement support for synchronous VM_BIND commands.  Since
> > userspace could just immediately wait for the `SUBMIT` to complete, I don't
> > think we need this extra complexity in the kernel.  Synchronous/immediate
> > VM_BIND operations could be implemented with a 2nd VM_BIND submitqueue.
>
> This seems suboptimal for Vulkan userspaces. non-sparse binds are all
> synchronous, you are adding an extra ioctl to wait, or do you manage
> these via a different mechanism?

Normally it's just an extra in-fence for the SUBMIT ioctl to ensure
the binds happen before cmd execution

When it comes to UAPI, it's easier to add something later, than to
take something away, so I don't see a problem adding synchronous binds
later if that proves to be needed.  But I don't think it is.

BR,
-R



* Re: [Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support
  2025-05-19 21:24   ` Rob Clark
@ 2025-05-19 21:45     ` Dave Airlie
  2025-05-19 21:51       ` Rob Clark
  0 siblings, 1 reply; 8+ messages in thread
From: Dave Airlie @ 2025-05-19 21:45 UTC (permalink / raw)
  To: Rob Clark
  Cc: dri-devel, freedreno, linux-arm-msm, Connor Abbott, Rob Clark,
	Abhinav Kumar, André Almeida, Arnd Bergmann,
	Barnabás Czémán, Christopher Snowhill,
	Dmitry Baryshkov, Dmitry Baryshkov, Eugene Lepshy,
	open list:IOMMU SUBSYSTEM, Jason Gunthorpe, Jessica Zhang,
	Joao Martins, Jonathan Marek, Jun Nie, Kevin Tian, Konrad Dybcio,
	Krzysztof Kozlowski,
	moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	moderated list:ARM SMMU DRIVERS, open list,
	open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	Marijn Suijten, Nicolin Chen, Rob Herring (Arm), Robin Murphy,
	Sean Paul, Will Deacon

On Tue, 20 May 2025 at 07:25, Rob Clark <robdclark@gmail.com> wrote:
>
> On Mon, May 19, 2025 at 2:15 PM Dave Airlie <airlied@gmail.com> wrote:
> >
> > On Tue, 20 May 2025 at 03:54, Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > From: Rob Clark <robdclark@chromium.org>
> > >
> > > Conversion to DRM GPU VA Manager[1], and adding support for Vulkan Sparse
> > > Memory[2] in the form of:
> > >
> > > 1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
> > >    MAP_NULL/UNMAP commands
> > >
> > > 2. A new VM_BIND ioctl to allow submitting batches of one or more
> > >    MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
> > >
> > > I did not implement support for synchronous VM_BIND commands.  Since
> > > userspace could just immediately wait for the `SUBMIT` to complete, I don't
> > > think we need this extra complexity in the kernel.  Synchronous/immediate
> > > VM_BIND operations could be implemented with a 2nd VM_BIND submitqueue.
> >
> > This seems suboptimal for Vulkan userspaces. non-sparse binds are all
> > synchronous, you are adding an extra ioctl to wait, or do you manage
> > these via a different mechanism?
>
> Normally it's just an extra in-fence for the SUBMIT ioctl to ensure
> the binds happen before cmd execution
>
> When it comes to UAPI, it's easier to add something later, than to
> take something away, so I don't see a problem adding synchronous binds
> later if that proves to be needed.  But I don't think it is.

I'm not 100% sure that is conformant behaviour to the vulkan spec,

Two questions come to mind:
1. where is this out fence stored? vulkan being explicit with no
guarantee of threads doing things, seems like you'd need to use a lock
in the vulkan driver to store it, esp if multiple threads bind memory.

2. If it's fine to lazy bind on the hw side, do you also handle the
case where something is bound and immediately freed, where does the
fence go then, do you wait for the fence before destroying things?

Dave.



* Re: [Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support
  2025-05-19 21:45     ` Dave Airlie
@ 2025-05-19 21:51       ` Rob Clark
  2025-05-19 22:23         ` Connor Abbott
  0 siblings, 1 reply; 8+ messages in thread
From: Rob Clark @ 2025-05-19 21:51 UTC (permalink / raw)
  To: Dave Airlie
  Cc: dri-devel, freedreno, linux-arm-msm, Connor Abbott, Rob Clark,
	Abhinav Kumar, André Almeida, Arnd Bergmann,
	Barnabás Czémán, Christopher Snowhill,
	Dmitry Baryshkov, Dmitry Baryshkov, Eugene Lepshy,
	open list:IOMMU SUBSYSTEM, Jason Gunthorpe, Jessica Zhang,
	Joao Martins, Jonathan Marek, Jun Nie, Kevin Tian, Konrad Dybcio,
	Krzysztof Kozlowski,
	moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	moderated list:ARM SMMU DRIVERS, open list,
	open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	Marijn Suijten, Nicolin Chen, Rob Herring (Arm), Robin Murphy,
	Sean Paul, Will Deacon

On Mon, May 19, 2025 at 2:45 PM Dave Airlie <airlied@gmail.com> wrote:
>
> On Tue, 20 May 2025 at 07:25, Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Mon, May 19, 2025 at 2:15 PM Dave Airlie <airlied@gmail.com> wrote:
> > >
> > > On Tue, 20 May 2025 at 03:54, Rob Clark <robdclark@gmail.com> wrote:
> > > >
> > > > From: Rob Clark <robdclark@chromium.org>
> > > >
> > > > Conversion to DRM GPU VA Manager[1], and adding support for Vulkan Sparse
> > > > Memory[2] in the form of:
> > > >
> > > > 1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
> > > >    MAP_NULL/UNMAP commands
> > > >
> > > > 2. A new VM_BIND ioctl to allow submitting batches of one or more
> > > >    MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
> > > >
> > > > I did not implement support for synchronous VM_BIND commands.  Since
> > > > userspace could just immediately wait for the `SUBMIT` to complete, I don't
> > > > think we need this extra complexity in the kernel.  Synchronous/immediate
> > > > VM_BIND operations could be implemented with a 2nd VM_BIND submitqueue.
> > >
> > > This seems suboptimal for Vulkan userspaces. non-sparse binds are all
> > > synchronous, you are adding an extra ioctl to wait, or do you manage
> > > these via a different mechanism?
> >
> > Normally it's just an extra in-fence for the SUBMIT ioctl to ensure
> > the binds happen before cmd execution
> >
> > When it comes to UAPI, it's easier to add something later, than to
> > take something away, so I don't see a problem adding synchronous binds
> > later if that proves to be needed.  But I don't think it is.
>
> I'm not 100% sure that is conformant behaviour to the vulkan spec,
>
> Two questions come to mind:
> 1. where is this out fence stored? vulkan being explicit with no
> guarantee of threads doing things, seems like you'd need to use a lock
> in the vulkan driver to store it, esp if multiple threads bind memory.

turnip protects dev->vm_bind_fence_fd with a u_rwlock

> 2. If it's fine to lazy bind on the hw side, do you also handle the
> case where something is bound and immediately freed, where does the
> fence go then, do you wait for the fence before destroying things?

right, no, turnip is just relying on the UNMAP/unbind going through the
same queue.. but I guess it could also use vm_bind_fence_fd as an
in-fence.

BR,
-R



* Re: [Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support
  2025-05-19 21:51       ` Rob Clark
@ 2025-05-19 22:23         ` Connor Abbott
  0 siblings, 0 replies; 8+ messages in thread
From: Connor Abbott @ 2025-05-19 22:23 UTC (permalink / raw)
  To: Rob Clark
  Cc: Dave Airlie, dri-devel, freedreno, linux-arm-msm, Rob Clark,
	Abhinav Kumar, André Almeida, Arnd Bergmann,
	Barnabás Czémán, Christopher Snowhill,
	Dmitry Baryshkov, Dmitry Baryshkov, Eugene Lepshy,
	open list:IOMMU SUBSYSTEM, Jason Gunthorpe, Jessica Zhang,
	Joao Martins, Jonathan Marek, Jun Nie, Kevin Tian, Konrad Dybcio,
	Krzysztof Kozlowski,
	moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	moderated list:ARM SMMU DRIVERS, open list,
	open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
	Marijn Suijten, Nicolin Chen, Rob Herring (Arm), Robin Murphy,
	Sean Paul, Will Deacon

On Mon, May 19, 2025 at 5:51 PM Rob Clark <robdclark@gmail.com> wrote:
>
> On Mon, May 19, 2025 at 2:45 PM Dave Airlie <airlied@gmail.com> wrote:
> >
> > On Tue, 20 May 2025 at 07:25, Rob Clark <robdclark@gmail.com> wrote:
> > >
> > > On Mon, May 19, 2025 at 2:15 PM Dave Airlie <airlied@gmail.com> wrote:
> > > >
> > > > On Tue, 20 May 2025 at 03:54, Rob Clark <robdclark@gmail.com> wrote:
> > > > >
> > > > > From: Rob Clark <robdclark@chromium.org>
> > > > >
> > > > > Conversion to DRM GPU VA Manager[1], and adding support for Vulkan Sparse
> > > > > Memory[2] in the form of:
> > > > >
> > > > > 1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
> > > > >    MAP_NULL/UNMAP commands
> > > > >
> > > > > 2. A new VM_BIND ioctl to allow submitting batches of one or more
> > > > >    MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
> > > > >
> > > > > I did not implement support for synchronous VM_BIND commands.  Since
> > > > > userspace could just immediately wait for the `SUBMIT` to complete, I don't
> > > > > think we need this extra complexity in the kernel.  Synchronous/immediate
> > > > > VM_BIND operations could be implemented with a 2nd VM_BIND submitqueue.
> > > >
> > > > This seems suboptimal for Vulkan userspaces. non-sparse binds are all
> > > > synchronous, you are adding an extra ioctl to wait, or do you manage
> > > > these via a different mechanism?
> > >
> > > Normally it's just an extra in-fence for the SUBMIT ioctl to ensure
> > > the binds happen before cmd execution
> > >
> > > When it comes to UAPI, it's easier to add something later, than to
> > > take something away, so I don't see a problem adding synchronous binds
> > > later if that proves to be needed.  But I don't think it is.
> >
> > I'm not 100% sure that is conformant behaviour to the vulkan spec,
> >
> > Two questions come to mind:
> > 1. where is this out fence stored? vulkan being explicit with no
> > guarantee of threads doing things, seems like you'd need to use a lock
> > in the vulkan driver to store it, esp if multiple threads bind memory.
>
> turnip protects dev->vm_bind_fence_fd with a u_rwlock

To add to that, it doesn't really matter in what exact order the fence
gets updated, because a Vulkan app can't use anything in a submit until
after we return from the turnip function that allocates + binds the BO
and then the Vulkan-level object is returned to the user. We just have
to make sure that the fence is "new enough" when we return the BO. It
doesn't matter if multiple threads are creating/destroying objects,
the thread doing the VkQueueSubmit() must have observed the creation
of all resources used in the submit and will therefore see a new
enough fence.

>
> > 2. If it's fine to lazy bind on the hw side, do you also handle the
> > case where something is bound and immediately freed, where does the
> > fence go then, do you wait for the fence before destroying things?
>
> right, no, turnip is just relying on the UNMAP/unbind going through the
> same queue.. but I guess it could also use vm_bind_fence_fd as an
> in-fence.
>
> BR,
> -R

Yeah, we always submit all non-sparse map/unmap on the same queue so
they're always synchronized wrt each other. We destroy the GEM object
right away after submitting the final unmap and rely on the kernel to
hold a reference to the BO in the unmap job.

Connor



* Re: [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support
  2025-05-19 17:51 [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support Rob Clark
  2025-05-19 17:51 ` [PATCH v5 05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON() Rob Clark
  2025-05-19 21:15 ` [Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support Dave Airlie
@ 2025-05-20 15:41 ` Will Deacon
  2 siblings, 0 replies; 8+ messages in thread
From: Will Deacon @ 2025-05-20 15:41 UTC (permalink / raw)
  To: dri-devel, Rob Clark
  Cc: catalin.marinas, kernel-team, Will Deacon, freedreno,
	linux-arm-msm, Connor Abbott, Rob Clark, Abhinav Kumar,
	André Almeida, Arnd Bergmann, Barnabás Czémán,
	Christian König, Christopher Snowhill, Dmitry Baryshkov,
	Dmitry Baryshkov, Eugene Lepshy, iommu, Jason Gunthorpe,
	Jessica Zhang, Joao Martins, Jonathan Marek, Jun Nie, Kevin Tian,
	Konrad Dybcio, Krzysztof Kozlowski, linaro-mm-sig,
	linux-arm-kernel, linux-kernel, linux-media, Marijn Suijten,
	Nicolin Chen, Rob Herring (Arm), Robin Murphy, Sean Paul

On Mon, 19 May 2025 10:51:23 -0700, Rob Clark wrote:
> Conversion to DRM GPU VA Manager[1], and adding support for Vulkan Sparse
> Memory[2] in the form of:
> 
> 1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
>    MAP_NULL/UNMAP commands
> 
> 2. A new VM_BIND ioctl to allow submitting batches of one or more
>    MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
> 
> [...]

Applied io-pgtable change to iommu (arm/smmu/updates), thanks!

[05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON()
        https://git.kernel.org/iommu/c/3318f7b5cefb

Cheers,
-- 
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev



end of thread, other threads:[~2025-05-20 16:30 UTC | newest]

Thread overview: 8+ messages -- links below jump to the message on this page --
2025-05-19 17:51 [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support Rob Clark
2025-05-19 17:51 ` [PATCH v5 05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON() Rob Clark
2025-05-19 21:15 ` [Linaro-mm-sig] [PATCH v5 00/40] drm/msm: sparse / "VM_BIND" support Dave Airlie
2025-05-19 21:24   ` Rob Clark
2025-05-19 21:45     ` Dave Airlie
2025-05-19 21:51       ` Rob Clark
2025-05-19 22:23         ` Connor Abbott
2025-05-20 15:41 ` Will Deacon
