* [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support
From: Rob Clark @ 2025-03-19 14:52 UTC
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Abhinav Kumar,
André Almeida, Arnd Bergmann, Barnabás Czémán,
Christopher Snowhill, Dmitry Baryshkov, Eugene Lepshy,
Jani Nikula, Jessica Zhang, Jonathan Marek, Konrad Dybcio,
Krzysztof Kozlowski,
moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
open list,
open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b,
Marijn Suijten, Sean Paul
From: Rob Clark <robdclark@chromium.org>
This series converts drm/msm to the DRM GPU VA Manager[1] and adds support
for Vulkan Sparse Memory[2] in the form of:
1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
MAP_NULL/UNMAP commands
2. Extending the SUBMIT ioctl to allow submitting batches of one or more
MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
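As a rough illustration of (2), a userspace sketch of batching VM updates
through the extended ioctl might look like the following. This is
illustrative only: the op encoding in the bos table is a placeholder, and
the authoritative layout is in the include/uapi/drm/msm_drm.h changes in
this series:

  /* Hypothetical op encoding; see msm_drm.h in this series for the
   * real field layout.
   */
  struct drm_msm_gem_submit_bo ops[2] = {
          { .flags = MSM_SUBMIT_BO_OP_MAP,   .handle = bo_handle },
          { .flags = MSM_SUBMIT_BO_OP_UNMAP, .handle = 0 },
  };
  struct drm_msm_gem_submit req = {
          .queueid = vm_bind_queue,           /* a VM_BIND-type submitqueue */
          .flags   = MSM_SUBMIT_FENCE_FD_OUT, /* fence for the VM updates */
          .bos     = (uint64_t)(uintptr_t)ops,
          .nr_bos  = 2,
  };
  drmIoctl(fd, DRM_IOCTL_MSM_GEM_SUBMIT, &req);
  /* req.fence / req.fence_fd signal once the MAP/UNMAP ops have executed */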
The UABI takes a slightly different approach from what other drivers have
done, and from what would make sense if starting from a clean sheet, i.e.
separate VM_BIND and EXEC ioctls. But since we have to maintain support
for the existing SUBMIT ioctl, and because the fence, syncobj, and BO
pinning handling is largely the same between legacy "BO-table" style
SUBMIT ioctls and new-style VM updates submitted to a VM_BIND
submitqueue, I chose to extend the existing `SUBMIT` ioctl rather than
add a new one.
I also did not implement support for synchronous VM_BIND commands. Since
userspace can simply wait for the `SUBMIT` to complete, I don't think we
need that extra complexity in the kernel. Synchronous/immediate VM_BIND
operations can be implemented in userspace with a second VM_BIND
submitqueue, as sketched below.
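For example, a synchronous bind can be emulated with the existing
wait-fence UAPI (a sketch; `req` is the VM_BIND submit from the earlier
example):

  struct drm_msm_wait_fence wait = {
          .fence   = req.fence,       /* fence id returned by the submit */
          .queueid = vm_bind_queue,
          .timeout = { .tv_sec = 1, .tv_nsec = 0 },
  };
  drmIoctl(fd, DRM_IOCTL_MSM_WAIT_FENCE, &wait);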
The corresponding mesa MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32533
This series can be found in MR form, if you prefer:
https://gitlab.freedesktop.org/drm/msm/-/merge_requests/144
Changes in v2:
- Dropped Bibek Kumar Patro's arm-smmu patches[3], which have since been
merged.
- Pre-allocate all the things, and drop the HACK patch which disabled the
  shrinker. This includes ensuring that vm_bo objects are allocated up
  front, pre-allocating VMA objects, and pre-allocating the pages used for
  pgtable updates. The latter uses the io_pgtable_cfg callbacks for
  pgtable alloc/free that were initially added for panthor (see the sketch
  after the links below).
- Add back support for BO dumping for devcoredump.
- Link to v1 (RFC): https://lore.kernel.org/dri-devel/20241207161651.410556-1-robdclark@gmail.com/T/#t
[1] https://www.kernel.org/doc/html/next/gpu/drm-mm.html#drm-gpuvm
[2] https://docs.vulkan.org/spec/latest/chapters/sparsemem.html
[3] https://patchwork.kernel.org/project/linux-arm-kernel/list/?series=909700
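For reference, the io_pgtable_cfg hooks used for the pgtable-page
pre-allocation look roughly like this. The callback signatures are the
ones added for panthor; the pool helpers and the msm_mmu field are
illustrative, not actual msm functions:

  /* Serve pgtable pages from a pool filled before entering the
   * fence-signaling path, so map/unmap never allocates.
   */
  static void *msm_pgtable_alloc(void *cookie, size_t size, gfp_t gfp)
  {
          struct msm_mmu *mmu = cookie;

          return pool_take(mmu->prealloc, size);
  }

  static void msm_pgtable_free(void *cookie, void *pages, size_t size)
  {
          struct msm_mmu *mmu = cookie;

          pool_give(mmu->prealloc, pages, size);
  }

  /* wired up through struct io_pgtable_cfg before alloc_io_pgtable_ops(): */
  cfg.alloc = msm_pgtable_alloc;
  cfg.free  = msm_pgtable_free;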
Rob Clark (34):
drm/gpuvm: Don't require obj lock in destructor path
drm/gpuvm: Remove bogus lock assert
drm/gpuvm: Allow VAs to hold soft reference to BOs
drm/gpuvm: Add drm_gpuvm_sm_unmap_va()
drm/msm: Rename msm_file_private -> msm_context
drm/msm: Improve msm_context comments
drm/msm: Rename msm_gem_address_space -> msm_gem_vm
drm/msm: Remove vram carveout support
drm/msm: Collapse vma allocation and initialization
drm/msm: Collapse vma close and delete
drm/msm: drm_gpuvm conversion
drm/msm: Use drm_gpuvm types more
drm/msm: Split submit_pin_objects()
drm/msm: Lazily create context VM
drm/msm: Add opt-in for VM_BIND
drm/msm: Mark VM as unusable on faults
drm/msm: Extend SUBMIT ioctl for VM_BIND
drm/msm: Add VM_BIND submitqueue
drm/msm: Add _NO_SHARE flag
drm/msm: Split out helper to get iommu prot flags
drm/msm: Add mmu support for non-zero offset
drm/msm: Add PRR support
drm/msm: Rename msm_gem_vma_purge() -> _unmap()
drm/msm: Split msm_gem_vma_new()
drm/msm: Pre-allocate VMAs
drm/msm: Pre-allocate vm_bo objects
drm/msm: Pre-allocate pages for pgtable entries
drm/msm: Wire up gpuvm ops
drm/msm: Wire up drm_gpuvm debugfs
drm/msm: Crashdump prep for sparse mappings
drm/msm: rd dumping prep for sparse mappings
drm/msm: Crashdec support for sparse
drm/msm: rd dumping support for sparse
drm/msm: Bump UAPI version
drivers/gpu/drm/drm_gpuvm.c | 141 ++--
drivers/gpu/drm/msm/Kconfig | 1 +
drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 25 +-
drivers/gpu/drm/msm/adreno/a2xx_gpummu.c | 5 +-
drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 17 +-
drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 17 +-
drivers/gpu/drm/msm/adreno/a5xx_debugfs.c | 4 +-
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 24 +-
drivers/gpu/drm/msm/adreno/a5xx_power.c | 2 +-
drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 10 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 32 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 51 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 6 +-
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 10 +-
drivers/gpu/drm/msm/adreno/adreno_device.c | 4 -
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 84 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 23 +-
.../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 14 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 18 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 14 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 4 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 6 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 28 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c | 12 +-
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 4 +-
drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 19 +-
drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c | 12 +-
drivers/gpu/drm/msm/dsi/dsi_host.c | 14 +-
drivers/gpu/drm/msm/msm_debugfs.c | 20 +
drivers/gpu/drm/msm/msm_drv.c | 176 ++---
drivers/gpu/drm/msm/msm_drv.h | 35 +-
drivers/gpu/drm/msm/msm_fb.c | 18 +-
drivers/gpu/drm/msm/msm_fbdev.c | 2 +-
drivers/gpu/drm/msm/msm_gem.c | 437 +++++-----
drivers/gpu/drm/msm/msm_gem.h | 226 ++++--
drivers/gpu/drm/msm/msm_gem_prime.c | 15 +
drivers/gpu/drm/msm/msm_gem_submit.c | 234 +++++-
drivers/gpu/drm/msm/msm_gem_vma.c | 748 ++++++++++++++++--
drivers/gpu/drm/msm/msm_gpu.c | 146 ++--
drivers/gpu/drm/msm/msm_gpu.h | 132 +++-
drivers/gpu/drm/msm/msm_iommu.c | 285 ++++++-
drivers/gpu/drm/msm/msm_kms.c | 18 +-
drivers/gpu/drm/msm/msm_kms.h | 2 +-
drivers/gpu/drm/msm/msm_mmu.h | 38 +-
drivers/gpu/drm/msm/msm_rd.c | 62 +-
drivers/gpu/drm/msm/msm_ringbuffer.c | 4 +-
drivers/gpu/drm/msm/msm_submitqueue.c | 86 +-
include/drm/drm_gpuvm.h | 14 +-
include/uapi/drm/msm_drm.h | 98 ++-
52 files changed, 2359 insertions(+), 1060 deletions(-)
--
2.48.1
* [PATCH v2 01/34] drm/gpuvm: Don't require obj lock in destructor path
From: Rob Clark @ 2025-03-19 14:52 UTC
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
open list
From: Rob Clark <robdclark@chromium.org>
See commit a414fe3a2129 ("drm/msm/gem: Drop obj lock in
msm_gem_free_object()") for justification. In the object's free path the
refcount has already dropped to zero, so nothing else can race with
teardown and holding the object lock there is unnecessary (and, per that
commit, a source of lockdep splats); the kref_read() check below skips
the lock assert in exactly that case.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/drm_gpuvm.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f9eb56f24bef..1e89a98caad4 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1511,7 +1511,9 @@ drm_gpuvm_bo_destroy(struct kref *kref)
drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
drm_gpuvm_bo_list_del(vm_bo, evict, lock);
- drm_gem_gpuva_assert_lock_held(obj);
+ if (kref_read(&obj->refcount) > 0)
+ drm_gem_gpuva_assert_lock_held(obj);
+
list_del(&vm_bo->list.entry.gem);
if (ops && ops->vm_bo_free)
@@ -1871,7 +1873,8 @@ drm_gpuva_unlink(struct drm_gpuva *va)
if (unlikely(!obj))
return;
- drm_gem_gpuva_assert_lock_held(obj);
+ if (kref_read(&obj->refcount) > 0)
+ drm_gem_gpuva_assert_lock_held(obj);
list_del_init(&va->gem.entry);
va->vm_bo = NULL;
--
2.48.1
* [PATCH v2 02/34] drm/gpuvm: Remove bogus lock assert
From: Rob Clark @ 2025-03-19 14:52 UTC
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
open list
From: Rob Clark <robdclark@chromium.org>
If the driver is using an external mutex to synchronize VM access, it
doesn't need to hold vm->r_obj->resv. And if the driver is already
holding obj->resv, then having to pointlessly grab vm->r_obj->resv as
well will be seen by lockdep as nested locking, as illustrated below.
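A simplified illustration of the problem (not code from this series; all
dma_resv locks share a single lockdep class):

  /* Driver already holds the BO's resv, e.g. in its teardown path.
   * Grabbing the VM's root-object resv as well, purely to satisfy
   * the assert, looks like recursive locking to lockdep.
   */
  dma_resv_lock(obj->resv, NULL);
  dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL);  /* lockdep splat here */
  drm_gpuvm_bo_put(vm_bo);
  dma_resv_unlock(drm_gpuvm_resv(gpuvm));
  dma_resv_unlock(obj->resv);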
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/drm_gpuvm.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 1e89a98caad4..c9bf18119a86 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1505,9 +1505,6 @@ drm_gpuvm_bo_destroy(struct kref *kref)
struct drm_gem_object *obj = vm_bo->obj;
bool lock = !drm_gpuvm_resv_protected(gpuvm);
- if (!lock)
- drm_gpuvm_resv_assert_held(gpuvm);
-
drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
drm_gpuvm_bo_list_del(vm_bo, evict, lock);
--
2.48.1
* [PATCH v2 03/34] drm/gpuvm: Allow VAs to hold soft reference to BOs
From: Rob Clark @ 2025-03-19 14:52 UTC
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
open list
From: Rob Clark <robdclark@chromium.org>
Eases migration for drivers where VAs don't hold hard references to
their associated BO, avoiding reference loops.
In particular, msm uses soft references to optimistically keep mappings
around until the BO is destroyed, which obviously won't work if the VA
(the mapping) holds a reference to the BO.
By making this a per-VM flag, we can use normal hard-references for
mappings in a "VM_BIND" managed VM, but soft references in other cases,
such as kernel-internal VMs (for display scanout, etc).
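A sketch of how a driver might opt in per-VM (the VM names and the ops
struct are illustrative):

  /* Kernel-managed VM (e.g. display scanout): VAs must not pin the
   * BO, otherwise the BO -> vm_bo -> BO loop keeps it alive forever.
   */
  drm_gpuvm_init(&kms_vm->base, "kms", DRM_GPUVM_VA_WEAK_REF, drm,
                 r_obj, va_start, va_size, 0, 0, &msm_gpuvm_ops);

  /* VM_BIND-managed user VM: normal hard references */
  drm_gpuvm_init(&user_vm->base, "gpu", 0, drm,
                 r_obj, va_start, va_size, 0, 0, &msm_gpuvm_ops);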
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/drm_gpuvm.c | 8 ++++++--
include/drm/drm_gpuvm.h | 12 ++++++++++--
2 files changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index c9bf18119a86..681dc58e9160 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1482,7 +1482,9 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
vm_bo->vm = drm_gpuvm_get(gpuvm);
vm_bo->obj = obj;
- drm_gem_object_get(obj);
+
+ if (!(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF))
+ drm_gem_object_get(obj);
kref_init(&vm_bo->kref);
INIT_LIST_HEAD(&vm_bo->list.gpuva);
@@ -1504,6 +1506,7 @@ drm_gpuvm_bo_destroy(struct kref *kref)
const struct drm_gpuvm_ops *ops = gpuvm->ops;
struct drm_gem_object *obj = vm_bo->obj;
bool lock = !drm_gpuvm_resv_protected(gpuvm);
+ bool unref = !(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
drm_gpuvm_bo_list_del(vm_bo, evict, lock);
@@ -1519,7 +1522,8 @@ drm_gpuvm_bo_destroy(struct kref *kref)
kfree(vm_bo);
drm_gpuvm_put(gpuvm);
- drm_gem_object_put(obj);
+ if (unref)
+ drm_gem_object_put(obj);
}
/**
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 00d4e43b76b6..13ab087a45fa 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -205,10 +205,18 @@ enum drm_gpuvm_flags {
*/
DRM_GPUVM_RESV_PROTECTED = BIT(0),
+ /**
+ * @DRM_GPUVM_VA_WEAK_REF:
+ *
+ * Flag indicating that the &drm_gpuva (or more correctly, the
+ * &drm_gpuvm_bo) only holds a weak reference to the &drm_gem_object.
+ */
+ DRM_GPUVM_VA_WEAK_REF = BIT(1),
+
/**
* @DRM_GPUVM_USERBITS: user defined bits
*/
- DRM_GPUVM_USERBITS = BIT(1),
+ DRM_GPUVM_USERBITS = BIT(2),
};
/**
@@ -651,7 +659,7 @@ struct drm_gpuvm_bo {
/**
* @obj: The &drm_gem_object being mapped in @vm. This is a reference
- * counted pointer.
+ * counted pointer, unless the &DRM_GPUVM_VA_WEAK_REF flag is set.
*/
struct drm_gem_object *obj;
--
2.48.1
* [PATCH v2 04/34] drm/gpuvm: Add drm_gpuvm_sm_unmap_va()
From: Rob Clark @ 2025-03-19 14:52 UTC
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Maarten Lankhorst,
Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
open list
From: Rob Clark <robdclark@chromium.org>
Add drm_gpuvm_sm_unmap_va(), which works like drm_gpuvm_sm_unmap()
except that it operates on a single VA at a time.
This is needed by drm/msm, where lock ordering prevents reusing
drm_gpuvm_sm_unmap() as-is. This helper lets us reuse the logic that
decides between the remap and unmap cases, as sketched below.
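Usage is analogous to drm_gpuvm_sm_unmap(), except the caller supplies
the VA directly (a sketch; the driver callbacks are illustrative):

  static const struct drm_gpuvm_ops msm_vm_ops = {
          .sm_step_remap = msm_vma_remap,  /* VA only partially covered */
          .sm_step_unmap = msm_vma_unmap,  /* VA fully covered */
  };

  /* The driver already holds the VA from its own lookup, so no
   * VM-wide iteration (and its locking) is needed:
   */
  ret = drm_gpuvm_sm_unmap_va(vma, driver_priv, req_addr, req_range);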
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/drm_gpuvm.c | 123 ++++++++++++++++++++++++------------
include/drm/drm_gpuvm.h | 2 +
2 files changed, 85 insertions(+), 40 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 681dc58e9160..7267fcaa8f50 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2244,6 +2244,56 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
req_obj, req_offset);
}
+static int
+__drm_gpuvm_sm_unmap_va(struct drm_gpuva *va, const struct drm_gpuvm_ops *ops,
+ void *priv, u64 req_addr, u64 req_range)
+{
+ struct drm_gpuva_op_map prev = {}, next = {};
+ bool prev_split = false, next_split = false;
+ struct drm_gem_object *obj = va->gem.obj;
+ u64 req_end = req_addr + req_range;
+ u64 offset = va->gem.offset;
+ u64 addr = va->va.addr;
+ u64 range = va->va.range;
+ u64 end = addr + range;
+ int ret;
+
+ if (addr < req_addr) {
+ prev.va.addr = addr;
+ prev.va.range = req_addr - addr;
+ prev.gem.obj = obj;
+ prev.gem.offset = offset;
+
+ prev_split = true;
+ }
+
+ if (end > req_end) {
+ next.va.addr = req_end;
+ next.va.range = end - req_end;
+ next.gem.obj = obj;
+ next.gem.offset = offset + (req_end - addr);
+
+ next_split = true;
+ }
+
+ if (prev_split || next_split) {
+ struct drm_gpuva_op_unmap unmap = { .va = va };
+
+ ret = op_remap_cb(ops, priv,
+ prev_split ? &prev : NULL,
+ next_split ? &next : NULL,
+ &unmap);
+ if (ret)
+ return ret;
+ } else {
+ ret = op_unmap_cb(ops, priv, va, false);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static int
__drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
const struct drm_gpuvm_ops *ops, void *priv,
@@ -2257,46 +2307,9 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
return -EINVAL;
drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
- struct drm_gpuva_op_map prev = {}, next = {};
- bool prev_split = false, next_split = false;
- struct drm_gem_object *obj = va->gem.obj;
- u64 offset = va->gem.offset;
- u64 addr = va->va.addr;
- u64 range = va->va.range;
- u64 end = addr + range;
-
- if (addr < req_addr) {
- prev.va.addr = addr;
- prev.va.range = req_addr - addr;
- prev.gem.obj = obj;
- prev.gem.offset = offset;
-
- prev_split = true;
- }
-
- if (end > req_end) {
- next.va.addr = req_end;
- next.va.range = end - req_end;
- next.gem.obj = obj;
- next.gem.offset = offset + (req_end - addr);
-
- next_split = true;
- }
-
- if (prev_split || next_split) {
- struct drm_gpuva_op_unmap unmap = { .va = va };
-
- ret = op_remap_cb(ops, priv,
- prev_split ? &prev : NULL,
- next_split ? &next : NULL,
- &unmap);
- if (ret)
- return ret;
- } else {
- ret = op_unmap_cb(ops, priv, va, false);
- if (ret)
- return ret;
- }
+ ret = __drm_gpuvm_sm_unmap_va(va, ops, priv, req_addr, req_range);
+ if (ret)
+ return ret;
}
return 0;
@@ -2394,6 +2407,36 @@ drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
}
EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap);
+/**
+ * drm_gpuvm_sm_unmap_va() - like drm_gpuvm_sm_unmap() but operating on a single VA
+ * @va: the &drm_gpuva to unmap from
+ * @priv: pointer to a driver private data structure
+ * @req_addr: the start address of the range to unmap
+ * @req_range: the range of the mappings to unmap
+ *
+ * This function iterates the range of a single VA. It utilizes the
+ * &drm_gpuvm_ops to call back into the driver providing the operations to
+ * unmap and, if required, split the existent mapping.
+ *
+ * There can be at most one unmap or remap operation, depending on whether the
+ * requested unmap range fully covers or partially covers the specified VA.
+ *
+ * Returns: 0 on success or a negative error code
+ */
+int
+drm_gpuvm_sm_unmap_va(struct drm_gpuva *va, void *priv,
+ u64 req_addr, u64 req_range)
+{
+ const struct drm_gpuvm_ops *ops = va->vm->ops;
+
+ if (unlikely(!(ops && ops->sm_step_remap &&
+ ops->sm_step_unmap)))
+ return -EINVAL;
+
+ return __drm_gpuvm_sm_unmap_va(va, ops, priv, req_addr, req_range);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap_va);
+
static struct drm_gpuva_op *
gpuva_op_alloc(struct drm_gpuvm *gpuvm)
{
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 13ab087a45fa..e1f488eb714e 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1213,6 +1213,8 @@ int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
u64 addr, u64 range);
+int drm_gpuvm_sm_unmap_va(struct drm_gpuva *va, void *priv,
+ u64 req_addr, u64 req_range);
void drm_gpuva_map(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va,
--
2.48.1
* [PATCH v2 05/34] drm/msm: Rename msm_file_private -> msm_context
From: Rob Clark @ 2025-03-19 14:52 UTC
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
This is a more descriptive name.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 6 ++--
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 4 +--
drivers/gpu/drm/msm/msm_drv.c | 14 ++++-----
drivers/gpu/drm/msm/msm_gem.c | 2 +-
drivers/gpu/drm/msm/msm_gem_submit.c | 2 +-
drivers/gpu/drm/msm/msm_gpu.c | 4 +--
drivers/gpu/drm/msm/msm_gpu.h | 39 ++++++++++++-------------
drivers/gpu/drm/msm/msm_submitqueue.c | 27 +++++++++--------
9 files changed, 49 insertions(+), 51 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index eeb8b5e582d5..c5294351b1f0 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -111,7 +111,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
{
bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
- struct msm_file_private *ctx = submit->queue->ctx;
+ struct msm_context *ctx = submit->queue->ctx;
struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
phys_addr_t ttbr;
u32 asid;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 59cfed5acace..e0ab5126e390 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -343,7 +343,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
return 0;
}
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
uint32_t param, uint64_t *value, uint32_t *len)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -431,7 +431,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
}
}
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
uint32_t param, uint64_t value, uint32_t len)
{
struct drm_device *drm = gpu->dev;
@@ -477,7 +477,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
case MSM_PARAM_SYSPROF:
if (!capable(CAP_SYS_ADMIN))
return UERR(EPERM, drm, "invalid permissions");
- return msm_file_private_set_sysprof(ctx, gpu, value);
+ return msm_context_set_sysprof(ctx, gpu, value);
default:
return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index a1e2d9e87b75..9fce88461f5a 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -601,9 +601,9 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
}
u64 adreno_private_address_space_size(struct msm_gpu *gpu);
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
uint32_t param, uint64_t *value, uint32_t *len);
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
uint32_t param, uint64_t value, uint32_t len);
const struct firmware *adreno_request_fw(struct adreno_gpu *adreno_gpu,
const char *fwname);
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index c3588dc9e537..29ca24548c67 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -333,7 +333,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
{
static atomic_t ident = ATOMIC_INIT(0);
struct msm_drm_private *priv = dev->dev_private;
- struct msm_file_private *ctx;
+ struct msm_context *ctx;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
@@ -363,23 +363,23 @@ static int msm_open(struct drm_device *dev, struct drm_file *file)
return context_init(dev, file);
}
-static void context_close(struct msm_file_private *ctx)
+static void context_close(struct msm_context *ctx)
{
msm_submitqueue_close(ctx);
- msm_file_private_put(ctx);
+ msm_context_put(ctx);
}
static void msm_postclose(struct drm_device *dev, struct drm_file *file)
{
struct msm_drm_private *priv = dev->dev_private;
- struct msm_file_private *ctx = file->driver_priv;
+ struct msm_context *ctx = file->driver_priv;
/*
* It is not possible to set sysprof param to non-zero if gpu
* is not initialized:
*/
if (priv->gpu)
- msm_file_private_set_sysprof(ctx, priv->gpu, 0);
+ msm_context_set_sysprof(ctx, priv->gpu, 0);
context_close(ctx);
}
@@ -511,7 +511,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
uint64_t *iova)
{
struct msm_drm_private *priv = dev->dev_private;
- struct msm_file_private *ctx = file->driver_priv;
+ struct msm_context *ctx = file->driver_priv;
if (!priv->gpu)
return -EINVAL;
@@ -531,7 +531,7 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
uint64_t iova)
{
struct msm_drm_private *priv = dev->dev_private;
- struct msm_file_private *ctx = file->driver_priv;
+ struct msm_context *ctx = file->driver_priv;
if (!priv->gpu)
return -EINVAL;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index d2f38e1df510..fdeb6cf7eeb5 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -48,7 +48,7 @@ static void update_device_mem(struct msm_drm_private *priv, ssize_t size)
static void update_ctx_mem(struct drm_file *file, ssize_t size)
{
- struct msm_file_private *ctx = file->driver_priv;
+ struct msm_context *ctx = file->driver_priv;
uint64_t ctx_mem = atomic64_add_return(size, &ctx->ctx_mem);
rcu_read_lock(); /* Locks file->pid! */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 3e9aa2cc38ef..16ca6cfac967 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -642,7 +642,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
{
struct msm_drm_private *priv = dev->dev_private;
struct drm_msm_gem_submit *args = data;
- struct msm_file_private *ctx = file->driver_priv;
+ struct msm_context *ctx = file->driver_priv;
struct msm_gem_submit *submit = NULL;
struct msm_gpu *gpu = priv->gpu;
struct msm_gpu_submitqueue *queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index c380d9d9f5af..d786fcfad62f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -148,7 +148,7 @@ int msm_gpu_pm_suspend(struct msm_gpu *gpu)
return 0;
}
-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
struct drm_printer *p)
{
drm_printf(p, "drm-engine-gpu:\t%llu ns\n", ctx->elapsed_ns);
@@ -339,7 +339,7 @@ static void retire_submits(struct msm_gpu *gpu);
static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
{
- struct msm_file_private *ctx = submit->queue->ctx;
+ struct msm_context *ctx = submit->queue->ctx;
struct task_struct *task;
WARN_ON(!mutex_is_locked(&submit->gpu->lock));
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index e25009150579..957d6fb3469d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -22,7 +22,7 @@
struct msm_gem_submit;
struct msm_gpu_perfcntr;
struct msm_gpu_state;
-struct msm_file_private;
+struct msm_context;
struct msm_gpu_config {
const char *ioname;
@@ -44,9 +44,9 @@ struct msm_gpu_config {
* + z180_gpu
*/
struct msm_gpu_funcs {
- int (*get_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+ int (*get_param)(struct msm_gpu *gpu, struct msm_context *ctx,
uint32_t param, uint64_t *value, uint32_t *len);
- int (*set_param)(struct msm_gpu *gpu, struct msm_file_private *ctx,
+ int (*set_param)(struct msm_gpu *gpu, struct msm_context *ctx,
uint32_t param, uint64_t value, uint32_t len);
int (*hw_init)(struct msm_gpu *gpu);
@@ -347,7 +347,7 @@ struct msm_gpu_perfcntr {
#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH)
/**
- * struct msm_file_private - per-drm_file context
+ * struct msm_context - per-drm_file context
*
* @queuelock: synchronizes access to submitqueues list
* @submitqueues: list of &msm_gpu_submitqueue created by userspace
@@ -357,7 +357,7 @@ struct msm_gpu_perfcntr {
* @ref: reference count
* @seqno: unique per process seqno
*/
-struct msm_file_private {
+struct msm_context {
rwlock_t queuelock;
struct list_head submitqueues;
int queueid;
@@ -512,7 +512,7 @@ struct msm_gpu_submitqueue {
u32 ring_nr;
int faults;
uint32_t last_fence;
- struct msm_file_private *ctx;
+ struct msm_context *ctx;
struct list_head node;
struct idr fence_idr;
struct spinlock idr_lock;
@@ -608,33 +608,32 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val)
int msm_gpu_pm_suspend(struct msm_gpu *gpu);
int msm_gpu_pm_resume(struct msm_gpu *gpu);
-void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx,
+void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx,
struct drm_printer *p);
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx);
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx);
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
u32 id);
int msm_submitqueue_create(struct drm_device *drm,
- struct msm_file_private *ctx,
+ struct msm_context *ctx,
u32 prio, u32 flags, u32 *id);
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
struct drm_msm_submitqueue_query *args);
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id);
-void msm_submitqueue_close(struct msm_file_private *ctx);
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id);
+void msm_submitqueue_close(struct msm_context *ctx);
void msm_submitqueue_destroy(struct kref *kref);
-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
- struct msm_gpu *gpu, int sysprof);
-void __msm_file_private_destroy(struct kref *kref);
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof);
+void __msm_context_destroy(struct kref *kref);
-static inline void msm_file_private_put(struct msm_file_private *ctx)
+static inline void msm_context_put(struct msm_context *ctx)
{
- kref_put(&ctx->ref, __msm_file_private_destroy);
+ kref_put(&ctx->ref, __msm_context_destroy);
}
-static inline struct msm_file_private *msm_file_private_get(
- struct msm_file_private *ctx)
+static inline struct msm_context *msm_context_get(
+ struct msm_context *ctx)
{
kref_get(&ctx->ref);
return ctx;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 7fed1de63b5d..1acc0fe36353 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -7,8 +7,7 @@
#include "msm_gpu.h"
-int msm_file_private_set_sysprof(struct msm_file_private *ctx,
- struct msm_gpu *gpu, int sysprof)
+int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof)
{
/*
* Since pm_runtime and sysprof_active are both refcounts, we
@@ -46,10 +45,10 @@ int msm_file_private_set_sysprof(struct msm_file_private *ctx,
return 0;
}
-void __msm_file_private_destroy(struct kref *kref)
+void __msm_context_destroy(struct kref *kref)
{
- struct msm_file_private *ctx = container_of(kref,
- struct msm_file_private, ref);
+ struct msm_context *ctx = container_of(kref,
+ struct msm_context, ref);
int i;
for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) {
@@ -73,12 +72,12 @@ void msm_submitqueue_destroy(struct kref *kref)
idr_destroy(&queue->fence_idr);
- msm_file_private_put(queue->ctx);
+ msm_context_put(queue->ctx);
kfree(queue);
}
-struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
+struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx,
u32 id)
{
struct msm_gpu_submitqueue *entry;
@@ -101,7 +100,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx,
return NULL;
}
-void msm_submitqueue_close(struct msm_file_private *ctx)
+void msm_submitqueue_close(struct msm_context *ctx)
{
struct msm_gpu_submitqueue *entry, *tmp;
@@ -119,7 +118,7 @@ void msm_submitqueue_close(struct msm_file_private *ctx)
}
static struct drm_sched_entity *
-get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
+get_sched_entity(struct msm_context *ctx, struct msm_ringbuffer *ring,
unsigned ring_nr, enum drm_sched_priority sched_prio)
{
static DEFINE_MUTEX(entity_lock);
@@ -155,7 +154,7 @@ get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring,
return ctx->entities[idx];
}
-int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
u32 prio, u32 flags, u32 *id)
{
struct msm_drm_private *priv = drm->dev_private;
@@ -200,7 +199,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
write_lock(&ctx->queuelock);
- queue->ctx = msm_file_private_get(ctx);
+ queue->ctx = msm_context_get(ctx);
queue->id = ctx->queueid++;
if (id)
@@ -221,7 +220,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
* Create the default submit-queue (id==0), used for backwards compatibility
* for userspace that pre-dates the introduction of submitqueues.
*/
-int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx)
+int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx)
{
struct msm_drm_private *priv = drm->dev_private;
int default_prio, max_priority;
@@ -261,7 +260,7 @@ static int msm_submitqueue_query_faults(struct msm_gpu_submitqueue *queue,
return ret ? -EFAULT : 0;
}
-int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
+int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx,
struct drm_msm_submitqueue_query *args)
{
struct msm_gpu_submitqueue *queue;
@@ -282,7 +281,7 @@ int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx,
return ret;
}
-int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id)
+int msm_submitqueue_remove(struct msm_context *ctx, u32 id)
{
struct msm_gpu_submitqueue *entry;
--
2.48.1
* [PATCH v2 06/34] drm/msm: Improve msm_context comments
From: Rob Clark @ 2025-03-19 14:52 UTC
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Just some tidying up.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gpu.h | 44 +++++++++++++++++++++++------------
1 file changed, 29 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 957d6fb3469d..c699ce0c557b 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -348,25 +348,39 @@ struct msm_gpu_perfcntr {
/**
* struct msm_context - per-drm_file context
- *
- * @queuelock: synchronizes access to submitqueues list
- * @submitqueues: list of &msm_gpu_submitqueue created by userspace
- * @queueid: counter incremented each time a submitqueue is created,
- * used to assign &msm_gpu_submitqueue.id
- * @aspace: the per-process GPU address-space
- * @ref: reference count
- * @seqno: unique per process seqno
*/
struct msm_context {
+ /** @queuelock: synchronizes access to submitqueues list */
rwlock_t queuelock;
+
+ /** @submitqueues: list of &msm_gpu_submitqueue created by userspace */
struct list_head submitqueues;
+
+ /**
+ * @queueid:
+ *
+ * Counter incremented each time a submitqueue is created, used to
+ * assign &msm_gpu_submitqueue.id
+ */
int queueid;
+
+ /** @aspace: the per-process GPU address-space */
struct msm_gem_address_space *aspace;
+
+ /** @kref: the reference count */
struct kref ref;
+
+ /**
+ * @seqno:
+ *
+ * A unique per-process sequence number. Used to detect context
+ * switches, without relying on keeping a, potentially dangling,
+ * pointer to the previous context.
+ */
int seqno;
/**
- * sysprof:
+ * @sysprof:
*
* The value of MSM_PARAM_SYSPROF set by userspace. This is
* intended to be used by system profiling tools like Mesa's
@@ -384,21 +398,21 @@ struct msm_context {
int sysprof;
/**
- * comm: Overridden task comm, see MSM_PARAM_COMM
+ * @comm: Overridden task comm, see MSM_PARAM_COMM
*
* Accessed under msm_gpu::lock
*/
char *comm;
/**
- * cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
+ * @cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE
*
* Accessed under msm_gpu::lock
*/
char *cmdline;
/**
- * elapsed:
+ * @elapsed:
*
* The total (cumulative) elapsed time GPU was busy with rendering
* from this context in ns.
@@ -406,7 +420,7 @@ struct msm_context {
uint64_t elapsed_ns;
/**
- * cycles:
+ * @cycles:
*
* The total (cumulative) GPU cycles elapsed attributed to this
* context.
@@ -414,7 +428,7 @@ struct msm_context {
uint64_t cycles;
/**
- * entities:
+ * @entities:
*
* Table of per-priority-level sched entities used by submitqueues
* associated with this &drm_file. Because some userspace apps
@@ -427,7 +441,7 @@ struct msm_context {
struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS];
/**
- * ctx_mem:
+ * @ctx_mem:
*
* Total amount of memory of GEM buffers with handles attached for
* this context.
--
2.48.1
* [PATCH v2 07/34] drm/msm: Rename msm_gem_address_space -> msm_gem_vm
From: Rob Clark @ 2025-03-19 14:52 UTC
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, Jessica Zhang, Jani Nikula,
Barnabás Czémán, Arnd Bergmann, André Almeida,
Christopher Snowhill, Jonathan Marek, Krzysztof Kozlowski,
Eugene Lepshy, open list
From: Rob Clark <robdclark@chromium.org>
Re-aligning naming to better match drm_gpuvm terminology will make
things less confusing at the end of the drm_gpuvm conversion.
This is just rename churn, no functional change.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 18 ++--
drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 4 +-
drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 4 +-
drivers/gpu/drm/msm/adreno/a5xx_debugfs.c | 4 +-
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 22 ++---
drivers/gpu/drm/msm/adreno/a5xx_power.c | 2 +-
drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 10 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 26 +++---
drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 45 +++++----
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 6 +-
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 10 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 47 +++++-----
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 18 ++--
.../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 14 +--
drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 ++--
drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 18 ++--
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 14 +--
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 4 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 6 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 24 ++---
drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c | 12 +--
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 4 +-
drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 18 ++--
drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c | 12 +--
drivers/gpu/drm/msm/dsi/dsi_host.c | 14 +--
drivers/gpu/drm/msm/msm_drv.c | 8 +-
drivers/gpu/drm/msm/msm_drv.h | 10 +-
drivers/gpu/drm/msm/msm_fb.c | 10 +-
drivers/gpu/drm/msm/msm_fbdev.c | 2 +-
drivers/gpu/drm/msm/msm_gem.c | 74 +++++++--------
drivers/gpu/drm/msm/msm_gem.h | 34 +++----
drivers/gpu/drm/msm/msm_gem_submit.c | 6 +-
drivers/gpu/drm/msm/msm_gem_vma.c | 93 +++++++++----------
drivers/gpu/drm/msm/msm_gpu.c | 48 +++++-----
drivers/gpu/drm/msm/msm_gpu.h | 16 ++--
drivers/gpu/drm/msm/msm_kms.c | 16 ++--
drivers/gpu/drm/msm/msm_kms.h | 2 +-
drivers/gpu/drm/msm/msm_ringbuffer.c | 4 +-
drivers/gpu/drm/msm/msm_submitqueue.c | 2 +-
41 files changed, 349 insertions(+), 354 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 379a3d346c30..5eb063ed0b46 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu)
uint32_t *ptr, len;
int i, ret;
- a2xx_gpummu_params(gpu->aspace->mmu, &pt_base, &tran_error);
+ a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error);
DBG("%s", gpu->name);
@@ -466,19 +466,19 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu)
return state;
}
-static struct msm_gem_address_space *
-a2xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
{
struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
- aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M,
+ vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
0xfff * SZ_64K);
- if (IS_ERR(aspace) && !IS_ERR(mmu))
+ if (IS_ERR(vm) && !IS_ERR(mmu))
mmu->funcs->destroy(mmu);
- return aspace;
+ return vm;
}
static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -504,7 +504,7 @@ static const struct adreno_gpu_funcs funcs = {
#endif
.gpu_state_get = a2xx_gpu_state_get,
.gpu_state_put = adreno_gpu_state_put,
- .create_address_space = a2xx_create_address_space,
+ .create_vm = a2xx_create_vm,
.get_rptr = a2xx_get_rptr,
},
};
@@ -551,7 +551,7 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
else
adreno_gpu->registers = a220_registers;
- if (!gpu->aspace) {
+ if (!gpu->vm) {
dev_err(dev->dev, "No memory protection without MMU\n");
if (!allow_vram_carveout) {
ret = -ENXIO;
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index b6df115bb567..434e6ededf83 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -526,7 +526,7 @@ static const struct adreno_gpu_funcs funcs = {
.gpu_busy = a3xx_gpu_busy,
.gpu_state_get = a3xx_gpu_state_get,
.gpu_state_put = adreno_gpu_state_put,
- .create_address_space = adreno_create_address_space,
+ .create_vm = adreno_create_vm,
.get_rptr = a3xx_get_rptr,
},
};
@@ -581,7 +581,7 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
goto fail;
}
- if (!gpu->aspace) {
+ if (!gpu->vm) {
/* TODO we think it is possible to configure the GPU to
* restrict access to VRAM carveout. But the required
* registers are unknown. For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index f1b18a6663f7..2c75debcfd84 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -645,7 +645,7 @@ static const struct adreno_gpu_funcs funcs = {
.gpu_busy = a4xx_gpu_busy,
.gpu_state_get = a4xx_gpu_state_get,
.gpu_state_put = adreno_gpu_state_put,
- .create_address_space = adreno_create_address_space,
+ .create_vm = adreno_create_vm,
.get_rptr = a4xx_get_rptr,
},
.get_timestamp = a4xx_get_timestamp,
@@ -695,7 +695,7 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
- if (!gpu->aspace) {
+ if (!gpu->vm) {
/* TODO we think it is possible to configure the GPU to
* restrict access to VRAM carveout. But the required
* registers are unknown. For now just bail out and
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
index 169b8fe688f8..625a4e787d8f 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c
@@ -116,13 +116,13 @@ reset_set(void *data, u64 val)
adreno_gpu->fw[ADRENO_FW_PFP] = NULL;
if (a5xx_gpu->pm4_bo) {
- msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+ msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
drm_gem_object_put(a5xx_gpu->pm4_bo);
a5xx_gpu->pm4_bo = NULL;
}
if (a5xx_gpu->pfp_bo) {
- msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+ msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
drm_gem_object_put(a5xx_gpu->pfp_bo);
a5xx_gpu->pfp_bo = NULL;
}
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 670141531112..cce95ad3cfb8 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -622,7 +622,7 @@ static int a5xx_ucode_load(struct msm_gpu *gpu)
a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
sizeof(u32) * gpu->nr_rings,
MSM_BO_WC | MSM_BO_MAP_PRIV,
- gpu->aspace, &a5xx_gpu->shadow_bo,
+ gpu->vm, &a5xx_gpu->shadow_bo,
&a5xx_gpu->shadow_iova);
if (IS_ERR(a5xx_gpu->shadow))
@@ -1042,22 +1042,22 @@ static void a5xx_destroy(struct msm_gpu *gpu)
a5xx_preempt_fini(gpu);
if (a5xx_gpu->pm4_bo) {
- msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace);
+ msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm);
drm_gem_object_put(a5xx_gpu->pm4_bo);
}
if (a5xx_gpu->pfp_bo) {
- msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace);
+ msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm);
drm_gem_object_put(a5xx_gpu->pfp_bo);
}
if (a5xx_gpu->gpmu_bo) {
- msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->aspace);
+ msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->vm);
drm_gem_object_put(a5xx_gpu->gpmu_bo);
}
if (a5xx_gpu->shadow_bo) {
- msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->aspace);
+ msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->vm);
drm_gem_object_put(a5xx_gpu->shadow_bo);
}
@@ -1457,7 +1457,7 @@ static int a5xx_crashdumper_init(struct msm_gpu *gpu,
struct a5xx_crashdumper *dumper)
{
dumper->ptr = msm_gem_kernel_new(gpu->dev,
- SZ_1M, MSM_BO_WC, gpu->aspace,
+ SZ_1M, MSM_BO_WC, gpu->vm,
&dumper->bo, &dumper->iova);
if (!IS_ERR(dumper->ptr))
@@ -1557,7 +1557,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
if (a5xx_crashdumper_run(gpu, &dumper)) {
kfree(a5xx_state->hlsqregs);
- msm_gem_kernel_put(dumper.bo, gpu->aspace);
+ msm_gem_kernel_put(dumper.bo, gpu->vm);
return;
}
@@ -1565,7 +1565,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu,
memcpy(a5xx_state->hlsqregs, dumper.ptr + (256 * SZ_1K),
count * sizeof(u32));
- msm_gem_kernel_put(dumper.bo, gpu->aspace);
+ msm_gem_kernel_put(dumper.bo, gpu->vm);
}
static struct msm_gpu_state *a5xx_gpu_state_get(struct msm_gpu *gpu)
@@ -1713,7 +1713,7 @@ static const struct adreno_gpu_funcs funcs = {
.gpu_busy = a5xx_gpu_busy,
.gpu_state_get = a5xx_gpu_state_get,
.gpu_state_put = a5xx_gpu_state_put,
- .create_address_space = adreno_create_address_space,
+ .create_vm = adreno_create_vm,
.get_rptr = a5xx_get_rptr,
},
.get_timestamp = a5xx_get_timestamp,
@@ -1786,8 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
return ERR_PTR(ret);
}
- if (gpu->aspace)
- msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, a5xx_fault_handler);
+ if (gpu->vm)
+ msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
/* Set up the preemption specific bits and pieces for each ringbuffer */
a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
index 6b91e0bd1514..d6da7351cfbb 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
@@ -363,7 +363,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
ptr = msm_gem_kernel_new(drm, bosize,
- MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace,
+ MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm,
&a5xx_gpu->gpmu_bo, &a5xx_gpu->gpmu_iova);
if (IS_ERR(ptr))
return;
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index 0469fea55010..5f9e2eb80a2c 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -254,7 +254,7 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
ptr = msm_gem_kernel_new(gpu->dev,
A5XX_PREEMPT_RECORD_SIZE + A5XX_PREEMPT_COUNTER_SIZE,
- MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+ MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
if (IS_ERR(ptr))
return PTR_ERR(ptr);
@@ -262,9 +262,9 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
/* The buffer to store counters needs to be unprivileged */
counters = msm_gem_kernel_new(gpu->dev,
A5XX_PREEMPT_COUNTER_SIZE,
- MSM_BO_WC, gpu->aspace, &counters_bo, &counters_iova);
+ MSM_BO_WC, gpu->vm, &counters_bo, &counters_iova);
if (IS_ERR(counters)) {
- msm_gem_kernel_put(bo, gpu->aspace);
+ msm_gem_kernel_put(bo, gpu->vm);
return PTR_ERR(counters);
}
@@ -295,8 +295,8 @@ void a5xx_preempt_fini(struct msm_gpu *gpu)
int i;
for (i = 0; i < gpu->nr_rings; i++) {
- msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->aspace);
- msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->aspace);
+ msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->vm);
+ msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->vm);
}
}
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 3d2c5661dbee..4c459ae25cba 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1259,15 +1259,15 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
{
- msm_gem_kernel_put(gmu->hfi.obj, gmu->aspace);
- msm_gem_kernel_put(gmu->debug.obj, gmu->aspace);
- msm_gem_kernel_put(gmu->icache.obj, gmu->aspace);
- msm_gem_kernel_put(gmu->dcache.obj, gmu->aspace);
- msm_gem_kernel_put(gmu->dummy.obj, gmu->aspace);
- msm_gem_kernel_put(gmu->log.obj, gmu->aspace);
-
- gmu->aspace->mmu->funcs->detach(gmu->aspace->mmu);
- msm_gem_address_space_put(gmu->aspace);
+ msm_gem_kernel_put(gmu->hfi.obj, gmu->vm);
+ msm_gem_kernel_put(gmu->debug.obj, gmu->vm);
+ msm_gem_kernel_put(gmu->icache.obj, gmu->vm);
+ msm_gem_kernel_put(gmu->dcache.obj, gmu->vm);
+ msm_gem_kernel_put(gmu->dummy.obj, gmu->vm);
+ msm_gem_kernel_put(gmu->log.obj, gmu->vm);
+
+ gmu->vm->mmu->funcs->detach(gmu->vm->mmu);
+ msm_gem_vm_put(gmu->vm);
}
static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
@@ -1296,7 +1296,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
if (IS_ERR(bo->obj))
return PTR_ERR(bo->obj);
- ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova,
+ ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->vm, &bo->iova,
range_start, range_end);
if (ret) {
drm_gem_object_put(bo->obj);
@@ -1321,9 +1321,9 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
if (IS_ERR(mmu))
return PTR_ERR(mmu);
- gmu->aspace = msm_gem_address_space_create(mmu, "gmu", 0x0, 0x80000000);
- if (IS_ERR(gmu->aspace))
- return PTR_ERR(gmu->aspace);
+ gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000);
+ if (IS_ERR(gmu->vm))
+ return PTR_ERR(gmu->vm);
return 0;
}
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index 39fb8c774a79..cceda7d9c33a 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -62,7 +62,7 @@ struct a6xx_gmu {
/* For serializing communication with the GMU: */
struct mutex lock;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
void __iomem *mmio;
void __iomem *rscc;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index c5294351b1f0..fa63149bf73f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
if (ctx->seqno == ring->cur_ctx_seqno)
return;
- if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid))
+ if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid))
return;
if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
@@ -957,7 +957,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw");
if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) {
- msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+ msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
drm_gem_object_put(a6xx_gpu->sqe_bo);
a6xx_gpu->sqe_bo = NULL;
@@ -974,7 +974,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev,
sizeof(u32) * gpu->nr_rings,
MSM_BO_WC | MSM_BO_MAP_PRIV,
- gpu->aspace, &a6xx_gpu->shadow_bo,
+ gpu->vm, &a6xx_gpu->shadow_bo,
&a6xx_gpu->shadow_iova);
if (IS_ERR(a6xx_gpu->shadow))
@@ -985,7 +985,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu)
a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE,
MSM_BO_WC | MSM_BO_MAP_PRIV,
- gpu->aspace, &a6xx_gpu->pwrup_reglist_bo,
+ gpu->vm, &a6xx_gpu->pwrup_reglist_bo,
&a6xx_gpu->pwrup_reglist_iova);
if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr))
@@ -2198,12 +2198,12 @@ static void a6xx_destroy(struct msm_gpu *gpu)
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
if (a6xx_gpu->sqe_bo) {
- msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);
+ msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm);
drm_gem_object_put(a6xx_gpu->sqe_bo);
}
if (a6xx_gpu->shadow_bo) {
- msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->aspace);
+ msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->vm);
drm_gem_object_put(a6xx_gpu->shadow_bo);
}
@@ -2243,8 +2243,8 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
mutex_unlock(&a6xx_gpu->gmu.lock);
}
-static struct msm_gem_address_space *
-a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
+static struct msm_gem_vm *
+a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu);
@@ -2258,22 +2258,22 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev)
!device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY))
quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;
- return adreno_iommu_create_address_space(gpu, pdev, quirks);
+ return adreno_iommu_create_vm(gpu, pdev, quirks);
}
-static struct msm_gem_address_space *
-a6xx_create_private_address_space(struct msm_gpu *gpu)
+static struct msm_gem_vm *
+a6xx_create_private_vm(struct msm_gpu *gpu)
{
struct msm_mmu *mmu;
- mmu = msm_iommu_pagetable_create(gpu->aspace->mmu);
+ mmu = msm_iommu_pagetable_create(gpu->vm->mmu);
if (IS_ERR(mmu))
return ERR_CAST(mmu);
- return msm_gem_address_space_create(mmu,
+ return msm_gem_vm_create(mmu,
"gpu", 0x100000000ULL,
- adreno_private_address_space_size(gpu));
+ adreno_private_vm_size(gpu));
}
static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
@@ -2390,8 +2390,8 @@ static const struct adreno_gpu_funcs funcs = {
.gpu_state_get = a6xx_gpu_state_get,
.gpu_state_put = a6xx_gpu_state_put,
#endif
- .create_address_space = a6xx_create_address_space,
- .create_private_address_space = a6xx_create_private_address_space,
+ .create_vm = a6xx_create_vm,
+ .create_private_vm = a6xx_create_private_vm,
.get_rptr = a6xx_get_rptr,
.progress = a6xx_progress,
},
@@ -2419,8 +2419,8 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = {
.gpu_state_get = a6xx_gpu_state_get,
.gpu_state_put = a6xx_gpu_state_put,
#endif
- .create_address_space = a6xx_create_address_space,
- .create_private_address_space = a6xx_create_private_address_space,
+ .create_vm = a6xx_create_vm,
+ .create_private_vm = a6xx_create_private_vm,
.get_rptr = a6xx_get_rptr,
.progress = a6xx_progress,
},
@@ -2450,8 +2450,8 @@ static const struct adreno_gpu_funcs funcs_a7xx = {
.gpu_state_get = a6xx_gpu_state_get,
.gpu_state_put = a6xx_gpu_state_put,
#endif
- .create_address_space = a6xx_create_address_space,
- .create_private_address_space = a6xx_create_private_address_space,
+ .create_vm = a6xx_create_vm,
+ .create_private_vm = a6xx_create_private_vm,
.get_rptr = a6xx_get_rptr,
.progress = a6xx_progress,
},
@@ -2547,9 +2547,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
- if (gpu->aspace)
- msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu,
- a6xx_fault_handler);
+ if (gpu->vm)
+ msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
a6xx_calc_ubwc_config(adreno_gpu);
/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
index 341a72a67401..ff06bb75b76d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
@@ -132,7 +132,7 @@ static int a6xx_crashdumper_init(struct msm_gpu *gpu,
struct a6xx_crashdumper *dumper)
{
dumper->ptr = msm_gem_kernel_new(gpu->dev,
- SZ_1M, MSM_BO_WC, gpu->aspace,
+ SZ_1M, MSM_BO_WC, gpu->vm,
&dumper->bo, &dumper->iova);
if (!IS_ERR(dumper->ptr))
@@ -1619,7 +1619,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
a7xx_get_clusters(gpu, a6xx_state, dumper);
a7xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
- msm_gem_kernel_put(dumper->bo, gpu->aspace);
+ msm_gem_kernel_put(dumper->bo, gpu->vm);
}
a7xx_get_post_crashdumper_registers(gpu, a6xx_state);
@@ -1631,7 +1631,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu)
a6xx_get_clusters(gpu, a6xx_state, dumper);
a6xx_get_dbgahb_clusters(gpu, a6xx_state, dumper);
- msm_gem_kernel_put(dumper->bo, gpu->aspace);
+ msm_gem_kernel_put(dumper->bo, gpu->vm);
}
}
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index 2fd4e39f618f..41229c60aa06 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -343,7 +343,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
ptr = msm_gem_kernel_new(gpu->dev,
PREEMPT_RECORD_SIZE(adreno_gpu),
- MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova);
+ MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova);
if (IS_ERR(ptr))
return PTR_ERR(ptr);
@@ -361,7 +361,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
ptr = msm_gem_kernel_new(gpu->dev,
PREEMPT_SMMU_INFO_SIZE,
MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
- gpu->aspace, &bo, &iova);
+ gpu->vm, &bo, &iova);
if (IS_ERR(ptr))
return PTR_ERR(ptr);
@@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
struct a7xx_cp_smmu_info *smmu_info_ptr = ptr;
- msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid);
+ msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid);
smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
smmu_info_ptr->ttbr0 = ttbr;
@@ -404,7 +404,7 @@ void a6xx_preempt_fini(struct msm_gpu *gpu)
int i;
for (i = 0; i < gpu->nr_rings; i++)
- msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace);
+ msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->vm);
}
void a6xx_preempt_init(struct msm_gpu *gpu)
@@ -430,7 +430,7 @@ void a6xx_preempt_init(struct msm_gpu *gpu)
a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev,
PAGE_SIZE,
MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY,
- gpu->aspace, &a6xx_gpu->preempt_postamble_bo,
+ gpu->vm, &a6xx_gpu->preempt_postamble_bo,
&a6xx_gpu->preempt_postamble_iova);
preempt_prepare_postamble(a6xx_gpu);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index e0ab5126e390..ffbbf3d5ce2f 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid)
return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid);
}
-struct msm_gem_address_space *
-adreno_create_address_space(struct msm_gpu *gpu,
- struct platform_device *pdev)
+struct msm_gem_vm *
+adreno_create_vm(struct msm_gpu *gpu,
+ struct platform_device *pdev)
{
- return adreno_iommu_create_address_space(gpu, pdev, 0);
+ return adreno_iommu_create_vm(gpu, pdev, 0);
}
-struct msm_gem_address_space *
-adreno_iommu_create_address_space(struct msm_gpu *gpu,
- struct platform_device *pdev,
- unsigned long quirks)
+struct msm_gem_vm *
+adreno_iommu_create_vm(struct msm_gpu *gpu,
+ struct platform_device *pdev,
+ unsigned long quirks)
{
struct iommu_domain_geometry *geometry;
struct msm_mmu *mmu;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
u64 start, size;
mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
@@ -224,16 +224,15 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu,
start = max_t(u64, SZ_16M, geometry->aperture_start);
size = geometry->aperture_end - start + 1;
- aspace = msm_gem_address_space_create(mmu, "gpu",
- start & GENMASK_ULL(48, 0), size);
+ vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size);
- if (IS_ERR(aspace) && !IS_ERR(mmu))
+ if (IS_ERR(vm) && !IS_ERR(mmu))
mmu->funcs->destroy(mmu);
- return aspace;
+ return vm;
}
-u64 adreno_private_address_space_size(struct msm_gpu *gpu)
+u64 adreno_private_vm_size(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -261,7 +260,7 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu)
!READ_ONCE(gpu->crashstate)) {
adreno_gpu->stall_enabled = true;
- gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true);
+ gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true);
}
spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags);
}
@@ -289,7 +288,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
if (adreno_gpu->stall_enabled) {
adreno_gpu->stall_enabled = false;
- gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false);
+ gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false);
}
adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500);
spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags);
@@ -299,7 +298,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
* it now.
*/
if (!do_devcoredump) {
- gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu);
+ gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu);
}
/*
@@ -393,8 +392,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
*value = 0;
return 0;
case MSM_PARAM_FAULTS:
- if (ctx->aspace)
- *value = gpu->global_faults + ctx->aspace->faults;
+ if (ctx->vm)
+ *value = gpu->global_faults + ctx->vm->faults;
else
*value = gpu->global_faults;
return 0;
@@ -402,14 +401,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
*value = gpu->suspend_count;
return 0;
case MSM_PARAM_VA_START:
- if (ctx->aspace == gpu->aspace)
+ if (ctx->vm == gpu->vm)
return UERR(EINVAL, drm, "requires per-process pgtables");
- *value = ctx->aspace->va_start;
+ *value = ctx->vm->va_start;
return 0;
case MSM_PARAM_VA_SIZE:
- if (ctx->aspace == gpu->aspace)
+ if (ctx->vm == gpu->vm)
return UERR(EINVAL, drm, "requires per-process pgtables");
- *value = ctx->aspace->va_size;
+ *value = ctx->vm->va_size;
return 0;
case MSM_PARAM_HIGHEST_BANK_BIT:
*value = adreno_gpu->ubwc_config.highest_bank_bit;
@@ -599,7 +598,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu,
void *ptr;
ptr = msm_gem_kernel_new(gpu->dev, fw->size - 4,
- MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, &bo, iova);
+ MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &bo, iova);
if (IS_ERR(ptr))
return ERR_CAST(ptr);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 9fce88461f5a..eaebcb108b5e 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -600,7 +600,7 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
adreno_is_a740_family(gpu);
}
-u64 adreno_private_address_space_size(struct msm_gpu *gpu);
+u64 adreno_private_vm_size(struct msm_gpu *gpu);
int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
uint32_t param, uint64_t *value, uint32_t *len);
int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
@@ -643,14 +643,14 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len,
* Common helper function to initialize the default address space for arm-smmu
* attached targets
*/
-struct msm_gem_address_space *
-adreno_create_address_space(struct msm_gpu *gpu,
- struct platform_device *pdev);
-
-struct msm_gem_address_space *
-adreno_iommu_create_address_space(struct msm_gpu *gpu,
- struct platform_device *pdev,
- unsigned long quirks);
+struct msm_gem_vm *
+adreno_create_vm(struct msm_gpu *gpu,
+ struct platform_device *pdev);
+
+struct msm_gem_vm *
+adreno_iommu_create_vm(struct msm_gpu *gpu,
+ struct platform_device *pdev,
+ unsigned long quirks);
int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
struct adreno_smmu_fault_info *info, const char *block,
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
index 849fea580a4c..32e208ee946d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
@@ -566,7 +566,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
struct drm_writeback_job *job)
{
const struct msm_format *format;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
struct dpu_hw_wb_cfg *wb_cfg;
int ret;
struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
@@ -576,13 +576,13 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
wb_enc->wb_job = job;
wb_enc->wb_conn = job->connector;
- aspace = phys_enc->dpu_kms->base.aspace;
+ vm = phys_enc->dpu_kms->base.vm;
wb_cfg = &wb_enc->wb_cfg;
memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg));
- ret = msm_framebuffer_prepare(job->fb, aspace, false);
+ ret = msm_framebuffer_prepare(job->fb, vm, false);
if (ret) {
DPU_ERROR("prep fb failed, %d\n", ret);
return;
@@ -596,7 +596,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
return;
}
- dpu_format_populate_addrs(aspace, job->fb, &wb_cfg->dest);
+ dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest);
wb_cfg->dest.width = job->fb->width;
wb_cfg->dest.height = job->fb->height;
@@ -619,14 +619,14 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc
struct drm_writeback_job *job)
{
struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
if (!job->fb)
return;
- aspace = phys_enc->dpu_kms->base.aspace;
+ vm = phys_enc->dpu_kms->base.vm;
- msm_framebuffer_cleanup(job->fb, aspace, false);
+ msm_framebuffer_cleanup(job->fb, vm, false);
wb_enc->wb_job = NULL;
wb_enc->wb_conn = NULL;
}
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
index 59c9427da7dd..d115b79af771 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
@@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes(
return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout);
}
-static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace,
+static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
struct drm_framebuffer *fb,
struct dpu_hw_fmt_layout *layout)
{
@@ -282,7 +282,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace
uint32_t base_addr = 0;
bool meta;
- base_addr = msm_framebuffer_iova(fb, aspace, 0);
+ base_addr = msm_framebuffer_iova(fb, vm, 0);
fmt = msm_framebuffer_format(fb);
meta = MSM_FORMAT_IS_UBWC(fmt);
@@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace
}
}
-static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspace,
+static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm,
struct drm_framebuffer *fb,
struct dpu_hw_fmt_layout *layout)
{
@@ -363,17 +363,17 @@ static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspa
/* Populate addresses for simple formats here */
for (i = 0; i < layout->num_planes; ++i)
- layout->plane_addr[i] = msm_framebuffer_iova(fb, aspace, i);
-}
+ layout->plane_addr[i] = msm_framebuffer_iova(fb, vm, i);
+}
/**
* dpu_format_populate_addrs - populate buffer addresses based on
* mmu, fb, and format found in the fb
- * @aspace: address space pointer
+ * @vm: address space pointer
* @fb: framebuffer pointer
* @layout: format layout structure to populate
*/
-void dpu_format_populate_addrs(struct msm_gem_address_space *aspace,
+void dpu_format_populate_addrs(struct msm_gem_vm *vm,
struct drm_framebuffer *fb,
struct dpu_hw_fmt_layout *layout)
{
@@ -384,7 +384,7 @@ void dpu_format_populate_addrs(struct msm_gem_address_space *aspace,
/* Populate the addresses given the fb */
if (MSM_FORMAT_IS_UBWC(fmt) ||
MSM_FORMAT_IS_TILE(fmt))
- _dpu_format_populate_addrs_ubwc(aspace, fb, layout);
+ _dpu_format_populate_addrs_ubwc(vm, fb, layout);
else
- _dpu_format_populate_addrs_linear(aspace, fb, layout);
+ _dpu_format_populate_addrs_linear(vm, fb, layout);
}
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
index c6145d43aa3f..989f3e13c497 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
@@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats,
return false;
}
-void dpu_format_populate_addrs(struct msm_gem_address_space *aspace,
+void dpu_format_populate_addrs(struct msm_gem_vm *vm,
struct drm_framebuffer *fb,
struct dpu_hw_fmt_layout *layout);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index 3305ad0623ca..bb5db6da636a 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -1095,26 +1095,26 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms)
{
struct msm_mmu *mmu;
- if (!dpu_kms->base.aspace)
+ if (!dpu_kms->base.vm)
return;
- mmu = dpu_kms->base.aspace->mmu;
+ mmu = dpu_kms->base.vm->mmu;
mmu->funcs->detach(mmu);
- msm_gem_address_space_put(dpu_kms->base.aspace);
+ msm_gem_vm_put(dpu_kms->base.vm);
- dpu_kms->base.aspace = NULL;
+ dpu_kms->base.vm = NULL;
}
static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms)
{
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
- aspace = msm_kms_init_aspace(dpu_kms->dev);
- if (IS_ERR(aspace))
- return PTR_ERR(aspace);
+ vm = msm_kms_init_vm(dpu_kms->dev);
+ if (IS_ERR(vm))
+ return PTR_ERR(vm);
- dpu_kms->base.aspace = aspace;
+ dpu_kms->base.vm = vm;
return 0;
}
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index af3e541f60c3..92a249b2ef5f 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -71,7 +71,7 @@ static const uint32_t qcom_compressed_supported_formats[] = {
/*
* struct dpu_plane - local dpu plane structure
- * @aspace: address space pointer
+ * @vm: address space pointer
* @csc_ptr: Points to dpu_csc_cfg structure to use for current
* @catalog: Points to dpu catalog structure
* @revalidate: force revalidation of all the plane properties
@@ -654,8 +654,8 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id);
- /* cache aspace */
- pstate->aspace = kms->base.aspace;
+ /* cache vm */
+ pstate->vm = kms->base.vm;
/*
* TODO: Need to sort out the msm_framebuffer_prepare() call below so
@@ -664,9 +664,9 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane,
*/
drm_gem_plane_helper_prepare_fb(plane, new_state);
- if (pstate->aspace) {
+ if (pstate->vm) {
ret = msm_framebuffer_prepare(new_state->fb,
- pstate->aspace, pstate->needs_dirtyfb);
+ pstate->vm, pstate->needs_dirtyfb);
if (ret) {
DPU_ERROR("failed to prepare framebuffer\n");
return ret;
@@ -689,7 +689,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plane,
DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id);
- msm_framebuffer_cleanup(old_state->fb, old_pstate->aspace,
+ msm_framebuffer_cleanup(old_state->fb, old_pstate->vm,
old_pstate->needs_dirtyfb);
}
@@ -1349,7 +1349,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane,
pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe);
pdpu->is_rt_pipe = is_rt_pipe;
- dpu_format_populate_addrs(pstate->aspace, new_state->fb, &pstate->layout);
+ dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout);
DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT
", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src),
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
index acd5725175cd..3578f52048a5 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
@@ -17,7 +17,7 @@
/**
* struct dpu_plane_state: Define dpu extension of drm plane state object
* @base: base drm plane state object
- * @aspace: pointer to address space for input/output buffers
+ * @vm: pointer to address space for input/output buffers
* @pipe: software pipe description
* @r_pipe: software pipe description of the second pipe
* @pipe_cfg: software pipe configuration
@@ -34,7 +34,7 @@
*/
struct dpu_plane_state {
struct drm_plane_state base;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
struct dpu_sw_pipe pipe;
struct dpu_sw_pipe r_pipe;
struct dpu_sw_pipe_cfg pipe_cfg;
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
index b8610aa806ea..0133c0c01a0b 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c
@@ -120,7 +120,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val)
struct mdp4_kms *mdp4_kms = get_kms(&mdp4_crtc->base);
struct msm_kms *kms = &mdp4_kms->base.base;
- msm_gem_unpin_iova(val, kms->aspace);
+ msm_gem_unpin_iova(val, kms->vm);
drm_gem_object_put(val);
}
@@ -369,7 +369,7 @@ static void update_cursor(struct drm_crtc *crtc)
if (next_bo) {
/* take a obj ref + iova ref when we start scanning out: */
drm_gem_object_get(next_bo);
- msm_gem_get_and_pin_iova(next_bo, kms->aspace, &iova);
+ msm_gem_get_and_pin_iova(next_bo, kms->vm, &iova);
/* enable cursor: */
mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_SIZE(dma),
@@ -427,7 +427,7 @@ static int mdp4_crtc_cursor_set(struct drm_crtc *crtc,
}
if (cursor_bo) {
- ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, &iova);
+ ret = msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &iova);
if (ret)
goto fail;
} else {
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
index c469e66cfc11..94fbc20b2fbd 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
@@ -120,15 +120,15 @@ static void mdp4_destroy(struct msm_kms *kms)
{
struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
struct device *dev = mdp4_kms->dev->dev;
- struct msm_gem_address_space *aspace = kms->aspace;
+ struct msm_gem_vm *vm = kms->vm;
if (mdp4_kms->blank_cursor_iova)
- msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->aspace);
+ msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm);
drm_gem_object_put(mdp4_kms->blank_cursor_bo);
- if (aspace) {
- aspace->mmu->funcs->detach(aspace->mmu);
- msm_gem_address_space_put(aspace);
+ if (vm) {
+ vm->mmu->funcs->detach(vm->mmu);
+ msm_gem_vm_put(vm);
}
if (mdp4_kms->rpm_enabled)
@@ -380,7 +380,7 @@ static int mdp4_kms_init(struct drm_device *dev)
struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms));
struct msm_kms *kms = NULL;
struct msm_mmu *mmu;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
int ret;
u32 major, minor;
unsigned long max_clk;
@@ -449,19 +449,19 @@ static int mdp4_kms_init(struct drm_device *dev)
} else if (!mmu) {
DRM_DEV_INFO(dev->dev, "no iommu, fallback to phys "
"contig buffers for scanout\n");
- aspace = NULL;
+ vm = NULL;
} else {
- aspace = msm_gem_address_space_create(mmu,
+ vm = msm_gem_vm_create(mmu,
"mdp4", 0x1000, 0x100000000 - 0x1000);
- if (IS_ERR(aspace)) {
+ if (IS_ERR(vm)) {
if (!IS_ERR(mmu))
mmu->funcs->destroy(mmu);
- ret = PTR_ERR(aspace);
+ ret = PTR_ERR(vm);
goto fail;
}
- kms->aspace = aspace;
+ kms->vm = vm;
}
ret = modeset_init(mdp4_kms);
@@ -478,7 +478,7 @@ static int mdp4_kms_init(struct drm_device *dev)
goto fail;
}
- ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->aspace,
+ ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->vm,
&mdp4_kms->blank_cursor_iova);
if (ret) {
DRM_DEV_ERROR(dev->dev, "could not pin blank-cursor bo: %d\n", ret);
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
index 3fefb2088008..7743be6167f8 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c
@@ -87,7 +87,7 @@ static int mdp4_plane_prepare_fb(struct drm_plane *plane,
drm_gem_plane_helper_prepare_fb(plane, new_state);
- return msm_framebuffer_prepare(new_state->fb, kms->aspace, false);
+ return msm_framebuffer_prepare(new_state->fb, kms->vm, false);
}
static void mdp4_plane_cleanup_fb(struct drm_plane *plane,
@@ -102,7 +102,7 @@ static void mdp4_plane_cleanup_fb(struct drm_plane *plane,
return;
DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id);
- msm_framebuffer_cleanup(fb, kms->aspace, false);
+ msm_framebuffer_cleanup(fb, kms->vm, false);
}
@@ -153,13 +153,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane,
MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3]));
mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe),
- msm_framebuffer_iova(fb, kms->aspace, 0));
+ msm_framebuffer_iova(fb, kms->vm, 0));
mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe),
- msm_framebuffer_iova(fb, kms->aspace, 1));
+ msm_framebuffer_iova(fb, kms->vm, 1));
mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe),
- msm_framebuffer_iova(fb, kms->aspace, 2));
+ msm_framebuffer_iova(fb, kms->vm, 2));
mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe),
- msm_framebuffer_iova(fb, kms->aspace, 3));
+ msm_framebuffer_iova(fb, kms->vm, 3));
}
static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms,
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
index 0f653e62b4a0..298861f373b0 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
@@ -169,7 +169,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val)
struct mdp5_kms *mdp5_kms = get_kms(&mdp5_crtc->base);
struct msm_kms *kms = &mdp5_kms->base.base;
- msm_gem_unpin_iova(val, kms->aspace);
+ msm_gem_unpin_iova(val, kms->vm);
drm_gem_object_put(val);
}
@@ -993,7 +993,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc,
if (!cursor_bo)
return -ENOENT;
- ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace,
+ ret = msm_gem_get_and_pin_iova(cursor_bo, kms->vm,
&mdp5_crtc->cursor.iova);
if (ret) {
drm_gem_object_put(cursor_bo);
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
index 3fcca7a3d82e..9dca0385a42d 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
@@ -198,11 +198,11 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms);
static void mdp5_kms_destroy(struct msm_kms *kms)
{
struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
- struct msm_gem_address_space *aspace = kms->aspace;
+ struct msm_gem_vm *vm = kms->vm;
- if (aspace) {
- aspace->mmu->funcs->detach(aspace->mmu);
- msm_gem_address_space_put(aspace);
+ if (vm) {
+ vm->mmu->funcs->detach(vm->mmu);
+ msm_gem_vm_put(vm);
}
mdp_kms_destroy(&mdp5_kms->base);
@@ -500,7 +500,7 @@ static int mdp5_kms_init(struct drm_device *dev)
struct mdp5_kms *mdp5_kms;
struct mdp5_cfg *config;
struct msm_kms *kms = priv->kms;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
int i, ret;
ret = mdp5_init(to_platform_device(dev->dev), dev);
@@ -534,13 +534,13 @@ static int mdp5_kms_init(struct drm_device *dev)
}
mdelay(16);
- aspace = msm_kms_init_aspace(mdp5_kms->dev);
- if (IS_ERR(aspace)) {
- ret = PTR_ERR(aspace);
+ vm = msm_kms_init_vm(mdp5_kms->dev);
+ if (IS_ERR(vm)) {
+ ret = PTR_ERR(vm);
goto fail;
}
- kms->aspace = aspace;
+ kms->vm = vm;
pm_runtime_put_sync(&pdev->dev);
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
index bb1601921938..9f68a4747203 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c
@@ -144,7 +144,7 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plane,
drm_gem_plane_helper_prepare_fb(plane, new_state);
- return msm_framebuffer_prepare(new_state->fb, kms->aspace, needs_dirtyfb);
+ return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb);
}
static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
@@ -159,7 +159,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *plane,
return;
DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id);
- msm_framebuffer_cleanup(fb, kms->aspace, needed_dirtyfb);
+ msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb);
}
static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state,
@@ -478,13 +478,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms,
MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3]));
mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe),
- msm_framebuffer_iova(fb, kms->aspace, 0));
+ msm_framebuffer_iova(fb, kms->vm, 0));
mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe),
- msm_framebuffer_iova(fb, kms->aspace, 1));
+ msm_framebuffer_iova(fb, kms->vm, 1));
mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe),
- msm_framebuffer_iova(fb, kms->aspace, 2));
+ msm_framebuffer_iova(fb, kms->vm, 2));
mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe),
- msm_framebuffer_iova(fb, kms->aspace, 3));
+ msm_framebuffer_iova(fb, kms->vm, 3));
}
/* Note: mdp5_plane->pipe_lock must be locked */
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 4d75529c0e85..16335ebd21e4 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -143,7 +143,7 @@ struct msm_dsi_host {
/* DSI 6G TX buffer*/
struct drm_gem_object *tx_gem_obj;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
/* DSI v2 TX buffer */
void *tx_buf;
@@ -1146,10 +1146,10 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size)
uint64_t iova;
u8 *data;
- msm_host->aspace = msm_gem_address_space_get(priv->kms->aspace);
+ msm_host->vm = msm_gem_vm_get(priv->kms->vm);
data = msm_gem_kernel_new(dev, size, MSM_BO_WC,
- msm_host->aspace,
+ msm_host->vm,
&msm_host->tx_gem_obj, &iova);
if (IS_ERR(data)) {
@@ -1193,10 +1193,10 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host)
return;
if (msm_host->tx_gem_obj) {
- msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->aspace);
- msm_gem_address_space_put(msm_host->aspace);
+ msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm);
+ msm_gem_vm_put(msm_host->vm);
msm_host->tx_gem_obj = NULL;
- msm_host->aspace = NULL;
+ msm_host->vm = NULL;
}
if (msm_host->tx_buf)
@@ -1327,7 +1327,7 @@ int dsi_dma_base_get_6g(struct msm_dsi_host *msm_host, uint64_t *dma_base)
return -EINVAL;
return msm_gem_get_and_pin_iova(msm_host->tx_gem_obj,
- priv->kms->aspace, dma_base);
+ priv->kms->vm, dma_base);
}
int dsi_dma_base_get_v2(struct msm_dsi_host *msm_host, uint64_t *dma_base)
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 29ca24548c67..903abf3532e0 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -345,7 +345,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
kref_init(&ctx->ref);
msm_submitqueue_init(dev, ctx);
- ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current);
+ ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
file->driver_priv = ctx;
ctx->seqno = atomic_inc_return(&ident);
@@ -523,7 +523,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
* Don't pin the memory here - just get an address so that userspace can
* be productive
*/
- return msm_gem_get_iova(obj, ctx->aspace, iova);
+ return msm_gem_get_iova(obj, ctx->vm, iova);
}
static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
@@ -537,13 +537,13 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
return -EINVAL;
/* Only supported if per-process address space is supported: */
- if (priv->gpu->aspace == ctx->aspace)
+ if (priv->gpu->vm == ctx->vm)
return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
if (should_fail(&fail_gem_iova, obj->size))
return -ENOMEM;
- return msm_gem_set_iova(obj, ctx->aspace, iova);
+ return msm_gem_set_iova(obj, ctx->vm, iova);
}
static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index a65077855201..0e675c9a7f83 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -48,7 +48,7 @@ struct msm_rd_state;
struct msm_perf_state;
struct msm_gem_submit;
struct msm_fence_context;
-struct msm_gem_address_space;
+struct msm_gem_vm;
struct msm_gem_vma;
struct msm_disp_state;
@@ -241,7 +241,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc);
int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu);
void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu);
-struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev);
+struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev);
bool msm_use_mmu(struct drm_device *dev);
int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
@@ -263,11 +263,11 @@ int msm_gem_prime_pin(struct drm_gem_object *obj);
void msm_gem_prime_unpin(struct drm_gem_object *obj);
int msm_framebuffer_prepare(struct drm_framebuffer *fb,
- struct msm_gem_address_space *aspace, bool needs_dirtyfb);
+ struct msm_gem_vm *vm, bool needs_dirtyfb);
void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
- struct msm_gem_address_space *aspace, bool needed_dirtyfb);
+ struct msm_gem_vm *vm, bool needed_dirtyfb);
uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
- struct msm_gem_address_space *aspace, int plane);
+ struct msm_gem_vm *vm, int plane);
struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane);
const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb);
struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev,
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 09268e416843..6df318b73534 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -76,7 +76,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
/* prepare/pin all the fb's bo's for scanout.
*/
int msm_framebuffer_prepare(struct drm_framebuffer *fb,
- struct msm_gem_address_space *aspace,
+ struct msm_gem_vm *vm,
bool needs_dirtyfb)
{
struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
@@ -88,7 +88,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,
atomic_inc(&msm_fb->prepare_count);
for (i = 0; i < n; i++) {
- ret = msm_gem_get_and_pin_iova(fb->obj[i], aspace, &msm_fb->iova[i]);
+ ret = msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]);
drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n",
fb->base.id, i, msm_fb->iova[i], ret);
if (ret)
@@ -99,7 +99,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,
}
void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
- struct msm_gem_address_space *aspace,
+ struct msm_gem_vm *vm,
bool needed_dirtyfb)
{
struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
@@ -109,14 +109,14 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
refcount_dec(&msm_fb->dirtyfb);
for (i = 0; i < n; i++)
- msm_gem_unpin_iova(fb->obj[i], aspace);
+ msm_gem_unpin_iova(fb->obj[i], vm);
if (!atomic_dec_return(&msm_fb->prepare_count))
memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
}
uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
- struct msm_gem_address_space *aspace, int plane)
+ struct msm_gem_vm *vm, int plane)
{
struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
return msm_fb->iova[plane] + fb->offsets[plane];
diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c
index c62249b1ab3d..b5969374d53f 100644
--- a/drivers/gpu/drm/msm/msm_fbdev.c
+++ b/drivers/gpu/drm/msm/msm_fbdev.c
@@ -122,7 +122,7 @@ int msm_fbdev_driver_fbdev_probe(struct drm_fb_helper *helper,
* in panic (ie. lock-safe, etc) we could avoid pinning the
* buffer now:
*/
- ret = msm_gem_get_and_pin_iova(bo, priv->kms->aspace, &paddr);
+ ret = msm_gem_get_and_pin_iova(bo, priv->kms->vm, &paddr);
if (ret) {
DRM_DEV_ERROR(dev->dev, "failed to get buffer obj iova: %d\n", ret);
goto fail;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index fdeb6cf7eeb5..07a30d29248c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -402,14 +402,14 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
}
static struct msm_gem_vma *add_vma(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace)
+ struct msm_gem_vm *vm)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
struct msm_gem_vma *vma;
msm_gem_assert_locked(obj);
- vma = msm_gem_vma_new(aspace);
+ vma = msm_gem_vma_new(vm);
if (!vma)
return ERR_PTR(-ENOMEM);
@@ -419,7 +419,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj,
}
static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace)
+ struct msm_gem_vm *vm)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
struct msm_gem_vma *vma;
@@ -427,7 +427,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
msm_gem_assert_locked(obj);
list_for_each_entry(vma, &msm_obj->vmas, list) {
- if (vma->aspace == aspace)
+ if (vma->vm == vm)
return vma;
}
@@ -458,7 +458,7 @@ put_iova_spaces(struct drm_gem_object *obj, bool close)
msm_gem_assert_locked(obj);
list_for_each_entry(vma, &msm_obj->vmas, list) {
- if (vma->aspace) {
+ if (vma->vm) {
msm_gem_vma_purge(vma);
if (close)
msm_gem_vma_close(vma);
@@ -481,19 +481,19 @@ put_iova_vmas(struct drm_gem_object *obj)
}
static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace,
+ struct msm_gem_vm *vm,
u64 range_start, u64 range_end)
{
struct msm_gem_vma *vma;
msm_gem_assert_locked(obj);
- vma = lookup_vma(obj, aspace);
+ vma = lookup_vma(obj, vm);
if (!vma) {
int ret;
- vma = add_vma(obj, aspace);
+ vma = add_vma(obj, vm);
if (IS_ERR(vma))
return vma;
@@ -569,13 +569,13 @@ void msm_gem_unpin_active(struct drm_gem_object *obj)
}
struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace)
+ struct msm_gem_vm *vm)
{
- return get_vma_locked(obj, aspace, 0, U64_MAX);
+ return get_vma_locked(obj, vm, 0, U64_MAX);
}
static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace, uint64_t *iova,
+ struct msm_gem_vm *vm, uint64_t *iova,
u64 range_start, u64 range_end)
{
struct msm_gem_vma *vma;
@@ -583,7 +583,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
msm_gem_assert_locked(obj);
- vma = get_vma_locked(obj, aspace, range_start, range_end);
+ vma = get_vma_locked(obj, vm, range_start, range_end);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -601,13 +601,13 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
* limits iova to specified range (in pages)
*/
int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace, uint64_t *iova,
+ struct msm_gem_vm *vm, uint64_t *iova,
u64 range_start, u64 range_end)
{
int ret;
msm_gem_lock(obj);
- ret = get_and_pin_iova_range_locked(obj, aspace, iova, range_start, range_end);
+ ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end);
msm_gem_unlock(obj);
return ret;
@@ -615,9 +615,9 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
/* get iova and pin it. Should have a matching put */
int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace, uint64_t *iova)
+ struct msm_gem_vm *vm, uint64_t *iova)
{
- return msm_gem_get_and_pin_iova_range(obj, aspace, iova, 0, U64_MAX);
+ return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX);
}
/*
@@ -625,13 +625,13 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
* valid for the life of the object
*/
int msm_gem_get_iova(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace, uint64_t *iova)
+ struct msm_gem_vm *vm, uint64_t *iova)
{
struct msm_gem_vma *vma;
int ret = 0;
msm_gem_lock(obj);
- vma = get_vma_locked(obj, aspace, 0, U64_MAX);
+ vma = get_vma_locked(obj, vm, 0, U64_MAX);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
} else {
@@ -643,9 +643,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
}
static int clear_iova(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace)
+ struct msm_gem_vm *vm)
{
- struct msm_gem_vma *vma = lookup_vma(obj, aspace);
+ struct msm_gem_vma *vma = lookup_vma(obj, vm);
if (!vma)
return 0;
@@ -665,20 +665,20 @@ static int clear_iova(struct drm_gem_object *obj,
* Setting an iova of zero will clear the vma.
*/
int msm_gem_set_iova(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace, uint64_t iova)
+ struct msm_gem_vm *vm, uint64_t iova)
{
int ret = 0;
msm_gem_lock(obj);
if (!iova) {
- ret = clear_iova(obj, aspace);
+ ret = clear_iova(obj, vm);
} else {
struct msm_gem_vma *vma;
- vma = get_vma_locked(obj, aspace, iova, iova + obj->size);
+ vma = get_vma_locked(obj, vm, iova, iova + obj->size);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
} else if (GEM_WARN_ON(vma->iova != iova)) {
- clear_iova(obj, aspace);
+ clear_iova(obj, vm);
ret = -EBUSY;
}
}
@@ -693,12 +693,12 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
* to get rid of it
*/
void msm_gem_unpin_iova(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace)
+ struct msm_gem_vm *vm)
{
struct msm_gem_vma *vma;
msm_gem_lock(obj);
- vma = lookup_vma(obj, aspace);
+ vma = lookup_vma(obj, vm);
if (!GEM_WARN_ON(!vma)) {
msm_gem_unpin_locked(obj);
}
@@ -1016,23 +1016,23 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
list_for_each_entry(vma, &msm_obj->vmas, list) {
const char *name, *comm;
- if (vma->aspace) {
- struct msm_gem_address_space *aspace = vma->aspace;
+ if (vma->vm) {
+ struct msm_gem_vm *vm = vma->vm;
struct task_struct *task =
- get_pid_task(aspace->pid, PIDTYPE_PID);
+ get_pid_task(vm->pid, PIDTYPE_PID);
if (task) {
comm = kstrdup(task->comm, GFP_KERNEL);
put_task_struct(task);
} else {
comm = NULL;
}
- name = aspace->name;
+ name = vm->name;
} else {
name = comm = NULL;
}
- seq_printf(m, " [%s%s%s: aspace=%p, %08llx,%s]",
+ seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]",
name, comm ? ":" : "", comm ? comm : "",
- vma->aspace, vma->iova,
+ vma->vm, vma->iova,
vma->mapped ? "mapped" : "unmapped");
kfree(comm);
}
@@ -1357,7 +1357,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
}
void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
- uint32_t flags, struct msm_gem_address_space *aspace,
+ uint32_t flags, struct msm_gem_vm *vm,
struct drm_gem_object **bo, uint64_t *iova)
{
void *vaddr;
@@ -1368,14 +1368,14 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
return ERR_CAST(obj);
if (iova) {
- ret = msm_gem_get_and_pin_iova(obj, aspace, iova);
+ ret = msm_gem_get_and_pin_iova(obj, vm, iova);
if (ret)
goto err;
}
vaddr = msm_gem_get_vaddr(obj);
if (IS_ERR(vaddr)) {
- msm_gem_unpin_iova(obj, aspace);
+ msm_gem_unpin_iova(obj, vm);
ret = PTR_ERR(vaddr);
goto err;
}
@@ -1392,13 +1392,13 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
}
void msm_gem_kernel_put(struct drm_gem_object *bo,
- struct msm_gem_address_space *aspace)
+ struct msm_gem_vm *vm)
{
if (IS_ERR_OR_NULL(bo))
return;
msm_gem_put_vaddr(bo);
- msm_gem_unpin_iova(bo, aspace);
+ msm_gem_unpin_iova(bo, vm);
drm_gem_object_put(bo);
}
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 85f0257e83da..d2f39a371373 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -22,7 +22,7 @@
#define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */
#define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */
-struct msm_gem_address_space {
+struct msm_gem_vm {
const char *name;
/* NOTE: mm managed at the page level, size is in # of pages
* and position mm_node->start is in # of pages:
@@ -47,13 +47,13 @@ struct msm_gem_address_space {
uint64_t va_size;
};
-struct msm_gem_address_space *
-msm_gem_address_space_get(struct msm_gem_address_space *aspace);
+struct msm_gem_vm *
+msm_gem_vm_get(struct msm_gem_vm *vm);
-void msm_gem_address_space_put(struct msm_gem_address_space *aspace);
+void msm_gem_vm_put(struct msm_gem_vm *vm);
-struct msm_gem_address_space *
-msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
+struct msm_gem_vm *
+msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
u64 va_start, u64 size);
struct msm_fence_context;
@@ -61,12 +61,12 @@ struct msm_fence_context;
struct msm_gem_vma {
struct drm_mm_node node;
uint64_t iova;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
struct list_head list; /* node in msm_gem_object::vmas */
bool mapped;
};
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace);
+struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm);
int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
u64 range_start, u64 range_end);
void msm_gem_vma_purge(struct msm_gem_vma *vma);
@@ -127,18 +127,18 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
void msm_gem_unpin_locked(struct drm_gem_object *obj);
void msm_gem_unpin_active(struct drm_gem_object *obj);
struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace);
+ struct msm_gem_vm *vm);
int msm_gem_get_iova(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace, uint64_t *iova);
+ struct msm_gem_vm *vm, uint64_t *iova);
int msm_gem_set_iova(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace, uint64_t iova);
+ struct msm_gem_vm *vm, uint64_t iova);
int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace, uint64_t *iova,
+ struct msm_gem_vm *vm, uint64_t *iova,
u64 range_start, u64 range_end);
int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace, uint64_t *iova);
+ struct msm_gem_vm *vm, uint64_t *iova);
void msm_gem_unpin_iova(struct drm_gem_object *obj,
- struct msm_gem_address_space *aspace);
+ struct msm_gem_vm *vm);
void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
@@ -160,10 +160,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
struct drm_gem_object *msm_gem_new(struct drm_device *dev,
uint32_t size, uint32_t flags);
void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
- uint32_t flags, struct msm_gem_address_space *aspace,
+ uint32_t flags, struct msm_gem_vm *vm,
struct drm_gem_object **bo, uint64_t *iova);
void msm_gem_kernel_put(struct drm_gem_object *bo,
- struct msm_gem_address_space *aspace);
+ struct msm_gem_vm *vm);
struct drm_gem_object *msm_gem_import(struct drm_device *dev,
struct dma_buf *dmabuf, struct sg_table *sgt);
__printf(2, 3)
@@ -257,7 +257,7 @@ struct msm_gem_submit {
struct kref ref;
struct drm_device *dev;
struct msm_gpu *gpu;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
struct list_head node; /* node in ring submit list */
struct drm_exec exec;
uint32_t seqno; /* Sequence number of the submit on the ring */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 16ca6cfac967..95da4714fffb 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
kref_init(&submit->ref);
submit->dev = dev;
- submit->aspace = queue->ctx->aspace;
+ submit->vm = queue->ctx->vm;
submit->gpu = gpu;
submit->cmd = (void *)&submit->bos[nr_bos];
submit->queue = queue;
@@ -302,7 +302,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
struct msm_gem_vma *vma;
/* if locking succeeded, pin bo: */
- vma = msm_gem_get_vma_locked(obj, submit->aspace);
+ vma = msm_gem_get_vma_locked(obj, submit->vm);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
break;
@@ -659,7 +659,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (args->pad)
return -EINVAL;
- if (unlikely(!ctx->aspace) && !capable(CAP_SYS_RAWIO)) {
+ if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) {
DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n");
return -EPERM;
}
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 11e842dda73c..9419692f0cc8 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -10,45 +10,44 @@
#include "msm_mmu.h"
static void
-msm_gem_address_space_destroy(struct kref *kref)
+msm_gem_vm_destroy(struct kref *kref)
{
- struct msm_gem_address_space *aspace = container_of(kref,
- struct msm_gem_address_space, kref);
-
- drm_mm_takedown(&aspace->mm);
- if (aspace->mmu)
- aspace->mmu->funcs->destroy(aspace->mmu);
- put_pid(aspace->pid);
- kfree(aspace);
+ struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref);
+
+ drm_mm_takedown(&vm->mm);
+ if (vm->mmu)
+ vm->mmu->funcs->destroy(vm->mmu);
+ put_pid(vm->pid);
+ kfree(vm);
}
-void msm_gem_address_space_put(struct msm_gem_address_space *aspace)
+void msm_gem_vm_put(struct msm_gem_vm *vm)
{
- if (aspace)
- kref_put(&aspace->kref, msm_gem_address_space_destroy);
+ if (vm)
+ kref_put(&vm->kref, msm_gem_vm_destroy);
}
-struct msm_gem_address_space *
-msm_gem_address_space_get(struct msm_gem_address_space *aspace)
+struct msm_gem_vm *
+msm_gem_vm_get(struct msm_gem_vm *vm)
{
- if (!IS_ERR_OR_NULL(aspace))
- kref_get(&aspace->kref);
+ if (!IS_ERR_OR_NULL(vm))
+ kref_get(&vm->kref);
- return aspace;
+ return vm;
}
/* Actually unmap memory for the vma */
void msm_gem_vma_purge(struct msm_gem_vma *vma)
{
- struct msm_gem_address_space *aspace = vma->aspace;
+ struct msm_gem_vm *vm = vma->vm;
unsigned size = vma->node.size;
/* Don't do anything if the memory isn't mapped */
if (!vma->mapped)
return;
- aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size);
+ vm->mmu->funcs->unmap(vm->mmu, vma->iova, size);
vma->mapped = false;
}
@@ -58,7 +57,7 @@ int
msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
struct sg_table *sgt, int size)
{
- struct msm_gem_address_space *aspace = vma->aspace;
+ struct msm_gem_vm *vm = vma->vm;
int ret;
if (GEM_WARN_ON(!vma->iova))
@@ -69,7 +68,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
vma->mapped = true;
- if (!aspace)
+ if (!vm)
return 0;
/*
@@ -81,7 +80,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
* Revisit this if we can come up with a scheme to pre-alloc pages
* for the pgtable in map/unmap ops.
*/
- ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot);
+ ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot);
if (ret) {
vma->mapped = false;
@@ -93,21 +92,21 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
/* Close an iova. Warn if it is still in use */
void msm_gem_vma_close(struct msm_gem_vma *vma)
{
- struct msm_gem_address_space *aspace = vma->aspace;
+ struct msm_gem_vm *vm = vma->vm;
GEM_WARN_ON(vma->mapped);
- spin_lock(&aspace->lock);
+ spin_lock(&vm->lock);
if (vma->iova)
drm_mm_remove_node(&vma->node);
- spin_unlock(&aspace->lock);
+ spin_unlock(&vm->lock);
vma->iova = 0;
- msm_gem_address_space_put(aspace);
+ msm_gem_vm_put(vm);
}
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace)
+struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm)
{
struct msm_gem_vma *vma;
@@ -115,7 +114,7 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace)
if (!vma)
return NULL;
- vma->aspace = aspace;
+ vma->vm = vm;
return vma;
}
@@ -124,20 +123,20 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace)
int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
u64 range_start, u64 range_end)
{
- struct msm_gem_address_space *aspace = vma->aspace;
+ struct msm_gem_vm *vm = vma->vm;
int ret;
- if (GEM_WARN_ON(!aspace))
+ if (GEM_WARN_ON(!vm))
return -EINVAL;
if (GEM_WARN_ON(vma->iova))
return -EBUSY;
- spin_lock(&aspace->lock);
- ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node,
+ spin_lock(&vm->lock);
+ ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
size, PAGE_SIZE, 0,
range_start, range_end, 0);
- spin_unlock(&aspace->lock);
+ spin_unlock(&vm->lock);
if (ret)
return ret;
@@ -145,33 +144,33 @@ int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
vma->iova = vma->node.start;
vma->mapped = false;
- kref_get(&aspace->kref);
+ kref_get(&vm->kref);
return 0;
}
-struct msm_gem_address_space *
-msm_gem_address_space_create(struct msm_mmu *mmu, const char *name,
+struct msm_gem_vm *
+msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
u64 va_start, u64 size)
{
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
if (IS_ERR(mmu))
return ERR_CAST(mmu);
- aspace = kzalloc(sizeof(*aspace), GFP_KERNEL);
- if (!aspace)
+ vm = kzalloc(sizeof(*vm), GFP_KERNEL);
+ if (!vm)
return ERR_PTR(-ENOMEM);
- spin_lock_init(&aspace->lock);
- aspace->name = name;
- aspace->mmu = mmu;
- aspace->va_start = va_start;
- aspace->va_size = size;
+ spin_lock_init(&vm->lock);
+ vm->name = name;
+ vm->mmu = mmu;
+ vm->va_start = va_start;
+ vm->va_size = size;
- drm_mm_init(&aspace->mm, va_start, size);
+ drm_mm_init(&vm->mm, va_start, size);
- kref_init(&aspace->kref);
+ kref_init(&vm->kref);
- return aspace;
+ return vm;
}
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index d786fcfad62f..0d466a2e9b32 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
if (state->fault_info.ttbr0) {
struct msm_gpu_fault_info *info = &state->fault_info;
- struct msm_mmu *mmu = submit->aspace->mmu;
+ struct msm_mmu *mmu = submit->vm->mmu;
msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
&info->asid);
@@ -386,8 +386,8 @@ static void recover_worker(struct kthread_work *work)
/* Increment the fault counts */
submit->queue->faults++;
- if (submit->aspace)
- submit->aspace->faults++;
+ if (submit->vm)
+ submit->vm->faults++;
get_comm_cmdline(submit, &comm, &cmd);
@@ -492,7 +492,7 @@ static void fault_worker(struct kthread_work *work)
resume_smmu:
memset(&gpu->fault_info, 0, sizeof(gpu->fault_info));
- gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu);
+ gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu);
mutex_unlock(&gpu->lock);
}
@@ -829,10 +829,10 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
}
/* Return a new address space for a msm_drm_private instance */
-struct msm_gem_address_space *
-msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task)
+struct msm_gem_vm *
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
{
- struct msm_gem_address_space *aspace = NULL;
+ struct msm_gem_vm *vm = NULL;
if (!gpu)
return NULL;
@@ -840,16 +840,16 @@ msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *ta
* If the target doesn't support private address spaces then return
* the global one
*/
- if (gpu->funcs->create_private_address_space) {
- aspace = gpu->funcs->create_private_address_space(gpu);
- if (!IS_ERR(aspace))
- aspace->pid = get_pid(task_pid(task));
+ if (gpu->funcs->create_private_vm) {
+ vm = gpu->funcs->create_private_vm(gpu);
+ if (!IS_ERR(vm))
+ vm->pid = get_pid(task_pid(task));
}
- if (IS_ERR_OR_NULL(aspace))
- aspace = msm_gem_address_space_get(gpu->aspace);
+ if (IS_ERR_OR_NULL(vm))
+ vm = msm_gem_vm_get(gpu->vm);
- return aspace;
+ return vm;
}
int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
@@ -945,18 +945,18 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
msm_devfreq_init(gpu);
- gpu->aspace = gpu->funcs->create_address_space(gpu, pdev);
+ gpu->vm = gpu->funcs->create_vm(gpu, pdev);
- if (gpu->aspace == NULL)
+ if (gpu->vm == NULL)
DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name);
- else if (IS_ERR(gpu->aspace)) {
- ret = PTR_ERR(gpu->aspace);
+ else if (IS_ERR(gpu->vm)) {
+ ret = PTR_ERR(gpu->vm);
goto fail;
}
memptrs = msm_gem_kernel_new(drm,
sizeof(struct msm_rbmemptrs) * nr_rings,
- check_apriv(gpu, MSM_BO_WC), gpu->aspace, &gpu->memptrs_bo,
+ check_apriv(gpu, MSM_BO_WC), gpu->vm, &gpu->memptrs_bo,
&memptrs_iova);
if (IS_ERR(memptrs)) {
@@ -1000,7 +1000,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
gpu->rb[i] = NULL;
}
- msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace);
+ msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm);
platform_set_drvdata(pdev, NULL);
return ret;
@@ -1017,11 +1017,11 @@ void msm_gpu_cleanup(struct msm_gpu *gpu)
gpu->rb[i] = NULL;
}
- msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace);
+ msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm);
- if (!IS_ERR_OR_NULL(gpu->aspace)) {
- gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu);
- msm_gem_address_space_put(gpu->aspace);
+ if (!IS_ERR_OR_NULL(gpu->vm)) {
+ gpu->vm->mmu->funcs->detach(gpu->vm->mmu);
+ msm_gem_vm_put(gpu->vm);
}
if (gpu->worker) {
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index c699ce0c557b..1f26ba00f773 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -78,10 +78,8 @@ struct msm_gpu_funcs {
/* note: gpu_set_freq() can assume that we have been pm_resumed */
void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
bool suspended);
- struct msm_gem_address_space *(*create_address_space)
- (struct msm_gpu *gpu, struct platform_device *pdev);
- struct msm_gem_address_space *(*create_private_address_space)
- (struct msm_gpu *gpu);
+ struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
+ struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu);
uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
/**
@@ -236,7 +234,7 @@ struct msm_gpu {
void __iomem *mmio;
int irq;
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
/* Power Control: */
struct regulator *gpu_reg, *gpu_cx;
@@ -364,8 +362,8 @@ struct msm_context {
*/
int queueid;
- /** @aspace: the per-process GPU address-space */
- struct msm_gem_address_space *aspace;
+ /** @vm: the per-process GPU address-space */
+ struct msm_gem_vm *vm;
/** @kref: the reference count */
struct kref ref;
@@ -675,8 +673,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs,
const char *name, struct msm_gpu_config *config);
-struct msm_gem_address_space *
-msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task);
+struct msm_gem_vm *
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
void msm_gpu_cleanup(struct msm_gpu *gpu);
diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
index 35d5397e73b4..88504c4b842f 100644
--- a/drivers/gpu/drm/msm/msm_kms.c
+++ b/drivers/gpu/drm/msm/msm_kms.c
@@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void
return -ENOSYS;
}
-struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev)
+struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
{
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
struct msm_mmu *mmu;
struct device *mdp_dev = dev->dev;
struct device *mdss_dev = mdp_dev->parent;
@@ -204,17 +204,17 @@ struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev)
return NULL;
}
- aspace = msm_gem_address_space_create(mmu, "mdp_kms",
+ vm = msm_gem_vm_create(mmu, "mdp_kms",
0x1000, 0x100000000 - 0x1000);
- if (IS_ERR(aspace)) {
- dev_err(mdp_dev, "aspace create, error %pe\n", aspace);
+ if (IS_ERR(vm)) {
+ dev_err(mdp_dev, "vm create, error %pe\n", vm);
mmu->funcs->destroy(mmu);
- return aspace;
+ return vm;
}
- msm_mmu_set_fault_handler(aspace->mmu, kms, msm_kms_fault_handler);
+ msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler);
- return aspace;
+ return vm;
}
void msm_drm_kms_uninit(struct device *dev)
diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
index 43b58d052ee6..f45996a03e15 100644
--- a/drivers/gpu/drm/msm/msm_kms.h
+++ b/drivers/gpu/drm/msm/msm_kms.h
@@ -139,7 +139,7 @@ struct msm_kms {
atomic_t fault_snapshot_capture;
/* mapper-id used to request GEM buffer mapped for scanout: */
- struct msm_gem_address_space *aspace;
+ struct msm_gem_vm *vm;
/* disp snapshot support */
struct kthread_worker *dump_worker;
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index c5651c39ac2a..bbf8503f6bb5 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -84,7 +84,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
ring->start = msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ,
check_apriv(gpu, MSM_BO_WC | MSM_BO_GPU_READONLY),
- gpu->aspace, &ring->bo, &ring->iova);
+ gpu->vm, &ring->bo, &ring->iova);
if (IS_ERR(ring->start)) {
ret = PTR_ERR(ring->start);
@@ -131,7 +131,7 @@ void msm_ringbuffer_destroy(struct msm_ringbuffer *ring)
msm_fence_context_free(ring->fctx);
- msm_gem_kernel_put(ring->bo, ring->gpu->aspace);
+ msm_gem_kernel_put(ring->bo, ring->gpu->vm);
kfree(ring);
}
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 1acc0fe36353..6298233c3568 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref)
kfree(ctx->entities[i]);
}
- msm_gem_address_space_put(ctx->aspace);
+ msm_gem_vm_put(ctx->vm);
kfree(ctx->comm);
kfree(ctx->cmdline);
kfree(ctx);
--
2.48.1
* [PATCH v2 08/34] drm/msm: Remove vram carveout support
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (6 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 07/34] drm/msm: Rename msm_gem_address_space -> msm_gem_vm Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-04-16 17:18 ` Akhil P Oommen
2025-04-16 23:20 ` Dmitry Baryshkov
2025-03-19 14:52 ` [PATCH v2 09/34] drm/msm: Collapse vma allocation and initialization Rob Clark
` (25 subsequent siblings)
33 siblings, 2 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
It is standing in the way of drm_gpuvm / VM_BIND support, and it is
frequently broken and rarely tested. As far as I can tell it is only
needed for a 10 year old, not-quite-upstream SoC (msm8974).
Maybe we can add support back in later, but I'm doubtful.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +-
drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 13 +-
drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 13 +-
drivers/gpu/drm/msm/adreno/adreno_device.c | 4 -
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 -
drivers/gpu/drm/msm/msm_drv.c | 117 +-----------------
drivers/gpu/drm/msm/msm_drv.h | 11 --
drivers/gpu/drm/msm/msm_gem.c | 131 ++-------------------
drivers/gpu/drm/msm/msm_gem.h | 5 -
drivers/gpu/drm/msm/msm_gem_submit.c | 5 -
10 files changed, 19 insertions(+), 287 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 5eb063ed0b46..db1aa281ce47 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -553,10 +553,8 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
if (!gpu->vm) {
dev_err(dev->dev, "No memory protection without MMU\n");
- if (!allow_vram_carveout) {
- ret = -ENXIO;
- goto fail;
- }
+ ret = -ENXIO;
+ goto fail;
}
return gpu;
diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
index 434e6ededf83..49ba1ce77144 100644
--- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
@@ -582,18 +582,9 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
}
if (!gpu->vm) {
- /* TODO we think it is possible to configure the GPU to
- * restrict access to VRAM carveout. But the required
- * registers are unknown. For now just bail out and
- * limp along with just modesetting. If it turns out
- * to not be possible to restrict access, then we must
- * implement a cmdstream validator.
- */
DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
- if (!allow_vram_carveout) {
- ret = -ENXIO;
- goto fail;
- }
+ ret = -ENXIO;
+ goto fail;
}
icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
index 2c75debcfd84..4faf8570aec7 100644
--- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
@@ -696,18 +696,9 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
if (!gpu->vm) {
- /* TODO we think it is possible to configure the GPU to
- * restrict access to VRAM carveout. But the required
- * registers are unknown. For now just bail out and
- * limp along with just modesetting. If it turns out
- * to not be possible to restrict access, then we must
- * implement a cmdstream validator.
- */
DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
- if (!allow_vram_carveout) {
- ret = -ENXIO;
- goto fail;
- }
+ ret = -ENXIO;
+ goto fail;
}
icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
index f4552b8c6767..6b0390c38bff 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
@@ -16,10 +16,6 @@ bool snapshot_debugbus = false;
MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)");
module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600);
-bool allow_vram_carveout = false;
-MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU");
-module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600);
-
int enable_preemption = -1;
MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on , 0=disable, -1=auto (default))");
module_param(enable_preemption, int, 0600);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index eaebcb108b5e..7dbe09817edc 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -18,7 +18,6 @@
#include "adreno_pm4.xml.h"
extern bool snapshot_debugbus;
-extern bool allow_vram_carveout;
enum {
ADRENO_FW_PM4 = 0,
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 903abf3532e0..978f1d355b42 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -46,12 +46,6 @@
#define MSM_VERSION_MINOR 12
#define MSM_VERSION_PATCHLEVEL 0
-static void msm_deinit_vram(struct drm_device *ddev);
-
-static char *vram = "16m";
-MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)");
-module_param(vram, charp, 0);
-
bool dumpstate;
MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors");
module_param(dumpstate, bool, 0600);
@@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev)
if (priv->kms)
msm_drm_kms_uninit(dev);
- msm_deinit_vram(ddev);
-
component_unbind_all(dev, ddev);
ddev->dev_private = NULL;
@@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev)
return 0;
}
-bool msm_use_mmu(struct drm_device *dev)
-{
- struct msm_drm_private *priv = dev->dev_private;
-
- /*
- * a2xx comes with its own MMU
- * On other platforms IOMMU can be declared specified either for the
- * MDP/DPU device or for its parent, MDSS device.
- */
- return priv->is_a2xx ||
- device_iommu_mapped(dev->dev) ||
- device_iommu_mapped(dev->dev->parent);
-}
-
-static int msm_init_vram(struct drm_device *dev)
-{
- struct msm_drm_private *priv = dev->dev_private;
- struct device_node *node;
- unsigned long size = 0;
- int ret = 0;
-
- /* In the device-tree world, we could have a 'memory-region'
- * phandle, which gives us a link to our "vram". Allocating
- * is all nicely abstracted behind the dma api, but we need
- * to know the entire size to allocate it all in one go. There
- * are two cases:
- * 1) device with no IOMMU, in which case we need exclusive
- * access to a VRAM carveout big enough for all gpu
- * buffers
- * 2) device with IOMMU, but where the bootloader puts up
- * a splash screen. In this case, the VRAM carveout
- * need only be large enough for fbdev fb. But we need
- * exclusive access to the buffer to avoid the kernel
- * using those pages for other purposes (which appears
- * as corruption on screen before we have a chance to
- * load and do initial modeset)
- */
-
- node = of_parse_phandle(dev->dev->of_node, "memory-region", 0);
- if (node) {
- struct resource r;
- ret = of_address_to_resource(node, 0, &r);
- of_node_put(node);
- if (ret)
- return ret;
- size = r.end - r.start + 1;
- DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start);
-
- /* if we have no IOMMU, then we need to use carveout allocator.
- * Grab the entire DMA chunk carved out in early startup in
- * mach-msm:
- */
- } else if (!msm_use_mmu(dev)) {
- DRM_INFO("using %s VRAM carveout\n", vram);
- size = memparse(vram, NULL);
- }
-
- if (size) {
- unsigned long attrs = 0;
- void *p;
-
- priv->vram.size = size;
-
- drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1);
- spin_lock_init(&priv->vram.lock);
-
- attrs |= DMA_ATTR_NO_KERNEL_MAPPING;
- attrs |= DMA_ATTR_WRITE_COMBINE;
-
- /* note that for no-kernel-mapping, the vaddr returned
- * is bogus, but non-null if allocation succeeded:
- */
- p = dma_alloc_attrs(dev->dev, size,
- &priv->vram.paddr, GFP_KERNEL, attrs);
- if (!p) {
- DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n");
- priv->vram.paddr = 0;
- return -ENOMEM;
- }
-
- DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n",
- (uint32_t)priv->vram.paddr,
- (uint32_t)(priv->vram.paddr + size));
- }
-
- return ret;
-}
-
-static void msm_deinit_vram(struct drm_device *ddev)
-{
- struct msm_drm_private *priv = ddev->dev_private;
- unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING;
-
- if (!priv->vram.paddr)
- return;
-
- drm_mm_takedown(&priv->vram.mm);
- dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr,
- attrs);
-}
-
static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
{
struct msm_drm_private *priv = dev_get_drvdata(dev);
@@ -256,16 +147,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
goto err_destroy_wq;
}
- ret = msm_init_vram(ddev);
- if (ret)
- goto err_destroy_wq;
-
dma_set_max_seg_size(dev, UINT_MAX);
/* Bind all our sub-components: */
ret = component_bind_all(dev, ddev);
if (ret)
- goto err_deinit_vram;
+ goto err_destroy_wq;
ret = msm_gem_shrinker_init(ddev);
if (ret)
@@ -302,8 +189,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
return ret;
-err_deinit_vram:
- msm_deinit_vram(ddev);
err_destroy_wq:
destroy_workqueue(priv->wq);
err_put_dev:
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 0e675c9a7f83..ad509403f072 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -183,17 +183,6 @@ struct msm_drm_private {
struct msm_drm_thread event_thread[MAX_CRTCS];
- /* VRAM carveout, used when no IOMMU: */
- struct {
- unsigned long size;
- dma_addr_t paddr;
- /* NOTE: mm managed at the page level, size is in # of pages
- * and position mm_node->start is in # of pages:
- */
- struct drm_mm mm;
- spinlock_t lock; /* Protects drm_mm node allocation/removal */
- } vram;
-
struct notifier_block vmap_notifier;
struct shrinker *shrinker;
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 07a30d29248c..621fb4e17a2e 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -17,24 +17,8 @@
#include <trace/events/gpu_mem.h>
#include "msm_drv.h"
-#include "msm_fence.h"
#include "msm_gem.h"
#include "msm_gpu.h"
-#include "msm_mmu.h"
-
-static dma_addr_t physaddr(struct drm_gem_object *obj)
-{
- struct msm_gem_object *msm_obj = to_msm_bo(obj);
- struct msm_drm_private *priv = obj->dev->dev_private;
- return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) +
- priv->vram.paddr;
-}
-
-static bool use_pages(struct drm_gem_object *obj)
-{
- struct msm_gem_object *msm_obj = to_msm_bo(obj);
- return !msm_obj->vram_node;
-}
static int pgprot = 0;
module_param(pgprot, int, 0600);
@@ -139,36 +123,6 @@ static void update_lru(struct drm_gem_object *obj)
mutex_unlock(&priv->lru.lock);
}
-/* allocate pages from VRAM carveout, used when no IOMMU: */
-static struct page **get_pages_vram(struct drm_gem_object *obj, int npages)
-{
- struct msm_gem_object *msm_obj = to_msm_bo(obj);
- struct msm_drm_private *priv = obj->dev->dev_private;
- dma_addr_t paddr;
- struct page **p;
- int ret, i;
-
- p = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
- if (!p)
- return ERR_PTR(-ENOMEM);
-
- spin_lock(&priv->vram.lock);
- ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages);
- spin_unlock(&priv->vram.lock);
- if (ret) {
- kvfree(p);
- return ERR_PTR(ret);
- }
-
- paddr = physaddr(obj);
- for (i = 0; i < npages; i++) {
- p[i] = pfn_to_page(__phys_to_pfn(paddr));
- paddr += PAGE_SIZE;
- }
-
- return p;
-}
-
static struct page **get_pages(struct drm_gem_object *obj)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
@@ -180,10 +134,7 @@ static struct page **get_pages(struct drm_gem_object *obj)
struct page **p;
int npages = obj->size >> PAGE_SHIFT;
- if (use_pages(obj))
- p = drm_gem_get_pages(obj);
- else
- p = get_pages_vram(obj, npages);
+ p = drm_gem_get_pages(obj);
if (IS_ERR(p)) {
DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n",
@@ -216,18 +167,6 @@ static struct page **get_pages(struct drm_gem_object *obj)
return msm_obj->pages;
}
-static void put_pages_vram(struct drm_gem_object *obj)
-{
- struct msm_gem_object *msm_obj = to_msm_bo(obj);
- struct msm_drm_private *priv = obj->dev->dev_private;
-
- spin_lock(&priv->vram.lock);
- drm_mm_remove_node(msm_obj->vram_node);
- spin_unlock(&priv->vram.lock);
-
- kvfree(msm_obj->pages);
-}
-
static void put_pages(struct drm_gem_object *obj)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
@@ -248,10 +187,7 @@ static void put_pages(struct drm_gem_object *obj)
update_device_mem(obj->dev->dev_private, -obj->size);
- if (use_pages(obj))
- drm_gem_put_pages(obj, msm_obj->pages, true, false);
- else
- put_pages_vram(obj);
+ drm_gem_put_pages(obj, msm_obj->pages, true, false);
msm_obj->pages = NULL;
update_lru(obj);
@@ -1215,19 +1151,10 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32
struct msm_drm_private *priv = dev->dev_private;
struct msm_gem_object *msm_obj;
struct drm_gem_object *obj = NULL;
- bool use_vram = false;
int ret;
size = PAGE_ALIGN(size);
- if (!msm_use_mmu(dev))
- use_vram = true;
- else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size)
- use_vram = true;
-
- if (GEM_WARN_ON(use_vram && !priv->vram.size))
- return ERR_PTR(-EINVAL);
-
/* Disallow zero sized objects as they make the underlying
* infrastructure grumpy
*/
@@ -1240,44 +1167,16 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32
msm_obj = to_msm_bo(obj);
- if (use_vram) {
- struct msm_gem_vma *vma;
- struct page **pages;
-
- drm_gem_private_object_init(dev, obj, size);
-
- msm_gem_lock(obj);
-
- vma = add_vma(obj, NULL);
- msm_gem_unlock(obj);
- if (IS_ERR(vma)) {
- ret = PTR_ERR(vma);
- goto fail;
- }
-
- to_msm_bo(obj)->vram_node = &vma->node;
-
- msm_gem_lock(obj);
- pages = get_pages(obj);
- msm_gem_unlock(obj);
- if (IS_ERR(pages)) {
- ret = PTR_ERR(pages);
- goto fail;
- }
-
- vma->iova = physaddr(obj);
- } else {
- ret = drm_gem_object_init(dev, obj, size);
- if (ret)
- goto fail;
- /*
- * Our buffers are kept pinned, so allocating them from the
- * MOVABLE zone is a really bad idea, and conflicts with CMA.
- * See comments above new_inode() why this is required _and_
- * expected if you're going to pin these pages.
- */
- mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
- }
+ ret = drm_gem_object_init(dev, obj, size);
+ if (ret)
+ goto fail;
+ /*
+ * Our buffers are kept pinned, so allocating them from the
+ * MOVABLE zone is a really bad idea, and conflicts with CMA.
+ * See comments above new_inode() why this is required _and_
+ * expected if you're going to pin these pages.
+ */
+ mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
drm_gem_lru_move_tail(&priv->lru.unbacked, obj);
@@ -1305,12 +1204,6 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
uint32_t size;
int ret, npages;
- /* if we don't have IOMMU, don't bother pretending we can import: */
- if (!msm_use_mmu(dev)) {
- DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n");
- return ERR_PTR(-EINVAL);
- }
-
size = PAGE_ALIGN(dmabuf->size);
ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index d2f39a371373..c16b11182831 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -102,11 +102,6 @@ struct msm_gem_object {
struct list_head vmas; /* list of msm_gem_vma */
- /* For physically contiguous buffers. Used when we don't have
- * an IOMMU. Also used for stolen/splashscreen buffer.
- */
- struct drm_mm_node *vram_node;
-
char name[32]; /* Identifier to print for the debugfs files */
/* userspace metadata backchannel */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 95da4714fffb..a186b7dfea35 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -659,11 +659,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (args->pad)
return -EINVAL;
- if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) {
- DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n");
- return -EPERM;
- }
-
/* for now, we just have 3d pipe.. eventually this would need to
* be more clever to dispatch to appropriate gpu module:
*/
--
2.48.1
* [PATCH v2 09/34] drm/msm: Collapse vma allocation and initialization
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (7 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 08/34] drm/msm: Remove vram carveout support Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 10/34] drm/msm: Collapse vma close and delete Rob Clark
` (24 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Now that we've dropped vram carveout support, we can collapse vma
allocation and initialization. This better matches how things work
with drm_gpuvm.
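For reference, a condensed before/after of the caller flow in
get_vma_locked() (error handling trimmed; see the hunk below for the
real thing):

    /* before: allocate, then initialize, with two failure paths */
    vma = add_vma(obj, vm);
    ret = msm_gem_vma_init(vma, obj->size, range_start, range_end);

    /* after: one call allocates the vma and its iova together */
    vma = msm_gem_vma_new(vm, obj, range_start, range_end);
    if (IS_ERR(vma))
        return vma;
    list_add_tail(&vma->list, &msm_obj->vmas);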
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem.c | 30 +++-----------------------
drivers/gpu/drm/msm/msm_gem.h | 4 ++--
drivers/gpu/drm/msm/msm_gem_vma.c | 36 +++++++++++++------------------
3 files changed, 20 insertions(+), 50 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 621fb4e17a2e..29247911f048 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -337,23 +337,6 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
return offset;
}
-static struct msm_gem_vma *add_vma(struct drm_gem_object *obj,
- struct msm_gem_vm *vm)
-{
- struct msm_gem_object *msm_obj = to_msm_bo(obj);
- struct msm_gem_vma *vma;
-
- msm_gem_assert_locked(obj);
-
- vma = msm_gem_vma_new(vm);
- if (!vma)
- return ERR_PTR(-ENOMEM);
-
- list_add_tail(&vma->list, &msm_obj->vmas);
-
- return vma;
-}
-
static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
struct msm_gem_vm *vm)
{
@@ -420,6 +403,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
struct msm_gem_vm *vm,
u64 range_start, u64 range_end)
{
+ struct msm_gem_object *msm_obj = to_msm_bo(obj);
struct msm_gem_vma *vma;
msm_gem_assert_locked(obj);
@@ -427,18 +411,10 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
vma = lookup_vma(obj, vm);
if (!vma) {
- int ret;
-
- vma = add_vma(obj, vm);
+ vma = msm_gem_vma_new(vm, obj, range_start, range_end);
if (IS_ERR(vma))
return vma;
-
- ret = msm_gem_vma_init(vma, obj->size,
- range_start, range_end);
- if (ret) {
- del_vma(vma);
- return ERR_PTR(ret);
- }
+ list_add_tail(&vma->list, &msm_obj->vmas);
} else {
GEM_WARN_ON(vma->iova < range_start);
GEM_WARN_ON((vma->iova + obj->size) > range_end);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index c16b11182831..9bd78642671c 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -66,8 +66,8 @@ struct msm_gem_vma {
bool mapped;
};
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm);
-int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
+struct msm_gem_vma *
+msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
u64 range_start, u64 range_end);
void msm_gem_vma_purge(struct msm_gem_vma *vma);
int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 9419692f0cc8..6d18364f321c 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -106,47 +106,41 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
msm_gem_vm_put(vm);
}
-struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm)
+/* Create a new vma and allocate an iova for it */
+struct msm_gem_vma *
+msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+ u64 range_start, u64 range_end)
{
struct msm_gem_vma *vma;
+ int ret;
vma = kzalloc(sizeof(*vma), GFP_KERNEL);
if (!vma)
- return NULL;
+ return ERR_PTR(-ENOMEM);
vma->vm = vm;
- return vma;
-}
-
-/* Initialize a new vma and allocate an iova for it */
-int msm_gem_vma_init(struct msm_gem_vma *vma, int size,
- u64 range_start, u64 range_end)
-{
- struct msm_gem_vm *vm = vma->vm;
- int ret;
-
- if (GEM_WARN_ON(!vm))
- return -EINVAL;
-
- if (GEM_WARN_ON(vma->iova))
- return -EBUSY;
-
spin_lock(&vm->lock);
ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
- size, PAGE_SIZE, 0,
+ obj->size, PAGE_SIZE, 0,
range_start, range_end, 0);
spin_unlock(&vm->lock);
if (ret)
- return ret;
+ goto err_free_vma;
vma->iova = vma->node.start;
vma->mapped = false;
+ INIT_LIST_HEAD(&vma->list);
+
kref_get(&vm->kref);
- return 0;
+ return vma;
+
+err_free_vma:
+ kfree(vma);
+ return ERR_PTR(ret);
}
struct msm_gem_vm *
--
2.48.1
* [PATCH v2 10/34] drm/msm: Collapse vma close and delete
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (8 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 09/34] drm/msm: Collapse vma allocation and initialization Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 11/34] drm/msm: drm_gpuvm conversion Rob Clark
` (23 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
This fits better with drm_gpuvm/drm_gpuva, where closing a VMA also
tears it down and frees it.
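With this change a single call now tears the VMA down completely, e.g.
(condensed from the clear_iova() hunk below):

    msm_gem_vma_purge(vma);  /* unmap from the MMU */
    msm_gem_vma_close(vma);  /* release the iova, unlink, and free */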
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem.c | 16 +++-------------
drivers/gpu/drm/msm/msm_gem_vma.c | 2 ++
2 files changed, 5 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 29247911f048..4c10eca404e0 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -353,15 +353,6 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
return NULL;
}
-static void del_vma(struct msm_gem_vma *vma)
-{
- if (!vma)
- return;
-
- list_del(&vma->list);
- kfree(vma);
-}
-
/*
* If close is true, this also closes the VMA (releasing the allocated
* iova range) in addition to removing the iommu mapping. In the eviction
@@ -372,11 +363,11 @@ static void
put_iova_spaces(struct drm_gem_object *obj, bool close)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
- struct msm_gem_vma *vma;
+ struct msm_gem_vma *vma, *tmp;
msm_gem_assert_locked(obj);
- list_for_each_entry(vma, &msm_obj->vmas, list) {
+ list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
if (vma->vm) {
msm_gem_vma_purge(vma);
if (close)
@@ -395,7 +386,7 @@ put_iova_vmas(struct drm_gem_object *obj)
msm_gem_assert_locked(obj);
list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
- del_vma(vma);
+ msm_gem_vma_close(vma);
}
}
@@ -564,7 +555,6 @@ static int clear_iova(struct drm_gem_object *obj,
msm_gem_vma_purge(vma);
msm_gem_vma_close(vma);
- del_vma(vma);
return 0;
}
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 6d18364f321c..ca29e81d79d2 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -102,8 +102,10 @@ void msm_gem_vma_close(struct msm_gem_vma *vma)
spin_unlock(&vm->lock);
vma->iova = 0;
+ list_del(&vma->list);
msm_gem_vm_put(vm);
+ kfree(vma);
}
/* Create a new vma and allocate an iova for it */
--
2.48.1
* [PATCH v2 11/34] drm/msm: drm_gpuvm conversion
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (9 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 10/34] drm/msm: Collapse vma close and delete Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-04-16 17:20 ` Akhil P Oommen
2025-03-19 14:52 ` [PATCH v2 12/34] drm/msm: Use drm_gpuvm types more Rob Clark
` (22 subsequent siblings)
33 siblings, 1 reply; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, Konrad Dybcio, open list
From: Rob Clark <robdclark@chromium.org>
Now that we've realigned deletion and allocation, switch over to using
drm_gpuvm/drm_gpuva. This allows us to support multiple VMAs per BO per
VM, so that different parts of a single BO can be mapped at different
virtual addresses, which is a key requirement for sparse/VM_BIND.
This prepares us for using drm_gpuvm to translate a batch of MAP/
MAP_NULL/UNMAP operations from userspace into a sequence of map/remap/
unmap steps for updating the page tables.
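To sketch where this is headed (illustrative only, not part of this
patch; the sm_step callback names here are hypothetical, wired up later
in the series): drm_gpuvm's split/merge helpers take a requested
mapping and call back with the individual steps needed to resolve any
overlap with existing VMAs:

    static const struct drm_gpuvm_ops msm_gpuvm_ops = {
        .vm_free       = msm_gem_vm_free,
        /* hypothetical step callbacks for VM_BIND: */
        .sm_step_map   = msm_vma_op_map,
        .sm_step_remap = msm_vma_op_remap,
        .sm_step_unmap = msm_vma_op_unmap,
    };

    /* For a userspace MAP op, drm_gpuvm computes the map/remap/unmap
     * steps and invokes the callbacks above for each one:
     */
    ret = drm_gpuvm_sm_map(&vm->base, arg, op->iova, op->range,
                           obj, op->obj_offset);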
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/Kconfig | 1 +
drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 3 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 7 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 5 +-
drivers/gpu/drm/msm/msm_drv.c | 1 +
drivers/gpu/drm/msm/msm_gem.c | 142 ++++++++++++++---------
drivers/gpu/drm/msm/msm_gem.h | 87 +++++++++++---
drivers/gpu/drm/msm/msm_gem_submit.c | 2 +-
drivers/gpu/drm/msm/msm_gem_vma.c | 139 +++++++++++++++-------
drivers/gpu/drm/msm/msm_kms.c | 4 +-
12 files changed, 268 insertions(+), 134 deletions(-)
diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index 974bc7c0ea76..4af7e896c1d4 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -21,6 +21,7 @@ config DRM_MSM
select DRM_DISPLAY_HELPER
select DRM_BRIDGE_CONNECTOR
select DRM_EXEC
+ select DRM_GPUVM
select DRM_KMS_HELPER
select DRM_PANEL
select DRM_BRIDGE
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index db1aa281ce47..94c49ed057cd 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -472,8 +472,7 @@ a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
struct msm_gem_vm *vm;
- vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
- 0xfff * SZ_64K);
+ vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true);
if (IS_ERR(vm) && !IS_ERR(mmu))
mmu->funcs->destroy(mmu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 4c459ae25cba..259a589a827d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1311,7 +1311,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
return 0;
}
-static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
+static int a6xx_gmu_memory_probe(struct drm_device *drm, struct a6xx_gmu *gmu)
{
struct msm_mmu *mmu;
@@ -1321,7 +1321,7 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
if (IS_ERR(mmu))
return PTR_ERR(mmu);
- gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000);
+ gmu->vm = msm_gem_vm_create(drm, mmu, "gmu", 0x0, 0x80000000, true);
if (IS_ERR(gmu->vm))
return PTR_ERR(gmu->vm);
@@ -1940,7 +1940,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
if (ret)
goto err_put_device;
- ret = a6xx_gmu_memory_probe(gmu);
+ ret = a6xx_gmu_memory_probe(adreno_gpu->base.dev, gmu);
if (ret)
goto err_put_device;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index fa63149bf73f..a124249f7a1d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2271,9 +2271,8 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
if (IS_ERR(mmu))
return ERR_CAST(mmu);
- return msm_gem_vm_create(mmu,
- "gpu", 0x100000000ULL,
- adreno_private_vm_size(gpu));
+ return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL,
+ adreno_private_vm_size(gpu), true);
}
static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index ffbbf3d5ce2f..0ba1819833ab 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -224,7 +224,8 @@ adreno_iommu_create_vm(struct msm_gpu *gpu,
start = max_t(u64, SZ_16M, geometry->aperture_start);
size = geometry->aperture_end - start + 1;
- vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size);
+ vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", start & GENMASK_ULL(48, 0),
+ size, true);
if (IS_ERR(vm) && !IS_ERR(mmu))
mmu->funcs->destroy(mmu);
@@ -403,12 +404,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
case MSM_PARAM_VA_START:
if (ctx->vm == gpu->vm)
return UERR(EINVAL, drm, "requires per-process pgtables");
- *value = ctx->vm->va_start;
+ *value = ctx->vm->base.mm_start;
return 0;
case MSM_PARAM_VA_SIZE:
if (ctx->vm == gpu->vm)
return UERR(EINVAL, drm, "requires per-process pgtables");
- *value = ctx->vm->va_size;
+ *value = ctx->vm->base.mm_range;
return 0;
case MSM_PARAM_HIGHEST_BANK_BIT:
*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
index 94fbc20b2fbd..d5b5628bee24 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
@@ -451,8 +451,9 @@ static int mdp4_kms_init(struct drm_device *dev)
"contig buffers for scanout\n");
vm = NULL;
} else {
- vm = msm_gem_vm_create(mmu,
- "mdp4", 0x1000, 0x100000000 - 0x1000);
+ vm = msm_gem_vm_create(dev, mmu, "mdp4",
+ 0x1000, 0x100000000 - 0x1000,
+ true);
if (IS_ERR(vm)) {
if (!IS_ERR(mmu))
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 978f1d355b42..6ef29bc48bb0 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -776,6 +776,7 @@ static const struct file_operations fops = {
static const struct drm_driver msm_driver = {
.driver_features = DRIVER_GEM |
+ DRIVER_GEM_GPUVA |
DRIVER_RENDER |
DRIVER_ATOMIC |
DRIVER_MODESET |
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 4c10eca404e0..7901871c66cc 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -47,9 +47,32 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
return 0;
}
+static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close);
+
static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
{
+ struct msm_context *ctx = file->driver_priv;
+
update_ctx_mem(file, -obj->size);
+
+ /*
+ * If VM isn't created yet, nothing to cleanup. And in fact calling
+ * put_iova_spaces() with vm=NULL would be bad, in that it will tear-
+ * down the mappings of shared buffers in other contexts.
+ */
+ if (!ctx->vm)
+ return;
+
+ /*
+ * TODO we might need to kick this to a queue to avoid blocking
+ * in CLOSE ioctl
+ */
+ dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
+ msecs_to_jiffies(1000));
+
+ msm_gem_lock(obj);
+ put_iova_spaces(obj, &ctx->vm->base, true);
+ msm_gem_unlock(obj);
}
/*
@@ -171,6 +194,13 @@ static void put_pages(struct drm_gem_object *obj)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
+ /*
+ * Skip gpuvm in the object free path to avoid a WARN_ON() splat.
+ * See explanation in msm_gem_assert_locked()
+ */
+ if (kref_read(&obj->refcount))
+ drm_gpuvm_bo_gem_evict(obj, true);
+
if (msm_obj->pages) {
if (msm_obj->sgt) {
/* For non-cached buffers, ensure the new
@@ -338,16 +368,25 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
}
static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
- struct msm_gem_vm *vm)
+ struct msm_gem_vm *vm)
{
- struct msm_gem_object *msm_obj = to_msm_bo(obj);
- struct msm_gem_vma *vma;
+ struct drm_gpuvm_bo *vm_bo;
msm_gem_assert_locked(obj);
- list_for_each_entry(vma, &msm_obj->vmas, list) {
- if (vma->vm == vm)
- return vma;
+ drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+ struct drm_gpuva *vma;
+
+ drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+ if (vma->vm == &vm->base) {
+ /* lookup_vma() should only be used in paths
+ * with at most one vma per vm
+ */
+ GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva));
+
+ return to_msm_vma(vma);
+ }
+ }
}
return NULL;
@@ -360,33 +399,29 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
* mapping.
*/
static void
-put_iova_spaces(struct drm_gem_object *obj, bool close)
+put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
{
- struct msm_gem_object *msm_obj = to_msm_bo(obj);
- struct msm_gem_vma *vma, *tmp;
+ struct drm_gpuvm_bo *vm_bo, *tmp;
msm_gem_assert_locked(obj);
- list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
- if (vma->vm) {
- msm_gem_vma_purge(vma);
- if (close)
- msm_gem_vma_close(vma);
- }
- }
-}
+ drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) {
+ struct drm_gpuva *vma, *vmatmp;
-/* Called with msm_obj locked */
-static void
-put_iova_vmas(struct drm_gem_object *obj)
-{
- struct msm_gem_object *msm_obj = to_msm_bo(obj);
- struct msm_gem_vma *vma, *tmp;
+ if (vm && vm_bo->vm != vm)
+ continue;
- msm_gem_assert_locked(obj);
+ drm_gpuvm_bo_get(vm_bo);
- list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
- msm_gem_vma_close(vma);
+ drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
+ struct msm_gem_vma *msm_vma = to_msm_vma(vma);
+
+ msm_gem_vma_purge(msm_vma);
+ if (close)
+ msm_gem_vma_close(msm_vma);
+ }
+
+ drm_gpuvm_bo_put(vm_bo);
}
}
@@ -394,7 +429,6 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
struct msm_gem_vm *vm,
u64 range_start, u64 range_end)
{
- struct msm_gem_object *msm_obj = to_msm_bo(obj);
struct msm_gem_vma *vma;
msm_gem_assert_locked(obj);
@@ -403,12 +437,9 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
if (!vma) {
vma = msm_gem_vma_new(vm, obj, range_start, range_end);
- if (IS_ERR(vma))
- return vma;
- list_add_tail(&vma->list, &msm_obj->vmas);
} else {
- GEM_WARN_ON(vma->iova < range_start);
- GEM_WARN_ON((vma->iova + obj->size) > range_end);
+ GEM_WARN_ON(vma->base.va.addr < range_start);
+ GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end);
}
return vma;
@@ -492,7 +523,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
ret = msm_gem_pin_vma_locked(obj, vma);
if (!ret) {
- *iova = vma->iova;
+ *iova = vma->base.va.addr;
pin_obj_locked(obj);
}
@@ -538,7 +569,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
} else {
- *iova = vma->iova;
+ *iova = vma->base.va.addr;
}
msm_gem_unlock(obj);
@@ -579,7 +610,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
vma = get_vma_locked(obj, vm, iova, iova + obj->size);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
- } else if (GEM_WARN_ON(vma->iova != iova)) {
+ } else if (GEM_WARN_ON(vma->base.va.addr != iova)) {
clear_iova(obj, vm);
ret = -EBUSY;
}
@@ -763,7 +794,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
GEM_WARN_ON(!is_purgeable(msm_obj));
/* Get rid of any iommu mapping(s): */
- put_iova_spaces(obj, true);
+ put_iova_spaces(obj, NULL, true);
msm_gem_vunmap(obj);
@@ -771,8 +802,6 @@ void msm_gem_purge(struct drm_gem_object *obj)
put_pages(obj);
- put_iova_vmas(obj);
-
mutex_lock(&priv->lru.lock);
/* A one-way transition: */
msm_obj->madv = __MSM_MADV_PURGED;
@@ -803,7 +832,7 @@ void msm_gem_evict(struct drm_gem_object *obj)
GEM_WARN_ON(is_unevictable(msm_obj));
/* Get rid of any iommu mapping(s): */
- put_iova_spaces(obj, false);
+ put_iova_spaces(obj, NULL, false);
drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
@@ -869,7 +898,6 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
struct dma_resv *robj = obj->resv;
- struct msm_gem_vma *vma;
uint64_t off = drm_vma_node_start(&obj->vma_node);
const char *madv;
@@ -912,14 +940,17 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
seq_printf(m, " %08zu %9s %-32s\n", obj->size, madv, msm_obj->name);
- if (!list_empty(&msm_obj->vmas)) {
+ if (!list_empty(&obj->gpuva.list)) {
+ struct drm_gpuvm_bo *vm_bo;
seq_puts(m, " vmas:");
- list_for_each_entry(vma, &msm_obj->vmas, list) {
- const char *name, *comm;
- if (vma->vm) {
- struct msm_gem_vm *vm = vma->vm;
+ drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
+ struct drm_gpuva *vma;
+
+ drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+ const char *name, *comm;
+ struct msm_gem_vm *vm = to_msm_vm(vma->vm);
struct task_struct *task =
get_pid_task(vm->pid, PIDTYPE_PID);
if (task) {
@@ -928,15 +959,14 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
} else {
comm = NULL;
}
- name = vm->name;
- } else {
- name = comm = NULL;
+ name = vm->base.name;
+
+ seq_printf(m, " [%s%s%s: vm=%p, %08llx,%smapped]",
+ name, comm ? ":" : "", comm ? comm : "",
+ vma->vm, vma->va.addr,
+ to_msm_vma(vma)->mapped ? "" : "un");
+ kfree(comm);
}
- seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]",
- name, comm ? ":" : "", comm ? comm : "",
- vma->vm, vma->iova,
- vma->mapped ? "mapped" : "unmapped");
- kfree(comm);
}
seq_puts(m, "\n");
@@ -982,7 +1012,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
list_del(&msm_obj->node);
mutex_unlock(&priv->obj_lock);
- put_iova_spaces(obj, true);
+ put_iova_spaces(obj, NULL, true);
if (obj->import_attach) {
GEM_WARN_ON(msm_obj->vaddr);
@@ -992,13 +1022,10 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
*/
kvfree(msm_obj->pages);
- put_iova_vmas(obj);
-
drm_prime_gem_destroy(obj, msm_obj->sgt);
} else {
msm_gem_vunmap(obj);
put_pages(obj);
- put_iova_vmas(obj);
}
drm_gem_object_release(obj);
@@ -1104,7 +1131,6 @@ static int msm_gem_new_impl(struct drm_device *dev,
msm_obj->madv = MSM_MADV_WILLNEED;
INIT_LIST_HEAD(&msm_obj->node);
- INIT_LIST_HEAD(&msm_obj->vmas);
*obj = &msm_obj->base;
(*obj)->funcs = &msm_gem_object_funcs;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 9bd78642671c..5091892bbe2e 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -10,6 +10,7 @@
#include <linux/kref.h>
#include <linux/dma-resv.h>
#include "drm/drm_exec.h"
+#include "drm/drm_gpuvm.h"
#include "drm/gpu_scheduler.h"
#include "msm_drv.h"
@@ -22,30 +23,67 @@
#define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */
#define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */
+/**
+ * struct msm_gem_vm - VM object
+ *
+ * A VM object representing a GPU (or display or GMU or ...) virtual address
+ * space.
+ *
+ * In the case of GPU, if per-process address spaces are supported, the address
+ * space is split into two VMs, which map to TTBR0 and TTBR1 in the SMMU. TTBR0
+ * is used for userspace objects, and is unique per msm_context/drm_file, while
+ * TTBR1 is the same for all processes. (The kernel controlled ringbuffer and
+ * a few other kernel controlled buffers live in TTBR1.)
+ *
+ * The GPU TTBR0 vm can be managed by userspace or by the kernel, depending on
+ * whether userspace supports VM_BIND. All other vm's are managed by the kernel.
+ * (Managed by kernel means the kernel is responsible for VA allocation.)
+ *
+ * Note that because VM_BIND allows a given BO to be mapped multiple times in
+ * a VM, and therefore have multiple VMA's in a VM, there is an extra object
+ * provided by drm_gpuvm infrastructure.. the drm_gpuvm_bo, which is not
+ * embedded in any larger driver structure. The GEM object holds a list of
+ * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. A linked vma
+ * holds a reference to the vm_bo, and drops it when the vma is unlinked.
+ * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an
+ * existing vm_bo, or create a new one. Once the vma is linked, the ref
+ * to the vm_bo can be dropped (since the vma is holding one).
+ */
struct msm_gem_vm {
- const char *name;
- /* NOTE: mm managed at the page level, size is in # of pages
- * and position mm_node->start is in # of pages:
+ /** @base: Inherit from drm_gpuvm. */
+ struct drm_gpuvm base;
+
+ /**
+ * @mm: Memory management for kernel managed VA allocations
+ *
+ * Only used for kernel managed VMs, unused for user managed VMs.
+ *
+ * Protected by @mm_lock.
*/
struct drm_mm mm;
- spinlock_t lock; /* Protects drm_mm node allocation/removal */
+
+ /** @mm_lock: protects @mm node allocation/removal */
+ struct spinlock mm_lock;
+
+ /** @vm_lock: protects gpuvm insert/remove/traverse */
+ struct mutex vm_lock;
+
+ /** @mmu: The mmu object which manages the pgtables */
struct msm_mmu *mmu;
- struct kref kref;
- /* For address spaces associated with a specific process, this
+ /**
+ * @pid: For address spaces associated with a specific process, this
* will be non-NULL:
*/
struct pid *pid;
- /* @faults: the number of GPU hangs associated with this address space */
+ /** @faults: the number of GPU hangs associated with this address space */
int faults;
- /** @va_start: lowest possible address to allocate */
- uint64_t va_start;
-
- /** @va_size: the size of the address space (in bytes) */
- uint64_t va_size;
+ /** @managed: is this a kernel managed VM? */
+ bool managed;
};
+#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
struct msm_gem_vm *
msm_gem_vm_get(struct msm_gem_vm *vm);
@@ -53,18 +91,31 @@ msm_gem_vm_get(struct msm_gem_vm *vm);
void msm_gem_vm_put(struct msm_gem_vm *vm);
struct msm_gem_vm *
-msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
- u64 va_start, u64 size);
+msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
+ u64 va_start, u64 va_size, bool managed);
struct msm_fence_context;
+/**
+ * struct msm_gem_vma - a VMA mapping
+ *
+ * Represents a combination of a GEM object plus a VM.
+ */
struct msm_gem_vma {
+ /** @base: inherit from drm_gpuva */
+ struct drm_gpuva base;
+
+ /**
+ * @node: mm node for VA allocation
+ *
+ * Only used by kernel managed VMs
+ */
struct drm_mm_node node;
- uint64_t iova;
- struct msm_gem_vm *vm;
- struct list_head list; /* node in msm_gem_object::vmas */
+
+ /** @mapped: Is this VMA mapped? */
bool mapped;
};
+#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base)
struct msm_gem_vma *
msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
@@ -100,8 +151,6 @@ struct msm_gem_object {
struct sg_table *sgt;
void *vaddr;
- struct list_head vmas; /* list of msm_gem_vma */
-
char name[32]; /* Identifier to print for the debugfs files */
/* userspace metadata backchannel */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index a186b7dfea35..e8a670566147 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -312,7 +312,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
if (ret)
break;
- submit->bos[i].iova = vma->iova;
+ submit->bos[i].iova = vma->base.va.addr;
}
/*
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index ca29e81d79d2..56221dfdf551 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -5,14 +5,13 @@
*/
#include "msm_drv.h"
-#include "msm_fence.h"
#include "msm_gem.h"
#include "msm_mmu.h"
static void
-msm_gem_vm_destroy(struct kref *kref)
+msm_gem_vm_free(struct drm_gpuvm *gpuvm)
{
- struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref);
+ struct msm_gem_vm *vm = container_of(gpuvm, struct msm_gem_vm, base);
drm_mm_takedown(&vm->mm);
if (vm->mmu)
@@ -25,14 +24,14 @@ msm_gem_vm_destroy(struct kref *kref)
void msm_gem_vm_put(struct msm_gem_vm *vm)
{
if (vm)
- kref_put(&vm->kref, msm_gem_vm_destroy);
+ drm_gpuvm_put(&vm->base);
}
struct msm_gem_vm *
msm_gem_vm_get(struct msm_gem_vm *vm)
{
if (!IS_ERR_OR_NULL(vm))
- kref_get(&vm->kref);
+ drm_gpuvm_get(&vm->base);
return vm;
}
@@ -40,14 +39,14 @@ msm_gem_vm_get(struct msm_gem_vm *vm)
/* Actually unmap memory for the vma */
void msm_gem_vma_purge(struct msm_gem_vma *vma)
{
- struct msm_gem_vm *vm = vma->vm;
- unsigned size = vma->node.size;
+ struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
+ unsigned size = vma->base.va.range;
/* Don't do anything if the memory isn't mapped */
if (!vma->mapped)
return;
- vm->mmu->funcs->unmap(vm->mmu, vma->iova, size);
+ vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size);
vma->mapped = false;
}
@@ -57,10 +56,10 @@ int
msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
struct sg_table *sgt, int size)
{
- struct msm_gem_vm *vm = vma->vm;
+ struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
int ret;
- if (GEM_WARN_ON(!vma->iova))
+ if (GEM_WARN_ON(!vma->base.va.addr))
return -EINVAL;
if (vma->mapped)
@@ -68,9 +67,6 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
vma->mapped = true;
- if (!vm)
- return 0;
-
/*
* NOTE: iommu/io-pgtable can allocate pages, so we cannot hold
* a lock across map/unmap which is also used in the job_run()
@@ -80,7 +76,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
* Revisit this if we can come up with a scheme to pre-alloc pages
* for the pgtable in map/unmap ops.
*/
- ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot);
+ ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot);
if (ret) {
vma->mapped = false;
@@ -92,19 +88,20 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
/* Close an iova. Warn if it is still in use */
void msm_gem_vma_close(struct msm_gem_vma *vma)
{
- struct msm_gem_vm *vm = vma->vm;
+ struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
GEM_WARN_ON(vma->mapped);
- spin_lock(&vm->lock);
- if (vma->iova)
+ spin_lock(&vm->mm_lock);
+ if (vma->base.va.addr)
drm_mm_remove_node(&vma->node);
- spin_unlock(&vm->lock);
+ spin_unlock(&vm->mm_lock);
- vma->iova = 0;
- list_del(&vma->list);
+ mutex_lock(&vm->vm_lock);
+ drm_gpuva_remove(&vma->base);
+ drm_gpuva_unlink(&vma->base);
+ mutex_unlock(&vm->vm_lock);
- msm_gem_vm_put(vm);
kfree(vma);
}
@@ -113,6 +110,7 @@ struct msm_gem_vma *
msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
u64 range_start, u64 range_end)
{
+ struct drm_gpuvm_bo *vm_bo;
struct msm_gem_vma *vma;
int ret;
@@ -120,36 +118,82 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
if (!vma)
return ERR_PTR(-ENOMEM);
- vma->vm = vm;
+ if (vm->managed) {
+ spin_lock(&vm->mm_lock);
+ ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
+ obj->size, PAGE_SIZE, 0,
+ range_start, range_end, 0);
+ spin_unlock(&vm->mm_lock);
- spin_lock(&vm->lock);
- ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
- obj->size, PAGE_SIZE, 0,
- range_start, range_end, 0);
- spin_unlock(&vm->lock);
+ if (ret)
+ goto err_free_vma;
- if (ret)
- goto err_free_vma;
+ range_start = vma->node.start;
+ range_end = range_start + obj->size;
+ }
- vma->iova = vma->node.start;
+ GEM_WARN_ON((range_end - range_start) > obj->size);
+
+ drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
vma->mapped = false;
- INIT_LIST_HEAD(&vma->list);
+ mutex_lock(&vm->vm_lock);
+ ret = drm_gpuva_insert(&vm->base, &vma->base);
+ mutex_unlock(&vm->vm_lock);
+ if (ret)
+ goto err_free_range;
- kref_get(&vm->kref);
+ vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj);
+ if (IS_ERR(vm_bo)) {
+ ret = PTR_ERR(vm_bo);
+ goto err_va_remove;
+ }
+
+ mutex_lock(&vm->vm_lock);
+ drm_gpuva_link(&vma->base, vm_bo);
+ mutex_unlock(&vm->vm_lock);
+ GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
return vma;
+err_va_remove:
+ mutex_lock(&vm->vm_lock);
+ drm_gpuva_remove(&vma->base);
+ mutex_unlock(&vm->vm_lock);
+err_free_range:
+ if (vm->managed)
+ drm_mm_remove_node(&vma->node);
err_free_vma:
kfree(vma);
return ERR_PTR(ret);
}
+static const struct drm_gpuvm_ops msm_gpuvm_ops = {
+ .vm_free = msm_gem_vm_free,
+};
+
+/**
+ * msm_gem_vm_create() - Create and initialize a &msm_gem_vm
+ * @drm: the drm device
+ * @mmu: the backing MMU objects handling mapping/unmapping
+ * @name: the name of the VM
+ * @va_start: the start offset of the GPU VA space
+ * @va_size: the size of the GPU VA space
+ * @managed: is it a kernel managed VM?
+ *
+ * In a kernel managed VM, the kernel handles address allocation, and only
+ * synchronous operations are supported. In a user managed VM, userspace
+ * handles virtual address allocation, and both async and sync operations
+ * are supported.
+ */
struct msm_gem_vm *
-msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
- u64 va_start, u64 size)
+msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
+ u64 va_start, u64 va_size, bool managed)
{
+ enum drm_gpuvm_flags flags = managed ? DRM_GPUVM_VA_WEAK_REF : 0;
struct msm_gem_vm *vm;
+ struct drm_gem_object *dummy_gem;
+ int ret = 0;
if (IS_ERR(mmu))
return ERR_CAST(mmu);
@@ -158,15 +202,28 @@ msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
if (!vm)
return ERR_PTR(-ENOMEM);
- spin_lock_init(&vm->lock);
- vm->name = name;
- vm->mmu = mmu;
- vm->va_start = va_start;
- vm->va_size = size;
+ dummy_gem = drm_gpuvm_resv_object_alloc(drm);
+ if (!dummy_gem) {
+ ret = -ENOMEM;
+ goto err_free_vm;
+ }
+
+ drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
+ va_start, va_size, 0, 0, &msm_gpuvm_ops);
+ drm_gem_object_put(dummy_gem);
+
+ spin_lock_init(&vm->mm_lock);
+ mutex_init(&vm->vm_lock);
- drm_mm_init(&vm->mm, va_start, size);
+ vm->mmu = mmu;
+ vm->managed = managed;
- kref_init(&vm->kref);
+ drm_mm_init(&vm->mm, va_start, va_size);
return vm;
+
+err_free_vm:
+ kfree(vm);
+ return ERR_PTR(ret);
+
}
diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
index 88504c4b842f..6458bd82a0cd 100644
--- a/drivers/gpu/drm/msm/msm_kms.c
+++ b/drivers/gpu/drm/msm/msm_kms.c
@@ -204,8 +204,8 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
return NULL;
}
- vm = msm_gem_vm_create(mmu, "mdp_kms",
- 0x1000, 0x100000000 - 0x1000);
+ vm = msm_gem_vm_create(dev, mmu, "mdp_kms",
+ 0x1000, 0x100000000 - 0x1000, true);
if (IS_ERR(vm)) {
dev_err(mdp_dev, "vm create, error %pe\n", vm);
mmu->funcs->destroy(mmu);
--
2.48.1
* [PATCH v2 12/34] drm/msm: Use drm_gpuvm types more
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (10 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 11/34] drm/msm: drm_gpuvm conversion Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 13/34] drm/msm: Split submit_pin_objects() Rob Clark
` (21 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, Jessica Zhang, Jani Nikula,
Barnabás Czémán, Arnd Bergmann, Jonathan Marek,
Krzysztof Kozlowski, Eugene Lepshy, open list
From: Rob Clark <robdclark@chromium.org>
Most of the driver code doesn't need to reach into msm-specific fields,
so just use the drm_gpuvm/drm_gpuva types directly. This should improve
commonality with other drivers and make the code easier to understand.
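The downcast is the usual container_of() pattern; e.g. (taken from the
a6xx changes below):

    struct msm_gem_vm *vm = to_msm_vm(gpu->vm);  /* gpu->vm is a drm_gpuvm now */

    msm_mmu_set_fault_handler(vm->mmu, gpu, a6xx_fault_handler);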
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +-
drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 6 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +-
drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +-
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 14 ++--
drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 21 +++--
drivers/gpu/drm/msm/adreno/adreno_gpu.h | 4 +-
.../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 4 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 6 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 6 +-
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 2 +-
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 11 +--
drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 11 +--
drivers/gpu/drm/msm/dsi/dsi_host.c | 6 +-
drivers/gpu/drm/msm/msm_drv.h | 19 ++---
drivers/gpu/drm/msm/msm_fb.c | 14 ++--
drivers/gpu/drm/msm/msm_gem.c | 82 +++++++++----------
drivers/gpu/drm/msm/msm_gem.h | 53 ++++++------
drivers/gpu/drm/msm/msm_gem_submit.c | 4 +-
drivers/gpu/drm/msm/msm_gem_vma.c | 72 +++++++---------
drivers/gpu/drm/msm/msm_gpu.c | 21 +++--
drivers/gpu/drm/msm/msm_gpu.h | 10 +--
drivers/gpu/drm/msm/msm_kms.c | 6 +-
drivers/gpu/drm/msm/msm_kms.h | 2 +-
drivers/gpu/drm/msm/msm_submitqueue.c | 2 +-
27 files changed, 192 insertions(+), 202 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 94c49ed057cd..c4c723a0bf1a 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu)
uint32_t *ptr, len;
int i, ret;
- a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error);
+ a2xx_gpummu_params(to_msm_vm(gpu->vm)->mmu, &pt_base, &tran_error);
DBG("%s", gpu->name);
@@ -466,11 +466,11 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu)
return state;
}
-static struct msm_gem_vm *
+static struct drm_gpuvm *
a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
{
struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true);
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index cce95ad3cfb8..9dd7dea84a4a 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -1786,8 +1786,10 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev)
return ERR_PTR(ret);
}
- if (gpu->vm)
- msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler);
+ if (gpu->vm) {
+ msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu,
+ a5xx_fault_handler);
+ }
/* Set up the preemption specific bits and pieces for each ringbuffer */
a5xx_preempt_init(gpu);
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 259a589a827d..32711c4967f7 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -1259,6 +1259,8 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
{
+ struct msm_mmu *mmu = to_msm_vm(gmu->vm)->mmu;
+
msm_gem_kernel_put(gmu->hfi.obj, gmu->vm);
msm_gem_kernel_put(gmu->debug.obj, gmu->vm);
msm_gem_kernel_put(gmu->icache.obj, gmu->vm);
@@ -1266,8 +1268,8 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu)
msm_gem_kernel_put(gmu->dummy.obj, gmu->vm);
msm_gem_kernel_put(gmu->log.obj, gmu->vm);
- gmu->vm->mmu->funcs->detach(gmu->vm->mmu);
- msm_gem_vm_put(gmu->vm);
+ mmu->funcs->detach(mmu);
+ drm_gpuvm_put(gmu->vm);
}
static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
index cceda7d9c33a..5da36226b93d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h
@@ -62,7 +62,7 @@ struct a6xx_gmu {
/* For serializing communication with the GMU: */
struct mutex lock;
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
void __iomem *mmio;
void __iomem *rscc;
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index a124249f7a1d..4811be5a7c29 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
if (ctx->seqno == ring->cur_ctx_seqno)
return;
- if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid))
+ if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
return;
if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
@@ -2243,7 +2243,7 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp,
mutex_unlock(&a6xx_gpu->gmu.lock);
}
-static struct msm_gem_vm *
+static struct drm_gpuvm *
a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -2261,12 +2261,12 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
return adreno_iommu_create_vm(gpu, pdev, quirks);
}
-static struct msm_gem_vm *
+static struct drm_gpuvm *
a6xx_create_private_vm(struct msm_gpu *gpu)
{
struct msm_mmu *mmu;
- mmu = msm_iommu_pagetable_create(gpu->vm->mmu);
+ mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu);
if (IS_ERR(mmu))
return ERR_CAST(mmu);
@@ -2546,8 +2546,10 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev)
adreno_gpu->uche_trap_base = 0x1fffffffff000ull;
- if (gpu->vm)
- msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler);
+ if (gpu->vm) {
+ msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu,
+ a6xx_fault_handler);
+ }
a6xx_calc_ubwc_config(adreno_gpu);
/* Set up the preemption specific bits and pieces for each ringbuffer */
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
index 41229c60aa06..bd40d0f26e2c 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c
@@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu,
struct a7xx_cp_smmu_info *smmu_info_ptr = ptr;
- msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid);
+ msm_iommu_pagetable_params(to_msm_vm(gpu->vm)->mmu, &ttbr, &asid);
smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC;
smmu_info_ptr->ttbr0 = ttbr;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 0ba1819833ab..0f71703f6ec7 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid)
return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid);
}
-struct msm_gem_vm *
+struct drm_gpuvm *
adreno_create_vm(struct msm_gpu *gpu,
struct platform_device *pdev)
{
return adreno_iommu_create_vm(gpu, pdev, 0);
}
-struct msm_gem_vm *
+struct drm_gpuvm *
adreno_iommu_create_vm(struct msm_gpu *gpu,
struct platform_device *pdev,
unsigned long quirks)
{
struct iommu_domain_geometry *geometry;
struct msm_mmu *mmu;
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
u64 start, size;
mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks);
@@ -259,9 +259,10 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu)
if (!adreno_gpu->stall_enabled &&
ktime_after(ktime_get(), adreno_gpu->stall_reenable_time) &&
!READ_ONCE(gpu->crashstate)) {
+ struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
adreno_gpu->stall_enabled = true;
- gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true);
+ mmu->funcs->set_stall(mmu, true);
}
spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags);
}
@@ -275,6 +276,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
struct adreno_smmu_fault_info *info, const char *block,
u32 scratch[4])
{
+ struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
const char *type = "UNKNOWN";
bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) &&
@@ -287,9 +289,10 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
*/
spin_lock_irqsave(&adreno_gpu->fault_stall_lock, irq_flags);
if (adreno_gpu->stall_enabled) {
+
adreno_gpu->stall_enabled = false;
- gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false);
+ mmu->funcs->set_stall(mmu, false);
}
adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500);
spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags);
@@ -299,7 +302,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
* it now.
*/
if (!do_devcoredump) {
- gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu);
+ mmu->funcs->resume_translation(mmu);
}
/*
@@ -394,7 +397,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
return 0;
case MSM_PARAM_FAULTS:
if (ctx->vm)
- *value = gpu->global_faults + ctx->vm->faults;
+ *value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
else
*value = gpu->global_faults;
return 0;
@@ -404,12 +407,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
case MSM_PARAM_VA_START:
if (ctx->vm == gpu->vm)
return UERR(EINVAL, drm, "requires per-process pgtables");
- *value = ctx->vm->base.mm_start;
+ *value = ctx->vm->mm_start;
return 0;
case MSM_PARAM_VA_SIZE:
if (ctx->vm == gpu->vm)
return UERR(EINVAL, drm, "requires per-process pgtables");
- *value = ctx->vm->base.mm_range;
+ *value = ctx->vm->mm_range;
return 0;
case MSM_PARAM_HIGHEST_BANK_BIT:
*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 7dbe09817edc..a76f4c62deee 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -642,11 +642,11 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len,
* Common helper function to initialize the default address space for arm-smmu
* attached targets
*/
-struct msm_gem_vm *
+struct drm_gpuvm *
adreno_create_vm(struct msm_gpu *gpu,
struct platform_device *pdev);
-struct msm_gem_vm *
+struct drm_gpuvm *
adreno_iommu_create_vm(struct msm_gpu *gpu,
struct platform_device *pdev,
unsigned long quirks);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
index 32e208ee946d..3b02f4d1a7a5 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
@@ -566,7 +566,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc
struct drm_writeback_job *job)
{
const struct msm_format *format;
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
struct dpu_hw_wb_cfg *wb_cfg;
int ret;
struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
@@ -619,7 +619,7 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc
struct drm_writeback_job *job)
{
struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc);
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
if (!job->fb)
return;
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
index d115b79af771..6aef29590a3d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c
@@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes(
return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout);
}
-static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
+static void _dpu_format_populate_addrs_ubwc(struct drm_gpuvm *vm,
struct drm_framebuffer *fb,
struct dpu_hw_fmt_layout *layout)
{
@@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm,
}
}
-static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm,
+static void _dpu_format_populate_addrs_linear(struct drm_gpuvm *vm,
struct drm_framebuffer *fb,
struct dpu_hw_fmt_layout *layout)
{
@@ -373,7 +373,7 @@ static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm,
* @fb: framebuffer pointer
* @layout: format layout structure to populate
*/
-void dpu_format_populate_addrs(struct msm_gem_vm *vm,
+void dpu_format_populate_addrs(struct drm_gpuvm *vm,
struct drm_framebuffer *fb,
struct dpu_hw_fmt_layout *layout)
{
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
index 989f3e13c497..127bf4f586db 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h
@@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats,
return false;
}
-void dpu_format_populate_addrs(struct msm_gem_vm *vm,
+void dpu_format_populate_addrs(struct drm_gpuvm *vm,
struct drm_framebuffer *fb,
struct dpu_hw_fmt_layout *layout);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index bb5db6da636a..a9cd215cfd33 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -1098,17 +1098,17 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms)
if (!dpu_kms->base.vm)
return;
- mmu = dpu_kms->base.vm->mmu;
+ mmu = to_msm_vm(dpu_kms->base.vm)->mmu;
mmu->funcs->detach(mmu);
- msm_gem_vm_put(dpu_kms->base.vm);
+ drm_gpuvm_put(dpu_kms->base.vm);
dpu_kms->base.vm = NULL;
}
static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms)
{
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
vm = msm_kms_init_vm(dpu_kms->dev);
if (IS_ERR(vm))
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
index 3578f52048a5..fbf9c1fd6cfb 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h
@@ -34,7 +34,7 @@
*/
struct dpu_plane_state {
struct drm_plane_state base;
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
struct dpu_sw_pipe pipe;
struct dpu_sw_pipe r_pipe;
struct dpu_sw_pipe_cfg pipe_cfg;
diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
index d5b5628bee24..9326ed3aab04 100644
--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
@@ -120,15 +120,16 @@ static void mdp4_destroy(struct msm_kms *kms)
{
struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
struct device *dev = mdp4_kms->dev->dev;
- struct msm_gem_vm *vm = kms->vm;
if (mdp4_kms->blank_cursor_iova)
msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm);
drm_gem_object_put(mdp4_kms->blank_cursor_bo);
- if (vm) {
- vm->mmu->funcs->detach(vm->mmu);
- msm_gem_vm_put(vm);
+ if (kms->vm) {
+ struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu;
+
+ mmu->funcs->detach(mmu);
+ drm_gpuvm_put(kms->vm);
}
if (mdp4_kms->rpm_enabled)
@@ -380,7 +381,7 @@ static int mdp4_kms_init(struct drm_device *dev)
struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms));
struct msm_kms *kms = NULL;
struct msm_mmu *mmu;
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
int ret;
u32 major, minor;
unsigned long max_clk;
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
index 9dca0385a42d..b6e6bd1f95ee 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
@@ -198,11 +198,12 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms);
static void mdp5_kms_destroy(struct msm_kms *kms)
{
struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
- struct msm_gem_vm *vm = kms->vm;
- if (vm) {
- vm->mmu->funcs->detach(vm->mmu);
- msm_gem_vm_put(vm);
+ if (kms->vm) {
+ struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu;
+
+ mmu->funcs->detach(mmu);
+ drm_gpuvm_put(kms->vm);
}
mdp_kms_destroy(&mdp5_kms->base);
@@ -500,7 +501,7 @@ static int mdp5_kms_init(struct drm_device *dev)
struct mdp5_kms *mdp5_kms;
struct mdp5_cfg *config;
struct msm_kms *kms = priv->kms;
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
int i, ret;
ret = mdp5_init(to_platform_device(dev->dev), dev);
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index 16335ebd21e4..2d1699b7dc93 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -143,7 +143,7 @@ struct msm_dsi_host {
/* DSI 6G TX buffer*/
struct drm_gem_object *tx_gem_obj;
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
/* DSI v2 TX buffer */
void *tx_buf;
@@ -1146,7 +1146,7 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size)
uint64_t iova;
u8 *data;
- msm_host->vm = msm_gem_vm_get(priv->kms->vm);
+ msm_host->vm = drm_gpuvm_get(priv->kms->vm);
data = msm_gem_kernel_new(dev, size, MSM_BO_WC,
msm_host->vm,
@@ -1194,7 +1194,7 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host)
if (msm_host->tx_gem_obj) {
msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm);
- msm_gem_vm_put(msm_host->vm);
+ drm_gpuvm_put(msm_host->vm);
msm_host->tx_gem_obj = NULL;
msm_host->vm = NULL;
}
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index ad509403f072..b77fd2c531c3 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -48,8 +48,6 @@ struct msm_rd_state;
struct msm_perf_state;
struct msm_gem_submit;
struct msm_fence_context;
-struct msm_gem_vm;
-struct msm_gem_vma;
struct msm_disp_state;
#define MAX_CRTCS 8
@@ -230,7 +228,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc);
int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu);
void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu);
-struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev);
+struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev);
bool msm_use_mmu(struct drm_device *dev);
int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
@@ -251,13 +249,14 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
int msm_gem_prime_pin(struct drm_gem_object *obj);
void msm_gem_prime_unpin(struct drm_gem_object *obj);
-int msm_framebuffer_prepare(struct drm_framebuffer *fb,
- struct msm_gem_vm *vm, bool needs_dirtyfb);
-void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
- struct msm_gem_vm *vm, bool needed_dirtyfb);
-uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
- struct msm_gem_vm *vm, int plane);
-struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane);
+int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct drm_gpuvm *vm,
+ bool needs_dirtyfb);
+void msm_framebuffer_cleanup(struct drm_framebuffer *fb, struct drm_gpuvm *vm,
+ bool needed_dirtyfb);
+uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, struct drm_gpuvm *vm,
+ int plane);
+struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb,
+ int plane);
const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb);
struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev,
struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd);
diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c
index 6df318b73534..d267aa1cb218 100644
--- a/drivers/gpu/drm/msm/msm_fb.c
+++ b/drivers/gpu/drm/msm/msm_fb.c
@@ -75,9 +75,8 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m)
/* prepare/pin all the fb's bo's for scanout.
*/
-int msm_framebuffer_prepare(struct drm_framebuffer *fb,
- struct msm_gem_vm *vm,
- bool needs_dirtyfb)
+int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct drm_gpuvm *vm,
+ bool needs_dirtyfb)
{
struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
int ret, i, n = fb->format->num_planes;
@@ -98,9 +97,8 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb,
return 0;
}
-void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
- struct msm_gem_vm *vm,
- bool needed_dirtyfb)
+void msm_framebuffer_cleanup(struct drm_framebuffer *fb, struct drm_gpuvm *vm,
+ bool needed_dirtyfb)
{
struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
int i, n = fb->format->num_planes;
@@ -115,8 +113,8 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb,
memset(msm_fb->iova, 0, sizeof(msm_fb->iova));
}
-uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb,
- struct msm_gem_vm *vm, int plane)
+uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, struct drm_gpuvm *vm,
+ int plane)
{
struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb);
return msm_fb->iova[plane] + fb->offsets[plane];
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 7901871c66cc..a0c15cca9245 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -71,7 +71,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
msecs_to_jiffies(1000));
msm_gem_lock(obj);
- put_iova_spaces(obj, &ctx->vm->base, true);
+ put_iova_spaces(obj, ctx->vm, true);
msm_gem_unlock(obj);
}
@@ -367,8 +367,8 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
return offset;
}
-static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
- struct msm_gem_vm *vm)
+static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj,
+ struct drm_gpuvm *vm)
{
struct drm_gpuvm_bo *vm_bo;
@@ -378,13 +378,13 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
struct drm_gpuva *vma;
drm_gpuvm_bo_for_each_va (vma, vm_bo) {
- if (vma->vm == &vm->base) {
+ if (vma->vm == vm) {
/* lookup_vma() should only be used in paths
* with at most one vma per vm
*/
GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva));
- return to_msm_vma(vma);
+ return vma;
}
}
}
@@ -414,22 +414,20 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
drm_gpuvm_bo_get(vm_bo);
drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
- struct msm_gem_vma *msm_vma = to_msm_vma(vma);
-
- msm_gem_vma_purge(msm_vma);
+ msm_gem_vma_purge(vma);
if (close)
- msm_gem_vma_close(msm_vma);
+ msm_gem_vma_close(vma);
}
drm_gpuvm_bo_put(vm_bo);
}
}
-static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
- struct msm_gem_vm *vm,
- u64 range_start, u64 range_end)
+static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
+ struct drm_gpuvm *vm, u64 range_start,
+ u64 range_end)
{
- struct msm_gem_vma *vma;
+ struct drm_gpuva *vma;
msm_gem_assert_locked(obj);
@@ -438,14 +436,14 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
if (!vma) {
vma = msm_gem_vma_new(vm, obj, range_start, range_end);
} else {
- GEM_WARN_ON(vma->base.va.addr < range_start);
- GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end);
+ GEM_WARN_ON(vma->va.addr < range_start);
+ GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
}
return vma;
}
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
struct page **pages;
@@ -502,17 +500,17 @@ void msm_gem_unpin_active(struct drm_gem_object *obj)
update_lru_active(obj);
}
-struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
- struct msm_gem_vm *vm)
+struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+ struct drm_gpuvm *vm)
{
return get_vma_locked(obj, vm, 0, U64_MAX);
}
static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
- struct msm_gem_vm *vm, uint64_t *iova,
- u64 range_start, u64 range_end)
+ struct drm_gpuvm *vm, uint64_t *iova,
+ u64 range_start, u64 range_end)
{
- struct msm_gem_vma *vma;
+ struct drm_gpuva *vma;
int ret;
msm_gem_assert_locked(obj);
@@ -523,7 +521,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
ret = msm_gem_pin_vma_locked(obj, vma);
if (!ret) {
- *iova = vma->base.va.addr;
+ *iova = vma->va.addr;
pin_obj_locked(obj);
}
@@ -535,8 +533,8 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
* limits iova to specified range (in pages)
*/
int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
- struct msm_gem_vm *vm, uint64_t *iova,
- u64 range_start, u64 range_end)
+ struct drm_gpuvm *vm, uint64_t *iova,
+ u64 range_start, u64 range_end)
{
int ret;
@@ -548,8 +546,8 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
}
/* get iova and pin it. Should have a matching put */
-int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
- struct msm_gem_vm *vm, uint64_t *iova)
+int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+ uint64_t *iova)
{
return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX);
}
@@ -558,10 +556,10 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
* Get an iova but don't pin it. Doesn't need a put because iovas are currently
* valid for the life of the object
*/
-int msm_gem_get_iova(struct drm_gem_object *obj,
- struct msm_gem_vm *vm, uint64_t *iova)
+int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+ uint64_t *iova)
{
- struct msm_gem_vma *vma;
+ struct drm_gpuva *vma;
int ret = 0;
msm_gem_lock(obj);
@@ -569,7 +567,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
} else {
- *iova = vma->base.va.addr;
+ *iova = vma->va.addr;
}
msm_gem_unlock(obj);
@@ -577,9 +575,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
}
static int clear_iova(struct drm_gem_object *obj,
- struct msm_gem_vm *vm)
+ struct drm_gpuvm *vm)
{
- struct msm_gem_vma *vma = lookup_vma(obj, vm);
+ struct drm_gpuva *vma = lookup_vma(obj, vm);
if (!vma)
return 0;
@@ -598,7 +596,7 @@ static int clear_iova(struct drm_gem_object *obj,
* Setting an iova of zero will clear the vma.
*/
int msm_gem_set_iova(struct drm_gem_object *obj,
- struct msm_gem_vm *vm, uint64_t iova)
+ struct drm_gpuvm *vm, uint64_t iova)
{
int ret = 0;
@@ -606,11 +604,11 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
if (!iova) {
ret = clear_iova(obj, vm);
} else {
- struct msm_gem_vma *vma;
+ struct drm_gpuva *vma;
vma = get_vma_locked(obj, vm, iova, iova + obj->size);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
- } else if (GEM_WARN_ON(vma->base.va.addr != iova)) {
+ } else if (GEM_WARN_ON(vma->va.addr != iova)) {
clear_iova(obj, vm);
ret = -EBUSY;
}
@@ -625,10 +623,9 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
* purged until something else (shrinker, mm_notifier, destroy, etc) decides
* to get rid of it
*/
-void msm_gem_unpin_iova(struct drm_gem_object *obj,
- struct msm_gem_vm *vm)
+void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm)
{
- struct msm_gem_vma *vma;
+ struct drm_gpuva *vma;
msm_gem_lock(obj);
vma = lookup_vma(obj, vm);
@@ -1241,9 +1238,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
return ERR_PTR(ret);
}
-void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
- uint32_t flags, struct msm_gem_vm *vm,
- struct drm_gem_object **bo, uint64_t *iova)
+void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags,
+ struct drm_gpuvm *vm, struct drm_gem_object **bo,
+ uint64_t *iova)
{
void *vaddr;
struct drm_gem_object *obj = msm_gem_new(dev, size, flags);
@@ -1276,8 +1273,7 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
}
-void msm_gem_kernel_put(struct drm_gem_object *bo,
- struct msm_gem_vm *vm)
+void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm)
{
if (IS_ERR_OR_NULL(bo))
return;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 5091892bbe2e..acb976722580 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -85,12 +85,7 @@ struct msm_gem_vm {
};
#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
-struct msm_gem_vm *
-msm_gem_vm_get(struct msm_gem_vm *vm);
-
-void msm_gem_vm_put(struct msm_gem_vm *vm);
-
-struct msm_gem_vm *
+struct drm_gpuvm *
msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
u64 va_start, u64 va_size, bool managed);
@@ -117,12 +112,12 @@ struct msm_gem_vma {
};
#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base)
-struct msm_gem_vma *
-msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+struct drm_gpuva *
+msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct msm_gem_vma *vma);
-int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size);
-void msm_gem_vma_close(struct msm_gem_vma *vma);
+void msm_gem_vma_purge(struct drm_gpuva *vma);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+void msm_gem_vma_close(struct drm_gpuva *vma);
struct msm_gem_object {
struct drm_gem_object base;
@@ -167,22 +162,21 @@ struct msm_gem_object {
#define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
void msm_gem_unpin_locked(struct drm_gem_object *obj);
void msm_gem_unpin_active(struct drm_gem_object *obj);
-struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
- struct msm_gem_vm *vm);
-int msm_gem_get_iova(struct drm_gem_object *obj,
- struct msm_gem_vm *vm, uint64_t *iova);
-int msm_gem_set_iova(struct drm_gem_object *obj,
- struct msm_gem_vm *vm, uint64_t iova);
+struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj,
+ struct drm_gpuvm *vm);
+int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+ uint64_t *iova);
+int msm_gem_set_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+ uint64_t iova);
int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj,
- struct msm_gem_vm *vm, uint64_t *iova,
- u64 range_start, u64 range_end);
-int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
- struct msm_gem_vm *vm, uint64_t *iova);
-void msm_gem_unpin_iova(struct drm_gem_object *obj,
- struct msm_gem_vm *vm);
+ struct drm_gpuvm *vm, uint64_t *iova,
+ u64 range_start, u64 range_end);
+int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
+ uint64_t *iova);
+void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm);
void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
@@ -203,11 +197,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
uint32_t size, uint32_t flags, uint32_t *handle, char *name);
struct drm_gem_object *msm_gem_new(struct drm_device *dev,
uint32_t size, uint32_t flags);
-void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size,
- uint32_t flags, struct msm_gem_vm *vm,
- struct drm_gem_object **bo, uint64_t *iova);
-void msm_gem_kernel_put(struct drm_gem_object *bo,
- struct msm_gem_vm *vm);
+void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags,
+ struct drm_gpuvm *vm, struct drm_gem_object **bo,
+ uint64_t *iova);
+void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm);
struct drm_gem_object *msm_gem_import(struct drm_device *dev,
struct dma_buf *dmabuf, struct sg_table *sgt);
__printf(2, 3)
@@ -301,7 +294,7 @@ struct msm_gem_submit {
struct kref ref;
struct drm_device *dev;
struct msm_gpu *gpu;
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
struct list_head node; /* node in ring submit list */
struct drm_exec exec;
uint32_t seqno; /* Sequence number of the submit on the ring */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index e8a670566147..998cedb24941 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -299,7 +299,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
for (i = 0; i < submit->nr_bos; i++) {
struct drm_gem_object *obj = submit->bos[i].obj;
- struct msm_gem_vma *vma;
+ struct drm_gpuva *vma;
/* if locking succeeded, pin bo: */
vma = msm_gem_get_vma_locked(obj, submit->vm);
@@ -312,7 +312,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
if (ret)
break;
- submit->bos[i].iova = vma->base.va.addr;
+ submit->bos[i].iova = vma->va.addr;
}
/*
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 56221dfdf551..0bc22618e9f0 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -20,52 +20,38 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
kfree(vm);
}
-
-void msm_gem_vm_put(struct msm_gem_vm *vm)
-{
- if (vm)
- drm_gpuvm_put(&vm->base);
-}
-
-struct msm_gem_vm *
-msm_gem_vm_get(struct msm_gem_vm *vm)
-{
- if (!IS_ERR_OR_NULL(vm))
- drm_gpuvm_get(&vm->base);
-
- return vm;
-}
-
/* Actually unmap memory for the vma */
-void msm_gem_vma_purge(struct msm_gem_vma *vma)
+void msm_gem_vma_purge(struct drm_gpuva *vma)
{
- struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
- unsigned size = vma->base.va.range;
+ struct msm_gem_vma *msm_vma = to_msm_vma(vma);
+ struct msm_gem_vm *vm = to_msm_vm(vma->vm);
+ unsigned size = vma->va.range;
/* Don't do anything if the memory isn't mapped */
- if (!vma->mapped)
+ if (!msm_vma->mapped)
return;
- vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size);
+ vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size);
- vma->mapped = false;
+ msm_vma->mapped = false;
}
/* Map and pin vma: */
int
-msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
+msm_gem_vma_map(struct drm_gpuva *vma, int prot,
struct sg_table *sgt, int size)
{
- struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
+ struct msm_gem_vma *msm_vma = to_msm_vma(vma);
+ struct msm_gem_vm *vm = to_msm_vm(vma->vm);
int ret;
- if (GEM_WARN_ON(!vma->base.va.addr))
+ if (GEM_WARN_ON(!vma->va.addr))
return -EINVAL;
- if (vma->mapped)
+ if (msm_vma->mapped)
return 0;
- vma->mapped = true;
+ msm_vma->mapped = true;
/*
* NOTE: iommu/io-pgtable can allocate pages, so we cannot hold
@@ -76,40 +62,44 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
* Revisit this if we can come up with a scheme to pre-alloc pages
* for the pgtable in map/unmap ops.
*/
- ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot);
+ ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
if (ret) {
- vma->mapped = false;
+ msm_vma->mapped = false;
}
return ret;
}
/* Close an iova. Warn if it is still in use */
-void msm_gem_vma_close(struct msm_gem_vma *vma)
+void msm_gem_vma_close(struct drm_gpuva *vma)
{
- struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
+ struct msm_gem_vm *vm = to_msm_vm(vma->vm);
+ struct msm_gem_vma *msm_vma = to_msm_vma(vma);
- GEM_WARN_ON(vma->mapped);
+ GEM_WARN_ON(msm_vma->mapped);
spin_lock(&vm->mm_lock);
- if (vma->base.va.addr)
- drm_mm_remove_node(&vma->node);
+ if (vma->va.addr && vm->managed)
+ drm_mm_remove_node(&msm_vma->node);
spin_unlock(&vm->mm_lock);
+ dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL);
mutex_lock(&vm->vm_lock);
- drm_gpuva_remove(&vma->base);
- drm_gpuva_unlink(&vma->base);
+ drm_gpuva_remove(vma);
+ drm_gpuva_unlink(vma);
mutex_unlock(&vm->vm_lock);
+ dma_resv_unlock(drm_gpuvm_resv(vma->vm));
kfree(vma);
}
/* Create a new vma and allocate an iova for it */
-struct msm_gem_vma *
-msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
+struct drm_gpuva *
+msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
u64 range_start, u64 range_end)
{
+ struct msm_gem_vm *vm = to_msm_vm(_vm);
struct drm_gpuvm_bo *vm_bo;
struct msm_gem_vma *vma;
int ret;
@@ -154,7 +144,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
mutex_unlock(&vm->vm_lock);
GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
- return vma;
+ return &vma->base;
err_va_remove:
mutex_lock(&vm->vm_lock);
@@ -186,7 +176,7 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = {
* handles virtual address allocation, and both async and sync operations
* are supported.
*/
-struct msm_gem_vm *
+struct drm_gpuvm *
msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
u64 va_start, u64 va_size, bool managed)
{
@@ -220,7 +210,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
drm_mm_init(&vm->mm, va_start, va_size);
- return vm;
+ return &vm->base;
err_free_vm:
kfree(vm);
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 0d466a2e9b32..4d24dcf62064 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
if (state->fault_info.ttbr0) {
struct msm_gpu_fault_info *info = &state->fault_info;
- struct msm_mmu *mmu = submit->vm->mmu;
+ struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
&info->asid);
@@ -387,7 +387,7 @@ static void recover_worker(struct kthread_work *work)
/* Increment the fault counts */
submit->queue->faults++;
if (submit->vm)
- submit->vm->faults++;
+ to_msm_vm(submit->vm)->faults++;
get_comm_cmdline(submit, &comm, &cmd);
@@ -463,6 +463,7 @@ static void fault_worker(struct kthread_work *work)
{
struct msm_gpu *gpu = container_of(work, struct msm_gpu, fault_work);
struct msm_gem_submit *submit;
+ struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu);
char *comm = NULL, *cmd = NULL;
@@ -492,7 +493,7 @@ static void fault_worker(struct kthread_work *work)
resume_smmu:
memset(&gpu->fault_info, 0, sizeof(gpu->fault_info));
- gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu);
+ mmu->funcs->resume_translation(mmu);
mutex_unlock(&gpu->lock);
}
@@ -829,10 +830,11 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
}
/* Return a new address space for a msm_drm_private instance */
-struct msm_gem_vm *
+struct drm_gpuvm *
msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
{
- struct msm_gem_vm *vm = NULL;
+ struct drm_gpuvm *vm = NULL;
+
if (!gpu)
return NULL;
@@ -843,11 +845,11 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
if (gpu->funcs->create_private_vm) {
vm = gpu->funcs->create_private_vm(gpu);
if (!IS_ERR(vm))
- vm->pid = get_pid(task_pid(task));
+ to_msm_vm(vm)->pid = get_pid(task_pid(task));
}
if (IS_ERR_OR_NULL(vm))
- vm = msm_gem_vm_get(gpu->vm);
+ vm = drm_gpuvm_get(gpu->vm);
return vm;
}
@@ -1020,8 +1022,9 @@ void msm_gpu_cleanup(struct msm_gpu *gpu)
msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm);
if (!IS_ERR_OR_NULL(gpu->vm)) {
- gpu->vm->mmu->funcs->detach(gpu->vm->mmu);
- msm_gem_vm_put(gpu->vm);
+ struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu;
+ mmu->funcs->detach(mmu);
+ drm_gpuvm_put(gpu->vm);
}
if (gpu->worker) {
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 1f26ba00f773..d8425e6d7f5a 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -78,8 +78,8 @@ struct msm_gpu_funcs {
/* note: gpu_set_freq() can assume that we have been pm_resumed */
void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
bool suspended);
- struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
- struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu);
+ struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
+ struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu);
uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
/**
@@ -234,7 +234,7 @@ struct msm_gpu {
void __iomem *mmio;
int irq;
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
/* Power Control: */
struct regulator *gpu_reg, *gpu_cx;
@@ -363,7 +363,7 @@ struct msm_context {
int queueid;
/** @vm: the per-process GPU address-space */
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
/** @kref: the reference count */
struct kref ref;
@@ -673,7 +673,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs,
const char *name, struct msm_gpu_config *config);
-struct msm_gem_vm *
+struct drm_gpuvm *
msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
void msm_gpu_cleanup(struct msm_gpu *gpu);
diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
index 6458bd82a0cd..e82b8569a468 100644
--- a/drivers/gpu/drm/msm/msm_kms.c
+++ b/drivers/gpu/drm/msm/msm_kms.c
@@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void
return -ENOSYS;
}
-struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
+struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev)
{
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
struct msm_mmu *mmu;
struct device *mdp_dev = dev->dev;
struct device *mdss_dev = mdp_dev->parent;
@@ -212,7 +212,7 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
return vm;
}
- msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler);
+ msm_mmu_set_fault_handler(to_msm_vm(vm)->mmu, kms, msm_kms_fault_handler);
return vm;
}
diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h
index f45996a03e15..7cdb2eb67700 100644
--- a/drivers/gpu/drm/msm/msm_kms.h
+++ b/drivers/gpu/drm/msm/msm_kms.h
@@ -139,7 +139,7 @@ struct msm_kms {
atomic_t fault_snapshot_capture;
/* mapper-id used to request GEM buffer mapped for scanout: */
- struct msm_gem_vm *vm;
+ struct drm_gpuvm *vm;
/* disp snapshot support */
struct kthread_worker *dump_worker;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 6298233c3568..8ced49c7557b 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref)
kfree(ctx->entities[i]);
}
- msm_gem_vm_put(ctx->vm);
+ drm_gpuvm_put(ctx->vm);
kfree(ctx->comm);
kfree(ctx->cmdline);
kfree(ctx);
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 13/34] drm/msm: Split submit_pin_objects()
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (11 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 12/34] drm/msm: Use drm_gpuvm types more Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 14/34] drm/msm: Lazily create context VM Rob Clark
` (20 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
For VM_BIND, in the first step, we just want to get the backing pages,
but defer creating the vma until the map/unmap ops are evaluated.
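To make the intent concrete, here is a rough sketch (not part of this
patch; vm_bind_prepare() and its get-pages loop are illustrative) of how
a VM_BIND submit path can reuse only the page-pinning half, with VMA
creation deferred until the ops are evaluated:

/*
 * Hypothetical sketch: a VM_BIND submit only needs the backing pages.
 * submit_pin_objects() is real after this patch; the surrounding
 * helper is illustrative only.
 */
static int vm_bind_prepare(struct msm_gem_submit *submit)
{
	for (int i = 0; i < submit->nr_bos; i++) {
		/* Get backing pages now; VMAs are created later, when
		 * the MAP/UNMAP ops are actually evaluated:
		 */
		struct page **pages =
			msm_gem_pin_pages_locked(submit->bos[i].obj);

		if (IS_ERR(pages))
			return PTR_ERR(pages);
	}

	/* Move the BOs to the pinned LRU: */
	submit_pin_objects(submit);

	return 0;
}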
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem_submit.c | 27 +++++++++++++++++++--------
1 file changed, 19 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 998cedb24941..c65f3a6a5256 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -292,12 +292,16 @@ static int submit_fence_sync(struct msm_gem_submit *submit)
return ret;
}
-static int submit_pin_objects(struct msm_gem_submit *submit)
+static int submit_pin_vmas(struct msm_gem_submit *submit)
{
- struct msm_drm_private *priv = submit->dev->dev_private;
- int i, ret = 0;
+ int ret = 0;
- for (i = 0; i < submit->nr_bos; i++) {
+ /*
+ * First loop, before holding the LRU lock, avoids holding the
+ * LRU lock while calling msm_gem_pin_vma_locked (which could
+ * trigger get_pages())
+ */
+ for (int i = 0; i < submit->nr_bos; i++) {
struct drm_gem_object *obj = submit->bos[i].obj;
struct drm_gpuva *vma;
@@ -315,6 +319,13 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
submit->bos[i].iova = vma->va.addr;
}
+ return ret;
+}
+
+static void submit_pin_objects(struct msm_gem_submit *submit)
+{
+ struct msm_drm_private *priv = submit->dev->dev_private;
+
/*
* A second loop while holding the LRU lock (a) avoids acquiring/dropping
* the LRU lock for each individual bo, while (b) avoiding holding the
@@ -323,14 +334,12 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
* could trigger deadlock with the shrinker).
*/
mutex_lock(&priv->lru.lock);
- for (i = 0; i < submit->nr_bos; i++) {
+ for (int i = 0; i < submit->nr_bos; i++) {
msm_gem_pin_obj_locked(submit->bos[i].obj);
}
mutex_unlock(&priv->lru.lock);
submit->bos_pinned = true;
-
- return ret;
}
static void submit_unpin_objects(struct msm_gem_submit *submit)
@@ -760,10 +769,12 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
goto out;
}
- ret = submit_pin_objects(submit);
+ ret = submit_pin_vmas(submit);
if (ret)
goto out;
+ submit_pin_objects(submit);
+
for (i = 0; i < args->nr_cmds; i++) {
struct drm_gem_object *obj;
uint64_t iova;
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 14/34] drm/msm: Lazily create context VM
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (12 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 13/34] drm/msm: Split submit_pin_objects() Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-04-16 17:38 ` Akhil P Oommen
2025-03-19 14:52 ` [PATCH v2 15/34] drm/msm: Add opt-in for VM_BIND Rob Clark
` (19 subsequent siblings)
33 siblings, 1 reply; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
In the next commit, a way for userspace to opt in to a userspace-managed
VM is added. For this to work, we need to defer creation of the VM until
it is needed.
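In code, the change of pattern looks roughly like this (a sketch; the
example function is made up, msm_context_vm() is the helper added by
this patch):

/* Before this patch, ctx->vm was dereferenced directly and was always
 * created at context init.  After it, callers ask for the VM, which is
 * created on first use:
 */
static u64 example_va_start(struct msm_gpu *gpu, struct msm_context *ctx)
{
	struct drm_gpuvm *vm = msm_context_vm(gpu->dev, ctx);

	/* No reference is returned; once created, the VM lives for the
	 * lifetime of the context.
	 */
	return vm->mm_start;
}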
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 3 ++-
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 14 +++++++-----
drivers/gpu/drm/msm/msm_drv.c | 29 ++++++++++++++++++++-----
drivers/gpu/drm/msm/msm_gem_submit.c | 2 +-
drivers/gpu/drm/msm/msm_gpu.h | 9 +++++++-
5 files changed, 43 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 4811be5a7c29..0b1e2ba3539e 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -112,6 +112,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
{
bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
struct msm_context *ctx = submit->queue->ctx;
+ struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx);
struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
phys_addr_t ttbr;
u32 asid;
@@ -120,7 +121,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
if (ctx->seqno == ring->cur_ctx_seqno)
return;
- if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
+ if (msm_iommu_pagetable_params(to_msm_vm(vm)->mmu, &ttbr, &asid))
return;
if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 0f71703f6ec7..e4d895dda051 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -351,6 +351,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
struct drm_device *drm = gpu->dev;
+ /* Note ctx can be NULL when called from rd_open(): */
+ struct drm_gpuvm *vm = ctx ? msm_context_vm(drm, ctx) : NULL;
/* No pointer params yet */
if (*len != 0)
@@ -396,8 +398,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
*value = 0;
return 0;
case MSM_PARAM_FAULTS:
- if (ctx->vm)
- *value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
+ if (vm)
+ *value = gpu->global_faults + to_msm_vm(vm)->faults;
else
*value = gpu->global_faults;
return 0;
@@ -405,14 +407,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
*value = gpu->suspend_count;
return 0;
case MSM_PARAM_VA_START:
- if (ctx->vm == gpu->vm)
+ if (vm == gpu->vm)
return UERR(EINVAL, drm, "requires per-process pgtables");
- *value = ctx->vm->mm_start;
+ *value = vm->mm_start;
return 0;
case MSM_PARAM_VA_SIZE:
- if (ctx->vm == gpu->vm)
+ if (vm == gpu->vm)
return UERR(EINVAL, drm, "requires per-process pgtables");
- *value = ctx->vm->mm_range;
+ *value = vm->mm_range;
return 0;
case MSM_PARAM_HIGHEST_BANK_BIT:
*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 6ef29bc48bb0..6fd981ee6aee 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -214,10 +214,29 @@ static void load_gpu(struct drm_device *dev)
mutex_unlock(&init_lock);
}
+/**
+ * msm_context_vm - lazily create the context's VM
+ *
+ * @dev: the drm device
+ * @ctx: the context
+ *
+ * The VM is lazily created, so that userspace has a chance to opt-in to having
+ * a userspace managed VM before the VM is created.
+ *
+ * Note that this does not return a reference to the VM. Once the VM is created,
+ * it exists for the lifetime of the context.
+ */
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
+{
+ struct msm_drm_private *priv = dev->dev_private;
+ if (!ctx->vm)
+ ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+ return ctx->vm;
+}
+
static int context_init(struct drm_device *dev, struct drm_file *file)
{
static atomic_t ident = ATOMIC_INIT(0);
- struct msm_drm_private *priv = dev->dev_private;
struct msm_context *ctx;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@@ -230,7 +249,6 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
kref_init(&ctx->ref);
msm_submitqueue_init(dev, ctx);
- ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
file->driver_priv = ctx;
ctx->seqno = atomic_inc_return(&ident);
@@ -408,7 +426,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
* Don't pin the memory here - just get an address so that userspace can
* be productive
*/
- return msm_gem_get_iova(obj, ctx->vm, iova);
+ return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova);
}
static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
@@ -417,18 +435,19 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
{
struct msm_drm_private *priv = dev->dev_private;
struct msm_context *ctx = file->driver_priv;
+ struct drm_gpuvm *vm = msm_context_vm(dev, ctx);
if (!priv->gpu)
return -EINVAL;
/* Only supported if per-process address space is supported: */
- if (priv->gpu->vm == ctx->vm)
+ if (priv->gpu->vm == vm)
return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
if (should_fail(&fail_gem_iova, obj->size))
return -ENOMEM;
- return msm_gem_set_iova(obj, ctx->vm, iova);
+ return msm_gem_set_iova(obj, vm, iova);
}
static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index c65f3a6a5256..9731ad7993cf 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
kref_init(&submit->ref);
submit->dev = dev;
- submit->vm = queue->ctx->vm;
+ submit->vm = msm_context_vm(dev, queue->ctx);
submit->gpu = gpu;
submit->cmd = (void *)&submit->bos[nr_bos];
submit->queue = queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d8425e6d7f5a..c15aad288552 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -362,7 +362,12 @@ struct msm_context {
*/
int queueid;
- /** @vm: the per-process GPU address-space */
+ /**
+ * @vm:
+ *
+ * The per-process GPU address-space. Do not access directly, use
+ * msm_context_vm().
+ */
struct drm_gpuvm *vm;
/** @kref: the reference count */
@@ -447,6 +452,8 @@ struct msm_context {
atomic64_t ctx_mem;
};
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
+
/**
* msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
*
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 15/34] drm/msm: Add opt-in for VM_BIND
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (13 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 14/34] drm/msm: Lazily create context VM Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 16/34] drm/msm: Mark VM as unusable on faults Rob Clark
` (18 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, open list
From: Rob Clark <robdclark@chromium.org>
Add a SET_PARAM for userspace to request to manage the VM itself,
instead of getting a kernel managed VM.
In order to transition to a userspace managed VM, this param must be set
before any mappings are created.
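From the userspace side, the opt-in would look roughly like the sketch
below (try_enable_vm_bind() is illustrative; it uses libdrm's
drmCommandWriteRead() and the uapi added at the end of this patch):

#include <stdbool.h>
#include <xf86drm.h>
#include "msm_drm.h"

/* Sketch: opt in to a userspace-managed VM right after opening the
 * device, before any BO is allocated or any mapping is created.
 */
static bool try_enable_vm_bind(int fd)
{
	struct drm_msm_param req = {
		.pipe  = MSM_PIPE_3D0,
		.param = MSM_PARAM_EN_VM_BIND,
		.value = 1,
	};

	/* Expect EBUSY if the VM was already created, or EINVAL without
	 * per-process pgtables; fall back to the kernel-managed VM then.
	 */
	return drmCommandWriteRead(fd, DRM_MSM_SET_PARAM,
				   &req, sizeof(req)) == 0;
}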
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 ++--
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 15 +++++++++++++
drivers/gpu/drm/msm/msm_drv.c | 13 +++++++++--
drivers/gpu/drm/msm/msm_gem.c | 8 +++++++
drivers/gpu/drm/msm/msm_gpu.c | 5 +++--
drivers/gpu/drm/msm/msm_gpu.h | 29 +++++++++++++++++++++++--
include/uapi/drm/msm_drm.h | 24 ++++++++++++++++++++
7 files changed, 90 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0b1e2ba3539e..ca3247f845b5 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2263,7 +2263,7 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
}
static struct drm_gpuvm *
-a6xx_create_private_vm(struct msm_gpu *gpu)
+a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
{
struct msm_mmu *mmu;
@@ -2273,7 +2273,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
return ERR_CAST(mmu);
return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL,
- adreno_private_vm_size(gpu), true);
+ adreno_private_vm_size(gpu), kernel_managed);
}
static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index e4d895dda051..739161df3e3c 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -483,6 +483,21 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
if (!capable(CAP_SYS_ADMIN))
return UERR(EPERM, drm, "invalid permissions");
return msm_context_set_sysprof(ctx, gpu, value);
+ case MSM_PARAM_EN_VM_BIND:
+ /* We can only support VM_BIND with per-process pgtables: */
+ if (ctx->vm == gpu->vm)
+ return UERR(EINVAL, drm, "requires per-process pgtables");
+
+ /*
+ * We can only switch to VM_BIND mode if the VM has not yet
+ * been created:
+ */
+ if (ctx->vm)
+ return UERR(EBUSY, drm, "VM already created");
+
+ ctx->userspace_managed_vm = value;
+
+ return 0;
default:
return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
}
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 6fd981ee6aee..5b5a64c8dddb 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -229,8 +229,11 @@ static void load_gpu(struct drm_device *dev)
struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
{
struct msm_drm_private *priv = dev->dev_private;
- if (!ctx->vm)
- ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+ if (!ctx->vm) {
+ ctx->vm = msm_gpu_create_private_vm(
+ priv->gpu, current, !ctx->userspace_managed_vm);
+
+ }
return ctx->vm;
}
@@ -419,6 +422,9 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
if (!priv->gpu)
return -EINVAL;
+ if (msm_context_is_vmbind(ctx))
+ return UERR(EINVAL, dev, "VM_BIND is enabled");
+
if (should_fail(&fail_gem_iova, obj->size))
return -ENOMEM;
@@ -440,6 +446,9 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
if (!priv->gpu)
return -EINVAL;
+ if (msm_context_is_vmbind(ctx))
+ return UERR(EINVAL, dev, "VM_BIND is enabled");
+
/* Only supported if per-process address space is supported: */
if (priv->gpu->vm == vm)
return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index a0c15cca9245..5a5220b6f21d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -63,6 +63,14 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
if (!ctx->vm)
return;
+ /*
+ * VM_BIND does not depend on implicit teardown of VMAs on handle
+ * close, but instead on implicit teardown of the VM when the device
+ * is closed (see msm_gem_vm_close())
+ */
+ if (msm_context_is_vmbind(ctx))
+ return;
+
/*
* TODO we might need to kick this to a queue to avoid blocking
* in CLOSE ioctl
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 4d24dcf62064..503e4dcc5a6f 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -831,7 +831,8 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
/* Return a new address space for a msm_drm_private instance */
struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+ bool kernel_managed)
{
struct drm_gpuvm *vm = NULL;
@@ -843,7 +844,7 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
* the global one
*/
if (gpu->funcs->create_private_vm) {
- vm = gpu->funcs->create_private_vm(gpu);
+ vm = gpu->funcs->create_private_vm(gpu, kernel_managed);
if (!IS_ERR(vm))
to_msm_vm(vm)->pid = get_pid(task_pid(task));
}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index c15aad288552..20f52d9636b0 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -79,7 +79,7 @@ struct msm_gpu_funcs {
void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
bool suspended);
struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
- struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu);
+ struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu, bool kernel_managed);
uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
/**
@@ -362,6 +362,14 @@ struct msm_context {
*/
int queueid;
+ /**
+ * @userspace_managed_vm:
+ *
+ * Has userspace opted-in to userspace managed VM (ie. VM_BIND) via
+ * MSM_PARAM_EN_VM_BIND?
+ */
+ bool userspace_managed_vm;
+
/**
* @vm:
*
@@ -454,6 +462,22 @@ struct msm_context {
struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
+/**
+ * msm_context_is_vmbind() - has userspace opted in to VM_BIND?
+ *
+ * @ctx: the drm_file context
+ *
+ * See MSM_PARAM_EN_VM_BIND. If userspace is managing the VM, it can
+ * do sparse binding including having multiple, potentially partial,
+ * mappings in the VM. Therefore certain legacy uabi (ie. GET_IOVA,
+ * SET_IOVA) are rejected because they don't have a sensible meaning.
+ */
+static inline bool
+msm_context_is_vmbind(struct msm_context *ctx)
+{
+ return ctx->userspace_managed_vm;
+}
+
/**
* msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
*
@@ -681,7 +705,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
const char *name, struct msm_gpu_config *config);
struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+ bool kernel_managed);
void msm_gpu_cleanup(struct msm_gpu *gpu);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 2342cb90857e..072e82a80607 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -91,6 +91,30 @@ struct drm_msm_timespec {
#define MSM_PARAM_UBWC_SWIZZLE 0x12 /* RO */
#define MSM_PARAM_MACROTILE_MODE 0x13 /* RO */
#define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */
+/* MSM_PARAM_EN_VM_BIND is set to 1 to enable VM_BIND ops.
+ *
+ * With VM_BIND enabled, userspace is required to allocate iova and use the
+ * VM_BIND ops for map/unmap ioctls. MSM_INFO_SET_IOVA and MSM_INFO_GET_IOVA
+ * will be rejected. (The latter does not have a sensible meaning when a BO
+ * can have multiple and/or partial mappings.)
+ *
+ * With VM_BIND enabled, userspace does not include a submit_bo table in the
+ * SUBMIT ioctl (this will be rejected); instead the resident set is
+ * determined by the VM_BIND ops.
+ *
+ * Enabling VM_BIND will fail on devices which do not have per-process
+ * pgtables, and it is not allowed to disable VM_BIND once it has been enabled.
+ *
+ * Enabling VM_BIND should be attempted prior to allocating any BOs or
+ * creating submitqueues of type MSM_SUBMITQUEUE_VM_BIND.
+ *
+ * Relatedly, when VM_BIND mode is enabled, the kernel will not try to recover
+ * from GPU faults or failed async VM_BIND ops, in particular because it is
+ * difficult to communicate to userspace which op failed so that userspace
+ * could rewind and try again. When the VM is marked unusable, the SUBMIT
+ * ioctl will return -EPIPE.
+ */
+#define MSM_PARAM_EN_VM_BIND 0x15 /* WO, once */
/* For backwards compat. The original support for preemption was based on
* a single ring per priority level so # of priority levels equals the #
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 16/34] drm/msm: Mark VM as unusable on faults
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (14 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 15/34] drm/msm: Add opt-in for VM_BIND Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 16:15 ` Connor Abbott
2025-03-19 14:52 ` [PATCH v2 17/34] drm/msm: Extend SUBMIT ioctl for VM_BIND Rob Clark
` (17 subsequent siblings)
33 siblings, 1 reply; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, Konrad Dybcio, open list
From: Rob Clark <robdclark@chromium.org>
If userspace has opted in to VM_BIND, then GPU faults and VM_BIND errors
will mark the VM as unusable.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
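For illustration, a userspace-side sketch of the resulting -EPIPE
contract (hedged: handle_device_lost() is a placeholder for however the
UMD surfaces VK_ERROR_DEVICE_LOST, this is not actual mesa code):

#include <errno.h>
#include <xf86drm.h>
#include <drm/msm_drm.h>

extern int handle_device_lost(void);   /* placeholder */

static int do_submit(int fd, struct drm_msm_gem_submit *req)
{
        int ret = drmCommandWriteRead(fd, DRM_MSM_GEM_SUBMIT,
                                      req, sizeof(*req));

        /* Once the VM is marked unusable, every SUBMIT fails with
         * -EPIPE; there is no point retrying, the logical device
         * must be re-created.
         */
        if (ret == -EPIPE)
                return handle_device_lost();

        return ret;
}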
drivers/gpu/drm/msm/msm_gem.h | 17 +++++++++++++++++
drivers/gpu/drm/msm/msm_gem_submit.c | 3 +++
drivers/gpu/drm/msm/msm_gpu.c | 16 ++++++++++++++--
3 files changed, 34 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index acb976722580..7cb720137548 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -82,6 +82,23 @@ struct msm_gem_vm {
/** @managed: is this a kernel managed VM? */
bool managed;
+
+ /**
+ * @unusable: True if the VM has become unusable because something
+ * bad happened during an asynchronous request.
+ *
+ * We don't try to recover from such failures, because this implies
+ * informing userspace about the specific operation that failed, and
+ * hoping the userspace driver can replay things from there. This all
+ * sounds very complicated for little gain.
+ *
+ * Instead, we should just flag the VM as unusable, and fail any
+ * further request targeting this VM.
+ *
+ * As an analogy, this would be mapped to a VK_ERROR_DEVICE_LOST
+ * situation, where the logical device needs to be re-created.
+ */
+ bool unusable;
};
#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 9731ad7993cf..9cef308a0ad1 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -668,6 +668,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (args->pad)
return -EINVAL;
+ if (to_msm_vm(ctx->vm)->unusable)
+ return UERR(EPIPE, dev, "context is unusable");
+
/* for now, we just have 3d pipe.. eventually this would need to
* be more clever to dispatch to appropriate gpu module:
*/
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 503e4dcc5a6f..4831f4e42fd9 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -386,8 +386,20 @@ static void recover_worker(struct kthread_work *work)
/* Increment the fault counts */
submit->queue->faults++;
- if (submit->vm)
- to_msm_vm(submit->vm)->faults++;
+ if (submit->vm) {
+ struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+
+ vm->faults++;
+
+ /*
+ * If userspace has opted in to VM_BIND (and therefore userspace
+ * management of the VM), faults mark the VM as unusable. This
+ * matches Vulkan expectations (Vulkan is the main target for
+ * VM_BIND).
+ */
+ if (!vm->managed)
+ vm->unusable = true;
+ }
get_comm_cmdline(submit, &comm, &cmd);
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 17/34] drm/msm: Extend SUBMIT ioctl for VM_BIND
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (15 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 16/34] drm/msm: Mark VM as unusable on faults Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 18/34] drm/msm: Add VM_BIND submitqueue Rob Clark
` (16 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, Konrad Dybcio, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Sumit Semwal, Christian König, open list,
open list:DMA BUFFER SHARING FRAMEWORK:Keyword:bdma_(?:buf|fence|resv)b,
moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:bdma_(?:buf|fence|resv)b
From: Rob Clark <robdclark@chromium.org>
This is a bit different from the path taken by other clean-slate
drivers. But there is a lot in common between BO pinning in the legacy
"EXEC" path and the "VM_BIND" MAP path. Also, we want the same fence and
syncobj handling.
(Why bother with fence fd's? Because for virtgpu native contexts (nctx),
guest syncobj's exist only as dma_fence's between the guest kernel and
host.)
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
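As a hedged usage sketch (values are illustrative; the VM_BIND
submitqueue itself and kernel-side handling of the OP flags land in a
following patch), mapping 64KB of a BO at a userspace-chosen iova with
the v2 bo-table layout looks roughly like:

#include <stdint.h>
#include <xf86drm.h>
#include <drm/msm_drm.h>

int map_bo(int fd, uint32_t queue_id, uint32_t bo_handle)
{
        struct drm_msm_gem_submit_bo_v2 bo = {
                .flags     = MSM_SUBMIT_BO_OP_MAP,
                .handle    = bo_handle,
                .address   = 0x100000000ull,   /* page-aligned iova */
                .bo_offset = 0,
                .range     = 0x10000,          /* 64KB */
        };
        struct drm_msm_gem_submit req = {
                .flags      = MSM_PIPE_3D0,
                .queueid    = queue_id,        /* VM_BIND queue */
                .nr_bos     = 1,
                .bos        = (uint64_t)(uintptr_t)&bo,
                .bos_stride = sizeof(bo),      /* zero means the 16 byte v1 layout */
        };

        return drmCommandWriteRead(fd, DRM_MSM_GEM_SUBMIT, &req, sizeof(req));
}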
drivers/gpu/drm/msm/msm_gem.h | 10 ++---
drivers/gpu/drm/msm/msm_gem_submit.c | 65 ++++++++++++++++++++++++----
include/uapi/drm/msm_drm.h | 49 ++++++++++++++++++---
3 files changed, 103 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7cb720137548..8e29e36ca9c5 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -345,13 +345,13 @@ struct msm_gem_submit {
uint32_t nr_relocs;
struct drm_msm_gem_submit_reloc *relocs;
} *cmd; /* array of size nr_cmds */
- struct {
+ struct msm_gem_submit_bo {
uint32_t flags;
- union {
- struct drm_gem_object *obj;
- uint32_t handle;
- };
+ uint32_t handle;
+ struct drm_gem_object *obj;
uint64_t iova;
+ uint64_t bo_offset;
+ uint64_t range;
} bos[];
};
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 9cef308a0ad1..bb61231ab8ba 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -115,23 +115,37 @@ void __msm_gem_submit_destroy(struct kref *kref)
kfree(submit);
}
+static bool invalid_alignment(uint64_t addr)
+{
+ /*
+ * Technically this is about GPU alignment, not CPU alignment. But
+ * I've not seen any qcom SoC where the SMMU does not support the
+ * CPU's smallest page size.
+ */
+ return !PAGE_ALIGNED(addr);
+}
+
static int submit_lookup_objects(struct msm_gem_submit *submit,
struct drm_msm_gem_submit *args, struct drm_file *file)
{
- unsigned i;
+ unsigned i, bo_stride = args->bos_stride;
int ret = 0;
+ if (!bo_stride)
+ bo_stride = sizeof(struct drm_msm_gem_submit_bo);
+
for (i = 0; i < args->nr_bos; i++) {
- struct drm_msm_gem_submit_bo submit_bo;
+ struct drm_msm_gem_submit_bo_v2 submit_bo = {0};
void __user *userptr =
- u64_to_user_ptr(args->bos + (i * sizeof(submit_bo)));
+ u64_to_user_ptr(args->bos + (i * bo_stride));
+ unsigned copy_sz = min(bo_stride, sizeof(submit_bo));
/* make sure we don't have garbage flags, in case we hit
* error path before flags is initialized:
*/
submit->bos[i].flags = 0;
- if (copy_from_user(&submit_bo, userptr, sizeof(submit_bo))) {
+ if (copy_from_user(&submit_bo, userptr, copy_sz)) {
ret = -EFAULT;
i = 0;
goto out;
@@ -141,14 +155,27 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
#define MANDATORY_FLAGS (MSM_SUBMIT_BO_READ | MSM_SUBMIT_BO_WRITE)
if ((submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) ||
- !(submit_bo.flags & MANDATORY_FLAGS)) {
+ !(submit_bo.flags & MANDATORY_FLAGS))
ret = SUBMIT_ERROR(EINVAL, submit, "invalid flags: %x\n", submit_bo.flags);
+
+ if (invalid_alignment(submit_bo.address))
+ ret = SUBMIT_ERROR(EINVAL, submit, "invalid address: %016llx\n", submit_bo.address);
+
+ if (invalid_alignment(submit_bo.bo_offset))
+ ret = SUBMIT_ERROR(EINVAL, submit, "invalid bo_offset: %016llx\n", submit_bo.bo_offset);
+
+ if (invalid_alignment(submit_bo.range))
+ ret = SUBMIT_ERROR(EINVAL, submit, "invalid range: %016llx\n", submit_bo.range);
+
+ if (ret) {
i = 0;
goto out;
}
submit->bos[i].handle = submit_bo.handle;
submit->bos[i].flags = submit_bo.flags;
+ submit->bos[i].bo_offset = submit_bo.bo_offset;
+ submit->bos[i].range = submit_bo.range;
}
spin_lock(&file->table_lock);
@@ -167,6 +194,15 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
drm_gem_object_get(obj);
+ if (submit->bos[i].bo_offset > obj->size)
+ ret = SUBMIT_ERROR(EINVAL, submit, "bo_offset to large: %016llx\n", submit->bos[i].bo_offset);
+
+ if ((submit->bos[i].bo_offset + submit->bos[i].range) > obj->size)
+ ret = SUBMIT_ERROR(EINVAL, submit, "range to large: %016llx\n", submit->bos[i].range);
+
+ if (ret)
+ goto out_unlock;
+
submit->bos[i].obj = obj;
}
@@ -182,6 +218,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
static int submit_lookup_cmds(struct msm_gem_submit *submit,
struct drm_msm_gem_submit *args, struct drm_file *file)
{
+ struct msm_context *ctx = file->driver_priv;
unsigned i;
size_t sz;
int ret = 0;
@@ -213,6 +250,19 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
goto out;
}
+ if (msm_context_is_vmbind(ctx)) {
+ if (submit_cmd.nr_relocs) {
+ ret = SUBMIT_ERROR(EINVAL, submit, "nr_relocs must be zero");
+ goto out;
+ }
+ if (submit_cmd.submit_idx || submit_cmd.submit_offset) {
+ ret = SUBMIT_ERROR(EINVAL, submit, "submit_idx/offset must be zero");
+ goto out;
+ }
+
+ submit->cmd[i].iova = submit_cmd.iova;
+ }
+
submit->cmd[i].type = submit_cmd.type;
submit->cmd[i].size = submit_cmd.size / 4;
submit->cmd[i].offset = submit_cmd.submit_offset / 4;
@@ -665,9 +715,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (!gpu)
return -ENXIO;
- if (args->pad)
- return -EINVAL;
-
if (to_msm_vm(ctx->vm)->unusable)
return UERR(EPIPE, dev, "context is unusable");
@@ -677,7 +724,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (MSM_PIPE_ID(args->flags) != MSM_PIPE_3D0)
return UERR(EINVAL, dev, "invalid pipe");
- if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_FLAGS)
+ if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_EXEC_FLAGS)
return UERR(EINVAL, dev, "invalid flags");
if (args->flags & MSM_SUBMIT_SUDO) {
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 072e82a80607..1a948d49c610 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -245,7 +245,10 @@ struct drm_msm_gem_submit_cmd {
__u32 size; /* in, cmdstream size */
__u32 pad;
__u32 nr_relocs; /* in, number of submit_reloc's */
- __u64 relocs; /* in, ptr to array of submit_reloc's */
+ union {
+ __u64 relocs; /* in, ptr to array of submit_reloc's */
+ __u64 iova; /* cmdstream address (for VM_BIND contexts) */
+ };
};
/* Each buffer referenced elsewhere in the cmdstream submit (ie. the
@@ -264,6 +267,19 @@ struct drm_msm_gem_submit_cmd {
#define MSM_SUBMIT_BO_DUMP 0x0004
#define MSM_SUBMIT_BO_NO_IMPLICIT 0x0008
+/* Map OP for submits to a VM_BIND submitqueue:
+ * - MAP: map a specified range of the BO into the VM
+ * - MAP_NULL: map a NULL page into the specified range of the VM, handle
+ * and bo_offset MBZ. A NULL-mapped range returns zero on reads
+ * and discards writes,
+ * see: VkPhysicalDeviceSparseProperties::residencyNonResidentStrict
+ * - UNMAP: unmap a specified VM range, handle and bo_offset MBZ
+ */
+#define MSM_SUBMIT_BO_OP_MASK 0xf000
+#define MSM_SUBMIT_BO_OP_MAP 0x0000
+#define MSM_SUBMIT_BO_OP_MAP_NULL 0x1000
+#define MSM_SUBMIT_BO_OP_UNMAP 0x2000
+
#define MSM_SUBMIT_BO_FLAGS (MSM_SUBMIT_BO_READ | \
MSM_SUBMIT_BO_WRITE | \
MSM_SUBMIT_BO_DUMP | \
@@ -272,7 +288,16 @@ struct drm_msm_gem_submit_cmd {
struct drm_msm_gem_submit_bo {
__u32 flags; /* in, mask of MSM_SUBMIT_BO_x */
__u32 handle; /* in, GEM handle */
- __u64 presumed; /* in/out, presumed buffer address */
+ __u64 address; /* in/out, presumed buffer address */
+};
+
+struct drm_msm_gem_submit_bo_v2 {
+ __u32 flags; /* in, mask of MSM_SUBMIT_BO_x */
+ __u32 handle; /* in, GEM handle */
+ __u64 address; /* in/out, presumed buffer address */
+ /* Remaining fields are only used for submits to a VM_BIND queue: */
+ __u64 bo_offset;
+ __u64 range;
};
/* Valid submit ioctl flags: */
@@ -283,7 +308,8 @@ struct drm_msm_gem_submit_bo {
#define MSM_SUBMIT_SYNCOBJ_IN 0x08000000 /* enable input syncobj */
#define MSM_SUBMIT_SYNCOBJ_OUT 0x04000000 /* enable output syncobj */
#define MSM_SUBMIT_FENCE_SN_IN 0x02000000 /* userspace passes in seqno fence */
-#define MSM_SUBMIT_FLAGS ( \
+
+#define MSM_SUBMIT_EXEC_FLAGS ( \
MSM_SUBMIT_NO_IMPLICIT | \
MSM_SUBMIT_FENCE_FD_IN | \
MSM_SUBMIT_FENCE_FD_OUT | \
@@ -293,6 +319,13 @@ struct drm_msm_gem_submit_bo {
MSM_SUBMIT_FENCE_SN_IN | \
0)
+#define MSM_SUBMIT_VM_BIND_FLAGS ( \
+ MSM_SUBMIT_FENCE_FD_IN | \
+ MSM_SUBMIT_FENCE_FD_OUT | \
+ MSM_SUBMIT_SYNCOBJ_IN | \
+ MSM_SUBMIT_SYNCOBJ_OUT | \
+ 0)
+
#define MSM_SUBMIT_SYNCOBJ_RESET 0x00000001 /* Reset syncobj after wait. */
#define MSM_SUBMIT_SYNCOBJ_FLAGS ( \
MSM_SUBMIT_SYNCOBJ_RESET | \
@@ -307,14 +340,17 @@ struct drm_msm_gem_submit_syncobj {
/* Each cmdstream submit consists of a table of buffers involved, and
* one or more cmdstream buffers. This allows for conditional execution
* (context-restore), and IB buffers needed for per tile/bin draw cmds.
+ *
+ * For VM_BIND operations, the queue must have been
+ * created with the MSM_SUBMITQUEUE_VM_BIND flag.
*/
struct drm_msm_gem_submit {
__u32 flags; /* MSM_PIPE_x | MSM_SUBMIT_x */
__u32 fence; /* out (or in with MSM_SUBMIT_FENCE_SN_IN flag) */
__u32 nr_bos; /* in, number of submit_bo's */
- __u32 nr_cmds; /* in, number of submit_cmd's */
+ __u32 nr_cmds; /* in, number of submit_cmd's, MBZ for VM_BIND queue */
__u64 bos; /* in, ptr to array of submit_bo's */
- __u64 cmds; /* in, ptr to array of submit_cmd's */
+ __u64 cmds; /* in, ptr to array of submit_cmd's, MBZ for VM_BIND queue */
__s32 fence_fd; /* in/out fence fd (see MSM_SUBMIT_FENCE_FD_IN/OUT) */
__u32 queueid; /* in, submitqueue id */
__u64 in_syncobjs; /* in, ptr to array of drm_msm_gem_submit_syncobj */
@@ -322,8 +358,7 @@ struct drm_msm_gem_submit {
__u32 nr_in_syncobjs; /* in, number of entries in in_syncobj */
__u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */
__u32 syncobj_stride; /* in, stride of syncobj arrays. */
- __u32 pad; /*in, reserved for future use, always 0. */
-
+ __u32 bos_stride; /* in, stride of bos array; if zero, 16 bytes is used. */
};
#define MSM_WAIT_FENCE_BOOST 0x00000001
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 18/34] drm/msm: Add VM_BIND submitqueue
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (16 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 17/34] drm/msm: Extend SUBMIT ioctl for VM_BIND Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 19/34] drm/msm: Add _NO_SHARE flag Rob Clark
` (15 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, Konrad Dybcio, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Sumit Semwal, Christian König, open list,
open list:DMA BUFFER SHARING FRAMEWORK:Keyword:bdma_(?:buf|fence|resv)b,
moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:bdma_(?:buf|fence|resv)b
From: Rob Clark <robdclark@chromium.org>
This submitqueue type isn't tied to a hw ringbuffer, but instead
executes on the CPU for performing async VM_BIND ops.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
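A sketch of the expected userspace flow (illustrative only, error
handling elided): opt in via MSM_PARAM_EN_VM_BIND before allocating any
BOs, then create the CPU-side VM_BIND queue:

#include <stdint.h>
#include <xf86drm.h>
#include <drm/msm_drm.h>

int create_vm_bind_queue(int fd, uint32_t *queue_id)
{
        struct drm_msm_param en = {
                .pipe  = MSM_PIPE_3D0,
                .param = MSM_PARAM_EN_VM_BIND,
                .value = 1,
        };
        struct drm_msm_submitqueue qreq = {
                .flags = MSM_SUBMITQUEUE_VM_BIND,
                .prio  = 0,    /* must be zero for VM_BIND queues */
        };
        int ret;

        /* One-shot opt-in, cannot be undone: */
        ret = drmCommandWrite(fd, DRM_MSM_SET_PARAM, &en, sizeof(en));
        if (ret)
                return ret;

        ret = drmCommandWriteRead(fd, DRM_MSM_SUBMITQUEUE_NEW,
                                  &qreq, sizeof(qreq));
        if (!ret)
                *queue_id = qreq.id;
        return ret;
}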
drivers/gpu/drm/msm/msm_gem.c | 3 +-
drivers/gpu/drm/msm/msm_gem.h | 10 ++
drivers/gpu/drm/msm/msm_gem_submit.c | 128 +++++++++++++++++++++++---
drivers/gpu/drm/msm/msm_gem_vma.c | 107 +++++++++++++++++++++
drivers/gpu/drm/msm/msm_gpu.h | 3 +
drivers/gpu/drm/msm/msm_submitqueue.c | 57 +++++++++---
include/uapi/drm/msm_drm.h | 9 +-
7 files changed, 287 insertions(+), 30 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 5a5220b6f21d..4c68a3dd3fed 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -232,8 +232,7 @@ static void put_pages(struct drm_gem_object *obj)
}
}
-static struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj,
- unsigned madv)
+struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 8e29e36ca9c5..d427ead2dce0 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -53,6 +53,13 @@ struct msm_gem_vm {
/** @base: Inherit from drm_gpuvm. */
struct drm_gpuvm base;
+ /**
+ * @sched: Scheduler used for asynchronous VM_BIND requests.
+ *
+ * Unused for kernel managed VMs (where all operations are synchronous).
+ */
+ struct drm_gpu_scheduler sched;
+
/**
* @mm: Memory management for kernel managed VA allocations
*
@@ -106,6 +113,8 @@ struct drm_gpuvm *
msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
u64 va_start, u64 va_size, bool managed);
+void msm_gem_vm_close(struct drm_gpuvm *gpuvm);
+
struct msm_fence_context;
/**
@@ -195,6 +204,7 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm,
uint64_t *iova);
void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm);
void msm_gem_pin_obj_locked(struct drm_gem_object *obj);
+struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv);
struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj);
void msm_gem_unpin_pages_locked(struct drm_gem_object *obj);
int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index bb61231ab8ba..39a6e0418bdf 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -23,6 +23,11 @@
#define SUBMIT_ERROR(err, submit, fmt, ...) \
UERR(err, (submit)->dev, fmt, ##__VA_ARGS__)
+static bool submit_is_vmbind(struct msm_gem_submit *submit)
+{
+ return !!(submit->queue->flags & MSM_SUBMITQUEUE_VM_BIND);
+}
+
/*
* Cmdstream submission:
*/
@@ -115,6 +120,17 @@ void __msm_gem_submit_destroy(struct kref *kref)
kfree(submit);
}
+static bool invalid_bo_flags(bool vm_bind, uint32_t flags)
+{
+ if (vm_bind) {
+ return flags & ~(MSM_SUBMIT_BO_DUMP | MSM_SUBMIT_BO_OP_MASK);
+ } else {
+ /* at least one of READ and/or WRITE flags should be set: */
+ return (flags & ~MSM_SUBMIT_BO_FLAGS) ||
+ !(flags & (MSM_SUBMIT_BO_READ | MSM_SUBMIT_BO_WRITE));
+ }
+}
+
static bool invalid_alignment(uint64_t addr)
{
/*
@@ -129,9 +145,10 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
struct drm_msm_gem_submit *args, struct drm_file *file)
{
unsigned i, bo_stride = args->bos_stride;
+ bool vm_bind = submit_is_vmbind(submit);
int ret = 0;
- if (!bo_stride)
+ if (!bo_stride || !vm_bind)
bo_stride = sizeof(struct drm_msm_gem_submit_bo);
for (i = 0; i < args->nr_bos; i++) {
@@ -151,11 +168,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
goto out;
}
-/* at least one of READ and/or WRITE flags should be set: */
-#define MANDATORY_FLAGS (MSM_SUBMIT_BO_READ | MSM_SUBMIT_BO_WRITE)
-
- if ((submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) ||
- !(submit_bo.flags & MANDATORY_FLAGS))
+ if (invalid_bo_flags(vm_bind, submit_bo.flags))
ret = SUBMIT_ERROR(EINVAL, submit, "invalid flags: %x\n", submit_bo.flags);
if (invalid_alignment(submit_bo.address))
@@ -174,6 +187,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
submit->bos[i].handle = submit_bo.handle;
submit->bos[i].flags = submit_bo.flags;
+ submit->bos[i].iova = submit_bo.address;
submit->bos[i].bo_offset = submit_bo.bo_offset;
submit->bos[i].range = submit_bo.range;
}
@@ -183,6 +197,12 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
for (i = 0; i < args->nr_bos; i++) {
struct drm_gem_object *obj;
+ if (vm_bind) {
+ unsigned op = submit->bos[i].flags & MSM_SUBMIT_BO_OP_MASK;
+ if (op != MSM_SUBMIT_BO_OP_MAP)
+ continue;
+ }
+
/* normally use drm_gem_object_lookup(), but for bulk lookup
* all under single table_lock just hit object_idr directly:
*/
@@ -297,13 +317,22 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
/* This is where we make sure all the bo's are reserved and pin'd: */
static int submit_lock_objects(struct msm_gem_submit *submit)
{
+ bool vm_bind = submit_is_vmbind(submit);
+ unsigned flags = DRM_EXEC_INTERRUPTIBLE_WAIT;
int ret;
- drm_exec_init(&submit->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos);
+ if (vm_bind)
+ flags |= DRM_EXEC_IGNORE_DUPLICATES;
+
+ drm_exec_init(&submit->exec, flags, submit->nr_bos);
drm_exec_until_all_locked (&submit->exec) {
for (unsigned i = 0; i < submit->nr_bos; i++) {
struct drm_gem_object *obj = submit->bos[i].obj;
+
+ if (!obj)
+ continue;
+
ret = drm_exec_prepare_obj(&submit->exec, obj, 1);
drm_exec_retry_on_contention(&submit->exec);
if (ret)
@@ -372,6 +401,28 @@ static int submit_pin_vmas(struct msm_gem_submit *submit)
return ret;
}
+static int submit_get_pages(struct msm_gem_submit *submit)
+{
+ /*
+ * First loop, before holding the LRU lock, avoids holding the
+ * LRU lock while calling msm_gem_pin_vma_locked (which could
+ * trigger get_pages())
+ */
+ for (int i = 0; i < submit->nr_bos; i++) {
+ struct drm_gem_object *obj = submit->bos[i].obj;
+ struct page **pages;
+
+ if (!obj)
+ continue;
+
+ pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED);
+ if (IS_ERR(pages))
+ return PTR_ERR(pages);
+ }
+
+ return 0;
+}
+
static void submit_pin_objects(struct msm_gem_submit *submit)
{
struct msm_drm_private *priv = submit->dev->dev_private;
@@ -385,7 +436,12 @@ static void submit_pin_objects(struct msm_gem_submit *submit)
*/
mutex_lock(&priv->lru.lock);
for (int i = 0; i < submit->nr_bos; i++) {
- msm_gem_pin_obj_locked(submit->bos[i].obj);
+ struct drm_gem_object *obj = submit->bos[i].obj;
+
+ if (!obj)
+ continue;
+
+ msm_gem_pin_obj_locked(obj);
}
mutex_unlock(&priv->lru.lock);
@@ -400,6 +456,9 @@ static void submit_unpin_objects(struct msm_gem_submit *submit)
for (int i = 0; i < submit->nr_bos; i++) {
struct drm_gem_object *obj = submit->bos[i].obj;
+ if (!obj)
+ continue;
+
msm_gem_unpin_locked(obj);
}
@@ -413,6 +472,9 @@ static void submit_attach_object_fences(struct msm_gem_submit *submit)
for (i = 0; i < submit->nr_bos; i++) {
struct drm_gem_object *obj = submit->bos[i].obj;
+ if (!obj)
+ continue;
+
if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE)
dma_resv_add_fence(obj->resv, submit->user_fence,
DMA_RESV_USAGE_WRITE);
@@ -708,6 +770,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
struct msm_ringbuffer *ring;
struct msm_submit_post_dep *post_deps = NULL;
struct drm_syncobj **syncobjs_to_reset = NULL;
+ unsigned cmds_to_parse;
int out_fence_fd = -1;
unsigned i;
int ret;
@@ -724,9 +787,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (MSM_PIPE_ID(args->flags) != MSM_PIPE_3D0)
return UERR(EINVAL, dev, "invalid pipe");
- if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_EXEC_FLAGS)
- return UERR(EINVAL, dev, "invalid flags");
-
if (args->flags & MSM_SUBMIT_SUDO) {
if (!IS_ENABLED(CONFIG_DRM_MSM_GPU_SUDO) ||
!capable(CAP_SYS_RAWIO))
@@ -737,6 +797,26 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (!queue)
return -ENOENT;
+ if (queue->flags & MSM_SUBMITQUEUE_VM_BIND) {
+ if (args->nr_cmds || args->cmds) {
+ ret = UERR(EINVAL, dev, "nr_cmds should be zero for VM_BIND queue");
+ goto out_post_unlock;
+ }
+ if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_VM_BIND_FLAGS) {
+ ret = UERR(EINVAL, dev, "invalid flags");
+ goto out_post_unlock;
+ }
+ } else {
+ if (msm_context_is_vmbind(ctx) && (args->nr_bos || args->bos)) {
+ ret = UERR(EINVAL, dev, "nr_bos should be zero for VM_BIND contexts");
+ goto out_post_unlock;
+ }
+ if (MSM_PIPE_FLAGS(args->flags) & ~MSM_SUBMIT_EXEC_FLAGS) {
+ ret = UERR(EINVAL, dev, "invalid flags");
+ goto out_post_unlock;
+ }
+ }
+
ring = gpu->rb[queue->ring_nr];
if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
@@ -813,19 +893,37 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
if (ret)
goto out;
- if (!(args->flags & MSM_SUBMIT_NO_IMPLICIT)) {
+ if (msm_context_is_vmbind(ctx) && !submit_is_vmbind(submit)) {
+ /*
+ * If we are not using VM_BIND, submit_pin_vmas() will validate
+ * just the BOs attached to the submit. In that case we don't
+ * need to validate the _entire_ vm, because userspace tracked
+ * what BOs are associated with the submit.
+ */
+ ret = drm_gpuvm_validate(submit->vm, &submit->exec);
+ if (ret)
+ goto out;
+ }
+
+ if (!(args->flags & MSM_SUBMIT_NO_IMPLICIT) && !submit_is_vmbind(submit)) {
ret = submit_fence_sync(submit);
if (ret)
goto out;
}
- ret = submit_pin_vmas(submit);
+ if (submit_is_vmbind(submit)) {
+ ret = submit_get_pages(submit);
+ } else {
+ ret = submit_pin_vmas(submit);
+ }
if (ret)
goto out;
submit_pin_objects(submit);
- for (i = 0; i < args->nr_cmds; i++) {
+ cmds_to_parse = msm_context_is_vmbind(ctx) ? 0 : args->nr_cmds;
+
+ for (i = 0; i < cmds_to_parse; i++) {
struct drm_gem_object *obj;
uint64_t iova;
@@ -856,7 +954,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
goto out;
}
- submit->nr_cmds = i;
+ submit->nr_cmds = args->nr_cmds;
idr_preload(GFP_KERNEL);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 0bc22618e9f0..8c780dd6a936 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -162,6 +162,70 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = {
.vm_free = msm_gem_vm_free,
};
+static int
+run_bo_op(struct msm_gem_submit *submit, const struct msm_gem_submit_bo *bo)
+{
+ unsigned op = bo->flags & MSM_SUBMIT_BO_OP_MASK;
+
+ switch (op) {
+ case MSM_SUBMIT_BO_OP_MAP:
+ case MSM_SUBMIT_BO_OP_MAP_NULL:
+ return drm_gpuvm_sm_map(submit->vm, submit->vm, bo->iova,
+ bo->range, bo->obj, bo->bo_offset);
+ break;
+ case MSM_SUBMIT_BO_OP_UNMAP:
+ return drm_gpuvm_sm_unmap(submit->vm, submit->vm, bo->iova,
+ bo->bo_offset);
+ }
+
+ return -EINVAL;
+}
+
+static struct dma_fence *
+msm_vma_job_run(struct drm_sched_job *job)
+{
+ struct msm_gem_submit *submit = to_msm_submit(job);
+
+ for (unsigned i = 0; i < submit->nr_bos; i++) {
+ int ret = run_bo_op(submit, &submit->bos[i]);
+ if (ret) {
+ to_msm_vm(submit->vm)->unusable = true;
+ return ERR_PTR(ret);
+ }
+ }
+
+ /* VM_BIND ops run on CPU, so we are done now: */
+ msm_submit_retire(submit);
+
+ for (int i = 0; i < submit->nr_bos; i++) {
+ struct drm_gem_object *obj = submit->bos[i].obj;
+
+ if (!obj)
+ continue;
+
+ msm_gem_lock(obj);
+ msm_gem_unpin_locked(obj);
+ msm_gem_unlock(obj);
+ }
+
+ /* VM_BIND ops are synchronous, so no fence to wait on: */
+ return NULL;
+}
+
+static void
+msm_vma_job_free(struct drm_sched_job *job)
+{
+ struct msm_gem_submit *submit = to_msm_submit(job);
+
+ drm_sched_job_cleanup(job);
+ msm_gem_submit_put(submit);
+}
+
+static const struct drm_sched_backend_ops msm_vm_bind_ops = {
+ .run_job = msm_vma_job_run,
+ .free_job = msm_vma_job_free
+};
+
/**
* msm_gem_vm_create() - Create and initialize a &msm_gem_vm
* @drm: the drm device
@@ -198,6 +262,21 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
goto err_free_vm;
}
+ if (!managed) {
+ struct drm_sched_init_args args = {
+ .ops = &msm_vm_bind_ops,
+ .num_rqs = 1,
+ .credit_limit = 1,
+ .timeout = MAX_SCHEDULE_TIMEOUT,
+ .name = "msm-vm-bind",
+ .dev = drm->dev,
+ };
+
+ ret = drm_sched_init(&vm->sched, &args);
+ if (ret)
+ goto err_free_dummy;
+ }
+
drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
va_start, va_size, 0, 0, &msm_gpuvm_ops);
drm_gem_object_put(dummy_gem);
@@ -212,8 +291,36 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
return &vm->base;
+err_free_dummy:
+ drm_gem_object_put(dummy_gem);
+
err_free_vm:
kfree(vm);
return ERR_PTR(ret);
}
+
+/**
+ * msm_gem_vm_close() - Close a VM
+ * @gpuvm: The VM to close
+ *
+ * Called when the drm device file is closed, to tear down VM related resources
+ * (which will drop refcounts to GEM objects that were still mapped into the
+ * VM at the time).
+ */
+void
+msm_gem_vm_close(struct drm_gpuvm *gpuvm)
+{
+ struct msm_gem_vm *vm = to_msm_vm(gpuvm);
+
+ /*
+ * For kernel managed VMs, the VMAs are torn down when the handle is
+ * closed, so nothing more to do.
+ */
+ if (vm->managed)
+ return;
+
+ /* Kill the scheduler now, so we aren't racing with it for cleanup: */
+ drm_sched_stop(&vm->sched, NULL);
+ drm_sched_fini(&vm->sched);
+}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 20f52d9636b0..49c8862ada13 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -562,6 +562,9 @@ struct msm_gpu_submitqueue {
struct mutex lock;
struct kref ref;
struct drm_sched_entity *entity;
+
+ /** @_vm_bind_entity: used for @entity pointer for VM_BIND queues */
+ struct drm_sched_entity _vm_bind_entity[0];
};
struct msm_gpu_state_bo {
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index 8ced49c7557b..99ab780d5d7b 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -72,6 +72,9 @@ void msm_submitqueue_destroy(struct kref *kref)
idr_destroy(&queue->fence_idr);
+ if (queue->entity == &queue->_vm_bind_entity[0])
+ drm_sched_entity_destroy(queue->entity);
+
msm_context_put(queue->ctx);
kfree(queue);
@@ -115,6 +118,11 @@ void msm_submitqueue_close(struct msm_context *ctx)
list_del(&entry->node);
msm_submitqueue_put(entry);
}
+
+ if (!ctx->vm)
+ return;
+
+ msm_gem_vm_close(ctx->vm);
}
static struct drm_sched_entity *
@@ -160,8 +168,6 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
struct msm_drm_private *priv = drm->dev_private;
struct msm_gpu_submitqueue *queue;
enum drm_sched_priority sched_prio;
- extern int enable_preemption;
- bool preemption_supported;
unsigned ring_nr;
int ret;
@@ -171,26 +177,53 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx,
if (!priv->gpu)
return -ENODEV;
- preemption_supported = priv->gpu->nr_rings == 1 && enable_preemption != 0;
+ if (flags & MSM_SUBMITQUEUE_VM_BIND) {
+ unsigned sz;
- if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported)
- return -EINVAL;
+ /* Not allowed for kernel managed VMs (ie. kernel allocs VA) */
+ if (!msm_context_is_vmbind(ctx))
+ return -EINVAL;
- ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio);
- if (ret)
- return ret;
+ if (prio)
+ return -EINVAL;
+
+ sz = struct_size(queue, _vm_bind_entity, 1);
+ queue = kzalloc(sz, GFP_KERNEL);
+ } else {
+ extern int enable_preemption;
+ bool preemption_supported =
+ priv->gpu->nr_rings == 1 && enable_preemption != 0;
+
+ if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported)
+ return -EINVAL;
- queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+ ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio);
+ if (ret)
+ return ret;
+
+ queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+ }
if (!queue)
return -ENOMEM;
kref_init(&queue->ref);
queue->flags = flags;
- queue->ring_nr = ring_nr;
- queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr],
- ring_nr, sched_prio);
+ if (flags & MSM_SUBMITQUEUE_VM_BIND) {
+ struct drm_gpu_scheduler *sched = &to_msm_vm(msm_context_vm(drm, ctx))->sched;
+
+ queue->entity = &queue->_vm_bind_entity[0];
+
+ drm_sched_entity_init(queue->entity, DRM_SCHED_PRIORITY_KERNEL,
+ &sched, 1, NULL);
+ } else {
+ queue->ring_nr = ring_nr;
+
+ queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr],
+ ring_nr, sched_prio);
+ }
+
if (IS_ERR(queue->entity)) {
ret = PTR_ERR(queue->entity);
kfree(queue);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 1a948d49c610..39b55c8d7413 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -404,12 +404,19 @@ struct drm_msm_gem_madvise {
/*
* Draw queues allow the user to set specific submission parameter. Command
* submissions specify a specific submitqueue to use. ID 0 is reserved for
- * backwards compatibility as a "default" submitqueue
+ * backwards compatibility as a "default" submitqueue.
+ *
+ * Because VM_BIND async updates happen on the CPU, they must run on a
+ * virtual queue created with the flag MSM_SUBMITQUEUE_VM_BIND. If we had
+ * a way to do pgtable updates on the GPU, we could drop this restriction.
*/
#define MSM_SUBMITQUEUE_ALLOW_PREEMPT 0x00000001
+#define MSM_SUBMITQUEUE_VM_BIND 0x00000002 /* virtual queue for VM_BIND ops */
+
#define MSM_SUBMITQUEUE_FLAGS ( \
MSM_SUBMITQUEUE_ALLOW_PREEMPT | \
+ MSM_SUBMITQUEUE_VM_BIND | \
0)
/*
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 19/34] drm/msm: Add _NO_SHARE flag
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (17 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 18/34] drm/msm: Add VM_BIND submitqueue Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 20/34] drm/msm: Split out helper to get iommu prot flags Rob Clark
` (14 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, Konrad Dybcio, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, Sumit Semwal, Christian König, open list,
open list:DMA BUFFER SHARING FRAMEWORK:Keyword:bdma_(?:buf|fence|resv)b,
moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:bdma_(?:buf|fence|resv)b
From: Rob Clark <robdclark@chromium.org>
Buffers that are not shared between contexts can share a single resv
object. This way drm_gpuvm will not track them as external objects, and
submit-time validation overhead will be O(1) for all N non-shared BOs,
instead of O(N).
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
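A sketch of the intended usage (MSM_BO_WC is just an example cache
mode): context-private BOs are allocated with the new flag, after which
they never need to appear in the submit_bo table, but any attempt to
export them is rejected:

#include <stdint.h>
#include <xf86drm.h>
#include <drm/msm_drm.h>

int alloc_private_bo(int fd, uint64_t size, uint32_t *handle)
{
        struct drm_msm_gem_new req = {
                .size  = size,
                .flags = MSM_BO_NO_SHARE | MSM_BO_WC,
        };
        int ret = drmCommandWriteRead(fd, DRM_MSM_GEM_NEW,
                                      &req, sizeof(req));

        if (!ret)
                *handle = req.handle;  /* BO shares the VM's resv */
        return ret;
}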
drivers/gpu/drm/msm/msm_drv.h | 1 +
drivers/gpu/drm/msm/msm_gem.c | 23 +++++++++++++++++++++++
drivers/gpu/drm/msm/msm_gem_prime.c | 15 +++++++++++++++
include/uapi/drm/msm_drm.h | 14 ++++++++++++++
4 files changed, 53 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index b77fd2c531c3..b0add236cbb3 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -246,6 +246,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach, struct sg_table *sg);
+struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags);
int msm_gem_prime_pin(struct drm_gem_object *obj);
void msm_gem_prime_unpin(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 4c68a3dd3fed..9d4f7b76471f 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -522,6 +522,9 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
msm_gem_assert_locked(obj);
+ if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+ return -EINVAL;
+
vma = get_vma_locked(obj, vm, range_start, range_end);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -1032,6 +1035,16 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
put_pages(obj);
}
+ if (msm_obj->flags & MSM_BO_NO_SHARE) {
+ struct drm_gem_object *r_obj =
+ container_of(obj->resv, struct drm_gem_object, _resv);
+
+ BUG_ON(obj->resv == &obj->_resv);
+
+ /* Drop reference we hold to shared resv obj: */
+ drm_gem_object_put(r_obj);
+ }
+
drm_gem_object_release(obj);
kfree(msm_obj->metadata);
@@ -1064,6 +1077,15 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
if (name)
msm_gem_object_set_name(obj, "%s", name);
+ if (flags & MSM_BO_NO_SHARE) {
+ struct msm_context *ctx = file->driver_priv;
+ struct drm_gem_object *r_obj = drm_gpuvm_resv_obj(ctx->vm);
+
+ drm_gem_object_get(r_obj);
+
+ obj->resv = r_obj->resv;
+ }
+
ret = drm_gem_handle_create(file, obj, handle);
/* drop reference from allocate - handle holds it now */
@@ -1096,6 +1118,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = {
.free = msm_gem_free_object,
.open = msm_gem_open,
.close = msm_gem_close,
+ .export = msm_gem_prime_export,
.pin = msm_gem_prime_pin,
.unpin = msm_gem_prime_unpin,
.get_sg_table = msm_gem_prime_get_sg_table,
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index ee267490c935..1a6d8099196a 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -16,6 +16,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
struct msm_gem_object *msm_obj = to_msm_bo(obj);
int npages = obj->size >> PAGE_SHIFT;
+ if (msm_obj->flags & MSM_BO_NO_SHARE)
+ return ERR_PTR(-EINVAL);
+
if (WARN_ON(!msm_obj->pages)) /* should have already pinned! */
return ERR_PTR(-ENOMEM);
@@ -45,6 +48,15 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
return msm_gem_import(dev, attach->dmabuf, sg);
}
+
+struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags)
+{
+ if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+ return ERR_PTR(-EPERM);
+
+ return drm_gem_prime_export(obj, flags);
+}
+
int msm_gem_prime_pin(struct drm_gem_object *obj)
{
struct page **pages;
@@ -53,6 +65,9 @@ int msm_gem_prime_pin(struct drm_gem_object *obj)
if (obj->import_attach)
return 0;
+ if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+ return -EINVAL;
+
pages = msm_gem_pin_pages_locked(obj);
if (IS_ERR(pages))
ret = PTR_ERR(pages);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 39b55c8d7413..a7e48ee1dd95 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -138,6 +138,19 @@ struct drm_msm_param {
#define MSM_BO_SCANOUT 0x00000001 /* scanout capable */
#define MSM_BO_GPU_READONLY 0x00000002
+/* Private buffers do not need to be explicitly listed in the SUBMIT
+ * ioctl, unless referenced by a drm_msm_gem_submit_cmd. Private
+ * buffers may NOT be imported/exported or used for scanout (or any
+ * other situation where buffers can be indefinitely pinned; the
+ * cases other than scanout all involve kernel-owned BOs which are
+ * not visible to userspace).
+ *
+ * In exchange for those constraints, all private BOs associated with
+ * a single context (drm_file) share a single dma_resv, and if there
+ * has been no eviction since the last submit, there is no per-BO
+ * bookkeeping to do, significantly cutting the SUBMIT overhead.
+ */
+#define MSM_BO_NO_SHARE 0x00000004
#define MSM_BO_CACHE_MASK 0x000f0000
/* cache modes */
#define MSM_BO_CACHED 0x00010000
@@ -147,6 +160,7 @@ struct drm_msm_param {
#define MSM_BO_FLAGS (MSM_BO_SCANOUT | \
MSM_BO_GPU_READONLY | \
+ MSM_BO_NO_SHARE | \
MSM_BO_CACHE_MASK)
struct drm_msm_gem_new {
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 20/34] drm/msm: Split out helper to get iommu prot flags
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (18 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 19/34] drm/msm: Add _NO_SHARE flag Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 21/34] drm/msm: Add mmu support for non-zero offset Rob Clark
` (13 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
We'll re-use this in the vm_bind path.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem.c | 12 ++++++++++--
drivers/gpu/drm/msm/msm_gem.h | 1 +
2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 9d4f7b76471f..632f560c81ec 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -450,10 +450,9 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
return vma;
}
-int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
+int msm_gem_prot(struct drm_gem_object *obj)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
- struct page **pages;
int prot = IOMMU_READ;
if (!(msm_obj->flags & MSM_BO_GPU_READONLY))
@@ -469,6 +468,15 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
else if (prot == 2)
prot |= IOMMU_USE_LLC_NWA;
+ return prot;
+}
+
+int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
+{
+ struct msm_gem_object *msm_obj = to_msm_bo(obj);
+ struct page **pages;
+ int prot = msm_gem_prot(obj);
+
msm_gem_assert_locked(obj);
pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index d427ead2dce0..7ccdf15476b9 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -188,6 +188,7 @@ struct msm_gem_object {
#define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
+int msm_gem_prot(struct drm_gem_object *obj);
int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma);
void msm_gem_unpin_locked(struct drm_gem_object *obj);
void msm_gem_unpin_active(struct drm_gem_object *obj);
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 21/34] drm/msm: Add mmu support for non-zero offset
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (19 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 20/34] drm/msm: Split out helper to get iommu prot flags Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 22/34] drm/msm: Add PRR support Rob Clark
` (12 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Non-zero offsets only need to be supported by the io-pgtable MMU; the
other cases are either only used for kernel-managed mappings (where the
offset is always zero) or are for devices which do not support sparse
bindings.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
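To illustrate the offset handling, here is a small self-contained model
of the walk added to msm_iommu_pagetable_map() (segment sizes and the
window are made up): skip 'off' bytes worth of sg entries, clip the
first usable entry, then map at most 'len' bytes:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
        size_t segs[] = { 0x4000, 0x1000, 0x8000 };  /* sg->length values */
        size_t off = 0x5000, len = 0x6000;

        for (int i = 0; i < 3 && len; i++) {
                size_t size = segs[i];

                if (size <= off) {   /* segment entirely before the window */
                        off -= size;
                        continue;
                }

                size -= off;         /* clip the head of the first segment */
                if (size > len)
                        size = len;

                printf("map %#zx bytes of segment %d (skipping %#zx)\n",
                       size, i, off);

                len -= size;
                off = 0;             /* only the first segment is clipped */
        }
        return 0;
}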
drivers/gpu/drm/msm/adreno/a2xx_gpummu.c | 5 ++++-
drivers/gpu/drm/msm/msm_gem.c | 4 ++--
drivers/gpu/drm/msm/msm_gem.h | 4 ++--
drivers/gpu/drm/msm/msm_gem_vma.c | 13 +++++++------
drivers/gpu/drm/msm/msm_iommu.c | 22 ++++++++++++++++++++--
drivers/gpu/drm/msm/msm_mmu.h | 2 +-
6 files changed, 36 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
index 39641551eeb6..6124336af2ec 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
@@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu)
}
static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova,
- struct sg_table *sgt, size_t len, int prot)
+ struct sg_table *sgt, size_t off, size_t len,
+ int prot)
{
struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu);
unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE;
struct sg_dma_page_iter dma_iter;
unsigned prot_bits = 0;
+ WARN_ON(off != 0);
+
if (prot & IOMMU_WRITE)
prot_bits |= 1;
if (prot & IOMMU_READ)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 632f560c81ec..577da3c54c8c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -441,7 +441,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj,
vma = lookup_vma(obj, vm);
if (!vma) {
- vma = msm_gem_vma_new(vm, obj, range_start, range_end);
+ vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end);
} else {
GEM_WARN_ON(vma->va.addr < range_start);
GEM_WARN_ON((vma->va.addr + obj->size) > range_end);
@@ -483,7 +483,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma)
if (IS_ERR(pages))
return PTR_ERR(pages);
- return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size);
+ return msm_gem_vma_map(vma, prot, msm_obj->sgt);
}
void msm_gem_unpin_locked(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7ccdf15476b9..3919b384d599 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -140,9 +140,9 @@ struct msm_gem_vma {
struct drm_gpuva *
msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
- u64 range_start, u64 range_end);
+ u64 offset, u64 range_start, u64 range_end);
void msm_gem_vma_purge(struct drm_gpuva *vma);
-int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size);
+int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
void msm_gem_vma_close(struct drm_gpuva *vma);
struct msm_gem_object {
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 8c780dd6a936..d51d54c0da33 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma)
/* Map and pin vma: */
int
-msm_gem_vma_map(struct drm_gpuva *vma, int prot,
- struct sg_table *sgt, int size)
+msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
{
struct msm_gem_vma *msm_vma = to_msm_vma(vma);
struct msm_gem_vm *vm = to_msm_vm(vma->vm);
@@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot,
* Revisit this if we can come up with a scheme to pre-alloc pages
* for the pgtable in map/unmap ops.
*/
- ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot);
-
+ ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
+ vma->gem.offset, vma->va.range,
+ prot);
if (ret) {
msm_vma->mapped = false;
}
@@ -97,7 +97,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
/* Create a new vma and allocate an iova for it */
struct drm_gpuva *
msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
- u64 range_start, u64 range_end)
+ u64 offset, u64 range_start, u64 range_end)
{
struct msm_gem_vm *vm = to_msm_vm(_vm);
struct drm_gpuvm_bo *vm_bo;
@@ -109,6 +109,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
return ERR_PTR(-ENOMEM);
if (vm->managed) {
+ BUG_ON(offset != 0);
spin_lock(&vm->mm_lock);
ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
obj->size, PAGE_SIZE, 0,
@@ -124,7 +125,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
GEM_WARN_ON((range_end - range_start) > obj->size);
- drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
+ drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
vma->mapped = false;
mutex_lock(&vm->vm_lock);
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index e70088a91283..2fd48e66bc98 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
}
static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
- struct sg_table *sgt, size_t len, int prot)
+ struct sg_table *sgt, size_t off, size_t len,
+ int prot)
{
struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
@@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
size_t size = sg->length;
phys_addr_t phys = sg_phys(sg);
+ if (!len)
+ break;
+
+ if (size <= off) {
+ off -= size;
+ continue;
+ }
+
+ phys += off;
+ size -= off;
+ size = min_t(size_t, size, len);
+ off = 0;
+
while (size) {
size_t pgsize, count, mapped = 0;
int ret;
@@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
phys += mapped;
addr += mapped;
size -= mapped;
+ len -= mapped;
if (ret) {
msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
@@ -400,11 +415,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu)
}
static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova,
- struct sg_table *sgt, size_t len, int prot)
+ struct sg_table *sgt, size_t off, size_t len,
+ int prot)
{
struct msm_iommu *iommu = to_msm_iommu(mmu);
size_t ret;
+ WARN_ON(off != 0);
+
/* The arm-smmu driver expects the addresses to be sign extended */
if (iova & BIT_ULL(48))
iova |= GENMASK_ULL(63, 49);
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c33247e459d6..c874852b7331 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -12,7 +12,7 @@
struct msm_mmu_funcs {
void (*detach)(struct msm_mmu *mmu);
int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
- size_t len, int prot);
+ size_t off, size_t len, int prot);
int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
void (*destroy)(struct msm_mmu *mmu);
void (*resume_translation)(struct msm_mmu *mmu);
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 22/34] drm/msm: Add PRR support
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (20 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 21/34] drm/msm: Add mmu support for non-zero offset Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 23/34] drm/msm: Rename msm_gem_vma_purge() -> _unmap() Rob Clark
` (11 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
Thomas Zimmermann, open list
From: Rob Clark <robdclark@chromium.org>
Add PRR (Partially Resident Region) support. PRR is a bypass address
which makes GPU writes go to /dev/null and reads return zero. This is
used to implement Vulkan sparse residency.
To support PRR/NULL mappings, we allocate a page to reserve a physical
address which we know will not be used as part of a GEM object, and
configure the SMMU to use this address for PRR/NULL mappings.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
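A sketch of how userspace might probe and use this (hole_iova and the
VM_BIND plumbing are assumed from the earlier patches): query
MSM_PARAM_HAS_PRR, then bind a NULL mapping over a hole, leaving handle
and bo_offset zero as required:

#include <xf86drm.h>
#include <drm/msm_drm.h>

int has_prr(int fd)
{
        struct drm_msm_param p = {
                .pipe  = MSM_PIPE_3D0,
                .param = MSM_PARAM_HAS_PRR,
        };

        if (drmCommandWriteRead(fd, DRM_MSM_GET_PARAM, &p, sizeof(p)))
                return 0;
        return !!p.value;
}

/* If supported, a 2MB NULL binding submitted to the VM_BIND queue: */
struct drm_msm_gem_submit_bo_v2 null_bind = {
        .flags   = MSM_SUBMIT_BO_OP_MAP_NULL,
        .address = 0x140000000ull,  /* hole_iova, page aligned */
        .range   = 0x200000,        /* handle/bo_offset stay 0 (MBZ) */
};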
drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++++
drivers/gpu/drm/msm/msm_iommu.c | 62 ++++++++++++++++++++++++-
include/uapi/drm/msm_drm.h | 2 +
3 files changed, 73 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index 739161df3e3c..bac6cd3afe37 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -346,6 +346,13 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
return 0;
}
+static bool
+adreno_smmu_has_prr(struct msm_gpu *gpu)
+{
+ struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev);
+ return adreno_smmu && adreno_smmu->set_prr_addr;
+}
+
int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
uint32_t param, uint64_t *value, uint32_t *len)
{
@@ -431,6 +438,9 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
case MSM_PARAM_UCHE_TRAP_BASE:
*value = adreno_gpu->uche_trap_base;
return 0;
+ case MSM_PARAM_HAS_PRR:
+ *value = adreno_smmu_has_prr(gpu);
+ return 0;
default:
return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
}
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 2fd48e66bc98..756bd55ee94f 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -13,6 +13,7 @@ struct msm_iommu {
struct msm_mmu base;
struct iommu_domain *domain;
atomic_t pagetables;
+ struct page *prr_page;
};
#define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -112,6 +113,36 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova,
return (size == 0) ? 0 : -EINVAL;
}
+static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot)
+{
+ struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+ struct io_pgtable_ops *ops = pagetable->pgtbl_ops;
+ struct msm_iommu *iommu = to_msm_iommu(pagetable->parent);
+ phys_addr_t phys = page_to_phys(iommu->prr_page);
+ u64 addr = iova;
+
+ while (len) {
+ size_t mapped = 0;
+ size_t size = PAGE_SIZE;
+ int ret;
+
+ ret = ops->map_pages(ops, addr, phys, size, 1, prot, GFP_KERNEL, &mapped);
+
+ /* map_pages could fail after mapping some of the pages,
+ * so update the counters before error handling.
+ */
+ addr += mapped;
+ len -= mapped;
+
+ if (ret) {
+ msm_iommu_pagetable_unmap(mmu, iova, addr - iova);
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
struct sg_table *sgt, size_t off, size_t len,
int prot)
@@ -122,6 +153,9 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova,
u64 addr = iova;
unsigned int i;
+ if (!sgt)
+ return msm_iommu_pagetable_map_prr(mmu, iova, len, prot);
+
for_each_sgtable_sg(sgt, sg, i) {
size_t size = sg->length;
phys_addr_t phys = sg_phys(sg);
@@ -177,9 +211,16 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu)
* If this is the last attached pagetable for the parent,
* disable TTBR0 in the arm-smmu driver
*/
- if (atomic_dec_return(&iommu->pagetables) == 0)
+ if (atomic_dec_return(&iommu->pagetables) == 0) {
adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL);
+ if (adreno_smmu->set_prr_bit) {
+ adreno_smmu->set_prr_bit(adreno_smmu->cookie, false);
+ __free_page(iommu->prr_page);
+ iommu->prr_page = NULL;
+ }
+ }
+
free_io_pgtable_ops(pagetable->pgtbl_ops);
kfree(pagetable);
}
@@ -336,6 +377,25 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
kfree(pagetable);
return ERR_PTR(ret);
}
+
+ BUG_ON(iommu->prr_page);
+ if (adreno_smmu->set_prr_bit) {
+ /*
+ * We need a zero'd page for two reasons:
+ *
+ * 1) Reserve a known physical address to use when
+ * mapping NULL / sparsely resident regions
+ * 2) Read back zero
+ *
+ * It appears the hw drops writes to the PRR region
+ * on the floor, but reads actually return whatever
+ * is in the PRR page.
+ */
+ iommu->prr_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ adreno_smmu->set_prr_addr(adreno_smmu->cookie,
+ page_to_phys(iommu->prr_page));
+ adreno_smmu->set_prr_bit(adreno_smmu->cookie, true);
+ }
}
/* Needed later for TLB flush */
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index a7e48ee1dd95..48bc0374e2ae 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -115,6 +115,8 @@ struct drm_msm_timespec {
* ioctl will throw -EPIPE.
*/
#define MSM_PARAM_EN_VM_BIND 0x15 /* WO, once */
+/* PRR (Partially Resident Region) is required for sparse residency: */
+#define MSM_PARAM_HAS_PRR 0x16 /* RO */
/* For backwards compat. The original support for preemption was based on
* a single ring per priority level so # of priority levels equals the #
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 23/34] drm/msm: Rename msm_gem_vma_purge() -> _unmap()
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (21 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 22/34] drm/msm: Add PRR support Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 24/34] drm/msm: Split msm_gem_vma_new() Rob Clark
` (10 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
This is a more descriptive name.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem.c | 4 ++--
drivers/gpu/drm/msm/msm_gem.h | 2 +-
drivers/gpu/drm/msm/msm_gem_vma.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 577da3c54c8c..dcf5f6a25f87 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -421,7 +421,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
drm_gpuvm_bo_get(vm_bo);
drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
- msm_gem_vma_purge(vma);
+ msm_gem_vma_unmap(vma);
if (close)
msm_gem_vma_close(vma);
}
@@ -600,7 +600,7 @@ static int clear_iova(struct drm_gem_object *obj,
if (!vma)
return 0;
- msm_gem_vma_purge(vma);
+ msm_gem_vma_unmap(vma);
msm_gem_vma_close(vma);
return 0;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 3919b384d599..1622d557ea1f 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -141,7 +141,7 @@ struct msm_gem_vma {
struct drm_gpuva *
msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj,
u64 offset, u64 range_start, u64 range_end);
-void msm_gem_vma_purge(struct drm_gpuva *vma);
+void msm_gem_vma_unmap(struct drm_gpuva *vma);
int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt);
void msm_gem_vma_close(struct drm_gpuva *vma);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index d51d54c0da33..baa5c6a0ff22 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -21,7 +21,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
}
/* Actually unmap memory for the vma */
-void msm_gem_vma_purge(struct drm_gpuva *vma)
+void msm_gem_vma_unmap(struct drm_gpuva *vma)
{
struct msm_gem_vma *msm_vma = to_msm_vma(vma);
struct msm_gem_vm *vm = to_msm_vm(vma->vm);
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 24/34] drm/msm: Split msm_gem_vma_new()
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (22 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 23/34] drm/msm: Rename msm_gem_vma_purge() -> _unmap() Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 25/34] drm/msm: Pre-allocate VMAs Rob Clark
` (9 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Split memory allocation from vma initialization. Async vm-bind happens
in the fence signalling path, so it will need to use pre-allocated
memory.
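To illustrate the split this enables (a rough sketch, not the exact
driver code; __vma_init() is the internal helper this patch adds):
    /* ioctl path: memory allocation is still allowed here */
    struct msm_gem_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
    /* later, in the fence signalling path (job_run()): no allocation,
     * only initialization of the pre-allocated object:
     */
    __vma_init(vma, vm, obj, offset, range_start, range_end);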
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem_vma.c | 67 ++++++++++++++++++++++---------
1 file changed, 49 insertions(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index baa5c6a0ff22..7d40b151aa95 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -71,40 +71,54 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
return ret;
}
-/* Close an iova. Warn if it is still in use */
-void msm_gem_vma_close(struct drm_gpuva *vma)
+static void __vma_close(struct drm_gpuva *vma)
{
struct msm_gem_vm *vm = to_msm_vm(vma->vm);
struct msm_gem_vma *msm_vma = to_msm_vma(vma);
GEM_WARN_ON(msm_vma->mapped);
+ GEM_WARN_ON(!mutex_is_locked(&vm->vm_lock));
spin_lock(&vm->mm_lock);
if (vma->va.addr && vm->managed)
drm_mm_remove_node(&msm_vma->node);
spin_unlock(&vm->mm_lock);
- dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL);
- mutex_lock(&vm->vm_lock);
drm_gpuva_remove(vma);
drm_gpuva_unlink(vma);
- mutex_unlock(&vm->vm_lock);
- dma_resv_unlock(drm_gpuvm_resv(vma->vm));
kfree(vma);
}
-/* Create a new vma and allocate an iova for it */
-struct drm_gpuva *
-msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
- u64 offset, u64 range_start, u64 range_end)
+/* Close an iova. Warn if it is still in use */
+void msm_gem_vma_close(struct drm_gpuva *vma)
+{
+ struct msm_gem_vm *vm = to_msm_vm(vma->vm);
+
+ /*
+ * Only used in legacy (kernel managed) VM, if userspace is managing
+ * the VM, the legacy paths should be disallowed:
+ */
+ GEM_WARN_ON(!vm->managed);
+
+ dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL);
+ mutex_lock(&vm->vm_lock);
+ __vma_close(vma);
+ mutex_unlock(&vm->vm_lock);
+ dma_resv_unlock(drm_gpuvm_resv(vma->vm));
+}
+
+static struct drm_gpuva *
+__vma_init(struct msm_gem_vma *vma, struct drm_gpuvm *_vm,
+ struct drm_gem_object *obj, u64 offset,
+ u64 range_start, u64 range_end)
{
struct msm_gem_vm *vm = to_msm_vm(_vm);
struct drm_gpuvm_bo *vm_bo;
- struct msm_gem_vma *vma;
int ret;
- vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+ GEM_WARN_ON(!mutex_is_locked(&vm->vm_lock));
+
if (!vma)
return ERR_PTR(-ENOMEM);
@@ -128,9 +142,7 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
vma->mapped = false;
- mutex_lock(&vm->vm_lock);
ret = drm_gpuva_insert(&vm->base, &vma->base);
- mutex_unlock(&vm->vm_lock);
if (ret)
goto err_free_range;
@@ -140,17 +152,13 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
goto err_va_remove;
}
- mutex_lock(&vm->vm_lock);
drm_gpuva_link(&vma->base, vm_bo);
- mutex_unlock(&vm->vm_lock);
GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
return &vma->base;
err_va_remove:
- mutex_lock(&vm->vm_lock);
drm_gpuva_remove(&vma->base);
- mutex_unlock(&vm->vm_lock);
err_free_range:
if (vm->managed)
drm_mm_remove_node(&vma->node);
@@ -159,6 +167,29 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
return ERR_PTR(ret);
}
+/* Create a new vma and allocate an iova for it */
+struct drm_gpuva *
+msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
+ u64 offset, u64 range_start, u64 range_end)
+{
+ struct msm_gem_vm *vm = to_msm_vm(_vm);
+ struct msm_gem_vma *vma;
+ struct drm_gpuva *va;
+
+ /*
+ * Only used in legacy (kernel managed) VM, if userspace is managing
+ * the VM, the legacy paths should be disallowed:
+ */
+ GEM_WARN_ON(!vm->managed);
+
+ vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+
+ mutex_lock(&vm->vm_lock);
+ va = __vma_init(vma, _vm, obj, offset, range_start, range_end);
+ mutex_unlock(&vm->vm_lock);
+
+ if (IS_ERR(va)) {
+ kfree(vma);
+ return va;
+ }
+
+ return &vma->base;
+}
+
static const struct drm_gpuvm_ops msm_gpuvm_ops = {
.vm_free = msm_gem_vm_free,
};
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 25/34] drm/msm: Pre-allocate VMAs
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (23 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 24/34] drm/msm: Split msm_gem_vma_new() Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 26/34] drm/msm: Pre-allocate vm_bo objects Rob Clark
` (8 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Pre-allocate the VMA objects that we will need in the VM_BIND job, as
memory allocation is not allowed once the job runs in the fence
signalling path.
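As a worked example of the worst case (addresses hypothetical), an
OP_MAP landing in the middle of an existing VMA consumes up to three
pre-allocated VMAs, one for the new mapping plus prev/next remaps of
the old one:
    before:   [0x1000 ...................... 0x8000)  one VMA
    OP_MAP    [0x3000 .. 0x5000)
    after:    [0x1000..0x3000) [0x3000..0x5000) [0x5000..0x8000)
An OP_UNMAP can trigger a remap with a prev or next VMA, but not both,
so it reserves a single VMA.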
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem.h | 9 +++++
drivers/gpu/drm/msm/msm_gem_submit.c | 5 +++
drivers/gpu/drm/msm/msm_gem_vma.c | 60 ++++++++++++++++++++++++++++
3 files changed, 74 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 1622d557ea1f..cb76959fa8a8 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -115,6 +115,9 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
void msm_gem_vm_close(struct drm_gpuvm *gpuvm);
+void msm_vma_job_prepare(struct msm_gem_submit *submit);
+void msm_vma_job_cleanup(struct msm_gem_submit *submit);
+
struct msm_fence_context;
/**
@@ -339,6 +342,12 @@ struct msm_gem_submit {
int fence_id; /* key into queue->fence_idr */
struct msm_gpu_submitqueue *queue;
+
+ /*
+ * List of pre-allocated msm_gem_vma's, used to avoid memory
+ * allocation in the fence signalling path.
+ */
+ struct list_head preallocated_vmas;
+
struct pid *pid; /* submitting process */
bool bos_pinned : 1;
bool fault_dumped:1;/* Limit devcoredump dumping to one per submit */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 39a6e0418bdf..a9b3e6692db3 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -80,6 +80,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
submit->ident = atomic_inc_return(&ident) - 1;
INIT_LIST_HEAD(&submit->node);
+ INIT_LIST_HEAD(&submit->preallocated_vmas);
return submit;
}
@@ -584,6 +585,9 @@ void msm_submit_retire(struct msm_gem_submit *submit)
{
int i;
+ if (submit_is_vmbind(submit))
+ msm_vma_job_cleanup(submit);
+
for (i = 0; i < submit->nr_bos; i++) {
struct drm_gem_object *obj = submit->bos[i].obj;
@@ -912,6 +916,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
}
if (submit_is_vmbind(submit)) {
+ msm_vma_job_prepare(submit);
ret = submit_get_pages(submit);
} else {
ret = submit_pin_vmas(submit);
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 7d40b151aa95..5c7d44b004fb 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -258,6 +258,66 @@ static const struct drm_sched_backend_ops msm_vm_bind_ops = {
.free_job = msm_vma_job_free
};
+/**
+ * msm_vma_job_prepare() - VM_BIND job setup
+ * @submit: the VM_BIND job
+ *
+ * Prepare for a VM_BIND job by pre-allocating various memory that will
+ * be required once the job runs. Memory allocations cannot happen in
+ * the fence signalling path (ie. from job->run()) as that could recurse
+ * into the shrinker and potentially block waiting on the fence that is
+ * signalled when this job completes (ie. deadlock).
+ *
+ * Called after BOs are locked.
+ */
+void
+msm_vma_job_prepare(struct msm_gem_submit *submit)
+{
+ unsigned num_prealloc_vmas = 0;
+
+ for (int i = 0; i < submit->nr_bos; i++) {
+ unsigned op = submit->bos[i].flags & MSM_SUBMIT_BO_OP_MASK;
+
+ if (submit->bos[i].obj)
+ msm_gem_assert_locked(submit->bos[i].obj);
+
+ /*
+ * OP_MAP/OP_MAP_NULL has one new VMA for the new mapping,
+ * and potentially remaps with a prev and next VMA, for a
+ * total of 3 new VMAs.
+ *
+ * OP_UNMAP could trigger a remap with either a prev or
+ * next VMA, but not both.
+ */
+ num_prealloc_vmas += (op == MSM_SUBMIT_BO_OP_UNMAP) ? 1 : 3;
+ }
+
+ while (num_prealloc_vmas-- > 0) {
+ struct msm_gem_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
+ list_add_tail(&vma->base.rb.entry, &submit->preallocated_vmas);
+ }
+}
+
+/**
+ * msm_vma_job_cleanup() - cleanup after a VM_BIND job
+ * @submit: the VM_BIND job
+ *
+ * The counterpart to msm_vma_job_prepare().
+ */
+void
+msm_vma_job_cleanup(struct msm_gem_submit *submit)
+{
+ struct drm_gpuva *vma;
+
+ while (!list_empty(&submit->preallocated_vmas)) {
+ vma = list_first_entry(&submit->preallocated_vmas,
+ struct drm_gpuva,
+ rb.entry);
+ list_del(&vma->rb.entry);
+ kfree(to_msm_vma(vma));
+ }
+}
+
/**
* msm_gem_vm_create() - Create and initialize a &msm_gem_vm
* @drm: the drm device
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 26/34] drm/msm: Pre-allocate vm_bo objects
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (24 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 25/34] drm/msm: Pre-allocate VMAs Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 27/34] drm/msm: Pre-allocate pages for pgtable entries Rob Clark
` (7 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Use drm_gpuvm_bo_obtain() in the synchronous part of the VM_BIND submit,
to hold a reference to the vm_bo for the duration of the submit. This
ensures that the vm_bo already exists before the async part of the job,
which is in the fence signalling path (and therefore cannot allocate
memory).
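Roughly, the pattern is (sketch only): drm_gpuvm_bo_obtain() either
finds the existing vm_bo and takes a reference, or allocates a new one,
so calling it once in the ioctl path guarantees that lookups from the
async job are allocation-free:
    /* synchronous ioctl path: may allocate */
    submit->bos[i].vm_bo = drm_gpuvm_bo_obtain(submit->vm, obj);
    /* retire path: drop the extra reference */
    drm_gpuvm_bo_put(submit->bos[i].vm_bo);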
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem.h | 1 +
drivers/gpu/drm/msm/msm_gem_vma.c | 19 +++++++++++++++++--
2 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index cb76959fa8a8..d2ffaa11ec1a 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -369,6 +369,7 @@ struct msm_gem_submit {
uint32_t flags;
uint32_t handle;
struct drm_gem_object *obj;
+ struct drm_gpuvm_bo *vm_bo;
uint64_t iova;
uint64_t bo_offset;
uint64_t range;
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 5c7d44b004fb..b1808d95002f 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -278,8 +278,18 @@ msm_vma_job_prepare(struct msm_gem_submit *submit)
for (int i = 0; i < submit->nr_bos; i++) {
unsigned op = submit->bos[i].flags & MSM_SUBMIT_BO_OP_MASK;
- if (submit->bos[i].obj)
- msm_gem_assert_locked(submit->bos[i].obj);
+ if (submit->bos[i].obj) {
+ struct drm_gem_object *obj = submit->bos[i].obj;
+
+ msm_gem_assert_locked(obj);
+
+ /*
+ * Ensure the vm_bo is already allocated by
+ * holding a ref until the submit is retired
+ */
+ submit->bos[i].vm_bo =
+ drm_gpuvm_bo_obtain(submit->vm, obj);
+ }
/*
* OP_MAP/OP_MAP_NULL has one new VMA for the new mapping,
@@ -309,6 +319,11 @@ msm_vma_job_cleanup(struct msm_gem_submit *submit)
{
struct drm_gpuva *vma;
+ for (int i = 0; i < submit->nr_bos; i++) {
+ /* If we're holding an extra ref to the vm_bo, drop it now: */
+ drm_gpuvm_bo_put(submit->bos[i].vm_bo);
+ }
+
while (!list_empty(&submit->preallocated_vmas)) {
vma = list_first_entry(&submit->preallocated_vmas,
struct drm_gpuva,
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 27/34] drm/msm: Pre-allocate pages for pgtable entries
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (25 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 26/34] drm/msm: Pre-allocate vm_bo objects Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 28/34] drm/msm: Wire up gpuvm ops Rob Clark
` (6 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Introduce a mechanism to count the worst case # of pages required in a
VM_BIND op.
Note that previously we would have had to somehow account for
allocations in unmap, when splitting a block. This behavior was removed
in commit 33729a5fc0ca ("iommu/io-pgtable-arm: Remove split on unmap
behavior").
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
drivers/gpu/drm/msm/msm_gem.h | 7 +-
drivers/gpu/drm/msm/msm_gem_submit.c | 5 +-
drivers/gpu/drm/msm/msm_gem_vma.c | 18 ++-
drivers/gpu/drm/msm/msm_iommu.c | 201 +++++++++++++++++++++++++-
drivers/gpu/drm/msm/msm_mmu.h | 36 ++++-
6 files changed, 260 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index ca3247f845b5..9f66ad5bf0dc 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2267,7 +2267,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
{
struct msm_mmu *mmu;
- mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu);
+ mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu, kernel_managed);
if (IS_ERR(mmu))
return ERR_CAST(mmu);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index d2ffaa11ec1a..117f0e35e628 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -7,6 +7,7 @@
#ifndef __MSM_GEM_H__
#define __MSM_GEM_H__
+#include "msm_mmu.h"
#include <linux/kref.h>
#include <linux/dma-resv.h>
#include "drm/drm_exec.h"
@@ -115,7 +116,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
void msm_gem_vm_close(struct drm_gpuvm *gpuvm);
-void msm_vma_job_prepare(struct msm_gem_submit *submit);
+int msm_vma_job_prepare(struct msm_gem_submit *submit);
void msm_vma_job_cleanup(struct msm_gem_submit *submit);
struct msm_fence_context;
@@ -348,6 +349,10 @@ struct msm_gem_submit {
*/
struct list_head preallocated_vmas;
+ /* Tracking for pre-allocated pgtable pages.
+ */
+ struct msm_mmu_prealloc prealloc;
+
struct pid *pid; /* submitting process */
bool bos_pinned : 1;
bool fault_dumped:1;/* Limit devcoredump dumping to one per submit */
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index a9b3e6692db3..ed0265ac4e1d 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -916,8 +916,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
}
if (submit_is_vmbind(submit)) {
- msm_vma_job_prepare(submit);
- ret = submit_get_pages(submit);
+ ret = msm_vma_job_prepare(submit);
+ if (!ret)
+ ret = submit_get_pages(submit);
} else {
ret = submit_pin_vmas(submit);
}
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index b1808d95002f..554ec93456a0 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -270,9 +270,10 @@ static const struct drm_sched_backend_ops msm_vm_bind_ops = {
*
* Called after BOs are locked.
*/
-void
+int
msm_vma_job_prepare(struct msm_gem_submit *submit)
{
+ struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
unsigned num_prealloc_vmas = 0;
for (int i = 0; i < submit->nr_bos; i++) {
@@ -299,13 +300,23 @@ msm_vma_job_prepare(struct msm_gem_submit *submit)
* OP_UNMAP could trigger a remap with either a prev or
* next VMA, but not both.
*/
- num_prealloc_vmas += (op == MSM_SUBMIT_BO_OP_UNMAP) ? 1 : 3;
+ if (op != MSM_SUBMIT_BO_OP_UNMAP) {
+ num_prealloc_vmas += 3;
+
+ mmu->funcs->prealloc_count(mmu, &submit->prealloc,
+ submit->bos[i].iova,
+ submit->bos[i].range);
+ } else {
+ num_prealloc_vmas += 1;
+ }
}
while (num_prealloc_vmas-- > 0) {
struct msm_gem_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);
list_add_tail(&vma->base.rb.entry, &submit->preallocated_vmas);
}
+
+ return mmu->funcs->prealloc_allocate(mmu, &submit->prealloc);
}
/**
@@ -317,6 +328,7 @@ msm_vma_job_prepare(struct msm_gem_submit *submit)
void
msm_vma_job_cleanup(struct msm_gem_submit *submit)
{
+ struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
struct drm_gpuva *vma;
for (int i = 0; i < submit->nr_bos; i++) {
@@ -331,6 +343,8 @@ msm_vma_job_cleanup(struct msm_gem_submit *submit)
list_del(&vma->rb.entry);
kfree(to_msm_vma(vma));
}
+
+ mmu->funcs->prealloc_cleanup(mmu, &submit->prealloc);
}
/**
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 756bd55ee94f..ff04f2451d1d 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -6,6 +6,7 @@
#include <linux/adreno-smmu-priv.h>
#include <linux/io-pgtable.h>
+#include <linux/kmemleak.h>
#include "msm_drv.h"
#include "msm_mmu.h"
@@ -14,6 +15,8 @@ struct msm_iommu {
struct iommu_domain *domain;
atomic_t pagetables;
struct page *prr_page;
+
+ struct kmem_cache *pt_cache;
};
#define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
@@ -27,6 +30,12 @@ struct msm_iommu_pagetable {
unsigned long pgsize_bitmap; /* Bitmap of page sizes in use */
phys_addr_t ttbr;
u32 asid;
+
+ /** @root_page_table: Stores the root page table pointer. */
+ void *root_page_table;
+
+ /** @tblsz: size, in bytes, of a single page table */
+ size_t tblsz;
};
static struct msm_iommu_pagetable *to_pagetable(struct msm_mmu *mmu)
{
@@ -273,7 +282,145 @@ msm_iommu_pagetable_walk(struct msm_mmu *mmu, unsigned long iova, uint64_t ptes[
return 0;
}
+static void
+msm_iommu_pagetable_prealloc_count(struct msm_mmu *mmu, struct msm_mmu_prealloc *p,
+ uint64_t iova, size_t len)
+{
+ u64 pt_count;
+
+ /*
+ * L1, L2 and L3 page tables.
+ *
+ * We could optimize L3 allocation by iterating over the sgt and merging
+ * 2M contiguous blocks, but it's simpler to over-provision and return
+ * the pages if they're not used.
+ *
+ * With a 4K granule (v8 / v7-lpae long-descriptor format), each L1
+ * table covers a 512GB (2^39) span, each L2 table a 1GB (2^30) span,
+ * and each L3 table a 2MB (2^21) span, matching the shifts below.
+ *
+ * https://developer.arm.com/documentation/ddi0406/c/System-Level-Architecture/Virtual-Memory-System-Architecture--VMSA-/Long-descriptor-translation-table-format/Long-descriptor-translation-table-format-descriptors?lang=en#BEIHEFFB
+ */
+ pt_count = ((ALIGN(iova + len, 1ull << 39) - ALIGN_DOWN(iova, 1ull << 39)) >> 39) +
+ ((ALIGN(iova + len, 1ull << 30) - ALIGN_DOWN(iova, 1ull << 30)) >> 30) +
+ ((ALIGN(iova + len, 1ull << 21) - ALIGN_DOWN(iova, 1ull << 21)) >> 21);
+
+ p->count += pt_count;
+}
+
+static struct kmem_cache *
+get_pt_cache(struct msm_mmu *mmu)
+{
+ struct msm_iommu_pagetable *pagetable = to_pagetable(mmu);
+ return to_msm_iommu(pagetable->parent)->pt_cache;
+}
+
+static int
+msm_iommu_pagetable_prealloc_allocate(struct msm_mmu *mmu, struct msm_mmu_prealloc *p)
+{
+ struct kmem_cache *pt_cache = get_pt_cache(mmu);
+ int ret;
+
+ p->pages = kcalloc(p->count, sizeof(*p->pages), GFP_KERNEL);
+ if (!p->pages)
+ return -ENOMEM;
+
+ ret = kmem_cache_alloc_bulk(pt_cache, GFP_KERNEL, p->count, p->pages);
+ if (ret != p->count) {
+ p->count = ret;
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void
+msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_prealloc *p)
+{
+ struct kmem_cache *pt_cache = get_pt_cache(mmu);
+ uint32_t remaining_pt_count = p->count - p->ptr;
+
+ /* The pages array may be absent if the allocation step never ran or failed: */
+ if (!p->pages)
+ return;
+
+ kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]);
+ kfree(p->pages);
+}
+
+/**
+ * msm_iommu_pagetable_alloc_pt() - Custom page table allocator
+ * @cookie: Cookie passed at page table allocation time.
+ * @size: Size of the page table. This size should be fixed,
+ * and determined at creation time based on the granule size.
+ * @gfp: GFP flags.
+ *
+ * We want a custom allocator so we can use a cache for page table
+ * allocations and amortize the cost of the over-reservation that's
+ * done to allow asynchronous VM operations.
+ *
+ * Return: non-NULL on success, NULL if the allocation failed for any
+ * reason.
+ */
+static void *
+msm_iommu_pagetable_alloc_pt(void *cookie, size_t size, gfp_t gfp)
+{
+ struct msm_iommu_pagetable *pagetable = cookie;
+ struct msm_mmu_prealloc *p = pagetable->base.prealloc;
+ void *page;
+
+ /* Allocation of the root page table happens during init. */
+ if (unlikely(!pagetable->root_page_table)) {
+ struct page *p;
+
+ p = alloc_pages_node(dev_to_node(pagetable->iommu_dev),
+ gfp | __GFP_ZERO, get_order(size));
+ page = p ? page_address(p) : NULL;
+ pagetable->root_page_table = page;
+ return page;
+ }
+
+ if (WARN_ON(!p) || WARN_ON(p->ptr >= p->count))
+ return NULL;
+
+ page = p->pages[p->ptr++];
+ memset(page, 0, size);
+
+ /*
+ * Page table entries don't use virtual addresses, which trips out
+ * kmemleak. kmemleak_alloc_phys() might work, but physical addresses
+ * are mixed with other fields, and I fear kmemleak won't detect that
+ * either.
+ *
+ * Let's just ignore memory passed to the page-table driver for now.
+ */
+ kmemleak_ignore(page);
+
+ return page;
+}
+
+
+/**
+ * msm_iommu_pagetable_free_pt() - Custom page table free function
+ * @cookie: Cookie passed at page table allocation time.
+ * @data: Page table to free.
+ * @size: Size of the page table. This size should be fixed,
+ * and determined at creation time based on the granule size.
+ */
+static void
+msm_iommu_pagetable_free_pt(void *cookie, void *data, size_t size)
+{
+ struct msm_iommu_pagetable *pagetable = cookie;
+
+ if (unlikely(pagetable->root_page_table == data)) {
+ free_pages((unsigned long)data, get_order(size));
+ pagetable->root_page_table = NULL;
+ return;
+ }
+
+ kmem_cache_free(get_pt_cache(&pagetable->base), data);
+}
+
static const struct msm_mmu_funcs pagetable_funcs = {
+ .prealloc_count = msm_iommu_pagetable_prealloc_count,
+ .prealloc_allocate = msm_iommu_pagetable_prealloc_allocate,
+ .prealloc_cleanup = msm_iommu_pagetable_prealloc_cleanup,
.map = msm_iommu_pagetable_map,
.unmap = msm_iommu_pagetable_unmap,
.destroy = msm_iommu_pagetable_destroy,
@@ -324,7 +471,19 @@ static const struct iommu_flush_ops tlb_ops = {
static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev,
unsigned long iova, int flags, void *arg);
-struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
+
+static size_t get_tblsz(const struct io_pgtable_cfg *cfg)
+{
+ int pg_shift, bits_per_level;
+
+ pg_shift = __ffs(cfg->pgsize_bitmap);
+ /* arm_lpae_iopte is u64: */
+ bits_per_level = pg_shift - ilog2(sizeof(u64));
+
+ return sizeof(u64) << bits_per_level;
+}
+
+struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed)
{
struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev);
struct msm_iommu *iommu = to_msm_iommu(parent);
@@ -358,6 +517,36 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1;
ttbr0_cfg.tlb = &tlb_ops;
+ if (!kernel_managed) {
+ /*
+ * With userspace managed VM (aka VM_BIND), we need to pre-
+ * allocate pages ahead of time for map/unmap operations,
+ * handing them to io-pgtable via custom alloc/free ops as
+ * needed:
+ */
+ ttbr0_cfg.alloc = msm_iommu_pagetable_alloc_pt;
+ ttbr0_cfg.free = msm_iommu_pagetable_free_pt;
+
+ pagetable->tblsz = get_tblsz(&ttbr0_cfg);
+
+ /*
+ * Restrict to single page granules. Otherwise we may run
+ * into a situation where userspace wants to unmap/remap
+ * only a part of a larger block mapping, which is not
+ * possible without unmapping the entire block. Which in
+ * turn could cause faults if the GPU is accessing other
+ * parts of the block mapping.
+ *
+ * Note that prior to commit 33729a5fc0ca ("iommu/io-pgtable-arm:
+ * Remove split on unmap behavior)" this was handled in
+ * io-pgtable-arm. But this apparently does not work
+ * correctly on SMMUv3.
+ */
+ WARN_ON(!(ttbr0_cfg.pgsize_bitmap & PAGE_SIZE));
+ ttbr0_cfg.pgsize_bitmap = PAGE_SIZE;
+ }
+
+ pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1,
&ttbr0_cfg, pagetable);
@@ -401,7 +590,6 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent)
/* Needed later for TLB flush */
pagetable->parent = parent;
pagetable->tlb = ttbr1_cfg->tlb;
- pagetable->iommu_dev = ttbr1_cfg->iommu_dev;
pagetable->pgsize_bitmap = ttbr0_cfg.pgsize_bitmap;
pagetable->ttbr = ttbr0_cfg.arm_lpae_s1_cfg.ttbr;
@@ -509,6 +697,7 @@ static void msm_iommu_destroy(struct msm_mmu *mmu)
{
struct msm_iommu *iommu = to_msm_iommu(mmu);
iommu_domain_free(iommu->domain);
+ kmem_cache_destroy(iommu->pt_cache);
kfree(iommu);
}
@@ -583,6 +772,14 @@ struct msm_mmu *msm_iommu_gpu_new(struct device *dev, struct msm_gpu *gpu, unsig
return mmu;
iommu = to_msm_iommu(mmu);
+ if (adreno_smmu && adreno_smmu->cookie) {
+ const struct io_pgtable_cfg *cfg =
+ adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie);
+ size_t tblsz = get_tblsz(cfg);
+
+ iommu->pt_cache =
+ kmem_cache_create("msm-mmu-pt", tblsz, tblsz, 0, NULL);
+ }
iommu_set_fault_handler(iommu->domain, msm_gpu_fault_handler, iommu);
/* Enable stall on iommu fault: */
diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h
index c874852b7331..24ef04d267a6 100644
--- a/drivers/gpu/drm/msm/msm_mmu.h
+++ b/drivers/gpu/drm/msm/msm_mmu.h
@@ -9,8 +9,16 @@
#include <linux/iommu.h>
+struct msm_mmu_prealloc;
+struct msm_mmu;
+struct msm_gpu;
+
struct msm_mmu_funcs {
void (*detach)(struct msm_mmu *mmu);
+ void (*prealloc_count)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p,
+ uint64_t iova, size_t len);
+ int (*prealloc_allocate)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p);
+ void (*prealloc_cleanup)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p);
int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt,
size_t off, size_t len, int prot);
int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len);
@@ -25,12 +33,38 @@ enum msm_mmu_type {
MSM_MMU_IOMMU_PAGETABLE,
};
+/**
+ * struct msm_mmu_prealloc - Tracking for pre-allocated pages for MMU updates.
+ */
+struct msm_mmu_prealloc {
+ /** @count: Number of pages reserved. */
+ uint32_t count;
+ /** @ptr: Index of first unused page in @pages */
+ uint32_t ptr;
+ /**
+ * @pages: Array of pages preallocated for MMU table updates.
+ *
+ * After a VM operation, there might be free pages remaining in this
+ * array (since the amount allocated is a worst-case). These are
+ * returned to the pt_cache at mmu->prealloc_cleanup().
+ */
+ void **pages;
+};
+
struct msm_mmu {
const struct msm_mmu_funcs *funcs;
struct device *dev;
int (*handler)(void *arg, unsigned long iova, int flags, void *data);
void *arg;
enum msm_mmu_type type;
+
+ /**
+ * @prealloc: pre-allocated pages for pgtable
+ *
+ * Set while a VM_BIND job is running, serialized under
+ * msm_gem_vm::op_lock.
+ */
+ struct msm_mmu_prealloc *prealloc;
};
static inline void msm_mmu_init(struct msm_mmu *mmu, struct device *dev,
@@ -52,7 +86,7 @@ static inline void msm_mmu_set_fault_handler(struct msm_mmu *mmu, void *arg,
mmu->handler = handler;
}
-struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent);
+struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed);
int msm_iommu_pagetable_params(struct msm_mmu *mmu, phys_addr_t *ttbr,
int *asid);
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 28/34] drm/msm: Wire up gpuvm ops
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (26 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 27/34] drm/msm: Pre-allocate pages for pgtable entries Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 29/34] drm/msm: Wire up drm_gpuvm debugfs Rob Clark
` (5 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Hook up the map/remap/unmap ops to apply MAP/UNMAP operations. The
MAP/UNMAP operations are split up by drm_gpuvm into a series of map/
remap/unmap ops; for example, an UNMAP operation which spans multiple
vmas will get split up into a sequence of unmap (and possibly remap)
ops which each apply to a single vma.
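For example (a sketch of the split, with arbitrary addresses): an UNMAP
of [0x3000..0x5000) that lands in the middle of a single existing VMA
[0x1000..0x8000) is delivered to the driver as one remap op:
    sm_step_remap: unmap = [0x1000..0x8000)
                   prev  = [0x1000..0x3000)
                   next  = [0x5000..0x8000)
whereas an UNMAP fully covering one or more VMAs is delivered as plain
unmap ops, one per vma.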
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem.h | 12 ++
drivers/gpu/drm/msm/msm_gem_vma.c | 329 ++++++++++++++++++++++++++++--
2 files changed, 320 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 117f0e35e628..7f6315a66751 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -76,6 +76,16 @@ struct msm_gem_vm {
/** @vm_lock: protects gpuvm insert/remove/traverse */
struct mutex vm_lock;
+ /**
+ * @op_lock:
+ *
+ * Serializes VM operations. Typically operations are serialized
+ * by virtue of running on the VM_BIND queue, but in the cleanup
+ * path (or if there are multiple VM_BIND queues) the @op_lock
+ * provides the needed serialization.
+ */
+ struct mutex op_lock;
+
/** @mmu: The mmu object which manages the pgtables */
struct msm_mmu *mmu;
@@ -121,6 +131,8 @@ void msm_vma_job_cleanup(struct msm_gem_submit *submit);
struct msm_fence_context;
+#define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0)
+
/**
* struct msm_gem_vma - a VMA mapping
*
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 554ec93456a0..09d4746248c2 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -8,6 +8,8 @@
#include "msm_gem.h"
#include "msm_mmu.h"
+#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
+
static void
msm_gem_vm_free(struct drm_gpuvm *gpuvm)
{
@@ -20,18 +22,29 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
kfree(vm);
}
+static void
+msm_gem_vma_unmap_range(struct drm_gpuva *vma, uint64_t unmap_start, uint64_t unmap_range)
+{
+ struct msm_gem_vm *vm = to_msm_vm(vma->vm);
+
+ vm_dbg("%p:%p: %016llx %016llx", vma->vm, vma, unmap_start, unmap_start + unmap_range);
+
+ if (vma->gem.obj)
+ msm_gem_assert_locked(vma->gem.obj);
+
+ vm->mmu->funcs->unmap(vm->mmu, unmap_start, unmap_range);
+}
+
/* Actually unmap memory for the vma */
void msm_gem_vma_unmap(struct drm_gpuva *vma)
{
struct msm_gem_vma *msm_vma = to_msm_vma(vma);
- struct msm_gem_vm *vm = to_msm_vm(vma->vm);
- unsigned size = vma->va.range;
/* Don't do anything if the memory isn't mapped */
if (!msm_vma->mapped)
return;
- vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size);
+ msm_gem_vma_unmap_range(vma, vma->va.addr, vma->va.range);
msm_vma->mapped = false;
}
@@ -52,6 +65,11 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
msm_vma->mapped = true;
+ vm_dbg("%p: %016llx %016llx", vma, vma->va.addr, vma->va.range);
+
+ if (vma->gem.obj)
+ msm_gem_assert_locked(vma->gem.obj);
+
/*
* NOTE: iommu/io-pgtable can allocate pages, so we cannot hold
* a lock across map/unmap which is also used in the job_run()
@@ -79,10 +97,11 @@ static void __vma_close(struct drm_gpuva *vma)
GEM_WARN_ON(msm_vma->mapped);
GEM_WARN_ON(!mutex_is_locked(&vm->vm_lock));
- spin_lock(&vm->mm_lock);
- if (vma->va.addr && vm->managed)
+ if (vma->va.addr && vm->managed) {
+ spin_lock(&vm->mm_lock);
drm_mm_remove_node(&msm_vma->node);
- spin_unlock(&vm->mm_lock);
+ spin_unlock(&vm->mm_lock);
+ }
drm_gpuva_remove(vma);
drm_gpuva_unlink(vma);
@@ -101,11 +120,9 @@ void msm_gem_vma_close(struct drm_gpuva *vma)
*/
GEM_WARN_ON(!vm->managed);
- dma_resv_lock(drm_gpuvm_resv(vma->vm), NULL);
mutex_lock(&vm->vm_lock);
__vma_close(vma);
mutex_unlock(&vm->vm_lock);
- dma_resv_unlock(drm_gpuvm_resv(vma->vm));
}
static struct drm_gpuva *
@@ -124,6 +141,7 @@ __vma_init(struct msm_gem_vma *vma, struct drm_gpuvm *_vm,
if (vm->managed) {
BUG_ON(offset != 0);
+ BUG_ON(!obj); /* NULL mappings not valid for kernel managed VM */
spin_lock(&vm->mm_lock);
ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
obj->size, PAGE_SIZE, 0,
@@ -137,7 +155,8 @@ __vma_init(struct msm_gem_vma *vma, struct drm_gpuvm *_vm,
range_end = range_start + obj->size;
}
- GEM_WARN_ON((range_end - range_start) > obj->size);
+ if (obj)
+ GEM_WARN_ON((range_end - range_start) > obj->size);
drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
vma->mapped = false;
@@ -146,6 +165,9 @@ __vma_init(struct msm_gem_vma *vma, struct drm_gpuvm *_vm,
if (ret)
goto err_free_range;
+ if (!obj)
+ return &vma->base;
+
vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj);
if (IS_ERR(vm_bo)) {
ret = PTR_ERR(vm_bo);
@@ -190,39 +212,289 @@ msm_gem_vma_new(struct drm_gpuvm *_vm, struct drm_gem_object *obj,
return &vma->base;
}
+static int
+msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
+{
+ struct drm_gem_object *obj = vm_bo->obj;
+ struct drm_gpuva *vma;
+ int ret;
+
+ msm_gem_assert_locked(obj);
+
+ drm_gpuvm_bo_for_each_va (vma, vm_bo) {
+ ret = msm_gem_pin_vma_locked(obj, vma);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+struct op_arg {
+ unsigned flags;
+ struct drm_gpuvm *vm;
+ struct msm_gem_submit *submit;
+};
+
+static struct drm_gpuva *
+vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op)
+{
+ struct msm_gem_vma *vma;
+
+ if (WARN_ON(list_empty(&arg->submit->preallocated_vmas)))
+ return ERR_PTR(-ENOMEM);
+
+ vma = list_first_entry(&arg->submit->preallocated_vmas,
+ struct msm_gem_vma, base.rb.entry);
+
+ list_del(&vma->base.rb.entry);
+
+ return __vma_init(vma, arg->vm, op->gem.obj, op->gem.offset,
+ op->va.addr, op->va.addr + op->va.range);
+}
+
+static int
+msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg)
+{
+ struct drm_gem_object *obj = op->map.gem.obj;
+ struct drm_gpuva *vma;
+ struct sg_table *sgt;
+ unsigned prot;
+
+ vma = vma_from_op(arg, &op->map);
+ if (WARN_ON(IS_ERR(vma)))
+ return PTR_ERR(vma);
+
+ vm_dbg("%p:%p: %016llx %016llx", vma->vm, vma, vma->va.addr, vma->va.range);
+
+ if (obj) {
+ sgt = to_msm_bo(obj)->sgt;
+ prot = msm_gem_prot(obj);
+ } else {
+ sgt = NULL;
+ prot = IOMMU_READ | IOMMU_WRITE;
+ }
+
+ vma->flags = ((struct op_arg *)arg)->flags;
+
+ return msm_gem_vma_map(vma, prot, sgt);
+}
+
+static int
+msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg)
+{
+ struct drm_gpuvm *vm = ((struct op_arg *)arg)->vm;
+ struct drm_gpuva *orig_vma = op->remap.unmap->va;
+ struct drm_gpuva *prev_vma = NULL, *next_vma = NULL;
+ uint64_t unmap_start, unmap_range;
+ unsigned flags;
+
+ vm_dbg("orig_vma: %p:%p: %016llx %016llx", vm, orig_vma, orig_vma->va.addr, orig_vma->va.range);
+
+ drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range);
+
+ msm_gem_vma_unmap_range(op->remap.unmap->va, unmap_start, unmap_range);
+
+ /*
+ * Part of this GEM obj is still mapped, but we're going to kill the
+ * existing VMA and replace it with one or two new ones (ie. two if
+ * the unmapped range is in the middle of the existing (unmap) VMA).
+ * So just set the state to unmapped:
+ */
+ to_msm_vma(orig_vma)->mapped = false;
+
+ /*
+ * The prev_vma and/or next_vma are replacing the unmapped vma, and
+ * therefore should preserve its flags:
+ */
+ flags = orig_vma->flags;
+
+ __vma_close(orig_vma);
+
+ if (op->remap.prev) {
+ prev_vma = vma_from_op(arg, op->remap.prev);
+ if (WARN_ON(IS_ERR(prev_vma)))
+ return PTR_ERR(prev_vma);
+
+ vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma, prev_vma->va.addr, prev_vma->va.range);
+ to_msm_vma(prev_vma)->mapped = true;
+ prev_vma->flags = flags;
+ }
+
+ if (op->remap.next) {
+ next_vma = vma_from_op(arg, op->remap.next);
+ if (WARN_ON(IS_ERR(next_vma)))
+ return PTR_ERR(next_vma);
+
+ vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma, next_vma->va.addr, next_vma->va.range);
+ to_msm_vma(next_vma)->mapped = true;
+ next_vma->flags = flags;
+ }
+
+ return 0;
+}
+
+static int
+msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *priv)
+{
+ struct drm_gpuva *vma = op->unmap.va;
+
+ vm_dbg("%p:%p: %016llx %016llx", vma->vm, vma, vma->va.addr, vma->va.range);
+
+ msm_gem_vma_unmap(vma);
+ __vma_close(vma);
+
+ return 0;
+}
+
static const struct drm_gpuvm_ops msm_gpuvm_ops = {
.vm_free = msm_gem_vm_free,
+ .vm_bo_validate = msm_gem_vm_bo_validate,
+ .sm_step_map = msm_gem_vm_sm_step_map,
+ .sm_step_remap = msm_gem_vm_sm_step_remap,
+ .sm_step_unmap = msm_gem_vm_sm_step_unmap,
};
+static void
+cond_lock(struct drm_gem_object *obj)
+{
+ if (!obj)
+ return;
+
+ /*
+ * Hold a ref while we have the obj locked, so drm_gpuvm doesn't
+ * manage to drop the last ref to the obj while it is locked:
+ */
+ drm_gem_object_get(obj);
+ msm_gem_lock(obj);
+}
+
+static void
+cond_unlock(struct drm_gem_object *obj)
+{
+ if (!obj)
+ return;
+
+ msm_gem_unlock(obj);
+ /* Drop the ref obtained in cond_lock(): */
+ drm_gem_object_put(obj);
+}
+
+static int
+run_bo_unmap(struct op_arg *arg, u64 req_addr, u64 req_range)
+{
+ struct drm_gpuva *vma, *next;
+ struct drm_gpuvm *vm = arg->vm;
+ int ret = 0;
+ u64 req_end = req_addr + req_range;
+
+ GEM_WARN_ON(!mutex_is_locked(&to_msm_vm(vm)->op_lock));
+
+ /*
+ * There are two locks at play when it comes to inserting/
+ * removing objs into a VM. There is vm->vm_lock and the
+ * obj resv lock. Because there are N objs and M VMs, and
+ * an obj can be attached to an arbitrary # of VMs, the only
+ * alternative is a single global lock.
+ *
+ * With two locks at play, there are two possible orderings
+ * of locking:
+ *
+ * 1) locking obj first, and then vm_lock. The problem with
+ * this ordering is that OP_UNMAP can be touching an
+ * arbitrary # of VMAs, with their own objs. So we have
+ * no way to know which objs to lock without traversing
+ * the VM. Which requires vm_lock!
+ *
+ * 2) vm_lock first, and then obj lock. The problem with this
+ * ordering is that non-VM_BIND (ie. kernel managed VM)
+ * userspace relies on implicit VMA teardown when a BO is
+ * freed. Which means there are paths where we need to
+ * traverse the obj's gpuva list to find the VM(s). Which
+ * requires the obj lock!
+ *
+ * To resolve this we rely on the fact that, with a VM_BIND
+ * userspace the legacy paths are disallowed. And all paths
+ * which modify the VM also hold op_lock. So we can safely
+ * reach into the VM to find the first object to lock. But
+ * it means having to bypass drm_gpuvm_sm_unmap(), and pull
+ * out the impacted VMAs ourselves.
+ *
+ * (The other option would be to use drm_gpuvm_sm_unmap_ops_create(),
+ * but that requires memory allocation.. which we can't do
+ * here.)
+ */
+ drm_gpuvm_for_each_va_range_safe(vma, next, vm, req_addr, req_end) {
+ struct drm_gem_object *obj = vma->gem.obj;
+
+ cond_lock(obj);
+ mutex_lock(&to_msm_vm(vm)->vm_lock);
+
+ ret = drm_gpuvm_sm_unmap_va(vma, arg,
+ vma->va.addr,
+ vma->va.range);
+
+ mutex_unlock(&to_msm_vm(vm)->vm_lock);
+ cond_unlock(obj);
+
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+
static int
run_bo_op(struct msm_gem_submit *submit, const struct msm_gem_submit_bo *bo)
{
+ struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+ struct op_arg arg = {
+ .vm = submit->vm,
+ .submit = submit,
+ };
unsigned op = bo->flags & MSM_SUBMIT_BO_OP_MASK;
+ int ret;
- switch (op) {
- case MSM_SUBMIT_BO_OP_MAP:
- case MSM_SUBMIT_BO_OP_MAP_NULL:
- return drm_gpuvm_sm_map(submit->vm, submit->vm, bo->iova,
- bo->range, bo->obj, bo->bo_offset);
- break;
- case MSM_SUBMIT_BO_OP_UNMAP:
- return drm_gpuvm_sm_unmap(submit->vm, submit->vm, bo->iova,
- bo->bo_offset);
+ if (op == MSM_SUBMIT_BO_OP_UNMAP) {
+ vm_dbg("UNMAP: %p: %016llx %016llx", vm, bo->iova, bo->range);
+ ret = run_bo_unmap(&arg, bo->iova, bo->range);
+ } else {
+ if ((op == MSM_SUBMIT_BO_OP_MAP) &&
+ (bo->flags & MSM_SUBMIT_BO_DUMP))
+ arg.flags |= MSM_VMA_DUMP;
+
+ cond_lock(bo->obj);
+ mutex_lock(&vm->vm_lock);
+
+ vm_dbg("MAP: %p: %016llx %016llx", vm, bo->iova, bo->range);
+ ret = drm_gpuvm_sm_map(submit->vm, &arg,
+ bo->iova, bo->range,
+ bo->obj, bo->bo_offset);
+
+ mutex_unlock(&vm->vm_lock);
+ cond_unlock(bo->obj);
}
- return -EINVAL;
+ return ret;
}
static struct dma_fence *
msm_vma_job_run(struct drm_sched_job *job)
{
struct msm_gem_submit *submit = to_msm_submit(job);
+ struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+ int ret = 0;
+
+ vm_dbg("");
+
+ mutex_lock(&vm->op_lock);
+ vm->mmu->prealloc = &submit->prealloc;
for (unsigned i = 0; i < submit->nr_bos; i++) {
- int ret = run_bo_op(submit, &submit->bos[i]);
+ ret = run_bo_op(submit, &submit->bos[i]);
if (ret) {
to_msm_vm(submit->vm)->unusable = true;
- return ERR_PTR(ret);
+ break;
}
}
@@ -240,8 +512,11 @@ msm_vma_job_run(struct drm_sched_job *job)
msm_gem_unlock(obj);
}
+ vm->mmu->prealloc = NULL;
+ mutex_unlock(&vm->op_lock);
+
/* VM_BIND ops are synchronous, so no fence to wait on: */
- return NULL;
+ return ERR_PTR(ret);
}
static void
@@ -404,6 +679,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
spin_lock_init(&vm->mm_lock);
mutex_init(&vm->vm_lock);
+ mutex_init(&vm->op_lock);
vm->mmu = mmu;
vm->managed = managed;
@@ -433,6 +709,9 @@ void
msm_gem_vm_close(struct drm_gpuvm *gpuvm)
{
struct msm_gem_vm *vm = to_msm_vm(gpuvm);
+ struct op_arg arg = {
+ .vm = gpuvm,
+ };
/*
* For kernel managed VMs, the VMAs are torn down when the handle is
@@ -444,4 +723,12 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm)
/* Kill the scheduler now, so we aren't racing with it for cleanup: */
drm_sched_stop(&vm->sched, NULL);
drm_sched_fini(&vm->sched);
+
+ /* Serialize against vm scheduler thread: */
+ mutex_lock(&vm->op_lock);
+
+ /* Tear down any remaining mappings: */
+ run_bo_unmap(&arg, gpuvm->mm_start, gpuvm->mm_range);
+
+ mutex_unlock(&vm->op_lock);
}
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 29/34] drm/msm: Wire up drm_gpuvm debugfs
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (27 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 28/34] drm/msm: Wire up gpuvm ops Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 30/34] drm/msm: Crashdump prep for sparse mappings Rob Clark
` (4 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Core drm already provides a helper to dump vm state. We just need to
wire up tracking of VMs and give userspace VMs a suitable name.
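With this in place, per-VM mappings should be visible via the standard
gpuva debugfs node that DRM_DEBUGFS_GPUVA_INFO() registers, e.g. (path
assumes the first DRM minor):
    cat /sys/kernel/debug/dri/0/gpuvas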
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
drivers/gpu/drm/msm/msm_debugfs.c | 20 ++++++++++++++++++++
drivers/gpu/drm/msm/msm_drv.c | 3 +++
drivers/gpu/drm/msm/msm_drv.h | 4 ++++
drivers/gpu/drm/msm/msm_gem.h | 8 ++++++++
drivers/gpu/drm/msm/msm_gem_vma.c | 23 +++++++++++++++++++++++
6 files changed, 59 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 9f66ad5bf0dc..3189a6f75d74 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2272,7 +2272,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
if (IS_ERR(mmu))
return ERR_CAST(mmu);
- return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL,
+ return msm_gem_vm_create(gpu->dev, mmu, NULL, 0x100000000ULL,
adreno_private_vm_size(gpu), kernel_managed);
}
diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
index 7ab607252d18..bde25981254f 100644
--- a/drivers/gpu/drm/msm/msm_debugfs.c
+++ b/drivers/gpu/drm/msm/msm_debugfs.c
@@ -10,6 +10,7 @@
#include <linux/fault-inject.h>
#include <drm/drm_debugfs.h>
+#include <drm/drm_drv.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_file.h>
#include <drm/drm_framebuffer.h>
@@ -238,6 +239,24 @@ static int msm_mm_show(struct seq_file *m, void *arg)
return 0;
}
+static int msm_gpuvas_show(struct seq_file *m, void *arg)
+{
+ struct drm_info_node *node = m->private;
+ struct drm_device *dev = node->minor->dev;
+ struct msm_drm_private *priv = dev->dev_private;
+ struct msm_gem_vm *vm;
+
+ mutex_lock(&priv->vm_lock);
+ list_for_each_entry(vm, &priv->vms, node) {
+ mutex_lock(&vm->op_lock);
+ drm_debugfs_gpuva_info(m, &vm->base);
+ mutex_unlock(&vm->op_lock);
+ }
+ mutex_unlock(&priv->vm_lock);
+
+ return 0;
+}
+
static int msm_fb_show(struct seq_file *m, void *arg)
{
struct drm_info_node *node = m->private;
@@ -266,6 +285,7 @@ static int msm_fb_show(struct seq_file *m, void *arg)
static struct drm_info_list msm_debugfs_list[] = {
{"gem", msm_gem_show},
{ "mm", msm_mm_show },
+ DRM_DEBUGFS_GPUVA_INFO(msm_gpuvas_show, NULL),
};
static struct drm_info_list msm_kms_debugfs_list[] = {
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 5b5a64c8dddb..70c3a3712a3e 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -124,6 +124,9 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
goto err_put_dev;
}
+ INIT_LIST_HEAD(&priv->vms);
+ mutex_init(&priv->vm_lock);
+
INIT_LIST_HEAD(&priv->objects);
mutex_init(&priv->obj_lock);
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index b0add236cbb3..83d2a480cfcf 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -112,6 +112,10 @@ struct msm_drm_private {
*/
atomic64_t total_mem;
+ /** @vms: List of all VMs, protected by @vm_lock */
+ struct list_head vms;
+ struct mutex vm_lock;
+
/**
* List of all GEM objects (mainly for debugfs, protected by obj_lock
* (acquire before per GEM object lock)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 7f6315a66751..0409d35ebb32 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -54,6 +54,9 @@ struct msm_gem_vm {
/** @base: Inherit from drm_gpuvm. */
struct drm_gpuvm base;
+ /** @name: Storage for dynamically generated VM name for user VMs */
+ char name[32];
+
/**
* @sched: Scheduler used for asynchronous VM_BIND request.
*
@@ -95,6 +98,11 @@ struct msm_gem_vm {
*/
struct pid *pid;
+ /**
+ * @node: List node in msm_drm_private.vms list
+ */
+ struct list_head node;
+
/** @faults: the number of GPU hangs associated with this address space */
int faults;
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 09d4746248c2..8d0c4d3afa13 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -14,6 +14,11 @@ static void
msm_gem_vm_free(struct drm_gpuvm *gpuvm)
{
struct msm_gem_vm *vm = container_of(gpuvm, struct msm_gem_vm, base);
+ struct msm_drm_private *priv = gpuvm->drm->dev_private;
+
+ mutex_lock(&priv->vm_lock);
+ list_del(&vm->node);
+ mutex_unlock(&priv->vm_lock);
drm_mm_takedown(&vm->mm);
if (vm->mmu)
@@ -640,6 +645,7 @@ struct drm_gpuvm *
msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
u64 va_start, u64 va_size, bool managed)
{
+ struct msm_drm_private *priv = drm->dev_private;
enum drm_gpuvm_flags flags = managed ? DRM_GPUVM_VA_WEAK_REF : 0;
struct msm_gem_vm *vm;
struct drm_gem_object *dummy_gem;
@@ -673,6 +679,19 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
goto err_free_dummy;
}
+ /* For userspace pgtables, generate a VM name based on comm and PID nr: */
+ if (!name) {
+ char tmpname[TASK_COMM_LEN];
+ struct pid *pid = get_pid(task_tgid(current));
+
+ get_task_comm(tmpname, current);
+ snprintf(vm->name, sizeof(vm->name), "%s[%d]", tmpname, pid_nr(pid));
+ put_pid(pid);
+
+ name = vm->name;
+ }
+
drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
va_start, va_size, 0, 0, &msm_gpuvm_ops);
drm_gem_object_put(dummy_gem);
@@ -686,6 +705,10 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
drm_mm_init(&vm->mm, va_start, va_size);
+ mutex_lock(&priv->vm_lock);
+ list_add_tail(&vm->node, &priv->vms);
+ mutex_unlock(&priv->vm_lock);
+
return &vm->base;
err_free_dummy:
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 30/34] drm/msm: Crashdump prep for sparse mappings
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (28 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 29/34] drm/msm: Wire up drm_gpuvm debugfs Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 31/34] drm/msm: rd dumping " Rob Clark
` (3 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
With sparse mappings, userspace can request dumping partial GEM obj
mappings.
Also drop use of should_dump() helper, which really only makes sense in
the old submit->bos[] table world.
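With the extra offset/size arguments, a caller can capture just a
window of a BO; e.g. (illustrative values)
msm_gpu_crashstate_get_bo(state, obj, iova, dump, 0x1000, 0x2000)
would snapshot 0x2000 bytes starting at offset 0x1000 into the BO,
rather than the whole object.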
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gpu.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 4831f4e42fd9..e35125d88466 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -219,13 +219,14 @@ static void msm_gpu_devcoredump_free(void *data)
}
static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
- struct drm_gem_object *obj, u64 iova, bool full)
+ struct drm_gem_object *obj, u64 iova,
+ bool full, size_t offset, size_t size)
{
struct msm_gpu_state_bo *state_bo = &state->bos[state->nr_bos];
struct msm_gem_object *msm_obj = to_msm_bo(obj);
/* Don't record write only objects */
- state_bo->size = obj->size;
+ state_bo->size = size;
state_bo->flags = msm_obj->flags;
state_bo->iova = iova;
@@ -236,7 +237,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
if (full) {
void *ptr;
- state_bo->data = kvmalloc(obj->size, GFP_KERNEL);
+ state_bo->data = kvmalloc(size, GFP_KERNEL);
if (!state_bo->data)
goto out;
@@ -249,7 +250,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
goto out;
}
- memcpy(state_bo->data, ptr, obj->size);
+ memcpy(state_bo->data, ptr + offset, size);
msm_gem_put_vaddr(obj);
}
out:
@@ -279,6 +280,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
state->fault_info = gpu->fault_info;
if (submit) {
+ extern bool rd_full;
int i;
if (state->fault_info.ttbr0) {
@@ -294,9 +296,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
for (i = 0; state->bos && i < submit->nr_bos; i++) {
- msm_gpu_crashstate_get_bo(state, submit->bos[i].obj,
- submit->bos[i].iova,
- should_dump(submit, i));
+ struct drm_gem_object *obj = submit->bos[i].obj;
+ bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+ msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+ dump, 0, obj->size);
}
}
--
2.48.1
^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH v2 31/34] drm/msm: rd dumping prep for sparse mappings
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (29 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 30/34] drm/msm: Crashdump prep for sparse mappings Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 32/34] drm/msm: Crashdec support for sparse Rob Clark
` (2 subsequent siblings)
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Similar to the previous commit, add support for dumping partial
mappings.
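As a hypothetical worked case: a cmdstream buffer not marked for
dumping is snapshotted only for its executed range, so if
submit->cmd[i].iova is bos[idx].iova + 0x100 and the cmd size is 0x40
dwords, only bytes [0x100..0x200) of the BO land in the rd file.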
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem.h | 10 ---------
drivers/gpu/drm/msm/msm_rd.c | 38 ++++++++++++++++-------------------
2 files changed, 17 insertions(+), 31 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 0409d35ebb32..bdd9b09b8ca9 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -420,14 +420,4 @@ static inline void msm_gem_submit_put(struct msm_gem_submit *submit)
void msm_submit_retire(struct msm_gem_submit *submit);
-/* helper to determine of a buffer in submit should be dumped, used for both
- * devcoredump and debugfs cmdstream dumping:
- */
-static inline bool
-should_dump(struct msm_gem_submit *submit, int idx)
-{
- extern bool rd_full;
- return rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
-}
-
#endif /* __MSM_GEM_H__ */
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index 39138e190cb9..edbcb93410a9 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -308,21 +308,11 @@ void msm_rd_debugfs_cleanup(struct msm_drm_private *priv)
priv->hangrd = NULL;
}
-static void snapshot_buf(struct msm_rd_state *rd,
- struct msm_gem_submit *submit, int idx,
- uint64_t iova, uint32_t size, bool full)
+static void snapshot_buf(struct msm_rd_state *rd, struct drm_gem_object *obj,
+ uint64_t iova, bool full, size_t offset, size_t size)
{
- struct drm_gem_object *obj = submit->bos[idx].obj;
- unsigned offset = 0;
const char *buf;
- if (iova) {
- offset = iova - submit->bos[idx].iova;
- } else {
- iova = submit->bos[idx].iova;
- size = obj->size;
- }
-
/*
* Always write the GPUADDR header so can get a complete list of all the
* buffers in the cmd
@@ -333,10 +323,6 @@ static void snapshot_buf(struct msm_rd_state *rd,
if (!full)
return;
- /* But only dump the contents of buffers marked READ */
- if (!(submit->bos[idx].flags & MSM_SUBMIT_BO_READ))
- return;
-
buf = msm_gem_get_vaddr_active(obj);
if (IS_ERR(buf))
return;
@@ -352,6 +338,7 @@ static void snapshot_buf(struct msm_rd_state *rd,
void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
const char *fmt, ...)
{
+ extern bool rd_full;
struct task_struct *task;
char msg[256];
int i, n;
@@ -385,16 +372,25 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
- for (i = 0; i < submit->nr_bos; i++)
- snapshot_buf(rd, submit, i, 0, 0, should_dump(submit, i));
+ for (i = 0; i < submit->nr_bos; i++) {
+ struct drm_gem_object *obj = submit->bos[i].obj;
+ bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+ snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+ }
for (i = 0; i < submit->nr_cmds; i++) {
uint32_t szd = submit->cmd[i].size; /* in dwords */
+ int idx = submit->cmd[i].idx;
+ bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
/* snapshot cmdstream bo's (if we haven't already): */
- if (!should_dump(submit, i)) {
- snapshot_buf(rd, submit, submit->cmd[i].idx,
- submit->cmd[i].iova, szd * 4, true);
+ if (!dump) {
+ struct drm_gem_object *obj = submit->bos[idx].obj;
+ size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+
+ snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+ offset, szd * 4);
}
}
--
2.48.1
* [PATCH v2 32/34] drm/msm: Crashdec support for sparse
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (30 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 31/34] drm/msm: rd dumping " Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 33/34] drm/msm: rd dumping " Rob Clark
2025-03-19 14:52 ` [PATCH v2 34/34] drm/msm: Bump UAPI version Rob Clark
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
In this case, we need to iterate the VMAs looking for ones with the
MSM_VMA_DUMP flag set.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
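For readability, the core of the new crashstate_get_bos() below, in
isolation (a simplified sketch; the legacy BO-table path and allocation
failure handling are trimmed). Note that the VM lock is held across
both walks, so the VMA count used to size the allocation cannot change
before the second pass:

    struct drm_gpuva *vma;
    unsigned cnt = 0;

    mutex_lock(&to_msm_vm(submit->vm)->vm_lock);

    drm_gpuvm_for_each_va (vma, submit->vm)
            cnt++;

    state->bos = kcalloc(cnt, sizeof(struct msm_gpu_state_bo), GFP_KERNEL);

    drm_gpuvm_for_each_va (vma, submit->vm) {
            bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);

            /* MAP_NULL/PRR VMAs have no backing GEM object to dump */
            if (!vma->gem.obj)
                    continue;

            msm_gpu_crashstate_get_bo(state, vma->gem.obj, vma->va.addr,
                                      dump, vma->gem.offset, vma->va.range);
    }

    mutex_unlock(&to_msm_vm(submit->vm)->vm_lock);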
drivers/gpu/drm/msm/msm_gpu.c | 73 +++++++++++++++++++++++++----------
1 file changed, 52 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index e35125d88466..aca943dc0cd7 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -257,6 +257,50 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
state->nr_bos++;
}
+static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submit *submit)
+{
+ extern bool rd_full;
+
+ if (!submit)
+ return;
+
+ if (msm_context_is_vmbind(submit->queue->ctx)) {
+ struct drm_gpuva *vma;
+ unsigned cnt = 0;
+
+ mutex_lock(&to_msm_vm(submit->vm)->vm_lock);
+
+ drm_gpuvm_for_each_va (vma, submit->vm)
+ cnt++;
+
+ state->bos = kcalloc(cnt, sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+ drm_gpuvm_for_each_va (vma, submit->vm) {
+ bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
+
+ /* Skip MAP_NULL/PRR VMAs: */
+ if (!vma->gem.obj)
+ continue;
+
+ msm_gpu_crashstate_get_bo(state, vma->gem.obj, vma->va.addr,
+ dump, vma->gem.offset, vma->va.range);
+ }
+
+ mutex_unlock(&to_msm_vm(submit->vm)->vm_lock);
+ } else {
+ state->bos = kcalloc(submit->nr_bos,
+ sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
+
+ for (int i = 0; state->bos && i < submit->nr_bos; i++) {
+ struct drm_gem_object *obj = submit->bos[i].obj;
+ bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+ msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+ dump, 0, obj->size);
+ }
+ }
+}
+
static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
struct msm_gem_submit *submit, char *comm, char *cmd)
{
@@ -279,30 +323,17 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
state->cmd = kstrdup(cmd, GFP_KERNEL);
state->fault_info = gpu->fault_info;
- if (submit) {
- extern bool rd_full;
- int i;
-
- if (state->fault_info.ttbr0) {
- struct msm_gpu_fault_info *info = &state->fault_info;
- struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
-
- msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
- &info->asid);
- msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
- }
+ if (submit && state->fault_info.ttbr0) {
+ struct msm_gpu_fault_info *info = &state->fault_info;
+ struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu;
- state->bos = kcalloc(submit->nr_bos,
- sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
-
- for (i = 0; state->bos && i < submit->nr_bos; i++) {
- struct drm_gem_object *obj = submit->bos[i].obj;
- bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
- msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
- dump, 0, obj->size);
- }
+ msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0,
+ &info->asid);
+ msm_iommu_pagetable_walk(mmu, info->iova, info->ptes);
}
+ crashstate_get_bos(state, submit);
+
/* Set the active crash state to be dumped on failure */
gpu->crashstate = state;
--
2.48.1
* [PATCH v2 33/34] drm/msm: rd dumping support for sparse
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (31 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 32/34] drm/msm: Crashdec support for sparse Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 34/34] drm/msm: Bump UAPI version Rob Clark
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
As with devcoredump, we need to iterate the VMAs to figure out what to
dump.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_rd.c | 48 +++++++++++++++++++++++++-----------
1 file changed, 33 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index edbcb93410a9..1876b789c924 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -372,25 +372,43 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
- for (i = 0; i < submit->nr_bos; i++) {
- struct drm_gem_object *obj = submit->bos[i].obj;
- bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+ if (msm_context_is_vmbind(submit->queue->ctx)) {
+ struct drm_gpuva *vma;
- snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
- }
+ mutex_lock(&to_msm_vm(submit->vm)->vm_lock);
+ drm_gpuvm_for_each_va (vma, submit->vm) {
+ bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
- for (i = 0; i < submit->nr_cmds; i++) {
- uint32_t szd = submit->cmd[i].size; /* in dwords */
- int idx = submit->cmd[i].idx;
- bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
+ /* Skip MAP_NULL/PRR VMAs: */
+ if (!vma->gem.obj)
+ continue;
+
+ snapshot_buf(rd, vma->gem.obj, vma->va.addr, dump,
+ vma->gem.offset, vma->va.range);
+ }
+ mutex_unlock(&to_msm_vm(submit->vm)->vm_lock);
+
+ } else {
+ for (i = 0; i < submit->nr_bos; i++) {
+ struct drm_gem_object *obj = submit->bos[i].obj;
+ bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+ snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+ }
+
+ for (i = 0; i < submit->nr_cmds; i++) {
+ uint32_t szd = submit->cmd[i].size; /* in dwords */
+ int idx = submit->cmd[i].idx;
+ bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
- /* snapshot cmdstream bo's (if we haven't already): */
- if (!dump) {
- struct drm_gem_object *obj = submit->bos[idx].obj;
- size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+ /* snapshot cmdstream bo's (if we haven't already): */
+ if (!dump) {
+ struct drm_gem_object *obj = submit->bos[idx].obj;
+ size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
- snapshot_buf(rd, obj, submit->cmd[i].iova, true,
- offset, szd * 4);
+ snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+ offset, szd * 4);
+ }
}
}
--
2.48.1
* [PATCH v2 34/34] drm/msm: Bump UAPI version
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
` (32 preceding siblings ...)
2025-03-19 14:52 ` [PATCH v2 33/34] drm/msm: rd dumping " Rob Clark
@ 2025-03-19 14:52 ` Rob Clark
33 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 14:52 UTC (permalink / raw)
To: dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, open list
From: Rob Clark <robdclark@chromium.org>
Bump version to signal to userspace that VM_BIND is supported.
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
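Not part of the kernel change, but for illustration, userspace could
gate VM_BIND usage on this bump roughly like so (a sketch using
libdrm's drmGetVersion(); msm_has_vm_bind() is a hypothetical helper,
and fd is assumed to be an already-open msm render node):

    #include <stdbool.h>
    #include <xf86drm.h>

    /* VM_BIND is advertised by msm >= v1.13 */
    static bool msm_has_vm_bind(int fd)
    {
            drmVersionPtr v = drmGetVersion(fd);
            bool has;

            if (!v)
                    return false;

            has = v->version_major > 1 ||
                  (v->version_major == 1 && v->version_minor >= 13);
            drmFreeVersion(v);
            return has;
    }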
drivers/gpu/drm/msm/msm_drv.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 70c3a3712a3e..ee5a1e3d5f3b 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -41,9 +41,10 @@
* - 1.10.0 - Add MSM_SUBMIT_BO_NO_IMPLICIT
* - 1.11.0 - Add wait boost (MSM_WAIT_FENCE_BOOST, MSM_PREP_BOOST)
* - 1.12.0 - Add MSM_INFO_SET_METADATA and MSM_INFO_GET_METADATA
+ * - 1.13.0 - Add VM_BIND
*/
#define MSM_VERSION_MAJOR 1
-#define MSM_VERSION_MINOR 12
+#define MSM_VERSION_MINOR 13
#define MSM_VERSION_PATCHLEVEL 0
bool dumpstate;
--
2.48.1
* Re: [PATCH v2 16/34] drm/msm: Mark VM as unusable on faults
2025-03-19 14:52 ` [PATCH v2 16/34] drm/msm: Mark VM as unusable on faults Rob Clark
@ 2025-03-19 16:15 ` Connor Abbott
2025-03-19 21:31 ` Rob Clark
0 siblings, 1 reply; 45+ messages in thread
From: Connor Abbott @ 2025-03-19 16:15 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, freedreno, linux-arm-msm, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, Konrad Dybcio, open list
On Wed, Mar 19, 2025 at 10:55 AM Rob Clark <robdclark@gmail.com> wrote:
>
> From: Rob Clark <robdclark@chromium.org>
>
> If userspace has opted-in to VM_BIND, then GPU faults and VM_BIND errors
> will mark the VM as unusable.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/msm_gem.h | 17 +++++++++++++++++
> drivers/gpu/drm/msm/msm_gem_submit.c | 3 +++
> drivers/gpu/drm/msm/msm_gpu.c | 16 ++++++++++++++--
> 3 files changed, 34 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> index acb976722580..7cb720137548 100644
> --- a/drivers/gpu/drm/msm/msm_gem.h
> +++ b/drivers/gpu/drm/msm/msm_gem.h
> @@ -82,6 +82,23 @@ struct msm_gem_vm {
>
> /** @managed: is this a kernel managed VM? */
> bool managed;
> +
> + /**
> + * @unusable: True if the VM has turned unusable because something
> + * bad happened during an asynchronous request.
> + *
> + * We don't try to recover from such failures, because this implies
> + * informing userspace about the specific operation that failed, and
> + * hoping the userspace driver can replay things from there. This all
> + * sounds very complicated for little gain.
> + *
> + * Instead, we should just flag the VM as unusable, and fail any
> + * further request targeting this VM.
> + *
> + * As an analogy, this would be mapped to a VK_ERROR_DEVICE_LOST
> + * situation, where the logical device needs to be re-created.
> + */
> + bool unusable;
> };
> #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
>
> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> index 9731ad7993cf..9cef308a0ad1 100644
> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> @@ -668,6 +668,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
> if (args->pad)
> return -EINVAL;
>
> + if (to_msm_vm(ctx->vm)->unusable)
> + return UERR(EPIPE, dev, "context is unusable");
> +
> /* for now, we just have 3d pipe.. eventually this would need to
> * be more clever to dispatch to appropriate gpu module:
> */
> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> index 503e4dcc5a6f..4831f4e42fd9 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.c
> +++ b/drivers/gpu/drm/msm/msm_gpu.c
> @@ -386,8 +386,20 @@ static void recover_worker(struct kthread_work *work)
>
> /* Increment the fault counts */
> submit->queue->faults++;
> - if (submit->vm)
> - to_msm_vm(submit->vm)->faults++;
> + if (submit->vm) {
> + struct msm_gem_vm *vm = to_msm_vm(submit->vm);
> +
> + vm->faults++;
> +
> + /*
> + * If userspace has opted-in to VM_BIND (and therefore userspace
> > + * management of the VM), faults mark the VM as unusable. This
> > + * matches Vulkan expectations (Vulkan is the main target for
> + * VM_BIND)
The bit about this matching Vulkan expectations isn't exactly true.
Some Vulkan implementations do this, but many will just ignore the
fault and keep going, and the spec allows either. It's
a choice that we're making.
Connor
> + */
> + if (!vm->managed)
> + vm->unusable = true;
> + }
>
> get_comm_cmdline(submit, &comm, &cmd);
>
> --
> 2.48.1
>
* Re: [PATCH v2 16/34] drm/msm: Mark VM as unusable on faults
2025-03-19 16:15 ` Connor Abbott
@ 2025-03-19 21:31 ` Rob Clark
0 siblings, 0 replies; 45+ messages in thread
From: Rob Clark @ 2025-03-19 21:31 UTC (permalink / raw)
To: Connor Abbott
Cc: dri-devel, freedreno, linux-arm-msm, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, Konrad Dybcio, open list
On Wed, Mar 19, 2025 at 9:15 AM Connor Abbott <cwabbott0@gmail.com> wrote:
>
> On Wed, Mar 19, 2025 at 10:55 AM Rob Clark <robdclark@gmail.com> wrote:
> >
> > From: Rob Clark <robdclark@chromium.org>
> >
> > If userspace has opted-in to VM_BIND, then GPU faults and VM_BIND errors
> > will mark the VM as unusable.
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > ---
> > drivers/gpu/drm/msm/msm_gem.h | 17 +++++++++++++++++
> > drivers/gpu/drm/msm/msm_gem_submit.c | 3 +++
> > drivers/gpu/drm/msm/msm_gpu.c | 16 ++++++++++++++--
> > 3 files changed, 34 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> > index acb976722580..7cb720137548 100644
> > --- a/drivers/gpu/drm/msm/msm_gem.h
> > +++ b/drivers/gpu/drm/msm/msm_gem.h
> > @@ -82,6 +82,23 @@ struct msm_gem_vm {
> >
> > /** @managed: is this a kernel managed VM? */
> > bool managed;
> > +
> > + /**
> > + * @unusable: True if the VM has turned unusable because something
> > + * bad happened during an asynchronous request.
> > + *
> > + * We don't try to recover from such failures, because this implies
> > + * informing userspace about the specific operation that failed, and
> > + * hoping the userspace driver can replay things from there. This all
> > + * sounds very complicated for little gain.
> > + *
> > + * Instead, we should just flag the VM as unusable, and fail any
> > + * further request targeting this VM.
> > + *
> > + * As an analogy, this would be mapped to a VK_ERROR_DEVICE_LOST
> > + * situation, where the logical device needs to be re-created.
> > + */
> > + bool unusable;
> > };
> > #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> > index 9731ad7993cf..9cef308a0ad1 100644
> > --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> > +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> > @@ -668,6 +668,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
> > if (args->pad)
> > return -EINVAL;
> >
> > + if (to_msm_vm(ctx->vm)->unusable)
> > + return UERR(EPIPE, dev, "context is unusable");
> > +
> > /* for now, we just have 3d pipe.. eventually this would need to
> > * be more clever to dispatch to appropriate gpu module:
> > */
> > diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> > index 503e4dcc5a6f..4831f4e42fd9 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu.c
> > +++ b/drivers/gpu/drm/msm/msm_gpu.c
> > @@ -386,8 +386,20 @@ static void recover_worker(struct kthread_work *work)
> >
> > /* Increment the fault counts */
> > submit->queue->faults++;
> > - if (submit->vm)
> > - to_msm_vm(submit->vm)->faults++;
> > + if (submit->vm) {
> > + struct msm_gem_vm *vm = to_msm_vm(submit->vm);
> > +
> > + vm->faults++;
> > +
> > + /*
> > + * If userspace has opted-in to VM_BIND (and therefore userspace
> > > + * management of the VM), faults mark the VM as unusable. This
> > > + * matches Vulkan expectations (Vulkan is the main target for
> > + * VM_BIND)
>
> The bit about this matching Vulkan expectations isn't exactly true.
> Some Vulkan implementations do do this, but many will also just ignore
> the fault and try to continue going, and the spec allows either. It's
> a choice that we're making.
As mentioned on IRC, this is actually about GPU hangs rather than SMMU
faults. I guess the $subject is a bit misleading.
BR,
-R
> Connor
>
> > + */
> > + if (!vm->managed)
> > + vm->unusable = true;
> > + }
> >
> > get_comm_cmdline(submit, &comm, &cmd);
> >
> > --
> > 2.48.1
> >
* Re: [PATCH v2 08/34] drm/msm: Remove vram carveout support
2025-03-19 14:52 ` [PATCH v2 08/34] drm/msm: Remove vram carveout support Rob Clark
@ 2025-04-16 17:18 ` Akhil P Oommen
2025-04-16 23:20 ` Dmitry Baryshkov
1 sibling, 0 replies; 45+ messages in thread
From: Akhil P Oommen @ 2025-04-16 17:18 UTC (permalink / raw)
To: Rob Clark, dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Simona Vetter, open list
On 3/19/2025 8:22 PM, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> It is standing in the way of drm_gpuvm / VM_BIND support. Not to
> mention it is frequently broken and rarely tested, and I think it is
> only needed for a 10-year-old, not-quite-upstream SoC (msm8974).
>
> Maybe we can add support back in later, but I'm doubtful.
There is also a leftover "VRAM carveout" reference in a debug print in
msm_gpu.c that needs a small cleanup.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +-
> drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 13 +-
> drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 13 +-
> drivers/gpu/drm/msm/adreno/adreno_device.c | 4 -
> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 -
> drivers/gpu/drm/msm/msm_drv.c | 117 +-----------------
> drivers/gpu/drm/msm/msm_drv.h | 11 --
> drivers/gpu/drm/msm/msm_gem.c | 131 ++-------------------
> drivers/gpu/drm/msm/msm_gem.h | 5 -
> drivers/gpu/drm/msm/msm_gem_submit.c | 5 -
> 10 files changed, 19 insertions(+), 287 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
> index 5eb063ed0b46..db1aa281ce47 100644
> --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
> @@ -553,10 +553,8 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev)
>
> if (!gpu->vm) {
> dev_err(dev->dev, "No memory protection without MMU\n");
> - if (!allow_vram_carveout) {
> - ret = -ENXIO;
> - goto fail;
> - }
> + ret = -ENXIO;
> + goto fail;
> }
>
> return gpu;
> diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
> index 434e6ededf83..49ba1ce77144 100644
> --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c
> @@ -582,18 +582,9 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev)
> }
>
> if (!gpu->vm) {
> - /* TODO we think it is possible to configure the GPU to
> - * restrict access to VRAM carveout. But the required
> - * registers are unknown. For now just bail out and
> - * limp along with just modesetting. If it turns out
> - * to not be possible to restrict access, then we must
> - * implement a cmdstream validator.
> - */
> DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
> - if (!allow_vram_carveout) {
> - ret = -ENXIO;
> - goto fail;
> - }
> + ret = -ENXIO;
> + goto fail;
> }
>
> icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
> diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
> index 2c75debcfd84..4faf8570aec7 100644
> --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c
> @@ -696,18 +696,9 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev)
> adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull;
>
> if (!gpu->vm) {
Is this check still required?
-Akhil
> - /* TODO we think it is possible to configure the GPU to
> - * restrict access to VRAM carveout. But the required
> - * registers are unknown. For now just bail out and
> - * limp along with just modesetting. If it turns out
> - * to not be possible to restrict access, then we must
> - * implement a cmdstream validator.
> - */
> DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n");
> - if (!allow_vram_carveout) {
> - ret = -ENXIO;
> - goto fail;
> - }
> + ret = -ENXIO;
> + goto fail;
> }
>
> icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem");
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
> index f4552b8c6767..6b0390c38bff 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_device.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
> @@ -16,10 +16,6 @@ bool snapshot_debugbus = false;
> MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)");
> module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600);
>
> -bool allow_vram_carveout = false;
> -MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU");
> -module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600);
> -
> int enable_preemption = -1;
> MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on , 0=disable, -1=auto (default))");
> module_param(enable_preemption, int, 0600);
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> index eaebcb108b5e..7dbe09817edc 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
> @@ -18,7 +18,6 @@
> #include "adreno_pm4.xml.h"
>
> extern bool snapshot_debugbus;
> -extern bool allow_vram_carveout;
>
> enum {
> ADRENO_FW_PM4 = 0,
> diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> index 903abf3532e0..978f1d355b42 100644
> --- a/drivers/gpu/drm/msm/msm_drv.c
> +++ b/drivers/gpu/drm/msm/msm_drv.c
> @@ -46,12 +46,6 @@
> #define MSM_VERSION_MINOR 12
> #define MSM_VERSION_PATCHLEVEL 0
>
> -static void msm_deinit_vram(struct drm_device *ddev);
> -
> -static char *vram = "16m";
> -MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without IOMMU/GPUMMU)");
> -module_param(vram, charp, 0);
> -
> bool dumpstate;
> MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors");
> module_param(dumpstate, bool, 0600);
> @@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev)
> if (priv->kms)
> msm_drm_kms_uninit(dev);
>
> - msm_deinit_vram(ddev);
> -
> component_unbind_all(dev, ddev);
>
> ddev->dev_private = NULL;
> @@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev)
> return 0;
> }
>
> -bool msm_use_mmu(struct drm_device *dev)
> -{
> - struct msm_drm_private *priv = dev->dev_private;
> -
> - /*
> - * a2xx comes with its own MMU
> - * On other platforms IOMMU can be declared specified either for the
> - * MDP/DPU device or for its parent, MDSS device.
> - */
> - return priv->is_a2xx ||
> - device_iommu_mapped(dev->dev) ||
> - device_iommu_mapped(dev->dev->parent);
> -}
> -
> -static int msm_init_vram(struct drm_device *dev)
> -{
> - struct msm_drm_private *priv = dev->dev_private;
> - struct device_node *node;
> - unsigned long size = 0;
> - int ret = 0;
> -
> - /* In the device-tree world, we could have a 'memory-region'
> - * phandle, which gives us a link to our "vram". Allocating
> - * is all nicely abstracted behind the dma api, but we need
> - * to know the entire size to allocate it all in one go. There
> - * are two cases:
> - * 1) device with no IOMMU, in which case we need exclusive
> - * access to a VRAM carveout big enough for all gpu
> - * buffers
> - * 2) device with IOMMU, but where the bootloader puts up
> - * a splash screen. In this case, the VRAM carveout
> - * need only be large enough for fbdev fb. But we need
> - * exclusive access to the buffer to avoid the kernel
> - * using those pages for other purposes (which appears
> - * as corruption on screen before we have a chance to
> - * load and do initial modeset)
> - */
> -
> - node = of_parse_phandle(dev->dev->of_node, "memory-region", 0);
> - if (node) {
> - struct resource r;
> - ret = of_address_to_resource(node, 0, &r);
> - of_node_put(node);
> - if (ret)
> - return ret;
> - size = r.end - r.start + 1;
> - DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start);
> -
> - /* if we have no IOMMU, then we need to use carveout allocator.
> - * Grab the entire DMA chunk carved out in early startup in
> - * mach-msm:
> - */
> - } else if (!msm_use_mmu(dev)) {
> - DRM_INFO("using %s VRAM carveout\n", vram);
> - size = memparse(vram, NULL);
> - }
> -
> - if (size) {
> - unsigned long attrs = 0;
> - void *p;
> -
> - priv->vram.size = size;
> -
> - drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1);
> - spin_lock_init(&priv->vram.lock);
> -
> - attrs |= DMA_ATTR_NO_KERNEL_MAPPING;
> - attrs |= DMA_ATTR_WRITE_COMBINE;
> -
> - /* note that for no-kernel-mapping, the vaddr returned
> - * is bogus, but non-null if allocation succeeded:
> - */
> - p = dma_alloc_attrs(dev->dev, size,
> - &priv->vram.paddr, GFP_KERNEL, attrs);
> - if (!p) {
> - DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n");
> - priv->vram.paddr = 0;
> - return -ENOMEM;
> - }
> -
> - DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n",
> - (uint32_t)priv->vram.paddr,
> - (uint32_t)(priv->vram.paddr + size));
> - }
> -
> - return ret;
> -}
> -
> -static void msm_deinit_vram(struct drm_device *ddev)
> -{
> - struct msm_drm_private *priv = ddev->dev_private;
> - unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING;
> -
> - if (!priv->vram.paddr)
> - return;
> -
> - drm_mm_takedown(&priv->vram.mm);
> - dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr,
> - attrs);
> -}
> -
> static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
> {
> struct msm_drm_private *priv = dev_get_drvdata(dev);
> @@ -256,16 +147,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
> goto err_destroy_wq;
> }
>
> - ret = msm_init_vram(ddev);
> - if (ret)
> - goto err_destroy_wq;
> -
> dma_set_max_seg_size(dev, UINT_MAX);
>
> /* Bind all our sub-components: */
> ret = component_bind_all(dev, ddev);
> if (ret)
> - goto err_deinit_vram;
> + goto err_destroy_wq;
>
> ret = msm_gem_shrinker_init(ddev);
> if (ret)
> @@ -302,8 +189,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
>
> return ret;
>
> -err_deinit_vram:
> - msm_deinit_vram(ddev);
> err_destroy_wq:
> destroy_workqueue(priv->wq);
> err_put_dev:
> diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
> index 0e675c9a7f83..ad509403f072 100644
> --- a/drivers/gpu/drm/msm/msm_drv.h
> +++ b/drivers/gpu/drm/msm/msm_drv.h
> @@ -183,17 +183,6 @@ struct msm_drm_private {
>
> struct msm_drm_thread event_thread[MAX_CRTCS];
>
> - /* VRAM carveout, used when no IOMMU: */
> - struct {
> - unsigned long size;
> - dma_addr_t paddr;
> - /* NOTE: mm managed at the page level, size is in # of pages
> - * and position mm_node->start is in # of pages:
> - */
> - struct drm_mm mm;
> - spinlock_t lock; /* Protects drm_mm node allocation/removal */
> - } vram;
> -
> struct notifier_block vmap_notifier;
> struct shrinker *shrinker;
>
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index 07a30d29248c..621fb4e17a2e 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -17,24 +17,8 @@
> #include <trace/events/gpu_mem.h>
>
> #include "msm_drv.h"
> -#include "msm_fence.h"
> #include "msm_gem.h"
> #include "msm_gpu.h"
> -#include "msm_mmu.h"
> -
> -static dma_addr_t physaddr(struct drm_gem_object *obj)
> -{
> - struct msm_gem_object *msm_obj = to_msm_bo(obj);
> - struct msm_drm_private *priv = obj->dev->dev_private;
> - return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) +
> - priv->vram.paddr;
> -}
> -
> -static bool use_pages(struct drm_gem_object *obj)
> -{
> - struct msm_gem_object *msm_obj = to_msm_bo(obj);
> - return !msm_obj->vram_node;
> -}
>
> static int pgprot = 0;
> module_param(pgprot, int, 0600);
> @@ -139,36 +123,6 @@ static void update_lru(struct drm_gem_object *obj)
> mutex_unlock(&priv->lru.lock);
> }
>
> -/* allocate pages from VRAM carveout, used when no IOMMU: */
> -static struct page **get_pages_vram(struct drm_gem_object *obj, int npages)
> -{
> - struct msm_gem_object *msm_obj = to_msm_bo(obj);
> - struct msm_drm_private *priv = obj->dev->dev_private;
> - dma_addr_t paddr;
> - struct page **p;
> - int ret, i;
> -
> - p = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> - if (!p)
> - return ERR_PTR(-ENOMEM);
> -
> - spin_lock(&priv->vram.lock);
> - ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages);
> - spin_unlock(&priv->vram.lock);
> - if (ret) {
> - kvfree(p);
> - return ERR_PTR(ret);
> - }
> -
> - paddr = physaddr(obj);
> - for (i = 0; i < npages; i++) {
> - p[i] = pfn_to_page(__phys_to_pfn(paddr));
> - paddr += PAGE_SIZE;
> - }
> -
> - return p;
> -}
> -
> static struct page **get_pages(struct drm_gem_object *obj)
> {
> struct msm_gem_object *msm_obj = to_msm_bo(obj);
> @@ -180,10 +134,7 @@ static struct page **get_pages(struct drm_gem_object *obj)
> struct page **p;
> int npages = obj->size >> PAGE_SHIFT;
>
> - if (use_pages(obj))
> - p = drm_gem_get_pages(obj);
> - else
> - p = get_pages_vram(obj, npages);
> + p = drm_gem_get_pages(obj);
>
> if (IS_ERR(p)) {
> DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n",
> @@ -216,18 +167,6 @@ static struct page **get_pages(struct drm_gem_object *obj)
> return msm_obj->pages;
> }
>
> -static void put_pages_vram(struct drm_gem_object *obj)
> -{
> - struct msm_gem_object *msm_obj = to_msm_bo(obj);
> - struct msm_drm_private *priv = obj->dev->dev_private;
> -
> - spin_lock(&priv->vram.lock);
> - drm_mm_remove_node(msm_obj->vram_node);
> - spin_unlock(&priv->vram.lock);
> -
> - kvfree(msm_obj->pages);
> -}
> -
> static void put_pages(struct drm_gem_object *obj)
> {
> struct msm_gem_object *msm_obj = to_msm_bo(obj);
> @@ -248,10 +187,7 @@ static void put_pages(struct drm_gem_object *obj)
>
> update_device_mem(obj->dev->dev_private, -obj->size);
>
> - if (use_pages(obj))
> - drm_gem_put_pages(obj, msm_obj->pages, true, false);
> - else
> - put_pages_vram(obj);
> + drm_gem_put_pages(obj, msm_obj->pages, true, false);
>
> msm_obj->pages = NULL;
> update_lru(obj);
> @@ -1215,19 +1151,10 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32
> struct msm_drm_private *priv = dev->dev_private;
> struct msm_gem_object *msm_obj;
> struct drm_gem_object *obj = NULL;
> - bool use_vram = false;
> int ret;
>
> size = PAGE_ALIGN(size);
>
> - if (!msm_use_mmu(dev))
> - use_vram = true;
> - else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size)
> - use_vram = true;
> -
> - if (GEM_WARN_ON(use_vram && !priv->vram.size))
> - return ERR_PTR(-EINVAL);
> -
> /* Disallow zero sized objects as they make the underlying
> * infrastructure grumpy
> */
> @@ -1240,44 +1167,16 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32
>
> msm_obj = to_msm_bo(obj);
>
> - if (use_vram) {
> - struct msm_gem_vma *vma;
> - struct page **pages;
> -
> - drm_gem_private_object_init(dev, obj, size);
> -
> - msm_gem_lock(obj);
> -
> - vma = add_vma(obj, NULL);
> - msm_gem_unlock(obj);
> - if (IS_ERR(vma)) {
> - ret = PTR_ERR(vma);
> - goto fail;
> - }
> -
> - to_msm_bo(obj)->vram_node = &vma->node;
> -
> - msm_gem_lock(obj);
> - pages = get_pages(obj);
> - msm_gem_unlock(obj);
> - if (IS_ERR(pages)) {
> - ret = PTR_ERR(pages);
> - goto fail;
> - }
> -
> - vma->iova = physaddr(obj);
> - } else {
> - ret = drm_gem_object_init(dev, obj, size);
> - if (ret)
> - goto fail;
> - /*
> - * Our buffers are kept pinned, so allocating them from the
> - * MOVABLE zone is a really bad idea, and conflicts with CMA.
> - * See comments above new_inode() why this is required _and_
> - * expected if you're going to pin these pages.
> - */
> - mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
> - }
> + ret = drm_gem_object_init(dev, obj, size);
> + if (ret)
> + goto fail;
> + /*
> + * Our buffers are kept pinned, so allocating them from the
> + * MOVABLE zone is a really bad idea, and conflicts with CMA.
> + * See comments above new_inode() why this is required _and_
> + * expected if you're going to pin these pages.
> + */
> + mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
>
> drm_gem_lru_move_tail(&priv->lru.unbacked, obj);
>
> @@ -1305,12 +1204,6 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
> uint32_t size;
> int ret, npages;
>
> - /* if we don't have IOMMU, don't bother pretending we can import: */
> - if (!msm_use_mmu(dev)) {
> - DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n");
> - return ERR_PTR(-EINVAL);
> - }
> -
> size = PAGE_ALIGN(dmabuf->size);
>
> ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj);
> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> index d2f39a371373..c16b11182831 100644
> --- a/drivers/gpu/drm/msm/msm_gem.h
> +++ b/drivers/gpu/drm/msm/msm_gem.h
> @@ -102,11 +102,6 @@ struct msm_gem_object {
>
> struct list_head vmas; /* list of msm_gem_vma */
>
> - /* For physically contiguous buffers. Used when we don't have
> - * an IOMMU. Also used for stolen/splashscreen buffer.
> - */
> - struct drm_mm_node *vram_node;
> -
> char name[32]; /* Identifier to print for the debugfs files */
>
> /* userspace metadata backchannel */
> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> index 95da4714fffb..a186b7dfea35 100644
> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> @@ -659,11 +659,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
> if (args->pad)
> return -EINVAL;
>
> - if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) {
> - DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n");
> - return -EPERM;
> - }
> -
> /* for now, we just have 3d pipe.. eventually this would need to
> * be more clever to dispatch to appropriate gpu module:
> */
* Re: [PATCH v2 11/34] drm/msm: drm_gpuvm conversion
2025-03-19 14:52 ` [PATCH v2 11/34] drm/msm: drm_gpuvm conversion Rob Clark
@ 2025-04-16 17:20 ` Akhil P Oommen
0 siblings, 0 replies; 45+ messages in thread
From: Akhil P Oommen @ 2025-04-16 17:20 UTC (permalink / raw)
To: Rob Clark, dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie,
Simona Vetter, Konrad Dybcio, open list
On 3/19/2025 8:22 PM, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Now that we've realigned deletion and allocation, switch over to using
> drm_gpuvm/drm_gpuva. This allows us to support multiple VMAs per BO per
> VM, to allow mapping different parts of a single BO at different virtual
> addresses, which is a key requirement for sparse/VM_BIND.
>
> This prepares us for using drm_gpuvm to translate a batch of MAP/
> MAP_NULL/UNMAP operations from userspace into a sequence of map/remap/
> unmap steps for updating the page tables.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/Kconfig | 1 +
> drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 3 +-
> drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +-
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 +-
> drivers/gpu/drm/msm/adreno/adreno_gpu.c | 7 +-
> drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 5 +-
> drivers/gpu/drm/msm/msm_drv.c | 1 +
> drivers/gpu/drm/msm/msm_gem.c | 142 ++++++++++++++---------
> drivers/gpu/drm/msm/msm_gem.h | 87 +++++++++++---
> drivers/gpu/drm/msm/msm_gem_submit.c | 2 +-
> drivers/gpu/drm/msm/msm_gem_vma.c | 139 +++++++++++++++-------
> drivers/gpu/drm/msm/msm_kms.c | 4 +-
> 12 files changed, 268 insertions(+), 134 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
> index 974bc7c0ea76..4af7e896c1d4 100644
> --- a/drivers/gpu/drm/msm/Kconfig
> +++ b/drivers/gpu/drm/msm/Kconfig
> @@ -21,6 +21,7 @@ config DRM_MSM
> select DRM_DISPLAY_HELPER
> select DRM_BRIDGE_CONNECTOR
> select DRM_EXEC
> + select DRM_GPUVM
> select DRM_KMS_HELPER
> select DRM_PANEL
> select DRM_BRIDGE
> diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
> index db1aa281ce47..94c49ed057cd 100644
> --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
> @@ -472,8 +472,7 @@ a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
> struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu);
> struct msm_gem_vm *vm;
>
> - vm = msm_gem_vm_create(mmu, "gpu", SZ_16M,
> - 0xfff * SZ_64K);
> + vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true);
>
> if (IS_ERR(vm) && !IS_ERR(mmu))
> mmu->funcs->destroy(mmu);
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> index 4c459ae25cba..259a589a827d 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
> @@ -1311,7 +1311,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo,
> return 0;
> }
>
> -static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
> +static int a6xx_gmu_memory_probe(struct drm_device *drm, struct a6xx_gmu *gmu)
> {
> struct msm_mmu *mmu;
>
> @@ -1321,7 +1321,7 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu)
> if (IS_ERR(mmu))
> return PTR_ERR(mmu);
>
> - gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000);
> + gmu->vm = msm_gem_vm_create(drm, mmu, "gmu", 0x0, 0x80000000, true);
> if (IS_ERR(gmu->vm))
> return PTR_ERR(gmu->vm);
>
> @@ -1940,7 +1940,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
> if (ret)
> goto err_put_device;
>
> - ret = a6xx_gmu_memory_probe(gmu);
> + ret = a6xx_gmu_memory_probe(adreno_gpu->base.dev, gmu);
> if (ret)
> goto err_put_device;
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> index fa63149bf73f..a124249f7a1d 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> @@ -2271,9 +2271,8 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
> if (IS_ERR(mmu))
> return ERR_CAST(mmu);
>
> - return msm_gem_vm_create(mmu,
> - "gpu", 0x100000000ULL,
> - adreno_private_vm_size(gpu));
> + return msm_gem_vm_create(gpu->dev, mmu, "gpu", 0x100000000ULL,
> + adreno_private_vm_size(gpu), true);
> }
>
> static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index ffbbf3d5ce2f..0ba1819833ab 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -224,7 +224,8 @@ adreno_iommu_create_vm(struct msm_gpu *gpu,
> start = max_t(u64, SZ_16M, geometry->aperture_start);
> size = geometry->aperture_end - start + 1;
>
> - vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size);
> + vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", start & GENMASK_ULL(48, 0),
> + size, true);
>
> if (IS_ERR(vm) && !IS_ERR(mmu))
> mmu->funcs->destroy(mmu);
> @@ -403,12 +404,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
> case MSM_PARAM_VA_START:
> if (ctx->vm == gpu->vm)
> return UERR(EINVAL, drm, "requires per-process pgtables");
> - *value = ctx->vm->va_start;
> + *value = ctx->vm->base.mm_start;
> return 0;
> case MSM_PARAM_VA_SIZE:
> if (ctx->vm == gpu->vm)
> return UERR(EINVAL, drm, "requires per-process pgtables");
> - *value = ctx->vm->va_size;
> + *value = ctx->vm->base.mm_range;
> return 0;
> case MSM_PARAM_HIGHEST_BANK_BIT:
> *value = adreno_gpu->ubwc_config.highest_bank_bit;
> diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
> index 94fbc20b2fbd..d5b5628bee24 100644
> --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
> +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
> @@ -451,8 +451,9 @@ static int mdp4_kms_init(struct drm_device *dev)
> "contig buffers for scanout\n");
> vm = NULL;
> } else {
> - vm = msm_gem_vm_create(mmu,
> - "mdp4", 0x1000, 0x100000000 - 0x1000);
> + vm = msm_gem_vm_create(dev, mmu, "mdp4",
> + 0x1000, 0x100000000 - 0x1000,
> + true);
>
> if (IS_ERR(vm)) {
> if (!IS_ERR(mmu))
> diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> index 978f1d355b42..6ef29bc48bb0 100644
> --- a/drivers/gpu/drm/msm/msm_drv.c
> +++ b/drivers/gpu/drm/msm/msm_drv.c
> @@ -776,6 +776,7 @@ static const struct file_operations fops = {
>
> static const struct drm_driver msm_driver = {
> .driver_features = DRIVER_GEM |
> + DRIVER_GEM_GPUVA |
> DRIVER_RENDER |
> DRIVER_ATOMIC |
> DRIVER_MODESET |
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index 4c10eca404e0..7901871c66cc 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -47,9 +47,32 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file)
> return 0;
> }
>
> +static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close);
> +
> static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
> {
> + struct msm_context *ctx = file->driver_priv;
> +
> update_ctx_mem(file, -obj->size);
> +
> + /*
> + * If VM isn't created yet, nothing to cleanup. And in fact calling
> + * put_iova_spaces() with vm=NULL would be bad, in that it will tear-
> + * down the mappings of shared buffers in other contexts.
> + */
> + if (!ctx->vm)
> + return;
> +
> + /*
> + * TODO we might need to kick this to a queue to avoid blocking
> + * in CLOSE ioctl
> + */
> + dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
> + msecs_to_jiffies(1000));
> +
> + msm_gem_lock(obj);
> + put_iova_spaces(obj, &ctx->vm->base, true);
> + msm_gem_unlock(obj);
> }
>
> /*
> @@ -171,6 +194,13 @@ static void put_pages(struct drm_gem_object *obj)
> {
> struct msm_gem_object *msm_obj = to_msm_bo(obj);
>
> + /*
> + * Skip gpuvm in the object free path to avoid a WARN_ON() splat.
> + * See explanation in msm_gem_assert_locked()
> + */
> + if (kref_read(&obj->refcount))
> + drm_gpuvm_bo_gem_evict(obj, true);
> +
> if (msm_obj->pages) {
> if (msm_obj->sgt) {
> /* For non-cached buffers, ensure the new
> @@ -338,16 +368,25 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
> }
>
> static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
> - struct msm_gem_vm *vm)
> + struct msm_gem_vm *vm)
> {
> - struct msm_gem_object *msm_obj = to_msm_bo(obj);
> - struct msm_gem_vma *vma;
> + struct drm_gpuvm_bo *vm_bo;
>
> msm_gem_assert_locked(obj);
>
> - list_for_each_entry(vma, &msm_obj->vmas, list) {
> - if (vma->vm == vm)
> - return vma;
> + drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
> + struct drm_gpuva *vma;
> +
> + drm_gpuvm_bo_for_each_va (vma, vm_bo) {
> + if (vma->vm == &vm->base) {
> + /* lookup_vma() should only be used in paths
> + * with at most one vma per vm
> + */
> + GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva));
> +
> + return to_msm_vma(vma);
> + }
> + }
> }
>
> return NULL;
> @@ -360,33 +399,29 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj,
> * mapping.
> */
> static void
> -put_iova_spaces(struct drm_gem_object *obj, bool close)
> +put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close)
> {
> - struct msm_gem_object *msm_obj = to_msm_bo(obj);
> - struct msm_gem_vma *vma, *tmp;
> + struct drm_gpuvm_bo *vm_bo, *tmp;
>
> msm_gem_assert_locked(obj);
>
> - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
> - if (vma->vm) {
> - msm_gem_vma_purge(vma);
> - if (close)
> - msm_gem_vma_close(vma);
> - }
> - }
> -}
> + drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) {
> + struct drm_gpuva *vma, *vmatmp;
>
> -/* Called with msm_obj locked */
> -static void
> -put_iova_vmas(struct drm_gem_object *obj)
> -{
> - struct msm_gem_object *msm_obj = to_msm_bo(obj);
> - struct msm_gem_vma *vma, *tmp;
> + if (vm && vm_bo->vm != vm)
> + continue;
>
> - msm_gem_assert_locked(obj);
> + drm_gpuvm_bo_get(vm_bo);
>
> - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) {
> - msm_gem_vma_close(vma);
> + drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) {
> + struct msm_gem_vma *msm_vma = to_msm_vma(vma);
> +
> + msm_gem_vma_purge(msm_vma);
> + if (close)
> + msm_gem_vma_close(msm_vma);
> + }
> +
> + drm_gpuvm_bo_put(vm_bo);
> }
> }
>
> @@ -394,7 +429,6 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
> struct msm_gem_vm *vm,
> u64 range_start, u64 range_end)
> {
> - struct msm_gem_object *msm_obj = to_msm_bo(obj);
> struct msm_gem_vma *vma;
>
> msm_gem_assert_locked(obj);
> @@ -403,12 +437,9 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj,
>
> if (!vma) {
> vma = msm_gem_vma_new(vm, obj, range_start, range_end);
> - if (IS_ERR(vma))
> - return vma;
> - list_add_tail(&vma->list, &msm_obj->vmas);
> } else {
> - GEM_WARN_ON(vma->iova < range_start);
> - GEM_WARN_ON((vma->iova + obj->size) > range_end);
> + GEM_WARN_ON(vma->base.va.addr < range_start);
> + GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end);
> }
>
> return vma;
> @@ -492,7 +523,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
>
> ret = msm_gem_pin_vma_locked(obj, vma);
> if (!ret) {
> - *iova = vma->iova;
> + *iova = vma->base.va.addr;
> pin_obj_locked(obj);
> }
>
> @@ -538,7 +569,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
> if (IS_ERR(vma)) {
> ret = PTR_ERR(vma);
> } else {
> - *iova = vma->iova;
> + *iova = vma->base.va.addr;
> }
> msm_gem_unlock(obj);
>
> @@ -579,7 +610,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj,
> vma = get_vma_locked(obj, vm, iova, iova + obj->size);
> if (IS_ERR(vma)) {
> ret = PTR_ERR(vma);
> - } else if (GEM_WARN_ON(vma->iova != iova)) {
> + } else if (GEM_WARN_ON(vma->base.va.addr != iova)) {
> clear_iova(obj, vm);
> ret = -EBUSY;
> }
> @@ -763,7 +794,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
> GEM_WARN_ON(!is_purgeable(msm_obj));
>
> /* Get rid of any iommu mapping(s): */
> - put_iova_spaces(obj, true);
> + put_iova_spaces(obj, NULL, true);
>
> msm_gem_vunmap(obj);
>
> @@ -771,8 +802,6 @@ void msm_gem_purge(struct drm_gem_object *obj)
>
> put_pages(obj);
>
> - put_iova_vmas(obj);
> -
> mutex_lock(&priv->lru.lock);
> /* A one-way transition: */
> msm_obj->madv = __MSM_MADV_PURGED;
> @@ -803,7 +832,7 @@ void msm_gem_evict(struct drm_gem_object *obj)
> GEM_WARN_ON(is_unevictable(msm_obj));
>
> /* Get rid of any iommu mapping(s): */
> - put_iova_spaces(obj, false);
> + put_iova_spaces(obj, NULL, false);
>
> drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
>
> @@ -869,7 +898,6 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
> {
> struct msm_gem_object *msm_obj = to_msm_bo(obj);
> struct dma_resv *robj = obj->resv;
> - struct msm_gem_vma *vma;
> uint64_t off = drm_vma_node_start(&obj->vma_node);
> const char *madv;
>
> @@ -912,14 +940,17 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
>
> seq_printf(m, " %08zu %9s %-32s\n", obj->size, madv, msm_obj->name);
>
> - if (!list_empty(&msm_obj->vmas)) {
> + if (!list_empty(&obj->gpuva.list)) {
> + struct drm_gpuvm_bo *vm_bo;
>
> seq_puts(m, " vmas:");
>
> - list_for_each_entry(vma, &msm_obj->vmas, list) {
> - const char *name, *comm;
> - if (vma->vm) {
> - struct msm_gem_vm *vm = vma->vm;
> + drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
> + struct drm_gpuva *vma;
> +
> + drm_gpuvm_bo_for_each_va (vma, vm_bo) {
> + const char *name, *comm;
> + struct msm_gem_vm *vm = to_msm_vm(vma->vm);
> struct task_struct *task =
> get_pid_task(vm->pid, PIDTYPE_PID);
> if (task) {
> @@ -928,15 +959,14 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
> } else {
> comm = NULL;
> }
> - name = vm->name;
> - } else {
> - name = comm = NULL;
> + name = vm->base.name;
> +
> + seq_printf(m, " [%s%s%s: vm=%p, %08llx,%smapped]",
> + name, comm ? ":" : "", comm ? comm : "",
> + vma->vm, vma->va.addr,
> + to_msm_vma(vma)->mapped ? "" : "un");
> + kfree(comm);
> }
> - seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]",
> - name, comm ? ":" : "", comm ? comm : "",
> - vma->vm, vma->iova,
> - vma->mapped ? "mapped" : "unmapped");
> - kfree(comm);
> }
>
> seq_puts(m, "\n");
> @@ -982,7 +1012,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
> list_del(&msm_obj->node);
> mutex_unlock(&priv->obj_lock);
>
> - put_iova_spaces(obj, true);
> + put_iova_spaces(obj, NULL, true);
>
> if (obj->import_attach) {
> GEM_WARN_ON(msm_obj->vaddr);
> @@ -992,13 +1022,10 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
> */
> kvfree(msm_obj->pages);
>
> - put_iova_vmas(obj);
> -
> drm_prime_gem_destroy(obj, msm_obj->sgt);
> } else {
> msm_gem_vunmap(obj);
> put_pages(obj);
> - put_iova_vmas(obj);
> }
>
> drm_gem_object_release(obj);
> @@ -1104,7 +1131,6 @@ static int msm_gem_new_impl(struct drm_device *dev,
> msm_obj->madv = MSM_MADV_WILLNEED;
>
> INIT_LIST_HEAD(&msm_obj->node);
> - INIT_LIST_HEAD(&msm_obj->vmas);
>
> *obj = &msm_obj->base;
> (*obj)->funcs = &msm_gem_object_funcs;
> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> index 9bd78642671c..5091892bbe2e 100644
> --- a/drivers/gpu/drm/msm/msm_gem.h
> +++ b/drivers/gpu/drm/msm/msm_gem.h
> @@ -10,6 +10,7 @@
> #include <linux/kref.h>
> #include <linux/dma-resv.h>
> #include "drm/drm_exec.h"
> +#include "drm/drm_gpuvm.h"
> #include "drm/gpu_scheduler.h"
> #include "msm_drv.h"
>
> @@ -22,30 +23,67 @@
> #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */
> #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */
>
> +/**
> + * struct msm_gem_vm - VM object
> + *
> + * A VM object representing a GPU (or display or GMU or ...) virtual address
> + * space.
> + *
> + * In the case of GPU, if per-process address spaces are supported, the address
> + * space is split into two VMs, which map to TTBR0 and TTBR1 in the SMMU. TTBR0
> + * is used for userspace objects, and is unique per msm_context/drm_file, while
> + * TTBR1 is the same for all processes. (The kernel controlled ringbuffer and
> + * a few other kernel controlled buffers live in TTBR1.)
> + *
> + * The GPU TTBR0 vm can be managed by userspace or by the kernel, depending on
> + * whether userspace supports VM_BIND. All other vm's are managed by the kernel.
> + * (Managed by kernel means the kernel is responsible for VA allocation.)
> + *
> + * Note that because VM_BIND allows a given BO to be mapped multiple times in
> + * a VM, and therefore have multiple VMA's in a VM, there is an extra object
> + * provided by drm_gpuvm infrastructure.. the drm_gpuvm_bo, which is not
> + * embedded in any larger driver structure. The GEM object holds a list of
> + * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. A linked vma
> + * holds a reference to the vm_bo, and drops it when the vma is unlinked.
> + * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an
> + * existing vm_bo, or create a new one. Once the vma is linked, the ref
> + * to the vm_bo can be dropped (since the vma is holding one).
> + */
> struct msm_gem_vm {
> - const char *name;
> - /* NOTE: mm managed at the page level, size is in # of pages
> - * and position mm_node->start is in # of pages:
> + /** @base: Inherit from drm_gpuvm. */
> + struct drm_gpuvm base;
> +
> + /**
> + * @mm: Memory management for kernel managed VA allocations
> + *
> + * Only used for kernel managed VMs, unused for user managed VMs.
> + *
> + * Protected by @mm_lock.
> */
> struct drm_mm mm;
> - spinlock_t lock; /* Protects drm_mm node allocation/removal */
> +
> + /** @mm_lock: protects @mm node allocation/removal */
> + struct spinlock mm_lock;
> +
> + /** @vm_lock: protects gpuvm insert/remove/traverse */
> + struct mutex vm_lock;
> +
> + /** @mmu: The mmu object which manages the pgtables */
> struct msm_mmu *mmu;
> - struct kref kref;
>
> - /* For address spaces associated with a specific process, this
> + /**
> + * @pid: For address spaces associated with a specific process, this
> * will be non-NULL:
> */
> struct pid *pid;
>
> - /* @faults: the number of GPU hangs associated with this address space */
> + /** @faults: the number of GPU hangs associated with this address space */
> int faults;
>
> - /** @va_start: lowest possible address to allocate */
> - uint64_t va_start;
> -
> - /** @va_size: the size of the address space (in bytes) */
> - uint64_t va_size;
> + /** @managed: is this a kernel managed VM? */
> + bool managed;
> };
> +#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
>
> struct msm_gem_vm *
> msm_gem_vm_get(struct msm_gem_vm *vm);
> @@ -53,18 +91,31 @@ msm_gem_vm_get(struct msm_gem_vm *vm);
> void msm_gem_vm_put(struct msm_gem_vm *vm);
>
> struct msm_gem_vm *
> -msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
> - u64 va_start, u64 size);
> +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
> + u64 va_start, u64 va_size, bool managed);
>
> struct msm_fence_context;
>
> +/**
> + * struct msm_gem_vma - a VMA mapping
> + *
> + * Represents a combination of a GEM object plus a VM.
> + */
> struct msm_gem_vma {
> + /** @base: inherit from drm_gpuva */
> + struct drm_gpuva base;
> +
> + /**
> + * @node: mm node for VA allocation
> + *
> + * Only used by kernel managed VMs
> + */
> struct drm_mm_node node;
> - uint64_t iova;
> - struct msm_gem_vm *vm;
> - struct list_head list; /* node in msm_gem_object::vmas */
> +
> + /** @mapped: Is this VMA mapped? */
> bool mapped;
> };
> +#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base)
>
> struct msm_gem_vma *
> msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
> @@ -100,8 +151,6 @@ struct msm_gem_object {
> struct sg_table *sgt;
> void *vaddr;
>
> - struct list_head vmas; /* list of msm_gem_vma */
> -
> char name[32]; /* Identifier to print for the debugfs files */
>
> /* userspace metadata backchannel */
> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> index a186b7dfea35..e8a670566147 100644
> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> @@ -312,7 +312,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
> if (ret)
> break;
>
> - submit->bos[i].iova = vma->iova;
> + submit->bos[i].iova = vma->base.va.addr;
> }
>
> /*
> diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
> index ca29e81d79d2..56221dfdf551 100644
> --- a/drivers/gpu/drm/msm/msm_gem_vma.c
> +++ b/drivers/gpu/drm/msm/msm_gem_vma.c
> @@ -5,14 +5,13 @@
> */
>
> #include "msm_drv.h"
> -#include "msm_fence.h"
> #include "msm_gem.h"
> #include "msm_mmu.h"
>
> static void
> -msm_gem_vm_destroy(struct kref *kref)
> +msm_gem_vm_free(struct drm_gpuvm *gpuvm)
> {
> - struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref);
> + struct msm_gem_vm *vm = container_of(gpuvm, struct msm_gem_vm, base);
>
> drm_mm_takedown(&vm->mm);
> if (vm->mmu)
> @@ -25,14 +24,14 @@ msm_gem_vm_destroy(struct kref *kref)
> void msm_gem_vm_put(struct msm_gem_vm *vm)
> {
> if (vm)
> - kref_put(&vm->kref, msm_gem_vm_destroy);
> + drm_gpuvm_put(&vm->base);
> }
>
> struct msm_gem_vm *
> msm_gem_vm_get(struct msm_gem_vm *vm)
> {
> if (!IS_ERR_OR_NULL(vm))
> - kref_get(&vm->kref);
> + drm_gpuvm_get(&vm->base);
>
> return vm;
> }
> @@ -40,14 +39,14 @@ msm_gem_vm_get(struct msm_gem_vm *vm)
> /* Actually unmap memory for the vma */
> void msm_gem_vma_purge(struct msm_gem_vma *vma)
> {
> - struct msm_gem_vm *vm = vma->vm;
> - unsigned size = vma->node.size;
> + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
> + unsigned size = vma->base.va.range;
>
> /* Don't do anything if the memory isn't mapped */
> if (!vma->mapped)
> return;
>
> - vm->mmu->funcs->unmap(vm->mmu, vma->iova, size);
> + vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size);
>
> vma->mapped = false;
> }
> @@ -57,10 +56,10 @@ int
> msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
> struct sg_table *sgt, int size)
> {
> - struct msm_gem_vm *vm = vma->vm;
> + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
> int ret;
>
> - if (GEM_WARN_ON(!vma->iova))
> + if (GEM_WARN_ON(!vma->base.va.addr))
> return -EINVAL;
>
> if (vma->mapped)
> @@ -68,9 +67,6 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
>
> vma->mapped = true;
>
> - if (!vm)
> - return 0;
> -
> /*
> * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold
> * a lock across map/unmap which is also used in the job_run()
> @@ -80,7 +76,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
> * Revisit this if we can come up with a scheme to pre-alloc pages
> * for the pgtable in map/unmap ops.
> */
> - ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot);
> + ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot);
>
> if (ret) {
> vma->mapped = false;
> @@ -92,19 +88,20 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot,
> /* Close an iova. Warn if it is still in use */
> void msm_gem_vma_close(struct msm_gem_vma *vma)
> {
> - struct msm_gem_vm *vm = vma->vm;
> + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm);
>
> GEM_WARN_ON(vma->mapped);
>
> - spin_lock(&vm->lock);
> - if (vma->iova)
> + spin_lock(&vm->mm_lock);
> + if (vma->base.va.addr)
> drm_mm_remove_node(&vma->node);
> - spin_unlock(&vm->lock);
> + spin_unlock(&vm->mm_lock);
>
> - vma->iova = 0;
> - list_del(&vma->list);
> + mutex_lock(&vm->vm_lock);
> + drm_gpuva_remove(&vma->base);
> + drm_gpuva_unlink(&vma->base);
> + mutex_unlock(&vm->vm_lock);
>
> - msm_gem_vm_put(vm);
> kfree(vma);
> }
>
> @@ -113,6 +110,7 @@ struct msm_gem_vma *
> msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
> u64 range_start, u64 range_end)
> {
> + struct drm_gpuvm_bo *vm_bo;
> struct msm_gem_vma *vma;
> int ret;
>
> @@ -120,36 +118,82 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
> if (!vma)
> return ERR_PTR(-ENOMEM);
>
> - vma->vm = vm;
> + if (vm->managed) {
> + spin_lock(&vm->mm_lock);
> + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
> + obj->size, PAGE_SIZE, 0,
> + range_start, range_end, 0);
> + spin_unlock(&vm->mm_lock);
>
> - spin_lock(&vm->lock);
> - ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
> - obj->size, PAGE_SIZE, 0,
> - range_start, range_end, 0);
> - spin_unlock(&vm->lock);
> + if (ret)
> + goto err_free_vma;
>
> - if (ret)
> - goto err_free_vma;
> + range_start = vma->node.start;
> + range_end = range_start + obj->size;
> + }
>
> - vma->iova = vma->node.start;
> + GEM_WARN_ON((range_end - range_start) > obj->size);
> +
> + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0);
> vma->mapped = false;
>
> - INIT_LIST_HEAD(&vma->list);
> + mutex_lock(&vm->vm_lock);
> + ret = drm_gpuva_insert(&vm->base, &vma->base);
> + mutex_unlock(&vm->vm_lock);
> + if (ret)
> + goto err_free_range;
>
> - kref_get(&vm->kref);
> + vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj);
> + if (IS_ERR(vm_bo)) {
> + ret = PTR_ERR(vm_bo);
> + goto err_va_remove;
> + }
> +
> + mutex_lock(&vm->vm_lock);
> + drm_gpuva_link(&vma->base, vm_bo);
> + mutex_unlock(&vm->vm_lock);
> + GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo));
>
> return vma;
>
> +err_va_remove:
> + mutex_lock(&vm->vm_lock);
> + drm_gpuva_remove(&vma->base);
> + mutex_unlock(&vm->vm_lock);
> +err_free_range:
> + if (vm->managed)
> + drm_mm_remove_node(&vma->node);
> err_free_vma:
> kfree(vma);
> return ERR_PTR(ret);
> }
>
> +static const struct drm_gpuvm_ops msm_gpuvm_ops = {
> + .vm_free = msm_gem_vm_free,
> +};
> +
> +/**
> + * msm_gem_vm_create() - Create and initialize a &msm_gem_vm
> + * @drm: the drm device
> + * @mmu: the backing MMU objects handling mapping/unmapping
> + * @name: the name of the VM
> + * @va_start: the start offset of the GPU VA space
> + * @va_size: the size of the GPU VA space
This function is used on the KMS side too, so perhaps "GPU/KMS VA space"?
-Akhil
> + * @managed: is it a kernel managed VM?
> + *
> + * In a kernel managed VM, the kernel handles address allocation, and only
> + * synchronous operations are supported. In a user managed VM, userspace
> + * handles virtual address allocation, and both async and sync operations
> + * are supported.
> + */
> struct msm_gem_vm *
> -msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
> - u64 va_start, u64 size)
> +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
> + u64 va_start, u64 va_size, bool managed)
> {
> + enum drm_gpuvm_flags flags = managed ? DRM_GPUVM_VA_WEAK_REF : 0;
> struct msm_gem_vm *vm;
> + struct drm_gem_object *dummy_gem;
> + int ret = 0;
>
> if (IS_ERR(mmu))
> return ERR_CAST(mmu);
> @@ -158,15 +202,28 @@ msm_gem_vm_create(struct msm_mmu *mmu, const char *name,
> if (!vm)
> return ERR_PTR(-ENOMEM);
>
> - spin_lock_init(&vm->lock);
> - vm->name = name;
> - vm->mmu = mmu;
> - vm->va_start = va_start;
> - vm->va_size = size;
> + dummy_gem = drm_gpuvm_resv_object_alloc(drm);
> + if (!dummy_gem) {
> + ret = -ENOMEM;
> + goto err_free_vm;
> + }
> +
> + drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem,
> + va_start, va_size, 0, 0, &msm_gpuvm_ops);
> + drm_gem_object_put(dummy_gem);
> +
> + spin_lock_init(&vm->mm_lock);
> + mutex_init(&vm->vm_lock);
>
> - drm_mm_init(&vm->mm, va_start, size);
> + vm->mmu = mmu;
> + vm->managed = managed;
>
> - kref_init(&vm->kref);
> + drm_mm_init(&vm->mm, va_start, va_size);
>
> return vm;
> +
> +err_free_vm:
> + kfree(vm);
> + return ERR_PTR(ret);
> +
> }
> diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
> index 88504c4b842f..6458bd82a0cd 100644
> --- a/drivers/gpu/drm/msm/msm_kms.c
> +++ b/drivers/gpu/drm/msm/msm_kms.c
> @@ -204,8 +204,8 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev)
> return NULL;
> }
>
> - vm = msm_gem_vm_create(mmu, "mdp_kms",
> - 0x1000, 0x100000000 - 0x1000);
> + vm = msm_gem_vm_create(dev, mmu, "mdp_kms",
> + 0x1000, 0x100000000 - 0x1000, true);
> if (IS_ERR(vm)) {
> dev_err(mdp_dev, "vm create, error %pe\n", vm);
> mmu->funcs->destroy(mmu);
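For contrast with the kernel-managed KMS VM created just above, a
userspace-managed VM passes managed=false, leaving VA allocation to
userspace. A minimal hypothetical sketch against the new signature (the
name and VA range below are illustrative, not taken from the series):

	/* hypothetical: userspace manages VA allocation in [4K, 4G) */
	vm = msm_gem_vm_create(dev, mmu, "gpu-user",
			       SZ_4K, SZ_4G - SZ_4K, false);
	if (IS_ERR(vm))
		return ERR_CAST(vm);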
* Re: [PATCH v2 14/34] drm/msm: Lazily create context VM
2025-03-19 14:52 ` [PATCH v2 14/34] drm/msm: Lazily create context VM Rob Clark
@ 2025-04-16 17:38 ` Akhil P Oommen
0 siblings, 0 replies; 45+ messages in thread
From: Akhil P Oommen @ 2025-04-16 17:38 UTC (permalink / raw)
To: Rob Clark, dri-devel
Cc: freedreno, linux-arm-msm, Rob Clark, Sean Paul, Konrad Dybcio,
Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie,
Simona Vetter, open list
On 3/19/2025 8:22 PM, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> In the next commit, a way for userspace to opt-in to userspace managed
> VM is added. For this to work, we need to defer creation of the VM
> until it is needed.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 3 ++-
> drivers/gpu/drm/msm/adreno/adreno_gpu.c | 14 +++++++-----
> drivers/gpu/drm/msm/msm_drv.c | 29 ++++++++++++++++++++-----
> drivers/gpu/drm/msm/msm_gem_submit.c | 2 +-
> drivers/gpu/drm/msm/msm_gpu.h | 9 +++++++-
> 5 files changed, 43 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> index 4811be5a7c29..0b1e2ba3539e 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> @@ -112,6 +112,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> {
> bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
> struct msm_context *ctx = submit->queue->ctx;
> + struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx);
> struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> phys_addr_t ttbr;
> u32 asid;
> @@ -120,7 +121,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> if (ctx->seqno == ring->cur_ctx_seqno)
> return;
>
> - if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
> + if (msm_iommu_pagetable_params(to_msm_vm(vm)->mmu, &ttbr, &asid))
> return;
>
> if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index 0f71703f6ec7..e4d895dda051 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -351,6 +351,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
> {
> struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> struct drm_device *drm = gpu->dev;
> + /* Note ctx can be NULL when called from rd_open(): */
> + struct drm_gpuvm *vm = ctx ? msm_context_vm(drm, ctx) : NULL;
>
> /* No pointer params yet */
> if (*len != 0)
> @@ -396,8 +398,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
> *value = 0;
> return 0;
> case MSM_PARAM_FAULTS:
> - if (ctx->vm)
> - *value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
> + if (vm)
> + *value = gpu->global_faults + to_msm_vm(vm)->faults;
> else
> *value = gpu->global_faults;
> return 0;
> @@ -405,14 +407,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
> *value = gpu->suspend_count;
> return 0;
> case MSM_PARAM_VA_START:
> - if (ctx->vm == gpu->vm)
> + if (vm == gpu->vm)
> return UERR(EINVAL, drm, "requires per-process pgtables");
> - *value = ctx->vm->mm_start;
> + *value = vm->mm_start;
> return 0;
> case MSM_PARAM_VA_SIZE:
> - if (ctx->vm == gpu->vm)
> + if (vm == gpu->vm)
> return UERR(EINVAL, drm, "requires per-process pgtables");
> - *value = ctx->vm->mm_range;
> + *value = vm->mm_range;
> return 0;
> case MSM_PARAM_HIGHEST_BANK_BIT:
> *value = adreno_gpu->ubwc_config.highest_bank_bit;
> diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> index 6ef29bc48bb0..6fd981ee6aee 100644
> --- a/drivers/gpu/drm/msm/msm_drv.c
> +++ b/drivers/gpu/drm/msm/msm_drv.c
> @@ -214,10 +214,29 @@ static void load_gpu(struct drm_device *dev)
> mutex_unlock(&init_lock);
> }
>
> +/**
> + * msm_context_vm - lazily create the context's VM
> + *
> + * @dev: the drm device
> + * @ctx: the context
> + *
> + * The VM is lazily created, so that userspace has a chance to opt-in to having
> + * a userspace managed VM before the VM is created.
> + *
> + * Note that this does not return a reference to the VM. Once the VM is created,
> + * it exists for the lifetime of the context.
> + */
> +struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
> +{
> + struct msm_drm_private *priv = dev->dev_private;
> + if (!ctx->vm)
hmm, this lazy creation is racy, and it is in a userspace-accessible
path: two threads in the same process can both observe ctx->vm == NULL
and each create a VM, leaking one. A possible fix is sketched below the
quoted function.
-Akhil
> + ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
> + return ctx->vm;
> +}
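A minimal sketch of one way to close that race, serializing lazy
creation with a mutex. Note that the lock used below (priv->lock) is
assumed purely for illustration; it is not an existing field, and the
actual lock choice would be up to the driver:

	struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
	{
		struct msm_drm_private *priv = dev->dev_private;

		/* hypothetical lock: makes the NULL check and creation atomic */
		mutex_lock(&priv->lock);
		if (!ctx->vm)
			ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
		mutex_unlock(&priv->lock);

		return ctx->vm;
	}

An atomic cmpxchg on ctx->vm (dropping the loser's VM) would be an
alternative if taking a lock on every submit is a concern.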
> +
> static int context_init(struct drm_device *dev, struct drm_file *file)
> {
> static atomic_t ident = ATOMIC_INIT(0);
> - struct msm_drm_private *priv = dev->dev_private;
> struct msm_context *ctx;
>
> ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> @@ -230,7 +249,6 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
> kref_init(&ctx->ref);
> msm_submitqueue_init(dev, ctx);
>
> - ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
> file->driver_priv = ctx;
>
> ctx->seqno = atomic_inc_return(&ident);
> @@ -408,7 +426,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
> * Don't pin the memory here - just get an address so that userspace can
> * be productive
> */
> - return msm_gem_get_iova(obj, ctx->vm, iova);
> + return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova);
> }
>
> static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
> @@ -417,18 +435,19 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
> {
> struct msm_drm_private *priv = dev->dev_private;
> struct msm_context *ctx = file->driver_priv;
> + struct drm_gpuvm *vm = msm_context_vm(dev, ctx);
>
> if (!priv->gpu)
> return -EINVAL;
>
> /* Only supported if per-process address space is supported: */
> - if (priv->gpu->vm == ctx->vm)
> + if (priv->gpu->vm == vm)
> return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
>
> if (should_fail(&fail_gem_iova, obj->size))
> return -ENOMEM;
>
> - return msm_gem_set_iova(obj, ctx->vm, iova);
> + return msm_gem_set_iova(obj, vm, iova);
> }
>
> static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> index c65f3a6a5256..9731ad7993cf 100644
> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> @@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
>
> kref_init(&submit->ref);
> submit->dev = dev;
> - submit->vm = queue->ctx->vm;
> + submit->vm = msm_context_vm(dev, queue->ctx);
> submit->gpu = gpu;
> submit->cmd = (void *)&submit->bos[nr_bos];
> submit->queue = queue;
> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> index d8425e6d7f5a..c15aad288552 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.h
> +++ b/drivers/gpu/drm/msm/msm_gpu.h
> @@ -362,7 +362,12 @@ struct msm_context {
> */
> int queueid;
>
> - /** @vm: the per-process GPU address-space */
> + /**
> + * @vm:
> + *
> + * The per-process GPU address-space. Do not access directly, use
> + * msm_context_vm().
> + */
> struct drm_gpuvm *vm;
>
> /** @kref: the reference count */
> @@ -447,6 +452,8 @@ struct msm_context {
> atomic64_t ctx_mem;
> };
>
> +struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
> +
> /**
> * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
> *
* Re: [PATCH v2 05/34] drm/msm: Rename msm_file_private -> msm_context
2025-03-19 14:52 ` [PATCH v2 05/34] drm/msm: Rename msm_file_private -> msm_context Rob Clark
@ 2025-04-16 23:11 ` Dmitry Baryshkov
0 siblings, 0 replies; 45+ messages in thread
From: Dmitry Baryshkov @ 2025-04-16 23:11 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, freedreno, linux-arm-msm, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
On Wed, Mar 19, 2025 at 07:52:17AM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> This is a more descriptive name.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +-
> drivers/gpu/drm/msm/adreno/adreno_gpu.c | 6 ++--
> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 4 +--
> drivers/gpu/drm/msm/msm_drv.c | 14 ++++-----
> drivers/gpu/drm/msm/msm_gem.c | 2 +-
> drivers/gpu/drm/msm/msm_gem_submit.c | 2 +-
> drivers/gpu/drm/msm/msm_gpu.c | 4 +--
> drivers/gpu/drm/msm/msm_gpu.h | 39 ++++++++++++-------------
> drivers/gpu/drm/msm/msm_submitqueue.c | 27 +++++++++--------
> 9 files changed, 49 insertions(+), 51 deletions(-)
>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
--
With best wishes
Dmitry
* Re: [PATCH v2 06/34] drm/msm: Improve msm_context comments
2025-03-19 14:52 ` [PATCH v2 06/34] drm/msm: Improve msm_context comments Rob Clark
@ 2025-04-16 23:19 ` Dmitry Baryshkov
0 siblings, 0 replies; 45+ messages in thread
From: Dmitry Baryshkov @ 2025-04-16 23:19 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, freedreno, linux-arm-msm, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
On Wed, Mar 19, 2025 at 07:52:18AM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Just some tidying up.
Probably there is nothing more to say.
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/msm_gpu.h | 44 +++++++++++++++++++++++------------
> 1 file changed, 29 insertions(+), 15 deletions(-)
>
--
With best wishes
Dmitry
* Re: [PATCH v2 08/34] drm/msm: Remove vram carveout support
2025-03-19 14:52 ` [PATCH v2 08/34] drm/msm: Remove vram carveout support Rob Clark
2025-04-16 17:18 ` Akhil P Oommen
@ 2025-04-16 23:20 ` Dmitry Baryshkov
2025-04-17 13:41 ` Luca Weiss
1 sibling, 1 reply; 45+ messages in thread
From: Dmitry Baryshkov @ 2025-04-16 23:20 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, freedreno, linux-arm-msm, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list
On Wed, Mar 19, 2025 at 07:52:20AM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> It is standing in the way of drm_gpuvm / VM_BIND support. Not to
> mention frequently broken and rarely tested. And I think only needed
> for a 10yr old not quite upstream SoC (msm8974).
Well... MSM8974 is quite upstream, but anyway, let's drop it. Maybe
somebody will write an IOMMU driver.
>
> Maybe we can add support back in later, but I'm doubtful.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +-
> drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 13 +-
> drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 13 +-
> drivers/gpu/drm/msm/adreno/adreno_device.c | 4 -
> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 -
> drivers/gpu/drm/msm/msm_drv.c | 117 +-----------------
> drivers/gpu/drm/msm/msm_drv.h | 11 --
> drivers/gpu/drm/msm/msm_gem.c | 131 ++-------------------
> drivers/gpu/drm/msm/msm_gem.h | 5 -
> drivers/gpu/drm/msm/msm_gem_submit.c | 5 -
> 10 files changed, 19 insertions(+), 287 deletions(-)
--
With best wishes
Dmitry
* Re: [PATCH v2 08/34] drm/msm: Remove vram carveout support
2025-04-16 23:20 ` Dmitry Baryshkov
@ 2025-04-17 13:41 ` Luca Weiss
0 siblings, 0 replies; 45+ messages in thread
From: Luca Weiss @ 2025-04-17 13:41 UTC (permalink / raw)
To: Dmitry Baryshkov, Rob Clark
Cc: dri-devel, freedreno, linux-arm-msm, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, open list, luca
On Thu Apr 17, 2025 at 1:20 AM CEST, Dmitry Baryshkov wrote:
> On Wed, Mar 19, 2025 at 07:52:20AM -0700, Rob Clark wrote:
>> From: Rob Clark <robdclark@chromium.org>
>>
>> It is standing in the way of drm_gpuvm / VM_BIND support. Not to
>> mention frequently broken and rarely tested. And I think only needed
>> for a 10yr old not quite upstream SoC (msm8974).
>
> Well... MSM8974 is quite upstream, but anyway, let's drop it. Maybe
> somebody will write an IOMMU driver.
msm8226 is also using this!
Sad to see this happening, but I get the reasoning. Unfortunately nobody
who really knows the GPU and IOMMU bits has looked at this in recent
years. For msm8974 we (mostly Matti Lehtimäki and I) have a semi-working
branch, but we keep hitting random issues with it.
It would have been nice if somebody had implemented functional IOMMU
support back in 2015-2016, when more people were looking at this
platform.
Regards
Luca
>
>>
>> Maybe we can add support back in later, but I'm doubtful.
>>
>> Signed-off-by: Rob Clark <robdclark@chromium.org>
>> ---
>> drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +-
>> drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 13 +-
>> drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 13 +-
>> drivers/gpu/drm/msm/adreno/adreno_device.c | 4 -
>> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 -
>> drivers/gpu/drm/msm/msm_drv.c | 117 +-----------------
>> drivers/gpu/drm/msm/msm_drv.h | 11 --
>> drivers/gpu/drm/msm/msm_gem.c | 131 ++-------------------
>> drivers/gpu/drm/msm/msm_gem.h | 5 -
>> drivers/gpu/drm/msm/msm_gem_submit.c | 5 -
>> 10 files changed, 19 insertions(+), 287 deletions(-)
* Re: [PATCH v2 07/34] drm/msm: Rename msm_gem_address_space -> msm_gem_vm
2025-03-19 14:52 ` [PATCH v2 07/34] drm/msm: Rename msm_gem_address_space -> msm_gem_vm Rob Clark
@ 2025-04-21 19:19 ` Dmitry Baryshkov
0 siblings, 0 replies; 45+ messages in thread
From: Dmitry Baryshkov @ 2025-04-21 19:19 UTC (permalink / raw)
To: Rob Clark
Cc: dri-devel, freedreno, linux-arm-msm, Rob Clark, Sean Paul,
Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
David Airlie, Simona Vetter, Jessica Zhang, Jani Nikula,
Barnabás Czémán, Arnd Bergmann, André Almeida,
Christopher Snowhill, Jonathan Marek, Krzysztof Kozlowski,
Eugene Lepshy, open list
On Wed, Mar 19, 2025 at 07:52:19AM -0700, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Re-aligning naming to better match drm_gpuvm terminology will make
> things less confusing at the end of the drm_gpuvm conversion.
>
> This is just rename churn, no functional change.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 18 ++--
> drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 4 +-
> drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 4 +-
> drivers/gpu/drm/msm/adreno/a5xx_debugfs.c | 4 +-
> drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 22 ++---
> drivers/gpu/drm/msm/adreno/a5xx_power.c | 2 +-
> drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 10 +-
> drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 26 +++---
> drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +-
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 45 +++++----
> drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 6 +-
> drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 10 +-
> drivers/gpu/drm/msm/adreno/adreno_gpu.c | 47 +++++-----
> drivers/gpu/drm/msm/adreno/adreno_gpu.h | 18 ++--
> .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 14 +--
> drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 ++--
> drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +-
> drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 18 ++--
> drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 14 +--
> drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 4 +-
> drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 6 +-
> drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 24 ++---
> drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c | 12 +--
> drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 4 +-
> drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 18 ++--
> drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c | 12 +--
> drivers/gpu/drm/msm/dsi/dsi_host.c | 14 +--
> drivers/gpu/drm/msm/msm_drv.c | 8 +-
> drivers/gpu/drm/msm/msm_drv.h | 10 +-
> drivers/gpu/drm/msm/msm_fb.c | 10 +-
> drivers/gpu/drm/msm/msm_fbdev.c | 2 +-
> drivers/gpu/drm/msm/msm_gem.c | 74 +++++++--------
> drivers/gpu/drm/msm/msm_gem.h | 34 +++----
> drivers/gpu/drm/msm/msm_gem_submit.c | 6 +-
> drivers/gpu/drm/msm/msm_gem_vma.c | 93 +++++++++----------
> drivers/gpu/drm/msm/msm_gpu.c | 48 +++++-----
> drivers/gpu/drm/msm/msm_gpu.h | 16 ++--
> drivers/gpu/drm/msm/msm_kms.c | 16 ++--
> drivers/gpu/drm/msm/msm_kms.h | 2 +-
> drivers/gpu/drm/msm/msm_ringbuffer.c | 4 +-
> drivers/gpu/drm/msm/msm_submitqueue.c | 2 +-
> 41 files changed, 349 insertions(+), 354 deletions(-)
>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
--
With best wishes
Dmitry
Thread overview: 45+ messages
2025-03-19 14:52 [PATCH v2 00/34] drm/msm: sparse / "VM_BIND" support Rob Clark
2025-03-19 14:52 ` [PATCH v2 01/34] drm/gpuvm: Don't require obj lock in destructor path Rob Clark
2025-03-19 14:52 ` [PATCH v2 02/34] drm/gpuvm: Remove bogus lock assert Rob Clark
2025-03-19 14:52 ` [PATCH v2 03/34] drm/gpuvm: Allow VAs to hold soft reference to BOs Rob Clark
2025-03-19 14:52 ` [PATCH v2 04/34] drm/gpuvm: Add drm_gpuvm_sm_unmap_va() Rob Clark
2025-03-19 14:52 ` [PATCH v2 05/34] drm/msm: Rename msm_file_private -> msm_context Rob Clark
2025-04-16 23:11 ` Dmitry Baryshkov
2025-03-19 14:52 ` [PATCH v2 06/34] drm/msm: Improve msm_context comments Rob Clark
2025-04-16 23:19 ` Dmitry Baryshkov
2025-03-19 14:52 ` [PATCH v2 07/34] drm/msm: Rename msm_gem_address_space -> msm_gem_vm Rob Clark
2025-04-21 19:19 ` Dmitry Baryshkov
2025-03-19 14:52 ` [PATCH v2 08/34] drm/msm: Remove vram carveout support Rob Clark
2025-04-16 17:18 ` Akhil P Oommen
2025-04-16 23:20 ` Dmitry Baryshkov
2025-04-17 13:41 ` Luca Weiss
2025-03-19 14:52 ` [PATCH v2 09/34] drm/msm: Collapse vma allocation and initialization Rob Clark
2025-03-19 14:52 ` [PATCH v2 10/34] drm/msm: Collapse vma close and delete Rob Clark
2025-03-19 14:52 ` [PATCH v2 11/34] drm/msm: drm_gpuvm conversion Rob Clark
2025-04-16 17:20 ` Akhil P Oommen
2025-03-19 14:52 ` [PATCH v2 12/34] drm/msm: Use drm_gpuvm types more Rob Clark
2025-03-19 14:52 ` [PATCH v2 13/34] drm/msm: Split submit_pin_objects() Rob Clark
2025-03-19 14:52 ` [PATCH v2 14/34] drm/msm: Lazily create context VM Rob Clark
2025-04-16 17:38 ` Akhil P Oommen
2025-03-19 14:52 ` [PATCH v2 15/34] drm/msm: Add opt-in for VM_BIND Rob Clark
2025-03-19 14:52 ` [PATCH v2 16/34] drm/msm: Mark VM as unusable on faults Rob Clark
2025-03-19 16:15 ` Connor Abbott
2025-03-19 21:31 ` Rob Clark
2025-03-19 14:52 ` [PATCH v2 17/34] drm/msm: Extend SUBMIT ioctl for VM_BIND Rob Clark
2025-03-19 14:52 ` [PATCH v2 18/34] drm/msm: Add VM_BIND submitqueue Rob Clark
2025-03-19 14:52 ` [PATCH v2 19/34] drm/msm: Add _NO_SHARE flag Rob Clark
2025-03-19 14:52 ` [PATCH v2 20/34] drm/msm: Split out helper to get iommu prot flags Rob Clark
2025-03-19 14:52 ` [PATCH v2 21/34] drm/msm: Add mmu support for non-zero offset Rob Clark
2025-03-19 14:52 ` [PATCH v2 22/34] drm/msm: Add PRR support Rob Clark
2025-03-19 14:52 ` [PATCH v2 23/34] drm/msm: Rename msm_gem_vma_purge() -> _unmap() Rob Clark
2025-03-19 14:52 ` [PATCH v2 24/34] drm/msm: Split msm_gem_vma_new() Rob Clark
2025-03-19 14:52 ` [PATCH v2 25/34] drm/msm: Pre-allocate VMAs Rob Clark
2025-03-19 14:52 ` [PATCH v2 26/34] drm/msm: Pre-allocate vm_bo objects Rob Clark
2025-03-19 14:52 ` [PATCH v2 27/34] drm/msm: Pre-allocate pages for pgtable entries Rob Clark
2025-03-19 14:52 ` [PATCH v2 28/34] drm/msm: Wire up gpuvm ops Rob Clark
2025-03-19 14:52 ` [PATCH v2 29/34] drm/msm: Wire up drm_gpuvm debugfs Rob Clark
2025-03-19 14:52 ` [PATCH v2 30/34] drm/msm: Crashdump prep for sparse mappings Rob Clark
2025-03-19 14:52 ` [PATCH v2 31/34] drm/msm: rd dumping " Rob Clark
2025-03-19 14:52 ` [PATCH v2 32/34] drm/msm: Crashdec support for sparse Rob Clark
2025-03-19 14:52 ` [PATCH v2 33/34] drm/msm: rd dumping " Rob Clark
2025-03-19 14:52 ` [PATCH v2 34/34] drm/msm: Bump UAPI version Rob Clark