From: Rob Clark <robdclark@gmail.com>
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
Connor Abbott <cwabbott0@gmail.com>,
Rob Clark <robdclark@chromium.org>,
Rob Clark <robdclark@gmail.com>,
Abhinav Kumar <quic_abhinavk@quicinc.com>,
Dmitry Baryshkov <lumag@kernel.org>, Sean Paul <sean@poorly.run>,
Marijn Suijten <marijn.suijten@somainline.org>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v3 31/33] drm/msm: Split out map/unmap ops
Date: Mon, 28 Apr 2025 13:54:38 -0700 [thread overview]
Message-ID: <20250428205619.227835-32-robdclark@gmail.com> (raw)
In-Reply-To: <20250428205619.227835-1-robdclark@gmail.com>
From: Rob Clark <robdclark@chromium.org>
With async VM_BIND, the actual pgtable updates are deferred: the
synchronous path only generates a list of map/unmap ops, and the
pgtable changes are applied later. To support that, split out op
handlers and convert the existing non-VM_BIND paths to use them.
Note in particular that the vma itself may already be destroyed/freed
by the time an UNMAP op runs (or even a MAP op, if there is a later
queued UNMAP). For this reason, the op handlers cannot reference
the vma pointer.
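To illustrate the idea (this is a hypothetical userspace sketch, not the kernel code: the queue, names, and mock "pgtable" are all illustrative), ops are recorded synchronously carrying only iova/range, then replayed later without ever touching a vma object:

```c
/* Hypothetical model of deferred map/unmap ops. All names here are
 * illustrative; the real patch only introduces the op structs and
 * handlers, with queuing added by later patches in the series. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

enum op_kind { OP_MAP, OP_UNMAP };

struct vm_op {
	enum op_kind kind;
	uint64_t iova;   /* start address of the region */
	uint64_t range;  /* size of the region */
};

#define MAX_OPS 16
#define PAGE_SZ 0x1000u
#define NPAGES  64

struct mock_vm {
	struct vm_op ops[MAX_OPS]; /* ops queued synchronously */
	int nops;
	uint8_t mapped[NPAGES];    /* mock "pgtable": 1 = page mapped */
};

/* Synchronous part: only record the op; no pgtable access here. */
static int vm_queue_op(struct mock_vm *vm, enum op_kind kind,
		       uint64_t iova, uint64_t range)
{
	if (vm->nops >= MAX_OPS)
		return -1;
	vm->ops[vm->nops++] = (struct vm_op){
		.kind = kind, .iova = iova, .range = range,
	};
	return 0;
}

/* Deferred part: walk the queue and update the (mock) pgtable. The
 * handlers see only iova/range, never a vma pointer, so it does not
 * matter if the vma was freed in the meantime. */
static void vm_run_ops(struct mock_vm *vm)
{
	for (int i = 0; i < vm->nops; i++) {
		const struct vm_op *op = &vm->ops[i];
		for (uint64_t pg = op->iova / PAGE_SZ;
		     pg < (op->iova + op->range) / PAGE_SZ; pg++)
			vm->mapped[pg] = (op->kind == OP_MAP);
	}
	vm->nops = 0;
}
```

Running a MAP over pages 2..4 followed by a queued UNMAP of page 3 leaves pages 2 and 4 mapped, which is why even a MAP op cannot safely dereference its (possibly already-unmapped-and-freed) vma.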
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gem_vma.c | 63 +++++++++++++++++++++++++++----
1 file changed, 56 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 5b8769e152c9..f3903825e0b6 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -8,6 +8,34 @@
#include "msm_gem.h"
#include "msm_mmu.h"
+#define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__)
+
+/**
+ * struct msm_vm_map_op - create new pgtable mapping
+ */
+struct msm_vm_map_op {
+ /** @iova: start address for mapping */
+ uint64_t iova;
+ /** @range: size of the region to map */
+ uint64_t range;
+ /** @offset: offset into @sgt to map */
+ uint64_t offset;
+ /** @sgt: pages to map, or NULL for a PRR mapping */
+ struct sg_table *sgt;
+ /** @prot: the mapping protection flags */
+ int prot;
+};
+
+/**
+ * struct msm_vm_unmap_op - unmap a range of pages from pgtable
+ */
+struct msm_vm_unmap_op {
+ /** @iova: start address for unmap */
+ uint64_t iova;
+ /** @range: size of region to unmap */
+ uint64_t range;
+};
+
static void
msm_gem_vm_free(struct drm_gpuvm *gpuvm)
{
@@ -21,28 +49,45 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm)
kfree(vm);
}
+static void
+vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op)
+{
+ vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
+
+ vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range);
+}
+
/* Actually unmap memory for the vma */
void msm_gem_vma_unmap(struct drm_gpuva *vma)
{
struct msm_gem_vma *msm_vma = to_msm_vma(vma);
- struct msm_gem_vm *vm = to_msm_vm(vma->vm);
- unsigned size = vma->va.range;
/* Don't do anything if the memory isn't mapped */
if (!msm_vma->mapped)
return;
- vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size);
+ vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){
+ .iova = vma->va.addr,
+ .range = vma->va.range,
+ });
msm_vma->mapped = false;
}
+static int
+vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op)
+{
+ vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range);
+
+ return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset,
+ op->range, op->prot);
+}
+
/* Map and pin vma: */
int
msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
{
struct msm_gem_vma *msm_vma = to_msm_vma(vma);
- struct msm_gem_vm *vm = to_msm_vm(vma->vm);
int ret;
if (GEM_WARN_ON(!vma->va.addr))
@@ -62,9 +107,13 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt)
* Revisit this if we can come up with a scheme to pre-alloc pages
* for the pgtable in map/unmap ops.
*/
- ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt,
- vma->gem.offset, vma->va.range,
- prot);
+ ret = vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){
+ .iova = vma->va.addr,
+ .range = vma->va.range,
+ .offset = vma->gem.offset,
+ .sgt = sgt,
+ .prot = prot,
+ });
if (ret) {
msm_vma->mapped = false;
}
--
2.49.0