From: Akhil P Oommen <quic_akhilpo@quicinc.com>
To: Rob Clark <robdclark@gmail.com>, <dri-devel@lists.freedesktop.org>
Cc: <freedreno@lists.freedesktop.org>,
<linux-arm-msm@vger.kernel.org>,
"Rob Clark" <robdclark@chromium.org>, Sean Paul <sean@poorly.run>,
Konrad Dybcio <konradybcio@kernel.org>,
Abhinav Kumar <quic_abhinavk@quicinc.com>,
"Dmitry Baryshkov" <lumag@kernel.org>,
Marijn Suijten <marijn.suijten@somainline.org>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>,
open list <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2 14/34] drm/msm: Lazily create context VM
Date: Wed, 16 Apr 2025 23:08:55 +0530
Message-ID: <1d109f0f-e866-4f87-b8f9-06595dbc51ff@quicinc.com>
In-Reply-To: <20250319145425.51935-15-robdclark@gmail.com>
On 3/19/2025 8:22 PM, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> In the next commit, a way for userspace to opt-in to userspace managed
> VM is added. For this to work, we need to defer creation of the VM
> until it is needed.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 3 ++-
> drivers/gpu/drm/msm/adreno/adreno_gpu.c | 14 +++++++-----
> drivers/gpu/drm/msm/msm_drv.c | 29 ++++++++++++++++++++-----
> drivers/gpu/drm/msm/msm_gem_submit.c | 2 +-
> drivers/gpu/drm/msm/msm_gpu.h | 9 +++++++-
> 5 files changed, 43 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> index 4811be5a7c29..0b1e2ba3539e 100644
> --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
> @@ -112,6 +112,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> {
> bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
> struct msm_context *ctx = submit->queue->ctx;
> + struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx);
> struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
> phys_addr_t ttbr;
> u32 asid;
> @@ -120,7 +121,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
> if (ctx->seqno == ring->cur_ctx_seqno)
> return;
>
> - if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
> + if (msm_iommu_pagetable_params(to_msm_vm(vm)->mmu, &ttbr, &asid))
> return;
>
> if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> index 0f71703f6ec7..e4d895dda051 100644
> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> @@ -351,6 +351,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
> {
> struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
> struct drm_device *drm = gpu->dev;
> + /* Note ctx can be NULL when called from rd_open(): */
> + struct drm_gpuvm *vm = ctx ? msm_context_vm(drm, ctx) : NULL;
>
> /* No pointer params yet */
> if (*len != 0)
> @@ -396,8 +398,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
> *value = 0;
> return 0;
> case MSM_PARAM_FAULTS:
> - if (ctx->vm)
> - *value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
> + if (vm)
> + *value = gpu->global_faults + to_msm_vm(vm)->faults;
> else
> *value = gpu->global_faults;
> return 0;
> @@ -405,14 +407,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
> *value = gpu->suspend_count;
> return 0;
> case MSM_PARAM_VA_START:
> - if (ctx->vm == gpu->vm)
> + if (vm == gpu->vm)
> return UERR(EINVAL, drm, "requires per-process pgtables");
> - *value = ctx->vm->mm_start;
> + *value = vm->mm_start;
> return 0;
> case MSM_PARAM_VA_SIZE:
> - if (ctx->vm == gpu->vm)
> + if (vm == gpu->vm)
> return UERR(EINVAL, drm, "requires per-process pgtables");
> - *value = ctx->vm->mm_range;
> + *value = vm->mm_range;
> return 0;
> case MSM_PARAM_HIGHEST_BANK_BIT:
> *value = adreno_gpu->ubwc_config.highest_bank_bit;
> diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> index 6ef29bc48bb0..6fd981ee6aee 100644
> --- a/drivers/gpu/drm/msm/msm_drv.c
> +++ b/drivers/gpu/drm/msm/msm_drv.c
> @@ -214,10 +214,29 @@ static void load_gpu(struct drm_device *dev)
> mutex_unlock(&init_lock);
> }
>
> +/**
> + * msm_context_vm - lazily create the context's VM
> + *
> + * @dev: the drm device
> + * @ctx: the context
> + *
> + * The VM is lazily created, so that userspace has a chance to opt-in to having
> + * a userspace managed VM before the VM is created.
> + *
> + * Note that this does not return a reference to the VM. Once the VM is created,
> + * it exists for the lifetime of the context.
> + */
> +struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
> +{
> + struct msm_drm_private *priv = dev->dev_private;
> + if (!ctx->vm)
hmm. This check-then-allocate is racy, and it is in a userspace-accessible
path: two threads hitting this concurrently (e.g. via parallel ioctls on the
same drm_file) can both observe ctx->vm == NULL and each create a VM, leaking
one of them.

-Akhil
> + ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
> + return ctx->vm;
> +}
> +
> static int context_init(struct drm_device *dev, struct drm_file *file)
> {
> static atomic_t ident = ATOMIC_INIT(0);
> - struct msm_drm_private *priv = dev->dev_private;
> struct msm_context *ctx;
>
> ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> @@ -230,7 +249,6 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
> kref_init(&ctx->ref);
> msm_submitqueue_init(dev, ctx);
>
> - ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
> file->driver_priv = ctx;
>
> ctx->seqno = atomic_inc_return(&ident);
> @@ -408,7 +426,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
> * Don't pin the memory here - just get an address so that userspace can
> * be productive
> */
> - return msm_gem_get_iova(obj, ctx->vm, iova);
> + return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova);
> }
>
> static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
> @@ -417,18 +435,19 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
> {
> struct msm_drm_private *priv = dev->dev_private;
> struct msm_context *ctx = file->driver_priv;
> + struct drm_gpuvm *vm = msm_context_vm(dev, ctx);
>
> if (!priv->gpu)
> return -EINVAL;
>
> /* Only supported if per-process address space is supported: */
> - if (priv->gpu->vm == ctx->vm)
> + if (priv->gpu->vm == vm)
> return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
>
> if (should_fail(&fail_gem_iova, obj->size))
> return -ENOMEM;
>
> - return msm_gem_set_iova(obj, ctx->vm, iova);
> + return msm_gem_set_iova(obj, vm, iova);
> }
>
> static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> index c65f3a6a5256..9731ad7993cf 100644
> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> @@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
>
> kref_init(&submit->ref);
> submit->dev = dev;
> - submit->vm = queue->ctx->vm;
> + submit->vm = msm_context_vm(dev, queue->ctx);
> submit->gpu = gpu;
> submit->cmd = (void *)&submit->bos[nr_bos];
> submit->queue = queue;
> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
> index d8425e6d7f5a..c15aad288552 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.h
> +++ b/drivers/gpu/drm/msm/msm_gpu.h
> @@ -362,7 +362,12 @@ struct msm_context {
> */
> int queueid;
>
> - /** @vm: the per-process GPU address-space */
> + /**
> + * @vm:
> + *
> + * The per-process GPU address-space. Do not access directly, use
> + * msm_context_vm().
> + */
> struct drm_gpuvm *vm;
>
> /** @kref: the reference count */
> @@ -447,6 +452,8 @@ struct msm_context {
> atomic64_t ctx_mem;
> };
>
> +struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
> +
> /**
> * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
> *