From: "Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>
To: <intel-xe@lists.freedesktop.org>
Cc: <matthew.brost@intel.com>, <thomas.hellstrom@linux.intel.com>,
<oak.zeng@intel.com>, Danilo Krummrich <dakr@redhat.com>
Subject: Re: [RFC 13/29] drm/gpuvm: Introduce MADVISE Operations
Date: Fri, 14 Mar 2025 14:16:12 +0530
Message-ID: <97665538-698d-497b-9f34-248a3656af3c@intel.com>
In-Reply-To: <20250314080226.2059819-14-himal.prasad.ghimiray@intel.com>

On 14-03-2025 13:32, Himal Prasad Ghimiray wrote:
> Introduce MADVISE operations that do not unmap existing GPU VMAs. These
> operations split VMAs when the start or end address falls within an
> existing VMA, and can create up to 2 REMAPs and 2 MAPs.
>
> If the input range lies entirely within an existing VMA, REMAP:UNMAP,
> REMAP:PREV, REMAP:NEXT, and MAP operations are created for it.
> Example:
> Input Range: 0x00007f0a54000000 to 0x00007f0a54400000
> GPU VMA: 0x0000000000000000 to 0x0000800000000000
> Operations Result:
> - REMAP:UNMAP: addr=0x0000000000000000, range=0x0000800000000000
> - REMAP:PREV: addr=0x0000000000000000, range=0x00007f0a54000000
> - REMAP:NEXT: addr=0x00007f0a54400000, range=0x000000f5abc00000
> - MAP: addr=0x00007f0a54000000, range=0x0000000000400000
>
> If the input range starts at the beginning of one GPU VMA and ends at
> the end of another, spanning multiple VMAs, no operations are generated.
> Example:
> Input Range: 0x00007fc898800000 to 0x00007fc899000000
> GPU VMAs:
> - 0x0000000000000000 to 0x00007fc898800000
> - 0x00007fc898800000 to 0x00007fc898a00000
> - 0x00007fc898a00000 to 0x00007fc898c00000
> - 0x00007fc898c00000 to 0x00007fc899000000
> - 0x00007fc899000000 to 0x00007fc899200000
> Operations Result: None
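
To make the arithmetic behind the first example explicit (derived only from
the numbers quoted above; req_addr/req_end follow the patch's naming, while
va_addr/va_end here simply denote the existing VMA bounds):

	REMAP:PREV range = req_addr - va_addr = 0x00007f0a54000000 - 0x0000000000000000
	                 = 0x00007f0a54000000
	REMAP:NEXT addr  = req_end            = 0x00007f0a54400000
	REMAP:NEXT range = va_end - req_end   = 0x0000800000000000 - 0x00007f0a54400000
	                 = 0x000000f5abc00000
	MAP addr/range   = req_addr / (req_end - req_addr)
	                 = 0x00007f0a54000000 / 0x0000000000400000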
>
> Cc: Danilo Krummrich <dakr@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/drm_gpuvm.c | 175 +++++++++++++++++++++++++++++++++++-
> include/drm/drm_gpuvm.h | 6 ++
> 2 files changed, 180 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index f9eb56f24bef..904a26641b21 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -2230,7 +2230,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> ret = op_remap_cb(ops, priv, NULL, &n, &u);
> if (ret)
> return ret;
> - break;
> + return 0;
This change was incorrectly left in the patch; please ignore this part.
> }
> }
> }
> @@ -2240,6 +2240,143 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> req_obj, req_offset);
> }
>
> +static int
> +__drm_gpuvm_skip_split_map(struct drm_gpuvm *gpuvm,
> + const struct drm_gpuvm_ops *ops, void *priv,
> + u64 req_addr, u64 req_range,
> + bool skip_gem_obj_va, u64 req_offset)
> +{
> + struct drm_gpuva *va, *next;
> + u64 req_end = req_addr + req_range;
> + int ret;
> +
> + if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
> + return -EINVAL;
> +
> + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
> + struct drm_gem_object *obj = va->gem.obj;
> + u64 offset = va->gem.offset;
> + u64 addr = va->va.addr;
> + u64 range = va->va.range;
> + u64 end = addr + range;
> +
> + if (addr == req_addr) {
> + if (end == req_end)
> + return 0;
> +
> + if (end < req_end)
> + continue;
> +
> + if (end > req_end) {
> + if (skip_gem_obj_va && !!obj)
> + return 0;
> +
> + struct drm_gpuva_op_map n = {
> + .va.addr = req_end,
> + .va.range = range - req_range,
> + .gem.obj = obj,
> + .gem.offset = offset + req_range,
> + };
> + struct drm_gpuva_op_unmap u = {
> + .va = va,
> + .keep = false,
> + };
> +
> + ret = op_remap_cb(ops, priv, NULL, &n, &u);
> + if (ret)
> + return ret;
> +
> + break;
> + }
> + } else if (addr < req_addr) {
> + u64 ls_range = req_addr - addr;
> + struct drm_gpuva_op_map p = {
> + .va.addr = addr,
> + .va.range = ls_range,
> + .gem.obj = obj,
> + .gem.offset = offset,
> + };
> + struct drm_gpuva_op_unmap u = { .va = va, .keep = false, };
> +
> + if (end == req_end) {
> + if (skip_gem_obj_va && !!obj)
> + return 0;
> +
> + ret = op_remap_cb(ops, priv, &p, NULL, &u);
> + if (ret)
> + return ret;
> + break;
> + }
> +
> + if (end < req_end) {
> + if (skip_gem_obj_va && !!obj)
> + continue;
> +
> + ret = op_remap_cb(ops, priv, &p, NULL, &u);
> + if (ret)
> + return ret;
> +
> + ret = op_map_cb(ops, priv, req_addr,
> + min(end - req_addr, req_end - end),
> + NULL, req_offset);
> + if (ret)
> + return ret;
> + continue;
> + }
> +
> + if (end > req_end) {
> + if (skip_gem_obj_va && !!obj)
> + return 0;
> +
> + struct drm_gpuva_op_map n = {
> + .va.addr = req_end,
> + .va.range = end - req_end,
> + .gem.obj = obj,
> + .gem.offset = offset + ls_range +
> + req_range,
> + };
> +
> + ret = op_remap_cb(ops, priv, &p, &n, &u);
> + if (ret)
> + return ret;
> + break;
> + }
> + } else if (addr > req_addr) {
> + if (end == req_end)
> + return 0;
> +
> + if (end < req_end)
> + continue;
> +
> + if (end > req_end) {
> + if (skip_gem_obj_va && !!obj)
> + return 0;
> +
> + struct drm_gpuva_op_map n = {
> + .va.addr = req_end,
> + .va.range = end - req_end,
> + .gem.obj = obj,
> + .gem.offset = offset + req_end - addr,
> + };
> + struct drm_gpuva_op_unmap u = {
> + .va = va,
> + .keep = false,
> + };
> +
> + ret = op_remap_cb(ops, priv, NULL, &n, &u);
> + if (ret)
> + return ret;
> + return op_map_cb(ops, priv, addr,
> + (req_end - addr), NULL, req_offset);
> + }
> + }
> + }
> +
> + return op_map_cb(ops, priv,
> + req_addr, req_range,
> + NULL, req_offset);
> +}
> +
> static int
> __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
> const struct drm_gpuvm_ops *ops, void *priv,
> @@ -2548,6 +2685,42 @@ drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> }
> EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_ops_create);
>
> +struct drm_gpuva_ops *
> +drm_gpuvm_madvise_ops_create(struct drm_gpuvm *gpuvm,
> + u64 req_addr, u64 req_range,
> + bool skip_gem_obj_va, u64 req_offset)
> +{
> + struct drm_gpuva_ops *ops;
> + struct {
> + struct drm_gpuvm *vm;
> + struct drm_gpuva_ops *ops;
> + } args;
> + int ret;
> +
> + ops = kzalloc(sizeof(*ops), GFP_KERNEL);
> + if (unlikely(!ops))
> + return ERR_PTR(-ENOMEM);
> +
> + INIT_LIST_HEAD(&ops->list);
> +
> + args.vm = gpuvm;
> + args.ops = ops;
> +
> + ret = __drm_gpuvm_skip_split_map(gpuvm, &gpuvm_list_ops, &args,
> + req_addr, req_range,
> + skip_gem_obj_va, req_offset);
> +
> + if (ret || list_empty(&ops->list))
> + goto err_free_ops;
> +
> + return ops;
> +
> +err_free_ops:
> + drm_gpuva_ops_free(gpuvm, ops);
> + return ERR_PTR(ret);
> +}
> +EXPORT_SYMBOL_GPL(drm_gpuvm_madvise_ops_create);
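
For context, a rough, hypothetical sketch of how a driver might consume the
resulting ops list with the existing drm_gpuva_for_each_op()/
drm_gpuva_ops_free() helpers. The function, its name, and the comments in the
switch are purely illustrative (error handling, locking and attribute updates
are omitted); only the new drm_gpuvm_madvise_ops_create() call follows the
signature added by this patch:

	static int example_apply_madvise(struct drm_gpuvm *gpuvm, u64 addr,
					 u64 range, bool skip_gem_obj_va)
	{
		struct drm_gpuva_ops *ops;
		struct drm_gpuva_op *op;

		ops = drm_gpuvm_madvise_ops_create(gpuvm, addr, range,
						   skip_gem_obj_va, 0);
		if (IS_ERR_OR_NULL(ops))
			return PTR_ERR(ops); /* NULL (nothing to split) yields 0 */

		drm_gpuva_for_each_op(op, ops) {
			switch (op->op) {
			case DRM_GPUVA_OP_MAP:
				/* create a new VMA carrying the madvise attributes */
				break;
			case DRM_GPUVA_OP_REMAP:
				/* split the existing VMA into prev/next around the range */
				break;
			default:
				break;
			}
		}

		drm_gpuva_ops_free(gpuvm, ops);
		return 0;
	}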
> +
> /**
> * drm_gpuvm_sm_unmap_ops_create() - creates the &drm_gpuva_ops to split on
> * unmap
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 2a9629377633..e521ebabab9e 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -1062,6 +1062,12 @@ struct drm_gpuva_ops *
> drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range,
> struct drm_gem_object *obj, u64 offset);
> +
> +struct drm_gpuva_ops *
> +drm_gpuvm_madvise_ops_create(struct drm_gpuvm *gpuvm,
> + u64 addr, u64 range,
> + bool skip_gem_obj_va, u64 offset);
> +
> struct drm_gpuva_ops *
> drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range);