From: Matthew Brost <matthew.brost@intel.com>
To: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
<thomas.hellstrom@linux.intel.com>, <oak.zeng@intel.com>
Subject: Re: [RFC 06/29] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value
Date: Thu, 27 Mar 2025 19:56:21 -0700 [thread overview]
Message-ID: <Z+YP1WAm4SwyD3xy@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20250314080226.2059819-7-himal.prasad.ghimiray@intel.com>
On Fri, Mar 14, 2025 at 01:32:03PM +0530, Himal Prasad Ghimiray wrote:
> Prefetch of SVM ranges can require more than one page-table update
> operation per tile, so modify the function to accept an increment
> value as input.
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 22 +++++++++++-----------
> 1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 60303998bd61..53a80c0af8de 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -806,13 +806,13 @@ static void xe_vma_ops_fini(struct xe_vma_ops *vops)
> kfree(vops->pt_update_ops[i].ops);
> }
>
> -static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask)
> +static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask, u8 inc_val)
> {
> int i;
>
> for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
> if (BIT(i) & tile_mask)
> - ++vops->pt_update_ops[i].num_ops;
> + vops->pt_update_ops[i].num_ops += inc_val;
> }
>
> static void xe_vm_populate_rebind(struct xe_vma_op *op, struct xe_vma *vma,
> @@ -842,7 +842,7 @@ static int xe_vm_ops_add_rebind(struct xe_vma_ops *vops, struct xe_vma *vma,
>
> xe_vm_populate_rebind(op, vma, tile_mask);
> list_add_tail(&op->link, &vops->list);
> - xe_vma_ops_incr_pt_update_ops(vops, tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, tile_mask, 1);
>
> return 0;
> }
> @@ -977,7 +977,7 @@ xe_vm_ops_add_range_rebind(struct xe_vma_ops *vops,
>
> xe_vm_populate_range_rebind(op, vma, range, tile_mask);
> list_add_tail(&op->link, &vops->list);
> - xe_vma_ops_incr_pt_update_ops(vops, tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, tile_mask, 1);
>
> return 0;
> }
> @@ -1062,7 +1062,7 @@ xe_vm_ops_add_range_unbind(struct xe_vma_ops *vops,
>
> xe_vm_populate_range_unbind(op, range);
> list_add_tail(&op->link, &vops->list);
> - xe_vma_ops_incr_pt_update_ops(vops, range->tile_present);
> + xe_vma_ops_incr_pt_update_ops(vops, range->tile_present, 1);
>
> return 0;
> }
> @@ -2475,7 +2475,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> if ((op->map.immediate || !xe_vm_in_fault_mode(vm)) &&
> !op->map.is_cpu_addr_mirror)
> xe_vma_ops_incr_pt_update_ops(vops,
> - op->tile_mask);
> + op->tile_mask, 1);
> break;
> }
> case DRM_GPUVA_OP_REMAP:
> @@ -2536,7 +2536,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> (ULL)op->remap.start,
> (ULL)op->remap.range);
> } else {
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
> }
> }
>
> @@ -2565,11 +2565,11 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> (ULL)op->remap.start,
> (ULL)op->remap.range);
> } else {
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
> }
> }
> if (!skip)
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
Maybe update the code here (the REMAP case) to call
xe_vma_ops_incr_pt_update_ops once. I feel that would be a bit cleaner
and would actually exercise the new interface.
Matt
> break;
> }
> case DRM_GPUVA_OP_UNMAP:
> @@ -2581,7 +2581,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> return -EBUSY;
>
> if (!xe_vma_is_cpu_addr_mirror(vma))
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
> break;
> case DRM_GPUVA_OP_PREFETCH:
> vma = gpuva_to_vma(op->base.prefetch.va);
> @@ -2593,7 +2593,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> }
>
> if (!xe_vma_is_cpu_addr_mirror(vma))
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
> break;
> default:
> drm_warn(&vm->xe->drm, "NOT POSSIBLE");
> --
> 2.34.1
>