From: Matthew Brost <matthew.brost@intel.com>
To: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v3 13/19] drm/xe/madvise: Update migration policy based on preferred location
Date: Thu, 29 May 2025 16:42:42 -0700
Message-ID: <aDjw8rFTnR/AfvKW@lstrano-desk.jf.intel.com>
In-Reply-To: <20250527164003.1068118-14-himal.prasad.ghimiray@intel.com>
On Tue, May 27, 2025 at 10:09:57PM +0530, Himal Prasad Ghimiray wrote:
> When the user sets a valid devmem_fd as the preferred location, a GPU
> fault will trigger migration to the tile of the device associated with
> that devmem_fd.
>
> If the user sets an invalid devmem_fd, the preferred location is the
> current placement (smem) only.
>
> v2(Matthew Brost)
> - Default should be faulting tile
> - remove devmem_fd used as region
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 39 +++++++++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_svm.h | 8 ++++++
> drivers/gpu/drm/xe/xe_vm.h | 3 +++
> drivers/gpu/drm/xe/xe_vm_madvise.c | 15 +++++++++++-
> 4 files changed, 63 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 743bb1f7d39c..8b6546ebac72 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -791,6 +791,37 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
> return true;
> }
>
> +/**
> + * xe_vma_resolve_pagemap - Resolve the appropriate DRM pagemap for a VMA
> + * @vma: Pointer to the xe_vma structure containing memory attributes
> + * @tile: Pointer to the xe_tile structure used as fallback for VRAM mapping
> + *
> + * This function determines the correct DRM pagemap to use for a given VMA.
> + * It first checks if a valid devmem_fd is provided in the VMA's preferred
> + * location. If the devmem_fd is negative, it returns NULL, indicating that
> + * no pagemap is available and that smem is to be used as the preferred
> + * location. If the devmem_fd equals DRM_XE_PREFERRED_LOC_DEFAULT_DEVMEM_FD,
> + * it returns the VRAM pagemap associated with the faulting tile.
> + *
> + * Future support for multi-device configurations may use drm_pagemap_from_fd()
> + * to resolve pagemaps from arbitrary file descriptors.
> + *
> + * Return: A pointer to the resolved drm_pagemap, or NULL if none is applicable.
> + */
> +struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
> +{
> + s32 fd = (s32)vma->attr.preferred_loc.devmem_fd;
> +
> + if (fd < 0)
> + return NULL;
> +
> + if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVMEM_FD && tile)
> + return &tile->mem.vram.dpagemap;
I'd change this here to:
	return IS_DGFX(xe) ? &tile->mem.vram.dpagemap : NULL;
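With that change, the whole helper would read roughly as below (a sketch
only; it assumes xe is obtained via tile_to_xe(tile), which the hunk above
doesn't show):

	struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
	{
		s32 fd = (s32)vma->attr.preferred_loc.devmem_fd;
		/* Assumption: xe derived from the tile for the IS_DGFX() check */
		struct xe_device *xe = tile ? tile_to_xe(tile) : NULL;

		if (fd < 0)
			return NULL;

		/* On integrated parts there is no VRAM pagemap to hand back */
		if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVMEM_FD && tile)
			return IS_DGFX(xe) ? &tile->mem.vram.dpagemap : NULL;

		/* TODO: Support multi-device with drm_pagemap_from_fd(fd) */
		return NULL;
	}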
> +
> + /* TODO: Support multi-device with drm_pagemap_from_fd(fd) */
> + return NULL;
> +}
> +
> /**
> * xe_svm_handle_pagefault() - SVM handle page fault
> * @vm: The VM.
> @@ -823,6 +854,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> struct xe_svm_range *range;
> struct drm_exec exec;
> struct dma_fence *fence;
> + struct drm_pagemap *dpagemap;
> struct xe_tile *tile = gt_to_tile(gt);
> int migrate_try_count = ctx.devmem_only ? 3 : 1;
> ktime_t end = 0;
> @@ -852,8 +884,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>
> range_debug(range, "PAGE FAULT");
>
> + dpagemap = xe_vma_resolve_pagemap(vma, tile);
> if (--migrate_try_count >= 0 &&
> - xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
> + xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe) && !!dpagemap)) {
See my comment in patch 12 + comment above, I think this condition should be:
!!dpagemap || (atomic && xe_vma_need_vram_migrate_for_atomic)
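Something like this in the fault handler (sketch only;
xe_vma_need_vram_migrate_for_atomic() is the hypothetical helper named
above, and I'm assuming the atomic case is the ctx.devmem_only path):

	bool need_vram = !!dpagemap ||
			 (ctx.devmem_only &&
			  xe_vma_need_vram_migrate_for_atomic(vma));

	if (--migrate_try_count >= 0 &&
	    xe_svm_range_needs_migrate_to_vram(range, vma, need_vram)) {
		...
	}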
> +		/*
> +		 * TODO: For multi-device, dpagemap will be used to find the
> +		 * remote tile and remote device. xe_svm_alloc_vram() will need
> +		 * to take dpagemap for future multi-device support.
> +		 */
> err = xe_svm_alloc_vram(vm, tile, range, &ctx);
> ctx.timeslice_ms <<= 1; /* Double timeslice if we have to retry */
> if (err) {
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index b36f70ab3d03..344349313001 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -95,6 +95,8 @@ u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end);
>
> void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end);
>
> +struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile);
> +
> /**
> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
> * @range: SVM range
> @@ -320,6 +322,12 @@ void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end)
> {
> }
>
> +static inline
> +struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
> +{
> + return NULL;
> +}
> +
> #define xe_svm_assert_in_notifier(...) do {} while (0)
> #define xe_svm_range_has_dma_mapping(...) false
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index edd6ffd7c3ac..340ac34936f4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -222,6 +222,9 @@ int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
>
> int xe_vm_userptr_check_repin(struct xe_vm *vm);
>
> +bool xe_vma_has_preferred_mem_loc(struct xe_vma *vma,
> + u32 *mem_region, u32 *devmem_fd);
> +
> int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
> struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
> u8 tile_mask);
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 084719660401..1b31e41b3331 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -61,7 +61,20 @@ static int madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise_ops ops)
> {
> - /* Implementation pending */
> + int i;
> +
> + xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_PREFERRED_LOC);
> +
> + for (i = 0; i < num_vmas; i++) {
> + vmas[i]->attr.preferred_loc.devmem_fd = ops.preferred_mem_loc.devmem_fd;
> +
> +		/*
> +		 * Until multi-device support is added, migration_policy
> +		 * is unused and can be ignored.
> +		 */
> +		//vmas[i]->attr.preferred_loc.migration_policy =
> +		//	ops.preferred_mem_loc.migration_policy;
No harm in just setting this for now and leaving it unused, right?
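i.e., something like (sketch of how I'd expect the loop to look with it set):

	for (i = 0; i < num_vmas; i++) {
		vmas[i]->attr.preferred_loc.devmem_fd =
			ops.preferred_mem_loc.devmem_fd;
		/* Unused until multi-device support lands, but harmless to store */
		vmas[i]->attr.preferred_loc.migration_policy =
			ops.preferred_mem_loc.migration_policy;
	}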
Matt
> + }
> +
> return 0;
> }
>
> --
> 2.34.1
>