From: "Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <thomas.hellstrom@linux.intel.com>
Subject: Re: [PATCH v2 25/32] drm/xe/svm : Add svm ranges migration policy on atomic access
Date: Tue, 20 May 2025 15:52:09 +0530
Message-ID: <dfb08565-1c88-41b3-8e03-0f4e2bdfc832@intel.com>
In-Reply-To: <aCUXcxq8i7LjGl0w@lstrano-desk.jf.intel.com>
On 15-05-2025 03:51, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:12PM +0530, Himal Prasad Ghimiray wrote:
>> If the platform does not support atomic access on system memory, and the
>> ranges are in system memory, but the user requires atomic accesses on
>> the VMA, then migrate the ranges to VRAM. Apply this policy for prefetch
>> operations as well.
>>
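To spell the policy out for readers of the thread: forced migration kicks in only when the user has set an atomic hint via madvise and the platform cannot serve it from SMEM. A minimal restatement of needs_ranges_in_vram_to_support_atomic() from the hunk below (names as in the patch; the shortened function name here is for illustration only):

    static bool needs_vram_for_atomics(struct xe_device *xe, struct xe_vma *vma)
    {
            /* No atomic hint set via madvise: leave placement alone. */
            if (vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_UNDEFINED)
                    return false;

            /* Platform can serve device atomics from SMEM: no migration. */
            if (xe->info.has_device_atomics_on_smem &&
                vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
                    return false;

            /* Any other requested atomic policy needs the range in VRAM. */
            return true;
    }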
>
> I think the baseline was changed a bit here, but I believe it mostly
> makes sense. Will review again on the rebase.
>
> One nit below.
>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pt.c | 9 +++++++--
>> drivers/gpu/drm/xe/xe_svm.c | 14 ++++++++++++--
>> drivers/gpu/drm/xe/xe_vm.c | 2 ++
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 11 ++++++++++-
>> 4 files changed, 31 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
>> index 2479d830d90a..ba9b30b25ded 100644
>> --- a/drivers/gpu/drm/xe/xe_pt.c
>> +++ b/drivers/gpu/drm/xe/xe_pt.c
>> @@ -645,13 +645,18 @@ static bool xe_atomic_for_vram(struct xe_vm *vm)
>> return true;
>> }
>>
>> -static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
>> +static bool xe_atomic_for_system(struct xe_vm *vm,
>> + struct xe_bo *bo,
>> + struct xe_vma *vma)
>> {
>> struct xe_device *xe = vm->xe;
>>
>> if (!xe->info.has_device_atomics_on_smem)
>> return false;
>>
>> + if (vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
>> + return true;
>> +
>> /*
>> * If a SMEM+LMEM allocation is backed by SMEM, a device
>> * atomics will cause a gpu page fault and which then
>> @@ -745,7 +750,7 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
>>
>> if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
>> xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
>> - xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
>> + xe_walk.default_system_pte = xe_atomic_for_system(vm, bo, vma) ?
>> XE_USM_PPGTT_PTE_AE : 0;
>> }
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index efcba4b77250..d40111e29bfe 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -717,6 +717,16 @@ static bool supports_4K_migration(struct xe_device *xe)
>> return false;
>> }
>>
>> +static bool needs_ranges_in_vram_to_support_atomic(struct xe_device *xe, struct xe_vma *vma)
>> +{
>> + if (vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_UNDEFINED ||
>> + (xe->info.has_device_atomics_on_smem &&
>> + vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE))
>> + return false;
>> +
>> + return true;
>> +}
>> +
>> /**
>> * xe_svm_range_needs_migrate_to_vram() - SVM range needs migrate to VRAM or not
>> * @range: SVM range for which migration needs to be decided
>> @@ -735,7 +745,7 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
>> if (!range->base.flags.migrate_devmem)
>> return false;
>>
>> - needs_migrate = region;
>> + needs_migrate = needs_ranges_in_vram_to_support_atomic(vm->xe, vma) || region;
>>
>> if (needs_migrate && !IS_DGFX(vm->xe)) {
>> drm_warn(&vm->xe->drm, "Platform doesn't support VRAM\n");
>> @@ -828,7 +838,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>>
>> }
>>
>> - if (atomic)
>> + if (atomic && needs_ranges_in_vram_to_support_atomic(vm->xe, vma))
>> ctx.vram_only = 1;
>>
>> range_debug(range, "GET PAGES");
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 92b8e0cac063..0f9c45ce82b4 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2930,6 +2930,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
>> for (i = 0; i < op->prefetch_range.ranges_count; i++) {
>> svm_range = xa_load(&op->prefetch_range.range, i);
>> if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
>> + region = region ? region : 1;
>> tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
>> err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
>> if (err) {
>> @@ -2938,6 +2939,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
>> return -ENODATA;
>> }
>> xe_svm_range_debug(svm_range, "PREFETCH - RANGE MIGRATED TO VRAM");
>> + ctx.vram_only = 1;
>> }
>>
>> err = xe_svm_range_get_pages(vm, svm_range, &ctx);
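A note on the hunk above: when the user prefetches to SMEM (region == 0) but the madvise atomic policy still demands VRAM, the code falls back to the first VRAM region and then pins the subsequent page lookup to VRAM via the vram_only flag introduced earlier in this series. Condensed view (names as in the patch; the fallback to region 1, i.e. VRAM0, is this patch's choice, not a general rule, and the debug print is elided):

    if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
            region = region ? region : 1;   /* SMEM requested, atomics need VRAM */
            tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
            err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
            if (err)
                    return -ENODATA;
            ctx.vram_only = 1;              /* later get_pages must see VRAM backing */
    }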
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index ef50031649e0..7e1a95106cb9 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -69,7 +69,16 @@ static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
>> struct xe_vma **vmas, int num_vmas,
>> struct drm_xe_madvise_ops ops)
>> {
>> - /* Implementation pending */
>> + int i;
>> +
>> + xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_ATOMIC);
>> + xe_assert(vm->xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_UNDEFINED &&
>> + ops.atomic.val <= DRM_XE_VMA_ATOMIC_CPU);
>> + vm_dbg(&xe->drm, "attr_value = %d", ops.atomic.val);
>
> Again, I'm unsure if this debug message has a ton of value without
> knowing the VMA info.
Agreed, will address it in all places.
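
Perhaps something along these lines in the next revision, so the print carries the affected range (a sketch only; assumes the xe_vma_start()/xe_vma_end() helpers from xe_vm.h are usable here):

    for (i = 0; i < num_vmas; i++) {
            vmas[i]->attr.atomic_access = ops.atomic.val;
            vm_dbg(&xe->drm, "atomic madvise: vma [0x%016llx-0x%016llx] val=%d",
                   xe_vma_start(vmas[i]), xe_vma_end(vmas[i]), ops.atomic.val);
    }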
>
> Matt
>
>> +
>> + for (i = 0; i < num_vmas; i++)
>> + vmas[i]->attr.atomic_access = ops.atomic.val;
>> + /*TODO: handle bo backed vmas */
>> return 0;
>> }
>>
>> --
>> 2.34.1
>>
Thread overview: 120+ messages
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 01/32] drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 02/32] drm/xe: Make xe_svm_alloc_vram public Himal Prasad Ghimiray
2025-04-17 2:50 ` Matthew Brost
2025-04-21 4:06 ` Ghimiray, Himal Prasad
2025-04-07 10:16 ` [PATCH v2 03/32] drm/xe/svm: Helper to add tile masks to svm ranges Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 04/32] drm/xe/svm: Make to_xe_range a public function Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 05/32] drm/xe/svm: Make xe_svm_range_* end/start/size public Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 06/32] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value Himal Prasad Ghimiray
2025-04-17 0:10 ` Matthew Brost
2025-04-21 4:09 ` Ghimiray, Himal Prasad
2025-04-07 10:16 ` [PATCH v2 07/32] drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch Himal Prasad Ghimiray
2025-04-17 2:53 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 08/32] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr Himal Prasad Ghimiray
2025-04-07 22:42 ` kernel test robot
2025-04-07 10:16 ` [PATCH v2 09/32] drm/xe/svm: Allow unaligned addresses and ranges for prefetch Himal Prasad Ghimiray
2025-04-17 2:53 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 10/32] drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm Himal Prasad Ghimiray
2025-04-17 2:57 ` Matthew Brost
2025-04-21 4:30 ` Ghimiray, Himal Prasad
2025-04-07 10:16 ` [PATCH v2 11/32] drm/xe/svm: Add function to determine if range needs VRAM migration Himal Prasad Ghimiray
2025-04-17 3:05 ` Matthew Brost
2025-04-21 4:52 ` Ghimiray, Himal Prasad
2025-04-07 10:16 ` [PATCH v2 12/32] drm/gpusvm: Introduce vram_only flag for VRAM allocation Himal Prasad Ghimiray
2025-04-17 3:07 ` Matthew Brost
2025-04-21 4:55 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram Himal Prasad Ghimiray
2025-04-17 4:19 ` Matthew Brost
2025-04-21 4:58 ` Ghimiray, Himal Prasad
2025-04-21 6:29 ` Ghimiray, Himal Prasad
2025-04-22 15:25 ` Matthew Brost
2025-04-22 15:27 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges Himal Prasad Ghimiray
2025-04-17 4:54 ` Matthew Brost
2025-04-24 10:03 ` Ghimiray, Himal Prasad
2025-04-24 23:48 ` Matthew Brost
2025-04-28 6:44 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 15/32] drm/xe/vm: Add debug prints for SVM range prefetch Himal Prasad Ghimiray
2025-04-17 4:56 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops Himal Prasad Ghimiray
2025-04-07 10:30 ` Boris Brezillon
2025-05-26 13:48 ` Ghimiray, Himal Prasad
2025-04-07 22:42 ` kernel test robot
2025-04-07 10:17 ` [PATCH v2 17/32] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
2025-04-17 18:19 ` Souza, Jose
2025-04-17 18:24 ` Souza, Jose
2025-04-22 15:34 ` Matthew Brost
2025-04-22 15:55 ` Souza, Jose
2025-04-22 16:19 ` Matthew Brost
2025-04-22 15:40 ` Matthew Brost
2025-04-22 16:02 ` Souza, Jose
2025-04-22 16:12 ` Matthew Brost
2025-04-22 16:16 ` Souza, Jose
2025-05-02 14:00 ` Thomas Hellström
2025-05-20 8:13 ` Ghimiray, Himal Prasad
2025-05-20 8:49 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
2025-05-14 18:36 ` Matthew Brost
2025-05-20 9:27 ` Ghimiray, Himal Prasad
2025-05-27 17:37 ` Matthew Brost
2025-05-28 5:33 ` Ghimiray, Himal Prasad
2025-05-28 16:09 ` Matthew Brost
2025-05-28 16:16 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 19/32] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
2025-05-14 18:37 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 20/32] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
2025-05-13 2:36 ` Matthew Brost
2025-05-14 18:40 ` Matthew Brost
2025-05-20 9:28 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
2025-04-08 1:49 ` kernel test robot
2025-05-14 18:47 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 22/32] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
2025-05-14 19:01 ` Matthew Brost
2025-05-20 9:46 ` Ghimiray, Himal Prasad
2025-05-14 19:02 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 23/32] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
2025-05-14 21:41 ` Matthew Brost
2025-05-20 10:15 ` Ghimiray, Himal Prasad
2025-05-28 5:22 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 24/32] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
2025-05-14 19:20 ` Matthew Brost
2025-05-20 10:21 ` Ghimiray, Himal Prasad
2025-05-27 17:32 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 25/32] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
2025-05-14 22:21 ` Matthew Brost
2025-05-20 10:22 ` Ghimiray, Himal Prasad [this message]
2025-04-07 10:17 ` [PATCH v2 26/32] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
2025-05-14 22:04 ` Matthew Brost
2025-05-21 8:50 ` Ghimiray, Himal Prasad
2025-05-21 16:51 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 27/32] drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute Himal Prasad Ghimiray
2025-05-14 21:52 ` Matthew Brost
2025-05-21 8:51 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 28/32] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
2025-05-14 21:05 ` Matthew Brost
2025-05-21 8:52 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 29/32] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
2025-05-14 22:17 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 30/32] drm/xe/uapi: Add uapi for vma count and mem attributes Himal Prasad Ghimiray
2025-05-14 21:08 ` Matthew Brost
2025-05-21 8:54 ` Ghimiray, Himal Prasad
2025-05-28 16:18 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 31/32] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
2025-05-14 21:10 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 32/32] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
2025-05-14 22:31 ` Matthew Brost
2025-05-21 9:13 ` Ghimiray, Himal Prasad
2025-04-07 14:07 ` ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev3) Patchwork
2025-04-07 14:07 ` ✗ CI.checkpatch: warning " Patchwork
2025-04-07 14:09 ` ✓ CI.KUnit: success " Patchwork
2025-04-07 14:12 ` ✗ CI.Build: failure " Patchwork
2025-04-09 5:11 ` ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev4) Patchwork
2025-04-09 5:11 ` ✗ CI.checkpatch: warning " Patchwork
2025-04-09 5:12 ` ✓ CI.KUnit: success " Patchwork
2025-04-09 5:29 ` ✓ CI.Build: " Patchwork
2025-04-09 5:31 ` ✗ CI.Hooks: failure " Patchwork
2025-04-09 5:32 ` ✗ CI.checksparse: warning " Patchwork
2025-04-09 5:52 ` ✓ Xe.CI.BAT: success " Patchwork
2025-04-09 7:00 ` ✗ Xe.CI.Full: failure " Patchwork