Intel-XE Archive on lore.kernel.org
From: "Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <thomas.hellstrom@linux.intel.com>
Subject: Re: [PATCH v4 11/20] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
Date: Mon, 23 Jun 2025 11:48:18 +0530	[thread overview]
Message-ID: <068adc36-b013-4fdf-8633-74a8887b3de7@intel.com> (raw)
In-Reply-To: <aFjrgNVdY6rOPezk@lstrano-desk.jf.intel.com>



On 23-06-2025 11:22, Matthew Brost wrote:
> On Fri, Jun 13, 2025 at 06:25:49PM +0530, Himal Prasad Ghimiray wrote:
>> In the case of the MADVISE ioctl, if the start or end address falls
>> within a VMA and existing SVM ranges are present, remove the existing
>> SVM mappings. Then continue with ops_parse, which creates new VMAs by
>> REMAP-unmapping the old one.
>>
>> v2 (Matthew Brost)
>> - Use vops flag to call unmapping of ranges in vm_bind_ioctl_ops_parse
>> - Rename the function
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_svm.c | 27 +++++++++++++++++++++++++++
>>   drivers/gpu/drm/xe/xe_svm.h |  8 ++++++++
>>   drivers/gpu/drm/xe/xe_vm.c  |  8 ++++++--
>>   3 files changed, 41 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index 19420635f1fa..df6992ee2e2d 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -935,6 +935,33 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
>>   	return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
>>   }
>>   
>> +/**
>> + * xe_svm_unmap_address_range - UNMAP SVM mappings and ranges
>> + * @vm: the &xe_vm struct
>> + * @start: start addr
>> + * @end: end addr
>> + *
>> + * This function UNMAPs SVM ranges if the start or end address falls inside them.
>> + */
>> +void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
>> +{
>> +	struct drm_gpusvm_notifier *notifier, *next;
>> +
>> +	lockdep_assert_held_write(&vm->lock);
>> +
>> +	drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
>> +		struct drm_gpusvm_range *range, *__next;
>> +
>> +		drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
>> +			if (start > drm_gpusvm_range_start(range) ||
>> +			    end < drm_gpusvm_range_end(range)) {
>> +				if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
>> +					drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
> 
> I think you could use xe_svm_range_migrate_to_smem here, but I also don't
> think eviction is strictly required here. This is akin to a partial unmap,
> and we don't evict there. Is there any reason I'm missing?

If the previous ranges had devmem pages allocated and eviction did not
occur, subsequent VRAM allocations for the smaller ranges failed.

Scenario:

- A 2 MiB range existed with a VRAM allocation.
- A madvise call triggered a split, invoking xe_svm_unmap_address_range().
- Without eviction, the 64 KiB sub-ranges failed to allocate VRAM on
  subsequent page faults.
- As a result, bindings were forced to system memory (SMEM) instead of
  VRAM.
>
> Matt
> 
>> +				__xe_svm_garbage_collector(vm, to_xe_range(range));
>> +			}
>> +		}
>> +	}
>> +}
>> +
>>   /**
>>    * xe_svm_bo_evict() - SVM evict BO to system memory
>>    * @bo: BO to evict
>> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
>> index af8f285b6caa..4e5d42323679 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.h
>> +++ b/drivers/gpu/drm/xe/xe_svm.h
>> @@ -92,6 +92,9 @@ bool xe_svm_range_validate(struct xe_vm *vm,
>>   u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end,  struct xe_vma *vma);
>>   
>>   u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end);
>> +
>> +void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
>> +
>>   /**
>>    * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
>>    * @range: SVM range
>> @@ -312,6 +315,11 @@ u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
>>   	return 0;
>>   }
>>   
>> +static inline
>> +void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
>> +{
>> +}
>> +
>>   #define xe_svm_assert_in_notifier(...) do {} while (0)
>>   #define xe_svm_range_has_dma_mapping(...) false
>>   
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index e059d9810d26..0872df8d0b15 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2663,8 +2663,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>>   				end = op->base.remap.next->va.addr;
>>   
>>   			if (xe_vma_is_cpu_addr_mirror(old) &&
>> -			    xe_svm_has_mapping(vm, start, end))
>> -				return -EBUSY;
>> +			    xe_svm_has_mapping(vm, start, end)) {
>> +				if (vops->flags & XE_VMA_OPS_FLAG_MADVISE)
>> +					xe_svm_unmap_address_range(vm, start, end);
>> +				else
>> +					return -EBUSY;
>> +			}
>>   
>>   			op->remap.start = xe_vma_start(old);
>>   			op->remap.range = xe_vma_size(old);
>> -- 
>> 2.34.1
>>

