Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	<thomas.hellstrom@linux.intel.com>, <oak.zeng@intel.com>
Subject: Re: [RFC 08/29] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr
Date: Thu, 3 Apr 2025 14:02:02 -0700	[thread overview]
Message-ID: <Z+73SkAyGHotMRry@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20250314080226.2059819-9-himal.prasad.ghimiray@intel.com>

On Fri, Mar 14, 2025 at 01:32:05PM +0530, Himal Prasad Ghimiray wrote:
> This update renames the lookup_vma function to xe_vm_find_vma_by_addr and
> makes it accessible externally. The function, which looks up a VMA by
> its address within a specified VM, will be utilized in upcoming patches.
> 
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

Side note: I just noticed there are multiple versions of this patch on the
list but no version numbering, so I accidentally replied to older
revisions. For the next rev, please include versioning.

Matt

> ---
>  drivers/gpu/drm/xe/xe_gt_pagefault.c | 24 +----------------------
>  drivers/gpu/drm/xe/xe_vm.c           | 29 ++++++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_vm.h           |  2 ++
>  3 files changed, 32 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> index c5ad9a0a89c2..3aaf4090fcfe 100644
> --- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
> +++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
> @@ -72,28 +72,6 @@ static bool vma_is_valid(struct xe_tile *tile, struct xe_vma *vma)
>  		!(BIT(tile->id) & vma->tile_invalidated);
>  }
>  
> -static bool vma_matches(struct xe_vma *vma, u64 page_addr)
> -{
> -	if (page_addr > xe_vma_end(vma) - 1 ||
> -	    page_addr + SZ_4K - 1 < xe_vma_start(vma))
> -		return false;
> -
> -	return true;
> -}
> -
> -static struct xe_vma *lookup_vma(struct xe_vm *vm, u64 page_addr)
> -{
> -	struct xe_vma *vma = NULL;
> -
> -	if (vm->usm.last_fault_vma) {   /* Fast lookup */
> -		if (vma_matches(vm->usm.last_fault_vma, page_addr))
> -			vma = vm->usm.last_fault_vma;
> -	}
> -	if (!vma)
> -		vma = xe_vm_find_overlapping_vma(vm, page_addr, SZ_4K);
> -
> -	return vma;
> -}
>  
>  static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
>  		       bool atomic, unsigned int id)
> @@ -231,7 +209,7 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
>  		goto unlock_vm;
>  	}
>  
> -	vma = lookup_vma(vm, pf->page_addr);
> +	vma = xe_vm_find_vma_by_addr(vm, pf->page_addr);
>  	if (!vma) {
>  		err = -EINVAL;
>  		goto unlock_vm;
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index b83154d338c7..07cad2804b14 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2135,6 +2135,35 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
>  	return err;
>  }
>  
> +static bool vma_matches(struct xe_vma *vma, u64 page_addr)
> +{
> +	if (page_addr > xe_vma_end(vma) - 1 ||
> +	    page_addr + SZ_4K - 1 < xe_vma_start(vma))
> +		return false;
> +
> +	return true;
> +}
> +
> +/**
> + * xe_vm_find_vma_by_addr() - Find a VMA by its address
> + *
> + * @vm: the xe_vm the vma belongs to
> + * @page_addr: address to look up
> + */
> +struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr)
> +{
> +	struct xe_vma *vma = NULL;
> +
> +	if (vm->usm.last_fault_vma) {   /* Fast lookup */
> +		if (vma_matches(vm->usm.last_fault_vma, page_addr))
> +			vma = vm->usm.last_fault_vma;
> +	}
> +	if (!vma)
> +		vma = xe_vm_find_overlapping_vma(vm, page_addr, SZ_4K);
> +
> +	return vma;
> +}
> +
>  static const u32 region_to_mem_type[] = {
>  	XE_PL_TT,
>  	XE_PL_VRAM0,
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 0ef811fc2bde..99e164852f63 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -169,6 +169,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>  		!xe_vma_is_cpu_addr_mirror(vma);
>  }
>  
> +struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
> +
>  /**
>   * to_userptr_vma() - Return a pointer to an embedding userptr vma
>   * @vma: Pointer to the embedded struct xe_vma
> -- 
> 2.34.1
> 


Thread overview: 52+ messages
2025-03-14  8:01 [RFC 00/29] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
2025-03-14  8:01 ` [RFC 01/29] drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges Himal Prasad Ghimiray
2025-04-03 20:59   ` Matthew Brost
2025-03-14  8:01 ` [RFC 02/29] drm/xe: Make xe_svm_alloc_vram public Himal Prasad Ghimiray
2025-03-27 22:45   ` Matthew Brost
2025-03-28  7:51     ` Ghimiray, Himal Prasad
2025-03-14  8:02 ` [RFC 03/29] drm/xe/svm: Helper to add tile masks to svm ranges Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 04/29] drm/xe/svm: Make to_xe_range a public function Himal Prasad Ghimiray
2025-03-28  2:57   ` Matthew Brost
2025-03-14  8:02 ` [RFC 05/29] drm/xe/svm: Make xe_svm_range_* end/start/size public Himal Prasad Ghimiray
2025-03-27 22:46   ` Matthew Brost
2025-03-14  8:02 ` [RFC 06/29] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value Himal Prasad Ghimiray
2025-03-28  2:56   ` Matthew Brost
2025-03-28  7:52     ` Ghimiray, Himal Prasad
2025-03-14  8:02 ` [RFC 07/29] drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch Himal Prasad Ghimiray
2025-03-27 22:49   ` Matthew Brost
2025-03-28  7:53     ` Ghimiray, Himal Prasad
2025-03-14  8:02 ` [RFC 08/29] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr Himal Prasad Ghimiray
2025-04-03 21:02   ` Matthew Brost [this message]
2025-04-07  6:16     ` Ghimiray, Himal Prasad
2025-03-14  8:02 ` [RFC 09/29] drm/xe/svm: Allow unaligned addresses and ranges for prefetch Himal Prasad Ghimiray
2025-04-03 20:52   ` Matthew Brost
2025-04-07  6:15     ` Ghimiray, Himal Prasad
2025-03-14  8:02 ` [RFC 10/29] drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm Himal Prasad Ghimiray
2025-04-03 20:54   ` Matthew Brost
2025-04-07  6:15     ` Ghimiray, Himal Prasad
2025-03-14  8:02 ` [RFC 11/29] drm/xe/svm: Implement prefetch support for SVM ranges Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 12/29] drm/xe/vm: Add debug prints for SVM range prefetch Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 13/29] drm/gpuvm: Introduce MADVISE Operations Himal Prasad Ghimiray
2025-03-14  8:46   ` Ghimiray, Himal Prasad
2025-03-17 14:27   ` Danilo Krummrich
2025-03-18 11:58     ` Ghimiray, Himal Prasad
2025-03-14  8:02 ` [RFC 14/29] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 15/29] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 16/29] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 17/29] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 18/29] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 19/29] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 20/29] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 21/29] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 22/29] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 23/29] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 24/29] drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 25/29] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
2025-05-14 19:26   ` Matthew Brost
2025-03-14  8:02 ` [RFC 26/29] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
2025-03-14  8:02 ` [RFC 27/29] drm/xe/uapi: Add uapi for vma count and mem attributes Himal Prasad Ghimiray
2025-05-14 19:39   ` Matthew Brost
2025-03-14  8:02 ` [RFC 28/29] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
2025-05-14 19:35   ` Matthew Brost
2025-03-14  8:02 ` [RFC 29/29] drm/xe/bo : Update atomic_access attribute on madvise Himal Prasad Ghimiray
2025-03-14 15:00 ` ✗ CI.Patch_applied: failure for PREFETCH and MADVISE for SVM ranges (rev2) Patchwork
