Intel-XE Archive on lore.kernel.org
From: "Ghimiray, Himal Prasad" <himal.prasad.ghimiray@intel.com>
To: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	intel-xe@lists.freedesktop.org
Cc: <matthew.brost@intel.com>
Subject: Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
Date: Tue, 20 May 2025 14:19:35 +0530	[thread overview]
Message-ID: <222cf26d-d0ba-4047-a07c-7599c07f5a44@intel.com> (raw)
In-Reply-To: <dfa4302fee599374c8bffee9e9ce8dcb1598a23d.camel@linux.intel.com>



On 02-05-2025 19:30, Thomas Hellström wrote:
> On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
>> This commit introduces a new madvise interface to support
>> driver-specific ioctl operations. The madvise interface allows for
>> more
>> efficient memory management by providing hints to the driver about
>> the
>> expected memory usage and pte update policy for gpuvma.
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>>   include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 97 insertions(+)
>>
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index 9c08738c3b91..aaf515df3a83 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -81,6 +81,7 @@ extern "C" {
>>    *  - &DRM_IOCTL_XE_EXEC
>>    *  - &DRM_IOCTL_XE_WAIT_USER_FENCE
>>    *  - &DRM_IOCTL_XE_OBSERVATION
>> + *  - &DRM_IOCTL_XE_MADVISE
>>    */
>>   
>>   /*
>> @@ -102,6 +103,7 @@ extern "C" {
>>   #define DRM_XE_EXEC			0x09
>>   #define DRM_XE_WAIT_USER_FENCE		0x0a
>>   #define DRM_XE_OBSERVATION		0x0b
>> +#define DRM_XE_MADVISE			0x0c
>>   
>>   /* Must be kept compact -- no holes */
>>   
>> @@ -117,6 +119,7 @@ extern "C" {
>>   #define DRM_IOCTL_XE_EXEC			DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
>>   #define DRM_IOCTL_XE_WAIT_USER_FENCE		DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
>>   #define DRM_IOCTL_XE_OBSERVATION		DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
>> +#define DRM_IOCTL_XE_MADVISE			DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
>>   
>>   /**
>>    * DOC: Xe IOCTL Extensions
>> @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
>>   	__u64 sampling_rates[];
>>   };
>>   
>> +struct drm_xe_madvise_ops {
> 
> Suggest using extensions also for the ops, like for vm_bind, since we
> might come up with complicated ops in the future that don't fit the
> union + resvd below.
> 
>> +	/** @start: start of the virtual address range */
>> +	__u64 start;
>> +
>> +	/** @range: size of the virtual address range */
>> +	__u64 range;
>> +
>> +#define DRM_XE_VMA_ATTR_PREFERRED_LOC	0
> 
> Is UMD currently really using and exercising PREFERRED_LOC? If not, I
> suggest removing this op and invent a reasonable default behaviour
> until multi-device is in place.

Missed this in the previous reply.

The default behaviour is preferred location = VRAM (tile0). As of now, in
the absence of multi-device support, UMDs use this attribute to fall back
to SMEM as the preferred location by passing an invalid devmem_fd.

Current behaviour: if an invalid devmem_fd is passed, SMEM is used as the
preferred location. With multi-device in place: if an invalid devmem_fd is
passed, local VRAM is used as the preferred location.

> 
>> +#define DRM_XE_VMA_ATTR_ATOMIC		1
>> +#define DRM_XE_VMA_ATTR_PAT		2
>> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE	3
>> +	/** @type: type of attribute */
>> +	__u32 type;
>> +
>> +	/** @pad: MBZ */
>> +	__u32 pad;
>> +
>> +	union {
>> +		struct {
>> +#define DRM_XE_VMA_ATOMIC_UNDEFINED	0
>> +#define DRM_XE_VMA_ATOMIC_DEVICE	1
>> +#define DRM_XE_VMA_ATOMIC_GLOBAL	2
>> +#define DRM_XE_VMA_ATOMIC_CPU		3
>> +		/** @val: value of atomic operation */
>> +			__u32 val;
>> +
>> +		/** @reserved: Reserved */
>> +			__u32 reserved;
>> +		} atomic;
>> +
>> +		struct {
>> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED	0
>> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED	1
>> +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED	2
> 
> I think the purged state, at least on i915 was only known to the KMD
> (so shouldn't really be visible in this header). Also we should
> probably define the semantics here if
> 
> a) There are multiple gpu vms with conflicting purgeable state.
> b) What happens if we call dontneed and the bo is deeply pipelined?
> c) What if a willneed madvise fails due to the bo being purged? And
> that op is embedded in an array of unrelated ops? Should it really fail
> the whole IOCTL?
> 
>> +		/** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
>> +			__u32 val;
>> +
>> +		/** @reserved: Reserved */
>> +			__u32 reserved;
>> +		} purge_state_val;
>> +
>> +		struct {
>> +			/** @val: PAT index */
>> +			__u32 val;
>> +
>> +			/** @reserved: Reserved */
>> +			__u32 reserved;
>> +		} pat_index;
>> +
>> +		/** @preferred_mem_loc: preferred memory location */
>> +		struct {
>> +			/** @devmem_fd: device memory fd; an invalid fd selects SMEM */
>> +			__u32 devmem_fd;
>> +
>> +#define MIGRATE_ALL_PAGES 0
>> +#define MIGRATE_ONLY_SYSTEM_PAGES 1
>> +			__u32 migration_policy;
>> +		} preferred_mem_loc;
>> +	};
>> +
>> +	/** @reserved: Reserved */
>> +	__u64 reserved[2];
>> +};
>> +
>> +/**
>> + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
>> + *
>> + * Set memory attributes to a virtual address range
>> + */
>> +struct drm_xe_madvise {
>> +	/** @extensions: Pointer to the first extension struct, if any */
>> +	__u64 extensions;
>> +
>> +	/** @vm_id: vm_id of the virtual range */
>> +	__u32 vm_id;
>> +
>> +	/** @num_ops: number of madvises in ioctl */
>> +	__u32 num_ops;
> 
> Should we really support an array of ops here given the experience we
> had with rollbacks on VM_bind? Also WRT this, also please see the
> purgeable state above.
> 
>> +
>> +	union {
>> +		/** @ops: used if num_ops == 1 */
>> +		struct drm_xe_madvise_ops ops;
>> +
>> +		/**
>> +		 * @vector_of_ops: userptr to array of struct
>> +		 * drm_xe_madvise_ops if num_ops > 1
>> +		 */
>> +		__u64 vector_of_ops;
>> +	};
>> +
>> +	/** @reserved: Reserved */
>> +	__u64 reserved[2];
>> +
>> +};
>> +
>>   #if defined(__cplusplus)
>>   }
>>   #endif
> 
> /Thomas
> 


