* [PATCH v6 01/12] drm/xe/uapi: Add UAPI support for purgeable buffer objects
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
@ 2026-03-03 15:19 ` Arvind Yadav
2026-03-03 15:53 ` Souza, Jose
2026-03-10 8:31 ` Thomas Hellström
2026-03-03 15:19 ` [PATCH v6 02/12] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
` (15 subsequent siblings)
16 siblings, 2 replies; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:19 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra, Jose Souza
From: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Extend the DRM_XE_MADVISE ioctl to support purgeable buffer object
management by adding DRM_XE_VMA_ATTR_PURGEABLE_STATE attribute type.
This allows userspace applications to provide memory usage hints to
the kernel for better memory management under pressure:
- WILLNEED: Buffer is needed and should not be purged. If the BO was
previously purged, retained field returns 0 indicating backing store
was lost (once purged, always purged semantics matching i915).
- DONTNEED: Buffer is not currently needed and may be purged by the
kernel under memory pressure to free resources. Only applies to
non-shared BOs.
To prevent undefined behavior, the following operations are blocked
while a BO is in DONTNEED state:
- New mmap() operations return -EBUSY
- VM_BIND operations return -EBUSY
- New dma-buf exports return -EBUSY
- CPU page faults return SIGBUS
- GPU page faults fail with -EACCES
This ensures applications cannot use a BO while marked as DONTNEED,
preventing erratic behavior when the kernel purges the backing store.
The implementation includes a 'retained' output field (matching i915's
drm_i915_gem_madvise.retained) that indicates whether the BO's backing
store still exists (1) or has been purged (0).
Added DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT flag to allow
userspace to detect kernel support for purgeable buffer objects
before attempting to use the feature.
v2:
- Add PURGED state for read-only status, change ioctl to DRM_IOWR,
add retained field for i915 compatibility
v3:
- UAPI rule should not be changed (Matthew Brost)
- Make 'retained' a userptr (Matthew Brost)
v4:
- Drop the '__u64 reserved' field, since the purge_state_val arm must
not grow larger than the existing union (16 bytes). (Matt)
v5:
- Update UAPI documentation to clarify retained must be initialized
to 0 (Thomas)
v6:
- Document DONTNEED BO access blocking behavior to prevent undefined
behavior and clarify uAPI contract (Thomas, Matt)
- Add query flag DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for
feature detection. (Jose)
- Rename retained to retained_ptr. (Jose)
Cc: Jose Souza <jose.souza@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
include/uapi/drm/xe_drm.h | 60 +++++++++++++++++++++++++++++++++++++++
1 file changed, 60 insertions(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index ef2565048bdf..42aedd30189d 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -425,6 +425,7 @@ struct drm_xe_query_config {
#define DRM_XE_QUERY_CONFIG_FLAG_HAS_LOW_LATENCY (1 << 1)
#define DRM_XE_QUERY_CONFIG_FLAG_HAS_CPU_ADDR_MIRROR (1 << 2)
#define DRM_XE_QUERY_CONFIG_FLAG_HAS_NO_COMPRESSION_HINT (1 << 3)
+ #define DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT (1 << 4)
#define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT 2
#define DRM_XE_QUERY_CONFIG_VA_BITS 3
#define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 4
@@ -2067,6 +2068,7 @@ struct drm_xe_query_eu_stall {
* - DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC: Set preferred memory location.
* - DRM_XE_MEM_RANGE_ATTR_ATOMIC: Set atomic access policy.
* - DRM_XE_MEM_RANGE_ATTR_PAT: Set page attribute table index.
+ * - DRM_XE_VMA_ATTR_PURGEABLE_STATE: Set purgeable state for BOs.
*
* Example:
*
@@ -2099,6 +2101,7 @@ struct drm_xe_madvise {
#define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC 0
#define DRM_XE_MEM_RANGE_ATTR_ATOMIC 1
#define DRM_XE_MEM_RANGE_ATTR_PAT 2
+#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
/** @type: type of attribute */
__u32 type;
@@ -2189,6 +2192,63 @@ struct drm_xe_madvise {
/** @pat_index.reserved: Reserved */
__u64 reserved;
} pat_index;
+
+ /**
+ * @purge_state_val: Purgeable state configuration
+ *
+ * Used when @type == DRM_XE_VMA_ATTR_PURGEABLE_STATE.
+ *
+ * Configures the purgeable state of buffer objects in the specified
+ * virtual address range. This allows applications to hint to the kernel
+ * about a BO's usage patterns for better memory management.
+ *
+ * Supported values for @purge_state_val.val:
+ * - DRM_XE_VMA_PURGEABLE_STATE_WILLNEED (0): Marks BO as needed.
+ * If BO was previously purged, returns retained=0 (backing store lost).
+ *
+ * - DRM_XE_VMA_PURGEABLE_STATE_DONTNEED (1): Marks BO as not currently
+ * needed. Kernel may purge it under memory pressure to reclaim memory.
+ * Only applies to non-shared BOs. Returns retained=1 if not purged yet.
+ *
+ * Important: Once marked as DONTNEED, touching the BO's memory
+ * is undefined behavior. It may succeed temporarily (before the
+ * kernel purges the backing store) but will suddenly fail once
+ * the BO transitions to PURGED state.
+ *
+ * The following operations are blocked in DONTNEED state to
+ * prevent the BO from being re-mapped after madvise:
+ * - New mmap() calls: Fail with -EBUSY
+ * - VM_BIND operations: Fail with -EBUSY
+ * - New dma-buf exports: Fail with -EBUSY
+ * - CPU page faults (existing mmap): Fail with SIGBUS
+ * - GPU page faults (fault-mode VMs): Fail with -EACCES
+ */
+ struct {
+#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
+#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
+ /** @purge_state_val.val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
+ __u32 val;
+
+ /** @purge_state_val.pad: MBZ */
+ __u32 pad;
+ /**
+ * @purge_state_val.retained_ptr: Pointer to a __u32 output
+ * field for backing store status.
+ *
+ * Userspace must initialize the __u32 value at this address
+ * to 0 before the ioctl. Kernel writes a __u32 after the
+ * operation:
+ * - 1 if backing store exists (not purged)
+ * - 0 if backing store was purged
+ *
+ * If userspace fails to initialize to 0, ioctl returns -EINVAL.
+ * This ensures a safe default (0 = assume purged) if kernel
+ * cannot write the result.
+ *
+ * Similar to i915's drm_i915_gem_madvise.retained field.
+ */
+ __u64 retained_ptr;
+ } purge_state_val;
};
/** @reserved: Reserved */
--
2.43.0
* Re: [PATCH v6 01/12] drm/xe/uapi: Add UAPI support for purgeable buffer objects
2026-03-03 15:19 ` [PATCH v6 01/12] drm/xe/uapi: Add UAPI " Arvind Yadav
@ 2026-03-03 15:53 ` Souza, Jose
2026-03-20 4:00 ` Yadav, Arvind
2026-03-10 8:31 ` Thomas Hellström
1 sibling, 1 reply; 39+ messages in thread
From: Souza, Jose @ 2026-03-03 15:53 UTC (permalink / raw)
To: intel-xe@lists.freedesktop.org, Yadav, Arvind
Cc: Brost, Matthew, Mishra, Pallavi, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On Tue, 2026-03-03 at 20:49 +0530, Arvind Yadav wrote:
> From: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>
> Extend the DRM_XE_MADVISE ioctl to support purgeable buffer object
> management by adding DRM_XE_VMA_ATTR_PURGEABLE_STATE attribute type.
>
> This allows userspace applications to provide memory usage hints to
> the kernel for better memory management under pressure:
>
> - WILLNEED: Buffer is needed and should not be purged. If the BO was
> previously purged, retained field returns 0 indicating backing
> store
> was lost (once purged, always purged semantics matching i915).
>
> - DONTNEED: Buffer is not currently needed and may be purged by the
> kernel under memory pressure to free resources. Only applies to
> non-shared BOs.
>
> To prevent undefined behavior, the following operations are blocked
> while a BO is in DONTNEED state:
> - New mmap() operations return -EBUSY
> - VM_BIND operations return -EBUSY
> - New dma-buf exports return -EBUSY
> - CPU page faults return SIGBUS
> - GPU page faults fail with -EACCES
>
> This ensures applications cannot use a BO while marked as DONTNEED,
> preventing erratic behavior when the kernel purges the backing
> store.
>
> The implementation includes a 'retained' output field (matching
> i915's
> drm_i915_gem_madvise.retained) that indicates whether the BO's
> backing
> store still exists (1) or has been purged (0).
>
> Added DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT flag to allow
> userspace to detect kernel support for purgeable buffer objects
> before attempting to use the feature.
>
> v2:
> - Add PURGED state for read-only status, change ioctl to DRM_IOWR,
> add retained field for i915 compatibility
>
> v3:
> - UAPI rule should not be changed (Matthew Brost)
> - Make 'retained' a userptr (Matthew Brost)
>
> v4:
> - You cannot make this part of the union (purge_state_val) larger
> than the existing union (16 bytes). So just drop the '__u64
> reserved'
> field. (Matt)
>
> v5:
> - Update UAPI documentation to clarify retained must be initialized
> to 0(Thomas)
>
> v6:
> - Document DONTNEED BO access blocking behavior to prevent
> undefined
> behavior and clarify uAPI contract (Thomas, Matt)
> - Add query flag DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for
> feature detection. (Jose)
> - Rename retained to retained_ptr. (Jose)
>
> Cc: Jose Souza <jose.souza@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Himal Prasad Ghimiray
> <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> include/uapi/drm/xe_drm.h | 60
> +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 60 insertions(+)
>
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index ef2565048bdf..42aedd30189d 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -425,6 +425,7 @@ struct drm_xe_query_config {
> #define DRM_XE_QUERY_CONFIG_FLAG_HAS_LOW_LATENCY (1
> << 1)
> #define DRM_XE_QUERY_CONFIG_FLAG_HAS_CPU_ADDR_MIRROR (1
> << 2)
> #define DRM_XE_QUERY_CONFIG_FLAG_HAS_NO_COMPRESSION_HINT (1
> << 3)
> + #define DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT (1
> << 4)
> #define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT 2
> #define DRM_XE_QUERY_CONFIG_VA_BITS 3
> #define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 4
> @@ -2067,6 +2068,7 @@ struct drm_xe_query_eu_stall {
> * - DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC: Set preferred memory
> location.
> * - DRM_XE_MEM_RANGE_ATTR_ATOMIC: Set atomic access policy.
> * - DRM_XE_MEM_RANGE_ATTR_PAT: Set page attribute table index.
> + * - DRM_XE_VMA_ATTR_PURGEABLE_STATE: Set purgeable state for BOs.
> *
> * Example:
> *
> @@ -2099,6 +2101,7 @@ struct drm_xe_madvise {
> #define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC 0
> #define DRM_XE_MEM_RANGE_ATTR_ATOMIC 1
> #define DRM_XE_MEM_RANGE_ATTR_PAT 2
> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
> /** @type: type of attribute */
> __u32 type;
>
> @@ -2189,6 +2192,63 @@ struct drm_xe_madvise {
> /** @pat_index.reserved: Reserved */
> __u64 reserved;
> } pat_index;
> +
> + /**
> + * @purge_state_val: Purgeable state configuration
> + *
> + * Used when @type ==
> DRM_XE_VMA_ATTR_PURGEABLE_STATE.
> + *
> + * Configures the purgeable state of buffer objects
> in the specified
> + * virtual address range. This allows applications
> to hint to the kernel
> + * about bo's usage patterns for better memory
> management.
> + *
> + * Supported values for @purge_state_val.val:
> + * - DRM_XE_VMA_PURGEABLE_STATE_WILLNEED (0): Marks
> BO as needed.
> + * If BO was previously purged, returns
> retained=0 (backing store lost).
> + *
> + * - DRM_XE_VMA_PURGEABLE_STATE_DONTNEED (1): Marks
> BO as not currently
> + * needed. Kernel may purge it under memory
> pressure to reclaim memory.
> + * Only applies to non-shared BOs. Returns
> retained=1 if not purged yet.
Nothing is returned; I think this needs to be updated to reflect that
the __u32 at retained_ptr is updated to 1, same in
DRM_XE_VMA_PURGEABLE_STATE_WILLNEED.
> + *
> + * Important: Once marked as DONTNEED, touching
> the BO's memory
> + * is undefined behavior. It may succeed
> temporarily (before the
> + * kernel purges the backing store) but will
> suddenly fail once
> + * the BO transitions to PURGED state.
Just to make sure I understood correctly: if I want to change from
DONTNEED to WILLNEED, I do the uAPI call and then check whether the
__u32 at retained_ptr is set to 1?
By default all VMAs are in the WILLNEED state, right?
> + *
> + * The following operations are blocked in
> DONTNEED state to
> + * prevent the BO from being re-mapped after
> madvise:
> + * - New mmap() calls: Fail with -EBUSY
> + * - VM_BIND operations: Fail with -EBUSY
> + * - New dma-buf exports: Fail with -EBUSY
> + * - CPU page faults (existing mmap): Fail with
> SIGBUS
> + * - GPU page faults (fault-mode VMs): Fail with
> -EACCES
> + */
> + struct {
> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> + /** @purge_state_val.val: value for
> DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> + __u32 val;
> +
> + /** @purge_state_val.pad: MBZ */
> + __u32 pad;
> + /**
> + * @purge_state_val.retained_ptr: Pointer to
> a __u32 output
> + * field for backing store status.
> + *
> + * Userspace must initialize the __u32 value
> at this address
> + * to 0 before the ioctl. Kernel writes a
> __u32 after the
> + * operation:
> + * - 1 if backing store exists (not purged)
> + * - 0 if backing store was purged
> + *
> + * If userspace fails to initialize to 0,
> ioctl returns -EINVAL.
> + * This ensures a safe default (0 = assume
> purged) if kernel
> + * cannot write the result.
> + *
> + * Similar to i915's
> drm_i915_gem_madvise.retained field.
> + */
> + __u64 retained_ptr;
> + } purge_state_val;
> };
>
> /** @reserved: Reserved */
* Re: [PATCH v6 01/12] drm/xe/uapi: Add UAPI support for purgeable buffer objects
2026-03-03 15:53 ` Souza, Jose
@ 2026-03-20 4:00 ` Yadav, Arvind
0 siblings, 0 replies; 39+ messages in thread
From: Yadav, Arvind @ 2026-03-20 4:00 UTC (permalink / raw)
To: Souza, Jose, intel-xe@lists.freedesktop.org
Cc: Brost, Matthew, Mishra, Pallavi, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On 03-03-2026 21:23, Souza, Jose wrote:
> On Tue, 2026-03-03 at 20:49 +0530, Arvind Yadav wrote:
>> From: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>
>> Extend the DRM_XE_MADVISE ioctl to support purgeable buffer object
>> management by adding DRM_XE_VMA_ATTR_PURGEABLE_STATE attribute type.
>>
>> This allows userspace applications to provide memory usage hints to
>> the kernel for better memory management under pressure:
>>
>> - WILLNEED: Buffer is needed and should not be purged. If the BO was
>> previously purged, retained field returns 0 indicating backing
>> store
>> was lost (once purged, always purged semantics matching i915).
>>
>> - DONTNEED: Buffer is not currently needed and may be purged by the
>> kernel under memory pressure to free resources. Only applies to
>> non-shared BOs.
>>
>> To prevent undefined behavior, the following operations are blocked
>> while a BO is in DONTNEED state:
>> - New mmap() operations return -EBUSY
>> - VM_BIND operations return -EBUSY
>> - New dma-buf exports return -EBUSY
>> - CPU page faults return SIGBUS
>> - GPU page faults fail with -EACCES
>>
>> This ensures applications cannot use a BO while marked as DONTNEED,
>> preventing erratic behavior when the kernel purges the backing
>> store.
>>
>> The implementation includes a 'retained' output field (matching
>> i915's
>> drm_i915_gem_madvise.retained) that indicates whether the BO's
>> backing
>> store still exists (1) or has been purged (0).
>>
>> Added DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT flag to allow
>> userspace to detect kernel support for purgeable buffer objects
>> before attempting to use the feature.
>>
>> v2:
>> - Add PURGED state for read-only status, change ioctl to DRM_IOWR,
>> add retained field for i915 compatibility
>>
>> v3:
>> - UAPI rule should not be changed (Matthew Brost)
>> - Make 'retained' a userptr (Matthew Brost)
>>
>> v4:
>> - You cannot make this part of the union (purge_state_val) larger
>> than the existing union (16 bytes). So just drop the '__u64
>> reserved'
>> field. (Matt)
>>
>> v5:
>> - Update UAPI documentation to clarify retained must be initialized
>> to 0(Thomas)
>>
>> v6:
>> - Document DONTNEED BO access blocking behavior to prevent
>> undefined
>> behavior and clarify uAPI contract (Thomas, Matt)
>> - Add query flag DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for
>> feature detection. (Jose)
>> - Rename retained to retained_ptr. (Jose)
>>
>> Cc: Jose Souza <jose.souza@intel.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Signed-off-by: Himal Prasad Ghimiray
>> <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>> include/uapi/drm/xe_drm.h | 60
>> +++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 60 insertions(+)
>>
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index ef2565048bdf..42aedd30189d 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -425,6 +425,7 @@ struct drm_xe_query_config {
>> #define DRM_XE_QUERY_CONFIG_FLAG_HAS_LOW_LATENCY (1
>> << 1)
>> #define DRM_XE_QUERY_CONFIG_FLAG_HAS_CPU_ADDR_MIRROR (1
>> << 2)
>> #define DRM_XE_QUERY_CONFIG_FLAG_HAS_NO_COMPRESSION_HINT (1
>> << 3)
>> + #define DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT (1
>> << 4)
>> #define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT 2
>> #define DRM_XE_QUERY_CONFIG_VA_BITS 3
>> #define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 4
>> @@ -2067,6 +2068,7 @@ struct drm_xe_query_eu_stall {
>> * - DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC: Set preferred memory
>> location.
>> * - DRM_XE_MEM_RANGE_ATTR_ATOMIC: Set atomic access policy.
>> * - DRM_XE_MEM_RANGE_ATTR_PAT: Set page attribute table index.
>> + * - DRM_XE_VMA_ATTR_PURGEABLE_STATE: Set purgeable state for BOs.
>> *
>> * Example:
>> *
>> @@ -2099,6 +2101,7 @@ struct drm_xe_madvise {
>> #define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC 0
>> #define DRM_XE_MEM_RANGE_ATTR_ATOMIC 1
>> #define DRM_XE_MEM_RANGE_ATTR_PAT 2
>> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
>> /** @type: type of attribute */
>> __u32 type;
>>
>> @@ -2189,6 +2192,63 @@ struct drm_xe_madvise {
>> /** @pat_index.reserved: Reserved */
>> __u64 reserved;
>> } pat_index;
>> +
>> + /**
>> + * @purge_state_val: Purgeable state configuration
>> + *
>> + * Used when @type ==
>> DRM_XE_VMA_ATTR_PURGEABLE_STATE.
>> + *
>> + * Configures the purgeable state of buffer objects
>> in the specified
>> + * virtual address range. This allows applications
>> to hint to the kernel
>> + * about bo's usage patterns for better memory
>> management.
>> + *
>> + * Supported values for @purge_state_val.val:
>> + * - DRM_XE_VMA_PURGEABLE_STATE_WILLNEED (0): Marks
>> BO as needed.
>> + * If BO was previously purged, returns
>> retained=0 (backing store lost).
>> + *
>> + * - DRM_XE_VMA_PURGEABLE_STATE_DONTNEED (1): Marks
>> BO as not currently
>> + * needed. Kernel may purge it under memory
>> pressure to reclaim memory.
>> + * Only applies to non-shared BOs. Returns
>> retained=1 if not purged yet.
> Nothing is returned, I think this need to be updated to reflect that
> the backing store is updated to 1, same in
> DRM_XE_VMA_PURGEABLE_STATE_WILLNEED.
Noted,
>
>> + *
>> + * Important: Once marked as DONTNEED, touching
>> the BO's memory
>> + * is undefined behavior. It may succeed
>> temporarily (before the
>> + * kernel purges the backing store) but will
>> suddenly fail once
>> + * the BO transitions to PURGED state.
> Just to make sure I understood correctly, if I want to change from
> DONTNEED to WILLNEED, I do the uAPI call and then check if the backing
> storage is set to 1?
Yes, correct. Call the ioctl with WILLNEED and then check the __u32 at
@retained_ptr: if it's 0 the backing store was lost and the BO must be
recreated. Added an explicit paragraph explaining this flow.
> By default all VMAs are in WILLNEED state right?
Yes. By default all VMAs are in WILLNEED state.
Thanks,
Arvind
>
>> + *
>> + * The following operations are blocked in
>> DONTNEED state to
>> + * prevent the BO from being re-mapped after
>> madvise:
>> + * - New mmap() calls: Fail with -EBUSY
>> + * - VM_BIND operations: Fail with -EBUSY
>> + * - New dma-buf exports: Fail with -EBUSY
>> + * - CPU page faults (existing mmap): Fail with
>> SIGBUS
>> + * - GPU page faults (fault-mode VMs): Fail with
>> -EACCES
>> + */
>> + struct {
>> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
>> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
>> + /** @purge_state_val.val: value for
>> DRM_XE_VMA_ATTR_PURGEABLE_STATE */
>> + __u32 val;
>> +
>> + /** @purge_state_val.pad: MBZ */
>> + __u32 pad;
>> + /**
>> + * @purge_state_val.retained_ptr: Pointer to
>> a __u32 output
>> + * field for backing store status.
>> + *
>> + * Userspace must initialize the __u32 value
>> at this address
>> + * to 0 before the ioctl. Kernel writes a
>> __u32 after the
>> + * operation:
>> + * - 1 if backing store exists (not purged)
>> + * - 0 if backing store was purged
>> + *
>> + * If userspace fails to initialize to 0,
>> ioctl returns -EINVAL.
>> + * This ensures a safe default (0 = assume
>> purged) if kernel
>> + * cannot write the result.
>> + *
>> + * Similar to i915's
>> drm_i915_gem_madvise.retained field.
>> + */
>> + __u64 retained_ptr;
>> + } purge_state_val;
>> };
>>
>> /** @reserved: Reserved */
* Re: [PATCH v6 01/12] drm/xe/uapi: Add UAPI support for purgeable buffer objects
2026-03-03 15:19 ` [PATCH v6 01/12] drm/xe/uapi: Add UAPI " Arvind Yadav
2026-03-03 15:53 ` Souza, Jose
@ 2026-03-10 8:31 ` Thomas Hellström
1 sibling, 0 replies; 39+ messages in thread
From: Thomas Hellström @ 2026-03-10 8:31 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra, Jose Souza
On Tue, 2026-03-03 at 20:49 +0530, Arvind Yadav wrote:
> From: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>
> Extend the DRM_XE_MADVISE ioctl to support purgeable buffer object
> management by adding DRM_XE_VMA_ATTR_PURGEABLE_STATE attribute type.
>
> This allows userspace applications to provide memory usage hints to
> the kernel for better memory management under pressure:
>
> - WILLNEED: Buffer is needed and should not be purged. If the BO was
> previously purged, retained field returns 0 indicating backing
> store
> was lost (once purged, always purged semantics matching i915).
>
> - DONTNEED: Buffer is not currently needed and may be purged by the
> kernel under memory pressure to free resources. Only applies to
> non-shared BOs.
>
> To prevent undefined behavior, the following operations are blocked
> while a BO is in DONTNEED state:
> - New mmap() operations return -EBUSY
> - VM_BIND operations return -EBUSY
> - New dma-buf exports return -EBUSY
> - CPU page faults return SIGBUS
> - GPU page faults fail with -EACCES
>
> This ensures applications cannot use a BO while marked as DONTNEED,
> preventing erratic behavior when the kernel purges the backing
> store.
>
> The implementation includes a 'retained' output field (matching
> i915's
> drm_i915_gem_madvise.retained) that indicates whether the BO's
> backing
> store still exists (1) or has been purged (0).
>
> Added DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT flag to allow
> userspace to detect kernel support for purgeable buffer objects
> before attempting to use the feature.
>
> v2:
> - Add PURGED state for read-only status, change ioctl to DRM_IOWR,
> add retained field for i915 compatibility
>
> v3:
> - UAPI rule should not be changed (Matthew Brost)
> - Make 'retained' a userptr (Matthew Brost)
>
> v4:
> - You cannot make this part of the union (purge_state_val) larger
> than the existing union (16 bytes). So just drop the '__u64
> reserved'
> field. (Matt)
>
> v5:
> - Update UAPI documentation to clarify retained must be initialized
> to 0(Thomas)
>
> v6:
> - Document DONTNEED BO access blocking behavior to prevent
> undefined
> behavior and clarify uAPI contract (Thomas, Matt)
> - Add query flag DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for
> feature detection. (Jose)
> - Rename retained to retained_ptr. (Jose)
>
> Cc: Jose Souza <jose.souza@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Himal Prasad Ghimiray
> <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> include/uapi/drm/xe_drm.h | 60
> +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 60 insertions(+)
>
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index ef2565048bdf..42aedd30189d 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -425,6 +425,7 @@ struct drm_xe_query_config {
> #define DRM_XE_QUERY_CONFIG_FLAG_HAS_LOW_LATENCY (1
> << 1)
> #define DRM_XE_QUERY_CONFIG_FLAG_HAS_CPU_ADDR_MIRROR (1
> << 2)
> #define DRM_XE_QUERY_CONFIG_FLAG_HAS_NO_COMPRESSION_HINT (1
> << 3)
> + #define DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT (1
> << 4)
> #define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT 2
> #define DRM_XE_QUERY_CONFIG_VA_BITS 3
> #define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 4
> @@ -2067,6 +2068,7 @@ struct drm_xe_query_eu_stall {
> * - DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC: Set preferred memory
> location.
> * - DRM_XE_MEM_RANGE_ATTR_ATOMIC: Set atomic access policy.
> * - DRM_XE_MEM_RANGE_ATTR_PAT: Set page attribute table index.
> + * - DRM_XE_VMA_ATTR_PURGEABLE_STATE: Set purgeable state for BOs.
> *
> * Example:
> *
> @@ -2099,6 +2101,7 @@ struct drm_xe_madvise {
> #define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC 0
> #define DRM_XE_MEM_RANGE_ATTR_ATOMIC 1
> #define DRM_XE_MEM_RANGE_ATTR_PAT 2
> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
> /** @type: type of attribute */
> __u32 type;
>
> @@ -2189,6 +2192,63 @@ struct drm_xe_madvise {
> /** @pat_index.reserved: Reserved */
> __u64 reserved;
> } pat_index;
> +
> + /**
> + * @purge_state_val: Purgeable state configuration
> + *
> + * Used when @type ==
> DRM_XE_VMA_ATTR_PURGEABLE_STATE.
> + *
> + * Configures the purgeable state of buffer objects
> in the specified
> + * virtual address range. This allows applications
> to hint to the kernel
> + * about bo's usage patterns for better memory
> management.
> + *
> + * Supported values for @purge_state_val.val:
> + * - DRM_XE_VMA_PURGEABLE_STATE_WILLNEED (0): Marks
> BO as needed.
> + * If BO was previously purged, returns
> retained=0 (backing store lost).
> + *
> + * - DRM_XE_VMA_PURGEABLE_STATE_DONTNEED (1): Marks
> BO as not currently
> + * needed. Kernel may purge it under memory
> pressure to reclaim memory.
> + * Only applies to non-shared BOs. Returns
> retained=1 if not purged yet.
> + *
> + * Important: Once marked as DONTNEED, touching
> the BO's memory
> + * is undefined behavior. It may succeed
> temporarily (before the
> + * kernel purges the backing store) but will
> suddenly fail once
> + * the BO transitions to PURGED state.
> + *
> + * The following operations are blocked in
> DONTNEED state to
> + * prevent the BO from being re-mapped after
> madvise:
> + * - New mmap() calls: Fail with -EBUSY
> + * - VM_BIND operations: Fail with -EBUSY
> + * - New dma-buf exports: Fail with -EBUSY
> + * - CPU page faults (existing mmap): Fail with
> SIGBUS
> + * - GPU page faults (fault-mode VMs): Fail with
> -EACCES
> + */
> + struct {
> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> + /** @purge_state_val.val: value for
> DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> + __u32 val;
> +
> + /** @purge_state_val.pad: MBZ */
> + __u32 pad;
> + /**
> + * @purge_state_val.retained_ptr: Pointer to
> a __u32 output
> + * field for backing store status.
> + *
> + * Userspace must initialize the __u32 value
> at this address
> + * to 0 before the ioctl. Kernel writes a
> __u32 after the
> + * operation:
> + * - 1 if backing store exists (not purged)
> + * - 0 if backing store was purged
> + *
> + * If userspace fails to initialize to 0,
> ioctl returns -EINVAL.
> + * This ensures a safe default (0 = assume
> purged) if kernel
> + * cannot write the result.
> + *
> + * Similar to i915's
> drm_i915_gem_madvise.retained field.
> + */
> + __u64 retained_ptr;
> + } purge_state_val;
> };
>
> /** @reserved: Reserved */
* [PATCH v6 02/12] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
2026-03-03 15:19 ` [PATCH v6 01/12] drm/xe/uapi: Add UAPI " Arvind Yadav
@ 2026-03-03 15:19 ` Arvind Yadav
2026-03-03 15:19 ` [PATCH v6 03/12] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
` (14 subsequent siblings)
16 siblings, 0 replies; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:19 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Add infrastructure for tracking purgeable state of buffer objects.
This includes:
Introduce enum xe_madv_purgeable_state with three states:
- XE_MADV_PURGEABLE_WILLNEED (0): BO is needed and should not be
purged. This is the default state for all BOs.
- XE_MADV_PURGEABLE_DONTNEED (1): BO is not currently needed and
can be purged by the kernel under memory pressure to reclaim
resources. Only non-shared BOs can be marked as DONTNEED.
- XE_MADV_PURGEABLE_PURGED (2): BO has been purged by the kernel.
Accessing a purged BO results in error. Follows i915 semantics
where once purged, the BO remains permanently invalid ("once
purged, always purged").
Add a madv_purgeable field to struct xe_bo to track the purgeable
state across concurrent access paths.
v2:
- Add xe_bo_is_purged() helper, improve state documentation
v3:
- Add the kernel doc(Matthew Brost)
- Add the new helpers xe_bo_madv_is_dontneed(Matthew Brost)
v4:
- Change @madv_purgeable from atomic_t to u32 across all relevant
patches (Matt)
v5:
- Add locking documentation to madv_purgeable field comment (Matt)
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.h | 56 ++++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_bo_types.h | 6 ++++
2 files changed, 62 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index c914ab719f20..ea157d74e2fb 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -87,6 +87,28 @@
#define XE_PCI_BARRIER_MMAP_OFFSET (0x50 << XE_PTE_SHIFT)
+/**
+ * enum xe_madv_purgeable_state - Buffer object purgeable state enumeration
+ *
+ * This enum defines the possible purgeable states for a buffer object,
+ * allowing userspace to provide memory usage hints to the kernel for
+ * better memory management under pressure.
+ *
+ * @XE_MADV_PURGEABLE_WILLNEED: The buffer object is needed and should not be purged.
+ * This is the default state.
+ * @XE_MADV_PURGEABLE_DONTNEED: The buffer object is not currently needed and can be
+ * purged by the kernel under memory pressure.
+ * @XE_MADV_PURGEABLE_PURGED: The buffer object has been purged by the kernel.
+ *
+ * Accessing a purged buffer will result in an error. Per i915 semantics,
+ * once purged, a BO remains permanently invalid and must be destroyed and recreated.
+ */
+enum xe_madv_purgeable_state {
+ XE_MADV_PURGEABLE_WILLNEED,
+ XE_MADV_PURGEABLE_DONTNEED,
+ XE_MADV_PURGEABLE_PURGED,
+};
+
struct sg_table;
struct xe_bo *xe_bo_alloc(void);
@@ -215,6 +237,40 @@ static inline bool xe_bo_is_protected(const struct xe_bo *bo)
return bo->pxp_key_instance;
}
+/**
+ * xe_bo_is_purged() - Check if buffer object has been purged
+ * @bo: The buffer object to check
+ *
+ * Checks if the buffer object's backing store has been discarded by the
+ * kernel due to memory pressure after being marked as purgeable (DONTNEED).
+ * Once purged, the BO cannot be restored and any attempt to use it will fail.
+ *
+ * Context: Caller must hold the BO's dma-resv lock
+ * Return: true if the BO has been purged, false otherwise
+ */
+static inline bool xe_bo_is_purged(struct xe_bo *bo)
+{
+ xe_bo_assert_held(bo);
+ return bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED;
+}
+
+/**
+ * xe_bo_madv_is_dontneed() - Check if BO is marked as DONTNEED
+ * @bo: The buffer object to check
+ *
+ * Checks if userspace has marked this BO as DONTNEED (i.e., its contents
+ * are not currently needed and can be discarded under memory pressure).
+ * This is used internally to decide whether a BO is eligible for purging.
+ *
+ * Context: Caller must hold the BO's dma-resv lock
+ * Return: true if the BO is marked DONTNEED, false otherwise
+ */
+static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
+{
+ xe_bo_assert_held(bo);
+ return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
+}
+
static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
if (likely(bo)) {
diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index d4fe3c8dca5b..ff8317bfc1ae 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -108,6 +108,12 @@ struct xe_bo {
* from default
*/
u64 min_align;
+
+ /**
+ * @madv_purgeable: userspace advice on BO purgeability, protected
+ * by the BO's dma-resv lock.
+ */
+ u32 madv_purgeable;
};
#endif
--
2.43.0
* [PATCH v6 03/12] drm/xe/madvise: Implement purgeable buffer object support
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
2026-03-03 15:19 ` [PATCH v6 01/12] drm/xe/uapi: Add UAPI " Arvind Yadav
2026-03-03 15:19 ` [PATCH v6 02/12] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
@ 2026-03-03 15:19 ` Arvind Yadav
2026-03-10 8:41 ` Thomas Hellström
2026-03-03 15:20 ` [PATCH v6 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects Arvind Yadav
` (13 subsequent siblings)
16 siblings, 1 reply; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:19 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Add the core implementation for purgeable buffer objects, enabling memory
reclamation of user-designated DONTNEED buffers during eviction. This
allows userspace applications to provide memory usage hints to the kernel
for better memory management under pressure.
This patch implements the purge operation and state machine transitions:
Purgeable States (from xe_madv_purgeable_state):
- WILLNEED (0): BO should be retained, actively used
- DONTNEED (1): BO eligible for purging, not currently needed
- PURGED (2): BO backing store reclaimed, permanently invalid
Design Rationale:
- Async TLB invalidation via trigger_rebind (no blocking xe_vm_invalidate_vma)
- i915 compatibility: retained field, "once purged always purged" semantics
- Shared BO protection prevents multi-process memory corruption
- Scratch PTE reuse avoids new infrastructure, safe for fault mode
Note: The madvise_purgeable() function is implemented but not hooked into
the IOCTL handler (madvise_funcs[] entry is NULL) to maintain bisectability.
The feature will be enabled in the final patch when all supporting
infrastructure (shrinker, per-VMA tracking) is complete.
v2:
- Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas Hellström)
- Add NULL rebind with scratch PTEs for fault mode (Thomas Hellström)
- Implement i915-compatible retained field logic (Thomas Hellström)
- Skip BO validation for purged BOs in page fault handler (crash fix)
- Add scratch VM check in page fault path (non-scratch VMs fail fault)
- Force clear_pt for non-scratch VMs to avoid phys addr 0 mapping (review fix)
- Add !is_purged check to resource cursor setup to prevent stale access
v3:
- Rebase as xe_gt_pagefault.c is gone upstream and replaced
with xe_pagefault.c (Matthew Brost)
- Use an Xe-specific warn-on (Matthew Brost)
- Call helpers for madv_purgeable access (Matthew Brost)
- Remove bo NULL check (Matthew Brost)
- Use xe_bo_assert_held() instead of a dma-resv assert (Matthew Brost)
- Move the xe_bo_is_purged() check under the dma-resv lock (Matt)
- Drop is_purged from xe_pt_stage_bind_entry() and just set is_null to
true for purged BOs; rename s/is_null/is_null_or_purged (Matt)
- UAPI rules should not be changed (Matthew Brost)
- Make 'retained' a userptr (Matthew Brost)
v4:
- @madv_purgeable atomic_t → u32 change across all relevant patches (Matt)
v5:
- Introduce xe_bo_set_purgeable_state() helper (void return) to centralize
madv_purgeable updates with xe_bo_assert_held() and state transition
validation using explicit enum checks (no transition out of PURGED) (Matt)
- Make xe_ttm_bo_purge() return int and propagate failures from
xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g. no_wait_gpu
paths) rather than silently ignoring (Matt)
- Replace drm_WARN_ON with xe_assert for better Xe-specific assertions (Matt)
- Hook purgeable handling into madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
instead of special-case path in xe_vm_madvise_ioctl() (Matt)
- Track purgeable retained return via xe_madvise_details and perform
copy_to_user() from xe_madvise_details_fini() after locks are dropped (Matt)
- Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
__maybe_unused on madvise_purgeable() to maintain bisectability until
shrinker integration is complete in final patch (Matt)
- Use put_user() instead of copy_to_user() for single u32 retained value (Thomas)
- Return -EFAULT from ioctl if put_user() fails (Thomas)
- Validate userspace initialized retained to 0 before ioctl, ensuring safe
default (0 = "assume purged") if put_user() fails (Thomas)
- Refactor error handling: separate fallible put_user from infallible cleanup
- xe_madvise_purgeable_retained_to_user(): separate helper for fallible put_user
- Call put_user() after releasing all locks to avoid circular dependencies
- Use xe_bo_move_notify() instead of xe_bo_trigger_rebind() in xe_ttm_bo_purge()
for proper abstraction - handles vunmap, dma-buf notifications, and VRAM
userfault cleanup (Thomas)
- Fix LRU crash while running shrink test
- Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
v6:
- xe_bo_move_notify() must be called *before* ttm_bo_validate(). (Thomas)
- Block GPU page faults (fault-mode VMs) for DONTNEED bo's (Thomas, Matt)
- Rename retained to retained_ptr. (Jose)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 107 ++++++++++++++++++++---
drivers/gpu/drm/xe/xe_bo.h | 2 +
drivers/gpu/drm/xe/xe_pagefault.c | 19 ++++
drivers/gpu/drm/xe/xe_pt.c | 40 +++++++--
drivers/gpu/drm/xe/xe_vm.c | 20 ++++-
drivers/gpu/drm/xe/xe_vm_madvise.c | 136 +++++++++++++++++++++++++++++
6 files changed, 303 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 8ff193600443..513f01aa2ddd 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -835,6 +835,84 @@ static int xe_bo_move_notify(struct xe_bo *bo,
return 0;
}
+/**
+ * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
+ * @bo: Buffer object
+ * @new_state: New purgeable state
+ *
+ * Sets the purgeable state with lockdep assertions and validates state
+ * transitions. Once a BO is PURGED, it cannot transition to any other state.
+ * Invalid transitions are caught with xe_assert().
+ */
+void xe_bo_set_purgeable_state(struct xe_bo *bo,
+ enum xe_madv_purgeable_state new_state)
+{
+ struct xe_device *xe = xe_bo_device(bo);
+
+ xe_bo_assert_held(bo);
+
+ /* Validate state is one of the known values */
+ xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
+ new_state == XE_MADV_PURGEABLE_DONTNEED ||
+ new_state == XE_MADV_PURGEABLE_PURGED);
+
+ /* Once purged, always purged - cannot transition out */
+ xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
+ new_state != XE_MADV_PURGEABLE_PURGED));
+
+ bo->madv_purgeable = new_state;
+}
+
+/**
+ * xe_ttm_bo_purge() - Purge buffer object backing store
+ * @ttm_bo: The TTM buffer object to purge
+ * @ctx: TTM operation context
+ *
+ * This function purges the backing store of a BO marked as DONTNEED and
+ * triggers rebind to invalidate stale GPU mappings. For fault-mode VMs,
+ * this zaps the PTEs. The next GPU access will trigger a page fault and
+ * perform NULL rebind (scratch pages or clear PTEs based on VM config).
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+static int xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
+{
+ struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
+ struct ttm_placement place = {};
+ int ret;
+
+ xe_bo_assert_held(bo);
+
+ if (!ttm_bo->ttm)
+ return 0;
+
+ if (!xe_bo_madv_is_dontneed(bo))
+ return 0;
+
+ /*
+ * Use the standard pre-move hook so we share the same cleanup/invalidate
+ * path as migrations: drop any CPU vmap and schedule the necessary GPU
+ * unbind/rebind work.
+ *
+ * This must be called before ttm_bo_validate() frees the pages.
+ * May fail in no-wait contexts (fault/shrinker) or if the BO is
+ * pinned. Keep state unchanged on failure so we don't end up "PURGED"
+ * with stale mappings.
+ */
+ ret = xe_bo_move_notify(bo, ctx);
+ if (ret)
+ return ret;
+
+ ret = ttm_bo_validate(ttm_bo, &place, ctx);
+ if (ret)
+ return ret;
+
+ /* Commit the state transition only once invalidation was queued */
+ xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_PURGED);
+
+ return 0;
+}
+
static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
struct ttm_operation_ctx *ctx,
struct ttm_resource *new_mem,
@@ -854,6 +932,20 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
ttm && ttm_tt_is_populated(ttm)) ? true : false;
int ret = 0;
+ /*
+ * Purge only non-shared BOs explicitly marked DONTNEED by userspace.
+ * The move_notify callback will handle invalidation asynchronously.
+ */
+ if (evict && xe_bo_madv_is_dontneed(bo)) {
+ ret = xe_ttm_bo_purge(ttm_bo, ctx);
+ if (ret)
+ return ret;
+
+ /* Free the unused eviction destination resource */
+ ttm_resource_free(ttm_bo, &new_mem);
+ return 0;
+ }
+
/* Bo creation path, moving to system or TT. */
if ((!old_mem && ttm) && !handle_system_ccs) {
if (new_mem->mem_type == XE_PL_TT)
@@ -1603,18 +1695,6 @@ static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo)
}
}
-static void xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
-{
- struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
-
- if (ttm_bo->ttm) {
- struct ttm_placement place = {};
- int ret = ttm_bo_validate(ttm_bo, &place, ctx);
-
- drm_WARN_ON(&xe->drm, ret);
- }
-}
-
static void xe_ttm_bo_swap_notify(struct ttm_buffer_object *ttm_bo)
{
struct ttm_operation_ctx ctx = {
@@ -2195,6 +2275,9 @@ struct xe_bo *xe_bo_init_locked(struct xe_device *xe, struct xe_bo *bo,
#endif
INIT_LIST_HEAD(&bo->vram_userfault_link);
+ /* Initialize purge advisory state */
+ bo->madv_purgeable = XE_MADV_PURGEABLE_WILLNEED;
+
drm_gem_private_object_init(&xe->drm, &bo->ttm.base, size);
if (resv) {
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index ea157d74e2fb..0d9f25b51eb2 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -271,6 +271,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
}
+void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
+
static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
if (likely(bo)) {
diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index ea4857acf28d..4ef8674e6b0b 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -59,6 +59,25 @@ static int xe_pagefault_begin(struct drm_exec *exec, struct xe_vma *vma,
if (!bo)
return 0;
+ /*
+ * Block GPU faults on DONTNEED BOs to preserve the GPU PTE zap done at
+ * madvise time; otherwise the rebind path would re-map real pages and
+ * undo the invalidation, preventing the shrinker from reclaiming the BO.
+ */
+ if (unlikely(xe_bo_madv_is_dontneed(bo)))
+ return -EACCES;
+
+ /*
+ * Check if BO is purged (under dma-resv lock).
+ * For purged BOs:
+ * - Scratch VMs: Skip validation, rebind will use scratch PTEs
+ * - Non-scratch VMs: FAIL the page fault (no scratch page available)
+ */
+ if (unlikely(xe_bo_is_purged(bo))) {
+ if (!xe_vm_has_scratch(vm))
+ return -EACCES;
+ return 0;
+ }
+
return need_vram_move ? xe_bo_migrate(bo, vram->placement, NULL, exec) :
xe_bo_validate(bo, vm, true, exec);
}
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 13b355fadd58..93f9fdf0ff24 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -531,20 +531,26 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
/* Is this a leaf entry ?*/
if (level == 0 || xe_pt_hugepte_possible(addr, next, level, xe_walk)) {
struct xe_res_cursor *curs = xe_walk->curs;
- bool is_null = xe_vma_is_null(xe_walk->vma);
- bool is_vram = is_null ? false : xe_res_is_vram(curs);
+ struct xe_bo *bo = xe_vma_bo(xe_walk->vma);
+ bool is_null_or_purged = xe_vma_is_null(xe_walk->vma) ||
+ (bo && xe_bo_is_purged(bo));
+ bool is_vram = is_null_or_purged ? false : xe_res_is_vram(curs);
XE_WARN_ON(xe_walk->va_curs_start != addr);
if (xe_walk->clear_pt) {
pte = 0;
} else {
- pte = vm->pt_ops->pte_encode_vma(is_null ? 0 :
+ /*
+ * For purged BOs, treat like null VMAs - pass address 0.
+ * The pte_encode_vma will set XE_PTE_NULL flag for scratch mapping.
+ */
+ pte = vm->pt_ops->pte_encode_vma(is_null_or_purged ? 0 :
xe_res_dma(curs) +
xe_walk->dma_offset,
xe_walk->vma,
pat_index, level);
- if (!is_null)
+ if (!is_null_or_purged)
pte |= is_vram ? xe_walk->default_vram_pte :
xe_walk->default_system_pte;
@@ -568,7 +574,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
if (unlikely(ret))
return ret;
- if (!is_null && !xe_walk->clear_pt)
+ if (!is_null_or_purged && !xe_walk->clear_pt)
xe_res_next(curs, next - addr);
xe_walk->va_curs_start = next;
xe_walk->vma->gpuva.flags |= (XE_VMA_PTE_4K << level);
@@ -721,6 +727,26 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
};
struct xe_pt *pt = vm->pt_root[tile->id];
int ret;
+ bool is_purged = false;
+
+ /*
+ * Check if BO is purged:
+ * - Scratch VMs: Use scratch PTEs (XE_PTE_NULL) for safe zero reads
+ * - Non-scratch VMs: Clear PTEs to zero (non-present) to avoid mapping to phys addr 0
+ *
+ * For non-scratch VMs, we force clear_pt=true so leaf PTEs become completely
+ * zero instead of creating a PRESENT mapping to physical address 0.
+ */
+ if (bo && xe_bo_is_purged(bo)) {
+ is_purged = true;
+
+ /*
+ * For non-scratch VMs, a NULL rebind should use zero PTEs
+ * (non-present), not a present PTE to phys 0.
+ */
+ if (!xe_vm_has_scratch(vm))
+ xe_walk.clear_pt = true;
+ }
if (range) {
/* Move this entire thing to xe_svm.c? */
@@ -756,11 +782,11 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
}
xe_walk.default_vram_pte |= XE_PPGTT_PTE_DM;
- xe_walk.dma_offset = bo ? vram_region_gpu_offset(bo->ttm.resource) : 0;
+ xe_walk.dma_offset = (bo && !is_purged) ? vram_region_gpu_offset(bo->ttm.resource) : 0;
if (!range)
xe_bo_assert_held(bo);
- if (!xe_vma_is_null(vma) && !range) {
+ if (!xe_vma_is_null(vma) && !range && !is_purged) {
if (xe_vma_is_userptr(vma))
xe_res_first_dma(to_userptr_vma(vma)->userptr.pages.dma_addr, 0,
xe_vma_size(vma), &curs);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 548b0769b3ef..c65d014c7491 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -326,6 +326,7 @@ void xe_vm_kill(struct xe_vm *vm, bool unlocked)
static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
{
struct xe_vm *vm = gpuvm_to_vm(vm_bo->vm);
+ struct xe_bo *bo = gem_to_xe_bo(vm_bo->obj);
struct drm_gpuva *gpuva;
int ret;
@@ -334,10 +335,16 @@ static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind,
&vm->rebind_list);
+ /* Skip re-populating purged BOs, rebind maps scratch pages. */
+ if (xe_bo_is_purged(bo)) {
+ vm_bo->evicted = false;
+ return 0;
+ }
+
if (!try_wait_for_completion(&vm->xe->pm_block))
return -EAGAIN;
- ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false, exec);
+ ret = xe_bo_validate(bo, vm, false, exec);
if (ret)
return ret;
@@ -1358,6 +1365,9 @@ static u64 xelp_pte_encode_bo(struct xe_bo *bo, u64 bo_offset,
static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
u16 pat_index, u32 pt_level)
{
+ struct xe_bo *bo = xe_vma_bo(vma);
+ struct xe_vm *vm = xe_vma_vm(vma);
+
pte |= XE_PAGE_PRESENT;
if (likely(!xe_vma_read_only(vma)))
@@ -1366,7 +1376,13 @@ static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
pte |= pte_encode_pat_index(pat_index, pt_level);
pte |= pte_encode_ps(pt_level);
- if (unlikely(xe_vma_is_null(vma)))
+ /*
+ * NULL PTEs redirect to scratch page (return zeros on read).
+ * Set for: 1) explicit null VMAs, 2) purged BOs on scratch VMs.
+ * Never set NULL flag without scratch page - causes undefined behavior.
+ */
+ if (unlikely(xe_vma_is_null(vma) ||
+ (bo && xe_bo_is_purged(bo) && xe_vm_has_scratch(vm))))
pte |= XE_PTE_NULL;
return pte;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 95bf53cc29e3..f7e767f21795 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -25,6 +25,8 @@ struct xe_vmas_in_madvise_range {
/**
* struct xe_madvise_details - Argument to madvise_funcs
* @dpagemap: Reference-counted pointer to a struct drm_pagemap.
+ * @has_purged_bo: Track if any BO was purged (for purgeable state)
+ * @retained_ptr: User pointer for retained value (for purgeable state)
*
* The madvise IOCTL handler may, in addition to the user-space
* args, have additional info to pass into the madvise_func that
@@ -33,6 +35,8 @@ struct xe_vmas_in_madvise_range {
*/
struct xe_madvise_details {
struct drm_pagemap *dpagemap;
+ bool has_purged_bo;
+ u64 retained_ptr;
};
static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
@@ -179,6 +183,67 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
}
}
+/**
+ * madvise_purgeable - Handle purgeable buffer object advice
+ * @xe: XE device
+ * @vm: VM
+ * @vmas: Array of VMAs
+ * @num_vmas: Number of VMAs
+ * @op: Madvise operation
+ * @details: Madvise details for return values
+ *
+ * Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was purged
+ * in details->has_purged_bo for later copy to userspace.
+ *
+ * Note: Marked __maybe_unused until hooked into madvise_funcs[] in the
+ * final patch to maintain bisectability. The NULL placeholder in the
+ * array ensures proper -EINVAL return for userspace until all supporting
+ * infrastructure (shrinker, per-VMA tracking) is complete.
+ */
+static void __maybe_unused madvise_purgeable(struct xe_device *xe,
+ struct xe_vm *vm,
+ struct xe_vma **vmas,
+ int num_vmas,
+ struct drm_xe_madvise *op,
+ struct xe_madvise_details *details)
+{
+ int i;
+
+ xe_assert(vm->xe, op->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE);
+
+ for (i = 0; i < num_vmas; i++) {
+ struct xe_bo *bo = xe_vma_bo(vmas[i]);
+
+ if (!bo)
+ continue;
+
+ /* BO must be locked before modifying madv state */
+ xe_bo_assert_held(bo);
+
+ /*
+ * Once purged, always purged. Cannot transition back to WILLNEED.
+ * This matches i915 semantics where purged BOs are permanently invalid.
+ */
+ if (xe_bo_is_purged(bo)) {
+ details->has_purged_bo = true;
+ continue;
+ }
+
+ switch (op->purge_state_val.val) {
+ case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
+ xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+ break;
+ case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
+ xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+ break;
+ default:
+ drm_warn(&vm->xe->drm, "Invalid madvise value = %d\n",
+ op->purge_state_val.val);
+ return;
+ }
+ }
+}
+
typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op,
@@ -188,6 +253,12 @@ static const madvise_func madvise_funcs[] = {
[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
[DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
[DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
+ /*
+ * Purgeable support implemented but not enabled yet to maintain
+ * bisectability. Will be set to madvise_purgeable() in final patch
+ * when all infrastructure (shrinker, VMA tracking) is complete.
+ */
+ [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
};
static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
@@ -311,6 +382,19 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
return false;
break;
}
+ case DRM_XE_VMA_ATTR_PURGEABLE_STATE:
+ {
+ u32 val = args->purge_state_val.val;
+
+ if (XE_IOCTL_DBG(xe, !(val == DRM_XE_VMA_PURGEABLE_STATE_WILLNEED ||
+ val == DRM_XE_VMA_PURGEABLE_STATE_DONTNEED)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->purge_state_val.pad))
+ return false;
+
+ break;
+ }
default:
if (XE_IOCTL_DBG(xe, 1))
return false;
@@ -329,6 +413,12 @@ static int xe_madvise_details_init(struct xe_vm *vm, const struct drm_xe_madvise
memset(details, 0, sizeof(*details));
+ /* Store retained pointer for purgeable state */
+ if (args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE) {
+ details->retained_ptr = args->purge_state_val.retained_ptr;
+ return 0;
+ }
+
if (args->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC) {
int fd = args->preferred_mem_loc.devmem_fd;
struct drm_pagemap *dpagemap;
@@ -357,6 +447,21 @@ static void xe_madvise_details_fini(struct xe_madvise_details *details)
drm_pagemap_put(details->dpagemap);
}
+static int xe_madvise_purgeable_retained_to_user(const struct xe_madvise_details *details)
+{
+ u32 retained;
+
+ if (!details->retained_ptr)
+ return 0;
+
+ retained = !details->has_purged_bo;
+
+ if (put_user(retained, (u32 __user *)u64_to_user_ptr(details->retained_ptr)))
+ return -EFAULT;
+
+ return 0;
+}
+
static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
int num_vmas, u32 atomic_val)
{
@@ -414,6 +519,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
struct xe_vm *vm;
struct drm_exec exec;
int err, attr_type;
+ bool do_retained;
vm = xe_vm_lookup(xef, args->vm_id);
if (XE_IOCTL_DBG(xe, !vm))
@@ -424,6 +530,25 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
goto put_vm;
}
+ /* Cache whether we need to write retained, and validate it's initialized to 0 */
+ do_retained = args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE &&
+ args->purge_state_val.retained_ptr;
+ if (do_retained) {
+ u32 retained;
+ u32 __user *retained_ptr;
+
+ retained_ptr = u64_to_user_ptr(args->purge_state_val.retained_ptr);
+ if (get_user(retained, retained_ptr)) {
+ err = -EFAULT;
+ goto put_vm;
+ }
+
+ if (XE_IOCTL_DBG(xe, retained != 0)) {
+ err = -EINVAL;
+ goto put_vm;
+ }
+ }
+
xe_svm_flush(vm);
err = down_write_killable(&vm->lock);
@@ -479,6 +604,13 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
}
attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
+
+ /* Ensure the madvise function exists for this type */
+ if (!madvise_funcs[attr_type]) {
+ err = -EINVAL;
+ goto err_fini;
+ }
+
madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args,
&details);
@@ -496,6 +628,10 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
xe_madvise_details_fini(&details);
unlock_vm:
up_write(&vm->lock);
+
+ /* Write retained value to user after releasing all locks */
+ if (!err && do_retained)
+ err = xe_madvise_purgeable_retained_to_user(&details);
put_vm:
xe_vm_put(vm);
return err;
--
2.43.0
* Re: [PATCH v6 03/12] drm/xe/madvise: Implement purgeable buffer object support
2026-03-03 15:19 ` [PATCH v6 03/12] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
@ 2026-03-10 8:41 ` Thomas Hellström
0 siblings, 0 replies; 39+ messages in thread
From: Thomas Hellström @ 2026-03-10 8:41 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On Tue, 2026-03-03 at 20:49 +0530, Arvind Yadav wrote:
> This allows userspace applications to provide memory usage hints to
> the kernel for better memory management under pressure:
>
> Add the core implementation for purgeable buffer objects, enabling
> memory
> reclamation of user-designated DONTNEED buffers during eviction.
>
> This patch implements the purge operation and state machine
> transitions:
>
> Purgeable States (from xe_madv_purgeable_state):
> - WILLNEED (0): BO should be retained, actively used
> - DONTNEED (1): BO eligible for purging, not currently needed
> - PURGED (2): BO backing store reclaimed, permanently invalid
>
> Design Rationale:
> - Async TLB invalidation via trigger_rebind (no blocking
> xe_vm_invalidate_vma)
> - i915 compatibility: retained field, "once purged always purged"
> semantics
> - Shared BO protection prevents multi-process memory corruption
> - Scratch PTE reuse avoids new infrastructure, safe for fault mode
>
> Note: The madvise_purgeable() function is implemented but not hooked
> into
> the IOCTL handler (madvise_funcs[] entry is NULL) to maintain
> bisectability.
> The feature will be enabled in the final patch when all supporting
> infrastructure (shrinker, per-VMA tracking) is complete.
>
> v2:
> - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas
> Hellström)
> - Add NULL rebind with scratch PTEs for fault mode (Thomas
> Hellström)
> - Implement i915-compatible retained field logic (Thomas Hellström)
> - Skip BO validation for purged BOs in page fault handler (crash
> fix)
> - Add scratch VM check in page fault path (non-scratch VMs fail
> fault)
> - Force clear_pt for non-scratch VMs to avoid phys addr 0 mapping
> (review fix)
> - Add !is_purged check to resource cursor setup to prevent stale
> access
>
> v3:
> - Rebase as xe_gt_pagefault.c is gone upstream and replaced
> with xe_pagefault.c (Matthew Brost)
> - Xe specific warn on (Matthew Brost)
> - Call helpers for madv_purgeable access(Matthew Brost)
> - Remove bo NULL check(Matthew Brost)
> - Use xe_bo_assert_held instead of dma assert(Matthew Brost)
> - Move the xe_bo_is_purged check under the dma-resv lock( by Matt)
> - Drop is_purged from xe_pt_stage_bind_entry and just set is_null
> to true
> for purged BO rename s/is_null/is_null_or_purged (by Matt)
> - UAPI rule should not be changed.(Matthew Brost)
> - Make 'retained' a userptr (Matthew Brost)
>
> v4:
> - @madv_purgeable atomic_t → u32 change across all relevant patches
> (Matt)
>
> v5:
> - Introduce xe_bo_set_purgeable_state() helper (void return) to
> centralize
> madv_purgeable updates with xe_bo_assert_held() and state
> transition
> validation using explicit enum checks (no transition out of
> PURGED) (Matt)
> - Make xe_ttm_bo_purge() return int and propagate failures from
> xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g.
> no_wait_gpu
> paths) rather than silently ignoring (Matt)
> - Replace drm_WARN_ON with xe_assert for better Xe-specific
> assertions (Matt)
> - Hook purgeable handling into
> madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
> instead of special-case path in xe_vm_madvise_ioctl() (Matt)
> - Track purgeable retained return via xe_madvise_details and
> perform
> copy_to_user() from xe_madvise_details_fini() after locks are
> dropped (Matt)
> - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
> __maybe_unused on madvise_purgeable() to maintain bisectability
> until
> shrinker integration is complete in final patch (Matt)
> - Use put_user() instead of copy_to_user() for single u32 retained
> value (Thomas)
> - Return -EFAULT from ioctl if put_user() fails (Thomas)
> - Validate userspace initialized retained to 0 before ioctl,
> ensuring safe
> default (0 = "assume purged") if put_user() fails (Thomas)
> - Refactor error handling: separate fallible put_user from
> infallible cleanup
> - xe_madvise_purgeable_retained_to_user(): separate helper for
> fallible put_user
> - Call put_user() after releasing all locks to avoid circular
> dependencies
> - Use xe_bo_move_notify() instead of xe_bo_trigger_rebind() in
> xe_ttm_bo_purge()
> for proper abstraction - handles vunmap, dma-buf notifications,
> and VRAM
> userfault cleanup (Thomas)
> - Fix LRU crash while running shrink test
> - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
>
> v6:
> - xe_bo_move_notify() must be called *before* ttm_bo_validate().
> (Thomas)
> - Block GPU page faults (fault-mode VMs) for DONTNEED bo's (Thomas,
> Matt)
> - Rename retained to retained_ptr. (Jose)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 107 ++++++++++++++++++++---
> drivers/gpu/drm/xe/xe_bo.h | 2 +
> drivers/gpu/drm/xe/xe_pagefault.c | 19 ++++
> drivers/gpu/drm/xe/xe_pt.c | 40 +++++++--
> drivers/gpu/drm/xe/xe_vm.c | 20 ++++-
> drivers/gpu/drm/xe/xe_vm_madvise.c | 136
> +++++++++++++++++++++++++++++
> 6 files changed, 303 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 8ff193600443..513f01aa2ddd 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -835,6 +835,84 @@ static int xe_bo_move_notify(struct xe_bo *bo,
> return 0;
> }
>
> +/**
> + * xe_bo_set_purgeable_state() - Set BO purgeable state with
> validation
> + * @bo: Buffer object
> + * @new_state: New purgeable state
> + *
> + * Sets the purgeable state with lockdep assertions and validates
> state
> + * transitions. Once a BO is PURGED, it cannot transition to any
> other state.
> + * Invalid transitions are caught with xe_assert().
> + */
> +void xe_bo_set_purgeable_state(struct xe_bo *bo,
> + enum xe_madv_purgeable_state
> new_state)
> +{
> + struct xe_device *xe = xe_bo_device(bo);
> +
> + xe_bo_assert_held(bo);
> +
> + /* Validate state is one of the known values */
> + xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
> + new_state == XE_MADV_PURGEABLE_DONTNEED ||
> + new_state == XE_MADV_PURGEABLE_PURGED);
> +
> + /* Once purged, always purged - cannot transition out */
> + xe_assert(xe, !(bo->madv_purgeable ==
> XE_MADV_PURGEABLE_PURGED &&
> + new_state != XE_MADV_PURGEABLE_PURGED));
> +
> + bo->madv_purgeable = new_state;
> +}
> +
> +/**
> + * xe_ttm_bo_purge() - Purge buffer object backing store
> + * @ttm_bo: The TTM buffer object to purge
> + * @ctx: TTM operation context
> + *
> + * This function purges the backing store of a BO marked as DONTNEED and
> + * triggers rebind to invalidate stale GPU mappings. For fault-mode VMs,
> + * this zaps the PTEs. The next GPU access will trigger a page fault and
> + * perform NULL rebind (scratch pages or clear PTEs based on VM config).
> + *
> + * Return: 0 on success, negative error code on failure
> + */
> +static int xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
> +{
> +	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
> +	struct ttm_placement place = {};
> +	int ret;
> +
> +	xe_bo_assert_held(bo);
> +
> +	if (!ttm_bo->ttm)
> +		return 0;
> +
> +	if (!xe_bo_madv_is_dontneed(bo))
> +		return 0;
> +
> +	/*
> +	 * Use the standard pre-move hook so we share the same cleanup/invalidate
> +	 * path as migrations: drop any CPU vmap and schedule the necessary GPU
> +	 * unbind/rebind work.
> +	 *
> +	 * This must be called before ttm_bo_validate() frees the pages.
> +	 * May fail in no-wait contexts (fault/shrinker) or if the BO is
> +	 * pinned. Keep state unchanged on failure so we don't end up "PURGED"
> +	 * with stale mappings.
> +	 */
> +	ret = xe_bo_move_notify(bo, ctx);
> +	if (ret)
> +		return ret;
> +
> +	ret = ttm_bo_validate(ttm_bo, &place, ctx);
> +	if (ret)
> +		return ret;
> +
> +	/* Commit the state transition only once invalidation was queued */
> +	xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_PURGED);
> +
> +	return 0;
> +}
> +
> static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
> 		      struct ttm_operation_ctx *ctx,
> 		      struct ttm_resource *new_mem,
> @@ -854,6 +932,20 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
> 			ttm && ttm_tt_is_populated(ttm)) ? true : false;
> 	int ret = 0;
>
> +	/*
> +	 * Purge only non-shared BOs explicitly marked DONTNEED by userspace.
> +	 * The move_notify callback will handle invalidation asynchronously.
> +	 */
> +	if (evict && xe_bo_madv_is_dontneed(bo)) {
> +		ret = xe_ttm_bo_purge(ttm_bo, ctx);
> +		if (ret)
> +			return ret;
> +
> +		/* Free the unused eviction destination resource */
> +		ttm_resource_free(ttm_bo, &new_mem);
> +		return 0;
> +	}
> +
> /* Bo creation path, moving to system or TT. */
> if ((!old_mem && ttm) && !handle_system_ccs) {
> if (new_mem->mem_type == XE_PL_TT)
> @@ -1603,18 +1695,6 @@ static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo)
> 	}
> }
>
> -static void xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
> -{
> -	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> -
> -	if (ttm_bo->ttm) {
> -		struct ttm_placement place = {};
> -		int ret = ttm_bo_validate(ttm_bo, &place, ctx);
> -
> -		drm_WARN_ON(&xe->drm, ret);
> -	}
> -}
> -
> static void xe_ttm_bo_swap_notify(struct ttm_buffer_object *ttm_bo)
> {
> struct ttm_operation_ctx ctx = {
> @@ -2195,6 +2275,9 @@ struct xe_bo *xe_bo_init_locked(struct xe_device *xe, struct xe_bo *bo,
> #endif
> INIT_LIST_HEAD(&bo->vram_userfault_link);
>
> + /* Initialize purge advisory state */
> + bo->madv_purgeable = XE_MADV_PURGEABLE_WILLNEED;
> +
> drm_gem_private_object_init(&xe->drm, &bo->ttm.base, size);
>
> if (resv) {
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index ea157d74e2fb..0d9f25b51eb2 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -271,6 +271,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
> 	return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
> }
>
> +void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
> +
> +
> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
> {
> if (likely(bo)) {
> diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
> index ea4857acf28d..4ef8674e6b0b 100644
> --- a/drivers/gpu/drm/xe/xe_pagefault.c
> +++ b/drivers/gpu/drm/xe/xe_pagefault.c
> @@ -59,6 +59,25 @@ static int xe_pagefault_begin(struct drm_exec *exec, struct xe_vma *vma,
> if (!bo)
> return 0;
>
> +	/* Block GPU faults on DONTNEED BOs to preserve the GPU PTE zap done at
> +	 * madvise time; otherwise the rebind path would re-map real pages and
> +	 * undo the invalidation, preventing the shrinker from reclaiming the BO.
> +	 */

For multi-line code comments, no text on the first line. Just the '/*'.

With that fixed,
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

> +	if (unlikely(xe_bo_madv_is_dontneed(bo)))
> +		return -EACCES;
> +
> +	/*
> +	 * Check if BO is purged (under dma-resv lock).
> +	 * For purged BOs:
> +	 * - Scratch VMs: Skip validation, rebind will use scratch PTEs
> +	 * - Non-scratch VMs: FAIL the page fault (no scratch page available)
> +	 */
> +	if (unlikely(xe_bo_is_purged(bo))) {
> +		if (!xe_vm_has_scratch(vm))
> +			return -EACCES;
> +		return 0;
> +	}
> +
> 	return need_vram_move ? xe_bo_migrate(bo, vram->placement, NULL, exec) :
> 		xe_bo_validate(bo, vm, true, exec);
> }
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 13b355fadd58..93f9fdf0ff24 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -531,20 +531,26 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
> 	/* Is this a leaf entry ?*/
> 	if (level == 0 || xe_pt_hugepte_possible(addr, next, level, xe_walk)) {
> 		struct xe_res_cursor *curs = xe_walk->curs;
> -		bool is_null = xe_vma_is_null(xe_walk->vma);
> -		bool is_vram = is_null ? false : xe_res_is_vram(curs);
> +		struct xe_bo *bo = xe_vma_bo(xe_walk->vma);
> +		bool is_null_or_purged = xe_vma_is_null(xe_walk->vma) ||
> +					 (bo && xe_bo_is_purged(bo));
> +		bool is_vram = is_null_or_purged ? false : xe_res_is_vram(curs);
>
> 		XE_WARN_ON(xe_walk->va_curs_start != addr);
>
> 		if (xe_walk->clear_pt) {
> 			pte = 0;
> 		} else {
> -			pte = vm->pt_ops->pte_encode_vma(is_null ? 0 :
> +			/*
> +			 * For purged BOs, treat like null VMAs - pass address 0.
> +			 * The pte_encode_vma will set XE_PTE_NULL flag for scratch mapping.
> +			 */
> +			pte = vm->pt_ops->pte_encode_vma(is_null_or_purged ? 0 :
> 							 xe_res_dma(curs) +
> 							 xe_walk->dma_offset,
> 							 xe_walk->vma,
> 							 pat_index,
> 							 level);
> -			if (!is_null)
> +			if (!is_null_or_purged)
> 				pte |= is_vram ? xe_walk->default_vram_pte :
> 					xe_walk->default_system_pte;
>
> @@ -568,7 +574,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
> 	if (unlikely(ret))
> 		return ret;
>
> -	if (!is_null && !xe_walk->clear_pt)
> +	if (!is_null_or_purged && !xe_walk->clear_pt)
> 		xe_res_next(curs, next - addr);
> 	xe_walk->va_curs_start = next;
> 	xe_walk->vma->gpuva.flags |= (XE_VMA_PTE_4K << level);
> @@ -721,6 +727,26 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
> 	};
> 	struct xe_pt *pt = vm->pt_root[tile->id];
> 	int ret;
> +	bool is_purged = false;
> +
> +	/*
> +	 * Check if BO is purged:
> +	 * - Scratch VMs: Use scratch PTEs (XE_PTE_NULL) for safe zero reads
> +	 * - Non-scratch VMs: Clear PTEs to zero (non-present) to avoid mapping to phys addr 0
> +	 *
> +	 * For non-scratch VMs, we force clear_pt=true so leaf PTEs become completely
> +	 * zero instead of creating a PRESENT mapping to physical address 0.
> +	 */
> +	if (bo && xe_bo_is_purged(bo)) {
> +		is_purged = true;
> +
> +		/*
> +		 * For non-scratch VMs, a NULL rebind should use zero PTEs
> +		 * (non-present), not a present PTE to phys 0.
> +		 */
> +		if (!xe_vm_has_scratch(vm))
> +			xe_walk.clear_pt = true;
> +	}
>
> 	if (range) {
> 		/* Move this entire thing to xe_svm.c? */
> @@ -756,11 +782,11 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
> 	}
>
> 	xe_walk.default_vram_pte |= XE_PPGTT_PTE_DM;
> -	xe_walk.dma_offset = bo ? vram_region_gpu_offset(bo->ttm.resource) : 0;
> +	xe_walk.dma_offset = (bo && !is_purged) ? vram_region_gpu_offset(bo->ttm.resource) : 0;
> 	if (!range)
> 		xe_bo_assert_held(bo);
>
> -	if (!xe_vma_is_null(vma) && !range) {
> +	if (!xe_vma_is_null(vma) && !range && !is_purged) {
> 		if (xe_vma_is_userptr(vma))
> 			xe_res_first_dma(to_userptr_vma(vma)->userptr.pages.dma_addr, 0,
> 					 xe_vma_size(vma), &curs);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 548b0769b3ef..c65d014c7491 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -326,6 +326,7 @@ void xe_vm_kill(struct xe_vm *vm, bool unlocked)
> static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
> {
> 	struct xe_vm *vm = gpuvm_to_vm(vm_bo->vm);
> +	struct xe_bo *bo = gem_to_xe_bo(vm_bo->obj);
> 	struct drm_gpuva *gpuva;
> 	int ret;
>
> @@ -334,10 +335,16 @@ static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
> 		list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind,
> 			       &vm->rebind_list);
>
> +	/* Skip re-populating purged BOs, rebind maps scratch pages. */
> +	if (xe_bo_is_purged(bo)) {
> +		vm_bo->evicted = false;
> +		return 0;
> +	}
> +
> 	if (!try_wait_for_completion(&vm->xe->pm_block))
> 		return -EAGAIN;
>
> -	ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false, exec);
> +	ret = xe_bo_validate(bo, vm, false, exec);
> 	if (ret)
> 		return ret;
>
> @@ -1358,6 +1365,9 @@ static u64 xelp_pte_encode_bo(struct xe_bo *bo, u64 bo_offset,
> static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
> 			       u16 pat_index, u32 pt_level)
> {
> +	struct xe_bo *bo = xe_vma_bo(vma);
> +	struct xe_vm *vm = xe_vma_vm(vma);
> +
> 	pte |= XE_PAGE_PRESENT;
>
> 	if (likely(!xe_vma_read_only(vma)))
> @@ -1366,7 +1376,13 @@ static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
> 	pte |= pte_encode_pat_index(pat_index, pt_level);
> 	pte |= pte_encode_ps(pt_level);
>
> -	if (unlikely(xe_vma_is_null(vma)))
> +	/*
> +	 * NULL PTEs redirect to scratch page (return zeros on read).
> +	 * Set for: 1) explicit null VMAs, 2) purged BOs on scratch VMs.
> +	 * Never set NULL flag without scratch page - causes undefined behavior.
> +	 */
> +	if (unlikely(xe_vma_is_null(vma) ||
> +		     (bo && xe_bo_is_purged(bo) && xe_vm_has_scratch(vm))))
> 		pte |= XE_PTE_NULL;
>
> 	return pte;
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 95bf53cc29e3..f7e767f21795 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -25,6 +25,8 @@ struct xe_vmas_in_madvise_range {
> /**
>  * struct xe_madvise_details - Argument to madvise_funcs
>  * @dpagemap: Reference-counted pointer to a struct drm_pagemap.
> + * @has_purged_bo: Track if any BO was purged (for purgeable state)
> + * @retained_ptr: User pointer for retained value (for purgeable state)
>  *
>  * The madvise IOCTL handler may, in addition to the user-space
>  * args, have additional info to pass into the madvise_func that
> @@ -33,6 +35,8 @@ struct xe_vmas_in_madvise_range {
> struct xe_madvise_details {
> 	struct drm_pagemap *dpagemap;
> +	bool has_purged_bo;
> +	u64 retained_ptr;
> };
>
> static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
> @@ -179,6 +183,67 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
> 	}
> }
>
> +/**
> + * madvise_purgeable - Handle purgeable buffer object advice
> + * @xe: XE device
> + * @vm: VM
> + * @vmas: Array of VMAs
> + * @num_vmas: Number of VMAs
> + * @op: Madvise operation
> + * @details: Madvise details for return values
> + *
> + * Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was purged
> + * in details->has_purged_bo for later copy to userspace.
> + *
> + * Note: Marked __maybe_unused until hooked into madvise_funcs[] in the
> + * final patch to maintain bisectability. The NULL placeholder in the
> + * array ensures proper -EINVAL return for userspace until all supporting
> + * infrastructure (shrinker, per-VMA tracking) is complete.
> + */
> +static void __maybe_unused madvise_purgeable(struct xe_device *xe,
> +					     struct xe_vm *vm,
> +					     struct xe_vma **vmas,
> +					     int num_vmas,
> +					     struct drm_xe_madvise *op,
> +					     struct xe_madvise_details *details)
> +{
> +	int i;
> +
> +	xe_assert(vm->xe, op->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE);
> +
> +	for (i = 0; i < num_vmas; i++) {
> +		struct xe_bo *bo = xe_vma_bo(vmas[i]);
> +
> +		if (!bo)
> +			continue;
> +
> +		/* BO must be locked before modifying madv state */
> +		xe_bo_assert_held(bo);
> +
> +		/*
> +		 * Once purged, always purged. Cannot transition back to WILLNEED.
> +		 * This matches i915 semantics where purged BOs are permanently invalid.
> +		 */
> +		if (xe_bo_is_purged(bo)) {
> +			details->has_purged_bo = true;
> +			continue;
> +		}
> +
> +		switch (op->purge_state_val.val) {
> +		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> +			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> +			break;
> +		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> +			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> +			break;
> +		default:
> +			drm_warn(&vm->xe->drm, "Invalid madvise value = %d\n",
> +				 op->purge_state_val.val);
> +			return;
> +		}
> +	}
> +}
> +
> typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
> 			     struct xe_vma **vmas, int num_vmas,
> 			     struct drm_xe_madvise *op,
> @@ -188,6 +253,12 @@ static const madvise_func madvise_funcs[] = {
> 	[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
> 	[DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
> 	[DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
> +	/*
> +	 * Purgeable support implemented but not enabled yet to maintain
> +	 * bisectability. Will be set to madvise_purgeable() in final patch
> +	 * when all infrastructure (shrinker, VMA tracking) is complete.
> +	 */
> +	[DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
>
> static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
> @@ -311,6 +382,19 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
> 			return false;
> 		break;
> 	}
> +	case DRM_XE_VMA_ATTR_PURGEABLE_STATE:
> +	{
> +		u32 val = args->purge_state_val.val;
> +
> +		if (XE_IOCTL_DBG(xe, !(val == DRM_XE_VMA_PURGEABLE_STATE_WILLNEED ||
> +				       val == DRM_XE_VMA_PURGEABLE_STATE_DONTNEED)))
> +			return false;
> +
> +		if (XE_IOCTL_DBG(xe, args->purge_state_val.pad))
> +			return false;
> +
> +		break;
> +	}
> 	default:
> 		if (XE_IOCTL_DBG(xe, 1))
> 			return false;
> @@ -329,6 +413,12 @@ static int xe_madvise_details_init(struct xe_vm *vm, const struct drm_xe_madvise
>
> 	memset(details, 0, sizeof(*details));
>
> +	/* Store retained pointer for purgeable state */
> +	if (args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE) {
> +		details->retained_ptr = args->purge_state_val.retained_ptr;
> +		return 0;
> +	}
> +
> 	if (args->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC) {
> 		int fd = args->preferred_mem_loc.devmem_fd;
> 		struct drm_pagemap *dpagemap;
> @@ -357,6 +447,21 @@ static void xe_madvise_details_fini(struct xe_madvise_details *details)
> 	drm_pagemap_put(details->dpagemap);
> }
>
> +static int xe_madvise_purgeable_retained_to_user(const struct xe_madvise_details *details)
> +{
> +	u32 retained;
> +
> +	if (!details->retained_ptr)
> +		return 0;
> +
> +	retained = !details->has_purged_bo;
> +
> +	if (put_user(retained, (u32 __user *)u64_to_user_ptr(details->retained_ptr)))
> +		return -EFAULT;
> +
> +	return 0;
> +}
> +
> static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
> 				   int num_vmas, u32 atomic_val)
> {
> @@ -414,6 +519,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> 	struct xe_vm *vm;
> 	struct drm_exec exec;
> 	int err, attr_type;
> +	bool do_retained;
>
> 	vm = xe_vm_lookup(xef, args->vm_id);
> 	if (XE_IOCTL_DBG(xe, !vm))
> @@ -424,6 +530,25 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> 		goto put_vm;
> 	}
>
> +	/* Cache whether we need to write retained, and validate it's initialized to 0 */
> +	do_retained = args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE &&
> +		      args->purge_state_val.retained_ptr;
> +	if (do_retained) {
> +		u32 retained;
> +		u32 __user *retained_ptr;
> +
> +		retained_ptr = u64_to_user_ptr(args->purge_state_val.retained_ptr);
> +		if (get_user(retained, retained_ptr)) {
> +			err = -EFAULT;
> +			goto put_vm;
> +		}
> +
> +		if (XE_IOCTL_DBG(xe, retained != 0)) {
> +			err = -EINVAL;
> +			goto put_vm;
> +		}
> +	}
> +
> 	xe_svm_flush(vm);
>
> 	err = down_write_killable(&vm->lock);
> @@ -479,6 +604,13 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> 	}
>
> 	attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
> +
> +	/* Ensure the madvise function exists for this type */
> +	if (!madvise_funcs[attr_type]) {
> +		err = -EINVAL;
> +		goto err_fini;
> +	}
> +
> 	madvise_funcs[attr_type](xe, vm, madvise_range.vmas,
> 				 madvise_range.num_vmas, args, &details);
>
> @@ -496,6 +628,10 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> 	xe_madvise_details_fini(&details);
> unlock_vm:
> 	up_write(&vm->lock);
> +
> +	/* Write retained value to user after releasing all locks */
> +	if (!err && do_retained)
> +		err = xe_madvise_purgeable_retained_to_user(&details);
> put_vm:
> 	xe_vm_put(vm);
> 	return err;
^ permalink raw reply [flat|nested] 39+ messages in thread
* [PATCH v6 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (2 preceding siblings ...)
2026-03-03 15:19 ` [PATCH v6 03/12] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
@ 2026-03-03 15:20 ` Arvind Yadav
2026-03-05 15:26 ` Thomas Hellström
2026-03-03 15:20 ` [PATCH v6 05/12] drm/xe/vm: Prevent binding of purged " Arvind Yadav
` (12 subsequent siblings)
16 siblings, 1 reply; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:20 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Block CPU page faults to buffer objects marked as purgeable (DONTNEED)
or already purged. Once a BO is marked DONTNEED, its contents can be
discarded by the kernel at any time, making access undefined behavior.
Return VM_FAULT_SIGBUS immediately to fail consistently instead of
allowing erratic behavior where access sometimes works (if not yet
purged) and sometimes fails (if purged).
For DONTNEED BOs:
- Block new CPU faults with SIGBUS to prevent undefined behavior.
- Existing CPU PTEs may still work until TLB flush, but new faults
fail immediately.
For PURGED BOs:
- Backing store has been reclaimed, making CPU access invalid.
- Without this check, accessing existing mmap mappings would trigger
xe_bo_fault_migrate() on freed backing store, causing kernel hangs
or crashes.
The purgeable check is added to both CPU fault paths:
- Fastpath (xe_bo_cpu_fault_fastpath): Returns VM_FAULT_SIGBUS immediately
under dma-resv lock, preventing attempts to migrate/validate
DONTNEED/purged pages.
- Slowpath (xe_bo_cpu_fault): Returns -EFAULT under drm_exec lock,
converted to VM_FAULT_SIGBUS.
This matches i915 semantics for purged buffer handling.
v2:
- Added xe_bo_is_purged(bo) instead of atomic_read.
- Avoids leaks and keeps drm_dev_exit() while returning.
v3:
- Move xe_bo_is_purged check under a dma-resv lock (Matthew Brost)
v4:
- Add purged check to fastpath (xe_bo_cpu_fault_fastpath) to prevent
hang when accessing existing mmap of purged BO.
v6:
- Block CPU faults to DONTNEED BOs with VM_FAULT_SIGBUS. (Thomas, Matt)
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 513f01aa2ddd..d05a73756905 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1979,6 +1979,16 @@ static vm_fault_t xe_bo_cpu_fault_fastpath(struct vm_fault *vmf, struct xe_devic
if (!dma_resv_trylock(tbo->base.resv))
goto out_validation;
+ /*
+ * Reject CPU faults to purgeable BOs. DONTNEED BOs can be purged
+ * at any time, and purged BOs have no backing store. Either case
+ * is undefined behavior for CPU access.
+ */
+ if (xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo)) {
+ ret = VM_FAULT_SIGBUS;
+ goto out_unlock;
+ }
+
if (xe_ttm_bo_is_imported(tbo)) {
ret = VM_FAULT_SIGBUS;
drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");
@@ -2069,6 +2079,15 @@ static vm_fault_t xe_bo_cpu_fault(struct vm_fault *vmf)
if (err)
break;
+ /*
+ * Reject CPU faults to purgeable BOs. DONTNEED BOs can be
+ * purged at any time, and purged BOs have no backing store.
+ */
+ if (xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo)) {
+ err = -EFAULT;
+ break;
+ }
+
if (xe_ttm_bo_is_imported(tbo)) {
err = -EFAULT;
drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");
--
2.43.0
^ permalink raw reply related [flat|nested] 39+ messages in thread

* Re: [PATCH v6 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects
2026-03-03 15:20 ` [PATCH v6 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects Arvind Yadav
@ 2026-03-05 15:26 ` Thomas Hellström
0 siblings, 0 replies; 39+ messages in thread
From: Thomas Hellström @ 2026-03-05 15:26 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
> Block CPU page faults to buffer objects marked as purgeable
> (DONTNEED)
> or already purged. Once a BO is marked DONTNEED, its contents can be
> discarded by the kernel at any time, making access undefined
> behavior.
> Return VM_FAULT_SIGBUS immediately to fail consistently instead of
> allowing erratic behavior where access sometimes works (if not yet
> purged) and sometimes fails (if purged).
>
> For DONTNEED BOs:
> - Block new CPU faults with SIGBUS to prevent undefined behavior.
> - Existing CPU PTEs may still work until TLB flush, but new faults
> fail immediately.
>
> For PURGED BOs:
> - Backing store has been reclaimed, making CPU access invalid.
> - Without this check, accessing existing mmap mappings would trigger
> xe_bo_fault_migrate() on freed backing store, causing kernel hangs
> or crashes.
>
> The purgeable check is added to both CPU fault paths:
> - Fastpath (xe_bo_cpu_fault_fastpath): Returns VM_FAULT_SIGBUS
> immediately
> under dma-resv lock, preventing attempts to migrate/validate
> DONTNEED/purged pages.
> - Slowpath (xe_bo_cpu_fault): Returns -EFAULT under drm_exec lock,
> converted to VM_FAULT_SIGBUS.
>
> This matches i915 semantics for purged buffer handling.
>
> v2:
> - Added xe_bo_is_purged(bo) instead of atomic_read.
> - Avoids leaks and keeps drm_dev_exit() while returning.
>
> v3:
> - Move xe_bo_is_purged check under a dma-resv lock (Matthew Brost)
>
> v4:
> - Add purged check to fastpath (xe_bo_cpu_fault_fastpath) to
> prevent
> hang when accessing existing mmap of purged BO.
>
> v6:
> - Block CPU faults to DONTNEED BOs with VM_FAULT_SIGBUS. (Thomas,
> Matt)
>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 513f01aa2ddd..d05a73756905 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> > @@ -1979,6 +1979,16 @@ static vm_fault_t xe_bo_cpu_fault_fastpath(struct vm_fault *vmf, struct xe_devic
> > 	if (!dma_resv_trylock(tbo->base.resv))
> > 		goto out_validation;
> >
> > +	/*
> > +	 * Reject CPU faults to purgeable BOs. DONTNEED BOs can be purged
> > +	 * at any time, and purged BOs have no backing store. Either case
> > +	 * is undefined behavior for CPU access.
> > +	 */
> > +	if (xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo)) {
> > +		ret = VM_FAULT_SIGBUS;
> > +		goto out_unlock;
> > +	}
> > +
> > 	if (xe_ttm_bo_is_imported(tbo)) {
> > 		ret = VM_FAULT_SIGBUS;
> > 		drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");
> > @@ -2069,6 +2079,15 @@ static vm_fault_t xe_bo_cpu_fault(struct vm_fault *vmf)
> > 	if (err)
> > 		break;
> >
> > +	/*
> > +	 * Reject CPU faults to purgeable BOs. DONTNEED BOs can be
> > +	 * purged at any time, and purged BOs have no backing store.
> > +	 */
> > +	if (xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo)) {
> > +		err = -EFAULT;
> > +		break;
> > +	}
> > +
> > 	if (xe_ttm_bo_is_imported(tbo)) {
> > 		err = -EFAULT;
> > 		drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");
^ permalink raw reply [flat|nested] 39+ messages in thread
* [PATCH v6 05/12] drm/xe/vm: Prevent binding of purged buffer objects
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (3 preceding siblings ...)
2026-03-03 15:20 ` [PATCH v6 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects Arvind Yadav
@ 2026-03-03 15:20 ` Arvind Yadav
2026-03-05 15:38 ` Thomas Hellström
2026-03-03 15:20 ` [PATCH v6 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
` (11 subsequent siblings)
16 siblings, 1 reply; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:20 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Add purge checking to vma_lock_and_validate() to block new mapping
operations on purged BOs while allowing cleanup operations to proceed.
Purged BOs have their backing pages freed by the kernel. New
mapping operations (MAP, PREFETCH, REMAP) must be rejected with
-EINVAL to prevent GPU access to invalid memory. Cleanup
operations (UNMAP) must be allowed so applications can release
resources after detecting purge via the retained field.
REMAP operations require mixed handling - reject new prev/next
VMAs if the BO is purged, but allow the unmap portion to proceed
for cleanup.
The check_purged flag in struct xe_vma_lock_and_validate_flags
distinguishes between these cases: true for new mappings (must reject),
false for cleanup (allow).
v2:
- Clarify that purged BOs are permanently invalid (i915 semantics)
- Remove incorrect claim about madvise(WILLNEED) restoring purged BOs
v3:
- Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
- Add check_purged parameter to distinguish new mappings from cleanup
- Allow UNMAP operations to prevent resource leaks
- Handle REMAP operation's dual nature (cleanup + new mappings)
v5:
- Replace three boolean parameters with struct xe_vma_lock_and_validate_flags
to improve readability and prevent argument transposition (Matt)
- Use u32 bitfields instead of bool members to match xe_bo_shrink_flags
pattern - more efficient packing and follows xe driver conventions (Thomas)
- Pass struct as const since flags are read-only (Thomas)
v6:
- Block VM_BIND to DONTNEED BOs with -EBUSY (Thomas, Matt)
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 71 ++++++++++++++++++++++++++++++++------
1 file changed, 60 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c65d014c7491..4a8abdcfb912 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2917,8 +2917,20 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
}
}
+/**
+ * struct xe_vma_lock_and_validate_flags - Flags for vma_lock_and_validate()
+ * @res_evict: Allow evicting resources during validation
+ * @validate: Perform BO validation
+ * @check_purged: Reject operation if BO is purged
+ */
+struct xe_vma_lock_and_validate_flags {
+ u32 res_evict : 1;
+ u32 validate : 1;
+ u32 check_purged : 1;
+};
+
static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
- bool res_evict, bool validate)
+ const struct xe_vma_lock_and_validate_flags *flags)
{
struct xe_bo *bo = xe_vma_bo(vma);
struct xe_vm *vm = xe_vma_vm(vma);
@@ -2927,10 +2939,19 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
if (bo) {
if (!bo->vm)
err = drm_exec_lock_obj(exec, &bo->ttm.base);
- if (!err && validate)
+
+ /* Reject new mappings to DONTNEED/purged BOs; allow cleanup operations */
+ if (!err && flags->check_purged) {
+ if (xe_bo_madv_is_dontneed(bo))
+ err = -EBUSY; /* BO marked purgeable */
+ else if (xe_bo_is_purged(bo))
+ err = -EINVAL; /* BO already purged */
+ }
+
+ if (!err && flags->validate)
err = xe_bo_validate(bo, vm,
xe_vm_allow_vm_eviction(vm) &&
- res_evict, exec);
+ flags->res_evict, exec);
}
return err;
@@ -3023,9 +3044,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
case DRM_GPUVA_OP_MAP:
if (!op->map.invalidate_on_bind)
err = vma_lock_and_validate(exec, op->map.vma,
- res_evict,
- !xe_vm_in_fault_mode(vm) ||
- op->map.immediate);
+ &(struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = !xe_vm_in_fault_mode(vm) ||
+ op->map.immediate,
+ .check_purged = true
+ });
break;
case DRM_GPUVA_OP_REMAP:
err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
@@ -3034,13 +3058,25 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.remap.unmap->va),
- res_evict, false);
+ &(struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = false,
+ .check_purged = false
+ });
if (!err && op->remap.prev)
err = vma_lock_and_validate(exec, op->remap.prev,
- res_evict, true);
+ &(struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = true,
+ .check_purged = true
+ });
if (!err && op->remap.next)
err = vma_lock_and_validate(exec, op->remap.next,
- res_evict, true);
+ &(struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = true,
+ .check_purged = true
+ });
break;
case DRM_GPUVA_OP_UNMAP:
err = check_ufence(gpuva_to_vma(op->base.unmap.va));
@@ -3049,7 +3085,11 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.unmap.va),
- res_evict, false);
+ &(struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = false,
+ .check_purged = false
+ });
break;
case DRM_GPUVA_OP_PREFETCH:
{
@@ -3062,9 +3102,18 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
region <= ARRAY_SIZE(region_to_mem_type));
}
+ /*
+ * Prefetch attempts to migrate BO's backing store without
+ * repopulating it first. Purged BOs have no backing store
+ * to migrate, so reject the operation.
+ */
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.prefetch.va),
- res_evict, false);
+ &(struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = false,
+ .check_purged = true
+ });
if (!err && !xe_vma_has_no_bo(vma))
err = xe_bo_migrate(xe_vma_bo(vma),
region_to_mem_type[region],
--
2.43.0
^ permalink raw reply related [flat|nested] 39+ messages in thread* Re: [PATCH v6 05/12] drm/xe/vm: Prevent binding of purged buffer objects
2026-03-03 15:20 ` [PATCH v6 05/12] drm/xe/vm: Prevent binding of purged " Arvind Yadav
@ 2026-03-05 15:38 ` Thomas Hellström
2026-03-20 2:34 ` Yadav, Arvind
0 siblings, 1 reply; 39+ messages in thread
From: Thomas Hellström @ 2026-03-05 15:38 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
> Add purge checking to vma_lock_and_validate() to block new mapping
> operations on purged BOs while allowing cleanup operations to
> proceed.
>
> Purged BOs have their backing pages freed by the kernel. New
> mapping operations (MAP, PREFETCH, REMAP) must be rejected with
> -EINVAL to prevent GPU access to invalid memory. Cleanup
> operations (UNMAP) must be allowed so applications can release
> resources after detecting purge via the retained field.
>
> REMAP operations require mixed handling - reject new prev/next
> VMAs if the BO is purged, but allow the unmap portion to proceed
> for cleanup.
>
> The check_purged flag in struct xe_vma_lock_and_validate_flags
> distinguishes between these cases: true for new mappings (must
> reject),
> false for cleanup (allow).
>
> v2:
> - Clarify that purged BOs are permanently invalid (i915 semantics)
> - Remove incorrect claim about madvise(WILLNEED) restoring purged
> BOs
>
> v3:
> - Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
> - Add check_purged parameter to distinguish new mappings from
> cleanup
> - Allow UNMAP operations to prevent resource leaks
> - Handle REMAP operation's dual nature (cleanup + new mappings)
>
> v5:
> - Replace three boolean parameters with struct
> xe_vma_lock_and_validate_flags
> to improve readability and prevent argument transposition (Matt)
> - Use u32 bitfields instead of bool members to match
> xe_bo_shrink_flags
> pattern - more efficient packing and follows xe driver
> conventions (Thomas)
> - Pass struct as const since flags are read-only (Thomas)
>
> v6:
> - Block VM_BIND to DONTNEED BOs with -EBUSY (Thomas, Matt)
>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 71 ++++++++++++++++++++++++++++++++----
> --
> 1 file changed, 60 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index c65d014c7491..4a8abdcfb912 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2917,8 +2917,20 @@ static void vm_bind_ioctl_ops_unwind(struct
> xe_vm *vm,
> }
> }
>
> +/**
> + * struct xe_vma_lock_and_validate_flags - Flags for
> vma_lock_and_validate()
> + * @res_evict: Allow evicting resources during validation
> + * @validate: Perform BO validation
> + * @check_purged: Reject operation if BO is purged
> + */
> +struct xe_vma_lock_and_validate_flags {
> + u32 res_evict : 1;
> + u32 validate : 1;
> + u32 check_purged : 1;
> +};
> +
> static int vma_lock_and_validate(struct drm_exec *exec, struct
> xe_vma *vma,
> - bool res_evict, bool validate)
> + const struct
> xe_vma_lock_and_validate_flags *flags)
Small structs like these can be passed by value rather than by
reference. It's done elsewhere in the xe driver, but we've also seen
people complain about ABI incompatibilities. I'd recommend passing by
value here to make the driver style consistent.
Other than that LGTM.
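[Editor's note: for illustration, the pass-by-value style Thomas suggests might look like this minimal sketch. The types and names below are simplified stand-ins, not the actual xe driver code.]

```c
#include <assert.h>

/* Hypothetical, simplified stand-in for xe_vma_lock_and_validate_flags. */
struct vl_flags {
	unsigned int res_evict : 1;
	unsigned int validate : 1;
	unsigned int check_purged : 1;
};

/* Taking the three-bit struct by value: the whole struct fits in a
 * register, so the call is as cheap as passing a u32 and avoids the
 * pointer indirection and aliasing questions of pass-by-reference. */
static int vl_mode(struct vl_flags f)
{
	if (f.check_purged)
		return 2;	/* a purge check would run first */
	return f.validate ? 1 : 0;
}
```

Call sites keep the same compound literal, just without the address-of operator, e.g. `vl_mode((struct vl_flags){ .validate = 1 })`.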
> {
> struct xe_bo *bo = xe_vma_bo(vma);
> struct xe_vm *vm = xe_vma_vm(vma);
> @@ -2927,10 +2939,19 @@ static int vma_lock_and_validate(struct
> drm_exec *exec, struct xe_vma *vma,
> if (bo) {
> if (!bo->vm)
> err = drm_exec_lock_obj(exec, &bo-
> >ttm.base);
> - if (!err && validate)
> +
> + /* Reject new mappings to DONTNEED/purged BOs; allow
> cleanup operations */
> + if (!err && flags->check_purged) {
> + if (xe_bo_madv_is_dontneed(bo))
> + err = -EBUSY; /* BO marked
> purgeable */
> + else if (xe_bo_is_purged(bo))
> + err = -EINVAL; /* BO already purged
> */
> + }
> +
> + if (!err && flags->validate)
> err = xe_bo_validate(bo, vm,
>
> xe_vm_allow_vm_eviction(vm) &&
> - res_evict, exec);
> + flags->res_evict,
> exec);
> }
>
> return err;
> @@ -3023,9 +3044,12 @@ static int op_lock_and_prep(struct drm_exec
> *exec, struct xe_vm *vm,
> case DRM_GPUVA_OP_MAP:
> if (!op->map.invalidate_on_bind)
> err = vma_lock_and_validate(exec, op-
> >map.vma,
> - res_evict,
> -
> !xe_vm_in_fault_mode(vm) ||
> - op-
> >map.immediate);
> + &(struct
> xe_vma_lock_and_validate_flags) {
> +
> .res_evict = res_evict,
> +
> .validate = !xe_vm_in_fault_mode(vm) ||
> +
> op->map.immediate,
> +
> .check_purged = true
> + });
> break;
> case DRM_GPUVA_OP_REMAP:
> err = check_ufence(gpuva_to_vma(op-
> >base.remap.unmap->va));
> @@ -3034,13 +3058,25 @@ static int op_lock_and_prep(struct drm_exec
> *exec, struct xe_vm *vm,
>
> err = vma_lock_and_validate(exec,
> gpuva_to_vma(op-
> >base.remap.unmap->va),
> - res_evict, false);
> + &(struct
> xe_vma_lock_and_validate_flags) {
> + .res_evict =
> res_evict,
> + .validate =
> false,
> + .check_purged =
> false
> + });
> if (!err && op->remap.prev)
> err = vma_lock_and_validate(exec, op-
> >remap.prev,
> - res_evict,
> true);
> + &(struct
> xe_vma_lock_and_validate_flags) {
> +
> .res_evict = res_evict,
> +
> .validate = true,
> +
> .check_purged = true
> + });
> if (!err && op->remap.next)
> err = vma_lock_and_validate(exec, op-
> >remap.next,
> - res_evict,
> true);
> + &(struct
> xe_vma_lock_and_validate_flags) {
> +
> .res_evict = res_evict,
> +
> .validate = true,
> +
> .check_purged = true
> + });
> break;
> case DRM_GPUVA_OP_UNMAP:
> err = check_ufence(gpuva_to_vma(op->base.unmap.va));
> @@ -3049,7 +3085,11 @@ static int op_lock_and_prep(struct drm_exec
> *exec, struct xe_vm *vm,
>
> err = vma_lock_and_validate(exec,
> gpuva_to_vma(op-
> >base.unmap.va),
> - res_evict, false);
> + &(struct
> xe_vma_lock_and_validate_flags) {
> + .res_evict =
> res_evict,
> + .validate =
> false,
> + .check_purged =
> false
> + });
> break;
> case DRM_GPUVA_OP_PREFETCH:
> {
> @@ -3062,9 +3102,18 @@ static int op_lock_and_prep(struct drm_exec
> *exec, struct xe_vm *vm,
> region <=
> ARRAY_SIZE(region_to_mem_type));
> }
>
> + /*
> + * Prefetch attempts to migrate BO's backing store
> without
> + * repopulating it first. Purged BOs have no backing
> store
> + * to migrate, so reject the operation.
> + */
> err = vma_lock_and_validate(exec,
> gpuva_to_vma(op-
> >base.prefetch.va),
> - res_evict, false);
> + &(struct
> xe_vma_lock_and_validate_flags) {
> + .res_evict =
> res_evict,
> + .validate =
> false,
> + .check_purged =
> true
> + });
> if (!err && !xe_vma_has_no_bo(vma))
> err = xe_bo_migrate(xe_vma_bo(vma),
>
> region_to_mem_type[region],
^ permalink raw reply [flat|nested] 39+ messages in thread

* Re: [PATCH v6 05/12] drm/xe/vm: Prevent binding of purged buffer objects
2026-03-05 15:38 ` Thomas Hellström
@ 2026-03-20 2:34 ` Yadav, Arvind
0 siblings, 0 replies; 39+ messages in thread
From: Yadav, Arvind @ 2026-03-20 2:34 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On 05-03-2026 21:08, Thomas Hellström wrote:
> On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
>> Add purge checking to vma_lock_and_validate() to block new mapping
>> operations on purged BOs while allowing cleanup operations to
>> proceed.
>>
>> Purged BOs have their backing pages freed by the kernel. New
>> mapping operations (MAP, PREFETCH, REMAP) must be rejected with
>> -EINVAL to prevent GPU access to invalid memory. Cleanup
>> operations (UNMAP) must be allowed so applications can release
>> resources after detecting purge via the retained field.
>>
>> REMAP operations require mixed handling - reject new prev/next
>> VMAs if the BO is purged, but allow the unmap portion to proceed
>> for cleanup.
>>
>> The check_purged flag in struct xe_vma_lock_and_validate_flags
>> distinguishes between these cases: true for new mappings (must
>> reject),
>> false for cleanup (allow).
>>
>> v2:
>> - Clarify that purged BOs are permanently invalid (i915 semantics)
>> - Remove incorrect claim about madvise(WILLNEED) restoring purged
>> BOs
>>
>> v3:
>> - Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
>> - Add check_purged parameter to distinguish new mappings from
>> cleanup
>> - Allow UNMAP operations to prevent resource leaks
>> - Handle REMAP operation's dual nature (cleanup + new mappings)
>>
>> v5:
>> - Replace three boolean parameters with struct
>> xe_vma_lock_and_validate_flags
>> to improve readability and prevent argument transposition (Matt)
>> - Use u32 bitfields instead of bool members to match
>> xe_bo_shrink_flags
>> pattern - more efficient packing and follows xe driver
>> conventions (Thomas)
>> - Pass struct as const since flags are read-only (Thomas)
>>
>> v6:
>> - Block VM_BIND to DONTNEED BOs with -EBUSY (Thomas, Matt)
>>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_vm.c | 71 ++++++++++++++++++++++++++++++++----
>> --
>> 1 file changed, 60 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index c65d014c7491..4a8abdcfb912 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2917,8 +2917,20 @@ static void vm_bind_ioctl_ops_unwind(struct
>> xe_vm *vm,
>> }
>> }
>>
>> +/**
>> + * struct xe_vma_lock_and_validate_flags - Flags for
>> vma_lock_and_validate()
>> + * @res_evict: Allow evicting resources during validation
>> + * @validate: Perform BO validation
>> + * @check_purged: Reject operation if BO is purged
>> + */
>> +struct xe_vma_lock_and_validate_flags {
>> + u32 res_evict : 1;
>> + u32 validate : 1;
>> + u32 check_purged : 1;
>> +};
>> +
>> static int vma_lock_and_validate(struct drm_exec *exec, struct
>> xe_vma *vma,
>> - bool res_evict, bool validate)
>> + const struct
>> xe_vma_lock_and_validate_flags *flags)
> Small structs like these can be passed by value rather than by
> reference. It's done elsewhere in the xe driver, but we've also seen
> people complain about ABI incompatibilities. I'd recommend passing by
> value here to make the driver style consistent.
>
> Other than that LGTM.
Noted,
Thanks,
Arvind
^ permalink raw reply [flat|nested] 39+ messages in thread
* [PATCH v6 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (4 preceding siblings ...)
2026-03-03 15:20 ` [PATCH v6 05/12] drm/xe/vm: Prevent binding of purged " Arvind Yadav
@ 2026-03-03 15:20 ` Arvind Yadav
2026-03-10 9:57 ` Thomas Hellström
2026-03-03 15:20 ` [PATCH v6 07/12] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
` (10 subsequent siblings)
16 siblings, 1 reply; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:20 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Track purgeable state per-VMA instead of using a coarse shared
BO check. This prevents purging shared BOs until all VMAs across
all VMs are marked DONTNEED.
Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
a BO purgeable. Add xe_bo_recompute_purgeable_state() to
handle state transitions when VMAs are destroyed - if all
remaining VMAs are DONTNEED the BO can become purgeable, and
if no VMAs remain the BO's existing state is preserved.
The per-VMA purgeable_state field stores the madvise hint for
each mapping. Shared BOs can only be purged when all VMAs
unanimously indicate DONTNEED.
This prevents the bug where unmapping the last VMA would incorrectly flip
a DONTNEED BO back to WILLNEED. The enum-based state check preserves BO
state when no VMAs remain, only updating when VMAs provide explicit hints.
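[Editor's note: the aggregation rule described above reduces to a small decision function. The following is a hedged sketch with simplified, hypothetical types; the real implementation walks drm_gpuvm_bo lists under the BO dma-resv lock.]

```c
#include <assert.h>
#include <stddef.h>

enum purge_hint { HINT_WILLNEED, HINT_DONTNEED, HINT_NO_VMAS };

/* One WILLNEED vote keeps the BO resident; an empty set is reported
 * separately so the caller can preserve the BO's current state instead
 * of incorrectly flipping a DONTNEED BO back to WILLNEED. */
static enum purge_hint aggregate_hints(const enum purge_hint *vmas, size_t n)
{
	size_t i;

	if (n == 0)
		return HINT_NO_VMAS;
	for (i = 0; i < n; i++)
		if (vmas[i] != HINT_DONTNEED)
			return HINT_WILLNEED;
	return HINT_DONTNEED;
}
```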
v3:
- This addresses Thomas Hellström's feedback: "loop over all vmas
attached to the bo and check that they all say WONTNEED. This will
also need a check at VMA unbinding"
v4:
- @madv_purgeable atomic_t → u32 change across all relevant
patches (Matt)
v5:
- Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
right after drm_gpuva_unlink() where we already hold the BO lock,
drop the trylock-based late destroy path (Matt)
- Move purgeable_state into xe_vma_mem_attr with the other madvise
attributes (Matt)
- Drop READ_ONCE since the BO lock already protects us (Matt)
- Keep returning false when there are no VMAs - otherwise we'd mark
BOs purgeable without any user hint (Matt)
- Use xe_bo_set_purgeable_state() instead of direct initialization (Matt)
- use xe_assert instead of drm_warn (Thomas)
v6:
- Fix state transition bug: don't flip DONTNEED → WILLNEED when last
VMA unmapped (Matt)
- Change xe_bo_all_vmas_dontneed() from bool to enum to distinguish
"no VMAs" from "has WILLNEED VMA" (Matt)
- Preserve BO state on NO_VMAS instead of forcing WILLNEED.
- Set skip_invalidation explicitly in madvise_purgeable() to ensure
DONTNEED always zaps GPU PTEs regardless of prior madvise state.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 1 +
drivers/gpu/drm/xe/xe_vm.c | 9 +-
drivers/gpu/drm/xe/xe_vm_madvise.c | 127 +++++++++++++++++++++++++++--
drivers/gpu/drm/xe/xe_vm_madvise.h | 3 +
drivers/gpu/drm/xe/xe_vm_types.h | 11 +++
5 files changed, 144 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 002b6c22ad3f..dffa0cab5f5d 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -318,6 +318,7 @@ static void xe_vma_set_default_attributes(struct xe_vma *vma)
.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
.pat_index = vma->attr.default_pat_index,
.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+ .purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
};
xe_vma_mem_attr_copy(&vma->attr, &default_attr);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 4a8abdcfb912..8e4c14fa3df2 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -39,6 +39,7 @@
#include "xe_tile.h"
#include "xe_tlb_inval.h"
#include "xe_trace_bo.h"
+#include "xe_vm_madvise.h"
#include "xe_wa.h"
static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
@@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
static void xe_vma_destroy_late(struct xe_vma *vma)
{
struct xe_vm *vm = xe_vma_vm(vma);
+ struct xe_bo *bo = xe_vma_bo(vma);
if (vma->ufence) {
xe_sync_ufence_put(vma->ufence);
@@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
} else if (xe_vma_is_null(vma) || xe_vma_is_cpu_addr_mirror(vma)) {
xe_vm_put(vm);
} else {
- xe_bo_put(xe_vma_bo(vma));
+ xe_bo_put(bo);
}
xe_vma_free(vma);
@@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence *fence,
static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
{
struct xe_vm *vm = xe_vma_vm(vma);
+ struct xe_bo *bo = xe_vma_bo(vma);
lockdep_assert_held_write(&vm->lock);
xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
@@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
xe_userptr_destroy(to_userptr_vma(vma));
} else if (!xe_vma_is_null(vma) && !xe_vma_is_cpu_addr_mirror(vma)) {
- xe_bo_assert_held(xe_vma_bo(vma));
+ xe_bo_assert_held(bo);
drm_gpuva_unlink(&vma->gpuva);
+ xe_bo_recompute_purgeable_state(bo);
}
xe_vm_assert_held(vm);
@@ -2691,6 +2695,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
.default_pat_index = op->map.pat_index,
.pat_index = op->map.pat_index,
+ .purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
};
flags |= op->map.vma_flags & XE_VMA_CREATE_MASK;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index f7e767f21795..ca003e0db87b 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -12,6 +12,7 @@
#include "xe_pat.h"
#include "xe_pt.h"
#include "xe_svm.h"
+#include "xe_vm.h"
struct xe_vmas_in_madvise_range {
u64 addr;
@@ -183,6 +184,112 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
}
}
+/**
+ * enum xe_bo_vmas_purge_state - VMA purgeable state aggregation
+ *
+ * Distinguishes whether a BO's VMAs are all DONTNEED, have at least
+ * one WILLNEED, or have no VMAs at all.
+ *
+ * Enum values align with XE_MADV_PURGEABLE_* states for consistency.
+ */
+enum xe_bo_vmas_purge_state {
+ /** @XE_BO_VMAS_STATE_WILLNEED: At least one VMA is WILLNEED */
+ XE_BO_VMAS_STATE_WILLNEED = 0,
+ /** @XE_BO_VMAS_STATE_DONTNEED: All VMAs are DONTNEED */
+ XE_BO_VMAS_STATE_DONTNEED = 1,
+ /** @XE_BO_VMAS_STATE_NO_VMAS: BO has no VMAs */
+ XE_BO_VMAS_STATE_NO_VMAS = 2,
+};
+
+/**
+ * xe_bo_all_vmas_dontneed() - Determine BO VMA purgeable state
+ * @bo: Buffer object
+ *
+ * Check all VMAs across all VMs to determine aggregate purgeable state.
+ * Shared BOs require unanimous DONTNEED state from all mappings.
+ *
+ * Caller must hold BO dma-resv lock.
+ *
+ * Return: XE_BO_VMAS_STATE_DONTNEED if all VMAs are DONTNEED,
+ * XE_BO_VMAS_STATE_WILLNEED if at least one VMA is not DONTNEED,
+ * XE_BO_VMAS_STATE_NO_VMAS if BO has no VMAs
+ */
+static enum xe_bo_vmas_purge_state xe_bo_all_vmas_dontneed(struct xe_bo *bo)
+{
+ struct drm_gpuvm_bo *vm_bo;
+ struct drm_gpuva *gpuva;
+ struct drm_gem_object *obj = &bo->ttm.base;
+ bool has_vmas = false;
+
+ xe_bo_assert_held(bo);
+
+ drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
+ drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ has_vmas = true;
+
+ /* Any non-DONTNEED VMA prevents purging */
+ if (vma->attr.purgeable_state != XE_MADV_PURGEABLE_DONTNEED)
+ return XE_BO_VMAS_STATE_WILLNEED;
+ }
+ }
+
+ /*
+ * No VMAs => preserve existing BO purgeable state.
+ * Avoids incorrectly flipping DONTNEED -> WILLNEED when last VMA unmapped.
+ */
+ if (!has_vmas)
+ return XE_BO_VMAS_STATE_NO_VMAS;
+
+ return XE_BO_VMAS_STATE_DONTNEED;
+}
+
+/**
+ * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state from VMAs
+ * @bo: Buffer object
+ *
+ * Walk all VMAs to determine if BO should be purgeable or not.
+ * Shared BOs require unanimous DONTNEED state from all mappings.
+ *
+ * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM lists,
+ * VM lock must also be held (write) to prevent concurrent VMA modifications.
+ * This is satisfied at both call sites:
+ * - xe_vma_destroy(): holds vm->lock write
+ * - madvise_purgeable(): holds vm->lock write (from madvise ioctl path)
+ *
+ * Return: nothing
+ */
+void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
+{
+ enum xe_bo_vmas_purge_state vma_state;
+
+ if (!bo)
+ return;
+
+ xe_bo_assert_held(bo);
+
+ /*
+ * Once purged, always purged. Cannot transition back to WILLNEED.
+ * This matches i915 semantics where purged BOs are permanently invalid.
+ */
+ if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
+ return;
+
+ vma_state = xe_bo_all_vmas_dontneed(bo);
+
+ if (vma_state == XE_BO_VMAS_STATE_DONTNEED) {
+ /* All VMAs are DONTNEED - mark BO purgeable */
+ if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
+ xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+ } else if (vma_state == XE_BO_VMAS_STATE_WILLNEED) {
+ /* At least one VMA is WILLNEED - BO must not be purgeable */
+ if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
+ xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+ }
+ /* XE_BO_VMAS_STATE_NO_VMAS: Preserve existing BO state */
+}
+
/**
* madvise_purgeable - Handle purgeable buffer object advice
* @xe: XE device
@@ -214,8 +321,11 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
for (i = 0; i < num_vmas; i++) {
struct xe_bo *bo = xe_vma_bo(vmas[i]);
- if (!bo)
+ if (!bo) {
+ /* Purgeable state applies to BOs only, skip non-BO VMAs */
+ vmas[i]->skip_invalidation = true;
continue;
+ }
/* BO must be locked before modifying madv state */
xe_bo_assert_held(bo);
@@ -226,19 +336,26 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
*/
if (xe_bo_is_purged(bo)) {
details->has_purged_bo = true;
+ vmas[i]->skip_invalidation = true;
continue;
}
switch (op->purge_state_val.val) {
case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
- xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+ vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_WILLNEED;
+ vmas[i]->skip_invalidation = true;
+
+ xe_bo_recompute_purgeable_state(bo);
break;
case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
- xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+ vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_DONTNEED;
+ vmas[i]->skip_invalidation = false;
+
+ xe_bo_recompute_purgeable_state(bo);
break;
default:
- drm_warn(&vm->xe->drm, "Invalid madvice value = %d\n",
- op->purge_state_val.val);
+ /* Should never hit - values validated in madvise_args_are_sane() */
+ xe_assert(vm->xe, 0);
return;
}
}
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
index b0e1fc445f23..39acd2689ca0 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.h
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
@@ -8,8 +8,11 @@
struct drm_device;
struct drm_file;
+struct xe_bo;
int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
+void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 1f6f7e30e751..bfe7157756ad 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
* same as default_pat_index unless overwritten by madvise.
*/
u16 pat_index;
+
+ /**
+ * @purgeable_state: Purgeable hint for this VMA mapping
+ *
+ * Per-VMA purgeable state from madvise. Valid states are WILLNEED (0)
+ * or DONTNEED (1). Shared BOs require all VMAs to be DONTNEED before
+ * the BO can be purged. PURGED state exists only at BO level.
+ *
+ * Protected by BO dma-resv lock. Set via DRM_IOCTL_XE_MADVISE.
+ */
+ u32 purgeable_state;
};
struct xe_vma {
--
2.43.0
^ permalink raw reply related [flat|nested] 39+ messages in thread

* Re: [PATCH v6 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking
2026-03-03 15:20 ` [PATCH v6 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
@ 2026-03-10 9:57 ` Thomas Hellström
2026-03-23 6:47 ` Yadav, Arvind
0 siblings, 1 reply; 39+ messages in thread
From: Thomas Hellström @ 2026-03-10 9:57 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
> Track purgeable state per-VMA instead of using a coarse shared
> BO check. This prevents purging shared BOs until all VMAs across
> all VMs are marked DONTNEED.
>
> Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
> a BO purgeable. Add xe_bo_recompute_purgeable_state() to
> handle state transitions when VMAs are destroyed - if all
> remaining VMAs are DONTNEED the BO can become purgeable, and
> if no VMAs remain the BO's existing state is preserved.
>
> The per-VMA purgeable_state field stores the madvise hint for
> each mapping. Shared BOs can only be purged when all VMAs
> unanimously indicate DONTNEED.
>
> This prevents the bug where unmapping the last VMA would incorrectly
> flip
> a DONTNEED BO back to WILLNEED. The enum-based state check preserves
> BO
> state when no VMAs remain, only updating when VMAs provide explicit
> hints.
>
> v3:
> - This addresses Thomas Hellström's feedback: "loop over all vmas
> attached to the bo and check that they all say WONTNEED. This
> will
> also need a check at VMA unbinding"
>
> v4:
> - @madv_purgeable atomic_t → u32 change across all relevant
> patches (Matt)
>
> v5:
> - Call xe_bo_recheck_purgeable_on_vma_unbind() from
> xe_vma_destroy()
> right after drm_gpuva_unlink() where we already hold the BO lock,
> drop the trylock-based late destroy path (Matt)
> - Move purgeable_state into xe_vma_mem_attr with the other madvise
> attributes (Matt)
> - Drop READ_ONCE since the BO lock already protects us (Matt)
> - Keep returning false when there are no VMAs - otherwise we'd mark
> BOs purgeable without any user hint (Matt)
> - Use xe_bo_set_purgeable_state() instead of direct
> initialization(Matt)
> - use xe_assert instead of drm_warn (Thomas)
>
> v6:
> - Fix state transition bug: don't flip DONTNEED → WILLNEED when
> last
> VMA unmapped (Matt)
> - Change xe_bo_all_vmas_dontneed() from bool to enum to distinguish
> "no VMAs" from "has WILLNEED VMA" (Matt)
> - Preserve BO state on NO_VMAS instead of forcing WILLNEED.
> - Set skip_invalidation explicitly in madvise_purgeable() to ensure
> DONTNEED always zaps GPU PTEs regardless of prior madvise state.
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 1 +
> drivers/gpu/drm/xe/xe_vm.c | 9 +-
> drivers/gpu/drm/xe/xe_vm_madvise.c | 127
> +++++++++++++++++++++++++++--
> drivers/gpu/drm/xe/xe_vm_madvise.h | 3 +
> drivers/gpu/drm/xe/xe_vm_types.h | 11 +++
> 5 files changed, 144 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c
> b/drivers/gpu/drm/xe/xe_svm.c
> index 002b6c22ad3f..dffa0cab5f5d 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -318,6 +318,7 @@ static void xe_vma_set_default_attributes(struct
> xe_vma *vma)
> .preferred_loc.migration_policy =
> DRM_XE_MIGRATE_ALL_PAGES,
> .pat_index = vma->attr.default_pat_index,
> .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
> + .purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
> };
>
> xe_vma_mem_attr_copy(&vma->attr, &default_attr);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 4a8abdcfb912..8e4c14fa3df2 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -39,6 +39,7 @@
> #include "xe_tile.h"
> #include "xe_tlb_inval.h"
> #include "xe_trace_bo.h"
> +#include "xe_vm_madvise.h"
> #include "xe_wa.h"
>
> static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> @@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct
> xe_vm *vm,
> static void xe_vma_destroy_late(struct xe_vma *vma)
> {
> struct xe_vm *vm = xe_vma_vm(vma);
> + struct xe_bo *bo = xe_vma_bo(vma);
>
> if (vma->ufence) {
> xe_sync_ufence_put(vma->ufence);
> @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma
> *vma)
> } else if (xe_vma_is_null(vma) ||
> xe_vma_is_cpu_addr_mirror(vma)) {
> xe_vm_put(vm);
> } else {
> - xe_bo_put(xe_vma_bo(vma));
> + xe_bo_put(bo);
> }
>
> xe_vma_free(vma);
> @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence
> *fence,
> static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence
> *fence)
> {
> struct xe_vm *vm = xe_vma_vm(vma);
> + struct xe_bo *bo = xe_vma_bo(vma);
>
> lockdep_assert_held_write(&vm->lock);
> xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
> @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma,
> struct dma_fence *fence)
> xe_assert(vm->xe, vma->gpuva.flags &
> XE_VMA_DESTROYED);
> xe_userptr_destroy(to_userptr_vma(vma));
> } else if (!xe_vma_is_null(vma) &&
> !xe_vma_is_cpu_addr_mirror(vma)) {
> - xe_bo_assert_held(xe_vma_bo(vma));
> + xe_bo_assert_held(bo);
>
> drm_gpuva_unlink(&vma->gpuva);
> + xe_bo_recompute_purgeable_state(bo);
> }
>
> xe_vm_assert_held(vm);
> @@ -2691,6 +2695,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm
> *vm, struct drm_gpuva_ops *ops,
> .atomic_access =
> DRM_XE_ATOMIC_UNDEFINED,
> .default_pat_index = op-
> >map.pat_index,
> .pat_index = op->map.pat_index,
> + .purgeable_state =
> XE_MADV_PURGEABLE_WILLNEED,
> };
>
> flags |= op->map.vma_flags &
> XE_VMA_CREATE_MASK;
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
> b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index f7e767f21795..ca003e0db87b 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -12,6 +12,7 @@
> #include "xe_pat.h"
> #include "xe_pt.h"
> #include "xe_svm.h"
> +#include "xe_vm.h"
>
> struct xe_vmas_in_madvise_range {
> u64 addr;
> @@ -183,6 +184,112 @@ static void madvise_pat_index(struct xe_device
> *xe, struct xe_vm *vm,
> }
> }
>
> +/**
> + * enum xe_bo_vmas_purge_state - VMA purgeable state aggregation
> + *
> + * Distinguishes whether a BO's VMAs are all DONTNEED, have at least
> + * one WILLNEED, or have no VMAs at all.
> + *
> + * Enum values align with XE_MADV_PURGEABLE_* states for
> consistency.
> + */
> +enum xe_bo_vmas_purge_state {
> + /** @XE_BO_VMAS_STATE_WILLNEED: At least one VMA is WILLNEED
> */
> + XE_BO_VMAS_STATE_WILLNEED = 0,
> + /** @XE_BO_VMAS_STATE_DONTNEED: All VMAs are DONTNEED */
> + XE_BO_VMAS_STATE_DONTNEED = 1,
> + /** @XE_BO_VMAS_STATE_NO_VMAS: BO has no VMAs */
> + XE_BO_VMAS_STATE_NO_VMAS = 2,
> +};
> +
> +/**
> + * xe_bo_all_vmas_dontneed() - Determine BO VMA purgeable state
> + * @bo: Buffer object
> + *
> + * Check all VMAs across all VMs to determine aggregate purgeable
> state.
> + * Shared BOs require unanimous DONTNEED state from all mappings.
> + *
> + * Caller must hold BO dma-resv lock.
> + *
> + * Return: XE_BO_VMAS_STATE_DONTNEED if all VMAs are DONTNEED,
> + * XE_BO_VMAS_STATE_WILLNEED if at least one VMA is not
> DONTNEED,
> + * XE_BO_VMAS_STATE_NO_VMAS if BO has no VMAs
> + */
> +static enum xe_bo_vmas_purge_state xe_bo_all_vmas_dontneed(struct
> xe_bo *bo)
> +{
> + struct drm_gpuvm_bo *vm_bo;
> + struct drm_gpuva *gpuva;
> + struct drm_gem_object *obj = &bo->ttm.base;
> + bool has_vmas = false;
> +
> + xe_bo_assert_held(bo);
> +
> + drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> + drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
> + struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> + has_vmas = true;
> +
> + /* Any non-DONTNEED VMA prevents purging */
> + if (vma->attr.purgeable_state !=
> XE_MADV_PURGEABLE_DONTNEED)
> + return XE_BO_VMAS_STATE_WILLNEED;
> + }
> + }
> +
> + /*
> + * No VMAs => preserve existing BO purgeable state.
> + * Avoids incorrectly flipping DONTNEED -> WILLNEED when
> last VMA unmapped.
> + */
> + if (!has_vmas)
> + return XE_BO_VMAS_STATE_NO_VMAS;
> +
> + return XE_BO_VMAS_STATE_DONTNEED;
> +}
> +
> +/**
> + * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state
> from VMAs
> + * @bo: Buffer object
> + *
> + * Walk all VMAs to determine if BO should be purgeable or not.
> + * Shared BOs require unanimous DONTNEED state from all mappings.
> + *
> + * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM
> lists,
> + * VM lock must also be held (write) to prevent concurrent VMA
> modifications.
> + * This is satisfied at both call sites:
> + * - xe_vma_destroy(): holds vm->lock write
> + * - madvise_purgeable(): holds vm->lock write (from madvise ioctl
> path)
> + *
> + * Return: nothing
> + */
> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> +{
> + enum xe_bo_vmas_purge_state vma_state;
> +
> + if (!bo)
> + return;
> +
> + xe_bo_assert_held(bo);
> +
> + /*
> + * Once purged, always purged. Cannot transition back to
> WILLNEED.
> + * This matches i915 semantics where purged BOs are
> permanently invalid.
> + */
> + if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> + return;
> +
> + vma_state = xe_bo_all_vmas_dontneed(bo);
> +
> + if (vma_state == XE_BO_VMAS_STATE_DONTNEED) {
> + /* All VMAs are DONTNEED - mark BO purgeable */
> + if (bo->madv_purgeable !=
> XE_MADV_PURGEABLE_DONTNEED)
> + xe_bo_set_purgeable_state(bo,
> XE_MADV_PURGEABLE_DONTNEED);
> + } else if (vma_state == XE_BO_VMAS_STATE_WILLNEED) {
> + /* At least one VMA is WILLNEED - BO must not be
> purgeable */
> + if (bo->madv_purgeable !=
> XE_MADV_PURGEABLE_WILLNEED)
> + xe_bo_set_purgeable_state(bo,
> XE_MADV_PURGEABLE_WILLNEED);
> + }
> + /* XE_BO_VMAS_STATE_NO_VMAS: Preserve existing BO state */
Couldn't this be made:
if (vma_state != bo->madv_purgeable && vma_state !=
XE_BO_VMAS_STATE_NO_VMAS)
xe_bo_set_purgeable_state(bo, vma_state);
(see upcoming email for shrinker implication).
I also wonder if you ever explored the idea of having a "willneed_maps"
refcount on each bo. Each willneed vma as well as each exported dma-buf
would then take such a refcount, and state-transitions would happen
when the refcount goes from 0->1 and 1->0? That could possibly save a
lot of processing in xe_bo_all_vmas_dontneed?
/Thomas
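
[Editor's note: the suggested single-branch form can be spelled out as a
standalone, compilable sketch. The types and values below are simplified
stand-ins for the xe structures, not the driver code itself; the first two
enum values are deliberately kept aligned, which is what makes the direct
assignment valid.]

```c
#include <assert.h>

/* Simplified stand-ins for the two enum spaces in the patch:
 * XE_MADV_PURGEABLE_* (per-BO/per-VMA) and XE_BO_VMAS_STATE_* (aggregate). */
enum madv { MADV_WILLNEED = 0, MADV_DONTNEED = 1, MADV_PURGED = 2 };
enum vmas { VMAS_WILLNEED = 0, VMAS_DONTNEED = 1, VMAS_NO_VMAS = 2 };

struct bo_model {
	enum madv madv_purgeable;	/* BO-level state */
	int nvmas;
	enum madv vma_hint[8];		/* per-VMA madvise hints */
};

/* Aggregate the per-VMA hints: any non-DONTNEED VMA wins, and
 * "no VMAs at all" is reported as its own case. */
static enum vmas all_vmas_dontneed(const struct bo_model *bo)
{
	int i;

	if (!bo->nvmas)
		return VMAS_NO_VMAS;
	for (i = 0; i < bo->nvmas; i++)
		if (bo->vma_hint[i] != MADV_DONTNEED)
			return VMAS_WILLNEED;
	return VMAS_DONTNEED;
}

/* The proposed single-branch form: skip NO_VMAS (preserve state) and
 * rely on the value alignment for the direct assignment. */
static void recompute(struct bo_model *bo)
{
	enum vmas vma_state;

	if (bo->madv_purgeable == MADV_PURGED)
		return;	/* once purged, always purged */

	vma_state = all_vmas_dontneed(bo);
	if ((int)vma_state != (int)bo->madv_purgeable &&
	    vma_state != VMAS_NO_VMAS)
		bo->madv_purgeable = (enum madv)vma_state;
}
```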
> +}
> +
> /**
> * madvise_purgeable - Handle purgeable buffer object advice
> * @xe: XE device
> @@ -214,8 +321,11 @@ static void __maybe_unused
> madvise_purgeable(struct xe_device *xe,
> for (i = 0; i < num_vmas; i++) {
> struct xe_bo *bo = xe_vma_bo(vmas[i]);
>
> - if (!bo)
> + if (!bo) {
> + /* Purgeable state applies to BOs only, skip
> non-BO VMAs */
> + vmas[i]->skip_invalidation = true;
> continue;
> + }
>
> /* BO must be locked before modifying madv state */
> xe_bo_assert_held(bo);
> @@ -226,19 +336,26 @@ static void __maybe_unused
> madvise_purgeable(struct xe_device *xe,
> */
> if (xe_bo_is_purged(bo)) {
> details->has_purged_bo = true;
> + vmas[i]->skip_invalidation = true;
> continue;
> }
>
> switch (op->purge_state_val.val) {
> case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> - xe_bo_set_purgeable_state(bo,
> XE_MADV_PURGEABLE_WILLNEED);
> + vmas[i]->attr.purgeable_state =
> XE_MADV_PURGEABLE_WILLNEED;
> + vmas[i]->skip_invalidation = true;
> +
> + xe_bo_recompute_purgeable_state(bo);
> break;
> case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> - xe_bo_set_purgeable_state(bo,
> XE_MADV_PURGEABLE_DONTNEED);
> + vmas[i]->attr.purgeable_state =
> XE_MADV_PURGEABLE_DONTNEED;
> + vmas[i]->skip_invalidation = false;
> +
> + xe_bo_recompute_purgeable_state(bo);
> break;
> default:
> - drm_warn(&vm->xe->drm, "Invalid madvice
> value = %d\n",
> - op->purge_state_val.val);
> + /* Should never hit - values validated in
> madvise_args_are_sane() */
> + xe_assert(vm->xe, 0);
> return;
> }
> }
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h
> b/drivers/gpu/drm/xe/xe_vm_madvise.h
> index b0e1fc445f23..39acd2689ca0 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> @@ -8,8 +8,11 @@
>
> struct drm_device;
> struct drm_file;
> +struct xe_bo;
>
> int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file);
>
> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
> +
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
> b/drivers/gpu/drm/xe/xe_vm_types.h
> index 1f6f7e30e751..bfe7157756ad 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
> * same as default_pat_index unless overwritten by madvise.
> */
> u16 pat_index;
> +
> + /**
> + * @purgeable_state: Purgeable hint for this VMA mapping
> + *
> + * Per-VMA purgeable state from madvise. Valid states are
> WILLNEED (0)
> + * or DONTNEED (1). Shared BOs require all VMAs to be
> DONTNEED before
> + * the BO can be purged. PURGED state exists only at BO
> level.
> + *
> + * Protected by BO dma-resv lock. Set via
> DRM_IOCTL_XE_MADVISE.
> + */
> + u32 purgeable_state;
> };
>
> struct xe_vma {
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v6 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking
2026-03-10 9:57 ` Thomas Hellström
@ 2026-03-23 6:47 ` Yadav, Arvind
0 siblings, 0 replies; 39+ messages in thread
From: Yadav, Arvind @ 2026-03-23 6:47 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On 10-03-2026 15:27, Thomas Hellström wrote:
> On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
>> Track purgeable state per-VMA instead of using a coarse shared
>> BO check. This prevents purging shared BOs until all VMAs across
>> all VMs are marked DONTNEED.
>>
>> Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
>> a BO purgeable. Add xe_bo_recheck_purgeable_on_vma_unbind() to
>> handle state transitions when VMAs are destroyed - if all
>> remaining VMAs are DONTNEED the BO can become purgeable, or if
>> no VMAs remain it transitions to WILLNEED.
>>
>> The per-VMA purgeable_state field stores the madvise hint for
>> each mapping. Shared BOs can only be purged when all VMAs
>> unanimously indicate DONTNEED.
>>
>> This prevents the bug where unmapping the last VMA would incorrectly
>> flip
>> a DONTNEED BO back to WILLNEED. The enum-based state check preserves
>> BO
>> state when no VMAs remain, only updating when VMAs provide explicit
>> hints.
>>
>> v3:
>> - This addresses Thomas Hellström's feedback: "loop over all vmas
>> attached to the bo and check that they all say WONTNEED. This
>> will
>> also need a check at VMA unbinding"
>>
>> v4:
>> - @madv_purgeable atomic_t → u32 change across all relevant
>> patches (Matt)
>>
>> v5:
>> - Call xe_bo_recheck_purgeable_on_vma_unbind() from
>> xe_vma_destroy()
>> right after drm_gpuva_unlink() where we already hold the BO lock,
>> drop the trylock-based late destroy path (Matt)
>> - Move purgeable_state into xe_vma_mem_attr with the other madvise
>> attributes (Matt)
>> - Drop READ_ONCE since the BO lock already protects us (Matt)
>> - Keep returning false when there are no VMAs - otherwise we'd mark
>> BOs purgeable without any user hint (Matt)
>> - Use xe_bo_set_purgeable_state() instead of direct
>> initialization(Matt)
>> - use xe_assert instead of drm_warn (Thomas)
>>
>> v6:
>> - Fix state transition bug: don't flip DONTNEED → WILLNEED when
>> last
>> VMA unmapped (Matt)
>> - Change xe_bo_all_vmas_dontneed() from bool to enum to distinguish
>> "no VMAs" from "has WILLNEED VMA" (Matt)
>> - Preserve BO state on NO_VMAS instead of forcing WILLNEED.
>> - Set skip_invalidation explicitly in madvise_purgeable() to ensure
>> DONTNEED always zaps GPU PTEs regardless of prior madvise state.
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_svm.c | 1 +
>> drivers/gpu/drm/xe/xe_vm.c | 9 +-
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 127
>> +++++++++++++++++++++++++++--
>> drivers/gpu/drm/xe/xe_vm_madvise.h | 3 +
>> drivers/gpu/drm/xe/xe_vm_types.h | 11 +++
>> 5 files changed, 144 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c
>> b/drivers/gpu/drm/xe/xe_svm.c
>> index 002b6c22ad3f..dffa0cab5f5d 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -318,6 +318,7 @@ static void xe_vma_set_default_attributes(struct
>> xe_vma *vma)
>> .preferred_loc.migration_policy =
>> DRM_XE_MIGRATE_ALL_PAGES,
>> .pat_index = vma->attr.default_pat_index,
>> .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
>> + .purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
>> };
>>
>> xe_vma_mem_attr_copy(&vma->attr, &default_attr);
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 4a8abdcfb912..8e4c14fa3df2 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -39,6 +39,7 @@
>> #include "xe_tile.h"
>> #include "xe_tlb_inval.h"
>> #include "xe_trace_bo.h"
>> +#include "xe_vm_madvise.h"
>> #include "xe_wa.h"
>>
>> static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
>> @@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct
>> xe_vm *vm,
>> static void xe_vma_destroy_late(struct xe_vma *vma)
>> {
>> struct xe_vm *vm = xe_vma_vm(vma);
>> + struct xe_bo *bo = xe_vma_bo(vma);
>>
>> if (vma->ufence) {
>> xe_sync_ufence_put(vma->ufence);
>> @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma
>> *vma)
>> } else if (xe_vma_is_null(vma) ||
>> xe_vma_is_cpu_addr_mirror(vma)) {
>> xe_vm_put(vm);
>> } else {
>> - xe_bo_put(xe_vma_bo(vma));
>> + xe_bo_put(bo);
>> }
>>
>> xe_vma_free(vma);
>> @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence
>> *fence,
>> static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence
>> *fence)
>> {
>> struct xe_vm *vm = xe_vma_vm(vma);
>> + struct xe_bo *bo = xe_vma_bo(vma);
>>
>> lockdep_assert_held_write(&vm->lock);
>> xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
>> @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma,
>> struct dma_fence *fence)
>> xe_assert(vm->xe, vma->gpuva.flags &
>> XE_VMA_DESTROYED);
>> xe_userptr_destroy(to_userptr_vma(vma));
>> } else if (!xe_vma_is_null(vma) &&
>> !xe_vma_is_cpu_addr_mirror(vma)) {
>> - xe_bo_assert_held(xe_vma_bo(vma));
>> + xe_bo_assert_held(bo);
>>
>> drm_gpuva_unlink(&vma->gpuva);
>> + xe_bo_recompute_purgeable_state(bo);
>> }
>>
>> xe_vm_assert_held(vm);
>> @@ -2691,6 +2695,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm
>> *vm, struct drm_gpuva_ops *ops,
>> .atomic_access =
>> DRM_XE_ATOMIC_UNDEFINED,
>> .default_pat_index = op-
>>> map.pat_index,
>> .pat_index = op->map.pat_index,
>> + .purgeable_state =
>> XE_MADV_PURGEABLE_WILLNEED,
>> };
>>
>> flags |= op->map.vma_flags &
>> XE_VMA_CREATE_MASK;
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index f7e767f21795..ca003e0db87b 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -12,6 +12,7 @@
>> #include "xe_pat.h"
>> #include "xe_pt.h"
>> #include "xe_svm.h"
>> +#include "xe_vm.h"
>>
>> struct xe_vmas_in_madvise_range {
>> u64 addr;
>> @@ -183,6 +184,112 @@ static void madvise_pat_index(struct xe_device
>> *xe, struct xe_vm *vm,
>> }
>> }
>>
>> +/**
>> + * enum xe_bo_vmas_purge_state - VMA purgeable state aggregation
>> + *
>> + * Distinguishes whether a BO's VMAs are all DONTNEED, have at least
>> + * one WILLNEED, or have no VMAs at all.
>> + *
>> + * Enum values align with XE_MADV_PURGEABLE_* states for
>> consistency.
>> + */
>> +enum xe_bo_vmas_purge_state {
>> + /** @XE_BO_VMAS_STATE_WILLNEED: At least one VMA is WILLNEED
>> */
>> + XE_BO_VMAS_STATE_WILLNEED = 0,
>> + /** @XE_BO_VMAS_STATE_DONTNEED: All VMAs are DONTNEED */
>> + XE_BO_VMAS_STATE_DONTNEED = 1,
>> + /** @XE_BO_VMAS_STATE_NO_VMAS: BO has no VMAs */
>> + XE_BO_VMAS_STATE_NO_VMAS = 2,
>> +};
>> +
>> +/**
>> + * xe_bo_all_vmas_dontneed() - Determine BO VMA purgeable state
>> + * @bo: Buffer object
>> + *
>> + * Check all VMAs across all VMs to determine aggregate purgeable
>> state.
>> + * Shared BOs require unanimous DONTNEED state from all mappings.
>> + *
>> + * Caller must hold BO dma-resv lock.
>> + *
>> + * Return: XE_BO_VMAS_STATE_DONTNEED if all VMAs are DONTNEED,
>> + * XE_BO_VMAS_STATE_WILLNEED if at least one VMA is not
>> DONTNEED,
>> + * XE_BO_VMAS_STATE_NO_VMAS if BO has no VMAs
>> + */
>> +static enum xe_bo_vmas_purge_state xe_bo_all_vmas_dontneed(struct
>> xe_bo *bo)
>> +{
>> + struct drm_gpuvm_bo *vm_bo;
>> + struct drm_gpuva *gpuva;
>> + struct drm_gem_object *obj = &bo->ttm.base;
>> + bool has_vmas = false;
>> +
>> + xe_bo_assert_held(bo);
>> +
>> + drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
>> + drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>> +
>> + has_vmas = true;
>> +
>> + /* Any non-DONTNEED VMA prevents purging */
>> + if (vma->attr.purgeable_state !=
>> XE_MADV_PURGEABLE_DONTNEED)
>> + return XE_BO_VMAS_STATE_WILLNEED;
>> + }
>> + }
>> +
>> + /*
>> + * No VMAs => preserve existing BO purgeable state.
>> + * Avoids incorrectly flipping DONTNEED -> WILLNEED when
>> last VMA unmapped.
>> + */
>> + if (!has_vmas)
>> + return XE_BO_VMAS_STATE_NO_VMAS;
>> +
>> + return XE_BO_VMAS_STATE_DONTNEED;
>> +}
>> +
>> +/**
>> + * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state
>> from VMAs
>> + * @bo: Buffer object
>> + *
>> + * Walk all VMAs to determine if BO should be purgeable or not.
>> + * Shared BOs require unanimous DONTNEED state from all mappings.
>> + *
>> + * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM
>> lists,
>> + * VM lock must also be held (write) to prevent concurrent VMA
>> modifications.
>> + * This is satisfied at both call sites:
>> + * - xe_vma_destroy(): holds vm->lock write
>> + * - madvise_purgeable(): holds vm->lock write (from madvise ioctl
>> path)
>> + *
>> + * Return: nothing
>> + */
>> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
>> +{
>> + enum xe_bo_vmas_purge_state vma_state;
>> +
>> + if (!bo)
>> + return;
>> +
>> + xe_bo_assert_held(bo);
>> +
>> + /*
>> + * Once purged, always purged. Cannot transition back to
>> WILLNEED.
>> + * This matches i915 semantics where purged BOs are
>> permanently invalid.
>> + */
>> + if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
>> + return;
>> +
>> + vma_state = xe_bo_all_vmas_dontneed(bo);
>> +
>> + if (vma_state == XE_BO_VMAS_STATE_DONTNEED) {
>> + /* All VMAs are DONTNEED - mark BO purgeable */
>> + if (bo->madv_purgeable !=
>> XE_MADV_PURGEABLE_DONTNEED)
>> + xe_bo_set_purgeable_state(bo,
>> XE_MADV_PURGEABLE_DONTNEED);
>> + } else if (vma_state == XE_BO_VMAS_STATE_WILLNEED) {
>> + /* At least one VMA is WILLNEED - BO must not be
>> purgeable */
>> + if (bo->madv_purgeable !=
>> XE_MADV_PURGEABLE_WILLNEED)
>> + xe_bo_set_purgeable_state(bo,
>> XE_MADV_PURGEABLE_WILLNEED);
>> + }
>> + /* XE_BO_VMAS_STATE_NO_VMAS: Preserve existing BO state */
> Couldn't this be made:
>
> if (vma_state != bo->madv_purgeable && vma_state !=
> XE_BO_VMAS_STATE_NO_VMAS)
> xe_bo_set_purgeable_state(bo, vma_state);
Good catch. Since a static_assert already enforces the enum value
alignment, the separate per-state branches are redundant. Will
simplify in the next revision.
>
> (see upcoming email for shrinker implication).
>
> I also wonder if you ever explored the idea of having a "willneed_maps"
> refcount on each bo. Each willneed vma as well as each exported dma-buf
> would then take such a refcount, and state-transitions would happen
> when the refcount goes from 0->1 and 1->0? That could possibly save a
> lot of processing in xe_bo_all_vmas_dontneed?
>
The willneed_maps refcount idea is clean and I plan to take it as a
separate follow-up after this series to keep things simple for now.
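
[Editor's note: a minimal standalone sketch of the willneed_maps idea.
All names here are hypothetical and not part of the series: each WILLNEED
user (VMA or dma-buf export) takes a count, and the BO state flips only
on the 0<->1 edges, avoiding the full VMA walk.]

```c
#include <assert.h>

enum madv { MADV_WILLNEED = 0, MADV_DONTNEED = 1 };

struct bo_model {
	int willneed_maps;		/* hypothetical per-BO refcount */
	enum madv madv_purgeable;
};

/* 0 -> 1 transition: a first WILLNEED user appeared, so the BO must
 * stop being purgeable. */
static void willneed_get(struct bo_model *bo)
{
	if (bo->willneed_maps++ == 0)
		bo->madv_purgeable = MADV_WILLNEED;
}

/* 1 -> 0 transition: the last WILLNEED user went away, so the BO can
 * become purgeable without walking every VMA. */
static void willneed_put(struct bo_model *bo)
{
	assert(bo->willneed_maps > 0);
	if (--bo->willneed_maps == 0)
		bo->madv_purgeable = MADV_DONTNEED;
}
```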
Thanks,
Arvind
> /Thomas
>
>
>> +}
>> +
>> /**
>> * madvise_purgeable - Handle purgeable buffer object advice
>> * @xe: XE device
>> @@ -214,8 +321,11 @@ static void __maybe_unused
>> madvise_purgeable(struct xe_device *xe,
>> for (i = 0; i < num_vmas; i++) {
>> struct xe_bo *bo = xe_vma_bo(vmas[i]);
>>
>> - if (!bo)
>> + if (!bo) {
>> + /* Purgeable state applies to BOs only, skip
>> non-BO VMAs */
>> + vmas[i]->skip_invalidation = true;
>> continue;
>> + }
>>
>> /* BO must be locked before modifying madv state */
>> xe_bo_assert_held(bo);
>> @@ -226,19 +336,26 @@ static void __maybe_unused
>> madvise_purgeable(struct xe_device *xe,
>> */
>> if (xe_bo_is_purged(bo)) {
>> details->has_purged_bo = true;
>> + vmas[i]->skip_invalidation = true;
>> continue;
>> }
>>
>> switch (op->purge_state_val.val) {
>> case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
>> - xe_bo_set_purgeable_state(bo,
>> XE_MADV_PURGEABLE_WILLNEED);
>> + vmas[i]->attr.purgeable_state =
>> XE_MADV_PURGEABLE_WILLNEED;
>> + vmas[i]->skip_invalidation = true;
>> +
>> + xe_bo_recompute_purgeable_state(bo);
>> break;
>> case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
>> - xe_bo_set_purgeable_state(bo,
>> XE_MADV_PURGEABLE_DONTNEED);
>> + vmas[i]->attr.purgeable_state =
>> XE_MADV_PURGEABLE_DONTNEED;
>> + vmas[i]->skip_invalidation = false;
>> +
>> + xe_bo_recompute_purgeable_state(bo);
>> break;
>> default:
>> - drm_warn(&vm->xe->drm, "Invalid madvice
>> value = %d\n",
>> - op->purge_state_val.val);
>> + /* Should never hit - values validated in
>> madvise_args_are_sane() */
>> + xe_assert(vm->xe, 0);
>> return;
>> }
>> }
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h
>> b/drivers/gpu/drm/xe/xe_vm_madvise.h
>> index b0e1fc445f23..39acd2689ca0 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
>> @@ -8,8 +8,11 @@
>>
>> struct drm_device;
>> struct drm_file;
>> +struct xe_bo;
>>
>> int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
>> struct drm_file *file);
>>
>> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
>> +
>> #endif
>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
>> b/drivers/gpu/drm/xe/xe_vm_types.h
>> index 1f6f7e30e751..bfe7157756ad 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>> @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
>> * same as default_pat_index unless overwritten by madvise.
>> */
>> u16 pat_index;
>> +
>> + /**
>> + * @purgeable_state: Purgeable hint for this VMA mapping
>> + *
>> + * Per-VMA purgeable state from madvise. Valid states are
>> WILLNEED (0)
>> + * or DONTNEED (1). Shared BOs require all VMAs to be
>> DONTNEED before
>> + * the BO can be purged. PURGED state exists only at BO
>> level.
>> + *
>> + * Protected by BO dma-resv lock. Set via
>> DRM_IOCTL_XE_MADVISE.
>> + */
>> + u32 purgeable_state;
>> };
>>
>> struct xe_vma {
* [PATCH v6 07/12] drm/xe/madvise: Block imported and exported dma-bufs
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (5 preceding siblings ...)
2026-03-03 15:20 ` [PATCH v6 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
@ 2026-03-03 15:20 ` Arvind Yadav
2026-03-03 15:20 ` [PATCH v6 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs Arvind Yadav
` (9 subsequent siblings)
16 siblings, 0 replies; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:20 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Prevent marking imported or exported dma-bufs as purgeable.
External devices may be accessing these buffers without our
knowledge, making purging unsafe.
Check drm_gem_is_imported() for buffers created by other
drivers and obj->dma_buf for buffers exported to other
drivers. Silently skip these BOs during madvise processing.
This follows drm_gem_shmem's purgeable implementation and
prevents data corruption from purging actively-used shared
buffers.
v3:
- Addresses review feedback from Matt Roper about handling
imported/exported BOs correctly in the purgeable BO
implementation.
v4:
- Check should be added to xe_vm_madvise_purgeable_bo.
v5:
- Rename xe_bo_is_external_dmabuf() to xe_bo_is_dmabuf_shared()
for clarity (Thomas)
- Update comments to clarify why both imports and exports
are unsafe to purge.
v6:
- No PTEs to zap for shared dma-bufs.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 38 ++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index ca003e0db87b..8acc19e25aa5 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -184,6 +184,34 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
}
}
+
+/**
+ * xe_bo_is_dmabuf_shared() - Check if BO is shared via dma-buf
+ * @bo: Buffer object
+ *
+ * Prevent marking imported or exported dma-bufs as purgeable.
+ * For imported BOs, Xe doesn't own the backing store and cannot
+ * safely reclaim pages (exporter or other devices may still be
+ * using them). For exported BOs, external devices may have active
+ * mappings we cannot track.
+ *
+ * Return: true if BO is imported or exported, false otherwise
+ */
+static bool xe_bo_is_dmabuf_shared(struct xe_bo *bo)
+{
+ struct drm_gem_object *obj = &bo->ttm.base;
+
+ /* Imported: exporter owns backing store */
+ if (drm_gem_is_imported(obj))
+ return true;
+
+ /* Exported: external devices may be accessing */
+ if (obj->dma_buf)
+ return true;
+
+ return false;
+}
+
/**
* enum xe_bo_vmas_purge_state - VMA purgeable state aggregation
*
@@ -223,6 +251,10 @@ static enum xe_bo_vmas_purge_state xe_bo_all_vmas_dontneed(struct xe_bo *bo)
xe_bo_assert_held(bo);
+ /* Shared dma-bufs cannot be purgeable */
+ if (xe_bo_is_dmabuf_shared(bo))
+ return XE_BO_VMAS_STATE_WILLNEED;
+
drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
struct xe_vma *vma = gpuva_to_vma(gpuva);
@@ -330,6 +362,12 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
/* BO must be locked before modifying madv state */
xe_bo_assert_held(bo);
+ /* Skip shared dma-bufs - no PTEs to zap */
+ if (xe_bo_is_dmabuf_shared(bo)) {
+ vmas[i]->skip_invalidation = true;
+ continue;
+ }
+
/*
* Once purged, always purged. Cannot transition back to WILLNEED.
* This matches i915 semantics where purged BOs are permanently invalid.
--
2.43.0
* [PATCH v6 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (6 preceding siblings ...)
2026-03-03 15:20 ` [PATCH v6 07/12] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
@ 2026-03-03 15:20 ` Arvind Yadav
2026-03-10 10:17 ` Thomas Hellström
2026-03-03 15:20 ` [PATCH v6 09/12] drm/xe/dma_buf: Block export " Arvind Yadav
` (8 subsequent siblings)
16 siblings, 1 reply; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:20 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Don't allow new CPU mmaps to BOs marked DONTNEED or PURGED.
DONTNEED BOs can have their contents discarded at any time, making
CPU access undefined behavior. PURGED BOs have no backing store and
are permanently invalid.
Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
-EINVAL for purged BOs (permanent, no backing store).
The mmap offset ioctl now checks the BO's purgeable state before
allowing userspace to establish a new CPU mapping. This prevents
the race where userspace gets a valid offset but the BO is purged
before actual faulting begins.
Existing mmaps (established before DONTNEED) may still work until
pages are purged, at which point CPU faults fail with SIGBUS.
v6:
- Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
with the rest of the series (Thomas, Matt)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index d05a73756905..3a4965bdadf2 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -3396,6 +3396,8 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
struct xe_device *xe = to_xe_device(dev);
struct drm_xe_gem_mmap_offset *args = data;
struct drm_gem_object *gem_obj;
+ struct xe_bo *bo;
+ int err;
if (XE_IOCTL_DBG(xe, args->extensions) ||
XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
@@ -3425,11 +3427,35 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
if (XE_IOCTL_DBG(xe, !gem_obj))
return -ENOENT;
+ bo = gem_to_xe_bo(gem_obj);
+
+ /*
+ * Reject new mmap to purgeable BOs. DONTNEED BOs can be purged
+ * at any time, making CPU access undefined behavior. Purged BOs have
+ * no backing store and are permanently invalid.
+ */
+ xe_bo_lock(bo, false);
+ if (xe_bo_madv_is_dontneed(bo)) {
+ err = -EBUSY;
+ goto out_unlock;
+ }
+
+ if (xe_bo_is_purged(bo)) {
+ err = -EINVAL;
+ goto out_unlock;
+ }
+ xe_bo_unlock(bo);
+
/* The mmap offset was set up at BO allocation time. */
args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
+ xe_bo_put(bo);
- xe_bo_put(gem_to_xe_bo(gem_obj));
return 0;
+
+out_unlock:
+ xe_bo_unlock(bo);
+ xe_bo_put(bo);
+ return err;
}
/**
--
2.43.0
* Re: [PATCH v6 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs
2026-03-03 15:20 ` [PATCH v6 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs Arvind Yadav
@ 2026-03-10 10:17 ` Thomas Hellström
2026-03-18 13:03 ` Yadav, Arvind
0 siblings, 1 reply; 39+ messages in thread
From: Thomas Hellström @ 2026-03-10 10:17 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
> Don't allow new CPU mmaps to BOs marked DONTNEED or PURGED.
> DONTNEED BOs can have their contents discarded at any time, making
> CPU access undefined behavior. PURGED BOs have no backing store and
> are permanently invalid.
>
> Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
> -EINVAL for purged BOs (permanent, no backing store).
>
> The mmap offset ioctl now checks the BO's purgeable state before
> allowing userspace to establish a new CPU mapping. This prevents
> the race where userspace gets a valid offset but the BO is purged
> before actual faulting begins.
>
> Existing mmaps (established before DONTNEED) may still work until
> pages are purged, at which point CPU faults fail with SIGBUS.
>
> v6:
> - Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
> with the rest of the series (Thomas, Matt)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 28 +++++++++++++++++++++++++++-
> 1 file changed, 27 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index d05a73756905..3a4965bdadf2 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -3396,6 +3396,8 @@ int xe_gem_mmap_offset_ioctl(struct drm_device
This needs to be done in the mmap() callback. It should be OK to wrap
the existing callback there.
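
[Editor's note: a rough standalone model of that wrapping. The function
names are stand-ins, not the real xe/drm callbacks: the purgeable check
runs in the mmap path itself and then delegates to the original handler.]

```c
#include <assert.h>
#include <errno.h>

enum madv { MADV_WILLNEED, MADV_DONTNEED, MADV_PURGED };

struct bo_model { enum madv state; };

/* Stand-in for the existing mmap handler being wrapped. */
static int base_mmap(struct bo_model *bo)
{
	(void)bo;
	return 0;
}

/* Wrapper running the purgeable check at mmap time, then delegating
 * to the original handler on success. */
static int purgeable_mmap(struct bo_model *bo)
{
	if (bo->state == MADV_DONTNEED)
		return -EBUSY;		/* may be purged at any moment */
	if (bo->state == MADV_PURGED)
		return -EINVAL;		/* backing store permanently gone */
	return base_mmap(bo);
}
```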
> *dev, void *data,
> struct xe_device *xe = to_xe_device(dev);
> struct drm_xe_gem_mmap_offset *args = data;
> struct drm_gem_object *gem_obj;
> + struct xe_bo *bo;
> + int err;
>
> if (XE_IOCTL_DBG(xe, args->extensions) ||
> XE_IOCTL_DBG(xe, args->reserved[0] || args-
> >reserved[1]))
> @@ -3425,11 +3427,35 @@ int xe_gem_mmap_offset_ioctl(struct
> drm_device *dev, void *data,
> if (XE_IOCTL_DBG(xe, !gem_obj))
> return -ENOENT;
>
> + bo = gem_to_xe_bo(gem_obj);
> +
> + /*
> + * Reject new mmap to purgeable BOs. DONTNEED BOs can be
> purged
> + * at any time, making CPU access undefined behavior. Purged
> BOs have
> + * no backing store and are permanently invalid.
> + */
> + xe_bo_lock(bo, false);
sleeping locks need to be interruptible whenever possible.
Thanks,
Thomas
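
[Editor's note: the caller pattern this implies, modeled standalone.
lock_interruptible here is a stub standing in for an interruptible
sleeping lock; the real xe helpers are not used.]

```c
#include <assert.h>
#include <errno.h>

/* Stub for an interruptible sleeping lock: fails with -EINTR when a
 * signal is pending instead of sleeping uninterruptibly. */
static int lock_interruptible(int signal_pending)
{
	return signal_pending ? -EINTR : 0;
}

/* Caller pattern: check the return value and bail out, so userspace
 * blocked in the ioctl can still be signalled. */
static int ioctl_path(int signal_pending)
{
	int err = lock_interruptible(signal_pending);

	if (err)
		return err;
	/* ... purgeable-state checks under the lock ... */
	return 0;	/* unlock elided */
}
```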
> + if (xe_bo_madv_is_dontneed(bo)) {
> + err = -EBUSY;
> + goto out_unlock;
> + }
> +
> + if (xe_bo_is_purged(bo)) {
> + err = -EINVAL;
> + goto out_unlock;
> + }
> + xe_bo_unlock(bo);
> +
> /* The mmap offset was set up at BO allocation time. */
> args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
> + xe_bo_put(bo);
>
> - xe_bo_put(gem_to_xe_bo(gem_obj));
> return 0;
> +
> +out_unlock:
> + xe_bo_unlock(bo);
> + xe_bo_put(bo);
> + return err;
> }
>
> /**
* Re: [PATCH v6 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs
2026-03-10 10:17 ` Thomas Hellström
@ 2026-03-18 13:03 ` Yadav, Arvind
0 siblings, 0 replies; 39+ messages in thread
From: Yadav, Arvind @ 2026-03-18 13:03 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On 10-03-2026 15:47, Thomas Hellström wrote:
> On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
>> Don't allow new CPU mmaps to BOs marked DONTNEED or PURGED.
>> DONTNEED BOs can have their contents discarded at any time, making
>> CPU access undefined behavior. PURGED BOs have no backing store and
>> are permanently invalid.
>>
>> Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
>> -EINVAL for purged BOs (permanent, no backing store).
>>
>> The mmap offset ioctl now checks the BO's purgeable state before
>> allowing userspace to establish a new CPU mapping. This prevents
>> the race where userspace gets a valid offset but the BO is purged
>> before actual faulting begins.
>>
>> Existing mmaps (established before DONTNEED) may still work until
>> pages are purged, at which point CPU faults fail with SIGBUS.
>>
>> v6:
>> - Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
>> with the rest of the series (Thomas, Matt)
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_bo.c | 28 +++++++++++++++++++++++++++-
>> 1 file changed, 27 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index d05a73756905..3a4965bdadf2 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -3396,6 +3396,8 @@ int xe_gem_mmap_offset_ioctl(struct drm_device
> This needs to be done in the mmap() callback. It should be OK to wrap
> the existing callback there.
I will move this to the mmap() callback.
>
>> *dev, void *data,
>> struct xe_device *xe = to_xe_device(dev);
>> struct drm_xe_gem_mmap_offset *args = data;
>> struct drm_gem_object *gem_obj;
>> + struct xe_bo *bo;
>> + int err;
>>
>> if (XE_IOCTL_DBG(xe, args->extensions) ||
>> XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
>> @@ -3425,11 +3427,35 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
>> if (XE_IOCTL_DBG(xe, !gem_obj))
>> return -ENOENT;
>>
>> + bo = gem_to_xe_bo(gem_obj);
>> +
>> + /*
>> + * Reject new mmap to purgeable BOs. DONTNEED BOs can be purged
>> + * at any time, making CPU access undefined behavior. Purged BOs have
>> + * no backing store and are permanently invalid.
>> + */
>> + xe_bo_lock(bo, false);
> sleeping locks need to be interruptible whenever possible.
Noted,
Thanks,
Arvind
>
> Thanks,
> Thomas
>
>
>
>> + if (xe_bo_madv_is_dontneed(bo)) {
>> + err = -EBUSY;
>> + goto out_unlock;
>> + }
>> +
>> + if (xe_bo_is_purged(bo)) {
>> + err = -EINVAL;
>> + goto out_unlock;
>> + }
>> + xe_bo_unlock(bo);
>> +
>> /* The mmap offset was set up at BO allocation time. */
>> args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
>> + xe_bo_put(bo);
>>
>> - xe_bo_put(gem_to_xe_bo(gem_obj));
>> return 0;
>> +
>> +out_unlock:
>> + xe_bo_unlock(bo);
>> + xe_bo_put(bo);
>> + return err;
>> }
>>
>> /**
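For readers following along, the DONTNEED vs PURGED error split the patch implements can be modeled in plain C. This is a userspace sketch with invented names (mmap_offset_check(), the MADV_* constants), not kernel code; the real check additionally runs under the BO lock, which per the review above should be taken interruptibly:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical model of the mmap-offset check: DONTNEED is a temporary,
 * retryable condition (-EBUSY); PURGED is permanent (-EINVAL). */
enum purge_state { MADV_WILLNEED, MADV_DONTNEED, MADV_PURGED };

static int mmap_offset_check(enum purge_state state)
{
	if (state == MADV_DONTNEED)
		return -EBUSY;	/* contents may be discarded at any time */
	if (state == MADV_PURGED)
		return -EINVAL;	/* backing store is gone for good */
	return 0;		/* WILLNEED: mmap offset may be handed out */
}
```

The asymmetry mirrors the series: userspace can clear DONTNEED and retry, but a purged BO stays invalid.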
* [PATCH v6 09/12] drm/xe/dma_buf: Block export of DONTNEED/purged BOs
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (7 preceding siblings ...)
2026-03-03 15:20 ` [PATCH v6 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs Arvind Yadav
@ 2026-03-03 15:20 ` Arvind Yadav
2026-03-10 10:19 ` Thomas Hellström
2026-03-03 15:20 ` [PATCH v6 10/12] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
` (7 subsequent siblings)
16 siblings, 1 reply; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:20 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Don't allow exporting BOs marked DONTNEED or PURGED as dma-bufs.
DONTNEED BOs can have their contents discarded at any time, making
the exported dma-buf unusable for external devices. PURGED BOs have
no backing store and are permanently invalid.
Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
-EINVAL for purged BOs (permanent, no backing store).
The export path now checks the BO's purgeable state before creating
the dma-buf, preventing external devices from accessing memory that
may be purged at any time.
v6:
- Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
with the rest of the series (Thomas, Matt)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_dma_buf.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
index ea370cd373e9..aba6b9696030 100644
--- a/drivers/gpu/drm/xe/xe_dma_buf.c
+++ b/drivers/gpu/drm/xe/xe_dma_buf.c
@@ -223,6 +223,23 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
if (bo->vm)
return ERR_PTR(-EPERM);
+ /*
+ * Reject exporting purgeable BOs. DONTNEED BOs can be purged
+ * at any time, making the exported dma-buf unusable. Purged BOs
+ * have no backing store and are permanently invalid.
+ */
+ xe_bo_lock(bo, false);
+ if (xe_bo_madv_is_dontneed(bo)) {
+ ret = -EBUSY;
+ goto out_unlock;
+ }
+
+ if (xe_bo_is_purged(bo)) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+ xe_bo_unlock(bo);
+
ret = ttm_bo_setup_export(&bo->ttm, &ctx);
if (ret)
return ERR_PTR(ret);
@@ -232,6 +249,10 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
buf->ops = &xe_dmabuf_ops;
return buf;
+
+out_unlock:
+ xe_bo_unlock(bo);
+ return ERR_PTR(ret);
}
static struct drm_gem_object *
--
2.43.0
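The ordering of checks in the export path above can be sketched as a small userspace model (illustrative names only; the real code returns ERR_PTR() and takes the BO lock around the state checks):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Model of the export-path ordering in this patch: VM-private BOs are
 * rejected first (-EPERM, pre-existing behavior), then DONTNEED (-EBUSY)
 * and PURGED (-EINVAL) BOs. */
enum purge_state { MADV_WILLNEED, MADV_DONTNEED, MADV_PURGED };

static int export_check(bool vm_private, enum purge_state state)
{
	if (vm_private)
		return -EPERM;	/* cannot export a VM-private BO */
	if (state == MADV_DONTNEED)
		return -EBUSY;	/* dma-buf could be purged underneath */
	if (state == MADV_PURGED)
		return -EINVAL;	/* nothing left to export */
	return 0;
}
```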
* Re: [PATCH v6 09/12] drm/xe/dma_buf: Block export of DONTNEED/purged BOs
2026-03-03 15:20 ` [PATCH v6 09/12] drm/xe/dma_buf: Block export " Arvind Yadav
@ 2026-03-10 10:19 ` Thomas Hellström
2026-03-18 13:02 ` Yadav, Arvind
0 siblings, 1 reply; 39+ messages in thread
From: Thomas Hellström @ 2026-03-10 10:19 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
> Don't allow exporting BOs marked DONTNEED or PURGED as dma-bufs.
> DONTNEED BOs can have their contents discarded at any time, making
> the exported dma-buf unusable for external devices. PURGED BOs have
> no backing store and are permanently invalid.
>
> Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
> -EINVAL for purged BOs (permanent, no backing store).
>
> The export path now checks the BO's purgeable state before creating
> the dma-buf, preventing external devices from accessing memory that
> may be purged at any time.
>
> v6:
> - Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
> with the rest of the series (Thomas, Matt)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_dma_buf.c | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
> index ea370cd373e9..aba6b9696030 100644
> --- a/drivers/gpu/drm/xe/xe_dma_buf.c
> +++ b/drivers/gpu/drm/xe/xe_dma_buf.c
> @@ -223,6 +223,23 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
> if (bo->vm)
> return ERR_PTR(-EPERM);
>
> + /*
> + * Reject exporting purgeable BOs. DONTNEED BOs can be purged
> + * at any time, making the exported dma-buf unusable. Purged BOs
> + * have no backing store and are permanently invalid.
> + */
> + xe_bo_lock(bo, false);
Interruptible lock.
/Thomas
> + if (xe_bo_madv_is_dontneed(bo)) {
> + ret = -EBUSY;
> + goto out_unlock;
> + }
> +
> + if (xe_bo_is_purged(bo)) {
> + ret = -EINVAL;
> + goto out_unlock;
> + }
> + xe_bo_unlock(bo);
> +
> ret = ttm_bo_setup_export(&bo->ttm, &ctx);
> if (ret)
> return ERR_PTR(ret);
> @@ -232,6 +249,10 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
> buf->ops = &xe_dmabuf_ops;
>
> return buf;
> +
> +out_unlock:
> + xe_bo_unlock(bo);
> + return ERR_PTR(ret);
> }
>
> static struct drm_gem_object *
2026-03-10 10:19 ` Thomas Hellström
@ 2026-03-18 13:02 ` Yadav, Arvind
0 siblings, 0 replies; 39+ messages in thread
From: Yadav, Arvind @ 2026-03-18 13:02 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On 10-03-2026 15:49, Thomas Hellström wrote:
> On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
>> Don't allow exporting BOs marked DONTNEED or PURGED as dma-bufs.
>> DONTNEED BOs can have their contents discarded at any time, making
>> the exported dma-buf unusable for external devices. PURGED BOs have
>> no backing store and are permanently invalid.
>>
>> Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
>> -EINVAL for purged BOs (permanent, no backing store).
>>
>> The export path now checks the BO's purgeable state before creating
>> the dma-buf, preventing external devices from accessing memory that
>> may be purged at any time.
>>
>> v6:
>> - Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
>> with the rest of the series (Thomas, Matt)
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_dma_buf.c | 21 +++++++++++++++++++++
>> 1 file changed, 21 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
>> index ea370cd373e9..aba6b9696030 100644
>> --- a/drivers/gpu/drm/xe/xe_dma_buf.c
>> +++ b/drivers/gpu/drm/xe/xe_dma_buf.c
>> @@ -223,6 +223,23 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
>> if (bo->vm)
>> return ERR_PTR(-EPERM);
>>
>> + /*
>> + * Reject exporting purgeable BOs. DONTNEED BOs can be purged
>> + * at any time, making the exported dma-buf unusable. Purged BOs
>> + * have no backing store and are permanently invalid.
>> + */
>> + xe_bo_lock(bo, false);
> Interruptible lock.
Noted,
~Arvind
>
> /Thomas
>
>
>> + if (xe_bo_madv_is_dontneed(bo)) {
>> + ret = -EBUSY;
>> + goto out_unlock;
>> + }
>> +
>> + if (xe_bo_is_purged(bo)) {
>> + ret = -EINVAL;
>> + goto out_unlock;
>> + }
>> + xe_bo_unlock(bo);
>> +
>> ret = ttm_bo_setup_export(&bo->ttm, &ctx);
>> if (ret)
>> return ERR_PTR(ret);
>> @@ -232,6 +249,10 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
>> buf->ops = &xe_dmabuf_ops;
>>
>> return buf;
>> +
>> +out_unlock:
>> + xe_bo_unlock(bo);
>> + return ERR_PTR(ret);
>> }
>>
>> static struct drm_gem_object *
* [PATCH v6 10/12] drm/xe/bo: Add purgeable shrinker state helpers
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (8 preceding siblings ...)
2026-03-03 15:20 ` [PATCH v6 09/12] drm/xe/dma_buf: Block export " Arvind Yadav
@ 2026-03-03 15:20 ` Arvind Yadav
2026-03-10 10:01 ` Thomas Hellström
2026-03-03 15:20 ` [PATCH v6 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
` (6 subsequent siblings)
16 siblings, 1 reply; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:20 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Encapsulate TTM purgeable flag updates and shrinker page accounting
into helper functions. This prevents desynchronization between the
TTM tt->purgeable flag and the shrinker's page bucket counters.
Without these helpers, direct manipulation of xe_ttm_tt->purgeable
risks forgetting to update the corresponding shrinker counters,
leading to incorrect memory pressure calculations.
Add xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker()
which atomically update both the TTM flag and transfer pages between
the shrinkable and purgeable buckets.
Update purgeable BO state to PURGED after successful shrinker purge
for DONTNEED BOs.
v4:
- @madv_purgeable atomic_t → u32 change across all relevant
patches (Matt)
v5:
- Update purgeable BO state to PURGED after a successful shrinker
purge for DONTNEED BOs.
- Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)
v6:
- Create separate patch for 'Split ghost BO and zero-refcount
handling'. (Thomas)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 63 ++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_bo.h | 2 +
drivers/gpu/drm/xe/xe_vm_madvise.c | 8 +++-
3 files changed, 71 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 3a4965bdadf2..598d4463baf3 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -863,6 +863,66 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
bo->madv_purgeable = new_state;
}
+/**
+ * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
+ * @bo: Buffer object
+ *
+ * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
+ * discard pages immediately without swapping. Caller holds BO lock.
+ */
+void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
+{
+ struct ttm_buffer_object *ttm_bo = &bo->ttm;
+ struct ttm_tt *tt = ttm_bo->ttm;
+ struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+ struct xe_ttm_tt *xe_tt;
+
+ xe_bo_assert_held(bo);
+
+ if (!tt || !ttm_tt_is_populated(tt))
+ return;
+
+ xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+ if (!xe_tt->purgeable) {
+ xe_tt->purgeable = true;
+ /* Transfer pages from shrinkable to purgeable count */
+ xe_shrinker_mod_pages(xe->mem.shrinker,
+ -(long)tt->num_pages,
+ tt->num_pages);
+ }
+}
+
+/**
+ * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
+ * @bo: Buffer object
+ *
+ * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
+ * swap pages instead of discarding. Caller holds BO lock.
+ */
+void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
+{
+ struct ttm_buffer_object *ttm_bo = &bo->ttm;
+ struct ttm_tt *tt = ttm_bo->ttm;
+ struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+ struct xe_ttm_tt *xe_tt;
+
+ xe_bo_assert_held(bo);
+
+ if (!tt || !ttm_tt_is_populated(tt))
+ return;
+
+ xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+ if (xe_tt->purgeable) {
+ xe_tt->purgeable = false;
+ /* Transfer pages from purgeable to shrinkable count */
+ xe_shrinker_mod_pages(xe->mem.shrinker,
+ tt->num_pages,
+ -(long)tt->num_pages);
+ }
+}
+
/**
* xe_ttm_bo_purge() - Purge buffer object backing store
* @ttm_bo: The TTM buffer object to purge
@@ -1243,6 +1303,9 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
lret = xe_bo_move_notify(xe_bo, ctx);
if (!lret)
lret = xe_bo_shrink_purge(ctx, bo, scanned);
+ if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
+ xe_bo_set_purgeable_state(xe_bo,
+ XE_MADV_PURGEABLE_PURGED);
goto out_unref;
}
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 0d9f25b51eb2..46d1fff10e4f 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -272,6 +272,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
}
void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
+void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
+void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 8acc19e25aa5..ab83e94980e4 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -312,12 +312,16 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
if (vma_state == XE_BO_VMAS_STATE_DONTNEED) {
/* All VMAs are DONTNEED - mark BO purgeable */
- if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
+ if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED) {
xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+ xe_bo_set_purgeable_shrinker(bo);
+ }
} else if (vma_state == XE_BO_VMAS_STATE_WILLNEED) {
/* At least one VMA is WILLNEED - BO must not be purgeable */
- if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
+ if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED) {
xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+ xe_bo_clear_purgeable_shrinker(bo);
+ }
}
/* XE_BO_VMAS_STATE_NO_VMAS: Preserve existing BO state */
}
--
2.43.0
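The bucket transfer these helpers perform can be illustrated with a small userspace model (invented struct and function names; the real code operates on xe_ttm_tt and calls xe_shrinker_mod_pages()). The flag guard is what makes repeated calls safe:

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the shrinker accounting in xe_bo_set/clear_purgeable_shrinker():
 * pages move between a "shrinkable" and a "purgeable" counter, and the
 * tt->purgeable flag prevents double-counting on repeated set/clear. */
struct shrinker { long shrinkable; long purgeable; };
struct tt { bool purgeable; long num_pages; };

static void mod_pages(struct shrinker *s, long shr, long purg)
{
	s->shrinkable += shr;
	s->purgeable += purg;
}

static void set_purgeable(struct shrinker *s, struct tt *t)
{
	if (!t->purgeable) {
		t->purgeable = true;
		/* transfer pages from shrinkable to purgeable count */
		mod_pages(s, -t->num_pages, t->num_pages);
	}
}

static void clear_purgeable(struct shrinker *s, struct tt *t)
{
	if (t->purgeable) {
		t->purgeable = false;
		/* transfer pages from purgeable to shrinkable count */
		mod_pages(s, t->num_pages, -t->num_pages);
	}
}
```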
* Re: [PATCH v6 10/12] drm/xe/bo: Add purgeable shrinker state helpers
2026-03-03 15:20 ` [PATCH v6 10/12] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
@ 2026-03-10 10:01 ` Thomas Hellström
2026-03-18 12:15 ` Yadav, Arvind
0 siblings, 1 reply; 39+ messages in thread
From: Thomas Hellström @ 2026-03-10 10:01 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
> Encapsulate TTM purgeable flag updates and shrinker page accounting
> into helper functions. This prevents desynchronization between the
> TTM tt->purgeable flag and the shrinker's page bucket counters.
>
> Without these helpers, direct manipulation of xe_ttm_tt->purgeable
> risks forgetting to update the corresponding shrinker counters,
> leading to incorrect memory pressure calculations.
>
> Add xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker()
> which atomically update both the TTM flag and transfer pages between
> the shrinkable and purgeable buckets.
>
> Update purgeable BO state to PURGED after successful shrinker purge
> for DONTNEED BOs.
>
> v4:
> - @madv_purgeable atomic_t → u32 change across all relevant
> patches (Matt)
>
> v5:
> - Update purgeable BO state to PURGED after a successful shrinker
> purge for DONTNEED BOs.
> - Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)
>
> v6:
> - Create separate patch for 'Split ghost BO and zero-refcount
> handling'. (Thomas)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 63 ++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_bo.h | 2 +
> drivers/gpu/drm/xe/xe_vm_madvise.c | 8 +++-
> 3 files changed, 71 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 3a4965bdadf2..598d4463baf3 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -863,6 +863,66 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
> bo->madv_purgeable = new_state;
> }
>
> +/**
> + * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
> + * @bo: Buffer object
> + *
> + * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
> + * discard pages immediately without swapping. Caller holds BO lock.
> + */
> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
> +{
> + struct ttm_buffer_object *ttm_bo = &bo->ttm;
> + struct ttm_tt *tt = ttm_bo->ttm;
> + struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> + struct xe_ttm_tt *xe_tt;
> +
> + xe_bo_assert_held(bo);
> +
> + if (!tt || !ttm_tt_is_populated(tt))
> + return;
> +
> + xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> +
> + if (!xe_tt->purgeable) {
> + xe_tt->purgeable = true;
> + /* Transfer pages from shrinkable to purgeable count */
> + xe_shrinker_mod_pages(xe->mem.shrinker,
> + -(long)tt->num_pages,
> + tt->num_pages);
> + }
> +}
> +
> +/**
> + * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
> + * @bo: Buffer object
> + *
> + * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
> + * swap pages instead of discarding. Caller holds BO lock.
> + */
> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
> +{
> + struct ttm_buffer_object *ttm_bo = &bo->ttm;
> + struct ttm_tt *tt = ttm_bo->ttm;
> + struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> + struct xe_ttm_tt *xe_tt;
> +
> + xe_bo_assert_held(bo);
> +
> + if (!tt || !ttm_tt_is_populated(tt))
> + return;
> +
> + xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> +
> + if (xe_tt->purgeable) {
> + xe_tt->purgeable = false;
> + /* Transfer pages from purgeable to shrinkable count */
> + xe_shrinker_mod_pages(xe->mem.shrinker,
> + tt->num_pages,
> + -(long)tt->num_pages);
> + }
> +}
> +
> /**
> * xe_ttm_bo_purge() - Purge buffer object backing store
> * @ttm_bo: The TTM buffer object to purge
> @@ -1243,6 +1303,9 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
> lret = xe_bo_move_notify(xe_bo, ctx);
> if (!lret)
> lret = xe_bo_shrink_purge(ctx, bo, scanned);
> + if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
> + xe_bo_set_purgeable_state(xe_bo,
> + XE_MADV_PURGEABLE_PURGED);
> goto out_unref;
> }
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index 0d9f25b51eb2..46d1fff10e4f 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -272,6 +272,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
> }
>
> void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
>
> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
> {
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 8acc19e25aa5..ab83e94980e4 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -312,12 +312,16 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
>
> if (vma_state == XE_BO_VMAS_STATE_DONTNEED) {
> /* All VMAs are DONTNEED - mark BO purgeable */
> - if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
> + if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED) {
> xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> + xe_bo_set_purgeable_shrinker(bo);
> + }
> } else if (vma_state == XE_BO_VMAS_STATE_WILLNEED) {
> /* At least one VMA is WILLNEED - BO must not be purgeable */
> - if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
> + if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED) {
> xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> + xe_bo_clear_purgeable_shrinker(bo);
> + }
> }
> /* XE_BO_VMAS_STATE_NO_VMAS: Preserve existing BO state */
> }
I think this can be simplified a bit using something like the below
applied after the above patch: (untested).
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 07acce383cb1..9f0885cd3cfd 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -835,47 +835,14 @@ static int xe_bo_move_notify(struct xe_bo *bo,
 	return 0;
 }
 
-/**
- * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
- * @bo: Buffer object
- * @new_state: New purgeable state
- *
- * Sets the purgeable state with lockdep assertions and validates state
- * transitions. Once a BO is PURGED, it cannot transition to any other state.
- * Invalid transitions are caught with xe_assert().
- */
-void xe_bo_set_purgeable_state(struct xe_bo *bo,
-			       enum xe_madv_purgeable_state new_state)
-{
-	struct xe_device *xe = xe_bo_device(bo);
-
-	xe_bo_assert_held(bo);
-
-	/* Validate state is one of the known values */
-	xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
-		  new_state == XE_MADV_PURGEABLE_DONTNEED ||
-		  new_state == XE_MADV_PURGEABLE_PURGED);
-
-	/* Once purged, always purged - cannot transition out */
-	xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
-			new_state != XE_MADV_PURGEABLE_PURGED));
-
-	bo->madv_purgeable = new_state;
-}
-
-/**
- * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
- * @bo: Buffer object
- *
- * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
- * discard pages immediately without swapping. Caller holds BO lock.
- */
-void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
+static void xe_bo_set_purgeable_shrinker(struct xe_bo *bo,
+					 enum xe_madv_purgeable_state new_state)
 {
 	struct ttm_buffer_object *ttm_bo = &bo->ttm;
 	struct ttm_tt *tt = ttm_bo->ttm;
 	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
 	struct xe_ttm_tt *xe_tt;
+	long int tt_pages;
 
 	xe_bo_assert_held(bo);
 
@@ -883,44 +850,44 @@ void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
 		return;
 
 	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
-
-	if (!xe_tt->purgeable) {
+	tt_pages = tt->num_pages;
+
+	if (!xe_tt->purgeable && new_state == XE_MADV_PURGEABLE_DONTNEED) {
 		xe_tt->purgeable = true;
-		/* Transfer pages from shrinkable to purgeable count */
-		xe_shrinker_mod_pages(xe->mem.shrinker,
-				      -(long)tt->num_pages,
-				      tt->num_pages);
+		xe_shrinker_mod_pages(xe->mem.shrinker, -tt_pages, tt_pages);
+	} else if (xe_tt->purgeable && new_state == XE_MADV_PURGEABLE_WILLNEED) {
+		xe_tt->purgeable = false;
+		xe_shrinker_mod_pages(xe->mem.shrinker, tt_pages, -tt_pages);
 	}
 }
 
 /**
- * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
+ * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
  * @bo: Buffer object
+ * @new_state: New purgeable state
  *
- * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
- * swap pages instead of discarding. Caller holds BO lock.
+ * Sets the purgeable state with lockdep assertions and validates state
+ * transitions. Once a BO is PURGED, it cannot transition to any other state.
+ * Invalid transitions are caught with xe_assert().
  */
-void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
+void xe_bo_set_purgeable_state(struct xe_bo *bo,
+			       enum xe_madv_purgeable_state new_state)
 {
-	struct ttm_buffer_object *ttm_bo = &bo->ttm;
-	struct ttm_tt *tt = ttm_bo->ttm;
-	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
-	struct xe_ttm_tt *xe_tt;
+	struct xe_device *xe = xe_bo_device(bo);
 
 	xe_bo_assert_held(bo);
 
-	if (!tt || !ttm_tt_is_populated(tt))
-		return;
+	/* Validate state is one of the known values */
+	xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
+		  new_state == XE_MADV_PURGEABLE_DONTNEED ||
+		  new_state == XE_MADV_PURGEABLE_PURGED);
 
-	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+	/* Once purged, always purged - cannot transition out */
+	xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
+			new_state != XE_MADV_PURGEABLE_PURGED));
 
-	if (xe_tt->purgeable) {
-		xe_tt->purgeable = false;
-		/* Transfer pages from purgeable to shrinkable count */
-		xe_shrinker_mod_pages(xe->mem.shrinker,
-				      tt->num_pages,
-				      -(long)tt->num_pages);
-	}
+	bo->madv_purgeable = new_state;
+	xe_bo_set_purgeable_shrinker(bo, new_state);
 }
 
 /**
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 46d1fff10e4f..0d9f25b51eb2 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -272,8 +272,6 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
 }
 
 void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
-void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
-void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
 
 static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
 {
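The intent of the simplification above, a single setter that keeps the madvise state and the shrinker flag in lockstep, can be sketched in userspace C. The names below are illustrative, and like the diff it models this is untested against the kernel:

```c
#include <assert.h>
#include <stdbool.h>

/* Model: one state setter records the madvise state and flips the
 * purgeable flag, so the two can never drift apart. */
enum purge_state { MADV_WILLNEED, MADV_DONTNEED, MADV_PURGED };

struct bo_model {
	enum purge_state state;
	bool tt_purgeable;
};

/* Returns false on an invalid transition (once purged, always purged). */
static bool set_purgeable_state(struct bo_model *bo, enum purge_state next)
{
	if (bo->state == MADV_PURGED && next != MADV_PURGED)
		return false;
	bo->state = next;
	if (!bo->tt_purgeable && next == MADV_DONTNEED)
		bo->tt_purgeable = true;	/* shrinker may discard */
	else if (bo->tt_purgeable && next == MADV_WILLNEED)
		bo->tt_purgeable = false;	/* shrinker must swap */
	return true;
}
```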
* Re: [PATCH v6 10/12] drm/xe/bo: Add purgeable shrinker state helpers
2026-03-10 10:01 ` Thomas Hellström
@ 2026-03-18 12:15 ` Yadav, Arvind
0 siblings, 0 replies; 39+ messages in thread
From: Yadav, Arvind @ 2026-03-18 12:15 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On 10-03-2026 15:31, Thomas Hellström wrote:
> On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
>> Encapsulate TTM purgeable flag updates and shrinker page accounting
>> into helper functions. This prevents desynchronization between the
>> TTM tt->purgeable flag and the shrinker's page bucket counters.
>>
>> Without these helpers, direct manipulation of xe_ttm_tt->purgeable
>> risks forgetting to update the corresponding shrinker counters,
>> leading to incorrect memory pressure calculations.
>>
>> Add xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker()
>> which atomically update both the TTM flag and transfer pages between
>> the shrinkable and purgeable buckets.
>>
>> Update purgeable BO state to PURGED after successful shrinker purge
>> for DONTNEED BOs.
>>
>> v4:
>> - @madv_purgeable atomic_t → u32 change across all relevant
>> patches (Matt)
>>
>> v5:
>> - Update purgeable BO state to PURGED after a successful shrinker
>> purge for DONTNEED BOs.
>> - Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)
>>
>> v6:
>> - Create separate patch for 'Split ghost BO and zero-refcount
>> handling'. (Thomas)
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_bo.c | 63 ++++++++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_bo.h | 2 +
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 8 +++-
>> 3 files changed, 71 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index 3a4965bdadf2..598d4463baf3 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -863,6 +863,66 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
>> bo->madv_purgeable = new_state;
>> }
>>
>> +/**
>> + * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
>> + * @bo: Buffer object
>> + *
>> + * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
>> + * discard pages immediately without swapping. Caller holds BO lock.
>> + */
>> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
>> +{
>> + struct ttm_buffer_object *ttm_bo = &bo->ttm;
>> + struct ttm_tt *tt = ttm_bo->ttm;
>> + struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
>> + struct xe_ttm_tt *xe_tt;
>> +
>> + xe_bo_assert_held(bo);
>> +
>> + if (!tt || !ttm_tt_is_populated(tt))
>> + return;
>> +
>> + xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
>> +
>> + if (!xe_tt->purgeable) {
>> + xe_tt->purgeable = true;
>> + /* Transfer pages from shrinkable to purgeable count */
>> + xe_shrinker_mod_pages(xe->mem.shrinker,
>> + -(long)tt->num_pages,
>> + tt->num_pages);
>> + }
>> +}
>> +
>> +/**
>> + * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
>> + * @bo: Buffer object
>> + *
>> + * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
>> + * swap pages instead of discarding. Caller holds BO lock.
>> + */
>> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
>> +{
>> + struct ttm_buffer_object *ttm_bo = &bo->ttm;
>> + struct ttm_tt *tt = ttm_bo->ttm;
>> + struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
>> + struct xe_ttm_tt *xe_tt;
>> +
>> + xe_bo_assert_held(bo);
>> +
>> + if (!tt || !ttm_tt_is_populated(tt))
>> + return;
>> +
>> + xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
>> +
>> + if (xe_tt->purgeable) {
>> + xe_tt->purgeable = false;
>> + /* Transfer pages from purgeable to shrinkable count */
>> + xe_shrinker_mod_pages(xe->mem.shrinker,
>> + tt->num_pages,
>> + -(long)tt->num_pages);
>> + }
>> +}
>> +
>> /**
>> * xe_ttm_bo_purge() - Purge buffer object backing store
>> * @ttm_bo: The TTM buffer object to purge
>> @@ -1243,6 +1303,9 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
>> lret = xe_bo_move_notify(xe_bo, ctx);
>> if (!lret)
>> lret = xe_bo_shrink_purge(ctx, bo, scanned);
>> + if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
>> + xe_bo_set_purgeable_state(xe_bo,
>> + XE_MADV_PURGEABLE_PURGED);
>> goto out_unref;
>> }
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
>> index 0d9f25b51eb2..46d1fff10e4f 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.h
>> +++ b/drivers/gpu/drm/xe/xe_bo.h
>> @@ -272,6 +272,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
>> }
>>
>> void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
>> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
>> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
>>
>> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>> {
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index 8acc19e25aa5..ab83e94980e4 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -312,12 +312,16 @@ void xe_bo_recompute_purgeable_state(struct
>> xe_bo *bo)
>>
>> if (vma_state == XE_BO_VMAS_STATE_DONTNEED) {
>> /* All VMAs are DONTNEED - mark BO purgeable */
>> - if (bo->madv_purgeable !=
>> XE_MADV_PURGEABLE_DONTNEED)
>> + if (bo->madv_purgeable !=
>> XE_MADV_PURGEABLE_DONTNEED) {
>> xe_bo_set_purgeable_state(bo,
>> XE_MADV_PURGEABLE_DONTNEED);
>> + xe_bo_set_purgeable_shrinker(bo);
>> + }
>> } else if (vma_state == XE_BO_VMAS_STATE_WILLNEED) {
>> /* At least one VMA is WILLNEED - BO must not be
>> purgeable */
>> - if (bo->madv_purgeable !=
>> XE_MADV_PURGEABLE_WILLNEED)
>> + if (bo->madv_purgeable !=
>> XE_MADV_PURGEABLE_WILLNEED) {
>> xe_bo_set_purgeable_state(bo,
>> XE_MADV_PURGEABLE_WILLNEED);
>> + xe_bo_clear_purgeable_shrinker(bo);
>> + }
>> }
>> /* XE_BO_VMAS_STATE_NO_VMAS: Preserve existing BO state */
>> }
> I think this can be simplified a bit using something like the below
> applied after the above patch: (untested).
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 07acce383cb1..9f0885cd3cfd 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -835,47 +835,14 @@ static int xe_bo_move_notify(struct xe_bo *bo,
> return 0;
> }
>
> -/**
> - * xe_bo_set_purgeable_state() - Set BO purgeable state with
> validation
> - * @bo: Buffer object
> - * @new_state: New purgeable state
> - *
> - * Sets the purgeable state with lockdep assertions and validates
> state
> - * transitions. Once a BO is PURGED, it cannot transition to any other
> state.
> - * Invalid transitions are caught with xe_assert().
> - */
> -void xe_bo_set_purgeable_state(struct xe_bo *bo,
> - enum xe_madv_purgeable_state new_state)
> -{
> - struct xe_device *xe = xe_bo_device(bo);
> -
> - xe_bo_assert_held(bo);
> -
> - /* Validate state is one of the known values */
> - xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
> - new_state == XE_MADV_PURGEABLE_DONTNEED ||
> - new_state == XE_MADV_PURGEABLE_PURGED);
> -
> - /* Once purged, always purged - cannot transition out */
> - xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED
> &&
> - new_state != XE_MADV_PURGEABLE_PURGED));
> -
> - bo->madv_purgeable = new_state;
> -}
> -
> -/**
> - * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update
> shrinker
> - * @bo: Buffer object
> - *
> - * Transfers pages from shrinkable to purgeable bucket. Shrinker can
> now
> - * discard pages immediately without swapping. Caller holds BO lock.
> - */
> -void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
> +static void xe_bo_set_purgeable_shrinker(struct xe_bo *bo, enum
> xe_madv_purgeable_state new_state)
> +
> {
> struct ttm_buffer_object *ttm_bo = &bo->ttm;
> struct ttm_tt *tt = ttm_bo->ttm;
> struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> struct xe_ttm_tt *xe_tt;
> + long int tt_pages;
>
> xe_bo_assert_held(bo);
>
> @@ -883,44 +850,44 @@ void xe_bo_set_purgeable_shrinker(struct xe_bo
> *bo)
> return;
>
> xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> -
> - if (!xe_tt->purgeable) {
> + tt_pages = tt->num_pages;
> +
> + if (!xe_tt->purgeable && new_state ==
> XE_MADV_PURGEABLE_DONTNEED) {
> xe_tt->purgeable = true;
> - /* Transfer pages from shrinkable to purgeable count
> */
> - xe_shrinker_mod_pages(xe->mem.shrinker,
> - -(long)tt->num_pages,
> - tt->num_pages);
> + xe_shrinker_mod_pages(xe->mem.shrinker, -tt_pages, tt_pages);
> + } else if (xe_tt->purgeable && new_state ==
> XE_MADV_PURGEABLE_WILLNEED) {
> + xe_tt->purgeable = false;
> + xe_shrinker_mod_pages(xe->mem.shrinker, tt_pages, -tt_pages);
> }
> }
>
> /**
> - * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update
> shrinker
> + * xe_bo_set_purgeable_state() - Set BO purgeable state with
> validation
> * @bo: Buffer object
> + * @new_state: New purgeable state
> *
> - * Transfers pages from purgeable to shrinkable bucket. Shrinker must
> now
> - * swap pages instead of discarding. Caller holds BO lock.
> + * Sets the purgeable state with lockdep assertions and validates
> state
> + * transitions. Once a BO is PURGED, it cannot transition to any other
> state.
> + * Invalid transitions are caught with xe_assert().
> */
> -void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
> +void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state)
> {
> - struct ttm_buffer_object *ttm_bo = &bo->ttm;
> - struct ttm_tt *tt = ttm_bo->ttm;
> - struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> - struct xe_ttm_tt *xe_tt;
> + struct xe_device *xe = xe_bo_device(bo);
>
> xe_bo_assert_held(bo);
>
> - if (!tt || !ttm_tt_is_populated(tt))
> - return;
> + /* Validate state is one of the known values */
> + xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
> + new_state == XE_MADV_PURGEABLE_DONTNEED ||
> + new_state == XE_MADV_PURGEABLE_PURGED);
>
> - xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> + /* Once purged, always purged - cannot transition out */
> + xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED
> &&
> + new_state != XE_MADV_PURGEABLE_PURGED));
>
> - if (xe_tt->purgeable) {
> - xe_tt->purgeable = false;
> - /* Transfer pages from purgeable to shrinkable count
> */
> - xe_shrinker_mod_pages(xe->mem.shrinker,
> - tt->num_pages,
> - -(long)tt->num_pages);
> - }
> + bo->madv_purgeable = new_state;
> + xe_bo_set_purgeable_shrinker(bo, new_state);
> }
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index 46d1fff10e4f..0d9f25b51eb2 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -272,8 +272,6 @@ static inline bool xe_bo_madv_is_dontneed(struct
> xe_bo *bo)
> }
>
> void xe_bo_set_purgeable_state(struct xe_bo *bo, enum
> xe_madv_purgeable_state new_state);
> -void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
> -void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
>
> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
> {
>
Thanks Thomas, that makes sense. I will combine both shrinker helpers
into one and call it directly from xe_bo_set_purgeable_state(). This
also removes the dual-call pattern from xe_bo_recompute_purgeable_state().
Thanks,
Arvind
^ permalink raw reply [flat|nested] 39+ messages in thread
* [PATCH v6 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (9 preceding siblings ...)
2026-03-03 15:20 ` [PATCH v6 10/12] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
@ 2026-03-03 15:20 ` Arvind Yadav
2026-03-10 10:23 ` Thomas Hellström
2026-03-03 15:20 ` [PATCH v6 12/12] drm/xe/bo: Skip zero-refcount BOs in shrinker Arvind Yadav
` (5 subsequent siblings)
16 siblings, 1 reply; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:20 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Hook the madvise_purgeable() handler into the madvise IOCTL now that all
supporting infrastructure is complete:
- Core purge implementation (patch 3)
- BO state tracking and helpers (patches 1-2)
- Per-VMA purgeable state tracking (patch 6)
- Shrinker integration for memory reclamation (patch 10)
This final patch enables userspace to use the DRM_XE_VMA_ATTR_PURGEABLE_STATE
madvise type to mark buffers as WILLNEED/DONTNEED and receive the retained
status indicating whether buffers were purged.
The feature was kept disabled in earlier patches to maintain bisectability
and ensure all components are in place before exposing to userspace.
Userspace can detect kernel support for purgeable BOs by checking the
DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT flag in the query_config
response.
v6:
- Add DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for userspace
feature detection. (Jose)
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_query.c | 2 ++
drivers/gpu/drm/xe/xe_vm_madvise.c | 22 +++++-----------------
2 files changed, 7 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
index 34db266b723f..e535c77405b9 100644
--- a/drivers/gpu/drm/xe/xe_query.c
+++ b/drivers/gpu/drm/xe/xe_query.c
@@ -340,6 +340,8 @@ static int query_config(struct xe_device *xe, struct drm_xe_device_query *query)
DRM_XE_QUERY_CONFIG_FLAG_HAS_NO_COMPRESSION_HINT;
config->info[DRM_XE_QUERY_CONFIG_FLAGS] |=
DRM_XE_QUERY_CONFIG_FLAG_HAS_LOW_LATENCY;
+ config->info[DRM_XE_QUERY_CONFIG_FLAGS] |=
+ DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT;
config->info[DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT] =
xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K ? SZ_64K : SZ_4K;
config->info[DRM_XE_QUERY_CONFIG_VA_BITS] = xe->info.va_bits;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index ab83e94980e4..746e9b7b47bb 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -337,18 +337,11 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
*
* Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was purged
* in details->has_purged_bo for later copy to userspace.
- *
- * Note: Marked __maybe_unused until hooked into madvise_funcs[] in the
- * final patch to maintain bisectability. The NULL placeholder in the
- * array ensures proper -EINVAL return for userspace until all supporting
- * infrastructure (shrinker, per-VMA tracking) is complete.
*/
-static void __maybe_unused madvise_purgeable(struct xe_device *xe,
- struct xe_vm *vm,
- struct xe_vma **vmas,
- int num_vmas,
- struct drm_xe_madvise *op,
- struct xe_madvise_details *details)
+static void madvise_purgeable(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op,
+ struct xe_madvise_details *details)
{
int i;
@@ -412,12 +405,7 @@ static const madvise_func madvise_funcs[] = {
[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
[DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
[DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
- /*
- * Purgeable support implemented but not enabled yet to maintain
- * bisectability. Will be set to madvise_purgeable() in final patch
- * when all infrastructure (shrinker, VMA tracking) is complete.
- */
- [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
+ [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = madvise_purgeable,
};
static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
--
2.43.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* Re: [PATCH v6 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support
2026-03-03 15:20 ` [PATCH v6 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
@ 2026-03-10 10:23 ` Thomas Hellström
0 siblings, 0 replies; 39+ messages in thread
From: Thomas Hellström @ 2026-03-10 10:23 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
> Hook the madvise_purgeable() handler into the madvise IOCTL now that
> all
> supporting infrastructure is complete:
>
> - Core purge implementation (patch 3)
> - BO state tracking and helpers (patches 1-2)
> - Per-VMA purgeable state tracking (patch 6)
> - Shrinker integration for memory reclamation (patch 10)
>
> This final patch enables userspace to use the
> DRM_XE_VMA_ATTR_PURGEABLE_STATE
> madvise type to mark buffers as WILLNEED/DONTNEED and receive the
> retained
> status indicating whether buffers were purged.
>
> The feature was kept disabled in earlier patches to maintain
> bisectability
> and ensure all components are in place before exposing to userspace.
>
> Userspace can detect kernel support for purgeable BOs by checking the
> DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT flag in the query_config
> response.
>
> v6:
> - Add DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for userspace
> feature detection. (Jose)
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray
> <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_query.c | 2 ++
> drivers/gpu/drm/xe/xe_vm_madvise.c | 22 +++++-----------------
> 2 files changed, 7 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_query.c
> b/drivers/gpu/drm/xe/xe_query.c
> index 34db266b723f..e535c77405b9 100644
> --- a/drivers/gpu/drm/xe/xe_query.c
> +++ b/drivers/gpu/drm/xe/xe_query.c
> @@ -340,6 +340,8 @@ static int query_config(struct xe_device *xe,
> struct drm_xe_device_query *query)
> DRM_XE_QUERY_CONFIG_FLAG_HAS_NO_COMPRESSION_HINT;
> config->info[DRM_XE_QUERY_CONFIG_FLAGS] |=
> DRM_XE_QUERY_CONFIG_FLAG_HAS_LOW_LATENCY;
> + config->info[DRM_XE_QUERY_CONFIG_FLAGS] |=
> + DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT;
> config->info[DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT] =
> xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K ? SZ_64K
> : SZ_4K;
> config->info[DRM_XE_QUERY_CONFIG_VA_BITS] = xe->info.va_bits;
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
> b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index ab83e94980e4..746e9b7b47bb 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -337,18 +337,11 @@ void xe_bo_recompute_purgeable_state(struct
> xe_bo *bo)
> *
> * Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was
> purged
> * in details->has_purged_bo for later copy to userspace.
> - *
> - * Note: Marked __maybe_unused until hooked into madvise_funcs[] in
> the
> - * final patch to maintain bisectability. The NULL placeholder in
> the
> - * array ensures proper -EINVAL return for userspace until all
> supporting
> - * infrastructure (shrinker, per-VMA tracking) is complete.
> */
> -static void __maybe_unused madvise_purgeable(struct xe_device *xe,
> - struct xe_vm *vm,
> - struct xe_vma **vmas,
> - int num_vmas,
> - struct drm_xe_madvise *op,
> - struct xe_madvise_details *details)
> +static void madvise_purgeable(struct xe_device *xe, struct xe_vm
> *vm,
> + struct xe_vma **vmas, int num_vmas,
> + struct drm_xe_madvise *op,
> + struct xe_madvise_details *details)
> {
> int i;
>
> @@ -412,12 +405,7 @@ static const madvise_func madvise_funcs[] = {
> [DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] =
> madvise_preferred_mem_loc,
> [DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
> [DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
> - /*
> - * Purgeable support implemented but not enabled yet to
> maintain
> - * bisectability. Will be set to madvise_purgeable() in
> final patch
> - * when all infrastructure (shrinker, VMA tracking) is
> complete.
> - */
> - [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
> + [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = madvise_purgeable,
> };
>
> static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start,
> u64 end)
^ permalink raw reply [flat|nested] 39+ messages in thread
* [PATCH v6 12/12] drm/xe/bo: Skip zero-refcount BOs in shrinker
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (10 preceding siblings ...)
2026-03-03 15:20 ` [PATCH v6 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
@ 2026-03-03 15:20 ` Arvind Yadav
2026-03-05 15:49 ` Thomas Hellström
2026-03-03 16:12 ` ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev7) Patchwork
` (4 subsequent siblings)
16 siblings, 1 reply; 39+ messages in thread
From: Arvind Yadav @ 2026-03-03 15:20 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
pallavi.mishra
Zero-refcount BOs are being destroyed. Skip them in the shrinker
to avoid racing with cleanup by returning -EBUSY.
Ghost BOs from ttm_bo_pipeline_gutting() still hold reclaimable
pages, so continue processing them via xe_bo_shrink_purge().
Fixes: 00c8efc3180f ("drm/xe: Add a shrinker for xe bos")
v6:
- Split from patch 0010 (Thomas)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 598d4463baf3..07acce383cb1 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1295,9 +1295,13 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
if (!xe_bo_eviction_valuable(bo, &place))
return -EBUSY;
- if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
+ /* Ghost BOs still hold reclaimable pages, try to shrink them. */
+ if (!xe_bo_is_xe_bo(bo))
return xe_bo_shrink_purge(ctx, bo, scanned);
+ if (!xe_bo_get_unless_zero(xe_bo))
+ return -EBUSY;
+
if (xe_tt->purgeable) {
if (bo->resource->mem_type != XE_PL_SYSTEM)
lret = xe_bo_move_notify(xe_bo, ctx);
--
2.43.0
^ permalink raw reply related [flat|nested] 39+ messages in thread
* Re: [PATCH v6 12/12] drm/xe/bo: Skip zero-refcount BOs in shrinker
2026-03-03 15:20 ` [PATCH v6 12/12] drm/xe/bo: Skip zero-refcount BOs in shrinker Arvind Yadav
@ 2026-03-05 15:49 ` Thomas Hellström
2026-03-17 5:59 ` Yadav, Arvind
0 siblings, 1 reply; 39+ messages in thread
From: Thomas Hellström @ 2026-03-05 15:49 UTC (permalink / raw)
To: Arvind Yadav, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
> Zero-refcount BOs are being destroyed. Skip them in the shrinker
> to avoid racing with cleanup by returning -EBUSY.
>
> Ghost BOs from ttm_bo_pipeline_gutting() still hold reclaimable
> pages, so continue processing them via xe_bo_shrink_purge().
>
> Fixes: 00c8efc3180f ("drm/xe: Add a shrinker for xe bos")
>
> v6:
> - Split from patch 0010 (Thomas)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
Hi, Arvind.
> ---
> drivers/gpu/drm/xe/xe_bo.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 598d4463baf3..07acce383cb1 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -1295,9 +1295,13 @@ long xe_bo_shrink(struct ttm_operation_ctx
> *ctx, struct ttm_buffer_object *bo,
> if (!xe_bo_eviction_valuable(bo, &place))
> return -EBUSY;
>
> - if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
Looking a bit closer at this, I realized this is intentional. The bo
has two refcounts. One is the gem refcount (which we fail to grab here)
and one is the ttm refcount, which we have successfully grabbed,
otherwise this function wouldn't be called. So the bo is in a zombie
state (all xe-specific members are invalid) but calling
xe_bo_shrink_purge is allowed. So this patch could actually be dropped.
Question is did you see any issues from xe_bo_shrink_purge() without
this patch?
Thanks,
Thomas
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v6 12/12] drm/xe/bo: Skip zero-refcount BOs in shrinker
2026-03-05 15:49 ` Thomas Hellström
@ 2026-03-17 5:59 ` Yadav, Arvind
0 siblings, 0 replies; 39+ messages in thread
From: Yadav, Arvind @ 2026-03-17 5:59 UTC (permalink / raw)
To: Thomas Hellström, intel-xe
Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra
On 05-03-2026 21:19, Thomas Hellström wrote:
> On Tue, 2026-03-03 at 20:50 +0530, Arvind Yadav wrote:
>> Zero-refcount BOs are being destroyed. Skip them in the shrinker
>> to avoid racing with cleanup by returning -EBUSY.
>>
>> Ghost BOs from ttm_bo_pipeline_gutting() still hold reclaimable
>> pages, so continue processing them via xe_bo_shrink_purge().
>>
>> Fixes: 00c8efc3180f ("drm/xe: Add a shrinker for xe bos")
>>
>> v6:
>> - Split from patch 0010 (Thomas)
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> Hi, Arvind.
>
>> ---
>> drivers/gpu/drm/xe/xe_bo.c | 6 +++++-
>> 1 file changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index 598d4463baf3..07acce383cb1 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -1295,9 +1295,13 @@ long xe_bo_shrink(struct ttm_operation_ctx
>> *ctx, struct ttm_buffer_object *bo,
>> if (!xe_bo_eviction_valuable(bo, &place))
>> return -EBUSY;
>>
>> - if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
> Looking a bit closer at this, I realized this is intentional. The bo
> has two refcounts. One is the gem refcount (which we fail to grab here)
> and one is the ttm refcount, which we have successfully grabbed,
> otherwise this function wouldn't be called. So the bo is in a zombie
> state (all xe-specific members are invalid) but calling
> xe_bo_shrink_purge is allowed. So this patch could actually be dropped.
>
> Question is did you see any issues from xe_bo_shrink_purge() without
> this patch?
I did not see a concrete crash or corruption from xe_bo_shrink_purge()
on a zombie BO — the patch was added as a precaution based on the
refcount check pattern.
Given your analysis, I'll drop this patch in v7.
Thanks,
Arvind
>
> Thanks,
> Thomas
^ permalink raw reply [flat|nested] 39+ messages in thread
* ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev7)
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (11 preceding siblings ...)
2026-03-03 15:20 ` [PATCH v6 12/12] drm/xe/bo: Skip zero-refcount BOs in shrinker Arvind Yadav
@ 2026-03-03 16:12 ` Patchwork
2026-03-03 16:14 ` ✓ CI.KUnit: success " Patchwork
` (3 subsequent siblings)
16 siblings, 0 replies; 39+ messages in thread
From: Patchwork @ 2026-03-03 16:12 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
== Series Details ==
Series: drm/xe/madvise: Add support for purgeable buffer objects (rev7)
URL : https://patchwork.freedesktop.org/series/156651/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
1f57ba1afceae32108bd24770069f764d940a0e4
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 389b1060c1dfec0d57ee6d0159e325938d067263
Author: Arvind Yadav <arvind.yadav@intel.com>
Date: Tue Mar 3 20:50:08 2026 +0530
drm/xe/bo: Skip zero-refcount BOs in shrinker
Zero-refcount BOs are being destroyed. Skip them in the shrinker
to avoid racing with cleanup by returning -EBUSY.
Ghost BOs from ttm_bo_pipeline_gutting() still hold reclaimable
pages, so continue processing them via xe_bo_shrink_purge().
Fixes: 00c8efc3180f ("drm/xe: Add a shrinker for xe bos")
v6:
- Split from patch 0010 (Thomas)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
+ /mt/dim checkpatch 7f2495944abe599477bd1d891f6fbaf1d5dded3b drm-intel
2f4f3f3df02a drm/xe/uapi: Add UAPI support for purgeable buffer objects
780ecafc0c65 drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
f8010040b7b9 drm/xe/madvise: Implement purgeable buffer object support
-:23: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#23:
- Async TLB invalidation via trigger_rebind (no blocking xe_vm_invalidate_vma)
-:122: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#122: FILE: drivers/gpu/drm/xe/xe_bo.c:856:
+ xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
+ new_state == XE_MADV_PURGEABLE_DONTNEED ||
total: 0 errors, 1 warnings, 1 checks, 490 lines checked
0a629c8de4b7 drm/xe/bo: Block CPU faults to purgeable buffer objects
7d7ae8e22221 drm/xe/vm: Prevent binding of purged buffer objects
-:37: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#37:
- Replace three boolean parameters with struct xe_vma_lock_and_validate_flags
total: 0 errors, 1 warnings, 0 checks, 116 lines checked
837c3fe4cbc5 drm/xe/madvise: Implement per-VMA purgeable state tracking
e76ec44d9788 drm/xe/madvise: Block imported and exported dma-bufs
-:51: CHECK:LINE_SPACING: Please don't use multiple blank lines
#51: FILE: drivers/gpu/drm/xe/xe_vm_madvise.c:187:
+
total: 0 errors, 0 warnings, 1 checks, 56 lines checked
f1235a64fa31 drm/xe/bo: Block mmap of DONTNEED/purged BOs
2a4aaaf7d55e drm/xe/dma_buf: Block export of DONTNEED/purged BOs
2bb599c4a7be drm/xe/bo: Add purgeable shrinker state helpers
88639917c398 drm/xe/madvise: Enable purgeable buffer object IOCTL support
-:17: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#17:
This final patch enables userspace to use the DRM_XE_VMA_ATTR_PURGEABLE_STATE
total: 0 errors, 1 warnings, 0 checks, 43 lines checked
389b1060c1df drm/xe/bo: Skip zero-refcount BOs in shrinker
^ permalink raw reply [flat|nested] 39+ messages in thread
* ✓ CI.KUnit: success for drm/xe/madvise: Add support for purgeable buffer objects (rev7)
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (12 preceding siblings ...)
2026-03-03 16:12 ` ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev7) Patchwork
@ 2026-03-03 16:14 ` Patchwork
2026-03-03 16:50 ` ✓ Xe.CI.BAT: " Patchwork
` (2 subsequent siblings)
16 siblings, 0 replies; 39+ messages in thread
From: Patchwork @ 2026-03-03 16:14 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
== Series Details ==
Series: drm/xe/madvise: Add support for purgeable buffer objects (rev7)
URL : https://patchwork.freedesktop.org/series/156651/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[16:12:56] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[16:13:01] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[16:13:31] Starting KUnit Kernel (1/1)...
[16:13:31] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[16:13:31] ================== guc_buf (11 subtests) ===================
[16:13:31] [PASSED] test_smallest
[16:13:31] [PASSED] test_largest
[16:13:31] [PASSED] test_granular
[16:13:31] [PASSED] test_unique
[16:13:31] [PASSED] test_overlap
[16:13:31] [PASSED] test_reusable
[16:13:31] [PASSED] test_too_big
[16:13:31] [PASSED] test_flush
[16:13:31] [PASSED] test_lookup
[16:13:31] [PASSED] test_data
[16:13:31] [PASSED] test_class
[16:13:31] ===================== [PASSED] guc_buf =====================
[16:13:31] =================== guc_dbm (7 subtests) ===================
[16:13:31] [PASSED] test_empty
[16:13:31] [PASSED] test_default
[16:13:31] ======================== test_size ========================
[16:13:31] [PASSED] 4
[16:13:31] [PASSED] 8
[16:13:31] [PASSED] 32
[16:13:31] [PASSED] 256
[16:13:31] ==================== [PASSED] test_size ====================
[16:13:31] ======================= test_reuse ========================
[16:13:31] [PASSED] 4
[16:13:31] [PASSED] 8
[16:13:31] [PASSED] 32
[16:13:31] [PASSED] 256
[16:13:31] =================== [PASSED] test_reuse ====================
[16:13:31] =================== test_range_overlap ====================
[16:13:31] [PASSED] 4
[16:13:31] [PASSED] 8
[16:13:31] [PASSED] 32
[16:13:31] [PASSED] 256
[16:13:31] =============== [PASSED] test_range_overlap ================
[16:13:31] =================== test_range_compact ====================
[16:13:31] [PASSED] 4
[16:13:31] [PASSED] 8
[16:13:31] [PASSED] 32
[16:13:31] [PASSED] 256
[16:13:31] =============== [PASSED] test_range_compact ================
[16:13:31] ==================== test_range_spare =====================
[16:13:31] [PASSED] 4
[16:13:31] [PASSED] 8
[16:13:31] [PASSED] 32
[16:13:31] [PASSED] 256
[16:13:31] ================ [PASSED] test_range_spare =================
[16:13:31] ===================== [PASSED] guc_dbm =====================
[16:13:31] =================== guc_idm (6 subtests) ===================
[16:13:31] [PASSED] bad_init
[16:13:31] [PASSED] no_init
[16:13:31] [PASSED] init_fini
[16:13:31] [PASSED] check_used
[16:13:31] [PASSED] check_quota
[16:13:31] [PASSED] check_all
[16:13:31] ===================== [PASSED] guc_idm =====================
[16:13:31] ================== no_relay (3 subtests) ===================
[16:13:31] [PASSED] xe_drops_guc2pf_if_not_ready
[16:13:31] [PASSED] xe_drops_guc2vf_if_not_ready
[16:13:31] [PASSED] xe_rejects_send_if_not_ready
[16:13:31] ==================== [PASSED] no_relay =====================
[16:13:31] ================== pf_relay (14 subtests) ==================
[16:13:31] [PASSED] pf_rejects_guc2pf_too_short
[16:13:31] [PASSED] pf_rejects_guc2pf_too_long
[16:13:31] [PASSED] pf_rejects_guc2pf_no_payload
[16:13:31] [PASSED] pf_fails_no_payload
[16:13:31] [PASSED] pf_fails_bad_origin
[16:13:31] [PASSED] pf_fails_bad_type
[16:13:31] [PASSED] pf_txn_reports_error
[16:13:31] [PASSED] pf_txn_sends_pf2guc
[16:13:31] [PASSED] pf_sends_pf2guc
[16:13:31] [SKIPPED] pf_loopback_nop
[16:13:31] [SKIPPED] pf_loopback_echo
[16:13:31] [SKIPPED] pf_loopback_fail
[16:13:31] [SKIPPED] pf_loopback_busy
[16:13:31] [SKIPPED] pf_loopback_retry
[16:13:31] ==================== [PASSED] pf_relay =====================
[16:13:31] ================== vf_relay (3 subtests) ===================
[16:13:31] [PASSED] vf_rejects_guc2vf_too_short
[16:13:31] [PASSED] vf_rejects_guc2vf_too_long
[16:13:31] [PASSED] vf_rejects_guc2vf_no_payload
[16:13:31] ==================== [PASSED] vf_relay =====================
[16:13:31] ================ pf_gt_config (9 subtests) =================
[16:13:31] [PASSED] fair_contexts_1vf
[16:13:31] [PASSED] fair_doorbells_1vf
[16:13:31] [PASSED] fair_ggtt_1vf
[16:13:31] ====================== fair_vram_1vf ======================
[16:13:31] [PASSED] 3.50 GiB
[16:13:31] [PASSED] 11.5 GiB
[16:13:31] [PASSED] 15.5 GiB
[16:13:31] [PASSED] 31.5 GiB
[16:13:31] [PASSED] 63.5 GiB
[16:13:31] [PASSED] 1.91 GiB
[16:13:31] ================== [PASSED] fair_vram_1vf ==================
[16:13:31] ================ fair_vram_1vf_admin_only =================
[16:13:31] [PASSED] 3.50 GiB
[16:13:31] [PASSED] 11.5 GiB
[16:13:31] [PASSED] 15.5 GiB
[16:13:31] [PASSED] 31.5 GiB
[16:13:31] [PASSED] 63.5 GiB
[16:13:31] [PASSED] 1.91 GiB
[16:13:31] ============ [PASSED] fair_vram_1vf_admin_only =============
[16:13:31] ====================== fair_contexts ======================
[16:13:31] [PASSED] 1 VF
[16:13:31] [PASSED] 2 VFs
[16:13:31] [PASSED] 3 VFs
[16:13:31] [PASSED] 4 VFs
[16:13:31] [PASSED] 5 VFs
[16:13:31] [PASSED] 6 VFs
[16:13:31] [PASSED] 7 VFs
[16:13:31] [PASSED] 8 VFs
[16:13:31] [PASSED] 9 VFs
[16:13:31] [PASSED] 10 VFs
[16:13:31] [PASSED] 11 VFs
[16:13:31] [PASSED] 12 VFs
[16:13:31] [PASSED] 13 VFs
[16:13:31] [PASSED] 14 VFs
[16:13:31] [PASSED] 15 VFs
[16:13:31] [PASSED] 16 VFs
[16:13:31] [PASSED] 17 VFs
[16:13:31] [PASSED] 18 VFs
[16:13:31] [PASSED] 19 VFs
[16:13:31] [PASSED] 20 VFs
[16:13:31] [PASSED] 21 VFs
[16:13:31] [PASSED] 22 VFs
[16:13:31] [PASSED] 23 VFs
[16:13:31] [PASSED] 24 VFs
[16:13:31] [PASSED] 25 VFs
[16:13:31] [PASSED] 26 VFs
[16:13:31] [PASSED] 27 VFs
[16:13:31] [PASSED] 28 VFs
[16:13:31] [PASSED] 29 VFs
[16:13:31] [PASSED] 30 VFs
[16:13:31] [PASSED] 31 VFs
[16:13:31] [PASSED] 32 VFs
[16:13:31] [PASSED] 33 VFs
[16:13:31] [PASSED] 34 VFs
[16:13:31] [PASSED] 35 VFs
[16:13:31] [PASSED] 36 VFs
[16:13:31] [PASSED] 37 VFs
[16:13:31] [PASSED] 38 VFs
[16:13:31] [PASSED] 39 VFs
[16:13:31] [PASSED] 40 VFs
[16:13:31] [PASSED] 41 VFs
[16:13:31] [PASSED] 42 VFs
[16:13:31] [PASSED] 43 VFs
[16:13:31] [PASSED] 44 VFs
[16:13:31] [PASSED] 45 VFs
[16:13:31] [PASSED] 46 VFs
[16:13:31] [PASSED] 47 VFs
[16:13:31] [PASSED] 48 VFs
[16:13:31] [PASSED] 49 VFs
[16:13:31] [PASSED] 50 VFs
[16:13:31] [PASSED] 51 VFs
[16:13:31] [PASSED] 52 VFs
[16:13:31] [PASSED] 53 VFs
[16:13:31] [PASSED] 54 VFs
[16:13:31] [PASSED] 55 VFs
[16:13:31] [PASSED] 56 VFs
[16:13:31] [PASSED] 57 VFs
[16:13:31] [PASSED] 58 VFs
[16:13:31] [PASSED] 59 VFs
[16:13:31] [PASSED] 60 VFs
[16:13:31] [PASSED] 61 VFs
[16:13:31] [PASSED] 62 VFs
[16:13:31] [PASSED] 63 VFs
[16:13:31] ================== [PASSED] fair_contexts ==================
[16:13:31] ===================== fair_doorbells ======================
[16:13:31] [PASSED] 1 VF
[16:13:31] [PASSED] 2 VFs
[16:13:31] [PASSED] 3 VFs
[16:13:31] [PASSED] 4 VFs
[16:13:31] [PASSED] 5 VFs
[16:13:31] [PASSED] 6 VFs
[16:13:31] [PASSED] 7 VFs
[16:13:31] [PASSED] 8 VFs
[16:13:31] [PASSED] 9 VFs
[16:13:31] [PASSED] 10 VFs
[16:13:31] [PASSED] 11 VFs
[16:13:31] [PASSED] 12 VFs
[16:13:31] [PASSED] 13 VFs
[16:13:31] [PASSED] 14 VFs
[16:13:31] [PASSED] 15 VFs
[16:13:31] [PASSED] 16 VFs
[16:13:31] [PASSED] 17 VFs
[16:13:31] [PASSED] 18 VFs
[16:13:31] [PASSED] 19 VFs
[16:13:31] [PASSED] 20 VFs
[16:13:31] [PASSED] 21 VFs
[16:13:31] [PASSED] 22 VFs
[16:13:31] [PASSED] 23 VFs
[16:13:31] [PASSED] 24 VFs
[16:13:31] [PASSED] 25 VFs
[16:13:31] [PASSED] 26 VFs
[16:13:31] [PASSED] 27 VFs
[16:13:31] [PASSED] 28 VFs
[16:13:31] [PASSED] 29 VFs
[16:13:31] [PASSED] 30 VFs
[16:13:31] [PASSED] 31 VFs
[16:13:31] [PASSED] 32 VFs
[16:13:31] [PASSED] 33 VFs
[16:13:31] [PASSED] 34 VFs
[16:13:31] [PASSED] 35 VFs
[16:13:31] [PASSED] 36 VFs
[16:13:31] [PASSED] 37 VFs
[16:13:31] [PASSED] 38 VFs
[16:13:31] [PASSED] 39 VFs
[16:13:31] [PASSED] 40 VFs
[16:13:31] [PASSED] 41 VFs
[16:13:31] [PASSED] 42 VFs
[16:13:31] [PASSED] 43 VFs
[16:13:31] [PASSED] 44 VFs
[16:13:31] [PASSED] 45 VFs
[16:13:31] [PASSED] 46 VFs
[16:13:31] [PASSED] 47 VFs
[16:13:31] [PASSED] 48 VFs
[16:13:31] [PASSED] 49 VFs
[16:13:31] [PASSED] 50 VFs
[16:13:31] [PASSED] 51 VFs
[16:13:31] [PASSED] 52 VFs
[16:13:31] [PASSED] 53 VFs
[16:13:31] [PASSED] 54 VFs
[16:13:31] [PASSED] 55 VFs
[16:13:31] [PASSED] 56 VFs
[16:13:31] [PASSED] 57 VFs
[16:13:31] [PASSED] 58 VFs
[16:13:31] [PASSED] 59 VFs
[16:13:31] [PASSED] 60 VFs
[16:13:31] [PASSED] 61 VFs
[16:13:31] [PASSED] 62 VFs
[16:13:31] [PASSED] 63 VFs
[16:13:31] ================= [PASSED] fair_doorbells ==================
[16:13:31] ======================== fair_ggtt ========================
[16:13:31] [PASSED] 1 VF
[16:13:31] [PASSED] 2 VFs
[16:13:31] [PASSED] 3 VFs
[16:13:31] [PASSED] 4 VFs
[16:13:31] [PASSED] 5 VFs
[16:13:31] [PASSED] 6 VFs
[16:13:31] [PASSED] 7 VFs
[16:13:31] [PASSED] 8 VFs
[16:13:31] [PASSED] 9 VFs
[16:13:31] [PASSED] 10 VFs
[16:13:31] [PASSED] 11 VFs
[16:13:31] [PASSED] 12 VFs
[16:13:31] [PASSED] 13 VFs
[16:13:31] [PASSED] 14 VFs
[16:13:31] [PASSED] 15 VFs
[16:13:31] [PASSED] 16 VFs
[16:13:31] [PASSED] 17 VFs
[16:13:31] [PASSED] 18 VFs
[16:13:31] [PASSED] 19 VFs
[16:13:31] [PASSED] 20 VFs
[16:13:31] [PASSED] 21 VFs
[16:13:31] [PASSED] 22 VFs
[16:13:31] [PASSED] 23 VFs
[16:13:31] [PASSED] 24 VFs
[16:13:31] [PASSED] 25 VFs
[16:13:31] [PASSED] 26 VFs
[16:13:31] [PASSED] 27 VFs
[16:13:31] [PASSED] 28 VFs
[16:13:31] [PASSED] 29 VFs
[16:13:31] [PASSED] 30 VFs
[16:13:31] [PASSED] 31 VFs
[16:13:31] [PASSED] 32 VFs
[16:13:31] [PASSED] 33 VFs
[16:13:31] [PASSED] 34 VFs
[16:13:31] [PASSED] 35 VFs
[16:13:31] [PASSED] 36 VFs
[16:13:31] [PASSED] 37 VFs
[16:13:31] [PASSED] 38 VFs
[16:13:31] [PASSED] 39 VFs
[16:13:31] [PASSED] 40 VFs
[16:13:31] [PASSED] 41 VFs
[16:13:31] [PASSED] 42 VFs
[16:13:31] [PASSED] 43 VFs
[16:13:31] [PASSED] 44 VFs
[16:13:31] [PASSED] 45 VFs
[16:13:31] [PASSED] 46 VFs
[16:13:31] [PASSED] 47 VFs
[16:13:31] [PASSED] 48 VFs
[16:13:31] [PASSED] 49 VFs
[16:13:31] [PASSED] 50 VFs
[16:13:31] [PASSED] 51 VFs
[16:13:31] [PASSED] 52 VFs
[16:13:31] [PASSED] 53 VFs
[16:13:31] [PASSED] 54 VFs
[16:13:31] [PASSED] 55 VFs
[16:13:31] [PASSED] 56 VFs
[16:13:31] [PASSED] 57 VFs
[16:13:31] [PASSED] 58 VFs
[16:13:31] [PASSED] 59 VFs
[16:13:31] [PASSED] 60 VFs
[16:13:31] [PASSED] 61 VFs
[16:13:31] [PASSED] 62 VFs
[16:13:31] [PASSED] 63 VFs
[16:13:31] ==================== [PASSED] fair_ggtt ====================
[16:13:31] ======================== fair_vram ========================
[16:13:31] [PASSED] 1 VF
[16:13:31] [PASSED] 2 VFs
[16:13:31] [PASSED] 3 VFs
[16:13:31] [PASSED] 4 VFs
[16:13:31] [PASSED] 5 VFs
[16:13:31] [PASSED] 6 VFs
[16:13:31] [PASSED] 7 VFs
[16:13:31] [PASSED] 8 VFs
[16:13:31] [PASSED] 9 VFs
[16:13:31] [PASSED] 10 VFs
[16:13:31] [PASSED] 11 VFs
[16:13:31] [PASSED] 12 VFs
[16:13:31] [PASSED] 13 VFs
[16:13:31] [PASSED] 14 VFs
[16:13:31] [PASSED] 15 VFs
[16:13:31] [PASSED] 16 VFs
[16:13:31] [PASSED] 17 VFs
[16:13:31] [PASSED] 18 VFs
[16:13:31] [PASSED] 19 VFs
[16:13:32] [PASSED] 20 VFs
[16:13:32] [PASSED] 21 VFs
[16:13:32] [PASSED] 22 VFs
[16:13:32] [PASSED] 23 VFs
[16:13:32] [PASSED] 24 VFs
[16:13:32] [PASSED] 25 VFs
[16:13:32] [PASSED] 26 VFs
[16:13:32] [PASSED] 27 VFs
[16:13:32] [PASSED] 28 VFs
[16:13:32] [PASSED] 29 VFs
[16:13:32] [PASSED] 30 VFs
[16:13:32] [PASSED] 31 VFs
[16:13:32] [PASSED] 32 VFs
[16:13:32] [PASSED] 33 VFs
[16:13:32] [PASSED] 34 VFs
[16:13:32] [PASSED] 35 VFs
[16:13:32] [PASSED] 36 VFs
[16:13:32] [PASSED] 37 VFs
[16:13:32] [PASSED] 38 VFs
[16:13:32] [PASSED] 39 VFs
[16:13:32] [PASSED] 40 VFs
[16:13:32] [PASSED] 41 VFs
[16:13:32] [PASSED] 42 VFs
[16:13:32] [PASSED] 43 VFs
[16:13:32] [PASSED] 44 VFs
[16:13:32] [PASSED] 45 VFs
[16:13:32] [PASSED] 46 VFs
[16:13:32] [PASSED] 47 VFs
[16:13:32] [PASSED] 48 VFs
[16:13:32] [PASSED] 49 VFs
[16:13:32] [PASSED] 50 VFs
[16:13:32] [PASSED] 51 VFs
[16:13:32] [PASSED] 52 VFs
[16:13:32] [PASSED] 53 VFs
[16:13:32] [PASSED] 54 VFs
[16:13:32] [PASSED] 55 VFs
[16:13:32] [PASSED] 56 VFs
[16:13:32] [PASSED] 57 VFs
[16:13:32] [PASSED] 58 VFs
[16:13:32] [PASSED] 59 VFs
[16:13:32] [PASSED] 60 VFs
[16:13:32] [PASSED] 61 VFs
[16:13:32] [PASSED] 62 VFs
[16:13:32] [PASSED] 63 VFs
[16:13:32] ==================== [PASSED] fair_vram ====================
[16:13:32] ================== [PASSED] pf_gt_config ===================
[16:13:32] ===================== lmtt (1 subtest) =====================
[16:13:32] ======================== test_ops =========================
[16:13:32] [PASSED] 2-level
[16:13:32] [PASSED] multi-level
[16:13:32] ==================== [PASSED] test_ops =====================
[16:13:32] ====================== [PASSED] lmtt =======================
[16:13:32] ================= pf_service (11 subtests) =================
[16:13:32] [PASSED] pf_negotiate_any
[16:13:32] [PASSED] pf_negotiate_base_match
[16:13:32] [PASSED] pf_negotiate_base_newer
[16:13:32] [PASSED] pf_negotiate_base_next
[16:13:32] [SKIPPED] pf_negotiate_base_older
[16:13:32] [PASSED] pf_negotiate_base_prev
[16:13:32] [PASSED] pf_negotiate_latest_match
[16:13:32] [PASSED] pf_negotiate_latest_newer
[16:13:32] [PASSED] pf_negotiate_latest_next
[16:13:32] [SKIPPED] pf_negotiate_latest_older
[16:13:32] [SKIPPED] pf_negotiate_latest_prev
[16:13:32] =================== [PASSED] pf_service ====================
[16:13:32] ================= xe_guc_g2g (2 subtests) ==================
[16:13:32] ============== xe_live_guc_g2g_kunit_default ==============
[16:13:32] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[16:13:32] ============== xe_live_guc_g2g_kunit_allmem ===============
[16:13:32] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[16:13:32] =================== [SKIPPED] xe_guc_g2g ===================
[16:13:32] =================== xe_mocs (2 subtests) ===================
[16:13:32] ================ xe_live_mocs_kernel_kunit ================
[16:13:32] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[16:13:32] ================ xe_live_mocs_reset_kunit =================
[16:13:32] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[16:13:32] ==================== [SKIPPED] xe_mocs =====================
[16:13:32] ================= xe_migrate (2 subtests) ==================
[16:13:32] ================= xe_migrate_sanity_kunit =================
[16:13:32] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[16:13:32] ================== xe_validate_ccs_kunit ==================
[16:13:32] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[16:13:32] =================== [SKIPPED] xe_migrate ===================
[16:13:32] ================== xe_dma_buf (1 subtest) ==================
[16:13:32] ==================== xe_dma_buf_kunit =====================
[16:13:32] ================ [SKIPPED] xe_dma_buf_kunit ================
[16:13:32] =================== [SKIPPED] xe_dma_buf ===================
[16:13:32] ================= xe_bo_shrink (1 subtest) =================
[16:13:32] =================== xe_bo_shrink_kunit ====================
[16:13:32] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[16:13:32] ================== [SKIPPED] xe_bo_shrink ==================
[16:13:32] ==================== xe_bo (2 subtests) ====================
[16:13:32] ================== xe_ccs_migrate_kunit ===================
[16:13:32] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[16:13:32] ==================== xe_bo_evict_kunit ====================
[16:13:32] =============== [SKIPPED] xe_bo_evict_kunit ================
[16:13:32] ===================== [SKIPPED] xe_bo ======================
[16:13:32] ==================== args (13 subtests) ====================
[16:13:32] [PASSED] count_args_test
[16:13:32] [PASSED] call_args_example
[16:13:32] [PASSED] call_args_test
[16:13:32] [PASSED] drop_first_arg_example
[16:13:32] [PASSED] drop_first_arg_test
[16:13:32] [PASSED] first_arg_example
[16:13:32] [PASSED] first_arg_test
[16:13:32] [PASSED] last_arg_example
[16:13:32] [PASSED] last_arg_test
[16:13:32] [PASSED] pick_arg_example
[16:13:32] [PASSED] if_args_example
[16:13:32] [PASSED] if_args_test
[16:13:32] [PASSED] sep_comma_example
[16:13:32] ====================== [PASSED] args =======================
[16:13:32] =================== xe_pci (3 subtests) ====================
[16:13:32] ==================== check_graphics_ip ====================
[16:13:32] [PASSED] 12.00 Xe_LP
[16:13:32] [PASSED] 12.10 Xe_LP+
[16:13:32] [PASSED] 12.55 Xe_HPG
[16:13:32] [PASSED] 12.60 Xe_HPC
[16:13:32] [PASSED] 12.70 Xe_LPG
[16:13:32] [PASSED] 12.71 Xe_LPG
[16:13:32] [PASSED] 12.74 Xe_LPG+
[16:13:32] [PASSED] 20.01 Xe2_HPG
[16:13:32] [PASSED] 20.02 Xe2_HPG
[16:13:32] [PASSED] 20.04 Xe2_LPG
[16:13:32] [PASSED] 30.00 Xe3_LPG
[16:13:32] [PASSED] 30.01 Xe3_LPG
[16:13:32] [PASSED] 30.03 Xe3_LPG
[16:13:32] [PASSED] 30.04 Xe3_LPG
[16:13:32] [PASSED] 30.05 Xe3_LPG
[16:13:32] [PASSED] 35.10 Xe3p_LPG
[16:13:32] [PASSED] 35.11 Xe3p_XPC
[16:13:32] ================ [PASSED] check_graphics_ip ================
[16:13:32] ===================== check_media_ip ======================
[16:13:32] [PASSED] 12.00 Xe_M
[16:13:32] [PASSED] 12.55 Xe_HPM
[16:13:32] [PASSED] 13.00 Xe_LPM+
[16:13:32] [PASSED] 13.01 Xe2_HPM
[16:13:32] [PASSED] 20.00 Xe2_LPM
[16:13:32] [PASSED] 30.00 Xe3_LPM
[16:13:32] [PASSED] 30.02 Xe3_LPM
[16:13:32] [PASSED] 35.00 Xe3p_LPM
[16:13:32] [PASSED] 35.03 Xe3p_HPM
[16:13:32] ================= [PASSED] check_media_ip ==================
[16:13:32] =================== check_platform_desc ===================
[16:13:32] [PASSED] 0x9A60 (TIGERLAKE)
[16:13:32] [PASSED] 0x9A68 (TIGERLAKE)
[16:13:32] [PASSED] 0x9A70 (TIGERLAKE)
[16:13:32] [PASSED] 0x9A40 (TIGERLAKE)
[16:13:32] [PASSED] 0x9A49 (TIGERLAKE)
[16:13:32] [PASSED] 0x9A59 (TIGERLAKE)
[16:13:32] [PASSED] 0x9A78 (TIGERLAKE)
[16:13:32] [PASSED] 0x9AC0 (TIGERLAKE)
[16:13:32] [PASSED] 0x9AC9 (TIGERLAKE)
[16:13:32] [PASSED] 0x9AD9 (TIGERLAKE)
[16:13:32] [PASSED] 0x9AF8 (TIGERLAKE)
[16:13:32] [PASSED] 0x4C80 (ROCKETLAKE)
[16:13:32] [PASSED] 0x4C8A (ROCKETLAKE)
[16:13:32] [PASSED] 0x4C8B (ROCKETLAKE)
[16:13:32] [PASSED] 0x4C8C (ROCKETLAKE)
[16:13:32] [PASSED] 0x4C90 (ROCKETLAKE)
[16:13:32] [PASSED] 0x4C9A (ROCKETLAKE)
[16:13:32] [PASSED] 0x4680 (ALDERLAKE_S)
[16:13:32] [PASSED] 0x4682 (ALDERLAKE_S)
[16:13:32] [PASSED] 0x4688 (ALDERLAKE_S)
[16:13:32] [PASSED] 0x468A (ALDERLAKE_S)
[16:13:32] [PASSED] 0x468B (ALDERLAKE_S)
[16:13:32] [PASSED] 0x4690 (ALDERLAKE_S)
[16:13:32] [PASSED] 0x4692 (ALDERLAKE_S)
[16:13:32] [PASSED] 0x4693 (ALDERLAKE_S)
[16:13:32] [PASSED] 0x46A0 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46A1 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46A2 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46A3 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46A6 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46A8 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46AA (ALDERLAKE_P)
[16:13:32] [PASSED] 0x462A (ALDERLAKE_P)
[16:13:32] [PASSED] 0x4626 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x4628 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46B0 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46B1 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46B2 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46B3 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46C0 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46C1 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46C2 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46C3 (ALDERLAKE_P)
[16:13:32] [PASSED] 0x46D0 (ALDERLAKE_N)
[16:13:32] [PASSED] 0x46D1 (ALDERLAKE_N)
[16:13:32] [PASSED] 0x46D2 (ALDERLAKE_N)
[16:13:32] [PASSED] 0x46D3 (ALDERLAKE_N)
[16:13:32] [PASSED] 0x46D4 (ALDERLAKE_N)
[16:13:32] [PASSED] 0xA721 (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA7A1 (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA7A9 (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA7AC (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA7AD (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA720 (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA7A0 (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA7A8 (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA7AA (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA7AB (ALDERLAKE_P)
[16:13:32] [PASSED] 0xA780 (ALDERLAKE_S)
[16:13:32] [PASSED] 0xA781 (ALDERLAKE_S)
[16:13:32] [PASSED] 0xA782 (ALDERLAKE_S)
[16:13:32] [PASSED] 0xA783 (ALDERLAKE_S)
[16:13:32] [PASSED] 0xA788 (ALDERLAKE_S)
[16:13:32] [PASSED] 0xA789 (ALDERLAKE_S)
[16:13:32] [PASSED] 0xA78A (ALDERLAKE_S)
[16:13:32] [PASSED] 0xA78B (ALDERLAKE_S)
[16:13:32] [PASSED] 0x4905 (DG1)
[16:13:32] [PASSED] 0x4906 (DG1)
[16:13:32] [PASSED] 0x4907 (DG1)
[16:13:32] [PASSED] 0x4908 (DG1)
[16:13:32] [PASSED] 0x4909 (DG1)
[16:13:32] [PASSED] 0x56C0 (DG2)
[16:13:32] [PASSED] 0x56C2 (DG2)
[16:13:32] [PASSED] 0x56C1 (DG2)
[16:13:32] [PASSED] 0x7D51 (METEORLAKE)
[16:13:32] [PASSED] 0x7DD1 (METEORLAKE)
[16:13:32] [PASSED] 0x7D41 (METEORLAKE)
[16:13:32] [PASSED] 0x7D67 (METEORLAKE)
[16:13:32] [PASSED] 0xB640 (METEORLAKE)
[16:13:32] [PASSED] 0x56A0 (DG2)
[16:13:32] [PASSED] 0x56A1 (DG2)
[16:13:32] [PASSED] 0x56A2 (DG2)
[16:13:32] [PASSED] 0x56BE (DG2)
[16:13:32] [PASSED] 0x56BF (DG2)
[16:13:32] [PASSED] 0x5690 (DG2)
[16:13:32] [PASSED] 0x5691 (DG2)
[16:13:32] [PASSED] 0x5692 (DG2)
[16:13:32] [PASSED] 0x56A5 (DG2)
[16:13:32] [PASSED] 0x56A6 (DG2)
[16:13:32] [PASSED] 0x56B0 (DG2)
[16:13:32] [PASSED] 0x56B1 (DG2)
[16:13:32] [PASSED] 0x56BA (DG2)
[16:13:32] [PASSED] 0x56BB (DG2)
[16:13:32] [PASSED] 0x56BC (DG2)
[16:13:32] [PASSED] 0x56BD (DG2)
[16:13:32] [PASSED] 0x5693 (DG2)
[16:13:32] [PASSED] 0x5694 (DG2)
[16:13:32] [PASSED] 0x5695 (DG2)
[16:13:32] [PASSED] 0x56A3 (DG2)
[16:13:32] [PASSED] 0x56A4 (DG2)
[16:13:32] [PASSED] 0x56B2 (DG2)
[16:13:32] [PASSED] 0x56B3 (DG2)
[16:13:32] [PASSED] 0x5696 (DG2)
[16:13:32] [PASSED] 0x5697 (DG2)
[16:13:32] [PASSED] 0xB69 (PVC)
[16:13:32] [PASSED] 0xB6E (PVC)
[16:13:32] [PASSED] 0xBD4 (PVC)
[16:13:32] [PASSED] 0xBD5 (PVC)
[16:13:32] [PASSED] 0xBD6 (PVC)
[16:13:32] [PASSED] 0xBD7 (PVC)
[16:13:32] [PASSED] 0xBD8 (PVC)
[16:13:32] [PASSED] 0xBD9 (PVC)
[16:13:32] [PASSED] 0xBDA (PVC)
[16:13:32] [PASSED] 0xBDB (PVC)
[16:13:32] [PASSED] 0xBE0 (PVC)
[16:13:32] [PASSED] 0xBE1 (PVC)
[16:13:32] [PASSED] 0xBE5 (PVC)
[16:13:32] [PASSED] 0x7D40 (METEORLAKE)
[16:13:32] [PASSED] 0x7D45 (METEORLAKE)
[16:13:32] [PASSED] 0x7D55 (METEORLAKE)
[16:13:32] [PASSED] 0x7D60 (METEORLAKE)
[16:13:32] [PASSED] 0x7DD5 (METEORLAKE)
[16:13:32] [PASSED] 0x6420 (LUNARLAKE)
[16:13:32] [PASSED] 0x64A0 (LUNARLAKE)
[16:13:32] [PASSED] 0x64B0 (LUNARLAKE)
[16:13:32] [PASSED] 0xE202 (BATTLEMAGE)
[16:13:32] [PASSED] 0xE209 (BATTLEMAGE)
[16:13:32] [PASSED] 0xE20B (BATTLEMAGE)
[16:13:32] [PASSED] 0xE20C (BATTLEMAGE)
[16:13:32] [PASSED] 0xE20D (BATTLEMAGE)
[16:13:32] [PASSED] 0xE210 (BATTLEMAGE)
[16:13:32] [PASSED] 0xE211 (BATTLEMAGE)
[16:13:32] [PASSED] 0xE212 (BATTLEMAGE)
[16:13:32] [PASSED] 0xE216 (BATTLEMAGE)
[16:13:32] [PASSED] 0xE220 (BATTLEMAGE)
[16:13:32] [PASSED] 0xE221 (BATTLEMAGE)
[16:13:32] [PASSED] 0xE222 (BATTLEMAGE)
[16:13:32] [PASSED] 0xE223 (BATTLEMAGE)
[16:13:32] [PASSED] 0xB080 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB081 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB082 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB083 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB084 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB085 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB086 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB087 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB08F (PANTHERLAKE)
[16:13:32] [PASSED] 0xB090 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB0A0 (PANTHERLAKE)
[16:13:32] [PASSED] 0xB0B0 (PANTHERLAKE)
[16:13:32] [PASSED] 0xFD80 (PANTHERLAKE)
[16:13:32] [PASSED] 0xFD81 (PANTHERLAKE)
[16:13:32] [PASSED] 0xD740 (NOVALAKE_S)
[16:13:32] [PASSED] 0xD741 (NOVALAKE_S)
[16:13:32] [PASSED] 0xD742 (NOVALAKE_S)
[16:13:32] [PASSED] 0xD743 (NOVALAKE_S)
[16:13:32] [PASSED] 0xD744 (NOVALAKE_S)
[16:13:32] [PASSED] 0xD745 (NOVALAKE_S)
[16:13:32] [PASSED] 0x674C (CRESCENTISLAND)
[16:13:32] [PASSED] 0xD750 (NOVALAKE_P)
[16:13:32] [PASSED] 0xD751 (NOVALAKE_P)
[16:13:32] [PASSED] 0xD752 (NOVALAKE_P)
[16:13:32] [PASSED] 0xD753 (NOVALAKE_P)
[16:13:32] [PASSED] 0xD754 (NOVALAKE_P)
[16:13:32] [PASSED] 0xD755 (NOVALAKE_P)
[16:13:32] [PASSED] 0xD756 (NOVALAKE_P)
[16:13:32] [PASSED] 0xD757 (NOVALAKE_P)
[16:13:32] [PASSED] 0xD75F (NOVALAKE_P)
[16:13:32] =============== [PASSED] check_platform_desc ===============
[16:13:32] ===================== [PASSED] xe_pci ======================
[16:13:32] =================== xe_rtp (2 subtests) ====================
[16:13:32] =============== xe_rtp_process_to_sr_tests ================
[16:13:32] [PASSED] coalesce-same-reg
[16:13:32] [PASSED] no-match-no-add
[16:13:32] [PASSED] match-or
[16:13:32] [PASSED] match-or-xfail
[16:13:32] [PASSED] no-match-no-add-multiple-rules
[16:13:32] [PASSED] two-regs-two-entries
[16:13:32] [PASSED] clr-one-set-other
[16:13:32] [PASSED] set-field
[16:13:32] [PASSED] conflict-duplicate
[16:13:32] [PASSED] conflict-not-disjoint
[16:13:32] [PASSED] conflict-reg-type
[16:13:32] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[16:13:32] ================== xe_rtp_process_tests ===================
[16:13:32] [PASSED] active1
[16:13:32] [PASSED] active2
[16:13:32] [PASSED] active-inactive
[16:13:32] [PASSED] inactive-active
[16:13:32] [PASSED] inactive-1st_or_active-inactive
[16:13:32] [PASSED] inactive-2nd_or_active-inactive
[16:13:32] [PASSED] inactive-last_or_active-inactive
[16:13:32] [PASSED] inactive-no_or_active-inactive
[16:13:32] ============== [PASSED] xe_rtp_process_tests ===============
[16:13:32] ===================== [PASSED] xe_rtp ======================
[16:13:32] ==================== xe_wa (1 subtest) =====================
[16:13:32] ======================== xe_wa_gt =========================
[16:13:32] [PASSED] TIGERLAKE B0
[16:13:32] [PASSED] DG1 A0
[16:13:32] [PASSED] DG1 B0
[16:13:32] [PASSED] ALDERLAKE_S A0
[16:13:32] [PASSED] ALDERLAKE_S B0
[16:13:32] [PASSED] ALDERLAKE_S C0
[16:13:32] [PASSED] ALDERLAKE_S D0
[16:13:32] [PASSED] ALDERLAKE_P A0
[16:13:32] [PASSED] ALDERLAKE_P B0
[16:13:32] [PASSED] ALDERLAKE_P C0
[16:13:32] [PASSED] ALDERLAKE_S RPLS D0
[16:13:32] [PASSED] ALDERLAKE_P RPLU E0
[16:13:32] [PASSED] DG2 G10 C0
[16:13:32] [PASSED] DG2 G11 B1
[16:13:32] [PASSED] DG2 G12 A1
[16:13:32] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[16:13:32] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[16:13:32] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[16:13:32] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[16:13:32] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[16:13:32] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[16:13:32] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[16:13:32] ==================== [PASSED] xe_wa_gt =====================
[16:13:32] ====================== [PASSED] xe_wa ======================
[16:13:32] ============================================================
[16:13:32] Testing complete. Ran 597 tests: passed: 579, skipped: 18
[16:13:32] Elapsed time: 35.301s total, 4.253s configuring, 30.431s building, 0.604s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[16:13:32] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[16:13:33] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[16:13:57] Starting KUnit Kernel (1/1)...
[16:13:57] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[16:13:57] ============ drm_test_pick_cmdline (2 subtests) ============
[16:13:57] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[16:13:57] =============== drm_test_pick_cmdline_named ===============
[16:13:57] [PASSED] NTSC
[16:13:57] [PASSED] NTSC-J
[16:13:57] [PASSED] PAL
[16:13:57] [PASSED] PAL-M
[16:13:57] =========== [PASSED] drm_test_pick_cmdline_named ===========
[16:13:57] ============== [PASSED] drm_test_pick_cmdline ==============
[16:13:57] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[16:13:57] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[16:13:57] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[16:13:57] =========== drm_validate_clone_mode (2 subtests) ===========
[16:13:57] ============== drm_test_check_in_clone_mode ===============
[16:13:57] [PASSED] in_clone_mode
[16:13:57] [PASSED] not_in_clone_mode
[16:13:57] ========== [PASSED] drm_test_check_in_clone_mode ===========
[16:13:57] =============== drm_test_check_valid_clones ===============
[16:13:57] [PASSED] not_in_clone_mode
[16:13:57] [PASSED] valid_clone
[16:13:57] [PASSED] invalid_clone
[16:13:57] =========== [PASSED] drm_test_check_valid_clones ===========
[16:13:57] ============= [PASSED] drm_validate_clone_mode =============
[16:13:57] ============= drm_validate_modeset (1 subtest) =============
[16:13:57] [PASSED] drm_test_check_connector_changed_modeset
[16:13:57] ============== [PASSED] drm_validate_modeset ===============
[16:13:57] ====== drm_test_bridge_get_current_state (2 subtests) ======
[16:13:57] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[16:13:57] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[16:13:57] ======== [PASSED] drm_test_bridge_get_current_state ========
[16:13:57] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[16:13:57] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[16:13:57] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[16:13:57] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[16:13:57] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[16:13:57] ============== drm_bridge_alloc (2 subtests) ===============
[16:13:57] [PASSED] drm_test_drm_bridge_alloc_basic
[16:13:57] [PASSED] drm_test_drm_bridge_alloc_get_put
[16:13:57] ================ [PASSED] drm_bridge_alloc =================
[16:13:57] ============= drm_cmdline_parser (40 subtests) =============
[16:13:57] [PASSED] drm_test_cmdline_force_d_only
[16:13:57] [PASSED] drm_test_cmdline_force_D_only_dvi
[16:13:57] [PASSED] drm_test_cmdline_force_D_only_hdmi
[16:13:57] [PASSED] drm_test_cmdline_force_D_only_not_digital
[16:13:57] [PASSED] drm_test_cmdline_force_e_only
[16:13:57] [PASSED] drm_test_cmdline_res
[16:13:57] [PASSED] drm_test_cmdline_res_vesa
[16:13:57] [PASSED] drm_test_cmdline_res_vesa_rblank
[16:13:57] [PASSED] drm_test_cmdline_res_rblank
[16:13:57] [PASSED] drm_test_cmdline_res_bpp
[16:13:57] [PASSED] drm_test_cmdline_res_refresh
[16:13:57] [PASSED] drm_test_cmdline_res_bpp_refresh
[16:13:57] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[16:13:57] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[16:13:57] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[16:13:57] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[16:13:57] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[16:13:57] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[16:13:57] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[16:13:57] [PASSED] drm_test_cmdline_res_margins_force_on
[16:13:57] [PASSED] drm_test_cmdline_res_vesa_margins
[16:13:57] [PASSED] drm_test_cmdline_name
[16:13:57] [PASSED] drm_test_cmdline_name_bpp
[16:13:57] [PASSED] drm_test_cmdline_name_option
[16:13:57] [PASSED] drm_test_cmdline_name_bpp_option
[16:13:57] [PASSED] drm_test_cmdline_rotate_0
[16:13:57] [PASSED] drm_test_cmdline_rotate_90
[16:13:57] [PASSED] drm_test_cmdline_rotate_180
[16:13:57] [PASSED] drm_test_cmdline_rotate_270
[16:13:57] [PASSED] drm_test_cmdline_hmirror
[16:13:57] [PASSED] drm_test_cmdline_vmirror
[16:13:57] [PASSED] drm_test_cmdline_margin_options
[16:13:57] [PASSED] drm_test_cmdline_multiple_options
[16:13:57] [PASSED] drm_test_cmdline_bpp_extra_and_option
[16:13:57] [PASSED] drm_test_cmdline_extra_and_option
[16:13:57] [PASSED] drm_test_cmdline_freestanding_options
[16:13:57] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[16:13:57] [PASSED] drm_test_cmdline_panel_orientation
[16:13:57] ================ drm_test_cmdline_invalid =================
[16:13:57] [PASSED] margin_only
[16:13:57] [PASSED] interlace_only
[16:13:57] [PASSED] res_missing_x
[16:13:57] [PASSED] res_missing_y
[16:13:57] [PASSED] res_bad_y
[16:13:57] [PASSED] res_missing_y_bpp
[16:13:57] [PASSED] res_bad_bpp
[16:13:57] [PASSED] res_bad_refresh
[16:13:57] [PASSED] res_bpp_refresh_force_on_off
[16:13:57] [PASSED] res_invalid_mode
[16:13:57] [PASSED] res_bpp_wrong_place_mode
[16:13:57] [PASSED] name_bpp_refresh
[16:13:57] [PASSED] name_refresh
[16:13:57] [PASSED] name_refresh_wrong_mode
[16:13:57] [PASSED] name_refresh_invalid_mode
[16:13:57] [PASSED] rotate_multiple
[16:13:57] [PASSED] rotate_invalid_val
[16:13:57] [PASSED] rotate_truncated
[16:13:57] [PASSED] invalid_option
[16:13:57] [PASSED] invalid_tv_option
[16:13:57] [PASSED] truncated_tv_option
[16:13:57] ============ [PASSED] drm_test_cmdline_invalid =============
[16:13:57] =============== drm_test_cmdline_tv_options ===============
[16:13:57] [PASSED] NTSC
[16:13:57] [PASSED] NTSC_443
[16:13:57] [PASSED] NTSC_J
[16:13:57] [PASSED] PAL
[16:13:57] [PASSED] PAL_M
[16:13:57] [PASSED] PAL_N
[16:13:57] [PASSED] SECAM
[16:13:57] [PASSED] MONO_525
[16:13:57] [PASSED] MONO_625
[16:13:57] =========== [PASSED] drm_test_cmdline_tv_options ===========
[16:13:57] =============== [PASSED] drm_cmdline_parser ================
[16:13:57] ========== drmm_connector_hdmi_init (20 subtests) ==========
[16:13:57] [PASSED] drm_test_connector_hdmi_init_valid
[16:13:57] [PASSED] drm_test_connector_hdmi_init_bpc_8
[16:13:57] [PASSED] drm_test_connector_hdmi_init_bpc_10
[16:13:57] [PASSED] drm_test_connector_hdmi_init_bpc_12
[16:13:57] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[16:13:57] [PASSED] drm_test_connector_hdmi_init_bpc_null
[16:13:57] [PASSED] drm_test_connector_hdmi_init_formats_empty
[16:13:57] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[16:13:57] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[16:13:57] [PASSED] supported_formats=0x9 yuv420_allowed=1
[16:13:57] [PASSED] supported_formats=0x9 yuv420_allowed=0
[16:13:57] [PASSED] supported_formats=0x3 yuv420_allowed=1
[16:13:57] [PASSED] supported_formats=0x3 yuv420_allowed=0
[16:13:57] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[16:13:57] [PASSED] drm_test_connector_hdmi_init_null_ddc
[16:13:57] [PASSED] drm_test_connector_hdmi_init_null_product
[16:13:57] [PASSED] drm_test_connector_hdmi_init_null_vendor
[16:13:57] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[16:13:57] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[16:13:57] [PASSED] drm_test_connector_hdmi_init_product_valid
[16:13:57] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[16:13:57] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[16:13:57] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[16:13:57] ========= drm_test_connector_hdmi_init_type_valid =========
[16:13:57] [PASSED] HDMI-A
[16:13:57] [PASSED] HDMI-B
[16:13:57] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[16:13:57] ======== drm_test_connector_hdmi_init_type_invalid ========
[16:13:57] [PASSED] Unknown
[16:13:57] [PASSED] VGA
[16:13:57] [PASSED] DVI-I
[16:13:57] [PASSED] DVI-D
[16:13:57] [PASSED] DVI-A
[16:13:57] [PASSED] Composite
[16:13:57] [PASSED] SVIDEO
[16:13:57] [PASSED] LVDS
[16:13:57] [PASSED] Component
[16:13:57] [PASSED] DIN
[16:13:57] [PASSED] DP
[16:13:57] [PASSED] TV
[16:13:57] [PASSED] eDP
[16:13:57] [PASSED] Virtual
[16:13:57] [PASSED] DSI
[16:13:57] [PASSED] DPI
[16:13:57] [PASSED] Writeback
[16:13:57] [PASSED] SPI
[16:13:57] [PASSED] USB
[16:13:57] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[16:13:57] ============ [PASSED] drmm_connector_hdmi_init =============
[16:13:57] ============= drmm_connector_init (3 subtests) =============
[16:13:57] [PASSED] drm_test_drmm_connector_init
[16:13:57] [PASSED] drm_test_drmm_connector_init_null_ddc
[16:13:57] ========= drm_test_drmm_connector_init_type_valid =========
[16:13:57] [PASSED] Unknown
[16:13:57] [PASSED] VGA
[16:13:57] [PASSED] DVI-I
[16:13:57] [PASSED] DVI-D
[16:13:57] [PASSED] DVI-A
[16:13:57] [PASSED] Composite
[16:13:57] [PASSED] SVIDEO
[16:13:57] [PASSED] LVDS
[16:13:57] [PASSED] Component
[16:13:57] [PASSED] DIN
[16:13:57] [PASSED] DP
[16:13:57] [PASSED] HDMI-A
[16:13:57] [PASSED] HDMI-B
[16:13:57] [PASSED] TV
[16:13:57] [PASSED] eDP
[16:13:57] [PASSED] Virtual
[16:13:57] [PASSED] DSI
[16:13:57] [PASSED] DPI
[16:13:57] [PASSED] Writeback
[16:13:57] [PASSED] SPI
[16:13:57] [PASSED] USB
[16:13:57] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[16:13:57] =============== [PASSED] drmm_connector_init ===============
[16:13:57] ========= drm_connector_dynamic_init (6 subtests) ==========
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_init
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_init_properties
[16:13:57] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[16:13:57] [PASSED] Unknown
[16:13:57] [PASSED] VGA
[16:13:57] [PASSED] DVI-I
[16:13:57] [PASSED] DVI-D
[16:13:57] [PASSED] DVI-A
[16:13:57] [PASSED] Composite
[16:13:57] [PASSED] SVIDEO
[16:13:57] [PASSED] LVDS
[16:13:57] [PASSED] Component
[16:13:57] [PASSED] DIN
[16:13:57] [PASSED] DP
[16:13:57] [PASSED] HDMI-A
[16:13:57] [PASSED] HDMI-B
[16:13:57] [PASSED] TV
[16:13:57] [PASSED] eDP
[16:13:57] [PASSED] Virtual
[16:13:57] [PASSED] DSI
[16:13:57] [PASSED] DPI
[16:13:57] [PASSED] Writeback
[16:13:57] [PASSED] SPI
[16:13:57] [PASSED] USB
[16:13:57] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[16:13:57] ======== drm_test_drm_connector_dynamic_init_name =========
[16:13:57] [PASSED] Unknown
[16:13:57] [PASSED] VGA
[16:13:57] [PASSED] DVI-I
[16:13:57] [PASSED] DVI-D
[16:13:57] [PASSED] DVI-A
[16:13:57] [PASSED] Composite
[16:13:57] [PASSED] SVIDEO
[16:13:57] [PASSED] LVDS
[16:13:57] [PASSED] Component
[16:13:57] [PASSED] DIN
[16:13:57] [PASSED] DP
[16:13:57] [PASSED] HDMI-A
[16:13:57] [PASSED] HDMI-B
[16:13:57] [PASSED] TV
[16:13:57] [PASSED] eDP
[16:13:57] [PASSED] Virtual
[16:13:57] [PASSED] DSI
[16:13:57] [PASSED] DPI
[16:13:57] [PASSED] Writeback
[16:13:57] [PASSED] SPI
[16:13:57] [PASSED] USB
[16:13:57] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[16:13:57] =========== [PASSED] drm_connector_dynamic_init ============
[16:13:57] ==== drm_connector_dynamic_register_early (4 subtests) =====
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[16:13:57] ====== [PASSED] drm_connector_dynamic_register_early =======
[16:13:57] ======= drm_connector_dynamic_register (7 subtests) ========
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[16:13:57] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[16:13:57] ========= [PASSED] drm_connector_dynamic_register ==========
[16:13:57] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[16:13:57] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[16:13:57] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[16:13:57] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[16:13:57] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[16:13:57] ========== drm_test_get_tv_mode_from_name_valid ===========
[16:13:57] [PASSED] NTSC
[16:13:57] [PASSED] NTSC-443
[16:13:57] [PASSED] NTSC-J
[16:13:57] [PASSED] PAL
[16:13:57] [PASSED] PAL-M
[16:13:57] [PASSED] PAL-N
[16:13:57] [PASSED] SECAM
[16:13:57] [PASSED] Mono
[16:13:57] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[16:13:57] [PASSED] drm_test_get_tv_mode_from_name_truncated
[16:13:57] ============ [PASSED] drm_get_tv_mode_from_name ============
[16:13:57] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[16:13:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[16:13:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[16:13:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[16:13:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[16:13:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[16:13:57] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[16:13:57] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[16:13:57] [PASSED] VIC 96
[16:13:57] [PASSED] VIC 97
[16:13:57] [PASSED] VIC 101
[16:13:57] [PASSED] VIC 102
[16:13:57] [PASSED] VIC 106
[16:13:57] [PASSED] VIC 107
[16:13:57] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[16:13:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[16:13:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[16:13:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[16:13:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[16:13:57] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[16:13:57] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[16:13:57] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[16:13:57] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[16:13:57] [PASSED] Automatic
[16:13:57] [PASSED] Full
[16:13:57] [PASSED] Limited 16:235
[16:13:57] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[16:13:57] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[16:13:57] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[16:13:57] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[16:13:57] === drm_test_drm_hdmi_connector_get_output_format_name ====
[16:13:57] [PASSED] RGB
[16:13:57] [PASSED] YUV 4:2:0
[16:13:57] [PASSED] YUV 4:2:2
[16:13:57] [PASSED] YUV 4:4:4
[16:13:57] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[16:13:57] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[16:13:57] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[16:13:57] ============= drm_damage_helper (21 subtests) ==============
[16:13:57] [PASSED] drm_test_damage_iter_no_damage
[16:13:57] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[16:13:57] [PASSED] drm_test_damage_iter_no_damage_src_moved
[16:13:57] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[16:13:57] [PASSED] drm_test_damage_iter_no_damage_not_visible
[16:13:57] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[16:13:57] [PASSED] drm_test_damage_iter_no_damage_no_fb
[16:13:57] [PASSED] drm_test_damage_iter_simple_damage
[16:13:57] [PASSED] drm_test_damage_iter_single_damage
[16:13:57] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[16:13:57] [PASSED] drm_test_damage_iter_single_damage_outside_src
[16:13:57] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[16:13:57] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[16:13:57] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[16:13:57] [PASSED] drm_test_damage_iter_single_damage_src_moved
[16:13:57] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[16:13:57] [PASSED] drm_test_damage_iter_damage
[16:13:57] [PASSED] drm_test_damage_iter_damage_one_intersect
[16:13:57] [PASSED] drm_test_damage_iter_damage_one_outside
[16:13:57] [PASSED] drm_test_damage_iter_damage_src_moved
[16:13:57] [PASSED] drm_test_damage_iter_damage_not_visible
[16:13:57] ================ [PASSED] drm_damage_helper ================
[16:13:57] ============== drm_dp_mst_helper (3 subtests) ==============
[16:13:57] ============== drm_test_dp_mst_calc_pbn_mode ==============
[16:13:57] [PASSED] Clock 154000 BPP 30 DSC disabled
[16:13:57] [PASSED] Clock 234000 BPP 30 DSC disabled
[16:13:57] [PASSED] Clock 297000 BPP 24 DSC disabled
[16:13:57] [PASSED] Clock 332880 BPP 24 DSC enabled
[16:13:57] [PASSED] Clock 324540 BPP 24 DSC enabled
[16:13:57] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[16:13:57] ============== drm_test_dp_mst_calc_pbn_div ===============
[16:13:57] [PASSED] Link rate 2000000 lane count 4
[16:13:57] [PASSED] Link rate 2000000 lane count 2
[16:13:57] [PASSED] Link rate 2000000 lane count 1
[16:13:57] [PASSED] Link rate 1350000 lane count 4
[16:13:57] [PASSED] Link rate 1350000 lane count 2
[16:13:57] [PASSED] Link rate 1350000 lane count 1
[16:13:57] [PASSED] Link rate 1000000 lane count 4
[16:13:57] [PASSED] Link rate 1000000 lane count 2
[16:13:57] [PASSED] Link rate 1000000 lane count 1
[16:13:57] [PASSED] Link rate 810000 lane count 4
[16:13:57] [PASSED] Link rate 810000 lane count 2
[16:13:57] [PASSED] Link rate 810000 lane count 1
[16:13:57] [PASSED] Link rate 540000 lane count 4
[16:13:57] [PASSED] Link rate 540000 lane count 2
[16:13:57] [PASSED] Link rate 540000 lane count 1
[16:13:57] [PASSED] Link rate 270000 lane count 4
[16:13:57] [PASSED] Link rate 270000 lane count 2
[16:13:57] [PASSED] Link rate 270000 lane count 1
[16:13:57] [PASSED] Link rate 162000 lane count 4
[16:13:57] [PASSED] Link rate 162000 lane count 2
[16:13:57] [PASSED] Link rate 162000 lane count 1
[16:13:57] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[16:13:57] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[16:13:57] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[16:13:57] [PASSED] DP_POWER_UP_PHY with port number
[16:13:57] [PASSED] DP_POWER_DOWN_PHY with port number
[16:13:57] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[16:13:57] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[16:13:57] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[16:13:57] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[16:13:57] [PASSED] DP_QUERY_PAYLOAD with port number
[16:13:57] [PASSED] DP_QUERY_PAYLOAD with VCPI
[16:13:57] [PASSED] DP_REMOTE_DPCD_READ with port number
[16:13:57] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[16:13:57] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[16:13:57] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[16:13:57] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[16:13:57] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[16:13:57] [PASSED] DP_REMOTE_I2C_READ with port number
[16:13:57] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[16:13:57] [PASSED] DP_REMOTE_I2C_READ with transactions array
[16:13:57] [PASSED] DP_REMOTE_I2C_WRITE with port number
[16:13:57] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[16:13:57] [PASSED] DP_REMOTE_I2C_WRITE with data array
[16:13:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[16:13:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[16:13:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[16:13:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[16:13:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[16:13:57] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[16:13:57] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[16:13:57] ================ [PASSED] drm_dp_mst_helper ================
[16:13:57] ================== drm_exec (7 subtests) ===================
[16:13:57] [PASSED] sanitycheck
[16:13:57] [PASSED] test_lock
[16:13:57] [PASSED] test_lock_unlock
[16:13:57] [PASSED] test_duplicates
[16:13:57] [PASSED] test_prepare
[16:13:57] [PASSED] test_prepare_array
[16:13:57] [PASSED] test_multiple_loops
[16:13:57] ==================== [PASSED] drm_exec =====================
[16:13:57] =========== drm_format_helper_test (17 subtests) ===========
[16:13:57] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[16:13:57] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[16:13:57] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[16:13:57] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[16:13:57] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[16:13:57] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[16:13:57] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[16:13:57] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[16:13:57] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[16:13:57] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[16:13:57] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[16:13:57] ============== drm_test_fb_xrgb8888_to_mono ===============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[16:13:57] ==================== drm_test_fb_swab =====================
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ================ [PASSED] drm_test_fb_swab =================
[16:13:57] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[16:13:57] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[16:13:57] [PASSED] single_pixel_source_buffer
[16:13:57] [PASSED] single_pixel_clip_rectangle
[16:13:57] [PASSED] well_known_colors
[16:13:57] [PASSED] destination_pitch
[16:13:57] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[16:13:57] ================= drm_test_fb_clip_offset =================
[16:13:57] [PASSED] pass through
[16:13:57] [PASSED] horizontal offset
[16:13:57] [PASSED] vertical offset
[16:13:57] [PASSED] horizontal and vertical offset
[16:13:57] [PASSED] horizontal offset (custom pitch)
[16:13:57] [PASSED] vertical offset (custom pitch)
[16:13:57] [PASSED] horizontal and vertical offset (custom pitch)
[16:13:57] ============= [PASSED] drm_test_fb_clip_offset =============
[16:13:57] =================== drm_test_fb_memcpy ====================
[16:13:57] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[16:13:57] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[16:13:57] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[16:13:57] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[16:13:57] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[16:13:57] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[16:13:57] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[16:13:57] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[16:13:57] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[16:13:57] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[16:13:57] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[16:13:57] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[16:13:57] =============== [PASSED] drm_test_fb_memcpy ================
[16:13:57] ============= [PASSED] drm_format_helper_test ==============
[16:13:57] ================= drm_format (18 subtests) =================
[16:13:57] [PASSED] drm_test_format_block_width_invalid
[16:13:57] [PASSED] drm_test_format_block_width_one_plane
[16:13:57] [PASSED] drm_test_format_block_width_two_plane
[16:13:57] [PASSED] drm_test_format_block_width_three_plane
[16:13:57] [PASSED] drm_test_format_block_width_tiled
[16:13:57] [PASSED] drm_test_format_block_height_invalid
[16:13:57] [PASSED] drm_test_format_block_height_one_plane
[16:13:57] [PASSED] drm_test_format_block_height_two_plane
[16:13:57] [PASSED] drm_test_format_block_height_three_plane
[16:13:57] [PASSED] drm_test_format_block_height_tiled
[16:13:57] [PASSED] drm_test_format_min_pitch_invalid
[16:13:57] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[16:13:57] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[16:13:57] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[16:13:57] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[16:13:57] [PASSED] drm_test_format_min_pitch_two_plane
[16:13:57] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[16:13:57] [PASSED] drm_test_format_min_pitch_tiled
[16:13:57] =================== [PASSED] drm_format ====================
[16:13:57] ============== drm_framebuffer (10 subtests) ===============
[16:13:57] ========== drm_test_framebuffer_check_src_coords ==========
[16:13:57] [PASSED] Success: source fits into fb
[16:13:57] [PASSED] Fail: overflowing fb with x-axis coordinate
[16:13:57] [PASSED] Fail: overflowing fb with y-axis coordinate
[16:13:57] [PASSED] Fail: overflowing fb with source width
[16:13:57] [PASSED] Fail: overflowing fb with source height
[16:13:57] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[16:13:57] [PASSED] drm_test_framebuffer_cleanup
[16:13:57] =============== drm_test_framebuffer_create ===============
[16:13:57] [PASSED] ABGR8888 normal sizes
[16:13:57] [PASSED] ABGR8888 max sizes
[16:13:57] [PASSED] ABGR8888 pitch greater than min required
[16:13:57] [PASSED] ABGR8888 pitch less than min required
[16:13:57] [PASSED] ABGR8888 Invalid width
[16:13:57] [PASSED] ABGR8888 Invalid buffer handle
[16:13:57] [PASSED] No pixel format
[16:13:57] [PASSED] ABGR8888 Width 0
[16:13:57] [PASSED] ABGR8888 Height 0
[16:13:57] [PASSED] ABGR8888 Out of bound height * pitch combination
[16:13:57] [PASSED] ABGR8888 Large buffer offset
[16:13:57] [PASSED] ABGR8888 Buffer offset for inexistent plane
[16:13:57] [PASSED] ABGR8888 Invalid flag
[16:13:57] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[16:13:57] [PASSED] ABGR8888 Valid buffer modifier
[16:13:57] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[16:13:57] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[16:13:57] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[16:13:57] [PASSED] NV12 Normal sizes
[16:13:57] [PASSED] NV12 Max sizes
[16:13:57] [PASSED] NV12 Invalid pitch
[16:13:57] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[16:13:57] [PASSED] NV12 different modifier per-plane
[16:13:57] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[16:13:57] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[16:13:57] [PASSED] NV12 Modifier for inexistent plane
[16:13:57] [PASSED] NV12 Handle for inexistent plane
[16:13:57] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[16:13:57] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[16:13:57] [PASSED] YVU420 Normal sizes
[16:13:57] [PASSED] YVU420 Max sizes
[16:13:57] [PASSED] YVU420 Invalid pitch
[16:13:57] [PASSED] YVU420 Different pitches
[16:13:57] [PASSED] YVU420 Different buffer offsets/pitches
[16:13:57] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[16:13:57] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[16:13:57] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[16:13:57] [PASSED] YVU420 Valid modifier
[16:13:57] [PASSED] YVU420 Different modifiers per plane
[16:13:57] [PASSED] YVU420 Modifier for inexistent plane
[16:13:57] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[16:13:57] [PASSED] X0L2 Normal sizes
[16:13:57] [PASSED] X0L2 Max sizes
[16:13:57] [PASSED] X0L2 Invalid pitch
[16:13:57] [PASSED] X0L2 Pitch greater than minimum required
[16:13:57] [PASSED] X0L2 Handle for inexistent plane
[16:13:57] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[16:13:57] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[16:13:57] [PASSED] X0L2 Valid modifier
[16:13:57] [PASSED] X0L2 Modifier for inexistent plane
[16:13:57] =========== [PASSED] drm_test_framebuffer_create ===========
[16:13:57] [PASSED] drm_test_framebuffer_free
[16:13:57] [PASSED] drm_test_framebuffer_init
[16:13:57] [PASSED] drm_test_framebuffer_init_bad_format
[16:13:57] [PASSED] drm_test_framebuffer_init_dev_mismatch
[16:13:57] [PASSED] drm_test_framebuffer_lookup
[16:13:57] [PASSED] drm_test_framebuffer_lookup_inexistent
[16:13:57] [PASSED] drm_test_framebuffer_modifiers_not_supported
[16:13:57] ================= [PASSED] drm_framebuffer =================
[16:13:57] ================ drm_gem_shmem (8 subtests) ================
[16:13:57] [PASSED] drm_gem_shmem_test_obj_create
[16:13:57] [PASSED] drm_gem_shmem_test_obj_create_private
[16:13:57] [PASSED] drm_gem_shmem_test_pin_pages
[16:13:57] [PASSED] drm_gem_shmem_test_vmap
[16:13:57] [PASSED] drm_gem_shmem_test_get_sg_table
[16:13:57] [PASSED] drm_gem_shmem_test_get_pages_sgt
[16:13:57] [PASSED] drm_gem_shmem_test_madvise
[16:13:57] [PASSED] drm_gem_shmem_test_purge
[16:13:57] ================== [PASSED] drm_gem_shmem ==================
[16:13:57] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[16:13:57] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[16:13:57] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[16:13:57] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[16:13:57] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[16:13:57] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[16:13:57] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[16:13:57] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[16:13:57] [PASSED] Automatic
[16:13:57] [PASSED] Full
[16:13:57] [PASSED] Limited 16:235
[16:13:57] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[16:13:57] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[16:13:57] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[16:13:57] [PASSED] drm_test_check_disable_connector
[16:13:57] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[16:13:57] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[16:13:57] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[16:13:57] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[16:13:57] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[16:13:57] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[16:13:57] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[16:13:57] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[16:13:57] [PASSED] drm_test_check_output_bpc_dvi
[16:13:57] [PASSED] drm_test_check_output_bpc_format_vic_1
[16:13:57] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[16:13:57] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[16:13:57] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[16:13:57] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[16:13:57] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[16:13:57] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[16:13:57] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[16:13:57] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[16:13:57] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[16:13:57] [PASSED] drm_test_check_broadcast_rgb_value
[16:13:57] [PASSED] drm_test_check_bpc_8_value
[16:13:57] [PASSED] drm_test_check_bpc_10_value
[16:13:57] [PASSED] drm_test_check_bpc_12_value
[16:13:57] [PASSED] drm_test_check_format_value
[16:13:57] [PASSED] drm_test_check_tmds_char_value
[16:13:57] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[16:13:57] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[16:13:57] [PASSED] drm_test_check_mode_valid
[16:13:57] [PASSED] drm_test_check_mode_valid_reject
[16:13:57] [PASSED] drm_test_check_mode_valid_reject_rate
[16:13:57] [PASSED] drm_test_check_mode_valid_reject_max_clock
[16:13:57] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[16:13:57] = drm_atomic_helper_connector_hdmi_infoframes (5 subtests) =
[16:13:57] [PASSED] drm_test_check_infoframes
[16:13:57] [PASSED] drm_test_check_reject_avi_infoframe
[16:13:57] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_8
[16:13:57] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_10
[16:13:57] [PASSED] drm_test_check_reject_audio_infoframe
[16:13:57] === [PASSED] drm_atomic_helper_connector_hdmi_infoframes ===
[16:13:57] ================= drm_managed (2 subtests) =================
[16:13:57] [PASSED] drm_test_managed_release_action
[16:13:57] [PASSED] drm_test_managed_run_action
[16:13:57] =================== [PASSED] drm_managed ===================
[16:13:57] =================== drm_mm (6 subtests) ====================
[16:13:57] [PASSED] drm_test_mm_init
[16:13:57] [PASSED] drm_test_mm_debug
[16:13:57] [PASSED] drm_test_mm_align32
[16:13:57] [PASSED] drm_test_mm_align64
[16:13:57] [PASSED] drm_test_mm_lowest
[16:13:57] [PASSED] drm_test_mm_highest
[16:13:57] ===================== [PASSED] drm_mm ======================
[16:13:57] ============= drm_modes_analog_tv (5 subtests) =============
[16:13:57] [PASSED] drm_test_modes_analog_tv_mono_576i
[16:13:57] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[16:13:57] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[16:13:57] [PASSED] drm_test_modes_analog_tv_pal_576i
[16:13:57] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[16:13:57] =============== [PASSED] drm_modes_analog_tv ===============
[16:13:57] ============== drm_plane_helper (2 subtests) ===============
[16:13:57] =============== drm_test_check_plane_state ================
[16:13:57] [PASSED] clipping_simple
[16:13:57] [PASSED] clipping_rotate_reflect
[16:13:57] [PASSED] positioning_simple
[16:13:57] [PASSED] upscaling
[16:13:57] [PASSED] downscaling
[16:13:57] [PASSED] rounding1
[16:13:57] [PASSED] rounding2
[16:13:57] [PASSED] rounding3
[16:13:57] [PASSED] rounding4
[16:13:57] =========== [PASSED] drm_test_check_plane_state ============
[16:13:57] =========== drm_test_check_invalid_plane_state ============
[16:13:57] [PASSED] positioning_invalid
[16:13:57] [PASSED] upscaling_invalid
[16:13:57] [PASSED] downscaling_invalid
[16:13:57] ======= [PASSED] drm_test_check_invalid_plane_state ========
[16:13:57] ================ [PASSED] drm_plane_helper =================
[16:13:57] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[16:13:57] ====== drm_test_connector_helper_tv_get_modes_check =======
[16:13:57] [PASSED] None
[16:13:57] [PASSED] PAL
[16:13:57] [PASSED] NTSC
[16:13:57] [PASSED] Both, NTSC Default
[16:13:57] [PASSED] Both, PAL Default
[16:13:57] [PASSED] Both, NTSC Default, with PAL on command-line
[16:13:57] [PASSED] Both, PAL Default, with NTSC on command-line
[16:13:57] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[16:13:57] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[16:13:57] ================== drm_rect (9 subtests) ===================
[16:13:57] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[16:13:57] [PASSED] drm_test_rect_clip_scaled_not_clipped
[16:13:57] [PASSED] drm_test_rect_clip_scaled_clipped
[16:13:57] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[16:13:57] ================= drm_test_rect_intersect =================
[16:13:57] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[16:13:57] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[16:13:57] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[16:13:57] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[16:13:57] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[16:13:57] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[16:13:57] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[16:13:57] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[16:13:57] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[16:13:57] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[16:13:57] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[16:13:57] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[16:13:57] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[16:13:57] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[16:13:57] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[16:13:57] ============= [PASSED] drm_test_rect_intersect =============
[16:13:57] ================ drm_test_rect_calc_hscale ================
[16:13:57] [PASSED] normal use
[16:13:57] [PASSED] out of max range
[16:13:57] [PASSED] out of min range
[16:13:57] [PASSED] zero dst
[16:13:57] [PASSED] negative src
[16:13:57] [PASSED] negative dst
[16:13:57] ============ [PASSED] drm_test_rect_calc_hscale ============
[16:13:57] ================ drm_test_rect_calc_vscale ================
[16:13:57] [PASSED] normal use
[16:13:57] [PASSED] out of max range
[16:13:57] [PASSED] out of min range
[16:13:57] [PASSED] zero dst
[16:13:57] [PASSED] negative src
[16:13:57] [PASSED] negative dst
[16:13:57] ============ [PASSED] drm_test_rect_calc_vscale ============
[16:13:57] ================== drm_test_rect_rotate ===================
[16:13:57] [PASSED] reflect-x
[16:13:57] [PASSED] reflect-y
[16:13:57] [PASSED] rotate-0
[16:13:57] [PASSED] rotate-90
[16:13:57] [PASSED] rotate-180
[16:13:57] [PASSED] rotate-270
[16:13:57] ============== [PASSED] drm_test_rect_rotate ===============
[16:13:57] ================ drm_test_rect_rotate_inv =================
[16:13:57] [PASSED] reflect-x
[16:13:57] [PASSED] reflect-y
[16:13:57] [PASSED] rotate-0
[16:13:57] [PASSED] rotate-90
[16:13:57] [PASSED] rotate-180
[16:13:57] [PASSED] rotate-270
[16:13:57] ============ [PASSED] drm_test_rect_rotate_inv =============
[16:13:57] ==================== [PASSED] drm_rect =====================
[16:13:57] ============ drm_sysfb_modeset_test (1 subtest) ============
[16:13:57] ============ drm_test_sysfb_build_fourcc_list =============
[16:13:57] [PASSED] no native formats
[16:13:57] [PASSED] XRGB8888 as native format
[16:13:57] [PASSED] remove duplicates
[16:13:57] [PASSED] convert alpha formats
[16:13:57] [PASSED] random formats
[16:13:57] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[16:13:57] ============= [PASSED] drm_sysfb_modeset_test ==============
[16:13:57] ================== drm_fixp (2 subtests) ===================
[16:13:57] [PASSED] drm_test_int2fixp
[16:13:57] [PASSED] drm_test_sm2fixp
[16:13:57] ==================== [PASSED] drm_fixp =====================
[16:13:57] ============================================================
[16:13:57] Testing complete. Ran 621 tests: passed: 621
[16:13:57] Elapsed time: 25.774s total, 1.726s configuring, 23.866s building, 0.180s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[16:13:58] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[16:13:59] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[16:14:09] Starting KUnit Kernel (1/1)...
[16:14:09] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[16:14:09] ================= ttm_device (5 subtests) ==================
[16:14:09] [PASSED] ttm_device_init_basic
[16:14:09] [PASSED] ttm_device_init_multiple
[16:14:09] [PASSED] ttm_device_fini_basic
[16:14:09] [PASSED] ttm_device_init_no_vma_man
[16:14:09] ================== ttm_device_init_pools ==================
[16:14:09] [PASSED] No DMA allocations, no DMA32 required
[16:14:09] [PASSED] DMA allocations, DMA32 required
[16:14:09] [PASSED] No DMA allocations, DMA32 required
[16:14:09] [PASSED] DMA allocations, no DMA32 required
[16:14:09] ============== [PASSED] ttm_device_init_pools ==============
[16:14:09] =================== [PASSED] ttm_device ====================
[16:14:09] ================== ttm_pool (8 subtests) ===================
[16:14:09] ================== ttm_pool_alloc_basic ===================
[16:14:09] [PASSED] One page
[16:14:09] [PASSED] More than one page
[16:14:09] [PASSED] Above the allocation limit
[16:14:09] [PASSED] One page, with coherent DMA mappings enabled
[16:14:09] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[16:14:09] ============== [PASSED] ttm_pool_alloc_basic ===============
[16:14:09] ============== ttm_pool_alloc_basic_dma_addr ==============
[16:14:09] [PASSED] One page
[16:14:09] [PASSED] More than one page
[16:14:09] [PASSED] Above the allocation limit
[16:14:09] [PASSED] One page, with coherent DMA mappings enabled
[16:14:09] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[16:14:09] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[16:14:09] [PASSED] ttm_pool_alloc_order_caching_match
[16:14:09] [PASSED] ttm_pool_alloc_caching_mismatch
[16:14:09] [PASSED] ttm_pool_alloc_order_mismatch
[16:14:09] [PASSED] ttm_pool_free_dma_alloc
[16:14:09] [PASSED] ttm_pool_free_no_dma_alloc
[16:14:09] [PASSED] ttm_pool_fini_basic
[16:14:09] ==================== [PASSED] ttm_pool =====================
[16:14:09] ================ ttm_resource (8 subtests) =================
[16:14:09] ================= ttm_resource_init_basic =================
[16:14:09] [PASSED] Init resource in TTM_PL_SYSTEM
[16:14:09] [PASSED] Init resource in TTM_PL_VRAM
[16:14:09] [PASSED] Init resource in a private placement
[16:14:09] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[16:14:09] ============= [PASSED] ttm_resource_init_basic =============
[16:14:09] [PASSED] ttm_resource_init_pinned
[16:14:09] [PASSED] ttm_resource_fini_basic
[16:14:09] [PASSED] ttm_resource_manager_init_basic
[16:14:09] [PASSED] ttm_resource_manager_usage_basic
[16:14:09] [PASSED] ttm_resource_manager_set_used_basic
[16:14:09] [PASSED] ttm_sys_man_alloc_basic
[16:14:09] [PASSED] ttm_sys_man_free_basic
[16:14:09] ================== [PASSED] ttm_resource ===================
[16:14:09] =================== ttm_tt (15 subtests) ===================
[16:14:09] ==================== ttm_tt_init_basic ====================
[16:14:09] [PASSED] Page-aligned size
[16:14:09] [PASSED] Extra pages requested
[16:14:09] ================ [PASSED] ttm_tt_init_basic ================
[16:14:09] [PASSED] ttm_tt_init_misaligned
[16:14:09] [PASSED] ttm_tt_fini_basic
[16:14:09] [PASSED] ttm_tt_fini_sg
[16:14:09] [PASSED] ttm_tt_fini_shmem
[16:14:09] [PASSED] ttm_tt_create_basic
[16:14:09] [PASSED] ttm_tt_create_invalid_bo_type
[16:14:09] [PASSED] ttm_tt_create_ttm_exists
[16:14:09] [PASSED] ttm_tt_create_failed
[16:14:09] [PASSED] ttm_tt_destroy_basic
[16:14:09] [PASSED] ttm_tt_populate_null_ttm
[16:14:09] [PASSED] ttm_tt_populate_populated_ttm
[16:14:09] [PASSED] ttm_tt_unpopulate_basic
[16:14:09] [PASSED] ttm_tt_unpopulate_empty_ttm
[16:14:09] [PASSED] ttm_tt_swapin_basic
[16:14:09] ===================== [PASSED] ttm_tt ======================
[16:14:09] =================== ttm_bo (14 subtests) ===================
[16:14:09] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[16:14:09] [PASSED] Cannot be interrupted and sleeps
[16:14:09] [PASSED] Cannot be interrupted, locks straight away
[16:14:09] [PASSED] Can be interrupted, sleeps
[16:14:09] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[16:14:09] [PASSED] ttm_bo_reserve_locked_no_sleep
[16:14:09] [PASSED] ttm_bo_reserve_no_wait_ticket
[16:14:09] [PASSED] ttm_bo_reserve_double_resv
[16:14:09] [PASSED] ttm_bo_reserve_interrupted
[16:14:09] [PASSED] ttm_bo_reserve_deadlock
[16:14:09] [PASSED] ttm_bo_unreserve_basic
[16:14:09] [PASSED] ttm_bo_unreserve_pinned
[16:14:09] [PASSED] ttm_bo_unreserve_bulk
[16:14:09] [PASSED] ttm_bo_fini_basic
[16:14:09] [PASSED] ttm_bo_fini_shared_resv
[16:14:09] [PASSED] ttm_bo_pin_basic
[16:14:09] [PASSED] ttm_bo_pin_unpin_resource
[16:14:09] [PASSED] ttm_bo_multiple_pin_one_unpin
[16:14:09] ===================== [PASSED] ttm_bo ======================
[16:14:09] ============== ttm_bo_validate (21 subtests) ===============
[16:14:09] ============== ttm_bo_init_reserved_sys_man ===============
[16:14:09] [PASSED] Buffer object for userspace
[16:14:09] [PASSED] Kernel buffer object
[16:14:09] [PASSED] Shared buffer object
[16:14:09] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[16:14:09] ============== ttm_bo_init_reserved_mock_man ==============
[16:14:09] [PASSED] Buffer object for userspace
[16:14:09] [PASSED] Kernel buffer object
[16:14:09] [PASSED] Shared buffer object
[16:14:09] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[16:14:09] [PASSED] ttm_bo_init_reserved_resv
[16:14:09] ================== ttm_bo_validate_basic ==================
[16:14:09] [PASSED] Buffer object for userspace
[16:14:09] [PASSED] Kernel buffer object
[16:14:09] [PASSED] Shared buffer object
[16:14:09] ============== [PASSED] ttm_bo_validate_basic ==============
[16:14:09] [PASSED] ttm_bo_validate_invalid_placement
[16:14:09] ============= ttm_bo_validate_same_placement ==============
[16:14:09] [PASSED] System manager
[16:14:09] [PASSED] VRAM manager
[16:14:09] ========= [PASSED] ttm_bo_validate_same_placement ==========
[16:14:09] [PASSED] ttm_bo_validate_failed_alloc
[16:14:09] [PASSED] ttm_bo_validate_pinned
[16:14:09] [PASSED] ttm_bo_validate_busy_placement
[16:14:09] ================ ttm_bo_validate_multihop =================
[16:14:09] [PASSED] Buffer object for userspace
[16:14:09] [PASSED] Kernel buffer object
[16:14:09] [PASSED] Shared buffer object
[16:14:09] ============ [PASSED] ttm_bo_validate_multihop =============
[16:14:09] ========== ttm_bo_validate_no_placement_signaled ==========
[16:14:09] [PASSED] Buffer object in system domain, no page vector
[16:14:09] [PASSED] Buffer object in system domain with an existing page vector
[16:14:09] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[16:14:09] ======== ttm_bo_validate_no_placement_not_signaled ========
[16:14:09] [PASSED] Buffer object for userspace
[16:14:09] [PASSED] Kernel buffer object
[16:14:09] [PASSED] Shared buffer object
[16:14:09] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[16:14:09] [PASSED] ttm_bo_validate_move_fence_signaled
[16:14:09] ========= ttm_bo_validate_move_fence_not_signaled =========
[16:14:09] [PASSED] Waits for GPU
[16:14:09] [PASSED] Tries to lock straight away
[16:14:09] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[16:14:09] [PASSED] ttm_bo_validate_happy_evict
[16:14:09] [PASSED] ttm_bo_validate_all_pinned_evict
[16:14:09] [PASSED] ttm_bo_validate_allowed_only_evict
[16:14:09] [PASSED] ttm_bo_validate_deleted_evict
[16:14:09] [PASSED] ttm_bo_validate_busy_domain_evict
[16:14:09] [PASSED] ttm_bo_validate_evict_gutting
[16:14:09] [PASSED] ttm_bo_validate_recrusive_evict
[16:14:09] ================= [PASSED] ttm_bo_validate =================
[16:14:09] ============================================================
[16:14:09] Testing complete. Ran 101 tests: passed: 101
[16:14:09] Elapsed time: 11.388s total, 1.709s configuring, 9.463s building, 0.177s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✓ Xe.CI.BAT: success for drm/xe/madvise: Add support for purgeable buffer objects (rev7)
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (13 preceding siblings ...)
2026-03-03 16:14 ` ✓ CI.KUnit: success " Patchwork
@ 2026-03-03 16:50 ` Patchwork
2026-03-03 22:05 ` [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Souza, Jose
2026-03-04 4:01 ` ✗ Xe.CI.FULL: failure for drm/xe/madvise: Add support for purgeable buffer objects (rev7) Patchwork
16 siblings, 0 replies; 39+ messages in thread
From: Patchwork @ 2026-03-03 16:50 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 977 bytes --]
== Series Details ==
Series: drm/xe/madvise: Add support for purgeable buffer objects (rev7)
URL : https://patchwork.freedesktop.org/series/156651/
State : success
== Summary ==
CI Bug Log - changes from xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b_BAT -> xe-pw-156651v7_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (14 -> 14)
------------------------------
No changes in participating hosts
Changes
-------
No changes found
Build changes
-------------
* Linux: xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b -> xe-pw-156651v7
IGT_8777: a50285a68dbef0fe11140adef4016a756f57b324 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b: d995d708ed7b2e37fa25d41783f747e43bb91c5b
xe-pw-156651v7: 156651v7
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/index.html
* Re: [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (14 preceding siblings ...)
2026-03-03 16:50 ` ✓ Xe.CI.BAT: " Patchwork
@ 2026-03-03 22:05 ` Souza, Jose
2026-03-03 22:49 ` Matthew Brost
2026-03-04 4:01 ` ✗ Xe.CI.FULL: failure for drm/xe/madvise: Add support for purgeable buffer objects (rev7) Patchwork
16 siblings, 1 reply; 39+ messages in thread
From: Souza, Jose @ 2026-03-03 22:05 UTC (permalink / raw)
To: intel-xe@lists.freedesktop.org, Yadav, Arvind
Cc: Brost, Matthew, Mishra, Pallavi, Ghimiray, Himal Prasad,
Vivi, Rodrigo, thomas.hellstrom@linux.intel.com
On Tue, 2026-03-03 at 20:49 +0530, Arvind Yadav wrote:
> This patch series introduces comprehensive support for purgeable buffer
> objects in the Xe driver, enabling userspace to provide memory usage
> hints for better memory management under system pressure.
>
> Overview:
>
> Purgeable memory allows applications to mark buffer objects as "not
> currently needed" (DONTNEED), making them eligible for kernel
> reclamation during memory pressure. This helps prevent OOM conditions
> and enables more efficient GPU memory utilization for workloads with
> temporary or regeneratable data (caches, intermediate results, decoded
> frames, etc.).
>
> Purgeable BO Lifecycle:
> 1. WILLNEED (default): BO actively needed, kernel preserves backing store
> 2. DONTNEED (user hint): BO contents discardable, eligible for purging
> 3. PURGED (kernel action): Backing store reclaimed during memory pressure
>
> Key Design Principles:
> - i915 compatibility: "Once purged, always purged" semantics - purged
>   BOs remain permanently invalid and must be destroyed/recreated
> - Per-VMA state tracking: Each VMA tracks its own purgeable state; the
>   BO is only marked DONTNEED when ALL VMAs across ALL VMs agree (Thomas
>   Hellström)
> - Safety first: Imported/exported dma-bufs are blocked from the
>   purgeable state - no visibility into external device usage (Matt
>   Roper)
> - Multiple protection layers: Validation in madvise, VM bind, mmap, CPU
>   and GPU fault handlers. GPU page faults on DONTNEED BOs are rejected
>   in xe_pagefault_begin() to preserve the GPU PTE invalidation done at
>   madvise time; without this the rebind path would re-map real pages
>   and undo the PTE zap, preventing the shrinker from ever reclaiming
>   the BO.
> - Correct GPU PTE zapping: madvise_purgeable() explicitly sets
>   skip_invalidation per VMA (false for DONTNEED; true for WILLNEED,
>   purged and dmabuf-shared BOs) so DONTNEED always triggers a GPU PTE
>   zap regardless of prior madvise state.
> - Scratch PTE support: Fault-mode VMs use scratch pages for safe zero
>   reads on purged BO access.
> - TTM shrinker integration: Encapsulated helpers manage the
>   xe_ttm_tt->purgeable flag and shrinker page accounting (shrinkable
>   vs purgeable buckets)
I get Engine memory CAT errors when using this feature:

[  240.301213] xe 0000:00:02.0: [drm] Tile0: GT0: Fault response: Unsuccessful -EINVAL
[  240.301301] xe 0000:00:02.0: [drm] Tile0: GT0: Engine memory CAT error [18]: class=rcs, logical_mask: 0x1, guc_id=17
[  240.302871] xe 0000:00:02.0: [drm] Tile0: GT0: Engine reset: engine_class=rcs, logical_mask: 0x1, guc_id=17, state=0x249
[  240.302885] xe 0000:00:02.0: [drm] Tile0: GT0: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=17, flags=0x0 in arb_map_buffer_ [3374]
[  240.302892] xe 0000:00:02.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken

Mesa creates VMs with DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE; you probably don't have an IGT test covering this scenario.

@cc Rodrigo

Another issue, not related to your patches: drm_xe_madvise only works with non-canonical addresses, while some time ago it was agreed that all user-visible addresses would be in canonical format. Not sure if we can do anything at this point, but letting you know.
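
For reference, the two forms differ only in the upper address bits: a
canonical address sign-extends the top VA bit into bits 63 and down,
while the non-canonical form leaves them zero. A minimal userspace
sketch of the conversion (illustrative only; the 48-bit VA width is an
assumption here, the real width is hardware-dependent):

```c
#include <stdint.h>

#define VA_BITS 48 /* assumption for illustration; actual VA width varies */

/* Canonical form: bits 63:VA_BITS replicate bit VA_BITS-1 (sign extension). */
static inline uint64_t to_canonical(uint64_t va)
{
	return (uint64_t)((int64_t)(va << (64 - VA_BITS)) >> (64 - VA_BITS));
}

/* Non-canonical form (what drm_xe_madvise accepts today): upper bits zeroed. */
static inline uint64_t to_noncanonical(uint64_t va)
{
	return va & ((1ull << VA_BITS) - 1);
}
```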
>
> v2 Changes:
> - Reordered patches: Moved shared BO helper before main implementation
>   for proper dependency order
> - Fixed reference counting in mmap offset validation (use
>   drm_gem_object_put)
> - Removed incorrect claims about madvise(WILLNEED) restoring purged BOs
> - Fixed error code documentation inconsistencies
> - Initialize purge_state_val fields to prevent kernel memory leaks
> - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas Hellström)
> - Add NULL rebind with scratch PTEs for fault mode (Thomas Hellström)
> - Implement i915-compatible retained field logic (Thomas Hellström)
> - Skip BO validation for purged BOs in page fault handler (crash fix)
> - Add scratch VM check in page fault path (non-scratch VMs fail fault)
>
> v3 Changes (addressing Matt and Thomas Hellström feedback):
> - Per-VMA purgeable state tracking: Added xe_vma->purgeable_state field
> - Complete VMA check: xe_bo_all_vmas_dontneed() walks all VMAs across
>   all VMs to ensure unanimous DONTNEED before marking BO purgeable
> - VMA unbind recheck: Added xe_bo_recheck_purgeable_on_vma_unbind() to
>   re-evaluate BO state when VMAs are destroyed
> - Block external dma-bufs: Added xe_bo_is_external_dmabuf() check using
>   drm_gem_is_imported() and obj->dma_buf to prevent purging
>   imported/exported BOs
> - Consistent lockdep enforcement: Added xe_bo_assert_held() to all
>   helpers that access madv_purgeable state
> - Simplified page table logic: Renamed is_null to is_null_or_purged in
>   xe_pt_stage_bind_entry() - purged BOs treated identically to null VMAs
> - Removed unnecessary checks: Dropped redundant "&& bo" check in
>   xe_ttm_bo_purge()
> - Xe-specific warnings: Changed drm_warn() to XE_WARN_ON() in purge path
> - Moved purge checks under locks: Purge state validation now done after
>   acquiring dma-resv lock in vma_lock_and_validate() and
>   xe_pagefault_begin()
> - Race-free fault handling: Removed unlocked purge check from
>   xe_pagefault_handle_vma(), moved to locked xe_pagefault_begin()
> - Shrinker helper functions: Added xe_bo_set_purgeable_shrinker() and
>   xe_bo_clear_purgeable_shrinker() to encapsulate TTM purgeable flag
>   updates and shrinker page accounting, improving code clarity and
>   maintainability
>
> v4 Changes (addressing Matt and Thomas Hellström feedback):
> - UAPI: Removed '__u64 reserved' field from purge_state_val union to
>   fit 16-byte size constraint (Matt)
> - Changed madv_purgeable from atomic_t to u32 across all patches (Matt)
> - CPU fault handling: Added purged check to fastpath
>   (xe_bo_cpu_fault_fastpath) to prevent hang when accessing existing
>   mmap of purged BO
>
> v5 Changes (addressing Matt and Thomas Hellström feedback):
> - Add locking documentation to madv_purgeable field comment (Matt)
> - Introduce xe_bo_set_purgeable_state() helper (void return) to
>   centralize madv_purgeable updates with xe_bo_assert_held() and state
>   transition validation using explicit enum checks (no transition out
>   of PURGED) (Matt)
> - Make xe_ttm_bo_purge() return int and propagate failures from
>   xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g.
>   no_wait_gpu paths) rather than silently ignoring (Matt)
> - Replace drm_WARN_ON with xe_assert for better Xe-specific assertions
>   (Matt)
> - Hook purgeable handling into
>   madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] instead of a
>   special-case path in xe_vm_madvise_ioctl() (Matt)
> - Track purgeable retained return via xe_madvise_details and perform
>   copy_to_user() from xe_madvise_details_fini() after locks are
>   dropped (Matt)
> - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
>   __maybe_unused on madvise_purgeable() to maintain bisectability until
>   shrinker integration is complete in final patch (Matt)
> - Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
>   right after drm_gpuva_unlink() where we already hold the BO lock,
>   drop the trylock-based late destroy path (Matt)
> - Move purgeable_state into xe_vma_mem_attr with the other madvise
>   attributes (Matt)
> - Drop READ_ONCE since the BO lock already protects us (Matt)
> - Keep returning false when there are no VMAs - otherwise we'd mark
>   BOs purgeable without any user hint (Matt)
> - Use struct xe_vma_lock_and_validate_flags instead of multiple bool
>   parameters to improve readability and prevent argument transposition
>   (Matt)
> - Fix LRU crash while running shrink test
> - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
> - Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)
>
> v6 Changes (addressing Jose Souza, Thomas Hellström and Matt Brost
> feedback):
> - Document DONTNEED blocking behavior in uAPI: Clearly describe which
>   operations are blocked and with what error codes. (Thomas, Matt)
> - Block VM_BIND to DONTNEED BOs: Return -EBUSY to prevent creating new
>   VMAs to purgeable BOs (undefined behavior). (Thomas, Matt)
> - Block CPU faults to DONTNEED BOs: Return VM_FAULT_SIGBUS in both
>   fastpath and slowpath to prevent undefined behavior. (Thomas, Matt)
> - Block new mmap() to DONTNEED/purged BOs: Return -EBUSY for DONTNEED,
>   -EINVAL for PURGED. (Thomas, Matt)
> - Block dma-buf export of DONTNEED/purged BOs: Return -EBUSY for
>   DONTNEED, -EINVAL for PURGED. (Thomas, Matt)
> - Fix state transition bug: xe_bo_all_vmas_dontneed() now returns an
>   enum to distinguish NO_VMAS (preserve state) from WILLNEED (has
>   active VMAs), preventing incorrect DONTNEED → WILLNEED flip on last
>   VMA unmap (Matt)
> - Set skip_invalidation explicitly in madvise_purgeable() to ensure
>   DONTNEED always zaps GPU PTEs regardless of prior madvise state.
> - Add DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for userspace
>   feature detection. (Jose)
>
> Arvind Yadav (11):
>   drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
>   drm/xe/madvise: Implement purgeable buffer object support
>   drm/xe/bo: Block CPU faults to purgeable buffer objects
>   drm/xe/vm: Prevent binding of purged buffer objects
>   drm/xe/madvise: Implement per-VMA purgeable state tracking
>   drm/xe/madvise: Block imported and exported dma-bufs
>   drm/xe/bo: Block mmap of DONTNEED/purged BOs
>   drm/xe/dma_buf: Block export of DONTNEED/purged BOs
>   drm/xe/bo: Add purgeable shrinker state helpers
>   drm/xe/madvise: Enable purgeable buffer object IOCTL support
>   drm/xe/bo: Skip zero-refcount BOs in shrinker
>
> Himal Prasad Ghimiray (1):
>   drm/xe/uapi: Add UAPI support for purgeable buffer objects
>
>  drivers/gpu/drm/xe/xe_bo.c         | 223 +++++++++++++++++++++--
>  drivers/gpu/drm/xe/xe_bo.h         |  60 ++++++
>  drivers/gpu/drm/xe/xe_bo_types.h   |   6 +
>  drivers/gpu/drm/xe/xe_dma_buf.c    |  21 +++
>  drivers/gpu/drm/xe/xe_pagefault.c  |  19 ++
>  drivers/gpu/drm/xe/xe_pt.c         |  40 +++-
>  drivers/gpu/drm/xe/xe_query.c      |   2 +
>  drivers/gpu/drm/xe/xe_svm.c        |   1 +
>  drivers/gpu/drm/xe/xe_vm.c         | 100 ++++++++--
>  drivers/gpu/drm/xe/xe_vm_madvise.c | 283 +++++++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_vm_madvise.h |   3 +
>  drivers/gpu/drm/xe/xe_vm_types.h   |  11 ++
>  include/uapi/drm/xe_drm.h          |  60 ++++++
>  13 files changed, 793 insertions(+), 36 deletions(-)
* Re: [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects
2026-03-03 22:05 ` [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Souza, Jose
@ 2026-03-03 22:49 ` Matthew Brost
2026-03-04 13:29 ` Souza, Jose
0 siblings, 1 reply; 39+ messages in thread
From: Matthew Brost @ 2026-03-03 22:49 UTC (permalink / raw)
To: Souza, Jose
Cc: intel-xe@lists.freedesktop.org, Yadav, Arvind, Mishra, Pallavi,
Ghimiray, Himal Prasad, Vivi, Rodrigo,
thomas.hellstrom@linux.intel.com
On Tue, Mar 03, 2026 at 03:05:59PM -0700, Souza, Jose wrote:
> On Tue, 2026-03-03 at 20:49 +0530, Arvind Yadav wrote:
> > This patch series introduces comprehensive support for purgeable
> > buffer objects
> > in the Xe driver, enabling userspace to provide memory usage hints
> > for better
> > memory management under system pressure.
> >
> > Overview:
> >
> > Purgeable memory allows applications to mark buffer objects as "not
> > currently
> > needed" (DONTNEED), making them eligible for kernel reclamation
> > during memory
> > pressure. This helps prevent OOM conditions and enables more
> > efficient GPU
> > memory utilization for workloads with temporary or regeneratable data
> > (caches,
> > intermediate results, decoded frames, etc.).
> >
> > Purgeable BO Lifecycle:
> > 1. WILLNEED (default): BO actively needed, kernel preserves backing
> > store
> > 2. DONTNEED (user hint): BO contents discardable, eligible for
> > purging
> > 3. PURGED (kernel action): Backing store reclaimed during memory
> > pressure
> >
> > Key Design Principles:
> > - i915 compatibility: "Once purged, always purged" semantics -
> > purged BOs
> > remain permanently invalid and must be destroyed/recreated
> > - Per-VMA state tracking: Each VMA tracks its own purgeable state,
> > BO is
> > only marked DONTNEED when ALL VMAs across ALL VMs agree (Thomas
> > Hellström)
> > - Safety first: Imported/exported dma-bufs blocked from purgeable
> > state -
> > no visibility into external device usage (Matt Roper)
> > - Multiple protection layers: Validation in madvise, VM bind, mmap,
> > CPU
> > and GPU fault handlers. GPU page faults on DONTNEED BOs are
> > rejected in
> > xe_pagefault_begin() to preserve the GPU PTE invalidation done at
> > madvise
> > time; without this the rebind path would re-map real pages and
> > undo the
> > PTE zap, preventing the shrinker from ever reclaiming the BO.
> > - Correct GPU PTE zapping: madvise_purgeable() explicitly sets
> > skip_invalidation per VMA (false for DONTNEED, true for WILLNEED,
> > purged
> > and dmabuf-shared BOs) so DONTNEED always triggers a GPU PTE zap
> > regardless of prior madvise state.
> > - Scratch PTE support: Fault-mode VMs use scratch pages for safe
> > zero reads
> > on purged BO access.
> > - TTM shrinker integration: Encapsulated helpers manage xe_ttm_tt-
> > >purgeable
> > flag and shrinker page accounting (shrinkable vs purgeable
> > buckets)
>
> I get Engine memory CAT errors when using this feature:
>
> [  240.301213] xe 0000:00:02.0: [drm] Tile0: GT0: Fault response: Unsuccessful -EINVAL
> [  240.301301] xe 0000:00:02.0: [drm] Tile0: GT0: Engine memory CAT error [18]: class=rcs, logical_mask: 0x1, guc_id=17
> [  240.302871] xe 0000:00:02.0: [drm] Tile0: GT0: Engine reset: engine_class=rcs, logical_mask: 0x1, guc_id=17, state=0x249
> [  240.302885] xe 0000:00:02.0: [drm] Tile0: GT0: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=17, flags=0x0 in arb_map_buffer_ [3374]
> [  240.302892] xe 0000:00:02.0: [drm:xe_devcoredump [xe]] Multiple hangs are occurring, but only the first snapshot was taken
>
> Mesa creates VMs with DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE; you probably
> don't have an IGT test covering this scenario.
>
> @cc Rodrigo
>
> Another issue, not related to your patches: drm_xe_madvise only works
> with non-canonical addresses, while some time ago it was agreed that
> all user-visible addresses would be in canonical format.
> Not sure if we can do anything at this point, but letting you know.
>
We actually might be able to fix it to accept canonical addresses; what we
can't blindly do is make non-canonical addresses stop working.
It might create a weird scenario for UMDs if canonical addresses work on
some kernels but not others, but since this is Mesa's first use of
madvise, perhaps we can get this in as part of purgeable and only NEO
would have to deal with that scenario.
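
To sketch what accepting both forms could look like (illustrative only,
not the actual xe code; `VA_BITS`, `addr_is_acceptable` and
`addr_normalize` are made-up names, and the 48-bit VA width is an
assumption): validate that the upper bits are either all zero or a
proper sign extension, then normalize to the internal non-canonical
representation before the lookup:

```c
#include <stdint.h>
#include <stdbool.h>

#define VA_BITS 48 /* assumption for illustration */
#define VA_MASK ((1ull << VA_BITS) - 1)

/* Accept an address in either canonical or non-canonical form.
 * Non-canonical input: upper bits must be zero.
 * Canonical input: upper bits must all mirror bit VA_BITS-1. */
static bool addr_is_acceptable(uint64_t addr)
{
	uint64_t high = addr & ~VA_MASK;

	if (high == 0)
		return true;
	return high == ~VA_MASK && (addr & (1ull << (VA_BITS - 1)));
}

/* Normalize to the non-canonical form used internally today. */
static uint64_t addr_normalize(uint64_t addr)
{
	return addr & VA_MASK;
}
```

The key property is that existing non-canonical callers keep working
unchanged, while canonical input is no longer rejected.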
Matt
>
> >
> > v2 Changes:
> > - Reordered patches: Moved shared BO helper before main
> > implementation for
> > proper dependency order
> > - Fixed reference counting in mmap offset validation (use
> > drm_gem_object_put)
> > - Removed incorrect claims about madvise(WILLNEED) restoring purged
> > BOs
> > - Fixed error code documentation inconsistencies
> > - Initialize purge_state_val fields to prevent kernel memory leaks
> > - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas
> > Hellström)
> > - Add NULL rebind with scratch PTEs for fault mode (Thomas
> > Hellström)
> > - Implement i915-compatible retained field logic (Thomas Hellström)
> > - Skip BO validation for purged BOs in page fault handler (crash
> > fix)
> > - Add scratch VM check in page fault path (non-scratch VMs fail
> > fault)
> >
> > v3 Changes (addressing Matt and Thomas Hellström feedback):
> > - Per-VMA purgeable state tracking: Added xe_vma->purgeable_state
> > field
> > - Complete VMA check: xe_bo_all_vmas_dontneed() walks all VMAs
> > across all
> > VMs to ensure unanimous DONTNEED before marking BO purgeable
> > - VMA unbind recheck: Added xe_bo_recheck_purgeable_on_vma_unbind()
> > to
> > re-evaluate BO state when VMAs are destroyed
> > - Block external dma-bufs: Added xe_bo_is_external_dmabuf() check
> > using
> > drm_gem_is_imported() and obj->dma_buf to prevent purging
> > imported/exported BOs
> > - Consistent lockdep enforcement: Added xe_bo_assert_held() to all
> > helpers
> > that access madv_purgeable state
> > - Simplified page table logic: Renamed is_null to is_null_or_purged
> > in
> > xe_pt_stage_bind_entry() - purged BOs treated identically to null
> > VMAs
> > - Removed unnecessary checks: Dropped redundant "&& bo" check in
> > xe_ttm_bo_purge()
> > - Xe-specific warnings: Changed drm_warn() to XE_WARN_ON() in purge
> > path
> > - Moved purge checks under locks: Purge state validation now done
> > after
> > acquiring dma-resv lock in vma_lock_and_validate() and
> > xe_pagefault_begin()
> > - Race-free fault handling: Removed unlocked purge check from
> > xe_pagefault_handle_vma(), moved to locked xe_pagefault_begin()
> > - Shrinker helper functions: Added xe_bo_set_purgeable_shrinker()
> > and
> > xe_bo_clear_purgeable_shrinker() to encapsulate TTM purgeable
> > flag updates
> > and shrinker page accounting, improving code clarity and
> > maintainability
> >
> > v4 Changes (addressing Matt and Thomas Hellström feedback):
> > - UAPI: Removed '__u64 reserved' field from purge_state_val union
> > to fit
> > 16-byte size constraint (Matt)
> > - Changed madv_purgeable from atomic_t to u32 across all patches
> > (Matt)
> > - CPU fault handling: Added purged check to fastpath
> > (xe_bo_cpu_fault_fastpath)
> > to prevent hang when accessing existing mmap of purged BO
> >
> > v5 Changes (addressing Matt and Thomas Hellström feedback):
> > - Add locking documentation to madv_purgeable field comment (Matt)
> > - Introduce xe_bo_set_purgeable_state() helper (void return) to
> > centralize
> > madv_purgeable updates with xe_bo_assert_held() and state
> > transition
> > validation using explicit enum checks (no transition out of
> > PURGED) (Matt)
> > - Make xe_ttm_bo_purge() return int and propagate failures from
> > xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g.
> > no_wait_gpu
> > paths) rather than silently ignoring (Matt)
> > - Replace drm_WARN_ON with xe_assert for better Xe-specific
> > assertions (Matt)
> > - Hook purgeable handling into
> > madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
> > instead of special-case path in xe_vm_madvise_ioctl() (Matt)
> > - Track purgeable retained return via xe_madvise_details and
> > perform
> > copy_to_user() from xe_madvise_details_fini() after locks are
> > dropped (Matt)
> > - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
> > __maybe_unused on madvise_purgeable() to maintain bisectability
> > until
> > shrinker integration is complete in final patch (Matt)
> > - Call xe_bo_recheck_purgeable_on_vma_unbind() from
> > xe_vma_destroy()
> > right after drm_gpuva_unlink() where we already hold the BO lock,
> > drop the trylock-based late destroy path (Matt)
> > - Move purgeable_state into xe_vma_mem_attr with the other madvise
> > attributes (Matt)
> > - Drop READ_ONCE since the BO lock already protects us (Matt)
> > - Keep returning false when there are no VMAs - otherwise we'd mark
> > BOs purgeable without any user hint (Matt)
> > - Use struct xe_vma_lock_and_validate_flags instead of multiple
> > bool
> > parameters to improve readability and prevent argument
> > transposition (Matt)
> > - Fix LRU crash while running shrink test
> > - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
> > - Split ghost BO and zero-refcount handling in xe_bo_shrink()
> > (Thomas)
> >
> > v6 Changes (addressing Jose Souza, Thomas Hellström and Matt Brost
> > feedback):
> > - Document DONTNEED blocking behavior in uAPI: Clearly describe
> >   which operations are blocked and with what error codes. (Thomas,
> >   Matt)
> > - Block VM_BIND to DONTNEED BOs: Return -EBUSY to prevent creating
> >   new VMAs to purgeable BOs (undefined behavior). (Thomas, Matt)
> > - Block CPU faults to DONTNEED BOs: Return VM_FAULT_SIGBUS in both
> >   fastpath and slowpath to prevent undefined behavior. (Thomas, Matt)
> > - Block new mmap() to DONTNEED/purged BOs: Return -EBUSY for
> >   DONTNEED, -EINVAL for PURGED. (Thomas, Matt)
> > - Block dma-buf export of DONTNEED/purged BOs: Return -EBUSY for
> >   DONTNEED, -EINVAL for PURGED. (Thomas, Matt)
> > - Fix state transition bug: xe_bo_all_vmas_dontneed() now returns
> >   enum to distinguish NO_VMAS (preserve state) from WILLNEED (has
> >   active VMAs), preventing incorrect DONTNEED → WILLNEED flip on
> >   last VMA unmap (Matt)
> > - Set skip_invalidation explicitly in madvise_purgeable() to ensure
> >   DONTNEED always zaps GPU PTEs regardless of prior madvise state.
> > - Add DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for userspace
> >   feature detection. (Jose)
> >
> > Arvind Yadav (11):
> > drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
> > drm/xe/madvise: Implement purgeable buffer object support
> > drm/xe/bo: Block CPU faults to purgeable buffer objects
> > drm/xe/vm: Prevent binding of purged buffer objects
> > drm/xe/madvise: Implement per-VMA purgeable state tracking
> > drm/xe/madvise: Block imported and exported dma-bufs
> > drm/xe/bo: Block mmap of DONTNEED/purged BOs
> > drm/xe/dma_buf: Block export of DONTNEED/purged BOs
> > drm/xe/bo: Add purgeable shrinker state helpers
> > drm/xe/madvise: Enable purgeable buffer object IOCTL support
> > drm/xe/bo: Skip zero-refcount BOs in shrinker
> >
> > Himal Prasad Ghimiray (1):
> > drm/xe/uapi: Add UAPI support for purgeable buffer objects
> >
> > drivers/gpu/drm/xe/xe_bo.c | 223 +++++++++++++++++++++--
> > drivers/gpu/drm/xe/xe_bo.h | 60 ++++++
> > drivers/gpu/drm/xe/xe_bo_types.h | 6 +
> > drivers/gpu/drm/xe/xe_dma_buf.c | 21 +++
> > drivers/gpu/drm/xe/xe_pagefault.c | 19 ++
> > drivers/gpu/drm/xe/xe_pt.c | 40 +++-
> > drivers/gpu/drm/xe/xe_query.c | 2 +
> > drivers/gpu/drm/xe/xe_svm.c | 1 +
> > drivers/gpu/drm/xe/xe_vm.c | 100 ++++++++--
> > drivers/gpu/drm/xe/xe_vm_madvise.c | 283 +++++++++++++++++++++++++++++
> > drivers/gpu/drm/xe/xe_vm_madvise.h | 3 +
> > drivers/gpu/drm/xe/xe_vm_types.h | 11 ++
> > include/uapi/drm/xe_drm.h | 60 ++++++
> > 13 files changed, 793 insertions(+), 36 deletions(-)
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects
2026-03-03 22:49 ` Matthew Brost
@ 2026-03-04 13:29 ` Souza, Jose
2026-03-23 6:37 ` Yadav, Arvind
0 siblings, 1 reply; 39+ messages in thread
From: Souza, Jose @ 2026-03-04 13:29 UTC (permalink / raw)
To: Brost, Matthew
Cc: intel-xe@lists.freedesktop.org, Vivi, Rodrigo, Mishra, Pallavi,
Ghimiray, Himal Prasad, Yadav, Arvind,
thomas.hellstrom@linux.intel.com
On Tue, 2026-03-03 at 14:49 -0800, Matthew Brost wrote:
> On Tue, Mar 03, 2026 at 03:05:59PM -0700, Souza, Jose wrote:
> > On Tue, 2026-03-03 at 20:49 +0530, Arvind Yadav wrote:
> > > This patch series introduces comprehensive support for purgeable
> > > buffer objects in the Xe driver, enabling userspace to provide
> > > memory usage hints for better memory management under system
> > > pressure.
> > >
> > > Overview:
> > >
> > > Purgeable memory allows applications to mark buffer objects as
> > > "not currently needed" (DONTNEED), making them eligible for kernel
> > > reclamation during memory pressure. This helps prevent OOM
> > > conditions and enables more efficient GPU memory utilization for
> > > workloads with temporary or regenerable data (caches, intermediate
> > > results, decoded frames, etc.).
> > >
> > > Purgeable BO Lifecycle:
> > > 1. WILLNEED (default): BO actively needed, kernel preserves
> > >    backing store
> > > 2. DONTNEED (user hint): BO contents discardable, eligible for
> > >    purging
> > > 3. PURGED (kernel action): Backing store reclaimed during memory
> > >    pressure
> > >
> > > Key Design Principles:
> > > - i915 compatibility: "Once purged, always purged" semantics -
> > >   purged BOs remain permanently invalid and must be
> > >   destroyed/recreated
> > > - Per-VMA state tracking: Each VMA tracks its own purgeable state;
> > >   the BO is only marked DONTNEED when ALL VMAs across ALL VMs
> > >   agree (Thomas Hellström)
> > > - Safety first: Imported/exported dma-bufs are blocked from the
> > >   purgeable state - no visibility into external device usage (Matt
> > >   Roper)
> > > - Multiple protection layers: Validation in madvise, VM bind,
> > >   mmap, CPU and GPU fault handlers. GPU page faults on DONTNEED
> > >   BOs are rejected in xe_pagefault_begin() to preserve the GPU PTE
> > >   invalidation done at madvise time; without this the rebind path
> > >   would re-map real pages and undo the PTE zap, preventing the
> > >   shrinker from ever reclaiming the BO.
> > > - Correct GPU PTE zapping: madvise_purgeable() explicitly sets
> > >   skip_invalidation per VMA (false for DONTNEED; true for
> > >   WILLNEED, purged and dmabuf-shared BOs) so DONTNEED always
> > >   triggers a GPU PTE zap regardless of prior madvise state.
> > > - Scratch PTE support: Fault-mode VMs use scratch pages for safe
> > >   zero reads on purged BO access.
> > > - TTM shrinker integration: Encapsulated helpers manage the
> > >   xe_ttm_tt->purgeable flag and shrinker page accounting
> > >   (shrinkable vs purgeable buckets)
> >
> >
> > I get Engine memory CAT errors when using this feature:
> >
> > [ 240.301213] xe 0000:00:02.0: [drm] Tile0: GT0: Fault response:
> > Unsuccessful -EINVAL
> > [ 240.301301] xe 0000:00:02.0: [drm] Tile0: GT0: Engine memory CAT
> > error [18]: class=rcs, logical_mask: 0x1, guc_id=17
> > [ 240.302871] xe 0000:00:02.0: [drm] Tile0: GT0: Engine reset:
> > engine_class=rcs, logical_mask: 0x1, guc_id=17, state=0x249
> > [ 240.302885] xe 0000:00:02.0: [drm] Tile0: GT0: Timedout job:
> > seqno=4294967169, lrc_seqno=4294967169, guc_id=17, flags=0x0 in
> > arb_map_buffer_ [3374]
> > [ 240.302892] xe 0000:00:02.0: [drm:xe_devcoredump [xe]] Multiple
> > hangs are occurring, but only the first snapshot was taken
> >
> > Mesa creates VMs with DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE; probably
> > you don't have an IGT test with this scenario.
> >
> > @cc Rodrigo
> >
> > The other issue is not related to your patches, but drm_xe_madvise
> > only works with non-canonical addresses, and some time ago it was
> > agreed that all user-visible addresses would be in canonical format.
> > Not sure if we can do anything at this point, but letting you know.
> >
>
> We actually might be able to fix it to accept canonical addresses;
> what we can't blindly do is make non-canonical addresses stop
> working...
>
> It might create a weird scenario for UMDs, though, if canonical
> addresses work on some kernels but not others. But perhaps, since this
> is Mesa's first use of madvise, we get this in as part of purgeable
> and only NEO would have to deal with this scenario.
Yes, this is the first madvise usage in Mesa.
>
> Matt
>
> >
> > >
> > > v2 Changes:
> > > - Reordered patches: Moved shared BO helper before main
> > >   implementation for proper dependency order
> > > - Fixed reference counting in mmap offset validation (use
> > >   drm_gem_object_put)
> > > - Removed incorrect claims about madvise(WILLNEED) restoring
> > >   purged BOs
> > > - Fixed error code documentation inconsistencies
> > > - Initialize purge_state_val fields to prevent kernel memory leaks
> > > - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas
> > >   Hellström)
> > > - Add NULL rebind with scratch PTEs for fault mode (Thomas
> > >   Hellström)
> > > - Implement i915-compatible retained field logic (Thomas
> > >   Hellström)
> > > - Skip BO validation for purged BOs in page fault handler (crash
> > >   fix)
> > > - Add scratch VM check in page fault path (non-scratch VMs fail
> > >   fault)
> > >
> > > v3 Changes (addressing Matt and Thomas Hellström feedback):
> > > - Per-VMA purgeable state tracking: Added xe_vma->purgeable_state
> > >   field
> > > - Complete VMA check: xe_bo_all_vmas_dontneed() walks all VMAs
> > >   across all VMs to ensure unanimous DONTNEED before marking BO
> > >   purgeable
> > > - VMA unbind recheck: Added
> > >   xe_bo_recheck_purgeable_on_vma_unbind() to re-evaluate BO state
> > >   when VMAs are destroyed
> > > - Block external dma-bufs: Added xe_bo_is_external_dmabuf() check
> > >   using drm_gem_is_imported() and obj->dma_buf to prevent purging
> > >   imported/exported BOs
> > > - Consistent lockdep enforcement: Added xe_bo_assert_held() to
> > >   all helpers that access madv_purgeable state
> > > - Simplified page table logic: Renamed is_null to
> > >   is_null_or_purged in xe_pt_stage_bind_entry() - purged BOs
> > >   treated identically to null VMAs
> > > - Removed unnecessary checks: Dropped redundant "&& bo" check in
> > >   xe_ttm_bo_purge()
> > > - Xe-specific warnings: Changed drm_warn() to XE_WARN_ON() in
> > >   purge path
> > > - Moved purge checks under locks: Purge state validation now done
> > >   after acquiring dma-resv lock in vma_lock_and_validate() and
> > >   xe_pagefault_begin()
> > > - Race-free fault handling: Removed unlocked purge check from
> > >   xe_pagefault_handle_vma(), moved to locked xe_pagefault_begin()
> > > - Shrinker helper functions: Added xe_bo_set_purgeable_shrinker()
> > >   and xe_bo_clear_purgeable_shrinker() to encapsulate TTM
> > >   purgeable flag updates and shrinker page accounting, improving
> > >   code clarity and maintainability
> > >
> > > v4 Changes (addressing Matt and Thomas Hellström feedback):
> > > - UAPI: Removed '__u64 reserved' field from purge_state_val union
> > >   to fit 16-byte size constraint (Matt)
> > > - Changed madv_purgeable from atomic_t to u32 across all patches
> > >   (Matt)
> > > - CPU fault handling: Added purged check to fastpath
> > >   (xe_bo_cpu_fault_fastpath) to prevent hang when accessing
> > >   existing mmap of purged BO
> > >
> > > v5 Changes (addressing Matt and Thomas Hellström feedback):
> > > - Add locking documentation to madv_purgeable field comment (Matt)
> > > - Introduce xe_bo_set_purgeable_state() helper (void return) to
> > >   centralize madv_purgeable updates with xe_bo_assert_held() and
> > >   state transition validation using explicit enum checks (no
> > >   transition out of PURGED) (Matt)
> > > - Make xe_ttm_bo_purge() return int and propagate failures from
> > >   xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g.
> > >   no_wait_gpu paths) rather than silently ignoring (Matt)
> > > - Replace drm_WARN_ON with xe_assert for better Xe-specific
> > >   assertions (Matt)
> > > - Hook purgeable handling into
> > >   madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] instead of
> > >   special-case path in xe_vm_madvise_ioctl() (Matt)
> > > - Track purgeable retained return via xe_madvise_details and
> > >   perform copy_to_user() from xe_madvise_details_fini() after
> > >   locks are dropped (Matt)
> > > - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
> > >   __maybe_unused on madvise_purgeable() to maintain bisectability
> > >   until shrinker integration is complete in final patch (Matt)
> > > - Call xe_bo_recheck_purgeable_on_vma_unbind() from
> > >   xe_vma_destroy() right after drm_gpuva_unlink() where we
> > >   already hold the BO lock, drop the trylock-based late destroy
> > >   path (Matt)
> > > - Move purgeable_state into xe_vma_mem_attr with the other
> > >   madvise attributes (Matt)
> > > - Drop READ_ONCE since the BO lock already protects us (Matt)
> > > - Keep returning false when there are no VMAs - otherwise we'd
> > >   mark BOs purgeable without any user hint (Matt)
> > > - Use struct xe_vma_lock_and_validate_flags instead of multiple
> > >   bool parameters to improve readability and prevent argument
> > >   transposition (Matt)
> > > - Fix LRU crash while running shrink test
> > > - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
> > > - Split ghost BO and zero-refcount handling in xe_bo_shrink()
> > >   (Thomas)
> > >
> > > [snip v6 changelog, shortlog and diffstat]
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects
2026-03-04 13:29 ` Souza, Jose
@ 2026-03-23 6:37 ` Yadav, Arvind
0 siblings, 0 replies; 39+ messages in thread
From: Yadav, Arvind @ 2026-03-23 6:37 UTC (permalink / raw)
To: Souza, Jose, Brost, Matthew
Cc: intel-xe@lists.freedesktop.org, Vivi, Rodrigo, Mishra, Pallavi,
Ghimiray, Himal Prasad, thomas.hellstrom@linux.intel.com
On 04-03-2026 18:59, Souza, Jose wrote:
> On Tue, 2026-03-03 at 14:49 -0800, Matthew Brost wrote:
>> On Tue, Mar 03, 2026 at 03:05:59PM -0700, Souza, Jose wrote:
>>> On Tue, 2026-03-03 at 20:49 +0530, Arvind Yadav wrote:
>>>> [snip cover-letter overview]
>>>
>>> I get Engine memory CAT errors when using this feature:
>>>
>>> [ 240.301213] xe 0000:00:02.0: [drm] Tile0: GT0: Fault response:
>>> Unsuccessful -EINVAL
>>> [ 240.301301] xe 0000:00:02.0: [drm] Tile0: GT0: Engine memory CAT
>>> error [18]: class=rcs, logical_mask: 0x1, guc_id=17
>>> [ 240.302871] xe 0000:00:02.0: [drm] Tile0: GT0: Engine reset:
>>> engine_class=rcs, logical_mask: 0x1, guc_id=17, state=0x249
>>> [ 240.302885] xe 0000:00:02.0: [drm] Tile0: GT0: Timedout job:
>>> seqno=4294967169, lrc_seqno=4294967169, guc_id=17, flags=0x0 in
>>> arb_map_buffer_ [3374]
>>> [ 240.302892] xe 0000:00:02.0: [drm:xe_devcoredump [xe]] Multiple
>>> hangs are occurring, but only the first snapshot was taken
>>>
>>> Mesa creates VMs with DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE; probably
>>> you don't have an IGT test with this scenario.
>>>
>>> @cc Rodrigo
>>>
>>> The other issue is not related to your patches, but drm_xe_madvise
>>> only works with non-canonical addresses, and some time ago it was
>>> agreed that all user-visible addresses would be in canonical format.
>>> Not sure if we can do anything at this point, but letting you know.
>>>
CAT error with SCRATCH_PAGE VMs: Fixed in patch 3.
>> We actually might be able to fix it to accept canonical addresses;
>> what we can't blindly do is make non-canonical addresses stop
>> working...
>>
>> It might create a weird scenario for UMDs, though, if canonical
>> addresses work on some kernels but not others. But perhaps, since
>> this is Mesa's first use of madvise, we get this in as part of
>> purgeable and only NEO would have to deal with this scenario.
> Yes, this is the first madvise usage in Mesa.
Canonical addresses: Fixed in patch 12. xe_vm_madvise_ioctl() now strips
the sign extension via xe_device_uncanonicalize_addr() at the top, so
both canonical and non-canonical addresses work transparently; existing
non-canonical users are unaffected.
Thanks,
Arvind
>> Matt
>>
>>>> [snip v2-v6 changelogs, shortlog and diffstat]
^ permalink raw reply [flat|nested] 39+ messages in thread
* ✗ Xe.CI.FULL: failure for drm/xe/madvise: Add support for purgeable buffer objects (rev7)
2026-03-03 15:19 [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (15 preceding siblings ...)
2026-03-03 22:05 ` [PATCH v6 00/12] drm/xe/madvise: Add support for purgeable buffer objects Souza, Jose
@ 2026-03-04 4:01 ` Patchwork
16 siblings, 0 replies; 39+ messages in thread
From: Patchwork @ 2026-03-04 4:01 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 29738 bytes --]
== Series Details ==
Series: drm/xe/madvise: Add support for purgeable buffer objects (rev7)
URL : https://patchwork.freedesktop.org/series/156651/
State : failure
== Summary ==
CI Bug Log - changes from xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b_FULL -> xe-pw-156651v7_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-156651v7_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-156651v7_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (2 -> 2)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-156651v7_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-d-dp-2:
- shard-bmg: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-8/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-d-dp-2.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-4/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs@pipe-d-dp-2.html
Known issues
------------
Here are the changes found in xe-pw-156651v7_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_big_fb@linear-16bpp-rotate-90:
- shard-bmg: NOTRUN -> [SKIP][3] ([Intel XE#2327]) +1 other test skip
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_big_fb@linear-16bpp-rotate-90.html
* igt@kms_big_fb@yf-tiled-8bpp-rotate-180:
- shard-bmg: NOTRUN -> [SKIP][4] ([Intel XE#1124]) +2 other tests skip
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_big_fb@yf-tiled-8bpp-rotate-180.html
* igt@kms_bw@linear-tiling-3-displays-1920x1080p:
- shard-bmg: NOTRUN -> [SKIP][5] ([Intel XE#367] / [Intel XE#7354])
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_bw@linear-tiling-3-displays-1920x1080p.html
* igt@kms_ccs@bad-rotation-90-4-tiled-dg2-rc-ccs-cc:
- shard-bmg: NOTRUN -> [SKIP][6] ([Intel XE#2887]) +4 other tests skip
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_ccs@bad-rotation-90-4-tiled-dg2-rc-ccs-cc.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs:
- shard-bmg: [PASS][7] -> [INCOMPLETE][8] ([Intel XE#7084])
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-8/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-4/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [SKIP][9] ([Intel XE#2652]) +7 other tests skip
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-8/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-a-dp-2.html
* igt@kms_chamelium_edid@dp-edid-change-during-suspend:
- shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#2252]) +2 other tests skip
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_chamelium_edid@dp-edid-change-during-suspend.html
* igt@kms_content_protection@lic-type-0-hdcp14:
- shard-bmg: NOTRUN -> [FAIL][11] ([Intel XE#1178] / [Intel XE#3304] / [Intel XE#7374])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_content_protection@lic-type-0-hdcp14.html
* igt@kms_content_protection@lic-type-0-hdcp14@pipe-a-dp-1:
- shard-bmg: NOTRUN -> [FAIL][12] ([Intel XE#3304])
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_content_protection@lic-type-0-hdcp14@pipe-a-dp-1.html
* igt@kms_cursor_crc@cursor-offscreen-32x32:
- shard-bmg: NOTRUN -> [SKIP][13] ([Intel XE#2320])
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_cursor_crc@cursor-offscreen-32x32.html
* igt@kms_cursor_crc@cursor-random-256x256@pipe-d-dp-2:
- shard-bmg: [PASS][14] -> [FAIL][15] ([Intel XE#6747]) +1 other test fail
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_cursor_crc@cursor-random-256x256@pipe-d-dp-2.html
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_cursor_crc@cursor-random-256x256@pipe-d-dp-2.html
* igt@kms_cursor_crc@cursor-rapid-movement-512x512:
- shard-bmg: NOTRUN -> [SKIP][16] ([Intel XE#2321] / [Intel XE#7355])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_cursor_crc@cursor-rapid-movement-512x512.html
* igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling:
- shard-bmg: NOTRUN -> [SKIP][17] ([Intel XE#7178] / [Intel XE#7349])
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][18] ([Intel XE#2311]) +8 other tests skip
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt:
- shard-bmg: NOTRUN -> [SKIP][19] ([Intel XE#4141]) +2 other tests skip
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt:
- shard-bmg: NOTRUN -> [SKIP][20] ([Intel XE#2313]) +7 other tests skip
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt.html
* igt@kms_joiner@basic-force-ultra-joiner:
- shard-bmg: NOTRUN -> [SKIP][21] ([Intel XE#6911] / [Intel XE#7466])
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_joiner@basic-force-ultra-joiner.html
* igt@kms_joiner@invalid-modeset-big-joiner:
- shard-bmg: NOTRUN -> [SKIP][22] ([Intel XE#6901])
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_joiner@invalid-modeset-big-joiner.html
* igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-cc-modifier-source-clamping:
- shard-bmg: NOTRUN -> [SKIP][23] ([Intel XE#7283])
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-cc-modifier-source-clamping.html
* igt@kms_plane_cursor@viewport:
- shard-bmg: [PASS][24] -> [ABORT][25] ([Intel XE#5545] / [Intel XE#6652]) +1 other test abort
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_plane_cursor@viewport.html
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_plane_cursor@viewport.html
* igt@kms_plane_multiple@2x-tiling-y:
- shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#5021] / [Intel XE#7377])
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_plane_multiple@2x-tiling-y.html
* igt@kms_pm_dc@dc5-retention-flops:
- shard-bmg: NOTRUN -> [SKIP][27] ([Intel XE#3309] / [Intel XE#7368])
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_pm_dc@dc5-retention-flops.html
* igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area:
- shard-bmg: NOTRUN -> [SKIP][28] ([Intel XE#1489]) +1 other test skip
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area.html
* igt@kms_psr2_su@page_flip-nv12:
- shard-bmg: NOTRUN -> [SKIP][29] ([Intel XE#2387] / [Intel XE#7429])
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_psr2_su@page_flip-nv12.html
* igt@kms_psr@psr2-sprite-render:
- shard-bmg: NOTRUN -> [SKIP][30] ([Intel XE#2234] / [Intel XE#2850]) +1 other test skip
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_psr@psr2-sprite-render.html
* igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90:
- shard-bmg: NOTRUN -> [SKIP][31] ([Intel XE#3414] / [Intel XE#3904] / [Intel XE#7342])
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html
* igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1:
- shard-lnl: [PASS][32] -> [FAIL][33] ([Intel XE#2142]) +1 other test fail
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-lnl-6/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-lnl-7/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
* igt@xe_eudebug@vm-bind-clear-faultable:
- shard-bmg: NOTRUN -> [SKIP][34] ([Intel XE#4837])
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_eudebug@vm-bind-clear-faultable.html
* igt@xe_eudebug_online@interrupt-other:
- shard-bmg: NOTRUN -> [SKIP][35] ([Intel XE#4837] / [Intel XE#6665]) +1 other test skip
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_eudebug_online@interrupt-other.html
* igt@xe_evict@evict-mixed-many-threads-small:
- shard-bmg: [PASS][36] -> [INCOMPLETE][37] ([Intel XE#6321]) +1 other test incomplete
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-6/igt@xe_evict@evict-mixed-many-threads-small.html
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-9/igt@xe_evict@evict-mixed-many-threads-small.html
* igt@xe_evict@evict-small-multi-queue:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#7140])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_evict@evict-small-multi-queue.html
* igt@xe_exec_basic@multigpu-once-basic-defer-bind:
- shard-bmg: NOTRUN -> [SKIP][39] ([Intel XE#2322] / [Intel XE#7372]) +1 other test skip
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_exec_basic@multigpu-once-basic-defer-bind.html
* igt@xe_exec_fault_mode@twice-multi-queue-prefetch:
- shard-bmg: NOTRUN -> [SKIP][40] ([Intel XE#7136]) +2 other tests skip
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_exec_fault_mode@twice-multi-queue-prefetch.html
* igt@xe_exec_multi_queue@many-queues-preempt-mode-priority-smem:
- shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#6874]) +6 other tests skip
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_exec_multi_queue@many-queues-preempt-mode-priority-smem.html
* igt@xe_exec_system_allocator@many-malloc:
- shard-bmg: [PASS][42] -> [DMESG-FAIL][43] ([Intel XE#5213] / [Intel XE#5545] / [Intel XE#6652])
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@xe_exec_system_allocator@many-malloc.html
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@xe_exec_system_allocator@many-malloc.html
* igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-multi-vma:
- shard-lnl: [PASS][44] -> [FAIL][45] ([Intel XE#5625]) +1 other test fail
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-lnl-1/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-multi-vma.html
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-lnl-8/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-multi-vma.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-madvise:
- shard-bmg: [PASS][46] -> [SKIP][47] ([Intel XE#6557] / [Intel XE#6703])
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-madvise.html
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-madvise.html
* igt@xe_exec_threads@threads-multi-queue-mixed-fd-userptr-invalidate-race:
- shard-bmg: NOTRUN -> [SKIP][48] ([Intel XE#7138])
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_exec_threads@threads-multi-queue-mixed-fd-userptr-invalidate-race.html
* igt@xe_multigpu_svm@mgpu-coherency-prefetch:
- shard-bmg: NOTRUN -> [SKIP][49] ([Intel XE#6964])
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_multigpu_svm@mgpu-coherency-prefetch.html
* igt@xe_pm@vram-d3cold-threshold:
- shard-bmg: NOTRUN -> [SKIP][50] ([Intel XE#579] / [Intel XE#7329] / [Intel XE#7456])
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_pm@vram-d3cold-threshold.html
* igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq:
- shard-bmg: NOTRUN -> [SKIP][51] ([Intel XE#4733] / [Intel XE#7417])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq.html
* igt@xe_sriov_admin@sched-priority-vf-write-denied:
- shard-bmg: [PASS][52] -> [SKIP][53] ([Intel XE#6703]) +48 other tests skip
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@xe_sriov_admin@sched-priority-vf-write-denied.html
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@xe_sriov_admin@sched-priority-vf-write-denied.html
#### Possible fixes ####
* igt@kms_addfb_basic@addfb25-y-tiled-legacy:
- shard-bmg: [DMESG-WARN][54] ([Intel XE#1727] / [Intel XE#6819]) -> [PASS][55]
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_addfb_basic@addfb25-y-tiled-legacy.html
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_addfb_basic@addfb25-y-tiled-legacy.html
* igt@kms_bw@linear-tiling-1-displays-2160x1440p:
- shard-bmg: [SKIP][56] ([Intel XE#367] / [Intel XE#7354]) -> [PASS][57]
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-4/igt@kms_bw@linear-tiling-1-displays-2160x1440p.html
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-5/igt@kms_bw@linear-tiling-1-displays-2160x1440p.html
* igt@kms_flip@2x-flip-vs-expired-vblank:
- shard-bmg: [FAIL][58] ([Intel XE#3321]) -> [PASS][59]
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-6/igt@kms_flip@2x-flip-vs-expired-vblank.html
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-9/igt@kms_flip@2x-flip-vs-expired-vblank.html
* igt@kms_flip@2x-flip-vs-expired-vblank@bd-dp2-hdmi-a3:
- shard-bmg: [FAIL][60] -> [PASS][61]
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-6/igt@kms_flip@2x-flip-vs-expired-vblank@bd-dp2-hdmi-a3.html
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-9/igt@kms_flip@2x-flip-vs-expired-vblank@bd-dp2-hdmi-a3.html
* igt@kms_flip@2x-flip-vs-suspend:
- shard-bmg: [INCOMPLETE][62] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][63]
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-5/igt@kms_flip@2x-flip-vs-suspend.html
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-8/igt@kms_flip@2x-flip-vs-suspend.html
#### Warnings ####
* igt@kms_big_fb@y-tiled-64bpp-rotate-270:
- shard-bmg: [SKIP][64] ([Intel XE#1124]) -> [SKIP][65] ([Intel XE#6703])
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_big_fb@y-tiled-64bpp-rotate-270.html
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_big_fb@y-tiled-64bpp-rotate-270.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs:
- shard-bmg: [SKIP][66] ([Intel XE#2887]) -> [SKIP][67] ([Intel XE#6703]) +1 other test skip
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs.html
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs.html
* igt@kms_feature_discovery@display-3x:
- shard-bmg: [SKIP][68] ([Intel XE#2373] / [Intel XE#7448]) -> [SKIP][69] ([Intel XE#6703])
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_feature_discovery@display-3x.html
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_feature_discovery@display-3x.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling:
- shard-bmg: [SKIP][70] ([Intel XE#7178] / [Intel XE#7351]) -> [SKIP][71] ([Intel XE#6703])
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling.html
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling.html
* igt@kms_frontbuffer_tracking@fbc-1p-primscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][72] ([Intel XE#4141]) -> [SKIP][73] ([Intel XE#6703]) +1 other test skip
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-indfb-pgflip-blt.html
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][74] ([Intel XE#2311]) -> [SKIP][75] ([Intel XE#6703]) +1 other test skip
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-mmap-wc.html
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-blt:
- shard-bmg: [SKIP][76] ([Intel XE#2313]) -> [SKIP][77] ([Intel XE#6703]) +6 other tests skip
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-blt.html
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-blt.html
* igt@kms_pm_backlight@fade-with-suspend:
- shard-bmg: [SKIP][78] ([Intel XE#7376] / [Intel XE#870]) -> [SKIP][79] ([Intel XE#6703])
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_pm_backlight@fade-with-suspend.html
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_pm_backlight@fade-with-suspend.html
* igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area:
- shard-bmg: [SKIP][80] ([Intel XE#1489]) -> [SKIP][81] ([Intel XE#6703])
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area.html
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area.html
* igt@kms_psr@psr2-primary-render:
- shard-bmg: [SKIP][82] ([Intel XE#2234]) -> [SKIP][83] ([Intel XE#6703])
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@kms_psr@psr2-primary-render.html
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@kms_psr@psr2-primary-render.html
* igt@xe_eudebug@basic-exec-queues:
- shard-bmg: [SKIP][84] ([Intel XE#4837]) -> [SKIP][85] ([Intel XE#6703]) +1 other test skip
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@xe_eudebug@basic-exec-queues.html
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@xe_eudebug@basic-exec-queues.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-rebind:
- shard-bmg: [SKIP][86] ([Intel XE#2322] / [Intel XE#7372]) -> [SKIP][87] ([Intel XE#6703]) +1 other test skip
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-rebind.html
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-rebind.html
* igt@xe_exec_fault_mode@many-multi-queue-rebind-prefetch:
- shard-bmg: [SKIP][88] ([Intel XE#7136]) -> [SKIP][89] ([Intel XE#6703]) +1 other test skip
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@xe_exec_fault_mode@many-multi-queue-rebind-prefetch.html
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@xe_exec_fault_mode@many-multi-queue-rebind-prefetch.html
* igt@xe_exec_multi_queue@one-queue-preempt-mode-userptr-invalidate:
- shard-bmg: [SKIP][90] ([Intel XE#6874]) -> [SKIP][91] ([Intel XE#6703])
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@xe_exec_multi_queue@one-queue-preempt-mode-userptr-invalidate.html
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@xe_exec_multi_queue@one-queue-preempt-mode-userptr-invalidate.html
* igt@xe_exec_threads@threads-multi-queue-cm-shared-vm-rebind:
- shard-bmg: [SKIP][92] ([Intel XE#7138]) -> [SKIP][93] ([Intel XE#6703]) +1 other test skip
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@xe_exec_threads@threads-multi-queue-cm-shared-vm-rebind.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@xe_exec_threads@threads-multi-queue-cm-shared-vm-rebind.html
* igt@xe_query@multigpu-query-topology-l3-bank-mask:
- shard-bmg: [SKIP][94] ([Intel XE#944]) -> [SKIP][95] ([Intel XE#6703])
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b/shard-bmg-2/igt@xe_query@multigpu-query-topology-l3-bank-mask.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/shard-bmg-2/igt@xe_query@multigpu-query-topology-l3-bank-mask.html
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
[Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
[Intel XE#2142]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2142
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2373
[Intel XE#2387]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2387
[Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#3304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3304
[Intel XE#3309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3309
[Intel XE#3321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3321
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#5021]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5021
[Intel XE#5213]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5213
[Intel XE#5545]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5545
[Intel XE#5625]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5625
[Intel XE#579]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/579
[Intel XE#6321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6321
[Intel XE#6557]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6557
[Intel XE#6652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6652
[Intel XE#6665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6665
[Intel XE#6703]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6703
[Intel XE#6747]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6747
[Intel XE#6819]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6819
[Intel XE#6874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6874
[Intel XE#6901]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6901
[Intel XE#6911]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6911
[Intel XE#6964]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6964
[Intel XE#7084]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7084
[Intel XE#7136]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7136
[Intel XE#7138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7138
[Intel XE#7140]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7140
[Intel XE#7178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7178
[Intel XE#7283]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7283
[Intel XE#7329]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7329
[Intel XE#7342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7342
[Intel XE#7349]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7349
[Intel XE#7351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7351
[Intel XE#7354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7354
[Intel XE#7355]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7355
[Intel XE#7368]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7368
[Intel XE#7372]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7372
[Intel XE#7374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7374
[Intel XE#7376]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7376
[Intel XE#7377]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7377
[Intel XE#7417]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7417
[Intel XE#7429]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7429
[Intel XE#7448]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7448
[Intel XE#7456]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7456
[Intel XE#7466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7466
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
Build changes
-------------
* Linux: xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b -> xe-pw-156651v7
IGT_8777: a50285a68dbef0fe11140adef4016a756f57b324 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4653-d995d708ed7b2e37fa25d41783f747e43bb91c5b: d995d708ed7b2e37fa25d41783f747e43bb91c5b
xe-pw-156651v7: 156651v7
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v7/index.html