Intel-XE Archive on lore.kernel.org
* [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects
@ 2026-02-11 15:26 Arvind Yadav
  2026-02-11 15:26 ` [PATCH v5 1/9] drm/xe/uapi: Add UAPI " Arvind Yadav
                   ` (13 more replies)
  0 siblings, 14 replies; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra


This patch series introduces comprehensive support for purgeable buffer objects
in the Xe driver, enabling userspace to provide memory usage hints for better
memory management under system pressure.

Overview:

Purgeable memory allows applications to mark buffer objects as "not currently
needed" (DONTNEED), making them eligible for kernel reclamation during memory
pressure. This helps prevent OOM conditions and enables more efficient GPU
memory utilization for workloads with temporary or regeneratable data (caches,
intermediate results, decoded frames, etc.).

Purgeable BO Lifecycle:
1. WILLNEED (default): BO actively needed, kernel preserves backing store
2. DONTNEED (user hint): BO contents discardable, eligible for purging
3. PURGED (kernel action): Backing store reclaimed during memory pressure

Key Design Principles:
  - i915 compatibility: "Once purged, always purged" semantics - purged BOs
    remain permanently invalid and must be destroyed/recreated
  - Per-VMA state tracking: Each VMA tracks its own purgeable state, BO is
    only marked DONTNEED when ALL VMAs across ALL VMs agree (Thomas Hellström)
  - Safety first: Imported/exported dma-bufs are blocked from the purgeable
    state - with no visibility into external device usage, such BOs always
    behave as WILLNEED (Matt Roper)
  - Multiple protection layers: Validation in madvise, VM bind, mmap, and
    fault handlers
  - Async TLB invalidation: Non-blocking GPU mapping invalidation via
    xe_bo_move_notify() (xe_bo_trigger_rebind() prior to v5)
  - Scratch PTE support: Fault-mode VMs use scratch pages for safe zero reads
    on purged BO access.
  - TTM shrinker integration: Encapsulated helpers manage xe_ttm_tt->purgeable
    flag and shrinker page accounting (shrinkable vs purgeable buckets)

v2 Changes:
  - Reordered patches: Moved shared BO helper before main implementation for
    proper dependency order
  - Fixed reference counting in mmap offset validation (use drm_gem_object_put)
  - Removed incorrect claims about madvise(WILLNEED) restoring purged BOs
  - Fixed error code documentation inconsistencies
  - Initialize purge_state_val fields to prevent kernel memory leaks
  - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas Hellström)
  - Add NULL rebind with scratch PTEs for fault mode (Thomas Hellström)
  - Implement i915-compatible retained field logic (Thomas Hellström)
  - Skip BO validation for purged BOs in page fault handler (crash fix)
  - Add scratch VM check in page fault path (non-scratch VMs fail fault)

v3 Changes (addressing Matt and Thomas Hellström feedback):
  - Per-VMA purgeable state tracking: Added xe_vma->purgeable_state field
  - Complete VMA check: xe_bo_all_vmas_dontneed() walks all VMAs across all
    VMs to ensure unanimous DONTNEED before marking BO purgeable
  - VMA unbind recheck: Added xe_bo_recheck_purgeable_on_vma_unbind() to
    re-evaluate BO state when VMAs are destroyed
  - Block external dma-bufs: Added xe_bo_is_external_dmabuf() check using
    drm_gem_is_imported() and obj->dma_buf to prevent purging imported/exported BOs
  - Consistent lockdep enforcement: Added xe_bo_assert_held() to all helpers
    that access madv_purgeable state
  - Simplified page table logic: Renamed is_null to is_null_or_purged in
    xe_pt_stage_bind_entry() - purged BOs treated identically to null VMAs
  - Removed unnecessary checks: Dropped redundant "&& bo" check in xe_ttm_bo_purge()
  - Xe-specific warnings: Changed drm_warn() to XE_WARN_ON() in purge path
  - Moved purge checks under locks: Purge state validation now done after
    acquiring dma-resv lock in vma_lock_and_validate() and xe_pagefault_begin()
  - Race-free fault handling: Removed unlocked purge check from
    xe_pagefault_handle_vma(), moved to locked xe_pagefault_begin()
  - Shrinker helper functions: Added xe_bo_set_purgeable_shrinker() and
    xe_bo_clear_purgeable_shrinker() to encapsulate TTM purgeable flag updates
    and shrinker page accounting, improving code clarity and maintainability

v4 Changes (addressing Matt and Thomas Hellström feedback):
  - UAPI: Removed '__u64 reserved' field from purge_state_val union to fit
    16-byte size constraint (Matt)
  - Changed madv_purgeable from atomic_t to u32 across all patches (Matt)
  - CPU fault handling: Added purged check to fastpath (xe_bo_cpu_fault_fastpath)
    to prevent hang when accessing existing mmap of purged BO

v5 Changes (addressing Matt and Thomas Hellström feedback):
  - Add locking documentation to madv_purgeable field comment (Matt)
  - Introduce xe_bo_set_purgeable_state() helper (void return) to centralize
    madv_purgeable updates with xe_bo_assert_held() and state transition
    validation using explicit enum checks (no transition out of PURGED) (Matt)
  - Make xe_ttm_bo_purge() return int and propagate failures from
    xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g. no_wait_gpu
    paths) rather than silently ignoring (Matt)
  - Replace drm_WARN_ON with xe_assert for better Xe-specific assertions (Matt)
  - Hook purgeable handling into madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
    instead of special-case path in xe_vm_madvise_ioctl() (Matt)
  - Track purgeable retained return via xe_madvise_details and perform
    copy_to_user() from xe_madvise_details_fini() after locks are dropped (Matt)
  - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
    __maybe_unused on madvise_purgeable() to maintain bisectability until
    shrinker integration is complete in final patch (Matt)
  - Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
    right after drm_gpuva_unlink() where we already hold the BO lock,
    drop the trylock-based late destroy path (Matt)
  - Move purgeable_state into xe_vma_mem_attr with the other madvise
    attributes (Matt)
  - Drop READ_ONCE since the BO lock already protects us (Matt)
  - Keep returning false when there are no VMAs - otherwise we'd mark
    BOs purgeable without any user hint (Matt)
  - Use struct xe_vma_lock_and_validate_flags instead of multiple bool
    parameters to improve readability and prevent argument transposition (Matt)
  - Fix LRU crash while running shrink test
  - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
  - Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)

Arvind Yadav (8):
  drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
  drm/xe/madvise: Implement purgeable buffer object support
  drm/xe/bo: Handle CPU faults on purged buffer objects
  drm/xe/vm: Prevent binding of purged buffer objects
  drm/xe/madvise: Implement per-VMA purgeable state tracking
  drm/xe/madvise: Block imported and exported dma-bufs
  drm/xe/bo: Add purgeable shrinker state helpers
  drm/xe/madvise: Enable purgeable buffer object IOCTL support

Himal Prasad Ghimiray (1):
  drm/xe/uapi: Add UAPI support for purgeable buffer objects

 drivers/gpu/drm/xe/xe_bo.c         | 187 ++++++++++++++++++++--
 drivers/gpu/drm/xe/xe_bo.h         |  60 +++++++
 drivers/gpu/drm/xe/xe_bo_types.h   |   6 +
 drivers/gpu/drm/xe/xe_pagefault.c  |  12 ++
 drivers/gpu/drm/xe/xe_pt.c         |  40 ++++-
 drivers/gpu/drm/xe/xe_vm.c         |  90 +++++++++--
 drivers/gpu/drm/xe/xe_vm_madvise.c | 249 +++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_vm_madvise.h |   3 +
 drivers/gpu/drm/xe/xe_vm_types.h   |  11 ++
 include/uapi/drm/xe_drm.h          |  44 +++++
 10 files changed, 667 insertions(+), 35 deletions(-)

-- 
2.43.0



* [PATCH v5 1/9] drm/xe/uapi: Add UAPI support for purgeable buffer objects
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
@ 2026-02-11 15:26 ` Arvind Yadav
  2026-02-24 10:50   ` Thomas Hellström
  2026-02-26 17:58   ` Souza, Jose
  2026-02-11 15:26 ` [PATCH v5 2/9] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
                   ` (12 subsequent siblings)
  13 siblings, 2 replies; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra

From: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>

Extend the DRM_XE_MADVISE ioctl to support purgeable buffer object
management by adding DRM_XE_VMA_ATTR_PURGEABLE_STATE attribute type.

This allows userspace applications to provide memory usage hints to
the kernel for better memory management under pressure:

- WILLNEED: Buffer is needed and should not be purged. If the BO was
  previously purged, retained field returns 0 indicating backing store
  was lost (once purged, always purged semantics matching i915).

- DONTNEED: Buffer is not currently needed and may be purged by the
  kernel under memory pressure to free resources. Only applies to
  non-shared BOs.

The implementation includes a 'retained' output field (matching i915's
drm_i915_gem_madvise.retained) that indicates whether the BO's backing
store still exists (1) or has been purged (0).

v2:
  - Add PURGED state for read-only status, change ioctl to DRM_IOWR,
    add retained field for i915 compatibility

v3:
  - UAPI rule should not be changed (Matthew Brost)
  - Make 'retained' a userptr (Matthew Brost)

v4:
  - You cannot make this part of the union (purge_state_val) larger
    than the existing union (16 bytes). So just drop the '__u64 reserved'
    field. (Matt)

v5:
  - Update UAPI documentation to clarify retained must be initialized
    to 0 (Thomas)

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 include/uapi/drm/xe_drm.h | 44 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 077e66a682e2..3e2f145e7f8f 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -2099,6 +2099,7 @@ struct drm_xe_madvise {
 #define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC	0
 #define DRM_XE_MEM_RANGE_ATTR_ATOMIC		1
 #define DRM_XE_MEM_RANGE_ATTR_PAT		2
+#define DRM_XE_VMA_ATTR_PURGEABLE_STATE		3
 	/** @type: type of attribute */
 	__u32 type;
 
@@ -2189,6 +2190,49 @@ struct drm_xe_madvise {
 			/** @pat_index.reserved: Reserved */
 			__u64 reserved;
 		} pat_index;
+
+		/**
+		 * @purge_state_val: Purgeable state configuration
+		 *
+		 * Used when @type == DRM_XE_VMA_ATTR_PURGEABLE_STATE.
+		 *
+		 * Configures the purgeable state of buffer objects in the specified
+		 * virtual address range. This allows applications to hint to the kernel
+		 * about the BO's usage patterns for better memory management.
+		 *
+		 * Supported values for @purge_state_val.val:
+		 *  - DRM_XE_VMA_PURGEABLE_STATE_WILLNEED (0): Marks BO as needed.
+		 *    If BO was purged, returns retained=0 (backing store lost).
+		 *
+		 *  - DRM_XE_VMA_PURGEABLE_STATE_DONTNEED (1): Hints that BO is not
+		 *    currently needed. Kernel may purge it under memory pressure.
+		 *    Only applies to non-shared BOs. Returns retained=1 if not purged.
+		 */
+		struct {
+#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED	0
+#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED	1
+			/** @purge_state_val.val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
+			__u32 val;
+
+			/** @purge_state_val.pad: MBZ */
+			__u32 pad;
+			/**
+			 * @purge_state_val.retained: Pointer to output field for backing
+			 * store status.
+			 *
+			 * Userspace must initialize this field to 0 before the
+			 * ioctl. Kernel writes to it after the operation:
+			 * - 1 if backing store exists (not purged)
+			 * - 0 if backing store was purged
+			 *
+			 * If userspace fails to initialize to 0, ioctl returns -EINVAL.
+			 * This ensures a safe default (0 = assume purged) if kernel
+			 * cannot write the result.
+			 *
+			 * Similar to i915's drm_i915_gem_madvise.retained field.
+			 */
+			__u64 retained;
+		} purge_state_val;
 	};
 
 	/** @reserved: Reserved */
-- 
2.43.0



* [PATCH v5 2/9] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
  2026-02-11 15:26 ` [PATCH v5 1/9] drm/xe/uapi: Add UAPI " Arvind Yadav
@ 2026-02-11 15:26 ` Arvind Yadav
  2026-02-11 16:00   ` Matthew Brost
  2026-02-11 15:26 ` [PATCH v5 3/9] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra

Add infrastructure for tracking the purgeable state of buffer objects:

Introduce enum xe_madv_purgeable_state with three states:
   - XE_MADV_PURGEABLE_WILLNEED (0): BO is needed and should not be
     purged. This is the default state for all BOs.

   - XE_MADV_PURGEABLE_DONTNEED (1): BO is not currently needed and
     can be purged by the kernel under memory pressure to reclaim
     resources. Only non-shared BOs can be marked as DONTNEED.

   - XE_MADV_PURGEABLE_PURGED (2): BO has been purged by the kernel.
     Accessing a purged BO results in error. Follows i915 semantics
     where once purged, the BO remains permanently invalid ("once
     purged, always purged").

Add a madv_purgeable field to struct xe_bo to track the purgeable state
across concurrent access paths.

v2:
  - Add xe_bo_is_purged() helper, improve state documentation

v3:
  - Add kernel-doc (Matthew Brost)
  - Add the new helper xe_bo_madv_is_dontneed() (Matthew Brost)

v4:
  - @madv_purgeable atomic_t → u32 change across all relevant
    patches (Matt)

v5:
  - Add locking documentation to madv_purgeable field comment (Matt)

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 drivers/gpu/drm/xe/xe_bo.h       | 56 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_bo_types.h |  6 ++++
 2 files changed, 62 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index c914ab719f20..ea157d74e2fb 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -87,6 +87,28 @@
 
 #define XE_PCI_BARRIER_MMAP_OFFSET	(0x50 << XE_PTE_SHIFT)
 
+/**
+ * enum xe_madv_purgeable_state - Buffer object purgeable state enumeration
+ *
+ * This enum defines the possible purgeable states for a buffer object,
+ * allowing userspace to provide memory usage hints to the kernel for
+ * better memory management under pressure.
+ *
+ * @XE_MADV_PURGEABLE_WILLNEED: The buffer object is needed and should not be purged.
+ * This is the default state.
+ * @XE_MADV_PURGEABLE_DONTNEED: The buffer object is not currently needed and can be
+ * purged by the kernel under memory pressure.
+ * @XE_MADV_PURGEABLE_PURGED: The buffer object has been purged by the kernel.
+ *
+ * Accessing a purged buffer will result in an error. Per i915 semantics,
+ * once purged, a BO remains permanently invalid and must be destroyed and recreated.
+ */
+enum xe_madv_purgeable_state {
+	XE_MADV_PURGEABLE_WILLNEED,
+	XE_MADV_PURGEABLE_DONTNEED,
+	XE_MADV_PURGEABLE_PURGED,
+};
+
 struct sg_table;
 
 struct xe_bo *xe_bo_alloc(void);
@@ -215,6 +237,40 @@ static inline bool xe_bo_is_protected(const struct xe_bo *bo)
 	return bo->pxp_key_instance;
 }
 
+/**
+ * xe_bo_is_purged() - Check if buffer object has been purged
+ * @bo: The buffer object to check
+ *
+ * Checks if the buffer object's backing store has been discarded by the
+ * kernel due to memory pressure after being marked as purgeable (DONTNEED).
+ * Once purged, the BO cannot be restored and any attempt to use it will fail.
+ *
+ * Context: Caller must hold the BO's dma-resv lock
+ * Return: true if the BO has been purged, false otherwise
+ */
+static inline bool xe_bo_is_purged(struct xe_bo *bo)
+{
+	xe_bo_assert_held(bo);
+	return bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED;
+}
+
+/**
+ * xe_bo_madv_is_dontneed() - Check if BO is marked as DONTNEED
+ * @bo: The buffer object to check
+ *
+ * Checks if userspace has marked this BO as DONTNEED (i.e., its contents
+ * are not currently needed and can be discarded under memory pressure).
+ * This is used internally to decide whether a BO is eligible for purging.
+ *
+ * Context: Caller must hold the BO's dma-resv lock
+ * Return: true if the BO is marked DONTNEED, false otherwise
+ */
+static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
+{
+	xe_bo_assert_held(bo);
+	return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
+}
+
 static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
 {
 	if (likely(bo)) {
diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index d4fe3c8dca5b..ff8317bfc1ae 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -108,6 +108,12 @@ struct xe_bo {
 	 * from default
 	 */
 	u64 min_align;
+
+	/**
+	 * @madv_purgeable: user space advise on BO purgeability, protected
+	 * by BO's dma-resv lock.
+	 */
+	u32 madv_purgeable;
 };
 
 #endif
-- 
2.43.0



* [PATCH v5 3/9] drm/xe/madvise: Implement purgeable buffer object support
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
  2026-02-11 15:26 ` [PATCH v5 1/9] drm/xe/uapi: Add UAPI " Arvind Yadav
  2026-02-11 15:26 ` [PATCH v5 2/9] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
@ 2026-02-11 15:26 ` Arvind Yadav
  2026-02-24 12:21   ` Thomas Hellström
  2026-02-11 15:26 ` [PATCH v5 4/9] drm/xe/bo: Handle CPU faults on purged buffer objects Arvind Yadav
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra

Add the core implementation for purgeable buffer objects, enabling memory
reclamation of user-designated DONTNEED buffers during eviction. This
allows userspace applications to provide memory usage hints to the kernel
for better memory management under pressure.

This patch implements the purge operation and state machine transitions:

Purgeable States (from xe_madv_purgeable_state):
 - WILLNEED (0): BO should be retained, actively used
 - DONTNEED (1): BO eligible for purging, not currently needed
 - PURGED (2): BO backing store reclaimed, permanently invalid

Design Rationale:
  - Async TLB invalidation via trigger_rebind (no blocking xe_vm_invalidate_vma)
  - i915 compatibility: retained field, "once purged always purged" semantics
  - Shared BO protection prevents multi-process memory corruption
  - Scratch PTE reuse avoids new infrastructure, safe for fault mode

Note: The madvise_purgeable() function is implemented but not hooked into
the IOCTL handler (madvise_funcs[] entry is NULL) to maintain bisectability.
The feature will be enabled in the final patch when all supporting
infrastructure (shrinker, per-VMA tracking) is complete.

v2:
  - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas Hellström)
  - Add NULL rebind with scratch PTEs for fault mode (Thomas Hellström)
  - Implement i915-compatible retained field logic (Thomas Hellström)
  - Skip BO validation for purged BOs in page fault handler (crash fix)
  - Add scratch VM check in page fault path (non-scratch VMs fail fault)
  - Force clear_pt for non-scratch VMs to avoid phys addr 0 mapping (review fix)
  - Add !is_purged check to resource cursor setup to prevent stale access

v3:
  - Rebase as xe_gt_pagefault.c is gone upstream and replaced
    with xe_pagefault.c (Matthew Brost)
  - Use Xe-specific warnings (Matthew Brost)
  - Call helpers for madv_purgeable access (Matthew Brost)
  - Remove redundant bo NULL check (Matthew Brost)
  - Use xe_bo_assert_held instead of dma-resv assert (Matthew Brost)
  - Move the xe_bo_is_purged check under the dma-resv lock (Matt)
  - Drop is_purged from xe_pt_stage_bind_entry and just set is_null to true
    for purged BOs; rename s/is_null/is_null_or_purged (Matt)
  - UAPI rules should not be changed (Matthew Brost)
  - Make 'retained' a userptr (Matthew Brost)

v4:
  - @madv_purgeable atomic_t → u32 change across all relevant patches (Matt)

v5:
  - Introduce xe_bo_set_purgeable_state() helper (void return) to centralize
    madv_purgeable updates with xe_bo_assert_held() and state transition
    validation using explicit enum checks (no transition out of PURGED) (Matt)
  - Make xe_ttm_bo_purge() return int and propagate failures from
    xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g. no_wait_gpu
    paths) rather than silently ignoring (Matt)
  - Replace drm_WARN_ON with xe_assert for better Xe-specific assertions (Matt)
  - Hook purgeable handling into madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
    instead of special-case path in xe_vm_madvise_ioctl() (Matt)
  - Track purgeable retained return via xe_madvise_details and perform
    copy_to_user() from xe_madvise_details_fini() after locks are dropped (Matt)
  - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
    __maybe_unused on madvise_purgeable() to maintain bisectability until
    shrinker integration is complete in final patch (Matt)
  - Use put_user() instead of copy_to_user() for single u32 retained value (Thomas)
  - Return -EFAULT from ioctl if put_user() fails (Thomas)
  - Validate userspace initialized retained to 0 before ioctl, ensuring safe
    default (0 = "assume purged") if put_user() fails (Thomas)
  - Refactor error handling: separate fallible put_user from infallible cleanup
  - xe_madvise_purgeable_retained_to_user(): separate helper for fallible put_user
  - Call put_user() after releasing all locks to avoid circular dependencies
  - Use xe_bo_move_notify() instead of xe_bo_trigger_rebind() in xe_ttm_bo_purge()
    for proper abstraction - handles vunmap, dma-buf notifications, and VRAM
    userfault cleanup (Thomas)
  - Fix LRU crash while running shrink test
  - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 drivers/gpu/drm/xe/xe_bo.c         | 106 ++++++++++++++++++++---
 drivers/gpu/drm/xe/xe_bo.h         |   2 +
 drivers/gpu/drm/xe/xe_pagefault.c  |  12 +++
 drivers/gpu/drm/xe/xe_pt.c         |  40 +++++++--
 drivers/gpu/drm/xe/xe_vm.c         |  20 ++++-
 drivers/gpu/drm/xe/xe_vm_madvise.c | 133 +++++++++++++++++++++++++++++
 6 files changed, 292 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 8bf16d60b9a5..87cde4b2fe59 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -835,6 +835,83 @@ static int xe_bo_move_notify(struct xe_bo *bo,
 	return 0;
 }
 
+/**
+ * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
+ * @bo: Buffer object
+ * @new_state: New purgeable state
+ *
+ * Sets the purgeable state with lockdep assertions and validates state
+ * transitions. Once a BO is PURGED, it cannot transition to any other state.
+ * Invalid transitions are caught with xe_assert().
+ */
+void xe_bo_set_purgeable_state(struct xe_bo *bo,
+			       enum xe_madv_purgeable_state new_state)
+{
+	struct xe_device *xe = xe_bo_device(bo);
+
+	xe_bo_assert_held(bo);
+
+	/* Validate state is one of the known values */
+	xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
+		      new_state == XE_MADV_PURGEABLE_DONTNEED ||
+		      new_state == XE_MADV_PURGEABLE_PURGED);
+
+	/* Once purged, always purged - cannot transition out */
+	xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
+			new_state != XE_MADV_PURGEABLE_PURGED));
+
+	bo->madv_purgeable = new_state;
+}
+
+/**
+ * xe_ttm_bo_purge() - Purge buffer object backing store
+ * @ttm_bo: The TTM buffer object to purge
+ * @ctx: TTM operation context
+ *
+ * This function purges the backing store of a BO marked as DONTNEED and
+ * triggers rebind to invalidate stale GPU mappings. For fault-mode VMs,
+ * this zaps the PTEs. The next GPU access will trigger a page fault and
+ * perform NULL rebind (scratch pages or clear PTEs based on VM config).
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+static int xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
+{
+	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
+	struct ttm_placement place = {};
+	int ret;
+
+	xe_bo_assert_held(bo);
+
+	if (!ttm_bo->ttm)
+		return 0;
+
+	if (!xe_bo_madv_is_dontneed(bo))
+		return 0;
+
+	ret = ttm_bo_validate(ttm_bo, &place, ctx);
+	if (ret)
+		return ret;
+
+	/*
+	 * Use the standard pre-move hook so we share the same cleanup/invalidate
+	 * path as migrations: drop any CPU vmap and schedule the necessary GPU
+	 * unbind/rebind work.
+	 *
+	 * This may fail in no-wait contexts (fault/shrinker) or if the BO is
+	 * pinned. Keep state unchanged on failure so we don't end up "PURGED"
+	 * with stale mappings.
+	 */
+	ret = xe_bo_move_notify(bo, ctx);
+	if (ret)
+		return ret;
+
+	/* Commit the state transition only once invalidation was queued */
+	xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_PURGED);
+
+	return 0;
+}
+
 static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
 		      struct ttm_operation_ctx *ctx,
 		      struct ttm_resource *new_mem,
@@ -854,6 +931,20 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
 				  ttm && ttm_tt_is_populated(ttm)) ? true : false;
 	int ret = 0;
 
+	/*
+	 * Purge only non-shared BOs explicitly marked DONTNEED by userspace.
+	 * The move_notify callback will handle invalidation asynchronously.
+	 */
+	if (evict && xe_bo_madv_is_dontneed(bo)) {
+		ret = xe_ttm_bo_purge(ttm_bo, ctx);
+		if (ret)
+			return ret;
+
+		/* Free the unused eviction destination resource */
+		ttm_resource_free(ttm_bo, &new_mem);
+		return 0;
+	}
+
 	/* Bo creation path, moving to system or TT. */
 	if ((!old_mem && ttm) && !handle_system_ccs) {
 		if (new_mem->mem_type == XE_PL_TT)
@@ -1603,18 +1694,6 @@ static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo)
 	}
 }
 
-static void xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
-{
-	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
-
-	if (ttm_bo->ttm) {
-		struct ttm_placement place = {};
-		int ret = ttm_bo_validate(ttm_bo, &place, ctx);
-
-		drm_WARN_ON(&xe->drm, ret);
-	}
-}
-
 static void xe_ttm_bo_swap_notify(struct ttm_buffer_object *ttm_bo)
 {
 	struct ttm_operation_ctx ctx = {
@@ -2195,6 +2274,9 @@ struct xe_bo *xe_bo_init_locked(struct xe_device *xe, struct xe_bo *bo,
 #endif
 	INIT_LIST_HEAD(&bo->vram_userfault_link);
 
+	/* Initialize purge advisory state */
+	bo->madv_purgeable = XE_MADV_PURGEABLE_WILLNEED;
+
 	drm_gem_private_object_init(&xe->drm, &bo->ttm.base, size);
 
 	if (resv) {
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index ea157d74e2fb..0d9f25b51eb2 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -271,6 +271,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
 	return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
 }
 
+void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
+
 static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
 {
 	if (likely(bo)) {
diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index 6bee53d6ffc3..e3ace179e9cf 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -59,6 +59,18 @@ static int xe_pagefault_begin(struct drm_exec *exec, struct xe_vma *vma,
 	if (!bo)
 		return 0;
 
+	/*
+	 * Check if BO is purged (under dma-resv lock).
+	 * For purged BOs:
+	 * - Scratch VMs: Skip validation, rebind will use scratch PTEs
+	 * - Non-scratch VMs: FAIL the page fault (no scratch page available)
+	 */
+	if (unlikely(xe_bo_is_purged(bo))) {
+		if (!xe_vm_has_scratch(vm))
+			return -EACCES;
+		return 0;
+	}
+
 	return need_vram_move ? xe_bo_migrate(bo, vram->placement, NULL, exec) :
 		xe_bo_validate(bo, vm, true, exec);
 }
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 6703a7049227..27aedee95470 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -533,20 +533,26 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
 	/* Is this a leaf entry ?*/
 	if (level == 0 || xe_pt_hugepte_possible(addr, next, level, xe_walk)) {
 		struct xe_res_cursor *curs = xe_walk->curs;
-		bool is_null = xe_vma_is_null(xe_walk->vma);
-		bool is_vram = is_null ? false : xe_res_is_vram(curs);
+		struct xe_bo *bo = xe_vma_bo(xe_walk->vma);
+		bool is_null_or_purged = xe_vma_is_null(xe_walk->vma) ||
+					 (bo && xe_bo_is_purged(bo));
+		bool is_vram = is_null_or_purged ? false : xe_res_is_vram(curs);
 
 		XE_WARN_ON(xe_walk->va_curs_start != addr);
 
 		if (xe_walk->clear_pt) {
 			pte = 0;
 		} else {
-			pte = vm->pt_ops->pte_encode_vma(is_null ? 0 :
+			/*
+			 * For purged BOs, treat like null VMAs - pass address 0.
+			 * The pte_encode_vma will set XE_PTE_NULL flag for scratch mapping.
+			 */
+			pte = vm->pt_ops->pte_encode_vma(is_null_or_purged ? 0 :
 							 xe_res_dma(curs) +
 							 xe_walk->dma_offset,
 							 xe_walk->vma,
 							 pat_index, level);
-			if (!is_null)
+			if (!is_null_or_purged)
 				pte |= is_vram ? xe_walk->default_vram_pte :
 					xe_walk->default_system_pte;
 
@@ -570,7 +576,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
 		if (unlikely(ret))
 			return ret;
 
-		if (!is_null && !xe_walk->clear_pt)
+		if (!is_null_or_purged && !xe_walk->clear_pt)
 			xe_res_next(curs, next - addr);
 		xe_walk->va_curs_start = next;
 		xe_walk->vma->gpuva.flags |= (XE_VMA_PTE_4K << level);
@@ -723,6 +729,26 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
 	};
 	struct xe_pt *pt = vm->pt_root[tile->id];
 	int ret;
+	bool is_purged = false;
+
+	/*
+	 * Check if BO is purged:
+	 * - Scratch VMs: Use scratch PTEs (XE_PTE_NULL) for safe zero reads
+	 * - Non-scratch VMs: Clear PTEs to zero (non-present) to avoid mapping to phys addr 0
+	 *
+	 * For non-scratch VMs, we force clear_pt=true so leaf PTEs become completely
+	 * zero instead of creating a PRESENT mapping to physical address 0.
+	 */
+	if (bo && xe_bo_is_purged(bo)) {
+		is_purged = true;
+
+		/*
+		 * For non-scratch VMs, a NULL rebind should use zero PTEs
+		 * (non-present), not a present PTE to phys 0.
+		 */
+		if (!xe_vm_has_scratch(vm))
+			xe_walk.clear_pt = true;
+	}
 
 	if (range) {
 		/* Move this entire thing to xe_svm.c? */
@@ -758,11 +784,11 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
 	}
 
 	xe_walk.default_vram_pte |= XE_PPGTT_PTE_DM;
-	xe_walk.dma_offset = bo ? vram_region_gpu_offset(bo->ttm.resource) : 0;
+	xe_walk.dma_offset = (bo && !is_purged) ? vram_region_gpu_offset(bo->ttm.resource) : 0;
 	if (!range)
 		xe_bo_assert_held(bo);
 
-	if (!xe_vma_is_null(vma) && !range) {
+	if (!xe_vma_is_null(vma) && !range && !is_purged) {
 		if (xe_vma_is_userptr(vma))
 			xe_res_first_dma(to_userptr_vma(vma)->userptr.pages.dma_addr, 0,
 					 xe_vma_size(vma), &curs);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 8fe54a998385..21a2527ca064 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -326,6 +326,7 @@ void xe_vm_kill(struct xe_vm *vm, bool unlocked)
 static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
 {
 	struct xe_vm *vm = gpuvm_to_vm(vm_bo->vm);
+	struct xe_bo *bo = gem_to_xe_bo(vm_bo->obj);
 	struct drm_gpuva *gpuva;
 	int ret;
 
@@ -334,10 +335,16 @@ static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
 		list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind,
 			       &vm->rebind_list);
 
+	/* Skip re-populating purged BOs, rebind maps scratch pages. */
+	if (xe_bo_is_purged(bo)) {
+		vm_bo->evicted = false;
+		return 0;
+	}
+
 	if (!try_wait_for_completion(&vm->xe->pm_block))
 		return -EAGAIN;
 
-	ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false, exec);
+	ret = xe_bo_validate(bo, vm, false, exec);
 	if (ret)
 		return ret;
 
@@ -1358,6 +1365,9 @@ static u64 xelp_pte_encode_bo(struct xe_bo *bo, u64 bo_offset,
 static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
 			       u16 pat_index, u32 pt_level)
 {
+	struct xe_bo *bo = xe_vma_bo(vma);
+	struct xe_vm *vm = xe_vma_vm(vma);
+
 	pte |= XE_PAGE_PRESENT;
 
 	if (likely(!xe_vma_read_only(vma)))
@@ -1366,7 +1376,13 @@ static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
 	pte |= pte_encode_pat_index(pat_index, pt_level);
 	pte |= pte_encode_ps(pt_level);
 
-	if (unlikely(xe_vma_is_null(vma)))
+	/*
+	 * NULL PTEs redirect to scratch page (return zeros on read).
+	 * Set for: 1) explicit null VMAs, 2) purged BOs on scratch VMs.
+	 * Never set NULL flag without scratch page - causes undefined behavior.
+	 */
+	if (unlikely(xe_vma_is_null(vma) ||
+		     (bo && xe_bo_is_purged(bo) && xe_vm_has_scratch(vm))))
 		pte |= XE_PTE_NULL;
 
 	return pte;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index add9a6ca2390..d9cfba7bfe0b 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -25,6 +25,8 @@ struct xe_vmas_in_madvise_range {
 /**
  * struct xe_madvise_details - Argument to madvise_funcs
  * @dpagemap: Reference-counted pointer to a struct drm_pagemap.
+ * @has_purged_bo: Track if any BO was purged (for purgeable state)
+ * @retained_ptr: User pointer for retained value (for purgeable state)
  *
  * The madvise IOCTL handler may, in addition to the user-space
  * args, have additional info to pass into the madvise_func that
@@ -33,6 +35,8 @@ struct xe_vmas_in_madvise_range {
  */
 struct xe_madvise_details {
 	struct drm_pagemap *dpagemap;
+	bool has_purged_bo;
+	u64 retained_ptr;
 };
 
 static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
@@ -179,6 +183,67 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
 	}
 }
 
+/**
+ * madvise_purgeable - Handle purgeable buffer object advice
+ * @xe: XE device
+ * @vm: VM
+ * @vmas: Array of VMAs
+ * @num_vmas: Number of VMAs
+ * @op: Madvise operation
+ * @details: Madvise details for return values
+ *
+ * Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was purged
+ * in details->has_purged_bo for later copy to userspace.
+ *
+ * Note: Marked __maybe_unused until hooked into madvise_funcs[] in the
+ * final patch to maintain bisectability. The NULL placeholder in the
+ * array ensures proper -EINVAL return for userspace until all supporting
+ * infrastructure (shrinker, per-VMA tracking) is complete.
+ */
+static void __maybe_unused madvise_purgeable(struct xe_device *xe,
+					     struct xe_vm *vm,
+					     struct xe_vma **vmas,
+					     int num_vmas,
+					     struct drm_xe_madvise *op,
+					     struct xe_madvise_details *details)
+{
+	int i;
+
+	xe_assert(vm->xe, op->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE);
+
+	for (i = 0; i < num_vmas; i++) {
+		struct xe_bo *bo = xe_vma_bo(vmas[i]);
+
+		if (!bo)
+			continue;
+
+		/* BO must be locked before modifying madv state */
+		xe_bo_assert_held(bo);
+
+		/*
+		 * Once purged, always purged. Cannot transition back to WILLNEED.
+		 * This matches i915 semantics where purged BOs are permanently invalid.
+		 */
+		if (xe_bo_is_purged(bo)) {
+			details->has_purged_bo = true;
+			continue;
+		}
+
+		switch (op->purge_state_val.val) {
+		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
+			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+			break;
+		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
+			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+			break;
+		default:
+			drm_warn(&vm->xe->drm, "Invalid madvise value = %d\n",
+				 op->purge_state_val.val);
+			return;
+		}
+	}
+}
+
 typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
 			     struct xe_vma **vmas, int num_vmas,
 			     struct drm_xe_madvise *op,
@@ -188,6 +253,12 @@ static const madvise_func madvise_funcs[] = {
 	[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
 	[DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
 	[DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
+	/*
+	 * Purgeable support implemented but not enabled yet to maintain
+	 * bisectability. Will be set to madvise_purgeable() in final patch
+	 * when all infrastructure (shrinker, VMA tracking) is complete.
+	 */
+	[DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
 };
 
 static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
@@ -306,6 +377,16 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
 			return false;
 		break;
 	}
+	case DRM_XE_VMA_ATTR_PURGEABLE_STATE:
+	{
+		u32 val = args->purge_state_val.val;
+
+		if (XE_IOCTL_DBG(xe, !(val == DRM_XE_VMA_PURGEABLE_STATE_WILLNEED ||
+				       val == DRM_XE_VMA_PURGEABLE_STATE_DONTNEED)))
+			return false;
+
+		break;
+	}
 	default:
 		if (XE_IOCTL_DBG(xe, 1))
 			return false;
@@ -324,6 +405,12 @@ static int xe_madvise_details_init(struct xe_vm *vm, const struct drm_xe_madvise
 
 	memset(details, 0, sizeof(*details));
 
+	/* Store retained pointer for purgeable state */
+	if (args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE) {
+		details->retained_ptr = args->purge_state_val.retained;
+		return 0;
+	}
+
 	if (args->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC) {
 		int fd = args->preferred_mem_loc.devmem_fd;
 		struct drm_pagemap *dpagemap;
@@ -352,6 +439,21 @@ static void xe_madvise_details_fini(struct xe_madvise_details *details)
 	drm_pagemap_put(details->dpagemap);
 }
 
+static int xe_madvise_purgeable_retained_to_user(const struct xe_madvise_details *details)
+{
+	u32 retained;
+
+	if (!details->retained_ptr)
+		return 0;
+
+	retained = !details->has_purged_bo;
+
+	if (put_user(retained, (u32 __user *)u64_to_user_ptr(details->retained_ptr)))
+		return -EFAULT;
+
+	return 0;
+}
+
 static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
 				   int num_vmas, u32 atomic_val)
 {
@@ -409,6 +511,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
 	struct xe_vm *vm;
 	struct drm_exec exec;
 	int err, attr_type;
+	bool do_retained;
 
 	vm = xe_vm_lookup(xef, args->vm_id);
 	if (XE_IOCTL_DBG(xe, !vm))
@@ -419,6 +522,25 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
 		goto put_vm;
 	}
 
+	/* Cache whether we need to write retained, and validate it's initialized to 0 */
+	do_retained = args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE &&
+		      args->purge_state_val.retained;
+	if (do_retained) {
+		u32 retained;
+		u32 __user *retained_ptr;
+
+		retained_ptr = u64_to_user_ptr(args->purge_state_val.retained);
+		if (get_user(retained, retained_ptr)) {
+			err = -EFAULT;
+			goto put_vm;
+		}
+
+		if (XE_IOCTL_DBG(xe, retained != 0)) {
+			err = -EINVAL;
+			goto put_vm;
+		}
+	}
+
 	xe_svm_flush(vm);
 
 	err = down_write_killable(&vm->lock);
@@ -474,6 +596,13 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
 	}
 
 	attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
+
+	/* Ensure the madvise function exists for this type */
+	if (!madvise_funcs[attr_type]) {
+		err = -EINVAL;
+		goto err_fini;
+	}
+
 	madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args,
 				 &details);
 
@@ -491,6 +620,10 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
 	xe_madvise_details_fini(&details);
 unlock_vm:
 	up_write(&vm->lock);
+
+	/* Write retained value to user after releasing all locks */
+	if (!err && do_retained)
+		err = xe_madvise_purgeable_retained_to_user(&details);
 put_vm:
 	xe_vm_put(vm);
 	return err;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v5 4/9] drm/xe/bo: Handle CPU faults on purged buffer objects
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (2 preceding siblings ...)
  2026-02-11 15:26 ` [PATCH v5 3/9] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
@ 2026-02-11 15:26 ` Arvind Yadav
  2026-02-11 15:26 ` [PATCH v5 5/9] drm/xe/vm: Prevent binding of " Arvind Yadav
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra

Return an error when the CPU attempts to access a purged buffer object.
Purged BOs have their backing store reclaimed by the kernel, making
CPU access invalid. The fault handler returns SIGBUS to userspace,
matching i915 semantics.

The purged check is added to both CPU fault paths:
- Fastpath (xe_bo_cpu_fault_fastpath): Returns error immediately
  under dma-resv lock, preventing attempts to migrate/validate freed pages
- Slowpath (xe_bo_cpu_fault): Returns -EFAULT under drm_exec lock,
  converted to VM_FAULT_SIGBUS

Without the fastpath check, accessing existing mmap mappings of purged BOs
would trigger xe_bo_fault_migrate() on freed backing store, causing kernel
hangs or crashes.

v2:
  - Added xe_bo_is_purged(bo) instead of atomic_read.
  - Avoids leaks and keeps drm_dev_exit() while returning.

v3:
  - Move xe_bo_is_purged check under a dma-resv lock (Matthew Brost)

v4:
  - Add purged check to fastpath (xe_bo_cpu_fault_fastpath) to prevent
    hang when accessing existing mmap of purged BO

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 drivers/gpu/drm/xe/xe_bo.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 87cde4b2fe59..7ee85c8eadde 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1978,6 +1978,12 @@ static vm_fault_t xe_bo_cpu_fault_fastpath(struct vm_fault *vmf, struct xe_devic
 	if (!dma_resv_trylock(tbo->base.resv))
 		goto out_validation;
 
+	/* Purged BOs have no backing store - fault to userspace */
+	if (xe_bo_is_purged(bo)) {
+		ret = VM_FAULT_SIGBUS;
+		goto out_unlock;
+	}
+
 	if (xe_ttm_bo_is_imported(tbo)) {
 		ret = VM_FAULT_SIGBUS;
 		drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");
@@ -2068,6 +2074,12 @@ static vm_fault_t xe_bo_cpu_fault(struct vm_fault *vmf)
 		if (err)
 			break;
 
+		/* Purged BOs have no backing store - fault to userspace */
+		if (xe_bo_is_purged(bo)) {
+			err = -EFAULT;
+			break;
+		}
+
 		if (xe_ttm_bo_is_imported(tbo)) {
 			err = -EFAULT;
 			drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");
-- 
2.43.0



* [PATCH v5 5/9] drm/xe/vm: Prevent binding of purged buffer objects
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (3 preceding siblings ...)
  2026-02-11 15:26 ` [PATCH v5 4/9] drm/xe/bo: Handle CPU faults on purged buffer objects Arvind Yadav
@ 2026-02-11 15:26 ` Arvind Yadav
  2026-02-11 16:17   ` Matthew Brost
  2026-02-11 15:26 ` [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra

Add purge checking to vma_lock_and_validate() to block new mapping
operations on purged BOs while allowing cleanup operations to proceed.

Purged BOs have their backing pages freed by the kernel. New
mapping operations (MAP, PREFETCH, REMAP) must be rejected with
-EINVAL to prevent GPU access to invalid memory. Cleanup
operations (UNMAP) must be allowed so applications can release
resources after detecting purge via the retained field.

REMAP operations require mixed handling - reject new prev/next
VMAs if the BO is purged, but allow the unmap portion to proceed
for cleanup.

The check_purged flag in struct xe_vma_lock_and_validate_flags
distinguishes between these cases: true for new mappings (must reject),
false for cleanup (allow).

v2:
  - Clarify that purged BOs are permanently invalid (i915 semantics)
  - Remove incorrect claim about madvise(WILLNEED) restoring purged BOs

v3:
  - Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
  - Add check_purged parameter to distinguish new mappings from cleanup
  - Allow UNMAP operations to prevent resource leaks
  - Handle REMAP operation's dual nature (cleanup + new mappings)

v5:
  - Replace three boolean parameters with struct xe_vma_lock_and_validate_flags
    to improve readability and prevent argument transposition (Matt)
  - Use u32 bitfields instead of bool members to match xe_bo_shrink_flags
    pattern - more efficient packing and follows xe driver conventions (Thomas)
  - Pass struct as const since flags are read-only (Thomas)

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 drivers/gpu/drm/xe/xe_vm.c | 67 +++++++++++++++++++++++++++++++-------
 1 file changed, 56 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 21a2527ca064..71cf3ce6c62b 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2907,8 +2907,20 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
 	}
 }
 
+/**
+ * struct xe_vma_lock_and_validate_flags - Flags for vma_lock_and_validate()
+ * @res_evict: Allow evicting resources during validation
+ * @validate: Perform BO validation
+ * @check_purged: Reject operation if BO is purged
+ */
+struct xe_vma_lock_and_validate_flags {
+	u32 res_evict : 1;
+	u32 validate : 1;
+	u32 check_purged : 1;
+};
+
 static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
-				 bool res_evict, bool validate)
+				 const struct xe_vma_lock_and_validate_flags *flags)
 {
 	struct xe_bo *bo = xe_vma_bo(vma);
 	struct xe_vm *vm = xe_vma_vm(vma);
@@ -2917,10 +2929,15 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
 	if (bo) {
 		if (!bo->vm)
 			err = drm_exec_lock_obj(exec, &bo->ttm.base);
-		if (!err && validate)
+
+		/* Reject new mappings to purged BOs; allow cleanup operations */
+		if (!err && flags->check_purged && xe_bo_is_purged(bo))
+			err = -EINVAL;
+
+		if (!err && flags->validate)
 			err = xe_bo_validate(bo, vm,
 					     !xe_vm_in_preempt_fence_mode(vm) &&
-					     res_evict, exec);
+					     flags->res_evict, exec);
 	}
 
 	return err;
@@ -3013,9 +3030,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 	case DRM_GPUVA_OP_MAP:
 		if (!op->map.invalidate_on_bind)
 			err = vma_lock_and_validate(exec, op->map.vma,
-						    res_evict,
-						    !xe_vm_in_fault_mode(vm) ||
-						    op->map.immediate);
+						    &(struct xe_vma_lock_and_validate_flags) {
+							    .res_evict = res_evict,
+							    .validate = !xe_vm_in_fault_mode(vm) ||
+									op->map.immediate,
+							    .check_purged = true
+						    });
 		break;
 	case DRM_GPUVA_OP_REMAP:
 		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
@@ -3024,13 +3044,25 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.remap.unmap->va),
-					    res_evict, false);
+					    &(struct xe_vma_lock_and_validate_flags) {
+						    .res_evict = res_evict,
+						    .validate = false,
+						    .check_purged = false
+					    });
 		if (!err && op->remap.prev)
 			err = vma_lock_and_validate(exec, op->remap.prev,
-						    res_evict, true);
+						    &(struct xe_vma_lock_and_validate_flags) {
+							    .res_evict = res_evict,
+							    .validate = true,
+							    .check_purged = true
+						    });
 		if (!err && op->remap.next)
 			err = vma_lock_and_validate(exec, op->remap.next,
-						    res_evict, true);
+						    &(struct xe_vma_lock_and_validate_flags) {
+							    .res_evict = res_evict,
+							    .validate = true,
+							    .check_purged = true
+						    });
 		break;
 	case DRM_GPUVA_OP_UNMAP:
 		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
@@ -3039,7 +3071,11 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.unmap.va),
-					    res_evict, false);
+					    &(struct xe_vma_lock_and_validate_flags) {
+						    .res_evict = res_evict,
+						    .validate = false,
+						    .check_purged = false
+					    });
 		break;
 	case DRM_GPUVA_OP_PREFETCH:
 	{
@@ -3052,9 +3088,18 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 				  region <= ARRAY_SIZE(region_to_mem_type));
 		}
 
+		/*
+		 * Prefetch attempts to migrate BO's backing store without
+		 * repopulating it first. Purged BOs have no backing store
+		 * to migrate, so reject the operation.
+		 */
 		err = vma_lock_and_validate(exec,
 					    gpuva_to_vma(op->base.prefetch.va),
-					    res_evict, false);
+					    &(struct xe_vma_lock_and_validate_flags) {
+						    .res_evict = res_evict,
+						    .validate = false,
+						    .check_purged = true
+					    });
 		if (!err && !xe_vma_has_no_bo(vma))
 			err = xe_bo_migrate(xe_vma_bo(vma),
 					    region_to_mem_type[region],
-- 
2.43.0



* [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (4 preceding siblings ...)
  2026-02-11 15:26 ` [PATCH v5 5/9] drm/xe/vm: Prevent binding of " Arvind Yadav
@ 2026-02-11 15:26 ` Arvind Yadav
  2026-02-24 12:48   ` Thomas Hellström
  2026-02-11 15:26 ` [PATCH v5 7/9] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra

Track purgeable state per-VMA instead of using a coarse shared
BO check. This prevents purging shared BOs until all VMAs across
all VMs are marked DONTNEED.

Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
a BO purgeable. Add xe_bo_recompute_purgeable_state() to
handle state transitions when VMAs are destroyed - if all
remaining VMAs are DONTNEED the BO can become purgeable, or if
no VMAs remain it transitions to WILLNEED.

The per-VMA purgeable_state field stores the madvise hint for
each mapping. Shared BOs can only be purged when all VMAs
unanimously indicate DONTNEED.

One thing to note: when the last VMA goes away, we default back to
WILLNEED. DONTNEED is a per-mapping hint, and without any mappings
there is no remaining madvise state to justify purging. This prevents
BOs from becoming purgeable solely due to being temporarily unmapped.

v3:
  - This addresses Thomas Hellström's feedback: "loop over all vmas
    attached to the bo and check that they all say WONTNEED. This will
    also need a check at VMA unbinding"

v4:
  - @madv_purgeable atomic_t → u32 change across all relevant
    patches (Matt)

v5:
  - Call xe_bo_recompute_purgeable_state() from xe_vma_destroy()
    right after drm_gpuva_unlink() where we already hold the BO lock,
    drop the trylock-based late destroy path (Matt)
  - Move purgeable_state into xe_vma_mem_attr with the other madvise
    attributes (Matt)
  - Drop READ_ONCE since the BO lock already protects us (Matt)
  - Keep returning false when there are no VMAs - otherwise we'd mark
    BOs purgeable without any user hint (Matt)
  - Use xe_bo_set_purgeable_state() instead of direct initialization (Matt)
  - Use xe_assert() instead of drm_warn() (Thomas)

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 drivers/gpu/drm/xe/xe_svm.c        |  1 +
 drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
 drivers/gpu/drm/xe/xe_vm_madvise.c | 98 ++++++++++++++++++++++++++++--
 drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
 drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
 5 files changed, 116 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index cda3bf7e2418..329c77aa5c20 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -318,6 +318,7 @@ static void xe_vma_set_default_attributes(struct xe_vma *vma)
 		.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
 		.pat_index = vma->attr.default_pat_index,
 		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+		.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
 	};
 
 	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 71cf3ce6c62b..e84b9e7cb5eb 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -39,6 +39,7 @@
 #include "xe_tile.h"
 #include "xe_tlb_inval.h"
 #include "xe_trace_bo.h"
+#include "xe_vm_madvise.h"
 #include "xe_wa.h"
 
 static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
@@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 static void xe_vma_destroy_late(struct xe_vma *vma)
 {
 	struct xe_vm *vm = xe_vma_vm(vma);
+	struct xe_bo *bo = xe_vma_bo(vma);
 
 	if (vma->ufence) {
 		xe_sync_ufence_put(vma->ufence);
@@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
 	} else if (xe_vma_is_null(vma) || xe_vma_is_cpu_addr_mirror(vma)) {
 		xe_vm_put(vm);
 	} else {
-		xe_bo_put(xe_vma_bo(vma));
+		xe_bo_put(bo);
 	}
 
 	xe_vma_free(vma);
@@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence *fence,
 static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
 {
 	struct xe_vm *vm = xe_vma_vm(vma);
+	struct xe_bo *bo = xe_vma_bo(vma);
 
 	lockdep_assert_held_write(&vm->lock);
 	xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
@@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
 		xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
 		xe_userptr_destroy(to_userptr_vma(vma));
 	} else if (!xe_vma_is_null(vma) && !xe_vma_is_cpu_addr_mirror(vma)) {
-		xe_bo_assert_held(xe_vma_bo(vma));
+		xe_bo_assert_held(bo);
 
 		drm_gpuva_unlink(&vma->gpuva);
+		xe_bo_recompute_purgeable_state(bo);
 	}
 
 	xe_vm_assert_held(vm);
@@ -2681,6 +2685,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 				.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
 				.default_pat_index = op->map.pat_index,
 				.pat_index = op->map.pat_index,
+				.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
 			};
 
 			flags |= op->map.vma_flags & XE_VMA_CREATE_MASK;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index d9cfba7bfe0b..c184426546a2 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -12,6 +12,7 @@
 #include "xe_pat.h"
 #include "xe_pt.h"
 #include "xe_svm.h"
+#include "xe_vm.h"
 
 struct xe_vmas_in_madvise_range {
 	u64 addr;
@@ -183,6 +184,89 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
 	}
 }
 
+/**
+ * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are marked DONTNEED
+ * @bo: Buffer object
+ *
+ * Check all VMAs across all VMs to determine if BO can be purged.
+ * Shared BOs require unanimous DONTNEED state from all mappings.
+ *
+ * Caller must hold BO dma-resv lock.
+ *
+ * Return: true if all VMAs are DONTNEED, false otherwise
+ */
+static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
+{
+	struct drm_gpuvm_bo *vm_bo;
+	struct drm_gpuva *gpuva;
+	struct drm_gem_object *obj = &bo->ttm.base;
+	bool has_vmas = false;
+
+	xe_bo_assert_held(bo);
+
+	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
+		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
+			struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+			has_vmas = true;
+
+			/* Any non-DONTNEED VMA prevents purging */
+			if (vma->attr.purgeable_state != XE_MADV_PURGEABLE_DONTNEED)
+				return false;
+		}
+	}
+
+	/*
+	 * No VMAs => no mapping-level DONTNEED hint.
+	 * Default to WILLNEED to avoid making BOs purgeable without
+	 * explicit user intent.
+	 */
+	if (!has_vmas)
+		return false;
+
+	return true;
+}
+
+/**
+ * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state from VMAs
+ * @bo: Buffer object
+ *
+ * Walk all VMAs to determine if BO should be purgeable or not.
+ * Shared BOs require unanimous DONTNEED state from all mappings.
+ *
+ * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM lists,
+ * VM lock must also be held (write) to prevent concurrent VMA modifications.
+ * This is satisfied at both call sites:
+ * - xe_vma_destroy(): holds vm->lock write
+ * - madvise_purgeable(): holds vm->lock write (from madvise ioctl path)
+ *
+ * Return: nothing
+ */
+void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
+{
+	if (!bo)
+		return;
+
+	xe_bo_assert_held(bo);
+
+	/*
+	 * Once purged, always purged. Cannot transition back to WILLNEED.
+	 * This matches i915 semantics where purged BOs are permanently invalid.
+	 */
+	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
+		return;
+
+	if (xe_bo_all_vmas_dontneed(bo)) {
+		/* All VMAs are DONTNEED - mark BO purgeable */
+		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
+			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+	} else {
+		/* At least one VMA is WILLNEED - BO must not be purgeable */
+		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
+			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+	}
+}
+
 /**
  * madvise_purgeable - Handle purgeable buffer object advice
  * @xe: XE device
@@ -231,14 +315,20 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
 
 		switch (op->purge_state_val.val) {
 		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
-			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+			vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_WILLNEED;
+
+			/* Update BO purgeable state */
+			xe_bo_recompute_purgeable_state(bo);
 			break;
 		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
-			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+			vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_DONTNEED;
+
+			/* Update BO purgeable state */
+			xe_bo_recompute_purgeable_state(bo);
 			break;
 		default:
-			drm_warn(&vm->xe->drm, "Invalid madvise value = %d\n",
-				 op->purge_state_val.val);
+			/* Should never hit - values validated in madvise_args_are_sane() */
+			xe_assert(vm->xe, 0);
 			return;
 		}
 	}
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
index b0e1fc445f23..39acd2689ca0 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.h
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
@@ -8,8 +8,11 @@
 
 struct drm_device;
 struct drm_file;
+struct xe_bo;
 
 int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
 			struct drm_file *file);
 
+void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
+
 #endif
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 43203e90ee3e..fd563039e8f4 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
 	 * same as default_pat_index unless overwritten by madvise.
 	 */
 	u16 pat_index;
+
+	/**
+	 * @purgeable_state: Purgeable hint for this VMA mapping
+	 *
+	 * Per-VMA purgeable state from madvise. Valid states are WILLNEED (0)
+	 * or DONTNEED (1). Shared BOs require all VMAs to be DONTNEED before
+	 * the BO can be purged. PURGED state exists only at BO level.
+	 *
+	 * Protected by BO dma-resv lock. Set via DRM_IOCTL_XE_MADVISE.
+	 */
+	u32 purgeable_state;
 };
 
 struct xe_vma {
-- 
2.43.0



* [PATCH v5 7/9] drm/xe/madvise: Block imported and exported dma-bufs
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (5 preceding siblings ...)
  2026-02-11 15:26 ` [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
@ 2026-02-11 15:26 ` Arvind Yadav
  2026-02-24 14:15   ` Thomas Hellström
  2026-02-11 15:26 ` [PATCH v5 8/9] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra

Prevent marking imported or exported dma-bufs as purgeable.
External devices may be accessing these buffers without our
knowledge, making purging unsafe.

Check drm_gem_is_imported() for buffers created by other
drivers and obj->dma_buf for buffers exported to other
drivers. Silently skip these BOs during madvise processing.

This follows drm_gem_shmem's purgeable implementation and
prevents data corruption from purging actively-used shared
buffers.

v3:
   - Addresses review feedback from Matt Roper about handling
     imported/exported BOs correctly in the purgeable BO
     implementation.

v4:
   - Add the check to xe_vm_madvise_purgeable_bo().

v5:
   - Rename xe_bo_is_external_dmabuf() to xe_bo_is_dmabuf_shared()
     for clarity (Thomas)
   - Update comments to clarify why both imports and exports
     are unsafe to purge.

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 drivers/gpu/drm/xe/xe_vm_madvise.c | 35 ++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index c184426546a2..8d55ea78b6d1 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -184,6 +184,33 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
 	}
 }
 
+/**
+ * xe_bo_is_dmabuf_shared() - Check if BO is shared via dma-buf
+ * @bo: Buffer object
+ *
+ * Prevent marking imported or exported dma-bufs as purgeable.
+ * For imported BOs, Xe doesn't own the backing store and cannot
+ * safely reclaim pages (exporter or other devices may still be
+ * using them). For exported BOs, external devices may have active
+ * mappings we cannot track.
+ *
+ * Return: true if BO is imported or exported, false otherwise
+ */
+static bool xe_bo_is_dmabuf_shared(struct xe_bo *bo)
+{
+	struct drm_gem_object *obj = &bo->ttm.base;
+
+	/* Imported: exporter owns backing store */
+	if (drm_gem_is_imported(obj))
+		return true;
+
+	/* Exported: external devices may be accessing */
+	if (obj->dma_buf)
+		return true;
+
+	return false;
+}
+
 /**
  * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are marked DONTNEED
  * @bo: Buffer object
@@ -204,6 +231,10 @@ static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
 
 	xe_bo_assert_held(bo);
 
+	/* Shared dma-bufs cannot be purgeable */
+	if (xe_bo_is_dmabuf_shared(bo))
+		return false;
+
 	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
 		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
 			struct xe_vma *vma = gpuva_to_vma(gpuva);
@@ -304,6 +335,10 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
 		/* BO must be locked before modifying madv state */
 		xe_bo_assert_held(bo);
 
+		/* Skip shared dma-bufs */
+		if (xe_bo_is_dmabuf_shared(bo))
+			continue;
+
 		/*
 		 * Once purged, always purged. Cannot transition back to WILLNEED.
 		 * This matches i915 semantics where purged BOs are permanently invalid.
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v5 8/9] drm/xe/bo: Add purgeable shrinker state helpers
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (6 preceding siblings ...)
  2026-02-11 15:26 ` [PATCH v5 7/9] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
@ 2026-02-11 15:26 ` Arvind Yadav
  2026-02-24 14:21   ` Thomas Hellström
  2026-02-11 15:26 ` [PATCH v5 9/9] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra

Encapsulate TTM purgeable flag updates and shrinker page accounting
into helper functions. This prevents desynchronization between the
TTM tt->purgeable flag and the shrinker's page bucket counters.

Without these helpers, direct manipulation of xe_ttm_tt->purgeable
risks forgetting to update the corresponding shrinker counters,
leading to incorrect memory pressure calculations.

Add xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker()
which atomically update both the TTM flag and transfer pages between
the shrinkable and purgeable buckets.

Handle ghost BOs and zero-refcount xe BOs separately in xe_bo_shrink().
Ghost BOs from ttm_bo_pipeline_gutting() still hold reclaimable pages,
so attempt the shrink to let the shrinker block until the fence signals.
For xe BOs whose refcount has dropped to zero, return -EBUSY since the
destroy path will handle cleanup.

v4:
  - @madv_purgeable atomic_t → u32 change across all relevant
    patches (Matt)

v5:
  - Update purgeable BO state to PURGED after a successful shrinker
    purge for DONTNEED BOs.
  - Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 drivers/gpu/drm/xe/xe_bo.c         | 69 +++++++++++++++++++++++++++++-
 drivers/gpu/drm/xe/xe_bo.h         |  2 +
 drivers/gpu/drm/xe/xe_vm_madvise.c |  8 +++-
 3 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 7ee85c8eadde..9484105708f7 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -863,6 +863,66 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
 	bo->madv_purgeable = new_state;
 }
 
+/**
+ * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
+ * @bo: Buffer object
+ *
+ * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
+ * discard pages immediately without swapping. Caller holds BO lock.
+ */
+void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
+{
+	struct ttm_buffer_object *ttm_bo = &bo->ttm;
+	struct ttm_tt *tt = ttm_bo->ttm;
+	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+	struct xe_ttm_tt *xe_tt;
+
+	xe_bo_assert_held(bo);
+
+	if (!tt || !ttm_tt_is_populated(tt))
+		return;
+
+	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+	if (!xe_tt->purgeable) {
+		xe_tt->purgeable = true;
+		/* Transfer pages from shrinkable to purgeable count */
+		xe_shrinker_mod_pages(xe->mem.shrinker,
+				      -(long)tt->num_pages,
+				      tt->num_pages);
+	}
+}
+
+/**
+ * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
+ * @bo: Buffer object
+ *
+ * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
+ * swap pages instead of discarding. Caller holds BO lock.
+ */
+void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
+{
+	struct ttm_buffer_object *ttm_bo = &bo->ttm;
+	struct ttm_tt *tt = ttm_bo->ttm;
+	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+	struct xe_ttm_tt *xe_tt;
+
+	xe_bo_assert_held(bo);
+
+	if (!tt || !ttm_tt_is_populated(tt))
+		return;
+
+	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+
+	if (xe_tt->purgeable) {
+		xe_tt->purgeable = false;
+		/* Transfer pages from purgeable to shrinkable count */
+		xe_shrinker_mod_pages(xe->mem.shrinker,
+				      tt->num_pages,
+				      -(long)tt->num_pages);
+	}
+}
+
 /**
  * xe_ttm_bo_purge() - Purge buffer object backing store
  * @ttm_bo: The TTM buffer object to purge
@@ -1234,14 +1294,21 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
 	if (!xe_bo_eviction_valuable(bo, &place))
 		return -EBUSY;
 
-	if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
+	/* Ghost BOs still hold reclaimable pages, try to shrink them. */
+	if (!xe_bo_is_xe_bo(bo))
 		return xe_bo_shrink_purge(ctx, bo, scanned);
 
+	if (!xe_bo_get_unless_zero(xe_bo))
+		return -EBUSY;
+
 	if (xe_tt->purgeable) {
 		if (bo->resource->mem_type != XE_PL_SYSTEM)
 			lret = xe_bo_move_notify(xe_bo, ctx);
 		if (!lret)
 			lret = xe_bo_shrink_purge(ctx, bo, scanned);
+		if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
+			xe_bo_set_purgeable_state(xe_bo,
+						  XE_MADV_PURGEABLE_PURGED);
 		goto out_unref;
 	}
 
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 0d9f25b51eb2..46d1fff10e4f 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -272,6 +272,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
 }
 
 void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
+void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
+void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
 
 static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
 {
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 8d55ea78b6d1..235fff2b654e 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -289,12 +289,16 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
 
 	if (xe_bo_all_vmas_dontneed(bo)) {
 		/* All VMAs are DONTNEED - mark BO purgeable */
-		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
+		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED) {
 			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+			xe_bo_set_purgeable_shrinker(bo);
+		}
 	} else {
 		/* At least one VMA is WILLNEED - BO must not be purgeable */
-		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
+		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED) {
 			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+			xe_bo_clear_purgeable_shrinker(bo);
+		}
 	}
 }
 
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v5 9/9] drm/xe/madvise: Enable purgeable buffer object IOCTL support
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (7 preceding siblings ...)
  2026-02-11 15:26 ` [PATCH v5 8/9] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
@ 2026-02-11 15:26 ` Arvind Yadav
  2026-02-11 15:40   ` Matthew Brost
  2026-02-11 15:46 ` [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Matthew Brost
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 36+ messages in thread
From: Arvind Yadav @ 2026-02-11 15:26 UTC (permalink / raw)
  To: intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
	pallavi.mishra

Hook the madvise_purgeable() handler into the madvise IOCTL now that all
supporting infrastructure is complete:

 - Core purge implementation (patch 3)
 - BO state tracking and helpers (patches 1-2)
 - Per-VMA purgeable state tracking (patch 6)
 - Shrinker integration for memory reclamation (patch 8)

This final patch enables userspace to use the DRM_XE_VMA_ATTR_PURGEABLE_STATE
madvise type to mark buffers as WILLNEED/DONTNEED and receive the retained
status indicating whether buffers were purged.

The feature was kept disabled in earlier patches to maintain bisectability
and ensure all components are in place before exposing to userspace.

Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
 drivers/gpu/drm/xe/xe_vm_madvise.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 235fff2b654e..20b1ac7e61d6 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -313,18 +313,11 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
  *
  * Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was purged
  * in details->has_purged_bo for later copy to userspace.
- *
- * Note: Marked __maybe_unused until hooked into madvise_funcs[] in the
- * final patch to maintain bisectability. The NULL placeholder in the
- * array ensures proper -EINVAL return for userspace until all supporting
- * infrastructure (shrinker, per-VMA tracking) is complete.
  */
-static void __maybe_unused madvise_purgeable(struct xe_device *xe,
-					     struct xe_vm *vm,
-					     struct xe_vma **vmas,
-					     int num_vmas,
-					     struct drm_xe_madvise *op,
-					     struct xe_madvise_details *details)
+static void madvise_purgeable(struct xe_device *xe, struct xe_vm *vm,
+			      struct xe_vma **vmas, int num_vmas,
+			      struct drm_xe_madvise *op,
+			      struct xe_madvise_details *details)
 {
 	int i;
 
@@ -382,12 +375,7 @@ static const madvise_func madvise_funcs[] = {
 	[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
 	[DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
 	[DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
-	/*
-	 * Purgeable support implemented but not enabled yet to maintain
-	 * bisectability. Will be set to madvise_purgeable() in final patch
-	 * when all infrastructure (shrinker, VMA tracking) is complete.
-	 */
-	[DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
+	[DRM_XE_VMA_ATTR_PURGEABLE_STATE] = madvise_purgeable,
 };
 
 static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 9/9] drm/xe/madvise: Enable purgeable buffer object IOCTL support
  2026-02-11 15:26 ` [PATCH v5 9/9] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
@ 2026-02-11 15:40   ` Matthew Brost
  0 siblings, 0 replies; 36+ messages in thread
From: Matthew Brost @ 2026-02-11 15:40 UTC (permalink / raw)
  To: Arvind Yadav
  Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom, pallavi.mishra

On Wed, Feb 11, 2026 at 08:56:38PM +0530, Arvind Yadav wrote:
> Hook the madvise_purgeable() handler into the madvise IOCTL now that all
> supporting infrastructure is complete:
> 
>  - Core purge implementation (patch 3)
>  - BO state tracking and helpers (patches 1-2)
>  - Per-VMA purgeable state tracking (patch 6)
>  - Shrinker integration for memory reclamation (patch 8)
> 
> This final patch enables userspace to use the DRM_XE_VMA_ATTR_PURGEABLE_STATE
> madvise type to mark buffers as WILLNEED/DONTNEED and receive the retained
> status indicating whether buffers were purged.
> 
> The feature was kept disabled in earlier patches to maintain bisectability
> and ensure all components are in place before exposing to userspace.
> 
> Suggested-by: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_vm_madvise.c | 22 +++++-----------------
>  1 file changed, 5 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 235fff2b654e..20b1ac7e61d6 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -313,18 +313,11 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
>   *
>   * Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was purged
>   * in details->has_purged_bo for later copy to userspace.
> - *
> - * Note: Marked __maybe_unused until hooked into madvise_funcs[] in the
> - * final patch to maintain bisectability. The NULL placeholder in the
> - * array ensures proper -EINVAL return for userspace until all supporting
> - * infrastructure (shrinker, per-VMA tracking) is complete.
>   */
> -static void __maybe_unused madvise_purgeable(struct xe_device *xe,
> -					     struct xe_vm *vm,
> -					     struct xe_vma **vmas,
> -					     int num_vmas,
> -					     struct drm_xe_madvise *op,
> -					     struct xe_madvise_details *details)
> +static void madvise_purgeable(struct xe_device *xe, struct xe_vm *vm,
> +			      struct xe_vma **vmas, int num_vmas,
> +			      struct drm_xe_madvise *op,
> +			      struct xe_madvise_details *details)
>  {
>  	int i;
>  
> @@ -382,12 +375,7 @@ static const madvise_func madvise_funcs[] = {
>  	[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
>  	[DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
>  	[DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
> -	/*
> -	 * Purgeable support implemented but not enabled yet to maintain
> -	 * bisectability. Will be set to madvise_purgeable() in final patch
> -	 * when all infrastructure (shrinker, VMA tracking) is complete.
> -	 */
> -	[DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
> +	[DRM_XE_VMA_ATTR_PURGEABLE_STATE] = madvise_purgeable,
>  };
>  
>  static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (8 preceding siblings ...)
  2026-02-11 15:26 ` [PATCH v5 9/9] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
@ 2026-02-11 15:46 ` Matthew Brost
  2026-02-25 10:10   ` Yadav, Arvind
  2026-02-11 16:21 ` ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev6) Patchwork
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 36+ messages in thread
From: Matthew Brost @ 2026-02-11 15:46 UTC (permalink / raw)
  To: Arvind Yadav
  Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom, pallavi.mishra

On Wed, Feb 11, 2026 at 08:56:29PM +0530, Arvind Yadav wrote:

I have a feeling that from the KMD POV we are getting close to this being
ready to merge. What is the status of a UMD PR to use this feature, as
this is a prerequisite to merging?

Also, it is likely time to start collecting acks from the UMD teams on
the uAPI patch too.

Matt 

> This patch series introduces comprehensive support for purgeable buffer objects
> in the Xe driver, enabling userspace to provide memory usage hints for better
> memory management under system pressure.
> 
> Overview:
> 
> Purgeable memory allows applications to mark buffer objects as "not currently
> needed" (DONTNEED), making them eligible for kernel reclamation during memory
> pressure. This helps prevent OOM conditions and enables more efficient GPU
> memory utilization for workloads with temporary or regeneratable data (caches,
> intermediate results, decoded frames, etc.).
> 
> Purgeable BO Lifecycle:
> 1. WILLNEED (default): BO actively needed, kernel preserves backing store
> 2. DONTNEED (user hint): BO contents discardable, eligible for purging
> 3. PURGED (kernel action): Backing store reclaimed during memory pressure
> 
> Key Design Principles:
>   - i915 compatibility: "Once purged, always purged" semantics - purged BOs
>     remain permanently invalid and must be destroyed/recreated
>   - Per-VMA state tracking: Each VMA tracks its own purgeable state, BO is
>     only marked DONTNEED when ALL VMAs across ALL VMs agree (Thomas Hellström)
>   - Safety first: Imported/exported dma-bufs blocked from purgeable state -
>     no visibility into external device usage (Matt Roper)
>   - Multiple protection layers: Validation in madvise, VM bind, mmap, and
>     fault handlers
>   - Async TLB invalidation: Uses xe_bo_trigger_rebind() for non-blocking
>     GPU mapping invalidation
>   - Scratch PTE support: Fault-mode VMs use scratch pages for safe zero reads
>     on purged BO access.
>   - Purgeable state is not applied to imported/exported dma-bufs,
>     those BOs always behave as WILLNEED.
>   - TTM shrinker integration: Encapsulated helpers manage xe_ttm_tt->purgeable
>     flag and shrinker page accounting (shrinkable vs purgeable buckets)
> 
> v2 Changes:
>   - Reordered patches: Moved shared BO helper before main implementation for
>     proper dependency order
>   - Fixed reference counting in mmap offset validation (use drm_gem_object_put)
>   - Removed incorrect claims about madvise(WILLNEED) restoring purged BOs
>   - Fixed error code documentation inconsistencies
>   - Initialize purge_state_val fields to prevent kernel memory leaks
>   - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas Hellström)
>   - Add NULL rebind with scratch PTEs for fault mode (Thomas Hellström)
>   - Implement i915-compatible retained field logic (Thomas Hellström)
>   - Skip BO validation for purged BOs in page fault handler (crash fix)
>   - Add scratch VM check in page fault path (non-scratch VMs fail fault)
> 
> v3 Changes (addressing Matt and Thomas Hellström feedback):
>   - Per-VMA purgeable state tracking: Added xe_vma->purgeable_state field
>   - Complete VMA check: xe_bo_all_vmas_dontneed() walks all VMAs across all
>     VMs to ensure unanimous DONTNEED before marking BO purgeable
>   - VMA unbind recheck: Added xe_bo_recheck_purgeable_on_vma_unbind() to
>     re-evaluate BO state when VMAs are destroyed
>   - Block external dma-bufs: Added xe_bo_is_external_dmabuf() check using
>     drm_gem_is_imported() and obj->dma_buf to prevent purging imported/exported BOs
>   - Consistent lockdep enforcement: Added xe_bo_assert_held() to all helpers
>     that access madv_purgeable state
>   - Simplified page table logic: Renamed is_null to is_null_or_purged in
>     xe_pt_stage_bind_entry() - purged BOs treated identically to null VMAs
>   - Removed unnecessary checks: Dropped redundant "&& bo" check in xe_ttm_bo_purge()
>   - Xe-specific warnings: Changed drm_warn() to XE_WARN_ON() in purge path
>   - Moved purge checks under locks: Purge state validation now done after
>     acquiring dma-resv lock in vma_lock_and_validate() and xe_pagefault_begin()
>   - Race-free fault handling: Removed unlocked purge check from
>     xe_pagefault_handle_vma(), moved to locked xe_pagefault_begin()
>   - Shrinker helper functions: Added xe_bo_set_purgeable_shrinker() and
>     xe_bo_clear_purgeable_shrinker() to encapsulate TTM purgeable flag updates
>     and shrinker page accounting, improving code clarity and maintainability
> 
> v4 Changes (addressing Matt and Thomas Hellström feedback):
>   - UAPI: Removed '__u64 reserved' field from purge_state_val union to fit
>     16-byte size constraint (Matt)
>   - Changed madv_purgeable from atomic_t to u32 across all patches (Matt)
>   - CPU fault handling: Added purged check to fastpath (xe_bo_cpu_fault_fastpath)
>     to prevent hang when accessing existing mmap of purged BO
> 
> v5 Changes (addressing Matt and Thomas Hellström feedback):
>   - Add locking documentation to madv_purgeable field comment (Matt)
>   - Introduce xe_bo_set_purgeable_state() helper (void return) to centralize
>     madv_purgeable updates with xe_bo_assert_held() and state transition
>     validation using explicit enum checks (no transition out of PURGED) (Matt)
>   - Make xe_ttm_bo_purge() return int and propagate failures from
>     xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g. no_wait_gpu
>     paths) rather than silently ignoring (Matt)
>   - Replace drm_WARN_ON with xe_assert for better Xe-specific assertions (Matt)
>   - Hook purgeable handling into madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
>     instead of special-case path in xe_vm_madvise_ioctl() (Matt)
>   - Track purgeable retained return via xe_madvise_details and perform
>     copy_to_user() from xe_madvise_details_fini() after locks are dropped (Matt)
>   - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
>     __maybe_unused on madvise_purgeable() to maintain bisectability until
>     shrinker integration is complete in final patch (Matt)
>   - Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
>     right after drm_gpuva_unlink() where we already hold the BO lock,
>     drop the trylock-based late destroy path (Matt)
>   - Move purgeable_state into xe_vma_mem_attr with the other madvise
>     attributes (Matt)
>   - Drop READ_ONCE since the BO lock already protects us (Matt)
>   - Keep returning false when there are no VMAs - otherwise we'd mark
>     BOs purgeable without any user hint (Matt)
>   - Use struct xe_vma_lock_and_validate_flags instead of multiple bool
>     parameters to improve readability and prevent argument transposition (Matt)
>   - Fix LRU crash while running shrink test
>   - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
>   - Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)
> 
> Arvind Yadav (8):
>   drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
>   drm/xe/madvise: Implement purgeable buffer object support
>   drm/xe/bo: Handle CPU faults on purged buffer objects
>   drm/xe/vm: Prevent binding of purged buffer objects
>   drm/xe/madvise: Implement per-VMA purgeable state tracking
>   drm/xe/madvise: Block imported and exported dma-bufs
>   drm/xe/bo: Add purgeable shrinker state helpers
>   drm/xe/madvise: Enable purgeable buffer object IOCTL support
> 
> Himal Prasad Ghimiray (1):
>   drm/xe/uapi: Add UAPI support for purgeable buffer objects
> 
>  drivers/gpu/drm/xe/xe_bo.c         | 187 ++++++++++++++++++++--
>  drivers/gpu/drm/xe/xe_bo.h         |  60 +++++++
>  drivers/gpu/drm/xe/xe_bo_types.h   |   6 +
>  drivers/gpu/drm/xe/xe_pagefault.c  |  12 ++
>  drivers/gpu/drm/xe/xe_pt.c         |  40 ++++-
>  drivers/gpu/drm/xe/xe_vm.c         |  90 +++++++++--
>  drivers/gpu/drm/xe/xe_vm_madvise.c | 249 +++++++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_vm_madvise.h |   3 +
>  drivers/gpu/drm/xe/xe_vm_types.h   |  11 ++
>  include/uapi/drm/xe_drm.h          |  44 +++++
>  10 files changed, 667 insertions(+), 35 deletions(-)
> 
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 2/9] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
  2026-02-11 15:26 ` [PATCH v5 2/9] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
@ 2026-02-11 16:00   ` Matthew Brost
  0 siblings, 0 replies; 36+ messages in thread
From: Matthew Brost @ 2026-02-11 16:00 UTC (permalink / raw)
  To: Arvind Yadav
  Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom, pallavi.mishra

On Wed, Feb 11, 2026 at 08:56:31PM +0530, Arvind Yadav wrote:
> Add infrastructure for tracking purgeable state of buffer objects.
> This includes:
> 
> Introduce enum xe_madv_purgeable_state with three states:
>    - XE_MADV_PURGEABLE_WILLNEED (0): BO is needed and should not be
>      purged. This is the default state for all BOs.
> 
>    - XE_MADV_PURGEABLE_DONTNEED (1): BO is not currently needed and
>      can be purged by the kernel under memory pressure to reclaim
>      resources. Only non-shared BOs can be marked as DONTNEED.
> 
>    - XE_MADV_PURGEABLE_PURGED (2): BO has been purged by the kernel.
>      Accessing a purged BO results in error. Follows i915 semantics
>      where once purged, the BO remains permanently invalid ("once
>      purged, always purged").
> 
> Add madv_purgeable field to struct xe_bo for state tracking
>   of purgeable state across concurrent access paths
> 
> v2:
>   - Add xe_bo_is_purged() helper, improve state documentation
> 
> v3:
>   - Add the kernel doc (Matthew Brost)
>   - Add the new helper xe_bo_madv_is_dontneed() (Matthew Brost)
> 
> v4:
>   - @madv_purgeable atomic_t → u32 change across all relevant
>     patches (Matt)
> 
> v5:
>   - Add locking documentation to madv_purgeable field comment (Matt)
> 
> Cc: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_bo.h       | 56 ++++++++++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_bo_types.h |  6 ++++
>  2 files changed, 62 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index c914ab719f20..ea157d74e2fb 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -87,6 +87,28 @@
>  
>  #define XE_PCI_BARRIER_MMAP_OFFSET	(0x50 << XE_PTE_SHIFT)
>  
> +/**
> + * enum xe_madv_purgeable_state - Buffer object purgeable state enumeration
> + *
> + * This enum defines the possible purgeable states for a buffer object,
> + * allowing userspace to provide memory usage hints to the kernel for
> + * better memory management under pressure.
> + *
> + * @XE_MADV_PURGEABLE_WILLNEED: The buffer object is needed and should not be purged.
> + * This is the default state.
> + * @XE_MADV_PURGEABLE_DONTNEED: The buffer object is not currently needed and can be
> + * purged by the kernel under memory pressure.
> + * @XE_MADV_PURGEABLE_PURGED: The buffer object has been purged by the kernel.
> + *
> + * Accessing a purged buffer will result in an error. Per i915 semantics,
> + * once purged, a BO remains permanently invalid and must be destroyed and recreated.
> + */
> +enum xe_madv_purgeable_state {
> +	XE_MADV_PURGEABLE_WILLNEED,
> +	XE_MADV_PURGEABLE_DONTNEED,
> +	XE_MADV_PURGEABLE_PURGED,
> +};
> +
>  struct sg_table;
>  
>  struct xe_bo *xe_bo_alloc(void);
> @@ -215,6 +237,40 @@ static inline bool xe_bo_is_protected(const struct xe_bo *bo)
>  	return bo->pxp_key_instance;
>  }
>  
> +/**
> + * xe_bo_is_purged() - Check if buffer object has been purged
> + * @bo: The buffer object to check
> + *
> + * Checks if the buffer object's backing store has been discarded by the
> + * kernel due to memory pressure after being marked as purgeable (DONTNEED).
> + * Once purged, the BO cannot be restored and any attempt to use it will fail.
> + *
> + * Context: Caller must hold the BO's dma-resv lock
> + * Return: true if the BO has been purged, false otherwise
> + */
> +static inline bool xe_bo_is_purged(struct xe_bo *bo)
> +{
> +	xe_bo_assert_held(bo);
> +	return bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED;
> +}
> +
> +/**
> + * xe_bo_madv_is_dontneed() - Check if BO is marked as DONTNEED
> + * @bo: The buffer object to check
> + *
> + * Checks if userspace has marked this BO as DONTNEED (i.e., its contents
> + * are not currently needed and can be discarded under memory pressure).
> + * This is used internally to decide whether a BO is eligible for purging.
> + *
> + * Context: Caller must hold the BO's dma-resv lock
> + * Return: true if the BO is marked DONTNEED, false otherwise
> + */
> +static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
> +{
> +	xe_bo_assert_held(bo);
> +	return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
> +}
> +
>  static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>  {
>  	if (likely(bo)) {
> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
> index d4fe3c8dca5b..ff8317bfc1ae 100644
> --- a/drivers/gpu/drm/xe/xe_bo_types.h
> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
> @@ -108,6 +108,12 @@ struct xe_bo {
>  	 * from default
>  	 */
>  	u64 min_align;
> +
> +	/**
> +	 * @madv_purgeable: user space advise on BO purgeability, protected
> +	 * by BO's dma-resv lock.
> +	 */
> +	u32 madv_purgeable;
>  };
>  
>  #endif
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 5/9] drm/xe/vm: Prevent binding of purged buffer objects
  2026-02-11 15:26 ` [PATCH v5 5/9] drm/xe/vm: Prevent binding of " Arvind Yadav
@ 2026-02-11 16:17   ` Matthew Brost
  0 siblings, 0 replies; 36+ messages in thread
From: Matthew Brost @ 2026-02-11 16:17 UTC (permalink / raw)
  To: Arvind Yadav
  Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom, pallavi.mishra

On Wed, Feb 11, 2026 at 08:56:34PM +0530, Arvind Yadav wrote:
> Add purge checking to vma_lock_and_validate() to block new mapping
> operations on purged BOs while allowing cleanup operations to proceed.
> 
> Purged BOs have their backing pages freed by the kernel. New
> mapping operations (MAP, PREFETCH, REMAP) must be rejected with
> -EINVAL to prevent GPU access to invalid memory. Cleanup
> operations (UNMAP) must be allowed so applications can release
> resources after detecting purge via the retained field.
> 
> REMAP operations require mixed handling - reject new prev/next
> VMAs if the BO is purged, but allow the unmap portion to proceed
> for cleanup.
> 
> The check_purged flag in struct xe_vma_lock_and_validate_flags
> distinguishes between these cases: true for new mappings (must reject),
> false for cleanup (allow).
> 
> v2:
>   - Clarify that purged BOs are permanently invalid (i915 semantics)
>   - Remove incorrect claim about madvise(WILLNEED) restoring purged BOs
> 
> v3:
>   - Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
>   - Add check_purged parameter to distinguish new mappings from cleanup
>   - Allow UNMAP operations to prevent resource leaks
>   - Handle REMAP operation's dual nature (cleanup + new mappings)
> 
> v5:
>   - Replace three boolean parameters with struct xe_vma_lock_and_validate_flags
>     to improve readability and prevent argument transposition (Matt)
>   - Use u32 bitfields instead of bool members to match xe_bo_shrink_flags
>     pattern - more efficient packing and follows xe driver conventions (Thomas)
>   - Pass struct as const since flags are read-only (Thomas)
> 
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_vm.c | 67 +++++++++++++++++++++++++++++++-------
>  1 file changed, 56 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 21a2527ca064..71cf3ce6c62b 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2907,8 +2907,20 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
>  	}
>  }
>  
> +/**
> + * struct xe_vma_lock_and_validate_flags - Flags for vma_lock_and_validate()
> + * @res_evict: Allow evicting resources during validation
> + * @validate: Perform BO validation
> + * @check_purged: Reject operation if BO is purged
> + */
> +struct xe_vma_lock_and_validate_flags {
> +	u32 res_evict : 1;
> +	u32 validate : 1;
> +	u32 check_purged : 1;
> +};

This looks better, thanks for the cleanup.

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> +
>  static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
> -				 bool res_evict, bool validate)
> +				 const struct xe_vma_lock_and_validate_flags *flags)
>  {
>  	struct xe_bo *bo = xe_vma_bo(vma);
>  	struct xe_vm *vm = xe_vma_vm(vma);
> @@ -2917,10 +2929,15 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
>  	if (bo) {
>  		if (!bo->vm)
>  			err = drm_exec_lock_obj(exec, &bo->ttm.base);
> -		if (!err && validate)
> +
> +		/* Reject new mappings to purged BOs; allow cleanup operations */
> +		if (!err && flags->check_purged && xe_bo_is_purged(bo))
> +			err = -EINVAL;
> +
> +		if (!err && flags->validate)
>  			err = xe_bo_validate(bo, vm,
>  					     !xe_vm_in_preempt_fence_mode(vm) &&
> -					     res_evict, exec);
> +					     flags->res_evict, exec);
>  	}
>  
>  	return err;
> @@ -3013,9 +3030,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  	case DRM_GPUVA_OP_MAP:
>  		if (!op->map.invalidate_on_bind)
>  			err = vma_lock_and_validate(exec, op->map.vma,
> -						    res_evict,
> -						    !xe_vm_in_fault_mode(vm) ||
> -						    op->map.immediate);
> +						    &(struct xe_vma_lock_and_validate_flags) {
> +							    .res_evict = res_evict,
> +							    .validate = !xe_vm_in_fault_mode(vm) ||
> +									op->map.immediate,
> +							    .check_purged = true
> +						    });
>  		break;
>  	case DRM_GPUVA_OP_REMAP:
>  		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
> @@ -3024,13 +3044,25 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  
>  		err = vma_lock_and_validate(exec,
>  					    gpuva_to_vma(op->base.remap.unmap->va),
> -					    res_evict, false);
> +					    &(struct xe_vma_lock_and_validate_flags) {
> +						    .res_evict = res_evict,
> +						    .validate = false,
> +						    .check_purged = false
> +					    });
>  		if (!err && op->remap.prev)
>  			err = vma_lock_and_validate(exec, op->remap.prev,
> -						    res_evict, true);
> +						    &(struct xe_vma_lock_and_validate_flags) {
> +							    .res_evict = res_evict,
> +							    .validate = true,
> +							    .check_purged = true
> +						    });
>  		if (!err && op->remap.next)
>  			err = vma_lock_and_validate(exec, op->remap.next,
> -						    res_evict, true);
> +						    &(struct xe_vma_lock_and_validate_flags) {
> +							    .res_evict = res_evict,
> +							    .validate = true,
> +							    .check_purged = true
> +						    });
>  		break;
>  	case DRM_GPUVA_OP_UNMAP:
>  		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
> @@ -3039,7 +3071,11 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  
>  		err = vma_lock_and_validate(exec,
>  					    gpuva_to_vma(op->base.unmap.va),
> -					    res_evict, false);
> +					    &(struct xe_vma_lock_and_validate_flags) {
> +						    .res_evict = res_evict,
> +						    .validate = false,
> +						    .check_purged = false
> +					    });
>  		break;
>  	case DRM_GPUVA_OP_PREFETCH:
>  	{
> @@ -3052,9 +3088,18 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  				  region <= ARRAY_SIZE(region_to_mem_type));
>  		}
>  
> +		/*
> +		 * Prefetch attempts to migrate BO's backing store without
> +		 * repopulating it first. Purged BOs have no backing store
> +		 * to migrate, so reject the operation.
> +		 */
>  		err = vma_lock_and_validate(exec,
>  					    gpuva_to_vma(op->base.prefetch.va),
> -					    res_evict, false);
> +					    &(struct xe_vma_lock_and_validate_flags) {
> +						    .res_evict = res_evict,
> +						    .validate = false,
> +						    .check_purged = true
> +					    });
>  		if (!err && !xe_vma_has_no_bo(vma))
>  			err = xe_bo_migrate(xe_vma_bo(vma),
>  					    region_to_mem_type[region],
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev6)
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (9 preceding siblings ...)
  2026-02-11 15:46 ` [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Matthew Brost
@ 2026-02-11 16:21 ` Patchwork
  2026-02-11 16:22 ` ✓ CI.KUnit: success " Patchwork
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 36+ messages in thread
From: Patchwork @ 2026-02-11 16:21 UTC (permalink / raw)
  To: Arvind Yadav; +Cc: intel-xe

== Series Details ==

Series: drm/xe/madvise: Add support for purgeable buffer objects (rev6)
URL   : https://patchwork.freedesktop.org/series/156651/
State : warning

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
1f57ba1afceae32108bd24770069f764d940a0e4
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit ce4667c9197a132c8023c1c37754f01c64e822d7
Author: Arvind Yadav <arvind.yadav@intel.com>
Date:   Wed Feb 11 20:56:38 2026 +0530

    drm/xe/madvise: Enable purgeable buffer object IOCTL support
    
    Hook the madvise_purgeable() handler into the madvise IOCTL now that all
    supporting infrastructure is complete:
    
     - Core purge implementation (patch 3)
     - BO state tracking and helpers (patches 1-2)
     - Per-VMA purgeable state tracking (patch 6)
     - Shrinker integration for memory reclamation (patch 8)
    
    This final patch enables userspace to use the DRM_XE_VMA_ATTR_PURGEABLE_STATE
    madvise type to mark buffers as WILLNEED/DONTNEED and receive the retained
    status indicating whether buffers were purged.
    
    The feature was kept disabled in earlier patches to maintain bisectability
    and ensure all components are in place before exposing to userspace.
    
    Suggested-by: Matthew Brost <matthew.brost@intel.com>
    Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
    Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
    Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
    Reviewed-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch 2938ce73d01357a5816ed7dbd041154b58635a37 drm-intel
92c37c0e8bbd drm/xe/uapi: Add UAPI support for purgeable buffer objects
2bdaec330f0c drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
1a9fcc98d3bc drm/xe/madvise: Implement purgeable buffer object support
-:23: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#23: 
  - Async TLB invalidation via trigger_rebind (no blocking xe_vm_invalidate_vma)

-:117: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#117: FILE: drivers/gpu/drm/xe/xe_bo.c:856:
+	xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
+		      new_state == XE_MADV_PURGEABLE_DONTNEED ||

total: 0 errors, 1 warnings, 1 checks, 479 lines checked
6b2717ab1896 drm/xe/bo: Handle CPU faults on purged buffer objects
f851c2bce37c drm/xe/vm: Prevent binding of purged buffer objects
-:37: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#37: 
  - Replace three boolean parameters with struct xe_vma_lock_and_validate_flags

total: 0 errors, 1 warnings, 0 checks, 112 lines checked
f3a8b35bb75b drm/xe/madvise: Implement per-VMA purgeable state tracking
f8dfbd848476 drm/xe/madvise: Block imported and exported dma-bufs
b7c1dcbfb2d9 drm/xe/bo: Add purgeable shrinker state helpers
ce4667c9197a drm/xe/madvise: Enable purgeable buffer object IOCTL support
-:17: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#17: 
This final patch enables userspace to use the DRM_XE_VMA_ATTR_PURGEABLE_STATE

total: 0 errors, 1 warnings, 0 checks, 35 lines checked



^ permalink raw reply	[flat|nested] 36+ messages in thread

* ✓ CI.KUnit: success for drm/xe/madvise: Add support for purgeable buffer objects (rev6)
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (10 preceding siblings ...)
  2026-02-11 16:21 ` ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev6) Patchwork
@ 2026-02-11 16:22 ` Patchwork
  2026-02-11 17:11 ` ✗ Xe.CI.BAT: failure " Patchwork
  2026-02-13  1:15 ` ✗ Xe.CI.FULL: " Patchwork
  13 siblings, 0 replies; 36+ messages in thread
From: Patchwork @ 2026-02-11 16:22 UTC (permalink / raw)
  To: Arvind Yadav; +Cc: intel-xe

== Series Details ==

Series: drm/xe/madvise: Add support for purgeable buffer objects (rev6)
URL   : https://patchwork.freedesktop.org/series/156651/
State : success

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[16:21:25] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[16:21:29] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[16:22:00] Starting KUnit Kernel (1/1)...
[16:22:00] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[16:22:00] ================== guc_buf (11 subtests) ===================
[16:22:00] [PASSED] test_smallest
[16:22:00] [PASSED] test_largest
[16:22:00] [PASSED] test_granular
[16:22:00] [PASSED] test_unique
[16:22:00] [PASSED] test_overlap
[16:22:00] [PASSED] test_reusable
[16:22:00] [PASSED] test_too_big
[16:22:00] [PASSED] test_flush
[16:22:00] [PASSED] test_lookup
[16:22:00] [PASSED] test_data
[16:22:00] [PASSED] test_class
[16:22:00] ===================== [PASSED] guc_buf =====================
[16:22:00] =================== guc_dbm (7 subtests) ===================
[16:22:00] [PASSED] test_empty
[16:22:00] [PASSED] test_default
[16:22:00] ======================== test_size  ========================
[16:22:00] [PASSED] 4
[16:22:00] [PASSED] 8
[16:22:00] [PASSED] 32
[16:22:00] [PASSED] 256
[16:22:00] ==================== [PASSED] test_size ====================
[16:22:00] ======================= test_reuse  ========================
[16:22:00] [PASSED] 4
[16:22:00] [PASSED] 8
[16:22:00] [PASSED] 32
[16:22:00] [PASSED] 256
[16:22:00] =================== [PASSED] test_reuse ====================
[16:22:00] =================== test_range_overlap  ====================
[16:22:00] [PASSED] 4
[16:22:00] [PASSED] 8
[16:22:00] [PASSED] 32
[16:22:00] [PASSED] 256
[16:22:00] =============== [PASSED] test_range_overlap ================
[16:22:00] =================== test_range_compact  ====================
[16:22:00] [PASSED] 4
[16:22:00] [PASSED] 8
[16:22:00] [PASSED] 32
[16:22:00] [PASSED] 256
[16:22:00] =============== [PASSED] test_range_compact ================
[16:22:00] ==================== test_range_spare  =====================
[16:22:00] [PASSED] 4
[16:22:00] [PASSED] 8
[16:22:00] [PASSED] 32
[16:22:00] [PASSED] 256
[16:22:00] ================ [PASSED] test_range_spare =================
[16:22:00] ===================== [PASSED] guc_dbm =====================
[16:22:00] =================== guc_idm (6 subtests) ===================
[16:22:00] [PASSED] bad_init
[16:22:00] [PASSED] no_init
[16:22:00] [PASSED] init_fini
[16:22:00] [PASSED] check_used
[16:22:01] [PASSED] check_quota
[16:22:01] [PASSED] check_all
[16:22:01] ===================== [PASSED] guc_idm =====================
[16:22:01] ================== no_relay (3 subtests) ===================
[16:22:01] [PASSED] xe_drops_guc2pf_if_not_ready
[16:22:01] [PASSED] xe_drops_guc2vf_if_not_ready
[16:22:01] [PASSED] xe_rejects_send_if_not_ready
[16:22:01] ==================== [PASSED] no_relay =====================
[16:22:01] ================== pf_relay (14 subtests) ==================
[16:22:01] [PASSED] pf_rejects_guc2pf_too_short
[16:22:01] [PASSED] pf_rejects_guc2pf_too_long
[16:22:01] [PASSED] pf_rejects_guc2pf_no_payload
[16:22:01] [PASSED] pf_fails_no_payload
[16:22:01] [PASSED] pf_fails_bad_origin
[16:22:01] [PASSED] pf_fails_bad_type
[16:22:01] [PASSED] pf_txn_reports_error
[16:22:01] [PASSED] pf_txn_sends_pf2guc
[16:22:01] [PASSED] pf_sends_pf2guc
[16:22:01] [SKIPPED] pf_loopback_nop
[16:22:01] [SKIPPED] pf_loopback_echo
[16:22:01] [SKIPPED] pf_loopback_fail
[16:22:01] [SKIPPED] pf_loopback_busy
[16:22:01] [SKIPPED] pf_loopback_retry
[16:22:01] ==================== [PASSED] pf_relay =====================
[16:22:01] ================== vf_relay (3 subtests) ===================
[16:22:01] [PASSED] vf_rejects_guc2vf_too_short
[16:22:01] [PASSED] vf_rejects_guc2vf_too_long
[16:22:01] [PASSED] vf_rejects_guc2vf_no_payload
[16:22:01] ==================== [PASSED] vf_relay =====================
[16:22:01] ================ pf_gt_config (6 subtests) =================
[16:22:01] [PASSED] fair_contexts_1vf
[16:22:01] [PASSED] fair_doorbells_1vf
[16:22:01] [PASSED] fair_ggtt_1vf
[16:22:01] ====================== fair_contexts  ======================
[16:22:01] [PASSED] 1 VF
[16:22:01] [PASSED] 2 VFs
[16:22:01] [PASSED] 3 VFs
[16:22:01] [PASSED] 4 VFs
[16:22:01] [PASSED] 5 VFs
[16:22:01] [PASSED] 6 VFs
[16:22:01] [PASSED] 7 VFs
[16:22:01] [PASSED] 8 VFs
[16:22:01] [PASSED] 9 VFs
[16:22:01] [PASSED] 10 VFs
[16:22:01] [PASSED] 11 VFs
[16:22:01] [PASSED] 12 VFs
[16:22:01] [PASSED] 13 VFs
[16:22:01] [PASSED] 14 VFs
[16:22:01] [PASSED] 15 VFs
[16:22:01] [PASSED] 16 VFs
[16:22:01] [PASSED] 17 VFs
[16:22:01] [PASSED] 18 VFs
[16:22:01] [PASSED] 19 VFs
[16:22:01] [PASSED] 20 VFs
[16:22:01] [PASSED] 21 VFs
[16:22:01] [PASSED] 22 VFs
[16:22:01] [PASSED] 23 VFs
[16:22:01] [PASSED] 24 VFs
[16:22:01] [PASSED] 25 VFs
[16:22:01] [PASSED] 26 VFs
[16:22:01] [PASSED] 27 VFs
[16:22:01] [PASSED] 28 VFs
[16:22:01] [PASSED] 29 VFs
[16:22:01] [PASSED] 30 VFs
[16:22:01] [PASSED] 31 VFs
[16:22:01] [PASSED] 32 VFs
[16:22:01] [PASSED] 33 VFs
[16:22:01] [PASSED] 34 VFs
[16:22:01] [PASSED] 35 VFs
[16:22:01] [PASSED] 36 VFs
[16:22:01] [PASSED] 37 VFs
[16:22:01] [PASSED] 38 VFs
[16:22:01] [PASSED] 39 VFs
[16:22:01] [PASSED] 40 VFs
[16:22:01] [PASSED] 41 VFs
[16:22:01] [PASSED] 42 VFs
[16:22:01] [PASSED] 43 VFs
[16:22:01] [PASSED] 44 VFs
[16:22:01] [PASSED] 45 VFs
[16:22:01] [PASSED] 46 VFs
[16:22:01] [PASSED] 47 VFs
[16:22:01] [PASSED] 48 VFs
[16:22:01] [PASSED] 49 VFs
[16:22:01] [PASSED] 50 VFs
[16:22:01] [PASSED] 51 VFs
[16:22:01] [PASSED] 52 VFs
[16:22:01] [PASSED] 53 VFs
[16:22:01] [PASSED] 54 VFs
[16:22:01] [PASSED] 55 VFs
[16:22:01] [PASSED] 56 VFs
[16:22:01] [PASSED] 57 VFs
[16:22:01] [PASSED] 58 VFs
[16:22:01] [PASSED] 59 VFs
[16:22:01] [PASSED] 60 VFs
[16:22:01] [PASSED] 61 VFs
[16:22:01] [PASSED] 62 VFs
[16:22:01] [PASSED] 63 VFs
[16:22:01] ================== [PASSED] fair_contexts ==================
[16:22:01] ===================== fair_doorbells  ======================
[16:22:01] [PASSED] 1 VF
[16:22:01] [PASSED] 2 VFs
[16:22:01] [PASSED] 3 VFs
[16:22:01] [PASSED] 4 VFs
[16:22:01] [PASSED] 5 VFs
[16:22:01] [PASSED] 6 VFs
[16:22:01] [PASSED] 7 VFs
[16:22:01] [PASSED] 8 VFs
[16:22:01] [PASSED] 9 VFs
[16:22:01] [PASSED] 10 VFs
[16:22:01] [PASSED] 11 VFs
[16:22:01] [PASSED] 12 VFs
[16:22:01] [PASSED] 13 VFs
[16:22:01] [PASSED] 14 VFs
[16:22:01] [PASSED] 15 VFs
[16:22:01] [PASSED] 16 VFs
[16:22:01] [PASSED] 17 VFs
[16:22:01] [PASSED] 18 VFs
[16:22:01] [PASSED] 19 VFs
[16:22:01] [PASSED] 20 VFs
[16:22:01] [PASSED] 21 VFs
[16:22:01] [PASSED] 22 VFs
[16:22:01] [PASSED] 23 VFs
[16:22:01] [PASSED] 24 VFs
[16:22:01] [PASSED] 25 VFs
[16:22:01] [PASSED] 26 VFs
[16:22:01] [PASSED] 27 VFs
[16:22:01] [PASSED] 28 VFs
[16:22:01] [PASSED] 29 VFs
[16:22:01] [PASSED] 30 VFs
[16:22:01] [PASSED] 31 VFs
[16:22:01] [PASSED] 32 VFs
[16:22:01] [PASSED] 33 VFs
[16:22:01] [PASSED] 34 VFs
[16:22:01] [PASSED] 35 VFs
[16:22:01] [PASSED] 36 VFs
[16:22:01] [PASSED] 37 VFs
[16:22:01] [PASSED] 38 VFs
[16:22:01] [PASSED] 39 VFs
[16:22:01] [PASSED] 40 VFs
[16:22:01] [PASSED] 41 VFs
[16:22:01] [PASSED] 42 VFs
[16:22:01] [PASSED] 43 VFs
[16:22:01] [PASSED] 44 VFs
[16:22:01] [PASSED] 45 VFs
[16:22:01] [PASSED] 46 VFs
[16:22:01] [PASSED] 47 VFs
[16:22:01] [PASSED] 48 VFs
[16:22:01] [PASSED] 49 VFs
[16:22:01] [PASSED] 50 VFs
[16:22:01] [PASSED] 51 VFs
[16:22:01] [PASSED] 52 VFs
[16:22:01] [PASSED] 53 VFs
[16:22:01] [PASSED] 54 VFs
[16:22:01] [PASSED] 55 VFs
[16:22:01] [PASSED] 56 VFs
[16:22:01] [PASSED] 57 VFs
[16:22:01] [PASSED] 58 VFs
[16:22:01] [PASSED] 59 VFs
[16:22:01] [PASSED] 60 VFs
[16:22:01] [PASSED] 61 VFs
[16:22:01] [PASSED] 62 VFs
[16:22:01] [PASSED] 63 VFs
[16:22:01] ================= [PASSED] fair_doorbells ==================
[16:22:01] ======================== fair_ggtt  ========================
[16:22:01] [PASSED] 1 VF
[16:22:01] [PASSED] 2 VFs
[16:22:01] [PASSED] 3 VFs
[16:22:01] [PASSED] 4 VFs
[16:22:01] [PASSED] 5 VFs
[16:22:01] [PASSED] 6 VFs
[16:22:01] [PASSED] 7 VFs
[16:22:01] [PASSED] 8 VFs
[16:22:01] [PASSED] 9 VFs
[16:22:01] [PASSED] 10 VFs
[16:22:01] [PASSED] 11 VFs
[16:22:01] [PASSED] 12 VFs
[16:22:01] [PASSED] 13 VFs
[16:22:01] [PASSED] 14 VFs
[16:22:01] [PASSED] 15 VFs
[16:22:01] [PASSED] 16 VFs
[16:22:01] [PASSED] 17 VFs
[16:22:01] [PASSED] 18 VFs
[16:22:01] [PASSED] 19 VFs
[16:22:01] [PASSED] 20 VFs
[16:22:01] [PASSED] 21 VFs
[16:22:01] [PASSED] 22 VFs
[16:22:01] [PASSED] 23 VFs
[16:22:01] [PASSED] 24 VFs
[16:22:01] [PASSED] 25 VFs
[16:22:01] [PASSED] 26 VFs
[16:22:01] [PASSED] 27 VFs
[16:22:01] [PASSED] 28 VFs
[16:22:01] [PASSED] 29 VFs
[16:22:01] [PASSED] 30 VFs
[16:22:01] [PASSED] 31 VFs
[16:22:01] [PASSED] 32 VFs
[16:22:01] [PASSED] 33 VFs
[16:22:01] [PASSED] 34 VFs
[16:22:01] [PASSED] 35 VFs
[16:22:01] [PASSED] 36 VFs
[16:22:01] [PASSED] 37 VFs
[16:22:01] [PASSED] 38 VFs
[16:22:01] [PASSED] 39 VFs
[16:22:01] [PASSED] 40 VFs
[16:22:01] [PASSED] 41 VFs
[16:22:01] [PASSED] 42 VFs
[16:22:01] [PASSED] 43 VFs
[16:22:01] [PASSED] 44 VFs
[16:22:01] [PASSED] 45 VFs
[16:22:01] [PASSED] 46 VFs
[16:22:01] [PASSED] 47 VFs
[16:22:01] [PASSED] 48 VFs
[16:22:01] [PASSED] 49 VFs
[16:22:01] [PASSED] 50 VFs
[16:22:01] [PASSED] 51 VFs
[16:22:01] [PASSED] 52 VFs
[16:22:01] [PASSED] 53 VFs
[16:22:01] [PASSED] 54 VFs
[16:22:01] [PASSED] 55 VFs
[16:22:01] [PASSED] 56 VFs
[16:22:01] [PASSED] 57 VFs
[16:22:01] [PASSED] 58 VFs
[16:22:01] [PASSED] 59 VFs
[16:22:01] [PASSED] 60 VFs
[16:22:01] [PASSED] 61 VFs
[16:22:01] [PASSED] 62 VFs
[16:22:01] [PASSED] 63 VFs
[16:22:01] ==================== [PASSED] fair_ggtt ====================
[16:22:01] ================== [PASSED] pf_gt_config ===================
[16:22:01] ===================== lmtt (1 subtest) =====================
[16:22:01] ======================== test_ops  =========================
[16:22:01] [PASSED] 2-level
[16:22:01] [PASSED] multi-level
[16:22:01] ==================== [PASSED] test_ops =====================
[16:22:01] ====================== [PASSED] lmtt =======================
[16:22:01] ================= pf_service (11 subtests) =================
[16:22:01] [PASSED] pf_negotiate_any
[16:22:01] [PASSED] pf_negotiate_base_match
[16:22:01] [PASSED] pf_negotiate_base_newer
[16:22:01] [PASSED] pf_negotiate_base_next
[16:22:01] [SKIPPED] pf_negotiate_base_older
[16:22:01] [PASSED] pf_negotiate_base_prev
[16:22:01] [PASSED] pf_negotiate_latest_match
[16:22:01] [PASSED] pf_negotiate_latest_newer
[16:22:01] [PASSED] pf_negotiate_latest_next
[16:22:01] [SKIPPED] pf_negotiate_latest_older
[16:22:01] [SKIPPED] pf_negotiate_latest_prev
[16:22:01] =================== [PASSED] pf_service ====================
[16:22:01] ================= xe_guc_g2g (2 subtests) ==================
[16:22:01] ============== xe_live_guc_g2g_kunit_default  ==============
[16:22:01] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[16:22:01] ============== xe_live_guc_g2g_kunit_allmem  ===============
[16:22:01] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[16:22:01] =================== [SKIPPED] xe_guc_g2g ===================
[16:22:01] =================== xe_mocs (2 subtests) ===================
[16:22:01] ================ xe_live_mocs_kernel_kunit  ================
[16:22:01] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[16:22:01] ================ xe_live_mocs_reset_kunit  =================
[16:22:01] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[16:22:01] ==================== [SKIPPED] xe_mocs =====================
[16:22:01] ================= xe_migrate (2 subtests) ==================
[16:22:01] ================= xe_migrate_sanity_kunit  =================
[16:22:01] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[16:22:01] ================== xe_validate_ccs_kunit  ==================
[16:22:01] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[16:22:01] =================== [SKIPPED] xe_migrate ===================
[16:22:01] ================== xe_dma_buf (1 subtest) ==================
[16:22:01] ==================== xe_dma_buf_kunit  =====================
[16:22:01] ================ [SKIPPED] xe_dma_buf_kunit ================
[16:22:01] =================== [SKIPPED] xe_dma_buf ===================
[16:22:01] ================= xe_bo_shrink (1 subtest) =================
[16:22:01] =================== xe_bo_shrink_kunit  ====================
[16:22:01] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[16:22:01] ================== [SKIPPED] xe_bo_shrink ==================
[16:22:01] ==================== xe_bo (2 subtests) ====================
[16:22:01] ================== xe_ccs_migrate_kunit  ===================
[16:22:01] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[16:22:01] ==================== xe_bo_evict_kunit  ====================
[16:22:01] =============== [SKIPPED] xe_bo_evict_kunit ================
[16:22:01] ===================== [SKIPPED] xe_bo ======================
[16:22:01] ==================== args (13 subtests) ====================
[16:22:01] [PASSED] count_args_test
[16:22:01] [PASSED] call_args_example
[16:22:01] [PASSED] call_args_test
[16:22:01] [PASSED] drop_first_arg_example
[16:22:01] [PASSED] drop_first_arg_test
[16:22:01] [PASSED] first_arg_example
[16:22:01] [PASSED] first_arg_test
[16:22:01] [PASSED] last_arg_example
[16:22:01] [PASSED] last_arg_test
[16:22:01] [PASSED] pick_arg_example
[16:22:01] [PASSED] if_args_example
[16:22:01] [PASSED] if_args_test
[16:22:01] [PASSED] sep_comma_example
[16:22:01] ====================== [PASSED] args =======================
[16:22:01] =================== xe_pci (3 subtests) ====================
[16:22:01] ==================== check_graphics_ip  ====================
[16:22:01] [PASSED] 12.00 Xe_LP
[16:22:01] [PASSED] 12.10 Xe_LP+
[16:22:01] [PASSED] 12.55 Xe_HPG
[16:22:01] [PASSED] 12.60 Xe_HPC
[16:22:01] [PASSED] 12.70 Xe_LPG
[16:22:01] [PASSED] 12.71 Xe_LPG
[16:22:01] [PASSED] 12.74 Xe_LPG+
[16:22:01] [PASSED] 20.01 Xe2_HPG
[16:22:01] [PASSED] 20.02 Xe2_HPG
[16:22:01] [PASSED] 20.04 Xe2_LPG
[16:22:01] [PASSED] 30.00 Xe3_LPG
[16:22:01] [PASSED] 30.01 Xe3_LPG
[16:22:01] [PASSED] 30.03 Xe3_LPG
[16:22:01] [PASSED] 30.04 Xe3_LPG
[16:22:01] [PASSED] 30.05 Xe3_LPG
[16:22:01] [PASSED] 35.10 Xe3p_LPG
[16:22:01] [PASSED] 35.11 Xe3p_XPC
[16:22:01] ================ [PASSED] check_graphics_ip ================
[16:22:01] ===================== check_media_ip  ======================
[16:22:01] [PASSED] 12.00 Xe_M
[16:22:01] [PASSED] 12.55 Xe_HPM
[16:22:01] [PASSED] 13.00 Xe_LPM+
[16:22:01] [PASSED] 13.01 Xe2_HPM
[16:22:01] [PASSED] 20.00 Xe2_LPM
[16:22:01] [PASSED] 30.00 Xe3_LPM
[16:22:01] [PASSED] 30.02 Xe3_LPM
[16:22:01] [PASSED] 35.00 Xe3p_LPM
[16:22:01] [PASSED] 35.03 Xe3p_HPM
[16:22:01] ================= [PASSED] check_media_ip ==================
[16:22:01] =================== check_platform_desc  ===================
[16:22:01] [PASSED] 0x9A60 (TIGERLAKE)
[16:22:01] [PASSED] 0x9A68 (TIGERLAKE)
[16:22:01] [PASSED] 0x9A70 (TIGERLAKE)
[16:22:01] [PASSED] 0x9A40 (TIGERLAKE)
[16:22:01] [PASSED] 0x9A49 (TIGERLAKE)
[16:22:01] [PASSED] 0x9A59 (TIGERLAKE)
[16:22:01] [PASSED] 0x9A78 (TIGERLAKE)
[16:22:01] [PASSED] 0x9AC0 (TIGERLAKE)
[16:22:01] [PASSED] 0x9AC9 (TIGERLAKE)
[16:22:01] [PASSED] 0x9AD9 (TIGERLAKE)
[16:22:01] [PASSED] 0x9AF8 (TIGERLAKE)
[16:22:01] [PASSED] 0x4C80 (ROCKETLAKE)
[16:22:01] [PASSED] 0x4C8A (ROCKETLAKE)
[16:22:01] [PASSED] 0x4C8B (ROCKETLAKE)
[16:22:01] [PASSED] 0x4C8C (ROCKETLAKE)
[16:22:01] [PASSED] 0x4C90 (ROCKETLAKE)
[16:22:01] [PASSED] 0x4C9A (ROCKETLAKE)
[16:22:01] [PASSED] 0x4680 (ALDERLAKE_S)
[16:22:01] [PASSED] 0x4682 (ALDERLAKE_S)
[16:22:01] [PASSED] 0x4688 (ALDERLAKE_S)
[16:22:01] [PASSED] 0x468A (ALDERLAKE_S)
[16:22:01] [PASSED] 0x468B (ALDERLAKE_S)
[16:22:01] [PASSED] 0x4690 (ALDERLAKE_S)
[16:22:01] [PASSED] 0x4692 (ALDERLAKE_S)
[16:22:01] [PASSED] 0x4693 (ALDERLAKE_S)
[16:22:01] [PASSED] 0x46A0 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46A1 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46A2 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46A3 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46A6 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46A8 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46AA (ALDERLAKE_P)
[16:22:01] [PASSED] 0x462A (ALDERLAKE_P)
[16:22:01] [PASSED] 0x4626 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x4628 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46B0 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46B1 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46B2 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46B3 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46C0 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46C1 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46C2 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46C3 (ALDERLAKE_P)
[16:22:01] [PASSED] 0x46D0 (ALDERLAKE_N)
[16:22:01] [PASSED] 0x46D1 (ALDERLAKE_N)
[16:22:01] [PASSED] 0x46D2 (ALDERLAKE_N)
[16:22:01] [PASSED] 0x46D3 (ALDERLAKE_N)
[16:22:01] [PASSED] 0x46D4 (ALDERLAKE_N)
[16:22:01] [PASSED] 0xA721 (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA7A1 (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA7A9 (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA7AC (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA7AD (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA720 (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA7A0 (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA7A8 (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA7AA (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA7AB (ALDERLAKE_P)
[16:22:01] [PASSED] 0xA780 (ALDERLAKE_S)
[16:22:01] [PASSED] 0xA781 (ALDERLAKE_S)
[16:22:01] [PASSED] 0xA782 (ALDERLAKE_S)
[16:22:01] [PASSED] 0xA783 (ALDERLAKE_S)
[16:22:01] [PASSED] 0xA788 (ALDERLAKE_S)
[16:22:01] [PASSED] 0xA789 (ALDERLAKE_S)
[16:22:01] [PASSED] 0xA78A (ALDERLAKE_S)
[16:22:01] [PASSED] 0xA78B (ALDERLAKE_S)
[16:22:01] [PASSED] 0x4905 (DG1)
[16:22:01] [PASSED] 0x4906 (DG1)
[16:22:01] [PASSED] 0x4907 (DG1)
[16:22:01] [PASSED] 0x4908 (DG1)
[16:22:01] [PASSED] 0x4909 (DG1)
[16:22:01] [PASSED] 0x56C0 (DG2)
[16:22:01] [PASSED] 0x56C2 (DG2)
[16:22:01] [PASSED] 0x56C1 (DG2)
[16:22:01] [PASSED] 0x7D51 (METEORLAKE)
[16:22:01] [PASSED] 0x7DD1 (METEORLAKE)
[16:22:01] [PASSED] 0x7D41 (METEORLAKE)
[16:22:01] [PASSED] 0x7D67 (METEORLAKE)
[16:22:01] [PASSED] 0xB640 (METEORLAKE)
[16:22:01] [PASSED] 0x56A0 (DG2)
[16:22:01] [PASSED] 0x56A1 (DG2)
[16:22:01] [PASSED] 0x56A2 (DG2)
[16:22:01] [PASSED] 0x56BE (DG2)
[16:22:01] [PASSED] 0x56BF (DG2)
[16:22:01] [PASSED] 0x5690 (DG2)
[16:22:01] [PASSED] 0x5691 (DG2)
[16:22:01] [PASSED] 0x5692 (DG2)
[16:22:01] [PASSED] 0x56A5 (DG2)
[16:22:01] [PASSED] 0x56A6 (DG2)
[16:22:01] [PASSED] 0x56B0 (DG2)
[16:22:01] [PASSED] 0x56B1 (DG2)
[16:22:01] [PASSED] 0x56BA (DG2)
[16:22:01] [PASSED] 0x56BB (DG2)
[16:22:01] [PASSED] 0x56BC (DG2)
[16:22:01] [PASSED] 0x56BD (DG2)
[16:22:01] [PASSED] 0x5693 (DG2)
[16:22:01] [PASSED] 0x5694 (DG2)
[16:22:01] [PASSED] 0x5695 (DG2)
[16:22:01] [PASSED] 0x56A3 (DG2)
[16:22:01] [PASSED] 0x56A4 (DG2)
[16:22:01] [PASSED] 0x56B2 (DG2)
[16:22:01] [PASSED] 0x56B3 (DG2)
[16:22:01] [PASSED] 0x5696 (DG2)
[16:22:01] [PASSED] 0x5697 (DG2)
[16:22:01] [PASSED] 0xB69 (PVC)
[16:22:01] [PASSED] 0xB6E (PVC)
[16:22:01] [PASSED] 0xBD4 (PVC)
[16:22:01] [PASSED] 0xBD5 (PVC)
[16:22:01] [PASSED] 0xBD6 (PVC)
[16:22:01] [PASSED] 0xBD7 (PVC)
[16:22:01] [PASSED] 0xBD8 (PVC)
[16:22:01] [PASSED] 0xBD9 (PVC)
[16:22:01] [PASSED] 0xBDA (PVC)
[16:22:01] [PASSED] 0xBDB (PVC)
[16:22:01] [PASSED] 0xBE0 (PVC)
[16:22:01] [PASSED] 0xBE1 (PVC)
[16:22:01] [PASSED] 0xBE5 (PVC)
[16:22:01] [PASSED] 0x7D40 (METEORLAKE)
[16:22:01] [PASSED] 0x7D45 (METEORLAKE)
[16:22:01] [PASSED] 0x7D55 (METEORLAKE)
[16:22:01] [PASSED] 0x7D60 (METEORLAKE)
[16:22:01] [PASSED] 0x7DD5 (METEORLAKE)
[16:22:01] [PASSED] 0x6420 (LUNARLAKE)
[16:22:01] [PASSED] 0x64A0 (LUNARLAKE)
[16:22:01] [PASSED] 0x64B0 (LUNARLAKE)
[16:22:01] [PASSED] 0xE202 (BATTLEMAGE)
[16:22:01] [PASSED] 0xE209 (BATTLEMAGE)
[16:22:01] [PASSED] 0xE20B (BATTLEMAGE)
[16:22:01] [PASSED] 0xE20C (BATTLEMAGE)
[16:22:01] [PASSED] 0xE20D (BATTLEMAGE)
[16:22:01] [PASSED] 0xE210 (BATTLEMAGE)
[16:22:01] [PASSED] 0xE211 (BATTLEMAGE)
[16:22:01] [PASSED] 0xE212 (BATTLEMAGE)
[16:22:01] [PASSED] 0xE216 (BATTLEMAGE)
[16:22:01] [PASSED] 0xE220 (BATTLEMAGE)
[16:22:01] [PASSED] 0xE221 (BATTLEMAGE)
[16:22:01] [PASSED] 0xE222 (BATTLEMAGE)
[16:22:01] [PASSED] 0xE223 (BATTLEMAGE)
[16:22:01] [PASSED] 0xB080 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB081 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB082 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB083 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB084 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB085 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB086 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB087 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB08F (PANTHERLAKE)
[16:22:01] [PASSED] 0xB090 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB0A0 (PANTHERLAKE)
[16:22:01] [PASSED] 0xB0B0 (PANTHERLAKE)
[16:22:01] [PASSED] 0xFD80 (PANTHERLAKE)
[16:22:01] [PASSED] 0xFD81 (PANTHERLAKE)
[16:22:01] [PASSED] 0xD740 (NOVALAKE_S)
[16:22:01] [PASSED] 0xD741 (NOVALAKE_S)
[16:22:01] [PASSED] 0xD742 (NOVALAKE_S)
[16:22:01] [PASSED] 0xD743 (NOVALAKE_S)
[16:22:01] [PASSED] 0xD744 (NOVALAKE_S)
[16:22:01] [PASSED] 0xD745 (NOVALAKE_S)
[16:22:01] [PASSED] 0x674C (CRESCENTISLAND)
[16:22:01] [PASSED] 0xD750 (NOVALAKE_P)
[16:22:01] [PASSED] 0xD751 (NOVALAKE_P)
[16:22:01] [PASSED] 0xD752 (NOVALAKE_P)
[16:22:01] [PASSED] 0xD753 (NOVALAKE_P)
[16:22:01] [PASSED] 0xD754 (NOVALAKE_P)
[16:22:01] [PASSED] 0xD755 (NOVALAKE_P)
[16:22:01] [PASSED] 0xD756 (NOVALAKE_P)
[16:22:01] [PASSED] 0xD757 (NOVALAKE_P)
[16:22:01] [PASSED] 0xD75F (NOVALAKE_P)
[16:22:01] =============== [PASSED] check_platform_desc ===============
[16:22:01] ===================== [PASSED] xe_pci ======================
[16:22:01] =================== xe_rtp (2 subtests) ====================
[16:22:01] =============== xe_rtp_process_to_sr_tests  ================
[16:22:01] [PASSED] coalesce-same-reg
[16:22:01] [PASSED] no-match-no-add
[16:22:01] [PASSED] match-or
[16:22:01] [PASSED] match-or-xfail
[16:22:01] [PASSED] no-match-no-add-multiple-rules
[16:22:01] [PASSED] two-regs-two-entries
[16:22:01] [PASSED] clr-one-set-other
[16:22:01] [PASSED] set-field
[16:22:01] [PASSED] conflict-duplicate
[16:22:01] [PASSED] conflict-not-disjoint
[16:22:01] [PASSED] conflict-reg-type
[16:22:01] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[16:22:01] ================== xe_rtp_process_tests  ===================
[16:22:01] [PASSED] active1
[16:22:01] [PASSED] active2
[16:22:01] [PASSED] active-inactive
[16:22:01] [PASSED] inactive-active
[16:22:01] [PASSED] inactive-1st_or_active-inactive
[16:22:01] [PASSED] inactive-2nd_or_active-inactive
[16:22:01] [PASSED] inactive-last_or_active-inactive
[16:22:01] [PASSED] inactive-no_or_active-inactive
[16:22:01] ============== [PASSED] xe_rtp_process_tests ===============
[16:22:01] ===================== [PASSED] xe_rtp ======================
[16:22:01] ==================== xe_wa (1 subtest) =====================
[16:22:01] ======================== xe_wa_gt  =========================
[16:22:01] [PASSED] TIGERLAKE B0
[16:22:01] [PASSED] DG1 A0
[16:22:01] [PASSED] DG1 B0
[16:22:01] [PASSED] ALDERLAKE_S A0
[16:22:01] [PASSED] ALDERLAKE_S B0
[16:22:01] [PASSED] ALDERLAKE_S C0
[16:22:01] [PASSED] ALDERLAKE_S D0
[16:22:01] [PASSED] ALDERLAKE_P A0
[16:22:01] [PASSED] ALDERLAKE_P B0
[16:22:01] [PASSED] ALDERLAKE_P C0
[16:22:01] [PASSED] ALDERLAKE_S RPLS D0
[16:22:01] [PASSED] ALDERLAKE_P RPLU E0
[16:22:01] [PASSED] DG2 G10 C0
[16:22:01] [PASSED] DG2 G11 B1
[16:22:01] [PASSED] DG2 G12 A1
[16:22:01] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[16:22:01] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[16:22:01] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[16:22:01] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[16:22:01] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[16:22:01] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[16:22:01] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[16:22:01] ==================== [PASSED] xe_wa_gt =====================
[16:22:01] ====================== [PASSED] xe_wa ======================
[16:22:01] ============================================================
[16:22:01] Testing complete. Ran 522 tests: passed: 504, skipped: 18
[16:22:01] Elapsed time: 36.163s total, 4.163s configuring, 31.481s building, 0.464s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[16:22:01] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[16:22:03] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[16:22:28] Starting KUnit Kernel (1/1)...
[16:22:28] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[16:22:28] ============ drm_test_pick_cmdline (2 subtests) ============
[16:22:28] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[16:22:28] =============== drm_test_pick_cmdline_named  ===============
[16:22:28] [PASSED] NTSC
[16:22:28] [PASSED] NTSC-J
[16:22:28] [PASSED] PAL
[16:22:28] [PASSED] PAL-M
[16:22:28] =========== [PASSED] drm_test_pick_cmdline_named ===========
[16:22:28] ============== [PASSED] drm_test_pick_cmdline ==============
[16:22:28] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[16:22:28] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[16:22:28] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[16:22:28] =========== drm_validate_clone_mode (2 subtests) ===========
[16:22:28] ============== drm_test_check_in_clone_mode  ===============
[16:22:28] [PASSED] in_clone_mode
[16:22:28] [PASSED] not_in_clone_mode
[16:22:28] ========== [PASSED] drm_test_check_in_clone_mode ===========
[16:22:28] =============== drm_test_check_valid_clones  ===============
[16:22:28] [PASSED] not_in_clone_mode
[16:22:28] [PASSED] valid_clone
[16:22:28] [PASSED] invalid_clone
[16:22:28] =========== [PASSED] drm_test_check_valid_clones ===========
[16:22:28] ============= [PASSED] drm_validate_clone_mode =============
[16:22:28] ============= drm_validate_modeset (1 subtest) =============
[16:22:28] [PASSED] drm_test_check_connector_changed_modeset
[16:22:28] ============== [PASSED] drm_validate_modeset ===============
[16:22:28] ====== drm_test_bridge_get_current_state (2 subtests) ======
[16:22:28] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[16:22:28] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[16:22:28] ======== [PASSED] drm_test_bridge_get_current_state ========
[16:22:28] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[16:22:28] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[16:22:28] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[16:22:28] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[16:22:28] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[16:22:28] ============== drm_bridge_alloc (2 subtests) ===============
[16:22:28] [PASSED] drm_test_drm_bridge_alloc_basic
[16:22:28] [PASSED] drm_test_drm_bridge_alloc_get_put
[16:22:28] ================ [PASSED] drm_bridge_alloc =================
[16:22:28] ============= drm_cmdline_parser (40 subtests) =============
[16:22:28] [PASSED] drm_test_cmdline_force_d_only
[16:22:28] [PASSED] drm_test_cmdline_force_D_only_dvi
[16:22:28] [PASSED] drm_test_cmdline_force_D_only_hdmi
[16:22:28] [PASSED] drm_test_cmdline_force_D_only_not_digital
[16:22:28] [PASSED] drm_test_cmdline_force_e_only
[16:22:28] [PASSED] drm_test_cmdline_res
[16:22:28] [PASSED] drm_test_cmdline_res_vesa
[16:22:28] [PASSED] drm_test_cmdline_res_vesa_rblank
[16:22:28] [PASSED] drm_test_cmdline_res_rblank
[16:22:28] [PASSED] drm_test_cmdline_res_bpp
[16:22:28] [PASSED] drm_test_cmdline_res_refresh
[16:22:28] [PASSED] drm_test_cmdline_res_bpp_refresh
[16:22:28] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[16:22:28] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[16:22:28] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[16:22:28] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[16:22:28] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[16:22:28] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[16:22:28] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[16:22:28] [PASSED] drm_test_cmdline_res_margins_force_on
[16:22:28] [PASSED] drm_test_cmdline_res_vesa_margins
[16:22:28] [PASSED] drm_test_cmdline_name
[16:22:28] [PASSED] drm_test_cmdline_name_bpp
[16:22:28] [PASSED] drm_test_cmdline_name_option
[16:22:28] [PASSED] drm_test_cmdline_name_bpp_option
[16:22:28] [PASSED] drm_test_cmdline_rotate_0
[16:22:28] [PASSED] drm_test_cmdline_rotate_90
[16:22:28] [PASSED] drm_test_cmdline_rotate_180
[16:22:28] [PASSED] drm_test_cmdline_rotate_270
[16:22:28] [PASSED] drm_test_cmdline_hmirror
[16:22:28] [PASSED] drm_test_cmdline_vmirror
[16:22:28] [PASSED] drm_test_cmdline_margin_options
[16:22:28] [PASSED] drm_test_cmdline_multiple_options
[16:22:28] [PASSED] drm_test_cmdline_bpp_extra_and_option
[16:22:28] [PASSED] drm_test_cmdline_extra_and_option
[16:22:28] [PASSED] drm_test_cmdline_freestanding_options
[16:22:28] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[16:22:28] [PASSED] drm_test_cmdline_panel_orientation
[16:22:28] ================ drm_test_cmdline_invalid  =================
[16:22:28] [PASSED] margin_only
[16:22:28] [PASSED] interlace_only
[16:22:28] [PASSED] res_missing_x
[16:22:28] [PASSED] res_missing_y
[16:22:28] [PASSED] res_bad_y
[16:22:28] [PASSED] res_missing_y_bpp
[16:22:28] [PASSED] res_bad_bpp
[16:22:28] [PASSED] res_bad_refresh
[16:22:28] [PASSED] res_bpp_refresh_force_on_off
[16:22:28] [PASSED] res_invalid_mode
[16:22:28] [PASSED] res_bpp_wrong_place_mode
[16:22:28] [PASSED] name_bpp_refresh
[16:22:28] [PASSED] name_refresh
[16:22:28] [PASSED] name_refresh_wrong_mode
[16:22:28] [PASSED] name_refresh_invalid_mode
[16:22:28] [PASSED] rotate_multiple
[16:22:28] [PASSED] rotate_invalid_val
[16:22:28] [PASSED] rotate_truncated
[16:22:28] [PASSED] invalid_option
[16:22:28] [PASSED] invalid_tv_option
[16:22:28] [PASSED] truncated_tv_option
[16:22:28] ============ [PASSED] drm_test_cmdline_invalid =============
[16:22:28] =============== drm_test_cmdline_tv_options  ===============
[16:22:28] [PASSED] NTSC
[16:22:28] [PASSED] NTSC_443
[16:22:28] [PASSED] NTSC_J
[16:22:28] [PASSED] PAL
[16:22:28] [PASSED] PAL_M
[16:22:28] [PASSED] PAL_N
[16:22:28] [PASSED] SECAM
[16:22:28] [PASSED] MONO_525
[16:22:28] [PASSED] MONO_625
[16:22:28] =========== [PASSED] drm_test_cmdline_tv_options ===========
[16:22:28] =============== [PASSED] drm_cmdline_parser ================
[16:22:28] ========== drmm_connector_hdmi_init (20 subtests) ==========
[16:22:28] [PASSED] drm_test_connector_hdmi_init_valid
[16:22:28] [PASSED] drm_test_connector_hdmi_init_bpc_8
[16:22:28] [PASSED] drm_test_connector_hdmi_init_bpc_10
[16:22:28] [PASSED] drm_test_connector_hdmi_init_bpc_12
[16:22:28] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[16:22:28] [PASSED] drm_test_connector_hdmi_init_bpc_null
[16:22:28] [PASSED] drm_test_connector_hdmi_init_formats_empty
[16:22:28] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[16:22:28] === drm_test_connector_hdmi_init_formats_yuv420_allowed  ===
[16:22:28] [PASSED] supported_formats=0x9 yuv420_allowed=1
[16:22:28] [PASSED] supported_formats=0x9 yuv420_allowed=0
[16:22:28] [PASSED] supported_formats=0x3 yuv420_allowed=1
[16:22:28] [PASSED] supported_formats=0x3 yuv420_allowed=0
[16:22:28] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[16:22:28] [PASSED] drm_test_connector_hdmi_init_null_ddc
[16:22:28] [PASSED] drm_test_connector_hdmi_init_null_product
[16:22:28] [PASSED] drm_test_connector_hdmi_init_null_vendor
[16:22:28] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[16:22:28] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[16:22:28] [PASSED] drm_test_connector_hdmi_init_product_valid
[16:22:28] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[16:22:28] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[16:22:28] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[16:22:28] ========= drm_test_connector_hdmi_init_type_valid  =========
[16:22:28] [PASSED] HDMI-A
[16:22:28] [PASSED] HDMI-B
[16:22:28] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[16:22:28] ======== drm_test_connector_hdmi_init_type_invalid  ========
[16:22:28] [PASSED] Unknown
[16:22:28] [PASSED] VGA
[16:22:28] [PASSED] DVI-I
[16:22:28] [PASSED] DVI-D
[16:22:28] [PASSED] DVI-A
[16:22:28] [PASSED] Composite
[16:22:28] [PASSED] SVIDEO
[16:22:28] [PASSED] LVDS
[16:22:28] [PASSED] Component
[16:22:28] [PASSED] DIN
[16:22:28] [PASSED] DP
[16:22:28] [PASSED] TV
[16:22:28] [PASSED] eDP
[16:22:28] [PASSED] Virtual
[16:22:28] [PASSED] DSI
[16:22:28] [PASSED] DPI
[16:22:28] [PASSED] Writeback
[16:22:28] [PASSED] SPI
[16:22:28] [PASSED] USB
[16:22:28] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[16:22:28] ============ [PASSED] drmm_connector_hdmi_init =============
[16:22:28] ============= drmm_connector_init (3 subtests) =============
[16:22:28] [PASSED] drm_test_drmm_connector_init
[16:22:28] [PASSED] drm_test_drmm_connector_init_null_ddc
[16:22:28] ========= drm_test_drmm_connector_init_type_valid  =========
[16:22:28] [PASSED] Unknown
[16:22:28] [PASSED] VGA
[16:22:28] [PASSED] DVI-I
[16:22:28] [PASSED] DVI-D
[16:22:28] [PASSED] DVI-A
[16:22:28] [PASSED] Composite
[16:22:28] [PASSED] SVIDEO
[16:22:28] [PASSED] LVDS
[16:22:28] [PASSED] Component
[16:22:28] [PASSED] DIN
[16:22:28] [PASSED] DP
[16:22:28] [PASSED] HDMI-A
[16:22:28] [PASSED] HDMI-B
[16:22:28] [PASSED] TV
[16:22:28] [PASSED] eDP
[16:22:28] [PASSED] Virtual
[16:22:28] [PASSED] DSI
[16:22:28] [PASSED] DPI
[16:22:28] [PASSED] Writeback
[16:22:28] [PASSED] SPI
[16:22:28] [PASSED] USB
[16:22:28] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[16:22:28] =============== [PASSED] drmm_connector_init ===============
[16:22:28] ========= drm_connector_dynamic_init (6 subtests) ==========
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_init
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_init_properties
[16:22:28] ===== drm_test_drm_connector_dynamic_init_type_valid  ======
[16:22:28] [PASSED] Unknown
[16:22:28] [PASSED] VGA
[16:22:28] [PASSED] DVI-I
[16:22:28] [PASSED] DVI-D
[16:22:28] [PASSED] DVI-A
[16:22:28] [PASSED] Composite
[16:22:28] [PASSED] SVIDEO
[16:22:28] [PASSED] LVDS
[16:22:28] [PASSED] Component
[16:22:28] [PASSED] DIN
[16:22:28] [PASSED] DP
[16:22:28] [PASSED] HDMI-A
[16:22:28] [PASSED] HDMI-B
[16:22:28] [PASSED] TV
[16:22:28] [PASSED] eDP
[16:22:28] [PASSED] Virtual
[16:22:28] [PASSED] DSI
[16:22:28] [PASSED] DPI
[16:22:28] [PASSED] Writeback
[16:22:28] [PASSED] SPI
[16:22:28] [PASSED] USB
[16:22:28] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[16:22:28] ======== drm_test_drm_connector_dynamic_init_name  =========
[16:22:28] [PASSED] Unknown
[16:22:28] [PASSED] VGA
[16:22:28] [PASSED] DVI-I
[16:22:28] [PASSED] DVI-D
[16:22:28] [PASSED] DVI-A
[16:22:28] [PASSED] Composite
[16:22:28] [PASSED] SVIDEO
[16:22:28] [PASSED] LVDS
[16:22:28] [PASSED] Component
[16:22:28] [PASSED] DIN
[16:22:28] [PASSED] DP
[16:22:28] [PASSED] HDMI-A
[16:22:28] [PASSED] HDMI-B
[16:22:28] [PASSED] TV
[16:22:28] [PASSED] eDP
[16:22:28] [PASSED] Virtual
[16:22:28] [PASSED] DSI
[16:22:28] [PASSED] DPI
[16:22:28] [PASSED] Writeback
[16:22:28] [PASSED] SPI
[16:22:28] [PASSED] USB
[16:22:28] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[16:22:28] =========== [PASSED] drm_connector_dynamic_init ============
[16:22:28] ==== drm_connector_dynamic_register_early (4 subtests) =====
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[16:22:28] ====== [PASSED] drm_connector_dynamic_register_early =======
[16:22:28] ======= drm_connector_dynamic_register (7 subtests) ========
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[16:22:28] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[16:22:28] ========= [PASSED] drm_connector_dynamic_register ==========
[16:22:28] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[16:22:28] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[16:22:28] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[16:22:28] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[16:22:28] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[16:22:28] ========== drm_test_get_tv_mode_from_name_valid  ===========
[16:22:28] [PASSED] NTSC
[16:22:28] [PASSED] NTSC-443
[16:22:28] [PASSED] NTSC-J
[16:22:28] [PASSED] PAL
[16:22:28] [PASSED] PAL-M
[16:22:28] [PASSED] PAL-N
[16:22:28] [PASSED] SECAM
[16:22:28] [PASSED] Mono
[16:22:28] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[16:22:28] [PASSED] drm_test_get_tv_mode_from_name_truncated
[16:22:28] ============ [PASSED] drm_get_tv_mode_from_name ============
[16:22:28] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[16:22:28] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[16:22:28] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[16:22:28] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[16:22:28] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[16:22:28] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[16:22:28] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[16:22:28] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid  =
[16:22:28] [PASSED] VIC 96
[16:22:28] [PASSED] VIC 97
[16:22:28] [PASSED] VIC 101
[16:22:28] [PASSED] VIC 102
[16:22:28] [PASSED] VIC 106
[16:22:28] [PASSED] VIC 107
[16:22:28] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[16:22:28] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[16:22:28] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[16:22:28] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[16:22:28] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[16:22:28] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[16:22:28] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[16:22:28] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[16:22:28] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name  ====
[16:22:28] [PASSED] Automatic
[16:22:28] [PASSED] Full
[16:22:28] [PASSED] Limited 16:235
[16:22:28] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[16:22:28] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[16:22:28] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[16:22:28] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[16:22:28] === drm_test_drm_hdmi_connector_get_output_format_name  ====
[16:22:28] [PASSED] RGB
[16:22:28] [PASSED] YUV 4:2:0
[16:22:28] [PASSED] YUV 4:2:2
[16:22:28] [PASSED] YUV 4:4:4
[16:22:28] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[16:22:28] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[16:22:28] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[16:22:28] ============= drm_damage_helper (21 subtests) ==============
[16:22:28] [PASSED] drm_test_damage_iter_no_damage
[16:22:28] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[16:22:28] [PASSED] drm_test_damage_iter_no_damage_src_moved
[16:22:28] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[16:22:28] [PASSED] drm_test_damage_iter_no_damage_not_visible
[16:22:28] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[16:22:28] [PASSED] drm_test_damage_iter_no_damage_no_fb
[16:22:28] [PASSED] drm_test_damage_iter_simple_damage
[16:22:28] [PASSED] drm_test_damage_iter_single_damage
[16:22:28] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[16:22:28] [PASSED] drm_test_damage_iter_single_damage_outside_src
[16:22:28] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[16:22:28] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[16:22:28] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[16:22:28] [PASSED] drm_test_damage_iter_single_damage_src_moved
[16:22:28] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[16:22:28] [PASSED] drm_test_damage_iter_damage
[16:22:28] [PASSED] drm_test_damage_iter_damage_one_intersect
[16:22:28] [PASSED] drm_test_damage_iter_damage_one_outside
[16:22:28] [PASSED] drm_test_damage_iter_damage_src_moved
[16:22:28] [PASSED] drm_test_damage_iter_damage_not_visible
[16:22:28] ================ [PASSED] drm_damage_helper ================
[16:22:28] ============== drm_dp_mst_helper (3 subtests) ==============
[16:22:28] ============== drm_test_dp_mst_calc_pbn_mode  ==============
[16:22:28] [PASSED] Clock 154000 BPP 30 DSC disabled
[16:22:28] [PASSED] Clock 234000 BPP 30 DSC disabled
[16:22:28] [PASSED] Clock 297000 BPP 24 DSC disabled
[16:22:28] [PASSED] Clock 332880 BPP 24 DSC enabled
[16:22:28] [PASSED] Clock 324540 BPP 24 DSC enabled
[16:22:28] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[16:22:28] ============== drm_test_dp_mst_calc_pbn_div  ===============
[16:22:28] [PASSED] Link rate 2000000 lane count 4
[16:22:28] [PASSED] Link rate 2000000 lane count 2
[16:22:28] [PASSED] Link rate 2000000 lane count 1
[16:22:28] [PASSED] Link rate 1350000 lane count 4
[16:22:28] [PASSED] Link rate 1350000 lane count 2
[16:22:28] [PASSED] Link rate 1350000 lane count 1
[16:22:28] [PASSED] Link rate 1000000 lane count 4
[16:22:28] [PASSED] Link rate 1000000 lane count 2
[16:22:28] [PASSED] Link rate 1000000 lane count 1
[16:22:28] [PASSED] Link rate 810000 lane count 4
[16:22:28] [PASSED] Link rate 810000 lane count 2
[16:22:28] [PASSED] Link rate 810000 lane count 1
[16:22:28] [PASSED] Link rate 540000 lane count 4
[16:22:28] [PASSED] Link rate 540000 lane count 2
[16:22:28] [PASSED] Link rate 540000 lane count 1
[16:22:28] [PASSED] Link rate 270000 lane count 4
[16:22:28] [PASSED] Link rate 270000 lane count 2
[16:22:28] [PASSED] Link rate 270000 lane count 1
[16:22:28] [PASSED] Link rate 162000 lane count 4
[16:22:28] [PASSED] Link rate 162000 lane count 2
[16:22:28] [PASSED] Link rate 162000 lane count 1
[16:22:28] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[16:22:28] ========= drm_test_dp_mst_sideband_msg_req_decode  =========
[16:22:28] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[16:22:28] [PASSED] DP_POWER_UP_PHY with port number
[16:22:28] [PASSED] DP_POWER_DOWN_PHY with port number
[16:22:28] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[16:22:28] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[16:22:28] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[16:22:28] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[16:22:28] [PASSED] DP_QUERY_PAYLOAD with port number
[16:22:28] [PASSED] DP_QUERY_PAYLOAD with VCPI
[16:22:28] [PASSED] DP_REMOTE_DPCD_READ with port number
[16:22:28] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[16:22:28] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[16:22:28] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[16:22:28] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[16:22:28] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[16:22:28] [PASSED] DP_REMOTE_I2C_READ with port number
[16:22:28] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[16:22:28] [PASSED] DP_REMOTE_I2C_READ with transactions array
[16:22:28] [PASSED] DP_REMOTE_I2C_WRITE with port number
[16:22:28] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[16:22:28] [PASSED] DP_REMOTE_I2C_WRITE with data array
[16:22:28] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[16:22:28] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[16:22:28] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[16:22:28] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[16:22:28] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[16:22:28] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[16:22:28] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[16:22:28] ================ [PASSED] drm_dp_mst_helper ================
[16:22:28] ================== drm_exec (7 subtests) ===================
[16:22:28] [PASSED] sanitycheck
[16:22:28] [PASSED] test_lock
[16:22:28] [PASSED] test_lock_unlock
[16:22:28] [PASSED] test_duplicates
[16:22:28] [PASSED] test_prepare
[16:22:28] [PASSED] test_prepare_array
[16:22:28] [PASSED] test_multiple_loops
[16:22:28] ==================== [PASSED] drm_exec =====================
[16:22:28] =========== drm_format_helper_test (17 subtests) ===========
[16:22:28] ============== drm_test_fb_xrgb8888_to_gray8  ==============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[16:22:28] ============= drm_test_fb_xrgb8888_to_rgb332  ==============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[16:22:28] ============= drm_test_fb_xrgb8888_to_rgb565  ==============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[16:22:28] ============ drm_test_fb_xrgb8888_to_xrgb1555  =============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[16:22:28] ============ drm_test_fb_xrgb8888_to_argb1555  =============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[16:22:28] ============ drm_test_fb_xrgb8888_to_rgba5551  =============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[16:22:28] ============= drm_test_fb_xrgb8888_to_rgb888  ==============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[16:22:28] ============= drm_test_fb_xrgb8888_to_bgr888  ==============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[16:22:28] ============ drm_test_fb_xrgb8888_to_argb8888  =============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[16:22:28] =========== drm_test_fb_xrgb8888_to_xrgb2101010  ===========
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[16:22:28] =========== drm_test_fb_xrgb8888_to_argb2101010  ===========
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[16:22:28] ============== drm_test_fb_xrgb8888_to_mono  ===============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[16:22:28] ==================== drm_test_fb_swab  =====================
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ================ [PASSED] drm_test_fb_swab =================
[16:22:28] ============ drm_test_fb_xrgb8888_to_xbgr8888  =============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[16:22:28] ============ drm_test_fb_xrgb8888_to_abgr8888  =============
[16:22:28] [PASSED] single_pixel_source_buffer
[16:22:28] [PASSED] single_pixel_clip_rectangle
[16:22:28] [PASSED] well_known_colors
[16:22:28] [PASSED] destination_pitch
[16:22:28] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[16:22:28] ================= drm_test_fb_clip_offset  =================
[16:22:28] [PASSED] pass through
[16:22:28] [PASSED] horizontal offset
[16:22:28] [PASSED] vertical offset
[16:22:28] [PASSED] horizontal and vertical offset
[16:22:28] [PASSED] horizontal offset (custom pitch)
[16:22:28] [PASSED] vertical offset (custom pitch)
[16:22:28] [PASSED] horizontal and vertical offset (custom pitch)
[16:22:28] ============= [PASSED] drm_test_fb_clip_offset =============
[16:22:28] =================== drm_test_fb_memcpy  ====================
[16:22:28] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[16:22:28] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[16:22:28] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[16:22:28] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[16:22:28] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[16:22:28] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[16:22:28] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[16:22:28] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[16:22:28] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[16:22:28] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[16:22:28] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[16:22:28] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[16:22:28] =============== [PASSED] drm_test_fb_memcpy ================
[16:22:28] ============= [PASSED] drm_format_helper_test ==============
[16:22:28] ================= drm_format (18 subtests) =================
[16:22:28] [PASSED] drm_test_format_block_width_invalid
[16:22:28] [PASSED] drm_test_format_block_width_one_plane
[16:22:28] [PASSED] drm_test_format_block_width_two_plane
[16:22:28] [PASSED] drm_test_format_block_width_three_plane
[16:22:28] [PASSED] drm_test_format_block_width_tiled
[16:22:28] [PASSED] drm_test_format_block_height_invalid
[16:22:28] [PASSED] drm_test_format_block_height_one_plane
[16:22:28] [PASSED] drm_test_format_block_height_two_plane
[16:22:28] [PASSED] drm_test_format_block_height_three_plane
[16:22:28] [PASSED] drm_test_format_block_height_tiled
[16:22:28] [PASSED] drm_test_format_min_pitch_invalid
[16:22:28] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[16:22:28] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[16:22:28] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[16:22:28] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[16:22:28] [PASSED] drm_test_format_min_pitch_two_plane
[16:22:28] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[16:22:28] [PASSED] drm_test_format_min_pitch_tiled
[16:22:28] =================== [PASSED] drm_format ====================
[16:22:28] ============== drm_framebuffer (10 subtests) ===============
[16:22:28] ========== drm_test_framebuffer_check_src_coords  ==========
[16:22:28] [PASSED] Success: source fits into fb
[16:22:28] [PASSED] Fail: overflowing fb with x-axis coordinate
[16:22:28] [PASSED] Fail: overflowing fb with y-axis coordinate
[16:22:28] [PASSED] Fail: overflowing fb with source width
[16:22:28] [PASSED] Fail: overflowing fb with source height
[16:22:28] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[16:22:28] [PASSED] drm_test_framebuffer_cleanup
[16:22:28] =============== drm_test_framebuffer_create  ===============
[16:22:28] [PASSED] ABGR8888 normal sizes
[16:22:28] [PASSED] ABGR8888 max sizes
[16:22:28] [PASSED] ABGR8888 pitch greater than min required
[16:22:28] [PASSED] ABGR8888 pitch less than min required
[16:22:28] [PASSED] ABGR8888 Invalid width
[16:22:28] [PASSED] ABGR8888 Invalid buffer handle
[16:22:28] [PASSED] No pixel format
[16:22:28] [PASSED] ABGR8888 Width 0
[16:22:28] [PASSED] ABGR8888 Height 0
[16:22:28] [PASSED] ABGR8888 Out of bound height * pitch combination
[16:22:28] [PASSED] ABGR8888 Large buffer offset
[16:22:28] [PASSED] ABGR8888 Buffer offset for inexistent plane
[16:22:28] [PASSED] ABGR8888 Invalid flag
[16:22:28] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[16:22:28] [PASSED] ABGR8888 Valid buffer modifier
[16:22:28] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[16:22:28] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[16:22:28] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[16:22:28] [PASSED] NV12 Normal sizes
[16:22:28] [PASSED] NV12 Max sizes
[16:22:28] [PASSED] NV12 Invalid pitch
[16:22:28] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[16:22:28] [PASSED] NV12 different  modifier per-plane
[16:22:28] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[16:22:28] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[16:22:28] [PASSED] NV12 Modifier for inexistent plane
[16:22:28] [PASSED] NV12 Handle for inexistent plane
[16:22:28] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[16:22:28] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[16:22:28] [PASSED] YVU420 Normal sizes
[16:22:28] [PASSED] YVU420 Max sizes
[16:22:28] [PASSED] YVU420 Invalid pitch
[16:22:28] [PASSED] YVU420 Different pitches
[16:22:28] [PASSED] YVU420 Different buffer offsets/pitches
[16:22:28] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[16:22:28] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[16:22:28] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[16:22:28] [PASSED] YVU420 Valid modifier
[16:22:28] [PASSED] YVU420 Different modifiers per plane
[16:22:28] [PASSED] YVU420 Modifier for inexistent plane
[16:22:28] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[16:22:28] [PASSED] X0L2 Normal sizes
[16:22:28] [PASSED] X0L2 Max sizes
[16:22:28] [PASSED] X0L2 Invalid pitch
[16:22:28] [PASSED] X0L2 Pitch greater than minimum required
[16:22:28] [PASSED] X0L2 Handle for inexistent plane
[16:22:28] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[16:22:28] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[16:22:28] [PASSED] X0L2 Valid modifier
[16:22:28] [PASSED] X0L2 Modifier for inexistent plane
[16:22:28] =========== [PASSED] drm_test_framebuffer_create ===========
[16:22:28] [PASSED] drm_test_framebuffer_free
[16:22:28] [PASSED] drm_test_framebuffer_init
[16:22:28] [PASSED] drm_test_framebuffer_init_bad_format
[16:22:28] [PASSED] drm_test_framebuffer_init_dev_mismatch
[16:22:28] [PASSED] drm_test_framebuffer_lookup
[16:22:28] [PASSED] drm_test_framebuffer_lookup_inexistent
[16:22:28] [PASSED] drm_test_framebuffer_modifiers_not_supported
[16:22:28] ================= [PASSED] drm_framebuffer =================
[16:22:28] ================ drm_gem_shmem (8 subtests) ================
[16:22:28] [PASSED] drm_gem_shmem_test_obj_create
[16:22:28] [PASSED] drm_gem_shmem_test_obj_create_private
[16:22:28] [PASSED] drm_gem_shmem_test_pin_pages
[16:22:28] [PASSED] drm_gem_shmem_test_vmap
[16:22:28] [PASSED] drm_gem_shmem_test_get_sg_table
[16:22:28] [PASSED] drm_gem_shmem_test_get_pages_sgt
[16:22:28] [PASSED] drm_gem_shmem_test_madvise
[16:22:28] [PASSED] drm_gem_shmem_test_purge
[16:22:28] ================== [PASSED] drm_gem_shmem ==================
[16:22:28] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[16:22:28] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[16:22:28] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[16:22:28] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[16:22:28] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[16:22:28] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[16:22:28] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[16:22:28] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420  =======
[16:22:28] [PASSED] Automatic
[16:22:28] [PASSED] Full
[16:22:28] [PASSED] Limited 16:235
[16:22:28] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[16:22:28] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[16:22:28] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[16:22:28] [PASSED] drm_test_check_disable_connector
[16:22:28] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[16:22:28] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[16:22:28] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[16:22:28] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[16:22:28] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[16:22:28] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[16:22:28] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[16:22:28] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[16:22:28] [PASSED] drm_test_check_output_bpc_dvi
[16:22:28] [PASSED] drm_test_check_output_bpc_format_vic_1
[16:22:28] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[16:22:28] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[16:22:28] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[16:22:28] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[16:22:28] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[16:22:28] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[16:22:28] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[16:22:28] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[16:22:28] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[16:22:28] [PASSED] drm_test_check_broadcast_rgb_value
[16:22:28] [PASSED] drm_test_check_bpc_8_value
[16:22:28] [PASSED] drm_test_check_bpc_10_value
[16:22:28] [PASSED] drm_test_check_bpc_12_value
[16:22:28] [PASSED] drm_test_check_format_value
[16:22:28] [PASSED] drm_test_check_tmds_char_value
[16:22:28] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[16:22:28] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[16:22:28] [PASSED] drm_test_check_mode_valid
[16:22:28] [PASSED] drm_test_check_mode_valid_reject
[16:22:28] [PASSED] drm_test_check_mode_valid_reject_rate
[16:22:28] [PASSED] drm_test_check_mode_valid_reject_max_clock
[16:22:28] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[16:22:28] = drm_atomic_helper_connector_hdmi_infoframes (5 subtests) =
[16:22:28] [PASSED] drm_test_check_infoframes
[16:22:28] [PASSED] drm_test_check_reject_avi_infoframe
[16:22:28] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_8
[16:22:28] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_10
[16:22:28] [PASSED] drm_test_check_reject_audio_infoframe
[16:22:28] === [PASSED] drm_atomic_helper_connector_hdmi_infoframes ===
[16:22:28] ================= drm_managed (2 subtests) =================
[16:22:28] [PASSED] drm_test_managed_release_action
[16:22:28] [PASSED] drm_test_managed_run_action
[16:22:28] =================== [PASSED] drm_managed ===================
[16:22:28] =================== drm_mm (6 subtests) ====================
[16:22:28] [PASSED] drm_test_mm_init
[16:22:28] [PASSED] drm_test_mm_debug
[16:22:28] [PASSED] drm_test_mm_align32
[16:22:28] [PASSED] drm_test_mm_align64
[16:22:28] [PASSED] drm_test_mm_lowest
[16:22:28] [PASSED] drm_test_mm_highest
[16:22:28] ===================== [PASSED] drm_mm ======================
[16:22:28] ============= drm_modes_analog_tv (5 subtests) =============
[16:22:28] [PASSED] drm_test_modes_analog_tv_mono_576i
[16:22:28] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[16:22:28] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[16:22:28] [PASSED] drm_test_modes_analog_tv_pal_576i
[16:22:28] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[16:22:28] =============== [PASSED] drm_modes_analog_tv ===============
[16:22:28] ============== drm_plane_helper (2 subtests) ===============
[16:22:28] =============== drm_test_check_plane_state  ================
[16:22:28] [PASSED] clipping_simple
[16:22:28] [PASSED] clipping_rotate_reflect
[16:22:28] [PASSED] positioning_simple
[16:22:28] [PASSED] upscaling
[16:22:28] [PASSED] downscaling
[16:22:28] [PASSED] rounding1
[16:22:28] [PASSED] rounding2
[16:22:28] [PASSED] rounding3
[16:22:28] [PASSED] rounding4
[16:22:28] =========== [PASSED] drm_test_check_plane_state ============
[16:22:28] =========== drm_test_check_invalid_plane_state  ============
[16:22:28] [PASSED] positioning_invalid
[16:22:28] [PASSED] upscaling_invalid
[16:22:28] [PASSED] downscaling_invalid
[16:22:28] ======= [PASSED] drm_test_check_invalid_plane_state ========
[16:22:28] ================ [PASSED] drm_plane_helper =================
[16:22:28] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[16:22:28] ====== drm_test_connector_helper_tv_get_modes_check  =======
[16:22:28] [PASSED] None
[16:22:28] [PASSED] PAL
[16:22:28] [PASSED] NTSC
[16:22:28] [PASSED] Both, NTSC Default
[16:22:28] [PASSED] Both, PAL Default
[16:22:28] [PASSED] Both, NTSC Default, with PAL on command-line
[16:22:28] [PASSED] Both, PAL Default, with NTSC on command-line
[16:22:28] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[16:22:28] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[16:22:28] ================== drm_rect (9 subtests) ===================
[16:22:28] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[16:22:28] [PASSED] drm_test_rect_clip_scaled_not_clipped
[16:22:28] [PASSED] drm_test_rect_clip_scaled_clipped
[16:22:28] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[16:22:28] ================= drm_test_rect_intersect  =================
[16:22:28] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[16:22:28] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[16:22:28] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[16:22:28] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[16:22:28] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[16:22:28] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[16:22:28] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[16:22:28] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[16:22:28] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[16:22:28] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[16:22:28] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[16:22:28] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[16:22:28] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[16:22:28] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[16:22:28] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[16:22:28] ============= [PASSED] drm_test_rect_intersect =============
[16:22:28] ================ drm_test_rect_calc_hscale  ================
[16:22:28] [PASSED] normal use
[16:22:28] [PASSED] out of max range
[16:22:28] [PASSED] out of min range
[16:22:28] [PASSED] zero dst
[16:22:28] [PASSED] negative src
[16:22:28] [PASSED] negative dst
[16:22:28] ============ [PASSED] drm_test_rect_calc_hscale ============
[16:22:28] ================ drm_test_rect_calc_vscale  ================
[16:22:28] [PASSED] normal use
[16:22:28] [PASSED] out of max range
[16:22:28] [PASSED] out of min range
[16:22:28] [PASSED] zero dst
[16:22:28] [PASSED] negative src
[16:22:28] [PASSED] negative dst
[16:22:28] ============ [PASSED] drm_test_rect_calc_vscale ============
[16:22:28] ================== drm_test_rect_rotate  ===================
[16:22:28] [PASSED] reflect-x
[16:22:28] [PASSED] reflect-y
[16:22:28] [PASSED] rotate-0
[16:22:28] [PASSED] rotate-90
[16:22:28] [PASSED] rotate-180
[16:22:28] [PASSED] rotate-270
[16:22:28] ============== [PASSED] drm_test_rect_rotate ===============
[16:22:28] ================ drm_test_rect_rotate_inv  =================
[16:22:28] [PASSED] reflect-x
[16:22:28] [PASSED] reflect-y
[16:22:28] [PASSED] rotate-0
[16:22:28] [PASSED] rotate-90
[16:22:28] [PASSED] rotate-180
[16:22:28] [PASSED] rotate-270
[16:22:28] ============ [PASSED] drm_test_rect_rotate_inv =============
[16:22:28] ==================== [PASSED] drm_rect =====================
[16:22:28] ============ drm_sysfb_modeset_test (1 subtest) ============
[16:22:28] ============ drm_test_sysfb_build_fourcc_list  =============
[16:22:28] [PASSED] no native formats
[16:22:28] [PASSED] XRGB8888 as native format
[16:22:28] [PASSED] remove duplicates
[16:22:28] [PASSED] convert alpha formats
[16:22:28] [PASSED] random formats
[16:22:28] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[16:22:28] ============= [PASSED] drm_sysfb_modeset_test ==============
[16:22:28] ================== drm_fixp (2 subtests) ===================
[16:22:28] [PASSED] drm_test_int2fixp
[16:22:28] [PASSED] drm_test_sm2fixp
[16:22:28] ==================== [PASSED] drm_fixp =====================
[16:22:28] ============================================================
[16:22:28] Testing complete. Ran 621 tests: passed: 621
[16:22:28] Elapsed time: 27.186s total, 1.665s configuring, 25.351s building, 0.133s running

+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[16:22:28] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[16:22:30] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[16:22:39] Starting KUnit Kernel (1/1)...
[16:22:39] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[16:22:39] ================= ttm_device (5 subtests) ==================
[16:22:39] [PASSED] ttm_device_init_basic
[16:22:39] [PASSED] ttm_device_init_multiple
[16:22:39] [PASSED] ttm_device_fini_basic
[16:22:39] [PASSED] ttm_device_init_no_vma_man
[16:22:39] ================== ttm_device_init_pools  ==================
[16:22:39] [PASSED] No DMA allocations, no DMA32 required
[16:22:39] [PASSED] DMA allocations, DMA32 required
[16:22:39] [PASSED] No DMA allocations, DMA32 required
[16:22:39] [PASSED] DMA allocations, no DMA32 required
[16:22:39] ============== [PASSED] ttm_device_init_pools ==============
[16:22:39] =================== [PASSED] ttm_device ====================
[16:22:39] ================== ttm_pool (8 subtests) ===================
[16:22:39] ================== ttm_pool_alloc_basic  ===================
[16:22:39] [PASSED] One page
[16:22:39] [PASSED] More than one page
[16:22:39] [PASSED] Above the allocation limit
[16:22:39] [PASSED] One page, with coherent DMA mappings enabled
[16:22:39] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[16:22:39] ============== [PASSED] ttm_pool_alloc_basic ===============
[16:22:39] ============== ttm_pool_alloc_basic_dma_addr  ==============
[16:22:39] [PASSED] One page
[16:22:39] [PASSED] More than one page
[16:22:39] [PASSED] Above the allocation limit
[16:22:39] [PASSED] One page, with coherent DMA mappings enabled
[16:22:39] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[16:22:39] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[16:22:39] [PASSED] ttm_pool_alloc_order_caching_match
[16:22:39] [PASSED] ttm_pool_alloc_caching_mismatch
[16:22:39] [PASSED] ttm_pool_alloc_order_mismatch
[16:22:39] [PASSED] ttm_pool_free_dma_alloc
[16:22:39] [PASSED] ttm_pool_free_no_dma_alloc
[16:22:39] [PASSED] ttm_pool_fini_basic
[16:22:39] ==================== [PASSED] ttm_pool =====================
[16:22:39] ================ ttm_resource (8 subtests) =================
[16:22:39] ================= ttm_resource_init_basic  =================
[16:22:39] [PASSED] Init resource in TTM_PL_SYSTEM
[16:22:39] [PASSED] Init resource in TTM_PL_VRAM
[16:22:39] [PASSED] Init resource in a private placement
[16:22:39] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[16:22:39] ============= [PASSED] ttm_resource_init_basic =============
[16:22:39] [PASSED] ttm_resource_init_pinned
[16:22:39] [PASSED] ttm_resource_fini_basic
[16:22:39] [PASSED] ttm_resource_manager_init_basic
[16:22:39] [PASSED] ttm_resource_manager_usage_basic
[16:22:39] [PASSED] ttm_resource_manager_set_used_basic
[16:22:39] [PASSED] ttm_sys_man_alloc_basic
[16:22:39] [PASSED] ttm_sys_man_free_basic
[16:22:39] ================== [PASSED] ttm_resource ===================
[16:22:39] =================== ttm_tt (15 subtests) ===================
[16:22:39] ==================== ttm_tt_init_basic  ====================
[16:22:39] [PASSED] Page-aligned size
[16:22:39] [PASSED] Extra pages requested
[16:22:39] ================ [PASSED] ttm_tt_init_basic ================
[16:22:39] [PASSED] ttm_tt_init_misaligned
[16:22:39] [PASSED] ttm_tt_fini_basic
[16:22:39] [PASSED] ttm_tt_fini_sg
[16:22:39] [PASSED] ttm_tt_fini_shmem
[16:22:39] [PASSED] ttm_tt_create_basic
[16:22:39] [PASSED] ttm_tt_create_invalid_bo_type
[16:22:39] [PASSED] ttm_tt_create_ttm_exists
[16:22:39] [PASSED] ttm_tt_create_failed
[16:22:39] [PASSED] ttm_tt_destroy_basic
[16:22:39] [PASSED] ttm_tt_populate_null_ttm
[16:22:39] [PASSED] ttm_tt_populate_populated_ttm
[16:22:39] [PASSED] ttm_tt_unpopulate_basic
[16:22:39] [PASSED] ttm_tt_unpopulate_empty_ttm
[16:22:39] [PASSED] ttm_tt_swapin_basic
[16:22:39] ===================== [PASSED] ttm_tt ======================
[16:22:39] =================== ttm_bo (14 subtests) ===================
[16:22:39] =========== ttm_bo_reserve_optimistic_no_ticket  ===========
[16:22:39] [PASSED] Cannot be interrupted and sleeps
[16:22:39] [PASSED] Cannot be interrupted, locks straight away
[16:22:39] [PASSED] Can be interrupted, sleeps
[16:22:39] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[16:22:39] [PASSED] ttm_bo_reserve_locked_no_sleep
[16:22:39] [PASSED] ttm_bo_reserve_no_wait_ticket
[16:22:40] [PASSED] ttm_bo_reserve_double_resv
[16:22:40] [PASSED] ttm_bo_reserve_interrupted
[16:22:40] [PASSED] ttm_bo_reserve_deadlock
[16:22:40] [PASSED] ttm_bo_unreserve_basic
[16:22:40] [PASSED] ttm_bo_unreserve_pinned
[16:22:40] [PASSED] ttm_bo_unreserve_bulk
[16:22:40] [PASSED] ttm_bo_fini_basic
[16:22:40] [PASSED] ttm_bo_fini_shared_resv
[16:22:40] [PASSED] ttm_bo_pin_basic
[16:22:40] [PASSED] ttm_bo_pin_unpin_resource
[16:22:40] [PASSED] ttm_bo_multiple_pin_one_unpin
[16:22:40] ===================== [PASSED] ttm_bo ======================
[16:22:40] ============== ttm_bo_validate (21 subtests) ===============
[16:22:40] ============== ttm_bo_init_reserved_sys_man  ===============
[16:22:40] [PASSED] Buffer object for userspace
[16:22:40] [PASSED] Kernel buffer object
[16:22:40] [PASSED] Shared buffer object
[16:22:40] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[16:22:40] ============== ttm_bo_init_reserved_mock_man  ==============
[16:22:40] [PASSED] Buffer object for userspace
[16:22:40] [PASSED] Kernel buffer object
[16:22:40] [PASSED] Shared buffer object
[16:22:40] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[16:22:40] [PASSED] ttm_bo_init_reserved_resv
[16:22:40] ================== ttm_bo_validate_basic  ==================
[16:22:40] [PASSED] Buffer object for userspace
[16:22:40] [PASSED] Kernel buffer object
[16:22:40] [PASSED] Shared buffer object
[16:22:40] ============== [PASSED] ttm_bo_validate_basic ==============
[16:22:40] [PASSED] ttm_bo_validate_invalid_placement
[16:22:40] ============= ttm_bo_validate_same_placement  ==============
[16:22:40] [PASSED] System manager
[16:22:40] [PASSED] VRAM manager
[16:22:40] ========= [PASSED] ttm_bo_validate_same_placement ==========
[16:22:40] [PASSED] ttm_bo_validate_failed_alloc
[16:22:40] [PASSED] ttm_bo_validate_pinned
[16:22:40] [PASSED] ttm_bo_validate_busy_placement
[16:22:40] ================ ttm_bo_validate_multihop  =================
[16:22:40] [PASSED] Buffer object for userspace
[16:22:40] [PASSED] Kernel buffer object
[16:22:40] [PASSED] Shared buffer object
[16:22:40] ============ [PASSED] ttm_bo_validate_multihop =============
[16:22:40] ========== ttm_bo_validate_no_placement_signaled  ==========
[16:22:40] [PASSED] Buffer object in system domain, no page vector
[16:22:40] [PASSED] Buffer object in system domain with an existing page vector
[16:22:40] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[16:22:40] ======== ttm_bo_validate_no_placement_not_signaled  ========
[16:22:40] [PASSED] Buffer object for userspace
[16:22:40] [PASSED] Kernel buffer object
[16:22:40] [PASSED] Shared buffer object
[16:22:40] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[16:22:40] [PASSED] ttm_bo_validate_move_fence_signaled
[16:22:40] ========= ttm_bo_validate_move_fence_not_signaled  =========
[16:22:40] [PASSED] Waits for GPU
[16:22:40] [PASSED] Tries to lock straight away
[16:22:40] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[16:22:40] [PASSED] ttm_bo_validate_happy_evict
[16:22:40] [PASSED] ttm_bo_validate_all_pinned_evict
[16:22:40] [PASSED] ttm_bo_validate_allowed_only_evict
[16:22:40] [PASSED] ttm_bo_validate_deleted_evict
[16:22:40] [PASSED] ttm_bo_validate_busy_domain_evict
[16:22:40] [PASSED] ttm_bo_validate_evict_gutting
[16:22:40] [PASSED] ttm_bo_validate_recrusive_evict
[16:22:40] ================= [PASSED] ttm_bo_validate =================
[16:22:40] ============================================================
[16:22:40] Testing complete. Ran 101 tests: passed: 101
[16:22:40] Elapsed time: 11.440s total, 1.609s configuring, 9.615s building, 0.183s running

+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel



^ permalink raw reply	[flat|nested] 36+ messages in thread

* ✗ Xe.CI.BAT: failure for drm/xe/madvise: Add support for purgeable buffer objects (rev6)
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (11 preceding siblings ...)
  2026-02-11 16:22 ` ✓ CI.KUnit: success " Patchwork
@ 2026-02-11 17:11 ` Patchwork
  2026-02-13  1:15 ` ✗ Xe.CI.FULL: " Patchwork
  13 siblings, 0 replies; 36+ messages in thread
From: Patchwork @ 2026-02-11 17:11 UTC (permalink / raw)
  To: Arvind Yadav; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 3278 bytes --]

== Series Details ==

Series: drm/xe/madvise: Add support for purgeable buffer objects (rev6)
URL   : https://patchwork.freedesktop.org/series/156651/
State : failure

== Summary ==

CI Bug Log - changes from xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37_BAT -> xe-pw-156651v6_BAT
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-156651v6_BAT absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-156651v6_BAT, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (9 -> 12)
------------------------------

  Additional (3): bat-wcl-1 bat-wcl-2 bat-bmg-3 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-156651v6_BAT:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_pipe_crc_basic@hang-read-crc:
    - bat-wcl-2:          NOTRUN -> [SKIP][1] +43 other tests skip
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/bat-wcl-2/igt@kms_pipe_crc_basic@hang-read-crc.html

  * igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit:
    - bat-wcl-1:          NOTRUN -> [SKIP][2] +20 other tests skip
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/bat-wcl-1/igt@xe_live_ktest@xe_migrate@xe_validate_ccs_kunit.html

  
Known issues
------------

  Here are the changes found in xe-pw-156651v6_BAT that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_psr@psr-sprite-plane-onoff:
    - bat-wcl-2:          NOTRUN -> [SKIP][3] ([Intel XE#1406]) +2 other tests skip
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/bat-wcl-2/igt@kms_psr@psr-sprite-plane-onoff.html

  * igt@xe_peer2peer@read@read-gpua-vram01-gpub-system-p2p:
    - bat-bmg-3:          NOTRUN -> [SKIP][4] ([Intel XE#6566]) +3 other tests skip
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/bat-bmg-3/igt@xe_peer2peer@read@read-gpua-vram01-gpub-system-p2p.html

  
#### Possible fixes ####

  * igt@xe_waitfence@reltime:
    - bat-ptl-1:          [FAIL][5] -> [PASS][6]
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/bat-ptl-1/igt@xe_waitfence@reltime.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/bat-ptl-1/igt@xe_waitfence@reltime.html

  
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#6566]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6566


Build changes
-------------

  * IGT: IGT_8749 -> IGT_8751
  * Linux: xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37 -> xe-pw-156651v6

  IGT_8749: 195f101f25a7984686f36f340aa88d44a1716ec6 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  IGT_8751: af788251f1ef729d17c802aec2c4547b52059e58 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37: 2938ce73d01357a5816ed7dbd041154b58635a37
  xe-pw-156651v6: 156651v6

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/index.html

[-- Attachment #2: Type: text/html, Size: 3998 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* ✗ Xe.CI.FULL: failure for drm/xe/madvise: Add support for purgeable buffer objects (rev6)
  2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
                   ` (12 preceding siblings ...)
  2026-02-11 17:11 ` ✗ Xe.CI.BAT: failure " Patchwork
@ 2026-02-13  1:15 ` Patchwork
  13 siblings, 0 replies; 36+ messages in thread
From: Patchwork @ 2026-02-13  1:15 UTC (permalink / raw)
  To: Arvind Yadav; +Cc: intel-xe

[-- Attachment #1: Type: text/plain, Size: 46683 bytes --]

== Series Details ==

Series: drm/xe/madvise: Add support for purgeable buffer objects (rev6)
URL   : https://patchwork.freedesktop.org/series/156651/
State : failure

== Summary ==

CI Bug Log - changes from xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37_FULL -> xe-pw-156651v6_FULL
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with xe-pw-156651v6_FULL absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in xe-pw-156651v6_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (2 -> 2)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in xe-pw-156651v6_FULL:

### IGT changes ###

#### Possible regressions ####

  * igt@kms_vblank@ts-continuation-dpms-rpm:
    - shard-lnl:          [PASS][1] -> [SKIP][2] +2 other tests skip
   [1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-lnl-8/igt@kms_vblank@ts-continuation-dpms-rpm.html
   [2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-3/igt@kms_vblank@ts-continuation-dpms-rpm.html

  
Known issues
------------

  Here are the changes found in xe-pw-156651v6_FULL that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_async_flips@async-flip-with-page-flip-events-linear:
    - shard-lnl:          [PASS][3] -> [FAIL][4] ([Intel XE#5993]) +3 other tests fail
   [3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-lnl-7/igt@kms_async_flips@async-flip-with-page-flip-events-linear.html
   [4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-2/igt@kms_async_flips@async-flip-with-page-flip-events-linear.html

  * igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1:
    - shard-lnl:          [PASS][5] -> [FAIL][6] ([Intel XE#6054]) +3 other tests fail
   [5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-lnl-7/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html
   [6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
    - shard-bmg:          NOTRUN -> [SKIP][7] ([Intel XE#2370])
   [7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-lnl:          NOTRUN -> [SKIP][8] ([Intel XE#1407]) +1 other test skip
   [8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-3/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-270:
    - shard-bmg:          NOTRUN -> [SKIP][9] ([Intel XE#1124]) +6 other tests skip
   [9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@kms_big_fb@y-tiled-64bpp-rotate-270.html

  * igt@kms_big_fb@y-tiled-addfb:
    - shard-bmg:          NOTRUN -> [SKIP][10] ([Intel XE#2328])
   [10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@kms_big_fb@y-tiled-addfb.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip:
    - shard-lnl:          NOTRUN -> [SKIP][11] ([Intel XE#1124]) +4 other tests skip
   [11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html

  * igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p:
    - shard-lnl:          NOTRUN -> [SKIP][12] ([Intel XE#2191])
   [12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p.html

  * igt@kms_bw@linear-tiling-1-displays-1920x1080p:
    - shard-bmg:          NOTRUN -> [SKIP][13] ([Intel XE#367]) +1 other test skip
   [13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html

  * igt@kms_bw@linear-tiling-1-displays-2160x1440p:
    - shard-bmg:          [PASS][14] -> [SKIP][15] ([Intel XE#367])
   [14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-5/igt@kms_bw@linear-tiling-1-displays-2160x1440p.html
   [15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-7/igt@kms_bw@linear-tiling-1-displays-2160x1440p.html

  * igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs@pipe-d-dp-1:
    - shard-bmg:          NOTRUN -> [SKIP][16] ([Intel XE#2652] / [Intel XE#787]) +3 other tests skip
   [16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs@pipe-d-dp-1.html

  * igt@kms_ccs@ccs-on-another-bo-4-tiled-mtl-rc-ccs-cc:
    - shard-lnl:          NOTRUN -> [SKIP][17] ([Intel XE#2887]) +5 other tests skip
   [17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@kms_ccs@ccs-on-another-bo-4-tiled-mtl-rc-ccs-cc.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc:
    - shard-lnl:          NOTRUN -> [SKIP][18] ([Intel XE#3432])
   [18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-5/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc.html

  * igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs:
    - shard-bmg:          NOTRUN -> [SKIP][19] ([Intel XE#3432]) +2 other tests skip
   [19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-1/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs:
    - shard-bmg:          NOTRUN -> [SKIP][20] ([Intel XE#2887]) +11 other tests skip
   [20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-4/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs.html

  * igt@kms_cdclk@plane-scaling:
    - shard-bmg:          NOTRUN -> [SKIP][21] ([Intel XE#2724])
   [21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-6/igt@kms_cdclk@plane-scaling.html

  * igt@kms_chamelium_hpd@common-hpd-after-hibernate:
    - shard-bmg:          NOTRUN -> [SKIP][22] ([Intel XE#2252]) +6 other tests skip
   [22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@kms_chamelium_hpd@common-hpd-after-hibernate.html

  * igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe:
    - shard-lnl:          NOTRUN -> [SKIP][23] ([Intel XE#373]) +1 other test skip
   [23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-6/igt@kms_chamelium_hpd@hdmi-hpd-for-each-pipe.html

  * igt@kms_color_pipeline@plane-lut3d-green-only@pipe-c-dp-1:
    - shard-bmg:          NOTRUN -> [SKIP][24] ([Intel XE#6969]) +1 other test skip
   [24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@kms_color_pipeline@plane-lut3d-green-only@pipe-c-dp-1.html

  * igt@kms_color_pipeline@plane-lut3d-green-only@pipe-d-dp-1:
    - shard-bmg:          NOTRUN -> [SKIP][25] ([Intel XE#6969] / [Intel XE#7006])
   [25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@kms_color_pipeline@plane-lut3d-green-only@pipe-d-dp-1.html

  * igt@kms_content_protection@atomic-hdcp14:
    - shard-lnl:          NOTRUN -> [SKIP][26] ([Intel XE#6973])
   [26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@kms_content_protection@atomic-hdcp14.html

  * igt@kms_content_protection@content-type-change:
    - shard-bmg:          NOTRUN -> [SKIP][27] ([Intel XE#2341]) +1 other test skip
   [27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-4/igt@kms_content_protection@content-type-change.html

  * igt@kms_content_protection@dp-mst-lic-type-0:
    - shard-lnl:          NOTRUN -> [SKIP][28] ([Intel XE#307] / [Intel XE#6974])
   [28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@kms_content_protection@dp-mst-lic-type-0.html
    - shard-bmg:          NOTRUN -> [SKIP][29] ([Intel XE#2390] / [Intel XE#6974])
   [29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-1/igt@kms_content_protection@dp-mst-lic-type-0.html

  * igt@kms_content_protection@lic-type-0-hdcp14@pipe-a-dp-1:
    - shard-bmg:          NOTRUN -> [FAIL][30] ([Intel XE#3304]) +1 other test fail
   [30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@kms_content_protection@lic-type-0-hdcp14@pipe-a-dp-1.html

  * igt@kms_cursor_crc@cursor-onscreen-max-size:
    - shard-bmg:          NOTRUN -> [SKIP][31] ([Intel XE#2320]) +3 other tests skip
   [31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-4/igt@kms_cursor_crc@cursor-onscreen-max-size.html

  * igt@kms_cursor_crc@cursor-random-256x85:
    - shard-lnl:          NOTRUN -> [SKIP][32] ([Intel XE#1424]) +1 other test skip
   [32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@kms_cursor_crc@cursor-random-256x85.html

  * igt@kms_cursor_crc@cursor-sliding-512x512:
    - shard-lnl:          NOTRUN -> [SKIP][33] ([Intel XE#2321]) +1 other test skip
   [33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-8/igt@kms_cursor_crc@cursor-sliding-512x512.html
    - shard-bmg:          NOTRUN -> [SKIP][34] ([Intel XE#2321]) +2 other tests skip
   [34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-7/igt@kms_cursor_crc@cursor-sliding-512x512.html

  * igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size:
    - shard-bmg:          [PASS][35] -> [DMESG-WARN][36] ([Intel XE#5354])
   [35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-1/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size.html
   [36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-1/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions:
    - shard-lnl:          NOTRUN -> [SKIP][37] ([Intel XE#309])
   [37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
    - shard-bmg:          NOTRUN -> [FAIL][38] ([Intel XE#5299])
   [38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
    - shard-bmg:          NOTRUN -> [SKIP][39] ([Intel XE#2286])
   [39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html

  * igt@kms_dp_link_training@non-uhbr-mst:
    - shard-bmg:          NOTRUN -> [SKIP][40] ([Intel XE#4354])
   [40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-9/igt@kms_dp_link_training@non-uhbr-mst.html

  * igt@kms_dp_link_training@non-uhbr-sst:
    - shard-lnl:          NOTRUN -> [SKIP][41] ([Intel XE#4354])
   [41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-2/igt@kms_dp_link_training@non-uhbr-sst.html

  * igt@kms_dsc@dsc-with-bpc:
    - shard-bmg:          NOTRUN -> [SKIP][42] ([Intel XE#2244]) +1 other test skip
   [42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-7/igt@kms_dsc@dsc-with-bpc.html

  * igt@kms_dsc@dsc-with-bpc-formats:
    - shard-lnl:          NOTRUN -> [SKIP][43] ([Intel XE#2244]) +1 other test skip
   [43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-3/igt@kms_dsc@dsc-with-bpc-formats.html

  * igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-dirtyfb-tests:
    - shard-bmg:          NOTRUN -> [SKIP][44] ([Intel XE#4422])
   [44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-dirtyfb-tests.html

  * igt@kms_flip@2x-flip-vs-rmfb:
    - shard-lnl:          NOTRUN -> [SKIP][45] ([Intel XE#1421]) +2 other tests skip
   [45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@kms_flip@2x-flip-vs-rmfb.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1:
    - shard-lnl:          [PASS][46] -> [FAIL][47] ([Intel XE#301]) +1 other test fail
   [46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-lnl-3/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
   [47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-6/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-downscaling:
    - shard-bmg:          NOTRUN -> [SKIP][48] ([Intel XE#7178]) +1 other test skip
   [48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-9/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling:
    - shard-lnl:          NOTRUN -> [SKIP][49] ([Intel XE#7178]) +3 other tests skip
   [49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-8/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling.html

  * igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc:
    - shard-bmg:          NOTRUN -> [SKIP][50] ([Intel XE#2311]) +17 other tests skip
   [50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-4/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@drrs-modesetfrombusy:
    - shard-lnl:          NOTRUN -> [SKIP][51] ([Intel XE#651]) +6 other tests skip
   [51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-5/igt@kms_frontbuffer_tracking@drrs-modesetfrombusy.html

  * igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw:
    - shard-bmg:          NOTRUN -> [SKIP][52] ([Intel XE#4141]) +12 other tests skip
   [52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-10/igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw.html

  * igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-render:
    - shard-lnl:          NOTRUN -> [SKIP][53] ([Intel XE#656]) +12 other tests skip
   [53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-6/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-shrfb-pgflip-blt:
    - shard-bmg:          NOTRUN -> [SKIP][54] ([Intel XE#2313]) +24 other tests skip
   [54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-7/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-shrfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-abgr161616f-draw-blt:
    - shard-lnl:          NOTRUN -> [SKIP][55] ([Intel XE#7061]) +1 other test skip
   [55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-5/igt@kms_frontbuffer_tracking@psr-abgr161616f-draw-blt.html
    - shard-bmg:          NOTRUN -> [SKIP][56] ([Intel XE#7061]) +5 other tests skip
   [56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-8/igt@kms_frontbuffer_tracking@psr-abgr161616f-draw-blt.html

  * igt@kms_hdr@static-swap:
    - shard-lnl:          NOTRUN -> [SKIP][57] ([Intel XE#1503])
   [57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-4/igt@kms_hdr@static-swap.html

  * igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier@pipe-a-plane-5:
    - shard-bmg:          NOTRUN -> [SKIP][58] ([Intel XE#7130]) +13 other tests skip
   [58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier@pipe-a-plane-5.html

  * igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier@pipe-b-plane-5:
    - shard-bmg:          NOTRUN -> [SKIP][59] ([Intel XE#7111] / [Intel XE#7130]) +3 other tests skip
   [59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier@pipe-b-plane-5.html

  * igt@kms_plane@pixel-format-4-tiled-modifier@pipe-a-plane-5:
    - shard-lnl:          NOTRUN -> [SKIP][60] ([Intel XE#7130]) +12 other tests skip
   [60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-8/igt@kms_plane@pixel-format-4-tiled-modifier@pipe-a-plane-5.html

  * igt@kms_plane@pixel-format-y-tiled-gen12-mc-ccs-modifier-source-clamping:
    - shard-lnl:          NOTRUN -> [SKIP][61] ([Intel XE#7130] / [Intel XE#7131])
   [61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-3/igt@kms_plane@pixel-format-y-tiled-gen12-mc-ccs-modifier-source-clamping.html

  * igt@kms_plane@pixel-format-y-tiled-gen12-mc-ccs-modifier-source-clamping@pipe-a-plane-5:
    - shard-lnl:          NOTRUN -> [SKIP][62] ([Intel XE#7131]) +1 other test skip
   [62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-3/igt@kms_plane@pixel-format-y-tiled-gen12-mc-ccs-modifier-source-clamping@pipe-a-plane-5.html

  * igt@kms_plane@pixel-format-y-tiled-gen12-rc-ccs-modifier-source-clamping:
    - shard-bmg:          NOTRUN -> [SKIP][63] ([Intel XE#7111] / [Intel XE#7130] / [Intel XE#7131])
   [63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-8/igt@kms_plane@pixel-format-y-tiled-gen12-rc-ccs-modifier-source-clamping.html

  * igt@kms_plane@pixel-format-y-tiled-gen12-rc-ccs-modifier-source-clamping@pipe-a-plane-5:
    - shard-bmg:          NOTRUN -> [SKIP][64] ([Intel XE#7131])
   [64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-8/igt@kms_plane@pixel-format-y-tiled-gen12-rc-ccs-modifier-source-clamping@pipe-a-plane-5.html

  * igt@kms_plane@pixel-format-y-tiled-gen12-rc-ccs-modifier-source-clamping@pipe-b-plane-5:
    - shard-bmg:          NOTRUN -> [SKIP][65] ([Intel XE#7111] / [Intel XE#7131])
   [65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-8/igt@kms_plane@pixel-format-y-tiled-gen12-rc-ccs-modifier-source-clamping@pipe-b-plane-5.html

  * igt@kms_plane_multiple@2x-tiling-yf:
    - shard-bmg:          NOTRUN -> [SKIP][66] ([Intel XE#5021])
   [66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-10/igt@kms_plane_multiple@2x-tiling-yf.html
    - shard-lnl:          NOTRUN -> [SKIP][67] ([Intel XE#4596])
   [67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-4/igt@kms_plane_multiple@2x-tiling-yf.html

  * igt@kms_plane_scaling@planes-downscale-factor-0-5@pipe-c:
    - shard-lnl:          NOTRUN -> [SKIP][68] ([Intel XE#6886]) +3 other tests skip
   [68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@kms_plane_scaling@planes-downscale-factor-0-5@pipe-c.html
    - shard-bmg:          NOTRUN -> [SKIP][69] ([Intel XE#6886]) +4 other tests skip
   [69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-7/igt@kms_plane_scaling@planes-downscale-factor-0-5@pipe-c.html

  * igt@kms_pm_backlight@brightness-with-dpms:
    - shard-bmg:          NOTRUN -> [SKIP][70] ([Intel XE#2938])
   [70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-7/igt@kms_pm_backlight@brightness-with-dpms.html

  * igt@kms_pm_backlight@fade:
    - shard-bmg:          NOTRUN -> [SKIP][71] ([Intel XE#870])
   [71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-6/igt@kms_pm_backlight@fade.html

  * igt@kms_pm_rpm@dpms-mode-unset-lpsp:
    - shard-bmg:          NOTRUN -> [SKIP][72] ([Intel XE#1439] / [Intel XE#836])
   [72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-9/igt@kms_pm_rpm@dpms-mode-unset-lpsp.html

  * igt@kms_pm_rpm@package-g7:
    - shard-bmg:          NOTRUN -> [SKIP][73] ([Intel XE#6814])
   [73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-7/igt@kms_pm_rpm@package-g7.html

  * igt@kms_psr2_sf@fbc-pr-overlay-plane-update-continuous-sf:
    - shard-lnl:          NOTRUN -> [SKIP][74] ([Intel XE#1406] / [Intel XE#2893])
   [74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-3/igt@kms_psr2_sf@fbc-pr-overlay-plane-update-continuous-sf.html

  * igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-continuous-sf:
    - shard-lnl:          NOTRUN -> [SKIP][75] ([Intel XE#1406] / [Intel XE#2893] / [Intel XE#4608])
   [75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-5/igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-continuous-sf.html

  * igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-continuous-sf@pipe-b-edp-1:
    - shard-lnl:          NOTRUN -> [SKIP][76] ([Intel XE#1406] / [Intel XE#4608]) +1 other test skip
   [76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-5/igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-continuous-sf@pipe-b-edp-1.html

  * igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-sf:
    - shard-bmg:          NOTRUN -> [SKIP][77] ([Intel XE#1406] / [Intel XE#1489]) +6 other tests skip
   [77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-sf.html

  * igt@kms_psr@fbc-pr-primary-blt:
    - shard-bmg:          NOTRUN -> [SKIP][78] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +4 other tests skip
   [78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-4/igt@kms_psr@fbc-pr-primary-blt.html

  * igt@kms_psr@fbc-psr2-primary-blt@edp-1:
    - shard-lnl:          NOTRUN -> [SKIP][79] ([Intel XE#1406] / [Intel XE#4609]) +1 other test skip
   [79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-6/igt@kms_psr@fbc-psr2-primary-blt@edp-1.html

  * igt@kms_psr@fbc-psr2-primary-render:
    - shard-lnl:          NOTRUN -> [SKIP][80] ([Intel XE#1406]) +2 other tests skip
   [80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-6/igt@kms_psr@fbc-psr2-primary-render.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0:
    - shard-lnl:          NOTRUN -> [SKIP][81] ([Intel XE#1127])
   [81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-6/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0.html

  * igt@kms_rotation_crc@sprite-rotation-90:
    - shard-bmg:          NOTRUN -> [SKIP][82] ([Intel XE#3414] / [Intel XE#3904]) +1 other test skip
   [82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@kms_rotation_crc@sprite-rotation-90.html

  * igt@kms_rotation_crc@sprite-rotation-90-pos-100-0:
    - shard-lnl:          NOTRUN -> [SKIP][83] ([Intel XE#3414] / [Intel XE#3904])
   [83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-6/igt@kms_rotation_crc@sprite-rotation-90-pos-100-0.html

  * igt@kms_setmode@invalid-clone-exclusive-crtc:
    - shard-lnl:          NOTRUN -> [SKIP][84] ([Intel XE#1435])
   [84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@kms_setmode@invalid-clone-exclusive-crtc.html

  * igt@kms_sharpness_filter@filter-rotations:
    - shard-bmg:          NOTRUN -> [SKIP][85] ([Intel XE#6503]) +1 other test skip
   [85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@kms_sharpness_filter@filter-rotations.html

  * igt@kms_tv_load_detect@load-detect:
    - shard-bmg:          NOTRUN -> [SKIP][86] ([Intel XE#2450])
   [86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-4/igt@kms_tv_load_detect@load-detect.html

  * igt@kms_vrr@seamless-rr-switch-drrs:
    - shard-bmg:          NOTRUN -> [SKIP][87] ([Intel XE#1499])
   [87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-9/igt@kms_vrr@seamless-rr-switch-drrs.html

  * igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1:
    - shard-lnl:          [PASS][88] -> [FAIL][89] ([Intel XE#2142]) +1 other test fail
   [88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-lnl-8/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
   [89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-4/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html

  * igt@xe_compute@ccs-mode-basic:
    - shard-bmg:          NOTRUN -> [SKIP][90] ([Intel XE#6599])
   [90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-9/igt@xe_compute@ccs-mode-basic.html

  * igt@xe_eudebug@basic-vm-access-userptr:
    - shard-bmg:          NOTRUN -> [SKIP][91] ([Intel XE#4837]) +7 other tests skip
   [91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@xe_eudebug@basic-vm-access-userptr.html

  * igt@xe_eudebug_online@interrupt-reconnect:
    - shard-bmg:          NOTRUN -> [SKIP][92] ([Intel XE#4837] / [Intel XE#6665]) +4 other tests skip
   [92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-6/igt@xe_eudebug_online@interrupt-reconnect.html

  * igt@xe_eudebug_online@writes-caching-vram-bb-sram-target-sram:
    - shard-lnl:          NOTRUN -> [SKIP][93] ([Intel XE#4837] / [Intel XE#6665]) +2 other tests skip
   [93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@xe_eudebug_online@writes-caching-vram-bb-sram-target-sram.html

  * igt@xe_evict_ccs@evict-overcommit-standalone-instantfree-samefd:
    - shard-lnl:          NOTRUN -> [SKIP][94] ([Intel XE#688]) +3 other tests skip
   [94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-8/igt@xe_evict_ccs@evict-overcommit-standalone-instantfree-samefd.html

  * igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-userptr:
    - shard-bmg:          NOTRUN -> [SKIP][95] ([Intel XE#2322]) +7 other tests skip
   [95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-userptr.html

  * igt@xe_exec_basic@multigpu-once-userptr:
    - shard-lnl:          NOTRUN -> [SKIP][96] ([Intel XE#1392]) +2 other tests skip
   [96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@xe_exec_basic@multigpu-once-userptr.html

  * igt@xe_exec_fault_mode@many-multi-queue-userptr-invalidate-race-prefetch:
    - shard-bmg:          NOTRUN -> [SKIP][97] ([Intel XE#7136]) +7 other tests skip
   [97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-10/igt@xe_exec_fault_mode@many-multi-queue-userptr-invalidate-race-prefetch.html

  * igt@xe_exec_fault_mode@once-multi-queue-userptr-invalidate-prefetch:
    - shard-lnl:          NOTRUN -> [SKIP][98] ([Intel XE#7136]) +1 other test skip
   [98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@xe_exec_fault_mode@once-multi-queue-userptr-invalidate-prefetch.html

  * igt@xe_exec_multi_queue@many-execs-preempt-mode-fault-dyn-priority:
    - shard-lnl:          NOTRUN -> [SKIP][99] ([Intel XE#6874]) +12 other tests skip
   [99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-5/igt@xe_exec_multi_queue@many-execs-preempt-mode-fault-dyn-priority.html

  * igt@xe_exec_multi_queue@max-queues-preempt-mode-basic-smem:
    - shard-bmg:          NOTRUN -> [SKIP][100] ([Intel XE#6874]) +22 other tests skip
   [100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-6/igt@xe_exec_multi_queue@max-queues-preempt-mode-basic-smem.html

  * igt@xe_exec_sip_eudebug@breakpoint-writesip-twice:
    - shard-lnl:          NOTRUN -> [SKIP][101] ([Intel XE#4837]) +2 other tests skip
   [101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-6/igt@xe_exec_sip_eudebug@breakpoint-writesip-twice.html

  * igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-comp-single-vma:
    - shard-lnl:          NOTRUN -> [SKIP][102] ([Intel XE#6196])
   [102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@xe_exec_system_allocator@pat-index-madvise-pat-idx-uc-comp-single-vma.html

  * igt@xe_exec_system_allocator@threads-many-large-execqueues-malloc-mlock-nomemset:
    - shard-bmg:          NOTRUN -> [ABORT][103] ([Intel XE#7169])
   [103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-8/igt@xe_exec_system_allocator@threads-many-large-execqueues-malloc-mlock-nomemset.html

  * igt@xe_exec_threads@threads-multi-queue-cm-basic:
    - shard-lnl:          NOTRUN -> [SKIP][104] ([Intel XE#7138]) +2 other tests skip
   [104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-1/igt@xe_exec_threads@threads-multi-queue-cm-basic.html

  * igt@xe_exec_threads@threads-multi-queue-mixed-shared-vm-userptr-rebind:
    - shard-bmg:          NOTRUN -> [SKIP][105] ([Intel XE#7138]) +5 other tests skip
   [105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-10/igt@xe_exec_threads@threads-multi-queue-mixed-shared-vm-userptr-rebind.html

  * igt@xe_live_ktest@xe_eudebug:
    - shard-bmg:          NOTRUN -> [SKIP][106] ([Intel XE#2833])
   [106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-9/igt@xe_live_ktest@xe_eudebug.html

  * igt@xe_module_load@force-load:
    - shard-lnl:          NOTRUN -> [DMESG-WARN][107] ([Intel XE#7145])
   [107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-3/igt@xe_module_load@force-load.html
    - shard-bmg:          NOTRUN -> [DMESG-WARN][108] ([Intel XE#7145])
   [108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-8/igt@xe_module_load@force-load.html

  * igt@xe_multigpu_svm@mgpu-latency-copy-basic:
    - shard-bmg:          NOTRUN -> [SKIP][109] ([Intel XE#6964]) +1 other test skip
   [109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-6/igt@xe_multigpu_svm@mgpu-latency-copy-basic.html

  * igt@xe_multigpu_svm@mgpu-migration-basic:
    - shard-lnl:          NOTRUN -> [SKIP][110] ([Intel XE#6964]) +1 other test skip
   [110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-6/igt@xe_multigpu_svm@mgpu-migration-basic.html

  * igt@xe_peer2peer@write:
    - shard-bmg:          NOTRUN -> [SKIP][111] ([Intel XE#2427] / [Intel XE#6953])
   [111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-9/igt@xe_peer2peer@write.html

  * igt@xe_pm@d3cold-basic-exec:
    - shard-lnl:          NOTRUN -> [SKIP][112] ([Intel XE#2284] / [Intel XE#366])
   [112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@xe_pm@d3cold-basic-exec.html

  * igt@xe_pm@s2idle-mocs:
    - shard-lnl:          NOTRUN -> [ABORT][113] ([Intel XE#7169])
   [113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-3/igt@xe_pm@s2idle-mocs.html

  * igt@xe_pm@s4-d3cold-basic-exec:
    - shard-bmg:          NOTRUN -> [SKIP][114] ([Intel XE#2284])
   [114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-1/igt@xe_pm@s4-d3cold-basic-exec.html

  * igt@xe_pxp@pxp-src-to-pxp-dest-rendercopy:
    - shard-bmg:          NOTRUN -> [SKIP][115] ([Intel XE#4733])
   [115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-10/igt@xe_pxp@pxp-src-to-pxp-dest-rendercopy.html

  * igt@xe_query@multigpu-query-mem-usage:
    - shard-bmg:          NOTRUN -> [SKIP][116] ([Intel XE#944]) +2 other tests skip
   [116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-1/igt@xe_query@multigpu-query-mem-usage.html
    - shard-lnl:          NOTRUN -> [SKIP][117] ([Intel XE#944])
   [117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@xe_query@multigpu-query-mem-usage.html

  * igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling:
    - shard-lnl:          NOTRUN -> [SKIP][118] ([Intel XE#4130]) +1 other test skip
   [118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-4/igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling.html

  * igt@xe_sriov_vram@vf-access-after-resize-up:
    - shard-bmg:          NOTRUN -> [FAIL][119] ([Intel XE#5937])
   [119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@xe_sriov_vram@vf-access-after-resize-up.html

  * igt@xe_sriov_vram@vf-access-provisioned:
    - shard-lnl:          NOTRUN -> [SKIP][120] ([Intel XE#6376])
   [120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@xe_sriov_vram@vf-access-provisioned.html

  
#### Possible fixes ####

  * igt@kms_flip@2x-flip-vs-suspend:
    - shard-bmg:          [DMESG-WARN][121] ([Intel XE#5208]) -> [PASS][122]
   [121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-3/igt@kms_flip@2x-flip-vs-suspend.html
   [122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-8/igt@kms_flip@2x-flip-vs-suspend.html

  * igt@kms_flip@2x-flip-vs-suspend@bc-dp2-hdmi-a3:
    - shard-bmg:          [DMESG-WARN][123] ([Intel XE#3428]) -> [PASS][124]
   [123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-3/igt@kms_flip@2x-flip-vs-suspend@bc-dp2-hdmi-a3.html
   [124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-8/igt@kms_flip@2x-flip-vs-suspend@bc-dp2-hdmi-a3.html

  * igt@kms_flip@flip-vs-suspend-interruptible:
    - shard-bmg:          [INCOMPLETE][125] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][126] +1 other test pass
   [125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-8/igt@kms_flip@flip-vs-suspend-interruptible.html
   [126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-9/igt@kms_flip@flip-vs-suspend-interruptible.html

  * igt@kms_rotation_crc@multiplane-rotation-cropping-top:
    - shard-lnl:          [FAIL][127] ([Intel XE#1874]) -> [PASS][128]
   [127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-lnl-1/igt@kms_rotation_crc@multiplane-rotation-cropping-top.html
   [128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-8/igt@kms_rotation_crc@multiplane-rotation-cropping-top.html

  * igt@kms_setmode@basic:
    - shard-bmg:          [FAIL][129] ([Intel XE#6361]) -> [PASS][130] +3 other tests pass
   [129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-7/igt@kms_setmode@basic.html
   [130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-1/igt@kms_setmode@basic.html

  * igt@xe_exec_system_allocator@many-64k-mmap-free-huge-nomemset:
    - shard-lnl:          [SKIP][131] ([Intel XE#5007]) -> [PASS][132] +4 other tests pass
   [131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-lnl-6/igt@xe_exec_system_allocator@many-64k-mmap-free-huge-nomemset.html
   [132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-7/igt@xe_exec_system_allocator@many-64k-mmap-free-huge-nomemset.html

  * igt@xe_exec_system_allocator@many-64k-mmap-new-huge:
    - shard-bmg:          [SKIP][133] ([Intel XE#5007]) -> [PASS][134] +5 other tests pass
   [133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-9/igt@xe_exec_system_allocator@many-64k-mmap-new-huge.html
   [134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-6/igt@xe_exec_system_allocator@many-64k-mmap-new-huge.html

  * igt@xe_exec_system_allocator@process-many-large-execqueues-mmap-new-huge:
    - shard-bmg:          [SKIP][135] ([Intel XE#4943]) -> [PASS][136] +98 other tests pass
   [135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-7/igt@xe_exec_system_allocator@process-many-large-execqueues-mmap-new-huge.html
   [136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-3/igt@xe_exec_system_allocator@process-many-large-execqueues-mmap-new-huge.html

  * igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-free-huge:
    - shard-lnl:          [SKIP][137] ([Intel XE#4943]) -> [PASS][138] +128 other tests pass
   [137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-lnl-6/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-free-huge.html
   [138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-lnl-2/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-free-huge.html

  * igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-dontunmap:
    - shard-bmg:          [INCOMPLETE][139] -> [PASS][140]
   [139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-3/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-dontunmap.html
   [140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-dontunmap.html

  * igt@xe_pm@s4-vm-bind-unbind-all:
    - shard-bmg:          [DMESG-WARN][141] -> [PASS][142]
   [141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-6/igt@xe_pm@s4-vm-bind-unbind-all.html
   [142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-9/igt@xe_pm@s4-vm-bind-unbind-all.html

  * igt@xe_sriov_auto_provisioning@selfconfig-basic:
    - shard-bmg:          [FAIL][143] ([Intel XE#5937]) -> [PASS][144] +1 other test pass
   [143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-7/igt@xe_sriov_auto_provisioning@selfconfig-basic.html
   [144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-5/igt@xe_sriov_auto_provisioning@selfconfig-basic.html

  * igt@xe_sriov_flr@flr-vf1-clear:
    - shard-bmg:          [FAIL][145] ([Intel XE#6569]) -> [PASS][146]
   [145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37/shard-bmg-6/igt@xe_sriov_flr@flr-vf1-clear.html
   [146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/shard-bmg-10/igt@xe_sriov_flr@flr-vf1-clear.html

  
  [Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
  [Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
  [Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
  [Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
  [Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
  [Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
  [Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
  [Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
  [Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
  [Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
  [Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
  [Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
  [Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
  [Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
  [Intel XE#2142]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2142
  [Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
  [Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
  [Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
  [Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
  [Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
  [Intel XE#2286]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2286
  [Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
  [Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
  [Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
  [Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
  [Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
  [Intel XE#2328]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2328
  [Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
  [Intel XE#2370]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2370
  [Intel XE#2390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2390
  [Intel XE#2427]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2427
  [Intel XE#2450]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2450
  [Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
  [Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
  [Intel XE#2724]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2724
  [Intel XE#2833]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2833
  [Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
  [Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
  [Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893
  [Intel XE#2938]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2938
  [Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
  [Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
  [Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
  [Intel XE#3304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3304
  [Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
  [Intel XE#3428]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3428
  [Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432
  [Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
  [Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
  [Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
  [Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
  [Intel XE#4130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4130
  [Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
  [Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
  [Intel XE#4422]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4422
  [Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
  [Intel XE#4608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4608
  [Intel XE#4609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4609
  [Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
  [Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
  [Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
  [Intel XE#5007]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5007
  [Intel XE#5021]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5021
  [Intel XE#5208]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5208
  [Intel XE#5299]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5299
  [Intel XE#5354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5354
  [Intel XE#5937]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5937
  [Intel XE#5993]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5993
  [Intel XE#6054]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6054
  [Intel XE#6196]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6196
  [Intel XE#6361]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6361
  [Intel XE#6376]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6376
  [Intel XE#6503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6503
  [Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
  [Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
  [Intel XE#6569]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6569
  [Intel XE#6599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6599
  [Intel XE#6665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6665
  [Intel XE#6814]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6814
  [Intel XE#6874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6874
  [Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
  [Intel XE#6886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6886
  [Intel XE#6953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6953
  [Intel XE#6964]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6964
  [Intel XE#6969]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6969
  [Intel XE#6973]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6973
  [Intel XE#6974]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6974
  [Intel XE#7006]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7006
  [Intel XE#7061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7061
  [Intel XE#7111]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7111
  [Intel XE#7130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7130
  [Intel XE#7131]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7131
  [Intel XE#7136]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7136
  [Intel XE#7138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7138
  [Intel XE#7145]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7145
  [Intel XE#7169]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7169
  [Intel XE#7178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7178
  [Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
  [Intel XE#836]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/836
  [Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
  [Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944


Build changes
-------------

  * IGT: IGT_8749 -> IGT_8751
  * Linux: xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37 -> xe-pw-156651v6

  IGT_8749: 195f101f25a7984686f36f340aa88d44a1716ec6 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  IGT_8751: af788251f1ef729d17c802aec2c4547b52059e58 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  xe-4544-2938ce73d01357a5816ed7dbd041154b58635a37: 2938ce73d01357a5816ed7dbd041154b58635a37
  xe-pw-156651v6: 156651v6

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v6/index.html


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 1/9] drm/xe/uapi: Add UAPI support for purgeable buffer objects
  2026-02-11 15:26 ` [PATCH v5 1/9] drm/xe/uapi: Add UAPI " Arvind Yadav
@ 2026-02-24 10:50   ` Thomas Hellström
  2026-02-26 17:58   ` Souza, Jose
  1 sibling, 0 replies; 36+ messages in thread
From: Thomas Hellström @ 2026-02-24 10:50 UTC (permalink / raw)
  To: Arvind Yadav, intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra

On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> From: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> 
> Extend the DRM_XE_MADVISE ioctl to support purgeable buffer object
> management by adding DRM_XE_VMA_ATTR_PURGEABLE_STATE attribute type.
> 
> This allows userspace applications to provide memory usage hints to
> the kernel for better memory management under pressure:
> 
> - WILLNEED: Buffer is needed and should not be purged. If the BO was
>   previously purged, the retained field returns 0, indicating the
>   backing store was lost (once purged, always purged semantics
>   matching i915).
> 
> - DONTNEED: Buffer is not currently needed and may be purged by the
>   kernel under memory pressure to free resources. Only applies to
>   non-shared BOs.
> 
> The implementation includes a 'retained' output field (matching i915's
> drm_i915_gem_madvise.retained) that indicates whether the BO's backing
> store still exists (1) or has been purged (0).
> 
> v2:
>   - Add PURGED state for read-only status, change ioctl to DRM_IOWR,
>     add retained field for i915 compatibility
> 
> v3:
>   - UAPI rule should not be changed (Matthew Brost)
>   - Make 'retained' a userptr (Matthew Brost)
> 
> v4:
>   - You cannot make this part of the union (purge_state_val) larger
>     than the existing union (16 bytes). So just drop the '__u64
>     reserved' field. (Matt)
> 
> v5:
>   - Update UAPI documentation to clarify retained must be initialized
>     to 0 (Thomas)
> 
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  include/uapi/drm/xe_drm.h | 44 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 077e66a682e2..3e2f145e7f8f 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -2099,6 +2099,7 @@ struct drm_xe_madvise {
>  #define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC	0
>  #define DRM_XE_MEM_RANGE_ATTR_ATOMIC		1
>  #define DRM_XE_MEM_RANGE_ATTR_PAT		2
> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE		3
>  	/** @type: type of attribute */
>  	__u32 type;
>  
> @@ -2189,6 +2190,49 @@ struct drm_xe_madvise {
>  			/** @pat_index.reserved: Reserved */
>  			__u64 reserved;
>  		} pat_index;
> +
> +		/**
> +		 * @purge_state_val: Purgeable state configuration
> +		 *
> +		 * Used when @type ==
> DRM_XE_VMA_ATTR_PURGEABLE_STATE.
> +		 *
> +		 * Configures the purgeable state of buffer objects
> in the specified
> +		 * virtual address range. This allows applications
> to hint to the kernel
> +		 * about bo's usage patterns for better memory
> management.
> +		 *
> +		 * Supported values for @purge_state_val.val:
> +		 *  - DRM_XE_VMA_PURGEABLE_STATE_WILLNEED (0): Marks
> BO as needed.
> +		 *    If BO was purged, returns retained=0 (backing
> store lost).
> +		 *
> +		 *  - DRM_XE_VMA_PURGEABLE_STATE_DONTNEED (1): Hints
> that BO is not
> +		 *    currently needed. Kernel may purge it under
> memory pressure.
> +		 *    Only applies to non-shared BOs. Returns
> retained=1 if not purged.
> +		 */
> +		struct {
> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED	0
> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED	1
> +			/** @purge_state_val.val: value for
> DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> +			__u32 val;
> +
> +			/* @purge_state_val.pad */
> +			__u32 pad;
> +			/**
> +			 * @purge_state_val.retained: Pointer to
> output field for backing
> +			 * store status.
> +			 *
> +			 * Userspace must initialize this field to 0
> before the
> +			 * ioctl. Kernel writes to it after the
> operation:
> +			 * - 1 if backing store exists (not purged)
> +			 * - 0 if backing store was purged
> +			 *
> +			 * If userspace fails to initialize to 0,
> ioctl returns -EINVAL.
> +			 * This ensures a safe default (0 = assume
> purged) if kernel
> +			 * cannot write the result.
> +			 *
> +			 * Similar to i915's
> drm_i915_gem_madvise.retained field.
> +			 */
> +			__u64 retained;
> +		} purge_state_val;
>  	};
>  
>  	/** @reserved: Reserved */

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 3/9] drm/xe/madvise: Implement purgeable buffer object support
  2026-02-11 15:26 ` [PATCH v5 3/9] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
@ 2026-02-24 12:21   ` Thomas Hellström
  2026-02-24 14:56     ` Yadav, Arvind
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Hellström @ 2026-02-24 12:21 UTC (permalink / raw)
  To: Arvind Yadav, intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra

On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> This allows userspace applications to provide memory usage hints to
> the kernel for better memory management under pressure:
> 
> Add the core implementation for purgeable buffer objects, enabling
> memory reclamation of user-designated DONTNEED buffers during
> eviction.
> 
> This patch implements the purge operation and state machine
> transitions:
> 
> Purgeable States (from xe_madv_purgeable_state):
>  - WILLNEED (0): BO should be retained, actively used
>  - DONTNEED (1): BO eligible for purging, not currently needed
>  - PURGED (2): BO backing store reclaimed, permanently invalid
> 
> Design Rationale:
>   - Async TLB invalidation via trigger_rebind (no blocking
>     xe_vm_invalidate_vma)
>   - i915 compatibility: retained field, "once purged always purged"
>     semantics
>   - Shared BO protection prevents multi-process memory corruption
>   - Scratch PTE reuse avoids new infrastructure, safe for fault mode
> 
> Note: The madvise_purgeable() function is implemented but not hooked
> into the IOCTL handler (madvise_funcs[] entry is NULL) to maintain
> bisectability. The feature will be enabled in the final patch when
> all supporting infrastructure (shrinker, per-VMA tracking) is
> complete.
> 
> v2:
>   - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas
>     Hellström)
>   - Add NULL rebind with scratch PTEs for fault mode (Thomas
>     Hellström)
>   - Implement i915-compatible retained field logic (Thomas Hellström)
>   - Skip BO validation for purged BOs in page fault handler (crash
>     fix)
>   - Add scratch VM check in page fault path (non-scratch VMs fail
>     fault)
>   - Force clear_pt for non-scratch VMs to avoid phys addr 0 mapping
>     (review fix)
>   - Add !is_purged check to resource cursor setup to prevent stale
>     access
> 
> v3:
>   - Rebase as xe_gt_pagefault.c is gone upstream and replaced
>     with xe_pagefault.c (Matthew Brost)
>   - Xe specific warn on (Matthew Brost)
>   - Call helpers for madv_purgeable access (Matthew Brost)
>   - Remove bo NULL check (Matthew Brost)
>   - Use xe_bo_assert_held instead of dma assert (Matthew Brost)
>   - Move the xe_bo_is_purged check under the dma-resv lock (Matt)
>   - Drop is_purged from xe_pt_stage_bind_entry and just set is_null
>     to true for purged BO; rename s/is_null/is_null_or_purged (Matt)
>   - UAPI rule should not be changed (Matthew Brost)
>   - Make 'retained' a userptr (Matthew Brost)
> 
> v4:
>   - @madv_purgeable atomic_t → u32 change across all relevant patches
>     (Matt)
> 
> v5:
>   - Introduce xe_bo_set_purgeable_state() helper (void return) to
>     centralize madv_purgeable updates with xe_bo_assert_held() and
>     state transition validation using explicit enum checks (no
>     transition out of PURGED) (Matt)
>   - Make xe_ttm_bo_purge() return int and propagate failures from
>     xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g.
>     no_wait_gpu paths) rather than silently ignoring (Matt)
>   - Replace drm_WARN_ON with xe_assert for better Xe-specific
>     assertions (Matt)
>   - Hook purgeable handling into
>     madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] instead of
>     special-case path in xe_vm_madvise_ioctl() (Matt)
>   - Track purgeable retained return via xe_madvise_details and
>     perform copy_to_user() from xe_madvise_details_fini() after
>     locks are dropped (Matt)
>   - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
>     __maybe_unused on madvise_purgeable() to maintain bisectability
>     until shrinker integration is complete in final patch (Matt)
>   - Use put_user() instead of copy_to_user() for single u32 retained
>     value (Thomas)
>   - Return -EFAULT from ioctl if put_user() fails (Thomas)
>   - Validate userspace initialized retained to 0 before ioctl,
>     ensuring safe default (0 = "assume purged") if put_user() fails
>     (Thomas)
>   - Refactor error handling: separate fallible put_user from
>     infallible cleanup
>   - xe_madvise_purgeable_retained_to_user(): separate helper for
>     fallible put_user
>   - Call put_user() after releasing all locks to avoid circular
>     dependencies
>   - Use xe_bo_move_notify() instead of xe_bo_trigger_rebind() in
>     xe_ttm_bo_purge() for proper abstraction - handles vunmap,
>     dma-buf notifications, and VRAM userfault cleanup (Thomas)
>   - Fix LRU crash while running shrink test
>   - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
> 
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_bo.c         | 106 ++++++++++++++++++++---
>  drivers/gpu/drm/xe/xe_bo.h         |   2 +
>  drivers/gpu/drm/xe/xe_pagefault.c  |  12 +++
>  drivers/gpu/drm/xe/xe_pt.c         |  40 +++++++--
>  drivers/gpu/drm/xe/xe_vm.c         |  20 ++++-
>  drivers/gpu/drm/xe/xe_vm_madvise.c | 133 +++++++++++++++++++++++++++++
>  6 files changed, 292 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 8bf16d60b9a5..87cde4b2fe59 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -835,6 +835,83 @@ static int xe_bo_move_notify(struct xe_bo *bo,
>  	return 0;
>  }
>  
> +/**
> + * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
> + * @bo: Buffer object
> + * @new_state: New purgeable state
> + *
> + * Sets the purgeable state with lockdep assertions and validates state
> + * transitions. Once a BO is PURGED, it cannot transition to any other state.
> + * Invalid transitions are caught with xe_assert().
> + */
> +void xe_bo_set_purgeable_state(struct xe_bo *bo,
> +			       enum xe_madv_purgeable_state new_state)
> +{
> +	struct xe_device *xe = xe_bo_device(bo);
> +
> +	xe_bo_assert_held(bo);
> +
> +	/* Validate state is one of the known values */
> +	xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
> +		      new_state == XE_MADV_PURGEABLE_DONTNEED ||
> +		      new_state == XE_MADV_PURGEABLE_PURGED);
> +
> +	/* Once purged, always purged - cannot transition out */
> +	xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
> +			new_state != XE_MADV_PURGEABLE_PURGED));
> +
> +	bo->madv_purgeable = new_state;
> +}
> +
> +/**
> + * xe_ttm_bo_purge() - Purge buffer object backing store
> + * @ttm_bo: The TTM buffer object to purge
> + * @ctx: TTM operation context
> + *
> + * This function purges the backing store of a BO marked as DONTNEED and
> + * triggers rebind to invalidate stale GPU mappings. For fault-mode VMs,
> + * this zaps the PTEs. The next GPU access will trigger a page fault and
> + * perform NULL rebind (scratch pages or clear PTEs based on VM config).
> + *
> + * Return: 0 on success, negative error code on failure
> + */
> +static int xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
> +{
> +	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
> +	struct ttm_placement place = {};
> +	int ret;
> +
> +	xe_bo_assert_held(bo);
> +
> +	if (!ttm_bo->ttm)
> +		return 0;
> +
> +	if (!xe_bo_madv_is_dontneed(bo))
> +		return 0;
> +
> +	ret = ttm_bo_validate(ttm_bo, &place, ctx);
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * Use the standard pre-move hook so we share the same cleanup/invalidate
> +	 * path as migrations: drop any CPU vmap and schedule the necessary GPU
> +	 * unbind/rebind work.
> +	 *
> +	 * This may fail in no-wait contexts (fault/shrinker) or if the BO is
> +	 * pinned. Keep state unchanged on failure so we don't end up "PURGED"
> +	 * with stale mappings.
> +	 */
> +	ret = xe_bo_move_notify(bo, ctx);
> +	if (ret)
> +		return ret;

move_notify() must be called *before* pages are actually freed, that is
before ttm_bo_validate().

Other than that LGTM.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-11 15:26 ` [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
@ 2026-02-24 12:48   ` Thomas Hellström
  2026-02-24 15:07     ` Yadav, Arvind
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Hellström @ 2026-02-24 12:48 UTC (permalink / raw)
  To: Arvind Yadav, intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra

On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> Track purgeable state per-VMA instead of using a coarse shared
> BO check. This prevents purging shared BOs until all VMAs across
> all VMs are marked DONTNEED.
> 
> Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
> a BO purgeable. Add xe_bo_recheck_purgeable_on_vma_unbind() to
> handle state transitions when VMAs are destroyed - if all
> remaining VMAs are DONTNEED the BO can become purgeable, or if
> no VMAs remain it transitions to WILLNEED.
> 
> The per-VMA purgeable_state field stores the madvise hint for
> each mapping. Shared BOs can only be purged when all VMAs
> unanimously indicate DONTNEED.
> 
> One thing to note: when the last VMA goes away, we default back to
> WILLNEED. DONTNEED is a per-mapping hint, and without any mappings
> there is no remaining madvise state to justify purging. This prevents
> BOs from becoming purgeable solely due to being temporarily unmapped.
> 
> v3:
>   - This addresses Thomas Hellström's feedback: "loop over all vmas
>     attached to the bo and check that they all say WONTNEED. This
>     will also need a check at VMA unbinding"
> 
> v4:
>   - @madv_purgeable atomic_t → u32 change across all relevant
>     patches (Matt)
> 
> v5:
>   - Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
>     right after drm_gpuva_unlink() where we already hold the BO lock,
>     drop the trylock-based late destroy path (Matt)
>   - Move purgeable_state into xe_vma_mem_attr with the other madvise
>     attributes (Matt)
>   - Drop READ_ONCE since the BO lock already protects us (Matt)
>   - Keep returning false when there are no VMAs - otherwise we'd mark
>     BOs purgeable without any user hint (Matt)
>   - Use xe_bo_set_purgeable_state() instead of direct
>     initialization (Matt)
>   - use xe_assert instead of drm_war (Thomas)

Typo. 

There were also a couple of review issues in my reply here:

https://patchwork.freedesktop.org/patch/699451/?series=156651&rev=5

that were never addressed or at least commented upon.

The comment there on retaining purgeable state after the last vma is
unmapped could be discussed, though.

Let's say we unmap a vma marking a bo purgeable. It then becomes either
purged or non-purgeable. 

Then an app tries to access it either using a new vma or CPU map. Then
it will typically succeed, or might occasionally fail if the bo
happened to be purged in between.

How do we handle new vma map requests and cpu-faults to a bo in
purgeable state? Do we block those?

Thanks,
Thomas



> 
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_svm.c        |  1 +
>  drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
>  drivers/gpu/drm/xe/xe_vm_madvise.c | 98 ++++++++++++++++++++++++++++--
>  drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
>  drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
>  5 files changed, 116 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index cda3bf7e2418..329c77aa5c20 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -318,6 +318,7 @@ static void xe_vma_set_default_attributes(struct xe_vma *vma)
>  		.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
>  		.pat_index = vma->attr.default_pat_index,
>  		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
> +		.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
>  	};
>  
>  	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 71cf3ce6c62b..e84b9e7cb5eb 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -39,6 +39,7 @@
>  #include "xe_tile.h"
>  #include "xe_tlb_inval.h"
>  #include "xe_trace_bo.h"
> +#include "xe_vm_madvise.h"
>  #include "xe_wa.h"
>  
>  static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> @@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>  static void xe_vma_destroy_late(struct xe_vma *vma)
>  {
>  	struct xe_vm *vm = xe_vma_vm(vma);
> +	struct xe_bo *bo = xe_vma_bo(vma);
>  
>  	if (vma->ufence) {
>  		xe_sync_ufence_put(vma->ufence);
> @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
>  	} else if (xe_vma_is_null(vma) || xe_vma_is_cpu_addr_mirror(vma)) {
>  		xe_vm_put(vm);
>  	} else {
> -		xe_bo_put(xe_vma_bo(vma));
> +		xe_bo_put(bo);
>  	}
>  
>  	xe_vma_free(vma);
> @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence *fence,
>  static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
>  {
>  	struct xe_vm *vm = xe_vma_vm(vma);
> +	struct xe_bo *bo = xe_vma_bo(vma);
>  
>  	lockdep_assert_held_write(&vm->lock);
>  	xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
> @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
>  		xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
>  		xe_userptr_destroy(to_userptr_vma(vma));
>  	} else if (!xe_vma_is_null(vma) && !xe_vma_is_cpu_addr_mirror(vma)) {
> -		xe_bo_assert_held(xe_vma_bo(vma));
> +		xe_bo_assert_held(bo);
>  
>  		drm_gpuva_unlink(&vma->gpuva);
> +		xe_bo_recompute_purgeable_state(bo);
>  	}
>  
>  	xe_vm_assert_held(vm);
> @@ -2681,6 +2685,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>  				.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
>  				.default_pat_index = op->map.pat_index,
>  				.pat_index = op->map.pat_index,
> +				.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
>  			};
>  
>  			flags |= op->map.vma_flags & XE_VMA_CREATE_MASK;
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index d9cfba7bfe0b..c184426546a2 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -12,6 +12,7 @@
>  #include "xe_pat.h"
>  #include "xe_pt.h"
>  #include "xe_svm.h"
> +#include "xe_vm.h"
>  
>  struct xe_vmas_in_madvise_range {
>  	u64 addr;
> @@ -183,6 +184,89 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>  	}
>  }
>  
> +/**
> + * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are marked DONTNEED
> + * @bo: Buffer object
> + *
> + * Check all VMAs across all VMs to determine if BO can be purged.
> + * Shared BOs require unanimous DONTNEED state from all mappings.
> + *
> + * Caller must hold BO dma-resv lock.
> + *
> + * Return: true if all VMAs are DONTNEED, false otherwise
> + */
> +static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
> +{
> +	struct drm_gpuvm_bo *vm_bo;
> +	struct drm_gpuva *gpuva;
> +	struct drm_gem_object *obj = &bo->ttm.base;
> +	bool has_vmas = false;
> +
> +	xe_bo_assert_held(bo);
> +
> +	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> +		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
> +			struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> +			has_vmas = true;
> +
> +			/* Any non-DONTNEED VMA prevents purging */
> +			if (vma->attr.purgeable_state != XE_MADV_PURGEABLE_DONTNEED)
> +				return false;
> +		}
> +	}
> +
> +	/*
> +	 * No VMAs => no mapping-level DONTNEED hint.
> +	 * Default to WILLNEED to avoid making BOs purgeable without
> +	 * explicit user intent.
> +	 */
> +	if (!has_vmas)
> +		return false;
> +
> +	return true;
> +}
> +
> +/**
> + * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state from VMAs
> + * @bo: Buffer object
> + *
> + * Walk all VMAs to determine if BO should be purgeable or not.
> + * Shared BOs require unanimous DONTNEED state from all mappings.
> + *
> + * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM lists,
> + * VM lock must also be held (write) to prevent concurrent VMA modifications.
> + * This is satisfied at both call sites:
> + * - xe_vma_destroy(): holds vm->lock write
> + * - madvise_purgeable(): holds vm->lock write (from madvise ioctl path)
> + *
> + * Return: nothing
> + */
> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> +{
> +	if (!bo)
> +		return;
> +
> +	xe_bo_assert_held(bo);
> +
> +	/*
> +	 * Once purged, always purged. Cannot transition back to WILLNEED.
> +	 * This matches i915 semantics where purged BOs are permanently invalid.
> +	 */
> +	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> +		return;
> +
> +	if (xe_bo_all_vmas_dontneed(bo)) {
> +		/* All VMAs are DONTNEED - mark BO purgeable */
> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
> +			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> +	} else {
> +		/* At least one VMA is WILLNEED - BO must not be purgeable */
> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
> +			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> +	}
> +}
> +
>  /**
>   * madvise_purgeable - Handle purgeable buffer object advice
>   * @xe: XE device
> @@ -231,14 +315,20 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
>  
>  		switch (op->purge_state_val.val) {
>  		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> -			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> +			vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_WILLNEED;
> +
> +			/* Update BO purgeable state */
> +			xe_bo_recompute_purgeable_state(bo);
>  			break;
>  		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> -			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> +			vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_DONTNEED;
> +
> +			/* Update BO purgeable state */
> +			xe_bo_recompute_purgeable_state(bo);
>  			break;
>  		default:
> -			drm_warn(&vm->xe->drm, "Invalid madvice value = %d\n",
> -				 op->purge_state_val.val);
> +			/* Should never hit - values validated in madvise_args_are_sane() */
> +			xe_assert(vm->xe, 0);
>  			return;
>  		}
>  	}
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
> index b0e1fc445f23..39acd2689ca0 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> @@ -8,8 +8,11 @@
>  
>  struct drm_device;
>  struct drm_file;
> +struct xe_bo;
>  
>  int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
>  			struct drm_file *file);
>  
> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
> +
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 43203e90ee3e..fd563039e8f4 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
>  	 * same as default_pat_index unless overwritten by madvise.
>  	 */
>  	u16 pat_index;
> +
> +	/**
> +	 * @purgeable_state: Purgeable hint for this VMA mapping
> +	 *
> +	 * Per-VMA purgeable state from madvise. Valid states are WILLNEED (0)
> +	 * or DONTNEED (1). Shared BOs require all VMAs to be DONTNEED before
> +	 * the BO can be purged. PURGED state exists only at BO level.
> +	 *
> +	 * Protected by BO dma-resv lock. Set via DRM_IOCTL_XE_MADVISE.
> +	 */
> +	u32 purgeable_state;
>  };
>  
>  struct xe_vma {

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 7/9] drm/xe/madvise: Block imported and exported dma-bufs
  2026-02-11 15:26 ` [PATCH v5 7/9] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
@ 2026-02-24 14:15   ` Thomas Hellström
  0 siblings, 0 replies; 36+ messages in thread
From: Thomas Hellström @ 2026-02-24 14:15 UTC (permalink / raw)
  To: Arvind Yadav, intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra

On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> Prevent marking imported or exported dma-bufs as purgeable.
> External devices may be accessing these buffers without our
> knowledge, making purging unsafe.
> 
> Check drm_gem_is_imported() for buffers created by other
> drivers and obj->dma_buf for buffers exported to other
> drivers. Silently skip these BOs during madvise processing.
> 
> This follows drm_gem_shmem's purgeable implementation and
> prevents data corruption from purging actively-used shared
> buffers.
> 
> v3:
>    - Addresses review feedback from Matt Roper about handling
>      imported/exported BOs correctly in the purgeable BO
>      implementation.
> 
> v4:
>    - Check should be added to xe_vm_madvise_purgeable_bo.
> 
> v5:
>    - Rename xe_bo_is_external_dmabuf() to xe_bo_is_dmabuf_shared()
>      for clarity (Thomas)
>    - Update comments to clarify why both imports and exports
>      are unsafe to purge.
> 
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_vm_madvise.c | 35 ++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index c184426546a2..8d55ea78b6d1 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -184,6 +184,33 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>  	}
>  }
>  
> +/**
> + * xe_bo_is_dmabuf_shared() - Check if BO is shared via dma-buf
> + * @bo: Buffer object
> + *
> + * Prevent marking imported or exported dma-bufs as purgeable.
> + * For imported BOs, Xe doesn't own the backing store and cannot
> + * safely reclaim pages (exporter or other devices may still be
> + * using them). For exported BOs, external devices may have active
> + * mappings we cannot track.
> + *
> + * Return: true if BO is imported or exported, false otherwise

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>


> + */
> +static bool xe_bo_is_dmabuf_shared(struct xe_bo *bo)
> +{
> +	struct drm_gem_object *obj = &bo->ttm.base;
> +
> +	/* Imported: exporter owns backing store */
> +	if (drm_gem_is_imported(obj))
> +		return true;
> +
> +	/* Exported: external devices may be accessing */
> +	if (obj->dma_buf)
> +		return true;
> +
> +	return false;
> +}
> +
>  /**
>   * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are marked
> DONTNEED
>   * @bo: Buffer object
> @@ -204,6 +231,10 @@ static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
>  
>  	xe_bo_assert_held(bo);
>  
> +	/* Shared dma-bufs cannot be purgeable */
> +	if (xe_bo_is_dmabuf_shared(bo))
> +		return false;
> +
>  	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
>  		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
>  			struct xe_vma *vma = gpuva_to_vma(gpuva);
> @@ -304,6 +335,10 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
>  		/* BO must be locked before modifying madv state */
>  		xe_bo_assert_held(bo);
>  
> +		/* Skip shared dma-bufs */
> +		if (xe_bo_is_dmabuf_shared(bo))
> +			continue;
> +
>  		/*
>  		 * Once purged, always purged. Cannot transition back to WILLNEED.
>  		 * This matches i915 semantics where purged BOs are permanently invalid.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 8/9] drm/xe/bo: Add purgeable shrinker state helpers
  2026-02-11 15:26 ` [PATCH v5 8/9] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
@ 2026-02-24 14:21   ` Thomas Hellström
  2026-02-24 15:09     ` Yadav, Arvind
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Hellström @ 2026-02-24 14:21 UTC (permalink / raw)
  To: Arvind Yadav, intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra

On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> Encapsulate TTM purgeable flag updates and shrinker page accounting
> into helper functions. This prevents desynchronization between the
> TTM tt->purgeable flag and the shrinker's page bucket counters.
> 
> Without these helpers, direct manipulation of xe_ttm_tt->purgeable
> risks forgetting to update the corresponding shrinker counters,
> leading to incorrect memory pressure calculations.
> 
> Add xe_bo_set_purgeable_shrinker() and
> xe_bo_clear_purgeable_shrinker()
> which atomically update both the TTM flag and transfer pages between
> the shrinkable and purgeable buckets.
> 
> Handle ghost BOs and zero-refcount xe BOs separately in xe_bo_shrink().
> Ghost BOs from ttm_bo_pipeline_gutting() still hold reclaimable pages,
> so attempt the shrink to let the shrinker block until the fence signals.
> For xe BOs whose refcount has dropped to zero, return -EBUSY since the
> destroy path will handle cleanup.
> 
> v4:
>   - @madv_purgeable atomic_t → u32 change across all relevant
>     patches (Matt)
> 
> v5:
>   - Update purgeable BO state to PURGED after a successful shrinker
>     purge for DONTNEED BOs.
>   - Split ghost BO and zero-refcount handling in xe_bo_shrink()
> (Thomas)

You'd need to split this patch so that the zero-refcount fix gets into
a separate patch with a Fixes: tag!

Otherwise LGTM.


> 
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_bo.c         | 69 +++++++++++++++++++++++++++++-
>  drivers/gpu/drm/xe/xe_bo.h         |  2 +
>  drivers/gpu/drm/xe/xe_vm_madvise.c |  8 +++-
>  3 files changed, 76 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 7ee85c8eadde..9484105708f7 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -863,6 +863,66 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
>  	bo->madv_purgeable = new_state;
>  }
>  
> +/**
> + * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
> + * @bo: Buffer object
> + *
> + * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
> + * discard pages immediately without swapping. Caller holds BO lock.
> + */
> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
> +{
> +	struct ttm_buffer_object *ttm_bo = &bo->ttm;
> +	struct ttm_tt *tt = ttm_bo->ttm;
> +	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> +	struct xe_ttm_tt *xe_tt;
> +
> +	xe_bo_assert_held(bo);
> +
> +	if (!tt || !ttm_tt_is_populated(tt))
> +		return;
> +
> +	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> +
> +	if (!xe_tt->purgeable) {
> +		xe_tt->purgeable = true;
> +		/* Transfer pages from shrinkable to purgeable count */
> +		xe_shrinker_mod_pages(xe->mem.shrinker,
> +				      -(long)tt->num_pages,
> +				      tt->num_pages);
> +	}
> +}
> +
> +/**
> + * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
> + * @bo: Buffer object
> + *
> + * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
> + * swap pages instead of discarding. Caller holds BO lock.
> + */
> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
> +{
> +	struct ttm_buffer_object *ttm_bo = &bo->ttm;
> +	struct ttm_tt *tt = ttm_bo->ttm;
> +	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> +	struct xe_ttm_tt *xe_tt;
> +
> +	xe_bo_assert_held(bo);
> +
> +	if (!tt || !ttm_tt_is_populated(tt))
> +		return;
> +
> +	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> +
> +	if (xe_tt->purgeable) {
> +		xe_tt->purgeable = false;
> +		/* Transfer pages from purgeable to shrinkable count */
> +		xe_shrinker_mod_pages(xe->mem.shrinker,
> +				      tt->num_pages,
> +				      -(long)tt->num_pages);
> +	}
> +}
> +
>  /**
>   * xe_ttm_bo_purge() - Purge buffer object backing store
>   * @ttm_bo: The TTM buffer object to purge
> @@ -1234,14 +1294,21 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
>  	if (!xe_bo_eviction_valuable(bo, &place))
>  		return -EBUSY;
>  
> -	if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
> +	/* Ghost BOs still hold reclaimable pages, try to shrink them. */
> +	if (!xe_bo_is_xe_bo(bo))
>  		return xe_bo_shrink_purge(ctx, bo, scanned);
>  
> +	if (!xe_bo_get_unless_zero(xe_bo))
> +		return -EBUSY;
> +
>  	if (xe_tt->purgeable) {
>  		if (bo->resource->mem_type != XE_PL_SYSTEM)
>  			lret = xe_bo_move_notify(xe_bo, ctx);
>  		if (!lret)
>  			lret = xe_bo_shrink_purge(ctx, bo, scanned);
> +		if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
> +			xe_bo_set_purgeable_state(xe_bo,
> +						  XE_MADV_PURGEABLE_PURGED);
>  		goto out_unref;
>  	}
>  
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index 0d9f25b51eb2..46d1fff10e4f 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -272,6 +272,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
>  }
>  
>  void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
>  
>  static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>  {
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 8d55ea78b6d1..235fff2b654e 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -289,12 +289,16 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
>  
>  	if (xe_bo_all_vmas_dontneed(bo)) {
>  		/* All VMAs are DONTNEED - mark BO purgeable */
> -		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED) {
>  			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> +			xe_bo_set_purgeable_shrinker(bo);
> +		}
>  	} else {
>  		/* At least one VMA is WILLNEED - BO must not be purgeable */
> -		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED) {
>  			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> +			xe_bo_clear_purgeable_shrinker(bo);
> +		}
>  	}
>  }
>  

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 3/9] drm/xe/madvise: Implement purgeable buffer object support
  2026-02-24 12:21   ` Thomas Hellström
@ 2026-02-24 14:56     ` Yadav, Arvind
  0 siblings, 0 replies; 36+ messages in thread
From: Yadav, Arvind @ 2026-02-24 14:56 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra


On 24-02-2026 17:51, Thomas Hellström wrote:
> On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
>> This allows userspace applications to provide memory usage hints to
>> the kernel for better memory management under pressure:
>>
>> Add the core implementation for purgeable buffer objects, enabling
>> memory
>> reclamation of user-designated DONTNEED buffers during eviction.
>>
>> This patch implements the purge operation and state machine
>> transitions:
>>
>> Purgeable States (from xe_madv_purgeable_state):
>>   - WILLNEED (0): BO should be retained, actively used
>>   - DONTNEED (1): BO eligible for purging, not currently needed
>>   - PURGED (2): BO backing store reclaimed, permanently invalid
>>
>> Design Rationale:
>>    - Async TLB invalidation via trigger_rebind (no blocking
>> xe_vm_invalidate_vma)
>>    - i915 compatibility: retained field, "once purged always purged"
>> semantics
>>    - Shared BO protection prevents multi-process memory corruption
>>    - Scratch PTE reuse avoids new infrastructure, safe for fault mode
>>
>> Note: The madvise_purgeable() function is implemented but not hooked
>> into
>> the IOCTL handler (madvise_funcs[] entry is NULL) to maintain
>> bisectability.
>> The feature will be enabled in the final patch when all supporting
>> infrastructure (shrinker, per-VMA tracking) is complete.
>>
>> v2:
>>    - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas
>> Hellström)
>>    - Add NULL rebind with scratch PTEs for fault mode (Thomas
>> Hellström)
>>    - Implement i915-compatible retained field logic (Thomas Hellström)
>>    - Skip BO validation for purged BOs in page fault handler (crash
>> fix)
>>    - Add scratch VM check in page fault path (non-scratch VMs fail
>> fault)
>>    - Force clear_pt for non-scratch VMs to avoid phys addr 0 mapping
>> (review fix)
>>    - Add !is_purged check to resource cursor setup to prevent stale
>> access
>>
>> v3:
>>    - Rebase as xe_gt_pagefault.c is gone upstream and replaced
>>      with xe_pagefault.c (Matthew Brost)
>>    - Xe specific warn on (Matthew Brost)
>>    - Call helpers for madv_purgeable access(Matthew Brost)
>>    - Remove bo NULL check(Matthew Brost)
>>    - Use xe_bo_assert_held instead of dma assert(Matthew Brost)
>>    - Move the xe_bo_is_purged check under the dma-resv lock( by Matt)
>>    - Drop is_purged from xe_pt_stage_bind_entry and just set is_null
>> to true
>>      for purged BO rename s/is_null/is_null_or_purged (by Matt)
>>    - UAPI rule should not be changed.(Matthew Brost)
>>    - Make 'retained' a userptr (Matthew Brost)
>>
>> v4:
>>    - @madv_purgeable atomic_t → u32 change across all relevant patches
>> (Matt)
>>
>> v5:
>>    - Introduce xe_bo_set_purgeable_state() helper (void return) to
>> centralize
>>      madv_purgeable updates with xe_bo_assert_held() and state
>> transition
>>      validation using explicit enum checks (no transition out of
>> PURGED) (Matt)
>>    - Make xe_ttm_bo_purge() return int and propagate failures from
>>      xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g.
>> no_wait_gpu
>>      paths) rather than silently ignoring (Matt)
>>    - Replace drm_WARN_ON with xe_assert for better Xe-specific
>> assertions (Matt)
>>    - Hook purgeable handling into
>> madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
>>      instead of special-case path in xe_vm_madvise_ioctl() (Matt)
>>    - Track purgeable retained return via xe_madvise_details and
>> perform
>>      copy_to_user() from xe_madvise_details_fini() after locks are
>> dropped (Matt)
>>    - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
>>      __maybe_unused on madvise_purgeable() to maintain bisectability
>> until
>>      shrinker integration is complete in final patch (Matt)
>>    - Use put_user() instead of copy_to_user() for single u32 retained
>> value (Thomas)
>>    - Return -EFAULT from ioctl if put_user() fails (Thomas)
>>    - Validate userspace initialized retained to 0 before ioctl,
>> ensuring safe
>>      default (0 = "assume purged") if put_user() fails (Thomas)
>>    - Refactor error handling: separate fallible put_user from
>> infallible cleanup
>>    - xe_madvise_purgeable_retained_to_user(): separate helper for
>> fallible put_user
>>    - Call put_user() after releasing all locks to avoid circular
>> dependencies
>>    - Use xe_bo_move_notify() instead of xe_bo_trigger_rebind() in
>> xe_ttm_bo_purge()
>>      for proper abstraction - handles vunmap, dma-buf notifications,
>> and VRAM
>>      userfault cleanup (Thomas)
>>    - Fix LRU crash while running shrink test
>>    - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_bo.c         | 106 ++++++++++++++++++++---
>>   drivers/gpu/drm/xe/xe_bo.h         |   2 +
>>   drivers/gpu/drm/xe/xe_pagefault.c  |  12 +++
>>   drivers/gpu/drm/xe/xe_pt.c         |  40 +++++++--
>>   drivers/gpu/drm/xe/xe_vm.c         |  20 ++++-
>>   drivers/gpu/drm/xe/xe_vm_madvise.c | 133 +++++++++++++++++++++++++++++
>>   6 files changed, 292 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index 8bf16d60b9a5..87cde4b2fe59 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -835,6 +835,83 @@ static int xe_bo_move_notify(struct xe_bo *bo,
>>   	return 0;
>>   }
>>   
>> +/**
>> + * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
>> + * @bo: Buffer object
>> + * @new_state: New purgeable state
>> + *
>> + * Sets the purgeable state with lockdep assertions and validates state
>> + * transitions. Once a BO is PURGED, it cannot transition to any other state.
>> + * Invalid transitions are caught with xe_assert().
>> + */
>> +void xe_bo_set_purgeable_state(struct xe_bo *bo,
>> +			       enum xe_madv_purgeable_state new_state)
>> +{
>> +	struct xe_device *xe = xe_bo_device(bo);
>> +
>> +	xe_bo_assert_held(bo);
>> +
>> +	/* Validate state is one of the known values */
>> +	xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
>> +		      new_state == XE_MADV_PURGEABLE_DONTNEED ||
>> +		      new_state == XE_MADV_PURGEABLE_PURGED);
>> +
>> +	/* Once purged, always purged - cannot transition out */
>> +	xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
>> +			new_state != XE_MADV_PURGEABLE_PURGED));
>> +
>> +	bo->madv_purgeable = new_state;
>> +}
>> +
>> +/**
>> + * xe_ttm_bo_purge() - Purge buffer object backing store
>> + * @ttm_bo: The TTM buffer object to purge
>> + * @ctx: TTM operation context
>> + *
>> + * This function purges the backing store of a BO marked as DONTNEED and
>> + * triggers rebind to invalidate stale GPU mappings. For fault-mode VMs,
>> + * this zaps the PTEs. The next GPU access will trigger a page fault and
>> + * perform NULL rebind (scratch pages or clear PTEs based on VM config).
>> + *
>> + * Return: 0 on success, negative error code on failure
>> + */
>> +static int xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
>> +{
>> +	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
>> +	struct ttm_placement place = {};
>> +	int ret;
>> +
>> +	xe_bo_assert_held(bo);
>> +
>> +	if (!ttm_bo->ttm)
>> +		return 0;
>> +
>> +	if (!xe_bo_madv_is_dontneed(bo))
>> +		return 0;
>> +
>> +	ret = ttm_bo_validate(ttm_bo, &place, ctx);
>> +	if (ret)
>> +		return ret;
>> +
>> +	/*
>> +	 * Use the standard pre-move hook so we share the same cleanup/invalidate
>> +	 * path as migrations: drop any CPU vmap and schedule the necessary GPU
>> +	 * unbind/rebind work.
>> +	 *
>> +	 * This may fail in no-wait contexts (fault/shrinker) or if the BO is
>> +	 * pinned. Keep state unchanged on failure so we don't end up "PURGED"
>> +	 * with stale mappings.
>> +	 */
>> +	ret = xe_bo_move_notify(bo, ctx);
>> +	if (ret)
>> +		return ret;
> move_notify() must be called *before* pages are actually freed, that is
> before ttm_bo_validate().


Noted, I will move this before ttm_bo_validate().

thanks,
Arvind

>
> Other than that LGTM.
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-24 12:48   ` Thomas Hellström
@ 2026-02-24 15:07     ` Yadav, Arvind
  2026-02-24 16:36       ` Matthew Brost
  0 siblings, 1 reply; 36+ messages in thread
From: Yadav, Arvind @ 2026-02-24 15:07 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra


On 24-02-2026 18:18, Thomas Hellström wrote:
> On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
>> Track purgeable state per-VMA instead of using a coarse shared
>> BO check. This prevents purging shared BOs until all VMAs across
>> all VMs are marked DONTNEED.
>>
>> Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
>> a BO purgeable. Add xe_bo_recheck_purgeable_on_vma_unbind() to
>> handle state transitions when VMAs are destroyed - if all
>> remaining VMAs are DONTNEED the BO can become purgeable, or if
>> no VMAs remain it transitions to WILLNEED.
>>
>> The per-VMA purgeable_state field stores the madvise hint for
>> each mapping. Shared BOs can only be purged when all VMAs
>> unanimously indicate DONTNEED.
>>
>> One thing to note: when the last VMA goes away, we default back to
>> WILLNEED. DONTNEED is a per-mapping hint, and without any mappings
>> there is no remaining madvise state to justify purging. This prevents
>> BOs from becoming purgeable solely due to being temporarily unmapped.
>>
>> v3:
>>    - This addresses Thomas Hellström's feedback: "loop over all vmas
>>      attached to the bo and check that they all say WONTNEED. This
>> will
>>      also need a check at VMA unbinding"
>>
>> v4:
>>    - @madv_purgeable atomic_t → u32 change across all relevant
>>      patches (Matt)
>>
>> v5:
>>    - Call xe_bo_recheck_purgeable_on_vma_unbind() from
>> xe_vma_destroy()
>>      right after drm_gpuva_unlink() where we already hold the BO lock,
>>      drop the trylock-based late destroy path (Matt)
>>    - Move purgeable_state into xe_vma_mem_attr with the other madvise
>>      attributes (Matt)
>>    - Drop READ_ONCE since the BO lock already protects us (Matt)
>>    - Keep returning false when there are no VMAs - otherwise we'd mark
>>      BOs purgeable without any user hint (Matt)
>>    - Use xe_bo_set_purgeable_state() instead of direct
>> initialization(Matt)
>>    - use xe_assert instead of drm_war (Thomas)
> Typo.


Noted,

>
> There were also a couple of review issues in my reply here:
>
> https://patchwork.freedesktop.org/patch/699451/?series=156651&rev=5
>
> that were never addressed or at least commented upon.
>
> The comment there on retaining purgeable state after the last vma is
> unmapped could be discussed, though.
>
> Let's say we unmap a vma marking a bo purgeable. It then becomes either
> purged or non-purgeable.
>
> Then an app tries to access it either using a new vma or CPU map. Then
> it will typically succeed, or might occasionally fail if the bo
> happened to be purged in between.
>
> How do we handle new vma map requests and cpu-faults to a bo in
> purgeable state? Do we block those?


@Thomas,

The implementation already blocks new access to purged BOs:
 1. New VMA mappings (Patch 0005): vma_lock_and_validate() rejects MAP
    operations to purged BOs with -EINVAL via the check_purged flag.
 2. CPU faults (Patch 0004): Both xe_bo_cpu_prep() and
    xe_gem_mmap_offset() return errors (-EFAULT / VM_FAULT_SIGBUS) when
    accessing purged BOs.
 3. "Once purged, always purged": Even when the last VMA is unmapped,
    xe_bo_recompute_purgeable_state() preserves the PURGED state - it never
    transitions back to WILLNEED or DONTNEED (see early return at the top of
    the function).

The only way forward for the application is to destroy the purged BO and
create a new one.

Regarding the 'no VMAs → WILLNEED' logic: this only applies to
non-purged BOs that happen to be temporarily unmapped. Purged BOs remain
permanently invalid.

Thanks,
Arvind
>
> Thanks,
> Thomas
>
>
>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_svm.c        |  1 +
>>   drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
>>   drivers/gpu/drm/xe/xe_vm_madvise.c | 98 ++++++++++++++++++++++++++++--
>>   drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
>>   drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
>>   5 files changed, 116 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index cda3bf7e2418..329c77aa5c20 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -318,6 +318,7 @@ static void xe_vma_set_default_attributes(struct xe_vma *vma)
>>   		.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
>>   		.pat_index = vma->attr.default_pat_index,
>>   		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
>> +		.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
>>   	};
>>   
>>   	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 71cf3ce6c62b..e84b9e7cb5eb 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -39,6 +39,7 @@
>>   #include "xe_tile.h"
>>   #include "xe_tlb_inval.h"
>>   #include "xe_trace_bo.h"
>> +#include "xe_vm_madvise.h"
>>   #include "xe_wa.h"
>>   
>>   static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
>> @@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>>   static void xe_vma_destroy_late(struct xe_vma *vma)
>>   {
>>   	struct xe_vm *vm = xe_vma_vm(vma);
>> +	struct xe_bo *bo = xe_vma_bo(vma);
>>   
>>   	if (vma->ufence) {
>>   		xe_sync_ufence_put(vma->ufence);
>> @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
>>  	} else if (xe_vma_is_null(vma) || xe_vma_is_cpu_addr_mirror(vma)) {
>>   		xe_vm_put(vm);
>>   	} else {
>> -		xe_bo_put(xe_vma_bo(vma));
>> +		xe_bo_put(bo);
>>   	}
>>   
>>   	xe_vma_free(vma);
>> @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence *fence,
>>   static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
>>   {
>>   	struct xe_vm *vm = xe_vma_vm(vma);
>> +	struct xe_bo *bo = xe_vma_bo(vma);
>>   
>>   	lockdep_assert_held_write(&vm->lock);
>>   	xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
>> @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
>>   		xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
>>   		xe_userptr_destroy(to_userptr_vma(vma));
>>  	} else if (!xe_vma_is_null(vma) && !xe_vma_is_cpu_addr_mirror(vma)) {
>> -		xe_bo_assert_held(xe_vma_bo(vma));
>> +		xe_bo_assert_held(bo);
>>   
>>   		drm_gpuva_unlink(&vma->gpuva);
>> +		xe_bo_recompute_purgeable_state(bo);
>>   	}
>>   
>>   	xe_vm_assert_held(vm);
>> @@ -2681,6 +2685,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>>   				.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
>>   				.default_pat_index = op->map.pat_index,
>>   				.pat_index = op->map.pat_index,
>>   				.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
>>   			};
>>   
>>  			flags |= op->map.vma_flags & XE_VMA_CREATE_MASK;
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index d9cfba7bfe0b..c184426546a2 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -12,6 +12,7 @@
>>   #include "xe_pat.h"
>>   #include "xe_pt.h"
>>   #include "xe_svm.h"
>> +#include "xe_vm.h"
>>   
>>   struct xe_vmas_in_madvise_range {
>>   	u64 addr;
>> @@ -183,6 +184,89 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>>   	}
>>   }
>>   
>> +/**
>> + * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are marked DONTNEED
>> + * @bo: Buffer object
>> + *
>> + * Check all VMAs across all VMs to determine if BO can be purged.
>> + * Shared BOs require unanimous DONTNEED state from all mappings.
>> + *
>> + * Caller must hold BO dma-resv lock.
>> + *
>> + * Return: true if all VMAs are DONTNEED, false otherwise
>> + */
>> +static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
>> +{
>> +	struct drm_gpuvm_bo *vm_bo;
>> +	struct drm_gpuva *gpuva;
>> +	struct drm_gem_object *obj = &bo->ttm.base;
>> +	bool has_vmas = false;
>> +
>> +	xe_bo_assert_held(bo);
>> +
>> +	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
>> +		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
>> +			struct xe_vma *vma = gpuva_to_vma(gpuva);
>> +
>> +			has_vmas = true;
>> +
>> +			/* Any non-DONTNEED VMA prevents purging */
>> +			if (vma->attr.purgeable_state != XE_MADV_PURGEABLE_DONTNEED)
>> +				return false;
>> +		}
>> +	}
>> +
>> +	/*
>> +	 * No VMAs => no mapping-level DONTNEED hint.
>> +	 * Default to WILLNEED to avoid making BOs purgeable without
>> +	 * explicit user intent.
>> +	 */
>> +	if (!has_vmas)
>> +		return false;
>> +
>> +	return true;
>> +}
>> +
>> +/**
>> + * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state from VMAs
>> + * @bo: Buffer object
>> + *
>> + * Walk all VMAs to determine if BO should be purgeable or not.
>> + * Shared BOs require unanimous DONTNEED state from all mappings.
>> + *
>> + * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM lists,
>> + * VM lock must also be held (write) to prevent concurrent VMA modifications.
>> + * This is satisfied at both call sites:
>> + * - xe_vma_destroy(): holds vm->lock write
>> + * - madvise_purgeable(): holds vm->lock write (from madvise ioctl path)
>> + *
>> + * Return: nothing
>> + */
>> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
>> +{
>> +	if (!bo)
>> +		return;
>> +
>> +	xe_bo_assert_held(bo);
>> +
>> +	/*
>> +	 * Once purged, always purged. Cannot transition back to WILLNEED.
>> +	 * This matches i915 semantics where purged BOs are permanently invalid.
>> +	 */
>> +	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
>> +		return;
>> +
>> +	if (xe_bo_all_vmas_dontneed(bo)) {
>> +		/* All VMAs are DONTNEED - mark BO purgeable */
>> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
>> +			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
>> +	} else {
>> +		/* At least one VMA is WILLNEED - BO must not be purgeable */
>> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
>> +			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
>> +	}
>> +}
>> +
>>   /**
>>    * madvise_purgeable - Handle purgeable buffer object advice
>>    * @xe: XE device
>> @@ -231,14 +315,20 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
>>   
>>   		switch (op->purge_state_val.val) {
>>   		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
>> -			xe_bo_set_purgeable_state(bo,
>> XE_MADV_PURGEABLE_WILLNEED);
>> +			vmas[i]->attr.purgeable_state =
>> XE_MADV_PURGEABLE_WILLNEED;
>> +
>> +			/* Update BO purgeable state */
>> +			xe_bo_recompute_purgeable_state(bo);
>>   			break;
>>   		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
>> -			xe_bo_set_purgeable_state(bo,
>> XE_MADV_PURGEABLE_DONTNEED);
>> +			vmas[i]->attr.purgeable_state =
>> XE_MADV_PURGEABLE_DONTNEED;
>> +
>> +			/* Update BO purgeable state */
>> +			xe_bo_recompute_purgeable_state(bo);
>>   			break;
>>   		default:
>> -			drm_warn(&vm->xe->drm, "Invalid madvice
>> value = %d\n",
>> -				 op->purge_state_val.val);
>> +			/* Should never hit - values validated in
>> madvise_args_are_sane() */
>> +			xe_assert(vm->xe, 0);
>>   			return;
>>   		}
>>   	}
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
>> index b0e1fc445f23..39acd2689ca0 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
>> @@ -8,8 +8,11 @@
>>   
>>   struct drm_device;
>>   struct drm_file;
>> +struct xe_bo;
>>   
>>   int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
>>   			struct drm_file *file);
>>   
>> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
>> +
>>   #endif
>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
>> index 43203e90ee3e..fd563039e8f4 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>> @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
>>   	 * same as default_pat_index unless overwritten by madvise.
>>   	 */
>>   	u16 pat_index;
>> +
>> +	/**
>> +	 * @purgeable_state: Purgeable hint for this VMA mapping
>> +	 *
>> +	 * Per-VMA purgeable state from madvise. Valid states are WILLNEED (0)
>> +	 * or DONTNEED (1). Shared BOs require all VMAs to be DONTNEED before
>> +	 * the BO can be purged. PURGED state exists only at BO level.
>> +	 *
>> +	 * Protected by BO dma-resv lock. Set via DRM_IOCTL_XE_MADVISE.
>> +	 */
>> +	u32 purgeable_state;
>>   };
>>   
>>   struct xe_vma {

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 8/9] drm/xe/bo: Add purgeable shrinker state helpers
  2026-02-24 14:21   ` Thomas Hellström
@ 2026-02-24 15:09     ` Yadav, Arvind
  0 siblings, 0 replies; 36+ messages in thread
From: Yadav, Arvind @ 2026-02-24 15:09 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: matthew.brost, himal.prasad.ghimiray, pallavi.mishra


On 24-02-2026 19:51, Thomas Hellström wrote:
> On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
>> Encapsulate TTM purgeable flag updates and shrinker page accounting
>> into helper functions. This prevents desynchronization between the
>> TTM tt->purgeable flag and the shrinker's page bucket counters.
>>
>> Without these helpers, direct manipulation of xe_ttm_tt->purgeable
>> risks forgetting to update the corresponding shrinker counters,
>> leading to incorrect memory pressure calculations.
>>
>> Add xe_bo_set_purgeable_shrinker() and
>> xe_bo_clear_purgeable_shrinker()
>> which atomically update both the TTM flag and transfer pages between
>> the shrinkable and purgeable buckets.
>>
>> Handle ghost BOs and zero-refcount xe BOs separately in
>> xe_bo_shrink().
>> Ghost BOs from ttm_bo_pipeline_gutting() still hold reclaimable
>> pages,
>> so attempt the shrink to let the shrinker block until the fence
>> signals.
>> For xe BOs whose refcount has dropped to zero, return -EBUSY since
>> the
>> destroy path will handle cleanup.
>>
>> v4:
>>    - @madv_purgeable atomic_t → u32 change across all relevant
>>      patches (Matt)
>>
>> v5:
>>    - Update purgeable BO state to PURGED after a successful shrinker
>>      purge for DONTNEED BOs.
>>    - Split ghost BO and zero-refcount handling in xe_bo_shrink()
>> (Thomas)
> You'd need to split this patch so that the zero-refcount fix gets into
> a separate patch with a Fixes: tag!


Noted, I will create a separate patch.

Thanks,
Arvind

>
> Otherwise LGTM.
>
>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_bo.c         | 69 +++++++++++++++++++++++++++-
>>   drivers/gpu/drm/xe/xe_bo.h         |  2 +
>>   drivers/gpu/drm/xe/xe_vm_madvise.c |  8 +++-
>>   3 files changed, 76 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index 7ee85c8eadde..9484105708f7 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -863,6 +863,66 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
>>   	bo->madv_purgeable = new_state;
>>   }
>>   
>> +/**
>> + * xe_bo_set_purgeable_shrinker() - Mark BO purgeable and update shrinker
>> + * @bo: Buffer object
>> + *
>> + * Transfers pages from shrinkable to purgeable bucket. Shrinker can now
>> + * discard pages immediately without swapping. Caller holds BO lock.
>> + */
>> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo)
>> +{
>> +	struct ttm_buffer_object *ttm_bo = &bo->ttm;
>> +	struct ttm_tt *tt = ttm_bo->ttm;
>> +	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
>> +	struct xe_ttm_tt *xe_tt;
>> +
>> +	xe_bo_assert_held(bo);
>> +
>> +	if (!tt || !ttm_tt_is_populated(tt))
>> +		return;
>> +
>> +	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
>> +
>> +	if (!xe_tt->purgeable) {
>> +		xe_tt->purgeable = true;
>> +		/* Transfer pages from shrinkable to purgeable count */
>> +		xe_shrinker_mod_pages(xe->mem.shrinker,
>> +				      -(long)tt->num_pages,
>> +				      tt->num_pages);
>> +	}
>> +}
>> +
>> +/**
>> + * xe_bo_clear_purgeable_shrinker() - Mark BO non-purgeable and update shrinker
>> + * @bo: Buffer object
>> + *
>> + * Transfers pages from purgeable to shrinkable bucket. Shrinker must now
>> + * swap pages instead of discarding. Caller holds BO lock.
>> + */
>> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo)
>> +{
>> +	struct ttm_buffer_object *ttm_bo = &bo->ttm;
>> +	struct ttm_tt *tt = ttm_bo->ttm;
>> +	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
>> +	struct xe_ttm_tt *xe_tt;
>> +
>> +	xe_bo_assert_held(bo);
>> +
>> +	if (!tt || !ttm_tt_is_populated(tt))
>> +		return;
>> +
>> +	xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
>> +
>> +	if (xe_tt->purgeable) {
>> +		xe_tt->purgeable = false;
>> +		/* Transfer pages from purgeable to shrinkable count */
>> +		xe_shrinker_mod_pages(xe->mem.shrinker,
>> +				      tt->num_pages,
>> +				      -(long)tt->num_pages);
>> +	}
>> +}
>> +
>>   /**
>>    * xe_ttm_bo_purge() - Purge buffer object backing store
>>    * @ttm_bo: The TTM buffer object to purge
>> @@ -1234,14 +1294,21 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
>>   	if (!xe_bo_eviction_valuable(bo, &place))
>>   		return -EBUSY;
>>   
>> -	if (!xe_bo_is_xe_bo(bo) || !xe_bo_get_unless_zero(xe_bo))
>> +	/* Ghost BOs still hold reclaimable pages, try to shrink them. */
>> +	if (!xe_bo_is_xe_bo(bo))
>>   		return xe_bo_shrink_purge(ctx, bo, scanned);
>>   
>> +	if (!xe_bo_get_unless_zero(xe_bo))
>> +		return -EBUSY;
>> +
>>   	if (xe_tt->purgeable) {
>>   		if (bo->resource->mem_type != XE_PL_SYSTEM)
>>   			lret = xe_bo_move_notify(xe_bo, ctx);
>>   		if (!lret)
>>   			lret = xe_bo_shrink_purge(ctx, bo, scanned);
>> +		if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
>> +			xe_bo_set_purgeable_state(xe_bo,
>> +						  XE_MADV_PURGEABLE_PURGED);
>>   		goto out_unref;
>>   	}
>>   
>> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
>> index 0d9f25b51eb2..46d1fff10e4f 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.h
>> +++ b/drivers/gpu/drm/xe/xe_bo.h
>> @@ -272,6 +272,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
>>   }
>>   
>>   void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
>> +void xe_bo_set_purgeable_shrinker(struct xe_bo *bo);
>> +void xe_bo_clear_purgeable_shrinker(struct xe_bo *bo);
>>   
>>   static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>>   {
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index 8d55ea78b6d1..235fff2b654e 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -289,12 +289,16 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
>>   
>>   	if (xe_bo_all_vmas_dontneed(bo)) {
>>   		/* All VMAs are DONTNEED - mark BO purgeable */
>> -		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
>> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED) {
>>   			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
>> +			xe_bo_set_purgeable_shrinker(bo);
>> +		}
>>   	} else {
>>   		/* At least one VMA is WILLNEED - BO must not be purgeable */
>> -		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
>> +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED) {
>>   			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
>> +			xe_bo_clear_purgeable_shrinker(bo);
>> +		}
>>   	}
>>   }
>>   


* Re: [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-24 15:07     ` Yadav, Arvind
@ 2026-02-24 16:36       ` Matthew Brost
  2026-02-25  5:35         ` Yadav, Arvind
  0 siblings, 1 reply; 36+ messages in thread
From: Matthew Brost @ 2026-02-24 16:36 UTC (permalink / raw)
  To: Yadav, Arvind
  Cc: Thomas Hellström, intel-xe, himal.prasad.ghimiray,
	pallavi.mishra

On Tue, Feb 24, 2026 at 08:37:44PM +0530, Yadav, Arvind wrote:
> 
> On 24-02-2026 18:18, Thomas Hellström wrote:
> > On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> > > Track purgeable state per-VMA instead of using a coarse shared
> > > BO check. This prevents purging shared BOs until all VMAs across
> > > all VMs are marked DONTNEED.
> > > 
> > > Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
> > > a BO purgeable. Add xe_bo_recheck_purgeable_on_vma_unbind() to
> > > handle state transitions when VMAs are destroyed - if all
> > > remaining VMAs are DONTNEED the BO can become purgeable, or if
> > > no VMAs remain it transitions to WILLNEED.
> > > 
> > > The per-VMA purgeable_state field stores the madvise hint for
> > > each mapping. Shared BOs can only be purged when all VMAs
> > > unanimously indicate DONTNEED.
> > > 
> > > One thing to note: when the last VMA goes away, we default back to
> > > WILLNEED. DONTNEED is a per-mapping hint, and without any mappings
> > > there is no remaining madvise state to justify purging. This prevents
> > > BOs from becoming purgeable solely due to being temporarily unmapped.
> > > 
> > > v3:
> > >    - This addresses Thomas Hellström's feedback: "loop over all vmas
> > >      attached to the bo and check that they all say WONTNEED. This
> > > will
> > >      also need a check at VMA unbinding"
> > > 
> > > v4:
> > >    - @madv_purgeable atomic_t → u32 change across all relevant
> > >      patches (Matt)
> > > 
> > > v5:
> > >    - Call xe_bo_recheck_purgeable_on_vma_unbind() from
> > > xe_vma_destroy()
> > >      right after drm_gpuva_unlink() where we already hold the BO lock,
> > >      drop the trylock-based late destroy path (Matt)
> > >    - Move purgeable_state into xe_vma_mem_attr with the other madvise
> > >      attributes (Matt)
> > >    - Drop READ_ONCE since the BO lock already protects us (Matt)
> > >    - Keep returning false when there are no VMAs - otherwise we'd mark
> > >      BOs purgeable without any user hint (Matt)
> > >    - Use xe_bo_set_purgeable_state() instead of direct
> > > initialization(Matt)
> > >    - use xe_assert instead of drm_war (Thomas)
> > Typo.
> 
> 
> Noted,
> 
> > 
> > There were also a couple of review issues in my reply here:
> > 
> > https://patchwork.freedesktop.org/patch/699451/?series=156651&rev=5
> > 
> > that were never addressed or at least commented upon.
> > 
> > The comment there on retaining purgeable state after the last vma is
> > unmapped could be discussed, though.
> > 
> > Let's say we unmap a vma marking a bo purgeable. It then becomes either
> > purged or non-purgeable.
> > 
> > Then an app tries to access it either using a new vma or CPU map. Then
> > it will typically succeed, or might occasionally fail if the bo
> > happened to be purged in between.
> > 
> > How do we handle new vma map requests and cpu-faults to a bo in
> > purgeable state? Do we block those?
> 
> 
> @Thomas,
> 
> The implementation already blocks new access to purged BOs:
>  1. New VMA mappings (Patch 0005): vma_lock_and_validate() rejects MAP
> operations to purged BOs with -EINVAL via the check_purged flag.
>  2. CPU faults (Patch 0004): Both xe_bo_cpu_prep() and xe_gem_mmap_offset()
> return errors (-EFAULT / VM_FAULT_SIGBUS) when accessing purged BOs.
>  3. "Once purged, always purged": Even when the last VMA is unmapped,
> xe_bo_recompute_purgeable_state() preserves the PURGED state - it never
> transitions back to WILLNEED or DONTNEED (see early return at the top of the
> function).
> 
> The only way forward for the application is to destroy the purged BO and
> create a new one.
> 
> Regarding the 'no VMAs → WILLNEED' logic: this only applies to non-purged
> BOs that happen to be temporarily unmapped. Purged BOs remain permanently
> invalid.

So I think xe_bo_all_vmas_dontneed() isn't 100% correct...

I think it should return an enum...

enum xe_bo_vmas_purge_state {	/* Maybe a better name? */
	XE_BO_VMAS_STATE_DONTNEED = 0,
	XE_BO_VMAS_STATE_WILLNEED = 1,
	XE_BO_VMAS_STATE_NO_VMAS = 2,
};


Then in xe_bo_recompute_purgeable_state() something like this:

void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
{
	enum xe_bo_vmas_purge_state state;

	if (!bo)
		return;

	xe_bo_assert_held(bo);

	/*
	 * Once purged, always purged. Cannot transition back to WILLNEED.
	 * This matches i915 semantics where purged BOs are permanently invalid.
	 */
	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
		return;

	state = xe_bo_all_vmas_dontneed(bo);
	if (state == XE_BO_VMAS_STATE_DONTNEED) {
		/* All VMAs are DONTNEED - mark BO purgeable */
		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
	} else if (state == XE_BO_VMAS_STATE_WILLNEED) {
		/* At least one VMA is WILLNEED - BO must not be purgeable */
		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
	}
}

I think this would avoid the last unbind unintentionally flipping from
DONTNEED -> WILLNEED.

What do both of you (Thomas, Arvind) think?

Matt

> 
> Thanks,
> Arvind
> > 
> > Thanks,
> > Thomas
> > 
> > 
> > 
> > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> > > ---
> > >   drivers/gpu/drm/xe/xe_svm.c        |  1 +
> > >   drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
> > >   drivers/gpu/drm/xe/xe_vm_madvise.c | 98
> > > ++++++++++++++++++++++++++++--
> > >   drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
> > >   drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
> > >   5 files changed, 116 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_svm.c
> > > b/drivers/gpu/drm/xe/xe_svm.c
> > > index cda3bf7e2418..329c77aa5c20 100644
> > > --- a/drivers/gpu/drm/xe/xe_svm.c
> > > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > > @@ -318,6 +318,7 @@ static void xe_vma_set_default_attributes(struct
> > > xe_vma *vma)
> > >   		.preferred_loc.migration_policy =
> > > DRM_XE_MIGRATE_ALL_PAGES,
> > >   		.pat_index = vma->attr.default_pat_index,
> > >   		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
> > > +		.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
> > >   	};
> > >   	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 71cf3ce6c62b..e84b9e7cb5eb 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -39,6 +39,7 @@
> > >   #include "xe_tile.h"
> > >   #include "xe_tlb_inval.h"
> > >   #include "xe_trace_bo.h"
> > > +#include "xe_vm_madvise.h"
> > >   #include "xe_wa.h"
> > >   static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> > > @@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct
> > > xe_vm *vm,
> > >   static void xe_vma_destroy_late(struct xe_vma *vma)
> > >   {
> > >   	struct xe_vm *vm = xe_vma_vm(vma);
> > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > >   	if (vma->ufence) {
> > >   		xe_sync_ufence_put(vma->ufence);
> > > @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma
> > > *vma)
> > >   	} else if (xe_vma_is_null(vma) ||
> > > xe_vma_is_cpu_addr_mirror(vma)) {
> > >   		xe_vm_put(vm);
> > >   	} else {
> > > -		xe_bo_put(xe_vma_bo(vma));
> > > +		xe_bo_put(bo);
> > >   	}
> > >   	xe_vma_free(vma);
> > > @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence
> > > *fence,
> > >   static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence
> > > *fence)
> > >   {
> > >   	struct xe_vm *vm = xe_vma_vm(vma);
> > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > >   	lockdep_assert_held_write(&vm->lock);
> > >   	xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
> > > @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma,
> > > struct dma_fence *fence)
> > >   		xe_assert(vm->xe, vma->gpuva.flags &
> > > XE_VMA_DESTROYED);
> > >   		xe_userptr_destroy(to_userptr_vma(vma));
> > >   	} else if (!xe_vma_is_null(vma) &&
> > > !xe_vma_is_cpu_addr_mirror(vma)) {
> > > -		xe_bo_assert_held(xe_vma_bo(vma));
> > > +		xe_bo_assert_held(bo);
> > >   		drm_gpuva_unlink(&vma->gpuva);
> > > +		xe_bo_recompute_purgeable_state(bo);
> > >   	}
> > >   	xe_vm_assert_held(vm);
> > > @@ -2681,6 +2685,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm
> > > *vm, struct drm_gpuva_ops *ops,
> > >   				.atomic_access =
> > > DRM_XE_ATOMIC_UNDEFINED,
> > >   				.default_pat_index = op-
> > > > map.pat_index,
> > >   				.pat_index = op->map.pat_index,
> > > +				.purgeable_state =
> > > XE_MADV_PURGEABLE_WILLNEED,
> > >   			};
> > >   			flags |= op->map.vma_flags &
> > > XE_VMA_CREATE_MASK;
> > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > index d9cfba7bfe0b..c184426546a2 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > @@ -12,6 +12,7 @@
> > >   #include "xe_pat.h"
> > >   #include "xe_pt.h"
> > >   #include "xe_svm.h"
> > > +#include "xe_vm.h"
> > >   struct xe_vmas_in_madvise_range {
> > >   	u64 addr;
> > > @@ -183,6 +184,89 @@ static void madvise_pat_index(struct xe_device
> > > *xe, struct xe_vm *vm,
> > >   	}
> > >   }
> > > +/**
> > > + * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are marked
> > > DONTNEED
> > > + * @bo: Buffer object
> > > + *
> > > + * Check all VMAs across all VMs to determine if BO can be purged.
> > > + * Shared BOs require unanimous DONTNEED state from all mappings.
> > > + *
> > > + * Caller must hold BO dma-resv lock.
> > > + *
> > > + * Return: true if all VMAs are DONTNEED, false otherwise
> > > + */
> > > +static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
> > > +{
> > > +	struct drm_gpuvm_bo *vm_bo;
> > > +	struct drm_gpuva *gpuva;
> > > +	struct drm_gem_object *obj = &bo->ttm.base;
> > > +	bool has_vmas = false;
> > > +
> > > +	xe_bo_assert_held(bo);
> > > +
> > > +	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> > > +		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
> > > +			struct xe_vma *vma = gpuva_to_vma(gpuva);
> > > +
> > > +			has_vmas = true;
> > > +
> > > +			/* Any non-DONTNEED VMA prevents purging */
> > > +			if (vma->attr.purgeable_state !=
> > > XE_MADV_PURGEABLE_DONTNEED)
> > > +				return false;
> > > +		}
> > > +	}
> > > +
> > > +	/*
> > > +	 * No VMAs => no mapping-level DONTNEED hint.
> > > +	 * Default to WILLNEED to avoid making BOs purgeable without
> > > +	 * explicit user intent.
> > > +	 */
> > > +	if (!has_vmas)
> > > +		return false;
> > > +
> > > +	return true;
> > > +}
> > > +
> > > +/**
> > > + * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state
> > > from VMAs
> > > + * @bo: Buffer object
> > > + *
> > > + * Walk all VMAs to determine if BO should be purgeable or not.
> > > + * Shared BOs require unanimous DONTNEED state from all mappings.
> > > + *
> > > + * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM
> > > lists,
> > > + * VM lock must also be held (write) to prevent concurrent VMA
> > > modifications.
> > > + * This is satisfied at both call sites:
> > > + * - xe_vma_destroy(): holds vm->lock write
> > > + * - madvise_purgeable(): holds vm->lock write (from madvise ioctl
> > > path)
> > > + *
> > > + * Return: nothing
> > > + */
> > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> > > +{
> > > +	if (!bo)
> > > +		return;
> > > +
> > > +	xe_bo_assert_held(bo);
> > > +
> > > +	/*
> > > +	 * Once purged, always purged. Cannot transition back to
> > > WILLNEED.
> > > +	 * This matches i915 semantics where purged BOs are
> > > permanently invalid.
> > > +	 */
> > > +	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> > > +		return;
> > > +
> > > +	if (xe_bo_all_vmas_dontneed(bo)) {
> > > +		/* All VMAs are DONTNEED - mark BO purgeable */
> > > +		if (bo->madv_purgeable !=
> > > XE_MADV_PURGEABLE_DONTNEED)
> > > +			xe_bo_set_purgeable_state(bo,
> > > XE_MADV_PURGEABLE_DONTNEED);
> > > +	} else {
> > > +		/* At least one VMA is WILLNEED - BO must not be
> > > purgeable */
> > > +		if (bo->madv_purgeable !=
> > > XE_MADV_PURGEABLE_WILLNEED)
> > > +			xe_bo_set_purgeable_state(bo,
> > > XE_MADV_PURGEABLE_WILLNEED);
> > > +	}
> > > +}
> > > +
> > >   /**
> > >    * madvise_purgeable - Handle purgeable buffer object advice
> > >    * @xe: XE device
> > > @@ -231,14 +315,20 @@ static void __maybe_unused
> > > madvise_purgeable(struct xe_device *xe,
> > >   		switch (op->purge_state_val.val) {
> > >   		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> > > -			xe_bo_set_purgeable_state(bo,
> > > XE_MADV_PURGEABLE_WILLNEED);
> > > +			vmas[i]->attr.purgeable_state =
> > > XE_MADV_PURGEABLE_WILLNEED;
> > > +
> > > +			/* Update BO purgeable state */
> > > +			xe_bo_recompute_purgeable_state(bo);
> > >   			break;
> > >   		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> > > -			xe_bo_set_purgeable_state(bo,
> > > XE_MADV_PURGEABLE_DONTNEED);
> > > +			vmas[i]->attr.purgeable_state =
> > > XE_MADV_PURGEABLE_DONTNEED;
> > > +
> > > +			/* Update BO purgeable state */
> > > +			xe_bo_recompute_purgeable_state(bo);
> > >   			break;
> > >   		default:
> > > -			drm_warn(&vm->xe->drm, "Invalid madvice
> > > value = %d\n",
> > > -				 op->purge_state_val.val);
> > > +			/* Should never hit - values validated in
> > > madvise_args_are_sane() */
> > > +			xe_assert(vm->xe, 0);
> > >   			return;
> > >   		}
> > >   	}
> > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > index b0e1fc445f23..39acd2689ca0 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > @@ -8,8 +8,11 @@
> > >   struct drm_device;
> > >   struct drm_file;
> > > +struct xe_bo;
> > >   int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
> > >   			struct drm_file *file);
> > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
> > > +
> > >   #endif
> > > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
> > > b/drivers/gpu/drm/xe/xe_vm_types.h
> > > index 43203e90ee3e..fd563039e8f4 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > > @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
> > >   	 * same as default_pat_index unless overwritten by madvise.
> > >   	 */
> > >   	u16 pat_index;
> > > +
> > > +	/**
> > > +	 * @purgeable_state: Purgeable hint for this VMA mapping
> > > +	 *
> > > +	 * Per-VMA purgeable state from madvise. Valid states are
> > > WILLNEED (0)
> > > +	 * or DONTNEED (1). Shared BOs require all VMAs to be
> > > DONTNEED before
> > > +	 * the BO can be purged. PURGED state exists only at BO
> > > level.
> > > +	 *
> > > +	 * Protected by BO dma-resv lock. Set via
> > > DRM_IOCTL_XE_MADVISE.
> > > +	 */
> > > +	u32 purgeable_state;
> > >   };
> > >   struct xe_vma {


* Re: [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-24 16:36       ` Matthew Brost
@ 2026-02-25  5:35         ` Yadav, Arvind
  2026-02-25  8:21           ` Thomas Hellström
  0 siblings, 1 reply; 36+ messages in thread
From: Yadav, Arvind @ 2026-02-25  5:35 UTC (permalink / raw)
  To: Matthew Brost, Thomas Hellström
  Cc: intel-xe, himal.prasad.ghimiray, pallavi.mishra


On 24-02-2026 22:06, Matthew Brost wrote:
> On Tue, Feb 24, 2026 at 08:37:44PM +0530, Yadav, Arvind wrote:
>> On 24-02-2026 18:18, Thomas Hellström wrote:
>>> On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
>>>> Track purgeable state per-VMA instead of using a coarse shared
>>>> BO check. This prevents purging shared BOs until all VMAs across
>>>> all VMs are marked DONTNEED.
>>>>
>>>> Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
>>>> a BO purgeable. Add xe_bo_recheck_purgeable_on_vma_unbind() to
>>>> handle state transitions when VMAs are destroyed - if all
>>>> remaining VMAs are DONTNEED the BO can become purgeable, or if
>>>> no VMAs remain it transitions to WILLNEED.
>>>>
>>>> The per-VMA purgeable_state field stores the madvise hint for
>>>> each mapping. Shared BOs can only be purged when all VMAs
>>>> unanimously indicate DONTNEED.
>>>>
>>>> One thing to note: when the last VMA goes away, we default back to
>>>> WILLNEED. DONTNEED is a per-mapping hint, and without any mappings
>>>> there is no remaining madvise state to justify purging. This prevents
>>>> BOs from becoming purgeable solely due to being temporarily unmapped.
>>>>
>>>> v3:
>>>>     - This addresses Thomas Hellström's feedback: "loop over all vmas
>>>>       attached to the bo and check that they all say WONTNEED. This
>>>> will
>>>>       also need a check at VMA unbinding"
>>>>
>>>> v4:
>>>>     - @madv_purgeable atomic_t → u32 change across all relevant
>>>>       patches (Matt)
>>>>
>>>> v5:
>>>>     - Call xe_bo_recheck_purgeable_on_vma_unbind() from
>>>> xe_vma_destroy()
>>>>       right after drm_gpuva_unlink() where we already hold the BO lock,
>>>>       drop the trylock-based late destroy path (Matt)
>>>>     - Move purgeable_state into xe_vma_mem_attr with the other madvise
>>>>       attributes (Matt)
>>>>     - Drop READ_ONCE since the BO lock already protects us (Matt)
>>>>     - Keep returning false when there are no VMAs - otherwise we'd mark
>>>>       BOs purgeable without any user hint (Matt)
>>>>     - Use xe_bo_set_purgeable_state() instead of direct
>>>> initialization(Matt)
>>>>     - use xe_assert instead of drm_war (Thomas)
>>> Typo.
>>
>> Noted,
>>
>>> There were also a couple of review issues in my reply here:
>>>
>>> https://patchwork.freedesktop.org/patch/699451/?series=156651&rev=5
>>>
>>> that were never addressed or at least commented upon.
>>>
>>> The comment there on retaining purgeable state after the last vma is
>>> unmapped could be discussed, though.
>>>
>>> Let's say we unmap a vma marking a bo purgeable. It then becomes either
>>> purged or non-purgeable.
>>>
>>> Then an app tries to access it either using a new vma or CPU map. Then
>>> it will typically succeed, or might occasionally fail if the bo
>>> happened to be purged in between.
>>>
>>> How do we handle new vma map requests and cpu-faults to a bo in
>>> purgeable state? Do we block those?
>>
>> @Thomas,
>>
>> The implementation already blocks new access to purged BOs:
>>   1. New VMA mappings (Patch 0005): vma_lock_and_validate() rejects MAP
>> operations to purged BOs with -EINVAL via the check_purged flag.
>>   2. CPU faults (Patch 0004): Both xe_bo_cpu_prep() and xe_gem_mmap_offset()
>> return errors (-EFAULT / VM_FAULT_SIGBUS) when accessing purged BOs.
>>   3. "Once purged, always purged": Even when the last VMA is unmapped,
>> xe_bo_recompute_purgeable_state() preserves the PURGED state - it never
>> transitions back to WILLNEED or DONTNEED (see early return at the top of the
>> function).
>>
>> The only way forward for the application is to destroy the purged BO and
>> create a new one.
>>
>> Regarding the 'no VMAs → WILLNEED' logic: this only applies to non-purged
>> BOs that happen to be temporarily unmapped. Purged BOs remain permanently
>> invalid.
> So I think xe_bo_all_vmas_dontneed() isn't 100% correct...
>
> I think it should return an enum...
>
> enum xe_bo_vmas_purge_state {	/* Maybe a better name? */
> 	XE_BO_VMAS_STATE_DONTNEED = 0,
> 	XE_BO_VMAS_STATE_WILLNEED = 1,
> 	XE_BO_VMAS_STATE_NO_VMAS = 2,
> };
>
>
> Then in xe_bo_recompute_purgeable_state() something like this:
>
> void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> {
> 	enum xe_bo_vmas_purge_state state;
>
> 	if (!bo)
> 		return;
>
> 	xe_bo_assert_held(bo);
>
> 	/*
> 	 * Once purged, always purged. Cannot transition back to WILLNEED.
> 	 * This matches i915 semantics where purged BOs are permanently invalid.
> 	 */
> 	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> 		return;
>
> 	state = xe_bo_all_vmas_dontneed(bo);
> 	if (state == XE_BO_VMAS_STATE_DONTNEED) {
> 		/* All VMAs are DONTNEED - mark BO purgeable */
> 		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
> 			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> 	} else if (state == XE_BO_VMAS_STATE_WILLNEED) {
> 		/* At least one VMA is WILLNEED - BO must not be purgeable */
> 		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
> 			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> 	}
> }
>
> I think this would avoid the last unbind unintentionally flipping from
> DONTNEED -> WILLNEED.
>
> What do both of you (Thomas, Arvind) think?


@Matt,

Good catch—I missed that transition. You’re right: when the last VMA is 
unmapped from a DONTNEED BO, the current logic can flip it back to 
WILLNEED, which discards the user’s hint. That’s wrong.

   I like the enum approach to distinguish:
     - *_DONTNEED: all VMAs are DONTNEED
     - *_WILLNEED: at least one VMA is WILLNEED
     - *_NO_VMAS: no VMAs present

With that, xe_bo_recompute_purgeable_state() can avoid changing state on 
NO_VMAS and preserve "once purged, always purged," matching i915 
semantics. This also addresses Thomas's earlier question about new 
VMA/CPU access to purgeable BOs—the enum makes it clear we only 
transition on explicit VMA state, not on absence of VMAs.

I'll rework xe_bo_all_vmas_dontneed() to return the enum and update the 
recompute path accordingly.


@Thomas,

Does this direction look good to you? If so, I will send an updated patch.

Thanks,
Arvind


>
> Matt
>
>> Thanks,
>> Arvind
>>> Thanks,
>>> Thomas
>>>
>>>
>>>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>>>> ---
>>>>    drivers/gpu/drm/xe/xe_svm.c        |  1 +
>>>>    drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
>>>>    drivers/gpu/drm/xe/xe_vm_madvise.c | 98
>>>> ++++++++++++++++++++++++++++--
>>>>    drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
>>>>    drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
>>>>    5 files changed, 116 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/xe_svm.c
>>>> b/drivers/gpu/drm/xe/xe_svm.c
>>>> index cda3bf7e2418..329c77aa5c20 100644
>>>> --- a/drivers/gpu/drm/xe/xe_svm.c
>>>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>>>> @@ -318,6 +318,7 @@ static void xe_vma_set_default_attributes(struct
>>>> xe_vma *vma)
>>>>    		.preferred_loc.migration_policy =
>>>> DRM_XE_MIGRATE_ALL_PAGES,
>>>>    		.pat_index = vma->attr.default_pat_index,
>>>>    		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
>>>> +		.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
>>>>    	};
>>>>    	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>>> index 71cf3ce6c62b..e84b9e7cb5eb 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>>> @@ -39,6 +39,7 @@
>>>>    #include "xe_tile.h"
>>>>    #include "xe_tlb_inval.h"
>>>>    #include "xe_trace_bo.h"
>>>> +#include "xe_vm_madvise.h"
>>>>    #include "xe_wa.h"
>>>>    static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
>>>> @@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct
>>>> xe_vm *vm,
>>>>    static void xe_vma_destroy_late(struct xe_vma *vma)
>>>>    {
>>>>    	struct xe_vm *vm = xe_vma_vm(vma);
>>>> +	struct xe_bo *bo = xe_vma_bo(vma);
>>>>    	if (vma->ufence) {
>>>>    		xe_sync_ufence_put(vma->ufence);
>>>> @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma
>>>> *vma)
>>>>    	} else if (xe_vma_is_null(vma) ||
>>>> xe_vma_is_cpu_addr_mirror(vma)) {
>>>>    		xe_vm_put(vm);
>>>>    	} else {
>>>> -		xe_bo_put(xe_vma_bo(vma));
>>>> +		xe_bo_put(bo);
>>>>    	}
>>>>    	xe_vma_free(vma);
>>>> @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence
>>>> *fence,
>>>>    static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence
>>>> *fence)
>>>>    {
>>>>    	struct xe_vm *vm = xe_vma_vm(vma);
>>>> +	struct xe_bo *bo = xe_vma_bo(vma);
>>>>    	lockdep_assert_held_write(&vm->lock);
>>>>    	xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
>>>> @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma,
>>>> struct dma_fence *fence)
>>>>    		xe_assert(vm->xe, vma->gpuva.flags &
>>>> XE_VMA_DESTROYED);
>>>>    		xe_userptr_destroy(to_userptr_vma(vma));
>>>>    	} else if (!xe_vma_is_null(vma) &&
>>>> !xe_vma_is_cpu_addr_mirror(vma)) {
>>>> -		xe_bo_assert_held(xe_vma_bo(vma));
>>>> +		xe_bo_assert_held(bo);
>>>>    		drm_gpuva_unlink(&vma->gpuva);
>>>> +		xe_bo_recompute_purgeable_state(bo);
>>>>    	}
>>>>    	xe_vm_assert_held(vm);
>>>> @@ -2681,6 +2685,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm
>>>> *vm, struct drm_gpuva_ops *ops,
>>>>    				.atomic_access =
>>>> DRM_XE_ATOMIC_UNDEFINED,
>>>>    				.default_pat_index = op-
>>>>> map.pat_index,
>>>>    				.pat_index = op->map.pat_index,
>>>> +				.purgeable_state =
>>>> XE_MADV_PURGEABLE_WILLNEED,
>>>>    			};
>>>>    			flags |= op->map.vma_flags &
>>>> XE_VMA_CREATE_MASK;
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
>>>> b/drivers/gpu/drm/xe/xe_vm_madvise.c
>>>> index d9cfba7bfe0b..c184426546a2 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>>>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>>>> @@ -12,6 +12,7 @@
>>>>    #include "xe_pat.h"
>>>>    #include "xe_pt.h"
>>>>    #include "xe_svm.h"
>>>> +#include "xe_vm.h"
>>>>    struct xe_vmas_in_madvise_range {
>>>>    	u64 addr;
>>>> @@ -183,6 +184,89 @@ static void madvise_pat_index(struct xe_device
>>>> *xe, struct xe_vm *vm,
>>>>    	}
>>>>    }
>>>> +/**
>>>> + * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are marked
>>>> DONTNEED
>>>> + * @bo: Buffer object
>>>> + *
>>>> + * Check all VMAs across all VMs to determine if BO can be purged.
>>>> + * Shared BOs require unanimous DONTNEED state from all mappings.
>>>> + *
>>>> + * Caller must hold BO dma-resv lock.
>>>> + *
>>>> + * Return: true if all VMAs are DONTNEED, false otherwise
>>>> + */
>>>> +static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
>>>> +{
>>>> +	struct drm_gpuvm_bo *vm_bo;
>>>> +	struct drm_gpuva *gpuva;
>>>> +	struct drm_gem_object *obj = &bo->ttm.base;
>>>> +	bool has_vmas = false;
>>>> +
>>>> +	xe_bo_assert_held(bo);
>>>> +
>>>> +	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
>>>> +		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
>>>> +			struct xe_vma *vma = gpuva_to_vma(gpuva);
>>>> +
>>>> +			has_vmas = true;
>>>> +
>>>> +			/* Any non-DONTNEED VMA prevents purging */
>>>> +			if (vma->attr.purgeable_state !=
>>>> XE_MADV_PURGEABLE_DONTNEED)
>>>> +				return false;
>>>> +		}
>>>> +	}
>>>> +
>>>> +	/*
>>>> +	 * No VMAs => no mapping-level DONTNEED hint.
>>>> +	 * Default to WILLNEED to avoid making BOs purgeable without
>>>> +	 * explicit user intent.
>>>> +	 */
>>>> +	if (!has_vmas)
>>>> +		return false;
>>>> +
>>>> +	return true;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state
>>>> from VMAs
>>>> + * @bo: Buffer object
>>>> + *
>>>> + * Walk all VMAs to determine if BO should be purgeable or not.
>>>> + * Shared BOs require unanimous DONTNEED state from all mappings.
>>>> + *
>>>> + * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM
>>>> lists,
>>>> + * VM lock must also be held (write) to prevent concurrent VMA
>>>> modifications.
>>>> + * This is satisfied at both call sites:
>>>> + * - xe_vma_destroy(): holds vm->lock write
>>>> + * - madvise_purgeable(): holds vm->lock write (from madvise ioctl
>>>> path)
>>>> + *
>>>> + * Return: nothing
>>>> + */
>>>> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
>>>> +{
>>>> +	if (!bo)
>>>> +		return;
>>>> +
>>>> +	xe_bo_assert_held(bo);
>>>> +
>>>> +	/*
>>>> +	 * Once purged, always purged. Cannot transition back to
>>>> WILLNEED.
>>>> +	 * This matches i915 semantics where purged BOs are
>>>> permanently invalid.
>>>> +	 */
>>>> +	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
>>>> +		return;
>>>> +
>>>> +	if (xe_bo_all_vmas_dontneed(bo)) {
>>>> +		/* All VMAs are DONTNEED - mark BO purgeable */
>>>> +		if (bo->madv_purgeable !=
>>>> XE_MADV_PURGEABLE_DONTNEED)
>>>> +			xe_bo_set_purgeable_state(bo,
>>>> XE_MADV_PURGEABLE_DONTNEED);
>>>> +	} else {
>>>> +		/* At least one VMA is WILLNEED - BO must not be
>>>> purgeable */
>>>> +		if (bo->madv_purgeable !=
>>>> XE_MADV_PURGEABLE_WILLNEED)
>>>> +			xe_bo_set_purgeable_state(bo,
>>>> XE_MADV_PURGEABLE_WILLNEED);
>>>> +	}
>>>> +}
>>>> +
>>>>    /**
>>>>     * madvise_purgeable - Handle purgeable buffer object advice
>>>>     * @xe: XE device
>>>> @@ -231,14 +315,20 @@ static void __maybe_unused
>>>> madvise_purgeable(struct xe_device *xe,
>>>>    		switch (op->purge_state_val.val) {
>>>>    		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
>>>> -			xe_bo_set_purgeable_state(bo,
>>>> XE_MADV_PURGEABLE_WILLNEED);
>>>> +			vmas[i]->attr.purgeable_state =
>>>> XE_MADV_PURGEABLE_WILLNEED;
>>>> +
>>>> +			/* Update BO purgeable state */
>>>> +			xe_bo_recompute_purgeable_state(bo);
>>>>    			break;
>>>>    		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
>>>> -			xe_bo_set_purgeable_state(bo,
>>>> XE_MADV_PURGEABLE_DONTNEED);
>>>> +			vmas[i]->attr.purgeable_state =
>>>> XE_MADV_PURGEABLE_DONTNEED;
>>>> +
>>>> +			/* Update BO purgeable state */
>>>> +			xe_bo_recompute_purgeable_state(bo);
>>>>    			break;
>>>>    		default:
>>>> -			drm_warn(&vm->xe->drm, "Invalid madvice
>>>> value = %d\n",
>>>> -				 op->purge_state_val.val);
>>>> +			/* Should never hit - values validated in
>>>> madvise_args_are_sane() */
>>>> +			xe_assert(vm->xe, 0);
>>>>    			return;
>>>>    		}
>>>>    	}
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h
>>>> b/drivers/gpu/drm/xe/xe_vm_madvise.h
>>>> index b0e1fc445f23..39acd2689ca0 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
>>>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
>>>> @@ -8,8 +8,11 @@
>>>>    struct drm_device;
>>>>    struct drm_file;
>>>> +struct xe_bo;
>>>>    int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
>>>>    			struct drm_file *file);
>>>> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
>>>> +
>>>>    #endif
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
>>>> b/drivers/gpu/drm/xe/xe_vm_types.h
>>>> index 43203e90ee3e..fd563039e8f4 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>>>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>>>> @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
>>>>    	 * same as default_pat_index unless overwritten by madvise.
>>>>    	 */
>>>>    	u16 pat_index;
>>>> +
>>>> +	/**
>>>> +	 * @purgeable_state: Purgeable hint for this VMA mapping
>>>> +	 *
>>>> +	 * Per-VMA purgeable state from madvise. Valid states are
>>>> WILLNEED (0)
>>>> +	 * or DONTNEED (1). Shared BOs require all VMAs to be
>>>> DONTNEED before
>>>> +	 * the BO can be purged. PURGED state exists only at BO
>>>> level.
>>>> +	 *
>>>> +	 * Protected by BO dma-resv lock. Set via
>>>> DRM_IOCTL_XE_MADVISE.
>>>> +	 */
>>>> +	u32 purgeable_state;
>>>>    };
>>>>    struct xe_vma {

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-25  5:35         ` Yadav, Arvind
@ 2026-02-25  8:21           ` Thomas Hellström
  2026-02-25  9:04             ` Matthew Brost
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Hellström @ 2026-02-25  8:21 UTC (permalink / raw)
  To: Yadav, Arvind, Matthew Brost
  Cc: intel-xe, himal.prasad.ghimiray, pallavi.mishra

On Wed, 2026-02-25 at 11:05 +0530, Yadav, Arvind wrote:
> 
> On 24-02-2026 22:06, Matthew Brost wrote:
> > On Tue, Feb 24, 2026 at 08:37:44PM +0530, Yadav, Arvind wrote:
> > > On 24-02-2026 18:18, Thomas Hellström wrote:
> > > > On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> > > > > Track purgeable state per-VMA instead of using a coarse
> > > > > shared
> > > > > BO check. This prevents purging shared BOs until all VMAs
> > > > > across
> > > > > all VMs are marked DONTNEED.
> > > > > 
> > > > > Add xe_bo_all_vmas_dontneed() to check all VMAs before
> > > > > marking
> > > > > a BO purgeable. Add xe_bo_recheck_purgeable_on_vma_unbind()
> > > > > to
> > > > > handle state transitions when VMAs are destroyed - if all
> > > > > remaining VMAs are DONTNEED the BO can become purgeable, or
> > > > > if
> > > > > no VMAs remain it transitions to WILLNEED.
> > > > > 
> > > > > The per-VMA purgeable_state field stores the madvise hint for
> > > > > each mapping. Shared BOs can only be purged when all VMAs
> > > > > unanimously indicate DONTNEED.
> > > > > 
> > > > > One thing to note: when the last VMA goes away, we default
> > > > > back to
> > > > > WILLNEED. DONTNEED is a per-mapping hint, and without any
> > > > > mappings
> > > > > there is no remaining madvise state to justify purging. This
> > > > > prevents
> > > > > BOs from becoming purgeable solely due to being temporarily
> > > > > unmapped.
> > > > > 
> > > > > v3:
> > > > >     - This addresses Thomas Hellström's feedback: "loop over
> > > > > all vmas
> > > > >       attached to the bo and check that they all say
> > > > > WONTNEED. This
> > > > > will
> > > > >       also need a check at VMA unbinding"
> > > > > 
> > > > > v4:
> > > > >     - @madv_purgeable atomic_t → u32 change across all
> > > > > relevant
> > > > >       patches (Matt)
> > > > > 
> > > > > v5:
> > > > >     - Call xe_bo_recheck_purgeable_on_vma_unbind() from
> > > > > xe_vma_destroy()
> > > > >       right after drm_gpuva_unlink() where we already hold
> > > > > the BO lock,
> > > > >       drop the trylock-based late destroy path (Matt)
> > > > >     - Move purgeable_state into xe_vma_mem_attr with the
> > > > > other madvise
> > > > >       attributes (Matt)
> > > > >     - Drop READ_ONCE since the BO lock already protects us
> > > > > (Matt)
> > > > >     - Keep returning false when there are no VMAs - otherwise
> > > > > we'd mark
> > > > >       BOs purgeable without any user hint (Matt)
> > > > >     - Use xe_bo_set_purgeable_state() instead of direct
> > > > > initialization(Matt)
> > > > >     - use xe_assert instead of drm_war (Thomas)
> > > > Typo.
> > > 
> > > Noted,
> > > 
> > > > There were also a couple of review issues in my reply here:
> > > > 
> > > > https://patchwork.freedesktop.org/patch/699451/?series=156651&rev=5
> > > > 
> > > > that were never addressed or at least commented upon.
> > > > 
> > > > The comment there on retaining purgeable state after the last
> > > > vma is
> > > > unmapped could be discussed, though.
> > > > 
> > > > Let's say we unmap a vma marking a bo purgeable. It then
> > > > becomes either
> > > > purged or non-purgeable.
> > > > 
> > > > Then an app tries to access it either using a new vma or CPU
> > > > map. Then
> > > > it will typically succeed, or might occasionally fail if the bo
> > > > happened to be purged in between.
> > > > 
> > > > How do we handle new vma map requests and cpu-faults to a bo in
> > > > purgeable state? Do we block those?
> > > 
> > > @Thomas,
> > > 
> > > The implementation already blocks new access to purged BOs:
> > >   1. New VMA mappings (Patch 0005): vma_lock_and_validate()
> > > rejects MAP
> > > operations to purged BOs with -EINVAL via the check_purged flag.
> > >   2. CPU faults (Patch 0004): Both xe_bo_cpu_prep() and
> > > xe_gem_mmap_offset()
> > > return errors (-EFAULT / VM_FAULT_SIGBUS) when accessing purged
> > > BOs.
> > >   3 . "Once purged, always purged": Even when the last VMA is
> > > unmapped,
> > > xe_bo_recompute_purgeable_state() preserves the PURGED state - it
> > > never
> > > transitions back to WILLNEED or DONTNEED (see early return at the
> > > top of the
> > > function).
> > > 
> > > The only way forward for the application is to destroy the purged
> > > BO and
> > > create a new one.
> > > 
> > > Regarding the 'no VMAs → WILLNEED' logic: this only applies to
> > > non-purged
> > > BOs that happen to be temporarily unmapped. Purged BOs remain
> > > permanently
> > > invalid.
> > So I think xe_bo_all_vmas_dontneed() isn't 100% correct...
> > 
> > I think should return an enum...
> > 
> > enum xe_bo_vmas_purge_state {	/* Maybe a better name? */
> > 	XE_BO_VMAS_STATE_DONTNEED = 0,
> > 	XE_BO_VMAS_STATE_WILLNEED = 1,
> > 	XE_BO_VMAS_STATE_NO_VMAS = 2,
> > };
> > 
> > 
> > Then in xe_bo_recompute_purgeable_state() something like this:
> > 
> > void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> > {
> > 	enum xe_bo_vma_purge_state state;
> > 
> > 	if (!bo)
> > 		return;
> > 
> > 	xe_bo_assert_held(bo);
> > 
> > 	/*
> > 	 * Once purged, always purged. Cannot transition back to
> > WILLNEED.
> > 	 * This matches i915 semantics where purged BOs are
> > permanently invalid.
> > 	 */
> > 	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> > 		return;
> > 
> > 	state = xe_bo_all_vmas_dontneed(bo);
> > 	if (state == XE_BO_VMAS_STATE_DONTNEED) {
> > 		/* All VMAs are DONTNEED - mark BO purgeable */
> > 		if (bo->madv_purgeable !=
> > XE_MADV_PURGEABLE_DONTNEED)
> > 			xe_bo_set_purgeable_state(bo,
> > XE_MADV_PURGEABLE_DONTNEED);
> > 	} else if (state == XE_BO_VMAS_STATE_WILLNEED) {
> > 		/* At least one VMA is WILLNEED - BO must not be
> > purgeable */
> > 		if (bo->madv_purgeable !=
> > XE_MADV_PURGEABLE_WILLNEED)
> > 			xe_bo_set_purgeable_state(bo,
> > XE_MADV_PURGEABLE_WILLNEED);
> > 	}
> > }
> > 
> > I think would avoid the last unbind unintentionally flipping from
> > DONTNEED -> WILLNEED.
> > 
> > What do you both of you (Thomas, Arvind) think?
> 
> 
> @Matt,
> 
> Good catch—I missed that transition. You’re right: when the last VMA
> is 
> unmapped from a DONTNEED BO, the current logic can flip it back to 
> WILLNEED, which discards the user’s hint. That’s wrong.
> 
>    I like the enum approach to distinguish:
>      -  *_DONTNEED: all VMAs are DONTNEED
>      - *_WILLNEED: at least one VMA is WILLNEED
>      - *_NO_VMAS: no VMAs present
> 
> With that, xe_bo_recompute_purgeable_state() can avoid changing state
> on 
> NO_VMAS and preserve "once purged, always purged," matching i915 
> semantics. This also addresses Thomas's earlier question about new 
> VMA/CPU access to purgeable BOs—the enum makes it clear we only 
> transition on explicit VMA state, not on absence of VMAs.
> 
> I'll rework xe_bo_all_vmas_dontneed() to return the enum and update
> the 
> recompute path accordingly.
> 
> 
> @Thomas,
> 
> Does this direction look good to you? If yes, I will send updated
> patch.

Yes, but as mentioned I'm also concerned about whether we can add new
VMAs, CPU faults and exports in the WONTNEED state. If we can, those
accesses would succeed most of the time, giving a well-behaved
appearance in user-space, but if the BO happens to get purged in
between, the app would fail seemingly out of nowhere.

So, do we block new VMAs, CPU faults and exports in the WONTNEED state?

/Thomas


> 
> Thanks,
> Arvind
> 
> 
> > 
> > Matt
> > 
> > > Thanks,
> > > Arvind
> > > > Thanks,
> > > > Thomas
> > > > 
> > > > 
> > > > 
> > > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > > > Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > > > Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> > > > > ---
> > > > >    drivers/gpu/drm/xe/xe_svm.c        |  1 +
> > > > >    drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
> > > > >    drivers/gpu/drm/xe/xe_vm_madvise.c | 98
> > > > > ++++++++++++++++++++++++++++--
> > > > >    drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
> > > > >    drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
> > > > >    5 files changed, 116 insertions(+), 6 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/gpu/drm/xe/xe_svm.c
> > > > > b/drivers/gpu/drm/xe/xe_svm.c
> > > > > index cda3bf7e2418..329c77aa5c20 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_svm.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > > > > @@ -318,6 +318,7 @@ static void
> > > > > xe_vma_set_default_attributes(struct
> > > > > xe_vma *vma)
> > > > >    		.preferred_loc.migration_policy =
> > > > > DRM_XE_MIGRATE_ALL_PAGES,
> > > > >    		.pat_index = vma->attr.default_pat_index,
> > > > >    		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
> > > > > +		.purgeable_state =
> > > > > XE_MADV_PURGEABLE_WILLNEED,
> > > > >    	};
> > > > >    	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
> > > > > diff --git a/drivers/gpu/drm/xe/xe_vm.c
> > > > > b/drivers/gpu/drm/xe/xe_vm.c
> > > > > index 71cf3ce6c62b..e84b9e7cb5eb 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > > > @@ -39,6 +39,7 @@
> > > > >    #include "xe_tile.h"
> > > > >    #include "xe_tlb_inval.h"
> > > > >    #include "xe_trace_bo.h"
> > > > > +#include "xe_vm_madvise.h"
> > > > >    #include "xe_wa.h"
> > > > >    static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> > > > > @@ -1085,6 +1086,7 @@ static struct xe_vma
> > > > > *xe_vma_create(struct
> > > > > xe_vm *vm,
> > > > >    static void xe_vma_destroy_late(struct xe_vma *vma)
> > > > >    {
> > > > >    	struct xe_vm *vm = xe_vma_vm(vma);
> > > > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > > > >    	if (vma->ufence) {
> > > > >    		xe_sync_ufence_put(vma->ufence);
> > > > > @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct
> > > > > xe_vma
> > > > > *vma)
> > > > >    	} else if (xe_vma_is_null(vma) ||
> > > > > xe_vma_is_cpu_addr_mirror(vma)) {
> > > > >    		xe_vm_put(vm);
> > > > >    	} else {
> > > > > -		xe_bo_put(xe_vma_bo(vma));
> > > > > +		xe_bo_put(bo);
> > > > >    	}
> > > > >    	xe_vma_free(vma);
> > > > > @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct
> > > > > dma_fence
> > > > > *fence,
> > > > >    static void xe_vma_destroy(struct xe_vma *vma, struct
> > > > > dma_fence
> > > > > *fence)
> > > > >    {
> > > > >    	struct xe_vm *vm = xe_vma_vm(vma);
> > > > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > > > >    	lockdep_assert_held_write(&vm->lock);
> > > > >    	xe_assert(vm->xe, list_empty(&vma-
> > > > > >combined_links.destroy));
> > > > > @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct
> > > > > xe_vma *vma,
> > > > > struct dma_fence *fence)
> > > > >    		xe_assert(vm->xe, vma->gpuva.flags &
> > > > > XE_VMA_DESTROYED);
> > > > >    		xe_userptr_destroy(to_userptr_vma(vma));
> > > > >    	} else if (!xe_vma_is_null(vma) &&
> > > > > !xe_vma_is_cpu_addr_mirror(vma)) {
> > > > > -		xe_bo_assert_held(xe_vma_bo(vma));
> > > > > +		xe_bo_assert_held(bo);
> > > > >    		drm_gpuva_unlink(&vma->gpuva);
> > > > > +		xe_bo_recompute_purgeable_state(bo);
> > > > >    	}
> > > > >    	xe_vm_assert_held(vm);
> > > > > @@ -2681,6 +2685,7 @@ static int
> > > > > vm_bind_ioctl_ops_parse(struct xe_vm
> > > > > *vm, struct drm_gpuva_ops *ops,
> > > > >    				.atomic_access =
> > > > > DRM_XE_ATOMIC_UNDEFINED,
> > > > >    				.default_pat_index = op-
> > > > > > map.pat_index,
> > > > >    				.pat_index = op-
> > > > > >map.pat_index,
> > > > > +				.purgeable_state =
> > > > > XE_MADV_PURGEABLE_WILLNEED,
> > > > >    			};
> > > > >    			flags |= op->map.vma_flags &
> > > > > XE_VMA_CREATE_MASK;
> > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > index d9cfba7bfe0b..c184426546a2 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > @@ -12,6 +12,7 @@
> > > > >    #include "xe_pat.h"
> > > > >    #include "xe_pt.h"
> > > > >    #include "xe_svm.h"
> > > > > +#include "xe_vm.h"
> > > > >    struct xe_vmas_in_madvise_range {
> > > > >    	u64 addr;
> > > > > @@ -183,6 +184,89 @@ static void madvise_pat_index(struct
> > > > > xe_device
> > > > > *xe, struct xe_vm *vm,
> > > > >    	}
> > > > >    }
> > > > > +/**
> > > > > + * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are
> > > > > marked
> > > > > DONTNEED
> > > > > + * @bo: Buffer object
> > > > > + *
> > > > > + * Check all VMAs across all VMs to determine if BO can be
> > > > > purged.
> > > > > + * Shared BOs require unanimous DONTNEED state from all
> > > > > mappings.
> > > > > + *
> > > > > + * Caller must hold BO dma-resv lock.
> > > > > + *
> > > > > + * Return: true if all VMAs are DONTNEED, false otherwise
> > > > > + */
> > > > > +static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
> > > > > +{
> > > > > +	struct drm_gpuvm_bo *vm_bo;
> > > > > +	struct drm_gpuva *gpuva;
> > > > > +	struct drm_gem_object *obj = &bo->ttm.base;
> > > > > +	bool has_vmas = false;
> > > > > +
> > > > > +	xe_bo_assert_held(bo);
> > > > > +
> > > > > +	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> > > > > +		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
> > > > > +			struct xe_vma *vma =
> > > > > gpuva_to_vma(gpuva);
> > > > > +
> > > > > +			has_vmas = true;
> > > > > +
> > > > > +			/* Any non-DONTNEED VMA prevents
> > > > > purging */
> > > > > +			if (vma->attr.purgeable_state !=
> > > > > XE_MADV_PURGEABLE_DONTNEED)
> > > > > +				return false;
> > > > > +		}
> > > > > +	}
> > > > > +
> > > > > +	/*
> > > > > +	 * No VMAs => no mapping-level DONTNEED hint.
> > > > > +	 * Default to WILLNEED to avoid making BOs purgeable
> > > > > without
> > > > > +	 * explicit user intent.
> > > > > +	 */
> > > > > +	if (!has_vmas)
> > > > > +		return false;
> > > > > +
> > > > > +	return true;
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * xe_bo_recompute_purgeable_state() - Recompute BO
> > > > > purgeable state
> > > > > from VMAs
> > > > > + * @bo: Buffer object
> > > > > + *
> > > > > + * Walk all VMAs to determine if BO should be purgeable or
> > > > > not.
> > > > > + * Shared BOs require unanimous DONTNEED state from all
> > > > > mappings.
> > > > > + *
> > > > > + * Locking: Caller must hold BO dma-resv lock. When
> > > > > iterating GPUVM
> > > > > lists,
> > > > > + * VM lock must also be held (write) to prevent concurrent
> > > > > VMA
> > > > > modifications.
> > > > > + * This is satisfied at both call sites:
> > > > > + * - xe_vma_destroy(): holds vm->lock write
> > > > > + * - madvise_purgeable(): holds vm->lock write (from madvise
> > > > > ioctl
> > > > > path)
> > > > > + *
> > > > > + * Return: nothing
> > > > > + */
> > > > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> > > > > +{
> > > > > +	if (!bo)
> > > > > +		return;
> > > > > +
> > > > > +	xe_bo_assert_held(bo);
> > > > > +
> > > > > +	/*
> > > > > +	 * Once purged, always purged. Cannot transition
> > > > > back to
> > > > > WILLNEED.
> > > > > +	 * This matches i915 semantics where purged BOs are
> > > > > permanently invalid.
> > > > > +	 */
> > > > > +	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> > > > > +		return;
> > > > > +
> > > > > +	if (xe_bo_all_vmas_dontneed(bo)) {
> > > > > +		/* All VMAs are DONTNEED - mark BO purgeable
> > > > > */
> > > > > +		if (bo->madv_purgeable !=
> > > > > XE_MADV_PURGEABLE_DONTNEED)
> > > > > +			xe_bo_set_purgeable_state(bo,
> > > > > XE_MADV_PURGEABLE_DONTNEED);
> > > > > +	} else {
> > > > > +		/* At least one VMA is WILLNEED - BO must
> > > > > not be
> > > > > purgeable */
> > > > > +		if (bo->madv_purgeable !=
> > > > > XE_MADV_PURGEABLE_WILLNEED)
> > > > > +			xe_bo_set_purgeable_state(bo,
> > > > > XE_MADV_PURGEABLE_WILLNEED);
> > > > > +	}
> > > > > +}
> > > > > +
> > > > >    /**
> > > > >     * madvise_purgeable - Handle purgeable buffer object
> > > > > advice
> > > > >     * @xe: XE device
> > > > > @@ -231,14 +315,20 @@ static void __maybe_unused
> > > > > madvise_purgeable(struct xe_device *xe,
> > > > >    		switch (op->purge_state_val.val) {
> > > > >    		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> > > > > -			xe_bo_set_purgeable_state(bo,
> > > > > XE_MADV_PURGEABLE_WILLNEED);
> > > > > +			vmas[i]->attr.purgeable_state =
> > > > > XE_MADV_PURGEABLE_WILLNEED;
> > > > > +
> > > > > +			/* Update BO purgeable state */
> > > > > +			xe_bo_recompute_purgeable_state(bo);
> > > > >    			break;
> > > > >    		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> > > > > -			xe_bo_set_purgeable_state(bo,
> > > > > XE_MADV_PURGEABLE_DONTNEED);
> > > > > +			vmas[i]->attr.purgeable_state =
> > > > > XE_MADV_PURGEABLE_DONTNEED;
> > > > > +
> > > > > +			/* Update BO purgeable state */
> > > > > +			xe_bo_recompute_purgeable_state(bo);
> > > > >    			break;
> > > > >    		default:
> > > > > -			drm_warn(&vm->xe->drm, "Invalid
> > > > > madvice
> > > > > value = %d\n",
> > > > > -				 op->purge_state_val.val);
> > > > > +			/* Should never hit - values
> > > > > validated in
> > > > > madvise_args_are_sane() */
> > > > > +			xe_assert(vm->xe, 0);
> > > > >    			return;
> > > > >    		}
> > > > >    	}
> > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > index b0e1fc445f23..39acd2689ca0 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > @@ -8,8 +8,11 @@
> > > > >    struct drm_device;
> > > > >    struct drm_file;
> > > > > +struct xe_bo;
> > > > >    int xe_vm_madvise_ioctl(struct drm_device *dev, void
> > > > > *data,
> > > > >    			struct drm_file *file);
> > > > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
> > > > > +
> > > > >    #endif
> > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > index 43203e90ee3e..fd563039e8f4 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
> > > > >    	 * same as default_pat_index unless overwritten by
> > > > > madvise.
> > > > >    	 */
> > > > >    	u16 pat_index;
> > > > > +
> > > > > +	/**
> > > > > +	 * @purgeable_state: Purgeable hint for this VMA
> > > > > mapping
> > > > > +	 *
> > > > > +	 * Per-VMA purgeable state from madvise. Valid
> > > > > states are
> > > > > WILLNEED (0)
> > > > > +	 * or DONTNEED (1). Shared BOs require all VMAs to
> > > > > be
> > > > > DONTNEED before
> > > > > +	 * the BO can be purged. PURGED state exists only at
> > > > > BO
> > > > > level.
> > > > > +	 *
> > > > > +	 * Protected by BO dma-resv lock. Set via
> > > > > DRM_IOCTL_XE_MADVISE.
> > > > > +	 */
> > > > > +	u32 purgeable_state;
> > > > >    };
> > > > >    struct xe_vma {

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-25  8:21           ` Thomas Hellström
@ 2026-02-25  9:04             ` Matthew Brost
  2026-02-25  9:18               ` Thomas Hellström
  0 siblings, 1 reply; 36+ messages in thread
From: Matthew Brost @ 2026-02-25  9:04 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: Yadav, Arvind, intel-xe, himal.prasad.ghimiray, pallavi.mishra

On Wed, Feb 25, 2026 at 09:21:10AM +0100, Thomas Hellström wrote:
> On Wed, 2026-02-25 at 11:05 +0530, Yadav, Arvind wrote:
> > 
> > On 24-02-2026 22:06, Matthew Brost wrote:
> > > On Tue, Feb 24, 2026 at 08:37:44PM +0530, Yadav, Arvind wrote:
> > > > On 24-02-2026 18:18, Thomas Hellström wrote:
> > > > > On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> > > > > > Track purgeable state per-VMA instead of using a coarse
> > > > > > shared
> > > > > > BO check. This prevents purging shared BOs until all VMAs
> > > > > > across
> > > > > > all VMs are marked DONTNEED.
> > > > > > 
> > > > > > Add xe_bo_all_vmas_dontneed() to check all VMAs before
> > > > > > marking
> > > > > > a BO purgeable. Add xe_bo_recheck_purgeable_on_vma_unbind()
> > > > > > to
> > > > > > handle state transitions when VMAs are destroyed - if all
> > > > > > remaining VMAs are DONTNEED the BO can become purgeable, or
> > > > > > if
> > > > > > no VMAs remain it transitions to WILLNEED.
> > > > > > 
> > > > > > The per-VMA purgeable_state field stores the madvise hint for
> > > > > > each mapping. Shared BOs can only be purged when all VMAs
> > > > > > unanimously indicate DONTNEED.
> > > > > > 
> > > > > > One thing to note: when the last VMA goes away, we default
> > > > > > back to
> > > > > > WILLNEED. DONTNEED is a per-mapping hint, and without any
> > > > > > mappings
> > > > > > there is no remaining madvise state to justify purging. This
> > > > > > prevents
> > > > > > BOs from becoming purgeable solely due to being temporarily
> > > > > > unmapped.
> > > > > > 
> > > > > > v3:
> > > > > >     - This addresses Thomas Hellström's feedback: "loop over
> > > > > > all vmas
> > > > > >       attached to the bo and check that they all say
> > > > > > WONTNEED. This
> > > > > > will
> > > > > >       also need a check at VMA unbinding"
> > > > > > 
> > > > > > v4:
> > > > > >     - @madv_purgeable atomic_t → u32 change across all
> > > > > > relevant
> > > > > >       patches (Matt)
> > > > > > 
> > > > > > v5:
> > > > > >     - Call xe_bo_recheck_purgeable_on_vma_unbind() from
> > > > > > xe_vma_destroy()
> > > > > >       right after drm_gpuva_unlink() where we already hold
> > > > > > the BO lock,
> > > > > >       drop the trylock-based late destroy path (Matt)
> > > > > >     - Move purgeable_state into xe_vma_mem_attr with the
> > > > > > other madvise
> > > > > >       attributes (Matt)
> > > > > >     - Drop READ_ONCE since the BO lock already protects us
> > > > > > (Matt)
> > > > > >     - Keep returning false when there are no VMAs - otherwise
> > > > > > we'd mark
> > > > > >       BOs purgeable without any user hint (Matt)
> > > > > >     - Use xe_bo_set_purgeable_state() instead of direct
> > > > > > initialization(Matt)
> > > > > >     - use xe_assert instead of drm_war (Thomas)
> > > > > Typo.
> > > > 
> > > > Noted,
> > > > 
> > > > > There were also a couple of review issues in my reply here:
> > > > > 
> > > > > https://patchwork.freedesktop.org/patch/699451/?series=156651&rev=5
> > > > > 
> > > > > that were never addressed or at least commented upon.
> > > > > 
> > > > > The comment there on retaining purgeable state after the last
> > > > > vma is
> > > > > unmapped could be discussed, though.
> > > > > 
> > > > > Let's say we unmap a vma marking a bo purgeable. It then
> > > > > becomes either
> > > > > purged or non-purgeable.
> > > > > 
> > > > > Then an app tries to access it either using a new vma or CPU
> > > > > map. Then
> > > > > it will typically succeed, or might occasionally fail if the bo
> > > > > happened to be purged in between.
> > > > > 
> > > > > How do we handle new vma map requests and cpu-faults to a bo in
> > > > > purgeable state? Do we block those?
> > > > 
> > > > @Thomas,
> > > > 
> > > > The implementation already blocks new access to purged BOs:
> > > >   1. New VMA mappings (Patch 0005): vma_lock_and_validate()
> > > > rejects MAP
> > > > operations to purged BOs with -EINVAL via the check_purged flag.
> > > >   2. CPU faults (Patch 0004): Both xe_bo_cpu_prep() and
> > > > xe_gem_mmap_offset()
> > > > return errors (-EFAULT / VM_FAULT_SIGBUS) when accessing purged
> > > > BOs.
> > > >   3. "Once purged, always purged": Even when the last VMA is
> > > > unmapped,
> > > > xe_bo_recompute_purgeable_state() preserves the PURGED state - it
> > > > never
> > > > transitions back to WILLNEED or DONTNEED (see early return at the
> > > > top of the
> > > > function).
> > > > 
> > > > The only way forward for the application is to destroy the purged
> > > > BO and
> > > > create a new one.
> > > > 
> > > > Regarding the 'no VMAs → WILLNEED' logic: this only applies to
> > > > non-purged
> > > > BOs that happen to be temporarily unmapped. Purged BOs remain
> > > > permanently
> > > > invalid.
> > > So I think xe_bo_all_vmas_dontneed() isn't 100% correct...
> > > 
> > > I think it should return an enum...
> > > 
> > > enum xe_bo_vmas_purge_state {	/* Maybe a better name? */
> > > 	XE_BO_VMAS_STATE_DONTNEED = 0,
> > > 	XE_BO_VMAS_STATE_WILLNEED = 1,
> > > 	XE_BO_VMAS_STATE_NO_VMAS = 2,
> > > };
> > > 
> > > 
> > > Then in xe_bo_recompute_purgeable_state() something like this:
> > > 
> > > void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> > > {
> > > 	enum xe_bo_vma_purge_state state;
> > > 
> > > 	if (!bo)
> > > 		return;
> > > 
> > > 	xe_bo_assert_held(bo);
> > > 
> > > 	/*
> > > 	 * Once purged, always purged. Cannot transition back to
> > > WILLNEED.
> > > 	 * This matches i915 semantics where purged BOs are
> > > permanently invalid.
> > > 	 */
> > > 	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> > > 		return;
> > > 
> > > 	state = xe_bo_all_vmas_dontneed(bo);
> > > 	if (state == XE_BO_VMAS_STATE_DONTNEED) {
> > > 		/* All VMAs are DONTNEED - mark BO purgeable */
> > > 		if (bo->madv_purgeable !=
> > > XE_MADV_PURGEABLE_DONTNEED)
> > > 			xe_bo_set_purgeable_state(bo,
> > > XE_MADV_PURGEABLE_DONTNEED);
> > > 	} else if (state == XE_BO_VMAS_STATE_WILLNEED) {
> > > 		/* At least one VMA is WILLNEED - BO must not be
> > > purgeable */
> > > 		if (bo->madv_purgeable !=
> > > XE_MADV_PURGEABLE_WILLNEED)
> > > 			xe_bo_set_purgeable_state(bo,
> > > XE_MADV_PURGEABLE_WILLNEED);
> > > 	}
> > > }
> > > 
> > > I think this would avoid the last unbind unintentionally flipping from
> > > DONTNEED -> WILLNEED.
> > > 
> > > What do both of you (Thomas, Arvind) think?
> > 
> > 
> > @Matt,
> > 
> > Good catch—I missed that transition. You’re right: when the last VMA
> > is 
> > unmapped from a DONTNEED BO, the current logic can flip it back to 
> > WILLNEED, which discards the user’s hint. That’s wrong.
> > 
> >    I like the enum approach to distinguish:
> >      -  *_DONTNEED: all VMAs are DONTNEED
> >      - *_WILLNEED: at least one VMA is WILLNEED
> >      - *_NO_VMAS: no VMAs present
> > 
> > With that, xe_bo_recompute_purgeable_state() can avoid changing state
> > on 
> > NO_VMAS and preserve "once purged, always purged," matching i915 
> > semantics. This also addresses Thomas's earlier question about new 
> > VMA/CPU access to purgeable BOs—the enum makes it clear we only 
> > transition on explicit VMA state, not on absence of VMAs.
> > 
> > I'll rework xe_bo_all_vmas_dontneed() to return the enum and update
> > the 
> > recompute path accordingly.
> > 
> > 
> > @Thomas,
> > 
> > Does this direction look good to you? If yes, I will send updated
> > patch.
> 
> Yes, but I'm also as mentioned concerned about whether we can add new
> vmas, cpu faults and exports in the WONTNEED state. If we can do that,
> it might succeed most of the time, making a well-behaved appearance in
> user-space, but if on occasion the bo gets purged, the app would
> seemingly unexpectedly fail.
> 
> So do we block new vmas cpu-faults and exports in the WONTNEED state?
> 

I’ve thought about the same thing. The new vmas semantics are a bit odd,
because if you unbind the BO in WONTNEED and disallow creating new VMAs,
the BO can never be used again: madvise requires a VMA to operate, so
you can't move a BO out of WONTNEED. Maybe that’s acceptable or even
desirable, but it would need to be documented, and ultimately we’d need
a UMD ack for those semantics.

CPU faults or exports in WONTNEED also seem like they should be
disallowed, with less odd semantics, but again, this should be
documented and require a UMD ack.

Matt

> /Thomas
> 
> 
> > 
> > Thanks,
> > Arvind
> > 
> > 
> > > 
> > > Matt
> > > 
> > > > Thanks,
> > > > Arvind
> > > > > Thanks,
> > > > > Thomas
> > > > > 
> > > > > 
> > > > > 
> > > > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > > > > Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > > > > Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> > > > > > ---
> > > > > >    drivers/gpu/drm/xe/xe_svm.c        |  1 +
> > > > > >    drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
> > > > > >    drivers/gpu/drm/xe/xe_vm_madvise.c | 98
> > > > > > ++++++++++++++++++++++++++++--
> > > > > >    drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
> > > > > >    drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
> > > > > >    5 files changed, 116 insertions(+), 6 deletions(-)
> > > > > > 
> > > > > > diff --git a/drivers/gpu/drm/xe/xe_svm.c
> > > > > > b/drivers/gpu/drm/xe/xe_svm.c
> > > > > > index cda3bf7e2418..329c77aa5c20 100644
> > > > > > --- a/drivers/gpu/drm/xe/xe_svm.c
> > > > > > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > > > > > @@ -318,6 +318,7 @@ static void
> > > > > > xe_vma_set_default_attributes(struct
> > > > > > xe_vma *vma)
> > > > > >    		.preferred_loc.migration_policy =
> > > > > > DRM_XE_MIGRATE_ALL_PAGES,
> > > > > >    		.pat_index = vma->attr.default_pat_index,
> > > > > >    		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
> > > > > > +		.purgeable_state =
> > > > > > XE_MADV_PURGEABLE_WILLNEED,
> > > > > >    	};
> > > > > >    	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
> > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm.c
> > > > > > b/drivers/gpu/drm/xe/xe_vm.c
> > > > > > index 71cf3ce6c62b..e84b9e7cb5eb 100644
> > > > > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > > > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > > > > @@ -39,6 +39,7 @@
> > > > > >    #include "xe_tile.h"
> > > > > >    #include "xe_tlb_inval.h"
> > > > > >    #include "xe_trace_bo.h"
> > > > > > +#include "xe_vm_madvise.h"
> > > > > >    #include "xe_wa.h"
> > > > > >    static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> > > > > > @@ -1085,6 +1086,7 @@ static struct xe_vma
> > > > > > *xe_vma_create(struct
> > > > > > xe_vm *vm,
> > > > > >    static void xe_vma_destroy_late(struct xe_vma *vma)
> > > > > >    {
> > > > > >    	struct xe_vm *vm = xe_vma_vm(vma);
> > > > > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > > > > >    	if (vma->ufence) {
> > > > > >    		xe_sync_ufence_put(vma->ufence);
> > > > > > @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct
> > > > > > xe_vma
> > > > > > *vma)
> > > > > >    	} else if (xe_vma_is_null(vma) ||
> > > > > > xe_vma_is_cpu_addr_mirror(vma)) {
> > > > > >    		xe_vm_put(vm);
> > > > > >    	} else {
> > > > > > -		xe_bo_put(xe_vma_bo(vma));
> > > > > > +		xe_bo_put(bo);
> > > > > >    	}
> > > > > >    	xe_vma_free(vma);
> > > > > > @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct
> > > > > > dma_fence
> > > > > > *fence,
> > > > > >    static void xe_vma_destroy(struct xe_vma *vma, struct
> > > > > > dma_fence
> > > > > > *fence)
> > > > > >    {
> > > > > >    	struct xe_vm *vm = xe_vma_vm(vma);
> > > > > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > > > > >    	lockdep_assert_held_write(&vm->lock);
> > > > > >    	xe_assert(vm->xe, list_empty(&vma-
> > > > > > >combined_links.destroy));
> > > > > > @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct
> > > > > > xe_vma *vma,
> > > > > > struct dma_fence *fence)
> > > > > >    		xe_assert(vm->xe, vma->gpuva.flags &
> > > > > > XE_VMA_DESTROYED);
> > > > > >    		xe_userptr_destroy(to_userptr_vma(vma));
> > > > > >    	} else if (!xe_vma_is_null(vma) &&
> > > > > > !xe_vma_is_cpu_addr_mirror(vma)) {
> > > > > > -		xe_bo_assert_held(xe_vma_bo(vma));
> > > > > > +		xe_bo_assert_held(bo);
> > > > > >    		drm_gpuva_unlink(&vma->gpuva);
> > > > > > +		xe_bo_recompute_purgeable_state(bo);
> > > > > >    	}
> > > > > >    	xe_vm_assert_held(vm);
> > > > > > @@ -2681,6 +2685,7 @@ static int
> > > > > > vm_bind_ioctl_ops_parse(struct xe_vm
> > > > > > *vm, struct drm_gpuva_ops *ops,
> > > > > >    				.atomic_access =
> > > > > > DRM_XE_ATOMIC_UNDEFINED,
> > > > > >    				.default_pat_index = op-
> > > > > > > map.pat_index,
> > > > > >    				.pat_index = op-
> > > > > > >map.pat_index,
> > > > > > +				.purgeable_state =
> > > > > > XE_MADV_PURGEABLE_WILLNEED,
> > > > > >    			};
> > > > > >    			flags |= op->map.vma_flags &
> > > > > > XE_VMA_CREATE_MASK;
> > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > index d9cfba7bfe0b..c184426546a2 100644
> > > > > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > @@ -12,6 +12,7 @@
> > > > > >    #include "xe_pat.h"
> > > > > >    #include "xe_pt.h"
> > > > > >    #include "xe_svm.h"
> > > > > > +#include "xe_vm.h"
> > > > > >    struct xe_vmas_in_madvise_range {
> > > > > >    	u64 addr;
> > > > > > @@ -183,6 +184,89 @@ static void madvise_pat_index(struct
> > > > > > xe_device
> > > > > > *xe, struct xe_vm *vm,
> > > > > >    	}
> > > > > >    }
> > > > > > +/**
> > > > > > + * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are
> > > > > > marked
> > > > > > DONTNEED
> > > > > > + * @bo: Buffer object
> > > > > > + *
> > > > > > + * Check all VMAs across all VMs to determine if BO can be
> > > > > > purged.
> > > > > > + * Shared BOs require unanimous DONTNEED state from all
> > > > > > mappings.
> > > > > > + *
> > > > > > + * Caller must hold BO dma-resv lock.
> > > > > > + *
> > > > > > + * Return: true if all VMAs are DONTNEED, false otherwise
> > > > > > + */
> > > > > > +static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
> > > > > > +{
> > > > > > +	struct drm_gpuvm_bo *vm_bo;
> > > > > > +	struct drm_gpuva *gpuva;
> > > > > > +	struct drm_gem_object *obj = &bo->ttm.base;
> > > > > > +	bool has_vmas = false;
> > > > > > +
> > > > > > +	xe_bo_assert_held(bo);
> > > > > > +
> > > > > > +	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> > > > > > +		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
> > > > > > +			struct xe_vma *vma =
> > > > > > gpuva_to_vma(gpuva);
> > > > > > +
> > > > > > +			has_vmas = true;
> > > > > > +
> > > > > > +			/* Any non-DONTNEED VMA prevents
> > > > > > purging */
> > > > > > +			if (vma->attr.purgeable_state !=
> > > > > > XE_MADV_PURGEABLE_DONTNEED)
> > > > > > +				return false;
> > > > > > +		}
> > > > > > +	}
> > > > > > +
> > > > > > +	/*
> > > > > > +	 * No VMAs => no mapping-level DONTNEED hint.
> > > > > > +	 * Default to WILLNEED to avoid making BOs purgeable
> > > > > > without
> > > > > > +	 * explicit user intent.
> > > > > > +	 */
> > > > > > +	if (!has_vmas)
> > > > > > +		return false;
> > > > > > +
> > > > > > +	return true;
> > > > > > +}
> > > > > > +
> > > > > > +/**
> > > > > > + * xe_bo_recompute_purgeable_state() - Recompute BO
> > > > > > purgeable state
> > > > > > from VMAs
> > > > > > + * @bo: Buffer object
> > > > > > + *
> > > > > > + * Walk all VMAs to determine if BO should be purgeable or
> > > > > > not.
> > > > > > + * Shared BOs require unanimous DONTNEED state from all
> > > > > > mappings.
> > > > > > + *
> > > > > > + * Locking: Caller must hold BO dma-resv lock. When
> > > > > > iterating GPUVM
> > > > > > lists,
> > > > > > + * VM lock must also be held (write) to prevent concurrent
> > > > > > VMA
> > > > > > modifications.
> > > > > > + * This is satisfied at both call sites:
> > > > > > + * - xe_vma_destroy(): holds vm->lock write
> > > > > > + * - madvise_purgeable(): holds vm->lock write (from madvise
> > > > > > ioctl
> > > > > > path)
> > > > > > + *
> > > > > > + * Return: nothing
> > > > > > + */
> > > > > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> > > > > > +{
> > > > > > +	if (!bo)
> > > > > > +		return;
> > > > > > +
> > > > > > +	xe_bo_assert_held(bo);
> > > > > > +
> > > > > > +	/*
> > > > > > +	 * Once purged, always purged. Cannot transition
> > > > > > back to
> > > > > > WILLNEED.
> > > > > > +	 * This matches i915 semantics where purged BOs are
> > > > > > permanently invalid.
> > > > > > +	 */
> > > > > > +	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> > > > > > +		return;
> > > > > > +
> > > > > > +	if (xe_bo_all_vmas_dontneed(bo)) {
> > > > > > +		/* All VMAs are DONTNEED - mark BO purgeable
> > > > > > */
> > > > > > +		if (bo->madv_purgeable !=
> > > > > > XE_MADV_PURGEABLE_DONTNEED)
> > > > > > +			xe_bo_set_purgeable_state(bo,
> > > > > > XE_MADV_PURGEABLE_DONTNEED);
> > > > > > +	} else {
> > > > > > +		/* At least one VMA is WILLNEED - BO must
> > > > > > not be
> > > > > > purgeable */
> > > > > > +		if (bo->madv_purgeable !=
> > > > > > XE_MADV_PURGEABLE_WILLNEED)
> > > > > > +			xe_bo_set_purgeable_state(bo,
> > > > > > XE_MADV_PURGEABLE_WILLNEED);
> > > > > > +	}
> > > > > > +}
> > > > > > +
> > > > > >    /**
> > > > > >     * madvise_purgeable - Handle purgeable buffer object
> > > > > > advice
> > > > > >     * @xe: XE device
> > > > > > @@ -231,14 +315,20 @@ static void __maybe_unused
> > > > > > madvise_purgeable(struct xe_device *xe,
> > > > > >    		switch (op->purge_state_val.val) {
> > > > > >    		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> > > > > > -			xe_bo_set_purgeable_state(bo,
> > > > > > XE_MADV_PURGEABLE_WILLNEED);
> > > > > > +			vmas[i]->attr.purgeable_state =
> > > > > > XE_MADV_PURGEABLE_WILLNEED;
> > > > > > +
> > > > > > +			/* Update BO purgeable state */
> > > > > > +			xe_bo_recompute_purgeable_state(bo);
> > > > > >    			break;
> > > > > >    		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> > > > > > -			xe_bo_set_purgeable_state(bo,
> > > > > > XE_MADV_PURGEABLE_DONTNEED);
> > > > > > +			vmas[i]->attr.purgeable_state =
> > > > > > XE_MADV_PURGEABLE_DONTNEED;
> > > > > > +
> > > > > > +			/* Update BO purgeable state */
> > > > > > +			xe_bo_recompute_purgeable_state(bo);
> > > > > >    			break;
> > > > > >    		default:
> > > > > > -			drm_warn(&vm->xe->drm, "Invalid
> > > > > > madvice
> > > > > > value = %d\n",
> > > > > > -				 op->purge_state_val.val);
> > > > > > +			/* Should never hit - values
> > > > > > validated in
> > > > > > madvise_args_are_sane() */
> > > > > > +			xe_assert(vm->xe, 0);
> > > > > >    			return;
> > > > > >    		}
> > > > > >    	}
> > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > index b0e1fc445f23..39acd2689ca0 100644
> > > > > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > @@ -8,8 +8,11 @@
> > > > > >    struct drm_device;
> > > > > >    struct drm_file;
> > > > > > +struct xe_bo;
> > > > > >    int xe_vm_madvise_ioctl(struct drm_device *dev, void
> > > > > > *data,
> > > > > >    			struct drm_file *file);
> > > > > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
> > > > > > +
> > > > > >    #endif
> > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > index 43203e90ee3e..fd563039e8f4 100644
> > > > > > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
> > > > > >    	 * same as default_pat_index unless overwritten by
> > > > > > madvise.
> > > > > >    	 */
> > > > > >    	u16 pat_index;
> > > > > > +
> > > > > > +	/**
> > > > > > +	 * @purgeable_state: Purgeable hint for this VMA
> > > > > > mapping
> > > > > > +	 *
> > > > > > +	 * Per-VMA purgeable state from madvise. Valid
> > > > > > states are
> > > > > > WILLNEED (0)
> > > > > > +	 * or DONTNEED (1). Shared BOs require all VMAs to
> > > > > > be
> > > > > > DONTNEED before
> > > > > > +	 * the BO can be purged. PURGED state exists only at
> > > > > > BO
> > > > > > level.
> > > > > > +	 *
> > > > > > +	 * Protected by BO dma-resv lock. Set via
> > > > > > DRM_IOCTL_XE_MADVISE.
> > > > > > +	 */
> > > > > > +	u32 purgeable_state;
> > > > > >    };
> > > > > >    struct xe_vma {

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-25  9:04             ` Matthew Brost
@ 2026-02-25  9:18               ` Thomas Hellström
  2026-02-25  9:40                 ` Yadav, Arvind
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Hellström @ 2026-02-25  9:18 UTC (permalink / raw)
  To: Matthew Brost
  Cc: Yadav, Arvind, intel-xe, himal.prasad.ghimiray, pallavi.mishra

On Wed, 2026-02-25 at 01:04 -0800, Matthew Brost wrote:
> On Wed, Feb 25, 2026 at 09:21:10AM +0100, Thomas Hellström wrote:
> > On Wed, 2026-02-25 at 11:05 +0530, Yadav, Arvind wrote:
> > > 
> > > On 24-02-2026 22:06, Matthew Brost wrote:
> > > > On Tue, Feb 24, 2026 at 08:37:44PM +0530, Yadav, Arvind wrote:
> > > > > On 24-02-2026 18:18, Thomas Hellström wrote:
> > > > > > On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> > > > > > > Track purgeable state per-VMA instead of using a coarse
> > > > > > > shared
> > > > > > > BO check. This prevents purging shared BOs until all VMAs
> > > > > > > across
> > > > > > > all VMs are marked DONTNEED.
> > > > > > > 
> > > > > > > Add xe_bo_all_vmas_dontneed() to check all VMAs before
> > > > > > > marking
> > > > > > > a BO purgeable. Add
> > > > > > > xe_bo_recheck_purgeable_on_vma_unbind()
> > > > > > > to
> > > > > > > handle state transitions when VMAs are destroyed - if all
> > > > > > > remaining VMAs are DONTNEED the BO can become purgeable,
> > > > > > > or
> > > > > > > if
> > > > > > > no VMAs remain it transitions to WILLNEED.
> > > > > > > 
> > > > > > > The per-VMA purgeable_state field stores the madvise hint
> > > > > > > for
> > > > > > > each mapping. Shared BOs can only be purged when all VMAs
> > > > > > > unanimously indicate DONTNEED.
> > > > > > > 
> > > > > > > One thing to note: when the last VMA goes away, we
> > > > > > > default
> > > > > > > back to
> > > > > > > WILLNEED. DONTNEED is a per-mapping hint, and without any
> > > > > > > mappings
> > > > > > > there is no remaining madvise state to justify purging.
> > > > > > > This
> > > > > > > prevents
> > > > > > > BOs from becoming purgeable solely due to being
> > > > > > > temporarily
> > > > > > > unmapped.
> > > > > > > 
> > > > > > > v3:
> > > > > > >     - This addresses Thomas Hellström's feedback: "loop
> > > > > > > over
> > > > > > > all vmas
> > > > > > >       attached to the bo and check that they all say
> > > > > > > WONTNEED. This
> > > > > > > will
> > > > > > >       also need a check at VMA unbinding"
> > > > > > > 
> > > > > > > v4:
> > > > > > >     - @madv_purgeable atomic_t → u32 change across all
> > > > > > > relevant
> > > > > > >       patches (Matt)
> > > > > > > 
> > > > > > > v5:
> > > > > > >     - Call xe_bo_recheck_purgeable_on_vma_unbind() from
> > > > > > > xe_vma_destroy()
> > > > > > >       right after drm_gpuva_unlink() where we already
> > > > > > > hold
> > > > > > > the BO lock,
> > > > > > >       drop the trylock-based late destroy path (Matt)
> > > > > > >     - Move purgeable_state into xe_vma_mem_attr with the
> > > > > > > other madvise
> > > > > > >       attributes (Matt)
> > > > > > >     - Drop READ_ONCE since the BO lock already protects
> > > > > > > us
> > > > > > > (Matt)
> > > > > > >     - Keep returning false when there are no VMAs -
> > > > > > > otherwise
> > > > > > > we'd mark
> > > > > > >       BOs purgeable without any user hint (Matt)
> > > > > > >     - Use xe_bo_set_purgeable_state() instead of direct
> > > > > > > initialization(Matt)
> > > > > > >     - use xe_assert instead of drm_war (Thomas)
> > > > > > Typo.
> > > > > 
> > > > > Noted,
> > > > > 
> > > > > > There were also a couple of review issues in my reply here:
> > > > > > 
> > > > > > https://patchwork.freedesktop.org/patch/699451/?series=156651&rev=5
> > > > > > 
> > > > > > that were never addressed or at least commented upon.
> > > > > > 
> > > > > > The comment there on retaining purgeable state after the
> > > > > > last
> > > > > > vma is
> > > > > > unmapped could be discussed, though.
> > > > > > 
> > > > > > Let's say we unmap a vma marking a bo purgeable. It then
> > > > > > becomes either
> > > > > > purged or non-purgeable.
> > > > > > 
> > > > > > Then an app tries to access it either using a new vma or
> > > > > > CPU
> > > > > > map. Then
> > > > > > it will typically succeed, or might occasionally fail if
> > > > > > the bo
> > > > > > happened to be purged in between.
> > > > > > 
> > > > > > How do we handle new vma map requests and cpu-faults to a
> > > > > > bo in
> > > > > > purgeable state? Do we block those?
> > > > > 
> > > > > @Thomas,
> > > > > 
> > > > > The implementation already blocks new access to purged BOs:
> > > > >   1. New VMA mappings (Patch 0005): vma_lock_and_validate()
> > > > > rejects MAP
> > > > > operations to purged BOs with -EINVAL via the check_purged
> > > > > flag.
> > > > >   2. CPU faults (Patch 0004): Both xe_bo_cpu_prep() and
> > > > > xe_gem_mmap_offset()
> > > > > return errors (-EFAULT / VM_FAULT_SIGBUS) when accessing
> > > > > purged
> > > > > BOs.
> > > > >   3. "Once purged, always purged": Even when the last VMA is
> > > > > unmapped,
> > > > > xe_bo_recompute_purgeable_state() preserves the PURGED state
> > > > > - it
> > > > > never
> > > > > transitions back to WILLNEED or DONTNEED (see early return at
> > > > > the
> > > > > top of the
> > > > > function).
> > > > > 
> > > > > The only way forward for the application is to destroy the
> > > > > purged
> > > > > BO and
> > > > > create a new one.
> > > > > 
> > > > > Regarding the 'no VMAs → WILLNEED' logic: this only applies
> > > > > to
> > > > > non-purged
> > > > > BOs that happen to be temporarily unmapped. Purged BOs remain
> > > > > permanently
> > > > > invalid.
> > > > So I think xe_bo_all_vmas_dontneed() isn't 100% correct...
> > > > 
> > > > I think it should return an enum...
> > > > 
> > > > enum xe_bo_vmas_purge_state {	/* Maybe a better name? */
> > > > 	XE_BO_VMAS_STATE_DONTNEED = 0,
> > > > 	XE_BO_VMAS_STATE_WILLNEED = 1,
> > > > 	XE_BO_VMAS_STATE_NO_VMAS = 2,
> > > > };
> > > > 
> > > > 
> > > > Then in xe_bo_recompute_purgeable_state() something like this:
> > > > 
> > > > void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> > > > {
> > > > 	enum xe_bo_vma_purge_state state;
> > > > 
> > > > 	if (!bo)
> > > > 		return;
> > > > 
> > > > 	xe_bo_assert_held(bo);
> > > > 
> > > > 	/*
> > > > 	 * Once purged, always purged. Cannot transition back
> > > > to
> > > > WILLNEED.
> > > > 	 * This matches i915 semantics where purged BOs are
> > > > permanently invalid.
> > > > 	 */
> > > > 	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> > > > 		return;
> > > > 
> > > > 	state = xe_bo_all_vmas_dontneed(bo);
> > > > 	if (state == XE_BO_VMAS_STATE_DONTNEED) {
> > > > 		/* All VMAs are DONTNEED - mark BO purgeable
> > > > */
> > > > 		if (bo->madv_purgeable !=
> > > > XE_MADV_PURGEABLE_DONTNEED)
> > > > 			xe_bo_set_purgeable_state(bo,
> > > > XE_MADV_PURGEABLE_DONTNEED);
> > > > 	} else if (state == XE_BO_VMAS_STATE_WILLNEED) {
> > > > 		/* At least one VMA is WILLNEED - BO must not
> > > > be
> > > > purgeable */
> > > > 		if (bo->madv_purgeable !=
> > > > XE_MADV_PURGEABLE_WILLNEED)
> > > > 			xe_bo_set_purgeable_state(bo,
> > > > XE_MADV_PURGEABLE_WILLNEED);
> > > > 	}
> > > > }
> > > > 
> > > > I think this would avoid the last unbind unintentionally
> > > > flipping from
> > > > DONTNEED -> WILLNEED.
> > > > 
> > > > What do both of you (Thomas, Arvind) think?
> > > 
> > > 
> > > @Matt,
> > > 
> > > Good catch—I missed that transition. You’re right: when the last
> > > VMA
> > > is 
> > > unmapped from a DONTNEED BO, the current logic can flip it back
> > > to 
> > > WILLNEED, which discards the user’s hint. That’s wrong.
> > > 
> > >    I like the enum approach to distinguish:
> > >      -  *_DONTNEED: all VMAs are DONTNEED
> > >      - *_WILLNEED: at least one VMA is WILLNEED
> > >      - *_NO_VMAS: no VMAs present
> > > 
> > > With that, xe_bo_recompute_purgeable_state() can avoid changing
> > > state
> > > on 
> > > NO_VMAS and preserve "once purged, always purged," matching i915 
> > > semantics. This also addresses Thomas's earlier question about
> > > new 
> > > VMA/CPU access to purgeable BOs—the enum makes it clear we only 
> > > transition on explicit VMA state, not on absence of VMAs.
> > > 
> > > I'll rework xe_bo_all_vmas_dontneed() to return the enum and
> > > update
> > > the 
> > > recompute path accordingly.
> > > 
> > > 
> > > @Thomas,
> > > 
> > > Does this direction look good to you? If yes, I will send updated
> > > patch.
> > 
> > Yes, but I'm also as mentioned concerned about whether we can add
> > new
> > vmas, cpu faults and exports in the WONTNEED state. If we can do
> > that,
> > it might succeed most of the time, making a well-behaved appearance
> > in
> > user-space, but if on occasion the bo gets purged, the app would
> > seemingly unexpectedly fail.
> > 
> > So do we block new vmas cpu-faults and exports in the WONTNEED
> > state?
> > 
> 
> I’ve thought about the same thing. The new vmas semantics are a bit
> odd,
> because if you unbind the BO in WONTNEED and disallow creating new
> VMAs,
> the BO can never be used again: madvise requires a VMA to operate, so
> you can't move a BO out of WONTNEED. Maybe that’s acceptable or even
> desirable, but it would need to be documented, and ultimately we’d
> need
> a UMD ack for those semantics.
> 
> CPU faults or exports in WONTNEED also seem like they should be
> disallowed, with less odd semantics, but again, this should be
> documented
> and require a UMD ack.

Hmm. With WONTNEED we really want to do as little as possible. So we
shouldn't go into any sort of unmapping of GPU or CPU PTEs. That means
the end behaviour might still be a bit erratic on access of a WONTNEED
bo; depending on the previous access pattern we may or may not fault.

So we should probably disallow mmap(), VM_BIND and export, but allow
CPU- and GPU pagefaults. And document.

Speaking of pagefaults, I noticed that when *purged*, it looks like we
populate with scratch PTEs also on faulting VMs. I think this is the
correct approach, though, to avoid prefetch pagefaults wreaking havoc
when accessing vmas with purged bos.

/Thomas


> 
> Matt
> 
> > /Thomas
> > 
> > 
> > > 
> > > Thanks,
> > > Arvind
> > > 
> > > 
> > > > 
> > > > Matt
> > > > 
> > > > > Thanks,
> > > > > Arvind
> > > > > > Thanks,
> > > > > > Thomas
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > > > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > > > > > Cc: Himal Prasad Ghimiray
> > > > > > > <himal.prasad.ghimiray@intel.com>
> > > > > > > Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> > > > > > > ---
> > > > > > >    drivers/gpu/drm/xe/xe_svm.c        |  1 +
> > > > > > >    drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
> > > > > > >    drivers/gpu/drm/xe/xe_vm_madvise.c | 98
> > > > > > > ++++++++++++++++++++++++++++--
> > > > > > >    drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
> > > > > > >    drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
> > > > > > >    5 files changed, 116 insertions(+), 6 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/drivers/gpu/drm/xe/xe_svm.c
> > > > > > > b/drivers/gpu/drm/xe/xe_svm.c
> > > > > > > index cda3bf7e2418..329c77aa5c20 100644
> > > > > > > --- a/drivers/gpu/drm/xe/xe_svm.c
> > > > > > > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > > > > > > @@ -318,6 +318,7 @@ static void
> > > > > > > xe_vma_set_default_attributes(struct
> > > > > > > xe_vma *vma)
> > > > > > >    		.preferred_loc.migration_policy =
> > > > > > > DRM_XE_MIGRATE_ALL_PAGES,
> > > > > > >    		.pat_index = vma-
> > > > > > > >attr.default_pat_index,
> > > > > > >    		.atomic_access =
> > > > > > > DRM_XE_ATOMIC_UNDEFINED,
> > > > > > > +		.purgeable_state =
> > > > > > > XE_MADV_PURGEABLE_WILLNEED,
> > > > > > >    	};
> > > > > > >    	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
> > > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm.c
> > > > > > > b/drivers/gpu/drm/xe/xe_vm.c
> > > > > > > index 71cf3ce6c62b..e84b9e7cb5eb 100644
> > > > > > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > > > > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > > > > > @@ -39,6 +39,7 @@
> > > > > > >    #include "xe_tile.h"
> > > > > > >    #include "xe_tlb_inval.h"
> > > > > > >    #include "xe_trace_bo.h"
> > > > > > > +#include "xe_vm_madvise.h"
> > > > > > >    #include "xe_wa.h"
> > > > > > >    static struct drm_gem_object *xe_vm_obj(struct xe_vm
> > > > > > > *vm)
> > > > > > > @@ -1085,6 +1086,7 @@ static struct xe_vma
> > > > > > > *xe_vma_create(struct
> > > > > > > xe_vm *vm,
> > > > > > >    static void xe_vma_destroy_late(struct xe_vma *vma)
> > > > > > >    {
> > > > > > >    	struct xe_vm *vm = xe_vma_vm(vma);
> > > > > > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > > > > > >  	if (vma->ufence) {
> > > > > > >  		xe_sync_ufence_put(vma->ufence);
> > > > > > > @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
> > > > > > >  	} else if (xe_vma_is_null(vma) || xe_vma_is_cpu_addr_mirror(vma)) {
> > > > > > >  		xe_vm_put(vm);
> > > > > > >  	} else {
> > > > > > > -		xe_bo_put(xe_vma_bo(vma));
> > > > > > > +		xe_bo_put(bo);
> > > > > > >  	}
> > > > > > >  	xe_vma_free(vma);
> > > > > > > @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence *fence,
> > > > > > >  static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
> > > > > > >  {
> > > > > > >  	struct xe_vm *vm = xe_vma_vm(vma);
> > > > > > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > > > > > >  	lockdep_assert_held_write(&vm->lock);
> > > > > > >  	xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
> > > > > > > @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
> > > > > > >  		xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
> > > > > > >  		xe_userptr_destroy(to_userptr_vma(vma));
> > > > > > >  	} else if (!xe_vma_is_null(vma) && !xe_vma_is_cpu_addr_mirror(vma)) {
> > > > > > > -		xe_bo_assert_held(xe_vma_bo(vma));
> > > > > > > +		xe_bo_assert_held(bo);
> > > > > > >  		drm_gpuva_unlink(&vma->gpuva);
> > > > > > > +		xe_bo_recompute_purgeable_state(bo);
> > > > > > >  	}
> > > > > > >  	xe_vm_assert_held(vm);
> > > > > > > @@ -2681,6 +2685,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> > > > > > >  				.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
> > > > > > >  				.default_pat_index = op->map.pat_index,
> > > > > > >  				.pat_index = op->map.pat_index,
> > > > > > > +				.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
> > > > > > >  			};
> > > > > > >  			flags |= op->map.vma_flags & XE_VMA_CREATE_MASK;
> > > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > > index d9cfba7bfe0b..c184426546a2 100644
> > > > > > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > > @@ -12,6 +12,7 @@
> > > > > > >  #include "xe_pat.h"
> > > > > > >  #include "xe_pt.h"
> > > > > > >  #include "xe_svm.h"
> > > > > > > +#include "xe_vm.h"
> > > > > > >  struct xe_vmas_in_madvise_range {
> > > > > > >  	u64 addr;
> > > > > > > @@ -183,6 +184,89 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
> > > > > > >  	}
> > > > > > >  }
> > > > > > > +/**
> > > > > > > + * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO are marked DONTNEED
> > > > > > > + * @bo: Buffer object
> > > > > > > + *
> > > > > > > + * Check all VMAs across all VMs to determine if BO can be purged.
> > > > > > > + * Shared BOs require unanimous DONTNEED state from all mappings.
> > > > > > > + *
> > > > > > > + * Caller must hold BO dma-resv lock.
> > > > > > > + *
> > > > > > > + * Return: true if all VMAs are DONTNEED, false otherwise
> > > > > > > + */
> > > > > > > +static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
> > > > > > > +{
> > > > > > > +	struct drm_gpuvm_bo *vm_bo;
> > > > > > > +	struct drm_gpuva *gpuva;
> > > > > > > +	struct drm_gem_object *obj = &bo->ttm.base;
> > > > > > > +	bool has_vmas = false;
> > > > > > > +
> > > > > > > +	xe_bo_assert_held(bo);
> > > > > > > +
> > > > > > > +	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> > > > > > > +		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
> > > > > > > +			struct xe_vma *vma = gpuva_to_vma(gpuva);
> > > > > > > +
> > > > > > > +			has_vmas = true;
> > > > > > > +
> > > > > > > +			/* Any non-DONTNEED VMA prevents purging */
> > > > > > > +			if (vma->attr.purgeable_state != XE_MADV_PURGEABLE_DONTNEED)
> > > > > > > +				return false;
> > > > > > > +		}
> > > > > > > +	}
> > > > > > > +
> > > > > > > +	/*
> > > > > > > +	 * No VMAs => no mapping-level DONTNEED hint.
> > > > > > > +	 * Default to WILLNEED to avoid making BOs purgeable without
> > > > > > > +	 * explicit user intent.
> > > > > > > +	 */
> > > > > > > +	if (!has_vmas)
> > > > > > > +		return false;
> > > > > > > +
> > > > > > > +	return true;
> > > > > > > +}
> > > > > > > +
> > > > > > > +/**
> > > > > > > + * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state from VMAs
> > > > > > > + * @bo: Buffer object
> > > > > > > + *
> > > > > > > + * Walk all VMAs to determine if BO should be purgeable or not.
> > > > > > > + * Shared BOs require unanimous DONTNEED state from all mappings.
> > > > > > > + *
> > > > > > > + * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM lists,
> > > > > > > + * VM lock must also be held (write) to prevent concurrent VMA modifications.
> > > > > > > + * This is satisfied at both call sites:
> > > > > > > + * - xe_vma_destroy(): holds vm->lock write
> > > > > > > + * - madvise_purgeable(): holds vm->lock write (from madvise ioctl path)
> > > > > > > + *
> > > > > > > + * Return: nothing
> > > > > > > + */
> > > > > > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> > > > > > > +{
> > > > > > > +	if (!bo)
> > > > > > > +		return;
> > > > > > > +
> > > > > > > +	xe_bo_assert_held(bo);
> > > > > > > +
> > > > > > > +	/*
> > > > > > > +	 * Once purged, always purged. Cannot transition back to WILLNEED.
> > > > > > > +	 * This matches i915 semantics where purged BOs are permanently invalid.
> > > > > > > +	 */
> > > > > > > +	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> > > > > > > +		return;
> > > > > > > +
> > > > > > > +	if (xe_bo_all_vmas_dontneed(bo)) {
> > > > > > > +		/* All VMAs are DONTNEED - mark BO purgeable */
> > > > > > > +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
> > > > > > > +			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> > > > > > > +	} else {
> > > > > > > +		/* At least one VMA is WILLNEED - BO must not be purgeable */
> > > > > > > +		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
> > > > > > > +			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> > > > > > > +	}
> > > > > > > +}
> > > > > > > +
> > > > > > >  /**
> > > > > > >   * madvise_purgeable - Handle purgeable buffer object advice
> > > > > > >   * @xe: XE device
> > > > > > > @@ -231,14 +315,20 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
> > > > > > >  		switch (op->purge_state_val.val) {
> > > > > > >  		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> > > > > > > -			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> > > > > > > +			vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_WILLNEED;
> > > > > > > +
> > > > > > > +			/* Update BO purgeable state */
> > > > > > > +			xe_bo_recompute_purgeable_state(bo);
> > > > > > >  			break;
> > > > > > >  		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> > > > > > > -			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> > > > > > > +			vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_DONTNEED;
> > > > > > > +
> > > > > > > +			/* Update BO purgeable state */
> > > > > > > +			xe_bo_recompute_purgeable_state(bo);
> > > > > > >  			break;
> > > > > > >  		default:
> > > > > > > -			drm_warn(&vm->xe->drm, "Invalid madvice value = %d\n",
> > > > > > > -				 op->purge_state_val.val);
> > > > > > > +			/* Should never hit - values validated in madvise_args_are_sane() */
> > > > > > > +			xe_assert(vm->xe, 0);
> > > > > > >  			return;
> > > > > > >  		}
> > > > > > >  	}
> > > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > > index b0e1fc445f23..39acd2689ca0 100644
> > > > > > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > > @@ -8,8 +8,11 @@
> > > > > > >  struct drm_device;
> > > > > > >  struct drm_file;
> > > > > > > +struct xe_bo;
> > > > > > >  int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
> > > > > > >  			struct drm_file *file);
> > > > > > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
> > > > > > > +
> > > > > > >  #endif
> > > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > > index 43203e90ee3e..fd563039e8f4 100644
> > > > > > > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > > @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
> > > > > > >  	 * same as default_pat_index unless overwritten by madvise.
> > > > > > >  	 */
> > > > > > >  	u16 pat_index;
> > > > > > > +
> > > > > > > +	/**
> > > > > > > +	 * @purgeable_state: Purgeable hint for this VMA mapping
> > > > > > > +	 *
> > > > > > > +	 * Per-VMA purgeable state from madvise. Valid states are WILLNEED (0)
> > > > > > > +	 * or DONTNEED (1). Shared BOs require all VMAs to be DONTNEED before
> > > > > > > +	 * the BO can be purged. PURGED state exists only at BO level.
> > > > > > > +	 *
> > > > > > > +	 * Protected by BO dma-resv lock. Set via DRM_IOCTL_XE_MADVISE.
> > > > > > > +	 */
> > > > > > > +	u32 purgeable_state;
> > > > > > >  };
> > > > > > >  struct xe_vma {

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-25  9:18               ` Thomas Hellström
@ 2026-02-25  9:40                 ` Yadav, Arvind
  2026-02-25 18:32                   ` Matthew Brost
  0 siblings, 1 reply; 36+ messages in thread
From: Yadav, Arvind @ 2026-02-25  9:40 UTC (permalink / raw)
  To: Thomas Hellström, Matthew Brost
  Cc: intel-xe, himal.prasad.ghimiray, pallavi.mishra


On 25-02-2026 14:48, Thomas Hellström wrote:
> On Wed, 2026-02-25 at 01:04 -0800, Matthew Brost wrote:
>> On Wed, Feb 25, 2026 at 09:21:10AM +0100, Thomas Hellström wrote:
>>> On Wed, 2026-02-25 at 11:05 +0530, Yadav, Arvind wrote:
>>>> On 24-02-2026 22:06, Matthew Brost wrote:
>>>>> On Tue, Feb 24, 2026 at 08:37:44PM +0530, Yadav, Arvind wrote:
>>>>>> On 24-02-2026 18:18, Thomas Hellström wrote:
>>>>>>> On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
>>>>>>>> Track purgeable state per-VMA instead of using a coarse
>>>>>>>> shared
>>>>>>>> BO check. This prevents purging shared BOs until all VMAs
>>>>>>>> across
>>>>>>>> all VMs are marked DONTNEED.
>>>>>>>>
>>>>>>>> Add xe_bo_all_vmas_dontneed() to check all VMAs before
>>>>>>>> marking
>>>>>>>> a BO purgeable. Add
>>>>>>>> xe_bo_recheck_purgeable_on_vma_unbind()
>>>>>>>> to
>>>>>>>> handle state transitions when VMAs are destroyed - if all
>>>>>>>> remaining VMAs are DONTNEED the BO can become purgeable,
>>>>>>>> or
>>>>>>>> if
>>>>>>>> no VMAs remain it transitions to WILLNEED.
>>>>>>>>
>>>>>>>> The per-VMA purgeable_state field stores the madvise hint
>>>>>>>> for
>>>>>>>> each mapping. Shared BOs can only be purged when all VMAs
>>>>>>>> unanimously indicate DONTNEED.
>>>>>>>>
>>>>>>>> One thing to note: when the last VMA goes away, we
>>>>>>>> default
>>>>>>>> back to
>>>>>>>> WILLNEED. DONTNEED is a per-mapping hint, and without any
>>>>>>>> mappings
>>>>>>>> there is no remaining madvise state to justify purging.
>>>>>>>> This
>>>>>>>> prevents
>>>>>>>> BOs from becoming purgeable solely due to being
>>>>>>>> temporarily
>>>>>>>> unmapped.
>>>>>>>>
>>>>>>>> v3:
>>>>>>>>      - This addresses Thomas Hellström's feedback: "loop
>>>>>>>> over
>>>>>>>> all vmas
>>>>>>>>        attached to the bo and check that they all say
>>>>>>>> WONTNEED. This
>>>>>>>> will
>>>>>>>>        also need a check at VMA unbinding"
>>>>>>>>
>>>>>>>> v4:
>>>>>>>>      - @madv_purgeable atomic_t → u32 change across all
>>>>>>>> relevant
>>>>>>>>        patches (Matt)
>>>>>>>>
>>>>>>>> v5:
>>>>>>>>      - Call xe_bo_recheck_purgeable_on_vma_unbind() from
>>>>>>>> xe_vma_destroy()
>>>>>>>>        right after drm_gpuva_unlink() where we already
>>>>>>>> hold
>>>>>>>> the BO lock,
>>>>>>>>        drop the trylock-based late destroy path (Matt)
>>>>>>>>      - Move purgeable_state into xe_vma_mem_attr with the
>>>>>>>> other madvise
>>>>>>>>        attributes (Matt)
>>>>>>>>      - Drop READ_ONCE since the BO lock already protects
>>>>>>>> us
>>>>>>>> (Matt)
>>>>>>>>      - Keep returning false when there are no VMAs -
>>>>>>>> otherwise
>>>>>>>> we'd mark
>>>>>>>>        BOs purgeable without any user hint (Matt)
>>>>>>>>      - Use xe_bo_set_purgeable_state() instead of direct
>>>>>>>> initialization(Matt)
>>>>>>>>      - use xe_assert instead of drm_war (Thomas)
>>>>>>> Typo.
>>>>>> Noted,
>>>>>>
>>>>>>> There were also a couple of review issues in my reply here:
>>>>>>>
>>>>>>> https://patchwork.freedesktop.org/patch/699451/?series=156651&rev=5
>>>>>>>
>>>>>>> that were never addressed or at least commented upon.
>>>>>>>
>>>>>>> The comment there on retaining purgeable state after the
>>>>>>> last
>>>>>>> vma is
>>>>>>> unmapped could be discussed, though.
>>>>>>>
>>>>>>> Let's say we unmap a vma marking a bo purgeable. It then
>>>>>>> becomes either
>>>>>>> purged or non-purgeable.
>>>>>>>
>>>>>>> Then an app tries to access it either using a new vma or
>>>>>>> CPU
>>>>>>> map. Then
>>>>>>> it will typically succeed, or might occasionally fail if
>>>>>>> the bo
>>>>>>> happened to be purged in between.
>>>>>>>
>>>>>>> How do we handle new vma map requests and cpu-faults to a
>>>>>>> bo in
>>>>>>> purgeable state? Do we block those?
>>>>>> @Thomas,
>>>>>>
>>>>>> The implementation already blocks new access to purged BOs:
>>>>>>    1. New VMA mappings (Patch 0005): vma_lock_and_validate()
>>>>>> rejects MAP
>>>>>> operations to purged BOs with -EINVAL via the check_purged
>>>>>> flag.
>>>>>>    2. CPU faults (Patch 0004): Both xe_bo_cpu_prep() and
>>>>>> xe_gem_mmap_offset()
>>>>>> return errors (-EFAULT / VM_FAULT_SIGBUS) when accessing
>>>>>> purged
>>>>>> BOs.
>>>>>>    3. "Once purged, always purged": Even when the last VMA is
>>>>>> unmapped,
>>>>>> xe_bo_recompute_purgeable_state() preserves the PURGED state
>>>>>> - it
>>>>>> never
>>>>>> transitions back to WILLNEED or DONTNEED (see early return at
>>>>>> the
>>>>>> top of the
>>>>>> function).
>>>>>>
>>>>>> The only way forward for the application is to destroy the
>>>>>> purged
>>>>>> BO and
>>>>>> create a new one.
>>>>>>
>>>>>> Regarding the 'no VMAs → WILLNEED' logic: this only applies
>>>>>> to
>>>>>> non-purged
>>>>>> BOs that happen to be temporarily unmapped. Purged BOs remain
>>>>>> permanently
>>>>>> invalid.
>>>>> So I think xe_bo_all_vmas_dontneed() isn't 100% correct...
>>>>>
>>>>> I think should return an enum...
>>>>>
>>>>> enum xe_bo_vmas_purge_state {	/* Maybe a better name? */
>>>>> 	XE_BO_VMAS_STATE_DONTNEED = 0,
>>>>> 	XE_BO_VMAS_STATE_WILLNEED = 1,
>>>>> 	XE_BO_VMAS_STATE_NO_VMAS = 2,
>>>>> };
>>>>>
>>>>>
>>>>> Then in xe_bo_recompute_purgeable_state() something like this:
>>>>>
>>>>> void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
>>>>> {
>>>>> 	enum xe_bo_vma_purge_state state;
>>>>>
>>>>> 	if (!bo)
>>>>> 		return;
>>>>>
>>>>> 	xe_bo_assert_held(bo);
>>>>>
>>>>> 	/*
>>>>> 	 * Once purged, always purged. Cannot transition back to WILLNEED.
>>>>> 	 * This matches i915 semantics where purged BOs are permanently invalid.
>>>>> 	 */
>>>>> 	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
>>>>> 		return;
>>>>>
>>>>> 	state = xe_bo_all_vmas_dontneed(bo);
>>>>> 	if (state == XE_BO_VMAS_STATE_DONTNEED) {
>>>>> 		/* All VMAs are DONTNEED - mark BO purgeable */
>>>>> 		if (bo->madv_purgeable != XE_MADV_PURGEABLE_DONTNEED)
>>>>> 			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
>>>>> 	} else if (state == XE_BO_VMAS_STATE_WILLNEED) {
>>>>> 		/* At least one VMA is WILLNEED - BO must not be purgeable */
>>>>> 		if (bo->madv_purgeable != XE_MADV_PURGEABLE_WILLNEED)
>>>>> 			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
>>>>> 	}
>>>>> }
>>>>>
>>>>> I think would avoid the last unbind unintentionally flipping
>>>>> from
>>>>> DONTNEED -> WILLNEED.
>>>>>
>>>>> What do you both of you (Thomas, Arvind) think?
>>>>
>>>> @Matt,
>>>>
>>>> Good catch—I missed that transition. You’re right: when the last
>>>> VMA
>>>> is
>>>> unmapped from a DONTNEED BO, the current logic can flip it back
>>>> to
>>>> WILLNEED, which discards the user’s hint. That’s wrong.
>>>>
>>>> I like the enum approach to distinguish:
>>>>   - *_DONTNEED: all VMAs are DONTNEED
>>>>   - *_WILLNEED: at least one VMA is WILLNEED
>>>>   - *_NO_VMAS: no VMAs present
>>>>
>>>> With that, xe_bo_recompute_purgeable_state() can avoid changing
>>>> state
>>>> on
>>>> NO_VMAS and preserve "once purged, always purged," matching i915
>>>> semantics. This also addresses Thomas's earlier question about
>>>> new
>>>> VMA/CPU access to purgeable BOs—the enum makes it clear we only
>>>> transition on explicit VMA state, not on absence of VMAs.
>>>>
>>>> I'll rework xe_bo_all_vmas_dontneed() to return the enum and
>>>> update
>>>> the
>>>> recompute path accordingly.
>>>>
>>>>
>>>> @Thomas,
>>>>
>>>> Does this direction look good to you? If yes, I will send updated
>>>> patch.
>>> Yes, but as mentioned I'm also concerned about whether we can add
>>> new vmas, cpu faults and exports in the WONTNEED state. If we can
>>> do that, it might succeed most of the time, giving a well-behaved
>>> appearance in user-space, but if on occasion the bo gets purged,
>>> the app would seemingly fail unexpectedly.
>>>
>>> So do we block new vmas, cpu-faults and exports in the WONTNEED
>>> state?
>>>
>> I've thought about the same thing. The new vma semantics are a bit
>> odd, because if you unbind the BO in WONTNEED and disallow creating
>> new VMAs, the BO can never be used again: madvise requires a VMA to
>> operate, so you can't move a BO out of WONTNEED. Maybe that's
>> acceptable or even desirable, but it would need to be documented,
>> and ultimately we'd need a UMD ack for those semantics.
>>
>> CPU faults or exports in WONTNEED also seem like they should be
>> disallowed, with less odd semantics, but again, this should be
>> documented and require UMD ack.
> Hmm. With WONTNEED we really want to do as little as possible, so we
> shouldn't go into any sort of unmapping of GPU or CPU ptes. That means
> the end behaviour might still be a bit erratic on access of a WONTNEED
> bo; depending on the previous access pattern we may or may not fault.
>
> So we should probably disallow mmap(), VM_BIND and export, but allow
> CPU- and GPU pagefaults. And document.
>
> Speaking of pagefaults, I noticed that when *purged*, it looks like we
> populate with scratch PTEs also on faulting VMs. I think this is the
> correct approach, though, to avoid prefetch pagefaults wreaking havoc
> when accessing vmas with purged bos.


@Thomas, @Matt,

Got it. So the plan is:

DONTNEED BOs:
    - Block: new mmap(), VM_BIND, dma-buf export
    - Allow: CPU/GPU faults on existing mappings (fail if purged)
    - Keep PTEs intact, just mark as purgeable

I'll add checks in:
1. xe_gem_mmap_offset() - reject new mmap to DONTNEED BO
2. VM_BIND path (vma_lock_and_validate) - reject new VMA to DONTNEED BO
3. dma-buf export path - reject export of DONTNEED BO

Let me know if I am missing something.

Thanks,
Arvind


> /Thomas
>
>
>> Matt
>>
>>> /Thomas
>>>
>>>
>>>> Thanks,
>>>> Arvind
>>>>
>>>>
>>>>> Matt
>>>>>
>>>>>> Thanks,
>>>>>> Arvind
>>>>>>> Thanks,
>>>>>>> Thomas
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>>>>> Cc: Himal Prasad Ghimiray
>>>>>>>> <himal.prasad.ghimiray@intel.com>
>>>>>>>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>>>>>>>> ---
>>>>>>>>     drivers/gpu/drm/xe/xe_svm.c        |  1 +
>>>>>>>>     drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
>>>>>>>>     drivers/gpu/drm/xe/xe_vm_madvise.c | 98
>>>>>>>> ++++++++++++++++++++++++++++--
>>>>>>>>     drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
>>>>>>>>     drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
>>>>>>>>     5 files changed, 116 insertions(+), 6 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>>>>>>>> index cda3bf7e2418..329c77aa5c20 100644
>>>>>>>> --- a/drivers/gpu/drm/xe/xe_svm.c
>>>>>>>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>>>>>>>> @@ -318,6 +318,7 @@ static void xe_vma_set_default_attributes(struct xe_vma *vma)
>>>>>>>>  		.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
>>>>>>>>  		.pat_index = vma->attr.default_pat_index,
>>>>>>>>  		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
>>>>>>>> +		.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
>>>>>>>>  	};
>>>>>>>>  	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
>>>>>>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>>>>>>> index 71cf3ce6c62b..e84b9e7cb5eb 100644
>>>>>>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>>>>>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>>>>>>> @@ -39,6 +39,7 @@
>>>>>>>>  #include "xe_tile.h"
>>>>>>>>  #include "xe_tlb_inval.h"
>>>>>>>>  #include "xe_trace_bo.h"
>>>>>>>> +#include "xe_vm_madvise.h"
>>>>>>>>  #include "xe_wa.h"
>>>>>>>>  static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
>>>>>>>> @@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>>>>>>>>  static void xe_vma_destroy_late(struct xe_vma *vma)
>>>>>>>>  {
>>>>>>>>  	struct xe_vm *vm = xe_vma_vm(vma);
>>>>>>>> [remaining hunks snipped; the same patch is quoted in full earlier in this thread]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects
  2026-02-11 15:46 ` [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Matthew Brost
@ 2026-02-25 10:10   ` Yadav, Arvind
  0 siblings, 0 replies; 36+ messages in thread
From: Yadav, Arvind @ 2026-02-25 10:10 UTC (permalink / raw)
  To: Matthew Brost, Souza, Jose
  Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom, pallavi.mishra


On 11-02-2026 21:16, Matthew Brost wrote:
> On Wed, Feb 11, 2026 at 08:56:29PM +0530, Arvind Yadav wrote:
>
> I have a feeling that from the KMD POV we are getting close to this
> being ready to merge. What is the status of a UMD PR to use this
> feature, as that is a prerequisite to merging?
>
> It is also likely time to start collecting acks from the UMD teams on
> the uAPI patch.

Hi Jose,

Since this is now close to merging, can you or someone from the UMD
side ack the uAPI patch?

Thanks,
Arvind

>
> Matt
>
>> This patch series introduces comprehensive support for purgeable buffer objects
>> in the Xe driver, enabling userspace to provide memory usage hints for better
>> memory management under system pressure.
>>
>> Overview:
>>
>> Purgeable memory allows applications to mark buffer objects as "not currently
>> needed" (DONTNEED), making them eligible for kernel reclamation during memory
>> pressure. This helps prevent OOM conditions and enables more efficient GPU
>> memory utilization for workloads with temporary or regeneratable data (caches,
>> intermediate results, decoded frames, etc.).
>>
>> Purgeable BO Lifecycle:
>> 1. WILLNEED (default): BO actively needed, kernel preserves backing store
>> 2. DONTNEED (user hint): BO contents discardable, eligible for purging
>> 3. PURGED (kernel action): Backing store reclaimed during memory pressure
>>
>> Key Design Principles:
>>    - i915 compatibility: "Once purged, always purged" semantics - purged BOs
>>      remain permanently invalid and must be destroyed/recreated
>>    - Per-VMA state tracking: Each VMA tracks its own purgeable state, BO is
>>      only marked DONTNEED when ALL VMAs across ALL VMs agree (Thomas Hellström)
>>    - Safety first: Imported/exported dma-bufs blocked from purgeable state -
>>      no visibility into external device usage (Matt Roper)
>>    - Multiple protection layers: Validation in madvise, VM bind, mmap, and
>>      fault handlers
>>    - Async TLB invalidation: Uses xe_bo_trigger_rebind() for non-blocking
>>      GPU mapping invalidation
>>    - Scratch PTE support: Fault-mode VMs use scratch pages for safe zero reads
>>      on purged BO access.
>>    - Purgeable state is not applied to imported/exported dma-bufs,
>>      those BOs always behave as WILLNEED.
>>    - TTM shrinker integration: Encapsulated helpers manage xe_ttm_tt->purgeable
>>      flag and shrinker page accounting (shrinkable vs purgeable buckets)
>>
>> v2 Changes:
>>    - Reordered patches: Moved shared BO helper before main implementation for
>>      proper dependency order
>>    - Fixed reference counting in mmap offset validation (use drm_gem_object_put)
>>    - Removed incorrect claims about madvise(WILLNEED) restoring purged BOs
>>    - Fixed error code documentation inconsistencies
>>    - Initialize purge_state_val fields to prevent kernel memory leaks
>>    - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas Hellström)
>>    - Add NULL rebind with scratch PTEs for fault mode (Thomas Hellström)
>>    - Implement i915-compatible retained field logic (Thomas Hellström)
>>    - Skip BO validation for purged BOs in page fault handler (crash fix)
>>    - Add scratch VM check in page fault path (non-scratch VMs fail fault)
>>
>> v3 Changes (addressing Matt and Thomas Hellström feedback):
>>    - Per-VMA purgeable state tracking: Added xe_vma->purgeable_state field
>>    - Complete VMA check: xe_bo_all_vmas_dontneed() walks all VMAs across all
>>      VMs to ensure unanimous DONTNEED before marking BO purgeable
>>    - VMA unbind recheck: Added xe_bo_recheck_purgeable_on_vma_unbind() to
>>      re-evaluate BO state when VMAs are destroyed
>>    - Block external dma-bufs: Added xe_bo_is_external_dmabuf() check using
>>      drm_gem_is_imported() and obj->dma_buf to prevent purging imported/exported BOs
>>    - Consistent lockdep enforcement: Added xe_bo_assert_held() to all helpers
>>      that access madv_purgeable state
>>    - Simplified page table logic: Renamed is_null to is_null_or_purged in
>>      xe_pt_stage_bind_entry() - purged BOs treated identically to null VMAs
>>    - Removed unnecessary checks: Dropped redundant "&& bo" check in xe_ttm_bo_purge()
>>    - Xe-specific warnings: Changed drm_warn() to XE_WARN_ON() in purge path
>>    - Moved purge checks under locks: Purge state validation now done after
>>      acquiring dma-resv lock in vma_lock_and_validate() and xe_pagefault_begin()
>>    - Race-free fault handling: Removed unlocked purge check from
>>      xe_pagefault_handle_vma(), moved to locked xe_pagefault_begin()
>>    - Shrinker helper functions: Added xe_bo_set_purgeable_shrinker() and
>>      xe_bo_clear_purgeable_shrinker() to encapsulate TTM purgeable flag updates
>>      and shrinker page accounting, improving code clarity and maintainability
>>
>> v4 Changes (addressing Matt and Thomas Hellström feedback):
>>    - UAPI: Removed '__u64 reserved' field from purge_state_val union to fit
>>      16-byte size constraint (Matt)
>>    - Changed madv_purgeable from atomic_t to u32 across all patches (Matt)
>>    - CPU fault handling: Added purged check to fastpath (xe_bo_cpu_fault_fastpath)
>>      to prevent hang when accessing existing mmap of purged BO
>>
>> v5 Changes (addressing Matt and Thomas Hellström feedback):
>>    - Add locking documentation to madv_purgeable field comment (Matt)
>>    - Introduce xe_bo_set_purgeable_state() helper (void return) to centralize
>>      madv_purgeable updates with xe_bo_assert_held() and state transition
>>      validation using explicit enum checks (no transition out of PURGED) (Matt)
>>    - Make xe_ttm_bo_purge() return int and propagate failures from
>>      xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g. no_wait_gpu
>>      paths) rather than silently ignoring (Matt)
>>    - Replace drm_WARN_ON with xe_assert for better Xe-specific assertions (Matt)
>>    - Hook purgeable handling into madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
>>      instead of special-case path in xe_vm_madvise_ioctl() (Matt)
>>    - Track purgeable retained return via xe_madvise_details and perform
>>      copy_to_user() from xe_madvise_details_fini() after locks are dropped (Matt)
>>    - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
>>      __maybe_unused on madvise_purgeable() to maintain bisectability until
>>      shrinker integration is complete in final patch (Matt)
>>    - Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
>>      right after drm_gpuva_unlink() where we already hold the BO lock,
>>      drop the trylock-based late destroy path (Matt)
>>    - Move purgeable_state into xe_vma_mem_attr with the other madvise
>>      attributes (Matt)
>>    - Drop READ_ONCE since the BO lock already protects us (Matt)
>>    - Keep returning false when there are no VMAs - otherwise we'd mark
>>      BOs purgeable without any user hint (Matt)
>>    - Use struct xe_vma_lock_and_validate_flags instead of multiple bool
>>      parameters to improve readability and prevent argument transposition (Matt)
>>    - Fix LRU crash while running shrink test
>>    - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
>>    - Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)
>>
>> Arvind Yadav (8):
>>    drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
>>    drm/xe/madvise: Implement purgeable buffer object support
>>    drm/xe/bo: Handle CPU faults on purged buffer objects
>>    drm/xe/vm: Prevent binding of purged buffer objects
>>    drm/xe/madvise: Implement per-VMA purgeable state tracking
>>    drm/xe/madvise: Block imported and exported dma-bufs
>>    drm/xe/bo: Add purgeable shrinker state helpers
>>    drm/xe/madvise: Enable purgeable buffer object IOCTL support
>>
>> Himal Prasad Ghimiray (1):
>>    drm/xe/uapi: Add UAPI support for purgeable buffer objects
>>
>>   drivers/gpu/drm/xe/xe_bo.c         | 187 ++++++++++++++++++++--
>>   drivers/gpu/drm/xe/xe_bo.h         |  60 +++++++
>>   drivers/gpu/drm/xe/xe_bo_types.h   |   6 +
>>   drivers/gpu/drm/xe/xe_pagefault.c  |  12 ++
>>   drivers/gpu/drm/xe/xe_pt.c         |  40 ++++-
>>   drivers/gpu/drm/xe/xe_vm.c         |  90 +++++++++--
>>   drivers/gpu/drm/xe/xe_vm_madvise.c | 249 +++++++++++++++++++++++++++++
>>   drivers/gpu/drm/xe/xe_vm_madvise.h |   3 +
>>   drivers/gpu/drm/xe/xe_vm_types.h   |  11 ++
>>   include/uapi/drm/xe_drm.h          |  44 +++++
>>   10 files changed, 667 insertions(+), 35 deletions(-)
>>
>> -- 
>> 2.43.0
>>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking
  2026-02-25  9:40                 ` Yadav, Arvind
@ 2026-02-25 18:32                   ` Matthew Brost
  0 siblings, 0 replies; 36+ messages in thread
From: Matthew Brost @ 2026-02-25 18:32 UTC (permalink / raw)
  To: Yadav, Arvind
  Cc: Thomas Hellström, intel-xe, himal.prasad.ghimiray,
	pallavi.mishra

On Wed, Feb 25, 2026 at 03:10:46PM +0530, Yadav, Arvind wrote:
> 
> On 25-02-2026 14:48, Thomas Hellström wrote:
> > On Wed, 2026-02-25 at 01:04 -0800, Matthew Brost wrote:
> > > On Wed, Feb 25, 2026 at 09:21:10AM +0100, Thomas Hellström wrote:
> > > > On Wed, 2026-02-25 at 11:05 +0530, Yadav, Arvind wrote:
> > > > > On 24-02-2026 22:06, Matthew Brost wrote:
> > > > > > On Tue, Feb 24, 2026 at 08:37:44PM +0530, Yadav, Arvind wrote:
> > > > > > > On 24-02-2026 18:18, Thomas Hellström wrote:
> > > > > > > > On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> > > > > > > > > Track purgeable state per-VMA instead of using a coarse
> > > > > > > > > shared
> > > > > > > > > BO check. This prevents purging shared BOs until all VMAs
> > > > > > > > > across
> > > > > > > > > all VMs are marked DONTNEED.
> > > > > > > > > 
> > > > > > > > > Add xe_bo_all_vmas_dontneed() to check all VMAs before
> > > > > > > > > marking
> > > > > > > > > a BO purgeable. Add
> > > > > > > > > xe_bo_recheck_purgeable_on_vma_unbind()
> > > > > > > > > to
> > > > > > > > > handle state transitions when VMAs are destroyed - if all
> > > > > > > > > remaining VMAs are DONTNEED the BO can become purgeable,
> > > > > > > > > or
> > > > > > > > > if
> > > > > > > > > no VMAs remain it transitions to WILLNEED.
> > > > > > > > > 
> > > > > > > > > The per-VMA purgeable_state field stores the madvise hint
> > > > > > > > > for
> > > > > > > > > each mapping. Shared BOs can only be purged when all VMAs
> > > > > > > > > unanimously indicate DONTNEED.
> > > > > > > > > 
> > > > > > > > > One thing to note: when the last VMA goes away, we
> > > > > > > > > default
> > > > > > > > > back to
> > > > > > > > > WILLNEED. DONTNEED is a per-mapping hint, and without any
> > > > > > > > > mappings
> > > > > > > > > there is no remaining madvise state to justify purging.
> > > > > > > > > This
> > > > > > > > > prevents
> > > > > > > > > BOs from becoming purgeable solely due to being
> > > > > > > > > temporarily
> > > > > > > > > unmapped.
> > > > > > > > > 
> > > > > > > > > v3:
> > > > > > > > >      - This addresses Thomas Hellström's feedback: "loop
> > > > > > > > > over
> > > > > > > > > all vmas
> > > > > > > > >        attached to the bo and check that they all say
> > > > > > > > > DONTNEED. This
> > > > > > > > > will
> > > > > > > > >        also need a check at VMA unbinding"
> > > > > > > > > 
> > > > > > > > > v4:
> > > > > > > > >      - @madv_purgeable atomic_t → u32 change across all
> > > > > > > > > relevant
> > > > > > > > >        patches (Matt)
> > > > > > > > > 
> > > > > > > > > v5:
> > > > > > > > >      - Call xe_bo_recheck_purgeable_on_vma_unbind() from
> > > > > > > > > xe_vma_destroy()
> > > > > > > > >        right after drm_gpuva_unlink() where we already
> > > > > > > > > hold
> > > > > > > > > the BO lock,
> > > > > > > > >        drop the trylock-based late destroy path (Matt)
> > > > > > > > >      - Move purgeable_state into xe_vma_mem_attr with the
> > > > > > > > > other madvise
> > > > > > > > >        attributes (Matt)
> > > > > > > > >      - Drop READ_ONCE since the BO lock already protects
> > > > > > > > > us
> > > > > > > > > (Matt)
> > > > > > > > >      - Keep returning false when there are no VMAs -
> > > > > > > > > otherwise
> > > > > > > > > we'd mark
> > > > > > > > >        BOs purgeable without any user hint (Matt)
> > > > > > > > >      - Use xe_bo_set_purgeable_state() instead of direct
> > > > > > > > > initialization(Matt)
> > > > > > > > >      - use xe_assert instead of drm_war (Thomas)
> > > > > > > > Typo.
> > > > > > > Noted,
> > > > > > > 
> > > > > > > > There were also a couple of review issues in my reply here:
> > > > > > > > 
> > > > > > > > https://patchwork.freedesktop.org/patch/699451/?series=156651&rev=5
> > > > > > > > 
> > > > > > > > that were never addressed or at least commented upon.
> > > > > > > > 
> > > > > > > > The comment there on retaining purgeable state after the
> > > > > > > > last
> > > > > > > > vma is
> > > > > > > > unmapped could be discussed, though.
> > > > > > > > 
> > > > > > > > Let's say we unmap a vma marking a bo purgeable. It then
> > > > > > > > becomes either
> > > > > > > > purged or non-purgeable.
> > > > > > > > 
> > > > > > > > Then an app tries to access it either using a new vma or
> > > > > > > > CPU
> > > > > > > > map. Then
> > > > > > > > it will typically succeed, or might occasionally fail if
> > > > > > > > the bo
> > > > > > > > happened to be purged in between.
> > > > > > > > 
> > > > > > > > How do we handle new vma map requests and cpu-faults to a
> > > > > > > > bo in
> > > > > > > > purgeable state? Do we block those?
> > > > > > > @Thomas,
> > > > > > > 
> > > > > > > The implementation already blocks new access to purged BOs:
> > > > > > >    1. New VMA mappings (Patch 0005): vma_lock_and_validate()
> > > > > > > rejects MAP
> > > > > > > operations to purged BOs with -EINVAL via the check_purged
> > > > > > > flag.
> > > > > > >    2. CPU faults (Patch 0004): Both xe_bo_cpu_prep() and
> > > > > > > xe_gem_mmap_offset()
> > > > > > > return errors (-EFAULT / VM_FAULT_SIGBUS) when accessing
> > > > > > > purged
> > > > > > > BOs.
> > > > > > >    3. "Once purged, always purged": Even when the last VMA is
> > > > > > > unmapped,
> > > > > > > xe_bo_recompute_purgeable_state() preserves the PURGED state
> > > > > > > - it
> > > > > > > never
> > > > > > > transitions back to WILLNEED or DONTNEED (see early return at
> > > > > > > the
> > > > > > > top of the
> > > > > > > function).
> > > > > > > 
> > > > > > > The only way forward for the application is to destroy the
> > > > > > > purged
> > > > > > > BO and
> > > > > > > create a new one.
> > > > > > > 
> > > > > > > Regarding the 'no VMAs → WILLNEED' logic: this only applies
> > > > > > > to
> > > > > > > non-purged
> > > > > > > BOs that happen to be temporarily unmapped. Purged BOs remain
> > > > > > > permanently
> > > > > > > invalid.
> > > > > > So I think xe_bo_all_vmas_dontneed() isn't 100% correct...
> > > > > > 
> > > > > > I think should return an enum...
> > > > > > 
> > > > > > enum xe_bo_vmas_purge_state {	/* Maybe a better name? */
> > > > > > 	XE_BO_VMAS_STATE_DONTNEED = 0,
> > > > > > 	XE_BO_VMAS_STATE_WILLNEED = 1,
> > > > > > 	XE_BO_VMAS_STATE_NO_VMAS = 2,
> > > > > > };
> > > > > > 
> > > > > > 
> > > > > > Then in xe_bo_recompute_purgeable_state() something like this:
> > > > > > 
> > > > > > void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> > > > > > {
> > > > > > 	enum xe_bo_vma_purge_state state;
> > > > > > 
> > > > > > 	if (!bo)
> > > > > > 		return;
> > > > > > 
> > > > > > 	xe_bo_assert_held(bo);
> > > > > > 
> > > > > > 	/*
> > > > > > 	 * Once purged, always purged. Cannot transition back
> > > > > > to
> > > > > > WILLNEED.
> > > > > > 	 * This matches i915 semantics where purged BOs are
> > > > > > permanently invalid.
> > > > > > 	 */
> > > > > > 	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> > > > > > 		return;
> > > > > > 
> > > > > > 	state = xe_bo_all_vmas_dontneed(bo);
> > > > > > 	if (state == XE_BO_VMAS_STATE_DONTNEED) {
> > > > > > 		/* All VMAs are DONTNEED - mark BO purgeable
> > > > > > */
> > > > > > 		if (bo->madv_purgeable !=
> > > > > > XE_MADV_PURGEABLE_DONTNEED)
> > > > > > 			xe_bo_set_purgeable_state(bo,
> > > > > > XE_MADV_PURGEABLE_DONTNEED);
> > > > > > 	} else if (state == XE_BO_VMAS_STATE_WILLNEED) {
> > > > > > 		/* At least one VMA is WILLNEED - BO must not
> > > > > > be
> > > > > > purgeable */
> > > > > > 		if (bo->madv_purgeable !=
> > > > > > XE_MADV_PURGEABLE_WILLNEED)
> > > > > > 			xe_bo_set_purgeable_state(bo,
> > > > > > XE_MADV_PURGEABLE_WILLNEED);
> > > > > > 	}
> > > > > > }
> > > > > > 
> > > > > > I think would avoid the last unbind unintentionally flipping
> > > > > > from
> > > > > > DONTNEED -> WILLNEED.
> > > > > > 
> > > > > > What do you both of you (Thomas, Arvind) think?
> > > > > 
> > > > > @Matt,
> > > > > 
> > > > > Good catch—I missed that transition. You’re right: when the last
> > > > > VMA
> > > > > is
> > > > > unmapped from a DONTNEED BO, the current logic can flip it back
> > > > > to
> > > > > WILLNEED, which discards the user’s hint. That’s wrong.
> > > > > 
> > > > >     I like the enum approach to distinguish:
> > > > >       -  *_DONTNEED: all VMAs are DONTNEED
> > > > >       - *_WILLNEED: at least one VMA is WILLNEED
> > > > >       - *_NO_VMAS: no VMAs present
> > > > > 
> > > > > With that, xe_bo_recompute_purgeable_state() can avoid changing
> > > > > state
> > > > > on
> > > > > NO_VMAS and preserve "once purged, always purged," matching i915
> > > > > semantics. This also addresses Thomas's earlier question about
> > > > > new
> > > > > VMA/CPU access to purgeable BOs—the enum makes it clear we only
> > > > > transition on explicit VMA state, not on absence of VMAs.
> > > > > 
> > > > > I'll rework xe_bo_all_vmas_dontneed() to return the enum and
> > > > > update
> > > > > the
> > > > > recompute path accordingly.
> > > > > 
> > > > > 
> > > > > @Thomas,
> > > > > 
> > > > > Does this direction look good to you? If yes, I will send updated
> > > > > patch.
> > > > Yes, but I'm also as mentioned concerned about whether we can add
> > > > new
> > > > vmas, cpu faults and exports in the DONTNEED state. If we can do
> > > > that,
> > > > it might succeed most of the time, making a well-behaved
> > > > appearance in user-space, but if on occasion the bo gets purged,
> > > > the app would unexpectedly fail.
> > > > 
> > > > So do we block new vmas cpu-faults and exports in the DONTNEED
> > > > state?
> > > > 
> > > I’ve thought about the same thing. The new vmas semantics are a bit
> > > odd,
> > > because if you unbind the BO in DONTNEED and disallow creating new
> > > VMAs,
> > > the BO can never be used again—madvise requires a VMA to operate, so
> > > you can't move a BO out of DONTNEED. Maybe that’s acceptable or even
> > > desirable, but it would need to be documented, and ultimately we’d
> > > need
> > > a UMD ack for those semantics.
> > > 
> > > CPU faults or exports in DONTNEED also seem like they should be
> > > disallowed, with less odd semantics, but again, this should be
> > > documented and require a UMD ack.
> > Hmm. With DONTNEED we really want to do as little as possible. So we
> > shouldn't go into any sort of unmapping of GPU or CPU PTEs. That means
> > the end

I agree DONTNEED should be lightweight and not invalidate anything.

> > behaviour might still be a bit erratic on access of a DONTNEED bo,
> > depending on previous access pattern we may or may not fault.
> > 
> > So we should probably disallow mmap(), VM_BIND and export, but allow
> > CPU- and GPU pagefaults. And document.
> > 

I think this is reasonable, but any CPU/GPU access after marking memory
as DONTNEED is fundamentally a user bug, and behavior will be erratic.
Consider that if we don’t invalidate CPU/GPU mappings on DONTNEED, those
accesses may appear to work for a while, but once the memory is actually
purged, things will suddenly fail. The only way to make this consistent
would be to invalidate CPU/GPU pages at DONTNEED time + disallow all
access, but I agree we don’t want to do that—DONTNEED should be as
lightweight as possible.

So perhaps the best we can do is restrict future access requests
(mmap(), VM_BIND, export, etc.) and state that any existing access after
DONTNEED is undefined behavior. More on this below.
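To make the proposed rules concrete, here is a minimal userspace model of
the state machine being discussed (all names here are illustrative, not
the actual Xe driver API): new access requests fail outside WILLNEED,
purging is only legal from DONTNEED, and PURGED is permanent.

```c
#include <errno.h>

enum purge_state {
	PURGEABLE_WILLNEED,
	PURGEABLE_DONTNEED,
	PURGEABLE_PURGED,
};

struct model_bo {
	enum purge_state state;
};

/* New access requests (mmap(), VM_BIND, export) are rejected unless WILLNEED. */
static int model_bo_new_access(struct model_bo *bo)
{
	return bo->state == PURGEABLE_WILLNEED ? 0 : -EINVAL;
}

/* Purging is only legal from DONTNEED, and is a one-way transition. */
static int model_bo_purge(struct model_bo *bo)
{
	if (bo->state != PURGEABLE_DONTNEED)
		return -EINVAL;
	bo->state = PURGEABLE_PURGED;
	return 0;
}

/* "Once purged, always purged": madvise can never leave PURGED. */
static int model_bo_madvise(struct model_bo *bo, enum purge_state hint)
{
	if (bo->state == PURGEABLE_PURGED)
		return -EFAULT;
	bo->state = hint;
	return 0;
}
```

Note this model deliberately does nothing on the WILLNEED -> DONTNEED
transition itself, matching the point above that DONTNEED should stay
lightweight; only subsequent access attempts observe the state.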

> > Speaking of pagefaults, I noticed that when *purged*, it looks like we
> > populate with scratch PTEs also on faulting VMs. I think this is the
> > correct approach, though, to avoid the prefetch pagefaults wreaking
> > havoc if accessing vmas with purged bos.
> 
> 
> @Thomas, @Matt,
> 
> Got it. So the plan is:
> 
> DONTNEED BOs:
>    - Block: new mmap(), VM_BIND, dma-buf export
>    - Allow: CPU/GPU faults on existing mappings (fail if purged)

Torn on CPU/GPU faults here, since the DONTNEED → PURGED transition can
happen at any time, meaning a fault can eventually fail anyway. So why
not just fail the fault immediately at DONTNEED?
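The two fault policies under discussion can be sketched side by side
(purely illustrative names, not driver code): option A fails faults only
once the BO is actually purged, option B fails them as soon as the BO is
DONTNEED, trading occasional success for deterministic behavior.

```c
#include <stdbool.h>

enum model_state { MODEL_WILLNEED, MODEL_DONTNEED, MODEL_PURGED };

/* Option A (the plan quoted above): only purged BOs fail the fault. */
static bool fault_fails_option_a(enum model_state s)
{
	return s == MODEL_PURGED;
}

/*
 * Option B (raised here): fail as soon as the BO is DONTNEED, since the
 * DONTNEED -> PURGED transition can happen at any time and would fail
 * the fault anyway; this just makes the failure deterministic.
 */
static bool fault_fails_option_b(enum model_state s)
{
	return s != MODEL_WILLNEED;
}
```

The only state where the options disagree is DONTNEED, which is exactly
the point that needs to be closed on before the next rev.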

>    - Keep PTEs intact, just mark as purgeable

Yes, agreed. Invalidations are pretty expensive outside of ring
instructions, so issuing them at DONTNEED defeats the purpose of
DONTNEED being lightweight.

> 
> I'll add checks in:
> 1. xe_gem_mmap_offset() - reject new mmap to DONTNEED BO
> 2. VM_BIND path (vma_lock_and_validate) - reject new VMA to DONTNEED BO
> 3. dma-buf export path - reject export of DONTNEED BO
> 

I'm fine with this if that's the preference, but let's close on this
point—especially regarding CPU/GPU faults—before the next rev.

Matt

> Let me know if I am missing something.
> 
> Thanks,
> Arvind
> 
> 
> > /Thomas
> > 
> > 
> > > Matt
> > > 
> > > > /Thomas
> > > > 
> > > > 
> > > > > Thanks,
> > > > > Arvind
> > > > > 
> > > > > 
> > > > > > Matt
> > > > > > 
> > > > > > > Thanks,
> > > > > > > Arvind
> > > > > > > > Thanks,
> > > > > > > > Thomas
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > > > > > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > > > > > > > Cc: Himal Prasad Ghimiray
> > > > > > > > > <himal.prasad.ghimiray@intel.com>
> > > > > > > > > Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> > > > > > > > > ---
> > > > > > > > >     drivers/gpu/drm/xe/xe_svm.c        |  1 +
> > > > > > > > >     drivers/gpu/drm/xe/xe_vm.c         |  9 ++-
> > > > > > > > >     drivers/gpu/drm/xe/xe_vm_madvise.c | 98
> > > > > > > > > ++++++++++++++++++++++++++++--
> > > > > > > > >     drivers/gpu/drm/xe/xe_vm_madvise.h |  3 +
> > > > > > > > >     drivers/gpu/drm/xe/xe_vm_types.h   | 11 ++++
> > > > > > > > >     5 files changed, 116 insertions(+), 6 deletions(-)
> > > > > > > > > 
> > > > > > > > > diff --git a/drivers/gpu/drm/xe/xe_svm.c
> > > > > > > > > b/drivers/gpu/drm/xe/xe_svm.c
> > > > > > > > > index cda3bf7e2418..329c77aa5c20 100644
> > > > > > > > > --- a/drivers/gpu/drm/xe/xe_svm.c
> > > > > > > > > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > > > > > > > > @@ -318,6 +318,7 @@ static void
> > > > > > > > > xe_vma_set_default_attributes(struct
> > > > > > > > > xe_vma *vma)
> > > > > > > > >     		.preferred_loc.migration_policy =
> > > > > > > > > DRM_XE_MIGRATE_ALL_PAGES,
> > > > > > > > >     		.pat_index = vma-
> > > > > > > > > > attr.default_pat_index,
> > > > > > > > >     		.atomic_access =
> > > > > > > > > DRM_XE_ATOMIC_UNDEFINED,
> > > > > > > > > +		.purgeable_state =
> > > > > > > > > XE_MADV_PURGEABLE_WILLNEED,
> > > > > > > > >     	};
> > > > > > > > >     	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
> > > > > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm.c
> > > > > > > > > b/drivers/gpu/drm/xe/xe_vm.c
> > > > > > > > > index 71cf3ce6c62b..e84b9e7cb5eb 100644
> > > > > > > > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > > > > > > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > > > > > > > @@ -39,6 +39,7 @@
> > > > > > > > >     #include "xe_tile.h"
> > > > > > > > >     #include "xe_tlb_inval.h"
> > > > > > > > >     #include "xe_trace_bo.h"
> > > > > > > > > +#include "xe_vm_madvise.h"
> > > > > > > > >     #include "xe_wa.h"
> > > > > > > > >     static struct drm_gem_object *xe_vm_obj(struct xe_vm
> > > > > > > > > *vm)
> > > > > > > > > @@ -1085,6 +1086,7 @@ static struct xe_vma
> > > > > > > > > *xe_vma_create(struct
> > > > > > > > > xe_vm *vm,
> > > > > > > > >     static void xe_vma_destroy_late(struct xe_vma *vma)
> > > > > > > > >     {
> > > > > > > > >     	struct xe_vm *vm = xe_vma_vm(vma);
> > > > > > > > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > > > > > > > >     	if (vma->ufence) {
> > > > > > > > >     		xe_sync_ufence_put(vma->ufence);
> > > > > > > > > @@ -1099,7 +1101,7 @@ static void
> > > > > > > > > xe_vma_destroy_late(struct
> > > > > > > > > xe_vma
> > > > > > > > > *vma)
> > > > > > > > >     	} else if (xe_vma_is_null(vma) ||
> > > > > > > > > xe_vma_is_cpu_addr_mirror(vma)) {
> > > > > > > > >     		xe_vm_put(vm);
> > > > > > > > >     	} else {
> > > > > > > > > -		xe_bo_put(xe_vma_bo(vma));
> > > > > > > > > +		xe_bo_put(bo);
> > > > > > > > >     	}
> > > > > > > > >     	xe_vma_free(vma);
> > > > > > > > > @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct
> > > > > > > > > dma_fence
> > > > > > > > > *fence,
> > > > > > > > >     static void xe_vma_destroy(struct xe_vma *vma, struct
> > > > > > > > > dma_fence
> > > > > > > > > *fence)
> > > > > > > > >     {
> > > > > > > > >     	struct xe_vm *vm = xe_vma_vm(vma);
> > > > > > > > > +	struct xe_bo *bo = xe_vma_bo(vma);
> > > > > > > > >     	lockdep_assert_held_write(&vm->lock);
> > > > > > > > >     	xe_assert(vm->xe, list_empty(&vma-
> > > > > > > > > > combined_links.destroy));
> > > > > > > > > @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct
> > > > > > > > > xe_vma *vma,
> > > > > > > > > struct dma_fence *fence)
> > > > > > > > >     		xe_assert(vm->xe, vma->gpuva.flags &
> > > > > > > > > XE_VMA_DESTROYED);
> > > > > > > > >     		xe_userptr_destroy(to_userptr_vma(vma));
> > > > > > > > >     	} else if (!xe_vma_is_null(vma) &&
> > > > > > > > > !xe_vma_is_cpu_addr_mirror(vma)) {
> > > > > > > > > -		xe_bo_assert_held(xe_vma_bo(vma));
> > > > > > > > > +		xe_bo_assert_held(bo);
> > > > > > > > >     		drm_gpuva_unlink(&vma->gpuva);
> > > > > > > > > +		xe_bo_recompute_purgeable_state(bo);
> > > > > > > > >     	}
> > > > > > > > >     	xe_vm_assert_held(vm);
> > > > > > > > > @@ -2681,6 +2685,7 @@ static int
> > > > > > > > > vm_bind_ioctl_ops_parse(struct xe_vm
> > > > > > > > > *vm, struct drm_gpuva_ops *ops,
> > > > > > > > >     				.atomic_access =
> > > > > > > > > DRM_XE_ATOMIC_UNDEFINED,
> > > > > > > > >     				.default_pat_index = op-
> > > > > > > > > > map.pat_index,
> > > > > > > > >     				.pat_index = op-
> > > > > > > > > > map.pat_index,
> > > > > > > > > +				.purgeable_state =
> > > > > > > > > XE_MADV_PURGEABLE_WILLNEED,
> > > > > > > > >     			};
> > > > > > > > >     			flags |= op->map.vma_flags &
> > > > > > > > > XE_VMA_CREATE_MASK;
> > > > > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > > > > b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > > > > index d9cfba7bfe0b..c184426546a2 100644
> > > > > > > > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > > > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > > > > > > > @@ -12,6 +12,7 @@
> > > > > > > > >     #include "xe_pat.h"
> > > > > > > > >     #include "xe_pt.h"
> > > > > > > > >     #include "xe_svm.h"
> > > > > > > > > +#include "xe_vm.h"
> > > > > > > > >     struct xe_vmas_in_madvise_range {
> > > > > > > > >     	u64 addr;
> > > > > > > > > @@ -183,6 +184,89 @@ static void madvise_pat_index(struct
> > > > > > > > > xe_device
> > > > > > > > > *xe, struct xe_vm *vm,
> > > > > > > > >     	}
> > > > > > > > >     }
> > > > > > > > > +/**
> > > > > > > > > + * xe_bo_all_vmas_dontneed() - Check if all VMAs of a BO
> > > > > > > > > are
> > > > > > > > > marked
> > > > > > > > > DONTNEED
> > > > > > > > > + * @bo: Buffer object
> > > > > > > > > + *
> > > > > > > > > + * Check all VMAs across all VMs to determine if BO can
> > > > > > > > > be
> > > > > > > > > purged.
> > > > > > > > > + * Shared BOs require unanimous DONTNEED state from all
> > > > > > > > > mappings.
> > > > > > > > > + *
> > > > > > > > > + * Caller must hold BO dma-resv lock.
> > > > > > > > > + *
> > > > > > > > > + * Return: true if all VMAs are DONTNEED, false
> > > > > > > > > otherwise
> > > > > > > > > + */
> > > > > > > > > +static bool xe_bo_all_vmas_dontneed(struct xe_bo *bo)
> > > > > > > > > +{
> > > > > > > > > +	struct drm_gpuvm_bo *vm_bo;
> > > > > > > > > +	struct drm_gpuva *gpuva;
> > > > > > > > > +	struct drm_gem_object *obj = &bo->ttm.base;
> > > > > > > > > +	bool has_vmas = false;
> > > > > > > > > +
> > > > > > > > > +	xe_bo_assert_held(bo);
> > > > > > > > > +
> > > > > > > > > +	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> > > > > > > > > +		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
> > > > > > > > > +			struct xe_vma *vma =
> > > > > > > > > gpuva_to_vma(gpuva);
> > > > > > > > > +
> > > > > > > > > +			has_vmas = true;
> > > > > > > > > +
> > > > > > > > > +			/* Any non-DONTNEED VMA prevents
> > > > > > > > > purging */
> > > > > > > > > +			if (vma->attr.purgeable_state !=
> > > > > > > > > XE_MADV_PURGEABLE_DONTNEED)
> > > > > > > > > +				return false;
> > > > > > > > > +		}
> > > > > > > > > +	}
> > > > > > > > > +
> > > > > > > > > +	/*
> > > > > > > > > +	 * No VMAs => no mapping-level DONTNEED hint.
> > > > > > > > > +	 * Default to WILLNEED to avoid making BOs
> > > > > > > > > purgeable
> > > > > > > > > without
> > > > > > > > > +	 * explicit user intent.
> > > > > > > > > +	 */
> > > > > > > > > +	if (!has_vmas)
> > > > > > > > > +		return false;
> > > > > > > > > +
> > > > > > > > > +	return true;
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > > +/**
> > > > > > > > > + * xe_bo_recompute_purgeable_state() - Recompute BO
> > > > > > > > > purgeable state
> > > > > > > > > from VMAs
> > > > > > > > > + * @bo: Buffer object
> > > > > > > > > + *
> > > > > > > > > + * Walk all VMAs to determine if BO should be purgeable
> > > > > > > > > or
> > > > > > > > > not.
> > > > > > > > > + * Shared BOs require unanimous DONTNEED state from all
> > > > > > > > > mappings.
> > > > > > > > > + *
> > > > > > > > > + * Locking: Caller must hold BO dma-resv lock. When
> > > > > > > > > iterating GPUVM
> > > > > > > > > lists,
> > > > > > > > > + * VM lock must also be held (write) to prevent
> > > > > > > > > concurrent
> > > > > > > > > VMA
> > > > > > > > > modifications.
> > > > > > > > > + * This is satisfied at both call sites:
> > > > > > > > > + * - xe_vma_destroy(): holds vm->lock write
> > > > > > > > > + * - madvise_purgeable(): holds vm->lock write (from
> > > > > > > > > madvise
> > > > > > > > > ioctl
> > > > > > > > > path)
> > > > > > > > > + *
> > > > > > > > > + * Return: nothing
> > > > > > > > > + */
> > > > > > > > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> > > > > > > > > +{
> > > > > > > > > +	if (!bo)
> > > > > > > > > +		return;
> > > > > > > > > +
> > > > > > > > > +	xe_bo_assert_held(bo);
> > > > > > > > > +
> > > > > > > > > +	/*
> > > > > > > > > +	 * Once purged, always purged. Cannot transition
> > > > > > > > > back to
> > > > > > > > > WILLNEED.
> > > > > > > > > +	 * This matches i915 semantics where purged BOs
> > > > > > > > > are
> > > > > > > > > permanently invalid.
> > > > > > > > > +	 */
> > > > > > > > > +	if (bo->madv_purgeable ==
> > > > > > > > > XE_MADV_PURGEABLE_PURGED)
> > > > > > > > > +		return;
> > > > > > > > > +
> > > > > > > > > +	if (xe_bo_all_vmas_dontneed(bo)) {
> > > > > > > > > +		/* All VMAs are DONTNEED - mark BO
> > > > > > > > > purgeable
> > > > > > > > > */
> > > > > > > > > +		if (bo->madv_purgeable !=
> > > > > > > > > XE_MADV_PURGEABLE_DONTNEED)
> > > > > > > > > +			xe_bo_set_purgeable_state(bo,
> > > > > > > > > XE_MADV_PURGEABLE_DONTNEED);
> > > > > > > > > +	} else {
> > > > > > > > > +		/* At least one VMA is WILLNEED - BO
> > > > > > > > > must
> > > > > > > > > not be
> > > > > > > > > purgeable */
> > > > > > > > > +		if (bo->madv_purgeable !=
> > > > > > > > > XE_MADV_PURGEABLE_WILLNEED)
> > > > > > > > > +			xe_bo_set_purgeable_state(bo,
> > > > > > > > > XE_MADV_PURGEABLE_WILLNEED);
> > > > > > > > > +	}
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > >     /**
> > > > > > > > >      * madvise_purgeable - Handle purgeable buffer object
> > > > > > > > > advice
> > > > > > > > >      * @xe: XE device
> > > > > > > > > @@ -231,14 +315,20 @@ static void __maybe_unused
> > > > > > > > > madvise_purgeable(struct xe_device *xe,
> > > > > > > > >     		switch (op->purge_state_val.val) {
> > > > > > > > >     		case
> > > > > > > > > DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> > > > > > > > > -			xe_bo_set_purgeable_state(bo,
> > > > > > > > > XE_MADV_PURGEABLE_WILLNEED);
> > > > > > > > > +			vmas[i]->attr.purgeable_state =
> > > > > > > > > XE_MADV_PURGEABLE_WILLNEED;
> > > > > > > > > +
> > > > > > > > > +			/* Update BO purgeable state */
> > > > > > > > > +			xe_bo_recompute_purgeable_state(
> > > > > > > > > bo);
> > > > > > > > >     			break;
> > > > > > > > >     		case
> > > > > > > > > DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> > > > > > > > > -			xe_bo_set_purgeable_state(bo,
> > > > > > > > > XE_MADV_PURGEABLE_DONTNEED);
> > > > > > > > > +			vmas[i]->attr.purgeable_state =
> > > > > > > > > XE_MADV_PURGEABLE_DONTNEED;
> > > > > > > > > +
> > > > > > > > > +			/* Update BO purgeable state */
> > > > > > > > > +			xe_bo_recompute_purgeable_state(
> > > > > > > > > bo);
> > > > > > > > >     			break;
> > > > > > > > >     		default:
> > > > > > > > > -			drm_warn(&vm->xe->drm, "Invalid
> > > > > > > > > madvice
> > > > > > > > > value = %d\n",
> > > > > > > > > -				 op-
> > > > > > > > > > purge_state_val.val);
> > > > > > > > > +			/* Should never hit - values
> > > > > > > > > validated in
> > > > > > > > > madvise_args_are_sane() */
> > > > > > > > > +			xe_assert(vm->xe, 0);
> > > > > > > > >     			return;
> > > > > > > > >     		}
> > > > > > > > >     	}
> > > > > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > > > > b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > > > > index b0e1fc445f23..39acd2689ca0 100644
> > > > > > > > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > > > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > > > > > > > @@ -8,8 +8,11 @@
> > > > > > > > >     struct drm_device;
> > > > > > > > >     struct drm_file;
> > > > > > > > > +struct xe_bo;
> > > > > > > > >     int xe_vm_madvise_ioctl(struct drm_device *dev, void
> > > > > > > > > *data,
> > > > > > > > >     			struct drm_file *file);
> > > > > > > > > +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
> > > > > > > > > +
> > > > > > > > >     #endif
> > > > > > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > > > > b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > > > > index 43203e90ee3e..fd563039e8f4 100644
> > > > > > > > > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > > > > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > > > > > @@ -94,6 +94,17 @@ struct xe_vma_mem_attr {
> > > > > > > > >     	 * same as default_pat_index unless overwritten
> > > > > > > > > by
> > > > > > > > > madvise.
> > > > > > > > >     	 */
> > > > > > > > >     	u16 pat_index;
> > > > > > > > > +
> > > > > > > > > +	/**
> > > > > > > > > +	 * @purgeable_state: Purgeable hint for this VMA
> > > > > > > > > mapping
> > > > > > > > > +	 *
> > > > > > > > > +	 * Per-VMA purgeable state from madvise. Valid
> > > > > > > > > states are
> > > > > > > > > WILLNEED (0)
> > > > > > > > > +	 * or DONTNEED (1). Shared BOs require all VMAs
> > > > > > > > > to
> > > > > > > > > be
> > > > > > > > > DONTNEED before
> > > > > > > > > +	 * the BO can be purged. PURGED state exists
> > > > > > > > > only at
> > > > > > > > > BO
> > > > > > > > > level.
> > > > > > > > > +	 *
> > > > > > > > > +	 * Protected by BO dma-resv lock. Set via
> > > > > > > > > DRM_IOCTL_XE_MADVISE.
> > > > > > > > > +	 */
> > > > > > > > > +	u32 purgeable_state;
> > > > > > > > >     };
> > > > > > > > >     struct xe_vma {


* Re: [PATCH v5 1/9] drm/xe/uapi: Add UAPI support for purgeable buffer objects
  2026-02-11 15:26 ` [PATCH v5 1/9] drm/xe/uapi: Add UAPI " Arvind Yadav
  2026-02-24 10:50   ` Thomas Hellström
@ 2026-02-26 17:58   ` Souza, Jose
  2026-02-27  9:32     ` Yadav, Arvind
  1 sibling, 1 reply; 36+ messages in thread
From: Souza, Jose @ 2026-02-26 17:58 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, Yadav,  Arvind
  Cc: Brost, Matthew, Mishra, Pallavi, Ghimiray, Himal Prasad,
	thomas.hellstrom@linux.intel.com

On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
> From: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> 
> Extend the DRM_XE_MADVISE ioctl to support purgeable buffer object
> management by adding DRM_XE_VMA_ATTR_PURGEABLE_STATE attribute type.
> 
> This allows userspace applications to provide memory usage hints to
> the kernel for better memory management under pressure:
> 
> - WILLNEED: Buffer is needed and should not be purged. If the BO was
>   previously purged, retained field returns 0 indicating backing store
>   was lost (once purged, always purged semantics matching i915).
> 
> - DONTNEED: Buffer is not currently needed and may be purged by the
>   kernel under memory pressure to free resources. Only applies to
>   non-shared BOs.
> 
> The implementation includes a 'retained' output field (matching i915's
> drm_i915_gem_madvise.retained) that indicates whether the BO's backing
> store still exists (1) or has been purged (0).
> 
> v2:
>   - Add PURGED state for read-only status, change ioctl to DRM_IOWR,
>     add retained field for i915 compatibility
> 
> v3:
>   - UAPI rule should not be changed (Matthew Brost)
>   - Make 'retained' a userptr (Matthew Brost)
> 
> v4:
>   - You cannot make this part of the union (purge_state_val) larger
>     than the existing union (16 bytes). So just drop the '__u64 reserved'
>     field. (Matt)
> 
> v5:
>   - Update UAPI documentation to clarify retained must be initialized
>     to 0 (Thomas)
> 
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
>  include/uapi/drm/xe_drm.h | 44 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 077e66a682e2..3e2f145e7f8f 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -2099,6 +2099,7 @@ struct drm_xe_madvise {
>  #define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC	0
>  #define DRM_XE_MEM_RANGE_ATTR_ATOMIC		1
>  #define DRM_XE_MEM_RANGE_ATTR_PAT		2
> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE		3
>  	/** @type: type of attribute */
>  	__u32 type;
>  
> @@ -2189,6 +2190,49 @@ struct drm_xe_madvise {
>  			/** @pat_index.reserved: Reserved */
>  			__u64 reserved;
>  		} pat_index;
> +
> +		/**
> +		 * @purge_state_val: Purgeable state configuration
> +		 *
> +		 * Used when @type == DRM_XE_VMA_ATTR_PURGEABLE_STATE.
> +		 *
> +		 * Configures the purgeable state of buffer objects in the specified
> +		 * virtual address range. This allows applications to hint to the kernel
> +		 * about bo's usage patterns for better memory management.
> +		 *
> +		 * Supported values for @purge_state_val.val:
> +		 *  - DRM_XE_VMA_PURGEABLE_STATE_WILLNEED (0): Marks BO as needed.
> +		 *    If BO was purged, returns retained=0 (backing store lost).
> +		 *
> +		 *  - DRM_XE_VMA_PURGEABLE_STATE_DONTNEED (1): Hints that BO is not
> +		 *    currently needed. Kernel may purge it under memory pressure.
> +		 *    Only applies to non-shared BOs. Returns retained=1 if not purged.
> +		 */
> +		struct {
> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED	0
> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED	1
> +			/** @purge_state_val.val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> +			__u32 val;
> +
> +			/* @purge_state_val.pad */
> +			__u32 pad;
> +			/**
> +			 * @purge_state_val.retained: Pointer to output field for backing
> +			 * store status.
> +			 *
> +			 * Userspace must initialize this field to 0 before the
> +			 * ioctl. Kernel writes to it after the operation:
> +			 * - 1 if backing store exists (not purged)
> +			 * - 0 if backing store was purged
> +			 *
> +			 * If userspace fails to initialize to 0, ioctl returns -EINVAL.
> +			 * This ensures a safe default (0 = assume purged) if kernel
> +			 * cannot write the result.
> +			 *
> +			 * Similar to i915's drm_i915_gem_madvise.retained field.
> +			 */
> +			__u64 retained;

why do you need a u32 pad and a u64? why not use pad and drop the last
u64?

> +		} purge_state_val;
>  	};

This is missing a new flag like
DRM_XE_QUERY_CONFIG_FLAG_HAS_NO_COMPRESSION_HINT to tell UMDs if
running Xe KMD has support for purgeable madvise.

>  
>  	/** @reserved: Reserved */


* Re: [PATCH v5 1/9] drm/xe/uapi: Add UAPI support for purgeable buffer objects
  2026-02-26 17:58   ` Souza, Jose
@ 2026-02-27  9:32     ` Yadav, Arvind
  0 siblings, 0 replies; 36+ messages in thread
From: Yadav, Arvind @ 2026-02-27  9:32 UTC (permalink / raw)
  To: Souza, Jose, intel-xe@lists.freedesktop.org
  Cc: Brost, Matthew, Mishra, Pallavi, Ghimiray, Himal Prasad,
	thomas.hellstrom@linux.intel.com


On 26-02-2026 23:28, Souza, Jose wrote:
> On Wed, 2026-02-11 at 20:56 +0530, Arvind Yadav wrote:
>> [...]
>>
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index 077e66a682e2..3e2f145e7f8f 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -2099,6 +2099,7 @@ struct drm_xe_madvise {
>>   #define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC	0
>>   #define DRM_XE_MEM_RANGE_ATTR_ATOMIC		1
>>   #define DRM_XE_MEM_RANGE_ATTR_PAT		2
>> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE		3
>>   	/** @type: type of attribute */
>>   	__u32 type;
>>   
>> @@ -2189,6 +2190,49 @@ struct drm_xe_madvise {
>>   			/** @pat_index.reserved: Reserved */
>>   			__u64 reserved;
>>   		} pat_index;
>> +
>> +		/**
>> +		 * @purge_state_val: Purgeable state configuration
>> +		 *
>> +		 * Used when @type == DRM_XE_VMA_ATTR_PURGEABLE_STATE.
>> +		 *
>> +		 * Configures the purgeable state of buffer objects in the specified
>> +		 * virtual address range. This allows applications to hint to the kernel
>> +		 * about bo's usage patterns for better memory management.
>> +		 *
>> +		 * Supported values for @purge_state_val.val:
>> +		 *  - DRM_XE_VMA_PURGEABLE_STATE_WILLNEED (0): Marks BO as needed.
>> +		 *    If BO was purged, returns retained=0 (backing store lost).
>> +		 *
>> +		 *  - DRM_XE_VMA_PURGEABLE_STATE_DONTNEED (1): Hints that BO is not
>> +		 *    currently needed. Kernel may purge it under memory pressure.
>> +		 *    Only applies to non-shared BOs. Returns retained=1 if not purged.
>> +		 */
>> +		struct {
>> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED	0
>> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED	1
>> +			/** @purge_state_val.val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
>> +			__u32 val;
>> +
>> +			/* @purge_state_val.pad */
>> +			__u32 pad;
>> +			/**
>> +			 * @purge_state_val.retained: Pointer to output field for backing
>> +			 * store status.
>> +			 *
>> +			 * Userspace must initialize this field to 0 before the
>> +			 * ioctl. Kernel writes to it after the operation:
>> +			 * - 1 if backing store exists (not purged)
>> +			 * - 0 if backing store was purged
>> +			 *
>> +			 * If userspace fails to initialize to 0, ioctl returns -EINVAL.
>> +			 * This ensures a safe default (0 = assume purged) if kernel
>> +			 * cannot write the result.
>> +			 *
>> +			 * Similar to i915's drm_i915_gem_madvise.retained field.
>> +			 */
>> +			__u64 retained;
> why do you need a u32 pad and a u64? why not use pad and drop the last
> u64?


The __u64 retained field is a pointer to userspace memory where the 
kernel writes the output, not a direct value. Userspace pointers in uAPI 
must be __u64 to work on both 32-bit and 64-bit systems.

>
>> +		} purge_state_val;
>>   	};
> This is missing a new flag like
> DRM_XE_QUERY_CONFIG_FLAG_HAS_NO_COMPRESSION_HINT to tell UMDs if
> running Xe KMD has support for purgeable madvise.


I will add DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING for purgeable madvise
support.

Thanks,
Arvind

>
>>   
>>   	/** @reserved: Reserved */


end of thread, other threads:[~2026-02-27  9:32 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-11 15:26 [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
2026-02-11 15:26 ` [PATCH v5 1/9] drm/xe/uapi: Add UAPI " Arvind Yadav
2026-02-24 10:50   ` Thomas Hellström
2026-02-26 17:58   ` Souza, Jose
2026-02-27  9:32     ` Yadav, Arvind
2026-02-11 15:26 ` [PATCH v5 2/9] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
2026-02-11 16:00   ` Matthew Brost
2026-02-11 15:26 ` [PATCH v5 3/9] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
2026-02-24 12:21   ` Thomas Hellström
2026-02-24 14:56     ` Yadav, Arvind
2026-02-11 15:26 ` [PATCH v5 4/9] drm/xe/bo: Handle CPU faults on purged buffer objects Arvind Yadav
2026-02-11 15:26 ` [PATCH v5 5/9] drm/xe/vm: Prevent binding of " Arvind Yadav
2026-02-11 16:17   ` Matthew Brost
2026-02-11 15:26 ` [PATCH v5 6/9] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
2026-02-24 12:48   ` Thomas Hellström
2026-02-24 15:07     ` Yadav, Arvind
2026-02-24 16:36       ` Matthew Brost
2026-02-25  5:35         ` Yadav, Arvind
2026-02-25  8:21           ` Thomas Hellström
2026-02-25  9:04             ` Matthew Brost
2026-02-25  9:18               ` Thomas Hellström
2026-02-25  9:40                 ` Yadav, Arvind
2026-02-25 18:32                   ` Matthew Brost
2026-02-11 15:26 ` [PATCH v5 7/9] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
2026-02-24 14:15   ` Thomas Hellström
2026-02-11 15:26 ` [PATCH v5 8/9] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
2026-02-24 14:21   ` Thomas Hellström
2026-02-24 15:09     ` Yadav, Arvind
2026-02-11 15:26 ` [PATCH v5 9/9] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
2026-02-11 15:40   ` Matthew Brost
2026-02-11 15:46 ` [PATCH v5 0/9] drm/xe/madvise: Add support for purgeable buffer objects Matthew Brost
2026-02-25 10:10   ` Yadav, Arvind
2026-02-11 16:21 ` ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev6) Patchwork
2026-02-11 16:22 ` ✓ CI.KUnit: success " Patchwork
2026-02-11 17:11 ` ✗ Xe.CI.BAT: failure " Patchwork
2026-02-13  1:15 ` ✗ Xe.CI.FULL: " Patchwork
