* [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects
@ 2026-03-23 9:30 Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 01/12] drm/xe/uapi: Add UAPI " Arvind Yadav
` (16 more replies)
0 siblings, 17 replies; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
This patch series introduces comprehensive support for purgeable buffer objects
in the Xe driver, enabling userspace to provide memory usage hints for better
memory management under system pressure.
Overview:
Purgeable memory allows applications to mark buffer objects as "not currently
needed" (DONTNEED), making them eligible for kernel reclamation during memory
pressure. This helps prevent OOM conditions and enables more efficient GPU
memory utilization for workloads with temporary or regeneratable data (caches,
intermediate results, decoded frames, etc.).
Purgeable BO Lifecycle:
1. WILLNEED (default): BO actively needed, kernel preserves backing store
2. DONTNEED (user hint): BO contents discardable, eligible for purging
3. PURGED (kernel action): Backing store reclaimed during memory pressure
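The lifecycle above can be sketched as a tiny state machine. This is an illustrative model only — the enum names mirror the series but the helper is invented for this sketch, not the driver's code:

```c
#include <stdbool.h>

enum purgeable_state { WILLNEED, DONTNEED, PURGED };

/* "Once purged, always purged": PURGED is terminal; the other two
 * states may move freely between each other and into PURGED. */
static bool transition_valid(enum purgeable_state cur, enum purgeable_state next)
{
	if (cur == PURGED)
		return next == PURGED;
	return next == WILLNEED || next == DONTNEED || next == PURGED;
}
```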
Key Design Principles:
- i915 compatibility: "Once purged, always purged" semantics - purged BOs
remain permanently invalid and must be destroyed/recreated
- Per-VMA state tracking: Each VMA tracks its own purgeable state, BO is
only marked DONTNEED when ALL VMAs across ALL VMs agree (Thomas Hellström)
- Safety first: Imported/exported dma-bufs blocked from purgeable state -
no visibility into external device usage (Matt Roper)
- Multiple protection layers: Validation in madvise, VM bind, mmap, CPU
and GPU fault handlers. GPU page faults on DONTNEED BOs are rejected in
xe_pagefault_begin() to preserve the GPU PTE invalidation done at madvise
time; without this the rebind path would re-map real pages and undo the
PTE zap, preventing the shrinker from ever reclaiming the BO.
- Correct GPU PTE zapping: madvise_purgeable() explicitly sets
skip_invalidation per VMA (false for DONTNEED, true for WILLNEED, purged
and dmabuf-shared BOs) so DONTNEED always triggers a GPU PTE zap
regardless of prior madvise state.
- Scratch PTE support: Fault-mode VMs use scratch pages for safe zero reads
on purged BO access.
- TTM shrinker integration: Encapsulated helpers manage xe_ttm_tt->purgeable
flag and shrinker page accounting (shrinkable vs purgeable buckets)
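The per-VMA unanimity rule above (and the v6 NO_VMAS fix) can be sketched as follows. Names and the three-way verdict are illustrative, assuming the semantics described in this cover letter:

```c
#include <stddef.h>

enum vma_state { VMA_WILLNEED, VMA_DONTNEED };
enum vmas_verdict { NO_VMAS, ALL_DONTNEED, HAS_WILLNEED };

/* A BO becomes DONTNEED only on a unanimous vote across all VMAs of
 * all VMs; with no VMAs left, the BO's existing state is preserved
 * rather than flipped back to WILLNEED. */
static enum vmas_verdict all_vmas_dontneed(const enum vma_state *vmas, size_t n)
{
	size_t i;

	if (n == 0)
		return NO_VMAS;
	for (i = 0; i < n; i++)
		if (vmas[i] != VMA_DONTNEED)
			return HAS_WILLNEED;
	return ALL_DONTNEED;
}
```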
v2 Changes:
- Reordered patches: Moved shared BO helper before main implementation for
proper dependency order
- Fixed reference counting in mmap offset validation (use drm_gem_object_put)
- Removed incorrect claims about madvise(WILLNEED) restoring purged BOs
- Fixed error code documentation inconsistencies
- Initialize purge_state_val fields to prevent kernel memory leaks
- Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas Hellström)
- Add NULL rebind with scratch PTEs for fault mode (Thomas Hellström)
- Implement i915-compatible retained field logic (Thomas Hellström)
- Skip BO validation for purged BOs in page fault handler (crash fix)
- Add scratch VM check in page fault path (non-scratch VMs fail fault)
v3 Changes (addressing Matt and Thomas Hellström feedback):
- Per-VMA purgeable state tracking: Added xe_vma->purgeable_state field
- Complete VMA check: xe_bo_all_vmas_dontneed() walks all VMAs across all
VMs to ensure unanimous DONTNEED before marking BO purgeable
- VMA unbind recheck: Added xe_bo_recheck_purgeable_on_vma_unbind() to
re-evaluate BO state when VMAs are destroyed
- Block external dma-bufs: Added xe_bo_is_external_dmabuf() check using
drm_gem_is_imported() and obj->dma_buf to prevent purging imported/exported BOs
- Consistent lockdep enforcement: Added xe_bo_assert_held() to all helpers
that access madv_purgeable state
- Simplified page table logic: Renamed is_null to is_null_or_purged in
xe_pt_stage_bind_entry() - purged BOs treated identically to null VMAs
- Removed unnecessary checks: Dropped redundant "&& bo" check in xe_ttm_bo_purge()
- Xe-specific warnings: Changed drm_warn() to XE_WARN_ON() in purge path
- Moved purge checks under locks: Purge state validation now done after
acquiring dma-resv lock in vma_lock_and_validate() and xe_pagefault_begin()
- Race-free fault handling: Removed unlocked purge check from
xe_pagefault_handle_vma(), moved to locked xe_pagefault_begin()
- Shrinker helper functions: Added xe_bo_set_purgeable_shrinker() and
xe_bo_clear_purgeable_shrinker() to encapsulate TTM purgeable flag updates
and shrinker page accounting, improving code clarity and maintainability
v4 Changes (addressing Matt and Thomas Hellström feedback):
- UAPI: Removed '__u64 reserved' field from purge_state_val union to fit
16-byte size constraint (Matt)
- Changed madv_purgeable from atomic_t to u32 across all patches (Matt)
- CPU fault handling: Added purged check to fastpath (xe_bo_cpu_fault_fastpath)
to prevent hang when accessing existing mmap of purged BO
v5 Changes (addressing Matt and Thomas Hellström feedback):
- Add locking documentation to madv_purgeable field comment (Matt)
- Introduce xe_bo_set_purgeable_state() helper (void return) to centralize
madv_purgeable updates with xe_bo_assert_held() and state transition
validation using explicit enum checks (no transition out of PURGED) (Matt)
- Make xe_ttm_bo_purge() return int and propagate failures from
xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g. no_wait_gpu
paths) rather than silently ignoring (Matt)
- Replace drm_WARN_ON with xe_assert for better Xe-specific assertions (Matt)
- Hook purgeable handling into madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
instead of special-case path in xe_vm_madvise_ioctl() (Matt)
- Track purgeable retained return via xe_madvise_details and perform
copy_to_user() from xe_madvise_details_fini() after locks are dropped (Matt)
- Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
__maybe_unused on madvise_purgeable() to maintain bisectability until
shrinker integration is complete in final patch (Matt)
- Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
right after drm_gpuva_unlink() where we already hold the BO lock,
drop the trylock-based late destroy path (Matt)
- Move purgeable_state into xe_vma_mem_attr with the other madvise
attributes (Matt)
- Drop READ_ONCE since the BO lock already protects us (Matt)
- Keep returning false when there are no VMAs - otherwise we'd mark
BOs purgeable without any user hint (Matt)
- Use struct xe_vma_lock_and_validate_flags instead of multiple bool
parameters to improve readability and prevent argument transposition (Matt)
- Fix LRU crash while running shrink test
- Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
- Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)
v6 Changes (addressing Jose Souza, Thomas Hellström and Matt Brost feedback):
- Document DONTNEED blocking behavior in uAPI: Clearly describe which
operations are blocked and with what error codes. (Thomas, Matt)
- Block VM_BIND to DONTNEED BOs: Return -EBUSY to prevent creating new
VMAs to purgeable BOs (undefined behavior). (Thomas, Matt)
- Block CPU faults to DONTNEED BOs: Return VM_FAULT_SIGBUS in both fastpath
and slowpath to prevent undefined behavior. (Thomas, Matt)
- Block new mmap() to DONTNEED/purged BOs: Return -EBUSY for DONTNEED,
-EINVAL for PURGED. (Thomas, Matt)
- Block dma-buf export of DONTNEED/purged BOs: Return -EBUSY for DONTNEED,
-EINVAL for PURGED. (Thomas, Matt)
- Fix state transition bug: xe_bo_all_vmas_dontneed() now returns enum to
distinguish NO_VMAS (preserve state) from WILLNEED (has active VMAs),
preventing incorrect DONTNEED → WILLNEED flip on last VMA unmap (Matt)
- Set skip_invalidation explicitly in madvise_purgeable() to ensure
DONTNEED always zaps GPU PTEs regardless of prior madvise state.
- Add DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for userspace
feature detection. (Jose)
v7 Changes (addressing Thomas Hellström, Matt B and Jose feedback):
- mmap check moved from xe_gem_mmap_offset_ioctl() into a new
xe_gem_object_mmap() callback wrapping drm_gem_ttm_mmap(), with
interruptible lock (Thomas)
- dma-buf export lock made interruptible: xe_bo_lock(bo, true) (Thomas)
- vma_lock_and_validate_flags passed by value instead of pointer (reviewer)
- xe_bo_recompute_purgeable_state() simplified using enum value alignment
between xe_bo_vmas_purge_state and xe_madv_purgeable_state, with
static_assert to enforce the alignment (Thomas)
- Merge xe_bo_set_purgeable_shrinker/xe_bo_clear_purgeable_shrinker into
a single static xe_bo_set_purgeable_shrinker(bo, new_state) called
automatically from xe_bo_set_purgeable_state() (Thomas)
- Drop "drm/xe/bo: Skip zero-refcount BOs in shrinker" patch — ghost BO
path already handles this correctly (Thomas)
- Fix Engine memory CAT errors on scratch-page VMs (Matt Roper):
xe_pagefault_asid_to_vm() now accepts scratch VMs via
|| xe_vm_has_scratch(vm); xe_pagefault_begin() checks DONTNEED/purged
before validate/migrate and signals skip_rebind to caller via bool*
out-parameter to avoid xe_vma_rebind() assert and PTE zap undo
- Add new patch 12: Accept canonical GPU addresses in xe_vm_madvise_ioctl()
using xe_device_uncanonicalize_addr() (Matt B)
- UAPI doc comment improvement. (Jose)
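For the canonical-address change in patch 12, the idea is that a canonical GPU address replicates the top VA bit into the upper bits, and uncanonicalizing masks those bits off. A minimal sketch, assuming 48 VA bits purely for illustration (the real width comes from the device):

```c
#include <stdint.h>

#define VA_BITS 48

/* Sign-extend bit VA_BITS-1 into the upper bits (canonical form). */
static uint64_t canonicalize(uint64_t addr)
{
	uint64_t high = ~((1ULL << VA_BITS) - 1);

	return (addr & (1ULL << (VA_BITS - 1))) ? (addr | high) : (addr & ~high);
}

/* Drop the replicated upper bits, keeping only the low VA_BITS. */
static uint64_t uncanonicalize(uint64_t addr)
{
	return addr & ((1ULL << VA_BITS) - 1);
}
```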
Arvind Yadav (11):
drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
drm/xe/madvise: Implement purgeable buffer object support
drm/xe/bo: Block CPU faults to purgeable buffer objects
drm/xe/vm: Prevent binding of purged buffer objects
drm/xe/madvise: Implement per-VMA purgeable state tracking
drm/xe/madvise: Block imported and exported dma-bufs
drm/xe/bo: Block mmap of DONTNEED/purged BOs
drm/xe/dma_buf: Block export of DONTNEED/purged BOs
drm/xe/bo: Add purgeable shrinker state helpers
drm/xe/madvise: Enable purgeable buffer object IOCTL support
drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl
Himal Prasad Ghimiray (1):
drm/xe/uapi: Add UAPI support for purgeable buffer objects
drivers/gpu/drm/xe/xe_bo.c | 193 +++++++++++++++++--
drivers/gpu/drm/xe/xe_bo.h | 58 ++++++
drivers/gpu/drm/xe/xe_bo_types.h | 6 +
drivers/gpu/drm/xe/xe_dma_buf.c | 21 +++
drivers/gpu/drm/xe/xe_pagefault.c | 25 ++-
drivers/gpu/drm/xe/xe_pt.c | 40 +++-
drivers/gpu/drm/xe/xe_query.c | 2 +
drivers/gpu/drm/xe/xe_svm.c | 1 +
drivers/gpu/drm/xe/xe_vm.c | 100 ++++++++--
drivers/gpu/drm/xe/xe_vm_madvise.c | 292 ++++++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_vm_madvise.h | 3 +
drivers/gpu/drm/xe/xe_vm_types.h | 11 ++
include/uapi/drm/xe_drm.h | 69 +++++++
13 files changed, 778 insertions(+), 43 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 01/12] drm/xe/uapi: Add UAPI support for purgeable buffer objects
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 02/12] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
` (15 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom,
Jose Souza
Extend the DRM_XE_MADVISE ioctl to support purgeable buffer object
management by adding DRM_XE_VMA_ATTR_PURGEABLE_STATE attribute type.
This allows userspace applications to provide memory usage hints to
the kernel for better memory management under pressure:
- WILLNEED: Buffer is needed and should not be purged. If the BO was
previously purged, retained field returns 0 indicating backing store
was lost (once purged, always purged semantics matching i915).
- DONTNEED: Buffer is not currently needed and may be purged by the
kernel under memory pressure to free resources. Only applies to
non-shared BOs.
To prevent undefined behavior, the following operations are blocked
while a BO is in DONTNEED state:
- New mmap() operations return -EBUSY
- VM_BIND operations return -EBUSY
- New dma-buf exports return -EBUSY
- CPU page faults return SIGBUS
- GPU page faults fail with -EACCES
This ensures applications cannot use a BO while marked as DONTNEED,
preventing erratic behavior when the kernel purges the backing store.
The implementation includes a 'retained' output field (matching i915's
drm_i915_gem_madvise.retained) that indicates whether the BO's backing
store still exists (1) or has been purged (0).
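The retained semantics can be modeled compactly. This is a simplified sketch of the contract described above (retained reflects whether backing store survived; PURGED never transitions out), with invented names — not the driver code:

```c
enum purgeable_state { WILLNEED, DONTNEED, PURGED };

/* val follows the uAPI encoding: 0 = WILLNEED, 1 = DONTNEED.
 * retained is written as the kernel would write through retained_ptr:
 * 1 if backing store still exists, 0 if it was purged. */
static void apply_madvise(enum purgeable_state *state, int val,
			  unsigned int *retained)
{
	*retained = (*state == PURGED) ? 0 : 1;
	if (*state != PURGED) /* once purged, always purged */
		*state = val ? DONTNEED : WILLNEED;
}
```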
Added DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT flag to allow
userspace to detect kernel support for purgeable buffer objects
before attempting to use the feature.
v2:
- Add PURGED state for read-only status, change ioctl to DRM_IOWR,
add retained field for i915 compatibility
v3:
- UAPI rule should not be changed (Matthew Brost)
- Make 'retained' a userptr (Matthew Brost)
v4:
- You cannot make this part of the union (purge_state_val) larger
than the existing union (16 bytes). So just drop the '__u64 reserved'
field. (Matt)
v5:
- Update UAPI documentation to clarify retained must be initialized
to 0 (Thomas)
v6:
- Document DONTNEED BO access blocking behavior to prevent undefined
behavior and clarify uAPI contract (Thomas, Matt)
- Add query flag DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for
feature detection. (Jose)
- Rename retained to retained_ptr. (Jose)
v7:
- Updated UAPI documentation as suggested to reflect 'updated' value
instead of 'return'. (Jose)
Cc: Jose Souza <jose.souza@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
include/uapi/drm/xe_drm.h | 69 +++++++++++++++++++++++++++++++++++++++
1 file changed, 69 insertions(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index f8b2afb20540..a59baf5add9a 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -429,6 +429,7 @@ struct drm_xe_query_config {
#define DRM_XE_QUERY_CONFIG_FLAG_HAS_CPU_ADDR_MIRROR (1 << 2)
#define DRM_XE_QUERY_CONFIG_FLAG_HAS_NO_COMPRESSION_HINT (1 << 3)
#define DRM_XE_QUERY_CONFIG_FLAG_HAS_DISABLE_STATE_CACHE_PERF_FIX (1 << 4)
+ #define DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT (1 << 5)
#define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT 2
#define DRM_XE_QUERY_CONFIG_VA_BITS 3
#define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 4
@@ -2083,6 +2084,7 @@ struct drm_xe_query_eu_stall {
* - DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC: Set preferred memory location.
* - DRM_XE_MEM_RANGE_ATTR_ATOMIC: Set atomic access policy.
* - DRM_XE_MEM_RANGE_ATTR_PAT: Set page attribute table index.
+ * - DRM_XE_VMA_ATTR_PURGEABLE_STATE: Set purgeable state for BOs.
*
* Example:
*
@@ -2115,6 +2117,7 @@ struct drm_xe_madvise {
#define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC 0
#define DRM_XE_MEM_RANGE_ATTR_ATOMIC 1
#define DRM_XE_MEM_RANGE_ATTR_PAT 2
+#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
/** @type: type of attribute */
__u32 type;
@@ -2205,6 +2208,72 @@ struct drm_xe_madvise {
/** @pat_index.reserved: Reserved */
__u64 reserved;
} pat_index;
+
+ /**
+ * @purge_state_val: Purgeable state configuration
+ *
+ * Used when @type == DRM_XE_VMA_ATTR_PURGEABLE_STATE.
+ *
+ * Configures the purgeable state of buffer objects in the specified
+ * virtual address range. This allows applications to hint to the kernel
+ * about bo's usage patterns for better memory management.
+ *
+ * By default all VMAs are in WILLNEED state.
+ *
+ * Supported values for @purge_state_val.val:
+ * - DRM_XE_VMA_PURGEABLE_STATE_WILLNEED (0): Marks BO as needed.
+ * If the BO was previously purged, the kernel sets the __u32 at
+ * @retained_ptr to 0 (backing store lost) so the application knows
+ * it must recreate the BO.
+ *
+ * - DRM_XE_VMA_PURGEABLE_STATE_DONTNEED (1): Marks BO as not currently
+ * needed. Kernel may purge it under memory pressure to reclaim memory.
+ * Only applies to non-shared BOs. The kernel sets the __u32 at
+ * @retained_ptr to 1 if the backing store still exists (not yet purged),
+ * or 0 if it was already purged.
+ *
+ * Important: Once marked as DONTNEED, touching the BO's memory
+ * is undefined behavior. It may succeed temporarily (before the
+ * kernel purges the backing store) but will suddenly fail once
+ * the BO transitions to PURGED state.
+ *
+ * To transition back: use WILLNEED and check @retained_ptr —
+ * if 0, backing store was lost and the BO must be recreated.
+ *
+ * The following operations are blocked in DONTNEED state to
+ * prevent the BO from being re-mapped after madvise:
+ * - New mmap() calls: Fail with -EBUSY
+ * - VM_BIND operations: Fail with -EBUSY
+ * - New dma-buf exports: Fail with -EBUSY
+ * - CPU page faults (existing mmap): Fail with SIGBUS
+ * - GPU page faults (fault-mode VMs): Fail with -EACCES
+ */
+ struct {
+#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
+#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
+ /** @purge_state_val.val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
+ __u32 val;
+
+ /** @purge_state_val.pad: MBZ */
+ __u32 pad;
+ /**
+ * @purge_state_val.retained_ptr: Pointer to a __u32 output
+ * field for backing store status.
+ *
+ * Userspace must initialize the __u32 value at this address
+ * to 0 before the ioctl. Kernel writes a __u32 after the
+ * operation:
+ * - 1 if backing store exists (not purged)
+ * - 0 if backing store was purged
+ *
+ * If userspace fails to initialize to 0, ioctl returns -EINVAL.
+ * This ensures a safe default (0 = assume purged) if kernel
+ * cannot write the result.
+ *
+ * Similar to i915's drm_i915_gem_madvise.retained field.
+ */
+ __u64 retained_ptr;
+ } purge_state_val;
};
/** @reserved: Reserved */
--
2.43.0
* [PATCH v7 02/12] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 01/12] drm/xe/uapi: Add UAPI " Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 03/12] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
` (14 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Add infrastructure for tracking purgeable state of buffer objects.
This includes:
Introduce enum xe_madv_purgeable_state with three states:
- XE_MADV_PURGEABLE_WILLNEED (0): BO is needed and should not be
purged. This is the default state for all BOs.
- XE_MADV_PURGEABLE_DONTNEED (1): BO is not currently needed and
can be purged by the kernel under memory pressure to reclaim
resources. Only non-shared BOs can be marked as DONTNEED.
- XE_MADV_PURGEABLE_PURGED (2): BO has been purged by the kernel.
Accessing a purged BO results in error. Follows i915 semantics
where once purged, the BO remains permanently invalid ("once
purged, always purged").
Add a madv_purgeable field to struct xe_bo to track the purgeable state
across concurrent access paths.
v2:
- Add xe_bo_is_purged() helper, improve state documentation
v3:
- Add kernel-doc (Matthew Brost)
- Add the new helper xe_bo_madv_is_dontneed() (Matthew Brost)
v4:
- @madv_purgeable atomic_t → u32 change across all relevant
patches (Matt)
v5:
- Add locking documentation to madv_purgeable field comment (Matt)
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.h | 56 ++++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_bo_types.h | 6 ++++
2 files changed, 62 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index 2cbac16f7db7..fb5541bdf602 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -87,6 +87,28 @@
#define XE_PCI_BARRIER_MMAP_OFFSET (0x50 << XE_PTE_SHIFT)
+/**
+ * enum xe_madv_purgeable_state - Buffer object purgeable state enumeration
+ *
+ * This enum defines the possible purgeable states for a buffer object,
+ * allowing userspace to provide memory usage hints to the kernel for
+ * better memory management under pressure.
+ *
+ * @XE_MADV_PURGEABLE_WILLNEED: The buffer object is needed and should not be purged.
+ * This is the default state.
+ * @XE_MADV_PURGEABLE_DONTNEED: The buffer object is not currently needed and can be
+ * purged by the kernel under memory pressure.
+ * @XE_MADV_PURGEABLE_PURGED: The buffer object has been purged by the kernel.
+ *
+ * Accessing a purged buffer will result in an error. Per i915 semantics,
+ * once purged, a BO remains permanently invalid and must be destroyed and recreated.
+ */
+enum xe_madv_purgeable_state {
+ XE_MADV_PURGEABLE_WILLNEED,
+ XE_MADV_PURGEABLE_DONTNEED,
+ XE_MADV_PURGEABLE_PURGED,
+};
+
struct sg_table;
struct xe_bo *xe_bo_alloc(void);
@@ -215,6 +237,40 @@ static inline bool xe_bo_is_protected(const struct xe_bo *bo)
return bo->pxp_key_instance;
}
+/**
+ * xe_bo_is_purged() - Check if buffer object has been purged
+ * @bo: The buffer object to check
+ *
+ * Checks if the buffer object's backing store has been discarded by the
+ * kernel due to memory pressure after being marked as purgeable (DONTNEED).
+ * Once purged, the BO cannot be restored and any attempt to use it will fail.
+ *
+ * Context: Caller must hold the BO's dma-resv lock
+ * Return: true if the BO has been purged, false otherwise
+ */
+static inline bool xe_bo_is_purged(struct xe_bo *bo)
+{
+ xe_bo_assert_held(bo);
+ return bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED;
+}
+
+/**
+ * xe_bo_madv_is_dontneed() - Check if BO is marked as DONTNEED
+ * @bo: The buffer object to check
+ *
+ * Checks if userspace has marked this BO as DONTNEED (i.e., its contents
+ * are not currently needed and can be discarded under memory pressure).
+ * This is used internally to decide whether a BO is eligible for purging.
+ *
+ * Context: Caller must hold the BO's dma-resv lock
+ * Return: true if the BO is marked DONTNEED, false otherwise
+ */
+static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
+{
+ xe_bo_assert_held(bo);
+ return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
+}
+
static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
if (likely(bo)) {
diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index d4fe3c8dca5b..ff8317bfc1ae 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -108,6 +108,12 @@ struct xe_bo {
* from default
*/
u64 min_align;
+
+ /**
+ * @madv_purgeable: user space advise on BO purgeability, protected
+ * by BO's dma-resv lock.
+ */
+ u32 madv_purgeable;
};
#endif
--
2.43.0
* [PATCH v7 03/12] drm/xe/madvise: Implement purgeable buffer object support
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 01/12] drm/xe/uapi: Add UAPI " Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 02/12] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-25 15:01 ` Thomas Hellström
2026-03-23 9:30 ` [PATCH v7 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects Arvind Yadav
` (13 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Add the core implementation for purgeable buffer objects, enabling memory
reclamation of user-designated DONTNEED buffers during eviction. This allows
userspace applications to provide memory usage hints to the kernel for
better memory management under pressure.
This patch implements the purge operation and state machine transitions:
Purgeable States (from xe_madv_purgeable_state):
- WILLNEED (0): BO should be retained, actively used
- DONTNEED (1): BO eligible for purging, not currently needed
- PURGED (2): BO backing store reclaimed, permanently invalid
Design Rationale:
- Async TLB invalidation via trigger_rebind (no blocking xe_vm_invalidate_vma)
- i915 compatibility: retained field, "once purged always purged" semantics
- Shared BO protection prevents multi-process memory corruption
- Scratch PTE reuse avoids new infrastructure, safe for fault mode
Note: The madvise_purgeable() function is implemented but not hooked into
the IOCTL handler (madvise_funcs[] entry is NULL) to maintain bisectability.
The feature will be enabled in the final patch when all supporting
infrastructure (shrinker, per-VMA tracking) is complete.
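The bisectability scheme described above — keeping the dispatch-table slot NULL until the final patch — follows a common function-pointer-table pattern. A sketch with invented names, assuming a handler table indexed by attribute type:

```c
#include <stddef.h>

#define NUM_ATTRS 4
#define ATTR_PURGEABLE_STATE 3

typedef int (*madvise_fn)(void);

static int madvise_purgeable_stub(void) { return 0; }

/* The PURGEABLE_STATE slot stays NULL until the feature is fully
 * wired up; every intermediate commit thus rejects the attribute. */
static madvise_fn madvise_funcs[NUM_ATTRS];

static int dispatch(unsigned int type)
{
	if (type >= NUM_ATTRS || !madvise_funcs[type])
		return -22; /* -EINVAL: attribute not (yet) supported */
	return madvise_funcs[type]();
}
```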
v2:
- Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas Hellström)
- Add NULL rebind with scratch PTEs for fault mode (Thomas Hellström)
- Implement i915-compatible retained field logic (Thomas Hellström)
- Skip BO validation for purged BOs in page fault handler (crash fix)
- Add scratch VM check in page fault path (non-scratch VMs fail fault)
- Force clear_pt for non-scratch VMs to avoid phys addr 0 mapping (review fix)
- Add !is_purged check to resource cursor setup to prevent stale access
v3:
- Rebase as xe_gt_pagefault.c is gone upstream and replaced
with xe_pagefault.c (Matthew Brost)
- Xe specific warn on (Matthew Brost)
- Call helpers for madv_purgeable access (Matthew Brost)
- Remove bo NULL check (Matthew Brost)
- Use xe_bo_assert_held instead of dma assert (Matthew Brost)
- Move the xe_bo_is_purged check under the dma-resv lock (Matt)
- Drop is_purged from xe_pt_stage_bind_entry and just set is_null to true
for purged BOs; rename s/is_null/is_null_or_purged (Matt)
- UAPI rule should not be changed. (Matthew Brost)
- Make 'retained' a userptr (Matthew Brost)
v4:
- @madv_purgeable atomic_t → u32 change across all relevant patches (Matt)
v5:
- Introduce xe_bo_set_purgeable_state() helper (void return) to centralize
madv_purgeable updates with xe_bo_assert_held() and state transition
validation using explicit enum checks (no transition out of PURGED) (Matt)
- Make xe_ttm_bo_purge() return int and propagate failures from
xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g. no_wait_gpu
paths) rather than silently ignoring (Matt)
- Replace drm_WARN_ON with xe_assert for better Xe-specific assertions (Matt)
- Hook purgeable handling into madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
instead of special-case path in xe_vm_madvise_ioctl() (Matt)
- Track purgeable retained return via xe_madvise_details and perform
copy_to_user() from xe_madvise_details_fini() after locks are dropped (Matt)
- Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
__maybe_unused on madvise_purgeable() to maintain bisectability until
shrinker integration is complete in final patch (Matt)
- Use put_user() instead of copy_to_user() for single u32 retained value (Thomas)
- Return -EFAULT from ioctl if put_user() fails (Thomas)
- Validate userspace initialized retained to 0 before ioctl, ensuring safe
default (0 = "assume purged") if put_user() fails (Thomas)
- Refactor error handling: separate fallible put_user from infallible cleanup
- xe_madvise_purgeable_retained_to_user(): separate helper for fallible put_user
- Call put_user() after releasing all locks to avoid circular dependencies
- Use xe_bo_move_notify() instead of xe_bo_trigger_rebind() in xe_ttm_bo_purge()
for proper abstraction - handles vunmap, dma-buf notifications, and VRAM
userfault cleanup (Thomas)
- Fix LRU crash while running shrink test
- Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
v6:
- xe_bo_move_notify() must be called *before* ttm_bo_validate(). (Thomas)
- Block GPU page faults (fault-mode VMs) for DONTNEED bo's (Thomas, Matt)
- Rename retained to retained_ptr. (Jose)
v7 Changes:
- Fix engine reset from EU overfetch in scratch VMs: xe_pagefault_begin()
and xe_pagefault_service() now return 0 instead of -EACCES/-EINVAL for
DONTNEED/purged BOs and missing VMAs so stale accesses hit scratch PTEs.
- Fix Engine memory CAT errors when Mesa uses DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE:
accept scratch VMs in xe_pagefault_asid_to_vm() via '|| xe_vm_has_scratch(vm)'.
- Skip validate/migrate/rebind for DONTNEED/purged BOs in xe_pagefault_begin()
using a bool *skip_rebind out-parameter. Scratch VMs ACK the fault and fall back
to scratch PTEs; non-scratch VMs return -EACCES.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 107 ++++++++++++++++++++---
drivers/gpu/drm/xe/xe_bo.h | 2 +
drivers/gpu/drm/xe/xe_pagefault.c | 25 +++++-
drivers/gpu/drm/xe/xe_pt.c | 40 +++++++--
drivers/gpu/drm/xe/xe_vm.c | 20 ++++-
drivers/gpu/drm/xe/xe_vm_madvise.c | 136 +++++++++++++++++++++++++++++
6 files changed, 305 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 22179b2df85c..b6055bb4c578 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -835,6 +835,84 @@ static int xe_bo_move_notify(struct xe_bo *bo,
return 0;
}
+/**
+ * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
+ * @bo: Buffer object
+ * @new_state: New purgeable state
+ *
+ * Sets the purgeable state with lockdep assertions and validates state
+ * transitions. Once a BO is PURGED, it cannot transition to any other state.
+ * Invalid transitions are caught with xe_assert().
+ */
+void xe_bo_set_purgeable_state(struct xe_bo *bo,
+ enum xe_madv_purgeable_state new_state)
+{
+ struct xe_device *xe = xe_bo_device(bo);
+
+ xe_bo_assert_held(bo);
+
+ /* Validate state is one of the known values */
+ xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
+ new_state == XE_MADV_PURGEABLE_DONTNEED ||
+ new_state == XE_MADV_PURGEABLE_PURGED);
+
+ /* Once purged, always purged - cannot transition out */
+ xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
+ new_state != XE_MADV_PURGEABLE_PURGED));
+
+ bo->madv_purgeable = new_state;
+}
+
+/**
+ * xe_ttm_bo_purge() - Purge buffer object backing store
+ * @ttm_bo: The TTM buffer object to purge
+ * @ctx: TTM operation context
+ *
+ * This function purges the backing store of a BO marked as DONTNEED and
+ * triggers rebind to invalidate stale GPU mappings. For fault-mode VMs,
+ * this zaps the PTEs. The next GPU access will trigger a page fault and
+ * perform NULL rebind (scratch pages or clear PTEs based on VM config).
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+static int xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
+{
+ struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
+ struct ttm_placement place = {};
+ int ret;
+
+ xe_bo_assert_held(bo);
+
+ if (!ttm_bo->ttm)
+ return 0;
+
+ if (!xe_bo_madv_is_dontneed(bo))
+ return 0;
+
+ /*
+ * Use the standard pre-move hook so we share the same cleanup/invalidate
+ * path as migrations: drop any CPU vmap and schedule the necessary GPU
+ * unbind/rebind work.
+ *
+ * This must be called before ttm_bo_validate() frees the pages.
+ * May fail in no-wait contexts (fault/shrinker) or if the BO is
+ * pinned. Keep state unchanged on failure so we don't end up "PURGED"
+ * with stale mappings.
+ */
+ ret = xe_bo_move_notify(bo, ctx);
+ if (ret)
+ return ret;
+
+ ret = ttm_bo_validate(ttm_bo, &place, ctx);
+ if (ret)
+ return ret;
+
+ /* Commit the state transition only once invalidation was queued */
+ xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_PURGED);
+
+ return 0;
+}
+
static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
struct ttm_operation_ctx *ctx,
struct ttm_resource *new_mem,
@@ -854,6 +932,20 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
ttm && ttm_tt_is_populated(ttm)) ? true : false;
int ret = 0;
+ /*
+ * Purge BOs explicitly marked DONTNEED by userspace; shared
+ * (imported/exported) dma-bufs can never enter that state.
+ * xe_bo_move_notify(), called from xe_ttm_bo_purge(), schedules
+ * the GPU-side invalidation.
+ */
+ if (evict && xe_bo_madv_is_dontneed(bo)) {
+ ret = xe_ttm_bo_purge(ttm_bo, ctx);
+ if (ret)
+ return ret;
+
+ /* Free the unused eviction destination resource */
+ ttm_resource_free(ttm_bo, &new_mem);
+ return 0;
+ }
+
/* Bo creation path, moving to system or TT. */
if ((!old_mem && ttm) && !handle_system_ccs) {
if (new_mem->mem_type == XE_PL_TT)
@@ -1603,18 +1695,6 @@ static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo)
}
}
-static void xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
-{
- struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
-
- if (ttm_bo->ttm) {
- struct ttm_placement place = {};
- int ret = ttm_bo_validate(ttm_bo, &place, ctx);
-
- drm_WARN_ON(&xe->drm, ret);
- }
-}
-
static void xe_ttm_bo_swap_notify(struct ttm_buffer_object *ttm_bo)
{
struct ttm_operation_ctx ctx = {
@@ -2195,6 +2275,9 @@ struct xe_bo *xe_bo_init_locked(struct xe_device *xe, struct xe_bo *bo,
#endif
INIT_LIST_HEAD(&bo->vram_userfault_link);
+ /* Initialize purge advisory state */
+ bo->madv_purgeable = XE_MADV_PURGEABLE_WILLNEED;
+
drm_gem_private_object_init(&xe->drm, &bo->ttm.base, size);
if (resv) {
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index fb5541bdf602..653851d47aa6 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -271,6 +271,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
}
+void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
+
static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
{
if (likely(bo)) {
diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
index ea4857acf28d..415253631e6f 100644
--- a/drivers/gpu/drm/xe/xe_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_pagefault.c
@@ -46,7 +46,8 @@ static int xe_pagefault_entry_size(void)
}
static int xe_pagefault_begin(struct drm_exec *exec, struct xe_vma *vma,
- struct xe_vram_region *vram, bool need_vram_move)
+ struct xe_vram_region *vram, bool need_vram_move,
+ bool *skip_rebind)
{
struct xe_bo *bo = xe_vma_bo(vma);
struct xe_vm *vm = xe_vma_vm(vma);
@@ -59,6 +60,20 @@ static int xe_pagefault_begin(struct drm_exec *exec, struct xe_vma *vma,
if (!bo)
return 0;
+ /*
+ * Under dma-resv lock: reject rebind for DONTNEED/purged BOs.
+ * Validating or migrating would repopulate pages we want the shrinker
+ * to reclaim, and rebinding would undo the GPU PTE zap.
+ * Scratch VMs absorb the access via scratch PTEs (skip_rebind=true);
+ * non-scratch VMs have no fallback so fail the fault.
+ */
+ if (unlikely(xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo))) {
+ if (!xe_vm_has_scratch(vm))
+ return -EACCES;
+ *skip_rebind = true;
+ return 0;
+ }
+
return need_vram_move ? xe_bo_migrate(bo, vram->placement, NULL, exec) :
xe_bo_validate(bo, vm, true, exec);
}
@@ -103,11 +118,13 @@ static int xe_pagefault_handle_vma(struct xe_gt *gt, struct xe_vma *vma,
/* Lock VM and BOs dma-resv */
xe_validation_ctx_init(&ctx, &vm->xe->val, &exec, (struct xe_val_flags) {});
drm_exec_until_all_locked(&exec) {
+ bool skip_rebind = false;
+
err = xe_pagefault_begin(&exec, vma, tile->mem.vram,
- needs_vram == 1);
+ needs_vram == 1, &skip_rebind);
drm_exec_retry_on_contention(&exec);
xe_validation_retry_on_oom(&ctx, &err);
- if (err)
+ if (err || skip_rebind)
goto unlock_dma_resv;
/* Bind VMA only to the GT that has faulted */
@@ -145,7 +162,7 @@ static struct xe_vm *xe_pagefault_asid_to_vm(struct xe_device *xe, u32 asid)
down_read(&xe->usm.lock);
vm = xa_load(&xe->usm.asid_to_vm, asid);
- if (vm && xe_vm_in_fault_mode(vm))
+ if (vm && (xe_vm_in_fault_mode(vm) || xe_vm_has_scratch(vm)))
xe_vm_get(vm);
else
vm = ERR_PTR(-EINVAL);
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 2d9ce2c4cb4f..08f40701f654 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -531,20 +531,26 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
/* Is this a leaf entry ?*/
if (level == 0 || xe_pt_hugepte_possible(addr, next, level, xe_walk)) {
struct xe_res_cursor *curs = xe_walk->curs;
- bool is_null = xe_vma_is_null(xe_walk->vma);
- bool is_vram = is_null ? false : xe_res_is_vram(curs);
+ struct xe_bo *bo = xe_vma_bo(xe_walk->vma);
+ bool is_null_or_purged = xe_vma_is_null(xe_walk->vma) ||
+ (bo && xe_bo_is_purged(bo));
+ bool is_vram = is_null_or_purged ? false : xe_res_is_vram(curs);
XE_WARN_ON(xe_walk->va_curs_start != addr);
if (xe_walk->clear_pt) {
pte = 0;
} else {
- pte = vm->pt_ops->pte_encode_vma(is_null ? 0 :
+ /*
+ * For purged BOs, treat like null VMAs - pass address 0.
+ * The pte_encode_vma will set XE_PTE_NULL flag for scratch mapping.
+ */
+ pte = vm->pt_ops->pte_encode_vma(is_null_or_purged ? 0 :
xe_res_dma(curs) +
xe_walk->dma_offset,
xe_walk->vma,
pat_index, level);
- if (!is_null)
+ if (!is_null_or_purged)
pte |= is_vram ? xe_walk->default_vram_pte :
xe_walk->default_system_pte;
@@ -568,7 +574,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
if (unlikely(ret))
return ret;
- if (!is_null && !xe_walk->clear_pt)
+ if (!is_null_or_purged && !xe_walk->clear_pt)
xe_res_next(curs, next - addr);
xe_walk->va_curs_start = next;
xe_walk->vma->gpuva.flags |= (XE_VMA_PTE_4K << level);
@@ -721,6 +727,26 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
};
struct xe_pt *pt = vm->pt_root[tile->id];
int ret;
+ bool is_purged = false;
+
+ /*
+ * Check if BO is purged:
+ * - Scratch VMs: Use scratch PTEs (XE_PTE_NULL) for safe zero reads
+ * - Non-scratch VMs: Clear PTEs to zero (non-present) to avoid mapping to phys addr 0
+ *
+ * For non-scratch VMs, we force clear_pt=true so leaf PTEs become completely
+ * zero instead of creating a PRESENT mapping to physical address 0.
+ */
+ if (bo && xe_bo_is_purged(bo)) {
+ is_purged = true;
+
+ /*
+ * For non-scratch VMs, a NULL rebind should use zero PTEs
+ * (non-present), not a present PTE to phys 0.
+ */
+ if (!xe_vm_has_scratch(vm))
+ xe_walk.clear_pt = true;
+ }
if (range) {
/* Move this entire thing to xe_svm.c? */
@@ -756,11 +782,11 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
}
xe_walk.default_vram_pte |= XE_PPGTT_PTE_DM;
- xe_walk.dma_offset = bo ? vram_region_gpu_offset(bo->ttm.resource) : 0;
+ xe_walk.dma_offset = (bo && !is_purged) ? vram_region_gpu_offset(bo->ttm.resource) : 0;
if (!range)
xe_bo_assert_held(bo);
- if (!xe_vma_is_null(vma) && !range) {
+ if (!xe_vma_is_null(vma) && !range && !is_purged) {
if (xe_vma_is_userptr(vma))
xe_res_first_dma(to_userptr_vma(vma)->userptr.pages.dma_addr, 0,
xe_vma_size(vma), &curs);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 5572e12c2a7e..a0ade67d616e 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -326,6 +326,7 @@ void xe_vm_kill(struct xe_vm *vm, bool unlocked)
static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
{
struct xe_vm *vm = gpuvm_to_vm(vm_bo->vm);
+ struct xe_bo *bo = gem_to_xe_bo(vm_bo->obj);
struct drm_gpuva *gpuva;
int ret;
@@ -334,10 +335,16 @@ static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind,
&vm->rebind_list);
+ /* Skip re-populating purged BOs; rebind maps scratch pages. */
+ if (xe_bo_is_purged(bo)) {
+ vm_bo->evicted = false;
+ return 0;
+ }
+
if (!try_wait_for_completion(&vm->xe->pm_block))
return -EAGAIN;
- ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false, exec);
+ ret = xe_bo_validate(bo, vm, false, exec);
if (ret)
return ret;
@@ -1358,6 +1365,9 @@ static u64 xelp_pte_encode_bo(struct xe_bo *bo, u64 bo_offset,
static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
u16 pat_index, u32 pt_level)
{
+ struct xe_bo *bo = xe_vma_bo(vma);
+ struct xe_vm *vm = xe_vma_vm(vma);
+
pte |= XE_PAGE_PRESENT;
if (likely(!xe_vma_read_only(vma)))
@@ -1366,7 +1376,13 @@ static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
pte |= pte_encode_pat_index(pat_index, pt_level);
pte |= pte_encode_ps(pt_level);
- if (unlikely(xe_vma_is_null(vma)))
+ /*
+ * NULL PTEs redirect to scratch page (return zeros on read).
+ * Set for: 1) explicit null VMAs, 2) purged BOs on scratch VMs.
+ * Never set NULL flag without scratch page - causes undefined behavior.
+ */
+ if (unlikely(xe_vma_is_null(vma) ||
+ (bo && xe_bo_is_purged(bo) && xe_vm_has_scratch(vm))))
pte |= XE_PTE_NULL;
return pte;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 869db304d96d..ffba2e41c539 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -26,6 +26,8 @@ struct xe_vmas_in_madvise_range {
/**
* struct xe_madvise_details - Argument to madvise_funcs
* @dpagemap: Reference-counted pointer to a struct drm_pagemap.
+ * @has_purged_bo: Track if any BO was purged (for purgeable state)
+ * @retained_ptr: User pointer for retained value (for purgeable state)
*
* The madvise IOCTL handler may, in addition to the user-space
* args, have additional info to pass into the madvise_func that
@@ -34,6 +36,8 @@ struct xe_vmas_in_madvise_range {
*/
struct xe_madvise_details {
struct drm_pagemap *dpagemap;
+ bool has_purged_bo;
+ u64 retained_ptr;
};
static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
@@ -180,6 +184,67 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
}
}
+/**
+ * madvise_purgeable - Handle purgeable buffer object advice
+ * @xe: XE device
+ * @vm: VM
+ * @vmas: Array of VMAs
+ * @num_vmas: Number of VMAs
+ * @op: Madvise operation
+ * @details: Madvise details for return values
+ *
+ * Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was purged
+ * in details->has_purged_bo for later copy to userspace.
+ *
+ * Note: Marked __maybe_unused until hooked into madvise_funcs[] in the
+ * final patch to maintain bisectability. The NULL placeholder in the
+ * array ensures proper -EINVAL return for userspace until all supporting
+ * infrastructure (shrinker, per-VMA tracking) is complete.
+ */
+static void __maybe_unused madvise_purgeable(struct xe_device *xe,
+ struct xe_vm *vm,
+ struct xe_vma **vmas,
+ int num_vmas,
+ struct drm_xe_madvise *op,
+ struct xe_madvise_details *details)
+{
+ int i;
+
+ xe_assert(vm->xe, op->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE);
+
+ for (i = 0; i < num_vmas; i++) {
+ struct xe_bo *bo = xe_vma_bo(vmas[i]);
+
+ if (!bo)
+ continue;
+
+ /* BO must be locked before modifying madv state */
+ xe_bo_assert_held(bo);
+
+ /*
+ * Once purged, always purged. Cannot transition back to WILLNEED.
+ * This matches i915 semantics where purged BOs are permanently invalid.
+ */
+ if (xe_bo_is_purged(bo)) {
+ details->has_purged_bo = true;
+ continue;
+ }
+
+ switch (op->purge_state_val.val) {
+ case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
+ xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+ break;
+ case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
+ xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+ break;
+ default:
+ drm_warn(&vm->xe->drm, "Invalid madvise value = %u\n",
+ op->purge_state_val.val);
+ return;
+ }
+ }
+}
+
typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op,
@@ -189,6 +254,12 @@ static const madvise_func madvise_funcs[] = {
[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
[DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
[DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
+ /*
+ * Purgeable support implemented but not enabled yet to maintain
+ * bisectability. Will be set to madvise_purgeable() in final patch
+ * when all infrastructure (shrinker, VMA tracking) is complete.
+ */
+ [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
};
static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
@@ -319,6 +390,19 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
return false;
break;
}
+ case DRM_XE_VMA_ATTR_PURGEABLE_STATE:
+ {
+ u32 val = args->purge_state_val.val;
+
+ if (XE_IOCTL_DBG(xe, !(val == DRM_XE_VMA_PURGEABLE_STATE_WILLNEED ||
+ val == DRM_XE_VMA_PURGEABLE_STATE_DONTNEED)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->purge_state_val.pad))
+ return false;
+
+ break;
+ }
default:
if (XE_IOCTL_DBG(xe, 1))
return false;
@@ -337,6 +421,12 @@ static int xe_madvise_details_init(struct xe_vm *vm, const struct drm_xe_madvise
memset(details, 0, sizeof(*details));
+ /* Store retained pointer for purgeable state */
+ if (args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE) {
+ details->retained_ptr = args->purge_state_val.retained_ptr;
+ return 0;
+ }
+
if (args->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC) {
int fd = args->preferred_mem_loc.devmem_fd;
struct drm_pagemap *dpagemap;
@@ -365,6 +455,21 @@ static void xe_madvise_details_fini(struct xe_madvise_details *details)
drm_pagemap_put(details->dpagemap);
}
+static int xe_madvise_purgeable_retained_to_user(const struct xe_madvise_details *details)
+{
+ u32 retained;
+
+ if (!details->retained_ptr)
+ return 0;
+
+ retained = !details->has_purged_bo;
+
+ if (put_user(retained, (u32 __user *)u64_to_user_ptr(details->retained_ptr)))
+ return -EFAULT;
+
+ return 0;
+}
+
static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
int num_vmas, u32 atomic_val)
{
@@ -422,6 +527,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
struct xe_vm *vm;
struct drm_exec exec;
int err, attr_type;
+ bool do_retained;
vm = xe_vm_lookup(xef, args->vm_id);
if (XE_IOCTL_DBG(xe, !vm))
@@ -432,6 +538,25 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
goto put_vm;
}
+ /* Cache whether we need to write retained, and validate it's initialized to 0 */
+ do_retained = args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE &&
+ args->purge_state_val.retained_ptr;
+ if (do_retained) {
+ u32 retained;
+ u32 __user *retained_ptr;
+
+ retained_ptr = u64_to_user_ptr(args->purge_state_val.retained_ptr);
+ if (get_user(retained, retained_ptr)) {
+ err = -EFAULT;
+ goto put_vm;
+ }
+
+ if (XE_IOCTL_DBG(xe, retained != 0)) {
+ err = -EINVAL;
+ goto put_vm;
+ }
+ }
+
xe_svm_flush(vm);
err = down_write_killable(&vm->lock);
@@ -487,6 +612,13 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
}
attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
+
+ /* Ensure the madvise function exists for this type */
+ if (!madvise_funcs[attr_type]) {
+ err = -EINVAL;
+ goto err_fini;
+ }
+
madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args,
&details);
@@ -505,6 +637,10 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
xe_madvise_details_fini(&details);
unlock_vm:
up_write(&vm->lock);
+
+ /* Write retained value to user after releasing all locks */
+ if (!err && do_retained)
+ err = xe_madvise_purgeable_retained_to_user(&details);
put_vm:
xe_vm_put(vm);
return err;
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH v7 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (2 preceding siblings ...)
2026-03-23 9:30 ` [PATCH v7 03/12] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 05/12] drm/xe/vm: Prevent binding of purged " Arvind Yadav
` (12 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Block CPU page faults to buffer objects marked as purgeable (DONTNEED)
or already purged. Once a BO is marked DONTNEED, its contents can be
discarded by the kernel at any time, making access undefined behavior.
Return VM_FAULT_SIGBUS immediately to fail consistently instead of
allowing erratic behavior where access sometimes works (if not yet
purged) and sometimes fails (if purged).
For DONTNEED BOs:
- Block new CPU faults with SIGBUS to prevent undefined behavior.
- Existing CPU PTEs may still work until TLB flush, but new faults
fail immediately.
For PURGED BOs:
- Backing store has been reclaimed, making CPU access invalid.
- Without this check, accessing existing mmap mappings would trigger
xe_bo_fault_migrate() on freed backing store, causing kernel hangs
or crashes.
The purgeable check is added to both CPU fault paths:
- Fastpath (xe_bo_cpu_fault_fastpath): Returns VM_FAULT_SIGBUS immediately
under dma-resv lock, preventing attempts to migrate/validate
DONTNEED/purged pages.
- Slowpath (xe_bo_cpu_fault): Returns -EFAULT under drm_exec lock,
converted to VM_FAULT_SIGBUS.
This matches i915 semantics for purged buffer handling.
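As a standalone illustration (not driver code; the enum mirrors the patch's XE_MADV_PURGEABLE_* states but names and values here are assumed), the decision shared by both fault paths reduces to a single state predicate:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the driver's purgeable states. */
enum madv_state { MADV_WILLNEED, MADV_DONTNEED, MADV_PURGED };

/*
 * A CPU fault must raise SIGBUS for DONTNEED (contents may be reclaimed
 * at any moment) and for PURGED (backing store is already gone); only
 * WILLNEED BOs may proceed to migrate/validate.
 */
static bool cpu_fault_must_sigbus(enum madv_state s)
{
	return s == MADV_DONTNEED || s == MADV_PURGED;
}
```

The fastpath maps a true result to VM_FAULT_SIGBUS directly; the slowpath returns -EFAULT, which the caller converts to the same fault code.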
v2:
- Added xe_bo_is_purged(bo) instead of atomic_read.
- Avoids leaks and keeps drm_dev_exit() while returning.
v3:
- Move xe_bo_is_purged check under a dma-resv lock (Matthew Brost)
v4:
- Add purged check to fastpath (xe_bo_cpu_fault_fastpath) to prevent
hang when accessing existing mmap of purged BO.
v6:
- Block CPU faults to DONTNEED BOs with VM_FAULT_SIGBUS. (Thomas, Matt)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index b6055bb4c578..da18b43650e3 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1979,6 +1979,16 @@ static vm_fault_t xe_bo_cpu_fault_fastpath(struct vm_fault *vmf, struct xe_devic
if (!dma_resv_trylock(tbo->base.resv))
goto out_validation;
+ /*
+ * Reject CPU faults to purgeable BOs. DONTNEED BOs can be purged
+ * at any time, and purged BOs have no backing store. Either case
+ * is undefined behavior for CPU access.
+ */
+ if (xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo)) {
+ ret = VM_FAULT_SIGBUS;
+ goto out_unlock;
+ }
+
if (xe_ttm_bo_is_imported(tbo)) {
ret = VM_FAULT_SIGBUS;
drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");
@@ -2069,6 +2079,15 @@ static vm_fault_t xe_bo_cpu_fault(struct vm_fault *vmf)
if (err)
break;
+ /*
+ * Reject CPU faults to purgeable BOs. DONTNEED BOs can be
+ * purged at any time, and purged BOs have no backing store.
+ */
+ if (xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo)) {
+ err = -EFAULT;
+ break;
+ }
+
if (xe_ttm_bo_is_imported(tbo)) {
err = -EFAULT;
drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");
--
2.43.0
* [PATCH v7 05/12] drm/xe/vm: Prevent binding of purged buffer objects
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (3 preceding siblings ...)
2026-03-23 9:30 ` [PATCH v7 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-24 12:21 ` Thomas Hellström
2026-03-23 9:30 ` [PATCH v7 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
` (11 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Add purge checking to vma_lock_and_validate() to block new mapping
operations on purged BOs while allowing cleanup operations to proceed.
Purged BOs have their backing pages freed by the kernel. New
mapping operations (MAP, PREFETCH, REMAP) must be rejected with
-EINVAL to prevent GPU access to invalid memory. Cleanup
operations (UNMAP) must be allowed so applications can release
resources after detecting purge via the retained field.
REMAP operations require mixed handling - reject new prev/next
VMAs if the BO is purged, but allow the unmap portion to proceed
for cleanup.
The check_purged flag in struct xe_vma_lock_and_validate_flags
distinguishes between these cases: true for new mappings (must reject),
false for cleanup (allow).
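The resulting error selection can be sketched as a standalone model (states and error choices follow the patch; the function itself is illustrative, not the driver's vma_lock_and_validate()):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative model of the purgeable states checked at bind time. */
enum madv_state { MADV_WILLNEED, MADV_DONTNEED, MADV_PURGED };

/*
 * New-mapping operations (MAP, PREFETCH, REMAP prev/next) pass
 * check_purged = true; cleanup operations (UNMAP, the REMAP unmap leg)
 * pass false and are always allowed so resources can be released.
 */
static int bind_check(enum madv_state s, bool check_purged)
{
	if (!check_purged)
		return 0;		/* cleanup path: always allowed */
	if (s == MADV_DONTNEED)
		return -EBUSY;		/* marked purgeable, may vanish */
	if (s == MADV_PURGED)
		return -EINVAL;		/* permanently invalid */
	return 0;
}
```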
v2:
- Clarify that purged BOs are permanently invalid (i915 semantics)
- Remove incorrect claim about madvise(WILLNEED) restoring purged BOs
v3:
- Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
- Add check_purged parameter to distinguish new mappings from cleanup
- Allow UNMAP operations to prevent resource leaks
- Handle REMAP operation's dual nature (cleanup + new mappings)
v5:
- Replace three boolean parameters with struct xe_vma_lock_and_validate_flags
to improve readability and prevent argument transposition (Matt)
- Use u32 bitfields instead of bool members to match xe_bo_shrink_flags
pattern - more efficient packing and follows xe driver conventions (Thomas)
- Pass struct as const since flags are read-only (Matt)
v6:
- Block VM_BIND to DONTNEED BOs with -EBUSY (Thomas, Matt)
v7:
- Pass xe_vma_lock_and_validate_flags by value instead of by
pointer, consistent with xe driver style. (Thomas)
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 82 ++++++++++++++++++++++++++++++++------
1 file changed, 69 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index a0ade67d616e..9c1a82b64a43 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2918,8 +2918,22 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
}
}
+/**
+ * struct xe_vma_lock_and_validate_flags - Flags for vma_lock_and_validate()
+ * @res_evict: Allow evicting resources during validation
+ * @validate: Perform BO validation
+ * @request_decompress: Request BO decompression
+ * @check_purged: Reject operation if BO is purged
+ */
+struct xe_vma_lock_and_validate_flags {
+ u32 res_evict : 1;
+ u32 validate : 1;
+ u32 request_decompress : 1;
+ u32 check_purged : 1;
+};
+
static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
- bool res_evict, bool validate, bool request_decompress)
+ struct xe_vma_lock_and_validate_flags flags)
{
struct xe_bo *bo = xe_vma_bo(vma);
struct xe_vm *vm = xe_vma_vm(vma);
@@ -2928,15 +2942,24 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
if (bo) {
if (!bo->vm)
err = drm_exec_lock_obj(exec, &bo->ttm.base);
- if (!err && validate)
+
+ /* Reject new mappings to DONTNEED/purged BOs; allow cleanup operations */
+ if (!err && flags.check_purged) {
+ if (xe_bo_madv_is_dontneed(bo))
+ err = -EBUSY; /* BO marked purgeable */
+ else if (xe_bo_is_purged(bo))
+ err = -EINVAL; /* BO already purged */
+ }
+
+ if (!err && flags.validate)
err = xe_bo_validate(bo, vm,
xe_vm_allow_vm_eviction(vm) &&
- res_evict, exec);
+ flags.res_evict, exec);
if (err)
return err;
- if (request_decompress)
+ if (flags.request_decompress)
err = xe_bo_decompress(bo);
}
@@ -3030,10 +3053,13 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
case DRM_GPUVA_OP_MAP:
if (!op->map.invalidate_on_bind)
err = vma_lock_and_validate(exec, op->map.vma,
- res_evict,
- !xe_vm_in_fault_mode(vm) ||
- op->map.immediate,
- op->map.request_decompress);
+ (struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = !xe_vm_in_fault_mode(vm) ||
+ op->map.immediate,
+ .request_decompress = op->map.request_decompress,
+ .check_purged = true,
+ });
break;
case DRM_GPUVA_OP_REMAP:
err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
@@ -3042,13 +3068,28 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.remap.unmap->va),
- res_evict, false, false);
+ (struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = false,
+ .request_decompress = false,
+ .check_purged = false,
+ });
if (!err && op->remap.prev)
err = vma_lock_and_validate(exec, op->remap.prev,
- res_evict, true, false);
+ (struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = true,
+ .request_decompress = false,
+ .check_purged = true,
+ });
if (!err && op->remap.next)
err = vma_lock_and_validate(exec, op->remap.next,
- res_evict, true, false);
+ (struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = true,
+ .request_decompress = false,
+ .check_purged = true,
+ });
break;
case DRM_GPUVA_OP_UNMAP:
err = check_ufence(gpuva_to_vma(op->base.unmap.va));
@@ -3057,7 +3098,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.unmap.va),
- res_evict, false, false);
+ (struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = false,
+ .request_decompress = false,
+ .check_purged = false,
+ });
break;
case DRM_GPUVA_OP_PREFETCH:
{
@@ -3070,9 +3116,19 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
region <= ARRAY_SIZE(region_to_mem_type));
}
+ /*
+ * Prefetch attempts to migrate BO's backing store without
+ * repopulating it first. Purged BOs have no backing store
+ * to migrate, so reject the operation.
+ */
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.prefetch.va),
- res_evict, false, false);
+ (struct xe_vma_lock_and_validate_flags) {
+ .res_evict = res_evict,
+ .validate = false,
+ .request_decompress = false,
+ .check_purged = true,
+ });
if (!err && !xe_vma_has_no_bo(vma))
err = xe_bo_migrate(xe_vma_bo(vma),
region_to_mem_type[region],
--
2.43.0
* [PATCH v7 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (4 preceding siblings ...)
2026-03-23 9:30 ` [PATCH v7 05/12] drm/xe/vm: Prevent binding of purged " Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-24 12:25 ` Thomas Hellström
2026-03-23 9:30 ` [PATCH v7 07/12] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
` (10 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Track purgeable state per-VMA instead of using a coarse shared
BO check. This prevents purging shared BOs until all VMAs across
all VMs are marked DONTNEED.
Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
a BO purgeable. Add xe_bo_recompute_purgeable_state() to handle
state transitions when VMAs are destroyed: if all remaining VMAs
are DONTNEED the BO can become purgeable, and if no VMAs remain
the BO's current state is preserved.
The per-VMA purgeable_state field stores the madvise hint for
each mapping. Shared BOs can only be purged when all VMAs
unanimously indicate DONTNEED.
This prevents the bug where unmapping the last VMA would incorrectly flip
a DONTNEED BO back to WILLNEED. The enum-based state check preserves BO
state when no VMAs remain, only updating when VMAs provide explicit hints.
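The aggregation rule can be modeled in isolation (names echo the patch's xe_bo_vmas_purge_state enum, but this is illustrative standalone C, not the driver's locked implementation):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative per-VMA hint and the three-way aggregate result. */
enum vma_state { VMA_WILLNEED, VMA_DONTNEED };
enum vmas_agg { AGG_WILLNEED = 0, AGG_DONTNEED = 1, AGG_NO_VMAS = 2 };

/*
 * A single WILLNEED mapping vetoes purging; the no-VMAs case is reported
 * separately so the BO's existing state can be preserved rather than
 * being flipped back to WILLNEED when the last VMA goes away.
 */
static enum vmas_agg aggregate_vma_states(const enum vma_state *v, size_t n)
{
	size_t i;

	if (n == 0)
		return AGG_NO_VMAS;
	for (i = 0; i < n; i++)
		if (v[i] == VMA_WILLNEED)
			return AGG_WILLNEED;
	return AGG_DONTNEED;
}
```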
v3:
- This addresses Thomas Hellström's feedback: "loop over all vmas
attached to the bo and check that they all say WONTNEED. This will
also need a check at VMA unbinding"
v4:
- @madv_purgeable atomic_t → u32 change across all relevant
patches (Matt)
v5:
- Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
right after drm_gpuva_unlink() where we already hold the BO lock,
drop the trylock-based late destroy path (Matt)
- Move purgeable_state into xe_vma_mem_attr with the other madvise
attributes (Matt)
- Drop READ_ONCE since the BO lock already protects us (Matt)
- Keep returning false when there are no VMAs - otherwise we'd mark
BOs purgeable without any user hint (Matt)
- Use xe_bo_set_purgeable_state() instead of direct initialization(Matt)
- use xe_assert instead of drm_warn (Thomas)
v6:
- Fix state transition bug: don't flip DONTNEED → WILLNEED when last
VMA unmapped (Matt)
- Change xe_bo_all_vmas_dontneed() from bool to enum to distinguish
"no VMAs" from "has WILLNEED VMA" (Matt)
- Preserve BO state on NO_VMAS instead of forcing WILLNEED.
- Set skip_invalidation explicitly in madvise_purgeable() to ensure
DONTNEED always zaps GPU PTEs regardless of prior madvise state.
v7:
- Don't zap PTEs at DONTNEED time -- pages are still alive.
The zap happens in xe_bo_move_notify() right before the shrinker
frees them.
- Simplify xe_bo_recompute_purgeable_state() by relying on the
intentional value alignment between xe_bo_vmas_purge_state and
xe_madv_purgeable_state enums. Add static_assert to enforce the
alignment. (Thomas)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 1 +
drivers/gpu/drm/xe/xe_vm.c | 9 +-
drivers/gpu/drm/xe/xe_vm_madvise.c | 136 +++++++++++++++++++++++++++--
drivers/gpu/drm/xe/xe_vm_madvise.h | 3 +
drivers/gpu/drm/xe/xe_vm_types.h | 11 +++
5 files changed, 153 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index a91c84487a67..062ef77e283f 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -322,6 +322,7 @@ static void xe_vma_set_default_attributes(struct xe_vma *vma)
.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
.pat_index = vma->attr.default_pat_index,
.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+ .purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
};
xe_vma_mem_attr_copy(&vma->attr, &default_attr);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 9c1a82b64a43..07393540f34c 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -39,6 +39,7 @@
#include "xe_tile.h"
#include "xe_tlb_inval.h"
#include "xe_trace_bo.h"
+#include "xe_vm_madvise.h"
#include "xe_wa.h"
static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
@@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
static void xe_vma_destroy_late(struct xe_vma *vma)
{
struct xe_vm *vm = xe_vma_vm(vma);
+ struct xe_bo *bo = xe_vma_bo(vma);
if (vma->ufence) {
xe_sync_ufence_put(vma->ufence);
@@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
} else if (xe_vma_is_null(vma) || xe_vma_is_cpu_addr_mirror(vma)) {
xe_vm_put(vm);
} else {
- xe_bo_put(xe_vma_bo(vma));
+ xe_bo_put(bo);
}
xe_vma_free(vma);
@@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence *fence,
static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
{
struct xe_vm *vm = xe_vma_vm(vma);
+ struct xe_bo *bo = xe_vma_bo(vma);
lockdep_assert_held_write(&vm->lock);
xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
@@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
xe_userptr_destroy(to_userptr_vma(vma));
} else if (!xe_vma_is_null(vma) && !xe_vma_is_cpu_addr_mirror(vma)) {
- xe_bo_assert_held(xe_vma_bo(vma));
+ xe_bo_assert_held(bo);
drm_gpuva_unlink(&vma->gpuva);
+ xe_bo_recompute_purgeable_state(bo);
}
xe_vm_assert_held(vm);
@@ -2692,6 +2696,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
.default_pat_index = op->map.pat_index,
.pat_index = op->map.pat_index,
+ .purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
};
flags |= op->map.vma_flags & XE_VMA_CREATE_MASK;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index ffba2e41c539..ed1940da7739 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -13,6 +13,7 @@
#include "xe_pt.h"
#include "xe_svm.h"
#include "xe_tlb_inval.h"
+#include "xe_vm.h"
struct xe_vmas_in_madvise_range {
u64 addr;
@@ -184,6 +185,116 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
}
}
+/**
+ * enum xe_bo_vmas_purge_state - VMA purgeable state aggregation
+ *
+ * Distinguishes whether a BO's VMAs are all DONTNEED, have at least
+ * one WILLNEED, or have no VMAs at all.
+ *
+ * Enum values align with XE_MADV_PURGEABLE_* states for consistency.
+ */
+enum xe_bo_vmas_purge_state {
+ /** @XE_BO_VMAS_STATE_WILLNEED: At least one VMA is WILLNEED */
+ XE_BO_VMAS_STATE_WILLNEED = 0,
+ /** @XE_BO_VMAS_STATE_DONTNEED: All VMAs are DONTNEED */
+ XE_BO_VMAS_STATE_DONTNEED = 1,
+ /** @XE_BO_VMAS_STATE_NO_VMAS: BO has no VMAs */
+ XE_BO_VMAS_STATE_NO_VMAS = 2,
+};
+
+/*
+ * xe_bo_recompute_purgeable_state() casts between xe_bo_vmas_purge_state and
+ * xe_madv_purgeable_state. Enforce that WILLNEED=0 and DONTNEED=1 match across
+ * both enums so the single-line cast is always valid.
+ */
+static_assert(XE_BO_VMAS_STATE_WILLNEED == (int)XE_MADV_PURGEABLE_WILLNEED,
+ "VMA purge state WILLNEED must equal madv purgeable WILLNEED");
+static_assert(XE_BO_VMAS_STATE_DONTNEED == (int)XE_MADV_PURGEABLE_DONTNEED,
+ "VMA purge state DONTNEED must equal madv purgeable DONTNEED");
+
+/**
+ * xe_bo_all_vmas_dontneed() - Determine BO VMA purgeable state
+ * @bo: Buffer object
+ *
+ * Check all VMAs across all VMs to determine aggregate purgeable state.
+ * Shared BOs require unanimous DONTNEED state from all mappings.
+ *
+ * Caller must hold BO dma-resv lock.
+ *
+ * Return: XE_BO_VMAS_STATE_DONTNEED if all VMAs are DONTNEED,
+ * XE_BO_VMAS_STATE_WILLNEED if at least one VMA is not DONTNEED,
+ * XE_BO_VMAS_STATE_NO_VMAS if BO has no VMAs
+ */
+static enum xe_bo_vmas_purge_state xe_bo_all_vmas_dontneed(struct xe_bo *bo)
+{
+ struct drm_gpuvm_bo *vm_bo;
+ struct drm_gpuva *gpuva;
+ struct drm_gem_object *obj = &bo->ttm.base;
+ bool has_vmas = false;
+
+ xe_bo_assert_held(bo);
+
+ drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
+ drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ has_vmas = true;
+
+ /* Any non-DONTNEED VMA prevents purging */
+ if (vma->attr.purgeable_state != XE_MADV_PURGEABLE_DONTNEED)
+ return XE_BO_VMAS_STATE_WILLNEED;
+ }
+ }
+
+ /*
+ * No VMAs => preserve existing BO purgeable state.
+ * Avoids incorrectly flipping DONTNEED -> WILLNEED when last VMA unmapped.
+ */
+ if (!has_vmas)
+ return XE_BO_VMAS_STATE_NO_VMAS;
+
+ return XE_BO_VMAS_STATE_DONTNEED;
+}
+
+/**
+ * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state from VMAs
+ * @bo: Buffer object
+ *
+ * Walk all VMAs to determine if BO should be purgeable or not.
+ * Shared BOs require unanimous DONTNEED state from all mappings.
+ * If the BO has no VMAs the existing state is preserved.
+ *
+ * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM lists,
+ * VM lock must also be held (write) to prevent concurrent VMA modifications.
+ * This is satisfied at both call sites:
+ * - xe_vma_destroy(): holds vm->lock write
+ * - madvise_purgeable(): holds vm->lock write (from madvise ioctl path)
+ *
+ * Return: nothing
+ */
+void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
+{
+ enum xe_bo_vmas_purge_state vma_state;
+
+ if (!bo)
+ return;
+
+ xe_bo_assert_held(bo);
+
+ /*
+ * Once purged, always purged. Cannot transition back to WILLNEED.
+ * This matches i915 semantics where purged BOs are permanently invalid.
+ */
+ if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
+ return;
+
+ vma_state = xe_bo_all_vmas_dontneed(bo);
+
+ if (vma_state != (enum xe_bo_vmas_purge_state)bo->madv_purgeable &&
+ vma_state != XE_BO_VMAS_STATE_NO_VMAS)
+ xe_bo_set_purgeable_state(bo, (enum xe_madv_purgeable_state)vma_state);
+}
+
/**
* madvise_purgeable - Handle purgeable buffer object advice
* @xe: XE device
@@ -215,8 +326,11 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
for (i = 0; i < num_vmas; i++) {
struct xe_bo *bo = xe_vma_bo(vmas[i]);
- if (!bo)
+ if (!bo) {
+ /* Purgeable state applies to BOs only, skip non-BO VMAs */
+ vmas[i]->skip_invalidation = true;
continue;
+ }
/* BO must be locked before modifying madv state */
xe_bo_assert_held(bo);
@@ -227,19 +341,31 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
*/
if (xe_bo_is_purged(bo)) {
details->has_purged_bo = true;
+ vmas[i]->skip_invalidation = true;
continue;
}
switch (op->purge_state_val.val) {
case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
- xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
+ vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_WILLNEED;
+ vmas[i]->skip_invalidation = true;
+
+ xe_bo_recompute_purgeable_state(bo);
break;
case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
- xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
+ vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_DONTNEED;
+ /*
+ * Don't zap PTEs at DONTNEED time -- pages are still
+ * alive. The zap happens in xe_bo_move_notify() right
+ * before the shrinker frees them.
+ */
+ vmas[i]->skip_invalidation = true;
+
+ xe_bo_recompute_purgeable_state(bo);
break;
default:
- drm_warn(&vm->xe->drm, "Invalid madvice value = %d\n",
- op->purge_state_val.val);
+ /* Should never hit - values validated in madvise_args_are_sane() */
+ xe_assert(vm->xe, 0);
return;
}
}
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
index b0e1fc445f23..39acd2689ca0 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.h
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
@@ -8,8 +8,11 @@
struct drm_device;
struct drm_file;
+struct xe_bo;
int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
+void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 69e80c94138a..033cfdd56c95 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -95,6 +95,17 @@ struct xe_vma_mem_attr {
* same as default_pat_index unless overwritten by madvise.
*/
u16 pat_index;
+
+ /**
+ * @purgeable_state: Purgeable hint for this VMA mapping
+ *
+ * Per-VMA purgeable state from madvise. Valid states are WILLNEED (0)
+ * or DONTNEED (1). Shared BOs require all VMAs to be DONTNEED before
+ * the BO can be purged. PURGED state exists only at BO level.
+ *
+ * Protected by BO dma-resv lock. Set via DRM_IOCTL_XE_MADVISE.
+ */
+ u32 purgeable_state;
};
struct xe_vma {
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH v7 07/12] drm/xe/madvise: Block imported and exported dma-bufs
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (5 preceding siblings ...)
2026-03-23 9:30 ` [PATCH v7 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-24 14:13 ` Thomas Hellström
2026-03-23 9:30 ` [PATCH v7 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs Arvind Yadav
` (9 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Prevent marking imported or exported dma-bufs as purgeable.
External devices may be accessing these buffers without our
knowledge, making purging unsafe.
Check drm_gem_is_imported() for buffers created by other
drivers and obj->dma_buf for buffers exported to other
drivers. Silently skip these BOs during madvise processing.
This follows drm_gem_shmem's purgeable implementation and
prevents data corruption from purging actively-used shared
buffers.
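The aggregation rule this patch feeds into can be modelled in isolation. The sketch below is a standalone illustration, not the driver code: the names (`aggregate_state`, `purge_state`, the state array) are invented here, but the policy mirrors the series — a BO is only eligible for purging when it is not dma-buf shared and every VMA mapping it is DONTNEED, and a BO with no VMAs keeps its existing state.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum purge_state { WILLNEED = 0, DONTNEED = 1, NO_VMAS = 2 };

/* Aggregate per-VMA hints into a BO-level decision. */
enum purge_state aggregate_state(const enum purge_state *vma_states,
				 size_t n, bool dmabuf_shared)
{
	if (dmabuf_shared)
		return WILLNEED;	/* shared BOs are never purgeable */
	if (n == 0)
		return NO_VMAS;		/* preserve existing BO state */
	for (size_t i = 0; i < n; i++)
		if (vma_states[i] != DONTNEED)
			return WILLNEED; /* one dissenting VMA blocks purging */
	return DONTNEED;
}
```

The dma-buf short-circuit comes first so that even a unanimously DONTNEED, but shared, BO stays WILLNEED.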
v3:
- Addresses review feedback from Matt Roper about handling
imported/exported BOs correctly in the purgeable BO
implementation.
v4:
- Add the check to xe_vm_madvise_purgeable_bo().
v5:
- Rename xe_bo_is_external_dmabuf() to xe_bo_is_dmabuf_shared()
for clarity (Thomas)
- Update comments to clarify why both imports and exports
are unsafe to purge.
v6:
- No PTEs to zap for shared dma-bufs.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 38 ++++++++++++++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index ed1940da7739..340e83764a76 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -185,6 +185,34 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
}
}
+
+/**
+ * xe_bo_is_dmabuf_shared() - Check if BO is shared via dma-buf
+ * @bo: Buffer object
+ *
+ * Prevent marking imported or exported dma-bufs as purgeable.
+ * For imported BOs, Xe doesn't own the backing store and cannot
+ * safely reclaim pages (exporter or other devices may still be
+ * using them). For exported BOs, external devices may have active
+ * mappings we cannot track.
+ *
+ * Return: true if BO is imported or exported, false otherwise
+ */
+static bool xe_bo_is_dmabuf_shared(struct xe_bo *bo)
+{
+ struct drm_gem_object *obj = &bo->ttm.base;
+
+ /* Imported: exporter owns backing store */
+ if (drm_gem_is_imported(obj))
+ return true;
+
+ /* Exported: external devices may be accessing */
+ if (obj->dma_buf)
+ return true;
+
+ return false;
+}
+
/**
* enum xe_bo_vmas_purge_state - VMA purgeable state aggregation
*
@@ -234,6 +262,10 @@ static enum xe_bo_vmas_purge_state xe_bo_all_vmas_dontneed(struct xe_bo *bo)
xe_bo_assert_held(bo);
+ /* Shared dma-bufs cannot be purgeable */
+ if (xe_bo_is_dmabuf_shared(bo))
+ return XE_BO_VMAS_STATE_WILLNEED;
+
drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
struct xe_vma *vma = gpuva_to_vma(gpuva);
@@ -335,6 +367,12 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
/* BO must be locked before modifying madv state */
xe_bo_assert_held(bo);
+ /* Skip shared dma-bufs - no PTEs to zap */
+ if (xe_bo_is_dmabuf_shared(bo)) {
+ vmas[i]->skip_invalidation = true;
+ continue;
+ }
+
/*
* Once purged, always purged. Cannot transition back to WILLNEED.
* This matches i915 semantics where purged BOs are permanently invalid.
--
2.43.0
* [PATCH v7 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (6 preceding siblings ...)
2026-03-23 9:30 ` [PATCH v7 07/12] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-26 1:33 ` Matthew Brost
2026-03-23 9:30 ` [PATCH v7 09/12] drm/xe/dma_buf: Block export " Arvind Yadav
` (8 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Don't allow new CPU mmaps to BOs marked DONTNEED or PURGED.
DONTNEED BOs can have their contents discarded at any time, making
CPU access undefined behavior. PURGED BOs have no backing store and
are permanently invalid.
Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
-EINVAL for purged BOs (permanent, no backing store).
The mmap path now checks the BO's purgeable state in the new
xe_gem_object_mmap() callback before userspace can establish a CPU
mapping. This closes the race where userspace holds a valid mmap
offset but the BO is purged before the mapping is created.
Existing mmaps (established before DONTNEED) may still work until
pages are purged, at which point CPU faults fail with SIGBUS.
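The errno policy used here (and by the dma-buf export patch) can be summarised as a small standalone function. This is an illustrative model, not the driver helpers; the enum and `check_cpu_access` name are invented for the sketch:

```c
#include <assert.h>
#include <errno.h>

enum bo_state { BO_WILLNEED, BO_DONTNEED, BO_PURGED };

/* DONTNEED is a temporary condition (-EBUSY): userspace can flip the
 * BO back to WILLNEED and retry. PURGED is permanent (-EINVAL): the
 * backing store is gone and the BO must be destroyed and recreated. */
int check_cpu_access(enum bo_state s)
{
	switch (s) {
	case BO_DONTNEED:
		return -EBUSY;	/* may be purged at any time */
	case BO_PURGED:
		return -EINVAL;	/* no backing store, permanently invalid */
	default:
		return 0;	/* WILLNEED: access allowed */
	}
}
```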
v6:
- Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
with the rest of the series (Thomas, Matt)
v7:
- Move purgeable check from xe_gem_mmap_offset_ioctl() into a new
xe_gem_object_mmap() callback that wraps drm_gem_ttm_mmap(). (Thomas)
- Use an interruptible lock. (Thomas)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index da18b43650e3..83a1d1ca6cc6 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -2165,10 +2165,32 @@ static const struct vm_operations_struct xe_gem_vm_ops = {
.access = xe_bo_vm_access,
};
+static int xe_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+{
+ struct xe_bo *bo = gem_to_xe_bo(obj);
+ int err = 0;
+
+ /*
+ * Reject mmap of purgeable BOs. DONTNEED BOs can be purged
+ * at any time, making CPU access undefined behavior. Purged BOs have
+ * no backing store and are permanently invalid.
+ */
+ xe_bo_lock(bo, true);
+ if (xe_bo_madv_is_dontneed(bo))
+ err = -EBUSY;
+ else if (xe_bo_is_purged(bo))
+ err = -EINVAL;
+ xe_bo_unlock(bo);
+ if (err)
+ return err;
+
+ return drm_gem_ttm_mmap(obj, vma);
+}
+
static const struct drm_gem_object_funcs xe_gem_object_funcs = {
.free = xe_gem_object_free,
.close = xe_gem_object_close,
- .mmap = drm_gem_ttm_mmap,
+ .mmap = xe_gem_object_mmap,
.export = xe_gem_prime_export,
.vm_ops = &xe_gem_vm_ops,
};
@@ -3427,8 +3449,8 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
/* The mmap offset was set up at BO allocation time. */
args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
- xe_bo_put(gem_to_xe_bo(gem_obj));
+ drm_gem_object_put(gem_obj);
return 0;
}
--
2.43.0
* [PATCH v7 09/12] drm/xe/dma_buf: Block export of DONTNEED/purged BOs
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (7 preceding siblings ...)
2026-03-23 9:30 ` [PATCH v7 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-24 14:47 ` Thomas Hellström
2026-03-23 9:30 ` [PATCH v7 10/12] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
` (7 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Don't allow exporting BOs marked DONTNEED or PURGED as dma-bufs.
DONTNEED BOs can have their contents discarded at any time, making
the exported dma-buf unusable for external devices. PURGED BOs have
no backing store and are permanently invalid.
Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
-EINVAL for purged BOs (permanent, no backing store).
The export path now checks the BO's purgeable state before creating
the dma-buf, preventing external devices from accessing memory that
may be purged at any time.
v6:
- Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
with the rest of the series (Thomas, Matt)
v7:
- Use an interruptible lock. (Thomas)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_dma_buf.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
index ea370cd373e9..4edbe9f3c001 100644
--- a/drivers/gpu/drm/xe/xe_dma_buf.c
+++ b/drivers/gpu/drm/xe/xe_dma_buf.c
@@ -223,6 +223,23 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
if (bo->vm)
return ERR_PTR(-EPERM);
+ /*
+ * Reject exporting purgeable BOs. DONTNEED BOs can be purged
+ * at any time, making the exported dma-buf unusable. Purged BOs
+ * have no backing store and are permanently invalid.
+ */
+ xe_bo_lock(bo, true);
+ if (xe_bo_madv_is_dontneed(bo)) {
+ ret = -EBUSY;
+ goto out_unlock;
+ }
+
+ if (xe_bo_is_purged(bo)) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+ xe_bo_unlock(bo);
+
ret = ttm_bo_setup_export(&bo->ttm, &ctx);
if (ret)
return ERR_PTR(ret);
@@ -232,6 +249,10 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
buf->ops = &xe_dmabuf_ops;
return buf;
+
+out_unlock:
+ xe_bo_unlock(bo);
+ return ERR_PTR(ret);
}
static struct drm_gem_object *
--
2.43.0
* [PATCH v7 10/12] drm/xe/bo: Add purgeable shrinker state helpers
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (8 preceding siblings ...)
2026-03-23 9:30 ` [PATCH v7 09/12] drm/xe/dma_buf: Block export " Arvind Yadav
@ 2026-03-23 9:30 ` Arvind Yadav
2026-03-24 14:51 ` Thomas Hellström
2026-03-23 9:31 ` [PATCH v7 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
` (6 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:30 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Encapsulate TTM purgeable flag updates and shrinker page accounting
into helper functions to prevent desynchronization between the TTM
tt->purgeable flag and the shrinker's page bucket counters.
Without these helpers, direct manipulation of xe_ttm_tt->purgeable
risks forgetting to update the corresponding shrinker counters,
leading to incorrect memory pressure calculations.
Update purgeable BO state to PURGED after successful shrinker purge
for DONTNEED BOs.
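The accounting invariant the helper enforces can be sketched standalone. This is a toy model, not the driver code (`shrinker_model`, `set_purgeable`, and the two counters are invented names standing in for xe_shrinker_mod_pages() and the tt->purgeable flag): pages move between the "shrinkable" and "purgeable" buckets exactly once per state flip, keyed off the flag so double-accounting is impossible.

```c
#include <assert.h>
#include <stdbool.h>

struct shrinker_model {
	long shrinkable_pages;
	long purgeable_pages;
};

/* Flip the purgeable flag and transfer the page count atomically with
 * respect to the model; a call that matches the current flag is a no-op,
 * so repeated madvise calls cannot skew the counters. */
void set_purgeable(struct shrinker_model *s, bool *tt_purgeable,
		   bool want_purgeable, long tt_pages)
{
	if (!*tt_purgeable && want_purgeable) {
		*tt_purgeable = true;
		s->shrinkable_pages -= tt_pages;
		s->purgeable_pages += tt_pages;
	} else if (*tt_purgeable && !want_purgeable) {
		*tt_purgeable = false;
		s->shrinkable_pages += tt_pages;
		s->purgeable_pages -= tt_pages;
	}
}
```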
v4:
- @madv_purgeable atomic_t → u32 change across all relevant
patches (Matt)
v5:
- Update purgeable BO state to PURGED after a successful shrinker
purge for DONTNEED BOs.
- Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)
v6:
- Create separate patch for 'Split ghost BO and zero-refcount
handling'. (Thomas)
v7:
- Merge xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker()
into a single static helper xe_bo_set_purgeable_shrinker(bo, new_state)
called automatically from xe_bo_set_purgeable_state(). Callers no longer
need to manage shrinker accounting separately. (Thomas)
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 43 +++++++++++++++++++++++++++++++++++++-
1 file changed, 42 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 83a1d1ca6cc6..85e42e785ebe 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -835,6 +835,42 @@ static int xe_bo_move_notify(struct xe_bo *bo,
return 0;
}
+/**
+ * xe_bo_set_purgeable_shrinker() - Update shrinker accounting for purgeable state
+ * @bo: Buffer object
+ * @new_state: New purgeable state being set
+ *
+ * Transfers pages between shrinkable and purgeable buckets when the BO
+ * purgeable state changes. Called automatically from xe_bo_set_purgeable_state().
+ */
+static void xe_bo_set_purgeable_shrinker(struct xe_bo *bo,
+ enum xe_madv_purgeable_state new_state)
+{
+ struct ttm_buffer_object *ttm_bo = &bo->ttm;
+ struct ttm_tt *tt = ttm_bo->ttm;
+ struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+ struct xe_ttm_tt *xe_tt;
+ long tt_pages;
+
+ xe_bo_assert_held(bo);
+
+ if (!tt || !ttm_tt_is_populated(tt))
+ return;
+
+ xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
+ tt_pages = tt->num_pages;
+
+ if (!xe_tt->purgeable && new_state == XE_MADV_PURGEABLE_DONTNEED) {
+ xe_tt->purgeable = true;
+ /* Transfer pages from shrinkable to purgeable count */
+ xe_shrinker_mod_pages(xe->mem.shrinker, -tt_pages, tt_pages);
+ } else if (xe_tt->purgeable && new_state == XE_MADV_PURGEABLE_WILLNEED) {
+ xe_tt->purgeable = false;
+ /* Transfer pages from purgeable to shrinkable count */
+ xe_shrinker_mod_pages(xe->mem.shrinker, tt_pages, -tt_pages);
+ }
+}
+
/**
* xe_bo_set_purgeable_state() - Set BO purgeable state with validation
* @bo: Buffer object
@@ -842,7 +878,8 @@ static int xe_bo_move_notify(struct xe_bo *bo,
*
* Sets the purgeable state with lockdep assertions and validates state
* transitions. Once a BO is PURGED, it cannot transition to any other state.
- * Invalid transitions are caught with xe_assert().
+ * Invalid transitions are caught with xe_assert(). Shrinker page accounting
+ * is updated automatically.
*/
void xe_bo_set_purgeable_state(struct xe_bo *bo,
enum xe_madv_purgeable_state new_state)
@@ -861,6 +898,7 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
new_state != XE_MADV_PURGEABLE_PURGED));
bo->madv_purgeable = new_state;
+ xe_bo_set_purgeable_shrinker(bo, new_state);
}
/**
@@ -1243,6 +1281,9 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
lret = xe_bo_move_notify(xe_bo, ctx);
if (!lret)
lret = xe_bo_shrink_purge(ctx, bo, scanned);
+ if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
+ xe_bo_set_purgeable_state(xe_bo,
+ XE_MADV_PURGEABLE_PURGED);
goto out_unref;
}
--
2.43.0
* [PATCH v7 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (9 preceding siblings ...)
2026-03-23 9:30 ` [PATCH v7 10/12] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
@ 2026-03-23 9:31 ` Arvind Yadav
2026-03-23 9:31 ` [PATCH v7 12/12] drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl Arvind Yadav
` (5 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:31 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Hook the madvise_purgeable() handler into the madvise IOCTL now that all
supporting infrastructure is complete:
- Core purge implementation (patch 3)
- BO state tracking and helpers (patches 1-2)
- Per-VMA purgeable state tracking (patch 6)
- Shrinker integration for memory reclamation (patch 10)
This final patch enables userspace to use the DRM_XE_VMA_ATTR_PURGEABLE_STATE
madvise type to mark buffers as WILLNEED/DONTNEED and receive the retained
status indicating whether buffers were purged.
The feature was kept disabled in earlier patches to maintain bisectability
and to ensure all components are in place before exposing it to userspace.
Userspace can detect kernel support for purgeable BOs by checking the
DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT flag in the query_config
response.
v6:
- Add DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for userspace
feature detection. (Jose)
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_query.c | 2 ++
drivers/gpu/drm/xe/xe_vm_madvise.c | 22 +++++-----------------
2 files changed, 7 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
index 4852fdcb4b95..d84d6a422c45 100644
--- a/drivers/gpu/drm/xe/xe_query.c
+++ b/drivers/gpu/drm/xe/xe_query.c
@@ -342,6 +342,8 @@ static int query_config(struct xe_device *xe, struct drm_xe_device_query *query)
DRM_XE_QUERY_CONFIG_FLAG_HAS_LOW_LATENCY;
config->info[DRM_XE_QUERY_CONFIG_FLAGS] |=
DRM_XE_QUERY_CONFIG_FLAG_HAS_DISABLE_STATE_CACHE_PERF_FIX;
+ config->info[DRM_XE_QUERY_CONFIG_FLAGS] |=
+ DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT;
config->info[DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT] =
xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K ? SZ_64K : SZ_4K;
config->info[DRM_XE_QUERY_CONFIG_VA_BITS] = xe->info.va_bits;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 340e83764a76..4a19da5e86d4 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -338,18 +338,11 @@ void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
*
* Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was purged
* in details->has_purged_bo for later copy to userspace.
- *
- * Note: Marked __maybe_unused until hooked into madvise_funcs[] in the
- * final patch to maintain bisectability. The NULL placeholder in the
- * array ensures proper -EINVAL return for userspace until all supporting
- * infrastructure (shrinker, per-VMA tracking) is complete.
*/
-static void __maybe_unused madvise_purgeable(struct xe_device *xe,
- struct xe_vm *vm,
- struct xe_vma **vmas,
- int num_vmas,
- struct drm_xe_madvise *op,
- struct xe_madvise_details *details)
+static void madvise_purgeable(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op,
+ struct xe_madvise_details *details)
{
int i;
@@ -418,12 +411,7 @@ static const madvise_func madvise_funcs[] = {
[DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
[DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
[DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
- /*
- * Purgeable support implemented but not enabled yet to maintain
- * bisectability. Will be set to madvise_purgeable() in final patch
- * when all infrastructure (shrinker, VMA tracking) is complete.
- */
- [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
+ [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = madvise_purgeable,
};
static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
--
2.43.0
* [PATCH v7 12/12] drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (10 preceding siblings ...)
2026-03-23 9:31 ` [PATCH v7 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
@ 2026-03-23 9:31 ` Arvind Yadav
2026-03-24 3:35 ` Matthew Brost
2026-03-23 9:40 ` ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev8) Patchwork
` (4 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Arvind Yadav @ 2026-03-23 9:31 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Userspace passes canonical (sign-extended) GPU addresses where bits 63:48
mirror bit 47. The internal GPUVM uses non-canonical form (upper bits
zeroed), so passing raw canonical addresses into GPUVM lookups causes
mismatches for addresses above 128TiB.
Strip the sign extension with xe_device_uncanonicalize_addr() at the
top of xe_vm_madvise_ioctl(). Non-canonical addresses are unaffected.
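The two address forms can be sketched in isolation. This is an illustration only, with 48 VA bits assumed for concreteness (the real helpers consult the device's va_bits); the function names here are invented, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Canonical form: bits 63:va_bits mirror bit (va_bits - 1). */
uint64_t canonicalize(uint64_t addr, unsigned int va_bits)
{
	/* Shift the top VA bit into the sign position, then arithmetic
	 * shift back to replicate it through the upper bits. */
	return (uint64_t)((int64_t)(addr << (64 - va_bits)) >> (64 - va_bits));
}

/* Non-canonical (internal GPUVM) form: upper bits zeroed. */
uint64_t uncanonicalize(uint64_t addr, unsigned int va_bits)
{
	return addr & ((1ull << va_bits) - 1);
}
```

An address at or above 2^47 (128 TiB) round-trips through sign extension, while lower addresses pass through both helpers unchanged.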
Fixes: ada7486c5668 ("drm/xe: Implement madvise ioctl for xe")
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 4a19da5e86d4..2d03676ee595 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -673,8 +673,15 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
struct xe_device *xe = to_xe_device(dev);
struct xe_file *xef = to_xe_file(file);
struct drm_xe_madvise *args = data;
- struct xe_vmas_in_madvise_range madvise_range = {.addr = args->start,
- .range = args->range, };
+ struct xe_vmas_in_madvise_range madvise_range = {
+ /*
+ * Userspace may pass canonical (sign-extended) addresses.
+ * Strip the sign extension to get the internal non-canonical
+ * form used by the GPUVM, matching xe_vm_bind_ioctl() behavior.
+ */
+ .addr = xe_device_uncanonicalize_addr(xe, args->start),
+ .range = args->range,
+ };
struct xe_madvise_details details;
struct xe_vm *vm;
struct drm_exec exec;
@@ -724,7 +731,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
if (err)
goto unlock_vm;
- err = xe_vm_alloc_madvise_vma(vm, args->start, args->range);
+ err = xe_vm_alloc_madvise_vma(vm, madvise_range.addr, args->range);
if (err)
goto madv_fini;
@@ -774,7 +781,8 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args,
&details);
- err = xe_vm_invalidate_madvise_range(vm, args->start, args->start + args->range);
+ err = xe_vm_invalidate_madvise_range(vm, madvise_range.addr,
+ madvise_range.addr + args->range);
if (madvise_range.has_svm_userptr_vmas)
xe_svm_notifier_unlock(vm);
--
2.43.0
* ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev8)
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (11 preceding siblings ...)
2026-03-23 9:31 ` [PATCH v7 12/12] drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl Arvind Yadav
@ 2026-03-23 9:40 ` Patchwork
2026-03-23 9:42 ` ✓ CI.KUnit: success " Patchwork
` (3 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Patchwork @ 2026-03-23 9:40 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
== Series Details ==
Series: drm/xe/madvise: Add support for purgeable buffer objects (rev8)
URL : https://patchwork.freedesktop.org/series/156651/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
1f57ba1afceae32108bd24770069f764d940a0e4
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 0952626bd51291fbb52833d8b6e629a4ae3502d6
Author: Arvind Yadav <arvind.yadav@intel.com>
Date: Mon Mar 23 15:01:01 2026 +0530
drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl
Userspace passes canonical (sign-extended) GPU addresses where bits 63:48
mirror bit 47. The internal GPUVM uses non-canonical form (upper bits
zeroed), so passing raw canonical addresses into GPUVM lookups causes
mismatches for addresses above 128TiB.
Strip the sign extension with xe_device_uncanonicalize_addr() at the
top of xe_vm_madvise_ioctl(). Non-canonical addresses are unaffected.
Fixes: ada7486c5668 ("drm/xe: Implement madvise ioctl for xe")
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
+ /mt/dim checkpatch aea7130e799d7d9a09c00f453f7edda33f4587e7 drm-intel
81eb56e4fb82 drm/xe/uapi: Add UAPI support for purgeable buffer objects
1f46e4a73bc3 drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
8a15573b0868 drm/xe/madvise: Implement purgeable buffer object support
-:23: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#23:
- Async TLB invalidation via trigger_rebind (no blocking xe_vm_invalidate_vma)
-:132: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#132: FILE: drivers/gpu/drm/xe/xe_bo.c:856:
+ xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
+ new_state == XE_MADV_PURGEABLE_DONTNEED ||
total: 0 errors, 1 warnings, 1 checks, 517 lines checked
cd8447dbb9f5 drm/xe/bo: Block CPU faults to purgeable buffer objects
3a868969ec09 drm/xe/vm: Prevent binding of purged buffer objects
-:37: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#37:
- Replace three boolean parameters with struct xe_vma_lock_and_validate_flags
-:123: WARNING:LONG_LINE: line length of 105 exceeds 100 columns
#123: FILE: drivers/gpu/drm/xe/xe_vm.c:3060:
+ .request_decompress = op->map.request_decompress,
total: 0 errors, 2 warnings, 0 checks, 131 lines checked
14d60d5357c6 drm/xe/madvise: Implement per-VMA purgeable state tracking
9225a876d963 drm/xe/madvise: Block imported and exported dma-bufs
-:51: CHECK:LINE_SPACING: Please don't use multiple blank lines
#51: FILE: drivers/gpu/drm/xe/xe_vm_madvise.c:188:
+
total: 0 errors, 0 warnings, 1 checks, 56 lines checked
27b3b103f5cf drm/xe/bo: Block mmap of DONTNEED/purged BOs
a0acce580eb3 drm/xe/dma_buf: Block export of DONTNEED/purged BOs
66049647446d drm/xe/bo: Add purgeable shrinker state helpers
-:34: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#34:
- Merge xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker()
total: 0 errors, 1 warnings, 0 checks, 67 lines checked
41a5701076e4 drm/xe/madvise: Enable purgeable buffer object IOCTL support
-:17: WARNING:COMMIT_LOG_LONG_LINE: Prefer a maximum 75 chars per line (possible unwrapped commit description?)
#17:
This final patch enables userspace to use the DRM_XE_VMA_ATTR_PURGEABLE_STATE
total: 0 errors, 1 warnings, 0 checks, 43 lines checked
0952626bd512 drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl
* ✓ CI.KUnit: success for drm/xe/madvise: Add support for purgeable buffer objects (rev8)
From: Patchwork @ 2026-03-23 9:42 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
== Series Details ==
Series: drm/xe/madvise: Add support for purgeable buffer objects (rev8)
URL : https://patchwork.freedesktop.org/series/156651/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[09:40:46] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[09:40:51] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[09:41:52] Starting KUnit Kernel (1/1)...
[09:41:52] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[09:41:52] ================== guc_buf (11 subtests) ===================
[09:41:52] [PASSED] test_smallest
[09:41:52] [PASSED] test_largest
[09:41:52] [PASSED] test_granular
[09:41:52] [PASSED] test_unique
[09:41:52] [PASSED] test_overlap
[09:41:52] [PASSED] test_reusable
[09:41:52] [PASSED] test_too_big
[09:41:52] [PASSED] test_flush
[09:41:52] [PASSED] test_lookup
[09:41:52] [PASSED] test_data
[09:41:52] [PASSED] test_class
[09:41:52] ===================== [PASSED] guc_buf =====================
[09:41:52] =================== guc_dbm (7 subtests) ===================
[09:41:52] [PASSED] test_empty
[09:41:52] [PASSED] test_default
[09:41:52] ======================== test_size ========================
[09:41:52] [PASSED] 4
[09:41:52] [PASSED] 8
[09:41:52] [PASSED] 32
[09:41:52] [PASSED] 256
[09:41:52] ==================== [PASSED] test_size ====================
[09:41:52] ======================= test_reuse ========================
[09:41:52] [PASSED] 4
[09:41:52] [PASSED] 8
[09:41:52] [PASSED] 32
[09:41:52] [PASSED] 256
[09:41:52] =================== [PASSED] test_reuse ====================
[09:41:52] =================== test_range_overlap ====================
[09:41:52] [PASSED] 4
[09:41:52] [PASSED] 8
[09:41:52] [PASSED] 32
[09:41:52] [PASSED] 256
[09:41:52] =============== [PASSED] test_range_overlap ================
[09:41:52] =================== test_range_compact ====================
[09:41:52] [PASSED] 4
[09:41:52] [PASSED] 8
[09:41:52] [PASSED] 32
[09:41:52] [PASSED] 256
[09:41:52] =============== [PASSED] test_range_compact ================
[09:41:52] ==================== test_range_spare =====================
[09:41:52] [PASSED] 4
[09:41:52] [PASSED] 8
[09:41:52] [PASSED] 32
[09:41:52] [PASSED] 256
[09:41:52] ================ [PASSED] test_range_spare =================
[09:41:52] ===================== [PASSED] guc_dbm =====================
[09:41:52] =================== guc_idm (6 subtests) ===================
[09:41:52] [PASSED] bad_init
[09:41:52] [PASSED] no_init
[09:41:52] [PASSED] init_fini
[09:41:52] [PASSED] check_used
[09:41:52] [PASSED] check_quota
[09:41:52] [PASSED] check_all
[09:41:52] ===================== [PASSED] guc_idm =====================
[09:41:52] ================== no_relay (3 subtests) ===================
[09:41:52] [PASSED] xe_drops_guc2pf_if_not_ready
[09:41:52] [PASSED] xe_drops_guc2vf_if_not_ready
[09:41:52] [PASSED] xe_rejects_send_if_not_ready
[09:41:52] ==================== [PASSED] no_relay =====================
[09:41:52] ================== pf_relay (14 subtests) ==================
[09:41:52] [PASSED] pf_rejects_guc2pf_too_short
[09:41:52] [PASSED] pf_rejects_guc2pf_too_long
[09:41:52] [PASSED] pf_rejects_guc2pf_no_payload
[09:41:52] [PASSED] pf_fails_no_payload
[09:41:52] [PASSED] pf_fails_bad_origin
[09:41:52] [PASSED] pf_fails_bad_type
[09:41:52] [PASSED] pf_txn_reports_error
[09:41:52] [PASSED] pf_txn_sends_pf2guc
[09:41:52] [PASSED] pf_sends_pf2guc
[09:41:52] [SKIPPED] pf_loopback_nop
[09:41:52] [SKIPPED] pf_loopback_echo
[09:41:52] [SKIPPED] pf_loopback_fail
[09:41:52] [SKIPPED] pf_loopback_busy
[09:41:52] [SKIPPED] pf_loopback_retry
[09:41:52] ==================== [PASSED] pf_relay =====================
[09:41:52] ================== vf_relay (3 subtests) ===================
[09:41:52] [PASSED] vf_rejects_guc2vf_too_short
[09:41:52] [PASSED] vf_rejects_guc2vf_too_long
[09:41:52] [PASSED] vf_rejects_guc2vf_no_payload
[09:41:52] ==================== [PASSED] vf_relay =====================
[09:41:52] ================ pf_gt_config (9 subtests) =================
[09:41:52] [PASSED] fair_contexts_1vf
[09:41:52] [PASSED] fair_doorbells_1vf
[09:41:52] [PASSED] fair_ggtt_1vf
[09:41:52] ====================== fair_vram_1vf ======================
[09:41:52] [PASSED] 3.50 GiB
[09:41:52] [PASSED] 11.5 GiB
[09:41:52] [PASSED] 15.5 GiB
[09:41:52] [PASSED] 31.5 GiB
[09:41:52] [PASSED] 63.5 GiB
[09:41:52] [PASSED] 1.91 GiB
[09:41:53] ================== [PASSED] fair_vram_1vf ==================
[09:41:53] ================ fair_vram_1vf_admin_only =================
[09:41:53] [PASSED] 3.50 GiB
[09:41:53] [PASSED] 11.5 GiB
[09:41:53] [PASSED] 15.5 GiB
[09:41:53] [PASSED] 31.5 GiB
[09:41:53] [PASSED] 63.5 GiB
[09:41:53] [PASSED] 1.91 GiB
[09:41:53] ============ [PASSED] fair_vram_1vf_admin_only =============
[09:41:53] ====================== fair_contexts ======================
[09:41:53] [PASSED] 1 VF
[09:41:53] [PASSED] 2 VFs
[09:41:53] [PASSED] 3 VFs
[09:41:53] [PASSED] 4 VFs
[09:41:53] [PASSED] 5 VFs
[09:41:53] [PASSED] 6 VFs
[09:41:53] [PASSED] 7 VFs
[09:41:53] [PASSED] 8 VFs
[09:41:53] [PASSED] 9 VFs
[09:41:53] [PASSED] 10 VFs
[09:41:53] [PASSED] 11 VFs
[09:41:53] [PASSED] 12 VFs
[09:41:53] [PASSED] 13 VFs
[09:41:53] [PASSED] 14 VFs
[09:41:53] [PASSED] 15 VFs
[09:41:53] [PASSED] 16 VFs
[09:41:53] [PASSED] 17 VFs
[09:41:53] [PASSED] 18 VFs
[09:41:53] [PASSED] 19 VFs
[09:41:53] [PASSED] 20 VFs
[09:41:53] [PASSED] 21 VFs
[09:41:53] [PASSED] 22 VFs
[09:41:53] [PASSED] 23 VFs
[09:41:53] [PASSED] 24 VFs
[09:41:53] [PASSED] 25 VFs
[09:41:53] [PASSED] 26 VFs
[09:41:53] [PASSED] 27 VFs
[09:41:53] [PASSED] 28 VFs
[09:41:53] [PASSED] 29 VFs
[09:41:53] [PASSED] 30 VFs
[09:41:53] [PASSED] 31 VFs
[09:41:53] [PASSED] 32 VFs
[09:41:53] [PASSED] 33 VFs
[09:41:53] [PASSED] 34 VFs
[09:41:53] [PASSED] 35 VFs
[09:41:53] [PASSED] 36 VFs
[09:41:53] [PASSED] 37 VFs
[09:41:53] [PASSED] 38 VFs
[09:41:53] [PASSED] 39 VFs
[09:41:53] [PASSED] 40 VFs
[09:41:53] [PASSED] 41 VFs
[09:41:53] [PASSED] 42 VFs
[09:41:53] [PASSED] 43 VFs
[09:41:53] [PASSED] 44 VFs
[09:41:53] [PASSED] 45 VFs
[09:41:53] [PASSED] 46 VFs
[09:41:53] [PASSED] 47 VFs
[09:41:53] [PASSED] 48 VFs
[09:41:53] [PASSED] 49 VFs
[09:41:53] [PASSED] 50 VFs
[09:41:53] [PASSED] 51 VFs
[09:41:53] [PASSED] 52 VFs
[09:41:53] [PASSED] 53 VFs
[09:41:53] [PASSED] 54 VFs
[09:41:53] [PASSED] 55 VFs
[09:41:53] [PASSED] 56 VFs
[09:41:53] [PASSED] 57 VFs
[09:41:53] [PASSED] 58 VFs
[09:41:53] [PASSED] 59 VFs
[09:41:53] [PASSED] 60 VFs
[09:41:53] [PASSED] 61 VFs
[09:41:53] [PASSED] 62 VFs
[09:41:53] [PASSED] 63 VFs
[09:41:53] ================== [PASSED] fair_contexts ==================
[09:41:53] ===================== fair_doorbells ======================
[09:41:53] [PASSED] 1 VF
[09:41:53] [PASSED] 2 VFs
[09:41:53] [PASSED] 3 VFs
[09:41:53] [PASSED] 4 VFs
[09:41:53] [PASSED] 5 VFs
[09:41:53] [PASSED] 6 VFs
[09:41:53] [PASSED] 7 VFs
[09:41:53] [PASSED] 8 VFs
[09:41:53] [PASSED] 9 VFs
[09:41:53] [PASSED] 10 VFs
[09:41:53] [PASSED] 11 VFs
[09:41:53] [PASSED] 12 VFs
[09:41:53] [PASSED] 13 VFs
[09:41:53] [PASSED] 14 VFs
[09:41:53] [PASSED] 15 VFs
[09:41:53] [PASSED] 16 VFs
[09:41:53] [PASSED] 17 VFs
[09:41:53] [PASSED] 18 VFs
[09:41:53] [PASSED] 19 VFs
[09:41:53] [PASSED] 20 VFs
[09:41:53] [PASSED] 21 VFs
[09:41:53] [PASSED] 22 VFs
[09:41:53] [PASSED] 23 VFs
[09:41:53] [PASSED] 24 VFs
[09:41:53] [PASSED] 25 VFs
[09:41:53] [PASSED] 26 VFs
[09:41:53] [PASSED] 27 VFs
[09:41:53] [PASSED] 28 VFs
[09:41:53] [PASSED] 29 VFs
[09:41:53] [PASSED] 30 VFs
[09:41:53] [PASSED] 31 VFs
[09:41:53] [PASSED] 32 VFs
[09:41:53] [PASSED] 33 VFs
[09:41:53] [PASSED] 34 VFs
[09:41:53] [PASSED] 35 VFs
[09:41:53] [PASSED] 36 VFs
[09:41:53] [PASSED] 37 VFs
[09:41:53] [PASSED] 38 VFs
[09:41:53] [PASSED] 39 VFs
[09:41:53] [PASSED] 40 VFs
[09:41:53] [PASSED] 41 VFs
[09:41:53] [PASSED] 42 VFs
[09:41:53] [PASSED] 43 VFs
[09:41:53] [PASSED] 44 VFs
[09:41:53] [PASSED] 45 VFs
[09:41:53] [PASSED] 46 VFs
[09:41:53] [PASSED] 47 VFs
[09:41:53] [PASSED] 48 VFs
[09:41:53] [PASSED] 49 VFs
[09:41:53] [PASSED] 50 VFs
[09:41:53] [PASSED] 51 VFs
[09:41:53] [PASSED] 52 VFs
[09:41:53] [PASSED] 53 VFs
[09:41:53] [PASSED] 54 VFs
[09:41:53] [PASSED] 55 VFs
[09:41:53] [PASSED] 56 VFs
[09:41:53] [PASSED] 57 VFs
[09:41:53] [PASSED] 58 VFs
[09:41:53] [PASSED] 59 VFs
[09:41:53] [PASSED] 60 VFs
[09:41:53] [PASSED] 61 VFs
[09:41:53] [PASSED] 62 VFs
[09:41:53] [PASSED] 63 VFs
[09:41:53] ================= [PASSED] fair_doorbells ==================
[09:41:53] ======================== fair_ggtt ========================
[09:41:53] [PASSED] 1 VF
[09:41:53] [PASSED] 2 VFs
[09:41:53] [PASSED] 3 VFs
[09:41:53] [PASSED] 4 VFs
[09:41:53] [PASSED] 5 VFs
[09:41:53] [PASSED] 6 VFs
[09:41:53] [PASSED] 7 VFs
[09:41:53] [PASSED] 8 VFs
[09:41:53] [PASSED] 9 VFs
[09:41:53] [PASSED] 10 VFs
[09:41:53] [PASSED] 11 VFs
[09:41:53] [PASSED] 12 VFs
[09:41:53] [PASSED] 13 VFs
[09:41:53] [PASSED] 14 VFs
[09:41:53] [PASSED] 15 VFs
[09:41:53] [PASSED] 16 VFs
[09:41:53] [PASSED] 17 VFs
[09:41:53] [PASSED] 18 VFs
[09:41:53] [PASSED] 19 VFs
[09:41:53] [PASSED] 20 VFs
[09:41:53] [PASSED] 21 VFs
[09:41:53] [PASSED] 22 VFs
[09:41:53] [PASSED] 23 VFs
[09:41:53] [PASSED] 24 VFs
[09:41:53] [PASSED] 25 VFs
[09:41:53] [PASSED] 26 VFs
[09:41:53] [PASSED] 27 VFs
[09:41:53] [PASSED] 28 VFs
[09:41:53] [PASSED] 29 VFs
[09:41:53] [PASSED] 30 VFs
[09:41:53] [PASSED] 31 VFs
[09:41:53] [PASSED] 32 VFs
[09:41:53] [PASSED] 33 VFs
[09:41:53] [PASSED] 34 VFs
[09:41:53] [PASSED] 35 VFs
[09:41:53] [PASSED] 36 VFs
[09:41:53] [PASSED] 37 VFs
[09:41:53] [PASSED] 38 VFs
[09:41:53] [PASSED] 39 VFs
[09:41:53] [PASSED] 40 VFs
[09:41:53] [PASSED] 41 VFs
[09:41:53] [PASSED] 42 VFs
[09:41:53] [PASSED] 43 VFs
[09:41:53] [PASSED] 44 VFs
[09:41:53] [PASSED] 45 VFs
[09:41:53] [PASSED] 46 VFs
[09:41:53] [PASSED] 47 VFs
[09:41:53] [PASSED] 48 VFs
[09:41:53] [PASSED] 49 VFs
[09:41:53] [PASSED] 50 VFs
[09:41:53] [PASSED] 51 VFs
[09:41:53] [PASSED] 52 VFs
[09:41:53] [PASSED] 53 VFs
[09:41:53] [PASSED] 54 VFs
[09:41:53] [PASSED] 55 VFs
[09:41:53] [PASSED] 56 VFs
[09:41:53] [PASSED] 57 VFs
[09:41:53] [PASSED] 58 VFs
[09:41:53] [PASSED] 59 VFs
[09:41:53] [PASSED] 60 VFs
[09:41:53] [PASSED] 61 VFs
[09:41:53] [PASSED] 62 VFs
[09:41:53] [PASSED] 63 VFs
[09:41:53] ==================== [PASSED] fair_ggtt ====================
[09:41:53] ======================== fair_vram ========================
[09:41:53] [PASSED] 1 VF
[09:41:53] [PASSED] 2 VFs
[09:41:53] [PASSED] 3 VFs
[09:41:53] [PASSED] 4 VFs
[09:41:53] [PASSED] 5 VFs
[09:41:53] [PASSED] 6 VFs
[09:41:53] [PASSED] 7 VFs
[09:41:53] [PASSED] 8 VFs
[09:41:53] [PASSED] 9 VFs
[09:41:53] [PASSED] 10 VFs
[09:41:53] [PASSED] 11 VFs
[09:41:53] [PASSED] 12 VFs
[09:41:53] [PASSED] 13 VFs
[09:41:53] [PASSED] 14 VFs
[09:41:53] [PASSED] 15 VFs
[09:41:53] [PASSED] 16 VFs
[09:41:53] [PASSED] 17 VFs
[09:41:53] [PASSED] 18 VFs
[09:41:53] [PASSED] 19 VFs
[09:41:53] [PASSED] 20 VFs
[09:41:53] [PASSED] 21 VFs
[09:41:53] [PASSED] 22 VFs
[09:41:53] [PASSED] 23 VFs
[09:41:53] [PASSED] 24 VFs
[09:41:53] [PASSED] 25 VFs
[09:41:53] [PASSED] 26 VFs
[09:41:53] [PASSED] 27 VFs
[09:41:53] [PASSED] 28 VFs
[09:41:53] [PASSED] 29 VFs
[09:41:53] [PASSED] 30 VFs
[09:41:53] [PASSED] 31 VFs
[09:41:53] [PASSED] 32 VFs
[09:41:53] [PASSED] 33 VFs
[09:41:53] [PASSED] 34 VFs
[09:41:53] [PASSED] 35 VFs
[09:41:53] [PASSED] 36 VFs
[09:41:53] [PASSED] 37 VFs
[09:41:53] [PASSED] 38 VFs
[09:41:53] [PASSED] 39 VFs
[09:41:53] [PASSED] 40 VFs
[09:41:53] [PASSED] 41 VFs
[09:41:53] [PASSED] 42 VFs
[09:41:53] [PASSED] 43 VFs
[09:41:53] [PASSED] 44 VFs
[09:41:53] [PASSED] 45 VFs
[09:41:53] [PASSED] 46 VFs
[09:41:53] [PASSED] 47 VFs
[09:41:53] [PASSED] 48 VFs
[09:41:53] [PASSED] 49 VFs
[09:41:53] [PASSED] 50 VFs
[09:41:53] [PASSED] 51 VFs
[09:41:53] [PASSED] 52 VFs
[09:41:53] [PASSED] 53 VFs
[09:41:53] [PASSED] 54 VFs
[09:41:53] [PASSED] 55 VFs
[09:41:53] [PASSED] 56 VFs
[09:41:53] [PASSED] 57 VFs
[09:41:53] [PASSED] 58 VFs
[09:41:53] [PASSED] 59 VFs
[09:41:53] [PASSED] 60 VFs
[09:41:53] [PASSED] 61 VFs
[09:41:53] [PASSED] 62 VFs
[09:41:53] [PASSED] 63 VFs
[09:41:53] ==================== [PASSED] fair_vram ====================
[09:41:53] ================== [PASSED] pf_gt_config ===================
[09:41:53] ===================== lmtt (1 subtest) =====================
[09:41:53] ======================== test_ops =========================
[09:41:53] [PASSED] 2-level
[09:41:53] [PASSED] multi-level
[09:41:53] ==================== [PASSED] test_ops =====================
[09:41:53] ====================== [PASSED] lmtt =======================
[09:41:53] ================= pf_service (11 subtests) =================
[09:41:53] [PASSED] pf_negotiate_any
[09:41:53] [PASSED] pf_negotiate_base_match
[09:41:53] [PASSED] pf_negotiate_base_newer
[09:41:53] [PASSED] pf_negotiate_base_next
[09:41:53] [SKIPPED] pf_negotiate_base_older
[09:41:53] [PASSED] pf_negotiate_base_prev
[09:41:53] [PASSED] pf_negotiate_latest_match
[09:41:53] [PASSED] pf_negotiate_latest_newer
[09:41:53] [PASSED] pf_negotiate_latest_next
[09:41:53] [SKIPPED] pf_negotiate_latest_older
[09:41:53] [SKIPPED] pf_negotiate_latest_prev
[09:41:53] =================== [PASSED] pf_service ====================
[09:41:53] ================= xe_guc_g2g (2 subtests) ==================
[09:41:53] ============== xe_live_guc_g2g_kunit_default ==============
[09:41:53] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[09:41:53] ============== xe_live_guc_g2g_kunit_allmem ===============
[09:41:53] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[09:41:53] =================== [SKIPPED] xe_guc_g2g ===================
[09:41:53] =================== xe_mocs (2 subtests) ===================
[09:41:53] ================ xe_live_mocs_kernel_kunit ================
[09:41:53] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[09:41:53] ================ xe_live_mocs_reset_kunit =================
[09:41:53] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[09:41:53] ==================== [SKIPPED] xe_mocs =====================
[09:41:53] ================= xe_migrate (2 subtests) ==================
[09:41:53] ================= xe_migrate_sanity_kunit =================
[09:41:53] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[09:41:53] ================== xe_validate_ccs_kunit ==================
[09:41:53] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[09:41:53] =================== [SKIPPED] xe_migrate ===================
[09:41:53] ================== xe_dma_buf (1 subtest) ==================
[09:41:53] ==================== xe_dma_buf_kunit =====================
[09:41:53] ================ [SKIPPED] xe_dma_buf_kunit ================
[09:41:53] =================== [SKIPPED] xe_dma_buf ===================
[09:41:53] ================= xe_bo_shrink (1 subtest) =================
[09:41:53] =================== xe_bo_shrink_kunit ====================
[09:41:53] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[09:41:53] ================== [SKIPPED] xe_bo_shrink ==================
[09:41:53] ==================== xe_bo (2 subtests) ====================
[09:41:53] ================== xe_ccs_migrate_kunit ===================
[09:41:53] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[09:41:53] ==================== xe_bo_evict_kunit ====================
[09:41:53] =============== [SKIPPED] xe_bo_evict_kunit ================
[09:41:53] ===================== [SKIPPED] xe_bo ======================
[09:41:53] ==================== args (13 subtests) ====================
[09:41:53] [PASSED] count_args_test
[09:41:53] [PASSED] call_args_example
[09:41:53] [PASSED] call_args_test
[09:41:53] [PASSED] drop_first_arg_example
[09:41:53] [PASSED] drop_first_arg_test
[09:41:53] [PASSED] first_arg_example
[09:41:53] [PASSED] first_arg_test
[09:41:53] [PASSED] last_arg_example
[09:41:53] [PASSED] last_arg_test
[09:41:53] [PASSED] pick_arg_example
[09:41:53] [PASSED] if_args_example
[09:41:53] [PASSED] if_args_test
[09:41:53] [PASSED] sep_comma_example
[09:41:53] ====================== [PASSED] args =======================
[09:41:53] =================== xe_pci (3 subtests) ====================
[09:41:53] ==================== check_graphics_ip ====================
[09:41:53] [PASSED] 12.00 Xe_LP
[09:41:53] [PASSED] 12.10 Xe_LP+
[09:41:53] [PASSED] 12.55 Xe_HPG
[09:41:53] [PASSED] 12.60 Xe_HPC
[09:41:53] [PASSED] 12.70 Xe_LPG
[09:41:53] [PASSED] 12.71 Xe_LPG
[09:41:53] [PASSED] 12.74 Xe_LPG+
[09:41:53] [PASSED] 20.01 Xe2_HPG
[09:41:53] [PASSED] 20.02 Xe2_HPG
[09:41:53] [PASSED] 20.04 Xe2_LPG
[09:41:53] [PASSED] 30.00 Xe3_LPG
[09:41:53] [PASSED] 30.01 Xe3_LPG
[09:41:53] [PASSED] 30.03 Xe3_LPG
[09:41:53] [PASSED] 30.04 Xe3_LPG
[09:41:53] [PASSED] 30.05 Xe3_LPG
[09:41:53] [PASSED] 35.10 Xe3p_LPG
[09:41:53] [PASSED] 35.11 Xe3p_XPC
[09:41:53] ================ [PASSED] check_graphics_ip ================
[09:41:53] ===================== check_media_ip ======================
[09:41:53] [PASSED] 12.00 Xe_M
[09:41:53] [PASSED] 12.55 Xe_HPM
[09:41:53] [PASSED] 13.00 Xe_LPM+
[09:41:53] [PASSED] 13.01 Xe2_HPM
[09:41:53] [PASSED] 20.00 Xe2_LPM
[09:41:53] [PASSED] 30.00 Xe3_LPM
[09:41:53] [PASSED] 30.02 Xe3_LPM
[09:41:53] [PASSED] 35.00 Xe3p_LPM
[09:41:53] [PASSED] 35.03 Xe3p_HPM
[09:41:53] ================= [PASSED] check_media_ip ==================
[09:41:53] =================== check_platform_desc ===================
[09:41:53] [PASSED] 0x9A60 (TIGERLAKE)
[09:41:53] [PASSED] 0x9A68 (TIGERLAKE)
[09:41:53] [PASSED] 0x9A70 (TIGERLAKE)
[09:41:53] [PASSED] 0x9A40 (TIGERLAKE)
[09:41:53] [PASSED] 0x9A49 (TIGERLAKE)
[09:41:53] [PASSED] 0x9A59 (TIGERLAKE)
[09:41:53] [PASSED] 0x9A78 (TIGERLAKE)
[09:41:53] [PASSED] 0x9AC0 (TIGERLAKE)
[09:41:53] [PASSED] 0x9AC9 (TIGERLAKE)
[09:41:53] [PASSED] 0x9AD9 (TIGERLAKE)
[09:41:53] [PASSED] 0x9AF8 (TIGERLAKE)
[09:41:53] [PASSED] 0x4C80 (ROCKETLAKE)
[09:41:53] [PASSED] 0x4C8A (ROCKETLAKE)
[09:41:53] [PASSED] 0x4C8B (ROCKETLAKE)
[09:41:53] [PASSED] 0x4C8C (ROCKETLAKE)
[09:41:53] [PASSED] 0x4C90 (ROCKETLAKE)
[09:41:53] [PASSED] 0x4C9A (ROCKETLAKE)
[09:41:53] [PASSED] 0x4680 (ALDERLAKE_S)
[09:41:53] [PASSED] 0x4682 (ALDERLAKE_S)
[09:41:53] [PASSED] 0x4688 (ALDERLAKE_S)
[09:41:53] [PASSED] 0x468A (ALDERLAKE_S)
[09:41:53] [PASSED] 0x468B (ALDERLAKE_S)
[09:41:53] [PASSED] 0x4690 (ALDERLAKE_S)
[09:41:53] [PASSED] 0x4692 (ALDERLAKE_S)
[09:41:53] [PASSED] 0x4693 (ALDERLAKE_S)
[09:41:53] [PASSED] 0x46A0 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46A1 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46A2 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46A3 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46A6 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46A8 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46AA (ALDERLAKE_P)
[09:41:53] [PASSED] 0x462A (ALDERLAKE_P)
[09:41:53] [PASSED] 0x4626 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x4628 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46B0 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46B1 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46B2 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46B3 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46C0 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46C1 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46C2 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46C3 (ALDERLAKE_P)
[09:41:53] [PASSED] 0x46D0 (ALDERLAKE_N)
[09:41:53] [PASSED] 0x46D1 (ALDERLAKE_N)
[09:41:53] [PASSED] 0x46D2 (ALDERLAKE_N)
[09:41:53] [PASSED] 0x46D3 (ALDERLAKE_N)
[09:41:53] [PASSED] 0x46D4 (ALDERLAKE_N)
[09:41:53] [PASSED] 0xA721 (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA7A1 (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA7A9 (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA7AC (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA7AD (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA720 (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA7A0 (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA7A8 (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA7AA (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA7AB (ALDERLAKE_P)
[09:41:53] [PASSED] 0xA780 (ALDERLAKE_S)
[09:41:53] [PASSED] 0xA781 (ALDERLAKE_S)
[09:41:53] [PASSED] 0xA782 (ALDERLAKE_S)
[09:41:53] [PASSED] 0xA783 (ALDERLAKE_S)
[09:41:53] [PASSED] 0xA788 (ALDERLAKE_S)
[09:41:53] [PASSED] 0xA789 (ALDERLAKE_S)
[09:41:53] [PASSED] 0xA78A (ALDERLAKE_S)
[09:41:53] [PASSED] 0xA78B (ALDERLAKE_S)
[09:41:53] [PASSED] 0x4905 (DG1)
[09:41:53] [PASSED] 0x4906 (DG1)
[09:41:53] [PASSED] 0x4907 (DG1)
[09:41:53] [PASSED] 0x4908 (DG1)
[09:41:53] [PASSED] 0x4909 (DG1)
[09:41:53] [PASSED] 0x56C0 (DG2)
[09:41:53] [PASSED] 0x56C2 (DG2)
[09:41:53] [PASSED] 0x56C1 (DG2)
[09:41:53] [PASSED] 0x7D51 (METEORLAKE)
[09:41:53] [PASSED] 0x7DD1 (METEORLAKE)
[09:41:53] [PASSED] 0x7D41 (METEORLAKE)
[09:41:53] [PASSED] 0x7D67 (METEORLAKE)
[09:41:53] [PASSED] 0xB640 (METEORLAKE)
[09:41:53] [PASSED] 0x56A0 (DG2)
[09:41:53] [PASSED] 0x56A1 (DG2)
[09:41:53] [PASSED] 0x56A2 (DG2)
[09:41:53] [PASSED] 0x56BE (DG2)
[09:41:53] [PASSED] 0x56BF (DG2)
[09:41:53] [PASSED] 0x5690 (DG2)
[09:41:53] [PASSED] 0x5691 (DG2)
[09:41:53] [PASSED] 0x5692 (DG2)
[09:41:53] [PASSED] 0x56A5 (DG2)
[09:41:53] [PASSED] 0x56A6 (DG2)
[09:41:53] [PASSED] 0x56B0 (DG2)
[09:41:53] [PASSED] 0x56B1 (DG2)
[09:41:53] [PASSED] 0x56BA (DG2)
[09:41:53] [PASSED] 0x56BB (DG2)
[09:41:53] [PASSED] 0x56BC (DG2)
[09:41:53] [PASSED] 0x56BD (DG2)
[09:41:53] [PASSED] 0x5693 (DG2)
[09:41:53] [PASSED] 0x5694 (DG2)
[09:41:53] [PASSED] 0x5695 (DG2)
[09:41:53] [PASSED] 0x56A3 (DG2)
[09:41:53] [PASSED] 0x56A4 (DG2)
[09:41:53] [PASSED] 0x56B2 (DG2)
[09:41:53] [PASSED] 0x56B3 (DG2)
[09:41:53] [PASSED] 0x5696 (DG2)
[09:41:53] [PASSED] 0x5697 (DG2)
[09:41:53] [PASSED] 0xB69 (PVC)
[09:41:53] [PASSED] 0xB6E (PVC)
[09:41:53] [PASSED] 0xBD4 (PVC)
[09:41:53] [PASSED] 0xBD5 (PVC)
[09:41:53] [PASSED] 0xBD6 (PVC)
[09:41:53] [PASSED] 0xBD7 (PVC)
[09:41:53] [PASSED] 0xBD8 (PVC)
[09:41:53] [PASSED] 0xBD9 (PVC)
[09:41:53] [PASSED] 0xBDA (PVC)
[09:41:53] [PASSED] 0xBDB (PVC)
[09:41:53] [PASSED] 0xBE0 (PVC)
[09:41:53] [PASSED] 0xBE1 (PVC)
[09:41:53] [PASSED] 0xBE5 (PVC)
[09:41:53] [PASSED] 0x7D40 (METEORLAKE)
[09:41:53] [PASSED] 0x7D45 (METEORLAKE)
[09:41:53] [PASSED] 0x7D55 (METEORLAKE)
[09:41:53] [PASSED] 0x7D60 (METEORLAKE)
[09:41:53] [PASSED] 0x7DD5 (METEORLAKE)
[09:41:53] [PASSED] 0x6420 (LUNARLAKE)
[09:41:53] [PASSED] 0x64A0 (LUNARLAKE)
[09:41:53] [PASSED] 0x64B0 (LUNARLAKE)
[09:41:53] [PASSED] 0xE202 (BATTLEMAGE)
[09:41:53] [PASSED] 0xE209 (BATTLEMAGE)
[09:41:53] [PASSED] 0xE20B (BATTLEMAGE)
[09:41:53] [PASSED] 0xE20C (BATTLEMAGE)
[09:41:53] [PASSED] 0xE20D (BATTLEMAGE)
[09:41:53] [PASSED] 0xE210 (BATTLEMAGE)
[09:41:53] [PASSED] 0xE211 (BATTLEMAGE)
[09:41:53] [PASSED] 0xE212 (BATTLEMAGE)
[09:41:53] [PASSED] 0xE216 (BATTLEMAGE)
[09:41:53] [PASSED] 0xE220 (BATTLEMAGE)
[09:41:53] [PASSED] 0xE221 (BATTLEMAGE)
[09:41:53] [PASSED] 0xE222 (BATTLEMAGE)
[09:41:53] [PASSED] 0xE223 (BATTLEMAGE)
[09:41:53] [PASSED] 0xB080 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB081 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB082 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB083 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB084 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB085 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB086 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB087 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB08F (PANTHERLAKE)
[09:41:53] [PASSED] 0xB090 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB0A0 (PANTHERLAKE)
[09:41:53] [PASSED] 0xB0B0 (PANTHERLAKE)
[09:41:53] [PASSED] 0xFD80 (PANTHERLAKE)
[09:41:53] [PASSED] 0xFD81 (PANTHERLAKE)
[09:41:53] [PASSED] 0xD740 (NOVALAKE_S)
[09:41:53] [PASSED] 0xD741 (NOVALAKE_S)
[09:41:53] [PASSED] 0xD742 (NOVALAKE_S)
[09:41:53] [PASSED] 0xD743 (NOVALAKE_S)
[09:41:53] [PASSED] 0xD744 (NOVALAKE_S)
[09:41:53] [PASSED] 0xD745 (NOVALAKE_S)
[09:41:53] [PASSED] 0x674C (CRESCENTISLAND)
[09:41:53] [PASSED] 0xD750 (NOVALAKE_P)
[09:41:53] [PASSED] 0xD751 (NOVALAKE_P)
[09:41:53] [PASSED] 0xD752 (NOVALAKE_P)
[09:41:53] [PASSED] 0xD753 (NOVALAKE_P)
[09:41:53] [PASSED] 0xD754 (NOVALAKE_P)
[09:41:53] [PASSED] 0xD755 (NOVALAKE_P)
[09:41:53] [PASSED] 0xD756 (NOVALAKE_P)
[09:41:53] [PASSED] 0xD757 (NOVALAKE_P)
[09:41:53] [PASSED] 0xD75F (NOVALAKE_P)
[09:41:53] =============== [PASSED] check_platform_desc ===============
[09:41:53] ===================== [PASSED] xe_pci ======================
[09:41:53] =================== xe_rtp (2 subtests) ====================
[09:41:53] =============== xe_rtp_process_to_sr_tests ================
[09:41:53] [PASSED] coalesce-same-reg
[09:41:53] [PASSED] no-match-no-add
[09:41:53] [PASSED] match-or
[09:41:53] [PASSED] match-or-xfail
[09:41:53] [PASSED] no-match-no-add-multiple-rules
[09:41:53] [PASSED] two-regs-two-entries
[09:41:53] [PASSED] clr-one-set-other
[09:41:53] [PASSED] set-field
[09:41:53] [PASSED] conflict-duplicate
[09:41:53] [PASSED] conflict-not-disjoint
[09:41:53] [PASSED] conflict-reg-type
[09:41:53] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[09:41:53] ================== xe_rtp_process_tests ===================
[09:41:53] [PASSED] active1
[09:41:53] [PASSED] active2
[09:41:53] [PASSED] active-inactive
[09:41:53] [PASSED] inactive-active
[09:41:53] [PASSED] inactive-1st_or_active-inactive
[09:41:53] [PASSED] inactive-2nd_or_active-inactive
[09:41:53] [PASSED] inactive-last_or_active-inactive
[09:41:53] [PASSED] inactive-no_or_active-inactive
[09:41:53] ============== [PASSED] xe_rtp_process_tests ===============
[09:41:53] ===================== [PASSED] xe_rtp ======================
[09:41:53] ==================== xe_wa (1 subtest) =====================
[09:41:53] ======================== xe_wa_gt =========================
[09:41:53] [PASSED] TIGERLAKE B0
[09:41:53] [PASSED] DG1 A0
[09:41:53] [PASSED] DG1 B0
[09:41:53] [PASSED] ALDERLAKE_S A0
[09:41:53] [PASSED] ALDERLAKE_S B0
[09:41:53] [PASSED] ALDERLAKE_S C0
[09:41:53] [PASSED] ALDERLAKE_S D0
[09:41:53] [PASSED] ALDERLAKE_P A0
[09:41:53] [PASSED] ALDERLAKE_P B0
[09:41:53] [PASSED] ALDERLAKE_P C0
[09:41:53] [PASSED] ALDERLAKE_S RPLS D0
[09:41:53] [PASSED] ALDERLAKE_P RPLU E0
[09:41:53] [PASSED] DG2 G10 C0
[09:41:53] [PASSED] DG2 G11 B1
[09:41:53] [PASSED] DG2 G12 A1
[09:41:53] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[09:41:53] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[09:41:53] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[09:41:53] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[09:41:53] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[09:41:53] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[09:41:53] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[09:41:53] ==================== [PASSED] xe_wa_gt =====================
[09:41:53] ====================== [PASSED] xe_wa ======================
[09:41:53] ============================================================
[09:41:53] Testing complete. Ran 597 tests: passed: 579, skipped: 18
[09:41:53] Elapsed time: 67.374s total, 5.234s configuring, 61.066s building, 1.028s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[09:41:53] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[09:41:56] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[09:42:23] Starting KUnit Kernel (1/1)...
[09:42:23] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[09:42:23] ============ drm_test_pick_cmdline (2 subtests) ============
[09:42:23] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[09:42:23] =============== drm_test_pick_cmdline_named ===============
[09:42:23] [PASSED] NTSC
[09:42:23] [PASSED] NTSC-J
[09:42:23] [PASSED] PAL
[09:42:23] [PASSED] PAL-M
[09:42:23] =========== [PASSED] drm_test_pick_cmdline_named ===========
[09:42:23] ============== [PASSED] drm_test_pick_cmdline ==============
[09:42:23] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[09:42:23] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[09:42:23] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[09:42:23] =========== drm_validate_clone_mode (2 subtests) ===========
[09:42:23] ============== drm_test_check_in_clone_mode ===============
[09:42:23] [PASSED] in_clone_mode
[09:42:23] [PASSED] not_in_clone_mode
[09:42:23] ========== [PASSED] drm_test_check_in_clone_mode ===========
[09:42:23] =============== drm_test_check_valid_clones ===============
[09:42:23] [PASSED] not_in_clone_mode
[09:42:23] [PASSED] valid_clone
[09:42:23] [PASSED] invalid_clone
[09:42:23] =========== [PASSED] drm_test_check_valid_clones ===========
[09:42:23] ============= [PASSED] drm_validate_clone_mode =============
[09:42:23] ============= drm_validate_modeset (1 subtest) =============
[09:42:23] [PASSED] drm_test_check_connector_changed_modeset
[09:42:23] ============== [PASSED] drm_validate_modeset ===============
[09:42:23] ====== drm_test_bridge_get_current_state (2 subtests) ======
[09:42:23] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[09:42:23] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[09:42:23] ======== [PASSED] drm_test_bridge_get_current_state ========
[09:42:23] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[09:42:23] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[09:42:23] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[09:42:23] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[09:42:23] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[09:42:23] ============== drm_bridge_alloc (2 subtests) ===============
[09:42:23] [PASSED] drm_test_drm_bridge_alloc_basic
[09:42:23] [PASSED] drm_test_drm_bridge_alloc_get_put
[09:42:23] ================ [PASSED] drm_bridge_alloc =================
[09:42:23] ============= drm_cmdline_parser (40 subtests) =============
[09:42:23] [PASSED] drm_test_cmdline_force_d_only
[09:42:23] [PASSED] drm_test_cmdline_force_D_only_dvi
[09:42:23] [PASSED] drm_test_cmdline_force_D_only_hdmi
[09:42:23] [PASSED] drm_test_cmdline_force_D_only_not_digital
[09:42:23] [PASSED] drm_test_cmdline_force_e_only
[09:42:23] [PASSED] drm_test_cmdline_res
[09:42:23] [PASSED] drm_test_cmdline_res_vesa
[09:42:23] [PASSED] drm_test_cmdline_res_vesa_rblank
[09:42:23] [PASSED] drm_test_cmdline_res_rblank
[09:42:23] [PASSED] drm_test_cmdline_res_bpp
[09:42:23] [PASSED] drm_test_cmdline_res_refresh
[09:42:23] [PASSED] drm_test_cmdline_res_bpp_refresh
[09:42:23] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[09:42:23] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[09:42:23] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[09:42:23] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[09:42:23] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[09:42:23] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[09:42:23] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[09:42:23] [PASSED] drm_test_cmdline_res_margins_force_on
[09:42:23] [PASSED] drm_test_cmdline_res_vesa_margins
[09:42:23] [PASSED] drm_test_cmdline_name
[09:42:23] [PASSED] drm_test_cmdline_name_bpp
[09:42:23] [PASSED] drm_test_cmdline_name_option
[09:42:23] [PASSED] drm_test_cmdline_name_bpp_option
[09:42:23] [PASSED] drm_test_cmdline_rotate_0
[09:42:23] [PASSED] drm_test_cmdline_rotate_90
[09:42:23] [PASSED] drm_test_cmdline_rotate_180
[09:42:23] [PASSED] drm_test_cmdline_rotate_270
[09:42:23] [PASSED] drm_test_cmdline_hmirror
[09:42:23] [PASSED] drm_test_cmdline_vmirror
[09:42:23] [PASSED] drm_test_cmdline_margin_options
[09:42:23] [PASSED] drm_test_cmdline_multiple_options
[09:42:23] [PASSED] drm_test_cmdline_bpp_extra_and_option
[09:42:23] [PASSED] drm_test_cmdline_extra_and_option
[09:42:23] [PASSED] drm_test_cmdline_freestanding_options
[09:42:23] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[09:42:23] [PASSED] drm_test_cmdline_panel_orientation
[09:42:23] ================ drm_test_cmdline_invalid =================
[09:42:23] [PASSED] margin_only
[09:42:23] [PASSED] interlace_only
[09:42:23] [PASSED] res_missing_x
[09:42:23] [PASSED] res_missing_y
[09:42:23] [PASSED] res_bad_y
[09:42:23] [PASSED] res_missing_y_bpp
[09:42:23] [PASSED] res_bad_bpp
[09:42:23] [PASSED] res_bad_refresh
[09:42:23] [PASSED] res_bpp_refresh_force_on_off
[09:42:23] [PASSED] res_invalid_mode
[09:42:23] [PASSED] res_bpp_wrong_place_mode
[09:42:23] [PASSED] name_bpp_refresh
[09:42:23] [PASSED] name_refresh
[09:42:23] [PASSED] name_refresh_wrong_mode
[09:42:23] [PASSED] name_refresh_invalid_mode
[09:42:23] [PASSED] rotate_multiple
[09:42:23] [PASSED] rotate_invalid_val
[09:42:23] [PASSED] rotate_truncated
[09:42:23] [PASSED] invalid_option
[09:42:23] [PASSED] invalid_tv_option
[09:42:23] [PASSED] truncated_tv_option
[09:42:23] ============ [PASSED] drm_test_cmdline_invalid =============
[09:42:23] =============== drm_test_cmdline_tv_options ===============
[09:42:23] [PASSED] NTSC
[09:42:23] [PASSED] NTSC_443
[09:42:23] [PASSED] NTSC_J
[09:42:23] [PASSED] PAL
[09:42:23] [PASSED] PAL_M
[09:42:23] [PASSED] PAL_N
[09:42:23] [PASSED] SECAM
[09:42:23] [PASSED] MONO_525
[09:42:23] [PASSED] MONO_625
[09:42:23] =========== [PASSED] drm_test_cmdline_tv_options ===========
[09:42:23] =============== [PASSED] drm_cmdline_parser ================
[09:42:23] ========== drmm_connector_hdmi_init (20 subtests) ==========
[09:42:23] [PASSED] drm_test_connector_hdmi_init_valid
[09:42:23] [PASSED] drm_test_connector_hdmi_init_bpc_8
[09:42:23] [PASSED] drm_test_connector_hdmi_init_bpc_10
[09:42:23] [PASSED] drm_test_connector_hdmi_init_bpc_12
[09:42:23] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[09:42:23] [PASSED] drm_test_connector_hdmi_init_bpc_null
[09:42:23] [PASSED] drm_test_connector_hdmi_init_formats_empty
[09:42:23] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[09:42:23] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[09:42:23] [PASSED] supported_formats=0x9 yuv420_allowed=1
[09:42:23] [PASSED] supported_formats=0x9 yuv420_allowed=0
[09:42:23] [PASSED] supported_formats=0x3 yuv420_allowed=1
[09:42:23] [PASSED] supported_formats=0x3 yuv420_allowed=0
[09:42:23] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[09:42:23] [PASSED] drm_test_connector_hdmi_init_null_ddc
[09:42:23] [PASSED] drm_test_connector_hdmi_init_null_product
[09:42:23] [PASSED] drm_test_connector_hdmi_init_null_vendor
[09:42:23] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[09:42:23] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[09:42:23] [PASSED] drm_test_connector_hdmi_init_product_valid
[09:42:23] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[09:42:23] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[09:42:23] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[09:42:23] ========= drm_test_connector_hdmi_init_type_valid =========
[09:42:23] [PASSED] HDMI-A
[09:42:23] [PASSED] HDMI-B
[09:42:23] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[09:42:23] ======== drm_test_connector_hdmi_init_type_invalid ========
[09:42:23] [PASSED] Unknown
[09:42:23] [PASSED] VGA
[09:42:23] [PASSED] DVI-I
[09:42:23] [PASSED] DVI-D
[09:42:23] [PASSED] DVI-A
[09:42:23] [PASSED] Composite
[09:42:23] [PASSED] SVIDEO
[09:42:23] [PASSED] LVDS
[09:42:23] [PASSED] Component
[09:42:23] [PASSED] DIN
[09:42:23] [PASSED] DP
[09:42:23] [PASSED] TV
[09:42:23] [PASSED] eDP
[09:42:23] [PASSED] Virtual
[09:42:23] [PASSED] DSI
[09:42:23] [PASSED] DPI
[09:42:23] [PASSED] Writeback
[09:42:23] [PASSED] SPI
[09:42:23] [PASSED] USB
[09:42:23] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[09:42:23] ============ [PASSED] drmm_connector_hdmi_init =============
[09:42:23] ============= drmm_connector_init (3 subtests) =============
[09:42:23] [PASSED] drm_test_drmm_connector_init
[09:42:23] [PASSED] drm_test_drmm_connector_init_null_ddc
[09:42:23] ========= drm_test_drmm_connector_init_type_valid =========
[09:42:23] [PASSED] Unknown
[09:42:23] [PASSED] VGA
[09:42:23] [PASSED] DVI-I
[09:42:23] [PASSED] DVI-D
[09:42:23] [PASSED] DVI-A
[09:42:23] [PASSED] Composite
[09:42:23] [PASSED] SVIDEO
[09:42:23] [PASSED] LVDS
[09:42:23] [PASSED] Component
[09:42:23] [PASSED] DIN
[09:42:23] [PASSED] DP
[09:42:23] [PASSED] HDMI-A
[09:42:23] [PASSED] HDMI-B
[09:42:23] [PASSED] TV
[09:42:23] [PASSED] eDP
[09:42:23] [PASSED] Virtual
[09:42:23] [PASSED] DSI
[09:42:23] [PASSED] DPI
[09:42:23] [PASSED] Writeback
[09:42:23] [PASSED] SPI
[09:42:23] [PASSED] USB
[09:42:23] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[09:42:23] =============== [PASSED] drmm_connector_init ===============
[09:42:23] ========= drm_connector_dynamic_init (6 subtests) ==========
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_init
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_init_properties
[09:42:23] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[09:42:23] [PASSED] Unknown
[09:42:23] [PASSED] VGA
[09:42:23] [PASSED] DVI-I
[09:42:23] [PASSED] DVI-D
[09:42:23] [PASSED] DVI-A
[09:42:23] [PASSED] Composite
[09:42:23] [PASSED] SVIDEO
[09:42:23] [PASSED] LVDS
[09:42:23] [PASSED] Component
[09:42:23] [PASSED] DIN
[09:42:23] [PASSED] DP
[09:42:23] [PASSED] HDMI-A
[09:42:23] [PASSED] HDMI-B
[09:42:23] [PASSED] TV
[09:42:23] [PASSED] eDP
[09:42:23] [PASSED] Virtual
[09:42:23] [PASSED] DSI
[09:42:23] [PASSED] DPI
[09:42:23] [PASSED] Writeback
[09:42:23] [PASSED] SPI
[09:42:23] [PASSED] USB
[09:42:23] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[09:42:23] ======== drm_test_drm_connector_dynamic_init_name =========
[09:42:23] [PASSED] Unknown
[09:42:23] [PASSED] VGA
[09:42:23] [PASSED] DVI-I
[09:42:23] [PASSED] DVI-D
[09:42:23] [PASSED] DVI-A
[09:42:23] [PASSED] Composite
[09:42:23] [PASSED] SVIDEO
[09:42:23] [PASSED] LVDS
[09:42:23] [PASSED] Component
[09:42:23] [PASSED] DIN
[09:42:23] [PASSED] DP
[09:42:23] [PASSED] HDMI-A
[09:42:23] [PASSED] HDMI-B
[09:42:23] [PASSED] TV
[09:42:23] [PASSED] eDP
[09:42:23] [PASSED] Virtual
[09:42:23] [PASSED] DSI
[09:42:23] [PASSED] DPI
[09:42:23] [PASSED] Writeback
[09:42:23] [PASSED] SPI
[09:42:23] [PASSED] USB
[09:42:23] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[09:42:23] =========== [PASSED] drm_connector_dynamic_init ============
[09:42:23] ==== drm_connector_dynamic_register_early (4 subtests) =====
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[09:42:23] ====== [PASSED] drm_connector_dynamic_register_early =======
[09:42:23] ======= drm_connector_dynamic_register (7 subtests) ========
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[09:42:23] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[09:42:23] ========= [PASSED] drm_connector_dynamic_register ==========
[09:42:23] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[09:42:23] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[09:42:23] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[09:42:23] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[09:42:23] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[09:42:23] ========== drm_test_get_tv_mode_from_name_valid ===========
[09:42:23] [PASSED] NTSC
[09:42:23] [PASSED] NTSC-443
[09:42:23] [PASSED] NTSC-J
[09:42:23] [PASSED] PAL
[09:42:23] [PASSED] PAL-M
[09:42:23] [PASSED] PAL-N
[09:42:23] [PASSED] SECAM
[09:42:23] [PASSED] Mono
[09:42:23] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[09:42:23] [PASSED] drm_test_get_tv_mode_from_name_truncated
[09:42:23] ============ [PASSED] drm_get_tv_mode_from_name ============
[09:42:23] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[09:42:23] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[09:42:23] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[09:42:23] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[09:42:23] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[09:42:23] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[09:42:23] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[09:42:23] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[09:42:23] [PASSED] VIC 96
[09:42:23] [PASSED] VIC 97
[09:42:23] [PASSED] VIC 101
[09:42:23] [PASSED] VIC 102
[09:42:23] [PASSED] VIC 106
[09:42:23] [PASSED] VIC 107
[09:42:23] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[09:42:23] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[09:42:23] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[09:42:23] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[09:42:23] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[09:42:23] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[09:42:23] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[09:42:23] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[09:42:23] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[09:42:23] [PASSED] Automatic
[09:42:23] [PASSED] Full
[09:42:23] [PASSED] Limited 16:235
[09:42:23] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[09:42:23] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[09:42:23] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[09:42:23] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[09:42:23] === drm_test_drm_hdmi_connector_get_output_format_name ====
[09:42:23] [PASSED] RGB
[09:42:23] [PASSED] YUV 4:2:0
[09:42:23] [PASSED] YUV 4:2:2
[09:42:23] [PASSED] YUV 4:4:4
[09:42:23] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[09:42:23] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[09:42:23] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[09:42:23] ============= drm_damage_helper (21 subtests) ==============
[09:42:23] [PASSED] drm_test_damage_iter_no_damage
[09:42:23] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[09:42:23] [PASSED] drm_test_damage_iter_no_damage_src_moved
[09:42:23] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[09:42:23] [PASSED] drm_test_damage_iter_no_damage_not_visible
[09:42:23] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[09:42:23] [PASSED] drm_test_damage_iter_no_damage_no_fb
[09:42:23] [PASSED] drm_test_damage_iter_simple_damage
[09:42:23] [PASSED] drm_test_damage_iter_single_damage
[09:42:23] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[09:42:23] [PASSED] drm_test_damage_iter_single_damage_outside_src
[09:42:23] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[09:42:23] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[09:42:23] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[09:42:23] [PASSED] drm_test_damage_iter_single_damage_src_moved
[09:42:23] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[09:42:23] [PASSED] drm_test_damage_iter_damage
[09:42:23] [PASSED] drm_test_damage_iter_damage_one_intersect
[09:42:23] [PASSED] drm_test_damage_iter_damage_one_outside
[09:42:23] [PASSED] drm_test_damage_iter_damage_src_moved
[09:42:23] [PASSED] drm_test_damage_iter_damage_not_visible
[09:42:23] ================ [PASSED] drm_damage_helper ================
[09:42:23] ============== drm_dp_mst_helper (3 subtests) ==============
[09:42:23] ============== drm_test_dp_mst_calc_pbn_mode ==============
[09:42:23] [PASSED] Clock 154000 BPP 30 DSC disabled
[09:42:23] [PASSED] Clock 234000 BPP 30 DSC disabled
[09:42:23] [PASSED] Clock 297000 BPP 24 DSC disabled
[09:42:23] [PASSED] Clock 332880 BPP 24 DSC enabled
[09:42:23] [PASSED] Clock 324540 BPP 24 DSC enabled
[09:42:23] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[09:42:23] ============== drm_test_dp_mst_calc_pbn_div ===============
[09:42:23] [PASSED] Link rate 2000000 lane count 4
[09:42:23] [PASSED] Link rate 2000000 lane count 2
[09:42:23] [PASSED] Link rate 2000000 lane count 1
[09:42:23] [PASSED] Link rate 1350000 lane count 4
[09:42:23] [PASSED] Link rate 1350000 lane count 2
[09:42:23] [PASSED] Link rate 1350000 lane count 1
[09:42:23] [PASSED] Link rate 1000000 lane count 4
[09:42:23] [PASSED] Link rate 1000000 lane count 2
[09:42:23] [PASSED] Link rate 1000000 lane count 1
[09:42:23] [PASSED] Link rate 810000 lane count 4
[09:42:23] [PASSED] Link rate 810000 lane count 2
[09:42:23] [PASSED] Link rate 810000 lane count 1
[09:42:23] [PASSED] Link rate 540000 lane count 4
[09:42:23] [PASSED] Link rate 540000 lane count 2
[09:42:23] [PASSED] Link rate 540000 lane count 1
[09:42:23] [PASSED] Link rate 270000 lane count 4
[09:42:23] [PASSED] Link rate 270000 lane count 2
[09:42:23] [PASSED] Link rate 270000 lane count 1
[09:42:23] [PASSED] Link rate 162000 lane count 4
[09:42:23] [PASSED] Link rate 162000 lane count 2
[09:42:23] [PASSED] Link rate 162000 lane count 1
[09:42:23] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[09:42:23] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[09:42:23] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[09:42:23] [PASSED] DP_POWER_UP_PHY with port number
[09:42:23] [PASSED] DP_POWER_DOWN_PHY with port number
[09:42:23] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[09:42:23] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[09:42:23] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[09:42:23] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[09:42:23] [PASSED] DP_QUERY_PAYLOAD with port number
[09:42:23] [PASSED] DP_QUERY_PAYLOAD with VCPI
[09:42:23] [PASSED] DP_REMOTE_DPCD_READ with port number
[09:42:23] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[09:42:23] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[09:42:23] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[09:42:23] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[09:42:23] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[09:42:23] [PASSED] DP_REMOTE_I2C_READ with port number
[09:42:23] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[09:42:23] [PASSED] DP_REMOTE_I2C_READ with transactions array
[09:42:23] [PASSED] DP_REMOTE_I2C_WRITE with port number
[09:42:23] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[09:42:23] [PASSED] DP_REMOTE_I2C_WRITE with data array
[09:42:23] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[09:42:23] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[09:42:23] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[09:42:23] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[09:42:23] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[09:42:23] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[09:42:23] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[09:42:23] ================ [PASSED] drm_dp_mst_helper ================
[09:42:23] ================== drm_exec (7 subtests) ===================
[09:42:23] [PASSED] sanitycheck
[09:42:23] [PASSED] test_lock
[09:42:23] [PASSED] test_lock_unlock
[09:42:23] [PASSED] test_duplicates
[09:42:23] [PASSED] test_prepare
[09:42:23] [PASSED] test_prepare_array
[09:42:23] [PASSED] test_multiple_loops
[09:42:23] ==================== [PASSED] drm_exec =====================
[09:42:23] =========== drm_format_helper_test (17 subtests) ===========
[09:42:23] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[09:42:23] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[09:42:23] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[09:42:23] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[09:42:23] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[09:42:23] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[09:42:23] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[09:42:23] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[09:42:23] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[09:42:23] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[09:42:23] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[09:42:23] ============== drm_test_fb_xrgb8888_to_mono ===============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[09:42:23] ==================== drm_test_fb_swab =====================
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ================ [PASSED] drm_test_fb_swab =================
[09:42:23] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[09:42:23] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[09:42:23] [PASSED] single_pixel_source_buffer
[09:42:23] [PASSED] single_pixel_clip_rectangle
[09:42:23] [PASSED] well_known_colors
[09:42:23] [PASSED] destination_pitch
[09:42:23] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[09:42:23] ================= drm_test_fb_clip_offset =================
[09:42:23] [PASSED] pass through
[09:42:23] [PASSED] horizontal offset
[09:42:23] [PASSED] vertical offset
[09:42:23] [PASSED] horizontal and vertical offset
[09:42:23] [PASSED] horizontal offset (custom pitch)
[09:42:23] [PASSED] vertical offset (custom pitch)
[09:42:23] [PASSED] horizontal and vertical offset (custom pitch)
[09:42:23] ============= [PASSED] drm_test_fb_clip_offset =============
[09:42:23] =================== drm_test_fb_memcpy ====================
[09:42:23] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[09:42:23] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[09:42:23] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[09:42:23] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[09:42:23] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[09:42:23] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[09:42:23] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[09:42:23] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[09:42:23] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[09:42:23] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[09:42:23] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[09:42:23] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[09:42:23] =============== [PASSED] drm_test_fb_memcpy ================
[09:42:23] ============= [PASSED] drm_format_helper_test ==============
[09:42:23] ================= drm_format (18 subtests) =================
[09:42:23] [PASSED] drm_test_format_block_width_invalid
[09:42:23] [PASSED] drm_test_format_block_width_one_plane
[09:42:23] [PASSED] drm_test_format_block_width_two_plane
[09:42:23] [PASSED] drm_test_format_block_width_three_plane
[09:42:23] [PASSED] drm_test_format_block_width_tiled
[09:42:23] [PASSED] drm_test_format_block_height_invalid
[09:42:23] [PASSED] drm_test_format_block_height_one_plane
[09:42:23] [PASSED] drm_test_format_block_height_two_plane
[09:42:23] [PASSED] drm_test_format_block_height_three_plane
[09:42:23] [PASSED] drm_test_format_block_height_tiled
[09:42:23] [PASSED] drm_test_format_min_pitch_invalid
[09:42:23] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[09:42:23] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[09:42:23] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[09:42:23] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[09:42:23] [PASSED] drm_test_format_min_pitch_two_plane
[09:42:23] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[09:42:23] [PASSED] drm_test_format_min_pitch_tiled
[09:42:23] =================== [PASSED] drm_format ====================
[09:42:23] ============== drm_framebuffer (10 subtests) ===============
[09:42:23] ========== drm_test_framebuffer_check_src_coords ==========
[09:42:23] [PASSED] Success: source fits into fb
[09:42:23] [PASSED] Fail: overflowing fb with x-axis coordinate
[09:42:23] [PASSED] Fail: overflowing fb with y-axis coordinate
[09:42:23] [PASSED] Fail: overflowing fb with source width
[09:42:23] [PASSED] Fail: overflowing fb with source height
[09:42:23] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[09:42:23] [PASSED] drm_test_framebuffer_cleanup
[09:42:23] =============== drm_test_framebuffer_create ===============
[09:42:23] [PASSED] ABGR8888 normal sizes
[09:42:23] [PASSED] ABGR8888 max sizes
[09:42:23] [PASSED] ABGR8888 pitch greater than min required
[09:42:23] [PASSED] ABGR8888 pitch less than min required
[09:42:23] [PASSED] ABGR8888 Invalid width
[09:42:23] [PASSED] ABGR8888 Invalid buffer handle
[09:42:23] [PASSED] No pixel format
[09:42:23] [PASSED] ABGR8888 Width 0
[09:42:23] [PASSED] ABGR8888 Height 0
[09:42:23] [PASSED] ABGR8888 Out of bound height * pitch combination
[09:42:23] [PASSED] ABGR8888 Large buffer offset
[09:42:23] [PASSED] ABGR8888 Buffer offset for inexistent plane
[09:42:23] [PASSED] ABGR8888 Invalid flag
[09:42:23] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[09:42:23] [PASSED] ABGR8888 Valid buffer modifier
[09:42:23] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[09:42:23] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[09:42:23] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[09:42:23] [PASSED] NV12 Normal sizes
[09:42:23] [PASSED] NV12 Max sizes
[09:42:23] [PASSED] NV12 Invalid pitch
[09:42:23] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[09:42:23] [PASSED] NV12 different modifier per-plane
[09:42:23] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[09:42:23] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[09:42:23] [PASSED] NV12 Modifier for inexistent plane
[09:42:23] [PASSED] NV12 Handle for inexistent plane
[09:42:23] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[09:42:23] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[09:42:23] [PASSED] YVU420 Normal sizes
[09:42:23] [PASSED] YVU420 Max sizes
[09:42:23] [PASSED] YVU420 Invalid pitch
[09:42:23] [PASSED] YVU420 Different pitches
[09:42:23] [PASSED] YVU420 Different buffer offsets/pitches
[09:42:23] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[09:42:23] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[09:42:23] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[09:42:23] [PASSED] YVU420 Valid modifier
[09:42:23] [PASSED] YVU420 Different modifiers per plane
[09:42:23] [PASSED] YVU420 Modifier for inexistent plane
[09:42:23] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[09:42:23] [PASSED] X0L2 Normal sizes
[09:42:23] [PASSED] X0L2 Max sizes
[09:42:23] [PASSED] X0L2 Invalid pitch
[09:42:23] [PASSED] X0L2 Pitch greater than minimum required
[09:42:23] [PASSED] X0L2 Handle for inexistent plane
[09:42:23] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[09:42:23] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[09:42:23] [PASSED] X0L2 Valid modifier
[09:42:23] [PASSED] X0L2 Modifier for inexistent plane
[09:42:23] =========== [PASSED] drm_test_framebuffer_create ===========
[09:42:23] [PASSED] drm_test_framebuffer_free
[09:42:23] [PASSED] drm_test_framebuffer_init
[09:42:23] [PASSED] drm_test_framebuffer_init_bad_format
[09:42:23] [PASSED] drm_test_framebuffer_init_dev_mismatch
[09:42:23] [PASSED] drm_test_framebuffer_lookup
[09:42:23] [PASSED] drm_test_framebuffer_lookup_inexistent
[09:42:23] [PASSED] drm_test_framebuffer_modifiers_not_supported
[09:42:23] ================= [PASSED] drm_framebuffer =================
[09:42:23] ================ drm_gem_shmem (8 subtests) ================
[09:42:23] [PASSED] drm_gem_shmem_test_obj_create
[09:42:23] [PASSED] drm_gem_shmem_test_obj_create_private
[09:42:23] [PASSED] drm_gem_shmem_test_pin_pages
[09:42:23] [PASSED] drm_gem_shmem_test_vmap
[09:42:23] [PASSED] drm_gem_shmem_test_get_sg_table
[09:42:23] [PASSED] drm_gem_shmem_test_get_pages_sgt
[09:42:23] [PASSED] drm_gem_shmem_test_madvise
[09:42:23] [PASSED] drm_gem_shmem_test_purge
[09:42:23] ================== [PASSED] drm_gem_shmem ==================
[09:42:23] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[09:42:23] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[09:42:23] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[09:42:23] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[09:42:23] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[09:42:23] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[09:42:23] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[09:42:23] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[09:42:23] [PASSED] Automatic
[09:42:23] [PASSED] Full
[09:42:23] [PASSED] Limited 16:235
[09:42:23] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[09:42:23] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[09:42:23] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[09:42:23] [PASSED] drm_test_check_disable_connector
[09:42:23] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[09:42:23] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[09:42:23] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[09:42:23] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[09:42:23] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[09:42:23] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[09:42:23] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[09:42:23] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[09:42:23] [PASSED] drm_test_check_output_bpc_dvi
[09:42:23] [PASSED] drm_test_check_output_bpc_format_vic_1
[09:42:23] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[09:42:23] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[09:42:23] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[09:42:23] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[09:42:23] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[09:42:23] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[09:42:23] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[09:42:23] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[09:42:23] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[09:42:23] [PASSED] drm_test_check_broadcast_rgb_value
[09:42:23] [PASSED] drm_test_check_bpc_8_value
[09:42:23] [PASSED] drm_test_check_bpc_10_value
[09:42:23] [PASSED] drm_test_check_bpc_12_value
[09:42:23] [PASSED] drm_test_check_format_value
[09:42:23] [PASSED] drm_test_check_tmds_char_value
[09:42:23] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[09:42:23] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[09:42:23] [PASSED] drm_test_check_mode_valid
[09:42:23] [PASSED] drm_test_check_mode_valid_reject
[09:42:23] [PASSED] drm_test_check_mode_valid_reject_rate
[09:42:23] [PASSED] drm_test_check_mode_valid_reject_max_clock
[09:42:23] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[09:42:23] = drm_atomic_helper_connector_hdmi_infoframes (5 subtests) =
[09:42:23] [PASSED] drm_test_check_infoframes
[09:42:23] [PASSED] drm_test_check_reject_avi_infoframe
[09:42:23] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_8
[09:42:23] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_10
[09:42:23] [PASSED] drm_test_check_reject_audio_infoframe
[09:42:23] === [PASSED] drm_atomic_helper_connector_hdmi_infoframes ===
[09:42:23] ================= drm_managed (2 subtests) =================
[09:42:23] [PASSED] drm_test_managed_release_action
[09:42:23] [PASSED] drm_test_managed_run_action
[09:42:23] =================== [PASSED] drm_managed ===================
[09:42:23] =================== drm_mm (6 subtests) ====================
[09:42:23] [PASSED] drm_test_mm_init
[09:42:23] [PASSED] drm_test_mm_debug
[09:42:23] [PASSED] drm_test_mm_align32
[09:42:23] [PASSED] drm_test_mm_align64
[09:42:23] [PASSED] drm_test_mm_lowest
[09:42:23] [PASSED] drm_test_mm_highest
[09:42:23] ===================== [PASSED] drm_mm ======================
[09:42:23] ============= drm_modes_analog_tv (5 subtests) =============
[09:42:23] [PASSED] drm_test_modes_analog_tv_mono_576i
[09:42:23] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[09:42:23] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[09:42:23] [PASSED] drm_test_modes_analog_tv_pal_576i
[09:42:23] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[09:42:23] =============== [PASSED] drm_modes_analog_tv ===============
[09:42:23] ============== drm_plane_helper (2 subtests) ===============
[09:42:23] =============== drm_test_check_plane_state ================
[09:42:23] [PASSED] clipping_simple
[09:42:23] [PASSED] clipping_rotate_reflect
[09:42:23] [PASSED] positioning_simple
[09:42:23] [PASSED] upscaling
[09:42:23] [PASSED] downscaling
[09:42:23] [PASSED] rounding1
[09:42:23] [PASSED] rounding2
[09:42:23] [PASSED] rounding3
[09:42:23] [PASSED] rounding4
[09:42:23] =========== [PASSED] drm_test_check_plane_state ============
[09:42:23] =========== drm_test_check_invalid_plane_state ============
[09:42:23] [PASSED] positioning_invalid
[09:42:23] [PASSED] upscaling_invalid
[09:42:23] [PASSED] downscaling_invalid
[09:42:23] ======= [PASSED] drm_test_check_invalid_plane_state ========
[09:42:23] ================ [PASSED] drm_plane_helper =================
[09:42:23] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[09:42:23] ====== drm_test_connector_helper_tv_get_modes_check =======
[09:42:23] [PASSED] None
[09:42:23] [PASSED] PAL
[09:42:23] [PASSED] NTSC
[09:42:23] [PASSED] Both, NTSC Default
[09:42:23] [PASSED] Both, PAL Default
[09:42:23] [PASSED] Both, NTSC Default, with PAL on command-line
[09:42:23] [PASSED] Both, PAL Default, with NTSC on command-line
[09:42:23] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[09:42:23] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[09:42:23] ================== drm_rect (9 subtests) ===================
[09:42:23] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[09:42:23] [PASSED] drm_test_rect_clip_scaled_not_clipped
[09:42:23] [PASSED] drm_test_rect_clip_scaled_clipped
[09:42:23] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[09:42:23] ================= drm_test_rect_intersect =================
[09:42:23] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[09:42:23] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[09:42:23] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[09:42:23] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[09:42:23] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[09:42:23] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[09:42:23] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[09:42:23] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[09:42:23] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[09:42:23] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[09:42:23] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[09:42:23] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[09:42:23] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[09:42:23] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[09:42:23] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[09:42:23] ============= [PASSED] drm_test_rect_intersect =============
[09:42:23] ================ drm_test_rect_calc_hscale ================
[09:42:23] [PASSED] normal use
[09:42:23] [PASSED] out of max range
[09:42:23] [PASSED] out of min range
[09:42:23] [PASSED] zero dst
[09:42:23] [PASSED] negative src
[09:42:23] [PASSED] negative dst
[09:42:23] ============ [PASSED] drm_test_rect_calc_hscale ============
[09:42:23] ================ drm_test_rect_calc_vscale ================
[09:42:23] [PASSED] normal use
[09:42:23] [PASSED] out of max range
[09:42:23] [PASSED] out of min range
[09:42:23] [PASSED] zero dst
[09:42:23] [PASSED] negative src
[09:42:23] [PASSED] negative dst
[09:42:23] ============ [PASSED] drm_test_rect_calc_vscale ============
[09:42:23] ================== drm_test_rect_rotate ===================
[09:42:23] [PASSED] reflect-x
[09:42:23] [PASSED] reflect-y
[09:42:23] [PASSED] rotate-0
[09:42:23] [PASSED] rotate-90
[09:42:23] [PASSED] rotate-180
[09:42:23] [PASSED] rotate-270
[09:42:23] ============== [PASSED] drm_test_rect_rotate ===============
[09:42:23] ================ drm_test_rect_rotate_inv =================
[09:42:23] [PASSED] reflect-x
[09:42:23] [PASSED] reflect-y
[09:42:23] [PASSED] rotate-0
[09:42:23] [PASSED] rotate-90
[09:42:23] [PASSED] rotate-180
[09:42:23] [PASSED] rotate-270
[09:42:23] ============ [PASSED] drm_test_rect_rotate_inv =============
[09:42:23] ==================== [PASSED] drm_rect =====================
[09:42:23] ============ drm_sysfb_modeset_test (1 subtest) ============
[09:42:23] ============ drm_test_sysfb_build_fourcc_list =============
[09:42:23] [PASSED] no native formats
[09:42:23] [PASSED] XRGB8888 as native format
[09:42:23] [PASSED] remove duplicates
[09:42:23] [PASSED] convert alpha formats
[09:42:23] [PASSED] random formats
[09:42:23] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[09:42:23] ============= [PASSED] drm_sysfb_modeset_test ==============
[09:42:23] ================== drm_fixp (2 subtests) ===================
[09:42:23] [PASSED] drm_test_int2fixp
[09:42:23] [PASSED] drm_test_sm2fixp
[09:42:23] ==================== [PASSED] drm_fixp =====================
[09:42:23] ============================================================
[09:42:23] Testing complete. Ran 621 tests: passed: 621
[09:42:23] Elapsed time: 29.623s total, 2.710s configuring, 26.746s building, 0.164s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[09:42:23] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[09:42:25] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[09:42:34] Starting KUnit Kernel (1/1)...
[09:42:34] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[09:42:34] ================= ttm_device (5 subtests) ==================
[09:42:34] [PASSED] ttm_device_init_basic
[09:42:34] [PASSED] ttm_device_init_multiple
[09:42:34] [PASSED] ttm_device_fini_basic
[09:42:34] [PASSED] ttm_device_init_no_vma_man
[09:42:34] ================== ttm_device_init_pools ==================
[09:42:34] [PASSED] No DMA allocations, no DMA32 required
[09:42:34] [PASSED] DMA allocations, DMA32 required
[09:42:34] [PASSED] No DMA allocations, DMA32 required
[09:42:34] [PASSED] DMA allocations, no DMA32 required
[09:42:34] ============== [PASSED] ttm_device_init_pools ==============
[09:42:34] =================== [PASSED] ttm_device ====================
[09:42:34] ================== ttm_pool (8 subtests) ===================
[09:42:34] ================== ttm_pool_alloc_basic ===================
[09:42:34] [PASSED] One page
[09:42:34] [PASSED] More than one page
[09:42:34] [PASSED] Above the allocation limit
[09:42:34] [PASSED] One page, with coherent DMA mappings enabled
[09:42:34] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[09:42:34] ============== [PASSED] ttm_pool_alloc_basic ===============
[09:42:34] ============== ttm_pool_alloc_basic_dma_addr ==============
[09:42:34] [PASSED] One page
[09:42:34] [PASSED] More than one page
[09:42:34] [PASSED] Above the allocation limit
[09:42:34] [PASSED] One page, with coherent DMA mappings enabled
[09:42:34] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[09:42:34] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[09:42:34] [PASSED] ttm_pool_alloc_order_caching_match
[09:42:34] [PASSED] ttm_pool_alloc_caching_mismatch
[09:42:34] [PASSED] ttm_pool_alloc_order_mismatch
[09:42:34] [PASSED] ttm_pool_free_dma_alloc
[09:42:34] [PASSED] ttm_pool_free_no_dma_alloc
[09:42:34] [PASSED] ttm_pool_fini_basic
[09:42:34] ==================== [PASSED] ttm_pool =====================
[09:42:34] ================ ttm_resource (8 subtests) =================
[09:42:34] ================= ttm_resource_init_basic =================
[09:42:34] [PASSED] Init resource in TTM_PL_SYSTEM
[09:42:34] [PASSED] Init resource in TTM_PL_VRAM
[09:42:34] [PASSED] Init resource in a private placement
[09:42:34] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[09:42:34] ============= [PASSED] ttm_resource_init_basic =============
[09:42:34] [PASSED] ttm_resource_init_pinned
[09:42:34] [PASSED] ttm_resource_fini_basic
[09:42:34] [PASSED] ttm_resource_manager_init_basic
[09:42:34] [PASSED] ttm_resource_manager_usage_basic
[09:42:34] [PASSED] ttm_resource_manager_set_used_basic
[09:42:34] [PASSED] ttm_sys_man_alloc_basic
[09:42:34] [PASSED] ttm_sys_man_free_basic
[09:42:34] ================== [PASSED] ttm_resource ===================
[09:42:34] =================== ttm_tt (15 subtests) ===================
[09:42:34] ==================== ttm_tt_init_basic ====================
[09:42:34] [PASSED] Page-aligned size
[09:42:34] [PASSED] Extra pages requested
[09:42:34] ================ [PASSED] ttm_tt_init_basic ================
[09:42:34] [PASSED] ttm_tt_init_misaligned
[09:42:34] [PASSED] ttm_tt_fini_basic
[09:42:34] [PASSED] ttm_tt_fini_sg
[09:42:34] [PASSED] ttm_tt_fini_shmem
[09:42:34] [PASSED] ttm_tt_create_basic
[09:42:34] [PASSED] ttm_tt_create_invalid_bo_type
[09:42:34] [PASSED] ttm_tt_create_ttm_exists
[09:42:34] [PASSED] ttm_tt_create_failed
[09:42:34] [PASSED] ttm_tt_destroy_basic
[09:42:34] [PASSED] ttm_tt_populate_null_ttm
[09:42:34] [PASSED] ttm_tt_populate_populated_ttm
[09:42:34] [PASSED] ttm_tt_unpopulate_basic
[09:42:34] [PASSED] ttm_tt_unpopulate_empty_ttm
[09:42:34] [PASSED] ttm_tt_swapin_basic
[09:42:34] ===================== [PASSED] ttm_tt ======================
[09:42:34] =================== ttm_bo (14 subtests) ===================
[09:42:34] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[09:42:34] [PASSED] Cannot be interrupted and sleeps
[09:42:34] [PASSED] Cannot be interrupted, locks straight away
[09:42:34] [PASSED] Can be interrupted, sleeps
[09:42:34] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[09:42:34] [PASSED] ttm_bo_reserve_locked_no_sleep
[09:42:34] [PASSED] ttm_bo_reserve_no_wait_ticket
[09:42:34] [PASSED] ttm_bo_reserve_double_resv
[09:42:34] [PASSED] ttm_bo_reserve_interrupted
[09:42:34] [PASSED] ttm_bo_reserve_deadlock
[09:42:34] [PASSED] ttm_bo_unreserve_basic
[09:42:34] [PASSED] ttm_bo_unreserve_pinned
[09:42:34] [PASSED] ttm_bo_unreserve_bulk
[09:42:34] [PASSED] ttm_bo_fini_basic
[09:42:34] [PASSED] ttm_bo_fini_shared_resv
[09:42:34] [PASSED] ttm_bo_pin_basic
[09:42:34] [PASSED] ttm_bo_pin_unpin_resource
[09:42:34] [PASSED] ttm_bo_multiple_pin_one_unpin
[09:42:34] ===================== [PASSED] ttm_bo ======================
[09:42:34] ============== ttm_bo_validate (22 subtests) ===============
[09:42:34] ============== ttm_bo_init_reserved_sys_man ===============
[09:42:34] [PASSED] Buffer object for userspace
[09:42:34] [PASSED] Kernel buffer object
[09:42:34] [PASSED] Shared buffer object
[09:42:34] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[09:42:34] ============== ttm_bo_init_reserved_mock_man ==============
[09:42:34] [PASSED] Buffer object for userspace
[09:42:34] [PASSED] Kernel buffer object
[09:42:34] [PASSED] Shared buffer object
[09:42:34] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[09:42:34] [PASSED] ttm_bo_init_reserved_resv
[09:42:34] ================== ttm_bo_validate_basic ==================
[09:42:34] [PASSED] Buffer object for userspace
[09:42:34] [PASSED] Kernel buffer object
[09:42:34] [PASSED] Shared buffer object
[09:42:34] ============== [PASSED] ttm_bo_validate_basic ==============
[09:42:34] [PASSED] ttm_bo_validate_invalid_placement
[09:42:34] ============= ttm_bo_validate_same_placement ==============
[09:42:34] [PASSED] System manager
[09:42:34] [PASSED] VRAM manager
[09:42:34] ========= [PASSED] ttm_bo_validate_same_placement ==========
[09:42:34] [PASSED] ttm_bo_validate_failed_alloc
[09:42:34] [PASSED] ttm_bo_validate_pinned
[09:42:34] [PASSED] ttm_bo_validate_busy_placement
[09:42:34] ================ ttm_bo_validate_multihop =================
[09:42:34] [PASSED] Buffer object for userspace
[09:42:34] [PASSED] Kernel buffer object
[09:42:34] [PASSED] Shared buffer object
[09:42:34] ============ [PASSED] ttm_bo_validate_multihop =============
[09:42:34] ========== ttm_bo_validate_no_placement_signaled ==========
[09:42:34] [PASSED] Buffer object in system domain, no page vector
[09:42:34] [PASSED] Buffer object in system domain with an existing page vector
[09:42:34] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[09:42:34] ======== ttm_bo_validate_no_placement_not_signaled ========
[09:42:34] [PASSED] Buffer object for userspace
[09:42:34] [PASSED] Kernel buffer object
[09:42:34] [PASSED] Shared buffer object
[09:42:34] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[09:42:34] [PASSED] ttm_bo_validate_move_fence_signaled
[09:42:34] ========= ttm_bo_validate_move_fence_not_signaled =========
[09:42:34] [PASSED] Waits for GPU
[09:42:34] [PASSED] Tries to lock straight away
[09:42:34] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[09:42:34] [PASSED] ttm_bo_validate_swapout
[09:42:34] [PASSED] ttm_bo_validate_happy_evict
[09:42:34] [PASSED] ttm_bo_validate_all_pinned_evict
[09:42:34] [PASSED] ttm_bo_validate_allowed_only_evict
[09:42:34] [PASSED] ttm_bo_validate_deleted_evict
[09:42:34] [PASSED] ttm_bo_validate_busy_domain_evict
[09:42:34] [PASSED] ttm_bo_validate_evict_gutting
[09:42:34] [PASSED] ttm_bo_validate_recrusive_evict
[09:42:34] ================= [PASSED] ttm_bo_validate =================
[09:42:34] ============================================================
[09:42:34] Testing complete. Ran 102 tests: passed: 102
[09:42:34] Elapsed time: 11.181s total, 1.690s configuring, 9.276s building, 0.183s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✓ Xe.CI.BAT: success for drm/xe/madvise: Add support for purgeable buffer objects (rev8)
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (13 preceding siblings ...)
2026-03-23 9:42 ` ✓ CI.KUnit: success " Patchwork
@ 2026-03-23 10:40 ` Patchwork
2026-03-23 12:05 ` ✓ Xe.CI.FULL: " Patchwork
2026-03-23 15:45 ` [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Souza, Jose
16 siblings, 0 replies; 29+ messages in thread
From: Patchwork @ 2026-03-23 10:40 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 1427 bytes --]
== Series Details ==
Series: drm/xe/madvise: Add support for purgeable buffer objects (rev8)
URL : https://patchwork.freedesktop.org/series/156651/
State : success
== Summary ==
CI Bug Log - changes from xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c_BAT -> xe-pw-156651v8_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (14 -> 13)
------------------------------
Missing (1): bat-bmg-2
Known issues
------------
Here are the changes found in xe-pw-156651v8_BAT that come from known issues:
### IGT changes ###
#### Possible fixes ####
* igt@xe_waitfence@abstime:
- bat-dg2-oem2: [TIMEOUT][1] ([Intel XE#6506]) -> [PASS][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/bat-dg2-oem2/igt@xe_waitfence@abstime.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/bat-dg2-oem2/igt@xe_waitfence@abstime.html
[Intel XE#6506]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6506
Build changes
-------------
* Linux: xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c -> xe-pw-156651v8
IGT_8816: 8816
xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c: 6d4a5468301db368a25ae8d595bdca120a17428c
xe-pw-156651v8: 156651v8
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/index.html
* ✓ Xe.CI.FULL: success for drm/xe/madvise: Add support for purgeable buffer objects (rev8)
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (14 preceding siblings ...)
2026-03-23 10:40 ` ✓ Xe.CI.BAT: " Patchwork
@ 2026-03-23 12:05 ` Patchwork
2026-03-23 15:45 ` [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Souza, Jose
16 siblings, 0 replies; 29+ messages in thread
From: Patchwork @ 2026-03-23 12:05 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 37208 bytes --]
== Series Details ==
Series: drm/xe/madvise: Add support for purgeable buffer objects (rev8)
URL : https://patchwork.freedesktop.org/series/156651/
State : success
== Summary ==
CI Bug Log - changes from xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c_FULL -> xe-pw-156651v8_FULL
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (2 -> 2)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-156651v8_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_big_fb@x-tiled-32bpp-rotate-90:
- shard-bmg: NOTRUN -> [SKIP][1] ([Intel XE#2327]) +1 other test skip
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_big_fb@x-tiled-32bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-16bpp-rotate-270:
- shard-bmg: NOTRUN -> [SKIP][2] ([Intel XE#1124]) +4 other tests skip
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-9/igt@kms_big_fb@y-tiled-16bpp-rotate-270.html
* igt@kms_bw@linear-tiling-1-displays-2560x1440p:
- shard-bmg: NOTRUN -> [SKIP][3] ([Intel XE#367] / [Intel XE#7354]) +1 other test skip
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_bw@linear-tiling-1-displays-2560x1440p.html
* igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs@pipe-c-dp-2:
- shard-bmg: NOTRUN -> [SKIP][4] ([Intel XE#2652]) +15 other tests skip
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-9/igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs@pipe-c-dp-2.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs:
- shard-bmg: NOTRUN -> [SKIP][5] ([Intel XE#2887]) +4 other tests skip
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-4/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs.html
* igt@kms_cdclk@mode-transition-all-outputs:
- shard-bmg: NOTRUN -> [SKIP][6] ([Intel XE#2724] / [Intel XE#7449]) +1 other test skip
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_cdclk@mode-transition-all-outputs.html
* igt@kms_chamelium_edid@dp-edid-change-during-hibernate:
- shard-bmg: NOTRUN -> [SKIP][7] ([Intel XE#2252]) +4 other tests skip
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-9/igt@kms_chamelium_edid@dp-edid-change-during-hibernate.html
* igt@kms_content_protection@dp-mst-type-1-suspend-resume:
- shard-bmg: NOTRUN -> [SKIP][8] ([Intel XE#6974])
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_content_protection@dp-mst-type-1-suspend-resume.html
* igt@kms_content_protection@lic-type-0@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [FAIL][9] ([Intel XE#1178] / [Intel XE#3304] / [Intel XE#7374]) +2 other tests fail
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-9/igt@kms_content_protection@lic-type-0@pipe-a-dp-2.html
* igt@kms_cursor_crc@cursor-rapid-movement-128x42:
- shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#2320]) +1 other test skip
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_cursor_crc@cursor-rapid-movement-128x42.html
* igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy:
- shard-bmg: NOTRUN -> [SKIP][11] ([Intel XE#2291])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_cursor_legacy@2x-long-cursor-vs-flip-legacy.html
* igt@kms_cursor_legacy@cursora-vs-flipb-legacy:
- shard-bmg: [PASS][12] -> [SKIP][13] ([Intel XE#2291]) +1 other test skip
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-10/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size:
- shard-bmg: [PASS][14] -> [SKIP][15] ([Intel XE#2291] / [Intel XE#7343])
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-7/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html
* igt@kms_dsc@dsc-fractional-bpp:
- shard-bmg: NOTRUN -> [SKIP][16] ([Intel XE#2244])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_dsc@dsc-fractional-bpp.html
* igt@kms_fbcon_fbt@fbc:
- shard-bmg: NOTRUN -> [SKIP][17] ([Intel XE#4156] / [Intel XE#7425])
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_fbcon_fbt@fbc.html
* igt@kms_feature_discovery@display-2x:
- shard-bmg: [PASS][18] -> [SKIP][19] ([Intel XE#2373] / [Intel XE#7344])
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-7/igt@kms_feature_discovery@display-2x.html
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_feature_discovery@display-2x.html
* igt@kms_feature_discovery@dp-mst:
- shard-bmg: NOTRUN -> [SKIP][20] ([Intel XE#2375])
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_feature_discovery@dp-mst.html
* igt@kms_flip@2x-flip-vs-dpms-on-nop:
- shard-bmg: [PASS][21] -> [SKIP][22] ([Intel XE#2316]) +6 other tests skip
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-10/igt@kms_flip@2x-flip-vs-dpms-on-nop.html
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_flip@2x-flip-vs-dpms-on-nop.html
* igt@kms_flip@2x-wf_vblank-ts-check-interruptible:
- shard-bmg: NOTRUN -> [SKIP][23] ([Intel XE#2316])
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_flip@2x-wf_vblank-ts-check-interruptible.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1:
- shard-lnl: [PASS][24] -> [FAIL][25] ([Intel XE#301]) +1 other test fail
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-3/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
* igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling:
- shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#7178] / [Intel XE#7351])
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-4/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling.html
* igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw:
- shard-bmg: NOTRUN -> [SKIP][27] ([Intel XE#4141]) +3 other tests skip
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-1p-pri-indfb-multidraw.html
* igt@kms_frontbuffer_tracking@fbc-abgr161616f-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][28] ([Intel XE#7061] / [Intel XE#7356])
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-abgr161616f-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][29] ([Intel XE#2311]) +7 other tests skip
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-9/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-pri-shrfb-draw-render:
- shard-bmg: NOTRUN -> [SKIP][30] ([Intel XE#2312]) +6 other tests skip
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-pri-shrfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-blt:
- shard-bmg: NOTRUN -> [SKIP][31] ([Intel XE#2313]) +5 other tests skip
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-indfb-draw-blt.html
* igt@kms_hdr@static-toggle-dpms:
- shard-bmg: NOTRUN -> [SKIP][32] ([Intel XE#1503])
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_hdr@static-toggle-dpms.html
* igt@kms_hdr@static-toggle-suspend:
- shard-bmg: [PASS][33] -> [SKIP][34] ([Intel XE#1503])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-4/igt@kms_hdr@static-toggle-suspend.html
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-5/igt@kms_hdr@static-toggle-suspend.html
* igt@kms_pipe_crc_basic@suspend-read-crc@pipe-b-hdmi-a-3:
- shard-bmg: [PASS][35] -> [INCOMPLETE][36] ([Intel XE#2597]) +1 other test incomplete
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-2/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-b-hdmi-a-3.html
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-6/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-b-hdmi-a-3.html
* igt@kms_plane@pixel-format-yf-tiled-modifier-source-clamping:
- shard-bmg: NOTRUN -> [SKIP][37] ([Intel XE#7283])
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_plane@pixel-format-yf-tiled-modifier-source-clamping.html
* igt@kms_plane_multiple@tiling-yf:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#5020] / [Intel XE#7348])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_plane_multiple@tiling-yf.html
* igt@kms_plane_scaling@2x-scaler-multi-pipe:
- shard-bmg: NOTRUN -> [SKIP][39] ([Intel XE#2571] / [Intel XE#7343])
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_plane_scaling@2x-scaler-multi-pipe.html
* igt@kms_pm_backlight@fade-with-dpms:
- shard-bmg: NOTRUN -> [SKIP][40] ([Intel XE#7376] / [Intel XE#870])
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_pm_backlight@fade-with-dpms.html
* igt@kms_psr2_sf@fbc-psr2-overlay-plane-move-continuous-sf:
- shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#1489]) +2 other tests skip
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_psr2_sf@fbc-psr2-overlay-plane-move-continuous-sf.html
* igt@kms_psr@fbc-pr-cursor-plane-onoff:
- shard-bmg: NOTRUN -> [SKIP][42] ([Intel XE#2234] / [Intel XE#2850]) +2 other tests skip
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_psr@fbc-pr-cursor-plane-onoff.html
* igt@kms_scaling_modes@scaling-mode-center:
- shard-bmg: NOTRUN -> [SKIP][43] ([Intel XE#2413])
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_scaling_modes@scaling-mode-center.html
* igt@kms_setmode@invalid-clone-single-crtc:
- shard-bmg: [PASS][44] -> [SKIP][45] ([Intel XE#1435])
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-4/igt@kms_setmode@invalid-clone-single-crtc.html
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-5/igt@kms_setmode@invalid-clone-single-crtc.html
* igt@kms_vrr@negative-basic:
- shard-bmg: NOTRUN -> [SKIP][46] ([Intel XE#1499])
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_vrr@negative-basic.html
* igt@xe_compute@eu-busy-10s:
- shard-bmg: NOTRUN -> [SKIP][47] ([Intel XE#6599])
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@xe_compute@eu-busy-10s.html
* igt@xe_eudebug_online@breakpoint-many-sessions-single-tile:
- shard-bmg: NOTRUN -> [SKIP][48] ([Intel XE#7636]) +5 other tests skip
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@xe_eudebug_online@breakpoint-many-sessions-single-tile.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-rebind:
- shard-bmg: NOTRUN -> [SKIP][49] ([Intel XE#2322] / [Intel XE#7372]) +3 other tests skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-rebind.html
* igt@xe_exec_fault_mode@twice-multi-queue-userptr:
- shard-bmg: NOTRUN -> [SKIP][50] ([Intel XE#7136]) +4 other tests skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@xe_exec_fault_mode@twice-multi-queue-userptr.html
* igt@xe_exec_multi_queue@two-queues-preempt-mode-basic-smem:
- shard-bmg: NOTRUN -> [SKIP][51] ([Intel XE#6874]) +6 other tests skip
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@xe_exec_multi_queue@two-queues-preempt-mode-basic-smem.html
* igt@xe_exec_threads@threads-multi-queue-mixed-fd-userptr:
- shard-bmg: NOTRUN -> [SKIP][52] ([Intel XE#7138]) +2 other tests skip
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@xe_exec_threads@threads-multi-queue-mixed-fd-userptr.html
* igt@xe_multigpu_svm@mgpu-latency-prefetch:
- shard-bmg: NOTRUN -> [SKIP][53] ([Intel XE#6964])
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-9/igt@xe_multigpu_svm@mgpu-latency-prefetch.html
* igt@xe_pxp@pxp-termination-key-update-post-rpm:
- shard-bmg: NOTRUN -> [SKIP][54] ([Intel XE#4733] / [Intel XE#7417])
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@xe_pxp@pxp-termination-key-update-post-rpm.html
* igt@xe_query@multigpu-query-topology-l3-bank-mask:
- shard-bmg: NOTRUN -> [SKIP][55] ([Intel XE#944]) +1 other test skip
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@xe_query@multigpu-query-topology-l3-bank-mask.html
#### Possible fixes ####
* igt@kms_bw@connected-linear-tiling-1-displays-3840x2160p:
- shard-bmg: [SKIP][56] ([Intel XE#7621]) -> [PASS][57]
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-7/igt@kms_bw@connected-linear-tiling-1-displays-3840x2160p.html
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_bw@connected-linear-tiling-1-displays-3840x2160p.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-legacy:
- shard-bmg: [SKIP][58] ([Intel XE#2291]) -> [PASS][59] +1 other test pass
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipa-legacy.html
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-10/igt@kms_cursor_legacy@cursorb-vs-flipa-legacy.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size:
- shard-bmg: [SKIP][60] ([Intel XE#2291] / [Intel XE#7343]) -> [PASS][61]
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size.html
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-7/igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size.html
* igt@kms_dp_link_training@non-uhbr-sst:
- shard-bmg: [SKIP][62] ([Intel XE#4354]) -> [PASS][63]
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-5/igt@kms_dp_link_training@non-uhbr-sst.html
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-7/igt@kms_dp_link_training@non-uhbr-sst.html
* igt@kms_flip@2x-flip-vs-suspend-interruptible:
- shard-bmg: [INCOMPLETE][64] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][65] +1 other test pass
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-8/igt@kms_flip@2x-flip-vs-suspend-interruptible.html
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-9/igt@kms_flip@2x-flip-vs-suspend-interruptible.html
* igt@kms_flip@2x-plain-flip-fb-recreate:
- shard-bmg: [SKIP][66] ([Intel XE#2316]) -> [PASS][67] +6 other tests pass
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-5/igt@kms_flip@2x-plain-flip-fb-recreate.html
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-7/igt@kms_flip@2x-plain-flip-fb-recreate.html
* igt@kms_plane_multiple@2x-tiling-4:
- shard-bmg: [SKIP][68] ([Intel XE#4596]) -> [PASS][69] +1 other test pass
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-3/igt@kms_plane_multiple@2x-tiling-4.html
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-9/igt@kms_plane_multiple@2x-tiling-4.html
* igt@kms_setmode@invalid-clone-single-crtc-stealing:
- shard-bmg: [SKIP][70] ([Intel XE#1435]) -> [PASS][71]
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-3/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-10/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
* igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1:
- shard-lnl: [FAIL][72] ([Intel XE#2142]) -> [PASS][73] +1 other test pass
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-8/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-1/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
* igt@xe_fault_injection@inject-fault-probe-function-xe_device_create:
- shard-bmg: [ABORT][74] ([Intel XE#7578]) -> [PASS][75]
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-9/igt@xe_fault_injection@inject-fault-probe-function-xe_device_create.html
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@xe_fault_injection@inject-fault-probe-function-xe_device_create.html
* igt@xe_module_load@load:
- shard-lnl: ([PASS][76], [PASS][77], [PASS][78], [PASS][79], [PASS][80], [PASS][81], [PASS][82], [PASS][83], [PASS][84], [PASS][85], [PASS][86], [PASS][87], [PASS][88], [PASS][89], [PASS][90], [PASS][91], [PASS][92], [PASS][93], [PASS][94], [PASS][95], [PASS][96], [PASS][97], [PASS][98], [SKIP][99], [PASS][100], [PASS][101]) ([Intel XE#378] / [Intel XE#7405]) -> ([PASS][102], [PASS][103], [PASS][104], [PASS][105], [PASS][106], [PASS][107], [PASS][108], [PASS][109], [PASS][110], [PASS][111], [PASS][112], [PASS][113], [PASS][114], [PASS][115], [PASS][116], [PASS][117], [PASS][118], [PASS][119], [PASS][120], [PASS][121], [PASS][122], [PASS][123], [PASS][124], [PASS][125])
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-7/igt@xe_module_load@load.html
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-7/igt@xe_module_load@load.html
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-7/igt@xe_module_load@load.html
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-1/igt@xe_module_load@load.html
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-5/igt@xe_module_load@load.html
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-5/igt@xe_module_load@load.html
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-5/igt@xe_module_load@load.html
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-2/igt@xe_module_load@load.html
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-2/igt@xe_module_load@load.html
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-2/igt@xe_module_load@load.html
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-3/igt@xe_module_load@load.html
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-3/igt@xe_module_load@load.html
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-3/igt@xe_module_load@load.html
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-6/igt@xe_module_load@load.html
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-6/igt@xe_module_load@load.html
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-6/igt@xe_module_load@load.html
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-4/igt@xe_module_load@load.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-4/igt@xe_module_load@load.html
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-4/igt@xe_module_load@load.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-8/igt@xe_module_load@load.html
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-8/igt@xe_module_load@load.html
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-8/igt@xe_module_load@load.html
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-8/igt@xe_module_load@load.html
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-1/igt@xe_module_load@load.html
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-1/igt@xe_module_load@load.html
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-lnl-1/igt@xe_module_load@load.html
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-8/igt@xe_module_load@load.html
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-7/igt@xe_module_load@load.html
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-7/igt@xe_module_load@load.html
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-8/igt@xe_module_load@load.html
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-7/igt@xe_module_load@load.html
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-4/igt@xe_module_load@load.html
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-3/igt@xe_module_load@load.html
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-3/igt@xe_module_load@load.html
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-3/igt@xe_module_load@load.html
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-5/igt@xe_module_load@load.html
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-5/igt@xe_module_load@load.html
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-4/igt@xe_module_load@load.html
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-4/igt@xe_module_load@load.html
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-2/igt@xe_module_load@load.html
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-5/igt@xe_module_load@load.html
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-6/igt@xe_module_load@load.html
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-2/igt@xe_module_load@load.html
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-6/igt@xe_module_load@load.html
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-2/igt@xe_module_load@load.html
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-6/igt@xe_module_load@load.html
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-1/igt@xe_module_load@load.html
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-1/igt@xe_module_load@load.html
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-1/igt@xe_module_load@load.html
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-lnl-8/igt@xe_module_load@load.html
#### Warnings ####
* igt@kms_content_protection@atomic:
- shard-bmg: [FAIL][126] ([Intel XE#1178] / [Intel XE#3304] / [Intel XE#7374]) -> [SKIP][127] ([Intel XE#7642])
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-7/igt@kms_content_protection@atomic.html
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_content_protection@atomic.html
* igt@kms_content_protection@legacy:
- shard-bmg: [SKIP][128] ([Intel XE#7642]) -> [FAIL][129] ([Intel XE#1178] / [Intel XE#3304] / [Intel XE#7374]) +2 other tests fail
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-5/igt@kms_content_protection@legacy.html
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-7/igt@kms_content_protection@legacy.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-render:
- shard-bmg: [SKIP][130] ([Intel XE#2311]) -> [SKIP][131] ([Intel XE#2312]) +10 other tests skip
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-4/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-render.html
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-blt:
- shard-bmg: [SKIP][132] ([Intel XE#2312]) -> [SKIP][133] ([Intel XE#2311]) +16 other tests skip
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-3/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-blt.html
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-10/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-shrfb-draw-blt:
- shard-bmg: [SKIP][134] ([Intel XE#4141]) -> [SKIP][135] ([Intel XE#2312]) +5 other tests skip
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-10/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-shrfb-draw-blt.html
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-shrfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render:
- shard-bmg: [SKIP][136] ([Intel XE#2312]) -> [SKIP][137] ([Intel XE#4141]) +8 other tests skip
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render.html
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-9/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
- shard-bmg: [SKIP][138] ([Intel XE#2312]) -> [SKIP][139] ([Intel XE#2313]) +16 other tests skip
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-5/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-7/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-pgflip-blt:
- shard-bmg: [SKIP][140] ([Intel XE#2313]) -> [SKIP][141] ([Intel XE#2312]) +13 other tests skip
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-10/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-pgflip-blt.html
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-3/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-pgflip-blt.html
* igt@kms_rotation_crc@primary-rotation-90:
- shard-bmg: [SKIP][142] ([Intel XE#3414] / [Intel XE#3904] / [Intel XE#7342]) -> [SKIP][143] ([Intel XE#3904] / [Intel XE#7342]) +1 other test skip
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-5/igt@kms_rotation_crc@primary-rotation-90.html
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-7/igt@kms_rotation_crc@primary-rotation-90.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
- shard-bmg: [SKIP][144] ([Intel XE#3904] / [Intel XE#7342]) -> [SKIP][145] ([Intel XE#3414] / [Intel XE#3904] / [Intel XE#7342])
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-4/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-5/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-bmg: [FAIL][146] ([Intel XE#1729] / [Intel XE#7424]) -> [SKIP][147] ([Intel XE#2426] / [Intel XE#5848])
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c/shard-bmg-4/igt@kms_tiled_display@basic-test-pattern.html
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/shard-bmg-5/igt@kms_tiled_display@basic-test-pattern.html
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
[Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
[Intel XE#2142]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2142
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2373
[Intel XE#2375]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2375
[Intel XE#2413]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2413
[Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
[Intel XE#2571]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2571
[Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2724]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2724
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
[Intel XE#3304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3304
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
[Intel XE#4156]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4156
[Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
[Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#5020]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5020
[Intel XE#5848]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5848
[Intel XE#6599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6599
[Intel XE#6874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6874
[Intel XE#6964]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6964
[Intel XE#6974]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6974
[Intel XE#7061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7061
[Intel XE#7136]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7136
[Intel XE#7138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7138
[Intel XE#7178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7178
[Intel XE#7283]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7283
[Intel XE#7342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7342
[Intel XE#7343]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7343
[Intel XE#7344]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7344
[Intel XE#7348]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7348
[Intel XE#7351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7351
[Intel XE#7354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7354
[Intel XE#7356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7356
[Intel XE#7372]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7372
[Intel XE#7374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7374
[Intel XE#7376]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7376
[Intel XE#7405]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7405
[Intel XE#7417]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7417
[Intel XE#7424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7424
[Intel XE#7425]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7425
[Intel XE#7449]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7449
[Intel XE#7578]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7578
[Intel XE#7621]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7621
[Intel XE#7636]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7636
[Intel XE#7642]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7642
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
Build changes
-------------
* Linux: xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c -> xe-pw-156651v8
IGT_8816: 8816
xe-4760-6d4a5468301db368a25ae8d595bdca120a17428c: 6d4a5468301db368a25ae8d595bdca120a17428c
xe-pw-156651v8: 156651v8
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-156651v8/index.html
* Re: [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
` (15 preceding siblings ...)
2026-03-23 12:05 ` ✓ Xe.CI.FULL: " Patchwork
@ 2026-03-23 15:45 ` Souza, Jose
16 siblings, 0 replies; 29+ messages in thread
From: Souza, Jose @ 2026-03-23 15:45 UTC (permalink / raw)
To: intel-xe@lists.freedesktop.org, Yadav, Arvind
Cc: Brost, Matthew, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
> This patch series introduces comprehensive support for purgeable buffer
> objects in the Xe driver, enabling userspace to provide memory usage
> hints for better memory management under system pressure.
>
> Overview:
>
> Purgeable memory allows applications to mark buffer objects as "not
> currently needed" (DONTNEED), making them eligible for kernel
> reclamation during memory pressure. This helps prevent OOM conditions
> and enables more efficient GPU memory utilization for workloads with
> temporary or regeneratable data (caches, intermediate results, decoded
> frames, etc.).
>
> Purgeable BO Lifecycle:
> 1. WILLNEED (default): BO actively needed, kernel preserves backing store
> 2. DONTNEED (user hint): BO contents discardable, eligible for purging
> 3. PURGED (kernel action): Backing store reclaimed during memory pressure
>
> Key Design Principles:
> - i915 compatibility: "Once purged, always purged" semantics - purged
>   BOs remain permanently invalid and must be destroyed/recreated
> - Per-VMA state tracking: Each VMA tracks its own purgeable state, BO
>   is only marked DONTNEED when ALL VMAs across ALL VMs agree (Thomas
>   Hellström)
> - Safety first: Imported/exported dma-bufs blocked from purgeable
>   state - no visibility into external device usage (Matt Roper)
> - Multiple protection layers: Validation in madvise, VM bind, mmap,
>   CPU and GPU fault handlers. GPU page faults on DONTNEED BOs are
>   rejected in xe_pagefault_begin() to preserve the GPU PTE
>   invalidation done at madvise time; without this the rebind path
>   would re-map real pages and undo the PTE zap, preventing the
>   shrinker from ever reclaiming the BO.
> - Correct GPU PTE zapping: madvise_purgeable() explicitly sets
>   skip_invalidation per VMA (false for DONTNEED, true for WILLNEED,
>   purged and dmabuf-shared BOs) so DONTNEED always triggers a GPU PTE
>   zap regardless of prior madvise state.
> - Scratch PTE support: Fault-mode VMs use scratch pages for safe zero
>   reads on purged BO access.
> - TTM shrinker integration: Encapsulated helpers manage
>   xe_ttm_tt->purgeable flag and shrinker page accounting (shrinkable
>   vs purgeable buckets)
>
>
uAPI patch is Acked-by: José Roberto de Souza <jose.souza@intel.com>
Mesa MR:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/40573
Thank you
>
> v2 Changes:
> - Reordered patches: Moved shared BO helper before main implementation
>   for proper dependency order
> - Fixed reference counting in mmap offset validation (use
>   drm_gem_object_put)
> - Removed incorrect claims about madvise(WILLNEED) restoring purged BOs
> - Fixed error code documentation inconsistencies
> - Initialize purge_state_val fields to prevent kernel memory leaks
> - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas Hellström)
> - Add NULL rebind with scratch PTEs for fault mode (Thomas Hellström)
> - Implement i915-compatible retained field logic (Thomas Hellström)
> - Skip BO validation for purged BOs in page fault handler (crash fix)
> - Add scratch VM check in page fault path (non-scratch VMs fail fault)
>
> v3 Changes (addressing Matt and Thomas Hellström feedback):
> - Per-VMA purgeable state tracking: Added xe_vma->purgeable_state field
> - Complete VMA check: xe_bo_all_vmas_dontneed() walks all VMAs across
>   all VMs to ensure unanimous DONTNEED before marking BO purgeable
> - VMA unbind recheck: Added xe_bo_recheck_purgeable_on_vma_unbind() to
>   re-evaluate BO state when VMAs are destroyed
> - Block external dma-bufs: Added xe_bo_is_external_dmabuf() check using
>   drm_gem_is_imported() and obj->dma_buf to prevent purging
>   imported/exported BOs
> - Consistent lockdep enforcement: Added xe_bo_assert_held() to all
>   helpers that access madv_purgeable state
> - Simplified page table logic: Renamed is_null to is_null_or_purged in
>   xe_pt_stage_bind_entry() - purged BOs treated identically to null VMAs
> - Removed unnecessary checks: Dropped redundant "&& bo" check in
>   xe_ttm_bo_purge()
> - Xe-specific warnings: Changed drm_warn() to XE_WARN_ON() in purge path
> - Moved purge checks under locks: Purge state validation now done after
>   acquiring dma-resv lock in vma_lock_and_validate() and
>   xe_pagefault_begin()
> - Race-free fault handling: Removed unlocked purge check from
>   xe_pagefault_handle_vma(), moved to locked xe_pagefault_begin()
> - Shrinker helper functions: Added xe_bo_set_purgeable_shrinker() and
>   xe_bo_clear_purgeable_shrinker() to encapsulate TTM purgeable flag
>   updates and shrinker page accounting, improving code clarity and
>   maintainability
>
> v4 Changes (addressing Matt and Thomas Hellström feedback):
> - UAPI: Removed '__u64 reserved' field from purge_state_val union to
>   fit 16-byte size constraint (Matt)
> - Changed madv_purgeable from atomic_t to u32 across all patches (Matt)
> - CPU fault handling: Added purged check to fastpath
>   (xe_bo_cpu_fault_fastpath) to prevent hang when accessing existing
>   mmap of purged BO
>
> v5 Changes (addressing Matt and Thomas Hellström feedback):
> - Add locking documentation to madv_purgeable field comment (Matt)
> - Introduce xe_bo_set_purgeable_state() helper (void return) to
>   centralize madv_purgeable updates with xe_bo_assert_held() and state
>   transition validation using explicit enum checks (no transition out
>   of PURGED) (Matt)
> - Make xe_ttm_bo_purge() return int and propagate failures from
>   xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g.
>   no_wait_gpu paths) rather than silently ignoring (Matt)
> - Replace drm_WARN_ON with xe_assert for better Xe-specific assertions
>   (Matt)
> - Hook purgeable handling into
>   madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] instead of
>   special-case path in xe_vm_madvise_ioctl() (Matt)
> - Track purgeable retained return via xe_madvise_details and perform
>   copy_to_user() from xe_madvise_details_fini() after locks are
>   dropped (Matt)
> - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
>   __maybe_unused on madvise_purgeable() to maintain bisectability
>   until shrinker integration is complete in final patch (Matt)
> - Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
>   right after drm_gpuva_unlink() where we already hold the BO lock,
>   drop the trylock-based late destroy path (Matt)
> - Move purgeable_state into xe_vma_mem_attr with the other madvise
>   attributes (Matt)
> - Drop READ_ONCE since the BO lock already protects us (Matt)
> - Keep returning false when there are no VMAs - otherwise we'd mark
>   BOs purgeable without any user hint (Matt)
> - Use struct xe_vma_lock_and_validate_flags instead of multiple bool
>   parameters to improve readability and prevent argument transposition
>   (Matt)
> - Fix LRU crash while running shrink test
> - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
> - Split ghost BO and zero-refcount handling in xe_bo_shrink() (Thomas)
>
> v6 Changes (addressing Jose Souza, Thomas Hellström and Matt Brost
> feedback):
> - Document DONTNEED blocking behavior in uAPI: Clearly describe which
>   operations are blocked and with what error codes. (Thomas, Matt)
> - Block VM_BIND to DONTNEED BOs: Return -EBUSY to prevent creating new
>   VMAs to purgeable BOs (undefined behavior). (Thomas, Matt)
> - Block CPU faults to DONTNEED BOs: Return VM_FAULT_SIGBUS in both
>   fastpath and slowpath to prevent undefined behavior. (Thomas, Matt)
> - Block new mmap() to DONTNEED/purged BOs: Return -EBUSY for DONTNEED,
>   -EINVAL for PURGED. (Thomas, Matt)
> - Block dma-buf export of DONTNEED/purged BOs: Return -EBUSY for
>   DONTNEED, -EINVAL for PURGED. (Thomas, Matt)
> - Fix state transition bug: xe_bo_all_vmas_dontneed() now returns enum
>   to distinguish NO_VMAS (preserve state) from WILLNEED (has active
>   VMAs), preventing incorrect DONTNEED → WILLNEED flip on last VMA
>   unmap (Matt)
> - Set skip_invalidation explicitly in madvise_purgeable() to ensure
>   DONTNEED always zaps GPU PTEs regardless of prior madvise state.
> - Add DRM_XE_QUERY_CONFIG_FLAG_HAS_PURGING_SUPPORT for userspace
>   feature detection. (Jose)
>
> v7 Changes (addressing Thomas Hellström, Matt B and Jose feedback):
> - mmap check moved from xe_gem_mmap_offset_ioctl() into a new
>   xe_gem_object_mmap() callback wrapping drm_gem_ttm_mmap(), with
>   interruptible lock (Thomas)
> - dma-buf export lock made interruptible: xe_bo_lock(bo, true) (Thomas)
> - vma_lock_and_validate_flags passed by value instead of pointer
>   (reviewer)
> - xe_bo_recompute_purgeable_state() simplified using enum value
>   alignment between xe_bo_vmas_purge_state and
>   xe_madv_purgeable_state, with static_assert to enforce the
>   alignment (Thomas)
> - Merge xe_bo_set_purgeable_shrinker/xe_bo_clear_purgeable_shrinker
>   into a single static xe_bo_set_purgeable_shrinker(bo, new_state)
>   called automatically from xe_bo_set_purgeable_state() (Thomas)
> - Drop "drm/xe/bo: Skip zero-refcount BOs in shrinker" patch — ghost
>   BO path already handles this correctly (Thomas)
> - Fix Engine memory CAT errors on scratch-page VMs (Matt Roper):
>   xe_pagefault_asid_to_vm() now accepts scratch VMs via
>   || xe_vm_has_scratch(vm); xe_pagefault_begin() checks
>   DONTNEED/purged before validate/migrate and signals skip_rebind to
>   caller via bool* out-parameter to avoid xe_vma_rebind() assert and
>   PTE zap undo
> - Add new patch 12: Accept canonical GPU addresses in
>   xe_vm_madvise_ioctl() using xe_device_uncanonicalize_addr() (Matt B)
> - UAPI doc comment improvement. (Jose)
>
> Arvind Yadav (11):
> drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo
> drm/xe/madvise: Implement purgeable buffer object support
> drm/xe/bo: Block CPU faults to purgeable buffer objects
> drm/xe/vm: Prevent binding of purged buffer objects
> drm/xe/madvise: Implement per-VMA purgeable state tracking
> drm/xe/madvise: Block imported and exported dma-bufs
> drm/xe/bo: Block mmap of DONTNEED/purged BOs
> drm/xe/dma_buf: Block export of DONTNEED/purged BOs
> drm/xe/bo: Add purgeable shrinker state helpers
> drm/xe/madvise: Enable purgeable buffer object IOCTL support
> drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl
>
> Himal Prasad Ghimiray (1):
> drm/xe/uapi: Add UAPI support for purgeable buffer objects
>
> drivers/gpu/drm/xe/xe_bo.c | 193 +++++++++++++++++--
> drivers/gpu/drm/xe/xe_bo.h | 58 ++++++
> drivers/gpu/drm/xe/xe_bo_types.h | 6 +
> drivers/gpu/drm/xe/xe_dma_buf.c | 21 +++
> drivers/gpu/drm/xe/xe_pagefault.c | 25 ++-
> drivers/gpu/drm/xe/xe_pt.c | 40 +++-
> drivers/gpu/drm/xe/xe_query.c | 2 +
> drivers/gpu/drm/xe/xe_svm.c | 1 +
> drivers/gpu/drm/xe/xe_vm.c | 100 ++++++++--
> drivers/gpu/drm/xe/xe_vm_madvise.c | 292 ++++++++++++++++++++++++++++-
> drivers/gpu/drm/xe/xe_vm_madvise.h | 3 +
> drivers/gpu/drm/xe/xe_vm_types.h | 11 ++
> include/uapi/drm/xe_drm.h | 69 +++++++
> 13 files changed, 778 insertions(+), 43 deletions(-)
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v7 12/12] drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl
2026-03-23 9:31 ` [PATCH v7 12/12] drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl Arvind Yadav
@ 2026-03-24 3:35 ` Matthew Brost
0 siblings, 0 replies; 29+ messages in thread
From: Matthew Brost @ 2026-03-24 3:35 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom
On Mon, Mar 23, 2026 at 03:01:01PM +0530, Arvind Yadav wrote:
> Userspace passes canonical (sign-extended) GPU addresses where bits 63:48
> mirror bit 47. The internal GPUVM uses non-canonical form (upper bits
> zeroed), so passing raw canonical addresses into GPUVM lookups causes
> mismatches for addresses above 128TiB.
>
> Strip the sign extension with xe_device_uncanonicalize_addr() at the
> top of xe_vm_madvise_ioctl(). Non-canonical addresses are unaffected.
>
> Fixes: ada7486c5668 ("drm/xe: Implement madvise ioctl for xe")
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm_madvise.c | 16 ++++++++++++----
> 1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 4a19da5e86d4..2d03676ee595 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -673,8 +673,15 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> struct xe_device *xe = to_xe_device(dev);
> struct xe_file *xef = to_xe_file(file);
> struct drm_xe_madvise *args = data;
> - struct xe_vmas_in_madvise_range madvise_range = {.addr = args->start,
> - .range = args->range, };
> + struct xe_vmas_in_madvise_range madvise_range = {
> + /*
> + * Userspace may pass canonical (sign-extended) addresses.
> + * Strip the sign extension to get the internal non-canonical
> + * form used by the GPUVM, matching xe_vm_bind_ioctl() behavior.
> + */
> + .addr = xe_device_uncanonicalize_addr(xe, args->start),
> + .range = args->range,
> + };
> struct xe_madvise_details details;
> struct xe_vm *vm;
> struct drm_exec exec;
> @@ -724,7 +731,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> if (err)
> goto unlock_vm;
>
> - err = xe_vm_alloc_madvise_vma(vm, args->start, args->range);
> + err = xe_vm_alloc_madvise_vma(vm, madvise_range.addr, args->range);
> if (err)
> goto madv_fini;
>
> @@ -774,7 +781,8 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args,
> &details);
>
> - err = xe_vm_invalidate_madvise_range(vm, args->start, args->start + args->range);
> + err = xe_vm_invalidate_madvise_range(vm, madvise_range.addr,
> + madvise_range.addr + args->range);
>
> if (madvise_range.has_svm_userptr_vmas)
> xe_svm_notifier_unlock(vm);
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v7 05/12] drm/xe/vm: Prevent binding of purged buffer objects
2026-03-23 9:30 ` [PATCH v7 05/12] drm/xe/vm: Prevent binding of purged " Arvind Yadav
@ 2026-03-24 12:21 ` Thomas Hellström
0 siblings, 0 replies; 29+ messages in thread
From: Thomas Hellström @ 2026-03-24 12:21 UTC (permalink / raw)
To: Arvind Yadav, intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray
On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
> Add purge checking to vma_lock_and_validate() to block new mapping
> operations on purged BOs while allowing cleanup operations to
> proceed.
>
> Purged BOs have their backing pages freed by the kernel. New
> mapping operations (MAP, PREFETCH, REMAP) must be rejected with
> -EINVAL to prevent GPU access to invalid memory. Cleanup
> operations (UNMAP) must be allowed so applications can release
> resources after detecting purge via the retained field.
>
> REMAP operations require mixed handling - reject new prev/next
> VMAs if the BO is purged, but allow the unmap portion to proceed
> for cleanup.
>
> The check_purged flag in struct xe_vma_lock_and_validate_flags
> distinguishes between these cases: true for new mappings (must
> reject), false for cleanup (allow).
>
> v2:
> - Clarify that purged BOs are permanently invalid (i915 semantics)
> - Remove incorrect claim about madvise(WILLNEED) restoring purged BOs
>
> v3:
> - Move xe_bo_is_purged check under vma_lock_and_validate (Matt)
> - Add check_purged parameter to distinguish new mappings from cleanup
> - Allow UNMAP operations to prevent resource leaks
> - Handle REMAP operation's dual nature (cleanup + new mappings)
>
> v5:
> - Replace three boolean parameters with struct
>   xe_vma_lock_and_validate_flags to improve readability and prevent
>   argument transposition (Matt)
> - Use u32 bitfields instead of bool members to match
>   xe_bo_shrink_flags pattern - more efficient packing and follows
>   xe driver conventions (Thomas)
> - Pass struct as const since flags are read-only (Matt)
>
> v6:
> - Block VM_BIND to DONTNEED BOs with -EBUSY (Thomas, Matt)
>
> v7:
> - Pass xe_vma_lock_and_validate_flags by value instead of by
>   pointer, consistent with xe driver style. (Thomas)
>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 82 ++++++++++++++++++++++++++++++++----
> --
> 1 file changed, 69 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index a0ade67d616e..9c1a82b64a43 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2918,8 +2918,22 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
>  	}
>  }
>  
> +/**
> + * struct xe_vma_lock_and_validate_flags - Flags for vma_lock_and_validate()
> + * @res_evict: Allow evicting resources during validation
> + * @validate: Perform BO validation
> + * @request_decompress: Request BO decompression
> + * @check_purged: Reject operation if BO is purged
> + */
> +struct xe_vma_lock_and_validate_flags {
> +	u32 res_evict : 1;
> +	u32 validate : 1;
> +	u32 request_decompress : 1;
> +	u32 check_purged : 1;
> +};
> +
>  static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
> -				 bool res_evict, bool validate, bool request_decompress)
> +				 struct xe_vma_lock_and_validate_flags flags)
>  {
>  	struct xe_bo *bo = xe_vma_bo(vma);
>  	struct xe_vm *vm = xe_vma_vm(vma);
> @@ -2928,15 +2942,24 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
>  	if (bo) {
>  		if (!bo->vm)
>  			err = drm_exec_lock_obj(exec, &bo->ttm.base);
> -		if (!err && validate)
> +
> +		/* Reject new mappings to DONTNEED/purged BOs; allow cleanup operations */
> +		if (!err && flags.check_purged) {
> +			if (xe_bo_madv_is_dontneed(bo))
> +				err = -EBUSY; /* BO marked purgeable */
> +			else if (xe_bo_is_purged(bo))
> +				err = -EINVAL; /* BO already purged */
> +		}
> +
> +		if (!err && flags.validate)
>  			err = xe_bo_validate(bo, vm,
>  					     xe_vm_allow_vm_eviction(vm) &&
> -					     res_evict, exec);
> +					     flags.res_evict, exec);
>  
>  		if (err)
>  			return err;
>  
> -		if (request_decompress)
> +		if (flags.request_decompress)
>  			err = xe_bo_decompress(bo);
>  	}
>  
> @@ -3030,10 +3053,13 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  	case DRM_GPUVA_OP_MAP:
>  		if (!op->map.invalidate_on_bind)
>  			err = vma_lock_and_validate(exec, op->map.vma,
> -						    res_evict,
> -						    !xe_vm_in_fault_mode(vm) ||
> -						    op->map.immediate,
> -						    op->map.request_decompress);
> +						    (struct xe_vma_lock_and_validate_flags) {
> +							.res_evict = res_evict,
> +							.validate = !xe_vm_in_fault_mode(vm) ||
> +								    op->map.immediate,
> +							.request_decompress = op->map.request_decompress,
> +							.check_purged = true,
> +						    });
>  		break;
>  	case DRM_GPUVA_OP_REMAP:
>  		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
> @@ -3042,13 +3068,28 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  
>  		err = vma_lock_and_validate(exec,
>  					    gpuva_to_vma(op->base.remap.unmap->va),
> -					    res_evict, false, false);
> +					    (struct xe_vma_lock_and_validate_flags) {
> +						.res_evict = res_evict,
> +						.validate = false,
> +						.request_decompress = false,
> +						.check_purged = false,
> +					    });
>  		if (!err && op->remap.prev)
>  			err = vma_lock_and_validate(exec, op->remap.prev,
> -						    res_evict, true, false);
> +						    (struct xe_vma_lock_and_validate_flags) {
> +							.res_evict = res_evict,
> +							.validate = true,
> +							.request_decompress = false,
> +							.check_purged = true,
> +						    });
>  		if (!err && op->remap.next)
>  			err = vma_lock_and_validate(exec, op->remap.next,
> -						    res_evict, true, false);
> +						    (struct xe_vma_lock_and_validate_flags) {
> +							.res_evict = res_evict,
> +							.validate = true,
> +							.request_decompress = false,
> +							.check_purged = true,
> +						    });
>  		break;
>  	case DRM_GPUVA_OP_UNMAP:
>  		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
> @@ -3057,7 +3098,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  
>  		err = vma_lock_and_validate(exec,
>  					    gpuva_to_vma(op->base.unmap.va),
> -					    res_evict, false, false);
> +					    (struct xe_vma_lock_and_validate_flags) {
> +						.res_evict = res_evict,
> +						.validate = false,
> +						.request_decompress = false,
> +						.check_purged = false,
> +					    });
>  		break;
>  	case DRM_GPUVA_OP_PREFETCH:
>  	{
> @@ -3070,9 +3116,19 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>  			  region <= ARRAY_SIZE(region_to_mem_type));
>  		}
>  
> +		/*
> +		 * Prefetch attempts to migrate BO's backing store without
> +		 * repopulating it first. Purged BOs have no backing store
> +		 * to migrate, so reject the operation.
> +		 */
>  		err = vma_lock_and_validate(exec,
>  					    gpuva_to_vma(op->base.prefetch.va),
> -					    res_evict, false, false);
> +					    (struct xe_vma_lock_and_validate_flags) {
> +						.res_evict = res_evict,
> +						.validate = false,
> +						.request_decompress = false,
> +						.check_purged = true,
> +					    });
>  		if (!err && !xe_vma_has_no_bo(vma))
>  			err = xe_bo_migrate(xe_vma_bo(vma),
>  					    region_to_mem_type[region],
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v7 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking
2026-03-23 9:30 ` [PATCH v7 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
@ 2026-03-24 12:25 ` Thomas Hellström
0 siblings, 0 replies; 29+ messages in thread
From: Thomas Hellström @ 2026-03-24 12:25 UTC (permalink / raw)
To: Arvind Yadav, intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray
On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
> Track purgeable state per-VMA instead of using a coarse shared
> BO check. This prevents purging shared BOs until all VMAs across
> all VMs are marked DONTNEED.
>
> Add xe_bo_all_vmas_dontneed() to check all VMAs before marking
> a BO purgeable. Add xe_bo_recheck_purgeable_on_vma_unbind() to
> handle state transitions when VMAs are destroyed - if all
> remaining VMAs are DONTNEED the BO can become purgeable, or if
> no VMAs remain it transitions to WILLNEED.
>
> The per-VMA purgeable_state field stores the madvise hint for
> each mapping. Shared BOs can only be purged when all VMAs
> unanimously indicate DONTNEED.
>
> This prevents the bug where unmapping the last VMA would incorrectly
> flip a DONTNEED BO back to WILLNEED. The enum-based state check
> preserves BO state when no VMAs remain, only updating when VMAs
> provide explicit hints.
>
> v3:
> - This addresses Thomas Hellström's feedback: "loop over all vmas
>   attached to the bo and check that they all say WONTNEED. This will
>   also need a check at VMA unbinding"
>
> v4:
> - @madv_purgeable atomic_t → u32 change across all relevant
>   patches (Matt)
>
> v5:
> - Call xe_bo_recheck_purgeable_on_vma_unbind() from xe_vma_destroy()
>   right after drm_gpuva_unlink() where we already hold the BO lock,
>   drop the trylock-based late destroy path (Matt)
> - Move purgeable_state into xe_vma_mem_attr with the other madvise
>   attributes (Matt)
> - Drop READ_ONCE since the BO lock already protects us (Matt)
> - Keep returning false when there are no VMAs - otherwise we'd mark
>   BOs purgeable without any user hint (Matt)
> - Use xe_bo_set_purgeable_state() instead of direct
>   initialization (Matt)
> - Use xe_assert instead of drm_warn (Thomas)
>
> v6:
> - Fix state transition bug: don't flip DONTNEED → WILLNEED when
>   last VMA unmapped (Matt)
> - Change xe_bo_all_vmas_dontneed() from bool to enum to distinguish
>   "no VMAs" from "has WILLNEED VMA" (Matt)
> - Preserve BO state on NO_VMAS instead of forcing WILLNEED.
> - Set skip_invalidation explicitly in madvise_purgeable() to ensure
>   DONTNEED always zaps GPU PTEs regardless of prior madvise state.
>
> v7:
> - Don't zap PTEs at DONTNEED time -- pages are still alive.
>   The zap happens in xe_bo_move_notify() right before the shrinker
>   frees them.
> - Simplify xe_bo_recompute_purgeable_state() by relying on the
>   intentional value alignment between xe_bo_vmas_purge_state and
>   xe_madv_purgeable_state enums. Add static_assert to enforce the
>   alignment. (Thomas)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 1 +
> drivers/gpu/drm/xe/xe_vm.c | 9 +-
> drivers/gpu/drm/xe/xe_vm_madvise.c | 136 +++++++++++++++++++++++++++--
> drivers/gpu/drm/xe/xe_vm_madvise.h | 3 +
> drivers/gpu/drm/xe/xe_vm_types.h | 11 +++
> 5 files changed, 153 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index a91c84487a67..062ef77e283f 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -322,6 +322,7 @@ static void xe_vma_set_default_attributes(struct xe_vma *vma)
>  		.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
>  		.pat_index = vma->attr.default_pat_index,
>  		.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
> +		.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
>  	};
>  
>  	xe_vma_mem_attr_copy(&vma->attr, &default_attr);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 9c1a82b64a43..07393540f34c 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -39,6 +39,7 @@
>  #include "xe_tile.h"
>  #include "xe_tlb_inval.h"
>  #include "xe_trace_bo.h"
> +#include "xe_vm_madvise.h"
>  #include "xe_wa.h"
>  
>  static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> @@ -1085,6 +1086,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
>  static void xe_vma_destroy_late(struct xe_vma *vma)
>  {
>  	struct xe_vm *vm = xe_vma_vm(vma);
> +	struct xe_bo *bo = xe_vma_bo(vma);
>  
>  	if (vma->ufence) {
>  		xe_sync_ufence_put(vma->ufence);
> @@ -1099,7 +1101,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
>  	} else if (xe_vma_is_null(vma) || xe_vma_is_cpu_addr_mirror(vma)) {
>  		xe_vm_put(vm);
>  	} else {
> -		xe_bo_put(xe_vma_bo(vma));
> +		xe_bo_put(bo);
>  	}
>  
>  	xe_vma_free(vma);
> @@ -1125,6 +1127,7 @@ static void vma_destroy_cb(struct dma_fence *fence,
>  static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
>  {
>  	struct xe_vm *vm = xe_vma_vm(vma);
> +	struct xe_bo *bo = xe_vma_bo(vma);
>  
>  	lockdep_assert_held_write(&vm->lock);
>  	xe_assert(vm->xe, list_empty(&vma->combined_links.destroy));
> @@ -1133,9 +1136,10 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
>  		xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
>  		xe_userptr_destroy(to_userptr_vma(vma));
>  	} else if (!xe_vma_is_null(vma) && !xe_vma_is_cpu_addr_mirror(vma)) {
> -		xe_bo_assert_held(xe_vma_bo(vma));
> +		xe_bo_assert_held(bo);
>  
>  		drm_gpuva_unlink(&vma->gpuva);
> +		xe_bo_recompute_purgeable_state(bo);
>  	}
>  
>  	xe_vm_assert_held(vm);
> @@ -2692,6 +2696,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>  			.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
>  			.default_pat_index = op->map.pat_index,
>  			.pat_index = op->map.pat_index,
> +			.purgeable_state = XE_MADV_PURGEABLE_WILLNEED,
>  		};
>  
>  		flags |= op->map.vma_flags & XE_VMA_CREATE_MASK;
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index ffba2e41c539..ed1940da7739 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -13,6 +13,7 @@
>  #include "xe_pt.h"
>  #include "xe_svm.h"
>  #include "xe_tlb_inval.h"
> +#include "xe_vm.h"
>  
>  struct xe_vmas_in_madvise_range {
>  	u64 addr;
> @@ -184,6 +185,116 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>  	}
>  }
>  
> +/**
> + * enum xe_bo_vmas_purge_state - VMA purgeable state aggregation
> + *
> + * Distinguishes whether a BO's VMAs are all DONTNEED, have at least
> + * one WILLNEED, or have no VMAs at all.
> + *
> + * Enum values align with XE_MADV_PURGEABLE_* states for consistency.
> + */
> +enum xe_bo_vmas_purge_state {
> +	/** @XE_BO_VMAS_STATE_WILLNEED: At least one VMA is WILLNEED */
> +	XE_BO_VMAS_STATE_WILLNEED = 0,
> +	/** @XE_BO_VMAS_STATE_DONTNEED: All VMAs are DONTNEED */
> +	XE_BO_VMAS_STATE_DONTNEED = 1,
> +	/** @XE_BO_VMAS_STATE_NO_VMAS: BO has no VMAs */
> +	XE_BO_VMAS_STATE_NO_VMAS = 2,
> +};
> +
> +/*
> + * xe_bo_recompute_purgeable_state() casts between xe_bo_vmas_purge_state and
> + * xe_madv_purgeable_state. Enforce that WILLNEED=0 and DONTNEED=1 match across
> + * both enums so the single-line cast is always valid.
> + */
> +static_assert(XE_BO_VMAS_STATE_WILLNEED == (int)XE_MADV_PURGEABLE_WILLNEED,
> +	      "VMA purge state WILLNEED must equal madv purgeable WILLNEED");
> +static_assert(XE_BO_VMAS_STATE_DONTNEED == (int)XE_MADV_PURGEABLE_DONTNEED,
> +	      "VMA purge state DONTNEED must equal madv purgeable DONTNEED");
> +
> +/**
> + * xe_bo_all_vmas_dontneed() - Determine BO VMA purgeable state
> + * @bo: Buffer object
> + *
> + * Check all VMAs across all VMs to determine aggregate purgeable state.
> + * Shared BOs require unanimous DONTNEED state from all mappings.
> + *
> + * Caller must hold BO dma-resv lock.
> + *
> + * Return: XE_BO_VMAS_STATE_DONTNEED if all VMAs are DONTNEED,
> + *         XE_BO_VMAS_STATE_WILLNEED if at least one VMA is not DONTNEED,
> + *         XE_BO_VMAS_STATE_NO_VMAS if BO has no VMAs
> + */
> +static enum xe_bo_vmas_purge_state xe_bo_all_vmas_dontneed(struct xe_bo *bo)
> +{
> +	struct drm_gpuvm_bo *vm_bo;
> +	struct drm_gpuva *gpuva;
> +	struct drm_gem_object *obj = &bo->ttm.base;
> +	bool has_vmas = false;
> +
> +	xe_bo_assert_held(bo);
> +
> +	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> +		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
> +			struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> +			has_vmas = true;
> +
> +			/* Any non-DONTNEED VMA prevents purging */
> +			if (vma->attr.purgeable_state != XE_MADV_PURGEABLE_DONTNEED)
> +				return XE_BO_VMAS_STATE_WILLNEED;
> +		}
> +	}
> +
> +	/*
> +	 * No VMAs => preserve existing BO purgeable state.
> +	 * Avoids incorrectly flipping DONTNEED -> WILLNEED when last VMA unmapped.
> +	 */
> +	if (!has_vmas)
> +		return XE_BO_VMAS_STATE_NO_VMAS;
> +
> +	return XE_BO_VMAS_STATE_DONTNEED;
> +}
> +
> +/**
> + * xe_bo_recompute_purgeable_state() - Recompute BO purgeable state from VMAs
> + * @bo: Buffer object
> + *
> + * Walk all VMAs to determine if BO should be purgeable or not.
> + * Shared BOs require unanimous DONTNEED state from all mappings.
> + * If the BO has no VMAs the existing state is preserved.
> + *
> + * Locking: Caller must hold BO dma-resv lock. When iterating GPUVM lists,
> + * VM lock must also be held (write) to prevent concurrent VMA modifications.
> + * This is satisfied at both call sites:
> + * - xe_vma_destroy(): holds vm->lock write
> + * - madvise_purgeable(): holds vm->lock write (from madvise ioctl path)
> + *
> + * Return: nothing
> + */
> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo)
> +{
> +	enum xe_bo_vmas_purge_state vma_state;
> +
> +	if (!bo)
> +		return;
> +
> +	xe_bo_assert_held(bo);
> +
> +	/*
> +	 * Once purged, always purged. Cannot transition back to WILLNEED.
> +	 * This matches i915 semantics where purged BOs are permanently invalid.
> +	 */
> +	if (bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED)
> +		return;
> +
> +	vma_state = xe_bo_all_vmas_dontneed(bo);
> +
> +	if (vma_state != (enum xe_bo_vmas_purge_state)bo->madv_purgeable &&
> +	    vma_state != XE_BO_VMAS_STATE_NO_VMAS)
> +		xe_bo_set_purgeable_state(bo, (enum xe_madv_purgeable_state)vma_state);
> +}
> +
>  /**
>   * madvise_purgeable - Handle purgeable buffer object advice
>   * @xe: XE device
> @@ -215,8 +326,11 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
>  	for (i = 0; i < num_vmas; i++) {
>  		struct xe_bo *bo = xe_vma_bo(vmas[i]);
>  
> -		if (!bo)
> +		if (!bo) {
> +			/* Purgeable state applies to BOs only, skip non-BO VMAs */
> +			vmas[i]->skip_invalidation = true;
>  			continue;
> +		}
>  
>  		/* BO must be locked before modifying madv state */
>  		xe_bo_assert_held(bo);
> @@ -227,19 +341,31 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
>  		 */
>  		if (xe_bo_is_purged(bo)) {
>  			details->has_purged_bo = true;
> +			vmas[i]->skip_invalidation = true;
>  			continue;
>  		}
>  
>  		switch (op->purge_state_val.val) {
>  		case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> -			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> +			vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_WILLNEED;
> +			vmas[i]->skip_invalidation = true;
> +
> +			xe_bo_recompute_purgeable_state(bo);
>  			break;
>  		case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> -			xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> +			vmas[i]->attr.purgeable_state = XE_MADV_PURGEABLE_DONTNEED;
> +			/*
> +			 * Don't zap PTEs at DONTNEED time -- pages are still
> +			 * alive. The zap happens in xe_bo_move_notify() right
> +			 * before the shrinker frees them.
> +			 */
> +			vmas[i]->skip_invalidation = true;
> +
> +			xe_bo_recompute_purgeable_state(bo);
>  			break;
>  		default:
> -			drm_warn(&vm->xe->drm, "Invalid madvice value = %d\n",
> -				 op->purge_state_val.val);
> +			/* Should never hit - values validated in madvise_args_are_sane() */
> +			xe_assert(vm->xe, 0);
>  			return;
>  		}
>  	}
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
> index b0e1fc445f23..39acd2689ca0 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> @@ -8,8 +8,11 @@
>  
>  struct drm_device;
>  struct drm_file;
> +struct xe_bo;
>  
>  int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
>  			struct drm_file *file);
>  
> +void xe_bo_recompute_purgeable_state(struct xe_bo *bo);
> +
>  #endif
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 69e80c94138a..033cfdd56c95 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -95,6 +95,17 @@ struct xe_vma_mem_attr {
>  	 * same as default_pat_index unless overwritten by madvise.
>  	 */
>  	u16 pat_index;
> +
> +	/**
> +	 * @purgeable_state: Purgeable hint for this VMA mapping
> +	 *
> +	 * Per-VMA purgeable state from madvise. Valid states are WILLNEED (0)
> +	 * or DONTNEED (1). Shared BOs require all VMAs to be DONTNEED before
> +	 * the BO can be purged. PURGED state exists only at BO level.
> +	 *
> +	 * Protected by BO dma-resv lock. Set via DRM_IOCTL_XE_MADVISE.
> +	 */
> +	u32 purgeable_state;
>  };
>  
>  struct xe_vma {
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v7 07/12] drm/xe/madvise: Block imported and exported dma-bufs
2026-03-23 9:30 ` [PATCH v7 07/12] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
@ 2026-03-24 14:13 ` Thomas Hellström
0 siblings, 0 replies; 29+ messages in thread
From: Thomas Hellström @ 2026-03-24 14:13 UTC (permalink / raw)
To: Arvind Yadav, intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray
On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
> Prevent marking imported or exported dma-bufs as purgeable.
> External devices may be accessing these buffers without our
> knowledge, making purging unsafe.
>
> Check drm_gem_is_imported() for buffers created by other
> drivers and obj->dma_buf for buffers exported to other
> drivers. Silently skip these BOs during madvise processing.
>
> This follows drm_gem_shmem's purgeable implementation and
> prevents data corruption from purging actively-used shared
> buffers.
>
> v3:
> - Addresses review feedback from Matt Roper about handling
> imported/exported BOs correctly in the purgeable BO
> implementation.
>
> v4:
> - Check should be added to xe_vm_madvise_purgeable_bo.
>
> v5:
> - Rename xe_bo_is_external_dmabuf() to xe_bo_is_dmabuf_shared()
> for clarity (Thomas)
> - Update comments to clarify why both imports and exports
> are unsafe to purge.
>
> v6:
> - No PTEs to zap for shared dma-bufs.
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm_madvise.c | 38 ++++++++++++++++++++++++++++++
> 1 file changed, 38 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index ed1940da7739..340e83764a76 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -185,6 +185,34 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>  	}
>  }
>  
> +
> +/**
> + * xe_bo_is_dmabuf_shared() - Check if BO is shared via dma-buf
> + * @bo: Buffer object
> + *
> + * Prevent marking imported or exported dma-bufs as purgeable.
> + * For imported BOs, Xe doesn't own the backing store and cannot
> + * safely reclaim pages (exporter or other devices may still be
> + * using them). For exported BOs, external devices may have active
> + * mappings we cannot track.
> + *
> + * Return: true if BO is imported or exported, false otherwise
> + */
> +static bool xe_bo_is_dmabuf_shared(struct xe_bo *bo)
> +{
> +	struct drm_gem_object *obj = &bo->ttm.base;
> +
> +	/* Imported: exporter owns backing store */
> +	if (drm_gem_is_imported(obj))
> +		return true;
> +
> +	/* Exported: external devices may be accessing */
> +	if (obj->dma_buf)
> +		return true;
> +
> +	return false;
> +}
> +
>  /**
>   * enum xe_bo_vmas_purge_state - VMA purgeable state aggregation
>   *
> @@ -234,6 +262,10 @@ static enum xe_bo_vmas_purge_state xe_bo_all_vmas_dontneed(struct xe_bo *bo)
>  
>  	xe_bo_assert_held(bo);
>  
> +	/* Shared dma-bufs cannot be purgeable */
> +	if (xe_bo_is_dmabuf_shared(bo))
> +		return XE_BO_VMAS_STATE_WILLNEED;
> +
>  	drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
>  		drm_gpuvm_bo_for_each_va(gpuva, vm_bo) {
>  			struct xe_vma *vma = gpuva_to_vma(gpuva);
> @@ -335,6 +367,12 @@ static void __maybe_unused madvise_purgeable(struct xe_device *xe,
>  		/* BO must be locked before modifying madv state */
>  		xe_bo_assert_held(bo);
>  
> +		/* Skip shared dma-bufs - no PTEs to zap */
> +		if (xe_bo_is_dmabuf_shared(bo)) {
> +			vmas[i]->skip_invalidation = true;
> +			continue;
> +		}
> +
>  		/*
>  		 * Once purged, always purged. Cannot transition back to WILLNEED.
>  		 * This matches i915 semantics where purged BOs are permanently invalid.
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v7 09/12] drm/xe/dma_buf: Block export of DONTNEED/purged BOs
2026-03-23 9:30 ` [PATCH v7 09/12] drm/xe/dma_buf: Block export " Arvind Yadav
@ 2026-03-24 14:47 ` Thomas Hellström
2026-03-26 2:50 ` Yadav, Arvind
0 siblings, 1 reply; 29+ messages in thread
From: Thomas Hellström @ 2026-03-24 14:47 UTC (permalink / raw)
To: Arvind Yadav, intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray
On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
> Don't allow exporting BOs marked DONTNEED or PURGED as dma-bufs.
> DONTNEED BOs can have their contents discarded at any time, making
> the exported dma-buf unusable for external devices. PURGED BOs have
> no backing store and are permanently invalid.
>
> Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
> -EINVAL for purged BOs (permanent, no backing store).
>
> The export path now checks the BO's purgeable state before creating
> the dma-buf, preventing external devices from accessing memory that
> may be purged at any time.
>
> v6:
> - Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
> with the rest of the series (Thomas, Matt)
>
> v7:
> - Use interruptible lock. (Thomas)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_dma_buf.c | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
> index ea370cd373e9..4edbe9f3c001 100644
> --- a/drivers/gpu/drm/xe/xe_dma_buf.c
> +++ b/drivers/gpu/drm/xe/xe_dma_buf.c
> @@ -223,6 +223,23 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
> if (bo->vm)
> return ERR_PTR(-EPERM);
>
> + /*
> + * Reject exporting purgeable BOs. DONTNEED BOs can be purged
> + * at any time, making the exported dma-buf unusable. Purged BOs
> + * have no backing store and are permanently invalid.
> + */
> + xe_bo_lock(bo, true);
Missing error check — xe_bo_lock() in interruptible mode can fail, so its return value needs to be checked here.
/Thomas
> + if (xe_bo_madv_is_dontneed(bo)) {
> + ret = -EBUSY;
> + goto out_unlock;
> + }
> +
> + if (xe_bo_is_purged(bo)) {
> + ret = -EINVAL;
> + goto out_unlock;
> + }
> + xe_bo_unlock(bo);
> +
> ret = ttm_bo_setup_export(&bo->ttm, &ctx);
> if (ret)
> return ERR_PTR(ret);
> @@ -232,6 +249,10 @@ struct dma_buf *xe_gem_prime_export(struct drm_gem_object *obj, int flags)
> buf->ops = &xe_dmabuf_ops;
>
> return buf;
> +
> +out_unlock:
> + xe_bo_unlock(bo);
> + return ERR_PTR(ret);
> }
>
> static struct drm_gem_object *
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v7 10/12] drm/xe/bo: Add purgeable shrinker state helpers
2026-03-23 9:30 ` [PATCH v7 10/12] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
@ 2026-03-24 14:51 ` Thomas Hellström
0 siblings, 0 replies; 29+ messages in thread
From: Thomas Hellström @ 2026-03-24 14:51 UTC (permalink / raw)
To: Arvind Yadav, intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray
On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
> Encapsulate TTM purgeable flag updates and shrinker page accounting
> into helper functions to prevent desynchronization between the TTM
> tt->purgeable flag and the shrinker's page bucket counters.
>
> Without these helpers, direct manipulation of xe_ttm_tt->purgeable
> risks forgetting to update the corresponding shrinker counters,
> leading to incorrect memory pressure calculations.
>
> Update purgeable BO state to PURGED after successful shrinker purge
> for DONTNEED BOs.
>
> v4:
> - @madv_purgeable atomic_t → u32 change across all relevant
> patches (Matt)
>
> v5:
> - Update purgeable BO state to PURGED after a successful shrinker
> purge for DONTNEED BOs.
> - Split ghost BO and zero-refcount handling in xe_bo_shrink()
> (Thomas)
>
> v6:
> - Create separate patch for 'Split ghost BO and zero-refcount
> handling'. (Thomas)
>
> v7:
> - Merge xe_bo_set_purgeable_shrinker() and xe_bo_clear_purgeable_shrinker()
>   into a single static helper xe_bo_set_purgeable_shrinker(bo, new_state)
>   called automatically from xe_bo_set_purgeable_state(). Callers no longer
>   need to manage shrinker accounting separately. (Thomas)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 43 +++++++++++++++++++++++++++++++++++++-
> 1 file changed, 42 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 83a1d1ca6cc6..85e42e785ebe 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -835,6 +835,42 @@ static int xe_bo_move_notify(struct xe_bo *bo,
> return 0;
> }
>
> +/**
> + * xe_bo_set_purgeable_shrinker() - Update shrinker accounting for purgeable state
> + * @bo: Buffer object
> + * @new_state: New purgeable state being set
> + *
> + * Transfers pages between shrinkable and purgeable buckets when the BO
> + * purgeable state changes. Called automatically from xe_bo_set_purgeable_state().
> + */
> +static void xe_bo_set_purgeable_shrinker(struct xe_bo *bo,
> + enum xe_madv_purgeable_state new_state)
> +{
> + struct ttm_buffer_object *ttm_bo = &bo->ttm;
> + struct ttm_tt *tt = ttm_bo->ttm;
> + struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> + struct xe_ttm_tt *xe_tt;
> + long tt_pages;
> +
> + xe_bo_assert_held(bo);
> +
> + if (!tt || !ttm_tt_is_populated(tt))
> + return;
> +
> + xe_tt = container_of(tt, struct xe_ttm_tt, ttm);
> + tt_pages = tt->num_pages;
> +
> + if (!xe_tt->purgeable && new_state == XE_MADV_PURGEABLE_DONTNEED) {
> + xe_tt->purgeable = true;
> + /* Transfer pages from shrinkable to purgeable count */
> + xe_shrinker_mod_pages(xe->mem.shrinker, -tt_pages, tt_pages);
> + } else if (xe_tt->purgeable && new_state == XE_MADV_PURGEABLE_WILLNEED) {
> + xe_tt->purgeable = false;
> + /* Transfer pages from purgeable to shrinkable count */
> + xe_shrinker_mod_pages(xe->mem.shrinker, tt_pages, -tt_pages);
> + }
> +}
> +
> /**
> * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
> * @bo: Buffer object
> @@ -842,7 +878,8 @@ static int xe_bo_move_notify(struct xe_bo *bo,
> *
> * Sets the purgeable state with lockdep assertions and validates state
> * transitions. Once a BO is PURGED, it cannot transition to any other state.
> - * Invalid transitions are caught with xe_assert().
> + * Invalid transitions are caught with xe_assert(). Shrinker page accounting
> + * is updated automatically.
> */
> void xe_bo_set_purgeable_state(struct xe_bo *bo,
> enum xe_madv_purgeable_state new_state)
> @@ -861,6 +898,7 @@ void xe_bo_set_purgeable_state(struct xe_bo *bo,
> new_state != XE_MADV_PURGEABLE_PURGED));
>
> bo->madv_purgeable = new_state;
> + xe_bo_set_purgeable_shrinker(bo, new_state);
> }
>
> /**
> @@ -1243,6 +1281,9 @@ long xe_bo_shrink(struct ttm_operation_ctx *ctx, struct ttm_buffer_object *bo,
> lret = xe_bo_move_notify(xe_bo, ctx);
> if (!lret)
> lret = xe_bo_shrink_purge(ctx, bo, scanned);
> + if (lret > 0 && xe_bo_madv_is_dontneed(xe_bo))
> + xe_bo_set_purgeable_state(xe_bo, XE_MADV_PURGEABLE_PURGED);
> goto out_unref;
> }
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v7 03/12] drm/xe/madvise: Implement purgeable buffer object support
2026-03-23 9:30 ` [PATCH v7 03/12] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
@ 2026-03-25 15:01 ` Thomas Hellström
2026-03-26 4:02 ` Yadav, Arvind
0 siblings, 1 reply; 29+ messages in thread
From: Thomas Hellström @ 2026-03-25 15:01 UTC (permalink / raw)
To: Arvind Yadav, intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray
On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
> This allows userspace applications to provide memory usage hints to
> the kernel for better memory management under pressure:
>
> Add the core implementation for purgeable buffer objects, enabling memory
> reclamation of user-designated DONTNEED buffers during eviction.
>
> This patch implements the purge operation and state machine
> transitions:
>
> Purgeable States (from xe_madv_purgeable_state):
> - WILLNEED (0): BO should be retained, actively used
> - DONTNEED (1): BO eligible for purging, not currently needed
> - PURGED (2): BO backing store reclaimed, permanently invalid
>
> Design Rationale:
> - Async TLB invalidation via trigger_rebind (no blocking
> xe_vm_invalidate_vma)
> - i915 compatibility: retained field, "once purged always purged"
> semantics
> - Shared BO protection prevents multi-process memory corruption
> - Scratch PTE reuse avoids new infrastructure, safe for fault mode
>
> Note: The madvise_purgeable() function is implemented but not hooked
> into
> the IOCTL handler (madvise_funcs[] entry is NULL) to maintain
> bisectability.
> The feature will be enabled in the final patch when all supporting
> infrastructure (shrinker, per-VMA tracking) is complete.
>
> v2:
> - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas
> Hellström)
> - Add NULL rebind with scratch PTEs for fault mode (Thomas
> Hellström)
> - Implement i915-compatible retained field logic (Thomas Hellström)
> - Skip BO validation for purged BOs in page fault handler (crash
> fix)
> - Add scratch VM check in page fault path (non-scratch VMs fail
> fault)
> - Force clear_pt for non-scratch VMs to avoid phys addr 0 mapping
> (review fix)
> - Add !is_purged check to resource cursor setup to prevent stale
> access
>
> v3:
> - Rebase as xe_gt_pagefault.c is gone upstream and replaced
>   with xe_pagefault.c (Matthew Brost)
> - Xe-specific warn on (Matthew Brost)
> - Call helpers for madv_purgeable access (Matthew Brost)
> - Remove bo NULL check (Matthew Brost)
> - Use xe_bo_assert_held instead of dma assert (Matthew Brost)
> - Move the xe_bo_is_purged check under the dma-resv lock (Matt)
> - Drop is_purged from xe_pt_stage_bind_entry and just set is_null to true
>   for purged BOs; rename s/is_null/is_null_or_purged (Matt)
> - UAPI rule should not be changed. (Matthew Brost)
> - Make 'retained' a userptr (Matthew Brost)
>
> v4:
> - @madv_purgeable atomic_t → u32 change across all relevant patches
> (Matt)
>
> v5:
> - Introduce xe_bo_set_purgeable_state() helper (void return) to
> centralize
> madv_purgeable updates with xe_bo_assert_held() and state
> transition
> validation using explicit enum checks (no transition out of
> PURGED) (Matt)
> - Make xe_ttm_bo_purge() return int and propagate failures from
> xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g.
> no_wait_gpu
> paths) rather than silently ignoring (Matt)
> - Replace drm_WARN_ON with xe_assert for better Xe-specific
> assertions (Matt)
> - Hook purgeable handling into
> madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
> instead of special-case path in xe_vm_madvise_ioctl() (Matt)
> - Track purgeable retained return via xe_madvise_details and
> perform
> copy_to_user() from xe_madvise_details_fini() after locks are
> dropped (Matt)
> - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
> __maybe_unused on madvise_purgeable() to maintain bisectability
> until
> shrinker integration is complete in final patch (Matt)
> - Use put_user() instead of copy_to_user() for single u32 retained
> value (Thomas)
> - Return -EFAULT from ioctl if put_user() fails (Thomas)
> - Validate userspace initialized retained to 0 before ioctl,
> ensuring safe
> default (0 = "assume purged") if put_user() fails (Thomas)
> - Refactor error handling: separate fallible put_user from
> infallible cleanup
> - xe_madvise_purgeable_retained_to_user(): separate helper for
> fallible put_user
> - Call put_user() after releasing all locks to avoid circular
> dependencies
> - Use xe_bo_move_notify() instead of xe_bo_trigger_rebind() in
> xe_ttm_bo_purge()
> for proper abstraction - handles vunmap, dma-buf notifications,
> and VRAM
> userfault cleanup (Thomas)
> - Fix LRU crash while running shrink test
> - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
>
> v6:
> - xe_bo_move_notify() must be called *before* ttm_bo_validate().
> (Thomas)
> - Block GPU page faults (fault-mode VMs) for DONTNEED bo's (Thomas,
> Matt)
> - Rename retained to retained_ptr. (Jose)
>
> v7 Changes:
> - Fix engine reset from EU overfetch in scratch VMs: xe_pagefault_begin()
>   and xe_pagefault_service() now return 0 instead of -EACCES/-EINVAL for
>   DONTNEED/purged BOs and missing VMAs so stale accesses hit scratch PTEs.
> - Fix engine memory CAT errors when Mesa uses DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE:
>   accept scratch VMs in xe_pagefault_asid_to_vm() via '|| xe_vm_has_scratch(vm)'.
> - Skip validate/migrate/rebind for DONTNEED/purged BOs in xe_pagefault_begin()
>   using a bool *skip_rebind out-parameter. Scratch VMs ACK the fault and fall
>   back to scratch PTEs; non-scratch VMs return -EACCES.
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 107 ++++++++++++++++++++---
> drivers/gpu/drm/xe/xe_bo.h | 2 +
> drivers/gpu/drm/xe/xe_pagefault.c | 25 +++++-
> drivers/gpu/drm/xe/xe_pt.c | 40 +++++++--
> drivers/gpu/drm/xe/xe_vm.c | 20 ++++-
> drivers/gpu/drm/xe/xe_vm_madvise.c | 136 +++++++++++++++++++++++++++++
> 6 files changed, 305 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 22179b2df85c..b6055bb4c578 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -835,6 +835,84 @@ static int xe_bo_move_notify(struct xe_bo *bo,
> return 0;
> }
>
> +/**
> + * xe_bo_set_purgeable_state() - Set BO purgeable state with validation
> + * @bo: Buffer object
> + * @new_state: New purgeable state
> + *
> + * Sets the purgeable state with lockdep assertions and validates state
> + * transitions. Once a BO is PURGED, it cannot transition to any other state.
> + * Invalid transitions are caught with xe_assert().
> + */
> +void xe_bo_set_purgeable_state(struct xe_bo *bo,
> + enum xe_madv_purgeable_state new_state)
> +{
> + struct xe_device *xe = xe_bo_device(bo);
> +
> + xe_bo_assert_held(bo);
> +
> + /* Validate state is one of the known values */
> + xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
> + new_state == XE_MADV_PURGEABLE_DONTNEED ||
> + new_state == XE_MADV_PURGEABLE_PURGED);
> +
> + /* Once purged, always purged - cannot transition out */
> + xe_assert(xe, !(bo->madv_purgeable == XE_MADV_PURGEABLE_PURGED &&
> + new_state != XE_MADV_PURGEABLE_PURGED));
> +
> + bo->madv_purgeable = new_state;
> +}
> +
> +/**
> + * xe_ttm_bo_purge() - Purge buffer object backing store
> + * @ttm_bo: The TTM buffer object to purge
> + * @ctx: TTM operation context
> + *
> + * This function purges the backing store of a BO marked as DONTNEED and
> + * triggers rebind to invalidate stale GPU mappings. For fault-mode VMs,
> + * this zaps the PTEs. The next GPU access will trigger a page fault and
> + * perform NULL rebind (scratch pages or clear PTEs based on VM config).
> + *
> + * Return: 0 on success, negative error code on failure
> + */
> +static int xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
> +{
> + struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
> + struct ttm_placement place = {};
> + int ret;
> +
> + xe_bo_assert_held(bo);
> +
> + if (!ttm_bo->ttm)
> + return 0;
> +
> + if (!xe_bo_madv_is_dontneed(bo))
> + return 0;
> +
> + /*
> + * Use the standard pre-move hook so we share the same
> cleanup/invalidate
> + * path as migrations: drop any CPU vmap and schedule the
> necessary GPU
> + * unbind/rebind work.
> + *
> + * This must be called before ttm_bo_validate() frees the
> pages.
> + * May fail in no-wait contexts (fault/shrinker) or if the
> BO is
> + * pinned. Keep state unchanged on failure so we don't end
> up "PURGED"
> + * with stale mappings.
> + */
> + ret = xe_bo_move_notify(bo, ctx);
> + if (ret)
> + return ret;
> +
> + ret = ttm_bo_validate(ttm_bo, &place, ctx);
> + if (ret)
> + return ret;
> +
> + /* Commit the state transition only once invalidation was queued */
> + xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_PURGED);
> +
> + return 0;
> +}
> +
> static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
> struct ttm_operation_ctx *ctx,
> struct ttm_resource *new_mem,
> @@ -854,6 +932,20 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
> ttm && ttm_tt_is_populated(ttm)) ? true : false;
> int ret = 0;
>
> + /*
> + * Purge only non-shared BOs explicitly marked DONTNEED by userspace.
> + * The move_notify callback will handle invalidation asynchronously.
> + */
> + if (evict && xe_bo_madv_is_dontneed(bo)) {
> + ret = xe_ttm_bo_purge(ttm_bo, ctx);
> + if (ret)
> + return ret;
> +
> + /* Free the unused eviction destination resource */
> + ttm_resource_free(ttm_bo, &new_mem);
> + return 0;
> + }
> +
> /* Bo creation path, moving to system or TT. */
> if ((!old_mem && ttm) && !handle_system_ccs) {
> if (new_mem->mem_type == XE_PL_TT)
> @@ -1603,18 +1695,6 @@ static void xe_ttm_bo_delete_mem_notify(struct ttm_buffer_object *ttm_bo)
> }
> }
>
> -static void xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct ttm_operation_ctx *ctx)
> -{
> - struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
> -
> - if (ttm_bo->ttm) {
> - struct ttm_placement place = {};
> - int ret = ttm_bo_validate(ttm_bo, &place, ctx);
> -
> - drm_WARN_ON(&xe->drm, ret);
> - }
> -}
> -
> static void xe_ttm_bo_swap_notify(struct ttm_buffer_object *ttm_bo)
> {
> struct ttm_operation_ctx ctx = {
> @@ -2195,6 +2275,9 @@ struct xe_bo *xe_bo_init_locked(struct xe_device *xe, struct xe_bo *bo,
> #endif
> INIT_LIST_HEAD(&bo->vram_userfault_link);
>
> + /* Initialize purge advisory state */
> + bo->madv_purgeable = XE_MADV_PURGEABLE_WILLNEED;
> +
> drm_gem_private_object_init(&xe->drm, &bo->ttm.base, size);
>
> if (resv) {
> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
> index fb5541bdf602..653851d47aa6 100644
> --- a/drivers/gpu/drm/xe/xe_bo.h
> +++ b/drivers/gpu/drm/xe/xe_bo.h
> @@ -271,6 +271,8 @@ static inline bool xe_bo_madv_is_dontneed(struct xe_bo *bo)
> return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
> }
>
> +void xe_bo_set_purgeable_state(struct xe_bo *bo, enum xe_madv_purgeable_state new_state);
> +
> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
> {
> if (likely(bo)) {
> diff --git a/drivers/gpu/drm/xe/xe_pagefault.c b/drivers/gpu/drm/xe/xe_pagefault.c
> index ea4857acf28d..415253631e6f 100644
> --- a/drivers/gpu/drm/xe/xe_pagefault.c
> +++ b/drivers/gpu/drm/xe/xe_pagefault.c
> @@ -46,7 +46,8 @@ static int xe_pagefault_entry_size(void)
> }
>
> static int xe_pagefault_begin(struct drm_exec *exec, struct xe_vma *vma,
> - struct xe_vram_region *vram, bool need_vram_move)
> + struct xe_vram_region *vram, bool need_vram_move,
> + bool *skip_rebind)
> {
> struct xe_bo *bo = xe_vma_bo(vma);
> struct xe_vm *vm = xe_vma_vm(vma);
> @@ -59,6 +60,20 @@ static int xe_pagefault_begin(struct drm_exec *exec, struct xe_vma *vma,
> if (!bo)
> return 0;
>
> + /*
> + * Under dma-resv lock: reject rebind for DONTNEED/purged BOs.
> + * Validating or migrating would repopulate pages we want the shrinker
> + * to reclaim, and rebinding would undo the GPU PTE zap.
> + * Scratch VMs absorb the access via scratch PTEs (skip_rebind=true);
> + * non-scratch VMs have no fallback so fail the fault.
> + */
> + if (unlikely(xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo))) {
> + if (!xe_vm_has_scratch(vm))
> + return -EACCES;
> + *skip_rebind = true;
So what happens here if we have a scratch VM, then do a lazy VM_BIND,
then madvise(DONTNEED) and then hit a page-fault. If we then
skip_rebind, aren't we going to hit the same pagefault again and again?
> + return 0;
> + }
> +
> return need_vram_move ? xe_bo_migrate(bo, vram->placement, NULL, exec) :
> xe_bo_validate(bo, vm, true, exec);
> }
> @@ -103,11 +118,13 @@ static int xe_pagefault_handle_vma(struct xe_gt *gt, struct xe_vma *vma,
> /* Lock VM and BOs dma-resv */
> xe_validation_ctx_init(&ctx, &vm->xe->val, &exec, (struct xe_val_flags) {});
> drm_exec_until_all_locked(&exec) {
> + bool skip_rebind = false;
> +
> err = xe_pagefault_begin(&exec, vma, tile->mem.vram,
> - needs_vram == 1);
> + needs_vram == 1, &skip_rebind);
> drm_exec_retry_on_contention(&exec);
> xe_validation_retry_on_oom(&ctx, &err);
> - if (err)
> + if (err || skip_rebind)
> goto unlock_dma_resv;
>
> /* Bind VMA only to the GT that has faulted */
> @@ -145,7 +162,7 @@ static struct xe_vm *xe_pagefault_asid_to_vm(struct xe_device *xe, u32 asid)
>
> down_read(&xe->usm.lock);
> vm = xa_load(&xe->usm.asid_to_vm, asid);
> - if (vm && xe_vm_in_fault_mode(vm))
> + if (vm && (xe_vm_in_fault_mode(vm) || xe_vm_has_scratch(vm)))
> xe_vm_get(vm);
> else
> vm = ERR_PTR(-EINVAL);
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 2d9ce2c4cb4f..08f40701f654 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -531,20 +531,26 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
> /* Is this a leaf entry ?*/
> if (level == 0 || xe_pt_hugepte_possible(addr, next, level,
> xe_walk)) {
> struct xe_res_cursor *curs = xe_walk->curs;
> - bool is_null = xe_vma_is_null(xe_walk->vma);
> - bool is_vram = is_null ? false : xe_res_is_vram(curs);
> + struct xe_bo *bo = xe_vma_bo(xe_walk->vma);
> + bool is_null_or_purged = xe_vma_is_null(xe_walk->vma) ||
> + (bo && xe_bo_is_purged(bo));
> + bool is_vram = is_null_or_purged ? false : xe_res_is_vram(curs);
>
> XE_WARN_ON(xe_walk->va_curs_start != addr);
>
> if (xe_walk->clear_pt) {
> pte = 0;
> } else {
> - pte = vm->pt_ops->pte_encode_vma(is_null ? 0 :
> + /*
> + * For purged BOs, treat like null VMAs - pass address 0.
> + * The pte_encode_vma will set XE_PTE_NULL flag for scratch mapping.
> + */
> + pte = vm->pt_ops->pte_encode_vma(is_null_or_purged ? 0 :
> xe_res_dma(curs) + xe_walk->dma_offset,
> xe_walk->vma, pat_index, level);
> - if (!is_null)
> + if (!is_null_or_purged)
> pte |= is_vram ? xe_walk->default_vram_pte :
> xe_walk->default_system_pte;
>
> @@ -568,7 +574,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
> if (unlikely(ret))
> return ret;
>
> - if (!is_null && !xe_walk->clear_pt)
> + if (!is_null_or_purged && !xe_walk->clear_pt)
> xe_res_next(curs, next - addr);
> xe_walk->va_curs_start = next;
> xe_walk->vma->gpuva.flags |= (XE_VMA_PTE_4K << level);
> @@ -721,6 +727,26 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
> };
> struct xe_pt *pt = vm->pt_root[tile->id];
> int ret;
> + bool is_purged = false;
> +
> + /*
> + * Check if BO is purged:
> + * - Scratch VMs: Use scratch PTEs (XE_PTE_NULL) for safe zero reads
> + * - Non-scratch VMs: Clear PTEs to zero (non-present) to avoid mapping to phys addr 0
> + *
> + * For non-scratch VMs, we force clear_pt=true so leaf PTEs become completely
> + * zero instead of creating a PRESENT mapping to physical address 0.
> + */
> + if (bo && xe_bo_is_purged(bo)) {
> + is_purged = true;
> +
> + /*
> + * For non-scratch VMs, a NULL rebind should use zero PTEs
> + * (non-present), not a present PTE to phys 0.
> + */
> + if (!xe_vm_has_scratch(vm))
> + xe_walk.clear_pt = true;
> + }
>
> if (range) {
> /* Move this entire thing to xe_svm.c? */
> @@ -756,11 +782,11 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
> }
>
> xe_walk.default_vram_pte |= XE_PPGTT_PTE_DM;
> - xe_walk.dma_offset = bo ? vram_region_gpu_offset(bo->ttm.resource) : 0;
> + xe_walk.dma_offset = (bo && !is_purged) ? vram_region_gpu_offset(bo->ttm.resource) : 0;
> if (!range)
> xe_bo_assert_held(bo);
>
> - if (!xe_vma_is_null(vma) && !range) {
> + if (!xe_vma_is_null(vma) && !range && !is_purged) {
> if (xe_vma_is_userptr(vma))
> xe_res_first_dma(to_userptr_vma(vma)->userptr.pages.dma_addr, 0,
> xe_vma_size(vma), &curs);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 5572e12c2a7e..a0ade67d616e 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -326,6 +326,7 @@ void xe_vm_kill(struct xe_vm *vm, bool unlocked)
> static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
> {
> struct xe_vm *vm = gpuvm_to_vm(vm_bo->vm);
> + struct xe_bo *bo = gem_to_xe_bo(vm_bo->obj);
> struct drm_gpuva *gpuva;
> int ret;
>
> @@ -334,10 +335,16 @@ static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
> list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind,
> &vm->rebind_list);
>
> + /* Skip re-populating purged BOs, rebind maps scratch pages. */
> + if (xe_bo_is_purged(bo)) {
> + vm_bo->evicted = false;
> + return 0;
> + }
> +
> if (!try_wait_for_completion(&vm->xe->pm_block))
> return -EAGAIN;
>
> - ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false, exec);
> + ret = xe_bo_validate(bo, vm, false, exec);
> if (ret)
> return ret;
>
> @@ -1358,6 +1365,9 @@ static u64 xelp_pte_encode_bo(struct xe_bo *bo, u64 bo_offset,
> static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
> u16 pat_index, u32 pt_level)
> {
> + struct xe_bo *bo = xe_vma_bo(vma);
> + struct xe_vm *vm = xe_vma_vm(vma);
> +
> pte |= XE_PAGE_PRESENT;
>
> if (likely(!xe_vma_read_only(vma)))
> @@ -1366,7 +1376,13 @@ static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
> pte |= pte_encode_pat_index(pat_index, pt_level);
> pte |= pte_encode_ps(pt_level);
>
> - if (unlikely(xe_vma_is_null(vma)))
> + /*
> + * NULL PTEs redirect to scratch page (return zeros on read).
> + * Set for: 1) explicit null VMAs, 2) purged BOs on scratch VMs.
> + * Never set NULL flag without scratch page - causes undefined behavior.
> + */
> + if (unlikely(xe_vma_is_null(vma) ||
> + (bo && xe_bo_is_purged(bo) && xe_vm_has_scratch(vm))))
> pte |= XE_PTE_NULL;
>
> return pte;
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 869db304d96d..ffba2e41c539 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -26,6 +26,8 @@ struct xe_vmas_in_madvise_range {
> /**
> * struct xe_madvise_details - Argument to madvise_funcs
> * @dpagemap: Reference-counted pointer to a struct drm_pagemap.
> + * @has_purged_bo: Track if any BO was purged (for purgeable state)
> + * @retained_ptr: User pointer for retained value (for purgeable state)
> *
> * The madvise IOCTL handler may, in addition to the user-space
> * args, have additional info to pass into the madvise_func that
> @@ -34,6 +36,8 @@ struct xe_vmas_in_madvise_range {
> */
> struct xe_madvise_details {
> struct drm_pagemap *dpagemap;
> + bool has_purged_bo;
> + u64 retained_ptr;
> };
>
> static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
> @@ -180,6 +184,67 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
> }
> }
>
> +/**
> + * madvise_purgeable - Handle purgeable buffer object advice
> + * @xe: XE device
> + * @vm: VM
> + * @vmas: Array of VMAs
> + * @num_vmas: Number of VMAs
> + * @op: Madvise operation
> + * @details: Madvise details for return values
> + *
> + * Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was purged
> + * in details->has_purged_bo for later copy to userspace.
> + *
> + * Note: Marked __maybe_unused until hooked into madvise_funcs[] in the
> + * final patch to maintain bisectability. The NULL placeholder in the
> + * array ensures proper -EINVAL return for userspace until all supporting
> + * infrastructure (shrinker, per-VMA tracking) is complete.
> + */
> +static void __maybe_unused madvise_purgeable(struct xe_device *xe,
> + struct xe_vm *vm,
> + struct xe_vma **vmas,
> + int num_vmas,
> + struct drm_xe_madvise *op,
> + struct xe_madvise_details *details)
> +{
> + int i;
> +
> + xe_assert(vm->xe, op->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE);
> +
> + for (i = 0; i < num_vmas; i++) {
> + struct xe_bo *bo = xe_vma_bo(vmas[i]);
> +
> + if (!bo)
> + continue;
> +
> + /* BO must be locked before modifying madv state */
> + xe_bo_assert_held(bo);
> +
> + /*
> + * Once purged, always purged. Cannot transition back to WILLNEED.
> + * This matches i915 semantics where purged BOs are permanently invalid.
> + */
> + if (xe_bo_is_purged(bo)) {
> + details->has_purged_bo = true;
> + continue;
> + }
> +
> + switch (op->purge_state_val.val) {
> + case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
> + xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
> + break;
> + case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
> + xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
> + break;
> + default:
> + drm_warn(&vm->xe->drm, "Invalid madvice value = %d\n",
Please use either "madvise" with an 's' or "advice".
Thanks,
Thomas
> + op->purge_state_val.val);
> + return;
> + }
> + }
> +}
> +
> typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise *op,
> @@ -189,6 +254,12 @@ static const madvise_func madvise_funcs[] = {
> [DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
> [DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
> [DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
> + /*
> + * Purgeable support implemented but not enabled yet to maintain
> + * bisectability. Will be set to madvise_purgeable() in final patch
> + * when all infrastructure (shrinker, VMA tracking) is complete.
> + */
> + [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
> };
>
> static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start,
> u64 end)
> @@ -319,6 +390,19 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
> return false;
> break;
> }
> + case DRM_XE_VMA_ATTR_PURGEABLE_STATE:
> + {
> + u32 val = args->purge_state_val.val;
> +
> + if (XE_IOCTL_DBG(xe, !(val == DRM_XE_VMA_PURGEABLE_STATE_WILLNEED ||
> + val == DRM_XE_VMA_PURGEABLE_STATE_DONTNEED)))
> + return false;
> +
> + if (XE_IOCTL_DBG(xe, args->purge_state_val.pad))
> + return false;
> +
> + break;
> + }
> default:
> if (XE_IOCTL_DBG(xe, 1))
> return false;
> @@ -337,6 +421,12 @@ static int xe_madvise_details_init(struct xe_vm *vm, const struct drm_xe_madvise
>
> memset(details, 0, sizeof(*details));
>
> + /* Store retained pointer for purgeable state */
> + if (args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE) {
> + details->retained_ptr = args->purge_state_val.retained_ptr;
> + return 0;
> + }
> +
> if (args->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC) {
> int fd = args->preferred_mem_loc.devmem_fd;
> struct drm_pagemap *dpagemap;
> @@ -365,6 +455,21 @@ static void xe_madvise_details_fini(struct xe_madvise_details *details)
> drm_pagemap_put(details->dpagemap);
> }
>
> +static int xe_madvise_purgeable_retained_to_user(const struct xe_madvise_details *details)
> +{
> + u32 retained;
> +
> + if (!details->retained_ptr)
> + return 0;
> +
> + retained = !details->has_purged_bo;
> +
> + if (put_user(retained, (u32 __user *)u64_to_user_ptr(details->retained_ptr)))
> + return -EFAULT;
> +
> + return 0;
> +}
> +
> static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma
> **vmas,
> int num_vmas, u32 atomic_val)
> {
> @@ -422,6 +527,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> struct xe_vm *vm;
> struct drm_exec exec;
> int err, attr_type;
> + bool do_retained;
>
> vm = xe_vm_lookup(xef, args->vm_id);
> if (XE_IOCTL_DBG(xe, !vm))
> @@ -432,6 +538,25 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> goto put_vm;
> }
>
> + /* Cache whether we need to write retained, and validate it's initialized to 0 */
> + do_retained = args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE &&
> + args->purge_state_val.retained_ptr;
> + if (do_retained) {
> + u32 retained;
> + u32 __user *retained_ptr;
> +
> + retained_ptr = u64_to_user_ptr(args->purge_state_val.retained_ptr);
> + if (get_user(retained, retained_ptr)) {
> + err = -EFAULT;
> + goto put_vm;
> + }
> +
> + if (XE_IOCTL_DBG(xe, retained != 0)) {
> + err = -EINVAL;
> + goto put_vm;
> + }
> + }
> +
> xe_svm_flush(vm);
>
> err = down_write_killable(&vm->lock);
> @@ -487,6 +612,13 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> }
>
> attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
> +
> + /* Ensure the madvise function exists for this type */
> + if (!madvise_funcs[attr_type]) {
> + err = -EINVAL;
> + goto err_fini;
> + }
> +
> madvise_funcs[attr_type](xe, vm, madvise_range.vmas,
> madvise_range.num_vmas, args, &details);
>
> @@ -505,6 +637,10 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> xe_madvise_details_fini(&details);
> unlock_vm:
> up_write(&vm->lock);
> +
> + /* Write retained value to user after releasing all locks */
> + if (!err && do_retained)
> + err = xe_madvise_purgeable_retained_to_user(&details);
> put_vm:
> xe_vm_put(vm);
> return err;
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v7 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs
2026-03-23 9:30 ` [PATCH v7 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs Arvind Yadav
@ 2026-03-26 1:33 ` Matthew Brost
2026-03-26 2:49 ` Yadav, Arvind
0 siblings, 1 reply; 29+ messages in thread
From: Matthew Brost @ 2026-03-26 1:33 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom
On Mon, Mar 23, 2026 at 03:00:57PM +0530, Arvind Yadav wrote:
> Don't allow new CPU mmaps to BOs marked DONTNEED or PURGED.
> DONTNEED BOs can have their contents discarded at any time, making
> CPU access undefined behavior. PURGED BOs have no backing store and
> are permanently invalid.
>
> Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
> -EINVAL for purged BOs (permanent, no backing store).
>
> The mmap offset ioctl now checks the BO's purgeable state before
> allowing userspace to establish a new CPU mapping. This prevents
> the race where userspace gets a valid offset but the BO is purged
> before actual faulting begins.
>
> Existing mmaps (established before DONTNEED) may still work until
> pages are purged, at which point CPU faults fail with SIGBUS.
>
> v6:
> - Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
> with the rest of the series (Thomas, Matt)
>
> v7:
> - Move purgeable check from xe_gem_mmap_offset_ioctl() into a new
> xe_gem_object_mmap() callback that wraps drm_gem_ttm_mmap(). (Thomas)
> - Use an interruptible lock. (Thomas)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 26 ++++++++++++++++++++++++--
> 1 file changed, 24 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index da18b43650e3..83a1d1ca6cc6 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -2165,10 +2165,32 @@ static const struct vm_operations_struct xe_gem_vm_ops = {
> .access = xe_bo_vm_access,
> };
>
> +static int xe_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +{
> + struct xe_bo *bo = gem_to_xe_bo(obj);
> + int err = 0;
> +
> + /*
> + * Reject mmap of purgeable BOs. DONTNEED BOs can be purged
> + * at any time, making CPU access undefined behavior. Purged BOs have
> + * no backing store and are permanently invalid.
> + */
> + xe_bo_lock(bo, true);
You need to check the return of xe_bo_lock if the 2nd argument is true
as it can fail. On failure, kick the error code to the caller.
> + if (xe_bo_madv_is_dontneed(bo))
> + err = -EBUSY;
> + else if (xe_bo_is_purged(bo))
> + err = -EINVAL;
> + xe_bo_unlock(bo);
> + if (err)
> + return err;
> +
> + return drm_gem_ttm_mmap(obj, vma);
> +}
> +
> static const struct drm_gem_object_funcs xe_gem_object_funcs = {
> .free = xe_gem_object_free,
> .close = xe_gem_object_close,
> - .mmap = drm_gem_ttm_mmap,
> + .mmap = xe_gem_object_mmap,
> .export = xe_gem_prime_export,
> .vm_ops = &xe_gem_vm_ops,
> };
> @@ -3427,8 +3449,8 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
>
> /* The mmap offset was set up at BO allocation time. */
> args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
> + drm_gem_object_put(gem_obj);
>
> - xe_bo_put(gem_to_xe_bo(gem_obj));
Looks unrelated.
Matt
> return 0;
> }
>
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v7 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs
2026-03-26 1:33 ` Matthew Brost
@ 2026-03-26 2:49 ` Yadav, Arvind
0 siblings, 0 replies; 29+ messages in thread
From: Yadav, Arvind @ 2026-03-26 2:49 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom
On 26-03-2026 07:03, Matthew Brost wrote:
> On Mon, Mar 23, 2026 at 03:00:57PM +0530, Arvind Yadav wrote:
>> Don't allow new CPU mmaps to BOs marked DONTNEED or PURGED.
>> DONTNEED BOs can have their contents discarded at any time, making
>> CPU access undefined behavior. PURGED BOs have no backing store and
>> are permanently invalid.
>>
>> Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
>> -EINVAL for purged BOs (permanent, no backing store).
>>
>> The mmap offset ioctl now checks the BO's purgeable state before
>> allowing userspace to establish a new CPU mapping. This prevents
>> the race where userspace gets a valid offset but the BO is purged
>> before actual faulting begins.
>>
>> Existing mmaps (established before DONTNEED) may still work until
>> pages are purged, at which point CPU faults fail with SIGBUS.
>>
>> v6:
>> - Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
>> with the rest of the series (Thomas, Matt)
>>
>> v7:
>> - Move purgeable check from xe_gem_mmap_offset_ioctl() into a new
>> xe_gem_object_mmap() callback that wraps drm_gem_ttm_mmap(). (Thomas)
>> - Use an interruptible lock. (Thomas)
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_bo.c | 26 ++++++++++++++++++++++++--
>> 1 file changed, 24 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index da18b43650e3..83a1d1ca6cc6 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -2165,10 +2165,32 @@ static const struct vm_operations_struct xe_gem_vm_ops = {
>> .access = xe_bo_vm_access,
>> };
>>
>> +static int xe_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>> +{
>> + struct xe_bo *bo = gem_to_xe_bo(obj);
>> + int err = 0;
>> +
>> + /*
>> + * Reject mmap of purgeable BOs. DONTNEED BOs can be purged
>> + * at any time, making CPU access undefined behavior. Purged BOs have
>> + * no backing store and are permanently invalid.
>> + */
>> + xe_bo_lock(bo, true);
> You need to check the return of xe_bo_lock if the 2nd argument is true
> as it can fail. On failure, kick the error code to the caller.
Noted,
>
>> + if (xe_bo_madv_is_dontneed(bo))
>> + err = -EBUSY;
>> + else if (xe_bo_is_purged(bo))
>> + err = -EINVAL;
>> + xe_bo_unlock(bo);
>> + if (err)
>> + return err;
>> +
>> + return drm_gem_ttm_mmap(obj, vma);
>> +}
>> +
>> static const struct drm_gem_object_funcs xe_gem_object_funcs = {
>> .free = xe_gem_object_free,
>> .close = xe_gem_object_close,
>> - .mmap = drm_gem_ttm_mmap,
>> + .mmap = xe_gem_object_mmap,
>> .export = xe_gem_prime_export,
>> .vm_ops = &xe_gem_vm_ops,
>> };
>> @@ -3427,8 +3449,8 @@ int xe_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
>>
>> /* The mmap offset was set up at BO allocation time. */
>> args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
>> + drm_gem_object_put(gem_obj);
>>
>> - xe_bo_put(gem_to_xe_bo(gem_obj));
> Looks unrelated.
Yes, I will revert this change.
Thanks,
Arvind
> Matt
>
>> return 0;
>> }
>>
>> --
>> 2.43.0
>>
^ permalink raw reply [flat|nested] 29+ messages in thread
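For reference, a hedged sketch of how the mmap callback might look with Matt's comment addressed. This assumes xe_bo_lock() returns 0 or a negative errno when its second (interruptible) argument is true, as the review implies; it is my reconstruction, not the posted v8.

```c
/* Sketch only: same logic as the hunk above, but propagating a failed
 * interruptible lock to the caller instead of ignoring it.
 */
static int xe_gem_object_mmap(struct drm_gem_object *obj,
			      struct vm_area_struct *vma)
{
	struct xe_bo *bo = gem_to_xe_bo(obj);
	int err;

	err = xe_bo_lock(bo, true);	/* interruptible; may fail, e.g. -EINTR */
	if (err)
		return err;

	if (xe_bo_madv_is_dontneed(bo))
		err = -EBUSY;		/* contents may be discarded at any time */
	else if (xe_bo_is_purged(bo))
		err = -EINVAL;		/* no backing store, permanently invalid */
	xe_bo_unlock(bo);

	return err ?: drm_gem_ttm_mmap(obj, vma);
}
```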
* Re: [PATCH v7 09/12] drm/xe/dma_buf: Block export of DONTNEED/purged BOs
2026-03-24 14:47 ` Thomas Hellström
@ 2026-03-26 2:50 ` Yadav, Arvind
0 siblings, 0 replies; 29+ messages in thread
From: Yadav, Arvind @ 2026-03-26 2:50 UTC (permalink / raw)
To: Thomas Hellström, intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray
On 24-03-2026 20:17, Thomas Hellström wrote:
> On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
>> Don't allow exporting BOs marked DONTNEED or PURGED as dma-bufs.
>> DONTNEED BOs can have their contents discarded at any time, making
>> the exported dma-buf unusable for external devices. PURGED BOs have
>> no backing store and are permanently invalid.
>>
>> Return -EBUSY for DONTNEED BOs (temporary purgeable state) and
>> -EINVAL for purged BOs (permanent, no backing store).
>>
>> The export path now checks the BO's purgeable state before creating
>> the dma-buf, preventing external devices from accessing memory that
>> may be purged at any time.
>>
>> v6:
>> - Split DONTNEED → -EBUSY and PURGED → -EINVAL for consistency
>> with the rest of the series (Thomas, Matt)
>>
>> v7:
>> - Use an interruptible lock. (Thomas)
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_dma_buf.c | 21 +++++++++++++++++++++
>> 1 file changed, 21 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c
>> b/drivers/gpu/drm/xe/xe_dma_buf.c
>> index ea370cd373e9..4edbe9f3c001 100644
>> --- a/drivers/gpu/drm/xe/xe_dma_buf.c
>> +++ b/drivers/gpu/drm/xe/xe_dma_buf.c
>> @@ -223,6 +223,23 @@ struct dma_buf *xe_gem_prime_export(struct
>> drm_gem_object *obj, int flags)
>> if (bo->vm)
>> return ERR_PTR(-EPERM);
>>
>> + /*
>> + * Reject exporting purgeable BOs. DONTNEED BOs can be purged
>> + * at any time, making the exported dma-buf unusable. Purged BOs
>> + * have no backing store and are permanently invalid.
>> + */
>> + xe_bo_lock(bo, true);
> Missing error check.
Noted,
Thanks,
Arvind
>
> /Thomas
>
>
>
>> + if (xe_bo_madv_is_dontneed(bo)) {
>> + ret = -EBUSY;
>> + goto out_unlock;
>> + }
>> +
>> + if (xe_bo_is_purged(bo)) {
>> + ret = -EINVAL;
>> + goto out_unlock;
>> + }
>> + xe_bo_unlock(bo);
>> +
>> ret = ttm_bo_setup_export(&bo->ttm, &ctx);
>> if (ret)
>> return ERR_PTR(ret);
>> @@ -232,6 +249,10 @@ struct dma_buf *xe_gem_prime_export(struct
>> drm_gem_object *obj, int flags)
>> buf->ops = &xe_dmabuf_ops;
>>
>> return buf;
>> +
>> +out_unlock:
>> + xe_bo_unlock(bo);
>> + return ERR_PTR(ret);
>> }
>>
>> static struct drm_gem_object *
^ permalink raw reply [flat|nested] 29+ messages in thread
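The export path has the same missing-error-check issue as the mmap patch. A hedged sketch of the corrected fragment, again assuming xe_bo_lock() returns a negative errno on interruptible failure (my reconstruction, not the posted fix); the ERR_PTR() conversion is what differs from the mmap case, since xe_gem_prime_export() returns a pointer:

```c
	/* Sketch only: check the interruptible lock before inspecting state. */
	ret = xe_bo_lock(bo, true);
	if (ret)
		return ERR_PTR(ret);

	if (xe_bo_madv_is_dontneed(bo)) {
		ret = -EBUSY;
		goto out_unlock;
	}
	if (xe_bo_is_purged(bo)) {
		ret = -EINVAL;
		goto out_unlock;
	}
	xe_bo_unlock(bo);
```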
* Re: [PATCH v7 03/12] drm/xe/madvise: Implement purgeable buffer object support
2026-03-25 15:01 ` Thomas Hellström
@ 2026-03-26 4:02 ` Yadav, Arvind
0 siblings, 0 replies; 29+ messages in thread
From: Yadav, Arvind @ 2026-03-26 4:02 UTC (permalink / raw)
To: Thomas Hellström, intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray
On 25-03-2026 20:31, Thomas Hellström wrote:
> On Mon, 2026-03-23 at 15:00 +0530, Arvind Yadav wrote:
>> This allows userspace applications to provide memory usage hints to
>> the kernel for better memory management under pressure:
>>
>> Add the core implementation for purgeable buffer objects, enabling
>> memory
>> reclamation of user-designated DONTNEED buffers during eviction.
>>
>> This patch implements the purge operation and state machine
>> transitions:
>>
>> Purgeable States (from xe_madv_purgeable_state):
>> - WILLNEED (0): BO should be retained, actively used
>> - DONTNEED (1): BO eligible for purging, not currently needed
>> - PURGED (2): BO backing store reclaimed, permanently invalid
>>
>> Design Rationale:
>> - Async TLB invalidation via trigger_rebind (no blocking
>> xe_vm_invalidate_vma)
>> - i915 compatibility: retained field, "once purged always purged"
>> semantics
>> - Shared BO protection prevents multi-process memory corruption
>> - Scratch PTE reuse avoids new infrastructure, safe for fault mode
>>
>> Note: The madvise_purgeable() function is implemented but not hooked
>> into
>> the IOCTL handler (madvise_funcs[] entry is NULL) to maintain
>> bisectability.
>> The feature will be enabled in the final patch when all supporting
>> infrastructure (shrinker, per-VMA tracking) is complete.
>>
>> v2:
>> - Use xe_bo_trigger_rebind() for async TLB invalidation (Thomas
>> Hellström)
>> - Add NULL rebind with scratch PTEs for fault mode (Thomas
>> Hellström)
>> - Implement i915-compatible retained field logic (Thomas Hellström)
>> - Skip BO validation for purged BOs in page fault handler (crash
>> fix)
>> - Add scratch VM check in page fault path (non-scratch VMs fail
>> fault)
>> - Force clear_pt for non-scratch VMs to avoid phys addr 0 mapping
>> (review fix)
>> - Add !is_purged check to resource cursor setup to prevent stale
>> access
>>
>> v3:
>> - Rebase as xe_gt_pagefault.c is gone upstream and replaced
>> with xe_pagefault.c (Matthew Brost)
>> - Xe specific warn on (Matthew Brost)
>> - Call helpers for madv_purgeable access(Matthew Brost)
>> - Remove bo NULL check(Matthew Brost)
>> - Use xe_bo_assert_held instead of dma assert(Matthew Brost)
>> - Move the xe_bo_is_purged check under the dma-resv lock( by Matt)
>> - Drop is_purged from xe_pt_stage_bind_entry and just set is_null
>> to true
>> for purged BO rename s/is_null/is_null_or_purged (by Matt)
>> - UAPI rule should not be changed.(Matthew Brost)
>> - Make 'retained' a userptr (Matthew Brost)
>>
>> v4:
>> - @madv_purgeable atomic_t → u32 change across all relevant patches
>> (Matt)
>>
>> v5:
>> - Introduce xe_bo_set_purgeable_state() helper (void return) to
>> centralize
>> madv_purgeable updates with xe_bo_assert_held() and state
>> transition
>> validation using explicit enum checks (no transition out of
>> PURGED) (Matt)
>> - Make xe_ttm_bo_purge() return int and propagate failures from
>> xe_bo_move(); handle xe_bo_trigger_rebind() failures (e.g.
>> no_wait_gpu
>> paths) rather than silently ignoring (Matt)
>> - Replace drm_WARN_ON with xe_assert for better Xe-specific
>> assertions (Matt)
>> - Hook purgeable handling into
>> madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE]
>> instead of special-case path in xe_vm_madvise_ioctl() (Matt)
>> - Track purgeable retained return via xe_madvise_details and
>> perform
>> copy_to_user() from xe_madvise_details_fini() after locks are
>> dropped (Matt)
>> - Set madvise_funcs[DRM_XE_VMA_ATTR_PURGEABLE_STATE] to NULL with
>> __maybe_unused on madvise_purgeable() to maintain bisectability
>> until
>> shrinker integration is complete in final patch (Matt)
>> - Use put_user() instead of copy_to_user() for single u32 retained
>> value (Thomas)
>> - Return -EFAULT from ioctl if put_user() fails (Thomas)
>> - Validate userspace initialized retained to 0 before ioctl,
>> ensuring safe
>> default (0 = "assume purged") if put_user() fails (Thomas)
>> - Refactor error handling: separate fallible put_user from
>> infallible cleanup
>> - xe_madvise_purgeable_retained_to_user(): separate helper for
>> fallible put_user
>> - Call put_user() after releasing all locks to avoid circular
>> dependencies
>> - Use xe_bo_move_notify() instead of xe_bo_trigger_rebind() in
>> xe_ttm_bo_purge()
>> for proper abstraction - handles vunmap, dma-buf notifications,
>> and VRAM
>> userfault cleanup (Thomas)
>> - Fix LRU crash while running shrink test
>> - Skip xe_bo_validate() for purged BOs in xe_gpuvm_validate()
>>
>> v6:
>> - xe_bo_move_notify() must be called *before* ttm_bo_validate().
>> (Thomas)
>> - Block GPU page faults (fault-mode VMs) for DONTNEED bo's (Thomas,
>> Matt)
>> - Rename retained to retained_ptr. (Jose)
>>
>> v7 Changes:
>> - Fix engine reset from EU overfetch in scratch VMs:
>> xe_pagefault_begin()
>> and xe_pagefault_service() now return 0 instead of -EACCES/-
>> EINVAL for
>> DONTNEED/purged BOs and missing VMAs so stale accesses hit
>> scratch PTEs.
>> - Fix Engine memory CAT errors when Mesa uses
>> DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE:
>> accept scratch VMs in xe_pagefault_asid_to_vm() via '|| xe_vm_has_scratch(vm)'.
>> - Skip validate/migrate/rebind for DONTNEED/purged BOs in
>> xe_pagefault_begin()
>> using a bool *skip_rebind out-parameter. Scratch VMs ACK the
>> fault and fall back
>> to scratch PTEs; non-scratch VMs return -EACCES.
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_bo.c | 107 ++++++++++++++++++++---
>> drivers/gpu/drm/xe/xe_bo.h | 2 +
>> drivers/gpu/drm/xe/xe_pagefault.c | 25 +++++-
>> drivers/gpu/drm/xe/xe_pt.c | 40 +++++++--
>> drivers/gpu/drm/xe/xe_vm.c | 20 ++++-
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 136
>> +++++++++++++++++++++++++++++
>> 6 files changed, 305 insertions(+), 25 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index 22179b2df85c..b6055bb4c578 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -835,6 +835,84 @@ static int xe_bo_move_notify(struct xe_bo *bo,
>> return 0;
>> }
>>
>> +/**
>> + * xe_bo_set_purgeable_state() - Set BO purgeable state with
>> validation
>> + * @bo: Buffer object
>> + * @new_state: New purgeable state
>> + *
>> + * Sets the purgeable state with lockdep assertions and validates
>> state
>> + * transitions. Once a BO is PURGED, it cannot transition to any
>> other state.
>> + * Invalid transitions are caught with xe_assert().
>> + */
>> +void xe_bo_set_purgeable_state(struct xe_bo *bo,
>> + enum xe_madv_purgeable_state
>> new_state)
>> +{
>> + struct xe_device *xe = xe_bo_device(bo);
>> +
>> + xe_bo_assert_held(bo);
>> +
>> + /* Validate state is one of the known values */
>> + xe_assert(xe, new_state == XE_MADV_PURGEABLE_WILLNEED ||
>> + new_state == XE_MADV_PURGEABLE_DONTNEED ||
>> + new_state == XE_MADV_PURGEABLE_PURGED);
>> +
>> + /* Once purged, always purged - cannot transition out */
>> + xe_assert(xe, !(bo->madv_purgeable ==
>> XE_MADV_PURGEABLE_PURGED &&
>> + new_state != XE_MADV_PURGEABLE_PURGED));
>> +
>> + bo->madv_purgeable = new_state;
>> +}
>> +
>> +/**
>> + * xe_ttm_bo_purge() - Purge buffer object backing store
>> + * @ttm_bo: The TTM buffer object to purge
>> + * @ctx: TTM operation context
>> + *
>> + * This function purges the backing store of a BO marked as DONTNEED
>> and
>> + * triggers rebind to invalidate stale GPU mappings. For fault-mode
>> VMs,
>> + * this zaps the PTEs. The next GPU access will trigger a page fault
>> and
>> + * perform NULL rebind (scratch pages or clear PTEs based on VM
>> config).
>> + *
>> + * Return: 0 on success, negative error code on failure
>> + */
>> +static int xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct
>> ttm_operation_ctx *ctx)
>> +{
>> + struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
>> + struct ttm_placement place = {};
>> + int ret;
>> +
>> + xe_bo_assert_held(bo);
>> +
>> + if (!ttm_bo->ttm)
>> + return 0;
>> +
>> + if (!xe_bo_madv_is_dontneed(bo))
>> + return 0;
>> +
>> + /*
>> + * Use the standard pre-move hook so we share the same
>> cleanup/invalidate
>> + * path as migrations: drop any CPU vmap and schedule the
>> necessary GPU
>> + * unbind/rebind work.
>> + *
>> + * This must be called before ttm_bo_validate() frees the
>> pages.
>> + * May fail in no-wait contexts (fault/shrinker) or if the
>> BO is
>> + * pinned. Keep state unchanged on failure so we don't end
>> up "PURGED"
>> + * with stale mappings.
>> + */
>> + ret = xe_bo_move_notify(bo, ctx);
>> + if (ret)
>> + return ret;
>> +
>> + ret = ttm_bo_validate(ttm_bo, &place, ctx);
>> + if (ret)
>> + return ret;
>> +
>> + /* Commit the state transition only once invalidation was
>> queued */
>> + xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_PURGED);
>> +
>> + return 0;
>> +}
>> +
>> static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
>> struct ttm_operation_ctx *ctx,
>> struct ttm_resource *new_mem,
>> @@ -854,6 +932,20 @@ static int xe_bo_move(struct ttm_buffer_object
>> *ttm_bo, bool evict,
>> ttm && ttm_tt_is_populated(ttm)) ?
>> true : false;
>> int ret = 0;
>>
>> + /*
>> + * Purge only non-shared BOs explicitly marked DONTNEED by
>> userspace.
>> + * The move_notify callback will handle invalidation
>> asynchronously.
>> + */
>> + if (evict && xe_bo_madv_is_dontneed(bo)) {
>> + ret = xe_ttm_bo_purge(ttm_bo, ctx);
>> + if (ret)
>> + return ret;
>> +
>> + /* Free the unused eviction destination resource */
>> + ttm_resource_free(ttm_bo, &new_mem);
>> + return 0;
>> + }
>> +
>> /* Bo creation path, moving to system or TT. */
>> if ((!old_mem && ttm) && !handle_system_ccs) {
>> if (new_mem->mem_type == XE_PL_TT)
>> @@ -1603,18 +1695,6 @@ static void xe_ttm_bo_delete_mem_notify(struct
>> ttm_buffer_object *ttm_bo)
>> }
>> }
>>
>> -static void xe_ttm_bo_purge(struct ttm_buffer_object *ttm_bo, struct
>> ttm_operation_ctx *ctx)
>> -{
>> - struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
>> -
>> - if (ttm_bo->ttm) {
>> - struct ttm_placement place = {};
>> - int ret = ttm_bo_validate(ttm_bo, &place, ctx);
>> -
>> - drm_WARN_ON(&xe->drm, ret);
>> - }
>> -}
>> -
>> static void xe_ttm_bo_swap_notify(struct ttm_buffer_object *ttm_bo)
>> {
>> struct ttm_operation_ctx ctx = {
>> @@ -2195,6 +2275,9 @@ struct xe_bo *xe_bo_init_locked(struct
>> xe_device *xe, struct xe_bo *bo,
>> #endif
>> INIT_LIST_HEAD(&bo->vram_userfault_link);
>>
>> + /* Initialize purge advisory state */
>> + bo->madv_purgeable = XE_MADV_PURGEABLE_WILLNEED;
>> +
>> drm_gem_private_object_init(&xe->drm, &bo->ttm.base, size);
>>
>> if (resv) {
>> diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
>> index fb5541bdf602..653851d47aa6 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.h
>> +++ b/drivers/gpu/drm/xe/xe_bo.h
>> @@ -271,6 +271,8 @@ static inline bool xe_bo_madv_is_dontneed(struct
>> xe_bo *bo)
>> return bo->madv_purgeable == XE_MADV_PURGEABLE_DONTNEED;
>> }
>>
>> +void xe_bo_set_purgeable_state(struct xe_bo *bo, enum
>> xe_madv_purgeable_state new_state);
>> +
>> static inline void xe_bo_unpin_map_no_vm(struct xe_bo *bo)
>> {
>> if (likely(bo)) {
>> diff --git a/drivers/gpu/drm/xe/xe_pagefault.c
>> b/drivers/gpu/drm/xe/xe_pagefault.c
>> index ea4857acf28d..415253631e6f 100644
>> --- a/drivers/gpu/drm/xe/xe_pagefault.c
>> +++ b/drivers/gpu/drm/xe/xe_pagefault.c
>> @@ -46,7 +46,8 @@ static int xe_pagefault_entry_size(void)
>> }
>>
>> static int xe_pagefault_begin(struct drm_exec *exec, struct xe_vma
>> *vma,
>> - struct xe_vram_region *vram, bool
>> need_vram_move)
>> + struct xe_vram_region *vram, bool
>> need_vram_move,
>> + bool *skip_rebind)
>> {
>> struct xe_bo *bo = xe_vma_bo(vma);
>> struct xe_vm *vm = xe_vma_vm(vma);
>> @@ -59,6 +60,20 @@ static int xe_pagefault_begin(struct drm_exec
>> *exec, struct xe_vma *vma,
>> if (!bo)
>> return 0;
>>
>> + /*
>> + * Under dma-resv lock: reject rebind for DONTNEED/purged BOs.
>> + * Validating or migrating would repopulate pages we want the shrinker
>> + * to reclaim, and rebinding would undo the GPU PTE zap.
>> + * Scratch VMs absorb the access via scratch PTEs (skip_rebind=true);
>> + * non-scratch VMs have no fallback so fail the fault.
>> + */
>> + if (unlikely(xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo))) {
>> + if (!xe_vm_has_scratch(vm))
>> + return -EACCES;
>> + *skip_rebind = true;
> So what happens here if we have a scratch VM, then do a lazy VM_BIND,
> then madvise(DONTNEED) and then hit a page-fault. If we then
> skip_rebind, aren't we going to hit the same pagefault again and again?
You're right, thank you for catching this. With lazy VM_BIND + scratch
VM + madvise(DONTNEED), skip_rebind=true meant no PTEs were ever
written, so tile_present stayed 0 and the GPU kept faulting on the same
address — infinite loop.
Fix: remove skip_rebind and let rebind proceed for scratch VMs.
xe_pt_stage_bind already handles both cases — DONTNEED BOs get real PTEs
(pages are still alive, shrinker can purge later), and PURGED BOs get
XE_PTE_NULL scratch PTEs. Either way tile_present is updated and the
fault resolves.
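A hedged sketch of that direction in xe_pagefault_begin(), as my reconstruction of the described fix (assuming xe_pt_stage_bind() behaves as stated above; not posted code):

```c
	/*
	 * Sketch only: purged BOs have no backing store, so skip
	 * validate/migrate. The bind step still runs and emits
	 * NULL/scratch PTEs on scratch VMs, so tile_present is updated
	 * and the same fault does not loop forever. Non-scratch VMs
	 * have no fallback, so the fault fails.
	 */
	if (unlikely(xe_bo_is_purged(bo)))
		return xe_vm_has_scratch(vm) ? 0 : -EACCES;

	/*
	 * DONTNEED BOs still have live pages: validate and bind real
	 * PTEs; the shrinker can purge them later.
	 */
	return need_vram_move ? xe_bo_migrate(bo, vram->placement, NULL, exec) :
		xe_bo_validate(bo, vm, true, exec);
```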
>
>
>
>> + return 0;
>> + }
>> +
>> return need_vram_move ? xe_bo_migrate(bo, vram->placement, NULL, exec) :
>> xe_bo_validate(bo, vm, true, exec);
>> }
>> @@ -103,11 +118,13 @@ static int xe_pagefault_handle_vma(struct xe_gt
>> *gt, struct xe_vma *vma,
>> /* Lock VM and BOs dma-resv */
>> xe_validation_ctx_init(&ctx, &vm->xe->val, &exec, (struct
>> xe_val_flags) {});
>> drm_exec_until_all_locked(&exec) {
>> + bool skip_rebind = false;
>> +
>> err = xe_pagefault_begin(&exec, vma, tile->mem.vram,
>> - needs_vram == 1);
>> + needs_vram == 1, &skip_rebind);
>> drm_exec_retry_on_contention(&exec);
>> xe_validation_retry_on_oom(&ctx, &err);
>> - if (err)
>> + if (err || skip_rebind)
>> goto unlock_dma_resv;
>>
>> /* Bind VMA only to the GT that has faulted */
>> @@ -145,7 +162,7 @@ static struct xe_vm
>> *xe_pagefault_asid_to_vm(struct xe_device *xe, u32 asid)
>>
>> down_read(&xe->usm.lock);
>> vm = xa_load(&xe->usm.asid_to_vm, asid);
>> - if (vm && xe_vm_in_fault_mode(vm))
>> + if (vm && (xe_vm_in_fault_mode(vm) || xe_vm_has_scratch(vm)))
>> xe_vm_get(vm);
>> else
>> vm = ERR_PTR(-EINVAL);
>> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
>> index 2d9ce2c4cb4f..08f40701f654 100644
>> --- a/drivers/gpu/drm/xe/xe_pt.c
>> +++ b/drivers/gpu/drm/xe/xe_pt.c
>> @@ -531,20 +531,26 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent,
>> pgoff_t offset,
>> /* Is this a leaf entry ?*/
>> if (level == 0 || xe_pt_hugepte_possible(addr, next, level,
>> xe_walk)) {
>> struct xe_res_cursor *curs = xe_walk->curs;
>> - bool is_null = xe_vma_is_null(xe_walk->vma);
>> - bool is_vram = is_null ? false : xe_res_is_vram(curs);
>> + struct xe_bo *bo = xe_vma_bo(xe_walk->vma);
>> + bool is_null_or_purged = xe_vma_is_null(xe_walk->vma) ||
>> + (bo && xe_bo_is_purged(bo));
>> + bool is_vram = is_null_or_purged ? false : xe_res_is_vram(curs);
>>
>> XE_WARN_ON(xe_walk->va_curs_start != addr);
>>
>> if (xe_walk->clear_pt) {
>> pte = 0;
>> } else {
>> - pte = vm->pt_ops->pte_encode_vma(is_null ? 0 :
>> + /*
>> + * For purged BOs, treat like null VMAs - pass address 0.
>> + * The pte_encode_vma will set XE_PTE_NULL flag for scratch mapping.
>> + */
>> + pte = vm->pt_ops->pte_encode_vma(is_null_or_purged ? 0 :
>> xe_res_dma(curs) + xe_walk->dma_offset,
>> xe_walk->vma, pat_index, level);
>> - if (!is_null)
>> + if (!is_null_or_purged)
>> pte |= is_vram ? xe_walk->default_vram_pte :
>> xe_walk->default_system_pte;
>>
>> @@ -568,7 +574,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent,
>> pgoff_t offset,
>> if (unlikely(ret))
>> return ret;
>>
>> - if (!is_null && !xe_walk->clear_pt)
>> + if (!is_null_or_purged && !xe_walk->clear_pt)
>> xe_res_next(curs, next - addr);
>> xe_walk->va_curs_start = next;
>> xe_walk->vma->gpuva.flags |= (XE_VMA_PTE_4K <<
>> level);
>> @@ -721,6 +727,26 @@ xe_pt_stage_bind(struct xe_tile *tile, struct
>> xe_vma *vma,
>> };
>> struct xe_pt *pt = vm->pt_root[tile->id];
>> int ret;
>> + bool is_purged = false;
>> +
>> + /*
>> + * Check if BO is purged:
>> + * - Scratch VMs: Use scratch PTEs (XE_PTE_NULL) for safe
>> zero reads
>> + * - Non-scratch VMs: Clear PTEs to zero (non-present) to
>> avoid mapping to phys addr 0
>> + *
>> + * For non-scratch VMs, we force clear_pt=true so leaf PTEs
>> become completely
>> + * zero instead of creating a PRESENT mapping to physical
>> address 0.
>> + */
>> + if (bo && xe_bo_is_purged(bo)) {
>> + is_purged = true;
>> +
>> + /*
>> + * For non-scratch VMs, a NULL rebind should use
>> zero PTEs
>> + * (non-present), not a present PTE to phys 0.
>> + */
>> + if (!xe_vm_has_scratch(vm))
>> + xe_walk.clear_pt = true;
>> + }
>>
>> if (range) {
>> /* Move this entire thing to xe_svm.c? */
>> @@ -756,11 +782,11 @@ xe_pt_stage_bind(struct xe_tile *tile, struct
>> xe_vma *vma,
>> }
>>
>> xe_walk.default_vram_pte |= XE_PPGTT_PTE_DM;
>> - xe_walk.dma_offset = bo ? vram_region_gpu_offset(bo->ttm.resource) : 0;
>> + xe_walk.dma_offset = (bo && !is_purged) ? vram_region_gpu_offset(bo->ttm.resource) : 0;
>> if (!range)
>> xe_bo_assert_held(bo);
>>
>> - if (!xe_vma_is_null(vma) && !range) {
>> + if (!xe_vma_is_null(vma) && !range && !is_purged) {
>> if (xe_vma_is_userptr(vma))
>> xe_res_first_dma(to_userptr_vma(vma)->userptr.pages.dma_addr, 0,
>> xe_vma_size(vma), &curs);
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 5572e12c2a7e..a0ade67d616e 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -326,6 +326,7 @@ void xe_vm_kill(struct xe_vm *vm, bool unlocked)
>> static int xe_gpuvm_validate(struct drm_gpuvm_bo *vm_bo, struct
>> drm_exec *exec)
>> {
>> struct xe_vm *vm = gpuvm_to_vm(vm_bo->vm);
>> + struct xe_bo *bo = gem_to_xe_bo(vm_bo->obj);
>> struct drm_gpuva *gpuva;
>> int ret;
>>
>> @@ -334,10 +335,16 @@ static int xe_gpuvm_validate(struct
>> drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
>> list_move_tail(&gpuva_to_vma(gpuva)->combined_links.rebind,
>> &vm->rebind_list);
>>
>> + /* Skip re-populating purged BOs, rebind maps scratch pages. */
>> + if (xe_bo_is_purged(bo)) {
>> + vm_bo->evicted = false;
>> + return 0;
>> + }
>> +
>> if (!try_wait_for_completion(&vm->xe->pm_block))
>> return -EAGAIN;
>>
>> - ret = xe_bo_validate(gem_to_xe_bo(vm_bo->obj), vm, false, exec);
>> + ret = xe_bo_validate(bo, vm, false, exec);
>> if (ret)
>> return ret;
>>
>> @@ -1358,6 +1365,9 @@ static u64 xelp_pte_encode_bo(struct xe_bo *bo,
>> u64 bo_offset,
>> static u64 xelp_pte_encode_vma(u64 pte, struct xe_vma *vma,
>> u16 pat_index, u32 pt_level)
>> {
>> + struct xe_bo *bo = xe_vma_bo(vma);
>> + struct xe_vm *vm = xe_vma_vm(vma);
>> +
>> pte |= XE_PAGE_PRESENT;
>>
>> if (likely(!xe_vma_read_only(vma)))
>> @@ -1366,7 +1376,13 @@ static u64 xelp_pte_encode_vma(u64 pte, struct
>> xe_vma *vma,
>> pte |= pte_encode_pat_index(pat_index, pt_level);
>> pte |= pte_encode_ps(pt_level);
>>
>> - if (unlikely(xe_vma_is_null(vma)))
>> + /*
>> + * NULL PTEs redirect to scratch page (return zeros on read).
>> + * Set for: 1) explicit null VMAs, 2) purged BOs on scratch VMs.
>> + * Never set NULL flag without scratch page - causes undefined behavior.
>> + */
>> + if (unlikely(xe_vma_is_null(vma) ||
>> + (bo && xe_bo_is_purged(bo) && xe_vm_has_scratch(vm))))
>> pte |= XE_PTE_NULL;
>>
>> return pte;
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index 869db304d96d..ffba2e41c539 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -26,6 +26,8 @@ struct xe_vmas_in_madvise_range {
>> /**
>> * struct xe_madvise_details - Argument to madvise_funcs
>> * @dpagemap: Reference-counted pointer to a struct drm_pagemap.
>> + * @has_purged_bo: Track if any BO was purged (for purgeable state)
>> + * @retained_ptr: User pointer for retained value (for purgeable
>> state)
>> *
>> * The madvise IOCTL handler may, in addition to the user-space
>> * args, have additional info to pass into the madvise_func that
>> @@ -34,6 +36,8 @@ struct xe_vmas_in_madvise_range {
>> */
>> struct xe_madvise_details {
>> struct drm_pagemap *dpagemap;
>> + bool has_purged_bo;
>> + u64 retained_ptr;
>> };
>>
>> static int get_vmas(struct xe_vm *vm, struct
>> xe_vmas_in_madvise_range *madvise_range)
>> @@ -180,6 +184,67 @@ static void madvise_pat_index(struct xe_device
>> *xe, struct xe_vm *vm,
>> }
>> }
>>
>> +/**
>> + * madvise_purgeable - Handle purgeable buffer object advice
>> + * @xe: XE device
>> + * @vm: VM
>> + * @vmas: Array of VMAs
>> + * @num_vmas: Number of VMAs
>> + * @op: Madvise operation
>> + * @details: Madvise details for return values
>> + *
>> + * Handles DONTNEED/WILLNEED/PURGED states. Tracks if any BO was
>> purged
>> + * in details->has_purged_bo for later copy to userspace.
>> + *
>> + * Note: Marked __maybe_unused until hooked into madvise_funcs[] in
>> the
>> + * final patch to maintain bisectability. The NULL placeholder in
>> the
>> + * array ensures proper -EINVAL return for userspace until all
>> supporting
>> + * infrastructure (shrinker, per-VMA tracking) is complete.
>> + */
>> +static void __maybe_unused madvise_purgeable(struct xe_device *xe,
>> + struct xe_vm *vm,
>> + struct xe_vma **vmas,
>> + int num_vmas,
>> +                                             struct drm_xe_madvise *op,
>> +                                             struct xe_madvise_details *details)
>> +{
>> + int i;
>> +
>> +     xe_assert(vm->xe, op->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE);
>> +
>> + for (i = 0; i < num_vmas; i++) {
>> + struct xe_bo *bo = xe_vma_bo(vmas[i]);
>> +
>> + if (!bo)
>> + continue;
>> +
>> + /* BO must be locked before modifying madv state */
>> + xe_bo_assert_held(bo);
>> +
>> + /*
>> +             * Once purged, always purged. Cannot transition back to WILLNEED.
>> +             * This matches i915 semantics where purged BOs are permanently invalid.
>> + */
>> + if (xe_bo_is_purged(bo)) {
>> + details->has_purged_bo = true;
>> + continue;
>> + }
>> +
>> + switch (op->purge_state_val.val) {
>> + case DRM_XE_VMA_PURGEABLE_STATE_WILLNEED:
>> +                     xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_WILLNEED);
>> + break;
>> + case DRM_XE_VMA_PURGEABLE_STATE_DONTNEED:
>> +                     xe_bo_set_purgeable_state(bo, XE_MADV_PURGEABLE_DONTNEED);
>> + break;
>> + default:
>> +                     drm_warn(&vm->xe->drm, "Invalid madvice value = %d\n",
> Please use either "madvise" with an 's' or "advice".
Noted.
Thanks,
Arvind
>
> Thanks,
> Thomas
>> + op->purge_state_val.val);
>> + return;
>> + }
>> + }
>> +}
>> +
>> typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
>> struct xe_vma **vmas, int num_vmas,
>> struct drm_xe_madvise *op,
>> @@ -189,6 +254,12 @@ static const madvise_func madvise_funcs[] = {
>>      [DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
>> [DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
>> [DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
>> + /*
>> +     * Purgeable support implemented but not enabled yet to maintain
>> +     * bisectability. Will be set to madvise_purgeable() in final patch
>> +     * when all infrastructure (shrinker, VMA tracking) is complete.
>> + */
>> + [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = NULL,
>> };
>>
>> static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
>> @@ -319,6 +390,19 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
>> return false;
>> break;
>> }
>> + case DRM_XE_VMA_ATTR_PURGEABLE_STATE:
>> + {
>> + u32 val = args->purge_state_val.val;
>> +
>> +             if (XE_IOCTL_DBG(xe, !(val == DRM_XE_VMA_PURGEABLE_STATE_WILLNEED ||
>> +                                    val == DRM_XE_VMA_PURGEABLE_STATE_DONTNEED)))
>> + return false;
>> +
>> + if (XE_IOCTL_DBG(xe, args->purge_state_val.pad))
>> + return false;
>> +
>> + break;
>> + }
>> default:
>> if (XE_IOCTL_DBG(xe, 1))
>> return false;
>> @@ -337,6 +421,12 @@ static int xe_madvise_details_init(struct xe_vm *vm, const struct drm_xe_madvise
>>
>> memset(details, 0, sizeof(*details));
>>
>> + /* Store retained pointer for purgeable state */
>> + if (args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE) {
>> +             details->retained_ptr = args->purge_state_val.retained_ptr;
>> + return 0;
>> + }
>> +
>> if (args->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC) {
>> int fd = args->preferred_mem_loc.devmem_fd;
>> struct drm_pagemap *dpagemap;
>> @@ -365,6 +455,21 @@ static void xe_madvise_details_fini(struct xe_madvise_details *details)
>> drm_pagemap_put(details->dpagemap);
>> }
>>
>> +static int xe_madvise_purgeable_retained_to_user(const struct xe_madvise_details *details)
>> +{
>> + u32 retained;
>> +
>> + if (!details->retained_ptr)
>> + return 0;
>> +
>> + retained = !details->has_purged_bo;
>> +
>> +     if (put_user(retained, (u32 __user *)u64_to_user_ptr(details->retained_ptr)))
>> + return -EFAULT;
>> +
>> + return 0;
>> +}
>> +
>> static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
>> int num_vmas, u32 atomic_val)
>> {
>> @@ -422,6 +527,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
>> struct xe_vm *vm;
>> struct drm_exec exec;
>> int err, attr_type;
>> + bool do_retained;
>>
>> vm = xe_vm_lookup(xef, args->vm_id);
>> if (XE_IOCTL_DBG(xe, !vm))
>> @@ -432,6 +538,25 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
>> goto put_vm;
>> }
>>
>> +     /* Cache whether we need to write retained, and validate it's initialized to 0 */
>> +     do_retained = args->type == DRM_XE_VMA_ATTR_PURGEABLE_STATE &&
>> +                   args->purge_state_val.retained_ptr;
>> + if (do_retained) {
>> + u32 retained;
>> + u32 __user *retained_ptr;
>> +
>> +             retained_ptr = u64_to_user_ptr(args->purge_state_val.retained_ptr);
>> + if (get_user(retained, retained_ptr)) {
>> + err = -EFAULT;
>> + goto put_vm;
>> + }
>> +
>> + if (XE_IOCTL_DBG(xe, retained != 0)) {
>> + err = -EINVAL;
>> + goto put_vm;
>> + }
>> + }
>> +
>> xe_svm_flush(vm);
>>
>> err = down_write_killable(&vm->lock);
>> @@ -487,6 +612,13 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
>> }
>>
>>      attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
>> +
>> + /* Ensure the madvise function exists for this type */
>> + if (!madvise_funcs[attr_type]) {
>> + err = -EINVAL;
>> + goto err_fini;
>> + }
>> +
>> madvise_funcs[attr_type](xe, vm, madvise_range.vmas,
>> madvise_range.num_vmas, args,
>> &details);
>>
>> @@ -505,6 +637,10 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
>> xe_madvise_details_fini(&details);
>> unlock_vm:
>> up_write(&vm->lock);
>> +
>> + /* Write retained value to user after releasing all locks */
>> + if (!err && do_retained)
>> +             err = xe_madvise_purgeable_retained_to_user(&details);
>> put_vm:
>> xe_vm_put(vm);
>> return err;
Thread overview: 29+ messages
2026-03-23 9:30 [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 01/12] drm/xe/uapi: Add UAPI " Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 02/12] drm/xe/bo: Add purgeable bo state tracking and field madv to xe_bo Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 03/12] drm/xe/madvise: Implement purgeable buffer object support Arvind Yadav
2026-03-25 15:01 ` Thomas Hellström
2026-03-26 4:02 ` Yadav, Arvind
2026-03-23 9:30 ` [PATCH v7 04/12] drm/xe/bo: Block CPU faults to purgeable buffer objects Arvind Yadav
2026-03-23 9:30 ` [PATCH v7 05/12] drm/xe/vm: Prevent binding of purged " Arvind Yadav
2026-03-24 12:21 ` Thomas Hellström
2026-03-23 9:30 ` [PATCH v7 06/12] drm/xe/madvise: Implement per-VMA purgeable state tracking Arvind Yadav
2026-03-24 12:25 ` Thomas Hellström
2026-03-23 9:30 ` [PATCH v7 07/12] drm/xe/madvise: Block imported and exported dma-bufs Arvind Yadav
2026-03-24 14:13 ` Thomas Hellström
2026-03-23 9:30 ` [PATCH v7 08/12] drm/xe/bo: Block mmap of DONTNEED/purged BOs Arvind Yadav
2026-03-26 1:33 ` Matthew Brost
2026-03-26 2:49 ` Yadav, Arvind
2026-03-23 9:30 ` [PATCH v7 09/12] drm/xe/dma_buf: Block export " Arvind Yadav
2026-03-24 14:47 ` Thomas Hellström
2026-03-26 2:50 ` Yadav, Arvind
2026-03-23 9:30 ` [PATCH v7 10/12] drm/xe/bo: Add purgeable shrinker state helpers Arvind Yadav
2026-03-24 14:51 ` Thomas Hellström
2026-03-23 9:31 ` [PATCH v7 11/12] drm/xe/madvise: Enable purgeable buffer object IOCTL support Arvind Yadav
2026-03-23 9:31 ` [PATCH v7 12/12] drm/xe/madvise: Accept canonical GPU addresses in xe_vm_madvise_ioctl Arvind Yadav
2026-03-24 3:35 ` Matthew Brost
2026-03-23 9:40 ` ✗ CI.checkpatch: warning for drm/xe/madvise: Add support for purgeable buffer objects (rev8) Patchwork
2026-03-23 9:42 ` ✓ CI.KUnit: success " Patchwork
2026-03-23 10:40 ` ✓ Xe.CI.BAT: " Patchwork
2026-03-23 12:05 ` ✓ Xe.CI.FULL: " Patchwork
2026-03-23 15:45 ` [PATCH v7 00/12] drm/xe/madvise: Add support for purgeable buffer objects Souza, Jose