* [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges
@ 2025-04-07 10:16 Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 01/32] drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges Himal Prasad Ghimiray
` (43 more replies)
0 siblings, 44 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Region 0: Ranges remain in SMEM with only PTE updates.
Region 1: Ranges migrate to VRAM with corresponding PTE updates.
Madvise Ioctl:
Provides a user API to assign attributes like pat_index, atomic
operation type, and preferred location for SVM ranges.
The Kernel Mode Driver (KMD) may split existing VMAs to cover input
ranges, assign user-provided attributes, and invalidate existing PTEs so
that the next page fault/prefetch can use the new attributes.
v2
- Address comments from Matt B
- Add atomic handling
- Rebase
Himal Prasad Ghimiray (32):
drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of
ranges
drm/xe: Make xe_svm_alloc_vram public
drm/xe/svm: Helper to add tile masks to svm ranges
drm/xe/svm: Make to_xe_range a public function
drm/xe/svm: Make xe_svm_range_* end/start/size public
drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment
value
drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch
drm/xe: Rename lookup_vma function to xe_find_vma_by_addr
drm/xe/svm: Allow unaligned addresses and ranges for prefetch
drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm
drm/xe/svm: Add function to determine if range needs VRAM migration
drm/gpusvm: Introduce vram_only flag for VRAM allocation
drm/xe/svm: In case of atomic access ensure get_pages happens from vram
drm/xe/svm: Implement prefetch support for SVM ranges
drm/xe/vm: Add debug prints for SVM range prefetch
Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
drm/xe/uapi: Add madvise interface
drm/xe/vm: Add attributes struct as member of vma
drm/xe/vma: Move pat_index to vma attributes
drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as
parameter
drm/gpusvm: Make drm_gpusvm_for_each_* macros public
drm/xe/svm: Split system allocator vma in case of madvise call
drm/xe: Implement madvise ioctl for xe
drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for
madvise
drm/xe/svm: Add svm ranges migration policy on atomic access
drm/xe/madvise: Update migration policy based on preferred location
drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute
drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
drm/xe/svm: Consult madvise preferred location in prefetch
drm/xe/uapi: Add uapi for vma count and mem attributes
drm/xe/bo: Add attributes field to xe_bo
drm/xe/bo: Update atomic_access attribute on madvise
drivers/gpu/drm/drm_gpusvm.c | 94 +----
drivers/gpu/drm/drm_gpuvm.c | 93 ++++-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 1 +
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_bo.c | 21 +-
drivers/gpu/drm/xe/xe_bo_types.h | 5 +
drivers/gpu/drm/xe/xe_device.c | 4 +
drivers/gpu/drm/xe/xe_gt_pagefault.c | 24 +-
drivers/gpu/drm/xe/xe_pt.c | 91 +++--
drivers/gpu/drm/xe/xe_svm.c | 240 +++++++++---
drivers/gpu/drm/xe/xe_svm.h | 128 ++++++
drivers/gpu/drm/xe/xe_vm.c | 518 +++++++++++++++++++++++--
drivers/gpu/drm/xe/xe_vm.h | 10 +-
drivers/gpu/drm/xe/xe_vm_madvise.c | 375 ++++++++++++++++++
drivers/gpu/drm/xe/xe_vm_madvise.h | 15 +
drivers/gpu/drm/xe/xe_vm_types.h | 43 +-
include/drm/drm_gpusvm.h | 98 ++++-
include/drm/drm_gpuvm.h | 25 +-
include/uapi/drm/xe_drm.h | 213 ++++++++++
19 files changed, 1745 insertions(+), 254 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
--
2.34.1
^ permalink raw reply [flat|nested] 120+ messages in thread
* [PATCH v2 01/32] drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 02/32] drm/xe: Make xe_svm_alloc_vram public Himal Prasad Ghimiray
` (42 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Add the xe_vma_op_prefetch_range struct for SVM range prefetching. It
holds an xarray of SVM range pointers, the range count, and the target
memory region.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm_types.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 1662604c4486..ae0bcefdbfcd 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -374,6 +374,16 @@ struct xe_vma_op_unmap_range {
struct xe_svm_range *range;
};
+/** struct xe_vma_op_prefetch_range - VMA prefetch range operation */
+struct xe_vma_op_prefetch_range {
+ /** @range: xarray for SVM ranges data */
+ struct xarray range;
+ /** @ranges_count: number of svm ranges to map */
+ u32 ranges_count;
+ /** @region: memory region to prefetch to */
+ u32 region;
+};
+
/** enum xe_vma_op_flags - flags for VMA operation */
enum xe_vma_op_flags {
/** @XE_VMA_OP_COMMITTED: VMA operation committed */
@@ -416,6 +426,8 @@ struct xe_vma_op {
struct xe_vma_op_map_range map_range;
/** @unmap_range: VMA unmap range operation specific data */
struct xe_vma_op_unmap_range unmap_range;
+ /** @prefetch_range: VMA prefetch range operation specific data */
+ struct xe_vma_op_prefetch_range prefetch_range;
};
};
--
2.34.1
* [PATCH v2 02/32] drm/xe: Make xe_svm_alloc_vram public
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 01/32] drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-17 2:50 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 03/32] drm/xe/svm: Helper to add tile masks to svm ranges Himal Prasad Ghimiray
` (41 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
This function will also be used by prefetch, so make it public.
v2:
- Add kernel-doc (Matthew Brost)
- Rebase
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 23 +++++++++++++----------
drivers/gpu/drm/xe/xe_svm.h | 23 +++++++++++++++++++++++
2 files changed, 36 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index c7424c824a14..de19ad056287 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -661,9 +661,19 @@ static struct xe_vram_region *tile_to_vr(struct xe_tile *tile)
return &tile->mem.vram;
}
-static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
- struct xe_svm_range *range,
- const struct drm_gpusvm_ctx *ctx)
+/**
+ * xe_svm_alloc_vram() - Allocate device memory pages for range,
+ * migrating existing data.
+ * @vm: The VM.
+ * @tile: tile to allocate vram from
+ * @range: SVM range
+ * @ctx: DRM GPU SVM context
+ *
+ * Return: 0 on success, error code on failure.
+ */
+int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
+ struct xe_svm_range *range,
+ const struct drm_gpusvm_ctx *ctx)
{
struct mm_struct *mm = vm->svm.gpusvm.mm;
struct xe_vram_region *vr = tile_to_vr(tile);
@@ -717,13 +727,6 @@ static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
return err;
}
-#else
-static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
- struct xe_svm_range *range,
- const struct drm_gpusvm_ctx *ctx)
-{
- return -EOPNOTSUPP;
-}
#endif
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 3d441eb1f7ea..d8772f841ab7 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -75,6 +75,20 @@ int xe_svm_bo_evict(struct xe_bo *bo);
void xe_svm_range_debug(struct xe_svm_range *range, const char *operation);
+#if IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
+int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
+ struct xe_svm_range *range,
+ const struct drm_gpusvm_ctx *ctx);
+#else
+static inline
+int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
+ struct xe_svm_range *range,
+ const struct drm_gpusvm_ctx *ctx)
+{
+ return -EOPNOTSUPP;
+}
+#endif
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -100,6 +114,7 @@ static inline bool xe_svm_range_has_dma_mapping(struct xe_svm_range *range)
#include <linux/interval_tree.h>
struct drm_pagemap_device_addr;
+struct drm_gpusvm_ctx;
struct xe_bo;
struct xe_gt;
struct xe_vm;
@@ -170,6 +185,14 @@ void xe_svm_range_debug(struct xe_svm_range *range, const char *operation)
{
}
+static inline
+int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
+ struct xe_svm_range *range,
+ const struct drm_gpusvm_ctx *ctx)
+{
+ return -EOPNOTSUPP;
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
--
2.34.1
* [PATCH v2 03/32] drm/xe/svm: Helper to add tile masks to svm ranges
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 01/32] drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 02/32] drm/xe: Make xe_svm_alloc_vram public Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 04/32] drm/xe/svm: Make to_xe_range a public function Himal Prasad Ghimiray
` (40 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Introduce a helper that updates a range's tile masks, marking the
binding as present and clearing the invalidated bit for the given tile.
Add a lockdep_assert to ensure the helper is called with the GPU SVM
notifier lock held.
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 19 ++++++++++++++-----
1 file changed, 14 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index b42cf5d1b20c..de4e3edda758 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -2216,6 +2216,16 @@ static void unbind_op_commit(struct xe_vm *vm, struct xe_tile *tile,
}
}
+static void range_present_and_invalidated_tile(struct xe_vm *vm,
+ struct xe_svm_range *range,
+ u8 tile_id)
+{
+ lockdep_assert_held(&vm->svm.gpusvm.notifier_lock);
+
+ range->tile_present |= BIT(tile_id);
+ range->tile_invalidated &= ~BIT(tile_id);
+}
+
static void op_commit(struct xe_vm *vm,
struct xe_tile *tile,
struct xe_vm_pgtable_update_ops *pt_update_ops,
@@ -2270,12 +2280,11 @@ static void op_commit(struct xe_vm *vm,
}
case DRM_GPUVA_OP_DRIVER:
{
- if (op->subop == XE_VMA_SUBOP_MAP_RANGE) {
- op->map_range.range->tile_present |= BIT(tile->id);
- op->map_range.range->tile_invalidated &= ~BIT(tile->id);
- } else if (op->subop == XE_VMA_SUBOP_UNMAP_RANGE) {
+ if (op->subop == XE_VMA_SUBOP_MAP_RANGE)
+ range_present_and_invalidated_tile(vm, op->map_range.range, tile->id);
+ else if (op->subop == XE_VMA_SUBOP_UNMAP_RANGE)
op->unmap_range.range->tile_present &= ~BIT(tile->id);
- }
+
break;
}
default:
--
2.34.1
* [PATCH v2 04/32] drm/xe/svm: Make to_xe_range a public function
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (2 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 03/32] drm/xe/svm: Helper to add tile masks to svm ranges Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 05/32] drm/xe/svm: Make xe_svm_range_* end/start/size public Himal Prasad Ghimiray
` (39 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
The to_xe_range() function will be used in other files. Therefore, make
it public and add kernel-doc documentation.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 5 -----
drivers/gpu/drm/xe/xe_svm.h | 20 ++++++++++++++++++++
2 files changed, 20 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index de19ad056287..24d42018981a 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -94,11 +94,6 @@ static void xe_svm_range_free(struct drm_gpusvm_range *range)
kfree(range);
}
-static struct xe_svm_range *to_xe_range(struct drm_gpusvm_range *r)
-{
- return container_of(r, struct xe_svm_range, base);
-}
-
static void
xe_svm_garbage_collector_add_range(struct xe_vm *vm, struct xe_svm_range *range,
const struct mmu_notifier_range *mmu_range)
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index d8772f841ab7..ec5652c143ec 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -101,6 +101,20 @@ static inline bool xe_svm_range_has_dma_mapping(struct xe_svm_range *range)
return range->base.flags.has_dma_mapping;
}
+/**
+ * to_xe_range - Convert a drm_gpusvm_range pointer to a xe_svm_range
+ * @r: Pointer to the drm_gpusvm_range structure
+ *
+ * This function takes a pointer to a drm_gpusvm_range structure and
+ * converts it to a pointer to the containing xe_svm_range structure.
+ *
+ * Return: Pointer to the xe_svm_range structure
+ */
+static inline struct xe_svm_range *to_xe_range(struct drm_gpusvm_range *r)
+{
+ return container_of(r, struct xe_svm_range, base);
+}
+
#define xe_svm_assert_in_notifier(vm__) \
lockdep_assert_held_write(&(vm__)->svm.gpusvm.notifier_lock)
@@ -115,6 +129,7 @@ static inline bool xe_svm_range_has_dma_mapping(struct xe_svm_range *range)
struct drm_pagemap_device_addr;
struct drm_gpusvm_ctx;
+struct drm_gpusvm_range;
struct xe_bo;
struct xe_gt;
struct xe_vm;
@@ -193,6 +208,11 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
return -EOPNOTSUPP;
}
+static inline struct xe_svm_range *to_xe_range(struct drm_gpusvm_range *r)
+{
+ return NULL;
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
--
2.34.1
* [PATCH v2 05/32] drm/xe/svm: Make xe_svm_range_* end/start/size public
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (3 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 04/32] drm/xe/svm: Make to_xe_range a public function Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 06/32] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value Himal Prasad Ghimiray
` (38 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
These functions will also be used by prefetch, so make them public.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 15 ------------
drivers/gpu/drm/xe/xe_svm.h | 48 +++++++++++++++++++++++++++++++++++++
2 files changed, 48 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 24d42018981a..6648b4da0bca 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -36,21 +36,6 @@ static struct xe_vm *range_to_vm(struct drm_gpusvm_range *r)
return gpusvm_to_vm(r->gpusvm);
}
-static unsigned long xe_svm_range_start(struct xe_svm_range *range)
-{
- return drm_gpusvm_range_start(&range->base);
-}
-
-static unsigned long xe_svm_range_end(struct xe_svm_range *range)
-{
- return drm_gpusvm_range_end(&range->base);
-}
-
-static unsigned long xe_svm_range_size(struct xe_svm_range *range)
-{
- return drm_gpusvm_range_size(&range->base);
-}
-
#define range_debug(r__, operaton__) \
vm_dbg(&range_to_vm(&(r__)->base)->xe->drm, \
"%s: asid=%u, gpusvm=%p, vram=%d,%d, seqno=%lu, " \
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index ec5652c143ec..1ec90d9bc749 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -115,6 +115,39 @@ static inline struct xe_svm_range *to_xe_range(struct drm_gpusvm_range *r)
return container_of(r, struct xe_svm_range, base);
}
+/**
+ * xe_svm_range_start() - SVM range start address
+ * @range: SVM range
+ *
+ * Return: start address of range.
+ */
+static inline unsigned long xe_svm_range_start(struct xe_svm_range *range)
+{
+ return drm_gpusvm_range_start(&range->base);
+}
+
+/**
+ * xe_svm_range_end() - SVM range end address
+ * @range: SVM range
+ *
+ * Return: end address of range.
+ */
+static inline unsigned long xe_svm_range_end(struct xe_svm_range *range)
+{
+ return drm_gpusvm_range_end(&range->base);
+}
+
+/**
+ * xe_svm_range_size() - SVM range size
+ * @range: SVM range
+ *
+ * Return: Size of range.
+ */
+static inline unsigned long xe_svm_range_size(struct xe_svm_range *range)
+{
+ return drm_gpusvm_range_size(&range->base);
+}
+
#define xe_svm_assert_in_notifier(vm__) \
lockdep_assert_held_write(&(vm__)->svm.gpusvm.notifier_lock)
@@ -213,6 +246,21 @@ static inline struct xe_svm_range *to_xe_range(struct drm_gpusvm_range *r)
return NULL;
}
+static inline unsigned long xe_svm_range_start(struct xe_svm_range *range)
+{
+ return 0;
+}
+
+static inline unsigned long xe_svm_range_end(struct xe_svm_range *range)
+{
+ return 0;
+}
+
+static inline unsigned long xe_svm_range_size(struct xe_svm_range *range)
+{
+ return 0;
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
--
2.34.1
* [PATCH v2 06/32] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (4 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 05/32] drm/xe/svm: Make xe_svm_range_* end/start/size public Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-17 0:10 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 07/32] drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch Himal Prasad Ghimiray
` (37 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Prefetch of SVM ranges may need to increment the operation count by more
than one, so modify the function to accept an increment value as input.
v2:
- Call xe_vma_ops_incr_pt_update_ops only once for REMAP (Matthew Brost)
- Add check for 0 ops
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 28 +++++++++++++++++-----------
1 file changed, 17 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 0c69ef6b5ec5..4d215c55a778 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -806,13 +806,16 @@ static void xe_vma_ops_fini(struct xe_vma_ops *vops)
kfree(vops->pt_update_ops[i].ops);
}
-static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask)
+static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask, u8 inc_val)
{
int i;
+ if (!inc_val)
+ return;
+
for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
if (BIT(i) & tile_mask)
- ++vops->pt_update_ops[i].num_ops;
+ vops->pt_update_ops[i].num_ops += inc_val;
}
static void xe_vm_populate_rebind(struct xe_vma_op *op, struct xe_vma *vma,
@@ -842,7 +845,7 @@ static int xe_vm_ops_add_rebind(struct xe_vma_ops *vops, struct xe_vma *vma,
xe_vm_populate_rebind(op, vma, tile_mask);
list_add_tail(&op->link, &vops->list);
- xe_vma_ops_incr_pt_update_ops(vops, tile_mask);
+ xe_vma_ops_incr_pt_update_ops(vops, tile_mask, 1);
return 0;
}
@@ -977,7 +980,7 @@ xe_vm_ops_add_range_rebind(struct xe_vma_ops *vops,
xe_vm_populate_range_rebind(op, vma, range, tile_mask);
list_add_tail(&op->link, &vops->list);
- xe_vma_ops_incr_pt_update_ops(vops, tile_mask);
+ xe_vma_ops_incr_pt_update_ops(vops, tile_mask, 1);
return 0;
}
@@ -1062,7 +1065,7 @@ xe_vm_ops_add_range_unbind(struct xe_vma_ops *vops,
xe_vm_populate_range_unbind(op, range);
list_add_tail(&op->link, &vops->list);
- xe_vma_ops_incr_pt_update_ops(vops, range->tile_present);
+ xe_vma_ops_incr_pt_update_ops(vops, range->tile_present, 1);
return 0;
}
@@ -2493,7 +2496,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
!op->map.is_cpu_addr_mirror) ||
op->map.invalidate_on_bind)
xe_vma_ops_incr_pt_update_ops(vops,
- op->tile_mask);
+ op->tile_mask, 1);
break;
}
case DRM_GPUVA_OP_REMAP:
@@ -2502,6 +2505,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
gpuva_to_vma(op->base.remap.unmap->va);
bool skip = xe_vma_is_cpu_addr_mirror(old);
u64 start = xe_vma_start(old), end = xe_vma_end(old);
+ u8 num_remap_ops = 0;
if (op->base.remap.prev)
start = op->base.remap.prev->va.addr +
@@ -2554,7 +2558,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
(ULL)op->remap.start,
(ULL)op->remap.range);
} else {
- xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
+ num_remap_ops++;
}
}
@@ -2583,11 +2587,13 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
(ULL)op->remap.start,
(ULL)op->remap.range);
} else {
- xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
+ num_remap_ops++;
}
}
if (!skip)
- xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
+ num_remap_ops++;
+
+ xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, num_remap_ops);
break;
}
case DRM_GPUVA_OP_UNMAP:
@@ -2599,7 +2605,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
return -EBUSY;
if (!xe_vma_is_cpu_addr_mirror(vma))
- xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
+ xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
break;
case DRM_GPUVA_OP_PREFETCH:
vma = gpuva_to_vma(op->base.prefetch.va);
@@ -2611,7 +2617,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
}
if (!xe_vma_is_cpu_addr_mirror(vma))
- xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
+ xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
break;
default:
drm_warn(&vm->xe->drm, "NOT POSSIBLE");
--
2.34.1
* [PATCH v2 07/32] drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (5 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 06/32] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-17 2:53 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 08/32] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr Himal Prasad Ghimiray
` (36 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Add a flag in xe_vma_ops to indicate whether it contains SVM prefetch
ops.
v2:
- s/false/0 (Matthew Brost)
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 1 +
drivers/gpu/drm/xe/xe_vm_types.h | 3 +++
2 files changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 4d215c55a778..b1f1e85d26f7 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -3240,6 +3240,7 @@ static void xe_vma_ops_init(struct xe_vma_ops *vops, struct xe_vm *vm,
vops->q = q;
vops->syncs = syncs;
vops->num_syncs = num_syncs;
+ vops->flags = 0;
}
static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index ae0bcefdbfcd..d3c1209348e9 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -445,6 +445,9 @@ struct xe_vma_ops {
u32 num_syncs;
/** @pt_update_ops: page table update operations */
struct xe_vm_pgtable_update_ops pt_update_ops[XE_MAX_TILES_PER_DEVICE];
+ /** @flags: property flags of the xe_vma_ops */
+#define XE_VMA_OPS_HAS_SVM_PREFETCH BIT(0)
+ u32 flags;
#ifdef TEST_VM_OPS_ERROR
/** @inject_error: inject error to test error handling */
bool inject_error;
--
2.34.1
* [PATCH v2 08/32] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (6 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 07/32] drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-07 22:42 ` kernel test robot
2025-04-07 10:16 ` [PATCH v2 09/32] drm/xe/svm: Allow unaligned addresses and ranges for prefetch Himal Prasad Ghimiray
` (35 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
This update renames the lookup_vma function to xe_vm_find_vma_by_addr and
makes it accessible externally. The function, which looks up a VMA by
its address within a specified VM, will be utilized in upcoming patches.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_gt_pagefault.c | 24 +----------------------
drivers/gpu/drm/xe/xe_vm.c | 29 ++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 ++
3 files changed, 32 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index 9fa11e837dd1..19f5df40a4ed 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -72,28 +72,6 @@ static bool vma_is_valid(struct xe_tile *tile, struct xe_vma *vma)
!(BIT(tile->id) & vma->tile_invalidated);
}
-static bool vma_matches(struct xe_vma *vma, u64 page_addr)
-{
- if (page_addr > xe_vma_end(vma) - 1 ||
- page_addr + SZ_4K - 1 < xe_vma_start(vma))
- return false;
-
- return true;
-}
-
-static struct xe_vma *lookup_vma(struct xe_vm *vm, u64 page_addr)
-{
- struct xe_vma *vma = NULL;
-
- if (vm->usm.last_fault_vma) { /* Fast lookup */
- if (vma_matches(vm->usm.last_fault_vma, page_addr))
- vma = vm->usm.last_fault_vma;
- }
- if (!vma)
- vma = xe_vm_find_overlapping_vma(vm, page_addr, SZ_4K);
-
- return vma;
-}
static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
bool atomic, unsigned int id)
@@ -231,7 +209,7 @@ static int handle_pagefault(struct xe_gt *gt, struct pagefault *pf)
goto unlock_vm;
}
- vma = lookup_vma(vm, pf->page_addr);
+ vma = xe_vm_find_vma_by_addr(vm, pf->page_addr);
if (!vma) {
err = -EINVAL;
goto unlock_vm;
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index b1f1e85d26f7..ac308cfdaf28 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2139,6 +2139,35 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
return err;
}
+static bool vma_matches(struct xe_vma *vma, u64 page_addr)
+{
+ if (page_addr > xe_vma_end(vma) - 1 ||
+ page_addr + SZ_4K - 1 < xe_vma_start(vma))
+ return false;
+
+ return true;
+}
+
+/**
+ * xe_vm_find_vma_by_addr() - Find a VMA by its address
+ *
+ * @vm: the xe_vm the vma belongs to
+ * @page_address: address to look up
+ */
+struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr)
+{
+ struct xe_vma *vma = NULL;
+
+ if (vm->usm.last_fault_vma) { /* Fast lookup */
+ if (vma_matches(vm->usm.last_fault_vma, page_addr))
+ vma = vm->usm.last_fault_vma;
+ }
+ if (!vma)
+ vma = xe_vm_find_overlapping_vma(vm, page_addr, SZ_4K);
+
+ return vma;
+}
+
static const u32 region_to_mem_type[] = {
XE_PL_TT,
XE_PL_VRAM0,
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 0ef811fc2bde..99e164852f63 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -169,6 +169,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
!xe_vma_is_cpu_addr_mirror(vma);
}
+struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
+
/**
* to_userptr_vma() - Return a pointer to an embedding userptr vma
* @vma: Pointer to the embedded struct xe_vma
--
2.34.1
* [PATCH v2 09/32] drm/xe/svm: Allow unaligned addresses and ranges for prefetch
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (7 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 08/32] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-17 2:53 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 10/32] drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm Himal Prasad Ghimiray
` (34 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
The SVM prefetch operation can handle unaligned addresses and range
sizes, so relax the ioctl parameter checks to allow them for SVM
prefetch operations.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index ac308cfdaf28..57af2c37f927 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -3111,6 +3111,16 @@ ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_execute, ERRNO);
#define XE_64K_PAGE_MASK 0xffffull
#define ALL_DRM_XE_SYNCS_FLAGS (DRM_XE_SYNCS_FLAG_WAIT_FOR_OP)
+static bool addr_not_in_cpu_addr_vma(struct xe_vm *vm, u64 addr)
+{
+ struct xe_vma *vma;
+
+ down_write(&vm->lock);
+ vma = xe_vm_find_vma_by_addr(vm, addr);
+ up_write(&vm->lock);
+ return !vma || !xe_vma_is_cpu_addr_mirror(vma);
+}
+
static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
struct drm_xe_vm_bind *args,
struct drm_xe_vm_bind_op **bind_ops)
@@ -3219,8 +3229,12 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
}
if (XE_IOCTL_DBG(xe, obj_offset & ~PAGE_MASK) ||
- XE_IOCTL_DBG(xe, addr & ~PAGE_MASK) ||
- XE_IOCTL_DBG(xe, range & ~PAGE_MASK) ||
+ XE_IOCTL_DBG(xe, (addr & ~PAGE_MASK) &&
+ (addr_not_in_cpu_addr_vma(vm, addr) ||
+ op != DRM_XE_VM_BIND_OP_PREFETCH)) ||
+ XE_IOCTL_DBG(xe, (range & ~PAGE_MASK) &&
+ (addr_not_in_cpu_addr_vma(vm, addr) ||
+ op != DRM_XE_VM_BIND_OP_PREFETCH)) ||
XE_IOCTL_DBG(xe, !range &&
op != DRM_XE_VM_BIND_OP_UNMAP_ALL)) {
err = -EINVAL;
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 10/32] drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (8 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 09/32] drm/xe/svm: Allow unaligned addresses and ranges for prefetch Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-17 2:57 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 11/32] drm/xe/svm: Add function to determine if range needs VRAM migration Himal Prasad Ghimiray
` (33 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Define xe_svm_range_find_or_insert(), wrapping
drm_gpusvm_range_find_or_insert(), for reuse in prefetch.
Define xe_svm_range_get_pages(), wrapping
drm_gpusvm_range_get_pages(), for reuse in prefetch.
v2: pass the pagefault-defined drm_gpusvm context as a parameter
to xe_svm_range_find_or_insert() (Matthew Brost)
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 67 ++++++++++++++++++++++++++++++-------
drivers/gpu/drm/xe/xe_svm.h | 20 +++++++++++
2 files changed, 75 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 6648b4da0bca..8cd35553a927 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -735,7 +735,6 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
};
struct xe_svm_range *range;
- struct drm_gpusvm_range *r;
struct drm_exec exec;
struct dma_fence *fence;
struct xe_tile *tile = gt_to_tile(gt);
@@ -753,13 +752,11 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
if (err)
return err;
- r = drm_gpusvm_range_find_or_insert(&vm->svm.gpusvm, fault_addr,
- xe_vma_start(vma), xe_vma_end(vma),
- &ctx);
- if (IS_ERR(r))
- return PTR_ERR(r);
+ range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
+
+ if (IS_ERR(range))
+ return PTR_ERR(range);
- range = to_xe_range(r);
if (xe_svm_range_is_valid(range, tile))
return 0;
@@ -781,13 +778,9 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
}
range_debug(range, "GET PAGES");
- err = drm_gpusvm_range_get_pages(&vm->svm.gpusvm, r, &ctx);
+ err = xe_svm_range_get_pages(vm, range, &ctx);
/* Corner where CPU mappings have changed */
if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
- if (err == -EOPNOTSUPP) {
- range_debug(range, "PAGE FAULT - EVICT PAGES");
- drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);
- }
drm_dbg(&vm->xe->drm,
"Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
@@ -866,6 +859,56 @@ int xe_svm_bo_evict(struct xe_bo *bo)
return drm_gpusvm_evict_to_ram(&bo->devmem_allocation);
}
+/**
+ * xe_svm_range_find_or_insert() - Find or insert GPU SVM range
+ * @vm: xe_vm pointer
+ * @addr: address for which range needs to be found/inserted
+ * @vma: Pointer to struct xe_vma which mirrors CPU
+ * @ctx: GPU SVM context
+ *
+ * This function finds or inserts a newly allocated SVM range based on the
+ * address.
+ *
+ * Return: Pointer to the SVM range on success, ERR_PTR() on failure.
+ */
+struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
+ struct xe_vma *vma, struct drm_gpusvm_ctx *ctx)
+{
+ struct drm_gpusvm_range *r;
+
+ r = drm_gpusvm_range_find_or_insert(&vm->svm.gpusvm, max(addr, xe_vma_start(vma)),
+ xe_vma_start(vma), xe_vma_end(vma), ctx);
+ if (IS_ERR(r))
+ return ERR_CAST(r);
+
+ return to_xe_range(r);
+}
+
+/**
+ * xe_svm_range_get_pages() - Get pages for a SVM range
+ * @vm: Pointer to the struct xe_vm
+ * @range: Pointer to the xe SVM range structure
+ * @ctx: GPU SVM context
+ *
+ * This function gets pages for a SVM range and ensures they are mapped for
+ * DMA access. In case of failure with -EOPNOTSUPP, it evicts the range.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
+ struct drm_gpusvm_ctx *ctx)
+{
+ int err = 0;
+
+ err = drm_gpusvm_range_get_pages(&vm->svm.gpusvm, &range->base, ctx);
+ if (err == -EOPNOTSUPP) {
+ range_debug(range, "PAGE FAULT - EVICT PAGES");
+ drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);
+ }
+
+ return err;
+}
+
#if IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
static struct drm_pagemap_device_addr
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 1ec90d9bc749..9c4c3aeacc6c 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -89,6 +89,12 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
}
#endif
+struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
+ struct xe_vma *vma, struct drm_gpusvm_ctx *ctx);
+
+int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
+ struct drm_gpusvm_ctx *ctx);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -241,6 +247,20 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
return -EOPNOTSUPP;
}
+static inline
+struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
+ struct xe_vma *vma, struct drm_gpusvm_ctx *ctx)
+{
+ return ERR_PTR(-EINVAL);
+}
+
+static inline
+int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
+ struct drm_gpusvm_ctx *ctx)
+{
+ return -EINVAL;
+}
+
static inline struct xe_svm_range *to_xe_range(struct drm_gpusvm_range *r)
{
return NULL;
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 11/32] drm/xe/svm: Add function to determine if range needs VRAM migration
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (9 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 10/32] drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-17 3:05 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 12/32] drm/gpusvm: Introduce vram_only flag for VRAM allocation Himal Prasad Ghimiray
` (32 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Introduce xe_svm_range_needs_migrate_to_vram(), which determines whether a
range needs migration to VRAM. For pagefaults, migration is tried at least once.
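The decision logic can be modeled as a pure predicate (a userspace sketch; the parameter names and the flattened boolean inputs are illustrative, standing in for the `range`, `vm`, and platform state the kernel function reads):

```c
#include <stdbool.h>
#include <stdint.h>

#define MODEL_SZ_64K ((uint64_t)64 * 1024)

/* region 0 = stay in system memory, non-zero = a VRAM placement */
static bool needs_migrate_to_vram(bool migrate_devmem, uint32_t region,
				  bool is_dgfx, bool already_in_vram,
				  uint64_t range_size, bool supports_4k)
{
	if (!migrate_devmem)
		return false;	/* range cannot be placed in device memory */
	if (!region)
		return false;	/* default placement is system memory */
	if (!is_dgfx)
		return false;	/* platform has no VRAM */
	if (already_in_vram)
		return false;	/* nothing to do */
	if (range_size <= MODEL_SZ_64K && !supports_4k)
		return false;	/* small-range migration unsupported */
	return true;
}
```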
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 49 +++++++++++++++++++++++++++++++++++--
drivers/gpu/drm/xe/xe_svm.h | 10 ++++++++
2 files changed, 57 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 8cd35553a927..f4ae3feaf9d3 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -709,6 +709,51 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
}
#endif
+static bool supports_4K_migration(struct xe_device *xe)
+{
+ if (xe->info.platform == XE_BATTLEMAGE)
+ return true;
+
+ return false;
+}
+
+/**
+ * xe_svm_range_needs_migrate_to_vram() - SVM range needs migrate to VRAM or not
+ * @range: SVM range for which migration needs to be decided
+ * @vma: vma which has range
+ * @region: default placement for range
+ *
+ * Return: True if the range needs migration and migration is supported, else false
+ */
+bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
+ u32 region)
+{
+ struct xe_vm *vm = range_to_vm(&range->base);
+ u64 range_size = xe_svm_range_size(range);
+ bool needs_migrate = false;
+
+ if (!range->base.flags.migrate_devmem)
+ return false;
+
+ needs_migrate = region;
+
+ if (needs_migrate && !IS_DGFX(vm->xe)) {
+ drm_warn(&vm->xe->drm, "Platform doesn't support VRAM\n");
+ return false;
+ }
+
+ if (needs_migrate && xe_svm_range_in_vram(range)) {
+ drm_info(&vm->xe->drm, "Range is already in VRAM\n");
+ return false;
+ }
+
+ if (needs_migrate && range_size <= SZ_64K && !supports_4K_migration(vm->xe)) {
+ drm_warn(&vm->xe->drm, "Platform doesn't support SZ_4K range migration\n");
+ return false;
+ }
+
+ return needs_migrate;
+}
/**
* xe_svm_handle_pagefault() - SVM handle page fault
@@ -763,8 +808,8 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
range_debug(range, "PAGE FAULT");
/* XXX: Add migration policy, for now migrate range once */
- if (!range->skip_migrate && range->base.flags.migrate_devmem &&
- xe_svm_range_size(range) >= SZ_64K) {
+ if (!range->skip_migrate &&
+ xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
range->skip_migrate = true;
err = xe_svm_alloc_vram(vm, tile, range, &ctx);
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 9c4c3aeacc6c..d5be8229ca7e 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -95,6 +95,9 @@ struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
struct drm_gpusvm_ctx *ctx);
+bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
+ u32 region);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -281,6 +284,13 @@ static inline unsigned long xe_svm_range_size(struct xe_svm_range *range)
return 0;
}
+static inline
+bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
+ u32 region)
+{
+ return false;
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 12/32] drm/gpusvm: Introduce vram_only flag for VRAM allocation
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (10 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 11/32] drm/xe/svm: Add function to determine if range needs VRAM migration Himal Prasad Ghimiray
@ 2025-04-07 10:16 ` Himal Prasad Ghimiray
2025-04-17 3:07 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram Himal Prasad Ghimiray
` (31 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:16 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
This commit adds a new flag, vram_only, to the drm_gpusvm_ctx structure.
The purpose of this flag is to ensure that the get_pages function maps
memory exclusively from the device's VRAM. If the allocation from VRAM
fails, the function returns an -EFAULT error.
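A minimal sketch of the semantics (a userspace model; the helper name and error constant are illustrative): while collecting pages, a page that is not device memory aborts with -EFAULT when `vram_only` is set, instead of being DMA-mapped from system memory.

```c
#include <stdbool.h>

#define MODEL_EFAULT 14

/* Per-page check inside a modeled get_pages loop. */
static int map_page(bool page_is_devmem, bool vram_only)
{
	if (!page_is_devmem && vram_only)
		return -MODEL_EFAULT;	/* caller may migrate and retry */
	return 0;			/* page accepted */
}
```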
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 5 +++++
include/drm/drm_gpusvm.h | 2 ++
2 files changed, 7 insertions(+)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 2451c816edd5..149ac56eff70 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1454,6 +1454,11 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
goto err_unmap;
}
+ if (ctx->vram_only) {
+ err = -EFAULT;
+ goto err_unmap;
+ }
+
addr = dma_map_page(gpusvm->drm->dev,
page, 0,
PAGE_SIZE << order,
diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
index df120b4d1f83..8093cc6ab1f4 100644
--- a/include/drm/drm_gpusvm.h
+++ b/include/drm/drm_gpusvm.h
@@ -286,6 +286,7 @@ struct drm_gpusvm {
* @in_notifier: entering from a MMU notifier
* @read_only: operating on read-only memory
* @devmem_possible: possible to use device memory
+ * @vram_only: Use only device memory
*
* Context that is DRM GPUSVM is operating in (i.e. user arguments).
*/
@@ -294,6 +295,7 @@ struct drm_gpusvm_ctx {
unsigned int in_notifier :1;
unsigned int read_only :1;
unsigned int devmem_possible :1;
+ unsigned int vram_only :1;
};
int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (11 preceding siblings ...)
2025-04-07 10:16 ` [PATCH v2 12/32] drm/gpusvm: Introduce vram_only flag for VRAM allocation Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-04-17 4:19 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges Himal Prasad Ghimiray
` (30 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Ranges can be invalidated between VRAM allocation and get_pages; ensure
the DMA mapping happens from VRAM only in case of atomic access. Retry
3 times before reporting the fault in case of concurrent CPU/GPU access.
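The bounded retry can be sketched as follows (a userspace model of the control flow only; `fault_attempt_fn` and the function names are illustrative, collapsing the garbage-collect/allocate/get_pages sequence into one attempt callback):

```c
/* One fault-servicing attempt; returns 0 on success, a negative errno on
 * a transient failure (VRAM allocation raced, CPU mappings changed). */
typedef int (*fault_attempt_fn)(void *arg);

static int handle_fault_bounded(fault_attempt_fn try_once, void *arg)
{
	int retry_count = 3;
	int err = 0;

	while (retry_count--) {
		err = try_once(arg);
		if (!err)
			return 0;
	}
	return err;	/* retries exhausted: report the last error */
}
```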
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 43 ++++++++++++++++++++++++-------------
1 file changed, 28 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index f4ae3feaf9d3..7ec7ecd7eb1f 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -778,11 +778,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
.check_pages_threshold = IS_DGFX(vm->xe) &&
IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
+ .vram_only = 0,
};
struct xe_svm_range *range;
struct drm_exec exec;
struct dma_fence *fence;
struct xe_tile *tile = gt_to_tile(gt);
+ int retry_count = 3;
ktime_t end = 0;
int err;
@@ -792,6 +794,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
retry:
+ retry_count--;
/* Always process UNMAPs first so view SVM ranges is current */
err = xe_svm_garbage_collector(vm);
if (err)
@@ -807,30 +810,40 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
range_debug(range, "PAGE FAULT");
- /* XXX: Add migration policy, for now migrate range once */
- if (!range->skip_migrate &&
- xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
- range->skip_migrate = true;
-
+ if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
err = xe_svm_alloc_vram(vm, tile, range, &ctx);
if (err) {
- drm_dbg(&vm->xe->drm,
- "VRAM allocation failed, falling back to "
- "retrying fault, asid=%u, errno=%pe\n",
- vm->usm.asid, ERR_PTR(err));
- goto retry;
+ if (retry_count) {
+ drm_dbg(&vm->xe->drm,
+ "VRAM allocation failed, falling back to retrying fault, asid=%u, errno=%pe\n",
+ vm->usm.asid, ERR_PTR(err));
+ goto retry;
+ } else {
+ drm_err(&vm->xe->drm,
+ "VRAM allocation failed, retry count exceeded, asid=%u, errno=%pe\n",
+ vm->usm.asid, ERR_PTR(err));
+ return err;
+ }
}
+
}
+ if (atomic)
+ ctx.vram_only = 1;
+
range_debug(range, "GET PAGES");
err = xe_svm_range_get_pages(vm, range, &ctx);
/* Corner where CPU mappings have changed */
if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
- drm_dbg(&vm->xe->drm,
- "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
- vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
- range_debug(range, "PAGE FAULT - RETRY PAGES");
- goto retry;
+ if (retry_count) {
+ drm_dbg(&vm->xe->drm, "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
+ vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
+ range_debug(range, "PAGE FAULT - RETRY PAGES");
+ goto retry;
+ } else {
+ drm_err(&vm->xe->drm, "Get pages failed, retry count exceeded, asid=%u, errno=%pe\n",
+ vm->usm.asid, ERR_PTR(err));
+ }
}
if (err) {
range_debug(range, "PAGE FAULT - FAIL PAGE COLLECT");
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (12 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-04-17 4:54 ` Matthew Brost
2025-04-24 23:48 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 15/32] drm/xe/vm: Add debug prints for SVM range prefetch Himal Prasad Ghimiray
` (29 subsequent siblings)
43 siblings, 2 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
This commit adds prefetch support for SVM ranges, utilizing the
existing vm_bind ioctl functionality to achieve this.
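The core of the op-creation step is a walk that repeatedly finds or inserts an SVM range at the current address and advances to that range's end until the requested prefetch window or the VMA is exhausted. A userspace sketch (ranges are modeled as fixed 64 KiB chunks here, an assumption; real ranges have variable sizes chosen by GPU SVM):

```c
#include <stdint.h>

#define MODEL_CHUNK ((uint64_t)64 * 1024)

/* Count how many ranges the walk creates for [addr, range_end),
 * clamped by the VMA end, mirroring the alloc_next_range loop. */
static unsigned int walk_ranges(uint64_t addr, uint64_t range_end,
				uint64_t vma_end)
{
	unsigned int count = 0;

	for (;;) {
		/* modeled stand-in for xe_svm_range_end(svm_range) */
		uint64_t chunk_end = (addr / MODEL_CHUNK + 1) * MODEL_CHUNK;

		count++;
		if (range_end > chunk_end && chunk_end < vma_end) {
			addr = chunk_end;
			continue;
		}
		break;
	}
	return count;
}
```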
v2: rebase
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 61 +++++++++---
drivers/gpu/drm/xe/xe_vm.c | 185 +++++++++++++++++++++++++++++++++++--
2 files changed, 222 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index de4e3edda758..59dc065fae93 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -1458,7 +1458,8 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
struct xe_vm *vm = pt_update->vops->vm;
struct xe_vma_ops *vops = pt_update->vops;
struct xe_vma_op *op;
- int err;
+ int ranges_count;
+ int err, i;
err = xe_pt_pre_commit(pt_update);
if (err)
@@ -1467,20 +1468,33 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
xe_svm_notifier_lock(vm);
list_for_each_entry(op, &vops->list, link) {
- struct xe_svm_range *range = op->map_range.range;
+ struct xe_svm_range *range = NULL;
if (op->subop == XE_VMA_SUBOP_UNMAP_RANGE)
continue;
- xe_svm_range_debug(range, "PRE-COMMIT");
-
- xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
- xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
+ if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
+ xe_assert(vm->xe,
+ xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.prefetch.va)));
+ ranges_count = op->prefetch_range.ranges_count;
+ } else {
+ xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
+ xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
+ ranges_count = 1;
+ }
- if (!xe_svm_range_pages_valid(range)) {
- xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
- xe_svm_notifier_unlock(vm);
- return -EAGAIN;
+ for (i = 0; i < ranges_count; i++) {
+ if (op->base.op == DRM_GPUVA_OP_PREFETCH)
+ range = xa_load(&op->prefetch_range.range, i);
+ else
+ range = op->map_range.range;
+ xe_svm_range_debug(range, "PRE-COMMIT");
+
+ if (!xe_svm_range_pages_valid(range)) {
+ xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
+ xe_svm_notifier_unlock(vm);
+ return -EAGAIN;
+ }
}
}
@@ -2065,11 +2079,21 @@ static int op_prepare(struct xe_vm *vm,
{
struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
- if (xe_vma_is_cpu_addr_mirror(vma))
- break;
+ if (xe_vma_is_cpu_addr_mirror(vma)) {
+ struct xe_svm_range *range;
+ int i;
- err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
- pt_update_ops->wait_vm_kernel = true;
+ for (i = 0; i < op->prefetch_range.ranges_count; i++) {
+ range = xa_load(&op->prefetch_range.range, i);
+ err = bind_range_prepare(vm, tile, pt_update_ops,
+ vma, range);
+ if (err)
+ return err;
+ }
+ } else {
+ err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
+ pt_update_ops->wait_vm_kernel = true;
+ }
break;
}
case DRM_GPUVA_OP_DRIVER:
@@ -2273,9 +2297,16 @@ static void op_commit(struct xe_vm *vm,
{
struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
- if (!xe_vma_is_cpu_addr_mirror(vma))
+ if (xe_vma_is_cpu_addr_mirror(vma)) {
+ for (int i = 0 ; i < op->prefetch_range.ranges_count; i++) {
+ struct xe_svm_range *range = xa_load(&op->prefetch_range.range, i);
+
+ range_present_and_invalidated_tile(vm, range, tile->id);
+ }
+ } else {
bind_op_commit(vm, tile, pt_update_ops, vma, fence,
fence2, false);
+ }
break;
}
case DRM_GPUVA_OP_DRIVER:
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 57af2c37f927..ffd7ad664921 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -798,10 +798,36 @@ static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
}
ALLOW_ERROR_INJECTION(xe_vma_ops_alloc, ERRNO);
+static void clean_svm_prefetch_op(struct xe_vma_op *op)
+{
+ struct xe_vma *vma;
+
+ vma = gpuva_to_vma(op->base.prefetch.va);
+
+ if (op->base.op == DRM_GPUVA_OP_PREFETCH && xe_vma_is_cpu_addr_mirror(vma)) {
+ xa_destroy(&op->prefetch_range.range);
+ op->prefetch_range.ranges_count = 0;
+ }
+}
+
+static void clean_svm_prefetch_in_vma_ops(struct xe_vma_ops *vops)
+{
+ struct xe_vma_op *op;
+
+ if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
+ return;
+
+ list_for_each_entry(op, &vops->list, link) {
+ clean_svm_prefetch_op(op);
+ }
+}
+
static void xe_vma_ops_fini(struct xe_vma_ops *vops)
{
int i;
+ clean_svm_prefetch_in_vma_ops(vops);
+
for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
kfree(vops->pt_update_ops[i].ops);
}
@@ -2248,13 +2274,25 @@ static bool __xe_vm_needs_clear_scratch_pages(struct xe_vm *vm, u32 bind_flags)
return true;
}
+static void clean_svm_prefetch_in_gpuva_ops(struct drm_gpuva_ops *ops)
+{
+ struct drm_gpuva_op *__op;
+
+ drm_gpuva_for_each_op(__op, ops) {
+ struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+
+ clean_svm_prefetch_op(op);
+ }
+}
+
/*
* Create operations list from IOCTL arguments, setup operations fields so parse
* and commit steps are decoupled from IOCTL arguments. This step can fail.
*/
static struct drm_gpuva_ops *
-vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
- u64 bo_offset_or_userptr, u64 addr, u64 range,
+vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
+ struct xe_bo *bo, u64 bo_offset_or_userptr,
+ u64 addr, u64 range,
u32 operation, u32 flags,
u32 prefetch_region, u16 pat_index)
{
@@ -2262,6 +2300,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
struct drm_gpuva_ops *ops;
struct drm_gpuva_op *__op;
struct drm_gpuvm_bo *vm_bo;
+ u64 range_end = addr + range;
int err;
lockdep_assert_held_write(&vm->lock);
@@ -2323,14 +2362,61 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
op->map.invalidate_on_bind =
__xe_vm_needs_clear_scratch_pages(vm, flags);
} else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
- op->prefetch.region = prefetch_region;
- }
+ struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
+
+ if (!xe_vma_is_cpu_addr_mirror(vma)) {
+ op->prefetch.region = prefetch_region;
+ break;
+ }
+ struct drm_gpusvm_ctx ctx = {
+ .read_only = xe_vma_read_only(vma),
+ .devmem_possible = IS_DGFX(vm->xe) &&
+ IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
+ .check_pages_threshold = IS_DGFX(vm->xe) &&
+ IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
+ SZ_64K : 0,
+ };
+
+ op->prefetch_range.region = prefetch_region;
+ struct xe_svm_range *svm_range;
+ int i = 0;
+
+ xa_init(&op->prefetch_range.range);
+ op->prefetch_range.ranges_count = 0;
+alloc_next_range:
+ svm_range = xe_svm_range_find_or_insert(vm, addr, vma, &ctx);
+
+ if (PTR_ERR(svm_range) == -ENOENT)
+ break;
+
+ if (IS_ERR(svm_range)) {
+ err = PTR_ERR(svm_range);
+ goto unwind_prefetch_ops;
+ }
+
+ xa_store(&op->prefetch_range.range, i, svm_range, GFP_KERNEL);
+ op->prefetch_range.ranges_count++;
+ vops->flags |= XE_VMA_OPS_HAS_SVM_PREFETCH;
+
+ if (range_end > xe_svm_range_end(svm_range) &&
+ xe_svm_range_end(svm_range) < xe_vma_end(vma)) {
+ i++;
+ addr = xe_svm_range_end(svm_range);
+ goto alloc_next_range;
+ }
+ }
print_op(vm->xe, __op);
}
return ops;
+
+unwind_prefetch_ops:
+ clean_svm_prefetch_in_gpuva_ops(ops);
+ drm_gpuva_ops_free(&vm->gpuvm, ops);
+ return ERR_PTR(err);
}
+
ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
@@ -2645,8 +2731,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
return err;
}
- if (!xe_vma_is_cpu_addr_mirror(vma))
+ if (xe_vma_is_cpu_addr_mirror(vma))
+ xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask,
+ op->prefetch_range.ranges_count);
+ else
xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
+
break;
default:
drm_warn(&vm->xe->drm, "NOT POSSIBLE");
@@ -2772,6 +2862,58 @@ static int check_ufence(struct xe_vma *vma)
return 0;
}
+static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
+ struct xe_vma_op *op)
+{
+ int err = 0;
+
+ if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
+ struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
+ struct drm_gpusvm_ctx ctx = {
+ .read_only = xe_vma_read_only(vma),
+ .devmem_possible = IS_DGFX(vm->xe) &&
+ IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
+ .check_pages_threshold = IS_DGFX(vm->xe) &&
+ IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
+ SZ_64K : 0,
+ };
+ struct xe_svm_range *svm_range;
+ struct xe_tile *tile;
+ u32 region;
+ int i;
+
+ if (!xe_vma_is_cpu_addr_mirror(vma))
+ return 0;
+
+ region = op->prefetch_range.region;
+
+ /* TODO: Threading the migration */
+ for (i = 0; i < op->prefetch_range.ranges_count; i++) {
+ svm_range = xa_load(&op->prefetch_range.range, i);
+ if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
+ tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
+ err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
+ if (err) {
+ drm_err(&vm->xe->drm, "VRAM allocation failed, can be retried from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
+ vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
+ return -ENODATA;
+ }
+ }
+
+ err = xe_svm_range_get_pages(vm, svm_range, &ctx);
+ if (err) {
+ if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM)
+ err = -ENODATA;
+
+ drm_err(&vm->xe->drm, "Get pages failed, asid=%u, gpusvm=%p, errno=%pe\n",
+ vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
+ return err;
+ }
+ }
+ }
+ return err;
+}
+
static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
struct xe_vma_op *op)
{
@@ -2809,7 +2951,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
case DRM_GPUVA_OP_PREFETCH:
{
struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
- u32 region = op->prefetch.region;
+ u32 region;
+
+ if (xe_vma_is_cpu_addr_mirror(vma))
+ region = op->prefetch_range.region;
+ else
+ region = op->prefetch.region;
xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
@@ -2828,6 +2975,23 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
return err;
}
+static int xe_vma_ops_execute_ready(struct xe_vm *vm, struct xe_vma_ops *vops)
+{
+ struct xe_vma_op *op;
+ int err;
+
+ if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
+ return 0;
+
+ list_for_each_entry(op, &vops->list, link) {
+ err = prefetch_ranges_lock_and_prep(vm, op);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
struct xe_vm *vm,
struct xe_vma_ops *vops)
@@ -2850,7 +3014,6 @@ static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
vm->xe->vm_inject_error_position == FORCE_OP_ERROR_LOCK)
return -ENOSPC;
#endif
-
return 0;
}
@@ -3492,7 +3655,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
u16 pat_index = bind_ops[i].pat_index;
- ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
+ ops[i] = vm_bind_ioctl_ops_create(vm, &vops, bos[i], obj_offset,
addr, range, op, flags,
prefetch_region, pat_index);
if (IS_ERR(ops[i])) {
@@ -3525,6 +3688,10 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
if (err)
goto unwind_ops;
+ err = xe_vma_ops_execute_ready(vm, &vops);
+ if (err)
+ goto unwind_ops;
+
fence = vm_bind_ioctl_ops_execute(vm, &vops);
if (IS_ERR(fence))
err = PTR_ERR(fence);
@@ -3594,7 +3761,7 @@ struct dma_fence *xe_vm_bind_kernel_bo(struct xe_vm *vm, struct xe_bo *bo,
xe_vma_ops_init(&vops, vm, q, NULL, 0);
- ops = vm_bind_ioctl_ops_create(vm, bo, 0, addr, bo->size,
+ ops = vm_bind_ioctl_ops_create(vm, &vops, bo, 0, addr, bo->size,
DRM_XE_VM_BIND_OP_MAP, 0, 0,
vm->xe->pat.idx[cache_lvl]);
if (IS_ERR(ops)) {
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 15/32] drm/xe/vm: Add debug prints for SVM range prefetch
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (13 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-04-17 4:56 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops Himal Prasad Ghimiray
` (28 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Introduce debug logs for the prefetch operation of SVM ranges.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index ffd7ad664921..fd98e74485f4 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2398,6 +2398,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
xa_store(&op->prefetch_range.range, i, svm_range, GFP_KERNEL);
op->prefetch_range.ranges_count++;
vops->flags |= XE_VMA_OPS_HAS_SVM_PREFETCH;
+ xe_svm_range_debug(svm_range, "PREFETCH - RANGE CREATED");
if (range_end > xe_svm_range_end(svm_range) &&
xe_svm_range_end(svm_range) < xe_vma_end(vma)) {
@@ -2898,6 +2899,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
return -ENODATA;
}
+ xe_svm_range_debug(svm_range, "PREFETCH - RANGE MIGRATED TO VRAM");
}
err = xe_svm_range_get_pages(vm, svm_range, &ctx);
@@ -2909,6 +2911,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
return err;
}
+ xe_svm_range_debug(svm_range, "PREFETCH - RANGE GET PAGES DONE");
}
}
return err;
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (14 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 15/32] drm/xe/vm: Add debug prints for SVM range prefetch Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-04-07 10:30 ` Boris Brezillon
2025-04-07 22:42 ` kernel test robot
2025-04-07 10:17 ` [PATCH v2 17/32] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
` (27 subsequent siblings)
43 siblings, 2 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe
Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray,
Danilo Krummrich, Boris Brezillon, dri-devel
- DRM_GPUVM_SM_MAP_NOT_MADVISE: Default sm_map operations for the input
range.
- DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE: This flag is used by
drm_gpuvm_sm_map_ops_create to iterate over GPUVMAs in the
user-provided range and split an existing non-GEM-object VMA if the
start or end of the input range lies within it. The operations can
create up to 2 REMAPs and 2 MAPs. The Xe driver uses this mode to
assign attributes to GPUVMAs within the user-defined range. Unlike the
default mode, operations created with this flag never include UNMAPs
or merges, and the result may contain no operations at all.
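The split semantics described above can be modelled in plain C. This is a hypothetical userspace sketch, not the kernel implementation; the struct and function names here are invented for illustration. Per VMA, a partial overlap with the madvise range produces one REMAP (with a prev and/or next part) plus one MAP for the covered piece, a fully covered VMA needs no ops, and the "up to 2 REMAPs and 2 MAPs" total comes from splitting both the first and the last VMA of a request:

```c
#include <assert.h>

/*
 * Hypothetical model of how the madvise flavour of __drm_gpuvm_sm_map()
 * treats a single non-GEM VMA [va_start, va_end) against a madvise
 * request [req_start, req_end).
 */
struct op_count {
	int remaps;
	int maps;
};

static struct op_count madvise_ops_for_vma(unsigned long va_start,
					   unsigned long va_end,
					   unsigned long req_start,
					   unsigned long req_end)
{
	struct op_count c = { 0, 0 };

	/* No overlap: the VMA is not touched at all. */
	if (req_end <= va_start || req_start >= va_end)
		return c;

	/* Fully covered: attributes apply to the whole VMA, no split. */
	if (req_start <= va_start && req_end >= va_end)
		return c;

	/* Partial overlap: one REMAP (prev and/or next) + one MAP. */
	c.remaps = 1;
	c.maps = 1;
	return c;
}
```

A request whose start lies in one VMA and whose end lies in another thus yields two REMAPs and two MAPs in total, matching the bound stated above.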
v2
- use drm_gpuvm_sm_map_ops_create with flags instead of defining new
ops_create (Danilo)
- Add doc (Danilo)
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Boris Brezillon <bbrezillon@kernel.org>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
RFC Link:
https://lore.kernel.org/intel-xe/20250314080226.2059819-1-himal.prasad.ghimiray@intel.com/T/#mb706bd1c55232110e42dc7d5c05de61946982472
---
drivers/gpu/drm/drm_gpuvm.c | 93 ++++++++++++++++++++------
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 1 +
drivers/gpu/drm/xe/xe_vm.c | 1 +
include/drm/drm_gpuvm.h | 25 ++++++-
4 files changed, 98 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f9eb56f24bef..9d09d177b9fa 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2102,10 +2102,13 @@ static int
__drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
const struct drm_gpuvm_ops *ops, void *priv,
u64 req_addr, u64 req_range,
+ enum drm_gpuvm_sm_map_ops_flags flags,
struct drm_gem_object *req_obj, u64 req_offset)
{
struct drm_gpuva *va, *next;
u64 req_end = req_addr + req_range;
+ bool is_madvise_ops = (flags == DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE);
+ bool needs_map = !is_madvise_ops;
int ret;
if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
@@ -2118,26 +2121,35 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
u64 range = va->va.range;
u64 end = addr + range;
bool merge = !!va->gem.obj;
+ bool skip_madvise_ops = is_madvise_ops && merge;
+ needs_map = false;
if (addr == req_addr) {
merge &= obj == req_obj &&
offset == req_offset;
if (end == req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
break;
}
if (end < req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
continue;
}
if (end > req_end) {
+ if (skip_madvise_ops)
+ break;
+
struct drm_gpuva_op_map n = {
.va.addr = req_end,
.va.range = range - req_range,
@@ -2152,6 +2164,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
ret = op_remap_cb(ops, priv, NULL, &n, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ needs_map = true;
break;
}
} else if (addr < req_addr) {
@@ -2169,20 +2184,42 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
u.keep = merge;
if (end == req_end) {
+ if (skip_madvise_ops)
+ break;
+
ret = op_remap_cb(ops, priv, &p, NULL, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ needs_map = true;
+
break;
}
if (end < req_end) {
+ if (skip_madvise_ops)
+ continue;
+
ret = op_remap_cb(ops, priv, &p, NULL, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops) {
+ ret = op_map_cb(ops, priv, req_addr,
+ min(end - req_addr, req_end - end),
+ NULL, req_offset);
+ if (ret)
+ return ret;
+ }
+
continue;
}
if (end > req_end) {
+ if (skip_madvise_ops)
+ break;
+
struct drm_gpuva_op_map n = {
.va.addr = req_end,
.va.range = end - req_end,
@@ -2194,6 +2231,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
ret = op_remap_cb(ops, priv, &p, &n, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ needs_map = true;
break;
}
} else if (addr > req_addr) {
@@ -2202,20 +2242,29 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
(addr - req_addr);
if (end == req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
+
break;
}
if (end < req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
+
continue;
}
if (end > req_end) {
+ if (skip_madvise_ops)
+ break;
+
struct drm_gpuva_op_map n = {
.va.addr = req_end,
.va.range = end - req_end,
@@ -2230,14 +2279,16 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
ret = op_remap_cb(ops, priv, NULL, &n, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ return op_map_cb(ops, priv, addr,
+ (req_end - addr), NULL, req_offset);
break;
}
}
}
-
- return op_map_cb(ops, priv,
- req_addr, req_range,
- req_obj, req_offset);
+ return needs_map ? op_map_cb(ops, priv, req_addr,
+ req_range, req_obj, req_offset) : 0;
}
static int
@@ -2336,15 +2387,15 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
struct drm_gem_object *req_obj, u64 req_offset)
{
const struct drm_gpuvm_ops *ops = gpuvm->ops;
+ enum drm_gpuvm_sm_map_ops_flags flags = DRM_GPUVM_SM_MAP_NOT_MADVISE;
if (unlikely(!(ops && ops->sm_step_map &&
ops->sm_step_remap &&
ops->sm_step_unmap)))
return -EINVAL;
- return __drm_gpuvm_sm_map(gpuvm, ops, priv,
- req_addr, req_range,
- req_obj, req_offset);
+ return __drm_gpuvm_sm_map(gpuvm, ops, priv, req_addr, req_range,
+ flags, req_obj, req_offset);
}
EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
@@ -2486,6 +2537,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
* @gpuvm: the &drm_gpuvm representing the GPU VA space
* @req_addr: the start address of the new mapping
* @req_range: the range of the new mapping
+ * @flags: ops flag from &enum drm_gpuvm_sm_map_ops_flags selecting madvise or default behavior
* @req_obj: the &drm_gem_object to map
* @req_offset: the offset within the &drm_gem_object
*
@@ -2516,6 +2568,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
struct drm_gpuva_ops *
drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
u64 req_addr, u64 req_range,
+ enum drm_gpuvm_sm_map_ops_flags flags,
struct drm_gem_object *req_obj, u64 req_offset)
{
struct drm_gpuva_ops *ops;
@@ -2535,7 +2588,7 @@ drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
args.ops = ops;
ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
- req_addr, req_range,
+ req_addr, req_range, flags,
req_obj, req_offset);
if (ret)
goto err_free_ops;
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 48f105239f42..26e13fcdbdb8 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1303,6 +1303,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
op->va.addr,
op->va.range,
+ DRM_GPUVM_SM_MAP_NOT_MADVISE,
op->gem.obj,
op->gem.offset);
if (IS_ERR(op->ops)) {
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index fd98e74485f4..27a8dbe709c2 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2314,6 +2314,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
case DRM_XE_VM_BIND_OP_MAP:
case DRM_XE_VM_BIND_OP_MAP_USERPTR:
ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, addr, range,
+ DRM_GPUVM_SM_MAP_NOT_MADVISE,
obj, bo_offset_or_userptr);
break;
case DRM_XE_VM_BIND_OP_UNMAP:
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 2a9629377633..6c2452537d4f 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -211,6 +211,27 @@ enum drm_gpuvm_flags {
DRM_GPUVM_USERBITS = BIT(1),
};
+/**
+ * enum drm_gpuvm_sm_map_ops_flags - flags for drm_gpuvm split/merge ops
+ */
+enum drm_gpuvm_sm_map_ops_flags {
+ /**
+ * @DRM_GPUVM_SM_MAP_NOT_MADVISE: DEFAULT sm_map ops
+ */
+ DRM_GPUVM_SM_MAP_NOT_MADVISE = 0,
+
+ /**
+ * @DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE: This flag is used by
+ * drm_gpuvm_sm_map_ops_create to iterate over GPUVMA's in the
+ * user-provided range and split the existing non-GEM object VMA if the
+ * start or end of the input range lies within it. The operations can
+ * create up to 2 REMAPS and 2 MAPs. Unlike drm_gpuvm_sm_map_ops_flags
+ * in default mode, the operation with this flag will never have UNMAPs and
+ * merges, and can be without any final operations.
+ */
+ DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE = BIT(0),
+};
+
/**
* struct drm_gpuvm - DRM GPU VA Manager
*
@@ -1059,8 +1080,8 @@ struct drm_gpuva_ops {
#define drm_gpuva_next_op(op) list_next_entry(op, entry)
struct drm_gpuva_ops *
-drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
- u64 addr, u64 range,
+drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm, u64 addr, u64 range,
+ enum drm_gpuvm_sm_map_ops_flags flags,
struct drm_gem_object *obj, u64 offset);
struct drm_gpuva_ops *
drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
--
2.34.1
* [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (15 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-04-17 18:19 ` Souza, Jose
2025-05-02 14:00 ` Thomas Hellström
2025-04-07 10:17 ` [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
` (26 subsequent siblings)
43 siblings, 2 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
This commit introduces a new madvise interface to support
driver-specific ioctl operations. The madvise interface allows more
efficient memory management by providing hints to the driver about the
expected memory usage and the PTE update policy for a GPU VMA.
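As a sketch of how userspace might fill in the proposed structures: the code below uses a local mirror of the uapi from this patch (the mirrored struct, the `make_preferred_loc_op` helper and the `ATTR_PREFERRED_LOC` constant are illustrative stand-ins; the real definitions come from include/uapi/drm/xe_drm.h once merged):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Local mirror of the proposed struct drm_xe_madvise_ops (sketch only). */
struct xe_madvise_ops_sketch {
	uint64_t start;
	uint64_t range;
	uint32_t type;
	uint32_t pad;
	union {
		struct { uint32_t val; uint32_t reserved; } atomic;
		struct {
			uint32_t devmem_fd;
			uint32_t migration_policy;
		} preferred_mem_loc;
	};
	uint64_t reserved[2];
};

#define ATTR_PREFERRED_LOC 0	/* mirrors DRM_XE_VMA_ATTR_PREFERRED_LOC */

/* Build one op that sets the preferred location of [start, start+range). */
static struct xe_madvise_ops_sketch
make_preferred_loc_op(uint64_t start, uint64_t range, uint32_t devmem_fd)
{
	struct xe_madvise_ops_sketch op;

	memset(&op, 0, sizeof(op));	/* reserved/pad fields must be zero */
	op.start = start;
	op.range = range;
	op.type = ATTR_PREFERRED_LOC;
	op.preferred_mem_loc.devmem_fd = devmem_fd;
	op.preferred_mem_loc.migration_policy = 0; /* MIGRATE_ALL_PAGES */
	return op;
}
```

A real caller would embed this in struct drm_xe_madvise (num_ops == 1 uses the inline ops member) and issue DRM_IOCTL_XE_MADVISE on the DRM fd.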
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
1 file changed, 97 insertions(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 9c08738c3b91..aaf515df3a83 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -81,6 +81,7 @@ extern "C" {
* - &DRM_IOCTL_XE_EXEC
* - &DRM_IOCTL_XE_WAIT_USER_FENCE
* - &DRM_IOCTL_XE_OBSERVATION
+ * - &DRM_IOCTL_XE_MADVISE
*/
/*
@@ -102,6 +103,7 @@ extern "C" {
#define DRM_XE_EXEC 0x09
#define DRM_XE_WAIT_USER_FENCE 0x0a
#define DRM_XE_OBSERVATION 0x0b
+#define DRM_XE_MADVISE 0x0c
/* Must be kept compact -- no holes */
@@ -117,6 +119,7 @@ extern "C" {
#define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
+#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
/**
* DOC: Xe IOCTL Extensions
@@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
__u64 sampling_rates[];
};
+struct drm_xe_madvise_ops {
+ /** @start: start of the virtual address range */
+ __u64 start;
+
+ /** @range: size of the virtual address range */
+ __u64 range;
+
+#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
+#define DRM_XE_VMA_ATTR_ATOMIC 1
+#define DRM_XE_VMA_ATTR_PAT 2
+#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
+ /** @type: type of attribute */
+ __u32 type;
+
+ /** @pad: MBZ */
+ __u32 pad;
+
+ union {
+ struct {
+#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
+#define DRM_XE_VMA_ATOMIC_DEVICE 1
+#define DRM_XE_VMA_ATOMIC_GLOBAL 2
+#define DRM_XE_VMA_ATOMIC_CPU 3
+ /** @val: value of atomic operation */
+ __u32 val;
+
+ /** @reserved: Reserved */
+ __u32 reserved;
+ } atomic;
+
+ struct {
+#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
+#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
+#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
+ /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
+ __u32 val;
+
+ /** @reserved: Reserved */
+ __u32 reserved;
+ } purge_state_val;
+
+ struct {
+ /** @val: pat_index to apply to the range */
+ __u32 val;
+
+ /** @reserved: Reserved */
+ __u32 reserved;
+ } pat_index;
+
+ /** @preferred_mem_loc: preferred memory location */
+ struct {
+ __u32 devmem_fd;
+
+#define MIGRATE_ALL_PAGES 0
+#define MIGRATE_ONLY_SYSTEM_PAGES 1
+ __u32 migration_policy;
+ } preferred_mem_loc;
+ };
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
+/**
+ * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
+ *
+ * Set memory attributes to a virtual address range
+ */
+struct drm_xe_madvise {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @vm_id: vm_id of the virtual range */
+ __u32 vm_id;
+
+ /** @num_ops: number of madvises in ioctl */
+ __u32 num_ops;
+
+ union {
+ /** @ops: used if num_ops == 1 */
+ struct drm_xe_madvise_ops ops;
+
+ /**
+ * @vector_of_ops: userptr to array of struct
+ * drm_xe_vm_madvise_op if num_ops > 1
+ */
+ __u64 vector_of_ops;
+ };
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+
+};
+
#if defined(__cplusplus)
}
#endif
--
2.34.1
* [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (16 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 17/32] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 18:36 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 19/32] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
` (25 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
The attributes of an xe_vma determine the migration policy and the
encoding of the page table entries (PTEs) for that VMA.
These attributes control how memory pages are migrated and how their
addresses are translated, and will be used by madvise to set the
per-VMA behavior.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 6 ++++++
drivers/gpu/drm/xe/xe_vm_types.h | 20 ++++++++++++++++++++
2 files changed, 26 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 27a8dbe709c2..1ff9e477e061 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2470,6 +2470,12 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
vma = ERR_PTR(err);
}
+ /*TODO: assign devmem_fd of local vram once multi device
+ * support is added.
+ */
+ vma->attr.preferred_loc.devmem_fd = 1;
+ vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
+
return vma;
}
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index d3c1209348e9..5f5feffecb82 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -77,6 +77,19 @@ struct xe_userptr {
#endif
};
+/**
+ * struct xe_vma_mem_attr - memory attributes associated with vma
+ */
+struct xe_vma_mem_attr {
+ /** @preferred_loc: preferred memory location */
+ struct {
+ u32 migration_policy; /* represents migration policies */
+ u32 devmem_fd; /* devmem_fd used for determining pagemap_fd requested by user */
+ } preferred_loc;
+ /** @atomic_access: The atomic access type for the vma */
+ u32 atomic_access;
+};
+
struct xe_vma {
/** @gpuva: Base GPUVA object */
struct drm_gpuva gpuva;
@@ -128,6 +141,13 @@ struct xe_vma {
* Needs to be signalled before UNMAP can be processed.
*/
struct xe_user_fence *ufence;
+
+ /**
+ * @attr: The attributes of vma which determines the migration policy
+ * and encoding of the PTEs for this vma.
+ */
+ struct xe_vma_mem_attr attr;
+
};
/**
--
2.34.1
* [PATCH v2 19/32] drm/xe/vma: Move pat_index to vma attributes
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (17 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 18:37 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 20/32] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
` (24 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
The PAT index determines how PTEs are encoded and can be modified by
madvise. Therefore, it is now part of the vma attributes.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 6 +++---
drivers/gpu/drm/xe/xe_vm_types.h | 10 ++++------
3 files changed, 8 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 59dc065fae93..2479d830d90a 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -518,7 +518,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
{
struct xe_pt_stage_bind_walk *xe_walk =
container_of(walk, typeof(*xe_walk), base);
- u16 pat_index = xe_walk->vma->pat_index;
+ u16 pat_index = xe_walk->vma->attr.pat_index;
struct xe_pt *xe_parent = container_of(parent, typeof(*xe_parent), base);
struct xe_vm *vm = xe_walk->vm;
struct xe_pt *xe_child;
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 1ff9e477e061..59e2a951db25 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1224,7 +1224,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
if (vm->xe->info.has_atomic_enable_pte_bit)
vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
- vma->pat_index = pat_index;
+ vma->attr.pat_index = pat_index;
if (bo) {
struct drm_gpuvm_bo *vm_bo;
@@ -2657,7 +2657,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.prev) {
vma = new_vma(vm, op->base.remap.prev,
- old->pat_index, flags);
+ old->attr.pat_index, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -2687,7 +2687,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.next) {
vma = new_vma(vm, op->base.remap.next,
- old->pat_index, flags);
+ old->attr.pat_index, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 5f5feffecb82..2fcc48d9d776 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -88,6 +88,10 @@ struct xe_vma_mem_attr {
} preferred_loc;
/** @atomic_access: The atomic access type for the vma */
u32 atomic_access;
+ /**
+ * @pat_index: The pat index to use when encoding the PTEs for this vma.
+ */
+ u16 pat_index;
};
struct xe_vma {
@@ -131,11 +135,6 @@ struct xe_vma {
/** @tile_staged: bind is staged for this VMA */
u8 tile_staged;
- /**
- * @pat_index: The pat index to use when encoding the PTEs for this vma.
- */
- u16 pat_index;
-
/**
* @ufence: The user fence that was provided with MAP.
* Needs to be signalled before UNMAP can be processed.
@@ -147,7 +146,6 @@ struct xe_vma {
* and encoding of the PTEs for this vma.
*/
struct xe_vma_mem_attr attr;
-
};
/**
--
2.34.1
* [PATCH v2 20/32] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (18 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 19/32] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-13 2:36 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
` (23 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
This change simplifies the logic by ensuring that remapped previous or
next VMAs are created with the same memory attributes as the original VMA.
By passing struct xe_vma_mem_attr as a parameter, we maintain consistency
in memory attributes.
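The consistency guarantee can be sketched in plain C: copying the whole attribute struct (the patch's cp_mem_attr helper does this field by field; a plain struct assignment, shown here with hypothetical stand-in names, has the same effect) keeps both halves of a split VMA identical to the original:

```c
#include <assert.h>

/* Hypothetical stand-in for struct xe_vma_mem_attr from this series. */
struct mem_attr_sketch {
	struct {
		unsigned int migration_policy;
		unsigned int devmem_fd;
	} preferred_loc;
	unsigned int atomic_access;
	unsigned short pat_index;
};

/* On a REMAP split, both new VMAs inherit the original's attributes. */
static void split_inherit(const struct mem_attr_sketch *old,
			  struct mem_attr_sketch *prev,
			  struct mem_attr_sketch *next)
{
	*prev = *old;	/* struct assignment copies every member */
	*next = *old;
}
```

The field-by-field copy in cp_mem_attr is equivalent; it just makes each propagated attribute explicit at the call site.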
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 37 ++++++++++++++++++++++++++-----------
1 file changed, 26 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 59e2a951db25..6e5ba58d475e 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2421,8 +2421,16 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
+static void cp_mem_attr(struct xe_vma_mem_attr *dst, struct xe_vma_mem_attr *src)
+{
+ dst->preferred_loc.migration_policy = src->preferred_loc.migration_policy;
+ dst->preferred_loc.devmem_fd = src->preferred_loc.devmem_fd;
+ dst->atomic_access = src->atomic_access;
+ dst->pat_index = src->pat_index;
+}
+
static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
- u16 pat_index, unsigned int flags)
+ struct xe_vma_mem_attr attr, unsigned int flags)
{
struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
struct drm_exec exec;
@@ -2451,7 +2459,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
}
vma = xe_vma_create(vm, bo, op->gem.offset,
op->va.addr, op->va.addr +
- op->va.range - 1, pat_index, flags);
+ op->va.range - 1, attr.pat_index, flags);
if (IS_ERR(vma))
goto err_unlock;
@@ -2468,14 +2476,10 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
prep_vma_destroy(vm, vma, false);
xe_vma_destroy_unlocked(vma);
vma = ERR_PTR(err);
+ } else {
+ cp_mem_attr(&vma->attr, &attr);
}
- /*TODO: assign devmem_fd of local vram once multi device
- * support is added.
- */
- vma->attr.preferred_loc.devmem_fd = 1;
- vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
-
return vma;
}
@@ -2600,6 +2604,17 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
switch (op->base.op) {
case DRM_GPUVA_OP_MAP:
{
+ struct xe_vma_mem_attr default_attr = {
+ .preferred_loc = {
+ /*TODO: assign devmem_fd of local vram
+ * once multi device support is added.
+ */
+ .devmem_fd = IS_DGFX(vm->xe) ? 1 : 0,
+ .migration_policy = 1, },
+ .atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED,
+ .pat_index = op->map.pat_index
+ };
+
flags |= op->map.read_only ?
VMA_CREATE_FLAG_READ_ONLY : 0;
flags |= op->map.is_null ?
@@ -2609,7 +2624,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
flags |= op->map.is_cpu_addr_mirror ?
VMA_CREATE_FLAG_IS_SYSTEM_ALLOCATOR : 0;
- vma = new_vma(vm, &op->base.map, op->map.pat_index,
+ vma = new_vma(vm, &op->base.map, default_attr,
flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -2657,7 +2672,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.prev) {
vma = new_vma(vm, op->base.remap.prev,
- old->attr.pat_index, flags);
+ old->attr, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -2687,7 +2702,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.next) {
vma = new_vma(vm, op->base.remap.next,
- old->attr.pat_index, flags);
+ old->attr, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
--
2.34.1
* [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (19 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 20/32] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-04-08 1:49 ` kernel test robot
2025-05-14 18:47 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 22/32] drm/xe/svm: Split system allocator vma in case of madvise call Himal Prasad Ghimiray
` (22 subsequent siblings)
43 siblings, 2 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
The drm_gpusvm_for_each_notifier, drm_gpusvm_for_each_notifier_safe and
drm_gpusvm_for_each_range_safe macros are useful for locating notifiers
and ranges within a user-specified range. Making these macros public
allows drivers to use them directly in their own implementations.
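The `_safe` variants exist so the loop body may remove (and free) the current element. The pattern can be illustrated with a minimal, self-contained C list; all names below are hypothetical and not the DRM macros themselves:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct node {
	unsigned long start, end;
	struct node *next;
};

/* Capture the successor before the body is allowed to free @pos. */
#define for_each_node_safe(pos, n, head) \
	for ((pos) = (head), (n) = (pos) ? (pos)->next : NULL; \
	     (pos); \
	     (pos) = (n), (n) = (pos) ? (pos)->next : NULL)

/* Remove every node overlapping [start, end); return how many were freed. */
static int remove_overlapping(struct node **head, unsigned long start,
			      unsigned long end)
{
	struct node *pos, *n, **prev = head;
	int removed = 0;

	for_each_node_safe(pos, n, *head) {
		if (pos->start < end && pos->end > start) {
			*prev = pos->next;
			free(pos);	/* safe: successor already saved in n */
			removed++;
		} else {
			prev = &pos->next;
		}
	}
	return removed;
}

static struct node *push(struct node *head, unsigned long s, unsigned long e)
{
	struct node *nd = malloc(sizeof(*nd));

	nd->start = s;
	nd->end = e;
	nd->next = head;
	return nd;
}
```

Without the saved `n`, advancing via `pos->next` after `free(pos)` would be a use-after-free; the DRM `_safe` macros follow the same convention.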
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 89 +--------------------------------
include/drm/drm_gpusvm.h | 96 +++++++++++++++++++++++++++++++++++-
2 files changed, 96 insertions(+), 89 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 149ac56eff70..09708eef1c86 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -391,97 +391,10 @@ struct drm_gpusvm_range *
drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
unsigned long end)
{
- struct interval_tree_node *itree;
-
- itree = interval_tree_iter_first(¬ifier->root, start, end - 1);
-
- if (itree)
- return container_of(itree, struct drm_gpusvm_range, itree);
- else
- return NULL;
+ return __drm_gpusvm_range_find(notifier, start, end);
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_find);
-/**
- * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
- * @range__: Iterator variable for the ranges
- * @next__: Iterator variable for the ranges temporay storage
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the range
- * @end__: End address of the range
- *
- * This macro is used to iterate over GPU SVM ranges in a notifier while
- * removing ranges from it.
- */
-#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
- for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
- (next__) = __drm_gpusvm_range_next(range__); \
- (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
- (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-
-/**
- * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
- * @notifier: a pointer to the current drm_gpusvm_notifier
- *
- * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
- * the current notifier is the last one or if the input notifier is
- * NULL.
- */
-static struct drm_gpusvm_notifier *
-__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
-{
- if (notifier && !list_is_last(¬ifier->entry,
- ¬ifier->gpusvm->notifier_list))
- return list_next_entry(notifier, entry);
-
- return NULL;
-}
-
-static struct drm_gpusvm_notifier *
-notifier_iter_first(struct rb_root_cached *root, unsigned long start,
- unsigned long last)
-{
- struct interval_tree_node *itree;
-
- itree = interval_tree_iter_first(root, start, last);
-
- if (itree)
- return container_of(itree, struct drm_gpusvm_notifier, itree);
- else
- return NULL;
-}
-
-/**
- * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
- * @notifier__: Iterator variable for the notifiers
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the notifier
- * @end__: End address of the notifier
- *
- * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
- */
-#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
- for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
- (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
- (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-
-/**
- * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
- * @notifier__: Iterator variable for the notifiers
- * @next__: Iterator variable for the notifiers temporay storage
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the notifier
- * @end__: End address of the notifier
- *
- * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
- * removing notifiers from it.
- */
-#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
- for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
- (next__) = __drm_gpusvm_notifier_next(notifier__); \
- (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
- (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
-
/**
* drm_gpusvm_notifier_invalidate() - Invalidate a GPU SVM notifier.
* @mni: Pointer to the mmu_interval_notifier structure.
diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
index 8093cc6ab1f4..8b70361db351 100644
--- a/include/drm/drm_gpusvm.h
+++ b/include/drm/drm_gpusvm.h
@@ -491,6 +491,20 @@ __drm_gpusvm_range_next(struct drm_gpusvm_range *range)
return NULL;
}
+static inline struct drm_gpusvm_range *
+__drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier,
+ unsigned long start, unsigned long end)
+{
+ struct interval_tree_node *itree;
+
+ itree = interval_tree_iter_first(¬ifier->root, start, end - 1);
+
+ if (itree)
+ return container_of(itree, struct drm_gpusvm_range, itree);
+ else
+ return NULL;
+}
+
/**
* drm_gpusvm_for_each_range() - Iterate over GPU SVM ranges in a notifier
* @range__: Iterator variable for the ranges. If set, it indicates the start of
@@ -504,8 +518,88 @@ __drm_gpusvm_range_next(struct drm_gpusvm_range *range)
*/
#define drm_gpusvm_for_each_range(range__, notifier__, start__, end__) \
for ((range__) = (range__) ?: \
- drm_gpusvm_range_find((notifier__), (start__), (end__)); \
+ __drm_gpusvm_range_find((notifier__), (start__), (end__)); \
(range__) && (drm_gpusvm_range_start(range__) < (end__)); \
(range__) = __drm_gpusvm_range_next(range__))
+/**
+ * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
+ * @range__: Iterator variable for the ranges
+ * @next__: Iterator variable for the ranges temporary storage
+ * @notifier__: Pointer to the GPU SVM notifier
+ * @start__: Start address of the range
+ * @end__: End address of the range
+ *
+ * This macro is used to iterate over GPU SVM ranges in a notifier while
+ * removing ranges from it.
+ */
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = __drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
+
+/**
+ * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
+ * @notifier: a pointer to the current drm_gpusvm_notifier
+ *
+ * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
+ * the current notifier is the last one or if the input notifier is
+ * NULL.
+ */
+static inline struct drm_gpusvm_notifier *
+__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
+{
+ if (notifier && !list_is_last(¬ifier->entry,
+ ¬ifier->gpusvm->notifier_list))
+ return list_next_entry(notifier, entry);
+
+ return NULL;
+}
+
+static inline struct drm_gpusvm_notifier *
+notifier_iter_first(struct rb_root_cached *root, unsigned long start,
+ unsigned long last)
+{
+ struct interval_tree_node *itree;
+
+ itree = interval_tree_iter_first(root, start, last);
+
+ if (itree)
+ return container_of(itree, struct drm_gpusvm_notifier, itree);
+ else
+ return NULL;
+}
+
+/**
+ * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
+ * @notifier__: Iterator variable for the notifiers
+ * @gpusvm__: Pointer to the GPU SVM structure
+ * @start__: Start address of the notifier
+ * @end__: End address of the notifier
+ *
+ * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
+ */
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
+
+/**
+ * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
+ * @notifier__: Iterator variable for the notifiers
+ * @next__: Iterator variable for the notifiers temporay storage
+ * @notifier__: Pointer to the GPU SVM notifier
+ * @start__: Start address of the notifier
+ * @end__: End address of the notifier
+ *
+ * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
+ * removing notifiers from it.
+ */
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
+
#endif /* __DRM_GPUSVM_H__ */
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 22/32] drm/xe/svm: Split system allocator vma incase of madvise call
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (20 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 19:01 ` Matthew Brost
2025-05-14 19:02 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 23/32] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
` (21 subsequent siblings)
43 siblings, 2 replies; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
If the start or end of the input address range lies within a system
allocator VMA, split the VMA so that the new VMAs match the input range.
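The resulting layout can be sketched as plain interval arithmetic. The
snippet below is a standalone model with hypothetical names (split_vma,
struct piece); the real driver goes through drm_gpuvm_sm_map_ops_create()
and REMAP operations rather than computing pieces directly:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Minimal model of the madvise-driven VMA split: an existing VMA
 * [vma_start, vma_end) and an advised range [s, e) yield up to three
 * pieces: an unchanged prefix, the advised middle, and an unchanged
 * suffix.
 */
struct piece { uint64_t start, end; };

static int split_vma(uint64_t vma_start, uint64_t vma_end,
		     uint64_t s, uint64_t e, struct piece out[3])
{
	int n = 0;

	/* Clamp the advised range to the VMA. */
	if (s < vma_start)
		s = vma_start;
	if (e > vma_end)
		e = vma_end;

	if (s > vma_start)
		out[n++] = (struct piece){ vma_start, s }; /* keeps old attrs */
	out[n++] = (struct piece){ s, e };                 /* gets new attrs */
	if (e < vma_end)
		out[n++] = (struct piece){ e, vma_end };   /* keeps old attrs */
	return n;
}
```

If the advised range covers the whole VMA, no split happens and a single
piece is returned.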
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 84 ++++++++++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 +
2 files changed, 86 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 6e5ba58d475e..c7c012afe9eb 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4127,3 +4127,87 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
}
kvfree(snap);
}
+
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+{
+ struct xe_vma_ops vops;
+ struct drm_gpuva_ops *ops = NULL;
+ struct drm_gpuva_op *__op;
+ bool is_cpu_addr_mirror = false;
+ int err;
+
+ vm_dbg(&vm->xe->drm, "MADVISE IN: addr=0x%016llx, size=0x%016llx", start, range);
+
+ if ((start & ~PAGE_MASK) || (range & ~PAGE_MASK)) {
+ u64 end = ALIGN(start + range, SZ_4K);
+
+ start = ALIGN_DOWN(start, SZ_4K);
+ range = end - start;
+ }
+
+ vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
+ ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, start, range,
+ DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE,
+ NULL, start);
+ if (IS_ERR(ops)) {
+ err = PTR_ERR(ops);
+ goto unwind_ops;
+ }
+
+ if (list_empty(&ops->list)) {
+ err = 0;
+ goto free_ops;
+ }
+
+ drm_gpuva_for_each_op(__op, ops) {
+ struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+
+ if (__op->op == DRM_GPUVA_OP_REMAP) {
+ if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
+ is_cpu_addr_mirror = true;
+ else
+ is_cpu_addr_mirror = false;
+ }
+
+ if (__op->op == DRM_GPUVA_OP_MAP)
+ op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
+
+ print_op(vm->xe, __op);
+ }
+
+ xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+ err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
+ if (err)
+ goto unwind_ops;
+
+ xe_vm_lock(vm, false);
+
+ /* Preserved across ops: REMAP saves the old attrs, MAP restores them */
+ struct xe_vma_mem_attr temp_attr = {};
+
+ drm_gpuva_for_each_op(__op, ops) {
+ struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+ struct xe_vma *vma;
+
+ if (__op->op == DRM_GPUVA_OP_UNMAP) {
+ /* There should be no unmap */
+ xe_assert(vm->xe, false);
+ xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
+ } else if (__op->op == DRM_GPUVA_OP_REMAP) {
+ vma = gpuva_to_vma(op->base.remap.unmap->va);
+ cp_mem_attr(&temp_attr, &vma->attr);
+ xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
+ } else if (__op->op == DRM_GPUVA_OP_MAP) {
+ vma = op->map.vma;
+ cp_mem_attr(&vma->attr, &temp_attr);
+ }
+ }
+
+ xe_vm_unlock(vm);
+ drm_gpuva_ops_free(&vm->gpuvm, ops);
+ return 0;
+
+unwind_ops:
+ vm_bind_ioctl_ops_unwind(vm, &ops, 1);
+free_ops:
+ if (ops)
+ drm_gpuva_ops_free(&vm->gpuvm, ops);
+ return err;
+}
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 99e164852f63..4e45230b7205 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
+
/**
* to_userptr_vma() - Return a pointer to an embedding userptr vma
* @vma: Pointer to the embedded struct xe_vma
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 23/32] drm/xe: Implement madvise ioctl for xe
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (21 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 22/32] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 21:41 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 24/32] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
` (20 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
This driver-specific ioctl enables UMDs to control the memory attributes
for GPU VMAs within a specified input range. If the start or end
addresses fall within an existing VMA, the VMA is split accordingly. The
attributes of the VMA are modified as provided by the user. The old
mappings of the VMAs are invalidated, and TLB invalidation is performed
if necessary.
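The per-attribute dispatch used by the ioctl can be modeled in isolation.
This is a sketch with hypothetical names (apply_op, struct vma_model); in
the patch the table is madvise_funcs[] and the index is sanitized with
array_index_nospec():

```c
#include <assert.h>

/* Illustrative attribute types; not the uapi values. */
enum attr_type { ATTR_PREFERRED_LOC, ATTR_ATOMIC, ATTR_PAT, ATTR_COUNT };

struct op { enum attr_type type; int val; };
struct vma_model { int preferred; int atomic_access; int pat_index; };

typedef int (*madvise_fn)(struct vma_model *v, const struct op *op);

static int set_preferred(struct vma_model *v, const struct op *op)
{
	v->preferred = op->val;
	return 0;
}

static int set_atomic(struct vma_model *v, const struct op *op)
{
	v->atomic_access = op->val;
	return 0;
}

static int set_pat(struct vma_model *v, const struct op *op)
{
	v->pat_index = op->val;
	return 0;
}

static const madvise_fn funcs[ATTR_COUNT] = {
	[ATTR_PREFERRED_LOC] = set_preferred,
	[ATTR_ATOMIC] = set_atomic,
	[ATTR_PAT] = set_pat,
};

/* Bounds-check the attribute type before indexing the table. */
static int apply_op(struct vma_model *v, const struct op *op)
{
	if ((unsigned int)op->type >= ATTR_COUNT)
		return -1;
	return funcs[op->type](v, op);
}
```

The bounds check before the table lookup matters: array_index_nospec()
only clamps speculation for an index that has already been validated.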
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_device.c | 2 +
drivers/gpu/drm/xe/xe_vm_madvise.c | 309 +++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm_madvise.h | 15 ++
4 files changed, 327 insertions(+)
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index e4fec90bab55..3e83ae8b9dc1 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -117,6 +117,7 @@ xe-y += xe_bb.o \
xe_uc.o \
xe_uc_fw.o \
xe_vm.o \
+ xe_vm_madvise.o \
xe_vram.o \
xe_vram_freq.o \
xe_vsec.o \
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index d8e227ddf255..3e57300014bf 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -60,6 +60,7 @@
#include "xe_ttm_stolen_mgr.h"
#include "xe_ttm_sys_mgr.h"
#include "xe_vm.h"
+#include "xe_vm_madvise.h"
#include "xe_vram.h"
#include "xe_vsec.h"
#include "xe_wait_user_fence.h"
@@ -196,6 +197,7 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
new file mode 100644
index 000000000000..ef50031649e0
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -0,0 +1,309 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#include "xe_vm_madvise.h"
+
+#include <linux/nospec.h>
+#include <drm/ttm/ttm_tt.h>
+#include <drm/xe_drm.h>
+
+#include "xe_bo.h"
+#include "xe_gt_tlb_invalidation.h"
+#include "xe_pt.h"
+#include "xe_svm.h"
+
+static struct xe_vma **get_vmas(struct xe_vm *vm, int *num_vmas,
+ u64 addr, u64 range)
+{
+ struct xe_vma **vmas, **__vmas;
+ struct drm_gpuva *gpuva;
+ int max_vmas = 8;
+
+ lockdep_assert_held(&vm->lock);
+
+ *num_vmas = 0;
+ vmas = kmalloc_array(max_vmas, sizeof(*vmas), GFP_KERNEL);
+ if (!vmas)
+ return NULL;
+
+ vm_dbg(&vm->xe->drm, "VMA's in range: start=0x%016llx, end=0x%016llx", addr, addr + range);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, addr, addr + range) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ if (*num_vmas == max_vmas) {
+ max_vmas <<= 1;
+ __vmas = krealloc(vmas, max_vmas * sizeof(*vmas), GFP_KERNEL);
+ if (!__vmas) {
+ kfree(vmas);
+ return NULL;
+ }
+ vmas = __vmas;
+ }
+
+ vmas[*num_vmas] = vma;
+ (*num_vmas)++;
+ }
+
+ vm_dbg(&vm->xe->drm, "*num_vmas = %d\n", *num_vmas);
+
+ if (!*num_vmas) {
+ kfree(vmas);
+ return NULL;
+ }
+
+ return vmas;
+}
+
+static int madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise_ops ops)
+{
+ /* Implementation pending */
+ return 0;
+}
+
+static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise_ops ops)
+{
+ /* Implementation pending */
+ return 0;
+}
+
+static int madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise_ops ops)
+{
+ /* Implementation pending */
+ return 0;
+}
+
+static int madvise_purgeable_state(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise_ops ops)
+{
+ /* Implementation pending */
+ return 0;
+}
+
+typedef int (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas, struct drm_xe_madvise_ops ops);
+
+static const madvise_func madvise_funcs[] = {
+ [DRM_XE_VMA_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
+ [DRM_XE_VMA_ATTR_ATOMIC] = madvise_atomic,
+ [DRM_XE_VMA_ATTR_PAT] = madvise_pat_index,
+ [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = madvise_purgeable_state,
+};
+
+static void xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end, u8 *tile_mask)
+{
+ struct drm_gpusvm_notifier *notifier;
+ struct drm_gpuva *gpuva;
+ struct xe_svm_range *range;
+ struct xe_tile *tile;
+ u64 adj_start, adj_end;
+ u8 id;
+
+ lockdep_assert_held(&vm->lock);
+
+ if (dma_resv_wait_timeout(xe_vm_resv(vm), DMA_RESV_USAGE_BOOKKEEP,
+ false, MAX_SCHEDULE_TIMEOUT) <= 0)
+ XE_WARN_ON(1);
+
+ down_write(&vm->svm.gpusvm.notifier_lock);
+
+ drm_gpusvm_for_each_notifier(notifier, &vm->svm.gpusvm, start, end) {
+ struct drm_gpusvm_range *r = NULL;
+
+ adj_start = max(start, notifier->itree.start);
+ adj_end = min(end, notifier->itree.last + 1);
+ drm_gpusvm_for_each_range(r, notifier, adj_start, adj_end) {
+ range = to_xe_range(r);
+ for_each_tile(tile, vm->xe, id) {
+ if (xe_pt_zap_ptes_range(tile, vm, range)) {
+ *tile_mask |= BIT(id);
+ range->tile_invalidated |= BIT(id);
+ }
+ }
+ }
+ }
+
+ up_write(&vm->svm.gpusvm.notifier_lock);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ if (xe_vma_is_cpu_addr_mirror(vma))
+ continue;
+
+ if (xe_vma_is_userptr(vma)) {
+ WARN_ON_ONCE(!mmu_interval_check_retry
+ (&to_userptr_vma(vma)->userptr.notifier,
+ to_userptr_vma(vma)->userptr.notifier_seq));
+
+ WARN_ON_ONCE(!dma_resv_test_signaled(xe_vm_resv(xe_vma_vm(vma)),
+ DMA_RESV_USAGE_BOOKKEEP));
+ }
+
+ if (xe_vma_bo(vma))
+ xe_bo_lock(xe_vma_bo(vma), false);
+
+ for_each_tile(tile, vm->xe, id) {
+ if (xe_pt_zap_ptes(tile, vma))
+ *tile_mask |= BIT(id);
+ }
+
+ if (xe_vma_bo(vma))
+ xe_bo_unlock(xe_vma_bo(vma));
+ }
+}
+
+static void xe_vm_invalidate_madvise_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct xe_gt_tlb_invalidation_fence
+ fence[XE_MAX_TILES_PER_DEVICE * XE_MAX_GT_PER_TILE];
+ struct xe_tile *tile;
+ u32 fence_id = 0;
+ u8 tile_mask = 0;
+ u8 id;
+
+ xe_zap_ptes_in_madvise_range(vm, start, end, &tile_mask);
+ if (!tile_mask)
+ return;
+
+ xe_device_wmb(vm->xe);
+
+ for_each_tile(tile, vm->xe, id) {
+ if (tile_mask & BIT(id)) {
+ int err;
+
+ xe_gt_tlb_invalidation_fence_init(tile->primary_gt,
+ &fence[fence_id], true);
+
+ err = xe_gt_tlb_invalidation_range(tile->primary_gt,
+ &fence[fence_id],
+ start,
+ end,
+ vm->usm.asid);
+ if (WARN_ON_ONCE(err < 0))
+ goto wait;
+ ++fence_id;
+
+ if (!tile->media_gt)
+ continue;
+
+ xe_gt_tlb_invalidation_fence_init(tile->media_gt,
+ &fence[fence_id], true);
+
+ err = xe_gt_tlb_invalidation_range(tile->media_gt,
+ &fence[fence_id],
+ start,
+ end,
+ vm->usm.asid);
+ if (WARN_ON_ONCE(err < 0))
+ goto wait;
+ ++fence_id;
+ }
+ }
+
+wait:
+ for (id = 0; id < fence_id; ++id)
+ xe_gt_tlb_invalidation_fence_wait(&fence[id]);
+}
+
+static int input_ranges_same(struct drm_xe_madvise_ops *old,
+ struct drm_xe_madvise_ops *new)
+{
+ return (new->start == old->start && new->range == old->range);
+}
+
+int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_madvise_ops *advs_ops;
+ struct drm_xe_madvise *args = data;
+ struct xe_vm *vm;
+ struct xe_vma **vmas = NULL;
+ int num_vmas, err = 0;
+ int i, j, attr_type;
+
+ if (XE_IOCTL_DBG(xe, args->num_ops < 1))
+ return -EINVAL;
+
+ vm = xe_vm_lookup(xef, args->vm_id);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, !xe_vm_in_fault_mode(vm))) {
+ err = -EINVAL;
+ goto put_vm;
+ }
+
+ down_write(&vm->lock);
+
+ if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
+ err = -ENOENT;
+ goto unlock_vm;
+ }
+
+ if (args->num_ops > 1) {
+ u64 __user *madvise_user = u64_to_user_ptr(args->vector_of_ops);
+
+ advs_ops = kvmalloc_array(args->num_ops, sizeof(struct drm_xe_madvise_ops),
+ GFP_KERNEL | __GFP_ACCOUNT |
+ __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+ if (!advs_ops) {
+ err = -ENOBUFS;
+ goto unlock_vm;
+ }
+
+ err = __copy_from_user(advs_ops, madvise_user,
+ sizeof(struct drm_xe_madvise_ops) *
+ args->num_ops);
+ if (XE_IOCTL_DBG(xe, err)) {
+ err = -EFAULT;
+ goto free_advs_ops;
+ }
+ } else {
+ advs_ops = &args->ops;
+ }
+
+ for (i = 0; i < args->num_ops; i++) {
+ xe_vm_alloc_madvise_vma(vm, advs_ops[i].start, advs_ops[i].range);
+
+ vmas = get_vmas(vm, &num_vmas, advs_ops[i].start, advs_ops[i].range);
+ if (!vmas) {
+ err = -ENOMEM;
+ goto unlock_vm;
+ }
+
+ if (XE_IOCTL_DBG(xe, advs_ops[i].type >= ARRAY_SIZE(madvise_funcs))) {
+ kfree(vmas);
+ err = -EINVAL;
+ break;
+ }
+
+ attr_type = array_index_nospec(advs_ops[i].type, ARRAY_SIZE(madvise_funcs));
+ err = madvise_funcs[attr_type](xe, vm, vmas, num_vmas, advs_ops[i]);
+
+ kfree(vmas);
+ vmas = NULL;
+
+ if (err)
+ break;
+ }
+
+ for (i = 0; i < args->num_ops; i++) {
+ for (j = i + 1; j < args->num_ops; ++j) {
+ if (input_ranges_same(&advs_ops[j], &advs_ops[i]))
+ break;
+ }
+
+ /* Skip if a later op invalidates the same range */
+ if (j < args->num_ops)
+ continue;
+
+ xe_vm_invalidate_madvise_range(vm, advs_ops[i].start,
+ advs_ops[i].start + advs_ops[i].range);
+ }
+free_advs_ops:
+ if (args->num_ops > 1)
+ kvfree(advs_ops);
+unlock_vm:
+ up_write(&vm->lock);
+put_vm:
+ xe_vm_put(vm);
+ return err;
+}
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
new file mode 100644
index 000000000000..c5cdd058c322
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef _XE_VM_MADVISE_H_
+#define _XE_VM_MADVISE_H_
+
+struct drm_device;
+struct drm_file;
+
+int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file);
+
+#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 24/32] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (22 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 23/32] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 19:20 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 25/32] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
` (19 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
In the case of the MADVISE ioctl, if the start or end address falls
within a VMA and existing SVM ranges are present, remove the existing
SVM mappings, then continue with ops_parse, which creates new VMAs by
REMAP-unmapping the old one.
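The coverage test that drives the cleanup can be sketched as a standalone
predicate. This models the condition in xe_svm_range_clean_if_addr_within()
with a hypothetical helper name:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * An SVM range is torn down only when the madvise interval
 * [start, end) cuts partway through it, i.e. the range is not fully
 * covered by the interval.
 */
static bool range_needs_cleanup(uint64_t range_start, uint64_t range_end,
				uint64_t start, uint64_t end)
{
	return start > range_start || end < range_end;
}
```

Ranges fully inside the madvise interval are left alone; only ranges
straddling a boundary are evicted and garbage-collected.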
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 25 +++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_svm.h | 7 +++++++
drivers/gpu/drm/xe/xe_vm.c | 18 +++++++++++++++++-
3 files changed, 49 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 7ec7ecd7eb1f..efcba4b77250 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -903,6 +903,31 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
}
+/**
+ * xe_svm_range_clean_if_addr_within() - Clean SVM mappings and ranges
+ * @vm: The VM
+ * @start: start addr
+ * @end: end addr
+ *
+ * This function cleans up SVM ranges if the start or end address lies
+ * inside them.
+ */
+void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct drm_gpusvm_notifier *notifier, *next;
+
+ drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
+ struct drm_gpusvm_range *range, *__next;
+
+ drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
+ if (start > drm_gpusvm_range_start(range) ||
+ end < drm_gpusvm_range_end(range)) {
+ if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
+ drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
+ __xe_svm_garbage_collector(vm, to_xe_range(range));
+ }
+ }
+ }
+}
+
/**
* xe_svm_bo_evict() - SVM evict BO to system memory
* @bo: BO to evict
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index d5be8229ca7e..d00ba6d6ba53 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -98,6 +98,8 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
u32 region);
+void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -291,6 +293,11 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
return false;
}
+static inline
+void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end)
+{
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c7c012afe9eb..92b8e0cac063 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2362,6 +2362,22 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
op->map.pat_index = pat_index;
op->map.invalidate_on_bind =
__xe_vm_needs_clear_scratch_pages(vm, flags);
+ } else if (__op->op == DRM_GPUVA_OP_REMAP) {
+ struct xe_vma *old =
+ gpuva_to_vma(op->base.remap.unmap->va);
+ u64 start = xe_vma_start(old), end = xe_vma_end(old);
+
+ if (op->base.remap.prev)
+ start = op->base.remap.prev->va.addr +
+ op->base.remap.prev->va.range;
+ if (op->base.remap.next)
+ end = op->base.remap.next->va.addr;
+
+ if (xe_vma_is_cpu_addr_mirror(old) &&
+ xe_svm_has_mapping(vm, start, end)) {
+ drm_gpuva_ops_free(&vm->gpuvm, ops);
+ return ERR_PTR(-EBUSY);
+ }
} else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
@@ -2653,7 +2669,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (xe_vma_is_cpu_addr_mirror(old) &&
xe_svm_has_mapping(vm, start, end))
- return -EBUSY;
+ xe_svm_range_clean_if_addr_within(vm, start, end);
op->remap.start = xe_vma_start(old);
op->remap.range = xe_vma_size(old);
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 25/32] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (23 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 24/32] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 22:21 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 26/32] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
` (18 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
If the platform does not support atomic access on system memory, the
ranges are in system memory, and the user requires atomic access on the
VMA, then migrate the ranges to VRAM. Apply this policy to prefetch
operations as well.
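The migration decision can be sketched as a standalone predicate. This
mirrors the shape of needs_ranges_in_vram_to_support_atomic() from the
patch; the enum values here are illustrative, not the uapi ones:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative atomic-access policies, modeled on the uapi names. */
enum atomic_access {
	ATOMIC_UNDEFINED,
	ATOMIC_DEVICE,
	ATOMIC_GLOBAL,
	ATOMIC_CPU,
};

/*
 * Ranges must be migrated to VRAM unless no atomic policy is set, or
 * the platform supports device atomics on system memory and only
 * device atomics were requested.
 */
static bool needs_vram_for_atomic(bool atomics_on_smem, enum atomic_access a)
{
	if (a == ATOMIC_UNDEFINED || (atomics_on_smem && a == ATOMIC_DEVICE))
		return false;
	return true;
}
```

Note that a GLOBAL policy forces VRAM even on platforms with device
atomics on system memory, since CPU-visible system memory cannot satisfy
all atomic scopes.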
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 9 +++++++--
drivers/gpu/drm/xe/xe_svm.c | 14 ++++++++++++--
drivers/gpu/drm/xe/xe_vm.c | 2 ++
drivers/gpu/drm/xe/xe_vm_madvise.c | 11 ++++++++++-
4 files changed, 31 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 2479d830d90a..ba9b30b25ded 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -645,13 +645,18 @@ static bool xe_atomic_for_vram(struct xe_vm *vm)
return true;
}
-static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
+static bool xe_atomic_for_system(struct xe_vm *vm,
+ struct xe_bo *bo,
+ struct xe_vma *vma)
{
struct xe_device *xe = vm->xe;
if (!xe->info.has_device_atomics_on_smem)
return false;
+ if (vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
+ return true;
+
/*
* If a SMEM+LMEM allocation is backed by SMEM, a device
* atomics will cause a gpu page fault and which then
@@ -745,7 +750,7 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
- xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
+ xe_walk.default_system_pte = xe_atomic_for_system(vm, bo, vma) ?
XE_USM_PPGTT_PTE_AE : 0;
}
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index efcba4b77250..d40111e29bfe 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -717,6 +717,16 @@ static bool supports_4K_migration(struct xe_device *xe)
return false;
}
+static bool needs_ranges_in_vram_to_support_atomic(struct xe_device *xe, struct xe_vma *vma)
+{
+ if (vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_UNDEFINED ||
+ (xe->info.has_device_atomics_on_smem &&
+ vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE))
+ return false;
+
+ return true;
+}
+
/**
* xe_svm_range_needs_migrate_to_vram() - SVM range needs migrate to VRAM or not
* @range: SVM range for which migration needs to be decided
@@ -735,7 +745,7 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
if (!range->base.flags.migrate_devmem)
return false;
- needs_migrate = region;
+ needs_migrate = needs_ranges_in_vram_to_support_atomic(vm->xe, vma) || region;
if (needs_migrate && !IS_DGFX(vm->xe)) {
drm_warn(&vm->xe->drm, "Platform doesn't support VRAM\n");
@@ -828,7 +838,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
}
- if (atomic)
+ if (atomic && needs_ranges_in_vram_to_support_atomic(vm->xe, vma))
ctx.vram_only = 1;
range_debug(range, "GET PAGES");
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 92b8e0cac063..0f9c45ce82b4 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2930,6 +2930,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
for (i = 0; i < op->prefetch_range.ranges_count; i++) {
svm_range = xa_load(&op->prefetch_range.range, i);
if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
+ region = region ? region : 1;
tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
if (err) {
@@ -2938,6 +2939,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
return -ENODATA;
}
xe_svm_range_debug(svm_range, "PREFETCH - RANGE MIGRATED TO VRAM");
+ ctx.vram_only = 1;
}
err = xe_svm_range_get_pages(vm, svm_range, &ctx);
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index ef50031649e0..7e1a95106cb9 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -69,7 +69,16 @@ static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise_ops ops)
{
- /* Implementation pending */
+ int i;
+
+ xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_ATOMIC);
+ xe_assert(vm->xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_UNDEFINED &&
+ ops.atomic.val <= DRM_XE_VMA_ATOMIC_CPU);
+ vm_dbg(&xe->drm, "attr_value = %d", ops.atomic.val);
+
+ for (i = 0; i < num_vmas; i++)
+ vmas[i]->attr.atomic_access = ops.atomic.val;
+ /*TODO: handle bo backed vmas */
return 0;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 26/32] drm/xe/madvise: Update migration policy based on preferred location
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (24 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 25/32] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 22:04 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 27/32] drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute Himal Prasad Ghimiray
` (17 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
When the user sets a valid devmem_fd as the preferred location, a GPU
fault will trigger migration to the tile of the device associated with
that devmem_fd. If the user sets an invalid devmem_fd, the preferred
location is the current placement only.
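The devmem_fd handling can be sketched as a small normalization step.
This models the clamp in madvise_preferred_mem_loc(); the helper name is
hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/*
 * A negative (invalid) devmem_fd is normalized to 0, meaning "keep the
 * current placement"; a valid fd selects the device whose tile is the
 * migration target on GPU fault.
 */
static int32_t normalize_devmem_fd(int32_t devmem_fd)
{
	return devmem_fd < 0 ? 0 : devmem_fd;
}
```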
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 15 ++++++++++++++-
drivers/gpu/drm/xe/xe_vm.h | 3 +++
drivers/gpu/drm/xe/xe_vm_madvise.c | 20 +++++++++++++++++++-
3 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index d40111e29bfe..60dfb1bf12ca 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -765,6 +765,12 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
return needs_migrate;
}
+static const u32 region_to_mem_type[] = {
+ XE_PL_TT,
+ XE_PL_VRAM0,
+ XE_PL_VRAM1,
+};
+
/**
* xe_svm_handle_pagefault() - SVM handle page fault
* @vm: The VM.
@@ -796,6 +802,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
struct xe_tile *tile = gt_to_tile(gt);
int retry_count = 3;
ktime_t end = 0;
+ u32 region;
int err;
lockdep_assert_held_write(&vm->lock);
@@ -820,7 +827,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
range_debug(range, "PAGE FAULT");
- if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
+ region = vma->attr.preferred_loc.devmem_fd;
+
+ if (xe_svm_range_needs_migrate_to_vram(range, vma, region)) {
+ region = region ? region : 1;
+ /* Need rework for multigpu */
+ tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
+
err = xe_svm_alloc_vram(vm, tile, range, &ctx);
if (err) {
if (retry_count) {
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 4e45230b7205..377f62f859b7 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -220,6 +220,9 @@ int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
int xe_vm_userptr_check_repin(struct xe_vm *vm);
+bool xe_vma_has_preferred_mem_loc(struct xe_vma *vma,
+ u32 *mem_region, u32 *devmem_fd);
+
int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
u8 tile_mask);
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 7e1a95106cb9..f870e8642190 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -61,7 +61,25 @@ static int madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise_ops ops)
{
- /* Implementation pending */
+ s32 devmem_fd;
+ u32 migration_policy;
+ int i;
+
+ xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_PREFERRED_LOC);
+ vm_dbg(&xe->drm, "migration policy = %d, devmem_fd = %d\n",
+ ops.preferred_mem_loc.migration_policy,
+ ops.preferred_mem_loc.devmem_fd);
+
+ devmem_fd = (s32)ops.preferred_mem_loc.devmem_fd;
+ devmem_fd = (devmem_fd < 0) ? 0 : devmem_fd;
+
+ migration_policy = ops.preferred_mem_loc.migration_policy;
+
+ for (i = 0; i < num_vmas; i++) {
+ vmas[i]->attr.preferred_loc.devmem_fd = devmem_fd;
+ vmas[i]->attr.preferred_loc.migration_policy = migration_policy;
+ }
+
return 0;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 27/32] drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (25 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 26/32] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 21:52 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 28/32] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
` (16 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
This attribute sets the pat_index for VMA ranges used by SVM; the
pat_index is used to determine coherence.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index f870e8642190..f4e0545937b0 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -104,7 +104,14 @@ static int madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise_ops ops)
{
- /* Implementation pending */
+ int i;
+
+ xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_PAT);
+ vm_dbg(&xe->drm, "attr_value = %d", ops.pat_index.val);
+
+ for (i = 0; i < num_vmas; i++)
+ vmas[i]->attr.pat_index = ops.pat_index.val;
+
return 0;
}
--
2.34.1
^ permalink raw reply related [flat|nested] 120+ messages in thread
* [PATCH v2 28/32] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (26 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 27/32] drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 21:05 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 29/32] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
` (15 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Introduce the DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC flag so that prefetch
of madvise-advised memory regions honors the preferred location set via
madvise.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
include/uapi/drm/xe_drm.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index aaf515df3a83..ab96dee25f6c 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1111,6 +1111,7 @@ struct drm_xe_vm_bind_op {
/** @flags: Bind flags */
__u32 flags;
+#define DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC -1
/**
* @prefetch_mem_region_instance: Memory region to prefetch VMA to.
* It is a region instance, not a mask.
--
2.34.1
* [PATCH v2 29/32] drm/xe/svm: Consult madvise preferred location in prefetch
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (27 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 28/32] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 22:17 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 30/32] drm/xe/uapi: Add uapi for vma count and mem attributes Himal Prasad Ghimiray
` (14 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
When the prefetch region is DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, prefetch
SVM ranges to the preferred location provided by madvise.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 0f9c45ce82b4..e5246c633e62 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2924,9 +2924,12 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
if (!xe_vma_is_cpu_addr_mirror(vma))
return 0;
- region = op->prefetch_range.region;
+ region = (op->prefetch_range.region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) ?
+ vma->attr.preferred_loc.devmem_fd : op->prefetch_range.region;
- /* TODO: Threading the migration */
+ /* TODO: Thread the migration
+ * TODO: Multi-GPU migration support
+ */
for (i = 0; i < op->prefetch_range.ranges_count; i++) {
svm_range = xa_load(&op->prefetch_range.range, i);
if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
@@ -3001,7 +3004,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
else
region = op->prefetch.region;
- xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
+ xe_assert(vm->xe, region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC ||
+ region <= ARRAY_SIZE(region_to_mem_type));
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.prefetch.va),
@@ -3426,8 +3430,9 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
op == DRM_XE_VM_BIND_OP_PREFETCH) ||
XE_IOCTL_DBG(xe, prefetch_region &&
op != DRM_XE_VM_BIND_OP_PREFETCH) ||
- XE_IOCTL_DBG(xe, !(BIT(prefetch_region) &
- xe->info.mem_region_mask)) ||
+ XE_IOCTL_DBG(xe, (is_cpu_addr_mirror &&
+ prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) &&
+ !(BIT(prefetch_region) & xe->info.mem_region_mask)) ||
XE_IOCTL_DBG(xe, obj &&
op == DRM_XE_VM_BIND_OP_UNMAP)) {
err = -EINVAL;
--
2.34.1
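The region-selection rule this patch adds to prefetch_ranges_lock_and_prep() can be sketched in isolation. In the stand-alone function below, plain ints stand in for op->prefetch_range.region and vma->attr.preferred_loc.devmem_fd; only the sentinel value matches the uAPI:

```c
#include <assert.h>

#define DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC (-1)

/* If the prefetch op carries the sentinel, the preferred location
 * recorded by a prior madvise wins; otherwise the explicit region
 * from the prefetch request is used unchanged. */
static int resolve_prefetch_region(int requested, int madvise_preferred)
{
	return requested == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC ?
		madvise_preferred : requested;
}
```

This keeps the existing prefetch uAPI behavior intact: any non-sentinel region instance is still taken literally, so only callers that opt in with the new flag are affected.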
* [PATCH v2 30/32] drm/xe/uapi: Add uapi for vma count and mem attributes
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (28 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 29/32] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 21:08 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 31/32] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
` (13 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
- DRM_IOCTL_XE_VM_QUERY_VMAS: Return the number of VMAs in a
user-specified range.
- DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS: Fill VMA attributes into a
user-provided buffer.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 2 +
drivers/gpu/drm/xe/xe_vm.c | 94 +++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 3 +-
include/uapi/drm/xe_drm.h | 115 +++++++++++++++++++++++++++++++++
4 files changed, 213 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 3e57300014bf..968c24c77241 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -198,6 +198,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_VM_QUERY_VMAS, xe_vm_query_vmas_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_VM_QUERY_VMAS_ATTRS, xe_vm_query_vmas_attrs_ioctl, DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index e5246c633e62..f1d4daf90efe 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2165,6 +2165,100 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
return err;
}
+int xe_vm_query_vmas_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_vm_query_num_vmas *args = data;
+ struct drm_gpuva *gpuva;
+ struct xe_vm *vm;
+
+ vm = xe_vm_lookup(xef, args->vm_id);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ args->num_vmas = 0;
+ down_write(&vm->lock);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, args->start, args->start + args->range)
+ args->num_vmas++;
+
+ up_write(&vm->lock);
+ return 0;
+}
+
+static int get_mem_attrs(struct xe_vm *vm, u32 *num_vmas, u64 start,
+ u64 end, struct drm_xe_vma_mem_attr *mem_attrs)
+{
+ struct drm_gpuva *gpuva;
+ int i = 0;
+
+ lockdep_assert_held(&vm->lock);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ if (i == *num_vmas)
+ return -EINVAL;
+
+ mem_attrs[i].start = xe_vma_start(vma);
+ mem_attrs[i].end = xe_vma_end(vma);
+ mem_attrs[i].atomic.val = vma->attr.atomic_access;
+ mem_attrs[i].pat_index.val = vma->attr.pat_index;
+ mem_attrs[i].preferred_mem_loc.devmem_fd = vma->attr.preferred_loc.devmem_fd;
+ mem_attrs[i].preferred_mem_loc.migration_policy = vma->attr.preferred_loc.migration_policy;
+
+ i++;
+ }
+
+ if (i < *num_vmas)
+ *num_vmas = i;
+ return 0;
+}
+
+int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_vma_mem_attr *mem_attrs;
+ struct drm_xe_vm_query_vmas_attr *args = data;
+ u64 __user *attrs_user = NULL;
+ struct xe_vm *vm;
+ int err;
+
+ if (XE_IOCTL_DBG(xe, args->num_vmas < 1))
+ return -EINVAL;
+
+ vm = xe_vm_lookup(xef, args->vm_id);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ attrs_user = u64_to_user_ptr(args->vector_of_vma_mem_attr);
+ mem_attrs = kvmalloc_array(args->num_vmas, sizeof(struct drm_xe_vma_mem_attr),
+ GFP_KERNEL | __GFP_ACCOUNT |
+ __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+ if (!mem_attrs)
+ return args->num_vmas > 1 ? -ENOBUFS : -ENOMEM;
+
+ down_write(&vm->lock);
+
+ err = get_mem_attrs(vm, &args->num_vmas, args->start,
+ args->start + args->range, mem_attrs);
+ if (err)
+ goto free_mem_attrs;
+
+ err = __copy_to_user(attrs_user, mem_attrs,
+ sizeof(struct drm_xe_vma_mem_attr) * args->num_vmas);
+
+free_mem_attrs:
+ kvfree(mem_attrs);
+
+ up_write(&vm->lock);
+
+ return err;
+}
+
static bool vma_matches(struct xe_vma *vma, u64 page_addr)
{
if (page_addr > xe_vma_end(vma) - 1 ||
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 377f62f859b7..0b2d6e9f77ef 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -193,7 +193,8 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
-
+int xe_vm_query_vmas_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
+int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
void xe_vm_close_and_put(struct xe_vm *vm);
static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index ab96dee25f6c..177ee3a1c20d 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -82,6 +82,8 @@ extern "C" {
* - &DRM_IOCTL_XE_WAIT_USER_FENCE
* - &DRM_IOCTL_XE_OBSERVATION
* - &DRM_IOCTL_XE_MADVISE
+ * - &DRM_IOCTL_XE_VM_QUERY_VMAS
+ * - &DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS
*/
/*
@@ -104,6 +106,8 @@ extern "C" {
#define DRM_XE_WAIT_USER_FENCE 0x0a
#define DRM_XE_OBSERVATION 0x0b
#define DRM_XE_MADVISE 0x0c
+#define DRM_XE_VM_QUERY_VMAS 0x0d
+#define DRM_XE_VM_QUERY_VMAS_ATTRS 0x0e
/* Must be kept compact -- no holes */
@@ -120,6 +124,8 @@ extern "C" {
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
+#define DRM_IOCTL_XE_VM_QUERY_VMAS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS, struct drm_xe_vm_query_num_vmas)
+#define DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS_ATTRS, struct drm_xe_vm_query_vmas_attr)
/**
* DOC: Xe IOCTL Extensions
@@ -2063,6 +2069,115 @@ struct drm_xe_madvise {
};
+/**
+ * struct drm_xe_vm_query_num_vmas - Input of &DRM_IOCTL_XE_VM_QUERY_VMAS
+ *
+ * Get the number of VMAs in a virtual address range of vm_id
+ */
+struct drm_xe_vm_query_num_vmas {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @vm_id: vm_id of the virtual range */
+ __u32 vm_id;
+
+ /** @num_vmas: number of VMAs in the range, returned by the ioctl */
+ __u32 num_vmas;
+
+ /** @start: start of the virtual address range */
+ __u64 start;
+
+ /** @range: size of the virtual address range */
+ __u64 range;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
+struct drm_xe_vma_mem_attr {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @start: start of the vma */
+ __u64 start;
+
+ /** @end: end of the vma */
+ __u64 end;
+
+ struct {
+ struct {
+ /** @val: value of the atomic operation */
+ __u32 val;
+
+ /** @reserved: Reserved */
+ __u32 reserved;
+ } atomic;
+
+ struct {
+ /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
+ __u32 val;
+
+ /** @reserved: Reserved */
+ __u32 reserved;
+ } purge_state_val;
+
+ struct {
+ /** @val: pat_index value */
+ __u32 val;
+
+ /** @reserved: Reserved */
+ __u32 reserved;
+ } pat_index;
+
+ /** @preferred_mem_loc: preferred memory location */
+ struct {
+ __u32 devmem_fd;
+
+ __u32 migration_policy;
+ } preferred_mem_loc;
+ };
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
+/**
+ * struct drm_xe_vm_query_vmas_attr - Input of &DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS
+ *
+ * Get memory attributes of VMAs in a virtual address range
+ */
+struct drm_xe_vm_query_vmas_attr {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @vm_id: vm_id of the virtual range */
+ __u32 vm_id;
+
+ /** @num_vmas: on input, capacity of the user buffer; on output, number of VMAs filled */
+ __u32 num_vmas;
+
+ /** @start: start of the virtual address range */
+ __u64 start;
+
+ /** @range: size of the virtual address range */
+ __u64 range;
+
+ union {
+ /** @attr: single attribute struct, used if num_vmas == 1 */
+ struct drm_xe_vma_mem_attr attr;
+
+ /**
+ * @vector_of_vma_mem_attr: userptr to array of struct
+ * drm_xe_vma_mem_attr if num_vmas > 1
+ */
+ __u64 vector_of_vma_mem_attr;
+ };
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+
+};
+
#if defined(__cplusplus)
}
#endif
--
2.34.1
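The intended two-call flow (size the buffer via DRM_IOCTL_XE_VM_QUERY_VMAS, then fetch attributes via DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS) hinges on the buffer contract inside get_mem_attrs(): fail with -EINVAL if the VMA list outgrew the caller's buffer between the two calls, and shrink num_vmas if it shrank. An illustrative user-space mock of that contract, where vma_count stands in for the GPUVA-tree walk and attribute filling is elided:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Mock of the get_mem_attrs() buffer contract. On entry *num_vmas is
 * the capacity the caller allocated; on success it is updated to the
 * number of entries actually filled. */
static int mock_get_mem_attrs(int vma_count, uint32_t *num_vmas)
{
	int i;

	for (i = 0; i < vma_count; i++) {
		if (i == (int)*num_vmas)
			return -EINVAL;	/* caller's buffer too small */
		/* real code fills mem_attrs[i] from the VMA here */
	}

	if (i < (int)*num_vmas)
		*num_vmas = i;	/* report how many were actually filled */
	return 0;
}
```

Because VMAs can be split or merged between the two ioctls (e.g. by a concurrent madvise), userspace should be prepared to retry the pair on -EINVAL.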
* [PATCH v2 31/32] drm/xe/bo: Add attributes field to xe_bo
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (29 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 30/32] drm/xe/uapi: Add uapi for vma count and mem attributes Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 21:10 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 32/32] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
` (12 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
A single BO can be linked to multiple VMAs, making VMA attributes
insufficient for determining the placement and PTE update attributes
of the BO. To address this, an attributes field has been added to the
BO.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_bo_types.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index 81396181aaea..5340127e67ae 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -60,6 +60,11 @@ struct xe_bo {
*/
struct list_head client_link;
#endif
+ /** @attr: User controlled attributes for bo */
+ struct {
+ /** @atomic_access: type of atomic access bo needs */
+ u32 atomic_access;
+ } attr;
/**
* @pxp_key_instance: PXP key instance this BO was created against. A
* 0 in this variable indicates that the BO does not use PXP encryption.
--
2.34.1
* [PATCH v2 32/32] drm/xe/bo: Update atomic_access attribute on madvise
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (30 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 31/32] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
@ 2025-04-07 10:17 ` Himal Prasad Ghimiray
2025-05-14 22:31 ` Matthew Brost
2025-04-07 14:07 ` ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev3) Patchwork
` (11 subsequent siblings)
43 siblings, 1 reply; 120+ messages in thread
From: Himal Prasad Ghimiray @ 2025-04-07 10:17 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, thomas.hellstrom, Himal Prasad Ghimiray
Update the BO's atomic_access attribute based on user-provided input and
use it to determine migration to SMEM during a CPU fault.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 21 ++++++++++++++---
drivers/gpu/drm/xe/xe_vm.c | 11 +++++++--
drivers/gpu/drm/xe/xe_vm_madvise.c | 38 +++++++++++++++++++++++++++---
3 files changed, 62 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index c337790c81ae..fe78f6da7054 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1573,6 +1573,12 @@ static void xe_gem_object_close(struct drm_gem_object *obj,
}
}
+static bool should_migrate_to_smem(struct xe_bo *bo)
+{
+ return bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL ||
+ bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU;
+}
+
static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
{
struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
@@ -1581,7 +1587,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
struct xe_bo *bo = ttm_to_xe_bo(tbo);
bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
vm_fault_t ret;
- int idx;
+ int idx, r = 0;
if (needs_rpm)
xe_pm_runtime_get(xe);
@@ -1593,8 +1599,17 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
if (drm_dev_enter(ddev, &idx)) {
trace_xe_bo_cpu_fault(bo);
- ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
- TTM_BO_VM_NUM_PREFAULT);
+ if (should_migrate_to_smem(bo)) {
+ r = xe_bo_migrate(bo, XE_PL_TT);
+ if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR)
+ ret = VM_FAULT_NOPAGE;
+ else if (r)
+ ret = VM_FAULT_SIGBUS;
+ }
+ if (!r)
+ ret = ttm_bo_vm_fault_reserved(vmf,
+ vmf->vma->vm_page_prot,
+ TTM_BO_VM_NUM_PREFAULT);
drm_dev_exit(idx);
} else {
ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index f1d4daf90efe..189e97113dbe 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -3104,9 +3104,16 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.prefetch.va),
false);
- if (!err && !xe_vma_has_no_bo(vma))
- err = xe_bo_migrate(xe_vma_bo(vma),
+ if (!err && !xe_vma_has_no_bo(vma)) {
+ struct xe_bo *bo = xe_vma_bo(vma);
+
+ if (region == 0 && !vm->xe->info.has_device_atomics_on_smem &&
+ bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
+ region = 1;
+
+ err = xe_bo_migrate(bo,
region_to_mem_type[region]);
+ }
break;
}
default:
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index f4e0545937b0..bbae2faee603 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -87,16 +87,48 @@ static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise_ops ops)
{
- int i;
+ struct xe_bo *bo;
+ int err, i;
xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_ATOMIC);
xe_assert(vm->xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_UNDEFINED &&
ops.atomic.val <= DRM_XE_VMA_ATOMIC_CPU);
vm_dbg(&xe->drm, "attr_value = %d", ops.atomic.val);
- for (i = 0; i < num_vmas; i++)
+ for (i = 0; i < num_vmas; i++) {
vmas[i]->attr.atomic_access = ops.atomic.val;
- /*TODO: handle bo backed vmas */
+
+ bo = xe_vma_bo(vmas[i]);
+ if (!bo)
+ continue;
+
+ if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_CPU &&
+ !(bo->flags & XE_BO_FLAG_SYSTEM)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_DEVICE &&
+ !(bo->flags & XE_BO_FLAG_VRAM0) &&
+ !(bo->flags & XE_BO_FLAG_VRAM1)))
+ return -EINVAL;
+
+ if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_GLOBAL &&
+ (!(bo->flags & XE_BO_FLAG_SYSTEM) ||
+ (!(bo->flags & XE_BO_FLAG_VRAM0) &&
+ !(bo->flags & XE_BO_FLAG_VRAM1)))))
+ return -EINVAL;
+
+ err = xe_bo_lock(bo, true);
+ if (err)
+ return err;
+ bo->attr.atomic_access = ops.atomic.val;
+
+ /* Invalidate CPU page tables so the bo can migrate to smem on next access */
+ if (bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU ||
+ bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL)
+ ttm_bo_unmap_virtual(&bo->ttm);
+
+ xe_bo_unlock(bo);
+ }
return 0;
}
--
2.34.1
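The placement checks madvise_atomic() performs for BO-backed VMAs reduce to a single predicate: CPU atomics need a system-memory placement, device atomics need a VRAM placement, and global atomics need both. The flag bits and enum values below are illustrative stand-ins for the driver's XE_BO_FLAG_* and DRM_XE_VMA_ATOMIC_* constants, not the real definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BO_FLAG_SYSTEM	(1u << 0)
#define BO_FLAG_VRAM0	(1u << 1)
#define BO_FLAG_VRAM1	(1u << 2)

enum atomic_access {
	ATOMIC_UNDEFINED,
	ATOMIC_DEVICE,
	ATOMIC_GLOBAL,
	ATOMIC_CPU,
};

/* True if the BO's allowed placements can satisfy the requested
 * atomic-access mode; mirrors the XE_IOCTL_DBG checks in the patch. */
static bool atomic_placement_ok(enum atomic_access val, uint32_t bo_flags)
{
	bool has_smem = bo_flags & BO_FLAG_SYSTEM;
	bool has_vram = bo_flags & (BO_FLAG_VRAM0 | BO_FLAG_VRAM1);

	switch (val) {
	case ATOMIC_CPU:
		return has_smem;
	case ATOMIC_DEVICE:
		return has_vram;
	case ATOMIC_GLOBAL:
		return has_smem && has_vram;
	default:
		return true;
	}
}
```

Expressing the three XE_IOCTL_DBG conditions this way also makes the asymmetry visible: global atomics are the only mode that requires the BO to be migratable in both directions.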
* Re: [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
2025-04-07 10:17 ` [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops Himal Prasad Ghimiray
@ 2025-04-07 10:30 ` Boris Brezillon
2025-05-26 13:48 ` Ghimiray, Himal Prasad
2025-04-07 22:42 ` kernel test robot
1 sibling, 1 reply; 120+ messages in thread
From: Boris Brezillon @ 2025-04-07 10:30 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, matthew.brost, thomas.hellstrom, Danilo Krummrich,
Boris Brezillon, dri-devel
On Mon, 7 Apr 2025 15:47:03 +0530
Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> wrote:
> - DRM_GPUVM_SM_MAP_NOT_MADVISE: Default sm_map operations for the input
> range.
>
> - DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE: This flag is used by
> drm_gpuvm_sm_map_ops_create to iterate over GPUVMA's in the
> user-provided range and split the existing non-GEM object VMA if the
> start or end of the input range lies within it. The operations can
> create up to 2 REMAPS and 2 MAPs. The purpose of this operation is to be
> used by the Xe driver to assign attributes to GPUVMA's within the
> user-defined range. Unlike drm_gpuvm_sm_map_ops_flags in default mode,
> the operation with this flag will never have UNMAPs and
> merges, and can be without any final operations.
>
> v2
> - use drm_gpuvm_sm_map_ops_create with flags instead of defining new
> ops_create (Danilo)
> - Add doc (Danilo)
>
> Cc: Danilo Krummrich <dakr@redhat.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Boris Brezillon <bbrezillon@kernel.org>
> Cc: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Himal Prasad Ghimiray<himal.prasad.ghimiray@intel.com>
>
> ---
> RFC Link:
> https://lore.kernel.org/intel-xe/20250314080226.2059819-1-himal.prasad.ghimiray@intel.com/T/#mb706bd1c55232110e42dc7d5c05de61946982472
> ---
> drivers/gpu/drm/drm_gpuvm.c | 93 ++++++++++++++++++++------
> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 1 +
> drivers/gpu/drm/xe/xe_vm.c | 1 +
> include/drm/drm_gpuvm.h | 25 ++++++-
> 4 files changed, 98 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index f9eb56f24bef..9d09d177b9fa 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -2102,10 +2102,13 @@ static int
> __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> const struct drm_gpuvm_ops *ops, void *priv,
> u64 req_addr, u64 req_range,
> + enum drm_gpuvm_sm_map_ops_flags flags,
> struct drm_gem_object *req_obj, u64 req_offset)
Not exactly related to this series, but I've been playing with Lina's
series[1] which is hooking up flag propagation from _map() calls to
drm_gpuva, and I think we should pass all map args through a struct so
we don't have to change all call-sites anytime we add a new optional
argument. Here's a patch [2] doing that.
[1]https://lore.kernel.org/lkml/4a431b98-cccc-495e-b72e-02362828c96b@asahilina.net/T/
[2]https://gitlab.freedesktop.org/bbrezillon/linux/-/commit/0587c15b9b81ccae1e37ad0a5d524754d8455558
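The refactor Boris suggests is the usual struct-of-args pattern: bundle the __drm_gpuvm_sm_map() request parameters into one struct so adding an optional field (such as the new flags enum) no longer touches every call-site. A hedged sketch under hypothetical names, not the actual drm_gpuvm API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical request struct bundling the sm_map arguments that are
 * currently passed individually (req_addr, req_range, req_obj,
 * req_offset, plus the new flags). New optional fields can be added
 * here without changing any call-site signature; zero-initialized
 * fields keep old callers working. */
struct sm_map_req {
	uint64_t addr;
	uint64_t range;
	uint64_t offset;
	void *obj;
	uint32_t flags;
};

static uint64_t sm_map_end(const struct sm_map_req *req)
{
	return req->addr + req->range;
}
```

With designated initializers, callers only name the fields they care about, which is exactly what makes appending a new optional argument non-invasive.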
* ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev3)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (31 preceding siblings ...)
2025-04-07 10:17 ` [PATCH v2 32/32] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
@ 2025-04-07 14:07 ` Patchwork
2025-04-07 14:07 ` ✗ CI.checkpatch: warning " Patchwork
` (10 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-07 14:07 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev3)
URL : https://patchwork.freedesktop.org/series/146290/
State : success
== Summary ==
=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: 100ba24787e8 drm-tip: 2025y-04m-07d-13h-47m-40s UTC integration manifest
=== git am output follows ===
Applying: drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges
Applying: drm/xe: Make xe_svm_alloc_vram public
Applying: drm/xe/svm: Helper to add tile masks to svm ranges
Applying: drm/xe/svm: Make to_xe_range a public function
Applying: drm/xe/svm: Make xe_svm_range_* end/start/size public
Applying: drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value
Applying: drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch
Applying: drm/xe: Rename lookup_vma function to xe_find_vma_by_addr
Applying: drm/xe/svm: Allow unaligned addresses and ranges for prefetch
Applying: drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm
Applying: drm/xe/svm: Add function to determine if range needs VRAM migration
Applying: drm/gpusvm: Introduce vram_only flag for VRAM allocation
Applying: drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
Applying: drm/xe/svm: Implement prefetch support for SVM ranges
Applying: drm/xe/vm: Add debug prints for SVM range prefetch
Applying: Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
Applying: drm/xe/uapi: Add madvise interface
Applying: drm/xe/vm: Add attributes struct as member of vma
Applying: drm/xe/vma: Move pat_index to vma attributes
Applying: drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
Applying: drm/gpusvm: Make drm_gpusvm_for_each_* macros public
Applying: drm/xe/svm: Split system allocator vma incase of madvise call
Applying: drm/xe: Implement madvise ioctl for xe
Applying: drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
Applying: drm/xe/svm : Add svm ranges migration policy on atomic access
Applying: drm/xe/madvise: Update migration policy based on preferred location
Applying: drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute
Applying: drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
Applying: drm/xe/svm: Consult madvise preferred location in prefetch
Applying: drm/xe/uapi: Add uapi for vma count and mem attributes
Applying: drm/xe/bo: Add attributes field to xe_bo
Applying: drm/xe/bo: Update atomic_access attribute on madvise
* ✗ CI.checkpatch: warning for PREFETCH and MADVISE for SVM ranges (rev3)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (32 preceding siblings ...)
2025-04-07 14:07 ` ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev3) Patchwork
@ 2025-04-07 14:07 ` Patchwork
2025-04-07 14:09 ` ✓ CI.KUnit: success " Patchwork
` (9 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-07 14:07 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev3)
URL : https://patchwork.freedesktop.org/series/146290/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
99e5a866b5e13f134e606a3e29d9508d97826fb3
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit c443caaf71ea71a9bc61c61383a24b9e3326023f
Author: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Date: Mon Apr 7 15:47:19 2025 +0530
drm/xe/bo: Update atomic_access attribute on madvise
Update the bo_atomic_access based on user-provided input and determine
the migration to smem during a CPU fault
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
+ /mt/dim checkpatch 100ba24787e877cd422caa7d08cb046ef4bb769c drm-intel
35ac963567d7 drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges
41aa6781fbda drm/xe: Make xe_svm_alloc_vram public
5eeaaef3d5aa drm/xe/svm: Helper to add tile masks to svm ranges
6a907a068f1d drm/xe/svm: Make to_xe_range a public function
63d2d2f23ae9 drm/xe/svm: Make xe_svm_range_* end/start/size public
63fb574ac68d drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value
-:30: ERROR:SPACING: space required before the open parenthesis '('
#30: FILE: drivers/gpu/drm/xe/xe_vm.c:813:
+ if(!inc_val)
total: 1 errors, 0 warnings, 0 checks, 96 lines checked
e7ff182670e3 drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch
cef867ff66d6 drm/xe: Rename lookup_vma function to xe_find_vma_by_addr
4579352136cb drm/xe/svm: Allow unaligned addresses and ranges for prefetch
db7ba572a630 drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm
15b7e9bc3433 drm/xe/svm: Add function to determine if range needs VRAM migration
bf4443a1c73d drm/gpusvm: Introduce vram_only flag for VRAM allocation
6dd1de84a57c drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
c974aa30b4e8 drm/xe/svm: Implement prefetch support for SVM ranges
aafacca0ea5f drm/xe/vm: Add debug prints for SVM range prefetch
0ce18cc8b752 Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
971811347396 drm/xe/uapi: Add madvise interface
-:37: WARNING:LONG_LINE: line length of 114 exceeds 100 columns
#37: FILE: include/uapi/drm/xe_drm.h:122:
+#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
total: 0 errors, 1 warnings, 0 checks, 121 lines checked
22e154a312f9 drm/xe/vm: Add attributes struct as member of vma
77eef0321e50 drm/xe/vma: Move pat_index to vma attributes
dbe07bb8c967 drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
250dadc7dda3 drm/gpusvm: Make drm_gpusvm_for_each_* macros public
-:162: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'range__' - possible side-effects?
#162: FILE: include/drm/drm_gpusvm.h:536:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = __drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:162: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#162: FILE: include/drm/drm_gpusvm.h:536:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = __drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:162: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#162: FILE: include/drm/drm_gpusvm.h:536:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = __drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:209: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#209: FILE: include/drm/drm_gpusvm.h:583:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-:209: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#209: FILE: include/drm/drm_gpusvm.h:583:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:599:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:599:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:599:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
total: 0 errors, 0 warnings, 8 checks, 207 lines checked
c520c7bdd542 drm/xe/svm: Split system allocator vma incase of madvise call
7d4fe2a76106 drm/xe: Implement madvise ioctl for xe
-:48: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#48:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 345 lines checked
3bd47c6e5c68 drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
a07b3c5ab931 drm/xe/svm : Add svm ranges migration policy on atomic access
27e8b71c4976 drm/xe/madvise: Update migration policy based on preferred location
85c55d76bd45 drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute
661425da0e06 drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
aeef6241b5c5 drm/xe/svm: Consult madvise preferred location in prefetch
b20f08c37021 drm/xe/uapi: Add uapi for vma count and mem attributes
-:75: WARNING:LONG_LINE: line length of 107 exceeds 100 columns
#75: FILE: drivers/gpu/drm/xe/xe_vm.c:2210:
+ mem_attrs[i].preferred_mem_loc.migration_policy = vma->attr.preferred_loc.migration_policy;
-:170: WARNING:LONG_LINE: line length of 130 exceeds 100 columns
#170: FILE: include/uapi/drm/xe_drm.h:127:
+#define DRM_IOCTL_XE_VM_QUERY_VMAS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS, struct drm_xe_vm_query_num_vmas)
-:171: WARNING:LONG_LINE: line length of 137 exceeds 100 columns
#171: FILE: include/uapi/drm/xe_drm.h:128:
+#define DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS_ATTRS, struct drm_xe_vm_query_vmas_attr)
total: 0 errors, 3 warnings, 0 checks, 256 lines checked
9d789c08123e drm/xe/bo: Add attributes field to xe_bo
c443caaf71ea drm/xe/bo: Update atomic_access attribute on madvise
* ✓ CI.KUnit: success for PREFETCH and MADVISE for SVM ranges (rev3)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (33 preceding siblings ...)
2025-04-07 14:07 ` ✗ CI.checkpatch: warning " Patchwork
@ 2025-04-07 14:09 ` Patchwork
2025-04-07 14:12 ` ✗ CI.Build: failure " Patchwork
` (8 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-07 14:09 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev3)
URL : https://patchwork.freedesktop.org/series/146290/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[14:07:59] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:08:03] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[14:08:29] Starting KUnit Kernel (1/1)...
[14:08:29] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:08:30] ================== guc_buf (11 subtests) ===================
[14:08:30] [PASSED] test_smallest
[14:08:30] [PASSED] test_largest
[14:08:30] [PASSED] test_granular
[14:08:30] [PASSED] test_unique
[14:08:30] [PASSED] test_overlap
[14:08:30] [PASSED] test_reusable
[14:08:30] [PASSED] test_too_big
[14:08:30] [PASSED] test_flush
[14:08:30] [PASSED] test_lookup
[14:08:30] [PASSED] test_data
[14:08:30] [PASSED] test_class
[14:08:30] ===================== [PASSED] guc_buf =====================
[14:08:30] =================== guc_dbm (7 subtests) ===================
[14:08:30] [PASSED] test_empty
[14:08:30] [PASSED] test_default
[14:08:30] ======================== test_size ========================
[14:08:30] [PASSED] 4
[14:08:30] [PASSED] 8
[14:08:30] [PASSED] 32
[14:08:30] [PASSED] 256
[14:08:30] ==================== [PASSED] test_size ====================
[14:08:30] ======================= test_reuse ========================
[14:08:30] [PASSED] 4
[14:08:30] [PASSED] 8
[14:08:30] [PASSED] 32
[14:08:30] [PASSED] 256
[14:08:30] =================== [PASSED] test_reuse ====================
[14:08:30] =================== test_range_overlap ====================
[14:08:30] [PASSED] 4
[14:08:30] [PASSED] 8
[14:08:30] [PASSED] 32
[14:08:30] [PASSED] 256
[14:08:30] =============== [PASSED] test_range_overlap ================
[14:08:30] =================== test_range_compact ====================
[14:08:30] [PASSED] 4
[14:08:30] [PASSED] 8
[14:08:30] [PASSED] 32
[14:08:30] [PASSED] 256
[14:08:30] =============== [PASSED] test_range_compact ================
[14:08:30] ==================== test_range_spare =====================
[14:08:30] [PASSED] 4
[14:08:30] [PASSED] 8
[14:08:30] [PASSED] 32
[14:08:30] [PASSED] 256
[14:08:30] ================ [PASSED] test_range_spare =================
[14:08:30] ===================== [PASSED] guc_dbm =====================
[14:08:30] =================== guc_idm (6 subtests) ===================
[14:08:30] [PASSED] bad_init
[14:08:30] [PASSED] no_init
[14:08:30] [PASSED] init_fini
[14:08:30] [PASSED] check_used
[14:08:30] [PASSED] check_quota
[14:08:30] [PASSED] check_all
[14:08:30] ===================== [PASSED] guc_idm =====================
[14:08:30] ================== no_relay (3 subtests) ===================
[14:08:30] [PASSED] xe_drops_guc2pf_if_not_ready
[14:08:30] [PASSED] xe_drops_guc2vf_if_not_ready
[14:08:30] [PASSED] xe_rejects_send_if_not_ready
[14:08:30] ==================== [PASSED] no_relay =====================
[14:08:30] ================== pf_relay (14 subtests) ==================
[14:08:30] [PASSED] pf_rejects_guc2pf_too_short
[14:08:30] [PASSED] pf_rejects_guc2pf_too_long
[14:08:30] [PASSED] pf_rejects_guc2pf_no_payload
[14:08:30] [PASSED] pf_fails_no_payload
[14:08:30] [PASSED] pf_fails_bad_origin
[14:08:30] [PASSED] pf_fails_bad_type
[14:08:30] [PASSED] pf_txn_reports_error
[14:08:30] [PASSED] pf_txn_sends_pf2guc
[14:08:30] [PASSED] pf_sends_pf2guc
[14:08:30] [SKIPPED] pf_loopback_nop
[14:08:30] [SKIPPED] pf_loopback_echo
[14:08:30] [SKIPPED] pf_loopback_fail
[14:08:30] [SKIPPED] pf_loopback_busy
[14:08:30] [SKIPPED] pf_loopback_retry
[14:08:30] ==================== [PASSED] pf_relay =====================
[14:08:30] ================== vf_relay (3 subtests) ===================
[14:08:30] [PASSED] vf_rejects_guc2vf_too_short
[14:08:30] [PASSED] vf_rejects_guc2vf_too_long
[14:08:30] [PASSED] vf_rejects_guc2vf_no_payload
[14:08:30] ==================== [PASSED] vf_relay =====================
[14:08:30] ================= pf_service (11 subtests) =================
[14:08:30] [PASSED] pf_negotiate_any
[14:08:30] [PASSED] pf_negotiate_base_match
[14:08:30] [PASSED] pf_negotiate_base_newer
[14:08:30] [PASSED] pf_negotiate_base_next
[14:08:30] [SKIPPED] pf_negotiate_base_older
[14:08:30] [PASSED] pf_negotiate_base_prev
[14:08:30] [PASSED] pf_negotiate_latest_match
[14:08:30] [PASSED] pf_negotiate_latest_newer
[14:08:30] [PASSED] pf_negotiate_latest_next
[14:08:30] [SKIPPED] pf_negotiate_latest_older
[14:08:30] [SKIPPED] pf_negotiate_latest_prev
[14:08:30] =================== [PASSED] pf_service ====================
[14:08:30] ===================== lmtt (1 subtest) =====================
[14:08:30] ======================== test_ops =========================
[14:08:30] [PASSED] 2-level
[14:08:30] [PASSED] multi-level
[14:08:30] ==================== [PASSED] test_ops =====================
[14:08:30] ====================== [PASSED] lmtt =======================
[14:08:30] =================== xe_mocs (2 subtests) ===================
[14:08:30] ================ xe_live_mocs_kernel_kunit ================
[14:08:30] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[14:08:30] ================ xe_live_mocs_reset_kunit =================
[14:08:30] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[14:08:30] ==================== [SKIPPED] xe_mocs =====================
[14:08:30] ================= xe_migrate (2 subtests) ==================
[14:08:30] ================= xe_migrate_sanity_kunit =================
[14:08:30] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[14:08:30] ================== xe_validate_ccs_kunit ==================
[14:08:30] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[14:08:30] =================== [SKIPPED] xe_migrate ===================
[14:08:30] ================== xe_dma_buf (1 subtest) ==================
[14:08:30] ==================== xe_dma_buf_kunit =====================
[14:08:30] ================ [SKIPPED] xe_dma_buf_kunit ================
[14:08:30] =================== [SKIPPED] xe_dma_buf ===================
[14:08:30] ================= xe_bo_shrink (1 subtest) =================
[14:08:30] =================== xe_bo_shrink_kunit ====================
[14:08:30] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[14:08:30] ================== [SKIPPED] xe_bo_shrink ==================
[14:08:30] ==================== xe_bo (2 subtests) ====================
[14:08:30] ================== xe_ccs_migrate_kunit ===================
[14:08:30] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[14:08:30] ==================== xe_bo_evict_kunit ====================
[14:08:30] =============== [SKIPPED] xe_bo_evict_kunit ================
[14:08:30] ===================== [SKIPPED] xe_bo ======================
[14:08:30] ==================== args (11 subtests) ====================
[14:08:30] [PASSED] count_args_test
[14:08:30] [PASSED] call_args_example
[14:08:30] [PASSED] call_args_test
[14:08:30] [PASSED] drop_first_arg_example
[14:08:30] [PASSED] drop_first_arg_test
[14:08:30] [PASSED] first_arg_example
[14:08:30] [PASSED] first_arg_test
[14:08:30] [PASSED] last_arg_example
[14:08:30] [PASSED] last_arg_test
[14:08:30] [PASSED] pick_arg_example
[14:08:30] [PASSED] sep_comma_example
[14:08:30] ====================== [PASSED] args =======================
[14:08:30] =================== xe_pci (2 subtests) ====================
[14:08:30] [PASSED] xe_gmdid_graphics_ip
[14:08:30] [PASSED] xe_gmdid_media_ip
[14:08:30] ===================== [PASSED] xe_pci ======================
[14:08:30] =================== xe_rtp (2 subtests) ====================
[14:08:30] =============== xe_rtp_process_to_sr_tests ================
[14:08:30] [PASSED] coalesce-same-reg
[14:08:30] [PASSED] no-match-no-add
[14:08:30] [PASSED] match-or
[14:08:30] [PASSED] match-or-xfail
[14:08:30] [PASSED] no-match-no-add-multiple-rules
[14:08:30] [PASSED] two-regs-two-entries
[14:08:30] [PASSED] clr-one-set-other
[14:08:30] [PASSED] set-field
[14:08:30] [PASSED] conflict-duplicate
[14:08:30] [PASSED] conflict-not-disjoint
[14:08:30] [PASSED] conflict-reg-type
[14:08:30] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[14:08:30] ================== xe_rtp_process_tests ===================
[14:08:30] [PASSED] active1
[14:08:30] [PASSED] active2
[14:08:30] [PASSED] active-inactive
[14:08:30] [PASSED] inactive-active
[14:08:30] [PASSED] inactive-1st_or_active-inactive
[14:08:30] [PASSED] inactive-2nd_or_active-inactive
[14:08:30] [PASSED] inactive-last_or_active-inactive
[14:08:30] [PASSED] inactive-no_or_active-inactive
[14:08:30] ============== [PASSED] xe_rtp_process_tests ===============
[14:08:30] ===================== [PASSED] xe_rtp ======================
[14:08:30] ==================== xe_wa (1 subtest) =====================
[14:08:30] ======================== xe_wa_gt =========================
[14:08:30] [PASSED] TIGERLAKE (B0)
[14:08:30] [PASSED] DG1 (A0)
[14:08:30] [PASSED] DG1 (B0)
[14:08:30] [PASSED] ALDERLAKE_S (A0)
[14:08:30] [PASSED] ALDERLAKE_S (B0)
[14:08:30] [PASSED] ALDERLAKE_S (C0)
[14:08:30] [PASSED] ALDERLAKE_S (D0)
[14:08:30] [PASSED] ALDERLAKE_P (A0)
[14:08:30] [PASSED] ALDERLAKE_P (B0)
[14:08:30] [PASSED] ALDERLAKE_P (C0)
[14:08:30] [PASSED] ALDERLAKE_S_RPLS (D0)
[14:08:30] [PASSED] ALDERLAKE_P_RPLU (E0)
[14:08:30] [PASSED] DG2_G10 (C0)
[14:08:30] [PASSED] DG2_G11 (B1)
[14:08:30] [PASSED] DG2_G12 (A1)
[14:08:30] [PASSED] METEORLAKE (g:A0, m:A0)
[14:08:30] [PASSED] METEORLAKE (g:A0, m:A0)
[14:08:30] [PASSED] METEORLAKE (g:A0, m:A0)
[14:08:30] [PASSED] LUNARLAKE (g:A0, m:A0)
[14:08:30] [PASSED] LUNARLAKE (g:B0, m:A0)
[14:08:30] [PASSED] BATTLEMAGE (g:A0, m:A1)
[14:08:30] ==================== [PASSED] xe_wa_gt =====================
[14:08:30] ====================== [PASSED] xe_wa ======================
[14:08:30] ============================================================
[14:08:30] Testing complete. Ran 133 tests: passed: 117, skipped: 16
[14:08:30] Elapsed time: 31.081s total, 4.213s configuring, 26.548s building, 0.293s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[14:08:30] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:08:32] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[14:08:52] Starting KUnit Kernel (1/1)...
[14:08:52] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:08:52] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[14:08:52] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[14:08:52] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[14:08:52] =========== drm_validate_clone_mode (2 subtests) ===========
[14:08:52] ============== drm_test_check_in_clone_mode ===============
[14:08:52] [PASSED] in_clone_mode
[14:08:52] [PASSED] not_in_clone_mode
[14:08:52] ========== [PASSED] drm_test_check_in_clone_mode ===========
[14:08:52] =============== drm_test_check_valid_clones ===============
[14:08:52] [PASSED] not_in_clone_mode
[14:08:52] [PASSED] valid_clone
[14:08:52] [PASSED] invalid_clone
[14:08:52] =========== [PASSED] drm_test_check_valid_clones ===========
[14:08:52] ============= [PASSED] drm_validate_clone_mode =============
[14:08:52] ============= drm_validate_modeset (1 subtest) =============
[14:08:52] [PASSED] drm_test_check_connector_changed_modeset
[14:08:52] ============== [PASSED] drm_validate_modeset ===============
[14:08:52] ====== drm_test_bridge_get_current_state (2 subtests) ======
[14:08:52] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[14:08:52] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[14:08:52] ======== [PASSED] drm_test_bridge_get_current_state ========
[14:08:52] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[14:08:52] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[14:08:52] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[14:08:52] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[14:08:52] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[14:08:52] ================== drm_buddy (7 subtests) ==================
[14:08:52] [PASSED] drm_test_buddy_alloc_limit
[14:08:52] [PASSED] drm_test_buddy_alloc_optimistic
[14:08:52] [PASSED] drm_test_buddy_alloc_pessimistic
[14:08:52] [PASSED] drm_test_buddy_alloc_pathological
[14:08:53] [PASSED] drm_test_buddy_alloc_contiguous
[14:08:53] [PASSED] drm_test_buddy_alloc_clear
[14:08:53] [PASSED] drm_test_buddy_alloc_range_bias
[14:08:53] ==================== [PASSED] drm_buddy ====================
[14:08:53] ============= drm_cmdline_parser (40 subtests) =============
[14:08:53] [PASSED] drm_test_cmdline_force_d_only
[14:08:53] [PASSED] drm_test_cmdline_force_D_only_dvi
[14:08:53] [PASSED] drm_test_cmdline_force_D_only_hdmi
[14:08:53] [PASSED] drm_test_cmdline_force_D_only_not_digital
[14:08:53] [PASSED] drm_test_cmdline_force_e_only
[14:08:53] [PASSED] drm_test_cmdline_res
[14:08:53] [PASSED] drm_test_cmdline_res_vesa
[14:08:53] [PASSED] drm_test_cmdline_res_vesa_rblank
[14:08:53] [PASSED] drm_test_cmdline_res_rblank
[14:08:53] [PASSED] drm_test_cmdline_res_bpp
[14:08:53] [PASSED] drm_test_cmdline_res_refresh
[14:08:53] [PASSED] drm_test_cmdline_res_bpp_refresh
[14:08:53] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[14:08:53] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[14:08:53] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[14:08:53] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[14:08:53] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[14:08:53] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[14:08:53] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[14:08:53] [PASSED] drm_test_cmdline_res_margins_force_on
[14:08:53] [PASSED] drm_test_cmdline_res_vesa_margins
[14:08:53] [PASSED] drm_test_cmdline_name
[14:08:53] [PASSED] drm_test_cmdline_name_bpp
[14:08:53] [PASSED] drm_test_cmdline_name_option
[14:08:53] [PASSED] drm_test_cmdline_name_bpp_option
[14:08:53] [PASSED] drm_test_cmdline_rotate_0
[14:08:53] [PASSED] drm_test_cmdline_rotate_90
[14:08:53] [PASSED] drm_test_cmdline_rotate_180
[14:08:53] [PASSED] drm_test_cmdline_rotate_270
[14:08:53] [PASSED] drm_test_cmdline_hmirror
[14:08:53] [PASSED] drm_test_cmdline_vmirror
[14:08:53] [PASSED] drm_test_cmdline_margin_options
[14:08:53] [PASSED] drm_test_cmdline_multiple_options
[14:08:53] [PASSED] drm_test_cmdline_bpp_extra_and_option
[14:08:53] [PASSED] drm_test_cmdline_extra_and_option
[14:08:53] [PASSED] drm_test_cmdline_freestanding_options
[14:08:53] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[14:08:53] [PASSED] drm_test_cmdline_panel_orientation
[14:08:53] ================ drm_test_cmdline_invalid =================
[14:08:53] [PASSED] margin_only
[14:08:53] [PASSED] interlace_only
[14:08:53] [PASSED] res_missing_x
[14:08:53] [PASSED] res_missing_y
[14:08:53] [PASSED] res_bad_y
[14:08:53] [PASSED] res_missing_y_bpp
[14:08:53] [PASSED] res_bad_bpp
[14:08:53] [PASSED] res_bad_refresh
[14:08:53] [PASSED] res_bpp_refresh_force_on_off
[14:08:53] [PASSED] res_invalid_mode
[14:08:53] [PASSED] res_bpp_wrong_place_mode
[14:08:53] [PASSED] name_bpp_refresh
[14:08:53] [PASSED] name_refresh
[14:08:53] [PASSED] name_refresh_wrong_mode
[14:08:53] [PASSED] name_refresh_invalid_mode
[14:08:53] [PASSED] rotate_multiple
[14:08:53] [PASSED] rotate_invalid_val
[14:08:53] [PASSED] rotate_truncated
[14:08:53] [PASSED] invalid_option
[14:08:53] [PASSED] invalid_tv_option
[14:08:53] [PASSED] truncated_tv_option
[14:08:53] ============ [PASSED] drm_test_cmdline_invalid =============
[14:08:53] =============== drm_test_cmdline_tv_options ===============
[14:08:53] [PASSED] NTSC
[14:08:53] [PASSED] NTSC_443
[14:08:53] [PASSED] NTSC_J
[14:08:53] [PASSED] PAL
[14:08:53] [PASSED] PAL_M
[14:08:53] [PASSED] PAL_N
[14:08:53] [PASSED] SECAM
[14:08:53] [PASSED] MONO_525
[14:08:53] [PASSED] MONO_625
[14:08:53] =========== [PASSED] drm_test_cmdline_tv_options ===========
[14:08:53] =============== [PASSED] drm_cmdline_parser ================
[14:08:53] ========== drmm_connector_hdmi_init (20 subtests) ==========
[14:08:53] [PASSED] drm_test_connector_hdmi_init_valid
[14:08:53] [PASSED] drm_test_connector_hdmi_init_bpc_8
[14:08:53] [PASSED] drm_test_connector_hdmi_init_bpc_10
[14:08:53] [PASSED] drm_test_connector_hdmi_init_bpc_12
[14:08:53] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[14:08:53] [PASSED] drm_test_connector_hdmi_init_bpc_null
[14:08:53] [PASSED] drm_test_connector_hdmi_init_formats_empty
[14:08:53] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[14:08:53] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[14:08:53] [PASSED] supported_formats=0x9 yuv420_allowed=1
[14:08:53] [PASSED] supported_formats=0x9 yuv420_allowed=0
[14:08:53] [PASSED] supported_formats=0x3 yuv420_allowed=1
[14:08:53] [PASSED] supported_formats=0x3 yuv420_allowed=0
[14:08:53] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[14:08:53] [PASSED] drm_test_connector_hdmi_init_null_ddc
[14:08:53] [PASSED] drm_test_connector_hdmi_init_null_product
[14:08:53] [PASSED] drm_test_connector_hdmi_init_null_vendor
[14:08:53] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[14:08:53] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[14:08:53] [PASSED] drm_test_connector_hdmi_init_product_valid
[14:08:53] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[14:08:53] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[14:08:53] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[14:08:53] ========= drm_test_connector_hdmi_init_type_valid =========
[14:08:53] [PASSED] HDMI-A
[14:08:53] [PASSED] HDMI-B
[14:08:53] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[14:08:53] ======== drm_test_connector_hdmi_init_type_invalid ========
[14:08:53] [PASSED] Unknown
[14:08:53] [PASSED] VGA
[14:08:53] [PASSED] DVI-I
[14:08:53] [PASSED] DVI-D
[14:08:53] [PASSED] DVI-A
[14:08:53] [PASSED] Composite
[14:08:53] [PASSED] SVIDEO
[14:08:53] [PASSED] LVDS
[14:08:53] [PASSED] Component
[14:08:53] [PASSED] DIN
[14:08:53] [PASSED] DP
[14:08:53] [PASSED] TV
[14:08:53] [PASSED] eDP
[14:08:53] [PASSED] Virtual
[14:08:53] [PASSED] DSI
[14:08:53] [PASSED] DPI
[14:08:53] [PASSED] Writeback
[14:08:53] [PASSED] SPI
[14:08:53] [PASSED] USB
[14:08:53] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[14:08:53] ============ [PASSED] drmm_connector_hdmi_init =============
[14:08:53] ============= drmm_connector_init (3 subtests) =============
[14:08:53] [PASSED] drm_test_drmm_connector_init
[14:08:53] [PASSED] drm_test_drmm_connector_init_null_ddc
[14:08:53] ========= drm_test_drmm_connector_init_type_valid =========
[14:08:53] [PASSED] Unknown
[14:08:53] [PASSED] VGA
[14:08:53] [PASSED] DVI-I
[14:08:53] [PASSED] DVI-D
[14:08:53] [PASSED] DVI-A
[14:08:53] [PASSED] Composite
[14:08:53] [PASSED] SVIDEO
[14:08:53] [PASSED] LVDS
[14:08:53] [PASSED] Component
[14:08:53] [PASSED] DIN
[14:08:53] [PASSED] DP
[14:08:53] [PASSED] HDMI-A
[14:08:53] [PASSED] HDMI-B
[14:08:53] [PASSED] TV
[14:08:53] [PASSED] eDP
[14:08:53] [PASSED] Virtual
[14:08:53] [PASSED] DSI
[14:08:53] [PASSED] DPI
[14:08:53] [PASSED] Writeback
[14:08:53] [PASSED] SPI
[14:08:53] [PASSED] USB
[14:08:53] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[14:08:53] =============== [PASSED] drmm_connector_init ===============
[14:08:53] ========= drm_connector_dynamic_init (6 subtests) ==========
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_init
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_init_properties
[14:08:53] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[14:08:53] [PASSED] Unknown
[14:08:53] [PASSED] VGA
[14:08:53] [PASSED] DVI-I
[14:08:53] [PASSED] DVI-D
[14:08:53] [PASSED] DVI-A
[14:08:53] [PASSED] Composite
[14:08:53] [PASSED] SVIDEO
[14:08:53] [PASSED] LVDS
[14:08:53] [PASSED] Component
[14:08:53] [PASSED] DIN
[14:08:53] [PASSED] DP
[14:08:53] [PASSED] HDMI-A
[14:08:53] [PASSED] HDMI-B
[14:08:53] [PASSED] TV
[14:08:53] [PASSED] eDP
[14:08:53] [PASSED] Virtual
[14:08:53] [PASSED] DSI
[14:08:53] [PASSED] DPI
[14:08:53] [PASSED] Writeback
[14:08:53] [PASSED] SPI
[14:08:53] [PASSED] USB
[14:08:53] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[14:08:53] ======== drm_test_drm_connector_dynamic_init_name =========
[14:08:53] [PASSED] Unknown
[14:08:53] [PASSED] VGA
[14:08:53] [PASSED] DVI-I
[14:08:53] [PASSED] DVI-D
[14:08:53] [PASSED] DVI-A
[14:08:53] [PASSED] Composite
[14:08:53] [PASSED] SVIDEO
[14:08:53] [PASSED] LVDS
[14:08:53] [PASSED] Component
[14:08:53] [PASSED] DIN
[14:08:53] [PASSED] DP
[14:08:53] [PASSED] HDMI-A
[14:08:53] [PASSED] HDMI-B
[14:08:53] [PASSED] TV
[14:08:53] [PASSED] eDP
[14:08:53] [PASSED] Virtual
[14:08:53] [PASSED] DSI
[14:08:53] [PASSED] DPI
[14:08:53] [PASSED] Writeback
[14:08:53] [PASSED] SPI
[14:08:53] [PASSED] USB
[14:08:53] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[14:08:53] =========== [PASSED] drm_connector_dynamic_init ============
[14:08:53] ==== drm_connector_dynamic_register_early (4 subtests) =====
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[14:08:53] ====== [PASSED] drm_connector_dynamic_register_early =======
[14:08:53] ======= drm_connector_dynamic_register (7 subtests) ========
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[14:08:53] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[14:08:53] ========= [PASSED] drm_connector_dynamic_register ==========
[14:08:53] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[14:08:53] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[14:08:53] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[14:08:53] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[14:08:53] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[14:08:53] ========== drm_test_get_tv_mode_from_name_valid ===========
[14:08:53] [PASSED] NTSC
[14:08:53] [PASSED] NTSC-443
[14:08:53] [PASSED] NTSC-J
[14:08:53] [PASSED] PAL
[14:08:53] [PASSED] PAL-M
[14:08:53] [PASSED] PAL-N
[14:08:53] [PASSED] SECAM
[14:08:53] [PASSED] Mono
[14:08:53] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[14:08:53] [PASSED] drm_test_get_tv_mode_from_name_truncated
[14:08:53] ============ [PASSED] drm_get_tv_mode_from_name ============
[14:08:53] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[14:08:53] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[14:08:53] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[14:08:53] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[14:08:53] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[14:08:53] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[14:08:53] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[14:08:53] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[14:08:53] [PASSED] VIC 96
[14:08:53] [PASSED] VIC 97
[14:08:53] [PASSED] VIC 101
[14:08:53] [PASSED] VIC 102
[14:08:53] [PASSED] VIC 106
[14:08:53] [PASSED] VIC 107
[14:08:53] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[14:08:53] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[14:08:53] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[14:08:53] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[14:08:53] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[14:08:53] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[14:08:53] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[14:08:53] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[14:08:53] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[14:08:53] [PASSED] Automatic
[14:08:53] [PASSED] Full
[14:08:53] [PASSED] Limited 16:235
[14:08:53] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[14:08:53] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[14:08:53] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[14:08:53] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[14:08:53] === drm_test_drm_hdmi_connector_get_output_format_name ====
[14:08:53] [PASSED] RGB
[14:08:53] [PASSED] YUV 4:2:0
[14:08:53] [PASSED] YUV 4:2:2
[14:08:53] [PASSED] YUV 4:4:4
[14:08:53] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[14:08:53] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[14:08:53] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[14:08:53] ============= drm_damage_helper (21 subtests) ==============
[14:08:53] [PASSED] drm_test_damage_iter_no_damage
[14:08:53] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[14:08:53] [PASSED] drm_test_damage_iter_no_damage_src_moved
[14:08:53] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[14:08:53] [PASSED] drm_test_damage_iter_no_damage_not_visible
[14:08:53] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[14:08:53] [PASSED] drm_test_damage_iter_no_damage_no_fb
[14:08:53] [PASSED] drm_test_damage_iter_simple_damage
[14:08:53] [PASSED] drm_test_damage_iter_single_damage
[14:08:53] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[14:08:53] [PASSED] drm_test_damage_iter_single_damage_outside_src
[14:08:53] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[14:08:53] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[14:08:53] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[14:08:53] [PASSED] drm_test_damage_iter_single_damage_src_moved
[14:08:53] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[14:08:53] [PASSED] drm_test_damage_iter_damage
[14:08:53] [PASSED] drm_test_damage_iter_damage_one_intersect
[14:08:53] [PASSED] drm_test_damage_iter_damage_one_outside
[14:08:53] [PASSED] drm_test_damage_iter_damage_src_moved
[14:08:53] [PASSED] drm_test_damage_iter_damage_not_visible
[14:08:53] ================ [PASSED] drm_damage_helper ================
[14:08:53] ============== drm_dp_mst_helper (3 subtests) ==============
[14:08:53] ============== drm_test_dp_mst_calc_pbn_mode ==============
[14:08:53] [PASSED] Clock 154000 BPP 30 DSC disabled
[14:08:53] [PASSED] Clock 234000 BPP 30 DSC disabled
[14:08:53] [PASSED] Clock 297000 BPP 24 DSC disabled
[14:08:53] [PASSED] Clock 332880 BPP 24 DSC enabled
[14:08:53] [PASSED] Clock 324540 BPP 24 DSC enabled
[14:08:53] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[14:08:53] ============== drm_test_dp_mst_calc_pbn_div ===============
[14:08:53] [PASSED] Link rate 2000000 lane count 4
[14:08:53] [PASSED] Link rate 2000000 lane count 2
[14:08:53] [PASSED] Link rate 2000000 lane count 1
[14:08:53] [PASSED] Link rate 1350000 lane count 4
[14:08:53] [PASSED] Link rate 1350000 lane count 2
[14:08:53] [PASSED] Link rate 1350000 lane count 1
[14:08:53] [PASSED] Link rate 1000000 lane count 4
[14:08:53] [PASSED] Link rate 1000000 lane count 2
[14:08:53] [PASSED] Link rate 1000000 lane count 1
[14:08:53] [PASSED] Link rate 810000 lane count 4
[14:08:53] [PASSED] Link rate 810000 lane count 2
[14:08:53] [PASSED] Link rate 810000 lane count 1
[14:08:53] [PASSED] Link rate 540000 lane count 4
[14:08:53] [PASSED] Link rate 540000 lane count 2
[14:08:53] [PASSED] Link rate 540000 lane count 1
[14:08:53] [PASSED] Link rate 270000 lane count 4
[14:08:53] [PASSED] Link rate 270000 lane count 2
[14:08:53] [PASSED] Link rate 270000 lane count 1
[14:08:53] [PASSED] Link rate 162000 lane count 4
[14:08:53] [PASSED] Link rate 162000 lane count 2
[14:08:53] [PASSED] Link rate 162000 lane count 1
[14:08:53] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[14:08:53] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[14:08:53] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[14:08:53] [PASSED] DP_POWER_UP_PHY with port number
[14:08:53] [PASSED] DP_POWER_DOWN_PHY with port number
[14:08:53] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[14:08:53] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[14:08:53] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[14:08:53] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[14:08:53] [PASSED] DP_QUERY_PAYLOAD with port number
[14:08:53] [PASSED] DP_QUERY_PAYLOAD with VCPI
[14:08:53] [PASSED] DP_REMOTE_DPCD_READ with port number
[14:08:53] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[14:08:53] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[14:08:53] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[14:08:53] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[14:08:53] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[14:08:53] [PASSED] DP_REMOTE_I2C_READ with port number
[14:08:53] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[14:08:53] [PASSED] DP_REMOTE_I2C_READ with transactions array
[14:08:53] [PASSED] DP_REMOTE_I2C_WRITE with port number
[14:08:53] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[14:08:53] [PASSED] DP_REMOTE_I2C_WRITE with data array
[14:08:53] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[14:08:53] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[14:08:53] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[14:08:53] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[14:08:53] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[14:08:53] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[14:08:53] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[14:08:53] ================ [PASSED] drm_dp_mst_helper ================
[14:08:53] ================== drm_exec (7 subtests) ===================
[14:08:53] [PASSED] sanitycheck
[14:08:53] [PASSED] test_lock
[14:08:53] [PASSED] test_lock_unlock
[14:08:53] [PASSED] test_duplicates
[14:08:53] [PASSED] test_prepare
[14:08:53] [PASSED] test_prepare_array
[14:08:53] [PASSED] test_multiple_loops
[14:08:53] ==================== [PASSED] drm_exec =====================
[14:08:53] =========== drm_format_helper_test (18 subtests) ===========
[14:08:53] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[14:08:53] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[14:08:53] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[14:08:53] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[14:08:53] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[14:08:53] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[14:08:53] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[14:08:53] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[14:08:53] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[14:08:53] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[14:08:53] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[14:08:53] ============== drm_test_fb_xrgb8888_to_mono ===============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[14:08:53] ==================== drm_test_fb_swab =====================
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ================ [PASSED] drm_test_fb_swab =================
[14:08:53] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[14:08:53] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[14:08:53] [PASSED] single_pixel_source_buffer
[14:08:53] [PASSED] single_pixel_clip_rectangle
[14:08:53] [PASSED] well_known_colors
[14:08:53] [PASSED] destination_pitch
[14:08:53] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[14:08:53] ================= drm_test_fb_clip_offset =================
[14:08:53] [PASSED] pass through
[14:08:53] [PASSED] horizontal offset
[14:08:53] [PASSED] vertical offset
[14:08:53] [PASSED] horizontal and vertical offset
[14:08:53] [PASSED] horizontal offset (custom pitch)
[14:08:53] [PASSED] vertical offset (custom pitch)
[14:08:53] [PASSED] horizontal and vertical offset (custom pitch)
[14:08:53] ============= [PASSED] drm_test_fb_clip_offset =============
[14:08:53] ============== drm_test_fb_build_fourcc_list ==============
[14:08:53] [PASSED] no native formats
[14:08:53] [PASSED] XRGB8888 as native format
[14:08:53] [PASSED] remove duplicates
[14:08:53] [PASSED] convert alpha formats
[14:08:53] [PASSED] random formats
[14:08:53] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[14:08:53] =================== drm_test_fb_memcpy ====================
[14:08:53] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[14:08:53] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[14:08:53] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[14:08:53] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[14:08:53] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[14:08:53] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[14:08:53] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[14:08:53] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[14:08:53] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[14:08:53] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[14:08:53] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[14:08:53] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[14:08:53] =============== [PASSED] drm_test_fb_memcpy ================
[14:08:53] ============= [PASSED] drm_format_helper_test ==============
[14:08:53] ================= drm_format (18 subtests) =================
[14:08:53] [PASSED] drm_test_format_block_width_invalid
[14:08:53] [PASSED] drm_test_format_block_width_one_plane
[14:08:53] [PASSED] drm_test_format_block_width_two_plane
[14:08:53] [PASSED] drm_test_format_block_width_three_plane
[14:08:53] [PASSED] drm_test_format_block_width_tiled
[14:08:53] [PASSED] drm_test_format_block_height_invalid
[14:08:53] [PASSED] drm_test_format_block_height_one_plane
[14:08:53] [PASSED] drm_test_format_block_height_two_plane
[14:08:53] [PASSED] drm_test_format_block_height_three_plane
[14:08:53] [PASSED] drm_test_format_block_height_tiled
[14:08:53] [PASSED] drm_test_format_min_pitch_invalid
[14:08:53] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[14:08:53] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[14:08:53] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[14:08:53] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[14:08:53] [PASSED] drm_test_format_min_pitch_two_plane
[14:08:53] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[14:08:53] [PASSED] drm_test_format_min_pitch_tiled
[14:08:53] =================== [PASSED] drm_format ====================
[14:08:53] ============== drm_framebuffer (10 subtests) ===============
[14:08:53] ========== drm_test_framebuffer_check_src_coords ==========
[14:08:53] [PASSED] Success: source fits into fb
[14:08:53] [PASSED] Fail: overflowing fb with x-axis coordinate
[14:08:53] [PASSED] Fail: overflowing fb with y-axis coordinate
[14:08:53] [PASSED] Fail: overflowing fb with source width
[14:08:53] [PASSED] Fail: overflowing fb with source height
[14:08:53] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[14:08:53] [PASSED] drm_test_framebuffer_cleanup
[14:08:53] =============== drm_test_framebuffer_create ===============
[14:08:53] [PASSED] ABGR8888 normal sizes
[14:08:53] [PASSED] ABGR8888 max sizes
[14:08:53] [PASSED] ABGR8888 pitch greater than min required
[14:08:53] [PASSED] ABGR8888 pitch less than min required
[14:08:53] [PASSED] ABGR8888 Invalid width
[14:08:53] [PASSED] ABGR8888 Invalid buffer handle
[14:08:53] [PASSED] No pixel format
[14:08:53] [PASSED] ABGR8888 Width 0
[14:08:53] [PASSED] ABGR8888 Height 0
[14:08:53] [PASSED] ABGR8888 Out of bound height * pitch combination
[14:08:53] [PASSED] ABGR8888 Large buffer offset
[14:08:53] [PASSED] ABGR8888 Buffer offset for inexistent plane
[14:08:53] [PASSED] ABGR8888 Invalid flag
[14:08:53] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[14:08:53] [PASSED] ABGR8888 Valid buffer modifier
[14:08:53] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[14:08:53] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[14:08:53] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[14:08:53] [PASSED] NV12 Normal sizes
[14:08:53] [PASSED] NV12 Max sizes
[14:08:53] [PASSED] NV12 Invalid pitch
[14:08:53] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[14:08:53] [PASSED] NV12 different modifier per-plane
[14:08:53] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[14:08:53] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[14:08:53] [PASSED] NV12 Modifier for inexistent plane
[14:08:53] [PASSED] NV12 Handle for inexistent plane
[14:08:53] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[14:08:53] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[14:08:53] [PASSED] YVU420 Normal sizes
[14:08:53] [PASSED] YVU420 Max sizes
[14:08:53] [PASSED] YVU420 Invalid pitch
[14:08:53] [PASSED] YVU420 Different pitches
[14:08:53] [PASSED] YVU420 Different buffer offsets/pitches
[14:08:53] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[14:08:53] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[14:08:53] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[14:08:53] [PASSED] YVU420 Valid modifier
[14:08:53] [PASSED] YVU420 Different modifiers per plane
[14:08:53] [PASSED] YVU420 Modifier for inexistent plane
[14:08:53] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[14:08:53] [PASSED] X0L2 Normal sizes
[14:08:53] [PASSED] X0L2 Max sizes
[14:08:53] [PASSED] X0L2 Invalid pitch
[14:08:53] [PASSED] X0L2 Pitch greater than minimum required
[14:08:53] [PASSED] X0L2 Handle for inexistent plane
[14:08:53] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[14:08:53] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[14:08:53] [PASSED] X0L2 Valid modifier
[14:08:53] [PASSED] X0L2 Modifier for inexistent plane
[14:08:53] =========== [PASSED] drm_test_framebuffer_create ===========
[14:08:53] [PASSED] drm_test_framebuffer_free
[14:08:53] [PASSED] drm_test_framebuffer_init
[14:08:53] [PASSED] drm_test_framebuffer_init_bad_format
[14:08:53] [PASSED] drm_test_framebuffer_init_dev_mismatch
[14:08:53] [PASSED] drm_test_framebuffer_lookup
[14:08:53] [PASSED] drm_test_framebuffer_lookup_inexistent
[14:08:53] [PASSED] drm_test_framebuffer_modifiers_not_supported
[14:08:53] ================= [PASSED] drm_framebuffer =================
[14:08:53] ================ drm_gem_shmem (8 subtests) ================
[14:08:53] [PASSED] drm_gem_shmem_test_obj_create
[14:08:53] [PASSED] drm_gem_shmem_test_obj_create_private
[14:08:53] [PASSED] drm_gem_shmem_test_pin_pages
[14:08:53] [PASSED] drm_gem_shmem_test_vmap
[14:08:53] [PASSED] drm_gem_shmem_test_get_pages_sgt
[14:08:53] [PASSED] drm_gem_shmem_test_get_sg_table
[14:08:53] [PASSED] drm_gem_shmem_test_madvise
[14:08:53] [PASSED] drm_gem_shmem_test_purge
[14:08:53] ================== [PASSED] drm_gem_shmem ==================
[14:08:53] === drm_atomic_helper_connector_hdmi_check (23 subtests) ===
[14:08:53] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[14:08:53] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[14:08:53] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[14:08:53] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[14:08:53] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[14:08:53] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[14:08:53] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[14:08:53] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[14:08:53] [PASSED] drm_test_check_disable_connector
[14:08:53] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[14:08:53] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[14:08:53] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[14:08:53] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[14:08:53] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[14:08:53] [PASSED] drm_test_check_output_bpc_dvi
[14:08:53] [PASSED] drm_test_check_output_bpc_format_vic_1
[14:08:53] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[14:08:53] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[14:08:53] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[14:08:53] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[14:08:53] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[14:08:53] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[14:08:53] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[14:08:53] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[14:08:53] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[14:08:53] [PASSED] drm_test_check_broadcast_rgb_value
[14:08:53] [PASSED] drm_test_check_bpc_8_value
[14:08:53] [PASSED] drm_test_check_bpc_10_value
[14:08:53] [PASSED] drm_test_check_bpc_12_value
[14:08:53] [PASSED] drm_test_check_format_value
[14:08:53] [PASSED] drm_test_check_tmds_char_value
[14:08:53] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[14:08:53] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[14:08:53] [PASSED] drm_test_check_mode_valid
[14:08:53] [PASSED] drm_test_check_mode_valid_reject
[14:08:53] [PASSED] drm_test_check_mode_valid_reject_rate
[14:08:53] [PASSED] drm_test_check_mode_valid_reject_max_clock
[14:08:53] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[14:08:53] ================= drm_managed (2 subtests) =================
[14:08:53] [PASSED] drm_test_managed_release_action
[14:08:53] [PASSED] drm_test_managed_run_action
[14:08:53] =================== [PASSED] drm_managed ===================
[14:08:53] =================== drm_mm (6 subtests) ====================
[14:08:53] [PASSED] drm_test_mm_init
[14:08:53] [PASSED] drm_test_mm_debug
[14:08:53] [PASSED] drm_test_mm_align32
[14:08:53] [PASSED] drm_test_mm_align64
[14:08:53] [PASSED] drm_test_mm_lowest
[14:08:53] [PASSED] drm_test_mm_highest
[14:08:53] ===================== [PASSED] drm_mm ======================
[14:08:53] ============= drm_modes_analog_tv (5 subtests) =============
[14:08:53] [PASSED] drm_test_modes_analog_tv_mono_576i
[14:08:53] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[14:08:53] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[14:08:53] [PASSED] drm_test_modes_analog_tv_pal_576i
[14:08:53] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[14:08:53] =============== [PASSED] drm_modes_analog_tv ===============
[14:08:53] ============== drm_plane_helper (2 subtests) ===============
[14:08:53] =============== drm_test_check_plane_state ================
[14:08:53] [PASSED] clipping_simple
[14:08:53] [PASSED] clipping_rotate_reflect
[14:08:53] [PASSED] positioning_simple
[14:08:53] [PASSED] upscaling
[14:08:53] [PASSED] downscaling
[14:08:53] [PASSED] rounding1
[14:08:53] [PASSED] rounding2
[14:08:53] [PASSED] rounding3
[14:08:53] [PASSED] rounding4
[14:08:53] =========== [PASSED] drm_test_check_plane_state ============
[14:08:53] =========== drm_test_check_invalid_plane_state ============
[14:08:53] [PASSED] positioning_invalid
[14:08:53] [PASSED] upscaling_invalid
[14:08:53] [PASSED] downscaling_invalid
[14:08:53] ======= [PASSED] drm_test_check_invalid_plane_state ========
[14:08:53] ================ [PASSED] drm_plane_helper =================
[14:08:53] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[14:08:53] ====== drm_test_connector_helper_tv_get_modes_check =======
[14:08:53] [PASSED] None
[14:08:53] [PASSED] PAL
[14:08:53] [PASSED] NTSC
[14:08:53] [PASSED] Both, NTSC Default
[14:08:53] [PASSED] Both, PAL Default
[14:08:53] [PASSED] Both, NTSC Default, with PAL on command-line
[14:08:53] [PASSED] Both, PAL Default, with NTSC on command-line
[14:08:53] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[14:08:53] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[14:08:53] ================== drm_rect (9 subtests) ===================
[14:08:53] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[14:08:53] [PASSED] drm_test_rect_clip_scaled_not_clipped
[14:08:53] [PASSED] drm_test_rect_clip_scaled_clipped
[14:08:53] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[14:08:53] ================= drm_test_rect_intersect =================
[14:08:53] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[14:08:53] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[14:08:53] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[14:08:53] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[14:08:53] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[14:08:53] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[14:08:53] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[14:08:53] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[14:08:53] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[14:08:53] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[14:08:53] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[14:08:53] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[14:08:53] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[14:08:53] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[14:08:53] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[14:08:53] ============= [PASSED] drm_test_rect_intersect =============
[14:08:53] ================ drm_test_rect_calc_hscale ================
[14:08:53] [PASSED] normal use
[14:08:53] [PASSED] out of max range
[14:08:53] [PASSED] out of min range
[14:08:53] [PASSED] zero dst
[14:08:53] [PASSED] negative src
[14:08:53] [PASSED] negative dst
[14:08:53] ============ [PASSED] drm_test_rect_calc_hscale ============
[14:08:53] ================ drm_test_rect_calc_vscale ================
[14:08:53] [PASSED] normal use
[14:08:53] [PASSED] out of max range
[14:08:53] [PASSED] out of min range
[14:08:53] [PASSED] zero dst
[14:08:53] [PASSED] negative src
[14:08:53] [PASSED] negative dst
[14:08:53] ============ [PASSED] drm_test_rect_calc_vscale ============
[14:08:53] ================== drm_test_rect_rotate ===================
[14:08:53] [PASSED] reflect-x
[14:08:53] [PASSED] reflect-y
[14:08:53] [PASSED] rotate-0
[14:08:53] [PASSED] rotate-90
[14:08:53] [PASSED] rotate-180
[14:08:53] [PASSED] rotate-270
[14:08:53] ============== [PASSED] drm_test_rect_rotate ===============
[14:08:53] ================ drm_test_rect_rotate_inv =================
[14:08:53] [PASSED] reflect-x
[14:08:53] [PASSED] reflect-y
[14:08:53] [PASSED] rotate-0
[14:08:53] [PASSED] rotate-90
[14:08:53] [PASSED] rotate-180
[14:08:53] [PASSED] rotate-270
[14:08:53] ============ [PASSED] drm_test_rect_rotate_inv =============
[14:08:53] ==================== [PASSED] drm_rect =====================
[14:08:53] ============================================================
[14:08:53] Testing complete. Ran 608 tests: passed: 608
[14:08:53] Elapsed time: 22.810s total, 1.745s configuring, 20.894s building, 0.152s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[14:08:53] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[14:08:54] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[14:09:02] Starting KUnit Kernel (1/1)...
[14:09:02] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:09:02] ================= ttm_device (5 subtests) ==================
[14:09:02] [PASSED] ttm_device_init_basic
[14:09:02] [PASSED] ttm_device_init_multiple
[14:09:02] [PASSED] ttm_device_fini_basic
[14:09:02] [PASSED] ttm_device_init_no_vma_man
[14:09:02] ================== ttm_device_init_pools ==================
[14:09:02] [PASSED] No DMA allocations, no DMA32 required
[14:09:02] [PASSED] DMA allocations, DMA32 required
[14:09:02] [PASSED] No DMA allocations, DMA32 required
[14:09:02] [PASSED] DMA allocations, no DMA32 required
[14:09:02] ============== [PASSED] ttm_device_init_pools ==============
[14:09:02] =================== [PASSED] ttm_device ====================
[14:09:02] ================== ttm_pool (8 subtests) ===================
[14:09:02] ================== ttm_pool_alloc_basic ===================
[14:09:02] [PASSED] One page
[14:09:02] [PASSED] More than one page
[14:09:02] [PASSED] Above the allocation limit
[14:09:02] [PASSED] One page, with coherent DMA mappings enabled
[14:09:02] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[14:09:02] ============== [PASSED] ttm_pool_alloc_basic ===============
[14:09:02] ============== ttm_pool_alloc_basic_dma_addr ==============
[14:09:02] [PASSED] One page
[14:09:02] [PASSED] More than one page
[14:09:02] [PASSED] Above the allocation limit
[14:09:02] [PASSED] One page, with coherent DMA mappings enabled
[14:09:02] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[14:09:02] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[14:09:02] [PASSED] ttm_pool_alloc_order_caching_match
[14:09:02] [PASSED] ttm_pool_alloc_caching_mismatch
[14:09:02] [PASSED] ttm_pool_alloc_order_mismatch
[14:09:02] [PASSED] ttm_pool_free_dma_alloc
[14:09:02] [PASSED] ttm_pool_free_no_dma_alloc
[14:09:02] [PASSED] ttm_pool_fini_basic
[14:09:02] ==================== [PASSED] ttm_pool =====================
[14:09:02] ================ ttm_resource (8 subtests) =================
[14:09:02] ================= ttm_resource_init_basic =================
[14:09:02] [PASSED] Init resource in TTM_PL_SYSTEM
[14:09:02] [PASSED] Init resource in TTM_PL_VRAM
[14:09:02] [PASSED] Init resource in a private placement
[14:09:02] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[14:09:02] ============= [PASSED] ttm_resource_init_basic =============
[14:09:02] [PASSED] ttm_resource_init_pinned
[14:09:02] [PASSED] ttm_resource_fini_basic
[14:09:02] [PASSED] ttm_resource_manager_init_basic
[14:09:02] [PASSED] ttm_resource_manager_usage_basic
[14:09:02] [PASSED] ttm_resource_manager_set_used_basic
[14:09:02] [PASSED] ttm_sys_man_alloc_basic
[14:09:02] [PASSED] ttm_sys_man_free_basic
[14:09:02] ================== [PASSED] ttm_resource ===================
[14:09:02] =================== ttm_tt (15 subtests) ===================
[14:09:02] ==================== ttm_tt_init_basic ====================
[14:09:02] [PASSED] Page-aligned size
[14:09:02] [PASSED] Extra pages requested
[14:09:02] ================ [PASSED] ttm_tt_init_basic ================
[14:09:02] [PASSED] ttm_tt_init_misaligned
[14:09:02] [PASSED] ttm_tt_fini_basic
[14:09:02] [PASSED] ttm_tt_fini_sg
[14:09:02] [PASSED] ttm_tt_fini_shmem
[14:09:02] [PASSED] ttm_tt_create_basic
[14:09:02] [PASSED] ttm_tt_create_invalid_bo_type
[14:09:02] [PASSED] ttm_tt_create_ttm_exists
[14:09:02] [PASSED] ttm_tt_create_failed
[14:09:02] [PASSED] ttm_tt_destroy_basic
[14:09:02] [PASSED] ttm_tt_populate_null_ttm
[14:09:02] [PASSED] ttm_tt_populate_populated_ttm
[14:09:02] [PASSED] ttm_tt_unpopulate_basic
[14:09:02] [PASSED] ttm_tt_unpopulate_empty_ttm
[14:09:02] [PASSED] ttm_tt_swapin_basic
[14:09:02] ===================== [PASSED] ttm_tt ======================
[14:09:02] =================== ttm_bo (14 subtests) ===================
[14:09:02] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[14:09:02] [PASSED] Cannot be interrupted and sleeps
[14:09:02] [PASSED] Cannot be interrupted, locks straight away
[14:09:02] [PASSED] Can be interrupted, sleeps
[14:09:02] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[14:09:02] [PASSED] ttm_bo_reserve_locked_no_sleep
[14:09:02] [PASSED] ttm_bo_reserve_no_wait_ticket
[14:09:02] [PASSED] ttm_bo_reserve_double_resv
[14:09:02] [PASSED] ttm_bo_reserve_interrupted
[14:09:02] [PASSED] ttm_bo_reserve_deadlock
[14:09:02] [PASSED] ttm_bo_unreserve_basic
[14:09:02] [PASSED] ttm_bo_unreserve_pinned
[14:09:02] [PASSED] ttm_bo_unreserve_bulk
[14:09:02] [PASSED] ttm_bo_put_basic
[14:09:02] [PASSED] ttm_bo_put_shared_resv
[14:09:02] [PASSED] ttm_bo_pin_basic
[14:09:02] [PASSED] ttm_bo_pin_unpin_resource
[14:09:02] [PASSED] ttm_bo_multiple_pin_one_unpin
[14:09:02] ===================== [PASSED] ttm_bo ======================
[14:09:02] ============== ttm_bo_validate (22 subtests) ===============
[14:09:02] ============== ttm_bo_init_reserved_sys_man ===============
[14:09:02] [PASSED] Buffer object for userspace
[14:09:02] [PASSED] Kernel buffer object
[14:09:02] [PASSED] Shared buffer object
[14:09:02] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[14:09:02] ============== ttm_bo_init_reserved_mock_man ==============
[14:09:02] [PASSED] Buffer object for userspace
[14:09:02] [PASSED] Kernel buffer object
[14:09:02] [PASSED] Shared buffer object
[14:09:02] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[14:09:02] [PASSED] ttm_bo_init_reserved_resv
[14:09:02] ================== ttm_bo_validate_basic ==================
[14:09:02] [PASSED] Buffer object for userspace
[14:09:02] [PASSED] Kernel buffer object
[14:09:02] [PASSED] Shared buffer object
[14:09:02] ============== [PASSED] ttm_bo_validate_basic ==============
[14:09:02] [PASSED] ttm_bo_validate_invalid_placement
[14:09:02] ============= ttm_bo_validate_same_placement ==============
[14:09:02] [PASSED] System manager
[14:09:02] [PASSED] VRAM manager
[14:09:02] ========= [PASSED] ttm_bo_validate_same_placement ==========
[14:09:02] [PASSED] ttm_bo_validate_failed_alloc
[14:09:02] [PASSED] ttm_bo_validate_pinned
[14:09:02] [PASSED] ttm_bo_validate_busy_placement
[14:09:02] ================ ttm_bo_validate_multihop =================
[14:09:02] [PASSED] Buffer object for userspace
[14:09:02] [PASSED] Kernel buffer object
[14:09:02] [PASSED] Shared buffer object
[14:09:02] ============ [PASSED] ttm_bo_validate_multihop =============
[14:09:02] ========== ttm_bo_validate_no_placement_signaled ==========
[14:09:02] [PASSED] Buffer object in system domain, no page vector
[14:09:02] [PASSED] Buffer object in system domain with an existing page vector
[14:09:02] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[14:09:02] ======== ttm_bo_validate_no_placement_not_signaled ========
[14:09:02] [PASSED] Buffer object for userspace
[14:09:02] [PASSED] Kernel buffer object
[14:09:02] [PASSED] Shared buffer object
[14:09:02] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[14:09:02] [PASSED] ttm_bo_validate_move_fence_signaled
[14:09:02] ========= ttm_bo_validate_move_fence_not_signaled =========
[14:09:02] [PASSED] Waits for GPU
[14:09:02] [PASSED] Tries to lock straight away
[14:09:03] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[14:09:03] [PASSED] ttm_bo_validate_swapout
[14:09:03] [PASSED] ttm_bo_validate_happy_evict
[14:09:03] [PASSED] ttm_bo_validate_all_pinned_evict
[14:09:03] [PASSED] ttm_bo_validate_allowed_only_evict
[14:09:03] [PASSED] ttm_bo_validate_deleted_evict
[14:09:03] [PASSED] ttm_bo_validate_busy_domain_evict
[14:09:03] [PASSED] ttm_bo_validate_evict_gutting
[14:09:03] [PASSED] ttm_bo_validate_recrusive_evict
[14:09:03] ================= [PASSED] ttm_bo_validate =================
[14:09:03] ============================================================
[14:09:03] Testing complete. Ran 102 tests: passed: 102
[14:09:03] Elapsed time: 10.020s total, 1.729s configuring, 7.624s building, 0.567s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
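For reference, the `kunit.py run --kunitconfig …` invocations above point at per-subsystem config fragments that enable just the tests being exercised. A minimal sketch of what a fragment like `drivers/gpu/drm/ttm/tests/.kunitconfig` plausibly contains (assumed contents for illustration; the file in the tree is authoritative):

```
# Core KUnit framework (assumed baseline for any .kunitconfig)
CONFIG_KUNIT=y
# DRM core plus its KUnit test helpers (assumption: required by the ttm tests)
CONFIG_DRM=y
CONFIG_DRM_KUNIT_TEST_HELPERS=y
# The ttm test suite itself
CONFIG_DRM_TTM_KUNIT_TEST=y
```

kunit.py merges such a fragment into `.kunit/.config` via `make ARCH=um O=.kunit olddefconfig`, which is the "Populating config with" step visible in the log.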
* ✗ CI.Build: failure for PREFETCH and MADVISE for SVM ranges (rev3)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (34 preceding siblings ...)
2025-04-07 14:09 ` ✓ CI.KUnit: success " Patchwork
@ 2025-04-07 14:12 ` Patchwork
2025-04-09 5:11 ` ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev4) Patchwork
` (7 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-07 14:12 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev3)
URL : https://patchwork.freedesktop.org/series/146290/
State : failure
== Summary ==
CC [M] drivers/mfd/da9062-core.o
CC [M] net/netfilter/xt_esp.o
CC [M] net/netfilter/xt_hashlimit.o
CC [M] drivers/mfd/da9150-core.o
CC [M] net/netfilter/xt_helper.o
CC [M] drivers/mfd/max77541.o
CC [M] net/netfilter/xt_hl.o
CC [M] net/netfilter/xt_ipcomp.o
CC [M] net/netfilter/xt_iprange.o
CC [M] drivers/mfd/max8907.o
CC [M] drivers/mfd/mp2629.o
CC [M] net/netfilter/xt_l2tp.o
CC [M] net/netfilter/xt_length.o
CC [M] net/netfilter/xt_limit.o
CC [M] drivers/mfd/mt6360-core.o
CC [M] drivers/mfd/mt6370.o
CC [M] net/netfilter/xt_mac.o
AR drivers/regulator/built-in.a
CC [M] net/netfilter/xt_multiport.o
CC [M] drivers/mfd/mt6397-core.o
AR kernel/built-in.a
CC [M] drivers/mfd/mt6397-irq.o
CC [M] drivers/mfd/mt6358-irq.o
CC [M] net/netfilter/xt_nfacct.o
CC [M] net/netfilter/xt_osf.o
CC [M] net/netfilter/xt_owner.o
CC [M] net/netfilter/xt_cgroup.o
CC [M] drivers/mfd/kempld-core.o
CC [M] drivers/mfd/intel_quark_i2c_gpio.o
CC [M] net/netfilter/xt_physdev.o
CC [M] drivers/mfd/lpc_sch.o
CC [M] net/netfilter/xt_pkttype.o
CC [M] net/netfilter/xt_policy.o
CC [M] net/netfilter/xt_quota.o
AR lib/built-in.a
CC [M] drivers/mfd/lpc_ich.o
CC [M] net/netfilter/xt_rateest.o
CC [M] drivers/mfd/rdc321x-southbridge.o
CC [M] drivers/mfd/janz-cmodio.o
CC [M] net/netfilter/xt_realm.o
CC [M] drivers/mfd/vx855.o
CC [M] net/netfilter/xt_recent.o
CC [M] drivers/mfd/wl1273-core.o
CC [M] net/netfilter/xt_sctp.o
CC [M] net/netfilter/xt_socket.o
CC [M] net/netfilter/xt_state.o
CC [M] net/netfilter/xt_statistic.o
CC [M] drivers/mfd/si476x-cmd.o
CC [M] drivers/mfd/si476x-prop.o
CC [M] net/netfilter/xt_string.o
CC [M] drivers/mfd/si476x-i2c.o
CC [M] drivers/mfd/intel_pmc_bxt.o
CC [M] net/netfilter/xt_tcpmss.o
CC [M] net/netfilter/xt_time.o
CC [M] net/netfilter/xt_u32.o
CC [M] drivers/mfd/viperboard.o
CC [M] drivers/mfd/lm3533-core.o
CC [M] drivers/mfd/lm3533-ctrlbank.o
CC [M] drivers/mfd/retu-mfd.o
CC [M] drivers/mfd/iqs62x.o
CC [M] drivers/mfd/menf21bmc.o
CC [M] drivers/mfd/dln2.o
CC [M] drivers/mfd/rt4831.o
CC [M] drivers/mfd/rt5033.o
CC [M] drivers/mfd/rt5120.o
CC [M] drivers/mfd/sky81452.o
CC [M] drivers/mfd/intel_soc_pmic_bxtwc.o
CC [M] drivers/mfd/intel_soc_pmic_chtdc_ti.o
CC [M] drivers/mfd/intel_soc_pmic_mrfld.o
CC [M] drivers/mfd/rave-sp.o
CC [M] drivers/mfd/simple-mfd-i2c.o
CC [M] drivers/mfd/smpro-core.o
CC [M] drivers/mfd/intel-m10-bmc-core.o
CC [M] drivers/mfd/intel-m10-bmc-spi.o
CC [M] drivers/mfd/atc260x-core.o
CC [M] drivers/mfd/atc260x-i2c.o
LD [M] drivers/mfd/arizona.o
LD [M] drivers/mfd/wm8994.o
LD [M] drivers/mfd/madera.o
LD [M] drivers/mfd/ocelot-soc.o
LD [M] drivers/mfd/mt6397.o
AR drivers/mfd/built-in.a
LD [M] net/netfilter/nf_conntrack.o
LD [M] net/netfilter/nf_conntrack_h323.o
LD [M] net/netfilter/nf_nat.o
LD [M] net/netfilter/nf_tables.o
LD [M] net/netfilter/nf_flow_table.o
AR net/netfilter/built-in.a
LD [M] drivers/mfd/si476x-core.o
AR net/built-in.a
make[3]: *** [../scripts/Makefile.build:461: drivers] Error 2
make[3]: *** Waiting for unfinished jobs....
CC [M] kernel/kheaders.o
make[2]: *** [/kernel/Makefile:2006: .] Error 2
make[1]: *** [/kernel/Makefile:248: __sub-make] Error 2
make[1]: Leaving directory '/kernel/build64-default'
make: *** [Makefile:248: __sub-make] Error 2
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* Re: [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
2025-04-07 10:17 ` [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops Himal Prasad Ghimiray
2025-04-07 10:30 ` Boris Brezillon
@ 2025-04-07 22:42 ` kernel test robot
1 sibling, 0 replies; 120+ messages in thread
From: kernel test robot @ 2025-04-07 22:42 UTC (permalink / raw)
To: Himal Prasad Ghimiray, intel-xe
Cc: oe-kbuild-all, matthew.brost, thomas.hellstrom,
Himal Prasad Ghimiray, Danilo Krummrich, Boris Brezillon,
dri-devel
Hi Himal,
kernel test robot noticed the following build warnings:
[auto build test WARNING on drm-xe/drm-xe-next]
[cannot apply to linus/master v6.15-rc1 next-20250407]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Himal-Prasad-Ghimiray/drm-xe-Introduce-xe_vma_op_prefetch_range-struct-for-prefetch-of-ranges/20250407-215536
base: https://gitlab.freedesktop.org/drm/xe/kernel.git drm-xe-next
patch link: https://lore.kernel.org/r/20250407101719.3350996-17-himal.prasad.ghimiray%40intel.com
patch subject: [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
config: alpha-allyesconfig (https://download.01.org/0day-ci/archive/20250408/202504080624.tIlwDwWT-lkp@intel.com/config)
compiler: alpha-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250408/202504080624.tIlwDwWT-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202504080624.tIlwDwWT-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> include/drm/drm_gpuvm.h:233: warning: expecting prototype for enum drm_gpuvm_madvise_flags. Prototype was for enum drm_gpuvm_sm_map_ops_flags instead
vim +233 include/drm/drm_gpuvm.h
213
214 /**
215 * enum drm_gpuvm_madvise_flags - flags for drm_gpuvm split/merge ops
216 */
217 enum drm_gpuvm_sm_map_ops_flags {
218 /**
219 * @DRM_GPUVM_SM_MAP_NOT_MADVISE: DEFAULT sm_map ops
220 */
221 DRM_GPUVM_SM_MAP_NOT_MADVISE = 0,
222
223 /**
224 * @DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE: This flag is used by
225 * drm_gpuvm_sm_map_ops_create to iterate over GPUVMA's in the
226 * user-provided range and split the existing non-GEM object VMA if the
227 * start or end of the input range lies within it. The operations can
228 * create up to 2 REMAPS and 2 MAPs. Unlike drm_gpuvm_sm_map_ops_flags
229 * in default mode, the operation with this flag will never have UNMAPs and
230 * merges, and can be without any final operations.
231 */
232 DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE = BIT(0),
> 233 };
234
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v2 08/32] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr
2025-04-07 10:16 ` [PATCH v2 08/32] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr Himal Prasad Ghimiray
@ 2025-04-07 22:42 ` kernel test robot
0 siblings, 0 replies; 120+ messages in thread
From: kernel test robot @ 2025-04-07 22:42 UTC (permalink / raw)
To: Himal Prasad Ghimiray, intel-xe
Cc: oe-kbuild-all, matthew.brost, thomas.hellstrom,
Himal Prasad Ghimiray
Hi Himal,
kernel test robot noticed the following build warnings:
[auto build test WARNING on drm-xe/drm-xe-next]
[also build test WARNING on next-20250407]
[cannot apply to linus/master v6.15-rc1]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Himal-Prasad-Ghimiray/drm-xe-Introduce-xe_vma_op_prefetch_range-struct-for-prefetch-of-ranges/20250407-215536
base: https://gitlab.freedesktop.org/drm/xe/kernel.git drm-xe-next
patch link: https://lore.kernel.org/r/20250407101719.3350996-9-himal.prasad.ghimiray%40intel.com
patch subject: [PATCH v2 08/32] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr
config: sparc-randconfig-001-20250407 (https://download.01.org/0day-ci/archive/20250408/202504080624.57T20xNh-lkp@intel.com/config)
compiler: sparc64-linux-gcc (GCC) 13.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250408/202504080624.57T20xNh-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202504080624.57T20xNh-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/gpu/drm/xe/xe_vm.c:2158: warning: Function parameter or struct member 'page_addr' not described in 'xe_vm_find_vma_by_addr'
>> drivers/gpu/drm/xe/xe_vm.c:2158: warning: Excess function parameter 'page_address' description in 'xe_vm_find_vma_by_addr'
vim +2158 drivers/gpu/drm/xe/xe_vm.c
2150
2151 /**
2152 * xe_vm_find_vma_by_addr() - Find a VMA by its address
2153 *
2154 * @vm: the xe_vm the vma belongs to
2155 * @page_address: address to look up
2156 */
2157 struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr)
> 2158 {
2159 struct xe_vma *vma = NULL;
2160
2161 if (vm->usm.last_fault_vma) { /* Fast lookup */
2162 if (vma_matches(vm->usm.last_fault_vma, page_addr))
2163 vma = vm->usm.last_fault_vma;
2164 }
2165 if (!vma)
2166 vma = xe_vm_find_overlapping_vma(vm, page_addr, SZ_4K);
2167
2168 return vma;
2169 }
2170
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public
2025-04-07 10:17 ` [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
@ 2025-04-08 1:49 ` kernel test robot
2025-05-14 18:47 ` Matthew Brost
1 sibling, 0 replies; 120+ messages in thread
From: kernel test robot @ 2025-04-08 1:49 UTC (permalink / raw)
To: Himal Prasad Ghimiray, intel-xe
Cc: oe-kbuild-all, matthew.brost, thomas.hellstrom,
Himal Prasad Ghimiray
Hi Himal,
kernel test robot noticed the following build warnings:
[auto build test WARNING on drm-xe/drm-xe-next]
[cannot apply to linus/master v6.15-rc1 next-20250407]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Himal-Prasad-Ghimiray/drm-xe-Introduce-xe_vma_op_prefetch_range-struct-for-prefetch-of-ranges/20250407-215536
base: https://gitlab.freedesktop.org/drm/xe/kernel.git drm-xe-next
patch link: https://lore.kernel.org/r/20250407101719.3350996-22-himal.prasad.ghimiray%40intel.com
patch subject: [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public
config: alpha-allyesconfig (https://download.01.org/0day-ci/archive/20250408/202504080922.7Yz1JvEt-lkp@intel.com/config)
compiler: alpha-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250408/202504080922.7Yz1JvEt-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202504080922.7Yz1JvEt-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> include/drm/drm_gpusvm.h:586: warning: Function parameter or struct member 'gpusvm__' not described in 'drm_gpusvm_for_each_notifier'
>> include/drm/drm_gpusvm.h:603: warning: Function parameter or struct member 'gpusvm__' not described in 'drm_gpusvm_for_each_notifier_safe'
vim +586 include/drm/drm_gpusvm.h
573
574 /**
575 * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
576 * @notifier__: Iterator variable for the notifiers
577 * @notifier__: Pointer to the GPU SVM notifier
578 * @start__: Start address of the notifier
579 * @end__: End address of the notifier
580 *
581 * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
582 */
583 #define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
584 for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
585 (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
> 586 (notifier__) = __drm_gpusvm_notifier_next(notifier__))
587
588 /**
589 * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
590 * @notifier__: Iterator variable for the notifiers
591 * @next__: Iterator variable for the notifiers temporay storage
592 * @notifier__: Pointer to the GPU SVM notifier
593 * @start__: Start address of the notifier
594 * @end__: End address of the notifier
595 *
596 * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
597 * removing notifiers from it.
598 */
599 #define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
600 for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
601 (next__) = __drm_gpusvm_notifier_next(notifier__); \
602 (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
> 603 (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
604
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev4)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (35 preceding siblings ...)
2025-04-07 14:12 ` ✗ CI.Build: failure " Patchwork
@ 2025-04-09 5:11 ` Patchwork
2025-04-09 5:11 ` ✗ CI.checkpatch: warning " Patchwork
` (6 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-09 5:11 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev4)
URL : https://patchwork.freedesktop.org/series/146290/
State : success
== Summary ==
=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: a49a4787e6bc drm-tip: 2025y-04m-08d-21h-20m-56s UTC integration manifest
=== git am output follows ===
Applying: drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges
Applying: drm/xe: Make xe_svm_alloc_vram public
Applying: drm/xe/svm: Helper to add tile masks to svm ranges
Applying: drm/xe/svm: Make to_xe_range a public function
Applying: drm/xe/svm: Make xe_svm_range_* end/start/size public
Applying: drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value
Applying: drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch
Applying: drm/xe: Rename lookup_vma function to xe_find_vma_by_addr
Applying: drm/xe/svm: Allow unaligned addresses and ranges for prefetch
Applying: drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm
Applying: drm/xe/svm: Add function to determine if range needs VRAM migration
Applying: drm/gpusvm: Introduce vram_only flag for VRAM allocation
Applying: drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
Applying: drm/xe/svm: Implement prefetch support for SVM ranges
Applying: drm/xe/vm: Add debug prints for SVM range prefetch
Applying: Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
Applying: drm/xe/uapi: Add madvise interface
Applying: drm/xe/vm: Add attributes struct as member of vma
Applying: drm/xe/vma: Move pat_index to vma attributes
Applying: drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
Applying: drm/gpusvm: Make drm_gpusvm_for_each_* macros public
Applying: drm/xe/svm: Split system allocator vma incase of madvise call
Applying: drm/xe: Implement madvise ioctl for xe
Applying: drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
Applying: drm/xe/svm : Add svm ranges migration policy on atomic access
Applying: drm/xe/madvise: Update migration policy based on preferred location
Applying: drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute
Applying: drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
Applying: drm/xe/svm: Consult madvise preferred location in prefetch
Applying: drm/xe/uapi: Add uapi for vma count and mem attributes
Applying: drm/xe/bo: Add attributes field to xe_bo
Applying: drm/xe/bo: Update atomic_access attribute on madvise
* ✗ CI.checkpatch: warning for PREFETCH and MADVISE for SVM ranges (rev4)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (36 preceding siblings ...)
2025-04-09 5:11 ` ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev4) Patchwork
@ 2025-04-09 5:11 ` Patchwork
2025-04-09 5:12 ` ✓ CI.KUnit: success " Patchwork
` (5 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-09 5:11 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev4)
URL : https://patchwork.freedesktop.org/series/146290/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
13a92ce9fd458ebd6064f23cec8c39c53d02ed26
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 902f9353ae04d49e3d40280650c89f6e6e370bc7
Author: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Date: Mon Apr 7 15:47:19 2025 +0530
drm/xe/bo: Update atomic_access attribute on madvise
Update the bo_atomic_access based on user-provided input and determine
the migration to smem during a CPU fault
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
+ /mt/dim checkpatch a49a4787e6bc70296204f4a6e1b0fed3759938cd drm-intel
752b206d9c89 drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges
f1b78b50c2bf drm/xe: Make xe_svm_alloc_vram public
e4758bf081a4 drm/xe/svm: Helper to add tile masks to svm ranges
c579ef6ed6bc drm/xe/svm: Make to_xe_range a public function
c6f093e3ef5c drm/xe/svm: Make xe_svm_range_* end/start/size public
d82c1cf4e6fd drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value
-:30: ERROR:SPACING: space required before the open parenthesis '('
#30: FILE: drivers/gpu/drm/xe/xe_vm.c:813:
+ if(!inc_val)
total: 1 errors, 0 warnings, 0 checks, 96 lines checked
71ed634656f9 drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch
c30151d94235 drm/xe: Rename lookup_vma function to xe_find_vma_by_addr
c24b9641e007 drm/xe/svm: Allow unaligned addresses and ranges for prefetch
e3b2143af8b1 drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm
95ac3d7256a7 drm/xe/svm: Add function to determine if range needs VRAM migration
6f39700a50ff drm/gpusvm: Introduce vram_only flag for VRAM allocation
43a08682be30 drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
10bf7eafc345 drm/xe/svm: Implement prefetch support for SVM ranges
3ba4d31e877e drm/xe/vm: Add debug prints for SVM range prefetch
db5c1983c7cf Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
0948b3776e6a drm/xe/uapi: Add madvise interface
-:37: WARNING:LONG_LINE: line length of 114 exceeds 100 columns
#37: FILE: include/uapi/drm/xe_drm.h:122:
+#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
total: 0 errors, 1 warnings, 0 checks, 121 lines checked
e65270d8836f drm/xe/vm: Add attributes struct as member of vma
770cfb9e0269 drm/xe/vma: Move pat_index to vma attributes
06e95defa28d drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
296eb3eb2e55 drm/gpusvm: Make drm_gpusvm_for_each_* macros public
-:162: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'range__' - possible side-effects?
#162: FILE: include/drm/drm_gpusvm.h:536:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = __drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:162: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#162: FILE: include/drm/drm_gpusvm.h:536:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = __drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:162: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#162: FILE: include/drm/drm_gpusvm.h:536:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = __drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:209: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#209: FILE: include/drm/drm_gpusvm.h:583:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-:209: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#209: FILE: include/drm/drm_gpusvm.h:583:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:599:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:599:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:599:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
total: 0 errors, 0 warnings, 8 checks, 207 lines checked
2564304ab819 drm/xe/svm: Split system allocator vma incase of madvise call
85953d145284 drm/xe: Implement madvise ioctl for xe
-:48: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#48:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 345 lines checked
df369fad9581 drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
fff502f987dc drm/xe/svm : Add svm ranges migration policy on atomic access
888d2438a2ab drm/xe/madvise: Update migration policy based on preferred location
da42f612c26d drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute
5545b2b6c9dc drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
5edbbef8ec39 drm/xe/svm: Consult madvise preferred location in prefetch
108238268ebf drm/xe/uapi: Add uapi for vma count and mem attributes
-:75: WARNING:LONG_LINE: line length of 107 exceeds 100 columns
#75: FILE: drivers/gpu/drm/xe/xe_vm.c:2210:
+ mem_attrs[i].preferred_mem_loc.migration_policy = vma->attr.preferred_loc.migration_policy;
-:170: WARNING:LONG_LINE: line length of 130 exceeds 100 columns
#170: FILE: include/uapi/drm/xe_drm.h:127:
+#define DRM_IOCTL_XE_VM_QUERY_VMAS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS, struct drm_xe_vm_query_num_vmas)
-:171: WARNING:LONG_LINE: line length of 137 exceeds 100 columns
#171: FILE: include/uapi/drm/xe_drm.h:128:
+#define DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS_ATTRS, struct drm_xe_vm_query_vmas_attr)
total: 0 errors, 3 warnings, 0 checks, 256 lines checked
52c98975e7e8 drm/xe/bo: Add attributes field to xe_bo
902f9353ae04 drm/xe/bo: Update atomic_access attribute on madvise
* ✓ CI.KUnit: success for PREFETCH and MADVISE for SVM ranges (rev4)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (37 preceding siblings ...)
2025-04-09 5:11 ` ✗ CI.checkpatch: warning " Patchwork
@ 2025-04-09 5:12 ` Patchwork
2025-04-09 5:29 ` ✓ CI.Build: " Patchwork
` (4 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-09 5:12 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev4)
URL : https://patchwork.freedesktop.org/series/146290/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[05:11:45] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[05:11:49] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[05:12:16] Starting KUnit Kernel (1/1)...
[05:12:16] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[05:12:16] ================== guc_buf (11 subtests) ===================
[05:12:16] [PASSED] test_smallest
[05:12:16] [PASSED] test_largest
[05:12:16] [PASSED] test_granular
[05:12:16] [PASSED] test_unique
[05:12:16] [PASSED] test_overlap
[05:12:16] [PASSED] test_reusable
[05:12:16] [PASSED] test_too_big
[05:12:16] [PASSED] test_flush
[05:12:16] [PASSED] test_lookup
[05:12:16] [PASSED] test_data
[05:12:16] [PASSED] test_class
[05:12:16] ===================== [PASSED] guc_buf =====================
[05:12:16] =================== guc_dbm (7 subtests) ===================
[05:12:16] [PASSED] test_empty
[05:12:16] [PASSED] test_default
[05:12:16] ======================== test_size ========================
[05:12:16] [PASSED] 4
[05:12:16] [PASSED] 8
[05:12:16] [PASSED] 32
[05:12:16] [PASSED] 256
[05:12:16] ==================== [PASSED] test_size ====================
[05:12:16] ======================= test_reuse ========================
[05:12:16] [PASSED] 4
[05:12:16] [PASSED] 8
[05:12:16] [PASSED] 32
[05:12:16] [PASSED] 256
[05:12:16] =================== [PASSED] test_reuse ====================
[05:12:16] =================== test_range_overlap ====================
[05:12:16] [PASSED] 4
[05:12:16] [PASSED] 8
[05:12:16] [PASSED] 32
[05:12:16] [PASSED] 256
[05:12:16] =============== [PASSED] test_range_overlap ================
[05:12:16] =================== test_range_compact ====================
[05:12:16] [PASSED] 4
[05:12:16] [PASSED] 8
[05:12:16] [PASSED] 32
[05:12:16] [PASSED] 256
[05:12:16] =============== [PASSED] test_range_compact ================
[05:12:16] ==================== test_range_spare =====================
[05:12:16] [PASSED] 4
[05:12:16] [PASSED] 8
[05:12:16] [PASSED] 32
[05:12:16] [PASSED] 256
[05:12:16] ================ [PASSED] test_range_spare =================
[05:12:16] ===================== [PASSED] guc_dbm =====================
[05:12:16] =================== guc_idm (6 subtests) ===================
[05:12:16] [PASSED] bad_init
[05:12:16] [PASSED] no_init
[05:12:16] [PASSED] init_fini
[05:12:16] [PASSED] check_used
[05:12:16] [PASSED] check_quota
[05:12:16] [PASSED] check_all
[05:12:16] ===================== [PASSED] guc_idm =====================
[05:12:16] ================== no_relay (3 subtests) ===================
[05:12:16] [PASSED] xe_drops_guc2pf_if_not_ready
[05:12:16] [PASSED] xe_drops_guc2vf_if_not_ready
[05:12:16] [PASSED] xe_rejects_send_if_not_ready
[05:12:16] ==================== [PASSED] no_relay =====================
[05:12:16] ================== pf_relay (14 subtests) ==================
[05:12:16] [PASSED] pf_rejects_guc2pf_too_short
[05:12:16] [PASSED] pf_rejects_guc2pf_too_long
[05:12:16] [PASSED] pf_rejects_guc2pf_no_payload
[05:12:16] [PASSED] pf_fails_no_payload
[05:12:16] [PASSED] pf_fails_bad_origin
[05:12:16] [PASSED] pf_fails_bad_type
[05:12:16] [PASSED] pf_txn_reports_error
[05:12:16] [PASSED] pf_txn_sends_pf2guc
[05:12:16] [PASSED] pf_sends_pf2guc
[05:12:16] [SKIPPED] pf_loopback_nop
[05:12:16] [SKIPPED] pf_loopback_echo
[05:12:16] [SKIPPED] pf_loopback_fail
[05:12:16] [SKIPPED] pf_loopback_busy
[05:12:16] [SKIPPED] pf_loopback_retry
[05:12:16] ==================== [PASSED] pf_relay =====================
[05:12:16] ================== vf_relay (3 subtests) ===================
[05:12:16] [PASSED] vf_rejects_guc2vf_too_short
[05:12:16] [PASSED] vf_rejects_guc2vf_too_long
[05:12:16] [PASSED] vf_rejects_guc2vf_no_payload
[05:12:16] ==================== [PASSED] vf_relay =====================
[05:12:16] ================= pf_service (11 subtests) =================
[05:12:16] [PASSED] pf_negotiate_any
[05:12:16] [PASSED] pf_negotiate_base_match
[05:12:16] [PASSED] pf_negotiate_base_newer
[05:12:16] [PASSED] pf_negotiate_base_next
[05:12:16] [SKIPPED] pf_negotiate_base_older
[05:12:16] [PASSED] pf_negotiate_base_prev
[05:12:16] [PASSED] pf_negotiate_latest_match
[05:12:16] [PASSED] pf_negotiate_latest_newer
[05:12:16] [PASSED] pf_negotiate_latest_next
[05:12:16] [SKIPPED] pf_negotiate_latest_older
[05:12:16] [SKIPPED] pf_negotiate_latest_prev
[05:12:16] =================== [PASSED] pf_service ====================
[05:12:16] ===================== lmtt (1 subtest) =====================
[05:12:16] ======================== test_ops =========================
[05:12:16] [PASSED] 2-level
[05:12:16] [PASSED] multi-level
[05:12:16] ==================== [PASSED] test_ops =====================
[05:12:16] ====================== [PASSED] lmtt =======================
[05:12:16] =================== xe_mocs (2 subtests) ===================
[05:12:16] ================ xe_live_mocs_kernel_kunit ================
[05:12:16] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[05:12:16] ================ xe_live_mocs_reset_kunit =================
[05:12:16] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[05:12:16] ==================== [SKIPPED] xe_mocs =====================
[05:12:16] ================= xe_migrate (2 subtests) ==================
[05:12:16] ================= xe_migrate_sanity_kunit =================
[05:12:16] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[05:12:16] ================== xe_validate_ccs_kunit ==================
[05:12:16] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[05:12:16] =================== [SKIPPED] xe_migrate ===================
[05:12:16] ================== xe_dma_buf (1 subtest) ==================
[05:12:16] ==================== xe_dma_buf_kunit =====================
[05:12:16] ================ [SKIPPED] xe_dma_buf_kunit ================
[05:12:16] =================== [SKIPPED] xe_dma_buf ===================
[05:12:16] ================= xe_bo_shrink (1 subtest) =================
[05:12:16] =================== xe_bo_shrink_kunit ====================
[05:12:16] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[05:12:16] ================== [SKIPPED] xe_bo_shrink ==================
[05:12:16] ==================== xe_bo (2 subtests) ====================
[05:12:16] ================== xe_ccs_migrate_kunit ===================
[05:12:16] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[05:12:16] ==================== xe_bo_evict_kunit ====================
[05:12:16] =============== [SKIPPED] xe_bo_evict_kunit ================
[05:12:16] ===================== [SKIPPED] xe_bo ======================
[05:12:16] ==================== args (11 subtests) ====================
[05:12:16] [PASSED] count_args_test
[05:12:16] [PASSED] call_args_example
[05:12:16] [PASSED] call_args_test
[05:12:16] [PASSED] drop_first_arg_example
[05:12:16] [PASSED] drop_first_arg_test
[05:12:16] [PASSED] first_arg_example
[05:12:16] [PASSED] first_arg_test
[05:12:16] [PASSED] last_arg_example
[05:12:16] [PASSED] last_arg_test
[05:12:16] [PASSED] pick_arg_example
[05:12:16] [PASSED] sep_comma_example
[05:12:16] ====================== [PASSED] args =======================
[05:12:16] =================== xe_pci (2 subtests) ====================
[05:12:16] [PASSED] xe_gmdid_graphics_ip
[05:12:16] [PASSED] xe_gmdid_media_ip
[05:12:16] ===================== [PASSED] xe_pci ======================
[05:12:16] =================== xe_rtp (2 subtests) ====================
[05:12:16] =============== xe_rtp_process_to_sr_tests ================
[05:12:16] [PASSED] coalesce-same-reg
[05:12:16] [PASSED] no-match-no-add
[05:12:16] [PASSED] match-or
[05:12:16] [PASSED] match-or-xfail
[05:12:16] [PASSED] no-match-no-add-multiple-rules
[05:12:16] [PASSED] two-regs-two-entries
[05:12:16] [PASSED] clr-one-set-other
[05:12:16] [PASSED] set-field
[05:12:16] [PASSED] conflict-duplicate
[05:12:16] [PASSED] conflict-not-disjoint
[05:12:16] [PASSED] conflict-reg-type
[05:12:16] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[05:12:16] ================== xe_rtp_process_tests ===================
[05:12:16] [PASSED] active1
[05:12:16] [PASSED] active2
[05:12:16] [PASSED] active-inactive
[05:12:16] [PASSED] inactive-active
[05:12:16] [PASSED] inactive-1st_or_active-inactive
[05:12:16] [PASSED] inactive-2nd_or_active-inactive
[05:12:16] [PASSED] inactive-last_or_active-inactive
[05:12:16] [PASSED] inactive-no_or_active-inactive
[05:12:16] ============== [PASSED] xe_rtp_process_tests ===============
[05:12:16] ===================== [PASSED] xe_rtp ======================
[05:12:16] ==================== xe_wa (1 subtest) =====================
[05:12:16] ======================== xe_wa_gt =========================
[05:12:16] [PASSED] TIGERLAKE (B0)
[05:12:16] [PASSED] DG1 (A0)
[05:12:16] [PASSED] DG1 (B0)
[05:12:16] [PASSED] ALDERLAKE_S (A0)
[05:12:16] [PASSED] ALDERLAKE_S (B0)
[05:12:16] [PASSED] ALDERLAKE_S (C0)
[05:12:16] [PASSED] ALDERLAKE_S (D0)
[05:12:16] [PASSED] ALDERLAKE_P (A0)
[05:12:16] [PASSED] ALDERLAKE_P (B0)
[05:12:16] [PASSED] ALDERLAKE_P (C0)
[05:12:16] [PASSED] ALDERLAKE_S_RPLS (D0)
[05:12:16] [PASSED] ALDERLAKE_P_RPLU (E0)
[05:12:16] [PASSED] DG2_G10 (C0)
[05:12:16] [PASSED] DG2_G11 (B1)
[05:12:16] [PASSED] DG2_G12 (A1)
[05:12:16] [PASSED] METEORLAKE (g:A0, m:A0)
[05:12:16] [PASSED] METEORLAKE (g:A0, m:A0)
[05:12:16] [PASSED] METEORLAKE (g:A0, m:A0)
[05:12:16] [PASSED] LUNARLAKE (g:A0, m:A0)
[05:12:16] [PASSED] LUNARLAKE (g:B0, m:A0)
[05:12:16] [PASSED] BATTLEMAGE (g:A0, m:A1)
[05:12:16] ==================== [PASSED] xe_wa_gt =====================
[05:12:16] ====================== [PASSED] xe_wa ======================
[05:12:16] ============================================================
[05:12:16] Testing complete. Ran 133 tests: passed: 117, skipped: 16
[05:12:16] Elapsed time: 31.112s total, 4.294s configuring, 26.501s building, 0.286s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[05:12:16] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[05:12:18] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[05:12:39] Starting KUnit Kernel (1/1)...
[05:12:39] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[05:12:39] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[05:12:39] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[05:12:39] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[05:12:39] =========== drm_validate_clone_mode (2 subtests) ===========
[05:12:39] ============== drm_test_check_in_clone_mode ===============
[05:12:39] [PASSED] in_clone_mode
[05:12:39] [PASSED] not_in_clone_mode
[05:12:39] ========== [PASSED] drm_test_check_in_clone_mode ===========
[05:12:39] =============== drm_test_check_valid_clones ===============
[05:12:39] [PASSED] not_in_clone_mode
[05:12:39] [PASSED] valid_clone
[05:12:39] [PASSED] invalid_clone
[05:12:39] =========== [PASSED] drm_test_check_valid_clones ===========
[05:12:39] ============= [PASSED] drm_validate_clone_mode =============
[05:12:39] ============= drm_validate_modeset (1 subtest) =============
[05:12:39] [PASSED] drm_test_check_connector_changed_modeset
[05:12:39] ============== [PASSED] drm_validate_modeset ===============
[05:12:39] ====== drm_test_bridge_get_current_state (2 subtests) ======
[05:12:39] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[05:12:39] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[05:12:39] ======== [PASSED] drm_test_bridge_get_current_state ========
[05:12:39] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[05:12:39] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[05:12:39] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[05:12:39] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[05:12:39] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[05:12:39] ================== drm_buddy (7 subtests) ==================
[05:12:39] [PASSED] drm_test_buddy_alloc_limit
[05:12:39] [PASSED] drm_test_buddy_alloc_optimistic
[05:12:39] [PASSED] drm_test_buddy_alloc_pessimistic
[05:12:39] [PASSED] drm_test_buddy_alloc_pathological
[05:12:39] [PASSED] drm_test_buddy_alloc_contiguous
[05:12:39] [PASSED] drm_test_buddy_alloc_clear
[05:12:39] [PASSED] drm_test_buddy_alloc_range_bias
[05:12:39] ==================== [PASSED] drm_buddy ====================
[05:12:39] ============= drm_cmdline_parser (40 subtests) =============
[05:12:39] [PASSED] drm_test_cmdline_force_d_only
[05:12:39] [PASSED] drm_test_cmdline_force_D_only_dvi
[05:12:39] [PASSED] drm_test_cmdline_force_D_only_hdmi
[05:12:39] [PASSED] drm_test_cmdline_force_D_only_not_digital
[05:12:39] [PASSED] drm_test_cmdline_force_e_only
[05:12:39] [PASSED] drm_test_cmdline_res
[05:12:39] [PASSED] drm_test_cmdline_res_vesa
[05:12:39] [PASSED] drm_test_cmdline_res_vesa_rblank
[05:12:39] [PASSED] drm_test_cmdline_res_rblank
[05:12:39] [PASSED] drm_test_cmdline_res_bpp
[05:12:39] [PASSED] drm_test_cmdline_res_refresh
[05:12:39] [PASSED] drm_test_cmdline_res_bpp_refresh
[05:12:39] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[05:12:39] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[05:12:39] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[05:12:39] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[05:12:39] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[05:12:39] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[05:12:39] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[05:12:39] [PASSED] drm_test_cmdline_res_margins_force_on
[05:12:39] [PASSED] drm_test_cmdline_res_vesa_margins
[05:12:39] [PASSED] drm_test_cmdline_name
[05:12:39] [PASSED] drm_test_cmdline_name_bpp
[05:12:39] [PASSED] drm_test_cmdline_name_option
[05:12:39] [PASSED] drm_test_cmdline_name_bpp_option
[05:12:39] [PASSED] drm_test_cmdline_rotate_0
[05:12:39] [PASSED] drm_test_cmdline_rotate_90
[05:12:39] [PASSED] drm_test_cmdline_rotate_180
[05:12:39] [PASSED] drm_test_cmdline_rotate_270
[05:12:39] [PASSED] drm_test_cmdline_hmirror
[05:12:39] [PASSED] drm_test_cmdline_vmirror
[05:12:39] [PASSED] drm_test_cmdline_margin_options
[05:12:39] [PASSED] drm_test_cmdline_multiple_options
[05:12:39] [PASSED] drm_test_cmdline_bpp_extra_and_option
[05:12:39] [PASSED] drm_test_cmdline_extra_and_option
[05:12:39] [PASSED] drm_test_cmdline_freestanding_options
[05:12:39] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[05:12:39] [PASSED] drm_test_cmdline_panel_orientation
[05:12:39] ================ drm_test_cmdline_invalid =================
[05:12:39] [PASSED] margin_only
[05:12:39] [PASSED] interlace_only
[05:12:39] [PASSED] res_missing_x
[05:12:39] [PASSED] res_missing_y
[05:12:39] [PASSED] res_bad_y
[05:12:39] [PASSED] res_missing_y_bpp
[05:12:39] [PASSED] res_bad_bpp
[05:12:39] [PASSED] res_bad_refresh
[05:12:39] [PASSED] res_bpp_refresh_force_on_off
[05:12:39] [PASSED] res_invalid_mode
[05:12:39] [PASSED] res_bpp_wrong_place_mode
[05:12:39] [PASSED] name_bpp_refresh
[05:12:39] [PASSED] name_refresh
[05:12:39] [PASSED] name_refresh_wrong_mode
[05:12:39] [PASSED] name_refresh_invalid_mode
[05:12:39] [PASSED] rotate_multiple
[05:12:39] [PASSED] rotate_invalid_val
[05:12:39] [PASSED] rotate_truncated
[05:12:39] [PASSED] invalid_option
[05:12:39] [PASSED] invalid_tv_option
[05:12:39] [PASSED] truncated_tv_option
[05:12:39] ============ [PASSED] drm_test_cmdline_invalid =============
[05:12:39] =============== drm_test_cmdline_tv_options ===============
[05:12:39] [PASSED] NTSC
[05:12:39] [PASSED] NTSC_443
[05:12:39] [PASSED] NTSC_J
[05:12:39] [PASSED] PAL
[05:12:39] [PASSED] PAL_M
[05:12:39] [PASSED] PAL_N
[05:12:39] [PASSED] SECAM
[05:12:39] [PASSED] MONO_525
[05:12:39] [PASSED] MONO_625
[05:12:39] =========== [PASSED] drm_test_cmdline_tv_options ===========
[05:12:39] =============== [PASSED] drm_cmdline_parser ================
[05:12:39] ========== drmm_connector_hdmi_init (20 subtests) ==========
[05:12:39] [PASSED] drm_test_connector_hdmi_init_valid
[05:12:39] [PASSED] drm_test_connector_hdmi_init_bpc_8
[05:12:39] [PASSED] drm_test_connector_hdmi_init_bpc_10
[05:12:39] [PASSED] drm_test_connector_hdmi_init_bpc_12
[05:12:39] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[05:12:39] [PASSED] drm_test_connector_hdmi_init_bpc_null
[05:12:39] [PASSED] drm_test_connector_hdmi_init_formats_empty
[05:12:39] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[05:12:39] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[05:12:39] [PASSED] supported_formats=0x9 yuv420_allowed=1
[05:12:39] [PASSED] supported_formats=0x9 yuv420_allowed=0
[05:12:39] [PASSED] supported_formats=0x3 yuv420_allowed=1
[05:12:39] [PASSED] supported_formats=0x3 yuv420_allowed=0
[05:12:39] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[05:12:39] [PASSED] drm_test_connector_hdmi_init_null_ddc
[05:12:39] [PASSED] drm_test_connector_hdmi_init_null_product
[05:12:39] [PASSED] drm_test_connector_hdmi_init_null_vendor
[05:12:39] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[05:12:39] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[05:12:39] [PASSED] drm_test_connector_hdmi_init_product_valid
[05:12:39] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[05:12:39] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[05:12:39] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[05:12:39] ========= drm_test_connector_hdmi_init_type_valid =========
[05:12:39] [PASSED] HDMI-A
[05:12:39] [PASSED] HDMI-B
[05:12:39] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[05:12:39] ======== drm_test_connector_hdmi_init_type_invalid ========
[05:12:39] [PASSED] Unknown
[05:12:39] [PASSED] VGA
[05:12:39] [PASSED] DVI-I
[05:12:39] [PASSED] DVI-D
[05:12:39] [PASSED] DVI-A
[05:12:39] [PASSED] Composite
[05:12:39] [PASSED] SVIDEO
[05:12:39] [PASSED] LVDS
[05:12:39] [PASSED] Component
[05:12:39] [PASSED] DIN
[05:12:39] [PASSED] DP
[05:12:39] [PASSED] TV
[05:12:39] [PASSED] eDP
[05:12:39] [PASSED] Virtual
[05:12:39] [PASSED] DSI
[05:12:39] [PASSED] DPI
[05:12:39] [PASSED] Writeback
[05:12:39] [PASSED] SPI
[05:12:39] [PASSED] USB
[05:12:39] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[05:12:39] ============ [PASSED] drmm_connector_hdmi_init =============
[05:12:39] ============= drmm_connector_init (3 subtests) =============
[05:12:39] [PASSED] drm_test_drmm_connector_init
[05:12:39] [PASSED] drm_test_drmm_connector_init_null_ddc
[05:12:39] ========= drm_test_drmm_connector_init_type_valid =========
[05:12:39] [PASSED] Unknown
[05:12:39] [PASSED] VGA
[05:12:39] [PASSED] DVI-I
[05:12:39] [PASSED] DVI-D
[05:12:39] [PASSED] DVI-A
[05:12:39] [PASSED] Composite
[05:12:39] [PASSED] SVIDEO
[05:12:39] [PASSED] LVDS
[05:12:39] [PASSED] Component
[05:12:39] [PASSED] DIN
[05:12:39] [PASSED] DP
[05:12:39] [PASSED] HDMI-A
[05:12:39] [PASSED] HDMI-B
[05:12:39] [PASSED] TV
[05:12:39] [PASSED] eDP
[05:12:39] [PASSED] Virtual
[05:12:39] [PASSED] DSI
[05:12:39] [PASSED] DPI
[05:12:39] [PASSED] Writeback
[05:12:39] [PASSED] SPI
[05:12:39] [PASSED] USB
[05:12:39] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[05:12:39] =============== [PASSED] drmm_connector_init ===============
[05:12:39] ========= drm_connector_dynamic_init (6 subtests) ==========
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_init
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_init_properties
[05:12:39] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[05:12:39] [PASSED] Unknown
[05:12:39] [PASSED] VGA
[05:12:39] [PASSED] DVI-I
[05:12:39] [PASSED] DVI-D
[05:12:39] [PASSED] DVI-A
[05:12:39] [PASSED] Composite
[05:12:39] [PASSED] SVIDEO
[05:12:39] [PASSED] LVDS
[05:12:39] [PASSED] Component
[05:12:39] [PASSED] DIN
[05:12:39] [PASSED] DP
[05:12:39] [PASSED] HDMI-A
[05:12:39] [PASSED] HDMI-B
[05:12:39] [PASSED] TV
[05:12:39] [PASSED] eDP
[05:12:39] [PASSED] Virtual
[05:12:39] [PASSED] DSI
[05:12:39] [PASSED] DPI
[05:12:39] [PASSED] Writeback
[05:12:39] [PASSED] SPI
[05:12:39] [PASSED] USB
[05:12:39] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[05:12:39] ======== drm_test_drm_connector_dynamic_init_name =========
[05:12:39] [PASSED] Unknown
[05:12:39] [PASSED] VGA
[05:12:39] [PASSED] DVI-I
[05:12:39] [PASSED] DVI-D
[05:12:39] [PASSED] DVI-A
[05:12:39] [PASSED] Composite
[05:12:39] [PASSED] SVIDEO
[05:12:39] [PASSED] LVDS
[05:12:39] [PASSED] Component
[05:12:39] [PASSED] DIN
[05:12:39] [PASSED] DP
[05:12:39] [PASSED] HDMI-A
[05:12:39] [PASSED] HDMI-B
[05:12:39] [PASSED] TV
[05:12:39] [PASSED] eDP
[05:12:39] [PASSED] Virtual
[05:12:39] [PASSED] DSI
[05:12:39] [PASSED] DPI
[05:12:39] [PASSED] Writeback
[05:12:39] [PASSED] SPI
[05:12:39] [PASSED] USB
[05:12:39] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[05:12:39] =========== [PASSED] drm_connector_dynamic_init ============
[05:12:39] ==== drm_connector_dynamic_register_early (4 subtests) =====
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[05:12:39] ====== [PASSED] drm_connector_dynamic_register_early =======
[05:12:39] ======= drm_connector_dynamic_register (7 subtests) ========
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[05:12:39] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[05:12:39] ========= [PASSED] drm_connector_dynamic_register ==========
[05:12:39] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[05:12:39] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[05:12:39] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[05:12:39] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[05:12:39] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[05:12:39] ========== drm_test_get_tv_mode_from_name_valid ===========
[05:12:39] [PASSED] NTSC
[05:12:39] [PASSED] NTSC-443
[05:12:39] [PASSED] NTSC-J
[05:12:39] [PASSED] PAL
[05:12:39] [PASSED] PAL-M
[05:12:39] [PASSED] PAL-N
[05:12:39] [PASSED] SECAM
[05:12:39] [PASSED] Mono
[05:12:39] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[05:12:39] [PASSED] drm_test_get_tv_mode_from_name_truncated
[05:12:39] ============ [PASSED] drm_get_tv_mode_from_name ============
[05:12:39] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[05:12:39] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[05:12:39] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[05:12:39] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[05:12:39] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[05:12:39] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[05:12:39] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[05:12:39] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[05:12:39] [PASSED] VIC 96
[05:12:39] [PASSED] VIC 97
[05:12:39] [PASSED] VIC 101
[05:12:39] [PASSED] VIC 102
[05:12:39] [PASSED] VIC 106
[05:12:39] [PASSED] VIC 107
[05:12:39] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[05:12:39] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[05:12:39] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[05:12:39] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[05:12:39] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[05:12:39] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[05:12:39] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[05:12:39] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[05:12:39] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[05:12:39] [PASSED] Automatic
[05:12:39] [PASSED] Full
[05:12:39] [PASSED] Limited 16:235
[05:12:39] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[05:12:39] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[05:12:39] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[05:12:39] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[05:12:39] === drm_test_drm_hdmi_connector_get_output_format_name ====
[05:12:39] [PASSED] RGB
[05:12:39] [PASSED] YUV 4:2:0
[05:12:39] [PASSED] YUV 4:2:2
[05:12:39] [PASSED] YUV 4:4:4
[05:12:39] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[05:12:39] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[05:12:39] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[05:12:39] ============= drm_damage_helper (21 subtests) ==============
[05:12:39] [PASSED] drm_test_damage_iter_no_damage
[05:12:39] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[05:12:39] [PASSED] drm_test_damage_iter_no_damage_src_moved
[05:12:39] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[05:12:39] [PASSED] drm_test_damage_iter_no_damage_not_visible
[05:12:39] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[05:12:39] [PASSED] drm_test_damage_iter_no_damage_no_fb
[05:12:39] [PASSED] drm_test_damage_iter_simple_damage
[05:12:39] [PASSED] drm_test_damage_iter_single_damage
[05:12:39] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[05:12:39] [PASSED] drm_test_damage_iter_single_damage_outside_src
[05:12:39] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[05:12:39] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[05:12:39] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[05:12:39] [PASSED] drm_test_damage_iter_single_damage_src_moved
[05:12:39] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[05:12:39] [PASSED] drm_test_damage_iter_damage
[05:12:39] [PASSED] drm_test_damage_iter_damage_one_intersect
[05:12:39] [PASSED] drm_test_damage_iter_damage_one_outside
[05:12:39] [PASSED] drm_test_damage_iter_damage_src_moved
[05:12:39] [PASSED] drm_test_damage_iter_damage_not_visible
[05:12:39] ================ [PASSED] drm_damage_helper ================
[05:12:39] ============== drm_dp_mst_helper (3 subtests) ==============
[05:12:39] ============== drm_test_dp_mst_calc_pbn_mode ==============
[05:12:39] [PASSED] Clock 154000 BPP 30 DSC disabled
[05:12:39] [PASSED] Clock 234000 BPP 30 DSC disabled
[05:12:39] [PASSED] Clock 297000 BPP 24 DSC disabled
[05:12:39] [PASSED] Clock 332880 BPP 24 DSC enabled
[05:12:39] [PASSED] Clock 324540 BPP 24 DSC enabled
[05:12:39] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[05:12:39] ============== drm_test_dp_mst_calc_pbn_div ===============
[05:12:39] [PASSED] Link rate 2000000 lane count 4
[05:12:39] [PASSED] Link rate 2000000 lane count 2
[05:12:39] [PASSED] Link rate 2000000 lane count 1
[05:12:39] [PASSED] Link rate 1350000 lane count 4
[05:12:39] [PASSED] Link rate 1350000 lane count 2
[05:12:39] [PASSED] Link rate 1350000 lane count 1
[05:12:39] [PASSED] Link rate 1000000 lane count 4
[05:12:39] [PASSED] Link rate 1000000 lane count 2
[05:12:39] [PASSED] Link rate 1000000 lane count 1
[05:12:39] [PASSED] Link rate 810000 lane count 4
[05:12:39] [PASSED] Link rate 810000 lane count 2
[05:12:39] [PASSED] Link rate 810000 lane count 1
[05:12:39] [PASSED] Link rate 540000 lane count 4
[05:12:39] [PASSED] Link rate 540000 lane count 2
[05:12:39] [PASSED] Link rate 540000 lane count 1
[05:12:39] [PASSED] Link rate 270000 lane count 4
[05:12:39] [PASSED] Link rate 270000 lane count 2
[05:12:39] [PASSED] Link rate 270000 lane count 1
[05:12:39] [PASSED] Link rate 162000 lane count 4
[05:12:39] [PASSED] Link rate 162000 lane count 2
[05:12:39] [PASSED] Link rate 162000 lane count 1
[05:12:39] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[05:12:39] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[05:12:39] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[05:12:39] [PASSED] DP_POWER_UP_PHY with port number
[05:12:39] [PASSED] DP_POWER_DOWN_PHY with port number
[05:12:39] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[05:12:39] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[05:12:39] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[05:12:39] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[05:12:39] [PASSED] DP_QUERY_PAYLOAD with port number
[05:12:39] [PASSED] DP_QUERY_PAYLOAD with VCPI
[05:12:39] [PASSED] DP_REMOTE_DPCD_READ with port number
[05:12:39] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[05:12:39] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[05:12:39] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[05:12:39] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[05:12:39] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[05:12:39] [PASSED] DP_REMOTE_I2C_READ with port number
[05:12:39] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[05:12:39] [PASSED] DP_REMOTE_I2C_READ with transactions array
[05:12:39] [PASSED] DP_REMOTE_I2C_WRITE with port number
[05:12:39] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[05:12:39] [PASSED] DP_REMOTE_I2C_WRITE with data array
[05:12:39] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[05:12:39] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[05:12:39] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[05:12:39] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[05:12:39] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[05:12:39] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[05:12:39] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[05:12:39] ================ [PASSED] drm_dp_mst_helper ================
[05:12:39] ================== drm_exec (7 subtests) ===================
[05:12:39] [PASSED] sanitycheck
[05:12:39] [PASSED] test_lock
[05:12:39] [PASSED] test_lock_unlock
[05:12:39] [PASSED] test_duplicates
[05:12:39] [PASSED] test_prepare
[05:12:39] [PASSED] test_prepare_array
[05:12:39] [PASSED] test_multiple_loops
[05:12:39] ==================== [PASSED] drm_exec =====================
[05:12:39] =========== drm_format_helper_test (18 subtests) ===========
[05:12:39] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[05:12:39] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[05:12:39] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[05:12:39] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[05:12:39] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[05:12:39] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[05:12:39] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[05:12:39] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[05:12:39] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[05:12:39] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[05:12:39] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[05:12:39] ============== drm_test_fb_xrgb8888_to_mono ===============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[05:12:39] ==================== drm_test_fb_swab =====================
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ================ [PASSED] drm_test_fb_swab =================
[05:12:39] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[05:12:39] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[05:12:39] [PASSED] single_pixel_source_buffer
[05:12:39] [PASSED] single_pixel_clip_rectangle
[05:12:39] [PASSED] well_known_colors
[05:12:39] [PASSED] destination_pitch
[05:12:39] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[05:12:39] ================= drm_test_fb_clip_offset =================
[05:12:39] [PASSED] pass through
[05:12:39] [PASSED] horizontal offset
[05:12:39] [PASSED] vertical offset
[05:12:39] [PASSED] horizontal and vertical offset
[05:12:39] [PASSED] horizontal offset (custom pitch)
[05:12:39] [PASSED] vertical offset (custom pitch)
[05:12:39] [PASSED] horizontal and vertical offset (custom pitch)
[05:12:39] ============= [PASSED] drm_test_fb_clip_offset =============
[05:12:39] ============== drm_test_fb_build_fourcc_list ==============
[05:12:39] [PASSED] no native formats
[05:12:39] [PASSED] XRGB8888 as native format
[05:12:39] [PASSED] remove duplicates
[05:12:39] [PASSED] convert alpha formats
[05:12:39] [PASSED] random formats
[05:12:39] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[05:12:39] =================== drm_test_fb_memcpy ====================
[05:12:39] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[05:12:39] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[05:12:39] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[05:12:39] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[05:12:39] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[05:12:39] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[05:12:39] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[05:12:39] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[05:12:39] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[05:12:39] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[05:12:39] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[05:12:39] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[05:12:39] =============== [PASSED] drm_test_fb_memcpy ================
[05:12:39] ============= [PASSED] drm_format_helper_test ==============
[05:12:39] ================= drm_format (18 subtests) =================
[05:12:39] [PASSED] drm_test_format_block_width_invalid
[05:12:39] [PASSED] drm_test_format_block_width_one_plane
[05:12:39] [PASSED] drm_test_format_block_width_two_plane
[05:12:39] [PASSED] drm_test_format_block_width_three_plane
[05:12:39] [PASSED] drm_test_format_block_width_tiled
[05:12:39] [PASSED] drm_test_format_block_height_invalid
[05:12:39] [PASSED] drm_test_format_block_height_one_plane
[05:12:39] [PASSED] drm_test_format_block_height_two_plane
[05:12:39] [PASSED] drm_test_format_block_height_three_plane
[05:12:39] [PASSED] drm_test_format_block_height_tiled
[05:12:39] [PASSED] drm_test_format_min_pitch_invalid
[05:12:39] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[05:12:39] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[05:12:39] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[05:12:39] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[05:12:39] [PASSED] drm_test_format_min_pitch_two_plane
[05:12:39] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[05:12:39] [PASSED] drm_test_format_min_pitch_tiled
[05:12:39] =================== [PASSED] drm_format ====================
[05:12:39] ============== drm_framebuffer (10 subtests) ===============
[05:12:39] ========== drm_test_framebuffer_check_src_coords ==========
[05:12:39] [PASSED] Success: source fits into fb
[05:12:39] [PASSED] Fail: overflowing fb with x-axis coordinate
[05:12:39] [PASSED] Fail: overflowing fb with y-axis coordinate
[05:12:39] [PASSED] Fail: overflowing fb with source width
[05:12:39] [PASSED] Fail: overflowing fb with source height
[05:12:39] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[05:12:39] [PASSED] drm_test_framebuffer_cleanup
[05:12:39] =============== drm_test_framebuffer_create ===============
[05:12:39] [PASSED] ABGR8888 normal sizes
[05:12:39] [PASSED] ABGR8888 max sizes
[05:12:39] [PASSED] ABGR8888 pitch greater than min required
[05:12:39] [PASSED] ABGR8888 pitch less than min required
[05:12:39] [PASSED] ABGR8888 Invalid width
[05:12:39] [PASSED] ABGR8888 Invalid buffer handle
[05:12:39] [PASSED] No pixel format
[05:12:39] [PASSED] ABGR8888 Width 0
[05:12:39] [PASSED] ABGR8888 Height 0
[05:12:39] [PASSED] ABGR8888 Out of bound height * pitch combination
[05:12:39] [PASSED] ABGR8888 Large buffer offset
[05:12:39] [PASSED] ABGR8888 Buffer offset for inexistent plane
[05:12:39] [PASSED] ABGR8888 Invalid flag
[05:12:39] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[05:12:39] [PASSED] ABGR8888 Valid buffer modifier
[05:12:39] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[05:12:39] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[05:12:39] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[05:12:39] [PASSED] NV12 Normal sizes
[05:12:39] [PASSED] NV12 Max sizes
[05:12:39] [PASSED] NV12 Invalid pitch
[05:12:39] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[05:12:39] [PASSED] NV12 different modifier per-plane
[05:12:39] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[05:12:39] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[05:12:39] [PASSED] NV12 Modifier for inexistent plane
[05:12:39] [PASSED] NV12 Handle for inexistent plane
[05:12:39] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[05:12:39] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[05:12:39] [PASSED] YVU420 Normal sizes
[05:12:39] [PASSED] YVU420 Max sizes
[05:12:39] [PASSED] YVU420 Invalid pitch
[05:12:39] [PASSED] YVU420 Different pitches
[05:12:39] [PASSED] YVU420 Different buffer offsets/pitches
[05:12:39] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[05:12:39] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[05:12:39] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[05:12:39] [PASSED] YVU420 Valid modifier
[05:12:39] [PASSED] YVU420 Different modifiers per plane
[05:12:39] [PASSED] YVU420 Modifier for inexistent plane
[05:12:39] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[05:12:39] [PASSED] X0L2 Normal sizes
[05:12:39] [PASSED] X0L2 Max sizes
[05:12:39] [PASSED] X0L2 Invalid pitch
[05:12:39] [PASSED] X0L2 Pitch greater than minimum required
[05:12:39] [PASSED] X0L2 Handle for inexistent plane
[05:12:39] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[05:12:39] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[05:12:39] [PASSED] X0L2 Valid modifier
[05:12:39] [PASSED] X0L2 Modifier for inexistent plane
[05:12:39] =========== [PASSED] drm_test_framebuffer_create ===========
[05:12:39] [PASSED] drm_test_framebuffer_free
[05:12:39] [PASSED] drm_test_framebuffer_init
[05:12:39] [PASSED] drm_test_framebuffer_init_bad_format
[05:12:39] [PASSED] drm_test_framebuffer_init_dev_mismatch
[05:12:39] [PASSED] drm_test_framebuffer_lookup
[05:12:39] [PASSED] drm_test_framebuffer_lookup_inexistent
[05:12:39] [PASSED] drm_test_framebuffer_modifiers_not_supported
[05:12:39] ================= [PASSED] drm_framebuffer =================
[05:12:39] ================ drm_gem_shmem (8 subtests) ================
[05:12:39] [PASSED] drm_gem_shmem_test_obj_create
[05:12:39] [PASSED] drm_gem_shmem_test_obj_create_private
[05:12:39] [PASSED] drm_gem_shmem_test_pin_pages
[05:12:39] [PASSED] drm_gem_shmem_test_vmap
[05:12:39] [PASSED] drm_gem_shmem_test_get_pages_sgt
[05:12:39] [PASSED] drm_gem_shmem_test_get_sg_table
[05:12:39] [PASSED] drm_gem_shmem_test_madvise
[05:12:39] [PASSED] drm_gem_shmem_test_purge
[05:12:39] ================== [PASSED] drm_gem_shmem ==================
[05:12:39] === drm_atomic_helper_connector_hdmi_check (23 subtests) ===
[05:12:39] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[05:12:39] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[05:12:39] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[05:12:39] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[05:12:39] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[05:12:39] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[05:12:39] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[05:12:39] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[05:12:39] [PASSED] drm_test_check_disable_connector
[05:12:39] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[05:12:39] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[05:12:39] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[05:12:39] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[05:12:39] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[05:12:39] [PASSED] drm_test_check_output_bpc_dvi
[05:12:39] [PASSED] drm_test_check_output_bpc_format_vic_1
[05:12:39] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[05:12:39] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[05:12:39] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[05:12:39] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[05:12:39] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[05:12:39] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[05:12:39] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[05:12:39] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[05:12:39] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[05:12:39] [PASSED] drm_test_check_broadcast_rgb_value
[05:12:39] [PASSED] drm_test_check_bpc_8_value
[05:12:39] [PASSED] drm_test_check_bpc_10_value
[05:12:39] [PASSED] drm_test_check_bpc_12_value
[05:12:39] [PASSED] drm_test_check_format_value
[05:12:39] [PASSED] drm_test_check_tmds_char_value
[05:12:39] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[05:12:39] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[05:12:39] [PASSED] drm_test_check_mode_valid
[05:12:39] [PASSED] drm_test_check_mode_valid_reject
[05:12:39] [PASSED] drm_test_check_mode_valid_reject_rate
[05:12:39] [PASSED] drm_test_check_mode_valid_reject_max_clock
[05:12:39] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[05:12:39] ================= drm_managed (2 subtests) =================
[05:12:39] [PASSED] drm_test_managed_release_action
[05:12:39] [PASSED] drm_test_managed_run_action
[05:12:39] =================== [PASSED] drm_managed ===================
[05:12:39] =================== drm_mm (6 subtests) ====================
[05:12:39] [PASSED] drm_test_mm_init
[05:12:39] [PASSED] drm_test_mm_debug
[05:12:39] [PASSED] drm_test_mm_align32
[05:12:39] [PASSED] drm_test_mm_align64
[05:12:39] [PASSED] drm_test_mm_lowest
[05:12:39] [PASSED] drm_test_mm_highest
[05:12:39] ===================== [PASSED] drm_mm ======================
[05:12:39] ============= drm_modes_analog_tv (5 subtests) =============
[05:12:39] [PASSED] drm_test_modes_analog_tv_mono_576i
[05:12:39] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[05:12:39] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[05:12:39] [PASSED] drm_test_modes_analog_tv_pal_576i
[05:12:39] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[05:12:39] =============== [PASSED] drm_modes_analog_tv ===============
[05:12:39] ============== drm_plane_helper (2 subtests) ===============
[05:12:39] =============== drm_test_check_plane_state ================
[05:12:39] [PASSED] clipping_simple
[05:12:39] [PASSED] clipping_rotate_reflect
[05:12:39] [PASSED] positioning_simple
[05:12:39] [PASSED] upscaling
[05:12:39] [PASSED] downscaling
[05:12:39] [PASSED] rounding1
[05:12:39] [PASSED] rounding2
[05:12:39] [PASSED] rounding3
[05:12:39] [PASSED] rounding4
[05:12:39] =========== [PASSED] drm_test_check_plane_state ============
[05:12:39] =========== drm_test_check_invalid_plane_state ============
[05:12:39] [PASSED] positioning_invalid
[05:12:39] [PASSED] upscaling_invalid
[05:12:39] [PASSED] downscaling_invalid
[05:12:39] ======= [PASSED] drm_test_check_invalid_plane_state ========
[05:12:39] ================ [PASSED] drm_plane_helper =================
[05:12:39] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[05:12:39] ====== drm_test_connector_helper_tv_get_modes_check =======
[05:12:39] [PASSED] None
[05:12:39] [PASSED] PAL
[05:12:39] [PASSED] NTSC
[05:12:39] [PASSED] Both, NTSC Default
[05:12:39] [PASSED] Both, PAL Default
[05:12:39] [PASSED] Both, NTSC Default, with PAL on command-line
[05:12:39] [PASSED] Both, PAL Default, with NTSC on command-line
[05:12:39] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[05:12:39] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[05:12:39] ================== drm_rect (9 subtests) ===================
[05:12:39] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[05:12:39] [PASSED] drm_test_rect_clip_scaled_not_clipped
[05:12:39] [PASSED] drm_test_rect_clip_scaled_clipped
[05:12:39] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[05:12:39] ================= drm_test_rect_intersect =================
[05:12:39] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[05:12:39] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[05:12:39] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[05:12:39] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[05:12:39] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[05:12:39] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[05:12:39] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[05:12:39] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[05:12:39] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[05:12:39] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[05:12:39] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[05:12:39] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[05:12:39] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[05:12:39] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[05:12:39] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[05:12:39] ============= [PASSED] drm_test_rect_intersect =============
[05:12:39] ================ drm_test_rect_calc_hscale ================
[05:12:39] [PASSED] normal use
[05:12:39] [PASSED] out of max range
[05:12:39] [PASSED] out of min range
[05:12:39] [PASSED] zero dst
[05:12:39] [PASSED] negative src
[05:12:39] [PASSED] negative dst
[05:12:39] ============ [PASSED] drm_test_rect_calc_hscale ============
[05:12:39] ================ drm_test_rect_calc_vscale ================
[05:12:39] [PASSED] normal use
[05:12:39] [PASSED] out of max range
[05:12:39] [PASSED] out of min range
[05:12:39] [PASSED] zero dst
[05:12:39] [PASSED] negative src
[05:12:39] [PASSED] negative dst
[05:12:39] ============ [PASSED] drm_test_rect_calc_vscale ============
[05:12:39] ================== drm_test_rect_rotate ===================
[05:12:39] [PASSED] reflect-x
[05:12:39] [PASSED] reflect-y
[05:12:39] [PASSED] rotate-0
[05:12:39] [PASSED] rotate-90
[05:12:39] [PASSED] rotate-180
[05:12:39] [PASSED] rotate-270
[05:12:39] ============== [PASSED] drm_test_rect_rotate ===============
[05:12:39] ================ drm_test_rect_rotate_inv =================
[05:12:39] [PASSED] reflect-x
[05:12:39] [PASSED] reflect-y
[05:12:39] [PASSED] rotate-0
[05:12:39] [PASSED] rotate-90
[05:12:39] [PASSED] rotate-180
[05:12:39] [PASSED] rotate-270
[05:12:39] ============ [PASSED] drm_test_rect_rotate_inv =============
stty: 'standard input': Inappropriate ioctl for device
[05:12:39] ==================== [PASSED] drm_rect =====================
[05:12:39] ============================================================
[05:12:39] Testing complete. Ran 608 tests: passed: 608
[05:12:39] Elapsed time: 22.859s total, 1.743s configuring, 20.943s building, 0.140s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[05:12:39] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[05:12:41] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[05:12:49] Starting KUnit Kernel (1/1)...
[05:12:49] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[05:12:49] ================= ttm_device (5 subtests) ==================
[05:12:49] [PASSED] ttm_device_init_basic
[05:12:49] [PASSED] ttm_device_init_multiple
[05:12:49] [PASSED] ttm_device_fini_basic
[05:12:49] [PASSED] ttm_device_init_no_vma_man
[05:12:49] ================== ttm_device_init_pools ==================
[05:12:49] [PASSED] No DMA allocations, no DMA32 required
[05:12:49] [PASSED] DMA allocations, DMA32 required
[05:12:49] [PASSED] No DMA allocations, DMA32 required
[05:12:49] [PASSED] DMA allocations, no DMA32 required
[05:12:49] ============== [PASSED] ttm_device_init_pools ==============
[05:12:49] =================== [PASSED] ttm_device ====================
[05:12:49] ================== ttm_pool (8 subtests) ===================
[05:12:49] ================== ttm_pool_alloc_basic ===================
[05:12:49] [PASSED] One page
[05:12:49] [PASSED] More than one page
[05:12:49] [PASSED] Above the allocation limit
[05:12:49] [PASSED] One page, with coherent DMA mappings enabled
[05:12:49] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[05:12:49] ============== [PASSED] ttm_pool_alloc_basic ===============
[05:12:49] ============== ttm_pool_alloc_basic_dma_addr ==============
[05:12:49] [PASSED] One page
[05:12:49] [PASSED] More than one page
[05:12:49] [PASSED] Above the allocation limit
[05:12:49] [PASSED] One page, with coherent DMA mappings enabled
[05:12:49] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[05:12:49] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[05:12:49] [PASSED] ttm_pool_alloc_order_caching_match
[05:12:49] [PASSED] ttm_pool_alloc_caching_mismatch
[05:12:49] [PASSED] ttm_pool_alloc_order_mismatch
[05:12:49] [PASSED] ttm_pool_free_dma_alloc
[05:12:49] [PASSED] ttm_pool_free_no_dma_alloc
[05:12:49] [PASSED] ttm_pool_fini_basic
[05:12:49] ==================== [PASSED] ttm_pool =====================
[05:12:49] ================ ttm_resource (8 subtests) =================
[05:12:49] ================= ttm_resource_init_basic =================
[05:12:49] [PASSED] Init resource in TTM_PL_SYSTEM
[05:12:49] [PASSED] Init resource in TTM_PL_VRAM
[05:12:49] [PASSED] Init resource in a private placement
[05:12:49] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[05:12:49] ============= [PASSED] ttm_resource_init_basic =============
[05:12:49] [PASSED] ttm_resource_init_pinned
[05:12:49] [PASSED] ttm_resource_fini_basic
[05:12:49] [PASSED] ttm_resource_manager_init_basic
[05:12:49] [PASSED] ttm_resource_manager_usage_basic
[05:12:49] [PASSED] ttm_resource_manager_set_used_basic
[05:12:49] [PASSED] ttm_sys_man_alloc_basic
[05:12:49] [PASSED] ttm_sys_man_free_basic
[05:12:49] ================== [PASSED] ttm_resource ===================
[05:12:49] =================== ttm_tt (15 subtests) ===================
[05:12:49] ==================== ttm_tt_init_basic ====================
[05:12:49] [PASSED] Page-aligned size
[05:12:49] [PASSED] Extra pages requested
[05:12:49] ================ [PASSED] ttm_tt_init_basic ================
[05:12:49] [PASSED] ttm_tt_init_misaligned
[05:12:49] [PASSED] ttm_tt_fini_basic
[05:12:49] [PASSED] ttm_tt_fini_sg
[05:12:49] [PASSED] ttm_tt_fini_shmem
[05:12:49] [PASSED] ttm_tt_create_basic
[05:12:49] [PASSED] ttm_tt_create_invalid_bo_type
[05:12:49] [PASSED] ttm_tt_create_ttm_exists
[05:12:49] [PASSED] ttm_tt_create_failed
[05:12:49] [PASSED] ttm_tt_destroy_basic
[05:12:49] [PASSED] ttm_tt_populate_null_ttm
[05:12:49] [PASSED] ttm_tt_populate_populated_ttm
[05:12:49] [PASSED] ttm_tt_unpopulate_basic
[05:12:49] [PASSED] ttm_tt_unpopulate_empty_ttm
[05:12:49] [PASSED] ttm_tt_swapin_basic
[05:12:49] ===================== [PASSED] ttm_tt ======================
[05:12:49] =================== ttm_bo (14 subtests) ===================
[05:12:49] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[05:12:49] [PASSED] Cannot be interrupted and sleeps
[05:12:49] [PASSED] Cannot be interrupted, locks straight away
[05:12:49] [PASSED] Can be interrupted, sleeps
[05:12:49] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[05:12:49] [PASSED] ttm_bo_reserve_locked_no_sleep
[05:12:49] [PASSED] ttm_bo_reserve_no_wait_ticket
[05:12:49] [PASSED] ttm_bo_reserve_double_resv
[05:12:49] [PASSED] ttm_bo_reserve_interrupted
[05:12:49] [PASSED] ttm_bo_reserve_deadlock
[05:12:49] [PASSED] ttm_bo_unreserve_basic
[05:12:49] [PASSED] ttm_bo_unreserve_pinned
[05:12:49] [PASSED] ttm_bo_unreserve_bulk
[05:12:49] [PASSED] ttm_bo_put_basic
[05:12:49] [PASSED] ttm_bo_put_shared_resv
[05:12:49] [PASSED] ttm_bo_pin_basic
[05:12:49] [PASSED] ttm_bo_pin_unpin_resource
[05:12:49] [PASSED] ttm_bo_multiple_pin_one_unpin
[05:12:49] ===================== [PASSED] ttm_bo ======================
[05:12:49] ============== ttm_bo_validate (22 subtests) ===============
[05:12:49] ============== ttm_bo_init_reserved_sys_man ===============
[05:12:49] [PASSED] Buffer object for userspace
[05:12:49] [PASSED] Kernel buffer object
[05:12:49] [PASSED] Shared buffer object
[05:12:49] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[05:12:49] ============== ttm_bo_init_reserved_mock_man ==============
[05:12:49] [PASSED] Buffer object for userspace
[05:12:49] [PASSED] Kernel buffer object
[05:12:49] [PASSED] Shared buffer object
[05:12:49] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[05:12:49] [PASSED] ttm_bo_init_reserved_resv
[05:12:49] ================== ttm_bo_validate_basic ==================
[05:12:49] [PASSED] Buffer object for userspace
[05:12:49] [PASSED] Kernel buffer object
[05:12:49] [PASSED] Shared buffer object
[05:12:49] ============== [PASSED] ttm_bo_validate_basic ==============
[05:12:49] [PASSED] ttm_bo_validate_invalid_placement
[05:12:49] ============= ttm_bo_validate_same_placement ==============
[05:12:49] [PASSED] System manager
[05:12:49] [PASSED] VRAM manager
[05:12:49] ========= [PASSED] ttm_bo_validate_same_placement ==========
[05:12:49] [PASSED] ttm_bo_validate_failed_alloc
[05:12:49] [PASSED] ttm_bo_validate_pinned
[05:12:49] [PASSED] ttm_bo_validate_busy_placement
[05:12:49] ================ ttm_bo_validate_multihop =================
[05:12:49] [PASSED] Buffer object for userspace
[05:12:49] [PASSED] Kernel buffer object
[05:12:49] [PASSED] Shared buffer object
[05:12:49] ============ [PASSED] ttm_bo_validate_multihop =============
[05:12:49] ========== ttm_bo_validate_no_placement_signaled ==========
[05:12:49] [PASSED] Buffer object in system domain, no page vector
[05:12:49] [PASSED] Buffer object in system domain with an existing page vector
[05:12:49] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[05:12:49] ======== ttm_bo_validate_no_placement_not_signaled ========
[05:12:49] [PASSED] Buffer object for userspace
[05:12:49] [PASSED] Kernel buffer object
[05:12:49] [PASSED] Shared buffer object
[05:12:49] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[05:12:49] [PASSED] ttm_bo_validate_move_fence_signaled
[05:12:49] ========= ttm_bo_validate_move_fence_not_signaled =========
[05:12:49] [PASSED] Waits for GPU
[05:12:49] [PASSED] Tries to lock straight away
[05:12:49] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[05:12:49] [PASSED] ttm_bo_validate_swapout
[05:12:49] [PASSED] ttm_bo_validate_happy_evict
[05:12:49] [PASSED] ttm_bo_validate_all_pinned_evict
[05:12:49] [PASSED] ttm_bo_validate_allowed_only_evict
[05:12:49] [PASSED] ttm_bo_validate_deleted_evict
[05:12:49] [PASSED] ttm_bo_validate_busy_domain_evict
[05:12:49] [PASSED] ttm_bo_validate_evict_gutting
[05:12:49] [PASSED] ttm_bo_validate_recrusive_evict
stty: 'standard input': Inappropriate ioctl for device
[05:12:49] ================= [PASSED] ttm_bo_validate =================
[05:12:49] ============================================================
[05:12:49] Testing complete. Ran 102 tests: passed: 102
[05:12:49] Elapsed time: 10.208s total, 1.731s configuring, 7.860s building, 0.507s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 120+ messages in thread
* ✓ CI.Build: success for PREFETCH and MADVISE for SVM ranges (rev4)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (38 preceding siblings ...)
2025-04-09 5:12 ` ✓ CI.KUnit: success " Patchwork
@ 2025-04-09 5:29 ` Patchwork
2025-04-09 5:31 ` ✗ CI.Hooks: failure " Patchwork
` (3 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-09 5:29 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev4)
URL : https://patchwork.freedesktop.org/series/146290/
State : success
== Summary ==
lib/modules/6.15.0-rc1-xe+/kernel/arch/x86/events/amd/
lib/modules/6.15.0-rc1-xe+/kernel/arch/x86/events/amd/amd-uncore.ko
lib/modules/6.15.0-rc1-xe+/kernel/arch/x86/events/rapl.ko
lib/modules/6.15.0-rc1-xe+/kernel/arch/x86/kvm/
lib/modules/6.15.0-rc1-xe+/kernel/arch/x86/kvm/kvm.ko
lib/modules/6.15.0-rc1-xe+/kernel/arch/x86/kvm/kvm-intel.ko
lib/modules/6.15.0-rc1-xe+/kernel/arch/x86/kvm/kvm-amd.ko
lib/modules/6.15.0-rc1-xe+/kernel/kernel/
lib/modules/6.15.0-rc1-xe+/kernel/kernel/kheaders.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/
lib/modules/6.15.0-rc1-xe+/kernel/crypto/ecrdsa_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/xcbc.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/serpent_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/aria_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/crypto_simd.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/adiantum.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/tcrypt.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/crypto_engine.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/zstd.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/asymmetric_keys/
lib/modules/6.15.0-rc1-xe+/kernel/crypto/asymmetric_keys/pkcs7_test_key.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/asymmetric_keys/pkcs8_key_parser.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/des_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/xctr.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/authenc.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/sm4_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/camellia_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/sm3.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/pcrypt.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/aegis128.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/af_alg.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/algif_aead.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/cmac.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/sm3_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/aes_ti.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/chacha_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/poly1305_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/nhpoly1305.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/crc32_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/essiv.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/ccm.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/wp512.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/streebog_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/authencesn.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/echainiv.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/lrw.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/cryptd.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/crypto_user.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/algif_hash.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/polyval-generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/hctr2.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/842.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/pcbc.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/ansi_cprng.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/cast6_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/twofish_common.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/twofish_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/lz4hc.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/blowfish_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/md4.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/chacha20poly1305.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/curve25519-generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/lz4.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/rmd160.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/algif_skcipher.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/cast5_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/fcrypt.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/ecdsa_generic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/sm4.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/cast_common.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/blowfish_common.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/michael_mic.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/async_tx/
lib/modules/6.15.0-rc1-xe+/kernel/crypto/async_tx/async_xor.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/async_tx/async_tx.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/async_tx/async_memcpy.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/async_tx/async_pq.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/async_tx/async_raid6_recov.ko
lib/modules/6.15.0-rc1-xe+/kernel/crypto/algif_rng.ko
lib/modules/6.15.0-rc1-xe+/kernel/block/
lib/modules/6.15.0-rc1-xe+/kernel/block/bfq.ko
lib/modules/6.15.0-rc1-xe+/kernel/block/kyber-iosched.ko
lib/modules/6.15.0-rc1-xe+/build
lib/modules/6.15.0-rc1-xe+/modules.alias.bin
lib/modules/6.15.0-rc1-xe+/modules.builtin
lib/modules/6.15.0-rc1-xe+/modules.softdep
lib/modules/6.15.0-rc1-xe+/modules.alias
lib/modules/6.15.0-rc1-xe+/modules.order
lib/modules/6.15.0-rc1-xe+/modules.symbols
lib/modules/6.15.0-rc1-xe+/modules.dep.bin
+ mv kernel-nodebug.tar.gz ..
+ cd ..
+ rm -rf archive
++ date +%s
+ echo -e '\e[0Ksection_end:1744176541:package_x86_64_nodebug\r\e[0K'
+ sync
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 120+ messages in thread
* ✗ CI.Hooks: failure for PREFETCH and MADVISE for SVM ranges (rev4)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (39 preceding siblings ...)
2025-04-09 5:29 ` ✓ CI.Build: " Patchwork
@ 2025-04-09 5:31 ` Patchwork
2025-04-09 5:32 ` ✗ CI.checksparse: warning " Patchwork
` (2 subsequent siblings)
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-09 5:31 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev4)
URL : https://patchwork.freedesktop.org/series/146290/
State : failure
== Summary ==
run-parts: executing /workspace/ci/hooks/00-showenv
+ export
+ grep -Ei '(^|\W)CI_'
declare -x CI_KERNEL_BUILD_DIR="/workspace/kernel/build64-default"
declare -x CI_KERNEL_SRC_DIR="/workspace/kernel"
declare -x CI_TOOLS_SRC_DIR="/workspace/ci"
declare -x CI_WORKSPACE_DIR="/workspace"
run-parts: executing /workspace/ci/hooks/10-build-W1
+ SRC_DIR=/workspace/kernel
+ RESTORE_DISPLAY_CONFIG=0
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ cd /workspace/kernel
++ nproc
+ make -j48 O=/workspace/kernel/build64-default modules_prepare
make[1]: Entering directory '/workspace/kernel/build64-default'
GEN Makefile
DESCEND objtool
CALL ../scripts/checksyscalls.sh
INSTALL libsubcmd_headers
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/exec-cmd.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/help.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/pager.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/parse-options.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/run-command.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/sigchain.o
CC /workspace/kernel/build64-default/tools/objtool/libsubcmd/subcmd-config.o
LD /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd-in.o
AR /workspace/kernel/build64-default/tools/objtool/libsubcmd/libsubcmd.a
CC /workspace/kernel/build64-default/tools/objtool/weak.o
CC /workspace/kernel/build64-default/tools/objtool/check.o
CC /workspace/kernel/build64-default/tools/objtool/special.o
CC /workspace/kernel/build64-default/tools/objtool/builtin-check.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/special.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/decode.o
CC /workspace/kernel/build64-default/tools/objtool/objtool.o
CC /workspace/kernel/build64-default/tools/objtool/elf.o
CC /workspace/kernel/build64-default/tools/objtool/arch/x86/orc.o
CC /workspace/kernel/build64-default/tools/objtool/orc_gen.o
CC /workspace/kernel/build64-default/tools/objtool/orc_dump.o
CC /workspace/kernel/build64-default/tools/objtool/libstring.o
CC /workspace/kernel/build64-default/tools/objtool/libctype.o
CC /workspace/kernel/build64-default/tools/objtool/str_error_r.o
CC /workspace/kernel/build64-default/tools/objtool/librbtree.o
LD /workspace/kernel/build64-default/tools/objtool/arch/x86/objtool-in.o
LD /workspace/kernel/build64-default/tools/objtool/objtool-in.o
LINK /workspace/kernel/build64-default/tools/objtool/objtool
make[1]: Leaving directory '/workspace/kernel/build64-default'
++ nproc
+ make -j48 O=/workspace/kernel/build64-default W=1 drivers/gpu/drm/xe
make[1]: Entering directory '/workspace/kernel/build64-default'
make[2]: Nothing to be done for 'drivers/gpu/drm/xe'.
make[1]: Leaving directory '/workspace/kernel/build64-default'
run-parts: executing /workspace/ci/hooks/11-build-32b
+++ realpath /workspace/ci/hooks/11-build-32b
++ dirname /workspace/ci/hooks/11-build-32b
+ THIS_SCRIPT_DIR=/workspace/ci/hooks
+ SRC_DIR=/workspace/kernel
+ TOOLS_SRC_DIR=/workspace/ci
+ '[' -n /workspace/kernel/build64-default ']'
+ BUILD_DIR=/workspace/kernel/build64-default
+ BUILD_DIR=/workspace/kernel/build64-default/build32
+ cd /workspace/kernel
+ mkdir -p /workspace/kernel/build64-default/build32
++ nproc
+ make -j48 ARCH=i386 O=/workspace/kernel/build64-default/build32 defconfig
make[1]: Entering directory '/workspace/kernel/build64-default/build32'
GEN Makefile
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/confdata.o
HOSTCC scripts/kconfig/conf.o
HOSTCC scripts/kconfig/expr.o
LEX scripts/kconfig/lexer.lex.c
YACC scripts/kconfig/parser.tab.[ch]
HOSTCC scripts/kconfig/menu.o
HOSTCC scripts/kconfig/preprocess.o
HOSTCC scripts/kconfig/symbol.o
HOSTCC scripts/kconfig/util.o
HOSTCC scripts/kconfig/lexer.lex.o
HOSTCC scripts/kconfig/parser.tab.o
HOSTLD scripts/kconfig/conf
*** Default configuration is based on 'i386_defconfig'
#
# configuration written to .config
#
make[1]: Leaving directory '/workspace/kernel/build64-default/build32'
+ cd /workspace/kernel/build64-default/build32
+ /workspace/kernel/scripts/kconfig/merge_config.sh .config /workspace/ci/kernel/fragments/10-xe.fragment
Using .config as base
Merging /workspace/ci/kernel/fragments/10-xe.fragment
Value of CONFIG_DRM_XE is redefined by fragment /workspace/ci/kernel/fragments/10-xe.fragment:
Previous value: # CONFIG_DRM_XE is not set
New value: CONFIG_DRM_XE=m
GEN Makefile
#
# configuration written to .config
#
Value requested for CONFIG_HAVE_UID16 not in final .config
Requested value: CONFIG_HAVE_UID16=y
Actual value:
Value requested for CONFIG_UID16 not in final .config
Requested value: CONFIG_UID16=y
Actual value:
Value requested for CONFIG_X86_32 not in final .config
Requested value: CONFIG_X86_32=y
Actual value:
Value requested for CONFIG_OUTPUT_FORMAT not in final .config
Requested value: CONFIG_OUTPUT_FORMAT="elf32-i386"
Actual value: CONFIG_OUTPUT_FORMAT="elf64-x86-64"
Value requested for CONFIG_ARCH_MMAP_RND_BITS_MIN not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS_MIN=8
Actual value: CONFIG_ARCH_MMAP_RND_BITS_MIN=28
Value requested for CONFIG_ARCH_MMAP_RND_BITS_MAX not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS_MAX=16
Actual value: CONFIG_ARCH_MMAP_RND_BITS_MAX=32
Value requested for CONFIG_PGTABLE_LEVELS not in final .config
Requested value: CONFIG_PGTABLE_LEVELS=2
Actual value: CONFIG_PGTABLE_LEVELS=5
Value requested for CONFIG_X86_INTEL_QUARK not in final .config
Requested value: # CONFIG_X86_INTEL_QUARK is not set
Actual value:
Value requested for CONFIG_X86_RDC321X not in final .config
Requested value: # CONFIG_X86_RDC321X is not set
Actual value:
Value requested for CONFIG_X86_32_IRIS not in final .config
Requested value: # CONFIG_X86_32_IRIS is not set
Actual value:
Value requested for CONFIG_M486SX not in final .config
Requested value: # CONFIG_M486SX is not set
Actual value:
Value requested for CONFIG_M486 not in final .config
Requested value: # CONFIG_M486 is not set
Actual value:
Value requested for CONFIG_M586 not in final .config
Requested value: # CONFIG_M586 is not set
Actual value:
Value requested for CONFIG_M586TSC not in final .config
Requested value: # CONFIG_M586TSC is not set
Actual value:
Value requested for CONFIG_M586MMX not in final .config
Requested value: # CONFIG_M586MMX is not set
Actual value:
Value requested for CONFIG_M686 not in final .config
Requested value: CONFIG_M686=y
Actual value:
Value requested for CONFIG_MPENTIUMII not in final .config
Requested value: # CONFIG_MPENTIUMII is not set
Actual value:
Value requested for CONFIG_MPENTIUMIII not in final .config
Requested value: # CONFIG_MPENTIUMIII is not set
Actual value:
Value requested for CONFIG_MPENTIUMM not in final .config
Requested value: # CONFIG_MPENTIUMM is not set
Actual value:
Value requested for CONFIG_MPENTIUM4 not in final .config
Requested value: # CONFIG_MPENTIUM4 is not set
Actual value:
Value requested for CONFIG_MK6 not in final .config
Requested value: # CONFIG_MK6 is not set
Actual value:
Value requested for CONFIG_MK7 not in final .config
Requested value: # CONFIG_MK7 is not set
Actual value:
Value requested for CONFIG_MCRUSOE not in final .config
Requested value: # CONFIG_MCRUSOE is not set
Actual value:
Value requested for CONFIG_MEFFICEON not in final .config
Requested value: # CONFIG_MEFFICEON is not set
Actual value:
Value requested for CONFIG_MWINCHIPC6 not in final .config
Requested value: # CONFIG_MWINCHIPC6 is not set
Actual value:
Value requested for CONFIG_MWINCHIP3D not in final .config
Requested value: # CONFIG_MWINCHIP3D is not set
Actual value:
Value requested for CONFIG_MELAN not in final .config
Requested value: # CONFIG_MELAN is not set
Actual value:
Value requested for CONFIG_MGEODEGX1 not in final .config
Requested value: # CONFIG_MGEODEGX1 is not set
Actual value:
Value requested for CONFIG_MGEODE_LX not in final .config
Requested value: # CONFIG_MGEODE_LX is not set
Actual value:
Value requested for CONFIG_MCYRIXIII not in final .config
Requested value: # CONFIG_MCYRIXIII is not set
Actual value:
Value requested for CONFIG_MVIAC3_2 not in final .config
Requested value: # CONFIG_MVIAC3_2 is not set
Actual value:
Value requested for CONFIG_MVIAC7 not in final .config
Requested value: # CONFIG_MVIAC7 is not set
Actual value:
Value requested for CONFIG_MATOM not in final .config
Requested value: # CONFIG_MATOM is not set
Actual value:
Value requested for CONFIG_X86_GENERIC not in final .config
Requested value: # CONFIG_X86_GENERIC is not set
Actual value:
Value requested for CONFIG_X86_INTERNODE_CACHE_SHIFT not in final .config
Requested value: CONFIG_X86_INTERNODE_CACHE_SHIFT=5
Actual value: CONFIG_X86_INTERNODE_CACHE_SHIFT=6
Value requested for CONFIG_X86_L1_CACHE_SHIFT not in final .config
Requested value: CONFIG_X86_L1_CACHE_SHIFT=5
Actual value: CONFIG_X86_L1_CACHE_SHIFT=6
Value requested for CONFIG_X86_USE_PPRO_CHECKSUM not in final .config
Requested value: CONFIG_X86_USE_PPRO_CHECKSUM=y
Actual value:
Value requested for CONFIG_X86_MINIMUM_CPU_FAMILY not in final .config
Requested value: CONFIG_X86_MINIMUM_CPU_FAMILY=6
Actual value: CONFIG_X86_MINIMUM_CPU_FAMILY=64
Value requested for CONFIG_CPU_SUP_TRANSMETA_32 not in final .config
Requested value: CONFIG_CPU_SUP_TRANSMETA_32=y
Actual value:
Value requested for CONFIG_CPU_SUP_VORTEX_32 not in final .config
Requested value: CONFIG_CPU_SUP_VORTEX_32=y
Actual value:
Value requested for CONFIG_HPET_TIMER not in final .config
Requested value: # CONFIG_HPET_TIMER is not set
Actual value: CONFIG_HPET_TIMER=y
Value requested for CONFIG_NR_CPUS_RANGE_END not in final .config
Requested value: CONFIG_NR_CPUS_RANGE_END=8
Actual value: CONFIG_NR_CPUS_RANGE_END=512
Value requested for CONFIG_NR_CPUS_DEFAULT not in final .config
Requested value: CONFIG_NR_CPUS_DEFAULT=8
Actual value: CONFIG_NR_CPUS_DEFAULT=64
Value requested for CONFIG_X86_ANCIENT_MCE not in final .config
Requested value: # CONFIG_X86_ANCIENT_MCE is not set
Actual value:
Value requested for CONFIG_X86_LEGACY_VM86 not in final .config
Requested value: # CONFIG_X86_LEGACY_VM86 is not set
Actual value:
Value requested for CONFIG_X86_ESPFIX32 not in final .config
Requested value: CONFIG_X86_ESPFIX32=y
Actual value:
Value requested for CONFIG_TOSHIBA not in final .config
Requested value: # CONFIG_TOSHIBA is not set
Actual value:
Value requested for CONFIG_X86_REBOOTFIXUPS not in final .config
Requested value: # CONFIG_X86_REBOOTFIXUPS is not set
Actual value:
Value requested for CONFIG_MICROCODE_INITRD32 not in final .config
Requested value: CONFIG_MICROCODE_INITRD32=y
Actual value:
Value requested for CONFIG_HIGHMEM4G not in final .config
Requested value: # CONFIG_HIGHMEM4G is not set
Actual value:
Value requested for CONFIG_VMSPLIT_3G not in final .config
Requested value: CONFIG_VMSPLIT_3G=y
Actual value:
Value requested for CONFIG_VMSPLIT_3G_OPT not in final .config
Requested value: # CONFIG_VMSPLIT_3G_OPT is not set
Actual value:
Value requested for CONFIG_VMSPLIT_2G not in final .config
Requested value: # CONFIG_VMSPLIT_2G is not set
Actual value:
Value requested for CONFIG_VMSPLIT_2G_OPT not in final .config
Requested value: # CONFIG_VMSPLIT_2G_OPT is not set
Actual value:
Value requested for CONFIG_VMSPLIT_1G not in final .config
Requested value: # CONFIG_VMSPLIT_1G is not set
Actual value:
Value requested for CONFIG_PAGE_OFFSET not in final .config
Requested value: CONFIG_PAGE_OFFSET=0xC0000000
Actual value:
Value requested for CONFIG_X86_PAE not in final .config
Requested value: # CONFIG_X86_PAE is not set
Actual value:
Value requested for CONFIG_ARCH_FLATMEM_ENABLE not in final .config
Requested value: CONFIG_ARCH_FLATMEM_ENABLE=y
Actual value:
Value requested for CONFIG_ARCH_SELECT_MEMORY_MODEL not in final .config
Requested value: CONFIG_ARCH_SELECT_MEMORY_MODEL=y
Actual value:
Value requested for CONFIG_ILLEGAL_POINTER_VALUE not in final .config
Requested value: CONFIG_ILLEGAL_POINTER_VALUE=0
Actual value: CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
Value requested for CONFIG_COMPAT_VDSO not in final .config
Requested value: # CONFIG_COMPAT_VDSO is not set
Actual value:
Value requested for CONFIG_FUNCTION_PADDING_CFI not in final .config
Requested value: CONFIG_FUNCTION_PADDING_CFI=0
Actual value: CONFIG_FUNCTION_PADDING_CFI=11
Value requested for CONFIG_FUNCTION_PADDING_BYTES not in final .config
Requested value: CONFIG_FUNCTION_PADDING_BYTES=4
Actual value: CONFIG_FUNCTION_PADDING_BYTES=16
Value requested for CONFIG_APM not in final .config
Requested value: # CONFIG_APM is not set
Actual value:
Value requested for CONFIG_X86_POWERNOW_K6 not in final .config
Requested value: # CONFIG_X86_POWERNOW_K6 is not set
Actual value:
Value requested for CONFIG_X86_POWERNOW_K7 not in final .config
Requested value: # CONFIG_X86_POWERNOW_K7 is not set
Actual value:
Value requested for CONFIG_X86_GX_SUSPMOD not in final .config
Requested value: # CONFIG_X86_GX_SUSPMOD is not set
Actual value:
Value requested for CONFIG_X86_SPEEDSTEP_ICH not in final .config
Requested value: # CONFIG_X86_SPEEDSTEP_ICH is not set
Actual value:
Value requested for CONFIG_X86_SPEEDSTEP_SMI not in final .config
Requested value: # CONFIG_X86_SPEEDSTEP_SMI is not set
Actual value:
Value requested for CONFIG_X86_CPUFREQ_NFORCE2 not in final .config
Requested value: # CONFIG_X86_CPUFREQ_NFORCE2 is not set
Actual value:
Value requested for CONFIG_X86_LONGRUN not in final .config
Requested value: # CONFIG_X86_LONGRUN is not set
Actual value:
Value requested for CONFIG_X86_LONGHAUL not in final .config
Requested value: # CONFIG_X86_LONGHAUL is not set
Actual value:
Value requested for CONFIG_X86_E_POWERSAVER not in final .config
Requested value: # CONFIG_X86_E_POWERSAVER is not set
Actual value:
Value requested for CONFIG_PCI_GOBIOS not in final .config
Requested value: # CONFIG_PCI_GOBIOS is not set
Actual value:
Value requested for CONFIG_PCI_GOMMCONFIG not in final .config
Requested value: # CONFIG_PCI_GOMMCONFIG is not set
Actual value:
Value requested for CONFIG_PCI_GODIRECT not in final .config
Requested value: # CONFIG_PCI_GODIRECT is not set
Actual value:
Value requested for CONFIG_PCI_GOANY not in final .config
Requested value: CONFIG_PCI_GOANY=y
Actual value:
Value requested for CONFIG_PCI_BIOS not in final .config
Requested value: CONFIG_PCI_BIOS=y
Actual value:
Value requested for CONFIG_ISA not in final .config
Requested value: # CONFIG_ISA is not set
Actual value:
Value requested for CONFIG_SCx200 not in final .config
Requested value: # CONFIG_SCx200 is not set
Actual value:
Value requested for CONFIG_OLPC not in final .config
Requested value: # CONFIG_OLPC is not set
Actual value:
Value requested for CONFIG_ALIX not in final .config
Requested value: # CONFIG_ALIX is not set
Actual value:
Value requested for CONFIG_NET5501 not in final .config
Requested value: # CONFIG_NET5501 is not set
Actual value:
Value requested for CONFIG_GEOS not in final .config
Requested value: # CONFIG_GEOS is not set
Actual value:
Value requested for CONFIG_COMPAT_32 not in final .config
Requested value: CONFIG_COMPAT_32=y
Actual value:
Value requested for CONFIG_HAVE_ATOMIC_IOMAP not in final .config
Requested value: CONFIG_HAVE_ATOMIC_IOMAP=y
Actual value:
Value requested for CONFIG_X86_DISABLED_FEATURE_PCID not in final .config
Requested value: CONFIG_X86_DISABLED_FEATURE_PCID=y
Actual value:
Value requested for CONFIG_X86_DISABLED_FEATURE_PKU not in final .config
Requested value: CONFIG_X86_DISABLED_FEATURE_PKU=y
Actual value:
Value requested for CONFIG_X86_DISABLED_FEATURE_OSPKE not in final .config
Requested value: CONFIG_X86_DISABLED_FEATURE_OSPKE=y
Actual value:
Value requested for CONFIG_X86_DISABLED_FEATURE_LA57 not in final .config
Requested value: CONFIG_X86_DISABLED_FEATURE_LA57=y
Actual value:
Value requested for CONFIG_X86_DISABLED_FEATURE_PTI not in final .config
Requested value: CONFIG_X86_DISABLED_FEATURE_PTI=y
Actual value:
Value requested for CONFIG_X86_DISABLED_FEATURE_IBT not in final .config
Requested value: CONFIG_X86_DISABLED_FEATURE_IBT=y
Actual value:
Value requested for CONFIG_X86_DISABLED_FEATURE_INVLPGB not in final .config
Requested value: CONFIG_X86_DISABLED_FEATURE_INVLPGB=y
Actual value:
Value requested for CONFIG_ARCH_32BIT_OFF_T not in final .config
Requested value: CONFIG_ARCH_32BIT_OFF_T=y
Actual value:
Value requested for CONFIG_ARCH_WANT_IPC_PARSE_VERSION not in final .config
Requested value: CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
Actual value:
Value requested for CONFIG_MODULES_USE_ELF_REL not in final .config
Requested value: CONFIG_MODULES_USE_ELF_REL=y
Actual value:
Value requested for CONFIG_ARCH_MMAP_RND_BITS not in final .config
Requested value: CONFIG_ARCH_MMAP_RND_BITS=8
Actual value: CONFIG_ARCH_MMAP_RND_BITS=28
Value requested for CONFIG_CLONE_BACKWARDS not in final .config
Requested value: CONFIG_CLONE_BACKWARDS=y
Actual value:
Value requested for CONFIG_OLD_SIGSUSPEND3 not in final .config
Requested value: CONFIG_OLD_SIGSUSPEND3=y
Actual value:
Value requested for CONFIG_OLD_SIGACTION not in final .config
Requested value: CONFIG_OLD_SIGACTION=y
Actual value:
Value requested for CONFIG_ARCH_SPLIT_ARG64 not in final .config
Requested value: CONFIG_ARCH_SPLIT_ARG64=y
Actual value:
Value requested for CONFIG_FUNCTION_ALIGNMENT not in final .config
Requested value: CONFIG_FUNCTION_ALIGNMENT=4
Actual value: CONFIG_FUNCTION_ALIGNMENT=16
Value requested for CONFIG_SELECT_MEMORY_MODEL not in final .config
Requested value: CONFIG_SELECT_MEMORY_MODEL=y
Actual value:
Value requested for CONFIG_FLATMEM_MANUAL not in final .config
Requested value: CONFIG_FLATMEM_MANUAL=y
Actual value:
Value requested for CONFIG_SPARSEMEM_MANUAL not in final .config
Requested value: # CONFIG_SPARSEMEM_MANUAL is not set
Actual value:
Value requested for CONFIG_FLATMEM not in final .config
Requested value: CONFIG_FLATMEM=y
Actual value:
Value requested for CONFIG_SPARSEMEM_STATIC not in final .config
Requested value: CONFIG_SPARSEMEM_STATIC=y
Actual value:
Value requested for CONFIG_KMAP_LOCAL not in final .config
Requested value: CONFIG_KMAP_LOCAL=y
Actual value:
Value requested for CONFIG_HAVE_EISA not in final .config
Requested value: CONFIG_HAVE_EISA=y
Actual value:
Value requested for CONFIG_EISA not in final .config
Requested value: # CONFIG_EISA is not set
Actual value:
Value requested for CONFIG_HOTPLUG_PCI_COMPAQ not in final .config
Requested value: # CONFIG_HOTPLUG_PCI_COMPAQ is not set
Actual value:
Value requested for CONFIG_HOTPLUG_PCI_IBM not in final .config
Requested value: # CONFIG_HOTPLUG_PCI_IBM is not set
Actual value:
Value requested for CONFIG_EFI_CAPSULE_QUIRK_QUARK_CSH not in final .config
Requested value: CONFIG_EFI_CAPSULE_QUIRK_QUARK_CSH=y
Actual value:
Value requested for CONFIG_PCH_PHUB not in final .config
Requested value: # CONFIG_PCH_PHUB is not set
Actual value:
Value requested for CONFIG_SCSI_NSP32 not in final .config
Requested value: # CONFIG_SCSI_NSP32 is not set
Actual value:
Value requested for CONFIG_PATA_CS5520 not in final .config
Requested value: # CONFIG_PATA_CS5520 is not set
Actual value:
Value requested for CONFIG_PATA_CS5530 not in final .config
Requested value: # CONFIG_PATA_CS5530 is not set
Actual value:
Value requested for CONFIG_PATA_CS5535 not in final .config
Requested value: # CONFIG_PATA_CS5535 is not set
Actual value:
Value requested for CONFIG_PATA_CS5536 not in final .config
Requested value: # CONFIG_PATA_CS5536 is not set
Actual value:
Value requested for CONFIG_PATA_SC1200 not in final .config
Requested value: # CONFIG_PATA_SC1200 is not set
Actual value:
Value requested for CONFIG_PCH_GBE not in final .config
Requested value: # CONFIG_PCH_GBE is not set
Actual value:
Value requested for CONFIG_INPUT_WISTRON_BTNS not in final .config
Requested value: # CONFIG_INPUT_WISTRON_BTNS is not set
Actual value:
Value requested for CONFIG_SERIAL_TIMBERDALE not in final .config
Requested value: # CONFIG_SERIAL_TIMBERDALE is not set
Actual value:
Value requested for CONFIG_SERIAL_PCH_UART not in final .config
Requested value: # CONFIG_SERIAL_PCH_UART is not set
Actual value:
Value requested for CONFIG_HW_RANDOM_GEODE not in final .config
Requested value: CONFIG_HW_RANDOM_GEODE=y
Actual value:
Value requested for CONFIG_SONYPI not in final .config
Requested value: # CONFIG_SONYPI is not set
Actual value:
Value requested for CONFIG_PC8736x_GPIO not in final .config
Requested value: # CONFIG_PC8736x_GPIO is not set
Actual value:
Value requested for CONFIG_NSC_GPIO not in final .config
Requested value: # CONFIG_NSC_GPIO is not set
Actual value:
Value requested for CONFIG_I2C_EG20T not in final .config
Requested value: # CONFIG_I2C_EG20T is not set
Actual value:
Value requested for CONFIG_SCx200_ACB not in final .config
Requested value: # CONFIG_SCx200_ACB is not set
Actual value:
Value requested for CONFIG_PTP_1588_CLOCK_PCH not in final .config
Requested value: # CONFIG_PTP_1588_CLOCK_PCH is not set
Actual value:
Value requested for CONFIG_SBC8360_WDT not in final .config
Requested value: # CONFIG_SBC8360_WDT is not set
Actual value:
Value requested for CONFIG_SBC7240_WDT not in final .config
Requested value: # CONFIG_SBC7240_WDT is not set
Actual value:
Value requested for CONFIG_MFD_CS5535 not in final .config
Requested value: # CONFIG_MFD_CS5535 is not set
Actual value:
Value requested for CONFIG_AGP_ALI not in final .config
Requested value: # CONFIG_AGP_ALI is not set
Actual value:
Value requested for CONFIG_AGP_ATI not in final .config
Requested value: # CONFIG_AGP_ATI is not set
Actual value:
Value requested for CONFIG_AGP_AMD not in final .config
Requested value: # CONFIG_AGP_AMD is not set
Actual value:
Value requested for CONFIG_AGP_NVIDIA not in final .config
Requested value: # CONFIG_AGP_NVIDIA is not set
Actual value:
Value requested for CONFIG_AGP_SWORKS not in final .config
Requested value: # CONFIG_AGP_SWORKS is not set
Actual value:
Value requested for CONFIG_AGP_EFFICEON not in final .config
Requested value: # CONFIG_AGP_EFFICEON is not set
Actual value:
Value requested for CONFIG_SND_CS5530 not in final .config
Requested value: # CONFIG_SND_CS5530 is not set
Actual value:
Value requested for CONFIG_SND_CS5535AUDIO not in final .config
Requested value: # CONFIG_SND_CS5535AUDIO is not set
Actual value:
Value requested for CONFIG_SND_SIS7019 not in final .config
Requested value: # CONFIG_SND_SIS7019 is not set
Actual value:
Value requested for CONFIG_LEDS_OT200 not in final .config
Requested value: # CONFIG_LEDS_OT200 is not set
Actual value:
Value requested for CONFIG_PCH_DMA not in final .config
Requested value: # CONFIG_PCH_DMA is not set
Actual value:
Value requested for CONFIG_CLKSRC_I8253 not in final .config
Requested value: CONFIG_CLKSRC_I8253=y
Actual value:
Value requested for CONFIG_MAILBOX not in final .config
Requested value: # CONFIG_MAILBOX is not set
Actual value: CONFIG_MAILBOX=y
Value requested for CONFIG_CRYPTO_SERPENT_SSE2_586 not in final .config
Requested value: # CONFIG_CRYPTO_SERPENT_SSE2_586 is not set
Actual value:
Value requested for CONFIG_CRYPTO_TWOFISH_586 not in final .config
Requested value: # CONFIG_CRYPTO_TWOFISH_586 is not set
Actual value:
Value requested for CONFIG_CRYPTO_DEV_GEODE not in final .config
Requested value: # CONFIG_CRYPTO_DEV_GEODE is not set
Actual value:
Value requested for CONFIG_CRYPTO_DEV_HIFN_795X not in final .config
Requested value: # CONFIG_CRYPTO_DEV_HIFN_795X is not set
Actual value:
Value requested for CONFIG_CRYPTO_LIB_POLY1305_RSIZE not in final .config
Requested value: CONFIG_CRYPTO_LIB_POLY1305_RSIZE=1
Actual value: CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
Value requested for CONFIG_AUDIT_GENERIC not in final .config
Requested value: CONFIG_AUDIT_GENERIC=y
Actual value:
Value requested for CONFIG_GENERIC_VDSO_32 not in final .config
Requested value: CONFIG_GENERIC_VDSO_32=y
Actual value:
Value requested for CONFIG_DEBUG_KMAP_LOCAL not in final .config
Requested value: # CONFIG_DEBUG_KMAP_LOCAL is not set
Actual value:
Value requested for CONFIG_HAVE_DEBUG_STACKOVERFLOW not in final .config
Requested value: CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
Actual value:
Value requested for CONFIG_DEBUG_STACKOVERFLOW not in final .config
Requested value: # CONFIG_DEBUG_STACKOVERFLOW is not set
Actual value:
Value requested for CONFIG_HAVE_FUNCTION_GRAPH_TRACER not in final .config
Requested value: CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
Actual value:
Value requested for CONFIG_HAVE_FUNCTION_GRAPH_FREGS not in final .config
Requested value: CONFIG_HAVE_FUNCTION_GRAPH_FREGS=y
Actual value:
Value requested for CONFIG_HAVE_FTRACE_GRAPH_FUNC not in final .config
Requested value: CONFIG_HAVE_FTRACE_GRAPH_FUNC=y
Actual value:
Value requested for CONFIG_DRM_KUNIT_TEST not in final .config
Requested value: CONFIG_DRM_KUNIT_TEST=m
Actual value:
Value requested for CONFIG_DRM_XE_WERROR not in final .config
Requested value: CONFIG_DRM_XE_WERROR=y
Actual value:
Value requested for CONFIG_DRM_XE_DEBUG not in final .config
Requested value: CONFIG_DRM_XE_DEBUG=y
Actual value:
Value requested for CONFIG_DRM_XE_DEBUG_MEM not in final .config
Requested value: CONFIG_DRM_XE_DEBUG_MEM=y
Actual value:
Value requested for CONFIG_DRM_XE_KUNIT_TEST not in final .config
Requested value: CONFIG_DRM_XE_KUNIT_TEST=m
Actual value:
++ nproc
+ make -j48 ARCH=i386 olddefconfig
GEN Makefile
#
# configuration written to .config
#
++ nproc
+ make -j48 ARCH=i386
SYNC include/config/auto.conf.cmd
GEN Makefile
GEN Makefile
WRAP arch/x86/include/generated/uapi/asm/bpf_perf_event.h
WRAP arch/x86/include/generated/uapi/asm/fcntl.h
WRAP arch/x86/include/generated/uapi/asm/errno.h
WRAP arch/x86/include/generated/uapi/asm/ioctl.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_32.h
WRAP arch/x86/include/generated/uapi/asm/ioctls.h
WRAP arch/x86/include/generated/uapi/asm/ipcbuf.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_64.h
WRAP arch/x86/include/generated/uapi/asm/param.h
UPD include/generated/uapi/linux/version.h
WRAP arch/x86/include/generated/uapi/asm/poll.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_x32.h
WRAP arch/x86/include/generated/uapi/asm/resource.h
SYSTBL arch/x86/include/generated/asm/syscalls_32.h
WRAP arch/x86/include/generated/uapi/asm/socket.h
WRAP arch/x86/include/generated/uapi/asm/sockios.h
WRAP arch/x86/include/generated/uapi/asm/termbits.h
UPD arch/x86/include/generated/asm/cpufeaturemasks.h
WRAP arch/x86/include/generated/uapi/asm/termios.h
WRAP arch/x86/include/generated/uapi/asm/types.h
UPD include/generated/compile.h
HOSTCC arch/x86/tools/relocs_32.o
HOSTCC arch/x86/tools/relocs_64.o
WRAP arch/x86/include/generated/asm/early_ioremap.h
HOSTCC arch/x86/tools/relocs_common.o
WRAP arch/x86/include/generated/asm/fprobe.h
WRAP arch/x86/include/generated/asm/mcs_spinlock.h
WRAP arch/x86/include/generated/asm/mmzone.h
WRAP arch/x86/include/generated/asm/irq_regs.h
WRAP arch/x86/include/generated/asm/kmap_size.h
WRAP arch/x86/include/generated/asm/local64.h
WRAP arch/x86/include/generated/asm/mmiowb.h
HOSTCC scripts/kallsyms
WRAP arch/x86/include/generated/asm/module.lds.h
HOSTCC scripts/sorttable
WRAP arch/x86/include/generated/asm/rwonce.h
HOSTCC scripts/asn1_compiler
HOSTCC scripts/selinux/mdp/mdp
HOSTLD arch/x86/tools/relocs
UPD include/config/kernel.release
UPD include/generated/utsrelease.h
CC scripts/mod/empty.o
HOSTCC scripts/mod/mk_elfconfig
CC scripts/mod/devicetable-offsets.s
UPD scripts/mod/devicetable-offsets.h
MKELF scripts/mod/elfconfig.h
HOSTCC scripts/mod/modpost.o
HOSTCC scripts/mod/file2alias.o
HOSTCC scripts/mod/sumversion.o
HOSTCC scripts/mod/symsearch.o
HOSTLD scripts/mod/modpost
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-arch-fallback.h
CC kernel/bounds.s
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-instrumented.h
CHKSHA1 /workspace/kernel/include/linux/atomic/atomic-long.h
UPD include/generated/timeconst.h
UPD include/generated/bounds.h
CC arch/x86/kernel/asm-offsets.s
UPD include/generated/asm-offsets.h
CALL /workspace/kernel/scripts/checksyscalls.sh
LDS scripts/module.lds
CC init/main.o
HOSTCC usr/gen_init_cpio
CC init/do_mounts.o
CC certs/system_keyring.o
CC init/do_mounts_initrd.o
UPD init/utsversion-tmp.h
CC ipc/util.o
CC init/initramfs.o
CC ipc/msgutil.o
CC init/calibrate.o
CC security/commoncap.o
CC mm/filemap.o
CC ipc/msg.o
CC io_uring/io_uring.o
CC init/init_task.o
AS arch/x86/lib/atomic64_cx8_32.o
CC ipc/sem.o
CC security/lsm_syscalls.o
CC mm/mempool.o
CC block/bdev.o
CC io_uring/opdef.o
CC security/min_addr.o
CC ipc/shm.o
CC arch/x86/power/cpu.o
CC arch/x86/realmode/init.o
CC mm/oom_kill.o
AR arch/x86/crypto/built-in.a
CC arch/x86/pci/i386.o
AR arch/x86/net/built-in.a
CC security/keys/gc.o
CC arch/x86/video/video-common.o
CC security/integrity/iint.o
CC block/partitions/core.o
CC arch/x86/events/amd/core.o
CC block/partitions/msdos.o
AR virt/lib/built-in.a
HOSTCC security/selinux/genheaders
CC arch/x86/mm/pat/set_memory.o
CC fs/nfs_common/nfsacl.o
CC arch/x86/virt/svm/cmdline.o
AR drivers/cache/built-in.a
AR arch/x86/platform/atom/built-in.a
CC arch/x86/mm/pat/memtype.o
CC net/core/sock.o
CC arch/x86/kernel/fpu/init.o
AR virt/built-in.a
AR lib/math/tests/built-in.a
CC fs/notify/dnotify/dnotify.o
CC sound/core/seq/seq.o
CC lib/math/div64.o
AS arch/x86/lib/checksum_32.o
HOSTCC certs/extract-cert
AR drivers/irqchip/built-in.a
CC arch/x86/entry/vdso/vma.o
AR arch/x86/platform/ce4100/built-in.a
CC kernel/sched/core.o
CC arch/x86/platform/efi/memmap.o
AR drivers/bus/mhi/built-in.a
AR drivers/bus/built-in.a
CC arch/x86/lib/cmdline.o
CC crypto/asymmetric_keys/asymmetric_type.o
AR drivers/pwm/built-in.a
AR drivers/leds/trigger/built-in.a
AR drivers/leds/blink/built-in.a
AR arch/x86/virt/svm/built-in.a
AR drivers/leds/simatic/built-in.a
CC drivers/leds/led-core.o
AR arch/x86/virt/vmx/built-in.a
AR arch/x86/virt/built-in.a
AS arch/x86/realmode/rm/header.o
AS arch/x86/lib/cmpxchg8b_emu.o
AS arch/x86/realmode/rm/trampoline_32.o
CC lib/math/gcd.o
GEN security/selinux/flask.h security/selinux/av_permissions.h
CC security/selinux/avc.o
CC arch/x86/lib/cpu.o
AS arch/x86/realmode/rm/stack.o
AS arch/x86/realmode/rm/reboot.o
CC lib/math/lcm.o
AS arch/x86/realmode/rm/wakeup_asm.o
CC arch/x86/realmode/rm/wakemain.o
CC arch/x86/lib/delay.o
CC lib/math/int_log.o
CC arch/x86/realmode/rm/video-mode.o
GEN usr/initramfs_data.cpio
COPY usr/initramfs_inc_data
AS usr/initramfs_data.o
CC arch/x86/kernel/fpu/bugs.o
CERT certs/x509_certificate_list
CERT certs/signing_key.x509
CC sound/core/seq/seq_lock.o
AS certs/system_certificates.o
AR usr/built-in.a
CC arch/x86/pci/init.o
CC lib/math/int_pow.o
AR certs/built-in.a
CC security/security.o
AS arch/x86/realmode/rm/copy.o
CC lib/math/int_sqrt.o
CC arch/x86/kernel/fpu/core.o
AS arch/x86/realmode/rm/bioscall.o
CC arch/x86/realmode/rm/regs.o
CC lib/math/reciprocal_div.o
CC arch/x86/realmode/rm/video-vga.o
CC io_uring/kbuf.o
CC fs/iomap/trace.o
AS arch/x86/lib/getuser.o
CC net/ethernet/eth.o
GEN arch/x86/lib/inat-tables.c
AR arch/x86/video/built-in.a
CC lib/math/rational.o
CC block/partitions/efi.o
CC arch/x86/realmode/rm/video-vesa.o
CC arch/x86/lib/insn-eval.o
CC fs/iomap/iter.o
CC security/integrity/integrity_audit.o
AR sound/i2c/other/built-in.a
CC fs/nfs_common/grace.o
AR sound/i2c/built-in.a
CC arch/x86/realmode/rm/video-bios.o
CC drivers/leds/led-class.o
CC security/selinux/hooks.o
CC net/core/request_sock.o
CC sound/core/seq/seq_clientmgr.o
CC lib/crypto/mpi/generic_mpih-lshift.o
CC arch/x86/mm/pat/memtype_interval.o
CC arch/x86/entry/vdso/extable.o
AR lib/tests/built-in.a
LDS arch/x86/entry/vdso/vdso32/vdso32.lds
CC security/selinux/selinuxfs.o
CC crypto/asymmetric_keys/restrict.o
CC security/keys/key.o
PASYMS arch/x86/realmode/rm/pasyms.h
CC arch/x86/events/amd/lbr.o
AR fs/notify/dnotify/built-in.a
CC fs/nfs_common/common.o
CC fs/notify/inotify/inotify_fsnotify.o
CC arch/x86/power/hibernate_32.o
LDS arch/x86/realmode/rm/realmode.lds
CC kernel/locking/mutex.o
LD arch/x86/realmode/rm/realmode.elf
RELOCS arch/x86/realmode/rm/realmode.relocs
OBJCOPY arch/x86/realmode/rm/realmode.bin
AS arch/x86/realmode/rmpiggy.o
CC arch/x86/platform/efi/quirks.o
CC fs/notify/inotify/inotify_user.o
AR arch/x86/realmode/built-in.a
CC arch/x86/pci/pcbios.o
CC kernel/sched/fair.o
AR lib/math/built-in.a
CC sound/core/seq/seq_memory.o
CC kernel/power/qos.o
AR fs/notify/fanotify/built-in.a
CC fs/quota/dquot.o
CC security/lsm_audit.o
CC arch/x86/mm/init.o
CC net/core/skbuff.o
AS arch/x86/power/hibernate_asm_32.o
CC crypto/asymmetric_keys/signature.o
CC io_uring/rsrc.o
CC kernel/sched/build_policy.o
CC arch/x86/platform/efi/efi.o
CC arch/x86/mm/init_32.o
CC drivers/leds/led-triggers.o
CC lib/crypto/mpi/generic_mpih-mul1.o
CC fs/notify/fsnotify.o
CC lib/crypto/memneq.o
CC arch/x86/kernel/cpu/mce/core.o
AR arch/x86/mm/pat/built-in.a
AR security/integrity/built-in.a
CC arch/x86/kernel/cpu/mtrr/mtrr.o
CC crypto/asymmetric_keys/public_key.o
CC arch/x86/kernel/cpu/mce/severity.o
CC init/version.o
CC arch/x86/lib/insn.o
AR block/partitions/built-in.a
CC block/fops.o
AR fs/nfs_common/built-in.a
AS arch/x86/entry/vdso/vdso32/note.o
CC arch/x86/power/hibernate.o
AS arch/x86/entry/vdso/vdso32/system_call.o
AR sound/drivers/opl3/built-in.a
AR sound/drivers/opl4/built-in.a
AS arch/x86/entry/vdso/vdso32/sigreturn.o
AR sound/drivers/mpu401/built-in.a
AR sound/drivers/vx/built-in.a
CC arch/x86/entry/vdso/vdso32/vclock_gettime.o
AR sound/drivers/pcsp/built-in.a
AR sound/drivers/built-in.a
CC arch/x86/events/intel/core.o
CC arch/x86/events/zhaoxin/core.o
CC arch/x86/kernel/cpu/microcode/core.o
CC arch/x86/kernel/fpu/regset.o
CC arch/x86/pci/mmconfig_32.o
AR init/built-in.a
CC arch/x86/kernel/fpu/signal.o
CC arch/x86/events/amd/ibs.o
CC arch/x86/events/core.o
CC arch/x86/kernel/cpu/mce/genpool.o
CC fs/iomap/buffered-io.o
CC ipc/syscall.o
CC block/bio.o
CC fs/proc/task_mmu.o
CC arch/x86/entry/vdso/vdso32/vgetcpu.o
CC security/keys/keyring.o
CC arch/x86/lib/kaslr.o
CC net/core/datagram.o
AR fs/notify/inotify/built-in.a
CC arch/x86/lib/memcpy_32.o
CC lib/crypto/mpi/generic_mpih-mul2.o
CC arch/x86/kernel/cpu/mtrr/if.o
CC kernel/locking/semaphore.o
AR net/ethernet/built-in.a
CC kernel/printk/printk.o
AR drivers/leds/built-in.a
CC drivers/pci/msi/pcidev_msi.o
AS arch/x86/lib/memmove_32.o
CC kernel/power/main.o
ASN.1 crypto/asymmetric_keys/x509.asn1.[ch]
ASN.1 crypto/asymmetric_keys/x509_akid.asn1.[ch]
CC arch/x86/lib/misc.o
CC crypto/asymmetric_keys/x509_loader.o
CC sound/core/seq/seq_queue.o
CC kernel/printk/printk_safe.o
CC arch/x86/lib/pc-conf-reg.o
CC arch/x86/mm/fault.o
CC drivers/pci/msi/api.o
AR net/802/built-in.a
AR arch/x86/power/built-in.a
CC block/elevator.o
CC arch/x86/platform/efi/efi_32.o
CC kernel/irq/irqdesc.o
HOSTCC arch/x86/entry/vdso/vdso2c
CC fs/notify/notification.o
CC arch/x86/entry/vdso/vdso32-setup.o
CC crypto/asymmetric_keys/x509_public_key.o
AS arch/x86/lib/putuser.o
CC arch/x86/kernel/cpu/microcode/intel.o
AS arch/x86/lib/retpoline.o
CC arch/x86/pci/direct.o
CC lib/vdso/datastore.o
CC lib/zlib_inflate/inffast.o
CC arch/x86/lib/string_32.o
CC arch/x86/lib/strstr_32.o
AR arch/x86/entry/vsyscall/built-in.a
CC arch/x86/kernel/acpi/boot.o
CC lib/crypto/mpi/generic_mpih-mul3.o
CC arch/x86/kernel/cpu/mtrr/generic.o
CC arch/x86/lib/usercopy.o
CC ipc/ipc_sysctl.o
CC arch/x86/kernel/fpu/xstate.o
AR arch/x86/events/zhaoxin/built-in.a
CC lib/crypto/mpi/generic_mpih-rshift.o
CC sound/core/seq/seq_fifo.o
AS arch/x86/entry/entry.o
CC lib/zlib_inflate/inflate.o
CC sound/core/sound.o
CC sound/core/init.o
CC kernel/locking/rwsem.o
VDSO arch/x86/entry/vdso/vdso32.so.dbg
CC kernel/locking/percpu-rwsem.o
OBJCOPY arch/x86/entry/vdso/vdso32.so
VDSO2C arch/x86/entry/vdso/vdso-image-32.c
CC arch/x86/entry/vdso/vdso-image-32.o
CC drivers/pci/pcie/portdrv.o
CC arch/x86/lib/usercopy_32.o
CC fs/proc/inode.o
CC drivers/pci/msi/msi.o
ASN.1 crypto/asymmetric_keys/pkcs7.asn1.[ch]
CC crypto/asymmetric_keys/pkcs7_trust.o
CC drivers/video/console/dummycon.o
CC fs/notify/group.o
AR lib/vdso/built-in.a
CC kernel/irq/handle.o
CC drivers/video/backlight/backlight.o
AS arch/x86/platform/efi/efi_stub_32.o
CC arch/x86/events/amd/uncore.o
CC arch/x86/platform/efi/runtime-map.o
AR arch/x86/entry/vdso/built-in.a
CC ipc/mqueue.o
AS arch/x86/entry/entry_32.o
CC mm/fadvise.o
CC arch/x86/entry/syscall_32.o
CC security/keys/keyctl.o
CC fs/quota/quota_v2.o
CC kernel/irq/manage.o
CC arch/x86/pci/mmconfig-shared.o
CC arch/x86/lib/msr-smp.o
CC lib/crypto/mpi/generic_mpih-sub1.o
CC arch/x86/kernel/cpu/microcode/amd.o
CC kernel/power/console.o
CC sound/core/seq/seq_prioq.o
CC lib/zlib_deflate/deflate.o
CC crypto/api.o
CC lib/zlib_inflate/infutil.o
CC arch/x86/kernel/cpu/mtrr/cleanup.o
CC arch/x86/kernel/cpu/mce/intel.o
CC crypto/asymmetric_keys/pkcs7_verify.o
CC ipc/namespace.o
CC lib/zlib_deflate/deftree.o
CC arch/x86/lib/cache-smp.o
CC block/blk-core.o
CC arch/x86/mm/ioremap.o
CC kernel/locking/spinlock.o
CC arch/x86/kernel/cpu/cacheinfo.o
CC arch/x86/events/probe.o
CC mm/maccess.o
CC lib/crypto/utils.o
CC drivers/video/console/vgacon.o
CC drivers/pci/pcie/rcec.o
CC lib/zlib_inflate/inftrees.o
CC arch/x86/kernel/acpi/sleep.o
CC arch/x86/lib/crc32-glue.o
AS arch/x86/kernel/acpi/wakeup_32.o
CC fs/kernfs/mount.o
CC fs/notify/mark.o
CC fs/proc/root.o
CC lib/crypto/mpi/generic_mpih-add1.o
CC fs/kernfs/inode.o
AR arch/x86/kernel/fpu/built-in.a
CC fs/kernfs/dir.o
CC fs/iomap/direct-io.o
CC lib/zlib_inflate/inflate_syms.o
CC crypto/asymmetric_keys/x509.asn1.o
AR arch/x86/platform/efi/built-in.a
CC fs/quota/quota_tree.o
CC crypto/asymmetric_keys/x509_akid.asn1.o
AR arch/x86/platform/geode/built-in.a
AR arch/x86/platform/iris/built-in.a
CC crypto/asymmetric_keys/x509_cert_parser.o
CC arch/x86/platform/intel/iosf_mbi.o
CC io_uring/notif.o
AR drivers/video/backlight/built-in.a
CC sound/core/seq/seq_timer.o
AR drivers/idle/built-in.a
CC kernel/power/process.o
CC arch/x86/events/intel/bts.o
CC kernel/power/suspend.o
CC drivers/pci/msi/irqdomain.o
CC arch/x86/kernel/cpu/mce/amd.o
CC kernel/locking/osq_lock.o
CC block/blk-sysfs.o
AR drivers/pci/pwrctrl/built-in.a
CC fs/notify/fdinfo.o
CC drivers/pci/hotplug/pci_hotplug_core.o
CC fs/proc/base.o
CC arch/x86/kernel/cpu/mtrr/amd.o
AS arch/x86/lib/crc32-pclmul.o
CC arch/x86/pci/fixup.o
CC fs/proc/generic.o
CC lib/zlib_deflate/deflate_syms.o
CC arch/x86/lib/msr.o
AR lib/zlib_inflate/built-in.a
CC kernel/rcu/update.o
AR arch/x86/events/amd/built-in.a
AR kernel/livepatch/built-in.a
CC kernel/locking/qspinlock.o
CC net/sched/sch_generic.o
AR arch/x86/kernel/cpu/microcode/built-in.a
CC kernel/dma/mapping.o
CC drivers/pci/pcie/bwctrl.o
CC kernel/printk/nbcon.o
AS arch/x86/entry/thunk.o
CC lib/crypto/mpi/mpicoder.o
CC mm/page-writeback.o
CC arch/x86/kernel/acpi/cstate.o
CC security/keys/permission.o
AR arch/x86/entry/built-in.a
CC net/core/stream.o
CC arch/x86/mm/extable.o
CC crypto/asymmetric_keys/pkcs7.asn1.o
CC crypto/asymmetric_keys/pkcs7_parser.o
AR lib/zlib_deflate/built-in.a
CC net/core/scm.o
CC arch/x86/pci/acpi.o
CC fs/quota/quota.o
CC kernel/irq/spurious.o
CC kernel/irq/resend.o
CC security/selinux/netlink.o
AR arch/x86/platform/intel/built-in.a
CC arch/x86/kernel/apic/apic.o
AR arch/x86/platform/intel-mid/built-in.a
AR arch/x86/platform/intel-quark/built-in.a
AR drivers/video/console/built-in.a
CC arch/x86/kernel/cpu/mtrr/cyrix.o
AR arch/x86/platform/olpc/built-in.a
CC sound/core/seq/seq_system.o
CC ipc/mq_sysctl.o
CC block/blk-flush.o
AR arch/x86/platform/scx200/built-in.a
AR drivers/video/fbdev/core/built-in.a
AR fs/notify/built-in.a
AR arch/x86/platform/ts5500/built-in.a
AR drivers/video/fbdev/omap/built-in.a
CC arch/x86/kernel/kprobes/core.o
AR arch/x86/platform/uv/built-in.a
AR arch/x86/platform/built-in.a
AR drivers/pci/msi/built-in.a
AR drivers/video/fbdev/omap2/omapfb/dss/built-in.a
LDS arch/x86/kernel/vmlinux.lds
CC kernel/locking/rtmutex_api.o
AR drivers/video/fbdev/omap2/omapfb/displays/built-in.a
CC lib/crypto/chacha.o
AR drivers/video/fbdev/omap2/omapfb/built-in.a
CC fs/proc/array.o
AR drivers/video/fbdev/omap2/built-in.a
CC kernel/locking/qrwlock.o
AR drivers/video/fbdev/built-in.a
CC drivers/video/aperture.o
CC drivers/pci/hotplug/acpi_pcihp.o
CC lib/crypto/aes.o
AS arch/x86/lib/msr-reg.o
AR arch/x86/kernel/acpi/built-in.a
AS arch/x86/kernel/head_32.o
CC fs/kernfs/file.o
CC fs/iomap/ioend.o
CC fs/kernfs/symlink.o
CC drivers/video/cmdline.o
CC mm/folio-compat.o
CC arch/x86/kernel/cpu/mce/threshold.o
AR crypto/asymmetric_keys/built-in.a
CC io_uring/tctx.o
CC crypto/cipher.o
CC drivers/pci/pcie/aspm.o
CC arch/x86/events/intel/ds.o
CC arch/x86/lib/msr-reg-export.o
AS arch/x86/lib/hweight.o
AR ipc/built-in.a
CC arch/x86/mm/mmap.o
CC lib/crypto/mpi/mpi-add.o
CC kernel/rcu/sync.o
CC kernel/power/hibernate.o
AR sound/isa/ad1816a/built-in.a
CC security/keys/process_keys.o
AR sound/isa/ad1848/built-in.a
CC kernel/irq/chip.o
AR sound/isa/cs423x/built-in.a
AR sound/isa/es1688/built-in.a
AR sound/isa/galaxy/built-in.a
CC kernel/printk/printk_ringbuffer.o
AR sound/isa/gus/built-in.a
AR sound/isa/msnd/built-in.a
AR sound/isa/opti9xx/built-in.a
CC arch/x86/kernel/head32.o
AR sound/isa/sb/built-in.a
AR sound/isa/wavefront/built-in.a
AR sound/isa/wss/built-in.a
AR sound/isa/built-in.a
CC arch/x86/kernel/cpu/mtrr/centaur.o
CC arch/x86/lib/iomem.o
CC sound/core/seq/seq_ports.o
CC arch/x86/mm/pgtable.o
CC kernel/rcu/srcutree.o
CC arch/x86/pci/legacy.o
CC fs/proc/fd.o
CC fs/sysfs/file.o
CC kernel/dma/direct.o
CC fs/sysfs/dir.o
CC security/selinux/nlmsgtab.o
CC arch/x86/kernel/apic/apic_common.o
CC drivers/video/nomodeset.o
CC kernel/irq/dummychip.o
CC block/blk-settings.o
CC arch/x86/kernel/kprobes/opt.o
AR drivers/pci/hotplug/built-in.a
CC arch/x86/lib/atomic64_32.o
CC fs/iomap/fiemap.o
CC fs/proc/proc_tty.o
CC crypto/algapi.o
CC arch/x86/lib/inat.o
CC lib/lzo/lzo1x_compress.o
CC lib/lz4/lz4_decompress.o
CC arch/x86/kernel/apic/apic_noop.o
CC lib/crypto/mpi/mpi-bit.o
CC sound/core/seq/seq_info.o
CC fs/iomap/seek.o
CC lib/zstd/zstd_decompress_module.o
CC fs/quota/kqid.o
AR kernel/locking/built-in.a
CC arch/x86/pci/irq.o
CC arch/x86/kernel/cpu/mtrr/legacy.o
AR drivers/pci/controller/dwc/built-in.a
CC mm/readahead.o
CC fs/iomap/swapfile.o
AR drivers/pci/controller/mobiveil/built-in.a
AR arch/x86/lib/built-in.a
AR arch/x86/lib/lib.a
AR drivers/pci/controller/plda/built-in.a
AR drivers/pci/controller/built-in.a
CC mm/swap.o
CC drivers/video/hdmi.o
CC net/core/gen_stats.o
CC kernel/printk/sysctl.o
CC io_uring/filetable.o
CC arch/x86/mm/physaddr.o
AR fs/kernfs/built-in.a
CC kernel/rcu/tree.o
CC crypto/scatterwalk.o
CC fs/proc/cmdline.o
CC kernel/sched/build_utility.o
CC drivers/pci/pcie/pme.o
AR arch/x86/kernel/cpu/mce/built-in.a
AR drivers/char/ipmi/built-in.a
AR sound/pci/ac97/built-in.a
CC lib/crypto/arc4.o
AR sound/ppc/built-in.a
AR sound/pci/ali5451/built-in.a
CC arch/x86/kernel/ebda.o
AR sound/pci/asihpi/built-in.a
CC sound/core/memory.o
CC kernel/irq/devres.o
AR sound/pci/au88x0/built-in.a
AR sound/pci/aw2/built-in.a
AR sound/pci/ctxfi/built-in.a
CC lib/zstd/decompress/huf_decompress.o
CC fs/sysfs/symlink.o
CC security/keys/request_key.o
AR sound/pci/ca0106/built-in.a
AR sound/pci/cs46xx/built-in.a
CC sound/core/control.o
AR sound/pci/cs5535audio/built-in.a
AR arch/x86/kernel/cpu/mtrr/built-in.a
CC arch/x86/kernel/cpu/scattered.o
AR sound/pci/lola/built-in.a
AR sound/pci/lx6464es/built-in.a
CC sound/core/seq/seq_dummy.o
AR sound/pci/echoaudio/built-in.a
AR sound/pci/emu10k1/built-in.a
CC lib/xz/xz_dec_syms.o
CC lib/lzo/lzo1x_compress_safe.o
CC sound/pci/hda/hda_bind.o
CC fs/quota/netlink.o
CC lib/dim/dim.o
AR kernel/printk/built-in.a
AR sound/pci/ice1712/built-in.a
CC crypto/proc.o
CC io_uring/rw.o
AR sound/pci/korg1212/built-in.a
CC security/keys/request_key_auth.o
CC net/sched/sch_mq.o
CC lib/crypto/gf128mul.o
CC io_uring/net.o
CC fs/sysfs/mount.o
CC lib/crypto/mpi/mpi-cmp.o
CC kernel/power/snapshot.o
CC lib/crypto/mpi/mpi-sub-ui.o
CC arch/x86/kernel/apic/ipi.o
CC arch/x86/pci/common.o
CC kernel/irq/kexec.o
CC arch/x86/mm/tlb.o
CC kernel/dma/ops_helpers.o
AR arch/x86/kernel/kprobes/built-in.a
CC security/selinux/netif.o
CC block/blk-ioc.o
CC fs/proc/consoles.o
CC security/selinux/netnode.o
AR fs/iomap/built-in.a
CC arch/x86/mm/cpu_entry_area.o
CC lib/xz/xz_dec_stream.o
CC arch/x86/pci/early.o
CC arch/x86/kernel/cpu/topology_common.o
CC arch/x86/pci/bus_numa.o
AR sound/arm/built-in.a
CC arch/x86/kernel/cpu/topology_ext.o
CC lib/dim/net_dim.o
CC sound/pci/hda/hda_codec.o
AR drivers/video/built-in.a
CC lib/dim/rdma_dim.o
CC net/netlink/af_netlink.o
AR net/bpf/built-in.a
AR drivers/pci/pcie/built-in.a
CC arch/x86/kernel/platform-quirks.o
AR drivers/pci/switch/built-in.a
CC drivers/pci/access.o
CC lib/lzo/lzo1x_decompress_safe.o
AR sound/core/seq/built-in.a
CC fs/sysfs/group.o
CC fs/proc/cpuinfo.o
CC net/sched/sch_frag.o
CC net/netlink/genetlink.o
CC kernel/irq/autoprobe.o
CC crypto/aead.o
CC drivers/acpi/acpica/dsargs.o
CC security/device_cgroup.o
CC net/core/gen_estimator.o
CC kernel/rcu/rcu_segcblist.o
CC lib/crypto/mpi/mpi-div.o
CC sound/core/misc.o
CC arch/x86/kernel/apic/vector.o
CC security/keys/user_defined.o
CC arch/x86/events/intel/knc.o
CC arch/x86/mm/maccess.o
AR lib/lz4/built-in.a
CC arch/x86/events/intel/lbr.o
AR fs/quota/built-in.a
CC arch/x86/events/intel/p4.o
CC arch/x86/kernel/cpu/topology_amd.o
CC lib/zstd/decompress/zstd_ddict.o
CC arch/x86/mm/pgprot.o
CC kernel/dma/remap.o
AR sound/sh/built-in.a
CC lib/xz/xz_dec_lzma2.o
AR drivers/acpi/pmic/built-in.a
CC drivers/acpi/acpica/dscontrol.o
CC security/selinux/netport.o
CC kernel/entry/common.o
CC sound/pci/hda/hda_jack.o
CC kernel/power/swap.o
CC fs/proc/devices.o
CC kernel/power/user.o
AR lib/lzo/built-in.a
CC mm/truncate.o
CC net/core/net_namespace.o
CC block/blk-map.o
CC arch/x86/pci/amd_bus.o
CC lib/crypto/blake2s.o
CC net/netlink/policy.o
CC arch/x86/events/utils.o
CC kernel/irq/irqdomain.o
CC security/keys/proc.o
AR fs/sysfs/built-in.a
CC crypto/geniv.o
CC kernel/module/main.o
CC drivers/acpi/acpica/dsdebug.o
CC arch/x86/kernel/cpu/common.o
CC kernel/module/strict_rwx.o
CC lib/crypto/mpi/mpi-mod.o
CC drivers/pnp/pnpacpi/core.o
CC kernel/entry/syscall_user_dispatch.o
CC arch/x86/mm/pgtable_32.o
CC drivers/pci/bus.o
CC drivers/pnp/core.o
CC drivers/pnp/card.o
AR lib/dim/built-in.a
CC sound/core/device.o
CC net/core/secure_seq.o
AR kernel/dma/built-in.a
CC drivers/pci/probe.o
CC arch/x86/events/rapl.o
CC net/ethtool/ioctl.o
CC arch/x86/mm/iomap_32.o
CC lib/xz/xz_dec_bcj.o
CC lib/zstd/decompress/zstd_decompress.o
CC fs/proc/interrupts.o
CC drivers/acpi/acpica/dsfield.o
AR sound/pci/mixart/built-in.a
CC drivers/pnp/pnpacpi/rsparser.o
CC drivers/acpi/acpica/dsinit.o
CC net/sched/sch_api.o
CC lib/zstd/decompress/zstd_decompress_block.o
CC block/blk-merge.o
CC security/keys/sysctl.o
CC arch/x86/kernel/cpu/rdrand.o
AR arch/x86/pci/built-in.a
CC lib/crypto/mpi/mpi-mul.o
CC sound/core/info.o
AR drivers/amba/built-in.a
CC arch/x86/kernel/apic/init.o
CC lib/crypto/mpi/mpih-cmp.o
CC io_uring/poll.o
CC mm/vmscan.o
CC security/selinux/status.o
CC drivers/pci/host-bridge.o
CC kernel/time/time.o
AR kernel/entry/built-in.a
CC kernel/futex/core.o
CC arch/x86/kernel/process_32.o
CC drivers/acpi/acpica/dsmethod.o
CC arch/x86/mm/hugetlbpage.o
CC net/sched/sch_blackhole.o
CC fs/proc/loadavg.o
CC kernel/irq/proc.o
CC kernel/power/poweroff.o
CC net/core/flow_dissector.o
AR lib/xz/built-in.a
CC drivers/pnp/driver.o
AR sound/pci/nm256/built-in.a
CC net/ethtool/common.o
CC crypto/lskcipher.o
AR drivers/clk/actions/built-in.a
CC fs/devpts/inode.o
CC arch/x86/events/intel/p6.o
CC sound/pci/hda/hda_auto_parser.o
AR drivers/clk/analogbits/built-in.a
AR drivers/clk/bcm/built-in.a
AR drivers/clk/imgtec/built-in.a
CC security/selinux/ss/ebitmap.o
AR drivers/clk/imx/built-in.a
AR drivers/clk/ingenic/built-in.a
AR drivers/clk/mediatek/built-in.a
AR drivers/clk/microchip/built-in.a
CC net/sched/cls_api.o
AR drivers/clk/mstar/built-in.a
AR drivers/clk/mvebu/built-in.a
AR drivers/clk/ralink/built-in.a
AR drivers/clk/renesas/built-in.a
AR drivers/clk/socfpga/built-in.a
AR drivers/clk/sophgo/built-in.a
CC crypto/skcipher.o
CC net/sched/act_api.o
CC crypto/seqiv.o
AR drivers/clk/sprd/built-in.a
AR drivers/clk/starfive/built-in.a
AR kernel/power/built-in.a
AR drivers/clk/sunxi-ng/built-in.a
AR drivers/clk/ti/built-in.a
CC lib/fonts/fonts.o
CC arch/x86/kernel/apic/hw_nmi.o
AR drivers/clk/versatile/built-in.a
CC lib/fonts/font_8x16.o
AR drivers/clk/xilinx/built-in.a
CC security/keys/keyctl_pkey.o
AR drivers/clk/built-in.a
CC net/netfilter/core.o
CC io_uring/eventfd.o
CC drivers/acpi/acpica/dsmthdat.o
AR sound/pci/oxygen/built-in.a
CC arch/x86/events/intel/pt.o
CC arch/x86/kernel/signal.o
CC lib/crypto/mpi/mpih-div.o
CC net/netfilter/nf_log.o
AR drivers/pnp/pnpacpi/built-in.a
CC kernel/time/timer.o
CC kernel/module/kmod.o
CC fs/netfs/buffered_read.o
CC security/selinux/ss/hashtab.o
CC fs/proc/meminfo.o
AR arch/x86/mm/built-in.a
CC block/blk-timeout.o
CC drivers/pnp/resource.o
CC arch/x86/kernel/cpu/match.o
CC sound/pci/hda/hda_sysfs.o
AR lib/fonts/built-in.a
CC arch/x86/events/intel/uncore.o
CC kernel/irq/migration.o
CC sound/core/isadma.o
CC kernel/futex/syscalls.o
AR sound/synth/emux/built-in.a
AR sound/synth/built-in.a
CC drivers/acpi/acpica/dsobject.o
CC io_uring/uring_cmd.o
AR net/netlink/built-in.a
CC drivers/dma/dw/core.o
CC drivers/dma/hsu/hsu.o
CC net/ethtool/netlink.o
AR drivers/dma/idxd/built-in.a
AR drivers/dma/amd/built-in.a
CC net/ethtool/bitset.o
CC arch/x86/kernel/apic/io_apic.o
AR fs/devpts/built-in.a
AR drivers/dma/mediatek/built-in.a
AR drivers/dma/qcom/built-in.a
CC mm/shrinker.o
AR drivers/soc/apple/built-in.a
AR security/keys/built-in.a
AR drivers/soc/aspeed/built-in.a
CC mm/shmem.o
AR drivers/soc/bcm/built-in.a
AR drivers/soc/fsl/built-in.a
CC fs/proc/stat.o
AR drivers/soc/fujitsu/built-in.a
AR drivers/soc/hisilicon/built-in.a
AR drivers/soc/imx/built-in.a
CC drivers/dma/dw/dw.o
CC drivers/dma/dw/idma32.o
AR drivers/soc/ixp4xx/built-in.a
AR drivers/soc/loongson/built-in.a
AR drivers/soc/mediatek/built-in.a
CC arch/x86/events/msr.o
AR drivers/soc/microchip/built-in.a
CC arch/x86/kernel/cpu/bugs.o
AR drivers/soc/nuvoton/built-in.a
CC drivers/pci/remove.o
AR drivers/soc/pxa/built-in.a
AR drivers/soc/amlogic/built-in.a
AR drivers/soc/qcom/built-in.a
AR drivers/soc/renesas/built-in.a
AR drivers/soc/rockchip/built-in.a
AR drivers/soc/sunxi/built-in.a
AR drivers/soc/ti/built-in.a
AR drivers/soc/versatile/built-in.a
CC kernel/irq/cpuhotplug.o
CC drivers/pnp/manager.o
AR drivers/soc/xilinx/built-in.a
AR drivers/soc/built-in.a
AR drivers/dma/stm32/built-in.a
CC security/selinux/ss/symtab.o
CC sound/core/vmaster.o
CC kernel/module/tree_lookup.o
CC sound/core/ctljack.o
CC drivers/acpi/acpica/dsopcode.o
CC block/blk-lib.o
AR sound/pci/pcxhr/built-in.a
CC lib/crypto/mpi/mpih-mul.o
AR sound/pci/riptide/built-in.a
CC fs/ext4/balloc.o
AR kernel/rcu/built-in.a
CC block/blk-mq.o
CC net/sched/sch_fifo.o
CC crypto/echainiv.o
CC drivers/acpi/dptf/int340x_thermal.o
CC net/sched/cls_cgroup.o
CC drivers/acpi/x86/apple.o
CC sound/pci/hda/hda_controller.o
CC security/selinux/ss/sidtab.o
CC fs/netfs/buffered_write.o
CC fs/proc/uptime.o
CC sound/pci/hda/hda_proc.o
CC arch/x86/events/intel/uncore_nhmex.o
CC kernel/cgroup/cgroup.o
CC arch/x86/events/intel/uncore_snb.o
AR drivers/dma/hsu/built-in.a
CC kernel/module/kallsyms.o
CC kernel/module/procfs.o
CC kernel/futex/pi.o
CC drivers/acpi/acpica/dspkginit.o
CC kernel/time/hrtimer.o
CC arch/x86/kernel/signal_32.o
CC lib/argv_split.o
CC drivers/pci/pci.o
AR drivers/acpi/dptf/built-in.a
CC sound/pci/hda/hda_hwdep.o
CC net/netfilter/nf_queue.o
CC kernel/cgroup/rstat.o
CC drivers/pnp/support.o
CC arch/x86/kernel/cpu/aperfmperf.o
CC sound/core/jack.o
CC drivers/virtio/virtio.o
CC kernel/irq/pm.o
CC drivers/acpi/x86/cmos_rtc.o
CC crypto/ahash.o
CC drivers/dma/dw/acpi.o
CC io_uring/openclose.o
CC security/selinux/ss/avtab.o
CC net/core/sysctl_net_core.o
CC fs/netfs/direct_read.o
CC arch/x86/events/intel/uncore_snbep.o
CC lib/crypto/mpi/mpi-pow.o
CC fs/proc/util.o
CC drivers/acpi/acpica/dsutils.o
AR sound/usb/misc/built-in.a
AR sound/usb/usx2y/built-in.a
CC net/ethtool/strset.o
AR sound/usb/caiaq/built-in.a
CC mm/util.o
AR sound/usb/6fire/built-in.a
AR sound/usb/hiface/built-in.a
AR sound/usb/bcd2000/built-in.a
AR sound/usb/built-in.a
CC lib/zstd/zstd_common_module.o
AR sound/pci/rme9652/built-in.a
CC net/netfilter/nf_sockopt.o
CC fs/jbd2/transaction.o
CC drivers/virtio/virtio_ring.o
CC fs/ramfs/inode.o
CC drivers/pnp/interface.o
CC kernel/module/sysfs.o
CC lib/crypto/mpi/mpiutil.o
CC kernel/futex/requeue.o
CC lib/crypto/blake2s-generic.o
CC drivers/acpi/x86/lpss.o
CC sound/core/hwdep.o
CC net/ipv4/netfilter/nf_defrag_ipv4.o
CC sound/pci/hda/hda_intel.o
CC net/xfrm/xfrm_policy.o
CC arch/x86/kernel/apic/msi.o
CC drivers/acpi/acpica/dswexec.o
CC arch/x86/kernel/cpu/cpuid-deps.o
CC kernel/irq/msi.o
CC drivers/acpi/x86/s2idle.o
CC fs/proc/version.o
AR sound/pci/trident/built-in.a
CC lib/zstd/common/debug.o
AR drivers/dma/dw/built-in.a
CC kernel/trace/trace_clock.o
CC kernel/time/sleep_timeout.o
AR drivers/dma/ti/built-in.a
CC fs/hugetlbfs/inode.o
AR drivers/dma/xilinx/built-in.a
CC drivers/dma/dmaengine.o
CC fs/ramfs/file-mmu.o
CC arch/x86/kernel/traps.o
CC net/sched/ematch.o
AR sound/pci/ymfpci/built-in.a
CC fs/proc/softirqs.o
CC net/xfrm/xfrm_state.o
CC net/unix/af_unix.o
CC net/ipv6/netfilter/ip6_tables.o
CC fs/netfs/direct_write.o
CC io_uring/sqpoll.o
AR kernel/sched/built-in.a
CC arch/x86/kernel/idt.o
CC fs/ext4/bitmap.o
CC security/selinux/ss/policydb.o
CC drivers/pnp/quirks.o
CC arch/x86/kernel/cpu/umwait.o
CC drivers/acpi/acpica/dswload.o
AR lib/crypto/mpi/built-in.a
CC lib/crypto/sha1.o
CC kernel/trace/ring_buffer.o
CC net/packet/af_packet.o
CC drivers/pci/pci-driver.o
CC kernel/bpf/core.o
CC crypto/shash.o
AR kernel/module/built-in.a
CC drivers/dma/virt-dma.o
CC kernel/time/timekeeping.o
CC net/unix/garbage.o
CC kernel/futex/waitwake.o
CC io_uring/xattr.o
CC net/ipv6/af_inet6.o
CC net/netfilter/utils.o
CC sound/core/timer.o
MKCAP arch/x86/kernel/cpu/capflags.c
CC net/ethtool/linkinfo.o
CC arch/x86/kernel/apic/probe_32.o
CC net/core/dev.o
CC fs/proc/namespaces.o
CC drivers/acpi/acpica/dswload2.o
AR fs/ramfs/built-in.a
CC fs/proc/self.o
CC lib/crypto/sha256.o
CC mm/mmzone.o
CC drivers/acpi/x86/utils.o
CC lib/zstd/common/entropy_common.o
CC fs/ext4/block_validity.o
CC fs/netfs/iterator.o
CC fs/netfs/locking.o
CC net/ipv4/netfilter/nf_reject_ipv4.o
CC kernel/irq/affinity.o
AR arch/x86/kernel/apic/built-in.a
CC drivers/pnp/system.o
CC fs/jbd2/commit.o
CC fs/fat/cache.o
CC block/blk-mq-tag.o
CC lib/zstd/common/error_private.o
CC drivers/acpi/acpica/dswscope.o
AR net/sched/built-in.a
CC lib/zstd/common/fse_decompress.o
CC fs/isofs/namei.o
CC fs/nfs/client.o
CC fs/exportfs/expfs.o
CC drivers/dma/acpi-dma.o
AR kernel/futex/built-in.a
CC fs/lockd/clntlock.o
CC fs/nls/nls_base.o
CC mm/vmstat.o
CC fs/lockd/clntproc.o
AR lib/crypto/built-in.a
CC drivers/virtio/virtio_anchor.o
CC drivers/virtio/virtio_pci_modern_dev.o
CC crypto/akcipher.o
AR sound/pci/hda/built-in.a
AR sound/pci/vx222/built-in.a
AR sound/pci/built-in.a
CC drivers/acpi/x86/blacklist.o
CC drivers/acpi/acpica/dswstate.o
CC fs/proc/thread_self.o
CC fs/lockd/clntxdr.o
CC arch/x86/events/intel/uncore_discovery.o
CC net/ethtool/linkmodes.o
CC kernel/irq/matrix.o
AR drivers/pnp/built-in.a
CC kernel/time/ntp.o
CC fs/nfs/dir.o
AR fs/hugetlbfs/built-in.a
CC mm/backing-dev.o
CC fs/netfs/main.o
CC lib/zstd/common/zstd_common.o
CC io_uring/nop.o
CC fs/nls/nls_cp437.o
AR lib/zstd/built-in.a
CC fs/isofs/inode.o
CC lib/bug.o
CC drivers/pci/search.o
CC lib/buildid.o
AR fs/exportfs/built-in.a
CC fs/ext4/dir.o
CC net/netfilter/nfnetlink.o
AR net/dsa/built-in.a
CC fs/netfs/misc.o
CC net/ipv6/netfilter/ip6table_filter.o
CC net/netfilter/nfnetlink_log.o
CC fs/ext4/ext4_jbd2.o
AR drivers/acpi/x86/built-in.a
CC drivers/acpi/acpica/evevent.o
CC net/xfrm/xfrm_hash.o
CC arch/x86/kernel/irq.o
CC fs/fat/dir.o
AR drivers/dma/built-in.a
AR fs/unicode/built-in.a
CC fs/lockd/host.o
CC sound/core/hrtimer.o
CC net/sunrpc/auth_gss/auth_gss.o
CC block/blk-stat.o
CC fs/proc/proc_sysctl.o
CC kernel/cgroup/namespace.o
CC drivers/virtio/virtio_pci_legacy_dev.o
CC fs/nls/nls_ascii.o
CC crypto/sig.o
CC fs/jbd2/recovery.o
CC drivers/acpi/acpica/evgpe.o
CC net/ipv4/netfilter/ip_tables.o
CC kernel/time/clocksource.o
CC fs/nfs/file.o
CC net/sunrpc/clnt.o
CC sound/core/pcm.o
CC net/unix/sysctl_net_unix.o
CC net/core/dev_api.o
CC io_uring/fs.o
CC fs/nls/nls_iso8859-1.o
CC lib/clz_tab.o
CC arch/x86/events/intel/cstate.o
CC lib/cmdline.o
CC drivers/pci/rom.o
CC net/ethtool/rss.o
CC io_uring/splice.o
CC lib/cpumask.o
CC net/netfilter/nf_conntrack_core.o
CC fs/ext4/extents.o
CC mm/mm_init.o
CC security/selinux/ss/services.o
CC drivers/virtio/virtio_pci_modern.o
CC mm/percpu.o
CC drivers/acpi/acpica/evgpeblk.o
CC block/blk-mq-sysfs.o
CC net/ipv6/netfilter/ip6table_mangle.o
CC arch/x86/kernel/irq_32.o
CC fs/fat/fatent.o
AR kernel/irq/built-in.a
CC kernel/trace/trace.o
CC fs/nls/nls_utf8.o
CC fs/ext4/extents_status.o
CC sound/core/pcm_native.o
CC crypto/kpp.o
CC kernel/cgroup/cgroup-v1.o
CC kernel/cgroup/freezer.o
CC fs/isofs/dir.o
CC drivers/virtio/virtio_pci_common.o
AR kernel/bpf/built-in.a
CC drivers/virtio/virtio_pci_legacy.o
CC drivers/pci/setup-res.o
CC net/xfrm/xfrm_input.o
CC fs/lockd/svc.o
CC fs/netfs/objects.o
CC drivers/acpi/acpica/evgpeinit.o
CC fs/jbd2/checkpoint.o
AR fs/nls/built-in.a
CC net/netfilter/nf_conntrack_standalone.o
CC fs/fat/file.o
CC lib/ctype.o
CC drivers/pci/irq.o
CC lib/dec_and_lock.o
CC kernel/time/jiffies.o
CC drivers/pci/vpd.o
AR arch/x86/events/intel/built-in.a
AR arch/x86/events/built-in.a
CC drivers/virtio/virtio_pci_admin_legacy_io.o
CC net/sunrpc/xprt.o
AR net/unix/built-in.a
CC security/selinux/ss/conditional.o
CC drivers/tty/vt/vt_ioctl.o
CC fs/ext4/file.o
CC drivers/char/hw_random/core.o
CC io_uring/sync.o
AR net/packet/built-in.a
CC fs/proc/proc_net.o
CC lib/decompress.o
CC drivers/char/hw_random/intel-rng.o
CC block/blk-mq-cpumap.o
CC block/blk-mq-sched.o
CC lib/decompress_bunzip2.o
CC drivers/acpi/acpica/evgpeutil.o
CC net/ethtool/linkstate.o
CC drivers/acpi/acpica/evglock.o
CC kernel/time/timer_list.o
CC fs/isofs/util.o
CC drivers/acpi/tables.o
ASN.1 crypto/rsapubkey.asn1.[ch]
ASN.1 crypto/rsaprivkey.asn1.[ch]
CC crypto/rsa.o
CC net/sunrpc/auth_gss/gss_mech_switch.o
CC net/ipv4/netfilter/iptable_filter.o
CC net/ipv6/netfilter/nf_defrag_ipv6_hooks.o
CC fs/netfs/read_collect.o
CC fs/nfs/getroot.o
CC net/xfrm/xfrm_output.o
AR net/wireless/tests/built-in.a
CC drivers/acpi/acpica/evhandler.o
CC net/wireless/core.o
CC net/xfrm/xfrm_sysctl.o
CC drivers/virtio/virtio_input.o
AR drivers/iommu/amd/built-in.a
AR drivers/iommu/intel/built-in.a
CC fs/fat/inode.o
AR drivers/iommu/arm/arm-smmu/built-in.a
AR drivers/iommu/arm/arm-smmu-v3/built-in.a
AR drivers/iommu/arm/built-in.a
AR drivers/iommu/iommufd/built-in.a
AR drivers/iommu/riscv/built-in.a
CC drivers/iommu/iommu.o
CC drivers/acpi/osi.o
CC fs/jbd2/revoke.o
CC drivers/pci/setup-bus.o
CC net/ipv4/route.o
CC drivers/char/hw_random/amd-rng.o
CC io_uring/msg_ring.o
CC fs/isofs/rock.o
CC fs/proc/kcore.o
CC drivers/tty/vt/vc_screen.o
CC lib/decompress_inflate.o
CC security/selinux/ss/mls.o
CC kernel/cgroup/legacy_freezer.o
CC drivers/pci/vc.o
CC fs/lockd/svclock.o
CC drivers/acpi/acpica/evmisc.o
CC net/sunrpc/auth_gss/svcauth_gss.o
CC drivers/char/hw_random/geode-rng.o
CC kernel/time/timeconv.o
CC fs/jbd2/journal.o
CC block/ioctl.o
CC security/selinux/ss/context.o
CC crypto/rsa_helper.o
CC net/ethtool/debug.o
CC net/ipv6/netfilter/nf_conntrack_reasm.o
CC crypto/rsa-pkcs1pad.o
CC net/wireless/sysfs.o
CC lib/decompress_unlz4.o
CC drivers/acpi/acpica/evregion.o
CC kernel/time/timecounter.o
CC drivers/pci/mmap.o
CC fs/netfs/read_pgpriv2.o
CC net/netfilter/nf_conntrack_expect.o
CC drivers/virtio/virtio_dma_buf.o
CC kernel/time/alarmtimer.o
CC drivers/tty/vt/selection.o
CC net/ipv6/anycast.o
CC net/ipv4/netfilter/iptable_mangle.o
CC fs/fat/misc.o
CC mm/slab_common.o
CC kernel/cgroup/pids.o
CC fs/nfs/inode.o
CC drivers/tty/hvc/hvc_console.o
CC io_uring/advise.o
CC drivers/char/hw_random/via-rng.o
CC fs/isofs/export.o
CC drivers/iommu/iommu-traces.o
CC lib/decompress_unlzma.o
CC fs/netfs/read_retry.o
CC fs/nfs/super.o
CC fs/proc/kmsg.o
CC kernel/time/posix-timers.o
CC arch/x86/kernel/cpu/powerflags.o
CC net/ipv4/netfilter/ipt_REJECT.o
CC drivers/acpi/acpica/evrgnini.o
AR net/mac80211/tests/built-in.a
CC net/mac80211/main.o
CC arch/x86/kernel/cpu/topology.o
CC fs/autofs/init.o
CC sound/core/pcm_lib.o
CC block/genhd.o
CC crypto/rsassa-pkcs1.o
CC net/mac80211/status.o
CC net/ipv6/netfilter/nf_reject_ipv6.o
AR drivers/virtio/built-in.a
CC fs/fat/nfs.o
CC drivers/iommu/iommu-sysfs.o
AR drivers/char/hw_random/built-in.a
CC drivers/char/agp/backend.o
CC security/selinux/netlabel.o
CC net/xfrm/xfrm_replay.o
CC net/ethtool/wol.o
CC kernel/cgroup/rdma.o
CC drivers/tty/vt/keyboard.o
CC fs/proc/page.o
CC fs/lockd/svcshare.o
CC kernel/trace/trace_output.o
CC drivers/acpi/acpica/evsci.o
CC fs/isofs/joliet.o
CC net/ipv6/netfilter/ip6t_ipv6header.o
CC net/wireless/radiotap.o
CC io_uring/statx.o
CC arch/x86/kernel/cpu/proc.o
CC drivers/pci/devres.o
AR drivers/tty/hvc/built-in.a
CC drivers/pci/proc.o
CC lib/decompress_unlzo.o
AR sound/firewire/built-in.a
CC drivers/iommu/dma-iommu.o
CC fs/autofs/inode.o
CC drivers/char/agp/generic.o
CC net/xfrm/xfrm_device.o
CC kernel/events/core.o
CC drivers/acpi/acpica/evxface.o
CC fs/netfs/read_single.o
CC lib/decompress_unxz.o
CC mm/compaction.o
CC crypto/acompress.o
CC drivers/pci/pci-sysfs.o
CC net/netfilter/nf_conntrack_helper.o
CC fs/fat/namei_vfat.o
CC net/sunrpc/auth_gss/gss_rpc_upcall.o
CC [M] net/ipv4/netfilter/iptable_nat.o
CC fs/isofs/compress.o
CC kernel/cgroup/cpuset.o
CC drivers/acpi/osl.o
AR fs/proc/built-in.a
CC net/ipv6/ip6_output.o
CC fs/ext4/fsmap.o
CC kernel/time/posix-cpu-timers.o
CC block/ioprio.o
CC arch/x86/kernel/cpu/feat_ctl.o
CC drivers/acpi/acpica/evxfevnt.o
CC lib/decompress_unzstd.o
CC io_uring/timeout.o
CC net/netlabel/netlabel_user.o
CC net/ethtool/features.o
CC lib/dump_stack.o
CC kernel/trace/trace_seq.o
AR sound/sparc/built-in.a
CC net/xfrm/xfrm_nat_keepalive.o
CC kernel/cgroup/misc.o
CC drivers/char/mem.o
CC drivers/tty/vt/vt.o
CC fs/lockd/svcproc.o
CC fs/autofs/root.o
CC net/rfkill/core.o
CC fs/fat/namei_msdos.o
CC net/wireless/util.o
CC fs/netfs/rolling_buffer.o
AR fs/jbd2/built-in.a
CC net/wireless/reg.o
AR security/selinux/built-in.a
AR security/built-in.a
CC net/sunrpc/socklib.o
CC net/wireless/scan.o
CC arch/x86/kernel/cpu/intel.o
CC sound/core/pcm_misc.o
CC drivers/tty/serial/8250/8250_core.o
CC drivers/acpi/acpica/evxfgpe.o
CC crypto/scompress.o
CC net/ipv6/netfilter/ip6t_REJECT.o
CC drivers/char/agp/isoch.o
AR drivers/tty/ipwireless/built-in.a
CC drivers/tty/serial/serial_core.o
CC drivers/tty/serial/serial_base_bus.o
AR fs/isofs/built-in.a
CC drivers/tty/serial/serial_ctrl.o
CC net/netlabel/netlabel_kapi.o
CC kernel/trace/trace_stat.o
CC net/rfkill/input.o
CC drivers/pci/slot.o
CC lib/earlycpio.o
CC net/sunrpc/auth_gss/gss_rpc_xdr.o
AR net/ipv4/netfilter/built-in.a
CC lib/extable.o
CC net/ipv4/inetpeer.o
CC block/badblocks.o
CC drivers/iommu/iova.o
CC drivers/tty/tty_io.o
CC drivers/acpi/acpica/evxfregn.o
CC lib/flex_proportions.o
CC net/netfilter/nf_conntrack_proto.o
CC kernel/time/posix-clock.o
CC fs/nfs/io.o
CC arch/x86/kernel/cpu/tsx.o
CC kernel/time/itimer.o
CC net/core/dev_addr_lists.o
COPY drivers/tty/vt/defkeymap.c
AR drivers/gpu/host1x/built-in.a
CC kernel/fork.o
CC net/ethtool/privflags.o
CC sound/core/pcm_memory.o
CC io_uring/fdinfo.o
CC fs/autofs/symlink.o
AR drivers/gpu/vga/built-in.a
CC drivers/tty/n_tty.o
CC kernel/cgroup/debug.o
CC sound/core/memalloc.o
AR fs/fat/built-in.a
CC drivers/acpi/utils.o
CC crypto/algboss.o
CC drivers/char/agp/amd64-agp.o
CC fs/netfs/write_collect.o
AR drivers/gpu/drm/tests/built-in.a
AR drivers/gpu/drm/arm/built-in.a
CC net/xfrm/xfrm_algo.o
AR drivers/gpu/drm/clients/built-in.a
CC drivers/acpi/acpica/exconcat.o
CC net/mac80211/driver-ops.o
CC drivers/gpu/drm/display/drm_display_helper_mod.o
CC lib/idr.o
AR net/rfkill/built-in.a
CC crypto/testmgr.o
CC net/wireless/nl80211.o
CC drivers/pci/pci-acpi.o
CC fs/lockd/svcsubs.o
CC fs/9p/vfs_super.o
CC drivers/tty/serial/8250/8250_platform.o
AR fs/hostfs/built-in.a
CC drivers/char/agp/intel-agp.o
CC kernel/trace/trace_printk.o
CC fs/ext4/fsync.o
CC arch/x86/kernel/cpu/intel_epb.o
CC drivers/gpu/drm/display/drm_dp_dual_mode_helper.o
AR net/ipv6/netfilter/built-in.a
CC block/blk-rq-qos.o
CC mm/show_mem.o
CC drivers/acpi/acpica/exconfig.o
AR drivers/iommu/built-in.a
CC drivers/char/agp/intel-gtt.o
CC drivers/tty/serial/serial_port.o
CC net/netlabel/netlabel_domainhash.o
CC fs/autofs/waitq.o
CC net/ipv6/ip6_input.o
CC lib/iomem_copy.o
CC arch/x86/kernel/cpu/amd.o
CC lib/irq_regs.o
CC net/ipv4/protocol.o
CC net/sunrpc/auth_gss/trace.o
AR kernel/cgroup/built-in.a
CC drivers/tty/tty_ioctl.o
CC block/disk-events.o
CC io_uring/cancel.o
CC fs/nfs/direct.o
CC fs/ext4/hash.o
CC drivers/char/random.o
CC kernel/time/clockevents.o
CC drivers/gpu/drm/ttm/ttm_tt.o
CC net/ethtool/rings.o
CC arch/x86/kernel/dumpstack_32.o
CC sound/core/pcm_timer.o
CC lib/is_single_threaded.o
CC fs/9p/vfs_inode.o
CC drivers/acpi/acpica/exconvrt.o
CC crypto/cmac.o
CC drivers/tty/serial/8250/8250_pnp.o
CC net/core/dst.o
CC net/ipv4/ip_input.o
CC kernel/trace/pid_list.o
CC fs/netfs/write_issue.o
CC net/netfilter/nf_conntrack_proto_generic.o
AR sound/spi/built-in.a
CC net/xfrm/xfrm_user.o
CC kernel/time/tick-common.o
CC drivers/pci/iomap.o
CC net/wireless/mlme.o
CC drivers/gpu/drm/display/drm_dp_helper.o
CC lib/klist.o
CC drivers/char/misc.o
CC fs/lockd/mon.o
CC mm/interval_tree.o
CC net/9p/mod.o
CC fs/ext4/ialloc.o
CC drivers/acpi/acpica/excreate.o
CC drivers/tty/vt/consolemap.o
CC fs/autofs/expire.o
CC crypto/hmac.o
CC block/blk-ia-ranges.o
CC net/sunrpc/xprtsock.o
CC lib/kobject.o
CC sound/core/seq_device.o
CC drivers/acpi/reboot.o
CC io_uring/waitid.o
AR drivers/char/agp/built-in.a
CC net/mac80211/sta_info.o
CC drivers/gpu/drm/display/drm_dp_mst_topology.o
CC arch/x86/kernel/cpu/hygon.o
CC net/dns_resolver/dns_key.o
CC drivers/gpu/drm/i915/i915_config.o
CC net/handshake/alert.o
CC drivers/tty/serial/8250/8250_rsa.o
CC net/devres.o
CC drivers/gpu/drm/ttm/ttm_bo.o
CC drivers/acpi/acpica/exdebug.o
CC net/netlabel/netlabel_addrlist.o
CC net/ethtool/channels.o
CC net/9p/client.o
CC drivers/gpu/drm/i915/i915_driver.o
CC drivers/pci/quirks.o
CC net/socket.o
CC kernel/trace/trace_sched_switch.o
CC fs/9p/vfs_inode_dotl.o
CC arch/x86/kernel/cpu/centaur.o
AR sound/core/built-in.a
CC crypto/crypto_null.o
AR sound/parisc/built-in.a
CC lib/kobject_uevent.o
AR sound/pcmcia/vx/built-in.a
CC net/sysctl_net.o
CC drivers/acpi/acpica/exdump.o
AR sound/pcmcia/pdaudiocf/built-in.a
AR sound/pcmcia/built-in.a
CC mm/list_lru.o
CC net/netfilter/nf_conntrack_proto_tcp.o
CC kernel/time/tick-broadcast.o
CC drivers/char/virtio_console.o
CC block/early-lookup.o
AR sound/mips/built-in.a
AR sound/soc/built-in.a
AR sound/atmel/built-in.a
CC net/ipv6/addrconf.o
CC net/dns_resolver/dns_query.o
CC sound/hda/hda_bus_type.o
CC net/core/netevent.o
CC fs/autofs/dev-ioctl.o
CC drivers/acpi/acpica/exfield.o
HOSTCC drivers/tty/vt/conmakehash
CC drivers/tty/serial/8250/8250_port.o
CC fs/netfs/write_retry.o
AR sound/x86/built-in.a
CC net/ipv4/ip_fragment.o
CC io_uring/register.o
CC net/ipv6/addrlabel.o
CC fs/lockd/trace.o
CC drivers/gpu/drm/i915/i915_drm_client.o
CC mm/workingset.o
CC arch/x86/kernel/cpu/transmeta.o
CC drivers/tty/vt/defkeymap.o
CC net/sunrpc/sched.o
CC crypto/md5.o
CC fs/nfs/pagelist.o
CC drivers/acpi/acpica/exfldio.o
CC drivers/gpu/drm/ttm/ttm_bo_util.o
CONMK drivers/tty/vt/consolemap_deftbl.c
CC drivers/tty/vt/consolemap_deftbl.o
CC drivers/gpu/drm/i915/i915_getparam.o
CC net/sunrpc/auth_gss/gss_krb5_mech.o
AR drivers/tty/vt/built-in.a
CC arch/x86/kernel/cpu/zhaoxin.o
CC kernel/time/tick-broadcast-hrtimer.o
CC arch/x86/kernel/time.o
CC net/ethtool/coalesce.o
CC sound/hda/hdac_bus.o
CC fs/lockd/xdr.o
CC net/handshake/genl.o
CC block/bsg.o
AR net/dns_resolver/built-in.a
CC block/blk-cgroup.o
CC net/netlabel/netlabel_mgmt.o
CC fs/9p/vfs_addr.o
CC crypto/sha256_generic.o
CC net/core/neighbour.o
AR fs/autofs/built-in.a
CC kernel/trace/trace_nop.o
CC block/blk-ioprio.o
CC net/handshake/netlink.o
CC arch/x86/kernel/cpu/vortex.o
CC net/sunrpc/auth_gss/gss_krb5_seal.o
CC drivers/tty/tty_ldisc.o
CC drivers/acpi/acpica/exmisc.o
AR fs/netfs/built-in.a
CC lib/logic_pio.o
CC net/core/rtnetlink.o
CC kernel/time/tick-oneshot.o
CC drivers/pci/pci-label.o
CC mm/debug.o
CC fs/ext4/indirect.o
CC drivers/gpu/drm/i915/i915_ioctl.o
CC arch/x86/kernel/cpu/perfctr-watchdog.o
CC drivers/char/hpet.o
CC arch/x86/kernel/cpu/vmware.o
CC drivers/tty/serial/8250/8250_dma.o
CC drivers/acpi/acpica/exmutex.o
CC drivers/gpu/drm/ttm/ttm_bo_vm.o
CC sound/hda/hdac_device.o
CC net/9p/error.o
AR sound/xen/built-in.a
CC net/ethtool/pause.o
CC crypto/sha512_generic.o
CC net/ipv4/ip_forward.o
CC drivers/connector/cn_queue.o
CC drivers/gpu/drm/i915/i915_irq.o
CC drivers/base/power/sysfs.o
CC kernel/time/tick-sched.o
CC drivers/base/firmware_loader/builtin/main.o
CC drivers/base/firmware_loader/main.o
CC lib/maple_tree.o
CC fs/9p/vfs_file.o
CC kernel/trace/blktrace.o
AR net/xfrm/built-in.a
CC net/netfilter/nf_conntrack_proto_udp.o
CC drivers/tty/tty_buffer.o
CC sound/hda/hdac_sysfs.o
CC io_uring/truncate.o
CC net/ipv6/route.o
CC mm/gup.o
CC drivers/acpi/acpica/exnames.o
CC drivers/pci/vgaarb.o
CC net/9p/protocol.o
CC fs/lockd/netlink.o
CC drivers/connector/connector.o
AR drivers/base/firmware_loader/builtin/built-in.a
CC kernel/time/timer_migration.o
CC kernel/events/ring_buffer.o
CC kernel/events/callchain.o
CC drivers/char/nvram.o
CC net/sunrpc/auth_gss/gss_krb5_unseal.o
CC net/handshake/request.o
CC net/netlabel/netlabel_unlabeled.o
CC drivers/base/power/generic_ops.o
CC drivers/gpu/drm/i915/i915_mitigations.o
CC mm/mmap_lock.o
CC arch/x86/kernel/cpu/hypervisor.o
CC drivers/tty/serial/8250/8250_dwlib.o
CC drivers/gpu/drm/ttm/ttm_module.o
CC drivers/acpi/acpica/exoparg1.o
CC drivers/acpi/acpica/exoparg2.o
CC crypto/sha3_generic.o
CC block/blk-iolatency.o
CC net/handshake/tlshd.o
CC drivers/gpu/drm/display/drm_dsc_helper.o
CC drivers/gpu/drm/display/drm_hdcp_helper.o
CC fs/9p/vfs_dir.o
CC arch/x86/kernel/cpu/mshyperv.o
CC net/ethtool/eee.o
CC net/ipv4/ip_options.o
CC arch/x86/kernel/ioport.o
CC drivers/base/power/common.o
AR sound/virtio/built-in.a
CC sound/hda/hdac_regmap.o
CC sound/hda/hdac_controller.o
AR drivers/base/firmware_loader/built-in.a
CC io_uring/memmap.o
CC net/handshake/trace.o
CC fs/lockd/clnt4xdr.o
CC fs/nfs/read.o
CC net/9p/trans_common.o
CC fs/lockd/xdr4.o
CC drivers/acpi/acpica/exoparg3.o
CC drivers/gpu/drm/ttm/ttm_execbuf_util.o
CC block/blk-iocost.o
CC drivers/connector/cn_proc.o
CC crypto/ecb.o
CC kernel/exec_domain.o
AR drivers/char/built-in.a
CC lib/memcat_p.o
CC arch/x86/kernel/cpu/debugfs.o
CC drivers/tty/serial/8250/8250_pcilib.o
AR drivers/pci/built-in.a
CC fs/9p/vfs_dentry.o
CC drivers/base/power/qos.o
CC fs/ext4/inline.o
CC net/netfilter/nf_conntrack_proto_icmp.o
CC net/mac80211/wep.o
CC drivers/gpu/drm/i915/i915_module.o
CC arch/x86/kernel/cpu/bus_lock.o
CC kernel/events/hw_breakpoint.o
CC fs/nfs/symlink.o
CC net/sunrpc/auth_gss/gss_krb5_wrap.o
CC drivers/base/regmap/regmap.o
CC drivers/acpi/acpica/exoparg6.o
AR drivers/base/test/built-in.a
CC sound/hda/hdac_stream.o
CC drivers/gpu/drm/i915/i915_params.o
CC net/9p/trans_fd.o
CC kernel/trace/trace_events.o
CC crypto/cbc.o
CC drivers/base/regmap/regcache.o
CC drivers/gpu/drm/display/drm_hdmi_helper.o
CC kernel/time/vsyscall.o
CC drivers/block/loop.o
AR drivers/gpu/drm/renesas/rcar-du/built-in.a
CC drivers/gpu/drm/ttm/ttm_range_manager.o
AR drivers/gpu/drm/renesas/rz-du/built-in.a
CC io_uring/alloc_cache.o
AR drivers/gpu/drm/renesas/built-in.a
CC io_uring/io-wq.o
CC net/ethtool/tsinfo.o
CC sound/hda/array.o
CC fs/nfs/unlink.o
CC net/netlabel/netlabel_cipso_v4.o
CC net/ipv4/ip_output.o
CC drivers/acpi/acpica/exprep.o
CC drivers/tty/serial/8250/8250_early.o
CC drivers/block/virtio_blk.o
CC net/sunrpc/auth.o
CC fs/9p/v9fs.o
AR drivers/gpu/drm/omapdrm/built-in.a
CC net/core/utils.o
CC kernel/trace/trace_export.o
CC crypto/ctr.o
CC crypto/gcm.o
CC kernel/time/timekeeping_debug.o
CC net/sunrpc/auth_gss/gss_krb5_crypto.o
CC mm/highmem.o
CC fs/lockd/svc4proc.o
AR drivers/connector/built-in.a
CC net/wireless/ibss.o
CC drivers/acpi/acpica/exregion.o
CC arch/x86/kernel/cpu/capflags.o
CC drivers/gpu/drm/display/drm_scdc_helper.o
AR arch/x86/kernel/cpu/built-in.a
CC drivers/tty/tty_port.o
CC arch/x86/kernel/dumpstack.o
CC net/wireless/sme.o
CC drivers/base/power/runtime.o
CC drivers/base/power/wakeirq.o
CC drivers/acpi/acpica/exresnte.o
CC drivers/gpu/drm/ttm/ttm_resource.o
CC lib/nmi_backtrace.o
CC net/netfilter/nf_conntrack_extend.o
CC net/mac80211/aead_api.o
CC sound/hda/hdmi_chmap.o
AR net/handshake/built-in.a
CC arch/x86/kernel/nmi.o
CC kernel/time/namespace.o
CC drivers/tty/serial/8250/8250_exar.o
CC drivers/gpu/drm/i915/i915_pci.o
CC net/wireless/chan.o
CC net/wireless/ethtool.o
CC kernel/events/uprobes.o
CC fs/9p/fid.o
CC io_uring/futex.o
CC sound/sound_core.o
CC drivers/acpi/acpica/exresolv.o
CC arch/x86/kernel/ldt.o
CC fs/debugfs/inode.o
CC net/9p/trans_virtio.o
CC fs/ext4/inode.o
CC net/ethtool/cabletest.o
CC net/ipv4/ip_sockglue.o
CC mm/memory.o
CC net/netlabel/netlabel_calipso.o
CC drivers/base/component.o
CC net/mac80211/wpa.o
CC net/sunrpc/auth_null.o
CC crypto/ccm.o
CC drivers/gpu/drm/ttm/ttm_pool.o
CC net/core/link_watch.o
CC fs/nfs/write.o
AR drivers/gpu/drm/display/built-in.a
CC fs/nfs/namespace.o
CC net/mac80211/scan.o
AR drivers/block/built-in.a
CC net/netfilter/nf_conntrack_acct.o
CC drivers/acpi/acpica/exresop.o
CC sound/hda/trace.o
AR kernel/time/built-in.a
AR drivers/gpu/drm/tilcdc/built-in.a
CC drivers/base/power/main.o
CC drivers/base/regmap/regcache-rbtree.o
CC fs/lockd/procfs.o
CC drivers/base/core.o
CC lib/objpool.o
CC block/mq-deadline.o
CC net/sunrpc/auth_gss/gss_krb5_keys.o
CC fs/9p/xattr.o
CC drivers/tty/serial/8250/8250_lpss.o
CC drivers/tty/tty_mutex.o
CC crypto/aes_generic.o
CC io_uring/epoll.o
CC drivers/acpi/nvs.o
CC drivers/gpu/drm/i915/i915_scatterlist.o
CC drivers/acpi/acpica/exserial.o
CC kernel/panic.o
CC arch/x86/kernel/setup.o
CC drivers/tty/serial/earlycon.o
CC fs/debugfs/file.o
CC drivers/base/regmap/regcache-flat.o
CC net/ipv6/ip6_fib.o
CC kernel/trace/trace_event_perf.o
CC fs/ext4/ioctl.o
CC drivers/acpi/acpica/exstore.o
CC net/core/filter.o
CC net/ethtool/tunnels.o
CC drivers/gpu/drm/virtio/virtgpu_drv.o
AR fs/lockd/built-in.a
AR drivers/gpu/drm/imx/built-in.a
CC fs/ext4/mballoc.o
CC net/core/sock_diag.o
CC drivers/acpi/acpica/exstoren.o
CC drivers/gpu/drm/i915/i915_switcheroo.o
AR net/netlabel/built-in.a
CC kernel/cpu.o
AR net/9p/built-in.a
CC net/core/dev_ioctl.o
AR fs/9p/built-in.a
CC drivers/base/bus.o
CC sound/hda/hdac_component.o
CC drivers/tty/serial/8250/8250_mid.o
CC net/netfilter/nf_conntrack_seqadj.o
CC net/ethtool/fec.o
CC fs/nfs/mount_clnt.o
CC drivers/gpu/drm/ttm/ttm_device.o
CC arch/x86/kernel/x86_init.o
CC sound/last.o
CC mm/mincore.o
CC kernel/exit.o
CC io_uring/napi.o
CC drivers/base/regmap/regcache-maple.o
CC crypto/authenc.o
CC drivers/base/dd.o
CC drivers/gpu/drm/virtio/virtgpu_kms.o
AR kernel/events/built-in.a
CC drivers/acpi/acpica/exstorob.o
CC drivers/gpu/drm/virtio/virtgpu_gem.o
CC drivers/gpu/drm/i915/i915_sysfs.o
CC drivers/acpi/wakeup.o
AR net/sunrpc/auth_gss/built-in.a
CC net/mac80211/offchannel.o
CC net/netfilter/nf_conntrack_proto_icmpv6.o
CC block/kyber-iosched.o
CC net/core/tso.o
CC drivers/misc/eeprom/eeprom_93cx6.o
CC net/ipv4/inet_hashtables.o
CC fs/tracefs/inode.o
CC kernel/trace/trace_events_filter.o
CC drivers/acpi/acpica/exsystem.o
CC drivers/base/power/wakeup.o
CC sound/hda/hdac_i915.o
CC net/sunrpc/auth_tls.o
AR drivers/misc/cb710/built-in.a
CC drivers/tty/serial/8250/8250_pci.o
CC lib/plist.o
CC arch/x86/kernel/i8259.o
CC drivers/gpu/drm/ttm/ttm_sys_manager.o
AR fs/debugfs/built-in.a
CC arch/x86/kernel/irqinit.o
CC arch/x86/kernel/jump_label.o
CC arch/x86/kernel/irq_work.o
CC net/mac80211/ht.o
CC drivers/base/regmap/regmap-debugfs.o
CC net/netfilter/nf_conntrack_netlink.o
CC net/netfilter/nf_conntrack_ftp.o
AR drivers/misc/eeprom/built-in.a
CC net/ipv4/inet_timewait_sock.o
AR drivers/misc/lis3lv02d/built-in.a
AR drivers/misc/cardreader/built-in.a
CC drivers/acpi/acpica/extrace.o
CC net/ipv4/inet_connection_sock.o
CC drivers/base/power/wakeup_stats.o
AR drivers/misc/keba/built-in.a
CC net/ethtool/eeprom.o
AR drivers/misc/built-in.a
CC net/mac80211/agg-tx.o
CC drivers/gpu/drm/virtio/virtgpu_vram.o
CC net/ipv6/ipv6_sockglue.o
CC net/sunrpc/auth_unix.o
CC sound/hda/intel-dsp-config.o
CC drivers/tty/serial/8250/8250_pericom.o
CC drivers/gpu/drm/i915/i915_utils.o
CC lib/radix-tree.o
CC crypto/authencesn.o
CC drivers/gpu/drm/ttm/ttm_backup.o
CC fs/nfs/nfstrace.o
CC drivers/gpu/drm/ttm/ttm_agp_backend.o
AR drivers/mfd/built-in.a
CC kernel/softirq.o
AR drivers/gpu/drm/panel/built-in.a
CC net/sunrpc/svc.o
CC mm/mlock.o
CC drivers/acpi/acpica/exutils.o
CC kernel/trace/trace_events_trigger.o
CC fs/tracefs/event_inode.o
CC fs/ext4/migrate.o
CC drivers/base/syscore.o
AR io_uring/built-in.a
CC net/ipv4/tcp.o
AR drivers/gpu/drm/bridge/analogix/built-in.a
AR drivers/gpu/drm/bridge/cadence/built-in.a
AR drivers/gpu/drm/bridge/imx/built-in.a
AR drivers/gpu/drm/bridge/synopsys/built-in.a
AR drivers/gpu/drm/hisilicon/built-in.a
AR drivers/gpu/drm/bridge/built-in.a
CC fs/nfs/export.o
CC net/netfilter/nf_conntrack_irc.o
AR drivers/base/regmap/built-in.a
CC net/core/sock_reuseport.o
CC drivers/gpu/drm/i915/intel_clock_gating.o
CC arch/x86/kernel/probe_roms.o
CC kernel/resource.o
AR drivers/nfc/built-in.a
CC net/core/fib_notifier.o
AR drivers/gpu/drm/mxsfb/built-in.a
CC net/ipv6/ndisc.o
CC drivers/gpu/drm/virtio/virtgpu_display.o
CC drivers/acpi/acpica/hwacpi.o
CC sound/hda/intel-nhlt.o
CC drivers/base/power/trace.o
CC [M] fs/efivarfs/inode.o
CC fs/open.o
CC block/blk-mq-debugfs.o
CC drivers/gpu/drm/virtio/virtgpu_vq.o
CC lib/ratelimit.o
AR drivers/gpu/drm/ttm/built-in.a
CC net/ipv4/tcp_input.o
AR drivers/tty/serial/8250/built-in.a
CC net/ethtool/stats.o
AR drivers/tty/serial/built-in.a
CC kernel/sysctl.o
CC net/mac80211/agg-rx.o
CC drivers/tty/tty_ldsem.o
CC [M] fs/efivarfs/file.o
CC net/wireless/mesh.o
CC drivers/acpi/acpica/hwesleep.o
CC drivers/acpi/acpica/hwgpe.o
CC lib/rbtree.o
CC crypto/lzo.o
CC fs/read_write.o
CC sound/hda/intel-sdw-acpi.o
CC mm/mmap.o
CC arch/x86/kernel/sys_ia32.o
CC net/ipv6/udp.o
CC drivers/tty/tty_baudrate.o
AR fs/tracefs/built-in.a
CC net/ethtool/phc_vclocks.o
CC block/blk-pm.o
CC net/wireless/ap.o
CC kernel/capability.o
CC net/sunrpc/svcsock.o
CC net/ethtool/mm.o
CC net/ipv6/udplite.o
AR drivers/base/power/built-in.a
CC lib/seq_buf.o
AR drivers/gpu/drm/sysfb/built-in.a
CC drivers/base/driver.o
CC drivers/gpu/drm/i915/intel_cpu_info.o
CC drivers/acpi/acpica/hwregs.o
CC drivers/acpi/acpica/hwsleep.o
CC kernel/trace/trace_eprobe.o
CC net/netfilter/nf_conntrack_sip.o
CC [M] fs/efivarfs/super.o
CC [M] fs/efivarfs/vars.o
CC fs/nfs/sysfs.o
CC net/netfilter/nf_nat_core.o
CC mm/mmu_gather.o
CC crypto/lzo-rle.o
CC drivers/gpu/drm/i915/intel_device_info.o
CC net/ipv6/raw.o
AR sound/hda/built-in.a
AR sound/built-in.a
CC net/ipv4/tcp_output.o
CC net/sunrpc/svcauth.o
CC net/wireless/trace.o
CC arch/x86/kernel/ksysfs.o
CC drivers/tty/tty_jobctrl.o
CC drivers/acpi/acpica/hwvalid.o
CC drivers/gpu/drm/virtio/virtgpu_fence.o
CC lib/siphash.o
CC drivers/base/class.o
CC block/holder.o
CC kernel/trace/trace_kprobe.o
CC net/mac80211/vht.o
AR drivers/gpu/drm/tiny/built-in.a
CC drivers/gpu/drm/i915/intel_memory_region.o
CC net/sunrpc/svcauth_unix.o
CC net/core/xdp.o
CC fs/nfs/fs_context.o
AR drivers/dax/hmem/built-in.a
AR drivers/dax/built-in.a
CC net/ipv6/icmp.o
CC crypto/rng.o
CC fs/file_table.o
CC arch/x86/kernel/bootflag.o
CC drivers/acpi/acpica/hwxface.o
CC drivers/dma-buf/dma-buf.o
CC kernel/trace/error_report-traces.o
CC lib/string.o
CC net/ethtool/module.o
CC drivers/acpi/sleep.o
CC mm/mprotect.o
CC fs/ext4/mmp.o
CC drivers/tty/n_null.o
CC net/wireless/ocb.o
LD [M] fs/efivarfs/efivarfs.o
CC fs/nfs/nfsroot.o
CC lib/timerqueue.o
AR block/built-in.a
CC net/mac80211/he.o
CC drivers/gpu/drm/virtio/virtgpu_object.o
CC net/netfilter/nf_nat_proto.o
CC drivers/acpi/acpica/hwxfsleep.o
CC drivers/base/platform.o
CC drivers/acpi/device_sysfs.o
CC net/core/flow_offload.o
CC drivers/gpu/drm/virtio/virtgpu_debugfs.o
CC kernel/ptrace.o
CC lib/union_find.o
CC drivers/base/cpu.o
CC lib/vsprintf.o
CC arch/x86/kernel/e820.o
CC arch/x86/kernel/pci-dma.o
CC crypto/drbg.o
CC drivers/tty/pty.o
CC drivers/base/firmware.o
CC drivers/acpi/acpica/hwpci.o
CC drivers/gpu/drm/i915/intel_pcode.o
CC net/ipv4/tcp_timer.o
CC net/ipv6/mcast.o
CC drivers/dma-buf/dma-fence.o
CC net/sunrpc/addr.o
CC net/netfilter/nf_nat_helper.o
CC mm/mremap.o
CC fs/ext4/move_extent.o
CC fs/nfs/sysctl.o
CC drivers/gpu/drm/virtio/virtgpu_plane.o
CC kernel/user.o
CC lib/win_minmax.o
CC drivers/acpi/acpica/nsaccess.o
CC drivers/acpi/acpica/nsalloc.o
CC net/wireless/pmsr.o
CC net/mac80211/s1g.o
CC crypto/jitterentropy.o
CC net/ethtool/cmis_fw_update.o
CC drivers/macintosh/mac_hid.o
AR drivers/cxl/core/built-in.a
AR drivers/cxl/built-in.a
CC fs/super.o
CC lib/xarray.o
AR drivers/scsi/pcmcia/built-in.a
GEN net/wireless/shipped-certs.c
CC drivers/scsi/scsi.o
CC mm/msync.o
CC drivers/acpi/acpica/nsarguments.o
AR drivers/nvme/common/built-in.a
AR drivers/nvme/host/built-in.a
AR drivers/nvme/target/built-in.a
AR drivers/nvme/built-in.a
CC net/netfilter/nf_nat_masquerade.o
CC drivers/dma-buf/dma-fence-array.o
CC drivers/base/init.o
CC drivers/gpu/drm/i915/intel_region_ttm.o
CC drivers/tty/tty_audit.o
CC kernel/trace/power-traces.o
CC crypto/jitterentropy-kcapi.o
CC lib/lockref.o
CC drivers/acpi/acpica/nsconvert.o
CC fs/char_dev.o
CC net/ipv6/reassembly.o
CC drivers/scsi/hosts.o
CC drivers/ata/libata-core.o
CC drivers/base/map.o
AR drivers/gpu/drm/xlnx/built-in.a
CC kernel/trace/rpm-traces.o
CC arch/x86/kernel/quirks.o
CC drivers/scsi/scsi_ioctl.o
CC crypto/ghash-generic.o
CC drivers/dma-buf/dma-fence-chain.o
CC drivers/acpi/acpica/nsdump.o
CC net/ethtool/cmis_cdb.o
CC drivers/ata/libata-scsi.o
CC mm/page_vma_mapped.o
AR drivers/macintosh/built-in.a
AR drivers/gpu/drm/gud/built-in.a
CC net/ipv6/tcp_ipv6.o
CC kernel/signal.o
CC net/ipv4/tcp_ipv4.o
CC drivers/gpu/drm/virtio/virtgpu_ioctl.o
CC net/sunrpc/rpcb_clnt.o
CC crypto/hash_info.o
CC mm/pagewalk.o
CC net/ethtool/pse-pd.o
CC drivers/ata/libata-eh.o
CC drivers/ata/libata-transport.o
CC fs/ext4/namei.o
CC drivers/dma-buf/dma-fence-unwrap.o
CC lib/bcd.o
CC fs/nfs/nfs3super.o
CC drivers/base/devres.o
CC drivers/acpi/acpica/nseval.o
CC kernel/trace/trace_dynevent.o
CC fs/stat.o
CC fs/exec.o
CC crypto/rsapubkey.asn1.o
CC crypto/rsaprivkey.asn1.o
CC drivers/tty/sysrq.o
AR crypto/built-in.a
CC net/ipv6/ping.o
AR drivers/gpu/drm/solomon/built-in.a
CC arch/x86/kernel/kdebugfs.o
CC drivers/ata/libata-trace.o
CC net/core/gro.o
CC fs/nfs/nfs3client.o
AR drivers/net/phy/mediatek/built-in.a
AR drivers/net/phy/qcom/built-in.a
CC drivers/net/phy/realtek/realtek_main.o
CC drivers/net/phy/mdio-boardinfo.o
CC drivers/gpu/drm/i915/intel_runtime_pm.o
CC drivers/acpi/device_pm.o
CC lib/sort.o
CC drivers/acpi/acpica/nsinit.o
CC net/sunrpc/timer.o
CC drivers/dma-buf/dma-resv.o
CC drivers/scsi/scsicam.o
CC [M] drivers/gpu/drm/scheduler/sched_main.o
CC net/netfilter/nf_nat_ftp.o
CC drivers/acpi/proc.o
CC drivers/base/attribute_container.o
CC drivers/firewire/init_ohci1394_dma.o
CC fs/nfs/nfs3proc.o
CC drivers/gpu/drm/i915/intel_sbi.o
CC drivers/gpu/drm/virtio/virtgpu_prime.o
CC arch/x86/kernel/alternative.o
CC drivers/acpi/acpica/nsload.o
CC net/core/netdev-genl.o
CC net/sunrpc/xdr.o
CC mm/pgtable-generic.o
CC drivers/acpi/bus.o
CC net/ipv4/tcp_minisocks.o
CC net/wireless/shipped-certs.o
CC drivers/gpu/drm/virtio/virtgpu_trace_points.o
CC net/mac80211/ibss.o
CC drivers/scsi/scsi_error.o
CC net/ethtool/plca.o
CC fs/pipe.o
AR drivers/tty/built-in.a
CC fs/namei.o
CC drivers/base/transport_class.o
CC kernel/trace/trace_probe.o
CC drivers/acpi/acpica/nsnames.o
CC drivers/ata/libata-sata.o
CC lib/parser.o
HOSTCC drivers/gpu/drm/xe/xe_gen_wa_oob
CC mm/rmap.o
CC net/ethtool/phy.o
CC [M] drivers/gpu/drm/scheduler/sched_fence.o
CC net/ipv4/tcp_cong.o
AR drivers/firewire/built-in.a
CC drivers/gpu/drm/drm_atomic.o
CC drivers/dma-buf/sync_file.o
CC drivers/gpu/drm/drm_atomic_uapi.o
GEN xe_wa_oob.c xe_wa_oob.h
CC [M] drivers/gpu/drm/xe/xe_bb.o
CC drivers/scsi/scsi_lib.o
CC drivers/acpi/glue.o
CC drivers/gpu/drm/virtio/virtgpu_submit.o
CC drivers/net/phy/stubs.o
CC fs/fcntl.o
AR drivers/net/phy/realtek/built-in.a
CC [M] drivers/gpu/drm/scheduler/sched_entity.o
CC lib/debug_locks.o
CC drivers/base/topology.o
CC kernel/trace/trace_uprobe.o
CC drivers/acpi/acpica/nsobject.o
CC drivers/gpu/drm/i915/intel_step.o
CC net/core/netdev-genl-gen.o
CC net/netfilter/nf_nat_irc.o
CC lib/random32.o
CC net/core/gso.o
CC net/mac80211/iface.o
CC drivers/scsi/constants.o
CC fs/ioctl.o
CC net/ipv4/tcp_metrics.o
CC net/netfilter/nf_nat_sip.o
CC drivers/cdrom/cdrom.o
AR drivers/net/pse-pd/built-in.a
CC net/sunrpc/sunrpc_syms.o
CC fs/nfs/nfs3xdr.o
AR drivers/dma-buf/built-in.a
CC drivers/gpu/drm/drm_auth.o
CC drivers/acpi/acpica/nsparse.o
CC arch/x86/kernel/i8253.o
CC mm/vmalloc.o
CC lib/bust_spinlocks.o
CC net/mac80211/link.o
CC drivers/scsi/scsi_lib_dma.o
LD [M] drivers/gpu/drm/scheduler/gpu-sched.o
CC drivers/net/mdio/acpi_mdio.o
CC drivers/base/container.o
CC net/sunrpc/cache.o
CC [M] drivers/gpu/drm/xe/xe_bo.o
CC net/ipv6/exthdrs.o
CC drivers/net/phy/mdio_devres.o
AR drivers/gpu/drm/virtio/built-in.a
CC net/netfilter/x_tables.o
CC kernel/sys.o
CC net/mac80211/rate.o
CC net/ethtool/tsconfig.o
CC [M] drivers/gpu/drm/xe/xe_bo_evict.o
CC drivers/net/phy/phy.o
CC drivers/acpi/acpica/nspredef.o
CC lib/kasprintf.o
CC net/ipv6/datagram.o
CC mm/vma.o
CC arch/x86/kernel/hw_breakpoint.o
CC drivers/gpu/drm/drm_blend.o
AR drivers/auxdisplay/built-in.a
CC fs/nfs/nfs3acl.o
CC drivers/gpu/drm/i915/intel_uncore.o
CC drivers/ata/libata-sff.o
CC net/core/net-sysfs.o
CC drivers/net/phy/phy-c45.o
CC drivers/base/property.o
CC drivers/base/cacheinfo.o
CC drivers/gpu/drm/drm_bridge.o
CC net/core/hotdata.o
CC lib/bitmap.o
CC fs/nfs/nfs4proc.o
CC net/ipv6/ip6_flowlabel.o
CC drivers/acpi/acpica/nsprepkg.o
CC mm/process_vm_access.o
CC drivers/ata/libata-pmp.o
CC fs/ext4/page-io.o
CC drivers/gpu/drm/i915/intel_uncore_trace.o
CC net/sunrpc/rpc_pipe.o
CC drivers/net/mdio/fwnode_mdio.o
AR drivers/net/pcs/built-in.a
CC drivers/scsi/scsi_scan.o
CC drivers/pcmcia/cs.o
CC arch/x86/kernel/tsc.o
CC kernel/umh.o
CC drivers/acpi/acpica/nsrepair.o
CC lib/scatterlist.o
CC [M] drivers/gpu/drm/xe/xe_devcoredump.o
CC fs/nfs/nfs4xdr.o
CC net/ipv4/tcp_fastopen.o
CC kernel/trace/rethook.o
AR net/ethtool/built-in.a
CC net/ipv6/inet6_connection_sock.o
CC net/ipv6/udp_offload.o
CC fs/readdir.o
CC drivers/ata/libata-acpi.o
CC [M] drivers/gpu/drm/xe/xe_device.o
CC drivers/net/phy/phy-core.o
GEN drivers/scsi/scsi_devinfo_tbl.c
CC drivers/acpi/acpica/nsrepair2.o
CC drivers/usb/common/common.o
CC drivers/pcmcia/socket_sysfs.o
CC drivers/usb/common/debug.o
AR drivers/cdrom/built-in.a
CC drivers/usb/core/usb.o
AR drivers/usb/phy/built-in.a
CC fs/select.o
CC net/sunrpc/sysfs.o
CC net/sunrpc/svc_xprt.o
CC fs/ext4/readpage.o
CC drivers/base/swnode.o
CC net/sunrpc/xprtmultipath.o
CC drivers/usb/mon/mon_main.o
CC drivers/usb/mon/mon_stat.o
CC drivers/acpi/acpica/nssearch.o
CC drivers/usb/host/pci-quirks.o
CC drivers/usb/mon/mon_text.o
AR drivers/net/mdio/built-in.a
CC drivers/gpu/drm/i915/intel_wakeref.o
AR drivers/net/ethernet/3com/built-in.a
CC drivers/net/ethernet/8390/ne2k-pci.o
AR kernel/trace/built-in.a
CC drivers/scsi/scsi_devinfo.o
CC drivers/usb/class/usblp.o
CC net/sunrpc/stats.o
CC kernel/workqueue.o
CC net/netfilter/xt_tcpudp.o
CC net/netfilter/xt_CONNSECMARK.o
CC net/mac80211/michael.o
CC fs/ext4/resize.o
CC net/sunrpc/sysctl.o
CC arch/x86/kernel/tsc_msr.o
CC net/ipv4/tcp_rate.o
CC drivers/pcmcia/cardbus.o
CC net/mac80211/tkip.o
CC drivers/acpi/acpica/nsutils.o
CC drivers/net/ethernet/8390/8390.o
CC drivers/gpu/drm/drm_cache.o
AR drivers/usb/common/built-in.a
CC drivers/input/serio/serio.o
CC drivers/usb/mon/mon_bin.o
CC net/core/netdev_rx_queue.o
CC drivers/net/phy/phy_device.o
CC lib/list_sort.o
CC drivers/base/faux.o
CC drivers/ata/libata-pata-timings.o
CC fs/dcache.o
AR drivers/net/ethernet/adaptec/built-in.a
CC drivers/pcmcia/ds.o
CC lib/uuid.o
CC drivers/scsi/scsi_sysctl.o
CC net/ipv6/seg6.o
CC net/ipv4/tcp_recovery.o
CC drivers/usb/core/hub.o
CC drivers/input/serio/i8042.o
CC lib/iov_iter.o
CC arch/x86/kernel/io_delay.o
CC mm/page_alloc.o
CC kernel/pid.o
CC drivers/input/keyboard/atkbd.o
AR drivers/net/ethernet/agere/built-in.a
CC [M] drivers/gpu/drm/xe/xe_device_sysfs.o
CC drivers/acpi/scan.o
CC drivers/rtc/lib.o
CC drivers/acpi/acpica/nswalk.o
CC drivers/usb/host/ehci-hcd.o
CC drivers/input/mouse/psmouse-base.o
CC drivers/base/auxiliary.o
AR drivers/input/joystick/built-in.a
CC arch/x86/kernel/rtc.o
CC net/core/net-procfs.o
CC net/ipv6/fib6_notifier.o
CC drivers/gpu/drm/i915/vlv_sideband.o
AR drivers/usb/class/built-in.a
CC drivers/scsi/scsi_proc.o
CC drivers/input/mouse/synaptics.o
CC drivers/acpi/mipi-disco-img.o
CC drivers/usb/host/ehci-pci.o
CC net/mac80211/aes_cmac.o
CC net/ipv6/rpl.o
AR drivers/input/tablet/built-in.a
CC drivers/acpi/acpica/nsxfeval.o
CC drivers/i2c/algos/i2c-algo-bit.o
CC drivers/i2c/busses/i2c-i801.o
AR drivers/net/ethernet/alacritech/built-in.a
CC drivers/input/mouse/focaltech.o
CC net/core/netpoll.o
CC net/mac80211/aes_gmac.o
CC drivers/ata/ahci.o
CC drivers/base/devtmpfs.o
CC kernel/task_work.o
CC drivers/rtc/class.o
CC net/netfilter/xt_NFLOG.o
CC drivers/gpu/drm/drm_color_mgmt.o
AR drivers/usb/mon/built-in.a
AR drivers/net/ethernet/alteon/built-in.a
AR drivers/net/ethernet/8390/built-in.a
CC [M] drivers/gpu/drm/xe/xe_dma_buf.o
AR drivers/net/ethernet/amazon/built-in.a
AR drivers/net/ethernet/amd/built-in.a
AR drivers/net/ethernet/aquantia/built-in.a
AR drivers/net/ethernet/arc/built-in.a
CC arch/x86/kernel/resource.o
AR drivers/net/ethernet/asix/built-in.a
AR drivers/net/ethernet/atheros/built-in.a
CC drivers/acpi/acpica/nsxfname.o
AR drivers/net/ethernet/cadence/built-in.a
CC drivers/net/ethernet/broadcom/bnx2.o
AS arch/x86/kernel/irqflags.o
CC drivers/acpi/acpica/nsxfobj.o
CC drivers/acpi/acpica/psargs.o
CC drivers/net/phy/linkmode.o
CC drivers/pcmcia/pcmcia_resource.o
AR drivers/i3c/built-in.a
CC arch/x86/kernel/static_call.o
CC drivers/acpi/resource.o
AR drivers/input/keyboard/built-in.a
CC drivers/input/mouse/alps.o
CC drivers/scsi/scsi_debugfs.o
AR drivers/net/wireless/admtek/built-in.a
AR drivers/net/wireless/ath/built-in.a
AR drivers/net/wireless/atmel/built-in.a
CC drivers/base/module.o
AR drivers/net/wireless/broadcom/built-in.a
AR drivers/input/touchscreen/built-in.a
CC net/ipv4/tcp_ulp.o
CC net/ipv4/tcp_offload.o
AR drivers/net/wireless/intel/built-in.a
AR drivers/net/wireless/intersil/built-in.a
AR drivers/net/wireless/marvell/built-in.a
CC fs/ext4/super.o
CC drivers/input/serio/serport.o
AR drivers/net/wireless/mediatek/built-in.a
AR drivers/net/wireless/microchip/built-in.a
AR drivers/net/wireless/purelifi/built-in.a
CC drivers/gpu/drm/i915/vlv_suspend.o
AR drivers/net/wireless/quantenna/built-in.a
CC drivers/ata/libahci.o
CC drivers/net/phy/phy_link_topology.o
AR drivers/net/wireless/ralink/built-in.a
AR drivers/net/wireless/realtek/built-in.a
CC drivers/net/ethernet/broadcom/tg3.o
AR drivers/net/wireless/rsi/built-in.a
CC drivers/rtc/interface.o
AR drivers/net/wireless/silabs/built-in.a
CC drivers/base/auxiliary_sysfs.o
AR drivers/net/wireless/st/built-in.a
AR drivers/net/ethernet/brocade/built-in.a
CC drivers/base/devcoredump.o
AR drivers/i2c/algos/built-in.a
CC drivers/input/serio/libps2.o
AR drivers/net/wireless/ti/built-in.a
CC drivers/usb/core/hcd.o
AR drivers/media/i2c/built-in.a
AR drivers/net/wireless/zydas/built-in.a
AR drivers/media/tuners/built-in.a
AR drivers/net/wireless/virtual/built-in.a
CC drivers/base/platform-msi.o
AR drivers/media/rc/keymaps/built-in.a
AR drivers/net/wireless/built-in.a
AR drivers/media/rc/built-in.a
CC net/ipv6/ioam6.o
CC drivers/scsi/scsi_trace.o
AR drivers/media/common/b2c2/built-in.a
CC drivers/acpi/acpica/psloop.o
AR drivers/media/common/saa7146/built-in.a
AR drivers/media/common/siano/built-in.a
AR drivers/media/common/v4l2-tpg/built-in.a
AR drivers/media/common/videobuf2/built-in.a
AR drivers/media/common/built-in.a
CC arch/x86/kernel/process.o
CC drivers/gpu/drm/drm_connector.o
AR net/sunrpc/built-in.a
AR drivers/media/platform/allegro-dvt/built-in.a
CC net/ipv4/tcp_plb.o
CC drivers/net/phy/phy_package.o
AR drivers/media/platform/amlogic/meson-ge2d/built-in.a
AR drivers/media/platform/amlogic/built-in.a
CC drivers/input/mouse/byd.o
AR drivers/media/platform/amphion/built-in.a
AR drivers/net/ethernet/cavium/common/built-in.a
AR drivers/media/platform/aspeed/built-in.a
AR drivers/net/ethernet/cavium/thunder/built-in.a
AR drivers/media/platform/atmel/built-in.a
CC net/netfilter/xt_SECMARK.o
CC [M] drivers/gpu/drm/xe/xe_drm_client.o
AR drivers/net/ethernet/cavium/liquidio/built-in.a
AR drivers/i2c/muxes/built-in.a
AR drivers/media/platform/broadcom/built-in.a
CC drivers/acpi/acpica/psobject.o
CC net/mac80211/fils_aead.o
AR drivers/net/ethernet/cavium/octeon/built-in.a
AR drivers/media/platform/cadence/built-in.a
AR drivers/net/ethernet/cavium/built-in.a
CC net/netfilter/xt_TCPMSS.o
AR drivers/media/platform/chips-media/coda/built-in.a
AR drivers/media/platform/chips-media/wave5/built-in.a
CC drivers/gpu/drm/drm_crtc.o
AR drivers/i2c/busses/built-in.a
AR drivers/media/platform/chips-media/built-in.a
CC drivers/i2c/i2c-boardinfo.o
AR drivers/media/platform/imagination/built-in.a
AR drivers/media/platform/intel/built-in.a
AR drivers/media/platform/marvell/built-in.a
AR drivers/media/platform/mediatek/jpeg/built-in.a
AR drivers/media/platform/mediatek/mdp/built-in.a
CC arch/x86/kernel/ptrace.o
CC net/ipv4/datagram.o
AR drivers/media/platform/mediatek/vcodec/common/built-in.a
CC drivers/base/physical_location.o
CC kernel/extable.o
CC net/ipv6/sysctl_net_ipv6.o
AR drivers/media/platform/mediatek/vcodec/encoder/built-in.a
AR drivers/media/platform/mediatek/vcodec/decoder/built-in.a
AR drivers/media/platform/mediatek/vcodec/built-in.a
AR drivers/net/usb/built-in.a
CC arch/x86/kernel/tls.o
AR drivers/media/platform/mediatek/vpu/built-in.a
AR drivers/media/platform/mediatek/mdp3/built-in.a
CC net/ipv6/xfrm6_policy.o
CC drivers/acpi/acpica/psopcode.o
AR drivers/media/platform/mediatek/built-in.a
AR drivers/media/platform/microchip/built-in.a
AR drivers/media/platform/nuvoton/built-in.a
CC drivers/gpu/drm/i915/soc/intel_dram.o
AR drivers/media/platform/nvidia/tegra-vde/built-in.a
AR drivers/media/platform/nvidia/built-in.a
AR drivers/media/platform/nxp/dw100/built-in.a
AR drivers/media/platform/qcom/camss/built-in.a
AR drivers/media/platform/nxp/imx-jpeg/built-in.a
AR drivers/media/platform/qcom/iris/built-in.a
AR drivers/media/platform/nxp/imx8-isi/built-in.a
AR drivers/media/platform/nxp/built-in.a
AR drivers/media/platform/qcom/venus/built-in.a
AR drivers/media/platform/qcom/built-in.a
CC drivers/usb/core/urb.o
AR drivers/media/platform/raspberrypi/pisp_be/built-in.a
CC drivers/gpu/drm/i915/soc/intel_gmch.o
AR drivers/media/platform/raspberrypi/rp1-cfe/built-in.a
AR drivers/media/platform/raspberrypi/built-in.a
AR drivers/media/platform/renesas/rcar-vin/built-in.a
CC drivers/gpu/drm/i915/soc/intel_pch.o
AR drivers/media/platform/renesas/rzg2l-cru/built-in.a
AR drivers/media/platform/renesas/vsp1/built-in.a
AR drivers/media/platform/renesas/built-in.a
AR drivers/input/serio/built-in.a
AR drivers/media/platform/rockchip/rga/built-in.a
CC drivers/usb/core/message.o
CC drivers/pcmcia/cistpl.o
AR drivers/media/platform/rockchip/rkisp1/built-in.a
AR drivers/input/misc/built-in.a
AR drivers/media/platform/rockchip/built-in.a
CC fs/inode.o
CC net/core/fib_rules.o
AR drivers/media/platform/samsung/exynos-gsc/built-in.a
CC lib/clz_ctz.o
AR drivers/media/platform/samsung/exynos4-is/built-in.a
AR drivers/media/platform/samsung/s3c-camif/built-in.a
AR drivers/media/platform/samsung/s5p-g2d/built-in.a
CC drivers/scsi/scsi_logging.o
CC drivers/acpi/acpica/psopinfo.o
AR drivers/media/platform/samsung/s5p-jpeg/built-in.a
AR drivers/media/platform/samsung/s5p-mfc/built-in.a
CC lib/bsearch.o
AR drivers/media/platform/samsung/built-in.a
CC net/ipv4/raw.o
CC drivers/input/input.o
AR drivers/media/platform/st/sti/bdisp/built-in.a
CC drivers/base/trace.o
AR drivers/media/platform/st/sti/c8sectpfe/built-in.a
AR drivers/media/platform/st/sti/delta/built-in.a
AR drivers/media/platform/st/sti/hva/built-in.a
AR drivers/media/platform/st/stm32/built-in.a
AR drivers/media/platform/st/built-in.a
AR drivers/media/platform/sunxi/sun4i-csi/built-in.a
AR drivers/media/platform/sunxi/sun6i-csi/built-in.a
AR drivers/media/platform/sunxi/sun6i-mipi-csi2/built-in.a
AR drivers/media/platform/sunxi/sun8i-a83t-mipi-csi2/built-in.a
AR drivers/media/platform/sunxi/sun8i-di/built-in.a
CC drivers/i2c/i2c-core-base.o
AR drivers/media/platform/sunxi/sun8i-rotate/built-in.a
AR drivers/media/platform/sunxi/built-in.a
AR drivers/media/platform/synopsys/hdmirx/built-in.a
AR drivers/media/platform/synopsys/built-in.a
CC drivers/net/phy/phy_caps.o
AR drivers/media/platform/ti/am437x/built-in.a
CC drivers/usb/host/ohci-hcd.o
CC drivers/net/phy/mdio_bus.o
AR drivers/media/platform/ti/cal/built-in.a
AR drivers/media/platform/ti/vpe/built-in.a
CC fs/nfs/nfs4state.o
AR drivers/media/platform/ti/davinci/built-in.a
AR drivers/media/platform/ti/j721e-csi2rx/built-in.a
CC lib/find_bit.o
AR drivers/media/platform/ti/omap/built-in.a
AR drivers/media/platform/ti/omap3isp/built-in.a
CC drivers/acpi/acpica/psparse.o
AR drivers/media/platform/ti/built-in.a
AR drivers/media/platform/verisilicon/built-in.a
AR drivers/media/platform/via/built-in.a
AR drivers/media/platform/xilinx/built-in.a
AR drivers/media/platform/built-in.a
CC net/ipv6/xfrm6_state.o
CC fs/attr.o
CC mm/page_frag_cache.o
CC drivers/input/mouse/logips2pp.o
AR drivers/media/pci/ttpci/built-in.a
AR drivers/media/pci/b2c2/built-in.a
AR drivers/media/pci/pluto2/built-in.a
AR drivers/media/pci/dm1105/built-in.a
AR drivers/media/pci/pt1/built-in.a
CC drivers/usb/host/ohci-pci.o
AR drivers/media/pci/pt3/built-in.a
AR drivers/net/ethernet/chelsio/built-in.a
CC drivers/pcmcia/pcmcia_cis.o
AR drivers/media/pci/mantis/built-in.a
CC drivers/rtc/nvmem.o
CC net/core/net-traces.o
AR drivers/media/pci/ngene/built-in.a
CC drivers/usb/storage/scsiglue.o
CC drivers/input/input-compat.o
CC kernel/params.o
AR drivers/media/pci/ddbridge/built-in.a
AR drivers/media/pci/saa7146/built-in.a
AR drivers/media/pci/smipcie/built-in.a
CC kernel/kthread.o
CC lib/llist.o
AR drivers/media/pci/netup_unidvb/built-in.a
CC drivers/acpi/acpi_processor.o
CC [M] drivers/gpu/drm/xe/xe_eu_stall.o
AR drivers/media/pci/intel/ipu3/built-in.a
AR drivers/media/pci/intel/ivsc/built-in.a
CC drivers/gpu/drm/drm_displayid.o
CC net/mac80211/cfg.o
AR drivers/media/pci/intel/built-in.a
CC arch/x86/kernel/step.o
AR drivers/media/pci/built-in.a
CC drivers/usb/storage/protocol.o
CC net/ipv6/xfrm6_input.o
AR drivers/media/usb/b2c2/built-in.a
AR drivers/media/usb/dvb-usb/built-in.a
CC lib/lwq.o
AR net/wireless/built-in.a
AR drivers/media/mmc/siano/built-in.a
CC drivers/ata/ata_piix.o
AR drivers/media/usb/dvb-usb-v2/built-in.a
AR drivers/media/mmc/built-in.a
AR drivers/media/usb/s2255/built-in.a
CC drivers/pps/pps.o
AR drivers/pps/clients/built-in.a
AR drivers/media/usb/siano/built-in.a
CC drivers/ptp/ptp_clock.o
CC drivers/gpu/drm/drm_drv.o
CC drivers/scsi/scsi_pm.o
AR drivers/media/usb/ttusb-budget/built-in.a
CC drivers/acpi/acpica/psscope.o
AR drivers/media/usb/ttusb-dec/built-in.a
AR drivers/media/usb/built-in.a
CC net/netfilter/xt_conntrack.o
CC lib/memweight.o
AR drivers/base/built-in.a
AR drivers/media/firewire/built-in.a
AR drivers/media/spi/built-in.a
CC lib/kfifo.o
CC net/ipv4/udp.o
AR drivers/media/test-drivers/built-in.a
AR drivers/media/built-in.a
CC fs/bad_inode.o
CC drivers/net/phy/mdio_device.o
CC drivers/rtc/dev.o
CC kernel/sys_ni.o
CC drivers/gpu/drm/i915/soc/intel_rom.o
CC drivers/input/input-mt.o
CC drivers/acpi/processor_core.o
CC mm/init-mm.o
CC drivers/scsi/scsi_bsg.o
CC drivers/input/mouse/lifebook.o
CC net/netfilter/xt_policy.o
CC drivers/acpi/acpica/pstree.o
CC net/ipv6/xfrm6_output.o
CC drivers/ata/pata_amd.o
CC drivers/pps/kapi.o
CC fs/file.o
CC drivers/net/mii.o
CC arch/x86/kernel/i8237.o
CC drivers/usb/core/driver.o
CC net/ipv4/udplite.o
CC [M] drivers/gpu/drm/xe/xe_exec.o
CC drivers/usb/storage/transport.o
CC drivers/power/supply/power_supply_core.o
CC drivers/hwmon/hwmon.o
CC drivers/ata/pata_oldpiix.o
CC drivers/pcmcia/rsrc_mgr.o
CC kernel/nsproxy.o
CC drivers/acpi/acpica/psutils.o
CC drivers/gpu/drm/drm_dumb_buffers.o
CC drivers/usb/host/uhci-hcd.o
CC drivers/rtc/proc.o
CC drivers/net/loopback.o
CC net/netfilter/xt_state.o
CC lib/percpu-refcount.o
CC drivers/pcmcia/rsrc_nonstatic.o
CC fs/nfs/nfs4renewd.o
CC mm/memblock.o
CC arch/x86/kernel/stacktrace.o
CC drivers/input/mouse/trackpoint.o
CC drivers/ptp/ptp_chardev.o
CC drivers/scsi/scsi_common.o
CC arch/x86/kernel/reboot.o
CC drivers/net/phy/swphy.o
CC drivers/pps/sysfs.o
AR drivers/thermal/broadcom/built-in.a
AR drivers/thermal/renesas/built-in.a
AR drivers/thermal/samsung/built-in.a
CC [M] drivers/gpu/drm/xe/xe_exec_queue.o
CC drivers/thermal/intel/intel_tcc.o
CC drivers/acpi/acpica/pswalk.o
CC drivers/net/phy/fixed_phy.o
AR drivers/net/ethernet/cisco/built-in.a
CC drivers/gpu/drm/i915/i915_memcpy.o
CC drivers/gpu/drm/i915/i915_mm.o
CC drivers/i2c/i2c-core-smbus.o
CC drivers/net/netconsole.o
CC drivers/pcmcia/yenta_socket.o
CC drivers/usb/core/config.o
CC drivers/gpu/drm/drm_edid.o
CC net/core/selftests.o
CC net/core/ptp_classifier.o
CC drivers/rtc/sysfs.o
CC drivers/thermal/intel/therm_throt.o
CC drivers/scsi/scsi_transport_spi.o
CC [M] drivers/thermal/intel/x86_pkg_temp_thermal.o
AR drivers/pps/built-in.a
CC drivers/ata/pata_sch.o
CC drivers/rtc/rtc-mc146818-lib.o
CC drivers/acpi/acpica/psxface.o
CC drivers/net/virtio_net.o
CC fs/filesystems.o
CC net/ipv4/udp_offload.o
CC drivers/power/supply/power_supply_sysfs.o
CC lib/rhashtable.o
CC lib/base64.o
CC net/ipv6/xfrm6_protocol.o
CC drivers/usb/storage/usb.o
CC drivers/ata/pata_mpiix.o
CC kernel/notifier.o
CC drivers/input/mouse/cypress_ps2.o
CC fs/ext4/symlink.o
AR drivers/watchdog/built-in.a
CC drivers/acpi/processor_pdc.o
CC drivers/i2c/i2c-core-acpi.o
CC drivers/usb/storage/initializers.o
CC drivers/scsi/virtio_scsi.o
CC arch/x86/kernel/msr.o
CC mm/slub.o
CC drivers/acpi/acpica/rsaddr.o
CC drivers/ptp/ptp_sysfs.o
CC net/ipv4/arp.o
CC [M] net/netfilter/nf_log_syslog.o
CC fs/nfs/nfs4super.o
AR drivers/hwmon/built-in.a
CC drivers/ptp/ptp_vclock.o
CC fs/ext4/sysfs.o
CC drivers/usb/host/xhci.o
CC drivers/power/supply/power_supply_leds.o
CC drivers/input/mouse/psmouse-smbus.o
CC drivers/rtc/rtc-cmos.o
CC drivers/gpu/drm/i915/i915_sw_fence.o
AR drivers/net/phy/built-in.a
CC arch/x86/kernel/cpuid.o
AR drivers/thermal/st/built-in.a
CC drivers/md/md.o
CC drivers/cpufreq/cpufreq.o
CC net/mac80211/ethtool.o
CC drivers/ptp/ptp_kvm_x86.o
CC drivers/acpi/acpica/rscalc.o
CC drivers/usb/core/file.o
CC [M] net/netfilter/xt_mark.o
CC drivers/gpu/drm/drm_eld.o
CC drivers/ata/ata_generic.o
CC kernel/ksysfs.o
CC drivers/gpu/drm/i915/i915_sw_fence_work.o
CC drivers/usb/core/buffer.o
CC drivers/usb/host/xhci-mem.o
CC drivers/usb/storage/sierra_ms.o
AR drivers/pcmcia/built-in.a
AR drivers/thermal/intel/built-in.a
CC drivers/cpuidle/governors/menu.o
CC drivers/power/supply/power_supply_hwmon.o
AR drivers/thermal/qcom/built-in.a
AR drivers/thermal/tegra/built-in.a
CC drivers/usb/storage/option_ms.o
AR drivers/thermal/mediatek/built-in.a
CC drivers/thermal/thermal_core.o
CC drivers/net/net_failover.o
CC mm/madvise.o
CC [M] drivers/gpu/drm/xe/xe_execlist.o
CC drivers/cpuidle/cpuidle.o
CC lib/once.o
CC drivers/md/md-bitmap.o
CC drivers/i2c/i2c-smbus.o
CC drivers/cpuidle/driver.o
CC [M] net/netfilter/xt_nat.o
CC drivers/acpi/acpica/rscreate.o
CC fs/namespace.o
CC fs/seq_file.o
CC arch/x86/kernel/early-quirks.o
CC drivers/gpu/drm/i915/i915_syncmap.o
CC net/ipv6/netfilter.o
CC net/ipv6/proc.o
CC drivers/scsi/sd.o
AR drivers/usb/misc/built-in.a
CC drivers/usb/core/sysfs.o
AR drivers/input/mouse/built-in.a
CC [M] drivers/gpu/drm/xe/xe_force_wake.o
CC drivers/input/input-poller.o
CC drivers/ptp/ptp_kvm_common.o
AR drivers/power/supply/built-in.a
CC kernel/cred.o
AR drivers/power/built-in.a
CC drivers/usb/host/xhci-ext-caps.o
CC lib/refcount.o
CC drivers/cpuidle/governors/haltpoll.o
CC fs/nfs/nfs4file.o
CC fs/nfs/delegation.o
CC drivers/thermal/thermal_sysfs.o
CC fs/xattr.o
AR drivers/rtc/built-in.a
AR drivers/ata/built-in.a
CC drivers/gpu/drm/i915/i915_user_extensions.o
CC drivers/thermal/thermal_trip.o
CC drivers/acpi/acpica/rsdumpinfo.o
CC drivers/usb/host/xhci-ring.o
CC drivers/md/md-autodetect.o
CC drivers/usb/storage/usual-tables.o
CC drivers/acpi/ec.o
AR drivers/net/ethernet/cortina/built-in.a
CC drivers/cpufreq/freq_table.o
CC lib/rcuref.o
AR drivers/i2c/built-in.a
CC net/ipv6/syncookies.o
CC lib/usercopy.o
CC drivers/acpi/acpica/rsinfo.o
CC drivers/gpu/drm/i915/i915_debugfs.o
CC drivers/input/ff-core.o
CC drivers/input/touchscreen.o
CC net/ipv6/calipso.o
CC arch/x86/kernel/smp.o
CC net/ipv4/icmp.o
CC net/ipv6/ah6.o
AR drivers/ptp/built-in.a
CC lib/errseq.o
CC lib/bucket_locks.o
CC lib/generic-radix-tree.o
CC net/ipv6/esp6.o
CC drivers/md/dm.o
AR drivers/usb/storage/built-in.a
CC kernel/reboot.o
CC drivers/gpu/drm/i915/i915_debugfs_params.o
CC drivers/usb/core/endpoint.o
CC [M] net/netfilter/xt_LOG.o
CC drivers/usb/early/ehci-dbgp.o
CC arch/x86/kernel/smpboot.o
CC drivers/thermal/thermal_helpers.o
CC drivers/acpi/acpica/rsio.o
CC [M] drivers/gpu/drm/xe/xe_ggtt.o
CC drivers/cpuidle/governor.o
CC net/core/netprio_cgroup.o
AR drivers/cpuidle/governors/built-in.a
CC drivers/cpuidle/sysfs.o
CC drivers/scsi/sr.o
CC drivers/cpufreq/cpufreq_performance.o
CC drivers/usb/core/devio.o
CC drivers/usb/host/xhci-hub.o
CC drivers/gpu/drm/drm_encoder.o
CC net/mac80211/rx.o
CC net/mac80211/spectmgmt.o
AR drivers/net/ethernet/dec/tulip/built-in.a
AR drivers/net/ethernet/dec/built-in.a
CC net/mac80211/tx.o
AR drivers/mmc/built-in.a
CC fs/ext4/xattr.o
CC drivers/acpi/acpica/rsirq.o
CC drivers/input/ff-memless.o
CC lib/bitmap-str.o
CC lib/string_helpers.o
CC mm/page_io.o
AR drivers/net/ethernet/dlink/built-in.a
CC drivers/cpufreq/cpufreq_userspace.o
CC [M] net/netfilter/xt_MASQUERADE.o
CC net/ipv4/devinet.o
CC drivers/thermal/thermal_thresholds.o
CC drivers/acpi/dock.o
CC drivers/cpufreq/cpufreq_ondemand.o
CC kernel/async.o
CC drivers/cpuidle/poll_state.o
CC drivers/usb/host/xhci-dbg.o
CC net/core/netclassid_cgroup.o
CC drivers/acpi/acpica/rslist.o
CC drivers/cpuidle/cpuidle-haltpoll.o
CC drivers/usb/core/notify.o
AR drivers/usb/early/built-in.a
CC kernel/range.o
CC drivers/input/sparse-keymap.o
CC drivers/usb/host/xhci-trace.o
CC [M] drivers/gpu/drm/xe/xe_gpu_scheduler.o
CC fs/nfs/nfs4idmap.o
CC net/mac80211/key.o
CC net/ipv6/sit.o
CC drivers/gpu/drm/i915/i915_pmu.o
CC drivers/thermal/thermal_netlink.o
CC drivers/scsi/sr_ioctl.o
AR drivers/net/ethernet/emulex/built-in.a
CC drivers/usb/core/generic.o
CC drivers/input/vivaldi-fmap.o
CC drivers/acpi/acpica/rsmemory.o
CC kernel/smpboot.o
AR drivers/ufs/built-in.a
CC drivers/acpi/acpica/rsmisc.o
CC [M] net/netfilter/xt_addrtype.o
CC net/ipv4/af_inet.o
CC drivers/cpufreq/cpufreq_governor.o
CC arch/x86/kernel/tsc_sync.o
CC drivers/thermal/thermal_hwmon.o
CC mm/swap_state.o
CC fs/nfs/callback.o
CC drivers/gpu/drm/drm_file.o
AR drivers/cpuidle/built-in.a
CC drivers/usb/host/xhci-debugfs.o
CC lib/hexdump.o
CC [M] drivers/gpu/drm/xe/xe_gsc.o
CC kernel/ucount.o
CC drivers/md/dm-table.o
CC mm/swapfile.o
CC net/mac80211/util.o
AR drivers/net/ethernet/engleder/built-in.a
CC [M] drivers/gpu/drm/xe/xe_gsc_debugfs.o
CC fs/nfs/callback_xdr.o
CC lib/kstrtox.o
CC net/ipv4/igmp.o
CC drivers/gpu/drm/drm_fourcc.o
CC drivers/input/input-leds.o
CC drivers/cpufreq/cpufreq_governor_attr_set.o
CC net/core/dst_cache.o
CC drivers/cpufreq/acpi-cpufreq.o
CC drivers/acpi/acpica/rsserial.o
CC drivers/gpu/drm/i915/gt/gen2_engine_cs.o
CC drivers/gpu/drm/drm_framebuffer.o
CC drivers/md/dm-target.o
CC drivers/acpi/pci_root.o
CC fs/libfs.o
CC arch/x86/kernel/setup_percpu.o
CC net/mac80211/parse.o
CC lib/iomap.o
CC drivers/scsi/sr_vendor.o
CC drivers/thermal/gov_step_wise.o
CC drivers/acpi/acpica/rsutils.o
CC kernel/regset.o
CC drivers/usb/host/xhci-pci.o
CC drivers/acpi/acpica/rsxface.o
AR drivers/net/ethernet/ezchip/built-in.a
CC drivers/md/dm-linear.o
CC arch/x86/kernel/mpparse.o
CC drivers/usb/core/quirks.o
CC net/core/gro_cells.o
CC drivers/input/evdev.o
CC fs/nfs/callback_proc.o
CC kernel/ksyms_common.o
CC [M] drivers/gpu/drm/xe/xe_gsc_proxy.o
CC drivers/acpi/pci_link.o
CC fs/fs-writeback.o
CC drivers/gpu/drm/i915/gt/gen6_engine_cs.o
AR drivers/thermal/built-in.a
CC drivers/gpu/drm/i915/gt/gen6_ppgtt.o
CC net/mac80211/wme.o
CC fs/pnode.o
CC net/mac80211/chan.o
CC drivers/gpu/drm/drm_gem.o
CC drivers/cpufreq/amd-pstate.o
CC drivers/acpi/acpica/tbdata.o
CC fs/splice.o
CC drivers/gpu/drm/i915/gt/gen7_renderclear.o
CC lib/iomap_copy.o
AR net/netfilter/built-in.a
CC drivers/md/dm-stripe.o
CC net/core/failover.o
CC drivers/scsi/sg.o
AR drivers/firmware/arm_ffa/built-in.a
CC net/ipv6/addrconf_core.o
AR drivers/firmware/arm_scmi/built-in.a
AR drivers/firmware/broadcom/built-in.a
AR drivers/firmware/cirrus/test/built-in.a
AR drivers/firmware/cirrus/built-in.a
CC drivers/cpufreq/amd-pstate-trace.o
CC drivers/md/dm-ioctl.o
CC lib/devres.o
AR drivers/firmware/meson/built-in.a
AR drivers/firmware/microchip/built-in.a
CC drivers/firmware/efi/efi-bgrt.o
CC drivers/usb/core/devices.o
CC fs/nfs/nfs4namespace.o
CC drivers/firmware/efi/libstub/efi-stub-helper.o
CC kernel/groups.o
CC drivers/acpi/pci_irq.o
CC net/ipv4/fib_frontend.o
CC fs/ext4/xattr_hurd.o
CC drivers/gpu/drm/i915/gt/gen8_engine_cs.o
CC drivers/acpi/acpica/tbfadt.o
CC arch/x86/kernel/trace_clock.o
CC drivers/cpufreq/intel_pstate.o
CC fs/sync.o
CC net/mac80211/trace.o
CC lib/check_signature.o
CC drivers/firmware/efi/efi.o
CC drivers/gpu/drm/i915/gt/gen8_ppgtt.o
CC fs/ext4/xattr_trusted.o
AR drivers/input/built-in.a
CC arch/x86/kernel/trace.o
CC drivers/acpi/acpi_apd.o
CC drivers/scsi/scsi_sysfs.o
CC drivers/usb/core/phy.o
AR drivers/usb/host/built-in.a
CC drivers/gpu/drm/drm_ioctl.o
CC lib/interval_tree.o
CC drivers/md/dm-io.o
CC net/ipv6/exthdrs_core.o
CC [M] drivers/gpu/drm/xe/xe_gsc_submit.o
CC lib/assoc_array.o
CC drivers/acpi/acpica/tbfind.o
CC drivers/gpu/drm/i915/gt/intel_breadcrumbs.o
AR drivers/firmware/imx/built-in.a
AR net/core/built-in.a
CC drivers/md/dm-kcopyd.o
CC net/mac80211/mlme.o
CC mm/dmapool.o
CC net/ipv4/fib_semantics.o
CC drivers/usb/core/port.o
CC fs/utimes.o
CC kernel/kcmp.o
CC drivers/firmware/efi/libstub/gop.o
AR drivers/net/ethernet/broadcom/built-in.a
AR drivers/net/ethernet/fujitsu/built-in.a
CC net/ipv6/ip6_checksum.o
AR drivers/net/ethernet/fungible/built-in.a
CC arch/x86/kernel/rethook.o
AR drivers/net/ethernet/google/built-in.a
AR drivers/net/ethernet/hisilicon/built-in.a
CC drivers/gpu/drm/i915/gt/intel_context.o
AR drivers/net/ethernet/huawei/built-in.a
CC lib/bitrev.o
AR drivers/crypto/stm32/built-in.a
CC net/ipv4/fib_trie.o
CC drivers/acpi/acpi_platform.o
CC drivers/net/ethernet/intel/e1000/e1000_main.o
AR drivers/crypto/inside-secure/eip93/built-in.a
AR drivers/crypto/inside-secure/built-in.a
CC drivers/usb/core/hcd-pci.o
AR drivers/crypto/xilinx/built-in.a
CC lib/crc-ccitt.o
AR drivers/crypto/hisilicon/built-in.a
AR drivers/crypto/starfive/built-in.a
AR drivers/crypto/intel/keembay/built-in.a
CC drivers/gpu/drm/i915/gt/intel_context_sseu.o
AR drivers/crypto/intel/ixp4xx/built-in.a
AR drivers/crypto/intel/built-in.a
AR drivers/crypto/built-in.a
CC drivers/acpi/acpica/tbinstal.o
AR drivers/firmware/psci/built-in.a
CC drivers/md/dm-sysfs.o
CC fs/ext4/xattr_user.o
CC drivers/firmware/efi/libstub/secureboot.o
CC drivers/firmware/efi/libstub/tpm.o
CC drivers/gpu/drm/drm_lease.o
CC drivers/usb/core/usb-acpi.o
CC fs/nfs/nfs4getroot.o
CC kernel/freezer.o
CC net/ipv6/ip6_icmp.o
CC net/mac80211/tdls.o
CC drivers/acpi/acpica/tbprint.o
CC drivers/net/ethernet/intel/e1000e/82571.o
CC lib/crc16.o
HOSTCC lib/gen_crc32table
CC drivers/gpu/drm/i915/gt/intel_engine_cs.o
CC arch/x86/kernel/vmcore_info_32.o
CC drivers/net/ethernet/intel/e1000/e1000_hw.o
CC mm/hugetlb.o
CC [M] drivers/gpu/drm/xe/xe_gt.o
CC drivers/net/ethernet/intel/e1000e/ich8lan.o
CC drivers/net/ethernet/intel/e100.o
CC fs/d_path.o
AR drivers/firmware/qcom/built-in.a
CC arch/x86/kernel/machine_kexec_32.o
CC drivers/clocksource/acpi_pm.o
CC net/mac80211/ocb.o
AR drivers/net/ethernet/i825xx/built-in.a
CC drivers/gpu/drm/drm_managed.o
CC fs/ext4/fast_commit.o
AR drivers/net/ethernet/microsoft/built-in.a
CC drivers/gpu/drm/i915/gt/intel_engine_heartbeat.o
AR drivers/scsi/built-in.a
AR drivers/net/ethernet/litex/built-in.a
AR drivers/firmware/samsung/built-in.a
CC drivers/net/ethernet/intel/e1000/e1000_ethtool.o
CC drivers/acpi/acpica/tbutils.o
CC lib/xxhash.o
CC kernel/profile.o
CC net/ipv6/output_core.o
CC fs/stack.o
CC drivers/gpu/drm/i915/gt/intel_engine_pm.o
CC net/mac80211/airtime.o
CC mm/mmu_notifier.o
CC drivers/md/dm-stats.o
CC drivers/firmware/efi/libstub/file.o
CC fs/nfs/nfs4client.o
CC [M] drivers/gpu/drm/xe/xe_gt_ccs_mode.o
CC drivers/firmware/efi/vars.o
AR drivers/usb/core/built-in.a
CC lib/genalloc.o
AR drivers/usb/built-in.a
CC drivers/gpu/drm/drm_mm.o
CC fs/nfs/nfs4session.o
CC [M] drivers/gpu/drm/xe/xe_gt_clock.o
CC fs/ext4/orphan.o
CC drivers/hid/usbhid/hid-core.o
CC drivers/acpi/acpica/tbxface.o
CC fs/fs_struct.o
CC net/ipv6/protocol.o
CC drivers/net/ethernet/intel/e1000e/80003es2lan.o
CC drivers/clocksource/i8253.o
CC net/mac80211/eht.o
CC drivers/hid/usbhid/hiddev.o
CC drivers/firmware/efi/reboot.o
AS arch/x86/kernel/relocate_kernel_32.o
CC drivers/md/dm-rq.o
CC arch/x86/kernel/module.o
AR drivers/cpufreq/built-in.a
CC drivers/firmware/efi/memattr.o
AR drivers/net/ethernet/mellanox/built-in.a
AR drivers/net/ethernet/marvell/octeon_ep/built-in.a
CC drivers/hid/usbhid/hid-pidff.o
AR drivers/net/ethernet/marvell/octeon_ep_vf/built-in.a
AR drivers/net/ethernet/marvell/octeontx2/built-in.a
AR drivers/net/ethernet/marvell/prestera/built-in.a
CC kernel/stacktrace.o
CC drivers/net/ethernet/marvell/sky2.o
CC drivers/acpi/acpica/tbxfload.o
CC drivers/hid/hid-core.o
CC drivers/acpi/acpi_pnp.o
CC fs/ext4/acl.o
CC drivers/firmware/efi/libstub/mem.o
CC drivers/net/ethernet/intel/e1000e/mac.o
CC arch/x86/kernel/doublefault_32.o
CC lib/percpu_counter.o
AR drivers/clocksource/built-in.a
AR drivers/firmware/smccc/built-in.a
CC net/ipv4/fib_notifier.o
CC drivers/net/ethernet/intel/e1000/e1000_param.o
CC drivers/firmware/efi/tpm.o
CC fs/ext4/xattr_security.o
CC kernel/dma.o
CC kernel/smp.o
CC [M] drivers/gpu/drm/xe/xe_gt_freq.o
CC drivers/md/dm-io-rewind.o
CC mm/migrate.o
CC drivers/acpi/acpica/tbxfroot.o
CC drivers/md/dm-builtin.o
CC drivers/firmware/efi/libstub/random.o
CC arch/x86/kernel/early_printk.o
CC mm/page_counter.o
CC arch/x86/kernel/hpet.o
CC mm/hugetlb_cgroup.o
CC drivers/gpu/drm/i915/gt/intel_engine_user.o
AR drivers/firmware/tegra/built-in.a
CC drivers/hid/hid-input.o
CC fs/nfs/dns_resolve.o
CC drivers/firmware/efi/memmap.o
CC net/ipv6/ip6_offload.o
CC lib/audit.o
CC arch/x86/kernel/amd_nb.o
CC drivers/acpi/power.o
AR drivers/net/ethernet/meta/built-in.a
AR drivers/net/ethernet/micrel/built-in.a
CC drivers/acpi/acpica/utaddress.o
AR drivers/platform/x86/amd/built-in.a
CC drivers/mailbox/mailbox.o
AR drivers/platform/x86/intel/built-in.a
CC drivers/platform/x86/wmi.o
AR drivers/perf/built-in.a
CC fs/nfs/nfs4trace.o
CC net/mac80211/led.o
CC net/ipv6/tcpv6_offload.o
AR drivers/platform/surface/built-in.a
CC drivers/firmware/efi/capsule.o
CC [M] drivers/gpu/drm/xe/xe_gt_idle.o
CC [M] drivers/gpu/drm/xe/xe_gt_mcr.o
CC drivers/gpu/drm/i915/gt/intel_execlists_submission.o
CC mm/early_ioremap.o
CC kernel/uid16.o
AR drivers/hid/usbhid/built-in.a
CC drivers/mailbox/pcc.o
CC drivers/md/dm-raid1.o
CC drivers/gpu/drm/drm_mode_config.o
CC drivers/platform/x86/wmi-bmof.o
AR drivers/net/ethernet/microchip/built-in.a
CC drivers/gpu/drm/i915/gt/intel_ggtt.o
CC kernel/kallsyms.o
CC drivers/net/ethernet/intel/e1000e/manage.o
CC drivers/firmware/efi/libstub/randomalloc.o
CC drivers/acpi/acpica/utalloc.o
CC mm/secretmem.o
CC net/ipv4/inet_fragment.o
AR fs/ext4/built-in.a
CC drivers/net/ethernet/intel/e1000e/nvm.o
CC fs/statfs.o
CC drivers/net/ethernet/intel/e1000e/phy.o
CC lib/syscall.o
CC drivers/firmware/efi/esrt.o
CC drivers/hid/hid-quirks.o
AR drivers/hwtracing/intel_th/built-in.a
AR drivers/android/built-in.a
CC drivers/md/dm-log.o
CC net/ipv4/ping.o
AR drivers/firmware/xilinx/built-in.a
CC net/ipv6/exthdrs_offload.o
CC drivers/acpi/event.o
CC drivers/firmware/efi/libstub/pci.o
CC drivers/firmware/efi/runtime-wrappers.o
AR drivers/net/ethernet/intel/e1000/built-in.a
CC drivers/hid/hid-debug.o
CC arch/x86/kernel/amd_node.o
AR drivers/nvmem/layouts/built-in.a
CC drivers/nvmem/core.o
CC kernel/acct.o
CC net/ipv6/inet6_hashtables.o
CC drivers/acpi/acpica/utascii.o
CC drivers/gpu/drm/i915/gt/intel_ggtt_fencing.o
CC [M] drivers/gpu/drm/xe/xe_gt_pagefault.o
AR drivers/mailbox/built-in.a
CC fs/nfs/nfs4sysctl.o
CC [M] drivers/gpu/drm/xe/xe_gt_sysfs.o
CC fs/fs_pin.o
CC drivers/net/ethernet/intel/e1000e/param.o
CC drivers/platform/x86/eeepc-laptop.o
CC drivers/md/dm-region-hash.o
CC net/mac80211/pm.o
CC lib/errname.o
CC drivers/acpi/acpica/utbuffer.o
CC drivers/hid/hidraw.o
CC lib/nlattr.o
CC arch/x86/kernel/kvm.o
CC drivers/firmware/efi/capsule-loader.o
CC drivers/firmware/dmi_scan.o
AR drivers/net/ethernet/mscc/built-in.a
CC drivers/gpu/drm/i915/gt/intel_gt.o
CC drivers/gpu/drm/drm_mode_object.o
CC net/ipv4/ip_tunnel_core.o
CC drivers/hid/hid-generic.o
CC lib/cpu_rmap.o
CC drivers/firmware/efi/libstub/skip_spaces.o
CC mm/hmm.o
CC drivers/net/ethernet/intel/e1000e/ethtool.o
CC drivers/acpi/acpica/utcksum.o
CC drivers/firmware/efi/libstub/lib-cmdline.o
CC kernel/vmcore_info.o
CC net/ipv6/mcast_snoop.o
AR drivers/net/ethernet/myricom/built-in.a
CC drivers/hid/hid-a4tech.o
CC drivers/net/ethernet/intel/e1000e/netdev.o
CC mm/memfd.o
CC drivers/firmware/efi/earlycon.o
CC drivers/acpi/evged.o
CC net/ipv4/gre_offload.o
CC drivers/firmware/efi/libstub/lib-ctype.o
CC drivers/firmware/efi/libstub/alignedmem.o
CC drivers/hid/hid-apple.o
CC kernel/elfcorehdr.o
CC drivers/firmware/dmi-id.o
CC arch/x86/kernel/kvmclock.o
CC mm/execmem.o
CC drivers/gpu/drm/drm_modes.o
CC fs/nsfs.o
CC drivers/acpi/acpica/utcopy.o
CC net/ipv4/metrics.o
CC drivers/platform/x86/p2sb.o
CC drivers/md/dm-zero.o
AR drivers/nvmem/built-in.a
CC drivers/gpu/drm/i915/gt/intel_gt_buffer_pool.o
CC drivers/hid/hid-belkin.o
CC net/ipv4/netlink.o
CC kernel/kexec_core.o
CC drivers/hid/hid-cherry.o
AR drivers/net/ethernet/natsemi/built-in.a
CC net/mac80211/rc80211_minstrel_ht.o
CC drivers/gpu/drm/i915/gt/intel_gt_ccs_mode.o
CC kernel/kexec.o
CC arch/x86/kernel/paravirt.o
AR drivers/net/ethernet/neterion/built-in.a
CC arch/x86/kernel/pvclock.o
CC [M] drivers/gpu/drm/xe/xe_gt_throttle.o
CC kernel/utsname.o
AR drivers/net/ethernet/marvell/built-in.a
CC lib/dynamic_queue_limits.o
CC drivers/gpu/drm/drm_modeset_lock.o
CC net/ipv4/nexthop.o
CC drivers/firmware/efi/libstub/relocate.o
CC drivers/gpu/drm/i915/gt/intel_gt_clock_utils.o
CC drivers/firmware/memmap.o
CC fs/fs_types.o
CC drivers/acpi/acpica/utexcep.o
CC drivers/gpu/drm/drm_plane.o
CC [M] drivers/gpu/drm/xe/xe_gt_tlb_invalidation.o
AR drivers/firmware/efi/built-in.a
CC net/mac80211/wbrf.o
CC drivers/firmware/efi/libstub/printk.o
CC lib/glob.o
CC drivers/firmware/efi/libstub/vsprintf.o
CC drivers/hid/hid-chicony.o
CC fs/fs_context.o
CC drivers/acpi/sysfs.o
CC drivers/acpi/property.o
CC arch/x86/kernel/pcspeaker.o
CC [M] drivers/gpu/drm/xe/xe_gt_topology.o
AR mm/built-in.a
CC arch/x86/kernel/check.o
AR drivers/md/built-in.a
AR drivers/platform/x86/built-in.a
AR drivers/platform/built-in.a
CC drivers/firmware/efi/libstub/x86-stub.o
CC net/ipv4/udp_tunnel_stub.o
AR net/ipv6/built-in.a
AR drivers/net/ethernet/netronome/built-in.a
CC drivers/acpi/acpica/utdebug.o
AR drivers/net/ethernet/ni/built-in.a
CC drivers/acpi/acpica/utdecode.o
CC drivers/net/ethernet/nvidia/forcedeth.o
CC lib/strncpy_from_user.o
AR drivers/net/ethernet/oki-semi/built-in.a
CC drivers/acpi/debugfs.o
CC net/ipv4/ip_tunnel.o
CC lib/strnlen_user.o
CC drivers/acpi/acpica/utdelete.o
AR drivers/net/ethernet/packetengines/built-in.a
CC lib/net_utils.o
CC lib/sg_pool.o
CC drivers/net/ethernet/intel/e1000e/ptp.o
CC drivers/firmware/efi/libstub/smbios.o
CC drivers/acpi/acpica/uterror.o
STUBCPY drivers/firmware/efi/libstub/alignedmem.stub.o
CC fs/fs_parser.o
CC drivers/gpu/drm/i915/gt/intel_gt_debugfs.o
CC lib/stackdepot.o
CC drivers/gpu/drm/drm_prime.o
CC arch/x86/kernel/uprobes.o
CC net/ipv4/sysctl_net_ipv4.o
CC [M] drivers/gpu/drm/xe/xe_guc.o
CC fs/fsopen.o
CC drivers/hid/hid-cypress.o
CC arch/x86/kernel/perf_regs.o
CC net/ipv4/proc.o
CC lib/asn1_decoder.o
CC drivers/hid/hid-ezkey.o
CC net/ipv4/fib_rules.o
CC kernel/pid_namespace.o
CC arch/x86/kernel/tracepoint.o
CC fs/init.o
CC kernel/stop_machine.o
GEN lib/oid_registry_data.c
CC drivers/gpu/drm/drm_print.o
CC drivers/acpi/acpica/uteval.o
AR drivers/net/ethernet/qlogic/built-in.a
STUBCPY drivers/firmware/efi/libstub/efi-stub-helper.stub.o
CC drivers/gpu/drm/i915/gt/intel_gt_engines_debugfs.o
CC net/ipv4/ipmr.o
CC drivers/gpu/drm/drm_property.o
CC arch/x86/kernel/itmt.o
CC net/ipv4/ipmr_base.o
CC drivers/hid/hid-gyration.o
AR drivers/net/ethernet/qualcomm/emac/built-in.a
AR drivers/net/ethernet/qualcomm/built-in.a
CC fs/kernel_read_file.o
CC kernel/audit.o
CC [M] drivers/gpu/drm/xe/xe_guc_ads.o
CC drivers/gpu/drm/i915/gt/intel_gt_irq.o
CC kernel/auditfilter.o
CC arch/x86/kernel/umip.o
AR fs/nfs/built-in.a
CC drivers/hid/hid-ite.o
STUBCPY drivers/firmware/efi/libstub/file.stub.o
CC kernel/auditsc.o
CC net/ipv4/syncookies.o
STUBCPY drivers/firmware/efi/libstub/gop.stub.o
STUBCPY drivers/firmware/efi/libstub/lib-cmdline.stub.o
STUBCPY drivers/firmware/efi/libstub/lib-ctype.stub.o
STUBCPY drivers/firmware/efi/libstub/mem.stub.o
STUBCPY drivers/firmware/efi/libstub/pci.stub.o
CC drivers/acpi/acpi_lpat.o
STUBCPY drivers/firmware/efi/libstub/printk.stub.o
CC arch/x86/kernel/unwind_frame.o
STUBCPY drivers/firmware/efi/libstub/random.stub.o
STUBCPY drivers/firmware/efi/libstub/randomalloc.stub.o
CC [M] drivers/gpu/drm/xe/xe_guc_buf.o
STUBCPY drivers/firmware/efi/libstub/relocate.stub.o
CC [M] drivers/gpu/drm/xe/xe_guc_capture.o
STUBCPY drivers/firmware/efi/libstub/secureboot.stub.o
CC drivers/acpi/acpica/utglobal.o
STUBCPY drivers/firmware/efi/libstub/skip_spaces.stub.o
CC fs/mnt_idmapping.o
STUBCPY drivers/firmware/efi/libstub/smbios.stub.o
STUBCPY drivers/firmware/efi/libstub/tpm.stub.o
STUBCPY drivers/firmware/efi/libstub/vsprintf.stub.o
STUBCPY drivers/firmware/efi/libstub/x86-stub.stub.o
AR drivers/firmware/efi/libstub/lib.a
CC [M] drivers/gpu/drm/xe/xe_guc_ct.o
CC net/ipv4/tunnel4.o
AR drivers/firmware/built-in.a
CC kernel/audit_watch.o
CC drivers/gpu/drm/drm_rect.o
CC lib/ucs2_string.o
CC drivers/gpu/drm/i915/gt/intel_gt_mcr.o
CC drivers/net/ethernet/realtek/8139too.o
CC lib/sbitmap.o
AR drivers/net/ethernet/renesas/built-in.a
CC drivers/hid/hid-kensington.o
CC lib/group_cpus.o
CC drivers/net/ethernet/realtek/r8169_main.o
CC drivers/hid/hid-microsoft.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm.o
CC [M] drivers/gpu/drm/xe/xe_guc_db_mgr.o
CC lib/fw_table.o
CC drivers/acpi/acpica/uthex.o
CC drivers/net/ethernet/realtek/r8169_firmware.o
AR drivers/net/ethernet/rdc/built-in.a
CC drivers/gpu/drm/drm_syncobj.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.o
CC net/ipv4/ipconfig.o
CC [M] drivers/gpu/drm/xe/xe_guc_engine_activity.o
CC drivers/gpu/drm/i915/gt/intel_gt_pm_irq.o
CC drivers/acpi/acpi_pcc.o
CC fs/remap_range.o
CC kernel/audit_fsnotify.o
CC drivers/net/ethernet/realtek/r8169_phy_config.o
CC drivers/hid/hid-monterey.o
CC [M] drivers/gpu/drm/xe/xe_guc_hwconfig.o
CC drivers/acpi/acpica/utids.o
AR arch/x86/kernel/built-in.a
AR arch/x86/built-in.a
CC drivers/gpu/drm/i915/gt/intel_gt_requests.o
CC drivers/gpu/drm/i915/gt/intel_gt_sysfs.o
CC [M] drivers/gpu/drm/xe/xe_guc_id_mgr.o
CC drivers/acpi/ac.o
AR drivers/net/ethernet/rocker/built-in.a
CC drivers/gpu/drm/drm_sysfs.o
CC [M] drivers/gpu/drm/xe/xe_guc_klv_helpers.o
CC fs/pidfs.o
CC drivers/gpu/drm/drm_trace_points.o
CC drivers/acpi/acpica/utinit.o
CC drivers/hid/hid-ntrig.o
CC fs/buffer.o
CC drivers/gpu/drm/drm_vblank.o
CC drivers/gpu/drm/i915/gt/intel_gt_sysfs_pm.o
CC [M] drivers/gpu/drm/xe/xe_guc_log.o
CC net/ipv4/netfilter.o
CC fs/mpage.o
AR lib/lib.a
CC drivers/gpu/drm/drm_vblank_work.o
AR drivers/net/ethernet/samsung/built-in.a
CC drivers/hid/hid-pl.o
GEN lib/crc32table.h
CC lib/oid_registry.o
CC net/ipv4/tcp_cubic.o
CC drivers/acpi/button.o
CC net/ipv4/tcp_sigpool.o
CC drivers/gpu/drm/i915/gt/intel_gtt.o
CC drivers/acpi/acpica/utlock.o
CC fs/proc_namespace.o
CC kernel/audit_tree.o
CC kernel/kprobes.o
CC kernel/seccomp.o
CC fs/direct-io.o
CC [M] drivers/gpu/drm/xe/xe_guc_pc.o
CC drivers/acpi/acpica/utmath.o
CC drivers/acpi/fan_core.o
AR drivers/net/ethernet/seeq/built-in.a
CC [M] drivers/gpu/drm/xe/xe_guc_submit.o
CC kernel/relay.o
AR net/mac80211/built-in.a
CC fs/eventpoll.o
CC drivers/gpu/drm/i915/gt/intel_llc.o
AR drivers/net/ethernet/silan/built-in.a
CC drivers/gpu/drm/i915/gt/intel_lrc.o
CC drivers/gpu/drm/i915/gt/intel_migrate.o
CC lib/crc32.o
CC drivers/acpi/acpica/utmisc.o
CC drivers/hid/hid-petalynx.o
AR drivers/net/ethernet/sis/built-in.a
CC drivers/acpi/fan_attr.o
CC drivers/gpu/drm/i915/gt/intel_mocs.o
CC net/ipv4/cipso_ipv4.o
CC fs/anon_inodes.o
CC drivers/hid/hid-redragon.o
CC drivers/acpi/acpica/utmutex.o
AR drivers/net/ethernet/sfc/built-in.a
CC net/ipv4/xfrm4_policy.o
CC drivers/gpu/drm/i915/gt/intel_ppgtt.o
CC drivers/acpi/fan_hwmon.o
CC drivers/gpu/drm/drm_vma_manager.o
CC drivers/gpu/drm/i915/gt/intel_rc6.o
CC drivers/gpu/drm/drm_writeback.o
CC drivers/hid/hid-samsung.o
CC [M] drivers/gpu/drm/xe/xe_heci_gsc.o
CC drivers/gpu/drm/i915/gt/intel_region_lmem.o
CC drivers/acpi/acpi_video.o
CC [M] drivers/gpu/drm/xe/xe_huc.o
AR drivers/net/ethernet/smsc/built-in.a
CC drivers/hid/hid-sony.o
CC net/ipv4/xfrm4_state.o
AR lib/built-in.a
CC drivers/gpu/drm/i915/gt/intel_renderstate.o
CC kernel/utsname_sysctl.o
CC drivers/acpi/video_detect.o
CC drivers/gpu/drm/drm_panel.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine.o
CC net/ipv4/xfrm4_input.o
CC drivers/acpi/acpica/utnonansi.o
AR drivers/net/ethernet/nvidia/built-in.a
CC drivers/hid/hid-sunplus.o
CC kernel/delayacct.o
CC fs/signalfd.o
CC fs/timerfd.o
CC drivers/acpi/processor_driver.o
CC kernel/taskstats.o
CC drivers/hid/hid-topseed.o
CC fs/eventfd.o
AR drivers/net/ethernet/socionext/built-in.a
CC kernel/tsacct.o
AR drivers/net/ethernet/stmicro/built-in.a
CC net/ipv4/xfrm4_output.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.o
CC drivers/gpu/drm/i915/gt/intel_reset.o
CC drivers/acpi/acpica/utobject.o
CC fs/aio.o
AR drivers/net/ethernet/intel/e1000e/built-in.a
AR drivers/net/ethernet/sun/built-in.a
CC drivers/acpi/processor_thermal.o
AR drivers/net/ethernet/intel/built-in.a
CC drivers/gpu/drm/drm_pci.o
CC drivers/gpu/drm/i915/gt/intel_ring.o
CC [M] drivers/gpu/drm/xe/xe_hw_engine_group.o
CC kernel/tracepoint.o
CC drivers/acpi/acpica/utosi.o
CC net/ipv4/xfrm4_protocol.o
CC drivers/acpi/processor_idle.o
CC drivers/gpu/drm/i915/gt/intel_ring_submission.o
CC fs/locks.o
CC drivers/gpu/drm/drm_debugfs.o
CC drivers/acpi/processor_throttling.o
CC [M] drivers/gpu/drm/xe/xe_hw_fence.o
AR drivers/net/ethernet/tehuti/built-in.a
CC drivers/gpu/drm/drm_debugfs_crc.o
CC drivers/acpi/acpica/utownerid.o
CC fs/binfmt_misc.o
CC kernel/irq_work.o
CC kernel/static_call.o
CC fs/binfmt_script.o
CC [M] drivers/gpu/drm/xe/xe_irq.o
CC drivers/acpi/processor_perflib.o
CC [M] drivers/gpu/drm/xe/xe_lrc.o
CC drivers/gpu/drm/drm_panel_orientation_quirks.o
CC kernel/padata.o
CC [M] drivers/gpu/drm/xe/xe_migrate.o
CC fs/binfmt_elf.o
CC fs/mbcache.o
AR drivers/net/ethernet/ti/built-in.a
CC [M] drivers/gpu/drm/xe/xe_mmio.o
AR drivers/net/ethernet/vertexcom/built-in.a
CC fs/posix_acl.o
CC drivers/gpu/drm/drm_buddy.o
CC drivers/acpi/acpica/utpredef.o
CC kernel/jump_label.o
AR drivers/net/ethernet/via/built-in.a
CC [M] drivers/gpu/drm/xe/xe_mocs.o
CC drivers/gpu/drm/drm_gem_shmem_helper.o
CC drivers/gpu/drm/i915/gt/intel_rps.o
CC drivers/gpu/drm/drm_atomic_helper.o
AR drivers/hid/built-in.a
CC [M] drivers/gpu/drm/xe/xe_module.o
AR drivers/net/ethernet/wangxun/built-in.a
CC [M] drivers/gpu/drm/xe/xe_oa.o
CC kernel/context_tracking.o
CC fs/coredump.o
CC drivers/gpu/drm/drm_atomic_state_helper.o
CC drivers/acpi/acpica/utresdecode.o
CC fs/drop_caches.o
AR drivers/net/ethernet/wiznet/built-in.a
CC drivers/gpu/drm/i915/gt/intel_sa_media.o
CC drivers/acpi/acpica/utresrc.o
CC drivers/gpu/drm/drm_bridge_helper.o
CC [M] drivers/gpu/drm/xe/xe_observation.o
AR drivers/net/ethernet/realtek/built-in.a
CC drivers/acpi/acpica/utstate.o
AR drivers/net/ethernet/xilinx/built-in.a
AR drivers/net/ethernet/xircom/built-in.a
AR drivers/net/ethernet/synopsys/built-in.a
AR drivers/net/ethernet/pensando/built-in.a
CC drivers/gpu/drm/drm_crtc_helper.o
CC drivers/gpu/drm/i915/gt/intel_sseu.o
AR drivers/net/ethernet/built-in.a
CC drivers/acpi/acpica/utstring.o
CC drivers/gpu/drm/drm_damage_helper.o
CC kernel/iomem.o
CC fs/sysctls.o
AR drivers/net/built-in.a
CC drivers/gpu/drm/i915/gt/intel_sseu_debugfs.o
CC drivers/acpi/container.o
CC drivers/gpu/drm/drm_flip_work.o
CC drivers/acpi/acpica/utstrsuppt.o
CC [M] drivers/gpu/drm/xe/xe_pat.o
CC kernel/rseq.o
CC fs/fhandle.o
CC drivers/acpi/thermal_lib.o
CC drivers/gpu/drm/i915/gt/intel_timeline.o
CC [M] drivers/gpu/drm/xe/xe_pci.o
CC drivers/gpu/drm/i915/gt/intel_tlb.o
CC drivers/acpi/thermal.o
AR net/ipv4/built-in.a
AR net/built-in.a
CC drivers/acpi/acpica/utstrtoul64.o
CC drivers/gpu/drm/drm_format_helper.o
CC drivers/gpu/drm/i915/gt/intel_wopcm.o
CC [M] drivers/gpu/drm/xe/xe_pcode.o
CC drivers/acpi/acpica/utxface.o
CC drivers/gpu/drm/drm_gem_atomic_helper.o
CC drivers/acpi/nhlt.o
CC drivers/acpi/acpica/utxfinit.o
CC [M] drivers/gpu/drm/xe/xe_pm.o
CC drivers/gpu/drm/i915/gt/intel_workarounds.o
CC drivers/gpu/drm/drm_gem_framebuffer_helper.o
CC drivers/acpi/acpi_memhotplug.o
CC [M] drivers/gpu/drm/xe/xe_preempt_fence.o
CC drivers/acpi/ioapic.o
CC drivers/gpu/drm/i915/gt/shmem_utils.o
CC drivers/acpi/battery.o
CC drivers/gpu/drm/drm_kms_helper_common.o
CC [M] drivers/gpu/drm/xe/xe_pt.o
CC drivers/acpi/acpica/utxferror.o
CC drivers/gpu/drm/drm_modeset_helper.o
CC drivers/gpu/drm/drm_plane_helper.o
CC drivers/acpi/bgrt.o
CC [M] drivers/gpu/drm/xe/xe_pt_walk.o
CC drivers/acpi/acpica/utxfmutex.o
CC drivers/acpi/spcr.o
CC [M] drivers/gpu/drm/xe/xe_pxp.o
CC [M] drivers/gpu/drm/xe/xe_pxp_debugfs.o
CC drivers/gpu/drm/drm_probe_helper.o
CC drivers/gpu/drm/i915/gt/sysfs_engines.o
CC [M] drivers/gpu/drm/xe/xe_pxp_submit.o
CC [M] drivers/gpu/drm/xe/xe_query.o
CC drivers/gpu/drm/drm_self_refresh_helper.o
CC drivers/gpu/drm/i915/gt/intel_ggtt_gmch.o
CC [M] drivers/gpu/drm/xe/xe_range_fence.o
CC [M] drivers/gpu/drm/xe/xe_reg_sr.o
CC drivers/gpu/drm/drm_simple_kms_helper.o
CC drivers/gpu/drm/i915/gt/gen6_renderstate.o
CC drivers/gpu/drm/bridge/panel.o
CC [M] drivers/gpu/drm/xe/xe_reg_whitelist.o
CC drivers/gpu/drm/drm_mipi_dsi.o
AR drivers/acpi/acpica/built-in.a
AR kernel/built-in.a
CC drivers/gpu/drm/i915/gt/gen7_renderstate.o
CC drivers/gpu/drm/i915/gt/gen8_renderstate.o
CC [M] drivers/gpu/drm/xe/xe_ring_ops.o
CC [M] drivers/gpu/drm/drm_exec.o
CC [M] drivers/gpu/drm/xe/xe_rtp.o
CC [M] drivers/gpu/drm/xe/xe_sa.o
CC drivers/gpu/drm/i915/gt/gen9_renderstate.o
CC [M] drivers/gpu/drm/drm_gpuvm.o
CC drivers/gpu/drm/i915/gem/i915_gem_busy.o
CC [M] drivers/gpu/drm/xe/xe_sched_job.o
CC [M] drivers/gpu/drm/drm_suballoc.o
CC [M] drivers/gpu/drm/drm_gem_ttm_helper.o
CC drivers/gpu/drm/i915/gem/i915_gem_clflush.o
CC [M] drivers/gpu/drm/xe/xe_shrinker.o
CC [M] drivers/gpu/drm/xe/xe_step.o
CC drivers/gpu/drm/i915/gem/i915_gem_context.o
CC drivers/gpu/drm/i915/gem/i915_gem_create.o
CC [M] drivers/gpu/drm/xe/xe_survivability_mode.o
CC drivers/gpu/drm/i915/gem/i915_gem_dmabuf.o
CC drivers/gpu/drm/i915/gem/i915_gem_domain.o
CC drivers/gpu/drm/i915/gem/i915_gem_execbuffer.o
CC [M] drivers/gpu/drm/xe/xe_sync.o
CC drivers/gpu/drm/i915/gem/i915_gem_internal.o
CC [M] drivers/gpu/drm/xe/xe_tile.o
AR fs/built-in.a
CC drivers/gpu/drm/i915/gem/i915_gem_lmem.o
CC [M] drivers/gpu/drm/xe/xe_tile_sysfs.o
CC drivers/gpu/drm/i915/gem/i915_gem_mman.o
CC drivers/gpu/drm/i915/gem/i915_gem_object.o
CC drivers/gpu/drm/i915/gem/i915_gem_pages.o
CC drivers/gpu/drm/i915/gem/i915_gem_phys.o
CC [M] drivers/gpu/drm/xe/xe_trace.o
CC drivers/gpu/drm/i915/gem/i915_gem_pm.o
AR drivers/acpi/built-in.a
CC [M] drivers/gpu/drm/xe/xe_trace_bo.o
CC drivers/gpu/drm/i915/gem/i915_gem_region.o
CC [M] drivers/gpu/drm/xe/xe_trace_guc.o
CC drivers/gpu/drm/i915/gem/i915_gem_shmem.o
CC [M] drivers/gpu/drm/xe/xe_trace_lrc.o
CC drivers/gpu/drm/i915/gem/i915_gem_shrinker.o
CC drivers/gpu/drm/i915/gem/i915_gem_stolen.o
CC [M] drivers/gpu/drm/xe/xe_ttm_stolen_mgr.o
CC drivers/gpu/drm/i915/gem/i915_gem_throttle.o
CC drivers/gpu/drm/i915/gem/i915_gem_tiling.o
CC drivers/gpu/drm/i915/gem/i915_gem_ttm.o
CC [M] drivers/gpu/drm/xe/xe_ttm_sys_mgr.o
CC [M] drivers/gpu/drm/xe/xe_ttm_vram_mgr.o
CC [M] drivers/gpu/drm/xe/xe_tuning.o
LD [M] drivers/gpu/drm/drm_suballoc_helper.o
CC drivers/gpu/drm/i915/gem/i915_gem_ttm_move.o
LD [M] drivers/gpu/drm/drm_ttm_helper.o
CC [M] drivers/gpu/drm/xe/xe_uc.o
CC [M] drivers/gpu/drm/xe/xe_uc_fw.o
CC [M] drivers/gpu/drm/xe/xe_vm.o
CC drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.o
CC [M] drivers/gpu/drm/xe/xe_vm_madvise.o
CC [M] drivers/gpu/drm/xe/xe_vram.o
CC drivers/gpu/drm/i915/gem/i915_gem_userptr.o
CC [M] drivers/gpu/drm/xe/xe_vram_freq.o
CC [M] drivers/gpu/drm/xe/xe_vsec.o
CC [M] drivers/gpu/drm/xe/xe_wa.o
CC [M] drivers/gpu/drm/xe/xe_wait_user_fence.o
CC drivers/gpu/drm/i915/gem/i915_gem_wait.o
CC [M] drivers/gpu/drm/xe/xe_wopcm.o
CC drivers/gpu/drm/i915/gem/i915_gemfs.o
CC [M] drivers/gpu/drm/xe/xe_hmm.o
CC drivers/gpu/drm/i915/i915_active.o
CC drivers/gpu/drm/i915/i915_cmd_parser.o
CC [M] drivers/gpu/drm/xe/xe_hwmon.o
CC [M] drivers/gpu/drm/xe/xe_pmu.o
CC [M] drivers/gpu/drm/xe/xe_gt_sriov_vf.o
CC drivers/gpu/drm/i915/i915_deps.o
CC drivers/gpu/drm/i915/i915_gem.o
CC [M] drivers/gpu/drm/xe/xe_guc_relay.o
CC drivers/gpu/drm/i915/i915_gem_evict.o
CC [M] drivers/gpu/drm/xe/xe_memirq.o
CC drivers/gpu/drm/i915/i915_gem_gtt.o
CC drivers/gpu/drm/i915/i915_gem_ww.o
CC [M] drivers/gpu/drm/xe/xe_sriov.o
CC [M] drivers/gpu/drm/xe/xe_sriov_vf.o
CC [M] drivers/gpu/drm/xe/display/ext/i915_irq.o
CC [M] drivers/gpu/drm/xe/display/ext/i915_utils.o
CC drivers/gpu/drm/i915/i915_query.o
CC [M] drivers/gpu/drm/xe/display/intel_bo.o
CC drivers/gpu/drm/i915/i915_request.o
CC drivers/gpu/drm/i915/i915_scheduler.o
CC [M] drivers/gpu/drm/xe/display/intel_fb_bo.o
CC drivers/gpu/drm/i915/i915_trace_points.o
CC drivers/gpu/drm/i915/i915_ttm_buddy_manager.o
CC drivers/gpu/drm/i915/i915_vma.o
CC [M] drivers/gpu/drm/xe/display/intel_fbdev_fb.o
CC [M] drivers/gpu/drm/xe/display/xe_display.o
CC [M] drivers/gpu/drm/xe/display/xe_display_misc.o
CC drivers/gpu/drm/i915/i915_vma_resource.o
CC [M] drivers/gpu/drm/xe/display/xe_display_rpm.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_fw.o
CC [M] drivers/gpu/drm/xe/display/xe_display_rps.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_proxy.o
CC [M] drivers/gpu/drm/xe/display/xe_display_wa.o
CC [M] drivers/gpu/drm/xe/display/xe_dsb_buffer.o
CC [M] drivers/gpu/drm/xe/display/xe_fb_pin.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.o
CC [M] drivers/gpu/drm/xe/display/xe_hdcp_gsc.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_debugfs.o
CC [M] drivers/gpu/drm/xe/display/xe_plane_initial.o
CC drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc.o
CC [M] drivers/gpu/drm/xe/display/xe_tdf.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_ads.o
CC [M] drivers/gpu/drm/xe/i915-soc/intel_dram.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_capture.o
CC [M] drivers/gpu/drm/xe/i915-soc/intel_pch.o
CC [M] drivers/gpu/drm/xe/i915-soc/intel_rom.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_ct.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/icl_dsi.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_alpm.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_atomic.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_fw.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_atomic_plane.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_hwconfig.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_audio.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_backlight.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_log.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_bios.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_bw.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_log_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_cdclk.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_cmtg.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_color.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_combo_phy.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_connector.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_rc.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.o
CC drivers/gpu/drm/i915/gt/uc/intel_guc_submission.o
CC drivers/gpu/drm/i915/gt/uc/intel_huc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_crtc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_crtc_state_dump.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_cursor.o
CC drivers/gpu/drm/i915/gt/uc/intel_huc_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_cx0_phy.o
CC drivers/gpu/drm/i915/gt/uc/intel_huc_fw.o
CC drivers/gpu/drm/i915/gt/uc/intel_uc.o
CC drivers/gpu/drm/i915/gt/uc/intel_uc_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_ddi.o
CC drivers/gpu/drm/i915/gt/uc/intel_uc_fw.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_ddi_buf_trans.o
CC drivers/gpu/drm/i915/gt/intel_gsc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display.o
CC drivers/gpu/drm/i915/i915_hwmon.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_conversion.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_device.o
CC drivers/gpu/drm/i915/display/hsw_ips.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_driver.o
CC drivers/gpu/drm/i915/display/i9xx_plane.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_irq.o
CC drivers/gpu/drm/i915/display/i9xx_display_sr.o
CC drivers/gpu/drm/i915/display/i9xx_wm.o
CC drivers/gpu/drm/i915/display/intel_alpm.o
CC drivers/gpu/drm/i915/display/intel_atomic.o
CC drivers/gpu/drm/i915/display/intel_atomic_plane.o
CC drivers/gpu/drm/i915/display/intel_audio.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_params.o
CC drivers/gpu/drm/i915/display/intel_bios.o
CC drivers/gpu/drm/i915/display/intel_bo.o
CC drivers/gpu/drm/i915/display/intel_bw.o
CC drivers/gpu/drm/i915/display/intel_cdclk.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_power.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_power_map.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_power_well.o
CC drivers/gpu/drm/i915/display/intel_cmtg.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_trace.o
CC drivers/gpu/drm/i915/display/intel_color.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_wa.o
CC drivers/gpu/drm/i915/display/intel_combo_phy.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dkl_phy.o
CC drivers/gpu/drm/i915/display/intel_connector.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dmc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dmc_wl.o
CC drivers/gpu/drm/i915/display/intel_crtc.o
CC drivers/gpu/drm/i915/display/intel_crtc_state_dump.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp.o
CC drivers/gpu/drm/i915/display/intel_cursor.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_aux.o
CC drivers/gpu/drm/i915/display/intel_display.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_aux_backlight.o
CC drivers/gpu/drm/i915/display/intel_display_conversion.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_hdcp.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_link_training.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_mst.o
CC drivers/gpu/drm/i915/display/intel_display_driver.o
CC drivers/gpu/drm/i915/display/intel_display_irq.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dp_test.o
CC drivers/gpu/drm/i915/display/intel_display_params.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dpll.o
CC drivers/gpu/drm/i915/display/intel_display_power.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dpll_mgr.o
CC drivers/gpu/drm/i915/display/intel_display_power_map.o
CC drivers/gpu/drm/i915/display/intel_display_power_well.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dpt_common.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_drrs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dsb.o
CC drivers/gpu/drm/i915/display/intel_display_reset.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dsi.o
CC drivers/gpu/drm/i915/display/intel_display_rpm.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dsi_dcs_backlight.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_dsi_vbt.o
CC drivers/gpu/drm/i915/display/intel_display_rps.o
CC drivers/gpu/drm/i915/display/intel_display_snapshot.o
CC drivers/gpu/drm/i915/display/intel_display_wa.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_encoder.o
CC drivers/gpu/drm/i915/display/intel_dmc.o
CC drivers/gpu/drm/i915/display/intel_dmc_wl.o
CC drivers/gpu/drm/i915/display/intel_dpio_phy.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_fb.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_fbc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_fdi.o
CC drivers/gpu/drm/i915/display/intel_dpll.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_fifo_underrun.o
CC drivers/gpu/drm/i915/display/intel_dpll_mgr.o
CC drivers/gpu/drm/i915/display/intel_dpt.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_frontbuffer.o
CC drivers/gpu/drm/i915/display/intel_dpt_common.o
CC drivers/gpu/drm/i915/display/intel_drrs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_global_state.o
CC drivers/gpu/drm/i915/display/intel_dsb.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_gmbus.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hdcp.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hdcp_gsc_message.o
CC drivers/gpu/drm/i915/display/intel_dsb_buffer.o
CC drivers/gpu/drm/i915/display/intel_fb.o
CC drivers/gpu/drm/i915/display/intel_fb_bo.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hdmi.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hotplug.o
CC drivers/gpu/drm/i915/display/intel_fb_pin.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hotplug_irq.o
CC drivers/gpu/drm/i915/display/intel_fbc.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_hti.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_link_bw.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_lspcon.o
CC drivers/gpu/drm/i915/display/intel_fdi.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_modeset_lock.o
CC drivers/gpu/drm/i915/display/intel_fifo_underrun.o
CC drivers/gpu/drm/i915/display/intel_frontbuffer.o
CC drivers/gpu/drm/i915/display/intel_global_state.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_modeset_setup.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_modeset_verify.o
CC drivers/gpu/drm/i915/display/intel_hdcp.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_panel.o
CC drivers/gpu/drm/i915/display/intel_hdcp_gsc.o
CC drivers/gpu/drm/i915/display/intel_hdcp_gsc_message.o
CC drivers/gpu/drm/i915/display/intel_hotplug.o
CC drivers/gpu/drm/i915/display/intel_hotplug_irq.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_pfit.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_pmdemand.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_pps.o
CC drivers/gpu/drm/i915/display/intel_hti.o
CC drivers/gpu/drm/i915/display/intel_link_bw.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_psr.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_qp_tables.o
CC drivers/gpu/drm/i915/display/intel_load_detect.o
CC drivers/gpu/drm/i915/display/intel_lpe_audio.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_quirks.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_snps_hdmi_pll.o
CC drivers/gpu/drm/i915/display/intel_modeset_lock.o
CC drivers/gpu/drm/i915/display/intel_modeset_setup.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_snps_phy.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_tc.o
CC drivers/gpu/drm/i915/display/intel_modeset_verify.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_vblank.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_vdsc.o
CC drivers/gpu/drm/i915/display/intel_overlay.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_vga.o
CC drivers/gpu/drm/i915/display/intel_pch_display.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_vrr.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_wm.o
CC [M] drivers/gpu/drm/xe/i915-display/skl_scaler.o
CC [M] drivers/gpu/drm/xe/i915-display/skl_universal_plane.o
CC [M] drivers/gpu/drm/xe/i915-display/skl_watermark.o
CC drivers/gpu/drm/i915/display/intel_pch_refclk.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_acpi.o
CC drivers/gpu/drm/i915/display/intel_plane_initial.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_opregion.o
CC drivers/gpu/drm/i915/display/intel_pmdemand.o
CC drivers/gpu/drm/i915/display/intel_psr.o
CC [M] drivers/gpu/drm/xe/xe_debugfs.o
CC drivers/gpu/drm/i915/display/intel_quirks.o
CC drivers/gpu/drm/i915/display/intel_sprite.o
CC [M] drivers/gpu/drm/xe/xe_gt_debugfs.o
CC drivers/gpu/drm/i915/display/intel_sprite_uapi.o
CC [M] drivers/gpu/drm/xe/xe_gt_sriov_vf_debugfs.o
CC drivers/gpu/drm/i915/display/intel_tc.o
CC [M] drivers/gpu/drm/xe/xe_gt_stats.o
CC [M] drivers/gpu/drm/xe/xe_guc_debugfs.o
CC drivers/gpu/drm/i915/display/intel_vblank.o
CC drivers/gpu/drm/i915/display/intel_vga.o
CC [M] drivers/gpu/drm/xe/xe_huc_debugfs.o
CC drivers/gpu/drm/i915/display/intel_wm.o
CC drivers/gpu/drm/i915/display/skl_scaler.o
CC [M] drivers/gpu/drm/xe/xe_uc_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_debugfs.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_display_debugfs_params.o
CC drivers/gpu/drm/i915/display/skl_universal_plane.o
CC drivers/gpu/drm/i915/display/skl_watermark.o
CC [M] drivers/gpu/drm/xe/i915-display/intel_pipe_crc.o
CC drivers/gpu/drm/i915/display/intel_acpi.o
CC drivers/gpu/drm/i915/display/intel_opregion.o
CC drivers/gpu/drm/i915/display/intel_display_debugfs.o
CC drivers/gpu/drm/i915/display/intel_display_debugfs_params.o
CC drivers/gpu/drm/i915/display/intel_pipe_crc.o
CC drivers/gpu/drm/i915/display/dvo_ch7017.o
CC drivers/gpu/drm/i915/display/dvo_ch7xxx.o
CC drivers/gpu/drm/i915/display/dvo_ivch.o
CC drivers/gpu/drm/i915/display/dvo_ns2501.o
CC drivers/gpu/drm/i915/display/dvo_sil164.o
CC drivers/gpu/drm/i915/display/dvo_tfp410.o
CC drivers/gpu/drm/i915/display/g4x_dp.o
CC drivers/gpu/drm/i915/display/g4x_hdmi.o
CC drivers/gpu/drm/i915/display/icl_dsi.o
CC drivers/gpu/drm/i915/display/intel_backlight.o
CC drivers/gpu/drm/i915/display/intel_crt.o
CC drivers/gpu/drm/i915/display/intel_cx0_phy.o
CC drivers/gpu/drm/i915/display/intel_ddi.o
CC drivers/gpu/drm/i915/display/intel_ddi_buf_trans.o
CC drivers/gpu/drm/i915/display/intel_display_device.o
CC drivers/gpu/drm/i915/display/intel_display_trace.o
CC drivers/gpu/drm/i915/display/intel_dkl_phy.o
CC drivers/gpu/drm/i915/display/intel_dp.o
CC drivers/gpu/drm/i915/display/intel_dp_aux.o
CC drivers/gpu/drm/i915/display/intel_dp_aux_backlight.o
CC drivers/gpu/drm/i915/display/intel_dp_hdcp.o
CC drivers/gpu/drm/i915/display/intel_dp_link_training.o
CC drivers/gpu/drm/i915/display/intel_dp_mst.o
CC drivers/gpu/drm/i915/display/intel_dp_test.o
CC drivers/gpu/drm/i915/display/intel_dsi.o
CC drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.o
CC drivers/gpu/drm/i915/display/intel_dsi_vbt.o
CC drivers/gpu/drm/i915/display/intel_dvo.o
CC drivers/gpu/drm/i915/display/intel_encoder.o
CC drivers/gpu/drm/i915/display/intel_gmbus.o
CC drivers/gpu/drm/i915/display/intel_hdmi.o
CC drivers/gpu/drm/i915/display/intel_lspcon.o
CC drivers/gpu/drm/i915/display/intel_lvds.o
CC drivers/gpu/drm/i915/display/intel_panel.o
CC drivers/gpu/drm/i915/display/intel_pfit.o
CC drivers/gpu/drm/i915/display/intel_pps.o
CC drivers/gpu/drm/i915/display/intel_qp_tables.o
CC drivers/gpu/drm/i915/display/intel_sdvo.o
CC drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.o
CC drivers/gpu/drm/i915/display/intel_snps_phy.o
CC drivers/gpu/drm/i915/display/intel_tv.o
CC drivers/gpu/drm/i915/display/intel_vdsc.o
CC drivers/gpu/drm/i915/display/intel_vrr.o
CC drivers/gpu/drm/i915/display/vlv_dsi.o
CC drivers/gpu/drm/i915/display/vlv_dsi_pll.o
CC drivers/gpu/drm/i915/i915_perf.o
CC drivers/gpu/drm/i915/pxp/intel_pxp.o
CC drivers/gpu/drm/i915/pxp/intel_pxp_huc.o
CC drivers/gpu/drm/i915/pxp/intel_pxp_tee.o
CC drivers/gpu/drm/i915/i915_gpu_error.o
CC drivers/gpu/drm/i915/i915_vgpu.o
LD [M] drivers/gpu/drm/xe/xe.o
AR drivers/gpu/drm/i915/built-in.a
AR drivers/gpu/drm/built-in.a
AR drivers/gpu/built-in.a
AR drivers/built-in.a
AR built-in.a
AR vmlinux.a
LD vmlinux.o
OBJCOPY modules.builtin.modinfo
GEN modules.builtin
MODPOST Module.symvers
CC .vmlinux.export.o
CC [M] fs/efivarfs/efivarfs.mod.o
CC [M] .module-common.o
CC [M] drivers/gpu/drm/drm_exec.mod.o
CC [M] drivers/gpu/drm/drm_gpuvm.mod.o
CC [M] drivers/gpu/drm/drm_suballoc_helper.mod.o
CC [M] drivers/gpu/drm/drm_ttm_helper.mod.o
CC [M] drivers/gpu/drm/scheduler/gpu-sched.mod.o
CC [M] drivers/gpu/drm/xe/xe.mod.o
CC [M] drivers/thermal/intel/x86_pkg_temp_thermal.mod.o
CC [M] net/netfilter/nf_log_syslog.mod.o
CC [M] net/netfilter/xt_mark.mod.o
CC [M] net/netfilter/xt_nat.mod.o
CC [M] net/netfilter/xt_LOG.mod.o
CC [M] net/netfilter/xt_MASQUERADE.mod.o
CC [M] net/netfilter/xt_addrtype.mod.o
CC [M] net/ipv4/netfilter/iptable_nat.mod.o
LD [M] drivers/gpu/drm/drm_suballoc_helper.ko
LD [M] net/netfilter/nf_log_syslog.ko
LD [M] drivers/gpu/drm/scheduler/gpu-sched.ko
LD [M] net/netfilter/xt_addrtype.ko
LD [M] fs/efivarfs/efivarfs.ko
LD [M] drivers/gpu/drm/drm_ttm_helper.ko
LD [M] net/netfilter/xt_LOG.ko
LD [M] net/netfilter/xt_MASQUERADE.ko
LD [M] drivers/gpu/drm/drm_exec.ko
LD [M] net/netfilter/xt_nat.ko
LD [M] net/ipv4/netfilter/iptable_nat.ko
LD [M] drivers/gpu/drm/drm_gpuvm.ko
LD [M] drivers/thermal/intel/x86_pkg_temp_thermal.ko
LD [M] net/netfilter/xt_mark.ko
LD [M] drivers/gpu/drm/xe/xe.ko
UPD include/generated/utsversion.h
CC init/version-timestamp.o
KSYMS .tmp_vmlinux0.kallsyms.S
AS .tmp_vmlinux0.kallsyms.o
LD .tmp_vmlinux1
NM .tmp_vmlinux1.syms
KSYMS .tmp_vmlinux1.kallsyms.S
AS .tmp_vmlinux1.kallsyms.o
LD .tmp_vmlinux2
NM .tmp_vmlinux2.syms
KSYMS .tmp_vmlinux2.kallsyms.S
AS .tmp_vmlinux2.kallsyms.o
LD vmlinux.unstripped
NM System.map
SORTTAB vmlinux.unstripped
RSTRIP vmlinux
CC arch/x86/boot/a20.o
AS arch/x86/boot/bioscall.o
CC arch/x86/boot/cmdline.o
AS arch/x86/boot/copy.o
HOSTCC arch/x86/boot/mkcpustr
CC arch/x86/boot/cpuflags.o
CC arch/x86/boot/cpucheck.o
CC arch/x86/boot/early_serial_console.o
CC arch/x86/boot/edd.o
CC arch/x86/boot/main.o
CC arch/x86/boot/memory.o
CC arch/x86/boot/pm.o
AS arch/x86/boot/pmjump.o
CC arch/x86/boot/printf.o
CC arch/x86/boot/regs.o
CC arch/x86/boot/string.o
CC arch/x86/boot/tty.o
CC arch/x86/boot/video.o
CC arch/x86/boot/video-mode.o
CC arch/x86/boot/version.o
CC arch/x86/boot/video-vga.o
CC arch/x86/boot/video-vesa.o
CC arch/x86/boot/video-bios.o
LDS arch/x86/boot/compressed/vmlinux.lds
AS arch/x86/boot/compressed/kernel_info.o
AS arch/x86/boot/compressed/head_32.o
VOFFSET arch/x86/boot/compressed/../voffset.h
CC arch/x86/boot/compressed/string.o
CC arch/x86/boot/compressed/cmdline.o
CC arch/x86/boot/compressed/error.o
OBJCOPY arch/x86/boot/compressed/vmlinux.bin
RELOCS arch/x86/boot/compressed/vmlinux.relocs
HOSTCC arch/x86/boot/compressed/mkpiggy
CC arch/x86/boot/compressed/cpuflags.o
CC arch/x86/boot/compressed/early_serial_console.o
CC arch/x86/boot/compressed/kaslr.o
CC arch/x86/boot/compressed/acpi.o
CC arch/x86/boot/compressed/efi.o
CPUSTR arch/x86/boot/cpustr.h
CC arch/x86/boot/cpu.o
GZIP arch/x86/boot/compressed/vmlinux.bin.gz
CC arch/x86/boot/compressed/misc.o
MKPIGGY arch/x86/boot/compressed/piggy.S
AS arch/x86/boot/compressed/piggy.o
LD arch/x86/boot/compressed/vmlinux
ZOFFSET arch/x86/boot/zoffset.h
OBJCOPY arch/x86/boot/vmlinux.bin
AS arch/x86/boot/header.o
LD arch/x86/boot/setup.elf
OBJCOPY arch/x86/boot/setup.bin
BUILD arch/x86/boot/bzImage
Kernel: arch/x86/boot/bzImage is ready (#1)
run-parts: executing /workspace/ci/hooks/20-kernel-doc
+ SRC_DIR=/workspace/kernel
+ cd /workspace/kernel
+ find drivers/gpu/drm/xe/ -name '*.[ch]' -not -path 'drivers/gpu/drm/xe/display/*'
+ xargs ./scripts/kernel-doc -Werror -none include/uapi/drm/xe_drm.h
include/uapi/drm/xe_drm.h:2095: warning: Function parameter or struct member 'range' not described in 'drm_xe_vm_query_num_vmas'
include/uapi/drm/xe_drm.h:2095: warning: Excess struct member 'size' description in 'drm_xe_vm_query_num_vmas'
include/uapi/drm/xe_drm.h:2179: warning: Function parameter or struct member 'range' not described in 'drm_xe_vm_query_vmas_attr'
include/uapi/drm/xe_drm.h:2179: warning: Function parameter or struct member 'attr' not described in 'drm_xe_vm_query_vmas_attr'
include/uapi/drm/xe_drm.h:2179: warning: Function parameter or struct member 'vector_of_vma_mem_attr' not described in 'drm_xe_vm_query_vmas_attr'
include/uapi/drm/xe_drm.h:2179: warning: Excess struct member 'size' description in 'drm_xe_vm_query_vmas_attr'
include/uapi/drm/xe_drm.h:2179: warning: Excess struct member 'vector_of_ops' description in 'drm_xe_vm_query_vmas_attr'
drivers/gpu/drm/xe/xe_svm.h:148: warning: expecting prototype for xe_svm_range_start(). Prototype was for xe_svm_range_end() instead
drivers/gpu/drm/xe/xe_svm.h:159: warning: expecting prototype for xe_svm_range_start(). Prototype was for xe_svm_range_size() instead
drivers/gpu/drm/xe/xe_svm.c:938: warning: Function parameter or struct member 'vm' not described in 'xe_svm_range_clean_if_addr_within'
drivers/gpu/drm/xe/xe_vm.c:2279: warning: Function parameter or struct member 'page_addr' not described in 'xe_vm_find_vma_by_addr'
drivers/gpu/drm/xe/xe_vm.c:2279: warning: Excess function parameter 'page_address' description in 'xe_vm_find_vma_by_addr'
12 warnings as Errors
run-parts: /workspace/ci/hooks/20-kernel-doc exited with return code 123
* ✗ CI.checksparse: warning for PREFETCH and MADVISE for SVM ranges (rev4)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (40 preceding siblings ...)
2025-04-09 5:31 ` ✗ CI.Hooks: failure " Patchwork
@ 2025-04-09 5:32 ` Patchwork
2025-04-09 5:52 ` ✓ Xe.CI.BAT: success " Patchwork
2025-04-09 7:00 ` ✗ Xe.CI.Full: failure " Patchwork
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-09 5:32 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev4)
URL : https://patchwork.freedesktop.org/series/146290/
State : warning
== Summary ==
+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast a49a4787e6bc70296204f4a6e1b0fed3759938cd
Sparse version: 0.6.4 (Ubuntu: 0.6.4-4ubuntu3)
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/display/intel_display_types.h:1978:24: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/display/intel_display_types.h:1978:24: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/display/intel_psr.c: note: in included file:
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✓ Xe.CI.BAT: success for PREFETCH and MADVISE for SVM ranges (rev4)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (41 preceding siblings ...)
2025-04-09 5:32 ` ✗ CI.checksparse: warning " Patchwork
@ 2025-04-09 5:52 ` Patchwork
2025-04-09 7:00 ` ✗ Xe.CI.Full: failure " Patchwork
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-09 5:52 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev4)
URL : https://patchwork.freedesktop.org/series/146290/
State : success
== Summary ==
CI Bug Log - changes from xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd_BAT -> xe-pw-146290v4_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (9 -> 8)
------------------------------
Missing (1): bat-adlp-vm
Changes
-------
No changes found
Build changes
-------------
* Linux: xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd -> xe-pw-146290v4
IGT_8311: 851a9c1cb1a690d8c527f26c49c250ec583af65e @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd: a49a4787e6bc70296204f4a6e1b0fed3759938cd
xe-pw-146290v4: 146290v4
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/index.html
* ✗ Xe.CI.Full: failure for PREFETCH and MADVISE for SVM ranges (rev4)
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
` (42 preceding siblings ...)
2025-04-09 5:52 ` ✓ Xe.CI.BAT: success " Patchwork
@ 2025-04-09 7:00 ` Patchwork
43 siblings, 0 replies; 120+ messages in thread
From: Patchwork @ 2025-04-09 7:00 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: PREFETCH and MADVISE for SVM ranges (rev4)
URL : https://patchwork.freedesktop.org/series/146290/
State : failure
== Summary ==
CI Bug Log - changes from xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd_FULL -> xe-pw-146290v4_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-146290v4_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-146290v4_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-146290v4_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@kms_flip@2x-flip-vs-dpms-on-nop:
- shard-bmg: [PASS][1] -> [SKIP][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-8/igt@kms_flip@2x-flip-vs-dpms-on-nop.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_flip@2x-flip-vs-dpms-on-nop.html
* igt@kms_flip@2x-flip-vs-dpms-on-nop-interruptible:
- shard-bmg: NOTRUN -> [SKIP][3] +4 other tests skip
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_flip@2x-flip-vs-dpms-on-nop-interruptible.html
- shard-adlp: NOTRUN -> [SKIP][4] +5 other tests skip
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@kms_flip@2x-flip-vs-dpms-on-nop-interruptible.html
* igt@xe_pxp@pxp-optout:
- shard-lnl: [PASS][5] -> [ABORT][6]
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-2/igt@xe_pxp@pxp-optout.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-2/igt@xe_pxp@pxp-optout.html
* igt@xe_vm@compact-64k-pages:
- shard-bmg: [PASS][7] -> [FAIL][8] +8 other tests fail
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@xe_vm@compact-64k-pages.html
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@xe_vm@compact-64k-pages.html
* igt@xe_vm@mmap-style-bind-either-side-full:
- shard-dg2-set2: [PASS][9] -> [FAIL][10] +11 other tests fail
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-433/igt@xe_vm@mmap-style-bind-either-side-full.html
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-436/igt@xe_vm@mmap-style-bind-either-side-full.html
* igt@xe_vm@mmap-style-bind-either-side-partial:
- shard-adlp: [PASS][11] -> [DMESG-FAIL][12] +4 other tests dmesg-fail
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-1/igt@xe_vm@mmap-style-bind-either-side-partial.html
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_vm@mmap-style-bind-either-side-partial.html
- shard-dg2-set2: NOTRUN -> [FAIL][13] +1 other test fail
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@xe_vm@mmap-style-bind-either-side-partial.html
* igt@xe_vm@mmap-style-bind-end:
- shard-lnl: [PASS][14] -> [FAIL][15] +13 other tests fail
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-1/igt@xe_vm@mmap-style-bind-end.html
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-6/igt@xe_vm@mmap-style-bind-end.html
* igt@xe_vm@mmap-style-bind-many-either-side-partial-hammer:
- shard-bmg: NOTRUN -> [FAIL][16] +5 other tests fail
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@xe_vm@mmap-style-bind-many-either-side-partial-hammer.html
- shard-adlp: NOTRUN -> [DMESG-FAIL][17]
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_vm@mmap-style-bind-many-either-side-partial-hammer.html
* igt@xe_vm@mmap-style-bind-userptr-either-side-full:
- shard-adlp: NOTRUN -> [FAIL][18] +2 other tests fail
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_vm@mmap-style-bind-userptr-either-side-full.html
* igt@xe_vm@mmap-style-bind-userptr-either-side-partial:
- shard-adlp: [PASS][19] -> [FAIL][20] +2 other tests fail
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-6/igt@xe_vm@mmap-style-bind-userptr-either-side-partial.html
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-4/igt@xe_vm@mmap-style-bind-userptr-either-side-partial.html
Known issues
------------
Here are the changes found in xe-pw-146290v4_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@intel_hwmon@hwmon-read:
- shard-adlp: NOTRUN -> [SKIP][21] ([Intel XE#1125])
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@intel_hwmon@hwmon-read.html
* igt@kms_async_flips@async-flip-with-page-flip-events@pipe-d-dp-2-4-rc-ccs-cc:
- shard-dg2-set2: NOTRUN -> [SKIP][22] ([Intel XE#2550] / [Intel XE#3767]) +15 other tests skip
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-432/igt@kms_async_flips@async-flip-with-page-flip-events@pipe-d-dp-2-4-rc-ccs-cc.html
* igt@kms_atomic@plane-primary-overlay-mutable-zpos:
- shard-bmg: NOTRUN -> [SKIP][23] ([Intel XE#2385])
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_atomic@plane-primary-overlay-mutable-zpos.html
* igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
- shard-bmg: NOTRUN -> [SKIP][24] ([Intel XE#2370])
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html
* igt@kms_big_fb@4-tiled-addfb:
- shard-adlp: NOTRUN -> [SKIP][25] ([Intel XE#619])
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_big_fb@4-tiled-addfb.html
* igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
- shard-adlp: NOTRUN -> [SKIP][26] ([Intel XE#1124]) +17 other tests skip
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html
* igt@kms_big_fb@x-tiled-32bpp-rotate-90:
- shard-bmg: NOTRUN -> [SKIP][27] ([Intel XE#2327]) +7 other tests skip
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_big_fb@x-tiled-32bpp-rotate-90.html
* igt@kms_big_fb@x-tiled-8bpp-rotate-270:
- shard-dg2-set2: NOTRUN -> [SKIP][28] ([Intel XE#316]) +1 other test skip
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_big_fb@x-tiled-8bpp-rotate-270.html
* igt@kms_big_fb@x-tiled-8bpp-rotate-90:
- shard-adlp: NOTRUN -> [SKIP][29] ([Intel XE#316]) +6 other tests skip
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@kms_big_fb@x-tiled-8bpp-rotate-90.html
* igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-adlp: NOTRUN -> [DMESG-FAIL][30] ([Intel XE#4543]) +7 other tests dmesg-fail
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
- shard-adlp: [PASS][31] -> [DMESG-FAIL][32] ([Intel XE#4543]) +1 other test dmesg-fail
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-1/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@kms_big_fb@x-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html
* igt@kms_big_fb@y-tiled-addfb:
- shard-bmg: NOTRUN -> [SKIP][33] ([Intel XE#2328])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_big_fb@y-tiled-addfb.html
* igt@kms_big_fb@y-tiled-addfb-size-offset-overflow:
- shard-dg2-set2: NOTRUN -> [SKIP][34] ([Intel XE#607])
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
- shard-bmg: NOTRUN -> [SKIP][35] ([Intel XE#607])
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-8/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
- shard-dg2-set2: NOTRUN -> [SKIP][36] ([Intel XE#1124]) +1 other test skip
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180:
- shard-bmg: NOTRUN -> [SKIP][37] ([Intel XE#1124]) +15 other tests skip
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip:
- shard-lnl: NOTRUN -> [SKIP][38] ([Intel XE#1124]) +1 other test skip
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
* igt@kms_bw@connected-linear-tiling-1-displays-3840x2160p:
- shard-adlp: NOTRUN -> [SKIP][39] ([Intel XE#367]) +3 other tests skip
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@kms_bw@connected-linear-tiling-1-displays-3840x2160p.html
* igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p:
- shard-bmg: [PASS][40] -> [SKIP][41] ([Intel XE#2314] / [Intel XE#2894])
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-7/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_bw@connected-linear-tiling-2-displays-2160x1440p.html
* igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p:
- shard-bmg: NOTRUN -> [SKIP][42] ([Intel XE#2314] / [Intel XE#2894])
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html
* igt@kms_bw@connected-linear-tiling-4-displays-2560x1440p:
- shard-adlp: NOTRUN -> [SKIP][43] ([Intel XE#2191]) +1 other test skip
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_bw@connected-linear-tiling-4-displays-2560x1440p.html
* igt@kms_bw@linear-tiling-1-displays-3840x2160p:
- shard-dg2-set2: NOTRUN -> [SKIP][44] ([Intel XE#367])
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_bw@linear-tiling-1-displays-3840x2160p.html
* igt@kms_bw@linear-tiling-4-displays-2160x1440p:
- shard-bmg: NOTRUN -> [SKIP][45] ([Intel XE#367]) +4 other tests skip
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_bw@linear-tiling-4-displays-2160x1440p.html
* igt@kms_ccs@bad-pixel-format-4-tiled-dg2-mc-ccs:
- shard-bmg: NOTRUN -> [SKIP][46] ([Intel XE#2887]) +25 other tests skip
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-7/igt@kms_ccs@bad-pixel-format-4-tiled-dg2-mc-ccs.html
* igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][47] ([Intel XE#2907])
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs.html
* igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs:
- shard-lnl: NOTRUN -> [SKIP][48] ([Intel XE#2887]) +3 other tests skip
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-4/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs.html
* igt@kms_ccs@crc-primary-rotation-180-4-tiled-bmg-ccs:
- shard-adlp: NOTRUN -> [SKIP][49] ([Intel XE#2907]) +2 other tests skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@kms_ccs@crc-primary-rotation-180-4-tiled-bmg-ccs.html
* igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc@pipe-d-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][50] ([Intel XE#455] / [Intel XE#787]) +51 other tests skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc@pipe-d-hdmi-a-1.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs:
- shard-adlp: NOTRUN -> [SKIP][51] ([Intel XE#3442])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][52] ([Intel XE#3442])
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [SKIP][53] ([Intel XE#2652] / [Intel XE#787]) +13 other tests skip
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-7/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-a-dp-2.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc:
- shard-bmg: NOTRUN -> [SKIP][54] ([Intel XE#3432])
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs-cc.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs@pipe-a-hdmi-a-2:
- shard-dg2-set2: NOTRUN -> [SKIP][55] ([Intel XE#787]) +121 other tests skip
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-432/igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs@pipe-a-hdmi-a-2.html
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-b-hdmi-a-1:
- shard-adlp: NOTRUN -> [SKIP][56] ([Intel XE#787]) +77 other tests skip
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-b-hdmi-a-1.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-c-dp-4:
- shard-dg2-set2: NOTRUN -> [INCOMPLETE][57] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4522])
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-c-dp-4.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs:
- shard-dg2-set2: [PASS][58] -> [INCOMPLETE][59] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4522])
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-6:
- shard-dg2-set2: [PASS][60] -> [INCOMPLETE][61] ([Intel XE#1727] / [Intel XE#3113])
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-6.html
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-6.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-hdmi-a-6:
- shard-dg2-set2: [PASS][62] -> [INCOMPLETE][63] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4522])
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-hdmi-a-6.html
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-d-hdmi-a-6.html
* igt@kms_ccs@random-ccs-data-4-tiled-mtl-mc-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][64] ([Intel XE#455] / [Intel XE#787]) +26 other tests skip
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-mtl-mc-ccs.html
* igt@kms_cdclk@plane-scaling:
- shard-bmg: NOTRUN -> [SKIP][65] ([Intel XE#2724])
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-8/igt@kms_cdclk@plane-scaling.html
* igt@kms_chamelium_color@ctm-0-50:
- shard-adlp: NOTRUN -> [SKIP][66] ([Intel XE#306]) +2 other tests skip
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@kms_chamelium_color@ctm-0-50.html
* igt@kms_chamelium_color@ctm-red-to-blue:
- shard-bmg: NOTRUN -> [SKIP][67] ([Intel XE#2325]) +2 other tests skip
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_chamelium_color@ctm-red-to-blue.html
* igt@kms_chamelium_frames@hdmi-crc-single:
- shard-dg2-set2: NOTRUN -> [SKIP][68] ([Intel XE#373]) +2 other tests skip
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_chamelium_frames@hdmi-crc-single.html
* igt@kms_chamelium_hpd@dp-hpd-storm-disable:
- shard-adlp: NOTRUN -> [SKIP][69] ([Intel XE#373]) +15 other tests skip
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@kms_chamelium_hpd@dp-hpd-storm-disable.html
* igt@kms_chamelium_hpd@vga-hpd:
- shard-lnl: NOTRUN -> [SKIP][70] ([Intel XE#373]) +1 other test skip
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_chamelium_hpd@vga-hpd.html
* igt@kms_chamelium_hpd@vga-hpd-fast:
- shard-bmg: NOTRUN -> [SKIP][71] ([Intel XE#2252]) +15 other tests skip
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_chamelium_hpd@vga-hpd-fast.html
* igt@kms_content_protection@atomic@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [FAIL][72] ([Intel XE#1178]) +2 other tests fail
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_content_protection@atomic@pipe-a-dp-2.html
- shard-dg2-set2: NOTRUN -> [FAIL][73] ([Intel XE#1178])
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-432/igt@kms_content_protection@atomic@pipe-a-dp-2.html
* igt@kms_content_protection@dp-mst-lic-type-1:
- shard-bmg: NOTRUN -> [SKIP][74] ([Intel XE#2390]) +1 other test skip
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_content_protection@dp-mst-lic-type-1.html
* igt@kms_content_protection@dp-mst-type-1:
- shard-adlp: NOTRUN -> [SKIP][75] ([Intel XE#307]) +1 other test skip
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_content_protection@dp-mst-type-1.html
* igt@kms_content_protection@legacy:
- shard-adlp: NOTRUN -> [SKIP][76] ([Intel XE#455]) +38 other tests skip
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_content_protection@legacy.html
* igt@kms_content_protection@lic-type-0@pipe-a-dp-4:
- shard-dg2-set2: NOTRUN -> [FAIL][77] ([Intel XE#3304])
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_content_protection@lic-type-0@pipe-a-dp-4.html
* igt@kms_content_protection@mei-interface:
- shard-bmg: NOTRUN -> [SKIP][78] ([Intel XE#2341])
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_content_protection@mei-interface.html
* igt@kms_content_protection@uevent@pipe-a-dp-2:
- shard-dg2-set2: NOTRUN -> [FAIL][79] ([Intel XE#1188])
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-432/igt@kms_content_protection@uevent@pipe-a-dp-2.html
- shard-bmg: NOTRUN -> [FAIL][80] ([Intel XE#1188])
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-1/igt@kms_content_protection@uevent@pipe-a-dp-2.html
* igt@kms_cursor_crc@cursor-offscreen-512x512:
- shard-lnl: NOTRUN -> [SKIP][81] ([Intel XE#2321])
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_cursor_crc@cursor-offscreen-512x512.html
- shard-bmg: NOTRUN -> [SKIP][82] ([Intel XE#2321]) +1 other test skip
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_cursor_crc@cursor-offscreen-512x512.html
* igt@kms_cursor_crc@cursor-offscreen-max-size:
- shard-lnl: NOTRUN -> [SKIP][83] ([Intel XE#1424])
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-4/igt@kms_cursor_crc@cursor-offscreen-max-size.html
* igt@kms_cursor_crc@cursor-rapid-movement-32x10:
- shard-bmg: NOTRUN -> [SKIP][84] ([Intel XE#2320]) +8 other tests skip
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_cursor_crc@cursor-rapid-movement-32x10.html
* igt@kms_cursor_crc@cursor-rapid-movement-512x512:
- shard-dg2-set2: NOTRUN -> [SKIP][85] ([Intel XE#308])
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_cursor_crc@cursor-rapid-movement-512x512.html
* igt@kms_cursor_crc@cursor-sliding-512x512:
- shard-adlp: NOTRUN -> [SKIP][86] ([Intel XE#308]) +2 other tests skip
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@kms_cursor_crc@cursor-sliding-512x512.html
* igt@kms_cursor_legacy@2x-long-cursor-vs-flip-atomic:
- shard-bmg: NOTRUN -> [SKIP][87] ([Intel XE#2291])
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_cursor_legacy@2x-long-cursor-vs-flip-atomic.html
* igt@kms_cursor_legacy@2x-long-flip-vs-cursor-atomic:
- shard-adlp: NOTRUN -> [SKIP][88] ([Intel XE#309]) +4 other tests skip
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@kms_cursor_legacy@2x-long-flip-vs-cursor-atomic.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size:
- shard-adlp: NOTRUN -> [SKIP][89] ([Intel XE#323]) +2 other tests skip
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size.html
- shard-bmg: NOTRUN -> [SKIP][90] ([Intel XE#2286])
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-varying-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size:
- shard-bmg: [PASS][91] -> [SKIP][92] ([Intel XE#2291]) +4 other tests skip
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-8/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size:
- shard-dg2-set2: NOTRUN -> [SKIP][93] ([Intel XE#309])
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_cursor_legacy@cursorb-vs-flipb-varying-size.html
* igt@kms_cursor_legacy@long-nonblocking-modeset-vs-cursor-atomic:
- shard-adlp: [PASS][94] -> [DMESG-WARN][95] ([Intel XE#2953] / [Intel XE#4173])
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-1/igt@kms_cursor_legacy@long-nonblocking-modeset-vs-cursor-atomic.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-4/igt@kms_cursor_legacy@long-nonblocking-modeset-vs-cursor-atomic.html
* igt@kms_dirtyfb@drrs-dirtyfb-ioctl:
- shard-bmg: NOTRUN -> [SKIP][96] ([Intel XE#1508])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_dirtyfb@drrs-dirtyfb-ioctl.html
* igt@kms_display_modes@extended-mode-basic:
- shard-lnl: NOTRUN -> [SKIP][97] ([Intel XE#4302])
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_display_modes@extended-mode-basic.html
* igt@kms_dp_link_training@non-uhbr-mst:
- shard-dg2-set2: NOTRUN -> [SKIP][98] ([Intel XE#4354])
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-463/igt@kms_dp_link_training@non-uhbr-mst.html
- shard-lnl: NOTRUN -> [SKIP][99] ([Intel XE#4354])
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-8/igt@kms_dp_link_training@non-uhbr-mst.html
* igt@kms_dp_link_training@non-uhbr-sst:
- shard-dg2-set2: [PASS][100] -> [SKIP][101] ([Intel XE#4354])
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-436/igt@kms_dp_link_training@non-uhbr-sst.html
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_dp_link_training@non-uhbr-sst.html
* igt@kms_dp_link_training@uhbr-sst:
- shard-bmg: NOTRUN -> [SKIP][102] ([Intel XE#4354])
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_dp_link_training@uhbr-sst.html
* igt@kms_dsc@dsc-with-bpc:
- shard-bmg: NOTRUN -> [SKIP][103] ([Intel XE#2244]) +1 other test skip
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_dsc@dsc-with-bpc.html
* igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats:
- shard-adlp: NOTRUN -> [SKIP][104] ([Intel XE#4422])
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats.html
* igt@kms_fbcon_fbt@fbc:
- shard-bmg: NOTRUN -> [SKIP][105] ([Intel XE#4156])
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_fbcon_fbt@fbc.html
* igt@kms_fbcon_fbt@psr-suspend:
- shard-adlp: NOTRUN -> [SKIP][106] ([Intel XE#776])
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_fbcon_fbt@psr-suspend.html
* igt@kms_feature_discovery@psr2:
- shard-adlp: NOTRUN -> [SKIP][107] ([Intel XE#1135])
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_feature_discovery@psr2.html
* igt@kms_flip@2x-absolute-wf_vblank:
- shard-bmg: NOTRUN -> [SKIP][108] ([Intel XE#2316]) +2 other tests skip
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_flip@2x-absolute-wf_vblank.html
* igt@kms_flip@2x-busy-flip:
- shard-dg2-set2: [PASS][109] -> [SKIP][110] ([Intel XE#310]) +1 other test skip
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-436/igt@kms_flip@2x-busy-flip.html
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_flip@2x-busy-flip.html
* igt@kms_flip@2x-flip-vs-expired-vblank@ac-dp2-hdmi-a3:
- shard-bmg: [PASS][111] -> [FAIL][112] ([Intel XE#3321]) +1 other test fail
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-3/igt@kms_flip@2x-flip-vs-expired-vblank@ac-dp2-hdmi-a3.html
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_flip@2x-flip-vs-expired-vblank@ac-dp2-hdmi-a3.html
* igt@kms_flip@2x-flip-vs-panning-vs-hang:
- shard-lnl: NOTRUN -> [SKIP][113] ([Intel XE#1421])
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-4/igt@kms_flip@2x-flip-vs-panning-vs-hang.html
* igt@kms_flip@2x-plain-flip:
- shard-adlp: NOTRUN -> [SKIP][114] ([Intel XE#310]) +7 other tests skip
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@kms_flip@2x-plain-flip.html
* igt@kms_flip@2x-plain-flip-fb-recreate:
- shard-bmg: [PASS][115] -> [SKIP][116] ([Intel XE#2316]) +3 other tests skip
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-8/igt@kms_flip@2x-plain-flip-fb-recreate.html
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_flip@2x-plain-flip-fb-recreate.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a6:
- shard-dg2-set2: [PASS][117] -> [FAIL][118] ([Intel XE#301]) +4 other tests fail
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-466/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a6.html
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-435/igt@kms_flip@flip-vs-expired-vblank-interruptible@c-hdmi-a6.html
* igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1:
- shard-adlp: [PASS][119] -> [ABORT][120] ([Intel XE#2953]) +1 other test abort
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-4/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1.html
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling:
- shard-dg2-set2: NOTRUN -> [SKIP][121] ([Intel XE#455]) +9 other tests skip
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling:
- shard-lnl: NOTRUN -> [SKIP][122] ([Intel XE#1401] / [Intel XE#1745])
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-default-mode:
- shard-lnl: NOTRUN -> [SKIP][123] ([Intel XE#1401])
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling:
- shard-bmg: NOTRUN -> [SKIP][124] ([Intel XE#2293] / [Intel XE#2380]) +7 other tests skip
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-8/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode:
- shard-bmg: NOTRUN -> [SKIP][125] ([Intel XE#2293]) +7 other tests skip
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling@pipe-a-valid-mode.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-shrfb-draw-render:
- shard-lnl: NOTRUN -> [SKIP][126] ([Intel XE#651]) +3 other tests skip
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-4/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-shrfb-draw-render.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-pgflip-blt:
- shard-adlp: NOTRUN -> [SKIP][127] ([Intel XE#651]) +22 other tests skip
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff:
- shard-dg2-set2: NOTRUN -> [SKIP][128] ([Intel XE#656]) +5 other tests skip
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt:
- shard-adlp: NOTRUN -> [SKIP][129] ([Intel XE#656]) +65 other tests skip
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][130] ([Intel XE#2311]) +35 other tests skip
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@drrs-indfb-scaledprimary:
- shard-dg2-set2: NOTRUN -> [SKIP][131] ([Intel XE#651]) +7 other tests skip
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_frontbuffer_tracking@drrs-indfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-move:
- shard-dg2-set2: [PASS][132] -> [SKIP][133] ([Intel XE#656]) +1 other test skip
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-436/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-move.html
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-move.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
- shard-bmg: NOTRUN -> [SKIP][134] ([Intel XE#4141]) +14 other tests skip
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbc-tiling-4:
- shard-adlp: NOTRUN -> [SKIP][135] ([Intel XE#1151]) +1 other test skip
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_frontbuffer_tracking@fbc-tiling-4.html
* igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y:
- shard-bmg: NOTRUN -> [SKIP][136] ([Intel XE#2352])
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y.html
* igt@kms_frontbuffer_tracking@fbcpsr-indfb-scaledprimary:
- shard-adlp: NOTRUN -> [SKIP][137] ([Intel XE#653]) +20 other tests skip
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_frontbuffer_tracking@fbcpsr-indfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-indfb-draw-render:
- shard-dg2-set2: NOTRUN -> [SKIP][138] ([Intel XE#653]) +6 other tests skip
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-435/igt@kms_frontbuffer_tracking@psr-1p-offscren-pri-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
- shard-bmg: NOTRUN -> [SKIP][139] ([Intel XE#2313]) +37 other tests skip
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-8/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen:
- shard-bmg: NOTRUN -> [SKIP][140] ([Intel XE#2312]) +21 other tests skip
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-render:
- shard-lnl: NOTRUN -> [SKIP][141] ([Intel XE#656]) +4 other tests skip
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-render.html
* igt@kms_getfb@getfb-reject-ccs:
- shard-bmg: NOTRUN -> [SKIP][142] ([Intel XE#2502])
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_getfb@getfb-reject-ccs.html
* igt@kms_getfb@getfb2-accept-ccs:
- shard-bmg: NOTRUN -> [SKIP][143] ([Intel XE#2340])
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-8/igt@kms_getfb@getfb2-accept-ccs.html
* igt@kms_hdr@invalid-metadata-sizes:
- shard-bmg: NOTRUN -> [SKIP][144] ([Intel XE#1503])
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_hdr@invalid-metadata-sizes.html
* igt@kms_joiner@basic-force-big-joiner:
- shard-bmg: [PASS][145] -> [SKIP][146] ([Intel XE#3012])
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-8/igt@kms_joiner@basic-force-big-joiner.html
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_joiner@basic-force-big-joiner.html
* igt@kms_joiner@basic-max-non-joiner:
- shard-dg2-set2: NOTRUN -> [SKIP][147] ([Intel XE#4298])
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_joiner@basic-max-non-joiner.html
- shard-bmg: NOTRUN -> [SKIP][148] ([Intel XE#4298])
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-7/igt@kms_joiner@basic-max-non-joiner.html
* igt@kms_joiner@basic-ultra-joiner:
- shard-adlp: NOTRUN -> [SKIP][149] ([Intel XE#2927])
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_joiner@basic-ultra-joiner.html
* igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner:
- shard-adlp: NOTRUN -> [SKIP][150] ([Intel XE#2925]) +1 other test skip
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html
* igt@kms_plane@pixel-format-source-clamping:
- shard-adlp: NOTRUN -> [INCOMPLETE][151] ([Intel XE#1035])
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@kms_plane@pixel-format-source-clamping.html
* igt@kms_plane@pixel-format-source-clamping@pipe-a-plane-0:
- shard-adlp: NOTRUN -> [WARN][152] ([Intel XE#2078]) +1 other test warn
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@kms_plane@pixel-format-source-clamping@pipe-a-plane-0.html
* igt@kms_plane_cursor@primary@pipe-a-hdmi-a-6-size-256:
- shard-dg2-set2: NOTRUN -> [FAIL][153] ([Intel XE#616]) +2 other tests fail
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-434/igt@kms_plane_cursor@primary@pipe-a-hdmi-a-6-size-256.html
* igt@kms_plane_lowres@tiling-y:
- shard-bmg: NOTRUN -> [SKIP][154] ([Intel XE#2393])
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_plane_lowres@tiling-y.html
* igt@kms_plane_multiple@2x-tiling-4:
- shard-adlp: NOTRUN -> [SKIP][155] ([Intel XE#4596]) +1 other test skip
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_plane_multiple@2x-tiling-4.html
* igt@kms_plane_multiple@2x-tiling-none:
- shard-bmg: NOTRUN -> [SKIP][156] ([Intel XE#4596])
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_plane_multiple@2x-tiling-none.html
* igt@kms_plane_multiple@2x-tiling-x:
- shard-bmg: [PASS][157] -> [SKIP][158] ([Intel XE#4596])
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-8/igt@kms_plane_multiple@2x-tiling-x.html
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_plane_multiple@2x-tiling-x.html
* igt@kms_plane_scaling@plane-downscale-factor-0-5-with-modifiers:
- shard-lnl: NOTRUN -> [SKIP][159] ([Intel XE#2763]) +3 other tests skip
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-modifiers.html
* igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25:
- shard-adlp: NOTRUN -> [SKIP][160] ([Intel XE#2763] / [Intel XE#455]) +3 other tests skip
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25.html
* igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-c:
- shard-adlp: NOTRUN -> [SKIP][161] ([Intel XE#2763]) +5 other tests skip
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_plane_scaling@planes-downscale-factor-0-25-upscale-factor-0-25@pipe-c.html
* igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-b:
- shard-bmg: NOTRUN -> [SKIP][162] ([Intel XE#2763]) +24 other tests skip
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-b.html
* igt@kms_pm_backlight@bad-brightness:
- shard-adlp: NOTRUN -> [SKIP][163] ([Intel XE#870])
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@kms_pm_backlight@bad-brightness.html
- shard-dg2-set2: NOTRUN -> [SKIP][164] ([Intel XE#870])
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_pm_backlight@bad-brightness.html
* igt@kms_pm_backlight@basic-brightness:
- shard-bmg: NOTRUN -> [SKIP][165] ([Intel XE#870])
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_pm_backlight@basic-brightness.html
* igt@kms_pm_backlight@brightness-with-dpms:
- shard-bmg: NOTRUN -> [SKIP][166] ([Intel XE#2938])
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_pm_backlight@brightness-with-dpms.html
* igt@kms_pm_dc@dc6-dpms:
- shard-dg2-set2: NOTRUN -> [SKIP][167] ([Intel XE#908])
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_pm_dc@dc6-dpms.html
- shard-bmg: NOTRUN -> [FAIL][168] ([Intel XE#1430])
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-7/igt@kms_pm_dc@dc6-dpms.html
* igt@kms_pm_dc@dc9-dpms:
- shard-adlp: NOTRUN -> [SKIP][169] ([Intel XE#734])
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_pm_dc@dc9-dpms.html
* igt@kms_pm_rpm@modeset-lpsp:
- shard-bmg: NOTRUN -> [SKIP][170] ([Intel XE#1439] / [Intel XE#3141] / [Intel XE#836]) +1 other test skip
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_pm_rpm@modeset-lpsp.html
* igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait:
- shard-adlp: NOTRUN -> [SKIP][171] ([Intel XE#836])
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html
* igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-sf:
- shard-bmg: NOTRUN -> [SKIP][172] ([Intel XE#1489]) +11 other tests skip
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-sf.html
- shard-lnl: NOTRUN -> [SKIP][173] ([Intel XE#2893]) +1 other test skip
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-sf.html
* igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-sf:
- shard-dg2-set2: NOTRUN -> [SKIP][174] ([Intel XE#1489]) +2 other tests skip
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-sf.html
* igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-sf:
- shard-adlp: NOTRUN -> [SKIP][175] ([Intel XE#1489]) +12 other tests skip
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-sf.html
* igt@kms_psr2_sf@fbc-psr2-plane-move-sf-dmg-area:
- shard-lnl: NOTRUN -> [SKIP][176] ([Intel XE#2893] / [Intel XE#4608])
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_psr2_sf@fbc-psr2-plane-move-sf-dmg-area.html
* igt@kms_psr2_sf@fbc-psr2-plane-move-sf-dmg-area@pipe-b-edp-1:
- shard-lnl: NOTRUN -> [SKIP][177] ([Intel XE#4608]) +1 other test skip
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@kms_psr2_sf@fbc-psr2-plane-move-sf-dmg-area@pipe-b-edp-1.html
* igt@kms_psr2_su@page_flip-p010:
- shard-adlp: NOTRUN -> [SKIP][178] ([Intel XE#1122]) +1 other test skip
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@kms_psr2_su@page_flip-p010.html
- shard-dg2-set2: NOTRUN -> [SKIP][179] ([Intel XE#1122])
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_psr2_su@page_flip-p010.html
* igt@kms_psr@fbc-pr-cursor-blt:
- shard-bmg: NOTRUN -> [SKIP][180] ([Intel XE#2234] / [Intel XE#2850]) +21 other tests skip
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_psr@fbc-pr-cursor-blt.html
* igt@kms_psr@fbc-pr-sprite-plane-move:
- shard-dg2-set2: NOTRUN -> [SKIP][181] ([Intel XE#2850] / [Intel XE#929]) +5 other tests skip
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_psr@fbc-pr-sprite-plane-move.html
* igt@kms_psr@fbc-psr-primary-render:
- shard-adlp: NOTRUN -> [SKIP][182] ([Intel XE#2850] / [Intel XE#929]) +24 other tests skip
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@kms_psr@fbc-psr-primary-render.html
* igt@kms_psr@fbc-psr2-primary-page-flip:
- shard-lnl: NOTRUN -> [SKIP][183] ([Intel XE#1406])
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-8/igt@kms_psr@fbc-psr2-primary-page-flip.html
* igt@kms_psr@fbc-psr2-primary-page-flip@edp-1:
- shard-lnl: NOTRUN -> [SKIP][184] ([Intel XE#4609])
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-8/igt@kms_psr@fbc-psr2-primary-page-flip@edp-1.html
* igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
- shard-bmg: NOTRUN -> [SKIP][185] ([Intel XE#2414])
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-0:
- shard-bmg: NOTRUN -> [SKIP][186] ([Intel XE#2330])
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
- shard-bmg: NOTRUN -> [SKIP][187] ([Intel XE#3414] / [Intel XE#3904])
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-8/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
* igt@kms_rotation_crc@sprite-rotation-270:
- shard-adlp: NOTRUN -> [SKIP][188] ([Intel XE#3414]) +2 other tests skip
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_rotation_crc@sprite-rotation-270.html
* igt@kms_scaling_modes@scaling-mode-center:
- shard-bmg: NOTRUN -> [SKIP][189] ([Intel XE#2413]) +2 other tests skip
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_scaling_modes@scaling-mode-center.html
* igt@kms_setmode@clone-exclusive-crtc:
- shard-bmg: NOTRUN -> [SKIP][190] ([Intel XE#1435])
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_setmode@clone-exclusive-crtc.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-adlp: NOTRUN -> [SKIP][191] ([Intel XE#362]) +1 other test skip
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@kms_tiled_display@basic-test-pattern.html
* igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1:
- shard-lnl: [PASS][192] -> [FAIL][193] ([Intel XE#771]) +1 other test fail
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-8/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-7/igt@kms_universal_plane@cursor-fb-leak@pipe-a-edp-1.html
* igt@kms_vrr@cmrr:
- shard-bmg: NOTRUN -> [SKIP][194] ([Intel XE#2168])
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_vrr@cmrr.html
* igt@kms_vrr@max-min:
- shard-bmg: NOTRUN -> [SKIP][195] ([Intel XE#1499])
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-8/igt@kms_vrr@max-min.html
* igt@kms_writeback@writeback-check-output:
- shard-bmg: NOTRUN -> [SKIP][196] ([Intel XE#756])
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_writeback@writeback-check-output.html
* igt@kms_writeback@writeback-invalid-parameters:
- shard-adlp: NOTRUN -> [SKIP][197] ([Intel XE#756]) +3 other tests skip
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@kms_writeback@writeback-invalid-parameters.html
* igt@xe_ccs@ctrl-surf-copy:
- shard-adlp: NOTRUN -> [SKIP][198] ([Intel XE#455] / [Intel XE#488]) +2 other tests skip
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@xe_ccs@ctrl-surf-copy.html
* igt@xe_ccs@large-ctrl-surf-copy:
- shard-adlp: NOTRUN -> [SKIP][199] ([Intel XE#3576])
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_ccs@large-ctrl-surf-copy.html
* igt@xe_create@create-big-vram:
- shard-lnl: NOTRUN -> [SKIP][200] ([Intel XE#1062])
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@xe_create@create-big-vram.html
* igt@xe_eu_stall@blocking-read:
- shard-adlp: NOTRUN -> [SKIP][201] ([Intel XE#4497]) +3 other tests skip
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@xe_eu_stall@blocking-read.html
* igt@xe_eu_stall@unprivileged-access:
- shard-dg2-set2: NOTRUN -> [SKIP][202] ([Intel XE#4497])
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@xe_eu_stall@unprivileged-access.html
* igt@xe_eudebug@basic-vm-bind-metadata-discovery:
- shard-lnl: NOTRUN -> [SKIP][203] ([Intel XE#2905]) +1 other test skip
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-8/igt@xe_eudebug@basic-vm-bind-metadata-discovery.html
* igt@xe_eudebug@basic-vm-bind-ufence-sigint-client:
- shard-bmg: NOTRUN -> [SKIP][204] ([Intel XE#2905] / [Intel XE#3889])
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@xe_eudebug@basic-vm-bind-ufence-sigint-client.html
* igt@xe_eudebug@discovery-empty:
- shard-adlp: NOTRUN -> [SKIP][205] ([Intel XE#2905]) +15 other tests skip
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@xe_eudebug@discovery-empty.html
* igt@xe_eudebug@discovery-race-sigint:
- shard-bmg: NOTRUN -> [SKIP][206] ([Intel XE#2905] / [Intel XE#4259])
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@xe_eudebug@discovery-race-sigint.html
* igt@xe_eudebug_online@interrupt-all-set-breakpoint:
- shard-dg2-set2: NOTRUN -> [SKIP][207] ([Intel XE#2905]) +4 other tests skip
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@xe_eudebug_online@interrupt-all-set-breakpoint.html
* igt@xe_eudebug_online@single-step:
- shard-bmg: NOTRUN -> [SKIP][208] ([Intel XE#2905]) +14 other tests skip
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-7/igt@xe_eudebug_online@single-step.html
* igt@xe_eudebug_sriov@deny-sriov:
- shard-bmg: NOTRUN -> [SKIP][209] ([Intel XE#4518])
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@xe_eudebug_sriov@deny-sriov.html
* igt@xe_evict@evict-beng-small:
- shard-adlp: NOTRUN -> [SKIP][210] ([Intel XE#261] / [Intel XE#688]) +2 other tests skip
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@xe_evict@evict-beng-small.html
* igt@xe_evict@evict-beng-small-cm:
- shard-lnl: NOTRUN -> [SKIP][211] ([Intel XE#688])
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@xe_evict@evict-beng-small-cm.html
* igt@xe_evict_ccs@evict-overcommit-parallel-instantfree-samefd:
- shard-adlp: NOTRUN -> [SKIP][212] ([Intel XE#688]) +3 other tests skip
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_evict_ccs@evict-overcommit-parallel-instantfree-samefd.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-userptr:
- shard-bmg: NOTRUN -> [SKIP][213] ([Intel XE#2322]) +14 other tests skip
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-bindexecqueue-userptr.html
* igt@xe_exec_basic@multigpu-once-basic-defer-mmap:
- shard-dg2-set2: [PASS][214] -> [SKIP][215] ([Intel XE#1392]) +5 other tests skip
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-463/igt@xe_exec_basic@multigpu-once-basic-defer-mmap.html
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-432/igt@xe_exec_basic@multigpu-once-basic-defer-mmap.html
* igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate:
- shard-adlp: NOTRUN -> [SKIP][216] ([Intel XE#1392]) +14 other tests skip
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate.html
* igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate-race:
- shard-lnl: NOTRUN -> [SKIP][217] ([Intel XE#1392]) +1 other test skip
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@xe_exec_basic@multigpu-once-bindexecqueue-userptr-invalidate-race.html
* igt@xe_exec_fault_mode@once-bindexecqueue-userptr:
- shard-adlp: NOTRUN -> [SKIP][218] ([Intel XE#288]) +41 other tests skip
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@xe_exec_fault_mode@once-bindexecqueue-userptr.html
* igt@xe_exec_fault_mode@twice-invalid-fault:
- shard-dg2-set2: NOTRUN -> [SKIP][219] ([Intel XE#288]) +14 other tests skip
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@xe_exec_fault_mode@twice-invalid-fault.html
* igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence:
- shard-adlp: NOTRUN -> [SKIP][220] ([Intel XE#2360])
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_exec_mix_modes@exec-simple-batch-store-dma-fence.html
* igt@xe_live_ktest@xe_bo:
- shard-adlp: NOTRUN -> [SKIP][221] ([Intel XE#2229] / [Intel XE#455]) +1 other test skip
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_live_ktest@xe_bo.html
* igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit:
- shard-bmg: NOTRUN -> [SKIP][222] ([Intel XE#2229])
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html
- shard-adlp: NOTRUN -> [SKIP][223] ([Intel XE#2229])
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html
* igt@xe_live_ktest@xe_eudebug:
- shard-bmg: NOTRUN -> [SKIP][224] ([Intel XE#2833])
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@xe_live_ktest@xe_eudebug.html
* igt@xe_mmap@pci-membarrier-bad-pagesize:
- shard-adlp: NOTRUN -> [SKIP][225] ([Intel XE#4045])
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_mmap@pci-membarrier-bad-pagesize.html
* igt@xe_module_load@force-load:
- shard-adlp: NOTRUN -> [SKIP][226] ([Intel XE#378])
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@xe_module_load@force-load.html
* igt@xe_oa@non-zero-reason:
- shard-adlp: NOTRUN -> [SKIP][227] ([Intel XE#2541] / [Intel XE#3573]) +7 other tests skip
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_oa@non-zero-reason.html
* igt@xe_oa@oa-tlb-invalidate:
- shard-bmg: NOTRUN -> [SKIP][228] ([Intel XE#2248])
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@xe_oa@oa-tlb-invalidate.html
* igt@xe_oa@syncs-ufence-wait:
- shard-adlp: NOTRUN -> [SKIP][229] ([Intel XE#2541] / [Intel XE#3573] / [Intel XE#4501]) +1 other test skip
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@xe_oa@syncs-ufence-wait.html
- shard-dg2-set2: NOTRUN -> [SKIP][230] ([Intel XE#2541] / [Intel XE#3573] / [Intel XE#4501])
[230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@xe_oa@syncs-ufence-wait.html
* igt@xe_pat@pat-index-xehpc:
- shard-adlp: NOTRUN -> [SKIP][231] ([Intel XE#2838] / [Intel XE#979])
[231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@xe_pat@pat-index-xehpc.html
* igt@xe_peer2peer@read:
- shard-adlp: NOTRUN -> [SKIP][232] ([Intel XE#1061]) +1 other test skip
[232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@xe_peer2peer@read.html
* igt@xe_pm@d3cold-mmap-system:
- shard-adlp: NOTRUN -> [SKIP][233] ([Intel XE#2284] / [Intel XE#366])
[233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_pm@d3cold-mmap-system.html
- shard-bmg: NOTRUN -> [SKIP][234] ([Intel XE#2284]) +1 other test skip
[234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@xe_pm@d3cold-mmap-system.html
* igt@xe_pm@s2idle-basic-exec:
- shard-adlp: [PASS][235] -> [DMESG-WARN][236] ([Intel XE#4173]) +5 other tests dmesg-warn
[235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-8/igt@xe_pm@s2idle-basic-exec.html
[236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_pm@s2idle-basic-exec.html
* igt@xe_pm@s3-mocs:
- shard-lnl: NOTRUN -> [SKIP][237] ([Intel XE#584])
[237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-8/igt@xe_pm@s3-mocs.html
* igt@xe_pm@s4-d3hot-basic-exec:
- shard-lnl: [PASS][238] -> [ABORT][239] ([Intel XE#1794])
[238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-4/igt@xe_pm@s4-d3hot-basic-exec.html
[239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-2/igt@xe_pm@s4-d3hot-basic-exec.html
* igt@xe_pm@s4-exec-after:
- shard-adlp: [PASS][240] -> [ABORT][241] ([Intel XE#1794])
[240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-1/igt@xe_pm@s4-exec-after.html
[241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@xe_pm@s4-exec-after.html
* igt@xe_pm@vram-d3cold-threshold:
- shard-bmg: NOTRUN -> [SKIP][242] ([Intel XE#579])
[242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@xe_pm@vram-d3cold-threshold.html
* igt@xe_query@multigpu-query-gt-list:
- shard-bmg: NOTRUN -> [SKIP][243] ([Intel XE#944]) +2 other tests skip
[243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@xe_query@multigpu-query-gt-list.html
* igt@xe_query@multigpu-query-invalid-extension:
- shard-adlp: NOTRUN -> [SKIP][244] ([Intel XE#944]) +1 other test skip
[244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@xe_query@multigpu-query-invalid-extension.html
* igt@xe_sriov_auto_provisioning@fair-allocation:
- shard-dg2-set2: NOTRUN -> [SKIP][245] ([Intel XE#4130])
[245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@xe_sriov_auto_provisioning@fair-allocation.html
* igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling:
- shard-bmg: NOTRUN -> [SKIP][246] ([Intel XE#4130])
[246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling.html
- shard-lnl: NOTRUN -> [SKIP][247] ([Intel XE#4130])
[247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@xe_sriov_auto_provisioning@resources-released-on-vfs-disabling.html
* igt@xe_sriov_flr@flr-each-isolation:
- shard-dg2-set2: NOTRUN -> [SKIP][248] ([Intel XE#3342])
[248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@xe_sriov_flr@flr-each-isolation.html
* igt@xe_sriov_scheduling@nonpreempt-engine-resets:
- shard-bmg: NOTRUN -> [SKIP][249] ([Intel XE#4351])
[249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-8/igt@xe_sriov_scheduling@nonpreempt-engine-resets.html
#### Possible fixes ####
* igt@kms_atomic_transition@plane-all-modeset-transition-fencing:
- shard-adlp: [FAIL][250] ([Intel XE#3908]) -> [PASS][251] +1 other test pass
[250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-6/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html
[251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs:
- shard-dg2-set2: [INCOMPLETE][252] ([Intel XE#3862]) -> [PASS][253] +1 other test pass
[252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-436/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs.html
[253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs.html
* igt@kms_color@ctm-0-75@pipe-a-hdmi-a-1:
- shard-adlp: [DMESG-WARN][254] ([Intel XE#4173]) -> [PASS][255] +1 other test pass
[254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-8/igt@kms_color@ctm-0-75@pipe-a-hdmi-a-1.html
[255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@kms_color@ctm-0-75@pipe-a-hdmi-a-1.html
* igt@kms_cursor_legacy@cursor-vs-flip-legacy:
- shard-dg2-set2: [INCOMPLETE][256] ([Intel XE#3226]) -> [PASS][257]
[256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-434/igt@kms_cursor_legacy@cursor-vs-flip-legacy.html
[257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-435/igt@kms_cursor_legacy@cursor-vs-flip-legacy.html
* igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions:
- shard-dg2-set2: [SKIP][258] ([Intel XE#309]) -> [PASS][259]
[258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions.html
[259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-463/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions.html
* igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size:
- shard-bmg: [SKIP][260] ([Intel XE#2291]) -> [PASS][261] +3 other tests pass
[260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html
[261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-2/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html
* igt@kms_dither@fb-8bpc-vs-panel-6bpc:
- shard-dg2-set2: [SKIP][262] ([Intel XE#455]) -> [PASS][263] +1 other test pass
[262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html
[263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html
* igt@kms_dp_linktrain_fallback@dp-fallback:
- shard-bmg: [SKIP][264] ([Intel XE#4294]) -> [PASS][265]
[264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_dp_linktrain_fallback@dp-fallback.html
[265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-2/igt@kms_dp_linktrain_fallback@dp-fallback.html
* igt@kms_feature_discovery@display-2x:
- shard-bmg: [SKIP][266] ([Intel XE#2373]) -> [PASS][267]
[266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_feature_discovery@display-2x.html
[267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-2/igt@kms_feature_discovery@display-2x.html
* igt@kms_flip@2x-dpms-vs-vblank-race:
- shard-dg2-set2: [SKIP][268] ([Intel XE#310]) -> [PASS][269] +6 other tests pass
[268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_flip@2x-dpms-vs-vblank-race.html
[269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-432/igt@kms_flip@2x-dpms-vs-vblank-race.html
* igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-hdmi-a6-dp4:
- shard-dg2-set2: [FAIL][270] ([Intel XE#301] / [Intel XE#3321]) -> [PASS][271]
[270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-434/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-hdmi-a6-dp4.html
[271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-435/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-hdmi-a6-dp4.html
* igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@cd-hdmi-a6-dp4:
- shard-dg2-set2: [FAIL][272] ([Intel XE#301]) -> [PASS][273] +7 other tests pass
[272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-434/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@cd-hdmi-a6-dp4.html
[273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-435/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@cd-hdmi-a6-dp4.html
* igt@kms_flip@2x-flip-vs-expired-vblank@ab-dp2-hdmi-a3:
- shard-bmg: [FAIL][274] ([Intel XE#3321]) -> [PASS][275] +2 other tests pass
[274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-3/igt@kms_flip@2x-flip-vs-expired-vblank@ab-dp2-hdmi-a3.html
[275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-6/igt@kms_flip@2x-flip-vs-expired-vblank@ab-dp2-hdmi-a3.html
* igt@kms_flip@2x-flip-vs-rmfb-interruptible:
- shard-dg2-set2: [INCOMPLETE][276] ([Intel XE#2049]) -> [PASS][277]
[276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-432/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html
[277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html
* igt@kms_flip@2x-flip-vs-rmfb-interruptible@bc-dp2-hdmi-a3:
- shard-bmg: [INCOMPLETE][278] ([Intel XE#2049]) -> [PASS][279] +1 other test pass
[278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-7/igt@kms_flip@2x-flip-vs-rmfb-interruptible@bc-dp2-hdmi-a3.html
[279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-7/igt@kms_flip@2x-flip-vs-rmfb-interruptible@bc-dp2-hdmi-a3.html
* igt@kms_flip@2x-modeset-vs-vblank-race:
- shard-bmg: [SKIP][280] ([Intel XE#2316]) -> [PASS][281] +2 other tests pass
[280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_flip@2x-modeset-vs-vblank-race.html
[281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-1/igt@kms_flip@2x-modeset-vs-vblank-race.html
* igt@kms_flip@blocking-wf_vblank:
- shard-lnl: [FAIL][282] ([Intel XE#886]) -> [PASS][283] +2 other tests pass
[282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-8/igt@kms_flip@blocking-wf_vblank.html
[283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-4/igt@kms_flip@blocking-wf_vblank.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-dg2-set2: [INCOMPLETE][284] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][285] +1 other test pass
[284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_flip@flip-vs-suspend-interruptible.html
[285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-463/igt@kms_flip@flip-vs-suspend-interruptible.html
- shard-lnl: [ABORT][286] -> [PASS][287] +1 other test pass
[286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-3/igt@kms_flip@flip-vs-suspend-interruptible.html
[287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-8/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_flip@plain-flip-ts-check@a-hdmi-a3:
- shard-bmg: [FAIL][288] ([Intel XE#2882]) -> [PASS][289] +2 other tests pass
[288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_flip@plain-flip-ts-check@a-hdmi-a3.html
[289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_flip@plain-flip-ts-check@a-hdmi-a3.html
* igt@kms_flip_tiling@flip-change-tiling@pipe-d-hdmi-a-1-y-to-x:
- shard-adlp: [DMESG-FAIL][290] ([Intel XE#4543]) -> [PASS][291]
[290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-9/igt@kms_flip_tiling@flip-change-tiling@pipe-d-hdmi-a-1-y-to-x.html
[291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-4/igt@kms_flip_tiling@flip-change-tiling@pipe-d-hdmi-a-1-y-to-x.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-blt:
- shard-dg2-set2: [SKIP][292] ([Intel XE#656]) -> [PASS][293] +2 other tests pass
[292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-blt.html
[293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-blt.html
* igt@xe_exec_basic@multigpu-once-rebind:
- shard-dg2-set2: [SKIP][294] ([Intel XE#1392]) -> [PASS][295] +4 other tests pass
[294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-432/igt@xe_exec_basic@multigpu-once-rebind.html
[295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-434/igt@xe_exec_basic@multigpu-once-rebind.html
* igt@xe_module_load@load:
- shard-lnl: ([PASS][296], [PASS][297], [PASS][298], [PASS][299], [PASS][300], [PASS][301], [PASS][302], [PASS][303], [PASS][304], [PASS][305], [PASS][306], [PASS][307], [PASS][308], [PASS][309], [PASS][310], [PASS][311], [PASS][312], [PASS][313], [PASS][314], [PASS][315], [PASS][316], [PASS][317], [PASS][318], [SKIP][319], [PASS][320], [PASS][321]) ([Intel XE#378]) -> ([PASS][322], [PASS][323], [PASS][324], [PASS][325], [PASS][326], [PASS][327], [PASS][328], [PASS][329], [PASS][330], [PASS][331], [PASS][332], [PASS][333], [PASS][334], [PASS][335], [PASS][336], [PASS][337], [PASS][338], [PASS][339], [PASS][340], [PASS][341], [PASS][342], [PASS][343], [PASS][344], [PASS][345], [PASS][346])
[296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-4/igt@xe_module_load@load.html
[297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-1/igt@xe_module_load@load.html
[298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-1/igt@xe_module_load@load.html
[299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-1/igt@xe_module_load@load.html
[300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-5/igt@xe_module_load@load.html
[301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-5/igt@xe_module_load@load.html
[302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-2/igt@xe_module_load@load.html
[303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-4/igt@xe_module_load@load.html
[304]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-2/igt@xe_module_load@load.html
[305]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-2/igt@xe_module_load@load.html
[306]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-3/igt@xe_module_load@load.html
[307]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-4/igt@xe_module_load@load.html
[308]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-3/igt@xe_module_load@load.html
[309]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-3/igt@xe_module_load@load.html
[310]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-6/igt@xe_module_load@load.html
[311]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-7/igt@xe_module_load@load.html
[312]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-7/igt@xe_module_load@load.html
[313]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-6/igt@xe_module_load@load.html
[314]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-6/igt@xe_module_load@load.html
[315]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-4/igt@xe_module_load@load.html
[316]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-8/igt@xe_module_load@load.html
[317]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-5/igt@xe_module_load@load.html
[318]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-7/igt@xe_module_load@load.html
[319]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-2/igt@xe_module_load@load.html
[320]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-8/igt@xe_module_load@load.html
[321]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-8/igt@xe_module_load@load.html
[322]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-3/igt@xe_module_load@load.html
[323]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-4/igt@xe_module_load@load.html
[324]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-6/igt@xe_module_load@load.html
[325]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-3/igt@xe_module_load@load.html
[326]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-7/igt@xe_module_load@load.html
[327]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-7/igt@xe_module_load@load.html
[328]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-7/igt@xe_module_load@load.html
[329]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-5/igt@xe_module_load@load.html
[330]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-5/igt@xe_module_load@load.html
[331]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-5/igt@xe_module_load@load.html
[332]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-6/igt@xe_module_load@load.html
[333]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@xe_module_load@load.html
[334]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-8/igt@xe_module_load@load.html
[335]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-8/igt@xe_module_load@load.html
[336]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-8/igt@xe_module_load@load.html
[337]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-2/igt@xe_module_load@load.html
[338]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-2/igt@xe_module_load@load.html
[339]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-2/igt@xe_module_load@load.html
[340]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-3/igt@xe_module_load@load.html
[341]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-6/igt@xe_module_load@load.html
[342]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-2/igt@xe_module_load@load.html
[343]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-4/igt@xe_module_load@load.html
[344]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-4/igt@xe_module_load@load.html
[345]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@xe_module_load@load.html
[346]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@xe_module_load@load.html
- shard-adlp: ([PASS][347], [PASS][348], [PASS][349], [PASS][350], [PASS][351], [PASS][352], [PASS][353], [PASS][354], [PASS][355], [PASS][356], [PASS][357], [PASS][358], [PASS][359], [PASS][360], [PASS][361], [PASS][362], [PASS][363], [PASS][364], [PASS][365], [SKIP][366]) ([Intel XE#378]) -> ([PASS][367], [PASS][368], [PASS][369], [PASS][370], [PASS][371], [PASS][372], [PASS][373], [PASS][374], [PASS][375], [PASS][376], [PASS][377], [PASS][378], [PASS][379], [PASS][380], [PASS][381], [PASS][382], [PASS][383], [PASS][384], [PASS][385], [PASS][386])
[347]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-1/igt@xe_module_load@load.html
[348]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-1/igt@xe_module_load@load.html
[349]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-1/igt@xe_module_load@load.html
[350]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-1/igt@xe_module_load@load.html
[351]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-2/igt@xe_module_load@load.html
[352]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-9/igt@xe_module_load@load.html
[353]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-9/igt@xe_module_load@load.html
[354]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-6/igt@xe_module_load@load.html
[355]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-6/igt@xe_module_load@load.html
[356]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-6/igt@xe_module_load@load.html
[357]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-2/igt@xe_module_load@load.html
[358]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-2/igt@xe_module_load@load.html
[359]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-4/igt@xe_module_load@load.html
[360]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-9/igt@xe_module_load@load.html
[361]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-8/igt@xe_module_load@load.html
[362]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-4/igt@xe_module_load@load.html
[363]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-8/igt@xe_module_load@load.html
[364]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-4/igt@xe_module_load@load.html
[365]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-8/igt@xe_module_load@load.html
[366]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-6/igt@xe_module_load@load.html
[367]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@xe_module_load@load.html
[368]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_module_load@load.html
[369]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_module_load@load.html
[370]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_module_load@load.html
[371]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_module_load@load.html
[372]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_module_load@load.html
[373]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-1/igt@xe_module_load@load.html
[374]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@xe_module_load@load.html
[375]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@xe_module_load@load.html
[376]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_module_load@load.html
[377]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_module_load@load.html
[378]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@xe_module_load@load.html
[379]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-9/igt@xe_module_load@load.html
[380]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@xe_module_load@load.html
[381]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-6/igt@xe_module_load@load.html
[382]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-8/igt@xe_module_load@load.html
[383]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-4/igt@xe_module_load@load.html
[384]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-4/igt@xe_module_load@load.html
[385]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-4/igt@xe_module_load@load.html
[386]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-4/igt@xe_module_load@load.html
* igt@xe_pm@s4-basic:
- shard-adlp: [ABORT][387] ([Intel XE#1794]) -> [PASS][388]
[387]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-adlp-9/igt@xe_pm@s4-basic.html
[388]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-adlp-2/igt@xe_pm@s4-basic.html
* igt@xe_pm@s4-basic-exec:
- shard-lnl: [ABORT][389] ([Intel XE#1794]) -> [PASS][390]
[389]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-lnl-2/igt@xe_pm@s4-basic-exec.html
[390]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-lnl-1/igt@xe_pm@s4-basic-exec.html
#### Warnings ####
* igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs-cc@pipe-d-hdmi-a-6:
- shard-dg2-set2: [SKIP][391] ([Intel XE#787]) -> [SKIP][392] ([Intel XE#455] / [Intel XE#787]) +4 other tests skip
[391]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-436/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs-cc@pipe-d-hdmi-a-6.html
[392]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-rc-ccs-cc@pipe-d-hdmi-a-6.html
* igt@kms_ccs@missing-ccs-buffer-y-tiled-gen12-rc-ccs@pipe-d-hdmi-a-6:
- shard-dg2-set2: [SKIP][393] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][394] ([Intel XE#787]) +5 other tests skip
[393]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_ccs@missing-ccs-buffer-y-tiled-gen12-rc-ccs@pipe-d-hdmi-a-6.html
[394]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-463/igt@kms_ccs@missing-ccs-buffer-y-tiled-gen12-rc-ccs@pipe-d-hdmi-a-6.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs:
- shard-dg2-set2: [INCOMPLETE][395] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#4345]) -> [INCOMPLETE][396] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4345] / [Intel XE#4522])
[395]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-432/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
[396]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc:
- shard-dg2-set2: [INCOMPLETE][397] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#3124]) -> [INCOMPLETE][398] ([Intel XE#1727] / [Intel XE#3113])
[397]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-435/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc.html
[398]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc.html
* igt@kms_content_protection@atomic:
- shard-bmg: [SKIP][399] ([Intel XE#2341]) -> [FAIL][400] ([Intel XE#1178])
[399]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_content_protection@atomic.html
[400]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_content_protection@atomic.html
* igt@kms_content_protection@atomic-dpms:
- shard-bmg: [FAIL][401] ([Intel XE#1178]) -> [SKIP][402] ([Intel XE#2341])
[401]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-8/igt@kms_content_protection@atomic-dpms.html
[402]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_content_protection@atomic-dpms.html
* igt@kms_content_protection@uevent:
- shard-bmg: [SKIP][403] ([Intel XE#2341]) -> [FAIL][404] ([Intel XE#1188])
[403]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_content_protection@uevent.html
[404]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-1/igt@kms_content_protection@uevent.html
* igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-6:
- shard-dg2-set2: [SKIP][405] ([Intel XE#455] / [i915#3804]) -> [SKIP][406] ([i915#3804])
[405]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-6.html
[406]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-433/igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-6.html
* igt@kms_flip@2x-flip-vs-expired-vblank-interruptible:
- shard-bmg: [FAIL][407] ([Intel XE#3321]) -> [SKIP][408] ([Intel XE#2316])
[407]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-8/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible.html
[408]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible.html
* igt@kms_frontbuffer_tracking@drrs-2p-pri-indfb-multidraw:
- shard-dg2-set2: [SKIP][409] ([Intel XE#651]) -> [SKIP][410] ([Intel XE#656]) +3 other tests skip
[409]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-436/igt@kms_frontbuffer_tracking@drrs-2p-pri-indfb-multidraw.html
[410]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_frontbuffer_tracking@drrs-2p-pri-indfb-multidraw.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][411] ([Intel XE#2312]) -> [SKIP][412] ([Intel XE#2311]) +9 other tests skip
[411]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt.html
[412]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-msflip-blt:
- shard-bmg: [SKIP][413] ([Intel XE#2312]) -> [SKIP][414] ([Intel XE#4141]) +5 other tests skip
[413]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-msflip-blt.html
[414]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][415] ([Intel XE#4141]) -> [SKIP][416] ([Intel XE#2312]) +5 other tests skip
[415]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-7/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc.html
[416]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-cur-indfb-move:
- shard-bmg: [SKIP][417] ([Intel XE#2311]) -> [SKIP][418] ([Intel XE#2312]) +10 other tests skip
[417]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-cur-indfb-move.html
[418]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-cur-indfb-move.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-pri-indfb-draw-mmap-wc:
- shard-dg2-set2: [SKIP][419] ([Intel XE#656]) -> [SKIP][420] ([Intel XE#651]) +10 other tests skip
[419]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-pri-indfb-draw-mmap-wc.html
[420]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-432/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt:
- shard-bmg: [SKIP][421] ([Intel XE#2313]) -> [SKIP][422] ([Intel XE#2312]) +10 other tests skip
[421]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt.html
[422]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt:
- shard-dg2-set2: [SKIP][423] ([Intel XE#656]) -> [SKIP][424] ([Intel XE#653]) +7 other tests skip
[423]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-464/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt.html
[424]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-463/igt@kms_frontbuffer_tracking@psr-2p-primscrn-shrfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][425] ([Intel XE#2312]) -> [SKIP][426] ([Intel XE#2313]) +11 other tests skip
[425]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc.html
[426]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-3/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-render:
- shard-dg2-set2: [SKIP][427] ([Intel XE#653]) -> [SKIP][428] ([Intel XE#656]) +6 other tests skip
[427]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-436/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-render.html
[428]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-render.html
* igt@kms_plane_multiple@2x-tiling-yf:
- shard-dg2-set2: [SKIP][429] ([Intel XE#455]) -> [SKIP][430] ([Intel XE#4596])
[429]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-436/igt@kms_plane_multiple@2x-tiling-yf.html
[430]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-464/igt@kms_plane_multiple@2x-tiling-yf.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-bmg: [FAIL][431] ([Intel XE#1729]) -> [SKIP][432] ([Intel XE#2426])
[431]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-1/igt@kms_tiled_display@basic-test-pattern.html
[432]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-7/igt@kms_tiled_display@basic-test-pattern.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-bmg: [SKIP][433] ([Intel XE#2426]) -> [SKIP][434] ([Intel XE#2509])
[433]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-bmg-4/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[434]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-bmg-1/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
- shard-dg2-set2: [SKIP][435] ([Intel XE#362]) -> [SKIP][436] ([Intel XE#1500])
[435]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd/shard-dg2-463/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[436]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/shard-dg2-432/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[Intel XE#1035]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1035
[Intel XE#1061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1061
[Intel XE#1062]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1062
[Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1125]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1125
[Intel XE#1135]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1135
[Intel XE#1151]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1151
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1188]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1188
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
[Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
[Intel XE#1430]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1430
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
[Intel XE#1500]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1500
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1508]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1508
[Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
[Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
[Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
[Intel XE#1794]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1794
[Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
[Intel XE#2078]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2078
[Intel XE#2168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2168
[Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
[Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2248]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2248
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2286]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2286
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2328]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2328
[Intel XE#2330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2330
[Intel XE#2340]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2340
[Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
[Intel XE#2352]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2352
[Intel XE#2360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2360
[Intel XE#2370]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2370
[Intel XE#2373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2373
[Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
[Intel XE#2385]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2385
[Intel XE#2390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2390
[Intel XE#2393]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2393
[Intel XE#2413]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2413
[Intel XE#2414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2414
[Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
[Intel XE#2502]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2502
[Intel XE#2509]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2509
[Intel XE#2541]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2541
[Intel XE#2550]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2550
[Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
[Intel XE#261]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/261
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2705]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2705
[Intel XE#2724]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2724
[Intel XE#2763]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2763
[Intel XE#2833]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2833
[Intel XE#2838]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2838
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#2882]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2882
Build changes
-------------
* Linux: xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd -> xe-pw-146290v4
IGT_8311: 851a9c1cb1a690d8c527f26c49c250ec583af65e @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-2921-a49a4787e6bc70296204f4a6e1b0fed3759938cd: a49a4787e6bc70296204f4a6e1b0fed3759938cd
xe-pw-146290v4: 146290v4
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-146290v4/index.html
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 06/32] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value
2025-04-07 10:16 ` [PATCH v2 06/32] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value Himal Prasad Ghimiray
@ 2025-04-17 0:10 ` Matthew Brost
2025-04-21 4:09 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 0:10 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:46:53PM +0530, Himal Prasad Ghimiray wrote:
> Prefetch for SVM ranges can have more than one operation to increment,
> hence modify the function to accept an increment value as input.
>
> v2:
> - Call xe_vma_ops_incr_pt_update_ops only once for REMAP (Matthew Brost)
> - Add check for 0 ops
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 28 +++++++++++++++++-----------
> 1 file changed, 17 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 0c69ef6b5ec5..4d215c55a778 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -806,13 +806,16 @@ static void xe_vma_ops_fini(struct xe_vma_ops *vops)
> kfree(vops->pt_update_ops[i].ops);
> }
>
> -static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask)
> +static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask, u8 inc_val)
s/u8 inc_val/int inc_val
or maybe u32?
Just debugged a problem with the compute UMD + prefetch where the
inc_val was 256, which a u8 truncates to 0, so the binding step was
skipped for prefetch.
> {
> int i;
>
> + if(!inc_val)
> + return;
> +
> for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
> if (BIT(i) & tile_mask)
> - ++vops->pt_update_ops[i].num_ops;
> + vops->pt_update_ops[i].num_ops += inc_val;
> }
>
> static void xe_vm_populate_rebind(struct xe_vma_op *op, struct xe_vma *vma,
> @@ -842,7 +845,7 @@ static int xe_vm_ops_add_rebind(struct xe_vma_ops *vops, struct xe_vma *vma,
>
> xe_vm_populate_rebind(op, vma, tile_mask);
> list_add_tail(&op->link, &vops->list);
> - xe_vma_ops_incr_pt_update_ops(vops, tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, tile_mask, 1);
>
> return 0;
> }
> @@ -977,7 +980,7 @@ xe_vm_ops_add_range_rebind(struct xe_vma_ops *vops,
>
> xe_vm_populate_range_rebind(op, vma, range, tile_mask);
> list_add_tail(&op->link, &vops->list);
> - xe_vma_ops_incr_pt_update_ops(vops, tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, tile_mask, 1);
>
> return 0;
> }
> @@ -1062,7 +1065,7 @@ xe_vm_ops_add_range_unbind(struct xe_vma_ops *vops,
>
> xe_vm_populate_range_unbind(op, range);
> list_add_tail(&op->link, &vops->list);
> - xe_vma_ops_incr_pt_update_ops(vops, range->tile_present);
> + xe_vma_ops_incr_pt_update_ops(vops, range->tile_present, 1);
>
> return 0;
> }
> @@ -2493,7 +2496,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> !op->map.is_cpu_addr_mirror) ||
> op->map.invalidate_on_bind)
> xe_vma_ops_incr_pt_update_ops(vops,
> - op->tile_mask);
> + op->tile_mask, 1);
> break;
> }
> case DRM_GPUVA_OP_REMAP:
> @@ -2502,6 +2505,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> gpuva_to_vma(op->base.remap.unmap->va);
> bool skip = xe_vma_is_cpu_addr_mirror(old);
> u64 start = xe_vma_start(old), end = xe_vma_end(old);
> + u8 num_remap_ops = 0;
u8 actually works here as the max value is 3 but I'd change this to a
u32 or int.
Otherwise LGTM.
Matt
>
> if (op->base.remap.prev)
> start = op->base.remap.prev->va.addr +
> @@ -2554,7 +2558,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> (ULL)op->remap.start,
> (ULL)op->remap.range);
> } else {
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + num_remap_ops++;
> }
> }
>
> @@ -2583,11 +2587,13 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> (ULL)op->remap.start,
> (ULL)op->remap.range);
> } else {
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + num_remap_ops++;
> }
> }
> if (!skip)
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + num_remap_ops++;
> +
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, num_remap_ops);
> break;
> }
> case DRM_GPUVA_OP_UNMAP:
> @@ -2599,7 +2605,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> return -EBUSY;
>
> if (!xe_vma_is_cpu_addr_mirror(vma))
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
> break;
> case DRM_GPUVA_OP_PREFETCH:
> vma = gpuva_to_vma(op->base.prefetch.va);
> @@ -2611,7 +2617,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> }
>
> if (!xe_vma_is_cpu_addr_mirror(vma))
> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
> break;
> default:
> drm_warn(&vm->xe->drm, "NOT POSSIBLE");
> --
> 2.34.1
>
* Re: [PATCH v2 02/32] drm/xe: Make xe_svm_alloc_vram public
2025-04-07 10:16 ` [PATCH v2 02/32] drm/xe: Make xe_svm_alloc_vram public Himal Prasad Ghimiray
@ 2025-04-17 2:50 ` Matthew Brost
2025-04-21 4:06 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 2:50 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:46:49PM +0530, Himal Prasad Ghimiray wrote:
> This function will be used in prefetch too, hence make it public.
>
> v2:
> - Add kernel-doc (Matthew Brost)
> - Rebase
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 23 +++++++++++++----------
> drivers/gpu/drm/xe/xe_svm.h | 23 +++++++++++++++++++++++
> 2 files changed, 36 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index c7424c824a14..de19ad056287 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -661,9 +661,19 @@ static struct xe_vram_region *tile_to_vr(struct xe_tile *tile)
> return &tile->mem.vram;
> }
>
> -static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> - struct xe_svm_range *range,
> - const struct drm_gpusvm_ctx *ctx)
> +/**
> + * xe_svm_alloc_vram()- Allocate device memory pages for range,
> + * migrating existing data.
> + * @vm: The VM.
> + * @tile: tile to allocate vram from
> + * @range: SVM range
> + * @ctx: DRM GPU SVM context
> + *
> + * Return: 0 on success, error code on failure.
> + */
> +int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> + struct xe_svm_range *range,
> + const struct drm_gpusvm_ctx *ctx)
> {
> struct mm_struct *mm = vm->svm.gpusvm.mm;
> struct xe_vram_region *vr = tile_to_vr(tile);
> @@ -717,13 +727,6 @@ static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>
> return err;
> }
> -#else
> -static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> - struct xe_svm_range *range,
> - const struct drm_gpusvm_ctx *ctx)
> -{
> - return -EOPNOTSUPP;
> -}
> #endif
>
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index 3d441eb1f7ea..d8772f841ab7 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -75,6 +75,20 @@ int xe_svm_bo_evict(struct xe_bo *bo);
>
> void xe_svm_range_debug(struct xe_svm_range *range, const char *operation);
>
> +#if IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
> +int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> + struct xe_svm_range *range,
> + const struct drm_gpusvm_ctx *ctx);
> +#else
> +static inline
> +int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> + struct xe_svm_range *range,
> + const struct drm_gpusvm_ctx *ctx)
> +{
> + return -EOPNOTSUPP;
> +}
> +#endif
> +
> /**
> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
> * @range: SVM range
> @@ -100,6 +114,7 @@ static inline bool xe_svm_range_has_dma_mapping(struct xe_svm_range *range)
> #include <linux/interval_tree.h>
>
> struct drm_pagemap_device_addr;
> +struct drm_gpusvm_ctx;
> struct xe_bo;
> struct xe_gt;
> struct xe_vm;
> @@ -170,6 +185,14 @@ void xe_svm_range_debug(struct xe_svm_range *range, const char *operation)
> {
> }
>
> +static inline
> +int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> + struct xe_svm_range *range,
> + const struct drm_gpusvm_ctx *ctx)
> +{
> + return -EOPNOTSUPP;
> +}
> +
It is a little goofy to have 2 versions of xe_svm_alloc_vram stubbed
out in a single file. How about...
#if IS_ENABLED(CONFIG_DRM_GPUSVM) && IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
prototype
#else
stub
#endif
Or another option is in xe_svm.c we stub out xe_devm_add behind
CONFIG_DRM_XE_DEVMEM_MIRROR so maybe stick xe_svm_alloc_vram there?
Or lastly, I don't think anything in xe_svm_alloc_vram actually depends
on CONFIG_DRM_XE_DEVMEM_MIRROR either, as the static version is not
hidden behind CONFIG_DRM_XE_DEVMEM_MIRROR.
Matt
> #define xe_svm_assert_in_notifier(...) do {} while (0)
> #define xe_svm_range_has_dma_mapping(...) false
>
> --
> 2.34.1
>
* Re: [PATCH v2 07/32] drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch
2025-04-07 10:16 ` [PATCH v2 07/32] drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch Himal Prasad Ghimiray
@ 2025-04-17 2:53 ` Matthew Brost
0 siblings, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 2:53 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:46:54PM +0530, Himal Prasad Ghimiray wrote:
> Add a flag in xe_vma_ops to determine whether it has svm prefetch ops or
> not.
>
> v2:
> - s/false/0 (Matthew Brost)
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 1 +
> drivers/gpu/drm/xe/xe_vm_types.h | 3 +++
> 2 files changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 4d215c55a778..b1f1e85d26f7 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -3240,6 +3240,7 @@ static void xe_vma_ops_init(struct xe_vma_ops *vops, struct xe_vm *vm,
> vops->q = q;
> vops->syncs = syncs;
> vops->num_syncs = num_syncs;
> + vops->flags = 0;
> }
>
> static int xe_vm_bind_ioctl_validate_bo(struct xe_device *xe, struct xe_bo *bo,
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index ae0bcefdbfcd..d3c1209348e9 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -445,6 +445,9 @@ struct xe_vma_ops {
> u32 num_syncs;
> /** @pt_update_ops: page table update operations */
> struct xe_vm_pgtable_update_ops pt_update_ops[XE_MAX_TILES_PER_DEVICE];
> + /** @flag: signify the properties within xe_vma_ops*/
> +#define XE_VMA_OPS_HAS_SVM_PREFETCH BIT(0)
Maybe s/XE_VMA_OPS_HAS_SVM_PREFETCH/XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH ?
Either way:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> + u32 flags;
> #ifdef TEST_VM_OPS_ERROR
> /** @inject_error: inject error to test error handling */
> bool inject_error;
> --
> 2.34.1
>
* Re: [PATCH v2 09/32] drm/xe/svm: Allow unaligned addresses and ranges for prefetch
2025-04-07 10:16 ` [PATCH v2 09/32] drm/xe/svm: Allow unaligned addresses and ranges for prefetch Himal Prasad Ghimiray
@ 2025-04-17 2:53 ` Matthew Brost
0 siblings, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 2:53 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:46:56PM +0530, Himal Prasad Ghimiray wrote:
> The SVM prefetch operation can handle unaligned addresses and range sizes.
> This commit updates the ioctl parameter checks to accommodate unaligned
> addresses and range sizes for SVM prefetch operations.
>
This patch can be dropped per discussions with the UMD team.
Matt
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index ac308cfdaf28..57af2c37f927 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -3111,6 +3111,16 @@ ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_execute, ERRNO);
> #define XE_64K_PAGE_MASK 0xffffull
> #define ALL_DRM_XE_SYNCS_FLAGS (DRM_XE_SYNCS_FLAG_WAIT_FOR_OP)
>
> +static bool addr_not_in_cpu_addr_vma(struct xe_vm *vm, u64 addr)
> +{
> + struct xe_vma *vma;
> +
> + down_write(&vm->lock);
> + vma = xe_vm_find_vma_by_addr(vm, addr);
> + up_write(&vm->lock);
> + return !xe_vma_is_cpu_addr_mirror(vma);
> +}
> +
> static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
> struct drm_xe_vm_bind *args,
> struct drm_xe_vm_bind_op **bind_ops)
> @@ -3219,8 +3229,12 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
> }
>
> if (XE_IOCTL_DBG(xe, obj_offset & ~PAGE_MASK) ||
> - XE_IOCTL_DBG(xe, addr & ~PAGE_MASK) ||
> - XE_IOCTL_DBG(xe, range & ~PAGE_MASK) ||
> + XE_IOCTL_DBG(xe, (addr & ~PAGE_MASK) &&
> + (addr_not_in_cpu_addr_vma(vm, addr) ||
> + op != DRM_XE_VM_BIND_OP_PREFETCH)) ||
> + XE_IOCTL_DBG(xe, (range & ~PAGE_MASK) &&
> + (addr_not_in_cpu_addr_vma(vm, addr) ||
> + op != DRM_XE_VM_BIND_OP_PREFETCH)) ||
> XE_IOCTL_DBG(xe, !range &&
> op != DRM_XE_VM_BIND_OP_UNMAP_ALL)) {
> err = -EINVAL;
> --
> 2.34.1
>
* Re: [PATCH v2 10/32] drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm
2025-04-07 10:16 ` [PATCH v2 10/32] drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm Himal Prasad Ghimiray
@ 2025-04-17 2:57 ` Matthew Brost
2025-04-21 4:30 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 2:57 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:46:57PM +0530, Himal Prasad Ghimiray wrote:
> Define xe_svm_range_find_or_insert function wrapping
> drm_gpusvm_range_find_or_insert for reusing in prefetch.
>
> Define xe_svm_range_get_pages function wrapping
> drm_gpusvm_range_get_pages for reusing in prefetch.
>
> -v2 pass pagefault defined drm_gpu_svm context as parameter
> in xe_svm_range_find_or_insert(Matthew Brost)
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 67 ++++++++++++++++++++++++++++++-------
> drivers/gpu/drm/xe/xe_svm.h | 20 +++++++++++
> 2 files changed, 75 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 6648b4da0bca..8cd35553a927 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -735,7 +735,6 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
> };
> struct xe_svm_range *range;
> - struct drm_gpusvm_range *r;
> struct drm_exec exec;
> struct dma_fence *fence;
> struct xe_tile *tile = gt_to_tile(gt);
> @@ -753,13 +752,11 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> if (err)
> return err;
>
> - r = drm_gpusvm_range_find_or_insert(&vm->svm.gpusvm, fault_addr,
> - xe_vma_start(vma), xe_vma_end(vma),
> - &ctx);
> - if (IS_ERR(r))
> - return PTR_ERR(r);
> + range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
> +
> + if (IS_ERR(range))
> + return PTR_ERR(range);
>
> - range = to_xe_range(r);
> if (xe_svm_range_is_valid(range, tile))
> return 0;
>
> @@ -781,13 +778,9 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> }
>
> range_debug(range, "GET PAGES");
> - err = drm_gpusvm_range_get_pages(&vm->svm.gpusvm, r, &ctx);
> + err = xe_svm_range_get_pages(vm, range, &ctx);
> /* Corner where CPU mappings have changed */
> if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
> - if (err == -EOPNOTSUPP) {
> - range_debug(range, "PAGE FAULT - EVICT PAGES");
> - drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);
> - }
> drm_dbg(&vm->xe->drm,
> "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
> vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> @@ -866,6 +859,56 @@ int xe_svm_bo_evict(struct xe_bo *bo)
> return drm_gpusvm_evict_to_ram(&bo->devmem_allocation);
> }
>
> +/**
> + * xe_svm_range_find_or_insert- Find or insert GPU SVM range
> + * @vm: xe_vm pointer
> + * @addr: address for which range needs to be found/inserted
> + * @vma: Pointer to struct xe_vma which mirrors CPU
> + * @ctx: GPU SVM context
> + *
> + * This function finds or inserts a newly allocated SVM range based on the
> + * address.
> + *
> + * Return: Pointer to the SVM range on success, ERR_PTR() on failure.
> + */
> +struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
> + struct xe_vma *vma, struct drm_gpusvm_ctx *ctx)
> +{
> + struct drm_gpusvm_range *r;
> +
> + r = drm_gpusvm_range_find_or_insert(&vm->svm.gpusvm, max(addr, xe_vma_start(vma)),
> + xe_vma_start(vma), xe_vma_end(vma), ctx);
> + if (IS_ERR(r))
> + return ERR_PTR(PTR_ERR(r));
> +
> + return to_xe_range(r);
> +}
> +
> +/**
> + * xe_svm_range_get_pages() - Get pages for a SVM range
> + * @vm: Pointer to the struct xe_vm
> + * @range: Pointer to the xe SVM range structure
> + * @ctx: GPU SVM context
> + *
> + * This function gets pages for a SVM range and ensures they are mapped for
> + * DMA access. In case of failure with -EOPNOTSUPP, it evicts the range.
> + *
> + * Return: 0 on success, negative error code on failure.
> + */
> +int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
> + struct drm_gpusvm_ctx *ctx)
> +{
> + int err = 0;
> +
> + err = drm_gpusvm_range_get_pages(&vm->svm.gpusvm, &range->base, ctx);
> + if (err == -EOPNOTSUPP) {
> + range_debug(range, "PAGE FAULT - EVICT PAGES");
> + drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);
> + }
> +
> + return err;
> +}
> +
> #if IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
>
> static struct drm_pagemap_device_addr
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index 1ec90d9bc749..9c4c3aeacc6c 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -89,6 +89,12 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> }
> #endif
>
> +struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
> + struct xe_vma *vma, struct drm_gpusvm_ctx *ctx);
One nit: check the alignment here. checkpatch should complain if this
is off; it's hard to tell whether it's wrong from the patch alone.
But patch LGTM:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> +
> +int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
> + struct drm_gpusvm_ctx *ctx);
> +
> /**
> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
> * @range: SVM range
> @@ -241,6 +247,20 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> return -EOPNOTSUPP;
> }
>
> +static inline
> +struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
> + struct xe_vma *vma, struct drm_gpusvm_ctx *ctx)
> +{
> + return ERR_PTR(-EINVAL);
> +}
> +
> +static inline
> +int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
> + struct drm_gpusvm_ctx *ctx)
> +{
> + return -EINVAL;
> +}
> +
> static inline struct xe_svm_range *to_xe_range(struct drm_gpusvm_range *r)
> {
> return NULL;
> --
> 2.34.1
>
* Re: [PATCH v2 11/32] drm/xe/svm: Add function to determine if range needs VRAM migration
2025-04-07 10:16 ` [PATCH v2 11/32] drm/xe/svm: Add function to determine if range needs VRAM migration Himal Prasad Ghimiray
@ 2025-04-17 3:05 ` Matthew Brost
2025-04-21 4:52 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 3:05 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:46:58PM +0530, Himal Prasad Ghimiray wrote:
> xe_svm_range_needs_migrate_to_vram() determines whether range needs
> migration to vram or not, for pagefault try at least once.
>
So I pulled this patch into the requested series to minimally enable
atomics on the existing upstream code here [1]. I suspect we will get
my series in first, perhaps even in the 6.15 cycle, as I think it could
reasonably be justified as a fixes series given the compute UMD doesn't
really work without it. Just a heads up.
[1] https://patchwork.freedesktop.org/patch/647159/?series=146290&rev=4
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 49 +++++++++++++++++++++++++++++++++++--
> drivers/gpu/drm/xe/xe_svm.h | 10 ++++++++
> 2 files changed, 57 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 8cd35553a927..f4ae3feaf9d3 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -709,6 +709,51 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
> }
> #endif
>
> +static bool supports_4K_migration(struct xe_device *xe)
> +{
> + if (xe->info.platform == XE_BATTLEMAGE)
> + return true;
> +
> + return false;
if (xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K)
return false;
return true;
> +}
> +
> +/**
> + * xe_svm_range_needs_migrate_to_vram() - SVM range needs migrate to VRAM or not
> + * @range: SVM range for which migration needs to be decided
> + * @vma: vma which has range
> + * @region: default placement for range
> + *
> + * Return: True for range needing migration and migration is supported else false
> + */
> +bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
> + u32 region)
> +{
> + struct xe_vm *vm = range_to_vm(&range->base);
> + u64 range_size = xe_svm_range_size(range);
> + bool needs_migrate = false;
> +
> + if (!range->base.flags.migrate_devmem)
> + return false;
> +
> + needs_migrate = region;
> +
> + if (needs_migrate && !IS_DGFX(vm->xe)) {
> + drm_warn(&vm->xe->drm, "Platform doesn't support VRAM\n");
> + return false;
> + }
I'm not sure this warrants a drm_warn; I think an assert would be
better here, as it shouldn't happen unless we have an internal
programming error.
Matt
> +
> + if (needs_migrate && xe_svm_range_in_vram(range)) {
> + drm_info(&vm->xe->drm, "Range is already in VRAM\n");
> + return false;
> + }
> +
> + if (needs_migrate && range_size <= SZ_64K && !supports_4K_migration(vm->xe)) {
> + drm_warn(&vm->xe->drm, "Platform doesn't support SZ_4K range migration\n");
> + return false;
> + }
> +
> + return needs_migrate;
> +}
>
> /**
> * xe_svm_handle_pagefault() - SVM handle page fault
> @@ -763,8 +808,8 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> range_debug(range, "PAGE FAULT");
>
> /* XXX: Add migration policy, for now migrate range once */
> - if (!range->skip_migrate && range->base.flags.migrate_devmem &&
> - xe_svm_range_size(range) >= SZ_64K) {
> + if (!range->skip_migrate &&
> + xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
A little odd to pass IS_DGFX(vm->xe) as the region...
> range->skip_migrate = true;
>
> err = xe_svm_alloc_vram(vm, tile, range, &ctx);
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index 9c4c3aeacc6c..d5be8229ca7e 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -95,6 +95,9 @@ struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
> int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
> struct drm_gpusvm_ctx *ctx);
>
> +bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
> + u32 region);
> +
> /**
> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
> * @range: SVM range
> @@ -281,6 +284,13 @@ static inline unsigned long xe_svm_range_size(struct xe_svm_range *range)
> return 0;
> }
>
> +static inline
> +bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
> + u32 region)
> +{
> + return false;
> +}
> +
> #define xe_svm_assert_in_notifier(...) do {} while (0)
> #define xe_svm_range_has_dma_mapping(...) false
>
> --
> 2.34.1
>
* Re: [PATCH v2 12/32] drm/gpusvm: Introduce vram_only flag for VRAM allocation
2025-04-07 10:16 ` [PATCH v2 12/32] drm/gpusvm: Introduce vram_only flag for VRAM allocation Himal Prasad Ghimiray
@ 2025-04-17 3:07 ` Matthew Brost
2025-04-21 4:55 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 3:07 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:46:59PM +0530, Himal Prasad Ghimiray wrote:
> This commit adds a new flag, vram_only, to the drm_gpusvm structure. The
> purpose of this flag is to ensure that the get_pages function allocates
> memory exclusively from the device's VRAM. If the allocation from VRAM
> fails, the function will return an -EFAULT error.
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
Again this is included in [1] with you remaining as the author.
Anyways:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
[1] https://patchwork.freedesktop.org/series/147846/
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/drm_gpusvm.c | 5 +++++
> include/drm/drm_gpusvm.h | 2 ++
> 2 files changed, 7 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 2451c816edd5..149ac56eff70 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -1454,6 +1454,11 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
> goto err_unmap;
> }
>
> + if (ctx->vram_only) {
> + err = -EFAULT;
> + goto err_unmap;
> + }
> +
> addr = dma_map_page(gpusvm->drm->dev,
> page, 0,
> PAGE_SIZE << order,
> diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
> index df120b4d1f83..8093cc6ab1f4 100644
> --- a/include/drm/drm_gpusvm.h
> +++ b/include/drm/drm_gpusvm.h
> @@ -286,6 +286,7 @@ struct drm_gpusvm {
> * @in_notifier: entering from a MMU notifier
> * @read_only: operating on read-only memory
> * @devmem_possible: possible to use device memory
> + * @vram_only: Use only device memory
> *
> * Context that is DRM GPUSVM is operating in (i.e. user arguments).
> */
> @@ -294,6 +295,7 @@ struct drm_gpusvm_ctx {
> unsigned int in_notifier :1;
> unsigned int read_only :1;
> unsigned int devmem_possible :1;
> + unsigned int vram_only :1;
> };
>
> int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
> --
> 2.34.1
>
* Re: [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
2025-04-07 10:17 ` [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram Himal Prasad Ghimiray
@ 2025-04-17 4:19 ` Matthew Brost
2025-04-21 4:58 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 4:19 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:00PM +0530, Himal Prasad Ghimiray wrote:
> Ranges can be invalidated between VRAM allocation and get_pages;
> ensure the DMA mapping happens from VRAM only in case of atomic
> access. Retry 3 times before reporting a fault in case of concurrent
> CPU/GPU access.
>
Again I pulled this patch into a series which will minimally enable
atomics per UMD request. See the version of the patch [1] I landed on -
that is basically my review feedback. I took ownership but left your SoB
as it is based on this patch. We will need another reviewer though, as
we are both contributors, but feel free to comment there.
Matt
[1] https://patchwork.freedesktop.org/patch/649010/?series=147846&rev=2
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 43 ++++++++++++++++++++++++-------------
> 1 file changed, 28 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index f4ae3feaf9d3..7ec7ecd7eb1f 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -778,11 +778,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
> .check_pages_threshold = IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
> + .vram_only = 0,
> };
> struct xe_svm_range *range;
> struct drm_exec exec;
> struct dma_fence *fence;
> struct xe_tile *tile = gt_to_tile(gt);
> + int retry_count = 3;
> ktime_t end = 0;
> int err;
>
> @@ -792,6 +794,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
>
> retry:
> + retry_count--;
> /* Always process UNMAPs first so view SVM ranges is current */
> err = xe_svm_garbage_collector(vm);
> if (err)
> @@ -807,30 +810,40 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>
> range_debug(range, "PAGE FAULT");
>
> - /* XXX: Add migration policy, for now migrate range once */
> - if (!range->skip_migrate &&
> - xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
> - range->skip_migrate = true;
> -
> + if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
> err = xe_svm_alloc_vram(vm, tile, range, &ctx);
> if (err) {
> - drm_dbg(&vm->xe->drm,
> - "VRAM allocation failed, falling back to "
> - "retrying fault, asid=%u, errno=%pe\n",
> - vm->usm.asid, ERR_PTR(err));
> - goto retry;
> + if (retry_count) {
> + drm_dbg(&vm->xe->drm,
> + "VRAM allocation failed, falling back to retrying fault, asid=%u, errno=%pe\n",
> + vm->usm.asid, ERR_PTR(err));
> + goto retry;
> + } else {
> + drm_err(&vm->xe->drm,
> + "VRAM allocation failed, retry count exceeded, asid=%u, errno=%pe\n",
> + vm->usm.asid, ERR_PTR(err));
> + return err;
> + }
> }
> +
> }
>
> + if (atomic)
> + ctx.vram_only = 1;
> +
> range_debug(range, "GET PAGES");
> err = xe_svm_range_get_pages(vm, range, &ctx);
> /* Corner where CPU mappings have changed */
> if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
> - drm_dbg(&vm->xe->drm,
> - "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
> - vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> - range_debug(range, "PAGE FAULT - RETRY PAGES");
> - goto retry;
> + if (retry_count) {
> + drm_dbg(&vm->xe->drm, "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> + range_debug(range, "PAGE FAULT - RETRY PAGES");
> + goto retry;
> + } else {
> + drm_err(&vm->xe->drm, "Get pages failed, retry count exceeded, asid=%u, errno=%pe\n",
> + vm->usm.asid, ERR_PTR(err));
> + }
> }
> if (err) {
> range_debug(range, "PAGE FAULT - FAIL PAGE COLLECT");
> --
> 2.34.1
>
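The bounded-retry logic in the patch above can be condensed into a small userspace sketch; `handle_fault` and `try_alloc` are stand-in names for the kernel logic, and the -EFAULT constant is simulated locally rather than taken from kernel headers:

```c
#define EFAULT_SIM 14  /* stand-in for the kernel's -EFAULT */

/* Simulated allocator: fail a fixed number of times, then succeed. */
static int fails_left;

static int try_alloc(void)
{
	if (fails_left > 0) {
		fails_left--;
		return -EFAULT_SIM; /* transient failure, worth retrying */
	}
	return 0;
}

/*
 * Bounded retry, mirroring the retry_count handling in the patch:
 * decrement the budget at the top of every pass so a persistently
 * failing allocation cannot livelock the fault handler.
 */
static int handle_fault(int retry_budget)
{
retry:
	retry_budget--;
	if (try_alloc()) {
		if (retry_budget)
			goto retry;         /* drm_dbg(): retry the fault */
		return -EFAULT_SIM;         /* drm_err(): budget exhausted */
	}
	return 0;
}
```

With a budget of 3 the handler makes at most three allocation attempts before surfacing the error to the caller, unlike the unbounded `goto retry` it replaces.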
* Re: [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges
2025-04-07 10:17 ` [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges Himal Prasad Ghimiray
@ 2025-04-17 4:54 ` Matthew Brost
2025-04-24 10:03 ` Ghimiray, Himal Prasad
2025-04-24 23:48 ` Matthew Brost
1 sibling, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 4:54 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:01PM +0530, Himal Prasad Ghimiray wrote:
> This commit adds prefetch support for SVM ranges, utilizing the
> existing ioctl vm_bind functionality to achieve this.
>
> v2: rebase
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pt.c | 61 +++++++++---
> drivers/gpu/drm/xe/xe_vm.c | 185 +++++++++++++++++++++++++++++++++++--
> 2 files changed, 222 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index de4e3edda758..59dc065fae93 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -1458,7 +1458,8 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
> struct xe_vm *vm = pt_update->vops->vm;
> struct xe_vma_ops *vops = pt_update->vops;
> struct xe_vma_op *op;
> - int err;
> + int ranges_count;
> + int err, i;
>
> err = xe_pt_pre_commit(pt_update);
> if (err)
> @@ -1467,20 +1468,33 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
> xe_svm_notifier_lock(vm);
>
> list_for_each_entry(op, &vops->list, link) {
> - struct xe_svm_range *range = op->map_range.range;
> + struct xe_svm_range *range = NULL;
>
> if (op->subop == XE_VMA_SUBOP_UNMAP_RANGE)
> continue;
>
> - xe_svm_range_debug(range, "PRE-COMMIT");
> -
> - xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
> - xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
> + if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
> + xe_assert(vm->xe,
> + xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.prefetch.va)));
> + ranges_count = op->prefetch_range.ranges_count;
> + } else {
> + xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
> + xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
> + ranges_count = 1;
> + }
>
> - if (!xe_svm_range_pages_valid(range)) {
> - xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
> - xe_svm_notifier_unlock(vm);
> - return -EAGAIN;
> + for (i = 0; i < ranges_count; i++) {
xa_for_each as it doesn't make any assumptions about the key (e.g. the value of i).
> + if (op->base.op == DRM_GPUVA_OP_PREFETCH)
> + range = xa_load(&op->prefetch_range.range, i);
I'd move this logic above... So I'd write it like this...
if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
assert
xa_for_each
do_pages_check()
} else {
assert
do_pages_check();
}
> + else
> + range = op->map_range.range;
> + xe_svm_range_debug(range, "PRE-COMMIT");
> +
> + if (!xe_svm_range_pages_valid(range)) {
> + xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
> + xe_svm_notifier_unlock(vm);
> + return -EAGAIN;
So in the case of prefetch this is a bit inconsistent: below, when
things race, you return -ENODATA, which is converted to 0 in the IOCTL.
-EAGAIN here could result in a livelock under the right conditions, as
-EAGAIN means we must retry. I think maybe just -ENODATA if a prefetch
fails... If there are any other binds in the array of the IOCTL they will
just fault, I guess. Maybe not a concern, as only VK uses an array of binds
at the moment.
> + }
> }
> }
>
> @@ -2065,11 +2079,21 @@ static int op_prepare(struct xe_vm *vm,
> {
> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>
> - if (xe_vma_is_cpu_addr_mirror(vma))
> - break;
> + if (xe_vma_is_cpu_addr_mirror(vma)) {
> + struct xe_svm_range *range;
> + int i;
>
> - err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
> - pt_update_ops->wait_vm_kernel = true;
> + for (i = 0; i < op->prefetch_range.ranges_count; i++) {
> + range = xa_load(&op->prefetch_range.range, i);
Again xa_for_each...
> + err = bind_range_prepare(vm, tile, pt_update_ops,
> + vma, range);
> + if (err)
> + return err;
> + }
> + } else {
> + err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
> + pt_update_ops->wait_vm_kernel = true;
> + }
> break;
> }
> case DRM_GPUVA_OP_DRIVER:
> @@ -2273,9 +2297,16 @@ static void op_commit(struct xe_vm *vm,
> {
> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>
> - if (!xe_vma_is_cpu_addr_mirror(vma))
> + if (xe_vma_is_cpu_addr_mirror(vma)) {
> + for (int i = 0 ; i < op->prefetch_range.ranges_count; i++) {
Again xa_for_each...
> + struct xe_svm_range *range = xa_load(&op->prefetch_range.range, i);
> +
> + range_present_and_invalidated_tile(vm, range, tile->id);
> + }
> + } else {
> bind_op_commit(vm, tile, pt_update_ops, vma, fence,
> fence2, false);
> + }
> break;
> }
> case DRM_GPUVA_OP_DRIVER:
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 57af2c37f927..ffd7ad664921 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -798,10 +798,36 @@ static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
> }
> ALLOW_ERROR_INJECTION(xe_vma_ops_alloc, ERRNO);
>
> +static void clean_svm_prefetch_op(struct xe_vma_op *op)
> +{
Can we rename this with fini convention to match xe_vma_ops_fini?
> + struct xe_vma *vma;
> +
> + vma = gpuva_to_vma(op->base.prefetch.va);
> +
> + if (op->base.op == DRM_GPUVA_OP_PREFETCH && xe_vma_is_cpu_addr_mirror(vma)) {
> + xa_destroy(&op->prefetch_range.range);
> + op->prefetch_range.ranges_count = 0;
Do you need to set 'op->prefetch_range.ranges_count' to zero here?
> + }
> +}
> +
> +static void clean_svm_prefetch_in_vma_ops(struct xe_vma_ops *vops)
> +{
Same here, fini convention?
> + struct xe_vma_op *op;
> +
> + if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
> + return;
> +
> + list_for_each_entry(op, &vops->list, link) {
> + clean_svm_prefetch_op(op);
> + }
Brackets not needed.
> +}
> +
> static void xe_vma_ops_fini(struct xe_vma_ops *vops)
> {
> int i;
>
> + clean_svm_prefetch_in_vma_ops(vops);
> +
> for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
> kfree(vops->pt_update_ops[i].ops);
> }
> @@ -2248,13 +2274,25 @@ static bool __xe_vm_needs_clear_scratch_pages(struct xe_vm *vm, u32 bind_flags)
> return true;
> }
>
> +static void clean_svm_prefetch_in_gpuva_ops(struct drm_gpuva_ops *ops)
> +{
Same here, fini convention?
> + struct drm_gpuva_op *__op;
> +
> + drm_gpuva_for_each_op(__op, ops) {
> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> +
> + clean_svm_prefetch_op(op);
> + }
> +}
> +
> /*
> * Create operations list from IOCTL arguments, setup operations fields so parse
> * and commit steps are decoupled from IOCTL arguments. This step can fail.
> */
> static struct drm_gpuva_ops *
> -vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
> - u64 bo_offset_or_userptr, u64 addr, u64 range,
> +vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
> + struct xe_bo *bo, u64 bo_offset_or_userptr,
> + u64 addr, u64 range,
> u32 operation, u32 flags,
> u32 prefetch_region, u16 pat_index)
> {
> @@ -2262,6 +2300,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
> struct drm_gpuva_ops *ops;
> struct drm_gpuva_op *__op;
> struct drm_gpuvm_bo *vm_bo;
> + u64 range_end = addr + range;
> int err;
>
> lockdep_assert_held_write(&vm->lock);
> @@ -2323,14 +2362,61 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
> op->map.invalidate_on_bind =
> __xe_vm_needs_clear_scratch_pages(vm, flags);
> } else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
> - op->prefetch.region = prefetch_region;
> - }
> + struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
> +
> + if (!xe_vma_is_cpu_addr_mirror(vma)) {
> + op->prefetch.region = prefetch_region;
> + break;
> + }
>
> + struct drm_gpusvm_ctx ctx = {
> + .read_only = xe_vma_read_only(vma),
> + .devmem_possible = IS_DGFX(vm->xe) &&
> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
> + .check_pages_threshold = IS_DGFX(vm->xe) &&
> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
> + SZ_64K : 0,
The alignment looks weird here.
Also, you don't technically need to set check_pages_threshold here, given
this is used by get_pages, which is not called here.
> + };
> +
> + op->prefetch_range.region = prefetch_region;
> + struct xe_svm_range *svm_range;
> + int i = 0;
I don't think you need 'i' here; you can probably just use xa_alloc
rather than xa_store if you use xa_for_each everywhere else.
> +
> + xa_init(&op->prefetch_range.range);
> + op->prefetch_range.ranges_count = 0;
> +alloc_next_range:
> + svm_range = xe_svm_range_find_or_insert(vm, addr, vma, &ctx);
> +
I think you want to check if the range has a mapping and is in the preferred
location; if it is, then don't add it to the xarray, as there is no reason to
migrate it or rebind the GPU pages.
> + if (PTR_ERR(svm_range) == -ENOENT)
> + break;
> +
> + if (IS_ERR(svm_range)) {
> + err = PTR_ERR(svm_range);
> + goto unwind_prefetch_ops;
> + }
> +
> + xa_store(&op->prefetch_range.range, i, svm_range, GFP_KERNEL);
> + op->prefetch_range.ranges_count++;
> + vops->flags |= XE_VMA_OPS_HAS_SVM_PREFETCH;
> +
> + if (range_end > xe_svm_range_end(svm_range) &&
> + xe_svm_range_end(svm_range) < xe_vma_end(vma)) {
> + i++;
> + addr = xe_svm_range_end(svm_range);
> + goto alloc_next_range;
> + }
> + }
> print_op(vm->xe, __op);
> }
>
> return ops;
> +
> +unwind_prefetch_ops:
> + clean_svm_prefetch_in_gpuva_ops(ops);
> + drm_gpuva_ops_free(&vm->gpuvm, ops);
> + return ERR_PTR(err);
> }
> +
> ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
>
> static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> @@ -2645,8 +2731,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> return err;
> }
>
> - if (!xe_vma_is_cpu_addr_mirror(vma))
> + if (xe_vma_is_cpu_addr_mirror(vma))
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask,
> + op->prefetch_range.ranges_count);
> + else
> xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
> +
> break;
> default:
> drm_warn(&vm->xe->drm, "NOT POSSIBLE");
> @@ -2772,6 +2862,58 @@ static int check_ufence(struct xe_vma *vma)
> return 0;
> }
>
> +static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
> + struct xe_vma_op *op)
> +{
> + int err = 0;
> +
> + if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
if (op->base.op != DRM_GPUVA_OP_PREFETCH || !xe_vma_is_cpu_addr_mirror(vma))
return 0;
Will help with spacing. Or do this check at the caller.
> + struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
> + struct drm_gpusvm_ctx ctx = {
> + .read_only = xe_vma_read_only(vma),
> + .devmem_possible = IS_DGFX(vm->xe) &&
> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
> + .check_pages_threshold = IS_DGFX(vm->xe) &&
> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
> + SZ_64K : 0,
> + };
> + struct xe_svm_range *svm_range;
> + struct xe_tile *tile;
> + u32 region;
> + int i;
> +
> + if (!xe_vma_is_cpu_addr_mirror(vma))
> + return 0;
> +
> + region = op->prefetch_range.region;
> +
> + /* TODO: Threading the migration */
> + for (i = 0; i < op->prefetch_range.ranges_count; i++) {
Again xa_for_each...
> + svm_range = xa_load(&op->prefetch_range.range, i);
> + if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
> + tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
> + err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
> + if (err) {
> + drm_err(&vm->xe->drm, "VRAM allocation failed, can be retried from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
Not drm_err here, drm_dbg as user space can easily retrigger this.
> + return -ENODATA;
So this gets squashed into return 0, which I think is correct for now.
Same explanation as above wrt error codes.
> + }
> + }
> +
> + err = xe_svm_range_get_pages(vm, svm_range, &ctx);
> + if (err) {
> + if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM)
> + err = -ENODATA;
> +
> + drm_err(&vm->xe->drm, "Get pages failed, asid=%u, gpusvm=%p, errno=%pe\n",
> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
Same as above.
We are going to want IGTs to test these error paths, btw: issue a prefetch,
then have another thread immediately touch some of the memory to abort
the prefetch.
> + return err;
> + }
> + }
> + }
> + return err;
> +}
> +
> static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> struct xe_vma_op *op)
> {
> @@ -2809,7 +2951,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> case DRM_GPUVA_OP_PREFETCH:
> {
> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
> - u32 region = op->prefetch.region;
> + u32 region;
> +
> + if (xe_vma_is_cpu_addr_mirror(vma))
> + region = op->prefetch_range.region;
> + else
> + region = op->prefetch.region;
>
> xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
>
> @@ -2828,6 +2975,23 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> return err;
> }
>
> +static int xe_vma_ops_execute_ready(struct xe_vm *vm, struct xe_vma_ops *vops)
> +{
Let's make these names consistent.
How about...
s/xe_vma_ops_execute_ready/vm_bind_ioctl_ops_prefetch_ranges
s/prefetch_ranges_lock_and_prep/prefetch_ranges
> + struct xe_vma_op *op;
> + int err;
> +
> + if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
> + return 0;
> +
> + list_for_each_entry(op, &vops->list, link) {
> + err = prefetch_ranges_lock_and_prep(vm, op);
> + if (err)
> + return err;
> + }
> +
> + return 0;
> +}
> +
> static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
> struct xe_vm *vm,
> struct xe_vma_ops *vops)
> @@ -2850,7 +3014,6 @@ static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
> vm->xe->vm_inject_error_position == FORCE_OP_ERROR_LOCK)
> return -ENOSPC;
> #endif
> -
Looks unrelated.
Matt
> return 0;
> }
>
> @@ -3492,7 +3655,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
> u16 pat_index = bind_ops[i].pat_index;
>
> - ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
> + ops[i] = vm_bind_ioctl_ops_create(vm, &vops, bos[i], obj_offset,
> addr, range, op, flags,
> prefetch_region, pat_index);
> if (IS_ERR(ops[i])) {
> @@ -3525,6 +3688,10 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> if (err)
> goto unwind_ops;
>
> + err = xe_vma_ops_execute_ready(vm, &vops);
> + if (err)
> + goto unwind_ops;
> +
> fence = vm_bind_ioctl_ops_execute(vm, &vops);
> if (IS_ERR(fence))
> err = PTR_ERR(fence);
> @@ -3594,7 +3761,7 @@ struct dma_fence *xe_vm_bind_kernel_bo(struct xe_vm *vm, struct xe_bo *bo,
>
> xe_vma_ops_init(&vops, vm, q, NULL, 0);
>
> - ops = vm_bind_ioctl_ops_create(vm, bo, 0, addr, bo->size,
> + ops = vm_bind_ioctl_ops_create(vm, &vops, bo, 0, addr, bo->size,
> DRM_XE_VM_BIND_OP_MAP, 0, 0,
> vm->xe->pat.idx[cache_lvl]);
> if (IS_ERR(ops)) {
> --
> 2.34.1
>
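The alloc_next_range loop above walks the prefetch region one SVM range at a time, advancing the start address to the end of each range it finds or inserts. A simplified userspace sketch of that walk, with the driver's dynamic range lookup replaced by a hypothetical fixed CHUNK_SIZE granularity (the real driver picks range boundaries dynamically):

```c
#include <stdint.h>

#define CHUNK_SIZE 0x10000ULL /* stand-in for driver-chosen range size */

/* Hypothetical stand-in for xe_svm_range_find_or_insert(): returns the
 * end of the range covering 'addr', aligned up to CHUNK_SIZE. */
static uint64_t range_end_for(uint64_t addr)
{
	return (addr / CHUNK_SIZE + 1) * CHUNK_SIZE;
}

/* Count the ranges needed to cover [addr, addr + size), mirroring the
 * alloc_next_range loop: keep inserting ranges and advancing 'addr' to
 * the end of the last one until the request (or the VMA) is covered. */
static int count_prefetch_ranges(uint64_t addr, uint64_t size,
				 uint64_t vma_end)
{
	uint64_t req_end = addr + size;
	int count = 0;

	for (;;) {
		uint64_t r_end = range_end_for(addr);

		count++; /* xa_store(&op->prefetch_range.range, ...) */
		if (req_end > r_end && r_end < vma_end) {
			addr = r_end; /* goto alloc_next_range */
			continue;
		}
		break;
	}
	return count;
}
```

For example, a 0x30000-byte request starting at 0 splits into three chunk-sized ranges, while a request already contained in one chunk yields a single range.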
* Re: [PATCH v2 15/32] drm/xe/vm: Add debug prints for SVM range prefetch
2025-04-07 10:17 ` [PATCH v2 15/32] drm/xe/vm: Add debug prints for SVM range prefetch Himal Prasad Ghimiray
@ 2025-04-17 4:56 ` Matthew Brost
0 siblings, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-04-17 4:56 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:02PM +0530, Himal Prasad Ghimiray wrote:
> Introduce debug logs for the prefetch operation of SVM ranges.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index ffd7ad664921..fd98e74485f4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2398,6 +2398,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
> xa_store(&op->prefetch_range.range, i, svm_range, GFP_KERNEL);
> op->prefetch_range.ranges_count++;
> vops->flags |= XE_VMA_OPS_HAS_SVM_PREFETCH;
> + xe_svm_range_debug(svm_range, "PREFETCH - RANGE CREATED");
>
> if (range_end > xe_svm_range_end(svm_range) &&
> xe_svm_range_end(svm_range) < xe_vma_end(vma)) {
> @@ -2898,6 +2899,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
> vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> return -ENODATA;
> }
> + xe_svm_range_debug(svm_range, "PREFETCH - RANGE MIGRATED TO VRAM");
> }
>
> err = xe_svm_range_get_pages(vm, svm_range, &ctx);
> @@ -2909,6 +2911,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
> vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> return err;
> }
> + xe_svm_range_debug(svm_range, "PREFETCH - RANGE GET PAGES DONE");
> }
> }
> return err;
> --
> 2.34.1
>
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-07 10:17 ` [PATCH v2 17/32] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
@ 2025-04-17 18:19 ` Souza, Jose
2025-04-17 18:24 ` Souza, Jose
2025-04-22 15:40 ` Matthew Brost
2025-05-02 14:00 ` Thomas Hellström
1 sibling, 2 replies; 120+ messages in thread
From: Souza, Jose @ 2025-04-17 18:19 UTC (permalink / raw)
To: intel-xe@lists.freedesktop.org, Ghimiray, Himal Prasad
Cc: Brost, Matthew, thomas.hellstrom@linux.intel.com
On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> This commit introduces a new madvise interface to support
> driver-specific ioctl operations. The madvise interface allows for more
> efficient memory management by providing hints to the driver about the
> expected memory usage and pte update policy for gpuvma.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 97 insertions(+)
>
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 9c08738c3b91..aaf515df3a83 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -81,6 +81,7 @@ extern "C" {
> * - &DRM_IOCTL_XE_EXEC
> * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> * - &DRM_IOCTL_XE_OBSERVATION
> + * - &DRM_IOCTL_XE_MADVISE
> */
>
> /*
> @@ -102,6 +103,7 @@ extern "C" {
> #define DRM_XE_EXEC 0x09
> #define DRM_XE_WAIT_USER_FENCE 0x0a
> #define DRM_XE_OBSERVATION 0x0b
> +#define DRM_XE_MADVISE 0x0c
>
> /* Must be kept compact -- no holes */
>
> @@ -117,6 +119,7 @@ extern "C" {
> #define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> +#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
>
> /**
> * DOC: Xe IOCTL Extensions
> @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> __u64 sampling_rates[];
> };
>
> +struct drm_xe_madvise_ops {
> + /** @start: start of the virtual address range */
> + __u64 start;
> +
> + /** @size: size of the virtual address range */
> + __u64 range;
> +
> +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
> +#define DRM_XE_VMA_ATTR_ATOMIC 1
> +#define DRM_XE_VMA_ATTR_PAT 2
In my opinion we should drop DRM_XE_VMA_ATTR_PAT, as we can already change the PAT index by unbinding and rebinding the VMA with the wanted PAT index.
Similarly for DRM_XE_VMA_ATTR_ATOMIC and DRM_XE_VMA_ATTR_PREFERRED_LOC: we could have those properties in the vm_bind uAPI, and if they needed to change we could
unbind and rebind the VMA with the updated attributes.
> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
Mesa can make use of DRM_XE_VMA_ATTR_PURGEABLE_STATE but not the other ones, so you will need other UMDs to land those.
> + /** @type: type of attribute */
> + __u32 type;
> +
> + /** @pad: MBZ */
> + __u32 pad;
> +
> + union {
> + struct {
> +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> +#define DRM_XE_VMA_ATOMIC_CPU 3
> + /** @val: value of atomic operation*/
> + __u32 val;
> +
> + /** @reserved: Reserved */
> + __u32 reserved;
> + } atomic;
> +
> + struct {
> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
> + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> + __u32 val;
> +
> + /** @reserved: Reserved */
> + __u32 reserved;
> + } purge_state_val;
> +
> + struct {
> + /** @pat_index */
> + __u32 val;
> +
> + /** @reserved: Reserved */
> + __u32 reserved;
> + } pat_index;
> +
> + /** @preferred_mem_loc: preferred memory location */
> + struct {
> + __u32 devmem_fd;
> +
> +#define MIGRATE_ALL_PAGES 0
> +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> + __u32 migration_policy;
> + } preferred_mem_loc;
> + };
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +};
> +
> +/**
> + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> + *
> + * Set memory attributes to a virtual address range
> + */
> +struct drm_xe_madvise {
> + /** @extensions: Pointer to the first extension struct, if any */
> + __u64 extensions;
> +
> + /** @vm_id: vm_id of the virtual range */
> + __u32 vm_id;
> +
> + /** @num_ops: number of madvises in ioctl */
> + __u32 num_ops;
> +
> + union {
> + /** @ops: used if num_ops == 1 */
> + struct drm_xe_madvise_ops ops;
> +
> + /**
> + * @vector_of_ops: userptr to array of struct
> + * drm_xe_vm_madvise_op if num_ops > 1
> + */
> + __u64 vector_of_ops;
> + };
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +
> +};
> +
> #if defined(__cplusplus)
> }
> #endif
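For illustration, a hypothetical userspace sketch of how a UMD might populate a single-op request for the proposed interface. The struct and defines below are local mirrors of the patch (the uAPI is still under review and may change), not the final xe_drm.h; the actual ioctl call is omitted:

```c
#include <stdint.h>
#include <string.h>

/* Local mirrors of the *proposed* uAPI values, subject to review. */
#define DRM_XE_VMA_ATTR_ATOMIC   1
#define DRM_XE_VMA_ATOMIC_DEVICE 1

struct xe_madvise_op_sketch {
	uint64_t start;
	uint64_t range;
	uint32_t type;
	uint32_t pad;
	union {
		struct { uint32_t val; uint32_t reserved; } atomic;
		struct {
			uint32_t devmem_fd;
			uint32_t migration_policy;
		} preferred_mem_loc;
	};
	uint64_t reserved[2];
};

/* Fill a single-op request marking [start, start + size) as
 * device-atomic, as a UMD might before DRM_IOCTL_XE_MADVISE. */
static void make_atomic_advice(struct xe_madvise_op_sketch *op,
			       uint64_t start, uint64_t size)
{
	memset(op, 0, sizeof(*op)); /* MBZ pad and reserved fields */
	op->start = start;
	op->range = size;
	op->type = DRM_XE_VMA_ATTR_ATOMIC;
	op->atomic.val = DRM_XE_VMA_ATOMIC_DEVICE;
}
```

With num_ops == 1 the op would be passed inline in struct drm_xe_madvise; with more than one, a userptr to an array of these ops would be supplied instead.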
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-17 18:19 ` Souza, Jose
@ 2025-04-17 18:24 ` Souza, Jose
2025-04-22 15:34 ` Matthew Brost
2025-04-22 15:40 ` Matthew Brost
1 sibling, 1 reply; 120+ messages in thread
From: Souza, Jose @ 2025-04-17 18:24 UTC (permalink / raw)
To: intel-xe@lists.freedesktop.org, Ghimiray, Himal Prasad
Cc: Brost, Matthew, thomas.hellstrom@linux.intel.com
On Thu, 2025-04-17 at 11:19 -0700, José Roberto de Souza wrote:
> On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> > This commit introduces a new madvise interface to support
> > driver-specific ioctl operations. The madvise interface allows for more
> > efficient memory management by providing hints to the driver about the
> > expected memory usage and pte update policy for gpuvma.
> >
> > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > ---
> > include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 97 insertions(+)
> >
> > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > index 9c08738c3b91..aaf515df3a83 100644
> > --- a/include/uapi/drm/xe_drm.h
> > +++ b/include/uapi/drm/xe_drm.h
> > @@ -81,6 +81,7 @@ extern "C" {
> > * - &DRM_IOCTL_XE_EXEC
> > * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> > * - &DRM_IOCTL_XE_OBSERVATION
> > + * - &DRM_IOCTL_XE_MADVISE
> > */
> >
> > /*
> > @@ -102,6 +103,7 @@ extern "C" {
> > #define DRM_XE_EXEC 0x09
> > #define DRM_XE_WAIT_USER_FENCE 0x0a
> > #define DRM_XE_OBSERVATION 0x0b
> > +#define DRM_XE_MADVISE 0x0c
> >
> > /* Must be kept compact -- no holes */
> >
> > @@ -117,6 +119,7 @@ extern "C" {
> > #define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> > #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> > #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> > +#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> >
> > /**
> > * DOC: Xe IOCTL Extensions
> > @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> > __u64 sampling_rates[];
> > };
> >
> > +struct drm_xe_madvise_ops {
> > + /** @start: start of the virtual address range */
> > + __u64 start;
> > +
> > + /** @size: size of the virtual address range */
> > + __u64 range;
> > +
> > +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
> > +#define DRM_XE_VMA_ATTR_ATOMIC 1
> > +#define DRM_XE_VMA_ATTR_PAT 2
>
> In my opinion we should drop DRM_XE_VMA_ATTR_PAT, as we can already change the PAT index by unbinding and rebinding the VMA with the wanted PAT index.
> Similarly for DRM_XE_VMA_ATTR_ATOMIC and DRM_XE_VMA_ATTR_PREFERRED_LOC: we could have those properties in the vm_bind uAPI, and if they needed to change we could
> unbind and rebind the VMA with the updated attributes.
Ah, and if it is chosen to move forward with those, I think it would be nice to have drm_xe_sync so we could synchronize the update of these attributes with
previous submissions and future ones.
>
> > +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
>
> Mesa can make use of DRM_XE_VMA_ATTR_PURGEABLE_STATE but not the other ones, so you will need other UMDs to land those.
>
> > + /** @type: type of attribute */
> > + __u32 type;
> > +
> > + /** @pad: MBZ */
> > + __u32 pad;
> > +
> > + union {
> > + struct {
> > +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> > +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> > +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> > +#define DRM_XE_VMA_ATOMIC_CPU 3
> > + /** @val: value of atomic operation*/
> > + __u32 val;
> > +
> > + /** @reserved: Reserved */
> > + __u32 reserved;
> > + } atomic;
> > +
> > + struct {
> > +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> > +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> > +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
> > + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> > + __u32 val;
> > +
> > + /** @reserved: Reserved */
> > + __u32 reserved;
> > + } purge_state_val;
> > +
> > + struct {
> > + /** @pat_index */
> > + __u32 val;
> > +
> > + /** @reserved: Reserved */
> > + __u32 reserved;
> > + } pat_index;
> > +
> > + /** @preferred_mem_loc: preferred memory location */
> > + struct {
> > + __u32 devmem_fd;
> > +
> > +#define MIGRATE_ALL_PAGES 0
> > +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> > + __u32 migration_policy;
> > + } preferred_mem_loc;
> > + };
> > +
> > + /** @reserved: Reserved */
> > + __u64 reserved[2];
> > +};
> > +
> > +/**
> > + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> > + *
> > + * Set memory attributes to a virtual address range
> > + */
> > +struct drm_xe_madvise {
> > + /** @extensions: Pointer to the first extension struct, if any */
> > + __u64 extensions;
> > +
> > + /** @vm_id: vm_id of the virtual range */
> > + __u32 vm_id;
> > +
> > + /** @num_ops: number of madvises in ioctl */
> > + __u32 num_ops;
> > +
> > + union {
> > + /** @ops: used if num_ops == 1 */
> > + struct drm_xe_madvise_ops ops;
> > +
> > + /**
> > + * @vector_of_ops: userptr to array of struct
> > + * drm_xe_vm_madvise_op if num_ops > 1
> > + */
> > + __u64 vector_of_ops;
> > + };
> > +
> > + /** @reserved: Reserved */
> > + __u64 reserved[2];
> > +
> > +};
> > +
> > #if defined(__cplusplus)
> > }
> > #endif
* Re: [PATCH v2 02/32] drm/xe: Make xe_svm_alloc_vram public
2025-04-17 2:50 ` Matthew Brost
@ 2025-04-21 4:06 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-04-21 4:06 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 17-04-2025 08:20, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:46:49PM +0530, Himal Prasad Ghimiray wrote:
>> This function will be used in prefetch too, hence make it public.
>>
>> v2:
>> - Add kernel-doc (Matthew Brost)
>> - Rebase
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_svm.c | 23 +++++++++++++----------
>> drivers/gpu/drm/xe/xe_svm.h | 23 +++++++++++++++++++++++
>> 2 files changed, 36 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index c7424c824a14..de19ad056287 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -661,9 +661,19 @@ static struct xe_vram_region *tile_to_vr(struct xe_tile *tile)
>> return &tile->mem.vram;
>> }
>>
>> -static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>> - struct xe_svm_range *range,
>> - const struct drm_gpusvm_ctx *ctx)
>> +/**
>> + * xe_svm_alloc_vram()- Allocate device memory pages for range,
>> + * migrating existing data.
>> + * @vm: The VM.
>> + * @tile: tile to allocate vram from
>> + * @range: SVM range
>> + * @ctx: DRM GPU SVM context
>> + *
>> + * Return: 0 on success, error code on failure.
>> + */
>> +int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>> + struct xe_svm_range *range,
>> + const struct drm_gpusvm_ctx *ctx)
>> {
>> struct mm_struct *mm = vm->svm.gpusvm.mm;
>> struct xe_vram_region *vr = tile_to_vr(tile);
>> @@ -717,13 +727,6 @@ static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>>
>> return err;
>> }
>> -#else
>> -static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>> - struct xe_svm_range *range,
>> - const struct drm_gpusvm_ctx *ctx)
>> -{
>> - return -EOPNOTSUPP;
>> -}
>> #endif
>>
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
>> index 3d441eb1f7ea..d8772f841ab7 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.h
>> +++ b/drivers/gpu/drm/xe/xe_svm.h
>> @@ -75,6 +75,20 @@ int xe_svm_bo_evict(struct xe_bo *bo);
>>
>> void xe_svm_range_debug(struct xe_svm_range *range, const char *operation);
>>
>> +#if IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
>> +int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>> + struct xe_svm_range *range,
>> + const struct drm_gpusvm_ctx *ctx);
>> +#else
>> +static inline
>> +int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>> + struct xe_svm_range *range,
>> + const struct drm_gpusvm_ctx *ctx)
>> +{
>> + return -EOPNOTSUPP;
>> +}
>> +#endif
>> +
>> /**
>> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
>> * @range: SVM range
>> @@ -100,6 +114,7 @@ static inline bool xe_svm_range_has_dma_mapping(struct xe_svm_range *range)
>> #include <linux/interval_tree.h>
>>
>> struct drm_pagemap_device_addr;
>> +struct drm_gpusvm_ctx;
>> struct xe_bo;
>> struct xe_gt;
>> struct xe_vm;
>> @@ -170,6 +185,14 @@ void xe_svm_range_debug(struct xe_svm_range *range, const char *operation)
>> {
>> }
>>
>> +static inline
>> +int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>> + struct xe_svm_range *range,
>> + const struct drm_gpusvm_ctx *ctx)
>> +{
>> + return -EOPNOTSUPP;
>> +}
>> +
>
> It is a little goofy to have 2 versions of xe_svm_alloc_vram stubbed
> out in a single file. How about...
>
> #if IS_ENABLED(CONFIG_DRM_GPUSVM) && IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
> prototype
> #else
> stub
> #endif
>
> Or another option is in xe_svm.c we stub out xe_devm_add behind
> CONFIG_DRM_XE_DEVMEM_MIRROR so maybe stick xe_svm_alloc_vram there?
I would go ahead with this.
>
> Or lastly, I don't think anything in xe_svm_alloc_vram actually depends
> on CONFIG_DRM_XE_DEVMEM_MIRROR either as static version is not hidden
> behind CONFIG_DRM_XE_DEVMEM_MIRROR.
>
> Matt
>
>> #define xe_svm_assert_in_notifier(...) do {} while (0)
>> #define xe_svm_range_has_dma_mapping(...) false
>>
>> --
>> 2.34.1
>>
* Re: [PATCH v2 06/32] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value
2025-04-17 0:10 ` Matthew Brost
@ 2025-04-21 4:09 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-04-21 4:09 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 17-04-2025 05:40, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:46:53PM +0530, Himal Prasad Ghimiray wrote:
>> Prefetch for SVM ranges can have more than one operation to increment,
>> hence modify the function to accept an increment value as input.
>>
>> v2:
>> - Call xe_vma_ops_incr_pt_update_ops only once for REMAP (Matthew Brost)
>> - Add check for 0 ops
>>
>> Suggested-by: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_vm.c | 28 +++++++++++++++++-----------
>> 1 file changed, 17 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 0c69ef6b5ec5..4d215c55a778 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -806,13 +806,16 @@ static void xe_vma_ops_fini(struct xe_vma_ops *vops)
>> kfree(vops->pt_update_ops[i].ops);
>> }
>>
>> -static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask)
>> +static void xe_vma_ops_incr_pt_update_ops(struct xe_vma_ops *vops, u8 tile_mask, u8 inc_val)
>
> s/u8 inc_val/int inc_val
>
> or maybe u32?
>
> Just debugged a problem which the compute UMD + prefetch where the
> inc_val was 256, thus 0, so the binding step was skipped for prefetch.
Thanks for pointing this out. I tested with prefetch only up to 256 MiB,
hence missed it.
>
>
>> {
>> int i;
>>
>> + if(!inc_val)
>> + return;
>> +
>> for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
>> if (BIT(i) & tile_mask)
>> - ++vops->pt_update_ops[i].num_ops;
>> + vops->pt_update_ops[i].num_ops += inc_val;
>> }
>>
>> static void xe_vm_populate_rebind(struct xe_vma_op *op, struct xe_vma *vma,
>> @@ -842,7 +845,7 @@ static int xe_vm_ops_add_rebind(struct xe_vma_ops *vops, struct xe_vma *vma,
>>
>> xe_vm_populate_rebind(op, vma, tile_mask);
>> list_add_tail(&op->link, &vops->list);
>> - xe_vma_ops_incr_pt_update_ops(vops, tile_mask);
>> + xe_vma_ops_incr_pt_update_ops(vops, tile_mask, 1);
>>
>> return 0;
>> }
>> @@ -977,7 +980,7 @@ xe_vm_ops_add_range_rebind(struct xe_vma_ops *vops,
>>
>> xe_vm_populate_range_rebind(op, vma, range, tile_mask);
>> list_add_tail(&op->link, &vops->list);
>> - xe_vma_ops_incr_pt_update_ops(vops, tile_mask);
>> + xe_vma_ops_incr_pt_update_ops(vops, tile_mask, 1);
>>
>> return 0;
>> }
>> @@ -1062,7 +1065,7 @@ xe_vm_ops_add_range_unbind(struct xe_vma_ops *vops,
>>
>> xe_vm_populate_range_unbind(op, range);
>> list_add_tail(&op->link, &vops->list);
>> - xe_vma_ops_incr_pt_update_ops(vops, range->tile_present);
>> + xe_vma_ops_incr_pt_update_ops(vops, range->tile_present, 1);
>>
>> return 0;
>> }
>> @@ -2493,7 +2496,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>> !op->map.is_cpu_addr_mirror) ||
>> op->map.invalidate_on_bind)
>> xe_vma_ops_incr_pt_update_ops(vops,
>> - op->tile_mask);
>> + op->tile_mask, 1);
>> break;
>> }
>> case DRM_GPUVA_OP_REMAP:
>> @@ -2502,6 +2505,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>> gpuva_to_vma(op->base.remap.unmap->va);
>> bool skip = xe_vma_is_cpu_addr_mirror(old);
>> u64 start = xe_vma_start(old), end = xe_vma_end(old);
>> + u8 num_remap_ops = 0;
>
> u8 actually works here as the max value is 3 but I'd change this to a
> u32 or int.
>
Sure, will use int.
> Otherwise LGTM.
>
> Matt
>
>>
>> if (op->base.remap.prev)
>> start = op->base.remap.prev->va.addr +
>> @@ -2554,7 +2558,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>> (ULL)op->remap.start,
>> (ULL)op->remap.range);
>> } else {
>> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
>> + num_remap_ops++;
>> }
>> }
>>
>> @@ -2583,11 +2587,13 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>> (ULL)op->remap.start,
>> (ULL)op->remap.range);
>> } else {
>> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
>> + num_remap_ops++;
>> }
>> }
>> if (!skip)
>> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
>> + num_remap_ops++;
>> +
>> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, num_remap_ops);
>> break;
>> }
>> case DRM_GPUVA_OP_UNMAP:
>> @@ -2599,7 +2605,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>> return -EBUSY;
>>
>> if (!xe_vma_is_cpu_addr_mirror(vma))
>> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
>> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
>> break;
>> case DRM_GPUVA_OP_PREFETCH:
>> vma = gpuva_to_vma(op->base.prefetch.va);
>> @@ -2611,7 +2617,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>> }
>>
>> if (!xe_vma_is_cpu_addr_mirror(vma))
>> - xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
>> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
>> break;
>> default:
>> drm_warn(&vm->xe->drm, "NOT POSSIBLE");
>> --
>> 2.34.1
>>
* Re: [PATCH v2 10/32] drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm
2025-04-17 2:57 ` Matthew Brost
@ 2025-04-21 4:30 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-04-21 4:30 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 17-04-2025 08:27, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:46:57PM +0530, Himal Prasad Ghimiray wrote:
>> Define xe_svm_range_find_or_insert function wrapping
>> drm_gpusvm_range_find_or_insert for reusing in prefetch.
>>
>> Define xe_svm_range_get_pages function wrapping
>> drm_gpusvm_range_get_pages for reusing in prefetch.
>>
>> -v2 pass pagefault defined drm_gpu_svm context as parameter
>> in xe_svm_range_find_or_insert(Matthew Brost)
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_svm.c | 67 ++++++++++++++++++++++++++++++-------
>> drivers/gpu/drm/xe/xe_svm.h | 20 +++++++++++
>> 2 files changed, 75 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index 6648b4da0bca..8cd35553a927 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -735,7 +735,6 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
>> };
>> struct xe_svm_range *range;
>> - struct drm_gpusvm_range *r;
>> struct drm_exec exec;
>> struct dma_fence *fence;
>> struct xe_tile *tile = gt_to_tile(gt);
>> @@ -753,13 +752,11 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> if (err)
>> return err;
>>
>> - r = drm_gpusvm_range_find_or_insert(&vm->svm.gpusvm, fault_addr,
>> - xe_vma_start(vma), xe_vma_end(vma),
>> - &ctx);
>> - if (IS_ERR(r))
>> - return PTR_ERR(r);
>> + range = xe_svm_range_find_or_insert(vm, fault_addr, vma, &ctx);
>> +
>> + if (IS_ERR(range))
>> + return PTR_ERR(range);
>>
>> - range = to_xe_range(r);
>> if (xe_svm_range_is_valid(range, tile))
>> return 0;
>>
>> @@ -781,13 +778,9 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> }
>>
>> range_debug(range, "GET PAGES");
>> - err = drm_gpusvm_range_get_pages(&vm->svm.gpusvm, r, &ctx);
>> + err = xe_svm_range_get_pages(vm, range, &ctx);
>> /* Corner where CPU mappings have changed */
>> if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
>> - if (err == -EOPNOTSUPP) {
>> - range_debug(range, "PAGE FAULT - EVICT PAGES");
>> - drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);
>> - }
>> drm_dbg(&vm->xe->drm,
>> "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
>> vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>> @@ -866,6 +859,56 @@ int xe_svm_bo_evict(struct xe_bo *bo)
>> return drm_gpusvm_evict_to_ram(&bo->devmem_allocation);
>> }
>>
>> +/**
>> + * xe_svm_range_find_or_insert- Find or insert GPU SVM range
>> + * @vm: xe_vm pointer
>> + * @addr: address for which range needs to be found/inserted
>> + * @vma: Pointer to struct xe_vma which mirrors CPU
>> + * @ctx: GPU SVM context
>> + *
>> + * This function finds or inserts a newly allocated a SVM range based on the
>> + * address.
>> + *
>> + * Return: Pointer to the SVM range on success, ERR_PTR() on failure.
>> + */
>> +struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
>> + struct xe_vma *vma, struct drm_gpusvm_ctx *ctx)
>> +{
>> + struct drm_gpusvm_range *r;
>> +
>> + r = drm_gpusvm_range_find_or_insert(&vm->svm.gpusvm, max(addr, xe_vma_start(vma)),
>> + xe_vma_start(vma), xe_vma_end(vma), ctx);
>> + if (IS_ERR(r))
>> + return ERR_PTR(PTR_ERR(r));
>> +
>> + return to_xe_range(r);
>> +}
>> +
>> +/**
>> + * xe_svm_range_get_pages() - Get pages for a SVM range
>> + * @vm: Pointer to the struct xe_vm
>> + * @range: Pointer to the xe SVM range structure
>> + * @ctx: GPU SVM context
>> + *
>> + * This function gets pages for a SVM range and ensures they are mapped for
>> + * DMA access. In case of failure with -EOPNOTSUPP, it evicts the range.
>> + *
>> + * Return: 0 on success, negative error code on failure.
>> + */
>> +int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
>> + struct drm_gpusvm_ctx *ctx)
>> +{
>> + int err = 0;
>> +
>> + err = drm_gpusvm_range_get_pages(&vm->svm.gpusvm, &range->base, ctx);
>> + if (err == -EOPNOTSUPP) {
>> + range_debug(range, "PAGE FAULT - EVICT PAGES");
>> + drm_gpusvm_range_evict(&vm->svm.gpusvm, &range->base);
>> + }
>> +
>> + return err;
>> +}
>> +
>> #if IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
>>
>> static struct drm_pagemap_device_addr
>> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
>> index 1ec90d9bc749..9c4c3aeacc6c 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.h
>> +++ b/drivers/gpu/drm/xe/xe_svm.h
>> @@ -89,6 +89,12 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>> }
>> #endif
>>
>> +struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
>> + struct xe_vma *vma, struct drm_gpusvm_ctx *ctx);
>
> One nit, check on the alignment here, checkpatch should complain if this
> is off, hard to tell if this is wrong from the patch.
Checkpatch confirms alignment is ok here.
>
> But patch LGTM:
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Thanks for the review.
>
>> +
>> +int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
>> + struct drm_gpusvm_ctx *ctx);
>> +
>> /**
>> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
>> * @range: SVM range
>> @@ -241,6 +247,20 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>> return -EOPNOTSUPP;
>> }
>>
>> +static inline
>> +struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
>> + struct xe_vma *vma, struct drm_gpusvm_ctx *ctx)
>> +{
>> + return ERR_PTR(-EINVAL);
>> +}
>> +
>> +static inline
>> +int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
>> + struct drm_gpusvm_ctx *ctx)
>> +{
>> + return -EINVAL;
>> +}
>> +
>> static inline struct xe_svm_range *to_xe_range(struct drm_gpusvm_range *r)
>> {
>> return NULL;
>> --
>> 2.34.1
>>
* Re: [PATCH v2 11/32] drm/xe/svm: Add function to determine if range needs VRAM migration
2025-04-17 3:05 ` Matthew Brost
@ 2025-04-21 4:52 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-04-21 4:52 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 17-04-2025 08:35, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:46:58PM +0530, Himal Prasad Ghimiray wrote:
>> xe_svm_range_needs_migrate_to_vram() determines whether range needs
>> migration to vram or not, for pagefault try at least once.
>>
>
> So I pulled this patch into the requested series to minimally enable
> atomics on the existing upstream code here [1]. I suspect we will get my
> series in first, perhaps even in the 6.15 cycle, as I think it could
> reasonably be justified as a fixes series, since the compute UMD doesn't
> really work without it. Just a heads up.
I will build upon [1], so irrespective of which series lands first,
there won't be any impact on the other one.
[1] https://patchwork.freedesktop.org/patch/649010/?series=147846&rev=2
>
> [1] https://patchwork.freedesktop.org/patch/647159/?series=146290&rev=4
>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_svm.c | 49 +++++++++++++++++++++++++++++++++++--
>> drivers/gpu/drm/xe/xe_svm.h | 10 ++++++++
>> 2 files changed, 57 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index 8cd35553a927..f4ae3feaf9d3 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -709,6 +709,51 @@ int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
>> }
>> #endif
>>
>> +static bool supports_4K_migration(struct xe_device *xe)
>> +{
>> + if (xe->info.platform == XE_BATTLEMAGE)
>> + return true;
>> +
>> + return false;
>
> if (xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K)
> return false;
>
> return true;
>
>> +}
>> +
>> +/**
>> + * xe_svm_range_needs_migrate_to_vram() - SVM range needs migrate to VRAM or not
>> + * @range: SVM range for which migration needs to be decided
>> + * @vma: vma which has range
>> + * @region: default placement for range
>> + *
>> + * Return: True for range needing migration and migration is supported else false
>> + */
>> +bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
>> + u32 region)
>> +{
>> + struct xe_vm *vm = range_to_vm(&range->base);
>> + u64 range_size = xe_svm_range_size(range);
>> + bool needs_migrate = false;
>> +
>> + if (!range->base.flags.migrate_devmem)
>> + return false;
>> +
>> + needs_migrate = region;
>> +
>> + if (needs_migrate && !IS_DGFX(vm->xe)) {
>> + drm_warn(&vm->xe->drm, "Platform doesn't support VRAM\n");
>> + return false;
>> + }
>
> I'm not sure if this warrants a drm_warn; I think an assert would be
> better here as it shouldn't happen unless we have an internal
> programming error.
Sure.
>
> Matt
>
>> +
>> + if (needs_migrate && xe_svm_range_in_vram(range)) {
>> + drm_info(&vm->xe->drm, "Range is already in VRAM\n");
>> + return false;
>> + }
>> +
>> + if (needs_migrate && range_size <= SZ_64K && !supports_4K_migration(vm->xe)) {
>> + drm_warn(&vm->xe->drm, "Platform doesn't support SZ_4K range migration\n");
>> + return false;
>> + }
>> +
>> + return needs_migrate;
>> +}
>>
>> /**
>> * xe_svm_handle_pagefault() - SVM handle page fault
>> @@ -763,8 +808,8 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> range_debug(range, "PAGE FAULT");
>>
>> /* XXX: Add migration policy, for now migrate range once */
>> - if (!range->skip_migrate && range->base.flags.migrate_devmem &&
>> - xe_svm_range_size(range) >= SZ_64K) {
>> + if (!range->skip_migrate &&
>> + xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
>
> A little odd to pass IS_DGFX(vm->xe) as the region...
OK.
>
>> range->skip_migrate = true;
>>
>> err = xe_svm_alloc_vram(vm, tile, range, &ctx);
>> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
>> index 9c4c3aeacc6c..d5be8229ca7e 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.h
>> +++ b/drivers/gpu/drm/xe/xe_svm.h
>> @@ -95,6 +95,9 @@ struct xe_svm_range *xe_svm_range_find_or_insert(struct xe_vm *vm, u64 addr,
>> int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
>> struct drm_gpusvm_ctx *ctx);
>>
>> +bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
>> + u32 region);
>> +
>> /**
>> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
>> * @range: SVM range
>> @@ -281,6 +284,13 @@ static inline unsigned long xe_svm_range_size(struct xe_svm_range *range)
>> return 0;
>> }
>>
>> +static inline
>> +bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
>> + u32 region)
>> +{
>> + return false;
>> +}
>> +
>> #define xe_svm_assert_in_notifier(...) do {} while (0)
>> #define xe_svm_range_has_dma_mapping(...) false
>>
>> --
>> 2.34.1
>>
* Re: [PATCH v2 12/32] drm/gpusvm: Introduce vram_only flag for VRAM allocation
2025-04-17 3:07 ` Matthew Brost
@ 2025-04-21 4:55 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-04-21 4:55 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 17-04-2025 08:37, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:46:59PM +0530, Himal Prasad Ghimiray wrote:
>> This commit adds a new flag, vram_only, to the drm_gpusvm structure. The
>> purpose of this flag is to ensure that the get_pages function allocates
>> memory exclusively from the device's VRAM. If the allocation from VRAM
>> fails, the function will return an -EFAULT error.
>>
>> Suggested-by: Matthew Brost <matthew.brost@intel.com>
>
> Again this is included in [1] with you remaining as the author.
>
> Anyways:
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Thanks
>
> [1] https://patchwork.freedesktop.org/series/147846/
>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/drm_gpusvm.c | 5 +++++
>> include/drm/drm_gpusvm.h | 2 ++
>> 2 files changed, 7 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
>> index 2451c816edd5..149ac56eff70 100644
>> --- a/drivers/gpu/drm/drm_gpusvm.c
>> +++ b/drivers/gpu/drm/drm_gpusvm.c
>> @@ -1454,6 +1454,11 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
>> goto err_unmap;
>> }
>>
>> + if (ctx->vram_only) {
>> + err = -EFAULT;
>> + goto err_unmap;
>> + }
>> +
>> addr = dma_map_page(gpusvm->drm->dev,
>> page, 0,
>> PAGE_SIZE << order,
>> diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
>> index df120b4d1f83..8093cc6ab1f4 100644
>> --- a/include/drm/drm_gpusvm.h
>> +++ b/include/drm/drm_gpusvm.h
>> @@ -286,6 +286,7 @@ struct drm_gpusvm {
>> * @in_notifier: entering from a MMU notifier
>> * @read_only: operating on read-only memory
>> * @devmem_possible: possible to use device memory
>> + * @vram_only: Use only device memory
>> *
>> * Context that is DRM GPUSVM is operating in (i.e. user arguments).
>> */
>> @@ -294,6 +295,7 @@ struct drm_gpusvm_ctx {
>> unsigned int in_notifier :1;
>> unsigned int read_only :1;
>> unsigned int devmem_possible :1;
>> + unsigned int vram_only :1;
>> };
>>
>> int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
>> --
>> 2.34.1
>>
* Re: [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
2025-04-17 4:19 ` Matthew Brost
@ 2025-04-21 4:58 ` Ghimiray, Himal Prasad
2025-04-21 6:29 ` Ghimiray, Himal Prasad
2025-04-22 15:27 ` Matthew Brost
0 siblings, 2 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-04-21 4:58 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 17-04-2025 09:49, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:00PM +0530, Himal Prasad Ghimiray wrote:
>> Ranges can be invalidated in between vram allocation and get_pages,
>> ensure the dma mapping is happening from vram only incase of atomic
>> access. Retry 3 times before calling out fault in case of concurrent
>> cpu/gpu access.
>>
>
> Again I pulled this patch into a series which will minimally enable
> atomics per UMD request. See the version of the patch [1] I landed on -
> that is basically my review feedback. I took ownership but left the SoB
> by you, as it is based on this patch. We will need another reviewer
> though, as we are both contributors, but feel free to comment there.
Thanks for the update. I will use the version you modified for the
prefetch series too. It looks good to me.
>
> Matt
>
> [1] https://patchwork.freedesktop.org/patch/649010/?series=147846&rev=2
>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_svm.c | 43 ++++++++++++++++++++++++-------------
>> 1 file changed, 28 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index f4ae3feaf9d3..7ec7ecd7eb1f 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -778,11 +778,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
>> .check_pages_threshold = IS_DGFX(vm->xe) &&
>> IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
>> + .vram_only = 0,
>> };
>> struct xe_svm_range *range;
>> struct drm_exec exec;
>> struct dma_fence *fence;
>> struct xe_tile *tile = gt_to_tile(gt);
>> + int retry_count = 3;
>> ktime_t end = 0;
>> int err;
>>
>> @@ -792,6 +794,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
>>
>> retry:
>> + retry_count--;
>> /* Always process UNMAPs first so view SVM ranges is current */
>> err = xe_svm_garbage_collector(vm);
>> if (err)
>> @@ -807,30 +810,40 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>>
>> range_debug(range, "PAGE FAULT");
>>
>> - /* XXX: Add migration policy, for now migrate range once */
>> - if (!range->skip_migrate &&
>> - xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
>> - range->skip_migrate = true;
>> -
>> + if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
>> err = xe_svm_alloc_vram(vm, tile, range, &ctx);
>> if (err) {
>> - drm_dbg(&vm->xe->drm,
>> - "VRAM allocation failed, falling back to "
>> - "retrying fault, asid=%u, errno=%pe\n",
>> - vm->usm.asid, ERR_PTR(err));
>> - goto retry;
>> + if (retry_count) {
>> + drm_dbg(&vm->xe->drm,
>> + "VRAM allocation failed, falling back to retrying fault, asid=%u, errno=%pe\n",
>> + vm->usm.asid, ERR_PTR(err));
>> + goto retry;
>> + } else {
>> + drm_err(&vm->xe->drm,
>> + "VRAM allocation failed, retry count exceeded, asid=%u, errno=%pe\n",
>> + vm->usm.asid, ERR_PTR(err));
>> + return err;
>> + }
>> }
>> +
>> }
>>
>> + if (atomic)
>> + ctx.vram_only = 1;
>> +
>> range_debug(range, "GET PAGES");
>> err = xe_svm_range_get_pages(vm, range, &ctx);
>> /* Corner where CPU mappings have changed */
>> if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
>> - drm_dbg(&vm->xe->drm,
>> - "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
>> - vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>> - range_debug(range, "PAGE FAULT - RETRY PAGES");
>> - goto retry;
>> + if (retry_count) {
>> + drm_dbg(&vm->xe->drm, "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
>> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>> + range_debug(range, "PAGE FAULT - RETRY PAGES");
>> + goto retry;
>> + } else {
>> + drm_err(&vm->xe->drm, "Get pages failed,, retry count exceeded, asid=%u,, errno=%pe\n",
>> + vm->usm.asid, ERR_PTR(err));
>> + }
>> }
>> if (err) {
>> range_debug(range, "PAGE FAULT - FAIL PAGE COLLECT");
>> --
>> 2.34.1
>>
* Re: [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
2025-04-21 4:58 ` Ghimiray, Himal Prasad
@ 2025-04-21 6:29 ` Ghimiray, Himal Prasad
2025-04-22 15:25 ` Matthew Brost
2025-04-22 15:27 ` Matthew Brost
1 sibling, 1 reply; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-04-21 6:29 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 21-04-2025 10:28, Ghimiray, Himal Prasad wrote:
>
>
> On 17-04-2025 09:49, Matthew Brost wrote:
>> On Mon, Apr 07, 2025 at 03:47:00PM +0530, Himal Prasad Ghimiray wrote:
>>> Ranges can be invalidated in between vram allocation and get_pages,
>>> ensure the dma mapping is happening from vram only incase of atomic
>>> access. Retry 3 times before calling out fault in case of concurrent
>>> cpu/gpu access.
>>>
>>
>> Again I pulled this patch into a series which will minimally enable
>> atomics per UMD request. See the version of the patch [1] I landed on -
>> that is basically my review feedback. I took ownership but left the SoB
>> by you, as it is based on this patch. We will need another reviewer
>> though, as we are both contributors, but feel free to comment there.
>
> Thanks for the update. I will use the version you modified for the
> prefetch series too. It looks good to me.
Actually, I see a retry count check missing for get_pages in
https://patchwork.freedesktop.org/patch/649010/?series=147846&rev=2
which might lead to an infinite loop of get_pages retries from VRAM.
> >
>> Matt
>>
>> [1] https://patchwork.freedesktop.org/patch/649010/?series=147846&rev=2
>>
>>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/xe_svm.c | 43 ++++++++++++++++++++++++-------------
>>> 1 file changed, 28 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>>> index f4ae3feaf9d3..7ec7ecd7eb1f 100644
>>> --- a/drivers/gpu/drm/xe/xe_svm.c
>>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>>> @@ -778,11 +778,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm,
>>> struct xe_vma *vma,
>>> IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
>>> .check_pages_threshold = IS_DGFX(vm->xe) &&
>>> IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
>>> + .vram_only = 0,
>>> };
>>> struct xe_svm_range *range;
>>> struct drm_exec exec;
>>> struct dma_fence *fence;
>>> struct xe_tile *tile = gt_to_tile(gt);
>>> + int retry_count = 3;
>>> ktime_t end = 0;
>>> int err;
>>> @@ -792,6 +794,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm,
>>> struct xe_vma *vma,
>>> xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
>>> retry:
>>> + retry_count--;
>>> /* Always process UNMAPs first so view SVM ranges is current */
>>> err = xe_svm_garbage_collector(vm);
>>> if (err)
>>> @@ -807,30 +810,40 @@ int xe_svm_handle_pagefault(struct xe_vm *vm,
>>> struct xe_vma *vma,
>>> range_debug(range, "PAGE FAULT");
>>> - /* XXX: Add migration policy, for now migrate range once */
>>> - if (!range->skip_migrate &&
>>> - xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm-
>>> >xe))) {
>>> - range->skip_migrate = true;
>>> -
>>> + if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm-
>>> >xe))) {
>>> err = xe_svm_alloc_vram(vm, tile, range, &ctx);
>>> if (err) {
>>> - drm_dbg(&vm->xe->drm,
>>> - "VRAM allocation failed, falling back to "
>>> - "retrying fault, asid=%u, errno=%pe\n",
>>> - vm->usm.asid, ERR_PTR(err));
>>> - goto retry;
>>> + if (retry_count) {
>>> + drm_dbg(&vm->xe->drm,
>>> + "VRAM allocation failed, falling back to
>>> retrying fault, asid=%u, errno=%pe\n",
>>> + vm->usm.asid, ERR_PTR(err));
>>> + goto retry;
>>> + } else {
>>> + drm_err(&vm->xe->drm,
>>> + "VRAM allocation failed, retry count exceeded,
>>> asid=%u, errno=%pe\n",
>>> + vm->usm.asid, ERR_PTR(err));
>>> + return err;
>>> + }
>>> }
>>> +
>>> }
>>> + if (atomic)
>>> + ctx.vram_only = 1;
>>> +
>>> range_debug(range, "GET PAGES");
>>> err = xe_svm_range_get_pages(vm, range, &ctx);
>>> /* Corner where CPU mappings have changed */
>>> if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
>>> - drm_dbg(&vm->xe->drm,
>>> - "Get pages failed, falling back to retrying, asid=%u,
>>> gpusvm=%p, errno=%pe\n",
>>> - vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>>> - range_debug(range, "PAGE FAULT - RETRY PAGES");
>>> - goto retry;
>>> + if (retry_count) {
>>> + drm_dbg(&vm->xe->drm, "Get pages failed, falling back to
>>> retrying, asid=%u, gpusvm=%p, errno=%pe\n",
>>> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>>> + range_debug(range, "PAGE FAULT - RETRY PAGES");
>>> + goto retry;
>>> + } else {
>>> + drm_err(&vm->xe->drm, "Get pages failed,, retry count
>>> exceeded, asid=%u,, errno=%pe\n",
>>> + vm->usm.asid, ERR_PTR(err));
>>> + }
>>> }
>>> if (err) {
>>> range_debug(range, "PAGE FAULT - FAIL PAGE COLLECT");
>>> --
>>> 2.34.1
>>>
>
* Re: [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
2025-04-21 6:29 ` Ghimiray, Himal Prasad
@ 2025-04-22 15:25 ` Matthew Brost
0 siblings, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-04-22 15:25 UTC (permalink / raw)
To: Ghimiray, Himal Prasad; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 21, 2025 at 11:59:30AM +0530, Ghimiray, Himal Prasad wrote:
>
>
> On 21-04-2025 10:28, Ghimiray, Himal Prasad wrote:
> >
> >
> > On 17-04-2025 09:49, Matthew Brost wrote:
> > > On Mon, Apr 07, 2025 at 03:47:00PM +0530, Himal Prasad Ghimiray wrote:
> > > > Ranges can be invalidated between VRAM allocation and get_pages;
> > > > ensure the DMA mapping happens from VRAM only in case of atomic
> > > > access. Retry 3 times before reporting the fault in case of
> > > > concurrent CPU/GPU access.
> > > >
> > >
> > > Again I pulled this patch into a series which will minimally enable
> > > atomics per UMD request. See the version of the patch [1] I landed on -
> > > that is basically my review feedback. I took ownership but left your SoB,
> > > as it is based on this patch. We will need another reviewer though, as
> > > we are both contributors, but feel free to comment there.
> >
> > Thanks for update. Will use the version you modified for the prefetch
> > series too. It looks good to me.
>
> Actually, I see a retry count check missing for get_pages in
> https://patchwork.freedesktop.org/patch/649010/?series=147846&rev=2, which
> might lead to an infinite loop of get_pages retries from VRAM.
>
Ah, yes indeed that is needed too.
We need something like this on a get_pages() failure:
if (vram_only && migrate_retry_count <= 0)
bail
else
retry
Matt
>
> > >
> > > Matt
> > >
> > > [1] https://patchwork.freedesktop.org/patch/649010/?series=147846&rev=2
> > >
> > > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > > ---
> > > > drivers/gpu/drm/xe/xe_svm.c | 43 ++++++++++++++++++++++++-------------
> > > > 1 file changed, 28 insertions(+), 15 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> > > > index f4ae3feaf9d3..7ec7ecd7eb1f 100644
> > > > --- a/drivers/gpu/drm/xe/xe_svm.c
> > > > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > > > @@ -778,11 +778,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> > > > IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
> > > > .check_pages_threshold = IS_DGFX(vm->xe) &&
> > > > IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
> > > > + .vram_only = 0,
> > > > };
> > > > struct xe_svm_range *range;
> > > > struct drm_exec exec;
> > > > struct dma_fence *fence;
> > > > struct xe_tile *tile = gt_to_tile(gt);
> > > > + int retry_count = 3;
> > > > ktime_t end = 0;
> > > > int err;
> > > > @@ -792,6 +794,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> > > > xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
> > > > retry:
> > > > + retry_count--;
> > > > /* Always process UNMAPs first so view SVM ranges is current */
> > > > err = xe_svm_garbage_collector(vm);
> > > > if (err)
> > > > @@ -807,30 +810,40 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> > > > range_debug(range, "PAGE FAULT");
> > > > - /* XXX: Add migration policy, for now migrate range once */
> > > > - if (!range->skip_migrate &&
> > > > - xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
> > > > - range->skip_migrate = true;
> > > > -
> > > > + if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
> > > > err = xe_svm_alloc_vram(vm, tile, range, &ctx);
> > > > if (err) {
> > > > - drm_dbg(&vm->xe->drm,
> > > > - "VRAM allocation failed, falling back to "
> > > > - "retrying fault, asid=%u, errno=%pe\n",
> > > > - vm->usm.asid, ERR_PTR(err));
> > > > - goto retry;
> > > > + if (retry_count) {
> > > > + drm_dbg(&vm->xe->drm,
> > > > + "VRAM allocation failed, falling back to retrying fault, asid=%u, errno=%pe\n",
> > > > + vm->usm.asid, ERR_PTR(err));
> > > > + goto retry;
> > > > + } else {
> > > > + drm_err(&vm->xe->drm,
> > > > + "VRAM allocation failed, retry count exceeded, asid=%u, errno=%pe\n",
> > > > + vm->usm.asid, ERR_PTR(err));
> > > > + return err;
> > > > + }
> > > > }
> > > > +
> > > > }
> > > > + if (atomic)
> > > > + ctx.vram_only = 1;
> > > > +
> > > > range_debug(range, "GET PAGES");
> > > > err = xe_svm_range_get_pages(vm, range, &ctx);
> > > > /* Corner where CPU mappings have changed */
> > > > if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
> > > > - drm_dbg(&vm->xe->drm,
> > > > - "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
> > > > - vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> > > > - range_debug(range, "PAGE FAULT - RETRY PAGES");
> > > > - goto retry;
> > > > + if (retry_count) {
> > > > + drm_dbg(&vm->xe->drm, "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
> > > > + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> > > > + range_debug(range, "PAGE FAULT - RETRY PAGES");
> > > > + goto retry;
> > > > + } else {
> > > > + drm_err(&vm->xe->drm, "Get pages failed,, retry count exceeded, asid=%u,, errno=%pe\n",
> > > > + vm->usm.asid, ERR_PTR(err));
> > > > + }
> > > > }
> > > > if (err) {
> > > > range_debug(range, "PAGE FAULT - FAIL PAGE COLLECT");
> > > > --
> > > > 2.34.1
> > > >
> >
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram
2025-04-21 4:58 ` Ghimiray, Himal Prasad
2025-04-21 6:29 ` Ghimiray, Himal Prasad
@ 2025-04-22 15:27 ` Matthew Brost
1 sibling, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-04-22 15:27 UTC (permalink / raw)
To: Ghimiray, Himal Prasad; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 21, 2025 at 10:28:25AM +0530, Ghimiray, Himal Prasad wrote:
>
>
> On 17-04-2025 09:49, Matthew Brost wrote:
> > On Mon, Apr 07, 2025 at 03:47:00PM +0530, Himal Prasad Ghimiray wrote:
> > > Ranges can be invalidated between VRAM allocation and get_pages;
> > > ensure the DMA mapping happens from VRAM only in case of atomic
> > > access. Retry 3 times before reporting the fault in case of
> > > concurrent CPU/GPU access.
> > >
> >
> > Again I pulled this patch into a series which will minimally enable
> > atomics per UMD request. See the version of the patch [1] I landed on -
> > that is basically my review feedback. I took ownership but left your SoB,
> > as it is based on this patch. We will need another reviewer though, as
> > we are both contributors, but feel free to comment there.
>
> Thanks for update. Will use the version you modified for the prefetch series
> too. It looks good to me.
I'm going to repost today with your feedback addressed. I'd pull these
patches in as the first patches in your series to avoid conflicts if
those patches merge ahead of your series - I believe we are going to try
to land those patches in 6.15 as fixes, as the UMD doesn't really work
without them.
Matt
> >
> > Matt
> >
> > [1] https://patchwork.freedesktop.org/patch/649010/?series=147846&rev=2
> >
> > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_svm.c | 43 ++++++++++++++++++++++++-------------
> > > 1 file changed, 28 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> > > index f4ae3feaf9d3..7ec7ecd7eb1f 100644
> > > --- a/drivers/gpu/drm/xe/xe_svm.c
> > > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > > @@ -778,11 +778,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> > > IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
> > > .check_pages_threshold = IS_DGFX(vm->xe) &&
> > > IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ? SZ_64K : 0,
> > > + .vram_only = 0,
> > > };
> > > struct xe_svm_range *range;
> > > struct drm_exec exec;
> > > struct dma_fence *fence;
> > > struct xe_tile *tile = gt_to_tile(gt);
> > > + int retry_count = 3;
> > > ktime_t end = 0;
> > > int err;
> > > @@ -792,6 +794,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> > > xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
> > > retry:
> > > + retry_count--;
> > > /* Always process UNMAPs first so view SVM ranges is current */
> > > err = xe_svm_garbage_collector(vm);
> > > if (err)
> > > @@ -807,30 +810,40 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> > > range_debug(range, "PAGE FAULT");
> > > - /* XXX: Add migration policy, for now migrate range once */
> > > - if (!range->skip_migrate &&
> > > - xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
> > > - range->skip_migrate = true;
> > > -
> > > + if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
> > > err = xe_svm_alloc_vram(vm, tile, range, &ctx);
> > > if (err) {
> > > - drm_dbg(&vm->xe->drm,
> > > - "VRAM allocation failed, falling back to "
> > > - "retrying fault, asid=%u, errno=%pe\n",
> > > - vm->usm.asid, ERR_PTR(err));
> > > - goto retry;
> > > + if (retry_count) {
> > > + drm_dbg(&vm->xe->drm,
> > > + "VRAM allocation failed, falling back to retrying fault, asid=%u, errno=%pe\n",
> > > + vm->usm.asid, ERR_PTR(err));
> > > + goto retry;
> > > + } else {
> > > + drm_err(&vm->xe->drm,
> > > + "VRAM allocation failed, retry count exceeded, asid=%u, errno=%pe\n",
> > > + vm->usm.asid, ERR_PTR(err));
> > > + return err;
> > > + }
> > > }
> > > +
> > > }
> > > + if (atomic)
> > > + ctx.vram_only = 1;
> > > +
> > > range_debug(range, "GET PAGES");
> > > err = xe_svm_range_get_pages(vm, range, &ctx);
> > > /* Corner where CPU mappings have changed */
> > > if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM) {
> > > - drm_dbg(&vm->xe->drm,
> > > - "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
> > > - vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> > > - range_debug(range, "PAGE FAULT - RETRY PAGES");
> > > - goto retry;
> > > + if (retry_count) {
> > > + drm_dbg(&vm->xe->drm, "Get pages failed, falling back to retrying, asid=%u, gpusvm=%p, errno=%pe\n",
> > > + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> > > + range_debug(range, "PAGE FAULT - RETRY PAGES");
> > > + goto retry;
> > > + } else {
> > > + drm_err(&vm->xe->drm, "Get pages failed,, retry count exceeded, asid=%u,, errno=%pe\n",
> > > + vm->usm.asid, ERR_PTR(err));
> > > + }
> > > }
> > > if (err) {
> > > range_debug(range, "PAGE FAULT - FAIL PAGE COLLECT");
> > > --
> > > 2.34.1
> > >
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-17 18:24 ` Souza, Jose
@ 2025-04-22 15:34 ` Matthew Brost
2025-04-22 15:55 ` Souza, Jose
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-22 15:34 UTC (permalink / raw)
To: Souza, Jose
Cc: intel-xe@lists.freedesktop.org, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On Thu, Apr 17, 2025 at 12:24:17PM -0600, Souza, Jose wrote:
> On Thu, 2025-04-17 at 11:19 -0700, José Roberto de Souza wrote:
> > On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> > > This commit introduces a new madvise interface to support
> > > driver-specific ioctl operations. The madvise interface allows for more
> > > efficient memory management by providing hints to the driver about the
> > > expected memory usage and pte update policy for gpuvma.
> > >
> > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > ---
> > > include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
> > > 1 file changed, 97 insertions(+)
> > >
> > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > index 9c08738c3b91..aaf515df3a83 100644
> > > --- a/include/uapi/drm/xe_drm.h
> > > +++ b/include/uapi/drm/xe_drm.h
> > > @@ -81,6 +81,7 @@ extern "C" {
> > > * - &DRM_IOCTL_XE_EXEC
> > > * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> > > * - &DRM_IOCTL_XE_OBSERVATION
> > > + * - &DRM_IOCTL_XE_MADVISE
> > > */
> > >
> > > /*
> > > @@ -102,6 +103,7 @@ extern "C" {
> > > #define DRM_XE_EXEC 0x09
> > > #define DRM_XE_WAIT_USER_FENCE 0x0a
> > > #define DRM_XE_OBSERVATION 0x0b
> > > +#define DRM_XE_MADVISE 0x0c
> > >
> > > /* Must be kept compact -- no holes */
> > >
> > > @@ -117,6 +119,7 @@ extern "C" {
> > > #define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> > > #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> > > #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> > > +#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> > >
> > > /**
> > > * DOC: Xe IOCTL Extensions
> > > @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> > > __u64 sampling_rates[];
> > > };
> > >
> > > +struct drm_xe_madvise_ops {
> > > + /** @start: start of the virtual address range */
> > > + __u64 start;
> > > +
> > > + /** @size: size of the virtual address range */
> > > + __u64 range;
> > > +
> > > +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
> > > +#define DRM_XE_VMA_ATTR_ATOMIC 1
> > > +#define DRM_XE_VMA_ATTR_PAT 2
> >
> > In my opinion we should drop DRM_XE_VMA_ATTR_PAT as we can already change the PAT index by unbinding and binding the VM with the wanted PAT index.
> > Similar for DRM_XE_VMA_ATTR_ATOMIC and DRM_XE_VMA_ATTR_PREFERRED_LOC, we could have those properties in vm bind uAPI and if needed to change we could
> > unbind and bind the VMA with the updated attributes.
>
> Ah and if it is chosen to move forward with those I think would be nice to have drm_xe_sync so we could synchronize the update of this attributes with
> previous submissions and future ones.
>
I think we should have extra reserved bits for a sync interface so we
can add one if needed, but the way madvise is currently implemented, it is
completely synchronous, as all it does is update the KMD's VM/VMA
bookkeeping; it does not update the GPU page tables (it can invalidate
them though). A subsequent is expected to be issued if you need
immediate page table updates.
Matt
> >
> > > +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
> >
> > Mesa can make use of DRM_XE_VMA_ATTR_PURGEABLE_STATE but not the other ones, so you will need other UMDs to land those.
> >
> > > + /** @type: type of attribute */
> > > + __u32 type;
> > > +
> > > + /** @pad: MBZ */
> > > + __u32 pad;
> > > +
> > > + union {
> > > + struct {
> > > +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> > > +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> > > +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> > > +#define DRM_XE_VMA_ATOMIC_CPU 3
> > > + /** @val: value of atomic operation*/
> > > + __u32 val;
> > > +
> > > + /** @reserved: Reserved */
> > > + __u32 reserved;
> > > + } atomic;
> > > +
> > > + struct {
> > > +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> > > +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> > > +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
> > > + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> > > + __u32 val;
> > > +
> > > + /** @reserved: Reserved */
> > > + __u32 reserved;
> > > + } purge_state_val;
> > > +
> > > + struct {
> > > + /** @pat_index */
> > > + __u32 val;
> > > +
> > > + /** @reserved: Reserved */
> > > + __u32 reserved;
> > > + } pat_index;
> > > +
> > > + /** @preferred_mem_loc: preferred memory location */
> > > + struct {
> > > + __u32 devmem_fd;
> > > +
> > > +#define MIGRATE_ALL_PAGES 0
> > > +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> > > + __u32 migration_policy;
> > > + } preferred_mem_loc;
> > > + };
> > > +
> > > + /** @reserved: Reserved */
> > > + __u64 reserved[2];
> > > +};
> > > +
> > > +/**
> > > + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> > > + *
> > > + * Set memory attributes to a virtual address range
> > > + */
> > > +struct drm_xe_madvise {
> > > + /** @extensions: Pointer to the first extension struct, if any */
> > > + __u64 extensions;
> > > +
> > > + /** @vm_id: vm_id of the virtual range */
> > > + __u32 vm_id;
> > > +
> > > + /** @num_ops: number of madvises in ioctl */
> > > + __u32 num_ops;
> > > +
> > > + union {
> > > + /** @ops: used if num_ops == 1 */
> > > + struct drm_xe_madvise_ops ops;
> > > +
> > > + /**
> > > + * @vector_of_ops: userptr to array of struct
> > > + * drm_xe_vm_madvise_op if num_ops > 1
> > > + */
> > > + __u64 vector_of_ops;
> > > + };
> > > +
> > > + /** @reserved: Reserved */
> > > + __u64 reserved[2];
> > > +
> > > +};
> > > +
> > > #if defined(__cplusplus)
> > > }
> > > #endif
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-17 18:19 ` Souza, Jose
2025-04-17 18:24 ` Souza, Jose
@ 2025-04-22 15:40 ` Matthew Brost
2025-04-22 16:02 ` Souza, Jose
1 sibling, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-22 15:40 UTC (permalink / raw)
To: Souza, Jose
Cc: intel-xe@lists.freedesktop.org, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On Thu, Apr 17, 2025 at 12:19:54PM -0600, Souza, Jose wrote:
> On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> > This commit introduces a new madvise interface to support
> > driver-specific ioctl operations. The madvise interface allows for more
> > efficient memory management by providing hints to the driver about the
> > expected memory usage and pte update policy for gpuvma.
> >
> > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > ---
> > include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 97 insertions(+)
> >
> > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > index 9c08738c3b91..aaf515df3a83 100644
> > --- a/include/uapi/drm/xe_drm.h
> > +++ b/include/uapi/drm/xe_drm.h
> > @@ -81,6 +81,7 @@ extern "C" {
> > * - &DRM_IOCTL_XE_EXEC
> > * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> > * - &DRM_IOCTL_XE_OBSERVATION
> > + * - &DRM_IOCTL_XE_MADVISE
> > */
> >
> > /*
> > @@ -102,6 +103,7 @@ extern "C" {
> > #define DRM_XE_EXEC 0x09
> > #define DRM_XE_WAIT_USER_FENCE 0x0a
> > #define DRM_XE_OBSERVATION 0x0b
> > +#define DRM_XE_MADVISE 0x0c
> >
> > /* Must be kept compact -- no holes */
> >
> > @@ -117,6 +119,7 @@ extern "C" {
> > #define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> > #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> > #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> > +#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> >
> > /**
> > * DOC: Xe IOCTL Extensions
> > @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> > __u64 sampling_rates[];
> > };
> >
> > +struct drm_xe_madvise_ops {
> > + /** @start: start of the virtual address range */
> > + __u64 start;
> > +
> > + /** @size: size of the virtual address range */
> > + __u64 range;
> > +
> > +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
> > +#define DRM_XE_VMA_ATTR_ATOMIC 1
> > +#define DRM_XE_VMA_ATTR_PAT 2
>
> In my opinion we should drop DRM_XE_VMA_ATTR_PAT as we can already change the PAT index by unbinding and binding the VM with the wanted PAT index.
> Similar for DRM_XE_VMA_ATTR_ATOMIC and DRM_XE_VMA_ATTR_PREFERRED_LOC, we could have those properties in vm bind uAPI and if needed to change we could
> unbind and bind the VMA with the updated attributes.
>
I had thought about making madvise part of VM bind too. I think it
could work, but I do think it would further complicate an already highly
complicated VM bind KMD path. Also, the idea with madvise is that it doesn't
actually update the GPU page tables, rather just the KMD VM/VMA
bookkeeping. My preference here would be to keep this separate.
All of the above attributes are really only intended to be set by a
faulting VM, which Mesa doesn't use.
> > +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
>
> Mesa can make use of DRM_XE_VMA_ATTR_PURGEABLE_STATE but not the other ones, so you will need other UMDs to land those.
>
I could see Mesa using this one, but again, this just flips a bit in KMD
internal state - it does not touch GPU page tables, so it would be goofy to
implement this in a very complex VM bind IOCTL.
Matt
> > + /** @type: type of attribute */
> > + __u32 type;
> > +
> > + /** @pad: MBZ */
> > + __u32 pad;
> > +
> > + union {
> > + struct {
> > +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> > +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> > +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> > +#define DRM_XE_VMA_ATOMIC_CPU 3
> > + /** @val: value of atomic operation*/
> > + __u32 val;
> > +
> > + /** @reserved: Reserved */
> > + __u32 reserved;
> > + } atomic;
> > +
> > + struct {
> > +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> > +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> > +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
> > + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> > + __u32 val;
> > +
> > + /** @reserved: Reserved */
> > + __u32 reserved;
> > + } purge_state_val;
> > +
> > + struct {
> > + /** @pat_index */
> > + __u32 val;
> > +
> > + /** @reserved: Reserved */
> > + __u32 reserved;
> > + } pat_index;
> > +
> > + /** @preferred_mem_loc: preferred memory location */
> > + struct {
> > + __u32 devmem_fd;
> > +
> > +#define MIGRATE_ALL_PAGES 0
> > +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> > + __u32 migration_policy;
> > + } preferred_mem_loc;
> > + };
> > +
> > + /** @reserved: Reserved */
> > + __u64 reserved[2];
> > +};
> > +
> > +/**
> > + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> > + *
> > + * Set memory attributes to a virtual address range
> > + */
> > +struct drm_xe_madvise {
> > + /** @extensions: Pointer to the first extension struct, if any */
> > + __u64 extensions;
> > +
> > + /** @vm_id: vm_id of the virtual range */
> > + __u32 vm_id;
> > +
> > + /** @num_ops: number of madvises in ioctl */
> > + __u32 num_ops;
> > +
> > + union {
> > + /** @ops: used if num_ops == 1 */
> > + struct drm_xe_madvise_ops ops;
> > +
> > + /**
> > + * @vector_of_ops: userptr to array of struct
> > + * drm_xe_vm_madvise_op if num_ops > 1
> > + */
> > + __u64 vector_of_ops;
> > + };
> > +
> > + /** @reserved: Reserved */
> > + __u64 reserved[2];
> > +
> > +};
> > +
> > #if defined(__cplusplus)
> > }
> > #endif
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-22 15:34 ` Matthew Brost
@ 2025-04-22 15:55 ` Souza, Jose
2025-04-22 16:19 ` Matthew Brost
0 siblings, 1 reply; 120+ messages in thread
From: Souza, Jose @ 2025-04-22 15:55 UTC (permalink / raw)
To: Brost, Matthew
Cc: intel-xe@lists.freedesktop.org, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On Tue, 2025-04-22 at 08:34 -0700, Matthew Brost wrote:
> On Thu, Apr 17, 2025 at 12:24:17PM -0600, Souza, Jose wrote:
> > On Thu, 2025-04-17 at 11:19 -0700, José Roberto de Souza wrote:
> > > On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> > > > This commit introduces a new madvise interface to support
> > > > driver-specific ioctl operations. The madvise interface allows for more
> > > > efficient memory management by providing hints to the driver about the
> > > > expected memory usage and pte update policy for gpuvma.
> > > >
> > > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > > ---
> > > > include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
> > > > 1 file changed, 97 insertions(+)
> > > >
> > > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > > index 9c08738c3b91..aaf515df3a83 100644
> > > > --- a/include/uapi/drm/xe_drm.h
> > > > +++ b/include/uapi/drm/xe_drm.h
> > > > @@ -81,6 +81,7 @@ extern "C" {
> > > > * - &DRM_IOCTL_XE_EXEC
> > > > * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> > > > * - &DRM_IOCTL_XE_OBSERVATION
> > > > + * - &DRM_IOCTL_XE_MADVISE
> > > > */
> > > >
> > > > /*
> > > > @@ -102,6 +103,7 @@ extern "C" {
> > > > #define DRM_XE_EXEC 0x09
> > > > #define DRM_XE_WAIT_USER_FENCE 0x0a
> > > > #define DRM_XE_OBSERVATION 0x0b
> > > > +#define DRM_XE_MADVISE 0x0c
> > > >
> > > > /* Must be kept compact -- no holes */
> > > >
> > > > @@ -117,6 +119,7 @@ extern "C" {
> > > > #define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> > > > #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> > > > #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> > > > +#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> > > >
> > > > /**
> > > > * DOC: Xe IOCTL Extensions
> > > > @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> > > > __u64 sampling_rates[];
> > > > };
> > > >
> > > > +struct drm_xe_madvise_ops {
> > > > + /** @start: start of the virtual address range */
> > > > + __u64 start;
> > > > +
> > > > + /** @size: size of the virtual address range */
> > > > + __u64 range;
> > > > +
> > > > +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
> > > > +#define DRM_XE_VMA_ATTR_ATOMIC 1
> > > > +#define DRM_XE_VMA_ATTR_PAT 2
> > >
> > > In my opinion we should drop DRM_XE_VMA_ATTR_PAT as we can already change the PAT index by unbinding and binding the VM with the wanted PAT index.
> > > Similar for DRM_XE_VMA_ATTR_ATOMIC and DRM_XE_VMA_ATTR_PREFERRED_LOC, we could have those properties in vm bind uAPI and if needed to change we could
> > > unbind and bind the VMA with the updated attributes.
> >
> > Ah and if it is chosen to move forward with those I think would be nice to have drm_xe_sync so we could synchronize the update of this attributes with
> > previous submissions and future ones.
> >
>
> I think we should have extra reserved bits for a sync interface so we
> can add one if needed, but the way madvise is currently implemented, it is
> completely synchronous, as all it does is update the KMD's VM/VMA
> bookkeeping; it does not update the GPU page tables (it can invalidate
> them though). A subsequent is expected to be issued if you need
> immediate page table updates.
So this needs to be better documented in the uAPI; I had no clue about that.
What do you mean by subsequent?
I think that only updating KMD bookkeeping will lead to issues that are hard to debug: a UMD could expect that a VMA
is running with one set of attributes while it is actually running with another.
>
> Matt
>
> > >
> > > > +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
> > >
> > > Mesa can make use of DRM_XE_VMA_ATTR_PURGEABLE_STATE but not the other ones, so you will need other UMDs to land those.
> > >
> > > > + /** @type: type of attribute */
> > > > + __u32 type;
> > > > +
> > > > + /** @pad: MBZ */
> > > > + __u32 pad;
> > > > +
> > > > + union {
> > > > + struct {
> > > > +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> > > > +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> > > > +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> > > > +#define DRM_XE_VMA_ATOMIC_CPU 3
> > > > + /** @val: value of atomic operation*/
> > > > + __u32 val;
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u32 reserved;
> > > > + } atomic;
> > > > +
> > > > + struct {
> > > > +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> > > > +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> > > > +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
> > > > + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> > > > + __u32 val;
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u32 reserved;
> > > > + } purge_state_val;
> > > > +
> > > > + struct {
> > > > + /** @pat_index */
> > > > + __u32 val;
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u32 reserved;
> > > > + } pat_index;
> > > > +
> > > > + /** @preferred_mem_loc: preferred memory location */
> > > > + struct {
> > > > + __u32 devmem_fd;
> > > > +
> > > > +#define MIGRATE_ALL_PAGES 0
> > > > +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> > > > + __u32 migration_policy;
> > > > + } preferred_mem_loc;
> > > > + };
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u64 reserved[2];
> > > > +};
> > > > +
> > > > +/**
> > > > + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> > > > + *
> > > > + * Set memory attributes to a virtual address range
> > > > + */
> > > > +struct drm_xe_madvise {
> > > > + /** @extensions: Pointer to the first extension struct, if any */
> > > > + __u64 extensions;
> > > > +
> > > > + /** @vm_id: vm_id of the virtual range */
> > > > + __u32 vm_id;
> > > > +
> > > > + /** @num_ops: number of madvises in ioctl */
> > > > + __u32 num_ops;
> > > > +
> > > > + union {
> > > > + /** @ops: used if num_ops == 1 */
> > > > + struct drm_xe_madvise_ops ops;
> > > > +
> > > > + /**
> > > > + * @vector_of_ops: userptr to array of struct
> > > > + * drm_xe_vm_madvise_op if num_ops > 1
> > > > + */
> > > > + __u64 vector_of_ops;
> > > > + };
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u64 reserved[2];
> > > > +
> > > > +};
> > > > +
> > > > #if defined(__cplusplus)
> > > > }
> > > > #endif
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-22 15:40 ` Matthew Brost
@ 2025-04-22 16:02 ` Souza, Jose
2025-04-22 16:12 ` Matthew Brost
0 siblings, 1 reply; 120+ messages in thread
From: Souza, Jose @ 2025-04-22 16:02 UTC (permalink / raw)
To: Brost, Matthew
Cc: intel-xe@lists.freedesktop.org, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On Tue, 2025-04-22 at 08:40 -0700, Matthew Brost wrote:
> On Thu, Apr 17, 2025 at 12:19:54PM -0600, Souza, Jose wrote:
> > On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> > > This commit introduces a new madvise interface to support
> > > driver-specific ioctl operations. The madvise interface allows for more
> > > efficient memory management by providing hints to the driver about the
> > > expected memory usage and pte update policy for gpuvma.
> > >
> > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > ---
> > > include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
> > > 1 file changed, 97 insertions(+)
> > >
> > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > index 9c08738c3b91..aaf515df3a83 100644
> > > --- a/include/uapi/drm/xe_drm.h
> > > +++ b/include/uapi/drm/xe_drm.h
> > > @@ -81,6 +81,7 @@ extern "C" {
> > > * - &DRM_IOCTL_XE_EXEC
> > > * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> > > * - &DRM_IOCTL_XE_OBSERVATION
> > > + * - &DRM_IOCTL_XE_MADVISE
> > > */
> > >
> > > /*
> > > @@ -102,6 +103,7 @@ extern "C" {
> > > #define DRM_XE_EXEC 0x09
> > > #define DRM_XE_WAIT_USER_FENCE 0x0a
> > > #define DRM_XE_OBSERVATION 0x0b
> > > +#define DRM_XE_MADVISE 0x0c
> > >
> > > /* Must be kept compact -- no holes */
> > >
> > > @@ -117,6 +119,7 @@ extern "C" {
> > > #define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> > > #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> > > #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> > > +#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> > >
> > > /**
> > > * DOC: Xe IOCTL Extensions
> > > @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> > > __u64 sampling_rates[];
> > > };
> > >
> > > +struct drm_xe_madvise_ops {
> > > + /** @start: start of the virtual address range */
> > > + __u64 start;
> > > +
> > > + /** @size: size of the virtual address range */
> > > + __u64 range;
> > > +
> > > +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
> > > +#define DRM_XE_VMA_ATTR_ATOMIC 1
> > > +#define DRM_XE_VMA_ATTR_PAT 2
> >
> > In my opinion we should drop DRM_XE_VMA_ATTR_PAT as we can already change the PAT index by unbinding and binding the VM with the wanted PAT index.
> > Similar for DRM_XE_VMA_ATTR_ATOMIC and DRM_XE_VMA_ATTR_PREFERRED_LOC, we could have those properties in vm bind uAPI and if needed to change we could
> > unbind and bind the VMA with the updated attributes.
> >
>
> I had thought about making madvise part of VM bind too. I think it
> could work, but I do think it further complicates an already highly
> complicated VM bind KMD path. Also, the idea with madvise is that it doesn't
> actually update the GPU page tables, rather just the KMD VM/VMA
> bookkeeping. My preference here would be to keep this separate.
Looks like, of those 32 patches, ~25 are just refactors to allow madvise to set these attributes; maybe if those were supported in the VM bind uAPI it
would take fewer changes...
>
> All of the above attributes are really only intended to be set by a
> faulting VM, which Mesa doesn't use.
We have plans to drop the scratch flag from the VM on Xe2+ platforms to support a Vulkan extension, but that will take some time...
>
> > > +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
> >
> > Mesa can make use of DRM_XE_VMA_ATTR_PURGEABLE_STATE but not the other ones, so you will need other UMDs to land those.
> >
>
> I could see Mesa using this one, but again this just flips a bit in KMD
> internal state - it does not touch GPU page tables, so it would be goofy to
> implement this in a very complex VM bind IOCTL.
This one I also don't think belongs in the VM bind uAPI, but I did not know it required a faulting VM. We plan to remove the scratch flag from the Vulkan
driver, but the usage of this attribute would be in the OpenGL driver, so I don't think we can make use of it then.
>
> Matt
>
> > > + /** @type: type of attribute */
> > > + __u32 type;
> > > +
> > > + /** @pad: MBZ */
> > > + __u32 pad;
> > > +
> > > + union {
> > > + struct {
> > > +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> > > +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> > > +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> > > +#define DRM_XE_VMA_ATOMIC_CPU 3
> > > + /** @val: value of atomic operation*/
> > > + __u32 val;
> > > +
> > > + /** @reserved: Reserved */
> > > + __u32 reserved;
> > > + } atomic;
> > > +
> > > + struct {
> > > +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> > > +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> > > +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
> > > + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> > > + __u32 val;
> > > +
> > > + /** @reserved: Reserved */
> > > + __u32 reserved;
> > > + } purge_state_val;
> > > +
> > > + struct {
> > > + /** @pat_index */
> > > + __u32 val;
> > > +
> > > + /** @reserved: Reserved */
> > > + __u32 reserved;
> > > + } pat_index;
> > > +
> > > + /** @preferred_mem_loc: preferred memory location */
> > > + struct {
> > > + __u32 devmem_fd;
> > > +
> > > +#define MIGRATE_ALL_PAGES 0
> > > +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> > > + __u32 migration_policy;
> > > + } preferred_mem_loc;
> > > + };
> > > +
> > > + /** @reserved: Reserved */
> > > + __u64 reserved[2];
> > > +};
> > > +
> > > +/**
> > > + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> > > + *
> > > + * Set memory attributes to a virtual address range
> > > + */
> > > +struct drm_xe_madvise {
> > > + /** @extensions: Pointer to the first extension struct, if any */
> > > + __u64 extensions;
> > > +
> > > + /** @vm_id: vm_id of the virtual range */
> > > + __u32 vm_id;
> > > +
> > > + /** @num_ops: number of madvises in ioctl */
> > > + __u32 num_ops;
> > > +
> > > + union {
> > > + /** @ops: used if num_ops == 1 */
> > > + struct drm_xe_madvise_ops ops;
> > > +
> > > + /**
> > > + * @vector_of_ops: userptr to array of struct
> > > + * drm_xe_vm_madvise_op if num_ops > 1
> > > + */
> > > + __u64 vector_of_ops;
> > > + };
> > > +
> > > + /** @reserved: Reserved */
> > > + __u64 reserved[2];
> > > +
> > > +};
> > > +
> > > #if defined(__cplusplus)
> > > }
> > > #endif
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-22 16:02 ` Souza, Jose
@ 2025-04-22 16:12 ` Matthew Brost
2025-04-22 16:16 ` Souza, Jose
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-22 16:12 UTC (permalink / raw)
To: Souza, Jose
Cc: intel-xe@lists.freedesktop.org, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On Tue, Apr 22, 2025 at 10:02:58AM -0600, Souza, Jose wrote:
> On Tue, 2025-04-22 at 08:40 -0700, Matthew Brost wrote:
> > On Thu, Apr 17, 2025 at 12:19:54PM -0600, Souza, Jose wrote:
> > > On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> > > > This commit introduces a new madvise interface to support
> > > > driver-specific ioctl operations. The madvise interface allows for more
> > > > efficient memory management by providing hints to the driver about the
> > > > expected memory usage and pte update policy for gpuvma.
> > > >
> > > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > > ---
> > > > include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
> > > > 1 file changed, 97 insertions(+)
> > > >
> > > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > > index 9c08738c3b91..aaf515df3a83 100644
> > > > --- a/include/uapi/drm/xe_drm.h
> > > > +++ b/include/uapi/drm/xe_drm.h
> > > > @@ -81,6 +81,7 @@ extern "C" {
> > > > * - &DRM_IOCTL_XE_EXEC
> > > > * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> > > > * - &DRM_IOCTL_XE_OBSERVATION
> > > > + * - &DRM_IOCTL_XE_MADVISE
> > > > */
> > > >
> > > > /*
> > > > @@ -102,6 +103,7 @@ extern "C" {
> > > > #define DRM_XE_EXEC 0x09
> > > > #define DRM_XE_WAIT_USER_FENCE 0x0a
> > > > #define DRM_XE_OBSERVATION 0x0b
> > > > +#define DRM_XE_MADVISE 0x0c
> > > >
> > > > /* Must be kept compact -- no holes */
> > > >
> > > > @@ -117,6 +119,7 @@ extern "C" {
> > > > #define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> > > > #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> > > > #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> > > > +#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> > > >
> > > > /**
> > > > * DOC: Xe IOCTL Extensions
> > > > @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> > > > __u64 sampling_rates[];
> > > > };
> > > >
> > > > +struct drm_xe_madvise_ops {
> > > > + /** @start: start of the virtual address range */
> > > > + __u64 start;
> > > > +
> > > > + /** @size: size of the virtual address range */
> > > > + __u64 range;
> > > > +
> > > > +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
> > > > +#define DRM_XE_VMA_ATTR_ATOMIC 1
> > > > +#define DRM_XE_VMA_ATTR_PAT 2
> > >
> > > In my opinion we should drop DRM_XE_VMA_ATTR_PAT as we can already change the PAT index by unbinding and binding the VM with the wanted PAT index.
> > > Similar for DRM_XE_VMA_ATTR_ATOMIC and DRM_XE_VMA_ATTR_PREFERRED_LOC, we could have those properties in vm bind uAPI and if needed to change we could
> > > unbind and bind the VMA with the updated attributes.
> > >
> >
> > I had thought about making madvise part of VM bind too. I think it
> > could work, but I do think it further complicates an already highly
> > complicated VM bind KMD path. Also, the idea with madvise is that it doesn't
> > actually update the GPU page tables, rather just the KMD VM/VMA
> > bookkeeping. My preference here would be to keep this separate.
>
> Looks like, of those 32 patches, ~25 are just refactors to allow madvise to set these attributes; maybe if those were supported in the VM bind uAPI it
> would take fewer changes...
>
Maybe but I honestly doubt it.
> >
> > All of the above attributes are really only intended to be set by a
> > faulting VM, which Mesa doesn't use.
>
> We have plans to drop the scratch flag from the VM on Xe2+ platforms to support a Vulkan extension, but that will take some time...
>
faulting VM != dropping scratch page - it means the GPU takes page faults
which are fixed by the KMD. This has a consequence of not allowing
dma-fences (drm syncobjs) so I don't think Mesa will be doing that
anytime soon.
> >
> > > > +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
> > >
> > > Mesa can make use of DRM_XE_VMA_ATTR_PURGEABLE_STATE but not the other ones, so you will need other UMDs to land those.
> > >
> >
> > I could see Mesa using this one, but again this just flips a bit in KMD
> > internal state - it does not touch GPU page tables, so it would be goofy to
> > implement this in a very complex VM bind IOCTL.
>
> This one I also don't think belongs in the VM bind uAPI, but I did not know it required a faulting VM. We plan to remove the scratch flag from the Vulkan
> driver, but the usage of this attribute would be in the OpenGL driver, so I don't think we can make use of it then.
>
This one can be set without a faulting VM - it means when memory
pressure exists and memory is idle, it can be purged (not saved) rather
than evicted (saved / restored). Same idea on suspend / resume.
Matt
> >
> > Matt
> >
> > > > + /** @type: type of attribute */
> > > > + __u32 type;
> > > > +
> > > > + /** @pad: MBZ */
> > > > + __u32 pad;
> > > > +
> > > > + union {
> > > > + struct {
> > > > +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> > > > +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> > > > +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> > > > +#define DRM_XE_VMA_ATOMIC_CPU 3
> > > > + /** @val: value of atomic operation*/
> > > > + __u32 val;
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u32 reserved;
> > > > + } atomic;
> > > > +
> > > > + struct {
> > > > +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> > > > +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> > > > +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
> > > > + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> > > > + __u32 val;
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u32 reserved;
> > > > + } purge_state_val;
> > > > +
> > > > + struct {
> > > > + /** @pat_index */
> > > > + __u32 val;
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u32 reserved;
> > > > + } pat_index;
> > > > +
> > > > + /** @preferred_mem_loc: preferred memory location */
> > > > + struct {
> > > > + __u32 devmem_fd;
> > > > +
> > > > +#define MIGRATE_ALL_PAGES 0
> > > > +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> > > > + __u32 migration_policy;
> > > > + } preferred_mem_loc;
> > > > + };
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u64 reserved[2];
> > > > +};
> > > > +
> > > > +/**
> > > > + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> > > > + *
> > > > + * Set memory attributes to a virtual address range
> > > > + */
> > > > +struct drm_xe_madvise {
> > > > + /** @extensions: Pointer to the first extension struct, if any */
> > > > + __u64 extensions;
> > > > +
> > > > + /** @vm_id: vm_id of the virtual range */
> > > > + __u32 vm_id;
> > > > +
> > > > + /** @num_ops: number of madvises in ioctl */
> > > > + __u32 num_ops;
> > > > +
> > > > + union {
> > > > + /** @ops: used if num_ops == 1 */
> > > > + struct drm_xe_madvise_ops ops;
> > > > +
> > > > + /**
> > > > + * @vector_of_ops: userptr to array of struct
> > > > + * drm_xe_vm_madvise_op if num_ops > 1
> > > > + */
> > > > + __u64 vector_of_ops;
> > > > + };
> > > > +
> > > > + /** @reserved: Reserved */
> > > > + __u64 reserved[2];
> > > > +
> > > > +};
> > > > +
> > > > #if defined(__cplusplus)
> > > > }
> > > > #endif
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-22 16:12 ` Matthew Brost
@ 2025-04-22 16:16 ` Souza, Jose
0 siblings, 0 replies; 120+ messages in thread
From: Souza, Jose @ 2025-04-22 16:16 UTC (permalink / raw)
To: Brost, Matthew
Cc: intel-xe@lists.freedesktop.org, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On Tue, 2025-04-22 at 09:12 -0700, Matthew Brost wrote:
> On Tue, Apr 22, 2025 at 10:02:58AM -0600, Souza, Jose wrote:
> > On Tue, 2025-04-22 at 08:40 -0700, Matthew Brost wrote:
> > > On Thu, Apr 17, 2025 at 12:19:54PM -0600, Souza, Jose wrote:
> > > > On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> > > > > This commit introduces a new madvise interface to support
> > > > > driver-specific ioctl operations. The madvise interface allows for more
> > > > > efficient memory management by providing hints to the driver about the
> > > > > expected memory usage and pte update policy for gpuvma.
> > > > >
> > > > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > > > ---
> > > > > include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
> > > > > 1 file changed, 97 insertions(+)
> > > > >
> > > > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > > > index 9c08738c3b91..aaf515df3a83 100644
> > > > > --- a/include/uapi/drm/xe_drm.h
> > > > > +++ b/include/uapi/drm/xe_drm.h
> > > > > @@ -81,6 +81,7 @@ extern "C" {
> > > > > * - &DRM_IOCTL_XE_EXEC
> > > > > * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> > > > > * - &DRM_IOCTL_XE_OBSERVATION
> > > > > + * - &DRM_IOCTL_XE_MADVISE
> > > > > */
> > > > >
> > > > > /*
> > > > > @@ -102,6 +103,7 @@ extern "C" {
> > > > > #define DRM_XE_EXEC 0x09
> > > > > #define DRM_XE_WAIT_USER_FENCE 0x0a
> > > > > #define DRM_XE_OBSERVATION 0x0b
> > > > > +#define DRM_XE_MADVISE 0x0c
> > > > >
> > > > > /* Must be kept compact -- no holes */
> > > > >
> > > > > @@ -117,6 +119,7 @@ extern "C" {
> > > > > #define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> > > > > #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> > > > > #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> > > > > +#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> > > > >
> > > > > /**
> > > > > * DOC: Xe IOCTL Extensions
> > > > > @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> > > > > __u64 sampling_rates[];
> > > > > };
> > > > >
> > > > > +struct drm_xe_madvise_ops {
> > > > > + /** @start: start of the virtual address range */
> > > > > + __u64 start;
> > > > > +
> > > > > + /** @size: size of the virtual address range */
> > > > > + __u64 range;
> > > > > +
> > > > > +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
> > > > > +#define DRM_XE_VMA_ATTR_ATOMIC 1
> > > > > +#define DRM_XE_VMA_ATTR_PAT 2
> > > >
> > > > In my opinion we should drop DRM_XE_VMA_ATTR_PAT as we can already change the PAT index by unbinding and binding the VM with the wanted PAT index.
> > > > Similar for DRM_XE_VMA_ATTR_ATOMIC and DRM_XE_VMA_ATTR_PREFERRED_LOC, we could have those properties in vm bind uAPI and if needed to change we could
> > > > unbind and bind the VMA with the updated attributes.
> > > >
> > >
> > > I had thought about making madvise part of VM bind too. I think it
> > > could work, but I do think it further complicates an already highly
> > > complicated VM bind KMD path. Also, the idea with madvise is that it doesn't
> > > actually update the GPU page tables, rather just the KMD VM/VMA
> > > bookkeeping. My preference here would be to keep this separate.
> >
> > Looks like, of those 32 patches, ~25 are just refactors to allow madvise to set these attributes; maybe if those were supported in the VM bind uAPI it
> > would take fewer changes...
> >
>
> Maybe but I honestly doubt it.
>
> > >
> > > All of the above attributes are really only intended to be set by a
> > > faulting VM, which Mesa doesn't use.
> >
> > We have plans to drop the scratch flag from the VM on Xe2+ platforms to support a Vulkan extension, but that will take some time...
> >
>
> faulting VM != dropping scratch page - it means the GPU takes page faults
> which are fixed by the KMD. This has a consequence of not allowing
> dma-fences (drm syncobjs) so I don't think Mesa will be doing that
> anytime soon.
thank you for the clarification
>
> > >
> > > > > +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
> > > >
> > > > Mesa can make use of DRM_XE_VMA_ATTR_PURGEABLE_STATE but not the other ones, so you will need other UMDs to land those.
> > > >
> > >
> > > I could see Mesa using this one, but again this just flips a bit in KMD
> > > internal state - it does not touch GPU page tables, so it would be goofy to
> > > implement this in a very complex VM bind IOCTL.
> >
> > This one I also don't think belongs in the VM bind uAPI, but I did not know it required a faulting VM. We plan to remove the scratch flag from the Vulkan
> > driver, but the usage of this attribute would be in the OpenGL driver, so I don't think we can make use of it then.
> >
>
> This one can be set without a faulting VM - it means when memory
> pressure exists and memory is idle, it can be purged (not saved) rather
> than evicted (saved / restored). Same idea on suspend / resume.
nice, it would be good to have documentation with the requirements of each attribute...
>
> Matt
>
> > >
> > > Matt
> > >
> > > > > + /** @type: type of attribute */
> > > > > + __u32 type;
> > > > > +
> > > > > + /** @pad: MBZ */
> > > > > + __u32 pad;
> > > > > +
> > > > > + union {
> > > > > + struct {
> > > > > +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> > > > > +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> > > > > +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> > > > > +#define DRM_XE_VMA_ATOMIC_CPU 3
> > > > > + /** @val: value of atomic operation*/
> > > > > + __u32 val;
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u32 reserved;
> > > > > + } atomic;
> > > > > +
> > > > > + struct {
> > > > > +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> > > > > +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> > > > > +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
> > > > > + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> > > > > + __u32 val;
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u32 reserved;
> > > > > + } purge_state_val;
> > > > > +
> > > > > + struct {
> > > > > + /** @pat_index */
> > > > > + __u32 val;
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u32 reserved;
> > > > > + } pat_index;
> > > > > +
> > > > > + /** @preferred_mem_loc: preferred memory location */
> > > > > + struct {
> > > > > + __u32 devmem_fd;
> > > > > +
> > > > > +#define MIGRATE_ALL_PAGES 0
> > > > > +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> > > > > + __u32 migration_policy;
> > > > > + } preferred_mem_loc;
> > > > > + };
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u64 reserved[2];
> > > > > +};
> > > > > +
> > > > > +/**
> > > > > + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> > > > > + *
> > > > > + * Set memory attributes to a virtual address range
> > > > > + */
> > > > > +struct drm_xe_madvise {
> > > > > + /** @extensions: Pointer to the first extension struct, if any */
> > > > > + __u64 extensions;
> > > > > +
> > > > > + /** @vm_id: vm_id of the virtual range */
> > > > > + __u32 vm_id;
> > > > > +
> > > > > + /** @num_ops: number of madvises in ioctl */
> > > > > + __u32 num_ops;
> > > > > +
> > > > > + union {
> > > > > + /** @ops: used if num_ops == 1 */
> > > > > + struct drm_xe_madvise_ops ops;
> > > > > +
> > > > > + /**
> > > > > + * @vector_of_ops: userptr to array of struct
> > > > > + * drm_xe_vm_madvise_op if num_ops > 1
> > > > > + */
> > > > > + __u64 vector_of_ops;
> > > > > + };
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u64 reserved[2];
> > > > > +
> > > > > +};
> > > > > +
> > > > > #if defined(__cplusplus)
> > > > > }
> > > > > #endif
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-22 15:55 ` Souza, Jose
@ 2025-04-22 16:19 ` Matthew Brost
0 siblings, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-04-22 16:19 UTC (permalink / raw)
To: Souza, Jose
Cc: intel-xe@lists.freedesktop.org, Ghimiray, Himal Prasad,
thomas.hellstrom@linux.intel.com
On Tue, Apr 22, 2025 at 09:55:05AM -0600, Souza, Jose wrote:
> On Tue, 2025-04-22 at 08:34 -0700, Matthew Brost wrote:
> > On Thu, Apr 17, 2025 at 12:24:17PM -0600, Souza, Jose wrote:
> > > On Thu, 2025-04-17 at 11:19 -0700, José Roberto de Souza wrote:
> > > > On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> > > > > This commit introduces a new madvise interface to support
> > > > > driver-specific ioctl operations. The madvise interface allows for more
> > > > > efficient memory management by providing hints to the driver about the
> > > > > expected memory usage and pte update policy for gpuvma.
> > > > >
> > > > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > > > ---
> > > > > include/uapi/drm/xe_drm.h | 97 +++++++++++++++++++++++++++++++++++++++
> > > > > 1 file changed, 97 insertions(+)
> > > > >
> > > > > diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > > > > index 9c08738c3b91..aaf515df3a83 100644
> > > > > --- a/include/uapi/drm/xe_drm.h
> > > > > +++ b/include/uapi/drm/xe_drm.h
> > > > > @@ -81,6 +81,7 @@ extern "C" {
> > > > > * - &DRM_IOCTL_XE_EXEC
> > > > > * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> > > > > * - &DRM_IOCTL_XE_OBSERVATION
> > > > > + * - &DRM_IOCTL_XE_MADVISE
> > > > > */
> > > > >
> > > > > /*
> > > > > @@ -102,6 +103,7 @@ extern "C" {
> > > > > #define DRM_XE_EXEC 0x09
> > > > > #define DRM_XE_WAIT_USER_FENCE 0x0a
> > > > > #define DRM_XE_OBSERVATION 0x0b
> > > > > +#define DRM_XE_MADVISE 0x0c
> > > > >
> > > > > /* Must be kept compact -- no holes */
> > > > >
> > > > > @@ -117,6 +119,7 @@ extern "C" {
> > > > > #define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> > > > > #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> > > > > #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> > > > > +#define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> > > > >
> > > > > /**
> > > > > * DOC: Xe IOCTL Extensions
> > > > > @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> > > > > __u64 sampling_rates[];
> > > > > };
> > > > >
> > > > > +struct drm_xe_madvise_ops {
> > > > > + /** @start: start of the virtual address range */
> > > > > + __u64 start;
> > > > > +
> > > > > + /** @size: size of the virtual address range */
> > > > > + __u64 range;
> > > > > +
> > > > > +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
> > > > > +#define DRM_XE_VMA_ATTR_ATOMIC 1
> > > > > +#define DRM_XE_VMA_ATTR_PAT 2
> > > >
> > > > In my opinion we should drop DRM_XE_VMA_ATTR_PAT as we can already change the PAT index by unbinding and binding the VM with the wanted PAT index.
> > > > Similar for DRM_XE_VMA_ATTR_ATOMIC and DRM_XE_VMA_ATTR_PREFERRED_LOC, we could have those properties in vm bind uAPI and if needed to change we could
> > > > unbind and bind the VMA with the updated attributes.
> > >
> > > Ah, and if it is chosen to move forward with those, I think it would be nice to have drm_xe_sync so we could synchronize the update of these attributes with
> > > previous submissions and future ones.
> > >
> >
> > I think we should have extra reserved bits for a sync interface so we
> > can add one if needed, but the way madvise is currently implemented it is
> > completely synchronous as all it does is update the KMD's VM/VMA
> > bookkeeping, it does not update the GPU page tables (it can invalidate
> > them though). A subsequent is expected to be issued if you need
> > immediate page table updates.
>
> So this needs to be better documented in the uAPI; I had no clue about that.
This is WIP and hasn't really been reviewed by the KMD team yet, but in
general I agree more documentation is likely required.
Thomas and I are on the hook to review madvise, and will consider what it would
look like to just make it part of VM bind.
> What do you mean by subsequent?
>
Typo...
s/subsequent/subsequent prefetch/
> I think that only updating KMD bookkeeping will lead to issues that are hard to debug; UMD could expect that a VMA is running with one set of attributes
> while it is actually running with another.
>
That ties into why DRM_XE_VMA_ATTR_PREFERRED_LOC, DRM_XE_VMA_ATTR_ATOMIC,
and DRM_XE_VMA_ATTR_PAT are only expected to be set on faulting VMs - madvise
invalidates the page tables, so a subsequent GPU access would fault and
pick up the correct values. A subsequent prefetch would populate the GPU
pages to avoid a fault. Also, the first two attributes really only have
meaning on faulting VMs too.
Matt
> >
> > Matt
> >
> > > >
> > > > > +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
> > > >
> > > > Mesa can make use of DRM_XE_VMA_ATTR_PURGEABLE_STATE but not the other ones, so you will need other UMDs to land those.
> > > >
> > > > > + /** @type: type of attribute */
> > > > > + __u32 type;
> > > > > +
> > > > > + /** @pad: MBZ */
> > > > > + __u32 pad;
> > > > > +
> > > > > + union {
> > > > > + struct {
> > > > > +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> > > > > +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> > > > > +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> > > > > +#define DRM_XE_VMA_ATOMIC_CPU 3
> > > > > + /** @val: value of atomic operation*/
> > > > > + __u32 val;
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u32 reserved;
> > > > > + } atomic;
> > > > > +
> > > > > + struct {
> > > > > +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> > > > > +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> > > > > +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
> > > > > + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> > > > > + __u32 val;
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u32 reserved;
> > > > > + } purge_state_val;
> > > > > +
> > > > > + struct {
> > > > > + /** @pat_index */
> > > > > + __u32 val;
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u32 reserved;
> > > > > + } pat_index;
> > > > > +
> > > > > + /** @preferred_mem_loc: preferred memory location */
> > > > > + struct {
> > > > > + __u32 devmem_fd;
> > > > > +
> > > > > +#define MIGRATE_ALL_PAGES 0
> > > > > +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> > > > > + __u32 migration_policy;
> > > > > + } preferred_mem_loc;
> > > > > + };
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u64 reserved[2];
> > > > > +};
> > > > > +
> > > > > +/**
> > > > > + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> > > > > + *
> > > > > + * Set memory attributes to a virtual address range
> > > > > + */
> > > > > +struct drm_xe_madvise {
> > > > > + /** @extensions: Pointer to the first extension struct, if any */
> > > > > + __u64 extensions;
> > > > > +
> > > > > + /** @vm_id: vm_id of the virtual range */
> > > > > + __u32 vm_id;
> > > > > +
> > > > > + /** @num_ops: number of madvises in ioctl */
> > > > > + __u32 num_ops;
> > > > > +
> > > > > + union {
> > > > > + /** @ops: used if num_ops == 1 */
> > > > > + struct drm_xe_madvise_ops ops;
> > > > > +
> > > > > + /**
> > > > > + * @vector_of_ops: userptr to array of struct
> > > > > + * drm_xe_vm_madvise_op if num_ops > 1
> > > > > + */
> > > > > + __u64 vector_of_ops;
> > > > > + };
> > > > > +
> > > > > + /** @reserved: Reserved */
> > > > > + __u64 reserved[2];
> > > > > +
> > > > > +};
> > > > > +
> > > > > #if defined(__cplusplus)
> > > > > }
> > > > > #endif
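The attribute-update lifecycle Matt describes in the exchange above, condensed (illustrative pseudocode only, not driver code):

```
madvise(vm, range, attr):            # synchronous, CPU-side only
  1. split VMAs so the range is exactly covered
  2. record attr in the KMD's VM/VMA bookkeeping
  3. invalidate existing PTEs for the range (no new PTEs written)

next GPU access to the range:
  faulting VM:  page fault -> KMD rebinds using the new attributes
  alternative:  userspace issues a prefetch to repopulate PTEs
                eagerly and avoid the fault
```

This is why the ioctl needs no drm_xe_sync today: nothing asynchronous happens until the next fault or prefetch.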
* Re: [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges
2025-04-17 4:54 ` Matthew Brost
@ 2025-04-24 10:03 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-04-24 10:03 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 17-04-2025 10:24, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:01PM +0530, Himal Prasad Ghimiray wrote:
>> This commit adds prefetch support for SVM ranges, utilizing the
>> existing ioctl vm_bind functionality to achieve this.
>>
>> v2: rebase
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pt.c | 61 +++++++++---
>> drivers/gpu/drm/xe/xe_vm.c | 185 +++++++++++++++++++++++++++++++++++--
>> 2 files changed, 222 insertions(+), 24 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
>> index de4e3edda758..59dc065fae93 100644
>> --- a/drivers/gpu/drm/xe/xe_pt.c
>> +++ b/drivers/gpu/drm/xe/xe_pt.c
>> @@ -1458,7 +1458,8 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
>> struct xe_vm *vm = pt_update->vops->vm;
>> struct xe_vma_ops *vops = pt_update->vops;
>> struct xe_vma_op *op;
>> - int err;
>> + int ranges_count;
>> + int err, i;
>>
>> err = xe_pt_pre_commit(pt_update);
>> if (err)
>> @@ -1467,20 +1468,33 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
>> xe_svm_notifier_lock(vm);
>>
>> list_for_each_entry(op, &vops->list, link) {
>> - struct xe_svm_range *range = op->map_range.range;
>> + struct xe_svm_range *range = NULL;
>>
>> if (op->subop == XE_VMA_SUBOP_UNMAP_RANGE)
>> continue;
>>
>> - xe_svm_range_debug(range, "PRE-COMMIT");
>> -
>> - xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
>> - xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
>> + if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
>> + xe_assert(vm->xe,
>> + xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.prefetch.va)));
>> + ranges_count = op->prefetch_range.ranges_count;
>> + } else {
>> + xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
>> + xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
>> + ranges_count = 1;
>> + }
>>
>> - if (!xe_svm_range_pages_valid(range)) {
>> - xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
>> - xe_svm_notifier_unlock(vm);
>> - return -EAGAIN;
>> + for (i = 0; i < ranges_count; i++) {
>
> xa_for_each as it doesn't make any assumptions about the key (e.g. the value of i).
Sure
>
>> + if (op->base.op == DRM_GPUVA_OP_PREFETCH)
>> + range = xa_load(&op->prefetch_range.range, i);
>
> I'd move this logic above... So I'd write it like this...
>
> if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
> assert
>     xa_for_each
> do_pages_check()
> } else {
> assert
> do_pages_check();
> }
Looks better.
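Spelling out the shape Matt sketches above (a rough, non-compiled sketch of the restructured pre-commit check, using the kernel's xa_for_each iterator; error codes follow the -ENODATA-for-prefetch note later in this review):

```c
	if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
		struct xe_svm_range *range;
		unsigned long i;

		xe_assert(vm->xe,
			  xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.prefetch.va)));
		/* Iterate the prefetch xarray without assuming dense keys. */
		xa_for_each(&op->prefetch_range.range, i, range) {
			xe_svm_range_debug(range, "PRE-COMMIT");
			if (!xe_svm_range_pages_valid(range)) {
				xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
				xe_svm_notifier_unlock(vm);
				return -ENODATA;
			}
		}
	} else {
		struct xe_svm_range *range = op->map_range.range;

		xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
		xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
		xe_svm_range_debug(range, "PRE-COMMIT");
		if (!xe_svm_range_pages_valid(range)) {
			xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
			xe_svm_notifier_unlock(vm);
			return -EAGAIN;
		}
	}
```

The single-range map path keeps -EAGAIN (a genuine retry), while the prefetch path bails with -ENODATA to avoid the livelock concern.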
>
>> + else
>> + range = op->map_range.range;
>> + xe_svm_range_debug(range, "PRE-COMMIT");
>> +
>> + if (!xe_svm_range_pages_valid(range)) {
>> + xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
>> + xe_svm_notifier_unlock(vm);
>> + return -EAGAIN;
>
> So in the case of prefetch, this is a bit inconsistent as below when
> things race, you return -ENODATA which is converted to 0 in the IOCTL.
> -EAGAIN here could result in a livelock in the right conditions as
> -EAGAIN means must retry. I think maybe just -ENODATA if a prefetch
> fails... If there are any other binds in the array of IOCTL they will
> just fault I guess. Maybe not a concern as only VK uses array of binds
> at the moment.
Ok
>
>> + }
>> }
>> }
>>
>> @@ -2065,11 +2079,21 @@ static int op_prepare(struct xe_vm *vm,
>> {
>> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>>
>> - if (xe_vma_is_cpu_addr_mirror(vma))
>> - break;
>> + if (xe_vma_is_cpu_addr_mirror(vma)) {
>> + struct xe_svm_range *range;
>> + int i;
>>
>> - err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
>> - pt_update_ops->wait_vm_kernel = true;
>> + for (i = 0; i < op->prefetch_range.ranges_count; i++) {
>> + range = xa_load(&op->prefetch_range.range, i);
>
> Again xa_for_each...
Noted
>
>> + err = bind_range_prepare(vm, tile, pt_update_ops,
>> + vma, range);
>> + if (err)
>> + return err;
>> + }
>> + } else {
>> + err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
>> + pt_update_ops->wait_vm_kernel = true;
>> + }
>> break;
>> }
>> case DRM_GPUVA_OP_DRIVER:
>> @@ -2273,9 +2297,16 @@ static void op_commit(struct xe_vm *vm,
>> {
>> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>>
>> - if (!xe_vma_is_cpu_addr_mirror(vma))
>> + if (xe_vma_is_cpu_addr_mirror(vma)) {
>> + for (int i = 0 ; i < op->prefetch_range.ranges_count; i++) {
>
> Again xa_for_each...
>
>> + struct xe_svm_range *range = xa_load(&op->prefetch_range.range, i);
>> +
>> + range_present_and_invalidated_tile(vm, range, tile->id);
>> + }
>> + } else {
>> bind_op_commit(vm, tile, pt_update_ops, vma, fence,
>> fence2, false);
>> + }
>> break;
>> }
>> case DRM_GPUVA_OP_DRIVER:
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 57af2c37f927..ffd7ad664921 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -798,10 +798,36 @@ static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
>> }
>> ALLOW_ERROR_INJECTION(xe_vma_ops_alloc, ERRNO);
>>
>> +static void clean_svm_prefetch_op(struct xe_vma_op *op)
>> +{
>
> Can we rename this with fini convention to match xe_vma_ops_fini?
Sure
>
>> + struct xe_vma *vma;
>> +
>> + vma = gpuva_to_vma(op->base.prefetch.va);
>> +
>> + if (op->base.op == DRM_GPUVA_OP_PREFETCH && xe_vma_is_cpu_addr_mirror(vma)) {
>> + xa_destroy(&op->prefetch_range.range);
>> + op->prefetch_range.ranges_count = 0;
>
> Do you need to set 'op->prefetch_range.ranges_count' to zero here.
Looks redundant.
>
>> + }
>> +}
>> +
>> +static void clean_svm_prefetch_in_vma_ops(struct xe_vma_ops *vops)
>> +{
>
> Same here, fini convention?
Sure
>
>> + struct xe_vma_op *op;
>> +
>> + if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
>> + return;
>> +
>> + list_for_each_entry(op, &vops->list, link) {
>> + clean_svm_prefetch_op(op);
>> + }
>
> Brackets not needed.
Noted
>
>> +}
>> +
>> static void xe_vma_ops_fini(struct xe_vma_ops *vops)
>> {
>> int i;
>>
>> + clean_svm_prefetch_in_vma_ops(vops);
>> +
>> for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
>> kfree(vops->pt_update_ops[i].ops);
>> }
>> @@ -2248,13 +2274,25 @@ static bool __xe_vm_needs_clear_scratch_pages(struct xe_vm *vm, u32 bind_flags)
>> return true;
>> }
>>
>> +static void clean_svm_prefetch_in_gpuva_ops(struct drm_gpuva_ops *ops)
>> +{
>
> Same here, fini convention?
>
>> + struct drm_gpuva_op *__op;
>> +
>> + drm_gpuva_for_each_op(__op, ops) {
>> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>> +
>> + clean_svm_prefetch_op(op);
>> + }
>> +}
>> +
>> /*
>> * Create operations list from IOCTL arguments, setup operations fields so parse
>> * and commit steps are decoupled from IOCTL arguments. This step can fail.
>> */
>> static struct drm_gpuva_ops *
>> -vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
>> - u64 bo_offset_or_userptr, u64 addr, u64 range,
>> +vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>> + struct xe_bo *bo, u64 bo_offset_or_userptr,
>> + u64 addr, u64 range,
>> u32 operation, u32 flags,
>> u32 prefetch_region, u16 pat_index)
>> {
>> @@ -2262,6 +2300,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
>> struct drm_gpuva_ops *ops;
>> struct drm_gpuva_op *__op;
>> struct drm_gpuvm_bo *vm_bo;
>> + u64 range_end = addr + range;
>> int err;
>>
>> lockdep_assert_held_write(&vm->lock);
>> @@ -2323,14 +2362,61 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
>> op->map.invalidate_on_bind =
>> __xe_vm_needs_clear_scratch_pages(vm, flags);
>> } else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
>> - op->prefetch.region = prefetch_region;
>> - }
>> + struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>> +
>> + if (!xe_vma_is_cpu_addr_mirror(vma)) {
>> + op->prefetch.region = prefetch_region;
>> + break;
>> + }
>>
>> + struct drm_gpusvm_ctx ctx = {
>> + .read_only = xe_vma_read_only(vma),
>> + .devmem_possible = IS_DGFX(vm->xe) &&
>> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
>> + .check_pages_threshold = IS_DGFX(vm->xe) &&
>> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
>> + SZ_64K : 0,
>
> The alignment looks weird here.
Checkpatch confirms it's good.
>
> Also, you don't technically need to set check_pages_threshold here, given
> this is used by get_pages, which is not called here.
That's true, will remove it
>
>> + };
>> +
>> + op->prefetch_range.region = prefetch_region;
>> + struct xe_svm_range *svm_range;
>> + int i = 0;
>
> I don't think you need 'i' here; you can probably just use xa_alloc
> rather than xa_store if you use xa_for_each everywhere else.
Yes xa_alloc with xa_init_flags will be cleaner here. Will change
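For reference, the xa_alloc-based pattern being agreed on might look roughly like this (a sketch against the names in the diff above, not the final patch):

```c
/* Creation side: xa_alloc picks the next free index, so no manual 'i'. */
u32 id;

xa_init_flags(&op->prefetch_range.range, XA_FLAGS_ALLOC);
/* ... then, per discovered range: */
err = xa_alloc(&op->prefetch_range.range, &id, svm_range,
	       xa_limit_32b, GFP_KERNEL);
if (err)
	goto unwind_prefetch_ops;
op->prefetch_range.ranges_count++;

/* Consumer side: iterate entries without loading by index. */
unsigned long index;
struct xe_svm_range *range;

xa_for_each(&op->prefetch_range.range, index, range) {
	err = bind_range_prepare(vm, tile, pt_update_ops, vma, range);
	if (err)
		return err;
}
```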
>
>
>> +
>> + xa_init(&op->prefetch_range.range);
>> + op->prefetch_range.ranges_count = 0;
>> +alloc_next_range:
>> + svm_range = xe_svm_range_find_or_insert(vm, addr, vma, &ctx);
>> +
>
> I think you want to check if the range has a mapping and is in the
> preferred location; if it is, then don't add it to the xarray, as there
> is no reason to try to migrate it or rebind the GPU pages.
Makes sense, will add a check here.
>
>> + if (PTR_ERR(svm_range) == -ENOENT)
>> + break;
>> +
>> + if (IS_ERR(svm_range)) {
>> + err = PTR_ERR(svm_range);
>> + goto unwind_prefetch_ops;
>> + }
>> +
>> + xa_store(&op->prefetch_range.range, i, svm_range, GFP_KERNEL);
>> + op->prefetch_range.ranges_count++;
>> + vops->flags |= XE_VMA_OPS_HAS_SVM_PREFETCH;
>> +
>> + if (range_end > xe_svm_range_end(svm_range) &&
>> + xe_svm_range_end(svm_range) < xe_vma_end(vma)) {
>> + i++;
>> + addr = xe_svm_range_end(svm_range);
>> + goto alloc_next_range;
>> + }
>> + }
>> print_op(vm->xe, __op);
>> }
>>
>> return ops;
>> +
>> +unwind_prefetch_ops:
>> + clean_svm_prefetch_in_gpuva_ops(ops);
>> + drm_gpuva_ops_free(&vm->gpuvm, ops);
>> + return ERR_PTR(err);
>> }
>> +
>> ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
>>
>> static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
>> @@ -2645,8 +2731,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>> return err;
>> }
>>
>> - if (!xe_vma_is_cpu_addr_mirror(vma))
>> + if (xe_vma_is_cpu_addr_mirror(vma))
>> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask,
>> + op->prefetch_range.ranges_count);
>> + else
>> xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
>> +
>> break;
>> default:
>> drm_warn(&vm->xe->drm, "NOT POSSIBLE");
>> @@ -2772,6 +2862,58 @@ static int check_ufence(struct xe_vma *vma)
>> return 0;
>> }
>>
>> +static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
>> + struct xe_vma_op *op)
>> +{
>> + int err = 0;
>> +
>> + if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
>
>
> if (op->base.op != DRM_GPUVA_OP_PREFETCH || !xe_vma_is_cpu_addr_mirror(vma))
> return 0;
>
> Will help with spacing. Or do this check at the caller.
>
>> + struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>> + struct drm_gpusvm_ctx ctx = {
>> + .read_only = xe_vma_read_only(vma),
>> + .devmem_possible = IS_DGFX(vm->xe) &&
>> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
>> + .check_pages_threshold = IS_DGFX(vm->xe) &&
>> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
>> + SZ_64K : 0,
>> + };
>> + struct xe_svm_range *svm_range;
>> + struct xe_tile *tile;
>> + u32 region;
>> + int i;
>> +
>> + if (!xe_vma_is_cpu_addr_mirror(vma))
>> + return 0;
>> +
>> + region = op->prefetch_range.region;
>> +
>> + /* TODO: Threading the migration */
>> + for (i = 0; i < op->prefetch_range.ranges_count; i++) {
>
> Again xa_for_each...
>
>> + svm_range = xa_load(&op->prefetch_range.range, i);
>> + if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
>> + tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
>> + err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
>> + if (err) {
>> + drm_err(&vm->xe->drm, "VRAM allocation failed, can be retried from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
>> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>
> Not drm_err here, drm_dbg as user space can easily retrigger this.
Sure
>
>> + return -ENODATA;
>
> So this gets squashed into return 0, which I think is correct for now.
> Same explanation as above wrt error codes.
>
>> + }
>> + }
>> +
>> + err = xe_svm_range_get_pages(vm, svm_range, &ctx);
>> + if (err) {
>> + if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM)
>> + err = -ENODATA;
>> +
>> + drm_err(&vm->xe->drm, "Get pages failed, asid=%u, gpusvm=%p, errno=%pe\n",
>> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>
> Same as above.
>
> We are going to want IGTs to test these error paths, btw: issue a
> prefetch, then have another thread immediately touch some of the memory
> to abort the prefetch.
Sure, will look into it.
>
>> + return err;
>> + }
>> + }
>> + }
>> + return err;
>> +}
>> +
>> static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>> struct xe_vma_op *op)
>> {
>> @@ -2809,7 +2951,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>> case DRM_GPUVA_OP_PREFETCH:
>> {
>> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>> - u32 region = op->prefetch.region;
>> + u32 region;
>> +
>> + if (xe_vma_is_cpu_addr_mirror(vma))
>> + region = op->prefetch_range.region;
>> + else
>> + region = op->prefetch.region;
>>
>> xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
>>
>> @@ -2828,6 +2975,23 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>> return err;
>> }
>>
>> +static int xe_vma_ops_execute_ready(struct xe_vm *vm, struct xe_vma_ops *vops)
>> +{
>
> Let's make these names consistent.
>
> How about...
>
> s/xe_vma_ops_execute_ready/vm_bind_ioctl_ops_prefetch_ranges
>
> s/prefetch_ranges_lock_and_prep/prefetch_ranges
Sure
>
>> + struct xe_vma_op *op;
>> + int err;
>> +
>> + if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
>> + return 0;
>> +
>> + list_for_each_entry(op, &vops->list, link) {
>> + err = prefetch_ranges_lock_and_prep(vm, op);
>> + if (err)
>> + return err;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
>> struct xe_vm *vm,
>> struct xe_vma_ops *vops)
>> @@ -2850,7 +3014,6 @@ static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
>> vm->xe->vm_inject_error_position == FORCE_OP_ERROR_LOCK)
>> return -ENOSPC;
>> #endif
>> -
>
> Look unrelated.
>
> Matt
>
>> return 0;
>> }
>>
>> @@ -3492,7 +3655,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>> u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
>> u16 pat_index = bind_ops[i].pat_index;
>>
>> - ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
>> + ops[i] = vm_bind_ioctl_ops_create(vm, &vops, bos[i], obj_offset,
>> addr, range, op, flags,
>> prefetch_region, pat_index);
>> if (IS_ERR(ops[i])) {
>> @@ -3525,6 +3688,10 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>> if (err)
>> goto unwind_ops;
>>
>> + err = xe_vma_ops_execute_ready(vm, &vops);
>> + if (err)
>> + goto unwind_ops;
>> +
>> fence = vm_bind_ioctl_ops_execute(vm, &vops);
>> if (IS_ERR(fence))
>> err = PTR_ERR(fence);
>> @@ -3594,7 +3761,7 @@ struct dma_fence *xe_vm_bind_kernel_bo(struct xe_vm *vm, struct xe_bo *bo,
>>
>> xe_vma_ops_init(&vops, vm, q, NULL, 0);
>>
>> - ops = vm_bind_ioctl_ops_create(vm, bo, 0, addr, bo->size,
>> + ops = vm_bind_ioctl_ops_create(vm, &vops, bo, 0, addr, bo->size,
>> DRM_XE_VM_BIND_OP_MAP, 0, 0,
>> vm->xe->pat.idx[cache_lvl]);
>> if (IS_ERR(ops)) {
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges
2025-04-07 10:17 ` [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges Himal Prasad Ghimiray
2025-04-17 4:54 ` Matthew Brost
@ 2025-04-24 23:48 ` Matthew Brost
2025-04-28 6:44 ` Ghimiray, Himal Prasad
1 sibling, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-04-24 23:48 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:01PM +0530, Himal Prasad Ghimiray wrote:
> This commit adds prefetch support for SVM ranges, utilizing the
> existing ioctl vm_bind functionality to achieve this.
>
> v2: rebase
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pt.c | 61 +++++++++---
> drivers/gpu/drm/xe/xe_vm.c | 185 +++++++++++++++++++++++++++++++++++--
> 2 files changed, 222 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index de4e3edda758..59dc065fae93 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -1458,7 +1458,8 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
> struct xe_vm *vm = pt_update->vops->vm;
> struct xe_vma_ops *vops = pt_update->vops;
> struct xe_vma_op *op;
> - int err;
> + int ranges_count;
> + int err, i;
>
> err = xe_pt_pre_commit(pt_update);
> if (err)
> @@ -1467,20 +1468,33 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
> xe_svm_notifier_lock(vm);
>
> list_for_each_entry(op, &vops->list, link) {
> - struct xe_svm_range *range = op->map_range.range;
> + struct xe_svm_range *range = NULL;
>
> if (op->subop == XE_VMA_SUBOP_UNMAP_RANGE)
> continue;
>
> - xe_svm_range_debug(range, "PRE-COMMIT");
> -
> - xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
> - xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
> + if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
> + xe_assert(vm->xe,
> + xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.prefetch.va)));
> + ranges_count = op->prefetch_range.ranges_count;
> + } else {
> + xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
> + xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
> + ranges_count = 1;
> + }
>
> - if (!xe_svm_range_pages_valid(range)) {
> - xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
> - xe_svm_notifier_unlock(vm);
> - return -EAGAIN;
> + for (i = 0; i < ranges_count; i++) {
> + if (op->base.op == DRM_GPUVA_OP_PREFETCH)
> + range = xa_load(&op->prefetch_range.range, i);
> + else
> + range = op->map_range.range;
> + xe_svm_range_debug(range, "PRE-COMMIT");
> +
> + if (!xe_svm_range_pages_valid(range)) {
> + xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
> + xe_svm_notifier_unlock(vm);
> + return -EAGAIN;
> + }
> }
> }
>
> @@ -2065,11 +2079,21 @@ static int op_prepare(struct xe_vm *vm,
> {
> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>
> - if (xe_vma_is_cpu_addr_mirror(vma))
> - break;
> + if (xe_vma_is_cpu_addr_mirror(vma)) {
> + struct xe_svm_range *range;
> + int i;
>
> - err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
> - pt_update_ops->wait_vm_kernel = true;
> + for (i = 0; i < op->prefetch_range.ranges_count; i++) {
> + range = xa_load(&op->prefetch_range.range, i);
> + err = bind_range_prepare(vm, tile, pt_update_ops,
> + vma, range);
> + if (err)
> + return err;
> + }
> + } else {
> + err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
> + pt_update_ops->wait_vm_kernel = true;
> + }
> break;
> }
> case DRM_GPUVA_OP_DRIVER:
> @@ -2273,9 +2297,16 @@ static void op_commit(struct xe_vm *vm,
> {
> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>
> - if (!xe_vma_is_cpu_addr_mirror(vma))
> + if (xe_vma_is_cpu_addr_mirror(vma)) {
> + for (int i = 0 ; i < op->prefetch_range.ranges_count; i++) {
> + struct xe_svm_range *range = xa_load(&op->prefetch_range.range, i);
> +
> + range_present_and_invalidated_tile(vm, range, tile->id);
> + }
> + } else {
> bind_op_commit(vm, tile, pt_update_ops, vma, fence,
> fence2, false);
> + }
> break;
> }
> case DRM_GPUVA_OP_DRIVER:
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 57af2c37f927..ffd7ad664921 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -798,10 +798,36 @@ static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
> }
> ALLOW_ERROR_INJECTION(xe_vma_ops_alloc, ERRNO);
>
> +static void clean_svm_prefetch_op(struct xe_vma_op *op)
> +{
> + struct xe_vma *vma;
> +
> + vma = gpuva_to_vma(op->base.prefetch.va);
> +
> + if (op->base.op == DRM_GPUVA_OP_PREFETCH && xe_vma_is_cpu_addr_mirror(vma)) {
> + xa_destroy(&op->prefetch_range.range);
> + op->prefetch_range.ranges_count = 0;
> + }
> +}
> +
> +static void clean_svm_prefetch_in_vma_ops(struct xe_vma_ops *vops)
> +{
> + struct xe_vma_op *op;
> +
> + if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
> + return;
> +
> + list_for_each_entry(op, &vops->list, link) {
> + clean_svm_prefetch_op(op);
> + }
> +}
> +
> static void xe_vma_ops_fini(struct xe_vma_ops *vops)
> {
> int i;
>
> + clean_svm_prefetch_in_vma_ops(vops);
> +
> for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
> kfree(vops->pt_update_ops[i].ops);
> }
> @@ -2248,13 +2274,25 @@ static bool __xe_vm_needs_clear_scratch_pages(struct xe_vm *vm, u32 bind_flags)
> return true;
> }
>
> +static void clean_svm_prefetch_in_gpuva_ops(struct drm_gpuva_ops *ops)
> +{
> + struct drm_gpuva_op *__op;
> +
> + drm_gpuva_for_each_op(__op, ops) {
> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> +
> + clean_svm_prefetch_op(op);
> + }
> +}
> +
> /*
> * Create operations list from IOCTL arguments, setup operations fields so parse
> * and commit steps are decoupled from IOCTL arguments. This step can fail.
> */
> static struct drm_gpuva_ops *
> -vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
> - u64 bo_offset_or_userptr, u64 addr, u64 range,
> +vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
> + struct xe_bo *bo, u64 bo_offset_or_userptr,
> + u64 addr, u64 range,
> u32 operation, u32 flags,
> u32 prefetch_region, u16 pat_index)
> {
> @@ -2262,6 +2300,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
> struct drm_gpuva_ops *ops;
> struct drm_gpuva_op *__op;
> struct drm_gpuvm_bo *vm_bo;
> + u64 range_end = addr + range;
> int err;
>
> lockdep_assert_held_write(&vm->lock);
> @@ -2323,14 +2362,61 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
> op->map.invalidate_on_bind =
> __xe_vm_needs_clear_scratch_pages(vm, flags);
> } else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
> - op->prefetch.region = prefetch_region;
> - }
> + struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
> +
> + if (!xe_vma_is_cpu_addr_mirror(vma)) {
> + op->prefetch.region = prefetch_region;
> + break;
> + }
>
> + struct drm_gpusvm_ctx ctx = {
> + .read_only = xe_vma_read_only(vma),
> + .devmem_possible = IS_DGFX(vm->xe) &&
> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
> + .check_pages_threshold = IS_DGFX(vm->xe) &&
> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
> + SZ_64K : 0,
> + };
> +
> + op->prefetch_range.region = prefetch_region;
> + struct xe_svm_range *svm_range;
> + int i = 0;
> +
> + xa_init(&op->prefetch_range.range);
> + op->prefetch_range.ranges_count = 0;
> +alloc_next_range:
> + svm_range = xe_svm_range_find_or_insert(vm, addr, vma, &ctx);
> +
> + if (PTR_ERR(svm_range) == -ENOENT)
> + break;
I missed this in the previous review. -ENOENT means a CPU VMA does not
exist. I think it is a fairly reasonable use case for a UMD to issue a
prefetch to a sparsely populated CPU VMA range, so I don't think breaking
here is correct; rather, goto alloc_next_range after adjusting to the
next address. This gets tricky, as we likely don't want to iterate 4k at
a time... Maybe we add a GPU SVM support function which wraps a CPU VMA
lookup function (find_vma, I think) to find the next CPU VMA and returns
its starting address; if the starting address is within the prefetch
range, we continue the walk.
Matt
> +
> + if (IS_ERR(svm_range)) {
> + err = PTR_ERR(svm_range);
> + goto unwind_prefetch_ops;
> + }
> +
> + xa_store(&op->prefetch_range.range, i, svm_range, GFP_KERNEL);
> + op->prefetch_range.ranges_count++;
> + vops->flags |= XE_VMA_OPS_HAS_SVM_PREFETCH;
> +
> + if (range_end > xe_svm_range_end(svm_range) &&
> + xe_svm_range_end(svm_range) < xe_vma_end(vma)) {
> + i++;
> + addr = xe_svm_range_end(svm_range);
> + goto alloc_next_range;
> + }
> + }
> print_op(vm->xe, __op);
> }
>
> return ops;
> +
> +unwind_prefetch_ops:
> + clean_svm_prefetch_in_gpuva_ops(ops);
> + drm_gpuva_ops_free(&vm->gpuvm, ops);
> + return ERR_PTR(err);
> }
> +
> ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
>
> static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> @@ -2645,8 +2731,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> return err;
> }
>
> - if (!xe_vma_is_cpu_addr_mirror(vma))
> + if (xe_vma_is_cpu_addr_mirror(vma))
> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask,
> + op->prefetch_range.ranges_count);
> + else
> xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
> +
> break;
> default:
> drm_warn(&vm->xe->drm, "NOT POSSIBLE");
> @@ -2772,6 +2862,58 @@ static int check_ufence(struct xe_vma *vma)
> return 0;
> }
>
> +static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
> + struct xe_vma_op *op)
> +{
> + int err = 0;
> +
> + if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
> + struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
> + struct drm_gpusvm_ctx ctx = {
> + .read_only = xe_vma_read_only(vma),
> + .devmem_possible = IS_DGFX(vm->xe) &&
> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
> + .check_pages_threshold = IS_DGFX(vm->xe) &&
> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
> + SZ_64K : 0,
> + };
> + struct xe_svm_range *svm_range;
> + struct xe_tile *tile;
> + u32 region;
> + int i;
> +
> + if (!xe_vma_is_cpu_addr_mirror(vma))
> + return 0;
> +
> + region = op->prefetch_range.region;
> +
> + /* TODO: Threading the migration */
> + for (i = 0; i < op->prefetch_range.ranges_count; i++) {
> + svm_range = xa_load(&op->prefetch_range.range, i);
> + if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
> + tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
> + err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
> + if (err) {
> + drm_err(&vm->xe->drm, "VRAM allocation failed, can be retried from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> + return -ENODATA;
> + }
> + }
> +
> + err = xe_svm_range_get_pages(vm, svm_range, &ctx);
> + if (err) {
> + if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM)
> + err = -ENODATA;
> +
> + drm_err(&vm->xe->drm, "Get pages failed, asid=%u, gpusvm=%p, errno=%pe\n",
> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
> + return err;
> + }
> + }
> + }
> + return err;
> +}
> +
> static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> struct xe_vma_op *op)
> {
> @@ -2809,7 +2951,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> case DRM_GPUVA_OP_PREFETCH:
> {
> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
> - u32 region = op->prefetch.region;
> + u32 region;
> +
> + if (xe_vma_is_cpu_addr_mirror(vma))
> + region = op->prefetch_range.region;
> + else
> + region = op->prefetch.region;
>
> xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
>
> @@ -2828,6 +2975,23 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> return err;
> }
>
> +static int xe_vma_ops_execute_ready(struct xe_vm *vm, struct xe_vma_ops *vops)
> +{
> + struct xe_vma_op *op;
> + int err;
> +
> + if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
> + return 0;
> +
> + list_for_each_entry(op, &vops->list, link) {
> + err = prefetch_ranges_lock_and_prep(vm, op);
> + if (err)
> + return err;
> + }
> +
> + return 0;
> +}
> +
> static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
> struct xe_vm *vm,
> struct xe_vma_ops *vops)
> @@ -2850,7 +3014,6 @@ static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
> vm->xe->vm_inject_error_position == FORCE_OP_ERROR_LOCK)
> return -ENOSPC;
> #endif
> -
> return 0;
> }
>
> @@ -3492,7 +3655,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
> u16 pat_index = bind_ops[i].pat_index;
>
> - ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
> + ops[i] = vm_bind_ioctl_ops_create(vm, &vops, bos[i], obj_offset,
> addr, range, op, flags,
> prefetch_region, pat_index);
> if (IS_ERR(ops[i])) {
> @@ -3525,6 +3688,10 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> if (err)
> goto unwind_ops;
>
> + err = xe_vma_ops_execute_ready(vm, &vops);
> + if (err)
> + goto unwind_ops;
> +
> fence = vm_bind_ioctl_ops_execute(vm, &vops);
> if (IS_ERR(fence))
> err = PTR_ERR(fence);
> @@ -3594,7 +3761,7 @@ struct dma_fence *xe_vm_bind_kernel_bo(struct xe_vm *vm, struct xe_bo *bo,
>
> xe_vma_ops_init(&vops, vm, q, NULL, 0);
>
> - ops = vm_bind_ioctl_ops_create(vm, bo, 0, addr, bo->size,
> + ops = vm_bind_ioctl_ops_create(vm, &vops, bo, 0, addr, bo->size,
> DRM_XE_VM_BIND_OP_MAP, 0, 0,
> vm->xe->pat.idx[cache_lvl]);
> if (IS_ERR(ops)) {
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges
2025-04-24 23:48 ` Matthew Brost
@ 2025-04-28 6:44 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-04-28 6:44 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 25-04-2025 05:18, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:01PM +0530, Himal Prasad Ghimiray wrote:
>> This commit adds prefetch support for SVM ranges, utilizing the
>> existing ioctl vm_bind functionality to achieve this.
>>
>> v2: rebase
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pt.c | 61 +++++++++---
>> drivers/gpu/drm/xe/xe_vm.c | 185 +++++++++++++++++++++++++++++++++++--
>> 2 files changed, 222 insertions(+), 24 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
>> index de4e3edda758..59dc065fae93 100644
>> --- a/drivers/gpu/drm/xe/xe_pt.c
>> +++ b/drivers/gpu/drm/xe/xe_pt.c
>> @@ -1458,7 +1458,8 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
>> struct xe_vm *vm = pt_update->vops->vm;
>> struct xe_vma_ops *vops = pt_update->vops;
>> struct xe_vma_op *op;
>> - int err;
>> + int ranges_count;
>> + int err, i;
>>
>> err = xe_pt_pre_commit(pt_update);
>> if (err)
>> @@ -1467,20 +1468,33 @@ static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
>> xe_svm_notifier_lock(vm);
>>
>> list_for_each_entry(op, &vops->list, link) {
>> - struct xe_svm_range *range = op->map_range.range;
>> + struct xe_svm_range *range = NULL;
>>
>> if (op->subop == XE_VMA_SUBOP_UNMAP_RANGE)
>> continue;
>>
>> - xe_svm_range_debug(range, "PRE-COMMIT");
>> -
>> - xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
>> - xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
>> + if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
>> + xe_assert(vm->xe,
>> + xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.prefetch.va)));
>> + ranges_count = op->prefetch_range.ranges_count;
>> + } else {
>> + xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
>> + xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
>> + ranges_count = 1;
>> + }
>>
>> - if (!xe_svm_range_pages_valid(range)) {
>> - xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
>> - xe_svm_notifier_unlock(vm);
>> - return -EAGAIN;
>> + for (i = 0; i < ranges_count; i++) {
>> + if (op->base.op == DRM_GPUVA_OP_PREFETCH)
>> + range = xa_load(&op->prefetch_range.range, i);
>> + else
>> + range = op->map_range.range;
>> + xe_svm_range_debug(range, "PRE-COMMIT");
>> +
>> + if (!xe_svm_range_pages_valid(range)) {
>> + xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
>> + xe_svm_notifier_unlock(vm);
>> + return -EAGAIN;
>> + }
>> }
>> }
>>
>> @@ -2065,11 +2079,21 @@ static int op_prepare(struct xe_vm *vm,
>> {
>> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>>
>> - if (xe_vma_is_cpu_addr_mirror(vma))
>> - break;
>> + if (xe_vma_is_cpu_addr_mirror(vma)) {
>> + struct xe_svm_range *range;
>> + int i;
>>
>> - err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
>> - pt_update_ops->wait_vm_kernel = true;
>> + for (i = 0; i < op->prefetch_range.ranges_count; i++) {
>> + range = xa_load(&op->prefetch_range.range, i);
>> + err = bind_range_prepare(vm, tile, pt_update_ops,
>> + vma, range);
>> + if (err)
>> + return err;
>> + }
>> + } else {
>> + err = bind_op_prepare(vm, tile, pt_update_ops, vma, false);
>> + pt_update_ops->wait_vm_kernel = true;
>> + }
>> break;
>> }
>> case DRM_GPUVA_OP_DRIVER:
>> @@ -2273,9 +2297,16 @@ static void op_commit(struct xe_vm *vm,
>> {
>> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>>
>> - if (!xe_vma_is_cpu_addr_mirror(vma))
>> + if (xe_vma_is_cpu_addr_mirror(vma)) {
>> + for (int i = 0 ; i < op->prefetch_range.ranges_count; i++) {
>> + struct xe_svm_range *range = xa_load(&op->prefetch_range.range, i);
>> +
>> + range_present_and_invalidated_tile(vm, range, tile->id);
>> + }
>> + } else {
>> bind_op_commit(vm, tile, pt_update_ops, vma, fence,
>> fence2, false);
>> + }
>> break;
>> }
>> case DRM_GPUVA_OP_DRIVER:
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 57af2c37f927..ffd7ad664921 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -798,10 +798,36 @@ static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
>> }
>> ALLOW_ERROR_INJECTION(xe_vma_ops_alloc, ERRNO);
>>
>> +static void clean_svm_prefetch_op(struct xe_vma_op *op)
>> +{
>> + struct xe_vma *vma;
>> +
>> + vma = gpuva_to_vma(op->base.prefetch.va);
>> +
>> + if (op->base.op == DRM_GPUVA_OP_PREFETCH && xe_vma_is_cpu_addr_mirror(vma)) {
>> + xa_destroy(&op->prefetch_range.range);
>> + op->prefetch_range.ranges_count = 0;
>> + }
>> +}
>> +
>> +static void clean_svm_prefetch_in_vma_ops(struct xe_vma_ops *vops)
>> +{
>> + struct xe_vma_op *op;
>> +
>> + if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
>> + return;
>> +
>> + list_for_each_entry(op, &vops->list, link) {
>> + clean_svm_prefetch_op(op);
>> + }
>> +}
>> +
>> static void xe_vma_ops_fini(struct xe_vma_ops *vops)
>> {
>> int i;
>>
>> + clean_svm_prefetch_in_vma_ops(vops);
>> +
>> for (i = 0; i < XE_MAX_TILES_PER_DEVICE; ++i)
>> kfree(vops->pt_update_ops[i].ops);
>> }
>> @@ -2248,13 +2274,25 @@ static bool __xe_vm_needs_clear_scratch_pages(struct xe_vm *vm, u32 bind_flags)
>> return true;
>> }
>>
>> +static void clean_svm_prefetch_in_gpuva_ops(struct drm_gpuva_ops *ops)
>> +{
>> + struct drm_gpuva_op *__op;
>> +
>> + drm_gpuva_for_each_op(__op, ops) {
>> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>> +
>> + clean_svm_prefetch_op(op);
>> + }
>> +}
>> +
>> /*
>> * Create operations list from IOCTL arguments, setup operations fields so parse
>> * and commit steps are decoupled from IOCTL arguments. This step can fail.
>> */
>> static struct drm_gpuva_ops *
>> -vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
>> - u64 bo_offset_or_userptr, u64 addr, u64 range,
>> +vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>> + struct xe_bo *bo, u64 bo_offset_or_userptr,
>> + u64 addr, u64 range,
>> u32 operation, u32 flags,
>> u32 prefetch_region, u16 pat_index)
>> {
>> @@ -2262,6 +2300,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
>> struct drm_gpuva_ops *ops;
>> struct drm_gpuva_op *__op;
>> struct drm_gpuvm_bo *vm_bo;
>> + u64 range_end = addr + range;
>> int err;
>>
>> lockdep_assert_held_write(&vm->lock);
>> @@ -2323,14 +2362,61 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_bo *bo,
>> op->map.invalidate_on_bind =
>> __xe_vm_needs_clear_scratch_pages(vm, flags);
>> } else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
>> - op->prefetch.region = prefetch_region;
>> - }
>> + struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>> +
>> + if (!xe_vma_is_cpu_addr_mirror(vma)) {
>> + op->prefetch.region = prefetch_region;
>> + break;
>> + }
>>
>> + struct drm_gpusvm_ctx ctx = {
>> + .read_only = xe_vma_read_only(vma),
>> + .devmem_possible = IS_DGFX(vm->xe) &&
>> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
>> + .check_pages_threshold = IS_DGFX(vm->xe) &&
>> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
>> + SZ_64K : 0,
>> + };
>> +
>> + op->prefetch_range.region = prefetch_region;
>> + struct xe_svm_range *svm_range;
>> + int i = 0;
>> +
>> + xa_init(&op->prefetch_range.range);
>> + op->prefetch_range.ranges_count = 0;
>> +alloc_next_range:
>> + svm_range = xe_svm_range_find_or_insert(vm, addr, vma, &ctx);
>> +
>> + if (PTR_ERR(svm_range) == -ENOENT)
>> + break;
>
> I missed this in previous review. -ENOENT means a CPU VMA does not
> exist. I think it is a fairly reasonable use case for a UMD to issue a
> prefetch to a sparsely populated CPU VMA range, so I don't think breaking
> here is correct, rather a goto alloc_next_range after adjusting to the
> next address. This gets tricky as we likely don't want to iterate 4k at
> a time... Maybe we add a GPU SVM support function which wraps a CPU VMA
> lookup function (find_vma, I think) to find the next CPU VMA and return
> its starting address; if the starting address is within the prefetch
> range we continue the walk.
Very valid point. Will add it in the next version.
>
> Matt
>
>> +
>> + if (IS_ERR(svm_range)) {
>> + err = PTR_ERR(svm_range);
>> + goto unwind_prefetch_ops;
>> + }
>> +
>> + xa_store(&op->prefetch_range.range, i, svm_range, GFP_KERNEL);
>> + op->prefetch_range.ranges_count++;
>> + vops->flags |= XE_VMA_OPS_HAS_SVM_PREFETCH;
>> +
>> + if (range_end > xe_svm_range_end(svm_range) &&
>> + xe_svm_range_end(svm_range) < xe_vma_end(vma)) {
>> + i++;
>> + addr = xe_svm_range_end(svm_range);
>> + goto alloc_next_range;
>> + }
>> + }
>> print_op(vm->xe, __op);
>> }
>>
>> return ops;
>> +
>> +unwind_prefetch_ops:
>> + clean_svm_prefetch_in_gpuva_ops(ops);
>> + drm_gpuva_ops_free(&vm->gpuvm, ops);
>> + return ERR_PTR(err);
>> }
>> +
>> ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
>>
>> static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
>> @@ -2645,8 +2731,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>> return err;
>> }
>>
>> - if (!xe_vma_is_cpu_addr_mirror(vma))
>> + if (xe_vma_is_cpu_addr_mirror(vma))
>> + xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask,
>> + op->prefetch_range.ranges_count);
>> + else
>> xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask, 1);
>> +
>> break;
>> default:
>> drm_warn(&vm->xe->drm, "NOT POSSIBLE");
>> @@ -2772,6 +2862,58 @@ static int check_ufence(struct xe_vma *vma)
>> return 0;
>> }
>>
>> +static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
>> + struct xe_vma_op *op)
>> +{
>> + int err = 0;
>> +
>> + if (op->base.op == DRM_GPUVA_OP_PREFETCH) {
>> + struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>> + struct drm_gpusvm_ctx ctx = {
>> + .read_only = xe_vma_read_only(vma),
>> + .devmem_possible = IS_DGFX(vm->xe) &&
>> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR),
>> + .check_pages_threshold = IS_DGFX(vm->xe) &&
>> + IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR) ?
>> + SZ_64K : 0,
>> + };
>> + struct xe_svm_range *svm_range;
>> + struct xe_tile *tile;
>> + u32 region;
>> + int i;
>> +
>> + if (!xe_vma_is_cpu_addr_mirror(vma))
>> + return 0;
>> +
>> + region = op->prefetch_range.region;
>> +
>> + /* TODO: Threading the migration */
>> + for (i = 0; i < op->prefetch_range.ranges_count; i++) {
>> + svm_range = xa_load(&op->prefetch_range.range, i);
>> + if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
>> + tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
>> + err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
>> + if (err) {
>> + drm_err(&vm->xe->drm, "VRAM allocation failed, can be retried from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
>> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>> + return -ENODATA;
>> + }
>> + }
>> +
>> + err = xe_svm_range_get_pages(vm, svm_range, &ctx);
>> + if (err) {
>> + if (err == -EOPNOTSUPP || err == -EFAULT || err == -EPERM)
>> + err = -ENODATA;
>> +
>> + drm_err(&vm->xe->drm, "Get pages failed, asid=%u, gpusvm=%p, errno=%pe\n",
>> + vm->usm.asid, &vm->svm.gpusvm, ERR_PTR(err));
>> + return err;
>> + }
>> + }
>> + }
>> + return err;
>> +}
>> +
>> static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>> struct xe_vma_op *op)
>> {
>> @@ -2809,7 +2951,12 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>> case DRM_GPUVA_OP_PREFETCH:
>> {
>> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>> - u32 region = op->prefetch.region;
>> + u32 region;
>> +
>> + if (xe_vma_is_cpu_addr_mirror(vma))
>> + region = op->prefetch_range.region;
>> + else
>> + region = op->prefetch.region;
>>
>> xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
>>
>> @@ -2828,6 +2975,23 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>> return err;
>> }
>>
>> +static int xe_vma_ops_execute_ready(struct xe_vm *vm, struct xe_vma_ops *vops)
>> +{
>> + struct xe_vma_op *op;
>> + int err;
>> +
>> + if (!(vops->flags & XE_VMA_OPS_HAS_SVM_PREFETCH))
>> + return 0;
>> +
>> + list_for_each_entry(op, &vops->list, link) {
>> + err = prefetch_ranges_lock_and_prep(vm, op);
>> + if (err)
>> + return err;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
>> struct xe_vm *vm,
>> struct xe_vma_ops *vops)
>> @@ -2850,7 +3014,6 @@ static int vm_bind_ioctl_ops_lock_and_prep(struct drm_exec *exec,
>> vm->xe->vm_inject_error_position == FORCE_OP_ERROR_LOCK)
>> return -ENOSPC;
>> #endif
>> -
>> return 0;
>> }
>>
>> @@ -3492,7 +3655,7 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>> u32 prefetch_region = bind_ops[i].prefetch_mem_region_instance;
>> u16 pat_index = bind_ops[i].pat_index;
>>
>> - ops[i] = vm_bind_ioctl_ops_create(vm, bos[i], obj_offset,
>> + ops[i] = vm_bind_ioctl_ops_create(vm, &vops, bos[i], obj_offset,
>> addr, range, op, flags,
>> prefetch_region, pat_index);
>> if (IS_ERR(ops[i])) {
>> @@ -3525,6 +3688,10 @@ int xe_vm_bind_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>> if (err)
>> goto unwind_ops;
>>
>> + err = xe_vma_ops_execute_ready(vm, &vops);
>> + if (err)
>> + goto unwind_ops;
>> +
>> fence = vm_bind_ioctl_ops_execute(vm, &vops);
>> if (IS_ERR(fence))
>> err = PTR_ERR(fence);
>> @@ -3594,7 +3761,7 @@ struct dma_fence *xe_vm_bind_kernel_bo(struct xe_vm *vm, struct xe_bo *bo,
>>
>> xe_vma_ops_init(&vops, vm, q, NULL, 0);
>>
>> - ops = vm_bind_ioctl_ops_create(vm, bo, 0, addr, bo->size,
>> + ops = vm_bind_ioctl_ops_create(vm, &vops, bo, 0, addr, bo->size,
>> DRM_XE_VM_BIND_OP_MAP, 0, 0,
>> vm->xe->pat.idx[cache_lvl]);
>> if (IS_ERR(ops)) {
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-04-07 10:17 ` [PATCH v2 17/32] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
2025-04-17 18:19 ` Souza, Jose
@ 2025-05-02 14:00 ` Thomas Hellström
2025-05-20 8:13 ` Ghimiray, Himal Prasad
2025-05-20 8:49 ` Ghimiray, Himal Prasad
1 sibling, 2 replies; 120+ messages in thread
From: Thomas Hellström @ 2025-05-02 14:00 UTC (permalink / raw)
To: Himal Prasad Ghimiray, intel-xe; +Cc: matthew.brost
On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
> This commit introduces a new madvise interface to support
> driver-specific ioctl operations. The madvise interface allows for
> more
> efficient memory management by providing hints to the driver about
> the
> expected memory usage and pte update policy for gpuvma.
>
> Signed-off-by: Himal Prasad Ghimiray
> <himal.prasad.ghimiray@intel.com>
> ---
> include/uapi/drm/xe_drm.h | 97
> +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 97 insertions(+)
>
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index 9c08738c3b91..aaf515df3a83 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -81,6 +81,7 @@ extern "C" {
> * - &DRM_IOCTL_XE_EXEC
> * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> * - &DRM_IOCTL_XE_OBSERVATION
> + * - &DRM_IOCTL_XE_MADVISE
> */
>
> /*
> @@ -102,6 +103,7 @@ extern "C" {
> #define DRM_XE_EXEC 0x09
> #define DRM_XE_WAIT_USER_FENCE 0x0a
> #define DRM_XE_OBSERVATION 0x0b
> +#define DRM_XE_MADVISE 0x0c
>
> /* Must be kept compact -- no holes */
>
> @@ -117,6 +119,7 @@ extern "C" {
> #define
> DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
> #define
> DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> #define
> DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> +#define
> DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
>
> /**
> * DOC: Xe IOCTL Extensions
> @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
> __u64 sampling_rates[];
> };
>
> +struct drm_xe_madvise_ops {
Suggest using extensions also for the ops, like for vm_bind, since we
might come up with complicated ops in the future that don't fit the
union + resvd below.
> + /** @start: start of the virtual address range */
> + __u64 start;
> +
> + /** @size: size of the virtual address range */
> + __u64 range;
> +
> +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
Is UMD currently really using and exercising PREFERRED_LOC? If not, I
suggest removing this op and inventing a reasonable default behaviour
until multi-device is in place.
> +#define DRM_XE_VMA_ATTR_ATOMIC 1
> +#define DRM_XE_VMA_ATTR_PAT 2
> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
> + /** @type: type of attribute */
> + __u32 type;
> +
> + /** @pad: MBZ */
> + __u32 pad;
> +
> + union {
> + struct {
> +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
> +#define DRM_XE_VMA_ATOMIC_DEVICE 1
> +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
> +#define DRM_XE_VMA_ATOMIC_CPU 3
> + /** @val: value of atomic operation*/
> + __u32 val;
> +
> + /** @reserved: Reserved */
> + __u32 reserved;
> + } atomic;
> +
> + struct {
> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
> +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
I think the purged state, at least on i915, was only known to the KMD
(so shouldn't really be visible in this header). Also we should
probably define the semantics here if
a) There are multiple gpu vms with conflicting purgeable state.
b) What happens if we call dontneed and the bo is deeply pipelined?
c) What if a willneed madvise fails due to the bo being purged? And
that op is embedded in an array of unrelated ops? Should it really fail
the whole IOCTL?
> + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE
> */
> + __u32 val;
> +
> + /** @reserved: Reserved */
> + __u32 reserved;
> + } purge_state_val;
> +
> + struct {
> + /** @pat_index */
> + __u32 val;
> +
> + /** @reserved: Reserved */
> + __u32 reserved;
> + } pat_index;
> +
> + /** @preferred_mem_loc: preferred memory location */
> + struct {
> + __u32 devmem_fd;
> +
> +#define MIGRATE_ALL_PAGES 0
> +#define MIGRATE_ONLY_SYSTEM_PAGES 1
> + __u32 migration_policy;
> + } preferred_mem_loc;
> + };
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +};
> +
> +/**
> + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
> + *
> + * Set memory attributes to a virtual address range
> + */
> +struct drm_xe_madvise {
> + /** @extensions: Pointer to the first extension struct, if
> any */
> + __u64 extensions;
> +
> + /** @vm_id: vm_id of the virtual range */
> + __u32 vm_id;
> +
> + /** @num_ops: number of madvises in ioctl */
> + __u32 num_ops;
Should we really support an array of ops here given the experience we
had with rollbacks on VM_bind? Also WRT this, please see the
purgeable state above.
> +
> + union {
> + /** @ops: used if num_ops == 1 */
> + struct drm_xe_madvise_ops ops;
> +
> + /**
> + * @vector_of_ops: userptr to array of struct
> + * drm_xe_vm_madvise_op if num_ops > 1
> + */
> + __u64 vector_of_ops;
> + };
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +
> +};
> +
> #if defined(__cplusplus)
> }
> #endif
/Thomas
* Re: [PATCH v2 20/32] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
2025-04-07 10:17 ` [PATCH v2 20/32] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
@ 2025-05-13 2:36 ` Matthew Brost
2025-05-14 18:40 ` Matthew Brost
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-13 2:36 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:07PM +0530, Himal Prasad Ghimiray wrote:
> This change simplifies the logic by ensuring that remapped previous or
> next VMAs are created with the same memory attributes as the original VMA.
> By passing struct xe_vma_mem_attr as a parameter, we maintain consistency
> in memory attributes.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 37 ++++++++++++++++++++++++++-----------
> 1 file changed, 26 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 59e2a951db25..6e5ba58d475e 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2421,8 +2421,16 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>
> ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
>
> +static void cp_mem_attr(struct xe_vma_mem_attr *dst, struct xe_vma_mem_attr *src)
Drive-by comment - not needed:
memcpy(dst, src, sizeof(*src));
Matt
> +{
> + dst->preferred_loc.migration_policy = src->preferred_loc.migration_policy;
> + dst->preferred_loc.devmem_fd = src->preferred_loc.devmem_fd;
> + dst->atomic_access = src->atomic_access;
> + dst->pat_index = src->pat_index;
> +}
> +
> static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> - u16 pat_index, unsigned int flags)
> + struct xe_vma_mem_attr attr, unsigned int flags)
> {
> struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
> struct drm_exec exec;
> @@ -2451,7 +2459,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> }
> vma = xe_vma_create(vm, bo, op->gem.offset,
> op->va.addr, op->va.addr +
> - op->va.range - 1, pat_index, flags);
> + op->va.range - 1, attr.pat_index, flags);
> if (IS_ERR(vma))
> goto err_unlock;
>
> @@ -2468,14 +2476,10 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> prep_vma_destroy(vm, vma, false);
> xe_vma_destroy_unlocked(vma);
> vma = ERR_PTR(err);
> + } else {
> + cp_mem_attr(&vma->attr, &attr);
> }
>
> - /*TODO: assign devmem_fd of local vram once multi device
> - * support is added.
> - */
> - vma->attr.preferred_loc.devmem_fd = 1;
> - vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
> -
> return vma;
> }
>
> @@ -2600,6 +2604,17 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> switch (op->base.op) {
> case DRM_GPUVA_OP_MAP:
> {
> + struct xe_vma_mem_attr default_attr = {
> + .preferred_loc = {
> + /*TODO: assign devmem_fd of local vram
> + * once multi device support is added.
> + */
> + .devmem_fd = IS_DGFX(vm->xe) ? 1 : 0,
> + .migration_policy = 1, },
> + .atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED,
> + .pat_index = op->map.pat_index
> + };
> +
> flags |= op->map.read_only ?
> VMA_CREATE_FLAG_READ_ONLY : 0;
> flags |= op->map.is_null ?
> @@ -2609,7 +2624,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> flags |= op->map.is_cpu_addr_mirror ?
> VMA_CREATE_FLAG_IS_SYSTEM_ALLOCATOR : 0;
>
> - vma = new_vma(vm, &op->base.map, op->map.pat_index,
> + vma = new_vma(vm, &op->base.map, default_attr,
> flags);
> if (IS_ERR(vma))
> return PTR_ERR(vma);
> @@ -2657,7 +2672,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>
> if (op->base.remap.prev) {
> vma = new_vma(vm, op->base.remap.prev,
> - old->attr.pat_index, flags);
> + old->attr, flags);
> if (IS_ERR(vma))
> return PTR_ERR(vma);
>
> @@ -2687,7 +2702,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>
> if (op->base.remap.next) {
> vma = new_vma(vm, op->base.remap.next,
> - old->attr.pat_index, flags);
> + old->attr, flags);
> if (IS_ERR(vma))
> return PTR_ERR(vma);
>
> --
> 2.34.1
>
* Re: [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma
2025-04-07 10:17 ` [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
@ 2025-05-14 18:36 ` Matthew Brost
2025-05-20 9:27 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 18:36 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:05PM +0530, Himal Prasad Ghimiray wrote:
> The attribute of xe_vma will determine the migration policy and the
> encoding of the page table entries (PTEs) for that vma.
> This attribute helps manage how memory pages are moved and how their
> addresses are translated. It will be used by madvise to set the
> behavior of the vma.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 6 ++++++
> drivers/gpu/drm/xe/xe_vm_types.h | 20 ++++++++++++++++++++
> 2 files changed, 26 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 27a8dbe709c2..1ff9e477e061 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2470,6 +2470,12 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> vma = ERR_PTR(err);
> }
>
> + /*TODO: assign devmem_fd of local vram once multi device
> + * support is added.
> + */
> + vma->attr.preferred_loc.devmem_fd = 1;
Assigning a value of '1' is a bit odd... I'd prefer using a define or
something similar to indicate the intended behavior. I noticed a few
other assignments to '1' in the final result—same comment applies to
those.
> + vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
> +
> return vma;
> }
>
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index d3c1209348e9..5f5feffecb82 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -77,6 +77,19 @@ struct xe_userptr {
> #endif
> };
>
> +/**
> + * struct xe_vma_mem_attr - memory attributes associated with vma
> + */
> +struct xe_vma_mem_attr {
> + /** @preferred_loc: perferred memory_location*/
> + struct {
> + u32 migration_policy; /* represents migration policies */
> + u32 devmem_fd; /* devmem_fd used for determining pagemap_fd requested by user */
> + } preferred_loc;
I'm a little unclear on how these variables work.
In the uAPI for migration_policy, I see MIGRATE_ALL_PAGES and
MIGRATE_ONLY_SYSTEM_PAGES (these should probably be normalized with a
DRM_XE_* prefix, by the way), but it's unclear to me what exactly these
mean or how they're used based on the final result—could you clarify?
Likewise, I'm confused about the devmem_fd usage. It can either be
assigned a devmem_fd from the uAPI, but in some cases, it's interpreted
as a region. I assume this is anticipating multi-GPU support, but again,
the plan isn't clear to me. Could you explain?
In general I agree with the idea of xe_vma_mem_attr though.
Matt
> + /** @atomic_access: The atomic access type for the vma */
> + u32 atomic_access;
> +};
> +
> struct xe_vma {
> /** @gpuva: Base GPUVA object */
> struct drm_gpuva gpuva;
> @@ -128,6 +141,13 @@ struct xe_vma {
> * Needs to be signalled before UNMAP can be processed.
> */
> struct xe_user_fence *ufence;
> +
> + /**
> + * @attr: The attributes of vma which determines the migration policy
> + * and encoding of the PTEs for this vma.
> + */
> + struct xe_vma_mem_attr attr;
> +
> };
>
> /**
> --
> 2.34.1
>
* Re: [PATCH v2 19/32] drm/xe/vma: Move pat_index to vma attributes
2025-04-07 10:17 ` [PATCH v2 19/32] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
@ 2025-05-14 18:37 ` Matthew Brost
0 siblings, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 18:37 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:06PM +0530, Himal Prasad Ghimiray wrote:
> The PAT index determines how PTEs are encoded and can be modified by
> madvise. Therefore, it is now part of the vma attributes.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
I think this patch makes sense assuming we land the previous patch.
Matt
> ---
> drivers/gpu/drm/xe/xe_pt.c | 2 +-
> drivers/gpu/drm/xe/xe_vm.c | 6 +++---
> drivers/gpu/drm/xe/xe_vm_types.h | 10 ++++------
> 3 files changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 59dc065fae93..2479d830d90a 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -518,7 +518,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
> {
> struct xe_pt_stage_bind_walk *xe_walk =
> container_of(walk, typeof(*xe_walk), base);
> - u16 pat_index = xe_walk->vma->pat_index;
> + u16 pat_index = xe_walk->vma->attr.pat_index;
> struct xe_pt *xe_parent = container_of(parent, typeof(*xe_parent), base);
> struct xe_vm *vm = xe_walk->vm;
> struct xe_pt *xe_child;
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 1ff9e477e061..59e2a951db25 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1224,7 +1224,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
> if (vm->xe->info.has_atomic_enable_pte_bit)
> vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
>
> - vma->pat_index = pat_index;
> + vma->attr.pat_index = pat_index;
>
> if (bo) {
> struct drm_gpuvm_bo *vm_bo;
> @@ -2657,7 +2657,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>
> if (op->base.remap.prev) {
> vma = new_vma(vm, op->base.remap.prev,
> - old->pat_index, flags);
> + old->attr.pat_index, flags);
> if (IS_ERR(vma))
> return PTR_ERR(vma);
>
> @@ -2687,7 +2687,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>
> if (op->base.remap.next) {
> vma = new_vma(vm, op->base.remap.next,
> - old->pat_index, flags);
> + old->attr.pat_index, flags);
> if (IS_ERR(vma))
> return PTR_ERR(vma);
>
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 5f5feffecb82..2fcc48d9d776 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -88,6 +88,10 @@ struct xe_vma_mem_attr {
> } preferred_loc;
> /** @atomic_access: The atomic access type for the vma */
> u32 atomic_access;
> + /**
> + * @pat_index: The pat index to use when encoding the PTEs for this vma.
> + */
> + u16 pat_index;
> };
>
> struct xe_vma {
> @@ -131,11 +135,6 @@ struct xe_vma {
> /** @tile_staged: bind is staged for this VMA */
> u8 tile_staged;
>
> - /**
> - * @pat_index: The pat index to use when encoding the PTEs for this vma.
> - */
> - u16 pat_index;
> -
> /**
> * @ufence: The user fence that was provided with MAP.
> * Needs to be signalled before UNMAP can be processed.
> @@ -147,7 +146,6 @@ struct xe_vma {
> * and encoding of the PTEs for this vma.
> */
> struct xe_vma_mem_attr attr;
> -
> };
>
> /**
> --
> 2.34.1
>
* Re: [PATCH v2 20/32] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
2025-05-13 2:36 ` Matthew Brost
@ 2025-05-14 18:40 ` Matthew Brost
2025-05-20 9:28 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 18:40 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, May 12, 2025 at 07:36:56PM -0700, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:07PM +0530, Himal Prasad Ghimiray wrote:
> > This change simplifies the logic by ensuring that remapped previous or
> > next VMAs are created with the same memory attributes as the original VMA.
> > By passing struct xe_vma_mem_attr as a parameter, we maintain consistency
> > in memory attributes.
> >
> > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_vm.c | 37 ++++++++++++++++++++++++++-----------
> > 1 file changed, 26 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > index 59e2a951db25..6e5ba58d475e 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.c
> > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > @@ -2421,8 +2421,16 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
> >
> > ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
> >
> > +static void cp_mem_attr(struct xe_vma_mem_attr *dst, struct xe_vma_mem_attr *src)
>
> Drive-by comment - not needed:
>
> memcpy(dst, src, sizeof(*src));
>
Actually you can just do:
*dst = *src;
> Matt
>
> > +{
> > + dst->preferred_loc.migration_policy = src->preferred_loc.migration_policy;
> > + dst->preferred_loc.devmem_fd = src->preferred_loc.devmem_fd;
> > + dst->atomic_access = src->atomic_access;
> > + dst->pat_index = src->pat_index;
> > +}
> > +
> > static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> > - u16 pat_index, unsigned int flags)
> > + struct xe_vma_mem_attr attr, unsigned int flags)
> > {
> > struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
> > struct drm_exec exec;
> > @@ -2451,7 +2459,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> > }
> > vma = xe_vma_create(vm, bo, op->gem.offset,
> > op->va.addr, op->va.addr +
> > - op->va.range - 1, pat_index, flags);
> > + op->va.range - 1, attr.pat_index, flags);
> > if (IS_ERR(vma))
> > goto err_unlock;
> >
> > @@ -2468,14 +2476,10 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> > prep_vma_destroy(vm, vma, false);
> > xe_vma_destroy_unlocked(vma);
> > vma = ERR_PTR(err);
> > + } else {
> > + cp_mem_attr(&vma->attr, &attr);
> > }
> >
> > - /*TODO: assign devmem_fd of local vram once multi device
> > - * support is added.
> > - */
> > - vma->attr.preferred_loc.devmem_fd = 1;
> > - vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
> > -
> > return vma;
> > }
> >
> > @@ -2600,6 +2604,17 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> > switch (op->base.op) {
> > case DRM_GPUVA_OP_MAP:
> > {
> > + struct xe_vma_mem_attr default_attr = {
> > + .preferred_loc = {
> > + /*TODO: assign devmem_fd of local vram
> > + * once multi device support is added.
> > + */
> > + .devmem_fd = IS_DGFX(vm->xe) ? 1 : 0,
> > + .migration_policy = 1, },
There are a couple of magic '1's here, which I suggested avoiding in patch 18.
Same question as patch 18 on the usage of these.
In general I think this patch makes sense if the two previous patches
land.
Matt
> > + .atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED,
> > + .pat_index = op->map.pat_index
> > + };
> > +
> > flags |= op->map.read_only ?
> > VMA_CREATE_FLAG_READ_ONLY : 0;
> > flags |= op->map.is_null ?
> > @@ -2609,7 +2624,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> > flags |= op->map.is_cpu_addr_mirror ?
> > VMA_CREATE_FLAG_IS_SYSTEM_ALLOCATOR : 0;
> >
> > - vma = new_vma(vm, &op->base.map, op->map.pat_index,
> > + vma = new_vma(vm, &op->base.map, default_attr,
> > flags);
> > if (IS_ERR(vma))
> > return PTR_ERR(vma);
> > @@ -2657,7 +2672,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> >
> > if (op->base.remap.prev) {
> > vma = new_vma(vm, op->base.remap.prev,
> > - old->attr.pat_index, flags);
> > + old->attr, flags);
> > if (IS_ERR(vma))
> > return PTR_ERR(vma);
> >
> > @@ -2687,7 +2702,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> >
> > if (op->base.remap.next) {
> > vma = new_vma(vm, op->base.remap.next,
> > - old->attr.pat_index, flags);
> > + old->attr, flags);
> > if (IS_ERR(vma))
> > return PTR_ERR(vma);
> >
> > --
> > 2.34.1
> >
* Re: [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public
2025-04-07 10:17 ` [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
2025-04-08 1:49 ` kernel test robot
@ 2025-05-14 18:47 ` Matthew Brost
1 sibling, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 18:47 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:08PM +0530, Himal Prasad Ghimiray wrote:
> The drm_gpusvm_for_each_notifier, drm_gpusvm_for_each_notifier_safe and
> drm_gpusvm_for_each_range_safe macros are useful for locating notifiers
> and ranges within a user-specified range. By making these macros public,
> we enable broader access and utility for developers who need to leverage
> them in their implementations.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Assuming you have users of these, it makes sense to make them public.
A couple of nits below.
> ---
> drivers/gpu/drm/drm_gpusvm.c | 89 +--------------------------------
> include/drm/drm_gpusvm.h | 96 +++++++++++++++++++++++++++++++++++-
> 2 files changed, 96 insertions(+), 89 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 149ac56eff70..09708eef1c86 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -391,97 +391,10 @@ struct drm_gpusvm_range *
> drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
> unsigned long end)
> {
> - struct interval_tree_node *itree;
> -
> - itree = interval_tree_iter_first(¬ifier->root, start, end - 1);
> -
> - if (itree)
> - return container_of(itree, struct drm_gpusvm_range, itree);
> - else
> - return NULL;
> + return __drm_gpusvm_range_find(notifier, start, end);
Why do you make __drm_gpusvm_range_find inline? Can't you just call
drm_gpusvm_range_find in public iterators?
Also I think rather than inlines for functions which touch the interval
tree, I'd leave those functions in drm_gpusvm.c - that is what
drm_gpuvm.c does.
> }
> EXPORT_SYMBOL_GPL(drm_gpusvm_range_find);
>
> -/**
> - * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
> - * @range__: Iterator variable for the ranges
> - * @next__: Iterator variable for the ranges temporay storage
> - * @notifier__: Pointer to the GPU SVM notifier
> - * @start__: Start address of the range
> - * @end__: End address of the range
> - *
> - * This macro is used to iterate over GPU SVM ranges in a notifier while
> - * removing ranges from it.
> - */
> -#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
> - for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
> - (next__) = __drm_gpusvm_range_next(range__); \
> - (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
> - (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
> -
> -/**
> - * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
> - * @notifier: a pointer to the current drm_gpusvm_notifier
> - *
> - * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
> - * the current notifier is the last one or if the input notifier is
> - * NULL.
> - */
> -static struct drm_gpusvm_notifier *
> -__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
> -{
> - if (notifier && !list_is_last(¬ifier->entry,
> - ¬ifier->gpusvm->notifier_list))
> - return list_next_entry(notifier, entry);
> -
> - return NULL;
> -}
> -
> -static struct drm_gpusvm_notifier *
> -notifier_iter_first(struct rb_root_cached *root, unsigned long start,
> - unsigned long last)
> -{
So make this one an exported function, I think:
s/notifier_iter_first/drm_gpusvm_notifier_find
And adjust the arguments.
> - struct interval_tree_node *itree;
> -
> - itree = interval_tree_iter_first(root, start, last);
> -
> - if (itree)
> - return container_of(itree, struct drm_gpusvm_notifier, itree);
> - else
> - return NULL;
> -}
> -
> -/**
> - * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
> - * @notifier__: Iterator variable for the notifiers
> - * @notifier__: Pointer to the GPU SVM notifier
> - * @start__: Start address of the notifier
> - * @end__: End address of the notifier
> - *
> - * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
> - */
> -#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
> - for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
> - (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
> - (notifier__) = __drm_gpusvm_notifier_next(notifier__))
> -
> -/**
> - * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
> - * @notifier__: Iterator variable for the notifiers
> - * @next__: Iterator variable for the notifiers temporay storage
> - * @notifier__: Pointer to the GPU SVM notifier
> - * @start__: Start address of the notifier
> - * @end__: End address of the notifier
> - *
> - * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
> - * removing notifiers from it.
> - */
> -#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
> - for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
> - (next__) = __drm_gpusvm_notifier_next(notifier__); \
> - (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
> - (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
> -
> /**
> * drm_gpusvm_notifier_invalidate() - Invalidate a GPU SVM notifier.
> * @mni: Pointer to the mmu_interval_notifier structure.
> diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
> index 8093cc6ab1f4..8b70361db351 100644
> --- a/include/drm/drm_gpusvm.h
> +++ b/include/drm/drm_gpusvm.h
> @@ -491,6 +491,20 @@ __drm_gpusvm_range_next(struct drm_gpusvm_range *range)
> return NULL;
> }
>
> +static inline struct drm_gpusvm_range *
> +__drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier,
> + unsigned long start, unsigned long end)
> +{
> + struct interval_tree_node *itree;
> +
> + itree = interval_tree_iter_first(¬ifier->root, start, end - 1);
> +
> + if (itree)
> + return container_of(itree, struct drm_gpusvm_range, itree);
> + else
> + return NULL;
> +}
> +
> /**
> * drm_gpusvm_for_each_range() - Iterate over GPU SVM ranges in a notifier
> * @range__: Iterator variable for the ranges. If set, it indicates the start of
> @@ -504,8 +518,88 @@ __drm_gpusvm_range_next(struct drm_gpusvm_range *range)
> */
> #define drm_gpusvm_for_each_range(range__, notifier__, start__, end__) \
> for ((range__) = (range__) ?: \
> - drm_gpusvm_range_find((notifier__), (start__), (end__)); \
> + __drm_gpusvm_range_find((notifier__), (start__), (end__)); \
> (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
> (range__) = __drm_gpusvm_range_next(range__))
>
> +/**
> + * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
> + * @range__: Iterator variable for the ranges
> + * @next__: Iterator variable for the ranges temporay storage
> + * @notifier__: Pointer to the GPU SVM notifier
> + * @start__: Start address of the range
> + * @end__: End address of the range
> + *
> + * This macro is used to iterate over GPU SVM ranges in a notifier while
> + * removing ranges from it.
> + */
> +#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
> + for ((range__) = __drm_gpusvm_range_find((notifier__), (start__), (end__)), \
> + (next__) = __drm_gpusvm_range_next(range__); \
> + (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
> + (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
> +
> +/**
> + * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
> + * @notifier: a pointer to the current drm_gpusvm_notifier
> + *
> + * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
> + * the current notifier is the last one or if the input notifier is
> + * NULL.
> + */
> +static inline struct drm_gpusvm_notifier *
> +__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
> +{
This one seems reasonable as an inline.
Matt
> + if (notifier && !list_is_last(¬ifier->entry,
> + ¬ifier->gpusvm->notifier_list))
> + return list_next_entry(notifier, entry);
> +
> + return NULL;
> +}
> +
> +static inline struct drm_gpusvm_notifier *
> +notifier_iter_first(struct rb_root_cached *root, unsigned long start,
> + unsigned long last)
> +{
> + struct interval_tree_node *itree;
> +
> + itree = interval_tree_iter_first(root, start, last);
> +
> + if (itree)
> + return container_of(itree, struct drm_gpusvm_notifier, itree);
> + else
> + return NULL;
> +}
> +
> +/**
> + * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
> + * @notifier__: Iterator variable for the notifiers
> + * @notifier__: Pointer to the GPU SVM notifier
> + * @start__: Start address of the notifier
> + * @end__: End address of the notifier
> + *
> + * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
> + */
> +#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
> + for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
> + (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
> + (notifier__) = __drm_gpusvm_notifier_next(notifier__))
> +
> +/**
> + * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
> + * @notifier__: Iterator variable for the notifiers
> + * @next__: Iterator variable for the notifiers temporay storage
> + * @notifier__: Pointer to the GPU SVM notifier
> + * @start__: Start address of the notifier
> + * @end__: End address of the notifier
> + *
> + * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
> + * removing notifiers from it.
> + */
> +#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
> + for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
> + (next__) = __drm_gpusvm_notifier_next(notifier__); \
> + (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
> + (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
> +
> #endif /* __DRM_GPUSVM_H__ */
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 22/32] drm/xe/svm: Split system allocator vma incase of madvise call
2025-04-07 10:17 ` [PATCH v2 22/32] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
@ 2025-05-14 19:01 ` Matthew Brost
2025-05-20 9:46 ` Ghimiray, Himal Prasad
2025-05-14 19:02 ` Matthew Brost
1 sibling, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 19:01 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:09PM +0530, Himal Prasad Ghimiray wrote:
> If the start or end of input address range lies within system allocator
> vma split the vma to create new vma's as per input range.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 84 ++++++++++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 2 +
> 2 files changed, 86 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 6e5ba58d475e..c7c012afe9eb 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -4127,3 +4127,87 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> }
> kvfree(snap);
> }
> +
> +int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> +{
> + struct xe_vma_ops vops;
> + struct drm_gpuva_ops *ops = NULL;
> + struct drm_gpuva_op *__op;
> + bool is_cpu_addr_mirror = false;
> + int err;
> +
> + vm_dbg(&vm->xe->drm, "MADVISE IN: addr=0x%016llx, size=0x%016llx", start, range);
> +
Add a lockdep assert for vm->lock - I assume we should hold it in write
mode here.
> + if (start & ~PAGE_MASK)
> + start = ALIGN_DOWN(start, SZ_4K);
> +
> + if (range & ~PAGE_MASK)
> + range = ALIGN(range, SZ_4K);
We discussed this - not needed, as the UMD should align to the GPU page
size. BTW - there's a mismatch of PAGE_MASK and SZ_4K here - they mean
different things, as PAGE_SIZE depends on the CPU arch. We do this all
over the driver, but let's not make it worse.
> +
> + vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
> + ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, start, range,
> + DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE,
> + NULL, start);
> + if (IS_ERR(ops)) {
> + err = PTR_ERR(ops);
> + goto unwind_ops;
You don't need to unwind here, you can just return.
> + }
> +
> + if (list_empty(&ops->list)) {
> + err = 0;
> + goto free_ops;
> + }
> +
> + drm_gpuva_for_each_op(__op, ops) {
> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> +
> + if (__op->op == DRM_GPUVA_OP_REMAP) {
> + if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
> + is_cpu_addr_mirror = true;
> + else
> + is_cpu_addr_mirror = false;
> + }
> +
> + if (__op->op == DRM_GPUVA_OP_MAP)
> + op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
> +
I think this is right, but it took a minute to remember why this code is
needed. I think we need a comment here on is_cpu_addr_mirror usage.
> + print_op(vm->xe, __op);
> + }
> +
> + xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
> + err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
> + if (err)
> + goto unwind_ops;
> +
> + xe_vm_lock(vm, false);
> +
> + drm_gpuva_for_each_op(__op, ops) {
> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> + struct xe_vma *vma;
> + struct xe_vma_mem_attr temp_attr;
> +
> + if (__op->op == DRM_GPUVA_OP_UNMAP) {
> + /* There should be no unmap */
> + xe_assert(vm->xe, true);
xe_assert(vm->xe, true) will never pop - the condition is always true.
How about...
XE_WARN_ON("UNEXPECTED UNMAP");
> + xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
> + } else if (__op->op == DRM_GPUVA_OP_REMAP) {
> + vma = gpuva_to_vma(op->base.remap.unmap->va);
> + cp_mem_attr(&temp_attr, &vma->attr);
See my comments about cp_mem_attr in a prior patch.
> + xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
> + } else if (__op->op == DRM_GPUVA_OP_MAP) {
> + vma = op->map.vma;
> + cp_mem_attr(&vma->attr, &temp_attr);
> + }
Also perhaps ref the comment above of why copying the attributes is
needed.
Matt
> + }
> +
> + xe_vm_unlock(vm);
> + drm_gpuva_ops_free(&vm->gpuvm, ops);
> + return 0;
> +
> +unwind_ops:
> + vm_bind_ioctl_ops_unwind(vm, &ops, 1);
> +free_ops:
> + if (ops)
> + drm_gpuva_ops_free(&vm->gpuvm, ops);
> + return err;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 99e164852f63..4e45230b7205 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>
> struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
>
> +int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
> +
> /**
> * to_userptr_vma() - Return a pointer to an embedding userptr vma
> * @vma: Pointer to the embedded struct xe_vma
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 22/32] drm/xe/svm: Split system allocator vma incase of madvise call
2025-04-07 10:17 ` [PATCH v2 22/32] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
2025-05-14 19:01 ` Matthew Brost
@ 2025-05-14 19:02 ` Matthew Brost
1 sibling, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 19:02 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:09PM +0530, Himal Prasad Ghimiray wrote:
> If the start or end of input address range lies within system allocator
> vma split the vma to create new vma's as per input range.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 84 ++++++++++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 2 +
> 2 files changed, 86 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 6e5ba58d475e..c7c012afe9eb 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -4127,3 +4127,87 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> }
> kvfree(snap);
> }
> +
Kernel doc is missing here - I missed this in my previous reply.
Matt
> +int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> +{
> + struct xe_vma_ops vops;
> + struct drm_gpuva_ops *ops = NULL;
> + struct drm_gpuva_op *__op;
> + bool is_cpu_addr_mirror = false;
> + int err;
> +
> + vm_dbg(&vm->xe->drm, "MADVISE IN: addr=0x%016llx, size=0x%016llx", start, range);
> +
> + if (start & ~PAGE_MASK)
> + start = ALIGN_DOWN(start, SZ_4K);
> +
> + if (range & ~PAGE_MASK)
> + range = ALIGN(range, SZ_4K);
> +
> + vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
> + ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, start, range,
> + DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE,
> + NULL, start);
> + if (IS_ERR(ops)) {
> + err = PTR_ERR(ops);
> + goto unwind_ops;
> + }
> +
> + if (list_empty(&ops->list)) {
> + err = 0;
> + goto free_ops;
> + }
> +
> + drm_gpuva_for_each_op(__op, ops) {
> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> +
> + if (__op->op == DRM_GPUVA_OP_REMAP) {
> + if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
> + is_cpu_addr_mirror = true;
> + else
> + is_cpu_addr_mirror = false;
> + }
> +
> + if (__op->op == DRM_GPUVA_OP_MAP)
> + op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
> +
> + print_op(vm->xe, __op);
> + }
> +
> + xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
> + err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
> + if (err)
> + goto unwind_ops;
> +
> + xe_vm_lock(vm, false);
> +
> + drm_gpuva_for_each_op(__op, ops) {
> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> + struct xe_vma *vma;
> + struct xe_vma_mem_attr temp_attr;
> +
> + if (__op->op == DRM_GPUVA_OP_UNMAP) {
> + /* There should be no unmap */
> + xe_assert(vm->xe, true);
> + xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
> + } else if (__op->op == DRM_GPUVA_OP_REMAP) {
> + vma = gpuva_to_vma(op->base.remap.unmap->va);
> + cp_mem_attr(&temp_attr, &vma->attr);
> + xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
> + } else if (__op->op == DRM_GPUVA_OP_MAP) {
> + vma = op->map.vma;
> + cp_mem_attr(&vma->attr, &temp_attr);
> + }
> + }
> +
> + xe_vm_unlock(vm);
> + drm_gpuva_ops_free(&vm->gpuvm, ops);
> + return 0;
> +
> +unwind_ops:
> + vm_bind_ioctl_ops_unwind(vm, &ops, 1);
> +free_ops:
> + if (ops)
> + drm_gpuva_ops_free(&vm->gpuvm, ops);
> + return err;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 99e164852f63..4e45230b7205 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>
> struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
>
> +int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
> +
> /**
> * to_userptr_vma() - Return a pointer to an embedding userptr vma
> * @vma: Pointer to the embedded struct xe_vma
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 24/32] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
2025-04-07 10:17 ` [PATCH v2 24/32] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
@ 2025-05-14 19:20 ` Matthew Brost
2025-05-20 10:21 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 19:20 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:11PM +0530, Himal Prasad Ghimiray wrote:
> In the case of the MADVISE ioctl, if the start or end addresses fall
> within a VMA and existing SVM ranges are present, remove the existing
> SVM mappings. Then, continue with ops_parse to create new VMAs by REMAP
> unmapping of old one.
>
I'm quite confused why this patch is needed. Why is invalidating the
ranges not sufficient?
Matt
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 25 +++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_svm.h | 7 +++++++
> drivers/gpu/drm/xe/xe_vm.c | 18 +++++++++++++++++-
> 3 files changed, 49 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 7ec7ecd7eb1f..efcba4b77250 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -903,6 +903,31 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
> return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
> }
>
> +/**
> + * xe_svm_range_clean_if_addr_within - Clean SVM mappings and ranges
> + * @start: start addr
> + * @end: end addr
> + *
> + * This function cleans up svm ranges if start or end address are inside them.
> + */
> +void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end)
> +{
> + struct drm_gpusvm_notifier *notifier, *next;
> +
> + drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
> + struct drm_gpusvm_range *range, *__next;
> +
> + drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
> + if (start > drm_gpusvm_range_start(range) ||
> + end < drm_gpusvm_range_end(range)) {
> + if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
> + drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
> + __xe_svm_garbage_collector(vm, to_xe_range(range));
> + }
> + }
> + }
> +}
> +
> /**
> * xe_svm_bo_evict() - SVM evict BO to system memory
> * @bo: BO to evict
> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> index d5be8229ca7e..d00ba6d6ba53 100644
> --- a/drivers/gpu/drm/xe/xe_svm.h
> +++ b/drivers/gpu/drm/xe/xe_svm.h
> @@ -98,6 +98,8 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
> bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
> u32 region);
>
> +void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end);
> +
> /**
> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
> * @range: SVM range
> @@ -291,6 +293,11 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
> return false;
> }
>
> +static inline
> +void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end)
> +{
> +}
> +
> #define xe_svm_assert_in_notifier(...) do {} while (0)
> #define xe_svm_range_has_dma_mapping(...) false
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index c7c012afe9eb..92b8e0cac063 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2362,6 +2362,22 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
> op->map.pat_index = pat_index;
> op->map.invalidate_on_bind =
> __xe_vm_needs_clear_scratch_pages(vm, flags);
> + } else if (__op->op == DRM_GPUVA_OP_REMAP) {
> + struct xe_vma *old =
> + gpuva_to_vma(op->base.remap.unmap->va);
> + u64 start = xe_vma_start(old), end = xe_vma_end(old);
> +
> + if (op->base.remap.prev)
> + start = op->base.remap.prev->va.addr +
> + op->base.remap.prev->va.range;
> + if (op->base.remap.next)
> + end = op->base.remap.next->va.addr;
> +
> + if (xe_vma_is_cpu_addr_mirror(old) &&
> + xe_svm_has_mapping(vm, start, end)) {
> + drm_gpuva_ops_free(&vm->gpuvm, ops);
> + return ERR_PTR(-EBUSY);
> + }
> } else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>
> @@ -2653,7 +2669,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>
> if (xe_vma_is_cpu_addr_mirror(old) &&
> xe_svm_has_mapping(vm, start, end))
> - return -EBUSY;
> + xe_svm_range_clean_if_addr_within(vm, start, end);
>
> op->remap.start = xe_vma_start(old);
> op->remap.range = xe_vma_size(old);
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 28/32] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
2025-04-07 10:17 ` [PATCH v2 28/32] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
@ 2025-05-14 21:05 ` Matthew Brost
2025-05-21 8:52 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 21:05 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:15PM +0530, Himal Prasad Ghimiray wrote:
> Introduce flag DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC to ensure prefetching
> in madvise-advised memory regions
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> include/uapi/drm/xe_drm.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index aaf515df3a83..ab96dee25f6c 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -1111,6 +1111,7 @@ struct drm_xe_vm_bind_op {
> /** @flags: Bind flags */
> __u32 flags;
>
> +#define DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC -1
Replied to wrong version before. Kernel doc.
Otherwise uAPI LGTM.
Matt
> /**
> * @prefetch_mem_region_instance: Memory region to prefetch VMA to.
> * It is a region instance, not a mask.
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 30/32] drm/xe/uapi: Add uapi for vma count and mem attributes
2025-04-07 10:17 ` [PATCH v2 30/32] drm/xe/uapi: Add uapi for vma count and mem attributes Himal Prasad Ghimiray
@ 2025-05-14 21:08 ` Matthew Brost
2025-05-21 8:54 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 21:08 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:17PM +0530, Himal Prasad Ghimiray wrote:
> -DRM_IOCTL_XE_VM_QUERY_VMAS: Return number of VMAs in user-specified range.
> -DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS: Fill VMA attributes in user-provided
> buffer.
>
Replied to the wrong version earlier...
I can't remember if we landed on whether this is needed? I thought the
answer was - no, not needed.
If it is needed, could we make this a single IOCTL? e.g. Call it once
with num_vmas == 0 + a NULL vector, the IOCTL returns num_vmas, then call
it again with num_vmas != 0 + a non-NULL vector. Generally we try not to
burn IOCTL numbers, rather overload functionality.
Matt
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_device.c | 2 +
> drivers/gpu/drm/xe/xe_vm.c | 94 +++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 3 +-
> include/uapi/drm/xe_drm.h | 115 +++++++++++++++++++++++++++++++++
> 4 files changed, 213 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 3e57300014bf..968c24c77241 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -198,6 +198,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
> DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
> + DRM_IOCTL_DEF_DRV(XE_VM_QUERY_VMAS, xe_vm_query_vmas_ioctl, DRM_RENDER_ALLOW),
> + DRM_IOCTL_DEF_DRV(XE_VM_QUERY_VMAS_ATTRS, xe_vm_query_vmas_attrs_ioctl, DRM_RENDER_ALLOW),
> };
>
> static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index e5246c633e62..f1d4daf90efe 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2165,6 +2165,100 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
> return err;
> }
>
> +int xe_vm_query_vmas_ioctl(struct drm_device *dev, void *data,
> + struct drm_file *file)
> +{
> + struct xe_device *xe = to_xe_device(dev);
> + struct xe_file *xef = to_xe_file(file);
> + struct drm_xe_vm_query_num_vmas *args = data;
> + struct drm_gpuva *gpuva;
> + struct xe_vm *vm;
> +
> + vm = xe_vm_lookup(xef, args->vm_id);
> + if (XE_IOCTL_DBG(xe, !vm))
> + return -EINVAL;
> +
> + args->num_vmas = 0;
> + down_write(&vm->lock);
> +
> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, args->start, args->start + args->range)
> + args->num_vmas++;
> +
> + up_write(&vm->lock);
> + return 0;
> +}
> +
> +static int get_mem_attrs(struct xe_vm *vm, u32 *num_vmas, u64 start,
> + u64 end, struct drm_xe_vma_mem_attr *mem_attrs)
> +{
> + struct drm_gpuva *gpuva;
> + int i = 0;
> +
> + lockdep_assert_held(&vm->lock);
> +
> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
> + struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> + if (i == *num_vmas)
> + return -EINVAL;
> +
> + mem_attrs[i].start = xe_vma_start(vma);
> + mem_attrs[i].end = xe_vma_end(vma);
> + mem_attrs[i].atomic.val = vma->attr.atomic_access;
> + mem_attrs[i].pat_index.val = vma->attr.pat_index;
> + mem_attrs[i].preferred_mem_loc.devmem_fd = vma->attr.preferred_loc.devmem_fd;
> + mem_attrs[i].preferred_mem_loc.migration_policy = vma->attr.preferred_loc.migration_policy;
> +
> + i++;
> + }
> +
> + if (i < (*num_vmas - 1))
> + *num_vmas = i;
> + return 0;
> +}
> +
> +int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> +{
> + struct xe_device *xe = to_xe_device(dev);
> + struct xe_file *xef = to_xe_file(file);
> + struct drm_xe_vma_mem_attr *mem_attrs;
> + struct drm_xe_vm_query_vmas_attr *args = data;
> + u64 __user *attrs_user = NULL;
> + struct xe_vm *vm;
> + int err;
> +
> + if (XE_IOCTL_DBG(xe, args->num_vmas < 1))
> + return -EINVAL;
> +
> + vm = xe_vm_lookup(xef, args->vm_id);
> + if (XE_IOCTL_DBG(xe, !vm))
> + return -EINVAL;
> +
> + down_write(&vm->lock);
> +
> + attrs_user = u64_to_user_ptr(args->vector_of_vma_mem_attr);
> + mem_attrs = kvmalloc_array(args->num_vmas, sizeof(struct drm_xe_vma_mem_attr),
> + GFP_KERNEL | __GFP_ACCOUNT |
> + __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
> + if (!mem_attrs)
> + return args->num_vmas > 1 ? -ENOBUFS : -ENOMEM;
> +
> + err = get_mem_attrs(vm, &args->num_vmas, args->start,
> + args->start + args->range, mem_attrs);
> + if (err)
> + goto free_mem_attrs;
> +
> + err = __copy_to_user(attrs_user, mem_attrs,
> + sizeof(struct drm_xe_vma_mem_attr) * args->num_vmas);
> +
> +free_mem_attrs:
> + kvfree(mem_attrs);
> +
> + up_write(&vm->lock);
> +
> + return err;
> +}
> +
> static bool vma_matches(struct xe_vma *vma, u64 page_addr)
> {
> if (page_addr > xe_vma_end(vma) - 1 ||
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 377f62f859b7..0b2d6e9f77ef 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -193,7 +193,8 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file);
> int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file);
> -
> +int xe_vm_query_vmas_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
> +int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
> void xe_vm_close_and_put(struct xe_vm *vm);
>
> static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> index ab96dee25f6c..177ee3a1c20d 100644
> --- a/include/uapi/drm/xe_drm.h
> +++ b/include/uapi/drm/xe_drm.h
> @@ -82,6 +82,8 @@ extern "C" {
> * - &DRM_IOCTL_XE_WAIT_USER_FENCE
> * - &DRM_IOCTL_XE_OBSERVATION
> * - &DRM_IOCTL_XE_MADVISE
> + * - &DRM_IOCTL_XE_VM_QUERY_VMAS
> + * - &DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS
> */
>
> /*
> @@ -104,6 +106,8 @@ extern "C" {
> #define DRM_XE_WAIT_USER_FENCE 0x0a
> #define DRM_XE_OBSERVATION 0x0b
> #define DRM_XE_MADVISE 0x0c
> +#define DRM_XE_VM_QUERY_VMAS 0x0d
> +#define DRM_XE_VM_QUERY_VMAS_ATTRS 0x0e
>
> /* Must be kept compact -- no holes */
>
> @@ -120,6 +124,8 @@ extern "C" {
> #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
> #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
> #define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
> +#define DRM_IOCTL_XE_VM_QUERY_VMAS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS, struct drm_xe_vm_query_num_vmas)
> +#define DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS_ATTRS, struct drm_xe_vm_query_vmas_attr)
>
> /**
> * DOC: Xe IOCTL Extensions
> @@ -2063,6 +2069,115 @@ struct drm_xe_madvise {
>
> };
>
> +/**
> + * struct drm_xe_vm_query_num_vmas - Input of &DRM_IOCTL_XE_VM_QUERY_VMAS
> + *
> + * Get number of vmas in virtual range of vm_id
> + */
> +struct drm_xe_vm_query_num_vmas {
> + /** @extensions: Pointer to the first extension struct, if any */
> + __u64 extensions;
> +
> + /** @vm_id: vm_id of the virtual range */
> + __u32 vm_id;
> +
> + /** @num_vmas: number of vmas in range returned in @num_vmas */
> + __u32 num_vmas;
> +
> + /** @start: start of the virtual address range */
> + __u64 start;
> +
> + /** @size: size of the virtual address range */
> + __u64 range;
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +};
> +
> +struct drm_xe_vma_mem_attr {
> + /** @extensions: Pointer to the first extension struct, if any */
> + __u64 extensions;
> +
> + /** @start: start of the vma */
> + __u64 start;
> +
> + /** @size: end of the vma */
> + __u64 end;
> +
> + struct {
> + struct {
> + /** @val: value of atomic operation*/
> + __u32 val;
> +
> + /** @reserved: Reserved */
> + __u32 reserved;
> + } atomic;
> +
> + struct {
> + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
> + __u32 val;
> +
> + /** @reserved: Reserved */
> + __u32 reserved;
> + } purge_state_val;
> +
> + struct {
> + /** @pat_index */
> + __u32 val;
> +
> + /** @reserved: Reserved */
> + __u32 reserved;
> + } pat_index;
> +
> + /** @preferred_mem_loc: preferred memory location */
> + struct {
> + __u32 devmem_fd;
> +
> + __u32 migration_policy;
> + } preferred_mem_loc;
> + };
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +};
> +
> +/**
> + * struct drm_xe_vm_query_vmas_attr - Input of &DRM_IOCTL_XE_VM_QUERY_MEM_ATTRIBUTES
> + *
> + * Get memory attributes to a virtual address range
> + */
> +struct drm_xe_vm_query_vmas_attr {
> + /** @extensions: Pointer to the first extension struct, if any */
> + __u64 extensions;
> +
> + /** @vm_id: vm_id of the virtual range */
> + __u32 vm_id;
> +
> + /** @num_vmas: number of vmas in range returned in @num_vmas */
> + __u32 num_vmas;
> +
> + /** @start: start of the virtual address range */
> + __u64 start;
> +
> + /** @size: size of the virtual address range */
> + __u64 range;
> +
> + union {
> + /** @num_vmas: used if num_vmas == 1 */
> + struct drm_xe_vma_mem_attr attr;
> +
> + /**
> + * @vector_of_ops: userptr to array of struct
> + * drm_xe_vma_mem_attr if num_vmas > 1
> + */
> + __u64 vector_of_vma_mem_attr;
> + };
> +
> + /** @reserved: Reserved */
> + __u64 reserved[2];
> +
> +};
> +
> #if defined(__cplusplus)
> }
> #endif
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 31/32] drm/xe/bo: Add attributes field to xe_bo
2025-04-07 10:17 ` [PATCH v2 31/32] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
@ 2025-05-14 21:10 ` Matthew Brost
0 siblings, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 21:10 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:18PM +0530, Himal Prasad Ghimiray wrote:
> A single BO can be linked to multiple VMAs, making VMA attributes
> insufficient for determining the placement and PTE update attributes
> of the BO. To address this, an attributes field has been added to the
> BO.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Replied to the wrong version earlier...
Anyways:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo_types.h | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
> index 81396181aaea..5340127e67ae 100644
> --- a/drivers/gpu/drm/xe/xe_bo_types.h
> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
> @@ -60,6 +60,11 @@ struct xe_bo {
> */
> struct list_head client_link;
> #endif
> + /** @attr: User controlled attributes for bo */
> + struct {
> + /** @atomic_access: type of atomic access bo needs */
> + u32 atomic_access;
> + } attr;
> /**
> * @pxp_key_instance: PXP key instance this BO was created against. A
> * 0 in this variable indicates that the BO does not use PXP encryption.
> --
> 2.34.1
>
* Re: [PATCH v2 23/32] drm/xe: Implement madvise ioctl for xe
2025-04-07 10:17 ` [PATCH v2 23/32] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
@ 2025-05-14 21:41 ` Matthew Brost
2025-05-20 10:15 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 21:41 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:10PM +0530, Himal Prasad Ghimiray wrote:
> This driver-specific ioctl enables UMDs to control the memory attributes
> for GPU VMAs within a specified input range. If the start or end
> addresses fall within an existing VMA, the VMA is split accordingly. The
> attributes of the VMA are modified as provided by the users. The old
> mappings of the VMAs are invalidated, and TLB invalidation is performed
> if necessary.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/Makefile | 1 +
> drivers/gpu/drm/xe/xe_device.c | 2 +
> drivers/gpu/drm/xe/xe_vm_madvise.c | 309 +++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm_madvise.h | 15 ++
> 4 files changed, 327 insertions(+)
> create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
> create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
>
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index e4fec90bab55..3e83ae8b9dc1 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -117,6 +117,7 @@ xe-y += xe_bb.o \
> xe_uc.o \
> xe_uc_fw.o \
> xe_vm.o \
> + xe_vm_madvise.o \
> xe_vram.o \
> xe_vram_freq.o \
> xe_vsec.o \
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index d8e227ddf255..3e57300014bf 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -60,6 +60,7 @@
> #include "xe_ttm_stolen_mgr.h"
> #include "xe_ttm_sys_mgr.h"
> #include "xe_vm.h"
> +#include "xe_vm_madvise.h"
> #include "xe_vram.h"
> #include "xe_vsec.h"
> #include "xe_wait_user_fence.h"
> @@ -196,6 +197,7 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
> DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
> DRM_RENDER_ALLOW),
> DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
> + DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
> };
>
> static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> new file mode 100644
> index 000000000000..ef50031649e0
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -0,0 +1,309 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#include "xe_vm_madvise.h"
> +
> +#include <linux/nospec.h>
> +#include <drm/ttm/ttm_tt.h>
> +#include <drm/xe_drm.h>
> +
> +#include "xe_bo.h"
> +#include "xe_gt_tlb_invalidation.h"
> +#include "xe_pt.h"
> +#include "xe_svm.h"
> +
> +static struct xe_vma **get_vmas(struct xe_vm *vm, int *num_vmas,
> + u64 addr, u64 range)
> +{
> + struct xe_vma **vmas, **__vmas;
> + struct drm_gpuva *gpuva;
> + int max_vmas = 8;
> +
> + lockdep_assert_held(&vm->lock);
lockdep_assert_held_write
> +
> + *num_vmas = 0;
> + vmas = kmalloc_array(max_vmas, sizeof(*vmas), GFP_KERNEL);
> + if (!vmas)
> + return NULL;
> +
> + vm_dbg(&vm->xe->drm, "VMA's in range: start=0x%016llx, end=0x%016llx", addr, addr + range);
> +
> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, addr, addr + range) {
> + struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> + if (*num_vmas == max_vmas) {
> + max_vmas <<= 1;
> + __vmas = krealloc(vmas, max_vmas * sizeof(*vmas), GFP_KERNEL);
> + if (!__vmas) {
> + kfree(vmas);
> + return NULL;
> + }
> + vmas = __vmas;
> + }
> +
> + vmas[*num_vmas] = vma;
> + (*num_vmas)++;
> + }
> +
> + vm_dbg(&vm->xe->drm, "*num_vmas = %d\n", *num_vmas);
> +
> + if (!*num_vmas) {
> + kfree(vmas);
> + return NULL;
> + }
> +
> + return vmas;
> +}
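The collection loop above uses a standard grow-by-doubling pattern. As a minimal userspace sketch of the same idea (realloc standing in for krealloc, ints standing in for VMA pointers — purely illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

struct vec {
	int *items;
	int count;
	int cap;
};

/* Double the capacity when full, mirroring the krealloc step in
 * get_vmas(); on allocation failure the caller frees items, just as
 * get_vmas() does kfree(vmas). */
static int vec_push(struct vec *v, int item)
{
	if (v->count == v->cap) {
		int new_cap = v->cap ? v->cap << 1 : 8;
		int *tmp = realloc(v->items, new_cap * sizeof(*tmp));

		if (!tmp)
			return -1;
		v->items = tmp;
		v->cap = new_cap;
	}
	v->items[v->count++] = item;
	return 0;
}
```

The doubling keeps the amortized cost of each push constant while avoiding a realloc per element.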
> +
> +static int madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
> + struct xe_vma **vmas, int num_vmas,
> + struct drm_xe_madvise_ops ops)
> +{
> + /* Implementation pending */
> + return 0;
> +}
> +
> +static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> + struct xe_vma **vmas, int num_vmas,
> + struct drm_xe_madvise_ops ops)
> +{
> + /* Implementation pending */
> + return 0;
> +}
> +
> +static int madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
> + struct xe_vma **vmas, int num_vmas,
> + struct drm_xe_madvise_ops ops)
> +{
> + /* Implementation pending */
> + return 0;
> +}
> +
> +static int madvise_purgeable_state(struct xe_device *xe, struct xe_vm *vm,
> + struct xe_vma **vmas, int num_vmas,
> + struct drm_xe_madvise_ops ops)
> +{
> + /* Implementation pending */
> + return 0;
> +}
> +
> +typedef int (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
> + struct xe_vma **vmas, int num_vmas, struct drm_xe_madvise_ops ops);
> +
> +static const madvise_func madvise_funcs[] = {
> + [DRM_XE_VMA_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
> + [DRM_XE_VMA_ATTR_ATOMIC] = madvise_atomic,
> + [DRM_XE_VMA_ATTR_PAT] = madvise_pat_index,
> + [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = madvise_purgeable_state,
> +};
> +
> +static void xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end, u8 *tile_mask)
> +{
> + struct drm_gpusvm_notifier *notifier;
> + struct drm_gpuva *gpuva;
> + struct xe_svm_range *range;
> + struct xe_tile *tile;
> + u64 adj_start, adj_end;
> + u8 id;
> +
> + lockdep_assert_held(&vm->lock);
lockdep_assert_held_write
> +
/* Waiting on pending binds */
> + if (dma_resv_wait_timeout(xe_vm_resv(vm), DMA_RESV_USAGE_BOOKKEEP,
> + false, MAX_SCHEDULE_TIMEOUT) <= 0)
> + XE_WARN_ON(1);
> +
> + down_write(&vm->svm.gpusvm.notifier_lock);
> +
xe_svm_notifier_lock
> + drm_gpusvm_for_each_notifier(notifier, &vm->svm.gpusvm, start, end) {
> + struct drm_gpusvm_range *r = NULL;
> +
> + adj_start = max(start, notifier->itree.start);
> + adj_end = min(end, notifier->itree.last + 1);
> + drm_gpusvm_for_each_range(r, notifier, adj_start, adj_end) {
> + range = to_xe_range(r);
> + for_each_tile(tile, vm->xe, id) {
> + if (xe_pt_zap_ptes_range(tile, vm, range)) {
> + *tile_mask |= BIT(id);
> + range->tile_invalidated |= BIT(id);
> + }
> + }
> + }
> + }
> +
> + up_write(&vm->svm.gpusvm.notifier_lock);
> +
xe_svm_notifier_unlock
> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
> + struct xe_vma *vma = gpuva_to_vma(gpuva);
> +
> + if (xe_vma_is_cpu_addr_mirror(vma))
> + continue;
> +
> + if (xe_vma_is_userptr(vma)) {
> + WARN_ON_ONCE(!mmu_interval_check_retry
> + (&to_userptr_vma(vma)->userptr.notifier,
> + to_userptr_vma(vma)->userptr.notifier_seq));
> +
> + WARN_ON_ONCE(!dma_resv_test_signaled(xe_vm_resv(xe_vma_vm(vma)),
> + DMA_RESV_USAGE_BOOKKEEP));
> + }
> +
> + if (xe_vma_bo(vma))
> + xe_bo_lock(xe_vma_bo(vma), false);
> +
Do you need the BO's dma-resv lock here? I don't think you do. Maybe double
check with Thomas on this one as I could be forgetting something here.
> + for_each_tile(tile, vm->xe, id) {
> + if (xe_pt_zap_ptes(tile, vma))
> + *tile_mask |= BIT(id);
> + }
> +
> + if (xe_vma_bo(vma))
> + xe_bo_unlock(xe_vma_bo(vma));
> + }
> +}
> +
> +static void xe_vm_invalidate_madvise_range(struct xe_vm *vm, u64 start, u64 end)
> +{
> + struct xe_gt_tlb_invalidation_fence
> + fence[XE_MAX_TILES_PER_DEVICE * XE_MAX_GT_PER_TILE];
> + struct xe_tile *tile;
> + u32 fence_id = 0;
> + u8 tile_mask = 0;
> + u8 id;
> +
> + xe_zap_ptes_in_madvise_range(vm, start, end, &tile_mask);
> + if (!tile_mask)
> + return;
> +
> + xe_device_wmb(vm->xe);
> +
We have the below pattern in a few places in the driver. I wonder if it's
time for a helper?
> + for_each_tile(tile, vm->xe, id) {
> + if (tile_mask & BIT(id)) {
> + int err;
> +
> + xe_gt_tlb_invalidation_fence_init(tile->primary_gt,
> + &fence[fence_id], true);
> +
> + err = xe_gt_tlb_invalidation_range(tile->primary_gt,
> + &fence[fence_id],
> + start,
> + end,
> + vm->usm.asid);
> + if (WARN_ON_ONCE(err < 0))
> + goto wait;
> + ++fence_id;
> +
> + if (!tile->media_gt)
> + continue;
> +
> + xe_gt_tlb_invalidation_fence_init(tile->media_gt,
> + &fence[fence_id], true);
> +
> + err = xe_gt_tlb_invalidation_range(tile->media_gt,
> + &fence[fence_id],
> + start,
> + end,
> + vm->usm.asid);
> + if (WARN_ON_ONCE(err < 0))
> + goto wait;
> + ++fence_id;
> + }
> + }
> +
> +wait:
> + for (id = 0; id < fence_id; ++id)
> + xe_gt_tlb_invalidation_fence_wait(&fence[id]);
> +}
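The helper Matt suggests for this repeated per-GT invalidation pattern could factor out the "for each tile in the mask, kick the primary GT and, if present, the media GT, counting fences to wait on" shape. A standalone sketch of that structure (mock types, not the real xe API — fence init and the actual invalidation call are elided):

```c
#include <assert.h>

struct mock_tile {
	int has_media_gt;
};

/* For each tile selected in tile_mask, issue an invalidation on its
 * primary GT and, if present, its media GT; returns the number of
 * fences the caller must then wait on, mirroring fence_id above. */
static int invalidate_masked_tiles(const struct mock_tile *tiles,
				   int num_tiles, unsigned char tile_mask)
{
	int fence_id = 0;

	for (int i = 0; i < num_tiles; i++) {
		if (!(tile_mask & (1u << i)))
			continue;
		fence_id++;		/* primary GT invalidation */
		if (tiles[i].has_media_gt)
			fence_id++;	/* media GT invalidation */
	}
	return fence_id;
}
```

A real helper would additionally take the range, ASID, and a fence array, and propagate the first error so the caller can fall through to the wait loop.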
> +
> +static int input_ranges_same(struct drm_xe_madvise_ops *old,
> + struct drm_xe_madvise_ops *new)
> +{
> + return (new->start == old->start && new->range == old->range);
> +}
> +
> +int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
Kernel doc.
> +{
> + struct xe_device *xe = to_xe_device(dev);
> + struct xe_file *xef = to_xe_file(file);
> + struct drm_xe_madvise_ops *advs_ops;
> + struct drm_xe_madvise *args = data;
> + struct xe_vm *vm;
> + struct xe_vma **vmas = NULL;
> + int num_vmas, err = 0;
> + int i, j, attr_type;
> +
> + if (XE_IOCTL_DBG(xe, args->num_ops < 1))
> + return -EINVAL;
> +
> + vm = xe_vm_lookup(xef, args->vm_id);
> + if (XE_IOCTL_DBG(xe, !vm))
> + return -EINVAL;
> +
> + if (XE_IOCTL_DBG(xe, !xe_vm_in_fault_mode(vm))) {
Do we want to restrict this to fault mode? Maybe check with Mesa if they
see any use cases.
> + err = -EINVAL;
> + goto put_vm;
> + }
> +
> + down_write(&vm->lock);
> +
> + if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
> + err = -ENOENT;
> + goto unlock_vm;
> + }
> +
> + if (args->num_ops > 1) {
> + u64 __user *madvise_user = u64_to_user_ptr(args->vector_of_ops);
> +
> + advs_ops = kvmalloc_array(args->num_ops, sizeof(struct drm_xe_madvise_ops),
> + GFP_KERNEL | __GFP_ACCOUNT |
> + __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
> + if (!advs_ops)
> + return args->num_ops > 1 ? -ENOBUFS : -ENOMEM;
> +
> + err = __copy_from_user(advs_ops, madvise_user,
> + sizeof(struct drm_xe_madvise_ops) *
> + args->num_ops);
> + if (XE_IOCTL_DBG(xe, err)) {
> + err = -EFAULT;
> + goto free_advs_ops;
> + }
> + } else {
> + advs_ops = &args->ops;
> + }
> +
> + for (i = 0; i < args->num_ops; i++) {
> + xe_vm_alloc_madvise_vma(vm, advs_ops[i].start, advs_ops[i].range);
> +
> + vmas = get_vmas(vm, &num_vmas, advs_ops[i].start, advs_ops[i].range);
> + if (!vmas) {
> + err = -ENOMEM;
> + goto unlock_vm;
> + }
> +
> + attr_type = array_index_nospec(advs_ops[i].type, ARRAY_SIZE(madvise_funcs));
> + err = madvise_funcs[attr_type](xe, vm, vmas, num_vmas, advs_ops[i]);
> +
> + kfree(vmas);
> + vmas = NULL;
> +
> + if (err)
> + break;
> + }
> +
> + for (i = 0; i < args->num_ops; i++) {
> + for (j = i + 1; j < args->num_ops; ++j) {
> + if (input_ranges_same(&advs_ops[j], &advs_ops[i]))
> + break;
> + }
The above loop doesn't look like it actually does anything.
Matt
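For reference, if the intent of that loop was to flush each unique (start, range) pair only once, one working shape (an illustrative userspace sketch, not a proposed patch) is to skip an op whose range reappears later, so the invalidation is deferred to the last duplicate:

```c
#include <assert.h>
#include <stdbool.h>

struct op {
	unsigned long long start;
	unsigned long long range;
};

static bool same_range(const struct op *a, const struct op *b)
{
	return a->start == b->start && a->range == b->range;
}

/* True if ops[i]'s range shows up again later in the array; the caller
 * would then `continue` past ops[i] and invalidate only the final
 * occurrence of each unique range. */
static bool repeated_later(const struct op *ops, int num_ops, int i)
{
	for (int j = i + 1; j < num_ops; j++)
		if (same_range(&ops[i], &ops[j]))
			return true;
	return false;
}
```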
> + xe_vm_invalidate_madvise_range(vm, advs_ops[i].start,
> + advs_ops[i].start + advs_ops[i].range);
> + }
> +free_advs_ops:
> + if (args->num_ops > 1)
> + kvfree(advs_ops);
> +unlock_vm:
> + up_write(&vm->lock);
> +put_vm:
> + xe_vm_put(vm);
> + return err;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
> new file mode 100644
> index 000000000000..c5cdd058c322
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2024 Intel Corporation
> + */
> +
> +#ifndef _XE_VM_MADVISE_H_
> +#define _XE_VM_MADVISE_H_
> +
> +struct drm_device;
> +struct drm_file;
> +
> +int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
> + struct drm_file *file);
> +
> +#endif
> --
> 2.34.1
>
* Re: [PATCH v2 27/32] drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute
2025-04-07 10:17 ` [PATCH v2 27/32] drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute Himal Prasad Ghimiray
@ 2025-05-14 21:52 ` Matthew Brost
2025-05-21 8:51 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 21:52 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:14PM +0530, Himal Prasad Ghimiray wrote:
> This attribute sets the pat_index for the SVM-used VMA range, which is
> used to determine coherence.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm_madvise.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index f870e8642190..f4e0545937b0 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -104,7 +104,14 @@ static int madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise_ops ops)
> {
> - /* Implementation pending */
> + int i;
> +
> + xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_PAT);
> + vm_dbg(&xe->drm, "attr_value = %d", ops.pat_index.val);
I don't think the above vm_dbg is all that helpful. If it was per VMA, I
could see that being a bit more helpful.
Otherwise LGTM.
Matt
> +
> + for (i = 0; i < num_vmas; i++)
> + vmas[i]->attr.pat_index = ops.pat_index.val;
> +
> return 0;
> }
>
> --
> 2.34.1
>
* Re: [PATCH v2 26/32] drm/xe/madvise: Update migration policy based on preferred location
2025-04-07 10:17 ` [PATCH v2 26/32] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
@ 2025-05-14 22:04 ` Matthew Brost
2025-05-21 8:50 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 22:04 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:13PM +0530, Himal Prasad Ghimiray wrote:
> When the user sets a valid devmem_fd as the preferred location, a GPU fault
> will trigger migration to the tile of the device associated with the devmem_fd.
>
> If the user sets an invalid devmem_fd, the preferred location is the current
> placement only.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 15 ++++++++++++++-
> drivers/gpu/drm/xe/xe_vm.h | 3 +++
> drivers/gpu/drm/xe/xe_vm_madvise.c | 20 +++++++++++++++++++-
> 3 files changed, 36 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index d40111e29bfe..60dfb1bf12ca 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -765,6 +765,12 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
> return needs_migrate;
> }
>
> +static const u32 region_to_mem_type[] = {
> + XE_PL_TT,
> + XE_PL_VRAM0,
> + XE_PL_VRAM1,
> +};
> +
> /**
> * xe_svm_handle_pagefault() - SVM handle page fault
> * @vm: The VM.
> @@ -796,6 +802,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> struct xe_tile *tile = gt_to_tile(gt);
> int retry_count = 3;
> ktime_t end = 0;
> + u32 region;
> int err;
>
> lockdep_assert_held_write(&vm->lock);
> @@ -820,7 +827,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>
> range_debug(range, "PAGE FAULT");
>
> - if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
> + region = vma->attr.preferred_loc.devmem_fd;
Mentioned this earlier in the series - you are assigning a devmem_fd to a
region which is a bit confusing.
> +
> + if (xe_svm_range_needs_migrate_to_vram(range, vma, region)) {
> + region = region ? region : 1;
I think the default (region unset) should be the VRAM closest to the GT
of the fault.
> + /* Need rework for multigpu */
> + tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
> +
> err = xe_svm_alloc_vram(vm, tile, range, &ctx);
> if (err) {
> if (retry_count) {
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 4e45230b7205..377f62f859b7 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -220,6 +220,9 @@ int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
>
> int xe_vm_userptr_check_repin(struct xe_vm *vm);
>
> +bool xe_vma_has_preferred_mem_loc(struct xe_vma *vma,
> + u32 *mem_region, u32 *devmem_fd);
> +
> int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
> struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
> u8 tile_mask);
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 7e1a95106cb9..f870e8642190 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -61,7 +61,25 @@ static int madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise_ops ops)
> {
> - /* Implementation pending */
> + s32 devmem_fd;
> + u32 migration_policy;
> + int i;
> +
> + xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_PREFERRED_LOC);
> + vm_dbg(&xe->drm, "migration policy = %d, devmem_fd = %d\n",
> + ops.preferred_mem_loc.migration_policy,
> + ops.preferred_mem_loc.devmem_fd);
As mentioned in patch #27, I'm not sure this debug info is all that
useful.
> +
> + devmem_fd = (s32)ops.preferred_mem_loc.devmem_fd;
> + devmem_fd = (devmem_fd < 0) ? 0 : devmem_fd;
> +
Why (devmem_fd < 0) ? 0? I'm not following this.
> + migration_policy = ops.preferred_mem_loc.migration_policy;
> +
Mentioned earlier in the series, I'm confused by migration_policy and it
also looks to be unused unless I'm missing something?
Matt
> + for (i = 0; i < num_vmas; i++) {
> + vmas[i]->attr.preferred_loc.devmem_fd = devmem_fd;
> + vmas[i]->attr.preferred_loc.migration_policy = migration_policy;
> + }
> +
> return 0;
> }
>
> --
> 2.34.1
>
* Re: [PATCH v2 29/32] drm/xe/svm: Consult madvise preferred location in prefetch
2025-04-07 10:17 ` [PATCH v2 29/32] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
@ 2025-05-14 22:17 ` Matthew Brost
0 siblings, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 22:17 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:16PM +0530, Himal Prasad Ghimiray wrote:
> When prefetch region is DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, prefetch svm
> ranges to preferred location provided by madvise.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 15 ++++++++++-----
> 1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 0f9c45ce82b4..e5246c633e62 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2924,9 +2924,12 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
> if (!xe_vma_is_cpu_addr_mirror(vma))
> return 0;
>
> - region = op->prefetch_range.region;
> + region = (op->prefetch_range.region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) ?
> + vma->attr.preferred_loc.devmem_fd : op->prefetch_range.region;
Again incongruence between devmem_fd and region.
>
> - /* TODO: Threading the migration */
> + /* TODO: Threading the migration
> + * TODO: Multigpu support migration
> + */
> for (i = 0; i < op->prefetch_range.ranges_count; i++) {
> svm_range = xa_load(&op->prefetch_range.range, i);
> if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
> @@ -3001,7 +3004,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> else
> region = op->prefetch.region;
>
> - xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
> + xe_assert(vm->xe, region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC ||
> + region <= ARRAY_SIZE(region_to_mem_type));
>
> err = vma_lock_and_validate(exec,
> gpuva_to_vma(op->base.prefetch.va),
> @@ -3426,8 +3430,9 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
> op == DRM_XE_VM_BIND_OP_PREFETCH) ||
> XE_IOCTL_DBG(xe, prefetch_region &&
> op != DRM_XE_VM_BIND_OP_PREFETCH) ||
> - XE_IOCTL_DBG(xe, !(BIT(prefetch_region) &
> - xe->info.mem_region_mask)) ||
> + XE_IOCTL_DBG(xe, (is_cpu_addr_mirror &&
> + prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) &&
Why do you check is_cpu_addr_mirror here? Isn't
DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC valid for all VMAs?
Matt
> + !(BIT(prefetch_region) & xe->info.mem_region_mask)) ||
> XE_IOCTL_DBG(xe, obj &&
> op == DRM_XE_VM_BIND_OP_UNMAP)) {
> err = -EINVAL;
> --
> 2.34.1
>
* Re: [PATCH v2 25/32] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-04-07 10:17 ` [PATCH v2 25/32] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
@ 2025-05-14 22:21 ` Matthew Brost
2025-05-20 10:22 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 22:21 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:12PM +0530, Himal Prasad Ghimiray wrote:
> If the platform does not support atomic access on system memory, and the
> ranges are in system memory, but the user requires atomic accesses on
> the VMA, then migrate the ranges to VRAM. Apply this policy for prefetch
> operations as well.
>
I think the baseline was changed a bit here, but I believe it mostly
makes sense. Will review again on the rebase.
One nit below.
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pt.c | 9 +++++++--
> drivers/gpu/drm/xe/xe_svm.c | 14 ++++++++++++--
> drivers/gpu/drm/xe/xe_vm.c | 2 ++
> drivers/gpu/drm/xe/xe_vm_madvise.c | 11 ++++++++++-
> 4 files changed, 31 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 2479d830d90a..ba9b30b25ded 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -645,13 +645,18 @@ static bool xe_atomic_for_vram(struct xe_vm *vm)
> return true;
> }
>
> -static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
> +static bool xe_atomic_for_system(struct xe_vm *vm,
> + struct xe_bo *bo,
> + struct xe_vma *vma)
> {
> struct xe_device *xe = vm->xe;
>
> if (!xe->info.has_device_atomics_on_smem)
> return false;
>
> + if (vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
> + return true;
> +
> /*
> * If a SMEM+LMEM allocation is backed by SMEM, a device
> * atomics will cause a gpu page fault and which then
> @@ -745,7 +750,7 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
>
> if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
> xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
> - xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
> + xe_walk.default_system_pte = xe_atomic_for_system(vm, bo, vma) ?
> XE_USM_PPGTT_PTE_AE : 0;
> }
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index efcba4b77250..d40111e29bfe 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -717,6 +717,16 @@ static bool supports_4K_migration(struct xe_device *xe)
> return false;
> }
>
> +static bool needs_ranges_in_vram_to_support_atomic(struct xe_device *xe, struct xe_vma *vma)
> +{
> + if (vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_UNDEFINED ||
> + (xe->info.has_device_atomics_on_smem &&
> + vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE))
> + return false;
> +
> + return true;
> +}
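The predicate above can be restated as a small truth table: migration to VRAM is skipped only when the atomic mode is undefined, or when it is DEVICE and the platform supports device atomics on system memory. A standalone restatement of that logic (illustrative enum names, not the uapi values):

```c
#include <assert.h>
#include <stdbool.h>

enum atomic_mode {
	ATOMIC_UNDEFINED,
	ATOMIC_DEVICE,
	ATOMIC_GLOBAL,
	ATOMIC_CPU,
};

/* Same decision as needs_ranges_in_vram_to_support_atomic(): VRAM is
 * required for any defined atomic mode, except DEVICE atomics on a
 * platform with device atomics on system memory. */
static bool needs_vram_for_atomic(bool has_atomics_on_smem,
				  enum atomic_mode mode)
{
	if (mode == ATOMIC_UNDEFINED ||
	    (has_atomics_on_smem && mode == ATOMIC_DEVICE))
		return false;
	return true;
}
```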
> +
> /**
> * xe_svm_range_needs_migrate_to_vram() - SVM range needs migrate to VRAM or not
> * @range: SVM range for which migration needs to be decided
> @@ -735,7 +745,7 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
> if (!range->base.flags.migrate_devmem)
> return false;
>
> - needs_migrate = region;
> + needs_migrate = needs_ranges_in_vram_to_support_atomic(vm->xe, vma) || region;
>
> if (needs_migrate && !IS_DGFX(vm->xe)) {
> drm_warn(&vm->xe->drm, "Platform doesn't support VRAM\n");
> @@ -828,7 +838,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>
> }
>
> - if (atomic)
> + if (atomic && needs_ranges_in_vram_to_support_atomic(vm->xe, vma))
> ctx.vram_only = 1;
>
> range_debug(range, "GET PAGES");
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 92b8e0cac063..0f9c45ce82b4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2930,6 +2930,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
> for (i = 0; i < op->prefetch_range.ranges_count; i++) {
> svm_range = xa_load(&op->prefetch_range.range, i);
> if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
> + region = region ? region : 1;
> tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
> err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
> if (err) {
> @@ -2938,6 +2939,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
> return -ENODATA;
> }
> xe_svm_range_debug(svm_range, "PREFETCH - RANGE MIGRATED TO VRAM");
> + ctx.vram_only = 1;
> }
>
> err = xe_svm_range_get_pages(vm, svm_range, &ctx);
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index ef50031649e0..7e1a95106cb9 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -69,7 +69,16 @@ static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise_ops ops)
> {
> - /* Implementation pending */
> + int i;
> +
> + xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_ATOMIC);
> + xe_assert(vm->xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_UNDEFINED &&
> + ops.atomic.val <= DRM_XE_VMA_ATOMIC_CPU);
> + vm_dbg(&xe->drm, "attr_value = %d", ops.atomic.val);
Again, I'm unsure if this debug message has a ton of value without
knowing the VMA info.
Matt
> +
> + for (i = 0; i < num_vmas; i++)
> + vmas[i]->attr.atomic_access = ops.atomic.val;
> + /*TODO: handle bo backed vmas */
> return 0;
> }
>
> --
> 2.34.1
>
* Re: [PATCH v2 32/32] drm/xe/bo: Update atomic_access attribute on madvise
2025-04-07 10:17 ` [PATCH v2 32/32] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
@ 2025-05-14 22:31 ` Matthew Brost
2025-05-21 9:13 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-14 22:31 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, thomas.hellstrom
On Mon, Apr 07, 2025 at 03:47:19PM +0530, Himal Prasad Ghimiray wrote:
> Update the bo_atomic_access based on user-provided input and use it to
> determine migration to smem during a CPU fault.
>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_bo.c | 21 ++++++++++++++---
> drivers/gpu/drm/xe/xe_vm.c | 11 +++++++--
> drivers/gpu/drm/xe/xe_vm_madvise.c | 38 +++++++++++++++++++++++++++---
> 3 files changed, 62 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index c337790c81ae..fe78f6da7054 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -1573,6 +1573,12 @@ static void xe_gem_object_close(struct drm_gem_object *obj,
> }
> }
>
> +static bool should_migrate_to_smem(struct xe_bo *bo)
> +{
> + return bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL ||
> + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU;
> +}
> +
> static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> {
> struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
> @@ -1581,7 +1587,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> struct xe_bo *bo = ttm_to_xe_bo(tbo);
> bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
> vm_fault_t ret;
> - int idx;
> + int idx, r = 0;
>
> if (needs_rpm)
> xe_pm_runtime_get(xe);
> @@ -1593,8 +1599,17 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
> if (drm_dev_enter(ddev, &idx)) {
> trace_xe_bo_cpu_fault(bo);
>
> - ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
> - TTM_BO_VM_NUM_PREFAULT);
> + if (should_migrate_to_smem(bo)) {
> + r = xe_bo_migrate(bo, XE_PL_TT);
> + if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR)
> + ret = VM_FAULT_NOPAGE;
> + else if (r)
> + ret = VM_FAULT_SIGBUS;
> + }
> + if (!ret)
> + ret = ttm_bo_vm_fault_reserved(vmf,
> + vmf->vma->vm_page_prot,
> + TTM_BO_VM_NUM_PREFAULT);
> drm_dev_exit(idx);
> } else {
> ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index f1d4daf90efe..189e97113dbe 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -3104,9 +3104,16 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> err = vma_lock_and_validate(exec,
> gpuva_to_vma(op->base.prefetch.va),
> false);
> - if (!err && !xe_vma_has_no_bo(vma))
> - err = xe_bo_migrate(xe_vma_bo(vma),
> + if (!err && !xe_vma_has_no_bo(vma)) {
> + struct xe_bo *bo = xe_vma_bo(vma);
> +
> + if (region == 0 && !vm->xe->info.has_device_atomics_on_smem &&
> + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
> + region = 1;
> +
So here we're disallowing migration to system if atomics don't work there?
Shouldn't we just let the GPU fault and fixup on fault?
> + err = xe_bo_migrate(bo,
> region_to_mem_type[region]);
> + }
> break;
> }
> default:
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index f4e0545937b0..bbae2faee603 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -87,16 +87,48 @@ static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise_ops ops)
> {
> - int i;
> + struct xe_bo *bo;
> + int err, i;
>
> xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_ATOMIC);
> xe_assert(vm->xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_UNDEFINED &&
> ops.atomic.val <= DRM_XE_VMA_ATOMIC_CPU);
> vm_dbg(&xe->drm, "attr_value = %d", ops.atomic.val);
>
> - for (i = 0; i < num_vmas; i++)
> + for (i = 0; i < num_vmas; i++) {
> vmas[i]->attr.atomic_access = ops.atomic.val;
> - /*TODO: handle bo backed vmas */
> +
> + bo = xe_vma_bo(vmas[i]);
> + if (!bo)
> + continue;
> +
> + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_CPU &&
> + !(bo->flags & XE_BO_FLAG_SYSTEM)))
> + return -EINVAL;
> +
> + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_DEVICE &&
> + !(bo->flags & XE_BO_FLAG_VRAM0) &&
> + !(bo->flags & XE_BO_FLAG_VRAM1)))
> + return -EINVAL;
Don't device atomics work if xe->info.has_device_atomics_on_smem is set?
> +
> + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_GLOBAL &&
> + (!(bo->flags & XE_BO_FLAG_SYSTEM) ||
> + (!(bo->flags & XE_BO_FLAG_VRAM0) &&
> + !(bo->flags & XE_BO_FLAG_VRAM1)))))
> + return -EINVAL;
One concern is that all of the above are platform-specific checks - e.g. on
a device with CXL, atomics would just work everywhere. I'd at least add
a comment indicating these are platform-specific checks.
> +
> + err = xe_bo_lock(bo, true);
> + if (err)
> + return err;
> + bo->attr.atomic_access = ops.atomic.val;
> +
> + /* Invalidate cpu page table, so bo can migrate to smem in next access */
> + if (bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU ||
> + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL)
> + ttm_bo_unmap_virtual(&bo->ttm);
If already in SMEM, you don't need to unmap, do you?
Matt
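The guard Thomas is suggesting can be sketched as a small predicate. This is a simplified, hypothetical model (the enum and function names are illustrative, not the Xe driver's actual API): the CPU page tables only need zapping when a migration to SMEM is actually pending.

```c
#include <assert.h>
#include <stdbool.h>

enum placement { PLACE_SMEM, PLACE_VRAM };

/* Returns true when the CPU page tables should be zapped so the next
 * CPU access migrates the BO to SMEM; a BO already resident in SMEM
 * has nothing to migrate, so the unmap can be skipped. */
static bool need_cpu_unmap(enum placement cur, bool cpu_or_global_atomic)
{
	return cpu_or_global_atomic && cur != PLACE_SMEM;
}
```

In the real driver the current placement would come from the BO's TTM resource, not an enum like this.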
> +
> + xe_bo_unlock(bo);
> + }
> return 0;
> }
>
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-05-02 14:00 ` Thomas Hellström
@ 2025-05-20 8:13 ` Ghimiray, Himal Prasad
2025-05-20 8:49 ` Ghimiray, Himal Prasad
1 sibling, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-20 8:13 UTC (permalink / raw)
To: Thomas Hellström, intel-xe; +Cc: matthew.brost
On 02-05-2025 19:30, Thomas Hellström wrote:
> On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
>> This commit introduces a new madvise interface to support
>> driver-specific ioctl operations. The madvise interface allows for
>> more
>> efficient memory management by providing hints to the driver about
>> the
>> expected memory usage and pte update policy for gpuvma.
>>
>> Signed-off-by: Himal Prasad Ghimiray
>> <himal.prasad.ghimiray@intel.com>
>> ---
>> include/uapi/drm/xe_drm.h | 97
>> +++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 97 insertions(+)
>>
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index 9c08738c3b91..aaf515df3a83 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -81,6 +81,7 @@ extern "C" {
>> * - &DRM_IOCTL_XE_EXEC
>> * - &DRM_IOCTL_XE_WAIT_USER_FENCE
>> * - &DRM_IOCTL_XE_OBSERVATION
>> + * - &DRM_IOCTL_XE_MADVISE
>> */
>>
>> /*
>> @@ -102,6 +103,7 @@ extern "C" {
>> #define DRM_XE_EXEC 0x09
>> #define DRM_XE_WAIT_USER_FENCE 0x0a
>> #define DRM_XE_OBSERVATION 0x0b
>> +#define DRM_XE_MADVISE 0x0c
>>
>> /* Must be kept compact -- no holes */
>>
>> @@ -117,6 +119,7 @@ extern "C" {
>> #define
>> DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
>> #define
>> DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE,structdrm_xe_wait_user_fence)
>> #define
>> DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION,structdrm_xe_observation_param)
>> +#define
>> DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, structdrm_xe_madvise)
>>
>> /**
>> * DOC: Xe IOCTL Extensions
>> @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
>> __u64 sampling_rates[];
>> };
>>
>> +struct drm_xe_madvise_ops {
>
> Suggest using extensions also for the ops, like for vm_bind, since we
> might come up with complicated ops in the future that don't fit the
> union + resvd below.
Sure
>
>> + /** @start: start of the virtual address range */
>> + __u64 start;
>> +
>> + /** @size: size of the virtual address range */
>> + __u64 range;
>> +
>> +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
>
> Is UMD currently really using and exercising PREFERRED_LOC? If not, I
> suggest removing this op and invent a reasonable default behaviour
> until multi-device is in place.
>
>> +#define DRM_XE_VMA_ATTR_ATOMIC 1
>> +#define DRM_XE_VMA_ATTR_PAT 2
>> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
>> + /** @type: type of attribute */
>> + __u32 type;
>> +
>> + /** @pad: MBZ */
>> + __u32 pad;
>> +
>> + union {
>> + struct {
>> +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
>> +#define DRM_XE_VMA_ATOMIC_DEVICE 1
>> +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
>> +#define DRM_XE_VMA_ATOMIC_CPU 3
>> + /** @val: value of atomic operation*/
>> + __u32 val;
>> +
>> + /** @reserved: Reserved */
>> + __u32 reserved;
>> + } atomic;
>> +
>> + struct {
>> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
>> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
>> +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
>
> I think the purged state, at least on i915 was only known to the KMD
> (so shouldn't really be visible in this header). Also we should
> probably define the semantics here if
>
> a) There are multiple gpu vms with conflicting purgeable state.
If even one VM still marks the buffer WILLNEED, we play it safe
and keep it around.
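That "play it safe" rule reduces to an all-of check. A minimal sketch, with hypothetical names (the real purgeable-state plumbing is still under discussion above): a BO is only purgeable once every VM mapping it has said DONTNEED.

```c
#include <assert.h>
#include <stdbool.h>

enum purge_state { WILLNEED = 0, DONTNEED = 1 };

/* A single WILLNEED from any VM keeps the buffer resident. */
static bool bo_purgeable(const enum purge_state *vm_states, int num_vms)
{
	int i;

	for (i = 0; i < num_vms; i++)
		if (vm_states[i] == WILLNEED)
			return false;
	return true;
}
```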
> b) What happens if we call dontneed and the bo is deeply pipelined?
Return -EBUSY?
> c) What if a willneed madvise fails due to the bo being purged? And
> that op is embedded in an array of unrelated ops? Should it really fail
> the whole IOCTL?
Either drop the array-of-ops handling in the ioctl, or return a status for
each op? I'm not sure which is better here.
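The per-op-status alternative mentioned above could look something like the following. This is only a hedged sketch of the idea, not proposed uAPI: each op records its own result (written back to userspace), so one purged BO doesn't fail unrelated ops in the array.

```c
#include <assert.h>

/* Hypothetical op with a per-op status field written back to userspace. */
struct madvise_op {
	int type;
	int status;
};

/* Stand-in for applying one op; negative type simulates a failure. */
static int apply_op(struct madvise_op *op)
{
	return op->type < 0 ? -22 /* -EINVAL */ : 0;
}

/* Apply all ops, never aborting early; returns how many failed so the
 * caller can decide whether the ioctl as a whole reports an error. */
static int apply_ops(struct madvise_op *ops, int n)
{
	int i, failed = 0;

	for (i = 0; i < n; i++) {
		ops[i].status = apply_op(&ops[i]);
		if (ops[i].status)
			failed++;
	}
	return failed;
}
```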
>
>> + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE
>> */
>> + __u32 val;
>> +
>> + /** @reserved: Reserved */
>> + __u32 reserved;
>> + } purge_state_val;
>> +
>> + struct {
>> + /** @pat_index */
>> + __u32 val;
>> +
>> + /** @reserved: Reserved */
>> + __u32 reserved;
>> + } pat_index;
>> +
>> + /** @preferred_mem_loc: preferred memory location */
>> + struct {
>> + __u32 devmem_fd;
>> +
>> +#define MIGRATE_ALL_PAGES 0
>> +#define MIGRATE_ONLY_SYSTEM_PAGES 1
>> + __u32 migration_policy;
>> + } preferred_mem_loc;
>> + };
>> +
>> + /** @reserved: Reserved */
>> + __u64 reserved[2];
>> +};
>> +
>> +/**
>> + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
>> + *
>> + * Set memory attributes to a virtual address range
>> + */
>> +struct drm_xe_madvise {
>> + /** @extensions: Pointer to the first extension struct, if
>> any */
>> + __u64 extensions;
>> +
>> + /** @vm_id: vm_id of the virtual range */
>> + __u32 vm_id;
>> +
>> + /** @num_ops: number of madvises in ioctl */
>> + __u32 num_ops;
>
> Should we really support an array of ops here given the experience we
> had with rollbacks on VM_bind? Also WRT this, also please see the
> purgeable state above.
>
>
>
>
>> +
>> + union {
>> + /** @ops: used if num_ops == 1 */
>> + struct drm_xe_madvise_ops ops;
>> +
>> + /**
>> + * @vector_of_ops: userptr to array of struct
>> + * drm_xe_vm_madvise_op if num_ops > 1
>> + */
>> + __u64 vector_of_ops;
>> + };
>> +
>> + /** @reserved: Reserved */
>> + __u64 reserved[2];
>> +
>> +};
>> +
>> #if defined(__cplusplus)
>> }
>> #endif
>
> /Thomas
>
* Re: [PATCH v2 17/32] drm/xe/uapi: Add madvise interface
2025-05-02 14:00 ` Thomas Hellström
2025-05-20 8:13 ` Ghimiray, Himal Prasad
@ 2025-05-20 8:49 ` Ghimiray, Himal Prasad
1 sibling, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-20 8:49 UTC (permalink / raw)
To: Thomas Hellström, intel-xe; +Cc: matthew.brost
On 02-05-2025 19:30, Thomas Hellström wrote:
> On Mon, 2025-04-07 at 15:47 +0530, Himal Prasad Ghimiray wrote:
>> This commit introduces a new madvise interface to support
>> driver-specific ioctl operations. The madvise interface allows for
>> more
>> efficient memory management by providing hints to the driver about
>> the
>> expected memory usage and pte update policy for gpuvma.
>>
>> Signed-off-by: Himal Prasad Ghimiray
>> <himal.prasad.ghimiray@intel.com>
>> ---
>> include/uapi/drm/xe_drm.h | 97
>> +++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 97 insertions(+)
>>
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index 9c08738c3b91..aaf515df3a83 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -81,6 +81,7 @@ extern "C" {
>> * - &DRM_IOCTL_XE_EXEC
>> * - &DRM_IOCTL_XE_WAIT_USER_FENCE
>> * - &DRM_IOCTL_XE_OBSERVATION
>> + * - &DRM_IOCTL_XE_MADVISE
>> */
>>
>> /*
>> @@ -102,6 +103,7 @@ extern "C" {
>> #define DRM_XE_EXEC 0x09
>> #define DRM_XE_WAIT_USER_FENCE 0x0a
>> #define DRM_XE_OBSERVATION 0x0b
>> +#define DRM_XE_MADVISE 0x0c
>>
>> /* Must be kept compact -- no holes */
>>
>> @@ -117,6 +119,7 @@ extern "C" {
>> #define
>> DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
>> #define
>> DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE,structdrm_xe_wait_user_fence)
>> #define
>> DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION,structdrm_xe_observation_param)
>> +#define
>> DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, structdrm_xe_madvise)
>>
>> /**
>> * DOC: Xe IOCTL Extensions
>> @@ -1965,6 +1968,100 @@ struct drm_xe_query_eu_stall {
>> __u64 sampling_rates[];
>> };
>>
>> +struct drm_xe_madvise_ops {
>
> Suggest using extensions also for the ops, like for vm_bind, since we
> might come up with complicated ops in the future that don't fit the
> union + resvd below.
>
>> + /** @start: start of the virtual address range */
>> + __u64 start;
>> +
>> + /** @size: size of the virtual address range */
>> + __u64 range;
>> +
>> +#define DRM_XE_VMA_ATTR_PREFERRED_LOC 0
>
> Is UMD currently really using and exercising PREFERRED_LOC? If not, I
> suggest removing this op and invent a reasonable default behaviour
> until multi-device is in place.
Missed this in the previous reply.
The default preferred location is VRAM (tile0). As of now, in the
absence of multi-device support, UMDs use this to fall back to SMEM as
the preferred location by passing an invalid devmem_fd.
Current behaviour: if an invalid devmem_fd is passed -> use SMEM as the
preferred location. With multi-device in place:
if an invalid devmem_fd is passed -> use local VRAM as the preferred location.
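The fallback described above can be summarised in a small decision function. This is purely an illustrative model of the stated behaviour (the enum, names, and "fd < 0 means invalid" convention are assumptions, not the actual Xe code):

```c
#include <assert.h>
#include <stdbool.h>

enum mem_region { REGION_SMEM = 0, REGION_VRAM = 1 };

/* Invalid devmem_fd selects SMEM today; once multi-device lands it
 * would select the device's local VRAM instead. A valid fd resolves to
 * a drm_pagemap and hence to a VRAM region. */
static enum mem_region preferred_region(int devmem_fd, bool multi_device)
{
	if (devmem_fd < 0)
		return multi_device ? REGION_VRAM : REGION_SMEM;
	return REGION_VRAM;
}
```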
>
>> +#define DRM_XE_VMA_ATTR_ATOMIC 1
>> +#define DRM_XE_VMA_ATTR_PAT 2
>> +#define DRM_XE_VMA_ATTR_PURGEABLE_STATE 3
>> + /** @type: type of attribute */
>> + __u32 type;
>> +
>> + /** @pad: MBZ */
>> + __u32 pad;
>> +
>> + union {
>> + struct {
>> +#define DRM_XE_VMA_ATOMIC_UNDEFINED 0
>> +#define DRM_XE_VMA_ATOMIC_DEVICE 1
>> +#define DRM_XE_VMA_ATOMIC_GLOBAL 2
>> +#define DRM_XE_VMA_ATOMIC_CPU 3
>> + /** @val: value of atomic operation*/
>> + __u32 val;
>> +
>> + /** @reserved: Reserved */
>> + __u32 reserved;
>> + } atomic;
>> +
>> + struct {
>> +#define DRM_XE_VMA_PURGEABLE_STATE_WILLNEED 0
>> +#define DRM_XE_VMA_PURGEABLE_STATE_DONTNEED 1
>> +#define DRM_XE_VMA_PURGEABLE_STATE_PURGED 2
>
> I think the purged state, at least on i915 was only known to the KMD
> (so shouldn't really be visible in this header). Also we should
> probably define the semantics here if
>
> a) There are multiple gpu vms with conflicting purgeable state.
> b) What happens if we call dontneed and the bo is deeply pipelined?
> c) What if a willneed madvise fails due to the bo being purged? And
> that op is embedded in an array of unrelated ops? Should it really fail
> the whole IOCTL?
>
>> + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE
>> */
>> + __u32 val;
>> +
>> + /** @reserved: Reserved */
>> + __u32 reserved;
>> + } purge_state_val;
>> +
>> + struct {
>> + /** @pat_index */
>> + __u32 val;
>> +
>> + /** @reserved: Reserved */
>> + __u32 reserved;
>> + } pat_index;
>> +
>> + /** @preferred_mem_loc: preferred memory location */
>> + struct {
>> + __u32 devmem_fd;
>> +
>> +#define MIGRATE_ALL_PAGES 0
>> +#define MIGRATE_ONLY_SYSTEM_PAGES 1
>> + __u32 migration_policy;
>> + } preferred_mem_loc;
>> + };
>> +
>> + /** @reserved: Reserved */
>> + __u64 reserved[2];
>> +};
>> +
>> +/**
>> + * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
>> + *
>> + * Set memory attributes to a virtual address range
>> + */
>> +struct drm_xe_madvise {
>> + /** @extensions: Pointer to the first extension struct, if
>> any */
>> + __u64 extensions;
>> +
>> + /** @vm_id: vm_id of the virtual range */
>> + __u32 vm_id;
>> +
>> + /** @num_ops: number of madvises in ioctl */
>> + __u32 num_ops;
>
> Should we really support an array of ops here given the experience we
> had with rollbacks on VM_bind? Also WRT this, also please see the
> purgeable state above.
>
>
>
>
>> +
>> + union {
>> + /** @ops: used if num_ops == 1 */
>> + struct drm_xe_madvise_ops ops;
>> +
>> + /**
>> + * @vector_of_ops: userptr to array of struct
>> + * drm_xe_vm_madvise_op if num_ops > 1
>> + */
>> + __u64 vector_of_ops;
>> + };
>> +
>> + /** @reserved: Reserved */
>> + __u64 reserved[2];
>> +
>> +};
>> +
>> #if defined(__cplusplus)
>> }
>> #endif
>
> /Thomas
>
* Re: [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma
2025-05-14 18:36 ` Matthew Brost
@ 2025-05-20 9:27 ` Ghimiray, Himal Prasad
2025-05-27 17:37 ` Matthew Brost
0 siblings, 1 reply; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-20 9:27 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 00:06, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:05PM +0530, Himal Prasad Ghimiray wrote:
>> The attribute of xe_vma will determine the migration policy and the
>> encoding of the page table entries (PTEs) for that vma.
>> This attribute helps manage how memory pages are moved and how their
>> addresses are translated. It will be used by madvise to set the
>> behavior of the vma.
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_vm.c | 6 ++++++
>> drivers/gpu/drm/xe/xe_vm_types.h | 20 ++++++++++++++++++++
>> 2 files changed, 26 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 27a8dbe709c2..1ff9e477e061 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2470,6 +2470,12 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
>> vma = ERR_PTR(err);
>> }
>>
>> + /*TODO: assign devmem_fd of local vram once multi device
>> + * support is added.
>> + */
>> + vma->attr.preferred_loc.devmem_fd = 1;
>
> Assigning a value of '1' is a bit odd... I'd prefer using a define or
> something similar to indicate the intended behavior. I noticed a few
> other assignments to '1' in the final result—same comment applies to
> those.
Sure
>
>> + vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
>> +
>> return vma;
>> }
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
>> index d3c1209348e9..5f5feffecb82 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>> @@ -77,6 +77,19 @@ struct xe_userptr {
>> #endif
>> };
>>
>> +/**
>> + * struct xe_vma_mem_attr - memory attributes associated with vma
>> + */
>> +struct xe_vma_mem_attr {
>> + /** @preferred_loc: perferred memory_location*/
>> + struct {
>> + u32 migration_policy; /* represents migration policies */
>> + u32 devmem_fd; /* devmem_fd used for determining pagemap_fd requested by user */
>> + } preferred_loc;
>
> I'm a little unclear on how these variables work.
>
> In the uAPI for migration_policy, I see MIGRATE_ALL_PAGES and
> MIGRATE_ONLY_SYSTEM_PAGES (these should probably be normalized with a
> DRM_XE_* prefix, by the way), but it's unclear to me what exactly these
> mean or how they're used based on the final result—could you clarify?
With multi-device support, the idea was to have the flexibility to
migrate only system pages to the preferred location, or to also move
pages from other VRAM locations to the preferred location.
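The two policies described above amount to a per-page filter. A minimal sketch, with names mirroring the uAPI defines but otherwise hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

enum migration_policy {
	MIGRATE_ALL_PAGES,		/* also pull pages out of other VRAM regions */
	MIGRATE_ONLY_SYSTEM_PAGES,	/* leave pages already in VRAM alone */
};

/* Decide whether a given page should migrate to the preferred location. */
static bool should_migrate_page(enum migration_policy policy, bool page_in_vram)
{
	if (policy == MIGRATE_ONLY_SYSTEM_PAGES && page_in_vram)
		return false;
	return true;
}
```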
>
> Likewise, I'm confused about the devmem_fd usage. It can either be
> assigned a devmem_fd from the uAPI, but in some cases, it's interpreted
> as a region. I assume this is anticipating multi-GPU support, but again,
> the plan isn't clear to me. Could you explain?
The devmem_fd is intended to be used to determine the struct drm_pagemap
*, which in turn identifies the tile whose VRAM is used for allocation
and binding. The changes that introduce the
devmem_fd->drm_pagemap->tile [1] linkage will be part of the upcoming
multi-GPU support.
To ensure that the current changes are easily scalable and can be
extended for multi-GPU support, I am defining devmem_fd in the uAPI and
using it in the KMD as a region placeholder until multi-GPU support is
integrated.
[1] https://patchwork.freedesktop.org/patch/642773/?series=146227&rev=1
>
> In general I agree with the idea of xe_vma_mem_attr though.
>
> Matt
>
>> + /** @atomic_access: The atomic access type for the vma */
>> + u32 atomic_access;
>> +};
>> +
>> struct xe_vma {
>> /** @gpuva: Base GPUVA object */
>> struct drm_gpuva gpuva;
>> @@ -128,6 +141,13 @@ struct xe_vma {
>> * Needs to be signalled before UNMAP can be processed.
>> */
>> struct xe_user_fence *ufence;
>> +
>> + /**
>> + * @attr: The attributes of vma which determines the migration policy
>> + * and encoding of the PTEs for this vma.
>> + */
>> + struct xe_vma_mem_attr attr;
>> +
>> };
>>
>> /**
>> --
>> 2.34.1
>>
* Re: [PATCH v2 20/32] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
2025-05-14 18:40 ` Matthew Brost
@ 2025-05-20 9:28 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-20 9:28 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 00:10, Matthew Brost wrote:
> On Mon, May 12, 2025 at 07:36:56PM -0700, Matthew Brost wrote:
>> On Mon, Apr 07, 2025 at 03:47:07PM +0530, Himal Prasad Ghimiray wrote:
>>> This change simplifies the logic by ensuring that remapped previous or
>>> next VMAs are created with the same memory attributes as the original VMA.
>>> By passing struct xe_vma_mem_attr as a parameter, we maintain consistency
>>> in memory attributes.
>>>
>>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/xe_vm.c | 37 ++++++++++++++++++++++++++-----------
>>> 1 file changed, 26 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>> index 59e2a951db25..6e5ba58d475e 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>> @@ -2421,8 +2421,16 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>>>
>>> ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
>>>
>>> +static void cp_mem_attr(struct xe_vma_mem_attr *dst, struct xe_vma_mem_attr *src)
>>
>> Drive by commment - not need.
>>
>> memcopy(dst, src, sizeof(*src));
>>
>
> Actually you can just do:
>
> *dst = *src;
Noted
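For reference, plain struct assignment copies every member, including nested structs, which is why it replaces the field-by-field helper. A small standalone demonstration with a simplified model of the attribute struct (the real one is struct xe_vma_mem_attr in xe_vm_types.h):

```c
#include <assert.h>

/* Simplified stand-in for struct xe_vma_mem_attr. */
struct mem_attr {
	struct {
		unsigned int migration_policy;
		unsigned int devmem_fd;
	} preferred_loc;
	unsigned int atomic_access;
	unsigned int pat_index;
};

static void copy_attr(struct mem_attr *dst, const struct mem_attr *src)
{
	*dst = *src;	/* one assignment copies all four fields */
}
```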
>
>> Matt
>>
>>> +{
>>> + dst->preferred_loc.migration_policy = src->preferred_loc.migration_policy;
>>> + dst->preferred_loc.devmem_fd = src->preferred_loc.devmem_fd;
>>> + dst->atomic_access = src->atomic_access;
>>> + dst->pat_index = src->pat_index;
>>> +}
>>> +
>>> static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
>>> - u16 pat_index, unsigned int flags)
>>> + struct xe_vma_mem_attr attr, unsigned int flags)
>>> {
>>> struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
>>> struct drm_exec exec;
>>> @@ -2451,7 +2459,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
>>> }
>>> vma = xe_vma_create(vm, bo, op->gem.offset,
>>> op->va.addr, op->va.addr +
>>> - op->va.range - 1, pat_index, flags);
>>> + op->va.range - 1, attr.pat_index, flags);
>>> if (IS_ERR(vma))
>>> goto err_unlock;
>>>
>>> @@ -2468,14 +2476,10 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
>>> prep_vma_destroy(vm, vma, false);
>>> xe_vma_destroy_unlocked(vma);
>>> vma = ERR_PTR(err);
>>> + } else {
>>> + cp_mem_attr(&vma->attr, &attr);
>>> }
>>>
>>> - /*TODO: assign devmem_fd of local vram once multi device
>>> - * support is added.
>>> - */
>>> - vma->attr.preferred_loc.devmem_fd = 1;
>>> - vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
>>> -
>>> return vma;
>>> }
>>>
>>> @@ -2600,6 +2604,17 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>>> switch (op->base.op) {
>>> case DRM_GPUVA_OP_MAP:
>>> {
>>> + struct xe_vma_mem_attr default_attr = {
>>> + .preferred_loc = {
>>> + /*TODO: assign devmem_fd of local vram
>>> + * once multi device support is added.
>>> + */
>>> + .devmem_fd = IS_DGFX(vm->xe) ? 1 : 0,
>>> + .migration_policy = 1, },
>
> Where are a couple of magic '1' which I suggested to avoid patch 18.
> Same question as patch 18 on the usage of these.
>
> In general I think this patch makes sense if the to previous patches
> land.
>
> Matt
>
>>> + .atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED,
>>> + .pat_index = op->map.pat_index
>>> + };
>>> +
>>> flags |= op->map.read_only ?
>>> VMA_CREATE_FLAG_READ_ONLY : 0;
>>> flags |= op->map.is_null ?
>>> @@ -2609,7 +2624,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>>> flags |= op->map.is_cpu_addr_mirror ?
>>> VMA_CREATE_FLAG_IS_SYSTEM_ALLOCATOR : 0;
>>>
>>> - vma = new_vma(vm, &op->base.map, op->map.pat_index,
>>> + vma = new_vma(vm, &op->base.map, default_attr,
>>> flags);
>>> if (IS_ERR(vma))
>>> return PTR_ERR(vma);
>>> @@ -2657,7 +2672,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>>>
>>> if (op->base.remap.prev) {
>>> vma = new_vma(vm, op->base.remap.prev,
>>> - old->attr.pat_index, flags);
>>> + old->attr, flags);
>>> if (IS_ERR(vma))
>>> return PTR_ERR(vma);
>>>
>>> @@ -2687,7 +2702,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>>>
>>> if (op->base.remap.next) {
>>> vma = new_vma(vm, op->base.remap.next,
>>> - old->attr.pat_index, flags);
>>> + old->attr, flags);
>>> if (IS_ERR(vma))
>>> return PTR_ERR(vma);
>>>
>>> --
>>> 2.34.1
>>>
* Re: [PATCH v2 22/32] drm/xe/svm: Split system allocator vma incase of madvise call
2025-05-14 19:01 ` Matthew Brost
@ 2025-05-20 9:46 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-20 9:46 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 00:31, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:09PM +0530, Himal Prasad Ghimiray wrote:
>> If the start or end of input address range lies within system allocator
>> vma split the vma to create new vma's as per input range.
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_vm.c | 84 ++++++++++++++++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_vm.h | 2 +
>> 2 files changed, 86 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 6e5ba58d475e..c7c012afe9eb 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -4127,3 +4127,87 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
>> }
>> kvfree(snap);
>> }
>> +
>> +int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>> +{
>> + struct xe_vma_ops vops;
>> + struct drm_gpuva_ops *ops = NULL;
>> + struct drm_gpuva_op *__op;
>> + bool is_cpu_addr_mirror = false;
>> + int err;
>> +
>> + vm_dbg(&vm->xe->drm, "MADVISE IN: addr=0x%016llx, size=0x%016llx", start, range);
>> +
>
> lockdep assert for vm->lock, I assume we should have it in write mode
> here.
Noted
>
>> + if (start & ~PAGE_MASK)
>> + start = ALIGN_DOWN(start, SZ_4K);
>> +
>> + if (range & ~PAGE_MASK)
>> + range = ALIGN(range, SZ_4K);
>
> We discussed this - not needed as UMD should align to GPU page size.
>
BTW - mismatch of PAGE_MASK and SZ_4K - they mean different things, as
PAGE_SIZE depends on the CPU arch. We do this all over the driver, but
let's not make it worse.
>
Sure
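Aligning consistently to the fixed 4 KiB GPU page size, rather than mixing in the CPU-arch-dependent PAGE_MASK, could look like this. A hedged sketch only (the helpers are hypothetical; the kernel has ALIGN/ALIGN_DOWN macros for this):

```c
#include <assert.h>
#include <stdint.h>

#define SZ_4K 0x1000ull

/* Round an address down to a 4 KiB boundary. */
static uint64_t align_down_4k(uint64_t addr)
{
	return addr & ~(SZ_4K - 1);
}

/* Round a range up to a 4 KiB boundary. */
static uint64_t align_up_4k(uint64_t range)
{
	return (range + SZ_4K - 1) & ~(SZ_4K - 1);
}
```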
>> +
>> + vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
>> + ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, start, range,
>> + DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE,
>> + NULL, start);
>> + if (IS_ERR(ops)) {
>> + err = PTR_ERR(ops);
>> + goto unwind_ops;
>
> You don't need to unwind here, you can just return.
true
>
>> + }
>> +
>> + if (list_empty(&ops->list)) {
>> + err = 0;
>> + goto free_ops;
>> + }
>> +
>> + drm_gpuva_for_each_op(__op, ops) {
>> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>> +
>> + if (__op->op == DRM_GPUVA_OP_REMAP) {
>> + if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
>> + is_cpu_addr_mirror = true;
>> + else
>> + is_cpu_addr_mirror = false;
>> + }
>> +
>> + if (__op->op == DRM_GPUVA_OP_MAP)
>> + op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
>> +
>
> I think this is right, but it took a minute to remember why this code is
> needed. I think we need a comment here on is_cpu_addr_mirror usage.
Sure
>
>> + print_op(vm->xe, __op);
>> + }
>> +
>> + xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
>> + err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
>> + if (err)
>> + goto unwind_ops;
>> +
>> + xe_vm_lock(vm, false);
>> +
>> + drm_gpuva_for_each_op(__op, ops) {
>> + struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
>> + struct xe_vma *vma;
>> + struct xe_vma_mem_attr temp_attr;
>> +
>> + if (__op->op == DRM_GPUVA_OP_UNMAP) {
>> + /* There should be no unmap */
>> + xe_assert(vm->xe, true);
>
> xe_assert(vm->xe, true) will never pop. How about...
>
> XE_WARN_ON("UNEXPECTED UNMAP");
Ok
>
>> + xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
>> + } else if (__op->op == DRM_GPUVA_OP_REMAP) {
>> + vma = gpuva_to_vma(op->base.remap.unmap->va);
>> + cp_mem_attr(&temp_attr, &vma->attr);
>
> See my comments about cp_mem_attr in a prior patch.
Yup
>
>> + xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
>> + } else if (__op->op == DRM_GPUVA_OP_MAP) {
>> + vma = op->map.vma;
>> + cp_mem_attr(&vma->attr, &temp_attr);
>> + }
>
> Also perhaps ref the comment above of why copying the attributes is
> needed.
Sure
>
> Matt
>
>> + }
>> +
>> + xe_vm_unlock(vm);
>> + drm_gpuva_ops_free(&vm->gpuvm, ops);
>> + return 0;
>> +
>> +unwind_ops:
>> + vm_bind_ioctl_ops_unwind(vm, &ops, 1);
>> +free_ops:
>> + if (ops)
>> + drm_gpuva_ops_free(&vm->gpuvm, ops);
>> + return err;
>> +}
>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>> index 99e164852f63..4e45230b7205 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.h
>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>> @@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>>
>> struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
>>
>> +int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>> +
>> /**
>> * to_userptr_vma() - Return a pointer to an embedding userptr vma
>> * @vma: Pointer to the embedded struct xe_vma
>> --
>> 2.34.1
>>
* Re: [PATCH v2 23/32] drm/xe: Implement madvise ioctl for xe
2025-05-14 21:41 ` Matthew Brost
@ 2025-05-20 10:15 ` Ghimiray, Himal Prasad
2025-05-28 5:22 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-20 10:15 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 03:11, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:10PM +0530, Himal Prasad Ghimiray wrote:
>> This driver-specific ioctl enables UMDs to control the memory attributes
>> for GPU VMAs within a specified input range. If the start or end
>> addresses fall within an existing VMA, the VMA is split accordingly. The
>> attributes of the VMA are modified as provided by the users. The old
>> mappings of the VMAs are invalidated, and TLB invalidation is performed
>> if necessary.
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/Makefile | 1 +
>> drivers/gpu/drm/xe/xe_device.c | 2 +
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 309 +++++++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_vm_madvise.h | 15 ++
>> 4 files changed, 327 insertions(+)
>> create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
>> create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
>>
>> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
>> index e4fec90bab55..3e83ae8b9dc1 100644
>> --- a/drivers/gpu/drm/xe/Makefile
>> +++ b/drivers/gpu/drm/xe/Makefile
>> @@ -117,6 +117,7 @@ xe-y += xe_bb.o \
>> xe_uc.o \
>> xe_uc_fw.o \
>> xe_vm.o \
>> + xe_vm_madvise.o \
>> xe_vram.o \
>> xe_vram_freq.o \
>> xe_vsec.o \
>> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
>> index d8e227ddf255..3e57300014bf 100644
>> --- a/drivers/gpu/drm/xe/xe_device.c
>> +++ b/drivers/gpu/drm/xe/xe_device.c
>> @@ -60,6 +60,7 @@
>> #include "xe_ttm_stolen_mgr.h"
>> #include "xe_ttm_sys_mgr.h"
>> #include "xe_vm.h"
>> +#include "xe_vm_madvise.h"
>> #include "xe_vram.h"
>> #include "xe_vsec.h"
>> #include "xe_wait_user_fence.h"
>> @@ -196,6 +197,7 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
>> DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
>> DRM_RENDER_ALLOW),
>> DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
>> + DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
>> };
>>
>> static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> new file mode 100644
>> index 000000000000..ef50031649e0
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -0,0 +1,309 @@
>> +// SPDX-License-Identifier: MIT
>> +/*
>> + * Copyright © 2024 Intel Corporation
>> + */
>> +
>> +#include "xe_vm_madvise.h"
>> +
>> +#include <linux/nospec.h>
>> +#include <drm/ttm/ttm_tt.h>
>> +#include <drm/xe_drm.h>
>> +
>> +#include "xe_bo.h"
>> +#include "xe_gt_tlb_invalidation.h"
>> +#include "xe_pt.h"
>> +#include "xe_svm.h"
>> +
>> +static struct xe_vma **get_vmas(struct xe_vm *vm, int *num_vmas,
>> + u64 addr, u64 range)
>> +{
>> + struct xe_vma **vmas, **__vmas;
>> + struct drm_gpuva *gpuva;
>> + int max_vmas = 8;
>> +
>> + lockdep_assert_held(&vm->lock);
>
> lockdep_assert_held_write
ok
>
>> +
>> + *num_vmas = 0;
>> + vmas = kmalloc_array(max_vmas, sizeof(*vmas), GFP_KERNEL);
>> + if (!vmas)
>> + return NULL;
>> +
>> + vm_dbg(&vm->xe->drm, "VMA's in range: start=0x%016llx, end=0x%016llx", addr, addr + range);
>> +
>> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, addr, addr + range) {
>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>> +
>> + if (*num_vmas == max_vmas) {
>> + max_vmas <<= 1;
>> + __vmas = krealloc(vmas, max_vmas * sizeof(*vmas), GFP_KERNEL);
>> + if (!__vmas) {
>> + kfree(vmas);
>> + return NULL;
>> + }
>> + vmas = __vmas;
>> + }
>> +
>> + vmas[*num_vmas] = vma;
>> + (*num_vmas)++;
>> + }
>> +
>> + vm_dbg(&vm->xe->drm, "*num_vmas = %d\n", *num_vmas);
>> +
>> + if (!*num_vmas) {
>> + kfree(vmas);
>> + return NULL;
>> + }
>> +
>> + return vmas;
>> +}
>> +
>> +static int madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
>> + struct xe_vma **vmas, int num_vmas,
>> + struct drm_xe_madvise_ops ops)
>> +{
>> + /* Implementation pending */
>> + return 0;
>> +}
>> +
>> +static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
>> + struct xe_vma **vmas, int num_vmas,
>> + struct drm_xe_madvise_ops ops)
>> +{
>> + /* Implementation pending */
>> + return 0;
>> +}
>> +
>> +static int madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>> + struct xe_vma **vmas, int num_vmas,
>> + struct drm_xe_madvise_ops ops)
>> +{
>> + /* Implementation pending */
>> + return 0;
>> +}
>> +
>> +static int madvise_purgeable_state(struct xe_device *xe, struct xe_vm *vm,
>> + struct xe_vma **vmas, int num_vmas,
>> + struct drm_xe_madvise_ops ops)
>> +{
>> + /* Implementation pending */
>> + return 0;
>> +}
>> +
>> +typedef int (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
>> + struct xe_vma **vmas, int num_vmas, struct drm_xe_madvise_ops ops);
>> +
>> +static const madvise_func madvise_funcs[] = {
>> + [DRM_XE_VMA_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
>> + [DRM_XE_VMA_ATTR_ATOMIC] = madvise_atomic,
>> + [DRM_XE_VMA_ATTR_PAT] = madvise_pat_index,
>> + [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = madvise_purgeable_state,
>> +};
>> +
>> +static void xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end, u8 *tile_mask)
>> +{
>> + struct drm_gpusvm_notifier *notifier;
>> + struct drm_gpuva *gpuva;
>> + struct xe_svm_range *range;
>> + struct xe_tile *tile;
>> + u64 adj_start, adj_end;
>> + u8 id;
>> +
>> + lockdep_assert_held(&vm->lock);
>
> lockdep_assert_held_write
>
>> +
ok
>
> /* Waiting on pending binds */
>
>> + if (dma_resv_wait_timeout(xe_vm_resv(vm), DMA_RESV_USAGE_BOOKKEEP,
>> + false, MAX_SCHEDULE_TIMEOUT) <= 0)
>> + XE_WARN_ON(1);
>> +
>> + down_write(&vm->svm.gpusvm.notifier_lock);
>> +
>
> xe_svm_notifier_lock
ok
>
>> + drm_gpusvm_for_each_notifier(notifier, &vm->svm.gpusvm, start, end) {
>> + struct drm_gpusvm_range *r = NULL;
>> +
>> + adj_start = max(start, notifier->itree.start);
>> + adj_end = min(end, notifier->itree.last + 1);
>> + drm_gpusvm_for_each_range(r, notifier, adj_start, adj_end) {
>> + range = to_xe_range(r);
>> + for_each_tile(tile, vm->xe, id) {
>> + if (xe_pt_zap_ptes_range(tile, vm, range)) {
>> + *tile_mask |= BIT(id);
>> + range->tile_invalidated |= BIT(id);
>> + }
>> + }
>> + }
>> + }
>> +
>> + up_write(&vm->svm.gpusvm.notifier_lock);
>> +
>
> xe_svm_notifier_unlock
>
Hmm
>> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>> +
>> + if (xe_vma_is_cpu_addr_mirror(vma))
>> + continue;
>> +
>> + if (xe_vma_is_userptr(vma)) {
>> + WARN_ON_ONCE(!mmu_interval_check_retry
>> + (&to_userptr_vma(vma)->userptr.notifier,
>> + to_userptr_vma(vma)->userptr.notifier_seq));
>> +
>> + WARN_ON_ONCE(!dma_resv_test_signaled(xe_vm_resv(xe_vma_vm(vma)),
>> + DMA_RESV_USAGE_BOOKKEEP));
>> + }
>> +
>> + if (xe_vma_bo(vma))
>> + xe_bo_lock(xe_vma_bo(vma), false);
>> +
>
> Do you need the BO's dma-resv lock here? I don't think you do. Maybe double
> check with Thomas on this one as I could be forgetting something here.
Sure
>
>> + for_each_tile(tile, vm->xe, id) {
>> + if (xe_pt_zap_ptes(tile, vma))
>> + *tile_mask |= BIT(id);
>> + }
>> +
>> + if (xe_vma_bo(vma))
>> + xe_bo_unlock(xe_vma_bo(vma));
>> + }
>> +}
>> +
>> +static void xe_vm_invalidate_madvise_range(struct xe_vm *vm, u64 start, u64 end)
>> +{
>> + struct xe_gt_tlb_invalidation_fence
>> + fence[XE_MAX_TILES_PER_DEVICE * XE_MAX_GT_PER_TILE];
>> + struct xe_tile *tile;
>> + u32 fence_id = 0;
>> + u8 tile_mask = 0;
>> + u8 id;
>> +
>> + xe_zap_ptes_in_madvise_range(vm, start, end, &tile_mask);
>> + if (!tile_mask)
>> + return;
>> +
>> + xe_device_wmb(vm->xe);
>> +
>
> We have the below pattern in a few places in the driver. I wonder if it
> time for a helper?
Makes sense
>
>> + for_each_tile(tile, vm->xe, id) {
>> + if (tile_mask & BIT(id)) {
>> + int err;
>> +
>> + xe_gt_tlb_invalidation_fence_init(tile->primary_gt,
>> + &fence[fence_id], true);
>> +
>> + err = xe_gt_tlb_invalidation_range(tile->primary_gt,
>> + &fence[fence_id],
>> + start,
>> + end,
>> + vm->usm.asid);
>> + if (WARN_ON_ONCE(err < 0))
>> + goto wait;
>> + ++fence_id;
>> +
>> + if (!tile->media_gt)
>> + continue;
>> +
>> + xe_gt_tlb_invalidation_fence_init(tile->media_gt,
>> + &fence[fence_id], true);
>> +
>> + err = xe_gt_tlb_invalidation_range(tile->media_gt,
>> + &fence[fence_id],
>> + start,
>> + end,
>> + vm->usm.asid);
>> + if (WARN_ON_ONCE(err < 0))
>> + goto wait;
>> + ++fence_id;
>> + }
>> + }
>> +
>> +wait:
>> + for (id = 0; id < fence_id; ++id)
>> + xe_gt_tlb_invalidation_fence_wait(&fence[id]);
>> +}
>> +
>> +static int input_ranges_same(struct drm_xe_madvise_ops *old,
>> + struct drm_xe_madvise_ops *new)
>> +{
>> + return (new->start == old->start && new->range == old->range);
>> +}
>> +
>> +int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>
> Kernel doc.
Sure
>
>> +{
>> + struct xe_device *xe = to_xe_device(dev);
>> + struct xe_file *xef = to_xe_file(file);
>> + struct drm_xe_madvise_ops *advs_ops;
>> + struct drm_xe_madvise *args = data;
>> + struct xe_vm *vm;
>> + struct xe_vma **vmas = NULL;
>> + int num_vmas, err = 0;
>> + int i, j, attr_type;
>> +
>> + if (XE_IOCTL_DBG(xe, args->num_ops < 1))
>> + return -EINVAL;
>> +
>> + vm = xe_vm_lookup(xef, args->vm_id);
>> + if (XE_IOCTL_DBG(xe, !vm))
>> + return -EINVAL;
>> +
>> + if (XE_IOCTL_DBG(xe, !xe_vm_in_fault_mode(vm))) {
>
> Do we want to restrict this fault mode? Maybe check with Mesa if they
> see any use cases.
Ok
>
>> + err = -EINVAL;
>> + goto put_vm;
>> + }
>> +
>> + down_write(&vm->lock);
>> +
>> + if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
>> + err = -ENOENT;
>> + goto unlock_vm;
>> + }
>> +
>> + if (args->num_ops > 1) {
>> + u64 __user *madvise_user = u64_to_user_ptr(args->vector_of_ops);
>> +
>> + advs_ops = kvmalloc_array(args->num_ops, sizeof(struct drm_xe_madvise_ops),
>> + GFP_KERNEL | __GFP_ACCOUNT |
>> + __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
>> + if (!advs_ops)
>> + return args->num_ops > 1 ? -ENOBUFS : -ENOMEM;
>> +
>> + err = __copy_from_user(advs_ops, madvise_user,
>> + sizeof(struct drm_xe_madvise_ops) *
>> + args->num_ops);
>> + if (XE_IOCTL_DBG(xe, err)) {
>> + err = -EFAULT;
>> + goto free_advs_ops;
>> + }
>> + } else {
>> + advs_ops = &args->ops;
>> + }
>> +
>> + for (i = 0; i < args->num_ops; i++) {
>> + xe_vm_alloc_madvise_vma(vm, advs_ops[i].start, advs_ops[i].range);
>> +
>> + vmas = get_vmas(vm, &num_vmas, advs_ops[i].start, advs_ops[i].range);
>> + if (!vmas) {
>> + err = -ENOMEM;
>> + goto unlock_vm;
>> + }
>> +
>> + attr_type = array_index_nospec(advs_ops[i].type, ARRAY_SIZE(madvise_funcs));
>> + err = madvise_funcs[attr_type](xe, vm, vmas, num_vmas, advs_ops[i]);
>> +
>> + kfree(vmas);
>> + vmas = NULL;
>> +
>> + if (err)
>> + break;
>> + }
>> +
>> + for (i = 0; i < args->num_ops; i++) {
>> + for (j = i + 1; j < args->num_ops; ++j) {
>> + if (input_ranges_same(&advs_ops[j], &advs_ops[i]))
>> + break;
>> + }
>
> The above loop doesn't look like it actually does anything.
My bad. I was intending to do:
if (input_ranges_same(&advs_ops[j], &advs_ops[i])) {
	needs_invalidation = false;
	break;
}
if (needs_invalidation)
	xe_vm_invalidate_madvise_range(vm, advs_ops[i].start,
				       advs_ops[i].start + advs_ops[i].range);
>> + }
>
> Matt
>
>> + xe_vm_invalidate_madvise_range(vm, advs_ops[i].start,
>> + advs_ops[i].start + advs_ops[i].range);
>> + }
>> +free_advs_ops:
>> + if (args->num_ops > 1)
>> + kvfree(advs_ops);
>> +unlock_vm:
>> + up_write(&vm->lock);
>> +put_vm:
>> + xe_vm_put(vm);
>> + return err;
>> +}
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
>> new file mode 100644
>> index 000000000000..c5cdd058c322
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
>> @@ -0,0 +1,15 @@
>> +/* SPDX-License-Identifier: MIT */
>> +/*
>> + * Copyright © 2024 Intel Corporation
>> + */
>> +
>> +#ifndef _XE_VM_MADVISE_H_
>> +#define _XE_VM_MADVISE_H_
>> +
>> +struct drm_device;
>> +struct drm_file;
>> +
>> +int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
>> + struct drm_file *file);
>> +
>> +#endif
>> --
>> 2.34.1
>>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 24/32] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
2025-05-14 19:20 ` Matthew Brost
@ 2025-05-20 10:21 ` Ghimiray, Himal Prasad
2025-05-27 17:32 ` Matthew Brost
0 siblings, 1 reply; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-20 10:21 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 00:50, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:11PM +0530, Himal Prasad Ghimiray wrote:
>> In the case of the MADVISE ioctl, if the start or end addresses fall
>> within a VMA and existing SVM ranges are present, remove the existing
>> SVM mappings. Then, continue with ops_parse to create new VMAs via REMAP,
>> unmapping the old one.
>>
>
> I'm quite confused why this patch is needed. Why is invalidating the
> ranges not sufficient?
How is madvise supposed to behave if the start or end of the input range
falls within an existing svm range?
For example, let's assume an svm_range of 2 MiB exists between offset
and offset + SZ_2M, and madvise is called with offset as start and
offset + SZ_1M as end. In this scenario the VMA boundaries will change,
so the previous svm_ranges need to be removed.
>
> Matt
>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_svm.c | 25 +++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_svm.h | 7 +++++++
>> drivers/gpu/drm/xe/xe_vm.c | 18 +++++++++++++++++-
>> 3 files changed, 49 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index 7ec7ecd7eb1f..efcba4b77250 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -903,6 +903,31 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
>> return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
>> }
>>
>> +/**
>> + * xe_svm_range_clean_if_addr_within - Clean SVM mappings and ranges
>> + * @start: start addr
>> + * @end: end addr
>> + *
>> + * This function cleans up svm ranges if start or end address are inside them.
>> + */
>> +void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end)
>> +{
>> + struct drm_gpusvm_notifier *notifier, *next;
>> +
>> + drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
>> + struct drm_gpusvm_range *range, *__next;
>> +
>> + drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
>> + if (start > drm_gpusvm_range_start(range) ||
>> + end < drm_gpusvm_range_end(range)) {
>> + if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
>> + drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
>> + __xe_svm_garbage_collector(vm, to_xe_range(range));
>> + }
>> + }
>> + }
>> +}
>> +
>> /**
>> * xe_svm_bo_evict() - SVM evict BO to system memory
>> * @bo: BO to evict
>> diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
>> index d5be8229ca7e..d00ba6d6ba53 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.h
>> +++ b/drivers/gpu/drm/xe/xe_svm.h
>> @@ -98,6 +98,8 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
>> bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
>> u32 region);
>>
>> +void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end);
>> +
>> /**
>> * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
>> * @range: SVM range
>> @@ -291,6 +293,11 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
>> return false;
>> }
>>
>> +static inline
>> +void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end)
>> +{
>> +}
>> +
>> #define xe_svm_assert_in_notifier(...) do {} while (0)
>> #define xe_svm_range_has_dma_mapping(...) false
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index c7c012afe9eb..92b8e0cac063 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2362,6 +2362,22 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>> op->map.pat_index = pat_index;
>> op->map.invalidate_on_bind =
>> __xe_vm_needs_clear_scratch_pages(vm, flags);
>> + } else if (__op->op == DRM_GPUVA_OP_REMAP) {
>> + struct xe_vma *old =
>> + gpuva_to_vma(op->base.remap.unmap->va);
>> + u64 start = xe_vma_start(old), end = xe_vma_end(old);
>> +
>> + if (op->base.remap.prev)
>> + start = op->base.remap.prev->va.addr +
>> + op->base.remap.prev->va.range;
>> + if (op->base.remap.next)
>> + end = op->base.remap.next->va.addr;
>> +
>> + if (xe_vma_is_cpu_addr_mirror(old) &&
>> + xe_svm_has_mapping(vm, start, end)) {
>> + drm_gpuva_ops_free(&vm->gpuvm, ops);
>> + return ERR_PTR(-EBUSY);
>> + }
>> } else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
>> struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
>>
>> @@ -2653,7 +2669,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
>>
>> if (xe_vma_is_cpu_addr_mirror(old) &&
>> xe_svm_has_mapping(vm, start, end))
>> - return -EBUSY;
>> + xe_svm_range_clean_if_addr_within(vm, start, end);
>>
>> op->remap.start = xe_vma_start(old);
>> op->remap.range = xe_vma_size(old);
>> --
>> 2.34.1
>>
* Re: [PATCH v2 25/32] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-05-14 22:21 ` Matthew Brost
@ 2025-05-20 10:22 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-20 10:22 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 03:51, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:12PM +0530, Himal Prasad Ghimiray wrote:
>> If the platform does not support atomic access on system memory, and the
>> ranges are in system memory, but the user requires atomic accesses on
>> the VMA, then migrate the ranges to VRAM. Apply this policy for prefetch
>> operations as well.
>>
>
> I think the baseline was changed a bit here, but I believe it mostly
> makes sense. Will review again on the rebase.
>
> A one nit below.
>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pt.c | 9 +++++++--
>> drivers/gpu/drm/xe/xe_svm.c | 14 ++++++++++++--
>> drivers/gpu/drm/xe/xe_vm.c | 2 ++
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 11 ++++++++++-
>> 4 files changed, 31 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
>> index 2479d830d90a..ba9b30b25ded 100644
>> --- a/drivers/gpu/drm/xe/xe_pt.c
>> +++ b/drivers/gpu/drm/xe/xe_pt.c
>> @@ -645,13 +645,18 @@ static bool xe_atomic_for_vram(struct xe_vm *vm)
>> return true;
>> }
>>
>> -static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
>> +static bool xe_atomic_for_system(struct xe_vm *vm,
>> + struct xe_bo *bo,
>> + struct xe_vma *vma)
>> {
>> struct xe_device *xe = vm->xe;
>>
>> if (!xe->info.has_device_atomics_on_smem)
>> return false;
>>
>> + if (vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
>> + return true;
>> +
>> /*
>> * If a SMEM+LMEM allocation is backed by SMEM, a device
>> * atomics will cause a gpu page fault and which then
>> @@ -745,7 +750,7 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
>>
>> if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
>> xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
>> - xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
>> + xe_walk.default_system_pte = xe_atomic_for_system(vm, bo, vma) ?
>> XE_USM_PPGTT_PTE_AE : 0;
>> }
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index efcba4b77250..d40111e29bfe 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -717,6 +717,16 @@ static bool supports_4K_migration(struct xe_device *xe)
>> return false;
>> }
>>
>> +static bool needs_ranges_in_vram_to_support_atomic(struct xe_device *xe, struct xe_vma *vma)
>> +{
>> + if (vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_UNDEFINED ||
>> + (xe->info.has_device_atomics_on_smem &&
>> + vma->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE))
>> + return false;
>> +
>> + return true;
>> +}
>> +
>> /**
>> * xe_svm_range_needs_migrate_to_vram() - SVM range needs migrate to VRAM or not
>> * @range: SVM range for which migration needs to be decided
>> @@ -735,7 +745,7 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
>> if (!range->base.flags.migrate_devmem)
>> return false;
>>
>> - needs_migrate = region;
>> + needs_migrate = needs_ranges_in_vram_to_support_atomic(vm->xe, vma) || region;
>>
>> if (needs_migrate && !IS_DGFX(vm->xe)) {
>> drm_warn(&vm->xe->drm, "Platform doesn't support VRAM\n");
>> @@ -828,7 +838,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>>
>> }
>>
>> - if (atomic)
>> + if (atomic && needs_ranges_in_vram_to_support_atomic(vm->xe, vma))
>> ctx.vram_only = 1;
>>
>> range_debug(range, "GET PAGES");
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index 92b8e0cac063..0f9c45ce82b4 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2930,6 +2930,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
>> for (i = 0; i < op->prefetch_range.ranges_count; i++) {
>> svm_range = xa_load(&op->prefetch_range.range, i);
>> if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
>> + region = region ? region : 1;
>> tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
>> err = xe_svm_alloc_vram(vm, tile, svm_range, &ctx);
>> if (err) {
>> @@ -2938,6 +2939,7 @@ static int prefetch_ranges_lock_and_prep(struct xe_vm *vm,
>> return -ENODATA;
>> }
>> xe_svm_range_debug(svm_range, "PREFETCH - RANGE MIGRATED TO VRAM");
>> + ctx.vram_only = 1;
>> }
>>
>> err = xe_svm_range_get_pages(vm, svm_range, &ctx);
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index ef50031649e0..7e1a95106cb9 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -69,7 +69,16 @@ static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
>> struct xe_vma **vmas, int num_vmas,
>> struct drm_xe_madvise_ops ops)
>> {
>> - /* Implementation pending */
>> + int i;
>> +
>> + xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_ATOMIC);
>> + xe_assert(vm->xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_UNDEFINED &&
>> + ops.atomic.val <= DRM_XE_VMA_ATOMIC_CPU);
>> + vm_dbg(&xe->drm, "attr_value = %d", ops.atomic.val);
>
> Again, I'm unsure if this debug message has a ton of value without
> knowing the VMA info.
Agreed, will address it in all places
>
> Matt
>
>> +
>> + for (i = 0; i < num_vmas; i++)
>> + vmas[i]->attr.atomic_access = ops.atomic.val;
>> + /*TODO: handle bo backed vmas */
>> return 0;
>> }
>>
>> --
>> 2.34.1
>>
* Re: [PATCH v2 26/32] drm/xe/madvise: Update migration policy based on preferred location
2025-05-14 22:04 ` Matthew Brost
@ 2025-05-21 8:50 ` Ghimiray, Himal Prasad
2025-05-21 16:51 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-21 8:50 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 03:34, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:13PM +0530, Himal Prasad Ghimiray wrote:
>> When the user sets a valid devmem_fd as the preferred location, a GPU
>> fault will trigger migration to the tile of the device associated with
>> the devmem_fd.
>>
>> If the user sets an invalid devmem_fd the preferred location is current
>> placement only.
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_svm.c | 15 ++++++++++++++-
>> drivers/gpu/drm/xe/xe_vm.h | 3 +++
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 20 +++++++++++++++++++-
>> 3 files changed, 36 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>> index d40111e29bfe..60dfb1bf12ca 100644
>> --- a/drivers/gpu/drm/xe/xe_svm.c
>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>> @@ -765,6 +765,12 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
>> return needs_migrate;
>> }
>>
>> +static const u32 region_to_mem_type[] = {
>> + XE_PL_TT,
>> + XE_PL_VRAM0,
>> + XE_PL_VRAM1,
>> +};
>> +
>> /**
>> * xe_svm_handle_pagefault() - SVM handle page fault
>> * @vm: The VM.
>> @@ -796,6 +802,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>> struct xe_tile *tile = gt_to_tile(gt);
>> int retry_count = 3;
>> ktime_t end = 0;
>> + u32 region;
>> int err;
>>
>> lockdep_assert_held_write(&vm->lock);
>> @@ -820,7 +827,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>>
>> range_debug(range, "PAGE FAULT");
>>
>> - if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
>> + region = vma->attr.preferred_loc.devmem_fd;
>
> Mentioned this earlier in the series - you are assiging a devmem_fd to a
> region which is a bit confusing.
Hmm. Agreed
>
>> +
>> + if (xe_svm_range_needs_migrate_to_vram(range, vma, region)) {
>> + region = region ? region : 1;
>
> I think the default (region unset) should be the VRAM closest to the GT
> of the fault.
True
>
>> + /* Need rework for multigpu */
>> + tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
>> +
>> err = xe_svm_alloc_vram(vm, tile, range, &ctx);
>> if (err) {
>> if (retry_count) {
>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>> index 4e45230b7205..377f62f859b7 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.h
>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>> @@ -220,6 +220,9 @@ int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
>>
>> int xe_vm_userptr_check_repin(struct xe_vm *vm);
>>
>> +bool xe_vma_has_preferred_mem_loc(struct xe_vma *vma,
>> + u32 *mem_region, u32 *devmem_fd);
>> +
>> int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
>> struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
>> u8 tile_mask);
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index 7e1a95106cb9..f870e8642190 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -61,7 +61,25 @@ static int madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
>> struct xe_vma **vmas, int num_vmas,
>> struct drm_xe_madvise_ops ops)
>> {
>> - /* Implementation pending */
>> + s32 devmem_fd;
>> + u32 migration_policy;
>> + int i;
>> +
>> + xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_PREFERRED_LOC);
>> + vm_dbg(&xe->drm, "migration policy = %d, devmem_fd = %d\n",
>> + ops.preferred_mem_loc.migration_policy,
>> + ops.preferred_mem_loc.devmem_fd);
>
> As mentioned in patch #27, I'm not sure this debug info is all that
> useful.
Agreed. Planning to drop preferred location in next version and let it
be brought in along with multi-device support.
>
>> +
>> + devmem_fd = (s32)ops.preferred_mem_loc.devmem_fd;
>> + devmem_fd = (devmem_fd < 0) ? 0 : devmem_fd;
>> +
>
> Why (devmem_fd < 0) ? 0? I'm not following this.
In case a negative (invalid) fd is passed, I wanted to have SMEM as the
preferred location.
>
>> + migration_policy = ops.preferred_mem_loc.migration_policy;
>> +
>
> Mentioned earlier in the series, I'm confused by migration_policy and it
> also looks to be unused unless I'm missing something?
This was supposed to be used with multi-device support.
> Matt
>
>> + for (i = 0; i < num_vmas; i++) {
>> + vmas[i]->attr.preferred_loc.devmem_fd = devmem_fd;
>> + vmas[i]->attr.preferred_loc.migration_policy = migration_policy;
>> + }
>> +
>> return 0;
>> }
>>
>> --
>> 2.34.1
>>
* Re: [PATCH v2 27/32] drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute
2025-05-14 21:52 ` Matthew Brost
@ 2025-05-21 8:51 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-21 8:51 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 03:22, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:14PM +0530, Himal Prasad Ghimiray wrote:
>> This attributes sets the pat_index for the svm used vma range, which is
>> utilized to ascertain the coherence.
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 9 ++++++++-
>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index f870e8642190..f4e0545937b0 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -104,7 +104,14 @@ static int madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>> struct xe_vma **vmas, int num_vmas,
>> struct drm_xe_madvise_ops ops)
>> {
>> - /* Implementation pending */
>> + int i;
>> +
>> + xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_PAT);
>> + vm_dbg(&xe->drm, "attr_value = %d", ops.pat_index.val);
>
> I don't think the above vm_dbg is all that helpful. If it was per VMA, I
> could see that being a bit more helpful.
Agreed.
>
> Otherwise LGTM.
>
> Matt
>
>> +
>> + for (i = 0; i < num_vmas; i++)
>> + vmas[i]->attr.pat_index = ops.pat_index.val;
>> +
>> return 0;
>> }
>>
>> --
>> 2.34.1
>>
* Re: [PATCH v2 28/32] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
2025-05-14 21:05 ` Matthew Brost
@ 2025-05-21 8:52 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-21 8:52 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 02:35, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:15PM +0530, Himal Prasad Ghimiray wrote:
>> Introduce flag DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC to ensure prefetching
>> in madvise-advised memory regions.
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> include/uapi/drm/xe_drm.h | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index aaf515df3a83..ab96dee25f6c 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -1111,6 +1111,7 @@ struct drm_xe_vm_bind_op {
>> /** @flags: Bind flags */
>> __u32 flags;
>>
>> +#define DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC -1
>
> Replied to wrong version before. Kernel doc.
>
> Otherwise uAPI LGTM.
Thanks
>
> Matt
>
>> /**
>> * @prefetch_mem_region_instance: Memory region to prefetch VMA to.
>> * It is a region instance, not a mask.
>> --
>> 2.34.1
>>
* Re: [PATCH v2 30/32] drm/xe/uapi: Add uapi for vma count and mem attributes
2025-05-14 21:08 ` Matthew Brost
@ 2025-05-21 8:54 ` Ghimiray, Himal Prasad
2025-05-28 16:18 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-21 8:54 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 02:38, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:17PM +0530, Himal Prasad Ghimiray wrote:
>> -DRM_IOCTL_XE_VM_QUERY_VMAS: Return number of VMAs in user-specified range.
>> -DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS: Fill VMA attributes in user-provided
>> buffer.
>>
>
> Replied to wrong version eariler...
>
>
> I can't remember if we landed on if this is needed? I thought the answer
> was - no not needed.
Will hold off on this until UMD confirms whether they need it or not.
>
> If it is needed could be make this a single IOCTL. e.g. Call in once
> with num_vmas == 0 + NULL vector, IOCTL returns num_vmas, then called
> again with num_vmas != 0 + non-NULL vector. Generally we try not to burn
> IOCTL numbers, rather overload functionality.
>
> Matt
>
>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_device.c | 2 +
>> drivers/gpu/drm/xe/xe_vm.c | 94 +++++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_vm.h | 3 +-
>> include/uapi/drm/xe_drm.h | 115 +++++++++++++++++++++++++++++++++
>> 4 files changed, 213 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
>> index 3e57300014bf..968c24c77241 100644
>> --- a/drivers/gpu/drm/xe/xe_device.c
>> +++ b/drivers/gpu/drm/xe/xe_device.c
>> @@ -198,6 +198,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
>> DRM_RENDER_ALLOW),
>> DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
>> DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
>> + DRM_IOCTL_DEF_DRV(XE_VM_QUERY_VMAS, xe_vm_query_vmas_ioctl, DRM_RENDER_ALLOW),
>> + DRM_IOCTL_DEF_DRV(XE_VM_QUERY_VMAS_ATTRS, xe_vm_query_vmas_attrs_ioctl, DRM_RENDER_ALLOW),
>> };
>>
>> static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index e5246c633e62..f1d4daf90efe 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -2165,6 +2165,100 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
>> return err;
>> }
>>
>> +int xe_vm_query_vmas_ioctl(struct drm_device *dev, void *data,
>> + struct drm_file *file)
>> +{
>> + struct xe_device *xe = to_xe_device(dev);
>> + struct xe_file *xef = to_xe_file(file);
>> + struct drm_xe_vm_query_num_vmas *args = data;
>> + struct drm_gpuva *gpuva;
>> + struct xe_vm *vm;
>> +
>> + vm = xe_vm_lookup(xef, args->vm_id);
>> + if (XE_IOCTL_DBG(xe, !vm))
>> + return -EINVAL;
>> +
>> + args->num_vmas = 0;
>> + down_write(&vm->lock);
>> +
>> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, args->start, args->start + args->range)
>> + args->num_vmas++;
>> +
>> + up_write(&vm->lock);
>> + return 0;
>> +}
>> +
>> +static int get_mem_attrs(struct xe_vm *vm, u32 *num_vmas, u64 start,
>> + u64 end, struct drm_xe_vma_mem_attr *mem_attrs)
>> +{
>> + struct drm_gpuva *gpuva;
>> + int i = 0;
>> +
>> + lockdep_assert_held(&vm->lock);
>> +
>> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>> +
>> + if (i == *num_vmas)
>> + return -EINVAL;
>> +
>> + mem_attrs[i].start = xe_vma_start(vma);
>> + mem_attrs[i].end = xe_vma_end(vma);
>> + mem_attrs[i].atomic.val = vma->attr.atomic_access;
>> + mem_attrs[i].pat_index.val = vma->attr.pat_index;
>> + mem_attrs[i].preferred_mem_loc.devmem_fd = vma->attr.preferred_loc.devmem_fd;
>> + mem_attrs[i].preferred_mem_loc.migration_policy = vma->attr.preferred_loc.migration_policy;
>> +
>> + i++;
>> + }
>> +
>> + if (i < (*num_vmas - 1))
>> + *num_vmas = i;
>> + return 0;
>> +}
>> +
>> +int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>> +{
>> + struct xe_device *xe = to_xe_device(dev);
>> + struct xe_file *xef = to_xe_file(file);
>> + struct drm_xe_vma_mem_attr *mem_attrs;
>> + struct drm_xe_vm_query_vmas_attr *args = data;
>> + u64 __user *attrs_user = NULL;
>> + struct xe_vm *vm;
>> + int err;
>> +
>> + if (XE_IOCTL_DBG(xe, args->num_vmas < 1))
>> + return -EINVAL;
>> +
>> + vm = xe_vm_lookup(xef, args->vm_id);
>> + if (XE_IOCTL_DBG(xe, !vm))
>> + return -EINVAL;
>> +
>> + down_write(&vm->lock);
>> +
>> + attrs_user = u64_to_user_ptr(args->vector_of_vma_mem_attr);
>> + mem_attrs = kvmalloc_array(args->num_vmas, sizeof(struct drm_xe_vma_mem_attr),
>> + GFP_KERNEL | __GFP_ACCOUNT |
>> + __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
>> + if (!mem_attrs)
>> + return args->num_vmas > 1 ? -ENOBUFS : -ENOMEM;
>> +
>> + err = get_mem_attrs(vm, &args->num_vmas, args->start,
>> + args->start + args->range, mem_attrs);
>> + if (err)
>> + goto free_mem_attrs;
>> +
>> + err = __copy_to_user(attrs_user, mem_attrs,
>> + sizeof(struct drm_xe_vma_mem_attr) * args->num_vmas);
>> +
>> +free_mem_attrs:
>> + kvfree(mem_attrs);
>> +
>> + up_write(&vm->lock);
>> +
>> + return err;
>> +}
>> +
>> static bool vma_matches(struct xe_vma *vma, u64 page_addr)
>> {
>> if (page_addr > xe_vma_end(vma) - 1 ||
>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>> index 377f62f859b7..0b2d6e9f77ef 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.h
>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>> @@ -193,7 +193,8 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
>> struct drm_file *file);
>> int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
>> struct drm_file *file);
>> -
>> +int xe_vm_query_vmas_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
>> +int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
>> void xe_vm_close_and_put(struct xe_vm *vm);
>>
>> static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>> index ab96dee25f6c..177ee3a1c20d 100644
>> --- a/include/uapi/drm/xe_drm.h
>> +++ b/include/uapi/drm/xe_drm.h
>> @@ -82,6 +82,8 @@ extern "C" {
>> * - &DRM_IOCTL_XE_WAIT_USER_FENCE
>> * - &DRM_IOCTL_XE_OBSERVATION
>> * - &DRM_IOCTL_XE_MADVISE
>> + * - &DRM_IOCTL_XE_VM_QUERY_VMAS
>> + * - &DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS
>> */
>>
>> /*
>> @@ -104,6 +106,8 @@ extern "C" {
>> #define DRM_XE_WAIT_USER_FENCE 0x0a
>> #define DRM_XE_OBSERVATION 0x0b
>> #define DRM_XE_MADVISE 0x0c
>> +#define DRM_XE_VM_QUERY_VMAS 0x0d
>> +#define DRM_XE_VM_QUERY_VMAS_ATTRS 0x0e
>>
>> /* Must be kept compact -- no holes */
>>
>> @@ -120,6 +124,8 @@ extern "C" {
>> #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
>> #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
>> #define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
>> +#define DRM_IOCTL_XE_VM_QUERY_VMAS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS, struct drm_xe_vm_query_num_vmas)
>> +#define DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS_ATTRS, struct drm_xe_vm_query_vmas_attr)
>>
>> /**
>> * DOC: Xe IOCTL Extensions
>> @@ -2063,6 +2069,115 @@ struct drm_xe_madvise {
>>
>> };
>>
>> +/**
>> + * struct drm_xe_vm_query_num_vmas - Input of &DRM_IOCTL_XE_VM_QUERY_VMAS
>> + *
>> + * Get number of vmas in virtual range of vm_id
>> + */
>> +struct drm_xe_vm_query_num_vmas {
>> + /** @extensions: Pointer to the first extension struct, if any */
>> + __u64 extensions;
>> +
>> + /** @vm_id: vm_id of the virtual range */
>> + __u32 vm_id;
>> +
>> + /** @num_vmas: number of vmas in range returned in @num_vmas */
>> + __u32 num_vmas;
>> +
>> + /** @start: start of the virtual address range */
>> + __u64 start;
>> +
>> + /** @size: size of the virtual address range */
>> + __u64 range;
>> +
>> + /** @reserved: Reserved */
>> + __u64 reserved[2];
>> +};
>> +
>> +struct drm_xe_vma_mem_attr {
>> + /** @extensions: Pointer to the first extension struct, if any */
>> + __u64 extensions;
>> +
>> + /** @start: start of the vma */
>> + __u64 start;
>> +
>> + /** @size: end of the vma */
>> + __u64 end;
>> +
>> + struct {
>> + struct {
>> + /** @val: value of atomic operation*/
>> + __u32 val;
>> +
>> + /** @reserved: Reserved */
>> + __u32 reserved;
>> + } atomic;
>> +
>> + struct {
>> + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
>> + __u32 val;
>> +
>> + /** @reserved: Reserved */
>> + __u32 reserved;
>> + } purge_state_val;
>> +
>> + struct {
>> + /** @pat_index */
>> + __u32 val;
>> +
>> + /** @reserved: Reserved */
>> + __u32 reserved;
>> + } pat_index;
>> +
>> + /** @preferred_mem_loc: preferred memory location */
>> + struct {
>> + __u32 devmem_fd;
>> +
>> + __u32 migration_policy;
>> + } preferred_mem_loc;
>> + };
>> +
>> + /** @reserved: Reserved */
>> + __u64 reserved[2];
>> +};
>> +
>> +/**
>> + * struct drm_xe_vm_query_vmas_attr - Input of &DRM_IOCTL_XE_VM_QUERY_MEM_ATTRIBUTES
>> + *
>> + * Get memory attributes to a virtual address range
>> + */
>> +struct drm_xe_vm_query_vmas_attr {
>> + /** @extensions: Pointer to the first extension struct, if any */
>> + __u64 extensions;
>> +
>> + /** @vm_id: vm_id of the virtual range */
>> + __u32 vm_id;
>> +
>> + /** @num_vmas: number of vmas in range returned in @num_vmas */
>> + __u32 num_vmas;
>> +
>> + /** @start: start of the virtual address range */
>> + __u64 start;
>> +
>> + /** @size: size of the virtual address range */
>> + __u64 range;
>> +
>> + union {
>> + /** @num_vmas: used if num_vmas == 1 */
>> + struct drm_xe_vma_mem_attr attr;
>> +
>> + /**
>> + * @vector_of_ops: userptr to array of struct
>> + * drm_xe_vma_mem_attr if num_vmas > 1
>> + */
>> + __u64 vector_of_vma_mem_attr;
>> + };
>> +
>> + /** @reserved: Reserved */
>> + __u64 reserved[2];
>> +
>> +};
>> +
>> #if defined(__cplusplus)
>> }
>> #endif
>> --
>> 2.34.1
>>
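As an editor's illustration (not part of the patch), the intended two-step flow of the query interface above — size the array with DRM_IOCTL_XE_VM_QUERY_VMAS, then fetch attributes with DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS — can be sketched with stubbed-out ioctls. The struct layouts, the helper names, and the three-VMA result below are hypothetical stand-ins, not the real uapi:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-ins for the uapi structs in the patch. */
struct vma_mem_attr {
	uint64_t start;
	uint64_t end;
};

struct query_vmas {
	uint32_t vm_id;
	uint32_t num_vmas;               /* out: VMAs in [start, start + range) */
	uint64_t start;
	uint64_t range;
	uint64_t vector_of_vma_mem_attr; /* userptr to the attr array */
};

/* Stubbed DRM_IOCTL_XE_VM_QUERY_VMAS: pretend three VMAs overlap the range. */
static int stub_query_num_vmas(struct query_vmas *q)
{
	q->num_vmas = 3;
	return 0;
}

/* Stubbed DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS: fill the user-provided array. */
static int stub_query_vmas_attrs(struct query_vmas *q)
{
	struct vma_mem_attr *attrs =
		(struct vma_mem_attr *)(uintptr_t)q->vector_of_vma_mem_attr;
	uint32_t i;

	for (i = 0; i < q->num_vmas; i++) {
		attrs[i].start = q->start + i * 0x1000ull;
		attrs[i].end = q->start + (i + 1) * 0x1000ull;
	}
	return 0;
}

/* Two-step flow a UMD would follow: size the array, then fetch it. */
static int query_range_attrs(uint64_t start, uint64_t range,
			     struct vma_mem_attr **out, uint32_t *count)
{
	struct query_vmas q = { .vm_id = 1, .start = start, .range = range };
	struct vma_mem_attr *attrs;

	if (stub_query_num_vmas(&q))
		return -1;

	attrs = calloc(q.num_vmas, sizeof(*attrs));
	if (!attrs)
		return -1;

	q.vector_of_vma_mem_attr = (uint64_t)(uintptr_t)attrs;
	if (stub_query_vmas_attrs(&q)) {
		free(attrs);
		return -1;
	}

	*out = attrs;
	*count = q.num_vmas;
	return 0;
}
```

In a real caller the two stubs would be ioctl(2) calls on the DRM fd; the shape of the flow (count, allocate, fill) is what the two-ioctl split in the patch is for.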
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 32/32] drm/xe/bo: Update atomic_access attribute on madvise
2025-05-14 22:31 ` Matthew Brost
@ 2025-05-21 9:13 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-21 9:13 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 15-05-2025 04:01, Matthew Brost wrote:
> On Mon, Apr 07, 2025 at 03:47:19PM +0530, Himal Prasad Ghimiray wrote:
>> Update the bo_atomic_access based on user-provided input and determine
>> the migration to smem during a CPU fault
>>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_bo.c | 21 ++++++++++++++---
>> drivers/gpu/drm/xe/xe_vm.c | 11 +++++++--
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 38 +++++++++++++++++++++++++++---
>> 3 files changed, 62 insertions(+), 8 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index c337790c81ae..fe78f6da7054 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -1573,6 +1573,12 @@ static void xe_gem_object_close(struct drm_gem_object *obj,
>> }
>> }
>>
>> +static bool should_migrate_to_smem(struct xe_bo *bo)
>> +{
>> + return bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL ||
>> + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU;
>> +}
>> +
>> static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
>> {
>> struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
>> @@ -1581,7 +1587,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
>> struct xe_bo *bo = ttm_to_xe_bo(tbo);
>> bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
>> vm_fault_t ret;
>> - int idx;
>> + int idx, r = 0;
>>
>> if (needs_rpm)
>> xe_pm_runtime_get(xe);
>> @@ -1593,8 +1599,17 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
>> if (drm_dev_enter(ddev, &idx)) {
>> trace_xe_bo_cpu_fault(bo);
>>
>> - ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
>> - TTM_BO_VM_NUM_PREFAULT);
>> + if (should_migrate_to_smem(bo)) {
>> + r = xe_bo_migrate(bo, XE_PL_TT);
>> + if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR)
>> + ret = VM_FAULT_NOPAGE;
>> + else if (r)
>> + ret = VM_FAULT_SIGBUS;
>> + }
>> + if (!ret)
>> + ret = ttm_bo_vm_fault_reserved(vmf,
>> + vmf->vma->vm_page_prot,
>> + TTM_BO_VM_NUM_PREFAULT);
>> drm_dev_exit(idx);
>> } else {
>> ret = ttm_bo_vm_dummy_page(vmf, vmf->vma->vm_page_prot);
>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>> index f1d4daf90efe..189e97113dbe 100644
>> --- a/drivers/gpu/drm/xe/xe_vm.c
>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>> @@ -3104,9 +3104,16 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
>> err = vma_lock_and_validate(exec,
>> gpuva_to_vma(op->base.prefetch.va),
>> false);
>> - if (!err && !xe_vma_has_no_bo(vma))
>> - err = xe_bo_migrate(xe_vma_bo(vma),
>> + if (!err && !xe_vma_has_no_bo(vma)) {
>> + struct xe_bo *bo = xe_vma_bo(vma);
>> +
>> + if (region == 0 && !vm->xe->info.has_device_atomics_on_smem &&
>> + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_DEVICE)
>> + region = 1;
>> +
>
> So here we are disallowing migration to system if atomics don't work there?
> Shouldn't we just let the GPU fault and fixup on fault?
This is in the prefetch path, so we are avoiding a GPU fault.
>
>> + err = xe_bo_migrate(bo,
>> region_to_mem_type[region]);
>> + }
>> break;
>> }
>> default:
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index f4e0545937b0..bbae2faee603 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -87,16 +87,48 @@ static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
>> struct xe_vma **vmas, int num_vmas,
>> struct drm_xe_madvise_ops ops)
>> {
>> - int i;
>> + struct xe_bo *bo;
>> + int err, i;
>>
>> xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_ATOMIC);
>> xe_assert(vm->xe, ops.atomic.val > DRM_XE_VMA_ATOMIC_UNDEFINED &&
>> ops.atomic.val <= DRM_XE_VMA_ATOMIC_CPU);
>> vm_dbg(&xe->drm, "attr_value = %d", ops.atomic.val);
>>
>> - for (i = 0; i < num_vmas; i++)
>> + for (i = 0; i < num_vmas; i++) {
>> vmas[i]->attr.atomic_access = ops.atomic.val;
>> - /*TODO: handle bo backed vmas */
>> +
>> + bo = xe_vma_bo(vmas[i]);
>> + if (!bo)
>> + continue;
>> +
>> + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_CPU &&
>> + !(bo->flags & XE_BO_FLAG_SYSTEM)))
>> + return -EINVAL;
>> +
>> + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_DEVICE &&
>> + !(bo->flags & XE_BO_FLAG_VRAM0) &&
>> + !(bo->flags & XE_BO_FLAG_VRAM1)))
>> + return -EINVAL;
>
> Don't device atomics work if xe->info.has_device_atomics_on_smem is set?
Need to fix this. Thanks
>
>> +
>> + if (XE_IOCTL_DBG(xe, ops.atomic.val == DRM_XE_VMA_ATOMIC_GLOBAL &&
>> + (!(bo->flags & XE_BO_FLAG_SYSTEM) ||
>> + (!(bo->flags & XE_BO_FLAG_VRAM0) &&
>> + !(bo->flags & XE_BO_FLAG_VRAM1)))))
>> + return -EINVAL;
>
> One concern is all of the above are platform specific checks - e.g. if
> we had a device with CXL atomics just work everywhere. I'd at least add
> a comment indicating these are platform specific checks.
Agreed.
>
>> +
>> + err = xe_bo_lock(bo, true);
>> + if (err)
>> + return err;
>> + bo->attr.atomic_access = ops.atomic.val;
>> +
>> + /* Invalidate cpu page table, so bo can migrate to smem in next access */
>> + if (bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_CPU ||
>> + bo->attr.atomic_access == DRM_XE_VMA_ATOMIC_GLOBAL)
>> + ttm_bo_unmap_virtual(&bo->ttm);
>
> If already in SMEM, you don't need to unmap, do you?
Correct, the unmap is not required if the bo is already in smem.
>
> Matt
>
>> +
>> + xe_bo_unlock(bo);
>> + }
>> return 0;
>> }
>>
>> --
>> 2.34.1
>>
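Tying the exchange above together (an editor's sketch with hypothetical names, not the driver's code): the CPU-fault path migrates a bo to system memory only for GLOBAL/CPU atomic access, and, per the review outcome, the CPU page-table invalidation done by madvise can be skipped when the bo already resides in smem:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the DRM_XE_VMA_ATOMIC_* uapi values. */
enum atomic_access {
	ATOMIC_UNDEFINED,
	ATOMIC_DEVICE,
	ATOMIC_GLOBAL,
	ATOMIC_CPU,
};

/* CPU fault path: migrate to smem only when the CPU may issue atomics. */
static bool should_migrate_to_smem(enum atomic_access access)
{
	return access == ATOMIC_GLOBAL || access == ATOMIC_CPU;
}

/*
 * Madvise path: invalidate the CPU page table so the next CPU access
 * triggers migration - unnecessary when the bo is already in smem.
 */
static bool needs_cpu_unmap(enum atomic_access access, bool bo_in_smem)
{
	return should_migrate_to_smem(access) && !bo_in_smem;
}
```

This only captures the decision logic under discussion; the actual migration (xe_bo_migrate) and unmap (ttm_bo_unmap_virtual) remain as in the patch.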
* Re: [PATCH v2 26/32] drm/xe/madvise: Update migration policy based on preferred location
2025-05-21 8:50 ` Ghimiray, Himal Prasad
@ 2025-05-21 16:51 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-21 16:51 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 21-05-2025 14:20, Ghimiray, Himal Prasad wrote:
>
>
> On 15-05-2025 03:34, Matthew Brost wrote:
>> On Mon, Apr 07, 2025 at 03:47:13PM +0530, Himal Prasad Ghimiray wrote:
>>> When the user sets the valid devmem_fd as a preferred location, GPU
>>> fault
>>> will trigger migration to tile of device associated with devmem_fd.
>>>
>>> If the user sets an invalid devmem_fd the preferred location is current
>>> placement only.
>>>
>>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/xe_svm.c | 15 ++++++++++++++-
>>> drivers/gpu/drm/xe/xe_vm.h | 3 +++
>>> drivers/gpu/drm/xe/xe_vm_madvise.c | 20 +++++++++++++++++++-
>>> 3 files changed, 36 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>>> index d40111e29bfe..60dfb1bf12ca 100644
>>> --- a/drivers/gpu/drm/xe/xe_svm.c
>>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>>> @@ -765,6 +765,12 @@ bool xe_svm_range_needs_migrate_to_vram(struct
>>> xe_svm_range *range, struct xe_vm
>>> return needs_migrate;
>>> }
>>> +static const u32 region_to_mem_type[] = {
>>> + XE_PL_TT,
>>> + XE_PL_VRAM0,
>>> + XE_PL_VRAM1,
>>> +};
>>> +
>>> /**
>>> * xe_svm_handle_pagefault() - SVM handle page fault
>>> * @vm: The VM.
>>> @@ -796,6 +802,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm,
>>> struct xe_vma *vma,
>>> struct xe_tile *tile = gt_to_tile(gt);
>>> int retry_count = 3;
>>> ktime_t end = 0;
>>> + u32 region;
>>> int err;
>>> lockdep_assert_held_write(&vm->lock);
>>> @@ -820,7 +827,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm,
>>> struct xe_vma *vma,
>>> range_debug(range, "PAGE FAULT");
>>> - if (xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm-
>>> >xe))) {
>>> + region = vma->attr.preferred_loc.devmem_fd;
>>
> Mentioned this earlier in the series - you are assigning a devmem_fd to a
>> region which is a bit confusing.
>
> Hmm. Agreed
>
>>
>>> +
>>> + if (xe_svm_range_needs_migrate_to_vram(range, vma, region)) {
>>> + region = region ? region : 1;
>>
>> I think the default (region unset) should be the VRAM closest to the GT
>> of the fault.
>
> True
>
>>
>>> + /* Need rework for multigpu */
>>> + tile = &vm->xe->tiles[region_to_mem_type[region] -
>>> XE_PL_VRAM0];
>>> +
>>> err = xe_svm_alloc_vram(vm, tile, range, &ctx);
>>> if (err) {
>>> if (retry_count) {
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>>> index 4e45230b7205..377f62f859b7 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.h
>>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>>> @@ -220,6 +220,9 @@ int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
>>> int xe_vm_userptr_check_repin(struct xe_vm *vm);
>>> +bool xe_vma_has_preferred_mem_loc(struct xe_vma *vma,
>>> + u32 *mem_region, u32 *devmem_fd);
>>> +
>>> int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
>>> struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
>>> u8 tile_mask);
>>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/
>>> xe_vm_madvise.c
>>> index 7e1a95106cb9..f870e8642190 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>>> @@ -61,7 +61,25 @@ static int madvise_preferred_mem_loc(struct
>>> xe_device *xe, struct xe_vm *vm,
>>> struct xe_vma **vmas, int num_vmas,
>>> struct drm_xe_madvise_ops ops)
>>> {
>>> - /* Implementation pending */
>>> + s32 devmem_fd;
>>> + u32 migration_policy;
>>> + int i;
>>> +
>>> + xe_assert(vm->xe, ops.type == DRM_XE_VMA_ATTR_PREFERRED_LOC);
>>> + vm_dbg(&xe->drm, "migration policy = %d, devmem_fd = %d\n",
>>> + ops.preferred_mem_loc.migration_policy,
>>> + ops.preferred_mem_loc.devmem_fd);
>>
>> As mentioned in patch #27, I'm not sure this debug info is all that
>> useful.
>
> Agreed. Planning to drop preferred location in next version and let it
> be brought in along with multi-device support.
I had a discussion with the UMD teams. They need a way to set up smem as the
preferred location, so we will still need to support preferred location.
I will try to do it in a cleaner way.
>
>>
>>> +
>>> + devmem_fd = (s32)ops.preferred_mem_loc.devmem_fd;
>>> + devmem_fd = (devmem_fd < 0) ? 0 : devmem_fd;
>>> +
>>
>> Why (devmem_fd < 0) ? 0? I'm not following this.
>
> In case a negative (invalid) fd is passed, we wanted to have smem as the
> preferred location.
>
>
>>
>>> + migration_policy = ops.preferred_mem_loc.migration_policy;
>>> +
>>
>> Mentioned earlier in the series, I'm confused by migration_policy and it
>> also looks to be unused unless I'm missing something?
>
> This was supposed to be used with multi-device support.
>> Matt
>>
>>> + for (i = 0; i < num_vmas; i++) {
>>> + vmas[i]->attr.preferred_loc.devmem_fd = devmem_fd;
>>> + vmas[i]->attr.preferred_loc.migration_policy =
>>> migration_policy;
>>> + }
>>> +
>>> return 0;
>>> }
>>> --
>>> 2.34.1
>>>
>
* Re: [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops
2025-04-07 10:30 ` Boris Brezillon
@ 2025-05-26 13:48 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-26 13:48 UTC (permalink / raw)
To: Boris Brezillon
Cc: intel-xe, matthew.brost, thomas.hellstrom, Danilo Krummrich,
Boris Brezillon, dri-devel
On 07-04-2025 16:00, Boris Brezillon wrote:
> On Mon, 7 Apr 2025 15:47:03 +0530
> Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com> wrote:
>
>> - DRM_GPUVM_SM_MAP_NOT_MADVISE: Default sm_map operations for the input
>> range.
>>
>> - DRM_GPUVM_SKIP_GEM_OBJ_VA_SPLIT_MADVISE: This flag is used by
>> drm_gpuvm_sm_map_ops_create to iterate over GPUVMA's in the
>> user-provided range and split the existing non-GEM object VMA if the
>> start or end of the input range lies within it. The operations can
>> create up to 2 REMAPS and 2 MAPs. The purpose of this operation is to be
>> used by the Xe driver to assign attributes to GPUVMA's within the
>> user-defined range. Unlike drm_gpuvm_sm_map_ops_flags in default mode,
>> the operation with this flag will never have UNMAPs and
>> merges, and can be without any final operations.
>>
>> v2
>> - use drm_gpuvm_sm_map_ops_create with flags instead of defining new
>> ops_create (Danilo)
>> - Add doc (Danilo)
>>
>> Cc: Danilo Krummrich <dakr@redhat.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Boris Brezillon <bbrezillon@kernel.org>
>> Cc: <dri-devel@lists.freedesktop.org>
>> Signed-off-by: Himal Prasad Ghimiray<himal.prasad.ghimiray@intel.com>
>>
>> ---
>> RFC Link:
>> https://lore.kernel.org/intel-xe/20250314080226.2059819-1-himal.prasad.ghimiray@intel.com/T/#mb706bd1c55232110e42dc7d5c05de61946982472
>> ---
>> drivers/gpu/drm/drm_gpuvm.c | 93 ++++++++++++++++++++------
>> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 1 +
>> drivers/gpu/drm/xe/xe_vm.c | 1 +
>> include/drm/drm_gpuvm.h | 25 ++++++-
>> 4 files changed, 98 insertions(+), 22 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
>> index f9eb56f24bef..9d09d177b9fa 100644
>> --- a/drivers/gpu/drm/drm_gpuvm.c
>> +++ b/drivers/gpu/drm/drm_gpuvm.c
>> @@ -2102,10 +2102,13 @@ static int
>> __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> const struct drm_gpuvm_ops *ops, void *priv,
>> u64 req_addr, u64 req_range,
>> + enum drm_gpuvm_sm_map_ops_flags flags,
>> struct drm_gem_object *req_obj, u64 req_offset)
>
> Not exactly related to this series, but I've been playing with Lina's
> series[1] which is hooking up flag propagation from _map() calls to
> drm_gpuva, and I think we should pass all map args through a struct so
> we don't have to change all call-sites anytime we add a new optional
> argument. Here's a patch [2] doing that.
Thanks, Boris, for sharing the info. I went through the patches, and they
look to provide a solid direction for making the interface extensible and
future-proof.
>
> [1]https://lore.kernel.org/lkml/4a431b98-cccc-495e-b72e-02362828c96b@asahilina.net/T/
> [2]https://gitlab.freedesktop.org/bbrezillon/linux/-/commit/0587c15b9b81ccae1e37ad0a5d524754d8455558
* Re: [PATCH v2 24/32] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
2025-05-20 10:21 ` Ghimiray, Himal Prasad
@ 2025-05-27 17:32 ` Matthew Brost
0 siblings, 0 replies; 120+ messages in thread
From: Matthew Brost @ 2025-05-27 17:32 UTC (permalink / raw)
To: Ghimiray, Himal Prasad; +Cc: intel-xe, thomas.hellstrom
On Tue, May 20, 2025 at 03:51:26PM +0530, Ghimiray, Himal Prasad wrote:
>
>
> On 15-05-2025 00:50, Matthew Brost wrote:
> > On Mon, Apr 07, 2025 at 03:47:11PM +0530, Himal Prasad Ghimiray wrote:
> > > In the case of the MADVISE ioctl, if the start or end addresses fall
> > > within a VMA and existing SVM ranges are present, remove the existing
> > > SVM mappings. Then, continue with ops_parse to create new VMAs by REMAP
> > > unmapping of old one.
> > >
> >
> > I'm quite confused why this patch is needed. Why is invalidating the
> > ranges not sufficient?
>
> How is madvise supposed to behave if the start or end of the input range is
> within an existing svm range?
> For example, let's assume an svm_range of 2 MiB exists between offset and
> offset + SZ_2M, and madvise is called with offset as start and
> offset + SZ_1M as end. In this scenario the vma boundaries will change,
> and the previous svm_ranges need to be removed.
>
Right, this is a weird corner case that needs to be handled. Will review
this patch in detail in your latest rev.
Matt
> >
> > Matt
> >
> > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_svm.c | 25 +++++++++++++++++++++++++
> > > drivers/gpu/drm/xe/xe_svm.h | 7 +++++++
> > > drivers/gpu/drm/xe/xe_vm.c | 18 +++++++++++++++++-
> > > 3 files changed, 49 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> > > index 7ec7ecd7eb1f..efcba4b77250 100644
> > > --- a/drivers/gpu/drm/xe/xe_svm.c
> > > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > > @@ -903,6 +903,31 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
> > > return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
> > > }
> > > +/**
> > > + * xe_svm_range_clean_if_addr_within - Clean SVM mappings and ranges
> > > + * @start: start addr
> > > + * @end: end addr
> > > + *
> > > + * This function cleans up svm ranges if start or end address are inside them.
> > > + */
> > > +void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end)
> > > +{
> > > + struct drm_gpusvm_notifier *notifier, *next;
> > > +
> > > + drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
> > > + struct drm_gpusvm_range *range, *__next;
> > > +
> > > + drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
> > > + if (start > drm_gpusvm_range_start(range) ||
> > > + end < drm_gpusvm_range_end(range)) {
> > > + if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
> > > + drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
> > > + __xe_svm_garbage_collector(vm, to_xe_range(range));
> > > + }
> > > + }
> > > + }
> > > +}
> > > +
> > > /**
> > > * xe_svm_bo_evict() - SVM evict BO to system memory
> > > * @bo: BO to evict
> > > diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
> > > index d5be8229ca7e..d00ba6d6ba53 100644
> > > --- a/drivers/gpu/drm/xe/xe_svm.h
> > > +++ b/drivers/gpu/drm/xe/xe_svm.h
> > > @@ -98,6 +98,8 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
> > > bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vma *vma,
> > > u32 region);
> > > +void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end);
> > > +
> > > /**
> > > * xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
> > > * @range: SVM range
> > > @@ -291,6 +293,11 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
> > > return false;
> > > }
> > > +static inline
> > > +void xe_svm_range_clean_if_addr_within(struct xe_vm *vm, u64 start, u64 end)
> > > +{
> > > +}
> > > +
> > > #define xe_svm_assert_in_notifier(...) do {} while (0)
> > > #define xe_svm_range_has_dma_mapping(...) false
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index c7c012afe9eb..92b8e0cac063 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -2362,6 +2362,22 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
> > > op->map.pat_index = pat_index;
> > > op->map.invalidate_on_bind =
> > > __xe_vm_needs_clear_scratch_pages(vm, flags);
> > > + } else if (__op->op == DRM_GPUVA_OP_REMAP) {
> > > + struct xe_vma *old =
> > > + gpuva_to_vma(op->base.remap.unmap->va);
> > > + u64 start = xe_vma_start(old), end = xe_vma_end(old);
> > > +
> > > + if (op->base.remap.prev)
> > > + start = op->base.remap.prev->va.addr +
> > > + op->base.remap.prev->va.range;
> > > + if (op->base.remap.next)
> > > + end = op->base.remap.next->va.addr;
> > > +
> > > + if (xe_vma_is_cpu_addr_mirror(old) &&
> > > + xe_svm_has_mapping(vm, start, end)) {
> > > + drm_gpuva_ops_free(&vm->gpuvm, ops);
> > > + return ERR_PTR(-EBUSY);
> > > + }
> > > } else if (__op->op == DRM_GPUVA_OP_PREFETCH) {
> > > struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
> > > @@ -2653,7 +2669,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
> > > if (xe_vma_is_cpu_addr_mirror(old) &&
> > > xe_svm_has_mapping(vm, start, end))
> > > - return -EBUSY;
> > > + xe_svm_range_clean_if_addr_within(vm, start, end);
> > > op->remap.start = xe_vma_start(old);
> > > op->remap.range = xe_vma_size(old);
> > > --
> > > 2.34.1
> > >
>
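For illustration (an editor's sketch, not part of the patch), the partial-coverage condition at the heart of xe_svm_range_clean_if_addr_within() - and of the 2 MiB example discussed above - can be isolated as a pure predicate:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SZ_1M 0x100000ull
#define SZ_2M 0x200000ull

/*
 * A range overlapping the madvise span must be removed when a madvise
 * boundary lands strictly inside it, i.e. when the span does not fully
 * cover the range (mirrors the start/end checks in the patch above).
 */
static bool range_needs_removal(uint64_t madv_start, uint64_t madv_end,
				uint64_t range_start, uint64_t range_end)
{
	return madv_start > range_start || madv_end < range_end;
}
```

In Himal's example, a madvise over [offset, offset + SZ_1M) against a range covering [offset, offset + SZ_2M) satisfies the second disjunct, so the range is evicted and garbage-collected.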
* Re: [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma
2025-05-20 9:27 ` Ghimiray, Himal Prasad
@ 2025-05-27 17:37 ` Matthew Brost
2025-05-28 5:33 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-27 17:37 UTC (permalink / raw)
To: Ghimiray, Himal Prasad; +Cc: intel-xe, thomas.hellstrom
On Tue, May 20, 2025 at 02:57:45PM +0530, Ghimiray, Himal Prasad wrote:
>
>
> On 15-05-2025 00:06, Matthew Brost wrote:
> > On Mon, Apr 07, 2025 at 03:47:05PM +0530, Himal Prasad Ghimiray wrote:
> > > The attribute of xe_vma will determine the migration policy and the
> > > encoding of the page table entries (PTEs) for that vma.
> > > This attribute helps manage how memory pages are moved and how their
> > > addresses are translated. It will be used by madvise to set the
> > > behavior of the vma.
> > >
> > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_vm.c | 6 ++++++
> > > drivers/gpu/drm/xe/xe_vm_types.h | 20 ++++++++++++++++++++
> > > 2 files changed, 26 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 27a8dbe709c2..1ff9e477e061 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -2470,6 +2470,12 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> > > vma = ERR_PTR(err);
> > > }
> > > + /*TODO: assign devmem_fd of local vram once multi device
> > > + * support is added.
> > > + */
> > > + vma->attr.preferred_loc.devmem_fd = 1;
> >
> > Assigning a value of '1' is a bit odd... I'd prefer using a define or
> > something similar to indicate the intended behavior. I noticed a few
> > other assignments to '1' in the final result—same comment applies to
> > those.
>
> Sure
>
> >
> > > + vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
> > > +
> > > return vma;
> > > }
> > > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> > > index d3c1209348e9..5f5feffecb82 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > > @@ -77,6 +77,19 @@ struct xe_userptr {
> > > #endif
> > > };
> > > +/**
> > > + * struct xe_vma_mem_attr - memory attributes associated with vma
> > > + */
> > > +struct xe_vma_mem_attr {
> > > + /** @preferred_loc: perferred memory_location*/
> > > + struct {
> > > + u32 migration_policy; /* represents migration policies */
> > > + u32 devmem_fd; /* devmem_fd used for determining pagemap_fd requested by user */
> > > + } preferred_loc;
> >
> > I'm a little unclear on how these variables work.
> >
> > In the uAPI for migration_policy, I see MIGRATE_ALL_PAGES and
> > MIGRATE_ONLY_SYSTEM_PAGES (these should probably be normalized with a
> > DRM_XE_* prefix, by the way), but it's unclear to me what exactly these
> > mean or how they're used based on the final result—could you clarify?
>
> With multi-device support, the idea was to have the flexibility to move only
> system pages to the preferred location, or to also move pages from other
> vram locations to the preferred location.
>
Ok, I think having bits set aside for this makes sense.
> >
> > Likewise, I'm confused about the devmem_fd usage. It can either be
> > assigned a devmem_fd from the uAPI, but in some cases, it's interpreted
> > as a region. I assume this is anticipating multi-GPU support, but again,
> > the plan isn't clear to me. Could you explain?
>
> The devmem_fd is intended to be used to determine the struct drm_pagemap *,
> which in turn will be used to identify the tile associated with VRAM for
> allocation and binding. The changes that introduce the
> devmem_fd->drm_pagemap->tile [1] linkage will be part of the upcoming
> multi-GPU support.
>
> To ensure that the current changes are easily scalable and can be extended
> for multi-GPU support, I am defining devmem_fd in the UAPI and using it in
> the KMD as a region placeholder until multi-GPU support is integrated.
>
Hmm, will this break the uAPI though? e.g. How does the UMD choose
between region and FD at the uAPI level? If the answer is that once multi-GPU
lands it is always an FD rather than a region, then we really need to land
some of the multi-GPU patches at the same time as madvise - at least the
ones which export memory regions as FDs.
Let's loop in Thomas on the multi-GPU assumptions too to ensure
correctness.
Matt
> [1] https://patchwork.freedesktop.org/patch/642773/?series=146227&rev=1
>
>
> >
> > In general I agree with the idea of xe_vma_mem_attr though.
> >
> > Matt
> >
> > > + /** @atomic_access: The atomic access type for the vma */
> > > + u32 atomic_access;
> > > +};
> > > +
> > > struct xe_vma {
> > > /** @gpuva: Base GPUVA object */
> > > struct drm_gpuva gpuva;
> > > @@ -128,6 +141,13 @@ struct xe_vma {
> > > * Needs to be signalled before UNMAP can be processed.
> > > */
> > > struct xe_user_fence *ufence;
> > > +
> > > + /**
> > > + * @attr: The attributes of vma which determines the migration policy
> > > + * and encoding of the PTEs for this vma.
> > > + */
> > > + struct xe_vma_mem_attr attr;
> > > +
> > > };
> > > /**
> > > --
> > > 2.34.1
> > >
>
* Re: [PATCH v2 23/32] drm/xe: Implement madvise ioctl for xe
2025-05-20 10:15 ` Ghimiray, Himal Prasad
@ 2025-05-28 5:22 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-28 5:22 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 20-05-2025 15:45, Ghimiray, Himal Prasad wrote:
>
>
> On 15-05-2025 03:11, Matthew Brost wrote:
>> On Mon, Apr 07, 2025 at 03:47:10PM +0530, Himal Prasad Ghimiray wrote:
>>> This driver-specific ioctl enables UMDs to control the memory attributes
>>> for GPU VMAs within a specified input range. If the start or end
>>> addresses fall within an existing VMA, the VMA is split accordingly. The
>>> attributes of the VMA are modified as provided by the users. The old
>>> mappings of the VMAs are invalidated, and TLB invalidation is performed
>>> if necessary.
>>>
>>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/Makefile | 1 +
>>> drivers/gpu/drm/xe/xe_device.c | 2 +
>>> drivers/gpu/drm/xe/xe_vm_madvise.c | 309 +++++++++++++++++++++++++++++
>>> drivers/gpu/drm/xe/xe_vm_madvise.h | 15 ++
>>> 4 files changed, 327 insertions(+)
>>> create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
>>> create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
>>>
>>> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
>>> index e4fec90bab55..3e83ae8b9dc1 100644
>>> --- a/drivers/gpu/drm/xe/Makefile
>>> +++ b/drivers/gpu/drm/xe/Makefile
>>> @@ -117,6 +117,7 @@ xe-y += xe_bb.o \
>>> xe_uc.o \
>>> xe_uc_fw.o \
>>> xe_vm.o \
>>> + xe_vm_madvise.o \
>>> xe_vram.o \
>>> xe_vram_freq.o \
>>> xe_vsec.o \
>>> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/
>>> xe_device.c
>>> index d8e227ddf255..3e57300014bf 100644
>>> --- a/drivers/gpu/drm/xe/xe_device.c
>>> +++ b/drivers/gpu/drm/xe/xe_device.c
>>> @@ -60,6 +60,7 @@
>>> #include "xe_ttm_stolen_mgr.h"
>>> #include "xe_ttm_sys_mgr.h"
>>> #include "xe_vm.h"
>>> +#include "xe_vm_madvise.h"
>>> #include "xe_vram.h"
>>> #include "xe_vsec.h"
>>> #include "xe_wait_user_fence.h"
>>> @@ -196,6 +197,7 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
>>> DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
>>> DRM_RENDER_ALLOW),
>>> DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl,
>>> DRM_RENDER_ALLOW),
>>> + DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl,
>>> DRM_RENDER_ALLOW),
>>> };
>>> static long xe_drm_ioctl(struct file *file, unsigned int cmd,
>>> unsigned long arg)
>>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/
>>> xe_vm_madvise.c
>>> new file mode 100644
>>> index 000000000000..ef50031649e0
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>>> @@ -0,0 +1,309 @@
>>> +// SPDX-License-Identifier: MIT
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#include "xe_vm_madvise.h"
>>> +
>>> +#include <linux/nospec.h>
>>> +#include <drm/ttm/ttm_tt.h>
>>> +#include <drm/xe_drm.h>
>>> +
>>> +#include "xe_bo.h"
>>> +#include "xe_gt_tlb_invalidation.h"
>>> +#include "xe_pt.h"
>>> +#include "xe_svm.h"
>>> +
>>> +static struct xe_vma **get_vmas(struct xe_vm *vm, int *num_vmas,
>>> + u64 addr, u64 range)
>>> +{
>>> + struct xe_vma **vmas, **__vmas;
>>> + struct drm_gpuva *gpuva;
>>> + int max_vmas = 8;
>>> +
>>> + lockdep_assert_held(&vm->lock);
>>
>> lockdep_assert_held_write
>
> ok
>
>>
>>> +
>>> + *num_vmas = 0;
>>> + vmas = kmalloc_array(max_vmas, sizeof(*vmas), GFP_KERNEL);
>>> + if (!vmas)
>>> + return NULL;
>>> +
>>> + vm_dbg(&vm->xe->drm, "VMA's in range: start=0x%016llx, end=0x%016llx", addr, addr + range);
>>> +
>>> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, addr, addr + range) {
>>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>>> +
>>> + if (*num_vmas == max_vmas) {
>>> + max_vmas <<= 1;
>>> + __vmas = krealloc(vmas, max_vmas * sizeof(*vmas), GFP_KERNEL);
>>> + if (!__vmas) {
>>> + kfree(vmas);
>>> + return NULL;
>>> + }
>>> + vmas = __vmas;
>>> + }
>>> +
>>> + vmas[*num_vmas] = vma;
>>> + (*num_vmas)++;
>>> + }
>>> +
>>> + vm_dbg(&vm->xe->drm, "*num_vmas = %d\n", *num_vmas);
>>> +
>>> + if (!*num_vmas) {
>>> + kfree(vmas);
>>> + return NULL;
>>> + }
>>> +
>>> + return vmas;
>>> +}
>>> +
>>> +static int madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
>>> + struct xe_vma **vmas, int num_vmas,
>>> + struct drm_xe_madvise_ops ops)
>>> +{
>>> + /* Implementation pending */
>>> + return 0;
>>> +}
>>> +
>>> +static int madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
>>> + struct xe_vma **vmas, int num_vmas,
>>> + struct drm_xe_madvise_ops ops)
>>> +{
>>> + /* Implementation pending */
>>> + return 0;
>>> +}
>>> +
>>> +static int madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
>>> + struct xe_vma **vmas, int num_vmas,
>>> + struct drm_xe_madvise_ops ops)
>>> +{
>>> + /* Implementation pending */
>>> + return 0;
>>> +}
>>> +
>>> +static int madvise_purgeable_state(struct xe_device *xe, struct xe_vm *vm,
>>> + struct xe_vma **vmas, int num_vmas,
>>> + struct drm_xe_madvise_ops ops)
>>> +{
>>> + /* Implementation pending */
>>> + return 0;
>>> +}
>>> +
>>> +typedef int (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
>>> + struct xe_vma **vmas, int num_vmas, struct drm_xe_madvise_ops ops);
>>> +
>>> +static const madvise_func madvise_funcs[] = {
>>> + [DRM_XE_VMA_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
>>> + [DRM_XE_VMA_ATTR_ATOMIC] = madvise_atomic,
>>> + [DRM_XE_VMA_ATTR_PAT] = madvise_pat_index,
>>> + [DRM_XE_VMA_ATTR_PURGEABLE_STATE] = madvise_purgeable_state,
>>> +};
>>> +
>>> +static void xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end, u8 *tile_mask)
>>> +{
>>> + struct drm_gpusvm_notifier *notifier;
>>> + struct drm_gpuva *gpuva;
>>> + struct xe_svm_range *range;
>>> + struct xe_tile *tile;
>>> + u64 adj_start, adj_end;
>>> + u8 id;
>>> +
>>> + lockdep_assert_held(&vm->lock);
>>
>> lockdep_assert_held_write
>>
>>> +
>
> ok
>
>>
>> /* Waiting on pending binds */
>>
>>> + if (dma_resv_wait_timeout(xe_vm_resv(vm), DMA_RESV_USAGE_BOOKKEEP, false, MAX_SCHEDULE_TIMEOUT) <= 0)
>>> + XE_WARN_ON(1);
>>> +
>>> + down_write(&vm->svm.gpusvm.notifier_lock);
>>> +
>>
>> xe_svm_notifier_lock
>
> ok
While testing I remembered: xe_svm_notifier_lock takes the read lock,
whereas xe_pt_zap_ptes_range needs the write lock.
>
>>
>>> + drm_gpusvm_for_each_notifier(notifier, &vm->svm.gpusvm, start, end) {
>>> + struct drm_gpusvm_range *r = NULL;
>>> +
>>> + adj_start = max(start, notifier->itree.start);
>>> + adj_end = min(end, notifier->itree.last + 1);
>>> + drm_gpusvm_for_each_range(r, notifier, adj_start, adj_end) {
>>> + range = to_xe_range(r);
>>> + for_each_tile(tile, vm->xe, id) {
>>> + if (xe_pt_zap_ptes_range(tile, vm, range)) {
>>> + *tile_mask |= BIT(id);
>>> + range->tile_invalidated |= BIT(id);
>>> + }
>>> + }
>>> + }
>>> + }
>>> +
>>> + up_write(&vm->svm.gpusvm.notifier_lock);
>>> +
>>
>> xe_svm_notifier_unlock
>>
>
> Hmm
>
>>> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
>>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>>> +
>>> + if (xe_vma_is_cpu_addr_mirror(vma))
>>> + continue;
>>> +
>>> + if (xe_vma_is_userptr(vma)) {
>>> + WARN_ON_ONCE(!mmu_interval_check_retry
>>> + (&to_userptr_vma(vma)->userptr.notifier,
>>> + to_userptr_vma(vma)->userptr.notifier_seq));
>>> +
>>> + WARN_ON_ONCE(!dma_resv_test_signaled(xe_vm_resv(xe_vma_vm(vma)),
>>> + DMA_RESV_USAGE_BOOKKEEP));
>>> + }
>>> +
>>> + if (xe_vma_bo(vma))
>>> + xe_bo_lock(xe_vma_bo(vma), false);
>>> +
>>
>> Do you need the BO's dma-resv lock here? I don't think you do. Maybe
>> double
>> check with Thomas on this one as I could be forgetting something here.
>
> Sure
>
>>
>>> + for_each_tile(tile, vm->xe, id) {
>>> + if (xe_pt_zap_ptes(tile, vma))
>>> + *tile_mask |= BIT(id);
>>> + }
>>> +
>>> + if (xe_vma_bo(vma))
>>> + xe_bo_unlock(xe_vma_bo(vma));
>>> + }
>>> +}
>>> +
>>> +static void xe_vm_invalidate_madvise_range(struct xe_vm *vm, u64 start, u64 end)
>>> +{
>>> + struct xe_gt_tlb_invalidation_fence
>>> + fence[XE_MAX_TILES_PER_DEVICE * XE_MAX_GT_PER_TILE];
>>> + struct xe_tile *tile;
>>> + u32 fence_id = 0;
>>> + u8 tile_mask = 0;
>>> + u8 id;
>>> +
>>> + xe_zap_ptes_in_madvise_range(vm, start, end, &tile_mask);
>>> + if (!tile_mask)
>>> + return;
>>> +
>>> + xe_device_wmb(vm->xe);
>>> +
>>
>> We have the below pattern in a few places in the driver. I wonder if it's
>> time for a helper?
>
> Makes sense
>
>>
>>> + for_each_tile(tile, vm->xe, id) {
>>> + if (tile_mask & BIT(id)) {
>>> + int err;
>>> +
>>> + xe_gt_tlb_invalidation_fence_init(tile->primary_gt,
>>> + &fence[fence_id], true);
>>> +
>>> + err = xe_gt_tlb_invalidation_range(tile->primary_gt,
>>> + &fence[fence_id],
>>> + start,
>>> + end,
>>> + vm->usm.asid);
>>> + if (WARN_ON_ONCE(err < 0))
>>> + goto wait;
>>> + ++fence_id;
>>> +
>>> + if (!tile->media_gt)
>>> + continue;
>>> +
>>> + xe_gt_tlb_invalidation_fence_init(tile->media_gt,
>>> + &fence[fence_id], true);
>>> +
>>> + err = xe_gt_tlb_invalidation_range(tile->media_gt,
>>> + &fence[fence_id],
>>> + start,
>>> + end,
>>> + vm->usm.asid);
>>> + if (WARN_ON_ONCE(err < 0))
>>> + goto wait;
>>> + ++fence_id;
>>> + }
>>> + }
>>> +
>>> +wait:
>>> + for (id = 0; id < fence_id; ++id)
>>> + xe_gt_tlb_invalidation_fence_wait(&fence[id]);
>>> +}
>>> +
>>> +static int input_ranges_same(struct drm_xe_madvise_ops *old,
>>> + struct drm_xe_madvise_ops *new)
>>> +{
>>> + return (new->start == old->start && new->range == old->range);
>>> +}
>>> +
>>> +int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>>
>> Kernel doc.
>
> Sure
>
>>
>>> +{
>>> + struct xe_device *xe = to_xe_device(dev);
>>> + struct xe_file *xef = to_xe_file(file);
>>> + struct drm_xe_madvise_ops *advs_ops;
>>> + struct drm_xe_madvise *args = data;
>>> + struct xe_vm *vm;
>>> + struct xe_vma **vmas = NULL;
>>> + int num_vmas, err = 0;
>>> + int i, j, attr_type;
>>> +
>>> + if (XE_IOCTL_DBG(xe, args->num_ops < 1))
>>> + return -EINVAL;
>>> +
>>> + vm = xe_vm_lookup(xef, args->vm_id);
>>> + if (XE_IOCTL_DBG(xe, !vm))
>>> + return -EINVAL;
>>> +
>>> + if (XE_IOCTL_DBG(xe, !xe_vm_in_fault_mode(vm))) {
>>
>> Do we want to restrict this to fault mode? Maybe check with Mesa if they
>> see any use cases.
>
> Ok
>
>>
>>> + err = -EINVAL;
>>> + goto put_vm;
>>> + }
>>> +
>>> + down_write(&vm->lock);
>>> +
>>> + if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
>>> + err = -ENOENT;
>>> + goto unlock_vm;
>>> + }
>>> +
>>> + if (args->num_ops > 1) {
>>> + u64 __user *madvise_user = u64_to_user_ptr(args->vector_of_ops);
>>> +
>>> + advs_ops = kvmalloc_array(args->num_ops, sizeof(struct drm_xe_madvise_ops),
>>> + GFP_KERNEL | __GFP_ACCOUNT |
>>> + __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
>>> + if (!advs_ops)
>>> + return args->num_ops > 1 ? -ENOBUFS : -ENOMEM;
>>> +
>>> + err = __copy_from_user(advs_ops, madvise_user,
>>> + sizeof(struct drm_xe_madvise_ops) *
>>> + args->num_ops);
>>> + if (XE_IOCTL_DBG(xe, err)) {
>>> + err = -EFAULT;
>>> + goto free_advs_ops;
>>> + }
>>> + } else {
>>> + advs_ops = &args->ops;
>>> + }
>>> +
>>> + for (i = 0; i < args->num_ops; i++) {
>>> + xe_vm_alloc_madvise_vma(vm, advs_ops[i].start, advs_ops[i].range);
>>> +
>>> + vmas = get_vmas(vm, &num_vmas, advs_ops[i].start, advs_ops[i].range);
>>> + if (!vmas) {
>>> + err = -ENOMEM;
>>> + goto unlock_vm;
>>> + }
>>> +
>>> + attr_type = array_index_nospec(advs_ops[i].type, ARRAY_SIZE(madvise_funcs));
>>> + err = madvise_funcs[attr_type](xe, vm, vmas, num_vmas, advs_ops[i]);
>>> +
>>> + kfree(vmas);
>>> + vmas = NULL;
>>> +
>>> + if (err)
>>> + break;
>>> + }
>>> +
>>> + for (i = 0; i < args->num_ops; i++) {
>>> + for (j = i + 1; j < args->num_ops; ++j) {
>>> + if (input_ranges_same(&advs_ops[j], &advs_ops[i]))
>>> + break;
>>> + }
>>
>> The above loop doesn't look like it actually does anything.
>
> My bad.
>
> was intending to do
>
> if (input_ranges_same(&advs_ops[j], &advs_ops[i])) {
>         needs_invalidation = false;
>         break;
> }
>
> if (needs_invalidation)
>         xe_vm_invalidate_madvise_range(vm, advs_ops[i].start,
>                                        advs_ops[i].start + advs_ops[i].range);
>
>>
>> Matt
>>
>>> + xe_vm_invalidate_madvise_range(vm, advs_ops[i].start,
>>> + advs_ops[i].start + advs_ops[i].range);
>>> + }
>>> +free_advs_ops:
>>> + if (args->num_ops > 1)
>>> + kvfree(advs_ops);
>>> +unlock_vm:
>>> + up_write(&vm->lock);
>>> +put_vm:
>>> + xe_vm_put(vm);
>>> + return err;
>>> +}
>>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
>>> new file mode 100644
>>> index 000000000000..c5cdd058c322
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
>>> @@ -0,0 +1,15 @@
>>> +/* SPDX-License-Identifier: MIT */
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#ifndef _XE_VM_MADVISE_H_
>>> +#define _XE_VM_MADVISE_H_
>>> +
>>> +struct drm_device;
>>> +struct drm_file;
>>> +
>>> +int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
>>> + struct drm_file *file);
>>> +
>>> +#endif
>>> --
>>> 2.34.1
>>>
>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma
2025-05-27 17:37 ` Matthew Brost
@ 2025-05-28 5:33 ` Ghimiray, Himal Prasad
2025-05-28 16:09 ` Matthew Brost
0 siblings, 1 reply; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-28 5:33 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 27-05-2025 23:07, Matthew Brost wrote:
> On Tue, May 20, 2025 at 02:57:45PM +0530, Ghimiray, Himal Prasad wrote:
>>
>>
>> On 15-05-2025 00:06, Matthew Brost wrote:
>>> On Mon, Apr 07, 2025 at 03:47:05PM +0530, Himal Prasad Ghimiray wrote:
>>>> The attribute of xe_vma will determine the migration policy and the
>>>> encoding of the page table entries (PTEs) for that vma.
>>>> This attribute helps manage how memory pages are moved and how their
>>>> addresses are translated. It will be used by madvise to set the
>>>> behavior of the vma.
>>>>
>>>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>>> ---
>>>> drivers/gpu/drm/xe/xe_vm.c | 6 ++++++
>>>> drivers/gpu/drm/xe/xe_vm_types.h | 20 ++++++++++++++++++++
>>>> 2 files changed, 26 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>>> index 27a8dbe709c2..1ff9e477e061 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>>> @@ -2470,6 +2470,12 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
>>>> vma = ERR_PTR(err);
>>>> }
>>>> + /*TODO: assign devmem_fd of local vram once multi device
>>>> + * support is added.
>>>> + */
>>>> + vma->attr.preferred_loc.devmem_fd = 1;
>>>
>>> Assigning a value of '1' is a bit odd... I'd prefer using a define or
>>> something similar to indicate the intended behavior. I noticed a few
>>> other assignments to '1' in the final result—same comment applies to
>>> those.
>>
>> Sure
>>
>>>
>>>> + vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
>>>> +
>>>> return vma;
>>>> }
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
>>>> index d3c1209348e9..5f5feffecb82 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>>>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>>>> @@ -77,6 +77,19 @@ struct xe_userptr {
>>>> #endif
>>>> };
>>>> +/**
>>>> + * struct xe_vma_mem_attr - memory attributes associated with vma
>>>> + */
>>>> +struct xe_vma_mem_attr {
>>>> + /** @preferred_loc: preferred memory location */
>>>> + struct {
>>>> + u32 migration_policy; /* represents migration policies */
>>>> + u32 devmem_fd; /* devmem_fd used for determining pagemap_fd requested by user */
>>>> + } preferred_loc;
>>>
>>> I'm a little unclear on how these variables work.
>>>
>>> In the uAPI for migration_policy, I see MIGRATE_ALL_PAGES and
>>> MIGRATE_ONLY_SYSTEM_PAGES (these should probably be normalized with a
>>> DRM_XE_* prefix, by the way), but it's unclear to me what exactly these
>>> mean or how they're used based on the final result—could you clarify?
>>
>> With multi-device support, the idea was to have the flexibility to move
>> only system pages to the preferred location, or to also move pages from
>> other VRAM locations to the preferred location.
>>
>
> Ok, I think having bits set aside for this makes sense.
>
>>>
>>> Likewise, I'm confused about the devmem_fd usage. It can either be
>>> assigned a devmem_fd from the uAPI, but in some cases, it's interpreted
>>> as a region. I assume this is anticipating multi-GPU support, but again,
>>> the plan isn't clear to me. Could you explain?
>>
>> The devmem_fd is intended to be used to determine the struct drm_pagemap *,
>> which in turn will be used to identify the tile associated with VRAM for
>> allocation and binding. The changes that introduce the
>> devmem_fd->drm_pagemap->tile [1] linkage will be part of the upcoming
>> multi-GPU support.
>>
>> To ensure that the current changes are easily scalable and can be extended
>> for multi-GPU support, I am defining devmem_fd in the UAPI and using it in
>> the KMD as a region placeholder until multi-GPU support is integrated.
>>
>
> Hmm, will this break the uAPI though? e.g. How does the UMD choose
> between region and FD at the uAPI level? If the answer is once multi-GPU
> lands it is always a FD rather than region then we really need to land
> some of the multi-GPU patches at same time as madvise - at least the
> ones which export memory regions as FDs.
I have tried to streamline these changes in the latest revision. Let's
see if they make sense. This is with the assumption that in a multi-device
setup, devmem_fd <= 0 is actually invalid and doesn't point to any
remote/local tile.
A value of 0 for DEVMEM_FD means the default behavior, i.e. VRAM of the
faulting tile.
A negative value indicates that SMEM should be used.
Other positive values correspond to valid devmem_fds. With the landing
of multi-device changes, if a positive devmem_fd is not valid, we can
fall back to the faulting tile or SMEM.
>
> Let's loop in Thomas on the multi-GPU assumptions too to ensure
> correctness.
>
> Matt
>
>> [1] https://patchwork.freedesktop.org/patch/642773/?series=146227&rev=1
>>
>>
>>>
>>> In general I agree with the idea of xe_vma_mem_attr though.
>>>
>>> Matt
>>>
>>>> + /** @atomic_access: The atomic access type for the vma */
>>>> + u32 atomic_access;
>>>> +};
>>>> +
>>>> struct xe_vma {
>>>> /** @gpuva: Base GPUVA object */
>>>> struct drm_gpuva gpuva;
>>>> @@ -128,6 +141,13 @@ struct xe_vma {
>>>> * Needs to be signalled before UNMAP can be processed.
>>>> */
>>>> struct xe_user_fence *ufence;
>>>> +
>>>> + /**
>>>> + * @attr: The attributes of vma which determines the migration policy
>>>> + * and encoding of the PTEs for this vma.
>>>> + */
>>>> + struct xe_vma_mem_attr attr;
>>>> +
>>>> };
>>>> /**
>>>> --
>>>> 2.34.1
>>>>
>>
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma
2025-05-28 5:33 ` Ghimiray, Himal Prasad
@ 2025-05-28 16:09 ` Matthew Brost
2025-05-28 16:16 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 120+ messages in thread
From: Matthew Brost @ 2025-05-28 16:09 UTC (permalink / raw)
To: Ghimiray, Himal Prasad; +Cc: intel-xe, thomas.hellstrom
On Wed, May 28, 2025 at 11:03:31AM +0530, Ghimiray, Himal Prasad wrote:
>
>
> On 27-05-2025 23:07, Matthew Brost wrote:
> > On Tue, May 20, 2025 at 02:57:45PM +0530, Ghimiray, Himal Prasad wrote:
> > >
> > >
> > > On 15-05-2025 00:06, Matthew Brost wrote:
> > > > On Mon, Apr 07, 2025 at 03:47:05PM +0530, Himal Prasad Ghimiray wrote:
> > > > > The attribute of xe_vma will determine the migration policy and the
> > > > > encoding of the page table entries (PTEs) for that vma.
> > > > > This attribute helps manage how memory pages are moved and how their
> > > > > addresses are translated. It will be used by madvise to set the
> > > > > behavior of the vma.
> > > > >
> > > > > Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > > > > ---
> > > > > drivers/gpu/drm/xe/xe_vm.c | 6 ++++++
> > > > > drivers/gpu/drm/xe/xe_vm_types.h | 20 ++++++++++++++++++++
> > > > > 2 files changed, 26 insertions(+)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > > > index 27a8dbe709c2..1ff9e477e061 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > > > @@ -2470,6 +2470,12 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
> > > > > vma = ERR_PTR(err);
> > > > > }
> > > > > + /*TODO: assign devmem_fd of local vram once multi device
> > > > > + * support is added.
> > > > > + */
> > > > > + vma->attr.preferred_loc.devmem_fd = 1;
> > > >
> > > > Assigning a value of '1' is a bit odd... I'd prefer using a define or
> > > > something similar to indicate the intended behavior. I noticed a few
> > > > other assignments to '1' in the final result—same comment applies to
> > > > those.
> > >
> > > Sure
> > >
> > > >
> > > > > + vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
> > > > > +
> > > > > return vma;
> > > > > }
> > > > > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > index d3c1209348e9..5f5feffecb82 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > > > > @@ -77,6 +77,19 @@ struct xe_userptr {
> > > > > #endif
> > > > > };
> > > > > +/**
> > > > > + * struct xe_vma_mem_attr - memory attributes associated with vma
> > > > > + */
> > > > > +struct xe_vma_mem_attr {
> > > > > + /** @preferred_loc: preferred memory location */
> > > > > + struct {
> > > > > + u32 migration_policy; /* represents migration policies */
> > > > > + u32 devmem_fd; /* devmem_fd used for determining pagemap_fd requested by user */
> > > > > + } preferred_loc;
> > > >
> > > > I'm a little unclear on how these variables work.
> > > >
> > > > In the uAPI for migration_policy, I see MIGRATE_ALL_PAGES and
> > > > MIGRATE_ONLY_SYSTEM_PAGES (these should probably be normalized with a
> > > > DRM_XE_* prefix, by the way), but it's unclear to me what exactly these
> > > > mean or how they're used based on the final result—could you clarify?
> > >
> > > With multi-device support the idea was to have flexibility to move only
> > > system pages to preferred location or also move pages from other vram
> > > location to preferred location.
> > >
> >
> > Ok, I think having bits set aside for this makes sense.
> >
> > > >
> > > > Likewise, I'm confused about the devmem_fd usage. It can either be
> > > > assigned a devmem_fd from the uAPI, but in some cases, it's interpreted
> > > > as a region. I assume this is anticipating multi-GPU support, but again,
> > > > the plan isn't clear to me. Could you explain?
> > >
> > > The devmem_fd is intended to be used to determine the struct drm_pagemap *,
> > > which in turn will be used to identify the tile associated with VRAM for
> > > allocation and binding. The changes that introduce the
> > > devmem_fd->drm_pagemap->tile [1] linkage will be part of the upcoming
> > > multi-GPU support.
> > >
> > > To ensure that the current changes are easily scalable and can be extended
> > > for multi-GPU support, I am defining devmem_fd in the UAPI and using it in
> > > the KMD as a region placeholder until multi-GPU support is integrated.
> > >
> >
> > Hmm, will this break the uAPI though? e.g. How does the UMD choose
> > between region and FD at the uAPI level? If the answer is once multi-GPU
> > lands it is always a FD rather than region then we really need to land
> > some of the multi-GPU patches at same time as madvise - at least the
> > ones which export memory regions as FDs.
>
> I have tried to streamline these changes in the latest revision. Let's see
> if they make sense. This is with assumption that in a multi-device setup
> devmem_fd <=0 is actually invalid and doesn't point to any remote/local
> tile.
>
> A value of 0 for DEVMEM_FD means the default behavior or VRAM of the
> faulting tile.
>
> A negative value indicates that SMEM should be used.
>
> Other positive values correspond to valid devmem_fds. With the landing of
> multi-device changes, if a positive devmem_fd is not valid, we can fall back
> to the faulting tile or SMEM.
>
I think that makes sense - so then multi-tile (or more specifically,
multi-vram regions) within a single device then is an extension of
multi-gpu.
Matt
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma
2025-05-28 16:09 ` Matthew Brost
@ 2025-05-28 16:16 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-28 16:16 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 28-05-2025 21:39, Matthew Brost wrote:
> On Wed, May 28, 2025 at 11:03:31AM +0530, Ghimiray, Himal Prasad wrote:
>>
>>
>> On 27-05-2025 23:07, Matthew Brost wrote:
>>> On Tue, May 20, 2025 at 02:57:45PM +0530, Ghimiray, Himal Prasad wrote:
>>>>
>>>>
>>>> On 15-05-2025 00:06, Matthew Brost wrote:
>>>>> On Mon, Apr 07, 2025 at 03:47:05PM +0530, Himal Prasad Ghimiray wrote:
>>>>>> The attribute of xe_vma will determine the migration policy and the
>>>>>> encoding of the page table entries (PTEs) for that vma.
>>>>>> This attribute helps manage how memory pages are moved and how their
>>>>>> addresses are translated. It will be used by madvise to set the
>>>>>> behavior of the vma.
>>>>>>
>>>>>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>>>>> ---
>>>>>> drivers/gpu/drm/xe/xe_vm.c | 6 ++++++
>>>>>> drivers/gpu/drm/xe/xe_vm_types.h | 20 ++++++++++++++++++++
>>>>>> 2 files changed, 26 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>>>>> index 27a8dbe709c2..1ff9e477e061 100644
>>>>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>>>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>>>>> @@ -2470,6 +2470,12 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
>>>>>> vma = ERR_PTR(err);
>>>>>> }
>>>>>> + /*TODO: assign devmem_fd of local vram once multi device
>>>>>> + * support is added.
>>>>>> + */
>>>>>> + vma->attr.preferred_loc.devmem_fd = 1;
>>>>>
>>>>> Assigning a value of '1' is a bit odd... I'd prefer using a define or
>>>>> something similar to indicate the intended behavior. I noticed a few
>>>>> other assignments to '1' in the final result—same comment applies to
>>>>> those.
>>>>
>>>> Sure
>>>>
>>>>>
>>>>>> + vma->attr.atomic_access = DRM_XE_VMA_ATOMIC_UNDEFINED;
>>>>>> +
>>>>>> return vma;
>>>>>> }
>>>>>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
>>>>>> index d3c1209348e9..5f5feffecb82 100644
>>>>>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>>>>>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>>>>>> @@ -77,6 +77,19 @@ struct xe_userptr {
>>>>>> #endif
>>>>>> };
>>>>>> +/**
>>>>>> + * struct xe_vma_mem_attr - memory attributes associated with vma
>>>>>> + */
>>>>>> +struct xe_vma_mem_attr {
>>>>>> + /** @preferred_loc: preferred memory location */
>>>>>> + struct {
>>>>>> + u32 migration_policy; /* represents migration policies */
>>>>>> + u32 devmem_fd; /* devmem_fd used for determining pagemap_fd requested by user */
>>>>>> + } preferred_loc;
>>>>>
>>>>> I'm a little unclear on how these variables work.
>>>>>
>>>>> In the uAPI for migration_policy, I see MIGRATE_ALL_PAGES and
>>>>> MIGRATE_ONLY_SYSTEM_PAGES (these should probably be normalized with a
>>>>> DRM_XE_* prefix, by the way), but it's unclear to me what exactly these
>>>>> mean or how they're used based on the final result—could you clarify?
>>>>
>>>> With multi-device support the idea was to have flexibility to move only
>>>> system pages to preferred location or also move pages from other vram
>>>> location to preferred location.
>>>>
>>>
>>> Ok, I think having bits set aside for this makes sense.
>>>
>>>>>
>>>>> Likewise, I'm confused about the devmem_fd usage. It can either be
>>>>> assigned a devmem_fd from the uAPI, but in some cases, it's interpreted
>>>>> as a region. I assume this is anticipating multi-GPU support, but again,
>>>>> the plan isn't clear to me. Could you explain?
>>>>
>>>> The devmem_fd is intended to be used to determine the struct drm_pagemap *,
>>>> which in turn will be used to identify the tile associated with VRAM for
>>>> allocation and binding. The changes that introduce the
>>>> devmem_fd->drm_pagemap->tile [1] linkage will be part of the upcoming
>>>> multi-GPU support.
>>>>
>>>> To ensure that the current changes are easily scalable and can be extended
>>>> for multi-GPU support, I am defining devmem_fd in the UAPI and using it in
>>>> the KMD as a region placeholder until multi-GPU support is integrated.
>>>>
>>>
>>> Hmm, will this break the uAPI though? e.g. How does the UMD choose
>>> between region and FD at the uAPI level? If the answer is once multi-GPU
>>> lands it is always a FD rather than region then we really need to land
>>> some of the multi-GPU patches at same time as madvise - at least the
>>> ones which export memory regions as FDs.
>>
>> I have tried to streamline these changes in the latest revision. Let's see
>> if they make sense. This is with assumption that in a multi-device setup
>> devmem_fd <=0 is actually invalid and doesn't point to any remote/local
>> tile.
>>
>> A value of 0 for DEVMEM_FD means the default behavior or VRAM of the
>> faulting tile.
>>
>> A negative value indicates that SMEM should be used.
>>
>> Other positive values correspond to valid devmem_fds. With the landing of
>> multi-device changes, if a positive devmem_fd is not valid, we can fall back
>> to the faulting tile or SMEM.
>>
>
> I think that makes sense - so then multi-tile (or more specifically,
> multi-vram regions) within a single device then is an extension of
> multi-gpu.
Correct
>
> Matt
^ permalink raw reply [flat|nested] 120+ messages in thread
* Re: [PATCH v2 30/32] drm/xe/uapi: Add uapi for vma count and mem attributes
2025-05-21 8:54 ` Ghimiray, Himal Prasad
@ 2025-05-28 16:18 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 120+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-05-28 16:18 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, thomas.hellstrom
On 21-05-2025 14:24, Ghimiray, Himal Prasad wrote:
>
>
> On 15-05-2025 02:38, Matthew Brost wrote:
>> On Mon, Apr 07, 2025 at 03:47:17PM +0530, Himal Prasad Ghimiray wrote:
>>> -DRM_IOCTL_XE_VM_QUERY_VMAS: Return number of VMAs in user-specified
>>> range.
>>> -DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS: Fill VMA attributes in user-provided
>>> buffer.
>>>
>>
>> Replied to wrong version earlier...
>>
>>
>> I can't remember if we landed on whether this is needed? I thought the
>> answer was: no, not needed.
>
> Will hold off on this till the UMD confirms whether they need it or not.
The UMD confirmed they need this. Will add changes to support this via the
same ioctl as suggested.
>
>>
>> If it is needed, could we make this a single IOCTL? e.g. Call it once
>> with num_vmas == 0 + NULL vector, the IOCTL returns num_vmas, then call
>> it again with num_vmas != 0 + non-NULL vector. Generally we try not to
>> burn IOCTL numbers, rather overload functionality.
>>
>> Matt
>>
>>
>>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/xe_device.c | 2 +
>>> drivers/gpu/drm/xe/xe_vm.c | 94 +++++++++++++++++++++++++++
>>> drivers/gpu/drm/xe/xe_vm.h | 3 +-
>>> include/uapi/drm/xe_drm.h | 115 +++++++++++++++++++++++++++++++++
>>> 4 files changed, 213 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
>>> index 3e57300014bf..968c24c77241 100644
>>> --- a/drivers/gpu/drm/xe/xe_device.c
>>> +++ b/drivers/gpu/drm/xe/xe_device.c
>>> @@ -198,6 +198,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
>>> DRM_RENDER_ALLOW),
>>> DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl,
>>> DRM_RENDER_ALLOW),
>>> DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl,
>>> DRM_RENDER_ALLOW),
>>> + DRM_IOCTL_DEF_DRV(XE_VM_QUERY_VMAS, xe_vm_query_vmas_ioctl, DRM_RENDER_ALLOW),
>>> + DRM_IOCTL_DEF_DRV(XE_VM_QUERY_VMAS_ATTRS, xe_vm_query_vmas_attrs_ioctl, DRM_RENDER_ALLOW),
>>> };
>>> static long xe_drm_ioctl(struct file *file, unsigned int cmd,
>>> unsigned long arg)
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
>>> index e5246c633e62..f1d4daf90efe 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.c
>>> +++ b/drivers/gpu/drm/xe/xe_vm.c
>>> @@ -2165,6 +2165,100 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
>>> return err;
>>> }
>>> +int xe_vm_query_vmas_ioctl(struct drm_device *dev, void *data,
>>> + struct drm_file *file)
>>> +{
>>> + struct xe_device *xe = to_xe_device(dev);
>>> + struct xe_file *xef = to_xe_file(file);
>>> + struct drm_xe_vm_query_num_vmas *args = data;
>>> + struct drm_gpuva *gpuva;
>>> + struct xe_vm *vm;
>>> +
>>> + vm = xe_vm_lookup(xef, args->vm_id);
>>> + if (XE_IOCTL_DBG(xe, !vm))
>>> + return -EINVAL;
>>> +
>>> + args->num_vmas = 0;
>>> + down_write(&vm->lock);
>>> +
>>> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, args->start,
>>> args->start + args->range)
>>> + args->num_vmas++;
>>> +
>>> + up_write(&vm->lock);
>>> + return 0;
>>> +}
>>> +
>>> +static int get_mem_attrs(struct xe_vm *vm, u32 *num_vmas, u64 start,
>>> + u64 end, struct drm_xe_vma_mem_attr *mem_attrs)
>>> +{
>>> + struct drm_gpuva *gpuva;
>>> + int i = 0;
>>> +
>>> + lockdep_assert_held(&vm->lock);
>>> +
>>> + drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
>>> + struct xe_vma *vma = gpuva_to_vma(gpuva);
>>> +
>>> + if (i == *num_vmas)
>>> + return -EINVAL;
>>> +
>>> + mem_attrs[i].start = xe_vma_start(vma);
>>> + mem_attrs[i].end = xe_vma_end(vma);
>>> + mem_attrs[i].atomic.val = vma->attr.atomic_access;
>>> + mem_attrs[i].pat_index.val = vma->attr.pat_index;
>>> + mem_attrs[i].preferred_mem_loc.devmem_fd = vma->attr.preferred_loc.devmem_fd;
>>> + mem_attrs[i].preferred_mem_loc.migration_policy = vma->attr.preferred_loc.migration_policy;
>>> +
>>> + i++;
>>> + }
>>> +
>>> + if (i < *num_vmas)
>>> + *num_vmas = i;
>>> + return 0;
>>> +}
>>> +
>>> +int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data,
>>> + struct drm_file *file)
>>> +{
>>> + struct xe_device *xe = to_xe_device(dev);
>>> + struct xe_file *xef = to_xe_file(file);
>>> + struct drm_xe_vma_mem_attr *mem_attrs;
>>> + struct drm_xe_vm_query_vmas_attr *args = data;
>>> + u64 __user *attrs_user = NULL;
>>> + struct xe_vm *vm;
>>> + int err;
>>> +
>>> + if (XE_IOCTL_DBG(xe, args->num_vmas < 1))
>>> + return -EINVAL;
>>> +
>>> + vm = xe_vm_lookup(xef, args->vm_id);
>>> + if (XE_IOCTL_DBG(xe, !vm))
>>> + return -EINVAL;
>>> +
>>> + down_write(&vm->lock);
>>> +
>>> + attrs_user = u64_to_user_ptr(args->vector_of_vma_mem_attr);
>>> + mem_attrs = kvmalloc_array(args->num_vmas, sizeof(struct drm_xe_vma_mem_attr),
>>> + GFP_KERNEL | __GFP_ACCOUNT |
>>> + __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
>>> + if (!mem_attrs) {
>>> + err = args->num_vmas > 1 ? -ENOBUFS : -ENOMEM;
>>> + goto unlock;
>>> + }
>>> +
>>> + err = get_mem_attrs(vm, &args->num_vmas, args->start,
>>> + args->start + args->range, mem_attrs);
>>> + if (err)
>>> + goto free_mem_attrs;
>>> +
>>> + if (copy_to_user(attrs_user, mem_attrs,
>>> + sizeof(struct drm_xe_vma_mem_attr) * args->num_vmas))
>>> + err = -EFAULT;
>>> +
>>> +free_mem_attrs:
>>> + kvfree(mem_attrs);
>>> +unlock:
>>> + up_write(&vm->lock);
>>> +
>>> + return err;
>>> +}
>>> +
>>> static bool vma_matches(struct xe_vma *vma, u64 page_addr)
>>> {
>>> if (page_addr > xe_vma_end(vma) - 1 ||
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>>> index 377f62f859b7..0b2d6e9f77ef 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.h
>>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>>> @@ -193,7 +193,8 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
>>> struct drm_file *file);
>>> int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
>>> struct drm_file *file);
>>> -
>>> +int xe_vm_query_vmas_ioctl(struct drm_device *dev, void *data,
>>> + struct drm_file *file);
>>> +int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data,
>>> + struct drm_file *file);
>>> void xe_vm_close_and_put(struct xe_vm *vm);
>>> static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
>>> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
>>> index ab96dee25f6c..177ee3a1c20d 100644
>>> --- a/include/uapi/drm/xe_drm.h
>>> +++ b/include/uapi/drm/xe_drm.h
>>> @@ -82,6 +82,8 @@ extern "C" {
>>> * - &DRM_IOCTL_XE_WAIT_USER_FENCE
>>> * - &DRM_IOCTL_XE_OBSERVATION
>>> * - &DRM_IOCTL_XE_MADVISE
>>> + * - &DRM_IOCTL_XE_VM_QUERY_VMAS
>>> + * - &DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS
>>> */
>>> /*
>>> @@ -104,6 +106,8 @@ extern "C" {
>>> #define DRM_XE_WAIT_USER_FENCE 0x0a
>>> #define DRM_XE_OBSERVATION 0x0b
>>> #define DRM_XE_MADVISE 0x0c
>>> +#define DRM_XE_VM_QUERY_VMAS 0x0d
>>> +#define DRM_XE_VM_QUERY_VMAS_ATTRS 0x0e
>>> /* Must be kept compact -- no holes */
>>> @@ -120,6 +124,8 @@ extern "C" {
>>> #define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
>>> #define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
>>> #define DRM_IOCTL_XE_MADVISE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
>>> +#define DRM_IOCTL_XE_VM_QUERY_VMAS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS, struct drm_xe_vm_query_num_vmas)
>>> +#define DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_VMAS_ATTRS, struct drm_xe_vm_query_vmas_attr)
>>> /**
>>> * DOC: Xe IOCTL Extensions
>>> @@ -2063,6 +2069,115 @@ struct drm_xe_madvise {
>>> };
>>> +/**
>>> + * struct drm_xe_vm_query_num_vmas - Input of &DRM_IOCTL_XE_VM_QUERY_VMAS
>>> + *
>>> + * Get number of vmas in virtual range of vm_id
>>> + */
>>> +struct drm_xe_vm_query_num_vmas {
>>> + /** @extensions: Pointer to the first extension struct, if any */
>>> + __u64 extensions;
>>> +
>>> + /** @vm_id: vm_id of the virtual range */
>>> + __u32 vm_id;
>>> +
>>> + /** @num_vmas: on return, number of vmas in the range */
>>> + __u32 num_vmas;
>>> +
>>> + /** @start: start of the virtual address range */
>>> + __u64 start;
>>> +
>>> + /** @range: size of the virtual address range */
>>> + __u64 range;
>>> +
>>> + /** @reserved: Reserved */
>>> + __u64 reserved[2];
>>> +};
>>> +
>>> +/**
>>> + * struct drm_xe_vma_mem_attr - Memory attributes of a single vma
>>> + */
>>> +struct drm_xe_vma_mem_attr {
>>> + /** @extensions: Pointer to the first extension struct, if any */
>>> + __u64 extensions;
>>> +
>>> + /** @start: start of the vma */
>>> + __u64 start;
>>> +
>>> + /** @end: end of the vma */
>>> + __u64 end;
>>> +
>>> + struct {
>>> + struct {
>>> + /** @val: value of the atomic operation attribute */
>>> + __u32 val;
>>> +
>>> + /** @reserved: Reserved */
>>> + __u32 reserved;
>>> + } atomic;
>>> +
>>> + struct {
>>> + /** @val: value for DRM_XE_VMA_ATTR_PURGEABLE_STATE */
>>> + __u32 val;
>>> +
>>> + /** @reserved: Reserved */
>>> + __u32 reserved;
>>> + } purge_state_val;
>>> +
>>> + struct {
>>> + /** @val: pat_index value */
>>> + __u32 val;
>>> +
>>> + /** @reserved: Reserved */
>>> + __u32 reserved;
>>> + } pat_index;
>>> +
>>> + /** @preferred_mem_loc: preferred memory location */
>>> + struct {
>>> + /** @devmem_fd: fd identifying the preferred device memory */
>>> + __u32 devmem_fd;
>>> +
>>> + /** @migration_policy: policy for migration to @devmem_fd */
>>> + __u32 migration_policy;
>>> + } preferred_mem_loc;
>>> + };
>>> +
>>> + /** @reserved: Reserved */
>>> + __u64 reserved[2];
>>> +};
>>> +
>>> +/**
>>> + * struct drm_xe_vm_query_vmas_attr - Input of &DRM_IOCTL_XE_VM_QUERY_VMAS_ATTRS
>>> + *
>>> + * Get memory attributes of the vmas in a virtual address range
>>> + */
>>> +struct drm_xe_vm_query_vmas_attr {
>>> + /** @extensions: Pointer to the first extension struct, if any */
>>> + __u64 extensions;
>>> +
>>> + /** @vm_id: vm_id of the virtual range */
>>> + __u32 vm_id;
>>> +
>>> + /** @num_vmas: in: size of the user vector; out: number of vmas filled */
>>> + __u32 num_vmas;
>>> +
>>> + /** @start: start of the virtual address range */
>>> + __u64 start;
>>> +
>>> + /** @range: size of the virtual address range */
>>> + __u64 range;
>>> +
>>> + union {
>>> + /** @attr: single attribute struct, used if num_vmas == 1 */
>>> + struct drm_xe_vma_mem_attr attr;
>>> +
>>> + /**
>>> + * @vector_of_vma_mem_attr: userptr to array of struct
>>> + * drm_xe_vma_mem_attr, used if num_vmas > 1
>>> + */
>>> + __u64 vector_of_vma_mem_attr;
>>> + };
>>> +
>>> + /** @reserved: Reserved */
>>> + __u64 reserved[2];
>>> +
>>> +};
>>> +
>>> #if defined(__cplusplus)
>>> }
>>> #endif
>>> --
>>> 2.34.1
>>>
>
Thread overview: 120+ messages
2025-04-07 10:16 [PATCH v2 00/32] PREFETCH and MADVISE for SVM ranges Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 01/32] drm/xe: Introduce xe_vma_op_prefetch_range struct for prefetch of ranges Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 02/32] drm/xe: Make xe_svm_alloc_vram public Himal Prasad Ghimiray
2025-04-17 2:50 ` Matthew Brost
2025-04-21 4:06 ` Ghimiray, Himal Prasad
2025-04-07 10:16 ` [PATCH v2 03/32] drm/xe/svm: Helper to add tile masks to svm ranges Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 04/32] drm/xe/svm: Make to_xe_range a public function Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 05/32] drm/xe/svm: Make xe_svm_range_* end/start/size public Himal Prasad Ghimiray
2025-04-07 10:16 ` [PATCH v2 06/32] drm/xe/vm: Update xe_vma_ops_incr_pt_update_ops to take an increment value Himal Prasad Ghimiray
2025-04-17 0:10 ` Matthew Brost
2025-04-21 4:09 ` Ghimiray, Himal Prasad
2025-04-07 10:16 ` [PATCH v2 07/32] drm/xe/vm: Add an identifier in xe_vma_ops for svm prefetch Himal Prasad Ghimiray
2025-04-17 2:53 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 08/32] drm/xe: Rename lookup_vma function to xe_find_vma_by_addr Himal Prasad Ghimiray
2025-04-07 22:42 ` kernel test robot
2025-04-07 10:16 ` [PATCH v2 09/32] drm/xe/svm: Allow unaligned addresses and ranges for prefetch Himal Prasad Ghimiray
2025-04-17 2:53 ` Matthew Brost
2025-04-07 10:16 ` [PATCH v2 10/32] drm/xe/svm: Refactor usage of drm_gpusvm* function in xe_svm Himal Prasad Ghimiray
2025-04-17 2:57 ` Matthew Brost
2025-04-21 4:30 ` Ghimiray, Himal Prasad
2025-04-07 10:16 ` [PATCH v2 11/32] drm/xe/svm: Add function to determine if range needs VRAM migration Himal Prasad Ghimiray
2025-04-17 3:05 ` Matthew Brost
2025-04-21 4:52 ` Ghimiray, Himal Prasad
2025-04-07 10:16 ` [PATCH v2 12/32] drm/gpusvm: Introduce vram_only flag for VRAM allocation Himal Prasad Ghimiray
2025-04-17 3:07 ` Matthew Brost
2025-04-21 4:55 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 13/32] drm/xe/svm: Incase of atomic access ensure get_pages happens from vram Himal Prasad Ghimiray
2025-04-17 4:19 ` Matthew Brost
2025-04-21 4:58 ` Ghimiray, Himal Prasad
2025-04-21 6:29 ` Ghimiray, Himal Prasad
2025-04-22 15:25 ` Matthew Brost
2025-04-22 15:27 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 14/32] drm/xe/svm: Implement prefetch support for SVM ranges Himal Prasad Ghimiray
2025-04-17 4:54 ` Matthew Brost
2025-04-24 10:03 ` Ghimiray, Himal Prasad
2025-04-24 23:48 ` Matthew Brost
2025-04-28 6:44 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 15/32] drm/xe/vm: Add debug prints for SVM range prefetch Himal Prasad Ghimiray
2025-04-17 4:56 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 16/32] Introduce drm_gpuvm_sm_map_ops_flags enums for sm_map_ops Himal Prasad Ghimiray
2025-04-07 10:30 ` Boris Brezillon
2025-05-26 13:48 ` Ghimiray, Himal Prasad
2025-04-07 22:42 ` kernel test robot
2025-04-07 10:17 ` [PATCH v2 17/32] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
2025-04-17 18:19 ` Souza, Jose
2025-04-17 18:24 ` Souza, Jose
2025-04-22 15:34 ` Matthew Brost
2025-04-22 15:55 ` Souza, Jose
2025-04-22 16:19 ` Matthew Brost
2025-04-22 15:40 ` Matthew Brost
2025-04-22 16:02 ` Souza, Jose
2025-04-22 16:12 ` Matthew Brost
2025-04-22 16:16 ` Souza, Jose
2025-05-02 14:00 ` Thomas Hellström
2025-05-20 8:13 ` Ghimiray, Himal Prasad
2025-05-20 8:49 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 18/32] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
2025-05-14 18:36 ` Matthew Brost
2025-05-20 9:27 ` Ghimiray, Himal Prasad
2025-05-27 17:37 ` Matthew Brost
2025-05-28 5:33 ` Ghimiray, Himal Prasad
2025-05-28 16:09 ` Matthew Brost
2025-05-28 16:16 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 19/32] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
2025-05-14 18:37 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 20/32] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
2025-05-13 2:36 ` Matthew Brost
2025-05-14 18:40 ` Matthew Brost
2025-05-20 9:28 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 21/32] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
2025-04-08 1:49 ` kernel test robot
2025-05-14 18:47 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 22/32] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
2025-05-14 19:01 ` Matthew Brost
2025-05-20 9:46 ` Ghimiray, Himal Prasad
2025-05-14 19:02 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 23/32] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
2025-05-14 21:41 ` Matthew Brost
2025-05-20 10:15 ` Ghimiray, Himal Prasad
2025-05-28 5:22 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 24/32] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
2025-05-14 19:20 ` Matthew Brost
2025-05-20 10:21 ` Ghimiray, Himal Prasad
2025-05-27 17:32 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 25/32] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
2025-05-14 22:21 ` Matthew Brost
2025-05-20 10:22 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 26/32] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
2025-05-14 22:04 ` Matthew Brost
2025-05-21 8:50 ` Ghimiray, Himal Prasad
2025-05-21 16:51 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 27/32] drm/xe/svm: Support DRM_XE_SVM_ATTR_PAT memory attribute Himal Prasad Ghimiray
2025-05-14 21:52 ` Matthew Brost
2025-05-21 8:51 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 28/32] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
2025-05-14 21:05 ` Matthew Brost
2025-05-21 8:52 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 29/32] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
2025-05-14 22:17 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 30/32] drm/xe/uapi: Add uapi for vma count and mem attributes Himal Prasad Ghimiray
2025-05-14 21:08 ` Matthew Brost
2025-05-21 8:54 ` Ghimiray, Himal Prasad
2025-05-28 16:18 ` Ghimiray, Himal Prasad
2025-04-07 10:17 ` [PATCH v2 31/32] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
2025-05-14 21:10 ` Matthew Brost
2025-04-07 10:17 ` [PATCH v2 32/32] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
2025-05-14 22:31 ` Matthew Brost
2025-05-21 9:13 ` Ghimiray, Himal Prasad
2025-04-07 14:07 ` ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev3) Patchwork
2025-04-07 14:07 ` ✗ CI.checkpatch: warning " Patchwork
2025-04-07 14:09 ` ✓ CI.KUnit: success " Patchwork
2025-04-07 14:12 ` ✗ CI.Build: failure " Patchwork
2025-04-09 5:11 ` ✓ CI.Patch_applied: success for PREFETCH and MADVISE for SVM ranges (rev4) Patchwork
2025-04-09 5:11 ` ✗ CI.checkpatch: warning " Patchwork
2025-04-09 5:12 ` ✓ CI.KUnit: success " Patchwork
2025-04-09 5:29 ` ✓ CI.Build: " Patchwork
2025-04-09 5:31 ` ✗ CI.Hooks: failure " Patchwork
2025-04-09 5:32 ` ✗ CI.checksparse: warning " Patchwork
2025-04-09 5:52 ` ✓ Xe.CI.BAT: success " Patchwork
2025-04-09 7:00 ` ✗ Xe.CI.Full: failure " Patchwork