* [PATCH v4 1/8] drm/gpusvm: fix hmm_pfn_to_map_order() usage
From: Matthew Auld @ 2025-05-12 15:06 UTC
To: intel-xe; +Cc: dri-devel, Thomas Hellström, Matthew Brost
Handle the case where the hmm range partially covers a huge page (like
2M), otherwise we can end up mapping memory which is outside the
range, and which may not even be mapped by the mm. The fix is based on
the xe userptr code, which in a future patch will directly use gpusvm,
so the two need to be aligned here.
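For illustration, here is a standalone sketch of the clamping
arithmetic introduced below (the HMM flag masking is stripped out, and
the helper name and pfn values are made up for the example):

  /* Mirror of the drm_gpusvm_hmm_pfn_to_order() arithmetic: clamp the
   * CPU mapping order to the offset within the huge PTE and to the
   * end of the hmm range. */
  static unsigned int order_clamped(unsigned long pfn, unsigned long index,
                                    unsigned long npages,
                                    unsigned int map_order)
  {
          unsigned long size = 1UL << map_order; /* pages in the CPU PTE */
          unsigned int order = 0;

          size -= pfn & (size - 1);         /* pages left until PTE end */
          if (index + size > npages)        /* clamp to the hmm range end */
                  size -= index + size - npages;

          while (size >>= 1)                /* open-coded ilog2(size) */
                  order++;
          return order;
  }

  /* e.g. with a 2M huge page (map order 9) starting at pfn 0x200:
   *   order_clamped(0x208, 0, 1024, 9) == 8  (8 pages into the PTE)
   *   order_clamped(0x200, 0,   16, 9) == 4  (range ends after 16 pages)
   */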
v2:
- Add kernel-doc (Matt B)
- s/fls/ilog2/ (Thomas)
Reported-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 33 +++++++++++++++++++++++++++++++--
1 file changed, 31 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index f3ac2c78e3b2..abb78e06e810 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -817,6 +817,35 @@ drm_gpusvm_range_alloc(struct drm_gpusvm *gpusvm,
return range;
}
+/**
+ * drm_gpusvm_hmm_pfn_to_order() - Get the largest CPU mapping order.
+ * @hmm_pfn: The current hmm_pfn.
+ * @hmm_pfn_index: Index of the @hmm_pfn within the pfn array.
+ * @npages: Number of pages within the pfn array, i.e. the hmm range size.
+ *
+ * To allow skipping PFNs with the same flags (like when they belong to
+ * the same huge PTE) when looping over the pfn array, take a given hmm_pfn
+ * and return the largest order that will fit inside the CPU PTE, while
+ * crucially accounting for the original hmm range boundaries.
+ *
+ * Return: The largest order that will safely fit within both the CPU PTE
+ * backing @hmm_pfn and the hmm range.
+ */
+static unsigned int drm_gpusvm_hmm_pfn_to_order(unsigned long hmm_pfn,
+ unsigned long hmm_pfn_index,
+ unsigned long npages)
+{
+ unsigned long size;
+
+ size = 1UL << hmm_pfn_to_map_order(hmm_pfn);
+ size -= (hmm_pfn & ~HMM_PFN_FLAGS) & (size - 1);
+ hmm_pfn_index += size;
+ if (hmm_pfn_index > npages)
+ size -= (hmm_pfn_index - npages);
+
+ return ilog2(size);
+}
+
/**
* drm_gpusvm_check_pages() - Check pages
* @gpusvm: Pointer to the GPU SVM structure
@@ -875,7 +904,7 @@ static bool drm_gpusvm_check_pages(struct drm_gpusvm *gpusvm,
err = -EFAULT;
goto err_free;
}
- i += 0x1 << hmm_pfn_to_map_order(pfns[i]);
+ i += 0x1 << drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
}
err_free:
@@ -1406,7 +1435,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
for (i = 0, j = 0; i < npages; ++j) {
struct page *page = hmm_pfn_to_page(pfns[i]);
- order = hmm_pfn_to_map_order(pfns[i]);
+ order = drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
if (is_device_private_page(page) ||
is_device_coherent_page(page)) {
if (zdd != page->zone_device_data && i > 0) {
--
2.49.0
* [PATCH v4 2/8] drm/gpusvm: use more selective dma dir in get_pages()
From: Matthew Auld @ 2025-05-12 15:06 UTC
To: intel-xe; +Cc: dri-devel, Thomas Hellström, Matthew Brost
If we are only reading the memory then, from the device's point of
view, the direction can be DMA_TO_DEVICE. This aligns with the
xe-userptr code. Using the most restrictive data direction that still
represents the access is normally a good idea.
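As a minimal sketch of the invariant this relies on (dev, page, order
and read_only here are placeholders): whatever direction is chosen at
map time must also be used for the matching unmap:

  enum dma_data_direction dir = read_only ? DMA_TO_DEVICE
                                          : DMA_BIDIRECTIONAL;
  dma_addr_t addr = dma_map_page(dev, page, 0, PAGE_SIZE << order, dir);

  if (dma_mapping_error(dev, addr))
          return -EFAULT;
  /* ... device reads (and, if not read_only, writes) the memory ... */
  dma_unmap_page(dev, addr, PAGE_SIZE << order, dir);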
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index abb78e06e810..949334ca3f5c 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1362,6 +1362,8 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
int err = 0;
struct dev_pagemap *pagemap;
struct drm_pagemap *dpagemap;
+ enum dma_data_direction dma_dir = ctx->read_only ? DMA_TO_DEVICE :
+ DMA_BIDIRECTIONAL;
retry:
hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
@@ -1465,7 +1467,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
dpagemap->ops->device_map(dpagemap,
gpusvm->drm->dev,
page, order,
- DMA_BIDIRECTIONAL);
+ dma_dir);
if (dma_mapping_error(gpusvm->drm->dev,
range->dma_addr[j].addr)) {
err = -EFAULT;
@@ -1482,7 +1484,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
addr = dma_map_page(gpusvm->drm->dev,
page, 0,
PAGE_SIZE << order,
- DMA_BIDIRECTIONAL);
+ dma_dir);
if (dma_mapping_error(gpusvm->drm->dev, addr)) {
err = -EFAULT;
goto err_unmap;
@@ -1490,7 +1492,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
range->dma_addr[j] = drm_pagemap_device_addr_encode
(addr, DRM_INTERCONNECT_SYSTEM, order,
- DMA_BIDIRECTIONAL);
+ dma_dir);
}
i += 1 << order;
num_dma_mapped = i;
--
2.49.0
* [PATCH v4 3/8] drm/gpusvm: pull out drm_gpusvm_pages substructure
From: Matthew Auld @ 2025-05-12 15:06 UTC
To: intel-xe; +Cc: dri-devel, Matthew Brost, Thomas Hellström
Pull the pages state out of the svm range into its own substructure,
with the idea of having the main pages-related routines, like
get_pages(), unmap_pages() and free_pages(), all operate on this lower
level structure, which can then be re-used for things like userptr.
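The intended reuse pattern, sketched here with a hypothetical
driver-side object, is that anything which is not a full blown svm
range can still embed the same pages substructure:

  struct my_userptr {                      /* hypothetical */
          struct drm_gpusvm_pages pages;   /* dma_addr, dpagemap, seq, flags */
          struct mmu_interval_notifier notifier;
          /* driver-specific bookkeeping ... */
  };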
v2:
- Move seq into pages struct (Matt B)
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 66 ++++++++++++++++++++----------------
drivers/gpu/drm/xe/xe_pt.c | 2 +-
drivers/gpu/drm/xe/xe_svm.c | 8 ++---
drivers/gpu/drm/xe/xe_svm.h | 6 ++--
include/drm/drm_gpusvm.h | 53 +++++++++++++++++------------
5 files changed, 76 insertions(+), 59 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 949334ca3f5c..8b29c2d7d13f 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -811,8 +811,8 @@ drm_gpusvm_range_alloc(struct drm_gpusvm *gpusvm,
range->itree.start = ALIGN_DOWN(fault_addr, chunk_size);
range->itree.last = ALIGN(fault_addr + 1, chunk_size) - 1;
INIT_LIST_HEAD(&range->entry);
- range->notifier_seq = LONG_MAX;
- range->flags.migrate_devmem = migrate_devmem ? 1 : 0;
+ range->pages.notifier_seq = LONG_MAX;
+ range->pages.flags.migrate_devmem = migrate_devmem ? 1 : 0;
return range;
}
@@ -1140,15 +1140,16 @@ static void __drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range,
unsigned long npages)
{
- unsigned long i, j;
- struct drm_pagemap *dpagemap = range->dpagemap;
+ struct drm_gpusvm_pages *svm_pages = &range->pages;
+ struct drm_pagemap *dpagemap = svm_pages->dpagemap;
struct device *dev = gpusvm->drm->dev;
+ unsigned long i, j;
lockdep_assert_held(&gpusvm->notifier_lock);
- if (range->flags.has_dma_mapping) {
+ if (svm_pages->flags.has_dma_mapping) {
for (i = 0, j = 0; i < npages; j++) {
- struct drm_pagemap_device_addr *addr = &range->dma_addr[j];
+ struct drm_pagemap_device_addr *addr = &svm_pages->dma_addr[j];
if (addr->proto == DRM_INTERCONNECT_SYSTEM)
dma_unmap_page(dev,
@@ -1160,9 +1161,9 @@ static void __drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
dev, *addr);
i += 1 << addr->order;
}
- range->flags.has_devmem_pages = false;
- range->flags.has_dma_mapping = false;
- range->dpagemap = NULL;
+ svm_pages->flags.has_devmem_pages = false;
+ svm_pages->flags.has_dma_mapping = false;
+ svm_pages->dpagemap = NULL;
}
}
@@ -1176,11 +1177,13 @@ static void __drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
static void drm_gpusvm_range_free_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range)
{
+ struct drm_gpusvm_pages *svm_pages = &range->pages;
+
lockdep_assert_held(&gpusvm->notifier_lock);
- if (range->dma_addr) {
- kvfree(range->dma_addr);
- range->dma_addr = NULL;
+ if (svm_pages->dma_addr) {
+ kvfree(svm_pages->dma_addr);
+ svm_pages->dma_addr = NULL;
}
}
@@ -1291,9 +1294,11 @@ EXPORT_SYMBOL_GPL(drm_gpusvm_range_put);
bool drm_gpusvm_range_pages_valid(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range)
{
+ struct drm_gpusvm_pages *svm_pages = &range->pages;
+
lockdep_assert_held(&gpusvm->notifier_lock);
- return range->flags.has_devmem_pages || range->flags.has_dma_mapping;
+ return svm_pages->flags.has_devmem_pages || svm_pages->flags.has_dma_mapping;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_pages_valid);
@@ -1311,9 +1316,10 @@ static bool
drm_gpusvm_range_pages_valid_unlocked(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range)
{
+ struct drm_gpusvm_pages *svm_pages = &range->pages;
bool pages_valid;
- if (!range->dma_addr)
+ if (!svm_pages->dma_addr)
return false;
drm_gpusvm_notifier_lock(gpusvm);
@@ -1340,6 +1346,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range,
const struct drm_gpusvm_ctx *ctx)
{
+ struct drm_gpusvm_pages *svm_pages = &range->pages;
struct mmu_interval_notifier *notifier = &range->notifier->notifier;
struct hmm_range hmm_range = {
.default_flags = HMM_PFN_REQ_FAULT | (ctx->read_only ? 0 :
@@ -1407,7 +1414,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
*/
drm_gpusvm_notifier_lock(gpusvm);
- if (range->flags.unmapped) {
+ if (svm_pages->flags.unmapped) {
drm_gpusvm_notifier_unlock(gpusvm);
err = -EFAULT;
goto err_free;
@@ -1419,13 +1426,12 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
goto retry;
}
- if (!range->dma_addr) {
+ if (!svm_pages->dma_addr) {
/* Unlock and restart mapping to allocate memory. */
drm_gpusvm_notifier_unlock(gpusvm);
- range->dma_addr = kvmalloc_array(npages,
- sizeof(*range->dma_addr),
- GFP_KERNEL);
- if (!range->dma_addr) {
+ svm_pages->dma_addr =
+ kvmalloc_array(npages, sizeof(*svm_pages->dma_addr), GFP_KERNEL);
+ if (!svm_pages->dma_addr) {
err = -ENOMEM;
goto err_free;
}
@@ -1463,13 +1469,13 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
goto err_unmap;
}
}
- range->dma_addr[j] =
+ svm_pages->dma_addr[j] =
dpagemap->ops->device_map(dpagemap,
gpusvm->drm->dev,
page, order,
dma_dir);
if (dma_mapping_error(gpusvm->drm->dev,
- range->dma_addr[j].addr)) {
+ svm_pages->dma_addr[j].addr)) {
err = -EFAULT;
goto err_unmap;
}
@@ -1490,24 +1496,24 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
goto err_unmap;
}
- range->dma_addr[j] = drm_pagemap_device_addr_encode
+ svm_pages->dma_addr[j] = drm_pagemap_device_addr_encode
(addr, DRM_INTERCONNECT_SYSTEM, order,
dma_dir);
}
i += 1 << order;
num_dma_mapped = i;
- range->flags.has_dma_mapping = true;
+ svm_pages->flags.has_dma_mapping = true;
}
if (zdd) {
- range->flags.has_devmem_pages = true;
- range->dpagemap = dpagemap;
+ svm_pages->flags.has_devmem_pages = true;
+ svm_pages->dpagemap = dpagemap;
}
drm_gpusvm_notifier_unlock(gpusvm);
kvfree(pfns);
set_seqno:
- range->notifier_seq = hmm_range.notifier_seq;
+ svm_pages->notifier_seq = hmm_range.notifier_seq;
return 0;
@@ -1714,7 +1720,7 @@ int drm_gpusvm_migrate_to_devmem(struct drm_gpusvm *gpusvm,
mmap_assert_locked(gpusvm->mm);
- if (!range->flags.migrate_devmem)
+ if (!range->pages.flags.migrate_devmem)
return -EINVAL;
if (!ops->populate_devmem_pfn || !ops->copy_to_devmem ||
@@ -2243,10 +2249,10 @@ void drm_gpusvm_range_set_unmapped(struct drm_gpusvm_range *range,
{
lockdep_assert_held_write(&range->gpusvm->notifier_lock);
- range->flags.unmapped = true;
+ range->pages.flags.unmapped = true;
if (drm_gpusvm_range_start(range) < mmu_range->start ||
drm_gpusvm_range_end(range) > mmu_range->end)
- range->flags.partial_unmap = true;
+ range->pages.flags.partial_unmap = true;
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_set_unmapped);
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index b42cf5d1b20c..5cccfd9cc3e9 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -725,7 +725,7 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
return -EAGAIN;
}
if (xe_svm_range_has_dma_mapping(range)) {
- xe_res_first_dma(range->base.dma_addr, 0,
+ xe_res_first_dma(range->base.pages.dma_addr, 0,
range->base.itree.last + 1 - range->base.itree.start,
&curs);
xe_svm_range_debug(range, "BIND PREPARE - MIXED");
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index d25f02c8d7fc..74301064004c 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -17,7 +17,7 @@
static bool xe_svm_range_in_vram(struct xe_svm_range *range)
{
/* Not reliable without notifier lock */
- return range->base.flags.has_devmem_pages;
+ return range->base.pages.flags.has_devmem_pages;
}
static bool xe_svm_range_has_vram_binding(struct xe_svm_range *range)
@@ -59,7 +59,7 @@ static unsigned long xe_svm_range_size(struct xe_svm_range *range)
(r__)->base.gpusvm, \
xe_svm_range_in_vram((r__)) ? 1 : 0, \
xe_svm_range_has_vram_binding((r__)) ? 1 : 0, \
- (r__)->base.notifier_seq, \
+ (r__)->base.pages.notifier_seq, \
xe_svm_range_start((r__)), xe_svm_range_end((r__)), \
xe_svm_range_size((r__)))
@@ -135,7 +135,7 @@ xe_svm_range_notifier_event_begin(struct xe_vm *vm, struct drm_gpusvm_range *r,
range_debug(range, "NOTIFIER");
/* Skip if already unmapped or if no binding exist */
- if (range->base.flags.unmapped || !range->tile_present)
+ if (range->base.pages.flags.unmapped || !range->tile_present)
return 0;
range_debug(range, "NOTIFIER - EXECUTE");
@@ -782,7 +782,7 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
range_debug(range, "PAGE FAULT");
/* XXX: Add migration policy, for now migrate range once */
- if (!range->skip_migrate && range->base.flags.migrate_devmem &&
+ if (!range->skip_migrate && range->base.pages.flags.migrate_devmem &&
xe_svm_range_size(range) >= SZ_64K) {
range->skip_migrate = true;
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 2881af1e60b2..bf9792b66869 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -84,7 +84,7 @@ void xe_svm_range_debug(struct xe_svm_range *range, const char *operation);
static inline bool xe_svm_range_has_dma_mapping(struct xe_svm_range *range)
{
lockdep_assert_held(&range->base.gpusvm->notifier_lock);
- return range->base.flags.has_dma_mapping;
+ return range->base.pages.flags.has_dma_mapping;
}
#define xe_svm_assert_in_notifier(vm__) \
@@ -114,7 +114,9 @@ struct xe_vram_region;
struct xe_svm_range {
struct {
struct interval_tree_node itree;
- const struct drm_pagemap_device_addr *dma_addr;
+ struct {
+ struct drm_pagemap_device_addr *dma_addr;
+ } pages;
} base;
u32 tile_present;
u32 tile_invalidated;
diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
index df120b4d1f83..1b7ed4f4a8e2 100644
--- a/include/drm/drm_gpusvm.h
+++ b/include/drm/drm_gpusvm.h
@@ -185,6 +185,35 @@ struct drm_gpusvm_notifier {
} flags;
};
+/**
+ * struct drm_gpusvm_pages - Structure representing GPU SVM mapped pages
+ *
+ * @dma_addr: Device address array
+ * @dpagemap: The struct drm_pagemap of the device pages we're dma-mapping.
+ * Note this is assuming only one drm_pagemap per range is allowed.
+ * @notifier_seq: Notifier sequence number of the range's pages
+ * @flags: Flags for range
+ * @flags.migrate_devmem: Flag indicating whether the range can be migrated to device memory
+ * @flags.unmapped: Flag indicating if the range has been unmapped
+ * @flags.partial_unmap: Flag indicating if the range has been partially unmapped
+ * @flags.has_devmem_pages: Flag indicating if the range has devmem pages
+ * @flags.has_dma_mapping: Flag indicating if the range has a DMA mapping
+ */
+struct drm_gpusvm_pages {
+ struct drm_pagemap_device_addr *dma_addr;
+ struct drm_pagemap *dpagemap;
+ unsigned long notifier_seq;
+ struct {
+ /* All flags below must be set upon creation */
+ u16 migrate_devmem : 1;
+ /* All flags below must be set / cleared under notifier lock */
+ u16 unmapped : 1;
+ u16 partial_unmap : 1;
+ u16 has_devmem_pages : 1;
+ u16 has_dma_mapping : 1;
+ } flags;
+};
+
/**
* struct drm_gpusvm_range - Structure representing a GPU SVM range
*
@@ -193,16 +222,7 @@ struct drm_gpusvm_notifier {
* @refcount: Reference count for the range
* @itree: Interval tree node for the range (inserted in GPU SVM notifier)
* @entry: List entry to fast interval tree traversal
- * @notifier_seq: Notifier sequence number of the range's pages
- * @dma_addr: Device address array
- * @dpagemap: The struct drm_pagemap of the device pages we're dma-mapping.
- * Note this is assuming only one drm_pagemap per range is allowed.
- * @flags: Flags for range
- * @flags.migrate_devmem: Flag indicating whether the range can be migrated to device memory
- * @flags.unmapped: Flag indicating if the range has been unmapped
- * @flags.partial_unmap: Flag indicating if the range has been partially unmapped
- * @flags.has_devmem_pages: Flag indicating if the range has devmem pages
- * @flags.has_dma_mapping: Flag indicating if the range has a DMA mapping
+ * @pages: The pages for this range.
*
* This structure represents a GPU SVM range used for tracking memory ranges
* mapped in a DRM device.
@@ -213,18 +233,7 @@ struct drm_gpusvm_range {
struct kref refcount;
struct interval_tree_node itree;
struct list_head entry;
- unsigned long notifier_seq;
- struct drm_pagemap_device_addr *dma_addr;
- struct drm_pagemap *dpagemap;
- struct {
- /* All flags below must be set upon creation */
- u16 migrate_devmem : 1;
- /* All flags below must be set / cleared under notifier lock */
- u16 unmapped : 1;
- u16 partial_unmap : 1;
- u16 has_devmem_pages : 1;
- u16 has_dma_mapping : 1;
- } flags;
+ struct drm_gpusvm_pages pages;
};
/**
--
2.49.0
* [PATCH v4 4/8] drm/gpusvm: refactor core API to use pages struct
From: Matthew Auld @ 2025-05-12 15:06 UTC
To: intel-xe; +Cc: dri-devel, Matthew Brost, Thomas Hellström
Refactor the core API of get/unmap/free pages to all operate on
drm_gpusvm_pages. In the next patch we want to export a simplified core
API without needing a fully blown svm range etc.
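After this patch the three core operations are keyed purely on a
drm_gpusvm_pages (still static here, exported in the next patch), and
the range variants reduce to thin wrappers, roughly:

  err = drm_gpusvm_get_pages(gpusvm, svm_pages, mm, notifier,
                             start, end, ctx);
  ...
  drm_gpusvm_unmap_pages(gpusvm, svm_pages, npages, ctx);
  __drm_gpusvm_free_pages(gpusvm, svm_pages);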
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 160 ++++++++++++++++++++++++-----------
1 file changed, 109 insertions(+), 51 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 8b29c2d7d13f..f998f3fa69fe 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1128,19 +1128,18 @@ drm_gpusvm_range_find_or_insert(struct drm_gpusvm *gpusvm,
EXPORT_SYMBOL_GPL(drm_gpusvm_range_find_or_insert);
/**
- * __drm_gpusvm_range_unmap_pages() - Unmap pages associated with a GPU SVM range (internal)
+ * __drm_gpusvm_unmap_pages() - Unmap pages associated with GPU SVM pages (internal)
* @gpusvm: Pointer to the GPU SVM structure
- * @range: Pointer to the GPU SVM range structure
+ * @svm_pages: Pointer to the GPU SVM pages structure
* @npages: Number of pages to unmap
*
- * This function unmap pages associated with a GPU SVM range. Assumes and
+ * This function unmaps pages associated with a GPU SVM pages struct. Assumes and
* asserts correct locking is in place when called.
*/
-static void __drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
- struct drm_gpusvm_range *range,
- unsigned long npages)
+static void __drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages,
+ unsigned long npages)
{
- struct drm_gpusvm_pages *svm_pages = &range->pages;
struct drm_pagemap *dpagemap = svm_pages->dpagemap;
struct device *dev = gpusvm->drm->dev;
unsigned long i, j;
@@ -1168,17 +1167,15 @@ static void __drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
}
/**
- * drm_gpusvm_range_free_pages() - Free pages associated with a GPU SVM range
+ * __drm_gpusvm_free_pages() - Free dma array associated with GPU SVM pages
* @gpusvm: Pointer to the GPU SVM structure
- * @range: Pointer to the GPU SVM range structure
+ * @svm_pages: Pointer to the GPU SVM pages structure
*
* This function frees the dma address array associated with a GPU SVM range.
*/
-static void drm_gpusvm_range_free_pages(struct drm_gpusvm *gpusvm,
- struct drm_gpusvm_range *range)
+static void __drm_gpusvm_free_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages)
{
- struct drm_gpusvm_pages *svm_pages = &range->pages;
-
lockdep_assert_held(&gpusvm->notifier_lock);
if (svm_pages->dma_addr) {
@@ -1211,8 +1208,8 @@ void drm_gpusvm_range_remove(struct drm_gpusvm *gpusvm,
return;
drm_gpusvm_notifier_lock(gpusvm);
- __drm_gpusvm_range_unmap_pages(gpusvm, range, npages);
- drm_gpusvm_range_free_pages(gpusvm, range);
+ __drm_gpusvm_unmap_pages(gpusvm, &range->pages, npages);
+ __drm_gpusvm_free_pages(gpusvm, &range->pages);
__drm_gpusvm_range_remove(notifier, range);
drm_gpusvm_notifier_unlock(gpusvm);
@@ -1277,6 +1274,28 @@ void drm_gpusvm_range_put(struct drm_gpusvm_range *range)
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_put);
+/**
+ * drm_gpusvm_pages_valid() - GPU SVM range pages valid
+ * @gpusvm: Pointer to the GPU SVM structure
+ * @svm_pages: Pointer to the GPU SVM pages structure
+ *
+ * This function determines if a GPU SVM range pages are valid. Expected be
+ * called holding gpusvm->notifier_lock and as the last step before committing a
+ * GPU binding. This is akin to a notifier seqno check in the HMM documentation
+ * but due to wider notifiers (i.e., notifiers which span multiple ranges) this
+ * function is required for finer grained checking (i.e., per range) if pages
+ * are valid.
+ *
+ * Return: True if GPU SVM range has valid pages, False otherwise
+ */
+static bool drm_gpusvm_pages_valid(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages)
+{
+ lockdep_assert_held(&gpusvm->notifier_lock);
+
+ return svm_pages->flags.has_devmem_pages || svm_pages->flags.has_dma_mapping;
+}
+
/**
* drm_gpusvm_range_pages_valid() - GPU SVM range pages valid
* @gpusvm: Pointer to the GPU SVM structure
@@ -1294,11 +1313,7 @@ EXPORT_SYMBOL_GPL(drm_gpusvm_range_put);
bool drm_gpusvm_range_pages_valid(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_range *range)
{
- struct drm_gpusvm_pages *svm_pages = &range->pages;
-
- lockdep_assert_held(&gpusvm->notifier_lock);
-
- return svm_pages->flags.has_devmem_pages || svm_pages->flags.has_dma_mapping;
+ return drm_gpusvm_pages_valid(gpusvm, &range->pages);
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_pages_valid);
@@ -1312,57 +1327,59 @@ EXPORT_SYMBOL_GPL(drm_gpusvm_range_pages_valid);
*
* Return: True if GPU SVM range has valid pages, False otherwise
*/
-static bool
-drm_gpusvm_range_pages_valid_unlocked(struct drm_gpusvm *gpusvm,
- struct drm_gpusvm_range *range)
+static bool drm_gpusvm_pages_valid_unlocked(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages)
{
- struct drm_gpusvm_pages *svm_pages = &range->pages;
bool pages_valid;
if (!svm_pages->dma_addr)
return false;
drm_gpusvm_notifier_lock(gpusvm);
- pages_valid = drm_gpusvm_range_pages_valid(gpusvm, range);
+ pages_valid = drm_gpusvm_pages_valid(gpusvm, svm_pages);
if (!pages_valid)
- drm_gpusvm_range_free_pages(gpusvm, range);
+ __drm_gpusvm_free_pages(gpusvm, svm_pages);
drm_gpusvm_notifier_unlock(gpusvm);
return pages_valid;
}
/**
- * drm_gpusvm_range_get_pages() - Get pages for a GPU SVM range
+ * drm_gpusvm_get_pages() - Get pages and populate GPU SVM pages struct
* @gpusvm: Pointer to the GPU SVM structure
- * @range: Pointer to the GPU SVM range structure
+ * @svm_pages: The SVM pages to populate. This will contain the dma-addresses.
+ * @mm: The mm corresponding to the CPU range
+ * @notifier: The corresponding notifier for the given CPU range
+ * @pages_start: Start CPU address for the pages
+ * @pages_end: End CPU address for the pages (exclusive)
* @ctx: GPU SVM context
*
- * This function gets pages for a GPU SVM range and ensures they are mapped for
- * DMA access.
+ * This function gets pages for the given CPU range and ensures they are
+ * mapped for DMA access.
*
* Return: 0 on success, negative error code on failure.
*/
-int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
- struct drm_gpusvm_range *range,
- const struct drm_gpusvm_ctx *ctx)
+static int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages,
+ struct mm_struct *mm,
+ struct mmu_interval_notifier *notifier,
+ unsigned long pages_start,
+ unsigned long pages_end,
+ const struct drm_gpusvm_ctx *ctx)
{
- struct drm_gpusvm_pages *svm_pages = &range->pages;
- struct mmu_interval_notifier *notifier = &range->notifier->notifier;
struct hmm_range hmm_range = {
.default_flags = HMM_PFN_REQ_FAULT | (ctx->read_only ? 0 :
HMM_PFN_REQ_WRITE),
.notifier = notifier,
- .start = drm_gpusvm_range_start(range),
- .end = drm_gpusvm_range_end(range),
+ .start = pages_start,
+ .end = pages_end,
.dev_private_owner = gpusvm->device_private_page_owner,
};
- struct mm_struct *mm = gpusvm->mm;
struct drm_gpusvm_zdd *zdd;
unsigned long timeout =
jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
unsigned long i, j;
- unsigned long npages = npages_in_range(drm_gpusvm_range_start(range),
- drm_gpusvm_range_end(range));
+ unsigned long npages = npages_in_range(pages_start, pages_end);
unsigned long num_dma_mapped;
unsigned int order = 0;
unsigned long *pfns;
@@ -1374,7 +1391,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
retry:
hmm_range.notifier_seq = mmu_interval_read_begin(notifier);
- if (drm_gpusvm_range_pages_valid_unlocked(gpusvm, range))
+ if (drm_gpusvm_pages_valid_unlocked(gpusvm, svm_pages))
goto set_seqno;
pfns = kvmalloc_array(npages, sizeof(*pfns), GFP_KERNEL);
@@ -1518,7 +1535,7 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
return 0;
err_unmap:
- __drm_gpusvm_range_unmap_pages(gpusvm, range, num_dma_mapped);
+ __drm_gpusvm_unmap_pages(gpusvm, svm_pages, num_dma_mapped);
drm_gpusvm_notifier_unlock(gpusvm);
err_free:
kvfree(pfns);
@@ -1526,8 +1543,57 @@ int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
goto retry;
return err;
}
+
+/**
+ * drm_gpusvm_range_get_pages() - Get pages for a GPU SVM range
+ * @gpusvm: Pointer to the GPU SVM structure
+ * @range: Pointer to the GPU SVM range structure
+ * @ctx: GPU SVM context
+ *
+ * This function gets pages for a GPU SVM range and ensures they are mapped for
+ * DMA access.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int drm_gpusvm_range_get_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_range *range,
+ const struct drm_gpusvm_ctx *ctx)
+{
+ return drm_gpusvm_get_pages(gpusvm, &range->pages, gpusvm->mm,
+ &range->notifier->notifier,
+ drm_gpusvm_range_start(range),
+ drm_gpusvm_range_end(range), ctx);
+}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_get_pages);
+/**
+ * drm_gpusvm_unmap_pages() - Unmap GPU svm pages
+ * @gpusvm: Pointer to the GPU SVM structure
+ * @svm_pages: Pointer to the GPU SVM pages structure
+ * @npages: Number of pages to unmap
+ * @ctx: GPU SVM context
+ *
+ * This function unmaps pages associated with a GPU SVM pages struct. If
+ * @in_notifier is set, it is assumed that gpusvm->notifier_lock is held in
+ * write mode; if it is clear, it acquires gpusvm->notifier_lock in read mode.
+ * Must be called in the invalidate() callback of the corresponding notifier
+ * for the IOMMU security model.
+ */
+static void drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages,
+ unsigned long npages,
+ const struct drm_gpusvm_ctx *ctx)
+{
+ if (ctx->in_notifier)
+ lockdep_assert_held_write(&gpusvm->notifier_lock);
+ else
+ drm_gpusvm_notifier_lock(gpusvm);
+
+ __drm_gpusvm_unmap_pages(gpusvm, svm_pages, npages);
+
+ if (!ctx->in_notifier)
+ drm_gpusvm_notifier_unlock(gpusvm);
+}
+
/**
* drm_gpusvm_range_unmap_pages() - Unmap pages associated with a GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
@@ -1547,15 +1613,7 @@ void drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
unsigned long npages = npages_in_range(drm_gpusvm_range_start(range),
drm_gpusvm_range_end(range));
- if (ctx->in_notifier)
- lockdep_assert_held_write(&gpusvm->notifier_lock);
- else
- drm_gpusvm_notifier_lock(gpusvm);
-
- __drm_gpusvm_range_unmap_pages(gpusvm, range, npages);
-
- if (!ctx->in_notifier)
- drm_gpusvm_notifier_unlock(gpusvm);
+ return drm_gpusvm_unmap_pages(gpusvm, &range->pages, npages, ctx);
}
EXPORT_SYMBOL_GPL(drm_gpusvm_range_unmap_pages);
--
2.49.0
* [PATCH v4 5/8] drm/gpusvm: export drm_gpusvm_pages API
From: Matthew Auld @ 2025-05-12 15:06 UTC
To: intel-xe; +Cc: dri-devel, Thomas Hellström, Matthew Brost
Export the get/unmap/free pages API. We also need to tweak the SVM init
to allow skipping the parts of the setup that are not needed in
pages-only mode.
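A hedged sketch of the intended pages-only usage (caller-side names
are hypothetical, and the drm_gpusvm_init() arguments assume the full
signature from drm_gpusvm.h, with everything except @gpusvm, @name and
@drm left NULL/zero as the new checks below require):

  err = drm_gpusvm_init(&uptr->gpusvm, "pages-only", drm,
                        NULL, NULL, 0, 0, 0, NULL, NULL, 0);
  if (err)
          return err;

  err = drm_gpusvm_get_pages(&uptr->gpusvm, &uptr->pages, mm,
                             &uptr->notifier, start, end, &ctx);
  ...
  drm_gpusvm_free_pages(&uptr->gpusvm, &uptr->pages, npages);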
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 66 ++++++++++++++++++++++++++++--------
include/drm/drm_gpusvm.h | 16 +++++++++
2 files changed, 67 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index f998f3fa69fe..eac7f9b165f9 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -539,6 +539,12 @@ static const struct mmu_interval_notifier_ops drm_gpusvm_notifier_ops = {
*
* This function initializes the GPU SVM.
*
+ * Note: If only using the simple drm_gpusvm_pages API (get/unmap/free),
+ * then only @gpusvm, @name, and @drm are expected. However, the same base
+ * @gpusvm can also be used with both modes together, in which case the full
+ * setup is needed; the core drm_gpusvm_pages API will simply never use
+ * the other fields.
+ *
* Return: 0 on success, a negative error code on failure.
*/
int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
@@ -549,8 +555,16 @@ int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
const struct drm_gpusvm_ops *ops,
const unsigned long *chunk_sizes, int num_chunks)
{
- if (!ops->invalidate || !num_chunks)
- return -EINVAL;
+ if (mm) {
+ if (!ops->invalidate || !num_chunks)
+ return -EINVAL;
+ mmgrab(mm);
+ } else {
+ /* No full SVM mode, only core drm_gpusvm_pages API. */
+ if (ops || num_chunks || mm_range || notifier_size ||
+ device_private_page_owner)
+ return -EINVAL;
+ }
gpusvm->name = name;
gpusvm->drm = drm;
@@ -563,7 +577,6 @@ int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
gpusvm->chunk_sizes = chunk_sizes;
gpusvm->num_chunks = num_chunks;
- mmgrab(mm);
gpusvm->root = RB_ROOT_CACHED;
INIT_LIST_HEAD(&gpusvm->notifier_list);
@@ -671,7 +684,8 @@ void drm_gpusvm_fini(struct drm_gpusvm *gpusvm)
drm_gpusvm_range_remove(gpusvm, range);
}
- mmdrop(gpusvm->mm);
+ if (gpusvm->mm)
+ mmdrop(gpusvm->mm);
WARN_ON(!RB_EMPTY_ROOT(&gpusvm->root.rb_root));
}
EXPORT_SYMBOL_GPL(drm_gpusvm_fini);
@@ -1184,6 +1198,27 @@ static void __drm_gpusvm_free_pages(struct drm_gpusvm *gpusvm,
}
}
+/**
+ * drm_gpusvm_free_pages() - Free dma-mapping associated with GPU SVM pages
+ * struct
+ * @gpusvm: Pointer to the GPU SVM structure
+ * @svm_pages: Pointer to the GPU SVM pages structure
+ * @npages: Number of mapped pages
+ *
+ * This function unmaps and frees the dma address array associated with a GPU
+ * SVM pages struct.
+ */
+void drm_gpusvm_free_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages,
+ unsigned long npages)
+{
+ drm_gpusvm_notifier_lock(gpusvm);
+ __drm_gpusvm_unmap_pages(gpusvm, svm_pages, npages);
+ __drm_gpusvm_free_pages(gpusvm, svm_pages);
+ drm_gpusvm_notifier_unlock(gpusvm);
+}
+EXPORT_SYMBOL_GPL(drm_gpusvm_free_pages);
+
/**
* drm_gpusvm_range_remove() - Remove GPU SVM range
* @gpusvm: Pointer to the GPU SVM structure
@@ -1359,13 +1394,12 @@ static bool drm_gpusvm_pages_valid_unlocked(struct drm_gpusvm *gpusvm,
*
* Return: 0 on success, negative error code on failure.
*/
-static int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
- struct drm_gpusvm_pages *svm_pages,
- struct mm_struct *mm,
- struct mmu_interval_notifier *notifier,
- unsigned long pages_start,
- unsigned long pages_end,
- const struct drm_gpusvm_ctx *ctx)
+int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages,
+ struct mm_struct *mm,
+ struct mmu_interval_notifier *notifier,
+ unsigned long pages_start, unsigned long pages_end,
+ const struct drm_gpusvm_ctx *ctx)
{
struct hmm_range hmm_range = {
.default_flags = HMM_PFN_REQ_FAULT | (ctx->read_only ? 0 :
@@ -1543,6 +1577,7 @@ static int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
goto retry;
return err;
}
+EXPORT_SYMBOL_GPL(drm_gpusvm_get_pages);
/**
* drm_gpusvm_range_get_pages() - Get pages for a GPU SVM range
@@ -1578,10 +1613,10 @@ EXPORT_SYMBOL_GPL(drm_gpusvm_range_get_pages);
* Must be called in the invalidate() callback of the corresponding notifier for
* IOMMU security model.
*/
-static void drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
- struct drm_gpusvm_pages *svm_pages,
- unsigned long npages,
- const struct drm_gpusvm_ctx *ctx)
+void drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages,
+ unsigned long npages,
+ const struct drm_gpusvm_ctx *ctx)
{
if (ctx->in_notifier)
lockdep_assert_held_write(&gpusvm->notifier_lock);
@@ -1593,6 +1628,7 @@ static void drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
if (!ctx->in_notifier)
drm_gpusvm_notifier_unlock(gpusvm);
}
+EXPORT_SYMBOL_GPL(drm_gpusvm_unmap_pages);
/**
* drm_gpusvm_range_unmap_pages() - Unmap pages associated with a GPU SVM range
diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
index 1b7ed4f4a8e2..611aaba1ac80 100644
--- a/include/drm/drm_gpusvm.h
+++ b/include/drm/drm_gpusvm.h
@@ -370,6 +370,22 @@ void drm_gpusvm_devmem_init(struct drm_gpusvm_devmem *devmem_allocation,
const struct drm_gpusvm_devmem_ops *ops,
struct drm_pagemap *dpagemap, size_t size);
+int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages,
+ struct mm_struct *mm,
+ struct mmu_interval_notifier *notifier,
+ unsigned long pages_start, unsigned long pages_end,
+ const struct drm_gpusvm_ctx *ctx);
+
+void drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages,
+ unsigned long npages,
+ const struct drm_gpusvm_ctx *ctx);
+
+void drm_gpusvm_free_pages(struct drm_gpusvm *gpusvm,
+ struct drm_gpusvm_pages *svm_pages,
+ unsigned long npages);
+
#ifdef CONFIG_LOCKDEP
/**
* drm_gpusvm_driver_set_lock() - Set the lock protecting accesses to GPU SVM
--
2.49.0
* [PATCH v4 6/8] drm/xe/vm: split userptr bits into separate file
From: Matthew Auld @ 2025-05-12 15:06 UTC
To: intel-xe; +Cc: dri-devel, Thomas Hellström, Matthew Brost
This will simplify compiling out the bits that depend on DRM_GPUSVM in a
later patch. Without this we end up littering the code with ifdef
checks, plus it becomes hard to be sure that something won't blow up at
runtime due to something not being initialised, even though it passed
the build. Should be no functional change here.
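As a hypothetical sketch of what the split enables: instead of
scattering ifdef checks through xe_vm.c, a later patch can provide one
set of static-inline stubs behind the xe_userptr.h interface when
DRM_GPUSVM is disabled, e.g.:

  #if !IS_ENABLED(CONFIG_DRM_GPUSVM)
  static inline int xe_vm_userptr_pin(struct xe_vm *vm)
  {
          return 0;       /* no userptrs to repin in this config */
  }
  #endif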
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_pt.c | 1 +
drivers/gpu/drm/xe/xe_userptr.c | 303 +++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_userptr.h | 97 ++++++++++
drivers/gpu/drm/xe/xe_vm.c | 280 +---------------------------
drivers/gpu/drm/xe/xe_vm.h | 18 --
drivers/gpu/drm/xe/xe_vm_types.h | 60 +-----
7 files changed, 410 insertions(+), 350 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_userptr.c
create mode 100644 drivers/gpu/drm/xe/xe_userptr.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index e4bf484d4121..10b42118e761 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -116,6 +116,7 @@ xe-y += xe_bb.o \
xe_tuning.o \
xe_uc.o \
xe_uc_fw.o \
+ xe_userptr.o \
xe_vm.o \
xe_vram.o \
xe_vram_freq.o \
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 5cccfd9cc3e9..720c25bf48f2 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -23,6 +23,7 @@
#include "xe_svm.h"
#include "xe_trace.h"
#include "xe_ttm_stolen_mgr.h"
+#include "xe_userptr.h"
#include "xe_vm.h"
struct xe_pt_dir {
diff --git a/drivers/gpu/drm/xe/xe_userptr.c b/drivers/gpu/drm/xe/xe_userptr.c
new file mode 100644
index 000000000000..f573842a3d4b
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_userptr.c
@@ -0,0 +1,303 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include "xe_userptr.h"
+
+#include <linux/mm.h>
+
+#include "xe_hmm.h"
+#include "xe_trace_bo.h"
+
+/**
+ * xe_vma_userptr_check_repin() - Advisory check for repin needed
+ * @uvma: The userptr vma
+ *
+ * Check if the userptr vma has been invalidated since last successful
+ * repin. The check is advisory only and the function can be called
+ * without the vm->svm.gpusvm.notifier_lock held. There is no guarantee that the
+ * vma userptr will remain valid after a lockless check, so typically
+ * the call needs to be followed by a proper check under the notifier_lock.
+ *
+ * Return: 0 if userptr vma is valid, -EAGAIN otherwise; repin recommended.
+ */
+int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma)
+{
+ return mmu_interval_check_retry(&uvma->userptr.notifier,
+ uvma->userptr.notifier_seq) ?
+ -EAGAIN : 0;
+}
+
+int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma)
+{
+ struct xe_vma *vma = &uvma->vma;
+ struct xe_vm *vm = xe_vma_vm(vma);
+ struct xe_device *xe = vm->xe;
+
+ lockdep_assert_held(&vm->lock);
+ xe_assert(xe, xe_vma_is_userptr(vma));
+
+ return xe_hmm_userptr_populate_range(uvma, false);
+}
+
+static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma)
+{
+ struct xe_userptr *userptr = &uvma->userptr;
+ struct xe_vma *vma = &uvma->vma;
+ struct dma_resv_iter cursor;
+ struct dma_fence *fence;
+ long err;
+
+ /*
+ * Tell exec and rebind worker they need to repin and rebind this
+ * userptr.
+ */
+ if (!xe_vm_in_fault_mode(vm) &&
+ !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
+ spin_lock(&vm->userptr.invalidated_lock);
+ list_move_tail(&userptr->invalidate_link,
+ &vm->userptr.invalidated);
+ spin_unlock(&vm->userptr.invalidated_lock);
+ }
+
+ /*
+ * Preempt fences turn into schedule disables, pipeline these.
+ * Note that even in fault mode, we need to wait for binds and
+ * unbinds to complete, and those are attached as BOOKMARK fences
+ * to the vm.
+ */
+ dma_resv_iter_begin(&cursor, xe_vm_resv(vm),
+ DMA_RESV_USAGE_BOOKKEEP);
+ dma_resv_for_each_fence_unlocked(&cursor, fence)
+ dma_fence_enable_sw_signaling(fence);
+ dma_resv_iter_end(&cursor);
+
+ err = dma_resv_wait_timeout(xe_vm_resv(vm),
+ DMA_RESV_USAGE_BOOKKEEP,
+ false, MAX_SCHEDULE_TIMEOUT);
+ XE_WARN_ON(err <= 0);
+
+ if (xe_vm_in_fault_mode(vm) && userptr->initial_bind) {
+ err = xe_vm_invalidate_vma(vma);
+ XE_WARN_ON(err);
+ }
+
+ xe_hmm_userptr_unmap(uvma);
+}
+
+#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
+/**
+ * xe_vma_userptr_force_invalidate() - force invalidate a userptr
+ * @uvma: The userptr vma to invalidate
+ *
+ * Perform a forced userptr invalidation for testing purposes.
+ */
+void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
+{
+ struct xe_vm *vm = xe_vma_vm(&uvma->vma);
+
+ /* Protect against concurrent userptr pinning */
+ lockdep_assert_held(&vm->lock);
+ /* Protect against concurrent notifiers */
+ lockdep_assert_held(&vm->svm.gpusvm.notifier_lock);
+ /*
+ * Protect against concurrent instances of this function and
+ * the critical exec sections
+ */
+ xe_vm_assert_held(vm);
+
+ if (!mmu_interval_read_retry(&uvma->userptr.notifier,
+ uvma->userptr.notifier_seq))
+ uvma->userptr.notifier_seq -= 2;
+ __vma_userptr_invalidate(vm, uvma);
+}
+#endif
+
+static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
+ const struct mmu_notifier_range *range,
+ unsigned long cur_seq)
+{
+ struct xe_userptr_vma *uvma = container_of(mni, typeof(*uvma), userptr.notifier);
+ struct xe_vma *vma = &uvma->vma;
+ struct xe_vm *vm = xe_vma_vm(vma);
+
+ xe_assert(vm->xe, xe_vma_is_userptr(vma));
+ trace_xe_vma_userptr_invalidate(vma);
+
+ if (!mmu_notifier_range_blockable(range))
+ return false;
+
+ vm_dbg(&xe_vma_vm(vma)->xe->drm,
+ "NOTIFIER: addr=0x%016llx, range=0x%016llx",
+ xe_vma_start(vma), xe_vma_size(vma));
+
+ down_write(&vm->svm.gpusvm.notifier_lock);
+ mmu_interval_set_seq(mni, cur_seq);
+
+ __vma_userptr_invalidate(vm, uvma);
+ up_write(&vm->svm.gpusvm.notifier_lock);
+ trace_xe_vma_userptr_invalidate_complete(vma);
+
+ return true;
+}
+
+static const struct mmu_interval_notifier_ops vma_userptr_notifier_ops = {
+ .invalidate = vma_userptr_invalidate,
+};
+
+/**
+ * __xe_vm_userptr_needs_repin() - Check whether the VM does have userptrs
+ * that need repinning.
+ * @vm: The VM.
+ *
+ * This function checks for whether the VM has userptrs that need repinning,
+ * and provides a release-type barrier on the svm.gpusvm.notifier_lock after
+ * checking.
+ *
+ * Return: 0 if there are no userptrs needing repinning, -EAGAIN if there are.
+ */
+int __xe_vm_userptr_needs_repin(struct xe_vm *vm)
+{
+ lockdep_assert_held_read(&vm->svm.gpusvm.notifier_lock);
+
+ return (list_empty(&vm->userptr.repin_list) &&
+ list_empty(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
+}
+
+int xe_vm_userptr_pin(struct xe_vm *vm)
+{
+ struct xe_userptr_vma *uvma, *next;
+ int err = 0;
+
+ xe_assert(vm->xe, !xe_vm_in_fault_mode(vm));
+ lockdep_assert_held_write(&vm->lock);
+
+ /* Collect invalidated userptrs */
+ spin_lock(&vm->userptr.invalidated_lock);
+ xe_assert(vm->xe, list_empty(&vm->userptr.repin_list));
+ list_for_each_entry_safe(uvma, next, &vm->userptr.invalidated,
+ userptr.invalidate_link) {
+ list_del_init(&uvma->userptr.invalidate_link);
+ list_add_tail(&uvma->userptr.repin_link,
+ &vm->userptr.repin_list);
+ }
+ spin_unlock(&vm->userptr.invalidated_lock);
+
+ /* Pin and move to bind list */
+ list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
+ userptr.repin_link) {
+ err = xe_vma_userptr_pin_pages(uvma);
+ if (err == -EFAULT) {
+ list_del_init(&uvma->userptr.repin_link);
+ /*
+ * We might have already done the pin once, but
+ * then had to retry before the re-bind happened, due
+ * to some other condition in the caller, but in the
+ * meantime the userptr got dinged by the notifier such
+ * that we need to revalidate here, but this time we hit
+ * the EFAULT. In such a case make sure we remove
+ * ourselves from the rebind list to avoid going down in
+ * flames.
+ */
+ if (!list_empty(&uvma->vma.combined_links.rebind))
+ list_del_init(&uvma->vma.combined_links.rebind);
+
+ /* Wait for pending binds */
+ xe_vm_lock(vm, false);
+ dma_resv_wait_timeout(xe_vm_resv(vm),
+ DMA_RESV_USAGE_BOOKKEEP,
+ false, MAX_SCHEDULE_TIMEOUT);
+
+ err = xe_vm_invalidate_vma(&uvma->vma);
+ xe_vm_unlock(vm);
+ if (err)
+ break;
+ } else {
+ if (err)
+ break;
+
+ list_del_init(&uvma->userptr.repin_link);
+ list_move_tail(&uvma->vma.combined_links.rebind,
+ &vm->rebind_list);
+ }
+ }
+
+ if (err) {
+ down_write(&vm->svm.gpusvm.notifier_lock);
+ spin_lock(&vm->userptr.invalidated_lock);
+ list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
+ userptr.repin_link) {
+ list_del_init(&uvma->userptr.repin_link);
+ list_move_tail(&uvma->userptr.invalidate_link,
+ &vm->userptr.invalidated);
+ }
+ spin_unlock(&vm->userptr.invalidated_lock);
+ up_write(&vm->svm.gpusvm.notifier_lock);
+ }
+ return err;
+}
+
+/**
+ * xe_vm_userptr_check_repin() - Check whether the VM might have userptrs
+ * that need repinning.
+ * @vm: The VM.
+ *
+ * This function does an advisory check for whether the VM has userptrs that
+ * need repinning.
+ *
+ * Return: 0 if there are no indications of userptrs needing repinning,
+ * -EAGAIN if there are.
+ */
+int xe_vm_userptr_check_repin(struct xe_vm *vm)
+{
+ return (list_empty_careful(&vm->userptr.repin_list) &&
+ list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
+}
+
+int xe_userptr_setup(struct xe_userptr_vma *uvma, unsigned long start,
+ unsigned long range)
+{
+ struct xe_userptr *userptr = &uvma->userptr;
+ int err;
+
+ INIT_LIST_HEAD(&userptr->invalidate_link);
+ INIT_LIST_HEAD(&userptr->repin_link);
+ mutex_init(&userptr->unmap_mutex);
+
+ err = mmu_interval_notifier_insert(&userptr->notifier, current->mm,
+ start, range,
+ &vma_userptr_notifier_ops);
+ if (err)
+ return err;
+
+ userptr->notifier_seq = LONG_MAX;
+
+ return 0;
+}
+
+void xe_userptr_remove(struct xe_userptr_vma *uvma)
+{
+ struct xe_userptr *userptr = &uvma->userptr;
+
+ if (userptr->sg)
+ xe_hmm_userptr_free_sg(uvma);
+
+ /*
+ * Since userptr pages are not pinned, we can't remove
+ * the notifier until we're sure the GPU is not accessing
+ * them anymore
+ */
+ mmu_interval_notifier_remove(&userptr->notifier);
+ mutex_destroy(&userptr->unmap_mutex);
+}
+
+void xe_userptr_destroy(struct xe_userptr_vma *uvma)
+{
+ struct xe_vm *vm = xe_vma_vm(&uvma->vma);
+
+ spin_lock(&vm->userptr.invalidated_lock);
+ xe_assert(vm->xe, list_empty(&uvma->userptr.repin_link));
+ list_del(&uvma->userptr.invalidate_link);
+ spin_unlock(&vm->userptr.invalidated_lock);
+}
diff --git a/drivers/gpu/drm/xe/xe_userptr.h b/drivers/gpu/drm/xe/xe_userptr.h
new file mode 100644
index 000000000000..83d17b58ed16
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_userptr.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_USERPTR_H_
+#define _XE_USERPTR_H_
+
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/notifier.h>
+#include <linux/scatterlist.h>
+#include <linux/spinlock.h>
+
+struct xe_vm;
+struct xe_vma;
+struct xe_userptr_vma;
+
+/** struct xe_userptr_vm - User pointer VM level state */
+struct xe_userptr_vm {
+ /**
+ * @userptr.repin_list: list of VMAs which are user pointers,
+ * and need repinning. Protected by @lock.
+ */
+ struct list_head repin_list;
+ /**
+ * @notifier_lock: protects notifier in write mode and
+ * submission in read mode.
+ */
+ struct rw_semaphore notifier_lock;
+ /**
+ * @userptr.invalidated_lock: Protects the
+ * @userptr.invalidated list.
+ */
+ spinlock_t invalidated_lock;
+ /**
+ * @userptr.invalidated: List of invalidated userptrs, not yet
+ * picked up for revalidation. Protected from access with the
+ * @invalidated_lock. Removing items from the list
+ * additionally requires @lock in write mode, and adding
+ * items to the list requires either the
+ * @userptr.notifier_lock in write mode, OR @lock in
+ * write mode.
+ */
+ struct list_head invalidated;
+};
+
+/** struct xe_userptr - User pointer */
+struct xe_userptr {
+ /** @invalidate_link: Link for the vm::userptr.invalidated list */
+ struct list_head invalidate_link;
+ /** @repin_link: link into VM repin list if userptr. */
+ struct list_head repin_link;
+ /**
+ * @notifier: MMU notifier for user pointer (invalidation call back)
+ */
+ struct mmu_interval_notifier notifier;
+ /** @sgt: storage for a scatter gather table */
+ struct sg_table sgt;
+ /** @sg: allocated scatter gather table */
+ struct sg_table *sg;
+ /** @notifier_seq: notifier sequence number */
+ unsigned long notifier_seq;
+ /** @unmap_mutex: Mutex protecting dma-unmapping */
+ struct mutex unmap_mutex;
+ /**
+ * @initial_bind: user pointer has been bound at least once.
+ * write: vm->userptr.notifier_lock in read mode and vm->resv held.
+ * read: vm->userptr.notifier_lock in write mode or vm->resv held.
+ */
+ bool initial_bind;
+ /** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */
+ bool mapped;
+#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
+ u32 divisor;
+#endif
+};
+
+void xe_userptr_remove(struct xe_userptr_vma *uvma);
+int xe_userptr_setup(struct xe_userptr_vma *uvma, unsigned long start,
+ unsigned long range);
+void xe_userptr_destroy(struct xe_userptr_vma *uvma);
+
+int xe_vm_userptr_pin(struct xe_vm *vm);
+int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
+int xe_vm_userptr_check_repin(struct xe_vm *vm);
+int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma);
+int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma);
+
+#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
+void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma);
+#else
+static inline void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
+{
+}
+#endif
+#endif
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 79323c78130f..e5bf4ddc9d86 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -39,6 +39,7 @@
#include "xe_svm.h"
#include "xe_sync.h"
#include "xe_trace_bo.h"
+#include "xe_userptr.h"
#include "xe_wa.h"
#include "xe_hmm.h"
@@ -47,37 +48,6 @@ static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
return vm->gpuvm.r_obj;
}
-/**
- * xe_vma_userptr_check_repin() - Advisory check for repin needed
- * @uvma: The userptr vma
- *
- * Check if the userptr vma has been invalidated since last successful
- * repin. The check is advisory only and can the function can be called
- * without the vm->userptr.notifier_lock held. There is no guarantee that the
- * vma userptr will remain valid after a lockless check, so typically
- * the call needs to be followed by a proper check under the notifier_lock.
- *
- * Return: 0 if userptr vma is valid, -EAGAIN otherwise; repin recommended.
- */
-int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma)
-{
- return mmu_interval_check_retry(&uvma->userptr.notifier,
- uvma->userptr.notifier_seq) ?
- -EAGAIN : 0;
-}
-
-int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma)
-{
- struct xe_vma *vma = &uvma->vma;
- struct xe_vm *vm = xe_vma_vm(vma);
- struct xe_device *xe = vm->xe;
-
- lockdep_assert_held(&vm->lock);
- xe_assert(xe, xe_vma_is_userptr(vma));
-
- return xe_hmm_userptr_populate_range(uvma, false);
-}
-
static bool preempt_fences_waiting(struct xe_vm *vm)
{
struct xe_exec_queue *q;
@@ -299,25 +269,6 @@ void xe_vm_remove_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
up_write(&vm->lock);
}
-/**
- * __xe_vm_userptr_needs_repin() - Check whether the VM does have userptrs
- * that need repinning.
- * @vm: The VM.
- *
- * This function checks for whether the VM has userptrs that need repinning,
- * and provides a release-type barrier on the userptr.notifier_lock after
- * checking.
- *
- * Return: 0 if there are no userptrs needing repinning, -EAGAIN if there are.
- */
-int __xe_vm_userptr_needs_repin(struct xe_vm *vm)
-{
- lockdep_assert_held_read(&vm->userptr.notifier_lock);
-
- return (list_empty(&vm->userptr.repin_list) &&
- list_empty(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
-}
-
#define XE_VM_REBIND_RETRY_TIMEOUT_MS 1000
/**
@@ -583,201 +534,6 @@ static void preempt_rebind_work_func(struct work_struct *w)
trace_xe_vm_rebind_worker_exit(vm);
}
-static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma)
-{
- struct xe_userptr *userptr = &uvma->userptr;
- struct xe_vma *vma = &uvma->vma;
- struct dma_resv_iter cursor;
- struct dma_fence *fence;
- long err;
-
- /*
- * Tell exec and rebind worker they need to repin and rebind this
- * userptr.
- */
- if (!xe_vm_in_fault_mode(vm) &&
- !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
- spin_lock(&vm->userptr.invalidated_lock);
- list_move_tail(&userptr->invalidate_link,
- &vm->userptr.invalidated);
- spin_unlock(&vm->userptr.invalidated_lock);
- }
-
- /*
- * Preempt fences turn into schedule disables, pipeline these.
- * Note that even in fault mode, we need to wait for binds and
- * unbinds to complete, and those are attached as BOOKMARK fences
- * to the vm.
- */
- dma_resv_iter_begin(&cursor, xe_vm_resv(vm),
- DMA_RESV_USAGE_BOOKKEEP);
- dma_resv_for_each_fence_unlocked(&cursor, fence)
- dma_fence_enable_sw_signaling(fence);
- dma_resv_iter_end(&cursor);
-
- err = dma_resv_wait_timeout(xe_vm_resv(vm),
- DMA_RESV_USAGE_BOOKKEEP,
- false, MAX_SCHEDULE_TIMEOUT);
- XE_WARN_ON(err <= 0);
-
- if (xe_vm_in_fault_mode(vm) && userptr->initial_bind) {
- err = xe_vm_invalidate_vma(vma);
- XE_WARN_ON(err);
- }
-
- xe_hmm_userptr_unmap(uvma);
-}
-
-static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
- const struct mmu_notifier_range *range,
- unsigned long cur_seq)
-{
- struct xe_userptr_vma *uvma = container_of(mni, typeof(*uvma), userptr.notifier);
- struct xe_vma *vma = &uvma->vma;
- struct xe_vm *vm = xe_vma_vm(vma);
-
- xe_assert(vm->xe, xe_vma_is_userptr(vma));
- trace_xe_vma_userptr_invalidate(vma);
-
- if (!mmu_notifier_range_blockable(range))
- return false;
-
- vm_dbg(&xe_vma_vm(vma)->xe->drm,
- "NOTIFIER: addr=0x%016llx, range=0x%016llx",
- xe_vma_start(vma), xe_vma_size(vma));
-
- down_write(&vm->userptr.notifier_lock);
- mmu_interval_set_seq(mni, cur_seq);
-
- __vma_userptr_invalidate(vm, uvma);
- up_write(&vm->userptr.notifier_lock);
- trace_xe_vma_userptr_invalidate_complete(vma);
-
- return true;
-}
-
-static const struct mmu_interval_notifier_ops vma_userptr_notifier_ops = {
- .invalidate = vma_userptr_invalidate,
-};
-
-#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
-/**
- * xe_vma_userptr_force_invalidate() - force invalidate a userptr
- * @uvma: The userptr vma to invalidate
- *
- * Perform a forced userptr invalidation for testing purposes.
- */
-void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
-{
- struct xe_vm *vm = xe_vma_vm(&uvma->vma);
-
- /* Protect against concurrent userptr pinning */
- lockdep_assert_held(&vm->lock);
- /* Protect against concurrent notifiers */
- lockdep_assert_held(&vm->userptr.notifier_lock);
- /*
- * Protect against concurrent instances of this function and
- * the critical exec sections
- */
- xe_vm_assert_held(vm);
-
- if (!mmu_interval_read_retry(&uvma->userptr.notifier,
- uvma->userptr.notifier_seq))
- uvma->userptr.notifier_seq -= 2;
- __vma_userptr_invalidate(vm, uvma);
-}
-#endif
-
-int xe_vm_userptr_pin(struct xe_vm *vm)
-{
- struct xe_userptr_vma *uvma, *next;
- int err = 0;
-
- xe_assert(vm->xe, !xe_vm_in_fault_mode(vm));
- lockdep_assert_held_write(&vm->lock);
-
- /* Collect invalidated userptrs */
- spin_lock(&vm->userptr.invalidated_lock);
- xe_assert(vm->xe, list_empty(&vm->userptr.repin_list));
- list_for_each_entry_safe(uvma, next, &vm->userptr.invalidated,
- userptr.invalidate_link) {
- list_del_init(&uvma->userptr.invalidate_link);
- list_add_tail(&uvma->userptr.repin_link,
- &vm->userptr.repin_list);
- }
- spin_unlock(&vm->userptr.invalidated_lock);
-
- /* Pin and move to bind list */
- list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
- userptr.repin_link) {
- err = xe_vma_userptr_pin_pages(uvma);
- if (err == -EFAULT) {
- list_del_init(&uvma->userptr.repin_link);
- /*
- * We might have already done the pin once, but
- * then had to retry before the re-bind happened due
- * to some other condition in the caller, but in the
- * meantime the userptr got dinged by the notifier such
- * that we need to revalidate here, but this time we hit
- * the EFAULT. In such a case make sure we remove
- * ourselves from the rebind list to avoid going down in
- * flames.
- */
- if (!list_empty(&uvma->vma.combined_links.rebind))
- list_del_init(&uvma->vma.combined_links.rebind);
-
- /* Wait for pending binds */
- xe_vm_lock(vm, false);
- dma_resv_wait_timeout(xe_vm_resv(vm),
- DMA_RESV_USAGE_BOOKKEEP,
- false, MAX_SCHEDULE_TIMEOUT);
-
- err = xe_vm_invalidate_vma(&uvma->vma);
- xe_vm_unlock(vm);
- if (err)
- break;
- } else {
- if (err)
- break;
-
- list_del_init(&uvma->userptr.repin_link);
- list_move_tail(&uvma->vma.combined_links.rebind,
- &vm->rebind_list);
- }
- }
-
- if (err) {
- down_write(&vm->userptr.notifier_lock);
- spin_lock(&vm->userptr.invalidated_lock);
- list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
- userptr.repin_link) {
- list_del_init(&uvma->userptr.repin_link);
- list_move_tail(&uvma->userptr.invalidate_link,
- &vm->userptr.invalidated);
- }
- spin_unlock(&vm->userptr.invalidated_lock);
- up_write(&vm->userptr.notifier_lock);
- }
- return err;
-}
-
-/**
- * xe_vm_userptr_check_repin() - Check whether the VM might have userptrs
- * that need repinning.
- * @vm: The VM.
- *
- * This function does an advisory check for whether the VM has userptrs that
- * need repinning.
- *
- * Return: 0 if there are no indications of userptrs needing repinning,
- * -EAGAIN if there are.
- */
-int xe_vm_userptr_check_repin(struct xe_vm *vm)
-{
- return (list_empty_careful(&vm->userptr.repin_list) &&
- list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
-}
-
static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
{
int i;
@@ -1215,25 +971,15 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
drm_gpuvm_bo_put(vm_bo);
} else /* userptr or null */ {
if (!is_null && !is_cpu_addr_mirror) {
- struct xe_userptr *userptr = &to_userptr_vma(vma)->userptr;
- u64 size = end - start + 1;
+ struct xe_userptr_vma *uvma = to_userptr_vma(vma);
int err;
- INIT_LIST_HEAD(&userptr->invalidate_link);
- INIT_LIST_HEAD(&userptr->repin_link);
- vma->gpuva.gem.offset = bo_offset_or_userptr;
- mutex_init(&userptr->unmap_mutex);
-
- err = mmu_interval_notifier_insert(&userptr->notifier,
- current->mm,
- xe_vma_userptr(vma), size,
- &vma_userptr_notifier_ops);
+ err = xe_userptr_setup(uvma, xe_vma_userptr(vma),
+ end - start + 1);
if (err) {
xe_vma_free(vma);
return ERR_PTR(err);
}
-
- userptr->notifier_seq = LONG_MAX;
}
xe_vm_get(vm);
@@ -1253,18 +999,8 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
if (xe_vma_is_userptr(vma)) {
struct xe_userptr_vma *uvma = to_userptr_vma(vma);
- struct xe_userptr *userptr = &uvma->userptr;
- if (userptr->sg)
- xe_hmm_userptr_free_sg(uvma);
-
- /*
- * Since userptr pages are not pinned, we can't remove
- * the notifier until we're sure the GPU is not accessing
- * them anymore
- */
- mmu_interval_notifier_remove(&userptr->notifier);
- mutex_destroy(&userptr->unmap_mutex);
+ xe_userptr_remove(uvma);
xe_vm_put(vm);
} else if (xe_vma_is_null(vma) || xe_vma_is_cpu_addr_mirror(vma)) {
xe_vm_put(vm);
@@ -1301,11 +1037,7 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
if (xe_vma_is_userptr(vma)) {
xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
-
- spin_lock(&vm->userptr.invalidated_lock);
- xe_assert(vm->xe, list_empty(&to_userptr_vma(vma)->userptr.repin_link));
- list_del(&to_userptr_vma(vma)->userptr.invalidate_link);
- spin_unlock(&vm->userptr.invalidated_lock);
+ xe_userptr_destroy(to_userptr_vma(vma));
} else if (!xe_vma_is_null(vma) && !xe_vma_is_cpu_addr_mirror(vma)) {
xe_bo_assert_held(xe_vma_bo(vma));
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 0ef811fc2bde..c59a94e2ffb9 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -210,12 +210,6 @@ static inline bool xe_vm_in_preempt_fence_mode(struct xe_vm *vm)
int xe_vm_add_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q);
void xe_vm_remove_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q);
-int xe_vm_userptr_pin(struct xe_vm *vm);
-
-int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
-
-int xe_vm_userptr_check_repin(struct xe_vm *vm);
-
int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
u8 tile_mask);
@@ -253,10 +247,6 @@ static inline void xe_vm_reactivate_rebind(struct xe_vm *vm)
}
}
-int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma);
-
-int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma);
-
bool xe_vm_validate_should_retry(struct drm_exec *exec, int err, ktime_t *end);
int xe_vm_lock_vma(struct drm_exec *exec, struct xe_vma *vma);
@@ -300,12 +290,4 @@ struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm);
void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
-
-#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
-void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma);
-#else
-static inline void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
-{
-}
-#endif
#endif
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 1662604c4486..65e889f2537d 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -17,6 +17,7 @@
#include "xe_device_types.h"
#include "xe_pt_types.h"
#include "xe_range_fence.h"
+#include "xe_userptr.h"
struct xe_bo;
struct xe_svm_range;
@@ -46,37 +47,6 @@ struct xe_vm_pgtable_update_op;
#define XE_VMA_DUMPABLE (DRM_GPUVA_USERBITS << 8)
#define XE_VMA_SYSTEM_ALLOCATOR (DRM_GPUVA_USERBITS << 9)
-/** struct xe_userptr - User pointer */
-struct xe_userptr {
- /** @invalidate_link: Link for the vm::userptr.invalidated list */
- struct list_head invalidate_link;
- /** @repin_link: link into VM repin list if userptr. */
- struct list_head repin_link;
- /**
- * @notifier: MMU notifier for user pointer (invalidation call back)
- */
- struct mmu_interval_notifier notifier;
- /** @sgt: storage for a scatter gather table */
- struct sg_table sgt;
- /** @sg: allocated scatter gather table */
- struct sg_table *sg;
- /** @notifier_seq: notifier sequence number */
- unsigned long notifier_seq;
- /** @unmap_mutex: Mutex protecting dma-unmapping */
- struct mutex unmap_mutex;
- /**
- * @initial_bind: user pointer has been bound at least once.
- * write: vm->userptr.notifier_lock in read mode and vm->resv held.
- * read: vm->userptr.notifier_lock in write mode or vm->resv held.
- */
- bool initial_bind;
- /** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */
- bool mapped;
-#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
- u32 divisor;
-#endif
-};
-
struct xe_vma {
/** @gpuva: Base GPUVA object */
struct drm_gpuva gpuva;
@@ -237,33 +207,7 @@ struct xe_vm {
const struct xe_pt_ops *pt_ops;
/** @userptr: user pointer state */
- struct {
- /**
- * @userptr.repin_list: list of VMAs which are user pointers,
- * and need repinning. Protected by @lock.
- */
- struct list_head repin_list;
- /**
- * @notifier_lock: protects notifier in write mode and
- * submission in read mode.
- */
- struct rw_semaphore notifier_lock;
- /**
- * @userptr.invalidated_lock: Protects the
- * @userptr.invalidated list.
- */
- spinlock_t invalidated_lock;
- /**
- * @userptr.invalidated: List of invalidated userptrs, not yet
- * picked
- * up for revalidation. Protected from access with the
- * @invalidated_lock. Removing items from the list
- * additionally requires @lock in write mode, and adding
- * items to the list requires either the @userptr.notifier_lock in
- * write mode, OR @lock in write mode.
- */
- struct list_head invalidated;
- } userptr;
+ struct xe_userptr_vm userptr;
/** @preempt: preempt state */
struct {
--
2.49.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH v4 6/8] drm/xe/vm: split userptr bits into separate file
2025-05-12 15:06 ` [PATCH v4 6/8] drm/xe/vm: split userptr bits into separate file Matthew Auld
@ 2025-06-09 17:05 ` Matthew Brost
0 siblings, 0 replies; 17+ messages in thread
From: Matthew Brost @ 2025-06-09 17:05 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-xe, dri-devel, Thomas Hellström
On Mon, May 12, 2025 at 04:06:44PM +0100, Matthew Auld wrote:
> This will simplify compiling out the bits that depend on DRM_GPUSVM in a
> later patch. Without this we end up littering the code with ifdef
> checks, plus it becomes hard to be sure that something won't blow up at
> runtime due to something not being initialised, even though it passed
> the build. Should be no functional change here.
>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/Makefile | 1 +
> drivers/gpu/drm/xe/xe_pt.c | 1 +
> drivers/gpu/drm/xe/xe_userptr.c | 303 +++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_userptr.h | 97 ++++++++++
We typically use *_types.h, but I didn't do this for xe_svm.h either, so
maybe it's a little unfair to nitpick.
Either way:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> drivers/gpu/drm/xe/xe_vm.c | 280 +---------------------------
> drivers/gpu/drm/xe/xe_vm.h | 18 --
> drivers/gpu/drm/xe/xe_vm_types.h | 60 +-----
> 7 files changed, 410 insertions(+), 350 deletions(-)
> create mode 100644 drivers/gpu/drm/xe/xe_userptr.c
> create mode 100644 drivers/gpu/drm/xe/xe_userptr.h
>
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index e4bf484d4121..10b42118e761 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -116,6 +116,7 @@ xe-y += xe_bb.o \
> xe_tuning.o \
> xe_uc.o \
> xe_uc_fw.o \
> + xe_userptr.o \
> xe_vm.o \
> xe_vram.o \
> xe_vram_freq.o \
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 5cccfd9cc3e9..720c25bf48f2 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -23,6 +23,7 @@
> #include "xe_svm.h"
> #include "xe_trace.h"
> #include "xe_ttm_stolen_mgr.h"
> +#include "xe_userptr.h"
> #include "xe_vm.h"
>
> struct xe_pt_dir {
> diff --git a/drivers/gpu/drm/xe/xe_userptr.c b/drivers/gpu/drm/xe/xe_userptr.c
> new file mode 100644
> index 000000000000..f573842a3d4b
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_userptr.c
> @@ -0,0 +1,303 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#include "xe_userptr.h"
> +
> +#include <linux/mm.h>
> +
> +#include "xe_hmm.h"
> +#include "xe_trace_bo.h"
> +
> +/**
> + * xe_vma_userptr_check_repin() - Advisory check for repin needed
> + * @uvma: The userptr vma
> + *
> + * Check if the userptr vma has been invalidated since last successful
> + * repin. The check is advisory only and the function can be called
> + * without the vm->svm.gpusvm.notifier_lock held. There is no guarantee that the
> + * vma userptr will remain valid after a lockless check, so typically
> + * the call needs to be followed by a proper check under the notifier_lock.
> + *
> + * Return: 0 if userptr vma is valid, -EAGAIN otherwise; repin recommended.
> + */
> +int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma)
> +{
> + return mmu_interval_check_retry(&uvma->userptr.notifier,
> + uvma->userptr.notifier_seq) ?
> + -EAGAIN : 0;
> +}
> +
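FWIW, the intended pattern with this advisory check is a cheap lockless
hint followed by the authoritative check under the notifier lock,
roughly like this (hypothetical caller, for illustration only):

	err = xe_vma_userptr_check_repin(uvma);
	if (err)
		return err;	/* -EAGAIN: repin likely needed */

	/* ... prepare the submission ... */

	down_read(&vm->svm.gpusvm.notifier_lock);
	err = __xe_vm_userptr_needs_repin(vm);	/* authoritative, under lock */
	if (!err)
		submit_job();	/* hypothetical helper; submit under the lock */
	up_read(&vm->svm.gpusvm.notifier_lock);
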
> +int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma)
> +{
> + struct xe_vma *vma = &uvma->vma;
> + struct xe_vm *vm = xe_vma_vm(vma);
> + struct xe_device *xe = vm->xe;
> +
> + lockdep_assert_held(&vm->lock);
> + xe_assert(xe, xe_vma_is_userptr(vma));
> +
> + return xe_hmm_userptr_populate_range(uvma, false);
> +}
> +
> +static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma)
> +{
> + struct xe_userptr *userptr = &uvma->userptr;
> + struct xe_vma *vma = &uvma->vma;
> + struct dma_resv_iter cursor;
> + struct dma_fence *fence;
> + long err;
> +
> + /*
> + * Tell exec and rebind worker they need to repin and rebind this
> + * userptr.
> + */
> + if (!xe_vm_in_fault_mode(vm) &&
> + !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
> + spin_lock(&vm->userptr.invalidated_lock);
> + list_move_tail(&userptr->invalidate_link,
> + &vm->userptr.invalidated);
> + spin_unlock(&vm->userptr.invalidated_lock);
> + }
> +
> + /*
> + * Preempt fences turn into schedule disables, pipeline these.
> + * Note that even in fault mode, we need to wait for binds and
> + * unbinds to complete, and those are attached as BOOKMARK fences
> + * to the vm.
> + */
> + dma_resv_iter_begin(&cursor, xe_vm_resv(vm),
> + DMA_RESV_USAGE_BOOKKEEP);
> + dma_resv_for_each_fence_unlocked(&cursor, fence)
> + dma_fence_enable_sw_signaling(fence);
> + dma_resv_iter_end(&cursor);
> +
> + err = dma_resv_wait_timeout(xe_vm_resv(vm),
> + DMA_RESV_USAGE_BOOKKEEP,
> + false, MAX_SCHEDULE_TIMEOUT);
> + XE_WARN_ON(err <= 0);
> +
> + if (xe_vm_in_fault_mode(vm) && userptr->initial_bind) {
> + err = xe_vm_invalidate_vma(vma);
> + XE_WARN_ON(err);
> + }
> +
> + xe_hmm_userptr_unmap(uvma);
> +}
> +
> +#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
> +/**
> + * xe_vma_userptr_force_invalidate() - force invalidate a userptr
> + * @uvma: The userptr vma to invalidate
> + *
> + * Perform a forced userptr invalidation for testing purposes.
> + */
> +void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
> +{
> + struct xe_vm *vm = xe_vma_vm(&uvma->vma);
> +
> + /* Protect against concurrent userptr pinning */
> + lockdep_assert_held(&vm->lock);
> + /* Protect against concurrent notifiers */
> + lockdep_assert_held(&vm->svm.gpusvm.notifier_lock);
> + /*
> + * Protect against concurrent instances of this function and
> + * the critical exec sections
> + */
> + xe_vm_assert_held(vm);
> +
> + if (!mmu_interval_read_retry(&uvma->userptr.notifier,
> + uvma->userptr.notifier_seq))
> + uvma->userptr.notifier_seq -= 2;
> + __vma_userptr_invalidate(vm, uvma);
> +}
> +#endif
> +
> +static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
> + const struct mmu_notifier_range *range,
> + unsigned long cur_seq)
> +{
> + struct xe_userptr_vma *uvma = container_of(mni, typeof(*uvma), userptr.notifier);
> + struct xe_vma *vma = &uvma->vma;
> + struct xe_vm *vm = xe_vma_vm(vma);
> +
> + xe_assert(vm->xe, xe_vma_is_userptr(vma));
> + trace_xe_vma_userptr_invalidate(vma);
> +
> + if (!mmu_notifier_range_blockable(range))
> + return false;
> +
> + vm_dbg(&xe_vma_vm(vma)->xe->drm,
> + "NOTIFIER: addr=0x%016llx, range=0x%016llx",
> + xe_vma_start(vma), xe_vma_size(vma));
> +
> + down_write(&vm->svm.gpusvm.notifier_lock);
> + mmu_interval_set_seq(mni, cur_seq);
> +
> + __vma_userptr_invalidate(vm, uvma);
> + up_write(&vm->svm.gpusvm.notifier_lock);
> + trace_xe_vma_userptr_invalidate_complete(vma);
> +
> + return true;
> +}
> +
> +static const struct mmu_interval_notifier_ops vma_userptr_notifier_ops = {
> + .invalidate = vma_userptr_invalidate,
> +};
> +
> +/**
> + * __xe_vm_userptr_needs_repin() - Check whether the VM does have userptrs
> + * that need repinning.
> + * @vm: The VM.
> + *
> + * This function checks for whether the VM has userptrs that need repinning,
> + * and provides a release-type barrier on the svm.gpusvm.notifier_lock after
> + * checking.
> + *
> + * Return: 0 if there are no userptrs needing repinning, -EAGAIN if there are.
> + */
> +int __xe_vm_userptr_needs_repin(struct xe_vm *vm)
> +{
> + lockdep_assert_held_read(&vm->svm.gpusvm.notifier_lock);
> +
> + return (list_empty(&vm->userptr.repin_list) &&
> + list_empty(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
> +}
> +
> +int xe_vm_userptr_pin(struct xe_vm *vm)
> +{
> + struct xe_userptr_vma *uvma, *next;
> + int err = 0;
> +
> + xe_assert(vm->xe, !xe_vm_in_fault_mode(vm));
> + lockdep_assert_held_write(&vm->lock);
> +
> + /* Collect invalidated userptrs */
> + spin_lock(&vm->userptr.invalidated_lock);
> + xe_assert(vm->xe, list_empty(&vm->userptr.repin_list));
> + list_for_each_entry_safe(uvma, next, &vm->userptr.invalidated,
> + userptr.invalidate_link) {
> + list_del_init(&uvma->userptr.invalidate_link);
> + list_add_tail(&uvma->userptr.repin_link,
> + &vm->userptr.repin_list);
> + }
> + spin_unlock(&vm->userptr.invalidated_lock);
> +
> + /* Pin and move to bind list */
> + list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
> + userptr.repin_link) {
> + err = xe_vma_userptr_pin_pages(uvma);
> + if (err == -EFAULT) {
> + list_del_init(&uvma->userptr.repin_link);
> + /*
> + * We might have already done the pin once, but
> + * then had to retry before the re-bind happened due
> + * to some other condition in the caller, but in the
> + * meantime the userptr got dinged by the notifier such
> + * that we need to revalidate here, but this time we hit
> + * the EFAULT. In such a case make sure we remove
> + * ourselves from the rebind list to avoid going down in
> + * flames.
> + */
> + if (!list_empty(&uvma->vma.combined_links.rebind))
> + list_del_init(&uvma->vma.combined_links.rebind);
> +
> + /* Wait for pending binds */
> + xe_vm_lock(vm, false);
> + dma_resv_wait_timeout(xe_vm_resv(vm),
> + DMA_RESV_USAGE_BOOKKEEP,
> + false, MAX_SCHEDULE_TIMEOUT);
> +
> + err = xe_vm_invalidate_vma(&uvma->vma);
> + xe_vm_unlock(vm);
> + if (err)
> + break;
> + } else {
> + if (err)
> + break;
> +
> + list_del_init(&uvma->userptr.repin_link);
> + list_move_tail(&uvma->vma.combined_links.rebind,
> + &vm->rebind_list);
> + }
> + }
> +
> + if (err) {
> + down_write(&vm->svm.gpusvm.notifier_lock);
> + spin_lock(&vm->userptr.invalidated_lock);
> + list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
> + userptr.repin_link) {
> + list_del_init(&uvma->userptr.repin_link);
> + list_move_tail(&uvma->userptr.invalidate_link,
> + &vm->userptr.invalidated);
> + }
> + spin_unlock(&vm->userptr.invalidated_lock);
> + up_write(&vm->svm.gpusvm.notifier_lock);
> + }
> + return err;
> +}
> +
> +/**
> + * xe_vm_userptr_check_repin() - Check whether the VM might have userptrs
> + * that need repinning.
> + * @vm: The VM.
> + *
> + * This function does an advisory check for whether the VM has userptrs that
> + * need repinning.
> + *
> + * Return: 0 if there are no indications of userptrs needing repinning,
> + * -EAGAIN if there are.
> + */
> +int xe_vm_userptr_check_repin(struct xe_vm *vm)
> +{
> + return (list_empty_careful(&vm->userptr.repin_list) &&
> + list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
> +}
> +
> +int xe_userptr_setup(struct xe_userptr_vma *uvma, unsigned long start,
> + unsigned long range)
> +{
> + struct xe_userptr *userptr = &uvma->userptr;
> + int err;
> +
> + INIT_LIST_HEAD(&userptr->invalidate_link);
> + INIT_LIST_HEAD(&userptr->repin_link);
> + mutex_init(&userptr->unmap_mutex);
> +
> + err = mmu_interval_notifier_insert(&userptr->notifier, current->mm,
> + start, range,
> + &vma_userptr_notifier_ops);
> + if (err)
> + return err;
> +
> + userptr->notifier_seq = LONG_MAX;
> +
> + return 0;
> +}
> +
> +void xe_userptr_remove(struct xe_userptr_vma *uvma)
> +{
> + struct xe_userptr *userptr = &uvma->userptr;
> +
> + if (userptr->sg)
> + xe_hmm_userptr_free_sg(uvma);
> +
> + /*
> + * Since userptr pages are not pinned, we can't remove
> + * the notifier until we're sure the GPU is not accessing
> + * them anymore
> + */
> + mmu_interval_notifier_remove(&userptr->notifier);
> + mutex_destroy(&userptr->unmap_mutex);
> +}
> +
> +void xe_userptr_destroy(struct xe_userptr_vma *uvma)
> +{
> + struct xe_vm *vm = xe_vma_vm(&uvma->vma);
> +
> + spin_lock(&vm->userptr.invalidated_lock);
> + xe_assert(vm->xe, list_empty(&uvma->userptr.repin_link));
> + list_del(&uvma->userptr.invalidate_link);
> + spin_unlock(&vm->userptr.invalidated_lock);
> +}
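Putting the three helpers together, the vma lifecycle then looks like
this (sketch of the call sites, matching the xe_vm.c hunks below):

	/* xe_vma_create():       */ xe_userptr_setup(uvma, start, range);
	/* xe_vma_destroy():      */ xe_userptr_destroy(uvma);	/* unlink from invalidated list */
	/* xe_vma_destroy_late(): */ xe_userptr_remove(uvma);	/* free sg, remove notifier */
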
> diff --git a/drivers/gpu/drm/xe/xe_userptr.h b/drivers/gpu/drm/xe/xe_userptr.h
> new file mode 100644
> index 000000000000..83d17b58ed16
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_userptr.h
> @@ -0,0 +1,97 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2025 Intel Corporation
> + */
> +
> +#ifndef _XE_USERPTR_H_
> +#define _XE_USERPTR_H_
> +
> +#include <linux/list.h>
> +#include <linux/mutex.h>
> +#include <linux/notifier.h>
> +#include <linux/scatterlist.h>
> +#include <linux/spinlock.h>
> +
> +struct xe_vm;
> +struct xe_vma;
> +struct xe_userptr_vma;
> +
> +/** struct xe_userptr_vm - User pointer VM level state */
> +struct xe_userptr_vm {
> + /**
> + * @userptr.repin_list: list of VMAs which are user pointers,
> + * and need repinning. Protected by @lock.
> + */
> + struct list_head repin_list;
> + /**
> + * @notifier_lock: protects notifier in write mode and
> + * submission in read mode.
> + */
> + struct rw_semaphore notifier_lock;
> + /**
> + * @userptr.invalidated_lock: Protects the
> + * @userptr.invalidated list.
> + */
> + spinlock_t invalidated_lock;
> + /**
> + * @userptr.invalidated: List of invalidated userptrs, not yet
> + * picked
> + * up for revalidation. Protected from access with the
> + * @invalidated_lock. Removing items from the list
> + * additionally requires @lock in write mode, and adding
> + * items to the list requires either the @userptr.notifier_lock in
> + * write mode, OR @lock in write mode.
> + */
> + struct list_head invalidated;
> +};
> +
> +/** struct xe_userptr - User pointer */
> +struct xe_userptr {
> + /** @invalidate_link: Link for the vm::userptr.invalidated list */
> + struct list_head invalidate_link;
> + /** @repin_link: link into VM repin list if userptr. */
> + struct list_head repin_link;
> + /**
> + * @notifier: MMU notifier for user pointer (invalidation call back)
> + */
> + struct mmu_interval_notifier notifier;
> + /** @sgt: storage for a scatter gather table */
> + struct sg_table sgt;
> + /** @sg: allocated scatter gather table */
> + struct sg_table *sg;
> + /** @notifier_seq: notifier sequence number */
> + unsigned long notifier_seq;
> + /** @unmap_mutex: Mutex protecting dma-unmapping */
> + struct mutex unmap_mutex;
> + /**
> + * @initial_bind: user pointer has been bound at least once.
> + * write: vm->userptr.notifier_lock in read mode and vm->resv held.
> + * read: vm->userptr.notifier_lock in write mode or vm->resv held.
> + */
> + bool initial_bind;
> + /** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */
> + bool mapped;
> +#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
> + u32 divisor;
> +#endif
> +};
> +
> +void xe_userptr_remove(struct xe_userptr_vma *uvma);
> +int xe_userptr_setup(struct xe_userptr_vma *uvma, unsigned long start,
> + unsigned long range);
> +void xe_userptr_destroy(struct xe_userptr_vma *uvma);
> +
> +int xe_vm_userptr_pin(struct xe_vm *vm);
> +int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
> +int xe_vm_userptr_check_repin(struct xe_vm *vm);
> +int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma);
> +int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma);
> +
> +#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
> +void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma);
> +#else
> +static inline void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
> +{
> +}
> +#endif
> +#endif
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 79323c78130f..e5bf4ddc9d86 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -39,6 +39,7 @@
> #include "xe_svm.h"
> #include "xe_sync.h"
> #include "xe_trace_bo.h"
> +#include "xe_userptr.h"
> #include "xe_wa.h"
> #include "xe_hmm.h"
>
> @@ -47,37 +48,6 @@ static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
> return vm->gpuvm.r_obj;
> }
>
> -/**
> - * xe_vma_userptr_check_repin() - Advisory check for repin needed
> - * @uvma: The userptr vma
> - *
> - * Check if the userptr vma has been invalidated since last successful
> - * repin. The check is advisory only and the function can be called
> - * without the vm->userptr.notifier_lock held. There is no guarantee that the
> - * vma userptr will remain valid after a lockless check, so typically
> - * the call needs to be followed by a proper check under the notifier_lock.
> - *
> - * Return: 0 if userptr vma is valid, -EAGAIN otherwise; repin recommended.
> - */
> -int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma)
> -{
> - return mmu_interval_check_retry(&uvma->userptr.notifier,
> - uvma->userptr.notifier_seq) ?
> - -EAGAIN : 0;
> -}
> -
> -int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma)
> -{
> - struct xe_vma *vma = &uvma->vma;
> - struct xe_vm *vm = xe_vma_vm(vma);
> - struct xe_device *xe = vm->xe;
> -
> - lockdep_assert_held(&vm->lock);
> - xe_assert(xe, xe_vma_is_userptr(vma));
> -
> - return xe_hmm_userptr_populate_range(uvma, false);
> -}
> -
> static bool preempt_fences_waiting(struct xe_vm *vm)
> {
> struct xe_exec_queue *q;
> @@ -299,25 +269,6 @@ void xe_vm_remove_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
> up_write(&vm->lock);
> }
>
> -/**
> - * __xe_vm_userptr_needs_repin() - Check whether the VM does have userptrs
> - * that need repinning.
> - * @vm: The VM.
> - *
> - * This function checks for whether the VM has userptrs that need repinning,
> - * and provides a release-type barrier on the userptr.notifier_lock after
> - * checking.
> - *
> - * Return: 0 if there are no userptrs needing repinning, -EAGAIN if there are.
> - */
> -int __xe_vm_userptr_needs_repin(struct xe_vm *vm)
> -{
> - lockdep_assert_held_read(&vm->userptr.notifier_lock);
> -
> - return (list_empty(&vm->userptr.repin_list) &&
> - list_empty(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
> -}
> -
> #define XE_VM_REBIND_RETRY_TIMEOUT_MS 1000
>
> /**
> @@ -583,201 +534,6 @@ static void preempt_rebind_work_func(struct work_struct *w)
> trace_xe_vm_rebind_worker_exit(vm);
> }
>
> -static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma)
> -{
> - struct xe_userptr *userptr = &uvma->userptr;
> - struct xe_vma *vma = &uvma->vma;
> - struct dma_resv_iter cursor;
> - struct dma_fence *fence;
> - long err;
> -
> - /*
> - * Tell exec and rebind worker they need to repin and rebind this
> - * userptr.
> - */
> - if (!xe_vm_in_fault_mode(vm) &&
> - !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
> - spin_lock(&vm->userptr.invalidated_lock);
> - list_move_tail(&userptr->invalidate_link,
> - &vm->userptr.invalidated);
> - spin_unlock(&vm->userptr.invalidated_lock);
> - }
> -
> - /*
> - * Preempt fences turn into schedule disables, pipeline these.
> - * Note that even in fault mode, we need to wait for binds and
> - * unbinds to complete, and those are attached as BOOKMARK fences
> - * to the vm.
> - */
> - dma_resv_iter_begin(&cursor, xe_vm_resv(vm),
> - DMA_RESV_USAGE_BOOKKEEP);
> - dma_resv_for_each_fence_unlocked(&cursor, fence)
> - dma_fence_enable_sw_signaling(fence);
> - dma_resv_iter_end(&cursor);
> -
> - err = dma_resv_wait_timeout(xe_vm_resv(vm),
> - DMA_RESV_USAGE_BOOKKEEP,
> - false, MAX_SCHEDULE_TIMEOUT);
> - XE_WARN_ON(err <= 0);
> -
> - if (xe_vm_in_fault_mode(vm) && userptr->initial_bind) {
> - err = xe_vm_invalidate_vma(vma);
> - XE_WARN_ON(err);
> - }
> -
> - xe_hmm_userptr_unmap(uvma);
> -}
> -
> -static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
> - const struct mmu_notifier_range *range,
> - unsigned long cur_seq)
> -{
> - struct xe_userptr_vma *uvma = container_of(mni, typeof(*uvma), userptr.notifier);
> - struct xe_vma *vma = &uvma->vma;
> - struct xe_vm *vm = xe_vma_vm(vma);
> -
> - xe_assert(vm->xe, xe_vma_is_userptr(vma));
> - trace_xe_vma_userptr_invalidate(vma);
> -
> - if (!mmu_notifier_range_blockable(range))
> - return false;
> -
> - vm_dbg(&xe_vma_vm(vma)->xe->drm,
> - "NOTIFIER: addr=0x%016llx, range=0x%016llx",
> - xe_vma_start(vma), xe_vma_size(vma));
> -
> - down_write(&vm->userptr.notifier_lock);
> - mmu_interval_set_seq(mni, cur_seq);
> -
> - __vma_userptr_invalidate(vm, uvma);
> - up_write(&vm->userptr.notifier_lock);
> - trace_xe_vma_userptr_invalidate_complete(vma);
> -
> - return true;
> -}
> -
> -static const struct mmu_interval_notifier_ops vma_userptr_notifier_ops = {
> - .invalidate = vma_userptr_invalidate,
> -};
> -
> -#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
> -/**
> - * xe_vma_userptr_force_invalidate() - force invalidate a userptr
> - * @uvma: The userptr vma to invalidate
> - *
> - * Perform a forced userptr invalidation for testing purposes.
> - */
> -void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
> -{
> - struct xe_vm *vm = xe_vma_vm(&uvma->vma);
> -
> - /* Protect against concurrent userptr pinning */
> - lockdep_assert_held(&vm->lock);
> - /* Protect against concurrent notifiers */
> - lockdep_assert_held(&vm->userptr.notifier_lock);
> - /*
> - * Protect against concurrent instances of this function and
> - * the critical exec sections
> - */
> - xe_vm_assert_held(vm);
> -
> - if (!mmu_interval_read_retry(&uvma->userptr.notifier,
> - uvma->userptr.notifier_seq))
> - uvma->userptr.notifier_seq -= 2;
> - __vma_userptr_invalidate(vm, uvma);
> -}
> -#endif
> -
> -int xe_vm_userptr_pin(struct xe_vm *vm)
> -{
> - struct xe_userptr_vma *uvma, *next;
> - int err = 0;
> -
> - xe_assert(vm->xe, !xe_vm_in_fault_mode(vm));
> - lockdep_assert_held_write(&vm->lock);
> -
> - /* Collect invalidated userptrs */
> - spin_lock(&vm->userptr.invalidated_lock);
> - xe_assert(vm->xe, list_empty(&vm->userptr.repin_list));
> - list_for_each_entry_safe(uvma, next, &vm->userptr.invalidated,
> - userptr.invalidate_link) {
> - list_del_init(&uvma->userptr.invalidate_link);
> - list_add_tail(&uvma->userptr.repin_link,
> - &vm->userptr.repin_list);
> - }
> - spin_unlock(&vm->userptr.invalidated_lock);
> -
> - /* Pin and move to bind list */
> - list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
> - userptr.repin_link) {
> - err = xe_vma_userptr_pin_pages(uvma);
> - if (err == -EFAULT) {
> - list_del_init(&uvma->userptr.repin_link);
> - /*
> - * We might have already done the pin once, but
> - * then had to retry before the re-bind happened due
> - * to some other condition in the caller, but in the
> - * meantime the userptr got dinged by the notifier such
> - * that we need to revalidate here, but this time we hit
> - * the EFAULT. In such a case make sure we remove
> - * ourselves from the rebind list to avoid going down in
> - * flames.
> - */
> - if (!list_empty(&uvma->vma.combined_links.rebind))
> - list_del_init(&uvma->vma.combined_links.rebind);
> -
> - /* Wait for pending binds */
> - xe_vm_lock(vm, false);
> - dma_resv_wait_timeout(xe_vm_resv(vm),
> - DMA_RESV_USAGE_BOOKKEEP,
> - false, MAX_SCHEDULE_TIMEOUT);
> -
> - err = xe_vm_invalidate_vma(&uvma->vma);
> - xe_vm_unlock(vm);
> - if (err)
> - break;
> - } else {
> - if (err)
> - break;
> -
> - list_del_init(&uvma->userptr.repin_link);
> - list_move_tail(&uvma->vma.combined_links.rebind,
> - &vm->rebind_list);
> - }
> - }
> -
> - if (err) {
> - down_write(&vm->userptr.notifier_lock);
> - spin_lock(&vm->userptr.invalidated_lock);
> - list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
> - userptr.repin_link) {
> - list_del_init(&uvma->userptr.repin_link);
> - list_move_tail(&uvma->userptr.invalidate_link,
> - &vm->userptr.invalidated);
> - }
> - spin_unlock(&vm->userptr.invalidated_lock);
> - up_write(&vm->userptr.notifier_lock);
> - }
> - return err;
> -}
> -
> -/**
> - * xe_vm_userptr_check_repin() - Check whether the VM might have userptrs
> - * that need repinning.
> - * @vm: The VM.
> - *
> - * This function does an advisory check for whether the VM has userptrs that
> - * need repinning.
> - *
> - * Return: 0 if there are no indications of userptrs needing repinning,
> - * -EAGAIN if there are.
> - */
> -int xe_vm_userptr_check_repin(struct xe_vm *vm)
> -{
> - return (list_empty_careful(&vm->userptr.repin_list) &&
> - list_empty_careful(&vm->userptr.invalidated)) ? 0 : -EAGAIN;
> -}
> -
> static int xe_vma_ops_alloc(struct xe_vma_ops *vops, bool array_of_binds)
> {
> int i;
> @@ -1215,25 +971,15 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
> drm_gpuvm_bo_put(vm_bo);
> } else /* userptr or null */ {
> if (!is_null && !is_cpu_addr_mirror) {
> - struct xe_userptr *userptr = &to_userptr_vma(vma)->userptr;
> - u64 size = end - start + 1;
> + struct xe_userptr_vma *uvma = to_userptr_vma(vma);
> int err;
>
> - INIT_LIST_HEAD(&userptr->invalidate_link);
> - INIT_LIST_HEAD(&userptr->repin_link);
> - vma->gpuva.gem.offset = bo_offset_or_userptr;
> - mutex_init(&userptr->unmap_mutex);
> -
> - err = mmu_interval_notifier_insert(&userptr->notifier,
> - current->mm,
> - xe_vma_userptr(vma), size,
> - &vma_userptr_notifier_ops);
> + err = xe_userptr_setup(uvma, xe_vma_userptr(vma),
> + end - start + 1);
> if (err) {
> xe_vma_free(vma);
> return ERR_PTR(err);
> }
> -
> - userptr->notifier_seq = LONG_MAX;
> }
>
> xe_vm_get(vm);
> @@ -1253,18 +999,8 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
>
> if (xe_vma_is_userptr(vma)) {
> struct xe_userptr_vma *uvma = to_userptr_vma(vma);
> - struct xe_userptr *userptr = &uvma->userptr;
>
> - if (userptr->sg)
> - xe_hmm_userptr_free_sg(uvma);
> -
> - /*
> - * Since userptr pages are not pinned, we can't remove
> - * the notifier until we're sure the GPU is not accessing
> - * them anymore
> - */
> - mmu_interval_notifier_remove(&userptr->notifier);
> - mutex_destroy(&userptr->unmap_mutex);
> + xe_userptr_remove(uvma);
> xe_vm_put(vm);
> } else if (xe_vma_is_null(vma) || xe_vma_is_cpu_addr_mirror(vma)) {
> xe_vm_put(vm);
> @@ -1301,11 +1037,7 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
>
> if (xe_vma_is_userptr(vma)) {
> xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
> -
> - spin_lock(&vm->userptr.invalidated_lock);
> - xe_assert(vm->xe, list_empty(&to_userptr_vma(vma)->userptr.repin_link));
> - list_del(&to_userptr_vma(vma)->userptr.invalidate_link);
> - spin_unlock(&vm->userptr.invalidated_lock);
> + xe_userptr_destroy(to_userptr_vma(vma));
> } else if (!xe_vma_is_null(vma) && !xe_vma_is_cpu_addr_mirror(vma)) {
> xe_bo_assert_held(xe_vma_bo(vma));
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 0ef811fc2bde..c59a94e2ffb9 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -210,12 +210,6 @@ static inline bool xe_vm_in_preempt_fence_mode(struct xe_vm *vm)
> int xe_vm_add_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q);
> void xe_vm_remove_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q);
>
> -int xe_vm_userptr_pin(struct xe_vm *vm);
> -
> -int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
> -
> -int xe_vm_userptr_check_repin(struct xe_vm *vm);
> -
> int xe_vm_rebind(struct xe_vm *vm, bool rebind_worker);
> struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma,
> u8 tile_mask);
> @@ -253,10 +247,6 @@ static inline void xe_vm_reactivate_rebind(struct xe_vm *vm)
> }
> }
>
> -int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma);
> -
> -int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma);
> -
> bool xe_vm_validate_should_retry(struct drm_exec *exec, int err, ktime_t *end);
>
> int xe_vm_lock_vma(struct drm_exec *exec, struct xe_vma *vma);
> @@ -300,12 +290,4 @@ struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm);
> void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
> void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
> void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
> -
> -#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
> -void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma);
> -#else
> -static inline void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
> -{
> -}
> -#endif
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 1662604c4486..65e889f2537d 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -17,6 +17,7 @@
> #include "xe_device_types.h"
> #include "xe_pt_types.h"
> #include "xe_range_fence.h"
> +#include "xe_userptr.h"
>
> struct xe_bo;
> struct xe_svm_range;
> @@ -46,37 +47,6 @@ struct xe_vm_pgtable_update_op;
> #define XE_VMA_DUMPABLE (DRM_GPUVA_USERBITS << 8)
> #define XE_VMA_SYSTEM_ALLOCATOR (DRM_GPUVA_USERBITS << 9)
>
> -/** struct xe_userptr - User pointer */
> -struct xe_userptr {
> - /** @invalidate_link: Link for the vm::userptr.invalidated list */
> - struct list_head invalidate_link;
> - /** @repin_link: link into VM repin list if userptr. */
> - struct list_head repin_link;
> - /**
> - * @notifier: MMU notifier for user pointer (invalidation call back)
> - */
> - struct mmu_interval_notifier notifier;
> - /** @sgt: storage for a scatter gather table */
> - struct sg_table sgt;
> - /** @sg: allocated scatter gather table */
> - struct sg_table *sg;
> - /** @notifier_seq: notifier sequence number */
> - unsigned long notifier_seq;
> - /** @unmap_mutex: Mutex protecting dma-unmapping */
> - struct mutex unmap_mutex;
> - /**
> - * @initial_bind: user pointer has been bound at least once.
> - * write: vm->userptr.notifier_lock in read mode and vm->resv held.
> - * read: vm->userptr.notifier_lock in write mode or vm->resv held.
> - */
> - bool initial_bind;
> - /** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */
> - bool mapped;
> -#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
> - u32 divisor;
> -#endif
> -};
> -
> struct xe_vma {
> /** @gpuva: Base GPUVA object */
> struct drm_gpuva gpuva;
> @@ -237,33 +207,7 @@ struct xe_vm {
> const struct xe_pt_ops *pt_ops;
>
> /** @userptr: user pointer state */
> - struct {
> - /**
> - * @userptr.repin_list: list of VMAs which are user pointers,
> - * and need repinning. Protected by @lock.
> - */
> - struct list_head repin_list;
> - /**
> - * @notifier_lock: protects notifier in write mode and
> - * submission in read mode.
> - */
> - struct rw_semaphore notifier_lock;
> - /**
> - * @userptr.invalidated_lock: Protects the
> - * @userptr.invalidated list.
> - */
> - spinlock_t invalidated_lock;
> - /**
> - * @userptr.invalidated: List of invalidated userptrs, not yet
> - * picked
> - * up for revalidation. Protected from access with the
> - * @invalidated_lock. Removing items from the list
> - * additionally requires @lock in write mode, and adding
> - * items to the list requires either the @userptr.notifier_lock in
> - * write mode, OR @lock in write mode.
> - */
> - struct list_head invalidated;
> - } userptr;
> + struct xe_userptr_vm userptr;
>
> /** @preempt: preempt state */
> struct {
> --
> 2.49.0
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v4 7/8] drm/xe/userptr: replace xe_hmm with gpusvm
2025-05-12 15:06 [PATCH v4 0/8] Replace xe_hmm with gpusvm Matthew Auld
` (5 preceding siblings ...)
2025-05-12 15:06 ` [PATCH v4 6/8] drm/xe/vm: split userptr bits into separate file Matthew Auld
@ 2025-05-12 15:06 ` Matthew Auld
2025-06-09 17:09 ` Matthew Brost
2025-05-12 15:06 ` [PATCH v4 8/8] drm/xe/pt: unify xe_pt_svm_pre_commit with userptr Matthew Auld
` (6 subsequent siblings)
13 siblings, 1 reply; 17+ messages in thread
From: Matthew Auld @ 2025-05-12 15:06 UTC (permalink / raw)
To: intel-xe
Cc: dri-devel, Himal Prasad Ghimiray, Thomas Hellström,
Dafna Hirschfeld, Matthew Brost
The goal here is to cut over to gpusvm and remove xe_hmm, relying
instead on common code. The core facilities we need are get_pages(),
unmap_pages() and free_pages() for a given userptr range, plus a
vm-level notifier lock, which is now provided by gpusvm.
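
As a rough sketch (the exact drm_gpusvm_get_pages() signature below is
an assumption, based on the drm_gpusvm pages API factored out earlier
in the series), pinning a userptr range then boils down to:

	static int userptr_pin_sketch(struct xe_userptr_vma *uvma)
	{
		struct xe_vma *vma = &uvma->vma;
		struct xe_vm *vm = xe_vma_vm(vma);
		struct drm_gpusvm_ctx ctx = {
			.read_only = xe_vma_read_only(vma),
		};

		/* Faults in and dma-maps the CPU pages, replacing the
		 * old xe_hmm sg-table path. */
		return drm_gpusvm_get_pages(&vm->svm.gpusvm,
					    &uvma->userptr.pages,
					    uvma->userptr.notifier.mm,
					    &uvma->userptr.notifier,
					    xe_vma_userptr(vma),
					    xe_vma_userptr(vma) +
					    xe_vma_size(vma), &ctx);
	}

with unmap_pages()/free_pages() called from the invalidation and
destroy paths, all serialised by the vm-level notifier lock.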
v2:
- Reuse the same SVM vm struct we use for full SVM, that way we can
use the same lock (Matt B & Himal)
v3:
- Re-use svm_init/fini for userptr.
v4:
- Allow building xe without userptr if we are missing DRM_GPUSVM
config. (Matt B)
- Always make .read_only match xe_vma_read_only() for the ctx. (Dafna)
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Dafna Hirschfeld <dafna.hirschfeld@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/Kconfig | 2 +-
drivers/gpu/drm/xe/Makefile | 3 +-
drivers/gpu/drm/xe/xe_exec.c | 5 +-
drivers/gpu/drm/xe/xe_hmm.c | 325 -------------------------------
drivers/gpu/drm/xe/xe_hmm.h | 18 --
drivers/gpu/drm/xe/xe_pt.c | 22 +--
drivers/gpu/drm/xe/xe_svm.c | 32 +--
drivers/gpu/drm/xe/xe_svm.h | 58 ++++--
drivers/gpu/drm/xe/xe_userptr.c | 32 ++-
drivers/gpu/drm/xe/xe_userptr.h | 44 +++--
drivers/gpu/drm/xe/xe_vm.c | 44 ++---
drivers/gpu/drm/xe/xe_vm_types.h | 2 +-
12 files changed, 150 insertions(+), 437 deletions(-)
delete mode 100644 drivers/gpu/drm/xe/xe_hmm.c
delete mode 100644 drivers/gpu/drm/xe/xe_hmm.h
diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
index 9bce047901b2..0187902e6139 100644
--- a/drivers/gpu/drm/xe/Kconfig
+++ b/drivers/gpu/drm/xe/Kconfig
@@ -38,12 +38,12 @@ config DRM_XE
select DRM_TTM
select DRM_TTM_HELPER
select DRM_EXEC
+ select DRM_GPUSVM if !UML && DEVICE_PRIVATE
select DRM_GPUVM
select DRM_SCHED
select MMU_NOTIFIER
select WANT_DEV_COREDUMP
select AUXILIARY_BUS
- select HMM_MIRROR
help
Experimental driver for Intel Xe series GPUs
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 10b42118e761..5275a76d0b8e 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -116,7 +116,6 @@ xe-y += xe_bb.o \
xe_tuning.o \
xe_uc.o \
xe_uc_fw.o \
- xe_userptr.o \
xe_vm.o \
xe_vram.o \
xe_vram_freq.o \
@@ -125,8 +124,8 @@ xe-y += xe_bb.o \
xe_wait_user_fence.o \
xe_wopcm.o
-xe-$(CONFIG_HMM_MIRROR) += xe_hmm.o
xe-$(CONFIG_DRM_XE_GPUSVM) += xe_svm.o
+xe-$(CONFIG_DRM_GPUSVM) += xe_userptr.o
# graphics hardware monitoring (HWMON) support
xe-$(CONFIG_HWMON) += xe_hwmon.o
diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
index 44364c042ad7..25a59b6934f6 100644
--- a/drivers/gpu/drm/xe/xe_exec.c
+++ b/drivers/gpu/drm/xe/xe_exec.c
@@ -19,6 +19,7 @@
#include "xe_ring_ops_types.h"
#include "xe_sched_job.h"
#include "xe_sync.h"
+#include "xe_svm.h"
#include "xe_vm.h"
/**
@@ -294,7 +295,7 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
if (err)
goto err_put_job;
- err = down_read_interruptible(&vm->userptr.notifier_lock);
+ err = xe_svm_notifier_lock_interruptible(vm);
if (err)
goto err_put_job;
@@ -336,7 +337,7 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
err_repin:
if (!xe_vm_in_lr_mode(vm))
- up_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_unlock(vm);
err_put_job:
if (err)
xe_sched_job_put(job);
diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c
deleted file mode 100644
index 57b71956ddf4..000000000000
--- a/drivers/gpu/drm/xe/xe_hmm.c
+++ /dev/null
@@ -1,325 +0,0 @@
-// SPDX-License-Identifier: MIT
-/*
- * Copyright © 2024 Intel Corporation
- */
-
-#include <linux/scatterlist.h>
-#include <linux/mmu_notifier.h>
-#include <linux/dma-mapping.h>
-#include <linux/memremap.h>
-#include <linux/swap.h>
-#include <linux/hmm.h>
-#include <linux/mm.h>
-#include "xe_hmm.h"
-#include "xe_vm.h"
-#include "xe_bo.h"
-
-static u64 xe_npages_in_range(unsigned long start, unsigned long end)
-{
- return (end - start) >> PAGE_SHIFT;
-}
-
-static int xe_alloc_sg(struct xe_device *xe, struct sg_table *st,
- struct hmm_range *range, struct rw_semaphore *notifier_sem)
-{
- unsigned long i, npages, hmm_pfn;
- unsigned long num_chunks = 0;
- int ret;
-
- /* HMM docs say this is needed. */
- ret = down_read_interruptible(notifier_sem);
- if (ret)
- return ret;
-
- if (mmu_interval_read_retry(range->notifier, range->notifier_seq)) {
- up_read(notifier_sem);
- return -EAGAIN;
- }
-
- npages = xe_npages_in_range(range->start, range->end);
- for (i = 0; i < npages;) {
- unsigned long len;
-
- hmm_pfn = range->hmm_pfns[i];
- xe_assert(xe, hmm_pfn & HMM_PFN_VALID);
-
- len = 1UL << hmm_pfn_to_map_order(hmm_pfn);
-
- /* If order > 0 the page may extend beyond range->start */
- len -= (hmm_pfn & ~HMM_PFN_FLAGS) & (len - 1);
- i += len;
- num_chunks++;
- }
- up_read(notifier_sem);
-
- return sg_alloc_table(st, num_chunks, GFP_KERNEL);
-}
-
-/**
- * xe_build_sg() - build a scatter gather table for all the physical pages/pfn
- * in a hmm_range. dma-map pages if necessary. The dma-address is saved in the
- * sg table and will be used to program the GPU page table later.
- * @xe: the xe device who will access the dma-address in sg table
- * @range: the hmm range that we build the sg table from. range->hmm_pfns[]
- * has the pfn numbers of pages that back up this hmm address range.
- * @st: pointer to the sg table.
- * @notifier_sem: The xe notifier lock.
- * @write: whether we write to this range. This decides dma map direction
- * for system pages. If we write, we map it bidirectional; otherwise
- * DMA_TO_DEVICE
- *
- * All the contiguous pfns will be collapsed into one entry in
- * the scatter gather table. This is for the purpose of efficiently
- * programming the GPU page table.
- *
- * The dma_address in the sg table will later be used by GPU to
- * access memory. So if the memory is system memory, we need to
- * do a dma-mapping so it can be accessed by GPU/DMA.
- *
- * FIXME: This function currently only support pages in system
- * memory. If the memory is GPU local memory (of the GPU who
- * is going to access memory), we need gpu dpa (device physical
- * address), and there is no need of dma-mapping. This is TBD.
- *
- * FIXME: dma-mapping for peer gpu device to access remote gpu's
- * memory. Add this when you support p2p
- *
- * This function allocates the storage of the sg table. It is
- * the caller's responsibility to free it by calling sg_free_table().
- *
- * Returns 0 if successful; -ENOMEM if it fails to allocate memory
- */
-static int xe_build_sg(struct xe_device *xe, struct hmm_range *range,
- struct sg_table *st,
- struct rw_semaphore *notifier_sem,
- bool write)
-{
- unsigned long npages = xe_npages_in_range(range->start, range->end);
- struct device *dev = xe->drm.dev;
- struct scatterlist *sgl;
- struct page *page;
- unsigned long i, j;
-
- lockdep_assert_held(notifier_sem);
-
- i = 0;
- for_each_sg(st->sgl, sgl, st->nents, j) {
- unsigned long hmm_pfn, size;
-
- hmm_pfn = range->hmm_pfns[i];
- page = hmm_pfn_to_page(hmm_pfn);
- xe_assert(xe, !is_device_private_page(page));
-
- size = 1UL << hmm_pfn_to_map_order(hmm_pfn);
- size -= page_to_pfn(page) & (size - 1);
- i += size;
-
- if (unlikely(j == st->nents - 1)) {
- xe_assert(xe, i >= npages);
- if (i > npages)
- size -= (i - npages);
-
- sg_mark_end(sgl);
- } else {
- xe_assert(xe, i < npages);
- }
-
- sg_set_page(sgl, page, size << PAGE_SHIFT, 0);
- }
-
- return dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE,
- DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
-}
-
-static void xe_hmm_userptr_set_mapped(struct xe_userptr_vma *uvma)
-{
- struct xe_userptr *userptr = &uvma->userptr;
- struct xe_vm *vm = xe_vma_vm(&uvma->vma);
-
- lockdep_assert_held_write(&vm->lock);
- lockdep_assert_held(&vm->userptr.notifier_lock);
-
- mutex_lock(&userptr->unmap_mutex);
- xe_assert(vm->xe, !userptr->mapped);
- userptr->mapped = true;
- mutex_unlock(&userptr->unmap_mutex);
-}
-
-void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma)
-{
- struct xe_userptr *userptr = &uvma->userptr;
- struct xe_vma *vma = &uvma->vma;
- bool write = !xe_vma_read_only(vma);
- struct xe_vm *vm = xe_vma_vm(vma);
- struct xe_device *xe = vm->xe;
-
- if (!lockdep_is_held_type(&vm->userptr.notifier_lock, 0) &&
- !lockdep_is_held_type(&vm->lock, 0) &&
- !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
- /* Don't unmap in exec critical section. */
- xe_vm_assert_held(vm);
- /* Don't unmap while mapping the sg. */
- lockdep_assert_held(&vm->lock);
- }
-
- mutex_lock(&userptr->unmap_mutex);
- if (userptr->sg && userptr->mapped)
- dma_unmap_sgtable(xe->drm.dev, userptr->sg,
- write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 0);
- userptr->mapped = false;
- mutex_unlock(&userptr->unmap_mutex);
-}
-
-/**
- * xe_hmm_userptr_free_sg() - Free the scatter gather table of userptr
- * @uvma: the userptr vma which hold the scatter gather table
- *
- * In xe_hmm_userptr_populate_range() we allocate the storage of
- * the userptr sg table. This is a helper function to free that
- * sg table and dma-unmap the addresses in it.
- */
-void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma)
-{
- struct xe_userptr *userptr = &uvma->userptr;
-
- xe_assert(xe_vma_vm(&uvma->vma)->xe, userptr->sg);
- xe_hmm_userptr_unmap(uvma);
- sg_free_table(userptr->sg);
- userptr->sg = NULL;
-}
-
-/**
- * xe_hmm_userptr_populate_range() - Populate physical pages of a virtual
- * address range
- *
- * @uvma: userptr vma which has information of the range to populate.
- * @is_mm_mmap_locked: True if mmap_read_lock is already acquired by caller.
- *
- * This function populates the physical pages of a virtual
- * address range. The populated physical pages are saved in
- * the userptr's sg table. It is similar to get_user_pages() but calls
- * hmm_range_fault().
- *
- * This function also reads the mmu notifier sequence number
- * (mmu_interval_read_begin()), for the purpose of later
- * comparison (through mmu_interval_read_retry()).
- *
- * This must be called with mmap read or write lock held.
- *
- * This function allocates the storage of the userptr sg table.
- * It is the caller's responsibility to free it by calling sg_free_table().
- *
- * Returns: 0 for success; negative error code on failure
- */
-int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
- bool is_mm_mmap_locked)
-{
- unsigned long timeout =
- jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
- unsigned long *pfns;
- struct xe_userptr *userptr;
- struct xe_vma *vma = &uvma->vma;
- u64 userptr_start = xe_vma_userptr(vma);
- u64 userptr_end = userptr_start + xe_vma_size(vma);
- struct xe_vm *vm = xe_vma_vm(vma);
- struct hmm_range hmm_range = {
- .pfn_flags_mask = 0, /* ignore pfns */
- .default_flags = HMM_PFN_REQ_FAULT,
- .start = userptr_start,
- .end = userptr_end,
- .notifier = &uvma->userptr.notifier,
- .dev_private_owner = vm->xe,
- };
- bool write = !xe_vma_read_only(vma);
- unsigned long notifier_seq;
- u64 npages;
- int ret;
-
- userptr = &uvma->userptr;
-
- if (is_mm_mmap_locked)
- mmap_assert_locked(userptr->notifier.mm);
-
- if (vma->gpuva.flags & XE_VMA_DESTROYED)
- return 0;
-
- notifier_seq = mmu_interval_read_begin(&userptr->notifier);
- if (notifier_seq == userptr->notifier_seq)
- return 0;
-
- if (userptr->sg)
- xe_hmm_userptr_free_sg(uvma);
-
- npages = xe_npages_in_range(userptr_start, userptr_end);
- pfns = kvmalloc_array(npages, sizeof(*pfns), GFP_KERNEL);
- if (unlikely(!pfns))
- return -ENOMEM;
-
- if (write)
- hmm_range.default_flags |= HMM_PFN_REQ_WRITE;
-
- if (!mmget_not_zero(userptr->notifier.mm)) {
- ret = -EFAULT;
- goto free_pfns;
- }
-
- hmm_range.hmm_pfns = pfns;
-
- while (true) {
- hmm_range.notifier_seq = mmu_interval_read_begin(&userptr->notifier);
-
- if (!is_mm_mmap_locked)
- mmap_read_lock(userptr->notifier.mm);
-
- ret = hmm_range_fault(&hmm_range);
-
- if (!is_mm_mmap_locked)
- mmap_read_unlock(userptr->notifier.mm);
-
- if (ret == -EBUSY) {
- if (time_after(jiffies, timeout))
- break;
-
- continue;
- }
- break;
- }
-
- mmput(userptr->notifier.mm);
-
- if (ret)
- goto free_pfns;
-
- ret = xe_alloc_sg(vm->xe, &userptr->sgt, &hmm_range, &vm->userptr.notifier_lock);
- if (ret)
- goto free_pfns;
-
- ret = down_read_interruptible(&vm->userptr.notifier_lock);
- if (ret)
- goto free_st;
-
- if (mmu_interval_read_retry(hmm_range.notifier, hmm_range.notifier_seq)) {
- ret = -EAGAIN;
- goto out_unlock;
- }
-
- ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt,
- &vm->userptr.notifier_lock, write);
- if (ret)
- goto out_unlock;
-
- userptr->sg = &userptr->sgt;
- xe_hmm_userptr_set_mapped(uvma);
- userptr->notifier_seq = hmm_range.notifier_seq;
- up_read(&vm->userptr.notifier_lock);
- kvfree(pfns);
- return 0;
-
-out_unlock:
- up_read(&vm->userptr.notifier_lock);
-free_st:
- sg_free_table(&userptr->sgt);
-free_pfns:
- kvfree(pfns);
- return ret;
-}
diff --git a/drivers/gpu/drm/xe/xe_hmm.h b/drivers/gpu/drm/xe/xe_hmm.h
deleted file mode 100644
index 0ea98d8e7bbc..000000000000
--- a/drivers/gpu/drm/xe/xe_hmm.h
+++ /dev/null
@@ -1,18 +0,0 @@
-/* SPDX-License-Identifier: MIT
- *
- * Copyright © 2024 Intel Corporation
- */
-
-#ifndef _XE_HMM_H_
-#define _XE_HMM_H_
-
-#include <linux/types.h>
-
-struct xe_userptr_vma;
-
-int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma, bool is_mm_mmap_locked);
-
-void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma);
-
-void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma);
-#endif
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 720c25bf48f2..92b6a4d63bb1 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -757,8 +757,8 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
if (!xe_vma_is_null(vma) && !range) {
if (xe_vma_is_userptr(vma))
- xe_res_first_sg(to_userptr_vma(vma)->userptr.sg, 0,
- xe_vma_size(vma), &curs);
+ xe_res_first_dma(to_userptr_vma(vma)->userptr.pages.dma_addr, 0,
+ xe_vma_size(vma), &curs);
else if (xe_bo_is_vram(bo) || xe_bo_is_stolen(bo))
xe_res_first(bo->ttm.resource, xe_vma_bo_offset(vma),
xe_vma_size(vma), &curs);
@@ -1029,7 +1029,7 @@ static void xe_pt_commit_locks_assert(struct xe_vma *vma)
xe_pt_commit_prepare_locks_assert(vma);
if (xe_vma_is_userptr(vma))
- lockdep_assert_held_read(&vm->userptr.notifier_lock);
+ xe_svm_assert_held_read(vm);
}
static void xe_pt_commit(struct xe_vma *vma,
@@ -1369,7 +1369,7 @@ static int vma_check_userptr(struct xe_vm *vm, struct xe_vma *vma,
struct xe_userptr_vma *uvma;
unsigned long notifier_seq;
- lockdep_assert_held_read(&vm->userptr.notifier_lock);
+ xe_svm_assert_held_read(vm);
if (!xe_vma_is_userptr(vma))
return 0;
@@ -1378,7 +1378,7 @@ static int vma_check_userptr(struct xe_vm *vm, struct xe_vma *vma,
if (xe_pt_userptr_inject_eagain(uvma))
xe_vma_userptr_force_invalidate(uvma);
- notifier_seq = uvma->userptr.notifier_seq;
+ notifier_seq = uvma->userptr.pages.notifier_seq;
if (!mmu_interval_read_retry(&uvma->userptr.notifier,
notifier_seq))
@@ -1399,7 +1399,7 @@ static int op_check_userptr(struct xe_vm *vm, struct xe_vma_op *op,
{
int err = 0;
- lockdep_assert_held_read(&vm->userptr.notifier_lock);
+ xe_svm_assert_held_read(vm);
switch (op->base.op) {
case DRM_GPUVA_OP_MAP:
@@ -1440,12 +1440,12 @@ static int xe_pt_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
if (err)
return err;
- down_read(&vm->userptr.notifier_lock);
+ down_read(&vm->svm.gpusvm.notifier_lock);
list_for_each_entry(op, &vops->list, link) {
err = op_check_userptr(vm, op, pt_update_ops);
if (err) {
- up_read(&vm->userptr.notifier_lock);
+ up_read(&vm->svm.gpusvm.notifier_lock);
break;
}
}
@@ -2172,7 +2172,7 @@ static void bind_op_commit(struct xe_vm *vm, struct xe_tile *tile,
if (invalidate_on_bind)
vma->tile_invalidated |= BIT(tile->id);
if (xe_vma_is_userptr(vma)) {
- lockdep_assert_held_read(&vm->userptr.notifier_lock);
+ xe_svm_assert_held_read(vm);
to_userptr_vma(vma)->userptr.initial_bind = true;
}
@@ -2208,7 +2208,7 @@ static void unbind_op_commit(struct xe_vm *vm, struct xe_tile *tile,
if (!vma->tile_present) {
list_del_init(&vma->combined_links.rebind);
if (xe_vma_is_userptr(vma)) {
- lockdep_assert_held_read(&vm->userptr.notifier_lock);
+ xe_svm_assert_held_read(vm);
spin_lock(&vm->userptr.invalidated_lock);
list_del_init(&to_userptr_vma(vma)->userptr.invalidate_link);
@@ -2457,7 +2457,7 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
if (pt_update_ops->needs_svm_lock)
xe_svm_notifier_unlock(vm);
if (pt_update_ops->needs_userptr_lock)
- up_read(&vm->userptr.notifier_lock);
+ up_read(&vm->svm.gpusvm.notifier_lock);
return fence;
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 74301064004c..73a1ac850957 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -606,22 +606,26 @@ int xe_svm_init(struct xe_vm *vm)
{
int err;
- spin_lock_init(&vm->svm.garbage_collector.lock);
- INIT_LIST_HEAD(&vm->svm.garbage_collector.range_list);
- INIT_WORK(&vm->svm.garbage_collector.work,
- xe_svm_garbage_collector_work_func);
+ if (vm->flags & XE_VM_FLAG_FAULT_MODE) {
+ spin_lock_init(&vm->svm.garbage_collector.lock);
+ INIT_LIST_HEAD(&vm->svm.garbage_collector.range_list);
+ INIT_WORK(&vm->svm.garbage_collector.work,
+ xe_svm_garbage_collector_work_func);
- err = drm_gpusvm_init(&vm->svm.gpusvm, "Xe SVM", &vm->xe->drm,
- current->mm, xe_svm_devm_owner(vm->xe), 0,
- vm->size, xe_modparam.svm_notifier_size * SZ_1M,
- &gpusvm_ops, fault_chunk_sizes,
- ARRAY_SIZE(fault_chunk_sizes));
- if (err)
- return err;
+ err = drm_gpusvm_init(&vm->svm.gpusvm, "Xe SVM", &vm->xe->drm,
+ current->mm, xe_svm_devm_owner(vm->xe), 0,
+ vm->size,
+ xe_modparam.svm_notifier_size * SZ_1M,
+ &gpusvm_ops, fault_chunk_sizes,
+ ARRAY_SIZE(fault_chunk_sizes));
+ drm_gpusvm_driver_set_lock(&vm->svm.gpusvm, &vm->lock);
+ } else {
+ err = drm_gpusvm_init(&vm->svm.gpusvm, "Xe SVM (simple)",
+ &vm->xe->drm, NULL, NULL, 0, 0, 0, NULL,
+ NULL, 0);
+ }
- drm_gpusvm_driver_set_lock(&vm->svm.gpusvm, &vm->lock);
-
- return 0;
+ return err;
}
/**
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index bf9792b66869..247bf19361e5 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -87,20 +87,15 @@ static inline bool xe_svm_range_has_dma_mapping(struct xe_svm_range *range)
return range->base.pages.flags.has_dma_mapping;
}
-#define xe_svm_assert_in_notifier(vm__) \
- lockdep_assert_held_write(&(vm__)->svm.gpusvm.notifier_lock)
-
-#define xe_svm_notifier_lock(vm__) \
- drm_gpusvm_notifier_lock(&(vm__)->svm.gpusvm)
-
-#define xe_svm_notifier_unlock(vm__) \
- drm_gpusvm_notifier_unlock(&(vm__)->svm.gpusvm)
-
void xe_svm_flush(struct xe_vm *vm);
#else
#include <linux/interval_tree.h>
+#include "xe_assert.h"
+#include "xe_vm.h"
+#include "xe_vm_types.h"
+
struct drm_pagemap_device_addr;
struct xe_bo;
struct xe_gt;
@@ -136,12 +131,22 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
static inline
int xe_svm_init(struct xe_vm *vm)
{
+#if IS_ENABLED(CONFIG_DRM_GPUSVM)
+ return drm_gpusvm_init(&vm->svm.gpusvm, "Xe SVM (simple)", &vm->xe->drm,
+ NULL, NULL, 0, 0, 0, NULL, NULL, 0);
+#else
return 0;
+#endif
}
static inline
void xe_svm_fini(struct xe_vm *vm)
{
+ xe_assert(vm->xe, xe_vm_is_closed(vm));
+
+#if IS_ENABLED(CONFIG_DRM_GPUSVM)
+ drm_gpusvm_fini(&vm->svm.gpusvm);
+#endif
}
static inline
@@ -174,19 +179,46 @@ void xe_svm_range_debug(struct xe_svm_range *range, const char *operation)
{
}
+static inline void xe_svm_flush(struct xe_vm *vm)
+{
+}
+#endif /* CONFIG_DRM_XE_GPUSVM */
+
+#if IS_ENABLED(CONFIG_DRM_GPUSVM) /* Need to support userptr without XE_GPUSVM */
+#define xe_svm_assert_in_notifier(vm__) \
+ lockdep_assert_held_write(&(vm__)->svm.gpusvm.notifier_lock)
+
+#define xe_svm_assert_held_read(vm__) \
+ lockdep_assert_held_read(&(vm__)->svm.gpusvm.notifier_lock)
+
+#define xe_svm_notifier_lock(vm__) \
+ drm_gpusvm_notifier_lock(&(vm__)->svm.gpusvm)
+
+#define xe_svm_notifier_lock_interruptible(vm__) \
+ down_read_interruptible(&(vm__)->svm.gpusvm.notifier_lock)
+
+#define xe_svm_notifier_unlock(vm__) \
+ drm_gpusvm_notifier_unlock(&(vm__)->svm.gpusvm)
+#else
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
+static inline void xe_svm_assert_held_read(struct xe_vm *vm)
+{
+}
+
static inline void xe_svm_notifier_lock(struct xe_vm *vm)
{
}
+static inline int xe_svm_notifier_lock_interruptible(struct xe_vm *vm)
+{
+ return 0;
+}
+
static inline void xe_svm_notifier_unlock(struct xe_vm *vm)
{
}
+#endif /* CONFIG_DRM_GPUSVM */
-static inline void xe_svm_flush(struct xe_vm *vm)
-{
-}
-#endif
#endif
diff --git a/drivers/gpu/drm/xe/xe_userptr.c b/drivers/gpu/drm/xe/xe_userptr.c
index f573842a3d4b..ccf453f96795 100644
--- a/drivers/gpu/drm/xe/xe_userptr.c
+++ b/drivers/gpu/drm/xe/xe_userptr.c
@@ -7,7 +7,6 @@
#include <linux/mm.h>
-#include "xe_hmm.h"
#include "xe_trace_bo.h"
/**
@@ -25,7 +24,7 @@
int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma)
{
return mmu_interval_check_retry(&uvma->userptr.notifier,
- uvma->userptr.notifier_seq) ?
+ uvma->userptr.pages.notifier_seq) ?
-EAGAIN : 0;
}
@@ -34,11 +33,22 @@ int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma)
struct xe_vma *vma = &uvma->vma;
struct xe_vm *vm = xe_vma_vm(vma);
struct xe_device *xe = vm->xe;
+ struct drm_gpusvm_ctx ctx = {
+ .read_only = xe_vma_read_only(vma),
+ };
lockdep_assert_held(&vm->lock);
xe_assert(xe, xe_vma_is_userptr(vma));
- return xe_hmm_userptr_populate_range(uvma, false);
+ if (vma->gpuva.flags & XE_VMA_DESTROYED)
+ return 0;
+
+ return drm_gpusvm_get_pages(&vm->svm.gpusvm, &uvma->userptr.pages,
+ uvma->userptr.notifier.mm,
+ &uvma->userptr.notifier,
+ xe_vma_userptr(vma),
+ xe_vma_userptr(vma) + xe_vma_size(vma),
+ &ctx);
}
static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma)
@@ -47,6 +57,10 @@ static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uv
struct xe_vma *vma = &uvma->vma;
struct dma_resv_iter cursor;
struct dma_fence *fence;
+ struct drm_gpusvm_ctx ctx = {
+ .in_notifier = true,
+ .read_only = xe_vma_read_only(vma),
+ };
long err;
/*
@@ -83,7 +97,8 @@ static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uv
XE_WARN_ON(err);
}
- xe_hmm_userptr_unmap(uvma);
+ drm_gpusvm_unmap_pages(&vm->svm.gpusvm, &uvma->userptr.pages,
+ xe_vma_size(vma) >> PAGE_SHIFT, &ctx);
}
#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
@@ -263,7 +278,6 @@ int xe_userptr_setup(struct xe_userptr_vma *uvma, unsigned long start,
INIT_LIST_HEAD(&userptr->invalidate_link);
INIT_LIST_HEAD(&userptr->repin_link);
- mutex_init(&userptr->unmap_mutex);
err = mmu_interval_notifier_insert(&userptr->notifier, current->mm,
start, range,
@@ -271,17 +285,18 @@ int xe_userptr_setup(struct xe_userptr_vma *uvma, unsigned long start,
if (err)
return err;
- userptr->notifier_seq = LONG_MAX;
+ userptr->pages.notifier_seq = LONG_MAX;
return 0;
}
void xe_userptr_remove(struct xe_userptr_vma *uvma)
{
+ struct xe_vm *vm = xe_vma_vm(&uvma->vma);
struct xe_userptr *userptr = &uvma->userptr;
- if (userptr->sg)
- xe_hmm_userptr_free_sg(uvma);
+ drm_gpusvm_free_pages(&vm->svm.gpusvm, &uvma->userptr.pages,
+ xe_vma_size(&uvma->vma) >> PAGE_SHIFT);
/*
* Since userptr pages are not pinned, we can't remove
@@ -289,7 +304,6 @@ void xe_userptr_remove(struct xe_userptr_vma *uvma)
* them anymore
*/
mmu_interval_notifier_remove(&userptr->notifier);
- mutex_destroy(&userptr->unmap_mutex);
}
void xe_userptr_destroy(struct xe_userptr_vma *uvma)
diff --git a/drivers/gpu/drm/xe/xe_userptr.h b/drivers/gpu/drm/xe/xe_userptr.h
index 83d17b58ed16..47c883e5fa33 100644
--- a/drivers/gpu/drm/xe/xe_userptr.h
+++ b/drivers/gpu/drm/xe/xe_userptr.h
@@ -12,6 +12,8 @@
#include <linux/scatterlist.h>
#include <linux/spinlock.h>
+#include <drm/drm_gpusvm.h>
+
struct xe_vm;
struct xe_vma;
struct xe_userptr_vma;
@@ -23,11 +25,6 @@ struct xe_userptr_vm {
* and needs repinning. Protected by @lock.
*/
struct list_head repin_list;
- /**
- * @notifier_lock: protects notifier in write mode and
- * submission in read mode.
- */
- struct rw_semaphore notifier_lock;
/**
* @userptr.invalidated_lock: Protects the
* @userptr.invalidated list.
@@ -51,31 +48,27 @@ struct xe_userptr {
struct list_head invalidate_link;
/** @userptr: link into VM repin list if userptr. */
struct list_head repin_link;
+ /**
+ * @pages: gpusvm pages for this user pointer.
+ */
+ struct drm_gpusvm_pages pages;
/**
* @notifier: MMU notifier for user pointer (invalidation call back)
*/
struct mmu_interval_notifier notifier;
- /** @sgt: storage for a scatter gather table */
- struct sg_table sgt;
- /** @sg: allocated scatter gather table */
- struct sg_table *sg;
- /** @notifier_seq: notifier sequence number */
- unsigned long notifier_seq;
- /** @unmap_mutex: Mutex protecting dma-unmapping */
- struct mutex unmap_mutex;
+
/**
* @initial_bind: user pointer has been bound at least once.
- * write: vm->userptr.notifier_lock in read mode and vm->resv held.
- * read: vm->userptr.notifier_lock in write mode or vm->resv held.
+ * write: vm->svm.gpusvm.notifier_lock in read mode and vm->resv held.
+ * read: vm->svm.gpusvm.notifier_lock in write mode or vm->resv held.
*/
bool initial_bind;
- /** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */
- bool mapped;
#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
u32 divisor;
#endif
};
+#if IS_ENABLED(CONFIG_DRM_GPUSVM)
void xe_userptr_remove(struct xe_userptr_vma *uvma);
int xe_userptr_setup(struct xe_userptr_vma *uvma, unsigned long start,
unsigned long range);
@@ -86,6 +79,23 @@ int __xe_vm_userptr_needs_repin(struct xe_vm *vm);
int xe_vm_userptr_check_repin(struct xe_vm *vm);
int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma);
int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma);
+#else
+static inline void xe_userptr_remove(struct xe_userptr_vma *uvma) {}
+
+static inline int xe_userptr_setup(struct xe_userptr_vma *uvma,
+ unsigned long start, unsigned long range)
+{
+ return -ENODEV;
+}
+
+static inline void xe_userptr_destroy(struct xe_userptr_vma *uvma) {}
+
+static inline int xe_vm_userptr_pin(struct xe_vm *vm) { return 0; }
+static inline int __xe_vm_userptr_needs_repin(struct xe_vm *vm) { return 0; }
+static inline int xe_vm_userptr_check_repin(struct xe_vm *vm) { return 0; }
+static inline int xe_vma_userptr_pin_pages(struct xe_userptr_vma *uvma) { return -ENODEV; }
+static inline int xe_vma_userptr_check_repin(struct xe_userptr_vma *uvma) { return -ENODEV; };
+#endif
#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index e5bf4ddc9d86..1373a51f75dd 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -39,9 +39,7 @@
#include "xe_svm.h"
#include "xe_sync.h"
#include "xe_trace_bo.h"
-#include "xe_userptr.h"
#include "xe_wa.h"
-#include "xe_hmm.h"
static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
{
@@ -219,7 +217,7 @@ int xe_vm_add_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
++vm->preempt.num_exec_queues;
q->lr.pfence = pfence;
- down_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_lock(vm);
drm_gpuvm_resv_add_fence(&vm->gpuvm, exec, pfence,
DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_BOOKKEEP);
@@ -233,7 +231,7 @@ int xe_vm_add_compute_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
if (wait)
dma_fence_enable_sw_signaling(pfence);
- up_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_unlock(vm);
out_fini:
drm_exec_fini(exec);
@@ -497,9 +495,9 @@ static void preempt_rebind_work_func(struct work_struct *w)
(!(__tries)++ || __xe_vm_userptr_needs_repin(__vm)) : \
__xe_vm_userptr_needs_repin(__vm))
- down_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_lock(vm);
if (retry_required(tries, vm)) {
- up_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_unlock(vm);
err = -EAGAIN;
goto out_unlock;
}
@@ -513,7 +511,7 @@ static void preempt_rebind_work_func(struct work_struct *w)
/* Point of no return. */
arm_preempt_fences(vm, &preempt_fences);
resume_and_reinstall_preempt_fences(vm, &exec);
- up_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_unlock(vm);
out_unlock:
drm_exec_fini(&exec);
@@ -1389,7 +1387,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
INIT_LIST_HEAD(&vm->userptr.repin_list);
INIT_LIST_HEAD(&vm->userptr.invalidated);
- init_rwsem(&vm->userptr.notifier_lock);
spin_lock_init(&vm->userptr.invalidated_lock);
ttm_lru_bulk_move_init(&vm->lru_bulk_move);
@@ -1489,11 +1486,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
}
}
- if (flags & XE_VM_FLAG_FAULT_MODE) {
- err = xe_svm_init(vm);
- if (err)
- goto err_close;
- }
+ err = xe_svm_init(vm);
+ if (err)
+ goto err_close;
if (number_tiles > 1)
vm->composite_fence_ctx = dma_fence_context_alloc(1);
@@ -1599,9 +1594,9 @@ void xe_vm_close_and_put(struct xe_vm *vm)
vma = gpuva_to_vma(gpuva);
if (xe_vma_has_no_bo(vma)) {
- down_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_lock(vm);
vma->gpuva.flags |= XE_VMA_DESTROYED;
- up_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_unlock(vm);
}
xe_vm_remove_vma(vm, vma);
@@ -1645,8 +1640,7 @@ void xe_vm_close_and_put(struct xe_vm *vm)
xe_vma_destroy_unlocked(vma);
}
- if (xe_vm_in_fault_mode(vm))
- xe_svm_fini(vm);
+ xe_svm_fini(vm);
up_write(&vm->lock);
@@ -1877,9 +1871,9 @@ static const u32 region_to_mem_type[] = {
static void prep_vma_destroy(struct xe_vm *vm, struct xe_vma *vma,
bool post_commit)
{
- down_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_lock(vm);
vma->gpuva.flags |= XE_VMA_DESTROYED;
- up_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_unlock(vm);
if (post_commit)
xe_vm_remove_vma(vm, vma);
}
@@ -2375,9 +2369,9 @@ static void xe_vma_op_unwind(struct xe_vm *vm, struct xe_vma_op *op,
struct xe_vma *vma = gpuva_to_vma(op->base.unmap.va);
if (vma) {
- down_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_lock(vm);
vma->gpuva.flags &= ~XE_VMA_DESTROYED;
- up_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_unlock(vm);
if (post_commit)
xe_vm_insert_vma(vm, vma);
}
@@ -2396,9 +2390,9 @@ static void xe_vma_op_unwind(struct xe_vm *vm, struct xe_vma_op *op,
xe_vma_destroy_unlocked(op->remap.next);
}
if (vma) {
- down_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_lock(vm);
vma->gpuva.flags &= ~XE_VMA_DESTROYED;
- up_read(&vm->userptr.notifier_lock);
+ xe_svm_notifier_unlock(vm);
if (post_commit)
xe_vm_insert_vma(vm, vma);
}
@@ -2903,6 +2897,8 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
op == DRM_XE_VM_BIND_OP_MAP_USERPTR) ||
XE_IOCTL_DBG(xe, coh_mode == XE_COH_NONE &&
op == DRM_XE_VM_BIND_OP_MAP_USERPTR) ||
+ XE_IOCTL_DBG(xe, op == DRM_XE_VM_BIND_OP_MAP_USERPTR &&
+ !IS_ENABLED(CONFIG_DRM_GPUSVM)) ||
XE_IOCTL_DBG(xe, obj &&
op == DRM_XE_VM_BIND_OP_PREFETCH) ||
XE_IOCTL_DBG(xe, prefetch_region &&
@@ -3379,7 +3375,7 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
if (xe_vma_is_userptr(vma)) {
WARN_ON_ONCE(!mmu_interval_check_retry
(&to_userptr_vma(vma)->userptr.notifier,
- to_userptr_vma(vma)->userptr.notifier_seq));
+ to_userptr_vma(vma)->userptr.pages.notifier_seq));
WARN_ON_ONCE(!dma_resv_test_signaled(xe_vm_resv(xe_vma_vm(vma)),
DMA_RESV_USAGE_BOOKKEEP));
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 65e889f2537d..5d0391001e33 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -79,7 +79,7 @@ struct xe_vma {
/**
* @tile_present: GT mask of binding are present for this VMA.
* protected by vm->lock, vm->resv and for userptrs,
- * vm->userptr.notifier_lock for writing. Needs either for reading,
+ * vm->svm.gpusvm.notifier_lock for writing. Needs either for reading,
* but if reading is done under the vm->lock only, it needs to be held
* in write mode.
*/
--
2.49.0
^ permalink raw reply related	[flat|nested] 17+ messages in thread
* Re: [PATCH v4 7/8] drm/xe/userptr: replace xe_hmm with gpusvm
2025-05-12 15:06 ` [PATCH v4 7/8] drm/xe/userptr: replace xe_hmm with gpusvm Matthew Auld
@ 2025-06-09 17:09 ` Matthew Brost
0 siblings, 0 replies; 17+ messages in thread
From: Matthew Brost @ 2025-06-09 17:09 UTC (permalink / raw)
To: Matthew Auld
Cc: intel-xe, dri-devel, Himal Prasad Ghimiray, Thomas Hellström,
Dafna Hirschfeld
On Mon, May 12, 2025 at 04:06:45PM +0100, Matthew Auld wrote:
> Goal here is to cut over to gpusvm and remove xe_hmm, relying instead on
> common code. The core facilities we need are get_pages(), unmap_pages()
> and free_pages() for a given userptr range, plus a vm level notifier
> lock, which is now provided by gpusvm.
>
> v2:
> - Reuse the same SVM vm struct we use for full SVM, that way we can
> use the same lock (Matt B & Himal)
> v3:
> - Re-use svm_init/fini for userptr.
> v4:
> - Allow building xe without userptr if we are missing DRM_GPUSVM
> config. (Matt B)
> - Always make .read_only match xe_vma_read_only() for the ctx. (Dafna)
>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Dafna Hirschfeld <dafna.hirschfeld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/Kconfig | 2 +-
> drivers/gpu/drm/xe/Makefile | 3 +-
> drivers/gpu/drm/xe/xe_exec.c | 5 +-
> drivers/gpu/drm/xe/xe_hmm.c | 325 -------------------------------
> drivers/gpu/drm/xe/xe_hmm.h | 18 --
> drivers/gpu/drm/xe/xe_pt.c | 22 +--
> drivers/gpu/drm/xe/xe_svm.c | 32 +--
> drivers/gpu/drm/xe/xe_svm.h | 58 ++++--
> drivers/gpu/drm/xe/xe_userptr.c | 32 ++-
> drivers/gpu/drm/xe/xe_userptr.h | 44 +++--
> drivers/gpu/drm/xe/xe_vm.c | 44 ++---
> drivers/gpu/drm/xe/xe_vm_types.h | 2 +-
> 12 files changed, 150 insertions(+), 437 deletions(-)
> delete mode 100644 drivers/gpu/drm/xe/xe_hmm.c
> delete mode 100644 drivers/gpu/drm/xe/xe_hmm.h
>
> [snip]
>
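A minimal sketch of the resulting userptr life cycle on top of the gpusvm
pages API, using only the calls as they appear in the patch above. Error
handling, retry logic and the surrounding locking are elided, so treat this
as illustrative rather than as the driver's literal flow:

/* Illustrative only: userptr life cycle against the gpusvm pages API. */
static int sketch_userptr_cycle(struct xe_vm *vm, struct xe_userptr_vma *uvma)
{
	struct xe_vma *vma = &uvma->vma;
	unsigned long npages = xe_vma_size(vma) >> PAGE_SHIFT;
	struct drm_gpusvm_ctx ctx = {
		.read_only = xe_vma_read_only(vma),
	};
	int err;

	/* Fault in and dma-map the CPU pages backing the range. */
	err = drm_gpusvm_get_pages(&vm->svm.gpusvm, &uvma->userptr.pages,
				   uvma->userptr.notifier.mm,
				   &uvma->userptr.notifier,
				   xe_vma_userptr(vma),
				   xe_vma_userptr(vma) + xe_vma_size(vma),
				   &ctx);
	if (err)
		return err;

	/* ... program page tables under vm->svm.gpusvm.notifier_lock ... */

	/* Notifier invalidation: dma-unmap; the pages struct stays around. */
	ctx.in_notifier = true;
	drm_gpusvm_unmap_pages(&vm->svm.gpusvm, &uvma->userptr.pages,
			       npages, &ctx);

	/* VMA teardown: release everything. */
	drm_gpusvm_free_pages(&vm->svm.gpusvm, &uvma->userptr.pages, npages);

	return 0;
}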
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v4 8/8] drm/xe/pt: unify xe_pt_svm_pre_commit with userptr
2025-05-12 15:06 [PATCH v4 0/8] Replace xe_hmm with gpusvm Matthew Auld
` (6 preceding siblings ...)
2025-05-12 15:06 ` [PATCH v4 7/8] drm/xe/userptr: replace xe_hmm with gpusvm Matthew Auld
@ 2025-05-12 15:06 ` Matthew Auld
2025-05-12 16:27 ` ✓ CI.Patch_applied: success for Replace xe_hmm with gpusvm (rev4) Patchwork
` (5 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: Matthew Auld @ 2025-05-12 15:06 UTC (permalink / raw)
To: intel-xe
Cc: dri-devel, Matthew Brost, Himal Prasad Ghimiray,
Thomas Hellström
We now use the same notifier lock for SVM and userptr; with that we can
combine xe_pt_userptr_pre_commit and xe_pt_svm_pre_commit.
v2: (Matt B)
- Re-use xe_svm_notifier_lock/unlock for userptr.
- Combine svm/userptr handling further down into op_check_svm_userptr.
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 90 ++++++++++----------------------
drivers/gpu/drm/xe/xe_pt_types.h | 2 -
2 files changed, 29 insertions(+), 63 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 92b6a4d63bb1..6642fdcc34fd 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -1394,8 +1394,8 @@ static int vma_check_userptr(struct xe_vm *vm, struct xe_vma *vma,
return 0;
}
-static int op_check_userptr(struct xe_vm *vm, struct xe_vma_op *op,
- struct xe_vm_pgtable_update_ops *pt_update)
+static int op_check_svm_userptr(struct xe_vm *vm, struct xe_vma_op *op,
+ struct xe_vm_pgtable_update_ops *pt_update)
{
int err = 0;
@@ -1420,6 +1420,24 @@ static int op_check_userptr(struct xe_vm *vm, struct xe_vma_op *op,
err = vma_check_userptr(vm, gpuva_to_vma(op->base.prefetch.va),
pt_update);
break;
+#if IS_ENABLED(CONFIG_DRM_XE_GPUSVM)
+ case DRM_GPUVA_OP_DRIVER:
+ if (op->subop == XE_VMA_SUBOP_MAP_RANGE) {
+ struct xe_svm_range *range = op->map_range.range;
+
+ xe_svm_range_debug(range, "PRE-COMMIT");
+
+ xe_assert(vm->xe,
+ xe_vma_is_cpu_addr_mirror(op->map_range.vma));
+ xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
+
+ if (!xe_svm_range_pages_valid(range)) {
+ xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
+ err = -EAGAIN;
+ }
+ }
+ break;
+#endif
default:
drm_warn(&vm->xe->drm, "NOT POSSIBLE");
}
@@ -1427,7 +1445,7 @@ static int op_check_userptr(struct xe_vm *vm, struct xe_vma_op *op,
return err;
}
-static int xe_pt_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
+static int xe_pt_svm_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
{
struct xe_vm *vm = pt_update->vops->vm;
struct xe_vma_ops *vops = pt_update->vops;
@@ -1440,12 +1458,12 @@ static int xe_pt_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
if (err)
return err;
- down_read(&vm->svm.gpusvm.notifier_lock);
+ xe_svm_notifier_lock(vm);
list_for_each_entry(op, &vops->list, link) {
- err = op_check_userptr(vm, op, pt_update_ops);
+ err = op_check_svm_userptr(vm, op, pt_update_ops);
if (err) {
- up_read(&vm->svm.gpusvm.notifier_lock);
+ xe_svm_notifier_unlock(vm);
break;
}
}
@@ -1453,42 +1471,6 @@ static int xe_pt_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
return err;
}
-#if IS_ENABLED(CONFIG_DRM_XE_GPUSVM)
-static int xe_pt_svm_pre_commit(struct xe_migrate_pt_update *pt_update)
-{
- struct xe_vm *vm = pt_update->vops->vm;
- struct xe_vma_ops *vops = pt_update->vops;
- struct xe_vma_op *op;
- int err;
-
- err = xe_pt_pre_commit(pt_update);
- if (err)
- return err;
-
- xe_svm_notifier_lock(vm);
-
- list_for_each_entry(op, &vops->list, link) {
- struct xe_svm_range *range = op->map_range.range;
-
- if (op->subop == XE_VMA_SUBOP_UNMAP_RANGE)
- continue;
-
- xe_svm_range_debug(range, "PRE-COMMIT");
-
- xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(op->map_range.vma));
- xe_assert(vm->xe, op->subop == XE_VMA_SUBOP_MAP_RANGE);
-
- if (!xe_svm_range_pages_valid(range)) {
- xe_svm_range_debug(range, "PRE-COMMIT - RETRY");
- xe_svm_notifier_unlock(vm);
- return -EAGAIN;
- }
- }
-
- return 0;
-}
-#endif
-
struct invalidation_fence {
struct xe_gt_tlb_invalidation_fence base;
struct xe_gt *gt;
@@ -1859,7 +1841,7 @@ static int bind_op_prepare(struct xe_vm *vm, struct xe_tile *tile,
xe_vma_start(vma),
xe_vma_end(vma));
++pt_update_ops->current_op;
- pt_update_ops->needs_userptr_lock |= xe_vma_is_userptr(vma);
+ pt_update_ops->needs_svm_lock |= xe_vma_is_userptr(vma);
/*
* If rebind, we have to invalidate TLB on !LR vms to invalidate
@@ -1967,7 +1949,7 @@ static int unbind_op_prepare(struct xe_tile *tile,
xe_pt_update_ops_rfence_interval(pt_update_ops, xe_vma_start(vma),
xe_vma_end(vma));
++pt_update_ops->current_op;
- pt_update_ops->needs_userptr_lock |= xe_vma_is_userptr(vma);
+ pt_update_ops->needs_svm_lock |= xe_vma_is_userptr(vma);
pt_update_ops->needs_invalidation = true;
xe_pt_commit_prepare_unbind(vma, pt_op->entries, pt_op->num_entries);
@@ -2290,22 +2272,12 @@ static const struct xe_migrate_pt_update_ops migrate_ops = {
.pre_commit = xe_pt_pre_commit,
};
-static const struct xe_migrate_pt_update_ops userptr_migrate_ops = {
+static const struct xe_migrate_pt_update_ops svm_userptr_migrate_ops = {
.populate = xe_vm_populate_pgtable,
.clear = xe_migrate_clear_pgtable_callback,
- .pre_commit = xe_pt_userptr_pre_commit,
+ .pre_commit = xe_pt_svm_userptr_pre_commit,
};
-#if IS_ENABLED(CONFIG_DRM_XE_GPUSVM)
-static const struct xe_migrate_pt_update_ops svm_migrate_ops = {
- .populate = xe_vm_populate_pgtable,
- .clear = xe_migrate_clear_pgtable_callback,
- .pre_commit = xe_pt_svm_pre_commit,
-};
-#else
-static const struct xe_migrate_pt_update_ops svm_migrate_ops;
-#endif
-
/**
* xe_pt_update_ops_run() - Run PT update operations
* @tile: Tile of PT update operations
@@ -2332,9 +2304,7 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
int err = 0, i;
struct xe_migrate_pt_update update = {
.ops = pt_update_ops->needs_svm_lock ?
- &svm_migrate_ops :
- pt_update_ops->needs_userptr_lock ?
- &userptr_migrate_ops :
+ &svm_userptr_migrate_ops :
&migrate_ops,
.vops = vops,
.tile_id = tile->id,
@@ -2456,8 +2426,6 @@ xe_pt_update_ops_run(struct xe_tile *tile, struct xe_vma_ops *vops)
if (pt_update_ops->needs_svm_lock)
xe_svm_notifier_unlock(vm);
- if (pt_update_ops->needs_userptr_lock)
- up_read(&vm->svm.gpusvm.notifier_lock);
return fence;
diff --git a/drivers/gpu/drm/xe/xe_pt_types.h b/drivers/gpu/drm/xe/xe_pt_types.h
index 69eab6f37cfe..dc0b2d8c3af8 100644
--- a/drivers/gpu/drm/xe/xe_pt_types.h
+++ b/drivers/gpu/drm/xe/xe_pt_types.h
@@ -106,8 +106,6 @@ struct xe_vm_pgtable_update_ops {
u32 current_op;
/** @needs_svm_lock: Needs SVM lock */
bool needs_svm_lock;
- /** @needs_userptr_lock: Needs userptr lock */
- bool needs_userptr_lock;
/** @needs_invalidation: Needs invalidation */
bool needs_invalidation;
/**
--
2.49.0
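
For readers following along: the xe_pt_svm_pre_commit() removed above gets folded
into the userptr path, so a single pre-commit now runs under one notifier lock. A
minimal sketch of what the combined function could look like — assuming the
op_check_svm_userptr() helper named in the v2 changelog, whose exact signature is
not shown in this excerpt — is:

static int xe_pt_svm_userptr_pre_commit(struct xe_migrate_pt_update *pt_update)
{
	struct xe_vm *vm = pt_update->vops->vm;
	struct xe_vma_ops *vops = pt_update->vops;
	struct xe_vma_op *op;
	int err;

	err = xe_pt_pre_commit(pt_update);
	if (err)
		return err;

	/* One notifier lock now covers both SVM ranges and userptr VMAs;
	 * on success it stays held and is dropped later in
	 * xe_pt_update_ops_run(), matching the removed SVM variant.
	 */
	xe_svm_notifier_lock(vm);

	list_for_each_entry(op, &vops->list, link) {
		/* Assumed helper per the changelog: validates either an SVM
		 * range or a userptr VMA, returning -EAGAIN when the pages
		 * went stale so the caller can retry the bind.
		 */
		err = op_check_svm_userptr(vm, op);
		if (err) {
			xe_svm_notifier_unlock(vm);
			return err;
		}
	}

	return 0;
}

The error path mirrors the removed function: unlock and propagate -EAGAIN so the
bind can be retried once the pages are revalidated.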
* ✓ CI.Patch_applied: success for Replace xe_hmm with gpusvm (rev4)
2025-05-12 15:06 [PATCH v4 0/8] Replace xe_hmm with gpusvm Matthew Auld
` (7 preceding siblings ...)
2025-05-12 15:06 ` [PATCH v4 8/8] drm/xe/pt: unify xe_pt_svm_pre_commit with userptr Matthew Auld
@ 2025-05-12 16:27 ` Patchwork
2025-05-12 16:27 ` ✗ CI.checkpatch: warning " Patchwork
` (4 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-05-12 16:27 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-xe
== Series Details ==
Series: Replace xe_hmm with gpusvm (rev4)
URL : https://patchwork.freedesktop.org/series/146553/
State : success
== Summary ==
=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: f3767db51c5d drm-tip: 2025y-05m-12d-14h-06m-28s UTC integration manifest
=== git am output follows ===
Applying: drm/gpusvm: fix hmm_pfn_to_map_order() usage
Applying: drm/gpusvm: use more selective dma dir in get_pages()
Applying: drm/gpusvm: pull out drm_gpusvm_pages substructure
Applying: drm/gpusvm: refactor core API to use pages struct
Applying: drm/gpusvm: export drm_gpusvm_pages API
Applying: drm/xe/vm: split userptr bits into separate file
Applying: drm/xe/userptr: replace xe_hmm with gpusvm
Applying: drm/xe/pt: unify xe_pt_svm_pre_commit with userptr
* ✗ CI.checkpatch: warning for Replace xe_hmm with gpusvm (rev4)
2025-05-12 15:06 [PATCH v4 0/8] Replace xe_hmm with gpusvm Matthew Auld
` (8 preceding siblings ...)
2025-05-12 16:27 ` ✓ CI.Patch_applied: success for Replace xe_hmm with gpusvm (rev4) Patchwork
@ 2025-05-12 16:27 ` Patchwork
2025-05-12 16:28 ` ✓ CI.KUnit: success " Patchwork
` (3 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-05-12 16:27 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-xe
== Series Details ==
Series: Replace xe_hmm with gpusvm (rev4)
URL : https://patchwork.freedesktop.org/series/146553/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
202708c00696422fd217223bb679a353a5936e23
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 5ed3c4000526d24988aac7776d2403dc0d17658d
Author: Matthew Auld <matthew.auld@intel.com>
Date: Mon May 12 16:06:46 2025 +0100
drm/xe/pt: unify xe_pt_svm_pre_commit with userptr
We now use the same notifier lock for SVM and userptr; with that we can
combine xe_pt_userptr_pre_commit and xe_pt_svm_pre_commit.
v2: (Matt B)
- Re-use xe_svm_notifier_lock/unlock for userptr.
- Combine svm/userptr handling further down into op_check_svm_userptr.
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch f3767db51c5d8bc3ba3f2b342332ab329044fe5b drm-intel
b9609f762d84 drm/gpusvm: fix hmm_pfn_to_map_order() usage
-:19: WARNING:BAD_REPORTED_BY_LINK: Reported-by: should be immediately followed by Closes: with a URL to the report
#19:
Reported-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
total: 0 errors, 1 warnings, 0 checks, 51 lines checked
0422f6b8647e drm/gpusvm: use more selective dma dir in get_pages()
9d3adf3a3418 drm/gpusvm: pull out drm_gpusvm_pages substructure
e66b592b62d8 drm/gpusvm: refactor core API to use pages struct
1cc5a4d1e008 drm/gpusvm: export drm_gpusvm_pages API
0e4dfea89a24 drm/xe/vm: split userptr bits into separate file
-:44: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#44:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 855 lines checked
7a6528a1d89a drm/xe/userptr: replace xe_hmm with gpusvm
-:101: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#101:
deleted file mode 100644
total: 0 errors, 1 warnings, 0 checks, 583 lines checked
5ed3c4000526 drm/xe/pt: unify xe_pt_svm_pre_commit with userptr
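
The BAD_REPORTED_BY_LINK warning on patch 1 refers to the kernel's tag
convention: a Reported-by: trailer should be immediately followed by a Closes:
tag pointing at the report. With a placeholder URL (the real report link is not
part of this thread), the fixed trailer block would read:

Reported-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Closes: <URL of the original report>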
* ✓ CI.KUnit: success for Replace xe_hmm with gpusvm (rev4)
2025-05-12 15:06 [PATCH v4 0/8] Replace xe_hmm with gpusvm Matthew Auld
` (9 preceding siblings ...)
2025-05-12 16:27 ` ✗ CI.checkpatch: warning " Patchwork
@ 2025-05-12 16:28 ` Patchwork
2025-05-12 16:38 ` ✗ CI.Build: failure " Patchwork
` (2 subsequent siblings)
13 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-05-12 16:28 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-xe
== Series Details ==
Series: Replace xe_hmm with gpusvm (rev4)
URL : https://patchwork.freedesktop.org/series/146553/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[16:27:22] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[16:27:26] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[16:27:52] Starting KUnit Kernel (1/1)...
[16:27:52] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[16:27:53] ================== guc_buf (11 subtests) ===================
[16:27:53] [PASSED] test_smallest
[16:27:53] [PASSED] test_largest
[16:27:53] [PASSED] test_granular
[16:27:53] [PASSED] test_unique
[16:27:53] [PASSED] test_overlap
[16:27:53] [PASSED] test_reusable
[16:27:53] [PASSED] test_too_big
[16:27:53] [PASSED] test_flush
[16:27:53] [PASSED] test_lookup
[16:27:53] [PASSED] test_data
[16:27:53] [PASSED] test_class
[16:27:53] ===================== [PASSED] guc_buf =====================
[16:27:53] =================== guc_dbm (7 subtests) ===================
[16:27:53] [PASSED] test_empty
[16:27:53] [PASSED] test_default
[16:27:53] ======================== test_size ========================
[16:27:53] [PASSED] 4
[16:27:53] [PASSED] 8
[16:27:53] [PASSED] 32
[16:27:53] [PASSED] 256
[16:27:53] ==================== [PASSED] test_size ====================
[16:27:53] ======================= test_reuse ========================
[16:27:53] [PASSED] 4
[16:27:53] [PASSED] 8
[16:27:53] [PASSED] 32
[16:27:53] [PASSED] 256
[16:27:53] =================== [PASSED] test_reuse ====================
[16:27:53] =================== test_range_overlap ====================
[16:27:53] [PASSED] 4
[16:27:53] [PASSED] 8
[16:27:53] [PASSED] 32
[16:27:53] [PASSED] 256
[16:27:53] =============== [PASSED] test_range_overlap ================
[16:27:53] =================== test_range_compact ====================
[16:27:53] [PASSED] 4
[16:27:53] [PASSED] 8
[16:27:53] [PASSED] 32
[16:27:53] [PASSED] 256
[16:27:53] =============== [PASSED] test_range_compact ================
[16:27:53] ==================== test_range_spare =====================
[16:27:53] [PASSED] 4
[16:27:53] [PASSED] 8
[16:27:53] [PASSED] 32
[16:27:53] [PASSED] 256
[16:27:53] ================ [PASSED] test_range_spare =================
[16:27:53] ===================== [PASSED] guc_dbm =====================
[16:27:53] =================== guc_idm (6 subtests) ===================
[16:27:53] [PASSED] bad_init
[16:27:53] [PASSED] no_init
[16:27:53] [PASSED] init_fini
[16:27:53] [PASSED] check_used
[16:27:53] [PASSED] check_quota
[16:27:53] [PASSED] check_all
[16:27:53] ===================== [PASSED] guc_idm =====================
[16:27:53] ================== no_relay (3 subtests) ===================
[16:27:53] [PASSED] xe_drops_guc2pf_if_not_ready
[16:27:53] [PASSED] xe_drops_guc2vf_if_not_ready
[16:27:53] [PASSED] xe_rejects_send_if_not_ready
[16:27:53] ==================== [PASSED] no_relay =====================
[16:27:53] ================== pf_relay (14 subtests) ==================
[16:27:53] [PASSED] pf_rejects_guc2pf_too_short
[16:27:53] [PASSED] pf_rejects_guc2pf_too_long
[16:27:53] [PASSED] pf_rejects_guc2pf_no_payload
[16:27:53] [PASSED] pf_fails_no_payload
[16:27:53] [PASSED] pf_fails_bad_origin
[16:27:53] [PASSED] pf_fails_bad_type
[16:27:53] [PASSED] pf_txn_reports_error
[16:27:53] [PASSED] pf_txn_sends_pf2guc
[16:27:53] [PASSED] pf_sends_pf2guc
[16:27:53] [SKIPPED] pf_loopback_nop
[16:27:53] [SKIPPED] pf_loopback_echo
[16:27:53] [SKIPPED] pf_loopback_fail
[16:27:53] [SKIPPED] pf_loopback_busy
[16:27:53] [SKIPPED] pf_loopback_retry
[16:27:53] ==================== [PASSED] pf_relay =====================
[16:27:53] ================== vf_relay (3 subtests) ===================
[16:27:53] [PASSED] vf_rejects_guc2vf_too_short
[16:27:53] [PASSED] vf_rejects_guc2vf_too_long
[16:27:53] [PASSED] vf_rejects_guc2vf_no_payload
[16:27:53] ==================== [PASSED] vf_relay =====================
[16:27:53] ================= pf_service (11 subtests) =================
[16:27:53] [PASSED] pf_negotiate_any
[16:27:53] [PASSED] pf_negotiate_base_match
[16:27:53] [PASSED] pf_negotiate_base_newer
[16:27:53] [PASSED] pf_negotiate_base_next
[16:27:53] [SKIPPED] pf_negotiate_base_older
[16:27:53] [PASSED] pf_negotiate_base_prev
[16:27:53] [PASSED] pf_negotiate_latest_match
[16:27:53] [PASSED] pf_negotiate_latest_newer
[16:27:53] [PASSED] pf_negotiate_latest_next
[16:27:53] [SKIPPED] pf_negotiate_latest_older
[16:27:53] [SKIPPED] pf_negotiate_latest_prev
[16:27:53] =================== [PASSED] pf_service ====================
[16:27:53] ===================== lmtt (1 subtest) =====================
[16:27:53] ======================== test_ops =========================
[16:27:53] [PASSED] 2-level
[16:27:53] [PASSED] multi-level
[16:27:53] ==================== [PASSED] test_ops =====================
[16:27:53] ====================== [PASSED] lmtt =======================
[16:27:53] =================== xe_mocs (2 subtests) ===================
[16:27:53] ================ xe_live_mocs_kernel_kunit ================
[16:27:53] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[16:27:53] ================ xe_live_mocs_reset_kunit =================
[16:27:53] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[16:27:53] ==================== [SKIPPED] xe_mocs =====================
[16:27:53] ================= xe_migrate (2 subtests) ==================
[16:27:53] ================= xe_migrate_sanity_kunit =================
[16:27:53] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[16:27:53] ================== xe_validate_ccs_kunit ==================
[16:27:53] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[16:27:53] =================== [SKIPPED] xe_migrate ===================
[16:27:53] ================== xe_dma_buf (1 subtest) ==================
[16:27:53] ==================== xe_dma_buf_kunit =====================
[16:27:53] ================ [SKIPPED] xe_dma_buf_kunit ================
[16:27:53] =================== [SKIPPED] xe_dma_buf ===================
[16:27:53] ================= xe_bo_shrink (1 subtest) =================
[16:27:53] =================== xe_bo_shrink_kunit ====================
[16:27:53] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[16:27:53] ================== [SKIPPED] xe_bo_shrink ==================
[16:27:53] ==================== xe_bo (2 subtests) ====================
[16:27:53] ================== xe_ccs_migrate_kunit ===================
[16:27:53] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[16:27:53] ==================== xe_bo_evict_kunit ====================
[16:27:53] =============== [SKIPPED] xe_bo_evict_kunit ================
[16:27:53] ===================== [SKIPPED] xe_bo ======================
[16:27:53] ==================== args (11 subtests) ====================
[16:27:53] [PASSED] count_args_test
[16:27:53] [PASSED] call_args_example
[16:27:53] [PASSED] call_args_test
[16:27:53] [PASSED] drop_first_arg_example
[16:27:53] [PASSED] drop_first_arg_test
[16:27:53] [PASSED] first_arg_example
[16:27:53] [PASSED] first_arg_test
[16:27:53] [PASSED] last_arg_example
[16:27:53] [PASSED] last_arg_test
[16:27:53] [PASSED] pick_arg_example
[16:27:53] [PASSED] sep_comma_example
[16:27:53] ====================== [PASSED] args =======================
[16:27:53] =================== xe_pci (2 subtests) ====================
[16:27:53] [PASSED] xe_gmdid_graphics_ip
[16:27:53] [PASSED] xe_gmdid_media_ip
[16:27:53] ===================== [PASSED] xe_pci ======================
[16:27:53] =================== xe_rtp (2 subtests) ====================
[16:27:53] =============== xe_rtp_process_to_sr_tests ================
[16:27:53] [PASSED] coalesce-same-reg
[16:27:53] [PASSED] no-match-no-add
[16:27:53] [PASSED] match-or
[16:27:53] [PASSED] match-or-xfail
[16:27:53] [PASSED] no-match-no-add-multiple-rules
[16:27:53] [PASSED] two-regs-two-entries
[16:27:53] [PASSED] clr-one-set-other
[16:27:53] [PASSED] set-field
[16:27:53] [PASSED] conflict-duplicate
[16:27:53] [PASSED] conflict-not-disjoint
stty: 'standard input': Inappropriate ioctl for device
[16:27:53] [PASSED] conflict-reg-type
[16:27:53] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[16:27:53] ================== xe_rtp_process_tests ===================
[16:27:53] [PASSED] active1
[16:27:53] [PASSED] active2
[16:27:53] [PASSED] active-inactive
[16:27:53] [PASSED] inactive-active
[16:27:53] [PASSED] inactive-1st_or_active-inactive
[16:27:53] [PASSED] inactive-2nd_or_active-inactive
[16:27:53] [PASSED] inactive-last_or_active-inactive
[16:27:53] [PASSED] inactive-no_or_active-inactive
[16:27:53] ============== [PASSED] xe_rtp_process_tests ===============
[16:27:53] ===================== [PASSED] xe_rtp ======================
[16:27:53] ==================== xe_wa (1 subtest) =====================
[16:27:53] ======================== xe_wa_gt =========================
[16:27:53] [PASSED] TIGERLAKE (B0)
[16:27:53] [PASSED] DG1 (A0)
[16:27:53] [PASSED] DG1 (B0)
[16:27:53] [PASSED] ALDERLAKE_S (A0)
[16:27:53] [PASSED] ALDERLAKE_S (B0)
[16:27:53] [PASSED] ALDERLAKE_S (C0)
[16:27:53] [PASSED] ALDERLAKE_S (D0)
[16:27:53] [PASSED] ALDERLAKE_P (A0)
[16:27:53] [PASSED] ALDERLAKE_P (B0)
[16:27:53] [PASSED] ALDERLAKE_P (C0)
[16:27:53] [PASSED] ALDERLAKE_S_RPLS (D0)
[16:27:53] [PASSED] ALDERLAKE_P_RPLU (E0)
[16:27:53] [PASSED] DG2_G10 (C0)
[16:27:53] [PASSED] DG2_G11 (B1)
[16:27:53] [PASSED] DG2_G12 (A1)
[16:27:53] [PASSED] METEORLAKE (g:A0, m:A0)
[16:27:53] [PASSED] METEORLAKE (g:A0, m:A0)
[16:27:53] [PASSED] METEORLAKE (g:A0, m:A0)
[16:27:53] [PASSED] LUNARLAKE (g:A0, m:A0)
[16:27:53] [PASSED] LUNARLAKE (g:B0, m:A0)
[16:27:53] [PASSED] BATTLEMAGE (g:A0, m:A1)
[16:27:53] ==================== [PASSED] xe_wa_gt =====================
[16:27:53] ====================== [PASSED] xe_wa ======================
[16:27:53] ============================================================
[16:27:53] Testing complete. Ran 133 tests: passed: 117, skipped: 16
[16:27:53] Elapsed time: 31.013s total, 4.319s configuring, 26.375s building, 0.292s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[16:27:53] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[16:27:55] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[16:28:16] Starting KUnit Kernel (1/1)...
[16:28:16] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[16:28:16] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[16:28:16] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[16:28:16] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[16:28:16] =========== drm_validate_clone_mode (2 subtests) ===========
[16:28:16] ============== drm_test_check_in_clone_mode ===============
[16:28:16] [PASSED] in_clone_mode
[16:28:16] [PASSED] not_in_clone_mode
[16:28:16] ========== [PASSED] drm_test_check_in_clone_mode ===========
[16:28:16] =============== drm_test_check_valid_clones ===============
[16:28:16] [PASSED] not_in_clone_mode
[16:28:16] [PASSED] valid_clone
[16:28:16] [PASSED] invalid_clone
[16:28:16] =========== [PASSED] drm_test_check_valid_clones ===========
[16:28:16] ============= [PASSED] drm_validate_clone_mode =============
[16:28:16] ============= drm_validate_modeset (1 subtest) =============
[16:28:16] [PASSED] drm_test_check_connector_changed_modeset
[16:28:16] ============== [PASSED] drm_validate_modeset ===============
[16:28:16] ====== drm_test_bridge_get_current_state (2 subtests) ======
[16:28:16] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[16:28:16] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[16:28:16] ======== [PASSED] drm_test_bridge_get_current_state ========
[16:28:16] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[16:28:16] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[16:28:16] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[16:28:16] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[16:28:16] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[16:28:16] ================== drm_buddy (7 subtests) ==================
[16:28:16] [PASSED] drm_test_buddy_alloc_limit
[16:28:16] [PASSED] drm_test_buddy_alloc_optimistic
[16:28:16] [PASSED] drm_test_buddy_alloc_pessimistic
[16:28:16] [PASSED] drm_test_buddy_alloc_pathological
[16:28:16] [PASSED] drm_test_buddy_alloc_contiguous
[16:28:16] [PASSED] drm_test_buddy_alloc_clear
[16:28:16] [PASSED] drm_test_buddy_alloc_range_bias
[16:28:16] ==================== [PASSED] drm_buddy ====================
[16:28:16] ============= drm_cmdline_parser (40 subtests) =============
[16:28:16] [PASSED] drm_test_cmdline_force_d_only
[16:28:16] [PASSED] drm_test_cmdline_force_D_only_dvi
[16:28:16] [PASSED] drm_test_cmdline_force_D_only_hdmi
[16:28:16] [PASSED] drm_test_cmdline_force_D_only_not_digital
[16:28:16] [PASSED] drm_test_cmdline_force_e_only
[16:28:16] [PASSED] drm_test_cmdline_res
[16:28:16] [PASSED] drm_test_cmdline_res_vesa
[16:28:16] [PASSED] drm_test_cmdline_res_vesa_rblank
[16:28:16] [PASSED] drm_test_cmdline_res_rblank
[16:28:16] [PASSED] drm_test_cmdline_res_bpp
[16:28:16] [PASSED] drm_test_cmdline_res_refresh
[16:28:16] [PASSED] drm_test_cmdline_res_bpp_refresh
[16:28:16] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[16:28:16] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[16:28:16] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[16:28:16] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[16:28:16] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[16:28:16] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[16:28:16] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[16:28:16] [PASSED] drm_test_cmdline_res_margins_force_on
[16:28:16] [PASSED] drm_test_cmdline_res_vesa_margins
[16:28:16] [PASSED] drm_test_cmdline_name
[16:28:16] [PASSED] drm_test_cmdline_name_bpp
[16:28:16] [PASSED] drm_test_cmdline_name_option
[16:28:16] [PASSED] drm_test_cmdline_name_bpp_option
[16:28:16] [PASSED] drm_test_cmdline_rotate_0
[16:28:16] [PASSED] drm_test_cmdline_rotate_90
[16:28:16] [PASSED] drm_test_cmdline_rotate_180
[16:28:16] [PASSED] drm_test_cmdline_rotate_270
[16:28:16] [PASSED] drm_test_cmdline_hmirror
[16:28:16] [PASSED] drm_test_cmdline_vmirror
[16:28:16] [PASSED] drm_test_cmdline_margin_options
[16:28:16] [PASSED] drm_test_cmdline_multiple_options
[16:28:16] [PASSED] drm_test_cmdline_bpp_extra_and_option
[16:28:16] [PASSED] drm_test_cmdline_extra_and_option
[16:28:16] [PASSED] drm_test_cmdline_freestanding_options
[16:28:16] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[16:28:16] [PASSED] drm_test_cmdline_panel_orientation
[16:28:16] ================ drm_test_cmdline_invalid =================
[16:28:16] [PASSED] margin_only
[16:28:16] [PASSED] interlace_only
[16:28:16] [PASSED] res_missing_x
[16:28:16] [PASSED] res_missing_y
[16:28:16] [PASSED] res_bad_y
[16:28:16] [PASSED] res_missing_y_bpp
[16:28:16] [PASSED] res_bad_bpp
[16:28:16] [PASSED] res_bad_refresh
[16:28:16] [PASSED] res_bpp_refresh_force_on_off
[16:28:16] [PASSED] res_invalid_mode
[16:28:16] [PASSED] res_bpp_wrong_place_mode
[16:28:16] [PASSED] name_bpp_refresh
[16:28:16] [PASSED] name_refresh
[16:28:16] [PASSED] name_refresh_wrong_mode
[16:28:16] [PASSED] name_refresh_invalid_mode
[16:28:16] [PASSED] rotate_multiple
[16:28:16] [PASSED] rotate_invalid_val
[16:28:16] [PASSED] rotate_truncated
[16:28:16] [PASSED] invalid_option
[16:28:16] [PASSED] invalid_tv_option
[16:28:16] [PASSED] truncated_tv_option
[16:28:16] ============ [PASSED] drm_test_cmdline_invalid =============
[16:28:16] =============== drm_test_cmdline_tv_options ===============
[16:28:16] [PASSED] NTSC
[16:28:16] [PASSED] NTSC_443
[16:28:16] [PASSED] NTSC_J
[16:28:16] [PASSED] PAL
[16:28:16] [PASSED] PAL_M
[16:28:16] [PASSED] PAL_N
[16:28:16] [PASSED] SECAM
[16:28:16] [PASSED] MONO_525
[16:28:16] [PASSED] MONO_625
[16:28:16] =========== [PASSED] drm_test_cmdline_tv_options ===========
[16:28:16] =============== [PASSED] drm_cmdline_parser ================
[16:28:16] ========== drmm_connector_hdmi_init (20 subtests) ==========
[16:28:16] [PASSED] drm_test_connector_hdmi_init_valid
[16:28:16] [PASSED] drm_test_connector_hdmi_init_bpc_8
[16:28:16] [PASSED] drm_test_connector_hdmi_init_bpc_10
[16:28:16] [PASSED] drm_test_connector_hdmi_init_bpc_12
[16:28:16] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[16:28:16] [PASSED] drm_test_connector_hdmi_init_bpc_null
[16:28:16] [PASSED] drm_test_connector_hdmi_init_formats_empty
[16:28:16] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[16:28:16] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[16:28:16] [PASSED] supported_formats=0x9 yuv420_allowed=1
[16:28:16] [PASSED] supported_formats=0x9 yuv420_allowed=0
[16:28:16] [PASSED] supported_formats=0x3 yuv420_allowed=1
[16:28:16] [PASSED] supported_formats=0x3 yuv420_allowed=0
[16:28:16] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[16:28:16] [PASSED] drm_test_connector_hdmi_init_null_ddc
[16:28:16] [PASSED] drm_test_connector_hdmi_init_null_product
[16:28:16] [PASSED] drm_test_connector_hdmi_init_null_vendor
[16:28:16] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[16:28:16] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[16:28:16] [PASSED] drm_test_connector_hdmi_init_product_valid
[16:28:16] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[16:28:16] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[16:28:16] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[16:28:16] ========= drm_test_connector_hdmi_init_type_valid =========
[16:28:16] [PASSED] HDMI-A
[16:28:16] [PASSED] HDMI-B
[16:28:16] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[16:28:16] ======== drm_test_connector_hdmi_init_type_invalid ========
[16:28:16] [PASSED] Unknown
[16:28:16] [PASSED] VGA
[16:28:16] [PASSED] DVI-I
[16:28:16] [PASSED] DVI-D
[16:28:16] [PASSED] DVI-A
[16:28:16] [PASSED] Composite
[16:28:16] [PASSED] SVIDEO
[16:28:16] [PASSED] LVDS
[16:28:16] [PASSED] Component
[16:28:16] [PASSED] DIN
[16:28:16] [PASSED] DP
[16:28:16] [PASSED] TV
[16:28:16] [PASSED] eDP
[16:28:16] [PASSED] Virtual
[16:28:16] [PASSED] DSI
[16:28:16] [PASSED] DPI
[16:28:16] [PASSED] Writeback
[16:28:16] [PASSED] SPI
[16:28:16] [PASSED] USB
[16:28:16] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[16:28:16] ============ [PASSED] drmm_connector_hdmi_init =============
[16:28:16] ============= drmm_connector_init (3 subtests) =============
[16:28:16] [PASSED] drm_test_drmm_connector_init
[16:28:16] [PASSED] drm_test_drmm_connector_init_null_ddc
[16:28:16] ========= drm_test_drmm_connector_init_type_valid =========
[16:28:16] [PASSED] Unknown
[16:28:16] [PASSED] VGA
[16:28:16] [PASSED] DVI-I
[16:28:16] [PASSED] DVI-D
[16:28:16] [PASSED] DVI-A
[16:28:16] [PASSED] Composite
[16:28:16] [PASSED] SVIDEO
[16:28:16] [PASSED] LVDS
[16:28:16] [PASSED] Component
[16:28:16] [PASSED] DIN
[16:28:16] [PASSED] DP
[16:28:16] [PASSED] HDMI-A
[16:28:16] [PASSED] HDMI-B
[16:28:16] [PASSED] TV
[16:28:16] [PASSED] eDP
[16:28:16] [PASSED] Virtual
[16:28:16] [PASSED] DSI
[16:28:16] [PASSED] DPI
[16:28:16] [PASSED] Writeback
[16:28:16] [PASSED] SPI
[16:28:16] [PASSED] USB
[16:28:16] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[16:28:16] =============== [PASSED] drmm_connector_init ===============
[16:28:16] ========= drm_connector_dynamic_init (6 subtests) ==========
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_init
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_init_properties
[16:28:16] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[16:28:16] [PASSED] Unknown
[16:28:16] [PASSED] VGA
[16:28:16] [PASSED] DVI-I
[16:28:16] [PASSED] DVI-D
[16:28:16] [PASSED] DVI-A
[16:28:16] [PASSED] Composite
[16:28:16] [PASSED] SVIDEO
[16:28:16] [PASSED] LVDS
[16:28:16] [PASSED] Component
[16:28:16] [PASSED] DIN
[16:28:16] [PASSED] DP
[16:28:16] [PASSED] HDMI-A
[16:28:16] [PASSED] HDMI-B
[16:28:16] [PASSED] TV
[16:28:16] [PASSED] eDP
[16:28:16] [PASSED] Virtual
[16:28:16] [PASSED] DSI
[16:28:16] [PASSED] DPI
[16:28:16] [PASSED] Writeback
[16:28:16] [PASSED] SPI
[16:28:16] [PASSED] USB
[16:28:16] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[16:28:16] ======== drm_test_drm_connector_dynamic_init_name =========
[16:28:16] [PASSED] Unknown
[16:28:16] [PASSED] VGA
[16:28:16] [PASSED] DVI-I
[16:28:16] [PASSED] DVI-D
[16:28:16] [PASSED] DVI-A
[16:28:16] [PASSED] Composite
[16:28:16] [PASSED] SVIDEO
[16:28:16] [PASSED] LVDS
[16:28:16] [PASSED] Component
[16:28:16] [PASSED] DIN
[16:28:16] [PASSED] DP
[16:28:16] [PASSED] HDMI-A
[16:28:16] [PASSED] HDMI-B
[16:28:16] [PASSED] TV
[16:28:16] [PASSED] eDP
[16:28:16] [PASSED] Virtual
[16:28:16] [PASSED] DSI
[16:28:16] [PASSED] DPI
[16:28:16] [PASSED] Writeback
[16:28:16] [PASSED] SPI
[16:28:16] [PASSED] USB
[16:28:16] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[16:28:16] =========== [PASSED] drm_connector_dynamic_init ============
[16:28:16] ==== drm_connector_dynamic_register_early (4 subtests) =====
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[16:28:16] ====== [PASSED] drm_connector_dynamic_register_early =======
[16:28:16] ======= drm_connector_dynamic_register (7 subtests) ========
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[16:28:16] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[16:28:16] ========= [PASSED] drm_connector_dynamic_register ==========
[16:28:16] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[16:28:16] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[16:28:16] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[16:28:16] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[16:28:16] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[16:28:16] ========== drm_test_get_tv_mode_from_name_valid ===========
[16:28:16] [PASSED] NTSC
[16:28:16] [PASSED] NTSC-443
[16:28:16] [PASSED] NTSC-J
[16:28:16] [PASSED] PAL
[16:28:16] [PASSED] PAL-M
[16:28:16] [PASSED] PAL-N
[16:28:16] [PASSED] SECAM
[16:28:16] [PASSED] Mono
[16:28:16] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[16:28:16] [PASSED] drm_test_get_tv_mode_from_name_truncated
[16:28:16] ============ [PASSED] drm_get_tv_mode_from_name ============
[16:28:16] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[16:28:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[16:28:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[16:28:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[16:28:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[16:28:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[16:28:16] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[16:28:16] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[16:28:16] [PASSED] VIC 96
[16:28:16] [PASSED] VIC 97
[16:28:16] [PASSED] VIC 101
[16:28:16] [PASSED] VIC 102
[16:28:16] [PASSED] VIC 106
[16:28:16] [PASSED] VIC 107
[16:28:16] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[16:28:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[16:28:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[16:28:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[16:28:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[16:28:16] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[16:28:16] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[16:28:16] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[16:28:16] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[16:28:16] [PASSED] Automatic
[16:28:16] [PASSED] Full
[16:28:16] [PASSED] Limited 16:235
[16:28:16] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[16:28:16] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[16:28:16] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[16:28:16] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[16:28:16] === drm_test_drm_hdmi_connector_get_output_format_name ====
[16:28:16] [PASSED] RGB
[16:28:16] [PASSED] YUV 4:2:0
[16:28:16] [PASSED] YUV 4:2:2
[16:28:16] [PASSED] YUV 4:4:4
[16:28:16] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[16:28:16] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[16:28:16] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[16:28:16] ============= drm_damage_helper (21 subtests) ==============
[16:28:16] [PASSED] drm_test_damage_iter_no_damage
[16:28:16] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[16:28:16] [PASSED] drm_test_damage_iter_no_damage_src_moved
[16:28:16] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[16:28:16] [PASSED] drm_test_damage_iter_no_damage_not_visible
[16:28:16] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[16:28:16] [PASSED] drm_test_damage_iter_no_damage_no_fb
[16:28:16] [PASSED] drm_test_damage_iter_simple_damage
[16:28:16] [PASSED] drm_test_damage_iter_single_damage
[16:28:16] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[16:28:16] [PASSED] drm_test_damage_iter_single_damage_outside_src
[16:28:16] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[16:28:16] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[16:28:16] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[16:28:16] [PASSED] drm_test_damage_iter_single_damage_src_moved
[16:28:16] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[16:28:16] [PASSED] drm_test_damage_iter_damage
[16:28:16] [PASSED] drm_test_damage_iter_damage_one_intersect
[16:28:16] [PASSED] drm_test_damage_iter_damage_one_outside
[16:28:16] [PASSED] drm_test_damage_iter_damage_src_moved
[16:28:16] [PASSED] drm_test_damage_iter_damage_not_visible
[16:28:16] ================ [PASSED] drm_damage_helper ================
[16:28:16] ============== drm_dp_mst_helper (3 subtests) ==============
[16:28:16] ============== drm_test_dp_mst_calc_pbn_mode ==============
[16:28:16] [PASSED] Clock 154000 BPP 30 DSC disabled
[16:28:16] [PASSED] Clock 234000 BPP 30 DSC disabled
[16:28:16] [PASSED] Clock 297000 BPP 24 DSC disabled
[16:28:16] [PASSED] Clock 332880 BPP 24 DSC enabled
[16:28:16] [PASSED] Clock 324540 BPP 24 DSC enabled
[16:28:16] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[16:28:16] ============== drm_test_dp_mst_calc_pbn_div ===============
[16:28:16] [PASSED] Link rate 2000000 lane count 4
[16:28:16] [PASSED] Link rate 2000000 lane count 2
[16:28:16] [PASSED] Link rate 2000000 lane count 1
[16:28:16] [PASSED] Link rate 1350000 lane count 4
[16:28:16] [PASSED] Link rate 1350000 lane count 2
[16:28:16] [PASSED] Link rate 1350000 lane count 1
[16:28:16] [PASSED] Link rate 1000000 lane count 4
[16:28:16] [PASSED] Link rate 1000000 lane count 2
[16:28:16] [PASSED] Link rate 1000000 lane count 1
[16:28:16] [PASSED] Link rate 810000 lane count 4
[16:28:16] [PASSED] Link rate 810000 lane count 2
[16:28:16] [PASSED] Link rate 810000 lane count 1
[16:28:16] [PASSED] Link rate 540000 lane count 4
[16:28:16] [PASSED] Link rate 540000 lane count 2
[16:28:16] [PASSED] Link rate 540000 lane count 1
[16:28:16] [PASSED] Link rate 270000 lane count 4
[16:28:16] [PASSED] Link rate 270000 lane count 2
[16:28:16] [PASSED] Link rate 270000 lane count 1
[16:28:16] [PASSED] Link rate 162000 lane count 4
[16:28:16] [PASSED] Link rate 162000 lane count 2
[16:28:16] [PASSED] Link rate 162000 lane count 1
[16:28:16] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[16:28:16] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[16:28:16] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[16:28:16] [PASSED] DP_POWER_UP_PHY with port number
[16:28:16] [PASSED] DP_POWER_DOWN_PHY with port number
[16:28:16] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[16:28:16] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[16:28:16] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[16:28:16] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[16:28:16] [PASSED] DP_QUERY_PAYLOAD with port number
[16:28:16] [PASSED] DP_QUERY_PAYLOAD with VCPI
[16:28:16] [PASSED] DP_REMOTE_DPCD_READ with port number
[16:28:16] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[16:28:16] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[16:28:16] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[16:28:16] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[16:28:16] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[16:28:16] [PASSED] DP_REMOTE_I2C_READ with port number
[16:28:16] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[16:28:16] [PASSED] DP_REMOTE_I2C_READ with transactions array
[16:28:16] [PASSED] DP_REMOTE_I2C_WRITE with port number
[16:28:16] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[16:28:16] [PASSED] DP_REMOTE_I2C_WRITE with data array
[16:28:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[16:28:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[16:28:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[16:28:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[16:28:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[16:28:16] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[16:28:16] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[16:28:16] ================ [PASSED] drm_dp_mst_helper ================
[16:28:16] ================== drm_exec (7 subtests) ===================
[16:28:16] [PASSED] sanitycheck
[16:28:16] [PASSED] test_lock
[16:28:16] [PASSED] test_lock_unlock
[16:28:16] [PASSED] test_duplicates
[16:28:16] [PASSED] test_prepare
[16:28:16] [PASSED] test_prepare_array
[16:28:16] [PASSED] test_multiple_loops
[16:28:16] ==================== [PASSED] drm_exec =====================
[16:28:16] =========== drm_format_helper_test (18 subtests) ===========
[16:28:16] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[16:28:16] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[16:28:16] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[16:28:16] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[16:28:16] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[16:28:16] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[16:28:16] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[16:28:16] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[16:28:16] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[16:28:16] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[16:28:16] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[16:28:16] ============== drm_test_fb_xrgb8888_to_mono ===============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[16:28:16] ==================== drm_test_fb_swab =====================
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ================ [PASSED] drm_test_fb_swab =================
[16:28:16] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[16:28:16] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[16:28:16] [PASSED] single_pixel_source_buffer
[16:28:16] [PASSED] single_pixel_clip_rectangle
[16:28:16] [PASSED] well_known_colors
[16:28:16] [PASSED] destination_pitch
[16:28:16] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[16:28:16] ================= drm_test_fb_clip_offset =================
[16:28:16] [PASSED] pass through
[16:28:16] [PASSED] horizontal offset
[16:28:16] [PASSED] vertical offset
[16:28:16] [PASSED] horizontal and vertical offset
[16:28:16] [PASSED] horizontal offset (custom pitch)
[16:28:16] [PASSED] vertical offset (custom pitch)
[16:28:16] [PASSED] horizontal and vertical offset (custom pitch)
[16:28:16] ============= [PASSED] drm_test_fb_clip_offset =============
[16:28:16] ============== drm_test_fb_build_fourcc_list ==============
[16:28:16] [PASSED] no native formats
[16:28:16] [PASSED] XRGB8888 as native format
[16:28:16] [PASSED] remove duplicates
[16:28:16] [PASSED] convert alpha formats
[16:28:16] [PASSED] random formats
[16:28:16] ========== [PASSED] drm_test_fb_build_fourcc_list ==========
[16:28:16] =================== drm_test_fb_memcpy ====================
[16:28:16] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[16:28:16] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[16:28:16] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[16:28:16] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[16:28:16] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[16:28:16] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[16:28:16] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[16:28:16] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[16:28:16] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[16:28:16] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[16:28:16] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[16:28:16] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[16:28:16] =============== [PASSED] drm_test_fb_memcpy ================
[16:28:16] ============= [PASSED] drm_format_helper_test ==============
[16:28:16] ================= drm_format (18 subtests) =================
[16:28:16] [PASSED] drm_test_format_block_width_invalid
[16:28:16] [PASSED] drm_test_format_block_width_one_plane
[16:28:16] [PASSED] drm_test_format_block_width_two_plane
[16:28:16] [PASSED] drm_test_format_block_width_three_plane
[16:28:16] [PASSED] drm_test_format_block_width_tiled
[16:28:16] [PASSED] drm_test_format_block_height_invalid
[16:28:16] [PASSED] drm_test_format_block_height_one_plane
[16:28:16] [PASSED] drm_test_format_block_height_two_plane
[16:28:16] [PASSED] drm_test_format_block_height_three_plane
[16:28:16] [PASSED] drm_test_format_block_height_tiled
[16:28:16] [PASSED] drm_test_format_min_pitch_invalid
[16:28:16] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[16:28:16] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[16:28:16] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[16:28:16] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[16:28:16] [PASSED] drm_test_format_min_pitch_two_plane
[16:28:16] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[16:28:16] [PASSED] drm_test_format_min_pitch_tiled
[16:28:16] =================== [PASSED] drm_format ====================
[16:28:16] ============== drm_framebuffer (10 subtests) ===============
[16:28:16] ========== drm_test_framebuffer_check_src_coords ==========
[16:28:16] [PASSED] Success: source fits into fb
[16:28:16] [PASSED] Fail: overflowing fb with x-axis coordinate
[16:28:16] [PASSED] Fail: overflowing fb with y-axis coordinate
[16:28:16] [PASSED] Fail: overflowing fb with source width
[16:28:16] [PASSED] Fail: overflowing fb with source height
[16:28:16] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[16:28:16] [PASSED] drm_test_framebuffer_cleanup
[16:28:16] =============== drm_test_framebuffer_create ===============
[16:28:16] [PASSED] ABGR8888 normal sizes
[16:28:16] [PASSED] ABGR8888 max sizes
[16:28:16] [PASSED] ABGR8888 pitch greater than min required
[16:28:16] [PASSED] ABGR8888 pitch less than min required
[16:28:16] [PASSED] ABGR8888 Invalid width
[16:28:16] [PASSED] ABGR8888 Invalid buffer handle
[16:28:16] [PASSED] No pixel format
[16:28:16] [PASSED] ABGR8888 Width 0
[16:28:16] [PASSED] ABGR8888 Height 0
[16:28:16] [PASSED] ABGR8888 Out of bound height * pitch combination
[16:28:16] [PASSED] ABGR8888 Large buffer offset
[16:28:16] [PASSED] ABGR8888 Buffer offset for inexistent plane
[16:28:16] [PASSED] ABGR8888 Invalid flag
[16:28:16] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[16:28:16] [PASSED] ABGR8888 Valid buffer modifier
[16:28:16] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[16:28:16] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[16:28:16] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[16:28:16] [PASSED] NV12 Normal sizes
[16:28:16] [PASSED] NV12 Max sizes
[16:28:16] [PASSED] NV12 Invalid pitch
[16:28:16] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[16:28:16] [PASSED] NV12 different modifier per-plane
[16:28:16] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[16:28:16] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[16:28:16] [PASSED] NV12 Modifier for inexistent plane
[16:28:16] [PASSED] NV12 Handle for inexistent plane
[16:28:16] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[16:28:16] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[16:28:16] [PASSED] YVU420 Normal sizes
[16:28:16] [PASSED] YVU420 Max sizes
[16:28:16] [PASSED] YVU420 Invalid pitch
[16:28:16] [PASSED] YVU420 Different pitches
[16:28:16] [PASSED] YVU420 Different buffer offsets/pitches
[16:28:16] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[16:28:16] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[16:28:16] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[16:28:16] [PASSED] YVU420 Valid modifier
[16:28:16] [PASSED] YVU420 Different modifiers per plane
[16:28:16] [PASSED] YVU420 Modifier for inexistent plane
[16:28:16] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[16:28:16] [PASSED] X0L2 Normal sizes
[16:28:16] [PASSED] X0L2 Max sizes
[16:28:16] [PASSED] X0L2 Invalid pitch
[16:28:16] [PASSED] X0L2 Pitch greater than minimum required
[16:28:16] [PASSED] X0L2 Handle for inexistent plane
[16:28:16] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[16:28:16] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[16:28:16] [PASSED] X0L2 Valid modifier
[16:28:16] [PASSED] X0L2 Modifier for inexistent plane
[16:28:16] =========== [PASSED] drm_test_framebuffer_create ===========
[16:28:16] [PASSED] drm_test_framebuffer_free
[16:28:16] [PASSED] drm_test_framebuffer_init
[16:28:16] [PASSED] drm_test_framebuffer_init_bad_format
[16:28:16] [PASSED] drm_test_framebuffer_init_dev_mismatch
[16:28:16] [PASSED] drm_test_framebuffer_lookup
[16:28:16] [PASSED] drm_test_framebuffer_lookup_inexistent
[16:28:16] [PASSED] drm_test_framebuffer_modifiers_not_supported
[16:28:16] ================= [PASSED] drm_framebuffer =================
[16:28:16] ================ drm_gem_shmem (8 subtests) ================
[16:28:16] [PASSED] drm_gem_shmem_test_obj_create
[16:28:16] [PASSED] drm_gem_shmem_test_obj_create_private
[16:28:16] [PASSED] drm_gem_shmem_test_pin_pages
[16:28:16] [PASSED] drm_gem_shmem_test_vmap
[16:28:16] [PASSED] drm_gem_shmem_test_get_pages_sgt
[16:28:16] [PASSED] drm_gem_shmem_test_get_sg_table
[16:28:16] [PASSED] drm_gem_shmem_test_madvise
[16:28:16] [PASSED] drm_gem_shmem_test_purge
[16:28:16] ================== [PASSED] drm_gem_shmem ==================
[16:28:16] === drm_atomic_helper_connector_hdmi_check (23 subtests) ===
[16:28:16] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[16:28:16] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[16:28:16] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[16:28:16] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[16:28:16] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[16:28:16] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[16:28:16] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[16:28:16] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[16:28:16] [PASSED] drm_test_check_disable_connector
[16:28:16] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[16:28:16] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback
[16:28:16] [PASSED] drm_test_check_max_tmds_rate_format_fallback
[16:28:16] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[16:28:16] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[16:28:16] [PASSED] drm_test_check_output_bpc_dvi
[16:28:16] [PASSED] drm_test_check_output_bpc_format_vic_1
[16:28:16] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[16:28:16] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[16:28:16] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[16:28:16] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[16:28:16] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[16:28:16] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[16:28:16] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[16:28:16] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[16:28:16] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[16:28:16] [PASSED] drm_test_check_broadcast_rgb_value
[16:28:16] [PASSED] drm_test_check_bpc_8_value
[16:28:16] [PASSED] drm_test_check_bpc_10_value
[16:28:16] [PASSED] drm_test_check_bpc_12_value
[16:28:16] [PASSED] drm_test_check_format_value
[16:28:16] [PASSED] drm_test_check_tmds_char_value
[16:28:16] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[16:28:16] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[16:28:16] [PASSED] drm_test_check_mode_valid
[16:28:16] [PASSED] drm_test_check_mode_valid_reject
[16:28:16] [PASSED] drm_test_check_mode_valid_reject_rate
[16:28:16] [PASSED] drm_test_check_mode_valid_reject_max_clock
[16:28:16] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[16:28:16] ================= drm_managed (2 subtests) =================
[16:28:16] [PASSED] drm_test_managed_release_action
[16:28:16] [PASSED] drm_test_managed_run_action
[16:28:16] =================== [PASSED] drm_managed ===================
[16:28:16] =================== drm_mm (6 subtests) ====================
[16:28:16] [PASSED] drm_test_mm_init
[16:28:16] [PASSED] drm_test_mm_debug
[16:28:16] [PASSED] drm_test_mm_align32
[16:28:16] [PASSED] drm_test_mm_align64
[16:28:16] [PASSED] drm_test_mm_lowest
[16:28:16] [PASSED] drm_test_mm_highest
[16:28:16] ===================== [PASSED] drm_mm ======================
[16:28:16] ============= drm_modes_analog_tv (5 subtests) =============
[16:28:16] [PASSED] drm_test_modes_analog_tv_mono_576i
[16:28:16] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[16:28:16] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[16:28:16] [PASSED] drm_test_modes_analog_tv_pal_576i
[16:28:16] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[16:28:16] =============== [PASSED] drm_modes_analog_tv ===============
[16:28:16] ============== drm_plane_helper (2 subtests) ===============
[16:28:16] =============== drm_test_check_plane_state ================
[16:28:16] [PASSED] clipping_simple
[16:28:16] [PASSED] clipping_rotate_reflect
[16:28:16] [PASSED] positioning_simple
[16:28:16] [PASSED] upscaling
[16:28:16] [PASSED] downscaling
[16:28:16] [PASSED] rounding1
[16:28:16] [PASSED] rounding2
[16:28:16] [PASSED] rounding3
[16:28:16] [PASSED] rounding4
[16:28:16] =========== [PASSED] drm_test_check_plane_state ============
[16:28:16] =========== drm_test_check_invalid_plane_state ============
[16:28:16] [PASSED] positioning_invalid
[16:28:16] [PASSED] upscaling_invalid
[16:28:16] [PASSED] downscaling_invalid
[16:28:16] ======= [PASSED] drm_test_check_invalid_plane_state ========
[16:28:16] ================ [PASSED] drm_plane_helper =================
[16:28:16] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[16:28:16] ====== drm_test_connector_helper_tv_get_modes_check =======
[16:28:16] [PASSED] None
[16:28:16] [PASSED] PAL
[16:28:16] [PASSED] NTSC
[16:28:16] [PASSED] Both, NTSC Default
[16:28:16] [PASSED] Both, PAL Default
[16:28:16] [PASSED] Both, NTSC Default, with PAL on command-line
[16:28:16] [PASSED] Both, PAL Default, with NTSC on command-line
[16:28:16] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[16:28:16] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[16:28:16] ================== drm_rect (9 subtests) ===================
[16:28:16] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[16:28:16] [PASSED] drm_test_rect_clip_scaled_not_clipped
[16:28:16] [PASSED] drm_test_rect_clip_scaled_clipped
[16:28:16] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[16:28:16] ================= drm_test_rect_intersect =================
[16:28:16] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[16:28:16] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[16:28:16] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[16:28:16] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[16:28:16] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[16:28:16] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[16:28:16] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[16:28:16] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[16:28:16] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[16:28:16] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[16:28:16] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[16:28:16] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[16:28:16] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[16:28:16] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[16:28:16] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[16:28:16] ============= [PASSED] drm_test_rect_intersect =============
[16:28:16] ================ drm_test_rect_calc_hscale ================
[16:28:16] [PASSED] normal use
[16:28:16] [PASSED] out of max range
[16:28:16] [PASSED] out of min range
[16:28:16] [PASSED] zero dst
[16:28:16] [PASSED] negative src
[16:28:16] [PASSED] negative dst
[16:28:16] ============ [PASSED] drm_test_rect_calc_hscale ============
[16:28:16] ================ drm_test_rect_calc_vscale ================
[16:28:16] [PASSED] normal use
[16:28:16] [PASSED] out of max range
[16:28:16] [PASSED] out of min range
[16:28:16] [PASSED] zero dst
[16:28:16] [PASSED] negative src
[16:28:16] [PASSED] negative dst
[16:28:16] ============ [PASSED] drm_test_rect_calc_vscale ============
[16:28:16] ================== drm_test_rect_rotate ===================
[16:28:16] [PASSED] reflect-x
[16:28:16] [PASSED] reflect-y
[16:28:16] [PASSED] rotate-0
[16:28:16] [PASSED] rotate-90
[16:28:16] [PASSED] rotate-180
[16:28:16] [PASSED] rotate-270
[16:28:16] ============== [PASSED] drm_test_rect_rotate ===============
[16:28:16] ================ drm_test_rect_rotate_inv =================
[16:28:16] [PASSED] reflect-x
[16:28:16] [PASSED] reflect-y
[16:28:16] [PASSED] rotate-0
[16:28:16] [PASSED] rotate-90
[16:28:16] [PASSED] rotate-180
[16:28:16] [PASSED] rotate-270
[16:28:16] ============ [PASSED] drm_test_rect_rotate_inv =============
stty: 'standard input': Inappropriate ioctl for device
[16:28:16] ==================== [PASSED] drm_rect =====================
[16:28:16] ============================================================
[16:28:16] Testing complete. Ran 608 tests: passed: 608
[16:28:16] Elapsed time: 23.059s total, 1.803s configuring, 21.089s building, 0.148s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[16:28:16] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[16:28:18] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[16:28:26] Starting KUnit Kernel (1/1)...
[16:28:26] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[16:28:26] ================= ttm_device (5 subtests) ==================
[16:28:26] [PASSED] ttm_device_init_basic
[16:28:26] [PASSED] ttm_device_init_multiple
[16:28:26] [PASSED] ttm_device_fini_basic
[16:28:26] [PASSED] ttm_device_init_no_vma_man
[16:28:26] ================== ttm_device_init_pools ==================
[16:28:26] [PASSED] No DMA allocations, no DMA32 required
[16:28:26] [PASSED] DMA allocations, DMA32 required
[16:28:26] [PASSED] No DMA allocations, DMA32 required
[16:28:26] [PASSED] DMA allocations, no DMA32 required
[16:28:26] ============== [PASSED] ttm_device_init_pools ==============
[16:28:26] =================== [PASSED] ttm_device ====================
[16:28:26] ================== ttm_pool (8 subtests) ===================
[16:28:26] ================== ttm_pool_alloc_basic ===================
[16:28:26] [PASSED] One page
[16:28:26] [PASSED] More than one page
[16:28:26] [PASSED] Above the allocation limit
[16:28:26] [PASSED] One page, with coherent DMA mappings enabled
[16:28:26] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[16:28:26] ============== [PASSED] ttm_pool_alloc_basic ===============
[16:28:26] ============== ttm_pool_alloc_basic_dma_addr ==============
[16:28:26] [PASSED] One page
[16:28:26] [PASSED] More than one page
[16:28:26] [PASSED] Above the allocation limit
[16:28:26] [PASSED] One page, with coherent DMA mappings enabled
[16:28:26] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[16:28:26] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[16:28:26] [PASSED] ttm_pool_alloc_order_caching_match
[16:28:26] [PASSED] ttm_pool_alloc_caching_mismatch
[16:28:26] [PASSED] ttm_pool_alloc_order_mismatch
[16:28:26] [PASSED] ttm_pool_free_dma_alloc
[16:28:26] [PASSED] ttm_pool_free_no_dma_alloc
[16:28:26] [PASSED] ttm_pool_fini_basic
[16:28:26] ==================== [PASSED] ttm_pool =====================
[16:28:26] ================ ttm_resource (8 subtests) =================
[16:28:26] ================= ttm_resource_init_basic =================
[16:28:26] [PASSED] Init resource in TTM_PL_SYSTEM
[16:28:26] [PASSED] Init resource in TTM_PL_VRAM
[16:28:26] [PASSED] Init resource in a private placement
[16:28:26] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[16:28:26] ============= [PASSED] ttm_resource_init_basic =============
[16:28:26] [PASSED] ttm_resource_init_pinned
[16:28:26] [PASSED] ttm_resource_fini_basic
[16:28:26] [PASSED] ttm_resource_manager_init_basic
[16:28:26] [PASSED] ttm_resource_manager_usage_basic
[16:28:26] [PASSED] ttm_resource_manager_set_used_basic
[16:28:26] [PASSED] ttm_sys_man_alloc_basic
[16:28:26] [PASSED] ttm_sys_man_free_basic
[16:28:26] ================== [PASSED] ttm_resource ===================
[16:28:26] =================== ttm_tt (15 subtests) ===================
[16:28:26] ==================== ttm_tt_init_basic ====================
[16:28:26] [PASSED] Page-aligned size
[16:28:26] [PASSED] Extra pages requested
[16:28:26] ================ [PASSED] ttm_tt_init_basic ================
[16:28:26] [PASSED] ttm_tt_init_misaligned
[16:28:26] [PASSED] ttm_tt_fini_basic
[16:28:26] [PASSED] ttm_tt_fini_sg
[16:28:26] [PASSED] ttm_tt_fini_shmem
[16:28:26] [PASSED] ttm_tt_create_basic
[16:28:26] [PASSED] ttm_tt_create_invalid_bo_type
[16:28:26] [PASSED] ttm_tt_create_ttm_exists
[16:28:26] [PASSED] ttm_tt_create_failed
[16:28:26] [PASSED] ttm_tt_destroy_basic
[16:28:26] [PASSED] ttm_tt_populate_null_ttm
[16:28:26] [PASSED] ttm_tt_populate_populated_ttm
[16:28:26] [PASSED] ttm_tt_unpopulate_basic
[16:28:26] [PASSED] ttm_tt_unpopulate_empty_ttm
[16:28:26] [PASSED] ttm_tt_swapin_basic
[16:28:26] ===================== [PASSED] ttm_tt ======================
[16:28:26] =================== ttm_bo (14 subtests) ===================
[16:28:26] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[16:28:26] [PASSED] Cannot be interrupted and sleeps
[16:28:26] [PASSED] Cannot be interrupted, locks straight away
[16:28:26] [PASSED] Can be interrupted, sleeps
[16:28:26] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[16:28:26] [PASSED] ttm_bo_reserve_locked_no_sleep
[16:28:26] [PASSED] ttm_bo_reserve_no_wait_ticket
[16:28:26] [PASSED] ttm_bo_reserve_double_resv
[16:28:26] [PASSED] ttm_bo_reserve_interrupted
[16:28:26] [PASSED] ttm_bo_reserve_deadlock
[16:28:26] [PASSED] ttm_bo_unreserve_basic
[16:28:26] [PASSED] ttm_bo_unreserve_pinned
[16:28:26] [PASSED] ttm_bo_unreserve_bulk
[16:28:26] [PASSED] ttm_bo_put_basic
[16:28:26] [PASSED] ttm_bo_put_shared_resv
[16:28:26] [PASSED] ttm_bo_pin_basic
[16:28:26] [PASSED] ttm_bo_pin_unpin_resource
[16:28:26] [PASSED] ttm_bo_multiple_pin_one_unpin
[16:28:26] ===================== [PASSED] ttm_bo ======================
[16:28:26] ============== ttm_bo_validate (22 subtests) ===============
[16:28:26] ============== ttm_bo_init_reserved_sys_man ===============
[16:28:26] [PASSED] Buffer object for userspace
[16:28:26] [PASSED] Kernel buffer object
[16:28:26] [PASSED] Shared buffer object
[16:28:26] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[16:28:26] ============== ttm_bo_init_reserved_mock_man ==============
[16:28:26] [PASSED] Buffer object for userspace
[16:28:26] [PASSED] Kernel buffer object
[16:28:26] [PASSED] Shared buffer object
[16:28:26] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[16:28:26] [PASSED] ttm_bo_init_reserved_resv
[16:28:26] ================== ttm_bo_validate_basic ==================
[16:28:26] [PASSED] Buffer object for userspace
[16:28:26] [PASSED] Kernel buffer object
[16:28:26] [PASSED] Shared buffer object
[16:28:26] ============== [PASSED] ttm_bo_validate_basic ==============
[16:28:26] [PASSED] ttm_bo_validate_invalid_placement
[16:28:26] ============= ttm_bo_validate_same_placement ==============
[16:28:26] [PASSED] System manager
[16:28:26] [PASSED] VRAM manager
[16:28:26] ========= [PASSED] ttm_bo_validate_same_placement ==========
[16:28:26] [PASSED] ttm_bo_validate_failed_alloc
[16:28:26] [PASSED] ttm_bo_validate_pinned
[16:28:26] [PASSED] ttm_bo_validate_busy_placement
[16:28:26] ================ ttm_bo_validate_multihop =================
[16:28:26] [PASSED] Buffer object for userspace
[16:28:26] [PASSED] Kernel buffer object
[16:28:26] [PASSED] Shared buffer object
[16:28:26] ============ [PASSED] ttm_bo_validate_multihop =============
[16:28:26] ========== ttm_bo_validate_no_placement_signaled ==========
[16:28:26] [PASSED] Buffer object in system domain, no page vector
[16:28:26] [PASSED] Buffer object in system domain with an existing page vector
[16:28:26] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[16:28:26] ======== ttm_bo_validate_no_placement_not_signaled ========
[16:28:26] [PASSED] Buffer object for userspace
[16:28:26] [PASSED] Kernel buffer object
[16:28:26] [PASSED] Shared buffer object
[16:28:26] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[16:28:26] [PASSED] ttm_bo_validate_move_fence_signaled
[16:28:26] ========= ttm_bo_validate_move_fence_not_signaled =========
[16:28:26] [PASSED] Waits for GPU
[16:28:26] [PASSED] Tries to lock straight away
[16:28:26] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[16:28:26] [PASSED] ttm_bo_validate_swapout
[16:28:26] [PASSED] ttm_bo_validate_happy_evict
[16:28:26] [PASSED] ttm_bo_validate_all_pinned_evict
[16:28:26] [PASSED] ttm_bo_validate_allowed_only_evict
[16:28:26] [PASSED] ttm_bo_validate_deleted_evict
[16:28:26] [PASSED] ttm_bo_validate_busy_domain_evict
[16:28:26] [PASSED] ttm_bo_validate_evict_gutting
[16:28:26] [PASSED] ttm_bo_validate_recrusive_evict
[16:28:26] ================= [PASSED] ttm_bo_validate =================
[16:28:26] ============================================================
[16:28:26] Testing complete. Ran 102 tests: passed: 102
[16:28:26] Elapsed time: 10.433s total, 1.786s configuring, 8.030s building, 0.503s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
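The suites above can be re-run locally from a kernel checkout with the same wrapper the CI invokes, pointing --kunitconfig at the relevant fragment (a sketch only; the /kernel prefix in the trace above is the CI workspace, so adjust paths to your tree):
$ ./tools/testing/kunit/kunit.py run \
      --kunitconfig drivers/gpu/drm/ttm/tests/.kunitconfig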
^ permalink raw reply [flat|nested] 17+ messages in thread* ✗ CI.Build: failure for Replace xe_hmm with gpusvm (rev4)
2025-05-12 15:06 [PATCH v4 0/8] Replace xe_hmm with gpusvm Matthew Auld
` (10 preceding siblings ...)
2025-05-12 16:28 ` ✓ CI.KUnit: success " Patchwork
@ 2025-05-12 16:38 ` Patchwork
2025-05-13 11:04 ` ✗ CI.Patch_applied: " Patchwork
2025-05-26 21:59 ` Patchwork
13 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-05-12 16:38 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-xe
== Series Details ==
Series: Replace xe_hmm with gpusvm (rev4)
URL : https://patchwork.freedesktop.org/series/146553/
State : failure
== Summary ==
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_arpreply.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_mark.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_dnat.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_vlan.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_arpreply.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_mark.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_dnat.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_redirect.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_snat.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_log.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_nflog.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_redirect.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_snat.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_log.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/netfilter/ebt_nflog.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/bridge.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/br_netfilter.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sunrpc/sunrpc.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sunrpc/auth_gss/auth_rpcgss.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/br_netfilter.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sunrpc/auth_gss/rpcsec_gss_krb5.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/kcm/kcm.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_core.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_ip.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/bridge/bridge.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_netlink.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sunrpc/auth_gss/auth_rpcgss.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sunrpc/auth_gss/rpcsec_gss_krb5.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/kcm/kcm.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_core.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_eth.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sunrpc/sunrpc.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_ip.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_netlink.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_debugfs.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_ip6.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/dccp/dccp.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_eth.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/dccp/dccp_ipv4.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_debugfs.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/l2tp/l2tp_ip6.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/dccp/dccp.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/dccp/dccp_ipv6.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/dccp/dccp_ipv4.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/dccp/dccp_diag.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sctp/sctp.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/dccp/dccp_ipv6.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sctp/sctp_diag.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/rds/rds.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/dccp/dccp_diag.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/rds/rds_tcp.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/9p/9pnet.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sctp/sctp_diag.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/9p/9pnet_fd.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/9p/9pnet_xen.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/rds/rds.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/sctp/sctp.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/rds/rds_tcp.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/9p/9pnet.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/9p/9pnet_fd.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/9p/9pnet_virtio.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/9p/9pnet_xen.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/psample/psample.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vsock.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vsock_diag.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/9p/9pnet_virtio.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/psample/psample.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vsock.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vsock_diag.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vmw_vsock_vmci_transport.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vmw_vsock_virtio_transport.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vmw_vsock_virtio_transport_common.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/hv_sock.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vsock_loopback.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vmw_vsock_vmci_transport.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vmw_vsock_virtio_transport.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vmw_vsock_virtio_transport_common.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/hv_sock.ko
INSTALL debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/virt/lib/irqbypass.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/net/vmw_vsock/vsock_loopback.ko
STRIP debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+/kernel/virt/lib/irqbypass.ko
DEPMOD debian/linux-image-6.15.0-rc6-xe+/lib/modules/6.15.0-rc6-xe+
dpkg-deb: building package 'linux-headers-6.15.0-rc6-xe+' in '../linux-headers-6.15.0-rc6-xe+_6.15.0~rc6-g5ed3c4000526-2_amd64.deb'.
dpkg-deb: building package 'linux-image-6.15.0-rc6-xe+' in '../linux-image-6.15.0-rc6-xe+_6.15.0~rc6-g5ed3c4000526-2_amd64.deb'.
dpkg-deb: building package 'linux-image-6.15.0-rc6-xe+-dbg' in '../linux-image-6.15.0-rc6-xe+-dbg_6.15.0~rc6-g5ed3c4000526-2_amd64.deb'.
dpkg-genbuildinfo --build=binary -O../linux-upstream_6.15.0~rc6-g5ed3c4000526-2_amd64.buildinfo
dpkg-genchanges --build=binary -O../linux-upstream_6.15.0~rc6-g5ed3c4000526-2_amd64.changes
dpkg-genchanges: info: binary-only upload (no source code included)
dpkg-source --after-build .
dpkg-buildpackage: info: binary-only upload (no source included)
make[1]: Leaving directory '/kernel/build64-debug'
+ mkdir -p kernel-debug/ kernel-debug/deb
+ mv kernel-debug.tar.gz kernel-debug/
+ mv linux-headers-6.15.0-rc6-xe+_6.15.0~rc6-g5ed3c4000526-2_amd64.deb linux-image-6.15.0-rc6-xe+_6.15.0~rc6-g5ed3c4000526-2_amd64.deb linux-image-6.15.0-rc6-xe+-dbg_6.15.0~rc6-g5ed3c4000526-2_amd64.deb linux-libc-dev_6.15.0~rc6-g5ed3c4000526-2_amd64.deb kernel-debug/deb/
+ sync
[+] Finished building and packaging 'debug'!
+ echo '[+] Finished building and packaging '\''debug'\''!'
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 17+ messages in thread* ✗ CI.Patch_applied: failure for Replace xe_hmm with gpusvm (rev4)
2025-05-12 15:06 [PATCH v4 0/8] Replace xe_hmm with gpusvm Matthew Auld
` (11 preceding siblings ...)
2025-05-12 16:38 ` ✗ CI.Build: failure " Patchwork
@ 2025-05-13 11:04 ` Patchwork
2025-05-26 21:59 ` Patchwork
13 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-05-13 11:04 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-xe
== Series Details ==
Series: Replace xe_hmm with gpusvm (rev4)
URL : https://patchwork.freedesktop.org/series/146553/
State : failure
== Summary ==
=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: c9de1544553b drm-tip: 2025y-05m-13d-08h-26m-34s UTC integration manifest
=== git am output follows ===
error: patch failed: drivers/gpu/drm/drm_gpusvm.c:1362
error: drivers/gpu/drm/drm_gpusvm.c: patch does not apply
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Applying: drm/gpusvm: fix hmm_pfn_to_map_order() usage
Applying: drm/gpusvm: use more selective dma dir in get_pages()
Patch failed at 0002 drm/gpusvm: use more selective dma dir in get_pages()
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
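The usual way forward after a failure like this is to inspect the failing patch, fix up the conflicting file by hand, and then let am continue (a generic sketch of that flow, not commands the CI ran; the file path is taken from the errors above):
$ git am --show-current-patch=diff        # see what failed to apply
$ $EDITOR drivers/gpu/drm/drm_gpusvm.c    # resolve the rejected hunks
$ git add drivers/gpu/drm/drm_gpusvm.c
$ git am --continue                       # or: git am --skip / git am --abort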
^ permalink raw reply [flat|nested] 17+ messages in thread* ✗ CI.Patch_applied: failure for Replace xe_hmm with gpusvm (rev4)
2025-05-12 15:06 [PATCH v4 0/8] Replace xe_hmm with gpusvm Matthew Auld
` (12 preceding siblings ...)
2025-05-13 11:04 ` ✗ CI.Patch_applied: " Patchwork
@ 2025-05-26 21:59 ` Patchwork
13 siblings, 0 replies; 17+ messages in thread
From: Patchwork @ 2025-05-26 21:59 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-xe
== Series Details ==
Series: Replace xe_hmm with gpusvm (rev4)
URL : https://patchwork.freedesktop.org/series/146553/
State : failure
== Summary ==
=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: 3e019cc3eaff drm-tip: 2025y-05m-26d-14h-49m-23s UTC integration manifest
=== git am output follows ===
error: patch failed: drivers/gpu/drm/drm_gpusvm.c:1362
error: drivers/gpu/drm/drm_gpusvm.c: patch does not apply
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Applying: drm/gpusvm: fix hmm_pfn_to_map_order() usage
Applying: drm/gpusvm: use more selective dma dir in get_pages()
Patch failed at 0002 drm/gpusvm: use more selective dma dir in get_pages()
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".
^ permalink raw reply [flat|nested] 17+ messages in thread