* [PATCH 0/4] Enable THP support in drm_pagemap
@ 2025-12-16 20:10 Francois Dugast
2025-12-16 20:10 ` [PATCH 1/4] mm/migrate: Add migrate_device_split_page Francois Dugast
` (8 more replies)
0 siblings, 9 replies; 29+ messages in thread
From: Francois Dugast @ 2025-12-16 20:10 UTC (permalink / raw)
To: intel-xe; +Cc: dri-devel, Francois Dugast
Use Balbir Singh's series for device-private THP support [1] and previous
preparation work in drm_pagemap [2] to add 2MB/THP support in xe. This leads
to significant performance improvements when using SVM with 2MB pages.
[1] https://lore.kernel.org/linux-mm/20251001065707.920170-1-balbirs@nvidia.com/
[2] https://patchwork.freedesktop.org/series/151754/
Francois Dugast (3):
drm/pagemap: Unlock and put folios when possible
drm/pagemap: Add helper to access zone_device_data
drm/pagemap: Enable THP support for GPU memory migration
Matthew Brost (1):
mm/migrate: Add migrate_device_split_page
drivers/gpu/drm/drm_gpusvm.c | 7 +-
drivers/gpu/drm/drm_pagemap.c | 146 +++++++++++++++++++++++++++-------
drivers/gpu/drm/xe/xe_svm.c | 5 ++
include/drm/drm_pagemap.h | 7 +-
include/linux/huge_mm.h | 3 +
include/linux/migrate.h | 1 +
mm/huge_memory.c | 6 +-
mm/migrate_device.c | 49 ++++++++++++
8 files changed, 188 insertions(+), 36 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
@ 2025-12-16 20:10 ` Francois Dugast
2025-12-16 20:34 ` Matthew Wilcox
2026-01-07 20:20 ` Zi Yan
2025-12-16 20:10 ` [PATCH 2/4] drm/pagemap: Unlock and put folios when possible Francois Dugast
` (7 subsequent siblings)
8 siblings, 2 replies; 29+ messages in thread
From: Francois Dugast @ 2025-12-16 20:10 UTC (permalink / raw)
To: intel-xe
Cc: dri-devel, Matthew Brost, Andrew Morton, Balbir Singh, linux-mm,
Francois Dugast
From: Matthew Brost <matthew.brost@intel.com>
Introduce migrate_device_split_page() to split a device page into
lower-order pages. This is used when a folio allocated at a higher order
is freed and later reallocated at a smaller order by the driver memory
manager.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: dri-devel@lists.freedesktop.org
Cc: linux-mm@kvack.org
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
include/linux/huge_mm.h | 3 +++
include/linux/migrate.h | 1 +
mm/huge_memory.c | 6 ++---
mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
4 files changed, 56 insertions(+), 3 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a4d9f964dfde..6ad8f359bc0d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
int folio_split_unmapped(struct folio *folio, unsigned int new_order);
unsigned int min_order_for_split(struct folio *folio);
int split_folio_to_list(struct folio *folio, struct list_head *list);
+int __split_unmapped_folio(struct folio *folio, int new_order,
+ struct page *split_at, struct xa_state *xas,
+ struct address_space *mapping, enum split_type split_type);
int folio_check_splittable(struct folio *folio, unsigned int new_order,
enum split_type split_type);
int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 26ca00c325d9..ec65e4fd5f88 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
unsigned long npages);
void migrate_device_finalize(unsigned long *src_pfns,
unsigned long *dst_pfns, unsigned long npages);
+int migrate_device_split_page(struct page *page);
#endif /* CONFIG_MIGRATION */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 40cf59301c21..7ded35a3ecec 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
* Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
* split but not to @new_order, the caller needs to check)
*/
-static int __split_unmapped_folio(struct folio *folio, int new_order,
- struct page *split_at, struct xa_state *xas,
- struct address_space *mapping, enum split_type split_type)
+int __split_unmapped_folio(struct folio *folio, int new_order,
+ struct page *split_at, struct xa_state *xas,
+ struct address_space *mapping, enum split_type split_type)
{
const bool is_anon = folio_test_anon(folio);
int old_order = folio_order(folio);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 23379663b1e1..eb0f0e938947 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
EXPORT_SYMBOL(migrate_vma_setup);
#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+/**
+ * migrate_device_split_page() - Split device page
+ * @page: Device page to split
+ *
+ * Splits a device page into smaller pages. Typically called when reallocating a
+ * folio to a smaller size. Inherently racy: only safe if the caller ensures
+ * mutual exclusion within the page's folio (i.e., no other threads are using
+ * pages within the folio). Expected to be called on a free device page;
+ * restores all split-out pages to a free state.
+ */
+int migrate_device_split_page(struct page *page)
+{
+ struct folio *folio = page_folio(page);
+ struct dev_pagemap *pgmap = folio->pgmap;
+ struct page *unlock_page = folio_page(folio, 0);
+ unsigned int order = folio_order(folio), i;
+ int ret = 0;
+
+ VM_BUG_ON_FOLIO(!order, folio);
+ VM_BUG_ON_FOLIO(!folio_is_device_private(folio), folio);
+ VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
+
+ folio_lock(folio);
+
+ ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
+ if (ret) {
+ /*
+ * We can't fail here unless the caller doesn't know what they
+ * are doing.
+ */
+ VM_BUG_ON_FOLIO(ret, folio);
+
+ return ret;
+ }
+
+ for (i = 0; i < 1 << order; ++i, ++unlock_page) {
+ page_folio(unlock_page)->pgmap = pgmap;
+ folio_unlock(page_folio(unlock_page));
+ }
+
+ return 0;
+}
+
/**
* migrate_vma_insert_huge_pmd_page: Insert a huge folio into @migrate->vma->vm_mm
* at @addr. folio is already allocated as a part of the migration process with
@@ -927,6 +970,11 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
return ret;
}
#else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
+int migrate_device_split_page(struct page *page)
+{
+ return 0;
+}
+
static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
unsigned long addr,
struct page *page,
@@ -943,6 +991,7 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
return 0;
}
#endif
+EXPORT_SYMBOL(migrate_device_split_page);
static unsigned long migrate_vma_nr_pages(unsigned long *src)
{
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 2/4] drm/pagemap: Unlock and put folios when possible
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
2025-12-16 20:10 ` [PATCH 1/4] mm/migrate: Add migrate_device_split_page Francois Dugast
@ 2025-12-16 20:10 ` Francois Dugast
2025-12-18 21:59 ` Matthew Brost
2025-12-16 20:10 ` [PATCH 3/4] drm/pagemap: Add helper to access zone_device_data Francois Dugast
` (6 subsequent siblings)
8 siblings, 1 reply; 29+ messages in thread
From: Francois Dugast @ 2025-12-16 20:10 UTC (permalink / raw)
To: intel-xe; +Cc: dri-devel, Francois Dugast, Matthew Brost
If the page is part of a folio, unlock and put the whole folio at once
instead of handling individual pages one after the other. This reduces the
number of operations once device THPs are in use.
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 26 +++++++++++++++++---------
1 file changed, 17 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 37d7cfbbb3e8..491de9275add 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -149,15 +149,15 @@ static void drm_pagemap_zdd_put(struct drm_pagemap_zdd *zdd)
}
/**
- * drm_pagemap_migration_unlock_put_page() - Put a migration page
- * @page: Pointer to the page to put
+ * drm_pagemap_migration_unlock_put_folio() - Put a migration folio
+ * @folio: Pointer to the folio to put
*
- * This function unlocks and puts a page.
+ * This function unlocks and puts a folio.
*/
-static void drm_pagemap_migration_unlock_put_page(struct page *page)
+static void drm_pagemap_migration_unlock_put_folio(struct folio *folio)
{
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
}
/**
@@ -172,15 +172,23 @@ static void drm_pagemap_migration_unlock_put_pages(unsigned long npages,
{
unsigned long i;
- for (i = 0; i < npages; ++i) {
+ for (i = 0; i < npages;) {
struct page *page;
+ struct folio *folio;
+ unsigned int order = 0;
if (!migrate_pfn[i])
- continue;
+ goto next;
page = migrate_pfn_to_page(migrate_pfn[i]);
- drm_pagemap_migration_unlock_put_page(page);
+ folio = page_folio(page);
+ order = folio_order(folio);
+
+ drm_pagemap_migration_unlock_put_folio(folio);
migrate_pfn[i] = 0;
+
+next:
+ i += NR_PAGES(order);
}
}
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 3/4] drm/pagemap: Add helper to access zone_device_data
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
2025-12-16 20:10 ` [PATCH 1/4] mm/migrate: Add migrate_device_split_page Francois Dugast
2025-12-16 20:10 ` [PATCH 2/4] drm/pagemap: Unlock and put folios when possible Francois Dugast
@ 2025-12-16 20:10 ` Francois Dugast
2025-12-18 22:19 ` Matthew Brost
2025-12-19 20:13 ` Matthew Brost
2025-12-16 20:10 ` [PATCH 4/4] drm/pagemap: Enable THP support for GPU memory migration Francois Dugast
` (5 subsequent siblings)
8 siblings, 2 replies; 29+ messages in thread
From: Francois Dugast @ 2025-12-16 20:10 UTC (permalink / raw)
To: intel-xe; +Cc: dri-devel, Francois Dugast, Matthew Brost
This new helper ensures that all accesses to zone_device_data use the
correct API, whether or not the page is part of a folio.
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 7 +++++--
drivers/gpu/drm/drm_pagemap.c | 32 +++++++++++++++++++++++++-------
include/drm/drm_pagemap.h | 2 ++
3 files changed, 32 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 39c8c50401dd..d0ff6b65e543 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1366,12 +1366,15 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
order = drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
if (is_device_private_page(page) ||
is_device_coherent_page(page)) {
+ struct drm_pagemap_zdd *__zdd =
+ drm_pagemap_page_zone_device_data(page);
+
if (!ctx->allow_mixed &&
- zdd != page->zone_device_data && i > 0) {
+ zdd != __zdd && i > 0) {
err = -EOPNOTSUPP;
goto err_unmap;
}
- zdd = page->zone_device_data;
+ zdd = __zdd;
if (pagemap != page_pgmap(page)) {
if (i > 0) {
err = -EOPNOTSUPP;
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 491de9275add..b71e47136112 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -192,6 +192,22 @@ static void drm_pagemap_migration_unlock_put_pages(unsigned long npages,
}
}
+/**
+ * drm_pagemap_page_zone_device_data() - Page to zone_device_data
+ * @page: Pointer to the page
+ *
+ * Return: Page's zone_device_data
+ */
+void *drm_pagemap_page_zone_device_data(struct page *page)
+{
+ struct folio *folio = page_folio(page);
+
+ if (folio_order(folio))
+ return folio_zone_device_data(folio);
+
+ return page->zone_device_data;
+}
+
/**
* drm_pagemap_get_devmem_page() - Get a reference to a device memory page
* @page: Pointer to the page
@@ -481,8 +497,8 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
goto next;
if (fault_page) {
- if (src_page->zone_device_data !=
- fault_page->zone_device_data)
+ if (drm_pagemap_page_zone_device_data(src_page) !=
+ drm_pagemap_page_zone_device_data(fault_page))
goto next;
}
@@ -670,7 +686,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
int i, err = 0;
if (page) {
- zdd = page->zone_device_data;
+ zdd = drm_pagemap_page_zone_device_data(page);
if (time_before64(get_jiffies_64(),
zdd->devmem_allocation->timeslice_expiration))
return 0;
@@ -722,7 +738,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
if (!page)
goto err_finalize;
}
- zdd = page->zone_device_data;
+ zdd = drm_pagemap_page_zone_device_data(page);
ops = zdd->devmem_allocation->ops;
dev = zdd->devmem_allocation->dev;
@@ -768,7 +784,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
*/
static void drm_pagemap_folio_free(struct folio *folio)
{
- drm_pagemap_zdd_put(folio->page.zone_device_data);
+ struct page *page = folio_page(folio, 0);
+
+ drm_pagemap_zdd_put(drm_pagemap_page_zone_device_data(page));
}
/**
@@ -784,7 +802,7 @@ static void drm_pagemap_folio_free(struct folio *folio)
*/
static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
{
- struct drm_pagemap_zdd *zdd = vmf->page->zone_device_data;
+ struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(vmf->page);
int err;
err = __drm_pagemap_migrate_to_ram(vmf->vma,
@@ -847,7 +865,7 @@ EXPORT_SYMBOL_GPL(drm_pagemap_devmem_init);
*/
struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page)
{
- struct drm_pagemap_zdd *zdd = page->zone_device_data;
+ struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
return zdd->devmem_allocation->dpagemap;
}
diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
index f6e7e234c089..3a8d0e1cef43 100644
--- a/include/drm/drm_pagemap.h
+++ b/include/drm/drm_pagemap.h
@@ -245,4 +245,6 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
struct mm_struct *mm,
unsigned long timeslice_ms);
+void *drm_pagemap_page_zone_device_data(struct page *page);
+
#endif
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH 4/4] drm/pagemap: Enable THP support for GPU memory migration
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
` (2 preceding siblings ...)
2025-12-16 20:10 ` [PATCH 3/4] drm/pagemap: Add helper to access zone_device_data Francois Dugast
@ 2025-12-16 20:10 ` Francois Dugast
2025-12-18 22:24 ` Matthew Brost
2025-12-17 0:14 ` ✗ CI.checkpatch: warning for Enable THP support in drm_pagemap Patchwork
` (4 subsequent siblings)
8 siblings, 1 reply; 29+ messages in thread
From: Francois Dugast @ 2025-12-16 20:10 UTC (permalink / raw)
To: intel-xe
Cc: dri-devel, Francois Dugast, Matthew Brost, Thomas Hellström,
Michal Mrozek
This enables support for Transparent Huge Pages (THP) for device pages by
using MIGRATE_VMA_SELECT_COMPOUND during migration. It removes the need to
split folios and to loop multiple times over all pages to perform the
required operations at page level. Instead, we rely on the newly introduced
support for higher orders in drm_pagemap and on folio-level APIs.
In Xe, this drastically improves performance when using SVM. The GT stats
below collected after a 2MB page fault show overall servicing is more than
7 times faster, and thanks to reduced CPU overhead the time spent on the
actual copy goes from 23% without THP to 80% with THP:
Without THP:
svm_2M_pagefault_us: 966
svm_2M_migrate_us: 942
svm_2M_device_copy_us: 223
svm_2M_get_pages_us: 9
svm_2M_bind_us: 10
With THP:
svm_2M_pagefault_us: 132
svm_2M_migrate_us: 128
svm_2M_device_copy_us: 106
svm_2M_get_pages_us: 1
svm_2M_bind_us: 2
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 88 +++++++++++++++++++++++++++++------
drivers/gpu/drm/xe/xe_svm.c | 5 ++
include/drm/drm_pagemap.h | 5 +-
3 files changed, 83 insertions(+), 15 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index b71e47136112..797ec2094fdf 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -211,16 +211,20 @@ void *drm_pagemap_page_zone_device_data(struct page *page)
/**
* drm_pagemap_get_devmem_page() - Get a reference to a device memory page
* @page: Pointer to the page
+ * @order: Order
* @zdd: Pointer to the GPU SVM zone device data
*
* This function associates the given page with the specified GPU SVM zone
* device data and initializes it for zone device usage.
*/
static void drm_pagemap_get_devmem_page(struct page *page,
+ unsigned int order,
struct drm_pagemap_zdd *zdd)
{
- page->zone_device_data = drm_pagemap_zdd_get(zdd);
- zone_device_page_init(page, 0);
+ struct folio *folio = page_folio(page);
+
+ folio_set_zone_device_data(folio, drm_pagemap_zdd_get(zdd));
+ zone_device_page_init(page, order);
}
/**
@@ -345,11 +349,13 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
void *pgmap_owner)
{
const struct drm_pagemap_devmem_ops *ops = devmem_allocation->ops;
+ struct drm_pagemap *dpagemap = devmem_allocation->dpagemap;
struct migrate_vma migrate = {
.start = start,
.end = end,
.pgmap_owner = pgmap_owner,
- .flags = MIGRATE_VMA_SELECT_SYSTEM,
+ .flags = MIGRATE_VMA_SELECT_SYSTEM
+ | MIGRATE_VMA_SELECT_COMPOUND,
};
unsigned long i, npages = npages_in_range(start, end);
struct vm_area_struct *vas;
@@ -409,11 +415,6 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
goto err_free;
}
- if (migrate.cpages != npages) {
- err = -EBUSY;
- goto err_finalize;
- }
-
err = ops->populate_devmem_pfn(devmem_allocation, npages, migrate.dst);
if (err)
goto err_finalize;
@@ -424,13 +425,38 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
if (err)
goto err_finalize;
- for (i = 0; i < npages; ++i) {
+ mutex_lock(&dpagemap->folio_split_lock);
+ for (i = 0; i < npages;) {
+ unsigned long j;
struct page *page = pfn_to_page(migrate.dst[i]);
+ unsigned int order;
pages[i] = page;
migrate.dst[i] = migrate_pfn(migrate.dst[i]);
- drm_pagemap_get_devmem_page(page, zdd);
+
+ if (migrate.src[i] & MIGRATE_PFN_COMPOUND) {
+ order = HPAGE_PMD_ORDER;
+
+ migrate.dst[i] |= MIGRATE_PFN_COMPOUND;
+
+ drm_pagemap_get_devmem_page(page, order, zdd);
+
+ for (j = 1; j < NR_PAGES(order) && i + j < npages; j++)
+ migrate.dst[i + j] = 0;
+
+ } else {
+ order = 0;
+
+ if (folio_order(page_folio(page)))
+ migrate_device_split_page(page);
+
+ zone_device_page_init(page, 0);
+ page->zone_device_data = drm_pagemap_zdd_get(zdd);
+ }
+
+ i += NR_PAGES(order);
}
+ mutex_unlock(&dpagemap->folio_split_lock);
err = ops->copy_to_devmem(pages, pagemap_addr, npages);
if (err)
@@ -516,6 +542,8 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
page = folio_page(folio, 0);
mpfn[i] = migrate_pfn(page_to_pfn(page));
+ if (order)
+ mpfn[i] |= MIGRATE_PFN_COMPOUND;
next:
if (page)
addr += page_size(page);
@@ -617,8 +645,15 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
if (err)
goto err_finalize;
- for (i = 0; i < npages; ++i)
+ for (i = 0; i < npages;) {
+ unsigned int order = 0;
+
pages[i] = migrate_pfn_to_page(src[i]);
+ if (pages[i])
+ order = folio_order(page_folio(pages[i]));
+
+ i += NR_PAGES(order);
+ }
err = ops->copy_to_ram(pages, pagemap_addr, npages);
if (err)
@@ -671,8 +706,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
struct migrate_vma migrate = {
.vma = vas,
.pgmap_owner = device_private_page_owner,
- .flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
- MIGRATE_VMA_SELECT_DEVICE_COHERENT,
+ .flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE
+ | MIGRATE_VMA_SELECT_DEVICE_COHERENT
+ | MIGRATE_VMA_SELECT_COMPOUND,
.fault_page = page,
};
struct drm_pagemap_zdd *zdd;
@@ -753,8 +789,15 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
if (err)
goto err_finalize;
- for (i = 0; i < npages; ++i)
+ for (i = 0; i < npages;) {
+ unsigned int order = 0;
+
pages[i] = migrate_pfn_to_page(migrate.src[i]);
+ if (pages[i])
+ order = folio_order(page_folio(pages[i]));
+
+ i += NR_PAGES(order);
+ }
err = ops->copy_to_ram(pages, pagemap_addr, npages);
if (err)
@@ -813,9 +856,26 @@ static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
return err ? VM_FAULT_SIGBUS : 0;
}
+static void drm_pagemap_folio_split(struct folio *orig_folio, struct folio *new_folio)
+{
+ struct drm_pagemap_zdd *zdd;
+
+ if (!new_folio)
+ return;
+
+ new_folio->pgmap = orig_folio->pgmap;
+ zdd = folio_zone_device_data(orig_folio);
+ if (folio_order(new_folio))
+ folio_set_zone_device_data(new_folio, drm_pagemap_zdd_get(zdd));
+ else
+ folio_page(new_folio, 0)->zone_device_data =
+ drm_pagemap_zdd_get(zdd);
+}
+
static const struct dev_pagemap_ops drm_pagemap_pagemap_ops = {
.folio_free = drm_pagemap_folio_free,
.migrate_to_ram = drm_pagemap_migrate_to_ram,
+ .folio_split = drm_pagemap_folio_split,
};
/**
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 93550c7c84ac..037c77de2757 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -4,6 +4,7 @@
*/
#include <drm/drm_drv.h>
+#include <drm/drm_managed.h>
#include "xe_bo.h"
#include "xe_exec_queue_types.h"
@@ -1470,6 +1471,10 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
void *addr;
int ret;
+ ret = drmm_mutex_init(&tile->xe->drm, &vr->dpagemap.folio_split_lock);
+ if (ret)
+ return ret;
+
res = devm_request_free_mem_region(dev, &iomem_resource,
vr->usable_size);
if (IS_ERR(res)) {
diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
index 3a8d0e1cef43..82b9c0e6392e 100644
--- a/include/drm/drm_pagemap.h
+++ b/include/drm/drm_pagemap.h
@@ -129,11 +129,14 @@ struct drm_pagemap_ops {
* struct drm_pagemap: Additional information for a struct dev_pagemap
* used for device p2p handshaking.
* @ops: The struct drm_pagemap_ops.
- * @dev: The struct drevice owning the device-private memory.
+ * @dev: The struct device owning the device-private memory.
+ * @folio_split_lock: Lock to protect device folio splitting.
*/
struct drm_pagemap {
const struct drm_pagemap_ops *ops;
struct device *dev;
+ /* Protect device folio splitting */
+ struct mutex folio_split_lock;
};
struct drm_pagemap_devmem;
--
2.43.0
^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2025-12-16 20:10 ` [PATCH 1/4] mm/migrate: Add migrate_device_split_page Francois Dugast
@ 2025-12-16 20:34 ` Matthew Wilcox
2025-12-16 21:39 ` Matthew Brost
2026-01-07 20:20 ` Zi Yan
1 sibling, 1 reply; 29+ messages in thread
From: Matthew Wilcox @ 2025-12-16 20:34 UTC (permalink / raw)
To: Francois Dugast
Cc: intel-xe, dri-devel, Matthew Brost, Andrew Morton, Balbir Singh,
linux-mm
On Tue, Dec 16, 2025 at 09:10:11PM +0100, Francois Dugast wrote:
> + ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
We're trying to get rid of uniform splits. Why do you need this to be
uniform?
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2025-12-16 20:34 ` Matthew Wilcox
@ 2025-12-16 21:39 ` Matthew Brost
2026-01-06 2:39 ` Matthew Brost
0 siblings, 1 reply; 29+ messages in thread
From: Matthew Brost @ 2025-12-16 21:39 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Francois Dugast, intel-xe, dri-devel, Andrew Morton, Balbir Singh,
linux-mm
On Tue, Dec 16, 2025 at 08:34:46PM +0000, Matthew Wilcox wrote:
> On Tue, Dec 16, 2025 at 09:10:11PM +0100, Francois Dugast wrote:
> > + ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
>
> We're trying to get rid of uniform splits. Why do you need this to be
> uniform?
It’s very possible we’re doing this incorrectly due to a lack of core MM
experience. I believe Zi Yan suggested this approach (use
__split_unmapped_folio) a while back.
Let me start by explaining what we’re trying to do and see if there’s a
better suggestion for how to accomplish it.
This covers the case where a GPU device page was allocated as a THP
(e.g., we call zone_device_folio_init with an order of 9). Later, this
page is freed/unmapped and then reallocated for a CPU VMA that is
smaller than a THP (e.g., we’d allocate either 4KB or 64KB based on
CPU VMA size alignment). At this point, we need to split the device
folio so we can migrate data into 4KB device pages.
Would SPLIT_TYPE_NON_UNIFORM work here? Or do you have another
suggestion for splitting the folio aside from __split_unmapped_folio?
Matt
^ permalink raw reply [flat|nested] 29+ messages in thread
* ✗ CI.checkpatch: warning for Enable THP support in drm_pagemap
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
` (3 preceding siblings ...)
2025-12-16 20:10 ` [PATCH 4/4] drm/pagemap: Enable THP support for GPU memory migration Francois Dugast
@ 2025-12-17 0:14 ` Patchwork
2025-12-17 0:16 ` ✓ CI.KUnit: success " Patchwork
` (3 subsequent siblings)
8 siblings, 0 replies; 29+ messages in thread
From: Patchwork @ 2025-12-17 0:14 UTC (permalink / raw)
To: Francois Dugast; +Cc: intel-xe
== Series Details ==
Series: Enable THP support in drm_pagemap
URL : https://patchwork.freedesktop.org/series/159119/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
8f50e69d0ce3656564bbdf8b3e213d61470d463f
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 045a2d7f4d8ee2fa2aae61b920215d726a7a81e3
Author: Francois Dugast <francois.dugast@intel.com>
Date: Tue Dec 16 21:10:14 2025 +0100
drm/pagemap: Enable THP support for GPU memory migration
This enables support for Transparent Huge Pages (THP) for device pages by
using MIGRATE_VMA_SELECT_COMPOUND during migration. It removes the need to
split folios and loop multiple times over all pages to perform required
operations at page level. Instead, we rely on newly introduced support for
higher orders in drm_pagemap and folio-level API.
In Xe, this drastically improves performance when using SVM. The GT stats
below collected after a 2MB page fault show overall servicing is more than
7 times faster, and thanks to reduced CPU overhead the time spent on the
actual copy goes from 23% without THP to 80% with THP:
Without THP:
svm_2M_pagefault_us: 966
svm_2M_migrate_us: 942
svm_2M_device_copy_us: 223
svm_2M_get_pages_us: 9
svm_2M_bind_us: 10
With THP:
svm_2M_pagefault_us: 132
svm_2M_migrate_us: 128
svm_2M_device_copy_us: 106
svm_2M_get_pages_us: 1
svm_2M_bind_us: 2
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
+ /mt/dim checkpatch 72428bdb20b6c86beaeddb9d69bf698d0697aa41 drm-intel
221fce297833 mm/migrate: Add migrate_device_split_page
-:86: WARNING:AVOID_BUG: Do not crash the kernel unless it is absolutely unavoidable--use WARN_ON_ONCE() plus recovery code (if feasible) instead of BUG() or variants
#86: FILE: mm/migrate_device.c:796:
+ VM_BUG_ON_FOLIO(!order, folio);
-:87: WARNING:AVOID_BUG: Do not crash the kernel unless it is absolutely unavoidable--use WARN_ON_ONCE() plus recovery code (if feasible) instead of BUG() or variants
#87: FILE: mm/migrate_device.c:797:
+ VM_BUG_ON_FOLIO(!folio_is_device_private(folio), folio);
-:88: WARNING:AVOID_BUG: Do not crash the kernel unless it is absolutely unavoidable--use WARN_ON_ONCE() plus recovery code (if feasible) instead of BUG() or variants
#88: FILE: mm/migrate_device.c:798:
+ VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
-:98: WARNING:AVOID_BUG: Do not crash the kernel unless it is absolutely unavoidable--use WARN_ON_ONCE() plus recovery code (if feasible) instead of BUG() or variants
#98: FILE: mm/migrate_device.c:808:
+ VM_BUG_ON_FOLIO(ret, folio);
total: 0 errors, 4 warnings, 0 checks, 95 lines checked
b0e9b0730b6b drm/pagemap: Unlock and put folios when possible
78646c2647f6 drm/pagemap: Add helper to access zone_device_data
045a2d7f4d8e drm/pagemap: Enable THP support for GPU memory migration
^ permalink raw reply [flat|nested] 29+ messages in thread
* ✓ CI.KUnit: success for Enable THP support in drm_pagemap
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
` (4 preceding siblings ...)
2025-12-17 0:14 ` ✗ CI.checkpatch: warning for Enable THP support in drm_pagemap Patchwork
@ 2025-12-17 0:16 ` Patchwork
2025-12-17 0:31 ` ✗ CI.checksparse: warning " Patchwork
` (2 subsequent siblings)
8 siblings, 0 replies; 29+ messages in thread
From: Patchwork @ 2025-12-17 0:16 UTC (permalink / raw)
To: Francois Dugast; +Cc: intel-xe
== Series Details ==
Series: Enable THP support in drm_pagemap
URL : https://patchwork.freedesktop.org/series/159119/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[00:14:57] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[00:15:01] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[00:15:32] Starting KUnit Kernel (1/1)...
[00:15:32] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[00:15:33] ================== guc_buf (11 subtests) ===================
[00:15:33] [PASSED] test_smallest
[00:15:33] [PASSED] test_largest
[00:15:33] [PASSED] test_granular
[00:15:33] [PASSED] test_unique
[00:15:33] [PASSED] test_overlap
[00:15:33] [PASSED] test_reusable
[00:15:33] [PASSED] test_too_big
[00:15:33] [PASSED] test_flush
[00:15:33] [PASSED] test_lookup
[00:15:33] [PASSED] test_data
[00:15:33] [PASSED] test_class
[00:15:33] ===================== [PASSED] guc_buf =====================
[00:15:33] =================== guc_dbm (7 subtests) ===================
[00:15:33] [PASSED] test_empty
[00:15:33] [PASSED] test_default
[00:15:33] ======================== test_size ========================
[00:15:33] [PASSED] 4
[00:15:33] [PASSED] 8
[00:15:33] [PASSED] 32
[00:15:33] [PASSED] 256
[00:15:33] ==================== [PASSED] test_size ====================
[00:15:33] ======================= test_reuse ========================
[00:15:33] [PASSED] 4
[00:15:33] [PASSED] 8
[00:15:33] [PASSED] 32
[00:15:33] [PASSED] 256
[00:15:33] =================== [PASSED] test_reuse ====================
[00:15:33] =================== test_range_overlap ====================
[00:15:33] [PASSED] 4
[00:15:33] [PASSED] 8
[00:15:33] [PASSED] 32
[00:15:33] [PASSED] 256
[00:15:33] =============== [PASSED] test_range_overlap ================
[00:15:33] =================== test_range_compact ====================
[00:15:33] [PASSED] 4
[00:15:33] [PASSED] 8
[00:15:33] [PASSED] 32
[00:15:33] [PASSED] 256
[00:15:33] =============== [PASSED] test_range_compact ================
[00:15:33] ==================== test_range_spare =====================
[00:15:33] [PASSED] 4
[00:15:33] [PASSED] 8
[00:15:33] [PASSED] 32
[00:15:33] [PASSED] 256
[00:15:33] ================ [PASSED] test_range_spare =================
[00:15:33] ===================== [PASSED] guc_dbm =====================
[00:15:33] =================== guc_idm (6 subtests) ===================
[00:15:33] [PASSED] bad_init
[00:15:33] [PASSED] no_init
[00:15:33] [PASSED] init_fini
[00:15:33] [PASSED] check_used
[00:15:33] [PASSED] check_quota
[00:15:33] [PASSED] check_all
[00:15:33] ===================== [PASSED] guc_idm =====================
[00:15:33] ================== no_relay (3 subtests) ===================
[00:15:33] [PASSED] xe_drops_guc2pf_if_not_ready
[00:15:33] [PASSED] xe_drops_guc2vf_if_not_ready
[00:15:33] [PASSED] xe_rejects_send_if_not_ready
[00:15:33] ==================== [PASSED] no_relay =====================
[00:15:33] ================== pf_relay (14 subtests) ==================
[00:15:33] [PASSED] pf_rejects_guc2pf_too_short
[00:15:33] [PASSED] pf_rejects_guc2pf_too_long
[00:15:33] [PASSED] pf_rejects_guc2pf_no_payload
[00:15:33] [PASSED] pf_fails_no_payload
[00:15:33] [PASSED] pf_fails_bad_origin
[00:15:33] [PASSED] pf_fails_bad_type
[00:15:33] [PASSED] pf_txn_reports_error
[00:15:33] [PASSED] pf_txn_sends_pf2guc
[00:15:33] [PASSED] pf_sends_pf2guc
[00:15:33] [SKIPPED] pf_loopback_nop
[00:15:33] [SKIPPED] pf_loopback_echo
[00:15:33] [SKIPPED] pf_loopback_fail
[00:15:33] [SKIPPED] pf_loopback_busy
[00:15:33] [SKIPPED] pf_loopback_retry
[00:15:33] ==================== [PASSED] pf_relay =====================
[00:15:33] ================== vf_relay (3 subtests) ===================
[00:15:33] [PASSED] vf_rejects_guc2vf_too_short
[00:15:33] [PASSED] vf_rejects_guc2vf_too_long
[00:15:33] [PASSED] vf_rejects_guc2vf_no_payload
[00:15:33] ==================== [PASSED] vf_relay =====================
[00:15:33] ================ pf_gt_config (6 subtests) =================
[00:15:33] [PASSED] fair_contexts_1vf
[00:15:33] [PASSED] fair_doorbells_1vf
[00:15:33] [PASSED] fair_ggtt_1vf
[00:15:33] ====================== fair_contexts ======================
[00:15:33] [PASSED] 1 VF
[00:15:33] [PASSED] 2 VFs
[00:15:33] [PASSED] 3 VFs
[00:15:33] [PASSED] 4 VFs
[00:15:33] [PASSED] 5 VFs
[00:15:33] [PASSED] 6 VFs
[00:15:33] [PASSED] 7 VFs
[00:15:33] [PASSED] 8 VFs
[00:15:33] [PASSED] 9 VFs
[00:15:33] [PASSED] 10 VFs
[00:15:33] [PASSED] 11 VFs
[00:15:33] [PASSED] 12 VFs
[00:15:33] [PASSED] 13 VFs
[00:15:33] [PASSED] 14 VFs
[00:15:33] [PASSED] 15 VFs
[00:15:33] [PASSED] 16 VFs
[00:15:33] [PASSED] 17 VFs
[00:15:33] [PASSED] 18 VFs
[00:15:33] [PASSED] 19 VFs
[00:15:33] [PASSED] 20 VFs
[00:15:33] [PASSED] 21 VFs
[00:15:33] [PASSED] 22 VFs
[00:15:33] [PASSED] 23 VFs
[00:15:33] [PASSED] 24 VFs
[00:15:33] [PASSED] 25 VFs
[00:15:33] [PASSED] 26 VFs
[00:15:33] [PASSED] 27 VFs
[00:15:33] [PASSED] 28 VFs
[00:15:33] [PASSED] 29 VFs
[00:15:33] [PASSED] 30 VFs
[00:15:33] [PASSED] 31 VFs
[00:15:33] [PASSED] 32 VFs
[00:15:33] [PASSED] 33 VFs
[00:15:33] [PASSED] 34 VFs
[00:15:33] [PASSED] 35 VFs
[00:15:33] [PASSED] 36 VFs
[00:15:33] [PASSED] 37 VFs
[00:15:33] [PASSED] 38 VFs
[00:15:33] [PASSED] 39 VFs
[00:15:33] [PASSED] 40 VFs
[00:15:33] [PASSED] 41 VFs
[00:15:33] [PASSED] 42 VFs
[00:15:33] [PASSED] 43 VFs
[00:15:33] [PASSED] 44 VFs
[00:15:33] [PASSED] 45 VFs
[00:15:33] [PASSED] 46 VFs
[00:15:33] [PASSED] 47 VFs
[00:15:33] [PASSED] 48 VFs
[00:15:33] [PASSED] 49 VFs
[00:15:33] [PASSED] 50 VFs
[00:15:33] [PASSED] 51 VFs
[00:15:33] [PASSED] 52 VFs
[00:15:33] [PASSED] 53 VFs
[00:15:33] [PASSED] 54 VFs
[00:15:33] [PASSED] 55 VFs
[00:15:33] [PASSED] 56 VFs
[00:15:33] [PASSED] 57 VFs
[00:15:33] [PASSED] 58 VFs
[00:15:33] [PASSED] 59 VFs
[00:15:33] [PASSED] 60 VFs
[00:15:33] [PASSED] 61 VFs
[00:15:33] [PASSED] 62 VFs
[00:15:33] [PASSED] 63 VFs
[00:15:33] ================== [PASSED] fair_contexts ==================
[00:15:33] ===================== fair_doorbells ======================
[00:15:33] [PASSED] 1 VF
[00:15:33] [PASSED] 2 VFs
[00:15:33] [PASSED] 3 VFs
[00:15:33] [PASSED] 4 VFs
[00:15:33] [PASSED] 5 VFs
[00:15:33] [PASSED] 6 VFs
[00:15:33] [PASSED] 7 VFs
[00:15:33] [PASSED] 8 VFs
[00:15:33] [PASSED] 9 VFs
[00:15:33] [PASSED] 10 VFs
[00:15:33] [PASSED] 11 VFs
[00:15:33] [PASSED] 12 VFs
[00:15:33] [PASSED] 13 VFs
[00:15:33] [PASSED] 14 VFs
[00:15:33] [PASSED] 15 VFs
[00:15:33] [PASSED] 16 VFs
[00:15:33] [PASSED] 17 VFs
[00:15:33] [PASSED] 18 VFs
[00:15:33] [PASSED] 19 VFs
[00:15:33] [PASSED] 20 VFs
[00:15:33] [PASSED] 21 VFs
[00:15:33] [PASSED] 22 VFs
[00:15:33] [PASSED] 23 VFs
[00:15:33] [PASSED] 24 VFs
[00:15:33] [PASSED] 25 VFs
[00:15:33] [PASSED] 26 VFs
[00:15:33] [PASSED] 27 VFs
[00:15:33] [PASSED] 28 VFs
[00:15:33] [PASSED] 29 VFs
[00:15:33] [PASSED] 30 VFs
[00:15:33] [PASSED] 31 VFs
[00:15:33] [PASSED] 32 VFs
[00:15:33] [PASSED] 33 VFs
[00:15:33] [PASSED] 34 VFs
[00:15:33] [PASSED] 35 VFs
[00:15:33] [PASSED] 36 VFs
[00:15:33] [PASSED] 37 VFs
[00:15:33] [PASSED] 38 VFs
[00:15:33] [PASSED] 39 VFs
[00:15:33] [PASSED] 40 VFs
[00:15:33] [PASSED] 41 VFs
[00:15:33] [PASSED] 42 VFs
[00:15:33] [PASSED] 43 VFs
[00:15:33] [PASSED] 44 VFs
[00:15:33] [PASSED] 45 VFs
[00:15:33] [PASSED] 46 VFs
[00:15:33] [PASSED] 47 VFs
[00:15:33] [PASSED] 48 VFs
[00:15:33] [PASSED] 49 VFs
[00:15:33] [PASSED] 50 VFs
[00:15:33] [PASSED] 51 VFs
[00:15:33] [PASSED] 52 VFs
[00:15:33] [PASSED] 53 VFs
[00:15:33] [PASSED] 54 VFs
[00:15:33] [PASSED] 55 VFs
[00:15:33] [PASSED] 56 VFs
[00:15:33] [PASSED] 57 VFs
[00:15:33] [PASSED] 58 VFs
[00:15:33] [PASSED] 59 VFs
[00:15:33] [PASSED] 60 VFs
[00:15:33] [PASSED] 61 VFs
[00:15:33] [PASSED] 62 VFs
[00:15:33] [PASSED] 63 VFs
[00:15:33] ================= [PASSED] fair_doorbells ==================
[00:15:33] ======================== fair_ggtt ========================
[00:15:33] [PASSED] 1 VF
[00:15:33] [PASSED] 2 VFs
[00:15:33] [PASSED] 3 VFs
[00:15:33] [PASSED] 4 VFs
[00:15:33] [PASSED] 5 VFs
[00:15:33] [PASSED] 6 VFs
[00:15:33] [PASSED] 7 VFs
[00:15:33] [PASSED] 8 VFs
[00:15:33] [PASSED] 9 VFs
[00:15:33] [PASSED] 10 VFs
[00:15:33] [PASSED] 11 VFs
[00:15:33] [PASSED] 12 VFs
[00:15:33] [PASSED] 13 VFs
[00:15:33] [PASSED] 14 VFs
[00:15:33] [PASSED] 15 VFs
[00:15:33] [PASSED] 16 VFs
[00:15:33] [PASSED] 17 VFs
[00:15:33] [PASSED] 18 VFs
[00:15:33] [PASSED] 19 VFs
[00:15:33] [PASSED] 20 VFs
[00:15:33] [PASSED] 21 VFs
[00:15:33] [PASSED] 22 VFs
[00:15:33] [PASSED] 23 VFs
[00:15:33] [PASSED] 24 VFs
[00:15:33] [PASSED] 25 VFs
[00:15:33] [PASSED] 26 VFs
[00:15:33] [PASSED] 27 VFs
[00:15:33] [PASSED] 28 VFs
[00:15:33] [PASSED] 29 VFs
[00:15:33] [PASSED] 30 VFs
[00:15:33] [PASSED] 31 VFs
[00:15:33] [PASSED] 32 VFs
[00:15:33] [PASSED] 33 VFs
[00:15:33] [PASSED] 34 VFs
[00:15:33] [PASSED] 35 VFs
[00:15:33] [PASSED] 36 VFs
[00:15:33] [PASSED] 37 VFs
[00:15:33] [PASSED] 38 VFs
[00:15:33] [PASSED] 39 VFs
[00:15:33] [PASSED] 40 VFs
[00:15:33] [PASSED] 41 VFs
[00:15:33] [PASSED] 42 VFs
[00:15:33] [PASSED] 43 VFs
[00:15:33] [PASSED] 44 VFs
[00:15:33] [PASSED] 45 VFs
[00:15:33] [PASSED] 46 VFs
[00:15:33] [PASSED] 47 VFs
[00:15:33] [PASSED] 48 VFs
[00:15:33] [PASSED] 49 VFs
[00:15:33] [PASSED] 50 VFs
[00:15:33] [PASSED] 51 VFs
[00:15:33] [PASSED] 52 VFs
[00:15:33] [PASSED] 53 VFs
[00:15:33] [PASSED] 54 VFs
[00:15:33] [PASSED] 55 VFs
[00:15:33] [PASSED] 56 VFs
[00:15:33] [PASSED] 57 VFs
[00:15:33] [PASSED] 58 VFs
[00:15:33] [PASSED] 59 VFs
[00:15:33] [PASSED] 60 VFs
[00:15:33] [PASSED] 61 VFs
[00:15:33] [PASSED] 62 VFs
[00:15:33] [PASSED] 63 VFs
[00:15:33] ==================== [PASSED] fair_ggtt ====================
[00:15:33] ================== [PASSED] pf_gt_config ===================
[00:15:33] ===================== lmtt (1 subtest) =====================
[00:15:33] ======================== test_ops =========================
[00:15:33] [PASSED] 2-level
[00:15:33] [PASSED] multi-level
[00:15:33] ==================== [PASSED] test_ops =====================
[00:15:33] ====================== [PASSED] lmtt =======================
[00:15:33] ================= pf_service (11 subtests) =================
[00:15:33] [PASSED] pf_negotiate_any
[00:15:33] [PASSED] pf_negotiate_base_match
[00:15:33] [PASSED] pf_negotiate_base_newer
[00:15:33] [PASSED] pf_negotiate_base_next
[00:15:33] [SKIPPED] pf_negotiate_base_older
[00:15:33] [PASSED] pf_negotiate_base_prev
[00:15:33] [PASSED] pf_negotiate_latest_match
[00:15:33] [PASSED] pf_negotiate_latest_newer
[00:15:33] [PASSED] pf_negotiate_latest_next
[00:15:33] [SKIPPED] pf_negotiate_latest_older
[00:15:33] [SKIPPED] pf_negotiate_latest_prev
[00:15:33] =================== [PASSED] pf_service ====================
[00:15:33] ================= xe_guc_g2g (2 subtests) ==================
[00:15:33] ============== xe_live_guc_g2g_kunit_default ==============
[00:15:33] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[00:15:33] ============== xe_live_guc_g2g_kunit_allmem ===============
[00:15:33] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[00:15:33] =================== [SKIPPED] xe_guc_g2g ===================
[00:15:33] =================== xe_mocs (2 subtests) ===================
[00:15:33] ================ xe_live_mocs_kernel_kunit ================
[00:15:33] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[00:15:33] ================ xe_live_mocs_reset_kunit =================
[00:15:33] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[00:15:33] ==================== [SKIPPED] xe_mocs =====================
[00:15:33] ================= xe_migrate (2 subtests) ==================
[00:15:33] ================= xe_migrate_sanity_kunit =================
[00:15:33] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[00:15:33] ================== xe_validate_ccs_kunit ==================
[00:15:33] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[00:15:33] =================== [SKIPPED] xe_migrate ===================
[00:15:33] ================== xe_dma_buf (1 subtest) ==================
[00:15:33] ==================== xe_dma_buf_kunit =====================
[00:15:33] ================ [SKIPPED] xe_dma_buf_kunit ================
[00:15:33] =================== [SKIPPED] xe_dma_buf ===================
[00:15:33] ================= xe_bo_shrink (1 subtest) =================
[00:15:33] =================== xe_bo_shrink_kunit ====================
[00:15:33] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[00:15:33] ================== [SKIPPED] xe_bo_shrink ==================
[00:15:33] ==================== xe_bo (2 subtests) ====================
[00:15:33] ================== xe_ccs_migrate_kunit ===================
[00:15:33] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[00:15:33] ==================== xe_bo_evict_kunit ====================
[00:15:33] =============== [SKIPPED] xe_bo_evict_kunit ================
[00:15:33] ===================== [SKIPPED] xe_bo ======================
[00:15:33] ==================== args (11 subtests) ====================
[00:15:33] [PASSED] count_args_test
[00:15:33] [PASSED] call_args_example
[00:15:33] [PASSED] call_args_test
[00:15:33] [PASSED] drop_first_arg_example
[00:15:33] [PASSED] drop_first_arg_test
[00:15:33] [PASSED] first_arg_example
[00:15:33] [PASSED] first_arg_test
[00:15:33] [PASSED] last_arg_example
[00:15:33] [PASSED] last_arg_test
[00:15:33] [PASSED] pick_arg_example
[00:15:33] [PASSED] sep_comma_example
[00:15:33] ====================== [PASSED] args =======================
[00:15:33] =================== xe_pci (3 subtests) ====================
[00:15:33] ==================== check_graphics_ip ====================
[00:15:33] [PASSED] 12.00 Xe_LP
[00:15:33] [PASSED] 12.10 Xe_LP+
[00:15:33] [PASSED] 12.55 Xe_HPG
[00:15:33] [PASSED] 12.60 Xe_HPC
[00:15:33] [PASSED] 12.70 Xe_LPG
[00:15:33] [PASSED] 12.71 Xe_LPG
[00:15:33] [PASSED] 12.74 Xe_LPG+
[00:15:33] [PASSED] 20.01 Xe2_HPG
[00:15:33] [PASSED] 20.02 Xe2_HPG
[00:15:33] [PASSED] 20.04 Xe2_LPG
[00:15:33] [PASSED] 30.00 Xe3_LPG
[00:15:33] [PASSED] 30.01 Xe3_LPG
[00:15:33] [PASSED] 30.03 Xe3_LPG
[00:15:33] [PASSED] 30.04 Xe3_LPG
[00:15:33] [PASSED] 30.05 Xe3_LPG
[00:15:33] [PASSED] 35.11 Xe3p_XPC
[00:15:33] ================ [PASSED] check_graphics_ip ================
[00:15:33] ===================== check_media_ip ======================
[00:15:33] [PASSED] 12.00 Xe_M
[00:15:33] [PASSED] 12.55 Xe_HPM
[00:15:33] [PASSED] 13.00 Xe_LPM+
[00:15:33] [PASSED] 13.01 Xe2_HPM
[00:15:33] [PASSED] 20.00 Xe2_LPM
[00:15:33] [PASSED] 30.00 Xe3_LPM
[00:15:33] [PASSED] 30.02 Xe3_LPM
[00:15:33] [PASSED] 35.00 Xe3p_LPM
[00:15:33] [PASSED] 35.03 Xe3p_HPM
[00:15:33] ================= [PASSED] check_media_ip ==================
[00:15:33] =================== check_platform_desc ===================
[00:15:33] [PASSED] 0x9A60 (TIGERLAKE)
[00:15:33] [PASSED] 0x9A68 (TIGERLAKE)
[00:15:33] [PASSED] 0x9A70 (TIGERLAKE)
[00:15:33] [PASSED] 0x9A40 (TIGERLAKE)
[00:15:33] [PASSED] 0x9A49 (TIGERLAKE)
[00:15:33] [PASSED] 0x9A59 (TIGERLAKE)
[00:15:33] [PASSED] 0x9A78 (TIGERLAKE)
[00:15:33] [PASSED] 0x9AC0 (TIGERLAKE)
[00:15:33] [PASSED] 0x9AC9 (TIGERLAKE)
[00:15:33] [PASSED] 0x9AD9 (TIGERLAKE)
[00:15:33] [PASSED] 0x9AF8 (TIGERLAKE)
[00:15:33] [PASSED] 0x4C80 (ROCKETLAKE)
[00:15:33] [PASSED] 0x4C8A (ROCKETLAKE)
[00:15:33] [PASSED] 0x4C8B (ROCKETLAKE)
[00:15:33] [PASSED] 0x4C8C (ROCKETLAKE)
[00:15:33] [PASSED] 0x4C90 (ROCKETLAKE)
[00:15:33] [PASSED] 0x4C9A (ROCKETLAKE)
[00:15:33] [PASSED] 0x4680 (ALDERLAKE_S)
[00:15:33] [PASSED] 0x4682 (ALDERLAKE_S)
[00:15:33] [PASSED] 0x4688 (ALDERLAKE_S)
[00:15:33] [PASSED] 0x468A (ALDERLAKE_S)
[00:15:33] [PASSED] 0x468B (ALDERLAKE_S)
[00:15:33] [PASSED] 0x4690 (ALDERLAKE_S)
[00:15:33] [PASSED] 0x4692 (ALDERLAKE_S)
[00:15:33] [PASSED] 0x4693 (ALDERLAKE_S)
[00:15:33] [PASSED] 0x46A0 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46A1 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46A2 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46A3 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46A6 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46A8 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46AA (ALDERLAKE_P)
[00:15:33] [PASSED] 0x462A (ALDERLAKE_P)
[00:15:33] [PASSED] 0x4626 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x4628 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46B0 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46B1 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46B2 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46B3 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46C0 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46C1 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46C2 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46C3 (ALDERLAKE_P)
[00:15:33] [PASSED] 0x46D0 (ALDERLAKE_N)
[00:15:33] [PASSED] 0x46D1 (ALDERLAKE_N)
[00:15:33] [PASSED] 0x46D2 (ALDERLAKE_N)
[00:15:33] [PASSED] 0x46D3 (ALDERLAKE_N)
[00:15:33] [PASSED] 0x46D4 (ALDERLAKE_N)
[00:15:33] [PASSED] 0xA721 (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA7A1 (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA7A9 (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA7AC (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA7AD (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA720 (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA7A0 (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA7A8 (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA7AA (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA7AB (ALDERLAKE_P)
[00:15:33] [PASSED] 0xA780 (ALDERLAKE_S)
[00:15:33] [PASSED] 0xA781 (ALDERLAKE_S)
[00:15:33] [PASSED] 0xA782 (ALDERLAKE_S)
[00:15:33] [PASSED] 0xA783 (ALDERLAKE_S)
[00:15:33] [PASSED] 0xA788 (ALDERLAKE_S)
[00:15:33] [PASSED] 0xA789 (ALDERLAKE_S)
[00:15:33] [PASSED] 0xA78A (ALDERLAKE_S)
[00:15:33] [PASSED] 0xA78B (ALDERLAKE_S)
[00:15:33] [PASSED] 0x4905 (DG1)
[00:15:33] [PASSED] 0x4906 (DG1)
[00:15:33] [PASSED] 0x4907 (DG1)
[00:15:33] [PASSED] 0x4908 (DG1)
[00:15:33] [PASSED] 0x4909 (DG1)
[00:15:33] [PASSED] 0x56C0 (DG2)
[00:15:33] [PASSED] 0x56C2 (DG2)
[00:15:33] [PASSED] 0x56C1 (DG2)
[00:15:33] [PASSED] 0x7D51 (METEORLAKE)
[00:15:33] [PASSED] 0x7DD1 (METEORLAKE)
[00:15:33] [PASSED] 0x7D41 (METEORLAKE)
[00:15:33] [PASSED] 0x7D67 (METEORLAKE)
[00:15:33] [PASSED] 0xB640 (METEORLAKE)
[00:15:33] [PASSED] 0x56A0 (DG2)
[00:15:33] [PASSED] 0x56A1 (DG2)
[00:15:33] [PASSED] 0x56A2 (DG2)
[00:15:33] [PASSED] 0x56BE (DG2)
[00:15:33] [PASSED] 0x56BF (DG2)
[00:15:33] [PASSED] 0x5690 (DG2)
[00:15:33] [PASSED] 0x5691 (DG2)
[00:15:33] [PASSED] 0x5692 (DG2)
[00:15:33] [PASSED] 0x56A5 (DG2)
[00:15:33] [PASSED] 0x56A6 (DG2)
[00:15:33] [PASSED] 0x56B0 (DG2)
[00:15:33] [PASSED] 0x56B1 (DG2)
[00:15:33] [PASSED] 0x56BA (DG2)
[00:15:33] [PASSED] 0x56BB (DG2)
[00:15:33] [PASSED] 0x56BC (DG2)
[00:15:33] [PASSED] 0x56BD (DG2)
[00:15:33] [PASSED] 0x5693 (DG2)
[00:15:33] [PASSED] 0x5694 (DG2)
[00:15:33] [PASSED] 0x5695 (DG2)
[00:15:33] [PASSED] 0x56A3 (DG2)
[00:15:33] [PASSED] 0x56A4 (DG2)
[00:15:33] [PASSED] 0x56B2 (DG2)
[00:15:33] [PASSED] 0x56B3 (DG2)
[00:15:33] [PASSED] 0x5696 (DG2)
[00:15:33] [PASSED] 0x5697 (DG2)
[00:15:33] [PASSED] 0xB69 (PVC)
[00:15:33] [PASSED] 0xB6E (PVC)
[00:15:33] [PASSED] 0xBD4 (PVC)
[00:15:33] [PASSED] 0xBD5 (PVC)
[00:15:33] [PASSED] 0xBD6 (PVC)
[00:15:33] [PASSED] 0xBD7 (PVC)
[00:15:33] [PASSED] 0xBD8 (PVC)
[00:15:33] [PASSED] 0xBD9 (PVC)
[00:15:33] [PASSED] 0xBDA (PVC)
[00:15:33] [PASSED] 0xBDB (PVC)
[00:15:33] [PASSED] 0xBE0 (PVC)
[00:15:33] [PASSED] 0xBE1 (PVC)
[00:15:33] [PASSED] 0xBE5 (PVC)
[00:15:33] [PASSED] 0x7D40 (METEORLAKE)
[00:15:33] [PASSED] 0x7D45 (METEORLAKE)
[00:15:33] [PASSED] 0x7D55 (METEORLAKE)
[00:15:33] [PASSED] 0x7D60 (METEORLAKE)
[00:15:33] [PASSED] 0x7DD5 (METEORLAKE)
[00:15:33] [PASSED] 0x6420 (LUNARLAKE)
[00:15:33] [PASSED] 0x64A0 (LUNARLAKE)
[00:15:33] [PASSED] 0x64B0 (LUNARLAKE)
[00:15:33] [PASSED] 0xE202 (BATTLEMAGE)
[00:15:33] [PASSED] 0xE209 (BATTLEMAGE)
[00:15:33] [PASSED] 0xE20B (BATTLEMAGE)
[00:15:33] [PASSED] 0xE20C (BATTLEMAGE)
[00:15:33] [PASSED] 0xE20D (BATTLEMAGE)
[00:15:33] [PASSED] 0xE210 (BATTLEMAGE)
[00:15:33] [PASSED] 0xE211 (BATTLEMAGE)
[00:15:33] [PASSED] 0xE212 (BATTLEMAGE)
[00:15:33] [PASSED] 0xE216 (BATTLEMAGE)
[00:15:33] [PASSED] 0xE220 (BATTLEMAGE)
[00:15:33] [PASSED] 0xE221 (BATTLEMAGE)
[00:15:33] [PASSED] 0xE222 (BATTLEMAGE)
[00:15:33] [PASSED] 0xE223 (BATTLEMAGE)
[00:15:33] [PASSED] 0xB080 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB081 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB082 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB083 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB084 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB085 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB086 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB087 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB08F (PANTHERLAKE)
[00:15:33] [PASSED] 0xB090 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB0A0 (PANTHERLAKE)
[00:15:33] [PASSED] 0xB0B0 (PANTHERLAKE)
[00:15:33] [PASSED] 0xFD80 (PANTHERLAKE)
[00:15:33] [PASSED] 0xFD81 (PANTHERLAKE)
[00:15:33] [PASSED] 0xD740 (NOVALAKE_S)
[00:15:33] [PASSED] 0xD741 (NOVALAKE_S)
[00:15:33] [PASSED] 0xD742 (NOVALAKE_S)
[00:15:33] [PASSED] 0xD743 (NOVALAKE_S)
[00:15:33] [PASSED] 0xD744 (NOVALAKE_S)
[00:15:33] [PASSED] 0xD745 (NOVALAKE_S)
[00:15:33] [PASSED] 0x674C (CRESCENTISLAND)
[00:15:33] =============== [PASSED] check_platform_desc ===============
[00:15:33] ===================== [PASSED] xe_pci ======================
[00:15:33] =================== xe_rtp (2 subtests) ====================
[00:15:33] =============== xe_rtp_process_to_sr_tests ================
[00:15:33] [PASSED] coalesce-same-reg
[00:15:33] [PASSED] no-match-no-add
[00:15:33] [PASSED] match-or
[00:15:33] [PASSED] match-or-xfail
[00:15:33] [PASSED] no-match-no-add-multiple-rules
[00:15:33] [PASSED] two-regs-two-entries
[00:15:33] [PASSED] clr-one-set-other
[00:15:33] [PASSED] set-field
[00:15:33] [PASSED] conflict-duplicate
[00:15:33] [PASSED] conflict-not-disjoint
[00:15:33] [PASSED] conflict-reg-type
[00:15:33] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[00:15:33] ================== xe_rtp_process_tests ===================
[00:15:33] [PASSED] active1
[00:15:33] [PASSED] active2
[00:15:33] [PASSED] active-inactive
[00:15:33] [PASSED] inactive-active
[00:15:33] [PASSED] inactive-1st_or_active-inactive
[00:15:33] [PASSED] inactive-2nd_or_active-inactive
[00:15:33] [PASSED] inactive-last_or_active-inactive
[00:15:33] [PASSED] inactive-no_or_active-inactive
[00:15:33] ============== [PASSED] xe_rtp_process_tests ===============
[00:15:33] ===================== [PASSED] xe_rtp ======================
[00:15:33] ==================== xe_wa (1 subtest) =====================
[00:15:33] ======================== xe_wa_gt =========================
[00:15:33] [PASSED] TIGERLAKE B0
[00:15:33] [PASSED] DG1 A0
[00:15:33] [PASSED] DG1 B0
[00:15:33] [PASSED] ALDERLAKE_S A0
[00:15:33] [PASSED] ALDERLAKE_S B0
[00:15:33] [PASSED] ALDERLAKE_S C0
[00:15:33] [PASSED] ALDERLAKE_S D0
[00:15:33] [PASSED] ALDERLAKE_P A0
[00:15:33] [PASSED] ALDERLAKE_P B0
[00:15:33] [PASSED] ALDERLAKE_P C0
[00:15:33] [PASSED] ALDERLAKE_S RPLS D0
[00:15:33] [PASSED] ALDERLAKE_P RPLU E0
[00:15:33] [PASSED] DG2 G10 C0
[00:15:33] [PASSED] DG2 G11 B1
[00:15:33] [PASSED] DG2 G12 A1
[00:15:33] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[00:15:33] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[00:15:33] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[00:15:33] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[00:15:33] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[00:15:33] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[00:15:33] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[00:15:33] ==================== [PASSED] xe_wa_gt =====================
[00:15:33] ====================== [PASSED] xe_wa ======================
[00:15:33] ============================================================
[00:15:33] Testing complete. Ran 510 tests: passed: 492, skipped: 18
[00:15:33] Elapsed time: 36.141s total, 4.190s configuring, 31.478s building, 0.460s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[00:15:33] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[00:15:35] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[00:16:00] Starting KUnit Kernel (1/1)...
[00:16:00] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[00:16:00] ============ drm_test_pick_cmdline (2 subtests) ============
[00:16:00] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[00:16:00] =============== drm_test_pick_cmdline_named ===============
[00:16:00] [PASSED] NTSC
[00:16:00] [PASSED] NTSC-J
[00:16:00] [PASSED] PAL
[00:16:00] [PASSED] PAL-M
[00:16:00] =========== [PASSED] drm_test_pick_cmdline_named ===========
[00:16:00] ============== [PASSED] drm_test_pick_cmdline ==============
[00:16:00] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[00:16:00] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[00:16:00] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[00:16:00] =========== drm_validate_clone_mode (2 subtests) ===========
[00:16:00] ============== drm_test_check_in_clone_mode ===============
[00:16:00] [PASSED] in_clone_mode
[00:16:00] [PASSED] not_in_clone_mode
[00:16:00] ========== [PASSED] drm_test_check_in_clone_mode ===========
[00:16:00] =============== drm_test_check_valid_clones ===============
[00:16:00] [PASSED] not_in_clone_mode
[00:16:00] [PASSED] valid_clone
[00:16:00] [PASSED] invalid_clone
[00:16:00] =========== [PASSED] drm_test_check_valid_clones ===========
[00:16:00] ============= [PASSED] drm_validate_clone_mode =============
[00:16:00] ============= drm_validate_modeset (1 subtest) =============
[00:16:00] [PASSED] drm_test_check_connector_changed_modeset
[00:16:00] ============== [PASSED] drm_validate_modeset ===============
[00:16:00] ====== drm_test_bridge_get_current_state (2 subtests) ======
[00:16:00] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[00:16:00] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[00:16:00] ======== [PASSED] drm_test_bridge_get_current_state ========
[00:16:00] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[00:16:00] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[00:16:00] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[00:16:00] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[00:16:00] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[00:16:00] ============== drm_bridge_alloc (2 subtests) ===============
[00:16:00] [PASSED] drm_test_drm_bridge_alloc_basic
[00:16:00] [PASSED] drm_test_drm_bridge_alloc_get_put
[00:16:00] ================ [PASSED] drm_bridge_alloc =================
[00:16:00] ================== drm_buddy (8 subtests) ==================
[00:16:00] [PASSED] drm_test_buddy_alloc_limit
[00:16:00] [PASSED] drm_test_buddy_alloc_optimistic
[00:16:00] [PASSED] drm_test_buddy_alloc_pessimistic
[00:16:00] [PASSED] drm_test_buddy_alloc_pathological
[00:16:00] [PASSED] drm_test_buddy_alloc_contiguous
[00:16:00] [PASSED] drm_test_buddy_alloc_clear
[00:16:00] [PASSED] drm_test_buddy_alloc_range_bias
[00:16:00] [PASSED] drm_test_buddy_fragmentation_performance
[00:16:00] ==================== [PASSED] drm_buddy ====================
[00:16:00] ============= drm_cmdline_parser (40 subtests) =============
[00:16:00] [PASSED] drm_test_cmdline_force_d_only
[00:16:00] [PASSED] drm_test_cmdline_force_D_only_dvi
[00:16:00] [PASSED] drm_test_cmdline_force_D_only_hdmi
[00:16:00] [PASSED] drm_test_cmdline_force_D_only_not_digital
[00:16:00] [PASSED] drm_test_cmdline_force_e_only
[00:16:00] [PASSED] drm_test_cmdline_res
[00:16:00] [PASSED] drm_test_cmdline_res_vesa
[00:16:00] [PASSED] drm_test_cmdline_res_vesa_rblank
[00:16:00] [PASSED] drm_test_cmdline_res_rblank
[00:16:00] [PASSED] drm_test_cmdline_res_bpp
[00:16:00] [PASSED] drm_test_cmdline_res_refresh
[00:16:00] [PASSED] drm_test_cmdline_res_bpp_refresh
[00:16:00] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[00:16:00] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[00:16:00] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[00:16:00] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[00:16:00] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[00:16:00] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[00:16:00] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[00:16:00] [PASSED] drm_test_cmdline_res_margins_force_on
[00:16:00] [PASSED] drm_test_cmdline_res_vesa_margins
[00:16:00] [PASSED] drm_test_cmdline_name
[00:16:00] [PASSED] drm_test_cmdline_name_bpp
[00:16:00] [PASSED] drm_test_cmdline_name_option
[00:16:00] [PASSED] drm_test_cmdline_name_bpp_option
[00:16:00] [PASSED] drm_test_cmdline_rotate_0
[00:16:00] [PASSED] drm_test_cmdline_rotate_90
[00:16:00] [PASSED] drm_test_cmdline_rotate_180
[00:16:00] [PASSED] drm_test_cmdline_rotate_270
[00:16:00] [PASSED] drm_test_cmdline_hmirror
[00:16:00] [PASSED] drm_test_cmdline_vmirror
[00:16:00] [PASSED] drm_test_cmdline_margin_options
[00:16:00] [PASSED] drm_test_cmdline_multiple_options
[00:16:00] [PASSED] drm_test_cmdline_bpp_extra_and_option
[00:16:00] [PASSED] drm_test_cmdline_extra_and_option
[00:16:00] [PASSED] drm_test_cmdline_freestanding_options
[00:16:00] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[00:16:00] [PASSED] drm_test_cmdline_panel_orientation
[00:16:00] ================ drm_test_cmdline_invalid =================
[00:16:00] [PASSED] margin_only
[00:16:00] [PASSED] interlace_only
[00:16:00] [PASSED] res_missing_x
[00:16:00] [PASSED] res_missing_y
[00:16:00] [PASSED] res_bad_y
[00:16:00] [PASSED] res_missing_y_bpp
[00:16:00] [PASSED] res_bad_bpp
[00:16:00] [PASSED] res_bad_refresh
[00:16:00] [PASSED] res_bpp_refresh_force_on_off
[00:16:00] [PASSED] res_invalid_mode
[00:16:00] [PASSED] res_bpp_wrong_place_mode
[00:16:00] [PASSED] name_bpp_refresh
[00:16:00] [PASSED] name_refresh
[00:16:00] [PASSED] name_refresh_wrong_mode
[00:16:00] [PASSED] name_refresh_invalid_mode
[00:16:00] [PASSED] rotate_multiple
[00:16:00] [PASSED] rotate_invalid_val
[00:16:00] [PASSED] rotate_truncated
[00:16:00] [PASSED] invalid_option
[00:16:00] [PASSED] invalid_tv_option
[00:16:00] [PASSED] truncated_tv_option
[00:16:00] ============ [PASSED] drm_test_cmdline_invalid =============
[00:16:00] =============== drm_test_cmdline_tv_options ===============
[00:16:00] [PASSED] NTSC
[00:16:00] [PASSED] NTSC_443
[00:16:00] [PASSED] NTSC_J
[00:16:00] [PASSED] PAL
[00:16:00] [PASSED] PAL_M
[00:16:00] [PASSED] PAL_N
[00:16:00] [PASSED] SECAM
[00:16:00] [PASSED] MONO_525
[00:16:00] [PASSED] MONO_625
[00:16:00] =========== [PASSED] drm_test_cmdline_tv_options ===========
[00:16:00] =============== [PASSED] drm_cmdline_parser ================
[00:16:00] ========== drmm_connector_hdmi_init (20 subtests) ==========
[00:16:00] [PASSED] drm_test_connector_hdmi_init_valid
[00:16:00] [PASSED] drm_test_connector_hdmi_init_bpc_8
[00:16:00] [PASSED] drm_test_connector_hdmi_init_bpc_10
[00:16:00] [PASSED] drm_test_connector_hdmi_init_bpc_12
[00:16:00] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[00:16:00] [PASSED] drm_test_connector_hdmi_init_bpc_null
[00:16:00] [PASSED] drm_test_connector_hdmi_init_formats_empty
[00:16:00] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[00:16:00] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[00:16:00] [PASSED] supported_formats=0x9 yuv420_allowed=1
[00:16:00] [PASSED] supported_formats=0x9 yuv420_allowed=0
[00:16:00] [PASSED] supported_formats=0x3 yuv420_allowed=1
[00:16:00] [PASSED] supported_formats=0x3 yuv420_allowed=0
[00:16:00] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[00:16:00] [PASSED] drm_test_connector_hdmi_init_null_ddc
[00:16:00] [PASSED] drm_test_connector_hdmi_init_null_product
[00:16:00] [PASSED] drm_test_connector_hdmi_init_null_vendor
[00:16:00] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[00:16:00] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[00:16:00] [PASSED] drm_test_connector_hdmi_init_product_valid
[00:16:00] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[00:16:00] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[00:16:00] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[00:16:00] ========= drm_test_connector_hdmi_init_type_valid =========
[00:16:00] [PASSED] HDMI-A
[00:16:00] [PASSED] HDMI-B
[00:16:00] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[00:16:00] ======== drm_test_connector_hdmi_init_type_invalid ========
[00:16:00] [PASSED] Unknown
[00:16:00] [PASSED] VGA
[00:16:00] [PASSED] DVI-I
[00:16:00] [PASSED] DVI-D
[00:16:00] [PASSED] DVI-A
[00:16:00] [PASSED] Composite
[00:16:00] [PASSED] SVIDEO
[00:16:00] [PASSED] LVDS
[00:16:00] [PASSED] Component
[00:16:00] [PASSED] DIN
[00:16:00] [PASSED] DP
[00:16:00] [PASSED] TV
[00:16:00] [PASSED] eDP
[00:16:00] [PASSED] Virtual
[00:16:00] [PASSED] DSI
[00:16:00] [PASSED] DPI
[00:16:00] [PASSED] Writeback
[00:16:00] [PASSED] SPI
[00:16:00] [PASSED] USB
[00:16:00] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[00:16:00] ============ [PASSED] drmm_connector_hdmi_init =============
[00:16:00] ============= drmm_connector_init (3 subtests) =============
[00:16:00] [PASSED] drm_test_drmm_connector_init
[00:16:00] [PASSED] drm_test_drmm_connector_init_null_ddc
[00:16:00] ========= drm_test_drmm_connector_init_type_valid =========
[00:16:00] [PASSED] Unknown
[00:16:00] [PASSED] VGA
[00:16:00] [PASSED] DVI-I
[00:16:00] [PASSED] DVI-D
[00:16:00] [PASSED] DVI-A
[00:16:00] [PASSED] Composite
[00:16:00] [PASSED] SVIDEO
[00:16:00] [PASSED] LVDS
[00:16:00] [PASSED] Component
[00:16:00] [PASSED] DIN
[00:16:00] [PASSED] DP
[00:16:00] [PASSED] HDMI-A
[00:16:00] [PASSED] HDMI-B
[00:16:00] [PASSED] TV
[00:16:00] [PASSED] eDP
[00:16:00] [PASSED] Virtual
[00:16:00] [PASSED] DSI
[00:16:00] [PASSED] DPI
[00:16:00] [PASSED] Writeback
[00:16:00] [PASSED] SPI
[00:16:00] [PASSED] USB
[00:16:00] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[00:16:00] =============== [PASSED] drmm_connector_init ===============
[00:16:00] ========= drm_connector_dynamic_init (6 subtests) ==========
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_init
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_init_properties
[00:16:00] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[00:16:00] [PASSED] Unknown
[00:16:00] [PASSED] VGA
[00:16:00] [PASSED] DVI-I
[00:16:00] [PASSED] DVI-D
[00:16:00] [PASSED] DVI-A
[00:16:00] [PASSED] Composite
[00:16:00] [PASSED] SVIDEO
[00:16:00] [PASSED] LVDS
[00:16:00] [PASSED] Component
[00:16:00] [PASSED] DIN
[00:16:00] [PASSED] DP
[00:16:00] [PASSED] HDMI-A
[00:16:00] [PASSED] HDMI-B
[00:16:00] [PASSED] TV
[00:16:00] [PASSED] eDP
[00:16:00] [PASSED] Virtual
[00:16:00] [PASSED] DSI
[00:16:00] [PASSED] DPI
[00:16:00] [PASSED] Writeback
[00:16:00] [PASSED] SPI
[00:16:00] [PASSED] USB
[00:16:00] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[00:16:00] ======== drm_test_drm_connector_dynamic_init_name =========
[00:16:00] [PASSED] Unknown
[00:16:00] [PASSED] VGA
[00:16:00] [PASSED] DVI-I
[00:16:00] [PASSED] DVI-D
[00:16:00] [PASSED] DVI-A
[00:16:00] [PASSED] Composite
[00:16:00] [PASSED] SVIDEO
[00:16:00] [PASSED] LVDS
[00:16:00] [PASSED] Component
[00:16:00] [PASSED] DIN
[00:16:00] [PASSED] DP
[00:16:00] [PASSED] HDMI-A
[00:16:00] [PASSED] HDMI-B
[00:16:00] [PASSED] TV
[00:16:00] [PASSED] eDP
[00:16:00] [PASSED] Virtual
[00:16:00] [PASSED] DSI
[00:16:00] [PASSED] DPI
[00:16:00] [PASSED] Writeback
[00:16:00] [PASSED] SPI
[00:16:00] [PASSED] USB
[00:16:00] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[00:16:00] =========== [PASSED] drm_connector_dynamic_init ============
[00:16:00] ==== drm_connector_dynamic_register_early (4 subtests) =====
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[00:16:00] ====== [PASSED] drm_connector_dynamic_register_early =======
[00:16:00] ======= drm_connector_dynamic_register (7 subtests) ========
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[00:16:00] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[00:16:00] ========= [PASSED] drm_connector_dynamic_register ==========
[00:16:00] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[00:16:00] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[00:16:00] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[00:16:00] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[00:16:00] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[00:16:00] ========== drm_test_get_tv_mode_from_name_valid ===========
[00:16:00] [PASSED] NTSC
[00:16:00] [PASSED] NTSC-443
[00:16:00] [PASSED] NTSC-J
[00:16:00] [PASSED] PAL
[00:16:00] [PASSED] PAL-M
[00:16:00] [PASSED] PAL-N
[00:16:00] [PASSED] SECAM
[00:16:00] [PASSED] Mono
[00:16:00] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[00:16:00] [PASSED] drm_test_get_tv_mode_from_name_truncated
[00:16:00] ============ [PASSED] drm_get_tv_mode_from_name ============
[00:16:00] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[00:16:00] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[00:16:00] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[00:16:00] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[00:16:00] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[00:16:00] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[00:16:00] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[00:16:00] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[00:16:00] [PASSED] VIC 96
[00:16:00] [PASSED] VIC 97
[00:16:00] [PASSED] VIC 101
[00:16:00] [PASSED] VIC 102
[00:16:00] [PASSED] VIC 106
[00:16:00] [PASSED] VIC 107
[00:16:00] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[00:16:00] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[00:16:00] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[00:16:00] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[00:16:00] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[00:16:00] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[00:16:00] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[00:16:00] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[00:16:00] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[00:16:00] [PASSED] Automatic
[00:16:00] [PASSED] Full
[00:16:00] [PASSED] Limited 16:235
[00:16:00] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[00:16:00] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[00:16:00] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[00:16:00] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[00:16:00] === drm_test_drm_hdmi_connector_get_output_format_name ====
[00:16:00] [PASSED] RGB
[00:16:00] [PASSED] YUV 4:2:0
[00:16:00] [PASSED] YUV 4:2:2
[00:16:00] [PASSED] YUV 4:4:4
[00:16:00] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[00:16:00] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[00:16:00] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[00:16:00] ============= drm_damage_helper (21 subtests) ==============
[00:16:00] [PASSED] drm_test_damage_iter_no_damage
[00:16:00] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[00:16:00] [PASSED] drm_test_damage_iter_no_damage_src_moved
[00:16:00] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[00:16:00] [PASSED] drm_test_damage_iter_no_damage_not_visible
[00:16:00] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[00:16:00] [PASSED] drm_test_damage_iter_no_damage_no_fb
[00:16:00] [PASSED] drm_test_damage_iter_simple_damage
[00:16:00] [PASSED] drm_test_damage_iter_single_damage
[00:16:00] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[00:16:00] [PASSED] drm_test_damage_iter_single_damage_outside_src
[00:16:00] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[00:16:00] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[00:16:00] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[00:16:00] [PASSED] drm_test_damage_iter_single_damage_src_moved
[00:16:00] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[00:16:00] [PASSED] drm_test_damage_iter_damage
[00:16:00] [PASSED] drm_test_damage_iter_damage_one_intersect
[00:16:00] [PASSED] drm_test_damage_iter_damage_one_outside
[00:16:00] [PASSED] drm_test_damage_iter_damage_src_moved
[00:16:00] [PASSED] drm_test_damage_iter_damage_not_visible
[00:16:00] ================ [PASSED] drm_damage_helper ================
[00:16:00] ============== drm_dp_mst_helper (3 subtests) ==============
[00:16:00] ============== drm_test_dp_mst_calc_pbn_mode ==============
[00:16:00] [PASSED] Clock 154000 BPP 30 DSC disabled
[00:16:00] [PASSED] Clock 234000 BPP 30 DSC disabled
[00:16:00] [PASSED] Clock 297000 BPP 24 DSC disabled
[00:16:00] [PASSED] Clock 332880 BPP 24 DSC enabled
[00:16:00] [PASSED] Clock 324540 BPP 24 DSC enabled
[00:16:00] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[00:16:00] ============== drm_test_dp_mst_calc_pbn_div ===============
[00:16:00] [PASSED] Link rate 2000000 lane count 4
[00:16:00] [PASSED] Link rate 2000000 lane count 2
[00:16:00] [PASSED] Link rate 2000000 lane count 1
[00:16:00] [PASSED] Link rate 1350000 lane count 4
[00:16:00] [PASSED] Link rate 1350000 lane count 2
[00:16:00] [PASSED] Link rate 1350000 lane count 1
[00:16:00] [PASSED] Link rate 1000000 lane count 4
[00:16:00] [PASSED] Link rate 1000000 lane count 2
[00:16:00] [PASSED] Link rate 1000000 lane count 1
[00:16:00] [PASSED] Link rate 810000 lane count 4
[00:16:00] [PASSED] Link rate 810000 lane count 2
[00:16:00] [PASSED] Link rate 810000 lane count 1
[00:16:00] [PASSED] Link rate 540000 lane count 4
[00:16:00] [PASSED] Link rate 540000 lane count 2
[00:16:00] [PASSED] Link rate 540000 lane count 1
[00:16:00] [PASSED] Link rate 270000 lane count 4
[00:16:00] [PASSED] Link rate 270000 lane count 2
[00:16:00] [PASSED] Link rate 270000 lane count 1
[00:16:00] [PASSED] Link rate 162000 lane count 4
[00:16:00] [PASSED] Link rate 162000 lane count 2
[00:16:00] [PASSED] Link rate 162000 lane count 1
[00:16:00] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[00:16:00] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[00:16:00] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[00:16:00] [PASSED] DP_POWER_UP_PHY with port number
[00:16:00] [PASSED] DP_POWER_DOWN_PHY with port number
[00:16:00] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[00:16:00] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[00:16:00] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[00:16:00] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[00:16:00] [PASSED] DP_QUERY_PAYLOAD with port number
[00:16:00] [PASSED] DP_QUERY_PAYLOAD with VCPI
[00:16:00] [PASSED] DP_REMOTE_DPCD_READ with port number
[00:16:00] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[00:16:00] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[00:16:00] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[00:16:00] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[00:16:00] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[00:16:00] [PASSED] DP_REMOTE_I2C_READ with port number
[00:16:00] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[00:16:00] [PASSED] DP_REMOTE_I2C_READ with transactions array
[00:16:00] [PASSED] DP_REMOTE_I2C_WRITE with port number
[00:16:00] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[00:16:00] [PASSED] DP_REMOTE_I2C_WRITE with data array
[00:16:00] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[00:16:00] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[00:16:00] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[00:16:00] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[00:16:00] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[00:16:00] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[00:16:00] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[00:16:00] ================ [PASSED] drm_dp_mst_helper ================
[00:16:00] ================== drm_exec (7 subtests) ===================
[00:16:00] [PASSED] sanitycheck
[00:16:00] [PASSED] test_lock
[00:16:00] [PASSED] test_lock_unlock
[00:16:00] [PASSED] test_duplicates
[00:16:00] [PASSED] test_prepare
[00:16:00] [PASSED] test_prepare_array
[00:16:00] [PASSED] test_multiple_loops
[00:16:00] ==================== [PASSED] drm_exec =====================
[00:16:00] =========== drm_format_helper_test (17 subtests) ===========
[00:16:00] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[00:16:00] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[00:16:00] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[00:16:00] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[00:16:00] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[00:16:00] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[00:16:00] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[00:16:00] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[00:16:00] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[00:16:00] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[00:16:00] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[00:16:00] ============== drm_test_fb_xrgb8888_to_mono ===============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[00:16:00] ==================== drm_test_fb_swab =====================
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ================ [PASSED] drm_test_fb_swab =================
[00:16:00] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[00:16:00] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[00:16:00] [PASSED] single_pixel_source_buffer
[00:16:00] [PASSED] single_pixel_clip_rectangle
[00:16:00] [PASSED] well_known_colors
[00:16:00] [PASSED] destination_pitch
[00:16:00] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[00:16:00] ================= drm_test_fb_clip_offset =================
[00:16:00] [PASSED] pass through
[00:16:00] [PASSED] horizontal offset
[00:16:00] [PASSED] vertical offset
[00:16:00] [PASSED] horizontal and vertical offset
[00:16:00] [PASSED] horizontal offset (custom pitch)
[00:16:00] [PASSED] vertical offset (custom pitch)
[00:16:00] [PASSED] horizontal and vertical offset (custom pitch)
[00:16:00] ============= [PASSED] drm_test_fb_clip_offset =============
[00:16:00] =================== drm_test_fb_memcpy ====================
[00:16:00] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[00:16:00] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[00:16:00] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[00:16:00] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[00:16:00] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[00:16:00] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[00:16:00] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[00:16:00] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[00:16:00] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[00:16:00] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[00:16:00] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[00:16:00] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[00:16:00] =============== [PASSED] drm_test_fb_memcpy ================
[00:16:00] ============= [PASSED] drm_format_helper_test ==============
[00:16:00] ================= drm_format (18 subtests) =================
[00:16:00] [PASSED] drm_test_format_block_width_invalid
[00:16:00] [PASSED] drm_test_format_block_width_one_plane
[00:16:00] [PASSED] drm_test_format_block_width_two_plane
[00:16:00] [PASSED] drm_test_format_block_width_three_plane
[00:16:00] [PASSED] drm_test_format_block_width_tiled
[00:16:00] [PASSED] drm_test_format_block_height_invalid
[00:16:00] [PASSED] drm_test_format_block_height_one_plane
[00:16:00] [PASSED] drm_test_format_block_height_two_plane
[00:16:00] [PASSED] drm_test_format_block_height_three_plane
[00:16:00] [PASSED] drm_test_format_block_height_tiled
[00:16:00] [PASSED] drm_test_format_min_pitch_invalid
[00:16:00] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[00:16:00] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[00:16:00] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[00:16:00] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[00:16:00] [PASSED] drm_test_format_min_pitch_two_plane
[00:16:00] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[00:16:00] [PASSED] drm_test_format_min_pitch_tiled
[00:16:00] =================== [PASSED] drm_format ====================
[00:16:00] ============== drm_framebuffer (10 subtests) ===============
[00:16:00] ========== drm_test_framebuffer_check_src_coords ==========
[00:16:00] [PASSED] Success: source fits into fb
[00:16:00] [PASSED] Fail: overflowing fb with x-axis coordinate
[00:16:00] [PASSED] Fail: overflowing fb with y-axis coordinate
[00:16:00] [PASSED] Fail: overflowing fb with source width
[00:16:00] [PASSED] Fail: overflowing fb with source height
[00:16:00] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[00:16:00] [PASSED] drm_test_framebuffer_cleanup
[00:16:00] =============== drm_test_framebuffer_create ===============
[00:16:00] [PASSED] ABGR8888 normal sizes
[00:16:00] [PASSED] ABGR8888 max sizes
[00:16:00] [PASSED] ABGR8888 pitch greater than min required
[00:16:00] [PASSED] ABGR8888 pitch less than min required
[00:16:00] [PASSED] ABGR8888 Invalid width
[00:16:00] [PASSED] ABGR8888 Invalid buffer handle
[00:16:00] [PASSED] No pixel format
[00:16:00] [PASSED] ABGR8888 Width 0
[00:16:00] [PASSED] ABGR8888 Height 0
[00:16:00] [PASSED] ABGR8888 Out of bound height * pitch combination
[00:16:00] [PASSED] ABGR8888 Large buffer offset
[00:16:00] [PASSED] ABGR8888 Buffer offset for inexistent plane
[00:16:00] [PASSED] ABGR8888 Invalid flag
[00:16:00] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[00:16:00] [PASSED] ABGR8888 Valid buffer modifier
[00:16:00] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[00:16:00] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[00:16:00] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[00:16:00] [PASSED] NV12 Normal sizes
[00:16:00] [PASSED] NV12 Max sizes
[00:16:00] [PASSED] NV12 Invalid pitch
[00:16:00] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[00:16:00] [PASSED] NV12 different modifier per-plane
[00:16:00] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[00:16:00] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[00:16:00] [PASSED] NV12 Modifier for inexistent plane
[00:16:00] [PASSED] NV12 Handle for inexistent plane
[00:16:00] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[00:16:00] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[00:16:00] [PASSED] YVU420 Normal sizes
[00:16:00] [PASSED] YVU420 Max sizes
[00:16:00] [PASSED] YVU420 Invalid pitch
[00:16:00] [PASSED] YVU420 Different pitches
[00:16:00] [PASSED] YVU420 Different buffer offsets/pitches
[00:16:00] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[00:16:00] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[00:16:00] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[00:16:00] [PASSED] YVU420 Valid modifier
[00:16:00] [PASSED] YVU420 Different modifiers per plane
[00:16:00] [PASSED] YVU420 Modifier for inexistent plane
[00:16:00] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[00:16:00] [PASSED] X0L2 Normal sizes
[00:16:00] [PASSED] X0L2 Max sizes
[00:16:00] [PASSED] X0L2 Invalid pitch
[00:16:00] [PASSED] X0L2 Pitch greater than minimum required
[00:16:00] [PASSED] X0L2 Handle for inexistent plane
[00:16:00] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[00:16:00] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[00:16:00] [PASSED] X0L2 Valid modifier
[00:16:00] [PASSED] X0L2 Modifier for inexistent plane
[00:16:00] =========== [PASSED] drm_test_framebuffer_create ===========
[00:16:00] [PASSED] drm_test_framebuffer_free
[00:16:00] [PASSED] drm_test_framebuffer_init
[00:16:00] [PASSED] drm_test_framebuffer_init_bad_format
[00:16:00] [PASSED] drm_test_framebuffer_init_dev_mismatch
[00:16:00] [PASSED] drm_test_framebuffer_lookup
[00:16:00] [PASSED] drm_test_framebuffer_lookup_inexistent
[00:16:00] [PASSED] drm_test_framebuffer_modifiers_not_supported
[00:16:00] ================= [PASSED] drm_framebuffer =================
[00:16:00] ================ drm_gem_shmem (8 subtests) ================
[00:16:00] [PASSED] drm_gem_shmem_test_obj_create
[00:16:00] [PASSED] drm_gem_shmem_test_obj_create_private
[00:16:00] [PASSED] drm_gem_shmem_test_pin_pages
[00:16:00] [PASSED] drm_gem_shmem_test_vmap
[00:16:00] [PASSED] drm_gem_shmem_test_get_pages_sgt
[00:16:00] [PASSED] drm_gem_shmem_test_get_sg_table
[00:16:00] [PASSED] drm_gem_shmem_test_madvise
[00:16:00] [PASSED] drm_gem_shmem_test_purge
[00:16:00] ================== [PASSED] drm_gem_shmem ==================
[00:16:00] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[00:16:00] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[00:16:00] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[00:16:00] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[00:16:00] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[00:16:00] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[00:16:00] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[00:16:00] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[00:16:00] [PASSED] Automatic
[00:16:00] [PASSED] Full
[00:16:00] [PASSED] Limited 16:235
[00:16:00] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[00:16:00] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[00:16:00] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[00:16:00] [PASSED] drm_test_check_disable_connector
[00:16:00] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[00:16:00] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[00:16:00] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[00:16:00] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[00:16:00] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[00:16:00] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[00:16:00] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[00:16:00] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[00:16:00] [PASSED] drm_test_check_output_bpc_dvi
[00:16:00] [PASSED] drm_test_check_output_bpc_format_vic_1
[00:16:00] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[00:16:00] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[00:16:00] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[00:16:00] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[00:16:00] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[00:16:00] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[00:16:00] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[00:16:00] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[00:16:00] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[00:16:00] [PASSED] drm_test_check_broadcast_rgb_value
[00:16:00] [PASSED] drm_test_check_bpc_8_value
[00:16:00] [PASSED] drm_test_check_bpc_10_value
[00:16:00] [PASSED] drm_test_check_bpc_12_value
[00:16:00] [PASSED] drm_test_check_format_value
[00:16:00] [PASSED] drm_test_check_tmds_char_value
[00:16:00] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[00:16:00] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[00:16:00] [PASSED] drm_test_check_mode_valid
[00:16:00] [PASSED] drm_test_check_mode_valid_reject
[00:16:00] [PASSED] drm_test_check_mode_valid_reject_rate
[00:16:00] [PASSED] drm_test_check_mode_valid_reject_max_clock
[00:16:00] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[00:16:00] ================= drm_managed (2 subtests) =================
[00:16:00] [PASSED] drm_test_managed_release_action
[00:16:00] [PASSED] drm_test_managed_run_action
[00:16:00] =================== [PASSED] drm_managed ===================
[00:16:00] =================== drm_mm (6 subtests) ====================
[00:16:00] [PASSED] drm_test_mm_init
[00:16:00] [PASSED] drm_test_mm_debug
[00:16:00] [PASSED] drm_test_mm_align32
[00:16:00] [PASSED] drm_test_mm_align64
[00:16:00] [PASSED] drm_test_mm_lowest
[00:16:00] [PASSED] drm_test_mm_highest
[00:16:00] ===================== [PASSED] drm_mm ======================
[00:16:00] ============= drm_modes_analog_tv (5 subtests) =============
[00:16:00] [PASSED] drm_test_modes_analog_tv_mono_576i
[00:16:00] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[00:16:00] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[00:16:00] [PASSED] drm_test_modes_analog_tv_pal_576i
[00:16:00] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[00:16:00] =============== [PASSED] drm_modes_analog_tv ===============
[00:16:00] ============== drm_plane_helper (2 subtests) ===============
[00:16:00] =============== drm_test_check_plane_state ================
[00:16:00] [PASSED] clipping_simple
[00:16:00] [PASSED] clipping_rotate_reflect
[00:16:00] [PASSED] positioning_simple
[00:16:00] [PASSED] upscaling
[00:16:00] [PASSED] downscaling
[00:16:00] [PASSED] rounding1
[00:16:00] [PASSED] rounding2
[00:16:00] [PASSED] rounding3
[00:16:00] [PASSED] rounding4
[00:16:00] =========== [PASSED] drm_test_check_plane_state ============
[00:16:00] =========== drm_test_check_invalid_plane_state ============
[00:16:00] [PASSED] positioning_invalid
[00:16:00] [PASSED] upscaling_invalid
[00:16:00] [PASSED] downscaling_invalid
[00:16:00] ======= [PASSED] drm_test_check_invalid_plane_state ========
[00:16:00] ================ [PASSED] drm_plane_helper =================
[00:16:00] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[00:16:00] ====== drm_test_connector_helper_tv_get_modes_check =======
[00:16:00] [PASSED] None
[00:16:00] [PASSED] PAL
[00:16:00] [PASSED] NTSC
[00:16:00] [PASSED] Both, NTSC Default
[00:16:00] [PASSED] Both, PAL Default
[00:16:00] [PASSED] Both, NTSC Default, with PAL on command-line
[00:16:00] [PASSED] Both, PAL Default, with NTSC on command-line
[00:16:00] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[00:16:00] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[00:16:00] ================== drm_rect (9 subtests) ===================
[00:16:00] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[00:16:00] [PASSED] drm_test_rect_clip_scaled_not_clipped
[00:16:00] [PASSED] drm_test_rect_clip_scaled_clipped
[00:16:00] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[00:16:00] ================= drm_test_rect_intersect =================
[00:16:00] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[00:16:00] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[00:16:00] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[00:16:00] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[00:16:00] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[00:16:00] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[00:16:00] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[00:16:00] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[00:16:00] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[00:16:00] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[00:16:00] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[00:16:00] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[00:16:00] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[00:16:00] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[00:16:00] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[00:16:00] ============= [PASSED] drm_test_rect_intersect =============
[00:16:00] ================ drm_test_rect_calc_hscale ================
[00:16:00] [PASSED] normal use
[00:16:00] [PASSED] out of max range
[00:16:00] [PASSED] out of min range
[00:16:00] [PASSED] zero dst
[00:16:00] [PASSED] negative src
[00:16:00] [PASSED] negative dst
[00:16:00] ============ [PASSED] drm_test_rect_calc_hscale ============
[00:16:00] ================ drm_test_rect_calc_vscale ================
[00:16:00] [PASSED] normal use
[00:16:00] [PASSED] out of max range
[00:16:00] [PASSED] out of min range
[00:16:00] [PASSED] zero dst
[00:16:00] [PASSED] negative src
[00:16:00] [PASSED] negative dst
[00:16:00] ============ [PASSED] drm_test_rect_calc_vscale ============
[00:16:00] ================== drm_test_rect_rotate ===================
[00:16:00] [PASSED] reflect-x
[00:16:00] [PASSED] reflect-y
[00:16:00] [PASSED] rotate-0
[00:16:00] [PASSED] rotate-90
[00:16:00] [PASSED] rotate-180
[00:16:00] [PASSED] rotate-270
[00:16:00] ============== [PASSED] drm_test_rect_rotate ===============
[00:16:00] ================ drm_test_rect_rotate_inv =================
[00:16:00] [PASSED] reflect-x
[00:16:00] [PASSED] reflect-y
[00:16:00] [PASSED] rotate-0
[00:16:00] [PASSED] rotate-90
[00:16:00] [PASSED] rotate-180
[00:16:00] [PASSED] rotate-270
[00:16:00] ============ [PASSED] drm_test_rect_rotate_inv =============
[00:16:00] ==================== [PASSED] drm_rect =====================
[00:16:00] ============ drm_sysfb_modeset_test (1 subtest) ============
[00:16:00] ============ drm_test_sysfb_build_fourcc_list =============
[00:16:00] [PASSED] no native formats
[00:16:00] [PASSED] XRGB8888 as native format
[00:16:00] [PASSED] remove duplicates
[00:16:00] [PASSED] convert alpha formats
[00:16:00] [PASSED] random formats
[00:16:00] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[00:16:00] ============= [PASSED] drm_sysfb_modeset_test ==============
[00:16:00] ================== drm_fixp (2 subtests) ===================
[00:16:00] [PASSED] drm_test_int2fixp
[00:16:00] [PASSED] drm_test_sm2fixp
[00:16:00] ==================== [PASSED] drm_fixp =====================
[00:16:00] ============================================================
[00:16:00] Testing complete. Ran 624 tests: passed: 624
[00:16:00] Elapsed time: 27.178s total, 1.679s configuring, 25.080s building, 0.376s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[00:16:00] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[00:16:02] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[00:16:11] Starting KUnit Kernel (1/1)...
[00:16:11] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[00:16:12] ================= ttm_device (5 subtests) ==================
[00:16:12] [PASSED] ttm_device_init_basic
[00:16:12] [PASSED] ttm_device_init_multiple
[00:16:12] [PASSED] ttm_device_fini_basic
[00:16:12] [PASSED] ttm_device_init_no_vma_man
[00:16:12] ================== ttm_device_init_pools ==================
[00:16:12] [PASSED] No DMA allocations, no DMA32 required
[00:16:12] [PASSED] DMA allocations, DMA32 required
[00:16:12] [PASSED] No DMA allocations, DMA32 required
[00:16:12] [PASSED] DMA allocations, no DMA32 required
[00:16:12] ============== [PASSED] ttm_device_init_pools ==============
[00:16:12] =================== [PASSED] ttm_device ====================
[00:16:12] ================== ttm_pool (8 subtests) ===================
[00:16:12] ================== ttm_pool_alloc_basic ===================
[00:16:12] [PASSED] One page
[00:16:12] [PASSED] More than one page
[00:16:12] [PASSED] Above the allocation limit
[00:16:12] [PASSED] One page, with coherent DMA mappings enabled
[00:16:12] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[00:16:12] ============== [PASSED] ttm_pool_alloc_basic ===============
[00:16:12] ============== ttm_pool_alloc_basic_dma_addr ==============
[00:16:12] [PASSED] One page
[00:16:12] [PASSED] More than one page
[00:16:12] [PASSED] Above the allocation limit
[00:16:12] [PASSED] One page, with coherent DMA mappings enabled
[00:16:12] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[00:16:12] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[00:16:12] [PASSED] ttm_pool_alloc_order_caching_match
[00:16:12] [PASSED] ttm_pool_alloc_caching_mismatch
[00:16:12] [PASSED] ttm_pool_alloc_order_mismatch
[00:16:12] [PASSED] ttm_pool_free_dma_alloc
[00:16:12] [PASSED] ttm_pool_free_no_dma_alloc
[00:16:12] [PASSED] ttm_pool_fini_basic
[00:16:12] ==================== [PASSED] ttm_pool =====================
[00:16:12] ================ ttm_resource (8 subtests) =================
[00:16:12] ================= ttm_resource_init_basic =================
[00:16:12] [PASSED] Init resource in TTM_PL_SYSTEM
[00:16:12] [PASSED] Init resource in TTM_PL_VRAM
[00:16:12] [PASSED] Init resource in a private placement
[00:16:12] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[00:16:12] ============= [PASSED] ttm_resource_init_basic =============
[00:16:12] [PASSED] ttm_resource_init_pinned
[00:16:12] [PASSED] ttm_resource_fini_basic
[00:16:12] [PASSED] ttm_resource_manager_init_basic
[00:16:12] [PASSED] ttm_resource_manager_usage_basic
[00:16:12] [PASSED] ttm_resource_manager_set_used_basic
[00:16:12] [PASSED] ttm_sys_man_alloc_basic
[00:16:12] [PASSED] ttm_sys_man_free_basic
[00:16:12] ================== [PASSED] ttm_resource ===================
[00:16:12] =================== ttm_tt (15 subtests) ===================
[00:16:12] ==================== ttm_tt_init_basic ====================
[00:16:12] [PASSED] Page-aligned size
[00:16:12] [PASSED] Extra pages requested
[00:16:12] ================ [PASSED] ttm_tt_init_basic ================
[00:16:12] [PASSED] ttm_tt_init_misaligned
[00:16:12] [PASSED] ttm_tt_fini_basic
[00:16:12] [PASSED] ttm_tt_fini_sg
[00:16:12] [PASSED] ttm_tt_fini_shmem
[00:16:12] [PASSED] ttm_tt_create_basic
[00:16:12] [PASSED] ttm_tt_create_invalid_bo_type
[00:16:12] [PASSED] ttm_tt_create_ttm_exists
[00:16:12] [PASSED] ttm_tt_create_failed
[00:16:12] [PASSED] ttm_tt_destroy_basic
[00:16:12] [PASSED] ttm_tt_populate_null_ttm
[00:16:12] [PASSED] ttm_tt_populate_populated_ttm
[00:16:12] [PASSED] ttm_tt_unpopulate_basic
[00:16:12] [PASSED] ttm_tt_unpopulate_empty_ttm
[00:16:12] [PASSED] ttm_tt_swapin_basic
[00:16:12] ===================== [PASSED] ttm_tt ======================
[00:16:12] =================== ttm_bo (14 subtests) ===================
[00:16:12] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[00:16:12] [PASSED] Cannot be interrupted and sleeps
[00:16:12] [PASSED] Cannot be interrupted, locks straight away
[00:16:12] [PASSED] Can be interrupted, sleeps
[00:16:12] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[00:16:12] [PASSED] ttm_bo_reserve_locked_no_sleep
[00:16:12] [PASSED] ttm_bo_reserve_no_wait_ticket
[00:16:12] [PASSED] ttm_bo_reserve_double_resv
[00:16:12] [PASSED] ttm_bo_reserve_interrupted
[00:16:12] [PASSED] ttm_bo_reserve_deadlock
[00:16:12] [PASSED] ttm_bo_unreserve_basic
[00:16:12] [PASSED] ttm_bo_unreserve_pinned
[00:16:12] [PASSED] ttm_bo_unreserve_bulk
[00:16:12] [PASSED] ttm_bo_fini_basic
[00:16:12] [PASSED] ttm_bo_fini_shared_resv
[00:16:12] [PASSED] ttm_bo_pin_basic
[00:16:12] [PASSED] ttm_bo_pin_unpin_resource
[00:16:12] [PASSED] ttm_bo_multiple_pin_one_unpin
[00:16:12] ===================== [PASSED] ttm_bo ======================
[00:16:12] ============== ttm_bo_validate (21 subtests) ===============
[00:16:12] ============== ttm_bo_init_reserved_sys_man ===============
[00:16:12] [PASSED] Buffer object for userspace
[00:16:12] [PASSED] Kernel buffer object
[00:16:12] [PASSED] Shared buffer object
[00:16:12] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[00:16:12] ============== ttm_bo_init_reserved_mock_man ==============
[00:16:12] [PASSED] Buffer object for userspace
[00:16:12] [PASSED] Kernel buffer object
[00:16:12] [PASSED] Shared buffer object
[00:16:12] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[00:16:12] [PASSED] ttm_bo_init_reserved_resv
[00:16:12] ================== ttm_bo_validate_basic ==================
[00:16:12] [PASSED] Buffer object for userspace
[00:16:12] [PASSED] Kernel buffer object
[00:16:12] [PASSED] Shared buffer object
[00:16:12] ============== [PASSED] ttm_bo_validate_basic ==============
[00:16:12] [PASSED] ttm_bo_validate_invalid_placement
[00:16:12] ============= ttm_bo_validate_same_placement ==============
[00:16:12] [PASSED] System manager
[00:16:12] [PASSED] VRAM manager
[00:16:12] ========= [PASSED] ttm_bo_validate_same_placement ==========
[00:16:12] [PASSED] ttm_bo_validate_failed_alloc
[00:16:12] [PASSED] ttm_bo_validate_pinned
[00:16:12] [PASSED] ttm_bo_validate_busy_placement
[00:16:12] ================ ttm_bo_validate_multihop =================
[00:16:12] [PASSED] Buffer object for userspace
[00:16:12] [PASSED] Kernel buffer object
[00:16:12] [PASSED] Shared buffer object
[00:16:12] ============ [PASSED] ttm_bo_validate_multihop =============
[00:16:12] ========== ttm_bo_validate_no_placement_signaled ==========
[00:16:12] [PASSED] Buffer object in system domain, no page vector
[00:16:12] [PASSED] Buffer object in system domain with an existing page vector
[00:16:12] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[00:16:12] ======== ttm_bo_validate_no_placement_not_signaled ========
[00:16:12] [PASSED] Buffer object for userspace
[00:16:12] [PASSED] Kernel buffer object
[00:16:12] [PASSED] Shared buffer object
[00:16:12] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[00:16:12] [PASSED] ttm_bo_validate_move_fence_signaled
[00:16:12] ========= ttm_bo_validate_move_fence_not_signaled =========
[00:16:12] [PASSED] Waits for GPU
[00:16:12] [PASSED] Tries to lock straight away
[00:16:12] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[00:16:12] [PASSED] ttm_bo_validate_happy_evict
[00:16:12] [PASSED] ttm_bo_validate_all_pinned_evict
[00:16:12] [PASSED] ttm_bo_validate_allowed_only_evict
[00:16:12] [PASSED] ttm_bo_validate_deleted_evict
[00:16:12] [PASSED] ttm_bo_validate_busy_domain_evict
[00:16:12] [PASSED] ttm_bo_validate_evict_gutting
[00:16:12] [PASSED] ttm_bo_validate_recrusive_evict
[00:16:12] ================= [PASSED] ttm_bo_validate =================
[00:16:12] ============================================================
[00:16:12] Testing complete. Ran 101 tests: passed: 101
[00:16:12] Elapsed time: 11.394s total, 1.684s configuring, 9.493s building, 0.178s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 29+ messages in thread
* ✗ CI.checksparse: warning for Enable THP support in drm_pagemap
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
` (5 preceding siblings ...)
2025-12-17 0:16 ` ✓ CI.KUnit: success " Patchwork
@ 2025-12-17 0:31 ` Patchwork
2025-12-17 0:55 ` ✓ Xe.CI.BAT: success " Patchwork
2025-12-17 23:22 ` ✗ Xe.CI.Full: failure " Patchwork
8 siblings, 0 replies; 29+ messages in thread
From: Patchwork @ 2025-12-17 0:31 UTC (permalink / raw)
To: Francois Dugast; +Cc: intel-xe
== Series Details ==
Series: Enable THP support in drm_pagemap
URL : https://patchwork.freedesktop.org/series/159119/
State : warning
== Summary ==
+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast 72428bdb20b6c86beaeddb9d69bf698d0697aa41
Sparse version: 0.6.4 (Ubuntu: 0.6.4-4ubuntu3)
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/display/drm_dp_helper.c:1979:1: error: bad constant expression
+drivers/gpu/drm/display/drm_dp_helper.c:1980:1: error: bad constant expression
+drivers/gpu/drm/display/drm_dp_helper.c:2144:1: error: bad constant expression
+drivers/gpu/drm/display/drm_dp_helper.c:2145:1: error: bad constant expression
+drivers/gpu/drm/drm_bridge.c:1604:1: error: bad constant expression
+drivers/gpu/drm/drm_bridge.c:1605:1: error: bad constant expression
+drivers/gpu/drm/drm_bridge.c:1606:1: error: bad constant expression
+drivers/gpu/drm/drm_bridge.c:1606:1: error: bad constant expression
+drivers/gpu/drm/drm_drv.c:60:1: error: bad constant expression
+drivers/gpu/drm/drm_drv.c:61:1: error: bad constant expression
+drivers/gpu/drm/drm_drv.c:62:1: error: bad constant expression
+drivers/gpu/drm/drm_drv.c:62:1: error: bad constant expression
+drivers/gpu/drm/drm_edid.c:1800:1: error: bad constant expression
+drivers/gpu/drm/drm_edid.c:1801:1: error: bad constant expression
+drivers/gpu/drm/drm_gem_framebuffer_helper.c:23:1: error: bad constant expression
+drivers/gpu/drm/drm_gem_shmem_helper.c:26:1: error: bad constant expression
+drivers/gpu/drm/drm_gem_shmem_helper.c:901:1: error: bad constant expression
+drivers/gpu/drm/drm_gem_shmem_helper.c:902:1: error: bad constant expression
+drivers/gpu/drm/drm_gem_shmem_helper.c:903:1: error: bad constant expression
+drivers/gpu/drm/drm_gem_shmem_helper.c:903:1: error: bad constant expression
+drivers/gpu/drm/drm_panel.c:733:1: error: bad constant expression
+drivers/gpu/drm/drm_panel.c:734:1: error: bad constant expression
+drivers/gpu/drm/drm_panel.c:735:1: error: bad constant expression
+drivers/gpu/drm/drm_panel.c:735:1: error: bad constant expression
+drivers/gpu/drm/drm_panel_orientation_quirks.c:601:1: error: bad constant expression
+drivers/gpu/drm/drm_panel_orientation_quirks.c:602:1: error: bad constant expression
+drivers/gpu/drm/drm_panel_orientation_quirks.c:602:1: error: bad constant expression
+drivers/gpu/drm/drm_prime.c:44:1: error: bad constant expression
+drivers/gpu/drm/drm_probe_helper.c:68:1: error: bad constant expression
+drivers/gpu/drm/drm_simple_kms_helper.c:457:1: error: bad constant expression
+drivers/gpu/drm/drm_simple_kms_helper.c:458:1: error: bad constant expression
+drivers/gpu/drm/drm_simple_kms_helper.c:458:1: error: bad constant expression
+drivers/gpu/drm/drm_vblank.c:173:1: error: bad constant expression
+drivers/gpu/drm/drm_vblank.c:174:1: error: bad constant expression
+drivers/gpu/drm/drm_vblank.c:175:1: error: bad constant expression
+drivers/gpu/drm/drm_vblank.c:176:1: error: bad constant expression
+drivers/gpu/drm/i915/display/dvo_ch7017.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/dvo_ch7xxx.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/dvo_ivch.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/dvo_ns2501.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/dvo_sil164.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/dvo_tfp410.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/g4x_dp.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/g4x_hdmi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/hsw_ips.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/i9xx_plane.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/i9xx_wm.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/icl_dsi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_dsi.h):
+drivers/gpu/drm/i915/display/intel_acpi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_alpm.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_atomic.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_audio.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_backlight.c: note: in included file:
+drivers/gpu/drm/i915/display/intel_bios.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_bw.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_casf.c:147:21: error: too long token expansion
+drivers/gpu/drm/i915/display/intel_casf.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_cdclk.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_color.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_colorop.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_colorop.h):
+drivers/gpu/drm/i915/display/intel_color_pipeline.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_colorop.h):
+drivers/gpu/drm/i915/display/intel_combo_phy.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_connector.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_crtc.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/intel_crt.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_crtc_state_dump.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_cursor.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_cx0_phy.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dbuf_bw.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_ddi_buf_trans.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_ddi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_debugfs.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_device.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_driver.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_irq.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/intel_display_power.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_power_map.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_power_well.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_reset.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_display_rps.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dmc.c:131:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:134:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:137:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:140:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:143:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:146:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:149:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:153:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:154:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:157:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:160:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:163:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:166:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:170:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:174:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:178:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:182:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c:186:1: error: bad constant expression
+drivers/gpu/drm/i915/display/intel_dmc.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dp_aux_backlight.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dp_aux.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dp.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dp_hdcp.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dpio_phy.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dp_link_training.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dpll.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dpll_mgr.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dp_mst.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dpt.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dpt_common.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dp_test.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_drrs.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dsb.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dsi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_dsi.h):
+drivers/gpu/drm/i915/display/intel_dsi_dcs_backlight.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dsi_vbt.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_dvo.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_encoder.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_fb_bo.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_fbc.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/intel_fb.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_fb_pin.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_fdi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_fifo_underrun.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/intel_flipq.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_frontbuffer.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/intel_global_state.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_gmbus.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_hdcp.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_hdcp_gsc_message.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_hdmi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_hotplug.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_hotplug_irq.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_link_bw.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_load_detect.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_lspcon.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_lt_phy.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_lvds.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_modeset_lock.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_modeset_setup.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_modeset_verify.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_opregion.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_overlay.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_panel.c: note: in included file:
+drivers/gpu/drm/i915/display/intel_pch_display.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_pch_refclk.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_pfit.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_pipe_crc.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_plane.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_colorop.h):
+drivers/gpu/drm/i915/display/intel_plane_initial.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_pmdemand.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/intel_pps.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_psr.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_quirks.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_sdvo.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_snps_hdmi_pll.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_snps_phy.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_sprite.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_sprite_uapi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_tc.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_tv.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_vblank.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_vdsc.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_vga.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_vrr.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/intel_wm.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/skl_prefill.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/skl_scaler.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h, drivers/gpu/drm/i915/display/intel_display_trace.h):
+drivers/gpu/drm/i915/display/skl_universal_plane.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/skl_watermark.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/vlv_clock.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/vlv_dsi.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/vlv_dsi_pll.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/display/vlv_sideband.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c:18:1: error: bad constant expression
+drivers/gpu/drm/i915/gem/i915_gem_pages.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/gt/intel_reset.c:1569:12: warning: context imbalance in '_intel_gt_reset_lock' - different lock contexts for basic block
+drivers/gpu/drm/i915/gt/intel_sseu.c:600:17: error: too long token expansion
+drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c:191:1: error: bad constant expression
+drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c:192:1: error: bad constant expression
+drivers/gpu/drm/i915/gt/uc/intel_uc_fw.c:193:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_active.c:1062:16: warning: context imbalance in '__i915_active_fence_set' - different lock contexts for basic block
+drivers/gpu/drm/i915/i915_drm_client.c:92:9: error: incompatible types in comparison expression (different address spaces):
+drivers/gpu/drm/i915/i915_drm_client.c:92:9: expected struct list_head const *list
+drivers/gpu/drm/i915/i915_drm_client.c:92:9: got struct list_head [noderef] __rcu *pos
+drivers/gpu/drm/i915/i915_drm_client.c:92:9: struct list_head *
+drivers/gpu/drm/i915/i915_drm_client.c:92:9: struct list_head [noderef] __rcu *
+drivers/gpu/drm/i915/i915_drm_client.c:92:9: warning: incorrect type in argument 1 (different address spaces)
+drivers/gpu/drm/i915/i915_gpu_error.c:692:3: warning: symbol 'guc_hw_reg_state' was not declared. Should it be static?
+drivers/gpu/drm/i915/i915_irq.c:467:9: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_irq.c:475:16: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_irq.c:480:9: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_irq.c:518:9: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_irq.c:526:16: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_irq.c:531:9: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_irq.c:575:9: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_irq.c:578:15: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_irq.c:582:9: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_irq.c:589:9: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/i915_mitigations.c:133:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_module.c:125:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_module.c:126:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_module.c:128:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_module.c:129:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_panic.c: note: in included file (through drivers/gpu/drm/i915/display/intel_display_types.h):
+drivers/gpu/drm/i915/i915_params.c:100:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:104:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:107:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:110:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:124:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:128:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:130:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:66:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:69:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:73:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:79:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:84:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:88:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:91:1: error: bad constant expression
+drivers/gpu/drm/i915/i915_params.c:95:1: error: bad constant expression
+drivers/gpu/drm/i915/intel_uncore.c:1930:1: warning: context imbalance in 'fwtable_read8' - unexpected unlock
+drivers/gpu/drm/i915/intel_uncore.c:1931:1: warning: context imbalance in 'fwtable_read16' - unexpected unlock
+drivers/gpu/drm/i915/intel_uncore.c:1932:1: warning: context imbalance in 'fwtable_read32' - unexpected unlock
+drivers/gpu/drm/i915/intel_uncore.c:1933:1: warning: context imbalance in 'fwtable_read64' - unexpected unlock
+drivers/gpu/drm/i915/intel_uncore.c:1998:1: warning: context imbalance in 'gen6_write8' - unexpected unlock
+drivers/gpu/drm/i915/intel_uncore.c:1999:1: warning: context imbalance in 'gen6_write16' - unexpected unlock
+drivers/gpu/drm/i915/intel_uncore.c:2000:1: warning: context imbalance in 'gen6_write32' - unexpected unlock
+drivers/gpu/drm/i915/intel_uncore.c:2020:1: warning: context imbalance in 'fwtable_write8' - unexpected unlock
+drivers/gpu/drm/i915/intel_uncore.c:2021:1: warning: context imbalance in 'fwtable_write16' - unexpected unlock
+drivers/gpu/drm/i915/intel_uncore.c:2022:1: warning: context imbalance in 'fwtable_write32' - unexpected unlock
+drivers/gpu/drm/i915/intel_wakeref.c:148:19: warning: context imbalance in 'wakeref_auto_timeout' - unexpected unlock
+drivers/gpu/drm/ttm/ttm_bo.c:1203:31: warning: symbol 'ttm_swap_ops' was not declared. Should it be static?
+drivers/gpu/drm/ttm/ttm_bo_util.c:329:38: expected void *virtual
+drivers/gpu/drm/ttm/ttm_bo_util.c:329:38: got void [noderef] __iomem *
+drivers/gpu/drm/ttm/ttm_bo_util.c:329:38: warning: incorrect type in assignment (different address spaces)
+drivers/gpu/drm/ttm/ttm_bo_util.c:332:38: expected void *virtual
+drivers/gpu/drm/ttm/ttm_bo_util.c:332:38: got void [noderef] __iomem *
+drivers/gpu/drm/ttm/ttm_bo_util.c:332:38: warning: incorrect type in assignment (different address spaces)
+drivers/gpu/drm/ttm/ttm_bo_util.c:335:38: expected void *virtual
+drivers/gpu/drm/ttm/ttm_bo_util.c:335:38: got void [noderef] __iomem *
+drivers/gpu/drm/ttm/ttm_bo_util.c:335:38: warning: incorrect type in assignment (different address spaces)
+drivers/gpu/drm/ttm/ttm_bo_util.c:465:28: expected void volatile [noderef] __iomem *addr
+drivers/gpu/drm/ttm/ttm_bo_util.c:465:28: got void *virtual
+drivers/gpu/drm/ttm/ttm_bo_util.c:465:28: warning: incorrect type in argument 1 (different address spaces)
+drivers/gpu/drm/ttm/ttm_pool.c:119:1: error: bad constant expression
+drivers/gpu/drm/ttm/ttm_pool.c:120:1: error: bad constant expression
+drivers/gpu/drm/ttm/ttm_tt.c:54:1: error: bad constant expression
+drivers/gpu/drm/ttm/ttm_tt.c:55:1: error: bad constant expression
+drivers/gpu/drm/ttm/ttm_tt.c:59:1: error: bad constant expression
+drivers/gpu/drm/ttm/ttm_tt.c:60:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:217:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:218:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:219:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:220:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:221:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:52:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_drv.c:53:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_object.c:34:1: error: bad constant expression
+drivers/gpu/drm/virtio/virtgpu_prime.c:30:1: error: bad constant expression
+./include/linux/pwm.h:13:1: error: bad constant expression
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 29+ messages in thread
* ✓ Xe.CI.BAT: success for Enable THP support in drm_pagemap
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
` (6 preceding siblings ...)
2025-12-17 0:31 ` ✗ CI.checksparse: warning " Patchwork
@ 2025-12-17 0:55 ` Patchwork
2025-12-17 23:22 ` ✗ Xe.CI.Full: failure " Patchwork
8 siblings, 0 replies; 29+ messages in thread
From: Patchwork @ 2025-12-17 0:55 UTC (permalink / raw)
To: Francois Dugast; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 947 bytes --]
== Series Details ==
Series: Enable THP support in drm_pagemap
URL : https://patchwork.freedesktop.org/series/159119/
State : success
== Summary ==
CI Bug Log - changes from xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41_BAT -> xe-pw-159119v1_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (12 -> 12)
------------------------------
No changes in participating hosts
Changes
-------
No changes found
Build changes
-------------
* Linux: xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41 -> xe-pw-159119v1
IGT_8668: 906681747a312ef11ef9af8ab1fa6eff28b4cbd0 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41: 72428bdb20b6c86beaeddb9d69bf698d0697aa41
xe-pw-159119v1: 159119v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/index.html
[-- Attachment #2: Type: text/html, Size: 1495 bytes --]
^ permalink raw reply [flat|nested] 29+ messages in thread
* ✗ Xe.CI.Full: failure for Enable THP support in drm_pagemap
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
` (7 preceding siblings ...)
2025-12-17 0:55 ` ✓ Xe.CI.BAT: success " Patchwork
@ 2025-12-17 23:22 ` Patchwork
8 siblings, 0 replies; 29+ messages in thread
From: Patchwork @ 2025-12-17 23:22 UTC (permalink / raw)
To: Francois Dugast; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 83043 bytes --]
== Series Details ==
Series: Enable THP support in drm_pagemap
URL : https://patchwork.freedesktop.org/series/159119/
State : failure
== Summary ==
CI Bug Log - changes from xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41_FULL -> xe-pw-159119v1_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes introduced in xe-pw-159119v1_FULL need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-159119v1_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (2 -> 2)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-159119v1_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@core_getversion@basic:
- shard-bmg: [PASS][1] -> [FAIL][2] +1 other test fail
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@core_getversion@basic.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@core_getversion@basic.html
* igt@xe_ccs@block-copy-compressed-inc-dimension:
- shard-bmg: [PASS][3] -> [INCOMPLETE][4] +1 other test incomplete
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-7/igt@xe_ccs@block-copy-compressed-inc-dimension.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-3/igt@xe_ccs@block-copy-compressed-inc-dimension.html
Known issues
------------
Here are the changes found in xe-pw-159119v1_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@core_hotunplug@unbind-rebind:
- shard-bmg: NOTRUN -> [SKIP][5] ([Intel XE#6779])
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@core_hotunplug@unbind-rebind.html
* igt@fbdev@pan:
- shard-bmg: [PASS][6] -> [SKIP][7] ([Intel XE#2134])
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@fbdev@pan.html
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@fbdev@pan.html
* igt@fbdev@write:
- shard-bmg: NOTRUN -> [SKIP][8] ([Intel XE#2134]) +1 other test skip
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@fbdev@write.html
* igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels:
- shard-bmg: NOTRUN -> [SKIP][9] ([Intel XE#2370])
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels.html
* igt@kms_big_fb@linear-16bpp-rotate-270:
- shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#2327]) +1 other test skip
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_big_fb@linear-16bpp-rotate-270.html
* igt@kms_big_fb@y-tiled-addfb-size-overflow:
- shard-bmg: NOTRUN -> [SKIP][11] ([Intel XE#610])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_big_fb@y-tiled-addfb-size-overflow.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180:
- shard-bmg: NOTRUN -> [SKIP][12] ([Intel XE#1124]) +8 other tests skip
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180.html
* igt@kms_bw@linear-tiling-1-displays-1920x1080p:
- shard-bmg: NOTRUN -> [SKIP][13] ([Intel XE#367]) +1 other test skip
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_bw@linear-tiling-1-displays-1920x1080p.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs:
- shard-bmg: NOTRUN -> [INCOMPLETE][14] ([Intel XE#3862]) +1 other test incomplete
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs:
- shard-bmg: NOTRUN -> [SKIP][15] ([Intel XE#3432])
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs.html
* igt@kms_ccs@missing-ccs-buffer-yf-tiled-ccs:
- shard-bmg: NOTRUN -> [SKIP][16] ([Intel XE#2887]) +7 other tests skip
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_ccs@missing-ccs-buffer-yf-tiled-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs@pipe-c-dp-2:
- shard-bmg: NOTRUN -> [SKIP][17] ([Intel XE#2652] / [Intel XE#787]) +12 other tests skip
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-3/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs@pipe-c-dp-2.html
* igt@kms_chamelium_color@degamma:
- shard-bmg: NOTRUN -> [SKIP][18] ([Intel XE#2325])
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_chamelium_color@degamma.html
* igt@kms_chamelium_hpd@common-hpd-after-suspend:
- shard-bmg: NOTRUN -> [SKIP][19] ([Intel XE#2252]) +6 other tests skip
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_chamelium_hpd@common-hpd-after-suspend.html
* igt@kms_content_protection@lic-type-0@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [FAIL][20] ([Intel XE#1178])
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-3/igt@kms_content_protection@lic-type-0@pipe-a-dp-2.html
* igt@kms_cursor_crc@cursor-rapid-movement-128x42:
- shard-bmg: NOTRUN -> [SKIP][21] ([Intel XE#2320]) +1 other test skip
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_cursor_crc@cursor-rapid-movement-128x42.html
* igt@kms_cursor_crc@cursor-rapid-movement-512x512:
- shard-bmg: NOTRUN -> [SKIP][22] ([Intel XE#2321])
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_cursor_crc@cursor-rapid-movement-512x512.html
* igt@kms_cursor_edge_walk@256x256-top-edge:
- shard-bmg: [PASS][23] -> [FAIL][24] ([Intel XE#6841])
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@kms_cursor_edge_walk@256x256-top-edge.html
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_cursor_edge_walk@256x256-top-edge.html
* igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic:
- shard-bmg: NOTRUN -> [SKIP][25] ([Intel XE#2291])
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_cursor_legacy@2x-cursor-vs-flip-atomic.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-legacy:
- shard-bmg: [PASS][26] -> [SKIP][27] ([Intel XE#2291]) +4 other tests skip
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipb-legacy.html
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_cursor_legacy@cursorb-vs-flipb-legacy.html
* igt@kms_fbcon_fbt@fbc:
- shard-bmg: NOTRUN -> [SKIP][28] ([Intel XE#4156])
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_fbcon_fbt@fbc.html
* igt@kms_feature_discovery@chamelium:
- shard-bmg: NOTRUN -> [SKIP][29] ([Intel XE#2372])
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_feature_discovery@chamelium.html
* igt@kms_flip@2x-flip-vs-absolute-wf_vblank-interruptible:
- shard-bmg: NOTRUN -> [SKIP][30] ([Intel XE#2316]) +2 other tests skip
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_flip@2x-flip-vs-absolute-wf_vblank-interruptible.html
* igt@kms_flip@2x-flip-vs-wf_vblank-interruptible:
- shard-bmg: [PASS][31] -> [SKIP][32] ([Intel XE#2316]) +6 other tests skip
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_flip@2x-flip-vs-wf_vblank-interruptible.html
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_flip@2x-flip-vs-wf_vblank-interruptible.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1:
- shard-lnl: [PASS][33] -> [FAIL][34] ([Intel XE#301]) +1 other test fail
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-lnl-3/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-lnl-5/igt@kms_flip@flip-vs-expired-vblank-interruptible@b-edp1.html
* igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling:
- shard-bmg: NOTRUN -> [SKIP][35] ([Intel XE#2293] / [Intel XE#2380]) +1 other test skip
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-upscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode:
- shard-bmg: NOTRUN -> [SKIP][36] ([Intel XE#2293]) +5 other tests skip
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling@pipe-a-valid-mode.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-pgflip-blt:
- shard-bmg: NOTRUN -> [SKIP][37] ([Intel XE#2311]) +13 other tests skip
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#4141]) +4 other tests skip
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-pri-shrfb-draw-render:
- shard-bmg: NOTRUN -> [SKIP][39] ([Intel XE#6703]) +465 other tests skip
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-pri-shrfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbcdrrs-suspend:
- shard-bmg: NOTRUN -> [SKIP][40] ([Intel XE#6557] / [Intel XE#6703]) +4 other tests skip
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcdrrs-suspend.html
* igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-blt:
- shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#2313]) +17 other tests skip
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-pgflip-blt:
- shard-bmg: NOTRUN -> [SKIP][42] ([Intel XE#2312]) +15 other tests skip
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-pgflip-blt.html
* igt@kms_hdr@bpc-switch-dpms@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [ABORT][43] ([Intel XE#6740]) +1 other test abort
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_hdr@bpc-switch-dpms@pipe-a-dp-2.html
* igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
- shard-bmg: NOTRUN -> [SKIP][44] ([Intel XE#2501])
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
* igt@kms_plane_cursor@primary:
- shard-bmg: [PASS][45] -> [DMESG-FAIL][46] ([Intel XE#5545]) +1 other test dmesg-fail
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_plane_cursor@primary.html
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_plane_cursor@primary.html
* igt@kms_plane_lowres@tiling-yf:
- shard-bmg: NOTRUN -> [SKIP][47] ([Intel XE#2393])
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_plane_lowres@tiling-yf.html
* igt@kms_plane_multiple@2x-tiling-x:
- shard-bmg: [PASS][48] -> [SKIP][49] ([Intel XE#4596])
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_plane_multiple@2x-tiling-x.html
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_plane_multiple@2x-tiling-x.html
* igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75@pipe-b:
- shard-bmg: NOTRUN -> [SKIP][50] ([Intel XE#6886]) +3 other tests skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75@pipe-b.html
* igt@kms_pm_rpm@modeset-lpsp-stress:
- shard-bmg: NOTRUN -> [SKIP][51] ([Intel XE#6693])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_pm_rpm@modeset-lpsp-stress.html
* igt@kms_pm_rpm@package-g7:
- shard-bmg: NOTRUN -> [SKIP][52] ([Intel XE#6814])
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_pm_rpm@package-g7.html
* igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-sf:
- shard-bmg: NOTRUN -> [SKIP][53] ([Intel XE#1406] / [Intel XE#6703]) +10 other tests skip
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-sf.html
* igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area:
- shard-bmg: NOTRUN -> [SKIP][54] ([Intel XE#1406] / [Intel XE#1489]) +6 other tests skip
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area.html
* igt@kms_psr@psr-cursor-plane-onoff:
- shard-bmg: NOTRUN -> [SKIP][55] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +5 other tests skip
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_psr@psr-cursor-plane-onoff.html
* igt@kms_psr@psr2-primary-render:
- shard-bmg: NOTRUN -> [SKIP][56] ([Intel XE#1406] / [Intel XE#2234])
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_psr@psr2-primary-render.html
* igt@kms_scaling_modes@scaling-mode-full-aspect:
- shard-bmg: NOTRUN -> [SKIP][57] ([Intel XE#2413])
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_scaling_modes@scaling-mode-full-aspect.html
* igt@kms_setmode@clone-exclusive-crtc:
- shard-bmg: [PASS][58] -> [SKIP][59] ([Intel XE#1435])
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_setmode@clone-exclusive-crtc.html
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_setmode@clone-exclusive-crtc.html
* igt@kms_sharpness_filter@filter-scaler-downscale:
- shard-bmg: NOTRUN -> [SKIP][60] ([Intel XE#6503])
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_sharpness_filter@filter-scaler-downscale.html
* igt@kms_vrr@flip-suspend:
- shard-bmg: NOTRUN -> [SKIP][61] ([Intel XE#1499])
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_vrr@flip-suspend.html
* igt@kms_vrr@lobf:
- shard-bmg: NOTRUN -> [SKIP][62] ([Intel XE#2168])
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@kms_vrr@lobf.html
* igt@xe_eudebug@basic-exec-queues:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#4837]) +4 other tests skip
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@xe_eudebug@basic-exec-queues.html
* igt@xe_eudebug_online@stopped-thread:
- shard-bmg: NOTRUN -> [SKIP][64] ([Intel XE#4837] / [Intel XE#6665]) +2 other tests skip
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@xe_eudebug_online@stopped-thread.html
* igt@xe_eudebug_sriov@deny-sriov:
- shard-bmg: NOTRUN -> [SKIP][65] ([Intel XE#5793])
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@xe_eudebug_sriov@deny-sriov.html
* igt@xe_exec_basic@multigpu-no-exec-bindexecqueue-userptr-invalidate:
- shard-bmg: NOTRUN -> [SKIP][66] ([Intel XE#2322]) +6 other tests skip
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@xe_exec_basic@multigpu-no-exec-bindexecqueue-userptr-invalidate.html
* igt@xe_exec_multi_queue@one-queue-preempt-mode-fault-dyn-priority-smem:
- shard-bmg: NOTRUN -> [SKIP][67] ([Intel XE#6874]) +18 other tests skip
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@xe_exec_multi_queue@one-queue-preempt-mode-fault-dyn-priority-smem.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-large-execqueues-mmap-free-huge-nomemset:
- shard-bmg: NOTRUN -> [SKIP][68] ([Intel XE#4943]) +10 other tests skip
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@xe_exec_system_allocator@threads-shared-vm-many-large-execqueues-mmap-free-huge-nomemset.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-mmap-mlock:
- shard-bmg: [PASS][69] -> [SKIP][70] ([Intel XE#6703]) +298 other tests skip
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_exec_system_allocator@threads-shared-vm-many-mmap-mlock.html
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_exec_system_allocator@threads-shared-vm-many-mmap-mlock.html
* igt@xe_oa@oa-tlb-invalidate:
- shard-bmg: NOTRUN -> [SKIP][71] ([Intel XE#2248])
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@xe_oa@oa-tlb-invalidate.html
* igt@xe_pm@d3cold-basic-exec:
- shard-bmg: NOTRUN -> [SKIP][72] ([Intel XE#2284])
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@xe_pm@d3cold-basic-exec.html
* igt@xe_pm@vram-d3cold-threshold:
- shard-bmg: NOTRUN -> [SKIP][73] ([Intel XE#579])
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@xe_pm@vram-d3cold-threshold.html
* igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_video_decode0:
- shard-lnl: [PASS][74] -> [FAIL][75] ([Intel XE#6251]) +1 other test fail
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-lnl-2/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_video_decode0.html
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-lnl-2/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_video_decode0.html
* igt@xe_pxp@pxp-termination-key-update-post-termination-irq:
- shard-bmg: NOTRUN -> [SKIP][76] ([Intel XE#4733])
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@xe_pxp@pxp-termination-key-update-post-termination-irq.html
* igt@xe_query@multigpu-query-hwconfig:
- shard-bmg: NOTRUN -> [SKIP][77] ([Intel XE#944]) +1 other test skip
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@xe_query@multigpu-query-hwconfig.html
* igt@xe_vm@large-split-binds-536870912:
- shard-bmg: [PASS][78] -> [SKIP][79] ([Intel XE#6557] / [Intel XE#6703])
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_vm@large-split-binds-536870912.html
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_vm@large-split-binds-536870912.html
#### Possible fixes ####
* igt@core_hotunplug@hotreplug:
- shard-bmg: [SKIP][80] ([Intel XE#6779]) -> [PASS][81] +1 other test pass
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@core_hotunplug@hotreplug.html
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@core_hotunplug@hotreplug.html
* igt@core_setmaster@master-drop-set-user:
- shard-bmg: [FAIL][82] ([Intel XE#4674]) -> [PASS][83]
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@core_setmaster@master-drop-set-user.html
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@core_setmaster@master-drop-set-user.html
* igt@fbdev@read:
- shard-bmg: [SKIP][84] ([Intel XE#2134]) -> [PASS][85]
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@fbdev@read.html
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@fbdev@read.html
* igt@kms_async_flips@alternate-sync-async-flip:
- shard-lnl: [FAIL][86] ([Intel XE#3718]) -> [PASS][87] +1 other test pass
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-lnl-8/igt@kms_async_flips@alternate-sync-async-flip.html
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-lnl-3/igt@kms_async_flips@alternate-sync-async-flip.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-bmg: [SKIP][88] ([Intel XE#2291]) -> [PASS][89] +4 other tests pass
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-4/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_feature_discovery@display-1x:
- shard-bmg: [SKIP][90] ([Intel XE#6557] / [Intel XE#6703]) -> [PASS][91] +4 other tests pass
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_feature_discovery@display-1x.html
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_feature_discovery@display-1x.html
* igt@kms_flip@2x-blocking-wf_vblank:
- shard-bmg: [SKIP][92] ([Intel XE#2316]) -> [PASS][93] +2 other tests pass
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-4/igt@kms_flip@2x-blocking-wf_vblank.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_flip@2x-blocking-wf_vblank.html
* igt@kms_flip@wf_vblank-ts-check:
- shard-bmg: [SKIP][94] ([Intel XE#6703]) -> [PASS][95] +436 other tests pass
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_flip@wf_vblank-ts-check.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_flip@wf_vblank-ts-check.html
* igt@kms_pm_rpm@universal-planes-dpms:
- shard-bmg: [SKIP][96] ([Intel XE#6693]) -> [PASS][97] +2 other tests pass
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_pm_rpm@universal-planes-dpms.html
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_pm_rpm@universal-planes-dpms.html
* igt@xe_module_load@many-reload:
- shard-bmg: [FAIL][98] -> [PASS][99]
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_module_load@many-reload.html
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@xe_module_load@many-reload.html
* igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_compute0:
- shard-lnl: [FAIL][100] ([Intel XE#6251]) -> [PASS][101] +1 other test pass
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-lnl-2/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_compute0.html
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-lnl-2/igt@xe_pmu@engine-activity-accuracy-90@engine-drm_xe_engine_class_compute0.html
#### Warnings ####
* igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
- shard-bmg: [SKIP][102] ([Intel XE#2370]) -> [SKIP][103] ([Intel XE#6703])
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html
* igt@kms_big_fb@4-tiled-8bpp-rotate-90:
- shard-bmg: [SKIP][104] ([Intel XE#2327]) -> [SKIP][105] ([Intel XE#6703]) +1 other test skip
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_big_fb@4-tiled-8bpp-rotate-90.html
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_big_fb@4-tiled-8bpp-rotate-90.html
* igt@kms_big_fb@linear-32bpp-rotate-270:
- shard-bmg: [SKIP][106] ([Intel XE#6703]) -> [SKIP][107] ([Intel XE#2327]) +2 other tests skip
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_big_fb@linear-32bpp-rotate-270.html
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_big_fb@linear-32bpp-rotate-270.html
* igt@kms_big_fb@y-tiled-64bpp-rotate-90:
- shard-bmg: [SKIP][108] ([Intel XE#6703]) -> [SKIP][109] ([Intel XE#1124]) +5 other tests skip
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-addfb:
- shard-bmg: [SKIP][110] ([Intel XE#2328]) -> [SKIP][111] ([Intel XE#6703])
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_big_fb@y-tiled-addfb.html
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_big_fb@y-tiled-addfb.html
* igt@kms_big_fb@y-tiled-addfb-size-offset-overflow:
- shard-bmg: [SKIP][112] ([Intel XE#6703]) -> [SKIP][113] ([Intel XE#607])
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_big_fb@y-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip:
- shard-bmg: [SKIP][114] ([Intel XE#1124]) -> [SKIP][115] ([Intel XE#6703]) +5 other tests skip
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0-hflip.html
* igt@kms_big_fb@yf-tiled-addfb-size-overflow:
- shard-bmg: [SKIP][116] ([Intel XE#6703]) -> [SKIP][117] ([Intel XE#610])
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html
* igt@kms_bw@connected-linear-tiling-3-displays-2560x1440p:
- shard-bmg: [SKIP][118] ([Intel XE#6703]) -> [SKIP][119] ([Intel XE#2314] / [Intel XE#2894])
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_bw@connected-linear-tiling-3-displays-2560x1440p.html
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_bw@connected-linear-tiling-3-displays-2560x1440p.html
* igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p:
- shard-bmg: [SKIP][120] ([Intel XE#2314] / [Intel XE#2894]) -> [SKIP][121] ([Intel XE#6703]) +1 other test skip
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html
* igt@kms_bw@linear-tiling-1-displays-2160x1440p:
- shard-bmg: [SKIP][122] ([Intel XE#6703]) -> [SKIP][123] ([Intel XE#367]) +1 other test skip
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_bw@linear-tiling-1-displays-2160x1440p.html
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_bw@linear-tiling-1-displays-2160x1440p.html
* igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs:
- shard-bmg: [SKIP][124] ([Intel XE#6557] / [Intel XE#6703]) -> [SKIP][125] ([Intel XE#2887])
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs.html
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs.html
* igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs-cc:
- shard-bmg: [SKIP][126] ([Intel XE#6703]) -> [SKIP][127] ([Intel XE#2887]) +7 other tests skip
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs-cc.html
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-rc-ccs-cc.html
* igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs:
- shard-bmg: [SKIP][128] ([Intel XE#6703]) -> [SKIP][129] ([Intel XE#3432]) +3 other tests skip
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-rc-ccs-cc:
- shard-bmg: [SKIP][130] ([Intel XE#2887]) -> [SKIP][131] ([Intel XE#6703]) +6 other tests skip
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-rc-ccs-cc.html
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-rc-ccs-cc.html
* igt@kms_cdclk@plane-scaling:
- shard-bmg: [SKIP][132] ([Intel XE#2724]) -> [SKIP][133] ([Intel XE#6703]) +1 other test skip
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_cdclk@plane-scaling.html
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_cdclk@plane-scaling.html
* igt@kms_chamelium_color@ctm-negative:
- shard-bmg: [SKIP][134] ([Intel XE#6703]) -> [SKIP][135] ([Intel XE#2325])
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_chamelium_color@ctm-negative.html
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_chamelium_color@ctm-negative.html
* igt@kms_chamelium_edid@dp-edid-read:
- shard-bmg: [SKIP][136] ([Intel XE#2252]) -> [SKIP][137] ([Intel XE#6703]) +4 other tests skip
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_chamelium_edid@dp-edid-read.html
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_chamelium_edid@dp-edid-read.html
* igt@kms_chamelium_hpd@dp-hpd-with-enabled-mode:
- shard-bmg: [SKIP][138] ([Intel XE#6703]) -> [SKIP][139] ([Intel XE#2252]) +9 other tests skip
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_chamelium_hpd@dp-hpd-with-enabled-mode.html
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_chamelium_hpd@dp-hpd-with-enabled-mode.html
* igt@kms_color@ctm-0-75:
- shard-bmg: [DMESG-FAIL][140] ([Intel XE#5545]) -> [SKIP][141] ([Intel XE#6703])
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@kms_color@ctm-0-75.html
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_color@ctm-0-75.html
* igt@kms_content_protection@content-type-change:
- shard-bmg: [SKIP][142] ([Intel XE#6703]) -> [SKIP][143] ([Intel XE#2341]) +1 other test skip
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_content_protection@content-type-change.html
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_content_protection@content-type-change.html
* igt@kms_content_protection@dp-mst-lic-type-0:
- shard-bmg: [SKIP][144] ([Intel XE#6703]) -> [SKIP][145] ([Intel XE#2390])
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_content_protection@dp-mst-lic-type-0.html
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_content_protection@dp-mst-lic-type-0.html
* igt@kms_content_protection@legacy:
- shard-bmg: [FAIL][146] ([Intel XE#1178]) -> [SKIP][147] ([Intel XE#6703])
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_content_protection@legacy.html
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_content_protection@legacy.html
* igt@kms_content_protection@lic-type-0:
- shard-bmg: [SKIP][148] ([Intel XE#2341]) -> [FAIL][149] ([Intel XE#1178])
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@kms_content_protection@lic-type-0.html
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-3/igt@kms_content_protection@lic-type-0.html
* igt@kms_content_protection@uevent:
- shard-bmg: [FAIL][150] ([Intel XE#6707]) -> [SKIP][151] ([Intel XE#2341])
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_content_protection@uevent.html
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_content_protection@uevent.html
* igt@kms_cursor_crc@cursor-offscreen-128x42:
- shard-bmg: [SKIP][152] ([Intel XE#6703]) -> [SKIP][153] ([Intel XE#2320]) +2 other tests skip
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_cursor_crc@cursor-offscreen-128x42.html
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_cursor_crc@cursor-offscreen-128x42.html
* igt@kms_cursor_crc@cursor-onscreen-512x512:
- shard-bmg: [SKIP][154] ([Intel XE#2321]) -> [SKIP][155] ([Intel XE#6703]) +2 other tests skip
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_cursor_crc@cursor-onscreen-512x512.html
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_cursor_crc@cursor-onscreen-512x512.html
* igt@kms_cursor_crc@cursor-onscreen-max-size:
- shard-bmg: [SKIP][156] ([Intel XE#2320]) -> [SKIP][157] ([Intel XE#6703]) +1 other test skip
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_cursor_crc@cursor-onscreen-max-size.html
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_cursor_crc@cursor-onscreen-max-size.html
* igt@kms_cursor_crc@cursor-random-512x170:
- shard-bmg: [SKIP][158] ([Intel XE#6703]) -> [SKIP][159] ([Intel XE#2321])
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_cursor_crc@cursor-random-512x170.html
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_cursor_crc@cursor-random-512x170.html
* igt@kms_cursor_crc@cursor-rapid-movement-256x85:
- shard-bmg: [SKIP][160] ([Intel XE#6557] / [Intel XE#6703]) -> [SKIP][161] ([Intel XE#2320])
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_cursor_crc@cursor-rapid-movement-256x85.html
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_cursor_crc@cursor-rapid-movement-256x85.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
- shard-bmg: [SKIP][162] ([Intel XE#2286]) -> [SKIP][163] ([Intel XE#6703])
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
* igt@kms_dsc@dsc-with-bpc:
- shard-bmg: [SKIP][164] ([Intel XE#6703]) -> [SKIP][165] ([Intel XE#2244]) +1 other test skip
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_dsc@dsc-with-bpc.html
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_dsc@dsc-with-bpc.html
* igt@kms_feature_discovery@psr1:
- shard-bmg: [SKIP][166] ([Intel XE#2374]) -> [SKIP][167] ([Intel XE#6703]) +1 other test skip
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_feature_discovery@psr1.html
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_feature_discovery@psr1.html
* igt@kms_flip@2x-flip-vs-dpms-on-nop:
- shard-bmg: [SKIP][168] ([Intel XE#2316]) -> [SKIP][169] ([Intel XE#6703])
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@kms_flip@2x-flip-vs-dpms-on-nop.html
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_flip@2x-flip-vs-dpms-on-nop.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling:
- shard-bmg: [SKIP][170] ([Intel XE#6703]) -> [SKIP][171] ([Intel XE#2293] / [Intel XE#2380]) +3 other tests skip
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling.html
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling:
- shard-bmg: [SKIP][172] ([Intel XE#2293] / [Intel XE#2380]) -> [SKIP][173] ([Intel XE#6703]) +1 other test skip
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-indfb-draw-render:
- shard-bmg: [SKIP][174] ([Intel XE#6703]) -> [SKIP][175] ([Intel XE#2311]) +20 other tests skip
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-indfb-draw-render.html
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-pri-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][176] ([Intel XE#2311]) -> [SKIP][177] ([Intel XE#2312]) +14 other tests skip
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-1/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-mmap-wc.html
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-shrfb-draw-blt:
- shard-bmg: [SKIP][178] ([Intel XE#6703]) -> [SKIP][179] ([Intel XE#4141]) +11 other tests skip
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-shrfb-draw-blt.html
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_frontbuffer_tracking@fbc-1p-offscreen-pri-shrfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-blt:
- shard-bmg: [SKIP][180] ([Intel XE#4141]) -> [SKIP][181] ([Intel XE#6703]) +7 other tests skip
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-blt.html
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt:
- shard-bmg: [SKIP][182] ([Intel XE#2312]) -> [SKIP][183] ([Intel XE#4141]) +1 other test skip
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt.html
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
- shard-bmg: [SKIP][184] ([Intel XE#4141]) -> [SKIP][185] ([Intel XE#2312]) +3 other tests skip
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-shrfb-plflip-blt:
- shard-bmg: [SKIP][186] ([Intel XE#2312]) -> [SKIP][187] ([Intel XE#2311]) +6 other tests skip
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-shrfb-plflip-blt.html
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-shrfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][188] ([Intel XE#2311]) -> [SKIP][189] ([Intel XE#6703]) +12 other tests skip
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-indfb-pgflip-blt.html
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-shrfb-pgflip-blt:
- shard-bmg: [SKIP][190] ([Intel XE#6557] / [Intel XE#6703]) -> [SKIP][191] ([Intel XE#2311])
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-shrfb-pgflip-blt.html
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y:
- shard-bmg: [SKIP][192] ([Intel XE#6703]) -> [SKIP][193] ([Intel XE#2352])
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y.html
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcdrrs-tiling-y.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt:
- shard-bmg: [SKIP][194] ([Intel XE#2313]) -> [SKIP][195] ([Intel XE#6703]) +15 other tests skip
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt.html
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-msflip-blt:
- shard-bmg: [SKIP][196] ([Intel XE#2312]) -> [SKIP][197] ([Intel XE#6703]) +1 other test skip
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-msflip-blt.html
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt:
- shard-bmg: [SKIP][198] ([Intel XE#2312]) -> [SKIP][199] ([Intel XE#2313]) +7 other tests skip
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt.html
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][200] ([Intel XE#2313]) -> [SKIP][201] ([Intel XE#2312]) +7 other tests skip
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-draw-mmap-wc.html
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
- shard-bmg: [SKIP][202] ([Intel XE#6703]) -> [SKIP][203] ([Intel XE#2313]) +21 other tests skip
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
* igt@kms_hdr@bpc-switch-suspend:
- shard-bmg: [ABORT][204] ([Intel XE#6740]) -> [SKIP][205] ([Intel XE#6703])
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_hdr@bpc-switch-suspend.html
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_hdr@bpc-switch-suspend.html
* igt@kms_hdr@invalid-hdr:
- shard-bmg: [ABORT][206] ([Intel XE#6740]) -> [SKIP][207] ([Intel XE#1503])
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-1/igt@kms_hdr@invalid-hdr.html
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@kms_hdr@invalid-hdr.html
* igt@kms_panel_fitting@legacy:
- shard-bmg: [SKIP][208] ([Intel XE#2486]) -> [SKIP][209] ([Intel XE#6703]) +1 other test skip
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_panel_fitting@legacy.html
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_panel_fitting@legacy.html
* igt@kms_plane_multiple@2x-tiling-y:
- shard-bmg: [SKIP][210] ([Intel XE#6703]) -> [SKIP][211] ([Intel XE#5021])
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_plane_multiple@2x-tiling-y.html
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_plane_multiple@2x-tiling-y.html
* igt@kms_plane_multiple@tiling-yf:
- shard-bmg: [SKIP][212] ([Intel XE#6703]) -> [SKIP][213] ([Intel XE#5020])
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_plane_multiple@tiling-yf.html
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_plane_multiple@tiling-yf.html
* igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75:
- shard-bmg: [SKIP][214] ([Intel XE#6703]) -> [SKIP][215] ([Intel XE#6886])
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75.html
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-75.html
* igt@kms_pm_backlight@fade-with-dpms:
- shard-bmg: [SKIP][216] ([Intel XE#870]) -> [SKIP][217] ([Intel XE#6703])
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_pm_backlight@fade-with-dpms.html
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_pm_backlight@fade-with-dpms.html
* igt@kms_pm_dc@dc6-psr:
- shard-bmg: [SKIP][218] ([Intel XE#2392]) -> [SKIP][219] ([Intel XE#6703])
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_pm_dc@dc6-psr.html
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_pm_dc@dc6-psr.html
* igt@kms_pm_lpsp@kms-lpsp:
- shard-bmg: [SKIP][220] ([Intel XE#6703]) -> [SKIP][221] ([Intel XE#2499])
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_pm_lpsp@kms-lpsp.html
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_pm_lpsp@kms-lpsp.html
* igt@kms_pm_rpm@modeset-lpsp-stress-no-wait:
- shard-bmg: [SKIP][222] ([Intel XE#1439] / [Intel XE#3141] / [Intel XE#836]) -> [SKIP][223] ([Intel XE#6693])
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_pm_rpm@modeset-lpsp-stress-no-wait.html
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_pm_rpm@modeset-lpsp-stress-no-wait.html
* igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf:
- shard-bmg: [SKIP][224] ([Intel XE#1406] / [Intel XE#1489]) -> [SKIP][225] ([Intel XE#1406] / [Intel XE#6703]) +3 other tests skip
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf.html
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf.html
* igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-sf:
- shard-bmg: [SKIP][226] ([Intel XE#1406] / [Intel XE#6703]) -> [SKIP][227] ([Intel XE#1406] / [Intel XE#1489]) +5 other tests skip
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-sf.html
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-sf.html
* igt@kms_psr@fbc-pr-primary-page-flip:
- shard-bmg: [SKIP][228] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) -> [SKIP][229] ([Intel XE#1406] / [Intel XE#6703]) +6 other tests skip
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_psr@fbc-pr-primary-page-flip.html
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_psr@fbc-pr-primary-page-flip.html
* igt@kms_psr@psr2-sprite-blt:
- shard-bmg: [SKIP][230] ([Intel XE#1406] / [Intel XE#6703]) -> [SKIP][231] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +8 other tests skip
[230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_psr@psr2-sprite-blt.html
[231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_psr@psr2-sprite-blt.html
* igt@kms_rotation_crc@primary-rotation-90:
- shard-bmg: [SKIP][232] ([Intel XE#3414] / [Intel XE#3904]) -> [SKIP][233] ([Intel XE#6703]) +1 other test skip
[232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_rotation_crc@primary-rotation-90.html
[233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_rotation_crc@primary-rotation-90.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-0:
- shard-bmg: [SKIP][234] ([Intel XE#6703]) -> [SKIP][235] ([Intel XE#2330])
[234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
[235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
* igt@kms_rotation_crc@sprite-rotation-90:
- shard-bmg: [SKIP][236] ([Intel XE#6703]) -> [SKIP][237] ([Intel XE#3414] / [Intel XE#3904])
[236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_rotation_crc@sprite-rotation-90.html
[237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_rotation_crc@sprite-rotation-90.html
* igt@kms_setmode@basic-clone-single-crtc:
- shard-bmg: [SKIP][238] ([Intel XE#1435]) -> [SKIP][239] ([Intel XE#6703])
[238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_setmode@basic-clone-single-crtc.html
[239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_setmode@basic-clone-single-crtc.html
* igt@kms_sharpness_filter@filter-basic:
- shard-bmg: [SKIP][240] ([Intel XE#6703]) -> [SKIP][241] ([Intel XE#6503]) +2 other tests skip
[240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_sharpness_filter@filter-basic.html
[241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@kms_sharpness_filter@filter-basic.html
* igt@kms_sharpness_filter@invalid-plane-with-filter:
- shard-bmg: [SKIP][242] ([Intel XE#6503]) -> [SKIP][243] ([Intel XE#6703])
[242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@kms_sharpness_filter@invalid-plane-with-filter.html
[243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@kms_sharpness_filter@invalid-plane-with-filter.html
* igt@kms_vrr@cmrr:
- shard-bmg: [SKIP][244] ([Intel XE#6703]) -> [SKIP][245] ([Intel XE#2168])
[244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_vrr@cmrr.html
[245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_vrr@cmrr.html
* igt@kms_vrr@max-min:
- shard-bmg: [SKIP][246] ([Intel XE#1499]) -> [SKIP][247] ([Intel XE#6703])
[246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@kms_vrr@max-min.html
[247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@kms_vrr@max-min.html
* igt@kms_vrr@seamless-rr-switch-drrs:
- shard-bmg: [SKIP][248] ([Intel XE#6703]) -> [SKIP][249] ([Intel XE#1499])
[248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@kms_vrr@seamless-rr-switch-drrs.html
[249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@kms_vrr@seamless-rr-switch-drrs.html
* igt@testdisplay:
- shard-bmg: [ABORT][250] -> [SKIP][251] ([Intel XE#6703])
[250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@testdisplay.html
[251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@testdisplay.html
* igt@xe_compute@ccs-mode-basic:
- shard-bmg: [SKIP][252] ([Intel XE#6599]) -> [SKIP][253] ([Intel XE#6703])
[252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_compute@ccs-mode-basic.html
[253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_compute@ccs-mode-basic.html
* igt@xe_compute@ccs-mode-compute-kernel:
- shard-bmg: [SKIP][254] ([Intel XE#6703]) -> [SKIP][255] ([Intel XE#6599])
[254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_compute@ccs-mode-compute-kernel.html
[255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@xe_compute@ccs-mode-compute-kernel.html
* igt@xe_eudebug@multigpu-basic-client:
- shard-bmg: [SKIP][256] ([Intel XE#4837]) -> [SKIP][257] ([Intel XE#6703]) +3 other tests skip
[256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_eudebug@multigpu-basic-client.html
[257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_eudebug@multigpu-basic-client.html
* igt@xe_eudebug@vm-bind-clear:
- shard-bmg: [SKIP][258] ([Intel XE#6703]) -> [SKIP][259] ([Intel XE#4837]) +6 other tests skip
[258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_eudebug@vm-bind-clear.html
[259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@xe_eudebug@vm-bind-clear.html
* igt@xe_eudebug_online@breakpoint-many-sessions-single-tile:
- shard-bmg: [SKIP][260] ([Intel XE#6703]) -> [SKIP][261] ([Intel XE#4837] / [Intel XE#6665]) +3 other tests skip
[260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_eudebug_online@breakpoint-many-sessions-single-tile.html
[261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@xe_eudebug_online@breakpoint-many-sessions-single-tile.html
* igt@xe_eudebug_online@single-step-one:
- shard-bmg: [SKIP][262] ([Intel XE#4837] / [Intel XE#6665]) -> [SKIP][263] ([Intel XE#6703]) +2 other tests skip
[262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_eudebug_online@single-step-one.html
[263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_eudebug_online@single-step-one.html
* igt@xe_eudebug_sriov@deny-eudebug:
- shard-bmg: [SKIP][264] ([Intel XE#5793]) -> [SKIP][265] ([Intel XE#6703])
[264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_eudebug_sriov@deny-eudebug.html
[265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_eudebug_sriov@deny-eudebug.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-defer-mmap:
- shard-bmg: [SKIP][266] ([Intel XE#2322]) -> [SKIP][267] ([Intel XE#6703]) +1 other test skip
[266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-defer-mmap.html
[267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-null-defer-mmap.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate:
- shard-bmg: [SKIP][268] ([Intel XE#6703]) -> [SKIP][269] ([Intel XE#2322]) +4 other tests skip
[268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate.html
[269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate.html
* igt@xe_exec_multi_queue@many-execs-preempt-mode-fault-userptr-invalidate:
- shard-bmg: [SKIP][270] ([Intel XE#6874]) -> [SKIP][271] ([Intel XE#6703]) +17 other tests skip
[270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_exec_multi_queue@many-execs-preempt-mode-fault-userptr-invalidate.html
[271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_exec_multi_queue@many-execs-preempt-mode-fault-userptr-invalidate.html
* igt@xe_exec_multi_queue@many-queues-basic-smem:
- shard-bmg: [SKIP][272] ([Intel XE#6703]) -> [SKIP][273] ([Intel XE#6874]) +21 other tests skip
[272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_exec_multi_queue@many-queues-basic-smem.html
[273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@xe_exec_multi_queue@many-queues-basic-smem.html
* igt@xe_exec_system_allocator@many-64k-mmap-new-huge-nomemset:
- shard-bmg: [SKIP][274] ([Intel XE#6703]) -> [SKIP][275] ([Intel XE#5007])
[274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_exec_system_allocator@many-64k-mmap-new-huge-nomemset.html
[275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@xe_exec_system_allocator@many-64k-mmap-new-huge-nomemset.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-free-huge:
- shard-bmg: [SKIP][276] ([Intel XE#4943]) -> [SKIP][277] ([Intel XE#6703]) +17 other tests skip
[276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-free-huge.html
[277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-free-huge.html
* igt@xe_exec_system_allocator@twice-mmap-free-huge:
- shard-bmg: [SKIP][278] ([Intel XE#6703]) -> [SKIP][279] ([Intel XE#4943]) +11 other tests skip
[278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_exec_system_allocator@twice-mmap-free-huge.html
[279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@xe_exec_system_allocator@twice-mmap-free-huge.html
* igt@xe_media_fill@media-fill:
- shard-bmg: [SKIP][280] ([Intel XE#6703]) -> [SKIP][281] ([Intel XE#2459] / [Intel XE#2596])
[280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_media_fill@media-fill.html
[281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@xe_media_fill@media-fill.html
* igt@xe_mmap@small-bar:
- shard-bmg: [SKIP][282] ([Intel XE#586]) -> [SKIP][283] ([Intel XE#6703])
[282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_mmap@small-bar.html
[283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_mmap@small-bar.html
* igt@xe_module_load@load:
- shard-bmg: ([PASS][284], [PASS][285], [PASS][286], [PASS][287], [PASS][288], [PASS][289], [PASS][290], [PASS][291], [PASS][292], [PASS][293], [ABORT][294], [ABORT][295], [ABORT][296], [ABORT][297], [PASS][298], [PASS][299], [PASS][300], [PASS][301], [PASS][302], [PASS][303], [PASS][304], [PASS][305], [PASS][306], [PASS][307]) ([Intel XE#6887]) -> ([ABORT][308], [PASS][309], [PASS][310], [ABORT][311], [ABORT][312], [PASS][313], [PASS][314], [PASS][315], [PASS][316], [PASS][317], [PASS][318], [PASS][319], [PASS][320], [PASS][321], [PASS][322], [PASS][323], [PASS][324], [PASS][325], [PASS][326], [PASS][327], [PASS][328], [SKIP][329], [PASS][330], [PASS][331], [PASS][332]) ([Intel XE#2457] / [Intel XE#6887])
[284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_module_load@load.html
[285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_module_load@load.html
[286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_module_load@load.html
[287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_module_load@load.html
[288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_module_load@load.html
[289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-7/igt@xe_module_load@load.html
[290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-4/igt@xe_module_load@load.html
[291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@xe_module_load@load.html
[292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@xe_module_load@load.html
[293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@xe_module_load@load.html
[294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-6/igt@xe_module_load@load.html
[295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-6/igt@xe_module_load@load.html
[296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-6/igt@xe_module_load@load.html
[297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-6/igt@xe_module_load@load.html
[298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-2/igt@xe_module_load@load.html
[299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-1/igt@xe_module_load@load.html
[300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-1/igt@xe_module_load@load.html
[301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-1/igt@xe_module_load@load.html
[302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@xe_module_load@load.html
[303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_module_load@load.html
[304]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_module_load@load.html
[305]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-4/igt@xe_module_load@load.html
[306]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-7/igt@xe_module_load@load.html
[307]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@xe_module_load@load.html
[308]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-6/igt@xe_module_load@load.html
[309]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@xe_module_load@load.html
[310]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@xe_module_load@load.html
[311]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-6/igt@xe_module_load@load.html
[312]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-6/igt@xe_module_load@load.html
[313]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_module_load@load.html
[314]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_module_load@load.html
[315]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@xe_module_load@load.html
[316]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-3/igt@xe_module_load@load.html
[317]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-3/igt@xe_module_load@load.html
[318]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_module_load@load.html
[319]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_module_load@load.html
[320]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_module_load@load.html
[321]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@xe_module_load@load.html
[322]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@xe_module_load@load.html
[323]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@xe_module_load@load.html
[324]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@xe_module_load@load.html
[325]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@xe_module_load@load.html
[326]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-4/igt@xe_module_load@load.html
[327]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@xe_module_load@load.html
[328]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@xe_module_load@load.html
[329]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-1/igt@xe_module_load@load.html
[330]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@xe_module_load@load.html
[331]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@xe_module_load@load.html
[332]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_module_load@load.html
* igt@xe_pat@pat-index-xehpc:
- shard-bmg: [SKIP][333] ([Intel XE#1420]) -> [SKIP][334] ([Intel XE#6703])
[333]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@xe_pat@pat-index-xehpc.html
[334]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@xe_pat@pat-index-xehpc.html
* igt@xe_pm@d3cold-mmap-vram:
- shard-bmg: [SKIP][335] ([Intel XE#2284]) -> [SKIP][336] ([Intel XE#6703])
[335]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_pm@d3cold-mmap-vram.html
[336]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_pm@d3cold-mmap-vram.html
* igt@xe_pxp@pxp-stale-bo-exec-post-rpm:
- shard-bmg: [SKIP][337] ([Intel XE#6703]) -> [SKIP][338] ([Intel XE#4733]) +2 other tests skip
[337]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_pxp@pxp-stale-bo-exec-post-rpm.html
[338]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-7/igt@xe_pxp@pxp-stale-bo-exec-post-rpm.html
* igt@xe_pxp@pxp-stale-bo-exec-post-suspend:
- shard-bmg: [SKIP][339] ([Intel XE#4733]) -> [SKIP][340] ([Intel XE#6703]) +1 other test skip
[339]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-3/igt@xe_pxp@pxp-stale-bo-exec-post-suspend.html
[340]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-2/igt@xe_pxp@pxp-stale-bo-exec-post-suspend.html
* igt@xe_query@multigpu-query-config:
- shard-bmg: [SKIP][341] ([Intel XE#944]) -> [SKIP][342] ([Intel XE#6703])
[341]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-8/igt@xe_query@multigpu-query-config.html
[342]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-5/igt@xe_query@multigpu-query-config.html
* igt@xe_query@multigpu-query-oa-units:
- shard-bmg: [SKIP][343] ([Intel XE#6703]) -> [SKIP][344] ([Intel XE#944])
[343]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41/shard-bmg-5/igt@xe_query@multigpu-query-oa-units.html
[344]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/shard-bmg-8/igt@xe_query@multigpu-query-oa-units.html
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1420]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1420
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#2134]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2134
[Intel XE#2168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2168
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2248]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2248
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2286]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2286
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2328]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2328
[Intel XE#2330]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2330
[Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
[Intel XE#2352]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2352
[Intel XE#2370]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2370
[Intel XE#2372]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2372
[Intel XE#2374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2374
[Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
[Intel XE#2390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2390
[Intel XE#2392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2392
[Intel XE#2393]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2393
[Intel XE#2413]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2413
[Intel XE#2457]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2457
[Intel XE#2459]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2459
[Intel XE#2486]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2486
[Intel XE#2499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2499
[Intel XE#2501]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2501
[Intel XE#2596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2596
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2724]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2724
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
[Intel XE#3141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3141
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#3718]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3718
[Intel XE#3862]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3862
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
[Intel XE#4156]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4156
[Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
[Intel XE#4674]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4674
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
[Intel XE#5007]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5007
[Intel XE#5020]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5020
[Intel XE#5021]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5021
[Intel XE#5545]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5545
[Intel XE#579]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/579
[Intel XE#5793]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5793
[Intel XE#586]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/586
[Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
[Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
[Intel XE#6251]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6251
[Intel XE#6503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6503
[Intel XE#6557]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6557
[Intel XE#6599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6599
[Intel XE#6665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6665
[Intel XE#6693]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6693
[Intel XE#6703]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6703
[Intel XE#6707]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6707
[Intel XE#6740]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6740
[Intel XE#6779]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6779
[Intel XE#6814]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6814
[Intel XE#6841]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6841
[Intel XE#6874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6874
[Intel XE#6886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6886
[Intel XE#6887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6887
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#836]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/836
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
Build changes
-------------
* Linux: xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41 -> xe-pw-159119v1
IGT_8668: 906681747a312ef11ef9af8ab1fa6eff28b4cbd0 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4254-72428bdb20b6c86beaeddb9d69bf698d0697aa41: 72428bdb20b6c86beaeddb9d69bf698d0697aa41
xe-pw-159119v1: 159119v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-159119v1/index.html
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH 2/4] drm/pagemap: Unlock and put folios when possible
2025-12-16 20:10 ` [PATCH 2/4] drm/pagemap: Unlock and put folios when possible Francois Dugast
@ 2025-12-18 21:59 ` Matthew Brost
0 siblings, 0 replies; 29+ messages in thread
From: Matthew Brost @ 2025-12-18 21:59 UTC (permalink / raw)
To: Francois Dugast; +Cc: intel-xe, dri-devel
On Tue, Dec 16, 2025 at 09:10:12PM +0100, Francois Dugast wrote:
> If the page is part of a folio, unlock and put the whole folio at once
> instead of individual pages one after the other. This will reduce the
> number of operations once device THPs are in use.
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/drm_pagemap.c | 26 +++++++++++++++++---------
> 1 file changed, 17 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index 37d7cfbbb3e8..491de9275add 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -149,15 +149,15 @@ static void drm_pagemap_zdd_put(struct drm_pagemap_zdd *zdd)
> }
>
> /**
> - * drm_pagemap_migration_unlock_put_page() - Put a migration page
> - * @page: Pointer to the page to put
> + * drm_pagemap_migration_unlock_put_folio() - Put a migration folio
> + * @folio: Pointer to the folio to put
> *
> - * This function unlocks and puts a page.
> + * This function unlocks and puts a folio.
> */
> -static void drm_pagemap_migration_unlock_put_page(struct page *page)
> +static void drm_pagemap_migration_unlock_put_folio(struct folio *folio)
> {
> - unlock_page(page);
> - put_page(page);
> + folio_unlock(folio);
> + folio_put(folio);
> }
>
> /**
> @@ -172,15 +172,23 @@ static void drm_pagemap_migration_unlock_put_pages(unsigned long npages,
> {
> unsigned long i;
>
> - for (i = 0; i < npages; ++i) {
> + for (i = 0; i < npages;) {
> struct page *page;
> + struct folio *folio;
> + unsigned int order = 0;
>
> if (!migrate_pfn[i])
> - continue;
> + goto next;
>
> page = migrate_pfn_to_page(migrate_pfn[i]);
> - drm_pagemap_migration_unlock_put_page(page);
> + folio = page_folio(page);
> + order = folio_order(folio);
> +
> + drm_pagemap_migration_unlock_put_folio(folio);
> migrate_pfn[i] = 0;
> +
> +next:
> + i += NR_PAGES(order);
> }
> }
>
> --
> 2.43.0
>
* Re: [PATCH 3/4] drm/pagemap: Add helper to access zone_device_data
2025-12-16 20:10 ` [PATCH 3/4] drm/pagemap: Add helper to access zone_device_data Francois Dugast
@ 2025-12-18 22:19 ` Matthew Brost
2025-12-19 15:29 ` Francois Dugast
2025-12-19 20:13 ` Matthew Brost
1 sibling, 1 reply; 29+ messages in thread
From: Matthew Brost @ 2025-12-18 22:19 UTC (permalink / raw)
To: Francois Dugast; +Cc: intel-xe, dri-devel
On Tue, Dec 16, 2025 at 09:10:13PM +0100, Francois Dugast wrote:
> This new helper helps ensure all accesses to zone_device_data use the
> correct API whether the page is part of a folio or not.
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
> drivers/gpu/drm/drm_gpusvm.c | 7 +++++--
> drivers/gpu/drm/drm_pagemap.c | 32 +++++++++++++++++++++++++-------
> include/drm/drm_pagemap.h | 2 ++
> 3 files changed, 32 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 39c8c50401dd..d0ff6b65e543 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -1366,12 +1366,15 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
> order = drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
> if (is_device_private_page(page) ||
> is_device_coherent_page(page)) {
> + struct drm_pagemap_zdd *__zdd =
> + drm_pagemap_page_zone_device_data(page);
> +
> if (!ctx->allow_mixed &&
> - zdd != page->zone_device_data && i > 0) {
> + zdd != __zdd && i > 0) {
> err = -EOPNOTSUPP;
> goto err_unmap;
> }
> - zdd = page->zone_device_data;
> + zdd = __zdd;
> if (pagemap != page_pgmap(page)) {
> if (i > 0) {
> err = -EOPNOTSUPP;
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index 491de9275add..b71e47136112 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -192,6 +192,22 @@ static void drm_pagemap_migration_unlock_put_pages(unsigned long npages,
> }
> }
>
> +/**
> + * drm_pagemap_page_zone_device_data() - Page to zone_device_data
> + * @page: Pointer to the page
> + *
> + * Return: Page's zone_device_data
> + */
> +void *drm_pagemap_page_zone_device_data(struct page *page)
> +{
> + struct folio *folio = page_folio(page);
> +
I think we can actually just do:
return folio_zone_device_data(folio)
We still need the helper because, if the page is part of a folio, the
individual page->zone_device_data could be NULL.
Also, since this is called from GPU SVM, maybe make this an inline in
drm_pagemap.h too. We'd have to include 'linux/memremap.h' in
drm_pagemap.h, but I don't think that is a huge deal.
Matt
> + if (folio_order(folio))
> + return folio_zone_device_data(folio);
> +
> + return page->zone_device_data;
> +}
> +
> /**
> * drm_pagemap_get_devmem_page() - Get a reference to a device memory page
> * @page: Pointer to the page
> @@ -481,8 +497,8 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
> goto next;
>
> if (fault_page) {
> - if (src_page->zone_device_data !=
> - fault_page->zone_device_data)
> + if (drm_pagemap_page_zone_device_data(src_page) !=
> + drm_pagemap_page_zone_device_data(fault_page))
> goto next;
> }
>
> @@ -670,7 +686,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> int i, err = 0;
>
> if (page) {
> - zdd = page->zone_device_data;
> + zdd = drm_pagemap_page_zone_device_data(page);
> if (time_before64(get_jiffies_64(),
> zdd->devmem_allocation->timeslice_expiration))
> return 0;
> @@ -722,7 +738,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> if (!page)
> goto err_finalize;
> }
> - zdd = page->zone_device_data;
> + zdd = drm_pagemap_page_zone_device_data(page);
> ops = zdd->devmem_allocation->ops;
> dev = zdd->devmem_allocation->dev;
>
> @@ -768,7 +784,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> */
> static void drm_pagemap_folio_free(struct folio *folio)
> {
> - drm_pagemap_zdd_put(folio->page.zone_device_data);
> + struct page *page = folio_page(folio, 0);
> +
> + drm_pagemap_zdd_put(drm_pagemap_page_zone_device_data(page));
> }
>
> /**
> @@ -784,7 +802,7 @@ static void drm_pagemap_folio_free(struct folio *folio)
> */
> static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
> {
> - struct drm_pagemap_zdd *zdd = vmf->page->zone_device_data;
> + struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(vmf->page);
> int err;
>
> err = __drm_pagemap_migrate_to_ram(vmf->vma,
> @@ -847,7 +865,7 @@ EXPORT_SYMBOL_GPL(drm_pagemap_devmem_init);
> */
> struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page)
> {
> - struct drm_pagemap_zdd *zdd = page->zone_device_data;
> + struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
>
> return zdd->devmem_allocation->dpagemap;
> }
> diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> index f6e7e234c089..3a8d0e1cef43 100644
> --- a/include/drm/drm_pagemap.h
> +++ b/include/drm/drm_pagemap.h
> @@ -245,4 +245,6 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> struct mm_struct *mm,
> unsigned long timeslice_ms);
>
> +void *drm_pagemap_page_zone_device_data(struct page *page);
> +
> #endif
> --
> 2.43.0
>
* Re: [PATCH 4/4] drm/pagemap: Enable THP support for GPU memory migration
2025-12-16 20:10 ` [PATCH 4/4] drm/pagemap: Enable THP support for GPU memory migration Francois Dugast
@ 2025-12-18 22:24 ` Matthew Brost
0 siblings, 0 replies; 29+ messages in thread
From: Matthew Brost @ 2025-12-18 22:24 UTC (permalink / raw)
To: Francois Dugast; +Cc: intel-xe, dri-devel, Thomas Hellström, Michal Mrozek
On Tue, Dec 16, 2025 at 09:10:14PM +0100, Francois Dugast wrote:
> This enables support for Transparent Huge Pages (THP) for device pages by
> using MIGRATE_VMA_SELECT_COMPOUND during migration. It removes the need to
> split folios and loop multiple times over all pages to perform required
> operations at page level. Instead, we rely on newly introduced support for
> higher orders in drm_pagemap and folio-level API.
>
> In Xe, this drastically improves performance when using SVM. The GT stats
> below collected after a 2MB page fault show overall servicing is more than
> 7 times faster, and thanks to reduced CPU overhead the time spent on the
> actual copy goes from 23% without THP to 80% with THP:
>
> Without THP:
>
> svm_2M_pagefault_us: 966
> svm_2M_migrate_us: 942
> svm_2M_device_copy_us: 223
> svm_2M_get_pages_us: 9
> svm_2M_bind_us: 10
>
> With THP:
>
> svm_2M_pagefault_us: 132
> svm_2M_migrate_us: 128
> svm_2M_device_copy_us: 106
> svm_2M_get_pages_us: 1
> svm_2M_bind_us: 2
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Michal Mrozek <michal.mrozek@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
> drivers/gpu/drm/drm_pagemap.c | 88 +++++++++++++++++++++++++++++------
> drivers/gpu/drm/xe/xe_svm.c | 5 ++
> include/drm/drm_pagemap.h | 5 +-
> 3 files changed, 83 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index b71e47136112..797ec2094fdf 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -211,16 +211,20 @@ void *drm_pagemap_page_zone_device_data(struct page *page)
> /**
> * drm_pagemap_get_devmem_page() - Get a reference to a device memory page
> * @page: Pointer to the page
> + * @order: Order
> * @zdd: Pointer to the GPU SVM zone device data
> *
> * This function associates the given page with the specified GPU SVM zone
> * device data and initializes it for zone device usage.
> */
> static void drm_pagemap_get_devmem_page(struct page *page,
> + unsigned int order,
> struct drm_pagemap_zdd *zdd)
> {
> - page->zone_device_data = drm_pagemap_zdd_get(zdd);
> - zone_device_page_init(page, 0);
> + struct folio *folio = page_folio(page);
> +
> + folio_set_zone_device_data(folio, drm_pagemap_zdd_get(zdd));
> + zone_device_page_init(page, order);
> }
>
> /**
> @@ -345,11 +349,13 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> void *pgmap_owner)
> {
> const struct drm_pagemap_devmem_ops *ops = devmem_allocation->ops;
> + struct drm_pagemap *dpagemap = devmem_allocation->dpagemap;
> struct migrate_vma migrate = {
> .start = start,
> .end = end,
> .pgmap_owner = pgmap_owner,
> - .flags = MIGRATE_VMA_SELECT_SYSTEM,
> + .flags = MIGRATE_VMA_SELECT_SYSTEM
> + | MIGRATE_VMA_SELECT_COMPOUND,
> };
> unsigned long i, npages = npages_in_range(start, end);
> struct vm_area_struct *vas;
> @@ -409,11 +415,6 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> goto err_free;
> }
>
> - if (migrate.cpages != npages) {
I don't think we want to blindly delete this. I believe if the original
check fails, we should call a subsequent function to calculate cpages
based on the pages in the returned migrate.src, and bail out if that
doesn't match npages.
> - err = -EBUSY;
> - goto err_finalize;
> - }
> -
> err = ops->populate_devmem_pfn(devmem_allocation, npages, migrate.dst);
> if (err)
> goto err_finalize;
> @@ -424,13 +425,38 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
> if (err)
> goto err_finalize;
>
> - for (i = 0; i < npages; ++i) {
> + mutex_lock(&dpagemap->folio_split_lock);
> + for (i = 0; i < npages;) {
> + unsigned long j;
> struct page *page = pfn_to_page(migrate.dst[i]);
> + unsigned int order;
>
> pages[i] = page;
> migrate.dst[i] = migrate_pfn(migrate.dst[i]);
> - drm_pagemap_get_devmem_page(page, zdd);
> +
> + if (migrate.src[i] & MIGRATE_PFN_COMPOUND) {
> + order = HPAGE_PMD_ORDER;
> +
> + migrate.dst[i] |= MIGRATE_PFN_COMPOUND;
> +
> + drm_pagemap_get_devmem_page(page, order, zdd);
> +
> + for (j = 1; j < NR_PAGES(order) && i + j < npages; j++)
> + migrate.dst[i + j] = 0;
> +
> + } else {
> + order = 0;
> +
> + if (folio_order(page_folio(page)))
> + migrate_device_split_page(page);
> +
> + zone_device_page_init(page, 0);
> + page->zone_device_data = drm_pagemap_zdd_get(zdd);
drm_pagemap_get_devmem_page(page, order, zdd); ?
If so, this part could be moved outside of the if/else clause.
Matt
> + }
> +
> + i += NR_PAGES(order);
> }
> + mutex_unlock(&dpagemap->folio_split_lock);
>
> err = ops->copy_to_devmem(pages, pagemap_addr, npages);
> if (err)
> @@ -516,6 +542,8 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
> page = folio_page(folio, 0);
> mpfn[i] = migrate_pfn(page_to_pfn(page));
>
> + if (order)
> + mpfn[i] |= MIGRATE_PFN_COMPOUND;
> next:
> if (page)
> addr += page_size(page);
> @@ -617,8 +645,15 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
> if (err)
> goto err_finalize;
>
> - for (i = 0; i < npages; ++i)
> + for (i = 0; i < npages;) {
> + unsigned int order = 0;
> +
> pages[i] = migrate_pfn_to_page(src[i]);
> + if (pages[i])
> + order = folio_order(page_folio(pages[i]));
> +
> + i += NR_PAGES(order);
> + }
>
> err = ops->copy_to_ram(pages, pagemap_addr, npages);
> if (err)
> @@ -671,8 +706,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> struct migrate_vma migrate = {
> .vma = vas,
> .pgmap_owner = device_private_page_owner,
> - .flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
> - MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> + .flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE
> + | MIGRATE_VMA_SELECT_DEVICE_COHERENT
> + | MIGRATE_VMA_SELECT_COMPOUND,
> .fault_page = page,
> };
> struct drm_pagemap_zdd *zdd;
> @@ -753,8 +789,15 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> if (err)
> goto err_finalize;
>
> - for (i = 0; i < npages; ++i)
> + for (i = 0; i < npages;) {
> + unsigned int order = 0;
> +
> pages[i] = migrate_pfn_to_page(migrate.src[i]);
> + if (pages[i])
> + order = folio_order(page_folio(pages[i]));
> +
> + i += NR_PAGES(order);
> + }
>
> err = ops->copy_to_ram(pages, pagemap_addr, npages);
> if (err)
> @@ -813,9 +856,26 @@ static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
> return err ? VM_FAULT_SIGBUS : 0;
> }
>
> +static void drm_pagemap_folio_split(struct folio *orig_folio, struct folio *new_folio)
> +{
> + struct drm_pagemap_zdd *zdd;
> +
> + if (!new_folio)
> + return;
> +
> + new_folio->pgmap = orig_folio->pgmap;
> + zdd = folio_zone_device_data(orig_folio);
> + if (folio_order(new_folio))
> + folio_set_zone_device_data(new_folio, drm_pagemap_zdd_get(zdd));
> + else
> + folio_page(new_folio, 0)->zone_device_data =
> + drm_pagemap_zdd_get(zdd);
> +}
> +
> static const struct dev_pagemap_ops drm_pagemap_pagemap_ops = {
> .folio_free = drm_pagemap_folio_free,
> .migrate_to_ram = drm_pagemap_migrate_to_ram,
> + .folio_split = drm_pagemap_folio_split,
> };
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 93550c7c84ac..037c77de2757 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -4,6 +4,7 @@
> */
>
> #include <drm/drm_drv.h>
> +#include <drm/drm_managed.h>
>
> #include "xe_bo.h"
> #include "xe_exec_queue_types.h"
> @@ -1470,6 +1471,10 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
> void *addr;
> int ret;
>
> + ret = drmm_mutex_init(&tile->xe->drm, &vr->dpagemap.folio_split_lock);
> + if (ret)
> + return ret;
> +
> res = devm_request_free_mem_region(dev, &iomem_resource,
> vr->usable_size);
> if (IS_ERR(res)) {
> diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> index 3a8d0e1cef43..82b9c0e6392e 100644
> --- a/include/drm/drm_pagemap.h
> +++ b/include/drm/drm_pagemap.h
> @@ -129,11 +129,14 @@ struct drm_pagemap_ops {
> * struct drm_pagemap: Additional information for a struct dev_pagemap
> * used for device p2p handshaking.
> * @ops: The struct drm_pagemap_ops.
> - * @dev: The struct drevice owning the device-private memory.
> + * @dev: The struct device owning the device-private memory.
> + * @folio_split_lock: Lock to protect device folio splitting.
> */
> struct drm_pagemap {
> const struct drm_pagemap_ops *ops;
> struct device *dev;
> + /* Protect device folio splitting */
> + struct mutex folio_split_lock;
> };
>
> struct drm_pagemap_devmem;
> --
> 2.43.0
>
* Re: [PATCH 3/4] drm/pagemap: Add helper to access zone_device_data
2025-12-18 22:19 ` Matthew Brost
@ 2025-12-19 15:29 ` Francois Dugast
0 siblings, 0 replies; 29+ messages in thread
From: Francois Dugast @ 2025-12-19 15:29 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, dri-devel
On Thu, Dec 18, 2025 at 02:19:24PM -0800, Matthew Brost wrote:
> On Tue, Dec 16, 2025 at 09:10:13PM +0100, Francois Dugast wrote:
> > This new helper helps ensure all accesses to zone_device_data use the
> > correct API whether the page is part of a folio or not.
> >
> > Suggested-by: Matthew Brost <matthew.brost@intel.com>
> > Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > ---
> > drivers/gpu/drm/drm_gpusvm.c | 7 +++++--
> > drivers/gpu/drm/drm_pagemap.c | 32 +++++++++++++++++++++++++-------
> > include/drm/drm_pagemap.h | 2 ++
> > 3 files changed, 32 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> > index 39c8c50401dd..d0ff6b65e543 100644
> > --- a/drivers/gpu/drm/drm_gpusvm.c
> > +++ b/drivers/gpu/drm/drm_gpusvm.c
> > @@ -1366,12 +1366,15 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
> > order = drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
> > if (is_device_private_page(page) ||
> > is_device_coherent_page(page)) {
> > + struct drm_pagemap_zdd *__zdd =
> > + drm_pagemap_page_zone_device_data(page);
> > +
> > if (!ctx->allow_mixed &&
> > - zdd != page->zone_device_data && i > 0) {
> > + zdd != __zdd && i > 0) {
> > err = -EOPNOTSUPP;
> > goto err_unmap;
> > }
> > - zdd = page->zone_device_data;
> > + zdd = __zdd;
> > if (pagemap != page_pgmap(page)) {
> > if (i > 0) {
> > err = -EOPNOTSUPP;
> > diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> > index 491de9275add..b71e47136112 100644
> > --- a/drivers/gpu/drm/drm_pagemap.c
> > +++ b/drivers/gpu/drm/drm_pagemap.c
> > @@ -192,6 +192,22 @@ static void drm_pagemap_migration_unlock_put_pages(unsigned long npages,
> > }
> > }
> >
> > +/**
> > + * drm_pagemap_page_zone_device_data() - Page to zone_device_data
> > + * @page: Pointer to the page
> > + *
> > + * Return: Page's zone_device_data
> > + */
> > +void *drm_pagemap_page_zone_device_data(struct page *page)
> > +{
> > + struct folio *folio = page_folio(page);
> > +
>
> I think we can actually just do:
>
> return folio_zone_device_data(folio)
Agreed.
>
> We still need the helper because, if the page is part of a folio, the
> individual page->zone_device_data could be NULL.
>
> Also, since this is called from GPU SVM, maybe make this an inline in
> drm_pagemap.h too. We'd have to include 'linux/memremap.h' in
> drm_pagemap.h, but I don't think that is a huge deal.
Yes that is better, will do.
Francois
>
> Matt
>
> > + if (folio_order(folio))
> > + return folio_zone_device_data(folio);
> > +
> > + return page->zone_device_data;
> > +}
> > +
> > /**
> > * drm_pagemap_get_devmem_page() - Get a reference to a device memory page
> > * @page: Pointer to the page
> > @@ -481,8 +497,8 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
> > goto next;
> >
> > if (fault_page) {
> > - if (src_page->zone_device_data !=
> > - fault_page->zone_device_data)
> > + if (drm_pagemap_page_zone_device_data(src_page) !=
> > + drm_pagemap_page_zone_device_data(fault_page))
> > goto next;
> > }
> >
> > @@ -670,7 +686,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> > int i, err = 0;
> >
> > if (page) {
> > - zdd = page->zone_device_data;
> > + zdd = drm_pagemap_page_zone_device_data(page);
> > if (time_before64(get_jiffies_64(),
> > zdd->devmem_allocation->timeslice_expiration))
> > return 0;
> > @@ -722,7 +738,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> > if (!page)
> > goto err_finalize;
> > }
> > - zdd = page->zone_device_data;
> > + zdd = drm_pagemap_page_zone_device_data(page);
> > ops = zdd->devmem_allocation->ops;
> > dev = zdd->devmem_allocation->dev;
> >
> > @@ -768,7 +784,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> > */
> > static void drm_pagemap_folio_free(struct folio *folio)
> > {
> > - drm_pagemap_zdd_put(folio->page.zone_device_data);
> > + struct page *page = folio_page(folio, 0);
> > +
> > + drm_pagemap_zdd_put(drm_pagemap_page_zone_device_data(page));
> > }
> >
> > /**
> > @@ -784,7 +802,7 @@ static void drm_pagemap_folio_free(struct folio *folio)
> > */
> > static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
> > {
> > - struct drm_pagemap_zdd *zdd = vmf->page->zone_device_data;
> > + struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(vmf->page);
> > int err;
> >
> > err = __drm_pagemap_migrate_to_ram(vmf->vma,
> > @@ -847,7 +865,7 @@ EXPORT_SYMBOL_GPL(drm_pagemap_devmem_init);
> > */
> > struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page)
> > {
> > - struct drm_pagemap_zdd *zdd = page->zone_device_data;
> > + struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
> >
> > return zdd->devmem_allocation->dpagemap;
> > }
> > diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> > index f6e7e234c089..3a8d0e1cef43 100644
> > --- a/include/drm/drm_pagemap.h
> > +++ b/include/drm/drm_pagemap.h
> > @@ -245,4 +245,6 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> > struct mm_struct *mm,
> > unsigned long timeslice_ms);
> >
> > +void *drm_pagemap_page_zone_device_data(struct page *page);
> > +
> > #endif
> > --
> > 2.43.0
> >
* Re: [PATCH 3/4] drm/pagemap: Add helper to access zone_device_data
2025-12-16 20:10 ` [PATCH 3/4] drm/pagemap: Add helper to access zone_device_data Francois Dugast
2025-12-18 22:19 ` Matthew Brost
@ 2025-12-19 20:13 ` Matthew Brost
1 sibling, 0 replies; 29+ messages in thread
From: Matthew Brost @ 2025-12-19 20:13 UTC (permalink / raw)
To: Francois Dugast; +Cc: intel-xe, dri-devel
On Tue, Dec 16, 2025 at 09:10:13PM +0100, Francois Dugast wrote:
> This new helper helps ensure all accesses to zone_device_data use the
> correct API whether the page is part of a folio or not.
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
> drivers/gpu/drm/drm_gpusvm.c | 7 +++++--
> drivers/gpu/drm/drm_pagemap.c | 32 +++++++++++++++++++++++++-------
> include/drm/drm_pagemap.h | 2 ++
> 3 files changed, 32 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 39c8c50401dd..d0ff6b65e543 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -1366,12 +1366,15 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
> order = drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
> if (is_device_private_page(page) ||
> is_device_coherent_page(page)) {
> + struct drm_pagemap_zdd *__zdd =
> + drm_pagemap_page_zone_device_data(page);
> +
> if (!ctx->allow_mixed &&
> - zdd != page->zone_device_data && i > 0) {
> + zdd != __zdd && i > 0) {
> err = -EOPNOTSUPP;
> goto err_unmap;
> }
> - zdd = page->zone_device_data;
> + zdd = __zdd;
> if (pagemap != page_pgmap(page)) {
> if (i > 0) {
> err = -EOPNOTSUPP;
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index 491de9275add..b71e47136112 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -192,6 +192,22 @@ static void drm_pagemap_migration_unlock_put_pages(unsigned long npages,
> }
> }
>
> +/**
> + * drm_pagemap_page_zone_device_data() - Page to zone_device_data
> + * @page: Pointer to the page
> + *
> + * Return: Page's zone_device_data
> + */
> +void *drm_pagemap_page_zone_device_data(struct page *page)
> +{
> + struct folio *folio = page_folio(page);
> +
> + if (folio_order(folio))
> + return folio_zone_device_data(folio);
> +
> + return page->zone_device_data;
> +}
> +
> /**
> * drm_pagemap_get_devmem_page() - Get a reference to a device memory page
> * @page: Pointer to the page
> @@ -481,8 +497,8 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
> goto next;
>
> if (fault_page) {
> - if (src_page->zone_device_data !=
> - fault_page->zone_device_data)
> + if (drm_pagemap_page_zone_device_data(src_page) !=
> + drm_pagemap_page_zone_device_data(fault_page))
> goto next;
> }
>
> @@ -670,7 +686,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> int i, err = 0;
>
> if (page) {
> - zdd = page->zone_device_data;
> + zdd = drm_pagemap_page_zone_device_data(page);
> if (time_before64(get_jiffies_64(),
> zdd->devmem_allocation->timeslice_expiration))
> return 0;
> @@ -722,7 +738,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> if (!page)
> goto err_finalize;
> }
> - zdd = page->zone_device_data;
> + zdd = drm_pagemap_page_zone_device_data(page);
> ops = zdd->devmem_allocation->ops;
> dev = zdd->devmem_allocation->dev;
>
> @@ -768,7 +784,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
> */
> static void drm_pagemap_folio_free(struct folio *folio)
> {
> - drm_pagemap_zdd_put(folio->page.zone_device_data);
> + struct page *page = folio_page(folio, 0);
> +
> + drm_pagemap_zdd_put(drm_pagemap_page_zone_device_data(page));
> }
>
> /**
> @@ -784,7 +802,7 @@ static void drm_pagemap_folio_free(struct folio *folio)
> */
> static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
> {
> - struct drm_pagemap_zdd *zdd = vmf->page->zone_device_data;
> + struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(vmf->page);
> int err;
>
> err = __drm_pagemap_migrate_to_ram(vmf->vma,
> @@ -847,7 +865,7 @@ EXPORT_SYMBOL_GPL(drm_pagemap_devmem_init);
> */
> struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page)
> {
> - struct drm_pagemap_zdd *zdd = page->zone_device_data;
> + struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
>
> return zdd->devmem_allocation->dpagemap;
> }
> diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
> index f6e7e234c089..3a8d0e1cef43 100644
> --- a/include/drm/drm_pagemap.h
> +++ b/include/drm/drm_pagemap.h
> @@ -245,4 +245,6 @@ int drm_pagemap_populate_mm(struct drm_pagemap *dpagemap,
> struct mm_struct *mm,
> unsigned long timeslice_ms);
>
> +void *drm_pagemap_page_zone_device_data(struct page *page);
I missed this in my previous reply:
s/void */struct drm_pagemap_zdd */
Matt
> +
> #endif
> --
> 2.43.0
>
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2025-12-16 21:39 ` Matthew Brost
@ 2026-01-06 2:39 ` Matthew Brost
2026-01-07 20:15 ` Zi Yan
0 siblings, 1 reply; 29+ messages in thread
From: Matthew Brost @ 2026-01-06 2:39 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Francois Dugast, intel-xe, dri-devel, Andrew Morton, Balbir Singh,
linux-mm
On Tue, Dec 16, 2025 at 01:39:50PM -0800, Matthew Brost wrote:
> On Tue, Dec 16, 2025 at 08:34:46PM +0000, Matthew Wilcox wrote:
> > On Tue, Dec 16, 2025 at 09:10:11PM +0100, Francois Dugast wrote:
> > > + ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
> >
> > We're trying to get rid of uniform splits. Why do you need this to be
> > uniform?
I looked into this a bit more - we do want a uniform split here. What we
want is to split the THP into 512 4K pages.
Per the doc for __split_unmapped_folio:
3590 * @split_at: in buddy allocator like split, the folio containing @split_at
3591 * will be split until its order becomes @new_order.
I think this implies some of the pages may still be a higher order, which
is not the desired behavior for this usage.
Matt
>
> It’s very possible we’re doing this incorrectly due to a lack of core MM
> experience. I believe Zi Yan suggested this approach (use
> __split_unmapped_folio) a while back.
>
> Let me start by explaining what we’re trying to do and see if there’s a
> better suggestion for how to accomplish it.
>
> Would SPLIT_TYPE_NON_UNIFORM split work here? Or do you have another
> suggestion on how to split the folio aside from __split_unmapped_folio?
>
> This covers the case where a GPU device page was allocated as a THP
> (e.g., we call zone_device_folio_init with an order of 9). Later, this
> page is freed/unmapped and then reallocated for a CPU VMA that is
> smaller than a THP (e.g., we’d allocate either 4KB or 64KB based on
> CPU VMA size alignment). At this point, we need to split the device
> folio so we can migrate data into 4KB device pages.
>
> Would SPLIT_TYPE_NON_UNIFORM work here? Or do you have another
> suggestion for splitting the folio aside from __split_unmapped_folio?
>
> Matt
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-06 2:39 ` Matthew Brost
@ 2026-01-07 20:15 ` Zi Yan
0 siblings, 0 replies; 29+ messages in thread
From: Zi Yan @ 2026-01-07 20:15 UTC (permalink / raw)
To: Matthew Brost
Cc: Matthew Wilcox, Francois Dugast, intel-xe, dri-devel,
Andrew Morton, Balbir Singh, linux-mm
On 5 Jan 2026, at 21:39, Matthew Brost wrote:
> On Tue, Dec 16, 2025 at 01:39:50PM -0800, Matthew Brost wrote:
>> On Tue, Dec 16, 2025 at 08:34:46PM +0000, Matthew Wilcox wrote:
>>> On Tue, Dec 16, 2025 at 09:10:11PM +0100, Francois Dugast wrote:
>>>> + ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
>>>
>>> We're trying to get rid of uniform splits. Why do you need this to be
>>> uniform?
>
> I looked into this bit more - we do want a uniform split here. What we
> want is to split the THP into 512 4k pages here.
>
> Per the doc for __split_unmapped_folio:
>
> 3590 * @split_at: in buddy allocator like split, the folio containing @split_at
> 3591 * will be split until its order becomes @new_order.
>
> I think this implies some of the pages may still be a higher order which
> is not desired behavior for this usage.
IIUC, this is because there is no mTHP support in device private folios yet,
so a device private folio can only be order-0 or order-9. But after mTHP
support is added, a non-uniform split should work since, as you said below,
only 4KB or 64KB is reallocated in CPU memory.
In terms of mTHP support in device private folios, how much effort would it
take? Maybe add a TODO in migrate_device_split_page() saying to move to
NON_UNIFORM when mTHP support is ready.
>
> Matt
>
>>
>> It’s very possible we’re doing this incorrectly due to a lack of core MM
>> experience. I believe Zi Yan suggested this approach (use
>> __split_unmapped_folio) a while back.
>>
>> Let me start by explaining what we’re trying to do and see if there’s a
>> better suggestion for how to accomplish it.
>>
>> Would SPLIT_TYPE_NON_UNIFORM split work here? Or do you have another
>> suggestion on how to split the folio aside from __split_unmapped_folio?
>>
>> This covers the case where a GPU device page was allocated as a THP
>> (e.g., we call zone_device_folio_init with an order of 9). Later, this
>> page is freed/unmapped and then reallocated for a CPU VMA that is
>> smaller than a THP (e.g., we’d allocate either 4KB or 64KB based on
>> CPU VMA size alignment). At this point, we need to split the device
>> folio so we can migrate data into 4KB device pages.
>>
>> Would SPLIT_TYPE_NON_UNIFORM work here? Or do you have another
>> suggestion for splitting the folio aside from __split_unmapped_folio?
>>
>> Matt
Best Regards,
Yan, Zi
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2025-12-16 20:10 ` [PATCH 1/4] mm/migrate: Add migrate_device_split_page Francois Dugast
2025-12-16 20:34 ` Matthew Wilcox
@ 2026-01-07 20:20 ` Zi Yan
2026-01-07 20:38 ` Zi Yan
1 sibling, 1 reply; 29+ messages in thread
From: Zi Yan @ 2026-01-07 20:20 UTC (permalink / raw)
To: Francois Dugast
Cc: intel-xe, dri-devel, Matthew Brost, Andrew Morton, Balbir Singh,
linux-mm, David Hildenbrand, Lorenzo Stoakes, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Lance Yang
+THP folks
On 16 Dec 2025, at 15:10, Francois Dugast wrote:
> From: Matthew Brost <matthew.brost@intel.com>
>
> Introduce migrate_device_split_page() to split a device page into
> lower-order pages. Used when a folio allocated as higher-order is freed
> and later reallocated at a smaller order by the driver memory manager.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Balbir Singh <balbirs@nvidia.com>
> Cc: dri-devel@lists.freedesktop.org
> Cc: linux-mm@kvack.org
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> ---
> include/linux/huge_mm.h | 3 +++
> include/linux/migrate.h | 1 +
> mm/huge_memory.c | 6 ++---
> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 56 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index a4d9f964dfde..6ad8f359bc0d 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
> unsigned int min_order_for_split(struct folio *folio);
> int split_folio_to_list(struct folio *folio, struct list_head *list);
> +int __split_unmapped_folio(struct folio *folio, int new_order,
> + struct page *split_at, struct xa_state *xas,
> + struct address_space *mapping, enum split_type split_type);
> int folio_check_splittable(struct folio *folio, unsigned int new_order,
> enum split_type split_type);
> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 26ca00c325d9..ec65e4fd5f88 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
> unsigned long npages);
> void migrate_device_finalize(unsigned long *src_pfns,
> unsigned long *dst_pfns, unsigned long npages);
> +int migrate_device_split_page(struct page *page);
>
> #endif /* CONFIG_MIGRATION */
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 40cf59301c21..7ded35a3ecec 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
> * split but not to @new_order, the caller needs to check)
> */
> -static int __split_unmapped_folio(struct folio *folio, int new_order,
> - struct page *split_at, struct xa_state *xas,
> - struct address_space *mapping, enum split_type split_type)
> +int __split_unmapped_folio(struct folio *folio, int new_order,
> + struct page *split_at, struct xa_state *xas,
> + struct address_space *mapping, enum split_type split_type)
> {
> const bool is_anon = folio_test_anon(folio);
> int old_order = folio_order(folio);
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index 23379663b1e1..eb0f0e938947 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
> EXPORT_SYMBOL(migrate_vma_setup);
>
> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> +/**
> + * migrate_device_split_page() - Split device page
> + * @page: Device page to split
> + *
> + * Splits a device page into smaller pages. Typically called when reallocating a
> + * folio to a smaller size. Inherently racy—only safe if the caller ensures
> + * mutual exclusion within the page's folio (i.e., no other threads are using
> + * pages within the folio). Expected to be called a free device page and
> + * restores all split out pages to a free state.
> + */
> +int migrate_device_split_page(struct page *page)
> +{
> + struct folio *folio = page_folio(page);
> + struct dev_pagemap *pgmap = folio->pgmap;
> + struct page *unlock_page = folio_page(folio, 0);
> + unsigned int order = folio_order(folio), i;
> + int ret = 0;
> +
> + VM_BUG_ON_FOLIO(!order, folio);
> + VM_BUG_ON_FOLIO(!folio_is_device_private(folio), folio);
> + VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
> +
> + folio_lock(folio);
> +
> + ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
> + if (ret) {
> + /*
> + * We can't fail here unless the caller doesn't know what they
> + * are doing.
> + */
> + VM_BUG_ON_FOLIO(ret, folio);
> +
> + return ret;
> + }
> +
> + for (i = 0; i < 0x1 << order; ++i, ++unlock_page) {
> + page_folio(unlock_page)->pgmap = pgmap;
> + folio_unlock(page_folio(unlock_page));
> + }
> +
> + return 0;
> +}
> +
> /**
> * migrate_vma_insert_huge_pmd_page: Insert a huge folio into @migrate->vma->vm_mm
> * at @addr. folio is already allocated as a part of the migration process with
> @@ -927,6 +970,11 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
> return ret;
> }
> #else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
> +int migrate_device_split_page(struct page *page)
> +{
> + return 0;
> +}
> +
> static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
> unsigned long addr,
> struct page *page,
> @@ -943,6 +991,7 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
> return 0;
> }
> #endif
> +EXPORT_SYMBOL(migrate_device_split_page);
>
> static unsigned long migrate_vma_nr_pages(unsigned long *src)
> {
> --
> 2.43.0
Best Regards,
Yan, Zi
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-07 20:20 ` Zi Yan
@ 2026-01-07 20:38 ` Zi Yan
2026-01-07 21:15 ` Matthew Brost
0 siblings, 1 reply; 29+ messages in thread
From: Zi Yan @ 2026-01-07 20:38 UTC (permalink / raw)
To: Francois Dugast
Cc: intel-xe, dri-devel, Matthew Brost, Andrew Morton, Balbir Singh,
linux-mm, David Hildenbrand, Lorenzo Stoakes, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Lance Yang, Matthew Wilcox
On 7 Jan 2026, at 15:20, Zi Yan wrote:
> +THP folks
+willy, since he commented in another thread.
>
> On 16 Dec 2025, at 15:10, Francois Dugast wrote:
>
>> From: Matthew Brost <matthew.brost@intel.com>
>>
>> Introduce migrate_device_split_page() to split a device page into
>> lower-order pages. Used when a folio allocated as higher-order is freed
>> and later reallocated at a smaller order by the driver memory manager.
>>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Balbir Singh <balbirs@nvidia.com>
>> Cc: dri-devel@lists.freedesktop.org
>> Cc: linux-mm@kvack.org
>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
>> ---
>> include/linux/huge_mm.h | 3 +++
>> include/linux/migrate.h | 1 +
>> mm/huge_memory.c | 6 ++---
>> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
>> 4 files changed, 56 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index a4d9f964dfde..6ad8f359bc0d 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
>> unsigned int min_order_for_split(struct folio *folio);
>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>> + struct page *split_at, struct xa_state *xas,
>> + struct address_space *mapping, enum split_type split_type);
>> int folio_check_splittable(struct folio *folio, unsigned int new_order,
>> enum split_type split_type);
>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>> index 26ca00c325d9..ec65e4fd5f88 100644
>> --- a/include/linux/migrate.h
>> +++ b/include/linux/migrate.h
>> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
>> unsigned long npages);
>> void migrate_device_finalize(unsigned long *src_pfns,
>> unsigned long *dst_pfns, unsigned long npages);
>> +int migrate_device_split_page(struct page *page);
>>
>> #endif /* CONFIG_MIGRATION */
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 40cf59301c21..7ded35a3ecec 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>> * split but not to @new_order, the caller needs to check)
>> */
>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
>> - struct page *split_at, struct xa_state *xas,
>> - struct address_space *mapping, enum split_type split_type)
>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>> + struct page *split_at, struct xa_state *xas,
>> + struct address_space *mapping, enum split_type split_type)
>> {
>> const bool is_anon = folio_test_anon(folio);
>> int old_order = folio_order(folio);
>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>> index 23379663b1e1..eb0f0e938947 100644
>> --- a/mm/migrate_device.c
>> +++ b/mm/migrate_device.c
>> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
>> EXPORT_SYMBOL(migrate_vma_setup);
>>
>> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>> +/**
>> + * migrate_device_split_page() - Split device page
>> + * @page: Device page to split
>> + *
>> + * Splits a device page into smaller pages. Typically called when reallocating a
>> + * folio to a smaller size. Inherently racy—only safe if the caller ensures
>> + * mutual exclusion within the page's folio (i.e., no other threads are using
>> + * pages within the folio). Expected to be called a free device page and
>> + * restores all split out pages to a free state.
>> + */
Do you mind explaining why __split_unmapped_folio() is needed for a free device
page? A free page is not supposed to be a large folio, at least from a core
MM point of view. __split_unmapped_folio() is intended to work on large folios
(or compound pages), even if the input folio has refcount == 0 (because it is
frozen).
>> +int migrate_device_split_page(struct page *page)
>> +{
>> + struct folio *folio = page_folio(page);
>> + struct dev_pagemap *pgmap = folio->pgmap;
>> + struct page *unlock_page = folio_page(folio, 0);
>> + unsigned int order = folio_order(folio), i;
>> + int ret = 0;
>> +
>> + VM_BUG_ON_FOLIO(!order, folio);
>> + VM_BUG_ON_FOLIO(!folio_is_device_private(folio), folio);
>> + VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
Please use VM_WARN_ON_FOLIO() instead to catch errors. There is no need to
crash the kernel.
>> +
>> + folio_lock(folio);
>> +
>> + ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
>> + if (ret) {
>> + /*
>> + * We can't fail here unless the caller doesn't know what they
>> + * are doing.
>> + */
>> + VM_BUG_ON_FOLIO(ret, folio);
Same here.
>> +
>> + return ret;
>> + }
>> +
>> + for (i = 0; i < 0x1 << order; ++i, ++unlock_page) {
>> + page_folio(unlock_page)->pgmap = pgmap;
>> + folio_unlock(page_folio(unlock_page));
>> + }
>> +
>> + return 0;
>> +}
>> +
>> /**
>> * migrate_vma_insert_huge_pmd_page: Insert a huge folio into @migrate->vma->vm_mm
>> * at @addr. folio is already allocated as a part of the migration process with
>> @@ -927,6 +970,11 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
>> return ret;
>> }
>> #else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
>> +int migrate_device_split_page(struct page *page)
>> +{
>> + return 0;
>> +}
>> +
>> static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>> unsigned long addr,
>> struct page *page,
>> @@ -943,6 +991,7 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
>> return 0;
>> }
>> #endif
>> +EXPORT_SYMBOL(migrate_device_split_page);
>>
>> static unsigned long migrate_vma_nr_pages(unsigned long *src)
>> {
>> --
>> 2.43.0
>
>
> Best Regards,
> Yan, Zi
Best Regards,
Yan, Zi
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-07 20:38 ` Zi Yan
@ 2026-01-07 21:15 ` Matthew Brost
2026-01-07 22:03 ` Zi Yan
0 siblings, 1 reply; 29+ messages in thread
From: Matthew Brost @ 2026-01-07 21:15 UTC (permalink / raw)
To: Zi Yan
Cc: Francois Dugast, intel-xe, dri-devel, Andrew Morton, Balbir Singh,
linux-mm, David Hildenbrand, Lorenzo Stoakes, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Lance Yang, Matthew Wilcox
On Wed, Jan 07, 2026 at 03:38:35PM -0500, Zi Yan wrote:
> On 7 Jan 2026, at 15:20, Zi Yan wrote:
>
> > +THP folks
>
> +willy, since he commented in another thread.
>
> >
> > On 16 Dec 2025, at 15:10, Francois Dugast wrote:
> >
> >> From: Matthew Brost <matthew.brost@intel.com>
> >>
> >> Introduce migrate_device_split_page() to split a device page into
> >> lower-order pages. Used when a folio allocated as higher-order is freed
> >> and later reallocated at a smaller order by the driver memory manager.
> >>
> >> Cc: Andrew Morton <akpm@linux-foundation.org>
> >> Cc: Balbir Singh <balbirs@nvidia.com>
> >> Cc: dri-devel@lists.freedesktop.org
> >> Cc: linux-mm@kvack.org
> >> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> >> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> >> ---
> >> include/linux/huge_mm.h | 3 +++
> >> include/linux/migrate.h | 1 +
> >> mm/huge_memory.c | 6 ++---
> >> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
> >> 4 files changed, 56 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >> index a4d9f964dfde..6ad8f359bc0d 100644
> >> --- a/include/linux/huge_mm.h
> >> +++ b/include/linux/huge_mm.h
> >> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> >> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
> >> unsigned int min_order_for_split(struct folio *folio);
> >> int split_folio_to_list(struct folio *folio, struct list_head *list);
> >> +int __split_unmapped_folio(struct folio *folio, int new_order,
> >> + struct page *split_at, struct xa_state *xas,
> >> + struct address_space *mapping, enum split_type split_type);
> >> int folio_check_splittable(struct folio *folio, unsigned int new_order,
> >> enum split_type split_type);
> >> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> >> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> >> index 26ca00c325d9..ec65e4fd5f88 100644
> >> --- a/include/linux/migrate.h
> >> +++ b/include/linux/migrate.h
> >> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
> >> unsigned long npages);
> >> void migrate_device_finalize(unsigned long *src_pfns,
> >> unsigned long *dst_pfns, unsigned long npages);
> >> +int migrate_device_split_page(struct page *page);
> >>
> >> #endif /* CONFIG_MIGRATION */
> >>
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 40cf59301c21..7ded35a3ecec 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> >> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
> >> * split but not to @new_order, the caller needs to check)
> >> */
> >> -static int __split_unmapped_folio(struct folio *folio, int new_order,
> >> - struct page *split_at, struct xa_state *xas,
> >> - struct address_space *mapping, enum split_type split_type)
> >> +int __split_unmapped_folio(struct folio *folio, int new_order,
> >> + struct page *split_at, struct xa_state *xas,
> >> + struct address_space *mapping, enum split_type split_type)
> >> {
> >> const bool is_anon = folio_test_anon(folio);
> >> int old_order = folio_order(folio);
> >> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> >> index 23379663b1e1..eb0f0e938947 100644
> >> --- a/mm/migrate_device.c
> >> +++ b/mm/migrate_device.c
> >> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
> >> EXPORT_SYMBOL(migrate_vma_setup);
> >>
> >> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> >> +/**
> >> + * migrate_device_split_page() - Split device page
> >> + * @page: Device page to split
> >> + *
> >> + * Splits a device page into smaller pages. Typically called when reallocating a
> >> + * folio to a smaller size. Inherently racy—only safe if the caller ensures
> >> + * mutual exclusion within the page's folio (i.e., no other threads are using
> >> + * pages within the folio). Expected to be called a free device page and
> >> + * restores all split out pages to a free state.
> >> + */
>
> Do you mind explaining why __split_unmapped_folio() is needed for a free device
> page? A free page is not supposed to be a large folio, at least from a core
> MM point of view. __split_unmapped_folio() is intended to work on large folios
> (or compound pages), even if the input folio has refcount == 0 (because it is
> frozen).
>
Well, then maybe this is a bug in core MM where the freed page is still
a THP. Let me explain the scenario and why this is needed from my POV.
Our VRAM allocator in Xe (and several other DRM drivers) is DRM buddy.
This is a shared pool between traditional DRM GEMs (buffer objects) and
SVM allocations (pages). It doesn’t have any view of the page backing—it
basically just hands back a pointer to VRAM space that we allocate from.
From that, if it’s an SVM allocation, we can derive the device pages.
What I see happening is: a 2M buddy allocation occurs, we make the
backing device pages a large folio, and sometime later the folio
refcount goes to zero and we free the buddy allocation. Later, the buddy
allocation is reused for a smaller allocation (e.g., 4K or 64K), but the
backing pages are still a large folio. Here is where we need to split
the folio into 4K pages so we can properly migrate the pages via the
migrate_vma_* calls. Also note: if you call zone_device_page_init with
an order of zero on a large device folio, that also blows up.
Open to other ideas here for how to handle this scenario.
> >> +int migrate_device_split_page(struct page *page)
> >> +{
> >> + struct folio *folio = page_folio(page);
> >> + struct dev_pagemap *pgmap = folio->pgmap;
> >> + struct page *unlock_page = folio_page(folio, 0);
> >> + unsigned int order = folio_order(folio), i;
> >> + int ret = 0;
> >> +
> >> + VM_BUG_ON_FOLIO(!order, folio);
> >> + VM_BUG_ON_FOLIO(!folio_is_device_private(folio), folio);
> >> + VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
>
> Please use VM_WARN_ON_FOLIO() instead to catch errors. There is no need to crash
> the kernel
>
Sure.
> >> +
> >> + folio_lock(folio);
> >> +
> >> + ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
> >> + if (ret) {
> >> + /*
> >> + * We can't fail here unless the caller doesn't know what they
> >> + * are doing.
> >> + */
> >> + VM_BUG_ON_FOLIO(ret, folio);
>
> Same here.
>
Will do.
Matt
> >> +
> >> + return ret;
> >> + }
> >> +
> >> + for (i = 0; i < 0x1 << order; ++i, ++unlock_page) {
> >> + page_folio(unlock_page)->pgmap = pgmap;
> >> + folio_unlock(page_folio(unlock_page));
> >> + }
> >> +
> >> + return 0;
> >> +}
> >> +
> >> /**
> >> * migrate_vma_insert_huge_pmd_page: Insert a huge folio into @migrate->vma->vm_mm
> >> * at @addr. folio is already allocated as a part of the migration process with
> >> @@ -927,6 +970,11 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
> >> return ret;
> >> }
> >> #else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
> >> +int migrate_device_split_page(struct page *page)
> >> +{
> >> + return 0;
> >> +}
> >> +
> >> static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
> >> unsigned long addr,
> >> struct page *page,
> >> @@ -943,6 +991,7 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
> >> return 0;
> >> }
> >> #endif
> >> +EXPORT_SYMBOL(migrate_device_split_page);
> >>
> >> static unsigned long migrate_vma_nr_pages(unsigned long *src)
> >> {
> >> --
> >> 2.43.0
> >
> >
> > Best Regards,
> > Yan, Zi
>
>
> Best Regards,
> Yan, Zi
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-07 21:15 ` Matthew Brost
@ 2026-01-07 22:03 ` Zi Yan
2026-01-08 0:56 ` Balbir Singh
0 siblings, 1 reply; 29+ messages in thread
From: Zi Yan @ 2026-01-07 22:03 UTC (permalink / raw)
To: Matthew Brost, Balbir Singh, Alistair Popple
Cc: Francois Dugast, intel-xe, dri-devel, Andrew Morton, linux-mm,
David Hildenbrand, Lorenzo Stoakes, Baolin Wang, Liam R. Howlett,
Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
Matthew Wilcox
On 7 Jan 2026, at 16:15, Matthew Brost wrote:
> On Wed, Jan 07, 2026 at 03:38:35PM -0500, Zi Yan wrote:
>> On 7 Jan 2026, at 15:20, Zi Yan wrote:
>>
>>> +THP folks
>>
>> +willy, since he commented in another thread.
>>
>>>
>>> On 16 Dec 2025, at 15:10, Francois Dugast wrote:
>>>
>>>> From: Matthew Brost <matthew.brost@intel.com>
>>>>
>>>> Introduce migrate_device_split_page() to split a device page into
>>>> lower-order pages. Used when a folio allocated as higher-order is freed
>>>> and later reallocated at a smaller order by the driver memory manager.
>>>>
>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>> Cc: Balbir Singh <balbirs@nvidia.com>
>>>> Cc: dri-devel@lists.freedesktop.org
>>>> Cc: linux-mm@kvack.org
>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
>>>> ---
>>>> include/linux/huge_mm.h | 3 +++
>>>> include/linux/migrate.h | 1 +
>>>> mm/huge_memory.c | 6 ++---
>>>> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
>>>> 4 files changed, 56 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>> index a4d9f964dfde..6ad8f359bc0d 100644
>>>> --- a/include/linux/huge_mm.h
>>>> +++ b/include/linux/huge_mm.h
>>>> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>>>> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
>>>> unsigned int min_order_for_split(struct folio *folio);
>>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>>>> + struct page *split_at, struct xa_state *xas,
>>>> + struct address_space *mapping, enum split_type split_type);
>>>> int folio_check_splittable(struct folio *folio, unsigned int new_order,
>>>> enum split_type split_type);
>>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>>>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>>>> index 26ca00c325d9..ec65e4fd5f88 100644
>>>> --- a/include/linux/migrate.h
>>>> +++ b/include/linux/migrate.h
>>>> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
>>>> unsigned long npages);
>>>> void migrate_device_finalize(unsigned long *src_pfns,
>>>> unsigned long *dst_pfns, unsigned long npages);
>>>> +int migrate_device_split_page(struct page *page);
>>>>
>>>> #endif /* CONFIG_MIGRATION */
>>>>
>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>> index 40cf59301c21..7ded35a3ecec 100644
>>>> --- a/mm/huge_memory.c
>>>> +++ b/mm/huge_memory.c
>>>> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>>>> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>>>> * split but not to @new_order, the caller needs to check)
>>>> */
>>>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
>>>> - struct page *split_at, struct xa_state *xas,
>>>> - struct address_space *mapping, enum split_type split_type)
>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>>>> + struct page *split_at, struct xa_state *xas,
>>>> + struct address_space *mapping, enum split_type split_type)
>>>> {
>>>> const bool is_anon = folio_test_anon(folio);
>>>> int old_order = folio_order(folio);
>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>>>> index 23379663b1e1..eb0f0e938947 100644
>>>> --- a/mm/migrate_device.c
>>>> +++ b/mm/migrate_device.c
>>>> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
>>>> EXPORT_SYMBOL(migrate_vma_setup);
>>>>
>>>> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>>>> +/**
>>>> + * migrate_device_split_page() - Split device page
>>>> + * @page: Device page to split
>>>> + *
>>>> + * Splits a device page into smaller pages. Typically called when reallocating a
>>>> + * folio to a smaller size. Inherently racy—only safe if the caller ensures
>>>> + * mutual exclusion within the page's folio (i.e., no other threads are using
>>>> + * pages within the folio). Expected to be called a free device page and
>>>> + * restores all split out pages to a free state.
>>>> + */
>>
>> Do you mind explaining why __split_unmapped_folio() is needed for a free device
>> page? A free page is not supposed to be a large folio, at least from a core
>> MM point of view. __split_unmapped_folio() is intended to work on large folios
>> (or compound pages), even if the input folio has refcount == 0 (because it is
>> frozen).
>>
>
> Well, then maybe this is a bug in core MM where the freed page is still
> a THP. Let me explain the scenario and why this is needed from my POV.
>
> Our VRAM allocator in Xe (and several other DRM drivers) is DRM buddy.
> This is a shared pool between traditional DRM GEMs (buffer objects) and
> SVM allocations (pages). It doesn’t have any view of the page backing—it
> basically just hands back a pointer to VRAM space that we allocate from.
> From that, if it’s an SVM allocation, we can derive the device pages.
>
> What I see happening is: a 2M buddy allocation occurs, we make the
> backing device pages a large folio, and sometime later the folio
> refcount goes to zero and we free the buddy allocation. Later, the buddy
> allocation is reused for a smaller allocation (e.g., 4K or 64K), but the
> backing pages are still a large folio. Here is where we need to split
I agree with you that it might be a bug in free_zone_device_folio(), based
on my understanding: zone_device_page_init() calls prep_compound_page()
for orders > 0, but free_zone_device_folio() never reverses the process.
Balbir and Alistair might be able to help here.
I cherry-picked the code from __free_frozen_pages() to reverse the process.
Can you give it a try to see if it solves the above issue? Thanks.
From 3aa03baa39b7e62ea079e826de6ed5aab3061e46 Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy@nvidia.com>
Date: Wed, 7 Jan 2026 16:49:52 -0500
Subject: [PATCH] mm/memremap: free device private folio fix
Content-Type: text/plain; charset="utf-8"
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/memremap.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/mm/memremap.c b/mm/memremap.c
index 63c6ab4fdf08..483666ff7271 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -475,6 +475,21 @@ void free_zone_device_folio(struct folio *folio)
pgmap->ops->folio_free(folio);
break;
}
+
+ if (nr > 1) {
+ struct page *head = folio_page(folio, 0);
+
+ head[1].flags.f &= ~PAGE_FLAGS_SECOND;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+ folio->_nr_pages = 0;
+#endif
+ for (i = 1; i < nr; i++) {
+ (head + i)->mapping = NULL;
+ clear_compound_head(head + i);
+ }
+ folio->mapping = NULL;
+ head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
+ }
}
void zone_device_page_init(struct page *page, unsigned int order)
--
2.51.0
> the folio into 4K pages so we can properly migrate the pages via the
> migrate_vma_* calls. Also note: if you call zone_device_page_init with
> an order of zero on a large device folio, that also blows up.
>
> Open to other ideas here for how to handle this scenario.
>
>>>> +int migrate_device_split_page(struct page *page)
>>>> +{
>>>> + struct folio *folio = page_folio(page);
>>>> + struct dev_pagemap *pgmap = folio->pgmap;
>>>> + struct page *unlock_page = folio_page(folio, 0);
>>>> + unsigned int order = folio_order(folio), i;
>>>> + int ret = 0;
>>>> +
>>>> + VM_BUG_ON_FOLIO(!order, folio);
>>>> + VM_BUG_ON_FOLIO(!folio_is_device_private(folio), folio);
>>>> + VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
>>
>> Please use VM_WARN_ON_FOLIO() instead to catch errors. There is no need to crash
>> the kernel
>>
>
> Sure.
>
>>>> +
>>>> + folio_lock(folio);
>>>> +
>>>> + ret = __split_unmapped_folio(folio, 0, page, NULL, NULL, SPLIT_TYPE_UNIFORM);
>>>> + if (ret) {
>>>> + /*
>>>> + * We can't fail here unless the caller doesn't know what they
>>>> + * are doing.
>>>> + */
>>>> + VM_BUG_ON_FOLIO(ret, folio);
>>
>> Same here.
>>
>
> Will do.
>
> Matt
>
>>>> +
>>>> + return ret;
>>>> + }
>>>> +
>>>> + for (i = 0; i < 0x1 << order; ++i, ++unlock_page) {
>>>> + page_folio(unlock_page)->pgmap = pgmap;
>>>> + folio_unlock(page_folio(unlock_page));
>>>> + }
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> /**
>>>> * migrate_vma_insert_huge_pmd_page: Insert a huge folio into @migrate->vma->vm_mm
>>>> * at @addr. folio is already allocated as a part of the migration process with
>>>> @@ -927,6 +970,11 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
>>>> return ret;
>>>> }
>>>> #else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
>>>> +int migrate_device_split_page(struct page *page)
>>>> +{
>>>> + return 0;
>>>> +}
>>>> +
>>>> static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>>>> unsigned long addr,
>>>> struct page *page,
>>>> @@ -943,6 +991,7 @@ static int migrate_vma_split_unmapped_folio(struct migrate_vma *migrate,
>>>> return 0;
>>>> }
>>>> #endif
>>>> +EXPORT_SYMBOL(migrate_device_split_page);
>>>>
>>>> static unsigned long migrate_vma_nr_pages(unsigned long *src)
>>>> {
>>>> --
>>>> 2.43.0
>>>
>>>
>>> Best Regards,
>>> Yan, Zi
>>
>>
>> Best Regards,
>> Yan, Zi
Best Regards,
Yan, Zi
^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-07 22:03 ` Zi Yan
@ 2026-01-08 0:56 ` Balbir Singh
2026-01-08 2:17 ` Matthew Brost
0 siblings, 1 reply; 29+ messages in thread
From: Balbir Singh @ 2026-01-08 0:56 UTC (permalink / raw)
To: Zi Yan, Matthew Brost, Alistair Popple
Cc: Francois Dugast, intel-xe, dri-devel, Andrew Morton, linux-mm,
David Hildenbrand, Lorenzo Stoakes, Baolin Wang, Liam R. Howlett,
Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
Matthew Wilcox
On 1/8/26 08:03, Zi Yan wrote:
> On 7 Jan 2026, at 16:15, Matthew Brost wrote:
>
>> On Wed, Jan 07, 2026 at 03:38:35PM -0500, Zi Yan wrote:
>>> On 7 Jan 2026, at 15:20, Zi Yan wrote:
>>>
>>>> +THP folks
>>>
>>> +willy, since he commented in another thread.
>>>
>>>>
>>>> On 16 Dec 2025, at 15:10, Francois Dugast wrote:
>>>>
>>>>> From: Matthew Brost <matthew.brost@intel.com>
>>>>>
>>>>> Introduce migrate_device_split_page() to split a device page into
>>>>> lower-order pages. Used when a folio allocated as higher-order is freed
>>>>> and later reallocated at a smaller order by the driver memory manager.
>>>>>
>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>>> Cc: Balbir Singh <balbirs@nvidia.com>
>>>>> Cc: dri-devel@lists.freedesktop.org
>>>>> Cc: linux-mm@kvack.org
>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
>>>>> ---
>>>>> include/linux/huge_mm.h | 3 +++
>>>>> include/linux/migrate.h | 1 +
>>>>> mm/huge_memory.c | 6 ++---
>>>>> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
>>>>> 4 files changed, 56 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>>> index a4d9f964dfde..6ad8f359bc0d 100644
>>>>> --- a/include/linux/huge_mm.h
>>>>> +++ b/include/linux/huge_mm.h
>>>>> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>>>>> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
>>>>> unsigned int min_order_for_split(struct folio *folio);
>>>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>> + struct page *split_at, struct xa_state *xas,
>>>>> + struct address_space *mapping, enum split_type split_type);
>>>>> int folio_check_splittable(struct folio *folio, unsigned int new_order,
>>>>> enum split_type split_type);
>>>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>>>>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>>>>> index 26ca00c325d9..ec65e4fd5f88 100644
>>>>> --- a/include/linux/migrate.h
>>>>> +++ b/include/linux/migrate.h
>>>>> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
>>>>> unsigned long npages);
>>>>> void migrate_device_finalize(unsigned long *src_pfns,
>>>>> unsigned long *dst_pfns, unsigned long npages);
>>>>> +int migrate_device_split_page(struct page *page);
>>>>>
>>>>> #endif /* CONFIG_MIGRATION */
>>>>>
>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>> index 40cf59301c21..7ded35a3ecec 100644
>>>>> --- a/mm/huge_memory.c
>>>>> +++ b/mm/huge_memory.c
>>>>> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>>>>> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>>>>> * split but not to @new_order, the caller needs to check)
>>>>> */
>>>>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>> - struct page *split_at, struct xa_state *xas,
>>>>> - struct address_space *mapping, enum split_type split_type)
>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>> + struct page *split_at, struct xa_state *xas,
>>>>> + struct address_space *mapping, enum split_type split_type)
>>>>> {
>>>>> const bool is_anon = folio_test_anon(folio);
>>>>> int old_order = folio_order(folio);
>>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>>>>> index 23379663b1e1..eb0f0e938947 100644
>>>>> --- a/mm/migrate_device.c
>>>>> +++ b/mm/migrate_device.c
>>>>> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
>>>>> EXPORT_SYMBOL(migrate_vma_setup);
>>>>>
>>>>> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>>>>> +/**
>>>>> + * migrate_device_split_page() - Split device page
>>>>> + * @page: Device page to split
>>>>> + *
>>>>> + * Splits a device page into smaller pages. Typically called when reallocating a
>>>>> + * folio to a smaller size. Inherently racy—only safe if the caller ensures
>>>>> + * mutual exclusion within the page's folio (i.e., no other threads are using
>>>> + * pages within the folio). Expected to be called on a free device page and
>>>> + * restores all split-out pages to a free state.
>>>>> + */
>>>
>>> Do you mind explaining why __split_unmapped_folio() is needed for a free device
>>> page? A free page is not supposed to be a large folio, at least from a core
>>> MM point of view. __split_unmapped_folio() is intended to work on large folios
>>> (or compound pages), even if the input folio has refcount == 0 (because it is
>>> frozen).
>>>
>>
>> Well, then maybe this is a bug in core MM where the freed page is still
>> a THP. Let me explain the scenario and why this is needed from my POV.
>>
>> Our VRAM allocator in Xe (and several other DRM drivers) is DRM buddy.
>> This is a shared pool between traditional DRM GEMs (buffer objects) and
>> SVM allocations (pages). It doesn’t have any view of the page backing—it
>> basically just hands back a pointer to VRAM space that we allocate from.
>> From that, if it’s an SVM allocation, we can derive the device pages.
>>
>> What I see happening is: a 2M buddy allocation occurs, we make the
>> backing device pages a large folio, and sometime later the folio
>> refcount goes to zero and we free the buddy allocation. Later, the buddy
>> allocation is reused for a smaller allocation (e.g., 4K or 64K), but the
>> backing pages are still a large folio. Here is where we need to split
>
> I agree with you that it might be a bug in free_zone_device_folio() based
> on my understanding. Since zone_device_page_init() calls prep_compound_page()
> for >0 orders, but free_zone_device_folio() never reverse the process.
>
> Balbir and Alistair might be able to help here.
I agree it's an API limitation
>
> I cherry picked the code from __free_frozen_pages() to reverse the process.
> Can you give it a try to see if it solve the above issue? Thanks.
>
> From 3aa03baa39b7e62ea079e826de6ed5aab3061e46 Mon Sep 17 00:00:00 2001
> From: Zi Yan <ziy@nvidia.com>
> Date: Wed, 7 Jan 2026 16:49:52 -0500
> Subject: [PATCH] mm/memremap: free device private folio fix
> Content-Type: text/plain; charset="utf-8"
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
> mm/memremap.c | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 63c6ab4fdf08..483666ff7271 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -475,6 +475,21 @@ void free_zone_device_folio(struct folio *folio)
> pgmap->ops->folio_free(folio);
> break;
> }
> +
> + if (nr > 1) {
> + struct page *head = folio_page(folio, 0);
> +
> + head[1].flags.f &= ~PAGE_FLAGS_SECOND;
> +#ifdef NR_PAGES_IN_LARGE_FOLIO
> + folio->_nr_pages = 0;
> +#endif
> + for (i = 1; i < nr; i++) {
> + (head + i)->mapping = NULL;
> + clear_compound_head(head + i);
I see that you're skipping the checks in free_tail_page_prepare()? IIUC, we should be able
to invoke it even for zone device private pages.
> + }
> + folio->mapping = NULL;
This is already done in free_zone_device_folio()
> + head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
I don't think this is required for zone device private folios, but I suppose it
keeps the code generic
> + }
> }
>
> void zone_device_page_init(struct page *page, unsigned int order)
Otherwise, it seems like the right way to solve the issue.
Balbir
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-08 0:56 ` Balbir Singh
@ 2026-01-08 2:17 ` Matthew Brost
2026-01-08 2:53 ` Zi Yan
0 siblings, 1 reply; 29+ messages in thread
From: Matthew Brost @ 2026-01-08 2:17 UTC (permalink / raw)
To: Balbir Singh
Cc: Zi Yan, Alistair Popple, Francois Dugast, intel-xe, dri-devel,
Andrew Morton, linux-mm, David Hildenbrand, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox
On Thu, Jan 08, 2026 at 11:56:03AM +1100, Balbir Singh wrote:
> On 1/8/26 08:03, Zi Yan wrote:
> > On 7 Jan 2026, at 16:15, Matthew Brost wrote:
> >
> >> On Wed, Jan 07, 2026 at 03:38:35PM -0500, Zi Yan wrote:
> >>> On 7 Jan 2026, at 15:20, Zi Yan wrote:
> >>>
> >>>> +THP folks
> >>>
> >>> +willy, since he commented in another thread.
> >>>
> >>>>
> >>>> On 16 Dec 2025, at 15:10, Francois Dugast wrote:
> >>>>
> >>>>> From: Matthew Brost <matthew.brost@intel.com>
> >>>>>
> >>>>> Introduce migrate_device_split_page() to split a device page into
> >>>>> lower-order pages. Used when a folio allocated as higher-order is freed
> >>>>> and later reallocated at a smaller order by the driver memory manager.
> >>>>>
> >>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
> >>>>> Cc: Balbir Singh <balbirs@nvidia.com>
> >>>>> Cc: dri-devel@lists.freedesktop.org
> >>>>> Cc: linux-mm@kvack.org
> >>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> >>>>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> >>>>> ---
> >>>>> include/linux/huge_mm.h | 3 +++
> >>>>> include/linux/migrate.h | 1 +
> >>>>> mm/huge_memory.c | 6 ++---
> >>>>> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
> >>>>> 4 files changed, 56 insertions(+), 3 deletions(-)
> >>>>>
> >>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >>>>> index a4d9f964dfde..6ad8f359bc0d 100644
> >>>>> --- a/include/linux/huge_mm.h
> >>>>> +++ b/include/linux/huge_mm.h
> >>>>> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> >>>>> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
> >>>>> unsigned int min_order_for_split(struct folio *folio);
> >>>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
> >>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
> >>>>> + struct page *split_at, struct xa_state *xas,
> >>>>> + struct address_space *mapping, enum split_type split_type);
> >>>>> int folio_check_splittable(struct folio *folio, unsigned int new_order,
> >>>>> enum split_type split_type);
> >>>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> >>>>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> >>>>> index 26ca00c325d9..ec65e4fd5f88 100644
> >>>>> --- a/include/linux/migrate.h
> >>>>> +++ b/include/linux/migrate.h
> >>>>> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
> >>>>> unsigned long npages);
> >>>>> void migrate_device_finalize(unsigned long *src_pfns,
> >>>>> unsigned long *dst_pfns, unsigned long npages);
> >>>>> +int migrate_device_split_page(struct page *page);
> >>>>>
> >>>>> #endif /* CONFIG_MIGRATION */
> >>>>>
> >>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>>>> index 40cf59301c21..7ded35a3ecec 100644
> >>>>> --- a/mm/huge_memory.c
> >>>>> +++ b/mm/huge_memory.c
> >>>>> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> >>>>> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
> >>>>> * split but not to @new_order, the caller needs to check)
> >>>>> */
> >>>>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
> >>>>> - struct page *split_at, struct xa_state *xas,
> >>>>> - struct address_space *mapping, enum split_type split_type)
> >>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
> >>>>> + struct page *split_at, struct xa_state *xas,
> >>>>> + struct address_space *mapping, enum split_type split_type)
> >>>>> {
> >>>>> const bool is_anon = folio_test_anon(folio);
> >>>>> int old_order = folio_order(folio);
> >>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> >>>>> index 23379663b1e1..eb0f0e938947 100644
> >>>>> --- a/mm/migrate_device.c
> >>>>> +++ b/mm/migrate_device.c
> >>>>> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
> >>>>> EXPORT_SYMBOL(migrate_vma_setup);
> >>>>>
> >>>>> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> >>>>> +/**
> >>>>> + * migrate_device_split_page() - Split device page
> >>>>> + * @page: Device page to split
> >>>>> + *
> >>>>> + * Splits a device page into smaller pages. Typically called when reallocating a
> >>>>> + * folio to a smaller size. Inherently racy—only safe if the caller ensures
> >>>>> + * mutual exclusion within the page's folio (i.e., no other threads are using
> >>>>> + * pages within the folio). Expected to be called a free device page and
> >>>>> + * restores all split out pages to a free state.
> >>>>> + */
> >>>
> >>> Do you mind explaining why __split_unmapped_folio() is needed for a free device
> >>> page? A free page is not supposed to be a large folio, at least from a core
> >>> MM point of view. __split_unmapped_folio() is intended to work on large folios
> >>> (or compound pages), even if the input folio has refcount == 0 (because it is
> >>> frozen).
> >>>
> >>
> >> Well, then maybe this is a bug in core MM where the freed page is still
> >> a THP. Let me explain the scenario and why this is needed from my POV.
> >>
> >> Our VRAM allocator in Xe (and several other DRM drivers) is DRM buddy.
> >> This is a shared pool between traditional DRM GEMs (buffer objects) and
> >> SVM allocations (pages). It doesn’t have any view of the page backing—it
> >> basically just hands back a pointer to VRAM space that we allocate from.
> >> From that, if it’s an SVM allocation, we can derive the device pages.
> >>
> >> What I see happening is: a 2M buddy allocation occurs, we make the
> >> backing device pages a large folio, and sometime later the folio
> >> refcount goes to zero and we free the buddy allocation. Later, the buddy
> >> allocation is reused for a smaller allocation (e.g., 4K or 64K), but the
> >> backing pages are still a large folio. Here is where we need to split
> >
> > I agree with you that it might be a bug in free_zone_device_folio() based
> > on my understanding. Since zone_device_page_init() calls prep_compound_page()
> > for >0 orders, but free_zone_device_folio() never reverse the process.
> >
> > Balbir and Alistair might be able to help here.
>
> I agree it's an API limitation
>
> >
> > I cherry picked the code from __free_frozen_pages() to reverse the process.
> > Can you give it a try to see if it solve the above issue? Thanks.
> >
> > From 3aa03baa39b7e62ea079e826de6ed5aab3061e46 Mon Sep 17 00:00:00 2001
> > From: Zi Yan <ziy@nvidia.com>
> > Date: Wed, 7 Jan 2026 16:49:52 -0500
> > Subject: [PATCH] mm/memremap: free device private folio fix
> > Content-Type: text/plain; charset="utf-8"
> >
> > Signed-off-by: Zi Yan <ziy@nvidia.com>
> > ---
> > mm/memremap.c | 15 +++++++++++++++
> > 1 file changed, 15 insertions(+)
> >
> > diff --git a/mm/memremap.c b/mm/memremap.c
> > index 63c6ab4fdf08..483666ff7271 100644
> > --- a/mm/memremap.c
> > +++ b/mm/memremap.c
> > @@ -475,6 +475,21 @@ void free_zone_device_folio(struct folio *folio)
> > pgmap->ops->folio_free(folio);
> > break;
> > }
> > +
> > + if (nr > 1) {
> > + struct page *head = folio_page(folio, 0);
> > +
> > + head[1].flags.f &= ~PAGE_FLAGS_SECOND;
> > +#ifdef NR_PAGES_IN_LARGE_FOLIO
> > + folio->_nr_pages = 0;
> > +#endif
> > + for (i = 1; i < nr; i++) {
> > + (head + i)->mapping = NULL;
> > + clear_compound_head(head + i);
>
> I see that your skipping the checks in free_page_tail_prepare()? IIUC, we should be able
> to invoke it even for zone device private pages
>
> > + }
> > + folio->mapping = NULL;
>
> This is already done in free_zone_device_folio()
>
> > + head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>
> I don't think this is required for zone device private folios, but I suppose it
> keeps the code generic
>
Well, the above code doesn’t work, but I think it’s the right idea.
clear_compound_head() zeroes a word that aliases pgmap, which we don’t
want to be NULL. I believe the individual pages likely need their flags
cleared (?), and this step must be done before calling folio_free() and
include a barrier, as the page can be immediately reallocated.
So here’s what I came up with, and it seems to work (for Xe, GPU SVM):
mm/memremap.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/mm/memremap.c b/mm/memremap.c
index 63c6ab4fdf08..ac20abb6a441 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -448,6 +448,27 @@ void free_zone_device_folio(struct folio *folio)
pgmap->type != MEMORY_DEVICE_GENERIC)
folio->mapping = NULL;
+ if (nr > 1) {
+ struct page *head = folio_page(folio, 0);
+
+ head[1].flags.f &= ~PAGE_FLAGS_SECOND;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+ folio->_nr_pages = 0;
+#endif
+ for (i = 1; i < nr; i++) {
+ struct folio *new_folio = (struct folio *)(head + i);
+
+ (head + i)->mapping = NULL;
+ (head + i)->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
+
+ /* Overwrite compound_head with pgmap */
+ new_folio->pgmap = pgmap;
+ }
+
+ head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
+ smp_wmb(); /* Changes must be visible before freeing folio */
+ }
+
switch (pgmap->type) {
case MEMORY_DEVICE_PRIVATE:
case MEMORY_DEVICE_COHERENT:
> > + }
> > }
> >
> > void zone_device_page_init(struct page *page, unsigned int order)
>
>
> Otherwise, it seems like the right way to solve the issue.
>
My question is: why isn’t Nouveau hitting this issue, or your Nvidia
out-of-tree driver? (Lack of testing? Xe's test suite coverage is quite
good at finding corner cases.)
Also, will this change in behavior break either of those drivers?
Matt
> Balbir
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-08 2:17 ` Matthew Brost
@ 2026-01-08 2:53 ` Zi Yan
2026-01-08 3:14 ` Alistair Popple
0 siblings, 1 reply; 29+ messages in thread
From: Zi Yan @ 2026-01-08 2:53 UTC (permalink / raw)
To: Matthew Brost, Balbir Singh
Cc: Alistair Popple, Francois Dugast, intel-xe, dri-devel,
Andrew Morton, linux-mm, David Hildenbrand, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox
On 7 Jan 2026, at 21:17, Matthew Brost wrote:
> On Thu, Jan 08, 2026 at 11:56:03AM +1100, Balbir Singh wrote:
>> On 1/8/26 08:03, Zi Yan wrote:
>>> On 7 Jan 2026, at 16:15, Matthew Brost wrote:
>>>
>>>> On Wed, Jan 07, 2026 at 03:38:35PM -0500, Zi Yan wrote:
>>>>> On 7 Jan 2026, at 15:20, Zi Yan wrote:
>>>>>
>>>>>> +THP folks
>>>>>
>>>>> +willy, since he commented in another thread.
>>>>>
>>>>>>
>>>>>> On 16 Dec 2025, at 15:10, Francois Dugast wrote:
>>>>>>
>>>>>>> From: Matthew Brost <matthew.brost@intel.com>
>>>>>>>
>>>>>>> Introduce migrate_device_split_page() to split a device page into
>>>>>>> lower-order pages. Used when a folio allocated as higher-order is freed
>>>>>>> and later reallocated at a smaller order by the driver memory manager.
>>>>>>>
>>>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>>>>> Cc: Balbir Singh <balbirs@nvidia.com>
>>>>>>> Cc: dri-devel@lists.freedesktop.org
>>>>>>> Cc: linux-mm@kvack.org
>>>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>>>>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
>>>>>>> ---
>>>>>>> include/linux/huge_mm.h | 3 +++
>>>>>>> include/linux/migrate.h | 1 +
>>>>>>> mm/huge_memory.c | 6 ++---
>>>>>>> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
>>>>>>> 4 files changed, 56 insertions(+), 3 deletions(-)
>>>>>>>
>>>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>>>>> index a4d9f964dfde..6ad8f359bc0d 100644
>>>>>>> --- a/include/linux/huge_mm.h
>>>>>>> +++ b/include/linux/huge_mm.h
>>>>>>> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>>>>>>> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
>>>>>>> unsigned int min_order_for_split(struct folio *folio);
>>>>>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>>>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>>>> + struct page *split_at, struct xa_state *xas,
>>>>>>> + struct address_space *mapping, enum split_type split_type);
>>>>>>> int folio_check_splittable(struct folio *folio, unsigned int new_order,
>>>>>>> enum split_type split_type);
>>>>>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>>>>>>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>>>>>>> index 26ca00c325d9..ec65e4fd5f88 100644
>>>>>>> --- a/include/linux/migrate.h
>>>>>>> +++ b/include/linux/migrate.h
>>>>>>> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
>>>>>>> unsigned long npages);
>>>>>>> void migrate_device_finalize(unsigned long *src_pfns,
>>>>>>> unsigned long *dst_pfns, unsigned long npages);
>>>>>>> +int migrate_device_split_page(struct page *page);
>>>>>>>
>>>>>>> #endif /* CONFIG_MIGRATION */
>>>>>>>
>>>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>>>> index 40cf59301c21..7ded35a3ecec 100644
>>>>>>> --- a/mm/huge_memory.c
>>>>>>> +++ b/mm/huge_memory.c
>>>>>>> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>>>>>>> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>>>>>>> * split but not to @new_order, the caller needs to check)
>>>>>>> */
>>>>>>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>>>> - struct page *split_at, struct xa_state *xas,
>>>>>>> - struct address_space *mapping, enum split_type split_type)
>>>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>>>> + struct page *split_at, struct xa_state *xas,
>>>>>>> + struct address_space *mapping, enum split_type split_type)
>>>>>>> {
>>>>>>> const bool is_anon = folio_test_anon(folio);
>>>>>>> int old_order = folio_order(folio);
>>>>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>>>>>>> index 23379663b1e1..eb0f0e938947 100644
>>>>>>> --- a/mm/migrate_device.c
>>>>>>> +++ b/mm/migrate_device.c
>>>>>>> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
>>>>>>> EXPORT_SYMBOL(migrate_vma_setup);
>>>>>>>
>>>>>>> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>>>>>>> +/**
>>>>>>> + * migrate_device_split_page() - Split device page
>>>>>>> + * @page: Device page to split
>>>>>>> + *
>>>>>>> + * Splits a device page into smaller pages. Typically called when reallocating a
>>>>>>> + * folio to a smaller size. Inherently racy—only safe if the caller ensures
>>>>>>> + * mutual exclusion within the page's folio (i.e., no other threads are using
>>>>>>> + * pages within the folio). Expected to be called a free device page and
>>>>>>> + * restores all split out pages to a free state.
>>>>>>> + */
>>>>>
>>>>> Do you mind explaining why __split_unmapped_folio() is needed for a free device
>>>>> page? A free page is not supposed to be a large folio, at least from a core
>>>>> MM point of view. __split_unmapped_folio() is intended to work on large folios
>>>>> (or compound pages), even if the input folio has refcount == 0 (because it is
>>>>> frozen).
>>>>>
>>>>
>>>> Well, then maybe this is a bug in core MM where the freed page is still
>>>> a THP. Let me explain the scenario and why this is needed from my POV.
>>>>
>>>> Our VRAM allocator in Xe (and several other DRM drivers) is DRM buddy.
>>>> This is a shared pool between traditional DRM GEMs (buffer objects) and
>>>> SVM allocations (pages). It doesn’t have any view of the page backing—it
>>>> basically just hands back a pointer to VRAM space that we allocate from.
>>>> From that, if it’s an SVM allocation, we can derive the device pages.
>>>>
>>>> What I see happening is: a 2M buddy allocation occurs, we make the
>>>> backing device pages a large folio, and sometime later the folio
>>>> refcount goes to zero and we free the buddy allocation. Later, the buddy
>>>> allocation is reused for a smaller allocation (e.g., 4K or 64K), but the
>>>> backing pages are still a large folio. Here is where we need to split
>>>
>>> I agree with you that it might be a bug in free_zone_device_folio() based
>>> on my understanding. Since zone_device_page_init() calls prep_compound_page()
>>> for >0 orders, but free_zone_device_folio() never reverse the process.
>>>
>>> Balbir and Alistair might be able to help here.
>>
>> I agree it's an API limitation
I am not sure. If free_zone_device_folio() does not get rid of large folio
metadata, there is no guarantee that a freed large device private folio will
be reallocated as a large device private folio. And when mTHP support is
added, the folio order might change too. That can cause issues when
compound_head() is called on a tail page of a previously large folio, since
compound_head() will return the old head page instead of the tail page itself.
>>
>>>
>>> I cherry picked the code from __free_frozen_pages() to reverse the process.
>>> Can you give it a try to see if it solve the above issue? Thanks.
>>>
>>> From 3aa03baa39b7e62ea079e826de6ed5aab3061e46 Mon Sep 17 00:00:00 2001
>>> From: Zi Yan <ziy@nvidia.com>
>>> Date: Wed, 7 Jan 2026 16:49:52 -0500
>>> Subject: [PATCH] mm/memremap: free device private folio fix
>>> Content-Type: text/plain; charset="utf-8"
>>>
>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>> ---
>>> mm/memremap.c | 15 +++++++++++++++
>>> 1 file changed, 15 insertions(+)
>>>
>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>> index 63c6ab4fdf08..483666ff7271 100644
>>> --- a/mm/memremap.c
>>> +++ b/mm/memremap.c
>>> @@ -475,6 +475,21 @@ void free_zone_device_folio(struct folio *folio)
>>> pgmap->ops->folio_free(folio);
>>> break;
>>> }
>>> +
>>> + if (nr > 1) {
>>> + struct page *head = folio_page(folio, 0);
>>> +
>>> + head[1].flags.f &= ~PAGE_FLAGS_SECOND;
>>> +#ifdef NR_PAGES_IN_LARGE_FOLIO
>>> + folio->_nr_pages = 0;
>>> +#endif
>>> + for (i = 1; i < nr; i++) {
>>> + (head + i)->mapping = NULL;
>>> + clear_compound_head(head + i);
>>
>> I see that your skipping the checks in free_page_tail_prepare()? IIUC, we should be able
>> to invoke it even for zone device private pages
I am not sure which parts of the compound page metadata are also used by device private folios.
Yes, it is better to add the right checks.
>>
>>> + }
>>> + folio->mapping = NULL;
>>
>> This is already done in free_zone_device_folio()
>>
>>> + head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>>
>> I don't think this is required for zone device private folios, but I suppose it
>> keeps the code generic
>>
>
> Well, the above code doesn’t work, but I think it’s the right idea.
> clear_compound_head aliases to pgmap, which we don’t want to be NULL. I
Thank you for pointing it out. I am not familiar with device private page code.
> believe the individual pages likely need their flags cleared (?), and
Yes, I missed the tail page flag clearing part.
> this step must be done before calling folio_free and include a barrier,
> as the page can be immediately reallocated.
>
> So here’s what I came up with, and it seems to work (for Xe, GPU SVM):
>
> mm/memremap.c | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 63c6ab4fdf08..ac20abb6a441 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -448,6 +448,27 @@ void free_zone_device_folio(struct folio *folio)
> pgmap->type != MEMORY_DEVICE_GENERIC)
> folio->mapping = NULL;
>
> + if (nr > 1) {
> + struct page *head = folio_page(folio, 0);
> +
> + head[1].flags.f &= ~PAGE_FLAGS_SECOND;
> +#ifdef NR_PAGES_IN_LARGE_FOLIO
> + folio->_nr_pages = 0;
> +#endif
> + for (i = 1; i < nr; i++) {
> + struct folio *new_folio = (struct folio *)(head + i);
> +
> + (head + i)->mapping = NULL;
> + (head + i)->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
> +
> + /* Overwrite compound_head with pgmap */
> + new_folio->pgmap = pgmap;
> + }
> +
> + head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
> + smp_wmb(); /* Changes must be visible before freeing folio */
> + }
> +
> switch (pgmap->type) {
> case MEMORY_DEVICE_PRIVATE:
> case MEMORY_DEVICE_COHERENT:
>
It looks good to me, but I am very likely missing details on device private
pages. Like Balbir pointed out above, for tail pages, calling
free_tail_page_prepare() might be better to get the same sanity checks as a
normal large folio, although you will need to set ->pgmap after it.
It is better to send it as a proper patch and get reviews from other
MM folks.
>>> + }
>>> }
>>>
>>> void zone_device_page_init(struct page *page, unsigned int order)
>>
>>
>> Otherwise, it seems like the right way to solve the issue.
>>
>
> My question is: why isn’t Nouveau hitting this issue, or your Nvidia
> out-of-tree driver? (Lack of testing? Xe's test suite coverage is quite
> good at finding corners.)
>
> Also, will this change in behavior break either of those drivers?
>
> Matt
>
>> Balbir
Best Regards,
Yan, Zi
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-08 2:53 ` Zi Yan
@ 2026-01-08 3:14 ` Alistair Popple
2026-01-08 3:42 ` Matthew Brost
0 siblings, 1 reply; 29+ messages in thread
From: Alistair Popple @ 2026-01-08 3:14 UTC (permalink / raw)
To: Zi Yan
Cc: Matthew Brost, Balbir Singh, Francois Dugast, intel-xe, dri-devel,
Andrew Morton, linux-mm, David Hildenbrand, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox
On 2026-01-08 at 13:53 +1100, Zi Yan <ziy@nvidia.com> wrote...
> On 7 Jan 2026, at 21:17, Matthew Brost wrote:
>
> > On Thu, Jan 08, 2026 at 11:56:03AM +1100, Balbir Singh wrote:
> >> On 1/8/26 08:03, Zi Yan wrote:
> >>> On 7 Jan 2026, at 16:15, Matthew Brost wrote:
> >>>
> >>>> On Wed, Jan 07, 2026 at 03:38:35PM -0500, Zi Yan wrote:
> >>>>> On 7 Jan 2026, at 15:20, Zi Yan wrote:
> >>>>>
> >>>>>> +THP folks
> >>>>>
> >>>>> +willy, since he commented in another thread.
> >>>>>
> >>>>>>
> >>>>>> On 16 Dec 2025, at 15:10, Francois Dugast wrote:
> >>>>>>
> >>>>>>> From: Matthew Brost <matthew.brost@intel.com>
> >>>>>>>
> >>>>>>> Introduce migrate_device_split_page() to split a device page into
> >>>>>>> lower-order pages. Used when a folio allocated as higher-order is freed
> >>>>>>> and later reallocated at a smaller order by the driver memory manager.
> >>>>>>>
> >>>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
> >>>>>>> Cc: Balbir Singh <balbirs@nvidia.com>
> >>>>>>> Cc: dri-devel@lists.freedesktop.org
> >>>>>>> Cc: linux-mm@kvack.org
> >>>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> >>>>>>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> >>>>>>> ---
> >>>>>>> include/linux/huge_mm.h | 3 +++
> >>>>>>> include/linux/migrate.h | 1 +
> >>>>>>> mm/huge_memory.c | 6 ++---
> >>>>>>> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
> >>>>>>> 4 files changed, 56 insertions(+), 3 deletions(-)
> >>>>>>>
> >>>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >>>>>>> index a4d9f964dfde..6ad8f359bc0d 100644
> >>>>>>> --- a/include/linux/huge_mm.h
> >>>>>>> +++ b/include/linux/huge_mm.h
> >>>>>>> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> >>>>>>> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
> >>>>>>> unsigned int min_order_for_split(struct folio *folio);
> >>>>>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
> >>>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
> >>>>>>> + struct page *split_at, struct xa_state *xas,
> >>>>>>> + struct address_space *mapping, enum split_type split_type);
> >>>>>>> int folio_check_splittable(struct folio *folio, unsigned int new_order,
> >>>>>>> enum split_type split_type);
> >>>>>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> >>>>>>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> >>>>>>> index 26ca00c325d9..ec65e4fd5f88 100644
> >>>>>>> --- a/include/linux/migrate.h
> >>>>>>> +++ b/include/linux/migrate.h
> >>>>>>> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
> >>>>>>> unsigned long npages);
> >>>>>>> void migrate_device_finalize(unsigned long *src_pfns,
> >>>>>>> unsigned long *dst_pfns, unsigned long npages);
> >>>>>>> +int migrate_device_split_page(struct page *page);
> >>>>>>>
> >>>>>>> #endif /* CONFIG_MIGRATION */
> >>>>>>>
> >>>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>>>>>> index 40cf59301c21..7ded35a3ecec 100644
> >>>>>>> --- a/mm/huge_memory.c
> >>>>>>> +++ b/mm/huge_memory.c
> >>>>>>> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> >>>>>>> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
> >>>>>>> * split but not to @new_order, the caller needs to check)
> >>>>>>> */
> >>>>>>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
> >>>>>>> - struct page *split_at, struct xa_state *xas,
> >>>>>>> - struct address_space *mapping, enum split_type split_type)
> >>>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
> >>>>>>> + struct page *split_at, struct xa_state *xas,
> >>>>>>> + struct address_space *mapping, enum split_type split_type)
> >>>>>>> {
> >>>>>>> const bool is_anon = folio_test_anon(folio);
> >>>>>>> int old_order = folio_order(folio);
> >>>>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> >>>>>>> index 23379663b1e1..eb0f0e938947 100644
> >>>>>>> --- a/mm/migrate_device.c
> >>>>>>> +++ b/mm/migrate_device.c
> >>>>>>> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
> >>>>>>> EXPORT_SYMBOL(migrate_vma_setup);
> >>>>>>>
> >>>>>>> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> >>>>>>> +/**
> >>>>>>> + * migrate_device_split_page() - Split device page
> >>>>>>> + * @page: Device page to split
> >>>>>>> + *
> >>>>>>> + * Splits a device page into smaller pages. Typically called when reallocating a
> >>>>>>> + * folio to a smaller size. Inherently racy—only safe if the caller ensures
> >>>>>>> + * mutual exclusion within the page's folio (i.e., no other threads are using
> >>>>>>> + * pages within the folio). Expected to be called on a free device page and
> >>>>>>> + * restores all split out pages to a free state.
> >>>>>>> + */
> >>>>>
> >>>>> Do you mind explaining why __split_unmapped_folio() is needed for a free device
> >>>>> page? A free page is not supposed to be a large folio, at least from a core
> >>>>> MM point of view. __split_unmapped_folio() is intended to work on large folios
> >>>>> (or compound pages), even if the input folio has refcount == 0 (because it is
> >>>>> frozen).
> >>>>>
> >>>>
> >>>> Well, then maybe this is a bug in core MM where the freed page is still
> >>>> a THP. Let me explain the scenario and why this is needed from my POV.
> >>>>
> >>>> Our VRAM allocator in Xe (and several other DRM drivers) is DRM buddy.
> >>>> This is a shared pool between traditional DRM GEMs (buffer objects) and
> >>>> SVM allocations (pages). It doesn’t have any view of the page backing—it
> >>>> basically just hands back a pointer to VRAM space that we allocate from.
> >>>> From that, if it’s an SVM allocation, we can derive the device pages.
> >>>>
> >>>> What I see happening is: a 2M buddy allocation occurs, we make the
> >>>> backing device pages a large folio, and sometime later the folio
> >>>> refcount goes to zero and we free the buddy allocation. Later, the buddy
> >>>> allocation is reused for a smaller allocation (e.g., 4K or 64K), but the
> >>>> backing pages are still a large folio. Here is where we need to split
> >>>
> >>> I agree with you that it might be a bug in free_zone_device_folio() based
> >>> on my understanding. Since zone_device_page_init() calls prep_compound_page()
> >>> for >0 orders, but free_zone_device_folio() never reverses the process.
> >>>
> >>> Balbir and Alistair might be able to help here.
Just catching up after the Christmas break.
> >>
> >> I agree it's an API limitation
>
> I am not sure. If free_zone_device_folio() does not get rid of large folio
> metadata, there is no guarantee that a freed large device private folio will
> be reallocated as a large device private folio. And when mTHP support is
> added, the folio order might change too. That can cause issues when
> compound_head() is called on a tail page of a previously large folio, since
> compound_head() will return the old head page instead of the tail page itself.
I agree that freeing the device folio should get rid of the large folio. That
would also keep it consistent with what we do for FS DAX for example.
> >>
> >>>
> >>> I cherry picked the code from __free_frozen_pages() to reverse the process.
> >>> Can you give it a try to see if it solves the above issue? Thanks.
It would be nice if this could be a common helper for freeing compound
ZONE_DEVICE pages. FS DAX already has this for example:
static inline unsigned long dax_folio_put(struct folio *folio)
{
	unsigned long ref;
	int order, i;

	if (!dax_folio_is_shared(folio))
		ref = 0;
	else
		ref = --folio->share;

	if (ref)
		return ref;

	folio->mapping = NULL;
	order = folio_order(folio);
	if (!order)
		return 0;
	folio_reset_order(folio);

	for (i = 0; i < (1UL << order); i++) {
		struct dev_pagemap *pgmap = page_pgmap(&folio->page);
		struct page *page = folio_page(folio, i);
		struct folio *new_folio = (struct folio *)page;

		ClearPageHead(page);
		clear_compound_head(page);

		new_folio->mapping = NULL;
		/*
		 * Reset pgmap which was over-written by
		 * prep_compound_page().
		 */
		new_folio->pgmap = pgmap;
		new_folio->share = 0;
		WARN_ON_ONCE(folio_ref_count(new_folio));
	}

	return ref;
}
Aside from the weird refcount checks that FS DAX needs to do at the start of this
function, I don't think there is anything specific to DEVICE_PRIVATE pages there.
> >>>
> >>> From 3aa03baa39b7e62ea079e826de6ed5aab3061e46 Mon Sep 17 00:00:00 2001
> >>> From: Zi Yan <ziy@nvidia.com>
> >>> Date: Wed, 7 Jan 2026 16:49:52 -0500
> >>> Subject: [PATCH] mm/memremap: free device private folio fix
> >>> Content-Type: text/plain; charset="utf-8"
> >>>
> >>> Signed-off-by: Zi Yan <ziy@nvidia.com>
> >>> ---
> >>> mm/memremap.c | 15 +++++++++++++++
> >>> 1 file changed, 15 insertions(+)
> >>>
> >>> diff --git a/mm/memremap.c b/mm/memremap.c
> >>> index 63c6ab4fdf08..483666ff7271 100644
> >>> --- a/mm/memremap.c
> >>> +++ b/mm/memremap.c
> >>> @@ -475,6 +475,21 @@ void free_zone_device_folio(struct folio *folio)
> >>> pgmap->ops->folio_free(folio);
> >>> break;
> >>> }
> >>> +
> >>> +	if (nr > 1) {
> >>> +		struct page *head = folio_page(folio, 0);
> >>> +
> >>> +		head[1].flags.f &= ~PAGE_FLAGS_SECOND;
> >>> +#ifdef NR_PAGES_IN_LARGE_FOLIO
> >>> +		folio->_nr_pages = 0;
> >>> +#endif
> >>> +		for (i = 1; i < nr; i++) {
> >>> +			(head + i)->mapping = NULL;
> >>> +			clear_compound_head(head + i);
> >>
> >> I see that you're skipping the checks in free_tail_page_prepare()? IIUC, we should be able
> >> to invoke it even for zone device private pages
>
> I am not sure about what part of compound page is also used in device private folio.
> Yes, it is better to add right checks.
>
> >>
> >>> +		}
> >>> +		folio->mapping = NULL;
> >>
> >> This is already done in free_zone_device_folio()
> >>
> >>> +		head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
> >>
> >> I don't think this is required for zone device private folios, but I suppose it
> >> keeps the code generic
> >>
> >
> > Well, the above code doesn’t work, but I think it’s the right idea.
> > clear_compound_head aliases to pgmap, which we don’t want to be NULL. I
>
> Thank you for pointing it out. I am not familiar with device private page code.
>
> > believe the individual pages likely need their flags cleared (?), and
>
> Yes, I missed the tail page flag clearing part.
>
> > this step must be done before calling folio_free and include a barrier,
> > as the page can be immediately reallocated.
> >
> > So here’s what I came up with, and it seems to work (for Xe, GPU SVM):
> >
> > mm/memremap.c | 21 +++++++++++++++++++++
> > 1 file changed, 21 insertions(+)
> >
> > diff --git a/mm/memremap.c b/mm/memremap.c
> > index 63c6ab4fdf08..ac20abb6a441 100644
> > --- a/mm/memremap.c
> > +++ b/mm/memremap.c
> > @@ -448,6 +448,27 @@ void free_zone_device_folio(struct folio *folio)
> > pgmap->type != MEMORY_DEVICE_GENERIC)
> > folio->mapping = NULL;
> >
> > +	if (nr > 1) {
> > +		struct page *head = folio_page(folio, 0);
> > +
> > +		head[1].flags.f &= ~PAGE_FLAGS_SECOND;
> > +#ifdef NR_PAGES_IN_LARGE_FOLIO
> > +		folio->_nr_pages = 0;
> > +#endif
> > +		for (i = 1; i < nr; i++) {
> > +			struct folio *new_folio = (struct folio *)(head + i);
> > +
> > +			(head + i)->mapping = NULL;
> > +			(head + i)->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
> > +
> > +			/* Overwrite compound_head with pgmap */
> > +			new_folio->pgmap = pgmap;
> > +		}
> > +
> > +		head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
> > +		smp_wmb(); /* Changes must be visible before freeing folio */
> > +	}
> > +
> > switch (pgmap->type) {
> > case MEMORY_DEVICE_PRIVATE:
> > case MEMORY_DEVICE_COHERENT:
> >
>
> It looks good to me, but I am very likely missing details on device private
> pages. As Balbir pointed out above, for tail pages, calling
> free_tail_page_prepare() might be better to get sanity checks like a normal
> large folio, although you will need to set ->pgmap after it.
>
> It is better to send it as a proper patch and get reviews from other
> MM folks.
>
> >>> + }
> >>> }
> >>>
> >>> void zone_device_page_init(struct page *page, unsigned int order)
> >>
> >>
> >> Otherwise, it seems like the right way to solve the issue.
> >>
> >
> > My question is: why isn’t Nouveau hitting this issue, or your Nvidia
> > out-of-tree driver? (Lack of testing? Xe's test suite coverage is quite
> > good at finding corners.)
> >
> > Also, will this change in behavior break either of those drivers?
> >
> > Matt
> >
> >> Balbir
>
>
> Best Regards,
> Yan, Zi
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-08 3:14 ` Alistair Popple
@ 2026-01-08 3:42 ` Matthew Brost
2026-01-08 4:47 ` Balbir Singh
0 siblings, 1 reply; 29+ messages in thread
From: Matthew Brost @ 2026-01-08 3:42 UTC (permalink / raw)
To: Alistair Popple
Cc: Zi Yan, Balbir Singh, Francois Dugast, intel-xe, dri-devel,
Andrew Morton, linux-mm, David Hildenbrand, Lorenzo Stoakes,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Matthew Wilcox
On Thu, Jan 08, 2026 at 02:14:28PM +1100, Alistair Popple wrote:
> On 2026-01-08 at 13:53 +1100, Zi Yan <ziy@nvidia.com> wrote...
> > On 7 Jan 2026, at 21:17, Matthew Brost wrote:
> >
> > > On Thu, Jan 08, 2026 at 11:56:03AM +1100, Balbir Singh wrote:
> > >> On 1/8/26 08:03, Zi Yan wrote:
> > >>> On 7 Jan 2026, at 16:15, Matthew Brost wrote:
> > >>>
> > >>>> On Wed, Jan 07, 2026 at 03:38:35PM -0500, Zi Yan wrote:
> > >>>>> On 7 Jan 2026, at 15:20, Zi Yan wrote:
> > >>>>>
> > >>>>>> +THP folks
> > >>>>>
> > >>>>> +willy, since he commented in another thread.
> > >>>>>
> > >>>>>>
> > >>>>>> On 16 Dec 2025, at 15:10, Francois Dugast wrote:
> > >>>>>>
> > >>>>>>> From: Matthew Brost <matthew.brost@intel.com>
> > >>>>>>>
> > >>>>>>> Introduce migrate_device_split_page() to split a device page into
> > >>>>>>> lower-order pages. Used when a folio allocated as higher-order is freed
> > >>>>>>> and later reallocated at a smaller order by the driver memory manager.
> > >>>>>>>
> > >>>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
> > >>>>>>> Cc: Balbir Singh <balbirs@nvidia.com>
> > >>>>>>> Cc: dri-devel@lists.freedesktop.org
> > >>>>>>> Cc: linux-mm@kvack.org
> > >>>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > >>>>>>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
> > >>>>>>> ---
> > >>>>>>> include/linux/huge_mm.h | 3 +++
> > >>>>>>> include/linux/migrate.h | 1 +
> > >>>>>>> mm/huge_memory.c | 6 ++---
> > >>>>>>> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
> > >>>>>>> 4 files changed, 56 insertions(+), 3 deletions(-)
> > >>>>>>>
> > >>>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > >>>>>>> index a4d9f964dfde..6ad8f359bc0d 100644
> > >>>>>>> --- a/include/linux/huge_mm.h
> > >>>>>>> +++ b/include/linux/huge_mm.h
> > >>>>>>> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> > >>>>>>> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
> > >>>>>>> unsigned int min_order_for_split(struct folio *folio);
> > >>>>>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
> > >>>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
> > >>>>>>> + struct page *split_at, struct xa_state *xas,
> > >>>>>>> + struct address_space *mapping, enum split_type split_type);
> > >>>>>>> int folio_check_splittable(struct folio *folio, unsigned int new_order,
> > >>>>>>> enum split_type split_type);
> > >>>>>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> > >>>>>>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> > >>>>>>> index 26ca00c325d9..ec65e4fd5f88 100644
> > >>>>>>> --- a/include/linux/migrate.h
> > >>>>>>> +++ b/include/linux/migrate.h
> > >>>>>>> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
> > >>>>>>> unsigned long npages);
> > >>>>>>> void migrate_device_finalize(unsigned long *src_pfns,
> > >>>>>>> unsigned long *dst_pfns, unsigned long npages);
> > >>>>>>> +int migrate_device_split_page(struct page *page);
> > >>>>>>>
> > >>>>>>> #endif /* CONFIG_MIGRATION */
> > >>>>>>>
> > >>>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > >>>>>>> index 40cf59301c21..7ded35a3ecec 100644
> > >>>>>>> --- a/mm/huge_memory.c
> > >>>>>>> +++ b/mm/huge_memory.c
> > >>>>>>> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> > >>>>>>> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
> > >>>>>>> * split but not to @new_order, the caller needs to check)
> > >>>>>>> */
> > >>>>>>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
> > >>>>>>> - struct page *split_at, struct xa_state *xas,
> > >>>>>>> - struct address_space *mapping, enum split_type split_type)
> > >>>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
> > >>>>>>> + struct page *split_at, struct xa_state *xas,
> > >>>>>>> + struct address_space *mapping, enum split_type split_type)
> > >>>>>>> {
> > >>>>>>> const bool is_anon = folio_test_anon(folio);
> > >>>>>>> int old_order = folio_order(folio);
> > >>>>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> > >>>>>>> index 23379663b1e1..eb0f0e938947 100644
> > >>>>>>> --- a/mm/migrate_device.c
> > >>>>>>> +++ b/mm/migrate_device.c
> > >>>>>>> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
> > >>>>>>> EXPORT_SYMBOL(migrate_vma_setup);
> > >>>>>>>
> > >>>>>>> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> > >>>>>>> +/**
> > >>>>>>> + * migrate_device_split_page() - Split device page
> > >>>>>>> + * @page: Device page to split
> > >>>>>>> + *
> > >>>>>>> + * Splits a device page into smaller pages. Typically called when reallocating a
> > >>>>>>> + * folio to a smaller size. Inherently racy—only safe if the caller ensures
> > >>>>>>> + * mutual exclusion within the page's folio (i.e., no other threads are using
> > >>>>>>> + * pages within the folio). Expected to be called on a free device page and
> > >>>>>>> + * restores all split out pages to a free state.
> > >>>>>>> + */
> > >>>>>
> > >>>>> Do you mind explaining why __split_unmapped_folio() is needed for a free device
> > >>>>> page? A free page is not supposed to be a large folio, at least from a core
> > >>>>> MM point of view. __split_unmapped_folio() is intended to work on large folios
> > >>>>> (or compound pages), even if the input folio has refcount == 0 (because it is
> > >>>>> frozen).
> > >>>>>
> > >>>>
> > >>>> Well, then maybe this is a bug in core MM where the freed page is still
> > >>>> a THP. Let me explain the scenario and why this is needed from my POV.
> > >>>>
> > >>>> Our VRAM allocator in Xe (and several other DRM drivers) is DRM buddy.
> > >>>> This is a shared pool between traditional DRM GEMs (buffer objects) and
> > >>>> SVM allocations (pages). It doesn’t have any view of the page backing—it
> > >>>> basically just hands back a pointer to VRAM space that we allocate from.
> > >>>> From that, if it’s an SVM allocation, we can derive the device pages.
> > >>>>
> > >>>> What I see happening is: a 2M buddy allocation occurs, we make the
> > >>>> backing device pages a large folio, and sometime later the folio
> > >>>> refcount goes to zero and we free the buddy allocation. Later, the buddy
> > >>>> allocation is reused for a smaller allocation (e.g., 4K or 64K), but the
> > >>>> backing pages are still a large folio. Here is where we need to split
> > >>>
> > >>> I agree with you that it might be a bug in free_zone_device_folio() based
> > >>> on my understanding. Since zone_device_page_init() calls prep_compound_page()
> > >>> for >0 orders, but free_zone_device_folio() never reverses the process.
> > >>>
> > >>> Balbir and Alistair might be able to help here.
>
> Just catching up after the Christmas break.
>
I think everyone is, and scrambling for the release PR. :)
> > >>
> > >> I agree it's an API limitation
> >
> > I am not sure. If free_zone_device_folio() does not get rid of large folio
> > metadata, there is no guarantee that a freed large device private folio will
> > be reallocated as a large device private folio. And when mTHP support is
> > added, the folio order might change too. That can cause issues when
> > compound_head() is called on a tail page of a previously large folio, since
> > compound_head() will return the old head page instead of the tail page itself.
>
> I agree that freeing the device folio should get rid of the large folio. That
> would also keep it consistent with what we do for FS DAX for example.
>
+1
> > >>
> > >>>
> > >>> I cherry picked the code from __free_frozen_pages() to reverse the process.
> > >>> Can you give it a try to see if it solves the above issue? Thanks.
>
> It would be nice if this could be a common helper for freeing compound
> ZONE_DEVICE pages. FS DAX already has this for example:
>
> static inline unsigned long dax_folio_put(struct folio *folio)
> {
> 	unsigned long ref;
> 	int order, i;
>
> 	if (!dax_folio_is_shared(folio))
> 		ref = 0;
> 	else
> 		ref = --folio->share;
>
> 	if (ref)
> 		return ref;
>
> 	folio->mapping = NULL;
> 	order = folio_order(folio);
> 	if (!order)
> 		return 0;
> 	folio_reset_order(folio);
>
> 	for (i = 0; i < (1UL << order); i++) {
> 		struct dev_pagemap *pgmap = page_pgmap(&folio->page);
> 		struct page *page = folio_page(folio, i);
> 		struct folio *new_folio = (struct folio *)page;
>
> 		ClearPageHead(page);
> 		clear_compound_head(page);
>
> 		new_folio->mapping = NULL;
> 		/*
> 		 * Reset pgmap which was over-written by
> 		 * prep_compound_page().
> 		 */
> 		new_folio->pgmap = pgmap;
> 		new_folio->share = 0;
> 		WARN_ON_ONCE(folio_ref_count(new_folio));
> 	}
>
> 	return ref;
> }
>
> Aside from the weird refcount checks that FS DAX needs to do at the start of this
> function, I don't think there is anything specific to DEVICE_PRIVATE pages there.
>
Thanks for the reference, Alistair. This looks roughly like what I
hacked together in an effort to just get something working. I believe a
common helper can be made to work. Let me churn on this tomorrow and put
together a proper patch.
> > >>>
> > >>> From 3aa03baa39b7e62ea079e826de6ed5aab3061e46 Mon Sep 17 00:00:00 2001
> > >>> From: Zi Yan <ziy@nvidia.com>
> > >>> Date: Wed, 7 Jan 2026 16:49:52 -0500
> > >>> Subject: [PATCH] mm/memremap: free device private folio fix
> > >>> Content-Type: text/plain; charset="utf-8"
> > >>>
> > >>> Signed-off-by: Zi Yan <ziy@nvidia.com>
> > >>> ---
> > >>> mm/memremap.c | 15 +++++++++++++++
> > >>> 1 file changed, 15 insertions(+)
> > >>>
> > >>> diff --git a/mm/memremap.c b/mm/memremap.c
> > >>> index 63c6ab4fdf08..483666ff7271 100644
> > >>> --- a/mm/memremap.c
> > >>> +++ b/mm/memremap.c
> > >>> @@ -475,6 +475,21 @@ void free_zone_device_folio(struct folio *folio)
> > >>> pgmap->ops->folio_free(folio);
> > >>> break;
> > >>> }
> > >>> +
> > >>> + if (nr > 1) {
> > >>> + struct page *head = folio_page(folio, 0);
> > >>> +
> > >>> + head[1].flags.f &= ~PAGE_FLAGS_SECOND;
> > >>> +#ifdef NR_PAGES_IN_LARGE_FOLIO
> > >>> + folio->_nr_pages = 0;
> > >>> +#endif
> > >>> + for (i = 1; i < nr; i++) {
> > >>> + (head + i)->mapping = NULL;
> > >>> + clear_compound_head(head + i);
> > >>
> > >> I see that you're skipping the checks in free_tail_page_prepare()? IIUC, we should be able
> > >> to invoke it even for zone device private pages
> >
> > I am not sure about what part of compound page is also used in device private folio.
> > Yes, it is better to add right checks.
> >
> > >>
> > >>> + }
> > >>> + folio->mapping = NULL;
> > >>
> > >> This is already done in free_zone_device_folio()
> > >>
> > >>> + head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
> > >>
> > >> I don't think this is required for zone device private folios, but I suppose it
> > >> keeps the code generic
> > >>
> > >
> > > Well, the above code doesn’t work, but I think it’s the right idea.
> > > clear_compound_head aliases to pgmap, which we don’t want to be NULL. I
> >
> > Thank you for pointing it out. I am not familiar with device private page code.
> >
> > > believe the individual pages likely need their flags cleared (?), and
> >
> > Yes, I missed the tail page flag clearing part.
> >
I think the page head is the only thing that really needs to be cleared,
though I could be wrong.
> > > this step must be done before calling folio_free and include a barrier,
> > > as the page can be immediately reallocated.
> > >
> > > So here’s what I came up with, and it seems to work (for Xe, GPU SVM):
> > >
> > > mm/memremap.c | 21 +++++++++++++++++++++
> > > 1 file changed, 21 insertions(+)
> > >
> > > diff --git a/mm/memremap.c b/mm/memremap.c
> > > index 63c6ab4fdf08..ac20abb6a441 100644
> > > --- a/mm/memremap.c
> > > +++ b/mm/memremap.c
> > > @@ -448,6 +448,27 @@ void free_zone_device_folio(struct folio *folio)
> > > pgmap->type != MEMORY_DEVICE_GENERIC)
> > > folio->mapping = NULL;
> > >
> > > +	if (nr > 1) {
> > > +		struct page *head = folio_page(folio, 0);
> > > +
> > > +		head[1].flags.f &= ~PAGE_FLAGS_SECOND;
> > > +#ifdef NR_PAGES_IN_LARGE_FOLIO
> > > +		folio->_nr_pages = 0;
> > > +#endif
> > > +		for (i = 1; i < nr; i++) {
> > > +			struct folio *new_folio = (struct folio *)(head + i);
> > > +
> > > +			(head + i)->mapping = NULL;
> > > +			(head + i)->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
> > > +
> > > +			/* Overwrite compound_head with pgmap */
> > > +			new_folio->pgmap = pgmap;
> > > +		}
> > > +
> > > +		head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
> > > +		smp_wmb(); /* Changes must be visible before freeing folio */
> > > +	}
> > > +
> > > switch (pgmap->type) {
> > > case MEMORY_DEVICE_PRIVATE:
> > > case MEMORY_DEVICE_COHERENT:
> > >
> >
> > It looks good to me, but I am very likely missing details on device private
> > pages. As Balbir pointed out above, for tail pages, calling
> > free_tail_page_prepare() might be better to get sanity checks like a normal
> > large folio, although you will need to set ->pgmap after it.
> >
> > It is better to send it as a proper patch and get reviews from other
> > MM folks.
> >
Yes, agreed. See above—I’ll work on a proper patch tomorrow and CC all
the correct MM folks. Aiming to have something ready for the next
release PR.
Matt
> > >>> + }
> > >>> }
> > >>>
> > >>> void zone_device_page_init(struct page *page, unsigned int order)
> > >>
> > >>
> > >> Otherwise, it seems like the right way to solve the issue.
> > >>
> > >
> > > My question is: why isn’t Nouveau hitting this issue, or your Nvidia
> > > out-of-tree driver? (Lack of testing? Xe's test suite coverage is quite
> > > good at finding corners.)
> > >
> > > Also, will this change in behavior break either of those drivers?
> > >
> > > Matt
> > >
> > >> Balbir
> >
> >
> > Best Regards,
> > Yan, Zi
* Re: [PATCH 1/4] mm/migrate: Add migrate_device_split_page
2026-01-08 3:42 ` Matthew Brost
@ 2026-01-08 4:47 ` Balbir Singh
0 siblings, 0 replies; 29+ messages in thread
From: Balbir Singh @ 2026-01-08 4:47 UTC (permalink / raw)
To: Matthew Brost, Alistair Popple
Cc: Zi Yan, Francois Dugast, intel-xe, dri-devel, Andrew Morton,
linux-mm, David Hildenbrand, Lorenzo Stoakes, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Lance Yang, Matthew Wilcox
On 1/8/26 13:42, Matthew Brost wrote:
> On Thu, Jan 08, 2026 at 02:14:28PM +1100, Alistair Popple wrote:
>> On 2026-01-08 at 13:53 +1100, Zi Yan <ziy@nvidia.com> wrote...
>>> On 7 Jan 2026, at 21:17, Matthew Brost wrote:
>>>
>>>> On Thu, Jan 08, 2026 at 11:56:03AM +1100, Balbir Singh wrote:
>>>>> On 1/8/26 08:03, Zi Yan wrote:
>>>>>> On 7 Jan 2026, at 16:15, Matthew Brost wrote:
>>>>>>
>>>>>>> On Wed, Jan 07, 2026 at 03:38:35PM -0500, Zi Yan wrote:
>>>>>>>> On 7 Jan 2026, at 15:20, Zi Yan wrote:
>>>>>>>>
>>>>>>>>> +THP folks
>>>>>>>>
>>>>>>>> +willy, since he commented in another thread.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 16 Dec 2025, at 15:10, Francois Dugast wrote:
>>>>>>>>>
>>>>>>>>>> From: Matthew Brost <matthew.brost@intel.com>
>>>>>>>>>>
>>>>>>>>>> Introduce migrate_device_split_page() to split a device page into
>>>>>>>>>> lower-order pages. Used when a folio allocated as higher-order is freed
>>>>>>>>>> and later reallocated at a smaller order by the driver memory manager.
>>>>>>>>>>
>>>>>>>>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>>>>>>>>> Cc: Balbir Singh <balbirs@nvidia.com>
>>>>>>>>>> Cc: dri-devel@lists.freedesktop.org
>>>>>>>>>> Cc: linux-mm@kvack.org
>>>>>>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>>>>>>>> Signed-off-by: Francois Dugast <francois.dugast@intel.com>
>>>>>>>>>> ---
>>>>>>>>>> include/linux/huge_mm.h | 3 +++
>>>>>>>>>> include/linux/migrate.h | 1 +
>>>>>>>>>> mm/huge_memory.c | 6 ++---
>>>>>>>>>> mm/migrate_device.c | 49 +++++++++++++++++++++++++++++++++++++++++
>>>>>>>>>> 4 files changed, 56 insertions(+), 3 deletions(-)
>>>>>>>>>>
>>>>>>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>>>>>>>> index a4d9f964dfde..6ad8f359bc0d 100644
>>>>>>>>>> --- a/include/linux/huge_mm.h
>>>>>>>>>> +++ b/include/linux/huge_mm.h
>>>>>>>>>> @@ -374,6 +374,9 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>>>>>>>>>> int folio_split_unmapped(struct folio *folio, unsigned int new_order);
>>>>>>>>>> unsigned int min_order_for_split(struct folio *folio);
>>>>>>>>>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>>>>>>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>>>>>>> + struct page *split_at, struct xa_state *xas,
>>>>>>>>>> + struct address_space *mapping, enum split_type split_type);
>>>>>>>>>> int folio_check_splittable(struct folio *folio, unsigned int new_order,
>>>>>>>>>> enum split_type split_type);
>>>>>>>>>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>>>>>>>>>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>>>>>>>>>> index 26ca00c325d9..ec65e4fd5f88 100644
>>>>>>>>>> --- a/include/linux/migrate.h
>>>>>>>>>> +++ b/include/linux/migrate.h
>>>>>>>>>> @@ -192,6 +192,7 @@ void migrate_device_pages(unsigned long *src_pfns, unsigned long *dst_pfns,
>>>>>>>>>> unsigned long npages);
>>>>>>>>>> void migrate_device_finalize(unsigned long *src_pfns,
>>>>>>>>>> unsigned long *dst_pfns, unsigned long npages);
>>>>>>>>>> +int migrate_device_split_page(struct page *page);
>>>>>>>>>>
>>>>>>>>>> #endif /* CONFIG_MIGRATION */
>>>>>>>>>>
>>>>>>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>>>>>>> index 40cf59301c21..7ded35a3ecec 100644
>>>>>>>>>> --- a/mm/huge_memory.c
>>>>>>>>>> +++ b/mm/huge_memory.c
>>>>>>>>>> @@ -3621,9 +3621,9 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>>>>>>>>>> * Return: 0 - successful, <0 - failed (if -ENOMEM is returned, @folio might be
>>>>>>>>>> * split but not to @new_order, the caller needs to check)
>>>>>>>>>> */
>>>>>>>>>> -static int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>>>>>>> - struct page *split_at, struct xa_state *xas,
>>>>>>>>>> - struct address_space *mapping, enum split_type split_type)
>>>>>>>>>> +int __split_unmapped_folio(struct folio *folio, int new_order,
>>>>>>>>>> + struct page *split_at, struct xa_state *xas,
>>>>>>>>>> + struct address_space *mapping, enum split_type split_type)
>>>>>>>>>> {
>>>>>>>>>> const bool is_anon = folio_test_anon(folio);
>>>>>>>>>> int old_order = folio_order(folio);
>>>>>>>>>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>>>>>>>>>> index 23379663b1e1..eb0f0e938947 100644
>>>>>>>>>> --- a/mm/migrate_device.c
>>>>>>>>>> +++ b/mm/migrate_device.c
>>>>>>>>>> @@ -775,6 +775,49 @@ int migrate_vma_setup(struct migrate_vma *args)
>>>>>>>>>> EXPORT_SYMBOL(migrate_vma_setup);
>>>>>>>>>>
>>>>>>>>>> #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
>>>>>>>>>> +/**
>>>>>>>>>> + * migrate_device_split_page() - Split device page
>>>>>>>>>> + * @page: Device page to split
>>>>>>>>>> + *
>>>>>>>>>> + * Splits a device page into smaller pages. Typically called when reallocating a
>>>>>>>>>> + * folio to a smaller size. Inherently racy: only safe if the caller ensures
>>>>>>>>>> + * mutual exclusion within the page's folio (i.e., no other threads are using
>>>>>>>>>> + * pages within the folio). Expected to be called on a free device page;
>>>>>>>>>> + * restores all split-out pages to a free state.
>>>>>>>>>> + */
>>>>>>>>
>>>>>>>> Do you mind explaining why __split_unmapped_folio() is needed for a free device
>>>>>>>> page? A free page is not supposed to be a large folio, at least from a core
>>>>>>>> MM point of view. __split_unmapped_folio() is intended to work on large folios
>>>>>>>> (or compound pages), even if the input folio has refcount == 0 (because it is
>>>>>>>> frozen).
>>>>>>>>
>>>>>>>
>>>>>>> Well, then maybe this is a bug in core MM where the freed page is still
>>>>>>> a THP. Let me explain the scenario and why this is needed from my POV.
>>>>>>>
>>>>>>> Our VRAM allocator in Xe (and several other DRM drivers) is DRM buddy.
>>>>>>> This is a shared pool between traditional DRM GEMs (buffer objects) and
>>>>>>> SVM allocations (pages). It doesn’t have any view of the page backing—it
>>>>>>> basically just hands back a pointer to VRAM space that we allocate from.
>>>>>>> From that, if it’s an SVM allocation, we can derive the device pages.
>>>>>>>
>>>>>>> What I see happening is: a 2M buddy allocation occurs, we make the
>>>>>>> backing device pages a large folio, and sometime later the folio
>>>>>>> refcount goes to zero and we free the buddy allocation. Later, the buddy
>>>>>>> allocation is reused for a smaller allocation (e.g., 4K or 64K), but the
>>>>>>> backing pages are still a large folio. Here is where we need to split
>>>>>>
>>>>>> I agree with you that it might be a bug in free_zone_device_folio() based
>>>>>> on my understanding, since zone_device_page_init() calls prep_compound_page()
>>>>>> for >0 orders but free_zone_device_folio() never reverses the process.
>>>>>>
>>>>>> Balbir and Alistair might be able to help here.
>>
>> Just catching up after the Christmas break.
>>
>
> I think everyone is, and scrambling for the release PR. :)
>
>>>>>
>>>>> I agree it's an API limitation
>>>
>>> I am not sure. If free_zone_device_folio() does not get rid of large folio
>>> metadata, there is no guarantee that a freed large device private folio will
>>> be reallocated as a large device private folio. And when mTHP support is
>>> added, the folio order might change too. That can cause issues when
>>> compound_head() is called on a tail page of a previously large folio, since
>>> compound_head() will return the old head page instead of the tail page itself.
>>
>> I agree that freeing the device folio should get rid of the large folio. That
>> would also keep it consistent with what we do for FS DAX for example.
>>
>
> +1
>
>>>>>
>>>>>>
>>>>>> I cherry-picked the code from __free_frozen_pages() to reverse the process.
>>>>>> Can you give it a try to see if it solves the above issue? Thanks.
>>
>> It would be nice if this could be a common helper for freeing compound
>> ZONE_DEVICE pages. FS DAX already has this for example:
>>
>> static inline unsigned long dax_folio_put(struct folio *folio)
>> {
>> unsigned long ref;
>> int order, i;
>>
>> if (!dax_folio_is_shared(folio))
>> ref = 0;
>> else
>> ref = --folio->share;
>>
>> if (ref)
>> return ref;
>>
>> folio->mapping = NULL;
>> order = folio_order(folio);
>> if (!order)
>> return 0;
>> folio_reset_order(folio);
>>
>> for (i = 0; i < (1UL << order); i++) {
>> struct dev_pagemap *pgmap = page_pgmap(&folio->page);
>> struct page *page = folio_page(folio, i);
>> struct folio *new_folio = (struct folio *)page;
>>
>> ClearPageHead(page);
>> clear_compound_head(page);
>>
>> new_folio->mapping = NULL;
>> /*
>> * Reset pgmap which was over-written by
>> * prep_compound_page().
>> */
>> new_folio->pgmap = pgmap;
>> new_folio->share = 0;
>> WARN_ON_ONCE(folio_ref_count(new_folio));
>> }
>>
>> return ref;
>> }
>>
>> Aside from the weird refcount checks that FS DAX needs to do at the start of
>> this function, I don't think there is anything specific to DEVICE_PRIVATE
>> pages there.
>>
>
> Thanks for the reference, Alistair. This looks roughly like what I
> hacked together in an effort to just get something working. I believe a
> common helper can be made to work. Let me churn on this tomorrow and put
> together a proper patch.
>
>>>>>>
>>>>>> From 3aa03baa39b7e62ea079e826de6ed5aab3061e46 Mon Sep 17 00:00:00 2001
>>>>>> From: Zi Yan <ziy@nvidia.com>
>>>>>> Date: Wed, 7 Jan 2026 16:49:52 -0500
>>>>>> Subject: [PATCH] mm/memremap: free device private folio fix
>>>>>> Content-Type: text/plain; charset="utf-8"
>>>>>>
>>>>>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>>>>>> ---
>>>>>> mm/memremap.c | 15 +++++++++++++++
>>>>>> 1 file changed, 15 insertions(+)
>>>>>>
>>>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>>>> index 63c6ab4fdf08..483666ff7271 100644
>>>>>> --- a/mm/memremap.c
>>>>>> +++ b/mm/memremap.c
>>>>>> @@ -475,6 +475,21 @@ void free_zone_device_folio(struct folio *folio)
>>>>>> pgmap->ops->folio_free(folio);
>>>>>> break;
>>>>>> }
>>>>>> +
>>>>>> + if (nr > 1) {
>>>>>> + struct page *head = folio_page(folio, 0);
>>>>>> +
>>>>>> + head[1].flags.f &= ~PAGE_FLAGS_SECOND;
>>>>>> +#ifdef NR_PAGES_IN_LARGE_FOLIO
>>>>>> + folio->_nr_pages = 0;
>>>>>> +#endif
>>>>>> + for (i = 1; i < nr; i++) {
>>>>>> + (head + i)->mapping = NULL;
>>>>>> + clear_compound_head(head + i);
>>>>>
>>>>> I see that you're skipping the checks in free_tail_page_prepare()? IIUC, we
>>>>> should be able to invoke it even for zone device private pages.
>>>
>>> I am not sure which parts of the compound page metadata are also used in
>>> device private folios. Yes, it is better to add the right checks.
>>>
>>>>>
>>>>>> + }
>>>>>> + folio->mapping = NULL;
>>>>>
>>>>> This is already done in free_zone_device_folio()
>>>>>
>>>>>> + head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>>>>>
>>>>> I don't think this is required for zone device private folios, but I suppose it
>>>>> keeps the code generic
>>>>>
>>>>
>>>> Well, the above code doesn’t work, but I think it’s the right idea.
>>>> clear_compound_head() aliases ->pgmap, which we don’t want to be NULL. I
>>>
>>> Thank you for pointing it out. I am not familiar with device private page code.
>>>
>>>> believe the individual pages likely need their flags cleared (?), and
>>>
>>> Yes, I missed the tail page flag clearing part.
>>>
>
> I think the page head is the only thing that really needs to be cleared,
> though I could be wrong.
>
>>>> this step must be done before calling folio_free and include a barrier,
>>>> as the page can be immediately reallocated.
>>>>
>>>> So here’s what I came up with, and it seems to work (for Xe, GPU SVM):
>>>>
>>>> mm/memremap.c | 21 +++++++++++++++++++++
>>>> 1 file changed, 21 insertions(+)
>>>>
>>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>>> index 63c6ab4fdf08..ac20abb6a441 100644
>>>> --- a/mm/memremap.c
>>>> +++ b/mm/memremap.c
>>>> @@ -448,6 +448,27 @@ void free_zone_device_folio(struct folio *folio)
>>>> pgmap->type != MEMORY_DEVICE_GENERIC)
>>>> folio->mapping = NULL;
>>>>
>>>> + if (nr > 1) {
>>>> + struct page *head = folio_page(folio, 0);
>>>> +
>>>> + head[1].flags.f &= ~PAGE_FLAGS_SECOND;
>>>> +#ifdef NR_PAGES_IN_LARGE_FOLIO
>>>> + folio->_nr_pages = 0;
>>>> +#endif
>>>> + for (i = 1; i < nr; i++) {
>>>> + struct folio *new_folio = (struct folio *)(head + i);
>>>> +
>>>> + (head + i)->mapping = NULL;
>>>> + (head + i)->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>>>> +
>>>> + /* Overwrite compound_head with pgmap */
>>>> + new_folio->pgmap = pgmap;
>>>> + }
>>>> +
>>>> + head->flags.f &= ~PAGE_FLAGS_CHECK_AT_PREP;
>>>> + smp_wmb(); /* Changes must be visible before freeing folio */
>>>> + }
>>>> +
>>>> switch (pgmap->type) {
>>>> case MEMORY_DEVICE_PRIVATE:
>>>> case MEMORY_DEVICE_COHERENT:
>>>>
>>>
>>> It looks good to me, but I am very likely missing details on device private
>>> pages. Like Balbir pointed out above, for tail pages, calling
>>> free_tail_page_prepare() might be better to get sanity checks like a normal
>>> large folio, although you will need to set ->pgmap after it.
>>>
>>> It is better to send it as a proper patch and get reviews from other
>>> MM folks.
>>>
>
> Yes, agreed. See above—I’ll work on a proper patch tomorrow and CC all
> the correct MM folks. Aiming to have something ready for the next
> release PR.
>
Yes, please!
Thanks,
Balbir
Thread overview: 29+ messages
2025-12-16 20:10 [PATCH 0/4] Enable THP support in drm_pagemap Francois Dugast
2025-12-16 20:10 ` [PATCH 1/4] mm/migrate: Add migrate_device_split_page Francois Dugast
2025-12-16 20:34 ` Matthew Wilcox
2025-12-16 21:39 ` Matthew Brost
2026-01-06 2:39 ` Matthew Brost
2026-01-07 20:15 ` Zi Yan
2026-01-07 20:20 ` Zi Yan
2026-01-07 20:38 ` Zi Yan
2026-01-07 21:15 ` Matthew Brost
2026-01-07 22:03 ` Zi Yan
2026-01-08 0:56 ` Balbir Singh
2026-01-08 2:17 ` Matthew Brost
2026-01-08 2:53 ` Zi Yan
2026-01-08 3:14 ` Alistair Popple
2026-01-08 3:42 ` Matthew Brost
2026-01-08 4:47 ` Balbir Singh
2025-12-16 20:10 ` [PATCH 2/4] drm/pagemap: Unlock and put folios when possible Francois Dugast
2025-12-18 21:59 ` Matthew Brost
2025-12-16 20:10 ` [PATCH 3/4] drm/pagemap: Add helper to access zone_device_data Francois Dugast
2025-12-18 22:19 ` Matthew Brost
2025-12-19 15:29 ` Francois Dugast
2025-12-19 20:13 ` Matthew Brost
2025-12-16 20:10 ` [PATCH 4/4] drm/pagemap: Enable THP support for GPU memory migration Francois Dugast
2025-12-18 22:24 ` Matthew Brost
2025-12-17 0:14 ` ✗ CI.checkpatch: warning for Enable THP support in drm_pagemap Patchwork
2025-12-17 0:16 ` ✓ CI.KUnit: success " Patchwork
2025-12-17 0:31 ` ✗ CI.checksparse: warning " Patchwork
2025-12-17 0:55 ` ✓ Xe.CI.BAT: success " Patchwork
2025-12-17 23:22 ` ✗ Xe.CI.Full: failure " Patchwork