* [PATCH v4 0/4] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap
@ 2026-02-05 4:19 Matthew Brost
2026-02-05 4:19 ` [PATCH v4 1/4] drm/pagemap: Add helper to access zone_device_data Matthew Brost
` (6 more replies)
0 siblings, 7 replies; 20+ messages in thread
From: Matthew Brost @ 2026-02-05 4:19 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: leonro, jgg, francois.dugast, thomas.hellstrom,
himal.prasad.ghimiray
The dma-map IOVA alloc, link, and sync APIs perform significantly better
than dma-map / dma-unmap, as they avoid costly IOMMU synchronizations.
This difference is especially noticeable when mapping a 2MB region in
4KB pages.
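For reference, the usage pattern this series moves to looks roughly like the
following. This is a hedged, illustrative sketch against the kernel's
dma_iova_* API (it is not a standalone compilable unit; error handling, the
dma_map_page() fallback, and the phys[] source array are assumed):

```c
	/* Sketch: map a range of 4KB pages with a single IOVA allocation,
	 * one link per page, and a single IOTLB sync at the end, instead
	 * of one dma_map_page() (and its IOMMU sync) per page.
	 */
	struct dma_iova_state state = {};
	size_t offset = 0;
	int i, err;

	/* One IOVA allocation covering the whole range */
	if (dma_iova_try_alloc(dev, &state, 0, npages * PAGE_SIZE)) {
		for (i = 0; i < npages; i++) {
			/* Link each page; no per-page IOMMU sync here */
			err = dma_iova_link(dev, &state, phys[i], offset,
					    PAGE_SIZE, DMA_BIDIRECTIONAL, 0);
			if (err)
				break;
			offset += PAGE_SIZE;
		}
		/* Single IOTLB sync for the entire mapping */
		err = dma_iova_sync(dev, &state, 0, offset);
	}
```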
Use the dma-map IOVA alloc, link, and sync APIs for GPU SVM and DRM
pagemap, which create DMA mappings between the CPU and GPU.
Initial results are promising.
Baseline CPU time during 2M / 64K fault with a migration:
Average migrate 2M cpu time (us, percentage): 333.997, 0.611
Average migrate 64K cpu time (us, percentage): 18.627, 0.301
After this series CPU time during 2M / 64K fault with a migration:
Average migrate 2M cpu time (us, percentage): 224.818, 0.514
Average migrate 64K cpu time (us, percentage): 14.656, 0.257
Matt
v2:
- Include missing baseline patch for CI
v3:
- Fix memory corruption
- PoC IOVA alloc for multi-GPU
v4:
- Pack IOVA / drop dummy pages
- Drop multi-GPU IOVA alloc
Francois Dugast (1):
drm/pagemap: Add helper to access zone_device_data
Matthew Brost (3):
drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM
drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM
pagemap
drivers/gpu/drm/drm_gpusvm.c | 62 +++++++--
drivers/gpu/drm/drm_pagemap.c | 229 +++++++++++++++++++++++++---------
include/drm/drm_gpusvm.h | 5 +
include/drm/drm_pagemap.h | 14 +++
4 files changed, 238 insertions(+), 72 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v4 1/4] drm/pagemap: Add helper to access zone_device_data
2026-02-05 4:19 [PATCH v4 0/4] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
@ 2026-02-05 4:19 ` Matthew Brost
2026-02-05 4:19 ` [PATCH v4 2/4] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
` (5 subsequent siblings)
6 siblings, 0 replies; 20+ messages in thread
From: Matthew Brost @ 2026-02-05 4:19 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: leonro, jgg, francois.dugast, thomas.hellstrom,
himal.prasad.ghimiray
From: Francois Dugast <francois.dugast@intel.com>
This new helper ensures that all accesses to zone_device_data use the
correct API, whether or not the page is part of a folio.
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Francois Dugast <francois.dugast@intel.com>
---
v2:
- Move to drm_pagemap.h, stick to folio_zone_device_data (Matthew Brost)
- Return struct drm_pagemap_zdd * (Matthew Brost)
drivers/gpu/drm/drm_gpusvm.c | 7 +++++--
drivers/gpu/drm/drm_pagemap.c | 21 ++++++++++++---------
include/drm/drm_pagemap.h | 14 ++++++++++++++
3 files changed, 31 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 871fcccd128a..4b8130a4ce95 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1488,12 +1488,15 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
order = drm_gpusvm_hmm_pfn_to_order(pfns[i], i, npages);
if (is_device_private_page(page) ||
is_device_coherent_page(page)) {
+ struct drm_pagemap_zdd *__zdd =
+ drm_pagemap_page_zone_device_data(page);
+
if (!ctx->allow_mixed &&
- zdd != page->zone_device_data && i > 0) {
+ zdd != __zdd && i > 0) {
err = -EOPNOTSUPP;
goto err_unmap;
}
- zdd = page->zone_device_data;
+ zdd = __zdd;
if (pagemap != page_pgmap(page)) {
if (pagemap) {
err = -EOPNOTSUPP;
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 38eca94f01a1..fbd69f383457 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -244,7 +244,7 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
order = folio_order(folio);
if (is_device_private_page(page)) {
- struct drm_pagemap_zdd *zdd = page->zone_device_data;
+ struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
struct drm_pagemap *dpagemap = zdd->dpagemap;
struct drm_pagemap_addr addr;
@@ -315,7 +315,7 @@ static void drm_pagemap_migrate_unmap_pages(struct device *dev,
goto next;
if (is_zone_device_page(page)) {
- struct drm_pagemap_zdd *zdd = page->zone_device_data;
+ struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
struct drm_pagemap *dpagemap = zdd->dpagemap;
dpagemap->ops->device_unmap(dpagemap, dev, pagemap_addr[i]);
@@ -603,7 +603,8 @@ int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
pages[i] = NULL;
if (src_page && is_device_private_page(src_page)) {
- struct drm_pagemap_zdd *src_zdd = src_page->zone_device_data;
+ struct drm_pagemap_zdd *src_zdd =
+ drm_pagemap_page_zone_device_data(src_page);
if (page_pgmap(src_page) == pagemap &&
!mdetails->can_migrate_same_pagemap) {
@@ -725,8 +726,8 @@ static int drm_pagemap_migrate_populate_ram_pfn(struct vm_area_struct *vas,
goto next;
if (fault_page) {
- if (src_page->zone_device_data !=
- fault_page->zone_device_data)
+ if (drm_pagemap_page_zone_device_data(src_page) !=
+ drm_pagemap_page_zone_device_data(fault_page))
goto next;
}
@@ -1067,7 +1068,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
void *buf;
int i, err = 0;
- zdd = page->zone_device_data;
+ zdd = drm_pagemap_page_zone_device_data(page);
if (time_before64(get_jiffies_64(), zdd->devmem_allocation->timeslice_expiration))
return 0;
@@ -1150,7 +1151,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
*/
static void drm_pagemap_folio_free(struct folio *folio)
{
- drm_pagemap_zdd_put(folio->page.zone_device_data);
+ struct page *page = folio_page(folio, 0);
+
+ drm_pagemap_zdd_put(drm_pagemap_page_zone_device_data(page));
}
/**
@@ -1166,7 +1169,7 @@ static void drm_pagemap_folio_free(struct folio *folio)
*/
static vm_fault_t drm_pagemap_migrate_to_ram(struct vm_fault *vmf)
{
- struct drm_pagemap_zdd *zdd = vmf->page->zone_device_data;
+ struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(vmf->page);
int err;
err = __drm_pagemap_migrate_to_ram(vmf->vma,
@@ -1232,7 +1235,7 @@ EXPORT_SYMBOL_GPL(drm_pagemap_devmem_init);
*/
struct drm_pagemap *drm_pagemap_page_to_dpagemap(struct page *page)
{
- struct drm_pagemap_zdd *zdd = page->zone_device_data;
+ struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
return zdd->devmem_allocation->dpagemap;
}
diff --git a/include/drm/drm_pagemap.h b/include/drm/drm_pagemap.h
index 2baf0861f78f..14e1db564c25 100644
--- a/include/drm/drm_pagemap.h
+++ b/include/drm/drm_pagemap.h
@@ -4,6 +4,7 @@
#include <linux/dma-direction.h>
#include <linux/hmm.h>
+#include <linux/memremap.h>
#include <linux/types.h>
#define NR_PAGES(order) (1U << (order))
@@ -341,6 +342,19 @@ struct drm_pagemap_migrate_details {
u32 source_peer_migrates : 1;
};
+/**
+ * drm_pagemap_page_zone_device_data() - Page to zone_device_data
+ * @page: Pointer to the page
+ *
+ * Return: Page's zone_device_data
+ */
+static inline struct drm_pagemap_zdd *drm_pagemap_page_zone_device_data(struct page *page)
+{
+ struct folio *folio = page_folio(page);
+
+ return folio_zone_device_data(folio);
+}
+
#if IS_ENABLED(CONFIG_ZONE_DEVICE)
int drm_pagemap_migrate_to_devmem(struct drm_pagemap_devmem *devmem_allocation,
--
2.34.1
* [PATCH v4 2/4] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM
2026-02-05 4:19 [PATCH v4 0/4] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
2026-02-05 4:19 ` [PATCH v4 1/4] drm/pagemap: Add helper to access zone_device_data Matthew Brost
@ 2026-02-05 4:19 ` Matthew Brost
2026-02-09 9:44 ` Thomas Hellström
2026-02-05 4:19 ` [PATCH v4 3/4] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
` (4 subsequent siblings)
6 siblings, 1 reply; 20+ messages in thread
From: Matthew Brost @ 2026-02-05 4:19 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: leonro, jgg, francois.dugast, thomas.hellstrom,
himal.prasad.ghimiray
The dma-map IOVA alloc, link, and sync APIs perform significantly better
than dma-map / dma-unmap, as they avoid costly IOMMU synchronizations.
This difference is especially noticeable when mapping a 2MB region in
4KB pages.
Use the IOVA alloc, link, and sync APIs for GPU SVM, which create DMA
mappings between the CPU and GPU.
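The corresponding teardown unlinks and frees the IOVA in one shot when
dma_use_iova() reports that the fast path was taken, falling back to
per-page unmap otherwise. A hedged sketch (not compilable standalone;
mapped_size, addr[], and dir are assumed):

```c
	if (dma_use_iova(&state)) {
		/* Unlink the whole linked span, then release the IOVA space */
		dma_iova_unlink(dev, &state, 0, mapped_size, dir, 0);
		dma_iova_free(dev, &state);
	} else {
		/* Fallback: pages were mapped individually via dma_map_page() */
		for (i = 0; i < npages; i++)
			dma_unmap_page(dev, addr[i], PAGE_SIZE, dir);
	}
```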
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
v3:
- Always link IOVA in mixed mappings
- Sync IOVA
v4:
- Initialize IOVA state in get_pages
- Use pack IOVA linking (Jason)
- s/page_to_phys/hmm_pfn_to_phys (Leon)
drivers/gpu/drm/drm_gpusvm.c | 55 ++++++++++++++++++++++++++++++------
include/drm/drm_gpusvm.h | 5 ++++
2 files changed, 52 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 4b8130a4ce95..800caaf0a783 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -1139,11 +1139,19 @@ static void __drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_pages_flags flags = {
.__flags = svm_pages->flags.__flags,
};
+ bool use_iova = dma_use_iova(&svm_pages->state);
+
+ if (use_iova) {
+ dma_iova_unlink(dev, &svm_pages->state, 0,
+ svm_pages->state_offset,
+ svm_pages->dma_addr[0].dir, 0);
+ dma_iova_free(dev, &svm_pages->state);
+ }
for (i = 0, j = 0; i < npages; j++) {
struct drm_pagemap_addr *addr = &svm_pages->dma_addr[j];
- if (addr->proto == DRM_INTERCONNECT_SYSTEM)
+ if (!use_iova && addr->proto == DRM_INTERCONNECT_SYSTEM)
dma_unmap_page(dev,
addr->addr,
PAGE_SIZE << addr->order,
@@ -1408,6 +1416,7 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
struct drm_gpusvm_pages_flags flags;
enum dma_data_direction dma_dir = ctx->read_only ? DMA_TO_DEVICE :
DMA_BIDIRECTIONAL;
+ struct dma_iova_state *state = &svm_pages->state;
retry:
if (time_after(jiffies, timeout))
@@ -1446,6 +1455,9 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
if (err)
goto err_free;
+ *state = (struct dma_iova_state){};
+ svm_pages->state_offset = 0;
+
map_pages:
/*
* Perform all dma mappings under the notifier lock to not
@@ -1539,13 +1551,33 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
goto err_unmap;
}
- addr = dma_map_page(gpusvm->drm->dev,
- page, 0,
- PAGE_SIZE << order,
- dma_dir);
- if (dma_mapping_error(gpusvm->drm->dev, addr)) {
- err = -EFAULT;
- goto err_unmap;
+ if (!i)
+ dma_iova_try_alloc(gpusvm->drm->dev, state,
+ npages * PAGE_SIZE >=
+ HPAGE_PMD_SIZE ?
+ HPAGE_PMD_SIZE : 0,
+ npages * PAGE_SIZE);
+
+ if (dma_use_iova(state)) {
+ err = dma_iova_link(gpusvm->drm->dev, state,
+ hmm_pfn_to_phys(pfns[i]),
+ svm_pages->state_offset,
+ PAGE_SIZE << order,
+ dma_dir, 0);
+ if (err)
+ goto err_unmap;
+
+ addr = state->addr + svm_pages->state_offset;
+ svm_pages->state_offset += PAGE_SIZE << order;
+ } else {
+ addr = dma_map_page(gpusvm->drm->dev,
+ page, 0,
+ PAGE_SIZE << order,
+ dma_dir);
+ if (dma_mapping_error(gpusvm->drm->dev, addr)) {
+ err = -EFAULT;
+ goto err_unmap;
+ }
}
svm_pages->dma_addr[j] = drm_pagemap_addr_encode
@@ -1557,6 +1589,13 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
flags.has_dma_mapping = true;
}
+ if (dma_use_iova(state)) {
+ err = dma_iova_sync(gpusvm->drm->dev, state, 0,
+ svm_pages->state_offset);
+ if (err)
+ goto err_unmap;
+ }
+
if (pagemap) {
flags.has_devmem_pages = true;
drm_pagemap_get(dpagemap);
diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
index 2578ac92a8d4..cd94bb2ee6ee 100644
--- a/include/drm/drm_gpusvm.h
+++ b/include/drm/drm_gpusvm.h
@@ -6,6 +6,7 @@
#ifndef __DRM_GPUSVM_H__
#define __DRM_GPUSVM_H__
+#include <linux/dma-mapping.h>
#include <linux/kref.h>
#include <linux/interval_tree.h>
#include <linux/mmu_notifier.h>
@@ -136,6 +137,8 @@ struct drm_gpusvm_pages_flags {
* @dma_addr: Device address array
* @dpagemap: The struct drm_pagemap of the device pages we're dma-mapping.
* Note this is assuming only one drm_pagemap per range is allowed.
+ * @state: DMA IOVA state for mapping.
+ * @state_offset: DMA IOVA offset for mapping.
* @notifier_seq: Notifier sequence number of the range's pages
* @flags: Flags for range
* @flags.migrate_devmem: Flag indicating whether the range can be migrated to device memory
@@ -147,6 +150,8 @@ struct drm_gpusvm_pages_flags {
struct drm_gpusvm_pages {
struct drm_pagemap_addr *dma_addr;
struct drm_pagemap *dpagemap;
+ struct dma_iova_state state;
+ unsigned long state_offset;
unsigned long notifier_seq;
struct drm_gpusvm_pages_flags flags;
};
--
2.34.1
* [PATCH v4 3/4] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
2026-02-05 4:19 [PATCH v4 0/4] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
2026-02-05 4:19 ` [PATCH v4 1/4] drm/pagemap: Add helper to access zone_device_data Matthew Brost
2026-02-05 4:19 ` [PATCH v4 2/4] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
@ 2026-02-05 4:19 ` Matthew Brost
2026-02-09 15:49 ` Thomas Hellström
2026-02-05 4:19 ` [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
` (3 subsequent siblings)
6 siblings, 1 reply; 20+ messages in thread
From: Matthew Brost @ 2026-02-05 4:19 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: leonro, jgg, francois.dugast, thomas.hellstrom,
himal.prasad.ghimiray
Split drm_pagemap_migrate_map_pages into device / system helpers, clearly
separating these operations. This will help with upcoming changes that
split out the IOVA allocation steps.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_pagemap.c | 146 ++++++++++++++++++++++------------
1 file changed, 96 insertions(+), 50 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index fbd69f383457..29677b19bb69 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -205,7 +205,7 @@ static void drm_pagemap_get_devmem_page(struct page *page,
}
/**
- * drm_pagemap_migrate_map_pages() - Map migration pages for GPU SVM migration
+ * drm_pagemap_migrate_map_device_pages() - Map device migration pages for GPU SVM migration
* @dev: The device performing the migration.
* @local_dpagemap: The drm_pagemap local to the migrating device.
* @pagemap_addr: Array to store DMA information corresponding to mapped pages.
@@ -221,19 +221,22 @@ static void drm_pagemap_get_devmem_page(struct page *page,
*
* Returns: 0 on success, -EFAULT if an error occurs during mapping.
*/
-static int drm_pagemap_migrate_map_pages(struct device *dev,
- struct drm_pagemap *local_dpagemap,
- struct drm_pagemap_addr *pagemap_addr,
- unsigned long *migrate_pfn,
- unsigned long npages,
- enum dma_data_direction dir,
- const struct drm_pagemap_migrate_details *mdetails)
+static int
+drm_pagemap_migrate_map_device_pages(struct device *dev,
+ struct drm_pagemap *local_dpagemap,
+ struct drm_pagemap_addr *pagemap_addr,
+ unsigned long *migrate_pfn,
+ unsigned long npages,
+ enum dma_data_direction dir,
+ const struct drm_pagemap_migrate_details *mdetails)
{
unsigned long num_peer_pages = 0, num_local_pages = 0, i;
for (i = 0; i < npages;) {
struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
- dma_addr_t dma_addr;
+ struct drm_pagemap_zdd *zdd;
+ struct drm_pagemap *dpagemap;
+ struct drm_pagemap_addr addr;
struct folio *folio;
unsigned int order = 0;
@@ -243,36 +246,26 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
folio = page_folio(page);
order = folio_order(folio);
- if (is_device_private_page(page)) {
- struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
- struct drm_pagemap *dpagemap = zdd->dpagemap;
- struct drm_pagemap_addr addr;
-
- if (dpagemap == local_dpagemap) {
- if (!mdetails->can_migrate_same_pagemap)
- goto next;
+ WARN_ON_ONCE(!is_device_private_page(page));
- num_local_pages += NR_PAGES(order);
- } else {
- num_peer_pages += NR_PAGES(order);
- }
+ zdd = drm_pagemap_page_zone_device_data(page);
+ dpagemap = zdd->dpagemap;
- addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
- if (dma_mapping_error(dev, addr.addr))
- return -EFAULT;
+ if (dpagemap == local_dpagemap) {
+ if (!mdetails->can_migrate_same_pagemap)
+ goto next;
- pagemap_addr[i] = addr;
+ num_local_pages += NR_PAGES(order);
} else {
- dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
- if (dma_mapping_error(dev, dma_addr))
- return -EFAULT;
-
- pagemap_addr[i] =
- drm_pagemap_addr_encode(dma_addr,
- DRM_INTERCONNECT_SYSTEM,
- order, dir);
+ num_peer_pages += NR_PAGES(order);
}
+ addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
+ if (dma_mapping_error(dev, addr.addr))
+ return -EFAULT;
+
+ pagemap_addr[i] = addr;
+
next:
i += NR_PAGES(order);
}
@@ -287,6 +280,59 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
return 0;
}
+/**
+ * drm_pagemap_migrate_map_system_pages() - Map system migration pages for GPU SVM migration
+ * @dev: The device performing the migration.
+ * @pagemap_addr: Array to store DMA information corresponding to mapped pages.
+ * @migrate_pfn: Array of page frame numbers of system pages or peer pages to map.
+ * @npages: Number of system pages or peer pages to map.
+ * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
+ *
+ * This function maps pages of memory for migration usage in GPU SVM. It
+ * iterates over each page frame number provided in @migrate_pfn, maps the
+ * corresponding page, and stores the DMA address in the provided
+ * @pagemap_addr array.
+ *
+ * Returns: 0 on success, -EFAULT if an error occurs during mapping.
+ */
+static int
+drm_pagemap_migrate_map_system_pages(struct device *dev,
+ struct drm_pagemap_addr *pagemap_addr,
+ unsigned long *migrate_pfn,
+ unsigned long npages,
+ enum dma_data_direction dir)
+{
+ unsigned long i;
+
+ for (i = 0; i < npages;) {
+ struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
+ dma_addr_t dma_addr;
+ struct folio *folio;
+ unsigned int order = 0;
+
+ if (!page)
+ goto next;
+
+ WARN_ON_ONCE(is_device_private_page(page));
+ folio = page_folio(page);
+ order = folio_order(folio);
+
+ dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
+ if (dma_mapping_error(dev, dma_addr))
+ return -EFAULT;
+
+ pagemap_addr[i] =
+ drm_pagemap_addr_encode(dma_addr,
+ DRM_INTERCONNECT_SYSTEM,
+ order, dir);
+
+next:
+ i += NR_PAGES(order);
+ }
+
+ return 0;
+}
+
/**
* drm_pagemap_migrate_unmap_pages() - Unmap pages previously mapped for GPU SVM migration
* @dev: The device for which the pages were mapped
@@ -347,9 +393,11 @@ drm_pagemap_migrate_remote_to_local(struct drm_pagemap_devmem *devmem,
const struct drm_pagemap_migrate_details *mdetails)
{
- int err = drm_pagemap_migrate_map_pages(remote_device, remote_dpagemap,
- pagemap_addr, local_pfns,
- npages, DMA_FROM_DEVICE, mdetails);
+ int err = drm_pagemap_migrate_map_device_pages(remote_device,
+ remote_dpagemap,
+ pagemap_addr, local_pfns,
+ npages, DMA_FROM_DEVICE,
+ mdetails);
if (err)
goto out;
@@ -368,12 +416,11 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
struct page *local_pages[],
struct drm_pagemap_addr pagemap_addr[],
unsigned long npages,
- const struct drm_pagemap_devmem_ops *ops,
- const struct drm_pagemap_migrate_details *mdetails)
+ const struct drm_pagemap_devmem_ops *ops)
{
- int err = drm_pagemap_migrate_map_pages(devmem->dev, devmem->dpagemap,
- pagemap_addr, sys_pfns, npages,
- DMA_TO_DEVICE, mdetails);
+ int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
+ pagemap_addr, sys_pfns,
+ npages, DMA_TO_DEVICE);
if (err)
goto out;
@@ -437,7 +484,7 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
&pages[last->start],
&pagemap_addr[last->start],
cur->start - last->start,
- last->ops, mdetails);
+ last->ops);
out:
*last = *cur;
@@ -954,7 +1001,6 @@ EXPORT_SYMBOL(drm_pagemap_put);
int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
{
const struct drm_pagemap_devmem_ops *ops = devmem_allocation->ops;
- struct drm_pagemap_migrate_details mdetails = {};
unsigned long npages, mpages = 0;
struct page **pages;
unsigned long *src, *dst;
@@ -993,10 +1039,10 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
if (err || !mpages)
goto err_finalize;
- err = drm_pagemap_migrate_map_pages(devmem_allocation->dev,
- devmem_allocation->dpagemap, pagemap_addr,
- dst, npages, DMA_FROM_DEVICE,
- &mdetails);
+ err = drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
+ pagemap_addr,
+ dst, npages,
+ DMA_FROM_DEVICE);
if (err)
goto err_finalize;
@@ -1057,7 +1103,6 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
MIGRATE_VMA_SELECT_DEVICE_COHERENT,
.fault_page = page,
};
- struct drm_pagemap_migrate_details mdetails = {};
struct drm_pagemap_zdd *zdd;
const struct drm_pagemap_devmem_ops *ops;
struct device *dev = NULL;
@@ -1115,8 +1160,9 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
if (err)
goto err_finalize;
- err = drm_pagemap_migrate_map_pages(dev, zdd->dpagemap, pagemap_addr, migrate.dst, npages,
- DMA_FROM_DEVICE, &mdetails);
+ err = drm_pagemap_migrate_map_system_pages(dev, pagemap_addr,
+ migrate.dst, npages,
+ DMA_FROM_DEVICE);
if (err)
goto err_finalize;
--
2.34.1
* [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
2026-02-05 4:19 [PATCH v4 0/4] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
` (2 preceding siblings ...)
2026-02-05 4:19 ` [PATCH v4 3/4] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
@ 2026-02-05 4:19 ` Matthew Brost
2026-02-11 11:34 ` Thomas Hellström
2026-02-05 6:24 ` ✓ CI.KUnit: success for Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev4) Patchwork
` (2 subsequent siblings)
6 siblings, 1 reply; 20+ messages in thread
From: Matthew Brost @ 2026-02-05 4:19 UTC (permalink / raw)
To: intel-xe, dri-devel
Cc: leonro, jgg, francois.dugast, thomas.hellstrom,
himal.prasad.ghimiray
The dma-map IOVA alloc, link, and sync APIs perform significantly better
than dma-map / dma-unmap, as they avoid costly IOMMU synchronizations.
This difference is especially noticeable when mapping a 2MB region in
4KB pages.
Use the IOVA alloc, link, and sync APIs for DRM pagemap, which create DMA
mappings between the CPU and GPU for copying data.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
v4:
- Pack IOVA and drop dummy page (Jason)
drivers/gpu/drm/drm_pagemap.c | 84 +++++++++++++++++++++++++++++------
1 file changed, 70 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
index 29677b19bb69..52a196bc8459 100644
--- a/drivers/gpu/drm/drm_pagemap.c
+++ b/drivers/gpu/drm/drm_pagemap.c
@@ -280,6 +280,20 @@ drm_pagemap_migrate_map_device_pages(struct device *dev,
return 0;
}
+/**
+ * struct drm_pagemap_iova_state - DRM pagemap IOVA state
+ *
+ * @dma_state: DMA IOVA state.
+ * @offset: Current offset in IOVA.
+ *
+ * This structure acts as an iterator for packing all IOVA addresses within a
+ * contiguous range.
+ */
+struct drm_pagemap_iova_state {
+ struct dma_iova_state dma_state;
+ unsigned long offset;
+};
+
/**
* drm_pagemap_migrate_map_system_pages() - Map system migration pages for GPU SVM migration
* @dev: The device performing the migration.
@@ -287,6 +301,7 @@ drm_pagemap_migrate_map_device_pages(struct device *dev,
* @migrate_pfn: Array of page frame numbers of system pages or peer pages to map.
* @npages: Number of system pages or peer pages to map.
* @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
+ * @state: DMA IOVA state for mapping.
*
* This function maps pages of memory for migration usage in GPU SVM. It
* iterates over each page frame number provided in @migrate_pfn, maps the
@@ -300,9 +315,11 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
struct drm_pagemap_addr *pagemap_addr,
unsigned long *migrate_pfn,
unsigned long npages,
- enum dma_data_direction dir)
+ enum dma_data_direction dir,
+ struct drm_pagemap_iova_state *state)
{
unsigned long i;
+ bool try_alloc = false;
for (i = 0; i < npages;) {
struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
@@ -317,9 +334,31 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
folio = page_folio(page);
order = folio_order(folio);
- dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
- if (dma_mapping_error(dev, dma_addr))
- return -EFAULT;
+ if (!try_alloc) {
+ dma_iova_try_alloc(dev, &state->dma_state,
+ npages * PAGE_SIZE >=
+ HPAGE_PMD_SIZE ?
+ HPAGE_PMD_SIZE : 0,
+ npages * PAGE_SIZE);
+ try_alloc = true;
+ }
+
+ if (dma_use_iova(&state->dma_state)) {
+ int err = dma_iova_link(dev, &state->dma_state,
+ page_to_phys(page),
+ state->offset, page_size(page),
+ dir, 0);
+ if (err)
+ return err;
+
+ dma_addr = state->dma_state.addr + state->offset;
+ state->offset += page_size(page);
+ } else {
+ dma_addr = dma_map_page(dev, page, 0, page_size(page),
+ dir);
+ if (dma_mapping_error(dev, dma_addr))
+ return -EFAULT;
+ }
pagemap_addr[i] =
drm_pagemap_addr_encode(dma_addr,
@@ -330,6 +369,9 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
i += NR_PAGES(order);
}
+ if (dma_use_iova(&state->dma_state))
+ return dma_iova_sync(dev, &state->dma_state, 0, state->offset);
+
return 0;
}
@@ -341,6 +383,7 @@ drm_pagemap_migrate_map_system_pages(struct device *dev,
* @pagemap_addr: Array of DMA information corresponding to mapped pages
* @npages: Number of pages to unmap
* @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
+ * @state: DMA IOVA state for mapping.
*
* This function unmaps previously mapped pages of memory for GPU Shared Virtual
* Memory (SVM). It iterates over each DMA address provided in @dma_addr, checks
@@ -350,10 +393,17 @@ static void drm_pagemap_migrate_unmap_pages(struct device *dev,
struct drm_pagemap_addr *pagemap_addr,
unsigned long *migrate_pfn,
unsigned long npages,
- enum dma_data_direction dir)
+ enum dma_data_direction dir,
+ struct drm_pagemap_iova_state *state)
{
unsigned long i;
+ if (state && dma_use_iova(&state->dma_state)) {
+ dma_iova_unlink(dev, &state->dma_state, 0, state->offset, dir, 0);
+ dma_iova_free(dev, &state->dma_state);
+ return;
+ }
+
for (i = 0; i < npages;) {
struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
@@ -406,7 +456,7 @@ drm_pagemap_migrate_remote_to_local(struct drm_pagemap_devmem *devmem,
devmem->pre_migrate_fence);
out:
drm_pagemap_migrate_unmap_pages(remote_device, pagemap_addr, local_pfns,
- npages, DMA_FROM_DEVICE);
+ npages, DMA_FROM_DEVICE, NULL);
return err;
}
@@ -416,11 +466,13 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
struct page *local_pages[],
struct drm_pagemap_addr pagemap_addr[],
unsigned long npages,
- const struct drm_pagemap_devmem_ops *ops)
+ const struct drm_pagemap_devmem_ops *ops,
+ struct drm_pagemap_iova_state *state)
{
int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
pagemap_addr, sys_pfns,
- npages, DMA_TO_DEVICE);
+ npages, DMA_TO_DEVICE,
+ state);
if (err)
goto out;
@@ -429,7 +481,7 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
devmem->pre_migrate_fence);
out:
drm_pagemap_migrate_unmap_pages(devmem->dev, pagemap_addr, sys_pfns, npages,
- DMA_TO_DEVICE);
+ DMA_TO_DEVICE, state);
return err;
}
@@ -457,6 +509,7 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
const struct migrate_range_loc *cur,
const struct drm_pagemap_migrate_details *mdetails)
{
+ struct drm_pagemap_iova_state state = {};
int ret = 0;
if (cur->start == 0)
@@ -484,7 +537,7 @@ static int drm_pagemap_migrate_range(struct drm_pagemap_devmem *devmem,
&pages[last->start],
&pagemap_addr[last->start],
cur->start - last->start,
- last->ops);
+ last->ops, &state);
out:
*last = *cur;
@@ -1001,6 +1054,7 @@ EXPORT_SYMBOL(drm_pagemap_put);
int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
{
const struct drm_pagemap_devmem_ops *ops = devmem_allocation->ops;
+ struct drm_pagemap_iova_state state = {};
unsigned long npages, mpages = 0;
struct page **pages;
unsigned long *src, *dst;
@@ -1042,7 +1096,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
err = drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
pagemap_addr,
dst, npages,
- DMA_FROM_DEVICE);
+ DMA_FROM_DEVICE, &state);
if (err)
goto err_finalize;
@@ -1059,7 +1113,7 @@ int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem *devmem_allocation)
migrate_device_pages(src, dst, npages);
migrate_device_finalize(src, dst, npages);
drm_pagemap_migrate_unmap_pages(devmem_allocation->dev, pagemap_addr, dst, npages,
- DMA_FROM_DEVICE);
+ DMA_FROM_DEVICE, &state);
err_free:
kvfree(buf);
@@ -1103,6 +1157,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
MIGRATE_VMA_SELECT_DEVICE_COHERENT,
.fault_page = page,
};
+ struct drm_pagemap_iova_state state = {};
struct drm_pagemap_zdd *zdd;
const struct drm_pagemap_devmem_ops *ops;
struct device *dev = NULL;
@@ -1162,7 +1217,7 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
err = drm_pagemap_migrate_map_system_pages(dev, pagemap_addr,
migrate.dst, npages,
- DMA_FROM_DEVICE);
+ DMA_FROM_DEVICE, &state);
if (err)
goto err_finalize;
@@ -1180,7 +1235,8 @@ static int __drm_pagemap_migrate_to_ram(struct vm_area_struct *vas,
migrate_vma_finalize(&migrate);
if (dev)
drm_pagemap_migrate_unmap_pages(dev, pagemap_addr, migrate.dst,
- npages, DMA_FROM_DEVICE);
+ npages, DMA_FROM_DEVICE,
+ &state);
err_free:
kvfree(buf);
err_out:
--
2.34.1
* ✓ CI.KUnit: success for Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev4)
2026-02-05 4:19 [PATCH v4 0/4] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
` (3 preceding siblings ...)
2026-02-05 4:19 ` [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
@ 2026-02-05 6:24 ` Patchwork
2026-02-05 7:38 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-06 1:06 ` ✗ Xe.CI.FULL: failure " Patchwork
6 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2026-02-05 6:24 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev4)
URL : https://patchwork.freedesktop.org/series/160587/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[06:23:03] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[06:23:07] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[06:23:39] Starting KUnit Kernel (1/1)...
[06:23:39] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[06:23:40] ================== guc_buf (11 subtests) ===================
[06:23:40] [PASSED] test_smallest
[06:23:40] [PASSED] test_largest
[06:23:40] [PASSED] test_granular
[06:23:40] [PASSED] test_unique
[06:23:40] [PASSED] test_overlap
[06:23:40] [PASSED] test_reusable
[06:23:40] [PASSED] test_too_big
[06:23:40] [PASSED] test_flush
[06:23:40] [PASSED] test_lookup
[06:23:40] [PASSED] test_data
[06:23:40] [PASSED] test_class
[06:23:40] ===================== [PASSED] guc_buf =====================
[06:23:40] =================== guc_dbm (7 subtests) ===================
[06:23:40] [PASSED] test_empty
[06:23:40] [PASSED] test_default
[06:23:40] ======================== test_size ========================
[06:23:40] [PASSED] 4
[06:23:40] [PASSED] 8
[06:23:40] [PASSED] 32
[06:23:40] [PASSED] 256
[06:23:40] ==================== [PASSED] test_size ====================
[06:23:40] ======================= test_reuse ========================
[06:23:40] [PASSED] 4
[06:23:40] [PASSED] 8
[06:23:40] [PASSED] 32
[06:23:40] [PASSED] 256
[06:23:40] =================== [PASSED] test_reuse ====================
[06:23:40] =================== test_range_overlap ====================
[06:23:40] [PASSED] 4
[06:23:40] [PASSED] 8
[06:23:40] [PASSED] 32
[06:23:40] [PASSED] 256
[06:23:40] =============== [PASSED] test_range_overlap ================
[06:23:40] =================== test_range_compact ====================
[06:23:40] [PASSED] 4
[06:23:40] [PASSED] 8
[06:23:40] [PASSED] 32
[06:23:40] [PASSED] 256
[06:23:40] =============== [PASSED] test_range_compact ================
[06:23:40] ==================== test_range_spare =====================
[06:23:40] [PASSED] 4
[06:23:40] [PASSED] 8
[06:23:40] [PASSED] 32
[06:23:40] [PASSED] 256
[06:23:40] ================ [PASSED] test_range_spare =================
[06:23:40] ===================== [PASSED] guc_dbm =====================
[06:23:40] =================== guc_idm (6 subtests) ===================
[06:23:40] [PASSED] bad_init
[06:23:40] [PASSED] no_init
[06:23:40] [PASSED] init_fini
[06:23:40] [PASSED] check_used
[06:23:40] [PASSED] check_quota
[06:23:40] [PASSED] check_all
[06:23:40] ===================== [PASSED] guc_idm =====================
[06:23:40] ================== no_relay (3 subtests) ===================
[06:23:40] [PASSED] xe_drops_guc2pf_if_not_ready
[06:23:40] [PASSED] xe_drops_guc2vf_if_not_ready
[06:23:40] [PASSED] xe_rejects_send_if_not_ready
[06:23:40] ==================== [PASSED] no_relay =====================
[06:23:40] ================== pf_relay (14 subtests) ==================
[06:23:40] [PASSED] pf_rejects_guc2pf_too_short
[06:23:40] [PASSED] pf_rejects_guc2pf_too_long
[06:23:40] [PASSED] pf_rejects_guc2pf_no_payload
[06:23:40] [PASSED] pf_fails_no_payload
[06:23:40] [PASSED] pf_fails_bad_origin
[06:23:40] [PASSED] pf_fails_bad_type
[06:23:40] [PASSED] pf_txn_reports_error
[06:23:40] [PASSED] pf_txn_sends_pf2guc
[06:23:40] [PASSED] pf_sends_pf2guc
[06:23:40] [SKIPPED] pf_loopback_nop
[06:23:40] [SKIPPED] pf_loopback_echo
[06:23:40] [SKIPPED] pf_loopback_fail
[06:23:40] [SKIPPED] pf_loopback_busy
[06:23:40] [SKIPPED] pf_loopback_retry
[06:23:40] ==================== [PASSED] pf_relay =====================
[06:23:40] ================== vf_relay (3 subtests) ===================
[06:23:40] [PASSED] vf_rejects_guc2vf_too_short
[06:23:40] [PASSED] vf_rejects_guc2vf_too_long
[06:23:40] [PASSED] vf_rejects_guc2vf_no_payload
[06:23:40] ==================== [PASSED] vf_relay =====================
[06:23:40] ================ pf_gt_config (6 subtests) =================
[06:23:40] [PASSED] fair_contexts_1vf
[06:23:40] [PASSED] fair_doorbells_1vf
[06:23:40] [PASSED] fair_ggtt_1vf
[06:23:40] ====================== fair_contexts ======================
[06:23:40] [PASSED] 1 VF
[06:23:40] [PASSED] 2 VFs
[06:23:40] [PASSED] 3 VFs
[06:23:40] [PASSED] 4 VFs
[06:23:40] [PASSED] 5 VFs
[06:23:40] [PASSED] 6 VFs
[06:23:40] [PASSED] 7 VFs
[06:23:40] [PASSED] 8 VFs
[06:23:40] [PASSED] 9 VFs
[06:23:40] [PASSED] 10 VFs
[06:23:40] [PASSED] 11 VFs
[06:23:40] [PASSED] 12 VFs
[06:23:40] [PASSED] 13 VFs
[06:23:40] [PASSED] 14 VFs
[06:23:40] [PASSED] 15 VFs
[06:23:40] [PASSED] 16 VFs
[06:23:40] [PASSED] 17 VFs
[06:23:40] [PASSED] 18 VFs
[06:23:40] [PASSED] 19 VFs
[06:23:40] [PASSED] 20 VFs
[06:23:40] [PASSED] 21 VFs
[06:23:40] [PASSED] 22 VFs
[06:23:40] [PASSED] 23 VFs
[06:23:40] [PASSED] 24 VFs
[06:23:40] [PASSED] 25 VFs
[06:23:40] [PASSED] 26 VFs
[06:23:40] [PASSED] 27 VFs
[06:23:40] [PASSED] 28 VFs
[06:23:40] [PASSED] 29 VFs
[06:23:40] [PASSED] 30 VFs
[06:23:40] [PASSED] 31 VFs
[06:23:40] [PASSED] 32 VFs
[06:23:40] [PASSED] 33 VFs
[06:23:40] [PASSED] 34 VFs
[06:23:40] [PASSED] 35 VFs
[06:23:40] [PASSED] 36 VFs
[06:23:40] [PASSED] 37 VFs
[06:23:40] [PASSED] 38 VFs
[06:23:40] [PASSED] 39 VFs
[06:23:40] [PASSED] 40 VFs
[06:23:40] [PASSED] 41 VFs
[06:23:40] [PASSED] 42 VFs
[06:23:40] [PASSED] 43 VFs
[06:23:40] [PASSED] 44 VFs
[06:23:40] [PASSED] 45 VFs
[06:23:40] [PASSED] 46 VFs
[06:23:40] [PASSED] 47 VFs
[06:23:40] [PASSED] 48 VFs
[06:23:40] [PASSED] 49 VFs
[06:23:40] [PASSED] 50 VFs
[06:23:40] [PASSED] 51 VFs
[06:23:40] [PASSED] 52 VFs
[06:23:40] [PASSED] 53 VFs
[06:23:40] [PASSED] 54 VFs
[06:23:40] [PASSED] 55 VFs
[06:23:40] [PASSED] 56 VFs
[06:23:40] [PASSED] 57 VFs
[06:23:40] [PASSED] 58 VFs
[06:23:40] [PASSED] 59 VFs
[06:23:40] [PASSED] 60 VFs
[06:23:40] [PASSED] 61 VFs
[06:23:40] [PASSED] 62 VFs
[06:23:40] [PASSED] 63 VFs
[06:23:40] ================== [PASSED] fair_contexts ==================
[06:23:40] ===================== fair_doorbells ======================
[06:23:40] [PASSED] 1 VF
[06:23:40] [PASSED] 2 VFs
[06:23:40] [PASSED] 3 VFs
[06:23:40] [PASSED] 4 VFs
[06:23:40] [PASSED] 5 VFs
[06:23:40] [PASSED] 6 VFs
[06:23:40] [PASSED] 7 VFs
[06:23:40] [PASSED] 8 VFs
[06:23:40] [PASSED] 9 VFs
[06:23:40] [PASSED] 10 VFs
[06:23:40] [PASSED] 11 VFs
[06:23:40] [PASSED] 12 VFs
[06:23:40] [PASSED] 13 VFs
[06:23:40] [PASSED] 14 VFs
[06:23:40] [PASSED] 15 VFs
[06:23:40] [PASSED] 16 VFs
[06:23:40] [PASSED] 17 VFs
[06:23:40] [PASSED] 18 VFs
[06:23:40] [PASSED] 19 VFs
[06:23:40] [PASSED] 20 VFs
[06:23:40] [PASSED] 21 VFs
[06:23:40] [PASSED] 22 VFs
[06:23:40] [PASSED] 23 VFs
[06:23:40] [PASSED] 24 VFs
[06:23:40] [PASSED] 25 VFs
[06:23:40] [PASSED] 26 VFs
[06:23:40] [PASSED] 27 VFs
[06:23:40] [PASSED] 28 VFs
[06:23:40] [PASSED] 29 VFs
[06:23:40] [PASSED] 30 VFs
[06:23:40] [PASSED] 31 VFs
[06:23:40] [PASSED] 32 VFs
[06:23:40] [PASSED] 33 VFs
[06:23:40] [PASSED] 34 VFs
[06:23:40] [PASSED] 35 VFs
[06:23:40] [PASSED] 36 VFs
[06:23:40] [PASSED] 37 VFs
[06:23:40] [PASSED] 38 VFs
[06:23:40] [PASSED] 39 VFs
[06:23:40] [PASSED] 40 VFs
[06:23:40] [PASSED] 41 VFs
[06:23:40] [PASSED] 42 VFs
[06:23:40] [PASSED] 43 VFs
[06:23:40] [PASSED] 44 VFs
[06:23:40] [PASSED] 45 VFs
[06:23:40] [PASSED] 46 VFs
[06:23:40] [PASSED] 47 VFs
[06:23:40] [PASSED] 48 VFs
[06:23:40] [PASSED] 49 VFs
[06:23:40] [PASSED] 50 VFs
[06:23:40] [PASSED] 51 VFs
[06:23:40] [PASSED] 52 VFs
[06:23:40] [PASSED] 53 VFs
[06:23:40] [PASSED] 54 VFs
[06:23:40] [PASSED] 55 VFs
[06:23:40] [PASSED] 56 VFs
[06:23:40] [PASSED] 57 VFs
[06:23:40] [PASSED] 58 VFs
[06:23:40] [PASSED] 59 VFs
[06:23:40] [PASSED] 60 VFs
[06:23:40] [PASSED] 61 VFs
[06:23:40] [PASSED] 62 VFs
[06:23:40] [PASSED] 63 VFs
[06:23:40] ================= [PASSED] fair_doorbells ==================
[06:23:40] ======================== fair_ggtt ========================
[06:23:40] [PASSED] 1 VF
[06:23:40] [PASSED] 2 VFs
[06:23:40] [PASSED] 3 VFs
[06:23:40] [PASSED] 4 VFs
[06:23:40] [PASSED] 5 VFs
[06:23:40] [PASSED] 6 VFs
[06:23:40] [PASSED] 7 VFs
[06:23:40] [PASSED] 8 VFs
[06:23:40] [PASSED] 9 VFs
[06:23:40] [PASSED] 10 VFs
[06:23:40] [PASSED] 11 VFs
[06:23:40] [PASSED] 12 VFs
[06:23:40] [PASSED] 13 VFs
[06:23:40] [PASSED] 14 VFs
[06:23:40] [PASSED] 15 VFs
[06:23:40] [PASSED] 16 VFs
[06:23:40] [PASSED] 17 VFs
[06:23:40] [PASSED] 18 VFs
[06:23:40] [PASSED] 19 VFs
[06:23:40] [PASSED] 20 VFs
[06:23:40] [PASSED] 21 VFs
[06:23:40] [PASSED] 22 VFs
[06:23:40] [PASSED] 23 VFs
[06:23:40] [PASSED] 24 VFs
[06:23:40] [PASSED] 25 VFs
[06:23:40] [PASSED] 26 VFs
[06:23:40] [PASSED] 27 VFs
[06:23:40] [PASSED] 28 VFs
[06:23:40] [PASSED] 29 VFs
[06:23:40] [PASSED] 30 VFs
[06:23:40] [PASSED] 31 VFs
[06:23:40] [PASSED] 32 VFs
[06:23:40] [PASSED] 33 VFs
[06:23:40] [PASSED] 34 VFs
[06:23:40] [PASSED] 35 VFs
[06:23:40] [PASSED] 36 VFs
[06:23:40] [PASSED] 37 VFs
[06:23:40] [PASSED] 38 VFs
[06:23:40] [PASSED] 39 VFs
[06:23:40] [PASSED] 40 VFs
[06:23:40] [PASSED] 41 VFs
[06:23:40] [PASSED] 42 VFs
[06:23:40] [PASSED] 43 VFs
[06:23:40] [PASSED] 44 VFs
[06:23:40] [PASSED] 45 VFs
[06:23:40] [PASSED] 46 VFs
[06:23:40] [PASSED] 47 VFs
[06:23:40] [PASSED] 48 VFs
[06:23:40] [PASSED] 49 VFs
[06:23:40] [PASSED] 50 VFs
[06:23:40] [PASSED] 51 VFs
[06:23:40] [PASSED] 52 VFs
[06:23:40] [PASSED] 53 VFs
[06:23:40] [PASSED] 54 VFs
[06:23:40] [PASSED] 55 VFs
[06:23:40] [PASSED] 56 VFs
[06:23:40] [PASSED] 57 VFs
[06:23:40] [PASSED] 58 VFs
[06:23:40] [PASSED] 59 VFs
[06:23:40] [PASSED] 60 VFs
[06:23:40] [PASSED] 61 VFs
[06:23:40] [PASSED] 62 VFs
[06:23:40] [PASSED] 63 VFs
[06:23:40] ==================== [PASSED] fair_ggtt ====================
[06:23:40] ================== [PASSED] pf_gt_config ===================
[06:23:40] ===================== lmtt (1 subtest) =====================
[06:23:40] ======================== test_ops =========================
[06:23:40] [PASSED] 2-level
[06:23:40] [PASSED] multi-level
[06:23:40] ==================== [PASSED] test_ops =====================
[06:23:40] ====================== [PASSED] lmtt =======================
[06:23:40] ================= pf_service (11 subtests) =================
[06:23:40] [PASSED] pf_negotiate_any
[06:23:40] [PASSED] pf_negotiate_base_match
[06:23:40] [PASSED] pf_negotiate_base_newer
[06:23:40] [PASSED] pf_negotiate_base_next
[06:23:40] [SKIPPED] pf_negotiate_base_older
[06:23:40] [PASSED] pf_negotiate_base_prev
[06:23:40] [PASSED] pf_negotiate_latest_match
[06:23:40] [PASSED] pf_negotiate_latest_newer
[06:23:40] [PASSED] pf_negotiate_latest_next
[06:23:40] [SKIPPED] pf_negotiate_latest_older
[06:23:40] [SKIPPED] pf_negotiate_latest_prev
[06:23:40] =================== [PASSED] pf_service ====================
[06:23:40] ================= xe_guc_g2g (2 subtests) ==================
[06:23:40] ============== xe_live_guc_g2g_kunit_default ==============
[06:23:40] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[06:23:40] ============== xe_live_guc_g2g_kunit_allmem ===============
[06:23:40] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[06:23:40] =================== [SKIPPED] xe_guc_g2g ===================
[06:23:40] =================== xe_mocs (2 subtests) ===================
[06:23:40] ================ xe_live_mocs_kernel_kunit ================
[06:23:40] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[06:23:40] ================ xe_live_mocs_reset_kunit =================
[06:23:40] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[06:23:40] ==================== [SKIPPED] xe_mocs =====================
[06:23:40] ================= xe_migrate (2 subtests) ==================
[06:23:40] ================= xe_migrate_sanity_kunit =================
[06:23:40] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[06:23:40] ================== xe_validate_ccs_kunit ==================
[06:23:40] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[06:23:40] =================== [SKIPPED] xe_migrate ===================
[06:23:40] ================== xe_dma_buf (1 subtest) ==================
[06:23:40] ==================== xe_dma_buf_kunit =====================
[06:23:40] ================ [SKIPPED] xe_dma_buf_kunit ================
[06:23:40] =================== [SKIPPED] xe_dma_buf ===================
[06:23:40] ================= xe_bo_shrink (1 subtest) =================
[06:23:40] =================== xe_bo_shrink_kunit ====================
[06:23:40] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[06:23:40] ================== [SKIPPED] xe_bo_shrink ==================
[06:23:40] ==================== xe_bo (2 subtests) ====================
[06:23:40] ================== xe_ccs_migrate_kunit ===================
[06:23:40] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[06:23:40] ==================== xe_bo_evict_kunit ====================
[06:23:40] =============== [SKIPPED] xe_bo_evict_kunit ================
[06:23:40] ===================== [SKIPPED] xe_bo ======================
[06:23:40] ==================== args (13 subtests) ====================
[06:23:40] [PASSED] count_args_test
[06:23:40] [PASSED] call_args_example
[06:23:40] [PASSED] call_args_test
[06:23:40] [PASSED] drop_first_arg_example
[06:23:40] [PASSED] drop_first_arg_test
[06:23:40] [PASSED] first_arg_example
[06:23:40] [PASSED] first_arg_test
[06:23:40] [PASSED] last_arg_example
[06:23:40] [PASSED] last_arg_test
[06:23:40] [PASSED] pick_arg_example
[06:23:40] [PASSED] if_args_example
[06:23:40] [PASSED] if_args_test
[06:23:40] [PASSED] sep_comma_example
[06:23:40] ====================== [PASSED] args =======================
[06:23:40] =================== xe_pci (3 subtests) ====================
[06:23:40] ==================== check_graphics_ip ====================
[06:23:40] [PASSED] 12.00 Xe_LP
[06:23:40] [PASSED] 12.10 Xe_LP+
[06:23:40] [PASSED] 12.55 Xe_HPG
[06:23:40] [PASSED] 12.60 Xe_HPC
[06:23:40] [PASSED] 12.70 Xe_LPG
[06:23:40] [PASSED] 12.71 Xe_LPG
[06:23:40] [PASSED] 12.74 Xe_LPG+
[06:23:40] [PASSED] 20.01 Xe2_HPG
[06:23:40] [PASSED] 20.02 Xe2_HPG
[06:23:40] [PASSED] 20.04 Xe2_LPG
[06:23:40] [PASSED] 30.00 Xe3_LPG
[06:23:40] [PASSED] 30.01 Xe3_LPG
[06:23:40] [PASSED] 30.03 Xe3_LPG
[06:23:40] [PASSED] 30.04 Xe3_LPG
[06:23:40] [PASSED] 30.05 Xe3_LPG
[06:23:40] [PASSED] 35.11 Xe3p_XPC
[06:23:40] ================ [PASSED] check_graphics_ip ================
[06:23:40] ===================== check_media_ip ======================
[06:23:40] [PASSED] 12.00 Xe_M
[06:23:40] [PASSED] 12.55 Xe_HPM
[06:23:40] [PASSED] 13.00 Xe_LPM+
[06:23:40] [PASSED] 13.01 Xe2_HPM
[06:23:40] [PASSED] 20.00 Xe2_LPM
[06:23:40] [PASSED] 30.00 Xe3_LPM
[06:23:40] [PASSED] 30.02 Xe3_LPM
[06:23:40] [PASSED] 35.00 Xe3p_LPM
[06:23:40] [PASSED] 35.03 Xe3p_HPM
[06:23:40] ================= [PASSED] check_media_ip ==================
[06:23:40] =================== check_platform_desc ===================
[06:23:40] [PASSED] 0x9A60 (TIGERLAKE)
[06:23:40] [PASSED] 0x9A68 (TIGERLAKE)
[06:23:40] [PASSED] 0x9A70 (TIGERLAKE)
[06:23:40] [PASSED] 0x9A40 (TIGERLAKE)
[06:23:40] [PASSED] 0x9A49 (TIGERLAKE)
[06:23:40] [PASSED] 0x9A59 (TIGERLAKE)
[06:23:40] [PASSED] 0x9A78 (TIGERLAKE)
[06:23:40] [PASSED] 0x9AC0 (TIGERLAKE)
[06:23:40] [PASSED] 0x9AC9 (TIGERLAKE)
[06:23:40] [PASSED] 0x9AD9 (TIGERLAKE)
[06:23:40] [PASSED] 0x9AF8 (TIGERLAKE)
[06:23:40] [PASSED] 0x4C80 (ROCKETLAKE)
[06:23:40] [PASSED] 0x4C8A (ROCKETLAKE)
[06:23:40] [PASSED] 0x4C8B (ROCKETLAKE)
[06:23:40] [PASSED] 0x4C8C (ROCKETLAKE)
[06:23:40] [PASSED] 0x4C90 (ROCKETLAKE)
[06:23:40] [PASSED] 0x4C9A (ROCKETLAKE)
[06:23:40] [PASSED] 0x4680 (ALDERLAKE_S)
[06:23:40] [PASSED] 0x4682 (ALDERLAKE_S)
[06:23:40] [PASSED] 0x4688 (ALDERLAKE_S)
[06:23:40] [PASSED] 0x468A (ALDERLAKE_S)
[06:23:40] [PASSED] 0x468B (ALDERLAKE_S)
[06:23:40] [PASSED] 0x4690 (ALDERLAKE_S)
[06:23:40] [PASSED] 0x4692 (ALDERLAKE_S)
[06:23:40] [PASSED] 0x4693 (ALDERLAKE_S)
[06:23:40] [PASSED] 0x46A0 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46A1 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46A2 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46A3 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46A6 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46A8 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46AA (ALDERLAKE_P)
[06:23:40] [PASSED] 0x462A (ALDERLAKE_P)
[06:23:40] [PASSED] 0x4626 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x4628 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46B0 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46B1 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46B2 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46B3 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46C0 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46C1 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46C2 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46C3 (ALDERLAKE_P)
[06:23:40] [PASSED] 0x46D0 (ALDERLAKE_N)
[06:23:40] [PASSED] 0x46D1 (ALDERLAKE_N)
[06:23:40] [PASSED] 0x46D2 (ALDERLAKE_N)
[06:23:40] [PASSED] 0x46D3 (ALDERLAKE_N)
[06:23:40] [PASSED] 0x46D4 (ALDERLAKE_N)
[06:23:40] [PASSED] 0xA721 (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA7A1 (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA7A9 (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA7AC (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA7AD (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA720 (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA7A0 (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA7A8 (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA7AA (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA7AB (ALDERLAKE_P)
[06:23:40] [PASSED] 0xA780 (ALDERLAKE_S)
[06:23:40] [PASSED] 0xA781 (ALDERLAKE_S)
[06:23:40] [PASSED] 0xA782 (ALDERLAKE_S)
[06:23:40] [PASSED] 0xA783 (ALDERLAKE_S)
[06:23:40] [PASSED] 0xA788 (ALDERLAKE_S)
[06:23:40] [PASSED] 0xA789 (ALDERLAKE_S)
[06:23:40] [PASSED] 0xA78A (ALDERLAKE_S)
[06:23:40] [PASSED] 0xA78B (ALDERLAKE_S)
[06:23:40] [PASSED] 0x4905 (DG1)
[06:23:40] [PASSED] 0x4906 (DG1)
[06:23:40] [PASSED] 0x4907 (DG1)
[06:23:40] [PASSED] 0x4908 (DG1)
[06:23:40] [PASSED] 0x4909 (DG1)
[06:23:40] [PASSED] 0x56C0 (DG2)
[06:23:40] [PASSED] 0x56C2 (DG2)
[06:23:40] [PASSED] 0x56C1 (DG2)
[06:23:40] [PASSED] 0x7D51 (METEORLAKE)
[06:23:40] [PASSED] 0x7DD1 (METEORLAKE)
[06:23:40] [PASSED] 0x7D41 (METEORLAKE)
[06:23:40] [PASSED] 0x7D67 (METEORLAKE)
[06:23:40] [PASSED] 0xB640 (METEORLAKE)
[06:23:40] [PASSED] 0x56A0 (DG2)
[06:23:40] [PASSED] 0x56A1 (DG2)
[06:23:40] [PASSED] 0x56A2 (DG2)
[06:23:40] [PASSED] 0x56BE (DG2)
[06:23:40] [PASSED] 0x56BF (DG2)
[06:23:40] [PASSED] 0x5690 (DG2)
[06:23:40] [PASSED] 0x5691 (DG2)
[06:23:40] [PASSED] 0x5692 (DG2)
[06:23:40] [PASSED] 0x56A5 (DG2)
[06:23:40] [PASSED] 0x56A6 (DG2)
[06:23:40] [PASSED] 0x56B0 (DG2)
[06:23:40] [PASSED] 0x56B1 (DG2)
[06:23:40] [PASSED] 0x56BA (DG2)
[06:23:40] [PASSED] 0x56BB (DG2)
[06:23:40] [PASSED] 0x56BC (DG2)
[06:23:40] [PASSED] 0x56BD (DG2)
[06:23:40] [PASSED] 0x5693 (DG2)
[06:23:40] [PASSED] 0x5694 (DG2)
[06:23:40] [PASSED] 0x5695 (DG2)
[06:23:40] [PASSED] 0x56A3 (DG2)
[06:23:40] [PASSED] 0x56A4 (DG2)
[06:23:40] [PASSED] 0x56B2 (DG2)
[06:23:40] [PASSED] 0x56B3 (DG2)
[06:23:40] [PASSED] 0x5696 (DG2)
[06:23:40] [PASSED] 0x5697 (DG2)
[06:23:40] [PASSED] 0xB69 (PVC)
[06:23:40] [PASSED] 0xB6E (PVC)
[06:23:40] [PASSED] 0xBD4 (PVC)
[06:23:40] [PASSED] 0xBD5 (PVC)
[06:23:40] [PASSED] 0xBD6 (PVC)
[06:23:40] [PASSED] 0xBD7 (PVC)
[06:23:40] [PASSED] 0xBD8 (PVC)
[06:23:40] [PASSED] 0xBD9 (PVC)
[06:23:40] [PASSED] 0xBDA (PVC)
[06:23:40] [PASSED] 0xBDB (PVC)
[06:23:40] [PASSED] 0xBE0 (PVC)
[06:23:40] [PASSED] 0xBE1 (PVC)
[06:23:40] [PASSED] 0xBE5 (PVC)
[06:23:40] [PASSED] 0x7D40 (METEORLAKE)
[06:23:40] [PASSED] 0x7D45 (METEORLAKE)
[06:23:40] [PASSED] 0x7D55 (METEORLAKE)
[06:23:40] [PASSED] 0x7D60 (METEORLAKE)
[06:23:40] [PASSED] 0x7DD5 (METEORLAKE)
[06:23:40] [PASSED] 0x6420 (LUNARLAKE)
[06:23:40] [PASSED] 0x64A0 (LUNARLAKE)
[06:23:40] [PASSED] 0x64B0 (LUNARLAKE)
[06:23:40] [PASSED] 0xE202 (BATTLEMAGE)
[06:23:40] [PASSED] 0xE209 (BATTLEMAGE)
[06:23:40] [PASSED] 0xE20B (BATTLEMAGE)
[06:23:40] [PASSED] 0xE20C (BATTLEMAGE)
[06:23:40] [PASSED] 0xE20D (BATTLEMAGE)
[06:23:40] [PASSED] 0xE210 (BATTLEMAGE)
[06:23:40] [PASSED] 0xE211 (BATTLEMAGE)
[06:23:40] [PASSED] 0xE212 (BATTLEMAGE)
[06:23:40] [PASSED] 0xE216 (BATTLEMAGE)
[06:23:40] [PASSED] 0xE220 (BATTLEMAGE)
[06:23:40] [PASSED] 0xE221 (BATTLEMAGE)
[06:23:40] [PASSED] 0xE222 (BATTLEMAGE)
[06:23:40] [PASSED] 0xE223 (BATTLEMAGE)
[06:23:40] [PASSED] 0xB080 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB081 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB082 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB083 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB084 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB085 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB086 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB087 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB08F (PANTHERLAKE)
[06:23:40] [PASSED] 0xB090 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB0A0 (PANTHERLAKE)
[06:23:40] [PASSED] 0xB0B0 (PANTHERLAKE)
[06:23:40] [PASSED] 0xFD80 (PANTHERLAKE)
[06:23:40] [PASSED] 0xFD81 (PANTHERLAKE)
[06:23:40] [PASSED] 0xD740 (NOVALAKE_S)
[06:23:40] [PASSED] 0xD741 (NOVALAKE_S)
[06:23:40] [PASSED] 0xD742 (NOVALAKE_S)
[06:23:40] [PASSED] 0xD743 (NOVALAKE_S)
[06:23:40] [PASSED] 0xD744 (NOVALAKE_S)
[06:23:40] [PASSED] 0xD745 (NOVALAKE_S)
[06:23:40] [PASSED] 0x674C (CRESCENTISLAND)
[06:23:40] =============== [PASSED] check_platform_desc ===============
[06:23:40] ===================== [PASSED] xe_pci ======================
[06:23:40] =================== xe_rtp (2 subtests) ====================
[06:23:40] =============== xe_rtp_process_to_sr_tests ================
[06:23:40] [PASSED] coalesce-same-reg
[06:23:40] [PASSED] no-match-no-add
[06:23:40] [PASSED] match-or
[06:23:40] [PASSED] match-or-xfail
[06:23:40] [PASSED] no-match-no-add-multiple-rules
[06:23:40] [PASSED] two-regs-two-entries
[06:23:40] [PASSED] clr-one-set-other
[06:23:40] [PASSED] set-field
[06:23:40] [PASSED] conflict-duplicate
[06:23:40] [PASSED] conflict-not-disjoint
[06:23:40] [PASSED] conflict-reg-type
[06:23:40] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[06:23:40] ================== xe_rtp_process_tests ===================
[06:23:40] [PASSED] active1
[06:23:40] [PASSED] active2
[06:23:40] [PASSED] active-inactive
[06:23:40] [PASSED] inactive-active
[06:23:40] [PASSED] inactive-1st_or_active-inactive
[06:23:40] [PASSED] inactive-2nd_or_active-inactive
[06:23:40] [PASSED] inactive-last_or_active-inactive
[06:23:40] [PASSED] inactive-no_or_active-inactive
[06:23:40] ============== [PASSED] xe_rtp_process_tests ===============
[06:23:40] ===================== [PASSED] xe_rtp ======================
[06:23:40] ==================== xe_wa (1 subtest) =====================
[06:23:40] ======================== xe_wa_gt =========================
[06:23:40] [PASSED] TIGERLAKE B0
[06:23:40] [PASSED] DG1 A0
[06:23:40] [PASSED] DG1 B0
[06:23:40] [PASSED] ALDERLAKE_S A0
[06:23:40] [PASSED] ALDERLAKE_S B0
[06:23:40] [PASSED] ALDERLAKE_S C0
[06:23:40] [PASSED] ALDERLAKE_S D0
[06:23:40] [PASSED] ALDERLAKE_P A0
[06:23:40] [PASSED] ALDERLAKE_P B0
[06:23:40] [PASSED] ALDERLAKE_P C0
[06:23:40] [PASSED] ALDERLAKE_S RPLS D0
[06:23:40] [PASSED] ALDERLAKE_P RPLU E0
[06:23:40] [PASSED] DG2 G10 C0
[06:23:40] [PASSED] DG2 G11 B1
[06:23:40] [PASSED] DG2 G12 A1
[06:23:40] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[06:23:40] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[06:23:40] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[06:23:40] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[06:23:40] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[06:23:40] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[06:23:40] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[06:23:40] ==================== [PASSED] xe_wa_gt =====================
[06:23:40] ====================== [PASSED] xe_wa ======================
[06:23:40] ============================================================
[06:23:40] Testing complete. Ran 512 tests: passed: 494, skipped: 18
[06:23:40] Elapsed time: 36.988s total, 4.330s configuring, 32.142s building, 0.467s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[06:23:40] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[06:23:42] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[06:24:08] Starting KUnit Kernel (1/1)...
[06:24:08] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[06:24:08] ============ drm_test_pick_cmdline (2 subtests) ============
[06:24:08] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[06:24:08] =============== drm_test_pick_cmdline_named ===============
[06:24:08] [PASSED] NTSC
[06:24:08] [PASSED] NTSC-J
[06:24:08] [PASSED] PAL
[06:24:08] [PASSED] PAL-M
[06:24:08] =========== [PASSED] drm_test_pick_cmdline_named ===========
[06:24:08] ============== [PASSED] drm_test_pick_cmdline ==============
[06:24:08] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[06:24:08] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[06:24:08] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[06:24:08] =========== drm_validate_clone_mode (2 subtests) ===========
[06:24:08] ============== drm_test_check_in_clone_mode ===============
[06:24:08] [PASSED] in_clone_mode
[06:24:08] [PASSED] not_in_clone_mode
[06:24:08] ========== [PASSED] drm_test_check_in_clone_mode ===========
[06:24:08] =============== drm_test_check_valid_clones ===============
[06:24:08] [PASSED] not_in_clone_mode
[06:24:08] [PASSED] valid_clone
[06:24:08] [PASSED] invalid_clone
[06:24:08] =========== [PASSED] drm_test_check_valid_clones ===========
[06:24:08] ============= [PASSED] drm_validate_clone_mode =============
[06:24:08] ============= drm_validate_modeset (1 subtest) =============
[06:24:08] [PASSED] drm_test_check_connector_changed_modeset
[06:24:08] ============== [PASSED] drm_validate_modeset ===============
[06:24:08] ====== drm_test_bridge_get_current_state (2 subtests) ======
[06:24:08] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[06:24:08] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[06:24:08] ======== [PASSED] drm_test_bridge_get_current_state ========
[06:24:08] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[06:24:08] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[06:24:08] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[06:24:08] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[06:24:08] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[06:24:08] ============== drm_bridge_alloc (2 subtests) ===============
[06:24:08] [PASSED] drm_test_drm_bridge_alloc_basic
[06:24:08] [PASSED] drm_test_drm_bridge_alloc_get_put
[06:24:08] ================ [PASSED] drm_bridge_alloc =================
[06:24:08] ================== drm_buddy (9 subtests) ==================
[06:24:08] [PASSED] drm_test_buddy_alloc_limit
[06:24:08] [PASSED] drm_test_buddy_alloc_optimistic
[06:24:08] [PASSED] drm_test_buddy_alloc_pessimistic
[06:24:08] [PASSED] drm_test_buddy_alloc_pathological
[06:24:08] [PASSED] drm_test_buddy_alloc_contiguous
[06:24:08] [PASSED] drm_test_buddy_alloc_clear
[06:24:08] [PASSED] drm_test_buddy_alloc_range_bias
[06:24:08] [PASSED] drm_test_buddy_fragmentation_performance
[06:24:08] [PASSED] drm_test_buddy_alloc_exceeds_max_order
[06:24:08] ==================== [PASSED] drm_buddy ====================
[06:24:08] ============= drm_cmdline_parser (40 subtests) =============
[06:24:08] [PASSED] drm_test_cmdline_force_d_only
[06:24:08] [PASSED] drm_test_cmdline_force_D_only_dvi
[06:24:08] [PASSED] drm_test_cmdline_force_D_only_hdmi
[06:24:08] [PASSED] drm_test_cmdline_force_D_only_not_digital
[06:24:08] [PASSED] drm_test_cmdline_force_e_only
[06:24:08] [PASSED] drm_test_cmdline_res
[06:24:08] [PASSED] drm_test_cmdline_res_vesa
[06:24:08] [PASSED] drm_test_cmdline_res_vesa_rblank
[06:24:08] [PASSED] drm_test_cmdline_res_rblank
[06:24:08] [PASSED] drm_test_cmdline_res_bpp
[06:24:08] [PASSED] drm_test_cmdline_res_refresh
[06:24:08] [PASSED] drm_test_cmdline_res_bpp_refresh
[06:24:08] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[06:24:08] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[06:24:08] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[06:24:08] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[06:24:08] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[06:24:08] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[06:24:08] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[06:24:08] [PASSED] drm_test_cmdline_res_margins_force_on
[06:24:08] [PASSED] drm_test_cmdline_res_vesa_margins
[06:24:08] [PASSED] drm_test_cmdline_name
[06:24:08] [PASSED] drm_test_cmdline_name_bpp
[06:24:08] [PASSED] drm_test_cmdline_name_option
[06:24:08] [PASSED] drm_test_cmdline_name_bpp_option
[06:24:08] [PASSED] drm_test_cmdline_rotate_0
[06:24:08] [PASSED] drm_test_cmdline_rotate_90
[06:24:08] [PASSED] drm_test_cmdline_rotate_180
[06:24:08] [PASSED] drm_test_cmdline_rotate_270
[06:24:08] [PASSED] drm_test_cmdline_hmirror
[06:24:08] [PASSED] drm_test_cmdline_vmirror
[06:24:08] [PASSED] drm_test_cmdline_margin_options
[06:24:08] [PASSED] drm_test_cmdline_multiple_options
[06:24:08] [PASSED] drm_test_cmdline_bpp_extra_and_option
[06:24:08] [PASSED] drm_test_cmdline_extra_and_option
[06:24:08] [PASSED] drm_test_cmdline_freestanding_options
[06:24:08] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[06:24:08] [PASSED] drm_test_cmdline_panel_orientation
[06:24:08] ================ drm_test_cmdline_invalid =================
[06:24:08] [PASSED] margin_only
[06:24:08] [PASSED] interlace_only
[06:24:08] [PASSED] res_missing_x
[06:24:08] [PASSED] res_missing_y
[06:24:08] [PASSED] res_bad_y
[06:24:08] [PASSED] res_missing_y_bpp
[06:24:08] [PASSED] res_bad_bpp
[06:24:08] [PASSED] res_bad_refresh
[06:24:08] [PASSED] res_bpp_refresh_force_on_off
[06:24:08] [PASSED] res_invalid_mode
[06:24:08] [PASSED] res_bpp_wrong_place_mode
[06:24:08] [PASSED] name_bpp_refresh
[06:24:08] [PASSED] name_refresh
[06:24:08] [PASSED] name_refresh_wrong_mode
[06:24:08] [PASSED] name_refresh_invalid_mode
[06:24:08] [PASSED] rotate_multiple
[06:24:08] [PASSED] rotate_invalid_val
[06:24:08] [PASSED] rotate_truncated
[06:24:08] [PASSED] invalid_option
[06:24:08] [PASSED] invalid_tv_option
[06:24:08] [PASSED] truncated_tv_option
[06:24:08] ============ [PASSED] drm_test_cmdline_invalid =============
[06:24:08] =============== drm_test_cmdline_tv_options ===============
[06:24:08] [PASSED] NTSC
[06:24:08] [PASSED] NTSC_443
[06:24:08] [PASSED] NTSC_J
[06:24:08] [PASSED] PAL
[06:24:08] [PASSED] PAL_M
[06:24:08] [PASSED] PAL_N
[06:24:08] [PASSED] SECAM
[06:24:08] [PASSED] MONO_525
[06:24:08] [PASSED] MONO_625
[06:24:08] =========== [PASSED] drm_test_cmdline_tv_options ===========
[06:24:08] =============== [PASSED] drm_cmdline_parser ================
[06:24:08] ========== drmm_connector_hdmi_init (20 subtests) ==========
[06:24:08] [PASSED] drm_test_connector_hdmi_init_valid
[06:24:08] [PASSED] drm_test_connector_hdmi_init_bpc_8
[06:24:08] [PASSED] drm_test_connector_hdmi_init_bpc_10
[06:24:08] [PASSED] drm_test_connector_hdmi_init_bpc_12
[06:24:08] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[06:24:08] [PASSED] drm_test_connector_hdmi_init_bpc_null
[06:24:08] [PASSED] drm_test_connector_hdmi_init_formats_empty
[06:24:08] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[06:24:08] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[06:24:08] [PASSED] supported_formats=0x9 yuv420_allowed=1
[06:24:08] [PASSED] supported_formats=0x9 yuv420_allowed=0
[06:24:08] [PASSED] supported_formats=0x3 yuv420_allowed=1
[06:24:08] [PASSED] supported_formats=0x3 yuv420_allowed=0
[06:24:08] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[06:24:08] [PASSED] drm_test_connector_hdmi_init_null_ddc
[06:24:08] [PASSED] drm_test_connector_hdmi_init_null_product
[06:24:08] [PASSED] drm_test_connector_hdmi_init_null_vendor
[06:24:08] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[06:24:08] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[06:24:08] [PASSED] drm_test_connector_hdmi_init_product_valid
[06:24:08] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[06:24:08] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[06:24:08] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[06:24:08] ========= drm_test_connector_hdmi_init_type_valid =========
[06:24:08] [PASSED] HDMI-A
[06:24:08] [PASSED] HDMI-B
[06:24:08] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[06:24:08] ======== drm_test_connector_hdmi_init_type_invalid ========
[06:24:08] [PASSED] Unknown
[06:24:08] [PASSED] VGA
[06:24:08] [PASSED] DVI-I
[06:24:08] [PASSED] DVI-D
[06:24:08] [PASSED] DVI-A
[06:24:08] [PASSED] Composite
[06:24:08] [PASSED] SVIDEO
[06:24:08] [PASSED] LVDS
[06:24:08] [PASSED] Component
[06:24:08] [PASSED] DIN
[06:24:08] [PASSED] DP
[06:24:08] [PASSED] TV
[06:24:08] [PASSED] eDP
[06:24:08] [PASSED] Virtual
[06:24:08] [PASSED] DSI
[06:24:08] [PASSED] DPI
[06:24:08] [PASSED] Writeback
[06:24:08] [PASSED] SPI
[06:24:08] [PASSED] USB
[06:24:08] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[06:24:08] ============ [PASSED] drmm_connector_hdmi_init =============
[06:24:08] ============= drmm_connector_init (3 subtests) =============
[06:24:08] [PASSED] drm_test_drmm_connector_init
[06:24:08] [PASSED] drm_test_drmm_connector_init_null_ddc
[06:24:08] ========= drm_test_drmm_connector_init_type_valid =========
[06:24:08] [PASSED] Unknown
[06:24:08] [PASSED] VGA
[06:24:08] [PASSED] DVI-I
[06:24:08] [PASSED] DVI-D
[06:24:08] [PASSED] DVI-A
[06:24:08] [PASSED] Composite
[06:24:08] [PASSED] SVIDEO
[06:24:08] [PASSED] LVDS
[06:24:08] [PASSED] Component
[06:24:08] [PASSED] DIN
[06:24:08] [PASSED] DP
[06:24:08] [PASSED] HDMI-A
[06:24:08] [PASSED] HDMI-B
[06:24:08] [PASSED] TV
[06:24:08] [PASSED] eDP
[06:24:08] [PASSED] Virtual
[06:24:08] [PASSED] DSI
[06:24:08] [PASSED] DPI
[06:24:08] [PASSED] Writeback
[06:24:08] [PASSED] SPI
[06:24:08] [PASSED] USB
[06:24:08] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[06:24:08] =============== [PASSED] drmm_connector_init ===============
[06:24:08] ========= drm_connector_dynamic_init (6 subtests) ==========
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_init
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_init_properties
[06:24:08] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[06:24:08] [PASSED] Unknown
[06:24:08] [PASSED] VGA
[06:24:08] [PASSED] DVI-I
[06:24:08] [PASSED] DVI-D
[06:24:08] [PASSED] DVI-A
[06:24:08] [PASSED] Composite
[06:24:08] [PASSED] SVIDEO
[06:24:08] [PASSED] LVDS
[06:24:08] [PASSED] Component
[06:24:08] [PASSED] DIN
[06:24:08] [PASSED] DP
[06:24:08] [PASSED] HDMI-A
[06:24:08] [PASSED] HDMI-B
[06:24:08] [PASSED] TV
[06:24:08] [PASSED] eDP
[06:24:08] [PASSED] Virtual
[06:24:08] [PASSED] DSI
[06:24:08] [PASSED] DPI
[06:24:08] [PASSED] Writeback
[06:24:08] [PASSED] SPI
[06:24:08] [PASSED] USB
[06:24:08] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[06:24:08] ======== drm_test_drm_connector_dynamic_init_name =========
[06:24:08] [PASSED] Unknown
[06:24:08] [PASSED] VGA
[06:24:08] [PASSED] DVI-I
[06:24:08] [PASSED] DVI-D
[06:24:08] [PASSED] DVI-A
[06:24:08] [PASSED] Composite
[06:24:08] [PASSED] SVIDEO
[06:24:08] [PASSED] LVDS
[06:24:08] [PASSED] Component
[06:24:08] [PASSED] DIN
[06:24:08] [PASSED] DP
[06:24:08] [PASSED] HDMI-A
[06:24:08] [PASSED] HDMI-B
[06:24:08] [PASSED] TV
[06:24:08] [PASSED] eDP
[06:24:08] [PASSED] Virtual
[06:24:08] [PASSED] DSI
[06:24:08] [PASSED] DPI
[06:24:08] [PASSED] Writeback
[06:24:08] [PASSED] SPI
[06:24:08] [PASSED] USB
[06:24:08] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[06:24:08] =========== [PASSED] drm_connector_dynamic_init ============
[06:24:08] ==== drm_connector_dynamic_register_early (4 subtests) =====
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[06:24:08] ====== [PASSED] drm_connector_dynamic_register_early =======
[06:24:08] ======= drm_connector_dynamic_register (7 subtests) ========
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[06:24:08] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[06:24:08] ========= [PASSED] drm_connector_dynamic_register ==========
[06:24:08] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[06:24:08] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[06:24:08] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[06:24:08] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[06:24:08] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[06:24:08] ========== drm_test_get_tv_mode_from_name_valid ===========
[06:24:08] [PASSED] NTSC
[06:24:08] [PASSED] NTSC-443
[06:24:08] [PASSED] NTSC-J
[06:24:08] [PASSED] PAL
[06:24:08] [PASSED] PAL-M
[06:24:08] [PASSED] PAL-N
[06:24:08] [PASSED] SECAM
[06:24:08] [PASSED] Mono
[06:24:08] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[06:24:08] [PASSED] drm_test_get_tv_mode_from_name_truncated
[06:24:08] ============ [PASSED] drm_get_tv_mode_from_name ============
[06:24:08] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[06:24:08] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[06:24:08] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[06:24:08] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[06:24:08] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[06:24:08] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[06:24:08] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[06:24:08] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[06:24:08] [PASSED] VIC 96
[06:24:08] [PASSED] VIC 97
[06:24:08] [PASSED] VIC 101
[06:24:08] [PASSED] VIC 102
[06:24:08] [PASSED] VIC 106
[06:24:08] [PASSED] VIC 107
[06:24:08] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[06:24:08] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[06:24:08] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[06:24:08] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[06:24:08] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[06:24:08] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[06:24:08] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[06:24:08] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[06:24:08] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[06:24:08] [PASSED] Automatic
[06:24:08] [PASSED] Full
[06:24:08] [PASSED] Limited 16:235
[06:24:08] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[06:24:08] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[06:24:08] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[06:24:08] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[06:24:08] === drm_test_drm_hdmi_connector_get_output_format_name ====
[06:24:08] [PASSED] RGB
[06:24:08] [PASSED] YUV 4:2:0
[06:24:08] [PASSED] YUV 4:2:2
[06:24:08] [PASSED] YUV 4:4:4
[06:24:08] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[06:24:08] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[06:24:08] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[06:24:08] ============= drm_damage_helper (21 subtests) ==============
[06:24:08] [PASSED] drm_test_damage_iter_no_damage
[06:24:08] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[06:24:08] [PASSED] drm_test_damage_iter_no_damage_src_moved
[06:24:08] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[06:24:08] [PASSED] drm_test_damage_iter_no_damage_not_visible
[06:24:08] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[06:24:08] [PASSED] drm_test_damage_iter_no_damage_no_fb
[06:24:08] [PASSED] drm_test_damage_iter_simple_damage
[06:24:08] [PASSED] drm_test_damage_iter_single_damage
[06:24:08] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[06:24:08] [PASSED] drm_test_damage_iter_single_damage_outside_src
[06:24:08] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[06:24:08] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[06:24:08] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[06:24:08] [PASSED] drm_test_damage_iter_single_damage_src_moved
[06:24:08] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[06:24:08] [PASSED] drm_test_damage_iter_damage
[06:24:08] [PASSED] drm_test_damage_iter_damage_one_intersect
[06:24:08] [PASSED] drm_test_damage_iter_damage_one_outside
[06:24:08] [PASSED] drm_test_damage_iter_damage_src_moved
[06:24:08] [PASSED] drm_test_damage_iter_damage_not_visible
[06:24:08] ================ [PASSED] drm_damage_helper ================
[06:24:08] ============== drm_dp_mst_helper (3 subtests) ==============
[06:24:08] ============== drm_test_dp_mst_calc_pbn_mode ==============
[06:24:08] [PASSED] Clock 154000 BPP 30 DSC disabled
[06:24:08] [PASSED] Clock 234000 BPP 30 DSC disabled
[06:24:08] [PASSED] Clock 297000 BPP 24 DSC disabled
[06:24:08] [PASSED] Clock 332880 BPP 24 DSC enabled
[06:24:08] [PASSED] Clock 324540 BPP 24 DSC enabled
[06:24:08] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[06:24:08] ============== drm_test_dp_mst_calc_pbn_div ===============
[06:24:08] [PASSED] Link rate 2000000 lane count 4
[06:24:08] [PASSED] Link rate 2000000 lane count 2
[06:24:08] [PASSED] Link rate 2000000 lane count 1
[06:24:08] [PASSED] Link rate 1350000 lane count 4
[06:24:08] [PASSED] Link rate 1350000 lane count 2
[06:24:08] [PASSED] Link rate 1350000 lane count 1
[06:24:08] [PASSED] Link rate 1000000 lane count 4
[06:24:08] [PASSED] Link rate 1000000 lane count 2
[06:24:08] [PASSED] Link rate 1000000 lane count 1
[06:24:08] [PASSED] Link rate 810000 lane count 4
[06:24:08] [PASSED] Link rate 810000 lane count 2
[06:24:08] [PASSED] Link rate 810000 lane count 1
[06:24:08] [PASSED] Link rate 540000 lane count 4
[06:24:08] [PASSED] Link rate 540000 lane count 2
[06:24:08] [PASSED] Link rate 540000 lane count 1
[06:24:08] [PASSED] Link rate 270000 lane count 4
[06:24:08] [PASSED] Link rate 270000 lane count 2
[06:24:08] [PASSED] Link rate 270000 lane count 1
[06:24:08] [PASSED] Link rate 162000 lane count 4
[06:24:08] [PASSED] Link rate 162000 lane count 2
[06:24:08] [PASSED] Link rate 162000 lane count 1
[06:24:08] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[06:24:08] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[06:24:08] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[06:24:08] [PASSED] DP_POWER_UP_PHY with port number
[06:24:08] [PASSED] DP_POWER_DOWN_PHY with port number
[06:24:08] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[06:24:08] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[06:24:08] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[06:24:08] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[06:24:08] [PASSED] DP_QUERY_PAYLOAD with port number
[06:24:08] [PASSED] DP_QUERY_PAYLOAD with VCPI
[06:24:08] [PASSED] DP_REMOTE_DPCD_READ with port number
[06:24:08] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[06:24:08] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[06:24:08] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[06:24:08] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[06:24:08] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[06:24:08] [PASSED] DP_REMOTE_I2C_READ with port number
[06:24:08] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[06:24:08] [PASSED] DP_REMOTE_I2C_READ with transactions array
[06:24:08] [PASSED] DP_REMOTE_I2C_WRITE with port number
[06:24:08] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[06:24:08] [PASSED] DP_REMOTE_I2C_WRITE with data array
[06:24:08] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[06:24:08] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[06:24:08] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[06:24:08] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[06:24:08] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[06:24:08] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[06:24:08] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[06:24:08] ================ [PASSED] drm_dp_mst_helper ================
[06:24:08] ================== drm_exec (7 subtests) ===================
[06:24:08] [PASSED] sanitycheck
[06:24:08] [PASSED] test_lock
[06:24:08] [PASSED] test_lock_unlock
[06:24:08] [PASSED] test_duplicates
[06:24:08] [PASSED] test_prepare
[06:24:08] [PASSED] test_prepare_array
[06:24:08] [PASSED] test_multiple_loops
[06:24:08] ==================== [PASSED] drm_exec =====================
[06:24:08] =========== drm_format_helper_test (17 subtests) ===========
[06:24:08] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[06:24:08] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[06:24:08] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[06:24:08] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[06:24:08] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[06:24:08] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[06:24:08] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[06:24:08] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[06:24:08] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[06:24:08] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[06:24:08] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[06:24:08] ============== drm_test_fb_xrgb8888_to_mono ===============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[06:24:08] ==================== drm_test_fb_swab =====================
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ================ [PASSED] drm_test_fb_swab =================
[06:24:08] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[06:24:08] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[06:24:08] [PASSED] single_pixel_source_buffer
[06:24:08] [PASSED] single_pixel_clip_rectangle
[06:24:08] [PASSED] well_known_colors
[06:24:08] [PASSED] destination_pitch
[06:24:08] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[06:24:08] ================= drm_test_fb_clip_offset =================
[06:24:08] [PASSED] pass through
[06:24:08] [PASSED] horizontal offset
[06:24:08] [PASSED] vertical offset
[06:24:08] [PASSED] horizontal and vertical offset
[06:24:08] [PASSED] horizontal offset (custom pitch)
[06:24:08] [PASSED] vertical offset (custom pitch)
[06:24:08] [PASSED] horizontal and vertical offset (custom pitch)
[06:24:08] ============= [PASSED] drm_test_fb_clip_offset =============
[06:24:08] =================== drm_test_fb_memcpy ====================
[06:24:08] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[06:24:08] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[06:24:08] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[06:24:08] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[06:24:08] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[06:24:08] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[06:24:08] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[06:24:08] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[06:24:08] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[06:24:08] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[06:24:08] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[06:24:08] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[06:24:08] =============== [PASSED] drm_test_fb_memcpy ================
[06:24:08] ============= [PASSED] drm_format_helper_test ==============
[06:24:08] ================= drm_format (18 subtests) =================
[06:24:08] [PASSED] drm_test_format_block_width_invalid
[06:24:08] [PASSED] drm_test_format_block_width_one_plane
[06:24:08] [PASSED] drm_test_format_block_width_two_plane
[06:24:08] [PASSED] drm_test_format_block_width_three_plane
[06:24:08] [PASSED] drm_test_format_block_width_tiled
[06:24:08] [PASSED] drm_test_format_block_height_invalid
[06:24:08] [PASSED] drm_test_format_block_height_one_plane
[06:24:08] [PASSED] drm_test_format_block_height_two_plane
[06:24:08] [PASSED] drm_test_format_block_height_three_plane
[06:24:08] [PASSED] drm_test_format_block_height_tiled
[06:24:08] [PASSED] drm_test_format_min_pitch_invalid
[06:24:08] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[06:24:08] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[06:24:08] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[06:24:08] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[06:24:08] [PASSED] drm_test_format_min_pitch_two_plane
[06:24:08] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[06:24:08] [PASSED] drm_test_format_min_pitch_tiled
[06:24:08] =================== [PASSED] drm_format ====================
[06:24:08] ============== drm_framebuffer (10 subtests) ===============
[06:24:08] ========== drm_test_framebuffer_check_src_coords ==========
[06:24:08] [PASSED] Success: source fits into fb
[06:24:08] [PASSED] Fail: overflowing fb with x-axis coordinate
[06:24:08] [PASSED] Fail: overflowing fb with y-axis coordinate
[06:24:08] [PASSED] Fail: overflowing fb with source width
[06:24:08] [PASSED] Fail: overflowing fb with source height
[06:24:08] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[06:24:08] [PASSED] drm_test_framebuffer_cleanup
[06:24:08] =============== drm_test_framebuffer_create ===============
[06:24:08] [PASSED] ABGR8888 normal sizes
[06:24:08] [PASSED] ABGR8888 max sizes
[06:24:08] [PASSED] ABGR8888 pitch greater than min required
[06:24:08] [PASSED] ABGR8888 pitch less than min required
[06:24:08] [PASSED] ABGR8888 Invalid width
[06:24:08] [PASSED] ABGR8888 Invalid buffer handle
[06:24:08] [PASSED] No pixel format
[06:24:08] [PASSED] ABGR8888 Width 0
[06:24:08] [PASSED] ABGR8888 Height 0
[06:24:08] [PASSED] ABGR8888 Out of bound height * pitch combination
[06:24:08] [PASSED] ABGR8888 Large buffer offset
[06:24:08] [PASSED] ABGR8888 Buffer offset for inexistent plane
[06:24:08] [PASSED] ABGR8888 Invalid flag
[06:24:08] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[06:24:08] [PASSED] ABGR8888 Valid buffer modifier
[06:24:08] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[06:24:08] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[06:24:08] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[06:24:08] [PASSED] NV12 Normal sizes
[06:24:08] [PASSED] NV12 Max sizes
[06:24:08] [PASSED] NV12 Invalid pitch
[06:24:08] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[06:24:08] [PASSED] NV12 different modifier per-plane
[06:24:08] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[06:24:08] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[06:24:08] [PASSED] NV12 Modifier for inexistent plane
[06:24:08] [PASSED] NV12 Handle for inexistent plane
[06:24:08] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[06:24:08] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[06:24:08] [PASSED] YVU420 Normal sizes
[06:24:08] [PASSED] YVU420 Max sizes
[06:24:08] [PASSED] YVU420 Invalid pitch
[06:24:08] [PASSED] YVU420 Different pitches
[06:24:08] [PASSED] YVU420 Different buffer offsets/pitches
[06:24:08] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[06:24:08] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[06:24:08] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[06:24:08] [PASSED] YVU420 Valid modifier
[06:24:08] [PASSED] YVU420 Different modifiers per plane
[06:24:08] [PASSED] YVU420 Modifier for inexistent plane
[06:24:08] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[06:24:08] [PASSED] X0L2 Normal sizes
[06:24:08] [PASSED] X0L2 Max sizes
[06:24:08] [PASSED] X0L2 Invalid pitch
[06:24:08] [PASSED] X0L2 Pitch greater than minimum required
[06:24:08] [PASSED] X0L2 Handle for inexistent plane
[06:24:08] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[06:24:08] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[06:24:08] [PASSED] X0L2 Valid modifier
[06:24:08] [PASSED] X0L2 Modifier for inexistent plane
[06:24:08] =========== [PASSED] drm_test_framebuffer_create ===========
[06:24:08] [PASSED] drm_test_framebuffer_free
[06:24:08] [PASSED] drm_test_framebuffer_init
[06:24:08] [PASSED] drm_test_framebuffer_init_bad_format
[06:24:08] [PASSED] drm_test_framebuffer_init_dev_mismatch
[06:24:08] [PASSED] drm_test_framebuffer_lookup
[06:24:08] [PASSED] drm_test_framebuffer_lookup_inexistent
[06:24:08] [PASSED] drm_test_framebuffer_modifiers_not_supported
[06:24:08] ================= [PASSED] drm_framebuffer =================
[06:24:08] ================ drm_gem_shmem (8 subtests) ================
[06:24:08] [PASSED] drm_gem_shmem_test_obj_create
[06:24:08] [PASSED] drm_gem_shmem_test_obj_create_private
[06:24:08] [PASSED] drm_gem_shmem_test_pin_pages
[06:24:08] [PASSED] drm_gem_shmem_test_vmap
[06:24:08] [PASSED] drm_gem_shmem_test_get_sg_table
[06:24:08] [PASSED] drm_gem_shmem_test_get_pages_sgt
[06:24:08] [PASSED] drm_gem_shmem_test_madvise
[06:24:08] [PASSED] drm_gem_shmem_test_purge
[06:24:08] ================== [PASSED] drm_gem_shmem ==================
[06:24:08] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[06:24:08] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[06:24:08] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[06:24:08] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[06:24:08] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[06:24:08] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[06:24:08] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[06:24:08] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[06:24:08] [PASSED] Automatic
[06:24:08] [PASSED] Full
[06:24:08] [PASSED] Limited 16:235
[06:24:08] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[06:24:08] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[06:24:08] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[06:24:08] [PASSED] drm_test_check_disable_connector
[06:24:08] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[06:24:08] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[06:24:08] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[06:24:08] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[06:24:08] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[06:24:08] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[06:24:08] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[06:24:08] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[06:24:08] [PASSED] drm_test_check_output_bpc_dvi
[06:24:08] [PASSED] drm_test_check_output_bpc_format_vic_1
[06:24:08] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[06:24:08] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[06:24:08] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[06:24:08] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[06:24:08] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[06:24:08] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[06:24:08] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[06:24:08] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[06:24:08] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[06:24:08] [PASSED] drm_test_check_broadcast_rgb_value
[06:24:08] [PASSED] drm_test_check_bpc_8_value
[06:24:08] [PASSED] drm_test_check_bpc_10_value
[06:24:08] [PASSED] drm_test_check_bpc_12_value
[06:24:08] [PASSED] drm_test_check_format_value
[06:24:08] [PASSED] drm_test_check_tmds_char_value
[06:24:08] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[06:24:08] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[06:24:08] [PASSED] drm_test_check_mode_valid
[06:24:08] [PASSED] drm_test_check_mode_valid_reject
[06:24:08] [PASSED] drm_test_check_mode_valid_reject_rate
[06:24:08] [PASSED] drm_test_check_mode_valid_reject_max_clock
[06:24:08] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[06:24:08] = drm_atomic_helper_connector_hdmi_infoframes (5 subtests) =
[06:24:08] [PASSED] drm_test_check_infoframes
[06:24:08] [PASSED] drm_test_check_reject_avi_infoframe
[06:24:08] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_8
[06:24:08] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_10
[06:24:08] [PASSED] drm_test_check_reject_audio_infoframe
[06:24:08] === [PASSED] drm_atomic_helper_connector_hdmi_infoframes ===
[06:24:08] ================= drm_managed (2 subtests) =================
[06:24:08] [PASSED] drm_test_managed_release_action
[06:24:08] [PASSED] drm_test_managed_run_action
[06:24:08] =================== [PASSED] drm_managed ===================
[06:24:08] =================== drm_mm (6 subtests) ====================
[06:24:08] [PASSED] drm_test_mm_init
[06:24:08] [PASSED] drm_test_mm_debug
[06:24:08] [PASSED] drm_test_mm_align32
[06:24:08] [PASSED] drm_test_mm_align64
[06:24:08] [PASSED] drm_test_mm_lowest
[06:24:08] [PASSED] drm_test_mm_highest
[06:24:08] ===================== [PASSED] drm_mm ======================
[06:24:08] ============= drm_modes_analog_tv (5 subtests) =============
[06:24:08] [PASSED] drm_test_modes_analog_tv_mono_576i
[06:24:08] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[06:24:08] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[06:24:08] [PASSED] drm_test_modes_analog_tv_pal_576i
[06:24:08] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[06:24:08] =============== [PASSED] drm_modes_analog_tv ===============
[06:24:08] ============== drm_plane_helper (2 subtests) ===============
[06:24:08] =============== drm_test_check_plane_state ================
[06:24:08] [PASSED] clipping_simple
[06:24:08] [PASSED] clipping_rotate_reflect
[06:24:08] [PASSED] positioning_simple
[06:24:08] [PASSED] upscaling
[06:24:08] [PASSED] downscaling
[06:24:08] [PASSED] rounding1
[06:24:08] [PASSED] rounding2
[06:24:08] [PASSED] rounding3
[06:24:08] [PASSED] rounding4
[06:24:08] =========== [PASSED] drm_test_check_plane_state ============
[06:24:08] =========== drm_test_check_invalid_plane_state ============
[06:24:08] [PASSED] positioning_invalid
[06:24:08] [PASSED] upscaling_invalid
[06:24:08] [PASSED] downscaling_invalid
[06:24:08] ======= [PASSED] drm_test_check_invalid_plane_state ========
[06:24:08] ================ [PASSED] drm_plane_helper =================
[06:24:08] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[06:24:08] ====== drm_test_connector_helper_tv_get_modes_check =======
[06:24:08] [PASSED] None
[06:24:08] [PASSED] PAL
[06:24:08] [PASSED] NTSC
[06:24:08] [PASSED] Both, NTSC Default
[06:24:08] [PASSED] Both, PAL Default
[06:24:08] [PASSED] Both, NTSC Default, with PAL on command-line
[06:24:08] [PASSED] Both, PAL Default, with NTSC on command-line
[06:24:08] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[06:24:08] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[06:24:08] ================== drm_rect (9 subtests) ===================
[06:24:08] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[06:24:08] [PASSED] drm_test_rect_clip_scaled_not_clipped
[06:24:08] [PASSED] drm_test_rect_clip_scaled_clipped
[06:24:08] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[06:24:08] ================= drm_test_rect_intersect =================
[06:24:08] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[06:24:08] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[06:24:08] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[06:24:08] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[06:24:08] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[06:24:08] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[06:24:08] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[06:24:08] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[06:24:08] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[06:24:08] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[06:24:08] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[06:24:08] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[06:24:08] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[06:24:08] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[06:24:08] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[06:24:08] ============= [PASSED] drm_test_rect_intersect =============
[06:24:08] ================ drm_test_rect_calc_hscale ================
[06:24:08] [PASSED] normal use
[06:24:08] [PASSED] out of max range
[06:24:08] [PASSED] out of min range
[06:24:08] [PASSED] zero dst
[06:24:08] [PASSED] negative src
[06:24:08] [PASSED] negative dst
[06:24:08] ============ [PASSED] drm_test_rect_calc_hscale ============
[06:24:08] ================ drm_test_rect_calc_vscale ================
[06:24:08] [PASSED] normal use
[06:24:08] [PASSED] out of max range
[06:24:08] [PASSED] out of min range
[06:24:08] [PASSED] zero dst
[06:24:08] [PASSED] negative src
[06:24:08] [PASSED] negative dst
[06:24:08] ============ [PASSED] drm_test_rect_calc_vscale ============
[06:24:08] ================== drm_test_rect_rotate ===================
[06:24:08] [PASSED] reflect-x
[06:24:08] [PASSED] reflect-y
[06:24:08] [PASSED] rotate-0
[06:24:08] [PASSED] rotate-90
[06:24:08] [PASSED] rotate-180
[06:24:08] [PASSED] rotate-270
[06:24:08] ============== [PASSED] drm_test_rect_rotate ===============
[06:24:08] ================ drm_test_rect_rotate_inv =================
[06:24:08] [PASSED] reflect-x
[06:24:08] [PASSED] reflect-y
[06:24:08] [PASSED] rotate-0
[06:24:08] [PASSED] rotate-90
[06:24:08] [PASSED] rotate-180
[06:24:08] [PASSED] rotate-270
[06:24:08] ============ [PASSED] drm_test_rect_rotate_inv =============
[06:24:08] ==================== [PASSED] drm_rect =====================
[06:24:08] ============ drm_sysfb_modeset_test (1 subtest) ============
[06:24:08] ============ drm_test_sysfb_build_fourcc_list =============
[06:24:08] [PASSED] no native formats
[06:24:08] [PASSED] XRGB8888 as native format
[06:24:08] [PASSED] remove duplicates
[06:24:08] [PASSED] convert alpha formats
[06:24:08] [PASSED] random formats
[06:24:08] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[06:24:08] ============= [PASSED] drm_sysfb_modeset_test ==============
[06:24:08] ================== drm_fixp (2 subtests) ===================
[06:24:08] [PASSED] drm_test_int2fixp
[06:24:08] [PASSED] drm_test_sm2fixp
[06:24:08] ==================== [PASSED] drm_fixp =====================
[06:24:08] ============================================================
[06:24:08] Testing complete. Ran 630 tests: passed: 630
[06:24:08] Elapsed time: 27.940s total, 1.625s configuring, 25.898s building, 0.375s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[06:24:08] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[06:24:10] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[06:24:19] Starting KUnit Kernel (1/1)...
[06:24:19] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[06:24:19] ================= ttm_device (5 subtests) ==================
[06:24:19] [PASSED] ttm_device_init_basic
[06:24:19] [PASSED] ttm_device_init_multiple
[06:24:19] [PASSED] ttm_device_fini_basic
[06:24:19] [PASSED] ttm_device_init_no_vma_man
[06:24:19] ================== ttm_device_init_pools ==================
[06:24:19] [PASSED] No DMA allocations, no DMA32 required
[06:24:19] [PASSED] DMA allocations, DMA32 required
[06:24:19] [PASSED] No DMA allocations, DMA32 required
[06:24:19] [PASSED] DMA allocations, no DMA32 required
[06:24:19] ============== [PASSED] ttm_device_init_pools ==============
[06:24:19] =================== [PASSED] ttm_device ====================
[06:24:19] ================== ttm_pool (8 subtests) ===================
[06:24:19] ================== ttm_pool_alloc_basic ===================
[06:24:19] [PASSED] One page
[06:24:19] [PASSED] More than one page
[06:24:19] [PASSED] Above the allocation limit
[06:24:19] [PASSED] One page, with coherent DMA mappings enabled
[06:24:19] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[06:24:19] ============== [PASSED] ttm_pool_alloc_basic ===============
[06:24:19] ============== ttm_pool_alloc_basic_dma_addr ==============
[06:24:19] [PASSED] One page
[06:24:19] [PASSED] More than one page
[06:24:19] [PASSED] Above the allocation limit
[06:24:19] [PASSED] One page, with coherent DMA mappings enabled
[06:24:19] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[06:24:19] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[06:24:19] [PASSED] ttm_pool_alloc_order_caching_match
[06:24:19] [PASSED] ttm_pool_alloc_caching_mismatch
[06:24:19] [PASSED] ttm_pool_alloc_order_mismatch
[06:24:19] [PASSED] ttm_pool_free_dma_alloc
[06:24:19] [PASSED] ttm_pool_free_no_dma_alloc
[06:24:19] [PASSED] ttm_pool_fini_basic
[06:24:19] ==================== [PASSED] ttm_pool =====================
[06:24:19] ================ ttm_resource (8 subtests) =================
[06:24:19] ================= ttm_resource_init_basic =================
[06:24:19] [PASSED] Init resource in TTM_PL_SYSTEM
[06:24:19] [PASSED] Init resource in TTM_PL_VRAM
[06:24:19] [PASSED] Init resource in a private placement
[06:24:19] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[06:24:19] ============= [PASSED] ttm_resource_init_basic =============
[06:24:19] [PASSED] ttm_resource_init_pinned
[06:24:19] [PASSED] ttm_resource_fini_basic
[06:24:19] [PASSED] ttm_resource_manager_init_basic
[06:24:19] [PASSED] ttm_resource_manager_usage_basic
[06:24:19] [PASSED] ttm_resource_manager_set_used_basic
[06:24:19] [PASSED] ttm_sys_man_alloc_basic
[06:24:19] [PASSED] ttm_sys_man_free_basic
[06:24:19] ================== [PASSED] ttm_resource ===================
[06:24:19] =================== ttm_tt (15 subtests) ===================
[06:24:19] ==================== ttm_tt_init_basic ====================
[06:24:19] [PASSED] Page-aligned size
[06:24:19] [PASSED] Extra pages requested
[06:24:19] ================ [PASSED] ttm_tt_init_basic ================
[06:24:19] [PASSED] ttm_tt_init_misaligned
[06:24:19] [PASSED] ttm_tt_fini_basic
[06:24:19] [PASSED] ttm_tt_fini_sg
[06:24:19] [PASSED] ttm_tt_fini_shmem
[06:24:19] [PASSED] ttm_tt_create_basic
[06:24:19] [PASSED] ttm_tt_create_invalid_bo_type
[06:24:19] [PASSED] ttm_tt_create_ttm_exists
[06:24:19] [PASSED] ttm_tt_create_failed
[06:24:19] [PASSED] ttm_tt_destroy_basic
[06:24:19] [PASSED] ttm_tt_populate_null_ttm
[06:24:19] [PASSED] ttm_tt_populate_populated_ttm
[06:24:19] [PASSED] ttm_tt_unpopulate_basic
[06:24:19] [PASSED] ttm_tt_unpopulate_empty_ttm
[06:24:19] [PASSED] ttm_tt_swapin_basic
[06:24:19] ===================== [PASSED] ttm_tt ======================
[06:24:19] =================== ttm_bo (14 subtests) ===================
[06:24:19] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[06:24:19] [PASSED] Cannot be interrupted and sleeps
[06:24:19] [PASSED] Cannot be interrupted, locks straight away
[06:24:19] [PASSED] Can be interrupted, sleeps
[06:24:19] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[06:24:19] [PASSED] ttm_bo_reserve_locked_no_sleep
[06:24:19] [PASSED] ttm_bo_reserve_no_wait_ticket
[06:24:19] [PASSED] ttm_bo_reserve_double_resv
[06:24:19] [PASSED] ttm_bo_reserve_interrupted
[06:24:19] [PASSED] ttm_bo_reserve_deadlock
[06:24:19] [PASSED] ttm_bo_unreserve_basic
[06:24:19] [PASSED] ttm_bo_unreserve_pinned
[06:24:19] [PASSED] ttm_bo_unreserve_bulk
[06:24:19] [PASSED] ttm_bo_fini_basic
[06:24:19] [PASSED] ttm_bo_fini_shared_resv
[06:24:19] [PASSED] ttm_bo_pin_basic
[06:24:19] [PASSED] ttm_bo_pin_unpin_resource
[06:24:19] [PASSED] ttm_bo_multiple_pin_one_unpin
[06:24:19] ===================== [PASSED] ttm_bo ======================
[06:24:19] ============== ttm_bo_validate (21 subtests) ===============
[06:24:19] ============== ttm_bo_init_reserved_sys_man ===============
[06:24:19] [PASSED] Buffer object for userspace
[06:24:19] [PASSED] Kernel buffer object
[06:24:19] [PASSED] Shared buffer object
[06:24:19] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[06:24:19] ============== ttm_bo_init_reserved_mock_man ==============
[06:24:19] [PASSED] Buffer object for userspace
[06:24:19] [PASSED] Kernel buffer object
[06:24:19] [PASSED] Shared buffer object
[06:24:19] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[06:24:19] [PASSED] ttm_bo_init_reserved_resv
[06:24:19] ================== ttm_bo_validate_basic ==================
[06:24:19] [PASSED] Buffer object for userspace
[06:24:19] [PASSED] Kernel buffer object
[06:24:19] [PASSED] Shared buffer object
[06:24:19] ============== [PASSED] ttm_bo_validate_basic ==============
[06:24:19] [PASSED] ttm_bo_validate_invalid_placement
[06:24:19] ============= ttm_bo_validate_same_placement ==============
[06:24:19] [PASSED] System manager
[06:24:19] [PASSED] VRAM manager
[06:24:19] ========= [PASSED] ttm_bo_validate_same_placement ==========
[06:24:19] [PASSED] ttm_bo_validate_failed_alloc
[06:24:19] [PASSED] ttm_bo_validate_pinned
[06:24:19] [PASSED] ttm_bo_validate_busy_placement
[06:24:19] ================ ttm_bo_validate_multihop =================
[06:24:19] [PASSED] Buffer object for userspace
[06:24:20] [PASSED] Kernel buffer object
[06:24:20] [PASSED] Shared buffer object
[06:24:20] ============ [PASSED] ttm_bo_validate_multihop =============
[06:24:20] ========== ttm_bo_validate_no_placement_signaled ==========
[06:24:20] [PASSED] Buffer object in system domain, no page vector
[06:24:20] [PASSED] Buffer object in system domain with an existing page vector
[06:24:20] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[06:24:20] ======== ttm_bo_validate_no_placement_not_signaled ========
[06:24:20] [PASSED] Buffer object for userspace
[06:24:20] [PASSED] Kernel buffer object
[06:24:20] [PASSED] Shared buffer object
[06:24:20] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[06:24:20] [PASSED] ttm_bo_validate_move_fence_signaled
[06:24:20] ========= ttm_bo_validate_move_fence_not_signaled =========
[06:24:20] [PASSED] Waits for GPU
[06:24:20] [PASSED] Tries to lock straight away
[06:24:20] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[06:24:20] [PASSED] ttm_bo_validate_happy_evict
[06:24:20] [PASSED] ttm_bo_validate_all_pinned_evict
[06:24:20] [PASSED] ttm_bo_validate_allowed_only_evict
[06:24:20] [PASSED] ttm_bo_validate_deleted_evict
[06:24:20] [PASSED] ttm_bo_validate_busy_domain_evict
[06:24:20] [PASSED] ttm_bo_validate_evict_gutting
[06:24:20] [PASSED] ttm_bo_validate_recrusive_evict
[06:24:20] ================= [PASSED] ttm_bo_validate =================
[06:24:20] ============================================================
[06:24:20] Testing complete. Ran 101 tests: passed: 101
[06:24:20] Elapsed time: 11.479s total, 1.577s configuring, 9.686s building, 0.177s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
* ✓ Xe.CI.BAT: success for Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev4)
2026-02-05 4:19 [PATCH v4 0/4] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
` (4 preceding siblings ...)
2026-02-05 6:24 ` ✓ CI.KUnit: success for Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev4) Patchwork
@ 2026-02-05 7:38 ` Patchwork
2026-02-06 1:06 ` ✗ Xe.CI.FULL: failure " Patchwork
6 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2026-02-05 7:38 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev4)
URL : https://patchwork.freedesktop.org/series/160587/
State : success
== Summary ==
CI Bug Log - changes from xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29_BAT -> xe-pw-160587v4_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (12 -> 12)
------------------------------
No changes in participating hosts
Changes
-------
No changes found
Build changes
-------------
* IGT: IGT_8737 -> IGT_8738
* Linux: xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29 -> xe-pw-160587v4
IGT_8737: 8737
IGT_8738: b3fc8fb534a27517f2a49e63ef993e7550b9b959 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29: 8883ec84c85819dacca94997b60371c3ed57ee29
xe-pw-160587v4: 160587v4
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/index.html
* ✗ Xe.CI.FULL: failure for Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev4)
2026-02-05 4:19 [PATCH v4 0/4] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
` (5 preceding siblings ...)
2026-02-05 7:38 ` ✓ Xe.CI.BAT: " Patchwork
@ 2026-02-06 1:06 ` Patchwork
6 siblings, 0 replies; 20+ messages in thread
From: Patchwork @ 2026-02-06 1:06 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe
== Series Details ==
Series: Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev4)
URL : https://patchwork.freedesktop.org/series/160587/
State : failure
== Summary ==
CI Bug Log - changes from xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29_FULL -> xe-pw-160587v4_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-160587v4_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-160587v4_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (2 -> 2)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-160587v4_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@xe_exec_system_allocator@once-free-race-nomemset:
- shard-bmg: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-9/igt@xe_exec_system_allocator@once-free-race-nomemset.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-7/igt@xe_exec_system_allocator@once-free-race-nomemset.html
Known issues
------------
Here are the changes found in xe-pw-160587v4_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@intel_hwmon@hwmon-read:
- shard-lnl: NOTRUN -> [SKIP][3] ([Intel XE#1125])
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-6/igt@intel_hwmon@hwmon-read.html
* igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
- shard-bmg: NOTRUN -> [SKIP][4] ([Intel XE#2370]) +1 other test skip
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html
* igt@kms_big_fb@4-tiled-64bpp-rotate-90:
- shard-bmg: NOTRUN -> [SKIP][5] ([Intel XE#2327]) +1 other test skip
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-8/igt@kms_big_fb@4-tiled-64bpp-rotate-90.html
- shard-lnl: NOTRUN -> [SKIP][6] ([Intel XE#1407])
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-4/igt@kms_big_fb@4-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@linear-max-hw-stride-64bpp-rotate-0-hflip:
- shard-bmg: NOTRUN -> [SKIP][7] ([Intel XE#7059]) +1 other test skip
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_big_fb@linear-max-hw-stride-64bpp-rotate-0-hflip.html
- shard-lnl: NOTRUN -> [SKIP][8] ([Intel XE#7059])
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-5/igt@kms_big_fb@linear-max-hw-stride-64bpp-rotate-0-hflip.html
* igt@kms_big_fb@y-tiled-8bpp-rotate-180:
- shard-lnl: NOTRUN -> [SKIP][9] ([Intel XE#1124]) +4 other tests skip
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-3/igt@kms_big_fb@y-tiled-8bpp-rotate-180.html
* igt@kms_big_fb@yf-tiled-8bpp-rotate-0:
- shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#1124]) +5 other tests skip
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-7/igt@kms_big_fb@yf-tiled-8bpp-rotate-0.html
* igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p:
- shard-bmg: [PASS][11] -> [SKIP][12] ([Intel XE#2314] / [Intel XE#2894])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-1/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_bw@connected-linear-tiling-2-displays-3840x2160p.html
* igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p:
- shard-bmg: NOTRUN -> [SKIP][13] ([Intel XE#2314] / [Intel XE#2894])
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@kms_bw@connected-linear-tiling-4-displays-2160x1440p.html
* igt@kms_bw@linear-tiling-2-displays-1920x1080p:
- shard-lnl: NOTRUN -> [SKIP][14] ([Intel XE#367]) +1 other test skip
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@kms_bw@linear-tiling-2-displays-1920x1080p.html
* igt@kms_bw@linear-tiling-3-displays-1920x1080p:
- shard-bmg: NOTRUN -> [SKIP][15] ([Intel XE#367]) +3 other tests skip
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@kms_bw@linear-tiling-3-displays-1920x1080p.html
* igt@kms_bw@linear-tiling-4-displays-1920x1080p:
- shard-lnl: NOTRUN -> [SKIP][16] ([Intel XE#1512])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-1/igt@kms_bw@linear-tiling-4-displays-1920x1080p.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc:
- shard-lnl: NOTRUN -> [SKIP][17] ([Intel XE#3432]) +1 other test skip
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-1/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-d-hdmi-a-3:
- shard-bmg: NOTRUN -> [SKIP][18] ([Intel XE#2652] / [Intel XE#787]) +8 other tests skip
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs@pipe-d-hdmi-a-3.html
* igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs:
- shard-bmg: NOTRUN -> [SKIP][19] ([Intel XE#3432])
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-8/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc:
- shard-bmg: NOTRUN -> [SKIP][20] ([Intel XE#2887]) +11 other tests skip
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs-cc.html
* igt@kms_ccs@random-ccs-data-4-tiled-mtl-mc-ccs:
- shard-lnl: NOTRUN -> [SKIP][21] ([Intel XE#2887]) +5 other tests skip
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-5/igt@kms_ccs@random-ccs-data-4-tiled-mtl-mc-ccs.html
* igt@kms_chamelium_color@gamma:
- shard-lnl: NOTRUN -> [SKIP][22] ([Intel XE#306]) +1 other test skip
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-6/igt@kms_chamelium_color@gamma.html
- shard-bmg: NOTRUN -> [SKIP][23] ([Intel XE#2325])
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@kms_chamelium_color@gamma.html
* igt@kms_chamelium_edid@hdmi-mode-timings:
- shard-lnl: NOTRUN -> [SKIP][24] ([Intel XE#373]) +4 other tests skip
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-6/igt@kms_chamelium_edid@hdmi-mode-timings.html
* igt@kms_chamelium_hpd@dp-hpd-storm:
- shard-bmg: NOTRUN -> [SKIP][25] ([Intel XE#2252]) +8 other tests skip
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_chamelium_hpd@dp-hpd-storm.html
* igt@kms_content_protection@dp-mst-lic-type-1:
- shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#2390] / [Intel XE#6974])
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@kms_content_protection@dp-mst-lic-type-1.html
* igt@kms_content_protection@dp-mst-type-1:
- shard-lnl: NOTRUN -> [SKIP][27] ([Intel XE#307] / [Intel XE#6974])
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@kms_content_protection@dp-mst-type-1.html
* igt@kms_content_protection@dp-mst-type-1-suspend-resume:
- shard-bmg: NOTRUN -> [SKIP][28] ([Intel XE#6974])
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_content_protection@dp-mst-type-1-suspend-resume.html
* igt@kms_content_protection@lic-type-0@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [FAIL][29] ([Intel XE#1178] / [Intel XE#3304])
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@kms_content_protection@lic-type-0@pipe-a-dp-2.html
* igt@kms_content_protection@lic-type-1:
- shard-bmg: NOTRUN -> [SKIP][30] ([Intel XE#2341])
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_content_protection@lic-type-1.html
* igt@kms_cursor_crc@cursor-offscreen-512x512:
- shard-bmg: NOTRUN -> [SKIP][31] ([Intel XE#2321]) +1 other test skip
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-7/igt@kms_cursor_crc@cursor-offscreen-512x512.html
* igt@kms_cursor_crc@cursor-onscreen-max-size:
- shard-bmg: NOTRUN -> [SKIP][32] ([Intel XE#2320]) +6 other tests skip
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-1/igt@kms_cursor_crc@cursor-onscreen-max-size.html
* igt@kms_cursor_crc@cursor-rapid-movement-512x512:
- shard-lnl: NOTRUN -> [SKIP][33] ([Intel XE#2321])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-5/igt@kms_cursor_crc@cursor-rapid-movement-512x512.html
* igt@kms_cursor_crc@cursor-rapid-movement-max-size:
- shard-lnl: NOTRUN -> [SKIP][34] ([Intel XE#1424]) +3 other tests skip
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-8/igt@kms_cursor_crc@cursor-rapid-movement-max-size.html
* igt@kms_cursor_legacy@cursor-vs-flip-varying-size:
- shard-bmg: [PASS][35] -> [DMESG-WARN][36] ([Intel XE#5354])
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_cursor_legacy@cursor-vs-flip-varying-size.html
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@kms_cursor_legacy@cursor-vs-flip-varying-size.html
* igt@kms_cursor_legacy@cursora-vs-flipb-atomic:
- shard-lnl: NOTRUN -> [SKIP][37] ([Intel XE#309])
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-1/igt@kms_cursor_legacy@cursora-vs-flipb-atomic.html
* igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size:
- shard-bmg: [PASS][38] -> [SKIP][39] ([Intel XE#2291]) +1 other test skip
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-8/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@flip-vs-cursor-legacy:
- shard-bmg: NOTRUN -> [FAIL][40] ([Intel XE#4633])
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
* igt@kms_dirtyfb@fbc-dirtyfb-ioctl:
- shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#4210])
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_dirtyfb@fbc-dirtyfb-ioctl.html
* igt@kms_display_modes@extended-mode-basic:
- shard-bmg: NOTRUN -> [SKIP][42] ([Intel XE#4302])
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_display_modes@extended-mode-basic.html
* igt@kms_dp_linktrain_fallback@dp-fallback:
- shard-bmg: [PASS][43] -> [SKIP][44] ([Intel XE#4294])
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-8/igt@kms_dp_linktrain_fallback@dp-fallback.html
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_dp_linktrain_fallback@dp-fallback.html
* igt@kms_dsc@dsc-with-bpc-formats:
- shard-lnl: NOTRUN -> [SKIP][45] ([Intel XE#2244]) +1 other test skip
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@kms_dsc@dsc-with-bpc-formats.html
* igt@kms_dsc@dsc-with-output-formats:
- shard-bmg: NOTRUN -> [SKIP][46] ([Intel XE#2244])
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_dsc@dsc-with-output-formats.html
* igt@kms_feature_discovery@dp-mst:
- shard-lnl: NOTRUN -> [SKIP][47] ([Intel XE#1137])
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-5/igt@kms_feature_discovery@dp-mst.html
- shard-bmg: NOTRUN -> [SKIP][48] ([Intel XE#2375])
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_feature_discovery@dp-mst.html
* igt@kms_flip@2x-blocking-absolute-wf_vblank-interruptible:
- shard-lnl: NOTRUN -> [SKIP][49] ([Intel XE#1421]) +2 other tests skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-1/igt@kms_flip@2x-blocking-absolute-wf_vblank-interruptible.html
* igt@kms_flip@2x-modeset-vs-vblank-race-interruptible:
- shard-bmg: [PASS][50] -> [SKIP][51] ([Intel XE#2316]) +9 other tests skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-4/igt@kms_flip@2x-modeset-vs-vblank-race-interruptible.html
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_flip@2x-modeset-vs-vblank-race-interruptible.html
* igt@kms_flip_scaled_crc@flip-32bpp-linear-to-32bpp-linear-reflect-x:
- shard-lnl: NOTRUN -> [SKIP][52] ([Intel XE#7179])
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@kms_flip_scaled_crc@flip-32bpp-linear-to-32bpp-linear-reflect-x.html
* igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling:
- shard-lnl: NOTRUN -> [SKIP][53] ([Intel XE#7178]) +3 other tests skip
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-6/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling:
- shard-bmg: NOTRUN -> [SKIP][54] ([Intel XE#7178]) +3 other tests skip
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-8/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling.html
* igt@kms_force_connector_basic@force-connector-state:
- shard-lnl: NOTRUN -> [SKIP][55] ([Intel XE#352])
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-4/igt@kms_force_connector_basic@force-connector-state.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-draw-render:
- shard-lnl: NOTRUN -> [SKIP][56] ([Intel XE#651]) +5 other tests skip
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-3/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-fullscreen:
- shard-bmg: NOTRUN -> [SKIP][57] ([Intel XE#2311]) +18 other tests skip
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-fullscreen.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-shrfb-draw-blt:
- shard-lnl: NOTRUN -> [SKIP][58] ([Intel XE#656]) +27 other tests skip
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-shrfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-argb161616f-draw-blt:
- shard-bmg: NOTRUN -> [SKIP][59] ([Intel XE#7061]) +3 other tests skip
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-argb161616f-draw-blt.html
* igt@kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][60] ([Intel XE#4141]) +9 other tests skip
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-indfb-draw-render:
- shard-lnl: NOTRUN -> [SKIP][61] ([Intel XE#6312])
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-5/igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbcdrrs-argb161616f-draw-mmap-wc:
- shard-lnl: NOTRUN -> [SKIP][62] ([Intel XE#7061]) +2 other tests skip
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@kms_frontbuffer_tracking@fbcdrrs-argb161616f-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-blt:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#2312]) +16 other tests skip
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-spr-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
- shard-bmg: NOTRUN -> [SKIP][64] ([Intel XE#2313]) +17 other tests skip
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
* igt@kms_hdr@static-toggle:
- shard-lnl: NOTRUN -> [SKIP][65] ([Intel XE#1503])
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-8/igt@kms_hdr@static-toggle.html
* igt@kms_joiner@invalid-modeset-ultra-joiner:
- shard-lnl: NOTRUN -> [SKIP][66] ([Intel XE#6900]) +1 other test skip
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-5/igt@kms_joiner@invalid-modeset-ultra-joiner.html
- shard-bmg: NOTRUN -> [SKIP][67] ([Intel XE#6911])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_joiner@invalid-modeset-ultra-joiner.html
* igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier@pipe-a-plane-5:
- shard-bmg: NOTRUN -> [SKIP][68] ([Intel XE#7130]) +20 other tests skip
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier@pipe-a-plane-5.html
* igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier@pipe-b-plane-5:
- shard-bmg: NOTRUN -> [SKIP][69] ([Intel XE#7111] / [Intel XE#7130]) +5 other tests skip
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@kms_plane@pixel-format-4-tiled-dg2-rc-ccs-cc-modifier@pipe-b-plane-5.html
* igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier-source-clamping:
- shard-bmg: NOTRUN -> [SKIP][70] ([Intel XE#7111] / [Intel XE#7130] / [Intel XE#7131])
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier-source-clamping.html
* igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier-source-clamping@pipe-a-plane-5:
- shard-lnl: NOTRUN -> [SKIP][71] ([Intel XE#7131]) +1 other test skip
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier-source-clamping@pipe-a-plane-5.html
- shard-bmg: NOTRUN -> [SKIP][72] ([Intel XE#7131])
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier-source-clamping@pipe-a-plane-5.html
* igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier-source-clamping@pipe-b-plane-5:
- shard-bmg: NOTRUN -> [SKIP][73] ([Intel XE#7111] / [Intel XE#7131])
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier-source-clamping@pipe-b-plane-5.html
* igt@kms_plane@pixel-format-x-tiled-modifier@pipe-b-plane-5:
- shard-lnl: NOTRUN -> [SKIP][74] ([Intel XE#7130]) +15 other tests skip
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-4/igt@kms_plane@pixel-format-x-tiled-modifier@pipe-b-plane-5.html
* igt@kms_plane_multiple@2x-tiling-yf:
- shard-lnl: NOTRUN -> [SKIP][75] ([Intel XE#4596])
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-3/igt@kms_plane_multiple@2x-tiling-yf.html
* igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-factor-0-25:
- shard-lnl: NOTRUN -> [SKIP][76] ([Intel XE#6886]) +7 other tests skip
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-factor-0-25.html
* igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-b:
- shard-bmg: NOTRUN -> [SKIP][77] ([Intel XE#6886]) +9 other tests skip
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-5@pipe-b.html
* igt@kms_pm_dc@dc3co-vpb-simulation:
- shard-lnl: NOTRUN -> [SKIP][78] ([Intel XE#736])
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@kms_pm_dc@dc3co-vpb-simulation.html
- shard-bmg: NOTRUN -> [SKIP][79] ([Intel XE#2391])
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_pm_dc@dc3co-vpb-simulation.html
* igt@kms_pm_dc@deep-pkgc:
- shard-lnl: NOTRUN -> [FAIL][80] ([Intel XE#2029])
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@kms_pm_dc@deep-pkgc.html
- shard-bmg: NOTRUN -> [SKIP][81] ([Intel XE#2505])
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_pm_dc@deep-pkgc.html
* igt@kms_pm_rpm@modeset-lpsp-stress-no-wait:
- shard-bmg: NOTRUN -> [SKIP][82] ([Intel XE#1439] / [Intel XE#3141] / [Intel XE#836])
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_pm_rpm@modeset-lpsp-stress-no-wait.html
* igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-exceed-fully-sf:
- shard-lnl: NOTRUN -> [SKIP][83] ([Intel XE#1406] / [Intel XE#2893]) +2 other tests skip
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-exceed-fully-sf.html
* igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area:
- shard-bmg: NOTRUN -> [SKIP][84] ([Intel XE#1406] / [Intel XE#1489]) +3 other tests skip
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area.html
* igt@kms_psr2_su@frontbuffer-xrgb8888:
- shard-bmg: NOTRUN -> [SKIP][85] ([Intel XE#1406] / [Intel XE#2387])
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_psr2_su@frontbuffer-xrgb8888.html
* igt@kms_psr2_su@page_flip-nv12:
- shard-lnl: NOTRUN -> [SKIP][86] ([Intel XE#1128] / [Intel XE#1406])
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-4/igt@kms_psr2_su@page_flip-nv12.html
* igt@kms_psr@fbc-psr2-primary-page-flip@edp-1:
- shard-lnl: NOTRUN -> [SKIP][87] ([Intel XE#1406] / [Intel XE#4609]) +1 other test skip
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-8/igt@kms_psr@fbc-psr2-primary-page-flip@edp-1.html
* igt@kms_psr@fbc-psr2-sprite-plane-onoff:
- shard-lnl: NOTRUN -> [SKIP][88] ([Intel XE#1406]) +2 other tests skip
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@kms_psr@fbc-psr2-sprite-plane-onoff.html
* igt@kms_psr@psr-suspend:
- shard-bmg: NOTRUN -> [SKIP][89] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +9 other tests skip
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@kms_psr@psr-suspend.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-90:
- shard-lnl: NOTRUN -> [SKIP][90] ([Intel XE#3414] / [Intel XE#3904])
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-4/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
- shard-bmg: NOTRUN -> [SKIP][91] ([Intel XE#3414] / [Intel XE#3904])
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-8/igt@kms_rotation_crc@primary-y-tiled-reflect-x-90.html
* igt@kms_setmode@invalid-clone-single-crtc-stealing:
- shard-bmg: [PASS][92] -> [SKIP][93] ([Intel XE#1435])
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-8/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
* igt@kms_sharpness_filter@invalid-filter-with-scaling-mode:
- shard-bmg: NOTRUN -> [SKIP][94] ([Intel XE#6503]) +3 other tests skip
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_sharpness_filter@invalid-filter-with-scaling-mode.html
* igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1:
- shard-lnl: [PASS][95] -> [FAIL][96] ([Intel XE#2142]) +1 other test fail
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-lnl-2/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@kms_vrr@seamless-rr-switch-virtual@pipe-a-edp-1.html
* igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all:
- shard-lnl: NOTRUN -> [SKIP][97] ([Intel XE#1091] / [Intel XE#2849])
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-3/igt@sriov_basic@enable-vfs-bind-unbind-each-numvfs-all.html
* igt@xe_compute@eu-busy-10s:
- shard-bmg: NOTRUN -> [SKIP][98] ([Intel XE#6599])
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@xe_compute@eu-busy-10s.html
* igt@xe_eudebug@basic-vm-bind-metadata-discovery:
- shard-bmg: NOTRUN -> [SKIP][99] ([Intel XE#4837]) +7 other tests skip
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@xe_eudebug@basic-vm-bind-metadata-discovery.html
* igt@xe_eudebug@multiple-sessions:
- shard-lnl: NOTRUN -> [SKIP][100] ([Intel XE#4837]) +4 other tests skip
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@xe_eudebug@multiple-sessions.html
* igt@xe_eudebug_online@interrupt-other:
- shard-bmg: NOTRUN -> [SKIP][101] ([Intel XE#4837] / [Intel XE#6665]) +3 other tests skip
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@xe_eudebug_online@interrupt-other.html
* igt@xe_eudebug_online@pagefault-write-stress:
- shard-lnl: NOTRUN -> [SKIP][102] ([Intel XE#6665])
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-5/igt@xe_eudebug_online@pagefault-write-stress.html
* igt@xe_eudebug_online@writes-caching-sram-bb-sram-target-sram:
- shard-lnl: NOTRUN -> [SKIP][103] ([Intel XE#4837] / [Intel XE#6665]) +2 other tests skip
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-5/igt@xe_eudebug_online@writes-caching-sram-bb-sram-target-sram.html
* igt@xe_evict@evict-large-multi-vm:
- shard-lnl: NOTRUN -> [SKIP][104] ([Intel XE#688]) +4 other tests skip
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-6/igt@xe_evict@evict-large-multi-vm.html
* igt@xe_evict@evict-mixed-many-threads-small:
- shard-bmg: [PASS][105] -> [INCOMPLETE][106] ([Intel XE#6321])
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-6/igt@xe_evict@evict-mixed-many-threads-small.html
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@xe_evict@evict-mixed-many-threads-small.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate:
- shard-bmg: NOTRUN -> [SKIP][107] ([Intel XE#2322]) +6 other tests skip
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate.html
* igt@xe_exec_basic@multigpu-no-exec-userptr:
- shard-lnl: NOTRUN -> [SKIP][108] ([Intel XE#1392]) +3 other tests skip
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@xe_exec_basic@multigpu-no-exec-userptr.html
* igt@xe_exec_fault_mode@twice-multi-queue-imm:
- shard-bmg: NOTRUN -> [SKIP][109] ([Intel XE#7136]) +9 other tests skip
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@xe_exec_fault_mode@twice-multi-queue-imm.html
* igt@xe_exec_fault_mode@twice-multi-queue-userptr-invalidate-race-imm:
- shard-lnl: NOTRUN -> [SKIP][110] ([Intel XE#7136]) +6 other tests skip
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@xe_exec_fault_mode@twice-multi-queue-userptr-invalidate-race-imm.html
* igt@xe_exec_multi_queue@few-execs-preempt-mode-userptr-invalidate:
- shard-bmg: NOTRUN -> [SKIP][111] ([Intel XE#6874]) +27 other tests skip
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@xe_exec_multi_queue@few-execs-preempt-mode-userptr-invalidate.html
* igt@xe_exec_multi_queue@two-queues-close-fd-smem:
- shard-lnl: NOTRUN -> [SKIP][112] ([Intel XE#6874]) +18 other tests skip
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-8/igt@xe_exec_multi_queue@two-queues-close-fd-smem.html
* igt@xe_exec_system_allocator@many-64k-mmap-free-huge:
- shard-lnl: NOTRUN -> [SKIP][113] ([Intel XE#5007])
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@xe_exec_system_allocator@many-64k-mmap-free-huge.html
* igt@xe_exec_system_allocator@many-64k-mmap-new-huge-nomemset:
- shard-bmg: NOTRUN -> [SKIP][114] ([Intel XE#5007])
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@xe_exec_system_allocator@many-64k-mmap-new-huge-nomemset.html
* igt@xe_exec_system_allocator@threads-many-mmap-new-huge-nomemset:
- shard-bmg: NOTRUN -> [SKIP][115] ([Intel XE#4943]) +22 other tests skip
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@xe_exec_system_allocator@threads-many-mmap-new-huge-nomemset.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-large-mmap-new-huge-nomemset:
- shard-lnl: NOTRUN -> [SKIP][116] ([Intel XE#4943]) +15 other tests skip
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-8/igt@xe_exec_system_allocator@threads-shared-vm-many-large-mmap-new-huge-nomemset.html
* igt@xe_exec_threads@threads-many-queues:
- shard-lnl: NOTRUN -> [FAIL][117] ([Intel XE#7166])
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@xe_exec_threads@threads-many-queues.html
- shard-bmg: NOTRUN -> [FAIL][118] ([Intel XE#7166])
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@xe_exec_threads@threads-many-queues.html
* igt@xe_exec_threads@threads-multi-queue-rebind:
- shard-bmg: NOTRUN -> [SKIP][119] ([Intel XE#7138]) +7 other tests skip
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@xe_exec_threads@threads-multi-queue-rebind.html
* igt@xe_exec_threads@threads-multi-queue-shared-vm-basic:
- shard-lnl: NOTRUN -> [SKIP][120] ([Intel XE#7138]) +3 other tests skip
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@xe_exec_threads@threads-multi-queue-shared-vm-basic.html
* igt@xe_media_fill@media-fill:
- shard-lnl: NOTRUN -> [SKIP][121] ([Intel XE#560])
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-8/igt@xe_media_fill@media-fill.html
* igt@xe_multigpu_svm@mgpu-latency-basic:
- shard-bmg: NOTRUN -> [SKIP][122] ([Intel XE#6964]) +4 other tests skip
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@xe_multigpu_svm@mgpu-latency-basic.html
* igt@xe_multigpu_svm@mgpu-xgpu-access-prefetch:
- shard-lnl: NOTRUN -> [SKIP][123] ([Intel XE#6964]) +1 other test skip
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-5/igt@xe_multigpu_svm@mgpu-xgpu-access-prefetch.html
* igt@xe_pm@d3hot-mmap-vram:
- shard-lnl: NOTRUN -> [SKIP][124] ([Intel XE#1948])
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-4/igt@xe_pm@d3hot-mmap-vram.html
* igt@xe_pm@s2idle-d3cold-basic-exec:
- shard-bmg: NOTRUN -> [SKIP][125] ([Intel XE#2284])
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-3/igt@xe_pm@s2idle-d3cold-basic-exec.html
* igt@xe_pm@s4-d3cold-basic-exec:
- shard-lnl: NOTRUN -> [SKIP][126] ([Intel XE#2284] / [Intel XE#366])
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-1/igt@xe_pm@s4-d3cold-basic-exec.html
* igt@xe_pxp@pxp-stale-bo-exec-post-suspend:
- shard-lnl: [PASS][127] -> [ABORT][128] ([Intel XE#7169])
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-lnl-1/igt@xe_pxp@pxp-stale-bo-exec-post-suspend.html
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-2/igt@xe_pxp@pxp-stale-bo-exec-post-suspend.html
* igt@xe_pxp@pxp-stale-bo-exec-post-termination-irq:
- shard-bmg: NOTRUN -> [SKIP][129] ([Intel XE#4733]) +1 other test skip
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@xe_pxp@pxp-stale-bo-exec-post-termination-irq.html
* igt@xe_query@multigpu-query-engines:
- shard-lnl: NOTRUN -> [SKIP][130] ([Intel XE#944]) +1 other test skip
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-1/igt@xe_query@multigpu-query-engines.html
* igt@xe_query@multigpu-query-invalid-cs-cycles:
- shard-bmg: NOTRUN -> [SKIP][131] ([Intel XE#944]) +2 other tests skip
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@xe_query@multigpu-query-invalid-cs-cycles.html
* igt@xe_sriov_admin@bulk-sched-priority-vfs-disabled:
- shard-lnl: NOTRUN -> [SKIP][132] ([Intel XE#7174]) +1 other test skip
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-3/igt@xe_sriov_admin@bulk-sched-priority-vfs-disabled.html
* igt@xe_sriov_flr@flr-vfs-parallel:
- shard-lnl: NOTRUN -> [SKIP][133] ([Intel XE#4273])
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-7/igt@xe_sriov_flr@flr-vfs-parallel.html
* igt@xe_survivability@runtime-survivability:
- shard-bmg: [PASS][134] -> [DMESG-WARN][135] ([Intel XE#6627])
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@xe_survivability@runtime-survivability.html
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@xe_survivability@runtime-survivability.html
#### Possible fixes ####
* igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1:
- shard-lnl: [FAIL][136] ([Intel XE#6054]) -> [PASS][137] +3 other tests pass
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-8/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html
* igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p:
- shard-bmg: [SKIP][138] ([Intel XE#2314] / [Intel XE#2894]) -> [PASS][139]
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html
* igt@kms_cursor_legacy@cursora-vs-flipb-varying-size:
- shard-bmg: [DMESG-WARN][140] ([Intel XE#5354]) -> [PASS][141]
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-3/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-bmg: [SKIP][142] ([Intel XE#2291]) -> [PASS][143] +3 other tests pass
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-9/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_dp_aux_dev:
- shard-bmg: [SKIP][144] ([Intel XE#3009]) -> [PASS][145]
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-6/igt@kms_dp_aux_dev.html
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@kms_dp_aux_dev.html
* igt@kms_flip@2x-nonexisting-fb:
- shard-bmg: [SKIP][146] ([Intel XE#2316]) -> [PASS][147] +6 other tests pass
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_flip@2x-nonexisting-fb.html
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_flip@2x-nonexisting-fb.html
* igt@kms_flip@flip-vs-expired-vblank@b-edp1:
- shard-lnl: [FAIL][148] ([Intel XE#301]) -> [PASS][149] +1 other test pass
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank@b-edp1.html
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-6/igt@kms_flip@flip-vs-expired-vblank@b-edp1.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-bmg: [INCOMPLETE][150] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][151] +3 other tests pass
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-10/igt@kms_flip@flip-vs-suspend-interruptible.html
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_hdr@static-toggle-dpms:
- shard-bmg: [SKIP][152] ([Intel XE#1503]) -> [PASS][153]
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_hdr@static-toggle-dpms.html
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_hdr@static-toggle-dpms.html
* igt@kms_plane_multiple@2x-tiling-4:
- shard-bmg: [SKIP][154] ([Intel XE#4596]) -> [PASS][155]
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-6/igt@kms_plane_multiple@2x-tiling-4.html
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-1/igt@kms_plane_multiple@2x-tiling-4.html
* igt@kms_plane_scaling@2x-scaler-multi-pipe:
- shard-bmg: [SKIP][156] ([Intel XE#2571]) -> [PASS][157]
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_plane_scaling@2x-scaler-multi-pipe.html
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-8/igt@kms_plane_scaling@2x-scaler-multi-pipe.html
* igt@kms_pm_dc@dc6-dpms:
- shard-lnl: [FAIL][158] ([Intel XE#718]) -> [PASS][159]
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-lnl-8/igt@kms_pm_dc@dc6-dpms.html
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-lnl-6/igt@kms_pm_dc@dc6-dpms.html
#### Warnings ####
* igt@kms_chamelium_edid@dp-edid-stress-resolution-non-4k:
- shard-bmg: [SKIP][160] ([Intel XE#2252]) -> [ABORT][161] ([Intel XE#7169])
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_chamelium_edid@dp-edid-stress-resolution-non-4k.html
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-7/igt@kms_chamelium_edid@dp-edid-stress-resolution-non-4k.html
* igt@kms_content_protection@atomic-dpms:
- shard-bmg: [FAIL][162] ([Intel XE#1178] / [Intel XE#3304]) -> [SKIP][163] ([Intel XE#2341])
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-8/igt@kms_content_protection@atomic-dpms.html
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_content_protection@atomic-dpms.html
* igt@kms_content_protection@lic-type-0:
- shard-bmg: [SKIP][164] ([Intel XE#2341]) -> [FAIL][165] ([Intel XE#1178] / [Intel XE#3304])
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-6/igt@kms_content_protection@lic-type-0.html
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@kms_content_protection@lic-type-0.html
* igt@kms_content_protection@lic-type-0-hdcp14:
- shard-bmg: [FAIL][166] ([Intel XE#3304]) -> [SKIP][167] ([Intel XE#7194])
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-2/igt@kms_content_protection@lic-type-0-hdcp14.html
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_content_protection@lic-type-0-hdcp14.html
* igt@kms_flip@2x-flip-vs-rmfb-interruptible:
- shard-bmg: [INCOMPLETE][168] ([Intel XE#1727] / [Intel XE#6819]) -> [SKIP][169] ([Intel XE#2316])
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-2/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_flip@2x-flip-vs-rmfb-interruptible.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][170] ([Intel XE#2312]) -> [SKIP][171] ([Intel XE#4141]) +4 other tests skip
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-indfb-draw-mmap-wc.html
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-4/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][172] ([Intel XE#4141]) -> [SKIP][173] ([Intel XE#2312]) +8 other tests skip
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc.html
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][174] ([Intel XE#2312]) -> [SKIP][175] ([Intel XE#2311]) +13 other tests skip
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][176] ([Intel XE#2311]) -> [SKIP][177] ([Intel XE#2312]) +13 other tests skip
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-3/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-indfb-pgflip-blt.html
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: [ABORT][178] ([Intel XE#7169]) -> [SKIP][179] ([Intel XE#2313])
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-mmap-wc.html
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-msflip-blt:
- shard-bmg: [SKIP][180] ([Intel XE#2313]) -> [SKIP][181] ([Intel XE#2312]) +10 other tests skip
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-msflip-blt.html
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-spr-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][182] ([Intel XE#2312]) -> [SKIP][183] ([Intel XE#2313]) +14 other tests skip
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-spr-indfb-draw-mmap-wc.html
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-8/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_hdr@brightness-with-hdr:
- shard-bmg: [SKIP][184] ([Intel XE#3374] / [Intel XE#3544]) -> [SKIP][185] ([Intel XE#3544])
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-1/igt@kms_hdr@brightness-with-hdr.html
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-2/igt@kms_hdr@brightness-with-hdr.html
* igt@kms_plane_multiple@2x-tiling-yf:
- shard-bmg: [SKIP][186] ([Intel XE#4596]) -> [SKIP][187] ([Intel XE#5021])
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-5/igt@kms_plane_multiple@2x-tiling-yf.html
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-10/igt@kms_plane_multiple@2x-tiling-yf.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-bmg: [SKIP][188] ([Intel XE#2509]) -> [SKIP][189] ([Intel XE#2426])
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29/shard-bmg-4/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/shard-bmg-8/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1125]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1125
[Intel XE#1128]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1128
[Intel XE#1137]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1137
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
[Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
[Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1512
[Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
[Intel XE#1948]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1948
[Intel XE#2029]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2029
[Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
[Intel XE#2142]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2142
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
[Intel XE#2370]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2370
[Intel XE#2375]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2375
[Intel XE#2387]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2387
[Intel XE#2390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2390
[Intel XE#2391]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2391
[Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
[Intel XE#2505]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2505
[Intel XE#2509]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2509
[Intel XE#2571]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2571
[Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2849]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2849
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
Build changes
-------------
* IGT: IGT_8737 -> IGT_8738
* Linux: xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29 -> xe-pw-160587v4
IGT_8737: 8737
IGT_8738: b3fc8fb534a27517f2a49e63ef993e7550b9b959 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-4503-8883ec84c85819dacca94997b60371c3ed57ee29: 8883ec84c85819dacca94997b60371c3ed57ee29
xe-pw-160587v4: 160587v4
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-160587v4/index.html
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v4 2/4] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM
2026-02-05 4:19 ` [PATCH v4 2/4] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
@ 2026-02-09 9:44 ` Thomas Hellström
2026-02-09 16:13 ` Matthew Brost
0 siblings, 1 reply; 20+ messages in thread
From: Thomas Hellström @ 2026-02-09 9:44 UTC (permalink / raw)
To: Matthew Brost, intel-xe, dri-devel
Cc: leonro, jgg, francois.dugast, himal.prasad.ghimiray
On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> The dma-map IOVA alloc, link, and sync APIs perform significantly
> better than dma-map / dma-unmap, as they avoid costly IOMMU
> synchronizations. This difference is especially noticeable when
> mapping a 2MB region in 4KB pages.
>
> Use the IOVA alloc, link, and sync APIs for GPU SVM, which create DMA
> mappings between the CPU and GPU.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> v3:
> - Always link IOVA in mixed mappings
> - Sync IOVA
> v4:
> - Initialize IOVA state in get_pages
> - Use pack IOVA linking (Jason)
> - s/page_to_phys/hmm_pfn_to_phys (Leon)
>
> drivers/gpu/drm/drm_gpusvm.c | 55 ++++++++++++++++++++++++++++++------
> include/drm/drm_gpusvm.h | 5 ++++
> 2 files changed, 52 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 4b8130a4ce95..800caaf0a783 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -1139,11 +1139,19 @@ static void __drm_gpusvm_unmap_pages(struct drm_gpusvm *gpusvm,
> struct drm_gpusvm_pages_flags flags = {
> .__flags = svm_pages->flags.__flags,
> };
> + bool use_iova = dma_use_iova(&svm_pages->state);
> +
> + if (use_iova) {
> + dma_iova_unlink(dev, &svm_pages->state, 0,
> + svm_pages->state_offset,
> + svm_pages->dma_addr[0].dir, 0);
> + dma_iova_free(dev, &svm_pages->state);
> + }
>
> for (i = 0, j = 0; i < npages; j++) {
> struct drm_pagemap_addr *addr = &svm_pages->dma_addr[j];
>
> - if (addr->proto == DRM_INTERCONNECT_SYSTEM)
> + if (!use_iova && addr->proto == DRM_INTERCONNECT_SYSTEM)
> dma_unmap_page(dev,
> addr->addr,
> PAGE_SIZE << addr->order,
> @@ -1408,6 +1416,7 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
> struct drm_gpusvm_pages_flags flags;
> enum dma_data_direction dma_dir = ctx->read_only ? DMA_TO_DEVICE :
> DMA_BIDIRECTIONAL;
> + struct dma_iova_state *state = &svm_pages->state;
>
> retry:
> if (time_after(jiffies, timeout))
> @@ -1446,6 +1455,9 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
> if (err)
> goto err_free;
>
> + *state = (struct dma_iova_state){};
> + svm_pages->state_offset = 0;
> +
> map_pages:
> /*
> * Perform all dma mappings under the notifier lock to not
> @@ -1539,13 +1551,33 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
> goto err_unmap;
> }
>
> - addr = dma_map_page(gpusvm->drm->dev,
> - page, 0,
> - PAGE_SIZE << order,
> - dma_dir);
> - if (dma_mapping_error(gpusvm->drm->dev, addr)) {
> - err = -EFAULT;
> - goto err_unmap;
> + if (!i)
> + dma_iova_try_alloc(gpusvm->drm->dev, state,
> + npages * PAGE_SIZE >=
> + HPAGE_PMD_SIZE ?
> + HPAGE_PMD_SIZE : 0,
Doc says "callers that always do PAGE_SIZE aligned transfers can always
pass 0 here", so can be simplified?
> + npages * PAGE_SIZE);
> +
> + if (dma_use_iova(state)) {
> + err = dma_iova_link(gpusvm->drm->dev, state,
> + hmm_pfn_to_phys(pfns[i]),
> + svm_pages->state_offset,
> + PAGE_SIZE << order,
> + dma_dir, 0);
> + if (err)
> + goto err_unmap;
> +
> + addr = state->addr + svm_pages->state_offset;
> + svm_pages->state_offset += PAGE_SIZE << order;
> + } else {
> + addr = dma_map_page(gpusvm->drm->dev,
> + page, 0,
> + PAGE_SIZE << order,
> + dma_dir);
> + if (dma_mapping_error(gpusvm->drm->dev, addr)) {
> + err = -EFAULT;
> + goto err_unmap;
> + }
> }
>
> svm_pages->dma_addr[j] =
> drm_pagemap_addr_encode
> @@ -1557,6 +1589,13 @@ int drm_gpusvm_get_pages(struct drm_gpusvm *gpusvm,
> flags.has_dma_mapping = true;
> }
>
> + if (dma_use_iova(state)) {
> + err = dma_iova_sync(gpusvm->drm->dev, state, 0,
> + svm_pages->state_offset);
> + if (err)
> + goto err_unmap;
> + }
> +
> if (pagemap) {
> flags.has_devmem_pages = true;
> drm_pagemap_get(dpagemap);
> diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
> index 2578ac92a8d4..cd94bb2ee6ee 100644
> --- a/include/drm/drm_gpusvm.h
> +++ b/include/drm/drm_gpusvm.h
> @@ -6,6 +6,7 @@
> #ifndef __DRM_GPUSVM_H__
> #define __DRM_GPUSVM_H__
>
> +#include <linux/dma-mapping.h>
> #include <linux/kref.h>
> #include <linux/interval_tree.h>
> #include <linux/mmu_notifier.h>
> @@ -136,6 +137,8 @@ struct drm_gpusvm_pages_flags {
> * @dma_addr: Device address array
> * @dpagemap: The struct drm_pagemap of the device pages we're dma-mapping.
> * Note this is assuming only one drm_pagemap per range is allowed.
> + * @state: DMA IOVA state for mapping.
> + * @state_offset: DMA IOVA offset for mapping.
> * @notifier_seq: Notifier sequence number of the range's pages
> * @flags: Flags for range
> * @flags.migrate_devmem: Flag indicating whether the range can be migrated to device memory
> @@ -147,6 +150,8 @@ struct drm_gpusvm_pages_flags {
> struct drm_gpusvm_pages {
> struct drm_pagemap_addr *dma_addr;
> struct drm_pagemap *dpagemap;
> + struct dma_iova_state state;
> + unsigned long state_offset;
> unsigned long notifier_seq;
> struct drm_gpusvm_pages_flags flags;
> };
Otherwise LGTM.
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
* Re: [PATCH v4 3/4] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
2026-02-05 4:19 ` [PATCH v4 3/4] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
@ 2026-02-09 15:49 ` Thomas Hellström
2026-02-09 16:58 ` Matthew Brost
0 siblings, 1 reply; 20+ messages in thread
From: Thomas Hellström @ 2026-02-09 15:49 UTC (permalink / raw)
To: Matthew Brost, intel-xe, dri-devel
Cc: leonro, jgg, francois.dugast, himal.prasad.ghimiray
On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> Split drm_pagemap_migrate_map_pages into device / system helpers,
> clearly separating these operations. This will help with upcoming
> changes to split out the IOVA allocation steps.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/drm_pagemap.c | 146 ++++++++++++++++++++++------------
> 1 file changed, 96 insertions(+), 50 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_pagemap.c b/drivers/gpu/drm/drm_pagemap.c
> index fbd69f383457..29677b19bb69 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -205,7 +205,7 @@ static void drm_pagemap_get_devmem_page(struct page *page,
> }
>
> /**
> - * drm_pagemap_migrate_map_pages() - Map migration pages for GPU SVM migration
> + * drm_pagemap_migrate_map_device_pages() - Map device migration pages for GPU SVM migration
> * @dev: The device performing the migration.
> * @local_dpagemap: The drm_pagemap local to the migrating device.
> * @pagemap_addr: Array to store DMA information corresponding to mapped pages.
> @@ -221,19 +221,22 @@ static void drm_pagemap_get_devmem_page(struct page *page,
> *
> * Returns: 0 on success, -EFAULT if an error occurs during mapping.
> */
> -static int drm_pagemap_migrate_map_pages(struct device *dev,
> - struct drm_pagemap *local_dpagemap,
> - struct drm_pagemap_addr *pagemap_addr,
> - unsigned long *migrate_pfn,
> - unsigned long npages,
> - enum dma_data_direction dir,
> - const struct drm_pagemap_migrate_details *mdetails)
> +static int
> +drm_pagemap_migrate_map_device_pages(struct device *dev,
> + struct drm_pagemap *local_dpagemap,
> + struct drm_pagemap_addr *pagemap_addr,
> + unsigned long *migrate_pfn,
> + unsigned long npages,
> + enum dma_data_direction dir,
> + const struct drm_pagemap_migrate_details *mdetails)
We might want to call this device_private pages. Device coherent pages
are treated like system pages here, but I figure those are known to the
dma subsystem and can be handled by the map_system_pages callback.
> {
> unsigned long num_peer_pages = 0, num_local_pages = 0, i;
>
> for (i = 0; i < npages;) {
> struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> - dma_addr_t dma_addr;
> + struct drm_pagemap_zdd *zdd;
> + struct drm_pagemap *dpagemap;
> + struct drm_pagemap_addr addr;
> struct folio *folio;
> unsigned int order = 0;
>
> @@ -243,36 +246,26 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
> folio = page_folio(page);
> order = folio_order(folio);
>
> - if (is_device_private_page(page)) {
> - struct drm_pagemap_zdd *zdd = drm_pagemap_page_zone_device_data(page);
> - struct drm_pagemap *dpagemap = zdd->dpagemap;
> - struct drm_pagemap_addr addr;
> -
> - if (dpagemap == local_dpagemap) {
> - if (!mdetails->can_migrate_same_pagemap)
> - goto next;
> + WARN_ON_ONCE(!is_device_private_page(page));
>
> - num_local_pages += NR_PAGES(order);
> - } else {
> - num_peer_pages += NR_PAGES(order);
> - }
> + zdd = drm_pagemap_page_zone_device_data(page);
> + dpagemap = zdd->dpagemap;
>
> - addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
> - if (dma_mapping_error(dev, addr.addr))
> - return -EFAULT;
> + if (dpagemap == local_dpagemap) {
> + if (!mdetails->can_migrate_same_pagemap)
> + goto next;
>
> - pagemap_addr[i] = addr;
> + num_local_pages += NR_PAGES(order);
> } else {
> - dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
> - if (dma_mapping_error(dev, dma_addr))
> - return -EFAULT;
> -
> - pagemap_addr[i] =
> - drm_pagemap_addr_encode(dma_addr,
> - DRM_INTERCONNECT_SYSTEM,
> - order, dir);
> + num_peer_pages += NR_PAGES(order);
> }
>
> + addr = dpagemap->ops->device_map(dpagemap, dev, page, order, dir);
> + if (dma_mapping_error(dev, addr.addr))
> + return -EFAULT;
> +
> + pagemap_addr[i] = addr;
> +
> next:
> i += NR_PAGES(order);
> }
> @@ -287,6 +280,59 @@ static int drm_pagemap_migrate_map_pages(struct device *dev,
> return 0;
> }
>
> +/**
> + * drm_pagemap_migrate_map_system_pages() - Map system migration pages for GPU SVM migration
> + * @dev: The device performing the migration.
> + * @pagemap_addr: Array to store DMA information corresponding to mapped pages.
> + * @migrate_pfn: Array of page frame numbers of system pages or peer pages to map.
system pages or device coherent pages? "Peer" pages would typically be
device-private pages with the same owner.
> + * @npages: Number of system pages or peer pages to map.
Same here.
> + * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> + *
> + * This function maps pages of memory for migration usage in GPU SVM. It
> + * iterates over each page frame number provided in @migrate_pfn, maps the
> + * corresponding page, and stores the DMA address in the provided @dma_addr
> + * array.
> + *
> + * Returns: 0 on success, -EFAULT if an error occurs during mapping.
> + */
> +static int
> +drm_pagemap_migrate_map_system_pages(struct device *dev,
> + struct drm_pagemap_addr *pagemap_addr,
> + unsigned long *migrate_pfn,
> + unsigned long npages,
> + enum dma_data_direction dir)
> +{
> + unsigned long i;
> +
> + for (i = 0; i < npages;) {
> + struct page *page = migrate_pfn_to_page(migrate_pfn[i]);
> + dma_addr_t dma_addr;
> + struct folio *folio;
> + unsigned int order = 0;
> +
> + if (!page)
> + goto next;
> +
> + WARN_ON_ONCE(is_device_private_page(page));
> + folio = page_folio(page);
> + order = folio_order(folio);
> +
> + dma_addr = dma_map_page(dev, page, 0, page_size(page), dir);
> + if (dma_mapping_error(dev, dma_addr))
> + return -EFAULT;
> +
> + pagemap_addr[i] =
> + drm_pagemap_addr_encode(dma_addr,
> + DRM_INTERCONNECT_SYSTEM,
> + order, dir);
> +
> +next:
> + i += NR_PAGES(order);
> + }
> +
> + return 0;
> +}
> +
> /**
> * drm_pagemap_migrate_unmap_pages() - Unmap pages previously mapped
> for GPU SVM migration
> * @dev: The device for which the pages were mapped
> @@ -347,9 +393,11 @@ drm_pagemap_migrate_remote_to_local(struct drm_pagemap_devmem *devmem,
> const struct drm_pagemap_migrate_details *mdetails)
>
> {
> - int err = drm_pagemap_migrate_map_pages(remote_device, remote_dpagemap,
> - pagemap_addr, local_pfns,
> - npages, DMA_FROM_DEVICE, mdetails);
> + int err = drm_pagemap_migrate_map_device_pages(remote_device,
> + remote_dpagemap,
> + pagemap_addr, local_pfns,
> + npages, DMA_FROM_DEVICE,
> + mdetails);
>
> if (err)
> goto out;
> @@ -368,12 +416,11 @@ drm_pagemap_migrate_sys_to_dev(struct drm_pagemap_devmem *devmem,
> struct page *local_pages[],
> struct drm_pagemap_addr pagemap_addr[],
> unsigned long npages,
> - const struct drm_pagemap_devmem_ops *ops,
> - const struct drm_pagemap_migrate_details *mdetails)
> + const struct drm_pagemap_devmem_ops *ops)
> {
> - int err = drm_pagemap_migrate_map_pages(devmem->dev, devmem->dpagemap,
> - pagemap_addr, sys_pfns, npages,
> - DMA_TO_DEVICE, mdetails);
> + int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
> + pagemap_addr, sys_pfns,
> + npages, DMA_TO_DEVICE);
Unfortunately it's a bit more complicated than this. If the destination
GPU migrates, the range to migrate could be a mix of system pages,
device coherent pages, and also device private pages, and previously
drm_pagemap_migrate_map_pages() took care of that and did the correct
thing on a per-page basis.
You can exercise this by setting mdetails::source_peer_migrates to
false on xe. That typically "works" but might generate some errors in
the atomic multi-device tests AFAICT, because reading from the BAR does
not flush the L2 caches on BMG. But it should be sufficient to exercise
this path.
/Thomas
* Re: [PATCH v4 2/4] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM
2026-02-09 9:44 ` Thomas Hellström
@ 2026-02-09 16:13 ` Matthew Brost
2026-02-09 16:41 ` Thomas Hellström
0 siblings, 1 reply; 20+ messages in thread
From: Matthew Brost @ 2026-02-09 16:13 UTC (permalink / raw)
To: Thomas Hellström
Cc: intel-xe, dri-devel, leonro, jgg, francois.dugast,
himal.prasad.ghimiray
On Mon, Feb 09, 2026 at 10:44:43AM +0100, Thomas Hellström wrote:
> On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> > The dma-map IOVA alloc, link, and sync APIs perform significantly
> > better
> > than dma-map / dma-unmap, as they avoid costly IOMMU
> > synchronizations.
> > This difference is especially noticeable when mapping a 2MB region in
> > 4KB pages.
> >
> > Use the IOVA alloc, link, and sync APIs for GPU SVM, which create DMA
> > mappings between the CPU and GPU.
> >
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > v3:
> > - Always link IOVA in mixed mappings
> > - Sync IOVA
> > v4:
> > - Initialize IOVA state in get_pages
> > - Use pack IOVA linking (Jason)
> > - s/page_to_phys/hmm_pfn_to_phys (Leon)
> >
> > drivers/gpu/drm/drm_gpusvm.c | 55 ++++++++++++++++++++++++++++++----
> > --
> > include/drm/drm_gpusvm.h | 5 ++++
> > 2 files changed, 52 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_gpusvm.c
> > b/drivers/gpu/drm/drm_gpusvm.c
> > index 4b8130a4ce95..800caaf0a783 100644
> > --- a/drivers/gpu/drm/drm_gpusvm.c
> > +++ b/drivers/gpu/drm/drm_gpusvm.c
> > @@ -1139,11 +1139,19 @@ static void __drm_gpusvm_unmap_pages(struct
> > drm_gpusvm *gpusvm,
> > struct drm_gpusvm_pages_flags flags = {
> > .__flags = svm_pages->flags.__flags,
> > };
> > + bool use_iova = dma_use_iova(&svm_pages->state);
> > +
> > + if (use_iova) {
> > + dma_iova_unlink(dev, &svm_pages->state, 0,
> > + svm_pages->state_offset,
> > + svm_pages->dma_addr[0].dir,
> > 0);
> > + dma_iova_free(dev, &svm_pages->state);
> > + }
> >
> > for (i = 0, j = 0; i < npages; j++) {
> > struct drm_pagemap_addr *addr = &svm_pages-
> > >dma_addr[j];
> >
> > - if (addr->proto == DRM_INTERCONNECT_SYSTEM)
> > + if (!use_iova && addr->proto ==
> > DRM_INTERCONNECT_SYSTEM)
> > dma_unmap_page(dev,
> > addr->addr,
> > PAGE_SIZE << addr-
> > >order,
> > @@ -1408,6 +1416,7 @@ int drm_gpusvm_get_pages(struct drm_gpusvm
> > *gpusvm,
> > struct drm_gpusvm_pages_flags flags;
> > enum dma_data_direction dma_dir = ctx->read_only ?
> > DMA_TO_DEVICE :
> >
> > DMA_BIDIRECTIONAL;
> > + struct dma_iova_state *state = &svm_pages->state;
> >
> > retry:
> > if (time_after(jiffies, timeout))
> > @@ -1446,6 +1455,9 @@ int drm_gpusvm_get_pages(struct drm_gpusvm
> > *gpusvm,
> > if (err)
> > goto err_free;
> >
> > + *state = (struct dma_iova_state){};
> > + svm_pages->state_offset = 0;
> > +
> > map_pages:
> > /*
> > * Perform all dma mappings under the notifier lock to not
> > @@ -1539,13 +1551,33 @@ int drm_gpusvm_get_pages(struct drm_gpusvm
> > *gpusvm,
> > goto err_unmap;
> > }
> >
> > - addr = dma_map_page(gpusvm->drm->dev,
> > - page, 0,
> > - PAGE_SIZE << order,
> > - dma_dir);
> > - if (dma_mapping_error(gpusvm->drm->dev,
> > addr)) {
> > - err = -EFAULT;
> > - goto err_unmap;
> > + if (!i)
> > + dma_iova_try_alloc(gpusvm->drm->dev,
> > state,
> > + npages *
> > PAGE_SIZE >=
> > + HPAGE_PMD_SIZE ?
> > + HPAGE_PMD_SIZE :
> > 0,
>
> Doc says "callers that always do PAGE_SIZE aligned transfers can always
> pass 0 here", so can be simplified?
>
* Note: @phys is only used to calculate the IOVA alignment. Callers that always
* do PAGE_SIZE aligned transfers can safely pass 0 here.
So 0 would be safe but possibly suboptimal. For mappings greater than or
equal to 2M, we'd like 2M alignment so large GPU pages can be used too.
I think passing in '0' could result in odd alignment.
I am assuming other vendors have 2M GPU pages here too, but that seems
like a somewhat safe assumption...
Matt
>
> > + npages *
> > PAGE_SIZE);
> > +
> > + if (dma_use_iova(state)) {
> > + err = dma_iova_link(gpusvm->drm-
> > >dev, state,
> > +
> > hmm_pfn_to_phys(pfns[i]),
> > + svm_pages-
> > >state_offset,
> > + PAGE_SIZE <<
> > order,
> > + dma_dir, 0);
> > + if (err)
> > + goto err_unmap;
> > +
> > + addr = state->addr + svm_pages-
> > >state_offset;
> > + svm_pages->state_offset += PAGE_SIZE
> > << order;
> > + } else {
> > + addr = dma_map_page(gpusvm->drm-
> > >dev,
> > + page, 0,
> > + PAGE_SIZE <<
> > order,
> > + dma_dir);
> > + if (dma_mapping_error(gpusvm->drm-
> > >dev, addr)) {
> > + err = -EFAULT;
> > + goto err_unmap;
> > + }
> > }
> >
> > svm_pages->dma_addr[j] =
> > drm_pagemap_addr_encode
> > @@ -1557,6 +1589,13 @@ int drm_gpusvm_get_pages(struct drm_gpusvm
> > *gpusvm,
> > flags.has_dma_mapping = true;
> > }
> >
> > + if (dma_use_iova(state)) {
> > + err = dma_iova_sync(gpusvm->drm->dev, state, 0,
> > + svm_pages->state_offset);
> > + if (err)
> > + goto err_unmap;
> > + }
> > +
> > if (pagemap) {
> > flags.has_devmem_pages = true;
> > drm_pagemap_get(dpagemap);
> > diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
> > index 2578ac92a8d4..cd94bb2ee6ee 100644
> > --- a/include/drm/drm_gpusvm.h
> > +++ b/include/drm/drm_gpusvm.h
> > @@ -6,6 +6,7 @@
> > #ifndef __DRM_GPUSVM_H__
> > #define __DRM_GPUSVM_H__
> >
> > +#include <linux/dma-mapping.h>
> > #include <linux/kref.h>
> > #include <linux/interval_tree.h>
> > #include <linux/mmu_notifier.h>
> > @@ -136,6 +137,8 @@ struct drm_gpusvm_pages_flags {
> > * @dma_addr: Device address array
> > * @dpagemap: The struct drm_pagemap of the device pages we're dma-
> > mapping.
> > * Note this is assuming only one drm_pagemap per range
> > is allowed.
> > + * @state: DMA IOVA state for mapping.
> > + * @state_offset: DMA IOVA offset for mapping.
> > * @notifier_seq: Notifier sequence number of the range's pages
> > * @flags: Flags for range
> > * @flags.migrate_devmem: Flag indicating whether the range can be
> > migrated to device memory
> > @@ -147,6 +150,8 @@ struct drm_gpusvm_pages_flags {
> > struct drm_gpusvm_pages {
> > struct drm_pagemap_addr *dma_addr;
> > struct drm_pagemap *dpagemap;
> > + struct dma_iova_state state;
> > + unsigned long state_offset;
> > unsigned long notifier_seq;
> > struct drm_gpusvm_pages_flags flags;
> > };
>
> Otherwise LGTM.
> Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
* Re: [PATCH v4 2/4] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM
2026-02-09 16:13 ` Matthew Brost
@ 2026-02-09 16:41 ` Thomas Hellström
0 siblings, 0 replies; 20+ messages in thread
From: Thomas Hellström @ 2026-02-09 16:41 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, dri-devel, leonro, jgg, francois.dugast,
himal.prasad.ghimiray
On Mon, 2026-02-09 at 08:13 -0800, Matthew Brost wrote:
> On Mon, Feb 09, 2026 at 10:44:43AM +0100, Thomas Hellström wrote:
> > On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> > > The dma-map IOVA alloc, link, and sync APIs perform significantly
> > > better
> > > than dma-map / dma-unmap, as they avoid costly IOMMU
> > > synchronizations.
> > > This difference is especially noticeable when mapping a 2MB
> > > region in
> > > 4KB pages.
> > >
> > > Use the IOVA alloc, link, and sync APIs for GPU SVM, which create
> > > DMA
> > > mappings between the CPU and GPU.
> > >
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > ---
> > > v3:
> > > - Always link IOVA in mixed mappings
> > > - Sync IOVA
> > > v4:
> > > - Initialize IOVA state in get_pages
> > > - Use pack IOVA linking (Jason)
> > > - s/page_to_phys/hmm_pfn_to_phys (Leon)
> > >
> > > drivers/gpu/drm/drm_gpusvm.c | 55
> > > ++++++++++++++++++++++++++++++----
> > > --
> > > include/drm/drm_gpusvm.h | 5 ++++
> > > 2 files changed, 52 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/drm_gpusvm.c
> > > b/drivers/gpu/drm/drm_gpusvm.c
> > > index 4b8130a4ce95..800caaf0a783 100644
> > > --- a/drivers/gpu/drm/drm_gpusvm.c
> > > +++ b/drivers/gpu/drm/drm_gpusvm.c
> > > @@ -1139,11 +1139,19 @@ static void
> > > __drm_gpusvm_unmap_pages(struct
> > > drm_gpusvm *gpusvm,
> > > struct drm_gpusvm_pages_flags flags = {
> > > .__flags = svm_pages->flags.__flags,
> > > };
> > > + bool use_iova = dma_use_iova(&svm_pages->state);
> > > +
> > > + if (use_iova) {
> > > + dma_iova_unlink(dev, &svm_pages->state,
> > > 0,
> > > + svm_pages->state_offset,
> > > + svm_pages-
> > > >dma_addr[0].dir,
> > > 0);
> > > + dma_iova_free(dev, &svm_pages->state);
> > > + }
> > >
> > > for (i = 0, j = 0; i < npages; j++) {
> > > struct drm_pagemap_addr *addr =
> > > &svm_pages-
> > > > dma_addr[j];
> > >
> > > - if (addr->proto ==
> > > DRM_INTERCONNECT_SYSTEM)
> > > + if (!use_iova && addr->proto ==
> > > DRM_INTERCONNECT_SYSTEM)
> > > dma_unmap_page(dev,
> > > addr->addr,
> > > PAGE_SIZE <<
> > > addr-
> > > > order,
> > > @@ -1408,6 +1416,7 @@ int drm_gpusvm_get_pages(struct drm_gpusvm
> > > *gpusvm,
> > > struct drm_gpusvm_pages_flags flags;
> > > enum dma_data_direction dma_dir = ctx->read_only ?
> > > DMA_TO_DEVICE :
> > >
> > > DMA_BIDIRECTIONAL;
> > > + struct dma_iova_state *state = &svm_pages->state;
> > >
> > > retry:
> > > if (time_after(jiffies, timeout))
> > > @@ -1446,6 +1455,9 @@ int drm_gpusvm_get_pages(struct drm_gpusvm
> > > *gpusvm,
> > > if (err)
> > > goto err_free;
> > >
> > > + *state = (struct dma_iova_state){};
> > > + svm_pages->state_offset = 0;
> > > +
> > > map_pages:
> > > /*
> > > * Perform all dma mappings under the notifier lock to
> > > not
> > > @@ -1539,13 +1551,33 @@ int drm_gpusvm_get_pages(struct
> > > drm_gpusvm
> > > *gpusvm,
> > > goto err_unmap;
> > > }
> > >
> > > - addr = dma_map_page(gpusvm->drm->dev,
> > > - page, 0,
> > > - PAGE_SIZE << order,
> > > - dma_dir);
> > > - if (dma_mapping_error(gpusvm->drm->dev,
> > > addr)) {
> > > - err = -EFAULT;
> > > - goto err_unmap;
> > > + if (!i)
> > > + dma_iova_try_alloc(gpusvm->drm-
> > > >dev,
> > > state,
> > > + npages *
> > > PAGE_SIZE >=
> > > +
> > > HPAGE_PMD_SIZE ?
> > > +
> > > HPAGE_PMD_SIZE :
> > > 0,
> >
> > Doc says "callers that always do PAGE_SIZE aligned transfers can
> > always
> > pass 0 here", so can be simplified?
> >
>
> * Note: @phys is only used to calculate the IOVA alignment. Callers
> that always
> * do PAGE_SIZE aligned transfers can safely pass 0 here.
>
> So 0 would be safe but possibly suboptimal. For mappings greater than
> or equal to 2M, we'd like 2M alignment so large GPU pages can be used
> too. I think passing in '0' could result in odd alignment.
>
> I am assuming other vendors have 2M GPU pages here too, but that seems
> like a somewhat safe assumption...
Ah, I interpreted that as beyond PAGE_SIZE the function would behave
the same.
Agree that if we can get 2M alignment that's much better.
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>
> Matt
>
> >
> > > + npages *
> > > PAGE_SIZE);
> > > +
> > > + if (dma_use_iova(state)) {
> > > + err = dma_iova_link(gpusvm->drm-
> > > > dev, state,
> > > +
> > > hmm_pfn_to_phys(pfns[i]),
> > > + svm_pages-
> > > > state_offset,
> > > + PAGE_SIZE <<
> > > order,
> > > + dma_dir, 0);
> > > + if (err)
> > > + goto err_unmap;
> > > +
> > > + addr = state->addr + svm_pages-
> > > > state_offset;
> > > + svm_pages->state_offset +=
> > > PAGE_SIZE
> > > << order;
> > > + } else {
> > > + addr = dma_map_page(gpusvm->drm-
> > > > dev,
> > > + page, 0,
> > > + PAGE_SIZE <<
> > > order,
> > > + dma_dir);
> > > + if (dma_mapping_error(gpusvm-
> > > >drm-
> > > > dev, addr)) {
> > > + err = -EFAULT;
> > > + goto err_unmap;
> > > + }
> > > }
> > >
> > > svm_pages->dma_addr[j] =
> > > drm_pagemap_addr_encode
> > > @@ -1557,6 +1589,13 @@ int drm_gpusvm_get_pages(struct drm_gpusvm
> > > *gpusvm,
> > > flags.has_dma_mapping = true;
> > > }
> > >
> > > + if (dma_use_iova(state)) {
> > > + err = dma_iova_sync(gpusvm->drm->dev, state, 0,
> > > + svm_pages->state_offset);
> > > + if (err)
> > > + goto err_unmap;
> > > + }
> > > +
> > > if (pagemap) {
> > > flags.has_devmem_pages = true;
> > > drm_pagemap_get(dpagemap);
> > > diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
> > > index 2578ac92a8d4..cd94bb2ee6ee 100644
> > > --- a/include/drm/drm_gpusvm.h
> > > +++ b/include/drm/drm_gpusvm.h
> > > @@ -6,6 +6,7 @@
> > > #ifndef __DRM_GPUSVM_H__
> > > #define __DRM_GPUSVM_H__
> > >
> > > +#include <linux/dma-mapping.h>
> > > #include <linux/kref.h>
> > > #include <linux/interval_tree.h>
> > > #include <linux/mmu_notifier.h>
> > > @@ -136,6 +137,8 @@ struct drm_gpusvm_pages_flags {
> > > * @dma_addr: Device address array
> > > * @dpagemap: The struct drm_pagemap of the device pages we're
> > > dma-
> > > mapping.
> > > * Note this is assuming only one drm_pagemap per
> > > range
> > > is allowed.
> > > + * @state: DMA IOVA state for mapping.
> > > + * @state_offset: DMA IOVA offset for mapping.
> > > * @notifier_seq: Notifier sequence number of the range's pages
> > > * @flags: Flags for range
> > > * @flags.migrate_devmem: Flag indicating whether the range can
> > > be
> > > migrated to device memory
> > > @@ -147,6 +150,8 @@ struct drm_gpusvm_pages_flags {
> > > struct drm_gpusvm_pages {
> > > struct drm_pagemap_addr *dma_addr;
> > > struct drm_pagemap *dpagemap;
> > > + struct dma_iova_state state;
> > > + unsigned long state_offset;
> > > unsigned long notifier_seq;
> > > struct drm_gpusvm_pages_flags flags;
> > > };
> >
> > Otherwise LGTM.
> > Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
* Re: [PATCH v4 3/4] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
2026-02-09 15:49 ` Thomas Hellström
@ 2026-02-09 16:58 ` Matthew Brost
2026-02-09 17:09 ` Thomas Hellström
0 siblings, 1 reply; 20+ messages in thread
From: Matthew Brost @ 2026-02-09 16:58 UTC (permalink / raw)
To: Thomas Hellström
Cc: intel-xe, dri-devel, leonro, jgg, francois.dugast,
himal.prasad.ghimiray
On Mon, Feb 09, 2026 at 04:49:03PM +0100, Thomas Hellström wrote:
> On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> > Split drm_pagemap_migrate_map_pages into device / system helpers,
> > clearly separating these operations. This will help with upcoming
> > changes to split out the IOVA allocation steps.
> >
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/drm_pagemap.c | 146 ++++++++++++++++++++++----------
> > --
> > 1 file changed, 96 insertions(+), 50 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_pagemap.c
> > b/drivers/gpu/drm/drm_pagemap.c
> > index fbd69f383457..29677b19bb69 100644
> > --- a/drivers/gpu/drm/drm_pagemap.c
> > +++ b/drivers/gpu/drm/drm_pagemap.c
> > @@ -205,7 +205,7 @@ static void drm_pagemap_get_devmem_page(struct
> > page *page,
> > }
> >
> > /**
> > - * drm_pagemap_migrate_map_pages() - Map migration pages for GPU SVM
> > migration
> > + * drm_pagemap_migrate_map_device_pages() - Map device migration
> > pages for GPU SVM migration
> > * @dev: The device performing the migration.
> > * @local_dpagemap: The drm_pagemap local to the migrating device.
> > * @pagemap_addr: Array to store DMA information corresponding to
> > mapped pages.
> > @@ -221,19 +221,22 @@ static void drm_pagemap_get_devmem_page(struct
> > page *page,
> > *
> > * Returns: 0 on success, -EFAULT if an error occurs during mapping.
> > */
> > -static int drm_pagemap_migrate_map_pages(struct device *dev,
> > - struct drm_pagemap
> > *local_dpagemap,
> > - struct drm_pagemap_addr
> > *pagemap_addr,
> > - unsigned long *migrate_pfn,
> > - unsigned long npages,
> > - enum dma_data_direction
> > dir,
> > - const struct
> > drm_pagemap_migrate_details *mdetails)
> > +static int
> > +drm_pagemap_migrate_map_device_pages(struct device *dev,
> > + struct drm_pagemap
> > *local_dpagemap,
> > + struct drm_pagemap_addr
> > *pagemap_addr,
> > + unsigned long *migrate_pfn,
> > + unsigned long npages,
> > + enum dma_data_direction dir,
> > + const struct
> > drm_pagemap_migrate_details *mdetails)
>
> We might want to call this device_private pages. Device coherent pages
> are treated like system pages here, but I figure those are known to the
> dma subsystem and can be handled by the map_system_pages callback.
>
Yes.
Eventually we'll have to figure out how we want to handle device coherent
pages with a high speed fabric, though.
> > {
> > unsigned long num_peer_pages = 0, num_local_pages = 0, i;
> >
> > for (i = 0; i < npages;) {
> > struct page *page =
> > migrate_pfn_to_page(migrate_pfn[i]);
> > - dma_addr_t dma_addr;
> > + struct drm_pagemap_zdd *zdd;
> > + struct drm_pagemap *dpagemap;
> > + struct drm_pagemap_addr addr;
> > struct folio *folio;
> > unsigned int order = 0;
> >
> > @@ -243,36 +246,26 @@ static int drm_pagemap_migrate_map_pages(struct
> > device *dev,
> > folio = page_folio(page);
> > order = folio_order(folio);
> >
> > - if (is_device_private_page(page)) {
> > - struct drm_pagemap_zdd *zdd =
> > drm_pagemap_page_zone_device_data(page);
> > - struct drm_pagemap *dpagemap = zdd-
> > >dpagemap;
> > - struct drm_pagemap_addr addr;
> > -
> > - if (dpagemap == local_dpagemap) {
> > - if (!mdetails-
> > >can_migrate_same_pagemap)
> > - goto next;
> > + WARN_ON_ONCE(!is_device_private_page(page));
> >
> > - num_local_pages += NR_PAGES(order);
> > - } else {
> > - num_peer_pages += NR_PAGES(order);
> > - }
> > + zdd = drm_pagemap_page_zone_device_data(page);
> > + dpagemap = zdd->dpagemap;
> >
> > - addr = dpagemap->ops->device_map(dpagemap,
> > dev, page, order, dir);
> > - if (dma_mapping_error(dev, addr.addr))
> > - return -EFAULT;
> > + if (dpagemap == local_dpagemap) {
> > + if (!mdetails->can_migrate_same_pagemap)
> > + goto next;
> >
> > - pagemap_addr[i] = addr;
> > + num_local_pages += NR_PAGES(order);
> > } else {
> > - dma_addr = dma_map_page(dev, page, 0,
> > page_size(page), dir);
> > - if (dma_mapping_error(dev, dma_addr))
> > - return -EFAULT;
> > -
> > - pagemap_addr[i] =
> > - drm_pagemap_addr_encode(dma_addr,
> > -
> > DRM_INTERCONNECT_SYSTEM,
> > - order, dir);
> > + num_peer_pages += NR_PAGES(order);
> > }
> >
> > + addr = dpagemap->ops->device_map(dpagemap, dev,
> > page, order, dir);
> > + if (dma_mapping_error(dev, addr.addr))
> > + return -EFAULT;
> > +
> > + pagemap_addr[i] = addr;
> > +
> > next:
> > i += NR_PAGES(order);
> > }
> > @@ -287,6 +280,59 @@ static int drm_pagemap_migrate_map_pages(struct
> > device *dev,
> > return 0;
> > }
> >
> > +/**
> > + * drm_pagemap_migrate_map_system_pages() - Map system migration
> > pages for GPU SVM migration
> > + * @dev: The device performing the migration.
> > + * @pagemap_addr: Array to store DMA information corresponding to
> > mapped pages.
> > + * @migrate_pfn: Array of page frame numbers of system pages or peer
> > pages to map.
>
> system pages or device coherent pages? "Peer" pages would typically be
> device-private pages with the same owner.
>
> > + * @npages: Number of system pages or peer pages to map.
>
> Same here.
Yes, copy-paste error.
>
> > + * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > + *
> > + * This function maps pages of memory for migration usage in GPU
> > SVM. It
> > + * iterates over each page frame number provided in @migrate_pfn,
> > maps the
> > + * corresponding page, and stores the DMA address in the provided
> > @dma_addr
> > + * array.
> > + *
> > + * Returns: 0 on success, -EFAULT if an error occurs during mapping.
> > + */
> > +static int
> > +drm_pagemap_migrate_map_system_pages(struct device *dev,
> > + struct drm_pagemap_addr
> > *pagemap_addr,
> > + unsigned long *migrate_pfn,
> > + unsigned long npages,
> > + enum dma_data_direction dir)
> > +{
> > + unsigned long i;
> > +
> > + for (i = 0; i < npages;) {
> > + struct page *page =
> > migrate_pfn_to_page(migrate_pfn[i]);
> > + dma_addr_t dma_addr;
> > + struct folio *folio;
> > + unsigned int order = 0;
> > +
> > + if (!page)
> > + goto next;
> > +
> > + WARN_ON_ONCE(is_device_private_page(page));
> > + folio = page_folio(page);
> > + order = folio_order(folio);
> > +
> > + dma_addr = dma_map_page(dev, page, 0,
> > page_size(page), dir);
> > + if (dma_mapping_error(dev, dma_addr))
> > + return -EFAULT;
> > +
> > + pagemap_addr[i] =
> > + drm_pagemap_addr_encode(dma_addr,
> > + DRM_INTERCONNECT_SYS
> > TEM,
> > + order, dir);
> > +
> > +next:
> > + i += NR_PAGES(order);
> > + }
> > +
> > + return 0;
> > +}
> > +
> > /**
> > * drm_pagemap_migrate_unmap_pages() - Unmap pages previously mapped
> > for GPU SVM migration
> > * @dev: The device for which the pages were mapped
> > @@ -347,9 +393,11 @@ drm_pagemap_migrate_remote_to_local(struct
> > drm_pagemap_devmem *devmem,
> > const struct
> > drm_pagemap_migrate_details *mdetails)
> >
> > {
> > - int err = drm_pagemap_migrate_map_pages(remote_device,
> > remote_dpagemap,
> > - pagemap_addr,
> > local_pfns,
> > - npages,
> > DMA_FROM_DEVICE, mdetails);
> > + int err =
> > drm_pagemap_migrate_map_device_pages(remote_device,
> > +
> > remote_dpagemap,
> > + pagemap_addr,
> > local_pfns,
> > + npages,
> > DMA_FROM_DEVICE,
> > + mdetails);
> >
> > if (err)
> > goto out;
> > @@ -368,12 +416,11 @@ drm_pagemap_migrate_sys_to_dev(struct
> > drm_pagemap_devmem *devmem,
> > struct page *local_pages[],
> > struct drm_pagemap_addr
> > pagemap_addr[],
> > unsigned long npages,
> > - const struct drm_pagemap_devmem_ops
> > *ops,
> > - const struct
> > drm_pagemap_migrate_details *mdetails)
> > + const struct drm_pagemap_devmem_ops
> > *ops)
> > {
> > - int err = drm_pagemap_migrate_map_pages(devmem->dev, devmem-
> > >dpagemap,
> > - pagemap_addr,
> > sys_pfns, npages,
> > - DMA_TO_DEVICE,
> > mdetails);
> > + int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
> > + pagemap_addr,
> > sys_pfns,
> > + npages,
> > DMA_TO_DEVICE);
>
>
> Unfortunately it's a bit more complicated than this. If the destination
> gpu migrates, the range to migrate could be a mix of system pages,
> device coherent pages and also device private pages, and previously
> drm_pagemap_migrate_map_pages() took care of that and did the correct
> thing on a per-page basis.
>
> You can exercise this by setting mdetails::source_peer_migrates to
> false on xe. That typically "works" but might generate some errors in
> the atomic multi-device tests AFAICT because reading from the BAR does
> not flush the L2 caches on BMG. But should be sufficient to exercise
> this path.
Ah, yes, I see I missed this - this patch isn't strictly required; I just
didn't want drm_pagemap_migrate_map_pages to have massive cascading if
statements... I can remove it for now if that is preferred, or we could
just remove source_peer_migrates and assume a value of '1'.
I suggest the latter because, looking forward, source_peer_migrates == 0
would be difficult to support with a high speed fabric, which requires an
IOVA (think UAL with virtual NAs at the target device), plus multiple
different devices being found in the migration pages. Also, with p2p,
isn't source_peer_migrates == '1' (write over p2p) faster than
source_peer_migrates == '0' (read over p2p)?
Matt
>
> /Thomas
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v4 3/4] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system
2026-02-09 16:58 ` Matthew Brost
@ 2026-02-09 17:09 ` Thomas Hellström
0 siblings, 0 replies; 20+ messages in thread
From: Thomas Hellström @ 2026-02-09 17:09 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, dri-devel, leonro, jgg, francois.dugast,
himal.prasad.ghimiray
On Mon, 2026-02-09 at 08:58 -0800, Matthew Brost wrote:
> On Mon, Feb 09, 2026 at 04:49:03PM +0100, Thomas Hellström wrote:
> > On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> > > Split drm_pagemap_migrate_map_pages into device / system helpers
> > > clearly
> > > separating these operations. Will help with upcoming changes to
> > > split
> > > IOVA allocation steps.
> > >
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > ---
> > > drivers/gpu/drm/drm_pagemap.c | 146 ++++++++++++++++++++++------
> > > ----
> > > --
> > > 1 file changed, 96 insertions(+), 50 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/drm_pagemap.c
> > > b/drivers/gpu/drm/drm_pagemap.c
> > > index fbd69f383457..29677b19bb69 100644
> > > --- a/drivers/gpu/drm/drm_pagemap.c
> > > +++ b/drivers/gpu/drm/drm_pagemap.c
> > > @@ -205,7 +205,7 @@ static void
> > > drm_pagemap_get_devmem_page(struct
> > > page *page,
> > > }
> > >
> > > /**
> > > - * drm_pagemap_migrate_map_pages() - Map migration pages for GPU
> > > SVM
> > > migration
> > > + * drm_pagemap_migrate_map_device_pages() - Map device migration
> > > pages for GPU SVM migration
> > > * @dev: The device performing the migration.
> > > * @local_dpagemap: The drm_pagemap local to the migrating
> > > device.
> > > * @pagemap_addr: Array to store DMA information corresponding
> > > to
> > > mapped pages.
> > > @@ -221,19 +221,22 @@ static void
> > > drm_pagemap_get_devmem_page(struct
> > > page *page,
> > > *
> > > * Returns: 0 on success, -EFAULT if an error occurs during
> > > mapping.
> > > */
> > > -static int drm_pagemap_migrate_map_pages(struct device *dev,
> > > - struct drm_pagemap
> > > *local_dpagemap,
> > > - struct drm_pagemap_addr
> > > *pagemap_addr,
> > > - unsigned long
> > > *migrate_pfn,
> > > - unsigned long npages,
> > > - enum dma_data_direction
> > > dir,
> > > - const struct
> > > drm_pagemap_migrate_details *mdetails)
> > > +static int
> > > +drm_pagemap_migrate_map_device_pages(struct device *dev,
> > > + struct drm_pagemap
> > > *local_dpagemap,
> > > + struct drm_pagemap_addr
> > > *pagemap_addr,
> > > + unsigned long *migrate_pfn,
> > > + unsigned long npages,
> > > + enum dma_data_direction
> > > dir,
> > > + const struct
> > > drm_pagemap_migrate_details *mdetails)
> >
> > We might want to call this device_private pages. Device coherent
> > pages
> > are treated like system pages here, but I figure those are known to
> > the
> > dma subsystem and can be handled by the map_system_pages callback.
> >
>
> Yes.
>
> Eventually we'll have to figure out how we want to handle device
> coherent pages with a high speed fabric, though.
>
> > > {
> > > unsigned long num_peer_pages = 0, num_local_pages = 0,
> > > i;
> > >
> > > for (i = 0; i < npages;) {
> > > struct page *page =
> > > migrate_pfn_to_page(migrate_pfn[i]);
> > > - dma_addr_t dma_addr;
> > > + struct drm_pagemap_zdd *zdd;
> > > + struct drm_pagemap *dpagemap;
> > > + struct drm_pagemap_addr addr;
> > > struct folio *folio;
> > > unsigned int order = 0;
> > >
> > > @@ -243,36 +246,26 @@ static int
> > > drm_pagemap_migrate_map_pages(struct
> > > device *dev,
> > > folio = page_folio(page);
> > > order = folio_order(folio);
> > >
> > > - if (is_device_private_page(page)) {
> > > - struct drm_pagemap_zdd *zdd =
> > > drm_pagemap_page_zone_device_data(page);
> > > - struct drm_pagemap *dpagemap = zdd-
> > > > dpagemap;
> > > - struct drm_pagemap_addr addr;
> > > -
> > > - if (dpagemap == local_dpagemap) {
> > > - if (!mdetails-
> > > > can_migrate_same_pagemap)
> > > - goto next;
> > > + WARN_ON_ONCE(!is_device_private_page(page));
> > >
> > > - num_local_pages +=
> > > NR_PAGES(order);
> > > - } else {
> > > - num_peer_pages +=
> > > NR_PAGES(order);
> > > - }
> > > + zdd = drm_pagemap_page_zone_device_data(page);
> > > + dpagemap = zdd->dpagemap;
> > >
> > > - addr = dpagemap->ops-
> > > >device_map(dpagemap,
> > > dev, page, order, dir);
> > > - if (dma_mapping_error(dev, addr.addr))
> > > - return -EFAULT;
> > > + if (dpagemap == local_dpagemap) {
> > > + if (!mdetails->can_migrate_same_pagemap)
> > > + goto next;
> > >
> > > - pagemap_addr[i] = addr;
> > > + num_local_pages += NR_PAGES(order);
> > > } else {
> > > - dma_addr = dma_map_page(dev, page, 0,
> > > page_size(page), dir);
> > > - if (dma_mapping_error(dev, dma_addr))
> > > - return -EFAULT;
> > > -
> > > - pagemap_addr[i] =
> > > -
> > > drm_pagemap_addr_encode(dma_addr,
> > > -
> > > DRM_INTE
> > > RCONNECT_SYSTEM,
> > > - order,
> > > dir);
> > > + num_peer_pages += NR_PAGES(order);
> > > }
> > >
> > > + addr = dpagemap->ops->device_map(dpagemap, dev,
> > > page, order, dir);
> > > + if (dma_mapping_error(dev, addr.addr))
> > > + return -EFAULT;
> > > +
> > > + pagemap_addr[i] = addr;
> > > +
> > > next:
> > > i += NR_PAGES(order);
> > > }
> > > @@ -287,6 +280,59 @@ static int
> > > drm_pagemap_migrate_map_pages(struct
> > > device *dev,
> > > return 0;
> > > }
> > >
> > > +/**
> > > + * drm_pagemap_migrate_map_system_pages() - Map system migration
> > > pages for GPU SVM migration
> > > + * @dev: The device performing the migration.
> > > + * @pagemap_addr: Array to store DMA information corresponding
> > > to
> > > mapped pages.
> > > + * @migrate_pfn: Array of page frame numbers of system pages or
> > > peer
> > > pages to map.
> >
> > system pages or device coherent pages? "Peer" pages would typically
> > be
> > device-private pages with the same owner.
> >
> > > + * @npages: Number of system pages or peer pages to map.
> >
> > Same here.
>
> Yes, copy paste error.
>
> >
> > > + * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > > + *
> > > + * This function maps pages of memory for migration usage in GPU
> > > SVM. It
> > > + * iterates over each page frame number provided in
> > > @migrate_pfn,
> > > maps the
> > > + * corresponding page, and stores the DMA address in the
> > > provided
> > > @dma_addr
> > > + * array.
> > > + *
> > > + * Returns: 0 on success, -EFAULT if an error occurs during
> > > mapping.
> > > + */
> > > +static int
> > > +drm_pagemap_migrate_map_system_pages(struct device *dev,
> > > + struct drm_pagemap_addr
> > > *pagemap_addr,
> > > + unsigned long *migrate_pfn,
> > > + unsigned long npages,
> > > + enum dma_data_direction
> > > dir)
> > > +{
> > > + unsigned long i;
> > > +
> > > + for (i = 0; i < npages;) {
> > > + struct page *page =
> > > migrate_pfn_to_page(migrate_pfn[i]);
> > > + dma_addr_t dma_addr;
> > > + struct folio *folio;
> > > + unsigned int order = 0;
> > > +
> > > + if (!page)
> > > + goto next;
> > > +
> > > + WARN_ON_ONCE(is_device_private_page(page));
> > > + folio = page_folio(page);
> > > + order = folio_order(folio);
> > > +
> > > + dma_addr = dma_map_page(dev, page, 0,
> > > page_size(page), dir);
> > > + if (dma_mapping_error(dev, dma_addr))
> > > + return -EFAULT;
> > > +
> > > + pagemap_addr[i] =
> > > + drm_pagemap_addr_encode(dma_addr,
> > > + DRM_INTERCONNECT
> > > _SYS
> > > TEM,
> > > + order, dir);
> > > +
> > > +next:
> > > + i += NR_PAGES(order);
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > /**
> > > * drm_pagemap_migrate_unmap_pages() - Unmap pages previously
> > > mapped
> > > for GPU SVM migration
> > > * @dev: The device for which the pages were mapped
> > > @@ -347,9 +393,11 @@ drm_pagemap_migrate_remote_to_local(struct
> > > drm_pagemap_devmem *devmem,
> > > const struct
> > > drm_pagemap_migrate_details *mdetails)
> > >
> > > {
> > > - int err = drm_pagemap_migrate_map_pages(remote_device,
> > > remote_dpagemap,
> > > - pagemap_addr,
> > > local_pfns,
> > > - npages,
> > > DMA_FROM_DEVICE, mdetails);
> > > + int err =
> > > drm_pagemap_migrate_map_device_pages(remote_device,
> > > +
> > > remote_dpagemap,
> > > +
> > > pagemap_addr,
> > > local_pfns,
> > > + npages,
> > > DMA_FROM_DEVICE,
> > > +
> > > mdetails);
> > >
> > > if (err)
> > > goto out;
> > > @@ -368,12 +416,11 @@ drm_pagemap_migrate_sys_to_dev(struct
> > > drm_pagemap_devmem *devmem,
> > > struct page *local_pages[],
> > > struct drm_pagemap_addr
> > > pagemap_addr[],
> > > unsigned long npages,
> > > - const struct
> > > drm_pagemap_devmem_ops
> > > *ops,
> > > - const struct
> > > drm_pagemap_migrate_details *mdetails)
> > > + const struct
> > > drm_pagemap_devmem_ops
> > > *ops)
> > > {
> > > - int err = drm_pagemap_migrate_map_pages(devmem->dev,
> > > devmem-
> > > > dpagemap,
> > > - pagemap_addr,
> > > sys_pfns, npages,
> > > - DMA_TO_DEVICE,
> > > mdetails);
> > > + int err = drm_pagemap_migrate_map_system_pages(devmem-
> > > >dev,
> > > +
> > > pagemap_addr,
> > > sys_pfns,
> > > + npages,
> > > DMA_TO_DEVICE);
> >
> >
> > Unfortunately it's a bit more complicated than this. If the
> > destination
> > gpu migrates, the range to migrate could be a mix of system pages,
> > device coherent pages and also device private pages, and previously
> > drm_pagemap_migrate_map_pages() took care of that and did the
> > correct
> > thing on a per-page basis.
> >
> > You can exercise this by setting mdetails::source_peer_migrates to
> > false on xe. That typically "works" but might generate some errors
> > in
> > the atomic multi-device tests AFAICT because reading from the BAR
> > does
> > not flush the L2 caches on BMG. But should be sufficient to
> > exercise
> > this path.
>
> Ah, yes, I see I missed this - this patch isn't strictly required; I
> just didn't want drm_pagemap_migrate_map_pages to have massive
> cascading if statements... I can remove it for now if that is
> preferred, or we could just remove source_peer_migrates and assume a
> value of '1'.
>
> I suggest the latter because, looking forward, source_peer_migrates == 0
> would be difficult to support with a high speed fabric, which requires
> an IOVA (think UAL with virtual NAs at the target device), plus
> multiple different devices being found in the migration pages. Also,
> with p2p, isn't source_peer_migrates == '1' (write over p2p) faster
> than source_peer_migrates == '0' (read over p2p)?
With source_peer_migrates == 0 we have the drawbacks of missed cache
flushing; it can't be used with CCS compression or a high speed fabric,
and there is also the speed over PCIe as you say, so I don't see xe
using it in the near term.
So I agree. Let's just assume source_peer_migrates == 1 for now.
Thanks,
Thomas
>
> Matt
>
> >
> > /Thomas
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
2026-02-05 4:19 ` [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
@ 2026-02-11 11:34 ` Thomas Hellström
2026-02-11 15:37 ` Matthew Brost
0 siblings, 1 reply; 20+ messages in thread
From: Thomas Hellström @ 2026-02-11 11:34 UTC (permalink / raw)
To: Matthew Brost, intel-xe, dri-devel
Cc: leonro, jgg, francois.dugast, himal.prasad.ghimiray
On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> The dma-map IOVA alloc, link, and sync APIs perform significantly
> better
> than dma-map / dma-unmap, as they avoid costly IOMMU
> synchronizations.
> This difference is especially noticeable when mapping a 2MB region in
> 4KB pages.
>
> Use the IOVA alloc, link, and sync APIs for DRM pagemap, which create
> DMA
> mappings between the CPU and GPU for copying data.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> v4:
> - Pack IOVA and drop dummy page (Jason)
>
> drivers/gpu/drm/drm_pagemap.c | 84 +++++++++++++++++++++++++++++----
> --
> 1 file changed, 70 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_pagemap.c
> b/drivers/gpu/drm/drm_pagemap.c
> index 29677b19bb69..52a196bc8459 100644
> --- a/drivers/gpu/drm/drm_pagemap.c
> +++ b/drivers/gpu/drm/drm_pagemap.c
> @@ -280,6 +280,20 @@ drm_pagemap_migrate_map_device_pages(struct
> device *dev,
> return 0;
> }
>
> +/**
> + * struct drm_pagemap_iova_state - DRM pagemap IOVA state
> + *
No newline
> + * @dma_state: DMA IOVA state.
> + * @offset: Current offset in IOVA.
> + *
> + * This structure acts as an iterator for packing all IOVA addresses
> within a
> + * contiguous range.
> + */
> +struct drm_pagemap_iova_state {
> + struct dma_iova_state dma_state;
> + unsigned long offset;
> +};
> +
> /**
> * drm_pagemap_migrate_map_system_pages() - Map system migration
> pages for GPU SVM migration
> * @dev: The device performing the migration.
> @@ -287,6 +301,7 @@ drm_pagemap_migrate_map_device_pages(struct
> device *dev,
> * @migrate_pfn: Array of page frame numbers of system pages or peer
> pages to map.
> * @npages: Number of system pages or peer pages to map.
> * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> + * @state: DMA IOVA state for mapping.
> *
> * This function maps pages of memory for migration usage in GPU
> SVM. It
> * iterates over each page frame number provided in @migrate_pfn,
> maps the
> @@ -300,9 +315,11 @@ drm_pagemap_migrate_map_system_pages(struct
> device *dev,
> struct drm_pagemap_addr
> *pagemap_addr,
> unsigned long *migrate_pfn,
> unsigned long npages,
> - enum dma_data_direction dir)
> + enum dma_data_direction dir,
> + struct drm_pagemap_iova_state
> *state)
> {
> unsigned long i;
> + bool try_alloc = false;
>
> for (i = 0; i < npages;) {
> struct page *page =
> migrate_pfn_to_page(migrate_pfn[i]);
> @@ -317,9 +334,31 @@ drm_pagemap_migrate_map_system_pages(struct
> device *dev,
> folio = page_folio(page);
> order = folio_order(folio);
>
> - dma_addr = dma_map_page(dev, page, 0,
> page_size(page), dir);
> - if (dma_mapping_error(dev, dma_addr))
> - return -EFAULT;
> + if (!try_alloc) {
> + dma_iova_try_alloc(dev, &state->dma_state,
> + npages * PAGE_SIZE >=
> + HPAGE_PMD_SIZE ?
> + HPAGE_PMD_SIZE : 0,
> + npages * PAGE_SIZE);
> + try_alloc = true;
> + }
What happens if dma_iova_try_alloc() fails for all i < some value x and
then suddenly succeeds for i == x? While the below code looks correct,
I figure we'd allocate a too large IOVA region and possibly get the
alignment wrong?
Otherwise LGTM.
> +
> + if (dma_use_iova(&state->dma_state)) {
> + int err = dma_iova_link(dev, &state-
> >dma_state,
> + page_to_phys(page),
> + state->offset,
> page_size(page),
> + dir, 0);
> + if (err)
> + return err;
> +
> + dma_addr = state->dma_state.addr + state-
> >offset;
> + state->offset += page_size(page);
> + } else {
> + dma_addr = dma_map_page(dev, page, 0,
> page_size(page),
> + dir);
> + if (dma_mapping_error(dev, dma_addr))
> + return -EFAULT;
> + }
>
> pagemap_addr[i] =
> drm_pagemap_addr_encode(dma_addr,
> @@ -330,6 +369,9 @@ drm_pagemap_migrate_map_system_pages(struct
> device *dev,
> i += NR_PAGES(order);
> }
>
> + if (dma_use_iova(&state->dma_state))
> + return dma_iova_sync(dev, &state->dma_state, 0,
> state->offset);
> +
> return 0;
> }
>
> @@ -341,6 +383,7 @@ drm_pagemap_migrate_map_system_pages(struct
> device *dev,
> * @pagemap_addr: Array of DMA information corresponding to mapped
> pages
> * @npages: Number of pages to unmap
> * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> + * @state: DMA IOVA state for mapping.
> *
> * This function unmaps previously mapped pages of memory for GPU
> Shared Virtual
> * Memory (SVM). It iterates over each DMA address provided in
> @dma_addr, checks
> @@ -350,10 +393,17 @@ static void
> drm_pagemap_migrate_unmap_pages(struct device *dev,
> struct drm_pagemap_addr
> *pagemap_addr,
> unsigned long
> *migrate_pfn,
> unsigned long npages,
> - enum dma_data_direction
> dir)
> + enum dma_data_direction
> dir,
> + struct
> drm_pagemap_iova_state *state)
> {
> unsigned long i;
>
> + if (state && dma_use_iova(&state->dma_state)) {
> + dma_iova_unlink(dev, &state->dma_state, 0, state-
> >offset, dir, 0);
> + dma_iova_free(dev, &state->dma_state);
> + return;
> + }
> +
> for (i = 0; i < npages;) {
> struct page *page =
> migrate_pfn_to_page(migrate_pfn[i]);
>
> @@ -406,7 +456,7 @@ drm_pagemap_migrate_remote_to_local(struct
> drm_pagemap_devmem *devmem,
> devmem->pre_migrate_fence);
> out:
> drm_pagemap_migrate_unmap_pages(remote_device, pagemap_addr,
> local_pfns,
> - npages, DMA_FROM_DEVICE);
> + npages, DMA_FROM_DEVICE,
> NULL);
> return err;
> }
>
> @@ -416,11 +466,13 @@ drm_pagemap_migrate_sys_to_dev(struct
> drm_pagemap_devmem *devmem,
> struct page *local_pages[],
> struct drm_pagemap_addr
> pagemap_addr[],
> unsigned long npages,
> - const struct drm_pagemap_devmem_ops
> *ops)
> + const struct drm_pagemap_devmem_ops
> *ops,
> + struct drm_pagemap_iova_state *state)
> {
> int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
> pagemap_addr,
> sys_pfns,
> - npages,
> DMA_TO_DEVICE);
> + npages,
> DMA_TO_DEVICE,
> + state);
>
> if (err)
> goto out;
> @@ -429,7 +481,7 @@ drm_pagemap_migrate_sys_to_dev(struct
> drm_pagemap_devmem *devmem,
> devmem->pre_migrate_fence);
> out:
> drm_pagemap_migrate_unmap_pages(devmem->dev, pagemap_addr,
> sys_pfns, npages,
> - DMA_TO_DEVICE);
> + DMA_TO_DEVICE, state);
> return err;
> }
>
> @@ -457,6 +509,7 @@ static int drm_pagemap_migrate_range(struct
> drm_pagemap_devmem *devmem,
> const struct migrate_range_loc
> *cur,
> const struct
> drm_pagemap_migrate_details *mdetails)
> {
> + struct drm_pagemap_iova_state state = {};
> int ret = 0;
>
> if (cur->start == 0)
> @@ -484,7 +537,7 @@ static int drm_pagemap_migrate_range(struct
> drm_pagemap_devmem *devmem,
> &pages[last-
> >start],
>
> &pagemap_addr[last->start],
> cur->start -
> last->start,
> - last->ops);
> + last->ops,
> &state);
>
> out:
> *last = *cur;
> @@ -1001,6 +1054,7 @@ EXPORT_SYMBOL(drm_pagemap_put);
> int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem
> *devmem_allocation)
> {
> const struct drm_pagemap_devmem_ops *ops =
> devmem_allocation->ops;
> + struct drm_pagemap_iova_state state = {};
> unsigned long npages, mpages = 0;
> struct page **pages;
> unsigned long *src, *dst;
> @@ -1042,7 +1096,7 @@ int drm_pagemap_evict_to_ram(struct
> drm_pagemap_devmem *devmem_allocation)
> err =
> drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
> pagemap_addr,
> dst, npages,
> - DMA_FROM_DEVICE);
> + DMA_FROM_DEVICE,
> &state);
> if (err)
> goto err_finalize;
>
> @@ -1059,7 +1113,7 @@ int drm_pagemap_evict_to_ram(struct
> drm_pagemap_devmem *devmem_allocation)
> migrate_device_pages(src, dst, npages);
> migrate_device_finalize(src, dst, npages);
> drm_pagemap_migrate_unmap_pages(devmem_allocation->dev,
> pagemap_addr, dst, npages,
> - DMA_FROM_DEVICE);
> + DMA_FROM_DEVICE, &state);
>
> err_free:
> kvfree(buf);
> @@ -1103,6 +1157,7 @@ static int __drm_pagemap_migrate_to_ram(struct
> vm_area_struct *vas,
> MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> .fault_page = page,
> };
> + struct drm_pagemap_iova_state state = {};
> struct drm_pagemap_zdd *zdd;
> const struct drm_pagemap_devmem_ops *ops;
> struct device *dev = NULL;
> @@ -1162,7 +1217,7 @@ static int __drm_pagemap_migrate_to_ram(struct
> vm_area_struct *vas,
>
> err = drm_pagemap_migrate_map_system_pages(dev,
> pagemap_addr,
> migrate.dst,
> npages,
> - DMA_FROM_DEVICE);
> + DMA_FROM_DEVICE,
> &state);
> if (err)
> goto err_finalize;
>
> @@ -1180,7 +1235,8 @@ static int __drm_pagemap_migrate_to_ram(struct
> vm_area_struct *vas,
> migrate_vma_finalize(&migrate);
> if (dev)
> drm_pagemap_migrate_unmap_pages(dev, pagemap_addr,
> migrate.dst,
> - npages,
> DMA_FROM_DEVICE);
> + npages,
> DMA_FROM_DEVICE,
> + &state);
> err_free:
> kvfree(buf);
> err_out:
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
2026-02-11 11:34 ` Thomas Hellström
@ 2026-02-11 15:37 ` Matthew Brost
2026-02-11 18:48 ` Thomas Hellström
0 siblings, 1 reply; 20+ messages in thread
From: Matthew Brost @ 2026-02-11 15:37 UTC (permalink / raw)
To: Thomas Hellström
Cc: intel-xe, dri-devel, leonro, jgg, francois.dugast,
himal.prasad.ghimiray
On Wed, Feb 11, 2026 at 12:34:12PM +0100, Thomas Hellström wrote:
> On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> > The dma-map IOVA alloc, link, and sync APIs perform significantly
> > better
> > than dma-map / dma-unmap, as they avoid costly IOMMU
> > synchronizations.
> > This difference is especially noticeable when mapping a 2MB region in
> > 4KB pages.
> >
> > Use the IOVA alloc, link, and sync APIs for DRM pagemap, which create
> > DMA
> > mappings between the CPU and GPU for copying data.
> >
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > v4:
> > - Pack IOVA and drop dummy page (Jason)
> >
> > drivers/gpu/drm/drm_pagemap.c | 84 +++++++++++++++++++++++++++++----
> > --
> > 1 file changed, 70 insertions(+), 14 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/drm_pagemap.c
> > b/drivers/gpu/drm/drm_pagemap.c
> > index 29677b19bb69..52a196bc8459 100644
> > --- a/drivers/gpu/drm/drm_pagemap.c
> > +++ b/drivers/gpu/drm/drm_pagemap.c
> > @@ -280,6 +280,20 @@ drm_pagemap_migrate_map_device_pages(struct
> > device *dev,
> > return 0;
> > }
> >
> > +/**
> > + * struct drm_pagemap_iova_state - DRM pagemap IOVA state
> > + *
>
> No newline
>
+1
> > + * @dma_state: DMA IOVA state.
> > + * @offset: Current offset in IOVA.
> > + *
> > + * This structure acts as an iterator for packing all IOVA addresses
> > within a
> > + * contiguous range.
> > + */
> > +struct drm_pagemap_iova_state {
> > + struct dma_iova_state dma_state;
> > + unsigned long offset;
> > +};
> > +
> > /**
> > * drm_pagemap_migrate_map_system_pages() - Map system migration
> > pages for GPU SVM migration
> > * @dev: The device performing the migration.
> > @@ -287,6 +301,7 @@ drm_pagemap_migrate_map_device_pages(struct
> > device *dev,
> > * @migrate_pfn: Array of page frame numbers of system pages or peer
> > pages to map.
> > * @npages: Number of system pages or peer pages to map.
> > * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > + * @state: DMA IOVA state for mapping.
> > *
> > * This function maps pages of memory for migration usage in GPU
> > SVM. It
> > * iterates over each page frame number provided in @migrate_pfn,
> > maps the
> > @@ -300,9 +315,11 @@ drm_pagemap_migrate_map_system_pages(struct
> > device *dev,
> > struct drm_pagemap_addr
> > *pagemap_addr,
> > unsigned long *migrate_pfn,
> > unsigned long npages,
> > - enum dma_data_direction dir)
> > + enum dma_data_direction dir,
> > + struct drm_pagemap_iova_state
> > *state)
> > {
> > unsigned long i;
> > + bool try_alloc = false;
> >
> > for (i = 0; i < npages;) {
> > struct page *page =
> > migrate_pfn_to_page(migrate_pfn[i]);
> > @@ -317,9 +334,31 @@ drm_pagemap_migrate_map_system_pages(struct
> > device *dev,
> > folio = page_folio(page);
> > order = folio_order(folio);
> >
> > - dma_addr = dma_map_page(dev, page, 0,
> > page_size(page), dir);
> > - if (dma_mapping_error(dev, dma_addr))
> > - return -EFAULT;
> > + if (!try_alloc) {
> > + dma_iova_try_alloc(dev, &state->dma_state,
> > + npages * PAGE_SIZE >=
> > + HPAGE_PMD_SIZE ?
> > + HPAGE_PMD_SIZE : 0,
> > + npages * PAGE_SIZE);
> > + try_alloc = true;
> > + }
>
> What happens if dma_iova_try_alloc() fails for all i < some value x and
> then suddenly succeeds for i == x? While the below code looks correct,
We only try to allocate on the first valid page - 'i' may be any value,
depending on where the first valid page is found, and we may never
allocate at all if zero valid pages are found (possible, hence why the
attempt is inside the loop). This step is done at most once. If the
allocation fails, we fall back to the dma_map_page() path for the
remaining loop iterations.
> I figure we'd allocate a too large IOVA region and possibly get the
> alignment wrong?
The first and only IOVA allocation attempts an aligned allocation. What
can happen is that only a subset of the IOVA is used for the copy, but we
pack the pages contiguously, starting at IOVA[0] and ending at
IOVA[number of valid pages - 1].
Matt
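[The allocate-once-with-fallback pattern described above can be sketched
in plain C. This is a hedged illustration, not kernel code:
iova_alloc(), map_one(), and all addresses/constants are hypothetical
stand-ins for dma_iova_try_alloc() and dma_map_page(). The point is
that the allocation is attempted at most once, on the first valid page,
and on failure every page takes the per-page mapping path.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins; toggled by a flag to model alloc success/failure. */
static bool iova_alloc_ok;

static bool iova_alloc(size_t total)
{
	(void)total;            /* real code would size/align the IOVA here */
	return iova_alloc_ok;
}

static unsigned long map_one(unsigned long pfn)
{
	return pfn * 0x1000UL;  /* models a per-page dma_map_page() result */
}

/* Mirrors the loop's shape: try the contiguous IOVA allocation at most
 * once, on the first valid page; then either pack offsets into the one
 * range or fall back to per-page mapping for every page. */
static void map_pages(const unsigned long *pfns, size_t npages,
		      unsigned long *out)
{
	bool tried = false, use_iova = false;
	size_t offset = 0;
	size_t i;

	for (i = 0; i < npages; i++) {
		if (!pfns[i])   /* skip holes, like a NULL migrate page */
			continue;
		if (!tried) {
			use_iova = iova_alloc(npages * 0x1000UL);
			tried = true;   /* at most one attempt */
		}
		if (use_iova) {
			out[i] = 0x80000000UL + offset; /* pack contiguously */
			offset += 0x1000;
		} else {
			out[i] = map_one(pfns[i]);      /* fallback path */
		}
	}
}
```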
>
> Otherwise LGTM.
>
>
> > +
> > + if (dma_use_iova(&state->dma_state)) {
> > + int err = dma_iova_link(dev, &state->dma_state,
> > + page_to_phys(page),
> > + state->offset, page_size(page),
> > + dir, 0);
> > + if (err)
> > + return err;
> > +
> > + dma_addr = state->dma_state.addr + state->offset;
> > + state->offset += page_size(page);
> > + } else {
> > + dma_addr = dma_map_page(dev, page, 0,
> > page_size(page),
> > + dir);
> > + if (dma_mapping_error(dev, dma_addr))
> > + return -EFAULT;
> > + }
> >
> > pagemap_addr[i] =
> > drm_pagemap_addr_encode(dma_addr,
> > @@ -330,6 +369,9 @@ drm_pagemap_migrate_map_system_pages(struct
> > device *dev,
> > i += NR_PAGES(order);
> > }
> >
> > + if (dma_use_iova(&state->dma_state))
> > + return dma_iova_sync(dev, &state->dma_state, 0,
> > state->offset);
> > +
> > return 0;
> > }
> >
> > @@ -341,6 +383,7 @@ drm_pagemap_migrate_map_system_pages(struct
> > device *dev,
> > * @pagemap_addr: Array of DMA information corresponding to mapped
> > pages
> > * @npages: Number of pages to unmap
> > * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > + * @state: DMA IOVA state for mapping.
> > *
> > * This function unmaps previously mapped pages of memory for GPU
> > Shared Virtual
> > * Memory (SVM). It iterates over each DMA address provided in
> > @dma_addr, checks
> > @@ -350,10 +393,17 @@ static void
> > drm_pagemap_migrate_unmap_pages(struct device *dev,
> > struct drm_pagemap_addr
> > *pagemap_addr,
> > unsigned long
> > *migrate_pfn,
> > unsigned long npages,
> > - enum dma_data_direction
> > dir)
> > + enum dma_data_direction
> > dir,
> > + struct
> > drm_pagemap_iova_state *state)
> > {
> > unsigned long i;
> >
> > + if (state && dma_use_iova(&state->dma_state)) {
> > + dma_iova_unlink(dev, &state->dma_state, 0, state->offset, dir, 0);
> > + dma_iova_free(dev, &state->dma_state);
> > + return;
> > + }
> > +
> > for (i = 0; i < npages;) {
> > struct page *page =
> > migrate_pfn_to_page(migrate_pfn[i]);
> >
> > @@ -406,7 +456,7 @@ drm_pagemap_migrate_remote_to_local(struct
> > drm_pagemap_devmem *devmem,
> > devmem->pre_migrate_fence);
> > out:
> > drm_pagemap_migrate_unmap_pages(remote_device, pagemap_addr,
> > local_pfns,
> > - npages, DMA_FROM_DEVICE);
> > + npages, DMA_FROM_DEVICE,
> > NULL);
> > return err;
> > }
> >
> > @@ -416,11 +466,13 @@ drm_pagemap_migrate_sys_to_dev(struct
> > drm_pagemap_devmem *devmem,
> > struct page *local_pages[],
> > struct drm_pagemap_addr
> > pagemap_addr[],
> > unsigned long npages,
> > - const struct drm_pagemap_devmem_ops
> > *ops)
> > + const struct drm_pagemap_devmem_ops
> > *ops,
> > + struct drm_pagemap_iova_state *state)
> > {
> > int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
> > pagemap_addr,
> > sys_pfns,
> > - npages,
> > DMA_TO_DEVICE);
> > + npages,
> > DMA_TO_DEVICE,
> > + state);
> >
> > if (err)
> > goto out;
> > @@ -429,7 +481,7 @@ drm_pagemap_migrate_sys_to_dev(struct
> > drm_pagemap_devmem *devmem,
> > devmem->pre_migrate_fence);
> > out:
> > drm_pagemap_migrate_unmap_pages(devmem->dev, pagemap_addr,
> > sys_pfns, npages,
> > - DMA_TO_DEVICE);
> > + DMA_TO_DEVICE, state);
> > return err;
> > }
> >
> > @@ -457,6 +509,7 @@ static int drm_pagemap_migrate_range(struct
> > drm_pagemap_devmem *devmem,
> > const struct migrate_range_loc
> > *cur,
> > const struct
> > drm_pagemap_migrate_details *mdetails)
> > {
> > + struct drm_pagemap_iova_state state = {};
> > int ret = 0;
> >
> > if (cur->start == 0)
> > @@ -484,7 +537,7 @@ static int drm_pagemap_migrate_range(struct
> > drm_pagemap_devmem *devmem,
> > &pages[last->start],
> >
> > &pagemap_addr[last->start],
> > cur->start -
> > last->start,
> > - last->ops);
> > + last->ops,
> > &state);
> >
> > out:
> > *last = *cur;
> > @@ -1001,6 +1054,7 @@ EXPORT_SYMBOL(drm_pagemap_put);
> > int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem
> > *devmem_allocation)
> > {
> > const struct drm_pagemap_devmem_ops *ops =
> > devmem_allocation->ops;
> > + struct drm_pagemap_iova_state state = {};
> > unsigned long npages, mpages = 0;
> > struct page **pages;
> > unsigned long *src, *dst;
> > @@ -1042,7 +1096,7 @@ int drm_pagemap_evict_to_ram(struct
> > drm_pagemap_devmem *devmem_allocation)
> > err =
> > drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
> > pagemap_addr,
> > dst, npages,
> > - DMA_FROM_DEVICE);
> > + DMA_FROM_DEVICE,
> > &state);
> > if (err)
> > goto err_finalize;
> >
> > @@ -1059,7 +1113,7 @@ int drm_pagemap_evict_to_ram(struct
> > drm_pagemap_devmem *devmem_allocation)
> > migrate_device_pages(src, dst, npages);
> > migrate_device_finalize(src, dst, npages);
> > drm_pagemap_migrate_unmap_pages(devmem_allocation->dev,
> > pagemap_addr, dst, npages,
> > - DMA_FROM_DEVICE);
> > + DMA_FROM_DEVICE, &state);
> >
> > err_free:
> > kvfree(buf);
> > @@ -1103,6 +1157,7 @@ static int __drm_pagemap_migrate_to_ram(struct
> > vm_area_struct *vas,
> > MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> > .fault_page = page,
> > };
> > + struct drm_pagemap_iova_state state = {};
> > struct drm_pagemap_zdd *zdd;
> > const struct drm_pagemap_devmem_ops *ops;
> > struct device *dev = NULL;
> > @@ -1162,7 +1217,7 @@ static int __drm_pagemap_migrate_to_ram(struct
> > vm_area_struct *vas,
> >
> > err = drm_pagemap_migrate_map_system_pages(dev,
> > pagemap_addr,
> > migrate.dst,
> > npages,
> > - DMA_FROM_DEVICE);
> > + DMA_FROM_DEVICE,
> > &state);
> > if (err)
> > goto err_finalize;
> >
> > @@ -1180,7 +1235,8 @@ static int __drm_pagemap_migrate_to_ram(struct
> > vm_area_struct *vas,
> > migrate_vma_finalize(&migrate);
> > if (dev)
> > drm_pagemap_migrate_unmap_pages(dev, pagemap_addr,
> > migrate.dst,
> > - npages,
> > DMA_FROM_DEVICE);
> > + npages,
> > DMA_FROM_DEVICE,
> > + &state);
> > err_free:
> > kvfree(buf);
> > err_out:
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
2026-02-11 15:37 ` Matthew Brost
@ 2026-02-11 18:48 ` Thomas Hellström
2026-02-11 18:51 ` Matthew Brost
0 siblings, 1 reply; 20+ messages in thread
From: Thomas Hellström @ 2026-02-11 18:48 UTC (permalink / raw)
To: Matthew Brost
Cc: intel-xe, dri-devel, leonro, jgg, francois.dugast,
himal.prasad.ghimiray
On Wed, 2026-02-11 at 07:37 -0800, Matthew Brost wrote:
> On Wed, Feb 11, 2026 at 12:34:12PM +0100, Thomas Hellström wrote:
> > On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> > > The dma-map IOVA alloc, link, and sync APIs perform significantly
> > > better
> > > than dma-map / dma-unmap, as they avoid costly IOMMU
> > > synchronizations.
> > > This difference is especially noticeable when mapping a 2MB
> > > region in
> > > 4KB pages.
> > >
> > > Use the IOVA alloc, link, and sync APIs for DRM pagemap, which
> > > create
> > > DMA
> > > mappings between the CPU and GPU for copying data.
> > >
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > ---
> > > v4:
> > > - Pack IOVA and drop dummy page (Jason)
> > >
> > > drivers/gpu/drm/drm_pagemap.c | 84
> > > +++++++++++++++++++++++++++++----
> > > --
> > > 1 file changed, 70 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/drm_pagemap.c
> > > b/drivers/gpu/drm/drm_pagemap.c
> > > index 29677b19bb69..52a196bc8459 100644
> > > --- a/drivers/gpu/drm/drm_pagemap.c
> > > +++ b/drivers/gpu/drm/drm_pagemap.c
> > > @@ -280,6 +280,20 @@ drm_pagemap_migrate_map_device_pages(struct
> > > device *dev,
> > > return 0;
> > > }
> > >
> > > +/**
> > > + * struct drm_pagemap_iova_state - DRM pagemap IOVA state
> > > + *
> >
> > No newline
> >
>
> +1
>
> > > + * @dma_state: DMA IOVA state.
> > > + * @offset: Current offset in IOVA.
> > > + *
> > > + * This structure acts as an iterator for packing all IOVA
> > > addresses
> > > within a
> > > + * contiguous range.
> > > + */
> > > +struct drm_pagemap_iova_state {
> > > + struct dma_iova_state dma_state;
> > > + unsigned long offset;
> > > +};
> > > +
> > > /**
> > > * drm_pagemap_migrate_map_system_pages() - Map system migration
> > > pages for GPU SVM migration
> > > * @dev: The device performing the migration.
> > > @@ -287,6 +301,7 @@ drm_pagemap_migrate_map_device_pages(struct
> > > device *dev,
> > > * @migrate_pfn: Array of page frame numbers of system pages or
> > > peer
> > > pages to map.
> > > * @npages: Number of system pages or peer pages to map.
> > > * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > > + * @state: DMA IOVA state for mapping.
> > > *
> > > * This function maps pages of memory for migration usage in GPU
> > > SVM. It
> > > * iterates over each page frame number provided in
> > > @migrate_pfn,
> > > maps the
> > > @@ -300,9 +315,11 @@ drm_pagemap_migrate_map_system_pages(struct
> > > device *dev,
> > > struct drm_pagemap_addr
> > > *pagemap_addr,
> > > unsigned long *migrate_pfn,
> > > unsigned long npages,
> > > - enum dma_data_direction
> > > dir)
> > > + enum dma_data_direction
> > > dir,
> > > + struct
> > > drm_pagemap_iova_state
> > > *state)
> > > {
> > > unsigned long i;
> > > + bool try_alloc = false;
> > >
> > > for (i = 0; i < npages;) {
> > > struct page *page =
> > > migrate_pfn_to_page(migrate_pfn[i]);
> > > @@ -317,9 +334,31 @@ drm_pagemap_migrate_map_system_pages(struct
> > > device *dev,
> > > folio = page_folio(page);
> > > order = folio_order(folio);
> > >
> > > - dma_addr = dma_map_page(dev, page, 0,
> > > page_size(page), dir);
> > > - if (dma_mapping_error(dev, dma_addr))
> > > - return -EFAULT;
> > > + if (!try_alloc) {
> > > + dma_iova_try_alloc(dev, &state->dma_state,
> > > + npages * PAGE_SIZE >=
> > > + HPAGE_PMD_SIZE ?
> > > + HPAGE_PMD_SIZE : 0,
> > > + npages * PAGE_SIZE);
> > > + try_alloc = true;
> > > + }
> >
> > What happens if dma_iova_try_alloc() fails for all i < some value x
> > and
> > then suddenly succeeds for i == x? While the below code looks
> > correct,
>
> We only try to alloc on the first valid page - 'i' may be any value
> based on the first page found or we may never alloc if the number of
> pages found == 0 (possible, hence why it is inside the loop). This
> step
> is done at most once. If the allocation fails, we use the map_page
> path
> for the remaining loop iterations.
>
> > I figure we'd allocate a too large IOVA region and possibly get the
> > alignment wrong?
>
> The first and only IOVA allocation attempts an aligned allocation.
> What
> can happen is only a subset of the IOVA is used for the copy but we
> pack
> in the pages starting at IOVA[0] and end at IOVA[number valid pages -
> 1].
>
> Matt
So to be a little nicer on the IOVA allocator we could use the below?
dma_iova_try_alloc(dev, &state->dma_state,
(npages - i) * PAGE_SIZE >=
HPAGE_PMD_SIZE ?
HPAGE_PMD_SIZE : 0,
(npages - i) * PAGE_SIZE);
Thanks,
Thomas
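[Thomas's suggestion above can be shown as a tiny helper. This is an
illustrative sketch only: the 4K/2M constants and the helper name are
assumptions standing in for PAGE_SIZE/HPAGE_PMD_SIZE, not the kernel's
definitions. The idea is to size the one-shot allocation by the pages
remaining from the first valid index, and to request PMD alignment only
when that remainder can actually span a PMD.]

```c
#include <stddef.h>

#define PAGE_SZ 4096UL
#define PMD_SZ  (2UL * 1024 * 1024)  /* 2M, the common HPAGE_PMD_SIZE */

/* Hypothetical helper computing the size/alignment arguments that the
 * suggested dma_iova_try_alloc() call would receive, given the first
 * valid page index i within an npages-long migration window. */
static void iova_alloc_args(size_t npages, size_t i,
			    size_t *size, size_t *align)
{
	*size = (npages - i) * PAGE_SZ;              /* remaining pages only */
	*align = (*size >= PMD_SZ) ? PMD_SZ : 0;     /* align only if useful */
}
```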
>
> >
> > Otherwise LGTM.
> >
> >
> > > +
> > > + if (dma_use_iova(&state->dma_state)) {
> > > + int err = dma_iova_link(dev, &state->dma_state,
> > > + page_to_phys(page),
> > > + state->offset, page_size(page),
> > > + dir, 0);
> > > + if (err)
> > > + return err;
> > > +
> > > + dma_addr = state->dma_state.addr + state->offset;
> > > + state->offset += page_size(page);
> > > + } else {
> > > + dma_addr = dma_map_page(dev, page, 0,
> > > page_size(page),
> > > + dir);
> > > + if (dma_mapping_error(dev, dma_addr))
> > > + return -EFAULT;
> > > + }
> > >
> > > pagemap_addr[i] =
> > > drm_pagemap_addr_encode(dma_addr,
> > > @@ -330,6 +369,9 @@ drm_pagemap_migrate_map_system_pages(struct
> > > device *dev,
> > > i += NR_PAGES(order);
> > > }
> > >
> > > + if (dma_use_iova(&state->dma_state))
> > > + return dma_iova_sync(dev, &state->dma_state, 0,
> > > state->offset);
> > > +
> > > return 0;
> > > }
> > >
> > > @@ -341,6 +383,7 @@ drm_pagemap_migrate_map_system_pages(struct
> > > device *dev,
> > > * @pagemap_addr: Array of DMA information corresponding to
> > > mapped
> > > pages
> > > * @npages: Number of pages to unmap
> > > * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > > + * @state: DMA IOVA state for mapping.
> > > *
> > > * This function unmaps previously mapped pages of memory for
> > > GPU
> > > Shared Virtual
> > > * Memory (SVM). It iterates over each DMA address provided in
> > > @dma_addr, checks
> > > @@ -350,10 +393,17 @@ static void
> > > drm_pagemap_migrate_unmap_pages(struct device *dev,
> > > struct
> > > drm_pagemap_addr
> > > *pagemap_addr,
> > > unsigned long
> > > *migrate_pfn,
> > > unsigned long
> > > npages,
> > > - enum
> > > dma_data_direction
> > > dir)
> > > + enum
> > > dma_data_direction
> > > dir,
> > > + struct
> > > drm_pagemap_iova_state *state)
> > > {
> > > unsigned long i;
> > >
> > > + if (state && dma_use_iova(&state->dma_state)) {
> > > + dma_iova_unlink(dev, &state->dma_state, 0, state->offset, dir, 0);
> > > + dma_iova_free(dev, &state->dma_state);
> > > + return;
> > > + }
> > > +
> > > for (i = 0; i < npages;) {
> > > struct page *page =
> > > migrate_pfn_to_page(migrate_pfn[i]);
> > >
> > > @@ -406,7 +456,7 @@ drm_pagemap_migrate_remote_to_local(struct
> > > drm_pagemap_devmem *devmem,
> > > devmem->pre_migrate_fence);
> > > out:
> > > drm_pagemap_migrate_unmap_pages(remote_device,
> > > pagemap_addr,
> > > local_pfns,
> > > - npages,
> > > DMA_FROM_DEVICE);
> > > + npages, DMA_FROM_DEVICE,
> > > NULL);
> > > return err;
> > > }
> > >
> > > @@ -416,11 +466,13 @@ drm_pagemap_migrate_sys_to_dev(struct
> > > drm_pagemap_devmem *devmem,
> > > struct page *local_pages[],
> > > struct drm_pagemap_addr
> > > pagemap_addr[],
> > > unsigned long npages,
> > > - const struct
> > > drm_pagemap_devmem_ops
> > > *ops)
> > > + const struct
> > > drm_pagemap_devmem_ops
> > > *ops,
> > > + struct drm_pagemap_iova_state
> > > *state)
> > > {
> > > int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
> > >
> > > pagemap_addr,
> > > sys_pfns,
> > > - npages,
> > > DMA_TO_DEVICE);
> > > + npages,
> > > DMA_TO_DEVICE,
> > > + state);
> > >
> > > if (err)
> > > goto out;
> > > @@ -429,7 +481,7 @@ drm_pagemap_migrate_sys_to_dev(struct
> > > drm_pagemap_devmem *devmem,
> > > devmem->pre_migrate_fence);
> > > out:
> > > drm_pagemap_migrate_unmap_pages(devmem->dev,
> > > pagemap_addr,
> > > sys_pfns, npages,
> > > - DMA_TO_DEVICE);
> > > + DMA_TO_DEVICE, state);
> > > return err;
> > > }
> > >
> > > @@ -457,6 +509,7 @@ static int drm_pagemap_migrate_range(struct
> > > drm_pagemap_devmem *devmem,
> > > const struct
> > > migrate_range_loc
> > > *cur,
> > > const struct
> > > drm_pagemap_migrate_details *mdetails)
> > > {
> > > + struct drm_pagemap_iova_state state = {};
> > > int ret = 0;
> > >
> > > if (cur->start == 0)
> > > @@ -484,7 +537,7 @@ static int drm_pagemap_migrate_range(struct
> > > drm_pagemap_devmem *devmem,
> > >
> > > &pages[last->start],
> > >
> > > &pagemap_addr[last->start],
> > > cur->start - last->start,
> > > - last->ops);
> > > + last->ops,
> > > &state);
> > >
> > > out:
> > > *last = *cur;
> > > @@ -1001,6 +1054,7 @@ EXPORT_SYMBOL(drm_pagemap_put);
> > > int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem
> > > *devmem_allocation)
> > > {
> > > const struct drm_pagemap_devmem_ops *ops =
> > > devmem_allocation->ops;
> > > + struct drm_pagemap_iova_state state = {};
> > > unsigned long npages, mpages = 0;
> > > struct page **pages;
> > > unsigned long *src, *dst;
> > > @@ -1042,7 +1096,7 @@ int drm_pagemap_evict_to_ram(struct
> > > drm_pagemap_devmem *devmem_allocation)
> > > err =
> > > drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
> > > pagemap_addr,
> > > dst, npages,
> > > -
> > > DMA_FROM_DEVICE);
> > > +
> > > DMA_FROM_DEVICE,
> > > &state);
> > > if (err)
> > > goto err_finalize;
> > >
> > > @@ -1059,7 +1113,7 @@ int drm_pagemap_evict_to_ram(struct
> > > drm_pagemap_devmem *devmem_allocation)
> > > migrate_device_pages(src, dst, npages);
> > > migrate_device_finalize(src, dst, npages);
> > > drm_pagemap_migrate_unmap_pages(devmem_allocation->dev,
> > > pagemap_addr, dst, npages,
> > > - DMA_FROM_DEVICE);
> > > + DMA_FROM_DEVICE,
> > > &state);
> > >
> > > err_free:
> > > kvfree(buf);
> > > @@ -1103,6 +1157,7 @@ static int
> > > __drm_pagemap_migrate_to_ram(struct
> > > vm_area_struct *vas,
> > > MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> > > .fault_page = page,
> > > };
> > > + struct drm_pagemap_iova_state state = {};
> > > struct drm_pagemap_zdd *zdd;
> > > const struct drm_pagemap_devmem_ops *ops;
> > > struct device *dev = NULL;
> > > @@ -1162,7 +1217,7 @@ static int
> > > __drm_pagemap_migrate_to_ram(struct
> > > vm_area_struct *vas,
> > >
> > > err = drm_pagemap_migrate_map_system_pages(dev,
> > > pagemap_addr,
> > > migrate.dst,
> > > npages,
> > > -
> > > DMA_FROM_DEVICE);
> > > +
> > > DMA_FROM_DEVICE,
> > > &state);
> > > if (err)
> > > goto err_finalize;
> > >
> > > @@ -1180,7 +1235,8 @@ static int
> > > __drm_pagemap_migrate_to_ram(struct
> > > vm_area_struct *vas,
> > > migrate_vma_finalize(&migrate);
> > > if (dev)
> > > drm_pagemap_migrate_unmap_pages(dev,
> > > pagemap_addr,
> > > migrate.dst,
> > > - npages,
> > > DMA_FROM_DEVICE);
> > > + npages,
> > > DMA_FROM_DEVICE,
> > > + &state);
> > > err_free:
> > > kvfree(buf);
> > > err_out:
* Re: [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
2026-02-11 18:48 ` Thomas Hellström
@ 2026-02-11 18:51 ` Matthew Brost
[not found] ` <20260213145646.GO750753@ziepe.ca>
0 siblings, 1 reply; 20+ messages in thread
From: Matthew Brost @ 2026-02-11 18:51 UTC (permalink / raw)
To: Thomas Hellström
Cc: intel-xe, dri-devel, leonro, jgg, francois.dugast,
himal.prasad.ghimiray
On Wed, Feb 11, 2026 at 07:48:59PM +0100, Thomas Hellström wrote:
> On Wed, 2026-02-11 at 07:37 -0800, Matthew Brost wrote:
> > On Wed, Feb 11, 2026 at 12:34:12PM +0100, Thomas Hellström wrote:
> > > On Wed, 2026-02-04 at 20:19 -0800, Matthew Brost wrote:
> > > > The dma-map IOVA alloc, link, and sync APIs perform significantly
> > > > better
> > > > than dma-map / dma-unmap, as they avoid costly IOMMU
> > > > synchronizations.
> > > > This difference is especially noticeable when mapping a 2MB
> > > > region in
> > > > 4KB pages.
> > > >
> > > > Use the IOVA alloc, link, and sync APIs for DRM pagemap, which
> > > > create
> > > > DMA
> > > > mappings between the CPU and GPU for copying data.
> > > >
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > ---
> > > > v4:
> > > > - Pack IOVA and drop dummy page (Jason)
> > > >
> > > > drivers/gpu/drm/drm_pagemap.c | 84
> > > > +++++++++++++++++++++++++++++----
> > > > --
> > > > 1 file changed, 70 insertions(+), 14 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/drm_pagemap.c
> > > > b/drivers/gpu/drm/drm_pagemap.c
> > > > index 29677b19bb69..52a196bc8459 100644
> > > > --- a/drivers/gpu/drm/drm_pagemap.c
> > > > +++ b/drivers/gpu/drm/drm_pagemap.c
> > > > @@ -280,6 +280,20 @@ drm_pagemap_migrate_map_device_pages(struct
> > > > device *dev,
> > > > return 0;
> > > > }
> > > >
> > > > +/**
> > > > + * struct drm_pagemap_iova_state - DRM pagemap IOVA state
> > > > + *
> > >
> > > No newline
> > >
> >
> > +1
> >
> > > > + * @dma_state: DMA IOVA state.
> > > > + * @offset: Current offset in IOVA.
> > > > + *
> > > > + * This structure acts as an iterator for packing all IOVA
> > > > addresses
> > > > within a
> > > > + * contiguous range.
> > > > + */
> > > > +struct drm_pagemap_iova_state {
> > > > + struct dma_iova_state dma_state;
> > > > + unsigned long offset;
> > > > +};
> > > > +
> > > > /**
> > > > * drm_pagemap_migrate_map_system_pages() - Map system migration
> > > > pages for GPU SVM migration
> > > > * @dev: The device performing the migration.
> > > > @@ -287,6 +301,7 @@ drm_pagemap_migrate_map_device_pages(struct
> > > > device *dev,
> > > > * @migrate_pfn: Array of page frame numbers of system pages or
> > > > peer
> > > > pages to map.
> > > > * @npages: Number of system pages or peer pages to map.
> > > > * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > > > + * @state: DMA IOVA state for mapping.
> > > > *
> > > > * This function maps pages of memory for migration usage in GPU
> > > > SVM. It
> > > > * iterates over each page frame number provided in
> > > > @migrate_pfn,
> > > > maps the
> > > > @@ -300,9 +315,11 @@ drm_pagemap_migrate_map_system_pages(struct
> > > > device *dev,
> > > > struct drm_pagemap_addr
> > > > *pagemap_addr,
> > > > unsigned long *migrate_pfn,
> > > > unsigned long npages,
> > > > - enum dma_data_direction
> > > > dir)
> > > > + enum dma_data_direction
> > > > dir,
> > > > + struct
> > > > drm_pagemap_iova_state
> > > > *state)
> > > > {
> > > > unsigned long i;
> > > > + bool try_alloc = false;
> > > >
> > > > for (i = 0; i < npages;) {
> > > > struct page *page =
> > > > migrate_pfn_to_page(migrate_pfn[i]);
> > > > @@ -317,9 +334,31 @@ drm_pagemap_migrate_map_system_pages(struct
> > > > device *dev,
> > > > folio = page_folio(page);
> > > > order = folio_order(folio);
> > > >
> > > > - dma_addr = dma_map_page(dev, page, 0,
> > > > page_size(page), dir);
> > > > - if (dma_mapping_error(dev, dma_addr))
> > > > - return -EFAULT;
> > > > + if (!try_alloc) {
> > > > + dma_iova_try_alloc(dev, &state->dma_state,
> > > > + npages * PAGE_SIZE >=
> > > > + HPAGE_PMD_SIZE ?
> > > > + HPAGE_PMD_SIZE : 0,
> > > > + npages * PAGE_SIZE);
> > > > + try_alloc = true;
> > > > + }
> > >
> > > What happens if dma_iova_try_alloc() fails for all i < some value x
> > > and
> > > then suddenly succeeds for i == x? While the below code looks
> > > correct,
> >
> > We only try to alloc on the first valid page - 'i' may be any value
> > based on the first page found or we may never alloc if the number of
> > pages found == 0 (possible, hence why it is inside the loop). This
> > step
> > is done at most once. If the allocation fails, we use the map_page
> > path
> > for the remaining loop iterations.
> >
> > > I figure we'd allocate a too large IOVA region and possibly get the
> > > alignment wrong?
> >
> > The first and only IOVA allocation attempts an aligned allocation.
> > What
> > can happen is only a subset of the IOVA is used for the copy but we
> > pack
> > in the pages starting at IOVA[0] and end at IOVA[number valid pages -
> > 1].
> >
> > Matt
>
> So to be a little nicer on the IOVA allocator we could use the below?
>
> dma_iova_try_alloc(dev, &state->dma_state,
> (npages - i) * PAGE_SIZE >=
> HPAGE_PMD_SIZE ?
> HPAGE_PMD_SIZE : 0,
> (npages - i) * PAGE_SIZE);
>
Yes, we can do that. No reason to force alignment if our copy code isn't
going to try to use 2M GPU pages.
Matt
> Thanks,
> Thomas
>
> >
> > >
> > > Otherwise LGTM.
> > >
> > >
> > > > +
> > > > + if (dma_use_iova(&state->dma_state)) {
> > > > + int err = dma_iova_link(dev, &state->dma_state,
> > > > + page_to_phys(page),
> > > > + state->offset, page_size(page),
> > > > + dir, 0);
> > > > + if (err)
> > > > + return err;
> > > > +
> > > > + dma_addr = state->dma_state.addr + state->offset;
> > > > + state->offset += page_size(page);
> > > > + } else {
> > > > + dma_addr = dma_map_page(dev, page, 0,
> > > > page_size(page),
> > > > + dir);
> > > > + if (dma_mapping_error(dev, dma_addr))
> > > > + return -EFAULT;
> > > > + }
> > > >
> > > > pagemap_addr[i] =
> > > > drm_pagemap_addr_encode(dma_addr,
> > > > @@ -330,6 +369,9 @@ drm_pagemap_migrate_map_system_pages(struct
> > > > device *dev,
> > > > i += NR_PAGES(order);
> > > > }
> > > >
> > > > + if (dma_use_iova(&state->dma_state))
> > > > + return dma_iova_sync(dev, &state->dma_state, 0,
> > > > state->offset);
> > > > +
> > > > return 0;
> > > > }
> > > >
> > > > @@ -341,6 +383,7 @@ drm_pagemap_migrate_map_system_pages(struct
> > > > device *dev,
> > > > * @pagemap_addr: Array of DMA information corresponding to
> > > > mapped
> > > > pages
> > > > * @npages: Number of pages to unmap
> > > > * @dir: Direction of data transfer (e.g., DMA_BIDIRECTIONAL)
> > > > + * @state: DMA IOVA state for mapping.
> > > > *
> > > > * This function unmaps previously mapped pages of memory for
> > > > GPU
> > > > Shared Virtual
> > > > * Memory (SVM). It iterates over each DMA address provided in
> > > > @dma_addr, checks
> > > > @@ -350,10 +393,17 @@ static void
> > > > drm_pagemap_migrate_unmap_pages(struct device *dev,
> > > > struct
> > > > drm_pagemap_addr
> > > > *pagemap_addr,
> > > > unsigned long
> > > > *migrate_pfn,
> > > > unsigned long
> > > > npages,
> > > > - enum
> > > > dma_data_direction
> > > > dir)
> > > > + enum
> > > > dma_data_direction
> > > > dir,
> > > > + struct
> > > > drm_pagemap_iova_state *state)
> > > > {
> > > > unsigned long i;
> > > >
> > > > + if (state && dma_use_iova(&state->dma_state)) {
> > > > + dma_iova_unlink(dev, &state->dma_state, 0, state->offset, dir, 0);
> > > > + dma_iova_free(dev, &state->dma_state);
> > > > + return;
> > > > + }
> > > > +
> > > > for (i = 0; i < npages;) {
> > > > struct page *page =
> > > > migrate_pfn_to_page(migrate_pfn[i]);
> > > >
> > > > @@ -406,7 +456,7 @@ drm_pagemap_migrate_remote_to_local(struct
> > > > drm_pagemap_devmem *devmem,
> > > > devmem->pre_migrate_fence);
> > > > out:
> > > > drm_pagemap_migrate_unmap_pages(remote_device,
> > > > pagemap_addr,
> > > > local_pfns,
> > > > - npages,
> > > > DMA_FROM_DEVICE);
> > > > + npages, DMA_FROM_DEVICE,
> > > > NULL);
> > > > return err;
> > > > }
> > > >
> > > > @@ -416,11 +466,13 @@ drm_pagemap_migrate_sys_to_dev(struct
> > > > drm_pagemap_devmem *devmem,
> > > > struct page *local_pages[],
> > > > struct drm_pagemap_addr
> > > > pagemap_addr[],
> > > > unsigned long npages,
> > > > - const struct
> > > > drm_pagemap_devmem_ops
> > > > *ops)
> > > > + const struct
> > > > drm_pagemap_devmem_ops
> > > > *ops,
> > > > + struct drm_pagemap_iova_state
> > > > *state)
> > > > {
> > > > int err = drm_pagemap_migrate_map_system_pages(devmem->dev,
> > > >
> > > > pagemap_addr,
> > > > sys_pfns,
> > > > - npages,
> > > > DMA_TO_DEVICE);
> > > > + npages,
> > > > DMA_TO_DEVICE,
> > > > + state);
> > > >
> > > > if (err)
> > > > goto out;
> > > > @@ -429,7 +481,7 @@ drm_pagemap_migrate_sys_to_dev(struct
> > > > drm_pagemap_devmem *devmem,
> > > > devmem->pre_migrate_fence);
> > > > out:
> > > > drm_pagemap_migrate_unmap_pages(devmem->dev,
> > > > pagemap_addr,
> > > > sys_pfns, npages,
> > > > - DMA_TO_DEVICE);
> > > > + DMA_TO_DEVICE, state);
> > > > return err;
> > > > }
> > > >
> > > > @@ -457,6 +509,7 @@ static int drm_pagemap_migrate_range(struct
> > > > drm_pagemap_devmem *devmem,
> > > > const struct
> > > > migrate_range_loc
> > > > *cur,
> > > > const struct
> > > > drm_pagemap_migrate_details *mdetails)
> > > > {
> > > > + struct drm_pagemap_iova_state state = {};
> > > > int ret = 0;
> > > >
> > > > if (cur->start == 0)
> > > > @@ -484,7 +537,7 @@ static int drm_pagemap_migrate_range(struct
> > > > drm_pagemap_devmem *devmem,
> > > >
> > > > &pages[last->start],
> > > >
> > > > &pagemap_addr[last->start],
> > > > cur->start - last->start,
> > > > - last->ops);
> > > > + last->ops,
> > > > &state);
> > > >
> > > > out:
> > > > *last = *cur;
> > > > @@ -1001,6 +1054,7 @@ EXPORT_SYMBOL(drm_pagemap_put);
> > > > int drm_pagemap_evict_to_ram(struct drm_pagemap_devmem
> > > > *devmem_allocation)
> > > > {
> > > > const struct drm_pagemap_devmem_ops *ops =
> > > > devmem_allocation->ops;
> > > > + struct drm_pagemap_iova_state state = {};
> > > > unsigned long npages, mpages = 0;
> > > > struct page **pages;
> > > > unsigned long *src, *dst;
> > > > @@ -1042,7 +1096,7 @@ int drm_pagemap_evict_to_ram(struct
> > > > drm_pagemap_devmem *devmem_allocation)
> > > > err =
> > > > drm_pagemap_migrate_map_system_pages(devmem_allocation->dev,
> > > > pagemap_addr,
> > > > dst, npages,
> > > > -
> > > > DMA_FROM_DEVICE);
> > > > +
> > > > DMA_FROM_DEVICE,
> > > > &state);
> > > > if (err)
> > > > goto err_finalize;
> > > >
> > > > @@ -1059,7 +1113,7 @@ int drm_pagemap_evict_to_ram(struct
> > > > drm_pagemap_devmem *devmem_allocation)
> > > > migrate_device_pages(src, dst, npages);
> > > > migrate_device_finalize(src, dst, npages);
> > > > drm_pagemap_migrate_unmap_pages(devmem_allocation->dev,
> > > > pagemap_addr, dst, npages,
> > > > - DMA_FROM_DEVICE);
> > > > + DMA_FROM_DEVICE,
> > > > &state);
> > > >
> > > > err_free:
> > > > kvfree(buf);
> > > > @@ -1103,6 +1157,7 @@ static int
> > > > __drm_pagemap_migrate_to_ram(struct
> > > > vm_area_struct *vas,
> > > > MIGRATE_VMA_SELECT_DEVICE_COHERENT,
> > > > .fault_page = page,
> > > > };
> > > > + struct drm_pagemap_iova_state state = {};
> > > > struct drm_pagemap_zdd *zdd;
> > > > const struct drm_pagemap_devmem_ops *ops;
> > > > struct device *dev = NULL;
> > > > @@ -1162,7 +1217,7 @@ static int
> > > > __drm_pagemap_migrate_to_ram(struct
> > > > vm_area_struct *vas,
> > > >
> > > > err = drm_pagemap_migrate_map_system_pages(dev,
> > > > pagemap_addr,
> > > > migrate.dst,
> > > > npages,
> > > > -
> > > > DMA_FROM_DEVICE);
> > > > +
> > > > DMA_FROM_DEVICE,
> > > > &state);
> > > > if (err)
> > > > goto err_finalize;
> > > >
> > > > @@ -1180,7 +1235,8 @@ static int
> > > > __drm_pagemap_migrate_to_ram(struct
> > > > vm_area_struct *vas,
> > > > migrate_vma_finalize(&migrate);
> > > > if (dev)
> > > > drm_pagemap_migrate_unmap_pages(dev,
> > > > pagemap_addr,
> > > > migrate.dst,
> > > > - npages,
> > > > DMA_FROM_DEVICE);
> > > > + npages,
> > > > DMA_FROM_DEVICE,
> > > > + &state);
> > > > err_free:
> > > > kvfree(buf);
> > > > err_out:
* Re: [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
[not found] ` <20260213145646.GO750753@ziepe.ca>
@ 2026-02-13 20:00 ` Matthew Brost
2026-02-16 14:33 ` Thomas Hellström
0 siblings, 1 reply; 20+ messages in thread
From: Matthew Brost @ 2026-02-13 20:00 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Thomas Hellström, intel-xe, dri-devel, leonro,
francois.dugast, himal.prasad.ghimiray
On Fri, Feb 13, 2026 at 10:56:46AM -0400, Jason Gunthorpe wrote:
> On Wed, Feb 11, 2026 at 10:51:32AM -0800, Matthew Brost wrote:
> > > So to be a little nicer on the IOVA allocator we could use the below?
> > >
> > > dma_iova_try_alloc(dev, &state->dma_state,
> > > (npages - i) * PAGE_SIZE >=
> > > HPAGE_PMD_SIZE ?
> > > HPAGE_PMD_SIZE : 0,
> > > (npages - i) * PAGE_SIZE);
> > >
> >
> > Yes, we can do that. No reason to force alignment if our copy code isn't
> > going to try to use 2M GPU pages.
>
> When it comes to this I prefer we try to add alignment information
> down to the iova allocator because I have other use cases for this
> alignment optimization.
Trying to parse this - what exactly is your preference here in the
context of this patch?
i.e., Is original code ok, is Thomas's suggestion ok, or should we do
something entirely different?
Matt
>
> Jason
* Re: [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap
2026-02-13 20:00 ` Matthew Brost
@ 2026-02-16 14:33 ` Thomas Hellström
0 siblings, 0 replies; 20+ messages in thread
From: Thomas Hellström @ 2026-02-16 14:33 UTC (permalink / raw)
To: Matthew Brost, Jason Gunthorpe
Cc: intel-xe, dri-devel, leonro, francois.dugast,
himal.prasad.ghimiray
On Fri, 2026-02-13 at 12:00 -0800, Matthew Brost wrote:
> On Fri, Feb 13, 2026 at 10:56:46AM -0400, Jason Gunthorpe wrote:
> > On Wed, Feb 11, 2026 at 10:51:32AM -0800, Matthew Brost wrote:
> > > > So to be a little nicer on the IOVA allocator we could use the
> > > > below?
> > > >
> > > > 	dma_iova_try_alloc(dev, &state->dma_state,
> > > > 			   (npages - i) * PAGE_SIZE >= HPAGE_PMD_SIZE ?
> > > > 			   HPAGE_PMD_SIZE : 0,
> > > > 			   (npages - i) * PAGE_SIZE);
> > > >
> > >
> > > Yes, we can do that. No reason to force alignment if our copy code
> > > isn't going to try to use 2M GPU pages.
> >
> > When it comes to this I prefer we try to add alignment information
> > down to the iova allocator because I have other use cases for this
> > alignment optimization.
>
> Trying to parse this - what exactly is your preference here in the
> context of this patch?
>
> i.e., Is original code ok, is Thomas's suggestion ok, or should we do
> something entirely different?
>
> Matt
Interpreting this as: Jason would want an alignment parameter to the
IOVA allocator.
Although that's already the case, albeit somewhat awkwardly named.
Thanks,
Thomas
>
> >
> > Jason
Thread overview: 20+ messages
2026-02-05 4:19 [PATCH v4 0/4] Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap Matthew Brost
2026-02-05 4:19 ` [PATCH v4 1/4] drm/pagemap: Add helper to access zone_device_data Matthew Brost
2026-02-05 4:19 ` [PATCH v4 2/4] drm/gpusvm: Use dma-map IOVA alloc, link, and sync API in GPU SVM Matthew Brost
2026-02-09 9:44 ` Thomas Hellström
2026-02-09 16:13 ` Matthew Brost
2026-02-09 16:41 ` Thomas Hellström
2026-02-05 4:19 ` [PATCH v4 3/4] drm/pagemap: Split drm_pagemap_migrate_map_pages into device / system Matthew Brost
2026-02-09 15:49 ` Thomas Hellström
2026-02-09 16:58 ` Matthew Brost
2026-02-09 17:09 ` Thomas Hellström
2026-02-05 4:19 ` [PATCH v4 4/4] drm/pagemap: Use dma-map IOVA alloc, link, and sync API for DRM pagemap Matthew Brost
2026-02-11 11:34 ` Thomas Hellström
2026-02-11 15:37 ` Matthew Brost
2026-02-11 18:48 ` Thomas Hellström
2026-02-11 18:51 ` Matthew Brost
[not found] ` <20260213145646.GO750753@ziepe.ca>
2026-02-13 20:00 ` Matthew Brost
2026-02-16 14:33 ` Thomas Hellström
2026-02-05 6:24 ` ✓ CI.KUnit: success for Use new dma-map IOVA alloc, link, and sync API in GPU SVM and DRM pagemap (rev4) Patchwork
2026-02-05 7:38 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-06 1:06 ` ✗ Xe.CI.FULL: failure " Patchwork