public inbox for intel-gfx@lists.freedesktop.org
* [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling
@ 2026-02-09  8:30 Arunpravin Paneer Selvam
  2026-02-09  8:30 ` [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations Arunpravin Paneer Selvam
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Arunpravin Paneer Selvam @ 2026-02-09  8:30 UTC (permalink / raw)
  To: matthew.auld, christian.koenig, dri-devel, intel-gfx, intel-xe,
	amd-gfx
  Cc: alexander.deucher, Arunpravin Paneer Selvam

Previously, large alignment requests forced the buddy allocator to search at
the alignment's order, which often caused higher-order free blocks to be split
even when a suitably aligned smaller region already existed within them. This
led to excessive fragmentation, especially for workloads that request small
sizes with large alignment constraints.

This change prioritizes the requested allocation size during the search and
uses an augmented RB-tree field (subtree_max_alignment) to efficiently locate
free blocks that satisfy both size and offset-alignment requirements. As a
result, the allocator can directly select an aligned sub-region without
splitting larger blocks unnecessarily.

A practical example is the VKCTS test
dEQP-VK.memory.allocation.basic.size_8KiB.reverse.count_4000, which repeatedly
allocates 8 KiB buffers with a 256 KiB alignment. Previously, such allocations
caused large blocks to be split aggressively, despite smaller aligned regions
being sufficient. With this change, those aligned regions are reused directly,
significantly reducing fragmentation.

This improvement is visible in the amdgpu VRAM buddy allocator state
(/sys/kernel/debug/dri/1/amdgpu_vram_mm). After the change, higher-order blocks
are preserved and the number of low-order fragments is substantially reduced.

Before:
  order- 5 free: 1936 MiB, blocks: 15490
  order- 4 free:  967 MiB, blocks: 15486
  order- 3 free:  483 MiB, blocks: 15485
  order- 2 free:  241 MiB, blocks: 15486
  order- 1 free:  241 MiB, blocks: 30948

After:
  order- 5 free:  493 MiB, blocks:  3941
  order- 4 free:  246 MiB, blocks:  3943
  order- 3 free:  123 MiB, blocks:  4101
  order- 2 free:   61 MiB, blocks:  4101
  order- 1 free:   61 MiB, blocks:  8018

By avoiding unnecessary splits, this change improves allocator efficiency and
helps maintain larger contiguous free regions under heavy offset-aligned
allocation workloads.

v2: (Matthew)
  - Update the augmented information along the path to the inserted node.

v3:
  - Move the implementation to the gpu/buddy.c file.

Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Suggested-by: Christian König <christian.koenig@amd.com>
---
 drivers/gpu/buddy.c       | 271 +++++++++++++++++++++++++++++++-------
 include/linux/gpu_buddy.h |   2 +
 2 files changed, 228 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/buddy.c b/drivers/gpu/buddy.c
index 603c59a2013a..3a25eed050ba 100644
--- a/drivers/gpu/buddy.c
+++ b/drivers/gpu/buddy.c
@@ -14,6 +14,16 @@
 
 static struct kmem_cache *slab_blocks;
 
+static unsigned int gpu_buddy_block_offset_alignment(struct gpu_buddy_block *block)
+{
+	return __ffs(gpu_buddy_block_offset(block));
+}
+
+RB_DECLARE_CALLBACKS_MAX(static, gpu_buddy_augment_cb,
+			 struct gpu_buddy_block, rb,
+			 unsigned int, subtree_max_alignment,
+			 gpu_buddy_block_offset_alignment);
+
 static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
 					       struct gpu_buddy_block *parent,
 					       unsigned int order,
@@ -31,6 +41,9 @@ static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
 	block->header |= order;
 	block->parent = parent;
 
+	block->subtree_max_alignment =
+		gpu_buddy_block_offset_alignment(block);
+
 	RB_CLEAR_NODE(&block->rb);
 
 	BUG_ON(block->header & GPU_BUDDY_HEADER_UNUSED);
@@ -67,26 +80,42 @@ static bool rbtree_is_empty(struct rb_root *root)
 	return RB_EMPTY_ROOT(root);
 }
 
-static bool gpu_buddy_block_offset_less(const struct gpu_buddy_block *block,
-					const struct gpu_buddy_block *node)
-{
-	return gpu_buddy_block_offset(block) < gpu_buddy_block_offset(node);
-}
-
-static bool rbtree_block_offset_less(struct rb_node *block,
-				     const struct rb_node *node)
-{
-	return gpu_buddy_block_offset_less(rbtree_get_free_block(block),
-					   rbtree_get_free_block(node));
-}
-
 static void rbtree_insert(struct gpu_buddy *mm,
 			  struct gpu_buddy_block *block,
 			  enum gpu_buddy_free_tree tree)
 {
-	rb_add(&block->rb,
-	       &mm->free_trees[tree][gpu_buddy_block_order(block)],
-	       rbtree_block_offset_less);
+	struct rb_node **link, *parent = NULL;
+	unsigned int block_alignment, order;
+	struct gpu_buddy_block *node;
+	struct rb_root *root;
+
+	order = gpu_buddy_block_order(block);
+	block_alignment = gpu_buddy_block_offset_alignment(block);
+
+	root = &mm->free_trees[tree][order];
+	link = &root->rb_node;
+
+	while (*link) {
+		parent = *link;
+		node = rbtree_get_free_block(parent);
+		/*
+		 * Manual augmentation update during the insertion traversal,
+		 * required because rb_insert_augmented() only invokes the rotate
+		 * callback on rotations. This ensures every ancestor on the
+		 * insertion path holds a correct subtree_max_alignment value.
+		 */
+		if (node->subtree_max_alignment < block_alignment)
+			node->subtree_max_alignment = block_alignment;
+
+		if (gpu_buddy_block_offset(block) < gpu_buddy_block_offset(node))
+			link = &parent->rb_left;
+		else
+			link = &parent->rb_right;
+	}
+
+	block->subtree_max_alignment = block_alignment;
+	rb_link_node(&block->rb, parent, link);
+	rb_insert_augmented(&block->rb, root, &gpu_buddy_augment_cb);
 }
 
 static void rbtree_remove(struct gpu_buddy *mm,
@@ -99,7 +128,7 @@ static void rbtree_remove(struct gpu_buddy *mm,
 	tree = get_block_tree(block);
 	root = &mm->free_trees[tree][order];
 
-	rb_erase(&block->rb, root);
+	rb_erase_augmented(&block->rb, root, &gpu_buddy_augment_cb);
 	RB_CLEAR_NODE(&block->rb);
 }
 
@@ -790,6 +819,132 @@ alloc_from_freetree(struct gpu_buddy *mm,
 	return ERR_PTR(err);
 }
 
+static bool
+gpu_buddy_can_offset_align(u64 size, u64 min_block_size)
+{
+	return size < min_block_size && is_power_of_2(size);
+}
+
+static bool gpu_buddy_subtree_can_satisfy(struct rb_node *node,
+					  unsigned int alignment)
+{
+	struct gpu_buddy_block *block;
+
+	if (!node)
+		return false;
+
+	block = rbtree_get_free_block(node);
+	return block->subtree_max_alignment >= alignment;
+}
+
+static struct gpu_buddy_block *
+gpu_buddy_find_block_aligned(struct gpu_buddy *mm,
+			     enum gpu_buddy_free_tree tree,
+			     unsigned int order,
+			     unsigned int tmp,
+			     unsigned int alignment,
+			     unsigned long flags)
+{
+	struct rb_root *root = &mm->free_trees[tree][tmp];
+	struct rb_node *rb = root->rb_node;
+
+	while (rb) {
+		struct gpu_buddy_block *block = rbtree_get_free_block(rb);
+		struct rb_node *left_node = rb->rb_left, *right_node = rb->rb_right;
+
+		if (right_node) {
+			if (gpu_buddy_subtree_can_satisfy(right_node, alignment)) {
+				rb = right_node;
+				continue;
+			}
+		}
+
+		if (gpu_buddy_block_order(block) >= order &&
+		    __ffs(gpu_buddy_block_offset(block)) >= alignment)
+			return block;
+
+		if (left_node) {
+			if (gpu_buddy_subtree_can_satisfy(left_node, alignment)) {
+				rb = left_node;
+				continue;
+			}
+		}
+
+		break;
+	}
+
+	return NULL;
+}
+
+static struct gpu_buddy_block *
+gpu_buddy_offset_aligned_allocation(struct gpu_buddy *mm,
+				    u64 size,
+				    u64 min_block_size,
+				    unsigned long flags)
+{
+	struct gpu_buddy_block *block = NULL;
+	unsigned int order, tmp, alignment;
+	struct gpu_buddy_block *buddy;
+	enum gpu_buddy_free_tree tree;
+	unsigned long pages;
+	int err;
+
+	alignment = ilog2(min_block_size);
+	pages = size >> ilog2(mm->chunk_size);
+	order = fls(pages) - 1;
+
+	tree = (flags & GPU_BUDDY_CLEAR_ALLOCATION) ?
+		GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
+
+	for (tmp = order; tmp <= mm->max_order; ++tmp) {
+		block = gpu_buddy_find_block_aligned(mm, tree, order,
+						     tmp, alignment, flags);
+		if (!block) {
+			tree = (tree == GPU_BUDDY_CLEAR_TREE) ?
+				GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
+			block = gpu_buddy_find_block_aligned(mm, tree, order,
+							     tmp, alignment, flags);
+		}
+
+		if (block)
+			break;
+	}
+
+	if (!block)
+		return ERR_PTR(-ENOSPC);
+
+	while (gpu_buddy_block_order(block) > order) {
+		struct gpu_buddy_block *left, *right;
+
+		err = split_block(mm, block);
+		if (unlikely(err))
+			goto err_undo;
+
+		left  = block->left;
+		right = block->right;
+
+		if (__ffs(gpu_buddy_block_offset(right)) >= alignment)
+			block = right;
+		else
+			block = left;
+	}
+
+	return block;
+
+err_undo:
+	/*
+	 * We really don't want to leave around a bunch of split blocks, since
+	 * bigger is better, so make sure we merge everything back before we
+	 * free the allocated blocks.
+	 */
+	buddy = __get_buddy(block);
+	if (buddy &&
+	    (gpu_buddy_block_is_free(block) &&
+	     gpu_buddy_block_is_free(buddy)))
+		__gpu_buddy_free(mm, block, false);
+	return ERR_PTR(err);
+}
+
 static int __alloc_range(struct gpu_buddy *mm,
 			 struct list_head *dfs,
 			 u64 start, u64 size,
@@ -1059,6 +1214,7 @@ EXPORT_SYMBOL(gpu_buddy_block_trim);
 static struct gpu_buddy_block *
 __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
 			 u64 start, u64 end,
+			 u64 size, u64 min_block_size,
 			 unsigned int order,
 			 unsigned long flags)
 {
@@ -1066,6 +1222,11 @@ __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
 		/* Allocate traversing within the range */
 		return  __gpu_buddy_alloc_range_bias(mm, start, end,
 						     order, flags);
+	else if (size < min_block_size)
+		/* Allocate from an offset-aligned region without size rounding */
+		return gpu_buddy_offset_aligned_allocation(mm, size,
+							   min_block_size,
+							   flags);
 	else
 		/* Allocate from freetree */
 		return alloc_from_freetree(mm, order, flags);
@@ -1137,8 +1298,11 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
 	if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) {
 		size = roundup_pow_of_two(size);
 		min_block_size = size;
-	/* Align size value to min_block_size */
-	} else if (!IS_ALIGNED(size, min_block_size)) {
+		/*
+		 * Normalize the requested size to min_block_size for regular allocations.
+		 * Offset-aligned allocations intentionally skip size rounding.
+		 */
+	} else if (!gpu_buddy_can_offset_align(size, min_block_size)) {
 		size = round_up(size, min_block_size);
 	}
 
@@ -1158,43 +1322,60 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
 	do {
 		order = min(order, (unsigned int)fls(pages) - 1);
 		BUG_ON(order > mm->max_order);
-		BUG_ON(order < min_order);
+		/*
+		 * Regular allocations must not allocate blocks smaller than min_block_size.
+		 * Offset-aligned allocations deliberately bypass this constraint.
+		 */
+		BUG_ON(size >= min_block_size && order < min_order);
 
 		do {
+			unsigned int fallback_order;
+
 			block = __gpu_buddy_alloc_blocks(mm, start,
 							 end,
+							 size,
+							 min_block_size,
 							 order,
 							 flags);
 			if (!IS_ERR(block))
 				break;
 
-			if (order-- == min_order) {
-				/* Try allocation through force merge method */
-				if (mm->clear_avail &&
-				    !__force_merge(mm, start, end, min_order)) {
-					block = __gpu_buddy_alloc_blocks(mm, start,
-									 end,
-									 min_order,
-									 flags);
-					if (!IS_ERR(block)) {
-						order = min_order;
-						break;
-					}
-				}
+			if (size < min_block_size) {
+				fallback_order = order;
+			} else if (order == min_order) {
+				fallback_order = min_order;
+			} else {
+				order--;
+				continue;
+			}
 
-				/*
-				 * Try contiguous block allocation through
-				 * try harder method.
-				 */
-				if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
-				    !(flags & GPU_BUDDY_RANGE_ALLOCATION))
-					return __alloc_contig_try_harder(mm,
-									 original_size,
-									 original_min_size,
-									 blocks);
-				err = -ENOSPC;
-				goto err_free;
+			/* Try allocation through force merge method */
+			if (mm->clear_avail &&
+			    !__force_merge(mm, start, end, fallback_order)) {
+				block = __gpu_buddy_alloc_blocks(mm, start,
+								 end,
+								 size,
+								 min_block_size,
+								 fallback_order,
+								 flags);
+				if (!IS_ERR(block)) {
+					order = fallback_order;
+					break;
+				}
 			}
+
+			/*
+			 * Try contiguous block allocation through
+			 * try harder method.
+			 */
+			if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
+			    !(flags & GPU_BUDDY_RANGE_ALLOCATION))
+				return __alloc_contig_try_harder(mm,
+								 original_size,
+								 original_min_size,
+								 blocks);
+			err = -ENOSPC;
+			goto err_free;
 		} while (1);
 
 		mark_allocated(mm, block);
diff --git a/include/linux/gpu_buddy.h b/include/linux/gpu_buddy.h
index 07ac65db6d2e..7ad817c69ec6 100644
--- a/include/linux/gpu_buddy.h
+++ b/include/linux/gpu_buddy.h
@@ -11,6 +11,7 @@
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/rbtree.h>
+#include <linux/rbtree_augmented.h>
 
 #define GPU_BUDDY_RANGE_ALLOCATION		BIT(0)
 #define GPU_BUDDY_TOPDOWN_ALLOCATION		BIT(1)
@@ -58,6 +59,7 @@ struct gpu_buddy_block {
 	};
 
 	struct list_head tmp_link;
+	unsigned int subtree_max_alignment;
 };
 
 /* Order-zero must be at least SZ_4K */

base-commit: 9d757669b2b22cd224c334924f798393ffca537c
-- 
2.34.1



* [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations
  2026-02-09  8:30 [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling Arunpravin Paneer Selvam
@ 2026-02-09  8:30 ` Arunpravin Paneer Selvam
  2026-02-09 19:23   ` kernel test robot
                     ` (2 more replies)
  2026-02-09  9:46 ` ✓ i915.CI.BAT: success for series starting with [v3,1/2] drm/buddy: Improve offset-aligned allocation handling Patchwork
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 11+ messages in thread
From: Arunpravin Paneer Selvam @ 2026-02-09  8:30 UTC (permalink / raw)
  To: matthew.auld, christian.koenig, dri-devel, intel-gfx, intel-xe,
	amd-gfx
  Cc: alexander.deucher, Arunpravin Paneer Selvam

Add KUnit test to validate offset-aligned allocations in the DRM buddy
allocator.

Validate offset-aligned allocation:
The test covers allocations with sizes smaller than the alignment constraint
and verifies correct size preservation, offset alignment, and behavior across
multiple allocation sizes. It also exercises fragmentation by freeing
alternating blocks and confirms that allocation fails once all aligned offsets
are consumed.

Stress subtree_max_alignment propagation:
Exercise subtree_max_alignment tracking by allocating blocks with descending
alignment constraints and freeing them in reverse order. This verifies that
free-tree augmentation correctly propagates the maximum offset alignment
present in each subtree at every stage.

v2:
  - Move the test to the gpu/tests/gpu_buddy_test.c file.

Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
---
 drivers/gpu/tests/gpu_buddy_test.c | 166 +++++++++++++++++++++++++++++
 1 file changed, 166 insertions(+)

diff --git a/drivers/gpu/tests/gpu_buddy_test.c b/drivers/gpu/tests/gpu_buddy_test.c
index 450e71deed90..37f22655b5fb 100644
--- a/drivers/gpu/tests/gpu_buddy_test.c
+++ b/drivers/gpu/tests/gpu_buddy_test.c
@@ -21,6 +21,170 @@ static inline u64 get_size(int order, u64 chunk_size)
 	return (1 << order) * chunk_size;
 }
 
+static void gpu_test_buddy_subtree_offset_alignment_stress(struct kunit *test)
+{
+	struct gpu_buddy_block *block;
+	struct rb_node *node = NULL;
+	const u64 mm_size = SZ_2M;
+	const u64 alignments[] = {
+		SZ_1M,
+		SZ_512K,
+		SZ_256K,
+		SZ_128K,
+		SZ_64K,
+		SZ_32K,
+		SZ_16K,
+		SZ_8K,
+	};
+
+	struct list_head allocated[ARRAY_SIZE(alignments)];
+	unsigned int i, order, max_subtree_align = 0;
+	struct gpu_buddy mm;
+	int ret, tree;
+
+	KUNIT_ASSERT_FALSE(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
+			   "buddy_init failed\n");
+
+	for (i = 0; i < ARRAY_SIZE(allocated); i++)
+		INIT_LIST_HEAD(&allocated[i]);
+
+	/*
+	 * Exercise subtree_max_alignment tracking by allocating blocks with descending
+	 * alignment constraints and freeing them in reverse order. This verifies that
+	 * free-tree augmentation correctly propagates the maximum offset alignment
+	 * present in each subtree at every stage.
+	 */
+
+	for (i = 0; i < ARRAY_SIZE(alignments); i++) {
+		struct gpu_buddy_block *root = NULL;
+		unsigned int expected;
+		u64 align;
+
+		align = alignments[i];
+		expected = ilog2(align) - 1;
+
+		for (;;) {
+			ret = gpu_buddy_alloc_blocks(&mm,
+						     0, mm_size,
+						     SZ_4K, align,
+						     &allocated[i],
+						     0);
+			if (ret)
+				break;
+
+			block = list_last_entry(&allocated[i],
+						struct gpu_buddy_block,
+						link);
+			KUNIT_EXPECT_EQ(test, gpu_buddy_block_offset(block) & (align - 1), 0ULL);
+		}
+
+		for (order = mm.max_order + 1; order-- > 0 && !root; ) {
+			for (tree = 0; tree < 2; tree++) {
+				node = mm.free_trees[tree][order].rb_node;
+				if (node) {
+					root = container_of(node,
+							    struct gpu_buddy_block,
+							    rb);
+					break;
+				}
+			}
+		}
+
+		KUNIT_ASSERT_NOT_NULL(test, root);
+		KUNIT_EXPECT_EQ(test, root->subtree_max_alignment, expected);
+	}
+
+	for (i = ARRAY_SIZE(alignments); i-- > 0; ) {
+		gpu_buddy_free_list(&mm, &allocated[i], 0);
+
+		for (order = 0; order <= mm.max_order; order++) {
+			for (tree = 0; tree < 2; tree++) {
+				node = mm.free_trees[tree][order].rb_node;
+				if (!node)
+					continue;
+
+				block = container_of(node, struct gpu_buddy_block, rb);
+				max_subtree_align = max(max_subtree_align, block->subtree_max_alignment);
+			}
+		}
+
+		KUNIT_EXPECT_GE(test, max_subtree_align, ilog2(alignments[i]));
+	}
+
+	gpu_buddy_fini(&mm);
+}
+
+static void gpu_test_buddy_offset_aligned_allocation(struct kunit *test)
+{
+	struct gpu_buddy_block *block, *tmp;
+	int num_blocks, i, count = 0;
+	LIST_HEAD(allocated);
+	struct gpu_buddy mm;
+	u64 mm_size = SZ_4M;
+	LIST_HEAD(freed);
+
+	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
+			       "buddy_init failed\n");
+
+	num_blocks = mm_size / SZ_256K;
+	/*
+	 * Allocate multiple sizes under a fixed offset alignment.
+	 * Ensures alignment handling is independent of allocation size and
+	 * exercises subtree max-alignment pruning for small requests.
+	 */
+	for (i = 0; i < num_blocks; i++)
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_256K,
+								    &allocated, 0),
+					"buddy_alloc hit an error size=%u\n", SZ_8K);
+
+	list_for_each_entry(block, &allocated, link) {
+		/* Ensure the allocated block uses the expected 8 KB size */
+		KUNIT_EXPECT_EQ(test, gpu_buddy_block_size(&mm, block), SZ_8K);
+		/* Ensure the block starts at a 256 KB-aligned offset for proper alignment */
+		KUNIT_EXPECT_EQ(test, gpu_buddy_block_offset(block) & (SZ_256K - 1), 0ULL);
+	}
+	gpu_buddy_free_list(&mm, &allocated, 0);
+
+	for (i = 0; i < num_blocks; i++)
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_16K, SZ_256K,
+								    &allocated, 0),
+					"buddy_alloc hit an error size=%u\n", SZ_16K);
+
+	list_for_each_entry(block, &allocated, link) {
+		/* Ensure the allocated block uses the expected 16 KB size */
+		KUNIT_EXPECT_EQ(test, gpu_buddy_block_size(&mm, block), SZ_16K);
+		/* Ensure the block starts at a 256 KB-aligned offset for proper alignment */
+		KUNIT_EXPECT_EQ(test, gpu_buddy_block_offset(block) & (SZ_256K - 1), 0ULL);
+	}
+
+	/*
+	 * Free alternating aligned blocks to introduce fragmentation.
+	 * Ensures offset-aligned allocations remain valid after frees and
+	 * verifies subtree max-alignment metadata is correctly maintained.
+	 */
+	list_for_each_entry_safe(block, tmp, &allocated, link) {
+		if (count % 2 == 0)
+			list_move_tail(&block->link, &freed);
+		count++;
+	}
+	gpu_buddy_free_list(&mm, &freed, 0);
+
+	for (i = 0; i < num_blocks / 2; i++)
+		KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_16K, SZ_256K,
+								    &allocated, 0),
+					"buddy_alloc hit an error size=%u\n", SZ_16K);
+
+	/*
+	 * Allocate with offset alignment after all slots are used; must fail.
+	 * Confirms that no aligned offsets remain.
+	 */
+	KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_16K, SZ_256K,
+							   &allocated, 0),
+			       "buddy_alloc hit an error size=%u\n", SZ_16K);
+	gpu_buddy_free_list(&mm, &allocated, 0);
+	gpu_buddy_fini(&mm);
+}
+
 static void gpu_test_buddy_fragmentation_performance(struct kunit *test)
 {
 	struct gpu_buddy_block *block, *tmp;
@@ -912,6 +1076,8 @@ static struct kunit_case gpu_buddy_tests[] = {
 	KUNIT_CASE(gpu_test_buddy_alloc_range_bias),
 	KUNIT_CASE(gpu_test_buddy_fragmentation_performance),
 	KUNIT_CASE(gpu_test_buddy_alloc_exceeds_max_order),
+	KUNIT_CASE(gpu_test_buddy_offset_aligned_allocation),
+	KUNIT_CASE(gpu_test_buddy_subtree_offset_alignment_stress),
 	{}
 };
 
-- 
2.34.1



* ✓ i915.CI.BAT: success for series starting with [v3,1/2] drm/buddy: Improve offset-aligned allocation handling
  2026-02-09  8:30 [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling Arunpravin Paneer Selvam
  2026-02-09  8:30 ` [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations Arunpravin Paneer Selvam
@ 2026-02-09  9:46 ` Patchwork
  2026-02-09 13:22 ` ✗ i915.CI.Full: failure " Patchwork
  2026-02-10 16:26 ` [PATCH v3 1/2] " Matthew Auld
  3 siblings, 0 replies; 11+ messages in thread
From: Patchwork @ 2026-02-09  9:46 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam; +Cc: intel-gfx


== Series Details ==

Series: series starting with [v3,1/2] drm/buddy: Improve offset-aligned allocation handling
URL   : https://patchwork.freedesktop.org/series/161339/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_17957 -> Patchwork_161339v1
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/index.html

Participating hosts (42 -> 40)
------------------------------

  Missing    (2): bat-dg2-13 fi-snb-2520m 

Known issues
------------

  Here are the changes found in Patchwork_161339v1 that come from known issues:

### IGT changes ###

#### Possible fixes ####

  * igt@i915_selftest@live@workarounds:
    - bat-dg2-9:          [DMESG-FAIL][1] ([i915#12061]) -> [PASS][2] +1 other test pass
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/bat-dg2-9/igt@i915_selftest@live@workarounds.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/bat-dg2-9/igt@i915_selftest@live@workarounds.html
    - bat-dg2-14:         [DMESG-FAIL][3] ([i915#12061]) -> [PASS][4] +1 other test pass
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/bat-dg2-14/igt@i915_selftest@live@workarounds.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/bat-dg2-14/igt@i915_selftest@live@workarounds.html

  * igt@kms_hdmi_inject@inject-audio:
    - fi-tgl-1115g4:      [FAIL][5] ([i915#14867]) -> [PASS][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/fi-tgl-1115g4/igt@kms_hdmi_inject@inject-audio.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/fi-tgl-1115g4/igt@kms_hdmi_inject@inject-audio.html

  
  [i915#12061]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12061
  [i915#14867]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14867


Build changes
-------------

  * Linux: CI_DRM_17957 -> Patchwork_161339v1

  CI-20190529: 20190529
  CI_DRM_17957: 9ddce2e2e1c2891bc26ea8648b2ba530b73937fe @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_8744: 8744
  Patchwork_161339v1: 9ddce2e2e1c2891bc26ea8648b2ba530b73937fe @ git://anongit.freedesktop.org/gfx-ci/linux

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/index.html



* ✗ i915.CI.Full: failure for series starting with [v3,1/2] drm/buddy: Improve offset-aligned allocation handling
  2026-02-09  8:30 [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling Arunpravin Paneer Selvam
  2026-02-09  8:30 ` [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations Arunpravin Paneer Selvam
  2026-02-09  9:46 ` ✓ i915.CI.BAT: success for series starting with [v3,1/2] drm/buddy: Improve offset-aligned allocation handling Patchwork
@ 2026-02-09 13:22 ` Patchwork
  2026-02-10 16:26 ` [PATCH v3 1/2] " Matthew Auld
  3 siblings, 0 replies; 11+ messages in thread
From: Patchwork @ 2026-02-09 13:22 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam; +Cc: intel-gfx


== Series Details ==

Series: series starting with [v3,1/2] drm/buddy: Improve offset-aligned allocation handling
URL   : https://patchwork.freedesktop.org/series/161339/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_17957_full -> Patchwork_161339v1_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_161339v1_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_161339v1_full, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (10 -> 10)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_161339v1_full:

### IGT changes ###

#### Possible regressions ####

  * igt@gem_exec_fence@parallel@bcs0:
    - shard-dg1:          [PASS][1] -> [DMESG-WARN][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg1-15/igt@gem_exec_fence@parallel@bcs0.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@gem_exec_fence@parallel@bcs0.html

  * igt@gem_exec_fence@parallel@vecs0:
    - shard-dg1:          [PASS][3] -> [ABORT][4]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg1-15/igt@gem_exec_fence@parallel@vecs0.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@gem_exec_fence@parallel@vecs0.html

  * igt@kms_hdmi_inject@inject-audio:
    - shard-mtlp:         [PASS][5] -> [SKIP][6]
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-mtlp-4/igt@kms_hdmi_inject@inject-audio.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-mtlp-1/igt@kms_hdmi_inject@inject-audio.html

  
New tests
---------

  New tests have been introduced between CI_DRM_17957_full and Patchwork_161339v1_full:

### New IGT tests (53) ###

  * igt@gem_exec_fence@2x-absolute-wf_vblank-interruptible:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@banned:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@basic-each:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@basic-each@vecs0:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@binary-wait-before-signal:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@coherency-gtt:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@create-ext-set-pat:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@cursor-offscreen-max-size:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@display:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@dmabuf-export:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@etime-multi-wait-for-submit-unsubmitted:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@fbc-1p-primscrn-spr-indfb-fullscreen:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@fbc-2p-primscrn-cur-indfb-draw-render:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@fbc-2p-primscrn-cur-indfb-onoff:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@fbc-2p-scndscrn-pri-shrfb-draw-mmap-cpu:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@fbc-2p-scndscrn-spr-indfb-draw-render:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@fbcpsr-1p-offscreen-pri-indfb-draw-pwrite:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@fbcpsr-2p-scndscrn-pri-shrfb-draw-mmap-gtt:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@fbcpsr-2p-scndscrn-shrfb-plflip-blt:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@flip-vs-cursor-crc-legacy:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@invalid-oa-exponent:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@lease-invalid-crtc:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@map-fixed-invalidate-overlap-busy:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@memory-info-idle:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@multi-wait-for-submit-unsubmitted:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@multi-wait-for-submit-unsubmitted@vcs1:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@multi-wait-signaled:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@open-flood:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@planes-upscale-20x20:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@psr-1p-primscrn-cur-indfb-draw-blt:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@psr-1p-primscrn-spr-indfb-draw-render:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@psr-farfromfence-mmap-gtt:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@psr-rgb565-draw-mmap-wc:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@random-ccs-data-y-tiled-gen12-rc-ccs-cc:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@saturated-hostile:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@stress-xrgb8888-4tiled:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@ts-continuation-dpms-suspend:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@unused-modifier:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@unused-pitches:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@viewport:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@viewport@vcs0:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@wait-all-complex:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@yf-tiled-32bpp-rotate-90:
    - Statuses :
    - Exec time: [None] s

  * igt@gem_exec_fence@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
    - Statuses :
    - Exec time: [None] s

  * igt@i915_pm_rpm@4-tiled-8bpp-rotate-180:
    - Statuses :
    - Exec time: [None] s

  * igt@i915_pm_rpm@bad-pitch-32:
    - Statuses :
    - Exec time: [None] s

  * igt@i915_pm_rpm@compare-crc-sanitycheck-nv12:
    - Statuses :
    - Exec time: [None] s

  * igt@i915_pm_rpm@display-3x:
    - Statuses :
    - Exec time: [None] s

  * igt@i915_pm_rpm@fbc-2p-primscrn-pri-indfb-draw-pwrite:
    - Statuses :
    - Exec time: [None] s

  * igt@i915_pm_rpm@lessee-list:
    - Statuses :
    - Exec time: [None] s

  * igt@i915_pm_rpm@linear:
    - Statuses :
    - Exec time: [None] s

  * igt@i915_pm_rpm@planes-upscale-20x20-downscale-factor-0-75:
    - Statuses :
    - Exec time: [None] s

  * igt@i915_pm_rpm@semaphore-resolve:
    - Statuses :
    - Exec time: [None] s


Known issues
------------

  Here are the changes found in Patchwork_161339v1_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@api_intel_bb@crc32:
    - shard-rkl:          NOTRUN -> [SKIP][7] ([i915#6230])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@api_intel_bb@crc32.html

  * igt@drm_buddy@drm_buddy:
    - shard-rkl:          NOTRUN -> [SKIP][8] ([i915#15678])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@drm_buddy@drm_buddy.html

  * igt@gem_basic@multigpu-create-close:
    - shard-rkl:          NOTRUN -> [SKIP][9] ([i915#7697])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@gem_basic@multigpu-create-close.html

  * igt@gem_ccs@large-ctrl-surf-copy:
    - shard-tglu:         NOTRUN -> [SKIP][10] ([i915#13008])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@gem_ccs@large-ctrl-surf-copy.html

  * igt@gem_ccs@suspend-resume:
    - shard-dg2:          [PASS][11] -> [INCOMPLETE][12] ([i915#13356])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-8/igt@gem_ccs@suspend-resume.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-3/igt@gem_ccs@suspend-resume.html
    - shard-rkl:          NOTRUN -> [SKIP][13] ([i915#9323]) +1 other test skip
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@gem_ccs@suspend-resume.html

  * igt@gem_ccs@suspend-resume@xmajor-compressed-compfmt0-smem-lmem0:
    - shard-dg2:          [PASS][14] -> [INCOMPLETE][15] ([i915#12392] / [i915#13356])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-8/igt@gem_ccs@suspend-resume@xmajor-compressed-compfmt0-smem-lmem0.html
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-3/igt@gem_ccs@suspend-resume@xmajor-compressed-compfmt0-smem-lmem0.html

  * igt@gem_close_race@multigpu-basic-threads:
    - shard-tglu-1:       NOTRUN -> [SKIP][16] ([i915#7697])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@gem_close_race@multigpu-basic-threads.html

  * igt@gem_ctx_sseu@engines:
    - shard-tglu:         NOTRUN -> [SKIP][17] ([i915#280])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@gem_ctx_sseu@engines.html

  * igt@gem_exec_balancer@parallel-balancer:
    - shard-rkl:          NOTRUN -> [SKIP][18] ([i915#4525])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@gem_exec_balancer@parallel-balancer.html

  * igt@gem_exec_balancer@parallel-out-fence:
    - shard-tglu:         NOTRUN -> [SKIP][19] ([i915#4525]) +1 other test skip
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@gem_exec_balancer@parallel-out-fence.html

  * igt@gem_exec_fence@parallel:
    - shard-dg1:          [PASS][20] -> [ABORT][21] ([i915#13562])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg1-15/igt@gem_exec_fence@parallel.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@gem_exec_fence@parallel.html

  * igt@gem_exec_fence@parallel@rcs0:
    - shard-dg1:          [PASS][22] -> [DMESG-WARN][23] ([i915#13562])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg1-15/igt@gem_exec_fence@parallel@rcs0.html
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@gem_exec_fence@parallel@rcs0.html

  * igt@gem_exec_reloc@basic-gtt-wc-noreloc:
    - shard-rkl:          NOTRUN -> [SKIP][24] ([i915#3281]) +9 other tests skip
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@gem_exec_reloc@basic-gtt-wc-noreloc.html

  * igt@gem_exec_suspend@basic-s3:
    - shard-rkl:          [PASS][25] -> [INCOMPLETE][26] ([i915#13356]) +1 other test incomplete
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@gem_exec_suspend@basic-s3.html
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gem_exec_suspend@basic-s3.html

  * igt@gem_lmem_evict@dontneed-evict-race:
    - shard-rkl:          NOTRUN -> [SKIP][27] ([i915#4613] / [i915#7582])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@gem_lmem_evict@dontneed-evict-race.html

  * igt@gem_lmem_swapping@heavy-random:
    - shard-glk:          NOTRUN -> [SKIP][28] ([i915#4613]) +2 other tests skip
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk3/igt@gem_lmem_swapping@heavy-random.html

  * igt@gem_lmem_swapping@massive-random:
    - shard-rkl:          NOTRUN -> [SKIP][29] ([i915#4613]) +2 other tests skip
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@gem_lmem_swapping@massive-random.html

  * igt@gem_lmem_swapping@smem-oom:
    - shard-tglu:         NOTRUN -> [SKIP][30] ([i915#4613]) +3 other tests skip
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@gem_lmem_swapping@smem-oom.html

  * igt@gem_mmap_wc@bad-offset:
    - shard-dg1:          NOTRUN -> [SKIP][31] ([i915#4083])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@gem_mmap_wc@bad-offset.html

  * igt@gem_partial_pwrite_pread@reads-snoop:
    - shard-dg1:          NOTRUN -> [SKIP][32] ([i915#3282])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@gem_partial_pwrite_pread@reads-snoop.html

  * igt@gem_partial_pwrite_pread@writes-after-reads:
    - shard-rkl:          NOTRUN -> [SKIP][33] ([i915#3282]) +5 other tests skip
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@gem_partial_pwrite_pread@writes-after-reads.html

  * igt@gem_pxp@hw-rejects-pxp-context:
    - shard-tglu-1:       NOTRUN -> [SKIP][34] ([i915#13398])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@gem_pxp@hw-rejects-pxp-context.html

  * igt@gem_pxp@reject-modify-context-protection-off-1:
    - shard-dg1:          NOTRUN -> [SKIP][35] ([i915#4270])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@gem_pxp@reject-modify-context-protection-off-1.html

  * igt@gem_userptr_blits@invalid-mmap-offset-unsync:
    - shard-tglu:         NOTRUN -> [SKIP][36] ([i915#3297]) +2 other tests skip
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@gem_userptr_blits@invalid-mmap-offset-unsync.html

  * igt@gem_userptr_blits@readonly-pwrite-unsync:
    - shard-rkl:          NOTRUN -> [SKIP][37] ([i915#3297]) +1 other test skip
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@gem_userptr_blits@readonly-pwrite-unsync.html

  * igt@gem_userptr_blits@unsync-unmap:
    - shard-tglu-1:       NOTRUN -> [SKIP][38] ([i915#3297])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@gem_userptr_blits@unsync-unmap.html

  * igt@gem_workarounds@suspend-resume-context:
    - shard-glk:          NOTRUN -> [INCOMPLETE][39] ([i915#13356])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk9/igt@gem_workarounds@suspend-resume-context.html

  * igt@gen9_exec_parse@basic-rejected:
    - shard-tglu:         NOTRUN -> [SKIP][40] ([i915#2527] / [i915#2856]) +2 other tests skip
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@gen9_exec_parse@basic-rejected.html

  * igt@gen9_exec_parse@shadow-peek:
    - shard-rkl:          NOTRUN -> [SKIP][41] ([i915#2527]) +2 other tests skip
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@gen9_exec_parse@shadow-peek.html

  * igt@i915_module_load@fault-injection@__uc_init:
    - shard-rkl:          NOTRUN -> [SKIP][42] ([i915#15479]) +4 other tests skip
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@i915_module_load@fault-injection@__uc_init.html

  * igt@i915_module_load@fault-injection@intel_connector_register:
    - shard-rkl:          NOTRUN -> [ABORT][43] ([i915#15342]) +1 other test abort
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@i915_module_load@fault-injection@intel_connector_register.html

  * igt@i915_module_load@resize-bar:
    - shard-rkl:          NOTRUN -> [SKIP][44] ([i915#6412])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@i915_module_load@resize-bar.html

  * igt@i915_pm_freq_api@freq-basic-api:
    - shard-tglu:         NOTRUN -> [SKIP][45] ([i915#8399])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@i915_pm_freq_api@freq-basic-api.html

  * igt@i915_pm_rpm@system-suspend:
    - shard-rkl:          [PASS][46] -> [ABORT][47] ([i915#15060])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-8/igt@i915_pm_rpm@system-suspend.html
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-1/igt@i915_pm_rpm@system-suspend.html

  * igt@i915_pm_rpm@system-suspend-execbuf:
    - shard-dg2:          [PASS][48] -> [ABORT][49] ([i915#13562])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-5/igt@i915_pm_rpm@system-suspend-execbuf.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-4/igt@i915_pm_rpm@system-suspend-execbuf.html

  * igt@i915_pm_rps@reset:
    - shard-snb:          [PASS][50] -> [INCOMPLETE][51] ([i915#13729] / [i915#13821])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-snb6/igt@i915_pm_rps@reset.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-snb4/igt@i915_pm_rps@reset.html

  * igt@i915_pm_sseu@full-enable:
    - shard-rkl:          NOTRUN -> [SKIP][52] ([i915#4387])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@i915_pm_sseu@full-enable.html

  * igt@i915_suspend@basic-s3-without-i915:
    - shard-tglu-1:       NOTRUN -> [INCOMPLETE][53] ([i915#4817] / [i915#7443])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@i915_suspend@basic-s3-without-i915.html
    - shard-dg1:          [PASS][54] -> [DMESG-WARN][55] ([i915#4391] / [i915#4423])
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg1-18/igt@i915_suspend@basic-s3-without-i915.html
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-14/igt@i915_suspend@basic-s3-without-i915.html

  * igt@kms_async_flips@async-flip-suspend-resume:
    - shard-glk10:        NOTRUN -> [INCOMPLETE][56] ([i915#12761])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk10/igt@kms_async_flips@async-flip-suspend-resume.html

  * igt@kms_async_flips@async-flip-suspend-resume@pipe-a-hdmi-a-2:
    - shard-glk10:        NOTRUN -> [INCOMPLETE][57] ([i915#12761] / [i915#14995])
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk10/igt@kms_async_flips@async-flip-suspend-resume@pipe-a-hdmi-a-2.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
    - shard-glk:          NOTRUN -> [SKIP][58] ([i915#1769])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk1/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html

  * igt@kms_big_fb@4-tiled-16bpp-rotate-90:
    - shard-rkl:          NOTRUN -> [SKIP][59] ([i915#5286]) +5 other tests skip
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_big_fb@4-tiled-16bpp-rotate-90.html

  * igt@kms_big_fb@4-tiled-addfb:
    - shard-tglu:         NOTRUN -> [SKIP][60] ([i915#5286]) +4 other tests skip
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_big_fb@4-tiled-addfb.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-async-flip:
    - shard-tglu-1:       NOTRUN -> [SKIP][61] ([i915#5286]) +3 other tests skip
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-async-flip.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-mtlp:         [PASS][62] -> [FAIL][63] ([i915#5138])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-mtlp-4/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-mtlp-4/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_fb@linear-64bpp-rotate-90:
    - shard-rkl:          NOTRUN -> [SKIP][64] ([i915#3638]) +5 other tests skip
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_big_fb@linear-64bpp-rotate-90.html

  * igt@kms_big_fb@linear-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-tglu-1:       NOTRUN -> [SKIP][65] ([i915#3828])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_big_fb@linear-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@linear-max-hw-stride-32bpp-rotate-180-hflip:
    - shard-tglu:         NOTRUN -> [SKIP][66] ([i915#3828])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_big_fb@linear-max-hw-stride-32bpp-rotate-180-hflip.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-90:
    - shard-dg1:          NOTRUN -> [SKIP][67] ([i915#3638])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html

  * igt@kms_ccs@ccs-on-another-bo-y-tiled-ccs@pipe-b-dp-3:
    - shard-dg2:          NOTRUN -> [SKIP][68] ([i915#10307] / [i915#6095]) +96 other tests skip
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-11/igt@kms_ccs@ccs-on-another-bo-y-tiled-ccs@pipe-b-dp-3.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-bmg-ccs:
    - shard-tglu-1:       NOTRUN -> [SKIP][69] ([i915#12313])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_ccs@crc-primary-basic-4-tiled-bmg-ccs.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs:
    - shard-rkl:          NOTRUN -> [SKIP][70] ([i915#12313]) +1 other test skip
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_ccs@crc-primary-basic-4-tiled-lnl-ccs.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][71] ([i915#14098] / [i915#14544] / [i915#6095]) +2 other tests skip
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-2.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc:
    - shard-tglu:         NOTRUN -> [SKIP][72] ([i915#6095]) +59 other tests skip
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-rc-ccs-cc.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-c-hdmi-a-1:
    - shard-tglu-1:       NOTRUN -> [SKIP][73] ([i915#6095]) +9 other tests skip
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-c-hdmi-a-1.html

  * igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc:
    - shard-rkl:          [PASS][74] -> [ABORT][75] ([i915#15132])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-5/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-1/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc.html

  * igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-c-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [ABORT][76] ([i915#15132])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-1/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-c-hdmi-a-2.html

  * igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-a-hdmi-a-2:
    - shard-rkl:          [PASS][77] -> [INCOMPLETE][78] ([i915#15582]) +1 other test incomplete
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-3/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-a-hdmi-a-2.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs@pipe-a-hdmi-a-2.html

  * igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs:
    - shard-glk:          NOTRUN -> [INCOMPLETE][79] ([i915#14694] / [i915#15582]) +1 other test incomplete
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk1/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html

  * igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-a-hdmi-a-1:
    - shard-dg1:          NOTRUN -> [SKIP][80] ([i915#6095]) +184 other tests skip
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-15/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs@pipe-a-hdmi-a-1.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs:
    - shard-tglu:         NOTRUN -> [SKIP][81] ([i915#12313]) +1 other test skip
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs:
    - shard-rkl:          NOTRUN -> [SKIP][82] ([i915#14098] / [i915#6095]) +54 other tests skip
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs.html

  * igt@kms_ccs@missing-ccs-buffer-y-tiled-ccs@pipe-d-hdmi-a-1:
    - shard-dg2:          NOTRUN -> [SKIP][83] ([i915#10307] / [i915#10434] / [i915#6095]) +2 other tests skip
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-4/igt@kms_ccs@missing-ccs-buffer-y-tiled-ccs@pipe-d-hdmi-a-1.html

  * igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-hdmi-a-1:
    - shard-dg2:          NOTRUN -> [SKIP][84] ([i915#6095]) +59 other tests skip
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-4/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs@pipe-c-hdmi-a-1.html

  * igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][85] ([i915#6095]) +79 other tests skip
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-1/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-2.html

  * igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][86] ([i915#14544] / [i915#6095]) +5 other tests skip
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs@pipe-b-hdmi-a-2.html

  * igt@kms_cdclk@mode-transition:
    - shard-glk:          NOTRUN -> [SKIP][87] +279 other tests skip
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk2/igt@kms_cdclk@mode-transition.html
    - shard-tglu-1:       NOTRUN -> [SKIP][88] ([i915#3742])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_cdclk@mode-transition.html

  * igt@kms_cdclk@plane-scaling:
    - shard-rkl:          NOTRUN -> [SKIP][89] ([i915#3742])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@kms_cdclk@plane-scaling.html

  * igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode:
    - shard-tglu-1:       NOTRUN -> [SKIP][90] ([i915#11151] / [i915#7828]) +2 other tests skip
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode.html

  * igt@kms_chamelium_hpd@dp-hpd-storm-disable:
    - shard-tglu:         NOTRUN -> [SKIP][91] ([i915#11151] / [i915#7828]) +5 other tests skip
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_chamelium_hpd@dp-hpd-storm-disable.html

  * igt@kms_chamelium_hpd@vga-hpd-for-each-pipe:
    - shard-rkl:          NOTRUN -> [SKIP][92] ([i915#11151] / [i915#7828]) +7 other tests skip
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_chamelium_hpd@vga-hpd-for-each-pipe.html

  * igt@kms_content_protection@atomic:
    - shard-tglu-1:       NOTRUN -> [SKIP][93] ([i915#6944] / [i915#7116] / [i915#7118] / [i915#9424])
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_content_protection@atomic.html

  * igt@kms_content_protection@atomic-dpms@pipe-a-dp-3:
    - shard-dg2:          NOTRUN -> [FAIL][94] ([i915#7173]) +1 other test fail
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-11/igt@kms_content_protection@atomic-dpms@pipe-a-dp-3.html

  * igt@kms_content_protection@dp-mst-lic-type-1:
    - shard-rkl:          NOTRUN -> [SKIP][95] ([i915#15330] / [i915#3116])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_content_protection@dp-mst-lic-type-1.html
    - shard-tglu:         NOTRUN -> [SKIP][96] ([i915#15330] / [i915#3116] / [i915#3299])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_content_protection@dp-mst-lic-type-1.html

  * igt@kms_content_protection@dp-mst-type-1-suspend-resume:
    - shard-tglu:         NOTRUN -> [SKIP][97] ([i915#15330])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_content_protection@dp-mst-type-1-suspend-resume.html

  * igt@kms_content_protection@lic-type-0-hdcp14:
    - shard-tglu:         NOTRUN -> [SKIP][98] ([i915#6944])
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_content_protection@lic-type-0-hdcp14.html

  * igt@kms_content_protection@srm:
    - shard-rkl:          NOTRUN -> [SKIP][99] ([i915#6944] / [i915#7118])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_content_protection@srm.html

  * igt@kms_content_protection@suspend-resume:
    - shard-rkl:          NOTRUN -> [SKIP][100] ([i915#6944]) +2 other tests skip
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_content_protection@suspend-resume.html

  * igt@kms_cursor_crc@cursor-offscreen-512x170:
    - shard-tglu-1:       NOTRUN -> [SKIP][101] ([i915#13049])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_cursor_crc@cursor-offscreen-512x170.html

  * igt@kms_cursor_crc@cursor-onscreen-256x85:
    - shard-tglu:         NOTRUN -> [FAIL][102] ([i915#13566]) +3 other tests fail
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_cursor_crc@cursor-onscreen-256x85.html

  * igt@kms_cursor_crc@cursor-onscreen-512x512:
    - shard-dg1:          NOTRUN -> [SKIP][103] ([i915#13049])
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@kms_cursor_crc@cursor-onscreen-512x512.html

  * igt@kms_cursor_crc@cursor-random-256x85@pipe-a-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [FAIL][104] ([i915#13566]) +2 other tests fail
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_cursor_crc@cursor-random-256x85@pipe-a-hdmi-a-2.html

  * igt@kms_cursor_crc@cursor-rapid-movement-32x32:
    - shard-tglu:         NOTRUN -> [SKIP][105] ([i915#3555])
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_cursor_crc@cursor-rapid-movement-32x32.html

  * igt@kms_cursor_crc@cursor-rapid-movement-512x170:
    - shard-rkl:          NOTRUN -> [SKIP][106] ([i915#13049]) +1 other test skip
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_cursor_crc@cursor-rapid-movement-512x170.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
    - shard-tglu:         NOTRUN -> [SKIP][107] ([i915#4103]) +1 other test skip
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-legacy:
    - shard-rkl:          NOTRUN -> [SKIP][108] +19 other tests skip
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
    - shard-rkl:          NOTRUN -> [SKIP][109] ([i915#4103])
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html

  * igt@kms_dirtyfb@psr-dirtyfb-ioctl:
    - shard-rkl:          NOTRUN -> [SKIP][110] ([i915#9723])
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html
    - shard-tglu:         NOTRUN -> [SKIP][111] ([i915#9723])
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html

  * igt@kms_dp_link_training@non-uhbr-mst:
    - shard-tglu-1:       NOTRUN -> [SKIP][112] ([i915#13749])
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_dp_link_training@non-uhbr-mst.html

  * igt@kms_dp_link_training@non-uhbr-sst:
    - shard-dg2:          [PASS][113] -> [SKIP][114] ([i915#13749])
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-11/igt@kms_dp_link_training@non-uhbr-sst.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-6/igt@kms_dp_link_training@non-uhbr-sst.html

  * igt@kms_dp_link_training@uhbr-mst:
    - shard-rkl:          NOTRUN -> [SKIP][115] ([i915#13748])
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@kms_dp_link_training@uhbr-mst.html

  * igt@kms_dp_link_training@uhbr-sst:
    - shard-tglu:         NOTRUN -> [SKIP][116] ([i915#13748])
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_dp_link_training@uhbr-sst.html

  * igt@kms_dsc@dsc-with-bpc-formats:
    - shard-rkl:          NOTRUN -> [SKIP][117] ([i915#3555] / [i915#3840]) +1 other test skip
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@kms_dsc@dsc-with-bpc-formats.html

  * igt@kms_fbcon_fbt@psr:
    - shard-rkl:          NOTRUN -> [SKIP][118] ([i915#3955])
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_fbcon_fbt@psr.html

  * igt@kms_feature_discovery@display-3x:
    - shard-rkl:          NOTRUN -> [SKIP][119] ([i915#1839])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_feature_discovery@display-3x.html

  * igt@kms_feature_discovery@psr2:
    - shard-tglu-1:       NOTRUN -> [SKIP][120] ([i915#658])
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_feature_discovery@psr2.html

  * igt@kms_flip@2x-flip-vs-absolute-wf_vblank-interruptible:
    - shard-tglu-1:       NOTRUN -> [SKIP][121] ([i915#3637] / [i915#9934])
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_flip@2x-flip-vs-absolute-wf_vblank-interruptible.html

  * igt@kms_flip@2x-flip-vs-dpms:
    - shard-tglu:         NOTRUN -> [SKIP][122] ([i915#3637] / [i915#9934]) +7 other tests skip
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_flip@2x-flip-vs-dpms.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible:
    - shard-snb:          [PASS][123] -> [TIMEOUT][124] ([i915#14033] / [i915#14350])
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-snb7/igt@kms_flip@2x-flip-vs-suspend-interruptible.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-snb6/igt@kms_flip@2x-flip-vs-suspend-interruptible.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-vga1-hdmi-a1:
    - shard-snb:          [PASS][125] -> [TIMEOUT][126] ([i915#14033])
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-snb7/igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-vga1-hdmi-a1.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-snb6/igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-vga1-hdmi-a1.html

  * igt@kms_flip@2x-plain-flip-fb-recreate:
    - shard-rkl:          NOTRUN -> [SKIP][127] ([i915#9934]) +4 other tests skip
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_flip@2x-plain-flip-fb-recreate.html

  * igt@kms_flip@flip-vs-suspend:
    - shard-glk:          NOTRUN -> [INCOMPLETE][128] ([i915#12314] / [i915#12745] / [i915#4839] / [i915#6113])
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk5/igt@kms_flip@flip-vs-suspend.html

  * igt@kms_flip@flip-vs-suspend@a-hdmi-a1:
    - shard-glk:          NOTRUN -> [INCOMPLETE][129] ([i915#12314] / [i915#12745] / [i915#6113])
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk5/igt@kms_flip@flip-vs-suspend@a-hdmi-a1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling:
    - shard-dg1:          NOTRUN -> [SKIP][130] ([i915#15643])
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-downscaling:
    - shard-tglu:         NOTRUN -> [SKIP][131] ([i915#15643]) +2 other tests skip
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-upscaling:
    - shard-tglu-1:       NOTRUN -> [SKIP][132] ([i915#15643]) +1 other test skip
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-16bpp-4tile-upscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling:
    - shard-rkl:          NOTRUN -> [SKIP][133] ([i915#15643]) +2 other tests skip
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling.html

  * igt@kms_force_connector_basic@force-edid:
    - shard-mtlp:         [PASS][134] -> [SKIP][135] ([i915#15672])
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-mtlp-2/igt@kms_force_connector_basic@force-edid.html
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-mtlp-1/igt@kms_force_connector_basic@force-edid.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-mmap-wc:
    - shard-dg1:          NOTRUN -> [SKIP][136] ([i915#8708])
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-wc:
    - shard-tglu-1:       NOTRUN -> [SKIP][137] +15 other tests skip
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-render:
    - shard-rkl:          NOTRUN -> [SKIP][138] ([i915#15102]) +1 other test skip
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-tiling-4:
    - shard-rkl:          NOTRUN -> [SKIP][139] ([i915#5439])
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_frontbuffer_tracking@fbcpsr-tiling-4.html

  * igt@kms_frontbuffer_tracking@fbcpsr-tiling-linear:
    - shard-dg1:          NOTRUN -> [SKIP][140] ([i915#15102] / [i915#3458])
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@kms_frontbuffer_tracking@fbcpsr-tiling-linear.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-shrfb-draw-mmap-gtt:
    - shard-rkl:          NOTRUN -> [SKIP][141] ([i915#15102] / [i915#3023]) +22 other tests skip
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_frontbuffer_tracking@psr-1p-primscrn-pri-shrfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-move:
    - shard-tglu:         NOTRUN -> [SKIP][142] ([i915#15102]) +18 other tests skip
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-move.html

  * igt@kms_frontbuffer_tracking@psr-1p-rte:
    - shard-tglu-1:       NOTRUN -> [SKIP][143] ([i915#15102]) +4 other tests skip
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_frontbuffer_tracking@psr-1p-rte.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-msflip-blt:
    - shard-rkl:          NOTRUN -> [SKIP][144] ([i915#1825]) +34 other tests skip
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-msflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-blt:
    - shard-dg1:          NOTRUN -> [SKIP][145] +1 other test skip
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-mmap-cpu:
    - shard-tglu:         NOTRUN -> [SKIP][146] +32 other tests skip
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_hdr@bpc-switch-dpms:
    - shard-rkl:          NOTRUN -> [SKIP][147] ([i915#3555] / [i915#8228])
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_hdr@bpc-switch-dpms.html

  * igt@kms_hdr@static-toggle:
    - shard-tglu:         NOTRUN -> [SKIP][148] ([i915#3555] / [i915#8228])
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_hdr@static-toggle.html

  * igt@kms_joiner@basic-force-big-joiner:
    - shard-rkl:          NOTRUN -> [SKIP][149] ([i915#15459])
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@kms_joiner@basic-force-big-joiner.html

  * igt@kms_joiner@basic-max-non-joiner:
    - shard-tglu-1:       NOTRUN -> [SKIP][150] ([i915#13688])
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_joiner@basic-max-non-joiner.html

  * igt@kms_joiner@invalid-modeset-big-joiner:
    - shard-rkl:          NOTRUN -> [SKIP][151] ([i915#15460])
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_joiner@invalid-modeset-big-joiner.html

  * igt@kms_joiner@invalid-modeset-force-big-joiner:
    - shard-dg2:          [PASS][152] -> [SKIP][153] ([i915#15459])
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-11/igt@kms_joiner@invalid-modeset-force-big-joiner.html
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-6/igt@kms_joiner@invalid-modeset-force-big-joiner.html

  * igt@kms_joiner@invalid-modeset-ultra-joiner:
    - shard-rkl:          NOTRUN -> [SKIP][154] ([i915#15458])
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_joiner@invalid-modeset-ultra-joiner.html

  * igt@kms_plane@pixel-format-4-tiled-dg2-mc-ccs-modifier:
    - shard-glk10:        NOTRUN -> [SKIP][155] +260 other tests skip
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk10/igt@kms_plane@pixel-format-4-tiled-dg2-mc-ccs-modifier.html

  * igt@kms_plane@pixel-format-4-tiled-dg2-mc-ccs-modifier-source-clamping@pipe-a-plane-0:
    - shard-rkl:          NOTRUN -> [SKIP][156] ([i915#15608]) +21 other tests skip
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_plane@pixel-format-4-tiled-dg2-mc-ccs-modifier-source-clamping@pipe-a-plane-0.html

  * igt@kms_plane@pixel-format-4-tiled-mtl-mc-ccs-modifier-source-clamping@pipe-a-plane-5:
    - shard-rkl:          NOTRUN -> [SKIP][157] ([i915#15609]) +4 other tests skip
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_plane@pixel-format-4-tiled-mtl-mc-ccs-modifier-source-clamping@pipe-a-plane-5.html

  * igt@kms_plane@pixel-format-4-tiled-mtl-mc-ccs-modifier-source-clamping@pipe-b-plane-5:
    - shard-rkl:          NOTRUN -> [SKIP][158] ([i915#15609] / [i915#8825]) +2 other tests skip
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_plane@pixel-format-4-tiled-mtl-mc-ccs-modifier-source-clamping@pipe-b-plane-5.html

  * igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-cc-modifier:
    - shard-rkl:          NOTRUN -> [SKIP][159] ([i915#15608] / [i915#8825]) +3 other tests skip
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-cc-modifier.html

  * igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-modifier-source-clamping:
    - shard-rkl:          NOTRUN -> [SKIP][160] ([i915#15608] / [i915#15609] / [i915#8825]) +2 other tests skip
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-modifier-source-clamping.html

  * igt@kms_plane@pixel-format-linear-modifier-source-clamping@pipe-a-plane-7:
    - shard-tglu:         NOTRUN -> [SKIP][161] ([i915#15609]) +2 other tests skip
   [161]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_plane@pixel-format-linear-modifier-source-clamping@pipe-a-plane-7.html

  * igt@kms_plane@pixel-format-y-tiled-gen12-mc-ccs-modifier:
    - shard-tglu-1:       NOTRUN -> [SKIP][162] ([i915#15608] / [i915#8825]) +3 other tests skip
   [162]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_plane@pixel-format-y-tiled-gen12-mc-ccs-modifier.html

  * igt@kms_plane@pixel-format-y-tiled-gen12-mc-ccs-modifier@pipe-b-plane-0:
    - shard-tglu-1:       NOTRUN -> [SKIP][163] ([i915#15608]) +13 other tests skip
   [163]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_plane@pixel-format-y-tiled-gen12-mc-ccs-modifier@pipe-b-plane-0.html

  * igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping:
    - shard-tglu:         NOTRUN -> [SKIP][164] ([i915#15608] / [i915#15609] / [i915#8825])
   [164]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping.html

  * igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-b-plane-3:
    - shard-tglu:         NOTRUN -> [SKIP][165] ([i915#15608]) +5 other tests skip
   [165]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-b-plane-3.html

  * igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-b-plane-7:
    - shard-tglu:         NOTRUN -> [SKIP][166] ([i915#15609] / [i915#8825])
   [166]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-b-plane-7.html

  * igt@kms_plane_alpha_blend@alpha-opaque-fb:
    - shard-glk10:        NOTRUN -> [FAIL][167] ([i915#10647] / [i915#12169])
   [167]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk10/igt@kms_plane_alpha_blend@alpha-opaque-fb.html

  * igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-c-hdmi-a-1:
    - shard-glk10:        NOTRUN -> [FAIL][168] ([i915#10647]) +1 other test fail
   [168]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk10/igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-c-hdmi-a-1.html

  * igt@kms_plane_lowres@tiling-4:
    - shard-tglu-1:       NOTRUN -> [SKIP][169] ([i915#3555]) +1 other test skip
   [169]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_plane_lowres@tiling-4.html

  * igt@kms_plane_multiple@2x-tiling-none:
    - shard-tglu:         NOTRUN -> [SKIP][170] ([i915#13958])
   [170]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_plane_multiple@2x-tiling-none.html

  * igt@kms_plane_multiple@tiling-4:
    - shard-rkl:          NOTRUN -> [SKIP][171] ([i915#14259])
   [171]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_plane_multiple@tiling-4.html

  * igt@kms_plane_multiple@tiling-yf:
    - shard-tglu:         NOTRUN -> [SKIP][172] ([i915#14259])
   [172]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_plane_multiple@tiling-yf.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-c:
    - shard-tglu:         NOTRUN -> [SKIP][173] ([i915#15329]) +4 other tests skip
   [173]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-c.html

  * igt@kms_pm_backlight@brightness-with-dpms:
    - shard-tglu:         NOTRUN -> [SKIP][174] ([i915#12343])
   [174]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_pm_backlight@brightness-with-dpms.html

  * igt@kms_pm_dc@dc3co-vpb-simulation:
    - shard-tglu-1:       NOTRUN -> [SKIP][175] ([i915#9685])
   [175]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_pm_dc@dc3co-vpb-simulation.html

  * igt@kms_pm_dc@dc5-retention-flops:
    - shard-rkl:          NOTRUN -> [SKIP][176] ([i915#3828]) +1 other test skip
   [176]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_pm_dc@dc5-retention-flops.html

  * igt@kms_pm_dc@dc9-dpms:
    - shard-rkl:          NOTRUN -> [SKIP][177] ([i915#4281])
   [177]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_pm_dc@dc9-dpms.html

  * igt@kms_pm_rpm@dpms-mode-unset-lpsp:
    - shard-rkl:          [PASS][178] -> [SKIP][179] ([i915#15073])
   [178]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-5/igt@kms_pm_rpm@dpms-mode-unset-lpsp.html
   [179]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-7/igt@kms_pm_rpm@dpms-mode-unset-lpsp.html

  * igt@kms_pm_rpm@dpms-mode-unset-non-lpsp:
    - shard-dg1:          [PASS][180] -> [SKIP][181] ([i915#15073])
   [180]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg1-16/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html
   [181]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-15/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html

  * igt@kms_pm_rpm@fences:
    - shard-dg1:          NOTRUN -> [SKIP][182] ([i915#4077]) +1 other test skip
   [182]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@kms_pm_rpm@fences.html

  * igt@kms_pm_rpm@modeset-non-lpsp:
    - shard-dg2:          [PASS][183] -> [SKIP][184] ([i915#15073]) +1 other test skip
   [183]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-6/igt@kms_pm_rpm@modeset-non-lpsp.html
   [184]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-4/igt@kms_pm_rpm@modeset-non-lpsp.html

  * igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait:
    - shard-rkl:          NOTRUN -> [SKIP][185] ([i915#15073])
   [185]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html

  * igt@kms_pm_rpm@package-g7:
    - shard-tglu-1:       NOTRUN -> [SKIP][186] ([i915#15403])
   [186]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_pm_rpm@package-g7.html

  * igt@kms_psr2_sf@fbc-pr-overlay-primary-update-sf-dmg-area:
    - shard-rkl:          NOTRUN -> [SKIP][187] ([i915#11520]) +11 other tests skip
   [187]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@kms_psr2_sf@fbc-pr-overlay-primary-update-sf-dmg-area.html

  * igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-sf:
    - shard-tglu-1:       NOTRUN -> [SKIP][188] ([i915#11520]) +1 other test skip
   [188]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-sf.html

  * igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf:
    - shard-glk:          NOTRUN -> [SKIP][189] ([i915#11520]) +3 other tests skip
   [189]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk2/igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf.html

  * igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf:
    - shard-tglu:         NOTRUN -> [SKIP][190] ([i915#11520]) +5 other tests skip
   [190]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area:
    - shard-glk10:        NOTRUN -> [SKIP][191] ([i915#11520]) +3 other tests skip
   [191]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk10/igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area.html

  * igt@kms_psr@fbc-psr-primary-blt:
    - shard-tglu-1:       NOTRUN -> [SKIP][192] ([i915#9732]) +5 other tests skip
   [192]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_psr@fbc-psr-primary-blt.html

  * igt@kms_psr@pr-dpms:
    - shard-tglu:         NOTRUN -> [SKIP][193] ([i915#9732]) +14 other tests skip
   [193]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_psr@pr-dpms.html

  * igt@kms_psr@pr-sprite-render:
    - shard-dg1:          NOTRUN -> [SKIP][194] ([i915#1072] / [i915#9732])
   [194]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-13/igt@kms_psr@pr-sprite-render.html

  * igt@kms_psr@psr-sprite-plane-move:
    - shard-rkl:          NOTRUN -> [SKIP][195] ([i915#1072] / [i915#9732]) +21 other tests skip
   [195]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_psr@psr-sprite-plane-move.html

  * igt@kms_psr_stress_test@flip-primary-invalidate-overlay:
    - shard-tglu:         NOTRUN -> [SKIP][196] ([i915#9685])
   [196]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html

  * igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
    - shard-rkl:          NOTRUN -> [SKIP][197] ([i915#9685])
   [197]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html

  * igt@kms_rotation_crc@multiplane-rotation-cropping-bottom:
    - shard-glk10:        NOTRUN -> [INCOMPLETE][198] ([i915#15500])
   [198]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk10/igt@kms_rotation_crc@multiplane-rotation-cropping-bottom.html

  * igt@kms_rotation_crc@multiplane-rotation-cropping-top:
    - shard-glk:          NOTRUN -> [INCOMPLETE][199] ([i915#15492])
   [199]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk2/igt@kms_rotation_crc@multiplane-rotation-cropping-top.html

  * igt@kms_rotation_crc@primary-4-tiled-reflect-x-180:
    - shard-rkl:          NOTRUN -> [SKIP][200] ([i915#5289])
   [200]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-8/igt@kms_rotation_crc@primary-4-tiled-reflect-x-180.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0:
    - shard-tglu:         NOTRUN -> [SKIP][201] ([i915#5289])
   [201]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90:
    - shard-tglu-1:       NOTRUN -> [SKIP][202] ([i915#5289])
   [202]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-1/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - shard-rkl:          NOTRUN -> [SKIP][203] ([i915#3555]) +4 other tests skip
   [203]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@kms_tiled_display@basic-test-pattern-with-chamelium:
    - shard-tglu:         NOTRUN -> [SKIP][204] ([i915#8623])
   [204]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html

  * igt@kms_vblank@ts-continuation-dpms-suspend:
    - shard-rkl:          [PASS][205] -> [INCOMPLETE][206] ([i915#12276]) +1 other test incomplete
   [205]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_vblank@ts-continuation-dpms-suspend.html
   [206]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_vblank@ts-continuation-dpms-suspend.html

  * igt@kms_vrr@flip-basic:
    - shard-rkl:          NOTRUN -> [SKIP][207] ([i915#15243] / [i915#3555]) +2 other tests skip
   [207]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_vrr@flip-basic.html

  * igt@kms_vrr@lobf:
    - shard-rkl:          NOTRUN -> [SKIP][208] ([i915#11920])
   [208]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_vrr@lobf.html

  * igt@kms_vrr@seamless-rr-switch-vrr:
    - shard-tglu:         NOTRUN -> [SKIP][209] ([i915#9906])
   [209]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-10/igt@kms_vrr@seamless-rr-switch-vrr.html

  * igt@perf_pmu@module-unload:
    - shard-glk10:        NOTRUN -> [FAIL][210] ([i915#14433])
   [210]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-glk10/igt@perf_pmu@module-unload.html

  * igt@perf_pmu@rc6@other-idle-gt0:
    - shard-tglu:         NOTRUN -> [SKIP][211] ([i915#8516])
   [211]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@perf_pmu@rc6@other-idle-gt0.html

  * igt@prime_vgem@fence-write-hang:
    - shard-rkl:          NOTRUN -> [SKIP][212] ([i915#3708])
   [212]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@prime_vgem@fence-write-hang.html

  * igt@sriov_basic@enable-vfs-bind-unbind-each:
    - shard-rkl:          NOTRUN -> [SKIP][213] ([i915#9917]) +1 other test skip
   [213]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@sriov_basic@enable-vfs-bind-unbind-each.html

  * igt@sriov_basic@enable-vfs-bind-unbind-each@numvfs-random:
    - shard-tglu:         NOTRUN -> [FAIL][214] ([i915#12910]) +8 other tests fail
   [214]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-2/igt@sriov_basic@enable-vfs-bind-unbind-each@numvfs-random.html

  
#### Possible fixes ####

  * igt@gem_ctx_isolation@preservation-s3:
    - shard-rkl:          [INCOMPLETE][215] ([i915#13356]) -> [PASS][216] +2 other tests pass
   [215]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@gem_ctx_isolation@preservation-s3.html
   [216]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@gem_ctx_isolation@preservation-s3.html

  * igt@i915_pm_rc6_residency@rc6-fence:
    - shard-tglu:         [WARN][217] ([i915#13790] / [i915#2681]) -> [PASS][218] +1 other test pass
   [217]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-tglu-7/igt@i915_pm_rc6_residency@rc6-fence.html
   [218]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-tglu-6/igt@i915_pm_rc6_residency@rc6-fence.html

  * igt@kms_async_flips@async-flip-suspend-resume:
    - shard-rkl:          [ABORT][219] ([i915#15132]) -> [PASS][220]
   [219]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-1/igt@kms_async_flips@async-flip-suspend-resume.html
   [220]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-2/igt@kms_async_flips@async-flip-suspend-resume.html

  * igt@kms_fbcon_fbt@fbc-suspend:
    - shard-rkl:          [INCOMPLETE][221] ([i915#9878]) -> [PASS][222]
   [221]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-4/igt@kms_fbcon_fbt@fbc-suspend.html
   [222]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_fbcon_fbt@fbc-suspend.html

  * igt@kms_flip@flip-vs-suspend:
    - shard-rkl:          [INCOMPLETE][223] ([i915#6113]) -> [PASS][224] +1 other test pass
   [223]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_flip@flip-vs-suspend.html
   [224]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_flip@flip-vs-suspend.html

  * igt@kms_hdr@invalid-metadata-sizes:
    - shard-dg2:          [SKIP][225] ([i915#3555] / [i915#8228]) -> [PASS][226] +1 other test pass
   [225]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-7/igt@kms_hdr@invalid-metadata-sizes.html
   [226]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-11/igt@kms_hdr@invalid-metadata-sizes.html

  * igt@kms_hdr@static-toggle-dpms:
    - shard-rkl:          [SKIP][227] ([i915#3555] / [i915#8228]) -> [PASS][228]
   [227]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-5/igt@kms_hdr@static-toggle-dpms.html
   [228]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-1/igt@kms_hdr@static-toggle-dpms.html

  * igt@kms_pm_rpm@modeset-lpsp-stress:
    - shard-dg1:          [SKIP][229] ([i915#15073]) -> [PASS][230] +1 other test pass
   [229]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg1-16/igt@kms_pm_rpm@modeset-lpsp-stress.html
   [230]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-15/igt@kms_pm_rpm@modeset-lpsp-stress.html

  * igt@kms_pm_rpm@modeset-non-lpsp:
    - shard-rkl:          [SKIP][231] ([i915#15073]) -> [PASS][232] +1 other test pass
   [231]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@kms_pm_rpm@modeset-non-lpsp.html
   [232]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_pm_rpm@modeset-non-lpsp.html

  
#### Warnings ####

  * igt@gem_create@create-ext-set-pat:
    - shard-rkl:          [SKIP][233] ([i915#8562]) -> [SKIP][234] ([i915#14544] / [i915#8562])
   [233]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@gem_create@create-ext-set-pat.html
   [234]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gem_create@create-ext-set-pat.html

  * igt@gem_exec_balancer@parallel-bb-first:
    - shard-rkl:          [SKIP][235] ([i915#4525]) -> [SKIP][236] ([i915#14544] / [i915#4525])
   [235]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@gem_exec_balancer@parallel-bb-first.html
   [236]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gem_exec_balancer@parallel-bb-first.html

  * igt@gem_exec_reloc@basic-cpu-read-active:
    - shard-rkl:          [SKIP][237] ([i915#3281]) -> [SKIP][238] ([i915#14544] / [i915#3281]) +2 other tests skip
   [237]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-3/igt@gem_exec_reloc@basic-cpu-read-active.html
   [238]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gem_exec_reloc@basic-cpu-read-active.html

  * igt@gem_exec_reloc@basic-gtt-wc:
    - shard-rkl:          [SKIP][239] ([i915#14544] / [i915#3281]) -> [SKIP][240] ([i915#3281]) +2 other tests skip
   [239]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@gem_exec_reloc@basic-gtt-wc.html
   [240]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@gem_exec_reloc@basic-gtt-wc.html

  * igt@gem_lmem_swapping@heavy-verify-multi:
    - shard-rkl:          [SKIP][241] ([i915#4613]) -> [SKIP][242] ([i915#14544] / [i915#4613])
   [241]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-3/igt@gem_lmem_swapping@heavy-verify-multi.html
   [242]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gem_lmem_swapping@heavy-verify-multi.html

  * igt@gem_lmem_swapping@parallel-random-verify-ccs:
    - shard-rkl:          [SKIP][243] ([i915#14544] / [i915#4613]) -> [SKIP][244] ([i915#4613]) +1 other test skip
   [243]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@gem_lmem_swapping@parallel-random-verify-ccs.html
   [244]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@gem_lmem_swapping@parallel-random-verify-ccs.html

  * igt@gem_pxp@hw-rejects-pxp-buffer:
    - shard-rkl:          [SKIP][245] ([i915#13717]) -> [SKIP][246] ([i915#13717] / [i915#14544])
   [245]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@gem_pxp@hw-rejects-pxp-buffer.html
   [246]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gem_pxp@hw-rejects-pxp-buffer.html

  * igt@gem_readwrite@beyond-eob:
    - shard-rkl:          [SKIP][247] ([i915#3282]) -> [SKIP][248] ([i915#14544] / [i915#3282]) +2 other tests skip
   [247]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@gem_readwrite@beyond-eob.html
   [248]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gem_readwrite@beyond-eob.html

  * igt@gem_softpin@evict-snoop-interruptible:
    - shard-rkl:          [SKIP][249] -> [SKIP][250] ([i915#14544]) +10 other tests skip
   [249]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@gem_softpin@evict-snoop-interruptible.html
   [250]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gem_softpin@evict-snoop-interruptible.html

  * igt@gem_userptr_blits@create-destroy-unsync:
    - shard-rkl:          [SKIP][251] ([i915#3297]) -> [SKIP][252] ([i915#14544] / [i915#3297]) +1 other test skip
   [251]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@gem_userptr_blits@create-destroy-unsync.html
   [252]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gem_userptr_blits@create-destroy-unsync.html

  * igt@gem_userptr_blits@invalid-mmap-offset-unsync:
    - shard-rkl:          [SKIP][253] ([i915#14544] / [i915#3297]) -> [SKIP][254] ([i915#3297])
   [253]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@gem_userptr_blits@invalid-mmap-offset-unsync.html
   [254]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@gem_userptr_blits@invalid-mmap-offset-unsync.html

  * igt@gen9_exec_parse@batch-invalid-length:
    - shard-rkl:          [SKIP][255] ([i915#2527]) -> [SKIP][256] ([i915#14544] / [i915#2527])
   [255]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@gen9_exec_parse@batch-invalid-length.html
   [256]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@gen9_exec_parse@batch-invalid-length.html

  * igt@i915_pm_freq_api@freq-basic-api:
    - shard-rkl:          [SKIP][257] ([i915#14544] / [i915#8399]) -> [SKIP][258] ([i915#8399])
   [257]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@i915_pm_freq_api@freq-basic-api.html
   [258]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@i915_pm_freq_api@freq-basic-api.html

  * igt@i915_pm_freq_api@freq-suspend:
    - shard-rkl:          [SKIP][259] ([i915#8399]) -> [SKIP][260] ([i915#14544] / [i915#8399]) +1 other test skip
   [259]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@i915_pm_freq_api@freq-suspend.html
   [260]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@i915_pm_freq_api@freq-suspend.html

  * igt@i915_query@test-query-geometry-subslices:
    - shard-rkl:          [SKIP][261] ([i915#5723]) -> [SKIP][262] ([i915#14544] / [i915#5723])
   [261]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@i915_query@test-query-geometry-subslices.html
   [262]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@i915_query@test-query-geometry-subslices.html

  * igt@kms_big_fb@4-tiled-32bpp-rotate-90:
    - shard-rkl:          [SKIP][263] ([i915#5286]) -> [SKIP][264] ([i915#14544] / [i915#5286]) +2 other tests skip
   [263]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-3/igt@kms_big_fb@4-tiled-32bpp-rotate-90.html
   [264]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_big_fb@4-tiled-32bpp-rotate-90.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
    - shard-rkl:          [SKIP][265] ([i915#14544] / [i915#5286]) -> [SKIP][266] ([i915#5286])
   [265]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
   [266]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html

  * igt@kms_big_fb@linear-32bpp-rotate-90:
    - shard-rkl:          [SKIP][267] ([i915#3638]) -> [SKIP][268] ([i915#14544] / [i915#3638])
   [267]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@kms_big_fb@linear-32bpp-rotate-90.html
   [268]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_big_fb@linear-32bpp-rotate-90.html

  * igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-2:
    - shard-rkl:          [SKIP][269] ([i915#6095]) -> [SKIP][270] ([i915#14544] / [i915#6095]) +5 other tests skip
   [269]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-2.html
   [270]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-2.html

  * igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs:
    - shard-rkl:          [SKIP][271] ([i915#14098] / [i915#6095]) -> [SKIP][272] ([i915#14098] / [i915#14544] / [i915#6095]) +8 other tests skip
   [271]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs.html
   [272]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs.html

  * igt@kms_ccs@random-ccs-data-yf-tiled-ccs:
    - shard-rkl:          [SKIP][273] ([i915#14098] / [i915#14544] / [i915#6095]) -> [SKIP][274] ([i915#14098] / [i915#6095])
   [273]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_ccs@random-ccs-data-yf-tiled-ccs.html
   [274]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_ccs@random-ccs-data-yf-tiled-ccs.html

  * igt@kms_chamelium_edid@dp-edid-read:
    - shard-rkl:          [SKIP][275] ([i915#11151] / [i915#14544] / [i915#7828]) -> [SKIP][276] ([i915#11151] / [i915#7828])
   [275]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_chamelium_edid@dp-edid-read.html
   [276]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_chamelium_edid@dp-edid-read.html

  * igt@kms_chamelium_hpd@vga-hpd-fast:
    - shard-rkl:          [SKIP][277] ([i915#11151] / [i915#7828]) -> [SKIP][278] ([i915#11151] / [i915#14544] / [i915#7828]) +2 other tests skip
   [277]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-3/igt@kms_chamelium_hpd@vga-hpd-fast.html
   [278]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_chamelium_hpd@vga-hpd-fast.html

  * igt@kms_content_protection@atomic-dpms:
    - shard-dg2:          [SKIP][279] ([i915#6944] / [i915#7118] / [i915#9424]) -> [FAIL][280] ([i915#7173])
   [279]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-7/igt@kms_content_protection@atomic-dpms.html
   [280]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-11/igt@kms_content_protection@atomic-dpms.html

  * igt@kms_content_protection@dp-mst-type-0-suspend-resume:
    - shard-rkl:          [SKIP][281] ([i915#15330]) -> [SKIP][282] ([i915#14544] / [i915#15330])
   [281]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-3/igt@kms_content_protection@dp-mst-type-0-suspend-resume.html
   [282]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_content_protection@dp-mst-type-0-suspend-resume.html

  * igt@kms_content_protection@legacy:
    - shard-dg2:          [FAIL][283] ([i915#7173]) -> [SKIP][284] ([i915#6944] / [i915#7118] / [i915#9424])
   [283]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-11/igt@kms_content_protection@legacy.html
   [284]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-1/igt@kms_content_protection@legacy.html

  * igt@kms_content_protection@lic-type-0-hdcp14:
    - shard-dg2:          [SKIP][285] ([i915#6944]) -> [FAIL][286] ([i915#7173])
   [285]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-7/igt@kms_content_protection@lic-type-0-hdcp14.html
   [286]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-11/igt@kms_content_protection@lic-type-0-hdcp14.html

  * igt@kms_content_protection@mei-interface:
    - shard-rkl:          [SKIP][287] ([i915#6944] / [i915#9424]) -> [SKIP][288] ([i915#14544] / [i915#6944] / [i915#9424])
   [287]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@kms_content_protection@mei-interface.html
   [288]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_content_protection@mei-interface.html

  * igt@kms_content_protection@type1:
    - shard-dg2:          [SKIP][289] ([i915#6944] / [i915#7118] / [i915#7162] / [i915#9424]) -> [SKIP][290] ([i915#6944] / [i915#7118] / [i915#9424])
   [289]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-11/igt@kms_content_protection@type1.html
   [290]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-6/igt@kms_content_protection@type1.html

  * igt@kms_cursor_crc@cursor-onscreen-32x32:
    - shard-rkl:          [SKIP][291] ([i915#3555]) -> [SKIP][292] ([i915#14544] / [i915#3555]) +2 other tests skip
   [291]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@kms_cursor_crc@cursor-onscreen-32x32.html
   [292]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_cursor_crc@cursor-onscreen-32x32.html

  * igt@kms_cursor_crc@cursor-sliding-512x512:
    - shard-rkl:          [SKIP][293] ([i915#13049]) -> [SKIP][294] ([i915#13049] / [i915#14544])
   [293]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-3/igt@kms_cursor_crc@cursor-sliding-512x512.html
   [294]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_cursor_crc@cursor-sliding-512x512.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
    - shard-rkl:          [SKIP][295] ([i915#14544] / [i915#4103]) -> [SKIP][296] ([i915#4103])
   [295]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
   [296]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html

  * igt@kms_dither@fb-8bpc-vs-panel-6bpc:
    - shard-rkl:          [SKIP][297] ([i915#3555] / [i915#3804]) -> [SKIP][298] ([i915#14544] / [i915#3555] / [i915#3804])
   [297]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html
   [298]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html

  * igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-2:
    - shard-rkl:          [SKIP][299] ([i915#3804]) -> [SKIP][300] ([i915#14544] / [i915#3804])
   [299]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-2.html
   [300]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-2.html

  * igt@kms_dsc@dsc-with-output-formats:
    - shard-rkl:          [SKIP][301] ([i915#3555] / [i915#3840]) -> [SKIP][302] ([i915#14544] / [i915#3555] / [i915#3840])
   [301]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@kms_dsc@dsc-with-output-formats.html
   [302]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_dsc@dsc-with-output-formats.html

  * igt@kms_flip@2x-flip-vs-wf_vblank:
    - shard-rkl:          [SKIP][303] ([i915#14544] / [i915#9934]) -> [SKIP][304] ([i915#9934]) +1 other test skip
   [303]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_flip@2x-flip-vs-wf_vblank.html
   [304]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_flip@2x-flip-vs-wf_vblank.html

  * igt@kms_flip@2x-plain-flip-ts-check:
    - shard-rkl:          [SKIP][305] ([i915#9934]) -> [SKIP][306] ([i915#14544] / [i915#9934]) +4 other tests skip
   [305]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-3/igt@kms_flip@2x-plain-flip-ts-check.html
   [306]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_flip@2x-plain-flip-ts-check.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling:
    - shard-rkl:          [SKIP][307] ([i915#14544] / [i915#15643]) -> [SKIP][308] ([i915#15643])
   [307]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling.html
   [308]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytileccs-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling:
    - shard-rkl:          [SKIP][309] ([i915#15643]) -> [SKIP][310] ([i915#14544] / [i915#15643]) +1 other test skip
   [309]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling.html
   [310]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-render:
    - shard-rkl:          [SKIP][311] ([i915#1825]) -> [SKIP][312] ([i915#14544] / [i915#1825]) +17 other tests skip
   [311]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-render.html
   [312]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-pwrite:
    - shard-rkl:          [SKIP][313] ([i915#15102]) -> [SKIP][314] ([i915#14544] / [i915#15102]) +1 other test skip
   [313]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-pwrite.html
   [314]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-render:
    - shard-rkl:          [SKIP][315] ([i915#15102] / [i915#3023]) -> [SKIP][316] ([i915#14544] / [i915#15102] / [i915#3023]) +9 other tests skip
   [315]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-render.html
   [316]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt:
    - shard-dg2:          [SKIP][317] ([i915#15102] / [i915#3458]) -> [SKIP][318] ([i915#10433] / [i915#15102] / [i915#3458]) +2 other tests skip
   [317]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html
   [318]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-4/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-blt:
    - shard-rkl:          [SKIP][319] ([i915#14544] / [i915#1825]) -> [SKIP][320] ([i915#1825]) +3 other tests skip
   [319]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-blt.html
   [320]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-tiling-y:
    - shard-rkl:          [SKIP][321] ([i915#14544] / [i915#15102] / [i915#3023]) -> [SKIP][322] ([i915#15102] / [i915#3023]) +1 other test skip
   [321]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html
   [322]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html

  * igt@kms_frontbuffer_tracking@pipe-fbc-rte:
    - shard-rkl:          [SKIP][323] ([i915#14544] / [i915#9766]) -> [SKIP][324] ([i915#9766])
   [323]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_frontbuffer_tracking@pipe-fbc-rte.html
   [324]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@kms_frontbuffer_tracking@pipe-fbc-rte.html

  * igt@kms_frontbuffer_tracking@psr-indfb-scaledprimary:
    - shard-dg2:          [SKIP][325] ([i915#10433] / [i915#15102] / [i915#3458]) -> [SKIP][326] ([i915#15102] / [i915#3458])
   [325]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg2-4/igt@kms_frontbuffer_tracking@psr-indfb-scaledprimary.html
   [326]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg2-5/igt@kms_frontbuffer_tracking@psr-indfb-scaledprimary.html

  * igt@kms_hdr@brightness-with-hdr:
    - shard-rkl:          [SKIP][327] ([i915#12713]) -> [SKIP][328] ([i915#1187] / [i915#12713])
   [327]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-4/igt@kms_hdr@brightness-with-hdr.html
   [328]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-3/igt@kms_hdr@brightness-with-hdr.html

  * igt@kms_joiner@invalid-modeset-force-ultra-joiner:
    - shard-rkl:          [SKIP][329] ([i915#15458]) -> [SKIP][330] ([i915#14544] / [i915#15458]) +1 other test skip
   [329]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_joiner@invalid-modeset-force-ultra-joiner.html
   [330]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_joiner@invalid-modeset-force-ultra-joiner.html

  * igt@kms_panel_fitting@atomic-fastset:
    - shard-dg1:          [SKIP][331] ([i915#4423] / [i915#6301]) -> [SKIP][332] ([i915#6301])
   [331]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-dg1-13/igt@kms_panel_fitting@atomic-fastset.html
   [332]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-dg1-18/igt@kms_panel_fitting@atomic-fastset.html

  * igt@kms_pipe_stress@stress-xrgb8888-4tiled:
    - shard-rkl:          [SKIP][333] ([i915#14712]) -> [SKIP][334] ([i915#14544] / [i915#14712])
   [333]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_pipe_stress@stress-xrgb8888-4tiled.html
   [334]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_pipe_stress@stress-xrgb8888-4tiled.html

  * igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier@pipe-a-plane-0:
    - shard-rkl:          [SKIP][335] ([i915#15608]) -> [SKIP][336] ([i915#14544] / [i915#15608])
   [335]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier@pipe-a-plane-0.html
   [336]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier@pipe-a-plane-0.html

  * igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier@pipe-b-plane-5:
    - shard-rkl:          [SKIP][337] ([i915#15608] / [i915#8825]) -> [SKIP][338] ([i915#14544] / [i915#15608] / [i915#8825]) +1 other test skip
   [337]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier@pipe-b-plane-5.html
   [338]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_plane@pixel-format-4-tiled-lnl-ccs-modifier@pipe-b-plane-5.html

  * igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping:
    - shard-rkl:          [SKIP][339] ([i915#14544] / [i915#15608] / [i915#15609] / [i915#8825]) -> [SKIP][340] ([i915#15608] / [i915#15609] / [i915#8825])
   [339]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping.html
   [340]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping.html

  * igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-a-plane-0:
    - shard-rkl:          [SKIP][341] ([i915#14544] / [i915#15608]) -> [SKIP][342] ([i915#15608])
   [341]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-a-plane-0.html
   [342]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-a-plane-0.html

  * igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-b-plane-5:
    - shard-rkl:          [SKIP][343] ([i915#14544] / [i915#15609] / [i915#8825]) -> [SKIP][344] ([i915#15609] / [i915#8825])
   [343]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-b-plane-5.html
   [344]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_plane@pixel-format-yf-tiled-ccs-modifier-source-clamping@pipe-b-plane-5.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-b:
    - shard-rkl:          [SKIP][345] ([i915#14544] / [i915#15329]) -> [SKIP][346] ([i915#15329]) +3 other tests skip
   [345]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-b.html
   [346]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-b.html

  * igt@kms_plane_scaling@plane-upscale-20x20-with-rotation@pipe-a:
    - shard-rkl:          [SKIP][347] ([i915#15329]) -> [SKIP][348] ([i915#14544] / [i915#15329]) +3 other tests skip
   [347]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@kms_plane_scaling@plane-upscale-20x20-with-rotation@pipe-a.html
   [348]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_plane_scaling@plane-upscale-20x20-with-rotation@pipe-a.html

  * igt@kms_pm_lpsp@screens-disabled:
    - shard-rkl:          [SKIP][349] ([i915#8430]) -> [SKIP][350] ([i915#14544] / [i915#8430])
   [349]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@kms_pm_lpsp@screens-disabled.html
   [350]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_pm_lpsp@screens-disabled.html

  * igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-exceed-sf:
    - shard-rkl:          [SKIP][351] ([i915#11520] / [i915#14544]) -> [SKIP][352] ([i915#11520]) +1 other test skip
   [351]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-exceed-sf.html
   [352]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_psr2_sf@fbc-pr-overlay-plane-move-continuous-exceed-sf.html

  * igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area:
    - shard-rkl:          [SKIP][353] ([i915#11520]) -> [SKIP][354] ([i915#11520] / [i915#14544]) +2 other tests skip
   [353]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-2/igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area.html
   [354]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-rkl:          [SKIP][355] ([i915#9683]) -> [SKIP][356] ([i915#14544] / [i915#9683])
   [355]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_psr2_su@page_flip-xrgb8888.html
   [356]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr@fbc-psr2-sprite-render:
    - shard-rkl:          [SKIP][357] ([i915#1072] / [i915#14544] / [i915#9732]) -> [SKIP][358] ([i915#1072] / [i915#9732]) +2 other tests skip
   [357]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@kms_psr@fbc-psr2-sprite-render.html
   [358]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@kms_psr@fbc-psr2-sprite-render.html

  * igt@kms_psr@pr-primary-render:
    - shard-rkl:          [SKIP][359] ([i915#1072] / [i915#9732]) -> [SKIP][360] ([i915#1072] / [i915#14544] / [i915#9732]) +5 other tests skip
   [359]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@kms_psr@pr-primary-render.html
   [360]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@kms_psr@pr-primary-render.html

  * igt@perf@unprivileged-single-ctx-counters:
    - shard-rkl:          [SKIP][361] ([i915#14544] / [i915#2433]) -> [SKIP][362] ([i915#2433])
   [361]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@perf@unprivileged-single-ctx-counters.html
   [362]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-5/igt@perf@unprivileged-single-ctx-counters.html

  * igt@perf_pmu@event-wait@rcs0:
    - shard-rkl:          [SKIP][363] ([i915#14544]) -> [SKIP][364] +3 other tests skip
   [363]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-6/igt@perf_pmu@event-wait@rcs0.html
   [364]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-4/igt@perf_pmu@event-wait@rcs0.html

  * igt@prime_vgem@coherency-gtt:
    - shard-rkl:          [SKIP][365] ([i915#3708]) -> [SKIP][366] ([i915#14544] / [i915#3708])
   [365]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@prime_vgem@coherency-gtt.html
   [366]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@prime_vgem@coherency-gtt.html

  * igt@sriov_basic@bind-unbind-vf:
    - shard-rkl:          [SKIP][367] ([i915#9917]) -> [SKIP][368] ([i915#14544] / [i915#9917])
   [367]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17957/shard-rkl-7/igt@sriov_basic@bind-unbind-vf.html
   [368]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/shard-rkl-6/igt@sriov_basic@bind-unbind-vf.html

  
  [i915#10307]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10307
  [i915#10433]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10433
  [i915#10434]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10434
  [i915#10647]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10647
  [i915#1072]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1072
  [i915#11151]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11151
  [i915#11520]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11520
  [i915#1187]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1187
  [i915#11920]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11920
  [i915#12169]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12169
  [i915#12276]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12276
  [i915#12313]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12313
  [i915#12314]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12314
  [i915#12343]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12343
  [i915#12392]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12392
  [i915#12713]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12713
  [i915#12745]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12745
  [i915#12761]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12761
  [i915#12910]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12910
  [i915#13008]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13008
  [i915#13049]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13049
  [i915#13356]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13356
  [i915#13398]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13398
  [i915#13562]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13562
  [i915#13566]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13566
  [i915#13688]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13688
  [i915#13717]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13717
  [i915#13729]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13729
  [i915#13748]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13748
  [i915#13749]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13749
  [i915#13790]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13790
  [i915#13821]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13821
  [i915#13958]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13958
  [i915#14033]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14033
  [i915#14098]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14098
  [i915#14259]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14259
  [i915#14350]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14350
  [i915#14433]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14433
  [i915#14544]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14544
  [i915#14694]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14694
  [i915#14712]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14712
  [i915#14995]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14995
  [i915#15060]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15060
  [i915#15073]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15073
  [i915#15102]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15102
  [i915#15132]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15132
  [i915#15243]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15243
  [i915#15329]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15329
  [i915#15330]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15330
  [i915#15342]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15342
  [i915#15403]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15403
  [i915#15458]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15458
  [i915#15459]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15459
  [i915#15460]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15460
  [i915#15479]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15479
  [i915#15492]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15492
  [i915#15500]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15500
  [i915#15582]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15582
  [i915#15608]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15608
  [i915#15609]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15609
  [i915#15643]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15643
  [i915#15672]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15672
  [i915#15678]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15678
  [i915#1769]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1769
  [i915#1825]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1825
  [i915#1839]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1839
  [i915#2433]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2433
  [i915#2527]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2527
  [i915#2681]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2681
  [i915#280]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/280
  [i915#2856]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2856
  [i915#3023]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3023
  [i915#3116]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3116
  [i915#3281]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3282
  [i915#3297]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3297
  [i915#3299]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3299
  [i915#3458]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3458
  [i915#3555]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3555
  [i915#3637]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3637
  [i915#3638]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3638
  [i915#3708]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3708
  [i915#3742]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3742
  [i915#3804]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3804
  [i915#3828]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3828
  [i915#3840]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3840
  [i915#3955]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3955
  [i915#4077]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4077
  [i915#4083]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4083
  [i915#4103]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4103
  [i915#4270]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4270
  [i915#4281]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4281
  [i915#4387]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4387
  [i915#4391]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4391
  [i915#4423]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4423
  [i915#4525]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4525
  [i915#4613]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4613
  [i915#4817]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4817
  [i915#4839]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4839
  [i915#5138]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5138
  [i915#5286]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5286
  [i915#5289]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5289
  [i915#5439]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5439
  [i915#5723]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5723
  [i915#6095]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6095
  [i915#6113]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6113
  [i915#6230]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6230
  [i915#6301]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6301
  [i915#6412]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6412
  [i915#658]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/658
  [i915#6944]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6944
  [i915#7116]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7116
  [i915#7118]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7118
  [i915#7162]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7162
  [i915#7173]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7173
  [i915#7443]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7443
  [i915#7582]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7582
  [i915#7697]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7697
  [i915#7828]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7828
  [i915#8228]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8228
  [i915#8399]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8399
  [i915#8430]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8430
  [i915#8516]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8516
  [i915#8562]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8562
  [i915#8623]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8623
  [i915#8708]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8708
  [i915#8825]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8825
  [i915#9323]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9323
  [i915#9424]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9424
  [i915#9683]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9683
  [i915#9685]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9685
  [i915#9723]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9723
  [i915#9732]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9732
  [i915#9766]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9766
  [i915#9878]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9878
  [i915#9906]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9906
  [i915#9917]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9917
  [i915#9934]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9934


Build changes
-------------

  * Linux: CI_DRM_17957 -> Patchwork_161339v1

  CI-20190529: 20190529
  CI_DRM_17957: 9ddce2e2e1c2891bc26ea8648b2ba530b73937fe @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_8744: 8744
  Patchwork_161339v1: 9ddce2e2e1c2891bc26ea8648b2ba530b73937fe @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_161339v1/index.html

[-- Attachment #2: Type: text/html, Size: 136085 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations
  2026-02-09  8:30 ` [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations Arunpravin Paneer Selvam
@ 2026-02-09 19:23   ` kernel test robot
  2026-02-09 19:26   ` kernel test robot
  2026-02-09 21:20   ` kernel test robot
  2 siblings, 0 replies; 11+ messages in thread
From: kernel test robot @ 2026-02-09 19:23 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam, matthew.auld, christian.koenig,
	dri-devel, intel-gfx, intel-xe, amd-gfx
  Cc: oe-kbuild-all, alexander.deucher, Arunpravin Paneer Selvam

Hi Arunpravin,

kernel test robot noticed the following build errors:

[auto build test ERROR on 9d757669b2b22cd224c334924f798393ffca537c]

url:    https://github.com/intel-lab-lkp/linux/commits/Arunpravin-Paneer-Selvam/drm-buddy-Add-KUnit-test-for-offset-aligned-allocations/20260209-163512
base:   9d757669b2b22cd224c334924f798393ffca537c
patch link:    https://lore.kernel.org/r/20260209083051.13376-2-Arunpravin.PaneerSelvam%40amd.com
patch subject: [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations
config: m68k-allmodconfig (https://download.01.org/0day-ci/archive/20260210/202602100334.WD4wuI8R-lkp@intel.com/config)
compiler: m68k-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260210/202602100334.WD4wuI8R-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602100334.WD4wuI8R-lkp@intel.com/

All errors (new ones prefixed by >>):

   drivers/gpu/tests/gpu_buddy_test.c: In function 'gpu_test_buddy_subtree_offset_alignment_stress':
>> drivers/gpu/tests/gpu_buddy_test.c:46:49: error: macro 'KUNIT_ASSERT_FALSE' passed 3 arguments, but takes just 2
      46 |                            "buddy_init failed\n");
         |                                                 ^
   In file included from drivers/gpu/tests/gpu_buddy_test.c:7:
   include/kunit/test.h:1390:9: note: macro 'KUNIT_ASSERT_FALSE' defined here
    1390 | #define KUNIT_ASSERT_FALSE(test, condition) \
         |         ^~~~~~~~~~~~~~~~~~
>> drivers/gpu/tests/gpu_buddy_test.c:45:9: error: 'KUNIT_ASSERT_FALSE' undeclared (first use in this function); did you mean 'KUNIT_ASSERTION'?
      45 |         KUNIT_ASSERT_FALSE(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
         |         ^~~~~~~~~~~~~~~~~~
         |         KUNIT_ASSERTION
   drivers/gpu/tests/gpu_buddy_test.c:45:9: note: each undeclared identifier is reported only once for each function it appears in


vim +/KUNIT_ASSERT_FALSE +46 drivers/gpu/tests/gpu_buddy_test.c

    23	
    24	static void gpu_test_buddy_subtree_offset_alignment_stress(struct kunit *test)
    25	{
    26		struct gpu_buddy_block *block;
    27		struct rb_node *node = NULL;
    28		const u64 mm_size = SZ_2M;
    29		const u64 alignments[] = {
    30			SZ_1M,
    31			SZ_512K,
    32			SZ_256K,
    33			SZ_128K,
    34			SZ_64K,
    35			SZ_32K,
    36			SZ_16K,
    37			SZ_8K,
    38		};
    39	
    40		struct list_head allocated[ARRAY_SIZE(alignments)];
    41		unsigned int i, order, max_subtree_align = 0;
    42		struct gpu_buddy mm;
    43		int ret, tree;
    44	
  > 45		KUNIT_ASSERT_FALSE(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
  > 46				   "buddy_init failed\n");
    47	
    48		for (i = 0; i < ARRAY_SIZE(allocated); i++)
    49			INIT_LIST_HEAD(&allocated[i]);
    50	
    51		/*
    52		 * Exercise subtree_max_alignment tracking by allocating blocks with descending
    53		 * alignment constraints and freeing them in reverse order. This verifies that
    54		 * free-tree augmentation correctly propagates the maximum offset alignment
    55		 * present in each subtree at every stage.
    56		 */
    57	
    58		for (i = 0; i < ARRAY_SIZE(alignments); i++) {
    59			struct gpu_buddy_block *root = NULL;
    60			unsigned int expected;
    61			u64 align;
    62	
    63			align = alignments[i];
    64			expected = ilog2(align) - 1;
    65	
    66			for (;;) {
    67				ret = gpu_buddy_alloc_blocks(&mm,
    68							     0, mm_size,
    69							     SZ_4K, align,
    70							     &allocated[i],
    71							     0);
    72				if (ret)
    73					break;
    74	
    75				block = list_last_entry(&allocated[i],
    76							struct gpu_buddy_block,
    77							link);
    78				KUNIT_EXPECT_EQ(test, gpu_buddy_block_offset(block) & (align - 1), 0ULL);
    79			}
    80	
    81			for (order = mm.max_order + 1; order-- > 0 && !root; ) {
    82				for (tree = 0; tree < 2; tree++) {
    83					node = mm.free_trees[tree][order].rb_node;
    84					if (node) {
    85						root = container_of(node,
    86								    struct gpu_buddy_block,
    87								    rb);
    88						break;
    89					}
    90				}
    91			}
    92	
    93			KUNIT_ASSERT_NOT_NULL(test, root);
    94			KUNIT_EXPECT_EQ(test, root->subtree_max_alignment, expected);
    95		}
    96	
    97		for (i = ARRAY_SIZE(alignments); i-- > 0; ) {
    98			gpu_buddy_free_list(&mm, &allocated[i], 0);
    99	
   100			for (order = 0; order <= mm.max_order; order++) {
   101				for (tree = 0; tree < 2; tree++) {
   102					node = mm.free_trees[tree][order].rb_node;
   103					if (!node)
   104						continue;
   105	
   106					block = container_of(node, struct gpu_buddy_block, rb);
   107					max_subtree_align = max(max_subtree_align, block->subtree_max_alignment);
   108				}
   109			}
   110	
   111			KUNIT_EXPECT_GE(test, max_subtree_align, ilog2(alignments[i]));
   112		}
   113	
   114		gpu_buddy_fini(&mm);
   115	}
   116	
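Going by the two-argument KUNIT_ASSERT_FALSE() definition quoted above (include/kunit/test.h:1390), a plausible fix (sketch only, assuming this tree carries the upstream KUNIT_ASSERT_FALSE_MSG() variant) is to switch to the message-taking form:

```c
	/* KUNIT_ASSERT_FALSE() takes (test, condition); the _MSG variant
	 * accepts the extra format string that the call site passes. */
	KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
			       "buddy_init failed\n");
```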

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations
  2026-02-09  8:30 ` [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations Arunpravin Paneer Selvam
  2026-02-09 19:23   ` kernel test robot
@ 2026-02-09 19:26   ` kernel test robot
  2026-02-09 21:20   ` kernel test robot
  2 siblings, 0 replies; 11+ messages in thread
From: kernel test robot @ 2026-02-09 19:26 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam, matthew.auld, christian.koenig,
	dri-devel, intel-gfx, intel-xe, amd-gfx
  Cc: oe-kbuild-all, alexander.deucher, Arunpravin Paneer Selvam

Hi Arunpravin,

kernel test robot noticed the following build errors:

[auto build test ERROR on 9d757669b2b22cd224c334924f798393ffca537c]

url:    https://github.com/intel-lab-lkp/linux/commits/Arunpravin-Paneer-Selvam/drm-buddy-Add-KUnit-test-for-offset-aligned-allocations/20260209-163512
base:   9d757669b2b22cd224c334924f798393ffca537c
patch link:    https://lore.kernel.org/r/20260209083051.13376-2-Arunpravin.PaneerSelvam%40amd.com
patch subject: [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations
config: x86_64-rhel-9.4-kunit (https://download.01.org/0day-ci/archive/20260209/202602092035.vOm98J4x-lkp@intel.com/config)
compiler: gcc-14 (Debian 14.2.0-19) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260209/202602092035.vOm98J4x-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602092035.vOm98J4x-lkp@intel.com/

All errors (new ones prefixed by >>):

   drivers/gpu/tests/gpu_buddy_test.c: In function 'gpu_test_buddy_subtree_offset_alignment_stress':
>> drivers/gpu/tests/gpu_buddy_test.c:46:49: error: macro "KUNIT_ASSERT_FALSE" passed 3 arguments, but takes just 2
      46 |                            "buddy_init failed\n");
         |                                                 ^
   In file included from drivers/gpu/tests/gpu_buddy_test.c:7:
   include/kunit/test.h:1390:9: note: macro "KUNIT_ASSERT_FALSE" defined here
    1390 | #define KUNIT_ASSERT_FALSE(test, condition) \
         |         ^~~~~~~~~~~~~~~~~~
>> drivers/gpu/tests/gpu_buddy_test.c:45:9: error: 'KUNIT_ASSERT_FALSE' undeclared (first use in this function); did you mean 'KUNIT_ASSERTION'?
      45 |         KUNIT_ASSERT_FALSE(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
         |         ^~~~~~~~~~~~~~~~~~
         |         KUNIT_ASSERTION
   drivers/gpu/tests/gpu_buddy_test.c:45:9: note: each undeclared identifier is reported only once for each function it appears in



-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations
  2026-02-09  8:30 ` [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations Arunpravin Paneer Selvam
  2026-02-09 19:23   ` kernel test robot
  2026-02-09 19:26   ` kernel test robot
@ 2026-02-09 21:20   ` kernel test robot
  2 siblings, 0 replies; 11+ messages in thread
From: kernel test robot @ 2026-02-09 21:20 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam, matthew.auld, christian.koenig,
	dri-devel, intel-gfx, intel-xe, amd-gfx
  Cc: llvm, oe-kbuild-all, alexander.deucher, Arunpravin Paneer Selvam

Hi Arunpravin,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 9d757669b2b22cd224c334924f798393ffca537c]

url:    https://github.com/intel-lab-lkp/linux/commits/Arunpravin-Paneer-Selvam/drm-buddy-Add-KUnit-test-for-offset-aligned-allocations/20260209-163512
base:   9d757669b2b22cd224c334924f798393ffca537c
patch link:    https://lore.kernel.org/r/20260209083051.13376-2-Arunpravin.PaneerSelvam%40amd.com
patch subject: [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations
config: riscv-allyesconfig (https://download.01.org/0day-ci/archive/20260210/202602100509.jUETbEEY-lkp@intel.com/config)
compiler: clang version 16.0.6 (https://github.com/llvm/llvm-project 7cbf1a2591520c2491aa35339f227775f4d3adf6)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260210/202602100509.jUETbEEY-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602100509.jUETbEEY-lkp@intel.com/

All warnings (new ones prefixed by >>):

   drivers/gpu/tests/gpu_buddy_test.c:46:7: error: too many arguments provided to function-like macro invocation
                              "buddy_init failed\n");
                              ^
   include/kunit/test.h:1390:9: note: macro 'KUNIT_ASSERT_FALSE' defined here
   #define KUNIT_ASSERT_FALSE(test, condition) \
           ^
   drivers/gpu/tests/gpu_buddy_test.c:45:2: error: use of undeclared identifier 'KUNIT_ASSERT_FALSE'; did you mean 'KUNIT_ASSERTION'?
           KUNIT_ASSERT_FALSE(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
           ^~~~~~~~~~~~~~~~~~
           KUNIT_ASSERTION
   include/kunit/assert.h:27:2: note: 'KUNIT_ASSERTION' declared here
           KUNIT_ASSERTION,
           ^
>> drivers/gpu/tests/gpu_buddy_test.c:45:2: warning: expression result unused [-Wunused-value]
           KUNIT_ASSERT_FALSE(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
           ^~~~~~~~~~~~~~~~~~
   1 warning and 2 errors generated.



-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling
  2026-02-09  8:30 [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling Arunpravin Paneer Selvam
                   ` (2 preceding siblings ...)
  2026-02-09 13:22 ` ✗ i915.CI.Full: failure " Patchwork
@ 2026-02-10 16:26 ` Matthew Auld
  2026-02-17  6:03   ` Arunpravin Paneer Selvam
  3 siblings, 1 reply; 11+ messages in thread
From: Matthew Auld @ 2026-02-10 16:26 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam, christian.koenig, dri-devel, intel-gfx,
	intel-xe, amd-gfx
  Cc: alexander.deucher

On 09/02/2026 08:30, Arunpravin Paneer Selvam wrote:
> Large alignment requests previously forced the buddy allocator to search by
> alignment order, which often caused higher-order free blocks to be split even
> when a suitably aligned smaller region already existed within them. This led
> to excessive fragmentation, especially for workloads requesting small sizes
> with large alignment constraints.
> 
> This change prioritizes the requested allocation size during the search and
> uses an augmented RB-tree field (subtree_max_alignment) to efficiently locate
> free blocks that satisfy both size and offset-alignment requirements. As a
> result, the allocator can directly select an aligned sub-region without
> splitting larger blocks unnecessarily.
> 
> A practical example is the VKCTS test
> dEQP-VK.memory.allocation.basic.size_8KiB.reverse.count_4000, which repeatedly
> allocates 8 KiB buffers with a 256 KiB alignment. Previously, such allocations
> caused large blocks to be split aggressively, despite smaller aligned regions
> being sufficient. With this change, those aligned regions are reused directly,
> significantly reducing fragmentation.
> 
> This improvement is visible in the amdgpu VRAM buddy allocator state
> (/sys/kernel/debug/dri/1/amdgpu_vram_mm). After the change, higher-order blocks
> are preserved and the number of low-order fragments is substantially reduced.
> 
> Before:
>    order- 5 free: 1936 MiB, blocks: 15490
>    order- 4 free:  967 MiB, blocks: 15486
>    order- 3 free:  483 MiB, blocks: 15485
>    order- 2 free:  241 MiB, blocks: 15486
>    order- 1 free:  241 MiB, blocks: 30948
> 
> After:
>    order- 5 free:  493 MiB, blocks:  3941
>    order- 4 free:  246 MiB, blocks:  3943
>    order- 3 free:  123 MiB, blocks:  4101
>    order- 2 free:   61 MiB, blocks:  4101
>    order- 1 free:   61 MiB, blocks:  8018
> 
> By avoiding unnecessary splits, this change improves allocator efficiency and
> helps maintain larger contiguous free regions under heavy offset-aligned
> allocation workloads.
> 
> v2:(Matthew)
>    - Update augmented information along the path to the inserted node.
> 
> v3:
>    - Move the patch to gpu/buddy.c file.
> 
> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
> Suggested-by: Christian König <christian.koenig@amd.com>
> ---
>   drivers/gpu/buddy.c       | 271 +++++++++++++++++++++++++++++++-------
>   include/linux/gpu_buddy.h |   2 +
>   2 files changed, 228 insertions(+), 45 deletions(-)
> 
> diff --git a/drivers/gpu/buddy.c b/drivers/gpu/buddy.c
> index 603c59a2013a..3a25eed050ba 100644
> --- a/drivers/gpu/buddy.c
> +++ b/drivers/gpu/buddy.c
> @@ -14,6 +14,16 @@
>   
>   static struct kmem_cache *slab_blocks;
>   
> +static unsigned int gpu_buddy_block_offset_alignment(struct gpu_buddy_block *block)
> +{
> +	return __ffs(gpu_buddy_block_offset(block));

__ffs() will be undefined for offset zero it seems, so might blow up in 
some strange way. I guess just return the max possible alignment here if 
offset is zero? Also are we meant to use __ffs64() here?

> +}
> +
> +RB_DECLARE_CALLBACKS_MAX(static, gpu_buddy_augment_cb,
> +			 struct gpu_buddy_block, rb,
> +			 unsigned int, subtree_max_alignment,
> +			 gpu_buddy_block_offset_alignment);
> +
>   static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
>   					       struct gpu_buddy_block *parent,
>   					       unsigned int order,
> @@ -31,6 +41,9 @@ static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
>   	block->header |= order;
>   	block->parent = parent;
>   
> +	block->subtree_max_alignment =
> +		gpu_buddy_block_offset_alignment(block);
> +
>   	RB_CLEAR_NODE(&block->rb);
>   
>   	BUG_ON(block->header & GPU_BUDDY_HEADER_UNUSED);
> @@ -67,26 +80,42 @@ static bool rbtree_is_empty(struct rb_root *root)
>   	return RB_EMPTY_ROOT(root);
>   }
>   
> -static bool gpu_buddy_block_offset_less(const struct gpu_buddy_block *block,
> -					const struct gpu_buddy_block *node)
> -{
> -	return gpu_buddy_block_offset(block) < gpu_buddy_block_offset(node);
> -}
> -
> -static bool rbtree_block_offset_less(struct rb_node *block,
> -				     const struct rb_node *node)
> -{
> -	return gpu_buddy_block_offset_less(rbtree_get_free_block(block),
> -					   rbtree_get_free_block(node));
> -}
> -
>   static void rbtree_insert(struct gpu_buddy *mm,
>   			  struct gpu_buddy_block *block,
>   			  enum gpu_buddy_free_tree tree)
>   {
> -	rb_add(&block->rb,
> -	       &mm->free_trees[tree][gpu_buddy_block_order(block)],
> -	       rbtree_block_offset_less);
> +	struct rb_node **link, *parent = NULL;
> +	unsigned int block_alignment, order;
> +	struct gpu_buddy_block *node;
> +	struct rb_root *root;
> +
> +	order = gpu_buddy_block_order(block);
> +	block_alignment = gpu_buddy_block_offset_alignment(block);
> +
> +	root = &mm->free_trees[tree][order];
> +	link = &root->rb_node;
> +
> +	while (*link) {
> +		parent = *link;
> +		node = rbtree_get_free_block(parent);
> +		/*
> +		 * Manual augmentation update during insertion traversal. Required
> +		 * because rb_insert_augmented() only calls rotate callback during
> +		 * rotations. This ensures all ancestors on the insertion path have
> +		 * correct subtree_max_alignment values.
> +		 */
> +		if (node->subtree_max_alignment < block_alignment)
> +			node->subtree_max_alignment = block_alignment;
> +
> +		if (gpu_buddy_block_offset(block) < gpu_buddy_block_offset(node))
> +			link = &parent->rb_left;
> +		else
> +			link = &parent->rb_right;
> +	}
> +
> +	block->subtree_max_alignment = block_alignment;
> +	rb_link_node(&block->rb, parent, link);
> +	rb_insert_augmented(&block->rb, root, &gpu_buddy_augment_cb);
>   }
>   
>   static void rbtree_remove(struct gpu_buddy *mm,
> @@ -99,7 +128,7 @@ static void rbtree_remove(struct gpu_buddy *mm,
>   	tree = get_block_tree(block);
>   	root = &mm->free_trees[tree][order];
>   
> -	rb_erase(&block->rb, root);
> +	rb_erase_augmented(&block->rb, root, &gpu_buddy_augment_cb);
>   	RB_CLEAR_NODE(&block->rb);
>   }
>   
> @@ -790,6 +819,132 @@ alloc_from_freetree(struct gpu_buddy *mm,
>   	return ERR_PTR(err);
>   }
>   
> +static bool
> +gpu_buddy_can_offset_align(u64 size, u64 min_block_size)
> +{
> +	return size < min_block_size && is_power_of_2(size);
> +}
> +
> +static bool gpu_buddy_subtree_can_satisfy(struct rb_node *node,
> +					  unsigned int alignment)
> +{
> +	struct gpu_buddy_block *block;
> +
> +	if (!node)
> +		return false;

All callers seem to handle null case already, so could potentially drop 
this?

> +
> +	block = rbtree_get_free_block(node);
> +	return block->subtree_max_alignment >= alignment;
> +}
> +
> +static struct gpu_buddy_block *
> +gpu_buddy_find_block_aligned(struct gpu_buddy *mm,
> +			     enum gpu_buddy_free_tree tree,
> +			     unsigned int order,
> +			     unsigned int tmp,
> +			     unsigned int alignment,
> +			     unsigned long flags)
> +{
> +	struct rb_root *root = &mm->free_trees[tree][tmp];
> +	struct rb_node *rb = root->rb_node;
> +
> +	while (rb) {
> +		struct gpu_buddy_block *block = rbtree_get_free_block(rb);
> +		struct rb_node *left_node = rb->rb_left, *right_node = rb->rb_right;
> +
> +		if (right_node) {
> +			if (gpu_buddy_subtree_can_satisfy(right_node, alignment)) {
> +				rb = right_node;
> +				continue;
> +			}
> +		}
> +
> +		if (gpu_buddy_block_order(block) >= order &&

Is this not always true? With that we can drop order, or better yet 
s/tmp/order/ ?

> +		    __ffs(gpu_buddy_block_offset(block)) >= alignment)

Same here with undefined offset zero case. I guess also use the helper.

> +			return block;
> +
> +		if (left_node) {
> +			if (gpu_buddy_subtree_can_satisfy(left_node, alignment)) {
> +				rb = left_node;
> +				continue;
> +			}
> +		}
> +
> +		break;
> +	}
> +
> +	return NULL;
> +}
> +
> +static struct gpu_buddy_block *
> +gpu_buddy_offset_aligned_allocation(struct gpu_buddy *mm,
> +				    u64 size,
> +				    u64 min_block_size,
> +				    unsigned long flags)
> +{
> +	struct gpu_buddy_block *block = NULL;
> +	unsigned int order, tmp, alignment;
> +	struct gpu_buddy_block *buddy;
> +	enum gpu_buddy_free_tree tree;
> +	unsigned long pages;
> +	int err;
> +
> +	alignment = ilog2(min_block_size);
> +	pages = size >> ilog2(mm->chunk_size);
> +	order = fls(pages) - 1;
> +
> +	tree = (flags & GPU_BUDDY_CLEAR_ALLOCATION) ?
> +		GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
> +
> +	for (tmp = order; tmp <= mm->max_order; ++tmp) {
> +		block = gpu_buddy_find_block_aligned(mm, tree, order,
> +						     tmp, alignment, flags);
> +		if (!block) {
> +			tree = (tree == GPU_BUDDY_CLEAR_TREE) ?
> +				GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
> +			block = gpu_buddy_find_block_aligned(mm, tree, order,
> +							     tmp, alignment, flags);
> +		}
> +
> +		if (block)
> +			break;
> +	}
> +
> +	if (!block)
> +		return ERR_PTR(-ENOSPC);
> +
> +	while (gpu_buddy_block_order(block) > order) {
> +		struct gpu_buddy_block *left, *right;
> +
> +		err = split_block(mm, block);
> +		if (unlikely(err))
> +			goto err_undo;
> +
> +		left  = block->left;
> +		right = block->right;
> +
> +		if (__ffs(gpu_buddy_block_offset(right)) >= alignment)

Might be better to use the helper for this?

> +			block = right;
> +		else
> +			block = left;
> +	}
> +
> +	return block;
> +
> +err_undo:
> +	/*
> +	 * We really don't want to leave around a bunch of split blocks, since
> +	 * bigger is better, so make sure we merge everything back before we
> +	 * free the allocated blocks.
> +	 */
> +	buddy = __get_buddy(block);
> +	if (buddy &&
> +	    (gpu_buddy_block_is_free(block) &&
> +	     gpu_buddy_block_is_free(buddy)))
> +		__gpu_buddy_free(mm, block, false);
> +	return ERR_PTR(err);
> +}
> +
>   static int __alloc_range(struct gpu_buddy *mm,
>   			 struct list_head *dfs,
>   			 u64 start, u64 size,
> @@ -1059,6 +1214,7 @@ EXPORT_SYMBOL(gpu_buddy_block_trim);
>   static struct gpu_buddy_block *
>   __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>   			 u64 start, u64 end,
> +			 u64 size, u64 min_block_size,
>   			 unsigned int order,
>   			 unsigned long flags)
>   {
> @@ -1066,6 +1222,11 @@ __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>   		/* Allocate traversing within the range */
>   		return  __gpu_buddy_alloc_range_bias(mm, start, end,
>   						     order, flags);
> +	else if (size < min_block_size)
> +		/* Allocate from an offset-aligned region without size rounding */
> +		return gpu_buddy_offset_aligned_allocation(mm, size,
> +							   min_block_size,
> +							   flags);
>   	else
>   		/* Allocate from freetree */
>   		return alloc_from_freetree(mm, order, flags);
> @@ -1137,8 +1298,11 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>   	if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) {
>   		size = roundup_pow_of_two(size);
>   		min_block_size = size;
> -	/* Align size value to min_block_size */
> -	} else if (!IS_ALIGNED(size, min_block_size)) {
> +		/*
> +		 * Normalize the requested size to min_block_size for regular allocations.
> +		 * Offset-aligned allocations intentionally skip size rounding.
> +		 */
> +	} else if (!gpu_buddy_can_offset_align(size, min_block_size)) {
>   		size = round_up(size, min_block_size);
>   	}
>   
> @@ -1158,43 +1322,60 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>   	do {
>   		order = min(order, (unsigned int)fls(pages) - 1);
>   		BUG_ON(order > mm->max_order);
> -		BUG_ON(order < min_order);
> +		/*
> +		 * Regular allocations must not allocate blocks smaller than min_block_size.
> +		 * Offset-aligned allocations deliberately bypass this constraint.
> +		 */
> +		BUG_ON(size >= min_block_size && order < min_order);
>   
>   		do {
> +			unsigned int fallback_order;
> +
>   			block = __gpu_buddy_alloc_blocks(mm, start,
>   							 end,
> +							 size,
> +							 min_block_size,
>   							 order,
>   							 flags);
>   			if (!IS_ERR(block))
>   				break;
>   
> -			if (order-- == min_order) {
> -				/* Try allocation through force merge method */
> -				if (mm->clear_avail &&
> -				    !__force_merge(mm, start, end, min_order)) {
> -					block = __gpu_buddy_alloc_blocks(mm, start,
> -									 end,
> -									 min_order,
> -									 flags);
> -					if (!IS_ERR(block)) {
> -						order = min_order;
> -						break;
> -					}
> -				}
> +			if (size < min_block_size) {
> +				fallback_order = order;
> +			} else if (order == min_order) {
> +				fallback_order = min_order;
> +			} else {
> +				order--;
> +				continue;
> +			}
>   
> -				/*
> -				 * Try contiguous block allocation through
> -				 * try harder method.
> -				 */
> -				if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
> -				    !(flags & GPU_BUDDY_RANGE_ALLOCATION))
> -					return __alloc_contig_try_harder(mm,
> -									 original_size,
> -									 original_min_size,
> -									 blocks);
> -				err = -ENOSPC;
> -				goto err_free;
> +			/* Try allocation through force merge method */
> +			if (mm->clear_avail &&
> +			    !__force_merge(mm, start, end, fallback_order)) {
> +				block = __gpu_buddy_alloc_blocks(mm, start,
> +								 end,
> +								 size,
> +								 min_block_size,
> +								 fallback_order,
> +								 flags);
> +				if (!IS_ERR(block)) {
> +					order = fallback_order;
> +					break;
> +				}
>   			}
> +
> +			/*
> +			 * Try contiguous block allocation through
> +			 * try harder method.
> +			 */
> +			if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
> +			    !(flags & GPU_BUDDY_RANGE_ALLOCATION))
> +				return __alloc_contig_try_harder(mm,
> +								 original_size,
> +								 original_min_size,
> +								 blocks);
> +			err = -ENOSPC;
> +			goto err_free;
>   		} while (1);
>   
>   		mark_allocated(mm, block);
> diff --git a/include/linux/gpu_buddy.h b/include/linux/gpu_buddy.h
> index 07ac65db6d2e..7ad817c69ec6 100644
> --- a/include/linux/gpu_buddy.h
> +++ b/include/linux/gpu_buddy.h
> @@ -11,6 +11,7 @@
>   #include <linux/slab.h>
>   #include <linux/sched.h>
>   #include <linux/rbtree.h>
> +#include <linux/rbtree_augmented.h>
>   
>   #define GPU_BUDDY_RANGE_ALLOCATION		BIT(0)
>   #define GPU_BUDDY_TOPDOWN_ALLOCATION		BIT(1)
> @@ -58,6 +59,7 @@ struct gpu_buddy_block {
>   	};
>   
>   	struct list_head tmp_link;
> +	unsigned int subtree_max_alignment;
>   };
>   
>   /* Order-zero must be at least SZ_4K */
> 
> base-commit: 9d757669b2b22cd224c334924f798393ffca537c



* Re: [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling
  2026-02-10 16:26 ` [PATCH v3 1/2] " Matthew Auld
@ 2026-02-17  6:03   ` Arunpravin Paneer Selvam
  2026-02-17 10:01     ` Matthew Auld
  0 siblings, 1 reply; 11+ messages in thread
From: Arunpravin Paneer Selvam @ 2026-02-17  6:03 UTC (permalink / raw)
  To: Matthew Auld, christian.koenig, dri-devel, intel-gfx, intel-xe,
	amd-gfx
  Cc: alexander.deucher

Hi Matthew,

On 2/10/2026 9:56 PM, Matthew Auld wrote:
> On 09/02/2026 08:30, Arunpravin Paneer Selvam wrote:
>> Large alignment requests previously forced the buddy allocator to 
>> search by
>> alignment order, which often caused higher-order free blocks to be 
>> split even
>> when a suitably aligned smaller region already existed within them. 
>> This led
>> to excessive fragmentation, especially for workloads requesting small 
>> sizes
>> with large alignment constraints.
>>
>> This change prioritizes the requested allocation size during the 
>> search and
>> uses an augmented RB-tree field (subtree_max_alignment) to 
>> efficiently locate
>> free blocks that satisfy both size and offset-alignment requirements. 
>> As a
>> result, the allocator can directly select an aligned sub-region without
>> splitting larger blocks unnecessarily.
>>
>> A practical example is the VKCTS test
>> dEQP-VK.memory.allocation.basic.size_8KiB.reverse.count_4000, which 
>> repeatedly
>> allocates 8 KiB buffers with a 256 KiB alignment. Previously, such 
>> allocations
>> caused large blocks to be split aggressively, despite smaller aligned 
>> regions
>> being sufficient. With this change, those aligned regions are reused 
>> directly,
>> significantly reducing fragmentation.
>>
>> This improvement is visible in the amdgpu VRAM buddy allocator state
>> (/sys/kernel/debug/dri/1/amdgpu_vram_mm). After the change, 
>> higher-order blocks
>> are preserved and the number of low-order fragments is substantially 
>> reduced.
>>
>> Before:
>>    order- 5 free: 1936 MiB, blocks: 15490
>>    order- 4 free:  967 MiB, blocks: 15486
>>    order- 3 free:  483 MiB, blocks: 15485
>>    order- 2 free:  241 MiB, blocks: 15486
>>    order- 1 free:  241 MiB, blocks: 30948
>>
>> After:
>>    order- 5 free:  493 MiB, blocks:  3941
>>    order- 4 free:  246 MiB, blocks:  3943
>>    order- 3 free:  123 MiB, blocks:  4101
>>    order- 2 free:   61 MiB, blocks:  4101
>>    order- 1 free:   61 MiB, blocks:  8018
>>
>> By avoiding unnecessary splits, this change improves allocator 
>> efficiency and
>> helps maintain larger contiguous free regions under heavy offset-aligned
>> allocation workloads.
>>
>> v2:(Matthew)
>>    - Update augmented information along the path to the inserted node.
>>
>> v3:
>>    - Move the patch to gpu/buddy.c file.
>>
>> Signed-off-by: Arunpravin Paneer Selvam 
>> <Arunpravin.PaneerSelvam@amd.com>
>> Suggested-by: Christian König <christian.koenig@amd.com>
>> ---
>>   drivers/gpu/buddy.c       | 271 +++++++++++++++++++++++++++++++-------
>>   include/linux/gpu_buddy.h |   2 +
>>   2 files changed, 228 insertions(+), 45 deletions(-)
>>
>> diff --git a/drivers/gpu/buddy.c b/drivers/gpu/buddy.c
>> index 603c59a2013a..3a25eed050ba 100644
>> --- a/drivers/gpu/buddy.c
>> +++ b/drivers/gpu/buddy.c
>> @@ -14,6 +14,16 @@
>>     static struct kmem_cache *slab_blocks;
>>   +static unsigned int gpu_buddy_block_offset_alignment(struct 
>> gpu_buddy_block *block)
>> +{
>> +    return __ffs(gpu_buddy_block_offset(block));
>
> __ffs() will be undefined for offset zero it seems, so might blow up 
> in some strange way. I guess just return the max possible alignment 
> here if offset is zero? Also are we meant to use __ffs64() here?
Yes, I had the same concern about __ffs() being undefined when the 
offset is zero. My initial thought was to derive the maximum possible 
alignment from the allocator size using ilog2(mm->size) and return that 
value for the zero-offset case.

However, RB_DECLARE_CALLBACKS_MAX() requires the compute callback 
(gpu_buddy_block_offset_alignment()) to accept only a single struct 
gpu_buddy_block * argument. It does not provide a mechanism to pass 
additional context such as the associated struct gpu_buddy *mm. As a 
result, deriving the alignment from allocator state (e.g., via 
ilog2(mm->size)) is not directly feasible within this callback. When I 
tested the zero-offset case locally, __ffs() returned 64, which 
effectively corresponds to the maximum alignment for a u64 offset. Based 
on that observation, I initially left the __ffs() call unchanged for the 
zero case as well.

One possible alternative would be to store a pointer to struct gpu_buddy 
inside each gpu_buddy_block.

All other review comments have been addressed, and I will send a v4 once 
this point is clarified.

Regards,
Arun.
>
>> +}
>> +
>> +RB_DECLARE_CALLBACKS_MAX(static, gpu_buddy_augment_cb,
>> +             struct gpu_buddy_block, rb,
>> +             unsigned int, subtree_max_alignment,
>> +             gpu_buddy_block_offset_alignment);
>> +
>>   static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
>>                              struct gpu_buddy_block *parent,
>>                              unsigned int order,
>> @@ -31,6 +41,9 @@ static struct gpu_buddy_block 
>> *gpu_block_alloc(struct gpu_buddy *mm,
>>       block->header |= order;
>>       block->parent = parent;
>>   +    block->subtree_max_alignment =
>> +        gpu_buddy_block_offset_alignment(block);
>> +
>>       RB_CLEAR_NODE(&block->rb);
>>         BUG_ON(block->header & GPU_BUDDY_HEADER_UNUSED);
>> @@ -67,26 +80,42 @@ static bool rbtree_is_empty(struct rb_root *root)
>>       return RB_EMPTY_ROOT(root);
>>   }
>>   -static bool gpu_buddy_block_offset_less(const struct 
>> gpu_buddy_block *block,
>> -                    const struct gpu_buddy_block *node)
>> -{
>> -    return gpu_buddy_block_offset(block) < 
>> gpu_buddy_block_offset(node);
>> -}
>> -
>> -static bool rbtree_block_offset_less(struct rb_node *block,
>> -                     const struct rb_node *node)
>> -{
>> -    return gpu_buddy_block_offset_less(rbtree_get_free_block(block),
>> -                       rbtree_get_free_block(node));
>> -}
>> -
>>   static void rbtree_insert(struct gpu_buddy *mm,
>>                 struct gpu_buddy_block *block,
>>                 enum gpu_buddy_free_tree tree)
>>   {
>> -    rb_add(&block->rb,
>> - &mm->free_trees[tree][gpu_buddy_block_order(block)],
>> -           rbtree_block_offset_less);
>> +    struct rb_node **link, *parent = NULL;
>> +    unsigned int block_alignment, order;
>> +    struct gpu_buddy_block *node;
>> +    struct rb_root *root;
>> +
>> +    order = gpu_buddy_block_order(block);
>> +    block_alignment = gpu_buddy_block_offset_alignment(block);
>> +
>> +    root = &mm->free_trees[tree][order];
>> +    link = &root->rb_node;
>> +
>> +    while (*link) {
>> +        parent = *link;
>> +        node = rbtree_get_free_block(parent);
>> +        /*
>> +         * Manual augmentation update during insertion traversal. 
>> Required
>> +         * because rb_insert_augmented() only calls rotate callback 
>> during
>> +         * rotations. This ensures all ancestors on the insertion 
>> path have
>> +         * correct subtree_max_alignment values.
>> +         */
>> +        if (node->subtree_max_alignment < block_alignment)
>> +            node->subtree_max_alignment = block_alignment;
>> +
>> +        if (gpu_buddy_block_offset(block) < 
>> gpu_buddy_block_offset(node))
>> +            link = &parent->rb_left;
>> +        else
>> +            link = &parent->rb_right;
>> +    }
>> +
>> +    block->subtree_max_alignment = block_alignment;
>> +    rb_link_node(&block->rb, parent, link);
>> +    rb_insert_augmented(&block->rb, root, &gpu_buddy_augment_cb);
>>   }
>>     static void rbtree_remove(struct gpu_buddy *mm,
>> @@ -99,7 +128,7 @@ static void rbtree_remove(struct gpu_buddy *mm,
>>       tree = get_block_tree(block);
>>       root = &mm->free_trees[tree][order];
>>   -    rb_erase(&block->rb, root);
>> +    rb_erase_augmented(&block->rb, root, &gpu_buddy_augment_cb);
>>       RB_CLEAR_NODE(&block->rb);
>>   }
>>   @@ -790,6 +819,132 @@ alloc_from_freetree(struct gpu_buddy *mm,
>>       return ERR_PTR(err);
>>   }
>>   +static bool
>> +gpu_buddy_can_offset_align(u64 size, u64 min_block_size)
>> +{
>> +    return size < min_block_size && is_power_of_2(size);
>> +}
>> +
>> +static bool gpu_buddy_subtree_can_satisfy(struct rb_node *node,
>> +                      unsigned int alignment)
>> +{
>> +    struct gpu_buddy_block *block;
>> +
>> +    if (!node)
>> +        return false;
>
> All callers seem to handle null case already, so could potentially 
> drop this?
>
>> +
>> +    block = rbtree_get_free_block(node);
>> +    return block->subtree_max_alignment >= alignment;
>> +}
>> +
>> +static struct gpu_buddy_block *
>> +gpu_buddy_find_block_aligned(struct gpu_buddy *mm,
>> +                 enum gpu_buddy_free_tree tree,
>> +                 unsigned int order,
>> +                 unsigned int tmp,
>> +                 unsigned int alignment,
>> +                 unsigned long flags)
>> +{
>> +    struct rb_root *root = &mm->free_trees[tree][tmp];
>> +    struct rb_node *rb = root->rb_node;
>> +
>> +    while (rb) {
>> +        struct gpu_buddy_block *block = rbtree_get_free_block(rb);
>> +        struct rb_node *left_node = rb->rb_left, *right_node = 
>> rb->rb_right;
>> +
>> +        if (right_node) {
>> +            if (gpu_buddy_subtree_can_satisfy(right_node, alignment)) {
>> +                rb = right_node;
>> +                continue;
>> +            }
>> +        }
>> +
>> +        if (gpu_buddy_block_order(block) >= order &&
>
> Is this not always true? With that we can drop order, or better yet 
> s/tmp/order/ ?
>
>> + __ffs(gpu_buddy_block_offset(block)) >= alignment)
>
> Same here with undefined offset zero case. I guess also use the helper.
>
>> +            return block;
>> +
>> +        if (left_node) {
>> +            if (gpu_buddy_subtree_can_satisfy(left_node, alignment)) {
>> +                rb = left_node;
>> +                continue;
>> +            }
>> +        }
>> +
>> +        break;
>> +    }
>> +
>> +    return NULL;
>> +}
>> +
>> +static struct gpu_buddy_block *
>> +gpu_buddy_offset_aligned_allocation(struct gpu_buddy *mm,
>> +                    u64 size,
>> +                    u64 min_block_size,
>> +                    unsigned long flags)
>> +{
>> +    struct gpu_buddy_block *block = NULL;
>> +    unsigned int order, tmp, alignment;
>> +    struct gpu_buddy_block *buddy;
>> +    enum gpu_buddy_free_tree tree;
>> +    unsigned long pages;
>> +    int err;
>> +
>> +    alignment = ilog2(min_block_size);
>> +    pages = size >> ilog2(mm->chunk_size);
>> +    order = fls(pages) - 1;
>> +
>> +    tree = (flags & GPU_BUDDY_CLEAR_ALLOCATION) ?
>> +        GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
>> +
>> +    for (tmp = order; tmp <= mm->max_order; ++tmp) {
>> +        block = gpu_buddy_find_block_aligned(mm, tree, order,
>> +                             tmp, alignment, flags);
>> +        if (!block) {
>> +            tree = (tree == GPU_BUDDY_CLEAR_TREE) ?
>> +                GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
>> +            block = gpu_buddy_find_block_aligned(mm, tree, order,
>> +                                 tmp, alignment, flags);
>> +        }
>> +
>> +        if (block)
>> +            break;
>> +    }
>> +
>> +    if (!block)
>> +        return ERR_PTR(-ENOSPC);
>> +
>> +    while (gpu_buddy_block_order(block) > order) {
>> +        struct gpu_buddy_block *left, *right;
>> +
>> +        err = split_block(mm, block);
>> +        if (unlikely(err))
>> +            goto err_undo;
>> +
>> +        left  = block->left;
>> +        right = block->right;
>> +
>> +        if (__ffs(gpu_buddy_block_offset(right)) >= alignment)
>
> Might be better to use the helper for this?
>
>> +            block = right;
>> +        else
>> +            block = left;
>> +    }
>> +
>> +    return block;
>> +
>> +err_undo:
>> +    /*
>> +     * We really don't want to leave around a bunch of split blocks, 
>> since
>> +     * bigger is better, so make sure we merge everything back 
>> before we
>> +     * free the allocated blocks.
>> +     */
>> +    buddy = __get_buddy(block);
>> +    if (buddy &&
>> +        (gpu_buddy_block_is_free(block) &&
>> +         gpu_buddy_block_is_free(buddy)))
>> +        __gpu_buddy_free(mm, block, false);
>> +    return ERR_PTR(err);
>> +}
>> +
>>   static int __alloc_range(struct gpu_buddy *mm,
>>                struct list_head *dfs,
>>                u64 start, u64 size,
>> @@ -1059,6 +1214,7 @@ EXPORT_SYMBOL(gpu_buddy_block_trim);
>>   static struct gpu_buddy_block *
>>   __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>                u64 start, u64 end,
>> +             u64 size, u64 min_block_size,
>>                unsigned int order,
>>                unsigned long flags)
>>   {
>> @@ -1066,6 +1222,11 @@ __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>           /* Allocate traversing within the range */
>>           return  __gpu_buddy_alloc_range_bias(mm, start, end,
>>                                order, flags);
>> +    else if (size < min_block_size)
>> +        /* Allocate from an offset-aligned region without size 
>> rounding */
>> +        return gpu_buddy_offset_aligned_allocation(mm, size,
>> +                               min_block_size,
>> +                               flags);
>>       else
>>           /* Allocate from freetree */
>>           return alloc_from_freetree(mm, order, flags);
>> @@ -1137,8 +1298,11 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>       if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) {
>>           size = roundup_pow_of_two(size);
>>           min_block_size = size;
>> -    /* Align size value to min_block_size */
>> -    } else if (!IS_ALIGNED(size, min_block_size)) {
>> +        /*
>> +         * Normalize the requested size to min_block_size for 
>> regular allocations.
>> +         * Offset-aligned allocations intentionally skip size rounding.
>> +         */
>> +    } else if (!gpu_buddy_can_offset_align(size, min_block_size)) {
>>           size = round_up(size, min_block_size);
>>       }
>>   @@ -1158,43 +1322,60 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy 
>> *mm,
>>       do {
>>           order = min(order, (unsigned int)fls(pages) - 1);
>>           BUG_ON(order > mm->max_order);
>> -        BUG_ON(order < min_order);
>> +        /*
>> +         * Regular allocations must not allocate blocks smaller than 
>> min_block_size.
>> +         * Offset-aligned allocations deliberately bypass this 
>> constraint.
>> +         */
>> +        BUG_ON(size >= min_block_size && order < min_order);
>>             do {
>> +            unsigned int fallback_order;
>> +
>>               block = __gpu_buddy_alloc_blocks(mm, start,
>>                                end,
>> +                             size,
>> +                             min_block_size,
>>                                order,
>>                                flags);
>>               if (!IS_ERR(block))
>>                   break;
>>   -            if (order-- == min_order) {
>> -                /* Try allocation through force merge method */
>> -                if (mm->clear_avail &&
>> -                    !__force_merge(mm, start, end, min_order)) {
>> -                    block = __gpu_buddy_alloc_blocks(mm, start,
>> -                                     end,
>> -                                     min_order,
>> -                                     flags);
>> -                    if (!IS_ERR(block)) {
>> -                        order = min_order;
>> -                        break;
>> -                    }
>> -                }
>> +            if (size < min_block_size) {
>> +                fallback_order = order;
>> +            } else if (order == min_order) {
>> +                fallback_order = min_order;
>> +            } else {
>> +                order--;
>> +                continue;
>> +            }
>>   -                /*
>> -                 * Try contiguous block allocation through
>> -                 * try harder method.
>> -                 */
>> -                if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
>> -                    !(flags & GPU_BUDDY_RANGE_ALLOCATION))
>> -                    return __alloc_contig_try_harder(mm,
>> -                                     original_size,
>> -                                     original_min_size,
>> -                                     blocks);
>> -                err = -ENOSPC;
>> -                goto err_free;
>> +            /* Try allocation through force merge method */
>> +            if (mm->clear_avail &&
>> +                !__force_merge(mm, start, end, fallback_order)) {
>> +                block = __gpu_buddy_alloc_blocks(mm, start,
>> +                                 end,
>> +                                 size,
>> +                                 min_block_size,
>> +                                 fallback_order,
>> +                                 flags);
>> +                if (!IS_ERR(block)) {
>> +                    order = fallback_order;
>> +                    break;
>> +                }
>>               }
>> +
>> +            /*
>> +             * Try contiguous block allocation through
>> +             * try harder method.
>> +             */
>> +            if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
>> +                !(flags & GPU_BUDDY_RANGE_ALLOCATION))
>> +                return __alloc_contig_try_harder(mm,
>> +                                 original_size,
>> +                                 original_min_size,
>> +                                 blocks);
>> +            err = -ENOSPC;
>> +            goto err_free;
>>           } while (1);
>>             mark_allocated(mm, block);
>> diff --git a/include/linux/gpu_buddy.h b/include/linux/gpu_buddy.h
>> index 07ac65db6d2e..7ad817c69ec6 100644
>> --- a/include/linux/gpu_buddy.h
>> +++ b/include/linux/gpu_buddy.h
>> @@ -11,6 +11,7 @@
>>   #include <linux/slab.h>
>>   #include <linux/sched.h>
>>   #include <linux/rbtree.h>
>> +#include <linux/rbtree_augmented.h>
>>     #define GPU_BUDDY_RANGE_ALLOCATION        BIT(0)
>>   #define GPU_BUDDY_TOPDOWN_ALLOCATION        BIT(1)
>> @@ -58,6 +59,7 @@ struct gpu_buddy_block {
>>       };
>>         struct list_head tmp_link;
>> +    unsigned int subtree_max_alignment;
>>   };
>>     /* Order-zero must be at least SZ_4K */
>>
>> base-commit: 9d757669b2b22cd224c334924f798393ffca537c
>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling
  2026-02-17  6:03   ` Arunpravin Paneer Selvam
@ 2026-02-17 10:01     ` Matthew Auld
  2026-02-17 10:16       ` Arunpravin Paneer Selvam
  0 siblings, 1 reply; 11+ messages in thread
From: Matthew Auld @ 2026-02-17 10:01 UTC (permalink / raw)
  To: Arunpravin Paneer Selvam, christian.koenig, dri-devel, intel-gfx,
	intel-xe, amd-gfx
  Cc: alexander.deucher

On 17/02/2026 06:03, Arunpravin Paneer Selvam wrote:
> Hi Matthew,
> 
> On 2/10/2026 9:56 PM, Matthew Auld wrote:
>> On 09/02/2026 08:30, Arunpravin Paneer Selvam wrote:
>>> Large alignment requests previously forced the buddy allocator to 
>>> search by
>>> alignment order, which often caused higher-order free blocks to be 
>>> split even
>>> when a suitably aligned smaller region already existed within them. 
>>> This led
>>> to excessive fragmentation, especially for workloads requesting small 
>>> sizes
>>> with large alignment constraints.
>>>
>>> This change prioritizes the requested allocation size during the 
>>> search and
>>> uses an augmented RB-tree field (subtree_max_alignment) to 
>>> efficiently locate
>>> free blocks that satisfy both size and offset-alignment requirements. 
>>> As a
>>> result, the allocator can directly select an aligned sub-region without
>>> splitting larger blocks unnecessarily.
>>>
>>> A practical example is the VKCTS test
>>> dEQP-VK.memory.allocation.basic.size_8KiB.reverse.count_4000, which 
>>> repeatedly
>>> allocates 8 KiB buffers with a 256 KiB alignment. Previously, such 
>>> allocations
>>> caused large blocks to be split aggressively, despite smaller aligned 
>>> regions
>>> being sufficient. With this change, those aligned regions are reused 
>>> directly,
>>> significantly reducing fragmentation.
>>>
>>> This improvement is visible in the amdgpu VRAM buddy allocator state
>>> (/sys/kernel/debug/dri/1/amdgpu_vram_mm). After the change,
>>> higher-order blocks
>>> are preserved and the number of low-order fragments is substantially 
>>> reduced.
>>>
>>> Before:
>>>    order- 5 free: 1936 MiB, blocks: 15490
>>>    order- 4 free:  967 MiB, blocks: 15486
>>>    order- 3 free:  483 MiB, blocks: 15485
>>>    order- 2 free:  241 MiB, blocks: 15486
>>>    order- 1 free:  241 MiB, blocks: 30948
>>>
>>> After:
>>>    order- 5 free:  493 MiB, blocks:  3941
>>>    order- 4 free:  246 MiB, blocks:  3943
>>>    order- 3 free:  123 MiB, blocks:  4101
>>>    order- 2 free:   61 MiB, blocks:  4101
>>>    order- 1 free:   61 MiB, blocks:  8018
>>>
>>> By avoiding unnecessary splits, this change improves allocator 
>>> efficiency and
>>> helps maintain larger contiguous free regions under heavy offset-aligned
>>> allocation workloads.
>>>
>>> v2:(Matthew)
>>>    - Update augmented information along the path to the inserted node.
>>>
>>> v3:
>>>    - Move the patch to gpu/buddy.c file.
>>>
>>> Signed-off-by: Arunpravin Paneer Selvam 
>>> <Arunpravin.PaneerSelvam@amd.com>
>>> Suggested-by: Christian König <christian.koenig@amd.com>
>>> ---
>>>   drivers/gpu/buddy.c       | 271 +++++++++++++++++++++++++++++++-------
>>>   include/linux/gpu_buddy.h |   2 +
>>>   2 files changed, 228 insertions(+), 45 deletions(-)
>>>
>>> diff --git a/drivers/gpu/buddy.c b/drivers/gpu/buddy.c
>>> index 603c59a2013a..3a25eed050ba 100644
>>> --- a/drivers/gpu/buddy.c
>>> +++ b/drivers/gpu/buddy.c
>>> @@ -14,6 +14,16 @@
>>>     static struct kmem_cache *slab_blocks;
>>>   +static unsigned int gpu_buddy_block_offset_alignment(struct 
>>> gpu_buddy_block *block)
>>> +{
>>> +    return __ffs(gpu_buddy_block_offset(block));
>>
>> __ffs() will be undefined for offset zero it seems, so might blow up 
>> in some strange way. I guess just return the max possible alignment 
>> here if offset is zero? Also are we meant to use __ffs64() here?
> Yes, I had the same concern about __ffs() being undefined when the 
> offset is zero. My initial thought was to derive the maximum possible 
> alignment from the allocator size using ilog2(mm->size) and return that 
> value for the zero-offset case.
> 
> But, RB_DECLARE_CALLBACKS_MAX() requires the compute callback 
> (gpu_buddy_block_offset_alignment()) to accept only a single struct 
> gpu_buddy_block * argument. It does not provide a mechanism to pass 
> additional context such as the associated struct gpu_buddy *mm. As a 
> result, deriving the alignment from allocator state (e.g., via
> ilog2(mm->size)) is not directly feasible within this callback. When I tested 
> the zero-offset case locally, __ffs() returned 64, which effectively 
> corresponds to the maximum alignment for a u64 offset. Based on that 
> observation, I initially left the __ffs() call unchanged for the zero 
> case as well.
> 
> One possible alternative would be to store a pointer to struct gpu_buddy 
> inside each gpu_buddy_block.
> 
> All other review comments have been addressed, and I will send a v4 once 
> this point is clarified.

Yeah, I was thinking we just return the max theoretical value, so 64, or 
perhaps 64+1. It just needs to be a value that will be larger than any 
other possible alignment, since zero is special. It shouldn't matter if 
that is larger than the actual real max for the region, I think.

if (!offset)
	return 64 + 1;

return __ffs64(offset);

?

> 
> Regards,
> Arun.
>>
>>> +}
>>> +
>>> +RB_DECLARE_CALLBACKS_MAX(static, gpu_buddy_augment_cb,
>>> +             struct gpu_buddy_block, rb,
>>> +             unsigned int, subtree_max_alignment,
>>> +             gpu_buddy_block_offset_alignment);
>>> +
>>>   static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
>>>                              struct gpu_buddy_block *parent,
>>>                              unsigned int order,
>>> @@ -31,6 +41,9 @@ static struct gpu_buddy_block 
>>> *gpu_block_alloc(struct gpu_buddy *mm,
>>>       block->header |= order;
>>>       block->parent = parent;
>>>   +    block->subtree_max_alignment =
>>> +        gpu_buddy_block_offset_alignment(block);
>>> +
>>>       RB_CLEAR_NODE(&block->rb);
>>>         BUG_ON(block->header & GPU_BUDDY_HEADER_UNUSED);
>>> @@ -67,26 +80,42 @@ static bool rbtree_is_empty(struct rb_root *root)
>>>       return RB_EMPTY_ROOT(root);
>>>   }
>>>   -static bool gpu_buddy_block_offset_less(const struct 
>>> gpu_buddy_block *block,
>>> -                    const struct gpu_buddy_block *node)
>>> -{
>>> -    return gpu_buddy_block_offset(block) < 
>>> gpu_buddy_block_offset(node);
>>> -}
>>> -
>>> -static bool rbtree_block_offset_less(struct rb_node *block,
>>> -                     const struct rb_node *node)
>>> -{
>>> -    return gpu_buddy_block_offset_less(rbtree_get_free_block(block),
>>> -                       rbtree_get_free_block(node));
>>> -}
>>> -
>>>   static void rbtree_insert(struct gpu_buddy *mm,
>>>                 struct gpu_buddy_block *block,
>>>                 enum gpu_buddy_free_tree tree)
>>>   {
>>> -    rb_add(&block->rb,
>>> - &mm->free_trees[tree][gpu_buddy_block_order(block)],
>>> -           rbtree_block_offset_less);
>>> +    struct rb_node **link, *parent = NULL;
>>> +    unsigned int block_alignment, order;
>>> +    struct gpu_buddy_block *node;
>>> +    struct rb_root *root;
>>> +
>>> +    order = gpu_buddy_block_order(block);
>>> +    block_alignment = gpu_buddy_block_offset_alignment(block);
>>> +
>>> +    root = &mm->free_trees[tree][order];
>>> +    link = &root->rb_node;
>>> +
>>> +    while (*link) {
>>> +        parent = *link;
>>> +        node = rbtree_get_free_block(parent);
>>> +        /*
>>> +         * Manual augmentation update during insertion traversal. 
>>> Required
>>> +         * because rb_insert_augmented() only calls rotate callback 
>>> during
>>> +         * rotations. This ensures all ancestors on the insertion 
>>> path have
>>> +         * correct subtree_max_alignment values.
>>> +         */
>>> +        if (node->subtree_max_alignment < block_alignment)
>>> +            node->subtree_max_alignment = block_alignment;
>>> +
>>> +        if (gpu_buddy_block_offset(block) < 
>>> gpu_buddy_block_offset(node))
>>> +            link = &parent->rb_left;
>>> +        else
>>> +            link = &parent->rb_right;
>>> +    }
>>> +
>>> +    block->subtree_max_alignment = block_alignment;
>>> +    rb_link_node(&block->rb, parent, link);
>>> +    rb_insert_augmented(&block->rb, root, &gpu_buddy_augment_cb);
>>>   }
>>>     static void rbtree_remove(struct gpu_buddy *mm,
>>> @@ -99,7 +128,7 @@ static void rbtree_remove(struct gpu_buddy *mm,
>>>       tree = get_block_tree(block);
>>>       root = &mm->free_trees[tree][order];
>>>   -    rb_erase(&block->rb, root);
>>> +    rb_erase_augmented(&block->rb, root, &gpu_buddy_augment_cb);
>>>       RB_CLEAR_NODE(&block->rb);
>>>   }
>>>   @@ -790,6 +819,132 @@ alloc_from_freetree(struct gpu_buddy *mm,
>>>       return ERR_PTR(err);
>>>   }
>>>   +static bool
>>> +gpu_buddy_can_offset_align(u64 size, u64 min_block_size)
>>> +{
>>> +    return size < min_block_size && is_power_of_2(size);
>>> +}
>>> +
>>> +static bool gpu_buddy_subtree_can_satisfy(struct rb_node *node,
>>> +                      unsigned int alignment)
>>> +{
>>> +    struct gpu_buddy_block *block;
>>> +
>>> +    if (!node)
>>> +        return false;
>>
>> All callers seem to handle null case already, so could potentially 
>> drop this?
>>
>>> +
>>> +    block = rbtree_get_free_block(node);
>>> +    return block->subtree_max_alignment >= alignment;
>>> +}
>>> +
>>> +static struct gpu_buddy_block *
>>> +gpu_buddy_find_block_aligned(struct gpu_buddy *mm,
>>> +                 enum gpu_buddy_free_tree tree,
>>> +                 unsigned int order,
>>> +                 unsigned int tmp,
>>> +                 unsigned int alignment,
>>> +                 unsigned long flags)
>>> +{
>>> +    struct rb_root *root = &mm->free_trees[tree][tmp];
>>> +    struct rb_node *rb = root->rb_node;
>>> +
>>> +    while (rb) {
>>> +        struct gpu_buddy_block *block = rbtree_get_free_block(rb);
>>> +        struct rb_node *left_node = rb->rb_left, *right_node =
>>> rb->rb_right;
>>> +
>>> +        if (right_node) {
>>> +            if (gpu_buddy_subtree_can_satisfy(right_node, alignment)) {
>>> +                rb = right_node;
>>> +                continue;
>>> +            }
>>> +        }
>>> +
>>> +        if (gpu_buddy_block_order(block) >= order &&
>>
>> Is this not always true? With that we can drop order, or better yet
>> s/tmp/order/ ?
>>
>>> + __ffs(gpu_buddy_block_offset(block)) >= alignment)
>>
>> Same here with undefined offset zero case. I guess also use the helper.
>>
>>> +            return block;
>>> +
>>> +        if (left_node) {
>>> +            if (gpu_buddy_subtree_can_satisfy(left_node, alignment)) {
>>> +                rb = left_node;
>>> +                continue;
>>> +            }
>>> +        }
>>> +
>>> +        break;
>>> +    }
>>> +
>>> +    return NULL;
>>> +}
>>> +
>>> +static struct gpu_buddy_block *
>>> +gpu_buddy_offset_aligned_allocation(struct gpu_buddy *mm,
>>> +                    u64 size,
>>> +                    u64 min_block_size,
>>> +                    unsigned long flags)
>>> +{
>>> +    struct gpu_buddy_block *block = NULL;
>>> +    unsigned int order, tmp, alignment;
>>> +    struct gpu_buddy_block *buddy;
>>> +    enum gpu_buddy_free_tree tree;
>>> +    unsigned long pages;
>>> +    int err;
>>> +
>>> +    alignment = ilog2(min_block_size);
>>> +    pages = size >> ilog2(mm->chunk_size);
>>> +    order = fls(pages) - 1;
>>> +
>>> +    tree = (flags & GPU_BUDDY_CLEAR_ALLOCATION) ?
>>> +        GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
>>> +
>>> +    for (tmp = order; tmp <= mm->max_order; ++tmp) {
>>> +        block = gpu_buddy_find_block_aligned(mm, tree, order,
>>> +                             tmp, alignment, flags);
>>> +        if (!block) {
>>> +            tree = (tree == GPU_BUDDY_CLEAR_TREE) ?
>>> +                GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
>>> +            block = gpu_buddy_find_block_aligned(mm, tree, order,
>>> +                                 tmp, alignment, flags);
>>> +        }
>>> +
>>> +        if (block)
>>> +            break;
>>> +    }
>>> +
>>> +    if (!block)
>>> +        return ERR_PTR(-ENOSPC);
>>> +
>>> +    while (gpu_buddy_block_order(block) > order) {
>>> +        struct gpu_buddy_block *left, *right;
>>> +
>>> +        err = split_block(mm, block);
>>> +        if (unlikely(err))
>>> +            goto err_undo;
>>> +
>>> +        left  = block->left;
>>> +        right = block->right;
>>> +
>>> +        if (__ffs(gpu_buddy_block_offset(right)) >= alignment)
>>
>> Might be better to use the helper for this?
>>
>>> +            block = right;
>>> +        else
>>> +            block = left;
>>> +    }
>>> +
>>> +    return block;
>>> +
>>> +err_undo:
>>> +    /*
>>> +     * We really don't want to leave around a bunch of split blocks, 
>>> since
>>> +     * bigger is better, so make sure we merge everything back 
>>> before we
>>> +     * free the allocated blocks.
>>> +     */
>>> +    buddy = __get_buddy(block);
>>> +    if (buddy &&
>>> +        (gpu_buddy_block_is_free(block) &&
>>> +         gpu_buddy_block_is_free(buddy)))
>>> +        __gpu_buddy_free(mm, block, false);
>>> +    return ERR_PTR(err);
>>> +}
>>> +
>>>   static int __alloc_range(struct gpu_buddy *mm,
>>>                struct list_head *dfs,
>>>                u64 start, u64 size,
>>> @@ -1059,6 +1214,7 @@ EXPORT_SYMBOL(gpu_buddy_block_trim);
>>>   static struct gpu_buddy_block *
>>>   __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>>                u64 start, u64 end,
>>> +             u64 size, u64 min_block_size,
>>>                unsigned int order,
>>>                unsigned long flags)
>>>   {
>>> @@ -1066,6 +1222,11 @@ __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>>           /* Allocate traversing within the range */
>>>           return  __gpu_buddy_alloc_range_bias(mm, start, end,
>>>                                order, flags);
>>> +    else if (size < min_block_size)
>>> +        /* Allocate from an offset-aligned region without size rounding */
>>> +        return gpu_buddy_offset_aligned_allocation(mm, size,
>>> +                               min_block_size,
>>> +                               flags);
>>>       else
>>>           /* Allocate from freetree */
>>>           return alloc_from_freetree(mm, order, flags);
>>> @@ -1137,8 +1298,11 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>>       if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) {
>>>           size = roundup_pow_of_two(size);
>>>           min_block_size = size;
>>> -    /* Align size value to min_block_size */
>>> -    } else if (!IS_ALIGNED(size, min_block_size)) {
>>> +        /*
>>> +         * Normalize the requested size to min_block_size for regular allocations.
>>> +         * Offset-aligned allocations intentionally skip size rounding.
>>> +         */
>>> +    } else if (!gpu_buddy_can_offset_align(size, min_block_size)) {
>>>           size = round_up(size, min_block_size);
>>>       }
>>>   @@ -1158,43 +1322,60 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>>       do {
>>>           order = min(order, (unsigned int)fls(pages) - 1);
>>>           BUG_ON(order > mm->max_order);
>>> -        BUG_ON(order < min_order);
>>> +        /*
>>> +         * Regular allocations must not allocate blocks smaller than min_block_size.
>>> +         * Offset-aligned allocations deliberately bypass this constraint.
>>> +         */
>>> +        BUG_ON(size >= min_block_size && order < min_order);
>>>             do {
>>> +            unsigned int fallback_order;
>>> +
>>>               block = __gpu_buddy_alloc_blocks(mm, start,
>>>                                end,
>>> +                             size,
>>> +                             min_block_size,
>>>                                order,
>>>                                flags);
>>>               if (!IS_ERR(block))
>>>                   break;
>>>   -            if (order-- == min_order) {
>>> -                /* Try allocation through force merge method */
>>> -                if (mm->clear_avail &&
>>> -                    !__force_merge(mm, start, end, min_order)) {
>>> -                    block = __gpu_buddy_alloc_blocks(mm, start,
>>> -                                     end,
>>> -                                     min_order,
>>> -                                     flags);
>>> -                    if (!IS_ERR(block)) {
>>> -                        order = min_order;
>>> -                        break;
>>> -                    }
>>> -                }
>>> +            if (size < min_block_size) {
>>> +                fallback_order = order;
>>> +            } else if (order == min_order) {
>>> +                fallback_order = min_order;
>>> +            } else {
>>> +                order--;
>>> +                continue;
>>> +            }
>>>   -                /*
>>> -                 * Try contiguous block allocation through
>>> -                 * try harder method.
>>> -                 */
>>> -                if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
>>> -                    !(flags & GPU_BUDDY_RANGE_ALLOCATION))
>>> -                    return __alloc_contig_try_harder(mm,
>>> -                                     original_size,
>>> -                                     original_min_size,
>>> -                                     blocks);
>>> -                err = -ENOSPC;
>>> -                goto err_free;
>>> +            /* Try allocation through force merge method */
>>> +            if (mm->clear_avail &&
>>> +                !__force_merge(mm, start, end, fallback_order)) {
>>> +                block = __gpu_buddy_alloc_blocks(mm, start,
>>> +                                 end,
>>> +                                 size,
>>> +                                 min_block_size,
>>> +                                 fallback_order,
>>> +                                 flags);
>>> +                if (!IS_ERR(block)) {
>>> +                    order = fallback_order;
>>> +                    break;
>>> +                }
>>>               }
>>> +
>>> +            /*
>>> +             * Try contiguous block allocation through
>>> +             * try harder method.
>>> +             */
>>> +            if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
>>> +                !(flags & GPU_BUDDY_RANGE_ALLOCATION))
>>> +                return __alloc_contig_try_harder(mm,
>>> +                                 original_size,
>>> +                                 original_min_size,
>>> +                                 blocks);
>>> +            err = -ENOSPC;
>>> +            goto err_free;
>>>           } while (1);
>>>             mark_allocated(mm, block);
>>> diff --git a/include/linux/gpu_buddy.h b/include/linux/gpu_buddy.h
>>> index 07ac65db6d2e..7ad817c69ec6 100644
>>> --- a/include/linux/gpu_buddy.h
>>> +++ b/include/linux/gpu_buddy.h
>>> @@ -11,6 +11,7 @@
>>>   #include <linux/slab.h>
>>>   #include <linux/sched.h>
>>>   #include <linux/rbtree.h>
>>> +#include <linux/rbtree_augmented.h>
>>>     #define GPU_BUDDY_RANGE_ALLOCATION        BIT(0)
>>>   #define GPU_BUDDY_TOPDOWN_ALLOCATION        BIT(1)
>>> @@ -58,6 +59,7 @@ struct gpu_buddy_block {
>>>       };
>>>         struct list_head tmp_link;
>>> +    unsigned int subtree_max_alignment;
>>>   };
>>>     /* Order-zero must be at least SZ_4K */
>>>
>>> base-commit: 9d757669b2b22cd224c334924f798393ffca537c
>>
> 


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling
  2026-02-17 10:01     ` Matthew Auld
@ 2026-02-17 10:16       ` Arunpravin Paneer Selvam
  0 siblings, 0 replies; 11+ messages in thread
From: Arunpravin Paneer Selvam @ 2026-02-17 10:16 UTC (permalink / raw)
  To: Matthew Auld, christian.koenig, dri-devel, intel-gfx, intel-xe,
	amd-gfx
  Cc: alexander.deucher



On 2/17/2026 3:31 PM, Matthew Auld wrote:
> On 17/02/2026 06:03, Arunpravin Paneer Selvam wrote:
>> Hi Matthew,
>>
>> On 2/10/2026 9:56 PM, Matthew Auld wrote:
>>> On 09/02/2026 08:30, Arunpravin Paneer Selvam wrote:
>>>> Large alignment requests previously forced the buddy allocator to 
>>>> search by
>>>> alignment order, which often caused higher-order free blocks to be 
>>>> split even
>>>> when a suitably aligned smaller region already existed within them. 
>>>> This led
>>>> to excessive fragmentation, especially for workloads requesting 
>>>> small sizes
>>>> with large alignment constraints.
>>>>
>>>> This change prioritizes the requested allocation size during the 
>>>> search and
>>>> uses an augmented RB-tree field (subtree_max_alignment) to 
>>>> efficiently locate
>>>> free blocks that satisfy both size and offset-alignment 
>>>> requirements. As a
>>>> result, the allocator can directly select an aligned sub-region 
>>>> without
>>>> splitting larger blocks unnecessarily.
>>>>
>>>> A practical example is the VKCTS test
>>>> dEQP-VK.memory.allocation.basic.size_8KiB.reverse.count_4000, which 
>>>> repeatedly
>>>> allocates 8 KiB buffers with a 256 KiB alignment. Previously, such 
>>>> allocations
>>>> caused large blocks to be split aggressively, despite smaller 
>>>> aligned regions
>>>> being sufficient. With this change, those aligned regions are 
>>>> reused directly,
>>>> significantly reducing fragmentation.
>>>>
>>>> This improvement is visible in the amdgpu VRAM buddy allocator state
>>>> (/sys/kernel/debug/dri/1/amdgpu_vram_mm). After the change, higher-order blocks
>>>> are preserved and the number of low-order fragments is 
>>>> substantially reduced.
>>>>
>>>> Before:
>>>>    order- 5 free: 1936 MiB, blocks: 15490
>>>>    order- 4 free:  967 MiB, blocks: 15486
>>>>    order- 3 free:  483 MiB, blocks: 15485
>>>>    order- 2 free:  241 MiB, blocks: 15486
>>>>    order- 1 free:  241 MiB, blocks: 30948
>>>>
>>>> After:
>>>>    order- 5 free:  493 MiB, blocks:  3941
>>>>    order- 4 free:  246 MiB, blocks:  3943
>>>>    order- 3 free:  123 MiB, blocks:  4101
>>>>    order- 2 free:   61 MiB, blocks:  4101
>>>>    order- 1 free:   61 MiB, blocks:  8018
>>>>
>>>> By avoiding unnecessary splits, this change improves allocator 
>>>> efficiency and
>>>> helps maintain larger contiguous free regions under heavy 
>>>> offset-aligned
>>>> allocation workloads.
>>>>
>>>> v2:(Matthew)
>>>>    - Update augmented information along the path to the inserted node.
>>>>
>>>> v3:
>>>>    - Move the patch to gpu/buddy.c file.
>>>>
>>>> Signed-off-by: Arunpravin Paneer Selvam 
>>>> <Arunpravin.PaneerSelvam@amd.com>
>>>> Suggested-by: Christian König <christian.koenig@amd.com>
>>>> ---
>>>>   drivers/gpu/buddy.c       | 271 
>>>> +++++++++++++++++++++++++++++++-------
>>>>   include/linux/gpu_buddy.h |   2 +
>>>>   2 files changed, 228 insertions(+), 45 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/buddy.c b/drivers/gpu/buddy.c
>>>> index 603c59a2013a..3a25eed050ba 100644
>>>> --- a/drivers/gpu/buddy.c
>>>> +++ b/drivers/gpu/buddy.c
>>>> @@ -14,6 +14,16 @@
>>>>     static struct kmem_cache *slab_blocks;
>>>> +static unsigned int gpu_buddy_block_offset_alignment(struct gpu_buddy_block *block)
>>>> +{
>>>> +    return __ffs(gpu_buddy_block_offset(block));
>>>
>>> __ffs() will be undefined for offset zero it seems, so might blow up 
>>> in some strange way. I guess just return the max possible alignment 
>>> here if offset is zero? Also are we meant to use __ffs64() here?
>> Yes, I had the same concern about __ffs() being undefined when the 
>> offset is zero. My initial thought was to derive the maximum possible 
>> alignment from the allocator size using ilog2(mm->size) and return 
>> that value for the zero-offset case.
>>
>> But, RB_DECLARE_CALLBACKS_MAX() requires the compute callback 
>> (gpu_buddy_block_offset_alignment()) to accept only a single struct 
>> gpu_buddy_block * argument. It does not provide a mechanism to pass 
>> additional context such as the associated struct gpu_buddy *mm. As a 
>> result, deriving the alignment from allocator state (e.g., via 
>> ilog2(mm->size)) is not directly feasible within this callback. 
>> When I tested the zero-offset case locally, __ffs() returned 64, 
>> which effectively corresponds to the maximum alignment for a u64 
>> offset. Based on that observation, I initially left the __ffs() call 
>> unchanged for the zero case as well.
>>
>> One possible alternative would be to store a pointer to struct 
>> gpu_buddy inside each gpu_buddy_block.
>>
>> All other review comments have been addressed, and I will send a v4 
>> once this point is clarified.
>
> Yeah, I was thinking we just return the max theoretical value, so 64, 
> or perhaps 64+1. It just needs to be a value that will be larger than 
> any other possible alignment, since zero is special. It shouldn't 
> matter if that is larger than the actual real max for the region, I 
> think.
>
> if (!offset)
>     return 64 + 1;
>
> return __ffs64(offset);
>
> ?
Yes, that should work. I will update the helper accordingly in v4.

Regards,
Arun.
>
>>
>> Regards,
>> Arun.
>>>
>>>> +}
>>>> +
>>>> +RB_DECLARE_CALLBACKS_MAX(static, gpu_buddy_augment_cb,
>>>> +             struct gpu_buddy_block, rb,
>>>> +             unsigned int, subtree_max_alignment,
>>>> +             gpu_buddy_block_offset_alignment);
>>>> +
>>>>   static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
>>>>                              struct gpu_buddy_block *parent,
>>>>                              unsigned int order,
>>>> @@ -31,6 +41,9 @@ static struct gpu_buddy_block *gpu_block_alloc(struct gpu_buddy *mm,
>>>>       block->header |= order;
>>>>       block->parent = parent;
>>>>   +    block->subtree_max_alignment =
>>>> +        gpu_buddy_block_offset_alignment(block);
>>>> +
>>>>       RB_CLEAR_NODE(&block->rb);
>>>>         BUG_ON(block->header & GPU_BUDDY_HEADER_UNUSED);
>>>> @@ -67,26 +80,42 @@ static bool rbtree_is_empty(struct rb_root *root)
>>>>       return RB_EMPTY_ROOT(root);
>>>>   }
>>>>   -static bool gpu_buddy_block_offset_less(const struct gpu_buddy_block *block,
>>>> -                    const struct gpu_buddy_block *node)
>>>> -{
>>>> -    return gpu_buddy_block_offset(block) < 
>>>> gpu_buddy_block_offset(node);
>>>> -}
>>>> -
>>>> -static bool rbtree_block_offset_less(struct rb_node *block,
>>>> -                     const struct rb_node *node)
>>>> -{
>>>> -    return gpu_buddy_block_offset_less(rbtree_get_free_block(block),
>>>> -                       rbtree_get_free_block(node));
>>>> -}
>>>> -
>>>>   static void rbtree_insert(struct gpu_buddy *mm,
>>>>                 struct gpu_buddy_block *block,
>>>>                 enum gpu_buddy_free_tree tree)
>>>>   {
>>>> -    rb_add(&block->rb,
>>>> - &mm->free_trees[tree][gpu_buddy_block_order(block)],
>>>> -           rbtree_block_offset_less);
>>>> +    struct rb_node **link, *parent = NULL;
>>>> +    unsigned int block_alignment, order;
>>>> +    struct gpu_buddy_block *node;
>>>> +    struct rb_root *root;
>>>> +
>>>> +    order = gpu_buddy_block_order(block);
>>>> +    block_alignment = gpu_buddy_block_offset_alignment(block);
>>>> +
>>>> +    root = &mm->free_trees[tree][order];
>>>> +    link = &root->rb_node;
>>>> +
>>>> +    while (*link) {
>>>> +        parent = *link;
>>>> +        node = rbtree_get_free_block(parent);
>>>> +        /*
>>>> +         * Manual augmentation update during insertion traversal. Required
>>>> +         * because rb_insert_augmented() only calls rotate callback during
>>>> +         * rotations. This ensures all ancestors on the insertion path have
>>>> +         * correct subtree_max_alignment values.
>>>> +         */
>>>> +        if (node->subtree_max_alignment < block_alignment)
>>>> +            node->subtree_max_alignment = block_alignment;
>>>> +
>>>> +        if (gpu_buddy_block_offset(block) < gpu_buddy_block_offset(node))
>>>> +            link = &parent->rb_left;
>>>> +        else
>>>> +            link = &parent->rb_right;
>>>> +    }
>>>> +
>>>> +    block->subtree_max_alignment = block_alignment;
>>>> +    rb_link_node(&block->rb, parent, link);
>>>> +    rb_insert_augmented(&block->rb, root, &gpu_buddy_augment_cb);
>>>>   }
>>>>     static void rbtree_remove(struct gpu_buddy *mm,
>>>> @@ -99,7 +128,7 @@ static void rbtree_remove(struct gpu_buddy *mm,
>>>>       tree = get_block_tree(block);
>>>>       root = &mm->free_trees[tree][order];
>>>>   -    rb_erase(&block->rb, root);
>>>> +    rb_erase_augmented(&block->rb, root, &gpu_buddy_augment_cb);
>>>>       RB_CLEAR_NODE(&block->rb);
>>>>   }
>>>>   @@ -790,6 +819,132 @@ alloc_from_freetree(struct gpu_buddy *mm,
>>>>       return ERR_PTR(err);
>>>>   }
>>>>   +static bool
>>>> +gpu_buddy_can_offset_align(u64 size, u64 min_block_size)
>>>> +{
>>>> +    return size < min_block_size && is_power_of_2(size);
>>>> +}
>>>> +
>>>> +static bool gpu_buddy_subtree_can_satisfy(struct rb_node *node,
>>>> +                      unsigned int alignment)
>>>> +{
>>>> +    struct gpu_buddy_block *block;
>>>> +
>>>> +    if (!node)
>>>> +        return false;
>>>
>>> All callers seem to handle null case already, so could potentially 
>>> drop this?
>>>
>>>> +
>>>> +    block = rbtree_get_free_block(node);
>>>> +    return block->subtree_max_alignment >= alignment;
>>>> +}
>>>> +
>>>> +static struct gpu_buddy_block *
>>>> +gpu_buddy_find_block_aligned(struct gpu_buddy *mm,
>>>> +                 enum gpu_buddy_free_tree tree,
>>>> +                 unsigned int order,
>>>> +                 unsigned int tmp,
>>>> +                 unsigned int alignment,
>>>> +                 unsigned long flags)
>>>> +{
>>>> +    struct rb_root *root = &mm->free_trees[tree][tmp];
>>>> +    struct rb_node *rb = root->rb_node;
>>>> +
>>>> +    while (rb) {
>>>> +        struct gpu_buddy_block *block = rbtree_get_free_block(rb);
>>>> +        struct rb_node *left_node = rb->rb_left, *right_node = rb->rb_right;
>>>> +
>>>> +        if (right_node) {
>>>> +            if (gpu_buddy_subtree_can_satisfy(right_node, alignment)) {
>>>> +                rb = right_node;
>>>> +                continue;
>>>> +            }
>>>> +        }
>>>> +
>>>> +        if (gpu_buddy_block_order(block) >= order &&
>>>
>>> Is this not always true? With that we can drop order, or better yet s/tmp/order/ ?
>>>
>>>> +            __ffs(gpu_buddy_block_offset(block)) >= alignment)
>>>
>>> Same here with undefined offset zero case. I guess also use the helper.
>>>
>>>> +            return block;
>>>> +
>>>> +        if (left_node) {
>>>> +            if (gpu_buddy_subtree_can_satisfy(left_node, alignment)) {
>>>> +                rb = left_node;
>>>> +                continue;
>>>> +            }
>>>> +        }
>>>> +
>>>> +        break;
>>>> +    }
>>>> +
>>>> +    return NULL;
>>>> +}
>>>> +
>>>> +static struct gpu_buddy_block *
>>>> +gpu_buddy_offset_aligned_allocation(struct gpu_buddy *mm,
>>>> +                    u64 size,
>>>> +                    u64 min_block_size,
>>>> +                    unsigned long flags)
>>>> +{
>>>> +    struct gpu_buddy_block *block = NULL;
>>>> +    unsigned int order, tmp, alignment;
>>>> +    struct gpu_buddy_block *buddy;
>>>> +    enum gpu_buddy_free_tree tree;
>>>> +    unsigned long pages;
>>>> +    int err;
>>>> +
>>>> +    alignment = ilog2(min_block_size);
>>>> +    pages = size >> ilog2(mm->chunk_size);
>>>> +    order = fls(pages) - 1;
>>>> +
>>>> +    tree = (flags & GPU_BUDDY_CLEAR_ALLOCATION) ?
>>>> +        GPU_BUDDY_CLEAR_TREE : GPU_BUDDY_DIRTY_TREE;
>>>> +
>>>> +    for (tmp = order; tmp <= mm->max_order; ++tmp) {
>>>> +        block = gpu_buddy_find_block_aligned(mm, tree, order,
>>>> +                             tmp, alignment, flags);
>>>> +        if (!block) {
>>>> +            tree = (tree == GPU_BUDDY_CLEAR_TREE) ?
>>>> +                GPU_BUDDY_DIRTY_TREE : GPU_BUDDY_CLEAR_TREE;
>>>> +            block = gpu_buddy_find_block_aligned(mm, tree, order,
>>>> +                                 tmp, alignment, flags);
>>>> +        }
>>>> +
>>>> +        if (block)
>>>> +            break;
>>>> +    }
>>>> +
>>>> +    if (!block)
>>>> +        return ERR_PTR(-ENOSPC);
>>>> +
>>>> +    while (gpu_buddy_block_order(block) > order) {
>>>> +        struct gpu_buddy_block *left, *right;
>>>> +
>>>> +        err = split_block(mm, block);
>>>> +        if (unlikely(err))
>>>> +            goto err_undo;
>>>> +
>>>> +        left  = block->left;
>>>> +        right = block->right;
>>>> +
>>>> +        if (__ffs(gpu_buddy_block_offset(right)) >= alignment)
>>>
>>> Might be better to use the helper for this?
>>>
>>>> +            block = right;
>>>> +        else
>>>> +            block = left;
>>>> +    }
>>>> +
>>>> +    return block;
>>>> +
>>>> +err_undo:
>>>> +    /*
>>>> +     * We really don't want to leave around a bunch of split blocks, since
>>>> +     * bigger is better, so make sure we merge everything back before we
>>>> +     * free the allocated blocks.
>>>> +     */
>>>> +    buddy = __get_buddy(block);
>>>> +    if (buddy &&
>>>> +        (gpu_buddy_block_is_free(block) &&
>>>> +         gpu_buddy_block_is_free(buddy)))
>>>> +        __gpu_buddy_free(mm, block, false);
>>>> +    return ERR_PTR(err);
>>>> +}
>>>> +
>>>>   static int __alloc_range(struct gpu_buddy *mm,
>>>>                struct list_head *dfs,
>>>>                u64 start, u64 size,
>>>> @@ -1059,6 +1214,7 @@ EXPORT_SYMBOL(gpu_buddy_block_trim);
>>>>   static struct gpu_buddy_block *
>>>>   __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>>>                u64 start, u64 end,
>>>> +             u64 size, u64 min_block_size,
>>>>                unsigned int order,
>>>>                unsigned long flags)
>>>>   {
>>>> @@ -1066,6 +1222,11 @@ __gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>>>           /* Allocate traversing within the range */
>>>>           return  __gpu_buddy_alloc_range_bias(mm, start, end,
>>>>                                order, flags);
>>>> +    else if (size < min_block_size)
>>>> +        /* Allocate from an offset-aligned region without size rounding */
>>>> +        return gpu_buddy_offset_aligned_allocation(mm, size,
>>>> +                               min_block_size,
>>>> +                               flags);
>>>>       else
>>>>           /* Allocate from freetree */
>>>>           return alloc_from_freetree(mm, order, flags);
>>>> @@ -1137,8 +1298,11 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>>>       if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION) {
>>>>           size = roundup_pow_of_two(size);
>>>>           min_block_size = size;
>>>> -    /* Align size value to min_block_size */
>>>> -    } else if (!IS_ALIGNED(size, min_block_size)) {
>>>> +        /*
>>>> +         * Normalize the requested size to min_block_size for regular allocations.
>>>> +         * Offset-aligned allocations intentionally skip size rounding.
>>>> +         */
>>>> +    } else if (!gpu_buddy_can_offset_align(size, min_block_size)) {
>>>>           size = round_up(size, min_block_size);
>>>>       }
>>>>   @@ -1158,43 +1322,60 @@ int gpu_buddy_alloc_blocks(struct gpu_buddy *mm,
>>>>       do {
>>>>           order = min(order, (unsigned int)fls(pages) - 1);
>>>>           BUG_ON(order > mm->max_order);
>>>> -        BUG_ON(order < min_order);
>>>> +        /*
>>>> +         * Regular allocations must not allocate blocks smaller than min_block_size.
>>>> +         * Offset-aligned allocations deliberately bypass this constraint.
>>>> +         */
>>>> +        BUG_ON(size >= min_block_size && order < min_order);
>>>>             do {
>>>> +            unsigned int fallback_order;
>>>> +
>>>>               block = __gpu_buddy_alloc_blocks(mm, start,
>>>>                                end,
>>>> +                             size,
>>>> +                             min_block_size,
>>>>                                order,
>>>>                                flags);
>>>>               if (!IS_ERR(block))
>>>>                   break;
>>>>   -            if (order-- == min_order) {
>>>> -                /* Try allocation through force merge method */
>>>> -                if (mm->clear_avail &&
>>>> -                    !__force_merge(mm, start, end, min_order)) {
>>>> -                    block = __gpu_buddy_alloc_blocks(mm, start,
>>>> -                                     end,
>>>> -                                     min_order,
>>>> -                                     flags);
>>>> -                    if (!IS_ERR(block)) {
>>>> -                        order = min_order;
>>>> -                        break;
>>>> -                    }
>>>> -                }
>>>> +            if (size < min_block_size) {
>>>> +                fallback_order = order;
>>>> +            } else if (order == min_order) {
>>>> +                fallback_order = min_order;
>>>> +            } else {
>>>> +                order--;
>>>> +                continue;
>>>> +            }
>>>>   -                /*
>>>> -                 * Try contiguous block allocation through
>>>> -                 * try harder method.
>>>> -                 */
>>>> -                if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
>>>> -                    !(flags & GPU_BUDDY_RANGE_ALLOCATION))
>>>> -                    return __alloc_contig_try_harder(mm,
>>>> -                                     original_size,
>>>> -                                     original_min_size,
>>>> -                                     blocks);
>>>> -                err = -ENOSPC;
>>>> -                goto err_free;
>>>> +            /* Try allocation through force merge method */
>>>> +            if (mm->clear_avail &&
>>>> +                !__force_merge(mm, start, end, fallback_order)) {
>>>> +                block = __gpu_buddy_alloc_blocks(mm, start,
>>>> +                                 end,
>>>> +                                 size,
>>>> +                                 min_block_size,
>>>> +                                 fallback_order,
>>>> +                                 flags);
>>>> +                if (!IS_ERR(block)) {
>>>> +                    order = fallback_order;
>>>> +                    break;
>>>> +                }
>>>>               }
>>>> +
>>>> +            /*
>>>> +             * Try contiguous block allocation through
>>>> +             * try harder method.
>>>> +             */
>>>> +            if (flags & GPU_BUDDY_CONTIGUOUS_ALLOCATION &&
>>>> +                !(flags & GPU_BUDDY_RANGE_ALLOCATION))
>>>> +                return __alloc_contig_try_harder(mm,
>>>> +                                 original_size,
>>>> +                                 original_min_size,
>>>> +                                 blocks);
>>>> +            err = -ENOSPC;
>>>> +            goto err_free;
>>>>           } while (1);
>>>>             mark_allocated(mm, block);
>>>> diff --git a/include/linux/gpu_buddy.h b/include/linux/gpu_buddy.h
>>>> index 07ac65db6d2e..7ad817c69ec6 100644
>>>> --- a/include/linux/gpu_buddy.h
>>>> +++ b/include/linux/gpu_buddy.h
>>>> @@ -11,6 +11,7 @@
>>>>   #include <linux/slab.h>
>>>>   #include <linux/sched.h>
>>>>   #include <linux/rbtree.h>
>>>> +#include <linux/rbtree_augmented.h>
>>>>     #define GPU_BUDDY_RANGE_ALLOCATION        BIT(0)
>>>>   #define GPU_BUDDY_TOPDOWN_ALLOCATION        BIT(1)
>>>> @@ -58,6 +59,7 @@ struct gpu_buddy_block {
>>>>       };
>>>>         struct list_head tmp_link;
>>>> +    unsigned int subtree_max_alignment;
>>>>   };
>>>>     /* Order-zero must be at least SZ_4K */
>>>>
>>>> base-commit: 9d757669b2b22cd224c334924f798393ffca537c
>>>
>>
>



end of thread, other threads:[~2026-02-17 10:16 UTC | newest]

Thread overview: 11+ messages
-- links below jump to the message on this page --
2026-02-09  8:30 [PATCH v3 1/2] drm/buddy: Improve offset-aligned allocation handling Arunpravin Paneer Selvam
2026-02-09  8:30 ` [PATCH v3 2/2] drm/buddy: Add KUnit test for offset-aligned allocations Arunpravin Paneer Selvam
2026-02-09 19:23   ` kernel test robot
2026-02-09 19:26   ` kernel test robot
2026-02-09 21:20   ` kernel test robot
2026-02-09  9:46 ` ✓ i915.CI.BAT: success for series starting with [v3,1/2] drm/buddy: Improve offset-aligned allocation handling Patchwork
2026-02-09 13:22 ` ✗ i915.CI.Full: failure " Patchwork
2026-02-10 16:26 ` [PATCH v3 1/2] " Matthew Auld
2026-02-17  6:03   ` Arunpravin Paneer Selvam
2026-02-17 10:01     ` Matthew Auld
2026-02-17 10:16       ` Arunpravin Paneer Selvam
