From: Matthew Auld <matthew.auld@intel.com>
To: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>,
christian.koenig@amd.com, dri-devel@lists.freedesktop.org,
intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
amd-gfx@lists.freedesktop.org
Cc: alexander.deucher@amd.com
Subject: Re: [PATCH v4 2/2] drm/buddy: Add KUnit test for offset-aligned allocations
Date: Tue, 24 Feb 2026 17:32:18 +0000
Message-ID: <f529e955-2db2-4fab-ad46-5345febf270f@intel.com>
In-Reply-To: <20260217113900.10675-2-Arunpravin.PaneerSelvam@amd.com>
On 17/02/2026 11:39, Arunpravin Paneer Selvam wrote:
> Add KUnit test to validate offset-aligned allocations in the DRM buddy
> allocator.
>
> Validate offset-aligned allocation:
> The test covers allocations with sizes smaller than the alignment constraint
> and verifies correct size preservation, offset alignment, and behavior across
> multiple allocation sizes. It also exercises fragmentation by freeing
> alternating blocks and confirms that allocation fails once all aligned offsets
> are consumed.
>
> Stress subtree_max_alignment propagation:
> Exercise subtree_max_alignment tracking by allocating blocks with descending
> alignment constraints and freeing them in reverse order. This verifies that
> free-tree augmentation correctly propagates the maximum offset alignment
> present in each subtree at every stage.
>
> v2:
> - Move the patch to gpu/tests/gpu_buddy_test.c file.
>
> v3:
> - Fixed build warnings reported by kernel test robot <lkp@intel.com>
>
> Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
> ---
> drivers/gpu/tests/gpu_buddy_test.c | 167 +++++++++++++++++++++++++++++
> 1 file changed, 167 insertions(+)
>
> diff --git a/drivers/gpu/tests/gpu_buddy_test.c b/drivers/gpu/tests/gpu_buddy_test.c
> index 450e71deed90..2901d43f4bae 100644
> --- a/drivers/gpu/tests/gpu_buddy_test.c
> +++ b/drivers/gpu/tests/gpu_buddy_test.c
> @@ -21,6 +21,171 @@ static inline u64 get_size(int order, u64 chunk_size)
> return (1 << order) * chunk_size;
> }
>
> +static void gpu_test_buddy_subtree_offset_alignment_stress(struct kunit *test)
> +{
> + struct gpu_buddy_block *block;
> + struct rb_node *node = NULL;
> + const u64 mm_size = SZ_2M;
> + const u64 alignments[] = {
> + SZ_1M,
> + SZ_512K,
> + SZ_256K,
> + SZ_128K,
> + SZ_64K,
> + SZ_32K,
> + SZ_16K,
> + SZ_8K,
> + };
> +
Nit: extra newline
> + struct list_head allocated[ARRAY_SIZE(alignments)];
> + unsigned int i, order, max_subtree_align = 0;
> + struct gpu_buddy mm;
> + int ret, tree;
> +
> + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
> + "buddy_init failed\n");
> +
> + for (i = 0; i < ARRAY_SIZE(allocated); i++)
> + INIT_LIST_HEAD(&allocated[i]);
> +
> + /*
> + * Exercise subtree_max_alignment tracking by allocating blocks with descending
> + * alignment constraints and freeing them in reverse order. This verifies that
> + * free-tree augmentation correctly propagates the maximum offset alignment
> + * present in each subtree at every stage.
> + */
> +
> + for (i = 0; i < ARRAY_SIZE(alignments); i++) {
> + struct gpu_buddy_block *root = NULL;
> + unsigned int expected;
> + u64 align;
> +
> + align = alignments[i];
> + expected = ilog2(align) - 1;
> +
> + for (;;) {
> + ret = gpu_buddy_alloc_blocks(&mm,
> + 0, mm_size,
> + SZ_4K, align,
> + &allocated[i],
> + 0);
> + if (ret)
> + break;
> +
> + block = list_last_entry(&allocated[i],
> + struct gpu_buddy_block,
> + link);
> + KUNIT_EXPECT_EQ(test, gpu_buddy_block_offset(block) & (align - 1), 0ULL);
Perhaps simpler to use IS_ALIGNED(offset, align)?
> + }
> +
> + for (order = mm.max_order + 1; order-- > 0 && !root; ) {
This is maybe a bit hard to read? Perhaps:
for (order = mm.max_order; order >= 0 && !root; order--)
And make order an int? Otherwise order >= 0 is always true for an unsigned type and the loop never terminates.
> + for (tree = 0; tree < 2; tree++) {
> + node = mm.free_trees[tree][order].rb_node;
> + if (node) {
> + root = container_of(node,
> + struct gpu_buddy_block,
> + rb);
> + break;
> + }
> + }
> + }
> +
> + KUNIT_ASSERT_NOT_NULL(test, root);
> + KUNIT_EXPECT_EQ(test, root->subtree_max_alignment, expected);
> + }
> +
> + for (i = ARRAY_SIZE(alignments); i-- > 0; ) {
> + gpu_buddy_free_list(&mm, &allocated[i], 0);
> +
> + for (order = 0; order <= mm.max_order; order++) {
> + for (tree = 0; tree < 2; tree++) {
> + node = mm.free_trees[tree][order].rb_node;
> + if (!node)
> + continue;
> +
> + block = container_of(node, struct gpu_buddy_block, rb);
> + max_subtree_align = max(max_subtree_align,
> + block->subtree_max_alignment);
> + }
> + }
> +
> + KUNIT_EXPECT_GE(test, max_subtree_align, ilog2(alignments[i]));
> + }
> +
> + gpu_buddy_fini(&mm);
> +}
> +
> +static void gpu_test_buddy_offset_aligned_allocation(struct kunit *test)
> +{
> + struct gpu_buddy_block *block, *tmp;
> + int num_blocks, i, count = 0;
> + LIST_HEAD(allocated);
> + struct gpu_buddy mm;
> + u64 mm_size = SZ_4M;
> + LIST_HEAD(freed);
> +
> + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_init(&mm, mm_size, SZ_4K),
> + "buddy_init failed\n");
> +
> + num_blocks = mm_size / SZ_256K;
> + /*
> + * Allocate multiple sizes under a fixed offset alignment.
> + * Ensures alignment handling is independent of allocation size and
> + * exercises subtree max-alignment pruning for small requests.
> + */
> + for (i = 0; i < num_blocks; i++)
> + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_8K, SZ_256K,
> + &allocated, 0),
> + "buddy_alloc hit an error size=%u\n", SZ_8K);
> +
> + list_for_each_entry(block, &allocated, link) {
> + /* Ensure the allocated block uses the expected 8 KB size */
> + KUNIT_EXPECT_EQ(test, gpu_buddy_block_size(&mm, block), SZ_8K);
> + /* Ensure the block starts at a 256 KB-aligned offset for proper alignment */
> + KUNIT_EXPECT_EQ(test, gpu_buddy_block_offset(block) & (SZ_256K - 1), 0ULL);
IS_ALIGNED() ?
> + }
> + gpu_buddy_free_list(&mm, &allocated, 0);
> +
> + for (i = 0; i < num_blocks; i++)
> + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_16K, SZ_256K,
> + &allocated, 0),
> + "buddy_alloc hit an error size=%u\n", SZ_16K);
> +
> + list_for_each_entry(block, &allocated, link) {
> + /* Ensure the allocated block uses the expected 16 KB size */
> + KUNIT_EXPECT_EQ(test, gpu_buddy_block_size(&mm, block), SZ_16K);
> + /* Ensure the block starts at a 256 KB-aligned offset for proper alignment */
> + KUNIT_EXPECT_EQ(test, gpu_buddy_block_offset(block) & (SZ_256K - 1), 0ULL);
IS_ALIGNED() ?
Anyway:
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
> + }
> +
> + /*
> + * Free alternating aligned blocks to introduce fragmentation.
> + * Ensures offset-aligned allocations remain valid after frees and
> + * verifies subtree max-alignment metadata is correctly maintained.
> + */
> + list_for_each_entry_safe(block, tmp, &allocated, link) {
> + if (count % 2 == 0)
> + list_move_tail(&block->link, &freed);
> + count++;
> + }
> + gpu_buddy_free_list(&mm, &freed, 0);
> +
> + for (i = 0; i < num_blocks / 2; i++)
> + KUNIT_ASSERT_FALSE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_16K, SZ_256K,
> + &allocated, 0),
> + "buddy_alloc hit an error size=%u\n", SZ_16K);
> +
> + /*
> + * Allocate with offset alignment after all slots are used; must fail.
> + * Confirms that no aligned offsets remain.
> + */
> + KUNIT_ASSERT_TRUE_MSG(test, gpu_buddy_alloc_blocks(&mm, 0, mm_size, SZ_16K, SZ_256K,
> + &allocated, 0),
> + "buddy_alloc hit an error size=%u\n", SZ_16K);
> + gpu_buddy_free_list(&mm, &allocated, 0);
> + gpu_buddy_fini(&mm);
> +}
> +
> static void gpu_test_buddy_fragmentation_performance(struct kunit *test)
> {
> struct gpu_buddy_block *block, *tmp;
> @@ -912,6 +1077,8 @@ static struct kunit_case gpu_buddy_tests[] = {
> KUNIT_CASE(gpu_test_buddy_alloc_range_bias),
> KUNIT_CASE(gpu_test_buddy_fragmentation_performance),
> KUNIT_CASE(gpu_test_buddy_alloc_exceeds_max_order),
> + KUNIT_CASE(gpu_test_buddy_offset_aligned_allocation),
> + KUNIT_CASE(gpu_test_buddy_subtree_offset_alignment_stress),
> {}
> };
>