From: Matthew Auld <matthew.auld@intel.com>
To: Arunpravin <Arunpravin.PaneerSelvam@amd.com>,
dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
amd-gfx@lists.freedesktop.org
Cc: alexander.deucher@amd.com, tzimmermann@suse.de, christian.koenig@amd.com
Subject: Re: [Intel-gfx] [PATCH 7/7] drm/selftests: add drm buddy pathological testcase
Date: Tue, 8 Feb 2022 10:26:55 +0000
Message-ID: <38f0f5d3-2bdf-850f-90ff-688d55c29401@intel.com>
In-Reply-To: <20220203133234.3350-7-Arunpravin.PaneerSelvam@amd.com>
On 03/02/2022 13:32, Arunpravin wrote:
> Create a pot-sized (power-of-two) mm, then allocate one of each
> possible order within it. This should leave the mm with exactly one
> page left. Free the largest block, then whittle down again.
> Eventually we will have a fully 50% fragmented mm.
>
> Signed-off-by: Arunpravin <Arunpravin.PaneerSelvam@amd.com>
> ---
> .../gpu/drm/selftests/drm_buddy_selftests.h | 1 +
> drivers/gpu/drm/selftests/test-drm_buddy.c | 136 ++++++++++++++++++
> 2 files changed, 137 insertions(+)
>
> diff --git a/drivers/gpu/drm/selftests/drm_buddy_selftests.h b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
> index 411d072cbfc5..455b756c4ae5 100644
> --- a/drivers/gpu/drm/selftests/drm_buddy_selftests.h
> +++ b/drivers/gpu/drm/selftests/drm_buddy_selftests.h
> @@ -12,3 +12,4 @@ selftest(buddy_alloc_range, igt_buddy_alloc_range)
> selftest(buddy_alloc_optimistic, igt_buddy_alloc_optimistic)
> selftest(buddy_alloc_pessimistic, igt_buddy_alloc_pessimistic)
> selftest(buddy_alloc_smoke, igt_buddy_alloc_smoke)
> +selftest(buddy_alloc_pathological, igt_buddy_alloc_pathological)
> diff --git a/drivers/gpu/drm/selftests/test-drm_buddy.c b/drivers/gpu/drm/selftests/test-drm_buddy.c
> index 2074e8c050a4..b2d0313a4bc5 100644
> --- a/drivers/gpu/drm/selftests/test-drm_buddy.c
> +++ b/drivers/gpu/drm/selftests/test-drm_buddy.c
> @@ -338,6 +338,142 @@ static void igt_mm_config(u64 *size, u64 *chunk_size)
> *size = (u64)s << 12;
> }
>
> +static int igt_buddy_alloc_pathological(void *arg)
> +{
> + u64 mm_size, size, min_page_size, start = 0;
> + struct drm_buddy_block *block;
> + const int max_order = 3;
> + unsigned long flags = 0;
> + int order, top, err;
> + struct drm_buddy mm;
> + LIST_HEAD(blocks);
> + LIST_HEAD(holes);
> + LIST_HEAD(tmp);
> +
> + /*
> + * Create a pot-sized mm, then allocate one of each possible
> + * order within. This should leave the mm with exactly one
> + * page left. Free the largest block, then whittle down again.
> + * Eventually we will have a fully 50% fragmented mm.
> + */
> +
> + mm_size = PAGE_SIZE << max_order;
> + err = drm_buddy_init(&mm, mm_size, PAGE_SIZE);
> + if (err) {
> + pr_err("buddy_init failed(%d)\n", err);
> + return err;
> + }
> + BUG_ON(mm.max_order != max_order);
> +
> + for (top = max_order; top; top--) {
> + /* Make room by freeing the largest allocated block */
> + block = list_first_entry_or_null(&blocks, typeof(*block), link);
> + if (block) {
> + list_del(&block->link);
> + drm_buddy_free_block(&mm, block);
> + }
> +
> + for (order = top; order--; ) {
> + size = min_page_size = get_size(order, PAGE_SIZE);
> + err = drm_buddy_alloc_blocks(&mm, start, mm_size, size,
> + min_page_size, &tmp, flags);
> + if (err) {
> + pr_info("buddy_alloc hit -ENOMEM with order=%d, top=%d\n",
> + order, top);
> + goto err;
> + }
> +
> + block = list_first_entry_or_null(&tmp,
> + struct drm_buddy_block,
> + link);
> + if (!block) {
> + pr_err("alloc_blocks has no blocks\n");
> + err = -EINVAL;
> + goto err;
> + }
> +
> + list_del(&block->link);
> + list_add_tail(&block->link, &blocks);
> + }
> +
> + /* There should be one final page for this sub-allocation */
> + size = min_page_size = get_size(0, PAGE_SIZE);
> + err = drm_buddy_alloc_blocks(&mm, start, mm_size, size, min_page_size, &tmp, flags);
> + if (err) {
> + pr_info("buddy_alloc hit -ENOME for hole\n");
Typo: s/ENOME/ENOMEM/

Reviewed-by: Matthew Auld <matthew.auld@intel.com>
> + goto err;
> + }
> +
> + block = list_first_entry_or_null(&tmp,
> + struct drm_buddy_block,
> + link);
> + if (!block) {
> + pr_err("alloc_blocks has no blocks\n");
> + err = -EINVAL;
> + goto err;
> + }
> +
> + list_del(&block->link);
> + list_add_tail(&block->link, &holes);
> +
> + size = min_page_size = get_size(top, PAGE_SIZE);
> + err = drm_buddy_alloc_blocks(&mm, start, mm_size, size, min_page_size, &tmp, flags);
> + if (!err) {
> + pr_info("buddy_alloc unexpectedly succeeded at top-order %d/%d, it should be full!",
> + top, max_order);
> + block = list_first_entry_or_null(&tmp,
> + struct drm_buddy_block,
> + link);
> + if (!block) {
> + pr_err("alloc_blocks has no blocks\n");
> + err = -EINVAL;
> + goto err;
> + }
> +
> + list_del(&block->link);
> + list_add_tail(&block->link, &blocks);
> + err = -EINVAL;
> + goto err;
> + }
> + }
> +
> + drm_buddy_free_list(&mm, &holes);
> +
> + /* Nothing larger than blocks of chunk_size now available */
> + for (order = 1; order <= max_order; order++) {
> + size = min_page_size = get_size(order, PAGE_SIZE);
> + err = drm_buddy_alloc_blocks(&mm, start, mm_size, size, min_page_size, &tmp, flags);
> + if (!err) {
> + pr_info("buddy_alloc unexpectedly succeeded at order %d, it should be full!",
> + order);
> + block = list_first_entry_or_null(&tmp,
> + struct drm_buddy_block,
> + link);
> + if (!block) {
> + pr_err("alloc_blocks has no blocks\n");
> + err = -EINVAL;
> + goto err;
> + }
> +
> + list_del(&block->link);
> + list_add_tail(&block->link, &blocks);
> + err = -EINVAL;
> + goto err;
> + }
> + }
> +
> + if (err) {
> + pr_info("%s - succeeded\n", __func__);
> + err = 0;
> + }
> +
> +err:
> + list_splice_tail(&holes, &blocks);
> + drm_buddy_free_list(&mm, &blocks);
> + drm_buddy_fini(&mm);
> + return err;
> +}
> +
> static int igt_buddy_alloc_smoke(void *arg)
> {
> u64 mm_size, min_page_size, chunk_size, start = 0;