From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>,
intel-xe@lists.freedesktop.org
Cc: Matthew Brost <matthew.brost@intel.com>,
Maarten Lankhorst <dev@lankhorst.se>,
Michal Wajdeczko <michal.wajdeczko@intel.com>
Subject: Re: [PATCH v3 1/3] drm/xe/mm: add XE MEM POOL manager with shadow support
Date: Thu, 02 Apr 2026 10:18:25 +0200 [thread overview]
Message-ID: <3b7f21be3c0701b0661a3e6f73cc28dc73e710d4.camel@linux.intel.com> (raw)
In-Reply-To: <20260401161528.1990499-2-satyanarayana.k.v.p@intel.com>
On Wed, 2026-04-01 at 16:15 +0000, Satyanarayana K V P wrote:
> Add a xe_mem_pool manager to allocate sub-ranges from a BO-backed
> pool
> using drm_mm.
>
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <dev@lankhorst.se>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>
> ---
> V2 -> V3:
> - Renamed xe_mm_suballoc to xe_mem_pool_manager.
> - Splitted xe_mm_suballoc_manager_init() into xe_mem_pool_init() and
> xe_mem_pool_shadow_init() (Michal)
> - Made xe_mm_sa_manager structure private. (Matt)
> - Introduced init flags to initialize allocated pools.
>
> V1 -> V2:
> - Renamed xe_drm_mm to xe_mm_suballoc (Thomas)
> - Removed memset during manager init and insert (Matt)
Hey,
> diff --git a/drivers/gpu/drm/xe/xe_mem_pool.c b/drivers/gpu/drm/xe/xe_mem_pool.c
> new file mode 100644
[ ... ]
> +static void xe_mem_pool_fini(struct drm_device *drm, void *arg)
> +{
> + struct xe_mem_pool_manager *pool_manager = arg;
> +
> + drm_mm_takedown(&pool_manager->base);
> +
> + if (pool_manager->resv_alloc) {
> + drm_mm_remove_node(pool_manager->resv_alloc);
> + kfree(pool_manager->resv_alloc);
> + }
The node removal happens after drm_mm_takedown(), but drm_mm_takedown()
requires the allocator to be clean before it is called.

When XE_MEM_POOL_BO_FLAG_INIT_CMD_BB_END_HIGHEST is used,
xe_mem_pool_init() calls xe_mem_pool_init_flags(), which inserts a
reserved node and stores it in pool_manager->resv_alloc. On device
teardown, xe_mem_pool_fini() is invoked with that node still allocated
inside the drm_mm.
drm_mm_takedown() in drivers/gpu/drm/drm_mm.c does:

void drm_mm_takedown(struct drm_mm *mm)
{
	if (WARN(!drm_mm_clean(mm),
		 "Memory manager not clean during takedown.\n"))
		show_leaks(mm);
}
drm_mm_clean() returns list_empty(drm_mm_nodes(mm)), which is false
when resv_alloc is still inserted. So the WARN fires on every teardown
of a pool created with BB_END_HIGHEST.

Shouldn't drm_mm_remove_node(pool_manager->resv_alloc) be called before
drm_mm_takedown(), so the allocator is clean at the point of takedown?
[ ... ]
> +/**
> + * xe_mem_pool_remove_node() - Remove a node from the DRM MM manager.
> + * @node: the DRM MM node to remove.
> + *
> + * Return: None.
> + */
> +void xe_mem_pool_remove_node(struct drm_mm_node *node)
> +{
> + return drm_mm_remove_node(node);
> +}
This isn't a bug, but returning a void expression from a void function
is unusual - the return statement implies a return value where there is
none. The body could just be drm_mm_remove_node(node); without the
return.
Thread overview: 15+ messages
2026-04-01 16:15 [PATCH v3 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
2026-04-01 16:15 ` [PATCH v3 1/3] drm/xe/mm: add XE MEM POOL manager with shadow support Satyanarayana K V P
2026-04-02 1:20 ` Matthew Brost
2026-04-02 8:18 ` Thomas Hellström [this message]
2026-04-02 15:17 ` Michal Wajdeczko
2026-04-01 16:15 ` [PATCH v3 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_mem_pool manager Satyanarayana K V P
2026-04-02 1:22 ` Matthew Brost
2026-04-02 8:21 ` Thomas Hellström
2026-04-02 15:30 ` Michal Wajdeczko
2026-04-01 16:15 ` [PATCH v3 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write Satyanarayana K V P
2026-04-02 8:29 ` Thomas Hellström
2026-04-01 16:20 ` ✗ CI.checkpatch: warning for USE drm mm instead of drm SA for CCS read/write (rev3) Patchwork
2026-04-01 16:21 ` ✓ CI.KUnit: success " Patchwork
2026-04-01 16:56 ` ✗ Xe.CI.BAT: failure " Patchwork
2026-04-01 21:11 ` ✗ Xe.CI.FULL: " Patchwork