From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>,
	 intel-xe@lists.freedesktop.org
Cc: Matthew Brost <matthew.brost@intel.com>,
	Maarten Lankhorst <dev@lankhorst.se>,
	Michal Wajdeczko <michal.wajdeczko@intel.com>
Subject: Re: [PATCH v3 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_mem_pool manager
Date: Thu, 02 Apr 2026 10:21:41 +0200	[thread overview]
Message-ID: <f3b1ae54cdc45f9569b965b4f302d6cdc679dd8e.camel@linux.intel.com> (raw)
In-Reply-To: <20260401161528.1990499-3-satyanarayana.k.v.p@intel.com>

On Wed, 2026-04-01 at 16:15 +0000, Satyanarayana K V P wrote:
> New APIs xe_pool_bb_alloc(), xe_pool_bb_insert() and
> xe_pool_bb_free() are created to manage allocations from the
> xe_mem_pool manager.
> 
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <dev@lankhorst.se>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> 
> ---
> V2 -> V3:
> - Renamed xe_mm_suballoc to xe_mem_pool.
> 
> V1 -> V2:
> - Renamed xe_drm_mm to xe_mm_suballoc (Thomas)
> - Removed memset from xe_drm_mm_bb_insert() (Matt).
> ---

Claude's review:

> diff --git a/drivers/gpu/drm/xe/xe_bb.c b/drivers/gpu/drm/xe/xe_bb.c

[ ... ]

> +/**
> + * xe_pool_bb_alloc() - Allocate a new batch buffer structure for drm_mm
> + *
> + * Allocates a new xe_pool_bb structure for use with xe_pool memory
> + * management.
> + *
> + * Returns: Batch buffer structure or an ERR_PTR(-ENOMEM).
> + */
> +struct xe_mem_pool_bb *xe_pool_bb_alloc(void)

The kerneldoc says "Allocates a new xe_pool_bb structure" but the
returned type is struct xe_mem_pool_bb. The type xe_pool_bb does not
exist. Should the description read "xe_mem_pool_bb structure"?
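
Something along these lines (just a sketch of the wording fix, everything
else kept as in your patch):

    /**
     * xe_pool_bb_alloc() - Allocate a new batch buffer structure for drm_mm
     *
     * Allocates a new xe_mem_pool_bb structure for use with xe_pool memory
     * management.
     *
     * Returns: Batch buffer structure or an ERR_PTR(-ENOMEM).
     */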

> +/**
> + * xe_pool_bb_insert() - Initialize a batch buffer and insert a hole
> + * @bb: Batch buffer structure to initialize
> + * @bb_pool: drm_mm manager to allocate from
> + * @dwords: Number of dwords to be allocated
> + *
> + * Initializes the batch buffer by allocating memory from the specified
> + * drm_mm manager.
> + *
> + * Return: 0 on success, negative error code on failure.
> + */
> +int xe_pool_bb_insert(struct xe_mem_pool_bb *bb,
> +		      struct xe_mem_pool_manager *bb_pool, u32 dwords)

The one-line description says "insert a hole", but in drm_mm
terminology a hole is available free space. This function calls
xe_mem_pool_insert_node() which calls drm_mm_insert_node() to
allocate a node from within a hole. Should the description say
"insert a node" instead of "insert a hole"?

Also, xe_mem_pool_insert_node() and xe_mem_pool_remove_node() have
no internal locking. The current caller in xe_migrate.c wraps both
xe_pool_bb_insert() and xe_pool_bb_free() inside:

    scoped_guard(mutex, xe_mem_pool_bo_swap_guard(bb_pool)) {

but neither function documents this requirement. Would it make sense
to add a locking precondition to the kerneldoc of both functions,
similar to the _locked suffix convention used elsewhere in this file?
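
For reference, a rough sketch combining the node wording and a Context:
line (exact wording up to you; I'm assuming xe_mem_pool_bo_swap_guard()
is the intended way for callers to obtain the lock):

    /**
     * xe_pool_bb_insert() - Initialize a batch buffer and insert a node
     * @bb: Batch buffer structure to initialize
     * @bb_pool: drm_mm manager to allocate from
     * @dwords: Number of dwords to be allocated
     *
     * Initializes the batch buffer by allocating a node from the specified
     * drm_mm manager.
     *
     * Context: Caller must hold the mutex returned by
     *          xe_mem_pool_bo_swap_guard(@bb_pool).
     *
     * Return: 0 on success, negative error code on failure.
     */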

> +/**
> + * xe_pool_bb_free() - Free a batch buffer allocated with drm_mm
> + * @bb: Batch buffer structure to free
> + */
> +void xe_pool_bb_free(struct xe_mem_pool_bb *bb)

Same locking question as xe_pool_bb_insert() above --
drm_mm_remove_node() requires external locking and the caller holds
swap_guard, but this is not documented here.
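
On top of documenting it, a lockdep assertion would catch violations at
runtime, at least in xe_pool_bb_insert() where the pool is at hand
(untested sketch, assuming xe_mem_pool_bo_swap_guard() returns the
struct mutex the caller locks):

    int xe_pool_bb_insert(struct xe_mem_pool_bb *bb,
    		      struct xe_mem_pool_manager *bb_pool, u32 dwords)
    {
    	lockdep_assert_held(xe_mem_pool_bo_swap_guard(bb_pool));
    	/* ... */
    }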

Also, this isn't a bug, but the new functions are named xe_pool_bb_*
while the type they operate on is struct xe_mem_pool_bb and the
manager type is struct xe_mem_pool_manager. The existing infrastructure
uses the xe_mem_pool_ prefix consistently. Would xe_mem_pool_bb_alloc(),
xe_mem_pool_bb_insert(), and xe_mem_pool_bb_free() be more consistent
with the rest of the subsystem?
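
That is, purely mechanical renames of the prototypes quoted above:

    struct xe_mem_pool_bb *xe_mem_pool_bb_alloc(void);
    int xe_mem_pool_bb_insert(struct xe_mem_pool_bb *bb,
    			      struct xe_mem_pool_manager *bb_pool, u32 dwords);
    void xe_mem_pool_bb_free(struct xe_mem_pool_bb *bb);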

/Thomas

Thread overview: 15+ messages
2026-04-01 16:15 [PATCH v3 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
2026-04-01 16:15 ` [PATCH v3 1/3] drm/xe/mm: add XE MEM POOL manager with shadow support Satyanarayana K V P
2026-04-02  1:20   ` Matthew Brost
2026-04-02  8:18   ` Thomas Hellström
2026-04-02 15:17   ` Michal Wajdeczko
2026-04-01 16:15 ` [PATCH v3 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_mem_pool manager Satyanarayana K V P
2026-04-02  1:22   ` Matthew Brost
2026-04-02  8:21   ` Thomas Hellström [this message]
2026-04-02 15:30   ` Michal Wajdeczko
2026-04-01 16:15 ` [PATCH v3 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write Satyanarayana K V P
2026-04-02  8:29   ` Thomas Hellström
2026-04-01 16:20 ` ✗ CI.checkpatch: warning for USE drm mm instead of drm SA for CCS read/write (rev3) Patchwork
2026-04-01 16:21 ` ✓ CI.KUnit: success " Patchwork
2026-04-01 16:56 ` ✗ Xe.CI.BAT: failure " Patchwork
2026-04-01 21:11 ` ✗ Xe.CI.FULL: " Patchwork
