public inbox for intel-xe@lists.freedesktop.org
From: Michal Wajdeczko <michal.wajdeczko@intel.com>
To: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>,
	<intel-xe@lists.freedesktop.org>
Cc: "Matthew Brost" <matthew.brost@intel.com>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Maarten Lankhorst" <dev@lankhorst.se>
Subject: Re: [PATCH v3 1/3] drm/xe/mm: add XE MEM POOL manager with shadow support
Date: Thu, 2 Apr 2026 17:17:52 +0200	[thread overview]
Message-ID: <38164aab-e838-4a10-b16b-0eafc6c858a4@intel.com> (raw)
In-Reply-To: <20260401161528.1990499-2-satyanarayana.k.v.p@intel.com>

nit: title

	drm/xe: Add memory pool with shadow support

On 4/1/2026 6:15 PM, Satyanarayana K V P wrote:
> Add a xe_mem_pool manager to allocate sub-ranges from a BO-backed pool
> using drm_mm.
> 
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Maarten Lankhorst <dev@lankhorst.se>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> 
> ---
> V2 -> V3:
> - Renamed xe_mm_suballoc to xe_mem_pool_manager.
> - Splitted xe_mm_suballoc_manager_init() into xe_mem_pool_init() and
> xe_mem_pool_shadow_init() (Michal)

well, my point was that we could have two separate components:

 1. xe_pool - provides simple sub-allocations, similar to xe_sa but without the use of fences
 2. xe_shadow_pool - built on top of xe_pool, provides the "shadow bo" feature (as needed by CCS)

but all of this could wait, as any refactoring (and reuse in xe_guc_buf) can come later, after fixing the hot CCS issue
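e.g. something like this toy sketch of the layering (standalone, not kernel code - the trivial bump allocator and all names are just placeholders to show the split):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* 1. xe_pool: plain sub-allocator over a flat buffer (stand-in for the BO) */
struct xe_pool {
	char *base;	/* CPU address of the backing storage */
	size_t size;	/* total pool size */
	size_t used;	/* trivial bump allocator, placeholder for drm_mm */
};

static void xe_pool_init(struct xe_pool *p, char *base, size_t size)
{
	p->base = base;
	p->size = size;
	p->used = 0;
}

/* returns offset of the sub-range, or (size_t)-1 on failure */
static size_t xe_pool_alloc(struct xe_pool *p, size_t size)
{
	size_t offset;

	if (p->used + size > p->size)
		return (size_t)-1;
	offset = p->used;
	p->used += size;
	return offset;
}

/* 2. xe_shadow_pool: built on top of xe_pool, adds the shadow copy */
struct xe_shadow_pool {
	struct xe_pool pool;	/* reuses the plain sub-allocator as-is */
	char *shadow;		/* second backing buffer of the same size */
};

static void xe_shadow_pool_sync(struct xe_shadow_pool *sp, size_t offset, size_t len)
{
	/* mirror a sub-range from the primary into the shadow */
	memcpy(sp->shadow + offset, sp->pool.base + offset, len);
}
```

the point being that xe_pool stays oblivious to the shadow, and everything CCS-specific lives only in the layer on top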

> - Made xe_mm_sa_manager structure private. (Matt)
> - Introduced init flags to initialize allocated pools.
> 
> V1 -> V2:
> - Renamed xe_drm_mm to xe_mm_suballoc (Thomas)
> - Removed memset during manager init and insert (Matt)
> ---
>  drivers/gpu/drm/xe/Makefile            |   1 +
>  drivers/gpu/drm/xe/xe_mem_pool.c       | 379 +++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_mem_pool.h       |  33 +++
>  drivers/gpu/drm/xe/xe_mem_pool_types.h |  30 ++
>  4 files changed, 443 insertions(+)
>  create mode 100644 drivers/gpu/drm/xe/xe_mem_pool.c
>  create mode 100644 drivers/gpu/drm/xe/xe_mem_pool.h
>  create mode 100644 drivers/gpu/drm/xe/xe_mem_pool_types.h
> 
> diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
> index 9dacb0579a7d..8e31b14239ec 100644
> --- a/drivers/gpu/drm/xe/Makefile
> +++ b/drivers/gpu/drm/xe/Makefile
> @@ -88,6 +88,7 @@ xe-y += xe_bb.o \
>  	xe_irq.o \
>  	xe_late_bind_fw.o \
>  	xe_lrc.o \
> +	xe_mem_pool.o \
>  	xe_migrate.o \
>  	xe_mmio.o \
>  	xe_mmio_gem.o \
> diff --git a/drivers/gpu/drm/xe/xe_mem_pool.c b/drivers/gpu/drm/xe/xe_mem_pool.c
> new file mode 100644
> index 000000000000..335a70876bf1
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_mem_pool.c
> @@ -0,0 +1,379 @@
> +// SPDX-License-Identifier: MIT
> +/*
> + * Copyright © 2026 Intel Corporation
> + */
> +
> +#include <linux/kernel.h>
> +
> +#include <drm/drm_managed.h>
> +
> +#include "instructions/xe_mi_commands.h"
> +#include "xe_bo.h"
> +#include "xe_device_types.h"
> +#include "xe_map.h"
> +#include "xe_mem_pool.h"
> +#include "xe_mem_pool_types.h"
> +
> +/**
> + * struct xe_mem_pool_manager - Memory Suballoc manager.

we can drop the _manager suffix - there is just a "pool" instance we care about

> + */
> +

extra line

> +struct xe_mem_pool_manager {
> +	/** @base: Range allocator over [0, @size) in bytes */
> +	struct drm_mm base;
> +	/** @bo: Active pool BO (GGTT-pinned, CPU-mapped). */
> +	struct xe_bo *bo;
> +	/** @shadow: Shadow BO for atomic command updates. */
> +	struct xe_bo *shadow;

hmm, this "atomic command updates" seems to be quite a big extension of
the original goal: "allocate sub-ranges from a BO-backed pool"

> +	/** @swap_guard: Timeline guard updating @bo and @shadow */
> +	struct mutex swap_guard;
> +	/** @cpu_addr: CPU virtual address of the active BO. */
> +	void *cpu_addr;
> +	/** @resv_alloc: Reserved allocation. */
> +	struct drm_mm_node *resv_alloc;

do we need this to be dynamically allocated?
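an embedded node would drop the kzalloc()/kfree() pair entirely, with the NULL check replaced by an "is it in use" flag (sketch with a stand-in node type; drm_mm should already track this via drm_mm_node_allocated(), worth double-checking):

```c
#include <stdbool.h>

/* stand-in for struct drm_mm_node */
struct mm_node {
	unsigned long start;
	unsigned long size;
	bool allocated;	/* drm_mm tracks this; see drm_mm_node_allocated() */
};

struct pool_manager {
	/* embedded instead of a struct drm_mm_node * + kzalloc()/kfree() */
	struct mm_node resv_alloc;
};

static bool resv_alloc_active(const struct pool_manager *pm)
{
	/* with an embedded node, this replaces the NULL pointer check */
	return pm->resv_alloc.allocated;
}
```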

> +	/** @size: Total size of the managed address space. */
> +	u64 size;

do we need this field? there is xe_bo_size() we can use

> +};
> +
> +static void xe_mem_pool_fini(struct drm_device *drm, void *arg)

no need to use the xe_ prefix in static functions; this could be:

	void fini_pool_action(...

> +{
> +	struct xe_mem_pool_manager *pool_manager = arg;
> +
> +	drm_mm_takedown(&pool_manager->base);

this should be the last step (and CI already complained)
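i.e. nodes must be removed (and any CPU copy freed) before the allocator itself goes away - roughly this ordering, modeled here with stand-in stubs so it can be exercised outside the kernel:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* stand-ins recording the teardown order */
static char teardown_log[64];

static void drm_mm_remove_node_stub(void) { strcat(teardown_log, "remove "); }
static void kvfree_stub(void *p) { free(p); strcat(teardown_log, "kvfree "); }
static void drm_mm_takedown_stub(void) { strcat(teardown_log, "takedown"); }

/* reordered fini: tear down allocations before the allocator itself */
static void pool_fini(void *cpu_copy, int have_resv, int is_iomem)
{
	if (have_resv)
		drm_mm_remove_node_stub();	/* drm_mm_remove_node(resv_alloc) */

	if (is_iomem)
		kvfree_stub(cpu_copy);		/* kvfree(cpu_addr) */

	/* drm_mm_takedown() must run last, once no nodes remain */
	drm_mm_takedown_stub();
}
```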

> +
> +	if (pool_manager->resv_alloc) {
> +		drm_mm_remove_node(pool_manager->resv_alloc);
> +		kfree(pool_manager->resv_alloc);
> +	}
> +
> +	if (pool_manager->bo->vmap.is_iomem)
> +		kvfree(pool_manager->cpu_addr);
> +
> +	pool_manager->bo = NULL;
> +	pool_manager->shadow = NULL;

not sure if this is needed; the pool was also allocated as a managed object
and will be released in the very next drmm action

> +}
> +
> +static int xe_mem_pool_init_flags(struct xe_mem_pool_manager *mm_pool, u32 size, int flags)
> +{
> +	struct xe_bo *bo = mm_pool->bo;
> +	struct drm_mm_node *node;
> +	struct xe_device *xe;
> +	u32 initializer;
> +	int err;
> +
> +	if (!flags)
> +		return 0;
> +
> +	if (flags & XE_MEM_POOL_BO_FLAG_INIT_ZERO_FILL)
> +		initializer = 0;
> +	else if (flags & XE_MEM_POOL_BO_FLAG_INIT_CMD_NOOP ||
> +		 flags & XE_MEM_POOL_BO_FLAG_INIT_CMD_BB_END_HIGHEST)
> +		initializer = MI_NOOP;

this seems to be CCS usecase specific
not sure if this should be part of the generic pool
besides, isn't MI_NOOP == 0x0 anyway?
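assuming the usual MI_INSTR encoding (opcode in the upper bits of the dword), an all-zero dword already is MI_NOOP, so the ZERO_FILL and CMD_NOOP flags look redundant:

```c
#include <stdint.h>

/* MI command encoding as used by i915/xe: opcode placed at bit 23 */
#define MI_INSTR(opcode, flags)	(((uint32_t)(opcode) << 23) | (flags))
#define MI_NOOP			MI_INSTR(0x0, 0)

static uint32_t mi_noop_value(void)
{
	/* zero-filling a buffer and NOOP-filling it are the same thing */
	return MI_NOOP;
}
```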

> +	else
> +		return -EINVAL;

it would be our programming fault, so an assert should be sufficient

> +
> +	xe = tile_to_xe(bo->tile);
> +	if (flags & XE_MEM_POOL_BO_FLAG_INIT_SHADOW_COPY) {

this flag is N/A to plain pool init
and there is no clear separation between supported features (plain vs shadow)

you must decide whether this is a generic or a CCS-specific component

> +		bo = mm_pool->shadow;
> +		xe_map_memset(xe, &bo->vmap, 0, initializer, size);
> +
> +		node = mm_pool->resv_alloc;
> +		xe_map_memcpy_to(xe, &mm_pool->shadow->vmap,
> +				 node->start,
> +				 mm_pool->cpu_addr + node->start,
> +				 node->size);
> +		return 0;
> +	}
> +
> +	xe_map_memset(xe, &bo->vmap, 0, initializer, size);
> +
> +	if (flags & XE_MEM_POOL_BO_FLAG_INIT_CMD_BB_END_HIGHEST) {
> +		node = kzalloc_obj(*node);
> +		if (!node)
> +			return -ENOMEM;
> +
> +		err = drm_mm_insert_node_in_range(&mm_pool->base, node, SZ_4,
> +						  0, 0, 0, size, DRM_MM_INSERT_HIGHEST);

this SZ_4 seems to be very specific to the CCS usecase,
and IMO it does not fit as part of the generic "sub-ranges from a BO-backed pool"

> +		if (err) {
> +			kfree(node);
> +			return err;
> +		}
> +		xe_map_wr(xe, &mm_pool->bo->vmap, node->start, u32, MI_BATCH_BUFFER_END);
> +		mm_pool->resv_alloc = node;
> +	}
> +	return 0;
> +}
> +
> +/**
> + * xe_mem_pool_init() - Initialize a DRM MM pool.

... Initialize memory pool

> + * @tile: the &xe_tile where allocate.
> + * @size: number of bytes to allocate.
> + * @flags: flags to use for BO creation.
> + *
> + * Initializes a DRM MM manager for managing memory allocations on a specific
> + * XE tile. The function allocates a buffer object to back the memory region
> + * managed by the DRM MM manager.
> + *
> + * Return: a pointer to the &xe_mem_pool_manager, or an error pointer on failure.
> + */

maybe we should have two functions:

	int xe_mem_pool_init(struct xe_mem_pool *p, ...)
	struct xe_mem_pool *xe_mem_pool_create(...)
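following the usual kernel pattern where create() is a thin allocate-then-init wrapper over init() - standalone sketch with placeholder types and a plain calloc() standing in for drmm_kzalloc():

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct xe_mem_pool {
	size_t size;
	/* drm_mm base, BO pointer, ... */
};

/* init: caller provides the storage (embedded in a bigger struct) */
static int xe_mem_pool_init_sketch(struct xe_mem_pool *p, size_t size)
{
	if (!size)
		return -22;	/* -EINVAL */
	p->size = size;
	return 0;
}

/* create: allocate + init, the convenience wrapper built on top */
static struct xe_mem_pool *xe_mem_pool_create_sketch(size_t size)
{
	struct xe_mem_pool *p = calloc(1, sizeof(*p));

	if (!p)
		return NULL;
	if (xe_mem_pool_init_sketch(p, size)) {
		free(p);
		return NULL;
	}
	return p;
}
```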


> +struct xe_mem_pool_manager *xe_mem_pool_init(struct xe_tile *tile, u32 size, int flags)
> +{
> +	struct xe_device *xe = tile_to_xe(tile);
> +	struct xe_mem_pool_manager *pool_manager;
> +	struct xe_bo *bo;
> +	int ret;
> +
> +	pool_manager = drmm_kzalloc(&xe->drm, sizeof(*pool_manager), GFP_KERNEL);
> +	if (!pool_manager)
> +		return ERR_PTR(-ENOMEM);
> +
> +	bo = xe_managed_bo_create_pin_map(xe, tile, size,
> +					  XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> +					  XE_BO_FLAG_GGTT |
> +					  XE_BO_FLAG_GGTT_INVALIDATE |
> +					  XE_BO_FLAG_PINNED_NORESTORE);
> +	if (IS_ERR(bo)) {
> +		drm_err(&xe->drm, "Failed to prepare %uKiB BO for DRM MM manager (%pe)\n",

we have a tile here, so:

	xe_tile_err(tile, ...

and this is not about "DRM MM manager"

> +			size / SZ_1K, bo);
> +		return ERR_CAST(bo);
> +	}
> +	pool_manager->bo = bo;
> +	pool_manager->size = size;
> +
> +	if (bo->vmap.is_iomem) {
> +		pool_manager->cpu_addr = kvzalloc(size, GFP_KERNEL);
> +		if (!pool_manager->cpu_addr)
> +			return ERR_PTR(-ENOMEM);
> +	} else {
> +		pool_manager->cpu_addr = bo->vmap.vaddr;
> +	}
> +
> +	drm_mm_init(&pool_manager->base, 0, size);
> +	ret = drmm_add_action_or_reset(&xe->drm, xe_mem_pool_fini, pool_manager);
> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	ret = xe_mem_pool_init_flags(pool_manager, size, flags);

I'm not sure this helper really helps here...

> +	if (ret)
> +		return ERR_PTR(ret);
> +
> +	return pool_manager;
> +}
> +
> +/**
> + * xe_mem_pool_shadow_init() - Initialize the shadow BO for a DRM MM manager.

hmm, since you don't have a separate

	struct xe_mem_pool_shadow

then this init() function is a little confusing
note that xe_mem_pool_manager is already polluted with 'shadow' logic

> + * @pool_manager: the DRM MM manager to initialize the shadow BO for.
> + * @flags: flags to use for BO creation.
> + *
> + * Initializes the shadow buffer object for the specified DRM MM manager. The

hmm, DRM MM is just our implementation detail
what we init here is a "sub-range allocator"
please revisit all comments/descriptions

> + * shadow BO is used for atomic command updates and is created with the same
> + * size and properties as the primary BO.
> + *
> + * Return: 0 on success, or a negative error code on failure.
> + */
> +int xe_mem_pool_shadow_init(struct xe_mem_pool_manager *pool_manager, int flags)
> +{
> +	struct xe_tile *tile = pool_manager->bo->tile;
> +	struct xe_device *xe = tile_to_xe(tile);
> +	struct xe_bo *shadow;
> +	int ret;
> +
> +	xe_assert(xe, !pool_manager->shadow);
> +
> +	ret = drmm_mutex_init(&xe->drm, &pool_manager->swap_guard);
> +	if (ret)
> +		return ret;
> +
> +	if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
> +		fs_reclaim_acquire(GFP_KERNEL);
> +		might_lock(&pool_manager->swap_guard);
> +		fs_reclaim_release(GFP_KERNEL);
> +	}
> +	shadow = xe_managed_bo_create_pin_map(xe, tile, pool_manager->size,
> +					      XE_BO_FLAG_VRAM_IF_DGFX(tile) |
> +					      XE_BO_FLAG_GGTT |
> +					      XE_BO_FLAG_GGTT_INVALIDATE |
> +					      XE_BO_FLAG_PINNED_NORESTORE);

nit: btw, maybe for the 'shadow' we don't need a separate BO
but could just allocate the primary BO twice as big and then just adjust the offset?
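with a single BO of 2 * size, the swap degenerates to flipping an offset (sketch only; whether GGTT layout constraints allow this for the CCS case is an open question):

```c
#include <stddef.h>

struct one_bo_pool {
	size_t size;		/* size of one half of the BO */
	size_t active_off;	/* 0 or size: which half is live */
};

static size_t active_offset(const struct one_bo_pool *p)
{
	return p->active_off;
}

static size_t shadow_offset(const struct one_bo_pool *p)
{
	/* the other half of the same BO */
	return p->active_off ? 0 : p->size;
}

static void swap_halves(struct one_bo_pool *p)
{
	/* no BO pointer shuffling, no CPU address update needed */
	p->active_off = shadow_offset(p);
}
```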

> +	if (IS_ERR(shadow))
> +		return PTR_ERR(shadow);
> +
> +	pool_manager->shadow = shadow;
> +
> +	ret = xe_mem_pool_init_flags(pool_manager, pool_manager->size,
> +				     flags | XE_MEM_POOL_BO_FLAG_INIT_SHADOW_COPY);
> +	if (ret)
> +		return ret;
> +
> +	return 0;
> +}
> +
> +/**
> + * xe_mem_pool_swap_shadow_locked() - Swap the primary BO with the shadow BO.
> + * @pool_manager: the DRM MM manager containing the primary and shadow BOs.
> + *
> + * Swaps the primary buffer object with the shadow buffer object in the DRM MM
> + * manager. This function must be called with the swap_guard mutex held to
> + * ensure synchronization with any concurrent operations that may be accessing
> + * the BOs.
> + *
> + * Return: None.
> + */
> +void xe_mem_pool_swap_shadow_locked(struct xe_mem_pool_manager *pool_manager)
> +{
> +	struct xe_tile *tile = pool_manager->bo->tile;
> +
> +	xe_tile_assert(tile, pool_manager->shadow);
> +	lockdep_assert_held(&pool_manager->swap_guard);
> +
> +	swap(pool_manager->bo, pool_manager->shadow);
> +	if (!pool_manager->bo->vmap.is_iomem)
> +		pool_manager->cpu_addr = pool_manager->bo->vmap.vaddr;
> +}
> +
> +/**
> + * xe_mem_pool_sync_shadow_locked() - Synchronize the shadow BO with the primary BO.
> + * @pool_manager: the DRM MM manager containing the primary and shadow BOs.
> + * @node: the DRM MM node representing the region to synchronize.
> + *
> + * Copies the contents of the specified region from the primary buffer object to
> + * the shadow buffer object in the DRM MM manager.
> + * Swap_guard must be held to ensure synchronization with any concurrent swap
> + * operations.
> + *
> + * Return: None.
> + */
> +void xe_mem_pool_sync_shadow_locked(struct xe_mem_pool_manager *pool_manager,
> +				    struct drm_mm_node *node)

we shouldn't expose/use a pure drm_mm_node in our API

> +{
> +	struct xe_tile *tile = pool_manager->bo->tile;
> +	struct xe_device *xe = tile_to_xe(tile);
> +
> +	xe_tile_assert(tile, pool_manager->shadow);
> +	lockdep_assert_held(&pool_manager->swap_guard);
> +
> +	xe_map_memcpy_to(xe, &pool_manager->shadow->vmap,
> +			 node->start,
> +			 pool_manager->cpu_addr + node->start,
> +			 node->size);
> +}
> +
> +/**
> + * xe_mem_pool_insert_node() - Insert a node into the DRM MM manager.
> + * @pool_manager: the DRM MM manager to insert the node into.
> + * @node: the DRM MM node to insert.
> + * @size: the size of the node to insert.
> + *
> + * Inserts a node into the DRM MM manager and clears the corresponding memory region
> + * in both the primary and shadow buffer objects.
> + *
> + * Return: 0 on success, or a negative error code on failure.
> + */
> +int xe_mem_pool_insert_node(struct xe_mem_pool_manager *pool_manager,
> +			    struct drm_mm_node *node, u32 size)
> +{
> +	struct drm_mm *mm = &pool_manager->base;
> +	int ret;
> +
> +	ret = drm_mm_insert_node(mm, node, size);
> +	if (ret)
> +		return ret;
> +
> +	return 0;
> +}
> +
> +/**
> + * xe_mem_pool_remove_node() - Remove a node from the DRM MM manager.
> + * @node: the DRM MM node to remove.
> + *
> + * Return: None.
> + */
> +void xe_mem_pool_remove_node(struct drm_mm_node *node)
> +{
> +	return drm_mm_remove_node(node);
> +}
> +
> +/**
> + * xe_mem_pool_manager_gpu_addr() - Retrieve GPU address of BO within a memory manager.
> + * @pool_manager: The DRM MM memory manager.
> + *
> + * Returns: GGTT address of the back storage BO
> + */
> +u64 xe_mem_pool_manager_gpu_addr(struct xe_mem_pool_manager *pool_manager)
> +{
> +	return xe_bo_ggtt_addr(pool_manager->bo);
> +}
> +
> +/**
> + * xe_mem_pool_manager_cpu_addr() - Retrieve CPU address of BO within a memory manager.
> + * @pool_manager: The DRM MM memory manager.
> + *
> + * Returns: CPU virtual address of BO.
> + */
> +void *xe_mem_pool_manager_cpu_addr(struct xe_mem_pool_manager *pool_manager)

shouldn't this be per node?

> +{
> +	return pool_manager->cpu_addr;
> +}
> +
> +/**
> + * xe_mem_pool_bo_swap_guard() - Retrieve the mutex used to guard swap operations
> + * on a memory manager.
> + * @pool_manager: The DRM MM memory manager.
> + *
> + * Returns: Swap guard mutex.
> + */
> +struct mutex *xe_mem_pool_bo_swap_guard(struct xe_mem_pool_manager *pool_manager)
> +{
> +	return &pool_manager->swap_guard;
> +}
> +
> +/**
> + * xe_mem_pool_dump() - Dump the state of the DRM MM manager for debugging.
> + * @pool_manager: The DRM MM manager to dump.
> + * @p: The DRM printer to use for output.
> + *
> + * Returns: None.
> + */
> +void xe_mem_pool_dump(struct xe_mem_pool_manager *pool_manager, struct drm_printer *p)
> +{
> +	drm_mm_print(&pool_manager->base, p);

maybe also print info about the BO and shadow BO (like their GGTT addresses)

> +}
> +
> +static inline struct xe_mem_pool_manager *to_xe_mem_pool_manager(struct drm_mm *mng)

please, no "inline" in .c

> +{
> +	return container_of(mng, struct xe_mem_pool_manager, base);
> +}
> +
> +/**
> + * xe_mem_pool_bo_flush_write() - Copy the data from the sub-allocation
> + * to the GPU memory.
> + * @node: the &drm_mm_node to flush
> + */
> +void xe_mem_pool_bo_flush_write(struct drm_mm_node *node)
> +{
> +	struct xe_mem_pool_manager *pool_manager = to_xe_mem_pool_manager(node->mm);
> +	struct xe_device *xe = tile_to_xe(pool_manager->bo->tile);
> +
> +	if (!pool_manager->bo->vmap.is_iomem)
> +		return;
> +
> +	xe_map_memcpy_to(xe, &pool_manager->bo->vmap, node->start,
> +			 pool_manager->cpu_addr + node->start,
> +			 node->size);
> +}
> +
> +/**
> + * xe_mem_pool_bo_sync_read() - Copy the data from GPU memory to the
> + * sub-allocation.
> + * @node: the &&drm_mm_node to sync
> + */
> +void xe_mem_pool_bo_sync_read(struct drm_mm_node *node)
> +{
> +	struct xe_mem_pool_manager *pool_manager = to_xe_mem_pool_manager(node->mm);
> +	struct xe_device *xe = tile_to_xe(pool_manager->bo->tile);
> +
> +	if (!pool_manager->bo->vmap.is_iomem)
> +		return;
> +
> +	xe_map_memcpy_from(xe, pool_manager->cpu_addr + node->start,
> +			   &pool_manager->bo->vmap, node->start, node->size);
> +}
> diff --git a/drivers/gpu/drm/xe/xe_mem_pool.h b/drivers/gpu/drm/xe/xe_mem_pool.h
> new file mode 100644
> index 000000000000..f9c5d1e56dd9
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_mem_pool.h
> @@ -0,0 +1,33 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2026 Intel Corporation
> + */
> +#ifndef _XE_MEM_POOL_H_
> +#define _XE_MEM_POOL_H_
> +
> +#include <linux/sizes.h>
> +#include <linux/types.h>
> +
> +#include "drm/drm_mm.h"

use <>

> +#include "xe_mem_pool_types.h"
> +
> +struct drm_printer;
> +struct xe_mem_pool_manager;
> +struct xe_tile;
> +
> +struct xe_mem_pool_manager *xe_mem_pool_init(struct xe_tile *tile, u32 size, int flags);
> +int xe_mem_pool_shadow_init(struct xe_mem_pool_manager *drm_mm_manager, int flags);

"drm_mm_manager" - seems to be a wrong name, just "pool" ?

> +void xe_mem_pool_swap_shadow_locked(struct xe_mem_pool_manager *drm_mm_manager);
> +void xe_mem_pool_sync_shadow_locked(struct xe_mem_pool_manager *drm_mm_manager,
> +				    struct drm_mm_node *node);
> +int xe_mem_pool_insert_node(struct xe_mem_pool_manager *drm_mm_manager,
> +			    struct drm_mm_node *node, u32 size);
> +void xe_mem_pool_remove_node(struct drm_mm_node *node);
> +u64 xe_mem_pool_manager_gpu_addr(struct xe_mem_pool_manager *drm_mm_manager);
> +void *xe_mem_pool_manager_cpu_addr(struct xe_mem_pool_manager *mm_manager);
> +struct mutex *xe_mem_pool_bo_swap_guard(struct xe_mem_pool_manager *drm_mm_manager);
> +void xe_mem_pool_dump(struct xe_mem_pool_manager *mm_manager, struct drm_printer *p);
> +void xe_mem_pool_bo_flush_write(struct drm_mm_node *node);
> +void xe_mem_pool_bo_sync_read(struct drm_mm_node *node);
> +
> +#endif
> diff --git a/drivers/gpu/drm/xe/xe_mem_pool_types.h b/drivers/gpu/drm/xe/xe_mem_pool_types.h
> new file mode 100644
> index 000000000000..bae7706aa8d2
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_mem_pool_types.h
> @@ -0,0 +1,30 @@
> +/* SPDX-License-Identifier: MIT */
> +/*
> + * Copyright © 2026 Intel Corporation
> + */
> +
> +#ifndef _XE_MEM_POOL_TYPES_H_
> +#define _XE_MEM_POOL_TYPES_H_
> +
> +#include <drm/drm_mm.h>
> +
> +struct xe_mem_pool_manager;

unused here?

> +
> +#define XE_MEM_POOL_BO_FLAG_INIT_ZERO_FILL			BIT(0)
> +#define XE_MEM_POOL_BO_FLAG_INIT_CMD_NOOP			BIT(1)
> +#define XE_MEM_POOL_BO_FLAG_INIT_CMD_BB_END_HIGHEST		BIT(2)
> +#define XE_MEM_POOL_BO_FLAG_INIT_SHADOW_COPY			BIT(3)
> +
> +/**
> + * struct xe_mem_pool_bb - Sub allocated batch buffer from mem pool.

hmm, suddenly from "sub-range allocations" we jumped to "batch-buffer" specifics

> + */
> +struct xe_mem_pool_bb {

maybe:
	xe_mem_pool_node ?

and it looks a little strange that

  * we hide xe_mem_pool_manager details
  * then in functions accept drm_mm_node
  * but expose xe_mem_pool_bb here instead

> +	/** @node: Range node for this batch buffer. */
> +	struct drm_mm_node node;
> +	/** @cs: Command stream for this batch buffer. */
> +	u32 *cs;

maybe we should just have a function that returns the CPU pointer of an xe_pool_node?

	return pool->cpu_addr + node->start;

> +	/** @len: Length of the CS in dwords. */
> +	u32 len;

do we need this? there is:

	node->size

> +};
> +
> +#endif




Thread overview: 15+ messages
2026-04-01 16:15 [PATCH v3 0/3] USE drm mm instead of drm SA for CCS read/write Satyanarayana K V P
2026-04-01 16:15 ` [PATCH v3 1/3] drm/xe/mm: add XE MEM POOL manager with shadow support Satyanarayana K V P
2026-04-02  1:20   ` Matthew Brost
2026-04-02  8:18   ` Thomas Hellström
2026-04-02 15:17   ` Michal Wajdeczko [this message]
2026-04-01 16:15 ` [PATCH v3 2/3] drm/xe/mm: Add batch buffer allocation functions for xe_mem_pool manager Satyanarayana K V P
2026-04-02  1:22   ` Matthew Brost
2026-04-02  8:21   ` Thomas Hellström
2026-04-02 15:30   ` Michal Wajdeczko
2026-04-01 16:15 ` [PATCH v3 3/3] drm/xe/vf: Use drm mm instead of drm sa for CCS read/write Satyanarayana K V P
2026-04-02  8:29   ` Thomas Hellström
2026-04-01 16:20 ` ✗ CI.checkpatch: warning for USE drm mm instead of drm SA for CCS read/write (rev3) Patchwork
2026-04-01 16:21 ` ✓ CI.KUnit: success " Patchwork
2026-04-01 16:56 ` ✗ Xe.CI.BAT: failure " Patchwork
2026-04-01 21:11 ` ✗ Xe.CI.FULL: " Patchwork
