* [PATCH drm-misc-next v4 1/8] drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm
2023-09-20 14:42 [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Danilo Krummrich
@ 2023-09-20 14:42 ` Danilo Krummrich
2023-09-21 6:48 ` Christian König
From: Danilo Krummrich @ 2023-09-20 14:42 UTC (permalink / raw)
To: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel, Danilo Krummrich
Rename struct drm_gpuva_manager to struct drm_gpuvm, including the
corresponding functions. This way the GPUVA manager's structures align
with the documentation of VM_BIND [1] and VM_BIND locking [2].
It also provides a better foundation for the naming of data structures
and functions introduced for implementing a common dma-resv per GPU-VM
including tracking of external and evicted objects in subsequent
patches.
[1] Documentation/gpu/drm-vm-bind-async.rst
[2] Documentation/gpu/drm-vm-bind-locking.rst
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
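As an aside for reviewers: the struct rename itself is mechanical. Purely as an illustration (not how this patch was necessarily generated), the type substitution could be scripted along the following lines; function renames such as drm_gpuva_manager_init() to drm_gpuvm_init() would still need their own patterns:

```shell
# Sketch only: substitute the struct name in a sample declaration.
# A tree-wide rename would instead run sed over `git grep -l drm_gpuva_manager`.
printf 'struct drm_gpuva_manager *mgr;\n' \
	| sed 's/drm_gpuva_manager/drm_gpuvm/g'
```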
drivers/gpu/drm/Makefile | 2 +-
drivers/gpu/drm/drm_debugfs.c | 16 +-
.../gpu/drm/{drm_gpuva_mgr.c => drm_gpuvm.c} | 400 +++++++++---------
drivers/gpu/drm/nouveau/nouveau_exec.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 24 +-
drivers/gpu/drm/nouveau/nouveau_uvmm.h | 6 +-
include/drm/drm_debugfs.h | 6 +-
include/drm/{drm_gpuva_mgr.h => drm_gpuvm.h} | 153 ++++---
8 files changed, 304 insertions(+), 305 deletions(-)
rename drivers/gpu/drm/{drm_gpuva_mgr.c => drm_gpuvm.c} (78%)
rename include/drm/{drm_gpuva_mgr.h => drm_gpuvm.h} (78%)
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 215e78e79125..7a84b3cddeab 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -45,7 +45,7 @@ drm-y := \
drm_vblank.o \
drm_vblank_work.o \
drm_vma_manager.o \
- drm_gpuva_mgr.o \
+ drm_gpuvm.o \
drm_writeback.o
drm-$(CONFIG_DRM_LEGACY) += \
drm_agpsupport.o \
diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
index 44ecd7d0daac..f291fb4b359f 100644
--- a/drivers/gpu/drm/drm_debugfs.c
+++ b/drivers/gpu/drm/drm_debugfs.c
@@ -40,7 +40,7 @@
#include <drm/drm_file.h>
#include <drm/drm_gem.h>
#include <drm/drm_managed.h>
-#include <drm/drm_gpuva_mgr.h>
+#include <drm/drm_gpuvm.h>
#include "drm_crtc_internal.h"
#include "drm_internal.h"
@@ -189,31 +189,31 @@ static const struct file_operations drm_debugfs_fops = {
/**
* drm_debugfs_gpuva_info - dump the given DRM GPU VA space
* @m: pointer to the &seq_file to write
- * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
*
* Dumps the GPU VA mappings of a given DRM GPU VA manager.
*
* For each DRM GPU VA space drivers should call this function from their
* &drm_info_list's show callback.
*
- * Returns: 0 on success, -ENODEV if the &mgr is not initialized
+ * Returns: 0 on success, -ENODEV if the &gpuvm is not initialized
*/
int drm_debugfs_gpuva_info(struct seq_file *m,
- struct drm_gpuva_manager *mgr)
+ struct drm_gpuvm *gpuvm)
{
- struct drm_gpuva *va, *kva = &mgr->kernel_alloc_node;
+ struct drm_gpuva *va, *kva = &gpuvm->kernel_alloc_node;
- if (!mgr->name)
+ if (!gpuvm->name)
return -ENODEV;
seq_printf(m, "DRM GPU VA space (%s) [0x%016llx;0x%016llx]\n",
- mgr->name, mgr->mm_start, mgr->mm_start + mgr->mm_range);
+ gpuvm->name, gpuvm->mm_start, gpuvm->mm_start + gpuvm->mm_range);
seq_printf(m, "Kernel reserved node [0x%016llx;0x%016llx]\n",
kva->va.addr, kva->va.addr + kva->va.range);
seq_puts(m, "\n");
seq_puts(m, " VAs | start | range | end | object | object offset\n");
seq_puts(m, "-------------------------------------------------------------------------------------------------------------\n");
- drm_gpuva_for_each_va(va, mgr) {
+ drm_gpuvm_for_each_va(va, gpuvm) {
if (unlikely(va == kva))
continue;
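For context on the renamed debugfs helper: as the kernel-doc above notes, drivers are expected to call it from a &drm_info_list show callback. A hypothetical driver-side sketch (the "mydrv" names and locking are invented for illustration, using the renamed API):

```c
/* Hypothetical driver code; "mydrv" names are illustrative only. */
static int mydrv_debugfs_gpuva(struct seq_file *m, void *data)
{
	struct drm_info_node *node = m->private;
	struct mydrv_vm *vm = node->info_ent->data;
	int ret;

	mutex_lock(&vm->lock);	/* whatever lock protects the VA space */
	ret = drm_debugfs_gpuva_info(m, &vm->gpuvm);
	mutex_unlock(&vm->lock);

	return ret;
}

static const struct drm_info_list mydrv_debugfs_list[] = {
	{ "gpuvas", mydrv_debugfs_gpuva, 0, NULL /* data set at registration */ },
};
```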
diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuvm.c
similarity index 78%
rename from drivers/gpu/drm/drm_gpuva_mgr.c
rename to drivers/gpu/drm/drm_gpuvm.c
index f86bfad74ff8..7074bcad5b28 100644
--- a/drivers/gpu/drm/drm_gpuva_mgr.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -25,7 +25,7 @@
*
*/
-#include <drm/drm_gpuva_mgr.h>
+#include <drm/drm_gpuvm.h>
#include <linux/interval_tree_generic.h>
#include <linux/mm.h>
@@ -33,8 +33,8 @@
/**
* DOC: Overview
*
- * The DRM GPU VA Manager, represented by struct drm_gpuva_manager keeps track
- * of a GPU's virtual address (VA) space and manages the corresponding virtual
+ * The DRM GPU VA Manager, represented by struct drm_gpuvm, keeps track of a
+ * GPU's virtual address (VA) space and manages the corresponding virtual
* mappings represented by &drm_gpuva objects. It also keeps track of the
* mapping's backing &drm_gem_object buffers.
*
@@ -47,28 +47,28 @@
* The GPU VA manager internally uses a rb-tree to manage the
* &drm_gpuva mappings within a GPU's virtual address space.
*
- * The &drm_gpuva_manager contains a special &drm_gpuva representing the
+ * The &drm_gpuvm structure contains a special &drm_gpuva representing the
* portion of VA space reserved by the kernel. This node is initialized together
* with the GPU VA manager instance and removed when the GPU VA manager is
* destroyed.
*
- * In a typical application drivers would embed struct drm_gpuva_manager and
+ * In a typical application drivers would embed struct drm_gpuvm and
* struct drm_gpuva within their own driver-specific structures; there won't be
* any memory allocations of its own nor memory allocations of &drm_gpuva
* entries.
*
- * The data structures needed to store &drm_gpuvas within the &drm_gpuva_manager
- * are contained within struct drm_gpuva already. Hence, for inserting
- * &drm_gpuva entries from within dma-fence signalling critical sections it is
- * enough to pre-allocate the &drm_gpuva structures.
+ * The data structures needed to store &drm_gpuvas within the &drm_gpuvm are
+ * contained within struct drm_gpuva already. Hence, for inserting &drm_gpuva
+ * entries from within dma-fence signalling critical sections it is enough to
+ * pre-allocate the &drm_gpuva structures.
*/
/**
* DOC: Split and Merge
*
* Besides its capability to manage and represent a GPU VA space, the
- * &drm_gpuva_manager also provides functions to let the &drm_gpuva_manager
- * calculate a sequence of operations to satisfy a given map or unmap request.
+ * GPU VA manager also provides functions to let the &drm_gpuvm calculate a
+ * sequence of operations to satisfy a given map or unmap request.
*
* Therefore the DRM GPU VA manager provides an algorithm implementing splitting
* and merging of existent GPU VA mappings with the ones that are requested to
@@ -76,16 +76,16 @@
* implement Vulkan 'Sparse Memory Bindings' - drivers' UAPIs often refer to this
* as VM BIND.
*
- * Drivers can call drm_gpuva_sm_map() to receive a sequence of callbacks
+ * Drivers can call drm_gpuvm_sm_map() to receive a sequence of callbacks
* containing map, unmap and remap operations for a given newly requested
* mapping. The sequence of callbacks represents the set of operations to
* execute in order to integrate the new mapping cleanly into the current state
* of the GPU VA space.
*
* Depending on how the new GPU VA mapping intersects with the existent mappings
- * of the GPU VA space the &drm_gpuva_fn_ops callbacks contain an arbitrary
- * amount of unmap operations, a maximum of two remap operations and a single
- * map operation. The caller might receive no callback at all if no operation is
+ * of the GPU VA space the &drm_gpuvm_ops callbacks contain an arbitrary number
+ * of unmap operations, a maximum of two remap operations and a single map
+ * operation. The caller might receive no callback at all if no operation is
* required, e.g. if the requested mapping already exists in the exact same way.
*
* The single map operation represents the original map operation requested by
@@ -95,7 +95,7 @@
* &drm_gpuva to unmap is physically contiguous with the original mapping
* request. Optionally, if 'keep' is set, drivers may keep the actual page table
* entries for this &drm_gpuva, adding the missing page table entries only and
- * update the &drm_gpuva_manager's view of things accordingly.
+ * update the &drm_gpuvm's view of things accordingly.
*
* Drivers may do the same optimization, namely delta page table updates, also
* for remap operations. This is possible since &drm_gpuva_op_remap consists of
@@ -106,34 +106,34 @@
* the beginning and one at the end of the new mapping, hence there is a
* maximum of two remap operations.
*
- * Analogous to drm_gpuva_sm_map() drm_gpuva_sm_unmap() uses &drm_gpuva_fn_ops
- * to call back into the driver in order to unmap a range of GPU VA space. The
+ * Analogous to drm_gpuvm_sm_map() drm_gpuvm_sm_unmap() uses &drm_gpuvm_ops to
+ * call back into the driver in order to unmap a range of GPU VA space. The
* logic behind this function is way simpler though: For all existent mappings
* enclosed by the given range unmap operations are created. For mappings which
* are only partially located within the given range, remap operations are
* created such that those mappings are split up and re-mapped partially.
*
- * As an alternative to drm_gpuva_sm_map() and drm_gpuva_sm_unmap(),
- * drm_gpuva_sm_map_ops_create() and drm_gpuva_sm_unmap_ops_create() can be used
+ * As an alternative to drm_gpuvm_sm_map() and drm_gpuvm_sm_unmap(),
+ * drm_gpuvm_sm_map_ops_create() and drm_gpuvm_sm_unmap_ops_create() can be used
* to directly obtain an instance of struct drm_gpuva_ops containing a list of
* &drm_gpuva_op, which can be iterated with drm_gpuva_for_each_op(). This list
* contains the &drm_gpuva_ops analogous to the callbacks one would receive when
- * calling drm_gpuva_sm_map() or drm_gpuva_sm_unmap(). While this way requires
+ * calling drm_gpuvm_sm_map() or drm_gpuvm_sm_unmap(). While this way requires
* more memory (to allocate the &drm_gpuva_ops), it provides drivers a way to
* iterate the &drm_gpuva_op multiple times, e.g. once in a context where memory
* allocations are possible (e.g. to allocate GPU page tables) and once in the
* dma-fence signalling critical path.
*
- * To update the &drm_gpuva_manager's view of the GPU VA space
- * drm_gpuva_insert() and drm_gpuva_remove() may be used. These functions can
- * safely be used from &drm_gpuva_fn_ops callbacks originating from
- * drm_gpuva_sm_map() or drm_gpuva_sm_unmap(). However, it might be more
- * convenient to use the provided helper functions drm_gpuva_map(),
- * drm_gpuva_remap() and drm_gpuva_unmap() instead.
+ * To update the &drm_gpuvm's view of the GPU VA space drm_gpuva_insert() and
+ * drm_gpuva_remove() may be used. These functions can safely be used from
+ * &drm_gpuvm_ops callbacks originating from drm_gpuvm_sm_map() or
+ * drm_gpuvm_sm_unmap(). However, it might be more convenient to use the
+ * provided helper functions drm_gpuva_map(), drm_gpuva_remap() and
+ * drm_gpuva_unmap() instead.
*
* The following diagram depicts the basic relationships of existent GPU VA
* mappings, a newly requested mapping and the resulting mappings as implemented
- * by drm_gpuva_sm_map() - it doesn't cover any arbitrary combinations of these.
+ * by drm_gpuvm_sm_map() - it doesn't cover any arbitrary combinations of these.
*
* 1) Requested mapping is identical. Replace it, but indicate the backing PTEs
* could be kept.
@@ -421,10 +421,10 @@
* // Allocates a new &drm_gpuva.
* struct drm_gpuva * driver_gpuva_alloc(void);
*
- * // Typically drivers would embedd the &drm_gpuva_manager and &drm_gpuva
+ * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
* // structure in individual driver structures and lock the dma-resv with
* // drm_exec or similar helpers.
- * int driver_mapping_create(struct drm_gpuva_manager *mgr,
+ * int driver_mapping_create(struct drm_gpuvm *gpuvm,
* u64 addr, u64 range,
* struct drm_gem_object *obj, u64 offset)
* {
@@ -432,7 +432,7 @@
* struct drm_gpuva_op *op
*
* driver_lock_va_space();
- * ops = drm_gpuva_sm_map_ops_create(mgr, addr, range,
+ * ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
* obj, offset);
* if (IS_ERR(ops))
* return PTR_ERR(ops);
@@ -448,7 +448,7 @@
* // free memory and unlock
*
* driver_vm_map();
- * drm_gpuva_map(mgr, va, &op->map);
+ * drm_gpuva_map(gpuvm, va, &op->map);
* drm_gpuva_link(va);
*
* break;
@@ -504,23 +504,23 @@
* 2) Receive a callback for each &drm_gpuva_op to create a new mapping::
*
* struct driver_context {
- * struct drm_gpuva_manager *mgr;
+ * struct drm_gpuvm *gpuvm;
* struct drm_gpuva *new_va;
* struct drm_gpuva *prev_va;
* struct drm_gpuva *next_va;
* };
*
- * // ops to pass to drm_gpuva_manager_init()
- * static const struct drm_gpuva_fn_ops driver_gpuva_ops = {
+ * // ops to pass to drm_gpuvm_init()
+ * static const struct drm_gpuvm_ops driver_gpuvm_ops = {
* .sm_step_map = driver_gpuva_map,
* .sm_step_remap = driver_gpuva_remap,
* .sm_step_unmap = driver_gpuva_unmap,
* };
*
- * // Typically drivers would embedd the &drm_gpuva_manager and &drm_gpuva
+ * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
* // structure in individual driver structures and lock the dma-resv with
* // drm_exec or similar helpers.
- * int driver_mapping_create(struct drm_gpuva_manager *mgr,
+ * int driver_mapping_create(struct drm_gpuvm *gpuvm,
* u64 addr, u64 range,
* struct drm_gem_object *obj, u64 offset)
* {
@@ -529,7 +529,7 @@
* struct drm_gpuva_op *op;
* int ret = 0;
*
- * ctx.mgr = mgr;
+ * ctx.gpuvm = gpuvm;
*
* ctx.new_va = kzalloc(sizeof(*ctx.new_va), GFP_KERNEL);
* ctx.prev_va = kzalloc(sizeof(*ctx.prev_va), GFP_KERNEL);
@@ -540,7 +540,7 @@
* }
*
* driver_lock_va_space();
- * ret = drm_gpuva_sm_map(mgr, &ctx, addr, range, obj, offset);
+ * ret = drm_gpuvm_sm_map(gpuvm, &ctx, addr, range, obj, offset);
* driver_unlock_va_space();
*
* out:
@@ -554,7 +554,7 @@
* {
* struct driver_context *ctx = __ctx;
*
- * drm_gpuva_map(ctx->mgr, ctx->new_va, &op->map);
+ * drm_gpuva_map(ctx->gpuvm, ctx->new_va, &op->map);
*
* drm_gpuva_link(ctx->new_va);
*
@@ -609,12 +609,12 @@ INTERVAL_TREE_DEFINE(struct drm_gpuva, rb.node, u64, rb.__subtree_last,
GPUVA_START, GPUVA_LAST, static __maybe_unused,
drm_gpuva_it)
-static int __drm_gpuva_insert(struct drm_gpuva_manager *mgr,
+static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va);
static void __drm_gpuva_remove(struct drm_gpuva *va);
static bool
-drm_gpuva_check_overflow(u64 addr, u64 range)
+drm_gpuvm_check_overflow(u64 addr, u64 range)
{
u64 end;
@@ -623,121 +623,121 @@ drm_gpuva_check_overflow(u64 addr, u64 range)
}
static bool
-drm_gpuva_in_mm_range(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+drm_gpuvm_in_mm_range(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
{
u64 end = addr + range;
- u64 mm_start = mgr->mm_start;
- u64 mm_end = mm_start + mgr->mm_range;
+ u64 mm_start = gpuvm->mm_start;
+ u64 mm_end = mm_start + gpuvm->mm_range;
return addr >= mm_start && end <= mm_end;
}
static bool
-drm_gpuva_in_kernel_node(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+drm_gpuvm_in_kernel_node(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
{
u64 end = addr + range;
- u64 kstart = mgr->kernel_alloc_node.va.addr;
- u64 krange = mgr->kernel_alloc_node.va.range;
+ u64 kstart = gpuvm->kernel_alloc_node.va.addr;
+ u64 krange = gpuvm->kernel_alloc_node.va.range;
u64 kend = kstart + krange;
return krange && addr < kend && kstart < end;
}
static bool
-drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
+drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
u64 addr, u64 range)
{
- return !drm_gpuva_check_overflow(addr, range) &&
- drm_gpuva_in_mm_range(mgr, addr, range) &&
- !drm_gpuva_in_kernel_node(mgr, addr, range);
+ return !drm_gpuvm_check_overflow(addr, range) &&
+ drm_gpuvm_in_mm_range(gpuvm, addr, range) &&
+ !drm_gpuvm_in_kernel_node(gpuvm, addr, range);
}
/**
- * drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
- * @mgr: pointer to the &drm_gpuva_manager to initialize
+ * drm_gpuvm_init() - initialize a &drm_gpuvm
+ * @gpuvm: pointer to the &drm_gpuvm to initialize
* @name: the name of the GPU VA space
* @start_offset: the start offset of the GPU VA space
* @range: the size of the GPU VA space
* @reserve_offset: the start of the kernel reserved GPU VA area
* @reserve_range: the size of the kernel reserved GPU VA area
- * @ops: &drm_gpuva_fn_ops called on &drm_gpuva_sm_map / &drm_gpuva_sm_unmap
+ * @ops: &drm_gpuvm_ops called on &drm_gpuvm_sm_map / &drm_gpuvm_sm_unmap
*
- * The &drm_gpuva_manager must be initialized with this function before use.
+ * The &drm_gpuvm must be initialized with this function before use.
*
- * Note that @mgr must be cleared to 0 before calling this function. The given
+ * Note that @gpuvm must be cleared to 0 before calling this function. The given
* &name is expected to be managed by the surrounding driver structures.
*/
void
-drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
- const char *name,
- u64 start_offset, u64 range,
- u64 reserve_offset, u64 reserve_range,
- const struct drm_gpuva_fn_ops *ops)
+drm_gpuvm_init(struct drm_gpuvm *gpuvm,
+ const char *name,
+ u64 start_offset, u64 range,
+ u64 reserve_offset, u64 reserve_range,
+ const struct drm_gpuvm_ops *ops)
{
- mgr->rb.tree = RB_ROOT_CACHED;
- INIT_LIST_HEAD(&mgr->rb.list);
+ gpuvm->rb.tree = RB_ROOT_CACHED;
+ INIT_LIST_HEAD(&gpuvm->rb.list);
- drm_gpuva_check_overflow(start_offset, range);
- mgr->mm_start = start_offset;
- mgr->mm_range = range;
+ drm_gpuvm_check_overflow(start_offset, range);
+ gpuvm->mm_start = start_offset;
+ gpuvm->mm_range = range;
- mgr->name = name ? name : "unknown";
- mgr->ops = ops;
+ gpuvm->name = name ? name : "unknown";
+ gpuvm->ops = ops;
- memset(&mgr->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
+ memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
if (reserve_range) {
- mgr->kernel_alloc_node.va.addr = reserve_offset;
- mgr->kernel_alloc_node.va.range = reserve_range;
+ gpuvm->kernel_alloc_node.va.addr = reserve_offset;
+ gpuvm->kernel_alloc_node.va.range = reserve_range;
- if (likely(!drm_gpuva_check_overflow(reserve_offset,
+ if (likely(!drm_gpuvm_check_overflow(reserve_offset,
reserve_range)))
- __drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
+ __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
}
}
-EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
+EXPORT_SYMBOL_GPL(drm_gpuvm_init);
/**
- * drm_gpuva_manager_destroy() - cleanup a &drm_gpuva_manager
- * @mgr: pointer to the &drm_gpuva_manager to clean up
+ * drm_gpuvm_destroy() - cleanup a &drm_gpuvm
+ * @gpuvm: pointer to the &drm_gpuvm to clean up
*
* Note that it is a bug to call this function on a manager that still
* holds GPU VA mappings.
*/
void
-drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr)
+drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
{
- mgr->name = NULL;
+ gpuvm->name = NULL;
- if (mgr->kernel_alloc_node.va.range)
- __drm_gpuva_remove(&mgr->kernel_alloc_node);
+ if (gpuvm->kernel_alloc_node.va.range)
+ __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
- WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root),
+ WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
"GPUVA tree is not empty, potentially leaking memory.");
}
-EXPORT_SYMBOL_GPL(drm_gpuva_manager_destroy);
+EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
static int
-__drm_gpuva_insert(struct drm_gpuva_manager *mgr,
+__drm_gpuva_insert(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va)
{
struct rb_node *node;
struct list_head *head;
- if (drm_gpuva_it_iter_first(&mgr->rb.tree,
+ if (drm_gpuva_it_iter_first(&gpuvm->rb.tree,
GPUVA_START(va),
GPUVA_LAST(va)))
return -EEXIST;
- va->mgr = mgr;
+ va->vm = gpuvm;
- drm_gpuva_it_insert(va, &mgr->rb.tree);
+ drm_gpuva_it_insert(va, &gpuvm->rb.tree);
node = rb_prev(&va->rb.node);
if (node)
head = &(to_drm_gpuva(node))->rb.entry;
else
- head = &mgr->rb.list;
+ head = &gpuvm->rb.list;
list_add(&va->rb.entry, head);
@@ -746,36 +746,36 @@ __drm_gpuva_insert(struct drm_gpuva_manager *mgr,
/**
* drm_gpuva_insert() - insert a &drm_gpuva
- * @mgr: the &drm_gpuva_manager to insert the &drm_gpuva in
+ * @gpuvm: the &drm_gpuvm to insert the &drm_gpuva in
* @va: the &drm_gpuva to insert
*
* Insert a &drm_gpuva with a given address and range into a
- * &drm_gpuva_manager.
+ * &drm_gpuvm.
*
* It is safe to use this function using the safe versions of iterating the GPU
- * VA space, such as drm_gpuva_for_each_va_safe() and
- * drm_gpuva_for_each_va_range_safe().
+ * VA space, such as drm_gpuvm_for_each_va_safe() and
+ * drm_gpuvm_for_each_va_range_safe().
*
* Returns: 0 on success, negative error code on failure.
*/
int
-drm_gpuva_insert(struct drm_gpuva_manager *mgr,
+drm_gpuva_insert(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va)
{
u64 addr = va->va.addr;
u64 range = va->va.range;
- if (unlikely(!drm_gpuva_range_valid(mgr, addr, range)))
+ if (unlikely(!drm_gpuva_range_valid(gpuvm, addr, range)))
return -EINVAL;
- return __drm_gpuva_insert(mgr, va);
+ return __drm_gpuva_insert(gpuvm, va);
}
EXPORT_SYMBOL_GPL(drm_gpuva_insert);
static void
__drm_gpuva_remove(struct drm_gpuva *va)
{
- drm_gpuva_it_remove(va, &va->mgr->rb.tree);
+ drm_gpuva_it_remove(va, &va->vm->rb.tree);
list_del_init(&va->rb.entry);
}
@@ -786,15 +786,15 @@ __drm_gpuva_remove(struct drm_gpuva *va)
* This removes the given &va from the underlying tree.
*
* It is safe to use this function using the safe versions of iterating the GPU
- * VA space, such as drm_gpuva_for_each_va_safe() and
- * drm_gpuva_for_each_va_range_safe().
+ * VA space, such as drm_gpuvm_for_each_va_safe() and
+ * drm_gpuvm_for_each_va_range_safe().
*/
void
drm_gpuva_remove(struct drm_gpuva *va)
{
- struct drm_gpuva_manager *mgr = va->mgr;
+ struct drm_gpuvm *gpuvm = va->vm;
- if (unlikely(va == &mgr->kernel_alloc_node)) {
+ if (unlikely(va == &gpuvm->kernel_alloc_node)) {
WARN(1, "Can't destroy kernel reserved node.\n");
return;
}
@@ -853,37 +853,37 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
/**
* drm_gpuva_find_first() - find the first &drm_gpuva in the given range
- * @mgr: the &drm_gpuva_manager to search in
+ * @gpuvm: the &drm_gpuvm to search in
* @addr: the &drm_gpuvas address
* @range: the &drm_gpuvas range
*
* Returns: the first &drm_gpuva within the given range
*/
struct drm_gpuva *
-drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
+drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
u64 addr, u64 range)
{
u64 last = addr + range - 1;
- return drm_gpuva_it_iter_first(&mgr->rb.tree, addr, last);
+ return drm_gpuva_it_iter_first(&gpuvm->rb.tree, addr, last);
}
EXPORT_SYMBOL_GPL(drm_gpuva_find_first);
/**
* drm_gpuva_find() - find a &drm_gpuva
- * @mgr: the &drm_gpuva_manager to search in
+ * @gpuvm: the &drm_gpuvm to search in
* @addr: the &drm_gpuvas address
* @range: the &drm_gpuvas range
*
* Returns: the &drm_gpuva at a given &addr and with a given &range
*/
struct drm_gpuva *
-drm_gpuva_find(struct drm_gpuva_manager *mgr,
+drm_gpuva_find(struct drm_gpuvm *gpuvm,
u64 addr, u64 range)
{
struct drm_gpuva *va;
- va = drm_gpuva_find_first(mgr, addr, range);
+ va = drm_gpuva_find_first(gpuvm, addr, range);
if (!va)
goto out;
@@ -900,7 +900,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find);
/**
* drm_gpuva_find_prev() - find the &drm_gpuva before the given address
- * @mgr: the &drm_gpuva_manager to search in
+ * @gpuvm: the &drm_gpuvm to search in
* @start: the given GPU VA's start address
*
* Find the adjacent &drm_gpuva before the GPU VA with given &start address.
@@ -911,18 +911,18 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find);
* Returns: a pointer to the found &drm_gpuva or NULL if none was found
*/
struct drm_gpuva *
-drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start)
+drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start)
{
- if (!drm_gpuva_range_valid(mgr, start - 1, 1))
+ if (!drm_gpuva_range_valid(gpuvm, start - 1, 1))
return NULL;
- return drm_gpuva_it_iter_first(&mgr->rb.tree, start - 1, start);
+ return drm_gpuva_it_iter_first(&gpuvm->rb.tree, start - 1, start);
}
EXPORT_SYMBOL_GPL(drm_gpuva_find_prev);
/**
* drm_gpuva_find_next() - find the &drm_gpuva after the given address
- * @mgr: the &drm_gpuva_manager to search in
+ * @gpuvm: the &drm_gpuvm to search in
* @end: the given GPU VA's end address
*
* Find the adjacent &drm_gpuva after the GPU VA with given &end address.
@@ -933,47 +933,47 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find_prev);
* Returns: a pointer to the found &drm_gpuva or NULL if none was found
*/
struct drm_gpuva *
-drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end)
+drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end)
{
- if (!drm_gpuva_range_valid(mgr, end, 1))
+ if (!drm_gpuva_range_valid(gpuvm, end, 1))
return NULL;
- return drm_gpuva_it_iter_first(&mgr->rb.tree, end, end + 1);
+ return drm_gpuva_it_iter_first(&gpuvm->rb.tree, end, end + 1);
}
EXPORT_SYMBOL_GPL(drm_gpuva_find_next);
/**
* drm_gpuva_interval_empty() - indicate whether a given interval of the VA space
* is empty
- * @mgr: the &drm_gpuva_manager to check the range for
+ * @gpuvm: the &drm_gpuvm to check the range for
* @addr: the start address of the range
* @range: the range of the interval
*
* Returns: true if the interval is empty, false otherwise
*/
bool
-drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
+drm_gpuva_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
{
- return !drm_gpuva_find_first(mgr, addr, range);
+ return !drm_gpuva_find_first(gpuvm, addr, range);
}
EXPORT_SYMBOL_GPL(drm_gpuva_interval_empty);
/**
* drm_gpuva_map() - helper to insert a &drm_gpuva according to a
* &drm_gpuva_op_map
- * @mgr: the &drm_gpuva_manager
+ * @gpuvm: the &drm_gpuvm
* @va: the &drm_gpuva to insert
* @op: the &drm_gpuva_op_map to initialize @va with
*
- * Initializes the @va from the @op and inserts it into the given @mgr.
+ * Initializes the @va from the @op and inserts it into the given @gpuvm.
*/
void
-drm_gpuva_map(struct drm_gpuva_manager *mgr,
+drm_gpuva_map(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va,
struct drm_gpuva_op_map *op)
{
drm_gpuva_init_from_op(va, op);
- drm_gpuva_insert(mgr, va);
+ drm_gpuva_insert(gpuvm, va);
}
EXPORT_SYMBOL_GPL(drm_gpuva_map);
@@ -993,18 +993,18 @@ drm_gpuva_remap(struct drm_gpuva *prev,
struct drm_gpuva_op_remap *op)
{
struct drm_gpuva *curr = op->unmap->va;
- struct drm_gpuva_manager *mgr = curr->mgr;
+ struct drm_gpuvm *gpuvm = curr->vm;
drm_gpuva_remove(curr);
if (op->prev) {
drm_gpuva_init_from_op(prev, op->prev);
- drm_gpuva_insert(mgr, prev);
+ drm_gpuva_insert(gpuvm, prev);
}
if (op->next) {
drm_gpuva_init_from_op(next, op->next);
- drm_gpuva_insert(mgr, next);
+ drm_gpuva_insert(gpuvm, next);
}
}
EXPORT_SYMBOL_GPL(drm_gpuva_remap);
@@ -1024,7 +1024,7 @@ drm_gpuva_unmap(struct drm_gpuva_op_unmap *op)
EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
static int
-op_map_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
+op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
u64 addr, u64 range,
struct drm_gem_object *obj, u64 offset)
{
@@ -1040,7 +1040,7 @@ op_map_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
}
static int
-op_remap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
+op_remap_cb(const struct drm_gpuvm_ops *fn, void *priv,
struct drm_gpuva_op_map *prev,
struct drm_gpuva_op_map *next,
struct drm_gpuva_op_unmap *unmap)
@@ -1058,7 +1058,7 @@ op_remap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
}
static int
-op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
+op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
struct drm_gpuva *va, bool merge)
{
struct drm_gpuva_op op = {};
@@ -1071,8 +1071,8 @@ op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
}
static int
-__drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
- const struct drm_gpuva_fn_ops *ops, void *priv,
+__drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
+ const struct drm_gpuvm_ops *ops, void *priv,
u64 req_addr, u64 req_range,
struct drm_gem_object *req_obj, u64 req_offset)
{
@@ -1080,10 +1080,10 @@ __drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
u64 req_end = req_addr + req_range;
int ret;
- if (unlikely(!drm_gpuva_range_valid(mgr, req_addr, req_range)))
+ if (unlikely(!drm_gpuva_range_valid(gpuvm, req_addr, req_range)))
return -EINVAL;
- drm_gpuva_for_each_va_range_safe(va, next, mgr, req_addr, req_end) {
+ drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
struct drm_gem_object *obj = va->gem.obj;
u64 offset = va->gem.offset;
u64 addr = va->va.addr;
@@ -1215,18 +1215,18 @@ __drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
}
static int
-__drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
- const struct drm_gpuva_fn_ops *ops, void *priv,
+__drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
+ const struct drm_gpuvm_ops *ops, void *priv,
u64 req_addr, u64 req_range)
{
struct drm_gpuva *va, *next;
u64 req_end = req_addr + req_range;
int ret;
- if (unlikely(!drm_gpuva_range_valid(mgr, req_addr, req_range)))
+ if (unlikely(!drm_gpuva_range_valid(gpuvm, req_addr, req_range)))
return -EINVAL;
- drm_gpuva_for_each_va_range_safe(va, next, mgr, req_addr, req_end) {
+ drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
struct drm_gpuva_op_map prev = {}, next = {};
bool prev_split = false, next_split = false;
struct drm_gem_object *obj = va->gem.obj;
@@ -1273,8 +1273,8 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
}
/**
- * drm_gpuva_sm_map() - creates the &drm_gpuva_op split/merge steps
- * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * drm_gpuvm_sm_map() - creates the &drm_gpuva_op split/merge steps
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
* @req_addr: the start address of the new mapping
* @req_range: the range of the new mapping
* @req_obj: the &drm_gem_object to map
@@ -1282,15 +1282,15 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
* @priv: pointer to a driver private data structure
*
* This function iterates the given range of the GPU VA space. It utilizes the
- * &drm_gpuva_fn_ops to call back into the driver providing the split and merge
+ * &drm_gpuvm_ops to call back into the driver providing the split and merge
* steps.
*
* Drivers may use these callbacks to update the GPU VA space right away within
* the callback. In case the driver decides to copy and store the operations for
- * later processing neither this function nor &drm_gpuva_sm_unmap is allowed to
- * be called before the &drm_gpuva_manager's view of the GPU VA space was
+ * later processing neither this function nor &drm_gpuvm_sm_unmap is allowed to
+ * be called before the &drm_gpuvm's view of the GPU VA space was
* updated with the previous set of operations. To update the
- * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
+ * &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
* drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
* used.
*
@@ -1305,39 +1305,39 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
* Returns: 0 on success or a negative error code
*/
int
-drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
+drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
u64 req_addr, u64 req_range,
struct drm_gem_object *req_obj, u64 req_offset)
{
- const struct drm_gpuva_fn_ops *ops = mgr->ops;
+ const struct drm_gpuvm_ops *ops = gpuvm->ops;
if (unlikely(!(ops && ops->sm_step_map &&
ops->sm_step_remap &&
ops->sm_step_unmap)))
return -EINVAL;
- return __drm_gpuva_sm_map(mgr, ops, priv,
+ return __drm_gpuvm_sm_map(gpuvm, ops, priv,
req_addr, req_range,
req_obj, req_offset);
}
-EXPORT_SYMBOL_GPL(drm_gpuva_sm_map);
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
/**
- * drm_gpuva_sm_unmap() - creates the &drm_gpuva_ops to split on unmap
- * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * drm_gpuvm_sm_unmap() - creates the &drm_gpuva_ops to split on unmap
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
* @priv: pointer to a driver private data structure
* @req_addr: the start address of the range to unmap
* @req_range: the range of the mappings to unmap
*
* This function iterates the given range of the GPU VA space. It utilizes the
- * &drm_gpuva_fn_ops to call back into the driver providing the operations to
+ * &drm_gpuvm_ops to call back into the driver providing the operations to
* unmap and, if required, split existent mappings.
*
* Drivers may use these callbacks to update the GPU VA space right away within
* the callback. In case the driver decides to copy and store the operations for
- * later processing neither this function nor &drm_gpuva_sm_map is allowed to be
- * called before the &drm_gpuva_manager's view of the GPU VA space was updated
- * with the previous set of operations. To update the &drm_gpuva_manager's view
+ * later processing neither this function nor &drm_gpuvm_sm_map is allowed to be
+ * called before the &drm_gpuvm's view of the GPU VA space was updated
+ * with the previous set of operations. To update the &drm_gpuvm's view
* of the GPU VA space drm_gpuva_insert(), drm_gpuva_destroy_locked() and/or
* drm_gpuva_destroy_unlocked() should be used.
*
@@ -1350,24 +1350,24 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map);
* Returns: 0 on success or a negative error code
*/
int
-drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
+drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
u64 req_addr, u64 req_range)
{
- const struct drm_gpuva_fn_ops *ops = mgr->ops;
+ const struct drm_gpuvm_ops *ops = gpuvm->ops;
if (unlikely(!(ops && ops->sm_step_remap &&
ops->sm_step_unmap)))
return -EINVAL;
- return __drm_gpuva_sm_unmap(mgr, ops, priv,
+ return __drm_gpuvm_sm_unmap(gpuvm, ops, priv,
req_addr, req_range);
}
-EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap);
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap);
static struct drm_gpuva_op *
-gpuva_op_alloc(struct drm_gpuva_manager *mgr)
+gpuva_op_alloc(struct drm_gpuvm *gpuvm)
{
- const struct drm_gpuva_fn_ops *fn = mgr->ops;
+ const struct drm_gpuvm_ops *fn = gpuvm->ops;
struct drm_gpuva_op *op;
if (fn && fn->op_alloc)
@@ -1382,10 +1382,10 @@ gpuva_op_alloc(struct drm_gpuva_manager *mgr)
}
static void
-gpuva_op_free(struct drm_gpuva_manager *mgr,
+gpuva_op_free(struct drm_gpuvm *gpuvm,
struct drm_gpuva_op *op)
{
- const struct drm_gpuva_fn_ops *fn = mgr->ops;
+ const struct drm_gpuvm_ops *fn = gpuvm->ops;
if (fn && fn->op_free)
fn->op_free(op);
@@ -1398,14 +1398,14 @@ drm_gpuva_sm_step(struct drm_gpuva_op *__op,
void *priv)
{
struct {
- struct drm_gpuva_manager *mgr;
+ struct drm_gpuvm *vm;
struct drm_gpuva_ops *ops;
} *args = priv;
- struct drm_gpuva_manager *mgr = args->mgr;
+ struct drm_gpuvm *gpuvm = args->vm;
struct drm_gpuva_ops *ops = args->ops;
struct drm_gpuva_op *op;
- op = gpuva_op_alloc(mgr);
+ op = gpuva_op_alloc(gpuvm);
if (unlikely(!op))
goto err;
@@ -1444,20 +1444,20 @@ drm_gpuva_sm_step(struct drm_gpuva_op *__op,
err_free_prev:
kfree(op->remap.prev);
err_free_op:
- gpuva_op_free(mgr, op);
+ gpuva_op_free(gpuvm, op);
err:
return -ENOMEM;
}
-static const struct drm_gpuva_fn_ops gpuva_list_ops = {
+static const struct drm_gpuvm_ops gpuvm_list_ops = {
.sm_step_map = drm_gpuva_sm_step,
.sm_step_remap = drm_gpuva_sm_step,
.sm_step_unmap = drm_gpuva_sm_step,
};
/**
- * drm_gpuva_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
- * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
* @req_addr: the start address of the new mapping
* @req_range: the range of the new mapping
* @req_obj: the &drm_gem_object to map
@@ -1476,9 +1476,9 @@ static const struct drm_gpuva_fn_ops gpuva_list_ops = {
* map operation requested by the caller.
*
* Note that before calling this function again with another mapping request it
- * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The
+ * is necessary to update the &drm_gpuvm's view of the GPU VA space. The
* previously obtained operations must be either processed or abandoned. To
- * update the &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
+ * update the &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
* drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
* used.
*
@@ -1488,13 +1488,13 @@ static const struct drm_gpuva_fn_ops gpuva_list_ops = {
* Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
*/
struct drm_gpuva_ops *
-drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
+drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
u64 req_addr, u64 req_range,
struct drm_gem_object *req_obj, u64 req_offset)
{
struct drm_gpuva_ops *ops;
struct {
- struct drm_gpuva_manager *mgr;
+ struct drm_gpuvm *vm;
struct drm_gpuva_ops *ops;
} args;
int ret;
@@ -1505,10 +1505,10 @@ drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
INIT_LIST_HEAD(&ops->list);
- args.mgr = mgr;
+ args.vm = gpuvm;
args.ops = ops;
- ret = __drm_gpuva_sm_map(mgr, &gpuva_list_ops, &args,
+ ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
req_addr, req_range,
req_obj, req_offset);
if (ret)
@@ -1517,15 +1517,15 @@ drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
return ops;
err_free_ops:
- drm_gpuva_ops_free(mgr, ops);
+ drm_gpuva_ops_free(gpuvm, ops);
return ERR_PTR(ret);
}
-EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create);
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_ops_create);
/**
- * drm_gpuva_sm_unmap_ops_create() - creates the &drm_gpuva_ops to split on
+ * drm_gpuvm_sm_unmap_ops_create() - creates the &drm_gpuva_ops to split on
* unmap
- * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
* @req_addr: the start address of the range to unmap
* @req_range: the range of the mappings to unmap
*
@@ -1540,9 +1540,9 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create);
* remap operations.
*
* Note that before calling this function again with another range to unmap it
- * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The
+ * is necessary to update the &drm_gpuvm's view of the GPU VA space. The
* previously obtained operations must be processed or abandoned. To update the
- * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
+ * &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
* drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
* used.
*
@@ -1552,12 +1552,12 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create);
* Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
*/
struct drm_gpuva_ops *
-drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
+drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
u64 req_addr, u64 req_range)
{
struct drm_gpuva_ops *ops;
struct {
- struct drm_gpuva_manager *mgr;
+ struct drm_gpuvm *vm;
struct drm_gpuva_ops *ops;
} args;
int ret;
@@ -1568,10 +1568,10 @@ drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
INIT_LIST_HEAD(&ops->list);
- args.mgr = mgr;
+ args.vm = gpuvm;
args.ops = ops;
- ret = __drm_gpuva_sm_unmap(mgr, &gpuva_list_ops, &args,
+ ret = __drm_gpuvm_sm_unmap(gpuvm, &gpuvm_list_ops, &args,
req_addr, req_range);
if (ret)
goto err_free_ops;
@@ -1579,14 +1579,14 @@ drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
return ops;
err_free_ops:
- drm_gpuva_ops_free(mgr, ops);
+ drm_gpuva_ops_free(gpuvm, ops);
return ERR_PTR(ret);
}
-EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap_ops_create);
+EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap_ops_create);
/**
- * drm_gpuva_prefetch_ops_create() - creates the &drm_gpuva_ops to prefetch
- * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * drm_gpuvm_prefetch_ops_create() - creates the &drm_gpuva_ops to prefetch
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
* @addr: the start address of the range to prefetch
* @range: the range of the mappings to prefetch
*
@@ -1603,7 +1603,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap_ops_create);
* Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
*/
struct drm_gpuva_ops *
-drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
+drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm,
u64 addr, u64 range)
{
struct drm_gpuva_ops *ops;
@@ -1618,8 +1618,8 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
INIT_LIST_HEAD(&ops->list);
- drm_gpuva_for_each_va_range(va, mgr, addr, end) {
- op = gpuva_op_alloc(mgr);
+ drm_gpuvm_for_each_va_range(va, gpuvm, addr, end) {
+ op = gpuva_op_alloc(gpuvm);
if (!op) {
ret = -ENOMEM;
goto err_free_ops;
@@ -1633,14 +1633,14 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
return ops;
err_free_ops:
- drm_gpuva_ops_free(mgr, ops);
+ drm_gpuva_ops_free(gpuvm, ops);
return ERR_PTR(ret);
}
-EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create);
+EXPORT_SYMBOL_GPL(drm_gpuvm_prefetch_ops_create);
/**
- * drm_gpuva_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM
- * @mgr: the &drm_gpuva_manager representing the GPU VA space
+ * drm_gpuvm_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM
+ * @gpuvm: the &drm_gpuvm representing the GPU VA space
* @obj: the &drm_gem_object to unmap
*
* This function creates a list of operations to perform unmapping for every
@@ -1658,7 +1658,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create);
* Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
*/
struct drm_gpuva_ops *
-drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
+drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
struct drm_gem_object *obj)
{
struct drm_gpuva_ops *ops;
@@ -1675,7 +1675,7 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
INIT_LIST_HEAD(&ops->list);
drm_gem_for_each_gpuva(va, obj) {
- op = gpuva_op_alloc(mgr);
+ op = gpuva_op_alloc(gpuvm);
if (!op) {
ret = -ENOMEM;
goto err_free_ops;
@@ -1689,21 +1689,21 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
return ops;
err_free_ops:
- drm_gpuva_ops_free(mgr, ops);
+ drm_gpuva_ops_free(gpuvm, ops);
return ERR_PTR(ret);
}
-EXPORT_SYMBOL_GPL(drm_gpuva_gem_unmap_ops_create);
+EXPORT_SYMBOL_GPL(drm_gpuvm_gem_unmap_ops_create);
/**
* drm_gpuva_ops_free() - free the given &drm_gpuva_ops
- * @mgr: the &drm_gpuva_manager the ops were created for
+ * @gpuvm: the &drm_gpuvm the ops were created for
* @ops: the &drm_gpuva_ops to free
*
* Frees the given &drm_gpuva_ops structure including all the ops associated
* with it.
*/
void
-drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
+drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
struct drm_gpuva_ops *ops)
{
struct drm_gpuva_op *op, *next;
@@ -1717,7 +1717,7 @@ drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
kfree(op->remap.unmap);
}
- gpuva_op_free(mgr, op);
+ gpuva_op_free(gpuvm, op);
}
kfree(ops);
diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c
index a90c4cd8cbb2..c001952cd678 100644
--- a/drivers/gpu/drm/nouveau/nouveau_exec.c
+++ b/drivers/gpu/drm/nouveau/nouveau_exec.c
@@ -106,7 +106,7 @@ nouveau_exec_job_submit(struct nouveau_job *job)
drm_exec_until_all_locked(exec) {
struct drm_gpuva *va;
- drm_gpuva_for_each_va(va, &uvmm->umgr) {
+ drm_gpuvm_for_each_va(va, &uvmm->umgr) {
if (unlikely(va == &uvmm->umgr.kernel_alloc_node))
continue;
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index aae780e4a4aa..c750072cb268 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -444,7 +444,7 @@ op_map_prepare_unwind(struct nouveau_uvma *uvma)
static void
op_unmap_prepare_unwind(struct drm_gpuva *va)
{
- drm_gpuva_insert(va->mgr, va);
+ drm_gpuva_insert(va->vm, va);
}
static void
@@ -1194,7 +1194,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
goto unwind_continue;
}
- op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr,
+ op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
op->va.addr,
op->va.range);
if (IS_ERR(op->ops)) {
@@ -1240,7 +1240,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
}
}
- op->ops = drm_gpuva_sm_map_ops_create(&uvmm->umgr,
+ op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->umgr,
op->va.addr,
op->va.range,
op->gem.obj,
@@ -1264,7 +1264,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
break;
}
case OP_UNMAP:
- op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr,
+ op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
op->va.addr,
op->va.range);
if (IS_ERR(op->ops)) {
@@ -1836,11 +1836,11 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
uvmm->kernel_managed_addr = kernel_managed_addr;
uvmm->kernel_managed_size = kernel_managed_size;
- drm_gpuva_manager_init(&uvmm->umgr, cli->name,
- NOUVEAU_VA_SPACE_START,
- NOUVEAU_VA_SPACE_END,
- kernel_managed_addr, kernel_managed_size,
- NULL);
+ drm_gpuvm_init(&uvmm->umgr, cli->name,
+ NOUVEAU_VA_SPACE_START,
+ NOUVEAU_VA_SPACE_END,
+ kernel_managed_addr, kernel_managed_size,
+ NULL);
ret = nvif_vmm_ctor(&cli->mmu, "uvmm",
cli->vmm.vmm.object.oclass, RAW,
@@ -1855,7 +1855,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
return 0;
out_free_gpuva_mgr:
- drm_gpuva_manager_destroy(&uvmm->umgr);
+ drm_gpuvm_destroy(&uvmm->umgr);
out_unlock:
mutex_unlock(&cli->mutex);
return ret;
@@ -1877,7 +1877,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
wait_event(entity->job.wq, list_empty(&entity->job.list.head));
nouveau_uvmm_lock(uvmm);
- drm_gpuva_for_each_va_safe(va, next, &uvmm->umgr) {
+ drm_gpuvm_for_each_va_safe(va, next, &uvmm->umgr) {
struct nouveau_uvma *uvma = uvma_from_va(va);
struct drm_gem_object *obj = va->gem.obj;
@@ -1910,7 +1910,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
mutex_lock(&cli->mutex);
nouveau_vmm_fini(&uvmm->vmm);
- drm_gpuva_manager_destroy(&uvmm->umgr);
+ drm_gpuvm_destroy(&uvmm->umgr);
mutex_unlock(&cli->mutex);
dma_resv_fini(&uvmm->resv);
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
index fc7f6fd2a4e1..e96c9919d1bd 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
@@ -3,13 +3,13 @@
#ifndef __NOUVEAU_UVMM_H__
#define __NOUVEAU_UVMM_H__
-#include <drm/drm_gpuva_mgr.h>
+#include <drm/drm_gpuvm.h>
#include "nouveau_drv.h"
struct nouveau_uvmm {
struct nouveau_vmm vmm;
- struct drm_gpuva_manager umgr;
+ struct drm_gpuvm umgr;
struct maple_tree region_mt;
struct mutex mutex;
struct dma_resv resv;
@@ -44,7 +44,7 @@ struct nouveau_uvma {
#define uvmm_from_mgr(x) container_of((x), struct nouveau_uvmm, umgr)
#define uvma_from_va(x) container_of((x), struct nouveau_uvma, va)
-#define to_uvmm(x) uvmm_from_mgr((x)->va.mgr)
+#define to_uvmm(x) uvmm_from_mgr((x)->va.vm)
struct nouveau_uvmm_bind_job {
struct nouveau_job base;
diff --git a/include/drm/drm_debugfs.h b/include/drm/drm_debugfs.h
index 3bba169f9bae..cf06cee4343f 100644
--- a/include/drm/drm_debugfs.h
+++ b/include/drm/drm_debugfs.h
@@ -35,7 +35,7 @@
#include <linux/types.h>
#include <linux/seq_file.h>
-#include <drm/drm_gpuva_mgr.h>
+#include <drm/drm_gpuvm.h>
/**
* DRM_DEBUGFS_GPUVA_INFO - &drm_info_list entry to dump a GPU VA space
@@ -152,7 +152,7 @@ void drm_debugfs_add_files(struct drm_device *dev,
const struct drm_debugfs_info *files, int count);
int drm_debugfs_gpuva_info(struct seq_file *m,
- struct drm_gpuva_manager *mgr);
+ struct drm_gpuvm *gpuvm);
#else
static inline void drm_debugfs_create_files(const struct drm_info_list *files,
int count, struct dentry *root,
@@ -177,7 +177,7 @@ static inline void drm_debugfs_add_files(struct drm_device *dev,
{}
static inline int drm_debugfs_gpuva_info(struct seq_file *m,
- struct drm_gpuva_manager *mgr)
+ struct drm_gpuvm *gpuvm)
{
return 0;
}
diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuvm.h
similarity index 78%
rename from include/drm/drm_gpuva_mgr.h
rename to include/drm/drm_gpuvm.h
index ed8d50200cc3..0e802676e0a9 100644
--- a/include/drm/drm_gpuva_mgr.h
+++ b/include/drm/drm_gpuvm.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0-only */
-#ifndef __DRM_GPUVA_MGR_H__
-#define __DRM_GPUVA_MGR_H__
+#ifndef __DRM_GPUVM_H__
+#define __DRM_GPUVM_H__
/*
* Copyright (c) 2022 Red Hat.
@@ -31,8 +31,8 @@
#include <drm/drm_gem.h>
-struct drm_gpuva_manager;
-struct drm_gpuva_fn_ops;
+struct drm_gpuvm;
+struct drm_gpuvm_ops;
/**
* enum drm_gpuva_flags - flags for struct drm_gpuva
@@ -62,15 +62,15 @@ enum drm_gpuva_flags {
* struct drm_gpuva - structure to track a GPU VA mapping
*
* This structure represents a GPU VA mapping and is associated with a
- * &drm_gpuva_manager.
+ * &drm_gpuvm.
*
* Typically, this structure is embedded in bigger driver structures.
*/
struct drm_gpuva {
/**
- * @mgr: the &drm_gpuva_manager this object is associated with
+ * @vm: the &drm_gpuvm this object is associated with
*/
- struct drm_gpuva_manager *mgr;
+ struct drm_gpuvm *vm;
/**
* @flags: the &drm_gpuva_flags for this mapping
@@ -137,20 +137,20 @@ struct drm_gpuva {
} rb;
};
-int drm_gpuva_insert(struct drm_gpuva_manager *mgr, struct drm_gpuva *va);
+int drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va);
void drm_gpuva_remove(struct drm_gpuva *va);
void drm_gpuva_link(struct drm_gpuva *va);
void drm_gpuva_unlink(struct drm_gpuva *va);
-struct drm_gpuva *drm_gpuva_find(struct drm_gpuva_manager *mgr,
+struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm,
u64 addr, u64 range);
-struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
+struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
u64 addr, u64 range);
-struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start);
-struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end);
+struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start);
+struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end);
-bool drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range);
+bool drm_gpuva_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range);
static inline void drm_gpuva_init(struct drm_gpuva *va, u64 addr, u64 range,
struct drm_gem_object *obj, u64 offset)
@@ -186,7 +186,7 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va)
}
/**
- * struct drm_gpuva_manager - DRM GPU VA Manager
+ * struct drm_gpuvm - DRM GPU VA Manager
*
* The DRM GPU VA Manager keeps track of a GPU's virtual address space by using
* &maple_tree structures. Typically, this structure is embedded in bigger
@@ -197,7 +197,7 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va)
*
* There should be one manager instance per GPU virtual address space.
*/
-struct drm_gpuva_manager {
+struct drm_gpuvm {
/**
* @name: the name of the DRM GPU VA space
*/
@@ -237,100 +237,99 @@ struct drm_gpuva_manager {
struct drm_gpuva kernel_alloc_node;
/**
- * @ops: &drm_gpuva_fn_ops providing the split/merge steps to drivers
+ * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
*/
- const struct drm_gpuva_fn_ops *ops;
+ const struct drm_gpuvm_ops *ops;
};
-void drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
- const char *name,
- u64 start_offset, u64 range,
- u64 reserve_offset, u64 reserve_range,
- const struct drm_gpuva_fn_ops *ops);
-void drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr);
+void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
+ u64 start_offset, u64 range,
+ u64 reserve_offset, u64 reserve_range,
+ const struct drm_gpuvm_ops *ops);
+void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
static inline struct drm_gpuva *
__drm_gpuva_next(struct drm_gpuva *va)
{
- if (va && !list_is_last(&va->rb.entry, &va->mgr->rb.list))
+ if (va && !list_is_last(&va->rb.entry, &va->vm->rb.list))
return list_next_entry(va, rb.entry);
return NULL;
}
/**
- * drm_gpuva_for_each_va_range() - iterate over a range of &drm_gpuvas
+ * drm_gpuvm_for_each_va_range() - iterate over a range of &drm_gpuvas
* @va__: &drm_gpuva structure to assign to in each iteration step
- * @mgr__: &drm_gpuva_manager to walk over
+ * @gpuvm__: &drm_gpuvm to walk over
* @start__: starting offset, the first gpuva will overlap this
* @end__: ending offset, the last gpuva will start before this (but may
* overlap)
*
- * This iterator walks over all &drm_gpuvas in the &drm_gpuva_manager that lie
+ * This iterator walks over all &drm_gpuvas in the &drm_gpuvm that lie
* between @start__ and @end__. It is implemented similarly to list_for_each(),
- * but is using the &drm_gpuva_manager's internal interval tree to accelerate
+ * but is using the &drm_gpuvm's internal interval tree to accelerate
* the search for the starting &drm_gpuva, and hence isn't safe against removal
* of elements. It assumes that @end__ is within (or is the upper limit of) the
- * &drm_gpuva_manager. This iterator does not skip over the &drm_gpuva_manager's
+ * &drm_gpuvm. This iterator does not skip over the &drm_gpuvm's
* @kernel_alloc_node.
*/
-#define drm_gpuva_for_each_va_range(va__, mgr__, start__, end__) \
- for (va__ = drm_gpuva_find_first((mgr__), (start__), (end__) - (start__)); \
+#define drm_gpuvm_for_each_va_range(va__, gpuvm__, start__, end__) \
+ for (va__ = drm_gpuva_find_first((gpuvm__), (start__), (end__) - (start__)); \
va__ && (va__->va.addr < (end__)); \
va__ = __drm_gpuva_next(va__))
/**
- * drm_gpuva_for_each_va_range_safe() - safely iterate over a range of
+ * drm_gpuvm_for_each_va_range_safe() - safely iterate over a range of
* &drm_gpuvas
* @va__: &drm_gpuva to assign to in each iteration step
* @next__: another &drm_gpuva to use as temporary storage
- * @mgr__: &drm_gpuva_manager to walk over
+ * @gpuvm__: &drm_gpuvm to walk over
* @start__: starting offset, the first gpuva will overlap this
* @end__: ending offset, the last gpuva will start before this (but may
* overlap)
*
- * This iterator walks over all &drm_gpuvas in the &drm_gpuva_manager that lie
+ * This iterator walks over all &drm_gpuvas in the &drm_gpuvm that lie
* between @start__ and @end__. It is implemented similarly to
- * list_for_each_safe(), but is using the &drm_gpuva_manager's internal interval
+ * list_for_each_safe(), but is using the &drm_gpuvm's internal interval
* tree to accelerate the search for the starting &drm_gpuva, and hence is safe
* against removal of elements. It assumes that @end__ is within (or is the
- * upper limit of) the &drm_gpuva_manager. This iterator does not skip over the
- * &drm_gpuva_manager's @kernel_alloc_node.
+ * upper limit of) the &drm_gpuvm. This iterator does not skip over the
+ * &drm_gpuvm's @kernel_alloc_node.
*/
-#define drm_gpuva_for_each_va_range_safe(va__, next__, mgr__, start__, end__) \
- for (va__ = drm_gpuva_find_first((mgr__), (start__), (end__) - (start__)), \
+#define drm_gpuvm_for_each_va_range_safe(va__, next__, gpuvm__, start__, end__) \
+ for (va__ = drm_gpuva_find_first((gpuvm__), (start__), (end__) - (start__)), \
next__ = __drm_gpuva_next(va__); \
va__ && (va__->va.addr < (end__)); \
va__ = next__, next__ = __drm_gpuva_next(va__))
/**
- * drm_gpuva_for_each_va() - iterate over all &drm_gpuvas
+ * drm_gpuvm_for_each_va() - iterate over all &drm_gpuvas
* @va__: &drm_gpuva to assign to in each iteration step
- * @mgr__: &drm_gpuva_manager to walk over
+ * @gpuvm__: &drm_gpuvm to walk over
*
* This iterator walks over all &drm_gpuva structures associated with the given
- * &drm_gpuva_manager.
+ * &drm_gpuvm.
*/
-#define drm_gpuva_for_each_va(va__, mgr__) \
- list_for_each_entry(va__, &(mgr__)->rb.list, rb.entry)
+#define drm_gpuvm_for_each_va(va__, gpuvm__) \
+ list_for_each_entry(va__, &(gpuvm__)->rb.list, rb.entry)
/**
- * drm_gpuva_for_each_va_safe() - safely iterate over all &drm_gpuvas
+ * drm_gpuvm_for_each_va_safe() - safely iterate over all &drm_gpuvas
* @va__: &drm_gpuva to assign to in each iteration step
* @next__: another &drm_gpuva to use as temporary storage
- * @mgr__: &drm_gpuva_manager to walk over
+ * @gpuvm__: &drm_gpuvm to walk over
*
* This iterator walks over all &drm_gpuva structures associated with the given
- * &drm_gpuva_manager. It is implemented with list_for_each_entry_safe(), and
+ * &drm_gpuvm. It is implemented with list_for_each_entry_safe(), and
* hence safe against the removal of elements.
*/
-#define drm_gpuva_for_each_va_safe(va__, next__, mgr__) \
- list_for_each_entry_safe(va__, next__, &(mgr__)->rb.list, rb.entry)
+#define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \
+ list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry)
/**
* enum drm_gpuva_op_type - GPU VA operation type
*
- * Operations to alter the GPU VA mappings tracked by the &drm_gpuva_manager.
+ * Operations to alter the GPU VA mappings tracked by the &drm_gpuvm.
*/
enum drm_gpuva_op_type {
/**
@@ -413,7 +412,7 @@ struct drm_gpuva_op_unmap {
*
* Optionally, if &keep is set, drivers may keep the actual page table
* mappings for this &drm_gpuva, adding the missing page table entries
- * only and update the &drm_gpuva_manager accordingly.
+ * only and update the &drm_gpuvm accordingly.
*/
bool keep;
};
@@ -584,22 +583,22 @@ struct drm_gpuva_ops {
#define drm_gpuva_next_op(op) list_next_entry(op, entry)
struct drm_gpuva_ops *
-drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
+drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
u64 addr, u64 range,
struct drm_gem_object *obj, u64 offset);
struct drm_gpuva_ops *
-drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
+drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
u64 addr, u64 range);
struct drm_gpuva_ops *
-drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
+drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm,
u64 addr, u64 range);
struct drm_gpuva_ops *
-drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
+drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
struct drm_gem_object *obj);
-void drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
+void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
struct drm_gpuva_ops *ops);
static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
@@ -610,15 +609,15 @@ static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
}
/**
- * struct drm_gpuva_fn_ops - callbacks for split/merge steps
+ * struct drm_gpuvm_ops - callbacks for split/merge steps
*
- * This structure defines the callbacks used by &drm_gpuva_sm_map and
- * &drm_gpuva_sm_unmap to provide the split/merge steps for map and unmap
+ * This structure defines the callbacks used by &drm_gpuvm_sm_map and
+ * &drm_gpuvm_sm_unmap to provide the split/merge steps for map and unmap
* operations to drivers.
*/
-struct drm_gpuva_fn_ops {
+struct drm_gpuvm_ops {
/**
- * @op_alloc: called when the &drm_gpuva_manager allocates
+ * @op_alloc: called when the &drm_gpuvm allocates
* a struct drm_gpuva_op
*
* Some drivers may want to embed struct drm_gpuva_op into driver
@@ -630,7 +629,7 @@ struct drm_gpuva_fn_ops {
struct drm_gpuva_op *(*op_alloc)(void);
/**
- * @op_free: called when the &drm_gpuva_manager frees a
+ * @op_free: called when the &drm_gpuvm frees a
* struct drm_gpuva_op
*
* Some drivers may want to embed struct drm_gpuva_op into driver
@@ -642,19 +641,19 @@ struct drm_gpuva_fn_ops {
void (*op_free)(struct drm_gpuva_op *op);
/**
- * @sm_step_map: called from &drm_gpuva_sm_map to finally insert the
+ * @sm_step_map: called from &drm_gpuvm_sm_map to finally insert the
* mapping once all previous steps were completed
*
* The &priv pointer matches the one the driver passed to
- * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
+ * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
*
- * Can be NULL if &drm_gpuva_sm_map is used.
+ * Can be NULL if &drm_gpuvm_sm_map is used.
*/
int (*sm_step_map)(struct drm_gpuva_op *op, void *priv);
/**
- * @sm_step_remap: called from &drm_gpuva_sm_map and
- * &drm_gpuva_sm_unmap to split up an existent mapping
+ * @sm_step_remap: called from &drm_gpuvm_sm_map and
+ * &drm_gpuvm_sm_unmap to split up an existent mapping
*
* This callback is called when existent mapping needs to be split up.
* This is the case when either a newly requested mapping overlaps or
@@ -662,38 +661,38 @@ struct drm_gpuva_fn_ops {
* mapping is requested.
*
* The &priv pointer matches the one the driver passed to
- * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
+ * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
*
- * Can be NULL if neither &drm_gpuva_sm_map nor &drm_gpuva_sm_unmap is
+ * Can be NULL if neither &drm_gpuvm_sm_map nor &drm_gpuvm_sm_unmap is
* used.
*/
int (*sm_step_remap)(struct drm_gpuva_op *op, void *priv);
/**
- * @sm_step_unmap: called from &drm_gpuva_sm_map and
- * &drm_gpuva_sm_unmap to unmap an existent mapping
+ * @sm_step_unmap: called from &drm_gpuvm_sm_map and
+ * &drm_gpuvm_sm_unmap to unmap an existent mapping
*
* This callback is called when existent mapping needs to be unmapped.
* This is the case when either a newly requested mapping encloses an
* existent mapping or an unmap of an existent mapping is requested.
*
* The &priv pointer matches the one the driver passed to
- * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
+ * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
*
- * Can be NULL if neither &drm_gpuva_sm_map nor &drm_gpuva_sm_unmap is
+ * Can be NULL if neither &drm_gpuvm_sm_map nor &drm_gpuvm_sm_unmap is
* used.
*/
int (*sm_step_unmap)(struct drm_gpuva_op *op, void *priv);
};
-int drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
+int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
u64 addr, u64 range,
struct drm_gem_object *obj, u64 offset);
-int drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
+int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
u64 addr, u64 range);
-void drm_gpuva_map(struct drm_gpuva_manager *mgr,
+void drm_gpuva_map(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va,
struct drm_gpuva_op_map *op);
@@ -703,4 +702,4 @@ void drm_gpuva_remap(struct drm_gpuva *prev,
void drm_gpuva_unmap(struct drm_gpuva_op_unmap *op);
-#endif /* __DRM_GPUVA_MGR_H__ */
+#endif /* __DRM_GPUVM_H__ */
--
2.41.0
* Re: [PATCH drm-misc-next v4 1/8] drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm
2023-09-20 14:42 ` [PATCH drm-misc-next v4 1/8] drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm Danilo Krummrich
@ 2023-09-21 6:48 ` Christian König
2023-09-25 0:42 ` Dave Airlie
0 siblings, 1 reply; 29+ messages in thread
From: Christian König @ 2023-09-21 6:48 UTC (permalink / raw)
To: Danilo Krummrich, airlied, daniel, matthew.brost,
thomas.hellstrom, sarah.walker, donald.robson, boris.brezillon,
faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel
Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
> Rename struct drm_gpuva_manager to struct drm_gpuvm including
> corresponding functions. This way the GPUVA manager's structures align
> very well with the documentation of VM_BIND [1] and VM_BIND locking [2].
>
> It also provides a better foundation for the naming of data structures
> and functions introduced for implementing a common dma-resv per GPU-VM
> including tracking of external and evicted objects in subsequent
> patches.
>
> [1] Documentation/gpu/drm-vm-bind-async.rst
> [2] Documentation/gpu/drm-vm-bind-locking.rst
>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Not sure if that name is better or worse, but from a handling
perspective I suggest pushing this patch to drm-misc-next separately.
Feel free to add my Acked-by for pushing this.
Regards,
Christian.
> ---
> drivers/gpu/drm/Makefile | 2 +-
> drivers/gpu/drm/drm_debugfs.c | 16 +-
> .../gpu/drm/{drm_gpuva_mgr.c => drm_gpuvm.c} | 400 +++++++++---------
> drivers/gpu/drm/nouveau/nouveau_exec.c | 2 +-
> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 24 +-
> drivers/gpu/drm/nouveau/nouveau_uvmm.h | 6 +-
> include/drm/drm_debugfs.h | 6 +-
> include/drm/{drm_gpuva_mgr.h => drm_gpuvm.h} | 153 ++++---
> 8 files changed, 304 insertions(+), 305 deletions(-)
> rename drivers/gpu/drm/{drm_gpuva_mgr.c => drm_gpuvm.c} (78%)
> rename include/drm/{drm_gpuva_mgr.h => drm_gpuvm.h} (78%)
>
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index 215e78e79125..7a84b3cddeab 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -45,7 +45,7 @@ drm-y := \
> drm_vblank.o \
> drm_vblank_work.o \
> drm_vma_manager.o \
> - drm_gpuva_mgr.o \
> + drm_gpuvm.o \
> drm_writeback.o
> drm-$(CONFIG_DRM_LEGACY) += \
> drm_agpsupport.o \
> diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
> index 44ecd7d0daac..f291fb4b359f 100644
> --- a/drivers/gpu/drm/drm_debugfs.c
> +++ b/drivers/gpu/drm/drm_debugfs.c
> @@ -40,7 +40,7 @@
> #include <drm/drm_file.h>
> #include <drm/drm_gem.h>
> #include <drm/drm_managed.h>
> -#include <drm/drm_gpuva_mgr.h>
> +#include <drm/drm_gpuvm.h>
>
> #include "drm_crtc_internal.h"
> #include "drm_internal.h"
> @@ -189,31 +189,31 @@ static const struct file_operations drm_debugfs_fops = {
> /**
> * drm_debugfs_gpuva_info - dump the given DRM GPU VA space
> * @m: pointer to the &seq_file to write
> - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> *
> * Dumps the GPU VA mappings of a given DRM GPU VA manager.
> *
> * For each DRM GPU VA space drivers should call this function from their
> * &drm_info_list's show callback.
> *
> - * Returns: 0 on success, -ENODEV if the &mgr is not initialized
> + * Returns: 0 on success, -ENODEV if the &gpuvm is not initialized
> */
> int drm_debugfs_gpuva_info(struct seq_file *m,
> - struct drm_gpuva_manager *mgr)
> + struct drm_gpuvm *gpuvm)
> {
> - struct drm_gpuva *va, *kva = &mgr->kernel_alloc_node;
> + struct drm_gpuva *va, *kva = &gpuvm->kernel_alloc_node;
>
> - if (!mgr->name)
> + if (!gpuvm->name)
> return -ENODEV;
>
> seq_printf(m, "DRM GPU VA space (%s) [0x%016llx;0x%016llx]\n",
> - mgr->name, mgr->mm_start, mgr->mm_start + mgr->mm_range);
> + gpuvm->name, gpuvm->mm_start, gpuvm->mm_start + gpuvm->mm_range);
> seq_printf(m, "Kernel reserved node [0x%016llx;0x%016llx]\n",
> kva->va.addr, kva->va.addr + kva->va.range);
> seq_puts(m, "\n");
> seq_puts(m, " VAs | start | range | end | object | object offset\n");
> seq_puts(m, "-------------------------------------------------------------------------------------------------------------\n");
> - drm_gpuva_for_each_va(va, mgr) {
> + drm_gpuvm_for_each_va(va, gpuvm) {
> if (unlikely(va == kva))
> continue;
>
> diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuvm.c
> similarity index 78%
> rename from drivers/gpu/drm/drm_gpuva_mgr.c
> rename to drivers/gpu/drm/drm_gpuvm.c
> index f86bfad74ff8..7074bcad5b28 100644
> --- a/drivers/gpu/drm/drm_gpuva_mgr.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -25,7 +25,7 @@
> *
> */
>
> -#include <drm/drm_gpuva_mgr.h>
> +#include <drm/drm_gpuvm.h>
>
> #include <linux/interval_tree_generic.h>
> #include <linux/mm.h>
> @@ -33,8 +33,8 @@
> /**
> * DOC: Overview
> *
> - * The DRM GPU VA Manager, represented by struct drm_gpuva_manager keeps track
> - * of a GPU's virtual address (VA) space and manages the corresponding virtual
> + * The DRM GPU VA Manager, represented by struct drm_gpuvm keeps track of a
> + * GPU's virtual address (VA) space and manages the corresponding virtual
> * mappings represented by &drm_gpuva objects. It also keeps track of the
> * mapping's backing &drm_gem_object buffers.
> *
> @@ -47,28 +47,28 @@
> * The GPU VA manager internally uses a rb-tree to manage the
> * &drm_gpuva mappings within a GPU's virtual address space.
> *
> - * The &drm_gpuva_manager contains a special &drm_gpuva representing the
> + * The &drm_gpuvm structure contains a special &drm_gpuva representing the
> * portion of VA space reserved by the kernel. This node is initialized together
> * with the GPU VA manager instance and removed when the GPU VA manager is
> * destroyed.
> *
> - * In a typical application drivers would embed struct drm_gpuva_manager and
> + * In a typical application drivers would embed struct drm_gpuvm and
> * struct drm_gpuva within their own driver specific structures, there won't be
> * any memory allocations of its own nor memory allocations of &drm_gpuva
> * entries.
> *
> - * The data structures needed to store &drm_gpuvas within the &drm_gpuva_manager
> - * are contained within struct drm_gpuva already. Hence, for inserting
> - * &drm_gpuva entries from within dma-fence signalling critical sections it is
> - * enough to pre-allocate the &drm_gpuva structures.
> + * The data structures needed to store &drm_gpuvas within the &drm_gpuvm are
> + * contained within struct drm_gpuva already. Hence, for inserting &drm_gpuva
> + * entries from within dma-fence signalling critical sections it is enough to
> + * pre-allocate the &drm_gpuva structures.
> */
>
> /**
> * DOC: Split and Merge
> *
> * Besides its capability to manage and represent a GPU VA space, the
> - * &drm_gpuva_manager also provides functions to let the &drm_gpuva_manager
> - * calculate a sequence of operations to satisfy a given map or unmap request.
> + * GPU VA manager also provides functions to let the &drm_gpuvm calculate a
> + * sequence of operations to satisfy a given map or unmap request.
> *
> * Therefore the DRM GPU VA manager provides an algorithm implementing splitting
> * and merging of existent GPU VA mappings with the ones that are requested to
> @@ -76,16 +76,16 @@
> * implement Vulkan 'Sparse Memory Bindings' - drivers UAPIs often refer to this
> * as VM BIND.
> *
> - * Drivers can call drm_gpuva_sm_map() to receive a sequence of callbacks
> + * Drivers can call drm_gpuvm_sm_map() to receive a sequence of callbacks
> * containing map, unmap and remap operations for a given newly requested
> * mapping. The sequence of callbacks represents the set of operations to
> * execute in order to integrate the new mapping cleanly into the current state
> * of the GPU VA space.
> *
> * Depending on how the new GPU VA mapping intersects with the existent mappings
> - * of the GPU VA space the &drm_gpuva_fn_ops callbacks contain an arbitrary
> - * amount of unmap operations, a maximum of two remap operations and a single
> - * map operation. The caller might receive no callback at all if no operation is
> + * of the GPU VA space the &drm_gpuvm_ops callbacks contain an arbitrary amount
> + * of unmap operations, a maximum of two remap operations and a single map
> + * operation. The caller might receive no callback at all if no operation is
> * required, e.g. if the requested mapping already exists in the exact same way.
> *
> * The single map operation represents the original map operation requested by
> @@ -95,7 +95,7 @@
> * &drm_gpuva to unmap is physically contiguous with the original mapping
> * request. Optionally, if 'keep' is set, drivers may keep the actual page table
> * entries for this &drm_gpuva, adding the missing page table entries only and
> - * update the &drm_gpuva_manager's view of things accordingly.
> + * update the &drm_gpuvm's view of things accordingly.
> *
> * Drivers may do the same optimization, namely delta page table updates, also
> * for remap operations. This is possible since &drm_gpuva_op_remap consists of
> @@ -106,34 +106,34 @@
> * the beginning and one at the end of the new mapping, hence there is a
> * maximum of two remap operations.
> *
> - * Analogous to drm_gpuva_sm_map() drm_gpuva_sm_unmap() uses &drm_gpuva_fn_ops
> - * to call back into the driver in order to unmap a range of GPU VA space. The
> + * Analogous to drm_gpuvm_sm_map() drm_gpuvm_sm_unmap() uses &drm_gpuvm_ops to
> + * call back into the driver in order to unmap a range of GPU VA space. The
> * logic behind this function is way simpler though: For all existent mappings
> * enclosed by the given range unmap operations are created. For mappings which
> * are only partically located within the given range, remap operations are
> * created such that those mappings are split up and re-mapped partically.
> *
> - * As an alternative to drm_gpuva_sm_map() and drm_gpuva_sm_unmap(),
> - * drm_gpuva_sm_map_ops_create() and drm_gpuva_sm_unmap_ops_create() can be used
> + * As an alternative to drm_gpuvm_sm_map() and drm_gpuvm_sm_unmap(),
> + * drm_gpuvm_sm_map_ops_create() and drm_gpuvm_sm_unmap_ops_create() can be used
> * to directly obtain an instance of struct drm_gpuva_ops containing a list of
> * &drm_gpuva_op, which can be iterated with drm_gpuva_for_each_op(). This list
> * contains the &drm_gpuva_ops analogous to the callbacks one would receive when
> - * calling drm_gpuva_sm_map() or drm_gpuva_sm_unmap(). While this way requires
> + * calling drm_gpuvm_sm_map() or drm_gpuvm_sm_unmap(). While this way requires
> * more memory (to allocate the &drm_gpuva_ops), it provides drivers a way to
> * iterate the &drm_gpuva_op multiple times, e.g. once in a context where memory
> * allocations are possible (e.g. to allocate GPU page tables) and once in the
> * dma-fence signalling critical path.
> *
> - * To update the &drm_gpuva_manager's view of the GPU VA space
> - * drm_gpuva_insert() and drm_gpuva_remove() may be used. These functions can
> - * safely be used from &drm_gpuva_fn_ops callbacks originating from
> - * drm_gpuva_sm_map() or drm_gpuva_sm_unmap(). However, it might be more
> - * convenient to use the provided helper functions drm_gpuva_map(),
> - * drm_gpuva_remap() and drm_gpuva_unmap() instead.
> + * To update the &drm_gpuvm's view of the GPU VA space drm_gpuva_insert() and
> + * drm_gpuva_remove() may be used. These functions can safely be used from
> + * &drm_gpuvm_ops callbacks originating from drm_gpuvm_sm_map() or
> + * drm_gpuvm_sm_unmap(). However, it might be more convenient to use the
> + * provided helper functions drm_gpuva_map(), drm_gpuva_remap() and
> + * drm_gpuva_unmap() instead.
> *
> * The following diagram depicts the basic relationships of existent GPU VA
> * mappings, a newly requested mapping and the resulting mappings as implemented
> - * by drm_gpuva_sm_map() - it doesn't cover any arbitrary combinations of these.
> + * by drm_gpuvm_sm_map() - it doesn't cover any arbitrary combinations of these.
> *
> * 1) Requested mapping is identical. Replace it, but indicate the backing PTEs
> * could be kept.
> @@ -421,10 +421,10 @@
> * // Allocates a new &drm_gpuva.
> * struct drm_gpuva * driver_gpuva_alloc(void);
> *
> - * // Typically drivers would embedd the &drm_gpuva_manager and &drm_gpuva
> + * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva
> * // structure in individual driver structures and lock the dma-resv with
> * // drm_exec or similar helpers.
> - * int driver_mapping_create(struct drm_gpuva_manager *mgr,
> + * int driver_mapping_create(struct drm_gpuvm *gpuvm,
> * u64 addr, u64 range,
> * struct drm_gem_object *obj, u64 offset)
> * {
> @@ -432,7 +432,7 @@
> * struct drm_gpuva_op *op
> *
> * driver_lock_va_space();
> - * ops = drm_gpuva_sm_map_ops_create(mgr, addr, range,
> + * ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
> * obj, offset);
> * if (IS_ERR(ops))
> * return PTR_ERR(ops);
> @@ -448,7 +448,7 @@
> * // free memory and unlock
> *
> * driver_vm_map();
> - * drm_gpuva_map(mgr, va, &op->map);
> + * drm_gpuva_map(gpuvm, va, &op->map);
> * drm_gpuva_link(va);
> *
> * break;
> @@ -504,23 +504,23 @@
> * 2) Receive a callback for each &drm_gpuva_op to create a new mapping::
> *
> * struct driver_context {
> - * struct drm_gpuva_manager *mgr;
> + * struct drm_gpuvm *gpuvm;
> * struct drm_gpuva *new_va;
> * struct drm_gpuva *prev_va;
> * struct drm_gpuva *next_va;
> * };
> *
> - * // ops to pass to drm_gpuva_manager_init()
> - * static const struct drm_gpuva_fn_ops driver_gpuva_ops = {
> + * // ops to pass to drm_gpuvm_init()
> + * static const struct drm_gpuvm_ops driver_gpuvm_ops = {
> * .sm_step_map = driver_gpuva_map,
> * .sm_step_remap = driver_gpuva_remap,
> * .sm_step_unmap = driver_gpuva_unmap,
> * };
> *
> - * // Typically drivers would embedd the &drm_gpuva_manager and &drm_gpuva
> + * // Typically drivers would embedd the &drm_gpuvm and &drm_gpuva
> * // structure in individual driver structures and lock the dma-resv with
> * // drm_exec or similar helpers.
> - * int driver_mapping_create(struct drm_gpuva_manager *mgr,
> + * int driver_mapping_create(struct drm_gpuvm *gpuvm,
> * u64 addr, u64 range,
> * struct drm_gem_object *obj, u64 offset)
> * {
> @@ -529,7 +529,7 @@
> * struct drm_gpuva_op *op;
> * int ret = 0;
> *
> - * ctx.mgr = mgr;
> + * ctx.gpuvm = gpuvm;
> *
> * ctx.new_va = kzalloc(sizeof(*ctx.new_va), GFP_KERNEL);
> * ctx.prev_va = kzalloc(sizeof(*ctx.prev_va), GFP_KERNEL);
> @@ -540,7 +540,7 @@
> * }
> *
> * driver_lock_va_space();
> - * ret = drm_gpuva_sm_map(mgr, &ctx, addr, range, obj, offset);
> + * ret = drm_gpuvm_sm_map(gpuvm, &ctx, addr, range, obj, offset);
> * driver_unlock_va_space();
> *
> * out:
> @@ -554,7 +554,7 @@
> * {
> * struct driver_context *ctx = __ctx;
> *
> - * drm_gpuva_map(ctx->mgr, ctx->new_va, &op->map);
> + * drm_gpuva_map(ctx->vm, ctx->new_va, &op->map);
> *
> * drm_gpuva_link(ctx->new_va);
> *
> @@ -609,12 +609,12 @@ INTERVAL_TREE_DEFINE(struct drm_gpuva, rb.node, u64, rb.__subtree_last,
> GPUVA_START, GPUVA_LAST, static __maybe_unused,
> drm_gpuva_it)
>
> -static int __drm_gpuva_insert(struct drm_gpuva_manager *mgr,
> +static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> struct drm_gpuva *va);
> static void __drm_gpuva_remove(struct drm_gpuva *va);
>
> static bool
> -drm_gpuva_check_overflow(u64 addr, u64 range)
> +drm_gpuvm_check_overflow(u64 addr, u64 range)
> {
> u64 end;
>
> @@ -623,121 +623,121 @@ drm_gpuva_check_overflow(u64 addr, u64 range)
> }
>
> static bool
> -drm_gpuva_in_mm_range(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
> +drm_gpuvm_in_mm_range(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
> {
> u64 end = addr + range;
> - u64 mm_start = mgr->mm_start;
> - u64 mm_end = mm_start + mgr->mm_range;
> + u64 mm_start = gpuvm->mm_start;
> + u64 mm_end = mm_start + gpuvm->mm_range;
>
> return addr >= mm_start && end <= mm_end;
> }
>
> static bool
> -drm_gpuva_in_kernel_node(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
> +drm_gpuvm_in_kernel_node(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
> {
> u64 end = addr + range;
> - u64 kstart = mgr->kernel_alloc_node.va.addr;
> - u64 krange = mgr->kernel_alloc_node.va.range;
> + u64 kstart = gpuvm->kernel_alloc_node.va.addr;
> + u64 krange = gpuvm->kernel_alloc_node.va.range;
> u64 kend = kstart + krange;
>
> return krange && addr < kend && kstart < end;
> }
>
> static bool
> -drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
> +drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range)
> {
> - return !drm_gpuva_check_overflow(addr, range) &&
> - drm_gpuva_in_mm_range(mgr, addr, range) &&
> - !drm_gpuva_in_kernel_node(mgr, addr, range);
> + return !drm_gpuvm_check_overflow(addr, range) &&
> + drm_gpuvm_in_mm_range(gpuvm, addr, range) &&
> + !drm_gpuvm_in_kernel_node(gpuvm, addr, range);
> }
>
> /**
> - * drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
> - * @mgr: pointer to the &drm_gpuva_manager to initialize
> + * drm_gpuvm_init() - initialize a &drm_gpuvm
> + * @gpuvm: pointer to the &drm_gpuvm to initialize
> * @name: the name of the GPU VA space
> * @start_offset: the start offset of the GPU VA space
> * @range: the size of the GPU VA space
> * @reserve_offset: the start of the kernel reserved GPU VA area
> * @reserve_range: the size of the kernel reserved GPU VA area
> - * @ops: &drm_gpuva_fn_ops called on &drm_gpuva_sm_map / &drm_gpuva_sm_unmap
> + * @ops: &drm_gpuvm_ops called on &drm_gpuvm_sm_map / &drm_gpuvm_sm_unmap
> *
> - * The &drm_gpuva_manager must be initialized with this function before use.
> + * The &drm_gpuvm must be initialized with this function before use.
> *
> - * Note that @mgr must be cleared to 0 before calling this function. The given
> + * Note that @gpuvm must be cleared to 0 before calling this function. The given
> * &name is expected to be managed by the surrounding driver structures.
> */
> void
> -drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> - const char *name,
> - u64 start_offset, u64 range,
> - u64 reserve_offset, u64 reserve_range,
> - const struct drm_gpuva_fn_ops *ops)
> +drm_gpuvm_init(struct drm_gpuvm *gpuvm,
> + const char *name,
> + u64 start_offset, u64 range,
> + u64 reserve_offset, u64 reserve_range,
> + const struct drm_gpuvm_ops *ops)
> {
> - mgr->rb.tree = RB_ROOT_CACHED;
> - INIT_LIST_HEAD(&mgr->rb.list);
> + gpuvm->rb.tree = RB_ROOT_CACHED;
> + INIT_LIST_HEAD(&gpuvm->rb.list);
>
> - drm_gpuva_check_overflow(start_offset, range);
> - mgr->mm_start = start_offset;
> - mgr->mm_range = range;
> + drm_gpuvm_check_overflow(start_offset, range);
> + gpuvm->mm_start = start_offset;
> + gpuvm->mm_range = range;
>
> - mgr->name = name ? name : "unknown";
> - mgr->ops = ops;
> + gpuvm->name = name ? name : "unknown";
> + gpuvm->ops = ops;
>
> - memset(&mgr->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
> + memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
>
> if (reserve_range) {
> - mgr->kernel_alloc_node.va.addr = reserve_offset;
> - mgr->kernel_alloc_node.va.range = reserve_range;
> + gpuvm->kernel_alloc_node.va.addr = reserve_offset;
> + gpuvm->kernel_alloc_node.va.range = reserve_range;
>
> - if (likely(!drm_gpuva_check_overflow(reserve_offset,
> + if (likely(!drm_gpuvm_check_overflow(reserve_offset,
> reserve_range)))
> - __drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
> + __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
> }
> }
> -EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
> +EXPORT_SYMBOL_GPL(drm_gpuvm_init);
>
> /**
> - * drm_gpuva_manager_destroy() - cleanup a &drm_gpuva_manager
> - * @mgr: pointer to the &drm_gpuva_manager to clean up
> + * drm_gpuvm_destroy() - cleanup a &drm_gpuvm
> + * @gpuvm: pointer to the &drm_gpuvm to clean up
> *
> * Note that it is a bug to call this function on a manager that still
> * holds GPU VA mappings.
> */
> void
> -drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr)
> +drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
> {
> - mgr->name = NULL;
> + gpuvm->name = NULL;
>
> - if (mgr->kernel_alloc_node.va.range)
> - __drm_gpuva_remove(&mgr->kernel_alloc_node);
> + if (gpuvm->kernel_alloc_node.va.range)
> + __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
>
> - WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root),
> + WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
> "GPUVA tree is not empty, potentially leaking memory.");
> }
> -EXPORT_SYMBOL_GPL(drm_gpuva_manager_destroy);
> +EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
>
> static int
> -__drm_gpuva_insert(struct drm_gpuva_manager *mgr,
> +__drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> struct drm_gpuva *va)
> {
> struct rb_node *node;
> struct list_head *head;
>
> - if (drm_gpuva_it_iter_first(&mgr->rb.tree,
> + if (drm_gpuva_it_iter_first(&gpuvm->rb.tree,
> GPUVA_START(va),
> GPUVA_LAST(va)))
> return -EEXIST;
>
> - va->mgr = mgr;
> + va->vm = gpuvm;
>
> - drm_gpuva_it_insert(va, &mgr->rb.tree);
> + drm_gpuva_it_insert(va, &gpuvm->rb.tree);
>
> node = rb_prev(&va->rb.node);
> if (node)
> head = &(to_drm_gpuva(node))->rb.entry;
> else
> - head = &mgr->rb.list;
> + head = &gpuvm->rb.list;
>
> list_add(&va->rb.entry, head);
>
> @@ -746,36 +746,36 @@ __drm_gpuva_insert(struct drm_gpuva_manager *mgr,
>
> /**
> * drm_gpuva_insert() - insert a &drm_gpuva
> - * @mgr: the &drm_gpuva_manager to insert the &drm_gpuva in
> + * @gpuvm: the &drm_gpuvm to insert the &drm_gpuva in
> * @va: the &drm_gpuva to insert
> *
> * Insert a &drm_gpuva with a given address and range into a
> - * &drm_gpuva_manager.
> + * &drm_gpuvm.
> *
> * It is safe to use this function using the safe versions of iterating the GPU
> - * VA space, such as drm_gpuva_for_each_va_safe() and
> - * drm_gpuva_for_each_va_range_safe().
> + * VA space, such as drm_gpuvm_for_each_va_safe() and
> + * drm_gpuvm_for_each_va_range_safe().
> *
> * Returns: 0 on success, negative error code on failure.
> */
> int
> -drm_gpuva_insert(struct drm_gpuva_manager *mgr,
> +drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> struct drm_gpuva *va)
> {
> u64 addr = va->va.addr;
> u64 range = va->va.range;
>
> - if (unlikely(!drm_gpuva_range_valid(mgr, addr, range)))
> + if (unlikely(!drm_gpuva_range_valid(gpuvm, addr, range)))
> return -EINVAL;
>
> - return __drm_gpuva_insert(mgr, va);
> + return __drm_gpuva_insert(gpuvm, va);
> }
> EXPORT_SYMBOL_GPL(drm_gpuva_insert);
>
> static void
> __drm_gpuva_remove(struct drm_gpuva *va)
> {
> - drm_gpuva_it_remove(va, &va->mgr->rb.tree);
> + drm_gpuva_it_remove(va, &va->vm->rb.tree);
> list_del_init(&va->rb.entry);
> }
>
> @@ -786,15 +786,15 @@ __drm_gpuva_remove(struct drm_gpuva *va)
> * This removes the given &va from the underlaying tree.
> *
> * It is safe to use this function using the safe versions of iterating the GPU
> - * VA space, such as drm_gpuva_for_each_va_safe() and
> - * drm_gpuva_for_each_va_range_safe().
> + * VA space, such as drm_gpuvm_for_each_va_safe() and
> + * drm_gpuvm_for_each_va_range_safe().
> */
> void
> drm_gpuva_remove(struct drm_gpuva *va)
> {
> - struct drm_gpuva_manager *mgr = va->mgr;
> + struct drm_gpuvm *gpuvm = va->vm;
>
> - if (unlikely(va == &mgr->kernel_alloc_node)) {
> + if (unlikely(va == &gpuvm->kernel_alloc_node)) {
> WARN(1, "Can't destroy kernel reserved node.\n");
> return;
> }
> @@ -853,37 +853,37 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
>
> /**
> * drm_gpuva_find_first() - find the first &drm_gpuva in the given range
> - * @mgr: the &drm_gpuva_manager to search in
> + * @gpuvm: the &drm_gpuvm to search in
> * @addr: the &drm_gpuvas address
> * @range: the &drm_gpuvas range
> *
> * Returns: the first &drm_gpuva within the given range
> */
> struct drm_gpuva *
> -drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
> +drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range)
> {
> u64 last = addr + range - 1;
>
> - return drm_gpuva_it_iter_first(&mgr->rb.tree, addr, last);
> + return drm_gpuva_it_iter_first(&gpuvm->rb.tree, addr, last);
> }
> EXPORT_SYMBOL_GPL(drm_gpuva_find_first);
>
> /**
> * drm_gpuva_find() - find a &drm_gpuva
> - * @mgr: the &drm_gpuva_manager to search in
> + * @gpuvm: the &drm_gpuvm to search in
> * @addr: the &drm_gpuvas address
> * @range: the &drm_gpuvas range
> *
> * Returns: the &drm_gpuva at a given &addr and with a given &range
> */
> struct drm_gpuva *
> -drm_gpuva_find(struct drm_gpuva_manager *mgr,
> +drm_gpuva_find(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range)
> {
> struct drm_gpuva *va;
>
> - va = drm_gpuva_find_first(mgr, addr, range);
> + va = drm_gpuva_find_first(gpuvm, addr, range);
> if (!va)
> goto out;
>
> @@ -900,7 +900,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find);
>
> /**
> * drm_gpuva_find_prev() - find the &drm_gpuva before the given address
> - * @mgr: the &drm_gpuva_manager to search in
> + * @gpuvm: the &drm_gpuvm to search in
> * @start: the given GPU VA's start address
> *
> * Find the adjacent &drm_gpuva before the GPU VA with given &start address.
> @@ -911,18 +911,18 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find);
> * Returns: a pointer to the found &drm_gpuva or NULL if none was found
> */
> struct drm_gpuva *
> -drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start)
> +drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start)
> {
> - if (!drm_gpuva_range_valid(mgr, start - 1, 1))
> + if (!drm_gpuva_range_valid(gpuvm, start - 1, 1))
> return NULL;
>
> - return drm_gpuva_it_iter_first(&mgr->rb.tree, start - 1, start);
> + return drm_gpuva_it_iter_first(&gpuvm->rb.tree, start - 1, start);
> }
> EXPORT_SYMBOL_GPL(drm_gpuva_find_prev);
>
> /**
> * drm_gpuva_find_next() - find the &drm_gpuva after the given address
> - * @mgr: the &drm_gpuva_manager to search in
> + * @gpuvm: the &drm_gpuvm to search in
> * @end: the given GPU VA's end address
> *
> * Find the adjacent &drm_gpuva after the GPU VA with given &end address.
> @@ -933,47 +933,47 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find_prev);
> * Returns: a pointer to the found &drm_gpuva or NULL if none was found
> */
> struct drm_gpuva *
> -drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end)
> +drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end)
> {
> - if (!drm_gpuva_range_valid(mgr, end, 1))
> + if (!drm_gpuva_range_valid(gpuvm, end, 1))
> return NULL;
>
> - return drm_gpuva_it_iter_first(&mgr->rb.tree, end, end + 1);
> + return drm_gpuva_it_iter_first(&gpuvm->rb.tree, end, end + 1);
> }
> EXPORT_SYMBOL_GPL(drm_gpuva_find_next);
>
> /**
> * drm_gpuva_interval_empty() - indicate whether a given interval of the VA space
> * is empty
> - * @mgr: the &drm_gpuva_manager to check the range for
> + * @gpuvm: the &drm_gpuvm to check the range for
> * @addr: the start address of the range
> * @range: the range of the interval
> *
> * Returns: true if the interval is empty, false otherwise
> */
> bool
> -drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
> +drm_gpuva_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
> {
> - return !drm_gpuva_find_first(mgr, addr, range);
> + return !drm_gpuva_find_first(gpuvm, addr, range);
> }
> EXPORT_SYMBOL_GPL(drm_gpuva_interval_empty);
>
> /**
> * drm_gpuva_map() - helper to insert a &drm_gpuva according to a
> * &drm_gpuva_op_map
> - * @mgr: the &drm_gpuva_manager
> + * @gpuvm: the &drm_gpuvm
> * @va: the &drm_gpuva to insert
> * @op: the &drm_gpuva_op_map to initialize @va with
> *
> - * Initializes the @va from the @op and inserts it into the given @mgr.
> + * Initializes the @va from the @op and inserts it into the given @gpuvm.
> */
> void
> -drm_gpuva_map(struct drm_gpuva_manager *mgr,
> +drm_gpuva_map(struct drm_gpuvm *gpuvm,
> struct drm_gpuva *va,
> struct drm_gpuva_op_map *op)
> {
> drm_gpuva_init_from_op(va, op);
> - drm_gpuva_insert(mgr, va);
> + drm_gpuva_insert(gpuvm, va);
> }
> EXPORT_SYMBOL_GPL(drm_gpuva_map);
>
> @@ -993,18 +993,18 @@ drm_gpuva_remap(struct drm_gpuva *prev,
> struct drm_gpuva_op_remap *op)
> {
> struct drm_gpuva *curr = op->unmap->va;
> - struct drm_gpuva_manager *mgr = curr->mgr;
> + struct drm_gpuvm *gpuvm = curr->vm;
>
> drm_gpuva_remove(curr);
>
> if (op->prev) {
> drm_gpuva_init_from_op(prev, op->prev);
> - drm_gpuva_insert(mgr, prev);
> + drm_gpuva_insert(gpuvm, prev);
> }
>
> if (op->next) {
> drm_gpuva_init_from_op(next, op->next);
> - drm_gpuva_insert(mgr, next);
> + drm_gpuva_insert(gpuvm, next);
> }
> }
> EXPORT_SYMBOL_GPL(drm_gpuva_remap);
> @@ -1024,7 +1024,7 @@ drm_gpuva_unmap(struct drm_gpuva_op_unmap *op)
> EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
>
> static int
> -op_map_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> +op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
> u64 addr, u64 range,
> struct drm_gem_object *obj, u64 offset)
> {
> @@ -1040,7 +1040,7 @@ op_map_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> }
>
> static int
> -op_remap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> +op_remap_cb(const struct drm_gpuvm_ops *fn, void *priv,
> struct drm_gpuva_op_map *prev,
> struct drm_gpuva_op_map *next,
> struct drm_gpuva_op_unmap *unmap)
> @@ -1058,7 +1058,7 @@ op_remap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> }
>
> static int
> -op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> +op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
> struct drm_gpuva *va, bool merge)
> {
> struct drm_gpuva_op op = {};
> @@ -1071,8 +1071,8 @@ op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> }
>
> static int
> -__drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
> - const struct drm_gpuva_fn_ops *ops, void *priv,
> +__drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> + const struct drm_gpuvm_ops *ops, void *priv,
> u64 req_addr, u64 req_range,
> struct drm_gem_object *req_obj, u64 req_offset)
> {
> @@ -1080,10 +1080,10 @@ __drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
> u64 req_end = req_addr + req_range;
> int ret;
>
> - if (unlikely(!drm_gpuva_range_valid(mgr, req_addr, req_range)))
> + if (unlikely(!drm_gpuva_range_valid(gpuvm, req_addr, req_range)))
> return -EINVAL;
>
> - drm_gpuva_for_each_va_range_safe(va, next, mgr, req_addr, req_end) {
> + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
> struct drm_gem_object *obj = va->gem.obj;
> u64 offset = va->gem.offset;
> u64 addr = va->va.addr;
> @@ -1215,18 +1215,18 @@ __drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
> }
>
> static int
> -__drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
> - const struct drm_gpuva_fn_ops *ops, void *priv,
> +__drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
> + const struct drm_gpuvm_ops *ops, void *priv,
> u64 req_addr, u64 req_range)
> {
> struct drm_gpuva *va, *next;
> u64 req_end = req_addr + req_range;
> int ret;
>
> - if (unlikely(!drm_gpuva_range_valid(mgr, req_addr, req_range)))
> + if (unlikely(!drm_gpuva_range_valid(gpuvm, req_addr, req_range)))
> return -EINVAL;
>
> - drm_gpuva_for_each_va_range_safe(va, next, mgr, req_addr, req_end) {
> + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
> struct drm_gpuva_op_map prev = {}, next = {};
> bool prev_split = false, next_split = false;
> struct drm_gem_object *obj = va->gem.obj;
> @@ -1273,8 +1273,8 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
> }
>
> /**
> - * drm_gpuva_sm_map() - creates the &drm_gpuva_op split/merge steps
> - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> + * drm_gpuvm_sm_map() - creates the &drm_gpuva_op split/merge steps
> + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @req_addr: the start address of the new mapping
> * @req_range: the range of the new mapping
> * @req_obj: the &drm_gem_object to map
> @@ -1282,15 +1282,15 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
> * @priv: pointer to a driver private data structure
> *
> * This function iterates the given range of the GPU VA space. It utilizes the
> - * &drm_gpuva_fn_ops to call back into the driver providing the split and merge
> + * &drm_gpuvm_ops to call back into the driver providing the split and merge
> * steps.
> *
> * Drivers may use these callbacks to update the GPU VA space right away within
> * the callback. In case the driver decides to copy and store the operations for
> - * later processing neither this function nor &drm_gpuva_sm_unmap is allowed to
> - * be called before the &drm_gpuva_manager's view of the GPU VA space was
> + * later processing neither this function nor &drm_gpuvm_sm_unmap is allowed to
> + * be called before the &drm_gpuvm's view of the GPU VA space was
> * updated with the previous set of operations. To update the
> - * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
> + * &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
> * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
> * used.
> *
> @@ -1305,39 +1305,39 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
> * Returns: 0 on success or a negative error code
> */
> int
> -drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
> +drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> u64 req_addr, u64 req_range,
> struct drm_gem_object *req_obj, u64 req_offset)
> {
> - const struct drm_gpuva_fn_ops *ops = mgr->ops;
> + const struct drm_gpuvm_ops *ops = gpuvm->ops;
>
> if (unlikely(!(ops && ops->sm_step_map &&
> ops->sm_step_remap &&
> ops->sm_step_unmap)))
> return -EINVAL;
>
> - return __drm_gpuva_sm_map(mgr, ops, priv,
> + return __drm_gpuvm_sm_map(gpuvm, ops, priv,
> req_addr, req_range,
> req_obj, req_offset);
> }
> -EXPORT_SYMBOL_GPL(drm_gpuva_sm_map);
> +EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
>
> /**
> - * drm_gpuva_sm_unmap() - creates the &drm_gpuva_ops to split on unmap
> - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> + * drm_gpuvm_sm_unmap() - creates the &drm_gpuva_ops to split on unmap
> + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @priv: pointer to a driver private data structure
> * @req_addr: the start address of the range to unmap
> * @req_range: the range of the mappings to unmap
> *
> * This function iterates the given range of the GPU VA space. It utilizes the
> - * &drm_gpuva_fn_ops to call back into the driver providing the operations to
> + * &drm_gpuvm_ops to call back into the driver providing the operations to
> * unmap and, if required, split existent mappings.
> *
> * Drivers may use these callbacks to update the GPU VA space right away within
> * the callback. In case the driver decides to copy and store the operations for
> - * later processing neither this function nor &drm_gpuva_sm_map is allowed to be
> - * called before the &drm_gpuva_manager's view of the GPU VA space was updated
> - * with the previous set of operations. To update the &drm_gpuva_manager's view
> + * later processing neither this function nor &drm_gpuvm_sm_map is allowed to be
> + * called before the &drm_gpuvm's view of the GPU VA space was updated
> + * with the previous set of operations. To update the &drm_gpuvm's view
> * of the GPU VA space drm_gpuva_insert(), drm_gpuva_destroy_locked() and/or
> * drm_gpuva_destroy_unlocked() should be used.
> *
> @@ -1350,24 +1350,24 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map);
> * Returns: 0 on success or a negative error code
> */
> int
> -drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
> +drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
> u64 req_addr, u64 req_range)
> {
> - const struct drm_gpuva_fn_ops *ops = mgr->ops;
> + const struct drm_gpuvm_ops *ops = gpuvm->ops;
>
> if (unlikely(!(ops && ops->sm_step_remap &&
> ops->sm_step_unmap)))
> return -EINVAL;
>
> - return __drm_gpuva_sm_unmap(mgr, ops, priv,
> + return __drm_gpuvm_sm_unmap(gpuvm, ops, priv,
> req_addr, req_range);
> }
> -EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap);
> +EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap);
>
> static struct drm_gpuva_op *
> -gpuva_op_alloc(struct drm_gpuva_manager *mgr)
> +gpuva_op_alloc(struct drm_gpuvm *gpuvm)
> {
> - const struct drm_gpuva_fn_ops *fn = mgr->ops;
> + const struct drm_gpuvm_ops *fn = gpuvm->ops;
> struct drm_gpuva_op *op;
>
> if (fn && fn->op_alloc)
> @@ -1382,10 +1382,10 @@ gpuva_op_alloc(struct drm_gpuva_manager *mgr)
> }
>
> static void
> -gpuva_op_free(struct drm_gpuva_manager *mgr,
> +gpuva_op_free(struct drm_gpuvm *gpuvm,
> struct drm_gpuva_op *op)
> {
> - const struct drm_gpuva_fn_ops *fn = mgr->ops;
> + const struct drm_gpuvm_ops *fn = gpuvm->ops;
>
> if (fn && fn->op_free)
> fn->op_free(op);
> @@ -1398,14 +1398,14 @@ drm_gpuva_sm_step(struct drm_gpuva_op *__op,
> void *priv)
> {
> struct {
> - struct drm_gpuva_manager *mgr;
> + struct drm_gpuvm *vm;
> struct drm_gpuva_ops *ops;
> } *args = priv;
> - struct drm_gpuva_manager *mgr = args->mgr;
> + struct drm_gpuvm *gpuvm = args->vm;
> struct drm_gpuva_ops *ops = args->ops;
> struct drm_gpuva_op *op;
>
> - op = gpuva_op_alloc(mgr);
> + op = gpuva_op_alloc(gpuvm);
> if (unlikely(!op))
> goto err;
>
> @@ -1444,20 +1444,20 @@ drm_gpuva_sm_step(struct drm_gpuva_op *__op,
> err_free_prev:
> kfree(op->remap.prev);
> err_free_op:
> - gpuva_op_free(mgr, op);
> + gpuva_op_free(gpuvm, op);
> err:
> return -ENOMEM;
> }
>
> -static const struct drm_gpuva_fn_ops gpuva_list_ops = {
> +static const struct drm_gpuvm_ops gpuvm_list_ops = {
> .sm_step_map = drm_gpuva_sm_step,
> .sm_step_remap = drm_gpuva_sm_step,
> .sm_step_unmap = drm_gpuva_sm_step,
> };
>
> /**
> - * drm_gpuva_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
> - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> + * drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
> + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @req_addr: the start address of the new mapping
> * @req_range: the range of the new mapping
> * @req_obj: the &drm_gem_object to map
> @@ -1476,9 +1476,9 @@ static const struct drm_gpuva_fn_ops gpuva_list_ops = {
> * map operation requested by the caller.
> *
> * Note that before calling this function again with another mapping request it
> - * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The
> + * is necessary to update the &drm_gpuvm's view of the GPU VA space. The
> * previously obtained operations must be either processed or abandoned. To
> - * update the &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
> + * update the &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
> * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
> * used.
> *
> @@ -1488,13 +1488,13 @@ static const struct drm_gpuva_fn_ops gpuva_list_ops = {
> * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
> */
> struct drm_gpuva_ops *
> -drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
> +drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> u64 req_addr, u64 req_range,
> struct drm_gem_object *req_obj, u64 req_offset)
> {
> struct drm_gpuva_ops *ops;
> struct {
> - struct drm_gpuva_manager *mgr;
> + struct drm_gpuvm *vm;
> struct drm_gpuva_ops *ops;
> } args;
> int ret;
> @@ -1505,10 +1505,10 @@ drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
>
> INIT_LIST_HEAD(&ops->list);
>
> - args.mgr = mgr;
> + args.vm = gpuvm;
> args.ops = ops;
>
> - ret = __drm_gpuva_sm_map(mgr, &gpuva_list_ops, &args,
> + ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
> req_addr, req_range,
> req_obj, req_offset);
> if (ret)
> @@ -1517,15 +1517,15 @@ drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
> return ops;
>
> err_free_ops:
> - drm_gpuva_ops_free(mgr, ops);
> + drm_gpuva_ops_free(gpuvm, ops);
> return ERR_PTR(ret);
> }
> -EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create);
> +EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_ops_create);
>
> /**
> - * drm_gpuva_sm_unmap_ops_create() - creates the &drm_gpuva_ops to split on
> + * drm_gpuvm_sm_unmap_ops_create() - creates the &drm_gpuva_ops to split on
> * unmap
> - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @req_addr: the start address of the range to unmap
> * @req_range: the range of the mappings to unmap
> *
> @@ -1540,9 +1540,9 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create);
> * remap operations.
> *
> * Note that before calling this function again with another range to unmap it
> - * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The
> + * is necessary to update the &drm_gpuvm's view of the GPU VA space. The
> * previously obtained operations must be processed or abandoned. To update the
> - * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
> + * &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
> * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
> * used.
> *
> @@ -1552,12 +1552,12 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create);
> * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
> */
> struct drm_gpuva_ops *
> -drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
> +drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
> u64 req_addr, u64 req_range)
> {
> struct drm_gpuva_ops *ops;
> struct {
> - struct drm_gpuva_manager *mgr;
> + struct drm_gpuvm *vm;
> struct drm_gpuva_ops *ops;
> } args;
> int ret;
> @@ -1568,10 +1568,10 @@ drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
>
> INIT_LIST_HEAD(&ops->list);
>
> - args.mgr = mgr;
> + args.vm = gpuvm;
> args.ops = ops;
>
> - ret = __drm_gpuva_sm_unmap(mgr, &gpuva_list_ops, &args,
> + ret = __drm_gpuvm_sm_unmap(gpuvm, &gpuvm_list_ops, &args,
> req_addr, req_range);
> if (ret)
> goto err_free_ops;
> @@ -1579,14 +1579,14 @@ drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
> return ops;
>
> err_free_ops:
> - drm_gpuva_ops_free(mgr, ops);
> + drm_gpuva_ops_free(gpuvm, ops);
> return ERR_PTR(ret);
> }
> -EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap_ops_create);
> +EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap_ops_create);
>
> /**
> - * drm_gpuva_prefetch_ops_create() - creates the &drm_gpuva_ops to prefetch
> - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> + * drm_gpuvm_prefetch_ops_create() - creates the &drm_gpuva_ops to prefetch
> + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @addr: the start address of the range to prefetch
> * @range: the range of the mappings to prefetch
> *
> @@ -1603,7 +1603,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap_ops_create);
> * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
> */
> struct drm_gpuva_ops *
> -drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
> +drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range)
> {
> struct drm_gpuva_ops *ops;
> @@ -1618,8 +1618,8 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
>
> INIT_LIST_HEAD(&ops->list);
>
> - drm_gpuva_for_each_va_range(va, mgr, addr, end) {
> - op = gpuva_op_alloc(mgr);
> + drm_gpuvm_for_each_va_range(va, gpuvm, addr, end) {
> + op = gpuva_op_alloc(gpuvm);
> if (!op) {
> ret = -ENOMEM;
> goto err_free_ops;
> @@ -1633,14 +1633,14 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
> return ops;
>
> err_free_ops:
> - drm_gpuva_ops_free(mgr, ops);
> + drm_gpuva_ops_free(gpuvm, ops);
> return ERR_PTR(ret);
> }
> -EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create);
> +EXPORT_SYMBOL_GPL(drm_gpuvm_prefetch_ops_create);
>
> /**
> - * drm_gpuva_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM
> - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> + * drm_gpuvm_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM
> + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @obj: the &drm_gem_object to unmap
> *
> * This function creates a list of operations to perform unmapping for every
> @@ -1658,7 +1658,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create);
> * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
> */
> struct drm_gpuva_ops *
> -drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
> +drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
> struct drm_gem_object *obj)
> {
> struct drm_gpuva_ops *ops;
> @@ -1675,7 +1675,7 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
> INIT_LIST_HEAD(&ops->list);
>
> drm_gem_for_each_gpuva(va, obj) {
> - op = gpuva_op_alloc(mgr);
> + op = gpuva_op_alloc(gpuvm);
> if (!op) {
> ret = -ENOMEM;
> goto err_free_ops;
> @@ -1689,21 +1689,21 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
> return ops;
>
> err_free_ops:
> - drm_gpuva_ops_free(mgr, ops);
> + drm_gpuva_ops_free(gpuvm, ops);
> return ERR_PTR(ret);
> }
> -EXPORT_SYMBOL_GPL(drm_gpuva_gem_unmap_ops_create);
> +EXPORT_SYMBOL_GPL(drm_gpuvm_gem_unmap_ops_create);
>
> /**
> * drm_gpuva_ops_free() - free the given &drm_gpuva_ops
> - * @mgr: the &drm_gpuva_manager the ops were created for
> + * @gpuvm: the &drm_gpuvm the ops were created for
> * @ops: the &drm_gpuva_ops to free
> *
> * Frees the given &drm_gpuva_ops structure including all the ops associated
> * with it.
> */
> void
> -drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
> +drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
> struct drm_gpuva_ops *ops)
> {
> struct drm_gpuva_op *op, *next;
> @@ -1717,7 +1717,7 @@ drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
> kfree(op->remap.unmap);
> }
>
> - gpuva_op_free(mgr, op);
> + gpuva_op_free(gpuvm, op);
> }
>
> kfree(ops);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c
> index a90c4cd8cbb2..c001952cd678 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_exec.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_exec.c
> @@ -106,7 +106,7 @@ nouveau_exec_job_submit(struct nouveau_job *job)
> drm_exec_until_all_locked(exec) {
> struct drm_gpuva *va;
>
> - drm_gpuva_for_each_va(va, &uvmm->umgr) {
> + drm_gpuvm_for_each_va(va, &uvmm->umgr) {
> if (unlikely(va == &uvmm->umgr.kernel_alloc_node))
> continue;
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> index aae780e4a4aa..c750072cb268 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> @@ -444,7 +444,7 @@ op_map_prepare_unwind(struct nouveau_uvma *uvma)
> static void
> op_unmap_prepare_unwind(struct drm_gpuva *va)
> {
> - drm_gpuva_insert(va->mgr, va);
> + drm_gpuva_insert(va->vm, va);
> }
>
> static void
> @@ -1194,7 +1194,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> goto unwind_continue;
> }
>
> - op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr,
> + op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
> op->va.addr,
> op->va.range);
> if (IS_ERR(op->ops)) {
> @@ -1240,7 +1240,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> }
> }
>
> - op->ops = drm_gpuva_sm_map_ops_create(&uvmm->umgr,
> + op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->umgr,
> op->va.addr,
> op->va.range,
> op->gem.obj,
> @@ -1264,7 +1264,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> break;
> }
> case OP_UNMAP:
> - op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr,
> + op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
> op->va.addr,
> op->va.range);
> if (IS_ERR(op->ops)) {
> @@ -1836,11 +1836,11 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
> uvmm->kernel_managed_addr = kernel_managed_addr;
> uvmm->kernel_managed_size = kernel_managed_size;
>
> - drm_gpuva_manager_init(&uvmm->umgr, cli->name,
> - NOUVEAU_VA_SPACE_START,
> - NOUVEAU_VA_SPACE_END,
> - kernel_managed_addr, kernel_managed_size,
> - NULL);
> + drm_gpuvm_init(&uvmm->umgr, cli->name,
> + NOUVEAU_VA_SPACE_START,
> + NOUVEAU_VA_SPACE_END,
> + kernel_managed_addr, kernel_managed_size,
> + NULL);
>
> ret = nvif_vmm_ctor(&cli->mmu, "uvmm",
> cli->vmm.vmm.object.oclass, RAW,
> @@ -1855,7 +1855,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
> return 0;
>
> out_free_gpuva_mgr:
> - drm_gpuva_manager_destroy(&uvmm->umgr);
> + drm_gpuvm_destroy(&uvmm->umgr);
> out_unlock:
> mutex_unlock(&cli->mutex);
> return ret;
> @@ -1877,7 +1877,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
> wait_event(entity->job.wq, list_empty(&entity->job.list.head));
>
> nouveau_uvmm_lock(uvmm);
> - drm_gpuva_for_each_va_safe(va, next, &uvmm->umgr) {
> + drm_gpuvm_for_each_va_safe(va, next, &uvmm->umgr) {
> struct nouveau_uvma *uvma = uvma_from_va(va);
> struct drm_gem_object *obj = va->gem.obj;
>
> @@ -1910,7 +1910,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
>
> mutex_lock(&cli->mutex);
> nouveau_vmm_fini(&uvmm->vmm);
> - drm_gpuva_manager_destroy(&uvmm->umgr);
> + drm_gpuvm_destroy(&uvmm->umgr);
> mutex_unlock(&cli->mutex);
>
> dma_resv_fini(&uvmm->resv);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
> index fc7f6fd2a4e1..e96c9919d1bd 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
> @@ -3,13 +3,13 @@
> #ifndef __NOUVEAU_UVMM_H__
> #define __NOUVEAU_UVMM_H__
>
> -#include <drm/drm_gpuva_mgr.h>
> +#include <drm/drm_gpuvm.h>
>
> #include "nouveau_drv.h"
>
> struct nouveau_uvmm {
> struct nouveau_vmm vmm;
> - struct drm_gpuva_manager umgr;
> + struct drm_gpuvm umgr;
> struct maple_tree region_mt;
> struct mutex mutex;
> struct dma_resv resv;
> @@ -44,7 +44,7 @@ struct nouveau_uvma {
> #define uvmm_from_mgr(x) container_of((x), struct nouveau_uvmm, umgr)
> #define uvma_from_va(x) container_of((x), struct nouveau_uvma, va)
>
> -#define to_uvmm(x) uvmm_from_mgr((x)->va.mgr)
> +#define to_uvmm(x) uvmm_from_mgr((x)->va.vm)
>
> struct nouveau_uvmm_bind_job {
> struct nouveau_job base;
> diff --git a/include/drm/drm_debugfs.h b/include/drm/drm_debugfs.h
> index 3bba169f9bae..cf06cee4343f 100644
> --- a/include/drm/drm_debugfs.h
> +++ b/include/drm/drm_debugfs.h
> @@ -35,7 +35,7 @@
> #include <linux/types.h>
> #include <linux/seq_file.h>
>
> -#include <drm/drm_gpuva_mgr.h>
> +#include <drm/drm_gpuvm.h>
>
> /**
> * DRM_DEBUGFS_GPUVA_INFO - &drm_info_list entry to dump a GPU VA space
> @@ -152,7 +152,7 @@ void drm_debugfs_add_files(struct drm_device *dev,
> const struct drm_debugfs_info *files, int count);
>
> int drm_debugfs_gpuva_info(struct seq_file *m,
> - struct drm_gpuva_manager *mgr);
> + struct drm_gpuvm *gpuvm);
> #else
> static inline void drm_debugfs_create_files(const struct drm_info_list *files,
> int count, struct dentry *root,
> @@ -177,7 +177,7 @@ static inline void drm_debugfs_add_files(struct drm_device *dev,
> {}
>
> static inline int drm_debugfs_gpuva_info(struct seq_file *m,
> - struct drm_gpuva_manager *mgr)
> + struct drm_gpuvm *gpuvm)
> {
> return 0;
> }
> diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuvm.h
> similarity index 78%
> rename from include/drm/drm_gpuva_mgr.h
> rename to include/drm/drm_gpuvm.h
> index ed8d50200cc3..0e802676e0a9 100644
> --- a/include/drm/drm_gpuva_mgr.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -1,7 +1,7 @@
> /* SPDX-License-Identifier: GPL-2.0-only */
>
> -#ifndef __DRM_GPUVA_MGR_H__
> -#define __DRM_GPUVA_MGR_H__
> +#ifndef __DRM_GPUVM_H__
> +#define __DRM_GPUVM_H__
>
> /*
> * Copyright (c) 2022 Red Hat.
> @@ -31,8 +31,8 @@
>
> #include <drm/drm_gem.h>
>
> -struct drm_gpuva_manager;
> -struct drm_gpuva_fn_ops;
> +struct drm_gpuvm;
> +struct drm_gpuvm_ops;
>
> /**
> * enum drm_gpuva_flags - flags for struct drm_gpuva
> @@ -62,15 +62,15 @@ enum drm_gpuva_flags {
> * struct drm_gpuva - structure to track a GPU VA mapping
> *
> * This structure represents a GPU VA mapping and is associated with a
> - * &drm_gpuva_manager.
> + * &drm_gpuvm.
> *
> * Typically, this structure is embedded in bigger driver structures.
> */
> struct drm_gpuva {
> /**
> - * @mgr: the &drm_gpuva_manager this object is associated with
> + * @vm: the &drm_gpuvm this object is associated with
> */
> - struct drm_gpuva_manager *mgr;
> + struct drm_gpuvm *vm;
>
> /**
> * @flags: the &drm_gpuva_flags for this mapping
> @@ -137,20 +137,20 @@ struct drm_gpuva {
> } rb;
> };
>
> -int drm_gpuva_insert(struct drm_gpuva_manager *mgr, struct drm_gpuva *va);
> +int drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va);
> void drm_gpuva_remove(struct drm_gpuva *va);
>
> void drm_gpuva_link(struct drm_gpuva *va);
> void drm_gpuva_unlink(struct drm_gpuva *va);
>
> -struct drm_gpuva *drm_gpuva_find(struct drm_gpuva_manager *mgr,
> +struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range);
> -struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
> +struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range);
> -struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start);
> -struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end);
> +struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start);
> +struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end);
>
> -bool drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range);
> +bool drm_gpuva_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range);
>
> static inline void drm_gpuva_init(struct drm_gpuva *va, u64 addr, u64 range,
> struct drm_gem_object *obj, u64 offset)
> @@ -186,7 +186,7 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va)
> }
>
> /**
> - * struct drm_gpuva_manager - DRM GPU VA Manager
> + * struct drm_gpuvm - DRM GPU VA Manager
> *
> * The DRM GPU VA Manager keeps track of a GPU's virtual address space by using
> * &maple_tree structures. Typically, this structure is embedded in bigger
> @@ -197,7 +197,7 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va)
> *
> * There should be one manager instance per GPU virtual address space.
> */
> -struct drm_gpuva_manager {
> +struct drm_gpuvm {
> /**
> * @name: the name of the DRM GPU VA space
> */
> @@ -237,100 +237,99 @@ struct drm_gpuva_manager {
> struct drm_gpuva kernel_alloc_node;
>
> /**
> - * @ops: &drm_gpuva_fn_ops providing the split/merge steps to drivers
> + * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
> */
> - const struct drm_gpuva_fn_ops *ops;
> + const struct drm_gpuvm_ops *ops;
> };
>
> -void drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> - const char *name,
> - u64 start_offset, u64 range,
> - u64 reserve_offset, u64 reserve_range,
> - const struct drm_gpuva_fn_ops *ops);
> -void drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr);
> +void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
> + u64 start_offset, u64 range,
> + u64 reserve_offset, u64 reserve_range,
> + const struct drm_gpuvm_ops *ops);
> +void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
>
> static inline struct drm_gpuva *
> __drm_gpuva_next(struct drm_gpuva *va)
> {
> - if (va && !list_is_last(&va->rb.entry, &va->mgr->rb.list))
> + if (va && !list_is_last(&va->rb.entry, &va->vm->rb.list))
> return list_next_entry(va, rb.entry);
>
> return NULL;
> }
>
> /**
> - * drm_gpuva_for_each_va_range() - iterate over a range of &drm_gpuvas
> + * drm_gpuvm_for_each_va_range() - iterate over a range of &drm_gpuvas
> * @va__: &drm_gpuva structure to assign to in each iteration step
> - * @mgr__: &drm_gpuva_manager to walk over
> + * @gpuvm__: &drm_gpuvm to walk over
> * @start__: starting offset, the first gpuva will overlap this
> * @end__: ending offset, the last gpuva will start before this (but may
> * overlap)
> *
> - * This iterator walks over all &drm_gpuvas in the &drm_gpuva_manager that lie
> + * This iterator walks over all &drm_gpuvas in the &drm_gpuvm that lie
> * between @start__ and @end__. It is implemented similarly to list_for_each(),
> - * but is using the &drm_gpuva_manager's internal interval tree to accelerate
> + * but is using the &drm_gpuvm's internal interval tree to accelerate
> * the search for the starting &drm_gpuva, and hence isn't safe against removal
> * of elements. It assumes that @end__ is within (or is the upper limit of) the
> - * &drm_gpuva_manager. This iterator does not skip over the &drm_gpuva_manager's
> + * &drm_gpuvm. This iterator does not skip over the &drm_gpuvm's
> * @kernel_alloc_node.
> */
> -#define drm_gpuva_for_each_va_range(va__, mgr__, start__, end__) \
> - for (va__ = drm_gpuva_find_first((mgr__), (start__), (end__) - (start__)); \
> +#define drm_gpuvm_for_each_va_range(va__, gpuvm__, start__, end__) \
> + for (va__ = drm_gpuva_find_first((gpuvm__), (start__), (end__) - (start__)); \
> va__ && (va__->va.addr < (end__)); \
> va__ = __drm_gpuva_next(va__))
>
> /**
> - * drm_gpuva_for_each_va_range_safe() - safely iterate over a range of
> + * drm_gpuvm_for_each_va_range_safe() - safely iterate over a range of
> * &drm_gpuvas
> * @va__: &drm_gpuva to assign to in each iteration step
> * @next__: another &drm_gpuva to use as temporary storage
> - * @mgr__: &drm_gpuva_manager to walk over
> + * @gpuvm__: &drm_gpuvm to walk over
> * @start__: starting offset, the first gpuva will overlap this
> * @end__: ending offset, the last gpuva will start before this (but may
> * overlap)
> *
> - * This iterator walks over all &drm_gpuvas in the &drm_gpuva_manager that lie
> + * This iterator walks over all &drm_gpuvas in the &drm_gpuvm that lie
> * between @start__ and @end__. It is implemented similarly to
> - * list_for_each_safe(), but is using the &drm_gpuva_manager's internal interval
> + * list_for_each_safe(), but is using the &drm_gpuvm's internal interval
> * tree to accelerate the search for the starting &drm_gpuva, and hence is safe
> * against removal of elements. It assumes that @end__ is within (or is the
> - * upper limit of) the &drm_gpuva_manager. This iterator does not skip over the
> - * &drm_gpuva_manager's @kernel_alloc_node.
> + * upper limit of) the &drm_gpuvm. This iterator does not skip over the
> + * &drm_gpuvm's @kernel_alloc_node.
> */
> -#define drm_gpuva_for_each_va_range_safe(va__, next__, mgr__, start__, end__) \
> - for (va__ = drm_gpuva_find_first((mgr__), (start__), (end__) - (start__)), \
> +#define drm_gpuvm_for_each_va_range_safe(va__, next__, gpuvm__, start__, end__) \
> + for (va__ = drm_gpuva_find_first((gpuvm__), (start__), (end__) - (start__)), \
> next__ = __drm_gpuva_next(va__); \
> va__ && (va__->va.addr < (end__)); \
> va__ = next__, next__ = __drm_gpuva_next(va__))
>
> /**
> - * drm_gpuva_for_each_va() - iterate over all &drm_gpuvas
> + * drm_gpuvm_for_each_va() - iterate over all &drm_gpuvas
> * @va__: &drm_gpuva to assign to in each iteration step
> - * @mgr__: &drm_gpuva_manager to walk over
> + * @gpuvm__: &drm_gpuvm to walk over
> *
> * This iterator walks over all &drm_gpuva structures associated with the given
> - * &drm_gpuva_manager.
> + * &drm_gpuvm.
> */
> -#define drm_gpuva_for_each_va(va__, mgr__) \
> - list_for_each_entry(va__, &(mgr__)->rb.list, rb.entry)
> +#define drm_gpuvm_for_each_va(va__, gpuvm__) \
> + list_for_each_entry(va__, &(gpuvm__)->rb.list, rb.entry)
>
> /**
> - * drm_gpuva_for_each_va_safe() - safely iterate over all &drm_gpuvas
> + * drm_gpuvm_for_each_va_safe() - safely iterate over all &drm_gpuvas
> * @va__: &drm_gpuva to assign to in each iteration step
> * @next__: another &drm_gpuva to use as temporary storage
> - * @mgr__: &drm_gpuva_manager to walk over
> + * @gpuvm__: &drm_gpuvm to walk over
> *
> * This iterator walks over all &drm_gpuva structures associated with the given
> - * &drm_gpuva_manager. It is implemented with list_for_each_entry_safe(), and
> + * &drm_gpuvm. It is implemented with list_for_each_entry_safe(), and
> * hence safe against the removal of elements.
> */
> -#define drm_gpuva_for_each_va_safe(va__, next__, mgr__) \
> - list_for_each_entry_safe(va__, next__, &(mgr__)->rb.list, rb.entry)
> +#define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \
> + list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry)
>
> /**
> * enum drm_gpuva_op_type - GPU VA operation type
> *
> - * Operations to alter the GPU VA mappings tracked by the &drm_gpuva_manager.
> + * Operations to alter the GPU VA mappings tracked by the &drm_gpuvm.
> */
> enum drm_gpuva_op_type {
> /**
> @@ -413,7 +412,7 @@ struct drm_gpuva_op_unmap {
> *
> * Optionally, if &keep is set, drivers may keep the actual page table
> * mappings for this &drm_gpuva, adding the missing page table entries
> - * only and update the &drm_gpuva_manager accordingly.
> + * only and update the &drm_gpuvm accordingly.
> */
> bool keep;
> };
> @@ -584,22 +583,22 @@ struct drm_gpuva_ops {
> #define drm_gpuva_next_op(op) list_next_entry(op, entry)
>
> struct drm_gpuva_ops *
> -drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
> +drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range,
> struct drm_gem_object *obj, u64 offset);
> struct drm_gpuva_ops *
> -drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
> +drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range);
>
> struct drm_gpuva_ops *
> -drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
> +drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range);
>
> struct drm_gpuva_ops *
> -drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
> +drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
> struct drm_gem_object *obj);
>
> -void drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
> +void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
> struct drm_gpuva_ops *ops);
>
> static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
> @@ -610,15 +609,15 @@ static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
> }
>
> /**
> - * struct drm_gpuva_fn_ops - callbacks for split/merge steps
> + * struct drm_gpuvm_ops - callbacks for split/merge steps
> *
> - * This structure defines the callbacks used by &drm_gpuva_sm_map and
> - * &drm_gpuva_sm_unmap to provide the split/merge steps for map and unmap
> + * This structure defines the callbacks used by &drm_gpuvm_sm_map and
> + * &drm_gpuvm_sm_unmap to provide the split/merge steps for map and unmap
> * operations to drivers.
> */
> -struct drm_gpuva_fn_ops {
> +struct drm_gpuvm_ops {
> /**
> - * @op_alloc: called when the &drm_gpuva_manager allocates
> + * @op_alloc: called when the &drm_gpuvm allocates
> * a struct drm_gpuva_op
> *
> * Some drivers may want to embed struct drm_gpuva_op into driver
> @@ -630,7 +629,7 @@ struct drm_gpuva_fn_ops {
> struct drm_gpuva_op *(*op_alloc)(void);
>
> /**
> - * @op_free: called when the &drm_gpuva_manager frees a
> + * @op_free: called when the &drm_gpuvm frees a
> * struct drm_gpuva_op
> *
> * Some drivers may want to embed struct drm_gpuva_op into driver
> @@ -642,19 +641,19 @@ struct drm_gpuva_fn_ops {
> void (*op_free)(struct drm_gpuva_op *op);
>
> /**
> - * @sm_step_map: called from &drm_gpuva_sm_map to finally insert the
> + * @sm_step_map: called from &drm_gpuvm_sm_map to finally insert the
> * mapping once all previous steps were completed
> *
> * The &priv pointer matches the one the driver passed to
> - * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
> + * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
> *
> - * Can be NULL if &drm_gpuva_sm_map is used.
> + * Can be NULL if &drm_gpuvm_sm_map is used.
> */
> int (*sm_step_map)(struct drm_gpuva_op *op, void *priv);
>
> /**
> - * @sm_step_remap: called from &drm_gpuva_sm_map and
> - * &drm_gpuva_sm_unmap to split up an existent mapping
> + * @sm_step_remap: called from &drm_gpuvm_sm_map and
> + * &drm_gpuvm_sm_unmap to split up an existent mapping
> *
> * This callback is called when existent mapping needs to be split up.
> * This is the case when either a newly requested mapping overlaps or
> @@ -662,38 +661,38 @@ struct drm_gpuva_fn_ops {
> * mapping is requested.
> *
> * The &priv pointer matches the one the driver passed to
> - * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
> + * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
> *
> - * Can be NULL if neither &drm_gpuva_sm_map nor &drm_gpuva_sm_unmap is
> + * Can be NULL if neither &drm_gpuvm_sm_map nor &drm_gpuvm_sm_unmap is
> * used.
> */
> int (*sm_step_remap)(struct drm_gpuva_op *op, void *priv);
>
> /**
> - * @sm_step_unmap: called from &drm_gpuva_sm_map and
> - * &drm_gpuva_sm_unmap to unmap an existent mapping
> + * @sm_step_unmap: called from &drm_gpuvm_sm_map and
> + * &drm_gpuvm_sm_unmap to unmap an existent mapping
> *
> * This callback is called when existent mapping needs to be unmapped.
> * This is the case when either a newly requested mapping encloses an
> * existent mapping or an unmap of an existent mapping is requested.
> *
> * The &priv pointer matches the one the driver passed to
> - * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
> + * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
> *
> - * Can be NULL if neither &drm_gpuva_sm_map nor &drm_gpuva_sm_unmap is
> + * Can be NULL if neither &drm_gpuvm_sm_map nor &drm_gpuvm_sm_unmap is
> * used.
> */
> int (*sm_step_unmap)(struct drm_gpuva_op *op, void *priv);
> };
>
> -int drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
> +int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> u64 addr, u64 range,
> struct drm_gem_object *obj, u64 offset);
>
> -int drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
> +int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
> u64 addr, u64 range);
>
> -void drm_gpuva_map(struct drm_gpuva_manager *mgr,
> +void drm_gpuva_map(struct drm_gpuvm *gpuvm,
> struct drm_gpuva *va,
> struct drm_gpuva_op_map *op);
>
> @@ -703,4 +702,4 @@ void drm_gpuva_remap(struct drm_gpuva *prev,
>
> void drm_gpuva_unmap(struct drm_gpuva_op_unmap *op);
>
> -#endif /* __DRM_GPUVA_MGR_H__ */
> +#endif /* __DRM_GPUVM_H__ */
* Re: [PATCH drm-misc-next v4 1/8] drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm
2023-09-21 6:48 ` Christian König
@ 2023-09-25 0:42 ` Dave Airlie
0 siblings, 0 replies; 29+ messages in thread
From: Dave Airlie @ 2023-09-25 0:42 UTC (permalink / raw)
To: Christian König
Cc: Danilo Krummrich, daniel, matthew.brost, thomas.hellstrom,
sarah.walker, donald.robson, boris.brezillon, faith.ekstrand,
dri-devel, nouveau, linux-kernel
On Thu, 21 Sept 2023 at 16:49, Christian König <christian.koenig@amd.com> wrote:
>
> Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
> > Rename struct drm_gpuva_manager to struct drm_gpuvm including
> > corresponding functions. This way the GPUVA manager's structures align
> > very well with the documentation of VM_BIND [1] and VM_BIND locking [2].
> >
> > It also provides a better foundation for the naming of data structures
> > and functions introduced for implementing a common dma-resv per GPU-VM
> > including tracking of external and evicted objects in subsequent
> > patches.
> >
> > [1] Documentation/gpu/drm-vm-bind-async.rst
> > [2] Documentation/gpu/drm-vm-bind-locking.rst
> >
> > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Signed-off-by: Danilo Krummrich <dakr@redhat.com>
>
> Not sure if that name is better or worse, but from the handling I
> suggest to have this patch separately pushed to drm-misc-next.
>
> Feel free to add my Acked-by for pushing this.
>
Acked-by: Dave Airlie <airlied@redhat.com>
> Regards,
> Christian.
>
> > ---
> > drivers/gpu/drm/Makefile | 2 +-
> > drivers/gpu/drm/drm_debugfs.c | 16 +-
> > .../gpu/drm/{drm_gpuva_mgr.c => drm_gpuvm.c} | 400 +++++++++---------
> > drivers/gpu/drm/nouveau/nouveau_exec.c | 2 +-
> > drivers/gpu/drm/nouveau/nouveau_uvmm.c | 24 +-
> > drivers/gpu/drm/nouveau/nouveau_uvmm.h | 6 +-
> > include/drm/drm_debugfs.h | 6 +-
> > include/drm/{drm_gpuva_mgr.h => drm_gpuvm.h} | 153 ++++---
> > 8 files changed, 304 insertions(+), 305 deletions(-)
> > rename drivers/gpu/drm/{drm_gpuva_mgr.c => drm_gpuvm.c} (78%)
> > rename include/drm/{drm_gpuva_mgr.h => drm_gpuvm.h} (78%)
> >
> > diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> > index 215e78e79125..7a84b3cddeab 100644
> > --- a/drivers/gpu/drm/Makefile
> > +++ b/drivers/gpu/drm/Makefile
> > @@ -45,7 +45,7 @@ drm-y := \
> > drm_vblank.o \
> > drm_vblank_work.o \
> > drm_vma_manager.o \
> > - drm_gpuva_mgr.o \
> > + drm_gpuvm.o \
> > drm_writeback.o
> > drm-$(CONFIG_DRM_LEGACY) += \
> > drm_agpsupport.o \
> > diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
> > index 44ecd7d0daac..f291fb4b359f 100644
> > --- a/drivers/gpu/drm/drm_debugfs.c
> > +++ b/drivers/gpu/drm/drm_debugfs.c
> > @@ -40,7 +40,7 @@
> > #include <drm/drm_file.h>
> > #include <drm/drm_gem.h>
> > #include <drm/drm_managed.h>
> > -#include <drm/drm_gpuva_mgr.h>
> > +#include <drm/drm_gpuvm.h>
> >
> > #include "drm_crtc_internal.h"
> > #include "drm_internal.h"
> > @@ -189,31 +189,31 @@ static const struct file_operations drm_debugfs_fops = {
> > /**
> > * drm_debugfs_gpuva_info - dump the given DRM GPU VA space
> > * @m: pointer to the &seq_file to write
> > - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> > + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> > *
> > * Dumps the GPU VA mappings of a given DRM GPU VA manager.
> > *
> > * For each DRM GPU VA space drivers should call this function from their
> > * &drm_info_list's show callback.
> > *
> > - * Returns: 0 on success, -ENODEV if the &mgr is not initialized
> > + * Returns: 0 on success, -ENODEV if the &gpuvm is not initialized
> > */
> > int drm_debugfs_gpuva_info(struct seq_file *m,
> > - struct drm_gpuva_manager *mgr)
> > + struct drm_gpuvm *gpuvm)
> > {
> > - struct drm_gpuva *va, *kva = &mgr->kernel_alloc_node;
> > + struct drm_gpuva *va, *kva = &gpuvm->kernel_alloc_node;
> >
> > - if (!mgr->name)
> > + if (!gpuvm->name)
> > return -ENODEV;
> >
> > seq_printf(m, "DRM GPU VA space (%s) [0x%016llx;0x%016llx]\n",
> > - mgr->name, mgr->mm_start, mgr->mm_start + mgr->mm_range);
> > + gpuvm->name, gpuvm->mm_start, gpuvm->mm_start + gpuvm->mm_range);
> > seq_printf(m, "Kernel reserved node [0x%016llx;0x%016llx]\n",
> > kva->va.addr, kva->va.addr + kva->va.range);
> > seq_puts(m, "\n");
> > seq_puts(m, " VAs | start | range | end | object | object offset\n");
> > seq_puts(m, "-------------------------------------------------------------------------------------------------------------\n");
> > - drm_gpuva_for_each_va(va, mgr) {
> > + drm_gpuvm_for_each_va(va, gpuvm) {
> > if (unlikely(va == kva))
> > continue;
> >
> > diff --git a/drivers/gpu/drm/drm_gpuva_mgr.c b/drivers/gpu/drm/drm_gpuvm.c
> > similarity index 78%
> > rename from drivers/gpu/drm/drm_gpuva_mgr.c
> > rename to drivers/gpu/drm/drm_gpuvm.c
> > index f86bfad74ff8..7074bcad5b28 100644
> > --- a/drivers/gpu/drm/drm_gpuva_mgr.c
> > +++ b/drivers/gpu/drm/drm_gpuvm.c
> > @@ -25,7 +25,7 @@
> > *
> > */
> >
> > -#include <drm/drm_gpuva_mgr.h>
> > +#include <drm/drm_gpuvm.h>
> >
> > #include <linux/interval_tree_generic.h>
> > #include <linux/mm.h>
> > @@ -33,8 +33,8 @@
> > /**
> > * DOC: Overview
> > *
> > - * The DRM GPU VA Manager, represented by struct drm_gpuva_manager keeps track
> > - * of a GPU's virtual address (VA) space and manages the corresponding virtual
> > + * The DRM GPU VA Manager, represented by struct drm_gpuvm keeps track of a
> > + * GPU's virtual address (VA) space and manages the corresponding virtual
> > * mappings represented by &drm_gpuva objects. It also keeps track of the
> > * mapping's backing &drm_gem_object buffers.
> > *
> > @@ -47,28 +47,28 @@
> > * The GPU VA manager internally uses a rb-tree to manage the
> > * &drm_gpuva mappings within a GPU's virtual address space.
> > *
> > - * The &drm_gpuva_manager contains a special &drm_gpuva representing the
> > + * The &drm_gpuvm structure contains a special &drm_gpuva representing the
> > * portion of VA space reserved by the kernel. This node is initialized together
> > * with the GPU VA manager instance and removed when the GPU VA manager is
> > * destroyed.
> > *
> > - * In a typical application drivers would embed struct drm_gpuva_manager and
> > + * In a typical application drivers would embed struct drm_gpuvm and
> > * struct drm_gpuva within their own driver specific structures, there won't be
> > * any memory allocations of its own nor memory allocations of &drm_gpuva
> > * entries.
> > *
> > - * The data structures needed to store &drm_gpuvas within the &drm_gpuva_manager
> > - * are contained within struct drm_gpuva already. Hence, for inserting
> > - * &drm_gpuva entries from within dma-fence signalling critical sections it is
> > - * enough to pre-allocate the &drm_gpuva structures.
> > + * The data structures needed to store &drm_gpuvas within the &drm_gpuvm are
> > + * contained within struct drm_gpuva already. Hence, for inserting &drm_gpuva
> > + * entries from within dma-fence signalling critical sections it is enough to
> > + * pre-allocate the &drm_gpuva structures.
> > */
> >
> > /**
> > * DOC: Split and Merge
> > *
> > * Besides its capability to manage and represent a GPU VA space, the
> > - * &drm_gpuva_manager also provides functions to let the &drm_gpuva_manager
> > - * calculate a sequence of operations to satisfy a given map or unmap request.
> > + * GPU VA manager also provides functions to let the &drm_gpuvm calculate a
> > + * sequence of operations to satisfy a given map or unmap request.
> > *
> > * Therefore the DRM GPU VA manager provides an algorithm implementing splitting
> > * and merging of existent GPU VA mappings with the ones that are requested to
> > @@ -76,16 +76,16 @@
> > * implement Vulkan 'Sparse Memory Bindings' - drivers UAPIs often refer to this
> > * as VM BIND.
> > *
> > - * Drivers can call drm_gpuva_sm_map() to receive a sequence of callbacks
> > + * Drivers can call drm_gpuvm_sm_map() to receive a sequence of callbacks
> > * containing map, unmap and remap operations for a given newly requested
> > * mapping. The sequence of callbacks represents the set of operations to
> > * execute in order to integrate the new mapping cleanly into the current state
> > * of the GPU VA space.
> > *
> > * Depending on how the new GPU VA mapping intersects with the existent mappings
> > - * of the GPU VA space the &drm_gpuva_fn_ops callbacks contain an arbitrary
> > - * amount of unmap operations, a maximum of two remap operations and a single
> > - * map operation. The caller might receive no callback at all if no operation is
> > + * of the GPU VA space the &drm_gpuvm_ops callbacks contain an arbitrary amount
> > + * of unmap operations, a maximum of two remap operations and a single map
> > + * operation. The caller might receive no callback at all if no operation is
> > * required, e.g. if the requested mapping already exists in the exact same way.
> > *
> > * The single map operation represents the original map operation requested by
> > @@ -95,7 +95,7 @@
> > * &drm_gpuva to unmap is physically contiguous with the original mapping
> > * request. Optionally, if 'keep' is set, drivers may keep the actual page table
> > * entries for this &drm_gpuva, adding the missing page table entries only and
> > - * update the &drm_gpuva_manager's view of things accordingly.
> > + * update the &drm_gpuvm's view of things accordingly.
> > *
> > * Drivers may do the same optimization, namely delta page table updates, also
> > * for remap operations. This is possible since &drm_gpuva_op_remap consists of
> > @@ -106,34 +106,34 @@
> > * the beginning and one at the end of the new mapping, hence there is a
> > * maximum of two remap operations.
> > *
> > - * Analogous to drm_gpuva_sm_map() drm_gpuva_sm_unmap() uses &drm_gpuva_fn_ops
> > - * to call back into the driver in order to unmap a range of GPU VA space. The
> > + * Analogous to drm_gpuvm_sm_map() drm_gpuvm_sm_unmap() uses &drm_gpuvm_ops to
> > + * call back into the driver in order to unmap a range of GPU VA space. The
> > * logic behind this function is way simpler though: For all existent mappings
> > * enclosed by the given range unmap operations are created. For mappings which
> > * are only partially located within the given range, remap operations are
> > * created such that those mappings are split up and re-mapped partially.
> > *
> > - * As an alternative to drm_gpuva_sm_map() and drm_gpuva_sm_unmap(),
> > - * drm_gpuva_sm_map_ops_create() and drm_gpuva_sm_unmap_ops_create() can be used
> > + * As an alternative to drm_gpuvm_sm_map() and drm_gpuvm_sm_unmap(),
> > + * drm_gpuvm_sm_map_ops_create() and drm_gpuvm_sm_unmap_ops_create() can be used
> > * to directly obtain an instance of struct drm_gpuva_ops containing a list of
> > * &drm_gpuva_op, which can be iterated with drm_gpuva_for_each_op(). This list
> > * contains the &drm_gpuva_ops analogous to the callbacks one would receive when
> > - * calling drm_gpuva_sm_map() or drm_gpuva_sm_unmap(). While this way requires
> > + * calling drm_gpuvm_sm_map() or drm_gpuvm_sm_unmap(). While this way requires
> > * more memory (to allocate the &drm_gpuva_ops), it provides drivers a way to
> > * iterate the &drm_gpuva_op multiple times, e.g. once in a context where memory
> > * allocations are possible (e.g. to allocate GPU page tables) and once in the
> > * dma-fence signalling critical path.
> > *
> > - * To update the &drm_gpuva_manager's view of the GPU VA space
> > - * drm_gpuva_insert() and drm_gpuva_remove() may be used. These functions can
> > - * safely be used from &drm_gpuva_fn_ops callbacks originating from
> > - * drm_gpuva_sm_map() or drm_gpuva_sm_unmap(). However, it might be more
> > - * convenient to use the provided helper functions drm_gpuva_map(),
> > - * drm_gpuva_remap() and drm_gpuva_unmap() instead.
> > + * To update the &drm_gpuvm's view of the GPU VA space drm_gpuva_insert() and
> > + * drm_gpuva_remove() may be used. These functions can safely be used from
> > + * &drm_gpuvm_ops callbacks originating from drm_gpuvm_sm_map() or
> > + * drm_gpuvm_sm_unmap(). However, it might be more convenient to use the
> > + * provided helper functions drm_gpuva_map(), drm_gpuva_remap() and
> > + * drm_gpuva_unmap() instead.
> > *
> > * The following diagram depicts the basic relationships of existent GPU VA
> > * mappings, a newly requested mapping and the resulting mappings as implemented
> > - * by drm_gpuva_sm_map() - it doesn't cover any arbitrary combinations of these.
> > + * by drm_gpuvm_sm_map() - it doesn't cover any arbitrary combinations of these.
> > *
> > * 1) Requested mapping is identical. Replace it, but indicate the backing PTEs
> > * could be kept.
> > @@ -421,10 +421,10 @@
> > * // Allocates a new &drm_gpuva.
> > * struct drm_gpuva * driver_gpuva_alloc(void);
> > *
> > - * // Typically drivers would embedd the &drm_gpuva_manager and &drm_gpuva
> > + * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
> > * // structure in individual driver structures and lock the dma-resv with
> > * // drm_exec or similar helpers.
> > - * int driver_mapping_create(struct drm_gpuva_manager *mgr,
> > + * int driver_mapping_create(struct drm_gpuvm *gpuvm,
> > * u64 addr, u64 range,
> > * struct drm_gem_object *obj, u64 offset)
> > * {
> > @@ -432,7 +432,7 @@
> > * struct drm_gpuva_op *op
> > *
> > * driver_lock_va_space();
> > - * ops = drm_gpuva_sm_map_ops_create(mgr, addr, range,
> > + * ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
> > * obj, offset);
> > * if (IS_ERR(ops))
> > * return PTR_ERR(ops);
> > @@ -448,7 +448,7 @@
> > * // free memory and unlock
> > *
> > * driver_vm_map();
> > - * drm_gpuva_map(mgr, va, &op->map);
> > + * drm_gpuva_map(gpuvm, va, &op->map);
> > * drm_gpuva_link(va);
> > *
> > * break;
> > @@ -504,23 +504,23 @@
> > * 2) Receive a callback for each &drm_gpuva_op to create a new mapping::
> > *
> > * struct driver_context {
> > - * struct drm_gpuva_manager *mgr;
> > + * struct drm_gpuvm *gpuvm;
> > * struct drm_gpuva *new_va;
> > * struct drm_gpuva *prev_va;
> > * struct drm_gpuva *next_va;
> > * };
> > *
> > - * // ops to pass to drm_gpuva_manager_init()
> > - * static const struct drm_gpuva_fn_ops driver_gpuva_ops = {
> > + * // ops to pass to drm_gpuvm_init()
> > + * static const struct drm_gpuvm_ops driver_gpuvm_ops = {
> > * .sm_step_map = driver_gpuva_map,
> > * .sm_step_remap = driver_gpuva_remap,
> > * .sm_step_unmap = driver_gpuva_unmap,
> > * };
> > *
> > - * // Typically drivers would embedd the &drm_gpuva_manager and &drm_gpuva
> > + * // Typically drivers would embed the &drm_gpuvm and &drm_gpuva
> > * // structure in individual driver structures and lock the dma-resv with
> > * // drm_exec or similar helpers.
> > - * int driver_mapping_create(struct drm_gpuva_manager *mgr,
> > + * int driver_mapping_create(struct drm_gpuvm *gpuvm,
> > * u64 addr, u64 range,
> > * struct drm_gem_object *obj, u64 offset)
> > * {
> > @@ -529,7 +529,7 @@
> > * struct drm_gpuva_op *op;
> > * int ret = 0;
> > *
> > - * ctx.mgr = mgr;
> > + * ctx.gpuvm = gpuvm;
> > *
> > * ctx.new_va = kzalloc(sizeof(*ctx.new_va), GFP_KERNEL);
> > * ctx.prev_va = kzalloc(sizeof(*ctx.prev_va), GFP_KERNEL);
> > @@ -540,7 +540,7 @@
> > * }
> > *
> > * driver_lock_va_space();
> > - * ret = drm_gpuva_sm_map(mgr, &ctx, addr, range, obj, offset);
> > + * ret = drm_gpuvm_sm_map(gpuvm, &ctx, addr, range, obj, offset);
> > * driver_unlock_va_space();
> > *
> > * out:
> > @@ -554,7 +554,7 @@
> > * {
> > * struct driver_context *ctx = __ctx;
> > *
> > - * drm_gpuva_map(ctx->mgr, ctx->new_va, &op->map);
> > + * drm_gpuva_map(ctx->gpuvm, ctx->new_va, &op->map);
> > *
> > * drm_gpuva_link(ctx->new_va);
> > *
> > @@ -609,12 +609,12 @@ INTERVAL_TREE_DEFINE(struct drm_gpuva, rb.node, u64, rb.__subtree_last,
> > GPUVA_START, GPUVA_LAST, static __maybe_unused,
> > drm_gpuva_it)
> >
> > -static int __drm_gpuva_insert(struct drm_gpuva_manager *mgr,
> > +static int __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> > struct drm_gpuva *va);
> > static void __drm_gpuva_remove(struct drm_gpuva *va);
> >
> > static bool
> > -drm_gpuva_check_overflow(u64 addr, u64 range)
> > +drm_gpuvm_check_overflow(u64 addr, u64 range)
> > {
> > u64 end;
> >
> > @@ -623,121 +623,121 @@ drm_gpuva_check_overflow(u64 addr, u64 range)
> > }
> >
> > static bool
> > -drm_gpuva_in_mm_range(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
> > +drm_gpuvm_in_mm_range(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
> > {
> > u64 end = addr + range;
> > - u64 mm_start = mgr->mm_start;
> > - u64 mm_end = mm_start + mgr->mm_range;
> > + u64 mm_start = gpuvm->mm_start;
> > + u64 mm_end = mm_start + gpuvm->mm_range;
> >
> > return addr >= mm_start && end <= mm_end;
> > }
> >
> > static bool
> > -drm_gpuva_in_kernel_node(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
> > +drm_gpuvm_in_kernel_node(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
> > {
> > u64 end = addr + range;
> > - u64 kstart = mgr->kernel_alloc_node.va.addr;
> > - u64 krange = mgr->kernel_alloc_node.va.range;
> > + u64 kstart = gpuvm->kernel_alloc_node.va.addr;
> > + u64 krange = gpuvm->kernel_alloc_node.va.range;
> > u64 kend = kstart + krange;
> >
> > return krange && addr < kend && kstart < end;
> > }
> >
> > static bool
> > -drm_gpuva_range_valid(struct drm_gpuva_manager *mgr,
> > +drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
> > u64 addr, u64 range)
> > {
> > - return !drm_gpuva_check_overflow(addr, range) &&
> > - drm_gpuva_in_mm_range(mgr, addr, range) &&
> > - !drm_gpuva_in_kernel_node(mgr, addr, range);
> > + return !drm_gpuvm_check_overflow(addr, range) &&
> > + drm_gpuvm_in_mm_range(gpuvm, addr, range) &&
> > + !drm_gpuvm_in_kernel_node(gpuvm, addr, range);
> > }
> >
> > /**
> > - * drm_gpuva_manager_init() - initialize a &drm_gpuva_manager
> > - * @mgr: pointer to the &drm_gpuva_manager to initialize
> > + * drm_gpuvm_init() - initialize a &drm_gpuvm
> > + * @gpuvm: pointer to the &drm_gpuvm to initialize
> > * @name: the name of the GPU VA space
> > * @start_offset: the start offset of the GPU VA space
> > * @range: the size of the GPU VA space
> > * @reserve_offset: the start of the kernel reserved GPU VA area
> > * @reserve_range: the size of the kernel reserved GPU VA area
> > - * @ops: &drm_gpuva_fn_ops called on &drm_gpuva_sm_map / &drm_gpuva_sm_unmap
> > + * @ops: &drm_gpuvm_ops called on &drm_gpuvm_sm_map / &drm_gpuvm_sm_unmap
> > *
> > - * The &drm_gpuva_manager must be initialized with this function before use.
> > + * The &drm_gpuvm must be initialized with this function before use.
> > *
> > - * Note that @mgr must be cleared to 0 before calling this function. The given
> > + * Note that @gpuvm must be cleared to 0 before calling this function. The given
> > * &name is expected to be managed by the surrounding driver structures.
> > */
> > void
> > -drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> > - const char *name,
> > - u64 start_offset, u64 range,
> > - u64 reserve_offset, u64 reserve_range,
> > - const struct drm_gpuva_fn_ops *ops)
> > +drm_gpuvm_init(struct drm_gpuvm *gpuvm,
> > + const char *name,
> > + u64 start_offset, u64 range,
> > + u64 reserve_offset, u64 reserve_range,
> > + const struct drm_gpuvm_ops *ops)
> > {
> > - mgr->rb.tree = RB_ROOT_CACHED;
> > - INIT_LIST_HEAD(&mgr->rb.list);
> > + gpuvm->rb.tree = RB_ROOT_CACHED;
> > + INIT_LIST_HEAD(&gpuvm->rb.list);
> >
> > - drm_gpuva_check_overflow(start_offset, range);
> > - mgr->mm_start = start_offset;
> > - mgr->mm_range = range;
> > + drm_gpuvm_check_overflow(start_offset, range);
> > + gpuvm->mm_start = start_offset;
> > + gpuvm->mm_range = range;
> >
> > - mgr->name = name ? name : "unknown";
> > - mgr->ops = ops;
> > + gpuvm->name = name ? name : "unknown";
> > + gpuvm->ops = ops;
> >
> > - memset(&mgr->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
> > + memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
> >
> > if (reserve_range) {
> > - mgr->kernel_alloc_node.va.addr = reserve_offset;
> > - mgr->kernel_alloc_node.va.range = reserve_range;
> > + gpuvm->kernel_alloc_node.va.addr = reserve_offset;
> > + gpuvm->kernel_alloc_node.va.range = reserve_range;
> >
> > - if (likely(!drm_gpuva_check_overflow(reserve_offset,
> > + if (likely(!drm_gpuvm_check_overflow(reserve_offset,
> > reserve_range)))
> > - __drm_gpuva_insert(mgr, &mgr->kernel_alloc_node);
> > + __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
> > }
> > }
> > -EXPORT_SYMBOL_GPL(drm_gpuva_manager_init);
> > +EXPORT_SYMBOL_GPL(drm_gpuvm_init);
> >
> > /**
> > - * drm_gpuva_manager_destroy() - cleanup a &drm_gpuva_manager
> > - * @mgr: pointer to the &drm_gpuva_manager to clean up
> > + * drm_gpuvm_destroy() - cleanup a &drm_gpuvm
> > + * @gpuvm: pointer to the &drm_gpuvm to clean up
> > *
> > * Note that it is a bug to call this function on a manager that still
> > * holds GPU VA mappings.
> > */
> > void
> > -drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr)
> > +drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
> > {
> > - mgr->name = NULL;
> > + gpuvm->name = NULL;
> >
> > - if (mgr->kernel_alloc_node.va.range)
> > - __drm_gpuva_remove(&mgr->kernel_alloc_node);
> > + if (gpuvm->kernel_alloc_node.va.range)
> > + __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
> >
> > - WARN(!RB_EMPTY_ROOT(&mgr->rb.tree.rb_root),
> > + WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
> > "GPUVA tree is not empty, potentially leaking memory.");
> > }
> > -EXPORT_SYMBOL_GPL(drm_gpuva_manager_destroy);
> > +EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
> >
> > static int
> > -__drm_gpuva_insert(struct drm_gpuva_manager *mgr,
> > +__drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> > struct drm_gpuva *va)
> > {
> > struct rb_node *node;
> > struct list_head *head;
> >
> > - if (drm_gpuva_it_iter_first(&mgr->rb.tree,
> > + if (drm_gpuva_it_iter_first(&gpuvm->rb.tree,
> > GPUVA_START(va),
> > GPUVA_LAST(va)))
> > return -EEXIST;
> >
> > - va->mgr = mgr;
> > + va->vm = gpuvm;
> >
> > - drm_gpuva_it_insert(va, &mgr->rb.tree);
> > + drm_gpuva_it_insert(va, &gpuvm->rb.tree);
> >
> > node = rb_prev(&va->rb.node);
> > if (node)
> > head = &(to_drm_gpuva(node))->rb.entry;
> > else
> > - head = &mgr->rb.list;
> > + head = &gpuvm->rb.list;
> >
> > list_add(&va->rb.entry, head);
> >
> > @@ -746,36 +746,36 @@ __drm_gpuva_insert(struct drm_gpuva_manager *mgr,
> >
> > /**
> > * drm_gpuva_insert() - insert a &drm_gpuva
> > - * @mgr: the &drm_gpuva_manager to insert the &drm_gpuva in
> > + * @gpuvm: the &drm_gpuvm to insert the &drm_gpuva in
> > * @va: the &drm_gpuva to insert
> > *
> > * Insert a &drm_gpuva with a given address and range into a
> > - * &drm_gpuva_manager.
> > + * &drm_gpuvm.
> > *
> > * It is safe to use this function using the safe versions of iterating the GPU
> > - * VA space, such as drm_gpuva_for_each_va_safe() and
> > - * drm_gpuva_for_each_va_range_safe().
> > + * VA space, such as drm_gpuvm_for_each_va_safe() and
> > + * drm_gpuvm_for_each_va_range_safe().
> > *
> > * Returns: 0 on success, negative error code on failure.
> > */
> > int
> > -drm_gpuva_insert(struct drm_gpuva_manager *mgr,
> > +drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> > struct drm_gpuva *va)
> > {
> > u64 addr = va->va.addr;
> > u64 range = va->va.range;
> >
> > - if (unlikely(!drm_gpuva_range_valid(mgr, addr, range)))
> > + if (unlikely(!drm_gpuva_range_valid(gpuvm, addr, range)))
> > return -EINVAL;
> >
> > - return __drm_gpuva_insert(mgr, va);
> > + return __drm_gpuva_insert(gpuvm, va);
> > }
> > EXPORT_SYMBOL_GPL(drm_gpuva_insert);
> >
> > static void
> > __drm_gpuva_remove(struct drm_gpuva *va)
> > {
> > - drm_gpuva_it_remove(va, &va->mgr->rb.tree);
> > + drm_gpuva_it_remove(va, &va->vm->rb.tree);
> > list_del_init(&va->rb.entry);
> > }
> >
> > @@ -786,15 +786,15 @@ __drm_gpuva_remove(struct drm_gpuva *va)
> > * This removes the given &va from the underlying tree.
> > *
> > * It is safe to use this function using the safe versions of iterating the GPU
> > - * VA space, such as drm_gpuva_for_each_va_safe() and
> > - * drm_gpuva_for_each_va_range_safe().
> > + * VA space, such as drm_gpuvm_for_each_va_safe() and
> > + * drm_gpuvm_for_each_va_range_safe().
> > */
> > void
> > drm_gpuva_remove(struct drm_gpuva *va)
> > {
> > - struct drm_gpuva_manager *mgr = va->mgr;
> > + struct drm_gpuvm *gpuvm = va->vm;
> >
> > - if (unlikely(va == &mgr->kernel_alloc_node)) {
> > + if (unlikely(va == &gpuvm->kernel_alloc_node)) {
> > WARN(1, "Can't destroy kernel reserved node.\n");
> > return;
> > }
> > @@ -853,37 +853,37 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
> >
> > /**
> > * drm_gpuva_find_first() - find the first &drm_gpuva in the given range
> > - * @mgr: the &drm_gpuva_manager to search in
> > + * @gpuvm: the &drm_gpuvm to search in
> > * @addr: the &drm_gpuvas address
> > * @range: the &drm_gpuvas range
> > *
> > * Returns: the first &drm_gpuva within the given range
> > */
> > struct drm_gpuva *
> > -drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
> > +drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
> > u64 addr, u64 range)
> > {
> > u64 last = addr + range - 1;
> >
> > - return drm_gpuva_it_iter_first(&mgr->rb.tree, addr, last);
> > + return drm_gpuva_it_iter_first(&gpuvm->rb.tree, addr, last);
> > }
> > EXPORT_SYMBOL_GPL(drm_gpuva_find_first);
> >
> > /**
> > * drm_gpuva_find() - find a &drm_gpuva
> > - * @mgr: the &drm_gpuva_manager to search in
> > + * @gpuvm: the &drm_gpuvm to search in
> > * @addr: the &drm_gpuvas address
> > * @range: the &drm_gpuvas range
> > *
> > * Returns: the &drm_gpuva at a given &addr and with a given &range
> > */
> > struct drm_gpuva *
> > -drm_gpuva_find(struct drm_gpuva_manager *mgr,
> > +drm_gpuva_find(struct drm_gpuvm *gpuvm,
> > u64 addr, u64 range)
> > {
> > struct drm_gpuva *va;
> >
> > - va = drm_gpuva_find_first(mgr, addr, range);
> > + va = drm_gpuva_find_first(gpuvm, addr, range);
> > if (!va)
> > goto out;
> >
> > @@ -900,7 +900,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find);
> >
> > /**
> > * drm_gpuva_find_prev() - find the &drm_gpuva before the given address
> > - * @mgr: the &drm_gpuva_manager to search in
> > + * @gpuvm: the &drm_gpuvm to search in
> > * @start: the given GPU VA's start address
> > *
> > * Find the adjacent &drm_gpuva before the GPU VA with given &start address.
> > @@ -911,18 +911,18 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find);
> > * Returns: a pointer to the found &drm_gpuva or NULL if none was found
> > */
> > struct drm_gpuva *
> > -drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start)
> > +drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start)
> > {
> > - if (!drm_gpuva_range_valid(mgr, start - 1, 1))
> > + if (!drm_gpuva_range_valid(gpuvm, start - 1, 1))
> > return NULL;
> >
> > - return drm_gpuva_it_iter_first(&mgr->rb.tree, start - 1, start);
> > + return drm_gpuva_it_iter_first(&gpuvm->rb.tree, start - 1, start);
> > }
> > EXPORT_SYMBOL_GPL(drm_gpuva_find_prev);
> >
> > /**
> > * drm_gpuva_find_next() - find the &drm_gpuva after the given address
> > - * @mgr: the &drm_gpuva_manager to search in
> > + * @gpuvm: the &drm_gpuvm to search in
> > * @end: the given GPU VA's end address
> > *
> > * Find the adjacent &drm_gpuva after the GPU VA with given &end address.
> > @@ -933,47 +933,47 @@ EXPORT_SYMBOL_GPL(drm_gpuva_find_prev);
> > * Returns: a pointer to the found &drm_gpuva or NULL if none was found
> > */
> > struct drm_gpuva *
> > -drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end)
> > +drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end)
> > {
> > - if (!drm_gpuva_range_valid(mgr, end, 1))
> > + if (!drm_gpuva_range_valid(gpuvm, end, 1))
> > return NULL;
> >
> > - return drm_gpuva_it_iter_first(&mgr->rb.tree, end, end + 1);
> > + return drm_gpuva_it_iter_first(&gpuvm->rb.tree, end, end + 1);
> > }
> > EXPORT_SYMBOL_GPL(drm_gpuva_find_next);
> >
> > /**
> > * drm_gpuva_interval_empty() - indicate whether a given interval of the VA space
> > * is empty
> > - * @mgr: the &drm_gpuva_manager to check the range for
> > + * @gpuvm: the &drm_gpuvm to check the range for
> > * @addr: the start address of the range
> > * @range: the range of the interval
> > *
> > * Returns: true if the interval is empty, false otherwise
> > */
> > bool
> > -drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range)
> > +drm_gpuva_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
> > {
> > - return !drm_gpuva_find_first(mgr, addr, range);
> > + return !drm_gpuva_find_first(gpuvm, addr, range);
> > }
> > EXPORT_SYMBOL_GPL(drm_gpuva_interval_empty);
> >
> > /**
> > * drm_gpuva_map() - helper to insert a &drm_gpuva according to a
> > * &drm_gpuva_op_map
> > - * @mgr: the &drm_gpuva_manager
> > + * @gpuvm: the &drm_gpuvm
> > * @va: the &drm_gpuva to insert
> > * @op: the &drm_gpuva_op_map to initialize @va with
> > *
> > - * Initializes the @va from the @op and inserts it into the given @mgr.
> > + * Initializes the @va from the @op and inserts it into the given @gpuvm.
> > */
> > void
> > -drm_gpuva_map(struct drm_gpuva_manager *mgr,
> > +drm_gpuva_map(struct drm_gpuvm *gpuvm,
> > struct drm_gpuva *va,
> > struct drm_gpuva_op_map *op)
> > {
> > drm_gpuva_init_from_op(va, op);
> > - drm_gpuva_insert(mgr, va);
> > + drm_gpuva_insert(gpuvm, va);
> > }
> > EXPORT_SYMBOL_GPL(drm_gpuva_map);
> >
> > @@ -993,18 +993,18 @@ drm_gpuva_remap(struct drm_gpuva *prev,
> > struct drm_gpuva_op_remap *op)
> > {
> > struct drm_gpuva *curr = op->unmap->va;
> > - struct drm_gpuva_manager *mgr = curr->mgr;
> > + struct drm_gpuvm *gpuvm = curr->vm;
> >
> > drm_gpuva_remove(curr);
> >
> > if (op->prev) {
> > drm_gpuva_init_from_op(prev, op->prev);
> > - drm_gpuva_insert(mgr, prev);
> > + drm_gpuva_insert(gpuvm, prev);
> > }
> >
> > if (op->next) {
> > drm_gpuva_init_from_op(next, op->next);
> > - drm_gpuva_insert(mgr, next);
> > + drm_gpuva_insert(gpuvm, next);
> > }
> > }
> > EXPORT_SYMBOL_GPL(drm_gpuva_remap);
> > @@ -1024,7 +1024,7 @@ drm_gpuva_unmap(struct drm_gpuva_op_unmap *op)
> > EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
> >
> > static int
> > -op_map_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> > +op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
> > u64 addr, u64 range,
> > struct drm_gem_object *obj, u64 offset)
> > {
> > @@ -1040,7 +1040,7 @@ op_map_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> > }
> >
> > static int
> > -op_remap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> > +op_remap_cb(const struct drm_gpuvm_ops *fn, void *priv,
> > struct drm_gpuva_op_map *prev,
> > struct drm_gpuva_op_map *next,
> > struct drm_gpuva_op_unmap *unmap)
> > @@ -1058,7 +1058,7 @@ op_remap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> > }
> >
> > static int
> > -op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> > +op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
> > struct drm_gpuva *va, bool merge)
> > {
> > struct drm_gpuva_op op = {};
> > @@ -1071,8 +1071,8 @@ op_unmap_cb(const struct drm_gpuva_fn_ops *fn, void *priv,
> > }
> >
> > static int
> > -__drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
> > - const struct drm_gpuva_fn_ops *ops, void *priv,
> > +__drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> > + const struct drm_gpuvm_ops *ops, void *priv,
> > u64 req_addr, u64 req_range,
> > struct drm_gem_object *req_obj, u64 req_offset)
> > {
> > @@ -1080,10 +1080,10 @@ __drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
> > u64 req_end = req_addr + req_range;
> > int ret;
> >
> > - if (unlikely(!drm_gpuva_range_valid(mgr, req_addr, req_range)))
> > + if (unlikely(!drm_gpuva_range_valid(gpuvm, req_addr, req_range)))
> > return -EINVAL;
> >
> > - drm_gpuva_for_each_va_range_safe(va, next, mgr, req_addr, req_end) {
> > + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
> > struct drm_gem_object *obj = va->gem.obj;
> > u64 offset = va->gem.offset;
> > u64 addr = va->va.addr;
> > @@ -1215,18 +1215,18 @@ __drm_gpuva_sm_map(struct drm_gpuva_manager *mgr,
> > }
> >
> > static int
> > -__drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
> > - const struct drm_gpuva_fn_ops *ops, void *priv,
> > +__drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
> > + const struct drm_gpuvm_ops *ops, void *priv,
> > u64 req_addr, u64 req_range)
> > {
> > struct drm_gpuva *va, *next;
> > u64 req_end = req_addr + req_range;
> > int ret;
> >
> > - if (unlikely(!drm_gpuva_range_valid(mgr, req_addr, req_range)))
> > + if (unlikely(!drm_gpuva_range_valid(gpuvm, req_addr, req_range)))
> > return -EINVAL;
> >
> > - drm_gpuva_for_each_va_range_safe(va, next, mgr, req_addr, req_end) {
> > + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
> > struct drm_gpuva_op_map prev = {}, next = {};
> > bool prev_split = false, next_split = false;
> > struct drm_gem_object *obj = va->gem.obj;
> > @@ -1273,8 +1273,8 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
> > }
> >
> > /**
> > - * drm_gpuva_sm_map() - creates the &drm_gpuva_op split/merge steps
> > - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> > + * drm_gpuvm_sm_map() - creates the &drm_gpuva_op split/merge steps
> > + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> > * @req_addr: the start address of the new mapping
> > * @req_range: the range of the new mapping
> > * @req_obj: the &drm_gem_object to map
> > @@ -1282,15 +1282,15 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
> > * @priv: pointer to a driver private data structure
> > *
> > * This function iterates the given range of the GPU VA space. It utilizes the
> > - * &drm_gpuva_fn_ops to call back into the driver providing the split and merge
> > + * &drm_gpuvm_ops to call back into the driver providing the split and merge
> > * steps.
> > *
> > * Drivers may use these callbacks to update the GPU VA space right away within
> > * the callback. In case the driver decides to copy and store the operations for
> > - * later processing neither this function nor &drm_gpuva_sm_unmap is allowed to
> > - * be called before the &drm_gpuva_manager's view of the GPU VA space was
> > + * later processing neither this function nor &drm_gpuvm_sm_unmap is allowed to
> > + * be called before the &drm_gpuvm's view of the GPU VA space was
> > * updated with the previous set of operations. To update the
> > - * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
> > + * &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
> > * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
> > * used.
> > *
> > @@ -1305,39 +1305,39 @@ __drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr,
> > * Returns: 0 on success or a negative error code
> > */
> > int
> > -drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
> > +drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> > u64 req_addr, u64 req_range,
> > struct drm_gem_object *req_obj, u64 req_offset)
> > {
> > - const struct drm_gpuva_fn_ops *ops = mgr->ops;
> > + const struct drm_gpuvm_ops *ops = gpuvm->ops;
> >
> > if (unlikely(!(ops && ops->sm_step_map &&
> > ops->sm_step_remap &&
> > ops->sm_step_unmap)))
> > return -EINVAL;
> >
> > - return __drm_gpuva_sm_map(mgr, ops, priv,
> > + return __drm_gpuvm_sm_map(gpuvm, ops, priv,
> > req_addr, req_range,
> > req_obj, req_offset);
> > }
> > -EXPORT_SYMBOL_GPL(drm_gpuva_sm_map);
> > +EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
> >
> > /**
> > - * drm_gpuva_sm_unmap() - creates the &drm_gpuva_ops to split on unmap
> > - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> > + * drm_gpuvm_sm_unmap() - creates the &drm_gpuva_ops to split on unmap
> > + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> > * @priv: pointer to a driver private data structure
> > * @req_addr: the start address of the range to unmap
> > * @req_range: the range of the mappings to unmap
> > *
> > * This function iterates the given range of the GPU VA space. It utilizes the
> > - * &drm_gpuva_fn_ops to call back into the driver providing the operations to
> > + * &drm_gpuvm_ops to call back into the driver providing the operations to
> > * unmap and, if required, split existent mappings.
> > *
> > * Drivers may use these callbacks to update the GPU VA space right away within
> > * the callback. In case the driver decides to copy and store the operations for
> > - * later processing neither this function nor &drm_gpuva_sm_map is allowed to be
> > - * called before the &drm_gpuva_manager's view of the GPU VA space was updated
> > - * with the previous set of operations. To update the &drm_gpuva_manager's view
> > + * later processing neither this function nor &drm_gpuvm_sm_map is allowed to be
> > + * called before the &drm_gpuvm's view of the GPU VA space was updated
> > + * with the previous set of operations. To update the &drm_gpuvm's view
> > * of the GPU VA space drm_gpuva_insert(), drm_gpuva_destroy_locked() and/or
> > * drm_gpuva_destroy_unlocked() should be used.
> > *
> > @@ -1350,24 +1350,24 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map);
> > * Returns: 0 on success or a negative error code
> > */
> > int
> > -drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
> > +drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
> > u64 req_addr, u64 req_range)
> > {
> > - const struct drm_gpuva_fn_ops *ops = mgr->ops;
> > + const struct drm_gpuvm_ops *ops = gpuvm->ops;
> >
> > if (unlikely(!(ops && ops->sm_step_remap &&
> > ops->sm_step_unmap)))
> > return -EINVAL;
> >
> > - return __drm_gpuva_sm_unmap(mgr, ops, priv,
> > + return __drm_gpuvm_sm_unmap(gpuvm, ops, priv,
> > req_addr, req_range);
> > }
> > -EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap);
> > +EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap);
> >
> > static struct drm_gpuva_op *
> > -gpuva_op_alloc(struct drm_gpuva_manager *mgr)
> > +gpuva_op_alloc(struct drm_gpuvm *gpuvm)
> > {
> > - const struct drm_gpuva_fn_ops *fn = mgr->ops;
> > + const struct drm_gpuvm_ops *fn = gpuvm->ops;
> > struct drm_gpuva_op *op;
> >
> > if (fn && fn->op_alloc)
> > @@ -1382,10 +1382,10 @@ gpuva_op_alloc(struct drm_gpuva_manager *mgr)
> > }
> >
> > static void
> > -gpuva_op_free(struct drm_gpuva_manager *mgr,
> > +gpuva_op_free(struct drm_gpuvm *gpuvm,
> > struct drm_gpuva_op *op)
> > {
> > - const struct drm_gpuva_fn_ops *fn = mgr->ops;
> > + const struct drm_gpuvm_ops *fn = gpuvm->ops;
> >
> > if (fn && fn->op_free)
> > fn->op_free(op);
> > @@ -1398,14 +1398,14 @@ drm_gpuva_sm_step(struct drm_gpuva_op *__op,
> > void *priv)
> > {
> > struct {
> > - struct drm_gpuva_manager *mgr;
> > + struct drm_gpuvm *vm;
> > struct drm_gpuva_ops *ops;
> > } *args = priv;
> > - struct drm_gpuva_manager *mgr = args->mgr;
> > + struct drm_gpuvm *gpuvm = args->vm;
> > struct drm_gpuva_ops *ops = args->ops;
> > struct drm_gpuva_op *op;
> >
> > - op = gpuva_op_alloc(mgr);
> > + op = gpuva_op_alloc(gpuvm);
> > if (unlikely(!op))
> > goto err;
> >
> > @@ -1444,20 +1444,20 @@ drm_gpuva_sm_step(struct drm_gpuva_op *__op,
> > err_free_prev:
> > kfree(op->remap.prev);
> > err_free_op:
> > - gpuva_op_free(mgr, op);
> > + gpuva_op_free(gpuvm, op);
> > err:
> > return -ENOMEM;
> > }
> >
> > -static const struct drm_gpuva_fn_ops gpuva_list_ops = {
> > +static const struct drm_gpuvm_ops gpuvm_list_ops = {
> > .sm_step_map = drm_gpuva_sm_step,
> > .sm_step_remap = drm_gpuva_sm_step,
> > .sm_step_unmap = drm_gpuva_sm_step,
> > };
> >
> > /**
> > - * drm_gpuva_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
> > - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> > + * drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
> > + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> > * @req_addr: the start address of the new mapping
> > * @req_range: the range of the new mapping
> > * @req_obj: the &drm_gem_object to map
> > @@ -1476,9 +1476,9 @@ static const struct drm_gpuva_fn_ops gpuva_list_ops = {
> > * map operation requested by the caller.
> > *
> > * Note that before calling this function again with another mapping request it
> > - * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The
> > + * is necessary to update the &drm_gpuvm's view of the GPU VA space. The
> > * previously obtained operations must be either processed or abandoned. To
> > - * update the &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
> > + * update the &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
> > * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
> > * used.
> > *
> > @@ -1488,13 +1488,13 @@ static const struct drm_gpuva_fn_ops gpuva_list_ops = {
> > * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
> > */
> > struct drm_gpuva_ops *
> > -drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
> > +drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> > u64 req_addr, u64 req_range,
> > struct drm_gem_object *req_obj, u64 req_offset)
> > {
> > struct drm_gpuva_ops *ops;
> > struct {
> > - struct drm_gpuva_manager *mgr;
> > + struct drm_gpuvm *vm;
> > struct drm_gpuva_ops *ops;
> > } args;
> > int ret;
> > @@ -1505,10 +1505,10 @@ drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
> >
> > INIT_LIST_HEAD(&ops->list);
> >
> > - args.mgr = mgr;
> > + args.vm = gpuvm;
> > args.ops = ops;
> >
> > - ret = __drm_gpuva_sm_map(mgr, &gpuva_list_ops, &args,
> > + ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
> > req_addr, req_range,
> > req_obj, req_offset);
> > if (ret)
> > @@ -1517,15 +1517,15 @@ drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
> > return ops;
> >
> > err_free_ops:
> > - drm_gpuva_ops_free(mgr, ops);
> > + drm_gpuva_ops_free(gpuvm, ops);
> > return ERR_PTR(ret);
> > }
> > -EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create);
> > +EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_ops_create);
> >
> > /**
> > - * drm_gpuva_sm_unmap_ops_create() - creates the &drm_gpuva_ops to split on
> > + * drm_gpuvm_sm_unmap_ops_create() - creates the &drm_gpuva_ops to split on
> > * unmap
> > - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> > + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> > * @req_addr: the start address of the range to unmap
> > * @req_range: the range of the mappings to unmap
> > *
> > @@ -1540,9 +1540,9 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create);
> > * remap operations.
> > *
> > * Note that before calling this function again with another range to unmap it
> > - * is necessary to update the &drm_gpuva_manager's view of the GPU VA space. The
> > + * is necessary to update the &drm_gpuvm's view of the GPU VA space. The
> > * previously obtained operations must be processed or abandoned. To update the
> > - * &drm_gpuva_manager's view of the GPU VA space drm_gpuva_insert(),
> > + * &drm_gpuvm's view of the GPU VA space drm_gpuva_insert(),
> > * drm_gpuva_destroy_locked() and/or drm_gpuva_destroy_unlocked() should be
> > * used.
> > *
> > @@ -1552,12 +1552,12 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_map_ops_create);
> > * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
> > */
> > struct drm_gpuva_ops *
> > -drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
> > +drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
> > u64 req_addr, u64 req_range)
> > {
> > struct drm_gpuva_ops *ops;
> > struct {
> > - struct drm_gpuva_manager *mgr;
> > + struct drm_gpuvm *vm;
> > struct drm_gpuva_ops *ops;
> > } args;
> > int ret;
> > @@ -1568,10 +1568,10 @@ drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
> >
> > INIT_LIST_HEAD(&ops->list);
> >
> > - args.mgr = mgr;
> > + args.vm = gpuvm;
> > args.ops = ops;
> >
> > - ret = __drm_gpuva_sm_unmap(mgr, &gpuva_list_ops, &args,
> > + ret = __drm_gpuvm_sm_unmap(gpuvm, &gpuvm_list_ops, &args,
> > req_addr, req_range);
> > if (ret)
> > goto err_free_ops;
> > @@ -1579,14 +1579,14 @@ drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
> > return ops;
> >
> > err_free_ops:
> > - drm_gpuva_ops_free(mgr, ops);
> > + drm_gpuva_ops_free(gpuvm, ops);
> > return ERR_PTR(ret);
> > }
> > -EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap_ops_create);
> > +EXPORT_SYMBOL_GPL(drm_gpuvm_sm_unmap_ops_create);
> >
> > /**
> > - * drm_gpuva_prefetch_ops_create() - creates the &drm_gpuva_ops to prefetch
> > - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> > + * drm_gpuvm_prefetch_ops_create() - creates the &drm_gpuva_ops to prefetch
> > + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> > * @addr: the start address of the range to prefetch
> > * @range: the range of the mappings to prefetch
> > *
> > @@ -1603,7 +1603,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_sm_unmap_ops_create);
> > * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
> > */
> > struct drm_gpuva_ops *
> > -drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
> > +drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm,
> > u64 addr, u64 range)
> > {
> > struct drm_gpuva_ops *ops;
> > @@ -1618,8 +1618,8 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
> >
> > INIT_LIST_HEAD(&ops->list);
> >
> > - drm_gpuva_for_each_va_range(va, mgr, addr, end) {
> > - op = gpuva_op_alloc(mgr);
> > + drm_gpuvm_for_each_va_range(va, gpuvm, addr, end) {
> > + op = gpuva_op_alloc(gpuvm);
> > if (!op) {
> > ret = -ENOMEM;
> > goto err_free_ops;
> > @@ -1633,14 +1633,14 @@ drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
> > return ops;
> >
> > err_free_ops:
> > - drm_gpuva_ops_free(mgr, ops);
> > + drm_gpuva_ops_free(gpuvm, ops);
> > return ERR_PTR(ret);
> > }
> > -EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create);
> > +EXPORT_SYMBOL_GPL(drm_gpuvm_prefetch_ops_create);
> >
> > /**
> > - * drm_gpuva_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM
> > - * @mgr: the &drm_gpuva_manager representing the GPU VA space
> > + * drm_gpuvm_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM
> > + * @gpuvm: the &drm_gpuvm representing the GPU VA space
> > * @obj: the &drm_gem_object to unmap
> > *
> > * This function creates a list of operations to perform unmapping for every
> > @@ -1658,7 +1658,7 @@ EXPORT_SYMBOL_GPL(drm_gpuva_prefetch_ops_create);
> > * Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
> > */
> > struct drm_gpuva_ops *
> > -drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
> > +drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
> > struct drm_gem_object *obj)
> > {
> > struct drm_gpuva_ops *ops;
> > @@ -1675,7 +1675,7 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
> > INIT_LIST_HEAD(&ops->list);
> >
> > drm_gem_for_each_gpuva(va, obj) {
> > - op = gpuva_op_alloc(mgr);
> > + op = gpuva_op_alloc(gpuvm);
> > if (!op) {
> > ret = -ENOMEM;
> > goto err_free_ops;
> > @@ -1689,21 +1689,21 @@ drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
> > return ops;
> >
> > err_free_ops:
> > - drm_gpuva_ops_free(mgr, ops);
> > + drm_gpuva_ops_free(gpuvm, ops);
> > return ERR_PTR(ret);
> > }
> > -EXPORT_SYMBOL_GPL(drm_gpuva_gem_unmap_ops_create);
> > +EXPORT_SYMBOL_GPL(drm_gpuvm_gem_unmap_ops_create);
> >
> > /**
> > * drm_gpuva_ops_free() - free the given &drm_gpuva_ops
> > - * @mgr: the &drm_gpuva_manager the ops were created for
> > + * @gpuvm: the &drm_gpuvm the ops were created for
> > * @ops: the &drm_gpuva_ops to free
> > *
> > * Frees the given &drm_gpuva_ops structure including all the ops associated
> > * with it.
> > */
> > void
> > -drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
> > +drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
> > struct drm_gpuva_ops *ops)
> > {
> > struct drm_gpuva_op *op, *next;
> > @@ -1717,7 +1717,7 @@ drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
> > kfree(op->remap.unmap);
> > }
> >
> > - gpuva_op_free(mgr, op);
> > + gpuva_op_free(gpuvm, op);
> > }
> >
> > kfree(ops);
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c
> > index a90c4cd8cbb2..c001952cd678 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_exec.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_exec.c
> > @@ -106,7 +106,7 @@ nouveau_exec_job_submit(struct nouveau_job *job)
> > drm_exec_until_all_locked(exec) {
> > struct drm_gpuva *va;
> >
> > - drm_gpuva_for_each_va(va, &uvmm->umgr) {
> > + drm_gpuvm_for_each_va(va, &uvmm->umgr) {
> > if (unlikely(va == &uvmm->umgr.kernel_alloc_node))
> > continue;
> >
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> > index aae780e4a4aa..c750072cb268 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> > @@ -444,7 +444,7 @@ op_map_prepare_unwind(struct nouveau_uvma *uvma)
> > static void
> > op_unmap_prepare_unwind(struct drm_gpuva *va)
> > {
> > - drm_gpuva_insert(va->mgr, va);
> > + drm_gpuva_insert(va->vm, va);
> > }
> >
> > static void
> > @@ -1194,7 +1194,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> > goto unwind_continue;
> > }
> >
> > - op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr,
> > + op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
> > op->va.addr,
> > op->va.range);
> > if (IS_ERR(op->ops)) {
> > @@ -1240,7 +1240,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> > }
> > }
> >
> > - op->ops = drm_gpuva_sm_map_ops_create(&uvmm->umgr,
> > + op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->umgr,
> > op->va.addr,
> > op->va.range,
> > op->gem.obj,
> > @@ -1264,7 +1264,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> > break;
> > }
> > case OP_UNMAP:
> > - op->ops = drm_gpuva_sm_unmap_ops_create(&uvmm->umgr,
> > + op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
> > op->va.addr,
> > op->va.range);
> > if (IS_ERR(op->ops)) {
> > @@ -1836,11 +1836,11 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
> > uvmm->kernel_managed_addr = kernel_managed_addr;
> > uvmm->kernel_managed_size = kernel_managed_size;
> >
> > - drm_gpuva_manager_init(&uvmm->umgr, cli->name,
> > - NOUVEAU_VA_SPACE_START,
> > - NOUVEAU_VA_SPACE_END,
> > - kernel_managed_addr, kernel_managed_size,
> > - NULL);
> > + drm_gpuvm_init(&uvmm->umgr, cli->name,
> > + NOUVEAU_VA_SPACE_START,
> > + NOUVEAU_VA_SPACE_END,
> > + kernel_managed_addr, kernel_managed_size,
> > + NULL);
> >
> > ret = nvif_vmm_ctor(&cli->mmu, "uvmm",
> > cli->vmm.vmm.object.oclass, RAW,
> > @@ -1855,7 +1855,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
> > return 0;
> >
> > out_free_gpuva_mgr:
> > - drm_gpuva_manager_destroy(&uvmm->umgr);
> > + drm_gpuvm_destroy(&uvmm->umgr);
> > out_unlock:
> > mutex_unlock(&cli->mutex);
> > return ret;
> > @@ -1877,7 +1877,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
> > wait_event(entity->job.wq, list_empty(&entity->job.list.head));
> >
> > nouveau_uvmm_lock(uvmm);
> > - drm_gpuva_for_each_va_safe(va, next, &uvmm->umgr) {
> > + drm_gpuvm_for_each_va_safe(va, next, &uvmm->umgr) {
> > struct nouveau_uvma *uvma = uvma_from_va(va);
> > struct drm_gem_object *obj = va->gem.obj;
> >
> > @@ -1910,7 +1910,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
> >
> > mutex_lock(&cli->mutex);
> > nouveau_vmm_fini(&uvmm->vmm);
> > - drm_gpuva_manager_destroy(&uvmm->umgr);
> > + drm_gpuvm_destroy(&uvmm->umgr);
> > mutex_unlock(&cli->mutex);
> >
> > dma_resv_fini(&uvmm->resv);
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
> > index fc7f6fd2a4e1..e96c9919d1bd 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h
> > +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
> > @@ -3,13 +3,13 @@
> > #ifndef __NOUVEAU_UVMM_H__
> > #define __NOUVEAU_UVMM_H__
> >
> > -#include <drm/drm_gpuva_mgr.h>
> > +#include <drm/drm_gpuvm.h>
> >
> > #include "nouveau_drv.h"
> >
> > struct nouveau_uvmm {
> > struct nouveau_vmm vmm;
> > - struct drm_gpuva_manager umgr;
> > + struct drm_gpuvm umgr;
> > struct maple_tree region_mt;
> > struct mutex mutex;
> > struct dma_resv resv;
> > @@ -44,7 +44,7 @@ struct nouveau_uvma {
> > #define uvmm_from_mgr(x) container_of((x), struct nouveau_uvmm, umgr)
> > #define uvma_from_va(x) container_of((x), struct nouveau_uvma, va)
> >
> > -#define to_uvmm(x) uvmm_from_mgr((x)->va.mgr)
> > +#define to_uvmm(x) uvmm_from_mgr((x)->va.vm)
> >
> > struct nouveau_uvmm_bind_job {
> > struct nouveau_job base;
> > diff --git a/include/drm/drm_debugfs.h b/include/drm/drm_debugfs.h
> > index 3bba169f9bae..cf06cee4343f 100644
> > --- a/include/drm/drm_debugfs.h
> > +++ b/include/drm/drm_debugfs.h
> > @@ -35,7 +35,7 @@
> > #include <linux/types.h>
> > #include <linux/seq_file.h>
> >
> > -#include <drm/drm_gpuva_mgr.h>
> > +#include <drm/drm_gpuvm.h>
> >
> > /**
> > * DRM_DEBUGFS_GPUVA_INFO - &drm_info_list entry to dump a GPU VA space
> > @@ -152,7 +152,7 @@ void drm_debugfs_add_files(struct drm_device *dev,
> > const struct drm_debugfs_info *files, int count);
> >
> > int drm_debugfs_gpuva_info(struct seq_file *m,
> > - struct drm_gpuva_manager *mgr);
> > + struct drm_gpuvm *gpuvm);
> > #else
> > static inline void drm_debugfs_create_files(const struct drm_info_list *files,
> > int count, struct dentry *root,
> > @@ -177,7 +177,7 @@ static inline void drm_debugfs_add_files(struct drm_device *dev,
> > {}
> >
> > static inline int drm_debugfs_gpuva_info(struct seq_file *m,
> > - struct drm_gpuva_manager *mgr)
> > + struct drm_gpuvm *gpuvm)
> > {
> > return 0;
> > }
> > diff --git a/include/drm/drm_gpuva_mgr.h b/include/drm/drm_gpuvm.h
> > similarity index 78%
> > rename from include/drm/drm_gpuva_mgr.h
> > rename to include/drm/drm_gpuvm.h
> > index ed8d50200cc3..0e802676e0a9 100644
> > --- a/include/drm/drm_gpuva_mgr.h
> > +++ b/include/drm/drm_gpuvm.h
> > @@ -1,7 +1,7 @@
> > /* SPDX-License-Identifier: GPL-2.0-only */
> >
> > -#ifndef __DRM_GPUVA_MGR_H__
> > -#define __DRM_GPUVA_MGR_H__
> > +#ifndef __DRM_GPUVM_H__
> > +#define __DRM_GPUVM_H__
> >
> > /*
> > * Copyright (c) 2022 Red Hat.
> > @@ -31,8 +31,8 @@
> >
> > #include <drm/drm_gem.h>
> >
> > -struct drm_gpuva_manager;
> > -struct drm_gpuva_fn_ops;
> > +struct drm_gpuvm;
> > +struct drm_gpuvm_ops;
> >
> > /**
> > * enum drm_gpuva_flags - flags for struct drm_gpuva
> > @@ -62,15 +62,15 @@ enum drm_gpuva_flags {
> > * struct drm_gpuva - structure to track a GPU VA mapping
> > *
> > * This structure represents a GPU VA mapping and is associated with a
> > - * &drm_gpuva_manager.
> > + * &drm_gpuvm.
> > *
> > * Typically, this structure is embedded in bigger driver structures.
> > */
> > struct drm_gpuva {
> > /**
> > - * @mgr: the &drm_gpuva_manager this object is associated with
> > + * @vm: the &drm_gpuvm this object is associated with
> > */
> > - struct drm_gpuva_manager *mgr;
> > + struct drm_gpuvm *vm;
> >
> > /**
> > * @flags: the &drm_gpuva_flags for this mapping
> > @@ -137,20 +137,20 @@ struct drm_gpuva {
> > } rb;
> > };
> >
> > -int drm_gpuva_insert(struct drm_gpuva_manager *mgr, struct drm_gpuva *va);
> > +int drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va);
> > void drm_gpuva_remove(struct drm_gpuva *va);
> >
> > void drm_gpuva_link(struct drm_gpuva *va);
> > void drm_gpuva_unlink(struct drm_gpuva *va);
> >
> > -struct drm_gpuva *drm_gpuva_find(struct drm_gpuva_manager *mgr,
> > +struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm,
> > u64 addr, u64 range);
> > -struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuva_manager *mgr,
> > +struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
> > u64 addr, u64 range);
> > -struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuva_manager *mgr, u64 start);
> > -struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuva_manager *mgr, u64 end);
> > +struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start);
> > +struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end);
> >
> > -bool drm_gpuva_interval_empty(struct drm_gpuva_manager *mgr, u64 addr, u64 range);
> > +bool drm_gpuva_interval_empty(struct drm_gpuvm *gpuvm, u64 addr, u64 range);
> >
> > static inline void drm_gpuva_init(struct drm_gpuva *va, u64 addr, u64 range,
> > struct drm_gem_object *obj, u64 offset)
> > @@ -186,7 +186,7 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va)
> > }
> >
> > /**
> > - * struct drm_gpuva_manager - DRM GPU VA Manager
> > + * struct drm_gpuvm - DRM GPU VA Manager
> > *
> > * The DRM GPU VA Manager keeps track of a GPU's virtual address space by using
> > * &maple_tree structures. Typically, this structure is embedded in bigger
> > @@ -197,7 +197,7 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va)
> > *
> > * There should be one manager instance per GPU virtual address space.
> > */
> > -struct drm_gpuva_manager {
> > +struct drm_gpuvm {
> > /**
> > * @name: the name of the DRM GPU VA space
> > */
> > @@ -237,100 +237,99 @@ struct drm_gpuva_manager {
> > struct drm_gpuva kernel_alloc_node;
> >
> > /**
> > - * @ops: &drm_gpuva_fn_ops providing the split/merge steps to drivers
> > + * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
> > */
> > - const struct drm_gpuva_fn_ops *ops;
> > + const struct drm_gpuvm_ops *ops;
> > };
> >
> > -void drm_gpuva_manager_init(struct drm_gpuva_manager *mgr,
> > - const char *name,
> > - u64 start_offset, u64 range,
> > - u64 reserve_offset, u64 reserve_range,
> > - const struct drm_gpuva_fn_ops *ops);
> > -void drm_gpuva_manager_destroy(struct drm_gpuva_manager *mgr);
> > +void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
> > + u64 start_offset, u64 range,
> > + u64 reserve_offset, u64 reserve_range,
> > + const struct drm_gpuvm_ops *ops);
> > +void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
> >
> > static inline struct drm_gpuva *
> > __drm_gpuva_next(struct drm_gpuva *va)
> > {
> > - if (va && !list_is_last(&va->rb.entry, &va->mgr->rb.list))
> > + if (va && !list_is_last(&va->rb.entry, &va->vm->rb.list))
> > return list_next_entry(va, rb.entry);
> >
> > return NULL;
> > }
> >
> > /**
> > - * drm_gpuva_for_each_va_range() - iterate over a range of &drm_gpuvas
> > + * drm_gpuvm_for_each_va_range() - iterate over a range of &drm_gpuvas
> > * @va__: &drm_gpuva structure to assign to in each iteration step
> > - * @mgr__: &drm_gpuva_manager to walk over
> > + * @gpuvm__: &drm_gpuvm to walk over
> > * @start__: starting offset, the first gpuva will overlap this
> > * @end__: ending offset, the last gpuva will start before this (but may
> > * overlap)
> > *
> > - * This iterator walks over all &drm_gpuvas in the &drm_gpuva_manager that lie
> > + * This iterator walks over all &drm_gpuvas in the &drm_gpuvm that lie
> > * between @start__ and @end__. It is implemented similarly to list_for_each(),
> > - * but is using the &drm_gpuva_manager's internal interval tree to accelerate
> > + * but is using the &drm_gpuvm's internal interval tree to accelerate
> > * the search for the starting &drm_gpuva, and hence isn't safe against removal
> > * of elements. It assumes that @end__ is within (or is the upper limit of) the
> > - * &drm_gpuva_manager. This iterator does not skip over the &drm_gpuva_manager's
> > + * &drm_gpuvm. This iterator does not skip over the &drm_gpuvm's
> > * @kernel_alloc_node.
> > */
> > -#define drm_gpuva_for_each_va_range(va__, mgr__, start__, end__) \
> > - for (va__ = drm_gpuva_find_first((mgr__), (start__), (end__) - (start__)); \
> > +#define drm_gpuvm_for_each_va_range(va__, gpuvm__, start__, end__) \
> > + for (va__ = drm_gpuva_find_first((gpuvm__), (start__), (end__) - (start__)); \
> > va__ && (va__->va.addr < (end__)); \
> > va__ = __drm_gpuva_next(va__))
> >
> > /**
> > - * drm_gpuva_for_each_va_range_safe() - safely iterate over a range of
> > + * drm_gpuvm_for_each_va_range_safe() - safely iterate over a range of
> > * &drm_gpuvas
> > * @va__: &drm_gpuva to assign to in each iteration step
> > * @next__: another &drm_gpuva to use as temporary storage
> > - * @mgr__: &drm_gpuva_manager to walk over
> > + * @gpuvm__: &drm_gpuvm to walk over
> > * @start__: starting offset, the first gpuva will overlap this
> > * @end__: ending offset, the last gpuva will start before this (but may
> > * overlap)
> > *
> > - * This iterator walks over all &drm_gpuvas in the &drm_gpuva_manager that lie
> > + * This iterator walks over all &drm_gpuvas in the &drm_gpuvm that lie
> > * between @start__ and @end__. It is implemented similarly to
> > - * list_for_each_safe(), but is using the &drm_gpuva_manager's internal interval
> > + * list_for_each_safe(), but is using the &drm_gpuvm's internal interval
> > * tree to accelerate the search for the starting &drm_gpuva, and hence is safe
> > * against removal of elements. It assumes that @end__ is within (or is the
> > - * upper limit of) the &drm_gpuva_manager. This iterator does not skip over the
> > - * &drm_gpuva_manager's @kernel_alloc_node.
> > + * upper limit of) the &drm_gpuvm. This iterator does not skip over the
> > + * &drm_gpuvm's @kernel_alloc_node.
> > */
> > -#define drm_gpuva_for_each_va_range_safe(va__, next__, mgr__, start__, end__) \
> > - for (va__ = drm_gpuva_find_first((mgr__), (start__), (end__) - (start__)), \
> > +#define drm_gpuvm_for_each_va_range_safe(va__, next__, gpuvm__, start__, end__) \
> > + for (va__ = drm_gpuva_find_first((gpuvm__), (start__), (end__) - (start__)), \
> > next__ = __drm_gpuva_next(va__); \
> > va__ && (va__->va.addr < (end__)); \
> > va__ = next__, next__ = __drm_gpuva_next(va__))
> >
> > /**
> > - * drm_gpuva_for_each_va() - iterate over all &drm_gpuvas
> > + * drm_gpuvm_for_each_va() - iterate over all &drm_gpuvas
> > * @va__: &drm_gpuva to assign to in each iteration step
> > - * @mgr__: &drm_gpuva_manager to walk over
> > + * @gpuvm__: &drm_gpuvm to walk over
> > *
> > * This iterator walks over all &drm_gpuva structures associated with the given
> > - * &drm_gpuva_manager.
> > + * &drm_gpuvm.
> > */
> > -#define drm_gpuva_for_each_va(va__, mgr__) \
> > - list_for_each_entry(va__, &(mgr__)->rb.list, rb.entry)
> > +#define drm_gpuvm_for_each_va(va__, gpuvm__) \
> > + list_for_each_entry(va__, &(gpuvm__)->rb.list, rb.entry)
> >
> > /**
> > - * drm_gpuva_for_each_va_safe() - safely iterate over all &drm_gpuvas
> > + * drm_gpuvm_for_each_va_safe() - safely iterate over all &drm_gpuvas
> > * @va__: &drm_gpuva to assign to in each iteration step
> > * @next__: another &drm_gpuva to use as temporary storage
> > - * @mgr__: &drm_gpuva_manager to walk over
> > + * @gpuvm__: &drm_gpuvm to walk over
> > *
> > * This iterator walks over all &drm_gpuva structures associated with the given
> > - * &drm_gpuva_manager. It is implemented with list_for_each_entry_safe(), and
> > + * &drm_gpuvm. It is implemented with list_for_each_entry_safe(), and
> > * hence safe against the removal of elements.
> > */
> > -#define drm_gpuva_for_each_va_safe(va__, next__, mgr__) \
> > - list_for_each_entry_safe(va__, next__, &(mgr__)->rb.list, rb.entry)
> > +#define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \
> > + list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry)
> >
> > /**
> > * enum drm_gpuva_op_type - GPU VA operation type
> > *
> > - * Operations to alter the GPU VA mappings tracked by the &drm_gpuva_manager.
> > + * Operations to alter the GPU VA mappings tracked by the &drm_gpuvm.
> > */
> > enum drm_gpuva_op_type {
> > /**
> > @@ -413,7 +412,7 @@ struct drm_gpuva_op_unmap {
> > *
> > * Optionally, if &keep is set, drivers may keep the actual page table
> > * mappings for this &drm_gpuva, adding the missing page table entries
> > - * only and update the &drm_gpuva_manager accordingly.
> > + * only and update the &drm_gpuvm accordingly.
> > */
> > bool keep;
> > };
> > @@ -584,22 +583,22 @@ struct drm_gpuva_ops {
> > #define drm_gpuva_next_op(op) list_next_entry(op, entry)
> >
> > struct drm_gpuva_ops *
> > -drm_gpuva_sm_map_ops_create(struct drm_gpuva_manager *mgr,
> > +drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> > u64 addr, u64 range,
> > struct drm_gem_object *obj, u64 offset);
> > struct drm_gpuva_ops *
> > -drm_gpuva_sm_unmap_ops_create(struct drm_gpuva_manager *mgr,
> > +drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
> > u64 addr, u64 range);
> >
> > struct drm_gpuva_ops *
> > -drm_gpuva_prefetch_ops_create(struct drm_gpuva_manager *mgr,
> > +drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm,
> > u64 addr, u64 range);
> >
> > struct drm_gpuva_ops *
> > -drm_gpuva_gem_unmap_ops_create(struct drm_gpuva_manager *mgr,
> > +drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
> > struct drm_gem_object *obj);
> >
> > -void drm_gpuva_ops_free(struct drm_gpuva_manager *mgr,
> > +void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
> > struct drm_gpuva_ops *ops);
> >
> > static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
> > @@ -610,15 +609,15 @@ static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
> > }
> >
> > /**
> > - * struct drm_gpuva_fn_ops - callbacks for split/merge steps
> > + * struct drm_gpuvm_ops - callbacks for split/merge steps
> > *
> > - * This structure defines the callbacks used by &drm_gpuva_sm_map and
> > - * &drm_gpuva_sm_unmap to provide the split/merge steps for map and unmap
> > + * This structure defines the callbacks used by &drm_gpuvm_sm_map and
> > + * &drm_gpuvm_sm_unmap to provide the split/merge steps for map and unmap
> > * operations to drivers.
> > */
> > -struct drm_gpuva_fn_ops {
> > +struct drm_gpuvm_ops {
> > /**
> > - * @op_alloc: called when the &drm_gpuva_manager allocates
> > + * @op_alloc: called when the &drm_gpuvm allocates
> > * a struct drm_gpuva_op
> > *
> > * Some drivers may want to embed struct drm_gpuva_op into driver
> > @@ -630,7 +629,7 @@ struct drm_gpuva_fn_ops {
> > struct drm_gpuva_op *(*op_alloc)(void);
> >
> > /**
> > - * @op_free: called when the &drm_gpuva_manager frees a
> > + * @op_free: called when the &drm_gpuvm frees a
> > * struct drm_gpuva_op
> > *
> > * Some drivers may want to embed struct drm_gpuva_op into driver
> > @@ -642,19 +641,19 @@ struct drm_gpuva_fn_ops {
> > void (*op_free)(struct drm_gpuva_op *op);
> >
> > /**
> > - * @sm_step_map: called from &drm_gpuva_sm_map to finally insert the
> > + * @sm_step_map: called from &drm_gpuvm_sm_map to finally insert the
> > * mapping once all previous steps were completed
> > *
> > * The &priv pointer matches the one the driver passed to
> > - * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
> > + * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
> > *
> > - * Can be NULL if &drm_gpuva_sm_map is used.
> > + * Can be NULL if &drm_gpuvm_sm_map is used.
> > */
> > int (*sm_step_map)(struct drm_gpuva_op *op, void *priv);
> >
> > /**
> > - * @sm_step_remap: called from &drm_gpuva_sm_map and
> > - * &drm_gpuva_sm_unmap to split up an existent mapping
> > + * @sm_step_remap: called from &drm_gpuvm_sm_map and
> > + * &drm_gpuvm_sm_unmap to split up an existent mapping
> > *
> > * This callback is called when existent mapping needs to be split up.
> > * This is the case when either a newly requested mapping overlaps or
> > @@ -662,38 +661,38 @@ struct drm_gpuva_fn_ops {
> > * mapping is requested.
> > *
> > * The &priv pointer matches the one the driver passed to
> > - * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
> > + * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
> > *
> > - * Can be NULL if neither &drm_gpuva_sm_map nor &drm_gpuva_sm_unmap is
> > + * Can be NULL if neither &drm_gpuvm_sm_map nor &drm_gpuvm_sm_unmap is
> > * used.
> > */
> > int (*sm_step_remap)(struct drm_gpuva_op *op, void *priv);
> >
> > /**
> > - * @sm_step_unmap: called from &drm_gpuva_sm_map and
> > - * &drm_gpuva_sm_unmap to unmap an existent mapping
> > + * @sm_step_unmap: called from &drm_gpuvm_sm_map and
> > + * &drm_gpuvm_sm_unmap to unmap an existent mapping
> > *
> > * This callback is called when existent mapping needs to be unmapped.
> > * This is the case when either a newly requested mapping encloses an
> > * existent mapping or an unmap of an existent mapping is requested.
> > *
> > * The &priv pointer matches the one the driver passed to
> > - * &drm_gpuva_sm_map or &drm_gpuva_sm_unmap, respectively.
> > + * &drm_gpuvm_sm_map or &drm_gpuvm_sm_unmap, respectively.
> > *
> > - * Can be NULL if neither &drm_gpuva_sm_map nor &drm_gpuva_sm_unmap is
> > + * Can be NULL if neither &drm_gpuvm_sm_map nor &drm_gpuvm_sm_unmap is
> > * used.
> > */
> > int (*sm_step_unmap)(struct drm_gpuva_op *op, void *priv);
> > };
> >
> > -int drm_gpuva_sm_map(struct drm_gpuva_manager *mgr, void *priv,
> > +int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> > u64 addr, u64 range,
> > struct drm_gem_object *obj, u64 offset);
> >
> > -int drm_gpuva_sm_unmap(struct drm_gpuva_manager *mgr, void *priv,
> > +int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
> > u64 addr, u64 range);
> >
> > -void drm_gpuva_map(struct drm_gpuva_manager *mgr,
> > +void drm_gpuva_map(struct drm_gpuvm *gpuvm,
> > struct drm_gpuva *va,
> > struct drm_gpuva_op_map *op);
> >
> > @@ -703,4 +702,4 @@ void drm_gpuva_remap(struct drm_gpuva *prev,
> >
> > void drm_gpuva_unmap(struct drm_gpuva_op_unmap *op);
> >
> > -#endif /* __DRM_GPUVA_MGR_H__ */
> > +#endif /* __DRM_GPUVM_H__ */
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH drm-misc-next v4 2/8] drm/gpuvm: allow building as module
2023-09-20 14:42 [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Danilo Krummrich
2023-09-20 14:42 ` [PATCH drm-misc-next v4 1/8] drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm Danilo Krummrich
@ 2023-09-20 14:42 ` Danilo Krummrich
2023-09-25 0:42 ` Dave Airlie
2023-09-20 14:42 ` [PATCH drm-misc-next v4 3/8] drm/nouveau: uvmm: rename 'umgr' to 'base' Danilo Krummrich
` (6 subsequent siblings)
8 siblings, 1 reply; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-20 14:42 UTC (permalink / raw)
To: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel, Danilo Krummrich
Currently, the DRM GPUVM does not have any core dependencies preventing
a module build.
Also, new features from subsequent patches require helpers (namely
drm_exec) which can be built as module.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
drivers/gpu/drm/Kconfig | 7 +++++++
drivers/gpu/drm/Makefile | 2 +-
drivers/gpu/drm/drm_gpuvm.c | 3 +++
drivers/gpu/drm/nouveau/Kconfig | 1 +
4 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index ab9ef1c20349..0f78a03e4e84 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -216,6 +216,13 @@ config DRM_EXEC
help
Execution context for command submissions
+config DRM_GPUVM
+ tristate
+ depends on DRM && DRM_EXEC
+ help
+ GPU-VM representation providing helpers to manage a GPUs virtual
+ address space
+
config DRM_BUDDY
tristate
depends on DRM
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 7a84b3cddeab..8e1bde059170 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -45,7 +45,6 @@ drm-y := \
drm_vblank.o \
drm_vblank_work.o \
drm_vma_manager.o \
- drm_gpuvm.o \
drm_writeback.o
drm-$(CONFIG_DRM_LEGACY) += \
drm_agpsupport.o \
@@ -81,6 +80,7 @@ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
#
#
obj-$(CONFIG_DRM_EXEC) += drm_exec.o
+obj-$(CONFIG_DRM_GPUVM) += drm_gpuvm.o
obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 7074bcad5b28..bfea4a8a19ec 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1723,3 +1723,6 @@ drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
kfree(ops);
}
EXPORT_SYMBOL_GPL(drm_gpuva_ops_free);
+
+MODULE_DESCRIPTION("DRM GPUVM");
+MODULE_LICENSE("GPL");
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index c52e8096cca4..1e6aaf95ff7c 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -11,6 +11,7 @@ config DRM_NOUVEAU
select DRM_TTM
select DRM_TTM_HELPER
select DRM_EXEC
+ select DRM_GPUVM
select DRM_SCHED
select I2C
select I2C_ALGOBIT
--
2.41.0
* Re: [PATCH drm-misc-next v4 2/8] drm/gpuvm: allow building as module
2023-09-20 14:42 ` [PATCH drm-misc-next v4 2/8] drm/gpuvm: allow building as module Danilo Krummrich
@ 2023-09-25 0:42 ` Dave Airlie
0 siblings, 0 replies; 29+ messages in thread
From: Dave Airlie @ 2023-09-25 0:42 UTC (permalink / raw)
To: Danilo Krummrich
Cc: daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand,
dri-devel, nouveau, linux-kernel
On Thu, 21 Sept 2023 at 00:43, Danilo Krummrich <dakr@redhat.com> wrote:
>
> Currently, the DRM GPUVM does not have any core dependencies preventing
> a module build.
>
> Also, new features from subsequent patches require helpers (namely
> drm_exec) which can be built as module.
>
> Reviewed-by: Christian König <christian.koenig@amd.com>
> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
> ---
> drivers/gpu/drm/Kconfig | 7 +++++++
> drivers/gpu/drm/Makefile | 2 +-
> drivers/gpu/drm/drm_gpuvm.c | 3 +++
> drivers/gpu/drm/nouveau/Kconfig | 1 +
> 4 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index ab9ef1c20349..0f78a03e4e84 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -216,6 +216,13 @@ config DRM_EXEC
> help
> Execution context for command submissions
>
> +config DRM_GPUVM
> + tristate
> + depends on DRM && DRM_EXEC
> + help
> + GPU-VM representation providing helpers to manage a GPUs virtual
> + address space
> +
> config DRM_BUDDY
> tristate
> depends on DRM
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index 7a84b3cddeab..8e1bde059170 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -45,7 +45,6 @@ drm-y := \
> drm_vblank.o \
> drm_vblank_work.o \
> drm_vma_manager.o \
> - drm_gpuvm.o \
> drm_writeback.o
> drm-$(CONFIG_DRM_LEGACY) += \
> drm_agpsupport.o \
> @@ -81,6 +80,7 @@ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
> #
> #
> obj-$(CONFIG_DRM_EXEC) += drm_exec.o
> +obj-$(CONFIG_DRM_GPUVM) += drm_gpuvm.o
>
> obj-$(CONFIG_DRM_BUDDY) += drm_buddy.o
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index 7074bcad5b28..bfea4a8a19ec 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -1723,3 +1723,6 @@ drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
> kfree(ops);
> }
> EXPORT_SYMBOL_GPL(drm_gpuva_ops_free);
> +
> +MODULE_DESCRIPTION("DRM GPUVM");
> +MODULE_LICENSE("GPL");
> diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
> index c52e8096cca4..1e6aaf95ff7c 100644
> --- a/drivers/gpu/drm/nouveau/Kconfig
> +++ b/drivers/gpu/drm/nouveau/Kconfig
> @@ -11,6 +11,7 @@ config DRM_NOUVEAU
> select DRM_TTM
> select DRM_TTM_HELPER
> select DRM_EXEC
> + select DRM_GPUVM
> select DRM_SCHED
> select I2C
> select I2C_ALGOBIT
> --
> 2.41.0
>
* [PATCH drm-misc-next v4 3/8] drm/nouveau: uvmm: rename 'umgr' to 'base'
2023-09-20 14:42 [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Danilo Krummrich
2023-09-20 14:42 ` [PATCH drm-misc-next v4 1/8] drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm Danilo Krummrich
2023-09-20 14:42 ` [PATCH drm-misc-next v4 2/8] drm/gpuvm: allow building as module Danilo Krummrich
@ 2023-09-20 14:42 ` Danilo Krummrich
2023-09-25 0:43 ` Dave Airlie
2023-09-20 14:42 ` [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm Danilo Krummrich
` (5 subsequent siblings)
8 siblings, 1 reply; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-20 14:42 UTC (permalink / raw)
To: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel, Danilo Krummrich
Rename struct drm_gpuvm within struct nouveau_uvmm from 'umgr' to base.
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
drivers/gpu/drm/nouveau/nouveau_debugfs.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_exec.c | 4 +--
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 32 +++++++++++------------
drivers/gpu/drm/nouveau/nouveau_uvmm.h | 6 ++---
4 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
index 053f703f2f68..e83db051e851 100644
--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
@@ -231,7 +231,7 @@ nouveau_debugfs_gpuva(struct seq_file *m, void *data)
continue;
nouveau_uvmm_lock(uvmm);
- drm_debugfs_gpuva_info(m, &uvmm->umgr);
+ drm_debugfs_gpuva_info(m, &uvmm->base);
seq_puts(m, "\n");
nouveau_debugfs_gpuva_regions(m, uvmm);
nouveau_uvmm_unlock(uvmm);
diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c
index c001952cd678..b4239af29e5a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_exec.c
+++ b/drivers/gpu/drm/nouveau/nouveau_exec.c
@@ -106,8 +106,8 @@ nouveau_exec_job_submit(struct nouveau_job *job)
drm_exec_until_all_locked(exec) {
struct drm_gpuva *va;
- drm_gpuvm_for_each_va(va, &uvmm->umgr) {
- if (unlikely(va == &uvmm->umgr.kernel_alloc_node))
+ drm_gpuvm_for_each_va(va, &uvmm->base) {
+ if (unlikely(va == &uvmm->base.kernel_alloc_node))
continue;
ret = drm_exec_prepare_obj(exec, va->gem.obj, 1);
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index c750072cb268..6c86b64273c3 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -329,7 +329,7 @@ nouveau_uvma_region_create(struct nouveau_uvmm *uvmm,
struct nouveau_uvma_region *reg;
int ret;
- if (!drm_gpuva_interval_empty(&uvmm->umgr, addr, range))
+ if (!drm_gpuva_interval_empty(&uvmm->base, addr, range))
return -ENOSPC;
ret = nouveau_uvma_region_alloc(&reg);
@@ -384,7 +384,7 @@ nouveau_uvma_region_empty(struct nouveau_uvma_region *reg)
{
struct nouveau_uvmm *uvmm = reg->uvmm;
- return drm_gpuva_interval_empty(&uvmm->umgr,
+ return drm_gpuva_interval_empty(&uvmm->base,
reg->va.addr,
reg->va.range);
}
@@ -589,7 +589,7 @@ op_map_prepare(struct nouveau_uvmm *uvmm,
uvma->region = args->region;
uvma->kind = args->kind;
- drm_gpuva_map(&uvmm->umgr, &uvma->va, op);
+ drm_gpuva_map(&uvmm->base, &uvma->va, op);
/* Keep a reference until this uvma is destroyed. */
nouveau_uvma_gem_get(uvma);
@@ -1194,7 +1194,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
goto unwind_continue;
}
- op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
+ op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->base,
op->va.addr,
op->va.range);
if (IS_ERR(op->ops)) {
@@ -1205,7 +1205,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
ret = nouveau_uvmm_sm_unmap_prepare(uvmm, &op->new,
op->ops);
if (ret) {
- drm_gpuva_ops_free(&uvmm->umgr, op->ops);
+ drm_gpuva_ops_free(&uvmm->base, op->ops);
op->ops = NULL;
op->reg = NULL;
goto unwind_continue;
@@ -1240,7 +1240,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
}
}
- op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->umgr,
+ op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
op->va.addr,
op->va.range,
op->gem.obj,
@@ -1256,7 +1256,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
op->va.range,
op->flags & 0xff);
if (ret) {
- drm_gpuva_ops_free(&uvmm->umgr, op->ops);
+ drm_gpuva_ops_free(&uvmm->base, op->ops);
op->ops = NULL;
goto unwind_continue;
}
@@ -1264,7 +1264,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
break;
}
case OP_UNMAP:
- op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
+ op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->base,
op->va.addr,
op->va.range);
if (IS_ERR(op->ops)) {
@@ -1275,7 +1275,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
ret = nouveau_uvmm_sm_unmap_prepare(uvmm, &op->new,
op->ops);
if (ret) {
- drm_gpuva_ops_free(&uvmm->umgr, op->ops);
+ drm_gpuva_ops_free(&uvmm->base, op->ops);
op->ops = NULL;
goto unwind_continue;
}
@@ -1404,7 +1404,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
break;
}
- drm_gpuva_ops_free(&uvmm->umgr, op->ops);
+ drm_gpuva_ops_free(&uvmm->base, op->ops);
op->ops = NULL;
op->reg = NULL;
}
@@ -1509,7 +1509,7 @@ nouveau_uvmm_bind_job_free_work_fn(struct work_struct *work)
}
if (!IS_ERR_OR_NULL(op->ops))
- drm_gpuva_ops_free(&uvmm->umgr, op->ops);
+ drm_gpuva_ops_free(&uvmm->base, op->ops);
if (obj)
drm_gem_object_put(obj);
@@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
uvmm->kernel_managed_addr = kernel_managed_addr;
uvmm->kernel_managed_size = kernel_managed_size;
- drm_gpuvm_init(&uvmm->umgr, cli->name,
+ drm_gpuvm_init(&uvmm->base, cli->name,
NOUVEAU_VA_SPACE_START,
NOUVEAU_VA_SPACE_END,
kernel_managed_addr, kernel_managed_size,
@@ -1855,7 +1855,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
return 0;
out_free_gpuva_mgr:
- drm_gpuvm_destroy(&uvmm->umgr);
+ drm_gpuvm_destroy(&uvmm->base);
out_unlock:
mutex_unlock(&cli->mutex);
return ret;
@@ -1877,11 +1877,11 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
wait_event(entity->job.wq, list_empty(&entity->job.list.head));
nouveau_uvmm_lock(uvmm);
- drm_gpuvm_for_each_va_safe(va, next, &uvmm->umgr) {
+ drm_gpuvm_for_each_va_safe(va, next, &uvmm->base) {
struct nouveau_uvma *uvma = uvma_from_va(va);
struct drm_gem_object *obj = va->gem.obj;
- if (unlikely(va == &uvmm->umgr.kernel_alloc_node))
+ if (unlikely(va == &uvmm->base.kernel_alloc_node))
continue;
drm_gpuva_remove(va);
@@ -1910,7 +1910,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
mutex_lock(&cli->mutex);
nouveau_vmm_fini(&uvmm->vmm);
- drm_gpuvm_destroy(&uvmm->umgr);
+ drm_gpuvm_destroy(&uvmm->base);
mutex_unlock(&cli->mutex);
dma_resv_fini(&uvmm->resv);
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
index e96c9919d1bd..a308c59760a5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
@@ -8,8 +8,8 @@
#include "nouveau_drv.h"
struct nouveau_uvmm {
+ struct drm_gpuvm base;
struct nouveau_vmm vmm;
- struct drm_gpuvm umgr;
struct maple_tree region_mt;
struct mutex mutex;
struct dma_resv resv;
@@ -41,10 +41,10 @@ struct nouveau_uvma {
u8 kind;
};
-#define uvmm_from_mgr(x) container_of((x), struct nouveau_uvmm, umgr)
+#define uvmm_from_gpuvm(x) container_of((x), struct nouveau_uvmm, base)
#define uvma_from_va(x) container_of((x), struct nouveau_uvma, va)
-#define to_uvmm(x) uvmm_from_mgr((x)->va.vm)
+#define to_uvmm(x) uvmm_from_gpuvm((x)->va.vm)
struct nouveau_uvmm_bind_job {
struct nouveau_job base;
--
2.41.0
* Re: [PATCH drm-misc-next v4 3/8] drm/nouveau: uvmm: rename 'umgr' to 'base'
2023-09-20 14:42 ` [PATCH drm-misc-next v4 3/8] drm/nouveau: uvmm: rename 'umgr' to 'base' Danilo Krummrich
@ 2023-09-25 0:43 ` Dave Airlie
0 siblings, 0 replies; 29+ messages in thread
From: Dave Airlie @ 2023-09-25 0:43 UTC (permalink / raw)
To: Danilo Krummrich
Cc: daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand,
dri-devel, nouveau, linux-kernel
On Thu, 21 Sept 2023 at 00:44, Danilo Krummrich <dakr@redhat.com> wrote:
>
> Rename struct drm_gpuvm within struct nouveau_uvmm from 'umgr' to base.
>
> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
> ---
> drivers/gpu/drm/nouveau/nouveau_debugfs.c | 2 +-
> drivers/gpu/drm/nouveau/nouveau_exec.c | 4 +--
> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 32 +++++++++++------------
> drivers/gpu/drm/nouveau/nouveau_uvmm.h | 6 ++---
> 4 files changed, 22 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
> index 053f703f2f68..e83db051e851 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
> @@ -231,7 +231,7 @@ nouveau_debugfs_gpuva(struct seq_file *m, void *data)
> continue;
>
> nouveau_uvmm_lock(uvmm);
> - drm_debugfs_gpuva_info(m, &uvmm->umgr);
> + drm_debugfs_gpuva_info(m, &uvmm->base);
> seq_puts(m, "\n");
> nouveau_debugfs_gpuva_regions(m, uvmm);
> nouveau_uvmm_unlock(uvmm);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c
> index c001952cd678..b4239af29e5a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_exec.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_exec.c
> @@ -106,8 +106,8 @@ nouveau_exec_job_submit(struct nouveau_job *job)
> drm_exec_until_all_locked(exec) {
> struct drm_gpuva *va;
>
> - drm_gpuvm_for_each_va(va, &uvmm->umgr) {
> - if (unlikely(va == &uvmm->umgr.kernel_alloc_node))
> + drm_gpuvm_for_each_va(va, &uvmm->base) {
> + if (unlikely(va == &uvmm->base.kernel_alloc_node))
> continue;
>
> ret = drm_exec_prepare_obj(exec, va->gem.obj, 1);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> index c750072cb268..6c86b64273c3 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> @@ -329,7 +329,7 @@ nouveau_uvma_region_create(struct nouveau_uvmm *uvmm,
> struct nouveau_uvma_region *reg;
> int ret;
>
> - if (!drm_gpuva_interval_empty(&uvmm->umgr, addr, range))
> + if (!drm_gpuva_interval_empty(&uvmm->base, addr, range))
> return -ENOSPC;
>
> ret = nouveau_uvma_region_alloc(&reg);
> @@ -384,7 +384,7 @@ nouveau_uvma_region_empty(struct nouveau_uvma_region *reg)
> {
> struct nouveau_uvmm *uvmm = reg->uvmm;
>
> - return drm_gpuva_interval_empty(&uvmm->umgr,
> + return drm_gpuva_interval_empty(&uvmm->base,
> reg->va.addr,
> reg->va.range);
> }
> @@ -589,7 +589,7 @@ op_map_prepare(struct nouveau_uvmm *uvmm,
> uvma->region = args->region;
> uvma->kind = args->kind;
>
> - drm_gpuva_map(&uvmm->umgr, &uvma->va, op);
> + drm_gpuva_map(&uvmm->base, &uvma->va, op);
>
> /* Keep a reference until this uvma is destroyed. */
> nouveau_uvma_gem_get(uvma);
> @@ -1194,7 +1194,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> goto unwind_continue;
> }
>
> - op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
> + op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->base,
> op->va.addr,
> op->va.range);
> if (IS_ERR(op->ops)) {
> @@ -1205,7 +1205,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> ret = nouveau_uvmm_sm_unmap_prepare(uvmm, &op->new,
> op->ops);
> if (ret) {
> - drm_gpuva_ops_free(&uvmm->umgr, op->ops);
> + drm_gpuva_ops_free(&uvmm->base, op->ops);
> op->ops = NULL;
> op->reg = NULL;
> goto unwind_continue;
> @@ -1240,7 +1240,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> }
> }
>
> - op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->umgr,
> + op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
> op->va.addr,
> op->va.range,
> op->gem.obj,
> @@ -1256,7 +1256,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> op->va.range,
> op->flags & 0xff);
> if (ret) {
> - drm_gpuva_ops_free(&uvmm->umgr, op->ops);
> + drm_gpuva_ops_free(&uvmm->base, op->ops);
> op->ops = NULL;
> goto unwind_continue;
> }
> @@ -1264,7 +1264,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> break;
> }
> case OP_UNMAP:
> - op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->umgr,
> + op->ops = drm_gpuvm_sm_unmap_ops_create(&uvmm->base,
> op->va.addr,
> op->va.range);
> if (IS_ERR(op->ops)) {
> @@ -1275,7 +1275,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> ret = nouveau_uvmm_sm_unmap_prepare(uvmm, &op->new,
> op->ops);
> if (ret) {
> - drm_gpuva_ops_free(&uvmm->umgr, op->ops);
> + drm_gpuva_ops_free(&uvmm->base, op->ops);
> op->ops = NULL;
> goto unwind_continue;
> }
> @@ -1404,7 +1404,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
> break;
> }
>
> - drm_gpuva_ops_free(&uvmm->umgr, op->ops);
> + drm_gpuva_ops_free(&uvmm->base, op->ops);
> op->ops = NULL;
> op->reg = NULL;
> }
> @@ -1509,7 +1509,7 @@ nouveau_uvmm_bind_job_free_work_fn(struct work_struct *work)
> }
>
> if (!IS_ERR_OR_NULL(op->ops))
> - drm_gpuva_ops_free(&uvmm->umgr, op->ops);
> + drm_gpuva_ops_free(&uvmm->base, op->ops);
>
> if (obj)
> drm_gem_object_put(obj);
> @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
> uvmm->kernel_managed_addr = kernel_managed_addr;
> uvmm->kernel_managed_size = kernel_managed_size;
>
> - drm_gpuvm_init(&uvmm->umgr, cli->name,
> + drm_gpuvm_init(&uvmm->base, cli->name,
> NOUVEAU_VA_SPACE_START,
> NOUVEAU_VA_SPACE_END,
> kernel_managed_addr, kernel_managed_size,
> @@ -1855,7 +1855,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
> return 0;
>
> out_free_gpuva_mgr:
> - drm_gpuvm_destroy(&uvmm->umgr);
> + drm_gpuvm_destroy(&uvmm->base);
> out_unlock:
> mutex_unlock(&cli->mutex);
> return ret;
> @@ -1877,11 +1877,11 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
> wait_event(entity->job.wq, list_empty(&entity->job.list.head));
>
> nouveau_uvmm_lock(uvmm);
> - drm_gpuvm_for_each_va_safe(va, next, &uvmm->umgr) {
> + drm_gpuvm_for_each_va_safe(va, next, &uvmm->base) {
> struct nouveau_uvma *uvma = uvma_from_va(va);
> struct drm_gem_object *obj = va->gem.obj;
>
> - if (unlikely(va == &uvmm->umgr.kernel_alloc_node))
> + if (unlikely(va == &uvmm->base.kernel_alloc_node))
> continue;
>
> drm_gpuva_remove(va);
> @@ -1910,7 +1910,7 @@ nouveau_uvmm_fini(struct nouveau_uvmm *uvmm)
>
> mutex_lock(&cli->mutex);
> nouveau_vmm_fini(&uvmm->vmm);
> - drm_gpuvm_destroy(&uvmm->umgr);
> + drm_gpuvm_destroy(&uvmm->base);
> mutex_unlock(&cli->mutex);
>
> dma_resv_fini(&uvmm->resv);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.h b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
> index e96c9919d1bd..a308c59760a5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.h
> @@ -8,8 +8,8 @@
> #include "nouveau_drv.h"
>
> struct nouveau_uvmm {
> + struct drm_gpuvm base;
> struct nouveau_vmm vmm;
> - struct drm_gpuvm umgr;
> struct maple_tree region_mt;
> struct mutex mutex;
> struct dma_resv resv;
> @@ -41,10 +41,10 @@ struct nouveau_uvma {
> u8 kind;
> };
>
> -#define uvmm_from_mgr(x) container_of((x), struct nouveau_uvmm, umgr)
> +#define uvmm_from_gpuvm(x) container_of((x), struct nouveau_uvmm, base)
> #define uvma_from_va(x) container_of((x), struct nouveau_uvma, va)
>
> -#define to_uvmm(x) uvmm_from_mgr((x)->va.vm)
> +#define to_uvmm(x) uvmm_from_gpuvm((x)->va.vm)
>
> struct nouveau_uvmm_bind_job {
> struct nouveau_job base;
> --
> 2.41.0
>
* [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm
2023-09-20 14:42 [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Danilo Krummrich
` (2 preceding siblings ...)
2023-09-20 14:42 ` [PATCH drm-misc-next v4 3/8] drm/nouveau: uvmm: rename 'umgr' to 'base' Danilo Krummrich
@ 2023-09-20 14:42 ` Danilo Krummrich
2023-09-21 7:39 ` Christian König
2023-09-20 14:42 ` [PATCH drm-misc-next v4 5/8] drm/gpuvm: add an abstraction for a VM / BO combination Danilo Krummrich
` (4 subsequent siblings)
8 siblings, 1 reply; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-20 14:42 UTC (permalink / raw)
To: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel, Danilo Krummrich
Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and evicted object handling and GEM validation.
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
drivers/gpu/drm/drm_gpuvm.c | 9 +++++++--
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
3 files changed, 24 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index bfea4a8a19ec..cbf4b738a16c 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
/**
* drm_gpuvm_init() - initialize a &drm_gpuvm
* @gpuvm: pointer to the &drm_gpuvm to initialize
+ * @drm: the drivers &drm_device
* @name: the name of the GPU VA space
* @start_offset: the start offset of the GPU VA space
* @range: the size of the GPU VA space
@@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
* &name is expected to be managed by the surrounding driver structures.
*/
void
-drm_gpuvm_init(struct drm_gpuvm *gpuvm,
+drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
const char *name,
u64 start_offset, u64 range,
u64 reserve_offset, u64 reserve_range,
@@ -694,6 +695,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
reserve_range)))
__drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
}
+
+ drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
}
EXPORT_SYMBOL_GPL(drm_gpuvm_init);
@@ -713,7 +716,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
__drm_gpuva_remove(&gpuvm->kernel_alloc_node);
WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
- "GPUVA tree is not empty, potentially leaking memory.");
+ "GPUVA tree is not empty, potentially leaking memory.\n");
+
+ drm_gem_private_object_fini(&gpuvm->d_obj);
}
EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 6c86b64273c3..a80ac8767843 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
uvmm->kernel_managed_addr = kernel_managed_addr;
uvmm->kernel_managed_size = kernel_managed_size;
- drm_gpuvm_init(&uvmm->base, cli->name,
+ drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
NOUVEAU_VA_SPACE_START,
NOUVEAU_VA_SPACE_END,
kernel_managed_addr, kernel_managed_size,
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 0e802676e0a9..6666c07d7c3e 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -240,14 +240,29 @@ struct drm_gpuvm {
* @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
*/
const struct drm_gpuvm_ops *ops;
+
+ /**
+ * @d_obj: Dummy GEM object; used internally to pass the GPU VMs
+ * dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
+ */
+ struct drm_gem_object d_obj;
};
-void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
+void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
+ const char *name,
u64 start_offset, u64 range,
u64 reserve_offset, u64 reserve_range,
const struct drm_gpuvm_ops *ops);
void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
+/**
+ * drm_gpuvm_resv() - returns the &drm_gpuvm's &dma_resv
+ * @gpuvm__: the &drm_gpuvm
+ *
+ * Returns: a pointer to the &drm_gpuvm's &dma_resv
+ */
+#define drm_gpuvm_resv(gpuvm__) (&(gpuvm__)->d_obj._resv)
+
static inline struct drm_gpuva *
__drm_gpuva_next(struct drm_gpuva *va)
{
--
2.41.0
* Re: [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm
2023-09-20 14:42 ` [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm Danilo Krummrich
@ 2023-09-21 7:39 ` Christian König
2023-09-21 13:34 ` Danilo Krummrich
0 siblings, 1 reply; 29+ messages in thread
From: Christian König @ 2023-09-21 7:39 UTC (permalink / raw)
To: Danilo Krummrich, airlied, daniel, matthew.brost,
thomas.hellstrom, sarah.walker, donald.robson, boris.brezillon,
faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel
Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
> Provide a common dma-resv for GEM objects not being used outside of this
> GPU-VM. This is used in a subsequent patch to generalize dma-resv,
> external and evicted object handling and GEM validation.
>
> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
> ---
> drivers/gpu/drm/drm_gpuvm.c | 9 +++++++--
> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
> include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
> 3 files changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index bfea4a8a19ec..cbf4b738a16c 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
> /**
> * drm_gpuvm_init() - initialize a &drm_gpuvm
> * @gpuvm: pointer to the &drm_gpuvm to initialize
> + * @drm: the drivers &drm_device
> * @name: the name of the GPU VA space
> * @start_offset: the start offset of the GPU VA space
> * @range: the size of the GPU VA space
> @@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
> * &name is expected to be managed by the surrounding driver structures.
> */
> void
> -drm_gpuvm_init(struct drm_gpuvm *gpuvm,
> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
> const char *name,
> u64 start_offset, u64 range,
> u64 reserve_offset, u64 reserve_range,
> @@ -694,6 +695,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
> reserve_range)))
> __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
> }
> +
> + drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
> }
> EXPORT_SYMBOL_GPL(drm_gpuvm_init);
>
> @@ -713,7 +716,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
> __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
>
> WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
> - "GPUVA tree is not empty, potentially leaking memory.");
> + "GPUVA tree is not empty, potentially leaking memory.\n");
> +
> + drm_gem_private_object_fini(&gpuvm->d_obj);
> }
> EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> index 6c86b64273c3..a80ac8767843 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
> uvmm->kernel_managed_addr = kernel_managed_addr;
> uvmm->kernel_managed_size = kernel_managed_size;
>
> - drm_gpuvm_init(&uvmm->base, cli->name,
> + drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
> NOUVEAU_VA_SPACE_START,
> NOUVEAU_VA_SPACE_END,
> kernel_managed_addr, kernel_managed_size,
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 0e802676e0a9..6666c07d7c3e 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -240,14 +240,29 @@ struct drm_gpuvm {
> * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
> */
> const struct drm_gpuvm_ops *ops;
> +
> + /**
> + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs
> + * dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
> + */
> + struct drm_gem_object d_obj;
Yeah, as pointed out in the other mail that won't work like this.
The GPUVM contains GEM objects and therefore should probably have a
reference to those objects.
When those GEM objects now use the dma-resv object embedded inside the
GPUVM then they also need a reference to the GPUVM to make sure the
dma-resv object won't be freed before they are freed.
This is a circular reference dependency.
The simplest solution I can see is to let the driver provide the GEM
object to use. Amdgpu uses the root page directory object for this.
Apart from that I strongly think that we shouldn't let the GPUVM code
create a driver GEM object. We did that in TTM for the ghost objects and
it turned out to be a bad idea.
Regards,
Christian.
> };
>
> -void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
> +void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
> + const char *name,
> u64 start_offset, u64 range,
> u64 reserve_offset, u64 reserve_range,
> const struct drm_gpuvm_ops *ops);
> void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
>
> +/**
> + * drm_gpuvm_resv() - returns the &drm_gpuvm's &dma_resv
> + * @gpuvm__: the &drm_gpuvm
> + *
> + * Returns: a pointer to the &drm_gpuvm's &dma_resv
> + */
> +#define drm_gpuvm_resv(gpuvm__) (&(gpuvm__)->d_obj._resv)
> +
> static inline struct drm_gpuva *
> __drm_gpuva_next(struct drm_gpuva *va)
> {
^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm
2023-09-21 7:39 ` Christian König
@ 2023-09-21 13:34 ` Danilo Krummrich
2023-09-21 14:21 ` Christian König
2023-09-21 14:25 ` Boris Brezillon
0 siblings, 2 replies; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-21 13:34 UTC (permalink / raw)
To: Christian König, airlied, daniel, matthew.brost,
thomas.hellstrom, sarah.walker, donald.robson, boris.brezillon,
faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel
On 9/21/23 09:39, Christian König wrote:
> Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
>> Provide a common dma-resv for GEM objects not being used outside of this
>> GPU-VM. This is used in a subsequent patch to generalize dma-resv,
>> external and evicted object handling and GEM validation.
>>
>> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
>> ---
>> drivers/gpu/drm/drm_gpuvm.c | 9 +++++++--
>> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
>> include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
>> 3 files changed, 24 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
>> index bfea4a8a19ec..cbf4b738a16c 100644
>> --- a/drivers/gpu/drm/drm_gpuvm.c
>> +++ b/drivers/gpu/drm/drm_gpuvm.c
>> @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>> /**
>> * drm_gpuvm_init() - initialize a &drm_gpuvm
>> * @gpuvm: pointer to the &drm_gpuvm to initialize
>> + * @drm: the drivers &drm_device
>> * @name: the name of the GPU VA space
>> * @start_offset: the start offset of the GPU VA space
>> * @range: the size of the GPU VA space
>> @@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>> * &name is expected to be managed by the surrounding driver structures.
>> */
>> void
>> -drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
>> const char *name,
>> u64 start_offset, u64 range,
>> u64 reserve_offset, u64 reserve_range,
>> @@ -694,6 +695,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>> reserve_range)))
>> __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
>> }
>> +
>> + drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
>> }
>> EXPORT_SYMBOL_GPL(drm_gpuvm_init);
>> @@ -713,7 +716,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
>> __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
>> WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
>> - "GPUVA tree is not empty, potentially leaking memory.");
>> + "GPUVA tree is not empty, potentially leaking memory.\n");
>> +
>> + drm_gem_private_object_fini(&gpuvm->d_obj);
>> }
>> EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>> index 6c86b64273c3..a80ac8767843 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>> @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
>> uvmm->kernel_managed_addr = kernel_managed_addr;
>> uvmm->kernel_managed_size = kernel_managed_size;
>> - drm_gpuvm_init(&uvmm->base, cli->name,
>> + drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
>> NOUVEAU_VA_SPACE_START,
>> NOUVEAU_VA_SPACE_END,
>> kernel_managed_addr, kernel_managed_size,
>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>> index 0e802676e0a9..6666c07d7c3e 100644
>> --- a/include/drm/drm_gpuvm.h
>> +++ b/include/drm/drm_gpuvm.h
>> @@ -240,14 +240,29 @@ struct drm_gpuvm {
>> * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
>> */
>> const struct drm_gpuvm_ops *ops;
>> +
>> + /**
>> + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs
>> + * dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
>> + */
>> + struct drm_gem_object d_obj;
>
> Yeah, as pointed out in the other mail that won't work like this.
Which one? Seems that I missed it.
>
> The GPUVM contains GEM objects and therefore should probably have a reference to those objects.
>
> When those GEM objects now use the dma-resv object embedded inside the GPUVM then they also need a reference to the GPUVM to make sure the dma-resv object won't be freed before they are freed.
My assumption here is that GEM objects being local to a certain VM never out-live the VM. We never share them with anyone, otherwise they would be external and hence wouldn't carry the VM's dma-resv. The only references I see are from the VM itself (which is fine) and from userspace. The latter isn't a problem as long as all GEM handles are closed before the VM is destroyed on FD close.
Do I miss something? Do we have use cases where this isn't true?
>
> > This is a circular reference dependency.
>
> The simplest solution I can see is to let the driver provide the GEM object to use. Amdgpu uses the root page directory object for this.
Sure, we can do that, if we see cases where VM local GEM objects can out-live the VM.
>
> Apart from that I strongly think that we shouldn't let the GPUVM code create a driver GEM object. We did that in TTM for the ghost objects and it turned out to be a bad idea.
You mean let GPUVM create a dummy GEM based on the drm_device from the driver? What were the problems that were encountered?
- Danilo
>
> Regards,
> Christian.
>
>> };
>> -void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
>> +void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
>> + const char *name,
>> u64 start_offset, u64 range,
>> u64 reserve_offset, u64 reserve_range,
>> const struct drm_gpuvm_ops *ops);
>> void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
>> +/**
>> + * drm_gpuvm_resv() - returns the &drm_gpuvm's &dma_resv
>> + * @gpuvm__: the &drm_gpuvm
>> + *
>> + * Returns: a pointer to the &drm_gpuvm's &dma_resv
>> + */
>> +#define drm_gpuvm_resv(gpuvm__) (&(gpuvm__)->d_obj._resv)
>> +
>> static inline struct drm_gpuva *
>> __drm_gpuva_next(struct drm_gpuva *va)
>> {
>
^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm
2023-09-21 13:34 ` Danilo Krummrich
@ 2023-09-21 14:21 ` Christian König
2023-09-21 14:25 ` Boris Brezillon
1 sibling, 0 replies; 29+ messages in thread
From: Christian König @ 2023-09-21 14:21 UTC (permalink / raw)
To: Danilo Krummrich, airlied, daniel, matthew.brost,
thomas.hellstrom, sarah.walker, donald.robson, boris.brezillon,
faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel
Am 21.09.23 um 15:34 schrieb Danilo Krummrich:
> On 9/21/23 09:39, Christian König wrote:
>> Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
>>> Provide a common dma-resv for GEM objects not being used outside of
>>> this
>>> GPU-VM. This is used in a subsequent patch to generalize dma-resv,
>>> external and evicted object handling and GEM validation.
>>>
>>> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
>>> ---
>>> drivers/gpu/drm/drm_gpuvm.c | 9 +++++++--
>>> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
>>> include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
>>> 3 files changed, 24 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
>>> index bfea4a8a19ec..cbf4b738a16c 100644
>>> --- a/drivers/gpu/drm/drm_gpuvm.c
>>> +++ b/drivers/gpu/drm/drm_gpuvm.c
>>> @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>>> /**
>>> * drm_gpuvm_init() - initialize a &drm_gpuvm
>>> * @gpuvm: pointer to the &drm_gpuvm to initialize
>>> + * @drm: the drivers &drm_device
>>> * @name: the name of the GPU VA space
>>> * @start_offset: the start offset of the GPU VA space
>>> * @range: the size of the GPU VA space
>>> @@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>>> * &name is expected to be managed by the surrounding driver
>>> structures.
>>> */
>>> void
>>> -drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>>> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
>>> const char *name,
>>> u64 start_offset, u64 range,
>>> u64 reserve_offset, u64 reserve_range,
>>> @@ -694,6 +695,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>>> reserve_range)))
>>> __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
>>> }
>>> +
>>> + drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
>>> }
>>> EXPORT_SYMBOL_GPL(drm_gpuvm_init);
>>> @@ -713,7 +716,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
>>> __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
>>> WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
>>> - "GPUVA tree is not empty, potentially leaking memory.");
>>> + "GPUVA tree is not empty, potentially leaking memory.\n");
>>> +
>>> + drm_gem_private_object_fini(&gpuvm->d_obj);
>>> }
>>> EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>> b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>> index 6c86b64273c3..a80ac8767843 100644
>>> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>> @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm,
>>> struct nouveau_cli *cli,
>>> uvmm->kernel_managed_addr = kernel_managed_addr;
>>> uvmm->kernel_managed_size = kernel_managed_size;
>>> - drm_gpuvm_init(&uvmm->base, cli->name,
>>> + drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
>>> NOUVEAU_VA_SPACE_START,
>>> NOUVEAU_VA_SPACE_END,
>>> kernel_managed_addr, kernel_managed_size,
>>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>>> index 0e802676e0a9..6666c07d7c3e 100644
>>> --- a/include/drm/drm_gpuvm.h
>>> +++ b/include/drm/drm_gpuvm.h
>>> @@ -240,14 +240,29 @@ struct drm_gpuvm {
>>> * @ops: &drm_gpuvm_ops providing the split/merge steps to
>>> drivers
>>> */
>>> const struct drm_gpuvm_ops *ops;
>>> +
>>> + /**
>>> + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs
>>> + * dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
>>> + */
>>> + struct drm_gem_object d_obj;
>>
>> Yeah, as pointed out in the other mail that won't work like this.
>
> Which one? Seems that I missed it.
>
>>
>> The GPUVM contains GEM objects and therefore should probably have a
>> reference to those objects.
>>
>> When those GEM objects now use the dma-resv object embedded inside
>> the GPUVM then they also need a reference to the GPUVM to make sure
>> the dma-resv object won't be freed before they are freed.
>
> My assumption here is that GEM objects being local to a certain VM
> never out-live the VM. We never share them with anyone, otherwise they
> would be external and hence wouldn't carry the VM's dma-resv. The
> only references I see are from the VM itself (which is fine) and from
> userspace. The latter isn't a problem as long as all GEM handles are
> closed before the VM is destroyed on FD close.
>
> Do I miss something? Do we have use cases where this isn't true?
There are multiple use cases where this isn't true. One example is
memory management with TTM or drm_exec. They both grab references on the
objects they lock.
Since this is eviction code, it is perfectly possible that a GEM object
is locked from a different VM than the one currently in use. So a single
GEM object from a VM can live longer than the VM itself.
Another potential case is delayed delete, where a GEM object might need
to stay around a bit longer because of hw restrictions. This can simply
be waiting for shaders to finish, but also hw workarounds where we need
to wait some grace time before freeing things.
>
>
>>
>> This is a circular reference dependency.
>>
>> The simplest solution I can see is to let the driver provide the GEM
>> object to use. Amdgpu uses the root page directory object for this.
>
> Sure, we can do that, if we see cases where VM local GEM objects can
> out-live the VM.
>
>>
>> Apart from that I strongly think that we shouldn't let the GPUVM code
>> create a driver GEM object. We did that in TTM for the ghost objects
>> and it turned out to be a bad idea.
>
> You mean let GPUVM create a dummy GEM based on the drm_device from the
> driver? What were the problems that were encountered?
See ttm_buffer_object_transfer(): basically we created a dummy TTM BO to
hang on to the old resources for pipelining eviction work.
While that initially was a good idea because it sped things up quite
massively, it turned out to be a big maintenance burden, because those
dummy BOs ended up in driver-specific functions and the drivers tried to
upcast them to their internal representation. That in turn of course
didn't work and caused very subtle memory corruptions.
KASAN was a big help to narrow those down, but initially we spent months
figuring out why some random code was going south sometimes when TTM
was in use.
I really don't want to repeat that.
Regards,
Christian.
>
>
> - Danilo
>
>>
>> Regards,
>> Christian.
>>
>>> };
>>> -void drm_gpuvm_init(struct drm_gpuvm *gpuvm, const char *name,
>>> +void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
>>> + const char *name,
>>> u64 start_offset, u64 range,
>>> u64 reserve_offset, u64 reserve_range,
>>> const struct drm_gpuvm_ops *ops);
>>> void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
>>> +/**
>>> + * drm_gpuvm_resv() - returns the &drm_gpuvm's &dma_resv
>>> + * @gpuvm__: the &drm_gpuvm
>>> + *
>>> + * Returns: a pointer to the &drm_gpuvm's &dma_resv
>>> + */
>>> +#define drm_gpuvm_resv(gpuvm__) (&(gpuvm__)->d_obj._resv)
>>> +
>>> static inline struct drm_gpuva *
>>> __drm_gpuva_next(struct drm_gpuva *va)
>>> {
>>
>
^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm
2023-09-21 13:34 ` Danilo Krummrich
2023-09-21 14:21 ` Christian König
@ 2023-09-21 14:25 ` Boris Brezillon
2023-09-21 14:34 ` Christian König
2023-09-21 14:38 ` Danilo Krummrich
1 sibling, 2 replies; 29+ messages in thread
From: Boris Brezillon @ 2023-09-21 14:25 UTC (permalink / raw)
To: Danilo Krummrich
Cc: Christian König, airlied, daniel, matthew.brost,
thomas.hellstrom, sarah.walker, donald.robson, faith.ekstrand,
dri-devel, nouveau, linux-kernel
On Thu, 21 Sep 2023 15:34:44 +0200
Danilo Krummrich <dakr@redhat.com> wrote:
> On 9/21/23 09:39, Christian König wrote:
> > Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
> >> Provide a common dma-resv for GEM objects not being used outside of this
> >> GPU-VM. This is used in a subsequent patch to generalize dma-resv,
> >> external and evicted object handling and GEM validation.
> >>
> >> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
> >> ---
> >> drivers/gpu/drm/drm_gpuvm.c | 9 +++++++--
> >> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
> >> include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
> >> 3 files changed, 24 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> >> index bfea4a8a19ec..cbf4b738a16c 100644
> >> --- a/drivers/gpu/drm/drm_gpuvm.c
> >> +++ b/drivers/gpu/drm/drm_gpuvm.c
> >> @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
> >> /**
> >> * drm_gpuvm_init() - initialize a &drm_gpuvm
> >> * @gpuvm: pointer to the &drm_gpuvm to initialize
> >> + * @drm: the drivers &drm_device
> >> * @name: the name of the GPU VA space
> >> * @start_offset: the start offset of the GPU VA space
> >> * @range: the size of the GPU VA space
> >> @@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
> >> * &name is expected to be managed by the surrounding driver structures.
> >> */
> >> void
> >> -drm_gpuvm_init(struct drm_gpuvm *gpuvm,
> >> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
> >> const char *name,
> >> u64 start_offset, u64 range,
> >> u64 reserve_offset, u64 reserve_range,
> >> @@ -694,6 +695,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
> >> reserve_range)))
> >> __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
> >> }
> >> +
> >> + drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
> >> }
> >> EXPORT_SYMBOL_GPL(drm_gpuvm_init);
> >> @@ -713,7 +716,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
> >> __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
> >> WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
> >> - "GPUVA tree is not empty, potentially leaking memory.");
> >> + "GPUVA tree is not empty, potentially leaking memory.\n");
> >> +
> >> + drm_gem_private_object_fini(&gpuvm->d_obj);
> >> }
> >> EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
> >> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> >> index 6c86b64273c3..a80ac8767843 100644
> >> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> >> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> >> @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
> >> uvmm->kernel_managed_addr = kernel_managed_addr;
> >> uvmm->kernel_managed_size = kernel_managed_size;
> >> - drm_gpuvm_init(&uvmm->base, cli->name,
> >> + drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
> >> NOUVEAU_VA_SPACE_START,
> >> NOUVEAU_VA_SPACE_END,
> >> kernel_managed_addr, kernel_managed_size,
> >> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> >> index 0e802676e0a9..6666c07d7c3e 100644
> >> --- a/include/drm/drm_gpuvm.h
> >> +++ b/include/drm/drm_gpuvm.h
> >> @@ -240,14 +240,29 @@ struct drm_gpuvm {
> >> * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
> >> */
> >> const struct drm_gpuvm_ops *ops;
> >> +
> >> + /**
> >> + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs
> >> + * dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
> >> + */
> >> + struct drm_gem_object d_obj;
> >
> > Yeah, as pointed out in the other mail that won't work like this.
>
> Which one? Seems that I missed it.
>
> >
> > The GPUVM contains GEM objects and therefore should probably have a reference to those objects.
> >
> > When those GEM objects now use the dma-resv object embedded inside the GPUVM then they also need a reference to the GPUVM to make sure the dma-resv object won't be freed before they are freed.
>
> My assumption here is that GEM objects being local to a certain VM never out-live the VM. We never share them with anyone, otherwise they would be external and hence wouldn't carry the VM's dma-resv. The only references I see are from the VM itself (which is fine) and from userspace. The latter isn't a problem as long as all GEM handles are closed before the VM is destroyed on FD close.
But we don't want to rely on userspace doing the right thing (calling
GEM_CLOSE before releasing the VM), do we?
BTW, even though my private BOs have a ref to their exclusive VM, I just
ran into a bug because drm_gem_shmem_free() acquires the resv lock
(which is questionable, but that's not the topic :-)) and
I was calling vm_put(bo->exclusive_vm) before drm_gem_shmem_free(),
leading to a use-after-free when the gem->resv is acquired. This has
nothing to do with drm_gpuvm, but it proves that this sort of bug is
likely to happen if we don't pay attention.
>
> Do I miss something? Do we have use cases where this isn't true?
The other case I can think of is GEM being v[un]map-ed (kernel
mapping) after the VM was released.
>
> >
> > This is a circular reference dependency.
FWIW, I solved that by having a vm_destroy() function that kills all the
mappings in a VM, which in turn releases all the refs the VM had on
private BOs. Then it's just a matter of waiting for all private GEMs
to be destroyed to trigger the final steps of the VM destruction, which
are really just about releasing resources (panthor_vm_release() in my
case), executed when the VM refcount drops to zero.
> >
> > The simplest solution I can see is to let the driver provide the GEM object to use. Amdgpu uses the root page directory object for this.
>
> Sure, we can do that, if we see cases where VM local GEM objects can out-live the VM.
> >
> > Apart from that I strongly think that we shouldn't let the GPUVM code create a driver GEM object. We did that in TTM for the ghost objects and it turned out to be a bad idea.
Would that really solve the circular ref issue? I mean, if you're
taking the root page dir object as your VM resv, you still have to make
sure it outlives the private GEMs, which means you either need
to take a ref on the object, leading to the same circular ref mess, or
you need to reset private GEMs' resvs before destroying this root page
dir GEM (whose lifecycle is likely the same as your VM object which
embeds the drm_gpuvm instance).
Making it driver-specific just moves the responsibility back to drivers
(and also allows re-using a real GEM object instead of a dummy one,
but I'm not sure we care about saving a few hundred bytes at that
point), which is a good way to not take the blame if the driver does
something wrong, but also doesn't really help people do the right thing.
^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm
2023-09-21 14:25 ` Boris Brezillon
@ 2023-09-21 14:34 ` Christian König
2023-09-21 15:27 ` Boris Brezillon
2023-09-21 15:30 ` Danilo Krummrich
2023-09-21 14:38 ` Danilo Krummrich
1 sibling, 2 replies; 29+ messages in thread
From: Christian König @ 2023-09-21 14:34 UTC (permalink / raw)
To: Boris Brezillon, Danilo Krummrich
Cc: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, faith.ekstrand, dri-devel, nouveau, linux-kernel
Am 21.09.23 um 16:25 schrieb Boris Brezillon:
> On Thu, 21 Sep 2023 15:34:44 +0200
> Danilo Krummrich <dakr@redhat.com> wrote:
>
>> On 9/21/23 09:39, Christian König wrote:
>>> Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
>>>> Provide a common dma-resv for GEM objects not being used outside of this
>>>> GPU-VM. This is used in a subsequent patch to generalize dma-resv,
>>>> external and evicted object handling and GEM validation.
>>>>
>>>> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
>>>> ---
>>>> drivers/gpu/drm/drm_gpuvm.c | 9 +++++++--
>>>> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
>>>> include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
>>>> 3 files changed, 24 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
>>>> index bfea4a8a19ec..cbf4b738a16c 100644
>>>> --- a/drivers/gpu/drm/drm_gpuvm.c
>>>> +++ b/drivers/gpu/drm/drm_gpuvm.c
>>>> @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>>>> /**
>>>> * drm_gpuvm_init() - initialize a &drm_gpuvm
>>>> * @gpuvm: pointer to the &drm_gpuvm to initialize
>>>> + * @drm: the drivers &drm_device
>>>> * @name: the name of the GPU VA space
>>>> * @start_offset: the start offset of the GPU VA space
>>>> * @range: the size of the GPU VA space
>>>> @@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>>>> * &name is expected to be managed by the surrounding driver structures.
>>>> */
>>>> void
>>>> -drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>>>> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
>>>> const char *name,
>>>> u64 start_offset, u64 range,
>>>> u64 reserve_offset, u64 reserve_range,
>>>> @@ -694,6 +695,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>>>> reserve_range)))
>>>> __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
>>>> }
>>>> +
>>>> + drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
>>>> }
>>>> EXPORT_SYMBOL_GPL(drm_gpuvm_init);
>>>> @@ -713,7 +716,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
>>>> __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
>>>> WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
>>>> - "GPUVA tree is not empty, potentially leaking memory.");
>>>> + "GPUVA tree is not empty, potentially leaking memory.\n");
>>>> +
>>>> + drm_gem_private_object_fini(&gpuvm->d_obj);
>>>> }
>>>> EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
>>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>>> index 6c86b64273c3..a80ac8767843 100644
>>>> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>>> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>>> @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
>>>> uvmm->kernel_managed_addr = kernel_managed_addr;
>>>> uvmm->kernel_managed_size = kernel_managed_size;
>>>> - drm_gpuvm_init(&uvmm->base, cli->name,
>>>> + drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
>>>> NOUVEAU_VA_SPACE_START,
>>>> NOUVEAU_VA_SPACE_END,
>>>> kernel_managed_addr, kernel_managed_size,
>>>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>>>> index 0e802676e0a9..6666c07d7c3e 100644
>>>> --- a/include/drm/drm_gpuvm.h
>>>> +++ b/include/drm/drm_gpuvm.h
>>>> @@ -240,14 +240,29 @@ struct drm_gpuvm {
>>>> * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
>>>> */
>>>> const struct drm_gpuvm_ops *ops;
>>>> +
>>>> + /**
>>>> + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs
>>>> + * dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
>>>> + */
>>>> + struct drm_gem_object d_obj;
>>> Yeah, as pointed out in the other mail that won't work like this.
>> Which one? Seems that I missed it.
>>
>>> The GPUVM contains GEM objects and therefore should probably have a reference to those objects.
>>>
>>> When those GEM objects now use the dma-resv object embedded inside the GPUVM then they also need a reference to the GPUVM to make sure the dma-resv object won't be freed before they are freed.
>> My assumption here is that GEM objects being local to a certain VM never out-live the VM. We never share them with anyone, otherwise they would be external and hence wouldn't carry the VM's dma-resv. The only references I see are from the VM itself (which is fine) and from userspace. The latter isn't a problem as long as all GEM handles are closed before the VM is destroyed on FD close.
> But we don't want to rely on userspace doing the right thing (calling
> GEM_CLOSE before releasing the VM), do we?
>
> BTW, even though my private BOs have a ref to their exclusive VM, I just
> ran into a bug because drm_gem_shmem_free() acquires the resv lock
> (which is questionable, but that's not the topic :-)) and
> I was calling vm_put(bo->exclusive_vm) before drm_gem_shmem_free(),
> leading to a use-after-free when the gem->resv is acquired. This has
> nothing to do with drm_gpuvm, but it proves that this sort of bug is
> likely to happen if we don't pay attention.
>
>> Do I miss something? Do we have use cases where this isn't true?
> The other case I can think of is GEM being v[un]map-ed (kernel
> mapping) after the VM was released.
I think the file reference and the VM stay around in those cases as
well, but yes, I also think we have use cases which won't work.
>
>>> This is a circular reference dependency.
> FWIW, I solved that by having a vm_destroy() function that kills all the
> mappings in a VM, which in turn releases all the refs the VM had on
> private BOs. Then, it's just a matter of waiting for all private GEMs
> to be destroyed to get the final steps of the VM destruction, which is
> really just about releasing resources (it's called panthor_vm_release()
> in my case) executed when the VM refcount drops to zero.
>
>>> The simplest solution I can see is to let the driver provide the GEM object to use. Amdgpu uses the root page directory object for this.
>> Sure, we can do that, if we see cases where VM local GEM objects can out-live the VM.
>>> Apart from that I strongly think that we shouldn't let the GPUVM code create a driver GEM object. We did that in TTM for the ghost objects and it turned out to be a bad idea.
> Would that really solve the circular ref issue? I mean, if you're
> taking the root page dir object as your VM resv, you still have to make
> sure it outlives the private GEMs, which means, you either need
> to take a ref on the object, leading to the same circular ref mess, or
> you need to reset private GEMs resvs before destroying this root page
> dir GEM (whose lifecycle is likely the same as your VM object which
> embeds the drm_gpuvm instance).
Yes, it does help; see how amdgpu does it:
The VM references all BOs, e.g. page tables as well as user BOs.
The BOs which use the dma-resv of the root page directory also reference
the root page directory's BO.
So when the VM drops all its references, the page tables and user BOs are
released first, and the root page directory, which everybody references, last.
> Making it driver-specific just moves the responsibility back to drivers
> (and also allows re-using a real GEM object instead of a dummy one,
> but I'm not sure we care about saving a few hundreds bytes at that
> point), which is a good way to not take the blame if the driver does
> something wrong, but also doesn't really help people do the right thing.
The additional memory usage is irrelevant, but we have had very bad
experience with TTM using dummy objects similar to this one.
They tend to end up in driver-specific functions, and then the driver
will try to upcast those dummies to driver-specific BOs. In the end you
get memory corruptions that are really hard to figure out.
Regards,
Christian.
^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm
2023-09-21 14:34 ` Christian König
@ 2023-09-21 15:27 ` Boris Brezillon
2023-09-21 15:30 ` Danilo Krummrich
1 sibling, 0 replies; 29+ messages in thread
From: Boris Brezillon @ 2023-09-21 15:27 UTC (permalink / raw)
To: Christian König
Cc: Danilo Krummrich, airlied, daniel, matthew.brost,
thomas.hellstrom, sarah.walker, donald.robson, faith.ekstrand,
dri-devel, nouveau, linux-kernel
On Thu, 21 Sep 2023 16:34:54 +0200
Christian König <christian.koenig@amd.com> wrote:
> Am 21.09.23 um 16:25 schrieb Boris Brezillon:
> > On Thu, 21 Sep 2023 15:34:44 +0200
> > Danilo Krummrich <dakr@redhat.com> wrote:
> >
> >> On 9/21/23 09:39, Christian König wrote:
> >>> Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
> >>>> Provide a common dma-resv for GEM objects not being used outside of this
> >>>> GPU-VM. This is used in a subsequent patch to generalize dma-resv,
> >>>> external and evicted object handling and GEM validation.
> >>>>
> >>>> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
> >>>> ---
> >>>> drivers/gpu/drm/drm_gpuvm.c | 9 +++++++--
> >>>> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
> >>>> include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
> >>>> 3 files changed, 24 insertions(+), 4 deletions(-)
> >>>>
> >>>> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> >>>> index bfea4a8a19ec..cbf4b738a16c 100644
> >>>> --- a/drivers/gpu/drm/drm_gpuvm.c
> >>>> +++ b/drivers/gpu/drm/drm_gpuvm.c
> >>>> @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
> >>>> /**
> >>>> * drm_gpuvm_init() - initialize a &drm_gpuvm
> >>>> * @gpuvm: pointer to the &drm_gpuvm to initialize
> >>>> + * @drm: the drivers &drm_device
> >>>> * @name: the name of the GPU VA space
> >>>> * @start_offset: the start offset of the GPU VA space
> >>>> * @range: the size of the GPU VA space
> >>>> @@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
> >>>> * &name is expected to be managed by the surrounding driver structures.
> >>>> */
> >>>> void
> >>>> -drm_gpuvm_init(struct drm_gpuvm *gpuvm,
> >>>> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
> >>>> const char *name,
> >>>> u64 start_offset, u64 range,
> >>>> u64 reserve_offset, u64 reserve_range,
> >>>> @@ -694,6 +695,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
> >>>> reserve_range)))
> >>>> __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
> >>>> }
> >>>> +
> >>>> + drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
> >>>> }
> >>>> EXPORT_SYMBOL_GPL(drm_gpuvm_init);
> >>>> @@ -713,7 +716,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
> >>>> __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
> >>>> WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
> >>>> - "GPUVA tree is not empty, potentially leaking memory.");
> >>>> + "GPUVA tree is not empty, potentially leaking memory.\n");
> >>>> +
> >>>> + drm_gem_private_object_fini(&gpuvm->d_obj);
> >>>> }
> >>>> EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
> >>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> >>>> index 6c86b64273c3..a80ac8767843 100644
> >>>> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> >>>> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> >>>> @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
> >>>> uvmm->kernel_managed_addr = kernel_managed_addr;
> >>>> uvmm->kernel_managed_size = kernel_managed_size;
> >>>> - drm_gpuvm_init(&uvmm->base, cli->name,
> >>>> + drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
> >>>> NOUVEAU_VA_SPACE_START,
> >>>> NOUVEAU_VA_SPACE_END,
> >>>> kernel_managed_addr, kernel_managed_size,
> >>>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> >>>> index 0e802676e0a9..6666c07d7c3e 100644
> >>>> --- a/include/drm/drm_gpuvm.h
> >>>> +++ b/include/drm/drm_gpuvm.h
> >>>> @@ -240,14 +240,29 @@ struct drm_gpuvm {
> >>>> * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
> >>>> */
> >>>> const struct drm_gpuvm_ops *ops;
> >>>> +
> >>>> + /**
> >>>> + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs
> >>>> + * dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
> >>>> + */
> >>>> + struct drm_gem_object d_obj;
> >>> Yeah, as pointed out in the other mail that won't work like this.
> >> Which one? Seems that I missed it.
> >>
> >>> The GPUVM contains GEM objects and therefore should probably have a reference to those objects.
> >>>
> >>> When those GEM objects now use the dma-resv object embedded inside the GPUVM then they also need a reference to the GPUVM to make sure the dma-resv object won't be freed before they are freed.
> >> My assumption here is that GEM objects being local to a certain VM never out-live the VM. We never share them with anyone, otherwise they would be external and hence wouldn't carry the VM's dma-resv. The only references I see are from the VM itself (which is fine) and from userspace. The latter isn't a problem as long as all GEM handles are closed before the VM is destroyed on FD close.
> > But we don't want to rely on userspace doing the right thing (calling
> > GEM_CLOSE before releasing the VM), do we?
> >
> > BTW, even though my private BOs have a ref to their exclusive VM, I just
> > ran into a bug because drm_gem_shmem_free() acquires the resv lock
> > (which is questionable, but that's not the topic :-)) and
> > I was calling vm_put(bo->exclusive_vm) before drm_gem_shmem_free(),
> > leading to a use-after-free when the gem->resv is acquired. This has
> > nothing to do with drm_gpuvm, but it proves that this sort of bug is
> > likely to happen if we don't pay attention.
> >
> >> Do I miss something? Do we have use cases where this isn't true?
> > The other case I can think of is GEM being v[un]map-ed (kernel
> > mapping) after the VM was released.
>
> I think the file reference and the VM stay around in those cases as
> well, but yes, I also think we have use cases which won't work.
>
> >
> >>> This is a circle reference dependency.
> > FWIW, I solved that by having a vm_destroy() function that kills all the
> > mappings in a VM, which in turn releases all the refs the VM had on
> > private BOs. Then, it's just a matter of waiting for all private GEMs
> > to be destroyed to get the final steps of the VM destruction, which is
> > really just about releasing resources (it's called panthor_vm_release()
> > in my case) executed when the VM refcount drops to zero.
> >
> >>> The simplest solution I can see is to let the driver provide the GEM object to use. Amdgpu uses the root page directory object for this.
> >> Sure, we can do that, if we see cases where VM local GEM objects can out-live the VM.
> >>> Apart from that I strongly think that we shouldn't let the GPUVM code create a driver GEM object. We did that in TTM for the ghost objects and it turned out to be a bad idea.
> > Would that really solve the circular ref issue? I mean, if you're
> > taking the root page dir object as your VM resv, you still have to make
> > sure it outlives the private GEMs, which means, you either need
> > to take a ref on the object, leading to the same circular ref mess, or
> > you need to reset private GEMs resvs before destroying this root page
> > dir GEM (whose lifecycle is likely the same as your VM object which
> > embeds the drm_gpuvm instance).
>
> Yes, it does help; see how amdgpu does it:
>
> The VM references all BOs, e.g. page tables as well as user BOs.
>
> The BOs which use the dma-resv of the root page directory also reference
> the root page directory's BO.
>
> So when the VM drops all its references, the page tables and user BOs are
> released first, and the root page directory, which everybody references, last.
Right, now I see how having a dynamically allocated GEM on which both
the VM and private BOs hold a reference solves the problem.
>
> > Making it driver-specific just moves the responsibility back to drivers
> > (and also allows re-using a real GEM object instead of a dummy one,
> > but I'm not sure we care about saving a few hundreds bytes at that
> > point), which is a good way to not take the blame if the driver does
> > something wrong, but also doesn't really help people do the right thing.
>
> The additional memory usage is irrelevant, but we have had very bad
> experience with TTM using dummy objects similar to this one.
>
> They tend to end up in driver-specific functions, and then the driver
> will try to upcast those dummies to driver-specific BOs. In the end you
> get memory corruptions that are really hard to figure out.
Hm, I see. Anyway, I guess creating a dummy GEM is simple enough that
we can leave it to drivers (for drivers that don't have a real GEM to
pass, of course).
^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm
2023-09-21 14:34 ` Christian König
2023-09-21 15:27 ` Boris Brezillon
@ 2023-09-21 15:30 ` Danilo Krummrich
1 sibling, 0 replies; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-21 15:30 UTC (permalink / raw)
To: Christian König, Boris Brezillon
Cc: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, faith.ekstrand, dri-devel, nouveau, linux-kernel
On 9/21/23 16:34, Christian König wrote:
>
>
> Am 21.09.23 um 16:25 schrieb Boris Brezillon:
>> On Thu, 21 Sep 2023 15:34:44 +0200
>> Danilo Krummrich <dakr@redhat.com> wrote:
>>
>>> On 9/21/23 09:39, Christian König wrote:
>>>> Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
>>>>> Provide a common dma-resv for GEM objects not being used outside of this
>>>>> GPU-VM. This is used in a subsequent patch to generalize dma-resv,
>>>>> external and evicted object handling and GEM validation.
>>>>>
>>>>> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
>>>>> ---
>>>>> drivers/gpu/drm/drm_gpuvm.c | 9 +++++++--
>>>>> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
>>>>> include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
>>>>> 3 files changed, 24 insertions(+), 4 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
>>>>> index bfea4a8a19ec..cbf4b738a16c 100644
>>>>> --- a/drivers/gpu/drm/drm_gpuvm.c
>>>>> +++ b/drivers/gpu/drm/drm_gpuvm.c
>>>>> @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>>>>> /**
>>>>> * drm_gpuvm_init() - initialize a &drm_gpuvm
>>>>> * @gpuvm: pointer to the &drm_gpuvm to initialize
>>>>> + * @drm: the drivers &drm_device
>>>>> * @name: the name of the GPU VA space
>>>>> * @start_offset: the start offset of the GPU VA space
>>>>> * @range: the size of the GPU VA space
>>>>> @@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>>>>> * &name is expected to be managed by the surrounding driver structures.
>>>>> */
>>>>> void
>>>>> -drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>>>>> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
>>>>> const char *name,
>>>>> u64 start_offset, u64 range,
>>>>> u64 reserve_offset, u64 reserve_range,
>>>>> @@ -694,6 +695,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>>>>> reserve_range)))
>>>>> __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
>>>>> }
>>>>> +
>>>>> + drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(drm_gpuvm_init);
>>>>> @@ -713,7 +716,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
>>>>> __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
>>>>> WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
>>>>> - "GPUVA tree is not empty, potentially leaking memory.");
>>>>> + "GPUVA tree is not empty, potentially leaking memory.\n");
>>>>> +
>>>>> + drm_gem_private_object_fini(&gpuvm->d_obj);
>>>>> }
>>>>> EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
>>>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>>>> index 6c86b64273c3..a80ac8767843 100644
>>>>> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>>>> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>>>> @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
>>>>> uvmm->kernel_managed_addr = kernel_managed_addr;
>>>>> uvmm->kernel_managed_size = kernel_managed_size;
>>>>> - drm_gpuvm_init(&uvmm->base, cli->name,
>>>>> + drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
>>>>> NOUVEAU_VA_SPACE_START,
>>>>> NOUVEAU_VA_SPACE_END,
>>>>> kernel_managed_addr, kernel_managed_size,
>>>>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>>>>> index 0e802676e0a9..6666c07d7c3e 100644
>>>>> --- a/include/drm/drm_gpuvm.h
>>>>> +++ b/include/drm/drm_gpuvm.h
>>>>> @@ -240,14 +240,29 @@ struct drm_gpuvm {
>>>>> * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
>>>>> */
>>>>> const struct drm_gpuvm_ops *ops;
>>>>> +
>>>>> + /**
>>>>> + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs
>>>>> + * dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
>>>>> + */
>>>>> + struct drm_gem_object d_obj;
>>>> Yeah, as pointed out in the other mail that won't work like this.
>>> Which one? Seems that I missed it.
>>>
>>>> The GPUVM contains GEM objects and therefore should probably have a reference to those objects.
>>>>
>>>> When those GEM objects now use the dma-resv object embedded inside the GPUVM then they also need a reference to the GPUVM to make sure the dma-resv object won't be freed before they are freed.
>>> My assumption here is that GEM objects being local to a certain VM never out-live the VM. We never share them with anyone, otherwise they would be external and hence wouldn't carry the VM's dma-resv. The only references I see are from the VM itself (which is fine) and from userspace. The latter isn't a problem as long as all GEM handles are closed before the VM is destroyed on FD close.
>> But we don't want to rely on userspace doing the right thing (calling
>> GEM_CLOSE before releasing the VM), do we?
>>
>> BTW, even though my private BOs have a ref to their exclusive VM, I just
>> ran into a bug because drm_gem_shmem_free() acquires the resv lock
>> (which is questionable, but that's not the topic :-)) and
>> I was calling vm_put(bo->exclusive_vm) before drm_gem_shmem_free(),
>> leading to a use-after-free when the gem->resv is acquired. This has
>> nothing to do with drm_gpuvm, but it proves that this sort of bug is
>> likely to happen if we don't pay attention.
>>
>>> Do I miss something? Do we have use cases where this isn't true?
>> The other case I can think of is GEM being v[un]map-ed (kernel
>> mapping) after the VM was released.
>
> I think the file reference and the VM stay around in those cases as well, but yes, I also think we have use cases which won't work.
>
>>
>>>> This is a circle reference dependency.
>> FWIW, I solved that by having a vm_destroy() function that kills all the
>> mappings in a VM, which in turn releases all the refs the VM had on
>> private BOs. Then, it's just a matter of waiting for all private GEMs
>> to be destroyed to get the final steps of the VM destruction, which is
>> really just about releasing resources (it's called panthor_vm_release()
>> in my case) executed when the VM refcount drops to zero.
>>
>>>> The simplest solution I can see is to let the driver provide the GEM object to use. Amdgpu uses the root page directory object for this.
>>> Sure, we can do that, if we see cases where VM local GEM objects can out-live the VM.
>>>> Apart from that I strongly think that we shouldn't let the GPUVM code create a driver GEM object. We did that in TTM for the ghost objects and it turned out to be a bad idea.
>> Would that really solve the circular ref issue? I mean, if you're
>> taking the root page dir object as your VM resv, you still have to make
>> sure it outlives the private GEMs, which means, you either need
>> to take a ref on the object, leading to the same circular ref mess, or
>> you need to reset private GEMs resvs before destroying this root page
>> dir GEM (whose lifecycle is likely the same as your VM object which
>> embeds the drm_gpuvm instance).
>
> Yes, it does help; see how amdgpu does it:
>
> The VM references all BOs, e.g. page tables as well as user BOs.
>
> The BOs which use the dma-resv of the root page directory also reference the root page directory's BO.
>
> So when the VM drops all its references, the page tables and user BOs are released first, and the root page directory, which everybody references, last.
I think that works; unfortunately, it's all driver-specific. It seems like that's just how it is.
>
>> Making it driver-specific just moves the responsibility back to drivers
>> (and also allows re-using a real GEM object instead of a dummy one,
>> but I'm not sure we care about saving a few hundreds bytes at that
>> point), which is a good way to not take the blame if the driver does
>> something wrong, but also doesn't really help people do the right thing.
>
> The additional memory usage is irrelevant, but we have had very bad experience with TTM using dummy objects similar to this one.
>
> They tend to end up in driver-specific functions, and then the driver will try to upcast those dummies to driver-specific BOs. In the end you get memory corruptions that are really hard to figure out.
I fully agree; however, for some drivers it might just be necessary to pass a dummy GEM in case they don't have a real one to pass.
>
> Regards,
> Christian.
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm
2023-09-21 14:25 ` Boris Brezillon
2023-09-21 14:34 ` Christian König
@ 2023-09-21 14:38 ` Danilo Krummrich
1 sibling, 0 replies; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-21 14:38 UTC (permalink / raw)
To: Boris Brezillon
Cc: Christian König, airlied, daniel, matthew.brost,
thomas.hellstrom, sarah.walker, donald.robson, faith.ekstrand,
dri-devel, nouveau, linux-kernel
On 9/21/23 16:25, Boris Brezillon wrote:
> On Thu, 21 Sep 2023 15:34:44 +0200
> Danilo Krummrich <dakr@redhat.com> wrote:
>
>> On 9/21/23 09:39, Christian König wrote:
>>> Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
>>>> Provide a common dma-resv for GEM objects not being used outside of this
>>>> GPU-VM. This is used in a subsequent patch to generalize dma-resv,
>>>> external and evicted object handling and GEM validation.
>>>>
>>>> Signed-off-by: Danilo Krummrich <dakr@redhat.com>
>>>> ---
>>>> drivers/gpu/drm/drm_gpuvm.c | 9 +++++++--
>>>> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
>>>> include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
>>>> 3 files changed, 24 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
>>>> index bfea4a8a19ec..cbf4b738a16c 100644
>>>> --- a/drivers/gpu/drm/drm_gpuvm.c
>>>> +++ b/drivers/gpu/drm/drm_gpuvm.c
>>>> @@ -655,6 +655,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>>>> /**
>>>> * drm_gpuvm_init() - initialize a &drm_gpuvm
>>>> * @gpuvm: pointer to the &drm_gpuvm to initialize
>>>> + * @drm: the drivers &drm_device
>>>> * @name: the name of the GPU VA space
>>>> * @start_offset: the start offset of the GPU VA space
>>>> * @range: the size of the GPU VA space
>>>> @@ -668,7 +669,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
>>>> * &name is expected to be managed by the surrounding driver structures.
>>>> */
>>>> void
>>>> -drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>>>> +drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
>>>> const char *name,
>>>> u64 start_offset, u64 range,
>>>> u64 reserve_offset, u64 reserve_range,
>>>> @@ -694,6 +695,8 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm,
>>>> reserve_range)))
>>>> __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
>>>> }
>>>> +
>>>> + drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
>>>> }
>>>> EXPORT_SYMBOL_GPL(drm_gpuvm_init);
>>>> @@ -713,7 +716,9 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
>>>> __drm_gpuva_remove(&gpuvm->kernel_alloc_node);
>>>> WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
>>>> - "GPUVA tree is not empty, potentially leaking memory.");
>>>> + "GPUVA tree is not empty, potentially leaking memory.\n");
>>>> +
>>>> + drm_gem_private_object_fini(&gpuvm->d_obj);
>>>> }
>>>> EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
>>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>>> index 6c86b64273c3..a80ac8767843 100644
>>>> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>>> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
>>>> @@ -1836,7 +1836,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
>>>> uvmm->kernel_managed_addr = kernel_managed_addr;
>>>> uvmm->kernel_managed_size = kernel_managed_size;
>>>> - drm_gpuvm_init(&uvmm->base, cli->name,
>>>> + drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
>>>> NOUVEAU_VA_SPACE_START,
>>>> NOUVEAU_VA_SPACE_END,
>>>> kernel_managed_addr, kernel_managed_size,
>>>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>>>> index 0e802676e0a9..6666c07d7c3e 100644
>>>> --- a/include/drm/drm_gpuvm.h
>>>> +++ b/include/drm/drm_gpuvm.h
>>>> @@ -240,14 +240,29 @@ struct drm_gpuvm {
>>>> * @ops: &drm_gpuvm_ops providing the split/merge steps to drivers
>>>> */
>>>> const struct drm_gpuvm_ops *ops;
>>>> +
>>>> + /**
>>>> + * @d_obj: Dummy GEM object; used internally to pass the GPU VMs
>>>> + * dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
>>>> + */
>>>> + struct drm_gem_object d_obj;
>>>
>>> Yeah, as pointed out in the other mail that won't work like this.
>>
>> Which one? Seems that I missed it.
>>
>>>
>>> The GPUVM contains GEM objects and therefore should probably have a reference to those objects.
>>>
>>> When those GEM objects now use the dma-resv object embedded inside the GPUVM then they also need a reference to the GPUVM to make sure the dma-resv object won't be freed before they are freed.
>>
>> My assumption here is that GEM objects being local to a certain VM never out-live the VM. We never share them with anyone, otherwise they would be external and hence wouldn't carry the VM's dma-resv. The only references I see are from the VM itself (which is fine) and from userspace. The latter isn't a problem as long as all GEM handles are closed before the VM is destroyed on FD close.
>
> But we don't want to rely on userspace doing the right thing (calling
> GEM_CLOSE before releasing the VM), do we?
I assume VMs are typically released on postclose() and drm_gem_release() is
called before that. But yeah, I guess there are indeed other issues.
>
> BTW, even though my private BOs have a ref to their exclusive VM, I just
> ran into a bug because drm_gem_shmem_free() acquires the resv lock
> (which is questionable, but that's not the topic :-)) and
> I was calling vm_put(bo->exclusive_vm) before drm_gem_shmem_free(),
> leading to a use-after-free when the gem->resv is acquired. This has
> nothing to do with drm_gpuvm, but it proves that this sort of bug is
> likely to happen if we don't pay attention.
>
>>
>> Do I miss something? Do we have use cases where this isn't true?
>
> The other case I can think of is GEM being v[un]map-ed (kernel
> mapping) after the VM was released.
>
>>
>>>
>>> This is a circle reference dependency.
>
> FWIW, I solved that by having a vm_destroy() function that kills all the
> mappings in a VM, which in turn releases all the refs the VM had on
> private BOs. Then, it's just a matter of waiting for all private GEMs
> to be destroyed to get the final steps of the VM destruction, which is
> really just about releasing resources (it's called panthor_vm_release()
> in my case) executed when the VM refcount drops to zero.
>
>>>
>>> The simplest solution I can see is to let the driver provide the GEM object to use. Amdgpu uses the root page directory object for this.
>>
>> Sure, we can do that, if we see cases where VM local GEM objects can out-live the VM.
>>>
>>> Apart from that I strongly think that we shouldn't let the GPUVM code create a driver GEM object. We did that in TTM for the ghost objects and it turned out to be a bad idea.
>
> Would that really solve the circular ref issue? I mean, if you're
> taking the root page dir object as your VM resv, you still have to make
> sure it outlives the private GEMs, which means, you either need
> to take a ref on the object, leading to the same circular ref mess, or
> you need to reset private GEMs resvs before destroying this root page
> dir GEM (whose lifecycle is likely the same as your VM object which
> embeds the drm_gpuvm instance).
>
> Making it driver-specific just moves the responsibility back to drivers
> (and also allows re-using a real GEM object instead of a dummy one,
> but I'm not sure we care about saving a few hundreds bytes at that
> point), which is a good way to not take the blame if the driver does
> something wrong, but also doesn't really help people do the right thing.
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH drm-misc-next v4 5/8] drm/gpuvm: add an abstraction for a VM / BO combination
2023-09-20 14:42 [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Danilo Krummrich
` (3 preceding siblings ...)
2023-09-20 14:42 ` [PATCH drm-misc-next v4 4/8] drm/gpuvm: add common dma-resv per struct drm_gpuvm Danilo Krummrich
@ 2023-09-20 14:42 ` Danilo Krummrich
2023-09-20 14:42 ` [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm Danilo Krummrich
` (3 subsequent siblings)
8 siblings, 0 replies; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-20 14:42 UTC (permalink / raw)
To: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel, Danilo Krummrich
This patch adds an abstraction layer between the drm_gpuva mappings of
a particular drm_gem_object and this GEM object itself. The abstraction
represents a combination of a drm_gem_object and drm_gpuvm. The
drm_gem_object holds a list of drm_gpuvm_bo structures (the structure
representing this abstraction), while each drm_gpuvm_bo contains a list of
mappings of this GEM object.
This has multiple advantages:
1) We can use the drm_gpuvm_bo structure to attach it to various lists
of the drm_gpuvm. This is useful for tracking external and evicted
objects per VM, which is introduced in subsequent patches.
2) Finding mappings of a certain drm_gem_object mapped in a certain
drm_gpuvm becomes much cheaper.
3) Drivers can derive and extend the structure to easily represent
driver specific states of a BO for a certain GPUVM.
The idea of this abstraction was taken from amdgpu, hence the credit for
this idea goes to the developers of amdgpu.
Cc: Christian König <christian.koenig@amd.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
drivers/gpu/drm/drm_gpuvm.c | 309 ++++++++++++++++++++++---
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 68 ++++--
include/drm/drm_gem.h | 32 +--
include/drm/drm_gpuvm.h | 149 +++++++++++-
4 files changed, 483 insertions(+), 75 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index cbf4b738a16c..6ee224e1121e 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -61,6 +61,18 @@
* contained within struct drm_gpuva already. Hence, for inserting &drm_gpuva
* entries from within dma-fence signalling critical sections it is enough to
* pre-allocate the &drm_gpuva structures.
+ *
+ * In order to connect a struct drm_gpuva to its backing &drm_gem_object, each
+ * &drm_gem_object maintains a list of &drm_gpuvm_bo structures, and each
+ * &drm_gpuvm_bo contains a list of &drm_gpuva structures.
+ *
+ * A &drm_gpuvm_bo is an abstraction that represents a combination of a
+ * &drm_gpuvm and a &drm_gem_object. Every such combination should be unique.
+ * This is ensured by the API through drm_gpuvm_bo_obtain() and
+ * drm_gpuvm_bo_obtain_prealloc() which first look into the corresponding
+ * &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this
+ * particular combination. If no such instance exists, a new one is created
+ * and linked to the &drm_gem_object.
+ * to the &drm_gem_object.
*/
/**
@@ -393,14 +405,21 @@
* split / merge or prefetch.
*
* The GPU VA manager also does not take care of the locking of the backing
- * &drm_gem_object buffers GPU VA lists by itself; drivers are responsible to
- * enforce mutual exclusion using either the GEMs dma_resv lock or alternatively
- * a driver specific external lock. For the latter see also
- * drm_gem_gpuva_set_lock().
+ * &drm_gem_object buffers GPU VA lists and &drm_gpuvm_bo abstractions by
+ * itself; drivers are responsible to enforce mutual exclusion using either the
+ * GEMs dma_resv lock or alternatively a driver specific external lock. For the
+ * latter see also drm_gem_gpuva_set_lock().
*
* However, the GPU VA manager contains lockdep checks to ensure callers of its
* API hold the corresponding lock whenever the &drm_gem_objects GPU VA list is
- * accessed by functions such as drm_gpuva_link() or drm_gpuva_unlink().
+ * accessed by functions such as drm_gpuva_link() or drm_gpuva_unlink(), but
+ * also drm_gpuvm_bo_obtain() and drm_gpuvm_bo_put().
+ *
+ * The latter is required since on creation and destruction of a &drm_gpuvm_bo
+ * the &drm_gpuvm_bo is attached / removed from the &drm_gem_objects gpuva list.
+ * Subsequent calls to drm_gpuvm_bo_obtain() for the same &drm_gpuvm and
+ * &drm_gem_object must be able to observe previous creations and destructions
+ * of &drm_gpuvm_bos in order to keep instances unique.
*/
/**
@@ -430,6 +449,7 @@
* {
* struct drm_gpuva_ops *ops;
* struct drm_gpuva_op *op
+ * struct drm_gpuvm_bo *vm_bo;
*
* driver_lock_va_space();
* ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
@@ -437,6 +457,10 @@
* if (IS_ERR(ops))
* return PTR_ERR(ops);
*
+ * vm_bo = drm_gpuvm_bo_obtain(gpuvm, obj);
+ * if (IS_ERR(vm_bo))
+ * return PTR_ERR(vm_bo);
+ *
* drm_gpuva_for_each_op(op, ops) {
* struct drm_gpuva *va;
*
@@ -449,7 +473,7 @@
*
* driver_vm_map();
* drm_gpuva_map(gpuvm, va, &op->map);
- * drm_gpuva_link(va);
+ * drm_gpuva_link(va, vm_bo);
*
* break;
* case DRM_GPUVA_OP_REMAP: {
@@ -476,11 +500,11 @@
* driver_vm_remap();
* drm_gpuva_remap(prev, next, &op->remap);
*
- * drm_gpuva_unlink(va);
* if (prev)
- * drm_gpuva_link(prev);
+ * drm_gpuva_link(prev, va->vm_bo);
* if (next)
- * drm_gpuva_link(next);
+ * drm_gpuva_link(next, va->vm_bo);
+ * drm_gpuva_unlink(va);
*
* break;
* }
@@ -496,6 +520,7 @@
* break;
* }
* }
+ * drm_gpuvm_bo_put(vm_bo);
* driver_unlock_va_space();
*
* return 0;
@@ -505,6 +530,7 @@
*
* struct driver_context {
* struct drm_gpuvm *gpuvm;
+ * struct drm_gpuvm_bo *vm_bo;
* struct drm_gpuva *new_va;
* struct drm_gpuva *prev_va;
* struct drm_gpuva *next_va;
@@ -525,6 +551,7 @@
* struct drm_gem_object *obj, u64 offset)
* {
* struct driver_context ctx;
+ * struct drm_gpuvm_bo *vm_bo;
* struct drm_gpuva_ops *ops;
* struct drm_gpuva_op *op;
* int ret = 0;
@@ -534,16 +561,23 @@
* ctx.new_va = kzalloc(sizeof(*ctx.new_va), GFP_KERNEL);
* ctx.prev_va = kzalloc(sizeof(*ctx.prev_va), GFP_KERNEL);
* ctx.next_va = kzalloc(sizeof(*ctx.next_va), GFP_KERNEL);
- * if (!ctx.new_va || !ctx.prev_va || !ctx.next_va) {
+ * ctx.vm_bo = drm_gpuvm_bo_create(gpuvm, obj);
+ * if (!ctx.new_va || !ctx.prev_va || !ctx.next_va || !ctx.vm_bo) {
* ret = -ENOMEM;
* goto out;
* }
*
+ * // Typically protected with a driver specific GEM gpuva lock
+ * // used in the fence signaling path for drm_gpuva_link() and
+ * // drm_gpuva_unlink(), hence pre-allocate.
+ * ctx.vm_bo = drm_gpuvm_bo_obtain_prealloc(ctx.vm_bo);
+ *
* driver_lock_va_space();
* ret = drm_gpuvm_sm_map(gpuvm, &ctx, addr, range, obj, offset);
* driver_unlock_va_space();
*
* out:
+ * drm_gpuvm_bo_put(ctx.vm_bo);
* kfree(ctx.new_va);
* kfree(ctx.prev_va);
* kfree(ctx.next_va);
@@ -556,7 +590,7 @@
*
* drm_gpuva_map(ctx->vm, ctx->new_va, &op->map);
*
- * drm_gpuva_link(ctx->new_va);
+ * drm_gpuva_link(ctx->new_va, ctx->vm_bo);
*
* // prevent the new GPUVA from being freed in
* // driver_mapping_create()
@@ -568,22 +602,23 @@
* int driver_gpuva_remap(struct drm_gpuva_op *op, void *__ctx)
* {
* struct driver_context *ctx = __ctx;
+ * struct drm_gpuva *va = op->remap.unmap->va;
*
* drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);
*
- * drm_gpuva_unlink(op->remap.unmap->va);
- * kfree(op->remap.unmap->va);
- *
* if (op->remap.prev) {
- * drm_gpuva_link(ctx->prev_va);
+ * drm_gpuva_link(ctx->prev_va, va->vm_bo);
* ctx->prev_va = NULL;
* }
*
* if (op->remap.next) {
- * drm_gpuva_link(ctx->next_va);
+ * drm_gpuva_link(ctx->next_va, va->vm_bo);
* ctx->next_va = NULL;
* }
*
+ * drm_gpuva_unlink(va);
+ * kfree(va);
+ *
* return 0;
* }
*
@@ -722,6 +757,191 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
}
EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
+/**
+ * drm_gpuvm_bo_create() - create a new instance of struct drm_gpuvm_bo
+ * @gpuvm: The &drm_gpuvm the @obj is mapped in.
+ * @obj: The &drm_gem_object being mapped in the @gpuvm.
+ *
+ * If provided by the driver, this function uses the &drm_gpuvm_ops
+ * vm_bo_alloc() callback to allocate.
+ *
+ * Returns: a pointer to the &drm_gpuvm_bo on success, NULL on failure
+ */
+struct drm_gpuvm_bo *
+drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj)
+{
+ const struct drm_gpuvm_ops *ops = gpuvm->ops;
+ struct drm_gpuvm_bo *vm_bo;
+
+ if (ops && ops->vm_bo_alloc)
+ vm_bo = ops->vm_bo_alloc();
+ else
+ vm_bo = kzalloc(sizeof(*vm_bo), GFP_KERNEL);
+
+ if (unlikely(!vm_bo))
+ return NULL;
+
+ vm_bo->vm = gpuvm;
+ vm_bo->obj = obj;
+
+ kref_init(&vm_bo->kref);
+ INIT_LIST_HEAD(&vm_bo->list.gpuva);
+ INIT_LIST_HEAD(&vm_bo->list.entry.gem);
+
+ drm_gem_object_get(obj);
+
+ return vm_bo;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_create);
+
+static void
+drm_gpuvm_bo_destroy(struct kref *kref)
+{
+ struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
+ kref);
+ struct drm_gpuvm *gpuvm = vm_bo->vm;
+ const struct drm_gpuvm_ops *ops = gpuvm->ops;
+ struct drm_gem_object *obj = vm_bo->obj;
+
+ drm_gem_gpuva_assert_lock_held(obj);
+
+ list_del(&vm_bo->list.entry.gem);
+
+ drm_gem_object_put(obj);
+
+ if (ops && ops->vm_bo_free)
+ ops->vm_bo_free(vm_bo);
+ else
+ kfree(vm_bo);
+}
+
+/**
+ * drm_gpuvm_bo_put() - drop a struct drm_gpuvm_bo reference
+ * @vm_bo: the &drm_gpuvm_bo to release the reference of
+ *
+ * This releases a reference to @vm_bo.
+ *
+ * If the reference count drops to zero, the &drm_gpuvm_bo is destroyed, which
+ * includes removing it from the GEMs gpuva list. Hence, if a call to this
+ * function can potentially drop the reference count to zero, the caller must
+ * hold the dma-resv or driver specific GEM gpuva lock.
+ */
+void
+drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo)
+{
+ if (vm_bo)
+ kref_put(&vm_bo->kref, drm_gpuvm_bo_destroy);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_put);
+
+static struct drm_gpuvm_bo *
+__drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj)
+{
+ struct drm_gpuvm_bo *vm_bo;
+
+ drm_gem_gpuva_assert_lock_held(obj);
+
+ drm_gem_for_each_gpuvm_bo(vm_bo, obj)
+ if (vm_bo->vm == gpuvm)
+ return vm_bo;
+
+ return NULL;
+}
+
+/**
+ * drm_gpuvm_bo_find() - find the &drm_gpuvm_bo for the given
+ * &drm_gpuvm and &drm_gem_object
+ * @gpuvm: The &drm_gpuvm the @obj is mapped in.
+ * @obj: The &drm_gem_object being mapped in the @gpuvm.
+ *
+ * Find the &drm_gpuvm_bo representing the combination of the given
+ * &drm_gpuvm and &drm_gem_object. If found, increases the reference
+ * count of the &drm_gpuvm_bo accordingly.
+ *
+ * Returns: a pointer to the &drm_gpuvm_bo on success, NULL on failure
+ */
+struct drm_gpuvm_bo *
+drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj)
+{
+ struct drm_gpuvm_bo *vm_bo = __drm_gpuvm_bo_find(gpuvm, obj);
+
+ return vm_bo ? drm_gpuvm_bo_get(vm_bo) : NULL;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_find);
+
+/**
+ * drm_gpuvm_bo_obtain() - obtains an instance of the &drm_gpuvm_bo for the
+ * given &drm_gpuvm and &drm_gem_object
+ * @gpuvm: The &drm_gpuvm the @obj is mapped in.
+ * @obj: The &drm_gem_object being mapped in the @gpuvm.
+ *
+ * Find the &drm_gpuvm_bo representing the combination of the given
+ * &drm_gpuvm and &drm_gem_object. If found, increases the reference
+ * count of the &drm_gpuvm_bo accordingly. If not found, allocates a new
+ * &drm_gpuvm_bo.
+ *
+ * A new &drm_gpuvm_bo is added to the GEMs gpuva list.
+ *
+ * Returns: a pointer to the &drm_gpuvm_bo on success, an ERR_PTR on failure
+ */
+struct drm_gpuvm_bo *
+drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj)
+{
+ struct drm_gpuvm_bo *vm_bo;
+
+ vm_bo = drm_gpuvm_bo_find(gpuvm, obj);
+ if (vm_bo)
+ return vm_bo;
+
+ vm_bo = drm_gpuvm_bo_create(gpuvm, obj);
+ if (!vm_bo)
+ return ERR_PTR(-ENOMEM);
+
+ list_add_tail(&vm_bo->list.entry.gem, &obj->gpuva.list);
+
+ return vm_bo;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain);
+
+/**
+ * drm_gpuvm_bo_obtain_prealloc() - obtains an instance of the &drm_gpuvm_bo
+ * for the given &drm_gpuvm and &drm_gem_object
+ * @__vm_bo: A pre-allocated struct drm_gpuvm_bo.
+ *
+ * Find the &drm_gpuvm_bo representing the combination of the given
+ * &drm_gpuvm and &drm_gem_object. If found, increases the reference
+ * count of the found &drm_gpuvm_bo accordingly, while the @__vm_bo reference
+ * count is decreased. If not found, @__vm_bo is returned without further
+ * increase of the reference count.
+ *
+ * A new &drm_gpuvm_bo is added to the GEMs gpuva list.
+ *
+ * Returns: a pointer to the found &drm_gpuvm_bo or @__vm_bo if no existing
+ * &drm_gpuvm_bo was found
+ */
+struct drm_gpuvm_bo *
+drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *__vm_bo)
+{
+ struct drm_gpuvm *gpuvm = __vm_bo->vm;
+ struct drm_gem_object *obj = __vm_bo->obj;
+ struct drm_gpuvm_bo *vm_bo;
+
+ vm_bo = drm_gpuvm_bo_find(gpuvm, obj);
+ if (vm_bo) {
+ drm_gpuvm_bo_put(__vm_bo);
+ return vm_bo;
+ }
+
+ list_add_tail(&__vm_bo->list.entry.gem, &obj->gpuva.list);
+
+ return __vm_bo;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain_prealloc);
+
static int
__drm_gpuva_insert(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va)
@@ -811,24 +1031,33 @@ EXPORT_SYMBOL_GPL(drm_gpuva_remove);
/**
* drm_gpuva_link() - link a &drm_gpuva
* @va: the &drm_gpuva to link
+ * @vm_bo: the &drm_gpuvm_bo to add the &drm_gpuva to
*
- * This adds the given &va to the GPU VA list of the &drm_gem_object it is
- * associated with.
+ * This adds the given &va to the GPU VA list of the &drm_gpuvm_bo and the
+ * &drm_gpuvm_bo to the &drm_gem_object it is associated with.
+ *
+ * For every &drm_gpuva entry added to the &drm_gpuvm_bo an additional
+ * reference of the latter is taken.
*
* This function expects the caller to protect the GEM's GPUVA list against
- * concurrent access using the GEMs dma_resv lock.
+ * concurrent access using either the GEMs dma_resv lock or a driver specific
+ * lock set through drm_gem_gpuva_set_lock().
*/
void
-drm_gpuva_link(struct drm_gpuva *va)
+drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo)
{
struct drm_gem_object *obj = va->gem.obj;
if (unlikely(!obj))
return;
+ WARN_ON(obj != vm_bo->obj);
drm_gem_gpuva_assert_lock_held(obj);
- list_add_tail(&va->gem.entry, &obj->gpuva.list);
+ drm_gpuvm_bo_get(vm_bo);
+
+ va->vm_bo = vm_bo;
+ list_add_tail(&va->gem.entry, &vm_bo->list.gpuva);
}
EXPORT_SYMBOL_GPL(drm_gpuva_link);
@@ -839,13 +1068,22 @@ EXPORT_SYMBOL_GPL(drm_gpuva_link);
* This removes the given &va from the GPU VA list of the &drm_gem_object it is
* associated with.
*
+ * This removes the given &va from the GPU VA list of the &drm_gpuvm_bo and
+ * the &drm_gpuvm_bo from the &drm_gem_object it is associated with in case
+ * this call unlinks the last &drm_gpuva from the &drm_gpuvm_bo.
+ *
+ * For every &drm_gpuva entry removed from the &drm_gpuvm_bo a reference of
+ * the latter is dropped.
+ *
* This function expects the caller to protect the GEM's GPUVA list against
- * concurrent access using the GEMs dma_resv lock.
+ * concurrent access using either the GEMs dma_resv lock or a driver specific
+ * lock set through drm_gem_gpuva_set_lock().
*/
void
drm_gpuva_unlink(struct drm_gpuva *va)
{
struct drm_gem_object *obj = va->gem.obj;
+ struct drm_gpuvm_bo *vm_bo = va->vm_bo;
if (unlikely(!obj))
return;
@@ -853,6 +1091,9 @@ drm_gpuva_unlink(struct drm_gpuva *va)
drm_gem_gpuva_assert_lock_held(obj);
list_del_init(&va->gem.entry);
+ va->vm_bo = NULL;
+
+ drm_gpuvm_bo_put(vm_bo);
}
EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
@@ -997,10 +1238,10 @@ drm_gpuva_remap(struct drm_gpuva *prev,
struct drm_gpuva *next,
struct drm_gpuva_op_remap *op)
{
- struct drm_gpuva *curr = op->unmap->va;
- struct drm_gpuvm *gpuvm = curr->vm;
+ struct drm_gpuva *va = op->unmap->va;
+ struct drm_gpuvm *gpuvm = va->vm;
- drm_gpuva_remove(curr);
+ drm_gpuva_remove(va);
if (op->prev) {
drm_gpuva_init_from_op(prev, op->prev);
@@ -1644,9 +1885,8 @@ drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm,
EXPORT_SYMBOL_GPL(drm_gpuvm_prefetch_ops_create);
/**
- * drm_gpuvm_gem_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM
- * @gpuvm: the &drm_gpuvm representing the GPU VA space
- * @obj: the &drm_gem_object to unmap
+ * drm_gpuvm_bo_unmap_ops_create() - creates the &drm_gpuva_ops to unmap a GEM
+ * @vm_bo: the &drm_gpuvm_bo abstraction
*
* This function creates a list of operations to perform unmapping for every
* GPUVA attached to a GEM.
@@ -1663,15 +1903,14 @@ EXPORT_SYMBOL_GPL(drm_gpuvm_prefetch_ops_create);
* Returns: a pointer to the &drm_gpuva_ops on success, an ERR_PTR on failure
*/
struct drm_gpuva_ops *
-drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
- struct drm_gem_object *obj)
+drm_gpuvm_bo_unmap_ops_create(struct drm_gpuvm_bo *vm_bo)
{
struct drm_gpuva_ops *ops;
struct drm_gpuva_op *op;
struct drm_gpuva *va;
int ret;
- drm_gem_gpuva_assert_lock_held(obj);
+ drm_gem_gpuva_assert_lock_held(vm_bo->obj);
ops = kzalloc(sizeof(*ops), GFP_KERNEL);
if (!ops)
@@ -1679,8 +1918,8 @@ drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
INIT_LIST_HEAD(&ops->list);
- drm_gem_for_each_gpuva(va, obj) {
- op = gpuva_op_alloc(gpuvm);
+ drm_gpuvm_bo_for_each_va(va, vm_bo) {
+ op = gpuva_op_alloc(vm_bo->vm);
if (!op) {
ret = -ENOMEM;
goto err_free_ops;
@@ -1694,10 +1933,10 @@ drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
return ops;
err_free_ops:
- drm_gpuva_ops_free(gpuvm, ops);
+ drm_gpuva_ops_free(vm_bo->vm, ops);
return ERR_PTR(ret);
}
-EXPORT_SYMBOL_GPL(drm_gpuvm_gem_unmap_ops_create);
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_unmap_ops_create);
/**
* drm_gpuva_ops_free() - free the given &drm_gpuva_ops
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index a80ac8767843..cf709afd2ac7 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -62,6 +62,8 @@ struct bind_job_op {
enum vm_bind_op op;
u32 flags;
+ struct drm_gpuvm_bo *vm_bo;
+
struct {
u64 addr;
u64 range;
@@ -1113,22 +1115,28 @@ bind_validate_region(struct nouveau_job *job)
}
static void
-bind_link_gpuvas(struct drm_gpuva_ops *ops, struct nouveau_uvma_prealloc *new)
+bind_link_gpuvas(struct bind_job_op *bop)
{
+ struct nouveau_uvma_prealloc *new = &bop->new;
+ struct drm_gpuvm_bo *vm_bo = bop->vm_bo;
+ struct drm_gpuva_ops *ops = bop->ops;
struct drm_gpuva_op *op;
drm_gpuva_for_each_op(op, ops) {
switch (op->op) {
case DRM_GPUVA_OP_MAP:
- drm_gpuva_link(&new->map->va);
+ drm_gpuva_link(&new->map->va, vm_bo);
break;
- case DRM_GPUVA_OP_REMAP:
+ case DRM_GPUVA_OP_REMAP: {
+ struct drm_gpuva *va = op->remap.unmap->va;
+
if (op->remap.prev)
- drm_gpuva_link(&new->prev->va);
+ drm_gpuva_link(&new->prev->va, va->vm_bo);
if (op->remap.next)
- drm_gpuva_link(&new->next->va);
- drm_gpuva_unlink(op->remap.unmap->va);
+ drm_gpuva_link(&new->next->va, va->vm_bo);
+ drm_gpuva_unlink(va);
break;
+ }
case DRM_GPUVA_OP_UNMAP:
drm_gpuva_unlink(op->unmap.va);
break;
@@ -1150,10 +1158,18 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
list_for_each_op(op, &bind_job->ops) {
if (op->op == OP_MAP) {
- op->gem.obj = drm_gem_object_lookup(job->file_priv,
- op->gem.handle);
- if (!op->gem.obj)
+ struct drm_gem_object *obj;
+
+ obj = drm_gem_object_lookup(job->file_priv,
+ op->gem.handle);
+ if (!(op->gem.obj = obj))
return -ENOENT;
+
+ dma_resv_lock(obj->resv, NULL);
+ op->vm_bo = drm_gpuvm_bo_obtain(&uvmm->base, obj);
+ dma_resv_unlock(obj->resv);
+ if (IS_ERR(op->vm_bo))
+ return PTR_ERR(op->vm_bo);
}
ret = bind_validate_op(job, op);
@@ -1364,7 +1380,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
case OP_UNMAP_SPARSE:
case OP_MAP:
case OP_UNMAP:
- bind_link_gpuvas(op->ops, &op->new);
+ bind_link_gpuvas(op);
break;
default:
break;
@@ -1511,6 +1527,12 @@ nouveau_uvmm_bind_job_free_work_fn(struct work_struct *work)
if (!IS_ERR_OR_NULL(op->ops))
drm_gpuva_ops_free(&uvmm->base, op->ops);
+ if (!IS_ERR_OR_NULL(op->vm_bo)) {
+ dma_resv_lock(obj->resv, NULL);
+ drm_gpuvm_bo_put(op->vm_bo);
+ dma_resv_unlock(obj->resv);
+ }
+
if (obj)
drm_gem_object_put(obj);
}
@@ -1776,15 +1798,18 @@ void
nouveau_uvmm_bo_map_all(struct nouveau_bo *nvbo, struct nouveau_mem *mem)
{
struct drm_gem_object *obj = &nvbo->bo.base;
+ struct drm_gpuvm_bo *vm_bo;
struct drm_gpuva *va;
dma_resv_assert_held(obj->resv);
- drm_gem_for_each_gpuva(va, obj) {
- struct nouveau_uvma *uvma = uvma_from_va(va);
+ drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
+ drm_gpuvm_bo_for_each_va(va, vm_bo) {
+ struct nouveau_uvma *uvma = uvma_from_va(va);
- nouveau_uvma_map(uvma, mem);
- drm_gpuva_invalidate(va, false);
+ nouveau_uvma_map(uvma, mem);
+ drm_gpuva_invalidate(va, false);
+ }
}
}
@@ -1792,15 +1817,18 @@ void
nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo)
{
struct drm_gem_object *obj = &nvbo->bo.base;
+ struct drm_gpuvm_bo *vm_bo;
struct drm_gpuva *va;
dma_resv_assert_held(obj->resv);
- drm_gem_for_each_gpuva(va, obj) {
- struct nouveau_uvma *uvma = uvma_from_va(va);
+ drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
+ drm_gpuvm_bo_for_each_va(va, vm_bo) {
+ struct nouveau_uvma *uvma = uvma_from_va(va);
- nouveau_uvma_unmap(uvma);
- drm_gpuva_invalidate(va, true);
+ nouveau_uvma_unmap(uvma);
+ drm_gpuva_invalidate(va, true);
+ }
}
}
@@ -1847,14 +1875,14 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
kernel_managed_addr, kernel_managed_size,
NULL, 0, &cli->uvmm.vmm.vmm);
if (ret)
- goto out_free_gpuva_mgr;
+ goto out_free_gpuvm;
cli->uvmm.vmm.cli = cli;
mutex_unlock(&cli->mutex);
return 0;
-out_free_gpuva_mgr:
+out_free_gpuvm:
drm_gpuvm_destroy(&uvmm->base);
out_unlock:
mutex_unlock(&cli->mutex);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index bc9f6aa2f3fe..7147978d82d8 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -571,7 +571,7 @@ int drm_gem_evict(struct drm_gem_object *obj);
* drm_gem_gpuva_init() - initialize the gpuva list of a GEM object
* @obj: the &drm_gem_object
*
- * This initializes the &drm_gem_object's &drm_gpuva list.
+ * This initializes the &drm_gem_object's &drm_gpuvm_bo list.
*
* Calling this function is only necessary for drivers intending to support the
* &drm_driver_feature DRIVER_GEM_GPUVA.
@@ -584,28 +584,28 @@ static inline void drm_gem_gpuva_init(struct drm_gem_object *obj)
}
/**
- * drm_gem_for_each_gpuva() - iternator to walk over a list of gpuvas
- * @entry__: &drm_gpuva structure to assign to in each iteration step
- * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with
+ * drm_gem_for_each_gpuvm_bo() - iterator to walk over a list of &drm_gpuvm_bo
+ * @entry__: &drm_gpuvm_bo structure to assign to in each iteration step
+ * @obj__: the &drm_gem_object the &drm_gpuvm_bo to walk are associated with
*
- * This iterator walks over all &drm_gpuva structures associated with the
- * &drm_gpuva_manager.
+ * This iterator walks over all &drm_gpuvm_bo structures associated with the
+ * &drm_gem_object.
*/
-#define drm_gem_for_each_gpuva(entry__, obj__) \
- list_for_each_entry(entry__, &(obj__)->gpuva.list, gem.entry)
+#define drm_gem_for_each_gpuvm_bo(entry__, obj__) \
+ list_for_each_entry(entry__, &(obj__)->gpuva.list, list.entry.gem)
/**
- * drm_gem_for_each_gpuva_safe() - iternator to safely walk over a list of
- * gpuvas
- * @entry__: &drm_gpuva structure to assign to in each iteration step
- * @next__: &next &drm_gpuva to store the next step
- * @obj__: the &drm_gem_object the &drm_gpuvas to walk are associated with
+ * drm_gem_for_each_gpuvm_bo_safe() - iterator to safely walk over a list of
+ * &drm_gpuvm_bo
+ * @entry__: &drm_gpuvm_bo structure to assign to in each iteration step
+ * @next__: &next &drm_gpuvm_bo to store the next step
+ * @obj__: the &drm_gem_object the &drm_gpuvm_bo to walk are associated with
*
- * This iterator walks over all &drm_gpuva structures associated with the
+ * This iterator walks over all &drm_gpuvm_bo structures associated with the
* &drm_gem_object. It is implemented with list_for_each_entry_safe(), hence
* it is save against removal of elements.
*/
-#define drm_gem_for_each_gpuva_safe(entry__, next__, obj__) \
- list_for_each_entry_safe(entry__, next__, &(obj__)->gpuva.list, gem.entry)
+#define drm_gem_for_each_gpuvm_bo_safe(entry__, next__, obj__) \
+ list_for_each_entry_safe(entry__, next__, &(obj__)->gpuva.list, list.entry.gem)
#endif /* __DRM_GEM_H__ */
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 6666c07d7c3e..2c9ad6eb9401 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -32,6 +32,7 @@
#include <drm/drm_gem.h>
struct drm_gpuvm;
+struct drm_gpuvm_bo;
struct drm_gpuvm_ops;
/**
@@ -72,6 +73,12 @@ struct drm_gpuva {
*/
struct drm_gpuvm *vm;
+ /**
+ * @vm_bo: the &drm_gpuvm_bo abstraction for the mapped
+ * &drm_gem_object
+ */
+ struct drm_gpuvm_bo *vm_bo;
+
/**
* @flags: the &drm_gpuva_flags for this mapping
*/
@@ -107,7 +114,7 @@ struct drm_gpuva {
struct drm_gem_object *obj;
/**
- * @entry: the &list_head to attach this object to a &drm_gem_object
+ * @entry: the &list_head to attach this object to a &drm_gpuvm_bo
*/
struct list_head entry;
} gem;
@@ -140,7 +147,7 @@ struct drm_gpuva {
int drm_gpuva_insert(struct drm_gpuvm *gpuvm, struct drm_gpuva *va);
void drm_gpuva_remove(struct drm_gpuva *va);
-void drm_gpuva_link(struct drm_gpuva *va);
+void drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo);
void drm_gpuva_unlink(struct drm_gpuva *va);
struct drm_gpuva *drm_gpuva_find(struct drm_gpuvm *gpuvm,
@@ -341,6 +348,117 @@ __drm_gpuva_next(struct drm_gpuva *va)
#define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \
list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry)
+/**
+ * struct drm_gpuvm_bo - structure representing a &drm_gpuvm and
+ * &drm_gem_object combination
+ *
+ * This structure is an abstraction representing a &drm_gpuvm and
+ * &drm_gem_object combination. It serves as an indirection to accelerate
+ * iterating all &drm_gpuvas within a &drm_gpuvm backed by the same
+ * &drm_gem_object.
+ *
+ * Furthermore it is used to cache evicted GEM objects for a certain GPU-VM to
+ * accelerate validation.
+ *
+ * Typically, drivers want to create an instance of a struct drm_gpuvm_bo once
+ * a GEM object is mapped first in a GPU-VM and release the instance once the
+ * last mapping of the GEM object in this GPU-VM is unmapped.
+ */
+struct drm_gpuvm_bo {
+
+ /**
+ * @vm: The &drm_gpuvm the @obj is mapped in.
+ */
+ struct drm_gpuvm *vm;
+
+ /**
+ * @obj: The &drm_gem_object being mapped in the @gpuvm.
+ */
+ struct drm_gem_object *obj;
+
+ /**
+ * @kref: The reference count for this &drm_gpuvm_bo.
+ */
+ struct kref kref;
+
+ /**
+ * @list: Structure containing all &list_heads.
+ */
+ struct {
+ /**
+ * @gpuva: The list of linked &drm_gpuvas.
+ */
+ struct list_head gpuva;
+
+ /**
+ * @entry: Structure containing all &list_heads serving as
+ * entry.
+ */
+ struct {
+ /**
+ * @gem: List entry to attach to the &drm_gem_objects
+ * gpuva list.
+ */
+ struct list_head gem;
+ } entry;
+ } list;
+};
+
+struct drm_gpuvm_bo *
+drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj);
+
+struct drm_gpuvm_bo *
+drm_gpuvm_bo_obtain(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj);
+struct drm_gpuvm_bo *
+drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *vm_bo);
+
+/**
+ * drm_gpuvm_bo_get() - acquire a struct drm_gpuvm_bo reference
+ * @vm_bo: the &drm_gpuvm_bo to acquire the reference of
+ *
+ * This function acquires an additional reference to @vm_bo. It is illegal to
+ * call this without already holding a reference. No locks required.
+ */
+static inline struct drm_gpuvm_bo *
+drm_gpuvm_bo_get(struct drm_gpuvm_bo *vm_bo)
+{
+ kref_get(&vm_bo->kref);
+ return vm_bo;
+}
+
+void drm_gpuvm_bo_put(struct drm_gpuvm_bo *vm_bo);
+
+struct drm_gpuvm_bo *
+drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj);
+
+/**
+ * drm_gpuvm_bo_for_each_va() - iterator to walk over a list of &drm_gpuva
+ * @va__: &drm_gpuva structure to assign to in each iteration step
+ * @vm_bo__: the &drm_gpuvm_bo the &drm_gpuva to walk are associated with
+ *
+ * This iterator walks over all &drm_gpuva structures associated with the
+ * &drm_gpuvm_bo.
+ */
+#define drm_gpuvm_bo_for_each_va(va__, vm_bo__) \
+ list_for_each_entry(va__, &(vm_bo__)->list.gpuva, gem.entry)
+
+/**
+ * drm_gpuvm_bo_for_each_va_safe() - iterator to safely walk over a list of
+ * &drm_gpuva
+ * @va__: &drm_gpuva structure to assign to in each iteration step
+ * @next__: &next &drm_gpuva to store the next step
+ * @vm_bo__: the &drm_gpuvm_bo the &drm_gpuva to walk are associated with
+ *
+ * This iterator walks over all &drm_gpuva structures associated with the
+ * &drm_gpuvm_bo. It is implemented with list_for_each_entry_safe(), hence
+ * it is safe against removal of elements.
+ */
+#define drm_gpuvm_bo_for_each_va_safe(va__, next__, vm_bo__) \
+ list_for_each_entry_safe(va__, next__, &(vm_bo__)->list.gpuva, gem.entry)
+
/**
* enum drm_gpuva_op_type - GPU VA operation type
*
@@ -610,8 +728,7 @@ drm_gpuvm_prefetch_ops_create(struct drm_gpuvm *gpuvm,
u64 addr, u64 range);
struct drm_gpuva_ops *
-drm_gpuvm_gem_unmap_ops_create(struct drm_gpuvm *gpuvm,
- struct drm_gem_object *obj);
+drm_gpuvm_bo_unmap_ops_create(struct drm_gpuvm_bo *vm_bo);
void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
struct drm_gpuva_ops *ops);
@@ -655,6 +772,30 @@ struct drm_gpuvm_ops {
*/
void (*op_free)(struct drm_gpuva_op *op);
+ /**
+ * @vm_bo_alloc: called when the &drm_gpuvm allocates
+ * a struct drm_gpuvm_bo
+ *
+ * Some drivers may want to embed struct drm_gpuvm_bo into driver
+ * specific structures. By implementing this callback drivers can
+ * allocate memory accordingly.
+ *
+ * This callback is optional.
+ */
+ struct drm_gpuvm_bo *(*vm_bo_alloc)(void);
+
+ /**
+ * @vm_bo_free: called when the &drm_gpuvm frees a
+ * struct drm_gpuvm_bo
+ *
+ * Some drivers may want to embed struct drm_gpuvm_bo into driver
+ * specific structures. By implementing this callback drivers can
+ * free the previously allocated memory accordingly.
+ *
+ * This callback is optional.
+ */
+ void (*vm_bo_free)(struct drm_gpuvm_bo *vm_bo);
+
/**
* @sm_step_map: called from &drm_gpuvm_sm_map to finally insert the
* mapping once all previous steps were completed
--
2.41.0
^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
2023-09-20 14:42 [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Danilo Krummrich
` (4 preceding siblings ...)
2023-09-20 14:42 ` [PATCH drm-misc-next v4 5/8] drm/gpuvm: add an abstraction for a VM / BO combination Danilo Krummrich
@ 2023-09-20 14:42 ` Danilo Krummrich
2023-09-20 16:40 ` kernel test robot
` (2 more replies)
2023-09-20 14:42 ` [PATCH drm-misc-next v4 7/8] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation Danilo Krummrich
` (2 subsequent siblings)
8 siblings, 3 replies; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-20 14:42 UTC (permalink / raw)
To: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel, Danilo Krummrich
Introduce flags for struct drm_gpuvm; this is required by subsequent
commits.
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
drivers/gpu/drm/drm_gpuvm.c | 3 ++-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
include/drm/drm_gpuvm.h | 17 ++++++++++++++++-
3 files changed, 19 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 6ee224e1121e..6e9d2d478bb8 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -705,7 +705,7 @@ drm_gpuva_range_valid(struct drm_gpuvm *gpuvm,
*/
void
drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
- const char *name,
+ const char *name, enum drm_gpuvm_flags flags,
u64 start_offset, u64 range,
u64 reserve_offset, u64 reserve_range,
const struct drm_gpuvm_ops *ops)
@@ -718,6 +718,7 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
gpuvm->mm_range = range;
gpuvm->name = name ? name : "unknown";
+ gpuvm->flags = flags;
gpuvm->ops = ops;
memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index cf709afd2ac7..3de8533841db 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1864,7 +1864,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
uvmm->kernel_managed_addr = kernel_managed_addr;
uvmm->kernel_managed_size = kernel_managed_size;
- drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name,
+ drm_gpuvm_init(&uvmm->base, cli->drm->dev, cli->name, 0,
NOUVEAU_VA_SPACE_START,
NOUVEAU_VA_SPACE_END,
kernel_managed_addr, kernel_managed_size,
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 2c9ad6eb9401..f57ad1f0f0d0 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -192,6 +192,16 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va)
return va->flags & DRM_GPUVA_INVALIDATED;
}
+/**
+ * enum drm_gpuvm_flags - flags for struct drm_gpuvm
+ */
+enum drm_gpuvm_flags {
+ /**
+ * @DRM_GPUVM_USERBITS: user defined bits
+ */
+ DRM_GPUVM_USERBITS = (1 << 0),
+};
+
/**
* struct drm_gpuvm - DRM GPU VA Manager
*
@@ -210,6 +220,11 @@ struct drm_gpuvm {
*/
const char *name;
+ /**
+ * @flags: the &drm_gpuvm_flags of this GPUVM
+ */
+ enum drm_gpuvm_flags flags;
+
/**
* @mm_start: start of the VA space
*/
@@ -256,7 +271,7 @@ struct drm_gpuvm {
};
void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
- const char *name,
+ const char *name, enum drm_gpuvm_flags flags,
u64 start_offset, u64 range,
u64 reserve_offset, u64 reserve_range,
const struct drm_gpuvm_ops *ops);
--
2.41.0
^ permalink raw reply related [flat|nested] 29+ messages in thread* Re: [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
2023-09-20 14:42 ` [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm Danilo Krummrich
@ 2023-09-20 16:40 ` kernel test robot
2023-09-22 11:42 ` Boris Brezillon
2023-09-22 11:58 ` Boris Brezillon
2 siblings, 0 replies; 29+ messages in thread
From: kernel test robot @ 2023-09-20 16:40 UTC (permalink / raw)
To: Danilo Krummrich, airlied, daniel, matthew.brost,
thomas.hellstrom, sarah.walker, donald.robson, boris.brezillon,
christian.koenig, faith.ekstrand
Cc: oe-kbuild-all, nouveau, Danilo Krummrich, linux-kernel, dri-devel
Hi Danilo,
kernel test robot noticed the following build warnings:
[auto build test WARNING on 1c7a387ffef894b1ab3942f0482dac7a6e0a909c]
url: https://github.com/intel-lab-lkp/linux/commits/Danilo-Krummrich/drm-gpuvm-rename-struct-drm_gpuva_manager-to-struct-drm_gpuvm/20230920-224605
base: 1c7a387ffef894b1ab3942f0482dac7a6e0a909c
patch link: https://lore.kernel.org/r/20230920144343.64830-7-dakr%40redhat.com
patch subject: [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
config: alpha-allyesconfig (https://download.01.org/0day-ci/archive/20230921/202309210041.Ypce0gUk-lkp@intel.com/config)
compiler: alpha-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20230921/202309210041.Ypce0gUk-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202309210041.Ypce0gUk-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/gpu/drm/drm_gpuvm.c:712: warning: Function parameter or member 'flags' not described in 'drm_gpuvm_init'
vim +712 drivers/gpu/drm/drm_gpuvm.c
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 689
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 690 /**
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 691 * drm_gpuvm_init() - initialize a &drm_gpuvm
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 692 * @gpuvm: pointer to the &drm_gpuvm to initialize
52ef25512ca721 drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 693 * @drm: the drivers &drm_device
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 694 * @name: the name of the GPU VA space
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 695 * @start_offset: the start offset of the GPU VA space
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 696 * @range: the size of the GPU VA space
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 697 * @reserve_offset: the start of the kernel reserved GPU VA area
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 698 * @reserve_range: the size of the kernel reserved GPU VA area
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 699 * @ops: &drm_gpuvm_ops called on &drm_gpuvm_sm_map / &drm_gpuvm_sm_unmap
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 700 *
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 701 * The &drm_gpuvm must be initialized with this function before use.
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 702 *
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 703 * Note that @gpuvm must be cleared to 0 before calling this function. The given
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 704 * &name is expected to be managed by the surrounding driver structures.
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 705 */
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 706 void
52ef25512ca721 drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 707 drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
790facc6dac6ef drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 708 const char *name, enum drm_gpuva_flags flags,
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 709 u64 start_offset, u64 range,
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 710 u64 reserve_offset, u64 reserve_range,
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 711 const struct drm_gpuvm_ops *ops)
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 @712 {
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 713 gpuvm->rb.tree = RB_ROOT_CACHED;
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 714 INIT_LIST_HEAD(&gpuvm->rb.list);
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 715
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 716 drm_gpuvm_check_overflow(start_offset, range);
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 717 gpuvm->mm_start = start_offset;
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 718 gpuvm->mm_range = range;
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 719
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 720 gpuvm->name = name ? name : "unknown";
790facc6dac6ef drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 721 gpuvm->flags = flags;
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 722 gpuvm->ops = ops;
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 723
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 724 memset(&gpuvm->kernel_alloc_node, 0, sizeof(struct drm_gpuva));
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 725
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 726 if (reserve_range) {
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 727 gpuvm->kernel_alloc_node.va.addr = reserve_offset;
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 728 gpuvm->kernel_alloc_node.va.range = reserve_range;
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 729
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 730 if (likely(!drm_gpuvm_check_overflow(reserve_offset,
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 731 reserve_range)))
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 732 __drm_gpuva_insert(gpuvm, &gpuvm->kernel_alloc_node);
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 733 }
52ef25512ca721 drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 734
52ef25512ca721 drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 735 drm_gem_private_object_init(drm, &gpuvm->d_obj, 0);
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 736 }
06f9274d201d5d drivers/gpu/drm/drm_gpuvm.c Danilo Krummrich 2023-09-20 737 EXPORT_SYMBOL_GPL(drm_gpuvm_init);
e6303f323b1ad9 drivers/gpu/drm/drm_gpuva_mgr.c Danilo Krummrich 2023-07-20 738
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 29+ messages in thread

* Re: [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
2023-09-20 14:42 ` [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm Danilo Krummrich
2023-09-20 16:40 ` kernel test robot
@ 2023-09-22 11:42 ` Boris Brezillon
2023-09-22 11:58 ` Boris Brezillon
2 siblings, 0 replies; 29+ messages in thread
From: Boris Brezillon @ 2023-09-22 11:42 UTC (permalink / raw)
To: Danilo Krummrich
Cc: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, christian.koenig, faith.ekstrand, dri-devel,
nouveau, linux-kernel
On Wed, 20 Sep 2023 16:42:39 +0200
Danilo Krummrich <dakr@redhat.com> wrote:
> void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
> - const char *name,
> + const char *name, enum drm_gpuva_flags flags,
s/drm_gpuva_flags/drm_gpuvm_flags/gc
* Re: [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
2023-09-20 14:42 ` [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm Danilo Krummrich
2023-09-20 16:40 ` kernel test robot
2023-09-22 11:42 ` Boris Brezillon
@ 2023-09-22 11:58 ` Boris Brezillon
2023-09-27 16:52 ` Danilo Krummrich
2 siblings, 1 reply; 29+ messages in thread
From: Boris Brezillon @ 2023-09-22 11:58 UTC (permalink / raw)
To: Danilo Krummrich
Cc: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, christian.koenig, faith.ekstrand, dri-devel,
nouveau, linux-kernel
On Wed, 20 Sep 2023 16:42:39 +0200
Danilo Krummrich <dakr@redhat.com> wrote:
> +/**
> + * enum drm_gpuvm_flags - flags for struct drm_gpuvm
> + */
> +enum drm_gpuvm_flags {
> + /**
> + * @DRM_GPUVM_USERBITS: user defined bits
> + */
> + DRM_GPUVM_USERBITS = (1 << 0),
Nit: I tried declaring driver-specific flags, and I find this
counter-intuitive. You basically end up with something like:
enum my_gpuvm_flags {
MY_FLAG_X = DRM_GPUVM_USERBITS,
MY_FLAG_Y = DRM_GPUVM_USERBITS << 1,
};
instead of the usual
enum my_gpuvm_flags {
MY_FLAG_X = BIT(0),
MY_FLAG_Y = BIT(1),
};
pattern.
Another issue I see coming is if we end up adding more core flags and
drivers start falling short of bits for their own flags. This makes me
wonder if we shouldn't kill this notion of USER flags and let drivers
store their flags in some dedicated field, given they're likely to
derive drm_gpuvm and drm_gpuva with their own object anyway.
> +};
> +
* Re: [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
2023-09-22 11:58 ` Boris Brezillon
@ 2023-09-27 16:52 ` Danilo Krummrich
2023-09-28 12:19 ` Boris Brezillon
0 siblings, 1 reply; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-27 16:52 UTC (permalink / raw)
To: Boris Brezillon
Cc: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, christian.koenig, faith.ekstrand, dri-devel,
nouveau, linux-kernel
On 9/22/23 13:58, Boris Brezillon wrote:
> On Wed, 20 Sep 2023 16:42:39 +0200
> Danilo Krummrich <dakr@redhat.com> wrote:
>
>> +/**
>> + * enum drm_gpuvm_flags - flags for struct drm_gpuvm
>> + */
>> +enum drm_gpuvm_flags {
>> + /**
>> + * @DRM_GPUVM_USERBITS: user defined bits
>> + */
>> + DRM_GPUVM_USERBITS = (1 << 0),
>
> Nit: I tried declaring driver-specific flags, and I find this
> counter-intuitive. You basically end up with something like:
>
> enum my_gpuvm_flags {
> MY_FLAG_X = DRM_GPUVM_USERBITS,
> MY_FLAG_Y = DRM_GPUVM_USERBITS << 1,
> };
>
> instead of the usual
>
> enum my_gpuvm_flags {
> MY_FLAG_X = BIT(0),
> MY_FLAG_Y = BIT(1),
> };
>
> pattern.
Right, same as with dma_fence flags.
>
> Another issue I see coming is if we end up adding more core flags and
> drivers start falling short of bits for their own flags. This makes me
> wonder if we shouldn't kill this notion of USER flags and let drivers
> store their flags in some dedicated field, given they're likely to
> derive drm_gpuvm and drm_gpuva with their own object anyway.
The only reason I have this in the code is that Xe asked for this with
drm_gpuva_flags. Hence, for consistency reasons I added it for drm_gpuvm_flags
too.
Drivers can still have their own flag fields if needed, otherwise I guess it
doesn't really hurt to keep DRM_GPUVM_USERBITS in case someone wants to use it.
>
>> +};
>> +
>
* Re: [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
2023-09-27 16:52 ` Danilo Krummrich
@ 2023-09-28 12:19 ` Boris Brezillon
0 siblings, 0 replies; 29+ messages in thread
From: Boris Brezillon @ 2023-09-28 12:19 UTC (permalink / raw)
To: Danilo Krummrich
Cc: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, christian.koenig, faith.ekstrand, dri-devel,
nouveau, linux-kernel
On Wed, 27 Sep 2023 18:52:55 +0200
Danilo Krummrich <dakr@redhat.com> wrote:
> On 9/22/23 13:58, Boris Brezillon wrote:
> > On Wed, 20 Sep 2023 16:42:39 +0200
> > Danilo Krummrich <dakr@redhat.com> wrote:
> >
> >> +/**
> >> + * enum drm_gpuvm_flags - flags for struct drm_gpuvm
> >> + */
> >> +enum drm_gpuvm_flags {
> >> + /**
> >> + * @DRM_GPUVM_USERBITS: user defined bits
> >> + */
> >> + DRM_GPUVM_USERBITS = (1 << 0),
> >
> > Nit: I tried declaring driver-specific flags, and I find this
> > counter-intuitive. You basically end up with something like:
> >
> > enum my_gpuvm_flags {
> > MY_FLAG_X = DRM_GPUVM_USERBITS,
> > MY_FLAG_Y = DRM_GPUVM_USERBITS << 1,
> > };
> >
> > instead of the usual
> >
> > enum my_gpuvm_flags {
> > MY_FLAG_X = BIT(0),
> > MY_FLAG_Y = BIT(1),
> > };
> >
> > pattern.
>
> Right, same as with dma_fence flags.
>
> >
> > Another issue I see coming is if we end up adding more core flags and
> > drivers start falling short of bits for their own flags. This makes me
> > wonder if we shouldn't kill this notion of USER flags and let drivers
> > store their flags in some dedicated field, given they're likely to
> > derive drm_gpuvm and drm_gpuva with their own object anyway.
>
> The only reason I have this in the code is that Xe asked for this with
> drm_gpuva_flags. Hence, for consistency reasons I added it for drm_gpuvm_flags
> too.
Yeah, my comment stands for both drm_gpuva_flags and drm_gpuvm_flags
actually.
>
> Drivers can still have their own flag fields if needed, otherwise I guess it
> doesn't really hurt to keep DRM_GPUVM_USERBITS in case someone wants to use it.
Sure, it doesn't hurt, but given drivers are inheriting from this
object anyway, I thought it'd be simpler/more future proof to let them
have their flags in a separate field. It's not like we care about
saving 4 bytes in such a big object. Might be a bit different for
drm_gpuva given the amount of live mappings one VM might have, but even
there, I suspect the current drm_gpuva size is going to hurt if we have
millions of 4k mappings, so, four more bytes won't make a huge
difference...
Anyway, I don't think that's a blocker, I just thought I'd mention it,
that's all.
* [PATCH drm-misc-next v4 7/8] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation
2023-09-20 14:42 [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Danilo Krummrich
` (5 preceding siblings ...)
2023-09-20 14:42 ` [PATCH drm-misc-next v4 6/8] drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm Danilo Krummrich
@ 2023-09-20 14:42 ` Danilo Krummrich
2023-09-22 11:45 ` Boris Brezillon
2023-09-20 14:42 ` [PATCH drm-misc-next v4 8/8] drm/nouveau: GPUVM dma-resv/extobj handling, " Danilo Krummrich
2023-09-28 12:09 ` [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Boris Brezillon
8 siblings, 1 reply; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-20 14:42 UTC (permalink / raw)
To: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel, Danilo Krummrich
So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.
However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM implementation. In this context, this patch aims
at generalizing the following elements.
1) Provide a common dma-resv for GEM objects not being used outside of
this GPU-VM.
2) Provide tracking of external GEM objects (GEM objects which are
shared with other GPU-VMs).
3) Provide functions to efficiently lock all GEM objects dma-resv the
GPU-VM contains mappings of.
4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
of, such that validation of evicted GEM objects is accelerated.
5) Provide some convenience functions for common patterns.
Rather than being designed as a "framework", the target is to make all
features appear as a collection of optional helper functions, such that
drivers are free to make use of the DRM GPUVA manager's basic
functionality and opt in to other features without setting any feature
flags, just by making use of the corresponding functions.
Big thanks to Boris Brezillon for his help to figure out locking for
drivers updating the GPU VA space within the fence signalling path.
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
drivers/gpu/drm/drm_gpuvm.c | 627 ++++++++++++++++++++++++++++++++++++
include/drm/drm_gpuvm.h | 268 ++++++++++++++-
2 files changed, 894 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 6e9d2d478bb8..6cac90023efc 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -73,6 +73,21 @@
* &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this
* particular combination. If not existent a new instance is created and linked
* to the &drm_gem_object.
+ *
+ * &drm_gpuvm_bo structures, since unique for a given &drm_gpuvm, are also used
+ * as entries for the &drm_gpuvm's lists of external and evicted objects. Those
+ * lists are maintained in order to accelerate locking of dma-resv locks and
+ * validation of evicted objects bound in a &drm_gpuvm. For instance, all
+ * &drm_gem_object's &dma_resv of a given &drm_gpuvm can be locked by calling
+ * drm_gpuvm_exec_lock(). Once locked, drivers can call drm_gpuvm_validate() in
+ * order to validate all evicted &drm_gem_objects. It is also possible to lock
+ * additional &drm_gem_objects by providing the corresponding parameters to
+ * drm_gpuvm_exec_lock() as well as open code the &drm_exec loop while making
+ * use of helper functions such as drm_gpuvm_prepare_range() or
+ * drm_gpuvm_prepare_objects().
+ *
+ * Every bound &drm_gem_object is treated as an external object when its &dma_resv
+ * structure differs from the &drm_gpuvm's common &dma_resv structure.
*/
/**
@@ -420,6 +435,21 @@
* Subsequent calls to drm_gpuvm_bo_obtain() for the same &drm_gpuvm and
* &drm_gem_object must be able to observe previous creations and destructions
* of &drm_gpuvm_bos in order to keep instances unique.
+ *
+ * The &drm_gpuvm's lists for keeping track of external and evicted objects are
+ * protected against concurrent insertion / removal and iteration internally.
+ *
+ * However, drivers still need to protect concurrent calls to functions
+ * iterating those lists, such as drm_gpuvm_validate() and
+ * drm_gpuvm_prepare_objects(). Every such function contains a particular
+ * comment and lockdep checks if possible.
+ *
+ * Alternatively, drivers can set the &DRM_GPUVM_RESV_PROTECTED flag to indicate
+ * that the corresponding &dma_resv locks are held in order to protect the
+ * lists. If &DRM_GPUVM_RESV_PROTECTED is set, internal locking is disabled and
+ * the corresponding lockdep checks are enabled. This is an optimization for
+ * drivers which are capable of taking the corresponding &dma_resv locks and
+ * hence do not require internal locking.
*/
/**
@@ -632,6 +662,195 @@
* }
*/
+/**
+ * get_next_vm_bo_from_list() - get the next vm_bo element
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ * @__local_list: A pointer to the local list used to store already iterated items
+ * @__prev_vm_bo: The previous element we got from get_next_vm_bo_from_list()
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from
+ * the list, so list insertion and deletion can happen concurrently.
+ *
+ * Elements popped from the original list are kept in a local list, so removal
+ * and is_empty checks can still happen while we're iterating the list.
+ */
+#define get_next_vm_bo_from_list(__gpuvm, __list_name, __local_list, __prev_vm_bo) \
+ ({ \
+ struct drm_gpuvm_bo *__vm_bo = NULL; \
+ \
+ drm_gpuvm_bo_put(__prev_vm_bo); \
+ \
+ spin_lock(&(__gpuvm)->__list_name.lock); \
+ if (!(__gpuvm)->__list_name.local_list) \
+ (__gpuvm)->__list_name.local_list = __local_list; \
+ else \
+ WARN_ON((__gpuvm)->__list_name.local_list != __local_list); \
+ \
+ while (!list_empty(&(__gpuvm)->__list_name.list)) { \
+ __vm_bo = list_first_entry(&(__gpuvm)->__list_name.list, \
+ struct drm_gpuvm_bo, \
+ list.entry.__list_name); \
+ if (kref_get_unless_zero(&__vm_bo->kref)) { \
+ list_move_tail(&(__vm_bo)->list.entry.__list_name, \
+ __local_list); \
+ break; \
+ } else { \
+ list_del_init(&(__vm_bo)->list.entry.__list_name); \
+ __vm_bo = NULL; \
+ } \
+ } \
+ spin_unlock(&(__gpuvm)->__list_name.lock); \
+ \
+ __vm_bo; \
+ })
+
+/**
+ * for_each_vm_bo_in_list() - internal vm_bo list iterator
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from the
+ * list, hence list insertion and deletion can happen concurrently.
+ *
+ * It is not allowed to re-assign the vm_bo pointer from inside this loop.
+ *
+ * Typical use:
+ *
+ * struct drm_gpuvm_bo *vm_bo;
+ * LIST_HEAD(my_local_list);
+ *
+ * ret = 0;
+ * for_each_vm_bo_in_list(gpuvm, <list_name>, &my_local_list, vm_bo) {
+ * ret = do_something_with_vm_bo(..., vm_bo);
+ * if (ret)
+ * break;
+ * }
+ * drm_gpuvm_bo_put(vm_bo);
+ * restore_vm_bo_list(gpuvm, <list_name>, &my_local_list);
+ *
+ *
+ * Only used for internal list iterations, not meant to be exposed to the outside
+ * world.
+ */
+#define for_each_vm_bo_in_list(__gpuvm, __list_name, __local_list, __vm_bo) \
+ for (__vm_bo = get_next_vm_bo_from_list(__gpuvm, __list_name, \
+ __local_list, NULL); \
+ __vm_bo; \
+ __vm_bo = get_next_vm_bo_from_list(__gpuvm, __list_name, \
+ __local_list, __vm_bo))
+
+static inline void
+__restore_vm_bo_list(struct drm_gpuvm *gpuvm, spinlock_t *lock,
+ struct list_head *list, struct list_head **local_list)
+{
+ /* Merge back the two lists, moving local list elements to the
+ * head to preserve previous ordering, in case it matters.
+ */
+ spin_lock(lock);
+ if (*local_list) {
+ list_splice(*local_list, list);
+ *local_list = NULL;
+ }
+ spin_unlock(lock);
+}
+
+/**
+ * restore_vm_bo_list() - move vm_bo elements back to their original list
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ *
+ * When we're done iterating a vm_bo list, we should call restore_vm_bo_list()
+ * to restore the original state and let new iterations take place.
+ */
+#define restore_vm_bo_list(__gpuvm, __list_name) \
+ __restore_vm_bo_list((__gpuvm), &(__gpuvm)->__list_name.lock, \
+ &(__gpuvm)->__list_name.list, \
+ &(__gpuvm)->__list_name.local_list)
+
+static inline void
+cond_spin_lock(spinlock_t *lock, bool cond)
+{
+ if (cond)
+ spin_lock(lock);
+}
+
+static inline void
+cond_spin_unlock(spinlock_t *lock, bool cond)
+{
+ if (cond)
+ spin_unlock(lock);
+}
+
+static inline void
+__drm_gpuvm_bo_list_add(struct drm_gpuvm *gpuvm, spinlock_t *lock,
+ struct list_head *entry, struct list_head *list)
+{
+ cond_spin_lock(lock, !!lock);
+ if (list_empty(entry))
+ list_add_tail(entry, list);
+ cond_spin_unlock(lock, !!lock);
+}
+
+/**
+ * drm_gpuvm_bo_list_add() - insert a vm_bo into the given list
+ * @__vm_bo: the &drm_gpuvm_bo
+ * @__list_name: the name of the list to insert into
+ * @__lock: whether to lock with the internal spinlock
+ *
+ * Inserts the given @__vm_bo into the list specified by @__list_name.
+ */
+#define drm_gpuvm_bo_list_add(__vm_bo, __list_name, __lock) \
+ __drm_gpuvm_bo_list_add((__vm_bo)->vm, \
+ __lock ? &(__vm_bo)->vm->__list_name.lock : \
+ NULL, \
+ &(__vm_bo)->list.entry.__list_name, \
+ &(__vm_bo)->vm->__list_name.list)
+
+static inline void
+__drm_gpuvm_bo_list_del(struct drm_gpuvm *gpuvm, spinlock_t *lock,
+ struct list_head *entry, bool init)
+{
+ cond_spin_lock(lock, !!lock);
+ if (init) {
+ if (!list_empty(entry))
+ list_del_init(entry);
+ } else {
+ list_del(entry);
+ }
+ cond_spin_unlock(lock, !!lock);
+}
+
+/**
+ * drm_gpuvm_bo_list_del_init() - remove a vm_bo from the given list
+ * @__vm_bo: the &drm_gpuvm_bo
+ * @__list_name: the name of the list to insert into
+ * @__lock: whether to lock with the internal spinlock
+ *
+ * Removes the given @__vm_bo from the list specified by @__list_name.
+ */
+#define drm_gpuvm_bo_list_del_init(__vm_bo, __list_name, __lock) \
+ __drm_gpuvm_bo_list_del((__vm_bo)->vm, \
+ __lock ? &(__vm_bo)->vm->__list_name.lock : \
+ NULL, \
+ &(__vm_bo)->list.entry.__list_name, \
+ true)
+
+/**
+ * drm_gpuvm_bo_list_del() - remove a vm_bo from the given list
+ * @__vm_bo: the &drm_gpuvm_bo
+ * @__list_name: the name of the list to insert into
+ * @__lock: whether to lock with the internal spinlock
+ *
+ * Removes the given @__vm_bo from the list specified by @__list_name.
+ */
+#define drm_gpuvm_bo_list_del(__vm_bo, __list_name, __lock) \
+ __drm_gpuvm_bo_list_del((__vm_bo)->vm, \
+ __lock ? &(__vm_bo)->vm->__list_name.lock : \
+ NULL, \
+ &(__vm_bo)->list.entry.__list_name, \
+ false)
+
#define to_drm_gpuva(__node) container_of((__node), struct drm_gpuva, rb.node)
#define GPUVA_START(node) ((node)->va.addr)
@@ -713,6 +932,12 @@ drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
gpuvm->rb.tree = RB_ROOT_CACHED;
INIT_LIST_HEAD(&gpuvm->rb.list);
+ INIT_LIST_HEAD(&gpuvm->extobj.list);
+ spin_lock_init(&gpuvm->extobj.lock);
+
+ INIT_LIST_HEAD(&gpuvm->evict.list);
+ spin_lock_init(&gpuvm->evict.lock);
+
drm_gpuvm_check_overflow(start_offset, range);
gpuvm->mm_start = start_offset;
gpuvm->mm_range = range;
@@ -754,10 +979,352 @@ drm_gpuvm_destroy(struct drm_gpuvm *gpuvm)
WARN(!RB_EMPTY_ROOT(&gpuvm->rb.tree.rb_root),
"GPUVA tree is not empty, potentially leaking memory.\n");
+ WARN(!list_empty(&gpuvm->extobj.list), "Extobj list should be empty.\n");
+ WARN(!list_empty(&gpuvm->evict.list), "Evict list should be empty.\n");
+
drm_gem_private_object_fini(&gpuvm->d_obj);
}
EXPORT_SYMBOL_GPL(drm_gpuvm_destroy);
+
+static int
+drm_gpuvm_prepare_objects_internal(struct drm_gpuvm *gpuvm,
+ struct drm_exec *exec,
+ unsigned int num_fences)
+{
+ struct drm_gpuvm_bo *vm_bo;
+ LIST_HEAD(extobjs);
+ int ret = 0;
+
+ for_each_vm_bo_in_list(gpuvm, extobj, &extobjs, vm_bo) {
+ ret = drm_exec_prepare_obj(exec, vm_bo->obj, num_fences);
+ if (ret)
+ break;
+ }
+ /* Drop ref in case we break out of the loop. */
+ drm_gpuvm_bo_put(vm_bo);
+ restore_vm_bo_list(gpuvm, extobj);
+
+ return ret;
+}
+
+/**
+ * drm_gpuvm_prepare_objects() - prepare all associated BOs
+ * @gpuvm: the &drm_gpuvm
+ * @exec: the &drm_exec locking context
+ * @num_fences: the amount of &dma_fences to reserve
+ *
+ * Calls drm_exec_prepare_obj() for all &drm_gem_objects the given
+ * &drm_gpuvm contains mappings of.
+ *
+ * Using this function directly, it is the driver's responsibility to call
+ * drm_exec_init() and drm_exec_fini() accordingly.
+ *
+ * Note: This function is safe against concurrent insertion and removal of
+ * external objects, however it is not safe against concurrent usage itself.
+ *
+ * Drivers need to make sure to protect this case with either an outer VM lock
+ * or by calling drm_gpuvm_prepare_vm() before this function within the
+ * drm_exec_until_all_locked() loop, such that the GPUVM's dma-resv lock ensures
+ * mutual exclusion.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm,
+ struct drm_exec *exec,
+ unsigned int num_fences)
+{
+ struct drm_gpuvm_bo *vm_bo;
+ int ret = 0;
+
+ if (!drm_gpuvm_resv_protected(gpuvm))
+ return drm_gpuvm_prepare_objects_internal(gpuvm, exec,
+ num_fences);
+
+ drm_gpuvm_resv_assert_held(gpuvm);
+ list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) {
+ ret = drm_exec_prepare_obj(exec, vm_bo->obj, num_fences);
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_objects);
+
+/**
+ * drm_gpuvm_prepare_range() - prepare all BOs mapped within a given range
+ * @gpuvm: the &drm_gpuvm
+ * @exec: the &drm_exec locking context
+ * @addr: the start address within the VA space
+ * @range: the range to iterate within the VA space
+ * @num_fences: the amount of &dma_fences to reserve
+ *
+ * Calls drm_exec_prepare_obj() for all &drm_gem_objects mapped between @addr
+ * and @addr + @range.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
+ u64 addr, u64 range, unsigned int num_fences)
+{
+ struct drm_gpuva *va;
+ u64 end = addr + range;
+ int ret;
+
+ drm_gpuvm_for_each_va_range(va, gpuvm, addr, end) {
+ struct drm_gem_object *obj = va->gem.obj;
+
+ ret = drm_exec_prepare_obj(exec, obj, num_fences);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_prepare_range);
+
+/**
+ * drm_gpuvm_exec_lock() - lock all dma-resv of all associated BOs
+ * @vm_exec: the &drm_gpuvm_exec abstraction
+ * @num_fences: the amount of &dma_fences to reserve
+ * @interruptible: sleep interruptible if waiting
+ *
+ * Acquires all dma-resv locks of all &drm_gem_objects the given
+ * &drm_gpuvm contains mappings of.
+ *
+ * Additionally, when calling this function with struct drm_gpuvm_exec::extra
+ * being set the driver receives the given @fn callback to lock additional
+ * dma-resv in the context of the &drm_gpuvm_exec instance. Typically, drivers
+ * would call drm_exec_prepare_obj() from within this callback.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuvm_exec_lock(struct drm_gpuvm_exec *vm_exec,
+ unsigned int num_fences,
+ bool interruptible)
+{
+ struct drm_gpuvm *gpuvm = vm_exec->vm;
+ struct drm_exec *exec = &vm_exec->exec;
+ uint32_t flags;
+ int ret;
+
+ flags = DRM_EXEC_IGNORE_DUPLICATES |
+ (interruptible ? DRM_EXEC_INTERRUPTIBLE_WAIT : 0);
+
+ drm_exec_init(exec, flags);
+
+ drm_exec_until_all_locked(exec) {
+ ret = drm_gpuvm_prepare_vm(gpuvm, exec, num_fences);
+ drm_exec_retry_on_contention(exec);
+ if (ret)
+ goto err;
+
+ ret = drm_gpuvm_prepare_objects(gpuvm, exec, num_fences);
+ drm_exec_retry_on_contention(exec);
+ if (ret)
+ goto err;
+
+ if (vm_exec->extra.fn) {
+ ret = vm_exec->extra.fn(vm_exec, num_fences);
+ drm_exec_retry_on_contention(exec);
+ if (ret)
+ goto err;
+ }
+ }
+
+ return 0;
+
+err:
+ drm_exec_fini(exec);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_exec_lock);
+
+static int
+fn_lock_array(struct drm_gpuvm_exec *vm_exec, unsigned int num_fences)
+{
+ struct {
+ struct drm_gem_object **objs;
+ unsigned int num_objs;
+ } *args = vm_exec->extra.priv;
+
+ return drm_exec_prepare_array(&vm_exec->exec, args->objs,
+ args->num_objs, num_fences);
+}
+
+/**
+ * drm_gpuvm_exec_lock_array() - lock all dma-resv of all associated BOs
+ * @vm_exec: the &drm_gpuvm_exec abstraction
+ * @objs: additional &drm_gem_objects to lock
+ * @num_objs: the number of additional &drm_gem_objects to lock
+ * @num_fences: the amount of &dma_fences to reserve
+ * @interruptible: sleep interruptible if waiting
+ *
+ * Acquires all dma-resv locks of all &drm_gem_objects the given &drm_gpuvm
+ * contains mappings of, plus the ones given through @objs.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuvm_exec_lock_array(struct drm_gpuvm_exec *vm_exec,
+ struct drm_gem_object **objs,
+ unsigned int num_objs,
+ unsigned int num_fences,
+ bool interruptible)
+{
+ struct {
+ struct drm_gem_object **objs;
+ unsigned int num_objs;
+ } args;
+
+ args.objs = objs;
+ args.num_objs = num_objs;
+
+ vm_exec->extra.fn = fn_lock_array;
+ vm_exec->extra.priv = &args;
+
+ return drm_gpuvm_exec_lock(vm_exec, num_fences, interruptible);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_exec_lock_array);
+
+/**
+ * drm_gpuvm_exec_lock_range() - prepare all BOs mapped within a given range
+ * @vm_exec: the &drm_gpuvm_exec abstraction
+ * @addr: the start address within the VA space
+ * @range: the range to iterate within the VA space
+ * @num_fences: the amount of &dma_fences to reserve
+ * @interruptible: sleep interruptible if waiting
+ *
+ * Acquires all dma-resv locks of all &drm_gem_objects mapped between @addr and
+ * @addr + @range.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuvm_exec_lock_range(struct drm_gpuvm_exec *vm_exec,
+ u64 addr, u64 range,
+ unsigned int num_fences,
+ bool interruptible)
+{
+ struct drm_gpuvm *gpuvm = vm_exec->vm;
+ struct drm_exec *exec = &vm_exec->exec;
+ uint32_t flags;
+ int ret;
+
+ flags = DRM_EXEC_IGNORE_DUPLICATES |
+ (interruptible ? DRM_EXEC_INTERRUPTIBLE_WAIT : 0);
+
+ drm_exec_init(exec, flags);
+
+ drm_exec_until_all_locked(exec) {
+ ret = drm_gpuvm_prepare_range(gpuvm, exec, addr, range,
+ num_fences);
+ drm_exec_retry_on_contention(exec);
+ if (ret)
+ goto err;
+ }
+
+ return ret;
+
+err:
+ drm_exec_fini(exec);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_exec_lock_range);
+
+static int
+drm_gpuvm_validate_internal(struct drm_gpuvm *gpuvm, struct drm_exec *exec)
+{
+ const struct drm_gpuvm_ops *ops = gpuvm->ops;
+ struct drm_gpuvm_bo *vm_bo;
+ LIST_HEAD(evict);
+ int ret = 0;
+
+ for_each_vm_bo_in_list(gpuvm, evict, &evict, vm_bo) {
+ dma_resv_assert_held(vm_bo->obj->resv);
+ ret = ops->vm_bo_validate(vm_bo, exec);
+ if (ret)
+ break;
+ }
+ /* Drop ref in case we break out of the loop. */
+ drm_gpuvm_bo_put(vm_bo);
+ restore_vm_bo_list(gpuvm, evict);
+
+ return ret;
+}
+
+/**
+ * drm_gpuvm_validate() - validate all BOs marked as evicted
+ * @gpuvm: the &drm_gpuvm to validate evicted BOs
+ * @exec: the &drm_exec instance used for locking the GPUVM
+ *
+ * Calls the &drm_gpuvm_ops::vm_bo_validate callback for all evicted buffer
+ * objects being mapped in the given &drm_gpuvm.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+int
+drm_gpuvm_validate(struct drm_gpuvm *gpuvm, struct drm_exec *exec)
+{
+ const struct drm_gpuvm_ops *ops = gpuvm->ops;
+ struct drm_gpuvm_bo *vm_bo, *next;
+ int ret = 0;
+
+ if (unlikely(!ops || !ops->vm_bo_validate))
+ return -ENOTSUPP;
+
+ if (!drm_gpuvm_resv_protected(gpuvm))
+ return drm_gpuvm_validate_internal(gpuvm, exec);
+
+ /* Iterate list safely, drivers typically remove the current entry from
+ * their drm_gpuvm_ops::vm_bo_validate callback. Drivers might also
+ * re-add the entry on failure; this is safe since on failure we break
+ * out of the loop.
+ */
+ list_for_each_entry_safe(vm_bo, next, &gpuvm->evict.list,
+ list.entry.evict) {
+ dma_resv_assert_held(vm_bo->obj->resv);
+ ret = ops->vm_bo_validate(vm_bo, exec);
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_validate);
+
+/**
+ * drm_gpuvm_resv_add_fence - add fence to private and all extobj
+ * dma-resv
+ * @gpuvm: the &drm_gpuvm to add a fence to
+ * @exec: the &drm_exec locking context
+ * @fence: fence to add
+ * @private_usage: private dma-resv usage
+ * @extobj_usage: extobj dma-resv usage
+ */
+void
+drm_gpuvm_resv_add_fence(struct drm_gpuvm *gpuvm,
+ struct drm_exec *exec,
+ struct dma_fence *fence,
+ enum dma_resv_usage private_usage,
+ enum dma_resv_usage extobj_usage)
+{
+ struct drm_gem_object *obj;
+ unsigned long index;
+
+ drm_exec_for_each_locked_object(exec, index, obj) {
+ dma_resv_assert_held(obj->resv);
+ dma_resv_add_fence(obj->resv, fence,
+ drm_gpuvm_is_extobj(gpuvm, obj) ?
+ extobj_usage : private_usage);
+ }
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_resv_add_fence);
+
/**
* drm_gpuvm_bo_create() - create a new instance of struct drm_gpuvm_bo
* @gpuvm: The &drm_gpuvm the @obj is mapped in.
@@ -790,6 +1357,9 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
INIT_LIST_HEAD(&vm_bo->list.gpuva);
INIT_LIST_HEAD(&vm_bo->list.entry.gem);
+ INIT_LIST_HEAD(&vm_bo->list.entry.extobj);
+ INIT_LIST_HEAD(&vm_bo->list.entry.evict);
+
drm_gem_object_get(obj);
return vm_bo;
@@ -804,8 +1374,14 @@ drm_gpuvm_bo_destroy(struct kref *kref)
struct drm_gpuvm *gpuvm = vm_bo->vm;
const struct drm_gpuvm_ops *ops = gpuvm->ops;
struct drm_gem_object *obj = vm_bo->obj;
+ bool lock = !drm_gpuvm_resv_protected(gpuvm);
drm_gem_gpuva_assert_lock_held(obj);
+ if (!lock)
+ drm_gpuvm_resv_assert_held(gpuvm);
+
+ drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
+ drm_gpuvm_bo_list_del(vm_bo, evict, lock);
list_del(&vm_bo->list.entry.gem);
@@ -943,6 +1519,55 @@ drm_gpuvm_bo_obtain_prealloc(struct drm_gpuvm_bo *__vm_bo)
}
EXPORT_SYMBOL_GPL(drm_gpuvm_bo_obtain_prealloc);
+/**
+ * drm_gpuvm_bo_extobj_add() - adds the &drm_gpuvm_bo to its &drm_gpuvm's
+ * extobj list
+ * @vm_bo: The &drm_gpuvm_bo to add to its &drm_gpuvm's extobj list.
+ *
+ * Adds the given @vm_bo to its &drm_gpuvm's extobj list, but only if it is
+ * not on the list already and the corresponding &drm_gem_object actually is
+ * an external object.
+ */
+void
+drm_gpuvm_bo_extobj_add(struct drm_gpuvm_bo *vm_bo)
+{
+ struct drm_gpuvm *gpuvm = vm_bo->vm;
+ bool lock = !drm_gpuvm_resv_protected(gpuvm);
+
+ if (!lock)
+ drm_gpuvm_resv_assert_held(gpuvm);
+
+ if (drm_gpuvm_is_extobj(gpuvm, vm_bo->obj))
+ drm_gpuvm_bo_list_add(vm_bo, extobj, lock);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_extobj_add);
+
+/**
+ * drm_gpuvm_bo_evict() - add / remove a &drm_gpuvm_bo to / from the
+ * &drm_gpuvm's evicted list
+ * @vm_bo: the &drm_gpuvm_bo to add or remove
+ * @evict: indicates whether the object is evicted
+ *
+ * Adds a &drm_gpuvm_bo to or removes it from the &drm_gpuvm's evicted list.
+ */
+void
+drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict)
+{
+ struct drm_gem_object *obj = vm_bo->obj;
+
+ dma_resv_assert_held(obj->resv);
+
+ /* Always take the evict list lock for list transactions, even if
+ * DRM_GPUVM_RESV_PROTECTED is set. This is required to protect multiple
+ * concurrent calls to drm_gpuvm_bo_evict() on BOs with different
+ * dma_resv locks.
+ */
+ if (evict)
+ drm_gpuvm_bo_list_add(vm_bo, evict, true);
+ else
+ drm_gpuvm_bo_list_del_init(vm_bo, evict, true);
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_evict);
+
static int
__drm_gpuva_insert(struct drm_gpuvm *gpuvm,
struct drm_gpuva *va)
@@ -1094,7 +1719,9 @@ drm_gpuva_unlink(struct drm_gpuva *va)
list_del_init(&va->gem.entry);
va->vm_bo = NULL;
+ drm_gem_object_get(obj);
drm_gpuvm_bo_put(vm_bo);
+ drm_gem_object_put(obj);
}
EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index f57ad1f0f0d0..e8bb87ae527d 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -26,10 +26,12 @@
*/
#include <linux/list.h>
+#include <linux/dma-resv.h>
#include <linux/rbtree.h>
#include <linux/types.h>
#include <drm/drm_gem.h>
+#include <drm/drm_exec.h>
struct drm_gpuvm;
struct drm_gpuvm_bo;
@@ -196,10 +198,16 @@ static inline bool drm_gpuva_invalidated(struct drm_gpuva *va)
* enum drm_gpuvm_flags - flags for struct drm_gpuvm
*/
enum drm_gpuvm_flags {
+ /**
+ * @DRM_GPUVM_RESV_PROTECTED: GPUVM is protected externally by the
+ * GPUVM's &dma_resv lock
+ */
+ DRM_GPUVM_RESV_PROTECTED = (1 << 0),
+
/**
* @DRM_GPUVM_USERBITS: user defined bits
*/
- DRM_GPUVM_USERBITS = (1 << 0),
+ DRM_GPUVM_USERBITS = (1 << 1),
};
/**
@@ -268,6 +276,50 @@ struct drm_gpuvm {
* dma-resv to &drm_exec. Provides the GPUVM's &dma-resv.
*/
struct drm_gem_object d_obj;
+
+ /**
+ * @extobj: structure holding the extobj list
+ */
+ struct {
+ /**
+ * @list: &list_head storing &drm_gpuvm_bos serving as
+ * external object
+ */
+ struct list_head list;
+
+ /**
+ * @local_list: pointer to the local list temporarily storing
+ * entries from the external object list
+ */
+ struct list_head *local_list;
+
+ /**
+ * @lock: spinlock to protect the extobj list
+ */
+ spinlock_t lock;
+ } extobj;
+
+ /**
+ * @evict: structure holding the evict list and evict list lock
+ */
+ struct {
+ /**
+ * @list: &list_head storing &drm_gpuvm_bos currently being
+ * evicted
+ */
+ struct list_head list;
+
+ /**
+ * @local_list: pointer to the local list temporarily storing
+ * entries from the evicted object list
+ */
+ struct list_head *local_list;
+
+ /**
+ * @lock: spinlock to protect the evict list
+ */
+ spinlock_t lock;
+ } evict;
};
void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
@@ -277,6 +329,19 @@ void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
const struct drm_gpuvm_ops *ops);
void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
+/**
+ * drm_gpuvm_resv_protected() - indicates whether &DRM_GPUVM_RESV_PROTECTED is
+ * set
+ * @gpuvm: the &drm_gpuvm
+ *
+ * Returns: true if &DRM_GPUVM_RESV_PROTECTED is set, false otherwise.
+ */
+static inline bool
+drm_gpuvm_resv_protected(struct drm_gpuvm *gpuvm)
+{
+ return gpuvm->flags & DRM_GPUVM_RESV_PROTECTED;
+}
+
/**
* drm_gpuvm_resv() - returns the &drm_gpuvm's &dma_resv
* @gpuvm__: the &drm_gpuvm
@@ -285,6 +350,28 @@ void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
*/
#define drm_gpuvm_resv(gpuvm__) (&(gpuvm__)->d_obj._resv)
+#define drm_gpuvm_resv_held(gpuvm__) \
+ dma_resv_held(drm_gpuvm_resv(gpuvm__))
+
+#define drm_gpuvm_resv_assert_held(gpuvm__) \
+ dma_resv_assert_held(drm_gpuvm_resv(gpuvm__))
+
+/**
+ * drm_gpuvm_is_extobj() - indicates whether the given &drm_gem_object is an
+ * external object
+ * @gpuvm: the &drm_gpuvm to check
+ * @obj: the &drm_gem_object to check
+ *
+ * Returns: true if the &drm_gem_object's &dma_resv differs from the
+ * &drm_gpuvm's &dma_resv, false otherwise.
+ */
+static inline bool
+drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
+ struct drm_gem_object *obj)
+{
+ return obj && obj->resv != drm_gpuvm_resv(gpuvm);
+}
+
static inline struct drm_gpuva *
__drm_gpuva_next(struct drm_gpuva *va)
{
@@ -363,6 +450,140 @@ __drm_gpuva_next(struct drm_gpuva *va)
#define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \
list_for_each_entry_safe(va__, next__, &(gpuvm__)->rb.list, rb.entry)
+/**
+ * struct drm_gpuvm_exec - &drm_gpuvm abstraction of &drm_exec
+ *
+ * This structure should be created on the stack as &drm_exec should be.
+ *
+ * Optionally, @extra can be set in order to lock additional &drm_gem_objects.
+ */
+struct drm_gpuvm_exec {
+ /**
+ * @exec: the &drm_exec structure
+ */
+ struct drm_exec exec;
+
+ /**
+ * @vm: the &drm_gpuvm whose DMA reservations are to be locked
+ */
+ struct drm_gpuvm *vm;
+
+ /**
+ * @extra: Callback and corresponding private data for the driver to
+ * lock arbitrary additional &drm_gem_objects.
+ */
+ struct {
+ /**
+ * @fn: The driver callback to lock additional &drm_gem_objects.
+ */
+ int (*fn)(struct drm_gpuvm_exec *vm_exec,
+ unsigned int num_fences);
+
+ /**
+ * @priv: driver private data for the @fn callback
+ */
+ void *priv;
+ } extra;
+};
+
+/**
+ * drm_gpuvm_prepare_vm() - prepare the GPUVM's common dma-resv
+ * @gpuvm: the &drm_gpuvm
+ * @exec: the &drm_exec context
+ * @num_fences: the number of &dma_fences to reserve
+ *
+ * Calls drm_exec_prepare_obj() for the GPUVM's dummy &drm_gem_object.
+ *
+ * When using this function directly, it is the driver's responsibility to
+ * call drm_exec_init() and drm_exec_fini() accordingly.
+ *
+ * Returns: 0 on success, negative error code on failure.
+ */
+static inline int
+drm_gpuvm_prepare_vm(struct drm_gpuvm *gpuvm,
+ struct drm_exec *exec,
+ unsigned int num_fences)
+{
+ return drm_exec_prepare_obj(exec, &gpuvm->d_obj, num_fences);
+}
+
+int drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm,
+ struct drm_exec *exec,
+ unsigned int num_fences);
+
+int drm_gpuvm_prepare_range(struct drm_gpuvm *gpuvm,
+ struct drm_exec *exec,
+ u64 addr, u64 range,
+ unsigned int num_fences);
+
+int drm_gpuvm_exec_lock(struct drm_gpuvm_exec *vm_exec,
+ unsigned int num_fences,
+ bool interruptible);
+
+int drm_gpuvm_exec_lock_array(struct drm_gpuvm_exec *vm_exec,
+ struct drm_gem_object **objs,
+ unsigned int num_objs,
+ unsigned int num_fences,
+ bool interruptible);
+
+int drm_gpuvm_exec_lock_range(struct drm_gpuvm_exec *vm_exec,
+ u64 addr, u64 range,
+ unsigned int num_fences,
+ bool interruptible);
+
+/**
+ * drm_gpuvm_exec_unlock() - unlock all dma-resv of all associated BOs
+ * @vm_exec: the &drm_gpuvm_exec wrapper containing the &drm_exec instance
+ *
+ * Releases all dma-resv locks of all &drm_gem_objects previously acquired
+ * through drm_gpuvm_exec_lock() or its variants.
+ */
+static inline void
+drm_gpuvm_exec_unlock(struct drm_gpuvm_exec *vm_exec)
+{
+ drm_exec_fini(&vm_exec->exec);
+}
+
+int drm_gpuvm_validate(struct drm_gpuvm *gpuvm, struct drm_exec *exec);
+void drm_gpuvm_resv_add_fence(struct drm_gpuvm *gpuvm,
+ struct drm_exec *exec,
+ struct dma_fence *fence,
+ enum dma_resv_usage private_usage,
+ enum dma_resv_usage extobj_usage);
+
+/**
+ * drm_gpuvm_exec_resv_add_fence() - add fence to private and all extobj
+ * dma-resv
+ * @vm_exec: the &drm_gpuvm_exec abstraction
+ * @fence: fence to add
+ * @private_usage: private dma-resv usage
+ * @extobj_usage: extobj dma-resv usage
+ *
+ * See drm_gpuvm_resv_add_fence().
+ */
+static inline void
+drm_gpuvm_exec_resv_add_fence(struct drm_gpuvm_exec *vm_exec,
+ struct dma_fence *fence,
+ enum dma_resv_usage private_usage,
+ enum dma_resv_usage extobj_usage)
+{
+ drm_gpuvm_resv_add_fence(vm_exec->vm, &vm_exec->exec, fence,
+ private_usage, extobj_usage);
+}
+
+/**
+ * drm_gpuvm_exec_validate() - validate all BOs marked as evicted
+ * @vm_exec: the &drm_gpuvm_exec abstraction
+ *
+ * See drm_gpuvm_validate().
+ */
+static inline int
+drm_gpuvm_exec_validate(struct drm_gpuvm_exec *vm_exec)
+{
+ return drm_gpuvm_validate(vm_exec->vm, &vm_exec->exec);
+}
+
/**
* struct drm_gpuvm_bo - structure representing a &drm_gpuvm and
* &drm_gem_object combination
@@ -415,6 +636,18 @@ struct drm_gpuvm_bo {
* gpuva list.
*/
struct list_head gem;
+
+ /**
+ * @extobj: List entry to attach to the &drm_gpuvm's
+ * extobj list.
+ */
+ struct list_head extobj;
+
+ /**
+ * @evict: List entry to attach to the &drm_gpuvm's evict
+ * list.
+ */
+ struct list_head evict;
} entry;
} list;
};
@@ -449,6 +682,27 @@ struct drm_gpuvm_bo *
drm_gpuvm_bo_find(struct drm_gpuvm *gpuvm,
struct drm_gem_object *obj);
+void drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict);
+
+/**
+ * drm_gpuvm_bo_gem_evict() - add / remove all &drm_gpuvm_bo's in the list
+ * to / from the &drm_gpuvm's evicted list
+ * @obj: the &drm_gem_object
+ * @evict: indicates whether @obj is evicted
+ *
+ * See drm_gpuvm_bo_evict().
+ */
+static inline void
+drm_gpuvm_bo_gem_evict(struct drm_gem_object *obj, bool evict)
+{
+ struct drm_gpuvm_bo *vm_bo;
+
+ drm_gem_gpuva_assert_lock_held(obj);
+ drm_gem_for_each_gpuvm_bo(vm_bo, obj)
+ drm_gpuvm_bo_evict(vm_bo, evict);
+}
+
+void drm_gpuvm_bo_extobj_add(struct drm_gpuvm_bo *vm_bo);
+
/**
* drm_gpuvm_bo_for_each_va() - iterator to walk over a list of &drm_gpuva
* @va__: &drm_gpuva structure to assign to in each iteration step
@@ -811,6 +1065,18 @@ struct drm_gpuvm_ops {
*/
void (*vm_bo_free)(struct drm_gpuvm_bo *vm_bo);
+ /**
+ * @vm_bo_validate: called from drm_gpuvm_validate()
+ *
+ * Drivers receive this callback for every evicted &drm_gem_object being
+ * mapped in the corresponding &drm_gpuvm.
+ *
+ * Typically, drivers would call their driver-specific variant of
+ * ttm_bo_validate() from within this callback.
+ */
+ int (*vm_bo_validate)(struct drm_gpuvm_bo *vm_bo,
+ struct drm_exec *exec);
+
/**
* @sm_step_map: called from &drm_gpuvm_sm_map to finally insert the
* mapping once all previous steps were completed
--
2.41.0
^ permalink raw reply related [flat|nested] 29+ messages in thread

* Re: [PATCH drm-misc-next v4 7/8] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation
2023-09-20 14:42 ` [PATCH drm-misc-next v4 7/8] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation Danilo Krummrich
@ 2023-09-22 11:45 ` Boris Brezillon
2023-09-27 16:59 ` Danilo Krummrich
0 siblings, 1 reply; 29+ messages in thread
From: Boris Brezillon @ 2023-09-22 11:45 UTC (permalink / raw)
To: Danilo Krummrich
Cc: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, christian.koenig, faith.ekstrand, dri-devel,
nouveau, linux-kernel
On Wed, 20 Sep 2023 16:42:40 +0200
Danilo Krummrich <dakr@redhat.com> wrote:
> + /**
> + * @DRM_GPUVM_RESV_PROTECTED: GPUVM is protected externally by the
> + * GPUVM's &dma_resv lock
I think we need to be more specific, and list the fields/operations
that need to be externally protected when DRM_GPUVM_RESV_PROTECTED is
set.
> + */
> + DRM_GPUVM_RESV_PROTECTED = (1 << 0),
* Re: [PATCH drm-misc-next v4 7/8] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation
2023-09-22 11:45 ` Boris Brezillon
@ 2023-09-27 16:59 ` Danilo Krummrich
0 siblings, 0 replies; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-27 16:59 UTC (permalink / raw)
To: Boris Brezillon
Cc: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, christian.koenig, faith.ekstrand, dri-devel,
nouveau, linux-kernel
On 9/22/23 13:45, Boris Brezillon wrote:
> On Wed, 20 Sep 2023 16:42:40 +0200
> Danilo Krummrich <dakr@redhat.com> wrote:
>
>> + /**
>> + * @DRM_GPUVM_RESV_PROTECTED: GPUVM is protected externally by the
>> + * GPUVM's &dma_resv lock
>
> I think we need to be more specific, and list the fields/operations
> that need to be externally protected when DRM_GPUVM_RESV_PROTECTED is
> set.
I agree, we should probably keep such a list somewhere. However, there are
also lockdep asserts in place wherever a lock is required to be held.
>
>> + */
>> + DRM_GPUVM_RESV_PROTECTED = (1 << 0),
>
* [PATCH drm-misc-next v4 8/8] drm/nouveau: GPUVM dma-resv/extobj handling, GEM validation
2023-09-20 14:42 [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Danilo Krummrich
` (6 preceding siblings ...)
2023-09-20 14:42 ` [PATCH drm-misc-next v4 7/8] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation Danilo Krummrich
@ 2023-09-20 14:42 ` Danilo Krummrich
2023-09-28 12:09 ` [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Boris Brezillon
8 siblings, 0 replies; 29+ messages in thread
From: Danilo Krummrich @ 2023-09-20 14:42 UTC (permalink / raw)
To: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, boris.brezillon, christian.koenig, faith.ekstrand
Cc: dri-devel, nouveau, linux-kernel, Danilo Krummrich
Make use of the DRM GPUVA manager's GPU-VM common dma-resv, external GEM
object tracking, dma-resv locking, evicted GEM object tracking and
validation features.
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 4 +-
drivers/gpu/drm/nouveau/nouveau_exec.c | 52 +++----------
drivers/gpu/drm/nouveau/nouveau_exec.h | 4 -
drivers/gpu/drm/nouveau/nouveau_gem.c | 5 +-
drivers/gpu/drm/nouveau/nouveau_sched.h | 4 +-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 99 ++++++++++++++++---------
6 files changed, 83 insertions(+), 85 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 19cab37ac69c..52d3f7eba011 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1060,17 +1060,18 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict,
{
struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
struct nouveau_bo *nvbo = nouveau_bo(bo);
+ struct drm_gem_object *obj = &bo->base;
struct ttm_resource *old_reg = bo->resource;
struct nouveau_drm_tile *new_tile = NULL;
int ret = 0;
-
if (new_reg->mem_type == TTM_PL_TT) {
ret = nouveau_ttm_tt_bind(bo->bdev, bo->ttm, new_reg);
if (ret)
return ret;
}
+ drm_gpuvm_bo_gem_evict(obj, evict);
nouveau_bo_move_ntfy(bo, new_reg);
ret = ttm_bo_wait_ctx(bo, ctx);
if (ret)
@@ -1135,6 +1136,7 @@ nouveau_bo_move(struct ttm_buffer_object *bo, bool evict,
out_ntfy:
if (ret) {
nouveau_bo_move_ntfy(bo, bo->resource);
+ drm_gpuvm_bo_gem_evict(obj, !evict);
}
return ret;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c b/drivers/gpu/drm/nouveau/nouveau_exec.c
index b4239af29e5a..ba6913a3efb6 100644
--- a/drivers/gpu/drm/nouveau/nouveau_exec.c
+++ b/drivers/gpu/drm/nouveau/nouveau_exec.c
@@ -1,7 +1,5 @@
// SPDX-License-Identifier: MIT
-#include <drm/drm_exec.h>
-
#include "nouveau_drv.h"
#include "nouveau_gem.h"
#include "nouveau_mem.h"
@@ -91,9 +89,6 @@ nouveau_exec_job_submit(struct nouveau_job *job)
struct nouveau_exec_job *exec_job = to_nouveau_exec_job(job);
struct nouveau_cli *cli = job->cli;
struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(cli);
- struct drm_exec *exec = &job->exec;
- struct drm_gem_object *obj;
- unsigned long index;
int ret;
ret = nouveau_fence_new(&exec_job->fence);
@@ -101,52 +96,29 @@ nouveau_exec_job_submit(struct nouveau_job *job)
return ret;
nouveau_uvmm_lock(uvmm);
- drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT |
- DRM_EXEC_IGNORE_DUPLICATES);
- drm_exec_until_all_locked(exec) {
- struct drm_gpuva *va;
-
- drm_gpuvm_for_each_va(va, &uvmm->base) {
- if (unlikely(va == &uvmm->base.kernel_alloc_node))
- continue;
-
- ret = drm_exec_prepare_obj(exec, va->gem.obj, 1);
- drm_exec_retry_on_contention(exec);
- if (ret)
- goto err_uvmm_unlock;
- }
+ job->vm_exec.vm = &uvmm->base;
+ ret = drm_gpuvm_exec_lock(&job->vm_exec, 1, false);
+ if (ret) {
+ nouveau_uvmm_unlock(uvmm);
+ return ret;
}
nouveau_uvmm_unlock(uvmm);
- drm_exec_for_each_locked_object(exec, index, obj) {
- struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
- ret = nouveau_bo_validate(nvbo, true, false);
- if (ret)
- goto err_exec_fini;
+ ret = drm_gpuvm_exec_validate(&job->vm_exec);
+ if (ret) {
+ drm_gpuvm_exec_unlock(&job->vm_exec);
+ return ret;
}
return 0;
-
-err_uvmm_unlock:
- nouveau_uvmm_unlock(uvmm);
-err_exec_fini:
- drm_exec_fini(exec);
- return ret;
-
}
static void
nouveau_exec_job_armed_submit(struct nouveau_job *job)
{
- struct drm_exec *exec = &job->exec;
- struct drm_gem_object *obj;
- unsigned long index;
-
- drm_exec_for_each_locked_object(exec, index, obj)
- dma_resv_add_fence(obj->resv, job->done_fence, job->resv_usage);
-
- drm_exec_fini(exec);
+ drm_gpuvm_exec_resv_add_fence(&job->vm_exec, job->done_fence,
+ job->resv_usage, job->resv_usage);
+ drm_gpuvm_exec_unlock(&job->vm_exec);
}
static struct dma_fence *
diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.h b/drivers/gpu/drm/nouveau/nouveau_exec.h
index 778cacd90f65..b815de2428f3 100644
--- a/drivers/gpu/drm/nouveau/nouveau_exec.h
+++ b/drivers/gpu/drm/nouveau/nouveau_exec.h
@@ -3,16 +3,12 @@
#ifndef __NOUVEAU_EXEC_H__
#define __NOUVEAU_EXEC_H__
-#include <drm/drm_exec.h>
-
#include "nouveau_drv.h"
#include "nouveau_sched.h"
struct nouveau_exec_job_args {
struct drm_file *file_priv;
struct nouveau_sched_entity *sched_entity;
-
- struct drm_exec exec;
struct nouveau_channel *chan;
struct {
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index c0b10d8d3d03..732cd7900168 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -111,7 +111,8 @@ nouveau_gem_object_open(struct drm_gem_object *gem, struct drm_file *file_priv)
if (vmm->vmm.object.oclass < NVIF_CLASS_VMM_NV50)
return 0;
- if (nvbo->no_share && uvmm && &uvmm->resv != nvbo->bo.base.resv)
+ if (uvmm && drm_gpuvm_resv(&uvmm->base) != nvbo->bo.base.resv &&
+ nvbo->no_share)
return -EPERM;
ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL);
@@ -245,7 +246,7 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain,
if (unlikely(!uvmm))
return -EINVAL;
- resv = &uvmm->resv;
+ resv = drm_gpuvm_resv(&uvmm->base);
}
if (!(domain & (NOUVEAU_GEM_DOMAIN_VRAM | NOUVEAU_GEM_DOMAIN_GART)))
diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.h b/drivers/gpu/drm/nouveau/nouveau_sched.h
index 27ac19792597..54379af6f925 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sched.h
+++ b/drivers/gpu/drm/nouveau/nouveau_sched.h
@@ -5,7 +5,7 @@
#include <linux/types.h>
-#include <drm/drm_exec.h>
+#include <drm/drm_gpuvm.h>
#include <drm/gpu_scheduler.h>
#include "nouveau_drv.h"
@@ -54,7 +54,7 @@ struct nouveau_job {
struct drm_file *file_priv;
struct nouveau_cli *cli;
- struct drm_exec exec;
+ struct drm_gpuvm_exec vm_exec;
enum dma_resv_usage resv_usage;
struct dma_fence *done_fence;
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index 3de8533841db..581d7fd1649c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -438,8 +438,9 @@ nouveau_uvma_region_complete(struct nouveau_uvma_region *reg)
static void
op_map_prepare_unwind(struct nouveau_uvma *uvma)
{
+ struct drm_gpuva *va = &uvma->va;
nouveau_uvma_gem_put(uvma);
- drm_gpuva_remove(&uvma->va);
+ drm_gpuva_remove(va);
nouveau_uvma_free(uvma);
}
@@ -468,6 +469,7 @@ nouveau_uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm,
break;
case DRM_GPUVA_OP_REMAP: {
struct drm_gpuva_op_remap *r = &op->remap;
+ struct drm_gpuva *va = r->unmap->va;
if (r->next)
op_map_prepare_unwind(new->next);
@@ -475,7 +477,7 @@ nouveau_uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm,
if (r->prev)
op_map_prepare_unwind(new->prev);
- op_unmap_prepare_unwind(r->unmap->va);
+ op_unmap_prepare_unwind(va);
break;
}
case DRM_GPUVA_OP_UNMAP:
@@ -634,6 +636,7 @@ nouveau_uvmm_sm_prepare(struct nouveau_uvmm *uvmm,
goto unwind;
}
}
+
break;
}
case DRM_GPUVA_OP_REMAP: {
@@ -1146,13 +1149,44 @@ bind_link_gpuvas(struct bind_job_op *bop)
}
}
+static int
+bind_lock_extra(struct drm_gpuvm_exec *vm_exec, unsigned int num_fences)
+{
+ struct nouveau_uvmm_bind_job *bind_job = vm_exec->extra.priv;
+ struct drm_exec *exec = &vm_exec->exec;
+ struct bind_job_op *op;
+ int ret;
+
+ list_for_each_op(op, &bind_job->ops) {
+ struct drm_gpuva_op *va_op;
+
+ if (IS_ERR_OR_NULL(op->ops))
+ continue;
+
+ drm_gpuva_for_each_op(va_op, op->ops) {
+ struct drm_gem_object *obj = op_gem_obj(va_op);
+
+ if (unlikely(!obj))
+ continue;
+
+ if (va_op->op != DRM_GPUVA_OP_UNMAP)
+ continue;
+
+ ret = drm_exec_prepare_obj(exec, obj, num_fences);
+ if (ret)
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
static int
nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
{
struct nouveau_uvmm *uvmm = nouveau_cli_uvmm(job->cli);
struct nouveau_uvmm_bind_job *bind_job = to_uvmm_bind_job(job);
struct nouveau_sched_entity *entity = job->entity;
- struct drm_exec *exec = &job->exec;
struct bind_job_op *op;
int ret;
@@ -1170,6 +1204,8 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
dma_resv_unlock(obj->resv);
if (IS_ERR(op->vm_bo))
return PTR_ERR(op->vm_bo);
+
+ drm_gpuvm_bo_extobj_add(op->vm_bo);
}
ret = bind_validate_op(job, op);
@@ -1192,6 +1228,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
* unwind all GPU VA space changes on failure.
*/
nouveau_uvmm_lock(uvmm);
+
list_for_each_op(op, &bind_job->ops) {
switch (op->op) {
case OP_MAP_SPARSE:
@@ -1303,30 +1340,13 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
}
}
- drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT |
- DRM_EXEC_IGNORE_DUPLICATES);
- drm_exec_until_all_locked(exec) {
- list_for_each_op(op, &bind_job->ops) {
- struct drm_gpuva_op *va_op;
+ job->vm_exec.vm = &uvmm->base;
+ job->vm_exec.extra.fn = bind_lock_extra;
+ job->vm_exec.extra.priv = bind_job;
- if (IS_ERR_OR_NULL(op->ops))
- continue;
-
- drm_gpuva_for_each_op(va_op, op->ops) {
- struct drm_gem_object *obj = op_gem_obj(va_op);
-
- if (unlikely(!obj))
- continue;
-
- ret = drm_exec_prepare_obj(exec, obj, 1);
- drm_exec_retry_on_contention(exec);
- if (ret) {
- op = list_last_op(&bind_job->ops);
- goto unwind;
- }
- }
- }
- }
+ ret = drm_gpuvm_exec_lock(&job->vm_exec, 1, false);
+ if (ret)
+ goto unwind_continue;
list_for_each_op(op, &bind_job->ops) {
struct drm_gpuva_op *va_op;
@@ -1426,21 +1446,16 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job)
}
nouveau_uvmm_unlock(uvmm);
- drm_exec_fini(exec);
+ drm_gpuvm_exec_unlock(&job->vm_exec);
return ret;
}
static void
nouveau_uvmm_bind_job_armed_submit(struct nouveau_job *job)
{
- struct drm_exec *exec = &job->exec;
- struct drm_gem_object *obj;
- unsigned long index;
-
- drm_exec_for_each_locked_object(exec, index, obj)
- dma_resv_add_fence(obj->resv, job->done_fence, job->resv_usage);
-
- drm_exec_fini(exec);
+ drm_gpuvm_exec_resv_add_fence(&job->vm_exec, job->done_fence,
+ job->resv_usage, job->resv_usage);
+ drm_gpuvm_exec_unlock(&job->vm_exec);
}
static struct dma_fence *
@@ -1832,6 +1847,18 @@ nouveau_uvmm_bo_unmap_all(struct nouveau_bo *nvbo)
}
}
+static int
+nouveau_uvmm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec)
+{
+ struct nouveau_bo *nvbo = nouveau_gem_object(vm_bo->obj);
+
+ return nouveau_bo_validate(nvbo, true, false);
+}
+
+static const struct drm_gpuvm_ops gpuvm_ops = {
+ .vm_bo_validate = nouveau_uvmm_bo_validate,
+};
+
int
nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
u64 kernel_managed_addr, u64 kernel_managed_size)
@@ -1868,7 +1895,7 @@ nouveau_uvmm_init(struct nouveau_uvmm *uvmm, struct nouveau_cli *cli,
NOUVEAU_VA_SPACE_START,
NOUVEAU_VA_SPACE_END,
kernel_managed_addr, kernel_managed_size,
- NULL);
+ &gpuvm_ops);
ret = nvif_vmm_ctor(&cli->mmu, "uvmm",
cli->vmm.vmm.object.oclass, RAW,
--
2.41.0
* Re: [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features
2023-09-20 14:42 [PATCH drm-misc-next v4 0/8] [RFC] DRM GPUVA Manager GPU-VM features Danilo Krummrich
` (7 preceding siblings ...)
2023-09-20 14:42 ` [PATCH drm-misc-next v4 8/8] drm/nouveau: GPUVM dma-resv/extobj handling, " Danilo Krummrich
@ 2023-09-28 12:09 ` Boris Brezillon
8 siblings, 0 replies; 29+ messages in thread
From: Boris Brezillon @ 2023-09-28 12:09 UTC (permalink / raw)
To: Danilo Krummrich
Cc: airlied, daniel, matthew.brost, thomas.hellstrom, sarah.walker,
donald.robson, christian.koenig, faith.ekstrand, dri-devel,
nouveau, linux-kernel
On Wed, 20 Sep 2023 16:42:33 +0200
Danilo Krummrich <dakr@redhat.com> wrote:
> So far the DRM GPUVA manager offers common infrastructure to track GPU VA
> allocations and mappings, generically connect GPU VA mappings to their
> backing buffers and perform more complex mapping operations on the GPU VA
> space.
>
> However, there are more design patterns commonly used by drivers, which
> can potentially be generalized in order to make the DRM GPUVA manager
> represent a basic GPU-VM implementation. In this context, this patch series
> aims at generalizing the following elements.
>
> 1) Provide a common dma-resv for GEM objects not being used outside of
> this GPU-VM.
>
> 2) Provide tracking of external GEM objects (GEM objects which are
> shared with other GPU-VMs).
>
> 3) Provide functions to efficiently lock all GEM objects dma-resv the
> GPU-VM contains mappings of.
>
> 4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
> of, such that validation of evicted GEM objects is accelerated.
>
> 5) Provide some convenience functions for common patterns.
>
> The implementation introduces struct drm_gpuvm_bo, which serves as abstraction
> combining a struct drm_gpuvm and struct drm_gem_object, similar to what
> amdgpu does with struct amdgpu_bo_vm. While this adds a bit of complexity, it
> improves the efficiency of tracking external and evicted GEM objects.
>
> This patch series also renames struct drm_gpuva_manager to struct drm_gpuvm
> including corresponding functions. This way the GPUVA manager's structures align
> better with the documentation of VM_BIND [1] and VM_BIND locking [2]. It also
> provides a better foundation for the naming of data structures and functions
> introduced for implementing the features of this patch series.
>
> This patch series is also available at [3].
>
> [1] Documentation/gpu/drm-vm-bind-async.rst
> [2] Documentation/gpu/drm-vm-bind-locking.rst
> [3] https://gitlab.freedesktop.org/nouvelles/kernel/-/commits/gpuvm-next
>
> Changes in V2:
> ==============
> - rename 'drm_gpuva_manager' -> 'drm_gpuvm' which generally leads to more
> consistent naming
> - properly separate commits (introduce common dma-resv, drm_gpuvm_bo
> abstraction, etc.)
> - remove maple tree for tracking external objects, use a list drm_gpuvm_bos
> per drm_gpuvm instead
> - rework dma-resv locking helpers (Thomas)
> - add a locking helper for a given range of the VA space (Christian)
> - make the GPUVA manager buildable as module, rather than drm_exec
> builtin (Christian)
>
> Changes in V3:
> ==============
> - rename missing function and files (Boris)
> - warn if vm_obj->obj != obj in drm_gpuva_link() (Boris)
> - don't expose drm_gpuvm_bo_destroy() (Boris)
> - unlink VM_BO from GEM in drm_gpuvm_bo_destroy() rather than
> drm_gpuva_unlink() and link within drm_gpuvm_bo_obtain() to keep
> drm_gpuvm_bo instances unique
> - add internal locking to external and evicted object lists to support drivers
> updating the VA space from within the fence signalling critical path (Boris)
> - unlink external objects and evicted objects from the GPUVM's list in
> drm_gpuvm_bo_destroy()
> - add more documentation and fix some kernel doc issues
>
> Changes in V4:
> ==============
> - add a drm_gpuvm_resv() helper (Boris)
> - add a drm_gpuvm::<list_name>::local_list field (Boris)
> - remove drm_gpuvm_bo_get_unless_zero() helper (Boris)
> - fix missing NULL assignment in get_next_vm_bo_from_list() (Boris)
> - keep a drm_gem_object reference on potential vm_bo destroy (alternatively we
> could free the vm_bo and drop the vm_bo's drm_gem_object reference through
> async work)
> - introduce DRM_GPUVM_RESV_PROTECTED flag to indicate external locking through
> the corresponding dma-resv locks to optimize for drivers already holding
> them when needed; add the corresponding lock_assert_held() calls (Thomas)
> - make drm_gpuvm_bo_evict() per vm_bo and add a drm_gpuvm_bo_gem_evict()
> helper (Thomas)
> - pass a drm_gpuvm_bo in drm_gpuvm_ops::vm_bo_validate() (Thomas)
> - documentation fixes
>
> Danilo Krummrich (8):
> drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm
> drm/gpuvm: allow building as module
> drm/nouveau: uvmm: rename 'umgr' to 'base'
> drm/gpuvm: add common dma-resv per struct drm_gpuvm
> drm/gpuvm: add an abstraction for a VM / BO combination
> drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
> drm/gpuvm: generalize dma_resv/extobj handling and GEM validation
Tested-by: Boris Brezillon <boris.brezillon@collabora.com>
> drm/nouveau: GPUVM dma-resv/extobj handling, GEM validation
>
> drivers/gpu/drm/Kconfig | 7 +
> drivers/gpu/drm/Makefile | 2 +-
> drivers/gpu/drm/drm_debugfs.c | 16 +-
> drivers/gpu/drm/drm_gpuva_mgr.c | 1725 --------------
> drivers/gpu/drm/drm_gpuvm.c | 2600 +++++++++++++++++++++
> drivers/gpu/drm/nouveau/Kconfig | 1 +
> drivers/gpu/drm/nouveau/nouveau_bo.c | 4 +-
> drivers/gpu/drm/nouveau/nouveau_debugfs.c | 2 +-
> drivers/gpu/drm/nouveau/nouveau_exec.c | 52 +-
> drivers/gpu/drm/nouveau/nouveau_exec.h | 4 -
> drivers/gpu/drm/nouveau/nouveau_gem.c | 5 +-
> drivers/gpu/drm/nouveau/nouveau_sched.h | 4 +-
> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 207 +-
> drivers/gpu/drm/nouveau/nouveau_uvmm.h | 8 +-
> include/drm/drm_debugfs.h | 6 +-
> include/drm/drm_gem.h | 32 +-
> include/drm/drm_gpuva_mgr.h | 706 ------
> include/drm/drm_gpuvm.h | 1142 +++++++++
> 18 files changed, 3934 insertions(+), 2589 deletions(-)
> delete mode 100644 drivers/gpu/drm/drm_gpuva_mgr.c
> create mode 100644 drivers/gpu/drm/drm_gpuvm.c
> delete mode 100644 include/drm/drm_gpuva_mgr.h
> create mode 100644 include/drm/drm_gpuvm.h
>
>
> base-commit: 1c7a387ffef894b1ab3942f0482dac7a6e0a909c