From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: himal.prasad.ghimiray@intel.com, apopple@nvidia.com,
airlied@gmail.com, thomas.hellstrom@linux.intel.com,
simona.vetter@ffwll.ch, felix.kuehling@amd.com, dakr@kernel.org
Subject: [PATCH v7 28/32] drm/xe: Basic SVM BO eviction
Date: Wed, 5 Mar 2025 17:26:53 -0800
Message-ID: <20250306012657.3505757-29-matthew.brost@intel.com>
In-Reply-To: <20250306012657.3505757-1-matthew.brost@intel.com>
Wire xe_bo_move to GPU SVM migration via the new helper xe_svm_bo_evict.
v2:
- Use xe_svm_bo_evict
- Drop bo->range
v3:
- Kernel doc (Thomas)
v4:
- Add missing xe_bo.c code
v5:
- Add XE_BO_FLAG_CPU_ADDR_MIRROR flag in this patch (Thomas)
- Add message on eviction failure
v6:
- Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas)
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 22 ++++++++++++++++++++++
drivers/gpu/drm/xe/xe_bo.h | 1 +
drivers/gpu/drm/xe/xe_svm.c | 17 ++++++++++++++++-
drivers/gpu/drm/xe/xe_svm.h | 9 +++++++++
4 files changed, 48 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 0305e4bdb18c..64f9c936eea0 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -281,6 +281,8 @@ int xe_bo_placement_for_flags(struct xe_device *xe, struct xe_bo *bo,
static void xe_evict_flags(struct ttm_buffer_object *tbo,
struct ttm_placement *placement)
{
+ struct xe_bo *bo;
+
if (!xe_bo_is_xe_bo(tbo)) {
/* Don't handle scatter gather BOs */
if (tbo->type == ttm_bo_type_sg) {
@@ -292,6 +294,12 @@ static void xe_evict_flags(struct ttm_buffer_object *tbo,
return;
}
+ bo = ttm_to_xe_bo(tbo);
+ if (bo->flags & XE_BO_FLAG_CPU_ADDR_MIRROR) {
+ *placement = sys_placement;
+ return;
+ }
+
/*
* For xe, sg bos that are evicted to system just triggers a
* rebind of the sg list upon subsequent validation to XE_PL_TT.
@@ -789,6 +797,20 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
goto out;
}
+ if (!move_lacks_source && (bo->flags & XE_BO_FLAG_CPU_ADDR_MIRROR) &&
+ new_mem->mem_type == XE_PL_SYSTEM) {
+ ret = xe_svm_bo_evict(bo);
+ if (!ret) {
+ drm_dbg(&xe->drm, "Evict system allocator BO success\n");
+ ttm_bo_move_null(ttm_bo, new_mem);
+ } else {
+ drm_dbg(&xe->drm, "Evict system allocator BO failed=%pe\n",
+ ERR_PTR(ret));
+ }
+
+ goto out;
+ }
+
if (old_mem_type == XE_PL_SYSTEM && new_mem->mem_type == XE_PL_TT && !handle_system_ccs) {
ttm_bo_move_null(ttm_bo, new_mem);
goto out;
diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
index b128becbe8cc..bda3fdd408da 100644
--- a/drivers/gpu/drm/xe/xe_bo.h
+++ b/drivers/gpu/drm/xe/xe_bo.h
@@ -47,6 +47,7 @@
XE_BO_FLAG_GGTT1 | \
XE_BO_FLAG_GGTT2 | \
XE_BO_FLAG_GGTT3)
+#define XE_BO_FLAG_CPU_ADDR_MIRROR BIT(22)
/* this one is triggered internally only */
#define XE_BO_FLAG_INTERNAL_TEST BIT(30)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 34ecdbcb23b9..f4332b8ffdba 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -617,7 +617,8 @@ static int xe_svm_alloc_vram(struct xe_vm *vm, struct xe_tile *tile,
bo = xe_bo_create_locked(tile_to_xe(tile), NULL, NULL,
xe_svm_range_size(range),
ttm_bo_type_device,
- XE_BO_FLAG_VRAM_IF_DGFX(tile));
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+ XE_BO_FLAG_CPU_ADDR_MIRROR);
if (IS_ERR(bo)) {
err = PTR_ERR(bo);
if (xe_vm_validate_should_retry(NULL, err, &end))
@@ -772,6 +773,20 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
}
+/**
+ * xe_svm_bo_evict() - SVM evict BO to system memory
+ * @bo: BO to evict
+ *
+ * SVM evict BO to system memory. The GPU SVM layer ensures all device pages
+ * are evicted before returning.
+ *
+ * Return: 0 on success, standard error code otherwise
+ */
+int xe_svm_bo_evict(struct xe_bo *bo)
+{
+ return drm_gpusvm_evict_to_ram(&bo->devmem_allocation);
+}
+
#if IS_ENABLED(CONFIG_DRM_XE_DEVMEM_MIRROR)
static struct drm_pagemap_device_addr
xe_drm_pagemap_device_map(struct drm_pagemap *dpagemap,
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 5d4eeb2d34ce..855aa8e1dd38 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -11,6 +11,7 @@
#define XE_INTERCONNECT_VRAM DRM_INTERCONNECT_DRIVER
+struct xe_bo;
struct xe_vram_region;
struct xe_tile;
struct xe_vm;
@@ -67,6 +68,8 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
bool atomic);
bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end);
+
+int xe_svm_bo_evict(struct xe_bo *bo);
#else
static inline bool xe_svm_range_pages_valid(struct xe_svm_range *range)
{
@@ -108,6 +111,12 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
{
return false;
}
+
+static inline
+int xe_svm_bo_evict(struct xe_bo *bo)
+{
+ return 0;
+}
#endif
/**
--
2.34.1