Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: himal.prasad.ghimiray@intel.com, apopple@nvidia.com,
	airlied@gmail.com, thomas.hellstrom@linux.intel.com,
	simona.vetter@ffwll.ch, felix.kuehling@amd.com, dakr@kernel.org
Subject: [PATCH v7 17/32] drm/xe: Do not allow CPU address mirror VMA unbind if
Date: Wed,  5 Mar 2025 17:26:42 -0800	[thread overview]
Message-ID: <20250306012657.3505757-18-matthew.brost@intel.com> (raw)
In-Reply-To: <20250306012657.3505757-1-matthew.brost@intel.com>

The uAPI is designed around the use case that only mapping a BO to a
malloc'd address will unbind a CPU-address-mirror VMA. Allowing a
CPU-address-mirror VMA to be unbound while the GPU still has bindings in
the range being unbound therefore makes little sense. This behavior is
not supported, which simplifies the code. The decision can always be
revisited if a use case arises.
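The remap path below narrows the old VMA's span by any prev/next
remnants that remain mapped before asking SVM whether the range has
mappings. A minimal standalone sketch of that range computation, using a
hypothetical `struct range` as a stand-in for the GPUVA op's
`op->base.remap.prev/next` fields:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the prev/next remnant VAs carried by a
 * GPUVA remap op; in the driver these are op->base.remap.prev/next. */
struct range {
	uint64_t addr;
	uint64_t size;
};

/* Compute the region actually being unmapped by a remap: start from the
 * old VMA's [old_start, old_end) span, then skip any prefix kept by
 * 'prev' and any suffix kept by 'next'. This is the range the patch
 * passes to xe_svm_has_mapping() before rejecting with -EBUSY. */
void remap_unmap_range(uint64_t old_start, uint64_t old_end,
		       const struct range *prev, const struct range *next,
		       uint64_t *start, uint64_t *end)
{
	*start = old_start;
	*end = old_end;
	if (prev)
		*start = prev->addr + prev->size; /* prefix stays mapped */
	if (next)
		*end = next->addr;                /* suffix stays mapped */
}
```

With no prev/next remnant the whole old VMA span is checked, matching
the plain DRM_GPUVA_OP_UNMAP case further down in the patch.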

v3:
 - s/arrises/arises (Thomas)
 - s/system allocator/GPU address mirror (Thomas)
 - Kernel doc (Thomas)
 - Newline between function defs (Thomas)
v5:
 - Kernel doc (Thomas)
v6:
 - Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/xe/xe_svm.c | 15 +++++++++++++++
 drivers/gpu/drm/xe/xe_svm.h |  8 ++++++++
 drivers/gpu/drm/xe/xe_vm.c  | 16 ++++++++++++++++
 3 files changed, 39 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index a9d32cd69ae9..80076f4dc4b4 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -434,3 +434,18 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 
 	return err;
 }
+
+/**
+ * xe_svm_has_mapping() - SVM has mappings
+ * @vm: The VM.
+ * @start: Start address.
+ * @end: End address.
+ *
+ * Check if an address range has SVM mappings.
+ *
+ * Return: True if the address range has an SVM mapping, False otherwise
+ */
+bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
+{
+	return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
+}
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 87cbda5641bb..35e044e492e0 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -57,6 +57,8 @@ void xe_svm_close(struct xe_vm *vm);
 int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 			    struct xe_tile *tile, u64 fault_addr,
 			    bool atomic);
+
+bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end);
 #else
 static inline bool xe_svm_range_pages_valid(struct xe_svm_range *range)
 {
@@ -86,6 +88,12 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 {
 	return 0;
 }
+
+static inline
+bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
+{
+	return false;
+}
 #endif
 
 /**
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 5f81c81c8f43..80b57d7a3f4c 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2485,6 +2485,17 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 			struct xe_vma *old =
 				gpuva_to_vma(op->base.remap.unmap->va);
 			bool skip = xe_vma_is_cpu_addr_mirror(old);
+			u64 start = xe_vma_start(old), end = xe_vma_end(old);
+
+			if (op->base.remap.prev)
+				start = op->base.remap.prev->va.addr +
+					op->base.remap.prev->va.range;
+			if (op->base.remap.next)
+				end = op->base.remap.next->va.addr;
+
+			if (xe_vma_is_cpu_addr_mirror(old) &&
+			    xe_svm_has_mapping(vm, start, end))
+				return -EBUSY;
 
 			op->remap.start = xe_vma_start(old);
 			op->remap.range = xe_vma_size(old);
@@ -2566,6 +2577,11 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 		case DRM_GPUVA_OP_UNMAP:
 			vma = gpuva_to_vma(op->base.unmap.va);
 
+			if (xe_vma_is_cpu_addr_mirror(vma) &&
+			    xe_svm_has_mapping(vm, xe_vma_start(vma),
+					       xe_vma_end(vma)))
+				return -EBUSY;
+
 			if (!xe_vma_is_cpu_addr_mirror(vma))
 				xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 			break;
-- 
2.34.1


