Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: himal.prasad.ghimiray@intel.com, apopple@nvidia.com,
	airlied@gmail.com, thomas.hellstrom@linux.intel.com,
	simona.vetter@ffwll.ch, felix.kuehling@amd.com, dakr@kernel.org
Subject: [PATCH v6 17/32] drm/xe: Do not allow CPU address mirror VMA unbind if the GPU has bindings
Date: Mon, 24 Feb 2025 20:42:56 -0800	[thread overview]
Message-ID: <20250225044311.3178695-18-matthew.brost@intel.com> (raw)
In-Reply-To: <20250225044311.3178695-1-matthew.brost@intel.com>

The uAPI is designed around the use case in which only mapping a BO over a
malloc'd address will unbind a CPU address mirror VMA. Allowing a CPU
address mirror VMA to be unbound while the GPU still has bindings in the
range being unbound therefore makes little sense. This behavior is not
supported, which simplifies the code. The decision can be revisited if a
use case arises.

v3:
 - s/arrises/arises (Thomas)
 - s/system allocator/GPU address mirror (Thomas)
 - Kernel doc (Thomas)
 - Newline between function defs (Thomas)
v5:
 - Kernel doc (Thomas)
v6:
 - Only compile if CONFIG_DRM_GPUSVM selected (CI, Lucas)

Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
 drivers/gpu/drm/xe/xe_svm.c | 15 +++++++++++++++
 drivers/gpu/drm/xe/xe_svm.h |  8 ++++++++
 drivers/gpu/drm/xe/xe_vm.c  | 16 ++++++++++++++++
 3 files changed, 39 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index a9d32cd69ae9..80076f4dc4b4 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -434,3 +434,18 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 
 	return err;
 }
+
+/**
+ * xe_svm_has_mapping() - SVM has mappings
+ * @vm: The VM.
+ * @start: Start address.
+ * @end: End address.
+ *
+ * Check if an address range has SVM mappings.
+ *
+ * Return: True if the address range has an SVM mapping, False otherwise
+ */
+bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
+{
+	return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
+}
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 87cbda5641bb..35e044e492e0 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -57,6 +57,8 @@ void xe_svm_close(struct xe_vm *vm);
 int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 			    struct xe_tile *tile, u64 fault_addr,
 			    bool atomic);
+
+bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end);
 #else
 static inline bool xe_svm_range_pages_valid(struct xe_svm_range *range)
 {
@@ -86,6 +88,12 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
 {
 	return 0;
 }
+
+static inline
+bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
+{
+	return false;
+}
 #endif
 
 /**
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 870629cbb859..a3ef76504ce8 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2442,6 +2442,17 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 			struct xe_vma *old =
 				gpuva_to_vma(op->base.remap.unmap->va);
 			bool skip = xe_vma_is_cpu_addr_mirror(old);
+			u64 start = xe_vma_start(old), end = xe_vma_end(old);
+
+			if (op->base.remap.prev)
+				start = op->base.remap.prev->va.addr +
+					op->base.remap.prev->va.range;
+			if (op->base.remap.next)
+				end = op->base.remap.next->va.addr;
+
+			if (xe_vma_is_cpu_addr_mirror(old) &&
+			    xe_svm_has_mapping(vm, start, end))
+				return -EBUSY;
 
 			op->remap.start = xe_vma_start(old);
 			op->remap.range = xe_vma_size(old);
@@ -2524,6 +2535,11 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 		{
 			struct xe_vma *vma = gpuva_to_vma(op->base.unmap.va);
 
+			if (xe_vma_is_cpu_addr_mirror(vma) &&
+			    xe_svm_has_mapping(vm, xe_vma_start(vma),
+					       xe_vma_end(vma)))
+				return -EBUSY;
+
 			if (!xe_vma_is_cpu_addr_mirror(vma))
 				xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 			break;
-- 
2.34.1


