Intel-XE Archive on lore.kernel.org
From: Oak Zeng <oak.zeng@intel.com>
To: dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: [PATCH 22/22] drm/xe/svm: Add DRM_XE_SVM kernel config entry
Date: Wed, 20 Dec 2023 23:38:12 -0500	[thread overview]
Message-ID: <20231221043812.3783313-23-oak.zeng@intel.com> (raw)
In-Reply-To: <20231221043812.3783313-1-oak.zeng@intel.com>

Add the DRM_XE_SVM kernel config entry so that
the xe SVM feature can be enabled or disabled
at kernel build time.
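
With the entry in place, enabling the feature amounts to a .config
fragment like the following (a sketch; CONFIG_DRM_XE=m assumes xe is
built as a module and the dependencies listed in the Kconfig hunk
below are already satisfied):

```
CONFIG_DRM_XE=m
CONFIG_DRM_XE_SVM=y
```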

Signed-off-by: Oak Zeng <oak.zeng@intel.com>
Co-developed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@intel.com>
Cc: Brian Welty <brian.welty@intel.com>
---
 drivers/gpu/drm/xe/Kconfig   | 22 ++++++++++++++++++++++
 drivers/gpu/drm/xe/Makefile  |  5 +++++
 drivers/gpu/drm/xe/xe_mmio.c |  5 +++++
 drivers/gpu/drm/xe/xe_vm.c   |  2 ++
 4 files changed, 34 insertions(+)

diff --git a/drivers/gpu/drm/xe/Kconfig b/drivers/gpu/drm/xe/Kconfig
index 5b3da06e7ba3..a57f0972e9ae 100644
--- a/drivers/gpu/drm/xe/Kconfig
+++ b/drivers/gpu/drm/xe/Kconfig
@@ -83,6 +83,28 @@ config DRM_XE_FORCE_PROBE
 
 	  Use "!*" to block the probe of the driver for all known devices.
 
+config DRM_XE_SVM
+	bool "Enable Shared Virtual Memory support in xe"
+	depends on DRM_XE
+	depends on ARCH_ENABLE_MEMORY_HOTPLUG
+	depends on ARCH_ENABLE_MEMORY_HOTREMOVE
+	depends on MEMORY_HOTPLUG
+	depends on MEMORY_HOTREMOVE
+	depends on ARCH_HAS_PTE_DEVMAP
+	depends on SPARSEMEM_VMEMMAP
+	depends on ZONE_DEVICE
+	depends on DEVICE_PRIVATE
+	depends on MMU
+	select HMM_MIRROR
+	select MMU_NOTIFIER
+	default y
+	help
+	  Choose this option if you want Shared Virtual Memory (SVM)
+	  support in xe. With SVM, the virtual address space is shared
+	  between the CPU and GPU, so any virtual address, such as one
+	  returned by malloc or mmap, a variable on the stack, or a
+	  global memory pointer, can be used by the GPU transparently.
+
 menu "drm/Xe Debugging"
 depends on DRM_XE
 depends on EXPERT
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index df8601d6a59f..b75bdbc5e42c 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -282,6 +282,11 @@ xe-$(CONFIG_DRM_XE_DISPLAY) += \
 	i915-display/skl_universal_plane.o \
 	i915-display/skl_watermark.o
 
+xe-$(CONFIG_DRM_XE_SVM) += xe_svm.o \
+	xe_svm_devmem.o \
+	xe_svm_range.o \
+	xe_svm_migrate.o
+
 ifeq ($(CONFIG_ACPI),y)
 	xe-$(CONFIG_DRM_XE_DISPLAY) += \
 		i915-display/intel_acpi.o \
diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
index cfe25a3c7059..7c95f675ed92 100644
--- a/drivers/gpu/drm/xe/xe_mmio.c
+++ b/drivers/gpu/drm/xe/xe_mmio.c
@@ -286,7 +286,9 @@ int xe_mmio_probe_vram(struct xe_device *xe)
 		}
 
 		io_size -= min_t(u64, tile_size, io_size);
+#if IS_ENABLED(CONFIG_DRM_XE_SVM)
 		xe_svm_devm_add(tile, &tile->mem.vram);
+#endif
 	}
 
 	xe->mem.vram.actual_physical_size = total_size;
@@ -361,8 +363,11 @@ static void mmio_fini(struct drm_device *drm, void *arg)
 	pci_iounmap(to_pci_dev(xe->drm.dev), xe->mmio.regs);
 	if (xe->mem.vram.mapping)
 		iounmap(xe->mem.vram.mapping);
+
+#if IS_ENABLED(CONFIG_DRM_XE_SVM)
 	for_each_tile(tile, xe, id) {
 		xe_svm_devm_remove(xe, &tile->mem.vram);
 	}
+#endif
 }
 
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 3c301a5c7325..12d82f2fc195 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1376,7 +1376,9 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
 		xe->usm.num_vm_in_non_fault_mode++;
 	mutex_unlock(&xe->usm.lock);
 
+#if IS_ENABLED(CONFIG_DRM_XE_SVM)
 	vm->svm = xe_create_svm(vm);
+#endif
 	trace_xe_vm_create(vm);
 
 	return vm;
-- 
2.26.3



Thread overview: 23+ messages
2023-12-21  4:37 [PATCH 00/22] XeKmd basic SVM support Oak Zeng
2023-12-21  4:37 ` [PATCH 01/22] drm/xe/svm: Add SVM document Oak Zeng
2023-12-21  4:37 ` [PATCH 02/22] drm/xe/svm: Add svm key data structures Oak Zeng
2023-12-21  4:37 ` [PATCH 03/22] drm/xe/svm: create xe svm during vm creation Oak Zeng
2023-12-21  4:37 ` [PATCH 04/22] drm/xe/svm: Trace svm creation Oak Zeng
2023-12-21  4:37 ` [PATCH 05/22] drm/xe/svm: add helper to retrieve svm range from address Oak Zeng
2023-12-21  4:37 ` [PATCH 06/22] drm/xe/svm: Introduce a helper to build sg table from hmm range Oak Zeng
2023-12-21  4:37 ` [PATCH 07/22] drm/xe/svm: Add helper for binding hmm range to gpu Oak Zeng
2023-12-21  4:37 ` [PATCH 08/22] drm/xe/svm: Add helper to invalidate svm range from GPU Oak Zeng
2023-12-21  4:37 ` [PATCH 09/22] drm/xe/svm: Remap and provide memmap backing for GPU vram Oak Zeng
2023-12-21  4:38 ` [PATCH 10/22] drm/xe/svm: Introduce svm migration function Oak Zeng
2023-12-21  4:38 ` [PATCH 11/22] drm/xe/svm: implement functions to allocate and free device memory Oak Zeng
2023-12-21  4:38 ` [PATCH 12/22] drm/xe/svm: Trace buddy block allocation and free Oak Zeng
2023-12-21  4:38 ` [PATCH 13/22] drm/xe/svm: Handle CPU page fault Oak Zeng
2023-12-21  4:38 ` [PATCH 14/22] drm/xe/svm: trace svm range migration Oak Zeng
2023-12-21  4:38 ` [PATCH 15/22] drm/xe/svm: Implement functions to register and unregister mmu notifier Oak Zeng
2023-12-21  4:38 ` [PATCH 16/22] drm/xe/svm: Implement the mmu notifier range invalidate callback Oak Zeng
2023-12-21  4:38 ` [PATCH 17/22] drm/xe/svm: clean up svm range during process exit Oak Zeng
2023-12-21  4:38 ` [PATCH 18/22] drm/xe/svm: Move a few structures to xe_gt.h Oak Zeng
2023-12-21  4:38 ` [PATCH 19/22] drm/xe/svm: migrate svm range to vram Oak Zeng
2023-12-21  4:38 ` [PATCH 20/22] drm/xe/svm: Populate svm range Oak Zeng
2023-12-21  4:38 ` [PATCH 21/22] drm/xe/svm: GPU page fault support Oak Zeng
2023-12-21  4:38 ` Oak Zeng [this message]
