Intel-XE Archive on lore.kernel.org
From: Oak Zeng <oak.zeng@intel.com>
To: intel-xe@lists.freedesktop.org
Subject: [CI 11/42] drm/svm: Introduce helper to remap drm memory region
Date: Thu, 13 Jun 2024 11:30:57 -0400	[thread overview]
Message-ID: <20240613153128.681864-11-oak.zeng@intel.com> (raw)
In-Reply-To: <20240613153128.681864-1-oak.zeng@intel.com>

Add helper function drm_svm_register_mem_region() to remap GPU vram
using devm_memremap_pages(), so that each GPU vram page is backed by a
struct page.

Those struct pages are created to allow hmm to migrate buffers between
GPU vram and CPU system memory using the existing Linux migration
mechanism (i.e., the same mechanism used to migrate between CPU system
memory and hard disk).

This is preparation work to enable SVM (shared virtual memory) through
the Linux kernel hmm framework. The memory remap's page map type is set
to MEMORY_DEVICE_PRIVATE for now. This means that even though each GPU
vram page gets a struct page and can be mapped in the CPU page table,
such pages are treated as the GPU's private resource, so the CPU can't
access them. If the CPU accesses such a page, a page fault is triggered
and the page is migrated to system memory.

For GPU devices which support a coherent memory protocol between CPU
and GPU (such as the CXL and CAPI protocols), we can remap device
memory as MEMORY_DEVICE_COHERENT. This is TBD.
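For illustration, a driver would typically call the new helper at probe
time, roughly as in the sketch below. The drm_mem_region field and
callback names follow this patch; the surrounding driver structure and
function names (my_device, my_vram_free_page, my_pagemap_owner) are
hypothetical:

```
/* Sketch: registering a device's vram region with the drm SVM layer.
 * Assumes a hypothetical driver struct holding a drm_device and a
 * drm_mem_region; error handling beyond the helper's return is elided.
 */
static int my_driver_register_vram(struct my_device *mdev)
{
	struct drm_mem_region *mr = &mdev->vram_mr;
	int err;

	mr->dpa_base = 0;                    /* device physical base */
	mr->usable_size = mdev->vram_size;   /* bytes to remap */
	mr->mr_ops.drm_mem_region_free_page = my_vram_free_page;
	mr->mr_ops.drm_mem_region_pagemap_owner = my_pagemap_owner;

	/* Only MEMORY_DEVICE_PRIVATE is accepted for now; any other
	 * type returns -EINVAL.
	 */
	err = drm_svm_register_mem_region(&mdev->drm, mr,
					  MEMORY_DEVICE_PRIVATE);
	if (err)
		return err;

	/* On success, mr->hpa_base holds the host physical address the
	 * region was remapped to, and each vram page has a struct page.
	 */
	return 0;
}
```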

v1: Support a memory type interface for register_mem_region (Himal)

Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Felix Kuehling <felix.kuehling@amd.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Leon Romanovsky <leonro@nvidia.com>
Cc: Brian Welty <brian.welty@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
Co-developed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
 drivers/gpu/drm/drm_svm.c | 56 +++++++++++++++++++++++++++++++++++++++
 include/drm/drm_svm.h     |  3 +++
 2 files changed, 59 insertions(+)

diff --git a/drivers/gpu/drm/drm_svm.c b/drivers/gpu/drm/drm_svm.c
index 9a164615f866..741874689e32 100644
--- a/drivers/gpu/drm/drm_svm.c
+++ b/drivers/gpu/drm/drm_svm.c
@@ -13,6 +13,7 @@
 #include <linux/swap.h>
 #include <linux/bug.h>
 #include <linux/hmm.h>
+#include <linux/pci.h>
 #include <linux/mm.h>
 
 static u64 __npages_in_range(unsigned long start, unsigned long end)
@@ -314,3 +315,58 @@ int drm_svm_hmmptr_populate(struct drm_hmmptr *hmmptr, void *owner, u64 start, u
 	return ret;
 }
 EXPORT_SYMBOL_GPL(drm_svm_hmmptr_populate);
+
+static struct dev_pagemap_ops drm_devm_pagemap_ops;
+
+/**
+ * drm_svm_register_mem_region: Remap and provide memmap backing for device memory
+ * @drm: drm device that wants to register a memory region
+ * @mr: memory region to register
+ * @type: ZONE_DEVICE memory type
+ *
+ * This remaps device memory to the host physical address space and creates
+ * struct pages to back the device memory.
+ *
+ * Return: 0 on success, standard error code otherwise
+ */
+int drm_svm_register_mem_region(const struct drm_device *drm,
+		struct drm_mem_region *mr, enum memory_type type)
+{
+	struct device *dev = &to_pci_dev(drm->dev)->dev;
+	struct resource *res;
+	void *addr;
+	int ret;
+
+	/* FIXME: support MEMORY_DEVICE_COHERENT in the future */
+	if (type != MEMORY_DEVICE_PRIVATE)
+		return -EINVAL;
+
+	res = devm_request_free_mem_region(dev, &iomem_resource,
+					   mr->usable_size);
+	if (IS_ERR(res)) {
+		ret = PTR_ERR(res);
+		return ret;
+	}
+
+	drm_devm_pagemap_ops.page_free = mr->mr_ops.drm_mem_region_free_page;
+	mr->pagemap.type = type;
+	mr->pagemap.range.start = res->start;
+	mr->pagemap.range.end = res->end;
+	mr->pagemap.nr_range = 1;
+	mr->pagemap.ops = &drm_devm_pagemap_ops;
+	mr->pagemap.owner = mr->mr_ops.drm_mem_region_pagemap_owner(mr);
+	addr = devm_memremap_pages(dev, &mr->pagemap);
+	if (IS_ERR(addr)) {
+		devm_release_mem_region(dev, res->start, resource_size(res));
+		ret = PTR_ERR(addr);
+		drm_err(drm, "Failed to remap memory region %p, errno %d\n",
+				mr, ret);
+		return ret;
+	}
+	mr->hpa_base = res->start;
+
+	drm_info(drm, "Registered device memory [%llx-%llx] to devm, remapped to %pr\n",
+			mr->dpa_base, mr->dpa_base + mr->usable_size, res);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(drm_svm_register_mem_region);
diff --git a/include/drm/drm_svm.h b/include/drm/drm_svm.h
index d443f20b5510..04552cc1c67f 100644
--- a/include/drm/drm_svm.h
+++ b/include/drm/drm_svm.h
@@ -164,6 +164,9 @@ static inline u64 drm_mem_region_page_to_dpa(struct drm_mem_region *mr, struct p
 	return dpa;
 }
 
+int drm_svm_register_mem_region(const struct drm_device *drm,
+		struct drm_mem_region *mr, enum memory_type type);
+
 /**
  * struct drm_hmmptr- hmmptr pointer
  *
-- 
2.26.3

