From: Oak Zeng <oak.zeng@intel.com>
To: intel-xe@lists.freedesktop.org
Subject: [CI v3 10/26] drm/svm: introduce drm_mem_region concept
Date: Wed, 29 May 2024 20:47:16 -0400
Message-ID: <20240530004732.84898-10-oak.zeng@intel.com>
In-Reply-To: <20240530004732.84898-1-oak.zeng@intel.com>
As its name indicates, a drm_mem_region represents a memory region on
a drm device, e.g., a GPU's HBM memory.
A memory region carries the address information of the region, from
both the CPU (hpa_base) and the GPU (dpa_base) perspectives. It also
has interfaces through which drm calls back into the driver to
allocate/free memory from the region, migrate data to/from the region,
get the pagemap owner of the region, and get the drm device that owns
the region.
This is introduced for the system allocator implementation, so the
memory allocation and free interfaces are page based.
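For illustration, here is a minimal driver-side sketch of these page
based interfaces. All "my_*" names are hypothetical placeholders, not
part of this patch:

  /* Hypothetical driver code; my_vram_alloc/my_vram_free stand in
   * for a real driver's VRAM allocator. */
  struct my_device {
          struct drm_mem_region mem_region;
          /* ... driver allocator state ... */
  };

  int my_vram_alloc(struct my_device *mdev, unsigned long npages,
                    unsigned long *pfns);
  void my_vram_free(struct my_device *mdev, struct page *page);

  static int my_region_alloc_pages(struct drm_mem_region *mr,
                                   unsigned long npages, unsigned long *pfns)
  {
          /* Return one pfn per allocated page through pfns. */
          return my_vram_alloc(mr->dev_private, npages, pfns);
  }

  static void my_region_free_page(struct page *page)
  {
          struct drm_mem_region *mr = drm_page_to_mem_region(page);

          my_vram_free(mr->dev_private, page);
  }

  static void *my_region_pagemap_owner(struct drm_mem_region *mr)
  {
          /* Any stable, driver-chosen token works as the owner. */
          return mr->dev_private;
  }

The driver then wires the ops into the drm_mem_region it embeds:

  mdev->mem_region.dev_private = mdev;
  mdev->mem_region.mr_ops.drm_mem_region_alloc_pages = my_region_alloc_pages;
  mdev->mem_region.mr_ops.drm_mem_region_free_page = my_region_free_page;
  mdev->mem_region.mr_ops.drm_mem_region_pagemap_owner = my_region_pagemap_owner;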
A few helper functions are also introduced (a usage sketch follows the
list):
1) drm_mem_region_pfn_to_dpa: calculate the device physical address
from a page's pfn
2) drm_page_to_mem_region: retrieve the drm memory region that a page
resides in
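As a usage sketch (illustrative only; page_to_pfn() is the standard
kernel helper), the device physical address of a device-private page
can be derived as:

  struct drm_mem_region *mr = drm_page_to_mem_region(page);
  u64 dpa = drm_mem_region_pfn_to_dpa(mr, page_to_pfn(page));

which evaluates to mr->dpa_base + ((pfn << PAGE_SHIFT) - mr->hpa_base).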
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@intel.com>
Cc: Brian Welty <brian.welty@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
---
 include/drm/drm_svm.h | 167 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 167 insertions(+)
create mode 100644 include/drm/drm_svm.h
diff --git a/include/drm/drm_svm.h b/include/drm/drm_svm.h
new file mode 100644
index 000000000000..2f8658538b4b
--- /dev/null
+++ b/include/drm/drm_svm.h
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef __DRM_SVM_H__
+#define __DRM_SVM_H__
+
+#include <linux/bug.h>
+#include <linux/compiler_types.h>
+#include <linux/memremap.h>
+#include <linux/mm_types.h>
+#include <linux/types.h>
+
+struct dma_fence;
+struct drm_device;
+struct drm_mem_region;
+
+/**
+ * struct migrate_vec - a migration vector is an array of addresses,
+ * each of which represents one page.
+ * For a system memory page, the address is a dma-mapped address.
+ * For a vram page, the address is a device physical address.
+ */
+struct migrate_vec {
+	/**
+	 * @mr: the memory region that the pages reside in.
+	 * For system memory pages, mr is NULL.
+	 */
+	struct drm_mem_region *mr;
+	/** @npages: number of pages */
+	u64 npages;
+	/**
+	 * @addr_vec: address vector;
+	 * each item in addr_vec is the address of one page
+	 */
+	union {
+		/** @dma_addr: dma-mapped address of the page, only valid for system pages */
+		dma_addr_t dma_addr;
+		/** @dpa: device physical address of the page, only valid for vram pages */
+		phys_addr_t dpa;
+	} addr_vec[1];
+};
+
+/**
+ * struct drm_mem_region_ops - memory region operations such as memory
+ * allocation and migration. The driver is expected to implement these operations.
+ */
+struct drm_mem_region_ops {
+	/**
+	 * @drm_mem_region_alloc_pages: Called by drm for the driver to allocate
+	 * device VRAM memory
+	 * @mr: The memory region to allocate device VRAM memory from
+	 * @npages: number of pages to allocate
+	 * @pfns: Used to return the pfn of each page
+	 */
+	int (*drm_mem_region_alloc_pages)(struct drm_mem_region *mr,
+					  unsigned long npages, unsigned long *pfns);
+	/**
+	 * @drm_mem_region_free_page: Called by drm to free one page of device memory
+	 * @page: pointer to the page to free
+	 */
+	void (*drm_mem_region_free_page)(struct page *page);
+	/**
+	 * @drm_mem_region_migrate: Called by drm to migrate memory from src to
+	 * dst. The driver is expected to implement this using device hardware
+	 * accelerators such as a DMA engine. The DRM subsystem calls this to
+	 * migrate memory between system memory and a device memory region.
+	 *
+	 * @src_vec: source migration vector
+	 * @dst_vec: destination migration vector
+	 */
+	struct dma_fence *(*drm_mem_region_migrate)(struct migrate_vec *src_vec,
+						    struct migrate_vec *dst_vec);
+	/**
+	 * @drm_mem_region_pagemap_owner: Return the pagemap owner of a memory
+	 * region. The pagemap owner is the owner of the device memory. It is
+	 * defined by the device driver and opaque to the drm layer. Drm uses the
+	 * pagemap owner to set up page migration (see the hmm function
+	 * migrate_vma_setup) and range population (see the hmm function
+	 * hmm_range_fault). The driver is free to choose the pagemap owner.
+	 *
+	 * @mr: the memory region whose pagemap owner is queried
+	 */
+	void *(*drm_mem_region_pagemap_owner)(struct drm_mem_region *mr);
+	/**
+	 * @drm_mem_region_get_device: Return the drm device that owns this memory
+	 * region.
+	 *
+	 * @mr: the memory region
+	 */
+	struct drm_device *(*drm_mem_region_get_device)(struct drm_mem_region *mr);
+};
+
+/**
+ * struct drm_mem_region - memory region structure
+ * This is used to describe a memory region of a drm
+ * device, such as HBM memory or CXL extension memory.
+ *
+ * drm_mem_region is derived from the xe_mem_region
+ * concept: xe_mem_region is moved to the drm layer and
+ * renamed drm_mem_region.
+ *
+ * drm_mem_region is meant to be embedded in a driver struct such as
+ * "struct xe_tile" or "struct amdgpu_device".
+ */
+struct drm_mem_region {
+	/** @dev_private: device private data which is opaque to the drm layer */
+	void *dev_private;
+	/** @dpa_base: this memory region's DPA (device physical address) base */
+	resource_size_t dpa_base;
+	/**
+	 * @usable_size: usable size of VRAM
+	 *
+	 * Usable size of VRAM excluding reserved portions
+	 * (e.g. stolen memory)
+	 */
+	resource_size_t usable_size;
+	/** @pagemap: Used to remap device memory as ZONE_DEVICE */
+	struct dev_pagemap pagemap;
+	/**
+	 * @hpa_base: base host physical address
+	 *
+	 * This is generated when remapping device memory as ZONE_DEVICE
+	 */
+	resource_size_t hpa_base;
+	/**
+	 * @mr_ops: memory region operation function pointers
+	 */
+	struct drm_mem_region_ops mr_ops;
+};
+
+/**
+ * drm_page_to_mem_region() - Get a page's memory region
+ *
+ * @page: a struct page pointer pointing to a page in a vram memory region
+ */
+static inline struct drm_mem_region *drm_page_to_mem_region(struct page *page)
+{
+	return container_of(page->pgmap, struct drm_mem_region, pagemap);
+}
+
+/**
+ * drm_mem_region_pfn_to_dpa() - Calculate page's dpa from pfn
+ *
+ * @mr: The memory region that page resides in
+ * @pfn: page frame number of the page
+ *
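+ * For example, a page whose host physical address is mr->hpa_base + 0x1000
+ * maps to device physical address mr->dpa_base + 0x1000.
+ *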
+ * Returns: the device physical address of the page
+ */
+static inline u64 drm_mem_region_pfn_to_dpa(struct drm_mem_region *mr, u64 pfn)
+{
+	u64 dpa;
+	u64 offset;
+
+	BUG_ON((pfn << PAGE_SHIFT) < mr->hpa_base);
+	BUG_ON((pfn << PAGE_SHIFT) >= mr->hpa_base + mr->usable_size);
+	offset = (pfn << PAGE_SHIFT) - mr->hpa_base;
+	dpa = mr->dpa_base + offset;
+
+	return dpa;
+}
+
+#endif /* __DRM_SVM_H__ */
--
2.26.3