From: Oak Zeng <oak.zeng@intel.com>
To: intel-xe@lists.freedesktop.org
Subject: [CI 14/43] drm/svm: Migrate a range of hmmptr to vram
Date: Tue, 11 Jun 2024 22:25:36 -0400
Message-ID: <20240612022605.385062-14-oak.zeng@intel.com>
In-Reply-To: <20240612022605.385062-1-oak.zeng@intel.com>
Introduce a helper function drm_svm_migrate_hmmptr_to_vram to migrate
any sub-range of a hmmptr to vram. The range has to be page aligned.
This is meant to be called by a driver to migrate a hmmptr to vram.
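A minimal driver-side sketch of the intended call pattern (the
fault_addr/size variables and the vm/mr/hmmptr lookups are
illustrative, not part of this patch):

	/* Resolve a GPU page fault by moving the faulted range to vram. */
	mmap_read_lock(vm->mm);
	err = drm_svm_migrate_hmmptr_to_vram(vm, mr, hmmptr,
					     ALIGN_DOWN(fault_addr, PAGE_SIZE),
					     ALIGN(fault_addr + size, PAGE_SIZE));
	mmap_read_unlock(vm->mm);
	if (err)
		return err;	/* the range stays in system memory */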
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Dave Airlie <airlied@redhat.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Felix Kuehling <felix.kuehling@amd.com>
Cc: Brian Welty <brian.welty@intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
---
 drivers/gpu/drm/drm_svm.c | 121 ++++++++++++++++++++++++++++++++++++++
 include/drm/drm_svm.h     |   3 +
 2 files changed, 124 insertions(+)
diff --git a/drivers/gpu/drm/drm_svm.c b/drivers/gpu/drm/drm_svm.c
index ee6d932f434f..0a79b7800400 100644
--- a/drivers/gpu/drm/drm_svm.c
+++ b/drivers/gpu/drm/drm_svm.c
@@ -624,3 +624,124 @@ int drm_svm_register_mem_region(const struct drm_device *drm, struct drm_mem_reg
 	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_svm_register_mem_region);
+
+static void __drm_svm_init_device_pages(unsigned long *pfn, unsigned long npages)
+{
+	struct page *page;
+	unsigned long i;
+
+	for (i = 0; i < npages; i++) {
+		page = pfn_to_page(pfn[i]);
+		zone_device_page_init(page);
+		pfn[i] = migrate_pfn(pfn[i]);
+	}
+}
+
+/**
+ * drm_svm_migrate_hmmptr_to_vram() - migrate a sub-range of a hmmptr to vram
+ * @vm: the vm that the hmmptr belongs to
+ * @mr: the destination memory region to migrate to
+ * @hmmptr: the hmmptr to migrate
+ * @start: start (CPU virtual address, inclusive) of the range to migrate
+ * @end: end (CPU virtual address, exclusive) of the range to migrate
+ *
+ * Must be called with mmap_read_lock held.
+ *
+ * Return: 0 on success, negative errno on failure
+ */
+int drm_svm_migrate_hmmptr_to_vram(struct drm_gpuvm *vm,
+				   struct drm_mem_region *mr,
+				   struct drm_hmmptr *hmmptr, unsigned long start, unsigned long end)
+{
+	struct drm_device *drm = mr->mr_ops.drm_mem_region_get_device(mr);
+	struct mm_struct *mm = vm->mm;
+	unsigned long npages = __npages_in_range(start, end);
+	struct vm_area_struct *vas;
+	struct migrate_vma migrate = {
+		.start = ALIGN_DOWN(start, PAGE_SIZE),
+		.end = ALIGN(end, PAGE_SIZE),
+		.pgmap_owner = mr->mr_ops.drm_mem_region_pagemap_owner(mr),
+		.flags = MIGRATE_VMA_SELECT_SYSTEM,
+	};
+	struct device *dev = drm->dev;
+	struct dma_fence *fence;
+	struct migrate_vec *src;
+	struct migrate_vec *dst;
+	int ret = 0;
+	void *buf;
+
+	mmap_assert_locked(mm);
+
+	BUG_ON(start < __hmmptr_cpu_start(hmmptr));
+	BUG_ON(end > __hmmptr_cpu_end(hmmptr));
+
+	vas = find_vma_intersection(mm, start, end);
+	if (!vas)
+		return -ENOENT;
+
+	migrate.vma = vas;
+	buf = kvcalloc(npages, 2 * sizeof(*migrate.src), GFP_KERNEL);
+	if (!buf)
+		return -ENOMEM;
+
+	migrate.src = buf;
+	migrate.dst = migrate.src + npages;
+	ret = migrate_vma_setup(&migrate);
+	if (ret) {
+		drm_warn(drm, "vma setup returned %d for range [0x%lx - 0x%lx]\n",
+			 ret, start, end);
+		goto free_buf;
+	}
+
+	/*
+	 * Partial migration is normal. Print a message for now;
+	 * once this behavior is verified, delete this warning.
+	 */
+	if (migrate.cpages != npages)
+		drm_warn(drm, "Partial migration for range [0x%lx - 0x%lx], range is %lu pages, migrated only %lu pages\n",
+			 start, end, npages, migrate.cpages);
+
+	ret = mr->mr_ops.drm_mem_region_alloc_pages(mr, migrate.cpages, migrate.dst);
+	if (ret)
+		goto migrate_finalize;
+
+	__drm_svm_init_device_pages(migrate.dst, migrate.cpages);
+
+	src = __generate_migrate_vec_sram(dev, migrate.src, true, npages);
+	if (!src) {
+		ret = -EFAULT;
+		goto free_device_pages;
+	}
+
+	dst = __generate_migrate_vec_vram(migrate.dst, false, migrate.cpages);
+	if (!dst) {
+		ret = -EFAULT;
+		goto free_migrate_src;
+	}
+
+	fence = mr->mr_ops.drm_mem_region_migrate(src, dst);
+	if (IS_ERR(fence)) {
+		ret = -EIO;
+		goto free_migrate_dst;
+	}
+	dma_fence_wait(fence, false);
+	dma_fence_put(fence);
+
+	migrate_vma_pages(&migrate);
+
+free_migrate_dst:
+	__free_migrate_vec_vram(dst);
+free_migrate_src:
+	__free_migrate_vec_sram(dev, src, true);
+free_device_pages:
+	if (ret)
+		__drm_svm_free_pages(migrate.dst, migrate.cpages);
+migrate_finalize:
+	if (ret)
+		memset(migrate.dst, 0, sizeof(*migrate.dst) * migrate.cpages);
+	migrate_vma_finalize(&migrate);
+free_buf:
+	kvfree(buf);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(drm_svm_migrate_hmmptr_to_vram);
diff --git a/include/drm/drm_svm.h b/include/drm/drm_svm.h
index cb1edb6a993e..805f066ef3ff 100644
--- a/include/drm/drm_svm.h
+++ b/include/drm/drm_svm.h
@@ -223,4 +223,7 @@ void drm_svm_hmmptr_map_dma_pages(struct drm_hmmptr *hmmptr, u64 page_idx, u64 n
 void drm_svm_hmmptr_unmap_dma_pages(struct drm_hmmptr *hmmptr, u64 page_idx, u64 npages);
 int drm_svm_hmmptr_populate(struct drm_hmmptr *hmmptr, void *owner, u64 start, u64 end,
			    bool write, bool is_mmap_locked);
+int drm_svm_migrate_hmmptr_to_vram(struct drm_gpuvm *vm,
+				   struct drm_mem_region *mr,
+				   struct drm_hmmptr *hmmptr, unsigned long start, unsigned long end);
 #endif
--
2.26.3