From: Oak Zeng <oak.zeng@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: himal.prasad.ghimiray@intel.com, krishnaiah.bommu@intel.com,
matthew.brost@intel.com, Thomas.Hellstrom@linux.intel.com,
brian.welty@intel.com
Subject: [v2 28/31] drm/xe/svm: Introduce helper to migrate vma to vram
Date: Tue, 9 Apr 2024 16:17:39 -0400 [thread overview]
Message-ID: <20240409201742.3042626-29-oak.zeng@intel.com> (raw)
In-Reply-To: <20240409201742.3042626-1-oak.zeng@intel.com>
Introduce a helper function xe_svm_migrate_vma_to_vram.

Since the source pages of an svm range can be physically non-contiguous,
and the destination vram pages can likewise be non-contiguous, there is
no easy way to migrate multiple pages per blitter command. For now we
migrate page by page.

Migration is best effort: even if some pages fail to migrate, we still
try to migrate the remaining pages.

FIXME: use a single blitter command to copy when both src and dst are
physically contiguous.

FIXME: when a vma is partially migrated, split the vma, as we assume
no mixed placement within one vma.
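The first FIXME could be approached by scanning for runs where both the
source and destination addresses advance page by page, and issuing one
blitter copy per run. Below is a minimal userspace sketch of that run
detection only; `contig_run_len` and `PAGE_SZ` are illustrative names,
not part of this patch or the xe driver:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SZ 4096ULL

/*
 * Return the length (in pages) of the maximal run starting at index i
 * where both the source and destination addresses are physically
 * contiguous, i.e. each following page sits exactly PAGE_SZ after the
 * previous one on both sides. Such a run could be copied with a single
 * blitter command instead of one command per page.
 */
static size_t contig_run_len(const uint64_t *src, const uint64_t *dst,
			     size_t i, size_t n)
{
	size_t len = 1;

	while (i + len < n &&
	       src[i + len] == src[i] + len * PAGE_SZ &&
	       dst[i + len] == dst[i] + len * PAGE_SZ)
		len++;
	return len;
}
```

A batched migration loop would then advance by `contig_run_len()` pages
per copy, degenerating to the current page-by-page behavior when every
run has length 1.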
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
Co-developed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@intel.com>
Cc: Brian Welty <brian.welty@intel.com>
---
drivers/gpu/drm/xe/xe_svm.h | 2 +
drivers/gpu/drm/xe/xe_svm_migrate.c | 115 ++++++++++++++++++++++++++++
2 files changed, 117 insertions(+)
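A note on the buffer handling below: the function makes one allocation
and carves it into three per-page arrays (migrate.src, migrate.dst and
src_dma_addr). A standalone userspace model of that single-allocation
layout, with illustrative names (`migrate_bufs`, `migrate_bufs_alloc`
are not driver API):

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Model of the single kvcalloc() carved into three per-page arrays:
 * npages source migrate pfns, npages destination migrate pfns, then
 * npages source DMA addresses, in that order.
 */
struct migrate_bufs {
	unsigned long *src;	/* plays the role of migrate.src */
	unsigned long *dst;	/* plays the role of migrate.dst */
	uint64_t *dma;		/* plays the role of src_dma_addr */
	void *raw;		/* the one underlying allocation */
};

static int migrate_bufs_alloc(struct migrate_bufs *b, size_t npages)
{
	/* calloc() zero-fills, like kvcalloc() in the patch. */
	b->raw = calloc(npages,
			2 * sizeof(unsigned long) + sizeof(uint64_t));
	if (!b->raw)
		return -1;
	b->src = b->raw;
	b->dst = b->src + npages;
	b->dma = (uint64_t *)(b->dst + npages);
	return 0;
}
```

The zero-fill matters: entries of the DMA-address array that are never
mapped stay zero, which the unmap loop can use to skip them.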
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index c9e4239c44b4..18ce2e3757c5 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -83,4 +83,6 @@ int xe_devm_alloc_pages(struct xe_tile *tile,
void xe_devm_free_blocks(struct list_head *blocks);
void xe_devm_page_free(struct page *page);
vm_fault_t xe_svm_migrate_to_sram(struct vm_fault *vmf);
+int xe_svm_migrate_vma_to_vram(struct xe_vm *vm, struct xe_vma *vma,
+ struct xe_tile *tile);
#endif
diff --git a/drivers/gpu/drm/xe/xe_svm_migrate.c b/drivers/gpu/drm/xe/xe_svm_migrate.c
index 0db831af098e..ab8dd1f58aa4 100644
--- a/drivers/gpu/drm/xe/xe_svm_migrate.c
+++ b/drivers/gpu/drm/xe/xe_svm_migrate.c
@@ -220,3 +220,118 @@ vm_fault_t xe_svm_migrate_to_sram(struct vm_fault *vmf)
kvfree(buf);
return 0;
}
+
+/**
+ * xe_svm_migrate_vma_to_vram() - migrate backing store of a vma to vram
+ * @vm: the vm that the vma belongs to
+ * @vma: the vma to migrate
+ * @tile: the destination tile which holds the new backing store of the range
+ *
+ * Must be called with mmap_read_lock held.
+ *
+ * Returns: negative errno on failure, 0 on success
+ */
+int xe_svm_migrate_vma_to_vram(struct xe_vm *vm,
+ struct xe_vma *vma,
+ struct xe_tile *tile)
+{
+ struct mm_struct *mm = vm->mm;
+ unsigned long start = xe_vma_start(vma);
+ unsigned long end = xe_vma_end(vma);
+ unsigned long npages = (end - start) >> PAGE_SHIFT;
+ struct xe_mem_region *mr = &tile->mem.vram;
+ struct vm_area_struct *vas;
+
+ struct migrate_vma migrate = {
+ .start = start,
+ .end = end,
+ .pgmap_owner = tile->xe,
+ .flags = MIGRATE_VMA_SELECT_SYSTEM,
+ };
+ struct device *dev = tile->xe->drm.dev;
+ dma_addr_t *src_dma_addr;
+ struct dma_fence *fence;
+ struct page *src_page;
+ LIST_HEAD(blocks);
+ int ret = 0, i;
+ u64 dst_dpa;
+ void *buf;
+
+ mmap_assert_locked(mm);
+
+ vas = find_vma_intersection(mm, start, start + 4);
+ if (!vas)
+ return -ENOENT;
+
+ migrate.vma = vas;
+ buf = kvcalloc(npages, 2 * sizeof(*migrate.src) + sizeof(*src_dma_addr),
+ GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+ migrate.src = buf;
+ migrate.dst = migrate.src + npages;
+ src_dma_addr = (dma_addr_t *) (migrate.dst + npages);
+ ret = xe_devm_alloc_pages(tile, npages, &blocks, migrate.dst);
+ if (ret)
+ goto kfree_buf;
+
+ ret = migrate_vma_setup(&migrate);
+ if (ret) {
+ drm_err(&tile->xe->drm, "vma setup returned %d for range [%lx - %lx]\n",
+ ret, start, end);
+ goto free_dst_pages;
+ }
+
+ /* FIXME: partial migration of a range; print a warning for now.
+ * If this message is printed, we need to split the xe_vma as we
+ * don't support mixed placement within one vma.
+ */
+ if (migrate.cpages != npages)
+ drm_warn(&tile->xe->drm, "Partial migration for range [%lx - %lx], range is %ld pages, migrate only %ld pages\n",
+ start, end, npages, migrate.cpages);
+
+ /* Migrate page by page for now.
+ * Since both the source and destination pages can be physically
+ * non-contiguous, there is no good way to migrate multiple pages
+ * per blitter command.
+ */
+ for (i = 0; i < npages; i++) {
+ src_page = migrate_pfn_to_page(migrate.src[i]);
+ if (unlikely(!src_page || !(migrate.src[i] & MIGRATE_PFN_MIGRATE)))
+ goto free_dst_page;
+
+ xe_assert(tile->xe, !is_zone_device_page(src_page));
+ src_dma_addr[i] = dma_map_page(dev, src_page, 0, PAGE_SIZE, DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev, src_dma_addr[i]))) {
+ drm_warn(&tile->xe->drm, "dma map error for host pfn %lx\n", migrate.src[i]);
+ goto free_dst_page;
+ }
+ dst_dpa = xe_mem_region_pfn_to_dpa(mr, migrate.dst[i]);
+ fence = xe_migrate_pa(tile->migrate, src_dma_addr[i], false,
+ dst_dpa, true, PAGE_SIZE);
+ if (IS_ERR(fence)) {
+ drm_warn(&tile->xe->drm, "migrate host page (pfn: %lx) to vram failed\n",
+ migrate.src[i]);
+ /* Migration is best effort. Even if we fail here, we continue. */
+ goto free_dst_page;
+ }
+ /* FIXME: Use the first migration's out-fence as the second
+ * migration's in-fence, and so on. Only wait on the out-fence of
+ * the last migration?
+ */
+ dma_fence_wait(fence, false);
+ dma_fence_put(fence);
+free_dst_page:
+ xe_devm_page_free(pfn_to_page(migrate.dst[i]));
+ }
+
+ /* Skip entries that were never DMA mapped (still zero-filled) */
+ for (i = 0; i < npages; i++)
+ if (src_dma_addr[i] && !dma_mapping_error(dev, src_dma_addr[i]))
+ dma_unmap_page(dev, src_dma_addr[i], PAGE_SIZE, DMA_TO_DEVICE);
+
+ migrate_vma_pages(&migrate);
+ migrate_vma_finalize(&migrate);
+free_dst_pages:
+ if (ret)
+ xe_devm_free_blocks(&blocks);
+kfree_buf:
+ kvfree(buf);
+ return ret;
+}
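On the in-loop FIXME about fences: chaining would mean each copy takes
the previous out-fence as its input fence, with a single wait at the
end instead of one wait per page. A standalone sketch of that pattern,
where a "fence" is just an integer id and `submit_copy`/`fence_wait`
are stand-ins for xe_migrate_pa()/dma_fence_wait(), not real driver
API:

```c
#include <stddef.h>

static int wait_count;	/* how many times we blocked on a fence */

/* Stand-in for a fence-producing copy submission: the returned fence
 * is ordered after in_fence, modeled as a simple increment. */
static int submit_copy(int in_fence)
{
	return in_fence + 1;
}

/* Stand-in for dma_fence_wait(); only counts how often we block. */
static void fence_wait(int fence)
{
	(void)fence;
	wait_count++;
}

/* Chain the per-page copies and wait once, on the final fence only. */
static int migrate_chained(size_t npages)
{
	int fence = 0;
	size_t i;

	for (i = 0; i < npages; i++)
		fence = submit_copy(fence);
	fence_wait(fence);
	return fence;
}
```

Compared with the patch's loop, which waits on every per-page fence,
this keeps the copy queue busy and blocks the CPU exactly once.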
--
2.26.3