From: Oak Zeng <oak.zeng@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: thomas.hellstrom@intel.com, matthew.brost@intel.com,
brian.welty@intel.com, himal.prasad.ghimiray@intel.com
Subject: [PATCH 7/8] drm/xe: Introduce a helper to free sg table
Date: Mon, 18 Mar 2024 22:55:10 -0400
Message-ID: <20240319025511.1598354-8-oak.zeng@intel.com>
In-Reply-To: <20240319025511.1598354-1-oak.zeng@intel.com>
Introduce a xe_userptr_free_sg helper to dma-unmap all the
addresses in a userptr's sg table and free the sg table itself.
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
Suggested-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_hmm.c | 59 ++++++++++++++++++++++++++++++++++---
drivers/gpu/drm/xe/xe_hmm.h | 15 ++++++++++
2 files changed, 70 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c
index 305e3f2e659b..98d85b615b53 100644
--- a/drivers/gpu/drm/xe/xe_hmm.c
+++ b/drivers/gpu/drm/xe/xe_hmm.c
@@ -3,6 +3,7 @@
* Copyright © 2024 Intel Corporation
*/
+#include <linux/scatterlist.h>
#include <linux/mmu_notifier.h>
#include <linux/dma-mapping.h>
#include <linux/memremap.h>
@@ -13,6 +14,16 @@
#include "xe_svm.h"
#include "xe_vm.h"
+static inline unsigned long append_vram_bit_to_addr(unsigned long addr)
+{
+ return (addr | ADDR_VRAM_BIT);
+}
+
+static inline bool address_is_vram(unsigned long addr)
+{
+ return (addr & ADDR_VRAM_BIT);
+}
+
static inline u64 npages_in_range(unsigned long start, unsigned long end)
{
return ((end - 1) >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1;
@@ -55,9 +66,11 @@ static void xe_mark_range_accessed(struct hmm_range *range, bool write)
* for system pages. If write we map it bi-directional; otherwise
* DMA_TO_DEVICE
*
- * All the contiguous pfns will be collapsed into one entry in
- * the scatter gather table. This is for the convenience of
- * later on operations to bind address range to GPU page table.
+ * If the pfns are backed by vram, all the contiguous pfns will be
+ * collapsed into one entry in the scatter gather table. This is
+ * for the convenience of later operations that bind the address
+ * range to the GPU page table. pfns which are backed by system
+ * memory are not collapsed.
*
* The dma_address in the sg table will later be used by GPU to
* access memory. So if the memory is system memory, we need to
@@ -97,12 +110,14 @@ static int build_sg(struct xe_device *xe, struct hmm_range *range,
if (is_device_private_page(page)) {
mr = xe_page_to_mem_region(page);
addr = xe_mem_region_pfn_to_dpa(mr, range->hmm_pfns[i]);
+ addr = append_vram_bit_to_addr(addr);
} else {
addr = dma_map_page(dev, page, 0, PAGE_SIZE,
write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE);
}
- if (sg && (addr == (sg_dma_address(sg) + sg->length))) {
+ if (sg && is_device_private_page(page) &&
+ (addr == (sg_dma_address(sg) + sg->length))) {
sg->length += PAGE_SIZE;
sg_dma_len(sg) += PAGE_SIZE;
continue;
@@ -119,6 +134,39 @@ static int build_sg(struct xe_device *xe, struct hmm_range *range,
return 0;
}
+/**
+ * xe_userptr_free_sg() - Free the scatter gather table of userptr
+ *
+ * @uvma: the userptr vma which holds the scatter gather table
+ *
+ * xe_userptr_populate_range() allocates storage for the userptr's
+ * sg table. This helper frees that sg table and dma-unmaps the
+ * addresses stored in it.
+ */
+void xe_userptr_free_sg(struct xe_userptr_vma *uvma)
+{
+ struct xe_userptr *userptr = &uvma->userptr;
+ struct xe_vma *vma = &uvma->vma;
+ bool write = !xe_vma_read_only(vma);
+ struct xe_vm *vm = xe_vma_vm(vma);
+ struct xe_device *xe = vm->xe;
+ struct device *dev = xe->drm.dev;
+ struct scatterlist *sg;
+ unsigned long addr;
+ int i;
+
+ xe_assert(xe, userptr->sg);
+ for_each_sgtable_sg(userptr->sg, sg, i) {
+ addr = sg_dma_address(sg);
+ if (!address_is_vram(addr))
+ dma_unmap_page(dev, addr, PAGE_SIZE,
+ write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE);
+ }
+
+ sg_free_table(userptr->sg);
+ userptr->sg = NULL;
+}
+
/**
* xe_userptr_populate_range() - Populate physical pages of a virtual
* address range
@@ -163,6 +211,9 @@ int xe_userptr_populate_range(struct xe_userptr_vma *uvma)
if (vma->gpuva.flags & XE_VMA_DESTROYED)
return 0;
+ if (userptr->sg)
+ xe_userptr_free_sg(uvma);
+
npages = npages_in_range(start, end);
pfns = kvmalloc_array(npages, sizeof(*pfns), GFP_KERNEL);
if (unlikely(!pfns))
diff --git a/drivers/gpu/drm/xe/xe_hmm.h b/drivers/gpu/drm/xe/xe_hmm.h
index fa5ddc11f10b..b1e61d48a1cb 100644
--- a/drivers/gpu/drm/xe/xe_hmm.h
+++ b/drivers/gpu/drm/xe/xe_hmm.h
@@ -7,4 +7,19 @@
struct xe_userptr_vma;
+/**
+ * This bit is used while generating the userptr
+ * sg table. If a page is in vram, we set this
+ * bit in the dpa address. This information is
+ * used later to tell whether an address refers
+ * to vram or to system memory.
+ */
+#define ADDR_VRAM_BIT (1 << 0)
+
int xe_userptr_populate_range(struct xe_userptr_vma *uvma);
+void xe_userptr_free_sg(struct xe_userptr_vma *uvma);
+
+static inline unsigned long xe_remove_vram_bit_from_addr(unsigned long addr)
+{
+ return (addr & ~ADDR_VRAM_BIT);
+}
--
2.26.3