From: Oak Zeng <oak.zeng@intel.com>
To: intel-xe@lists.freedesktop.org
Subject: [CI 4/8] drm/xe: Introduce a helper to get dpa from pfn
Date: Wed, 20 Mar 2024 14:45:38 -0400
Message-ID: <20240320184542.2239076-4-oak.zeng@intel.com>
In-Reply-To: <20240320184542.2239076-1-oak.zeng@intel.com>
Since we now create a struct page backing for each vram page,
each vram page also has a pfn, just like system memory.
This allows us to calculate a device physical address from a pfn.
v1: move the function to xe_svm.h (Matt)
s/vram_pfn_to_dpa/xe_mem_region_pfn_to_dpa (Matt)
add kernel document for the helper (Thomas)
Signed-off-by: Oak Zeng <oak.zeng@intel.com>
---
drivers/gpu/drm/xe/xe_svm.h | 27 +++++++++++++++++++++++++--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index e944971cfc6d..8a34429eb674 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -6,8 +6,31 @@
#ifndef __XE_SVM_H
#define __XE_SVM_H
-struct xe_tile;
-struct xe_mem_region;
+#include "xe_device_types.h"
+#include "xe_device.h"
+#include "xe_assert.h"
+
+/**
+ * xe_mem_region_pfn_to_dpa() - Calculate page's dpa from pfn
+ *
+ * @mr: The memory region that page resides in
+ * @pfn: page frame number of the page
+ *
+ * Returns: the device physical address of the page
+ */
+static inline u64 xe_mem_region_pfn_to_dpa(struct xe_mem_region *mr, u64 pfn)
+{
+ u64 dpa;
+ struct xe_tile *tile = xe_mem_region_to_tile(mr);
+ struct xe_device *xe = tile_to_xe(tile);
+ u64 offset;
+
+ xe_assert(xe, (pfn << PAGE_SHIFT) >= mr->hpa_base);
+ offset = (pfn << PAGE_SHIFT) - mr->hpa_base;
+ dpa = mr->dpa_base + offset;
+
+ return dpa;
+}
int xe_devm_add(struct xe_tile *tile, struct xe_mem_region *mr);
void xe_devm_remove(struct xe_tile *tile, struct xe_mem_region *mr);
--
2.26.3