* [PATCH v3 0/4] Preparation to .map_page and .unmap_page removal
@ 2025-09-16 7:32 Leon Romanovsky
2025-09-16 7:32 ` [PATCH v3 1/4] dma-mapping: prepare dma_map_ops to conversion to physical address Leon Romanovsky
` (3 more replies)
0 siblings, 4 replies; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-16 7:32 UTC (permalink / raw)
To: Marek Szyprowski
Cc: Jason Gunthorpe, iommu, linux-arm-kernel, linux-kernel,
Russell King
Changelog:
v3:
* Rewrote the series to allow combining the .map_resource and .map_page
paths into one flow.
* Added two new patches to convert and remove .map_resource.
v2: https://lore.kernel.org/all/cover.1752734252.git.leon@kernel.org/
* Added a default "else" branch for the (impossible) case where neither
the map_phys nor the map_page callback is set.
v1: https://lore.kernel.org/all/cover.1753003879.git.leon@kernel.org
* Used "else if" instead of "if".
v0: https://lore.kernel.org/all/cover.1752734252.git.leon@kernel.org
---------------------------------------------------------------------
This is a follow-up to the "dma-mapping: migrate to physical address-based API"
series: https://lore.kernel.org/all/cover.1757423202.git.leonro@nvidia.com
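For orientation, the caller-visible shape of the physical-address API that
this series builds on looks roughly as follows. This is an illustrative
sketch only, not code from the series; "mydev", "buf_page", "offset" and
"len" are made-up names:

	/*
	 * A driver that used to call dma_map_page() switches to the
	 * physical-address helpers.
	 */
	phys_addr_t phys = page_to_phys(buf_page) + offset;
	dma_addr_t dma;

	dma = dma_map_phys(mydev, phys, len, DMA_TO_DEVICE, 0);
	if (dma_mapping_error(mydev, dma))
		return -ENOMEM;

	/* ... device accesses the buffer ... */

	dma_unmap_phys(mydev, dma, len, DMA_TO_DEVICE, 0);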
Thanks
Leon Romanovsky (4):
dma-mapping: prepare dma_map_ops to conversion to physical address
dma-mapping: convert dummy ops to physical address mapping
ARM: dma-mapping: Switch to physical address mapping callbacks
dma-mapping: remove unused mapping resource callbacks
arch/arm/mm/dma-mapping.c | 98 +++++++++----------------------------
include/linux/dma-map-ops.h | 13 ++---
kernel/dma/dummy.c | 13 +++--
kernel/dma/mapping.c | 20 +++-----
4 files changed, 43 insertions(+), 101 deletions(-)
--
2.51.0
* [PATCH v3 1/4] dma-mapping: prepare dma_map_ops to conversion to physical address
2025-09-16 7:32 [PATCH v3 0/4] Preparation to .map_page and .unmap_page removal Leon Romanovsky
@ 2025-09-16 7:32 ` Leon Romanovsky
2025-09-16 13:52 ` Jason Gunthorpe
2025-09-16 7:32 ` [PATCH v3 2/4] dma-mapping: convert dummy ops to physical address mapping Leon Romanovsky
` (2 subsequent siblings)
3 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-16 7:32 UTC (permalink / raw)
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, iommu, linux-arm-kernel,
linux-kernel, Russell King
From: Leon Romanovsky <leonro@nvidia.com>
Add new .map_phys() and .unmap_phys() callbacks to dma_map_ops as a
preparation to replace .map_page() and .unmap_page() respectively.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
include/linux/dma-map-ops.h | 7 +++++++
kernel/dma/mapping.c | 4 ++++
2 files changed, 11 insertions(+)
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 71f5b30254159..25603cb273769 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -37,6 +37,13 @@ struct dma_map_ops {
void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir,
unsigned long attrs);
+
+ dma_addr_t (*map_phys)(struct device *dev, phys_addr_t phys,
+ size_t size, enum dma_data_direction dir,
+ unsigned long attrs);
+ void (*unmap_phys)(struct device *dev, dma_addr_t dma_handle,
+ size_t size, enum dma_data_direction dir,
+ unsigned long attrs);
/*
* map_sg should return a negative error code on error. See
* dma_map_sgtable() for a list of appropriate error codes
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index fe7472f13b106..4080aebe5debb 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -169,6 +169,8 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
else if (use_dma_iommu(dev))
addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
+ else if (ops->map_phys)
+ addr = ops->map_phys(dev, phys, size, dir, attrs);
else if (is_mmio) {
if (!ops->map_resource)
return DMA_MAPPING_ERROR;
@@ -223,6 +225,8 @@ void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
dma_direct_unmap_phys(dev, addr, size, dir, attrs);
else if (use_dma_iommu(dev))
iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
+ else if (ops->unmap_phys)
+ ops->unmap_phys(dev, addr, size, dir, attrs);
else if (is_mmio) {
if (ops->unmap_resource)
ops->unmap_resource(dev, addr, size, dir, attrs);
--
2.51.0
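To illustrate where this is heading, a dma_map_ops provider converted to the
new hooks would look roughly like the sketch below. The "mydev_*" names are
hypothetical, and mydev_hw_map()/mydev_hw_unmap() stand in for whatever
hardware-specific helpers such a driver already has:

	static dma_addr_t mydev_map_phys(struct device *dev, phys_addr_t phys,
			size_t size, enum dma_data_direction dir,
			unsigned long attrs)
	{
		/* Assumed driver-internal helper, not a real API. */
		return mydev_hw_map(dev, phys, size, dir, attrs);
	}

	static void mydev_unmap_phys(struct device *dev, dma_addr_t dma_handle,
			size_t size, enum dma_data_direction dir,
			unsigned long attrs)
	{
		mydev_hw_unmap(dev, dma_handle, size, dir, attrs);
	}

	static const struct dma_map_ops mydev_dma_ops = {
		.map_phys	= mydev_map_phys,
		.unmap_phys	= mydev_unmap_phys,
	};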
* [PATCH v3 2/4] dma-mapping: convert dummy ops to physical address mapping
2025-09-16 7:32 [PATCH v3 0/4] Preparation to .map_page and .unmap_page removal Leon Romanovsky
2025-09-16 7:32 ` [PATCH v3 1/4] dma-mapping: prepare dma_map_ops to conversion to physical address Leon Romanovsky
@ 2025-09-16 7:32 ` Leon Romanovsky
2025-09-16 13:53 ` Jason Gunthorpe
2025-09-16 7:32 ` [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks Leon Romanovsky
2025-09-16 7:32 ` [PATCH v3 4/4] dma-mapping: remove unused mapping resource callbacks Leon Romanovsky
3 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-16 7:32 UTC (permalink / raw)
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, iommu, linux-arm-kernel,
linux-kernel, Russell King
From: Leon Romanovsky <leonro@nvidia.com>
Change the dma_dummy_map_page and dma_dummy_unmap_page routines
to accept a physical address and rename them accordingly.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
kernel/dma/dummy.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/kernel/dma/dummy.c b/kernel/dma/dummy.c
index 92de80e5b057e..16a51736a2a39 100644
--- a/kernel/dma/dummy.c
+++ b/kernel/dma/dummy.c
@@ -11,17 +11,16 @@ static int dma_dummy_mmap(struct device *dev, struct vm_area_struct *vma,
return -ENXIO;
}
-static dma_addr_t dma_dummy_map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size, enum dma_data_direction dir,
- unsigned long attrs)
+static dma_addr_t dma_dummy_map_phys(struct device *dev, phys_addr_t phys,
+ size_t size, enum dma_data_direction dir, unsigned long attrs)
{
return DMA_MAPPING_ERROR;
}
-static void dma_dummy_unmap_page(struct device *dev, dma_addr_t dma_handle,
+static void dma_dummy_unmap_phys(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
/*
- * Dummy ops doesn't support map_page, so unmap_page should never be
+ * Dummy ops doesn't support map_phys, so unmap_phys should never be
* called.
*/
WARN_ON_ONCE(true);
@@ -51,8 +50,8 @@ static int dma_dummy_supported(struct device *hwdev, u64 mask)
const struct dma_map_ops dma_dummy_ops = {
.mmap = dma_dummy_mmap,
- .map_page = dma_dummy_map_page,
- .unmap_page = dma_dummy_unmap_page,
+ .map_phys = dma_dummy_map_phys,
+ .unmap_phys = dma_dummy_unmap_phys,
.map_sg = dma_dummy_map_sg,
.unmap_sg = dma_dummy_unmap_sg,
.dma_supported = dma_dummy_supported,
--
2.51.0
* [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
2025-09-16 7:32 [PATCH v3 0/4] Preparation to .map_page and .unmap_page removal Leon Romanovsky
2025-09-16 7:32 ` [PATCH v3 1/4] dma-mapping: prepare dma_map_ops to conversion to physical address Leon Romanovsky
2025-09-16 7:32 ` [PATCH v3 2/4] dma-mapping: convert dummy ops to physical address mapping Leon Romanovsky
@ 2025-09-16 7:32 ` Leon Romanovsky
2025-09-16 18:46 ` Jason Gunthorpe
2025-09-16 7:32 ` [PATCH v3 4/4] dma-mapping: remove unused mapping resource callbacks Leon Romanovsky
3 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-16 7:32 UTC (permalink / raw)
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, iommu, linux-arm-kernel,
linux-kernel, Russell King
From: Leon Romanovsky <leonro@nvidia.com>
Combine the resource and page mapping routines into one function that
handles both flows in the same manner. This conversion allows us to remove
the .map_resource/.unmap_resource callbacks completely.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
arch/arm/mm/dma-mapping.c | 98 +++++++++------------------------------
1 file changed, 22 insertions(+), 76 deletions(-)
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 88c2d68a69c9e..e0d65414b39e1 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -737,6 +737,9 @@ static int __dma_info_to_prot(enum dma_data_direction dir, unsigned long attrs)
if (attrs & DMA_ATTR_PRIVILEGED)
prot |= IOMMU_PRIV;
+ if (attrs & DMA_ATTR_MMIO)
+ prot |= IOMMU_MMIO;
+
switch (dir) {
case DMA_BIDIRECTIONAL:
return prot | IOMMU_READ | IOMMU_WRITE;
@@ -1356,25 +1359,27 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
}
/**
- * arm_iommu_map_page
+ * arm_iommu_map_phys
* @dev: valid struct device pointer
- * @page: page that buffer resides in
- * @offset: offset into page for start of buffer
+ * @phys: physical address that buffer resides in
* @size: size of buffer to map
* @dir: DMA transfer direction
+ * @attrs: DMA mapping attributes
*
* IOMMU aware version of arm_dma_map_page()
*/
-static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size, enum dma_data_direction dir,
- unsigned long attrs)
+static dma_addr_t arm_iommu_map_phys(struct device *dev, phys_addr_t phys,
+ size_t size, enum dma_data_direction dir, unsigned long attrs)
{
struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
+ unsigned long offset = offset_in_page(phys);
+ int len = PAGE_ALIGN(size + offset);
+ phys_addr_t addr = phys & PAGE_MASK;
dma_addr_t dma_addr;
- int ret, prot, len = PAGE_ALIGN(size + offset);
+ int ret, prot;
- if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
- __dma_page_cpu_to_dev(page, offset, size, dir);
+ if (!dev->dma_coherent &&
+ !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
+ __dma_page_cpu_to_dev(phys_to_page(phys), offset, size, dir);
dma_addr = __alloc_iova(mapping, len);
if (dma_addr == DMA_MAPPING_ERROR)
@@ -1382,8 +1387,7 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
prot = __dma_info_to_prot(dir, attrs);
- ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len,
- prot, GFP_KERNEL);
+ ret = iommu_map(mapping->domain, dma_addr, addr, len, prot, GFP_KERNEL);
if (ret < 0)
goto fail;
@@ -1399,10 +1403,11 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
* @handle: DMA address of buffer
* @size: size of buffer (same as passed to dma_map_page)
* @dir: DMA transfer direction (same as passed to dma_map_page)
+ * @attrs: DMA mapping attributes
*
- * IOMMU aware version of arm_dma_unmap_page()
+ * IOMMU aware version of arm_dma_unmap_phys()
*/
-static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
+static void arm_iommu_unmap_phys(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
@@ -1414,7 +1419,8 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
if (!iova)
return;
- if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
+ if (!dev->dma_coherent &&
+ !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
__dma_page_dev_to_cpu(page, offset, size, dir);
}
@@ -1423,63 +1429,6 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
__free_iova(mapping, iova, len);
}
-/**
- * arm_iommu_map_resource - map a device resource for DMA
- * @dev: valid struct device pointer
- * @phys_addr: physical address of resource
- * @size: size of resource to map
- * @dir: DMA transfer direction
- */
-static dma_addr_t arm_iommu_map_resource(struct device *dev,
- phys_addr_t phys_addr, size_t size,
- enum dma_data_direction dir, unsigned long attrs)
-{
- struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
- dma_addr_t dma_addr;
- int ret, prot;
- phys_addr_t addr = phys_addr & PAGE_MASK;
- unsigned int offset = phys_addr & ~PAGE_MASK;
- size_t len = PAGE_ALIGN(size + offset);
-
- dma_addr = __alloc_iova(mapping, len);
- if (dma_addr == DMA_MAPPING_ERROR)
- return dma_addr;
-
- prot = __dma_info_to_prot(dir, attrs) | IOMMU_MMIO;
-
- ret = iommu_map(mapping->domain, dma_addr, addr, len, prot, GFP_KERNEL);
- if (ret < 0)
- goto fail;
-
- return dma_addr + offset;
-fail:
- __free_iova(mapping, dma_addr, len);
- return DMA_MAPPING_ERROR;
-}
-
-/**
- * arm_iommu_unmap_resource - unmap a device DMA resource
- * @dev: valid struct device pointer
- * @dma_handle: DMA address to resource
- * @size: size of resource to map
- * @dir: DMA transfer direction
- */
-static void arm_iommu_unmap_resource(struct device *dev, dma_addr_t dma_handle,
- size_t size, enum dma_data_direction dir,
- unsigned long attrs)
-{
- struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
- dma_addr_t iova = dma_handle & PAGE_MASK;
- unsigned int offset = dma_handle & ~PAGE_MASK;
- size_t len = PAGE_ALIGN(size + offset);
-
- if (!iova)
- return;
-
- iommu_unmap(mapping->domain, iova, len);
- __free_iova(mapping, iova, len);
-}
-
static void arm_iommu_sync_single_for_cpu(struct device *dev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
@@ -1516,8 +1465,8 @@ static const struct dma_map_ops iommu_ops = {
.mmap = arm_iommu_mmap_attrs,
.get_sgtable = arm_iommu_get_sgtable,
- .map_page = arm_iommu_map_page,
- .unmap_page = arm_iommu_unmap_page,
+ .map_phys = arm_iommu_map_phys,
+ .unmap_phys = arm_iommu_unmap_phys,
.sync_single_for_cpu = arm_iommu_sync_single_for_cpu,
.sync_single_for_device = arm_iommu_sync_single_for_device,
@@ -1525,9 +1474,6 @@ static const struct dma_map_ops iommu_ops = {
.unmap_sg = arm_iommu_unmap_sg,
.sync_sg_for_cpu = arm_iommu_sync_sg_for_cpu,
.sync_sg_for_device = arm_iommu_sync_sg_for_device,
-
- .map_resource = arm_iommu_map_resource,
- .unmap_resource = arm_iommu_unmap_resource,
};
/**
--
2.51.0
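With the combined callback, the only difference between the former page and
resource flows is the DMA_ATTR_MMIO attribute. A sketch of the two call
sites as they would reach dma_map_phys() ("bar_phys" is a made-up MMIO
address):

	/* RAM: cache maintenance runs, a normal IOMMU mapping is created */
	dma = dma_map_phys(dev, page_to_phys(page), size, dir, 0);

	/* MMIO/resource: CPU sync is skipped, the ARM IOMMU path sets IOMMU_MMIO */
	dma = dma_map_phys(dev, bar_phys, size, dir, DMA_ATTR_MMIO);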
* [PATCH v3 4/4] dma-mapping: remove unused mapping resource callbacks
2025-09-16 7:32 [PATCH v3 0/4] Preparation to .map_page and .unmap_page removal Leon Romanovsky
` (2 preceding siblings ...)
2025-09-16 7:32 ` [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks Leon Romanovsky
@ 2025-09-16 7:32 ` Leon Romanovsky
2025-09-16 12:19 ` Leon Romanovsky
2025-09-16 18:49 ` Jason Gunthorpe
3 siblings, 2 replies; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-16 7:32 UTC (permalink / raw)
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, iommu, linux-arm-kernel,
linux-kernel, Russell King
From: Leon Romanovsky <leonro@nvidia.com>
After the ARM conversion to use physical addresses for the mapping,
there are no in-kernel users of the map_resource/unmap_resource callbacks,
so remove them.
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
---
include/linux/dma-map-ops.h | 6 ------
kernel/dma/mapping.c | 16 ++++------------
2 files changed, 4 insertions(+), 18 deletions(-)
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 25603cb273769..a2ec1566aa270 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -53,12 +53,6 @@ struct dma_map_ops {
enum dma_data_direction dir, unsigned long attrs);
void (*unmap_sg)(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction dir, unsigned long attrs);
- dma_addr_t (*map_resource)(struct device *dev, phys_addr_t phys_addr,
- size_t size, enum dma_data_direction dir,
- unsigned long attrs);
- void (*unmap_resource)(struct device *dev, dma_addr_t dma_handle,
- size_t size, enum dma_data_direction dir,
- unsigned long attrs);
void (*sync_single_for_cpu)(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir);
void (*sync_single_for_device)(struct device *dev,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 4080aebe5debb..32a85bfdf873a 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -157,7 +157,7 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
{
const struct dma_map_ops *ops = get_dma_ops(dev);
bool is_mmio = attrs & DMA_ATTR_MMIO;
- dma_addr_t addr;
+ dma_addr_t addr = DMA_MAPPING_ERROR;
BUG_ON(!valid_dma_direction(dir));
@@ -171,18 +171,13 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
else if (ops->map_phys)
addr = ops->map_phys(dev, phys, size, dir, attrs);
- else if (is_mmio) {
- if (!ops->map_resource)
- return DMA_MAPPING_ERROR;
-
- addr = ops->map_resource(dev, phys, size, dir, attrs);
- } else {
+ else if (!is_mmio && ops->map_page) {
struct page *page = phys_to_page(phys);
size_t offset = offset_in_page(phys);
/*
* The dma_ops API contract for ops->map_page() requires
- * kmappable memory, while ops->map_resource() does not.
+ * kmappable memory.
*/
addr = ops->map_page(dev, page, offset, size, dir, attrs);
}
@@ -227,10 +222,7 @@ void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
else if (ops->unmap_phys)
ops->unmap_phys(dev, addr, size, dir, attrs);
- else if (is_mmio) {
- if (ops->unmap_resource)
- ops->unmap_resource(dev, addr, size, dir, attrs);
- } else
+ else
ops->unmap_page(dev, addr, size, dir, attrs);
trace_dma_unmap_phys(dev, addr, size, dir, attrs);
debug_dma_unmap_phys(dev, addr, size, dir);
--
2.51.0
* Re: [PATCH v3 4/4] dma-mapping: remove unused mapping resource callbacks
2025-09-16 7:32 ` [PATCH v3 4/4] dma-mapping: remove unused mapping resource callbacks Leon Romanovsky
@ 2025-09-16 12:19 ` Leon Romanovsky
2025-09-16 18:49 ` Jason Gunthorpe
1 sibling, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-16 12:19 UTC (permalink / raw)
To: Marek Szyprowski
Cc: Jason Gunthorpe, iommu, linux-arm-kernel, linux-kernel,
Russell King
On Tue, Sep 16, 2025 at 10:32:07AM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> After the ARM conversion to use physical addresses for the mapping,
> there are no in-kernel users of the map_resource/unmap_resource callbacks,
> so remove them.
>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
> include/linux/dma-map-ops.h | 6 ------
> kernel/dma/mapping.c | 16 ++++------------
> 2 files changed, 4 insertions(+), 18 deletions(-)
This patch is premature; I missed another map_resource user in
drivers/xen/swiotlb-xen.c.
Thanks
* Re: [PATCH v3 1/4] dma-mapping: prepare dma_map_ops to conversion to physical address
2025-09-16 7:32 ` [PATCH v3 1/4] dma-mapping: prepare dma_map_ops to conversion to physical address Leon Romanovsky
@ 2025-09-16 13:52 ` Jason Gunthorpe
0 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2025-09-16 13:52 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Marek Szyprowski, Leon Romanovsky, iommu, linux-arm-kernel,
linux-kernel, Russell King
On Tue, Sep 16, 2025 at 10:32:04AM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Add new .map_phys() and .unmap_phys() callbacks to dma_map_ops as a
> preparation to replace .map_page() and .unmap_page() respectively.
>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
> include/linux/dma-map-ops.h | 7 +++++++
> kernel/dma/mapping.c | 4 ++++
> 2 files changed, 11 insertions(+)
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Jason
* Re: [PATCH v3 2/4] dma-mapping: convert dummy ops to physical address mapping
2025-09-16 7:32 ` [PATCH v3 2/4] dma-mapping: convert dummy ops to physical address mapping Leon Romanovsky
@ 2025-09-16 13:53 ` Jason Gunthorpe
0 siblings, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2025-09-16 13:53 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Marek Szyprowski, Leon Romanovsky, iommu, linux-arm-kernel,
linux-kernel, Russell King
On Tue, Sep 16, 2025 at 10:32:05AM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> Change the dma_dummy_map_page and dma_dummy_unmap_page routines
> to accept a physical address and rename them accordingly.
>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
> kernel/dma/dummy.c | 13 ++++++-------
> 1 file changed, 6 insertions(+), 7 deletions(-)
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Jason
* Re: [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
2025-09-16 7:32 ` [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks Leon Romanovsky
@ 2025-09-16 18:46 ` Jason Gunthorpe
2025-09-17 10:36 ` Leon Romanovsky
0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2025-09-16 18:46 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Marek Szyprowski, Leon Romanovsky, iommu, linux-arm-kernel,
linux-kernel, Russell King
On Tue, Sep 16, 2025 at 10:32:06AM +0300, Leon Romanovsky wrote:
> + if (!dev->dma_coherent &&
> + !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
> + __dma_page_cpu_to_dev(phys_to_page(phys), offset, size, dir);
I'd keep going and get rid of the page here too, maybe as a second
patch in this series:
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 88c2d68a69c9ee..a84d12cd0ba4a9 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -624,16 +624,14 @@ static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
kfree(buf);
}
-static void dma_cache_maint_page(struct page *page, unsigned long offset,
+static void dma_cache_maint_page(phys_addr_t paddr,
size_t size, enum dma_data_direction dir,
void (*op)(const void *, size_t, int))
{
- unsigned long pfn;
+ unsigned long pfn = paddr / PAGE_SIZE;
+ unsigned int offset = paddr % PAGE_SIZE;
size_t left = size;
- pfn = page_to_pfn(page) + offset / PAGE_SIZE;
- offset %= PAGE_SIZE;
-
/*
* A single sg entry may refer to multiple physically contiguous
* pages. But we still need to process highmem pages individually.
@@ -644,17 +642,17 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
size_t len = left;
void *vaddr;
- page = pfn_to_page(pfn);
-
- if (PageHighMem(page)) {
+ if (PhysHighMem(pfn << PAGE_SHIFT)) {
if (len + offset > PAGE_SIZE)
len = PAGE_SIZE - offset;
if (cache_is_vipt_nonaliasing()) {
- vaddr = kmap_atomic(page);
+ vaddr = kmap_atomic_pfn(pfn);
op(vaddr + offset, len, dir);
kunmap_atomic(vaddr);
} else {
+ struct page *page = pfn_to_page(pfn);
+
vaddr = kmap_high_get(page);
if (vaddr) {
op(vaddr + offset, len, dir);
@@ -662,7 +660,7 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
}
}
} else {
- vaddr = page_address(page) + offset;
+ vaddr = phys_to_virt(pfn) + offset;
op(vaddr, len, dir);
}
offset = 0;
@@ -676,14 +674,11 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
* Note: Drivers should NOT use this function directly.
* Use the driver DMA support - see dma-mapping.h (dma_sync_*)
*/
-static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
+static void __dma_page_cpu_to_dev(phys_addr_t paddr,
size_t size, enum dma_data_direction dir)
{
- phys_addr_t paddr;
+ dma_cache_maint_page(paddr, size, dir, dmac_map_area);
- dma_cache_maint_page(page, off, size, dir, dmac_map_area);
-
- paddr = page_to_phys(page) + off;
if (dir == DMA_FROM_DEVICE) {
outer_inv_range(paddr, paddr + size);
} else {
> + if (!dev->dma_coherent &&
> + !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
> page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
> __dma_page_dev_to_cpu(page, offset, size, dir);
Same treatment here..
Looks Ok though, I didn't notice any pitfalls
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Jason
* Re: [PATCH v3 4/4] dma-mapping: remove unused mapping resource callbacks
2025-09-16 7:32 ` [PATCH v3 4/4] dma-mapping: remove unused mapping resource callbacks Leon Romanovsky
2025-09-16 12:19 ` Leon Romanovsky
@ 2025-09-16 18:49 ` Jason Gunthorpe
1 sibling, 0 replies; 17+ messages in thread
From: Jason Gunthorpe @ 2025-09-16 18:49 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Marek Szyprowski, Leon Romanovsky, iommu, linux-arm-kernel,
linux-kernel, Russell King
On Tue, Sep 16, 2025 at 10:32:07AM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@nvidia.com>
>
> After the ARM conversion to use physical addresses for the mapping,
> there are no in-kernel users of the map_resource/unmap_resource callbacks,
> so remove them.
>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> ---
> include/linux/dma-map-ops.h | 6 ------
> kernel/dma/mapping.c | 16 ++++------------
> 2 files changed, 4 insertions(+), 18 deletions(-)
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Looks like xen_swiotlb_dma_ops will be easy enough to fix..
Jason
* Re: [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
2025-09-16 18:46 ` Jason Gunthorpe
@ 2025-09-17 10:36 ` Leon Romanovsky
2025-09-17 11:32 ` Jason Gunthorpe
0 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-17 10:36 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Marek Szyprowski, iommu, linux-arm-kernel, linux-kernel,
Russell King
On Tue, Sep 16, 2025 at 03:46:17PM -0300, Jason Gunthorpe wrote:
> On Tue, Sep 16, 2025 at 10:32:06AM +0300, Leon Romanovsky wrote:
> > + if (!dev->dma_coherent &&
> > + !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
> > + __dma_page_cpu_to_dev(phys_to_page(phys), offset, size, dir);
>
> I'd keep going and get rid of the page here too, maybe as a second
> patch in this series:
Thanks, it is always unclear how far to go with cleanups.
* Re: [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
2025-09-17 10:36 ` Leon Romanovsky
@ 2025-09-17 11:32 ` Jason Gunthorpe
2025-09-17 13:41 ` Leon Romanovsky
0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2025-09-17 11:32 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Marek Szyprowski, iommu, linux-arm-kernel, linux-kernel,
Russell King
On Wed, Sep 17, 2025 at 01:36:44PM +0300, Leon Romanovsky wrote:
> On Tue, Sep 16, 2025 at 03:46:17PM -0300, Jason Gunthorpe wrote:
> > On Tue, Sep 16, 2025 at 10:32:06AM +0300, Leon Romanovsky wrote:
> > > + if (!dev->dma_coherent &&
> > > + !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
> > > + __dma_page_cpu_to_dev(phys_to_page(phys), offset, size, dir);
> >
> > I'd keep going and get rid of the page here too, maybe as a second
> > patch in this series:
>
> Thanks, it is always unclear how far to go with cleanups.
IMHO to maximally support what Matthew is working on I'd remove all
the struct page things and prefer the pfn/phys variations from the MM
side.
After this the only thing left is the kmap_high_get(), and I'm not
sure what becomes of WANT_PAGE_VIRTUAL in a memdesc world..
Jason
* Re: [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
2025-09-17 11:32 ` Jason Gunthorpe
@ 2025-09-17 13:41 ` Leon Romanovsky
2025-09-17 13:58 ` Jason Gunthorpe
0 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-17 13:41 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Marek Szyprowski, iommu, linux-arm-kernel, linux-kernel,
Russell King
On Wed, Sep 17, 2025 at 08:32:48AM -0300, Jason Gunthorpe wrote:
> On Wed, Sep 17, 2025 at 01:36:44PM +0300, Leon Romanovsky wrote:
> > On Tue, Sep 16, 2025 at 03:46:17PM -0300, Jason Gunthorpe wrote:
> > > On Tue, Sep 16, 2025 at 10:32:06AM +0300, Leon Romanovsky wrote:
> > > > + if (!dev->dma_coherent &&
> > > > + !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
> > > > + __dma_page_cpu_to_dev(phys_to_page(phys), offset, size, dir);
> > >
> > > I'd keep going and get rid of the page here too, maybe as a second
> > > patch in this series:
> >
> > Thanks, it is always unclear how far to go with cleanups.
>
> IMHO to maximally support what Matthew is working on I'd remove all
> the struct page things and prefer the pfn/phys variations from the MM
> side.
ok, my patches can be found here:
https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=dmabuf-vfio
I converted the "struct page" path in all archs that have .map_page.
Thanks
* Re: [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
2025-09-17 13:41 ` Leon Romanovsky
@ 2025-09-17 13:58 ` Jason Gunthorpe
2025-09-17 18:46 ` Leon Romanovsky
0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2025-09-17 13:58 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Marek Szyprowski, iommu, linux-arm-kernel, linux-kernel,
Russell King
On Wed, Sep 17, 2025 at 04:41:28PM +0300, Leon Romanovsky wrote:
> On Wed, Sep 17, 2025 at 08:32:48AM -0300, Jason Gunthorpe wrote:
> > On Wed, Sep 17, 2025 at 01:36:44PM +0300, Leon Romanovsky wrote:
> > > On Tue, Sep 16, 2025 at 03:46:17PM -0300, Jason Gunthorpe wrote:
> > > > On Tue, Sep 16, 2025 at 10:32:06AM +0300, Leon Romanovsky wrote:
> > > > > + if (!dev->dma_coherent &&
> > > > > + !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
> > > > > + __dma_page_cpu_to_dev(phys_to_page(phys), offset, size, dir);
> > > >
> > > > I'd keep going and get rid of the page here too, maybe as a second
> > > > patch in this series:
> > >
> > > Thanks, it is always unclear how far to go with cleanups.
> >
> > IMHO to maximally support what Matthew is working on I'd remove all
> > the struct page things and prefer the pfn/phys variations from the MM
> > side.
>
> ok, my patches can be found here:
> https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=dmabuf-vfio
>
> I converted the "struct page" path in all archs that have .map_page.
Yeah, that's right, I would split this into a few series
This group is all fairly trivial stuff that didn't use the struct
page or virt address at all:
MIPS/jazzdma
x86
vdpa
xen: swiotlb: (though make a phys_to_xen_pfn() macro)
I'd also drop the ATTR_MMIO checks from x86 and jazzdma
since the code is obviously fine with any phys.
vdpa can support ATTR_MMIO with the same iommu prot MMIO adjustment,
that would be a good additional patch.
Then there are more conversions that don't use struct page or va but
are not so trivial
alpha
parisc
Then the last ones sparc/power are quite complicated and do use
__va/etc.
Jason
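The "iommu prot MMIO adjustment" mentioned above refers to the pattern patch
3 adds to __dma_info_to_prot(). Applied to another iommu-backed .map_phys
implementation it would amount to something like the sketch below (the reply
that follows notes that vdpa's bounce-buffer path complicates the MMIO case):

	int prot = IOMMU_READ | IOMMU_WRITE;	/* derived from dir in practice */

	if (attrs & DMA_ATTR_MMIO)
		prot |= IOMMU_MMIO;

	ret = iommu_map(domain, iova, phys, size, prot, GFP_KERNEL);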
* Re: [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
2025-09-17 13:58 ` Jason Gunthorpe
@ 2025-09-17 18:46 ` Leon Romanovsky
2025-09-17 19:08 ` Jason Gunthorpe
0 siblings, 1 reply; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-17 18:46 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Marek Szyprowski, iommu, linux-arm-kernel, linux-kernel,
Russell King
On Wed, Sep 17, 2025 at 10:58:19AM -0300, Jason Gunthorpe wrote:
> On Wed, Sep 17, 2025 at 04:41:28PM +0300, Leon Romanovsky wrote:
> > On Wed, Sep 17, 2025 at 08:32:48AM -0300, Jason Gunthorpe wrote:
> > > On Wed, Sep 17, 2025 at 01:36:44PM +0300, Leon Romanovsky wrote:
> > > > On Tue, Sep 16, 2025 at 03:46:17PM -0300, Jason Gunthorpe wrote:
> > > > > On Tue, Sep 16, 2025 at 10:32:06AM +0300, Leon Romanovsky wrote:
> > > > > > + if (!dev->dma_coherent &&
> > > > > > + !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
> > > > > > + __dma_page_cpu_to_dev(phys_to_page(phys), offset, size, dir);
> > > > >
> > > > > I'd keep going and get rid of the page here too, maybe as a second
> > > > > patch in this series:
> > > >
> > > > Thanks, it is always unclear how far to go with cleanups.
> > >
> > > IMHO to maximally support what Matthew is working on I'd remove all
> > > the struct page things and prefer the pfn/phys variations from the MM
> > > side.
> >
> > ok, my patches can be found here:
> > https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=dmabuf-vfio
> >
> > I converted "struct page" path from all archs with .map_page.
>
> Yeah, that's right, I would split this into a few series
>
> This group is all fairly trivial stuff that didn't use the struct
> page or virt address at all:
>
> MIPS/jazzdma
> x86
> vdpa
> xen: swiotlb: (though make a phys_to_xen_pfn() macro)
>
> I'd also drop the ATTR_MMIO checks from x86 and jazzdma
> since the code is obviously fine with any phys.
It is not as trivial as it sounds.
Let's take as an example MIPS/jazzdma, should we call to vdma_alloc()
in DMA_ATTR_MMIO case? or directly return "phys" as it is done in other
architectures?
>
> vdpa can support ATTR_MMIO with the same iommu prot MMIO adjustment,
> that would be a good additional patch.
There is a "bounce page" in VDPA; it sounds like swiotlb to me, which
doesn't work with MMIO.
* Re: [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
2025-09-17 18:46 ` Leon Romanovsky
@ 2025-09-17 19:08 ` Jason Gunthorpe
2025-09-17 19:46 ` Leon Romanovsky
0 siblings, 1 reply; 17+ messages in thread
From: Jason Gunthorpe @ 2025-09-17 19:08 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Marek Szyprowski, iommu, linux-arm-kernel, linux-kernel,
Russell King
On Wed, Sep 17, 2025 at 09:46:25PM +0300, Leon Romanovsky wrote:
> Let's take as an example MIPS/jazzdma, should we call to vdma_alloc()
> in DMA_ATTR_MMIO case?
Yes, that is right, this is an iommu, so it has to be programmed.
> or directly return "phys" as it is done in other
> architectures?
Which?
> > vdpa can support ATTR_MMIO with the same iommu prot MMIO adjustment,
> > that would be a good additional patch.
>
> > There is a "bounce page" in VDPA; it sounds like swiotlb to me, which
> > doesn't work with MMIO.
Oh OK, it eventually does call pfn_to_page()... This isn't the case I
thought it was; there is another one that just routes to iommu_map.
Jason
* Re: [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
2025-09-17 19:08 ` Jason Gunthorpe
@ 2025-09-17 19:46 ` Leon Romanovsky
0 siblings, 0 replies; 17+ messages in thread
From: Leon Romanovsky @ 2025-09-17 19:46 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Marek Szyprowski, iommu, linux-arm-kernel, linux-kernel,
Russell King
On Wed, Sep 17, 2025 at 04:08:46PM -0300, Jason Gunthorpe wrote:
> On Wed, Sep 17, 2025 at 09:46:25PM +0300, Leon Romanovsky wrote:
> > Let's take as an example MIPS/jazzdma, should we call to vdma_alloc()
> > in DMA_ATTR_MMIO case?
>
> Yes, that is right, this is an iommu, so it has to be programmed.
>
> > or directly return "phys" as it is done in other
> > architectures?
>
> Which?
XEN does that.
https://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux.git/tree/drivers/xen/swiotlb-xen.c?h=dma-mapping-for-next#n395
static dma_addr_t xen_swiotlb_direct_map_resource(struct device *dev,
phys_addr_t paddr,
size_t size,
enum dma_data_direction dir,
unsigned long attrs)
{
dma_addr_t dma_addr = paddr;
if (unlikely(!dma_capable(dev, dma_addr, size, false))) {
dev_err_once(dev,
"DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
&dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
WARN_ON_ONCE(1);
return DMA_MAPPING_ERROR;
}
return dma_addr;
}