From: Leon Romanovsky <leon@kernel.org>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>,
iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org,
Russell King <linux@armlinux.org.uk>
Subject: Re: [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks
Date: Wed, 17 Sep 2025 13:36:44 +0300 [thread overview]
Message-ID: <20250917103644.GB6464@unreal> (raw)
In-Reply-To: <20250916184617.GW1086830@nvidia.com>

On Tue, Sep 16, 2025 at 03:46:17PM -0300, Jason Gunthorpe wrote:
> On Tue, Sep 16, 2025 at 10:32:06AM +0300, Leon Romanovsky wrote:
> > + if (!dev->dma_coherent &&
> > + !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
> > + __dma_page_cpu_to_dev(phys_to_page(phys), offset, size, dir);
>
> I'd keep going and get rid of the page here too, maybe as a second
> patch in this series:
Thanks, it is always unclear how far to go with cleanups.
>
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index 88c2d68a69c9ee..a84d12cd0ba4a9 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -624,16 +624,14 @@ static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
> kfree(buf);
> }
>
> -static void dma_cache_maint_page(struct page *page, unsigned long offset,
> +static void dma_cache_maint_page(phys_addr_t paddr,
> size_t size, enum dma_data_direction dir,
> void (*op)(const void *, size_t, int))
> {
> - unsigned long pfn;
> + unsigned long pfn = __phys_to_pfn(paddr);
> + unsigned int offset = offset_in_page(paddr);
> size_t left = size;
>
> - pfn = page_to_pfn(page) + offset / PAGE_SIZE;
> - offset %= PAGE_SIZE;
> -
> /*
> * A single sg entry may refer to multiple physically contiguous
> * pages. But we still need to process highmem pages individually.
> @@ -644,17 +642,17 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
> size_t len = left;
> void *vaddr;
>
> - page = pfn_to_page(pfn);
> -
> - if (PageHighMem(page)) {
> + if (PhysHighMem(__pfn_to_phys(pfn))) {
> if (len + offset > PAGE_SIZE)
> len = PAGE_SIZE - offset;
>
> if (cache_is_vipt_nonaliasing()) {
> - vaddr = kmap_atomic(page);
> + vaddr = kmap_atomic_pfn(pfn);
> op(vaddr + offset, len, dir);
> kunmap_atomic(vaddr);
> } else {
> + struct page *page = pfn_to_page(pfn);
> +
> vaddr = kmap_high_get(page);
> if (vaddr) {
> op(vaddr + offset, len, dir);
> @@ -662,7 +660,7 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
> }
> }
> } else {
> - vaddr = page_address(page) + offset;
> + vaddr = phys_to_virt(__pfn_to_phys(pfn)) + offset;
> op(vaddr, len, dir);
> }
> offset = 0;
> @@ -676,14 +674,11 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
> * Note: Drivers should NOT use this function directly.
> * Use the driver DMA support - see dma-mapping.h (dma_sync_*)
> */
> -static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
> +static void __dma_page_cpu_to_dev(phys_addr_t paddr,
> size_t size, enum dma_data_direction dir)
> {
> - phys_addr_t paddr;
> + dma_cache_maint_page(paddr, size, dir, dmac_map_area);
>
> - dma_cache_maint_page(page, off, size, dir, dmac_map_area);
> -
> - paddr = page_to_phys(page) + off;
> if (dir == DMA_FROM_DEVICE) {
> outer_inv_range(paddr, paddr + size);
> } else {
>
> > + if (!dev->dma_coherent &&
> > + !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
> > page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
> > __dma_page_dev_to_cpu(page, offset, size, dir);
>
> Same treatment here..
>
> Looks OK though, I didn't notice any pitfalls.
>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
>
> Jason
>
Thread overview: 17+ messages
2025-09-16 7:32 [PATCH v3 0/4] Preparation to .map_page and .unmap_page removal Leon Romanovsky
2025-09-16 7:32 ` [PATCH v3 1/4] dma-mapping: prepare dma_map_ops to conversion to physical address Leon Romanovsky
2025-09-16 13:52 ` Jason Gunthorpe
2025-09-16 7:32 ` [PATCH v3 2/4] dma-mapping: convert dummy ops to physical address mapping Leon Romanovsky
2025-09-16 13:53 ` Jason Gunthorpe
2025-09-16 7:32 ` [PATCH v3 3/4] ARM: dma-mapping: Switch to physical address mapping callbacks Leon Romanovsky
2025-09-16 18:46 ` Jason Gunthorpe
2025-09-17 10:36 ` Leon Romanovsky [this message]
2025-09-17 11:32 ` Jason Gunthorpe
2025-09-17 13:41 ` Leon Romanovsky
2025-09-17 13:58 ` Jason Gunthorpe
2025-09-17 18:46 ` Leon Romanovsky
2025-09-17 19:08 ` Jason Gunthorpe
2025-09-17 19:46 ` Leon Romanovsky
2025-09-16 7:32 ` [PATCH v3 4/4] dma-mapping: remove unused mapping resource callbacks Leon Romanovsky
2025-09-16 12:19 ` Leon Romanovsky
2025-09-16 18:49 ` Jason Gunthorpe