* xtensa dma-mapping tidyups
From: Christoph Hellwig @ 2018-09-20 17:15 UTC (permalink / raw)
To: Chris Zankel, Max Filippov
Cc: linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
Hi Chris and Max,
this small series has a few tweaks to the xtensa dma-mapping code.
It is against the dma-mapping tree:
git://git.infradead.org/users/hch/dma-mapping.git for-next
Gitweb:
http://git.infradead.org/users/hch/dma-mapping.git/shortlog/refs/heads/for-next
* [PATCH 1/3] xtensa: remove partial support for DMA buffers in high memory
From: Christoph Hellwig @ 2018-09-20 17:15 UTC (permalink / raw)
To: Chris Zankel, Max Filippov
Cc: linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
This reverts commit 6137e4166004e2ec383ac05d5ca15831f4668806.
We explicitly clear __GFP_HIGHMEM from the allowed DMA flags at the beginning
of the function (and the generic dma_alloc_attrs function calling us does the
same!), so this code is just dead wood.
Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
arch/xtensa/kernel/pci-dma.c | 20 ++------------------
1 file changed, 2 insertions(+), 18 deletions(-)
diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index 1fc138b6bc0a..a764d894ffdd 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -171,20 +171,6 @@ void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
return page;
}
-#ifdef CONFIG_MMU
- if (PageHighMem(page)) {
- void *p;
-
- p = dma_common_contiguous_remap(page, size, VM_MAP,
- pgprot_noncached(PAGE_KERNEL),
- __builtin_return_address(0));
- if (!p) {
- if (!dma_release_from_contiguous(dev, page, count))
- __free_pages(page, get_order(size));
- }
- return p;
- }
-#endif
BUG_ON(!platform_vaddr_cached(page_address(page)));
__invalidate_dcache_range((unsigned long)page_address(page), size);
return platform_vaddr_to_uncached(page_address(page));
@@ -201,10 +187,8 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
} else if (platform_vaddr_uncached(vaddr)) {
page = virt_to_page(platform_vaddr_to_cached(vaddr));
} else {
-#ifdef CONFIG_MMU
- dma_common_free_remap(vaddr, size, VM_MAP);
-#endif
- page = pfn_to_page(PHYS_PFN(dma_to_phys(dev, dma_handle)));
+ WARN_ON_ONCE(1);
+ return;
}
if (!dma_release_from_contiguous(dev, page, count))
--
2.18.0
* [PATCH 2/3] xtensa: remove ZONE_DMA
From: Christoph Hellwig @ 2018-09-20 17:15 UTC (permalink / raw)
To: Chris Zankel, Max Filippov
Cc: linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
ZONE_DMA is intended for magic < 32-bit pools (usually ISA DMA), which
isn't required on xtensa. Move all the non-highmem memory into
ZONE_NORMAL instead to match other architectures.
Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
arch/xtensa/Kconfig | 3 ---
arch/xtensa/mm/init.c | 2 +-
2 files changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 516694937b7a..9a7c654a7654 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -1,7 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
-config ZONE_DMA
- def_bool y
-
config XTENSA
def_bool y
select ARCH_HAS_SYNC_DMA_FOR_CPU
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index 34aead7dcb48..b385e6b73065 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -71,7 +71,7 @@ void __init zones_init(void)
{
/* All pages are DMA-able, so we put them all in the DMA zone. */
unsigned long zones_size[MAX_NR_ZONES] = {
- [ZONE_DMA] = max_low_pfn - ARCH_PFN_OFFSET,
+ [ZONE_NORMAL] = max_low_pfn - ARCH_PFN_OFFSET,
#ifdef CONFIG_HIGHMEM
[ZONE_HIGHMEM] = max_pfn - max_low_pfn,
#endif
--
2.18.0
* [PATCH 3/3] xtensa: use dma_direct_{alloc,free}_pages
From: Christoph Hellwig @ 2018-09-20 17:15 UTC (permalink / raw)
To: Chris Zankel, Max Filippov
Cc: linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
Use the generic helpers for DMA allocation instead of open-coding them
with slightly fewer bells and whistles.
Signed-off-by: Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org>
---
arch/xtensa/kernel/pci-dma.c | 48 ++++++++++--------------------------
1 file changed, 13 insertions(+), 35 deletions(-)
diff --git a/arch/xtensa/kernel/pci-dma.c b/arch/xtensa/kernel/pci-dma.c
index a764d894ffdd..a74ca0dd728a 100644
--- a/arch/xtensa/kernel/pci-dma.c
+++ b/arch/xtensa/kernel/pci-dma.c
@@ -141,56 +141,34 @@ void __attribute__((weak)) *platform_vaddr_to_cached(void *p)
* Note: We assume that the full memory space is always mapped to 'kseg'
* Otherwise we have to use page attributes (not implemented).
*/
-
-void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
- gfp_t flag, unsigned long attrs)
+void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
+ gfp_t gfp, unsigned long attrs)
{
- unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
- struct page *page = NULL;
-
- /* ignore region speicifiers */
-
- flag &= ~(__GFP_DMA | __GFP_HIGHMEM);
-
- if (dev == NULL || (dev->coherent_dma_mask < 0xffffffff))
- flag |= GFP_DMA;
-
- if (gfpflags_allow_blocking(flag))
- page = dma_alloc_from_contiguous(dev, count, get_order(size),
- flag & __GFP_NOWARN);
+ void *vaddr;
- if (!page)
- page = alloc_pages(flag, get_order(size));
-
- if (!page)
+ vaddr = dma_direct_alloc_pages(dev, size, dma_handle, gfp, attrs);
+ if (!vaddr)
return NULL;
- *handle = phys_to_dma(dev, page_to_phys(page));
+ if (attrs & DMA_ATTR_NO_KERNEL_MAPPING)
+ return virt_to_page(vaddr); /* just a random cookie */
- if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
- return page;
- }
-
- BUG_ON(!platform_vaddr_cached(page_address(page)));
- __invalidate_dcache_range((unsigned long)page_address(page), size);
- return platform_vaddr_to_uncached(page_address(page));
+ BUG_ON(!platform_vaddr_cached(vaddr));
+ __invalidate_dcache_range((unsigned long)vaddr, size);
+ return platform_vaddr_to_uncached(vaddr);
}
void arch_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, unsigned long attrs)
{
- unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
- struct page *page;
-
if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
- page = vaddr;
+ vaddr = page_to_virt((struct page *)vaddr); /* decode cookie */
} else if (platform_vaddr_uncached(vaddr)) {
- page = virt_to_page(platform_vaddr_to_cached(vaddr));
+ vaddr = platform_vaddr_to_cached(vaddr);
} else {
WARN_ON_ONCE(1);
return;
}
- if (!dma_release_from_contiguous(dev, page, count))
- __free_pages(page, get_order(size));
+ dma_direct_free_pages(dev, size, vaddr, dma_handle, attrs);
}
--
2.18.0
* Re: [PATCH 1/3] xtensa: remove partial support for DMA buffers in high memory
From: Max Filippov @ 2018-09-20 17:44 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Chris Zankel, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw
Hi Christoph,
On Thu, Sep 20, 2018 at 10:15 AM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
> This reverts commit 6137e4166004e2ec383ac05d5ca15831f4668806.
>
> We explicitly clear __GFP_HIGHMEM from the allowed DMA flags at the beginning
> of the function (and the generic dma_alloc_attrs function calling us does the
> same!), so this code is just dead wood.
No, not really: dma_alloc_from_contiguous does not accept gfp flags (only
the no_warn bit) and may return arbitrary pages. That's the case that this
code is handling.
--
Thanks.
-- Max
* Re: [PATCH 1/3] xtensa: remove partial support for DMA buffers in high memory
From: Christoph Hellwig @ 2018-09-20 18:08 UTC (permalink / raw)
To: Max Filippov
Cc: Chris Zankel, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
Christoph Hellwig, linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw
On Thu, Sep 20, 2018 at 10:44:55AM -0700, Max Filippov wrote:
> Hi Christoph,
>
> On Thu, Sep 20, 2018 at 10:15 AM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
> > This reverts commit 6137e4166004e2ec383ac05d5ca15831f4668806.
> >
> > We explicitly clear __GFP_HIGHMEM from the allowed DMA flags at the beginning
> > of the function (and the generic dma_alloc_attrs function calling us does the
> > same!), so this code is just dead wood.
>
> No, not really: dma_alloc_from_contiguous does not accept flags (only
> no_warn bit)
> and may return arbitrary pages. That's the case that this code is handling.
dma_alloc_from_contiguous calls cma_alloc to do the actual
allocation, and that uses alloc_contig_range with the GFP_KERNEL
flag. How do you end up getting highmem pages from it?
* Re: [PATCH 1/3] xtensa: remove partial support for DMA buffers in high memory
From: Max Filippov @ 2018-09-20 19:08 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Chris Zankel, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw
On Thu, Sep 20, 2018 at 11:08 AM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
> On Thu, Sep 20, 2018 at 10:44:55AM -0700, Max Filippov wrote:
>> Hi Christoph,
>>
>> On Thu, Sep 20, 2018 at 10:15 AM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
>> > This reverts commit 6137e4166004e2ec383ac05d5ca15831f4668806.
>> >
>> > We explicitly clear __GFP_HIGHMEM from the allowed DMA flags at the beginning
>> > of the function (and the generic dma_alloc_attrs function calling us does the
>> > same!), so this code is just dead wood.
>>
>> No, not really: dma_alloc_from_contiguous does not accept flags (only
>> no_warn bit)
>> and may return arbitrary pages. That's the case that this code is handling.
>
> dma_alloc_from_contiguous calls cma_alloc to do the actual
> allocation, and that uses alloc_contig_range with the GFP_KERNEL
> flag. How do you end up getting highmem pages from it?
I'm not familiar with the details of alloc_contig_range implementation, but
I don't see how gfp_mask is used to limit allocation to non-high memory.
So when alloc_contig_range gets start and end PFNs in high memory
(from the CMA region allocated in high memory) it just returns high memory
pages -- that's what I see.
--
Thanks.
-- Max
* Re: [PATCH 1/3] xtensa: remove partial support for DMA buffers in high memory
From: Michał Nazarewicz @ 2018-09-21 16:40 UTC (permalink / raw)
To: Christoph Hellwig
Cc: linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw, Chris Zankel,
jcmvbkbc-Re5JQEeQqe8AvxtiuMwx3w,
iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joonsoo Kim
> On Thu, Sep 20, 2018 at 12:08:28PM -0700, Max Filippov wrote:
> > I'm not familiar with the details of alloc_contig_range implementation, but
> > I don't see how gfp_mask is used to limit allocation to non-high memory.
> > So when alloc_contig_range gets start and end PFNs in high memory
> > (from the CMA region allocated in high memory) it just returns high memory
> > pages -- that's what I see.
Correct. alloc_contig_range doesn’t care where the PFNs are. The
gfp_mask is completely ignored in this regard.
On Fri, 21 Sep 2018 at 07:51, Christoph Hellwig <hch@lst.de> wrote:
> I can't see what prevents people from doing a CMA in high memory either,
That’s a feature. CMA can be configured such that different devices use
different regions. One use case is separation of devices; another is
devices with different address limits.
> which is bad as except for arm none of the callers handles it completely
> (and that includes xtensa, more below).
>
> CMA maintainers - is it intentional that CMA can be used to reserve and
> return highmem pages?
What Joonsoo said. I’ll just add that what you want to look at are invocations
of dma_contiguous_reserve. For xtensa it’s:
arch/xtensa/mm/init.c: dma_contiguous_reserve(PFN_PHYS(max_low_pfn));
which means that CMA’s default region will be placed below max_low_pfn.
Undesired high memory CMA allocations on a given platform indicate that
platform’s init code incorrectly places CMA region or the region is incorrectly
configured in device tree.
--
Best regards
ミハウ “𝓶𝓲𝓷𝓪86” ナザレヴイツ
«If at first you don’t succeed, give up skydiving»
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
* Re: [PATCH 1/3] xtensa: remove partial support for DMA buffers in high memory
From: Max Filippov @ 2018-09-21 17:42 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Chris Zankel, linux-xtensa-PjhNF2WwrV/0Sa2dR60CXw,
Michal Nazarewicz,
iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Joonsoo Kim
On Thu, Sep 20, 2018 at 11:51 PM, Christoph Hellwig <hch-jcswGhMUV9g@public.gmane.org> wrote:
> On Thu, Sep 20, 2018 at 12:08:28PM -0700, Max Filippov wrote:
>> I'm not familiar with the details of alloc_contig_range implementation, but
>> I don't see how gfp_mask is used to limit allocation to non-high memory.
>> So when alloc_contig_range gets start and end PFNs in high memory
>> (from the CMA region allocated in high memory) it just returns high memory
>> pages -- that's what I see.
>
> I can't see what prevents people from doing a CMA in high memory either,
> which is bad as except for arm none of the callers handles it completely
> (and that includes xtensa, more below).
So far it was useful and sufficient for us.
> xtensa is partially prepared for
> it as it can remap in dma_alloc, but it is missing handling in either
> the ->mmap or ->get_sgtable methods.
I'd rather implement the missing parts or put a WARN there if
implementing it is too hard or impossible than remove the existing
partial support.
--
Thanks.
-- Max