From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:46820 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1753053AbcKQKdm (ORCPT );
        Thu, 17 Nov 2016 05:33:42 -0500
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
        Catalin Marinas, Marek Szyprowski,
        Alexey Brodkin, Vineet Gupta
Subject: [PATCH 4.8 14/92] arc: Implement arch-specific dma_map_ops.mmap
Date: Thu, 17 Nov 2016 11:31:47 +0100
Message-Id: <20161117103224.797018850@linuxfoundation.org>
In-Reply-To: <20161117103224.218007793@linuxfoundation.org>
References: <20161117103224.218007793@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID:

4.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Alexey Brodkin

commit a79a812131b07254c09cf325ec68c0d05aaed0b5 upstream.

We used to use the generic implementation of dma_map_ops.mmap, i.e.
dma_common_mmap(), but that only worked for the simpler cached mappings
where vaddr == paddr.

If a driver requests an uncached DMA buffer, the kernel maps it to a
virtual address so that the MMU gets involved and the page's uncached
status is taken into account. In that case, using dma_common_mmap() led
to a vaddr-to-vaddr mapping for user-space, which is obviously wrong.
For a more detailed explanation please refer to [1].

So here we implement our own version of mmap() which always deals with
dma_addr and maps the underlying memory to user-space properly (note
that a DMA buffer mapped to user-space is always uncached because
there's no way to properly manage the cache from user-space).

[1] https://lkml.org/lkml/2016/10/26/973

Reviewed-by: Catalin Marinas
Cc: Marek Szyprowski
Signed-off-by: Alexey Brodkin
Signed-off-by: Vineet Gupta
Signed-off-by: Greg Kroah-Hartman

---
 arch/arc/mm/dma.c |   26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -105,6 +105,31 @@ static void arc_dma_free(struct device *
 	__free_pages(page, get_order(size));
 }
 
+static int arc_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+			void *cpu_addr, dma_addr_t dma_addr, size_t size,
+			unsigned long attrs)
+{
+	unsigned long user_count = vma_pages(vma);
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long pfn = __phys_to_pfn(plat_dma_to_phys(dev, dma_addr));
+	unsigned long off = vma->vm_pgoff;
+	int ret = -ENXIO;
+
+	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret))
+		return ret;
+
+	if (off < count && user_count <= (count - off)) {
+		ret = remap_pfn_range(vma, vma->vm_start,
+				      pfn + off,
+				      user_count << PAGE_SHIFT,
+				      vma->vm_page_prot);
+	}
+
+	return ret;
+}
+
 /*
  * streaming DMA Mapping API...
  * CPU accesses page via normal paddr, thus needs to explicitly made
@@ -193,6 +218,7 @@ static int arc_dma_supported(struct devi
 struct dma_map_ops arc_dma_ops = {
 	.alloc			= arc_dma_alloc,
 	.free			= arc_dma_free,
+	.mmap			= arc_dma_mmap,
 	.map_page		= arc_dma_map_page,
 	.map_sg			= arc_dma_map_sg,
 	.sync_single_for_device	= arc_dma_sync_single_for_device,
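
For reference, a minimal sketch of how a driver would reach the new
arc_dma_mmap() from its own mmap handler via the generic DMA mapping API.
This is not part of the patch; the driver-side names (struct my_dev,
my_dev_mmap) are hypothetical, and the sketch assumes the buffer was
allocated with dma_alloc_coherent():

#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical per-device state; names are illustrative only. */
struct my_dev {
	struct device	*dev;		/* underlying struct device */
	void		*cpu_addr;	/* kernel vaddr from dma_alloc_coherent() */
	dma_addr_t	dma_addr;	/* DMA (bus) address of the buffer */
	size_t		size;		/* buffer size */
};

static int my_dev_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_dev *p = file->private_data;

	/*
	 * dma_mmap_coherent() dispatches to dma_map_ops.mmap, i.e. to
	 * arc_dma_mmap() on ARC once this patch is applied, so user-space
	 * gets an uncached mapping of the pages actually backing the
	 * buffer rather than a vaddr-to-vaddr mapping.
	 */
	return dma_mmap_coherent(p->dev, vma, p->cpu_addr, p->dma_addr,
				 p->size);
}

Previously the same call fell through to dma_common_mmap(), which on ARC
produced the wrong mapping for uncached buffers as described above.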