From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 24 Apr 2015 12:12:00 -0400
From: Konrad Rzeszutek Wilk
To: Stefano Stabellini
Cc: xen-devel@lists.xensource.com, baozich@gmail.com, David Vrabel,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v2] xen: Add __GFP_DMA flag when xen_swiotlb_init gets free pages on ARM
Message-ID: <20150424161200.GB8117@konrad-lan.dumpdata.com>
References: <20150424132749.GA9192@l.oracle.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Apr 24, 2015 at 03:31:53PM +0100, Stefano Stabellini wrote:
> On Fri, 24 Apr 2015, Konrad Rzeszutek Wilk wrote:
> > On Fri, Apr 24, 2015 at 10:16:40AM +0100, Stefano Stabellini wrote:
> > > Make sure that xen_swiotlb_init allocates buffers that are DMA capable
> > > when at least one memblock is available below 4G. Otherwise we assume
> > > that all devices on the SoC can cope with >4G addresses. We do this on
> > > ARM and ARM64, where dom0 is mapped 1:1, so pfn == mfn in this case.
> > >
> > > No functional changes on x86.
> > >
> > > From: Chen Baozi
> > >
> > > Signed-off-by: Chen Baozi
> > > Signed-off-by: Stefano Stabellini
> > > Tested-by: Chen Baozi
> >
> > Acked-by: Konrad Rzeszutek Wilk
>
> Thanks! We are still early in the release cycle, should I add it to
> xentip/stable/for-linus-4.1?

Sure!
> >
> > > diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
> > > index 2f7e6ff..0b579b2 100644
> > > --- a/arch/arm/include/asm/xen/page.h
> > > +++ b/arch/arm/include/asm/xen/page.h
> > > @@ -110,5 +110,6 @@ static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
> > >  bool xen_arch_need_swiotlb(struct device *dev,
> > >  		unsigned long pfn,
> > >  		unsigned long mfn);
> > > +unsigned long xen_get_swiotlb_free_pages(unsigned int order);
> > >
> > >  #endif /* _ASM_ARM_XEN_PAGE_H */
> > > diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> > > index 793551d..4983250 100644
> > > --- a/arch/arm/xen/mm.c
> > > +++ b/arch/arm/xen/mm.c
> > > @@ -4,6 +4,7 @@
> > >  #include
> > >  #include
> > >  #include
> > > +#include <linux/memblock.h>
> > >  #include
> > >  #include
> > >  #include
> > > @@ -21,6 +22,20 @@
> > >  #include
> > >  #include
> > >
> > > +unsigned long xen_get_swiotlb_free_pages(unsigned int order)
> > > +{
> > > +	struct memblock_region *reg;
> > > +	gfp_t flags = __GFP_NOWARN;
> > > +
> > > +	for_each_memblock(memory, reg) {
> > > +		if (reg->base < (phys_addr_t)0xffffffff) {
> > > +			flags |= __GFP_DMA;
> > > +			break;
> > > +		}
> > > +	}
> > > +	return __get_free_pages(flags, order);
> > > +}
> > > +
> > >  enum dma_cache_op {
> > >  	DMA_UNMAP,
> > >  	DMA_MAP,
> > > diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> > > index 358dcd3..c44a5d5 100644
> > > --- a/arch/x86/include/asm/xen/page.h
> > > +++ b/arch/x86/include/asm/xen/page.h
> > > @@ -269,4 +269,9 @@ static inline bool xen_arch_need_swiotlb(struct device *dev,
> > >  	return false;
> > >  }
> > >
> > > +static inline unsigned long xen_get_swiotlb_free_pages(unsigned int order)
> > > +{
> > > +	return __get_free_pages(__GFP_NOWARN, order);
> > > +}
> > > +
> > >  #endif /* _ASM_X86_XEN_PAGE_H */
> > > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > > index 810ad41..4c54932 100644
> > > --- a/drivers/xen/swiotlb-xen.c
> > > +++ b/drivers/xen/swiotlb-xen.c
> > > @@ -235,7 +235,7 @@ retry:
> > >  #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
> > >  #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
> > >  	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
> > > -		xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order);
> > > +		xen_io_tlb_start = (void *)xen_get_swiotlb_free_pages(order);
> > >  		if (xen_io_tlb_start)
> > >  			break;
> > >  		order--;
> >