From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 8 Jun 2016 16:01:40 +0800
From: Jisheng Zhang
Subject: Re: [PATCH] arm64: mm: only initialize swiotlb when necessary
Message-ID: <20160608160140.11eb0342@xhacker>
In-Reply-To: <1465372426-4077-1-git-send-email-jszhang@marvell.com>
References: <1465372426-4077-1-git-send-email-jszhang@marvell.com>
X-Mailer: Claws Mail 3.13.2 (GTK+ 2.24.30; x86_64-pc-linux-gnu)
X-Mailing-List: linux-kernel@vger.kernel.org

Dear all,

On Wed, 8 Jun 2016 15:53:46 +0800 Jisheng Zhang wrote:

> we only initialize swiotlb when swiotlb_force is true or when not all
> system memory is DMA-able; this trivial optimization saves us 64MB when
> swiotlb is not necessary.

Another solution is to call swiotlb_free() as ppc does. Either solution
solves my problem; if the maintainers prefer that approach, I can send a
v2 patch.
Thanks,
Jisheng

> Signed-off-by: Jisheng Zhang
> ---
>  arch/arm64/mm/dma-mapping.c | 15 ++++++++++++++-
>  arch/arm64/mm/init.c        |  3 ++-
>  2 files changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
> index c566ec8..46a4157 100644
> --- a/arch/arm64/mm/dma-mapping.c
> +++ b/arch/arm64/mm/dma-mapping.c
> @@ -19,6 +19,7 @@
>
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -29,6 +30,8 @@
>
>  #include
>
> +static int swiotlb __read_mostly;
> +
>  static pgprot_t __get_dma_pgprot(struct dma_attrs *attrs, pgprot_t prot,
>  				 bool coherent)
>  {
> @@ -341,6 +344,13 @@ static int __swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
>  	return ret;
>  }
>
> +static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
> +{
> +	if (swiotlb)
> +		return swiotlb_dma_supported(hwdev, mask);
> +	return 1;
> +}
> +
>  static struct dma_map_ops swiotlb_dma_ops = {
>  	.alloc = __dma_alloc,
>  	.free = __dma_free,
> @@ -354,7 +364,7 @@ static struct dma_map_ops swiotlb_dma_ops = {
>  	.sync_single_for_device = __swiotlb_sync_single_for_device,
>  	.sync_sg_for_cpu = __swiotlb_sync_sg_for_cpu,
>  	.sync_sg_for_device = __swiotlb_sync_sg_for_device,
> -	.dma_supported = swiotlb_dma_supported,
> +	.dma_supported = __swiotlb_dma_supported,
>  	.mapping_error = swiotlb_dma_mapping_error,
>  };
>
> @@ -513,6 +523,9 @@ EXPORT_SYMBOL(dummy_dma_ops);
>
>  static int __init arm64_dma_init(void)
>  {
> +	if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
> +		swiotlb = 1;
> +
>  	return atomic_pool_init();
>  }
>  arch_initcall(arm64_dma_init);
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index d45f862..7d25b4d 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -403,7 +403,8 @@ static void __init free_unused_memmap(void)
>   */
>  void __init mem_init(void)
>  {
> -	swiotlb_init(1);
> +	if (swiotlb_force || max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
> +		swiotlb_init(1);
>
>  	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);