From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 26 May 2023 17:07:40 +0100
From: Jonathan Cameron
To: Catalin Marinas
CC: Linus Torvalds, Christoph Hellwig, Robin Murphy, Arnd Bergmann,
 Greg Kroah-Hartman, Will Deacon, Marc Zyngier, Andrew Morton,
 Herbert Xu, Ard Biesheuvel, Isaac Manjarres, Saravana Kannan,
 Alasdair Kergon, Daniel Vetter, Joerg Roedel, Mark Brown,
 Mike Snitzer, "Rafael J. Wysocki"
Subject: Re: [PATCH v5 00/15] mm, dma, arm64: Reduce ARCH_KMALLOC_MINALIGN to 8
Message-ID: <20230526170740.000000df@Huawei.com>
References: <20230524171904.3967031-1-catalin.marinas@arm.com>
 <20230525133138.000014b4@Huawei.com>
Organization: Huawei Technologies Research and Development (UK) Ltd.

On Thu, 25 May 2023 15:31:34 +0100
Catalin Marinas wrote:

> On Thu, May 25, 2023 at 01:31:38PM +0100, Jonathan Cameron wrote:
> > On Wed, 24 May 2023 18:18:49 +0100
> > Catalin Marinas wrote:
> > > Another version of the series reducing the kmalloc() minimum alignment
> > > on arm64 to 8 (from 128).
> > > Other architectures can easily opt in by
> > > defining ARCH_KMALLOC_MINALIGN as 8 and selecting
> > > DMA_BOUNCE_UNALIGNED_KMALLOC.
> > >
> > > The first 10 patches decouple ARCH_KMALLOC_MINALIGN from
> > > ARCH_DMA_MINALIGN and, for arm64, limit the kmalloc() caches to those
> > > aligned to the run-time probed cache_line_size(). On arm64 we gain the
> > > kmalloc-{64,192} caches.
> > >
> > > The subsequent patches (11 to 15) further reduce the kmalloc() caches to
> > > kmalloc-{8,16,32,96} if the default swiotlb is present by bouncing small
> > > buffers in the DMA API.
> >
> > I think IIO_DMA_MINALIGN needs to switch to ARCH_DMA_MINALIGN as well.
> >
> > It's used to force static alignment of buffers within larger structures,
> > to make them suitable for non-coherent DMA, similar to your other cases.
>
> Ah, I forgot that you introduced that macro. However, at a quick grep, I
> don't think this forced alignment always works as intended (irrespective
> of this series). Let's take an example:
>
> struct ltc2496_driverdata {
> 	/* this must be the first member */
> 	struct ltc2497core_driverdata common_ddata;
> 	struct spi_device *spi;
>
> 	/*
> 	 * DMA (thus cache coherency maintenance) may require the
> 	 * transfer buffers to live in their own cache lines.
> 	 */
> 	unsigned char rxbuf[3] __aligned(IIO_DMA_MINALIGN);
> 	unsigned char txbuf[3];
> };
>
> The rxbuf is aligned to IIO_DMA_MINALIGN, the structure and its size as
> well, but txbuf is at an offset of 3 bytes from the aligned
> IIO_DMA_MINALIGN. So basically any cache maintenance on rxbuf would
> corrupt txbuf.

That was intentional (though possibly wrong if I've misunderstood the
underlying issue). For SPI controllers at least, my understanding was that
it is safe to assume that they won't trample on themselves. The driver
doesn't touch the buffers while DMA is in flight; to do so would indeed
result in corruption.
So whilst we could end up with the SPI master writing stale data back to
txbuf after the transfer, it will never matter (as the value is unchanged).
Any flushes in the other direction will end up flushing both rxbuf and
txbuf anyway, which is also harmless.

> You need rxbuf to be the only resident of a
> cache line, therefore the next member needs such alignment as well.
>
> With this series and SWIOTLB enabled, however, if you try to transfer 3
> bytes, they will be bounced, so the missing alignment won't matter much.

Only on arm64? If the reasoning above is wrong, it might cause trouble on
some other architectures.

As a side note, spi_write_then_read() goes through a bounce-buffer dance
to avoid using DMA-unsafe buffers. Superficially that looks to me like it
might now end up with an undersized buffer and hence end up bouncing,
which rather defeats the point of it. It uses SMP_CACHE_BYTES for the size.

Jonathan

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel