Date: Thu, 7 Apr 2022 18:48:57 +0100
From: Catalin Marinas
To: Vlastimil Babka
Cc: Will Deacon, Marc Zyngier, Arnd Bergmann, Greg Kroah-Hartman,
    Andrew Morton, Linus Torvalds, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Herbert Xu, "David S. Miller", Mark Brown, Alasdair Kergon,
    Mike Snitzer, Daniel Vetter, "Rafael J. Wysocki", Christoph Lameter,
    David Rientjes, Pekka Enberg, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Rustam Kovhaev, David Laight
Subject: Re: [PATCH 00/10] mm, arm64: Reduce ARCH_KMALLOC_MINALIGN below the cache line size
In-Reply-To: <0966c4b0-6fff-3283-71c3-2d4e211f7385@suse.cz>
References: <20220405135758.774016-1-catalin.marinas@arm.com>
 <0966c4b0-6fff-3283-71c3-2d4e211f7385@suse.cz>

On Thu, Apr 07, 2022 at 04:40:15PM +0200, Vlastimil Babka wrote:
> On 4/5/22 15:57, Catalin Marinas wrote:
> > This series is beneficial to arm64 even if it's only reducing the
> > kmalloc() minimum alignment to 64. While it would be nice to reduce
> > this further to 8 (or 16) on SoCs known to be fully DMA coherent,
> > detecting this via arch_setup_dma_ops() is problematic, especially
> > with late probed devices. I'd leave it for an additional RFC series
> > on top of this (there are ideas like bounce buffering for
> > non-coherent devices if the SoC was deemed coherent).
[...]
> - due to ARCH_KMALLOC_MINALIGN and dma guarantees we should return
> allocations aligned to ARCH_KMALLOC_MINALIGN, and the prepended size
> header should also not share its ARCH_KMALLOC_MINALIGN block with
> another (shorter) allocation that has a different lifetime, for the
> dma coherency reasons
> - this is very wasteful, especially with the 128 bytes alignment, and
> it seems we already violate it in some scenarios anyway [2]. Extending
> this to all objects would be even more wasteful.
>
> So this series would help here, especially if we can get to the 8/16
> size.

If we get to the 8/16 size, it would only be for platforms that are
fully coherent. Otherwise, with non-coherent DMA, the minimum kmalloc()
alignment would still be the cache line size (typically 64) even if
ARCH_KMALLOC_MINALIGN is 8. IIUC your point is that if
ARCH_KMALLOC_MINALIGN is 8, kmalloc() could return pointers with only
8-byte alignment as long as DMA safety is preserved (like not sharing
the rest of the cache line with any other writers).

> But now I also wonder if keeping the name and meaning of "MINALIGN" is
> in fact misleading and unnecessarily constraining us? What this is
> really about is "granularity of exclusive access", no?

Not necessarily. Yes, in lots of cases it is about granularity of
access, but there are others where the code does need the returned
pointer aligned to ARCH_DMA_MINALIGN (currently via
ARCH_KMALLOC_MINALIGN). Crypto seems to have such a requirement (see
the sub-thread with Herbert). Some (all?) callers ask kmalloc() for the
aligned size and there's an expectation that if the size is a multiple
of a power of two, kmalloc() will return a pointer aligned to that
power of two. I think we need to preserve these semantics, which may
lead to some more wastage if you add the header (e.g. a size of 3*64
returns a pointer aligned to either 192 or 256).
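Just to make that expectation concrete, a caller typically does
something like the below (purely illustrative; alloc_dma_safe() is a
made-up name, not taken from any real driver):

	/* Purely illustrative, not from an actual driver. */
	static void *alloc_dma_safe(size_t len)
	{
		/*
		 * Round the size up to ARCH_DMA_MINALIGN and rely on
		 * kmalloc() returning a pointer aligned to at least
		 * that, so the buffer does not share a cache line with
		 * an unrelated object.
		 */
		return kmalloc(ALIGN(len, ARCH_DMA_MINALIGN), GFP_KERNEL);
	}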
> Let's say the dma granularity is 64 bytes, and there's a kmalloc(56).

In your example, the size is not a power of two (or a multiple of 64),
so I guess there's no expectation of 64-byte alignment (it can be 8)
unless DMA is involved. See below.

> If SLOB finds a 64-byte aligned block, uses the first 8 bytes for the
> size header and returns the remaining 56 bytes, then the returned
> pointer is not *aligned* to 64 bytes, but it's still aligned enough
> for cpu accesses (which need only e.g. 8), and non-coherent dma should
> also be safe because nobody will be accessing the 8-byte header until
> the user of the object calls kfree(), which should happen only when
> it's done with any dma operations. Is my reasoning correct and would
> this be safe?

From the DMA perspective, it's not safe currently. Let's say we have an
inbound DMA transfer: the DMA API will invalidate the cache line prior
to the DMA. In arm64 terms, that means the cache line is discarded, not
flushed to memory, so if the first 8 bytes had not already been written
back to RAM, they'd be lost. If we can guarantee that no CPU write
happens to the cache line during the DMA transfer, we can change the
DMA mapping operation to do a clean+invalidate (flush the cache line to
RAM) first.
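Something along these lines on the arm64 side (only a sketch from
memory; the real code in arch/arm64/mm/dma-mapping.c goes via asm
helpers and the exact dcache_*_poc() names may differ):

	void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
				      enum dma_data_direction dir)
	{
		unsigned long start = (unsigned long)phys_to_virt(paddr);

		if (dir == DMA_FROM_DEVICE) {
			/*
			 * Clean+invalidate instead of invalidate-only, so
			 * a dirty slab size header sharing the cache line
			 * is written back to RAM rather than discarded
			 * before the inbound DMA.
			 */
			dcache_clean_inval_poc(start, start + size);
		} else {
			dcache_clean_poc(start, start + size);
		}
	}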
I guess this could be done with an IS_ENABLED(CONFIG_SLOB) check.

-- 
Catalin